\section{Introduction} The presence of boundaries brings new features to quantum field theory. One such feature is the modification of local quantities, such as the heat kernel coefficients, by boundary terms, which results in boundary terms in the quantum effective action. These terms have been studied in the literature for some time, and the findings available to date were summarized in the review \cite{Vassilevich:2003xt}. A related new feature is the modification of the conformal transformations of the quantum effective action \cite{Dowker:1989ue} and of the conformal anomaly. In the absence of boundaries the local conformal anomaly is non-trivial only if the dimension $d$ of the space-time is even \cite{Capper:1974ic}. This is because it is given by a combination of the Euler density and local conformal invariants of dimension $d$ constructed from the Riemann curvature. No such invariants exist if $d$ is odd. A related fact is that the Euler number of an odd-dimensional compact manifold is zero. In the presence of boundaries two new things happen. Note that in this case it is more appropriate to consider the integrated conformal anomaly. First, in even dimension $d=2n$ certain boundary terms appear in the integrated anomaly. These boundary terms are constructed from the Riemann curvature of the bulk metric and the extrinsic curvature $K_{ij}$ of the boundary. Since $K_{ij}$ has dimension $1$, one can now construct terms of dimension $2n-1$ that can be integrated over the boundary. In $d=4$ dimensions these boundary terms were identified in \cite{Fursaev:2015wpa}, \cite{Herzog:2015ioa}. The second new feature is that the anomaly in odd dimension $d=2n+1$ is non-trivial. 
It is entirely due to the boundary terms \cite{Solodukhin:2015eca}, which can be of two types: the topological Euler number of the boundary $E_{2n}$ and the local conformal invariants constructed from the Riemann tensor, the extrinsic curvature $K_{ij}$ and their derivatives. In dimension $d=3$ there is only one such invariant, so that the anomaly is determined by two conformal charges: one for the topological term $E_2$ and the other for the conformal invariant. In \cite{Solodukhin:2015eca} these charges were determined for the conformal scalar, both for the Dirichlet and the conformal Robin boundary conditions, and in \cite{Fursaev:2016inw} for the Dirac fermions. The primary goal of the present note is to advance the analysis to the next odd dimension, $d=5$. A preliminary list of the conformal invariants in this dimension was given in \cite{Solodukhin:2015eca}. This list, as was already clear at the time of writing \cite{Solodukhin:2015eca}, was rather incomplete, since it was anticipated that other, then unknown, invariants might exist. The essential missing element was one or more invariants that could be constructed from the derivatives of the extrinsic curvature along the boundary and that would not vanish if the bulk 5d spacetime were flat. The differential invariant presented in \cite{Solodukhin:2015eca} does not have this last property since, as we show in the present paper, it is not an independent invariant: it reduces to a combination of the other invariants constructed from the various contractions of the Weyl tensor and the extrinsic curvature tensor. One of the main results of the present paper is that we have now found this new invariant, see eq.(\ref{I8}); it contains derivatives of the extrinsic curvature tensor and was absent in \cite{Solodukhin:2015eca}. With the new invariant, the list of the conformal boundary invariants in five dimensions is now complete. 
In a wider context, the present paper is part of the ongoing study of the various manifestations of boundaries, defects and interfaces in conformal field theories, \cite{Herzog:2017xha} -\cite{Seminara:2018pmr}. It should be noted that, contrary to the situation with the conformal invariants in the bulk of space-time, where all such invariants are classified, no such classification theorem exists in general for the boundary conformal invariants. Among the preliminary work we should note the earlier results in the mathematical literature \cite{Glaros:2015pfa}, where conformally invariant boundary structures were studied in various dimensions. The other preliminary work relevant to our present study is the calculation of the heat kernel coefficient $a_5$ for various boundary conditions and different elliptic operators, performed in a series of papers \cite{Branson:1999jz}, \cite{Branson:1995cm}, \cite{Kirsten:1997qd} and summarized in \cite{Kirsten:2000xc}. Our other principal goal in this paper is to use these earlier results, compute the conformal anomaly in the case of a conformal scalar both for the Dirichlet and conformal Robin boundary conditions, and express the anomaly in terms of the complete set of invariants, thus determining the exact values of the boundary conformal charges. It should be noted that our findings can also be relevant to the study of the logarithmic terms in the entanglement entropy of a conformal field theory in $d=6$. Indeed, the entangling surface then has dimension $4$, and the logarithmic terms are due to the possible conformal invariants constructed from the curvature of spacetime and the extrinsic curvature of the surface. The previous study of the logarithmic terms within the holographic paradigm includes \cite{Safdi:2012sn}. In the paper we make contact with this previous study. The holographic aspect of the conformal anomaly in five dimensions in the presence of boundaries is interesting. 
We, however, leave it for future work. \section{Boundary conformal invariants in $d=5$} \subsection{Notations and conformal transformations} We denote by $n^\mu$ the components of the outward normal vector to the boundary $\partial {\cal M}_5$, $n^\mu n_\mu=1$. The metric induced on the boundary is $\gamma_{\mu\nu}=g_{\mu\nu}-n_\mu n_\nu$. In this and the subsequent sections we use the convention that the Latin indices $i\, , \, j\, , \, k\, \dots$ denote projections along the boundary; they take values $1\, , \, 2\, , \, 3\, , \, 4$. The index projected on the normal direction is denoted by $n$. With these notations, for any tensor $X_{\mu\alpha\beta}$ one has $X_{nij}=n^\mu\gamma_i^{\ \alpha}\gamma_j^{\ \beta}X_{\mu\alpha\beta}$. The extrinsic curvature is defined as $K_{ij}=\gamma_{i}^{\ \mu}\gamma_{j}^{\ \nu}\nabla_{(\mu}n_{\nu )}$, where $\nabla_\mu$ is the covariant derivative defined with respect to the 5d metric $g_{\mu\nu}$. The covariant derivative defined with respect to the intrinsic metric $\gamma_{ij}$ is denoted by $\bar{\nabla}_i$ and the respective curvatures by $\bar{R}$, $\bar{R}_{ij}$ and $\bar{R}_{ijkl}$. The relations between the intrinsic curvature of the boundary and the curvature of the 5d space-time are given by the Gauss-Codazzi identities presented in Appendix A. Under the infinitesimal conformal transformations $\delta g_{\mu\nu}=2\sigma g_{\mu\nu}$, $\delta n_{\mu}=\sigma n_{\mu}$ the Weyl tensor transforms as $\delta W_{\alpha\mu\beta\nu}=2\sigma W_{\alpha\mu\beta\nu}$. The extrinsic curvature transforms as follows \begin{eqnarray} \delta{K}_{ij}=\sigma K_{ij}+\gamma_{ij}\nabla_n\sigma \, , \ \ \ \delta K=-\sigma K+4\nabla_n\sigma\, , \ \ \ \delta\hat{K}_{ij}=\sigma\hat{K}_{ij}\, , \label{2} \end{eqnarray} where $\nabla_n=n^\mu\nabla_\mu$ and $\hat{K}_{ij}=K_{ij}-\frac{1}{4}\gamma_{ij}K$ is the trace-free part of the extrinsic curvature. 
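As a quick consistency check, the homogeneous transformation of the trace-free part follows directly from the first two variations in (\ref{2}) together with $\delta\gamma_{ij}=2\sigma\gamma_{ij}$,
\begin{eqnarray}
\delta\hat{K}_{ij}=\delta K_{ij}-\frac{1}{4}(\delta\gamma_{ij})K-\frac{1}{4}\gamma_{ij}\,\delta K
=\sigma K_{ij}+\gamma_{ij}\nabla_n\sigma-\frac{\sigma}{2}\gamma_{ij}K-\frac{1}{4}\gamma_{ij}\left(-\sigma K+4\nabla_n\sigma\right)=\sigma\hat{K}_{ij}\, , \nonumber
\end{eqnarray}
the inhomogeneous $\nabla_n\sigma$ terms cancelling between the two contributions.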
The basic conformal tensors are, thus, the bulk Weyl tensor $W_{\alpha\mu\beta\nu}$ and the trace-free extrinsic curvature of the boundary $\hat{K}_{ij}$. The intrinsic Weyl tensor of the boundary metric is expressed in terms of the $5$-dimensional Weyl tensor and the extrinsic curvature by means of the Gauss-Codazzi relations. \subsection{Conformal anomaly} The integrated conformal anomaly in five dimensions can be presented in the form, \begin{eqnarray} \int_{{\cal M}_5}\langle T_{\mu\nu}\rangle g^{\mu\nu}=\frac{1}{5760 (4\pi)^2}(aE_4+\sum_{k=1}^8 c_k I_k)\, , \label{1} \end{eqnarray} where $E_4$ is the 4d Euler density integrated over the boundary $\partial{\cal M}_5$, so that $\chi[\partial {\cal M}_5]=\frac{1}{32\pi^2}E_4$ is the Euler number of the boundary, and $I_k\, , \, k=1,\dots , 8$ are the conformal invariants constructed from the 5-dimensional Weyl tensor and the trace-free part of the extrinsic curvature of the boundary, $\hat{K}_{ij}=K_{ij}-\frac{1}{4}\gamma_{ij}K$. We note that the Euler number of an odd-dimensional manifold with boundary is $1/2$ of the Euler number of the boundary. This explains why the Euler number $E_5$ is not an independent quantity and, thus, is absent in the anomaly (\ref{1}). The numerical pre-factor in (\ref{1}) is chosen for later convenience. $(a\, , \, c_1\, , \dots \, , \, c_8)$ are the boundary conformal charges to be determined below. \subsection{Euler number of the boundary} The Euler density of the 4-dimensional boundary has the standard expression in terms of the intrinsic curvature of the boundary, \begin{eqnarray} E_4=\int_{\partial{\mathcal{M}}_5}(\bar{R}_{ikj\ell}^2-4\bar{R}_{ij}^2+\bar{R}^2)\, . 
\label{E4} \end{eqnarray} Applying the Gauss-Codazzi equations, it can be expanded in terms of the curvature tensors of the bulk manifold as well as the extrinsic curvature of the four-dimensional boundary, \begin{eqnarray} \begin{split} &E_4=\int_{\partial{\mathcal{M}}_5}\Big(R^2_{ijk\ell}-4R^2_{ij}+R^2-4R_{injn}^2+8R^{ij}R_{injn}+4R^2_{nn}-4RR_{nn}+4K^{ij}K^{k\ell}R_{ikj\ell}\\&+8(K^2)^{ij}R_{ij}-8KK^{ij}R_{ij}-2{\rm Tr} K^2R+2K^2R -8(K^2)^{ij} R_{injn}+8KK^{ij}R_{injn}\\ &+4{\rm Tr} K^2R_{nn} -4K^2R_{nn}-6{\rm Tr} K^4+8K{\rm Tr} K^3+3({\rm Tr} K^2)^2-6K^2{\rm Tr} K^2+K^4\Big)\, , \end{split} \label{3} \end{eqnarray} where we defined $(K^2)^{ij}=K^i_{\ k} K^{kj}$. \bigskip Now we list the conformal invariants which can be called algebraic. These invariants do not contain the derivatives of the extrinsic curvature. \subsection{Invariants constructed from the extrinsic curvature tensor} The first group of the invariants of this type is constructed from the trace free extrinsic curvature $\hat{K}_{ij}$, \begin{eqnarray} I_1=\int_{\partial{\mathcal{M}}_5}({\rm Tr}\hat{K}^2)^2=\int_{\partial{\mathcal{M}}_5}[({\rm Tr} K^2)^2-\frac{1}{2}K^2{\rm Tr} K^2+\frac{1}{16}K^4]\, , \end{eqnarray} and \begin{eqnarray} I_2=\int_{\partial{\mathcal{M}}_5}{\rm Tr} \hat{K}^4=\int_{\partial{\mathcal{M}}_5}({\rm Tr} K^4-K{\rm Tr} K^3+\frac{3}{8}K^2{\rm Tr} K^2-\frac{3}{64}K^4)\, . 
\end{eqnarray} \bigskip \subsection{Invariants constructed from the Weyl tensor} The next set of the invariants is constructed from the 5d Weyl tensor, \begin{eqnarray} I_3=\int_{\partial{\mathcal{M}}_5}W_{ikj\ell}^2=\int_{\partial{\mathcal{M}}_5}\left(R_{ikj\ell}^2-\frac{16}{9}R_{ij}^2+\frac{5}{18}R^2+\frac{8}{3}R^{ij}R_{injn}+\frac{4}{9}R_{nn}^2-\frac{8}{9}RR_{nn}\right)\, , \end{eqnarray} and \begin{eqnarray} I_4=\int_{\partial{\mathcal{M}}_5}W_{injn}^2=\int_{\partial{\mathcal{M}}_5}\left(\frac{1}{9}R_{ij}^2-\frac{1}{36}R^2+R_{injn}^2-\frac{2}{3}R^{ij}R_{injn}-\frac{4}{9}R_{nn}^2+\frac{2}{9}RR_{nn}\right)\, . \end{eqnarray} \subsection{Invariants constructed from the Weyl tensor contracted with the extrinsic curvature tensor} Contractions of the Weyl tensor and the trace-free extrinsic curvature give us a new set of conformal invariants, \begin{eqnarray} \begin{split} I_5&=\int_{\partial{\mathcal{M}}_5}\hat{K}^{ij}\hat{K}^{k\ell}W_{ikj\ell}\\ =&\int_{\partial{\mathcal{M}}_5}\Big(K^{ij}K^{k\ell}R_{ikj\ell}+\frac{2}{3}(K^2)^{ij}R_{ij}-\frac{5}{6}KK^{ij}R_{ij}-\frac{1}{12}{\rm Tr} K^2R+\frac{1}{8}K^2R\\ &+\frac{1}{2}KK^{ij}R_{injn}-\frac{1}{6}K^2R_{nn}\Big)\, , \end{split} \end{eqnarray} and \begin{eqnarray} \begin{split} I_6&=\int_{\partial{\mathcal{M}}_5}\hat{K}^i_k\hat{K}^{kj}W_{injn}\\ =&\int_{\partial{\mathcal{M}}_5}\Big(-\frac{1}{3}K^i_kK^{kj}R_{ij}+\frac{1}{6}KK^{ij}R_{ij}+\frac{1}{12}{\rm Tr} K^2R-\frac{1}{24}K^2R\\ &+K^i_kK^{kj}R_{injn}-\frac{1}{2}KK^{ij}R_{injn}-\frac{1}{3}{\rm Tr} K^2R_{nn}+\frac{1}{6}K^2R_{nn}\Big)\, . \end{split} \end{eqnarray} \bigskip \subsection{Invariants with derivatives and their independence} The invariants of this group are expressed in terms of the derivatives of the extrinsic curvature along the boundary. The first invariant of this kind is \begin{eqnarray} I_7=\int_{\partial{\mathcal{M}}_5}W_{nijk}^2\, . 
\end{eqnarray} Using the Gauss-Codazzi equation, $W_{nijk}$ can be expanded in terms of the derivatives of the extrinsic curvature tensor, \begin{eqnarray} W_{nijk}=2\bar{\nabla}_{[k}K_{j]i}+\frac{2}{3}\gamma_{i[j}\left(\bar{\nabla}_\ell K^\ell_{k]}-\bar{\nabla}_{k]}K\right)\, . \end{eqnarray} This identity allows us to rewrite the invariant $I_7$ in a form that contains only the extrinsic curvature tensor and its derivatives along the boundary, \begin{eqnarray} I_7=\int_{\partial{\mathcal{M}}_5}\left(2\bar{\nabla}_kK_{ij}\bar{\nabla}^kK^{ij}-2\bar{\nabla}_kK^{ij}\bar{\nabla}_jK^k_i+\frac{4}{3}\bar{\nabla}_iK\bar{\nabla}_jK^{ij}-\frac{2}{3}\bar{\nabla}_iK^{ij}\bar{\nabla}_kK^k_j-\frac{2}{3}(\bar{\nabla} K)^2\right)\, . \end{eqnarray} Using \eqref{identity 4}, one can rewrite this expression as follows, \begin{eqnarray} \begin{split} I_7&=\int_{\partial{\mathcal{M}}_5}\Big[2\bar{\nabla}_kK_{ij}\bar{\nabla}^kK^{ij}-\frac{8}{3}\bar{\nabla}_iK^i_j\bar{\nabla}_kK^{kj}+\frac{4}{3}\bar{\nabla}_iK\bar{\nabla}_jK^{ij}-\frac{2}{3}(\bar{\nabla} K)^2\\ &-2K^{ij}K^{k\ell}R_{ikj\ell}-2(K^2)^{ij} R_{injn}+2(K^2)^{ij} R_{ij}+2K{\rm Tr} K^3-2({\rm Tr} K^2)^2\Big]\, . \end{split} \end{eqnarray} \bigskip It is instructive to analyse the relation of this invariant to another conformal invariant, written in terms of a conformally invariant operator acting on a tensor of rank two. 
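Before turning to that invariant, note that the conformal invariance of $I_7$ itself follows from simple weight counting. Since $\delta n^\mu=-\sigma n^\mu$ and the mixed projectors $\gamma_i^{\ \mu}$ are inert, one has
\begin{eqnarray}
\delta W_{nijk}=\sigma W_{nijk}\, , \qquad
\delta\left(\sqrt{\gamma}\, W_{nijk}W_{ni'j'k'}\gamma^{ii'}\gamma^{jj'}\gamma^{kk'}\right)=(4+2-6)\,\sigma\,\sqrt{\gamma}\,W_{nijk}^2=0\, , \nonumber
\end{eqnarray}
the three inverse induced metrics contributing $-2\sigma$ each, while the boundary measure, $\delta\sqrt{\gamma}=4\sigma\sqrt{\gamma}$, compensates the rest.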
The following term was introduced in \cite{Solodukhin:2015eca} and is known to be a conformal invariant in 5 dimensions, \begin{eqnarray} I^{(D)}=\int_{\partial{\mathcal{M}}_5}{\rm Tr} \hat{K} {\mathcal{F}}\hat{K}\, , \end{eqnarray} where the differential operator \begin{eqnarray}\label{conformal derivative} \begin{split} &{\mathcal{F}}^{ij}_{k\ell}=\delta^i_{(k}\delta^j_{\ell)}\bar{\Box}-\frac{4}{3}\bar{\nabla}^{(i}\bar{\nabla}_{(k}\delta^{j)}_{\ell)}-\frac{4}{3}\bar{R}^i\,_{(k}\,^j\,_{\ell)}-\frac{2}{3}\bar{R}^{(i}_{(k}\delta^{j)}_{\ell)}+\frac{1}{6}\bar{R}\delta^{i}_{(k}\delta^j_{\ell)}\, \end{split} \end{eqnarray} acts on a tensor with conformal weight $+1$. It should be noted that there is only one such differential operator that respects the conformal symmetry \cite{Gusynin:1986kz}, \cite{Achour:2013afa}. The explicit form of this invariant reads \begin{eqnarray} \begin{split} \label{ID} &I^{(D)}=\int_{\partial{\mathcal{M}}_5}{\rm Tr} \hat{K} {\mathcal{F}}\hat{K}=\int_{\partial{\mathcal{M}}_5}\hat{K}^{k\ell} {\mathcal{F}}^{ij}_{k\ell}\hat{K}_{ij}\\ &=\int_{\partial{\mathcal{M}}_5}\Big[ K^{ij}\bar{\Box}K_{ij}-\frac{1}{3}K\bar{\Box}K+\frac{1}{3}K^{ij}\bar{\nabla}_i\bar{\nabla}_jK +\frac{1}{3}K\bar{\nabla}_i\bar{\nabla}_jK^{ij}-\frac{4}{3}K^{ij}\bar{\nabla}_j\bar{\nabla}_kK^k_i\\ &+2(K^2)^{ij}R_{injn}-KK^{ij}R_{injn}-2(K^2)^{ij}R_{ij}+KK^{ij}R_{ij}-\frac{1}{3}{\rm Tr} K^2R_{nn}+\frac{1}{3}K^2R_{nn}\\ &+\frac{1}{6}{\rm Tr} K^2R-\frac{1}{6}K^2R+2{\rm Tr} K^4-3K{\rm Tr} K^3-\frac{1}{6}({\rm Tr} K^2)^2+\frac{4}{3}K^2{\rm Tr} K^2-\frac{1}{6}K^4\Big]\, . \end{split} \end{eqnarray} It can be shown using the Gauss-Codazzi relations that in a flat bulk spacetime ($R_{\mu\nu\alpha\beta}=0$) all derivatives of the extrinsic curvature in (\ref{ID}) cancel among themselves. In what follows it will be shown that this term is not an independent invariant. 
To see this, let us start with the observation that \begin{eqnarray} I^{(D)}=-\frac{1}{2}I_7-\int_{\partial{\mathcal{M}}_5}\hat{K}^{ij}\hat{K}^{k\ell}\bar{W}_{ikj\ell}\, . \end{eqnarray} Note also that, using the Gauss-Codazzi relations, one has \begin{eqnarray} \begin{split} &\int_{\partial{\mathcal{M}}_5}\hat{K}^{ij}\hat{K}^{k\ell}\bar{W}_{ikj\ell}=I_5+\int_{\partial{\mathcal{M}}_5}\Big[-(R_{injn}-\frac{1}{3}R_{ij})((K^2)^{ij} -\frac{1}{2}KK^{ij})\\ &+\frac{1}{3}(R_{nn}-\frac{1}{4}R)({\rm Tr} K^2-\frac{1}{2}K^2)-2{\rm Tr} K^4+2K{\rm Tr} K^3+\frac{7}{6}({\rm Tr} K^2)^2-\frac{4}{3}K^2{\rm Tr} K^2+\frac{1}{6}K^4\Big]\, . \end{split} \end{eqnarray} The integral on the right-hand side of this equation can be written in terms of $I_3$, $I_4$ and $I_6$. So finally we conclude that \begin{eqnarray} I^{(D)}=-\frac{1}{2}I_7-I_5+I_6-\frac{7}{6}I_3+2I_4\, . \end{eqnarray} This indicates that $I^{(D)}$ is not an independent invariant and, thus, has to be excluded from the list. Note that all derivative terms in $I^{(D)}$ come from $I_7$ alone, so it is natural that they all disappear in the flat spacetime case, where $W_{nijk}=0$. \subsection{Invariants with derivatives: new invariant} To construct the last invariant, we start with the normal derivative of the Weyl tensor, contracted with the normal vector and the trace-free extrinsic curvature tensor, \begin{eqnarray} I_8^{(1)}=\int_{\partial{\mathcal{M}}_5}\hat{K}^{ij}\nabla_n W_{injn}\, , \end{eqnarray} where by $\nabla_n W_{injn}$ we mean $n^\rho n^\mu n^\nu \nabla_\rho W_{i\mu j\nu}$\, .\\ We can show that \begin{eqnarray}\label{CTI81} \delta I_8^{(1)}=-2\int_{\partial{\mathcal{M}}_5}\left(\hat{K}^{ij}W_{injn}\partial_n\sigma+\hat{K}^{ij}W_{nijk}\bar{\nabla}^k\sigma\right)\, . \end{eqnarray} The first term in the conformal variation can be removed if we simply add \begin{eqnarray} I_8^{(2)}=\frac{1}{2}\int_{\partial{\mathcal{M}}_5}K\hat{K}^{ij}W_{injn}\, . 
\end{eqnarray} Let us focus now on the second term in \eqref{CTI81}. We can construct an integral whose conformal variation has a similar form, \begin{eqnarray} I_8^{(3)}=\int_{\partial{\mathcal{M}}_5}\left(\frac{1}{9}\bar{\nabla}_i\hat{K}^i_j\bar{\nabla}_k\hat{K}^{kj}-(\hat{K}^2)^{ij}\bar{S}_{ij}+\frac{1}{2}{\rm Tr} \hat{K}^2\bar{S}^i_i\right)\, , \end{eqnarray} where \begin{eqnarray} \bar{S}_{ij}=\frac{1}{2}(\bar{R}_{ij}-\frac{1}{6}\bar{R}\gamma_{ij})\, \end{eqnarray} is the 4d Schouten tensor computed with respect to the intrinsic boundary metric $\gamma_{ij}$. Under the conformal transformations one has \begin{eqnarray} \begin{split} &\delta(\bar{\nabla}_i\hat{K}^i_j\bar{\nabla}_k\hat{K}^{kj})=-4\sigma \bar{\nabla}_i\hat{K}^i_j\bar{\nabla}_k\hat{K}^{kj}+6\hat{K}^{ij}\bar{\nabla}_k\hat{K}^k_j\bar{\nabla}_i\sigma\, ,\\ &\delta \bar{S}_{ij}=-\bar{\nabla}_i\bar{\nabla}_j\sigma\, , \ \ \delta \bar{S}^i_i=-2\sigma \bar{S}^i_i-\bar{\Box}\sigma\, . \end{split} \end{eqnarray} Therefore one finds that \begin{eqnarray} \delta I_8^{(3)}=\int_{\partial{\mathcal{M}}_5}\left[\frac{2}{3}\hat{K}^{ij}\bar{\nabla}_k\hat{K}^k_j\bar{\nabla}_i\sigma+(\hat{K}^2)^{ij}(\bar{\nabla}_i\bar{\nabla}_j\sigma-\frac{1}{2}\gamma_{ij}\bar{\Box}\sigma)\right]\, . \end{eqnarray} Interestingly, we may recast the above expression as follows, \begin{eqnarray} \delta I_8^{(3)}=\int_{\partial{\mathcal{M}}_5}\hat{K}^{ij}W_{nijk}\bar{\nabla}^k\sigma\, . \end{eqnarray} This is precisely what we need to cancel the second term in the conformal transformation of $I_8^{(1)}$. So we can now construct the following conformal invariant, $I_8=I_8^{(1)}+I_8^{(2)}+2I_8^{(3)}$, \begin{eqnarray} \label{I8} I_8=\int_{\partial{\mathcal{M}}_5}\left(\hat{K}^{ij}\nabla_n W_{injn}+\frac{1}{2}K\hat{K}^{ij}W_{injn}+\frac{2}{9}\bar{\nabla}_i\hat{K}^i_j\bar{\nabla}_k\hat{K}^{kj}-2(\hat{K}^2)^{ij}\bar{S}_{ij}+{\rm Tr} \hat{K}^2\bar{S}^i_i\right)\, . 
\end{eqnarray} Using the Gauss-Codazzi equations and the differential relations (\ref{identity 3}), (\ref{identity 2}) and (\ref{identity 4}), the new invariant $I_8$ can be rewritten as follows, \begin{eqnarray} \label{I8-2} \begin{split} I_8&=\int_{\partial{\mathcal{M}}_5}\Big[\frac{2}{3}K^{ij}\nabla_nR_{injn}-\frac{1}{12}K\nabla_nR+\frac{2}{3}K^{ij}K^{k\ell}R_{ikj\ell}-K^i_kK^{jk}R_{ij}\\ &+\frac{1}{3}{\rm Tr} K^2 R-\frac{5}{48}K^2R+\frac{5}{3}K^i_kK^{kj}R_{injn}-\frac{1}{3}KK^{ij}R_{injn}-{\rm Tr} K^2R_{nn}+\frac{11}{24}K^2R_{nn}\\ &+{\rm Tr} K^4-\frac{11}{6}K{\rm Tr} K^3+\frac{47}{48}K^2{\rm Tr} K^2-\frac{7}{48}K^4\\ &-\frac{1}{3}\bar{\nabla}_kK_{ij}\bar{\nabla}^kK^{ij}+\frac{8}{9}\bar{\nabla}_iK^i_j\bar{\nabla}_kK^{kj}-\frac{7}{9}\bar{\nabla}_iK^{ij}\bar{\nabla}_jK+\frac{25}{72}(\bar{\nabla} K)^2\Big]\, . \end{split} \end{eqnarray} To the best of our knowledge, the invariant $I_8$ is new and was not available in the literature before. It has long been an important missing element in the discussion of the conformal anomalies in five dimensions. Having said that, we should note that in a particular case this invariant reduces to one that has already appeared in the literature on the holographic computation of the entanglement entropy in $d=6$. Indeed, provided the 5d manifold is flat, the Gauss-Codazzi equations give \begin{eqnarray} R_{nijk}=0\ \rightarrow\ \bar{\nabla}_kK_{ij}=\bar{\nabla}_jK_{ik}\, . \end{eqnarray} In this case $I_8$ takes a simpler form, \begin{eqnarray} I_8^{flat}=\int_{\partial{\mathcal{M}}_5}\left[\frac{1}{8}(\bar{\nabla} K)^2+{\rm Tr} K^4-\frac{3}{2}K{\rm Tr} K^3-\frac{1}{3}({\rm Tr} K^2)^2+\frac{47}{48}K^2{\rm Tr} K^2-\frac{7}{48}K^4\right]\, . 
\end{eqnarray} Interestingly, in this form it is related to the invariant $T_3$ found by Safdi \cite{Safdi:2012sn} in a holographic calculation of the entanglement entropy in a six-dimensional flat manifold, \begin{eqnarray} I_8^{flat}=\frac{1}{8}(T_3+\frac{10}{3}I_3-I_4)\, . \end{eqnarray} It should be noted that no conformally invariant form of $T_3$ was given in \cite{Safdi:2012sn}. Thus, our result (\ref{I8}), (\ref{I8-2}), among other things, offers a conformally invariant form, valid in a curved spacetime, for this holographic calculation. \section{Conformal anomaly for a scalar field} The free conformal scalar field in five dimensions is described by the conformal Laplace operator, \begin{eqnarray} D=-(\nabla^2+E)\, , \ \ E=-\frac{3}{16}R\, . \label{D} \end{eqnarray} In a space-time with boundaries it should be supplemented by a boundary condition. There are two boundary conditions consistent with the conformal symmetry: the Dirichlet condition and the conformal Robin condition, \begin{eqnarray} \label{DR} \begin{split} &\text{Dirichlet b.c.}\, : \, \phi\vert_{\partial{\mathcal{M}}_5}=0\, ,\\ &\text{Robin b.c.}\, : \, (\partial_n-S)\phi\vert_{\partial{\mathcal{M}}_5}=0\, ,\ \ S=-\frac{3}{8}K\, . \end{split} \end{eqnarray} The important object that encodes the main information about the quantum field theory is the heat kernel $K(D,s)=e^{-sD}$ and its small-$s$ expansion, \begin{eqnarray} {\rm Tr} K(D,s)=\sum_{p=0}a_p (D) \, s^{(p-5)/2}\, , \ \ s\rightarrow 0\, , \label{K} \end{eqnarray} where $a_p(D)$ are the heat kernel coefficients, represented by integrals, over the manifold and its boundary, of the local invariants constructed from the curvature of the bulk metric, the extrinsic curvature of the boundary and the quantities $E$ and $S$. The integrated conformal anomaly in dimension $d=5$ is determined by the coefficient $a_5$, \begin{eqnarray} \int_{{\cal M}_5}\langle T_{\mu\nu}\rangle g^{\mu\nu}=a_5\, . 
\label{anomaly} \end{eqnarray} As was explained above, there is no bulk term in $a_5$ and the entire contribution comes only from the boundary $\partial{\mathcal{M}}_5$. The exact form of the coefficient $a_5$ is available in the literature for a rather general elliptic operator with the boundary condition being a mixture of the Dirichlet and the Robin conditions, see \cite{Branson:1999jz}, \cite{Branson:1995cm}, \cite{Kirsten:1997qd}, \cite{Kirsten:2000xc}. In what follows we use the general form of $a_5$ presented in \cite{Branson:1999jz}. For the conformal operator $D$ and the Dirichlet boundary condition we find that \begin{eqnarray} \begin{split} a_5^{(D)}&=-\frac{1}{5760(4\pi)^2}\int_{\partial{\mathcal{M}}_5}\Big(8R_{\mu\alpha\nu\beta}^2-8R_{\mu\nu}^2+\frac{5}{16}R^2-10R_{injn}^2+16R^{ij}R_{injn}-17R_{nn}^2+\frac{5}{2}RR_{nn}\\ &+48\Box R-45\bar{\Box}R-\frac{111}{2}\nabla_n^2R+24\bar{\Box}R_{nn}+15\nabla_n^2R_{nn}-\frac{339}{8}K\nabla_nR\\ &+32K^{ij}K^{k\ell}R_{ikj\ell}+16K^i_kK^{kj}R_{ij}+14KK^{ij}R_{ij}+\frac{25}{8}{\rm Tr} K^2R-\frac{35}{16}K^2R\\ &-\frac{47}{2}K^i_kK^{kj}R_{injn}+\frac{49}{4}KK^{ij}R_{injn}-\frac{215}{8}{\rm Tr} K^2 R_{nn}-\frac{215}{16}K^2R_{nn}\\ &-\frac{327}{8}{\rm Tr} K^4+\frac{17}{2}K{\rm Tr} K^3+\frac{777}{32}({\rm Tr} K^2)^2-\frac{141}{32}K^2{\rm Tr} K^2-\frac{65}{128}K^4\\ &+\frac{355}{8}\bar{\nabla}_kK_{ij}\bar{\nabla}^kK^{ij}-\frac{11}{4}\bar{\nabla}_iK^i_j\bar{\nabla}_kK^{jk}-\frac{29}{4}\bar{\nabla}_kK^j_i\bar{\nabla}_jK^{ik}+58\bar{\nabla}_iK^{ij}\bar{\nabla}_jK-30K\bar{\nabla}_i\bar{\nabla}_jK^{ij}\\ &-\frac{75}{2}K^{jk}\bar{\nabla}_k\bar{\nabla}_iK^i_j+\frac{285}{4}K^{ij}\bar{\nabla}_i\bar{\nabla}_jK+54K^{ij}\bar{\Box} K_{ij}+6K\bar{\Box}K-\frac{413}{16}\bar{\nabla}_iK\bar{\nabla}^iK\Big)\, , \end{split} \end{eqnarray} while for the conformal Robin boundary condition we have that \begin{eqnarray} \begin{split} 
a_5^{(R)}&=\frac{1}{5760(4\pi)^2}\int_{\partial{\mathcal{M}}_5}\Big(8R_{\mu\alpha\nu\beta}^2-8R_{\mu\nu}^2+\frac{5}{16}R^2-10R_{injn}^2+16R^{ij}R_{injn}-17R_{nn}^2+\frac{5}{2}RR_{nn}\\ &+48\Box R-45\bar{\Box}R-\frac{111}{2}\nabla_n^2R+24\bar{\Box}R_{nn}+15\nabla_n^2R_{nn}-30K^{ij}\nabla_n R_{injn}-\frac{309}{8}K\nabla_nR\\ &+32K^{ij}K^{k\ell}R_{ikj\ell}+16K^i_kK^{kj}R_{ij}+\frac{43}{2}KK^{ij}R_{ij}-\frac{5}{8}{\rm Tr} K^2R-\frac{5}{4}K^2R\\ &+\frac{133}{2}K^i_kK^{kj}R_{injn}-\frac{161}{4}KK^{ij}R_{injn}-\frac{275}{8}{\rm Tr} K^2 R_{nn}-\frac{185}{16}K^2R_{nn}\\ &+\frac{231}{8}{\rm Tr} K^4-\frac{125}{4}K{\rm Tr} K^3+\frac{375}{32}({\rm Tr} K^2)^2+\frac{147}{32}K^2{\rm Tr} K^2-\frac{37}{64}K^4\\ &+\frac{535}{8}\bar{\nabla}_kK_{ij}\bar{\nabla}^kK^{ij}+\frac{49}{4}\bar{\nabla}_iK^i_j\bar{\nabla}_kK^{jk}+\frac{151}{4}\bar{\nabla}_kK^j_i\bar{\nabla}_jK^{ik}+58\bar{\nabla}_iK^{ij}\bar{\nabla}_jK-\frac{75}{2}K\bar{\nabla}_i\bar{\nabla}_jK^{ij}\\ &-\frac{15}{2}K^{jk}\bar{\nabla}_k\bar{\nabla}_iK^i_j+\frac{315}{4}K^{ij}\bar{\nabla}_i\bar{\nabla}_jK+114K^{ij}\bar{\Box} K_{ij}-\frac{3}{2}K\bar{\Box}K-\frac{503}{16}\bar{\nabla}_iK\bar{\nabla}^iK\Big)\, . \end{split} \end{eqnarray} We now insert the decompositions \begin{eqnarray} R^2_{\mu\alpha\nu\beta}=R_{ikj\ell}^2+4R_{nijk}^2+4R_{injn}^2\, , \ \ R_{\mu\nu}^2=R_{ij}^2+2R_{in}^2+R_{nn}^2\, . \end{eqnarray} The other important point is that the general expression for the coefficient $a_5$ given in \cite{Branson:1999jz} contains some sort of redundancy since the differential relation (\ref{identity 5}), which together with (\ref{identity 2}) generalises the Gauss-Codazzi identities, was not taken into account. In fact, a similar redundancy is present in the coefficient $a_4$ given in \cite{Vassilevich:2003xt}, where the relation\footnote{The relation (\ref{identity 2}) has appeared earlier in \cite{McAvity:1990we} and \cite{Fursaev:2015wpa}.} (\ref{identity 2}) was not taken into account. 
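The index bookkeeping behind these decompositions (the factors of $4$ counting the equivalent positions of the normal index allowed by the symmetries of the Riemann tensor) can be verified numerically in an adapted frame where the normal direction is labelled $0$. A minimal sketch in Python, assuming only the basic pair antisymmetries and pair-exchange symmetry (the tensor is a random stand-in, not an actual curvature):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 4-index array with the basic Riemann symmetries in d = 5:
# antisymmetry in (a,b), antisymmetry in (c,d), symmetry under pair exchange.
A = rng.standard_normal((5, 5, 5, 5))
T = A - A.transpose(1, 0, 2, 3)   # antisymmetrize in the first pair
T = T - T.transpose(0, 1, 3, 2)   # antisymmetrize in the second pair
T = T + T.transpose(2, 3, 0, 1)   # symmetrize under exchange of the pairs

n, s = 0, slice(1, 5)  # adapted frame: index 0 = normal, 1..4 = tangential

full  = np.sum(T**2)                   # R_{mu alpha nu beta}^2
tang  = np.sum(T[s, s, s, s]**2)       # R_{ikjl}^2
one_n = 4 * np.sum(T[n, s, s, s]**2)   # 4 R_{nijk}^2
two_n = 4 * np.sum(T[n, s, n, s]**2)   # 4 R_{injn}^2
assert np.isclose(full, tang + one_n + two_n)

# Same bookkeeping for a symmetric 2-tensor:
# R_{mu nu}^2 = R_{ij}^2 + 2 R_{in}^2 + R_{nn}^2
M = rng.standard_normal((5, 5))
M = M + M.T
assert np.isclose(np.sum(M**2),
                  np.sum(M[s, s]**2) + 2 * np.sum(M[n, s]**2) + M[n, n]**2)
```

Components with both indices of a pair normal drop out by antisymmetry, which is why only the two patterns $R_{nijk}$ and $R_{injn}$ survive.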
Since these relations contain the higher order normal derivatives one might worry whether one would have to specify an extension for the normal vector outside the boundary to make these relations valid. We, however, have checked that the relations (\ref{identity 1}) - (\ref{identity 5}) do not depend on the way the normal vector is extended. Thus, using the Gauss-Codazzi equations and the identities (\ref{identity 1}), (\ref{identity 5}) we finally arrive at the expression \begin{eqnarray} \begin{split} a_5^{(D)}&=\frac{1}{5760(4\pi)^2}\int_{\partial{\mathcal{M}}_5}\Big(-8R_{ikj\ell}^2+8R_{ij}^2-\frac{5}{16}R^2-22R_{injn}^2-R^{ij}R_{injn}+10R_{nn}^2-\frac{5}{2}RR_{nn}\\ &-15K^{ij}\nabla_n R_{injn}+\frac{15}{8}K\nabla_nR\\ &+\frac{277}{4}K^{ij}K^{k\ell}R_{ikj\ell}-\frac{289}{4}K^i_kK^{kj}R_{ij}+KK^{ij}R_{ij}-\frac{25}{8}{\rm Tr} K^2R+\frac{35}{16}K^2R\\ &+\frac{499}{4}K^i_kK^{kj}R_{injn}-\frac{109}{4}KK^{ij}R_{injn}-\frac{25}{8}{\rm Tr} K^2 R_{nn}-\frac{25}{16}K^2R_{nn}\\ &+\frac{327}{8}{\rm Tr} K^4-\frac{379}{4}K{\rm Tr} K^3+\frac{1983}{32}({\rm Tr} K^2)^2+\frac{141}{32}K^2{\rm Tr} K^2+\frac{65}{128}K^4\\ &-\frac{555}{8}\bar{\nabla}_kK_{ij}\bar{\nabla}^kK^{ij}+\frac{165}{2}\bar{\nabla}_iK^i_j\bar{\nabla}_kK^{jk}-\frac{135}{4}\bar{\nabla}_iK^{ij}\bar{\nabla}_jK+\frac{285}{16}\bar{\nabla}_iK\bar{\nabla}^iK\Big)\, \end{split} \end{eqnarray} for the conformal scalar operator with the Dirichlet boundary condition and \begin{eqnarray} \begin{split} a_5^{(R)}&=\frac{1}{5760(4\pi)^2}\int_{\partial{\mathcal{M}}_5}\Big(8R_{ikj\ell}^2-8R_{ij}^2+\frac{5}{16}R^2+22R_{injn}^2+R^{ij}R_{injn}-10R_{nn}^2+\frac{5}{2}RR_{nn}\\ &-15K^{ij}\nabla_n R_{injn}+\frac{15}{8}K\nabla_nR\\ &-\frac{97}{4}K^{ij}K^{k\ell}R_{ikj\ell}+\frac{109}{4}K^i_kK^{kj}R_{ij}+\frac{13}{2}KK^{ij}R_{ij}-\frac{5}{8}{\rm Tr} K^2R-\frac{5}{4}K^2R\\ &+\frac{41}{4}K^i_kK^{kj}R_{injn}-\frac{101}{4}KK^{ij}R_{injn}-\frac{35}{8}{\rm Tr} K^2 R_{nn}+\frac{55}{16}K^2R_{nn}\\ &+\frac{231}{8}{\rm Tr} K^4+10K{\rm Tr} 
K^3-\frac{945}{32}({\rm Tr} K^2)^2+\frac{147}{32}K^2{\rm Tr} K^2-\frac{37}{64}K^4\\ &+\frac{255}{8}\bar{\nabla}_kK_{ij}\bar{\nabla}^kK^{ij}-\frac{105}{2}\bar{\nabla}_iK^i_j\bar{\nabla}_kK^{jk}+\frac{135}{4}\bar{\nabla}_iK^{ij}\bar{\nabla}_jK-\frac{255}{16}\bar{\nabla}_iK\bar{\nabla}^iK\Big)\, \end{split} \end{eqnarray} for the conformal scalar operator with the conformal Robin boundary condition (\ref{DR}). Following the earlier works \cite{Branson:1999jz} - \cite{Kirsten:2000xc}, we adopted in this paper the convention that the components of the normal vector $n^\mu$ are always outside the covariant derivatives, for instance $\nabla_n R_{nn}\equiv n^\alpha n^\mu n^\nu \nabla_\alpha R_{\mu\nu}$, $\nabla_n R_{ninj}\equiv n^\alpha n^\mu n^\nu \nabla_\alpha R_{\mu i \nu j}$ etc. This guarantees that these terms do not depend on how the components of the normal vector are extended outside the boundary. It should be noted that the elliptic operators in question do not require any such extension so that the respective heat kernel coefficients contain only terms that are independent of the way the vectors normal to the boundary are extended to the nearest vicinity of the boundary. This, in particular, explains why in the heat kernel coefficients there do not appear any terms that contain the normal derivative of the extrinsic curvature, $\nabla_n K_{ij}$, since for these terms one would need to specify the extension of the normal vector outside the boundary\footnote{We thank D. Vassilevich for correspondence on this point.}. The expressions for $a_5$, obtained above, should be now compared with the general form (\ref{1}) of the anomaly decomposed over the possible conformal invariants. We have checked that the decomposition is indeed possible and unique both for $a_5^{(D)}$ and $a_5^{(R)}$ so that the list of conformal invariants $E_4\, , \, I_1\, , \dots \, , \, I_8$ is indeed complete. The details of the analysis are given in Appendix B. 
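In practice, the uniqueness check described above amounts to solving an overdetermined linear system for the charges $(a, c_1, \dots, c_8)$ and verifying that it is consistent and of full rank. The following Python sketch illustrates the procedure on a randomly generated stand-in matrix (the actual coefficient matrix of Appendix B is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the coefficient matrix relating the independent boundary
# structures to the nine charges; built so that the system is consistent.
A = rng.standard_normal((27, 9))
x_true = rng.standard_normal(9)   # stand-in for the nine conformal charges
b = A @ x_true                    # consistent right-hand side by construction

x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
assert rank == 9              # full rank: the system fixes all nine charges
assert np.allclose(A @ x, b)  # zero residual: the system is consistent
assert np.allclose(x, x_true) # the unique solution is recovered
```

For an inconsistent system the least-squares residual would be non-zero, so this test distinguishes a genuine decomposition from a failed one.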
The decomposition allows us to determine the values of the boundary conformal charges in the case of a conformal scalar field. For each boundary condition, one arrives at 27 equations for 9 charges. That a solution of this highly overdetermined system of algebraic equations exists is a nontrivial check of our formulas. The result is presented in Table 1. The values found for the Euler charge $a$ agree with the earlier computations \cite{Rodriguez-Gomez:2017aca}, \cite{Nishioka:2021uef}, \cite{Wang:2021mdq}. \begin{table} \begin{center} \renewcommand{\arraystretch}{2} \medskip \caption{Conformal charges for Dirichlet and Robin boundary conditions}\label{table} \bigskip \begin{tabular}{ | c | c | c | } \hline \text{Conformal charges} & \text{Dirichlet b.c.} & \text{Robin b.c.} \\ \hline $a$ & $\frac{17}{8}$ & $-\frac{17}{8}$ \\ \hline $c_1$ & $-\frac{681}{32}$ & $\frac{39}{32}$ \\ \hline $c_2$ & $\frac{609}{8}$ & $\frac{309}{8}$ \\ \hline $c_3$ & $-\frac{81}{8}$ & $\frac{81}{8}$ \\ \hline $c_4$ & $-\frac{27}{2}$ & $\frac{27}{2}$ \\ \hline $c_5$ & $-\frac{9}{8}$ & $\frac{189}{8}$ \\ \hline $c_6$ & $\frac{819}{8}$ & $\frac{441}{8}$ \\ \hline $c_7$ & $-\frac{615}{16}$ & $\frac{195}{16}$ \\ \hline $c_8$ & $-\frac{45}{2}$ & $-\frac{45}{2}$ \\ \hline \end{tabular} \renewcommand{\arraystretch}{1} \end{center} \end{table} \section{Conclusions} In this note we have presented an exhaustive discussion of the boundary conformal invariants in five dimensions. In particular, and this is one of our main results, we have found a new conformal invariant, $I_8$, that contains derivatives of the extrinsic curvature tensor along the boundary. In the flat-space limit this invariant is related to the one found holographically by Safdi. We note that the invariants discussed in this paper do not depend on the way the normal vector is extended outside the boundary. Only invariants of this type may appear in the heat kernel of an elliptic operator, which knows nothing about such an extension. 
We note here for the record, however, that there exists a family of conformal tensors, and of respective conformal invariants, that can be defined provided an extension of the normal vector is given. These invariants have not been discussed in this paper. We use the general results for the heat kernel coefficient $a_5$ available in the literature and compute the integrated conformal anomaly for a conformal scalar satisfying either the Dirichlet or the conformal Robin boundary conditions. The anomaly is then uniquely decomposed over the set of conformal invariants we have found. We then compute the respective conformal charges. This is our second main result. It would be interesting to extend our results and compute the conformal anomaly and the respective boundary charges for conformal fields of higher spin. Although we do not anticipate any difficulties of principle, we do not present this analysis here, leaving it to the future. Among other possible applications, our results will be useful in classifying the possible conformal invariants that appear in the logarithmic terms of entanglement entropy of a $d=6$ conformal field theory. The holographic derivation of the conformal anomaly in five dimensions is yet another interesting subject to explore. We leave these directions for future work. \section*{Acknowledgements} We thank Cl\'ement Berthiere for collaboration at the beginning of this project. We are grateful to Dmitri Vassilevich for useful correspondence. This project was started when both of us were visiting the Theory Division at CERN in the spring of 2018. We thank CERN for hospitality. \newpage
\section{Introduction} \label{sec:intro} Consider the following two problems, which on the face of it have nothing to do with each other: \begin{enumerate} \item Will the cue ball's trajectory on a billiards table ever end up in a pocket? \item Given a bipartite graph $G$, and two functions $w$, $w'$ assigning weights to edges, is it the case that they assign the same weight to \emph{every} perfect matching~$M$ of~$G$? \end{enumerate} Both turn out to be orbit problems for torus actions, and exemplify the class of problems we study in this paper. As our introduction is somewhat long, we break it up as follows. We start with general background to algorithmic invariant theory in~\S\ref{subsec:alg inv th} and discuss general orbit problems in~\S\ref{subsec:orbit}. In~\S\ref{subsec:intro-main-results} we define torus actions, discuss our main results, and explain their motivation from the perspective of algebraic complexity. In~\S\ref{subsec:applications}, we give examples of how these orbit problems for torus actions arise in and capture natural problems in physics and optimization. In~\S\ref{subsec:organization}, we discuss the organization of the paper and logical structure of our results. \subsection{Algorithms in invariant theory}\label{subsec:alg inv th} Computational invariant theory is a subject whose origins can be traced back to ``masters of computation'' in the 19th century such as Boole, Gordan, Sylvester and Cayley among others. The second half of the 20th century gave a major impetus to both structural and computational aspects of these mathematical areas. On the one hand, the advent of digital computers gave mathematicians the means to study much larger algebraic structures than could be accessed by hand. On the other, the parallel development of computational complexity provided a mathematical theory with precise computational models for algorithms and their efficiency analysis. 
This combination has injected many new ideas and questions into invariant theory and related fields, leading to the development of algorithmic techniques such as Gr\"obner bases and many others, which supported faster and faster algorithms. Texts on this large body of work can be found, for example, in the books~\cite{DK,Sturmfels,Cox-Little-Oshea}. While computational complexity theory put the focus on polynomial time as the staple of efficiency, it also provided means to argue the likely impossibility of such fast algorithms for certain tasks, through the Cook-Karp-Levin theory~\cite{cook:71,karp:72,levin:73} of NP-completeness (for Boolean computation) and Valiant's theory of VNP-completeness (for arithmetic computation). More recently, a further surge in collaboration between algebraists and complexity theorists on these algorithmic questions in invariant theory and representation theory arose from two (related) sources starting at the turn of this century. Both imply that these very algorithmic questions in algebra are deeply entwined with the core complexity questions of P vs.~NP and VP vs.~VNP. Not surprisingly, new and enriching connections between these two research directions continue to be found as they develop. The first source is Mulmuley and Sohoni's Geometric Complexity Theory (GCT)~\cite{mulmuley2001geometric}, which highlights the inherent symmetries of complete problems of these complexity classes, and through these suggests concrete invariant theoretic and representation theoretic attacks on the questions above. This has led to many new questions, techniques, and much faster algorithms (see, for example, \cite{GCTV,FS,BCMW:17,MW19}). The second source is the work of Impagliazzo and Kabanets~\cite{KI04}, using Valiant's completeness theory for VP and VNP to again attack these major complexity problems directly through the development of efficient deterministic algorithms for the basic PIT (Polynomial Identity Testing) problem. 
This problem, which (again, thanks to Valiant's completeness) has natural symmetries, is very similar to basic invariant theory problems. Major progress was recently made on resolving such related algorithmic problems, starting with~\cite{Gur04,GGOW16,IQS,IQS2,DM-poly}. Many others continue to follow, see, for example, \cite{DM-oc,AZGLOW,GGOW:20,BGOWW,BFGOWW,BLNW:20,BFGOWW2}. We refer to \cite{BFGOWW2} for a recent description of the state of the art. \subsection{Orbit problems}\label{subsec:orbit} We now briefly describe the basic setting and problems of interest, postponing some of the technical details to later sections for the sake of brevity. A group homomorphism $\rho\colon G \rightarrow \GL(V)$, where $V$ is a vector space (always complex and finite-dimensional), is called a representation of~$G$. One can think of this as a (linear) action of~$G$ on~$V$, i.e., a map $G \times V \rightarrow V$ where $(g,v) \mapsto \rho(g) v$ satisfies the usual axioms of a group action. For us, groups will always be algebraic and representations rational, that is, morphisms of algebraic groups. We will denote $\rho(g) v$ by~$gv$ or~$g \cdot v$. For $v \in V$, we define its {\em orbit} $O_v := \{gv \ |\ g \in G\}$ (denoted $O_{G,v}$ if the group is not clear from context) to be the subset of points that can be reached from $v$ by applying a group element. We denote by $\overline{O_v}$ the topological closure of~$O_v$. These notions are extremely basic and in many concrete instances very familiar. One simple example is the action of $\GL_n \times \GL_n$ on $n \times n$ matrices by left and right multiplication: clearly, the orbit of a matrix~$A$ consists of the matrices having the same rank as~$A$; moreover, the orbit closure of~$A$ is the set of matrices whose rank is at most the rank of $A$. 
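For this familiar left-right action the orbit tests reduce to rank computations. A minimal numerical sketch (our illustration only, not an algorithm from this paper):

```python
import numpy as np

# For the GL_n x GL_n action A -> g A h^{-1}: two matrices lie in the same
# orbit iff they have equal rank, and B lies in the orbit closure of A iff
# rank(B) <= rank(A).
def same_orbit(A, B):
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)

def in_orbit_closure(B, A):
    return np.linalg.matrix_rank(B) <= np.linalg.matrix_rank(A)

A = np.diag([1.0, 2.0, 0.0])                                   # rank 2
B = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)   # rank 2
assert same_orbit(A, B)
assert in_orbit_closure(np.zeros((3, 3)), A)   # 0 lies in every orbit closure
```

Note that this shortcut is special to the rank example; the body of the paper develops methods that work for arbitrary torus actions, where no such simple complete invariant is available.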
Another example is the conjugation action of $\GL_n$ on $n \times n$ matrices, where the orbits are characterized by Jordan normal forms.% \footnote{The orbit closures of two matrices intersect if and only if the matrices have the same eigenvalues (counted with multiplicity).} Understanding the space of orbits of a given group action is perhaps the most basic task of invariant theory. The following three basic algorithmic problems will be the focus of this paper. \begin{problem}\label{prob:main} Let $\rho \colon G \rightarrow \GL(V)$ be a representation of a group $G$. Given $v,w \in V$: \begin{enumerate} \item \textbf{Orbit equality:} Decide if $O_v = O_w$; \item\label{prob:oci} \textbf{Orbit closure intersection:} Decide if $\overline{O_v} \cap \overline{O_w} \ne \emptyset$; \item \textbf{Orbit closure containment:} Decide if $w \in \overline{O_v}$.\footnote{The special case of $w = 0$ is called the null cone membership problem. In fact, many of the recent algorithmic advances mentioned above efficiently solve the null cone problem for specific group actions, see \cite{BFGOWW2} and references therein. The motivation of this paper is to extend that understanding to these more general problems.} \end{enumerate} \end{problem} As we will discuss the computational complexity of algorithms for these problems, one needs to specify how inputs are given and how we measure their size. We will discuss this, but for now it suffices to think of $n=\dim(V)$, the degree of~$\rho$ (assuming it is a polynomial function), and the bit-length of the input vectors $v,w$ as the key size parameters. The aforementioned problems capture and are related to a natural class of ``isomorphism'' or ``classification'' problems across many domains in mathematics, physics and computer science. 
Examples include the graph isomorphism problem~\cite{Derksen-graph}, non-commutative rational identity testing~\cite{GGOW16,IQS2}, equivalence problems on quiver representations~\cite{DM-arbchar,DM-si}, matrix and tensor scaling~\cite{BGOWW,BFGOWW}, classification of quantum states~\cite{bennett1993teleporting} and module isomorphism problems~\cite{BL08}. To briefly hint at the role of invariant theory, let us take a closer look at problem~(\ref{prob:oci}), that is, the problem of orbit closure intersection. We denote by $\mathbb{C}[V]$ the ring of polynomial functions on $V$. A polynomial function~$f$ on~$V$ is called \emph{invariant} if it is constant along orbits, i.e., $f(gv) = f(v)$ for all~$g \in G$ and~$v \in V$. The collection of all invariant polynomials forms a subring $\mathbb{C}[V]^G$, called the \emph{invariant ring}. Since polynomials are continuous, invariant polynomials are constant along orbit closures. In particular, two points $v$ and $w$ are indistinguishable by invariant polynomials when their orbit closures intersect. Amazingly, the converse is also true for a large class of group actions thanks to a result due to Mumford: if the orbit closures of~$v$ and~$w$ do not intersect, then they can always be distinguished by an invariant polynomial. See Theorem~\ref{thm:Mumford} for a precise statement. Mumford's theorem suggests an approach to orbit closure intersection -- test if $f(v) = f(w)$ for all invariant polynomials $f$. For this strategy to be effective, one needs a computational handle on invariant polynomials. Naively there are infinitely many polynomials, but a foundational result of Hilbert helps tackle this issue. A \emph{system of generating polynomial invariants} is a collection of invariant polynomials~$\{f_1,\dots,f_r\}$ such that any other invariant polynomial can be written as a polynomial in the~$f_i$'s. 
In particular, to test for orbit closure intersection it suffices to test whether each of the~$f_i$ takes the same value on both points. Hilbert showed the existence of a finite system of generating polynomial invariants and also gave an algorithm to produce them~\cite{Hilb1}. Since then, many improvements on the complexity of such algorithms have been developed, but even today this task is, in general, infeasible. One basic obstacle is the very description of such a system of generating invariants, coming both from the size of this set and the degree of each polynomial in it. Nearly a century later, a (singly) exponential bound (in~$n$) on the degrees of a system of generating polynomial invariants was achieved for a very general class of group actions~\cite{Der01}, which is unfortunately the best possible in this generality, see~\cite{DM-exp}. A singly exponential bound is necessary to capture a polynomial with a poly-sized (in $n$) arithmetic circuit, but is by no means sufficient.% \footnote{For example, the permanent of an $n\times n$ matrix, which has degree~$n$, is believed to require exponential circuit size. This is essentially the content of Valiant's proof that the permanent is complete for the class VNP, combined with the hypothesis that VNP~$\ne$~VP.} Another issue that one has to deal with is the number of invariants in a system of generating polynomial invariants, and it is often the case that there are exponentially many in any system.% \footnote{This is already the case for the matrix scaling action discussed in Section~\ref{subsec:applications}.} This led Mulmuley \cite{GCTV} to suggest the notion of a \emph{succinct circuit} as a way to capture a system of generating polynomial invariants with a view towards using them for orbit closure intersection. Unfortunately, this approach does not seem to be computationally feasible either. 
See~\cite{GIMOWW} where Mulmuley's conjecture~\cite[Conjecture~5.3]{GCTV} on the existence of succinct circuits was disproved under natural complexity assumptions. What is perhaps most surprising is that this already happens for a \emph{commutative} group action, namely when~$G$ is a torus. Further, an example of a group action was given where any system of generating polynomial invariants must contain a VNP-hard polynomial. The negative results above seem to suggest that the algorithmic tasks at hand are infeasible, even for torus actions, i.e., groups of the form $(\mathbb{C}^\times)^d$. The main results of our paper show the opposite: \emph{all of them are efficiently solvable for torus actions!} The main novelty of our approach is the use of {\em rational invariants} instead of polynomial invariants. A rational invariant is a quotient of polynomials that is invariant, see Section~\ref{subsec:intro-main-results} for a precise definition. This is a bit unexpected since Mumford's theorem simply does not extend to rational invariants: it is easy to construct examples where two points whose orbit closures intersect are distinguished by a rational invariant. Yet, for representations of tori, we show that (a certain special collection of) rational invariants can be used (in a delicate way) to capture not just orbit closure intersection, but orbit closure containment and orbit equality as well. Moreover, we show that rational invariants are computationally easy in this case, in stark contrast with the aforementioned hardness results for polynomial invariants~\cite{GIMOWW}. Inspired by the connections to the P vs.~NP problem, the GCT program makes several predictions in invariant theory. The setting in which most of the predictions and conjectures are formulated is the setting of rational representations of connected reductive groups (which we will define later). 
Here, we want to point out that among connected reductive groups, the commutative ones are precisely the tori. Thus, our main results should be viewed as conclusively verifying several predictions of GCT in the commutative case. Moreover, the barrier result on the computational efficiency of polynomial invariants \cite{GIMOWW} along with our results on rational invariants suggest that a more thorough investigation of rational invariants is needed in the case where the acting group is non-commutative, e.g., $\SL_n$. \subsection{Torus actions and main results} \label{subsec:intro-main-results} We now discuss the main contributions of our paper in more detail and with more precision. Our results concern torus actions, so we specialize the discussion of the preceding section and consider a $d$-dimensional complex torus~$T = (\mathbb{C}^\times)^d$ as the acting group~$G$. The group law is just pointwise multiplication, i.e., $(t_1,\dots,t_d) \cdot (s_1,\dots,s_d) = (t_1s_1,\dots,t_ds_d)$. Any linear action of a torus can be described by an integer matrix $M \in \Mat_{d,n}(\mathbb{Z})$ called the \emph{weight matrix} (where $\Mat_{d,n}(\mathbb{Z})$ denotes the space of $d \times n$ integer matrices). 
The representation~$\rho_M \colon T \rightarrow \GL_n(\mathbb{C})$ corresponding to a weight matrix $M=(m_{ij})$ looks as follows: \begin{align}\label{eq:toract} \rho_M(t) = \begin{psmallmatrix} \prod_{i=1}^d t_i^{m_{i1}} & & \\ & \ddots & \\ & & \prod_{i=1}^d t_i^{m_{in}} \end{psmallmatrix} \end{align} Thus any torus action can be viewed as a scaling action, where each coordinate is scaled separately according to a Laurent monomial.% \footnote{ We can also describe this action as follows: Identify $v\in\mathbb{C}^n$ with a Laurent polynomial $\sum_{j=1}^n v_j \, z_1^{m_{1j}} \cdots z_d^{m_{dj}}$; then the action of $T$ corresponds precisely to rescaling the variables $z_1,\dots,z_d$~\cite{gurvits2004combinatorial}.} The weight matrix (up to reordering of columns) determines the representation. Despite the simple description of commutative torus actions, they nevertheless capture fundamental notions, and the associated orbits can be quite complex. One example is the matrix scaling problem, where the orbits capture weights of perfect matchings (see Problem~\ref{prblm:MatchW}). In this paper, we will assume that a torus action is given by specifying the weight matrix. Thus the bit-lengths of the entries of the weight matrix are included in the input size of the problems. Moreover, we will allow complex number inputs. These can be described up to finite precision by elements in the field of Gaussian rationals~$\mathbb{Q}(i) = \{ s + i t \;|\; s,t \in \mathbb{Q} \}$, which will be encoded in the standard way; see, e.g., \cite{GCTV}.% \footnote{In fact, our results hold more generally when the elements in $\mathbb{Q}(i)$ are given in a `floating point' format, namely in the form $(s + it) 2^p$, with $s,t\in\mathbb{Q}$ and $p\in\mathbb{Z}$ encoded in binary in the standard way. The same is true for input of the form~$2^p$, with $p\in\mathbb{Q}$ encoded in binary. See Remark~\ref{rem:float}.\label{foot:float}} The following theorem captures the main results of our paper. 
\begin{theorem}\label{thm:main} Given as input a weight matrix $M \in\Mat_{d,n}(\mathbb{Z})$ as well as vectors $v,w \in \mathbb{Q}(i)^n$, denote by $b$ the maximal bit-length of the entries of $v,w$, and $M$. Then we can in time $\poly(d,n,b)$: \begin{enumerate} \item decide whether $O_v = O_w$; \item decide whether $\overline{O_v} \cap \overline{O_w} \ne \emptyset$; \item decide whether $w \in \overline{O_v}$. \end{enumerate} In other words, for rational representations of tori, there are polynomial time algorithms for orbit equality, orbit closure intersection, and orbit closure containment. \end{theorem} We note that the null cone membership problem mentioned earlier, namely Problems~\ref{prob:main}~(2)/(3) when the input vector~$w$ is the $0$ vector, was known to have a polynomial time algorithm by a simple reduction to linear programming.\footnote{Namely, a vector $v$ is in the null cone if and only if the convex hull of the weights corresponding to the nonzero coordinates of~$v$ does not contain the origin.} There is no known way of doing the same for the orbit problems above, and indeed our theorem above takes an alternative route. While one might hope for efficient algorithms for Problems~\ref{prob:main}~(1) and~(2) in much more general situations than for tori (for general reductive group actions), our efficient algorithm for orbit closure containment is in stark contrast to the known NP-hardness of the general orbit closure containment problem~\cite{BILPS:20}. Our work points to a key difference: namely, for torus group actions, one can use one-parameter subgroups combined with linear programming techniques to reduce orbit closure containment to orbit equality, while this is impossible in this form for general actions. See Section~\ref{sec:occ} for more details. A common core underlying all our results is an efficient algorithm for computing invariant Laurent polynomials for torus actions. The key idea is the following. 
Invariant polynomials for torus actions can be quite complicated. However, suppose that we restrict to vectors of some fixed support, i.e., ``nonzero pattern'' of the coordinates. This restriction is without loss of generality, since two vectors can only be in the same orbit when their supports coincide. However, it allows us to study a richer class of functions, namely \emph{Laurent polynomials} instead of ordinary polynomials. Allowing for negative exponents makes an important difference: while polynomial invariants naturally form a semigroup, invariant Laurent polynomials form a \emph{lattice}, isomorphic to the integral vectors in the kernel of the weight matrix. Lattices are much better behaved than semigroups, for example they have small bases which can be found efficiently. Before describing our results, let us define invariant Laurent polynomials more precisely. For a representation $\rho\colon G \rightarrow \GL(V)$ of a group $G$, we have an action of~$G$ on the polynomial ring~$\mathbb{C}[V]$ defined by $(g \cdot f) (v) := f(\rho(g)^{-1}v)$. When $V=\mathbb{C}^n$, we can identify $\mathbb{C}[V]=\mathbb{C}[x_1,\dots,x_n]$ with the polynomial ring in~$n$ variables. Now consider the set of vectors with nonzero coordinates in~$S\subseteq[n]$: \begin{align*} X_S = \{ v \in \mathbb{C}^n \;|\; v_j \neq 0 \text{ if and only if } j \in S \}. \end{align*} The Laurent polynomials in the variables~$x_j$ for~$j \in S$ form the natural class of functions on~$X_S$ (since we can always divide by the nonzero coordinates). Accordingly, we will denote their collection by~$\mathbb{C}[X_S]$.\footnote{In the language of algebraic geometry, these are the ``regular'' functions on~$X_S$.} Now, for a torus action of the form~\eqref{eq:toract}, the group $T$ acts on any monomial $x^c=x_1^{c_1} \cdots x_n^{c_n}$ by a simple rescaling. Accordingly, we also have an action of $T$ on the algebra of Laurent polynomials~$\mathbb{C}[X_S]$. 
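The lattice structure just described can be made concrete in a toy example. The following sketch (our illustration, with a hypothetical $2\times 3$ weight matrix of our choosing) extracts an exponent vector from the kernel of $M$ and checks that the corresponding Laurent monomial is constant along orbits; a genuine basis of the integer kernel lattice would be computed via normal forms, as discussed later in the paper.

```python
import sympy as sp

# Hypothetical weight matrix of a 2-dimensional torus acting on C^3.
M = sp.Matrix([[1, -1, 0],
               [0, 1, -1]])

# Exponent vectors of invariant Laurent monomials are the integer vectors in
# ker(M).  Here sympy's rational kernel basis happens to be integral already;
# in general one would use Smith/Hermite normal forms for a lattice basis.
c = list(M.nullspace()[0])      # [1, 1, 1], i.e. the monomial x1*x2*x3

def act(M, t, v):
    # rho_M(t) scales coordinate j by prod_i t_i^{M[i, j]}.
    out = []
    for j in range(M.cols):
        scale = sp.Integer(1)
        for i in range(M.rows):
            scale *= t[i] ** M[i, j]
        out.append(v[j] * scale)
    return out

def monomial(v, c):
    out = sp.Integer(1)
    for x, e in zip(v, c):
        out *= x ** e
    return out

v = [sp.Rational(2), sp.Rational(5), sp.Rational(-3)]
t = [sp.Rational(2), sp.Rational(3)]
w = act(M, t, v)
assert monomial(v, c) == monomial(w, c)   # the monomial is invariant
```

Theorem~\ref{th:laurent} below strengthens this toy computation considerably: a full generating system of at most $n$ invariant Laurent monomials, restricted to any support $S$, can be produced in polynomial time.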
A Laurent polynomial $f$ is called \emph{invariant} if $g \cdot f = f$ for all $g \in G$. Clearly, if $f$ is invariant, then so are all the Laurent monomials occurring in~$f$. The collection of all invariant Laurent polynomials forms the subalgebra $\mathbb{C}[X_S]^G$ of \emph{invariant Laurent polynomials}. A collection of invariant Laurent polynomials~$f_1,\dots,f_r$ is called a \emph{system of generating invariant Laurent polynomials in the variables $\{x_j\}_{j\in S}$} if they generate~$\mathbb{C}[X_S]^G$ as an algebra. For torus actions, these can always be taken to be Laurent \emph{monomials}, in which case we call them a \emph{system of generating invariant Laurent monomials}. We can then state our key result: \begin{theorem}\label{th:laurent} Let $M \in\Mat_{d,n}(\mathbb{Z})$ define an $n$-dimensional representation of $T = (\mathbb{C}^\times)^d$, and let~$S \subseteq [n]$. Assume that the bit-lengths of the entries of $M$ are bounded by $b$. Then, in $\poly(d,n,b)$-time, we can construct an arithmetic circuit with division $\mathcal{C}$ whose output gates compute a system of generating invariant Laurent monomials $f_1,\dots,f_r$ in the variables $\{x_j\}_{j\in S}$, where $r \leq n$. \end{theorem} Here we recall the notion of an \emph{arithmetic circuit with division}, which is a directed acyclic graph as follows. Every node of indegree zero is called an input gate and is labeled by either a variable or a rational (complex) number. Nodes of indegree one and outdegree one are labeled by $^{-1}$ and are called division gates. Nodes of indegree two and outdegree one are labeled either $+$ or $\times$; in the first case the node is a sum gate and in the second a product gate. The only other nodes allowed are output gates, which have indegree one and outdegree zero. Given an arithmetic circuit with division, it computes a rational function at each output node in the obvious way. 
The bit size of such an arithmetic circuit is the total number of nodes plus the total bit-length of the specification of all rational numbers computed in {\em all} of its gates. The notion of {\em (division free) arithmetic circuits} is obtained by disallowing division gates. They compute polynomials in the obvious way. We emphasize that the number of generators produced by Theorem~\ref{th:laurent} is at most~$n$ (in particular, independent of the bit-length~$b$), in stark contrast to the situation for monomial invariants. Moreover, the bit-length of~$\mathcal{C}$ is polynomially bounded. As a consequence of Theorem~\ref{th:laurent}, we are also able to construct arithmetic circuits that compute a generating set of \emph{rational invariants}. For a representation $\rho\colon G \rightarrow \GL(V)$, the action of $G$ on the polynomial ring $\mathbb{C}[V]$ always extends to an action on its field of rational functions~$\mathbb{C}(V)$. A rational function $f \in \mathbb{C}(V)$ is called \emph{invariant} if $g \cdot f = f$ for all $g \in G$. The collection of all rational invariants forms the sub-field $\mathbb{C}(V)^G$ of \emph{rational invariants}. A collection of rational invariants~$f_1,\dots,f_r \in \mathbb{C}(V)$ is called a \emph{system of generating rational invariants} if they generate $\mathbb{C}(V)^G$ as a field extension of~$\mathbb{C}$. Note that any invariant Laurent polynomial is a rational invariant, but the converse is not necessarily true. Nevertheless: \begin{corollary}\label{cor:1.3} Let $M \in\Mat_{d,n}(\mathbb{Z})$ define an $n$-dimensional representation of $T = (\mathbb{C}^\times)^d$. Assume that the bit-lengths of the entries of $M$ are bounded by $b$. Then, in $\poly(d,n,b)$-time, we can construct an arithmetic circuit with division $\mathcal{C}$ whose output gates compute a system of generating rational invariants $f_1,\dots,f_r \in \mathbb{C}(x_1,\dots,x_n)^T$, where $r \leq n$. 
\end{corollary} This result is in sharp contrast to the impossibility of finding succinct circuits for generating \emph{polynomial} invariants under natural complexity assumptions~\cite{GIMOWW}. Furthermore, we can complement Theorem~\ref{thm:main} in the following way: if two orbit closures do not intersect, $\overline{O_v} \cap \overline{O_w} = \emptyset$, then we can construct in polynomial time an arithmetic circuit computing a separating invariant monomial that can serve as a ``witness'' of the non-intersection. \begin{corollary}\label{cor:sep-invar} Let $M \in\Mat_{d,n}(\mathbb{Z})$ define an $n$-dimensional representation of $T = (\mathbb{C}^\times)^d$. Let $v,w \in \mathbb{Q}(i)^n$ be such that $\overline{O_v} \cap \overline{O_w} = \emptyset$. Assume the bit-lengths of the entries of $v,w$ and $M$ are bounded by~$b$. Then, in $\poly(d,n,b)$-time, we can construct an arithmetic circuit of bit-length $\poly(d,n,b)$, which computes an invariant monomial $f$ such that $f(v) \neq f(w)$. \end{corollary} So far, we have discussed orbit problems for complex tori $T = (\mathbb{C}^\times)^d$. It is interesting to ask to what extent our results hold for ``compact'' tori, which are groups of the form~$K = (\Ss^1)^d$, where $\Ss^1 = \{z \in \mathbb{C}^\times \ | \ |z| = 1\}$.% \footnote{Note that $K$ is indeed compact, and a subgroup of $T$. Moreover, any commutative compact connected Lie group is of this form.} Besides the fundamental algorithmic interest in this setting, such group actions are important in several areas. For example, the time evolution of periodic systems in Hamiltonian mechanics is naturally given by $S^1$-actions, and important symmetries in classical and quantum physics are given by compact group actions. In fact, the results discussed so far can also be used to give an efficient solution for orbit problems for compact tori. 
Any (continuous) finite-dimensional representation of $(\Ss^1)^d$ extends to a representation of $(\mathbb{C}^\times)^d$, so representations are specified as before by a weight matrix $M \in\Mat_{d,n}(\mathbb{Z})$. Moreover, the compactness implies that orbits $O_{K,v} = \{kv \ |\ k \in K\}$ are closed and so all three problems mentioned in Problem~\ref{prob:main} coincide. Therefore, the following corollary solves all three problems for compact tori: \begin{corollary}\label{cor:main} Let the weight matrix $M \in\Mat_{d,n}(\mathbb{Z})$ define an $n$-dimensional representation of $T=(\mathbb{C}^\times)^d$ and put $K = (\Ss^1)^d$. Further, let $v,w \in \mathbb{Q}(i)^n$ and assume that the bit-lengths of the entries of $v,w$ and $M$ are bounded by $b$. Then, in $\poly(d,n,b)$-time, we can decide if $O_{K,v} = O_{K,w}$. \end{corollary} To give additional context to this result, we briefly mention some recent results achieving polynomial time algorithms for orbit closure intersection of \emph{specific} group actions. For the left-right action (of $\SL_n \times \SL_n$ on $m$-tuples of $n \times n$ matrices), one approach to solving the orbit closure intersection problem is to (approximately) reduce to the orbit equality problem for the maximal compact subgroup (which happens to be ${\rm SU}(n) \times {\rm SU}(n)$, where ${\rm SU}(n)$ denotes the group of $n \times n$ unitary matrices with determinant $1$), see~\cite{AZGLOW}. This was achieved by using a geodesic convex optimization algorithm. Given the recent advances in this area (see, e.g., \cite{BFGOWW2} and references therein), it is natural to ask if a similar approach could be useful for general reductive group actions. For torus actions, interestingly, we can also go in the other direction. Namely, our result for the orbit equality problem for the maximal compact subgroup, Corollary~\ref{cor:main}, is derived from our main result for complex tori, i.e., Theorem~\ref{thm:main}. 
More generally, we observe that for arbitrary reductive group actions, the orbit equality problem for the maximal compact subgroup is always equivalent to an orbit closure intersection (or equality) problem for a related action of the larger group, see Theorem~\ref{thm:compacttogeneral} for a precise statement. The results in this paper warrant the investigation of several interesting directions that we leave for future work, some of which we will discuss in Section~\ref{sec:future}. \subsection{Further motivation and algorithmic applications}\label{subsec:applications} As we saw above, orbit problems are related to a great number of applications. Despite significant progress, for general reductive group actions it is still an open problem to design fast algorithms for these problems. Our results fully resolve the situation in the case of torus actions and also show how to overcome barriers that had previously been pointed out in the literature~\cite{ikenmeyer2017vanishing,GIMOWW}. Apart from the fundamental complexity-theoretic interest, there are also several algorithmic applications where torus actions arise naturally. Here we discuss in more detail some concrete applications to combinatorial optimization and to dynamical systems, which were already mentioned briefly at the beginning of the introduction. We first explain a link to combinatorial optimization. Consider edge weights $w$ for the complete bipartite graph $G$ on $2n$ labeled vertices ($n$ on each side): the weight $w(e)$ of an edge~$e$ is assumed to be a rational number, encoded in binary. We define the weight $w(M)$ of a perfect matching~$M$ of~$G$ as the sum of the weights of the edges occurring in~$M$. \begin{problem}\label{prblm:MatchW} Given edge weights $w$ and $w'$ as above, decide whether they assign the same weight to \emph{every} perfect matching $M$ of $G$. \end{problem} Perhaps surprisingly, this problem can be reformulated as an orbit closure intersection problem for a torus action (see below). 
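Of course, Problem~\ref{prblm:MatchW} can always be decided by brute force in exponential time; the point of the orbit reformulation is that it yields a polynomial-time algorithm. A tiny sketch of the brute-force check (our illustration):

```python
from itertools import permutations

# Brute-force check of the matching-weight problem for tiny n (exponential
# time; the orbit reformulation gives a polynomial-time algorithm).
def same_on_all_matchings(w, w2):
    n = len(w)
    return all(sum(w[i][pi[i]] for i in range(n)) ==
               sum(w2[i][pi[i]] for i in range(n))
               for pi in permutations(range(n)))

w  = [[0, 1], [2, 3]]
w2 = [[1, 2], [1, 2]]      # differs from w by row potentials (+1, -1)
assert same_on_all_matchings(w, w2)
assert not same_on_all_matchings(w, [[0, 1], [2, 4]])
```

The first pair agrees on every matching because the difference is a row potential, which contributes zero to each perfect matching; this is exactly the kind of degree of freedom the scaling action below factors out.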
Therefore, Theorem~\ref{thm:main} implies that Problem~\ref{prblm:MatchW} can be solved in polynomial time. This insight is far from obvious! The relevant torus action here results from matrix scaling, which has been widely studied and has many applications; see~\cite{Sink} and~\cite{cohen2017matrix} for more recent developments. Consider $\ST_n:= \{ (t_1,\ldots,t_n) \in (\mathbb{C}^\times)^n \mid t_1\cdots t_n =1\}$, which is isomorphic to the algebraic torus $(\mathbb{C}^{\times})^{n-1}$. We let $\ST_n\times\ST_n$ act on $\Mat_n(\mathbb{C})$ by left-right multiplication as follows: \begin{equation}\label{eq:ma-scal} ((t_1,\ldots,t_n),(s_1,\ldots,s_n)) \cdot (v_{ij}) := (t_i v_{ij} s_j)_{ij} . \end{equation} Moreover, we shall identify the edge weights~$w_{ij}$, where $i,j \in [n]$, with the matrix $v_w=(2^{w_{ij}}) \in\Mat_n(\mathbb{C})$.% \footnote{As explained in footnote~\ref{foot:float}, our results also hold for input of this form, where the $w_{ij}$ are specified in binary.} Then one can show that the answer to Problem~\ref{prblm:MatchW} is affirmative if and only if the orbit closures of~$v_w$ and~$v_{w'}$ in~$\Mat_n(\mathbb{C})$ intersect. This follows from Mumford's theorem mentioned earlier, along with the fact that the invariant polynomials for this action are generated by the perfect matchings, namely the monomials~$f_\pi = x_{1,\pi(1)} \cdots x_{n,\pi(n)}$ where $\pi\in S_n$ ranges over the permutations~\cite[Theorem~3]{leep1999marriage}. Indeed, multiplying entries of $v_{w}$ is the same as summing the corresponding edge weights in the exponent, hence~$f_\pi(v_w) = 2^{w(M)}$, where $M$ is the perfect matching defined by the permutation $\pi$. We briefly comment on the 3-dimensional generalization of this action. 
Here, $\ST_n \times \ST_n \times \ST_n$ acts on 3-tensors in $\mathbb{C}^n \otimes \mathbb{C}^n \otimes \mathbb{C}^n$ in the natural way: \begin{align*} ((t_1,\dots,t_n),(s_1,\dots,s_n),(u_1,\dots,u_n)) \cdot (v_{ijk}) = (t_i s_j u_k v_{ijk})_{ijk}. \end{align*} In this case, any system of generating polynomial invariants must include the (maximum) 3-dimensional matching monomials $f_{\pi,\tau} = x_{1,\pi(1),\tau(1)} \cdots x_{n,\pi(n),\tau(n)}$ for~$\pi,\tau\in S_n$, which led to the barrier result for torus actions in~\cite{GIMOWW}. Of course, in this case there are additional generating invariants; see, e.g., \cite{linial2014vertices}. Our results show that the corresponding orbit problems can nevertheless be solved in polynomial time! Moreover, it is possible to efficiently exhibit \emph{separating} polynomial invariants (whenever they exist) as well as to construct generating systems of invariant \emph{Laurent polynomials} or \emph{rational} invariants. Our second example concerns a connection to dynamical systems. Consider a (massless) cue ball on a billiard table (assumed to be square to simplify the discussion). We can ask: \begin{problem}\label{prblm:billiard} If we hit the cue ball at a given angle, will its trajectory end up in a pocket? \end{problem} It is well-known, and easy to see, that one can map trajectories on an ordinary billiard with reflecting boundaries to a billiard of twice the size with periodic boundaries, say $(\mathbb{R}/2\pi\mathbb{Z})^2$. The trajectory of the ball depends fundamentally on the angle or slope. If the slope is irrational, then the trajectory will be dense, so the answer to Problem~\ref{prblm:billiard} is trivially yes. Otherwise, the trajectory will be periodic and the problem is nontrivial. We can model it as an orbit problem as follows. Let the compact torus $\Ss^1$ act on $\mathbb{C}^2$ by \begin{align*} t \cdot (x,y) := (t^p x, t^q y), \end{align*} where $s=\frac qp$ is the slope by which we hit the ball.
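For coprime $p$ and $q$, a single Laurent monomial, $x^q y^{-p}$, is already a complete invariant for this action on points of the unit torus, so orbit equality reduces to comparing one complex number. A minimal sketch, under this coprimality assumption (the general case is handled by our algorithms):

```python
import cmath

def same_orbit(z1, z2, p, q, tol=1e-9):
    """Orbit-equality test for t . (x, y) = (t^p x, t^q y) with t on the
    unit circle, for points on the unit torus.  Assumes gcd(p, q) = 1, in
    which case the Laurent monomial x^q * y^(-p) is a complete invariant."""
    inv = lambda z: z[0] ** q * z[1] ** (-p)
    return abs(inv(z1) - inv(z2)) < tol

# Slope 2, i.e. (p, q) = (1, 2): rotating by t stays in the same orbit,
# while perturbing only the second angle leaves it.
z1 = (cmath.exp(0.3j), cmath.exp(0.7j))
t = cmath.exp(0.5j)
z2 = (t ** 1 * z1[0], t ** 2 * z1[1])
assert same_orbit(z1, z2, 1, 2)
assert not same_orbit(z1, (cmath.exp(0.3j), cmath.exp(0.71j)), 1, 2)
```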
We can identify points $(\theta,\nu)$ on the periodic billiard with points $(e^{i\theta},e^{i\nu}) \in \mathbb{C}^2$. In this way, Problem~\ref{prblm:billiard} reduces to a constant number of orbit equality problems for this action (one for each pocket). While the problem is certainly easy to solve by a variety of methods, one can ask analogous questions for billiards in $n>2$~dimensions, allowing a $d$-dimensional space of cue directions. Such generalizations similarly correspond to orbit problems for compact tori~$(\Ss^1)^d$ acting on some~$\mathbb{C}^n$, and they can all be solved in polynomial time by using Corollary~\ref{cor:main}. \subsection{Organization of the paper}\label{subsec:organization} In Section~\ref{sec:inv}, we give an introduction to basic results in invariant theory that we will need to establish our results. In Section~\ref{sec:rep-tori}, we focus on tori, their representations, and their invariants. In particular, we will show that the faces of a natural convex polyhedral ``Newton cone'' are in one-to-one correspondence with the orbits in an orbit closure, which will be an important ingredient later on. In Section~\ref{sec:gen-inv}, we discuss the definition and computation of suitable \emph{rational} invariants. As mentioned above, our key result is that for fixed support, a small generating set of invariant Laurent monomials can be computed efficiently. This result, which is Theorem~\ref{th:laurent}, is at the heart of our algorithms, and also of independent interest. We achieve this using Smith normal forms. As an easy consequence, this also implies that we can efficiently compute a small generating set of rational invariants for a given representation, that is, Corollary~\ref{cor:1.3}. In Section~\ref{sec:oe}, we explain how to use the results of the preceding section to solve the orbit equality problem in polynomial time. This establishes part~(1) of Theorem~\ref{thm:main}.
Here we rely on known results for testing if a given Laurent monomial (of possibly exponential degree) evaluates to the same value on two given vectors, and we present a brief sketch for completeness. In Sections~\ref{sec:oci-to-oe} and \ref{sec:occ}, we show how to solve the orbit closure intersection and containment problems by reducing them to orbit equality. This establishes parts~(2) and (3) of Theorem~\ref{thm:main}. Here we use the polyhedral description of the structure of orbit closures as furnished by the Newton cone. Furthermore, we show that given two points whose orbit closures do not intersect, we can efficiently construct a separating monomial invariant as a ``witness''. This proves Corollary~\ref{cor:sep-invar}. In Section~\ref{sec:compact}, we show how to solve the orbit equality problem for compact tori. This establishes Corollary~\ref{cor:main}. We also give, for general reductive groups~$G$, a reduction from orbit equality for a maximal compact subgroup~$K\subseteq G$ to orbit equality and orbit closure intersection for~$G$. In Section~\ref{sec:future}, we summarize our results and discuss some important open problems and future directions. \medskip\noindent\textbf{Conventions.} In this paper, sometimes we work with monomials and sometimes with Laurent monomials. Unless we use the prefix ``Laurent'', by a monomial we mean $\prod_j x_j^{c_j}$ where $c_j \in \mathbb{Z}_{\geq 0}$, i.e., all exponents are non-negative. Whenever exponents are allowed to be negative, we will be careful to specify that it is a Laurent monomial. \section{Preliminaries of invariant theory}\label{sec:inv} We will briefly recall the main results in invariant theory that are relevant for us (see~\cite{kraft,dolgachev:03,DK,Mumford-book} for details). We will take our ground field to be $\mathbb{C}$, the field of complex numbers, for simplicity. However, much of this theory works for any algebraically closed field.
For a (finite-dimensional) vector space $V$, we denote by $\mathbb{C}[V]$ the ring of polynomial functions on~$V$. For our purposes, if $V$ is the standard vector space $\mathbb{C}^n$, then $\mathbb{C}[V] = \mathbb{C}[x_1,\dots,x_n]$, the polynomial ring in $n$ variables, where $x_i$ is to be interpreted as the $i^{th}$ coordinate function. Let $G$ be an algebraic group, i.e., it has the structure of an algebraic variety (not necessarily irreducible) such that the multiplication map $m\colon G \times G \rightarrow G$ and the inverse map $\iota\colon G \rightarrow G$ are morphisms of varieties.% \footnote{A morphism of varieties simply means that in local coordinates the map is given by ratios of polynomials. For concreteness, the reader may simply think of an algebraic group as a matrix group, i.e., a subgroup of $\GL_n(\mathbb{C})$ that is described as the zero locus of a collection of polynomials.} A morphism of algebraic groups $\rho: G \rightarrow \GL(V)$ is called a rational representation of $G$.% \footnote{One can interpret this action as the action of the subgroup $\rho(G) \subseteq \GL(V)$ on $V$ by matrix-vector multiplication, where $\rho(G)$ is parametrized algebraically by an algebraic group $G$.} We write $gv$ or $g \cdot v$ for $\rho(g) v$. For a point $v \in V$, its orbit $O_v$ (or $O_{G,v}$ when the group is not clear from context) is the set of all points that can be reached from $v$ by the action of an element of the group, i.e., \begin{align*} O_v := \{gv \ |\ g \in G\}. \end{align*} We denote by $\overline{O_v}$ the closure of the orbit $O_v$. The closure is to be taken either with respect to the Euclidean topology or the Zariski topology. Indeed, the closures in both topologies coincide, a well-known fact that relies on a fundamental result in algebraic geometry due to Chevalley (see~\cite[I.\S10]{mumford:88}). 
A polynomial function $f \in \mathbb{C}[V]$ is called \emph{invariant} if it is oblivious to the group action, i.e., $f(gv) = f(v)$ for all $g \in G$, $v \in V$. The collection of all invariant polynomials forms a subring \begin{align*} \mathbb{C}[V]^G := \{f \in \mathbb{C}[V]\ |\ \forall\ g\in G, v \in V\ f(gv) = f(v) \}. \end{align*} One key observation is that invariant functions are constant along orbits and hence, by continuity, constant along orbit closures as well. Hence, if the orbit closures of two points intersect, then they cannot be distinguished by an invariant function. The converse was proved by Mumford for a special class of groups called reductive groups~\cite{Mumford-book} (see also~\cite[Corollary~2.3.8]{DK}). An algebraic group $G$ is called {\em reductive} if every rational representation is a direct sum of irreducible representations, where a representation is called irreducible if it has no subrepresentations other than $0$ and itself. Examples of reductive groups include $\SL_n, \GL_n, {\rm Sp}_n, {\rm O}_n$, finite groups, and most importantly for us, tori (which we define formally in the next section), as well as direct products thereof.\footnote{The group $B_n$ of upper triangular $n \times n$ invertible matrices is a typical example of a group that is not reductive.} Reductive groups have played a central role in a number of mathematical fields for over a century. A particularly important result in the invariant theory of reductive groups is that invariant rings are finitely generated \cite{Hilb1, Hilb2,weyl:39}. To state Mumford's result in the generality we need, we will define rational actions on varieties (a notion that naturally generalizes rational representations). Let $X$ be an algebraic variety and let $\mathbb{C}[X]$ denote the ring of regular functions on~$X$.
A rational action of an algebraic group $G$ on~$X$ is a morphism of varieties $G \times X \rightarrow X, (g,x) \mapsto g\cdot x$ satisfying $g \cdot (g' \cdot x) = (gg') \cdot x$ and $e \cdot x = x$ for all $x \in X$, $g,g' \in G$. As in the vector space case, we denote the orbit of a vector~$v\in X$ by $O_v$. \begin{theorem}[Mumford, \cite{Mumford-book}]\label{thm:Mumford} Let $G$ be a reductive group. Let $X$ be an algebraic variety and suppose we have a rational action of $G$ on $X$. For $v,w \in X$ we have $\overline{O_v} \cap \overline{O_w} = \emptyset$ if and only if there exists $f \in \mathbb{C}[X]^G$ such that $f(v) \neq f(w)$. \end{theorem} Another well-known important structural result states that every orbit closure $\overline{O_v}$ contains a {\em unique} closed orbit. \begin{theorem}\label{th:uniqueclosedorbit} Let $\rho\colon G \rightarrow \GL(V)$ be a rational representation of a reductive group $G$. Then: \begin{enumerate} \item For any $v\in V$, the orbit closure $\overline{O_v}$ contains a unique closed orbit, that we denote by $O_{\widetilde{v}}$. \item\label{it:oci red} If $v,w \in V$, then \[ \overline{O_v} \cap \overline{O_w} \ne \emptyset \Longleftrightarrow O_{\widetilde{v}} = O_{\widetilde{w}}. \] \end{enumerate} \end{theorem} \begin{proof} (1) The first assertion is \cite[Theorem~2.3.6]{DK}. (2) For the second assertion, if the orbit closures $\overline{O_v}$ and $\overline{O_w}$ are disjoint, then so are the orbits~$O_{\widetilde{v}}$ and~$O_{\widetilde{w}}$, which therefore must be different. Conversely, suppose $O_{\widetilde{v}} \neq O_{\widetilde{w}}$. Since these orbits are closed, by Theorem~\ref{thm:Mumford}, there is an invariant $f \in \mathbb{C}[V]^G$ such that $f(\widetilde{v}) \neq f(\widetilde{w})$. By continuity, $f(v) = f(\widetilde{v}) \neq f(\widetilde{w}) = f(w)$, which implies $\overline{O_v} \cap \overline{O_w} = \emptyset$ by another application of Theorem~\ref{thm:Mumford}. 
\end{proof} Part~(2) of this theorem shows that the orbit closure intersection problem can be reduced to the orbit equality problem, provided we can compute the unique closed orbit $O_{\widetilde{v}}$ contained in $\overline{O_v}$. We will see in Section~\ref{sec:oci-to-oe} that if the group~$G$ is a torus, this can be achieved in polynomial time. Another key result in understanding orbit closures is the Hilbert--Mumford criterion. A {\em one-parameter subgroup} of $G$ is a morphism of algebraic groups $\sigma \colon \mathbb{C}^\times \rightarrow G$. For a representation of~$G$ on a vector space~$V$, we say that a subset $S \subseteq V$ is $G$-stable if $g \cdot s \in S$ for all $g \in G$, $s \in S$. \begin{theorem}[Hilbert--Mumford criterion, \cite{Hilb2,Mumford-book}]\label{th:HM-crit} Let $\rho \colon G \rightarrow \GL(V)$ be a rational representation of a reductive group $G$. Suppose $S \subseteq V$ is a $G$-stable closed subvariety of $V$ and let~$v \in V$ such that $\overline{O_v} \cap S \neq \emptyset$. Then there exists a one-parameter subgroup $\sigma \colon \mathbb{C}^\times \rightarrow G$ such that $\lim_{\epsilon \to 0} \sigma(\epsilon) \cdot v \in S$. \end{theorem} A particular use of the above theorem is to take $S = \{0\}$ or $S = O_{\tilde v}$ (both of which are closed and $G$-stable). When $G$ is a torus, the set of one-parameter subgroups has the structure of a $\mathbb{Z}$-lattice. We will discuss this further in the next section. We end this section by introducing a key notion in invariant theory called the \emph{null cone}, whose significance will become clear in later sections. For a collection $F$ of polynomials in $\mathbb{C}[V]$, we denote by~$\mathbb{V}(F)$ their common zero locus in $V$. \begin{definition}[Null cone]\label{def:null} Let $\rho\colon G \rightarrow \GL(V)$ be a rational representation of a reductive group~$G$. Then the \emph{null cone} is defined as \[ \mathcal{N}_G(V) := \mathcal{N}(\rho) := \{v \in V\ |\ 0 \in \overline{O_v}\}.
\] It can also be defined as the common zero locus of all invariant polynomials without constant part: \[ \mathcal{N}_G(V) := \mathcal{N}(\rho) := \mathbb{V}(\bigcup_{d > 0} \mathbb{C}[V]^G_d), \] where $\mathbb{C}[V]^G_d$ denotes the space of invariant polynomials that are homogeneous of degree $d$. The equivalence of the two definitions of the null cone follows from Theorem~\ref{thm:Mumford}. \end{definition} \section{Invariants and orbit closures of torus actions}\label{sec:rep-tori} Invariant theory for general reductive groups can get very complicated. However, for representations of tori, that is, \emph{commutative} connected reductive groups, a lot of the theory can be viewed as a combination of linear algebra and the study of convex polytopes. We will collect important results regarding torus actions in this section and refer the reader to~\cite{Wehlau93, DK} for more details. All the results in this section are already known or can be deduced from the existing literature, and we provide proof sketches for completeness. Note that tori are reductive groups, so the results of the previous section hold in this setting. We will first briefly recall torus actions and the notions of characters/weights, one-parameter subgroups and how weight matrices define a representation. Then, we give a linear algebraic description of invariant rings by determining the monomials that are invariant. Finally, we describe a polyhedral perspective on orbits. In particular, given a point $v$ in the vector space of the representation, we define a polyhedral cone, called the Newton cone. The Newton cone can be used to determine whether $v$ is in the null cone, and moreover we give a correspondence between the faces of the Newton cone and the orbits in the orbit closure of~$v$, which is crucial in understanding the orbit closure containment problem.
For this entire section, fix a torus $T = (\mathbb{C}^\times)^d$.\footnote{Any commutative connected reductive group is isomorphic to some $(\mathbb{C}^\times)^d$. Important examples include ${\rm T}_d$, the group of diagonal $d \times d$ invertible matrices and its subgroup ${\rm ST}_d$ consisting of diagonal matrices with determinant $1$.} \subsection{Representations and invariants}\label{subsec:rep inv tori} As described in Section~\ref{subsec:intro-main-results}, any representation of a torus~$T$ is a ``scaling'' action (after identifying $V$ with $\mathbb{C}^n$ by an appropriate choice of basis). Namely, each coordinate of $v \in \mathbb{C}^n$ is multiplied by some (Laurent) monomial $\prod_{i=1}^d t_i^{\lambda_i}$ for integers~$\lambda_i \in \mathbb{Z}$. These monomials (succinctly described by the so-called weight matrix, see below) together specify the representation. We now make this more precise. A 1-dimensional (rational) representation is called a \emph{character} or a \emph{weight}. Let $\mathcal{X}(T)$ denote the set of weights of $T$, which forms a group where the binary operation is (pointwise) multiplication of functions. To each $\lambda = (\lambda_1,\dots,\lambda_d) \in \mathbb{Z}^d$, we associate a weight, also denoted $\lambda$ by slight abuse of notation, namely \[ \lambda\colon T \rightarrow \mathbb{C}^\times, \quad \lambda(t) = \prod_{i=1}^d t_i^{\lambda_i}, \] which gives an identification of abelian groups $\mathbb{Z}^d \cong \mathcal{X}(T)$. Let $\rho \colon T \rightarrow \GL(V)$ be a (rational) representation of~$T$ where $V$ is an $n$-dimensional vector space. We can choose a basis of $V$ consisting of weight vectors, wherein a vector~$v \in V$ is called a \emph{weight vector} of weight~$\lambda \in \mathcal{X}(T)$ if $t \cdot v = \lambda(t) v$ for all $t \in T$. 
Once we have chosen a weight basis, using the identification~$\mathcal{X}(T) \cong \mathbb{Z}^d$, the corresponding $n$ weights can be collected into a $d \times n$ matrix with integer entries, which we call the \emph{weight matrix} of the representation. Up to permutation of the columns, it is independent of the choice of weight basis, and it classifies the representation up to isomorphism. Concretely, a matrix $M = (m_{ij}) \in \Mat_{d,n}(\mathbb{Z})$ describes the representation $\rho_M \colon T \rightarrow \GL_n(\mathbb{C})$ defined in \eqref{eq:toract}. That is, for $t = (t_1,\dots,t_d)$ and $v = (v_1,\dots,v_n) \in \mathbb{C}^n$, we have \[ t \cdot v = \rho_M(t) v = \left( \left(\prod_{i=1}^d t_i^{m_{i1}}\right) v_1, \left(\prod_{i=1}^d t_i^{m_{i2}}\right) v_2, \dots, \left(\prod_{i=1}^d t_i^{m_{in}}\right) v_n \right). \] The matrix $M$ is the weight matrix for this action. The $j^{th}$ standard basis vector~$e_j$ is a weight vector of weight~$m^{(j)} = (m_{1j}, m_{2j},\dots, m_{dj}) \in \mathbb{Z}^d = \mathcal{X}(T)$. Note that $m^{(j)}$ is the $j^{th}$ column vector of $M$. For the rest of this section, we fix an $n$-dimensional representation $\rho_M \colon T \rightarrow \GL_n(\mathbb{C})$ of the torus~$T=(\mathbb{C}^\times)^d$ given by a weight matrix $M \in \Mat_{d,n}(\mathbb{Z})$ with columns $m^{(j)}$ for $j\in[n]$. The following well-known result describes the invariant ring of this action (see, e.g., \cite[Section~3]{DM-exp}): \begin{proposition}\label{pro:3.1} \begin{enumerate} \item\label{it:semigroup} Let $c\in\mathbb{Z}_{\geq 0}^n$. A monomial $x^c=\prod_j x_j^{c_j}$ is invariant if and only if $\sum_j c_j m^{(j)} = 0$; \item The invariant ring $\mathbb{C}[x_1,\dots,x_n]^T$ is spanned as a vector space by the invariant monomials. 
\end{enumerate} \end{proposition} \begin{proof} For the action~$\rho$ of $G$ on $V$, there is a natural induced action of $G$ on the ring of polynomial functions~$\mathbb{C}[V]$ defined by the formula $g \cdot f (v) := f(\rho(g)^{-1}v)$. Applying this for the action $\rho_M$, we get an induced action of $T$ on $\mathbb{C}[x_1,\dots,x_n]$. It is easy to compute this action: for a monomial~$x^c$ and $t \in T$, we have $t \cdot x^c = \lambda(t)^{-1} \, x^c$, where $\lambda \in \mathcal{X}(T)$ is the character corresponding to~$\sum_j c_j m^{(j)} \in \mathbb{Z}^d$. It follows that the monomials which are invariant are precisely the ones for which $\sum_j c_j m^{(j)} = 0$, the trivial character, proving the first part. The second part follows from the observation that a polynomial is invariant if and only if each monomial that occurs in it is invariant. \end{proof} Part~(\ref{it:semigroup}) of Proposition~\ref{pro:3.1} shows that the invariant monomials are in one-to-one correspondence with the nonnegative integer vectors in the kernel of the weight matrix. Accordingly, they form a semigroup. In general, such semigroups can have a large number of generators, which explains the difficulty of using polynomial invariants~\cite{durand-et-al:02}. Our key idea to obtain efficient algorithms will be to instead consider invariant Laurent monomials, which form a lattice rather than a semigroup. We will return to this in Section~\ref{sec:gen-inv}. It turns out that the weights lead to a strong link to convex polyhedral geometry, which in turn characterizes the orbits in an orbit closure. For this, we make the following definitions. The \textit{support} of a vector $v\in \mathbb{C}^n$ is defined as \[ \supp(v) := \{j\in [n]\mid v_j\neq 0\}. \] Let us record some of the properties of the support. By dimension (of an orbit, orbit closure, algebraic group, etc.), we mean the dimension of the underlying variety.
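Proposition~\ref{pro:3.1}(\ref{it:semigroup}) turns invariance of a monomial into linear algebra over $\mathbb{Z}$. A short numerical sanity check, with a hypothetical weight matrix, illustrates this:

```python
from math import prod
import random

def is_invariant_monomial(weights, c):
    """x^c is invariant iff sum_j c_j m^(j) = 0, where m^(j) are the columns
    of the weight matrix (Proposition pro:3.1, part 1)."""
    return all(sum(cj * row[j] for j, cj in enumerate(c)) == 0
               for row in weights)

def act(weights, t, v):
    """Apply rho_M(t) to v: coordinate j is scaled by prod_i t_i^{m_ij}."""
    return [v[j] * prod(ti ** row[j] for ti, row in zip(t, weights))
            for j in range(len(v))]

def monomial(v, c):
    return prod(vj ** cj for vj, cj in zip(v, c))

# Hypothetical weight matrix for a (C^x)^2-action on C^3,
# with columns m^(1) = (1,0), m^(2) = (-1,2), m^(3) = (0,-1).
M = [[1, -1, 0],
     [0,  2, -1]]
assert is_invariant_monomial(M, (1, 1, 2))     # m^(1) + m^(2) + 2 m^(3) = 0
assert not is_invariant_monomial(M, (1, 0, 0))

# Numerically: an invariant monomial takes the same value along an orbit.
random.seed(0)
t = (random.uniform(1, 2), random.uniform(1, 2))
v = (1.5, -0.5, 2.0)
assert abs(monomial(act(M, t, v), (1, 1, 2)) - monomial(v, (1, 1, 2))) < 1e-9
```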
\begin{lemma} \label{lem:supp-dim} For $v,w \in \mathbb{C}^n$ we have: \begin{enumerate} \item If $O_v = O_w$, then $\supp(v) = \supp(w)$. \item If $\supp(v) = \supp(w)$, then $\dim O_v = \dim O_w$. \item\label{it:supp down cl} If $w \in \overline{O_v}$, then $\supp(w) \subseteq \supp(v)$. This inclusion is strict if and only if $w \in \overline{O_v} \setminus O_v$. \end{enumerate} \end{lemma} \begin{proof} (1) is clear, since each coordinate simply gets rescaled by a nonzero number by the group action. For (2) we note that the stabilizer group $\stab(v)$ of $v$ only depends on $\supp(v)$. The claim follows using $\dim O_v = d - \dim \stab(v)$. For~(3), the inclusion of supports holds since taking limits can never increase the support. Finally, it is known~\cite[\S8.3]{Humphreys:75} that $\overline{O_v} \setminus O_v$ is a Zariski closed subset of dimension strictly less than $\dim O_v$. Hence $w \in \overline{O_v} \setminus O_v$ implies $\dim O_w < \dim O_v$ and therefore $\supp(w) \subsetneq \supp(v)$ by part~(2). \end{proof} \subsection{Newton cone and orbit closures} We define the \emph{Newton cone}~$C(v)$ of a vector~$v\in \mathbb{C}^n$ to be the rational polyhedral cone generated by the weights corresponding to the indices in the support, that is, \[ C(v) := \Big\{\sum_{j\in\supp(v)} c_j m^{(j)} \mid c_j \ge 0 \Big\}\subseteq\mathbb{R}^d . \] The {\em lineality space} of the cone $C(v)$ is defined as $L(v) := C(v) \cap (-C(v))$. Clearly, it is the largest linear subspace contained in $C(v)$. The cone $C(v)$ is called {\em pointed} iff $L(v)=0$. (Compare~\cite{schrijver:86} for the structure of polyhedral cones.) These notions are standard in geometric programming, which essentially studies optimization problems associated with torus actions, albeit often with a different representation and motivation; see, e.g.,~\cite{BLNW:20} and references therein. 
The connection is particularly apparent and useful in the study of polynomial capacities which have important applications to approximate counting~\cite{linial2000deterministic,gurvits2004combinatorial}. We will see that the Newton cone contains all the information about the orbits contained in an orbit closure. To start, we show that membership in the null cone can be characterized as follows. Define the \emph{essential support} of a vector $v \in V$ as \begin{equation}\label{def:esupp} \esupp(v) := \{ j \in \supp(v) \mid m^{(j)} \in L(v) \}. \end{equation} \begin{lemma}\label{lem-esupp} Let $k \in \supp(v)$. We have $k \in \esupp(v)$ if and only if there exists an invariant monomial $\prod_{j \in \supp(v)} x_j^{c_j}$ with $c_j \in \mathbb{Z}_{\geq 0}$ such that $c_k > 0$. \end{lemma} \begin{proof} It is easy to see that $m^{(k)} \in L(v)$ if and only if there is a non-negative integral linear combination $\smash{\sum_{j \in \supp(v)} c_j m^{(j)}} = 0$ with $c_k > 0$. By Proposition~\ref{pro:3.1}, this is equivalent to the existence of an invariant monomial $\smash{\prod_{j \in \supp(v)} x_j^{c_j}}$ with $c_j \in \mathbb{Z}_{\geq 0}$ such that $c_k > 0$. \end{proof} \begin{corollary}\label{cor:null cone} We have that $v$ is in the null cone $\mathcal{N}(\rho_M)$ if and only if $\esupp(v) = \emptyset$. \end{corollary} \noindent Equivalently, $v$ is in the null cone if and only if $C(v)$ is pointed and $m^{(j)} \neq 0$ for all $j \in \supp(v)$. In fact, much more can be said. Let us first recall the notion of faces of polyhedral cones. If $C(v)$ is contained in a closed halfspace $H_+$ of $\mathbb{R}^d$ bounded by a linear hyperplane $H$, then we call the intersection $F=H\cap C(v)$ a {\em face} of $C(v)$ when it is non-empty. The cone itself is also considered a face of $C(v)$: by definition, it is the largest face of $C(v)$. On the other hand, each face of $C(v)$ must contain the lineality space $L(v)$, which is therefore the smallest face of $C(v)$. 
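Returning to the null-cone criterion, Lemma~\ref{lem-esupp} and Corollary~\ref{cor:null cone} can be illustrated by brute-forcing the essential support over bounded exponent vectors. This is a toy sketch for intuition only (the search is exponential; our actual algorithms avoid any such enumeration):

```python
from itertools import product

def esupp_bruteforce(weights, v, bound=4):
    """Essential support of v, via Lemma lem-esupp: index k is essential iff
    some invariant monomial supported on supp(v) uses x_k with positive
    exponent.  Searches exponents in {0, ..., bound}; toy examples only."""
    supp = [j for j, vj in enumerate(v) if vj != 0]
    ess = set()
    for c in product(range(bound + 1), repeat=len(supp)):
        if any(c) and all(sum(c[a] * row[j] for a, j in enumerate(supp)) == 0
                          for row in weights):
            ess.update(j for a, j in enumerate(supp) if c[a] > 0)
    return sorted(ess)

# Weights 1, -1, 2 for a C^x-action on C^3.
M = [[1, -1, 2]]
assert esupp_bruteforce(M, (1, 1, 1)) == [0, 1, 2]   # not in the null cone
assert esupp_bruteforce(M, (1, 0, 0)) == []          # in the null cone
```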
We will see shortly that the faces of $C(v)$ are in bijective correspondence with the orbits contained in $\overline{O_v}$. For this, we need to introduce some more notation. For a subset $J \subseteq \supp(v)$, we define the \emph{restriction}~$v|_{J}$ to be the vector whose $j$-th coordinate is \[ (v|_{J})_j = \begin{cases} v_j & \mbox{ if } j \in J,\\ 0 & \mbox{ otherwise.} \end{cases} \] Now let $F$ be a face of $C(v)$ defined by a closed half-space $H_+ = \{ y \in \mathbb{R}^d \;|\; \nu \cdot y \geq 0 \}$ for some $\nu\in\mathbb{R}^d$, that is, \[ F= \{ y\in C(v) \mid \nu \cdot y = 0 \}. \] Since $C(v)$ is rational, we may assume that $\nu$ has integer components. We assign to $F$ the subset of indices \[ S_F := \{ j \in \supp(v) \mid m^{(j)} \in F \} \] and define $v_F := v|_{S_F}$. Let us check that the orbit $O_{v_F}$ of $v_F$ is contained in $\overline{O_v}$. The one-parameter subgroup $\sigma\colon \mathbb{C}^\times \to T$ given by $\sigma(\epsilon) = (\epsilon^{\nu_1},\ldots,\epsilon^{\nu_d})$ satisfies \begin{equation}\label{eq:SEV} \sigma(\epsilon) \cdot v = \rho_M(\sigma(\epsilon)) v = (\epsilon^{\nu \cdot m^{(1)}} v_1,\ldots,\epsilon^{\nu \cdot m^{(n)}} v_n) . \end{equation} It follows that $\lim_{\epsilon\to 0} \sigma(\epsilon) \cdot v = v_F$ and hence $v_F \in \overline{O_v}$. The same reasoning shows that $v_{F} \in \overline{O_{v_{F'}}}$ if $F$ is a face contained in the face $F'$. The following result is well known; see, e.g., \cite[Example~1.3]{Popov-occ}. We sketch a proof for completeness. \begin{proposition} \label{prop:orbit-structure} The map $F\mapsto O_{v_F}$ is a bijection between the set of faces of $C(v)$ and the set of orbits contained in $\overline{O_v}$. Moreover, we have $$ F\subseteq F' \Longleftrightarrow \overline{O_{v_F}}\subseteq\overline{O_{v_{F'}}}\ . $$ \end{proposition} The proof of surjectivity relies on a strengthening of the Hilbert--Mumford criterion (Theorem~\ref{th:HM-crit}).
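The limit computation in~\eqref{eq:SEV} is purely combinatorial and easy to carry out explicitly. The following sketch (with a hypothetical toy weight matrix) returns $\lim_{\epsilon\to 0}\sigma(\epsilon)\cdot v$, or \texttt{None} when the limit does not exist:

```python
def one_psg_limit(weights, nu, v):
    """Limit of sigma(eps) . v as eps -> 0, where sigma is the one-parameter
    subgroup sigma(eps) = (eps^nu_1, ..., eps^nu_d): coordinate j is scaled
    by eps^(nu . m^(j)).  Returns None if some exponent is negative on the
    support of v, in which case the limit does not exist."""
    d, n = len(weights), len(v)
    exps = [sum(nu[i] * weights[i][j] for i in range(d)) for j in range(n)]
    if any(e < 0 for e, vj in zip(exps, v) if vj != 0):
        return None
    # Coordinates with exponent 0 survive; positive exponents go to 0.
    return tuple(v[j] if exps[j] == 0 else 0 for j in range(n))

# Weight matrix with columns m^(1) = (1), m^(2) = (0), m^(3) = (-1).
M = [[1, 0, -1]]
assert one_psg_limit(M, (1,), (1, 1, 0)) == (0, 1, 0)   # the restriction v_F
assert one_psg_limit(M, (1,), (1, 1, 1)) is None        # exponent -1 on supp
```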
Recall that this states that if we consider a closed subset $S$ that is stable under the group action and intersects the orbit closure of some point $v$, then there is a one-parameter subgroup that will drive $v$ to a point in $S$ in the limit. However, a subtle point is that this requires $S$ to be closed. In general, orbits are not closed, so a point $w$ could be in the orbit closure of a point $v$, but the orbit of $w$ may not be closed. In this case, Theorem~\ref{th:HM-crit} does not apply to $S=O_w$, and indeed the orbit of~$w$ need not be reachable from~$v$ by a limit of a one-parameter subgroup. The following theorem shows that for torus actions such a phenomenon does not happen. This crucial fact will also prove useful for us algorithmically in Section~\ref{sec:occ}. \begin{theorem}[\cite{kraft}, Kapitel~III.2.2]\label{thm:hmtori} Let $\rho\colon T \rightarrow \GL(V)$ be a rational representation. Suppose $v,w \in V$ are such that $w\in\overline{O_v}$. Then there exists a one-parameter subgroup $\sigma\colon\mathbb{C}^\times\rightarrow T$ such that \[ \lim_{\epsilon\to 0}\sigma(\epsilon) \cdot v \in O_w. \] \end{theorem} Before we prove Proposition~\ref{prop:orbit-structure}, we briefly discuss the structure of one-parameter subgroups. For each $\nu \in \mathbb{Z}^d$, we define a one-parameter subgroup of $T$, namely $\sigma\colon \mathbb{C}^\times \rightarrow T$ defined by $\sigma(\epsilon) = (\epsilon^{\nu_1},\ldots,\epsilon^{\nu_d})$. Any one-parameter subgroup of $T$ is of this form. This gives an identification of abelian groups $\mathbb{Z}^d \cong \mathcal{Y}(T)$, where $\mathcal{Y}(T)$ denotes the collection of all one-parameter subgroups of $T$. We leave the proof of the following well-known lemma to the reader.
\begin{lemma}\label{lem:tori-1psg} Let $\sigma\colon \mathbb{C}^\times \rightarrow T$ be a one-parameter subgroup, so $\sigma(\epsilon) = (\epsilon^{\nu_1},\ldots,\epsilon^{\nu_d})$ for some~$\nu\in\mathbb{Z}^d$, and let $v\in \mathbb{C}^n$. \begin{enumerate} \item The limit $\lim_{\epsilon \to 0} \sigma(\epsilon) \cdot v$ exists if and only if $m^{(j)} \cdot \nu \geq 0$ for all $j \in \supp(v)$. \item If the limit exists, then $\lim_{\epsilon \to 0} \sigma(\epsilon) \cdot v = v|_S$, where $S = \{j \in \supp(v) \ | \ m^{(j)} \cdot \nu = 0\}$. \end{enumerate} \end{lemma} \begin{proof}[Proof of Proposition~\ref{prop:orbit-structure}] We have already verified that $O_{v_F}$ is an orbit contained in $\overline{O_v}$, hence $F \mapsto O_{v_F}$ is well-defined as a map from the set of faces of~$C(v)$ to the set of orbits contained in~$\overline{O_v}$. To see that it is injective, note that $F$ is the cone generated by the weights $m^{(j)}$ with $j \in S_F = \supp(v_F)$. For surjectivity, let~$O_w$ be an orbit contained in $\overline{O_v}$ and $\sigma\colon\mathbb{C}^\times\rightarrow T$ be a one-parameter subgroup as in Theorem~\ref{thm:hmtori}. There is $\nu\in\mathbb{Z}^d$ such that $\sigma(\epsilon) = (\epsilon^{\nu_1},\ldots,\epsilon^{\nu_d})$. By Lemma~\ref{lem:tori-1psg}, the existence of $\lim_{\epsilon\to 0}\sigma(\epsilon) \cdot v$ means that $\nu\cdot m^{(j)} \ge 0$ for all $j\in \supp(v)$. In other words, $C(v)$ is contained in the halfspace $\{ y\in\mathbb{R}^d \mid \nu \cdot y \ge 0 \}$. Moreover, the limit equals $v_F$, where~$F$ is the face $F := \{ y\in C(v) \mid \nu \cdot y = 0 \}$ of $C(v)$. Therefore, $v_F \in O_w$, hence $O_{v_F}=O_w$, and we have shown surjectivity. In order to show the remaining equivalence, recall that we argued below \eqref{eq:SEV} that if $F \subseteq F'$ then~$v_F \in \overline{O_{v_{F'}}}$. The preceding argument also implies the converse.
\end{proof} As an immediate consequence of Proposition~\ref{prop:orbit-structure}, we get the following result, which not only reproves Corollary~\ref{cor:null cone} but also characterizes the closed orbit in an orbit closure. For this, define \[ \widetilde{v} := v|_{S_{L(v)}} = v|_{\esupp(v)}. \] \begin{corollary}\label{cor:CO} The orbit $O_{\tilde v}$ corresponding to the lineality space $L(v)$ is contained in every orbit closure contained in $\overline{O_v}$. Therefore, it is the unique closed orbit contained in $\overline{O_v}$. In particular, the orbit $O_v$ is closed if and only if $C(v) = L(v)$, i.e., $C(v)$ equals its linear span. Moreover, $v$ is in the null cone if and only if $\esupp(v) = \emptyset$. \end{corollary} \section{Generating Laurent polynomials and rational invariants}\label{sec:gen-inv} In this section, we discuss the computation of suitable rational invariants, which is the heart of our algorithms, and the main novelty of this paper. As explained in the introduction, the starting point is the simple observation that two orbits can only be equal when they have the same support (Lemma~\ref{lem:supp-dim}). But once we restrict to vectors of fixed support, it is natural to consider a larger class of invariants, namely Laurent polynomials, which are polynomials that can also have negative exponents. In Section~\ref{se:lattice} we will see that the invariant Laurent polynomials for a given support naturally form a lattice that can be computed from the weight matrix. This allows us to give an efficient algorithm for computing small sets of generators. As a consequence, we can also efficiently compute a system of generating rational invariants. For the rest of this section, we fix an $n$-dimensional representation $\rho_M \colon T \rightarrow \GL_n(\mathbb{C})$ of the torus~$T=(\mathbb{C}^\times)^d$ given by a weight matrix $M \in \Mat_{d,n}(\mathbb{Z})$ with columns $m^{(j)}$ for $j\in[n]$.
\subsection{Invariant Laurent polynomials}\label{se:lattice} For $S \subseteq [n]$, consider the set of vectors with support~$S$, that is, the variety \begin{equation}\label{eq:XS} X_S = \{ v \in \mathbb{C}^n \;|\; \supp(v) = S \} = \{ v \in \mathbb{C}^n \;|\; v_j \neq 0 \text{ if and only if } j \in S \}. \end{equation} The ring of regular functions on~$X_S$, denoted~$\mathbb{C}[X_S]$, is naturally identified with the ring of Laurent polynomials in variables $\{x_j\}_{j\in S}$. That is, \[ \mathbb{C}[X_S] = \mathbb{C}[x_j, x_j^{-1} \;|\; j \in S]. \] We observe that $\rho_M$ restricts to an action of $T$ on $X_S$, and induces an action on $\mathbb{C}[X_S]$. The proposition below shows that the algebra~$\mathbb{C}[X_S]^T$ of \emph{invariant Laurent polynomials} can be succinctly described in terms of the lattice \begin{equation}\label{def:L} L_S = \Bigl\{ c \in \mathbb{Z}^S \;|\; \sum_{j \in S} c_j m^{(j)} = 0 \Bigr\} = \ker(M_S) \cap \mathbb{Z}^{|S|}, \end{equation} where $\mathbb{Z}^S := \{ c \in \mathbb{Z}^n \;|\; c_j = 0 \text{ for all } j \not\in S \} \cong \mathbb{Z}^{|S|}$, and $M_S$ denotes the submatrix of the weight matrix~$M$, obtained by removing all columns except those labeled by~$S$. \begin{proposition}\label{prop:inv-Laurent} \begin{enumerate} \item Let $c\in\mathbb{Z}^S$. A Laurent monomial $x^c=\prod_{j \in S} x_j^{c_j}$ is invariant if and only if~$c \in L_S$. \item The algebra of invariant Laurent polynomials $\mathbb{C}[X_S]^T$ is spanned as a vector space by the invariant Laurent monomials. \item If $\{c^{(1)}, c^{(2)},\dots,c^{(r)}\}$ is a lattice basis of~$L_S$, then $\mathbb{C}[X_S]^T$ is generated as an algebra by the invariant Laurent monomials $\smash{\{x^{\pm c^{(1)}},\dots,x^{\pm c^{(r)}}\}}$. \end{enumerate} \end{proposition} \begin{proof} The first two parts are shown using an argument similar to the proof of Proposition~\ref{pro:3.1}. The third statement is an immediate consequence.
\end{proof} It is instructive to compare this with the discussion below Proposition~\ref{pro:3.1}, where we saw that the invariant polynomials are similarly described by the \emph{semigroup} of nonnegative vectors in the kernel of the weight matrix. By working with vectors of fixed support, we instead obtain a natural lattice structure, which simplifies the situation considerably. For example, the lattice~$L_S$ and hence the algebra of invariant Laurent polynomials~$\mathbb{C}[X_S]^T$ have at most~$|S|\leq n$ generators -- in stark contrast to the situation for invariant polynomials. We now discuss how to compute lattice bases as in Proposition~\ref{prop:inv-Laurent}. It is well known that every integer matrix $M$ can be diagonalized by multiplying from left and right with unimodular matrices. This is known as the \emph{Smith normal form}~\cite{smith}. The Smith normal form can be computed in polynomial time~\cite{snf}. We record these facts in the following theorem. \begin{theorem}[Smith normal form]\label{thm:smith} Let $M\in\Mat_{d,n}(\mathbb{Z})$. Then, there exist unimodular matrices $U\in\Mat_{d,d}(\mathbb{Z}), W\in\Mat_{n,n}(\mathbb{Z})$ such that \[ UMW=\begin{bmatrix} \alpha_1 & 0 & 0 & & \dots & & 0\\ 0 & \alpha_2 & 0 & & \dots & & 0\\ 0 & 0 & \ddots & & & & 0\\ & & & \alpha_r & & & \vdots\\ \vdots &\vdots & & & 0 & & \\ & & & & &\ddots & \\ 0 & 0 & 0 & \dots & & & 0 \end{bmatrix} \] and the diagonal elements satisfy $\alpha_i\mid \alpha_{i+1}$ for $i=1,2,\dots,r-1$, where $r$ equals the rank of $M$. The matrix $UMW$ is unique and called the Smith normal form of $M$. Moreover, if the bit-lengths of the entries of $M$ are bounded by $b$, then the matrices $U$, $W$, and~$UMW$ can be computed in $\poly(d,n,b)$-time. \end{theorem} Using the Smith normal form it is easy to compute a basis of the lattice~$L_S$. We state this in the following algorithm and corollary. 
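Before stating the algorithm formally, here is a minimal pure-Python sketch of the underlying computation (our own illustrative code, not part of the formal development). It diagonalizes an integer matrix by elementary row and column operations in the style of the Smith normal form, but tracks only the column operations in $W$: row operations do not change the kernel, and a basis of the integer kernel can be read off from the columns of $W$ sitting over zero columns of the diagonalized matrix.

```python
def integer_kernel_basis(M):
    """Basis of the lattice {c in Z^n : M c = 0} for an integer matrix M,
    via Smith-style diagonalization with tracked column operations."""
    d, n = len(M), len(M[0])
    A = [row[:] for row in M]
    W = [[int(i == j) for j in range(n)] for i in range(n)]

    def swap_cols(a, b):
        for i in range(d):
            A[i][a], A[i][b] = A[i][b], A[i][a]
        for i in range(n):
            W[i][a], W[i][b] = W[i][b], W[i][a]

    def add_col(dst, src, c):  # column dst += c * column src (in A and W)
        for i in range(d):
            A[i][dst] += c * A[i][src]
        for i in range(n):
            W[i][dst] += c * W[i][src]

    for t in range(min(d, n)):
        piv = next(((i, j) for j in range(t, n) for i in range(t, d)
                    if A[i][j] != 0), None)
        if piv is None:
            break
        A[t], A[piv[0]] = A[piv[0]], A[t]
        swap_cols(t, piv[1])
        while True:  # Euclid-style clearing of row t and column t
            done = True
            for j in range(t + 1, n):
                if A[t][j] != 0:
                    add_col(j, t, -(A[t][j] // A[t][t]))
                    if A[t][j] != 0:   # nonzero remainder: it becomes the pivot
                        swap_cols(t, j)
                        done = False
            for i in range(t + 1, d):
                if A[i][t] != 0:
                    q = A[i][t] // A[t][t]
                    A[i] = [x - q * y for x, y in zip(A[i], A[t])]
                    if A[i][t] != 0:
                        A[t], A[i] = A[i], A[t]
                        done = False
            if done:
                break
    # kernel basis: columns of W whose image column in A is zero
    return [[W[i][j] for i in range(n)] for j in range(n)
            if all(A[i][j] == 0 for i in range(d))]
```

Since $W$ is built from unimodular column operations, the returned vectors are a basis of the full integer kernel lattice, not merely of a finite-index sublattice.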
\begin{algorithm}\label{algo-lattice} \emph{Computation of a basis of the lattice of invariant Laurent monomials:} \begin{description} \item [Input] $M \in \Mat_{d,n}(\mathbb{Z})$ and $S\subseteq[n]$. \item [Step 1] Compute the submatrix~$M_S$ of $M$ obtained by deleting all columns except those in~$S$. \item [Step 2] Compute the Smith normal form $UM_SW$ of $M_S$ (as in Theorem~\ref{thm:smith}). \item [Step 3] Return $\{w^{(r+1)}, w^{(r+2)}, \dots, w^{(|S|)}\}$, where $w^{(j)}$ denotes the $j^{th}$ column of $W$ and $r$ denotes the rank of~$M_S$. \end{description} \end{algorithm} \begin{corollary}\label{cor:smith} Let $M\in\Mat_{d, n}(\mathbb{Z})$ and $S \subseteq [n]$, and suppose the bit-lengths of the entries of~$M$ are bounded by~$b$. Then Algorithm~\ref{algo-lattice} computes a basis for the lattice $L_S$ defined in \eqref{def:L} in $\poly(d,n,b)$-time. In particular, each $w^{(j)}$ has bit-length $\poly(d,n,b)$. \end{corollary} Alternatively, one can use lattice algorithms; we refer the interested reader to \cite[Corollary~5.4.10]{GLS-book}. \begin{remark}\label{rmk:easy-circuit} It is easy to see that given an exponent vector $c = (c_1,\dots,c_n) \in \mathbb{Z}_{\ge 0}^n$, where the bit-lengths of the $c_i$s are bounded by $b$, an arithmetic circuit computing the monomial $x^c$ of size $\poly(n,b)$ can be constructed in $\poly(n,b)$-time. Similarly, if $c\in \mathbb{Z}^n$, an arithmetic circuit with division computing the Laurent monomial $x^c$ can be constructed in $\poly(n,b)$-time. \end{remark} \begin{proof}[Proof of Theorem~\ref{th:laurent}] This follows from Proposition~\ref{prop:inv-Laurent}, Corollary~\ref{cor:smith}, and Remark~\ref{rmk:easy-circuit}. \end{proof} \subsection{Rational invariants}\label{se:rational} In the remainder of this section we will discuss rational invariants. For $V = \mathbb{C}^n$, recall that $\mathbb{C}[V] = \mathbb{C}[x_1,\dots,x_n]$ is the polynomial ring in~$n$ variables.
Let $\mathbb{C}(V) = \mathbb{C}(x_1,\dots,x_n)$ denote the field of rational functions (the fraction field of $\mathbb{C}[V]$). In other words, any element in $\mathbb{C}(V)$ is a ratio of two polynomials. The action of $T$ on $\mathbb{C}[V]$ extends to $\mathbb{C}(V)$. Then $\mathbb{C}(V)^T$ is the field of \emph{rational invariants}. Clearly, any invariant Laurent polynomial is a rational invariant, but the converse need not be the case. Nevertheless, we can show that the invariant Laurent polynomials in all variables (that is, for support $S=[n]$) generate the rational invariants as a field. \begin{proposition}\label{pro:rat} Let $A := \mathbb{C}[X_{[n]}]^T = \mathbb{C}[x_1,x_1^{-1},\dots,x_n,x_n^{-1}]^T$ denote the algebra of invariant Laurent polynomials, and let $F := \mathbb{C}(x_1,\dots,x_n)^T$ denote the field of rational invariants. Then, $A$~generates $F$ as a field, i.e., the field of fractions of $A$ is $F$. \end{proposition} \begin{proof} Let $f \in F^\times$ and write $f = \frac p q$, where $p,q \in \mathbb{C}[x_1,\dots,x_n]$ have no common factors. Since $f$ is invariant, we have for any~$t \in T$ that \begin{align*} \frac{t \cdot p}{t \cdot q} = t \cdot f = f = \frac pq. \end{align*} Accordingly, $t \cdot p = \alpha(t) p$ and $t \cdot q = \alpha(t) q$ for some $\alpha(t) \in \mathbb{C}^\times$. Thus, $p$ and $q$ span one-dimensional representations. This in turn implies that $\alpha\colon T \to \mathbb{C}^\times$ is a character, as discussed in Section~\ref{subsec:rep inv tori}, and further that $p$ (and also $q$) is a sum of monomials with the same weight, i.e., $p = \sum_e p_e x^e$ such that $t \cdot x^e = \alpha(t) x^e$ for $p_e\neq0$. In particular, $f_e := \frac q {x^e}$ is an \emph{invariant} Laurent polynomial whenever $p_e\ne 0$, and we can write \begin{align*} f = \frac p q = \sum_e p_e \frac{x^e}q = \sum_e p_e \frac1{f_e}, \end{align*} exhibiting $f$ as an element of the field of fractions of~$A$, which concludes the proof.
\end{proof} As a direct consequence, any system of generating invariant Laurent polynomials (as an algebra) also serves as a system of generating rational invariants (as a field extension of $\mathbb{C}$). Thus we obtain: \begin{proof}[Proof of Corollary~\ref{cor:1.3}] This follows from Theorem~\ref{th:laurent} (with $S=[n]$) and Proposition~\ref{pro:rat}. \end{proof} \section{Orbit equality problem}\label{sec:oe} In this section, we will give a polynomial time algorithm for the orbit equality problem. Given two points, the strategy is to compute a small collection of invariant Laurent monomials (using the result of Section~\ref{sec:gen-inv}) whose evaluations at the two given points will determine whether the two points are in the same orbit. Efficiently testing whether two Laurent monomials evaluate to the same value requires an additional idea: this has been studied in the literature, and we briefly sketch in Section~\ref{subsec:moneq} how to do it. As before, we fix an $n$-dimensional representation $\rho_M \colon T \rightarrow \GL_n(\mathbb{C})$ of the torus~$T=(\mathbb{C}^\times)^d$ given by a weight matrix $M \in \Mat_{d,n}(\mathbb{Z})$ with columns $m^{(j)}$ for $j\in[n]$. In general, invariants can only decide orbit closure intersection, not orbit equality. However, the crucial point is that in the varieties~\eqref{eq:XS} consisting of vectors of fixed support any $T$-orbit is closed. \begin{proposition}\label{prop:X orbits closed} Let $S \subseteq [n]$, $X_S$ be the variety defined in~\eqref{eq:XS}, and $v \in X_S$. Then the orbit $O_v$ is a closed subset of~$X_S$. \end{proposition} \begin{proof} By Lemma~\ref{lem:supp-dim}~(3) we have $O_v=\overline{O_v}\cap X_S$, which implies that the orbits are closed in $X_S$. \end{proof} Orbit equality in~$V$ can always be reduced to orbit equality in some~$X_S$, since equality of supports is a necessary condition (Lemma~\ref{lem:supp-dim}~(1)).
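As a toy illustration of the resulting test (our own sketch, with the lattice basis supplied as input rather than recomputed): two vectors of equal support lie in the same orbit exactly when every basis monomial takes the same value on both.

```python
from fractions import Fraction

def same_orbit(v, w, lattice_basis):
    """Toy orbit-equality test on a fixed support: v and w lie in the same
    T-orbit iff their supports agree and every Laurent monomial x^c, with c
    ranging over a basis of the lattice L_S, takes the same value on both."""
    supp_v = {j for j, x in enumerate(v) if x != 0}
    supp_w = {j for j, x in enumerate(w) if x != 0}
    if supp_v != supp_w:
        return False
    def ev(u, c):  # evaluate x^c at u, exactly over Q
        r = Fraction(1)
        for j in supp_v:
            r *= Fraction(u[j]) ** c[j]
        return r
    return all(ev(v, c) == ev(w, c) for c in lattice_basis)
```

For the scaling action of $\mathbb{C}^\times$ on $\mathbb{C}^2$ with weights $(1,-1)$, the lattice is spanned by $(1,1)$, and the test reduces to comparing $v_1 v_2$ with $w_1 w_2$. (In the paper's setting the evaluations are never expanded like this; Section~\ref{subsec:moneq} explains how to compare them efficiently.)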
The importance of the above result is that orbit equality and orbit closure intersection are equivalent in $X_S$. Together with Theorem~\ref{thm:Mumford} we obtain the following result. \begin{corollary}\label{cor:orb sup} Suppose $\supp(v) = \supp(w) = S$. Then, $O_v \neq O_w$ if and only if there is an invariant Laurent monomial $f = \prod_{j \in S} x_j^{c_j}$ such that $f(v) \neq f(w)$. \end{corollary} Thus, we obtain the following algorithm for the orbit equality problem. \begin{algorithm}\label{algo-oe} \emph{Deciding orbit equality:} \begin{description} \item [Input] $M \in \Mat_{d,n}(\mathbb{Z})$ and $v,w \in \mathbb{Q}(i)^n$. \item [Step 1] Check if $\supp(v) = \supp(w)$. If not, $O_v \neq O_w$, so we can stop. \item [Step 2] Use Algorithm~\ref{algo-lattice} to compute a lattice basis~$\mathcal B$ for the lattice~$L_S$ defined in~\eqref{def:L}, where $S := \supp(v) = \supp(w)$. \item [Step 3] For each $e \in \mathcal{B}$, we check if $v^e = w^e$ (as described in Section~\ref{subsec:moneq} below). \\ If they are all equal, then~$O_v = O_w$. Else, $O_v \neq O_w$. \end{description} \end{algorithm} \begin{proof} [Proof of Theorem~\ref{thm:main}, part (1)] The correctness of Algorithm~\ref{algo-oe} follows from Proposition~\ref{prop:inv-Laurent} and Corollary~\ref{cor:orb sup}. We now analyze its runtime. Clearly, the first step can be implemented efficiently. For the second step, we can appeal to Corollary~\ref{cor:smith}. For step~3, we first observe that, again by Corollary~\ref{cor:smith}, the exponents~$e$ have bit-length $\poly(d,n,b)$. Then Proposition~\ref{prop:monomial} below shows that this step can also be implemented in time $\poly(d,n,b)$. \end{proof} \subsection{Laurent monomial equivalence}\label{subsec:moneq} We now discuss how to test if a Laurent monomial $x^e$ evaluates to the same value at two points $v$ and $w$.
In our context, where each component $e_j$ of the exponent vector $e = (e_1,\dots,e_n)$ has poly-sized bit-lengths, it is unreasonable to evaluate the Laurent monomials explicitly, because the answer may very well require exponentially large bit-length. Yet, it is possible to check if $v^e = w^e$ efficiently. We describe a simple algorithm based on g.c.d.'s, which has appeared before (see, for example, \cite{ESY14}) in the case where the entries of $v$ and $w$ are in $\mathbb{Z}$ (or equivalently $\mathbb{Q}$). The result is much older; for example, it follows from the results in~\cite{BDS}, as mentioned in~\cite{Ge93}, which gives a generalization to number fields.\footnote{In particular Ge's result~\cite{Ge93} implies that Theorem~\ref{thm:main} extends to the case where the entries of $v$ and $w$ are taken from some algebraic number field.} Here we present a short self-contained proof and then follow up with the rather simple extension to Gaussian rationals. \begin{lemma} Suppose $a_1,\dots,a_k,b_1,\dots,b_r \in\mathbb{Q}$ and $e_1,\dots,e_k, f_1,\dots,f_r \in \mathbb{Z}$ have bit-lengths at most $s$. Then, in $\poly(k,r,s)$-time, we can decide if $\prod_{i=1}^k a_i^{e_i} = \prod_{j=1}^r b_j^{f_j}$. \end{lemma} \begin{proof} By clearing denominators, we may assume that $a_1,\dots,a_k,b_1,\dots,b_r$ are integers. By moving terms to the other side, we can further assume w.l.o.g.\ that all $e_i, f_j \geq 0$. Pick some $a_l$ and some~$b_m$ that are not coprime. Then, consider $d = {\rm gcd}(a_l, b_m) \geq 2$. W.l.o.g., we can assume $e_l \geq f_m$. Then, test if $ d^{e_l - f_m} (a'_l)^{e_l}\prod_{i \neq l} a_i^{e_i} = (b'_m)^{f_m} \prod_{j \neq m} b_j^{f_j}$, where $a'_l = a_l/d$ and $b'_m = b_m/d$. This is an iterative procedure which stops when each $a_i$ is coprime to each $b_j$. At that point, unless all $a_i$'s and $b_j$'s are equal to $1$, the two sides cannot be equal. The question is how long such an iterative procedure takes.
Consider the quantity $P:= |a_1\cdots a_k b_1 \cdots b_r|$. After applying one step, the resulting quantity $P'$ satisfies $P' \le P/d \le P/2$: the step removes a factor $d$ from each of $a_l$ and $b_m$, and the term $d^{e_l - f_m}$ reintroduces the base~$d$ at most once. Since initially $P$ is of size $2^{\poly(k,r,s)}$, there are at most a polynomial number of iterative steps. Hence, the entire procedure takes $\poly(k,r,s)$-time. \end{proof} An analogous result with the same proof holds for the ring $\mathbb{Z}[i]$ of Gaussian integers and its quotient field $\mathbb{Q}(i)$ of Gaussian rationals, using that this ring has unique factorization into irreducible elements. In the following proposition, we assume that a Gaussian rational $a = \alpha + i \beta \in \mathbb{Q}(i)$ is described by giving the encodings of $\alpha$ and $\beta$ in binary. \begin{proposition}\label{prop:monomial} Suppose $a_1,\dots,a_k,b_1,\dots,b_r\in \mathbb{Q}(i)$ and $e_1,\dots,e_k, f_1,\dots,f_r \in \mathbb{Z}$ all have bit-lengths bounded by $s$. Then, in $\poly(k,r,s)$-time, we can decide if $\prod_{i=1}^k a_i^{e_i} = \prod_{j=1}^r b_j^{f_j}$. \end{proposition} \begin{remark}\label{rem:float} For computational purposes, in many instances, numbers are described by their `floating point' representations. The floating point description of a Gaussian rational $a \in \mathbb{Q}(i)$ is described by giving the binary encodings of $\alpha,\beta \in \mathbb{Q}$ and $p \in \mathbb{Z}$ such that $a = (\alpha+ i \beta) 2^p$. If we assume that $a_1,\dots,a_k,b_1,\dots,b_r\in \mathbb{Q}(i)$ in the proposition above are given by their floating point descriptions, we can still decide monomial equivalence in polynomial time.
Indeed, if we write each $a_j = (\alpha_j + i \beta_j) 2^{p_j}$ and $b_j = (\gamma_j + i \delta_j) 2^{q_j}$, then deciding whether $\prod_{j=1}^k a_j^{e_j} = \prod_{j=1}^r b_j^{f_j}$ simplifies to deciding if \[ \left(\prod_{j=1}^k (\alpha_j + i \beta_j)^{e_j} \right) \cdot 2^{\sum_{j=1}^k e_j p_j} = \left(\prod_{j=1}^r (\gamma_j + i \delta_j)^{f_j}\right) \cdot 2^{\sum_{j=1}^r f_j q_j}, \] which can again be interpreted as an instance of Proposition~\ref{prop:monomial} and hence can be checked in polynomial time. Since all other computations in our algorithms only involve supports of vectors, it follows that all results in this paper generalize to this input model, as claimed in footnote~\ref{foot:float}. An even easier special case arises for numbers of the form $a=2^p$, with $p\in\mathbb{Q}$ specified by its binary encoding, as in the perfect matching application discussed in Section~\ref{subsec:applications}. Indeed, if $a_j = 2^{p_j}$ for $j\in[k]$ and $b_j = 2^{q_j}$ for $j\in[r]$, then deciding whether $\prod_{j=1}^k a_j^{e_j} = \prod_{j=1}^r b_j^{f_j}$ simply amounts to verifying whether $\sum_{j=1}^k p_j e_j = \sum_{j=1}^r q_j f_j$, which is clearly possible in polynomial time. \end{remark} \section{Orbit closure intersection and explicit separating invariants}\label{sec:oci-to-oe} In this section, we discuss how to solve the orbit closure intersection problem in polynomial time by efficiently reducing it to the orbit equality problem. The problem of orbit closure intersection has a manifestly analytic point of view, but also an algebraic point of view by Mumford's theorem, Theorem~\ref{thm:Mumford}. In other words, when orbit closures of two points do not intersect, there is an invariant polynomial that takes different values on both points, serving as a ``witness'' to the fact that the orbit closures do not intersect.
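The gcd-based cancellation from the lemma above can be sketched in a few lines of Python (a toy version of ours for nonzero integer bases; rational and Gaussian-rational inputs reduce to this case as described in the proofs):

```python
from math import gcd

def equal_products(A, B):
    """Decide prod x^e over A == prod y^f over B, for lists of pairs
    (base, exp) with integer base >= 2 and exp >= 1, never expanding powers."""
    while A and B:
        hit = next(((i, j, gcd(x, y)) for i, (x, _) in enumerate(A)
                    for j, (y, _) in enumerate(B) if gcd(x, y) > 1), None)
        if hit is None:
            return False       # coprime leftovers on both sides
        i, j, d = hit
        x, e = A.pop(i)
        y, f = B.pop(j)
        if x // d > 1: A.append((x // d, e))
        if y // d > 1: B.append((y // d, f))
        if e > f:   A.append((d, e - f))   # d^(e-f) stays on the left
        elif f > e: B.append((d, f - e))
    return not A and not B

def monomials_equal(a, e, b, f):
    """prod a_i^{e_i} ?= prod b_j^{f_j} for nonzero integers a_i, b_j
    and arbitrary integer exponents."""
    sL = sR = 1
    A, B = [], []
    for x, k in zip(a, e):       # track the sign, move negative exponents over
        if x < 0 and k % 2:
            sL = -sL
        if abs(x) != 1 and k != 0:
            (A if k > 0 else B).append((abs(x), abs(k)))
    for y, k in zip(b, f):
        if y < 0 and k % 2:
            sR = -sR
        if abs(y) != 1 and k != 0:
            (B if k > 0 else A).append((abs(y), abs(k)))
    return sL == sR and equal_products(A, B)
```

Each cancellation step divides the product of the bases by at least $2$, mirroring the potential-function argument in the proof.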
Accordingly, given two vectors whose orbit closures do not intersect, we also explain how to efficiently construct an arithmetic circuit which computes an invariant monomial separating the two vectors. \subsection{Reduction to orbit equality} The key idea is the following. Recall from Theorem~\ref{th:uniqueclosedorbit} that any orbit closure~$\overline{O_v}$ contains $O_{\tilde v}$ as its unique closed orbit, and that two orbit closures intersect if and only if they contain the same closed orbit. In Corollary~\ref{cor:CO}, we showed that the unique closed orbit has a concrete polyhedral characterization: we can take $\tilde v = v|_{\esupp(v)}$, the restriction of the vector~$v$ to its essential support. Accordingly, the map~$v\mapsto \widetilde{v}$ provides a reduction of the orbit closure intersection problem for~$\rho_M$ to the orbit equality problem for~$\rho_M$. The following lemma shows that the essential support (and hence the reduction map) can be computed in polynomial time by using linear programming. \begin{lemma}\label{lem-esupp-poly} Let $M \in\Mat_{d,n}(\mathbb{Z})$ define an $n$-dimensional representation of $T = (\mathbb{C}^\times)^d$, and let~$v\in \mathbb{C}^n$. For $k\in\supp(v)$, we have $k \in \esupp(v)$ if and only if there is a non-negative linear combination $\sum_{j \in \supp(v)} c_j m^{(j)} = 0$ such that $c_k > 0$. If the bit-lengths of the entries of~$M$ are bounded by $b$, the latter can be decided in $\poly(d,n,b)$-time by using linear programming. \end{lemma} \begin{proof} The characterization follows from Proposition~\ref{pro:3.1} and Lemma~\ref{lem-esupp}. It amounts to a basic decisional problem of linear programming, which is well known to be solvable in polynomial time, see~\cite{GLS-book}. \end{proof} The above proof also shows that a nonvanishing invariant monomial as in Lemma~\ref{lem-esupp-poly} can be computed in polynomial time. As explained above, we arrive at the following algorithm and results.
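For small instances one can also test the condition of Lemma~\ref{lem-esupp-poly} exactly without an LP solver. The brute-force sketch below (our own illustration, exponential rather than polynomial time) uses Carath\'eodory's theorem for cones: $-m^{(k)}$ is a nonnegative combination of the other weights in the support if and only if it is a nonnegative combination of at most $d$ linearly independent ones, which can be checked by exact Gaussian elimination over $\mathbb{Q}$.

```python
from fractions import Fraction
from itertools import combinations

def _nonneg_solution(cols, b):
    """Solve sum_j x_j * cols[j] = b exactly over Q for linearly independent
    columns; return the unique solution if it exists and is >= 0, else None."""
    d, t = len(b), len(cols)
    aug = [[Fraction(cols[j][i]) for j in range(t)] + [Fraction(b[i])]
           for i in range(d)]
    r = 0
    for c in range(t):
        pr = next((i for i in range(r, d) if aug[i][c] != 0), None)
        if pr is None:
            return None  # dependent columns: a smaller subset already covers this
        aug[r], aug[pr] = aug[pr], aug[r]
        p = aug[r][c]
        aug[r] = [x / p for x in aug[r]]
        for i in range(d):
            if i != r and aug[i][c] != 0:
                q = aug[i][c]
                aug[i] = [x - q * y for x, y in zip(aug[i], aug[r])]
        r += 1
    if any(aug[i][t] != 0 for i in range(r, d)):
        return None  # inconsistent system
    x = [aug[i][t] for i in range(t)]
    return x if all(xi >= 0 for xi in x) else None

def is_essential(M, supp, k):
    """k in esupp(v) iff -m^(k) lies in the cone of the other weights in supp."""
    d = len(M)
    col = lambda j: [M[i][j] for i in range(d)]
    target = [-x for x in col(k)]
    if all(x == 0 for x in target):
        return True  # m^(k) = 0: take c = e_k
    others = [j for j in supp if j != k]
    for size in range(1, min(d, len(others)) + 1):
        for T in combinations(others, size):
            if _nonneg_solution([col(j) for j in T], target) is not None:
                return True
    return False
```

The equivalence used here is that $\sum_j c_j m^{(j)} = 0$ with $c \ge 0$ and $c_k > 0$ can be normalized to $c_k = 1$, i.e.\ to $-m^{(k)} = \sum_{j \ne k} c_j m^{(j)}$.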
\begin{algorithm}\label{algo:oci-to-oe} \emph{Reduction of orbit closure intersection to orbit equality:} \begin{description} \item[Input] $M \in \Mat_{d,n}(\mathbb{Z}), v,w \in \mathbb{Q}(i)^n$. \item[Step 1] Compute $\esupp(v)$ in the following way: For each $k\in\supp(v)$, use linear programming to determine if there is a non-negative linear combination $\sum_{j \in \supp(v)} c_j m^{(j)} = 0$ with~$c_k>0$. The set $\esupp(v)$ consists of all $k\in\supp(v)$ for which this is the case. \item[Step 2] Compute $\esupp(w)$ in the same way. \item[Step 3] Return $\tilde v = v|_{\esupp(v)}$ and $\tilde w = w|_{\esupp(w)}$. \end{description} \end{algorithm} \begin{corollary} \label{cor:red-oci-oe} Let $M \in \Mat_{d,n}(\mathbb{Z})$ describe an $n$-dimensional representation of $T = (\mathbb{C}^\times)^d$. Further, let $v,w \in \mathbb{Q}(i)^n$ and assume the bit-lengths of the entries of $M,v$, and $w$ are bounded by~$b$. Then there is a $\poly(d,n,b)$-time reduction that reduces the problem of deciding $\overline{O_v} \cap \overline{O_w} \ne \emptyset$ to the problem of deciding if $O_{\widetilde{v}} = O_{\widetilde{w}}$, where $\widetilde{v}$ and $\widetilde{w}$ have bit-lengths bounded by $b$. \end{corollary} \begin{proof}[Proof of Theorem~\ref{thm:main}, part $(2)$] This follows from part~(1), combined with Corollary~\ref{cor:red-oci-oe}. \end{proof} \subsection{Explicit separating invariant} For torus actions, our reduction of orbit closure intersection to orbit equality will give us an invariant Laurent monomial that takes different values on the two points. But a separating invariant Laurent monomial itself does \emph{not} serve as a witness (at least not naively, one needs further properties about the support of the Laurent monomial for it to serve as a witness). 
We now prove Corollary~\ref{cor:sep-invar}, which asserts that given two vectors we can nevertheless efficiently construct an arithmetic circuit which computes an invariant \emph{monomial} separating them. \begin{proof} [Proof of Corollary~\ref{cor:sep-invar}] We already noted that, by linear programming, we can compute the essential supports of $v$ and $w$ in $\poly(d,n,b)$-time. We distinguish two cases. \textbf{Case 1:} $\esupp(v) \ne \esupp(w)$ \noindent Suppose $k \in \esupp(v) \setminus \esupp(w)$ without loss of generality. By Lemma~\ref{lem-esupp} there is an invariant monomial $f = \prod_{j \in \supp(v)} x_j^{c_j}$ such that $c_k > 0$. Let us verify that $f(v) \ne f(w)$. We clearly have $f(v)\ne 0$. On the other hand, $f(w)=f(\widetilde{w})=0$, since $\widetilde{w} \in \overline{O_w}$, but $k$ is not contained in $\supp(\widetilde{w})=\esupp(w)$. So we indeed have $f(v) \ne f(w)$. In addition, we can find $(c_1,\dots,c_n)$ in $\poly(d,n,b)$-time by linear programming (Lemma~\ref{lem-esupp-poly}), so we can construct an arithmetic circuit for $f$ in $\poly(d,n,b)$-time by Remark~\ref{rmk:easy-circuit}. \medskip {\bf Case 2:} $\esupp(v) =\esupp(w)$ \noindent Let $S := \esupp(v) =\esupp(w)$. We assume that $\overline{O_v} \cap \overline{O_w} = \emptyset$, which implies $O_{\widetilde{v}} \cap O_{\widetilde{w}} = \emptyset$. Thus, by Corollary~\ref{cor:orb sup}, there is an invariant Laurent monomial $f = x^e$ with the property that~$f(\widetilde v) \neq f(\widetilde w)$, and hence $f(v) \neq f(w)$. Just like in Algorithm~\ref{algo-oe}, we can in $\poly(d,n,b)$-time compute such an exponent vector~$e\in\mathbb{Z}^n$, with bit-length of the $e_i$ bounded above by $\poly(d,n,b)$. Our goal is to produce an invariant monomial that separates $v$ and $w$, so we need to modify~$f$ so as to get rid of the negative exponents. In the process, we must ensure that the bit-length of the circuit does not explode. 
By Lemma~\ref{lem-esupp}, for each $k \in S$, there exists $c^{(k)}\in\mathbb{Z}_{\geq 0}^n$ such that $\smash{\sum_{j \in \supp(v)} c^{(k)}_j m^{(j)} = 0}$ and~$\smash{c^{(k)}_k > 0}$. We can compute $\smash{c^{(k)}}$ in $\poly(d,n,b)$-time by linear programming. Let $m_k = \smash{x^{c^{(k)}}}$ denote the corresponding invariant monomial. Put $S_-:= \{j \in S\ | \ e_j < 0\}$. If~$m_j(v) \neq m_j(w)$ for some $j \in S_-$, then $m_j$ is an explicit separating invariant monomial and we are done by Remark~\ref{rmk:easy-circuit}. Assume now $m_j(v) = m_j(w)$ for all $j \in S_-$. Then $\widetilde{f} := x^d := f \cdot \prod_{j\in S_-} m_j^{-e_j}$ is a Laurent monomial that separates~$v$ and~$w$: the correction factors $\prod_{j\in S_-} m_j^{-e_j}$ evaluate to the same nonzero number at $v$ and $w$, so $\widetilde{f}(v) \neq \widetilde{f}(w)$ follows from $f(v) \neq f(w)$. We verify now that the exponent vector $d$ has non-negative entries. By construction, we have for $k\in S_-$, \begin{align*} d_k &= e_k + (-e_k) c^{(k)}_k + \sum_{j \in S_-, j\ne k} (-e_j)\cdot c^{(j)}_k \geq 0 , \intertext{since $e_k<0$ and $e_j<0$ for all $j\in S_-$, while $\smash{c^{(k)}_k\geq1}$, and $\smash{c^{(j)}_k \geq 0}$. For $k\in[n]\setminus S_-$, we have} d_k &= e_k + \sum_{j \in S_-} (-e_j)\cdot c^{(j)}_k \geq 0, \end{align*} since $e_k \geq 0$ for $k \in S \setminus S_-$ and $e_k = 0$ for $k\not\in S$, while $e_j<0$ for $j \in S_-$. Altogether, we have shown that indeed all components of $d$ are non-negative. We finally note that $d$ can be computed in polynomial time, in particular, it has bit-length $\poly(d,n,b)$. So by Remark~\ref{rmk:easy-circuit}, we can construct an arithmetic circuit of size $\poly(d,n,b)$ that computes $\widetilde{f}$ in $\poly(d,n,b)$-time. \end{proof} \section{Orbit closure containment}\label{sec:occ} In this section, we discuss how to solve the orbit closure containment problem in polynomial time by efficiently reducing it to the orbit equality problem. The notion of orbit closure containment is in general quite tricky to capture.
Polynomial invariants do not suffice, since two orbit closures can intersect (hence all polynomial invariants agree) with neither being contained in the other -- this is precisely the difference between the orbit closure intersection and the orbit closure containment problem. Instead, the key idea for the reduction comes from one-parameter subgroups. We already discussed in Section~\ref{sec:rep-tori} that if $w \in \overline{O_v}$ then~$O_w$ can be reached from~$v$ by a one-parameter subgroup. The following lemma gives a concrete polyhedral description of the relevant one-parameter subgroups. \begin{lemma} \label{lem:tori-occ-justify} Let $M \in\Mat_{d,n}(\mathbb{Z})$ define an $n$-dimensional representation of $T = (\mathbb{C}^\times)^d$, and let~$v,w \in \mathbb{C}^n$. Then $w \in \overline{O_v}$ if and only if there exists a one-parameter subgroup $\sigma\colon \mathbb{C}^\times \to T$, so $\sigma(\epsilon) = (\epsilon^{\nu_1},\dots,\epsilon^{\nu_d})$ for some $\nu\in\mathbb{Z}^d$, such that \begin{enumerate} \item $ \{j \in \supp(v) \ | \ m^{(j)} \cdot \nu = 0\} = \supp(w)$ and $m^{(k)} \cdot \nu > 0$ for all $k \in \supp(v) \setminus \supp(w)$; \item $O_{(v|_{\supp(w)})} = O_w$. \end{enumerate} \end{lemma} \begin{proof} If $w \in \overline{O_v}$, then by Theorem~\ref{thm:hmtori}, we know that there is a one-parameter subgroup~$\sigma$ such that $\lim_{t \to 0} \sigma(t) v \in O_w$. In particular this implies that $\lim_{t \to 0} \sigma(t) v$ has the same support as~$w$ and has the same orbit as $w$. Now, both (1) and (2) follow from Lemma~\ref{lem:tori-1psg}. For the converse, note that, again by Lemma~\ref{lem:tori-1psg}, (1) implies that $\lim_{t \to 0} \sigma(t) v = v|_{\supp(w)} \in \overline{O_v}$, hence it follows that $O_w = O_{(v|_{\supp(w)})} \subseteq \overline{O_v}$ by (2). \end{proof} Now, we can give our algorithm to test if $w$ is in the orbit closure of $v$.
\begin{algorithm} \label{algo:occ} \emph{Orbit closure containment:} \begin{description} \item[Input] $M \in \Mat_{d,n}(\mathbb{Z})$ and $v,w \in \mathbb{Q}(i)^n$. \item[Step 1] Check if $\supp(w) \subseteq \supp(v)$. If not, $w \notin \overline{O_v}$, so we can stop. \item[Step 2] Using linear programming, determine whether there exists a solution $\nu\in\mathbb{R}^d$ to the collection of linear equalities $m^{(j)} \cdot \nu = 0$ for each $j \in \supp(w)$ and linear inequalities $m^{(k)} \cdot \nu > 0$ for all $k \in \supp(v) \setminus \supp(w)$. If there is no solution, then $w \notin \overline{O_v}$, so we can stop. \item[Step 3] Use Algorithm~\ref{algo-oe} to check whether $O_{(v|_{\supp(w)})} = O_w$. If yes, then $w \in \overline{O_v}$. Else, it is not. \end{description} \end{algorithm} \begin{proof} [Proof of Theorem~\ref{thm:main}, part (3)] The correctness of Algorithm~\ref{algo:occ} follows from Lemma~\ref{lem:tori-occ-justify}. Indeed, condition~(1) in the lemma is satisfied if and only if the algorithm passes the first two steps, and then condition~(2) is tested in the last step. We still need to argue about the efficiency of the algorithm. Clearly, step~1 can be done in linear time. Step~2 can be done in $\poly(d,n,b)$-time by linear programming. Step~3 appeals to the orbit equality problem, which by part~(1) of the theorem can be done in $\poly(d,n,b)$-time. \end{proof} \section{Orbit problems for compact tori}\label{sec:compact} So far, we have studied orbit problems for algebraic tori, that is, groups of the form $T = (\mathbb{C}^\times)^d$. In this section we consider the groups~$K = (\Ss^1)^d$, where $\Ss^1 = \{z \in \mathbb{C}^\times \ | \ |z| = 1\}$. Such groups are often called \emph{compact tori}. Indeed, any commutative compact connected Lie group is of this form. Besides the fundamental algorithmic interest in this setting, it is also important in applications.
For example, in physics, symmetries are often given by compact group actions, such as compact tori~\cite{guillemin1990symplectic,audin2012torus}. We give further complexity-theoretic motivation below. The compactness implies that orbits are closed and so the three problems in Problem~\ref{prob:main} coincide. In this section, we show how to solve the orbit equality problem for a compact torus by reducing it to orbit equality for the corresponding algebraic torus. Subsequently, we give an alternative reduction that works not only for tori but in fact for any connected reductive group such as~$\SL_n$. To start, we note that it is known that any (continuous) finite-dimensional representation of $K = (\Ss^1)^d$ extends to a representation of $T = (\mathbb{C}^\times)^d$ \cite{weyl:39}. In particular, representations can be specified as before by a weight matrix $M \in\Mat_{d,n}(\mathbb{Z})$. Then we have the following result: \begin{proposition} Let $M \in\Mat_{d,n}(\mathbb{Z})$ define an $n$-dimensional representation of $T = (\mathbb{C}^\times)^d$ and $K = (\Ss^1)^d$. Let~$v,w \in \mathbb{C}^n$. Then, $O_{K,v} = O_{K,w}$ if and only if $O_{T,v} = O_{T,w}$ and $|v_j| = |w_j|$ for all $j$. \end{proposition} \begin{proof} Since $K \subseteq T$, it is clear that if $O_{K,v} = O_{K,w}$, then $O_{T,v} = O_{T,w}$ and $|v_j| = |w_j|$ for all $j$. Conversely, suppose $O_{T,v} = O_{T,w}$ and $|v_j| = |w_j|$ for all $j$. Then, there is some $t \in T$ such that~$t \cdot v = w$. Write $t = (t_1,\dots,t_d)$ with each $t_i = r_i \cdot e^{\mathrm i \theta_i}$, where $r_i > 0$ and $\theta_i \in \mathbb{R}$. Taking absolute values in $t \cdot v = w$ and using $|v_j| = |w_j|$ shows that $\prod_{i=1}^d r_i^{m^{(j)}_i} = 1$ whenever $v_j \neq 0$; hence the radial part of $t$ acts trivially on $v$, and we must have $(e^{\mathrm i \theta_1},\dots,e^{\mathrm i \theta_d}) \cdot v = w$. Thus $v$ and $w$ are in the same $K$-orbit. \end{proof} \begin{proof} [Proof of Corollary~\ref{cor:main}] We are given $M \in \Mat_{d,n}(\mathbb{Z})$ and $v,w \in \mathbb{Q}(i)^n$.
By the above proposition, we need to check if $O_{T,v} = O_{T,w}$ and if $|v_j| = |w_j|$ for all $j$. The former can be done in polynomial time by Theorem~\ref{thm:main} and the latter can clearly be done in polynomial time. \end{proof} Before proceeding we give some further context and motivation. Algorithms for the null cone membership problem (given a rational representation $\rho:G \rightarrow \GL(V)$ of a reductive group $G$ and $v \in V$, decide if $0 \in \overline{O_v}$) based on optimization methods have emerged in recent years. They take advantage of the fact that $0 \in \overline{O_v}$ if and only if one can drive the norm to~$0$ along the orbit~$O_v$. This can be viewed as an optimization problem where one tries to minimize (infimize) the norm along the orbit. While this is not a convex optimization problem, it is geodesically convex by the Kempf-Ness theory~\cite{KN:78}, which allows for many of the ideas to be modified appropriately. As far as the orbit closure intersection problem is concerned, the natural extension of this idea is as follows: Given $v,w \in V$, first use an optimization algorithm to approximately find a point in each orbit closure with minimal norm; let us call these points $\check{v}$, $\check{w}$. Then, appealing to the Kempf-Ness theory again, we have that $\overline{O_v} \cap \overline{O_w} \neq \emptyset$ if and only if $\check{v}$ and $\check{w}$ are in the same orbit for a maximal compact subgroup~$K$ of $G$. In this way, the orbit closure intersection problem for~$G$ can be reduced to the orbit equality problem for the maximal compact subgroup~$K$. In fact, for the so-called left-right action of $\SL_n \times \SL_n$ on matrix-tuples, this idea was carried out successfully to obtain a polynomial-time algorithm for orbit closure intersection~\cite{AZGLOW}. This further emphasizes the importance of the orbit equality problem for compact Lie group actions. 
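Returning to the compact-torus test for a moment: the modulus check $|v_j| = |w_j|$ above can be carried out exactly over $\mathbb{Q}$ by comparing squared absolute values, which avoids irrational square roots. A small sketch of ours, with Gaussian rationals represented as (real, imaginary) pairs:

```python
from fractions import Fraction

def same_abs(v, w):
    """Exact test |v_j| == |w_j| for all j, for Gaussian rationals given as
    (real, imag) pairs: compare |z|^2 = re^2 + im^2, avoiding square roots."""
    def norm2(z):
        re, im = z
        return Fraction(re) ** 2 + Fraction(im) ** 2
    return len(v) == len(w) and all(norm2(x) == norm2(y) for x, y in zip(v, w))
```

Combined with the algebraic-torus orbit equality test, this decides $K$-orbit equality as in the proposition above.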
Here we report on an interesting phenomenon, which provides a kind of converse to the strategy explained above. Namely, for any action of a connected reductive group~$G$, the orbit equality problem for the maximal compact subgroup $K\subseteq G$ is equivalent to an orbit intersection (or equality) problem for a related action of~$G$! As this result is not crucial to the rest of the paper and requires significantly different background, we will be brief in our explanations. We denote by~$V^*$ the contragredient or dual representation of $V$. \begin{theorem}\label{thm:compacttogeneral} Let $\rho\colon G \to \GL(V)$ be a finite-dimensional representation of a connected reductive group~$G$. Let $K$ be a maximal compact subgroup of~$G$, and $\langle\cdot,\cdot\rangle$ be a $K$-invariant Hermitian inner product on~$V$. For $v \in V$, let $\widehat{v} \in V^*$ be defined by~$\widehat{v}(w) := \langle v,w\rangle$. Then, for $v,w \in V$, the following are equivalent: \begin{enumerate} \item $O_{K,v} = O_{K,w}$; \item $O_{G,(v,\widehat{v})} = O_{G,(w,\widehat{w})}$ in $V \oplus V^*$; \item The $G$-orbit closures of $(v,\widehat{v})$ and $(w,\widehat{w})$ in $V \oplus V^*$ intersect. \end{enumerate} \end{theorem} \begin{proof} Let $\Lie(G) \subseteq L(V)$ denote the Lie algebra of $G$. For any linear action of~$G$ on a vector space~$U$, we get an induced action of~$\Lie(G)$ on~$U$. Given a $K$-invariant Hermitian form $\langle\cdot,\cdot\rangle$ on~$U$, we define the so-called moment map $\mu_U \colon U \rightarrow \Lie(G)^*$ by the formula $\mu_U(u)(X) = \left<u, X \cdot u\right>$ for~$u \in U$ and~$X \in \Lie(G)$ (up to a scalar which is not relevant for our purposes). The celebrated Kempf-Ness theorem says that if $\mu_U(u) = 0$ then the $G$-orbit of $u$ is closed. Moreover, it asserts that if $u'\in U$ is another point such that $\mu_U(u') = 0$, then $O_{G,u} = O_{G,u'}$ if and only if $O_{K,u} = O_{K,u'}$. 
Applying the preceding to $(v,\widehat{v})$ and $(w,\widehat{w})$ in $U = V \oplus V^*$, a simple calculation shows that the moment map vanishes at either point, so the two orbits are closed. This shows the equivalence between~(2) and~(3). The equivalence between~(1) and~(2) follows immediately from the second part of the Kempf-Ness theorem, using that $k \widehat v = \widehat{k v}$ for any $k\in K$, since $K$ acts unitarily. \end{proof} \section{Concluding remarks, future directions, and open problems} \label{sec:future} To better understand the context of our results and their potential impact on future progress, we briefly discuss some results in the literature and then suggest further research directions. At a very high level, we feel that the following aspects are highlighted by this work: the relative power and interplay between algebraic and analytical algorithms, the importance of understanding commutative actions as a stepping stone towards understanding general actions, the role of rational (as opposed to polynomial) invariants, and the subtlety of ``no go'' results, which evidently can be surpassed. There has been an explosion of interest over the last decade in understanding invariant theory from a complexity-theoretic perspective (we survey some of this literature in the introduction). This rapidly developing field can be seen as an endeavor to classify computational problems in invariant theory according to their difficulty, finding efficient algorithms whenever possible, as well as connecting to applications in mathematics, physics, optimization, and statistics. Invariant theory in the setting of a rational representation of a connected reductive group is the most relevant for complexity theory. The commutative case of tori is an important special case. Despite the well-understood structural simplicity of the corresponding invariant theory, even basic algorithmic problems are non-trivial.
Null cone membership, arguably the most basic problem, has long been known to have an efficient algorithm, as it reduces to linear programming, which non-trivially admits polynomial-time algorithms. The problems of orbit equality, orbit closure intersection, and orbit closure containment have polynomial-time algorithms, as shown in this paper. We stress that while efficient algorithms for linear programming are ``continuous'' or ``analytic'' in nature, our algorithms use a combination of \emph{both} analytic \emph{and} algebraic techniques. The more general problem of succinct circuits for generating polynomial invariants, which is one of the basic challenges proposed in~\cite{GCTV}, has recently been shown to be impossible under natural complexity assumptions~\cite{GIMOWW}. Yet, in this paper, we bypass this negative result, and see that \emph{rational} invariants for torus actions can be captured in a computationally efficient way without the need for succinct circuits. It is an interesting open problem to determine if there are succinct circuits for separating invariants or null cone definers, see \cite[Problems~1.14, 1.15]{GIMOWW}. The invariant theory of non-commutative groups has a different flavor from, and is far more complex than, the commutative case, see, for example,~\cite{Humphreys:75}. Many interesting problems in computational invariant theory remain open in the non-commutative case. We list a few. First and foremost, the results in this paper motivate the investigation of the computational efficiency of systems of generating \emph{rational} invariants. Further, it is natural to wonder if rational invariants can help capture orbit closure intersection and orbit equality for non-commutative group actions. Another open problem is to give \emph{any} polynomial-time algorithm for orbit closure intersection (and the subproblem of null cone membership). An intermediate challenge is to ascertain whether null cone membership is in NP $\cap$ co-NP.
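To illustrate how rational invariants capture torus orbits, here is a minimal sketch; the weight matrix and the kernel vector are hand-picked illustrations. Laurent monomials $u^c$ with $Mc = 0$ are $T$-invariant, so they take equal values on points of the same $T$-orbit (with full support):

```python
import numpy as np

# Weights of a C^x action on C^3; c = (1, 1, 1) spans the kernel of M,
# so u1 * u2 * u3 is a rational (here even polynomial) invariant.
M = np.array([[1, 1, -2]])
c = np.array([1, 1, 1])
assert (M @ c == 0).all()

def invariant(u, c):
    """Evaluate the Laurent monomial u^c = prod_j u_j^{c_j}."""
    return np.prod(u.astype(complex) ** c)

v = np.array([2.0, 3.0, 5.0])
t = 1.7                      # a torus element in C^x
w = (t ** M[0]) * v          # the action t . v scales coordinate j by t^{m_j}
assert np.isclose(invariant(v, c), invariant(w, c))
```

The invariance holds because $t \cdot v$ multiplies the monomial by $t^{\langle M c, \cdot\rangle} = t^0 = 1$; this is the torus-specific mechanism that the paper's algorithms exploit efficiently.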
Note that in~\cite{BILPS:20} it is shown that the general orbit closure containment problem is NP-hard. \subsection*{Acknowledgements} Peter B\"urgisser and M. Levent Do\u{g}an were supported by the ERC under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 787840); Visu Makam was supported by the University of Melbourne and by NSF grant CCF-1900460. Michael Walter acknowledges NWO Veni grant no.~680-47-459 and NWO grant OCENW.KLEIN.267. Avi Wigderson was supported by NSF grant CCF-1900460. \bibliographystyle{alpha}
\section*{Broader Impacts} As the primary application of study for these experiments lies within the medical space, both positive and negative applications come to mind. Large, high-resolution datasets, both for training and evaluation, primarily benefit those in developed and wealthy communities. Our research -- through its focus on developing methods robust to degradation -- provides an opportunity to improve prediction methods in lower-fidelity data regimes; i.e., methods designed with downsampling and data reduction in mind alleviate the need for larger and more complete datasets. The present study examines EEGs and eye states, but the approach could easily extend to other ailments. More accurate models for predicting seizures, for example, could greatly benefit those privileged enough to share characteristics with the population used to train the model in the first place. But what of those not sampled? As \cite{Hall-rep} has observed, Blacks, women, and the elderly have historically been excluded from clinical trial research. Such arrangements can lead to what \cite{Veinot-intentions} has referred to as intervention-generated inequalities (IGI), a social arrangement in which one group's outcomes improve while others' do not. On top of their original ailments, the groups left out are burdened with continued medical involvement and the associated costs (e.g. additional tests, transportation, childcare, and missed opportunities). We offer the following suggestions for those in the medical industry hoping to combat some of this inequity: 1. Insist on multiple representative datasets, including those from underrepresented groups, with incentivization where appropriate. 2. Identify and assist in eliminating barriers to involvement in data collection or diagnostics. \section{Downsampling} \subsection{Time series downsampling} We consider a downsampling to be a selection of a subsequence of points, or a smaller set of points that summarizes the timeseries.
We assume the timeseries has $n+2$ points, and construct a downsample of $m+2$ points. \textbf{Naive Bucketing}: Select the first and last points of the timeseries; cover the rest of the points with $m$ even-width intervals (up to integer rounding). We call this a \textit{bucketing}. Consider a sequence of sequences: \begin{equation*} \left\lbrace\left\lbrace x_0 \right\rbrace, \left\lbrace x_1,\ldots,x_k\right\rbrace, \left\lbrace x_{k+1},\ldots,x_{2k}\right\rbrace, \ldots, \left\lbrace x_{(m-1)k+1},\ldots,x_{mk}\right\rbrace, \left\lbrace x_{n+1} \right\rbrace\right\rbrace \end{equation*} and for simplicity, call the sub-sequences $\left\lbrace{b_i}\right\rbrace_{0\leq i\leq m+1}$ such that $b_0 = \left\lbrace x_0 \right\rbrace$, $b_{m+1} = \left\lbrace x_{n+1} \right\rbrace$, and $b_j = \left\lbrace x_{(j-1)k + 1},\ldots, x_{jk} \right\rbrace$; we refer to these as \emph{buckets}. \textbf{Dropout:} For each bucket, select the first point in the subsequence. \textbf{Bucket Averaging:} For each bucket in a naive bucketing, average the $x$ and $y$ coordinates and take this as the representative point; take also the first and last points. For a bucket $b_j$, we write the 2-dimensional average of the points contained within as $\mu_j$. For convenience of notation, we write the elements of $b_j$ as $\left\lbrace x^j_i\right\rbrace_i$. \textbf{Largest-Triangle Three-Bucket Downsample (LTTB):} We compute the subsequence via the optimization problem: \begin{equation*} \textbf{compute } l_{i} = \argmax_{x^i_j} \mathbf{\triangle}\left({l_{i-1}, x^i_j, \mu_{i+1}}\right)\text{ such that } l_{0} = x_0 \text{ and } l_{m+1} = x_{n+1}. \end{equation*} The sequence $\left\lbrace l_0,\ldots, l_{m+1}\right\rbrace$ is the \emph{largest triangle three bucket downsample}. For more details and intuition around this construction we recommend the original paper.
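The LTTB selection above can be sketched in Python as follows (a minimal implementation under the bucketing conventions above, not the authors' code; the even bucket edges are an illustrative choice):

```python
import numpy as np

def lttb(x, y, m):
    """Largest-Triangle Three-Bucket downsample of (x, y) to m+2 points."""
    n = len(x) - 2
    # m+1 bucket boundaries over the interior indices 1..n
    edges = np.linspace(1, n + 1, m + 1).astype(int)
    out_x, out_y = [x[0]], [y[0]]
    prev = 0  # index of the previously selected point
    for i in range(m):
        lo, hi = edges[i], edges[i + 1]
        # average of the *next* bucket (the fixed last point for the final bucket)
        if i + 1 < m:
            nlo, nhi = edges[i + 1], edges[i + 2]
            ax, ay = x[nlo:nhi].mean(), y[nlo:nhi].mean()
        else:
            ax, ay = x[-1], y[-1]
        # pick the bucket point maximizing the triangle area with prev and the average
        areas = np.abs((x[prev] - ax) * (y[lo:hi] - y[prev])
                       - (x[prev] - x[lo:hi]) * (ay - y[prev]))
        best = lo + int(np.argmax(areas))
        out_x.append(x[best])
        out_y.append(y[best])
        prev = best
    out_x.append(x[-1])
    out_y.append(y[-1])
    return np.array(out_x), np.array(out_y)
```

Note the recursive structure discussed in the remark below the definition: each selection is conditioned only on the previously selected point and the next bucket's average.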
\begin{remark} This is computed via a recursive optimization process iterating through the buckets; a non-recursive formulation that finds the global optimum is also possible. The distinction between the two solutions is that in the recursive solution each optimization is conditioned on the previous bucket, whereas the global solution conditions on all buckets simultaneously. \end{remark} \subsection{Dynamic downsampling} In the above bucketing strategies, points in all regions of the time series are given equal weight in the downsample. Often, however, the lagging variance of a time series is not uniform across the time domain. One might expect that regions of higher variance warrant higher resolution in the downsample, while regions of low variance require lower resolution. A simple implementation of this idea (inspired by \cite{sveinn}) is demonstrated in Algorithm~\ref{alg:dynamic bucketing} (implementation included in the Appendix). Downsampling methods are then applied to this bucketing of the timeseries.
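A minimal Python sketch of the split-then-merge procedure of Algorithm~\ref{alg:dynamic bucketing} (not the authors' implementation; the OLS fit is over the within-bucket index, an assumption of this sketch):

```python
import numpy as np

def sse_of_ols(bucket):
    """Sum of squared residuals of a degree-1 least-squares fit to one bucket."""
    if len(bucket) < 3:
        return 0.0  # a line fits two or fewer points exactly
    t = np.arange(len(bucket), dtype=float)
    coeffs = np.polyfit(t, bucket, 1)
    resid = bucket - np.polyval(coeffs, t)
    return float(resid @ resid)

def dynamic_buckets(buckets, P):
    """Split the worst-fit bucket P times, then merge the best-fit
    adjacent pair P times, returning the rebucketing."""
    buckets = [list(b) for b in buckets]
    for _ in range(P):  # split phase
        z = max(range(len(buckets)), key=lambda j: sse_of_ols(buckets[j]))
        b = buckets.pop(z)
        k = len(b) // 2
        buckets[z:z] = [b[:k], b[k:]]
    for _ in range(P):  # merge phase
        s = [sse_of_ols(b) for b in buckets]
        a = min(range(len(buckets) - 1), key=lambda j: s[j] + s[j + 1])
        buckets[a:a + 2] = [buckets[a] + buckets[a + 1]]
    return buckets
```

The total number of buckets and the multiset of points are preserved: $P$ splits add $P$ buckets and $P$ merges remove them again, only redistributing resolution toward high-variance regions.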
\begin{algorithm} \caption{Variance weighted dynamic bucketing \label{alg:dynamic bucketing}} \begin{algorithmic}[1] \Require{$\mathcal{B}$ a naive bucketing, and $P$ an iteration count} \Statex \Function{DynamicBuckets}{$\mathcal{B}$, $P$} \Let{$[b_j]$}{$\mathcal{B}$} \For{$i \gets 1 \textrm{ to } P$} \For{$j \gets 1 \textrm{ to } m$} \Let{$S(b_j)$}{$SSE(OLS(b_j))$} \EndFor \Let{$z$}{$\argmax_j(S(b_j))$} \Let{$b^l_z$}{$\left\lbrace x^z_1, \ldots, x^z_{\floor{k/2}}\right\rbrace$} \Let{$b^r_z$}{$\left\lbrace x^z_{\floor{k/2}+1}, \ldots, x^z_k\right\rbrace$} \Let{$\mathcal{B}$}{$\mathcal{B}\setminus \left\lbrace b_z\right\rbrace \cup \left\lbrace b^l_z, b^r_z \right\rbrace$} \EndFor \For{$i \gets 1 \textrm{ to } P$} \For{$j \gets 1 \textrm{ to } m+P$} \Let{$S(b_j)$}{$SSE(OLS(b_j))$} \EndFor \Let{$a$}{$\argmin_a(S(b_a)+S(b_{a+1}))$} \Let{$b^*_a$}{$\left\lbrace x^a_1, \ldots, x^a_{k}\right\rbrace \cup \left\lbrace x^{a+1}_{1}, \ldots, x^{a+1}_k\right\rbrace$} \Let{$\mathcal{B}$}{$\mathcal{B}\setminus \left\lbrace b_a, b_{a+1} \right\rbrace \cup \left\lbrace b^*_a \right\rbrace$} \EndFor \State \Return{$\mathcal{B}$} \EndFunction \end{algorithmic} \end{algorithm} \begin{remark} Rather than serially splitting and then combining buckets to arrive at the rebucketing, it is natural to ask how alternating these operations affects the result. The authors carried out several simulations of this alternating technique and found that convergence to a `stable' bucketing took place much more quickly, but produced far worse results with respect to total SSE.
\end{remark} \subsection{Dynamic downsampling results} \begin{figure}[ht] \includegraphics[width=0.9\textwidth]{paper_materials/figures/eeg_eye_results_dynamic_v2.png} \caption{Experiment results for dynamic bucketing downsampling methods.} \label{fig:perf_dynamic} \end{figure} \section{Outcomes} We have explored a collection of experiments in training and testing CNNs built on EEG data to predict if a patient's eyes are open or closed. These experiments primarily sought to establish performance comparisons while varying the feature engineering choice, the chunk size, and the downsampling resolution used. We established a baseline performance using a raw time series feature set and reproduced the performance of \cite{Umeda2017} to compare to this baseline. We saw that this baseline actually outperforms the TDA feature-engineered experiment as reported. This suggests that for a task of this type the TDA approach is not SOTA, but it may hold value in other regimes, or under more specific hyperparameter tuning. Building on these results, we explored the novel geometric feature engineering method of persistent eigenvalues of the Laplacian. This method also outperforms TDA, but does not significantly outperform the raw time series experiment. We showed the performance of these networks under the strain of reduced data samples and of resolution reduction. The impact on performance as we iterate through the parameter space is relatively smaller for the eigenvalue features, but they perform worse than the raw time series when a significant difference is detectable. Finally, we provided a testbed for further iteration on these sorts of prediction tasks, and opened up a discussion around sensor resolution, sample data size, downsampling, feature engineering, and CNNs. The comparison pipelines are easily extensible for further experimentation with this dataset or others.
All of the code for feature engineering and testing is available on GitHub. Variable-resolution training has been employed on ImageNet (\cite{DAWNBench}) to dramatically reduce training time; it is interesting to consider the implications of explicitly controlling downsampling schemes for this ansatz. At a larger scope, we leave open the question of ``multi-resolution'' sensor networks and their impact on geometric feature engineering and downsampling.
\section{Graph Laplacians} Spectral graph theory is an integral facet of graph theory (\cite{chung1997spectral}) and one of the key objects of this theory is the Laplacian matrix of a graph, as well as its eigenvalues. We assume all graphs are undirected and simple. For a graph $G$, let $A$ and $D$ be the adjacency matrix and the degree matrix of $G$ respectively. The \emph{Laplacian} of $G$ is defined to be $L = D - A$.
The \emph{normalized Laplacian} of $G$ is then defined to be $\tilde{L} = I - D^{-1/2} A D^{-1/2}$. Denote the eigenvalues (or \emph{spectrum}) of $\tilde{L}$ by $0=\lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_{n-1}$. There is a well known connection between the spectrum of $\tilde{L}$ and properties of $G$: \begin{theorem*}[Lemma 1.7, \cite{chung1997spectral}]\label{thm:graphspec} For a graph $G$ with $n$ vertices, we have that \begin{enumerate} \item $0 \leq \lambda_i \leq 2$, with $\lambda_0 = 0$. Further, $\lambda_{n-1} = 2$ if and only if a connected component of $G$ is bipartite and nontrivial. \item If $G$ is connected, then $\lambda_1 > 0$. If $\lambda_i = 0$ and $\lambda_{i+1} \neq 0$, then $G$ has exactly $i+1$ connected components. \end{enumerate} \end{theorem*} \textbf{Persistent Laplacian Eigenvalues for Time Series Analysis}\label{sec:laplacians_ts} Denote by $\tilde{L}_{\epsilon}(X)$ the normalized Laplacian of $G_{\epsilon}(X)$. Define $\hat{\lambda}_{\epsilon}(X) = [ \lambda_{\epsilon}(X)_0, \lambda_{\epsilon}(X)_1, \dots, \lambda_{\epsilon}(X)_{n-1} ]$ to be the vector of eigenvalues of $\tilde{L}_{\epsilon}(X)$, in ascending order: $0 = \lambda_{\epsilon}(X)_0 \leq \lambda_{\epsilon}(X)_1 \leq \cdots \leq \lambda_{\epsilon}(X)_{n-1} \leq 2$. When the context is understood, we will drop the designation $(X)$ in the above notations, e.g. $G_{\epsilon}$ or $\lambda_{\epsilon,0}$. Let $I$ be an interval, let $\hat{v} = [v_0, v_1, \dots, v_{n-1}]$ be a vector, and define \begin{equation} \textbf{count}_{I}(\hat{v}) : = \#\{ i \mid v_i \in I \}.
\end{equation} For a given interval $[0, r]$ (this will be our range of resolutions) and a finite collection of real numbers $0 = \tau_0 < \tau_1 < \cdots < \tau_k = 2$, define for $\epsilon \in [0, r]$: \begin{equation} \mu_j(\epsilon) := \begin{cases} \textbf{count}_{[\tau_j, \tau_{j + 1})}(\hat{\lambda}_{\epsilon}) & \text{ for } 0 \leq j < k - 1, \\ \textbf{count}_{[\tau_j, \tau_{j + 1}]}(\hat{\lambda}_{\epsilon}) & \text{ for } j = k - 1. \end{cases} \end{equation} \noindent That is, $\mu_j(\epsilon)$ counts the number of eigenvalues of $\tilde{L}_{\epsilon}$ that lie between $\tau_j$ and $\tau_{j + 1}$. Observe that $\textbf{count}_{[0, 0]}(\hat{\lambda}_{\epsilon})$ is equal to the number of connected components of $G_{\epsilon}$. We will view the collection $\{ \mu_j \}_{j=0}^{k-1}$ as $k$ real-valued functions with domain $[0, r]$. Mirroring Sec.~\ref{sec:ph_ts}, we refer to the collection of $\mu_j$'s as \emph{persistent Laplacian eigenvalues}. Given a time series $\{ f(t_i) \}_{i=0}^n$, we form $T^m$ and compute $\{ \mu_j \}_{j=0}^{l-1}$ for some choice of $\tau_0, \dots, \tau_l$. \subsection{Comments and Caveats} In machine learning tasks, especially those with more complicated models, it is essential to establish a baseline, a set of target metrics, and a comparison pipeline for apples-to-apples evaluation\textemdash in some literature this is called boat-racing, or horse-racing. During our literature review of TDA methods for the EEG prediction dataset, we did not find sufficient comparators, and thus integrated this into our experimentation goals, whence the large volume of experiments considered in this paper. We do not treat the neuroscience topics necessary for a deeper investigation of EEG technology or seizures. We treat the dataset as a mathematical task, and focus instead on the models and methods in machine learning. In particular, as highlighted in \cite{GCN-EEG}, EEG datasets are structural time series, i.e.
physical geometry of data collection correlates with relationships between series. We use only local readings to reduce this covariant structure. Furthermore, train/test splitting can be challenging, as several samples are taken from the same patient. For these, and other, reasons we make no comparison to SOTA networks. \subsection{This work in context} Generally, CNN architectures are optimized via network features like batch size, learning rate, kernels, pooling layers, and the like. Some papers have experimented with resizing to improve training time --- in \cite{DAWNBench}, dynamic resizing with progressive resolution was introduced. Others (\cite{DBLP}) have investigated the robustness of CNN performance under image degradation due to noise, but we have not found examinations of the effects of explicit downsampling algorithms in the literature. For a collection classification task, the encoding of the data itself is also an opportunity for tuning. While common approaches to sequential data sets use RNNs and LSTMs to take advantage of data characteristics like autoregressive features, the encoding methods in this paper need not apply only to sequential collections. We wish to highlight that while time series classification is well studied and has clear and effective baselines (\cite{TSclass}), EEG classifiers are of great value and remain elusive. \subsection*{Overview of model pipelines} \begin{wrapfigure}{r}{0.5\textwidth} \centering \includegraphics[width=.48\textwidth]{paper_materials/figures/raw_time_series_example.png} \caption{Raw time series from a segment of time where the patient's eyes were closed, segmented to 600 time steps, not downsampled.} \label{fig:raw_time_series} \includegraphics[width=.48\textwidth]{paper_materials/figures/eeg_eye_betti_numbers_example.png} \caption{$\beta_j(\epsilon)$ computed for the time series in Fig.
\ref{fig:raw_time_series} as described in Section \ref{sec:ph_ts}.} \label{fig:pers_betti_nums} \includegraphics[width=.48\textwidth]{paper_materials/figures/eeg_eye_persistent_laplacian_example_area.png} \caption{Area plot of $\mu_j(\epsilon)$ for the time series in Fig. \ref{fig:raw_time_series} as described in Section \ref{sec:laplacians_ts}.} \label{fig:pers_lpn_eigs} \end{wrapfigure} \textbf{Raw Time Series Features:} We feed the sequential series values into two kernel layers of one-dimensional convolution and max pooling followed by a fully connected layer. \textbf{Persistent Betti Numbers Features:} We encode each time series with the $k$-step Takens' embedding into $\mathbb{R}^k$. This point cloud's $\epsilon$-neighbor graph generates the Vietoris-Rips filtration up to dimension 3. The rank of the degree-$n$ simplicial homology (the $n$th Betti number) is computed for each $\epsilon$-neighbor complex, and encoded as $n$ discrete $\epsilon$-series. We feed the sequential $\epsilon$-series values --- each on its own channel --- into two kernel layers of one-dimensional convolution and max pooling followed by a fully connected layer. \textbf{Persistent Laplacian Eigenvalue Features:} We encode each time series with the $k$-step Takens' embedding into $\mathbb{R}^k$. This point cloud's $\epsilon$-neighbor graph is collected, and its normalized graph Laplacians are computed. The eigenvalues of these Laplacians are computed and bucketed into a partition of $m$ buckets. The counts of eigenvalues in each bucket are encoded as $m$ discrete $\epsilon$-series. We feed the sequential $\epsilon$-series values --- each on its own channel --- into two kernel layers of one-dimensional convolution and max pooling followed by a fully connected layer. \textbf{Downsampled input data:} In each of the pipelines, we prepend the model pipeline with a downsampling step using one of three downsampling algorithms and several downsampling resolutions.
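The persistent Laplacian eigenvalue pipeline above can be sketched end to end as follows (a minimal illustration, not the paper's code; the embedding step $k$, the radius $\epsilon$, and the thresholds $\tau_j$ are illustrative parameters):

```python
import numpy as np

def takens(series, k):
    """k-step Takens' delay embedding of a scalar series into R^k."""
    return np.array([series[i:i + k] for i in range(len(series) - k + 1)])

def normalized_laplacian_spectrum(points, eps):
    """Sorted spectrum of the normalized Laplacian of the eps-neighbor graph."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    A = ((d2 <= eps ** 2) & ~np.eye(len(points), dtype=bool)).astype(float)
    deg = A.sum(1)
    with np.errstate(divide='ignore'):
        dinv = np.where(deg > 0, deg ** -0.5, 0.0)  # isolated vertices get 0
    L = np.eye(len(points)) - dinv[:, None] * A * dinv[None, :]
    return np.sort(np.linalg.eigvalsh(L))

def mu(series, k, eps, taus):
    """Histogram the spectrum into the bins [tau_j, tau_{j+1})."""
    lam = normalized_laplacian_spectrum(takens(series, k), eps)
    counts, _ = np.histogram(lam, bins=taus)
    return counts
```

Sweeping `eps` over a grid and stacking the resulting `mu` vectors yields the $m$ discrete $\epsilon$-series fed to the CNN channels.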
\textbf{Experimental design:} Our design matrix consists of the three downsampling methods applied to $\left\lbrace 200, 300, 400, 500, 600\right\rbrace$ initial data resolutions, downsampled by steps of $50$. Each of these initial data sets is fed through each of the model pipelines and subsequent CNNs. We use cross-validation and accuracy to evaluate the performance. \section*{Introduction} Topological Data Analysis (TDA) (\cite{computingPH}, \cite{Scopigno04persistencebarcodes}, \cite{Edelsbrunner2000TopologicalPA}, \cite{topo_persis}) has emerged from Algebraic Topology as an approach to the problem of describing the shape of data. In recent years, TDA has gained much attention due to applications in data analysis and machine learning. In particular, persistent homology (\cite{topo_persis}, \cite{Edelsbrunner2000TopologicalPA}) --- frequently in the form of barcodes --- has been leveraged for machine learning purposes in numerous tasks (\cite{Chazal2017AnIT}). Classification tasks seeking to label data collections also frequently utilize the notion of the shape of data (in a latent geometry) via feature engineering. In this paper, we investigate the performance of a one-dimensional CNN on a binary classification task for EEG data, under a collection of feature engineering approaches. While CNN architectures are heavily experimented on, less research has explored models for feature engineering using modern geometric techniques (\cite{gcrn, geom_dl}). We engineer and train networks using three types of features from the time series, investigating the robustness of the classifiers to varying resolution and downsampling. We establish performance results across multiple regimes of EEG resolution and effective resolution, comparing degradation across the feature types. We also compare across differing downsampling methods: a practice common in the signal processing literature but less so in classification of time series.
Persistent homology of the Takens' embedding (c.f. \ref{sec:takens}) comprises the first set of engineered features (\cite{Umeda2017}). Unfortunately, the introduction of that approach lacked a comparison to current state-of-the-art techniques, or even to standard techniques in appropriately related tasks --- we fill that gap amidst our experiments. Our second feature set is a novel geometric method, beyond traditional TDA methods, utilizing the eigenvalues of a sequence of graph Laplacians (c.f. \ref{sec:laplacians_ts}). We benchmark the performance of these two methods against a more modern method for time series classification: a one-dimensional CNN model trained on the time series data itself. We hold the classification network architecture constant to more directly evaluate the effectiveness of these features as estimators. The principal contributions of this work include: \begin{itemize} \item Introducing a new feature engineering technique utilizing latent geometric properties of the time series \item Applying the theory and methods of downsampling to time-series classification pipelines \item Proposing and demonstrating a comparison framework and baseline results for time series classification via varying features and CNN architectures \end{itemize}
\subsection{This work in context} Generally, CNN architectures are optimized via network features like batch size, learning rate, kernels, pooling layers, and the like. Some papers have experimented with resizing to improve training time --- in (\cite{DAWNBench}) they introduced dynamic resizing with progressive resolution. Others (\cite{DBLP}) have investigated the robustness of CNN performance under image degradation due to noise, but we've not found in the literature examinations of explicit downsampling algorithms' effects. For a collection classification task, the encoding of the data itself is also at opportunity for tuning. While common approaches to sequential data sets use RNNs and LSTMs to take advantage of data characteristics like autoregressive features, the encoding methods in this paper need not only apply to sequential collections. We wish to highlight that while time series classification is well studied and has clear and effective baselines (\cite{TSclass}), EEG classifiers are of great value and remain elusive. \subsection{Overview of model pipelines} \begin{figure}[ht] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=.95\textwidth]{paper_materials/figures/raw_time_series_example.png} \caption{Raw time series from a segment of time where the patient's eyes were closed, segmented to 600 time steps, not downsampled.} \label{fig:raw_time_series} \includegraphics[width=.95\textwidth]{paper_materials/figures/eeg_eye_betti_numbers_example.png} \caption{$\beta_j(\epsilon)$ computed for the time series in Fig. \ref{fig:raw_time_series} as described in Section \ref{sec:ph_ts}.} \label{fig:pers_betti_nums} \includegraphics[width=.95\textwidth]{paper_materials/figures/eeg_eye_persistent_laplacian_example_area.png} \caption{Area plot of $\mu_j(\epsilon)$ for the time series in Fig. 
\ref{fig:raw_time_series} as described in Section \ref{sec:laplacians_ts}.} \label{fig:pers_lpn_eigs} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=.95\textwidth]{paper_materials/figures/TallCartoonwEx.png} \label{fig:model_pipelines} \end{minipage} \centering\captionsetup{justification=RaggedLeft} \caption{Model pipelines; steps (1,2): raw time series CNN; steps (1,3,4,5,6): persistent Laplacian eigenvalues; steps (1,3,4,7,8,9): persistent Betti numbers; Ex: $\epsilon$-neighbors graph construction} \end{figure} \textbf{Raw Time Series Features:} We feed the sequential series values into two layers of one-dimensional convolution and max pooling, followed by a fully connected layer. \textbf{Persistent Betti Numbers Features:} We encode each time series with the $k$-step Takens' embedding into $\mathbb{R}^k$. This point cloud's $\epsilon$-neighbor graph (c.f. Figure \ref{fig:model_pipelines} \textbf{Ex.}) generates the Vietoris--Rips filtration up to dimension 3. The rank of the degree-$n$ simplicial homology group (the $n$th Betti number) is computed for each $\epsilon$-neighbor complex, and encoded as $n$ discrete $\epsilon$-series. We feed the sequential $\epsilon$-series values --- each on its own channel --- into two layers of one-dimensional convolution and max pooling, followed by a fully connected layer. \textbf{Persistent Laplacian Eigenvalue Features:} We encode each time series with the $k$-step Takens' embedding into $\mathbb{R}^k$. This point cloud's $\epsilon$-neighbor graph is collected and its normalized graph Laplacian is computed for each $\epsilon$. The eigenvalues of these Laplacians are computed and bucketed into a partition of $m$ buckets. The counts of eigenvalues in each bucket are encoded as $m$ discrete $\epsilon$-series.
We feed the sequential $\epsilon$-series values --- each on its own channel --- into two layers of one-dimensional convolution and max pooling, followed by a fully connected layer. \textbf{Downsampled input data:} In each of the pipelines, we prepend the model pipeline with a downsampling step using one of three downsampling algorithms and several downsampling resolutions. \textbf{Experimental design:} Our design matrix consists of the three downsampling methods applied to $\left\lbrace 200, 300, 400, 500, 600\right\rbrace$ initial data resolutions, downsampled by steps of $50$. Each of these initial data sets is fed through each of the model pipelines and subsequent CNNs. We use cross-validation and accuracy to evaluate the performance. \section{Methods} \subsection{Our Approach} Spectral graph theory is an integral facet of graph theory (\cite{chung1997spectral}), and one of the key objects of this theory is the Laplacian matrix of a graph, together with its eigenvalues. We assume all graphs are undirected and simple. For a graph $G$, let $A$ and $D$ be the adjacency matrix and the degree matrix of $G$ respectively. The \emph{Laplacian} of $G$ is defined to be $L = D - A$. The \emph{normalized Laplacian} of $G$ is then defined to be $\tilde{L} = I - D^{-1/2} A D^{-1/2}.$ Denote the eigenvalues (or \emph{spectrum}) of $\tilde{L}$ by $0=\lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_{n-1}$. Recall: \begin{theorem*}[Lemma 1.7, \cite{chung1997spectral}]\label{thm:graphspec} For a graph $G$ with $n$ vertices, we have that \begin{enumerate} \item $0 \leq \lambda_i \leq 2$, with $\lambda_0 = 0$. Further, $\lambda_{n-1} = 2$ if and only if a connected component of $G$ is bipartite and nontrivial. \item If $G$ is connected, then $\lambda_1 > 0$. If $\lambda_i = 0$ and $\lambda_{i+1} \neq 0$ then $G$ has exactly $i+1$ connected components.
\end{enumerate} \end{theorem*} \textbf{Persistent Laplacian Eigenvalues for Time Series Analysis}\label{sec:laplacians_ts} Denote by $\tilde{L}_{\epsilon}(X)$ the normalized Laplacian of $G_{\epsilon}(X)$. Define $\hat{\lambda}_{\epsilon}(X) = [ \lambda_{\epsilon}(X)_0, \lambda_{\epsilon}(X)_1, \dots, \lambda_{\epsilon}(X)_{n-1} ]$ to be the vector of eigenvalues of $\tilde{L}_{\epsilon}(X)$, in ascending order: $0 = \lambda_{\epsilon}(X)_0 \leq \lambda_{\epsilon}(X)_1 \leq \cdots \leq \lambda_{\epsilon}(X)_{n-1} \leq 2$. When the context is understood, we will drop the designation $(X)$ in the above notations; e.g. $G_{\epsilon}$ or $\lambda_{\epsilon, 0}$. Let $I$ be an interval, $\hat{v} = [v_0, v_1, \dots, v_{n-1}]$ be a vector, and define \begin{equation} \textbf{count}_{I}(\hat{v}) : = \#\{ i \mid v_i \in I \}. \end{equation} For a given interval $[0, r]$ (this will be our range of resolutions), and a finite collection of real numbers $0 = \tau_0 < \tau_1 < \cdots < \tau_k = 2$, define for $\epsilon \in [0, r]$: \begin{equation} \mu_j(\epsilon) := \begin{cases} \textbf{count}_{[\tau_j, \tau_{j + 1})}(\hat{\lambda}_{\epsilon}) & \text{ for } 0 \leq j < k - 1, \\ \textbf{count}_{[\tau_j, \tau_{j + 1}]}(\hat{\lambda}_{\epsilon}) & \text{ for } j = k - 1. \end{cases} \end{equation} \noindent That is, $\mu_j(\epsilon)$ counts the number of eigenvalues of $\tilde{L}_{\epsilon}$ that lie between $\tau_j$ and $\tau_{j + 1}$. Observe that $\textbf{count}_{[0, 0]}(\hat{\lambda}_{\epsilon})$ is equal to the number of connected components of $G_{\epsilon}$. We will view the collection $\{ \mu_j \}$ as a collection of $k$ real-valued functions with domain $[0, r]$. We refer to the collection of $\mu_j$'s as \emph{persistent Laplacian eigenvalues}. Given a time series $\{ f(t_i) \}_{i=0}^n$, we form $T^m$, and compute $\{ \mu_j \}_{j=0}^{k-1}$ for some choice of $\tau_0, \dots, \tau_k$.
\section{TDA for machine learning} \textbf{Takens' Embedding}\label{sec:takens} Given a time series $\{ f(t_i) \}_{i=0}^n$, where $\{t_i \}_{i=0}^n \subset \mathbb{R}$ and $f: \mathbb{R} \rightarrow \mathbb{R}$, the \emph{Takens' embedding} with window size $m$, denoted $T^m$, is the collection of points in $\mathbb{R}^m$ given by \begin{equation*} \left\{ \left[ f(t_0), f(t_1), \dots, f(t_{m-1}) \right], \left[ f(t_1), f(t_2), \dots, f(t_{m}) \right], \dots, \left[ f(t_{n-m}), f(t_{n - m +1}), \dots, f(t_{n}) \right] \right\} \end{equation*} \noindent That is, $T^m$ is the collection of points in $\mathbb{R}^m$ given by taking sliding windows over the time series $f(t_i)$. For volatile series, smooth transformations of this geometry are known to preserve properties of the time series, such as the dimension of the chaotic attractor and the Lyapunov exponents of the dynamics.
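The Takens' embedding above is simply a stack of sliding windows; a minimal sketch (the function name is ours, for illustration):

```python
import numpy as np

def takens_embedding(values, m):
    # Row i of the result is the window [f(t_i), ..., f(t_{i+m-1})],
    # so a series of n+1 samples yields n - m + 2 points in R^m.
    values = np.asarray(values, dtype=float)
    if len(values) < m:
        raise ValueError("series shorter than window size m")
    return np.stack([values[i:i + m] for i in range(len(values) - m + 1)])
```

For example, embedding the series $1, 2, 3, 4, 5$ with $m = 3$ gives the three points $(1,2,3)$, $(2,3,4)$, $(3,4,5)$ in $\mathbb{R}^3$.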
\textbf{$\epsilon$-neighbor graph} Given a finite subset $X \subset \mathbb{R}^m$ with $n$ points, and a real number $\epsilon \geq 0$, we form the $\epsilon$-graph $G_{\epsilon}(X)$ with nodes of $G_{\epsilon}(X)$ indexed by the elements of $X$, and an edge between $v_x$ and $v_y$ for $x, y \in X$ if and only if $\norm{x - y} < \epsilon$. \textbf{Persistent Homology and Time Series Analysis}\label{sec:ph_ts} In (\cite{Umeda2017}), the author calculates persistent Betti numbers (c.f. the appendix \ref{ph} for definitions) of EEG time series signals via the aforementioned persistent Betti number model pipeline before feeding the output into a CNN for an eyes open/closed prediction task. In particular, for a time series $\{ f(t_i) \}_{i=0}^n$, consider its Takens' embedding $T^m$ as an enumerated subset of $\mathbb{R}^m$, then compute $\beta_{k}(\epsilon)$ for all $0 \leq k < m$ and $\epsilon \in [0,r]$. \section{Experiments} \textbf{Data set and classification task} The data used in this work are time series EEG signals provided by the University of Bonn and explored in \cite{eeg_data}. This data set comprises five sets (labeled A--E), each containing 100 single-channel EEG segments of 23.6 seconds in duration, with 4097 observations each. The segments were hand selected from within a continuous multi-channel EEG recording, chosen for absence of artifacts as well as fulfilling a stationarity condition. Set E contains segments of seizure activity, sets C and D are taken from epilepsy patients during an interval where no seizure activity was occurring, and sets A and B are observations from patients without an epilepsy diagnosis. The observations in set A occur during times when the patient's eyes were open, while those in set B occur during times when the patient's eyes were closed. We study the classification task of A vs. B, i.e., eyes open or closed.
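The $\epsilon$-neighbor graph defined earlier, together with a connected-component count (the quantity $\textbf{count}_{[0,0]}(\hat{\lambda}_{\epsilon})$ tracks, and equally $\beta_0$ at this $\epsilon$), can be sketched as follows (helper names are ours, purely illustrative):

```python
import numpy as np

def epsilon_graph(X, eps):
    # Edge set of the epsilon-neighbor graph: {i, j} iff ||x_i - x_j|| < eps.
    X = np.asarray(X, dtype=float)
    n = len(X)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if np.linalg.norm(X[i] - X[j]) < eps]
    return n, edges

def connected_components(n, edges):
    # Union-find with path halving; the component count equals beta_0.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in edges:
        parent[find(i)] = find(j)
    return len({find(v) for v in range(n)})
```

On the points $(0,0)$, $(1,0)$, $(5,0)$ with $\epsilon = 1.5$ only the first pair is joined, giving two components; increasing $\epsilon$ past 4 merges them into one.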
\textbf{CNN architectures} All of the prediction algorithms used in this paper are CNNs, each with two sets of one-dimensional convolution and max pool layers, followed by a fully connected layer to predict the class label. Architecture parameters are vectors representing $\left<\textrm{input},\textrm{channels},\textrm{factor},\textrm{kernel1 size},\textrm{kernel2 size}\right>$; $\left<\textrm{res},1,5,(\textrm{res}/600)*18,2\right>,\left<300,3,7,6,2\right>,\left<300,7,3,6,2\right>$ for raw time-series, Betti numbers, and eigenvalues respectively. \texttt{factor} refers to the multiplicative factor of input channels to output channels in each convolution layer, and \texttt{res} refers to the resolution to which the time series was downsampled. Stride and dilation in both conv layers are 1 for each model; the first pooling layer has size 7, the second has size 3; each model is trained for 10 epochs. Longer training was explored, but more than ten epochs showed little improvement in performance. \textbf{Experiments and Results} For a given chunk size, downsampling method, and downsampling rate, we run 10-fold cross validation for each of the three prediction methods described. The average and standard deviation of accuracy are recorded and displayed in Figure \ref{fig:perf_non_dynamic} for non-dynamic bucketing downsampling methods, and Figure \ref{fig:perf_dynamic} for dynamic bucketing downsampling methods. \begin{figure}[ht] \centering \includegraphics[width=0.75\textwidth]{paper_materials/figures/eeg_eye_results_non_dynamic_v2.png} \caption{Experiment results for non-dynamic bucketing downsampling methods. Diagram rows correspond to the initial length of the time-series segment; within rows, the y-axis is classification accuracy and the x-axis is the number of points after downsampling.} \label{fig:perf_non_dynamic} \end{figure} \section*{Acknowledgements} The authors thank the anonymous reviewers from the NeurIPS 2020 TDA And Beyond Workshop. We extend heartfelt appreciation to Will Chernoff for excellent feedback on the manuscript including a careful review of the possible implications of this work, leading to the broader impacts section. We also thank Dan Marthaler, Sven Schmit, and Hector Yee for feedback and comments on the manuscript.
Janu Verma provided detailed feedback and recommendations on the exposition. \section*{Broader Impacts} As the primary application of study for these experiments lies within the medical space, both positive and negative applications come to mind. Large, high-resolution datasets, both for training and evaluation, accrue to the benefit of those in developed and wealthy communities. Our research -- through its focus on developing methods robust to degradation -- provides an opportunity for an improvement in prediction methods in lower-fidelity data regimes; i.e., methods designed with downsampling and data reduction in mind alleviate the need for larger and more complete datasets. The present study examines EEGs and eye states, but the approach could easily extend to other ailments. More accurate models for predicting seizure, for example, could greatly benefit those privileged enough to share characteristics with those used to train the model in the first place. But what of those not sampled? As (\cite{Hall-rep}) has observed: Blacks, women, and the elderly have historically been excluded from clinical trial research. Such arrangements can lead to what (\cite{Veinot-intentions}) has referred to as intervention-generated inequalities (IGI), a social arrangement where one group gets better while others do not. On top of their original ailments, the groups left out are burdened with continued medical involvement and the associated costs (e.g. additional tests, transportation, childcare, and missed opportunities). We offer the following suggestions for those in the medical industry hoping to combat some of this inequity: 1. Insist on multiple representative datasets, including those from underrepresented groups -- with incentivization where appropriate. 2. Identify and assist in eliminating barriers to involvement in data collection or diagnostics. \section{Appendix: Persistent Homology} \label{ph} Fix $\epsilon \geq 0$, and let $X = \{ x_1, \ldots, x_n \}$ be an enumerated subset of $\mathbb{R}^m$.
For $k = 0, 1, \ldots$ define $C_{\epsilon, k}$ to be the $\mathbb{R}$-vector space whose formal basis is given by all tuples of elements of $X$ of the form \begin{equation*} \{ (x_{i_0}, x_{i_1}, \ldots, x_{i_k}) \text{ such that } i_0 < i_1 < \cdots < i_k \text{ and } \norm{x_{i_{\alpha}} - x_{i_{\beta}}} < \epsilon \text{ for all } \alpha, \beta = 0, 1, \ldots, k \}. \end{equation*} We take $C_{\epsilon, k}$ to be the zero vector space if there are no such tuples. There are linear maps $\partial_{\epsilon, k}: C_{\epsilon, k} \rightarrow C_{\epsilon, k-1}$ defined by \begin{equation*} \partial_{\epsilon, k}(x_{i_0}, \dots, x_{i_k}) = \sum_{j = 0}^{k}(-1)^j(x_{i_0}, \ldots, \widehat{x_{i_j}}, \ldots, x_{i_k}) \end{equation*} \noindent where $(x_{i_0}, \ldots, \widehat{x_{i_j}}, \ldots, x_{i_k})$ denotes the element $( x_{i_0},\ldots, x_{i_{j-1}}, x_{i_{j+1}},\dots, x_{i_k} ) \in C_{\epsilon, k-1}$. One can check that $\text{Im}(\partial_{\epsilon, k+1}) \subseteq \text{Ker}(\partial_{\epsilon, k})$. The \emph{$k^{th}\: \epsilon$-homology group} of $X$ is defined to be the vector space quotient $\text{H}_{\epsilon, k}(X) := \text{Ker}(\partial_{\epsilon, k})/\text{Im}(\partial_{\epsilon, k+1})$. The \emph{$k^{th}\: \epsilon$-Betti number} of $X$ is defined to be $\beta_{\epsilon, k}(X) := \text{dim}(\text{H}_{\epsilon, k}(X))$. Intuitively, $\beta_{\epsilon, k}(X)$ measures the number of $k$-dimensional holes in the point cloud $X$ at resolution $\epsilon$. We will typically write $\beta_k(\epsilon)$, and allow $\epsilon$ to vary over a fixed range $[0, r]$. We will also refer to $\beta_k(\epsilon)$ as an $\epsilon$-series. In the sequel, \emph{persistent homology} will in general refer to the collection of $\beta_k$'s, as they capture information about how the homology persists over varying resolutions. (c.f.
\cite{Hatcher}) \section{Appendix: Downsampling}\label{downsampling} \subsection{Time series downsampling} We consider a downsample to be a selection of a subsequence of points, or a smaller set of points that summarizes the time series. We assume the time series has $n+2$ points, and construct a downsample of $m+2$ points. \textbf{Naive Bucketing}: Select the first and last points of the time series; cover the rest of the points with $m$ even-width intervals (up to integer rounding). We call this a \textit{bucketing}. Consider a sequence of sequences: \begin{equation*} \left\lbrace\left\lbrace x_0 \right\rbrace, \left\lbrace x_1,\ldots,x_k\right\rbrace, \left\lbrace x_{k+1},\ldots,x_{2k}\right\rbrace, \ldots, \left\lbrace x_{(m-1)k+1},\ldots,x_{mk}\right\rbrace, \left\lbrace x_{n+1} \right\rbrace\right\rbrace \end{equation*} and for simplicity, call the sub-sequences $\left\lbrace{b_i}\right\rbrace_{0\leq i\leq m+1}$ such that $b_0 = \left\lbrace x_0 \right\rbrace$, $b_{m+1} = \left\lbrace x_{n+1} \right\rbrace$, and $b_j = \left\lbrace x_{(j-1)k + 1},\ldots, x_{jk} \right\rbrace$; we refer to these as \emph{buckets}. \textbf{Dropout:} For each bucket, select the first point in the subsequence. \textbf{Bucket Averaging:} For each bucket in a naive bucketing, average the $x$ and $y$ coordinates and take this as the representative point; take also the first and last points. For a bucket $b_j$, we write $\mu_j$ for the 2-dimensional average of the points contained within. For convenience of notation, we write the elements of bucket $b_i$ as $\left\lbrace x^i_j\right\rbrace_j$. \textbf{Largest-Triangle Three-Bucket Downsample (LTTB)} We compute the subsequence via the optimization problem: \begin{equation*} \textbf{compute } l_{i} = \argmax_{x^i_j} \mathbf{\triangle}\left({l_{i-1}, x^i_j, \mu_{i+1}}\right)\text{ such that } l_{0} = x_0 \text{ and } l_{m+1} = x_{n+1}.
\end{equation*} Here $\mathbf{\triangle}(a, b, c)$ denotes the area of the triangle with vertices $a$, $b$, and $c$. The sequence $\left\lbrace l_0,\ldots, l_{m+1}\right\rbrace$ is the \emph{largest triangle three bucket downsample.} For more details and intuition around this construction we recommend the original paper. \begin{remark} This is computed via a recursive optimization process iterating through the buckets; a non-recursive formulation to find the global optimum is also possible. The distinction between these two solutions is that in the recursive solution each optimization is conditioned on the previous bucket, whereas the global solution conditions on all buckets simultaneously. \end{remark} \subsection{Dynamic downsampling} In the above bucketing strategies, points in all regions of the time series are given equal weight in the downsample. Often, the lagging variance of a time series is not uniform across the time domain. One might expect that regions of higher variance warrant higher resolution in the downsample, while low-variance regions require lower resolution. A simple implementation of this idea \textit{(inspired by \cite{sveinn})} is demonstrated in Algorithm \ref{alg:dynamic bucketing} (implementation included in the Appendix). Downsampling methods are then applied to this bucketing of the time series.
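The static bucketing downsamplers of the previous subsection can be sketched as follows (a minimal illustration assuming the bucket count does not exceed the number of interior points; function names are ours, and LTTB is omitted for brevity):

```python
import numpy as np

def naive_buckets(points, m):
    # Keep the first and last points as singleton buckets; split the
    # interior into m near-even-width buckets (up to integer rounding).
    pts = np.asarray(points, dtype=float)  # rows are (t, y) pairs
    interior = list(np.array_split(pts[1:-1], m))
    return [pts[:1]] + interior + [pts[-1:]]

def dropout(buckets):
    # Dropout downsample: the first point of each bucket.
    return np.vstack([b[0] for b in buckets])

def bucket_average(buckets):
    # Bucket averaging: the coordinate-wise mean of each bucket
    # (the singleton end buckets average to themselves).
    return np.vstack([b.mean(axis=0) for b in buckets])
```

For example, an 8-point series bucketed with $m = 2$ keeps the endpoints and reduces the six interior points to two representatives, yielding $m + 2 = 4$ output points under either downsampler.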
\begin{algorithm} \caption{Variance weighted dynamic bucketing \label{alg:dynamic bucketing}} \begin{algorithmic}[1] \Require{$\mathcal{B}$ a naive bucketing, and $P$ an iteration count} \Statex \Function{DynamicBuckets}{$\mathcal{B}:List[List[Float]]$} \Let{$[b_j]$}{$\mathcal{B}$} \For{$i \gets 1 \textrm{ to } P$} \For{$j \gets 1 \textrm{ to } m$} \Let{$S(b_j)$}{$SSE(OLS(b_j))$} \EndFor \Let{$z$}{$\argmax_j(S(b_j))$} \Let{$b^l_z$}{$\left\lbrace x^z_1, \ldots, x^z_{\floor{k/2}}\right\rbrace$} \Let{$b^r_z$}{$\left\lbrace x^z_{\floor{k/2}+1}, \ldots, x^z_k\right\rbrace$} \Let{$\mathcal{B}$}{$\mathcal{B}\setminus \left\lbrace b_z\right\rbrace \cup \left\lbrace b^l_z, b^r_z \right\rbrace$} \EndFor \For{$i \gets 1 \textrm{ to } P$} \For{$j \gets 1 \textrm{ to } m+P$} \Let{$S(b_j)$}{$SSE(OLS(b_j))$} \EndFor \Let{$a$}{$\argmin_a(S(b_a)+S(b_{a+1}))$} \Let{$b^*_a$}{$\left\lbrace x^a_1, \ldots, x^a_{k}\right\rbrace \cup \left\lbrace x^{a+1}_{1}, \ldots, x^{a+1}_k\right\rbrace$} \Let{$\mathcal{B}$}{$\mathcal{B}\setminus \left\lbrace b_a, b_{a+1} \right\rbrace \cup \left\lbrace b^*_a \right\rbrace$} \EndFor \State \Return{$\mathcal{B}$} \EndFunction \end{algorithmic} \end{algorithm} \begin{remark} Rather than serially splitting, and then combining, buckets to arrive at the rebucketing, it is natural to ask how alternating these operations affects the result. The authors carried out several simulations of this technique and found that convergence to a `stable' bucketing took place much more quickly, but produced far worse results with respect to total SSE. \end{remark} \subsection{Dynamic downsampling results} We tested our dynamic downsampling strategies for the Laplacian eigenvalues features. \begin{figure}[h] \includegraphics[width=0.9\textwidth]{figures/eeg_eye_results_dynamic_v2.png} \caption{Experiment results for dynamic bucketing downsampling methods. Diagram columns refer to downsampling method.
Diagram rows correspond to the initial length of the time-series segment, with each point reflecting the number of points after downsampling. Within rows, the y-axis is classification accuracy and the x-axis is the number of points in the time-series samples.} \label{fig:perf_dynamic} \end{figure} \section{Appendix: Cartoon version of this paper} We thought it would be fun to capture the methods of this paper in a cartoon. The following image has a few numbered paths to walk through the three feature generation paths. \begin{figure}[ht] \includegraphics[width=0.9\textwidth]{figures/TallCartoonwEx.png} \caption{Three model pipelines. ($1\rightarrow 2$) Direct CNN on the raw features. ($1\rightarrow 3$) Takens' embedding. ($3\rightarrow 4$) generating the neighbor graph. ($4\rightarrow 5$) persistent spectrum. ($4\rightarrow 7$) persistent Betti numbers. ($5\rightarrow 6$, $8\rightarrow 9$) CNNs from geometric features. (Ex.) $\epsilon$-neighbors for four points in turquoise.} \end{figure}
Within rows, the y-axis is accuracy on binary cross-entropy, the x-axis is the number of points in the time-series samples.} \label{fig:perf_non_dynamic} \end{figure} \section*{Introduction} Topological Data Analysis (TDA) (\cite{computingPH},\cite{topo_persis}) has gained much attention due to its applications in data analysis and machine learning. In particular, persistent homology (\cite{Scopigno04persistencebarcodes}, \cite{Edelsbrunner2000TopologicalPA}) has been leveraged for machine learning purposes in numerous tasks. These methods attempt to describe the shape of the data in a latent space particularly amenable to feature engineering. The efficacy of topological features has been demonstrated in various tasks (\cite{Chazal2017AnIT}, \cite{TDAforArrhythmia}). In this paper, we investigate the performance of various topological feature engineering approaches for EEG time-series classification using one-dimensional CNNs as the classifiers. While CNN architectures are heavily experimented on, less research has explored models for feature engineering using modern geometric techniques (\cite{gcrn, geom_dl}). Usually, CNNs are trained on the raw time-series data, where convolutional kernels of fixed size and stride are applied to the series with moving windows to compute higher-order features. Persistent homology of the {\em Takens' embedding} provides one geometric procedure to engineer features for a time-series (\cite{Umeda2017}). The $k$-dimensional {\em Takens' embedding} of a time-series is the Euclidean embedding of points defined by a sliding window of size $k$---this provides a point-cloud representation of the time-series. From this point cloud, the standard TDA approach of computing the homology of the $\epsilon$-Rips complex provides persistent features. Both the raw series and the persistent features can be exploited for machine learning tasks.
As a first step towards a better understanding of feature engineering on time-series, we compare the performance of these two approaches for the classification task. Furthermore, we propose a novel geometric method beyond homology theories utilizing eigenvalues of sequences of graph Laplacians. Again, we utilize the point-cloud representation's $\epsilon$-neighbor graph, and compute the normalized graph Laplacians' eigenvalues. Density counts of these eigenvalues are encoded as $m$ discrete $\epsilon$-series. We demonstrate the superiority of our approach over the homological features and compare to the raw time-series via classification experiments while keeping the classifier architecture fixed. Generally, neural networks are optimized via network features like batch size, learning rate, kernels, pooling layers, and the like. Some papers have experimented with data resizing to improve training time --- in (\cite{DAWNBench}) they introduced dynamic resizing with progressive resolution; (\cite{DBLP}) have investigated CNN robustness under noise. We have not found in the literature examinations of the effects of explicit downsampling algorithms. To this end, we study the performance of the feature engineering methods across multiple regimes. We vary features, time-series resolution, and effective resolutions, and compare degradation across the feature types and downsampling methods, a practice common in the signal processing literature but less so in the classification of time-series. Note that the encoding methods in this paper need not apply only to sequential collections. The principal contributions of this work include: \begin{itemize} \item Introduce a new feature engineering technique utilizing latent geometric properties of the time series. \item Apply the theory and methods of downsampling to the time-series classification problem.
\item Propose and demonstrate a comparison framework and baseline results for time series clustering via varying features and CNN architectures. \end{itemize} \subsection*{Comments and Caveats} In machine learning tasks, especially those with more complicated models, it is essential to attempt to establish a baseline, a set of target metrics, and a comparison pipeline for apples-to-apples evaluation\textemdash in some literature this is called boat-racing, or horse-racing. During our literature review of TDA methods for the EEG prediction dataset, we did not find sufficient comparators and thus integrate this into our experimentation goals, whence the large volume of experiments considered in this paper. We do not treat the neuroscience topics necessary for a deeper investigation of EEG technology or seizures. We treat the dataset as a mathematical task, and focus instead on the models and methods in machine learning. In particular, as highlighted in (\cite{GCN-EEG}), EEG datasets are structural time series, i.e.\ the physical geometry of data collection correlates with relationships between series. We use only local readings, thus reducing this covariance structure. Furthermore, train and test splitting can be challenging as several samples are taken from the same patient. For these, and other reasons, we make no comparison to SOTA networks. \textbf{The methods and architectures covered in this work are not state-of-the-art in terms of performance.} \subsection*{This work in context} Generally, CNN architectures are optimized via network features like batch size, learning rate, kernels, pooling layers, and the like. Some papers have experimented with resizing to improve training time --- in (\cite{DAWNBench}) they introduced dynamic resizing with progressive resolution.
Others (\cite{DBLP}) have investigated the robustness of CNN performance under image degradation due to noise, but we've not found in the literature examinations of the effects of explicit downsampling algorithms. There is renewed interest in small models over the last few years in other ML tasks; for time-series prediction this paper provides one such framework for pursuing those ends. For a collection classification task, the encoding of the data itself is also an opportunity for tuning. While common approaches to sequential data sets use RNNs and LSTMs to take advantage of data characteristics like autoregressive features, the encoding methods in this paper need not apply only to sequential collections. We wish to highlight that while time series classification is well studied and has clear and effective baselines (\cite{TSclass}), EEG classifiers are of great value and remain elusive. \section{Methods} We use two geometric methods for feature engineering, persistent Betti numbers and persistent Spectra. The first is a development of \cite{Umeda2017}; the second is a new construction. \textbf{Takens' Embedding}\label{sec:takens} Given a time series $\{t_i \}_{i=0}^n \subset \mathbb{R}$ and a function $f: \mathbb{R} \rightarrow \mathbb{R}$, the \emph{Takens' embedding} with window size $m$, denoted $T^m$, is the collection of points in $\mathbb{R}^m$ given by \begin{equation*} \left\{ \left[ f(t_0), f(t_1), \dots, f(t_{m-1}) \right], \left[ f(t_1), f(t_2), \dots, f(t_{m}) \right], \dots, \left[ f(t_{n-m+1}), f(t_{n - m +2}), \dots, f(t_{n}) \right] \right\} \end{equation*} \noindent That is, $T^m$ is the collection of points in $\mathbb{R}^m$ given by taking sliding windows over the time series $f(t_i)$. For volatile series, smooth transformations of this geometry are known to preserve properties of the time series, such as the dimension of the chaotic attractor and the Lyapunov exponents of the dynamics.
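As a concrete sketch, the sliding-window construction above can be written in a few lines (illustrative code, not the authors' implementation; here the values $f(t_i)$ are taken to be the raw sample values):

```python
import numpy as np

def takens_embedding(series, m):
    """Sliding-window (Takens') embedding of a 1-D series into R^m.

    Returns an (n - m + 1) x m array whose i-th row is
    [series[i], series[i+1], ..., series[i+m-1]].
    """
    series = np.asarray(series, dtype=float)
    n = len(series)
    return np.stack([series[i:i + m] for i in range(n - m + 1)])

ts = [0.0, 1.0, 2.0, 3.0, 4.0]
print(takens_embedding(ts, 3))
# rows: [0,1,2], [1,2,3], [2,3,4]
```

Each row is one point of the point cloud $T^m$ on which the $\epsilon$-neighbor graph is subsequently built.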
\textbf{$\epsilon$-neighbor graph} Given a finite subset $X \subset \mathbb{R}^m$ with $n$ points, and a real number $\epsilon \geq 0$, we form the $\epsilon$-graph $G_{\epsilon}(X)$ with nodes of $G_{\epsilon}(X)$ indexed by the elements of $X$, and an edge between $v_x$ and $v_y$ for distinct $x, y \in X$ if and only if $\norm{x - y} < \epsilon$. \textbf{Persistent Homology and Time Series Analysis}\label{sec:ph_ts} In (\cite{Umeda2017}), the author calculates persistent Betti numbers (c.f.\ the appendix \ref{ph} for definitions) of EEG time series signals via the aforementioned persistent Betti-number model pipeline before feeding the output into a CNN for an eyes open/closed prediction task. In particular, for a time series $\{ f(t_i) \}_{i=0}^n$, consider $T^m$ its Takens' embedding as the enumerated subset in $\mathbb{R}^m$, then compute $\beta_{k}(\epsilon)$ for all $0 \leq k < m$ and $\epsilon \in [0,r]$. \textbf{Graph Laplacians} Spectral graph theory is an integral facet of graph theory (\cite{chung1997spectral}), and one of the key objects of this theory is the Laplacian matrix of a graph, as well as its eigenvalues. We assume all graphs are undirected and simple. For a graph $G$, let $A$ and $D$ be the adjacency matrix and the degree matrix of $G$ respectively. The \emph{Laplacian} of $G$ is defined to be $L = D - A$. The \emph{normalized Laplacian} of $G$ is then defined to be $\tilde{L} = I - D^{-1/2} A D^{-1/2}.$ Denote the eigenvalues (or \emph{spectrum}) of $\tilde{L}$ by $0=\lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_{n-1}$. Recall [\textbf{Lemma 1.7}, \cite{chung1997spectral}]\label{thm:graphspec}: For a graph $G$ with $n$ vertices, we have that \begin{enumerate} \item $0 \leq \lambda_i \leq 2$, with $\lambda_0 = 0$. \item If $G$ is connected, then $\lambda_1 > 0$. If $\lambda_i = 0$ and $\lambda_{i+1} \neq 0$ then $G$ has exactly $i+1$ connected components.
\end{enumerate} \textbf{Persistent Laplacian Eigenvalues for Time Series Analysis}\label{sec:laplacians_ts} Denote by $\tilde{L}_{\epsilon}(X)$ the normalized Laplacian of $G_{\epsilon}(X)$. Define $\hat{\lambda}_{\epsilon}(X) = [ \lambda_{\epsilon}(X)_0, \lambda_{\epsilon}(X)_1, \dots, \lambda_{\epsilon}(X)_{n-1} ]$ to be the vector of eigenvalues of $\tilde{L}_{\epsilon}(X)$, in ascending order: $0 = \lambda_{\epsilon}(X)_0 \leq \lambda_{\epsilon}(X)_1 \leq \cdots \leq \lambda_{\epsilon}(X)_{n-1} \leq 2$. When the context is understood, we will drop the designation $(X)$ in the above notations; e.g.\ $G_{\epsilon}$ or $\lambda_{\epsilon 0}$. Let $I$ be an interval and $\hat{v} = [v_0, v_1, \dots, v_{n-1}]$ a vector, and define \begin{equation} \textbf{count}_{I}(\hat{v}) : = \#\{ v_i \mid v_i \in I \}. \end{equation} Fix an interval $[0, r]$ (this will be our range of resolutions) and a finite collection of real numbers $0 = \tau_0 < \tau_1 < \cdots < \tau_k = 2 + \delta$ for some fixed real $\delta > 0$. For $\epsilon \in [0, r]$, define: \begin{equation} \mu_j(\epsilon) := \textbf{count}_{[\tau_j, \tau_{j + 1})}(\hat{\lambda}_{\epsilon}) \text{ for } 0 \leq j \leq k - 1 \end{equation} \noindent That is, $\mu_j(\epsilon)$ counts the number of eigenvalues of $\tilde{L}_{\epsilon}$ that lie between $\tau_j$ and $\tau_{j + 1}$. Observe that $\textbf{count}_{[0, 0]}(\hat{\lambda}_{\epsilon})$ is equal to the number of connected components of $G_{\epsilon}$. We will view the collection $\{ \mu_j \}$ as a collection of $k$ real-valued functions with domain $[0, r]$. We refer to the collection of $\mu_j$'s as \emph{persistent Laplacian eigenvalues}. Given a time series $\{ f(t_i) \}_{i=0}^n$ we form $T^m$, and compute $\{ \mu_j \}_{j=0}^l$ for some choice of $\tau_0, \dots, \tau_l$.
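A minimal sketch of this pipeline (illustrative, not the authors' code): build the $\epsilon$-neighbor graph, take the spectrum of the normalized Laplacian $I - D^{-1/2} A D^{-1/2}$, and bucket the eigenvalues into the counts $\mu_j$.

```python
import numpy as np

def epsilon_graph(X, eps):
    """Adjacency matrix of the epsilon-neighbor graph on point cloud X."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = (d < eps).astype(float)
    np.fill_diagonal(A, 0.0)  # simple graph: no self-loops
    return A

def normalized_laplacian_spectrum(A):
    """Eigenvalues of I - D^{-1/2} A D^{-1/2}, ascending, all in [0, 2]."""
    deg = A.sum(axis=1)
    inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(np.maximum(deg, 1e-12)), 0.0)
    L = np.diag((deg > 0).astype(float)) - inv_sqrt[:, None] * A * inv_sqrt[None, :]
    # clip tiny numerical noise so the values stay in [0, 2]
    return np.clip(np.sort(np.linalg.eigvalsh(L)), 0.0, 2.0)

def mu(eigs, taus):
    """Counts mu_j of eigenvalues falling in each bucket [tau_j, tau_{j+1})."""
    return [int(np.sum((eigs >= lo) & (eigs < hi)))
            for lo, hi in zip(taus[:-1], taus[1:])]

# two well-separated pairs of points -> two connected components at eps = 0.5
X = [[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]]
eigs = normalized_laplacian_spectrum(epsilon_graph(X, 0.5))
print(mu(eigs, [0.0, 0.5, 1.0, 1.5, 2.0 + 1e-9]))  # → [2, 0, 0, 2]
```

Here each pair of points forms a $K_2$ component, whose normalized Laplacian has eigenvalues $0$ and $2$; the two zero eigenvalues reflect the two connected components, as in the lemma above.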
\newpage \subsection*{Overview of model pipelines} \begin{wrapfigure}{r}{0.5\textwidth} \centering \label{fig:pers_lpn_eigs} \includegraphics[width=.48\textwidth]{figures/raw_time_series_example.png} \caption{Raw time series from a segment of time where the patient's eyes were closed, a segment of 600 time steps, not downsampled. The x-axis is in units of time-steps and the y-axis is $\mu V$ amplitude.} \label{fig:raw_time_series} \includegraphics[width=.48\textwidth]{figures/betti_numbers_example.png} \caption{$\beta_j(\epsilon)$ computed for the time series shown in Fig. \ref{fig:raw_time_series}. The x-axis is in units of $\epsilon$-steps and the y-axis is the counts identified by legend.} \label{fig:pers_betti_nums} \includegraphics[width=.48\textwidth]{figures/eeg_eye_persistent_laplacian_example_area.png} \caption{Area plot of $\mu_j(\epsilon)$, i.e.\ percentage of binned eigenvalues, for the time series in Fig. \ref{fig:raw_time_series}. The x-axis is in units of $\epsilon$-steps and the y-axis is the counts.} \end{wrapfigure} \textbf{Raw Time Series Features:} We feed the sequential time-series values into our CNN architecture. \textbf{Persistent Betti Numbers Features:} We encode each time series with the $k$-step Takens' embedding into $\mathbb{R}^k$. This point cloud's $\epsilon$-neighbor graph generates the Vietoris-Rips filtration up to dimension 3. The rank of the degree-$n$ simplicial homology (i.e.\ the $n$-th Betti number) is computed for each $\epsilon$-neighbor complex, and encoded as $n$ discrete $\epsilon$-series. We feed the sequential $\epsilon$-series values --- each on their own channel --- into our CNN architecture. \textbf{Persistent Laplacian Eigenvalue Features:} We encode each time series with the $k$-step Takens' embedding into $\mathbb{R}^k$. This point cloud's $\epsilon$-neighbor graph is collected and its normalized graph Laplacians are computed.
The eigenvalues of these Laplacians are computed and bucketed into a partition of $m$ buckets. The counts of eigenvalues in each bucket are encoded as $m$ discrete $\epsilon$-series. We feed the sequential $\epsilon$-series values --- each on their own channel --- into our CNN architecture. \textbf{Experimental design:} In each of the pipelines, we prepend the model pipeline with a downsampling step using one of three downsampling algorithms and several downsampling resolutions (c.f.\ the appendix \ref{downsampling}). Our design matrix consists of the three downsampling methods applied to $\left\lbrace 200, 300, 400, 500, 600\right\rbrace$ initial data lengths downsampled by multiples of $50$. Each of these initial data sets is fed through each of the model pipelines and subsequent CNNs. We use cross-validation and accuracy to evaluate the performance. \section{Outcomes} We've explored a collection of `experiments' in training and testing CNNs built on EEG data to predict if a patient's eyes are open or closed. The aforementioned experiments primarily sought to establish performance comparisons while varying the feature engineering choice, the chunk size, and the downsampling resolution used. We established a baseline performance using a raw time series feature set and reproduced the performance in (\cite{Umeda2017}) to compare to this baseline. This baseline actually outperforms the TDA feature-engineered experiment as reported. This suggests that for a task of this type the TDA approach is not SOTA, but may hold value in other regimes, or under more specific hyperparameter tuning. Building on these results, we explored the novel geometric feature engineering method of persistent eigenvalues of the Laplacian. This method also outperforms TDA, but does not outperform the raw time series experiment. We showed the performance of these networks under the strain of reduced data samples, and in resolution reduction.
The impact on performance as we iterate through the parameter space is relatively smaller for the eigenvalue features, but performance remains worse than the raw time-series. Finally, we provided a test-bed for further iteration on these sorts of prediction tasks, and opened up a discussion around sensor resolution, sample data size, downsampling, feature engineering, and CNNs. The comparison pipelines are easily extensible for further experimentation with this dataset or others. All of the code for feature engineering and testing is available on GitHub. Variable resolution training has been employed on ImageNet (\cite{DAWNBench}) to dramatically reduce training time; it is interesting to consider the implications of explicitly controlling downsampling schemes for this ansatz. More broadly, we have left open the question of ``multi-resolution'' sensor networks and their impact on geometric feature engineering and downsampling.
\section{Introduction} Deep neural networks have been widely adopted in various computer vision tasks, e.g.\ image classification, semantic segmentation, image translation and generation, etc. Due to the high sample-complexity of such models, they require large amounts of training data. However, collection and annotation of many training samples is an expensive and labor-intensive process. In many domains, such as medical imaging, publicly available training data are particularly scarce due to privacy concerns. In such settings, a common solution is training the model privately and then providing black-box access to the trained model. However, even black-box access may leak sensitive information about the training data. \textit{Membership inference attacks (MIA)} are one way to detect such leakage. Given access to a data sample, an attacker attempts to find whether or not the sample was used in the training process. Due to memorization in deep neural networks, prediction confidence tends to be higher for images used in training. This difference in prediction confidence helps MIA methods to successfully determine which images were used for training. Therefore, in addition to detecting information leakage, MIA also provide insight into the degree of memorization in the victim model. MIA has previously been applied to a variety of neural network tasks including classification, generative adversarial models, and segmentation. The accuracy achieved by MIA can vary greatly as a function of different properties of the attempted tasks. Our empirical results highlight two properties that make tasks more vulnerable to MIA: i) Uncertainty: tasks where there is more uncertainty in the prediction of the output given an input are more susceptible to MIA. ii) Output dimensionality: tasks with higher-dimensional outputs are more vulnerable to MIA.
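Property (ii) can be illustrated with a toy simulation under assumed Gaussian per-pixel errors (hypothetical numbers, not results from the paper): each output pixel is a weak membership signal, and averaging many pixels sharpens the separation between members and non-members.

```python
import numpy as np

rng = np.random.default_rng(0)

def attack_accuracy(dim, n=500, gap=0.2):
    """Accuracy of thresholding the mean per-pixel error at the midpoint.

    Members get per-pixel errors centered slightly lower than non-members
    (a toy stand-in for memorization); each pixel alone is a weak signal,
    but averaging `dim` pixels sharpens the separation.
    """
    member = rng.normal(0.4, 1.0, size=(n, dim)).mean(axis=1)
    nonmem = rng.normal(0.4 + gap, 1.0, size=(n, dim)).mean(axis=1)
    thresh = 0.4 + gap / 2
    return ((member < thresh).mean() + (nonmem >= thresh).mean()) / 2

print(attack_accuracy(1), attack_accuracy(100))  # accuracy grows with dim
```

With one "pixel" the attack is barely above chance; with one hundred, the averaged error separates the two populations far more cleanly, mirroring the ensemble intuition developed later in the paper.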
These apparently simple properties can explain non-intuitive observations, for example, that reconstruction-based MIA on CycleGAN leads to very low accuracy. Since a training image should be reconstructed exactly after being translated to the other domain and back, there is no uncertainty in the output given the input. As the desired output is fully specified by the input (i.e.\ they are identical), the reconstruction task is easy, leading to low MIA accuracy. Motivated by the above findings, we focus our attention on two tasks that exhibit these properties: supervised image translation and semantic segmentation, including medical image segmentation. We begin by evaluating a simple-to-implement but effective MIA that uses the pixel-wise reconstruction error between the model output and the ground truth. This approach exploits memorization of the training data in the victim model, resulting in lower reconstruction error on images used for training. However, we observe that for some sample images the ground truth result can be easily predicted, while for others it is harder to predict. Reconstruction error alone is therefore less accurate at discriminating between hard-to-predict samples used in training and easy samples not seen before. To overcome this limitation, we propose a novel predictability error which is computed for each query input image and its ground truth output. Our predictability error uses the accuracy of a linear predictor computed over the given query image, predicting pixel values from deep features of the input image. The linear predictor serves as a simple approximation of the task attempted by the victim model, providing an indication of the ease of predicting the output image from the input.
The reconstruction error, together with the predictability error, helps to discriminate between two factors of variation in the reconstruction error: (i) The "intrinsic" difficulty of the generation task for each image, based on its predictability error, and (ii) The boost in accuracy due to memorization of the training images. Defining a membership error that subtracts the predictability error from the reconstruction error is shown empirically to achieve high success rates in MIA. Unlike other MIA approaches, we do not assume the existence of a large number of in-distribution data samples for training a shadow model, but rather operate on a single sample, using only a single query to the victim model. Our method is demonstrated to be effective over strong baselines on an extensive number of benchmarks, taken from image translation, semantic segmentation, and medical image segmentation. We discuss possible defenses against MIA and show their ineffectiveness against our method. Our main contributions are: \vspace{-6pt} \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{1pt} \setlength{\parsep}{1pt} \item Highlighting two key properties of tasks that are highly vulnerable to MIA. \item Presenting the first MIA on image translation models. \item Proposing a general single-sample, self-supervised, image predictability error for MIA.
\end{enumerate} \begin{table}[tb] \centering \small \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{Dataset} & \textbf{Reconstruction} & \textbf{Membership} \\ \textbf{} & \textbf{} & \textbf{Error} & \textbf{Error} \\ \midrule Pix2pix & Facades & 93.62\% & \textbf{97.59\%} \\ Pix2pix & Maps2sat & 84.22\% & \textbf{85.65\%} \\ Pix2pix & Cityscapes & 77.74\% & \textbf{83.23\%} \\ \midrule Pix2pixHD & Facades & 98.92\% & \textbf{99.95\%} \\ Pix2pixHD & Maps2sat & 95.74\% & \textbf{99.42\%} \\ Pix2pixHD & Cityscapes & 96.04\% & \textbf{99.09\%} \\ \midrule SPADE & Cityscapes & 99.75\% & \textbf{100\%}\\ SPADE & ADE20K & 85.31\% & \textbf{89.79\%}\\ \midrule UperNet50 & ADE20K & 96.80\% & \textbf{98.09\%} \\ UperNet101 & ADE20K & 95.74\% & \textbf{96.94\%} \\ HRNetV2 & ADE20K & 83.67\% & \textbf{85.92\%} \\ \midrule Inf-Net & COVID19 & 97.16\% & \textbf{99.01\%} \\ PraNet & Polyp & 96.03\% & \textbf{96.38\%} \\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{Membership attack ROCAUC using our (i) reconstruction error $L_{rec}$ and (ii) membership error $L_{mem}$. Using the membership error, which subtracts the image predictability error from the reconstruction error, substantially improves performance.} \label{tab:main_method_results} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=0.95\linewidth, height=0.45\linewidth]{figures/method_inp2out_new.pdf} \end{center} \caption{Illustration of the proposed black-box membership inference attack. Here shown for the case of image translation over the Cityscapes dataset. We would like to determine if a given sample was used in training. The victim model predicts a reconstructed image based on the input. In the top path the difference between the reconstructed image and the ground truth image gives the reconstruction error $L_{rec}$. 
In the bottom path we compute the predictability error $L_{pred}$ of the sample from the error of a linear predictor trained to predict pixel values of the ground-truth image from deep features of the input. Subtracting $L_{pred}$ from $L_{rec}$ gives the membership error, $L_{mem}$.} \label{fig:method_plot} \end{figure} \vspace{-10pt} \section{Related Work} \subsection{Membership Inference Attacks (MIA)} Shokri et al. \cite{shokri2017membership} were the first to study MIA against classification models in a black-box setting. In this setting the attacker can only send queries to the victim model and get the full probability vector response, without being exposed to the model itself. They proposed to train multiple shadow models to mimic the behavior of the victim model, and then use those to train a binary classifier to distinguish between known samples from the train set and unknown samples. They assume the existence of in-distribution new training data and knowledge of the victim model architecture. Salem et al. \cite{salem2018ml} further relaxed those assumptions and demonstrated that using only one shadow model is sufficient, and proposed using an out-of-distribution dataset and different shadow model architectures, for a slightly inferior attack. Even more interestingly, they showed that without any training, a simple threshold on the victim model's confidence score is sufficient. This shows that classification models are more confident on samples that appeared in the training process than on unseen samples. Sablayrolles et al. \cite{sablayrolles2019white} proposed an attack based on applying a threshold over the loss value rather than the confidence and showed that black-box attacks are as good as white-box attacks. As the naive defense against such attacks is to modify the victim model's API to only output the predicted label, other works proposed label-only attacks \cite{yeom2018privacy, li2020label, choo2020label}.
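The confidence/loss-threshold idea discussed above can be sketched in a few lines (illustrative numbers; not data from any of the cited papers):

```python
import numpy as np

def threshold_attack(losses, tau):
    """Mark a sample as a training-set member when its loss falls below tau.

    Illustrative sketch of the loss-threshold attacks discussed above:
    memorized training samples tend to have lower loss than unseen samples.
    """
    return np.asarray(losses) < tau

# hypothetical loss values for members (trained on) vs. non-members
member_losses = [0.05, 0.10, 0.08]
nonmember_losses = [0.70, 0.40, 0.55]
tau = 0.25
print(threshold_attack(member_losses, tau))     # all flagged as members
print(threshold_attack(nonmember_losses, tau))  # none flagged
```

In practice the attacker must choose $\tau$ without seeing labeled member/non-member losses, which is one motivation for calibrating the statistic per sample, as this paper does with its predictability error.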
While most previous work has been around classification models, there has been some effort regarding MIA on generative models such as GANs and VAEs \cite{chen2019gan, hayes2019logan, hilprecht2019monte}. An attack against semantic segmentation models was proposed by He et al. \cite{he2019segmentations}, where a shadow semantic segmentation model is trained and is used to train a binary classifier. The classifier is trained on image patches, and the final decision regarding the query image is made by aggregating the per-patch classification scores. The input to the classifier is a structured loss map between the shadow model's output and the ground truth segmentation map. Although this task is the closest to ours, our work is the first study of membership inference attacks on image translation models. We also note that \cite{he2019segmentations} consider the setting where other input-output samples from the data distribution (or a very similar distribution) are available, whereas our attack does not require this information. Besides membership inference attacks, other privacy attacks against neural networks exist. We refer the reader to the supplementary material (SM) for more details. \subsection{Conditional Image Generation} Image-to-image translation is the task of mapping an image from a source domain to a target domain, while preserving the semantic and geometric content of the input image. The most popular methods for training image-to-image translation models use Generative Adversarial Networks (GANs) \cite{goodfellow2014generative}, and are currently used in two main scenarios: (i) unsupervised image translation between domains \cite{zhu2017unpaired, kim2017learning, liu2017unsupervised, choi2018stargan}; (ii) serving as a perceptual image loss function \cite{isola2017image, wang2018high, zhu2017toward}. In this work we introduce the novel task of MIA on conditional image generation models.
\subsection{Semantic Segmentation} Semantic segmentation is the task of assigning a class label to each pixel in the input image. This can be thought of as a classification problem for each pixel. State-of-the-art methods \cite{xiao2018unified, sun2019high} are based on fully convolutional networks and multi-scale representations of the input \cite{liu2019recent, minaee2020image}. \section{Difficulty-based MIA} We investigate the effect of task difficulty and dimensionality on the success of MIA. Consequently, we concentrate on two promising tasks for MIA, image translation and semantic segmentation. We also present a novel image predictability error which significantly improves MIA accuracy. \subsection{Effects of task difficulty and dimensionality} \label{sec:invesigation_of_tasks} Every neural network model is a potential target for MIA. Previous work attempted MIA on many different models (classification, GANs, segmentation) with highly variable accuracy. In this section we present an investigation into two factors that affect MIA accuracy: task difficulty and output dimensionality. Full details are provided in Sec.~\ref{sec:investigation_experimetns}. We perform reconstruction-based MIA by measuring the reconstruction error between the model output and the ground truth. This is done for multiple models, datasets and tasks. The attack method is described in more detail in Sec.~\ref{sec:main_method}. \begin{table}[tb] \centering \small \begin{tabular}{lcc} \toprule \textbf{Model} & \textbf{Task} & \textbf{Reconstruction Error}\\ \midrule NVAE & CelebA2Self & 50.74\% \\ \midrule Pix2pixHD & Maps2sat & 95.74\% \\ Pix2pixHD & Cityscapes & 96.04\%\\ \midrule Pix2pixHD & Landmarks2CelebA & 99.54\% \\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{Comparison of reconstruction-based MIA accuracy on tasks with different difficulties.
Easier tasks, e.g.\ CelebA2Self, in which there is no uncertainty in the output given the input image, suffer less from memorization of the training data and therefore have lower vulnerability to MIA. As the uncertainty increases (segmentation maps and landmarks), models tend to memorize the training data and therefore the MIA accuracy increases. } \label{tab:mia_diff_tasks} \end{table} \textbf{MIA accuracy vs. task difficulty:} We present the results of a reconstruction-based attack on three tasks of different difficulty. We define task difficulty as the uncertainty in the output given the input image. The tasks are: i) auto-encoding - translating an image to itself; ii) segmentation-to-image translation; iii) landmark-to-face translation. The first task is the easiest as the output is trivially determined by the input. Landmark-to-face is harder than segmentation-to-image as the input contains less information about the output (e.g.\ no information on the identity of the face), requiring much more memorization. The results are presented in Tab.~\ref{tab:mia_diff_tasks}, where it can be seen that MIA performance is indeed more accurate for harder tasks. \begin{figure}[t] \begin{center} \subfigure[Pix2pixHD]{ \includegraphics[width=0.46\linewidth, height=0.4\linewidth]{figures/dim_reduce_effect/reduce_dim_effect_p2phd.pdf} } \subfigure[Semantic Segmentation]{ \includegraphics[width=0.46\linewidth, height=0.4\linewidth]{figures/dim_reduce_effect/reduce_dim_effect_semseg.pdf} } \end{center} \caption{Effect of reducing output dimensionality over a reconstruction-based attack. MIA accuracy is correlated with the output dimension, i.e.\ number of pixels, demonstrating that high output dimensionality tasks are more vulnerable to MIA.} \label{fig:dim_reduce_effect} \end{figure} \textbf{MIA accuracy vs.
output dimensionality:} Many MIA approaches attack classification networks that have only a single output, usually a probability vector or in the more restrictive case, a single label. It is natural to ask if tasks with higher dimensional outputs are more vulnerable to MIA due to the ensemble effect of attacking each individual output dimension. The intuition behind this is similar to the ensemble effect of boosting algorithms - each output pixel serves as a weak attack, and the aggregation of the reconstruction errors between all output pixels can produce a strong attack. In Fig.~\ref{fig:dim_reduce_effect}, we provide a comparison of reconstruction-based MIA when subsets of different sizes are used as the output. Note that segmentation with only a single pixel output is equivalent to classification. We can observe that MIA accuracy indeed scales with output dimensionality. \begin{figure*}[t] \begin{center} \begin{tabular}{@{\hskip0pt}c@{\hskip0pt}c@{\hskip3pt}c@{\hskip0pt}c@{\hskip2pt}c@{\hskip0pt}c@{\hskip2pt}c@{\hskip0pt}c} & & City - input & City - out & Maps - input & Maps - out & Covid - input & Covid - out \\ \begin{turn}{90} ~~~~~~~~ Low \end{turn} & \begin{turn}{90} ~ predictability \end{turn} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cityscapes_easy_inp_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cityscapes_easy_gt_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/maps_easy_inp_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/maps_easy_gt_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cov_easy_inp_0.jpg} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cov_easy_gt_0.png} \\ \begin{turn}{90} ~~~~~~~~ High \end{turn} & \begin{turn}{90} ~ predictability \end{turn} & 
\includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cityscapes_hard_inp_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cityscapes_hard_gt_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/maps_hard_inp_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/maps_hard_gt_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cov_hard_inp_0.jpg} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cov_hard_gt_0.png} \\ \end{tabular} \end{center} \caption{Examples of input-output pairs from the Cityscapes, Maps2sat and COVID19-CT datasets that received the lowest (first row) and highest (second row) predictability errors using our single-sample approach. It can be seen that detailed images with complicated patterns are ranked as difficult to predict, while images with fewer details and lower contrast are ranked as easier to predict.} \label{fig:diff_score_samples} \end{figure*} \subsection{MIA for high output dimensionality} \label{sec:main_method} We showed in Sec.~\ref{sec:invesigation_of_tasks} that both task difficulty and output dimensionality are correlated with the accuracy of MIA. We therefore focus on two important but difficult image tasks that have high-dimensional outputs: image translation and semantic segmentation. To the best of our knowledge, this is the first paper to consider MIA on image translation models. We propose a simple but effective attack, assuming that the attacker has only black-box access to the victim model $\mathbf{V}$. Unlike most previous works, we do not use shadow models or train a binary classifier; we thus require no additional training data and query the victim model only once.
Our membership attack is performed on a pair of query images $(x, y)$ where $x \in \mathbb{R}^{h \times w}$ is an image from the input domain ($h$ and $w$ are the image height and width respectively) and $y \in \mathbb{R}^{h \times w}$ is the ground truth from the output domain. The requirement of the availability of the ground truth image $y$ is in line with previous works, and is a reasonable assumption in our target scenario. For each query we compute a membership error, $L_{mem}$ (see Eq.~(\ref{eq:L-mem})), to which we apply a pre-defined threshold $\tau$, such that all queries where $L_{mem}(x, y) < \tau$ are marked as members of the training data. The membership error has two elements: a reconstruction error and a predictability error. \vspace{-6pt} \subsubsection{Reconstruction Error for Membership} \label{sec:l_rec_method} Typical MIAs on classification models consider the probability (or confidence) given by the model to the correct class. Semantic segmentation is an extension where the output is a probability vector for each pixel. Image translation models are different as they output a color value for each pixel. This value is the maximum likelihood estimate, and no probability distribution over possible values is given. We propose to use the loss term as a reconstruction error, $L_{rec}$, to compute the pixel-wise difference between the output produced by a black-box access to the model, $\mathbf{V}(x)$, and the ground truth $y$. For semantic segmentation, where the output is a probability vector for each pixel, we use the cross-entropy error. For medical segmentation, we use the weighted IoU (Intersection over Union) loss and the binary cross-entropy loss. In the case of image translation we use the $L_1$ error as we do not assume access to the discriminator and therefore cannot use a GAN loss. Due to memorization during the training process, the model output typically has lower reconstruction errors for images in the training set compared to unknown images.
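To make $L_{rec}$ concrete, the following is a minimal NumPy sketch of its two main flavors; the function names and array shapes are our own illustration, not the paper's released code.

```python
import numpy as np

def l1_reconstruction_error(output, target):
    """Mean per-pixel L1 error: used for image-translation models,
    comparing the victim's output V(x) with the ground truth y."""
    return float(np.abs(np.asarray(output, float) - np.asarray(target, float)).mean())

def cross_entropy_error(probs, labels):
    """Mean per-pixel cross-entropy: used for semantic segmentation.

    probs:  (H, W, C) per-pixel class probabilities from the model.
    labels: (H, W) integer ground-truth class indices.
    """
    probs = np.asarray(probs, float)
    h, w, _ = probs.shape
    # Pick out the probability assigned to the correct class at each pixel.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.log(np.clip(p_true, 1e-12, None)).mean())
```

Either way, a training member is expected to produce a lower error than an unseen image.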
\subsubsection{Predictable and Unpredictable Images} \label{sec:diff_score} In this section we address the following question: given an input-output sample, is the output easily predictable from the input? Consider, for example, supervised segmentation-to-image translation. That is, the task is to ``invert'' the segmentation process, and recover the original image that gave rise to a given segmentation map. It is clear that not all cases are equally predictable: (i) hard-to-predict images have sharp and detailed textures, whereas more predictable images have blurrier textures; (ii) images with semantic segmentation maps that contain only a few categories provide less guidance than those with more detailed segmentation maps, making the correct prediction less certain. The image predictability error should quantify these observations. In Sec.~\ref{sec:main_method_results} we show that such a predictability error is important for increasing the accuracy of membership inference attacks. We briefly describe two previous approaches for measuring image difficulty: \textbf{Human-Supervised:} Ionescu et al. \cite{tudor2016hard} proposed to define image difficulty as the human response time for solving a visual search task. For this, they collected human annotations for the PASCAL VOC 2012 dataset \cite{Everingham10} and trained a regression model, based on pre-trained deep features, to predict the collected difficulty score. The disadvantage of this method is that human-specified difficulty scores may not correlate with the predictability of the image synthesis by neural networks. This is demonstrated empirically in Sec.~\ref{sec:supervised_results}. \textbf{Multi-Image:} Another approach, taken by \cite{chen2019gan}, is training a model on a set of image pairs similar to the target distribution. This approach uses the reconstruction error of the external model on the target image pairs as its predictability error - larger reconstruction errors correspond to harder-to-predict images.
This approach has a significant drawback: a large number of images, similar to the target image, are required in order to learn a reliable generative model. In many cases, images from the target distribution may not be available. Additionally, training a model for every task is tedious and computationally expensive. \textbf{The Proposed Single-Sample Predictability Error:} We propose a new method to assign a predictability error for models with image outputs. This error measures the accuracy of predicting the output pixels from their high-level representation using linear regression. A related approach was proposed by Hacohen et al. \cite{hacohen2019power} for measuring image difficulty for classification models. Our method is significantly different as it is trained on a single input-output sample rather than on a large dataset. The linear regression model uses image features of a pre-trained Wide-ResNet$50{\times}2$ \cite{zagoruyko2016wide}. We concatenate the activation values in the first $4$ blocks, giving $56{\times}56$ feature vectors of size $3840$ each. The output image resolution is reduced to $56{\times}56$ to match the size of the first Wide-ResNet$50{\times}2$ block. We denote the concatenated feature vector for pixel $i$ as $\psi(i)$. The linear regression model $\mathbf{P}$ is a matrix of size $3840{\times}3$, multiplied with the feature vector $\psi(i)$ of pixel $i$ to give an estimate of the RGB colors $y^i$. We optimize $\mathbf{P}$ over 70\% of the pixels, selected at random. The image predictability error is the average absolute error over the remaining 30\% of the pixels: \begin{equation} \label{eq:L-diff} L_{pred}(x, y) = \frac{1}{N}\sum_{i=1}^{N} \| \mathbf{P} \psi(i) - y^i \|_1 \end{equation} where $y^i$ is the ground truth value of the $i$-th pixel in the resized ground truth image $y$. Fig.~\ref{fig:diff_score_samples} presents some images that received the highest and lowest predictability errors. See the supplementary material (SM) for more details.
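The single-sample predictability error of Eq.~(\ref{eq:L-diff}) amounts to a per-pixel linear regression with a random 70/30 fit/held-out split. Below is a sketch under the assumption that per-pixel features have already been extracted; any $(N, D)$ feature matrix stands in for the Wide-ResNet$50{\times}2$ activations, and the function name is ours.

```python
import numpy as np

def predictability_error(features, target, train_frac=0.7, seed=0):
    """Single-sample predictability error, a sketch of Eq. (L-diff).

    features: (N, D) per-pixel feature vectors (the paper concatenates
              Wide-ResNet50x2 activations with D = 3840; any feature
              matrix stands in here).
    target:   (N, 3) RGB values of the resized ground-truth image.
    """
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    perm = rng.permutation(n)
    n_train = int(train_frac * n)
    tr, te = perm[:n_train], perm[n_train:]
    # Fit the linear regressor P by least squares on 70% of the pixels.
    P, *_ = np.linalg.lstsq(features[tr], target[tr], rcond=None)
    # Average absolute error on the held-out 30% of the pixels.
    return float(np.abs(features[te] @ P - target[te]).mean())
```

An output that is (close to) a linear function of the features yields a near-zero error, while an output uncorrelated with the features yields a large one.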
\subsubsection{Membership Error} As observed before, for some samples the output image can be easily predicted from the input, while for other samples the output cannot be predicted. While the reconstruction error achieves high MIA success rates, it has a significant limitation - it does not discriminate between predictable and unpredictable samples. The victim model will have higher errors when generating unpredictable training samples, and lower errors when generating easily predictable ones. For such samples, the reconstruction error alone may yield wrong membership classifications. Given our image predictability error $L_{pred}$ in Eq.~(\ref{eq:L-diff}) and the reconstruction error $L_{rec}$, we calculate a membership error $L_{mem}$ as follows: \begin{equation} \label{eq:L-mem} L_{mem}(x,y) = L_{rec}(x, y) - \alpha \cdot L_{pred}(x, y) \end{equation} $L_{mem}$ is computed by subtracting the predictability error $L_{pred}$, weighted by $\alpha$, from the reconstruction error $L_{rec}$. Unless specified otherwise, we use $\alpha=1.0$, and present the effect of different $\alpha$ values in the SM. This lowers the membership error $L_{mem}$ for harder-to-predict images compared to easier-to-predict images having the same reconstruction error. See Fig.~\ref{fig:method_plot} for an overview illustration of our method. Using the membership error $L_{mem}$ (\ref{eq:L-mem}) for MIA substantially improves the success rates in all of our experiments, as shown in Tab.~\ref{tab:main_method_results} and Fig.~\ref{fig:calibration_effect}. \section{Experiments} We first investigate two factors that affect MIA accuracy, task difficulty and output dimensionality, and show that MIA is easier on difficult tasks with high output dimensionality. We then extensively evaluate our MIA on image translation and semantic segmentation networks. We also compare our single-sample predictability error against strong baselines. Additional results and ablations can be found in the SM.
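The decision rule is then a direct transcription of Eq.~(\ref{eq:L-mem}), with the threshold $\tau$ and weight $\alpha$ as in the text; a minimal sketch (names are ours):

```python
def membership_error(l_rec, l_pred, alpha=1.0):
    """Eq. (L-mem): reconstruction error calibrated by predictability."""
    return l_rec - alpha * l_pred

def is_member(l_rec, l_pred, tau, alpha=1.0):
    """Mark a query as a training member when L_mem falls below tau."""
    return membership_error(l_rec, l_pred, alpha) < tau
```

A hard-to-predict image with a moderate reconstruction error can thus still be flagged as a member, while an easy image with the same reconstruction error is not.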
In accordance with previous membership attack works, the success rate is measured using the area under the ROC curve (ROCAUC) metric. \subsection{Effectiveness of membership inference attacks} \label{sec:investigation_experimetns} As mentioned in Sec.~\ref{sec:invesigation_of_tasks}, there has been extensive research on MIA against various neural networks, resulting in variable accuracy. We investigated two factors that affect MIA accuracy: task difficulty and output dimensionality. \subsubsection{MIA accuracy vs. task difficulty} We defined task difficulty as the uncertainty of the output given the input image. In the limit of sufficiently large training datasets, when models are trained to perform easy tasks, such as auto-encoding - translating an image to itself - they are able to generalize well to unseen images, and do not need to depend on memorization of the training data in order to minimize the loss function. As membership inference attacks are highly correlated with model memorization, their performance decreases on such tasks. Conversely, models struggle with learning difficult tasks, in which \ul{the input does not contain sufficient information to fully specify the output}, and therefore the loss minimization encourages the model to memorize the training samples. This lack of predictability acts as a strong motivation for memorization, even in the limit of well-trained models on large datasets, provided sufficient capacity. In order to demonstrate this, we performed a reconstruction-based MIA, using the $L_{rec}$ described in Sec.~\ref{sec:l_rec_method}, on three tasks of different levels of difficulty. The first and easiest task is auto-encoding. For this, we attack an NVAE \cite{vahdat2020nvae} model trained on the CelebA \cite{liu2015faceattributes} dataset. The second task, more difficult than auto-encoding, is segmentation-to-image translation.
We attack two Pix2PixHD models \cite{wang2018high}, trained on the Maps2sat \cite{isola2017image} and Cityscapes \cite{Cordts2016Cityscapes} datasets. The third and most difficult task is landmarks-to-face translation. For this task we extracted facial landmarks \cite{bulat2017far} from $50$K CelebA images \cite{liu2015faceattributes}. We consider this to be the most difficult task of the three as the input contains less information on the output in comparison to segmentation maps (e.g.\ no information regarding the identity of the face). The results, presented in Tab.~\ref{tab:mia_diff_tasks}, demonstrate that reconstruction-based MIAs are more successful on difficult tasks. \begin{figure}[tb] \begin{center} \subfigure[Pix2pixHD - Maps]{ \includegraphics[width=0.47\linewidth, height=0.35\linewidth]{figures/calibration_effect/p2phd_maps_calibration_effect_2.pdf} } \subfigure[Pix2pixHD - Cityscapes]{ \includegraphics[width=0.47\linewidth, height=0.35\linewidth]{figures/calibration_effect/p2phd_cityscapes_calibration_effect_2.pdf} } \end{center} \caption{The proposed membership error $L_{mem}$ can better separate train (blue) and test (orange) images by a simple threshold (i.e.\ a vertical line) compared to the reconstruction error $L_{rec}$.} \label{fig:calibration_effect} \end{figure} \vspace{-6pt} \subsubsection{MIA accuracy vs. output dimensionality} Previous works mostly focused on MIA against classification models, where there is a single output, i.e.\ a probability vector or, in the more restrictive case, a single label. It is natural to ask whether higher-dimensional outputs are more vulnerable due to the ensemble effect of combining the attacks on each individual output dimension into a stronger, joint attack.
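This ensemble effect can be sketched directly: each per-pixel error acts as a weak attack, and averaging the errors over a random subset of $N$ output pixels yields the joint attack, scored by ROCAUC and averaged over several draws. The function names and the use of pre-computed per-pixel error arrays are our own illustration.

```python
import numpy as np

def attack_auc(member_scores, nonmember_scores):
    """ROCAUC of the attack: the probability that a random member gets a
    lower error than a random non-member (ties count 1/2)."""
    m = np.asarray(member_scores)[:, None]
    n = np.asarray(nonmember_scores)[None, :]
    return float((m < n).mean() + 0.5 * (m == n).mean())

def subset_attack_auc(member_px, nonmember_px, n_pixels, repeats=10, seed=0):
    """Reconstruction attack restricted to a random subset of output
    pixels, averaged over several draws (a sketch; names are ours).

    member_px / nonmember_px: (num_images, num_pixels) arrays of
    per-pixel reconstruction errors.
    """
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(repeats):
        idx = rng.choice(member_px.shape[1], size=n_pixels, replace=False)
        aucs.append(attack_auc(member_px[:, idx].mean(axis=1),
                               nonmember_px[:, idx].mean(axis=1)))
    return float(np.mean(aucs))
```

When the per-pixel errors separate members from non-members only weakly, averaging over more pixels sharpens the separation, which is the ensemble effect described above.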
We perform a reconstruction-based MIA on the Pix2PixHD architecture, trained on the CMP Facades \cite{Tylecek13}, Maps2Sat and Cityscapes datasets, as well as on three semantic segmentation models - UperNet50, UperNet101 \cite{xiao2018unified} (using ResNet50 and ResNet101 as backbones) and HRNetV2 \cite{sun2019high} - trained on the ADE20K dataset \cite{zhou2017scene}. Fig.~\ref{fig:dim_reduce_effect} demonstrates the effect of reducing the output dimension on the attack accuracy. The reduction is achieved by randomly sampling $N$ output pixels, and using them as the output, where $N$ ranges from a single pixel up to $200$ pixels. Note that in the case of semantic segmentation, having only a single pixel output is equivalent to classification. We repeat this experiment $10$ times and report the average result. We observe that MIA accuracy indeed scales with the number of output dimensions. Results for other models are presented in the SM. \subsection{MIA accuracy evaluation} \label{sec:main_method_results} We evaluate our membership attack on three image translation architectures - Pix2Pix \cite{isola2017image}, Pix2PixHD \cite{wang2018high} and SPADE \cite{park2019semantic}; three semantic segmentation architectures - UperNet50 and UperNet101 \cite{xiao2018unified} (ResNet50 and ResNet101 as backbones) and HRNetV2 \cite{sun2019high}; as well as two medical segmentation architectures - Inf-Net \cite{fan2020inf} and PraNet \cite{fan2020pra}. We evaluated on various datasets, including CMP Facades \cite{Tylecek13}, Maps2sat \cite{isola2017image}, Cityscapes \cite{Cordts2016Cityscapes} and ADE20K \cite{zhou2017scene}. In the case of medical segmentation we evaluated two tasks: lung infection segmentation from COVID-19 CT images \cite{fan2020inf} and polyp segmentation in colonoscopy images \cite{silva2014toward, bernal2015wm, tajbakhsh2015automated, vazquez2017benchmark, jha2020kvasir}.
All Pix2pix and Pix2pixHD models are trained from scratch, with the exception of the Pix2pixHD model for the Cityscapes dataset, for which we use the supplied large pre-trained model due to computational constraints at high resolution. The rest of the models are pre-trained. It can be seen in Tab.~\ref{tab:main_method_results} that while using the reconstruction error alone achieves a high success rate, the membership error (which calibrates the result by sample predictability) significantly improves the results. Fig.~\ref{fig:calibration_effect} demonstrates the effect of subtracting the predictability error from the reconstruction error. After calibration, a single threshold on the membership error can separate train and test images. Results on additional benchmarks are presented in the SM. Utilizing common image augmentations, i.e.\ horizontal flipping and random cropping, in order to construct a larger set $\{(x_{aug}, y_{aug})\}$ has a small impact on the attack accuracy, as the output dimension is large enough as is. \begin{table}[tb] \centering \small \begin{tabular}{lcccc} \toprule \textbf{Model} & \textbf{Dataset} & \multicolumn{2}{c}{\textbf{Single}} & \textbf{Multi}\\ & & \textbf{Ours} & \textbf{Superv.} & \textbf{In-Dist.}\\ \midrule Pix2pix & Facades & \textbf{97.59\%} & 93.67\% & -\\ Pix2pix & Maps2sat & 85.65\% & 86.48\% & \textbf{92.43\%}\\ Pix2pix & Cityscapes & \textbf{83.23\%} & 77.06\% & 82.47\%\\ \midrule Pix2pixHD & Facades & \textbf{99.95\%} & 98.86\% & -\\ Pix2pixHD & Maps2sat & \textbf{99.42\%} & 98.38\% & 82.87\%\\ Pix2pixHD & Cityscapes & \textbf{99.09\%} & 96.86\% & 94.76\%\\ \midrule UperNet50 & ADE20K & \textbf{98.09\%} & 96.79\% & 79.47\% \\ UperNet101 & ADE20K & \textbf{96.94\%} & 95.49\% & 76.01\% \\ HRNetV2 & ADE20K & \textbf{85.92\%} & 84.42\% & 84.48\%\\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{MIA accuracy of our self-supervised single-sample method vs.
using human-supervised single-sample and multi-image baselines for the predictability error. Note that the in-distribution multi-image baseline requires extra supervision of $100$ images.} \label{tab:single_vs_multi_diff_score} \end{table} \subsubsection{Comparison to Human Supervision} \label{sec:supervised_results} We compare our self-supervised single-sample predictability error with the human-supervised difficulty score described in Sec.~\ref{sec:diff_score}. This score was proposed by Ionescu et al. \cite{tudor2016hard}, who defined image difficulty as the human response time for solving a visual search task. In order to provide a fair comparison, we replace the pretrained VGG-f \cite{chatfield2014return} features, used by \cite{tudor2016hard}, with the more powerful pretrained Wide-ResNet$50\times2$ \cite{zagoruyko2016wide} features we use in our predictability error. Samples of images ranked as easy and hard by the supervised score are presented in the SM. As can be seen in Tab.~\ref{tab:single_vs_multi_diff_score}, our self-supervised single-sample predictability error outperforms the human-supervised difficulty score. In the SM, we compare the relation between the reconstruction error and the supervised score with the relation between the reconstruction error and our self-supervised predictability error, and show that our score is better correlated with the reconstruction error. \begin{table}[tb] \centering \small \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{Dataset} & \textbf{Ours} & \textbf{Shadow Model}\\ \midrule Pix2pix & Maps2sat & \textbf{85.65\%} & 80.15\%\\ Pix2pix & Cityscapes & \textbf{83.23\%} & 78.68\%\\ \midrule Pix2pixHD & Maps2sat & \textbf{99.42\%} & 98.63\%\\ Pix2pixHD & Cityscapes & \textbf{99.09\%} & 96.39\%\\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{Comparison between our MIA and the popular shadow-model-based classifier attack, using $100$ train and $100$ test samples.
Our MIA outperforms it while not requiring extra training images.} \label{tab:shadow_model} \end{table} \subsubsection{Comparison to Multi-Image Scores} \label{sec:single_vs_multi_diff_score} Although our MIA method does not require the availability of multiple auxiliary samples from the target distribution or from a similar distribution, it is interesting to compare our single-sample predictability error to methods that use multiple samples. We compute multi-sample predictability errors (MSPS) by training a ``shadow'' model to map the input to output images in the auxiliary samples. As an upper bound on MSPS performance, the shadow model is given exactly the same architecture as used by the victim model (although this knowledge may not be available in practice). The MSPS is defined by the reconstruction error of the shadow model on the target sample. Two scenarios were evaluated: \textbf{In-distribution data:} In this setting the shadow model's data shares the distribution of the victim's training data, by being trained on $100$ randomly sampled image pairs from the test set of the corresponding dataset. Facades was not used as it did not have enough images. The results are presented in Tab.~\ref{tab:single_vs_multi_diff_score}. For Pix2PixHD and semantic segmentation, MSPS underperformed our method (as $100$ samples are insufficient for training such large models). As Pix2Pix is a smaller network, MSPS was more successful there, obtaining results competitive with our method. Note that it still requires extra samples, which are often not available. We analyzed the number of samples required for MSPS to reach the accuracy of our method; in most tasks, even $100$ were insufficient (see SM). \textbf{Auxiliary dataset:} As suggested by He et al. \cite{he2019segmentations}, we also compare our method to the setting where many out-of-distribution but similar samples are available.
We trained shadow models on $4$K image pairs from the BDD dataset \cite{yu2018bdd100k} as MSPS for the Cityscapes dataset, as both datasets consist of street scene images and have compatible label spaces. We found that MSPS underperformed our method by $10\%$--$30\%$ (see SM for exact results). Note that it is rare to have similar datasets with nearly identical labels; Cityscapes was the only dataset among those evaluated in this paper for which such a similar dataset could be found. \textbf{Shadow-model classifiers:} Although deviating somewhat from predictability errors, in the interest of completeness, we report the ROCAUC results of the popular approach of shadow-model classifiers for image translation MIA, see Tab.~\ref{tab:shadow_model} (classification accuracy is lower, see SM). We use the approach of He et al. \cite{he2019segmentations} and train a classifier to distinguish between the ``loss maps'' of the train and test auxiliary samples of the shadow model. The classifier is then used to score images of the target dataset as train or test (see \cite{he2019segmentations} for the complete details). We show that this approach underperforms our method in both the in-distribution and auxiliary dataset settings (exact results presented in the SM). It is surprising that shadow models do not perform well on image translation MIA as they are very effective for image segmentation MIA (as shown in \cite{he2019segmentations}). We believe the difference in performance can be explained by the fact that segmentation maps have similar distributions between datasets with similar label spaces, while natural images have very different distributions - making membership classifiers generalize better on the image2seg task than on the seg2image task. We note again that such techniques require the availability of auxiliary in-distribution samples or very similar datasets, which is often not possible.
For example, medical segmentation datasets are often quite small due to the sensitivity of the data and the high cost of obtaining ground truth labels. The COVID-19 CT dataset \cite{fan2020inf} is composed of $50$ train and $50$ test samples, too small to apply a shadow-model-based attack. \section{Discussion} \label{sec:discussion} \textbf{Effect of Memorization:} Membership inference attacks are closely related to memorization in the victim model. In order to better understand this relation, we measure the success of our attack under different levels of memorization. We do so by evaluating our attack on checkpoints saved at different epochs during the training of our image-translation victim models. We observed that as the training process progresses, the victim model memorizes the training data, which results in higher attack success rates (see SM). \begin{table}[tb] \centering \small \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{Dataset} & \textbf{Orig} & \textbf{No $L_{rec}$}\\ \midrule Pix2pix & Maps2sat & 84.22\% & 68.44\%\\ Pix2pix & Cityscapes & 77.74\% & 51.06\% \\ \midrule Pix2pixHD & Maps2sat & 95.74\% & 75.76\%\\ Pix2pixHD & Cityscapes & 96.04\% & 56.58\%\\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{Effect of cGAN and reconstruction losses on the accuracy of reconstruction-based MIA. Models trained with only the cGAN loss are less susceptible to MIA.} \label{tab:mia_rec_loss} \end{table} \textbf{Reconstruction loss effect on MIA:} We evaluated the effect of the reconstruction loss term on the accuracy of reconstruction-based MIA. For this, we compared the accuracy of the attack against image translation models trained using both the reconstruction and cGAN loss terms versus models trained using only the cGAN loss term. As can be seen in Tab.~\ref{tab:mia_rec_loss}, the reconstruction loss indeed has a significant impact on the attack accuracy.
\begin{table}[tb] \centering \small \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{Dataset} & \textbf{No-defense} & \textbf{Argmax-defense}\\ \midrule UperNet50 & ADE20K & 98.09\% & 98.05\% \\ UperNet101 & ADE20K & 96.94\% & 97.11\% \\ HRNetV2 & ADE20K & 85.92\% & 89.32\% \\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{Comparison between our attack accuracy (ROCAUC) on undefended semantic segmentation models and models defended by the argmax defense. As can be seen, our attack still manages to succeed much better than random guessing.} \label{tab:argmax_defense} \end{table} \textbf{Argmax defense:} In the argmax defense, the victim model returns only the predicted label, rather than the full probability vector. As image translation models predict pixel values, and do not output probability vectors, this defense does not apply to them. For semantic segmentation models, we evaluate our attack against this defense by replacing the cross-entropy loss in $L_{rec}$ with the $L_0$ error. As can be seen in Tab.~\ref{tab:argmax_defense}, the attack efficacy is not reduced, demonstrating the weakness of this defense. \textbf{Differentially private SGD (DP-SGD) defense:} In the defense by Abadi et al. \cite{abadi2016deep}, the commonly used Stochastic Gradient Descent optimization algorithm is modified in order to provide a differentially private \cite{dwork2014algorithmic} model. This is done by adding Gaussian noise to the clipped gradients of each sample in every training batch. There exists a trade-off between privacy and utility, in which the amount of added noise must be large enough to ensure privacy while not degrading the model's outputs to the point where the model is useless. Training a deep model with DP-SGD is an unstable process. We experimented with multiple common configurations, i.e.\ added-noise ratios and maximum gradient-clipping thresholds, and were not able to find a configuration that yields visually satisfying results.
Hence, although the DP-SGD defense theoretically protects the victim model against membership inference attacks, in practice we find it impractical against our attack as it results in total corruption of the victim model. \textbf{Gauss defense:} In this defense, we add Gaussian noise to the image generated by the victim model \cite{gilmer2019adversarial}. This attempts to hide specific artifacts of the model. We evaluate our attack accuracy as a function of the noise standard deviation (STD). Fig.~\ref{fig:gauss_defense} shows that a considerable amount of noise, which corrupts the generated output, is required in order to have a significant effect on the success of our attack. Moreover, it can be seen that even with large amounts of noise, our attack still manages to succeed much better than random guessing. This implies that our attack is robust to the Gauss defense. Results on additional benchmarks are presented in the SM. \begin{figure}[tb] \begin{center} \subfigure[Pix2pixHD]{ \includegraphics[width=0.45\linewidth, height=0.4\linewidth]{figures/gauss_defense/gauss_effect_p2phd.pdf} } \subfigure[Semantic Segmentation]{ \includegraphics[width=0.45\linewidth, height=0.4\linewidth]{figures/gauss_defense/gauss_effect_semseg.pdf} } \end{center} \caption{Effect of the Gauss defense on the attack accuracy. Even with large amounts of added noise, our attack still manages to succeed much better than random guessing.} \label{fig:gauss_defense} \end{figure} \vspace{-6pt} \section{Conclusion} In this work, we highlight two properties that make tasks more vulnerable to MIA: i) Uncertainty: tasks where there is more uncertainty in the prediction of the output given an input; ii) Output dimensionality: tasks with high-dimensional outputs. We show that a black-box reconstruction-based membership attack is very effective on two tasks that exhibit these properties: image translation and semantic segmentation, including medical segmentation.
We further improve the attack by proposing a novel image predictability error. Our membership error, composed of the reconstruction and predictability errors, has been extensively evaluated on various benchmarks and was shown to achieve high accuracy. \section{Acknowledgments} This research was supported by grants from the Israel Science Foundation and from the DFG. {\small \bibliographystyle{ieee_fullname} \section{Introduction} Deep neural networks have been widely adopted in various computer vision tasks, e.g.\ image classification, semantic segmentation, image translation and generation, etc. Due to the high sample-complexity of such models, they require large amounts of training data. However, the collection and annotation of many training samples is an expensive and labor-intensive process. In many domains, such as medical imaging, publicly available training data are particularly scarce due to privacy concerns. In such settings, a common solution is training the model privately and then providing black-box access to the trained model. However, even black-box access may leak sensitive information about the training data. \textit{Membership inference attacks (MIA)} are one way to detect such leakage. Given access to a data sample, an attacker attempts to find whether or not the sample was used in the training process. Due to memorization in deep neural networks, prediction confidence tends to be higher for images used in training. This difference in prediction confidence helps MIA methods to successfully determine which images were used for training. Therefore, in addition to detecting information leakage, MIAs also provide insight into the degree of memorization in the victim model. MIA has previously been applied to a variety of neural network tasks, including classification, generative adversarial models, and segmentation. The accuracy achieved by MIA can vary greatly as a function of different properties of the attempted tasks.
Our empirical results highlight two properties that make tasks more vulnerable to MIA: i) Uncertainty: tasks where there is more uncertainty in the prediction of the output given an input are more susceptible to MIA. ii) Output dimensionality: tasks with higher-dimensional outputs are more vulnerable to MIA. These apparently simple properties can explain non-intuitive observations, for example, that reconstruction-based MIA on CycleGAN leads to very low accuracy. Since a training image, after being translated to the other domain and back, should be reconstructed exactly, there is no uncertainty in the output given the input. As the desired output is fully specified by the input (i.e.\ they are identical), the reconstruction task is easy, leading to low MIA accuracy. Motivated by the above findings, we focus our attention on two tasks that exhibit these properties: supervised image translation and semantic segmentation, including medical image segmentation. We begin by evaluating a simple-to-implement but effective MIA that uses the pixel-wise reconstruction error between the model output and the ground truth. This approach exploits memorization of the training data in the victim model, resulting in lower reconstruction errors on images used for training. However, we observe that for some sample images the ground truth result can be easily predicted, and for others it is harder to predict. The reconstruction error alone is therefore less accurate at discriminating between hard-to-predict samples used in training and easy samples not seen before. To overcome this limitation, we propose a novel predictability error which is computed for each query input image and its ground truth output. Our predictability error uses the accuracy of a linear predictor computed over the given query image, predicting pixel values from deep features of the input image.
The linear predictor serves as a simple approximation of the task attempted by the victim model, providing an indication of the ease of predicting the output image from the input. The reconstruction error, together with the predictability error, helps to discriminate between two factors of variation in the reconstruction error: (i) the ``intrinsic'' difficulty of the generation task for each image, based on its predictability error, and (ii) the boost in accuracy due to memorization of the training images. Defining a membership error that subtracts the predictability error from the reconstruction error is shown empirically to achieve high success rates in MIA. Differently from other MIA approaches, we do not assume the existence of a large number of in-distribution data samples for training a shadow model, but rather operate on a single sample, using only a single query to the victim model. Our method is demonstrated to be effective over strong baselines on an extensive set of benchmarks, taken from image translation, semantic segmentation, and medical image segmentation. We discuss possible defenses against MIA and show their ineffectiveness against our method. Our main contributions are: \vspace{-6pt} \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{1pt} \setlength{\parsep}{1pt} \item Highlighting two key properties of tasks that are highly vulnerable to MIA. \item Presenting the first MIA on image translation models. \item Proposing a general single-sample, self-supervised, image predictability error for MIA.
\end{enumerate} \begin{table}[tb] \centering \small \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{Dataset} & \textbf{Reconstruction} & \textbf{Membership} \\ \textbf{} & \textbf{} & \textbf{Error} & \textbf{Error} \\ \midrule Pix2pix & Facades & 93.62\% & \textbf{97.59\%} \\ Pix2pix & Maps2sat & 84.22\% & \textbf{85.65\%} \\ Pix2pix & Cityscapes & 77.74\% & \textbf{83.23\%} \\ \midrule Pix2pixHD & Facades & 98.92\% & \textbf{99.95\%} \\ Pix2pixHD & Maps2sat & 95.74\% & \textbf{99.42\%} \\ Pix2pixHD & Cityscapes & 96.04\% & \textbf{99.09\%} \\ \midrule SPADE & Cityscapes & 99.75\% & \textbf{100\%}\\ SPADE & ADE20K & 85.31\% & \textbf{89.79\%}\\ \midrule UperNet50 & ADE20K & 96.80\% & \textbf{98.09\%} \\ UperNet101 & ADE20K & 95.74\% & \textbf{96.94\%} \\ HRNetV2 & ADE20K & 83.67\% & \textbf{85.92\%} \\ \midrule Inf-Net & COVID19 & 97.16\% & \textbf{99.01\%} \\ PraNet & Polyp & 96.03\% & \textbf{96.38\%} \\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{Membership attack ROCAUC using our (i) reconstruction error $L_{rec}$ and (ii) membership error $L_{mem}$. Using the membership error, which subtracts the image predictability error from the reconstruction error, substantially improves performance.} \label{tab:main_method_results} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=0.95\linewidth, height=0.45\linewidth]{figures/method_inp2out_new.pdf} \end{center} \caption{Illustration of the proposed black-box membership inference attack. Here shown for the case of image translation over the Cityscapes dataset. We would like to determine if a given sample was used in training. The victim model predicts a reconstructed image based on the input. In the top path the difference between the reconstructed image and the ground truth image gives the reconstruction error $L_{rec}$. 
In the bottom path we compute the predictability error $L_{pred}$ of the sample from the error of a linear predictor to predict pixel values of the ground-truth image from deep features of the input. Subtracting $L_{pred}$ from $L_{rec}$ gives the membership error, $L_{mem}$.} \label{fig:method_plot} \end{figure} \vspace{-10pt} \section{Related Work} \subsection{Membership Inference Attacks (MIA)} Shokri et al. \cite{shokri2017membership} were the first to study MIA against classification models in a black-box setting. In this setting the attacker can only send queries to the victim model and get the full probability vector response, without being exposed to the model itself. They proposed to train multiple shadow models to mimic the behavior of the victim model, and then use those to train a binary classifier to distinguish between known samples from the train set and unknown samples. They assume the existence of new in-distribution training data and knowledge of the victim model architecture. Salem et al. \cite{salem2018ml} further relaxed those assumptions and demonstrated that using only one shadow model is sufficient, and proposed using an out-of-distribution dataset and different shadow-model architectures, at the cost of a slightly inferior attack. Even more interestingly, they showed that without any training, a simple threshold on the victim model's confidence score is sufficient. This shows that classification models are more confident on samples that appeared in the training process than on unseen samples. Sablayrolles et al. \cite{sablayrolles2019white} proposed an attack based on applying a threshold over the loss value rather than the confidence and showed that black-box attacks are as good as white-box attacks. As the naive defense against such attacks is to modify the victim model's API to only output the predicted label, other works proposed label-only attacks \cite{yeom2018privacy, li2020label, choo2020label}.
While most previous work has been around classification models, there has been some effort regarding MIA on generative models such as GANs and VAEs \cite{chen2019gan, hayes2019logan, hilprecht2019monte}. An attack against semantic segmentation models was proposed by He et al. \cite{he2019segmentations}, where a shadow semantic segmentation model is trained, and is used to train a binary classifier. The classifier is trained on image patches, and the final decision regarding the query image is made by aggregating the per-patch classification scores. The input to the classifier is a structured loss map between the shadow model's output and the ground truth segmentation map. Although this task is the closest to ours, our work is the first study of membership inference attacks on image translation models. We also note that \cite{he2019segmentations} consider the setting where other input-output samples from the data distribution (or a very similar distribution) are available, whereas our attack does not require this information. Besides membership inference attacks, other privacy attacks against neural networks exist. We refer the reader to the supplementary material (SM) for more details. \subsection{Conditional Image Generation} Image-to-image translation is the task of mapping an image from a source domain to a target domain, while preserving the semantic and geometric content of the input image. The most popular methods for training image-to-image translation models use Generative Adversarial Networks (GANs) \cite{goodfellow2014generative}, and are currently used in two main scenarios: (i) unsupervised image translation between domains \cite{zhu2017unpaired, kim2017learning, liu2017unsupervised, choi2018stargan}; (ii) serving as a perceptual image loss function \cite{isola2017image, wang2018high, zhu2017toward}. In this work we introduce the novel task of MIA on conditional image generation models.
\subsection{Semantic Segmentation} Semantic segmentation is the task of assigning a class label to each pixel in the input image. This can be thought of as a per-pixel classification problem. State-of-the-art methods \cite{xiao2018unified, sun2019high} are based on fully convolutional networks and multi-scale representations of the input \cite{liu2019recent, minaee2020image}. \section{Difficulty-based MIA} We investigate the effect of task difficulty and dimensionality on the success of MIA. Consequently, we concentrate on two promising tasks for MIA: image translation and semantic segmentation. We also present a novel image predictability error which significantly improves MIA accuracy. \subsection{Effects of task difficulty and dimensionality} \label{sec:invesigation_of_tasks} Every neural network model is a potential target for MIA. Previous work attempted MIA on many different models (classification, GANs, segmentation) with highly variable accuracy. In this section we present an investigation into two factors that affect MIA accuracy: task difficulty and output dimensionality. Full details are provided in Sec.~\ref{sec:investigation_experimetns}. We perform reconstruction-based MIA by measuring the reconstruction error between the model output and the ground truth. This is done for multiple models, datasets and tasks. The attack method is described in more detail in Sec.~\ref{sec:main_method}. \begin{table}[tb] \centering \small \begin{tabular}{lcc} \toprule \textbf{Model} & \textbf{Task} & \textbf{Reconstruction Error}\\ \midrule NVAE & CelebA2Self & 50.74\% \\ \midrule Pix2pixHD & Maps2sat & 95.74\% \\ Pix2pixHD & Cityscapes & 96.04\%\\ \midrule Pix2pixHD & Landmarks2CelebA & 99.54\% \\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{Comparison of reconstruction-based MIA accuracy on tasks with different difficulties.
Easier tasks, e.g.\ CelebA2Self, in which there is no uncertainty in the output given the input image, suffer less from memorization of the training data and therefore have lower vulnerability to MIA. As the uncertainty increases (segmentation maps and landmarks) models tend to memorize the training data and therefore the MIA accuracy increases. } \label{tab:mia_diff_tasks} \end{table} \textbf{MIA accuracy vs. task difficulty:} We present results of a reconstruction-based attack on three tasks of different difficulties. We define task difficulty as the uncertainty in the output given the input image. The tasks are: i) auto-encoding, i.e.\ translating an image to itself; ii) segmentation-to-image translation; iii) landmark-to-face translation. The first task is the easiest, as the output is trivially determined by the input. Landmark-to-face is harder than segmentation-to-image, as the input contains less information about the output (e.g.\ no information about the identity of the face), and therefore requires much more memorization. The results are presented in Tab.~\ref{tab:mia_diff_tasks}, where it can be seen that MIA is indeed more accurate for harder tasks. \begin{figure}[t] \begin{center} \subfigure[Pix2pixHD]{ \includegraphics[width=0.46\linewidth, height=0.4\linewidth]{figures/dim_reduce_effect/reduce_dim_effect_p2phd.pdf} } \subfigure[Semantic Segmentation]{ \includegraphics[width=0.46\linewidth, height=0.4\linewidth]{figures/dim_reduce_effect/reduce_dim_effect_semseg.pdf} } \end{center} \caption{Effect of reducing output dimensionality over a reconstruction-based attack. MIA accuracy is correlated with the output dimension, i.e.\ number of pixels, demonstrating that high output dimensionality tasks are more vulnerable to MIA.} \label{fig:dim_reduce_effect} \end{figure} \textbf{MIA accuracy vs.
output dimensionality:} Many MIA approaches attack classification networks that have only a single output, usually a probability vector or in the more restrictive case, a single label. It is natural to ask if tasks with higher dimensional outputs are more vulnerable to MIA due to the ensemble effect of attacking each individual output dimension. The intuition behind this is similar to the ensemble effect of boosting algorithms - each output pixel serves as a weak attack, and the aggregation of the reconstruction errors between all output pixels can produce a strong attack. In Fig.~\ref{fig:dim_reduce_effect}, we provide a comparison of reconstruction-based MIA when subsets of different sizes are used as the output. Note that segmentation with only a single pixel output is equivalent to classification. We can observe that MIA accuracy indeed scales with output dimensionality. \begin{figure*}[t] \begin{center} \begin{tabular}{@{\hskip0pt}c@{\hskip0pt}c@{\hskip3pt}c@{\hskip0pt}c@{\hskip2pt}c@{\hskip0pt}c@{\hskip2pt}c@{\hskip0pt}c} & & City - input & City - out & Maps - input & Maps - out & Covid - input & Covid - out \\ \begin{turn}{90} ~~~~~~~~ Low \end{turn} & \begin{turn}{90} ~ predictability \end{turn} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cityscapes_easy_inp_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cityscapes_easy_gt_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/maps_easy_inp_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/maps_easy_gt_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cov_easy_inp_0.jpg} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cov_easy_gt_0.png} \\ \begin{turn}{90} ~~~~~~~~ High \end{turn} & \begin{turn}{90} ~ predictability \end{turn} & 
\includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cityscapes_hard_inp_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cityscapes_hard_gt_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/maps_hard_inp_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/maps_hard_gt_0.png} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cov_hard_inp_0.jpg} & \includegraphics[width=0.14\linewidth, height=0.14\linewidth]{figures/high_low_samples/cov_hard_gt_0.png} \\ \end{tabular} \end{center} \caption{Examples of input-output pairs from the Cityscapes, Maps2sat and COVID19-CT datasets that received the lowest (first row) and highest (second row) predictability errors using our single-sample approach. It can be seen that detailed images with complicated patterns are ranked as difficult to predict, while images with fewer details and lower contrast are ranked as easier to predict.} \label{fig:diff_score_samples} \end{figure*} \subsection{MIA for high output dimensionality} \label{sec:main_method} We showed in Sec.~\ref{sec:invesigation_of_tasks} that both task difficulty and output dimensionality are correlated with the accuracy of MIA. We therefore focus on two important but difficult image tasks that have high-dimensional outputs: image translation and semantic segmentation. To the best of our knowledge, this is the first paper to consider MIA on image translation models. We propose a simple but effective attack, assuming that the attacker has only black-box access to the victim model $\mathbf{V}$. Differently from most previous works, we do not use shadow models or train a binary classifier, and thus require no additional training data and only a single query to the victim model.
Our membership attack is performed on a pair of query images $(x, y)$ where $x \in \mathbb{R}^{h \times w}$ is an image from the input domain ($h$ and $w$ are the image height and width respectively) and $y \in \mathbb{R}^{h \times w}$ is the ground truth from the output domain. The requirement that the ground-truth image $y$ be available is in line with previous works, and is a reasonable assumption in our target scenario. For each query we compute a membership error, $L_{mem}$ (see Eq.~(\ref{eq:L-mem})), to which we apply a pre-defined threshold $\tau$, such that all queries where $L_{mem}(x, y) < \tau$ are marked as members of the training data. The membership error has two elements: a reconstruction error and a predictability error. \vspace{-6pt} \subsubsection{Reconstruction Error for Membership} \label{sec:l_rec_method} Typical MIAs on classification models consider the probability (or confidence) given by the model to the correct class. Semantic segmentation is an extension where the output is a probability vector for each pixel. Image translation models are different as they output a color value for each pixel. This value is the maximum likelihood estimate, and no probability distribution over possible values is given. We propose to use the loss term as a reconstruction error, $L_{rec}$, computing the pixel-wise difference between the output produced by a black-box access to the model, $\mathbf{V}(x)$, and the ground truth $y$. For semantic segmentation, where the output is a probability vector for each pixel, we use the cross-entropy error. For medical segmentation, we use the weighted IoU (Intersection over Union) loss and binary cross-entropy loss. In the case of image translation we use the $L_1$ error, as we do not assume access to the discriminator and therefore cannot use a GAN loss. Due to memorization during the training process, the model output typically has lower reconstruction errors for images in the training set compared to unknown images.
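For concreteness, the reconstruction errors described above can be sketched as follows (a minimal NumPy sketch; the function names are ours, inputs are assumed to be arrays with values in $[0,1]$ for translation and per-pixel softmax probabilities for segmentation):

```python
import numpy as np

def l_rec_translation(output, ground_truth):
    """Pixel-wise L1 reconstruction error for image translation,
    where `output` is the victim model's prediction V(x)."""
    return np.abs(output - ground_truth).mean()

def l_rec_segmentation(probs, labels):
    """Mean per-pixel cross-entropy for semantic segmentation.
    `probs`: (H, W, C) per-pixel class probabilities;
    `labels`: (H, W) integer ground-truth classes."""
    h, w = labels.shape
    # Probability assigned to the correct class at each pixel.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.log(p_true + 1e-12).mean()
```

Memorization shows up as a systematically lower $L_{rec}$ on training samples, which is what the threshold in the attack exploits.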
\subsubsection{Predictable and Unpredictable Images} \label{sec:diff_score} In this section we address the following question: Given an input-output sample, is the output easily predictable from the input? Consider, for example, supervised segmentation-to-image translation, i.e.\ the task of ``inverting'' the segmentation process and recovering the original image that gave rise to a given segmentation map. It is clear that not all cases are equally predictable: (i) hard-to-predict images have sharp and detailed textures, whereas more predictable images have blurrier textures; (ii) images with semantic segmentation maps that contain only a few categories provide less guidance than those with more detailed segmentation maps, making the correct prediction less certain. The image predictability error should quantify these observations. In Sec.~\ref{sec:main_method_results} we show that such a predictability error is important for increasing the accuracy of membership inference attacks. We briefly describe two previous approaches for measuring image difficulty: \textbf{Human-Supervised:} Ionescu et al. \cite{tudor2016hard} proposed to define image difficulty as the human response time for solving a visual search task. For this, they collected human annotations for the PASCAL VOC 2012 dataset \cite{Everingham10} and trained a regression model, based on pre-trained deep features, to predict the collected difficulty score. The disadvantage of this method is that human-specified difficulty scores may not correlate with the predictability of image synthesis by neural networks. This is demonstrated empirically in Sec.~\ref{sec:supervised_results}. \textbf{Multi-Image:} Another approach, taken by \cite{chen2019gan}, is training a model on a set of image pairs similar to the target distribution. This approach uses the reconstruction error of the external model on the target image pairs as its predictability error: larger reconstruction errors correspond to harder-to-predict images.
This approach has a significant drawback: a large number of images, similar to the target image, are required in order to learn a reliable generative model. In many cases, images from the target distribution may not be available. Additionally, training a model for every task is tedious and computationally expensive. \textbf{The Proposed Single-sample predictability error:} We propose a new method to assign a predictability error for models with image outputs. This error measures the accuracy of predicting the output pixel from its high-level representation using linear regression. A related approach was proposed by Hacohen et al. \cite{hacohen2019power} for measuring image difficulty for classification models. Our method is significantly different as it is trained on a single input-output sample rather than on a large dataset. The linear regression model uses image features of a pre-trained Wide-ResNet$50{\times}2$ \cite{zagoruyko2016wide}. We concatenate the activation values in the first $4$ blocks, giving $56{\times}56$ feature vectors of size $3840$ each. The output image resolution is reduced to $56{\times}56$ to match the size of the first Wide-ResNet$50{\times}2$ block. We denote the concatenated feature vector for pixel $i$ as $\psi(i)$. The linear regression model $\mathbf{P}$ is a matrix of size $3840{\times}3$, multiplied by the feature vector $\psi(i)$ of pixel $i$ to give an estimate of the RGB colors $y^i$. We optimize $\mathbf{P}$ over a randomly selected $70\%$ of the pixels. The image predictability error is the average absolute error over the remaining $30\%$ of the pixels: \begin{equation} \label{eq:L-diff} L_{pred}(x, y) = \frac{1}{N}\sum_{i=1}^{N} \| \mathbf{P} \psi (i) - y^i\| _1 \end{equation} where $y^i$ is the ground truth value of the $i$-th pixel in the resized ground truth image $y$. Fig.~\ref{fig:diff_score_samples} presents some images that received the highest and lowest predictability errors. See the supplementary material (SM) for more details.
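A minimal sketch of this procedure, assuming the per-pixel features $\psi(i)$ have already been extracted (the paper uses Wide-ResNet$50{\times}2$ activations of dimension $3840$; here the feature matrix is simply an input, and the function name is ours):

```python
import numpy as np

def l_pred(features, y, train_frac=0.7, seed=0):
    """Single-sample predictability error (a sketch of L_pred).
    `features`: (N, D) per-pixel feature vectors psi(i);
    `y`: (N, 3) RGB targets from the resized ground-truth image.
    Fits the linear map P on a random 70% of the pixels and returns
    the mean L1 error on the held-out 30%."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(features.shape[0])
    n_tr = int(train_frac * len(idx))
    tr, te = idx[:n_tr], idx[n_tr:]
    # Least-squares fit of P such that features @ P ~= y on train pixels.
    P, *_ = np.linalg.lstsq(features[tr], y[tr], rcond=None)
    return np.abs(features[te] @ P - y[te]).mean()
```

An image whose output is a near-linear function of its features receives a low $L_{pred}$ (easy to predict), while detailed, high-contrast images receive a high one.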
\subsubsection{Membership Error} As observed before, for some samples the output images can be easily predicted from the input, while for other samples the output cannot be predicted. While the reconstruction error achieves high MIA success rates, it has a significant limitation: it does not discriminate between predictable and unpredictable samples. The victim model will have higher errors when generating unpredictable training samples, and lower errors when generating easily predictable ones. For such samples, the reconstruction error alone may result in incorrect membership classification. Given our image predictability error $L_{pred}$ in Eq.~(\ref{eq:L-diff}) and the reconstruction error $L_{rec}$, we calculate a membership error $L_{mem}$ as follows: \begin{equation} \label{eq:L-mem} L_{mem}(x,y) = L_{rec}(x, y) - \alpha \cdot L_{pred}(x, y) \end{equation} $L_{mem}$ is computed by subtracting the predictability error $L_{pred}$, weighted by $\alpha$, from the reconstruction error $L_{rec}$. Unless specified otherwise, we use $\alpha=1.0$, and present the effect of different $\alpha$ values in the SM. This lowers the membership error $L_{mem}$ for harder-to-predict images compared to easier-to-predict images having the same reconstruction error. See Fig.~\ref{fig:method_plot} for an overview illustration of our method. Using the membership error $L_{mem}$ (\ref{eq:L-mem}) for MIA substantially improves the success rates in all of our experiments, as shown in Tab.~\ref{tab:main_method_results} and Fig.~\ref{fig:calibration_effect}. \section{Experiments} We first investigate two factors that affect MIA accuracy, task difficulty and output dimensionality, and show that MIA is easier on difficult tasks with high output dimensionality. We then extensively evaluate our MIA on image translation and semantic segmentation networks. We also compare our single-sample predictability error against strong baselines. Additional results and ablations can be found in the SM.
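Before turning to the experiments, the complete single-query decision rule can be summarized in a short sketch ($\tau$ and $\alpha$ are attacker-chosen; the function name is ours):

```python
def membership_decision(l_rec, l_pred, tau, alpha=1.0):
    """Single-query membership test (Eq. L-mem):
    L_mem = L_rec - alpha * L_pred, and the query pair (x, y) is
    declared a training member when L_mem falls below threshold tau."""
    l_mem = l_rec - alpha * l_pred
    return l_mem, l_mem < tau
```

Two samples with the same reconstruction error but different predictability errors can thus fall on opposite sides of the threshold, which is exactly the calibration effect shown in Fig.~\ref{fig:calibration_effect}.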
In accordance with previous membership attack works, the success rate is measured using the area under the ROC curve (ROCAUC) metric. \subsection{Effectiveness of membership inference attacks} \label{sec:investigation_experimetns} As mentioned in Sec.~\ref{sec:invesigation_of_tasks}, there has been extensive research on MIA against various neural networks, resulting in variable accuracy. We investigated two factors that affect MIA accuracy: task difficulty and output dimensionality. \subsubsection{MIA accuracy vs. task difficulty} We defined task difficulty as the uncertainty of the output given the input image. In the limit of sufficiently large training datasets, when models are trained to perform easy tasks, such as auto-encoding (translating an image to itself), they are able to generalize well to unseen images, and do not need to depend on memorization of the training data in order to minimize the loss function. As membership inference attacks are highly correlated with model memorization, their performance decreases on such tasks. Similarly, models struggle with learning difficult tasks, in which \ul{the input does not contain sufficient information to fully specify the output}, and therefore the loss minimization encourages the model to memorize the training samples. This lack of predictability acts as a strong motivation for memorization, even in the limit of well-trained models trained on large datasets, provided they have sufficient capacity. In order to demonstrate this, we performed reconstruction-based MIA, using the $L_{rec}$ described in Sec.~\ref{sec:l_rec_method}, on three tasks of different levels of difficulty. The first and easiest task is auto-encoding. For this, we attack an NVAE \cite{vahdat2020nvae} model, trained on the CelebA \cite{liu2015faceattributes} dataset. The second task, more difficult than auto-encoding, is segmentation-to-image translation.
We attack two Pix2PixHD models \cite{wang2018high}, trained on the Maps2sat \cite{isola2017image} and Cityscapes \cite{Cordts2016Cityscapes} datasets. The third and most difficult task is landmarks-to-face translation. For this task we extracted facial landmarks \cite{bulat2017far} from $50$K CelebA images \cite{liu2015faceattributes}. We consider this to be the most difficult task out of the three as the input contains less information on the output in comparison to segmentation maps (e.g.\ no information regarding the identity of the face). Results, presented in Tab.~\ref{tab:mia_diff_tasks}, demonstrate that reconstruction-based MIA are more successful on difficult tasks. \begin{figure}[tb] \begin{center} \subfigure[Pix2pixHD - Maps]{ \includegraphics[width=0.47\linewidth, height=0.35\linewidth]{figures/calibration_effect/p2phd_maps_calibration_effect_2.pdf} } \subfigure[Pix2pixHD - Cityscapes]{ \includegraphics[width=0.47\linewidth, height=0.35\linewidth]{figures/calibration_effect/p2phd_cityscapes_calibration_effect_2.pdf} } \end{center} \caption{The proposed membership error $L_{mem}$ can better separate train (blue) and test (orange) images by a simple threshold (i.e. a vertical line) compared to the reconstruction error $L_{rec}$.} \label{fig:calibration_effect} \end{figure} \vspace{-6pt} \subsubsection{MIA accuracy vs. output dimensionality} Previous works mostly focused on MIA against classification models, where there is a single output, i.e.\ probability vector or in the more restrictive case, a single label. It is natural to ask whether higher dimensional outputs are more vulnerable due to the ensemble effect of combining the attacks on each individual output dimension to a stronger, joint attack. 
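This ensemble intuition can be illustrated with a toy simulation (entirely synthetic, not taken from the paper's experiments): each pixel's reconstruction error is a weak, noisy membership signal, and averaging over more pixels separates members from non-members more cleanly.

```python
import numpy as np

def rocauc(nonmember_scores, member_scores):
    """ROCAUC via its rank interpretation: the probability that a
    random non-member error exceeds a random member error."""
    wins = (nonmember_scores[:, None] > member_scores[None, :]).mean()
    ties = (nonmember_scores[:, None] == member_scores[None, :]).mean()
    return wins + 0.5 * ties

rng = np.random.default_rng(0)
aucs = []
for n_pix in [1, 10, 100]:
    # Synthetic per-pixel reconstruction errors: members are slightly
    # lower on average, but the per-pixel gap is buried in noise.
    member = rng.normal(0.8, 1.0, size=(500, n_pix)).mean(axis=1)
    nonmember = rng.normal(1.0, 1.0, size=(500, n_pix)).mean(axis=1)
    aucs.append(rocauc(nonmember, member))
# Averaging the weak per-pixel signals over more pixels grows the AUC.
```

In this toy setting the attack is barely better than chance with a single pixel, but becomes strong as the number of output pixels grows, mirroring the trend in Fig.~\ref{fig:dim_reduce_effect}.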
We perform reconstruction-based MIA on the Pix2PixHD architecture, trained on the CMP Facades \cite{Tylecek13}, Maps2Sat and Cityscapes datasets, as well as on three semantic segmentation models - UperNet50, UperNet101 \cite{xiao2018unified} (using ResNet50 and ResNet101 as backbones) and HRNetV2 \cite{sun2019high} - trained on the ADE20K dataset \cite{zhou2017scene}. Fig.~\ref{fig:dim_reduce_effect} demonstrates the effect of reducing the output dimension on the attack accuracy. The reduction is achieved by randomly sampling $N$ output pixels, and using them as the output, where $N$ ranges from a single pixel up to $200$ pixels. Note that in the case of semantic segmentation, having only a single pixel output is equivalent to classification. We repeat this experiment $10$ times and report the average result. We observe that MIA accuracy indeed scales with the number of output dimensions. Results for other models are presented in the SM. \subsection{MIA accuracy evaluation} \label{sec:main_method_results} We evaluate our membership attack on three image translation architectures - Pix2Pix \cite{isola2017image}, Pix2PixHD \cite{wang2018high} and SPADE \cite{park2019semantic} - three semantic segmentation architectures - UperNet50 and UperNet101 \cite{xiao2018unified} (ResNet50 and ResNet101 as backbones) and HRNetV2 \cite{sun2019high} - as well as two medical segmentation architectures - Inf-Net \cite{fan2020inf} and PraNet \cite{fan2020pra}. We evaluate on various datasets, including CMP Facades \cite{Tylecek13}, Maps2sat \cite{isola2017image}, Cityscapes \cite{Cordts2016Cityscapes} and ADE20K \cite{zhou2017scene}. In the case of medical segmentation we evaluate two tasks: lung infection segmentation from COVID-19 CT images \cite{fan2020inf} and polyp segmentation in colonoscopy images \cite{silva2014toward, bernal2015wm, tajbakhsh2015automated, vazquez2017benchmark, jha2020kvasir}.
All Pix2pix and Pix2pixHD models are trained from scratch, with the exception of the Pix2pixHD model for the Cityscapes dataset, for which we use the supplied pre-trained model due to computational constraints at the high resolution. The rest of the models are pre-trained. It can be seen in Tab.~\ref{tab:main_method_results} that while using the reconstruction error alone achieves a high success rate, the membership error (which calibrates the result by sample predictability) significantly improves the results. Fig.~\ref{fig:calibration_effect} demonstrates the effect of subtracting the predictability error from the reconstruction error. After calibration, a single threshold on the membership error can separate train and test images. Results on additional benchmarks are presented in the SM. Utilizing common image augmentations, i.e.\ horizontal flipping and random cropping, in order to construct a larger set $\{(x_{aug}, y_{aug})\}$ has a small impact on the attack accuracy, as the output dimension is large enough as is. \begin{table}[tb] \centering \small \begin{tabular}{lcccc} \toprule \textbf{Model} & \textbf{Dataset} & \multicolumn{2}{c}{\textbf{Single}} & \multicolumn{1}{c}{\textbf{Multi}}\\ & & \textbf{Ours} & \textbf{Superv.} & \textbf{In-Dist.}\\ \midrule Pix2pix & Facades & \textbf{97.59\%} & 93.67\% & -\\ Pix2pix & Maps2sat & 85.65\% & 86.48\% & \textbf{92.43\%}\\ Pix2pix & Cityscapes & \textbf{83.23\%} & 77.06\% & 82.47\%\\ \midrule Pix2pixHD & Facades & \textbf{99.95\%} & 98.86\% & -\\ Pix2pixHD & Maps2sat & \textbf{99.42\%} & 98.38\% & 82.87\%\\ Pix2pixHD & Cityscapes & \textbf{99.09\%} & 96.86\% & 94.76\%\\ \midrule UperNet50 & ADE20K & \textbf{98.09\%} & 96.79\% & 79.47\% \\ UperNet101 & ADE20K & \textbf{96.94\%} & 95.49\% & 76.01\% \\ HRNetV2 & ADE20K & \textbf{85.92\%} & 84.42\% & 84.48\%\\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{MIA accuracy of our self-supervised single-sample method vs.
using human-supervised single-sample and multi-image baselines for the predictability error. Note that the in-distribution multi-image baseline requires extra supervision of $100$ images.} \label{tab:single_vs_multi_diff_score} \end{table} \subsubsection{Comparison to Human Supervision} \label{sec:supervised_results} We compare our self-supervised single-sample predictability error with the human-supervised difficulty score described in Sec.~\ref{sec:diff_score}. This score was proposed by Ionescu et al. \cite{tudor2016hard}, who defined image difficulty as the human response time for solving a visual search task. In order to provide a fair comparison, we replace the pretrained VGG-f \cite{chatfield2014return} features, used by \cite{tudor2016hard}, with the more powerful pretrained Wide-ResNet$50\times2$ \cite{zagoruyko2016wide} features we use in our predictability error. Samples of images ranked as easy and hard by the supervised score are presented in the SM. As can be seen in Tab.~\ref{tab:single_vs_multi_diff_score}, our self-supervised single-sample predictability error outperforms the human-supervised difficulty score. In the SM, we compare the correlation of the reconstruction error with the supervised score to its correlation with our self-supervised predictability error, and show that our score is better correlated with the reconstruction error. \begin{table}[tb] \centering \small \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{Dataset} & \textbf{Ours} & \textbf{Shadow Model}\\ \midrule Pix2pix & Maps2sat & \textbf{85.65\%} & 80.15\%\\ Pix2pix & Cityscapes & \textbf{83.23\%} & 78.68\%\\ \midrule Pix2pixHD & Maps2sat & \textbf{99.42\%} & 98.63\%\\ Pix2pixHD & Cityscapes & \textbf{99.09\%} & 96.39\%\\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{Comparison between our MIA and the popular shadow-model-based classifier attack, using $100$ train and $100$ test samples.
Our MIA outperforms while not requiring extra training images.} \label{tab:shadow_model} \end{table} \subsubsection{Comparison to Multi-Image Scores} \label{sec:single_vs_multi_diff_score} Although our MIA method does not require the availability of multiple auxiliary samples from the target distribution or from a similar distribution, it is interesting to compare our single-sample predictability error to methods that use multiple samples. We compute multi-sample predictability errors (MSPS) by training a ``shadow'' model to map the input to output images in the auxiliary samples. As an upper bound on MSPS performance, the shadow model is given exactly the same architecture as used by the victim model (although this knowledge may not be available in practice). The MSPS is defined by the reconstruction error of the shadow model on the target sample. Two scenarios were evaluated: \textbf{In-distribution data:} In this setting the shadow model's data shares the distribution of the victim's training data, by being trained on $100$ randomly sampled image pairs from the test set of the corresponding dataset. Facades was not used as it did not have enough images. The results are presented in Tab.~\ref{tab:single_vs_multi_diff_score}. For Pix2pixHD and semantic segmentation, MSPS underperformed our method (as $100$ samples are insufficient for training such large models). As Pix2pix is a smaller network, MSPS was more successful there, obtaining results competitive with our method. Note that it still requires extra samples, which are often not available. We analyzed the number of samples required for MSPS to reach the accuracy of our method; in most tasks, even $100$ were insufficient (see SM). \textbf{Auxiliary dataset:} As suggested by He et al. \cite{he2019segmentations}, we also compare our method to the setting where many out-of-distribution but similar samples are available.
We trained shadow models on $4K$ image pairs from the BDD dataset \cite{yu2018bdd100k} as MSPS for the Cityscapes dataset, as both datasets consist of street-scene images and have compatible label spaces. We found that MSPS underperformed our method by $10\%-30\%$ (see SM for exact results). Note that it is rare to have similar datasets with nearly identical labels; Cityscapes was the only dataset evaluated in this paper for which such a similar dataset could be found. \textbf{Shadow-model classifiers:} Although deviating somewhat from predictability errors, for the interest of completeness, we report the ROCAUC results of the popular approach of shadow-model classifiers for image translation MIA, see Tab.~\ref{tab:shadow_model} (classification accuracy is lower, see SM). We use the approach of He et al. \cite{he2019segmentations} and train a classifier to distinguish between the ``loss maps'' of the train and test auxiliary samples of the shadow model. The classifier is then used to score images of the target dataset as train or test (see \cite{he2019segmentations} for the complete details). We show that this approach underperforms our method in both the in-distribution and auxiliary dataset settings (exact results presented in the SM). It is surprising that shadow models do not perform well on image translation MIA, as they are very effective for image segmentation MIA (as shown in \cite{he2019segmentations}). We believe the difference in performance can be explained by the fact that segmentation maps have similar distributions across datasets with similar label spaces, while natural images have very different distributions, making membership classifiers generalize better on the image2seg task than on the seg2image task. We note again that such techniques require the availability of auxiliary in-distribution samples or very similar datasets, which is often not possible.
For example, medical segmentation datasets are often quite small due to the sensitivity of the data and the high cost of obtaining ground truth labels. The Covid-19 CT dataset \cite{fan2020inf} is composed of $50$ train and $50$ test samples, too small to apply a shadow-model-based attack. \section{Discussion} \label{sec:discussion} \textbf{Effect of Memorization:} Membership inference attacks are closely related to memorization in the victim model. In order to better understand this relation, we measure the success of our attack under different levels of memorization. We do so by evaluating our attack on checkpoints saved at different epochs during the training of our image-translation victim models. We observed that as the training process progresses, the victim model memorizes the training data, which results in higher attack success rates (see SM). \begin{table}[tb] \centering \small \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{Dataset} & \textbf{Orig} & \textbf{No $L_{rec}$}\\ \midrule Pix2pix & Maps2sat & 84.22\% & 68.44\%\\ Pix2pix & Cityscapes & 77.74\% & 51.06\% \\ \midrule Pix2pixHD & Maps2sat & 95.74\% & 75.76\%\\ Pix2pixHD & Cityscapes & 96.04\% & 56.58\%\\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{Effect of cGAN and reconstruction losses on the accuracy of reconstruction-based MIA. The cGAN loss is less susceptible to MIA.} \label{tab:mia_rec_loss} \end{table} \textbf{Reconstruction loss effect on MIA:} We evaluated the effect of the reconstruction loss term on the accuracy of reconstruction-based MIA. For this, we compared the accuracy of the attack against image translation models trained using both the reconstruction and cGAN loss terms versus models trained using only the cGAN loss term. As can be seen in Tab.~\ref{tab:mia_rec_loss}, the reconstruction loss indeed has a significant impact on the attack accuracy.
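The calibrated decision rule behind the reconstruction-based attack can be sketched in a few lines. This is a minimal illustration with hypothetical helper names and toy numbers, not the paper's implementation; in the actual attack, the reconstruction error is computed between the victim model's output and the ground-truth target, and the predictability error comes from the self-supervised predictor.

```python
import numpy as np

def reconstruction_error(y_pred, y_true):
    """Per-sample L1 reconstruction error (illustrative choice of metric)."""
    return np.abs(y_pred - y_true).mean()

def membership_error(rec_err, predictability_err):
    """Calibrate the reconstruction error by the sample's predictability.

    Lower values indicate a likely training ("member") sample.
    """
    return rec_err - predictability_err

# Toy example: a memorized train sample vs. an unseen test sample.
y_true = np.ones((4, 4))
train_out = y_true + 0.01   # victim reconstructs a train sample almost exactly
test_out = y_true + 0.30    # unseen sample reconstructs poorly
pred_err = 0.05             # both samples are equally "easy" to predict

tau = 0.1                   # a single global threshold after calibration
is_member_train = membership_error(
    reconstruction_error(train_out, y_true), pred_err) < tau
is_member_test = membership_error(
    reconstruction_error(test_out, y_true), pred_err) < tau
```

With calibration, one threshold `tau` separates the two cases even when raw reconstruction errors vary in scale between easy and hard samples.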
\begin{table}[tb] \centering \small \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{Dataset} & \textbf{No-defense} & \textbf{Argmax-defense}\\ \midrule UperNet50 & ADE20K & 98.09\% & 98.05\% \\ UperNet101 & ADE20K & 96.94\% & 97.11\% \\ HRNetV2 & ADE20K & 85.92\% & 89.32\% \\ \bottomrule \end{tabular} \vspace{0.25cm} \caption{Comparison between our attack accuracy (ROCAUC) on undefended semantic segmentation models and models defended by the argmax defense. As can be seen, our attack still manages to succeed much better than random guessing.} \label{tab:argmax_defense} \end{table} \textbf{Argmax defense:} In the argmax defense, the victim model returns only the predicted label, rather than the full probability vector. As image translation models predict pixel values and do not output probability vectors, this defense does not apply to them. For semantic segmentation models, we evaluate our attack against this defense by replacing the cross-entropy loss in $L_{rec}$ with the $L_0$ error. As can be seen in Tab.~\ref{tab:argmax_defense}, the attack efficacy is not reduced, demonstrating the weakness of this defense. \textbf{Differentially private SGD (DP-SGD) defense}: In the defense by Abadi et al. \cite{abadi2016deep}, the commonly used Stochastic Gradient Descent optimization algorithm is modified in order to provide a differentially private \cite{dwork2014algorithmic} model. This is done by adding Gaussian noise to clipped per-sample gradients in every training batch. There exists a trade-off between privacy and utility, in which the amount of added noise must be large enough to ensure privacy while not degrading the model's outputs to the point where the model is useless. Training a deep model with DP-SGD is an unstable process. We experimented with multiple common configurations, i.e.\ added noise ratios and maximal gradient clipping thresholds, and were not able to find a configuration that yields visually satisfying results.
Hence, although the DP-SGD defense theoretically protects the victim model against membership inference attacks, in practice we find it to be impractical against our attack, as it results in total corruption of the victim model. \textbf{Gauss defense:} In this defense, we add Gaussian noise to the image generated by the victim model \cite{gilmer2019adversarial}. This attempts to hide specific artifacts of the model. We evaluate our attack accuracy as a function of the noise standard deviation. Fig.~\ref{fig:gauss_defense} shows that a considerable amount of noise, which corrupts the generated output, is required in order to have a significant effect on our attack success. Moreover, it can be seen that even with large amounts of noise, our attack still manages to succeed much better than random guessing. This implies that our attack is robust to the Gauss defense. Results on additional benchmarks are presented in the SM. \begin{figure}[tb] \begin{center} \subfigure[Pix2pixHD]{ \includegraphics[width=0.45\linewidth, height=0.4\linewidth]{figures/gauss_defense/gauss_effect_p2phd.pdf} } \subfigure[Semantic Segmentation]{ \includegraphics[width=0.45\linewidth, height=0.4\linewidth]{figures/gauss_defense/gauss_effect_semseg.pdf} } \end{center} \caption{Effect of the Gauss defense on the attack accuracy. Even with large amounts of added noise, our attack still manages to succeed much better than random guessing.} \label{fig:gauss_defense} \end{figure} \vspace{-6pt} \section{Conclusion} In this work, we highlight two properties that make tasks more vulnerable to MIA: i) uncertainty: tasks where there is more uncertainty in the prediction of the output given an input; ii) output dimensionality: tasks with high-dimensional output. We show that a black-box reconstruction-based membership attack is very effective on two tasks that exhibit these properties: image translation and semantic segmentation, including medical segmentation.
We further improve the attack by proposing a novel image predictability error. Our membership error, composed of the reconstruction and predictability errors, has been extensively evaluated on various benchmarks and was shown to achieve high accuracy. \section{Acknowledgments} This research was supported by grants from the Israel Science Foundation and from the DFG. {\small \bibliographystyle{ieee_fullname}
\subsection{Graph Convolution on Flat Graphs}\label{sub:plaingcn} Analogous to the one-dimensional Discrete Fourier Transform (Definition \ref{def:dft}), the graph Fourier transform is given by Definition \ref{def:gft}. The spectral graph convolution (Definition \ref{def:graph_conv}) is then defined based on one-dimensional convolution and the convolution theorem. The free parameters of the convolution filter are further replaced by Chebyshev polynomials, yielding the Chebyshev approximation for graph convolution (Definition \ref{def:cheby_graph}). \begin{definition}[Flat Graph]\label{def:flat_graph} A flat graph contains a one-dimensional graph signal $\mathbf{x}\in\mathbb{R}^{N}$ and an adjacency matrix $\mathbf{A}\in\mathbb{R}^{N\times N}$. \end{definition} \begin{definition}[Discrete Fourier Transform]\label{def:dft} Given a one-dimensional signal $\mathbf{x}\in\mathbb{R}^N$, where $N$ is the length of the sequence, its Fourier transform is defined by: \begin{equation} \Tilde{\mathbf{x}}[n] = \sum_{k=1}^N\mathbf{x}[k]e^{-\frac{i2\pi}{N}kn} \end{equation} where $\mathbf{x}[k]$ is the $k$-th element of $\mathbf{x}$ and $\Tilde{\mathbf{x}}[n]$ is the $n$-th element of the transformed vector $\Tilde{\mathbf{x}}$. The above definition can be rewritten as: \begin{equation} \Tilde{\mathbf{x}} = \mathbf{F}\mathbf{x} \end{equation} where $\mathbf{F}\in\mathbb{C}^{N\times N}$ is the filter matrix and $\mathbf{F}[n,k] = e^{-\frac{i2\pi}{N}kn}$.
\end{definition} \begin{definition}[Graph Fourier Transform \cite{bruna2013spectral}]\label{def:gft} Given a graph signal $\mathbf{x}\in\mathbb{R}^{N}$, along with its adjacency matrix $\mathbf{A}\in\mathbb{R}^{N\times N}$, where $N$ is the number of nodes, the graph Fourier transform is defined by: \begin{equation} \Tilde{\mathbf{x}} = \mathbf{\Phi}^T\mathbf{x} \end{equation} where $\mathbf{\Phi}$ is the eigenvector matrix of the graph Laplacian matrix $\mathbf{L}=\mathbf{I} - \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}=\mathbf{\Phi}\mathbf{\Lambda}\mathbf{\Phi}^T$; $\mathbf{I}\in\mathbb{R}^{N\times N}$ and $\mathbf{D}\in\mathbb{R}^{N\times N}$ denote the identity matrix and the degree matrix, respectively; and $\mathbf{\Lambda}$ is a diagonal matrix whose diagonal elements are the eigenvalues. \end{definition} \begin{definition}[Spectral Graph Convolution \cite{bruna2013spectral}]\label{def:graph_conv} Given a signal $\mathbf{x}\in\mathbb{R}^N$ and a filter $\mathbf{g}\in\mathbb{R}^N$, the spectral graph convolution is defined in the Fourier domain according to the convolution theorem: \begin{align} \mathbf{\Phi}^T(\mathbf{g}\star\mathbf{x}) &= (\mathbf{\Phi}^T\mathbf{g})\odot(\mathbf{\Phi}^T\mathbf{x})\\ \mathbf{g}\star\mathbf{x} &= \mathbf{\Phi}\big((\mathbf{\Phi}^T\mathbf{g})\odot(\mathbf{\Phi}^T\mathbf{x})\big) = \mathbf{\Phi}\text{diag}(\Tilde{\mathbf{g}}) \mathbf{\Phi}^T\mathbf{x}\label{eq:spectral_conv} \end{align} where $\star$ and $\odot$ denote the convolution operation and the Hadamard product, respectively, and $\Tilde{\mathbf{g}}=\mathbf{\Phi}^T\mathbf{g}$; the second line holds due to the orthonormality of $\mathbf{\Phi}$.
\end{definition} \begin{definition}[Chebyshev Approximation for Spectral Graph Convolution \cite{defferrard2016convolutional}]\label{def:cheby_graph} Given an input graph signal $\mathbf{x}\in\mathbb{R}^N$ and its adjacency matrix $\mathbf{A}\in\mathbb{R}^{N\times N}$, the Chebyshev approximation for graph convolution on a flat graph is given by \cite{kipf2016semi, defferrard2016convolutional}: \begin{equation} \mathbf{g}_\theta \star \mathbf{x} =\mathbf{\Phi} (\sum_{p=0}^P\theta_pT_p(\Tilde{\mathbf{\Lambda}}))\mathbf{\Phi}^T\mathbf{x} = \sum_{p=0}^P\theta_pT_p(\Tilde{\mathbf{L}})\mathbf{x} \end{equation} where $\Tilde{\mathbf{\Lambda}} = \frac{2}{\lambda_{max}}\mathbf{\Lambda} - \mathbf{I}$ is the matrix of normalized eigenvalues and $\lambda_{max}$ is the maximum eigenvalue of $\mathbf{\Lambda}$; $\Tilde{\mathbf{L}} = \frac{2}{\lambda_{max}}\mathbf{L} - \mathbf{I}$; $T_p(x)$ denotes the Chebyshev polynomials, defined by the recurrence $T_p(x) = 2xT_{p-1}(x) - T_{p-2}(x)$ with $T_0(x) = 1$ and $T_1(x) = x$, where $p$ denotes the order of the polynomial; $\mathbf{g}_\theta$ and $\theta_p$ denote the filter vector and its parameters, respectively. \end{definition} \subsection{Tensor Algebra}\label{sub:tensor} \begin{definition}[Mode-m Product] \label{def:mode_product} The mode-m product generalizes the matrix-matrix product to a tensor-matrix product. Given a matrix $\mathbf{U}\in\mathbb{R}^{N_m\times N'}$ and a tensor $\mathcal{X}\in\mathbb{R}^{N_1\times\cdots N_{m-1}\times N_{m}\times N_{m+1}\cdots\times N_M}$, their mode-m product is $\mathcal{X}\times_{m}\mathbf{U}\in\mathbb{R}^{N_1\times\cdots N_{m-1}\times N'\times N_{m+1} \cdots \times N_M}$.
Its element $[n_1, \cdots, n_{m-1}, n', n_{m+1}, \cdots, n_M]$ is defined as: \begin{equation} \begin{split} & (\mathcal{X} \times_{m} \mathbf{U})[n_1, \cdots, n_{m-1}, n', n_{m+1}, \cdots, n_M]\\ =& \sum_{n_m=1}^{N_m}\mathcal{X}[n_1, \cdots, n_{m-1}, n_m, n_{m+1}, \cdots, n_M]\mathbf{U}[n_m, n'] \end{split} \end{equation} \end{definition} \begin{definition}[Tucker Decomposition]\label{def:tucker} The Tucker decomposition can be viewed as a form of higher-order principal component analysis \cite{kolda2009tensor}. A tensor $\mathcal{X}\in\mathbb{R}^{N_1\times\cdots\times N_M}$ can be decomposed into a smaller core tensor $\mathcal{Z}\in\mathbb{R}^{N'_1\times\cdots\times N'_M}$ via $M$ orthonormal matrices $\mathbf{U}_m\in\mathbb{R}^{N'_m\times N_m}$ ($N'_m < N_m$): \begin{equation} \mathcal{X} = \mathcal{Z}\prod_{m=1}^M\times_m\mathbf{U}_m \end{equation} The matrix $\mathbf{U}_m$ comprises the principal components for the $m$-th mode, and the core tensor $\mathcal{Z}$ captures the interactions among the components. Due to the orthonormality of $\mathbf{U}_m$, we have: \begin{equation} \mathcal{Z} = \mathcal{X}\prod_{m=1}^M\times_m\mathbf{U}_m^T \end{equation} \end{definition} \begin{figure*}[t!] \centering \includegraphics[width=0.9\linewidth]{model.png} \caption{The framework of the proposed model \textsc{NeT$^3$}. At each time step $t$, the model takes a snapshot $\mathcal{S}_t$ from the tensor time series $\mathcal{S}$ and extracts its node embedding tensor $\mathcal{H}_t$ via the Tensor Graph Convolution Network (TGCN) module. $\mathcal{H}_t$ is then fed into the Tensor RNN (TRNN) module to encode the temporal dynamics. Finally, the output module takes both $\mathcal{H}_t$ and $\mathcal{R}_t$ to predict the snapshot of the next time step $\hat{\mathcal{S}}_{t+1}$.
Note that $\mathcal{Y}_t$ and $\mathcal{Y}_{t+1}$ are the hidden states of the TRNN at time steps $t$ and $t+1$, respectively.} \label{fig:model} \end{figure*} \subsection{Multi-dimensional Fourier Transform}\label{sub:multift} \begin{definition}[Multi-dimensional Discrete Fourier Transform]\label{def:mdft} Given a multi-dimensional (multi-mode) signal $\mathcal{X}\in\mathbb{R}^{N_1\times\cdots\times N_M}$, the multi-dimensional Fourier transform is defined by: \begin{equation} \begin{split} \Tilde{\mathcal{X}}[n_1, \cdots, n_M] = \prod_{m=1}^M\sum_{k_m=1}^{N_m}e^{-\frac{i2\pi}{N_m}k_mn_m} \mathcal{X}[k_1, \cdots, k_M] \end{split} \end{equation} Similar to the one-dimensional Fourier transform (Definition \ref{def:dft}), the above equation can be rewritten in multi-linear form: \begin{equation} \Tilde{\mathcal{X}} = \mathcal{X}\times_1\mathbf{F_1}\cdots\times_M\mathbf{F_M} = \mathcal{X}\prod_{m=1}^M\times_m\mathbf{F}_m \end{equation} where $\times_m$ denotes the mode-m product, $\mathbf{F}_m\in\mathbb{C}^{N_m\times N_m}$ is the filter matrix, and $\mathbf{F}_m[n, k] = e^{-\frac{i2\pi}{N_m}kn}$. \end{definition} \begin{definition}[Separable Multi-dimensional Convolution]\label{def:sep_conv} The separable multi-dimensional convolution is defined based on Definition \ref{def:mdft}. Given a signal $\mathcal{X}\in\mathbb{R}^{N_1\times\cdots\times N_M}$ and a separable filter $\mathcal{Y}\in\mathbb{R}^{N_1\times\cdots\times N_M}$ such that $\mathcal{Y}[n_1, \cdots, n_M] = \mathbf{y}_1[n_1]\cdots\mathbf{y}_M[n_M]$, where $\mathbf{y}_m\in\mathbb{R}^{N_m}$ is the filter vector for the $m$-th mode, the multi-dimensional convolution is the same as iteratively applying one-dimensional convolutions to $\mathcal{X}$: \begin{equation} \begin{split} \mathcal{Y}\star\mathcal{X} = \mathbf{y}_1\star_1\cdots\star_{M-1}\mathbf{y}_M\star_{M}\mathcal{X} \end{split} \end{equation} where $\star_m$ denotes convolution on the $m$-th mode.
Suppose $\mathcal{X}\in\mathbb{R}^{N_1\times N_2}$ and $\mathcal{Y}=\mathbf{y}_1\cdot\mathbf{y}_2^T$, where $\mathbf{y}_1\in\mathbb{R}^{N_1}$ and $\mathbf{y}_2\in\mathbb{R}^{N_2}$. Then $\mathcal{Y}\star\mathcal{X}$ means applying $\mathbf{y}_1$ and $\mathbf{y}_2$ to the rows and columns of $\mathcal{X}$, respectively. Formally, we have: \begin{equation} \mathcal{Y}\star\mathcal{X} = \mathbf{y}_1\star_1\mathbf{y}_2\star_2\mathcal{X} = \mathbf{Y}_1^T\mathcal{X}\mathbf{Y}_2 = \mathcal{X}\prod_{m=1}^2\times_{m}\mathbf{Y}_m \end{equation} where $\mathbf{Y}_1\in\mathbb{R}^{N_1\times N_1}$ and $\mathbf{Y}_2\in\mathbb{R}^{N_2\times N_2}$ are the transformation matrices corresponding to $\mathbf{y}_1$ and $\mathbf{y}_2$, respectively. \end{definition} \subsection{Network of Tensor Time Series}\label{sub:net3} \begin{definition}[Tensor Time Series] A tensor time series is an $(M+1)$-mode tensor $\mathcal{S}\in\mathbb{R}^{N_1\times\cdots\times N_M\times T}$ or $\{\mathcal{S}_t\in\mathbb{R}^{N_1\times\cdots\times N_M}\}_{t=1}^T$, where the $(M+1)$-th mode is time and its dimension is $T$. \end{definition} \begin{definition}[Tensor Graph]\label{def:tensor_graph} A tensor graph is comprised of an $M$-mode tensor $\mathcal{X}\in\mathbb{R}^{N_1\times\cdots\times N_M}$ and an adjacency matrix for each mode, $\mathbf{A}_m\in\mathbb{R}^{N_m\times N_m}$. Note that if the $m$-th mode is not associated with an adjacency matrix, then $\mathbf{A}_m = \mathbf{I}_m$, where $\mathbf{I}_m\in\mathbb{R}^{N_m\times N_m}$ denotes the identity matrix. \end{definition} \begin{definition}[Network of Tensor Time Series] A network of tensor time series is comprised of (1) a tensor time series $\mathcal{S}\in\mathbb{R}^{N_1\times\cdots\times N_M\times T}$ and (2) a set of adjacency matrices $\mathbf{A}_m\in\mathbb{R}^{N_m\times N_m}$ ($m\in[1,\cdots, M]$) for all but the last mode (i.e., the time mode).
\end{definition} \subsection{Problem Definition}\label{sec:problem_definition} In this paper, we focus on representation learning for the network of tensor time series by predicting its future values. The model trained by predicting future values can also be applied to recover missing values of the time series. \begin{definition}[Future Value Prediction]\label{prob:future} Given a network of tensor time series with $\mathcal{S}\in\mathbb{R}^{N_1\times\cdots\times N_M\times T}$ and $\{\mathbf{A}_m\in\mathbb{R}^{N_m\times N_m}\}_{m=1}^M$, and a horizon $T'$, the task of future value prediction is to predict the future values of $\mathcal{S}$ from time step $T+1$ to $T+T'$. \end{definition} \begin{definition}[Missing Value Recovery]\label{prob:missing} We formulate the task of missing value recovery from the perspective of future value prediction. Suppose the data point $\mathcal{S}[n_1, \cdots, n_M, T']$ ($T'\leq T$) of $\mathcal{S}\in\mathbb{R}^{N_1\times\cdots\times N_M\times T}$ is missing; then we take $\omega\leq T'$ historical values of $\mathcal{S}$ prior to the time step $T'$, $\{\mathcal{S}_t\}_{t=T'-\omega}^{T'-1}$, as input, and predict the value $\hat{\mathcal{S}}[n_1, \cdots, n_M, T']$. \end{definition} \subsection{Tensor Graph Convolution Network}\label{sec:tensor_graph_conv} In this subsection, we first introduce spectral graph convolution on tensor graphs and its Chebyshev approximation in Subsection \ref{subsec:spectral_convo_for_tensor_graph}. Then we provide a detailed derivation of the layer-wise updating function of the proposed TGCN in Subsection \ref{subsec:tgcl}. \subsubsection{Spectral Convolution for Tensor Graph}\label{subsec:spectral_convo_for_tensor_graph} Analogous to the multi-dimensional Fourier transform (Definition \ref{def:mdft}) and the graph Fourier transform on flat graphs (Definition \ref{def:gft}), we first define the Fourier transform on tensor graphs in Definition \ref{def:tensor_graph_ft}.
Then, based on the separable multi-dimensional convolution (Definition \ref{def:sep_conv}) and the tensor graph Fourier transform (Definition \ref{def:tensor_graph_ft}), we propose spectral convolution on tensor graphs in Definition \ref{def:multi_graph_conv}. Finally, in Definition \ref{def:cheby_conv_tensor}, we use the Chebyshev approximation to parameterize the free parameters in the filters of the spectral convolution. \begin{definition}[Tensor Graph Fourier Transform]\label{def:tensor_graph_ft} Given a graph signal $\mathcal{X}\in\mathbb{R}^{N_1\times\cdots\times N_M}$, along with its adjacency matrices for each mode $\mathbf{A}_m\in\mathbb{R}^{N_m\times N_m}$ ($m\in[1,\cdots, M]$), the tensor graph Fourier transform is defined by: \begin{equation} \Tilde{\mathcal{X}}=\mathcal{X}\prod_{m=1}^M\times_m\mathbf{\Phi}_m \end{equation} where $\mathbf{\Phi}_m$ is the eigenvector matrix of the graph Laplacian matrix $\mathbf{L}_m=\mathbf{\Phi}_m\mathbf{\Lambda}_m\mathbf{\Phi}_m^T$ for $\mathbf{A}_m$, and $\times_m$ denotes the mode-m product. \end{definition} \begin{definition}[Spectral Convolution for Tensor Graph]\label{def:multi_graph_conv} Given an input graph signal $\mathcal{X}\in\mathbb{R}^{N_1\times\cdots\times N_M}$ and a multi-dimensional filter $\mathcal{G}\in\mathbb{R}^{N_1\times\cdots\times N_M}$ defined by $\mathcal{G}[n_1,\cdots,n_M]=\mathbf{g}_1[n_1]\cdots\mathbf{g}_M[n_M]$, where $\mathbf{g}_m\in\mathbb{R}^{N_m}$ is the filter vector for the $m$-th mode.
By analogy with the spectral graph convolution (Definition \ref{def:graph_conv}) and the separable multi-dimensional convolution (Definition \ref{def:sep_conv}), we define the spectral convolution for a tensor graph as: \begin{equation} \mathcal{G}\star\mathcal{X} = \mathcal{X}\prod_{m=1}^M\times_m\mathbf{\Phi}_m\textrm{diag}(\Tilde{\mathbf{g}}_m)\mathbf{\Phi}^T_m \end{equation} where $\Tilde{\mathbf{g}}_m=\mathbf{\Phi}_m^T\mathbf{g}_m$ is the Fourier-transformed filter for the $m$-th mode; $\star$ and $\times_m$ denote the convolution operation and the mode-m product, respectively; $\textrm{diag}(\Tilde{\mathbf{g}}_m)$ denotes the diagonal matrix whose diagonal elements are the elements of $\Tilde{\mathbf{g}}_m$. \end{definition} \begin{definition}[Chebyshev Approximation for Spectral Convolution on Tensor Graph]\label{def:cheby_conv_tensor} Given a tensor graph $\mathcal{X}\in\mathbb{R}^{N_1\times\cdots\times N_M}$, where each mode is associated with an adjacency matrix $\mathbf{A}_m\in\mathbb{R}^{N_m\times N_m}$, the Chebyshev approximation for spectral convolution on tensor graphs is given by approximating $\Tilde{\mathbf{g}}_m$ by Chebyshev polynomials: \begin{equation}\label{eq:cheby_conv_tensor} \begin{split} \mathcal{G}_\theta\star\mathcal{X} &= \mathcal{X}\prod_{m=1}^M\times_m\mathbf{\Phi}_m(\sum_{p_m=0}^P\theta_{m,p_m}T_{p_m}(\Tilde{\mathbf{\Lambda}}_m))\mathbf{\Phi}^T_m\\ &=\mathcal{X}\prod_{m=1}^M\times_m\sum_{p_m=0}^P\theta_{m,p_m}T_{p_m}(\Tilde{\mathbf{L}}_m) \end{split} \end{equation} where $\mathcal{G}_\theta$ denotes the convolution filter parameterized by $\theta$; $\mathbf{\Lambda}_m\in\mathbb{R}^{N_m\times N_m}$ is the matrix of eigenvalues of the graph Laplacian matrix $\mathbf{L}_m=\mathbf{I}_m - \mathbf{D}_m^{-\frac{1}{2}}\mathbf{A}_m\mathbf{D}_m^{-\frac{1}{2}}=\mathbf{\Phi}_m\mathbf{\Lambda}_m\mathbf{\Phi}_m^T$; $\Tilde{\mathbf{\Lambda}}_m = \frac{2}{\lambda_{m,max}}\mathbf{\Lambda}_m - \mathbf{I}_m$ is the matrix of normalized eigenvalues; $\lambda_{m,max}$ is the maximum
eigenvalue of the matrix $\mathbf{\Lambda}_m$; $\Tilde{\mathbf{L}}_m = \frac{2}{\lambda_{m,max}}\mathbf{L}_m - \mathbf{I}_m$; $T_{p_m}(x)$ denotes the Chebyshev polynomials, defined by the recurrence $T_{p_m}(x) = 2xT_{p_m-1}(x) - T_{p_m-2}(x)$ with $T_0(x) = 1$ and $T_1(x) = x$, where $p_m$ denotes the order of the polynomial; $\theta_{m,p_m}$ denotes the coefficient of $T_{p_m}(x)$. For clarity, we use the same polynomial degree $P$ for all modes. \end{definition} \subsubsection{Tensor Graph Convolutional Layer}\label{subsec:tgcl} Due to the linearity of the mode-m product, Equation \eqref{eq:cheby_conv_tensor} can be re-formulated as: \begin{equation}\label{eq:cheby_conv_tensor_2} \begin{split} \mathcal{G}_\theta\star\mathcal{X} &= \sum_{p_1, \cdots, p_M=0}^P\mathcal{X}\prod_{m=1}^M\times_m\theta_{m,p_m} T_{p_m}(\Tilde{\mathbf{L}}_m)\\ &=\sum_{p_1, \cdots, p_M=0}^P\prod_{m=1}^M\theta_{m,p_m} \mathcal{X}\prod_{m=1}^M\times_mT_{p_m}(\Tilde{\mathbf{L}}_m) \end{split} \end{equation} We follow \cite{kipf2016semi} to simplify Equation \eqref{eq:cheby_conv_tensor_2}. First, let $\lambda_{m,max}=2$, so that: \begin{equation}\label{eq:L_tmp} \begin{split} \Tilde{\mathbf{L}}_m &=\frac{2}{\lambda_{m,max}}\mathbf{L}_m - \mathbf{I}_m\\ &=\mathbf{I}_m - \mathbf{D}_m^{-\frac{1}{2}}\mathbf{A}_m\mathbf{D}_m^{-\frac{1}{2}}- \mathbf{I}_m\\ &=-\mathbf{D}_m^{-\frac{1}{2}}\mathbf{A}_m\mathbf{D}_m^{-\frac{1}{2}} \end{split} \end{equation} For clarity, we use $\Tilde{\mathbf{A}}_m$ to represent $\mathbf{D}_m^{-\frac{1}{2}}\mathbf{A}_m\mathbf{D}_m^{-\frac{1}{2}}$. Then we fix $P=1$ and drop the negative sign in Equation \eqref{eq:L_tmp} by absorbing it into the parameter $\theta_{m,p_m}$.
Therefore, we have \begin{equation}\label{eq:tmp2} \sum_{p_m=0}^P\theta_{m,p_m}T_{p_m}(\Tilde{\mathbf{L}}_m) = \theta_{m,0}\mathbf{I}_m + \theta_{m,1}\Tilde{\mathbf{A}}_m \end{equation} Furthermore, by plugging Equation \eqref{eq:tmp2} back into Equation \eqref{eq:cheby_conv_tensor_2} and replacing the product of parameters $\prod_{m=1}^M\theta_{m,p_m}$ by a single parameter $\theta_{p_1,\cdots, p_M}$, we obtain: \begin{equation}\label{eq:cheby_tensor_graph_final} \begin{split} \mathcal{G}_\theta\star\mathcal{X} = \sum_{\exists p_m=1}\theta_{p_1,\cdots, p_M}\mathcal{X}\prod_{p_m=1}\times_m\Tilde{\mathbf{A}}_m + \theta_{0,\cdots, 0}\mathcal{X} \end{split} \end{equation} We can observe from the above equation that $p_m$ acts as an indicator for whether the convolution filter $\Tilde{\mathbf{A}}_m$ is applied to $\mathcal{X}$: if $p_m=1$, then $\Tilde{\mathbf{A}}_m$ is applied to $\mathcal{X}$; otherwise, $\mathbf{I}_m$ is applied. When $p_m=0$ for all $m\in[1, \cdots, M]$, we obtain the term $\theta_{0,\cdots, 0}\mathcal{X}$. To better understand how the above approximation works on tensor graphs, let us assume $M=2$. Then we have: \begin{equation}\label{eq:cheby_tensor_graph_example} \mathcal{G}_\theta\star\mathcal{X} = \theta_{1,1}\mathcal{X}\times_1\Tilde{\mathbf{A}}_1\times_2\Tilde{\mathbf{A}}_2 + \theta_{1,0}\mathcal{X}\times_1\Tilde{\mathbf{A}}_1 + \theta_{0,1}\mathcal{X}\times_2\Tilde{\mathbf{A}}_2 + \theta_{0,0}\mathcal{X} \end{equation} Given the approximation in Equation \eqref{eq:cheby_tensor_graph_final}, we propose the tensor graph convolution layer in Definition \ref{def:tgcl}.
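To make the $M=2$ expansion in Equation \eqref{eq:cheby_tensor_graph_example} concrete, the four filter combinations can be written out directly in NumPy. This is our own minimal sketch, not the authors' code; since the normalized adjacencies are symmetric, the mode-1 and mode-2 products reduce to left and right matrix multiplication.

```python
import numpy as np

def normalize(A):
    """Symmetric normalization D^{-1/2} A D^{-1/2}."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def tensor_graph_conv_2d(X, A1, A2, theta):
    """M=2 instance of the approximated tensor graph convolution.

    theta maps each indicator pair (p1, p2) to a scalar coefficient.
    For symmetric A, X x_1 A1 = A1 @ X and X x_2 A2 = X @ A2.
    """
    A1n, A2n = normalize(A1), normalize(A2)
    return (theta[(1, 1)] * (A1n @ X @ A2n)
            + theta[(1, 0)] * (A1n @ X)
            + theta[(0, 1)] * (X @ A2n)
            + theta[(0, 0)] * X)

# Toy check: with only theta[(0,0)] nonzero the layer is the identity.
X = np.arange(6.0).reshape(2, 3)
A1 = np.ones((2, 2))   # fully connected toy graphs, self-loops included
A2 = np.ones((3, 3))
theta_id = {(1, 1): 0.0, (1, 0): 0.0, (0, 1): 0.0, (0, 0): 1.0}
```

Setting instead only $\theta_{1,0}=1$ recovers a standard GCN-style propagation along $\Tilde{\mathbf{A}}_1$ alone, which makes the role of each indicator pair easy to verify numerically.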
\begin{definition}[Tensor Graph Convolution Layer]\label{def:tgcl} Given an input tensor $\mathcal{X}\in\mathbb{R}^{N_1\times\cdots\times N_M\times d}$, where $d$ is the number of channels, along with its adjacency matrices $\{\mathbf{A}_m\}_{m=1}^M$, the Tensor Graph Convolution Layer (TGCL) with $d'$ output channels is defined by: \begin{equation}\label{eq:tgcl} \begin{split} & \text{TGCL}(\mathcal{X}, \{\mathbf{A}_m\}_{m=1}^M)\\ = & \sigma(\sum_{\exists p_m=1}\mathcal{X}\prod_{p_m=1}\times_m\Tilde{\mathbf{A}}_m\times_{M+1}\mathbf{\Theta}_{p_1,\cdots, p_M} + \mathcal{X}\times_{M+1}\mathbf{\Theta}_{0}) \end{split} \end{equation} where each $\mathbf{\Theta}\in\mathbb{R}^{d\times d'}$ is a parameter matrix and $\sigma(\cdot)$ is the activation function. \end{definition} In the \textsc{NeT$^3$} model (Figure \ref{fig:model}), given a snapshot $\mathcal{S}_t\in\mathbb{R}^{N_1\times\cdots\times N_M}$ along with its adjacency matrices $\{\mathbf{A}_m\}_{m=1}^M$, we use a one-layer TGCL to obtain the node embeddings $\mathcal{H}_t\in\mathbb{R}^{N_1\times\cdots\times N_M\times d}$, where $d$ is the dimension of the node embeddings: \begin{equation}\label{eq:tgcn2h} \mathcal{H}_t = \textrm{TGCN}(\mathcal{S}_t) \end{equation} \subsubsection{Synergy Analysis} The proposed TGCL effectively models tensor graphs and captures the synergy among the different adjacency matrices. The vector $\mathbf{p}=[p_1, \cdots, p_M]\in\{0,1\}^M$ represents a combination of the $M$ networks, where $p_m=1$ and $p_m=0$ respectively indicate the presence and absence of $\Tilde{\mathbf{A}}_m$. Therefore, each node in $\mathcal{X}$ can collect other nodes' information along the adjacency matrix $\Tilde{\mathbf{A}}_m$ if $p_m=1$.
For example, suppose $M=2$ and $p_1=p_2=1$ (as shown in Figure \ref{fig:synergy} and Equation \eqref{eq:cheby_tensor_graph_example}); then node $\mathcal{X}[1, 1]$ (node $v$) can reach node $\mathcal{X}[2,2]$ (node $w'$) by passing through node $\mathcal{X}[2, 1]$ along the adjacency matrix $\Tilde{\mathbf{A}}_1$ ($\mathcal{X}\times_1\Tilde{\mathbf{A}}_1$) and then arriving at node $\mathcal{X}[2,2]$ via $\Tilde{\mathbf{A}}_2$ ($\mathcal{X}\times_1\Tilde{\mathbf{A}}_1\times_2\Tilde{\mathbf{A}}_2$). In contrast, with a traditional GCN layer, node $v$ can only gather information from its direct neighbors along a single network (node $v'$ via $\Tilde{\mathbf{A}}_1$ or node $w$ via $\Tilde{\mathbf{A}}_2$). An additional advantage of TGCL lies in its robustness to missing values in $\mathcal{X}$, since TGCL is able to recover the value of a node from various combinations of adjacency matrices. For example, suppose the value of node $v$ is missing (i.e., $v=0$); then TGCL can recover its value by referencing the value of $v'$ (via $\mathcal{X}\times_1\Tilde{\mathbf{A}}_1$), or the value of $w$ (via $\mathcal{X}\times_2\Tilde{\mathbf{A}}_2$), or the value of $w'$ (via $\mathcal{X}\times_1\Tilde{\mathbf{A}}_1\times_2\Tilde{\mathbf{A}}_2$). However, a GCN layer can only refer to node $v'$ via $\Tilde{\mathbf{A}}_1$ or node $w$ via $\Tilde{\mathbf{A}}_2$. \begin{figure}[h!] \centering \includegraphics[width=.18\textwidth]{synergy.png} \caption{An illustration of the synergy analysis of TGCL.} \label{fig:synergy} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=0.99\textwidth]{trnn.png} \caption{Tensor Recurrent Neural Network (TRNN).} \label{fig:trnn} \end{figure*} \subsubsection{Complexity Analysis} For an $M$-mode tensor with $K$ ($1\leq K\leq M$) networks, the complexity of the tensor graph convolution (Equation \eqref{eq:cheby_tensor_graph_final}) is $O(2^{K-1}\prod_{m=1}^MN_m(2+\sum_{k=1}^KN_k))$.
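The $2^K$ network combinations counted by this bound can be enumerated directly. Below is a generic NumPy sketch of Equation \eqref{eq:cheby_tensor_graph_final} for arbitrary $M$; it is our own illustrative code (scalar coefficients, pre-normalized symmetric adjacencies), not the authors' implementation.

```python
import itertools
import numpy as np

def mode_m_product(X, U, m):
    # (X x_m U)[..., n', ...] = sum_{n_m} X[..., n_m, ...] U[n_m, n']
    return np.moveaxis(np.tensordot(X, U, axes=(m, 0)), -1, m)

def tensor_graph_conv(X, A_norm, theta):
    """Sum over all 2^M indicator vectors p in {0,1}^M.

    A_norm: list of the M normalized (symmetric) adjacency matrices.
    theta:  dict mapping each tuple p to its scalar coefficient.
    """
    out = np.zeros_like(X)
    for p in itertools.product((0, 1), repeat=len(A_norm)):
        term = X
        for m, (p_m, A_m) in enumerate(zip(p, A_norm)):
            if p_m:  # apply the filter on mode m only when p_m = 1
                term = mode_m_product(term, A_m, m)
        out = out + theta[p] * term
    return out
```

The inner loop applies at most $K$ mode-$m$ products per combination, which is where the $2^{K-1}$ factor in the complexity bound comes from.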
\subsection{Tensor Recurrent Neural Network}\label{sec:trnn} Given the output from TGCN: $\mathcal{H}_t\in\mathbb{R}^{N_1\times\cdots\times N_M\times d}$ (Equation \eqref{eq:tgcn2h}), the next step is to incorporate temporal dynamics into $\mathcal{H}_t$. As shown in Figure \ref{fig:trnn}, we propose a novel Tensor Recurrent Neural Network (TRNN), which captures the implicit relations among co-evolving time series by decomposing $\mathcal{H}_t$ into a low-dimensional core tensor $\mathcal{Z}_t\in\mathbb{R}^{N_1'\times\cdots\times N_M'\times d}$ ($N_m'<N_m$) via a Tensor Dimension Reduction module (Section \ref{sec:tensor_reduction}). The Tensor RNN Cell (Section \ref{sec:tensor_rnn_cell}) further introduces non-linear temporal dynamics into $\mathcal{Z}_t$ and produces the hidden state $\mathcal{Y}_{t}\in\mathbb{R}^{N_1'\times\cdots\times N_M'\times d}$. Finally, the Tensor Dimension Reconstruction module (Section \ref{sec:tensor_reconstruction}) reconstructs $\mathcal{Y}_{t}$ and generates the reconstructed tensor $\mathcal{R}_t\in\mathbb{R}^{N_1\times\cdots\times N_M\times d}$. \subsubsection{Tensor Dimension Reduction}\label{sec:tensor_reduction} As shown in the left part of Figure \ref{fig:trnn}, the proposed tensor dimension reduction module reduces the dimensionality of each mode of $\mathcal{H}_t\in\mathbb{R}^{N_1\times\cdots\times N_M\times d}$, except for the last mode (hidden features), by leveraging Tucker decomposition (Definition \ref{def:tucker}): \begin{equation}\label{eq:trnn_decompose} \mathcal{Z}_t = \mathcal{H}_t\prod_{m=1}^M\times_m\mathbf{U}_m^T \end{equation} where $\mathbf{U}_m\in\mathbb{R}^{N'_m\times N_m}$ denotes the orthonormal parameter matrix, which is learnable via backpropagation; $\mathcal{Z}_t\in\mathbb{R}^{N_1' \times\cdots\times N_M'\times d}$ is the core tensor of $\mathcal{H}_t$. \subsubsection{Tensor RNN Cell}\label{sec:tensor_rnn_cell} Classic RNN cells, e.g.
Long Short-Term Memory (LSTM) \cite{hochreiter1997long}, are designed for a single input sequence, and therefore do not directly capture the correlation among co-evolving sequences. To address this problem, we propose a novel Tensor RNN (TRNN) cell based on tensor algebra. We first propose a Tensor Linear Layer (TLL): \begin{equation} \label{eq:ttl} \text{TLL}(\mathcal{X}) = \mathcal{X}\prod_{m=1}^{M+1}\times_m\mathbf{W}_m + \mathbf{b} \end{equation} where $\mathcal{X}\in\mathbb{R}^{N_1\times\cdots\times N_M\times d}$ is the input tensor, and $\mathbf{W}_m\in\mathbb{R}^{N_m\times N_m'}$ ($\forall m\in[1,\cdots, M]$) and $\mathbf{W}_{M+1}\in\mathbb{R}^{d\times d'}$ are the linear transition parameter matrices; $\mathbf{b}\in\mathbb{R}^{d'}$ denotes the bias vector. TRNN can be obtained by replacing the linear functions in any RNN cell with the proposed TLL. We take LSTM as an example to re-formulate its updating equations. By replacing the linear functions in the LSTM with the proposed TLL, we obtain the updating functions for Tensor LSTM (TLSTM)\footnote{Bias vectors are omitted for clarity.}: \begin{align} \mathcal{F}_{t} &= \sigma(\textrm{TLL}_{fz}(\mathcal{Z}_t) + \textrm{TLL}_{fy}(\mathcal{Y}_{t-1})) \\ \mathcal{I}_{t} &= \sigma(\textrm{TLL}_{iz}(\mathcal{Z}_t) + \textrm{TLL}_{iy}(\mathcal{Y}_{t-1})) \\ \mathcal{O}_{t} &= \sigma(\textrm{TLL}_{oz}(\mathcal{Z}_t) + \textrm{TLL}_{oy}( \mathcal{Y}_{t-1})) \\ \Tilde{\mathcal{C}}_{t} &= \tanh(\textrm{TLL}_{cz}(\mathcal{Z}_t) + \textrm{TLL}_{cy}(\mathcal{Y}_{t-1})) \\ \mathcal{C}_{t} &= \mathcal{F}_{t} \odot \mathcal{C}_{t-1} + \mathcal{I}_{t} \odot \Tilde{\mathcal{C}}_{t} \\ \mathcal{Y}_{t} &= \mathcal{O}_{t} \odot \tanh(\mathcal{C}_{t}) \end{align} where $\mathcal{Z}_t\in\mathbb{R}^{N_1'\times\cdots\times N_M'\times d}$ and $\mathcal{Y}_t\in\mathbb{R}^{N_1'\times\cdots\times N_M'\times d'}$ denote the input core tensor and the hidden state tensor at the time step $t$; $\mathcal{F}_{t}$, $\mathcal{I}_{t}$,
$\mathcal{O}_{t}\in\mathbb{R}^{N_1'\times\cdots\times N_M'\times d'}$ denote the forget gate, the input gate and the output gate, respectively; $\Tilde{\mathcal{C}}_{t}\in\mathbb{R}^{N_1'\times\cdots\times N_M'\times d'}$ is the tensor for updating the cell memory $\mathcal{C}_t\in\mathbb{R}^{N_1'\times\cdots\times N_M'\times d'}$; TLL$_{\ast}(\cdot)$ denotes the tensor linear layer (Equation \eqref{eq:ttl}), and its subscripts in the above equations are used to distinguish different instantiations of TLL\footnote{For all TLL related to $\mathcal{Z}_t$: TLL$_{\ast z}(\cdot)$, $\mathbf{W}_m\in\mathbb{R}^{N_m\times N_m'}$ ($\forall m\in[1,\cdots, M]$) and $\mathbf{W}_{M+1}\in\mathbb{R}^{d\times d'}$. For all TLL related to $\mathcal{Y}_{t-1}$: TLL$_{\ast y}(\cdot)$, $\mathbf{W}_m\in\mathbb{R}^{N_m'\times N_m'}$ ($\forall m\in[1,\cdots, M]$) and $\mathbf{W}_{M+1}\in\mathbb{R}^{d'\times d'}$.}; $\sigma(\cdot)$ and $\tanh(\cdot)$ denote the sigmoid and hyperbolic tangent activation functions, respectively; $\odot$ denotes the Hadamard product. \subsubsection{Tensor Dimension Reconstruction}\label{sec:tensor_reconstruction} To predict the values of each time series, we need to reconstruct the dimensionality of each mode. Thanks to the orthonormality of $\mathbf{U}_m$ ($\forall m\in[1, \cdots, M]$), we can naturally reconstruct the dimensionality of $\mathcal{Y}_t\in\mathbb{R}^{N_1'\times\cdots\times N_M'\times d'}$ as follows: \begin{equation} \mathcal{R}_t = \mathcal{Y}_t\prod_{m=1}^M\times_m\mathbf{U}_m \end{equation} where $\mathcal{R}_{t}\in\mathbb{R}^{N_1\times\cdots\times N_M\times d'}$ is the reconstructed tensor. \subsubsection{Implicit Relationship} The Tucker decomposition (Definition \ref{def:tucker} and Equation \eqref{eq:trnn_decompose}) can be regarded as a high-order principal component analysis \cite{kolda2009tensor}.
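The reduction and reconstruction modules can be sketched in a few lines of NumPy. The helper names are ours, and we adopt the convention that each $\mathbf{U}_m$ is stored with shape $N_m'\times N_m$ and orthonormal rows, so that reduction is a mode product with $\mathbf{U}_m$, reconstruction one with $\mathbf{U}_m^T$, and reducing a reconstructed tensor recovers the core exactly ($\mathbf{U}_m\mathbf{U}_m^T=\mathbf{I}$).

```python
import numpy as np


def mode_product(X, A, m):
    """Mode-m product X x_m A with A of shape (J, N_m)."""
    return np.moveaxis(np.tensordot(A, X, axes=(1, m)), 0, m)


def tucker_reduce(H, Us):
    """Core tensor Z from H; Us[m] has shape (N_m', N_m).
    The last (channel) mode is left untouched."""
    Z = H
    for m, U in enumerate(Us):
        Z = mode_product(Z, U, m)
    return Z


def tucker_reconstruct(Z, Us):
    """Reconstruction R from the core Z via the transposed factors."""
    R = Z
    for m, U in enumerate(Us):
        R = mode_product(R, U.T, m)
    return R
```

With row-orthonormal factors (e.g. obtained from a QR decomposition), `tucker_reduce(tucker_reconstruct(Z, Us), Us)` returns `Z` up to floating-point error, which mirrors the orthonormality regularization used during training.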
The matrix $\mathbf{U}_m$ extracts eigenvectors of the $m$-th mode, and each element in $\mathcal{Z}$ indicates the relation between different eigenvectors. We define $\rho\geq0$ as the indicator of \textit{interaction degree}, such that $N_m' = \rho N_m$ ($\forall m\in[1,\cdots, M]$), to represent the degree to which TLSTM captures the correlations. The ideal range for $\rho$ is $(0,1)$. When $\rho=0$, TLSTM does not capture any relations and is reduced to a single LSTM. When $\rho=1$, TLSTM captures the relation for each pair of the eigenvectors. When $\rho>1$, $\mathbf{U}_m$ is over-complete and contains redundant information. Although Equation \eqref{eq:trnn_decompose} reduces the dimensionality, it is not guaranteed that the number of parameters in TLSTM will always be less than the number of parameters in multiple separate LSTMs, because of the newly introduced parameters $\mathbf{U}_m$ ($\forall m\in[1, \cdots, M]$). The following lemma provides an upper bound for $\rho$ given the dimensions of the input tensor and the hidden dimensions. \begin{lemma}[Upper-bound for $\rho$] Let $N_m$ and $N_m'$ be the dimensions of $\mathbf{U}_m$ in Equation \eqref{eq:trnn_decompose}, and let $d$ and $d'$ be the hidden dimensions of the inputs and outputs of TLSTM.
TLSTM uses fewer parameters than multiple separate LSTMs, as long as the following condition holds: \begin{equation}\label{eq:reduction_guarantee} \rho \leq \sqrt{\frac{(\prod_{m=1}^MN_m-1)d'(d+d'+1)}{2\sum_{m=1}^MN_m^2}+\frac{1}{256}} - \sqrt{\frac{1}{256}} \end{equation} \end{lemma} \begin{proof} There are $\prod_{m=1}^MN_m$ time series in total in the tensor time series $\mathcal{S}\in\mathbb{R}^{N_1\times\cdots\times N_M\times T}$, and thus the total number of parameters for $\prod_{m=1}^MN_m$ separate LSTMs is: \begin{equation} \begin{split} N^{(LSTM)} &= \prod_{m=1}^{M}N_m[4(d d' + d' d' + d')] \\ &=4d'(d+d'+1)\prod_{m=1}^MN_m \end{split} \end{equation} The total number of parameters for the TLSTM is: \begin{equation} N^{(TLSTM)} = 4d'(d+d'+1) + 8\sum_{m=1}^MN_m'^2 + \sum_{m=1}^M N_m'N_m \end{equation} where the first two terms on the right side are the numbers of parameters of the TLSTM cell, and the third term is the number of parameters required by $\{\mathbf{U}_m\}_{m=1}^M$ in the Tucker decomposition. Let $\Delta = N^{(TLSTM)} - N^{(LSTM)}$, and replace $N_m'$ with $\rho N_m$; then we have: \begin{equation} \Delta = (8\rho^2 + \rho)\sum_{m=1}^{M}N_m^2 - 4(\prod_{m=1}^MN_m - 1)d'(d+d'+1) \end{equation} $\Delta$ is a convex function of $\rho$. Hence, as long as $\rho$ satisfies the condition specified in the following equation, the number of parameters is guaranteed to be reduced.
\begin{equation} \rho \leq \sqrt{\frac{(\prod_{m=1}^MN_m-1)d'(d+d'+1)}{2\sum_{m=1}^MN_m^2}+\frac{1}{256}} - \sqrt{\frac{1}{256}} \end{equation}\end{proof} \subsection{Output Module}\label{sec:output} Given the reconstructed hidden representation tensor obtained from the TRNN: $\mathcal{R}_{t}\in\mathbb{R}^{N_1\times\cdots\times N_M\times d'}$, which captures the temporal dynamics, and the node embedding of the current snapshot $\mathcal{S}_t$: $\mathcal{H}_t\in\mathbb{R}^{N_1\times\cdots\times N_M\times d}$, the output module is a function mapping $\mathcal{R}_t$ and $\mathcal{H}_t$ to $\mathcal{S}_{t+1}\in\mathbb{R}^{N_1\times\cdots\times N_M}$. We use a Multi-Layer Perceptron (MLP) with a linear output activation as the mapping function: \begin{equation} \hat{\mathcal{S}}_{t+1} = \text{MLP}([\mathcal{H}_t, \mathcal{R}_t]) \end{equation} where $\hat{\mathcal{S}}_{t+1}\in\mathbb{R}^{N_1\times\cdots\times N_M}$ represents the predicted snapshot; $\mathcal{H}_t$ and $\mathcal{R}_t$ are the outputs of TGCN and TRNN respectively; and $[\cdot, \cdot]$ denotes the concatenation operation. \subsection{Training} Directly training RNNs over the entire sequence is impractical in general \cite{sutskever2013training}. A common practice is to partition the long time series data by a certain window size with $\omega$ historical steps and $\tau$ future steps \cite{li2017diffusion, yu2017spatio, li2019predicting}.
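The bound in Equation \eqref{eq:reduction_guarantee} can be verified numerically. The sketch below (our own helper functions) evaluates $\Delta(\rho)$ from the proof and the closed-form bound; for instance, for the \textit{Motes} configuration ($N_1=54$, $N_2=4$, $d=d'=8$) the bound evaluates to $\rho_{max}\approx 2.17$, and $\Delta$ vanishes exactly at the bound.

```python
import numpy as np


def rho_bound(Ns, d, dp):
    """Closed-form upper bound on rho from the lemma."""
    P = np.prod(Ns)                     # prod_m N_m
    S = sum(N ** 2 for N in Ns)         # sum_m N_m^2
    return np.sqrt((P - 1) * dp * (d + dp + 1) / (2 * S) + 1 / 256) \
        - np.sqrt(1 / 256)


def delta(rho, Ns, d, dp):
    """Parameter difference Delta = N_TLSTM - N_LSTM as a function of rho."""
    P = np.prod(Ns)
    S = sum(N ** 2 for N in Ns)
    return (8 * rho ** 2 + rho) * S - 4 * (P - 1) * dp * (d + dp + 1)
```

Since $\Delta$ is a convex quadratic in $\rho$ with $\Delta(0)<0$, any $\rho$ below the positive root (the bound) guarantees a parameter reduction.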
Given a time step $t$, let $\{\mathcal{S}_{t'}\}_{t'=t-\omega+1}^t$ and $\{\mathcal{S}_{t'}\}_{t'=t+1}^{t+\tau}$ be the historical and the future slices; the objective function of one window slice is defined as: \begin{equation} \begin{split} \arg \min_{\mathbf{\Theta}, \mathbf{\mathcal{W}}, \mathbf{\mathcal{B}}} & ||\textsc{NeT$^3$}(\{\mathcal{S}_{t'}\}_{t'=t-\omega+1}^t) - \{\mathcal{S}_{t'}\}_{t'=t+1}^{t+\tau}||_F^2\\ + & \mu_1\sum_{t'=t-\omega+1}^{t}||\mathcal{H}_{t'}- \mathcal{Z}_{t'}\prod_{m=1}^M\times_m\mathbf{U}_m||_F^2 \\ + & \mu_2 \sum_{m=1}^M||\mathbf{U}_m\mathbf{U}^T_m - \mathbf{I}_m||_F^2 \end{split} \end{equation} where \textsc{NeT$^3$} denotes the proposed model; $\mathbf{\Theta}$ and $\mathbf{\mathcal{W}}$ represent the parameters of TGCN and TRNN respectively; $\mathbf{\mathcal{B}}$ denotes the bias vectors; the second term denotes the reconstruction error of the Tucker decomposition; the third term denotes the orthonormality regularization for $\mathbf{U}_m$, and $\mathbf{I}_m$ denotes the identity matrix ($\forall m\in[1,\cdots, M]$); $||\cdot||_F$ is the Frobenius norm; $\mu_1$ and $\mu_2$ are coefficients. \subsection{Experimental Setup}\label{sec:exp_setup} \subsubsection{Datasets} We evaluate the proposed \textsc{NeT$^3$}\ model on five real-world datasets, whose statistics are summarized in Table~\ref{tab:dataset}. \paragraph{Motes Dataset} The \textit{Motes} dataset\footnote{\url{http://db.csail.mit.edu/labdata/labdata.html}} \cite{motes_dataset} is a collection of reading logs from 54 sensors deployed in the Intel Berkeley Research Lab. Each sensor collects 4 types of data, i.e., temperature, humidity, light, and voltage. Following \cite{cai2015facets}, we evaluate all the methods on the log of one day, which has 2880 time steps in total, yielding a $54\times 4\times 2880$ tensor time series. We use the average connectivity of each pair of sensors to construct the network for the first mode (54 sensors).
As for the network of the four data types, we use the Pearson correlation coefficient between each pair of them: \begin{equation}\label{eq:pearson} \mathbf{A}[i,j] = \frac{1}{2}(r_{ij} + 1) \end{equation} where $r_{ij}\in[-1,1]$ denotes the Pearson correlation coefficient between the sequence $i$ and the sequence $j$. \paragraph{Soil Dataset} The \textit{Soil} dataset contains a one-year log of water temperature and volumetric water content collected from 42 locations and 5 depth levels in the Cook Agronomy Farm (CAF)\footnote{\url{http://www.cafltar.org/}} near Pullman, Washington, USA \cite{gasch2017pragmatic}, which forms a $42\times 5\times 2\times 365$ tensor time series. Since the dataset provides neither the specific location information of the sensors nor the relation between the water temperature and volumetric water content, we use the Pearson correlation, as shown in Equation \eqref{eq:pearson}, to build the adjacency matrices for all the modes. \begin{table}[t] \centering \caption{Statistics of the datasets.} \begin{tabular}{c|r|r|r} \hline Dataset & Shape & \# Nodes & Modes with $\mathbf{A}$\\ \hline \textit{Motes} & $54\times4\times2880$ & 216 & 1, 2\\ \textit{Soil} & $42\times5\times2\times365$ & 420 & 1, 2, 3\\ \textit{Revenue} & $410\times3\times62$ & 1,230 & 1, 2\\ \textit{Traffic} & $1000\times2\times1440$ & 2,000 & 1 \\ \textit{20CR} & $30\times30\times20\times6\times180$ & 108,000 & 1, 2, 3, 4\\ \hline \end{tabular} \label{tab:dataset} \end{table} \paragraph{Revenue Dataset} The \textit{Revenue} dataset comprises the actual and two estimated quarterly revenues for 410 major companies (e.g. Microsoft Corp.\footnote{\url{https://www.microsoft.com/}}, Facebook Inc.\footnote{\url{https://www.facebook.com/}}) from the first quarter of 2004 to the second quarter of 2019, which yields a $410\times 3\times 62$ tensor time series.
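Building a correlation-based adjacency matrix as in Equation \eqref{eq:pearson} is straightforward with NumPy; the function name below is ours. The affine map $(r_{ij}+1)/2$ sends correlations in $[-1,1]$ to edge weights in $[0,1]$, with perfectly anti-correlated pairs mapped to $0$.

```python
import numpy as np


def pearson_adjacency(series):
    """A[i, j] = (r_ij + 1) / 2 for rows of `series` (shape: num_series x T)."""
    r = np.corrcoef(series)  # pairwise Pearson correlation coefficients
    return 0.5 * (r + 1.0)
```

The resulting matrix is symmetric with unit diagonal, which is the expected shape for a (dense) weighted adjacency matrix of a mode.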
We construct a co-search network \cite{lee2015search} based on log files of the U.S. Securities and Exchange Commission (SEC)\footnote{\url{https://www.sec.gov/dera/data/edgar-log-file-data-set.html}} to represent the correlation among different companies, which is used as the adjacency matrix for the first mode. We also use the Pearson correlation coefficient to construct the adjacency matrix for the three revenues as in Equation \eqref{eq:pearson}. \paragraph{Traffic Dataset} The \textit{Traffic} dataset is collected from the Caltrans Performance Measurement System (PeMS).\footnote{\url{https://dot.ca.gov/programs/traffic-operations/mobility-performance-reports}} Specifically, hourly average speed and occupancy of 1,000 randomly chosen sensor stations in District 7 of California from June 1, 2018, to July 30, 2018, are collected, which yields a $1000\times 2\times 1440$ tensor time series. The adjacency matrix $\mathbf{A}_1$ for the first mode is constructed by indicating whether two stations are adjacent: $\mathbf{A}_1[i,j]=1$ indicates that the stations $i$ and $j$ are next to each other. As for the second mode, since the Pearson correlation between speed and occupancy is not significant, we use the identity matrix $\mathbf{I}$ as the adjacency matrix. \paragraph{20CR Dataset} We use version 3 of the 20th Century Reanalysis data\footnote{20th Century Reanalysis V3 data provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, USA, from their Web site \url{https://psl.noaa.gov/data/gridded/data.20thC_ReanV3.html}}\footnote{Support for the Twentieth Century Reanalysis Project version 3 dataset is provided by the U.S.
Department of Energy, Office of Science Biological and Environmental Research (BER), by the National Oceanic and Atmospheric Administration Climate Program Office, and by the NOAA Physical Sciences Laboratory.} \cite{compo2011twentieth, slivinski2019towards} collected by the National Oceanic and Atmospheric Administration (NOAA) Physical Sciences Laboratory (PSL). We use a subset of the full dataset, which covers a $30\times30$ area of North America, ranging from $30^\circ$ N to $60^\circ$ N, $80^\circ$ W to $110^\circ$ W, and contains 20 atmospheric pressure levels. For each location point, 6 attributes are used, including air temperature, specific humidity, omega, u wind, v wind and geo-potential height.\footnote{For details of the attributes, please refer to the 20th Century Reanalysis project \url{https://psl.noaa.gov/data/20thC_Rean//}} We use the monthly average data ranging from 2001 to 2015. Therefore, the shape of the data is $30\times30\times20\times6\times180$. The adjacency matrix $\mathbf{A}_1$ for the first mode, latitude, is constructed by indicating whether two latitude degrees are next to each other: $\mathbf{A}_1[i,j]=1$ if $i$ and $j$ are adjacent. The adjacency matrices $\mathbf{A}_2$ and $\mathbf{A}_3$ for the second and the third modes are built in the same way as $\mathbf{A}_1$. We build $\mathbf{A}_4$ for the 6 attributes based on Equation \eqref{eq:pearson}. \subsubsection{Comparison Methods} We compare our methods with both classic methods (DynaMMo \cite{li2009dynammo}, MLDS \cite{rogers2013multilinear}) and recent deep learning methods (DCRNN \cite{li2017diffusion}, STGCN \cite{yu2017spatio}). We also compare the proposed full model \textsc{NeT$^3$} with its ablated versions. To evaluate TGCN, we compare it with MLP, GCN \cite{kipf2016semi} and iTGCN. Here, iTGCN is an ablated version of TGCN, which ignores the synergy between adjacency matrices.
The updating function of iTGCN is given by the following equation: \begin{equation}\label{eq:itgcn} \sigma(\sum_{m=1}^M\mathcal{X}\times_m\Tilde{\mathbf{A}}_m\times_{M+1}\mathbf{\Theta}_{m} + \mathcal{X}\times_{M+1}\mathbf{\Theta}_0) \end{equation} where $\sigma(\cdot)$ denotes the activation function, $\mathbf{\Theta}$ denotes a parameter matrix and $\mathcal{X}\in\mathbb{R}^{N_1\times\cdots\times N_M\times d}$. For a fair comparison with GCN and the baseline methods, we construct a flat graph by combining the adjacency matrices: \begin{equation} \mathbf{A}=\mathbf{A}_M\otimes_k\cdots\otimes_k\mathbf{A}_1 \end{equation} where $\otimes_k$ is the Kronecker product, the dimension of $\mathbf{A}$ is $\prod_{m=1}^MN_m$, and $N_m$ is the dimension of $\mathbf{A}_m$. To evaluate TLSTM, we compare it with multiple separate LSTMs (mLSTM) and a single LSTM. \begin{figure*}[t] \centering \subfloat[Motes-Missing] {\includegraphics[width=.20\linewidth]{missing_motes.png}\label{fig:exp_motes_missing}}\, \subfloat[Soil-Missing] {\includegraphics[width=.20\linewidth]{missing_soil.png}\label{fig:exp_soil_missing}}\, \subfloat[Revenue-Missing] {\includegraphics[width=.20\linewidth]{missing_ibm.png}\label{fig:exp_revenue_missing}}\, \subfloat[Traffic-Missing] {\includegraphics[width=.20\linewidth]{missing_traffic.png}\label{fig:exp_traffic_missing}}\, \subfloat[Motes-Future] {\includegraphics[width=.20\linewidth]{future_motes.png}\label{fig:exp_motes_future}}\, \subfloat[Soil-Future] {\includegraphics[width=.20\linewidth]{future_soil.png}\label{fig:exp_soil_future}} \subfloat[Revenue-Future] {\includegraphics[width=.20\linewidth]{future_ibm.png}\label{fig:exp_revenue_future}}\, \subfloat[Traffic-Future] {\includegraphics[width=.20\linewidth]{future_traffic.png}\label{fig:exp_traffic_future}}\, \caption{RMSE of missing value recovery (upper) and future value prediction (lower).}\label{fig:exp_rmse} \end{figure*} \begin{figure*}[t] \centering \subfloat[Motes-Missing]
{\includegraphics[width=.20\linewidth]{missing_motes_synergy.png}\label{fig:exp_motes_missing_synergy}}\, \subfloat[Soil-Missing] {\includegraphics[width=.20\linewidth]{missing_soil_synergy.png}\label{fig:exp_soil_missing_synergy}}\, \subfloat[Revenue-Missing] {\includegraphics[width=.20\linewidth]{missing_ibm_synergy.png}}\, \subfloat[Traffic-Missing] {\includegraphics[width=.20\linewidth]{missing_traffic_synergy.png}}\, \subfloat[Motes-Future] {\includegraphics[width=.20\linewidth]{future_motes_synergy.png}\label{fig:exp_motes_future_synergy}}\, \subfloat[Soil-Future] {\includegraphics[width=.20\linewidth]{future_soil_synergy.png}\label{fig:exp_soil_future_synergy}} \subfloat[Revenue-Future] {\includegraphics[width=.20\linewidth]{future_ibm_synergy.png}\label{fig:exp_revenue_future_synergy}}\, \subfloat[Traffic-Future] {\includegraphics[width=.20\linewidth]{future_traffic_synergy.png}\label{fig:exp_traffic_future_synergy}}\, \caption{Synergy Analysis: RMSE of missing value recovery (upper) and future value prediction (lower).}\label{fig:exp_rmse_synergy} \end{figure*} \begin{figure*}[t] \centering \subfloat[Missing Value Recovery] {\includegraphics[width=.20\linewidth]{missing_20cr.png}\label{fig:exp_20cr_missing}}\, \subfloat[Future Value Prediction] {\includegraphics[width=.20\linewidth]{future_20cr.png}\label{fig:exp_20cr_future}}\, \subfloat[Synergy-Missing] {\includegraphics[width=.20\linewidth]{missing_20cr_synergy.png}\label{fig:exp_20cr_missing_synergy}}\, \subfloat[Synergy-Future] {\includegraphics[width=.20\linewidth]{future_20cr_synergy.png}\label{fig:exp_20cr_future_synergy}} \caption{Experiments on the \textit{20CR} dataset}\label{fig:exp_20cr} \end{figure*} \begin{figure*}[h] \centering \subfloat {\includegraphics[width=.32\linewidth]{speed_1.png}}\, \subfloat {\includegraphics[width=.32\linewidth]{speed_2.png}}\, \subfloat {\includegraphics[width=.32\linewidth]{speed_3.png}}\, \subfloat {\includegraphics[width=.32\linewidth]{occ_1.png}}\, \subfloat 
{\includegraphics[width=.32\linewidth]{occ_2.png}}\, \subfloat {\includegraphics[width=.32\linewidth]{occ_3.png}} \caption{Visualization of future value prediction on the \textit{Traffic} dataset. The upper part presents the results for normalized speed. The lower part presents the results for normalized occupancy.}\label{fig:exp_visual} \end{figure*} \subsubsection{Implementation Details} For all the datasets and tasks, we use a one-layer TGCN, a one-layer TLSTM, and a one-layer MLP with the linear activation. The hidden dimension is fixed at $8$. We fix $\rho=0.8$, $0.8$, $0.2$, $0.1$ and $0.9$ for TLSTM on the \textit{Motes}, \textit{Soil}, \textit{Revenue}, \textit{Traffic}, and \textit{20CR} datasets respectively. The window size is set as $\omega=5$ and $\tau=1$, and the Adam optimizer \cite{kingma2014adam} with a learning rate of $0.01$ is adopted. Coefficients $\mu_1$ and $\mu_2$ are fixed at $10^{-3}$. \subsection{Effectiveness Results}\label{sec:exp_effectiveness} In this section, we present the effectiveness results for missing value recovery, future value prediction, synergy analysis and sensitivity experiments. \subsubsection{Missing Value Recovery} For all the datasets, we randomly select 10\% to 50\% of the data points as test sets, and we use the mean and standard deviation of each time series in the training sets to normalize each time series. The evaluation results on \textit{Motes}, \textit{Soil}, \textit{Revenue} and \textit{Traffic} are shown in Figures \ref{fig:exp_motes_missing}-\ref{fig:exp_traffic_missing}, and the results for \textit{20CR} are presented in Figure \ref{fig:exp_20cr_missing}. The proposed full model \textsc{NeT$^3$} (TGCN+TLSTM) outperforms all of the baseline methods in almost all of the settings. Among the baseline methods, those equipped with GCNs generally perform better than LSTM. When comparing TGCN with iTGCN, we observe that TGCN performs better than iTGCN in most of the settings.
This is due to TGCN's capability of capturing the synergy among graphs. We can also observe that TLSTM (TGCN+TLSTM) achieves lower RMSE than both mLSTM (TGCN+mLSTM) and LSTM (TGCN+LSTM), demonstrating the effectiveness of capturing the implicit relations. \subsubsection{Future Value Prediction} We use the last 2\% to 10\% of the time steps as test sets for the \textit{Motes}, \textit{Traffic}, \textit{Soil} and \textit{20CR} datasets, and the last 1\% to 5\% of the time steps as test sets for the \textit{Revenue} dataset. Similar to the missing value recovery task, the datasets are normalized by the mean and standard deviation of the training sets. The evaluation results are shown in Figures \ref{fig:exp_motes_future}-\ref{fig:exp_traffic_future} and Figure \ref{fig:exp_20cr_future}. The proposed \textsc{NeT$^3$}\ outperforms the baseline methods on all of the five datasets. Different from the missing value recovery task, the classic methods perform much worse than the deep learning methods on future value prediction, which might result from the fact that these methods are unable to capture the non-linearity in the temporal dynamics. Similar to the missing value recovery task, TGCN generally achieves lower RMSE than iTGCN and GCN, and TLSTM performs better than both mLSTM and LSTM. We present the visualization of the future value prediction task on the \textit{Traffic} dataset in Figure \ref{fig:exp_visual}. \begin{figure*}[t!]
\centering \subfloat[Missing value recovery] {\includegraphics[width=.32\linewidth]{sensitivity_miss.png}\label{fig:exp_sens_missing}} \: \subfloat[Future value prediction] {\includegraphics[width=.32\linewidth]{sensitivity_future.png}\label{fig:exp_sens_future}} \: \subfloat[Number of parameters] {\includegraphics[width=.32\linewidth]{sensitivity_parameters.png}\label{fig:exp_sens_para}} \caption{Sensitivity experiments of $\rho$ on the Motes dataset.}\label{fig:exp_sensitivity} \end{figure*} \subsubsection{Experiments on Synergy} In this section, we compare the proposed TGCN with iTGCN, GCN$_1$, GCN$_2$, GCN$_3$ and GCN$_4$ (if applicable) on the missing value recovery and future value prediction tasks. Here, GCN$_1$, GCN$_2$, GCN$_3$ and GCN$_4$ denote the GCN with the adjacency matrix of the 1st, 2nd, 3rd and 4th mode respectively. iTGCN is an independent version of TGCN (Equation \eqref{eq:itgcn}), which is a simple linear combination of different GCNs (GCN$_1$, GCN$_2$, GCN$_3$ and GCN$_4$). As shown in Figure \ref{fig:exp_rmse_synergy} and Figures \ref{fig:exp_20cr_missing_synergy}-\ref{fig:exp_20cr_future_synergy}, TGCN generally outperforms the GCNs designed for single modes as well as their simple combination (iTGCN). \subsubsection{Sensitivity Experiments} We use different values of $\rho$ for TLSTM on the \textit{Motes} dataset for the missing value recovery and future value prediction tasks and report the RMSE values in Figure \ref{fig:exp_sens_missing} and Figure \ref{fig:exp_sens_future}. In general, the greater $\rho$ is, the better the results (i.e., the smaller the RMSE). We believe the main reason is that a greater $\rho$ indicates that TLSTM captures more interactions between different time series. Figure \ref{fig:exp_sens_para} shows that the number of parameters of TLSTM is linear with respect to $\rho$.
\begin{table}[h] \centering \caption{In the upper part, $\rho_{max}$ and $\rho_{exp}$ are the upper bounds and the values of $\rho$ used in the experiments. The middle and lower parts present the number of parameters in TLSTM and LSTM, and the parameter reduction ratio.} \begin{tabular}{c|r|r|r|r|r} \hline & \textit{Motes} & \textit{Soil} & \textit{Revenue} & \textit{Traffic} & \textit{20CR}\\ \hline $\rho_{max}$ & 2.17 & 2.43 & 0.64 & 0.31 & 57.25\\ $\rho_{exp}$ & 0.80 & 0.80 & 0.20 & 0.10 & 0.90\\ \hline TLSTM & 18,552 & 10,996 & 87,967 & 180,554 & 16,696\\ mLSTM & 117,504 & 57,120 & 669,120 & 1,088,000 & 58,752,000\\ \hline Reduce & 84.21\% & 80.75\% & 86.85\% & 83.40\% & 99.97\%\\ \hline \end{tabular}\label{table:rho} \end{table} \begin{figure}[h!] \centering \subfloat {\includegraphics[width=.48\linewidth]{scalability_time.png}}\, \subfloat {\includegraphics[width=.48\linewidth]{scalability_parameters.png}}\, \caption{Scalability experiments.}\label{fig:exp_scalability} \end{figure} \subsection{Efficiency Results}\label{sec:exp_efficiency} In this section, we present the experimental results for memory efficiency and scalability. \subsubsection{Memory Efficiency} As shown in Table \ref{table:rho}, the upper bounds ($\rho_{max}$) of $\rho$ for the five datasets are 2.17, 2.43, 0.64, 0.31 and 57.25. In the experiments, we fix $\rho_{exp}=0.80$, $0.80$, $0.20$, $0.10$ and $0.90$ for the \textit{Motes}, \textit{Soil}, \textit{Revenue}, \textit{Traffic} and \textit{20CR} datasets, respectively. Given the above values of $\rho_{exp}$, the TLSTM row in Table \ref{table:rho} shows the number of parameters in TLSTM. The mLSTM row shows the number of parameters required by multiple separate LSTMs, one per time series. Compared with mLSTM, TLSTM significantly reduces the number of parameters by more than 80\% and yet performs better than mLSTM (Figure \ref{fig:exp_rmse} and Figures \ref{fig:exp_20cr_missing}-\ref{fig:exp_20cr_future}).
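The parameter counts in Table \ref{table:rho} can be reproduced from the count formulas in the proof of the lemma. In the sketch below the helper names are ours, and rounding the reduced dimensions as $N_m'=\lceil\rho N_m\rceil$ is our assumption; under it, the \textit{Motes} column ($N_1=54$, $N_2=4$, $d=d'=8$, $\rho=0.8$) is matched exactly.

```python
import math
import numpy as np


def n_lstm(Ns, d, dp):
    """Total parameters of one LSTM per series:
    prod_m N_m * 4 * (d*d' + d'*d' + d')."""
    return int(np.prod(Ns)) * 4 * dp * (d + dp + 1)


def n_tlstm(Ns, rho, d, dp):
    """TLSTM count mirroring the paper's formula:
    4d'(d+d'+1) + 8 * sum N_m'^2 + sum N_m' N_m,
    with N_m' = ceil(rho * N_m) (our rounding assumption)."""
    Nps = [math.ceil(rho * N) for N in Ns]
    return (4 * dp * (d + dp + 1)
            + 8 * sum(Np ** 2 for Np in Nps)
            + sum(Np * N for Np, N in zip(Nps, Ns)))
```

For \textit{Motes} this yields 18,552 vs. 117,504 parameters, i.e. a reduction of about 84.21\%, matching the table.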
\subsubsection{Scalability} We evaluate the scalability of \textsc{NeT$^3$} on the \textit{20CR} dataset in terms of the training time and the number of parameters. We fix $\rho=0.9$, and change the size of the input tensor by shrinking the dimension of all the modes by the specified ratios: [0.2, 0.4, 0.6, 0.8, 1.0]. Given the ratios, the input sizes (the number of nodes) are therefore 684, 6,912, 23,328, 55,296 and 108,000 respectively. The averaged training time of one epoch for TLSTM against the size of the input tensor is presented in the left part of Figure \ref{fig:exp_scalability}, and the number of parameters of TLSTM against the size of the input tensor is presented in the right part of Figure \ref{fig:exp_scalability}. Note that \textit{h} and \textit{k} on the x-axis represent \textit{hundreds} and \textit{thousands} respectively, and \textit{s} and \textit{k} on the y-axis represent \textit{seconds} and \textit{thousands} respectively. The figures show that the training time and the number of parameters grow almost linearly with the size of the input tensor. \subsection{Co-evolving Time Series} Co-evolving time series are ubiquitous and appear in a variety of applications, such as environmental monitoring, financial analysis and smart transportation. Li et al. \cite{li2009dynammo} proposed a linear dynamic system based on the Kalman filter and Bayesian networks to model co-evolving time series. Rogers et al. \cite{rogers2013multilinear} extended \cite{li2009dynammo} and proposed a Multi-Linear Dynamic System (MLDS), which provides the basis of the proposed TRNN. Yu et al. \cite{yu2016temporal} proposed a Temporal Regularized Matrix Factorization (TRMF) for modeling co-evolving time series. Zhou et al. \cite{7837896} proposed a bi-level model to detect rare patterns in time series. Recently, Yu et al. \cite{yu2017deep} used LSTM \cite{hochreiter1997long} for modeling traffic flows. Liang et al.
\cite{liang2018geoman} proposed a multi-level attention network for geo-sensory time series prediction. Srivastava et al. \cite{srivastava2018comparative} and Zhou et al. \cite{zhou2017recover} used separate RNNs for weather and air quality monitoring time series. Yu et al. \cite{yu2017long} proposed a HOT-RNN based on tensor-trains for long-term forecasting. Zhou et al. \cite{zhou2020domain} proposed a multi-modality neural attention network for financial time series. One limitation of this line of research is that it often ignores the relation network between different time series. \subsection{Graph Convolutional Networks} Plenty of real-world data can naturally be represented as a network or graph, such as social networks and sensor networks. Bruna et al. \cite{bruna2013spectral} defined the spectral graph convolution operation in the Fourier domain by analogy to one-dimensional convolution. Henaff et al. \cite{henaff2015deep} used a linear interpolation, and Defferrard et al. \cite{defferrard2016convolutional} adopted Chebyshev polynomials to approximate the spectral graph convolution. Kipf et al. \cite{kipf2016semi} simplified the Chebyshev approximation and proposed the GCN. These methods were typically designed for flat graphs. There are also graph convolutional network methods that consider multiple types of relationships. Monti et al. \cite{monti2017geometric} proposed a multi-graph CNN for matrix completion, which does not apply to tensor graphs. Wang et al. \cite{wang2019heterogeneous} proposed HAN, which adopts an attention mechanism to extract node embeddings from different layers of a multiplex network \cite{de2013mathematical, jing2021hdmi, yan2021dynamic}; a multiplex network is a flat graph with multiple types of relations, not the \textit{tensor graph} in our paper. Liu et al. \cite{liu2020tensor} proposed a TensorGCN for text classification.
It is worth pointing out that the term \textit{tensor} in \cite{liu2020tensor} was used in a different context, i.e., it actually refers to a multiplex graph. For a comprehensive review of graph neural networks, please refer to \cite{zhang2019graph, zhou2018graph, wu2020comprehensive}. \subsection{Networked Time Series} Relation networks have been encoded into traditional machine learning methods such as dynamic linear \cite{li2009dynammo} and multi-linear \cite{rogers2013multilinear} systems for co-evolving time series \cite{cai2015fast, cai2015facets, hairi2020netdyna}. Recently, Li et al. \cite{li2017diffusion} incorporated the spatial dependency of co-evolving traffic flows by the diffusion convolution. Yu et al. \cite{yu2017spatio} used GCN to incorporate spatial relations and CNN for capturing temporal dynamics. Yan et al. \cite{yan2018spatial} introduced a spatial-temporal GCN for skeleton recognition. Li et al. \cite{li2019predicting} leveraged RGCN \cite{schlichtkrull2018modeling} to model spatial dependency and LSTM \cite{hochreiter1997long} for temporal dynamics. These methods only focus on the relation graphs of a single mode, and ignore relations on other modes, e.g., the correlation between the speed and occupancy of traffic. In addition, these methods rely on the same function for capturing the temporal dynamics of all time series. It is worth pointing out that the proposed \textsc{NeT$^3$}\ unifies and supersedes both co-evolving time series and networked time series as a more general data model. For example, if the adjacency matrix $\mathbf{A}_m (m=1,...,M)$ for each mode is set as an identity matrix, the proposed \textsc{NeT$^3$}\ degenerates to co-evolving (tensor) time series (e.g., \cite{li2009dynammo}); networked time series in~\cite{cai2015fast} can be viewed as a special case of \textsc{NeT$^3$}\ whose tensor $\mathcal X$ only has a single mode.
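To see concretely why identity adjacency matrices remove the network structure, consider the symmetrically normalized propagation rule of \cite{kipf2016semi}: with $\mathbf{A}=\mathbf{I}$, the normalized propagation matrix collapses to the identity, so the layer acts on each node (time series) independently. A minimal numpy sketch (function and variable names are illustrative, not from our implementation):

```python
import numpy as np

def gcn_layer(A, X, W):
    # one GCN propagation step D^{-1/2} (A + I) D^{-1/2} X W
    # (the rule of Kipf & Welling, before the nonlinearity)
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    return A_norm @ X @ W

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))  # 4 nodes (time series), 3 features
W = rng.standard_normal((3, 2))

# with an identity adjacency matrix, A + I = 2I and D = 2I, so the
# normalized propagation matrix is the identity: the layer degenerates
# to a per-node transform X @ W and no information flows between nodes
out = gcn_layer(np.eye(4), X, W)
assert np.allclose(out, X @ W)
```

This is exactly the degeneration to co-evolving (tensor) time series described above: each series is transformed on its own, with no message passing along the mode's graph.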
\section{Introduction} \input{001intro.tex} \section{Preliminaries}\label{sec:preliminary} \input{002preliminary} \section{Methodology}\label{sec:methods} \input{003method.tex} \section{Experiments}\label{sec:experiments} \input{004experiments.tex} \section{Related Works}\label{sec:related_work} \input{005relatedwork.tex} \section{Conclusion}\label{sec:conclusion} \input{006conclusion.tex} \begin{acks} This work is supported by the National Science Foundation under grant No. 1947135, by Agriculture and Food Research Initiative (AFRI) grant no. 2020-67021-32799/project accession no.1024178 from the USDA National Institute of Food and Agriculture, and by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) - a research collaboration as part of the IBM AI Horizons Network. The content of the information in this document does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:intro} Despite the strong ability of Convolutional Neural Networks (CNNs) in feature representation and image recognition, these cumbersome models often lack explainability, limiting end-users' trust in, and reliance on, the decisions they make. Explainable AI (XAI) is a field that attempts to build third-party consumers' trust in AI models by opening their black box and elucidating the reasoning behind the models' predictions. By meeting these goals, XAI algorithms provide the users with answers to questions such as ``Why does the model predict what it predicts?", ``When does the model make an unreliable prediction?", and ``How does the model behave if it is put in a specific scenario?" \cite{lipton2018mythos, guidotti2018survey}. In particular, visual explanation methods (a.k.a. attribution methods) are among the most celebrated groups of XAI methods that explain the predictions made by CNNs. These algorithms belong to the branch of `post-hoc' explanation algorithms that interpret the behavior of the model in the evaluation phase. Visual explanation methods formulate their problem as follows: they take a model trained for image recognition and a digital image as inputs. The model is fed with the image and makes a prediction accordingly. The method's objective is to output a 2-dimensional heatmap, named the `explanation map', with the same height and width as the input image. The explanation map scores the regions of the image based on their contribution to the model's prediction. One notable group of visual XAI approaches are the ones based on the Class Activation Mapping (CAM) method \cite{CAM}. These approaches are specialized for CNNs and inspired by \cite{zhou2014object}, which showed that CNNs act like object detectors and can learn high-level representations of object instances in an unsupervised manner.
Grad-CAM is a popular CAM-based approach that utilizes backpropagation to score the feature maps' locations in a specific layer \cite{GradCAM}. Grad-CAM and the other methods that employ backpropagation to form explanation maps (such as Grad-CAM++ and XGrad-CAM \cite{GradCAM, fu2020axiom}) offer great versatility and faithfulness. However, the performance of these methods is limited, as gradient-based values underestimate the sensitivity of the model's output to the features represented in the image. This shortcoming has been addressed in prior works such as \cite{ScoreCAM, AblationCAM, SmoothGradCAMPP}. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{IGCAM_fig01.pdf} \caption{Comparison of baseline CAM-based methods with Integrated Grad-CAM to show the ability of our method to generate faithful class discriminative explanation maps.} \label{fig:my_label} \end{figure} In this work, we propose a novel technique to reduce the shortcomings of Grad-CAM. In common with Grad-CAM and Grad-CAM++, our method also utilizes signal backpropagation for weighting feature maps. However, we replace the gradient terms in Grad-CAM with similar terms based on \textit{Integrated Gradient}, inspired by an attribution method of the same name \cite{IntegGrad}. Hence, we name our CAM-based algorithm \textit{Integrated Grad-CAM}. To summarize, the main contributions of this work are as follows: \begin{itemize} \item We propose Integrated Grad-CAM, which bridges Integrated Gradient and Grad-CAM to solve the gradient issues in prior CAM-based methods while retaining the benefits of backpropagation techniques. \item We demonstrate our proposed method's ability, compared to Grad-CAM, Grad-CAM++, and Integrated Gradient, by conducting experiments on shallow and deep networks and reporting qualitative and quantitative results.
Our empirical results imply that our method successfully combines the practical ideas of each of these methods, improving on them in completeness, faithfulness, and satisfaction. \end{itemize} \section{Related works} \label{sec:format} \subsection{Backpropagation-based methods: } Computing the gradient of a model's output with respect to the input features or the hidden neurons is the basis of this type of algorithm. The earliest backpropagation-based methods operate directly by computing the sensitivity of the model's confidence score to the input features \cite{simonyan2013deep}. To develop such methods, some approaches such as \cite{bach2015pixel, nam2020relative} modify their backpropagation rules to assign scores to the input features, denoting the relevance or irrelevance of the input features to the model's prediction. Also, an Integrated Gradient calculation was defined by \cite{IntegGrad} to satisfy two axioms termed \textit{sensitivity} and \textit{implementation invariance}, as per their definition. \subsection{Grad-CAM: } This method runs in two steps to form an explanation map using the outputs of a given layer (usually, the last convolutional layer) in the feature extraction unit of the target CNN model. In the first step, the selected layer is probed, and its corresponding feature maps are collected. In the second step, the signal is partially backpropagated from the output to the selected layer. Then, the average of the gradient values with respect to the pixels in each feature map is calculated. Denoting the input image by $I$, the model's confidence score for class $c$ by $y_{c}(I)$, and the selected layer by $l$, Grad-CAM initially collects the feature maps $\{A^{l1}(I), A^{l2}(I),..., A^{lN}(I)\}$ in a forward pass ($N$ denotes the number of feature maps in the chosen layer). Then, the signal is passed back from the output neuron to the layer $l$.
To reach the explanation map, Grad-CAM performs a weighted combination of the feature maps using their corresponding average gradient-based weights: \begin{equation} M_{Grad-CAM}^c = \text{ReLU}\big( \sum_{k=1}^{N} (\frac{1}{Z}\sum_{i,j}\frac{\partial y_c(I)}{\partial A_{ij}^{lk}(I)})A^{lk}(I)\big) \label{grad-cam} \end{equation} In the equation above, $A_{ij}^{lk}(I)$ refers to the location $\{i,j\}\in \mathbb{R}^{u,v}$ in the $k$-th feature map, and $\{u,v\}$ denote the dimensions of the feature maps ($Z=u\times v$). The dimensions of $M_{Grad-CAM}^c$ are the same as those of the feature maps, and usually smaller than the input image. Hence, the final Grad-CAM explanation map is reached by upsampling $M_{Grad-CAM}^c$ to the size of $I$ through bilinear interpolation. \subsection{Integrated Gradient} One of the main drawbacks of deploying backpropagation in attribution methods is that they violate the \textit{sensitivity} axiom. As discussed in previous works such as Integrated Gradient and DeepLift \cite{IntegGrad, shrikumar2016not}, this axiom implies that for each given pair of input and baseline image differing only in one feature, an attribution method should highlight this difference by assigning different values corresponding to that feature, which envisions the response of the model to this difference. To address this issue in the vanilla gradient \cite{simonyan2013deep}, it was proposed in \cite{IntegGrad} that, given a defined baseline and the input image, the sensitivity of the output's confidence scores to the input features can be captured more faithfully by calculating the integral of the gradient values along any continuous path connecting the baseline and the input. \section{Methodology} \label{sec:pagestyle} Like gradient-based methods, Grad-CAM violates the sensitivity axiom \cite{IntegGrad} when dealing with non-linear components of a CNN, such as activation functions (e.g., ReLU).
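As a minimal illustration (not the authors' released code), eq.~\eqref{grad-cam} reduces to a few lines of numpy once the feature maps and their gradients have been collected in the two steps described above:

```python
import numpy as np

def grad_cam(feature_maps, grads):
    # feature_maps, grads: (N, u, v) arrays holding A^{lk}(I) and
    # dy_c/dA^{lk}(I) for the chosen layer l
    weights = grads.mean(axis=(1, 2))                  # (1/Z) sum_ij gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k w_k A^{lk}
    return np.maximum(cam, 0.0)                        # ReLU

# toy check: two 2x2 feature maps with constant gradients +1 and -1;
# the weighted sum 1*1 + (-1)*2 = -1 is clipped to zero by the ReLU
A = np.stack([np.ones((2, 2)), 2 * np.ones((2, 2))])
g = np.stack([np.ones((2, 2)), -np.ones((2, 2))])
cam = grad_cam(A, g)
```

The upsampling to the input size (bilinear interpolation) is omitted here, since it does not affect the weighting scheme.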
To reduce this problem, we integrate the local sensitivity scores of the model's output to the neurons in each feature map as the input image is scaled from a pre-defined baseline $I'$ to the main input image $I$. Given a pair of baseline and input, a path connecting these two is defined as: \begin{equation} \gamma(\alpha)=I'+f(\alpha)\times(I-I') \label{eq:path} \end{equation} where $\alpha$ is a scalar variable, and the function $f(\alpha) : \mathbb{R}\rightarrow\mathbb{R}$ is differentiable and monotonically increasing for $0\leq\alpha\leq 1$, and satisfies $f(0)=0$ and $f(1)=1$. Gradient-based schemes may fail to quantify the neurons' contribution to the prediction for $I$ correctly when some of the paths linking them with the output node possess inactivated neurons \cite{shrikumar2016not}. Hence, the neurons' contribution scores can be determined more accurately by probing the relationship between them and the output node as the input image changes from a certain baseline. For each pair of assumed functions $g(.)$ and $h(.)$, the path integral gradient (PathIG) is calculated as follows: \begin{equation} \text{PathIG}_{h,g}(I)\equiv \int_{\alpha=0}^{1}\frac{d h(\gamma(\alpha))}{d g(\gamma(\alpha))}[g(\gamma(\alpha))-g(I')]d\alpha \label{eq:PathIG} \end{equation} For computational simplicity, the path from the baseline to the input image is defined as a straight line by setting $f(\alpha)=\alpha$. In Integrated Grad-CAM, we formulate the scoring scheme considering average gradient values for each feature map. The general formulation of our equation is similar to eq. \eqref{grad-cam}; however, we update the average gradient terms in Grad-CAM with the corresponding average integrated gradient values.
Hence, our explanation maps $M^c$ are computed as: \begin{equation} M^c=\int_{\alpha=0}^{1}\text{ReLU}(\sum_{k=1}^{N}\sum_{i,j}\frac{\partial y_c(\gamma(\alpha))}{\partial A_{ij}^{lk}(\gamma(\alpha))}\Delta_{lk}(\gamma(\alpha)))d\alpha \label{int-grad-cam} \end{equation} where \begin{equation} \Delta_{lk}(\gamma(\alpha))=(A^{lk}(\gamma(\alpha))-A^{lk}(I')) \label{delta} \end{equation} In the equations above, $y_{c}(\gamma(\alpha))$ is the confidence score achieved for class $c$ and the input image $\gamma(\alpha)$, and $A^{lk}(\gamma(\alpha))$ is the $k$-th feature map derived from the layer $l$. Also, according to \cite{IntegGrad}, a black image is an appropriate choice for the baseline since black regions contain no significant attributions. As in Grad-CAM, once the saliency maps are generated through the equation above, the final explanation maps are obtained by upsampling $M^c$ to the dimensions of the input image via bilinear interpolation. \begin{figure}[t] \centering \includegraphics[width=0.99\linewidth]{IGSCHEME4.pdf} \caption{Schematic of the proposed method considering that the baseline image is set to black and the path connecting the baseline and the input is set as a straight line.} \label{fig:Schematic} \end{figure} Evaluating integrals exactly in a software (or hardware) environment is challenging. In our case, a simple solution is to approximate the integral in equation \eqref{int-grad-cam} with a summation via a Riemann approximation. To perform such an estimation, we sample points along the path with a constant interval, calculate the expression in equation \eqref{int-grad-cam} at these points, and estimate the term $d\alpha$ with the interval size.
Considering the interval step to be $\frac{1}{m}$ ($m \in \mathbb{N}$), the integrated gradient-based score maps can be approximated as follows: \begin{equation} M^c \approx\sum_{t=1}^{m}\text{ReLU}\big(\frac{1}{m}\sum_{k=1}^{N}\sum_{i,j}\frac{\partial y_c(\gamma(\frac{t}{m}))}{\partial A_{ij}^{lk}(\gamma(\frac{t}{m}))}\Delta_{lk}(\gamma(\frac{t}{m}))\big) \label{int-grad-cam-approx} \end{equation} Approximating equation \eqref{int-grad-cam} in this way makes our method equivalent to averaging the Grad-CAM saliency maps obtained for multiple copies of the input, linearly interpolated with the defined baseline, as shown in figure \ref{fig:Schematic}. \section{Experiments} \label{sec:typestyle} To verify the improved completeness and faithfulness of the explanations provided by our method, we have conducted experiments that compare it with the baseline methods, Grad-CAM and Grad-CAM++. In the experiments, we utilized the TorchRay library provided in \cite{fong2019understanding} and implemented our method in PyTorch \cite{paszke2019pytorch}\footnote{Our code is publicly available at: \url{https://github.com/smstrzd/IntegratedGradCAM}}. In all experiments, we applied our method and the other conventional CAM-based algorithms to the last convolutional layer, since this layer provides the highest-level representations captured by the CNN. Moreover, we set the interval step $m$ in our method to 50 to reach an acceptable trade-off between precision and computational overhead; for any value of this parameter between 20 and 200, the results of our method do not vary considerably. \subsection{Dataset and Models} \label{ssec:subhead} Our experiments are performed on two networks trained on the PASCAL VOC 2007 dataset. We used the test set of this dataset to collect the qualitative and quantitative results. PASCAL VOC 2007 is an object detection dataset containing 4952 test images from 20 different output classes.
The presence of multiple objects, from either the same class or different classes, makes interpreting models trained on this dataset more challenging: explanation approaches that produce class-indiscriminative saliency maps for the model's predictions for multiple classes are expected to fail to interpret these models accurately. In this work, we utilized two networks with different structures, trained on the mentioned dataset by \cite{ModelTrainer} and provided in the TorchRay library. The first model is a VGG-16 network achieving a top-1 accuracy of 87.18\%, and the second is a deeper ResNet-50 network with a top-1 accuracy of 87.96\%. Both models take images of size $224\times 224\times 3$ as input. Thus, all images are resized to these dimensions before they are passed through the models. \subsection{Quantitative Evaluation} \begin{table}[t] \centering \begin{tabular}{c c c c c} \toprule & \multirow{2}{*}{\textbf{Metric}} & Grad- & Grad- & Integrated \\ & & CAM & CAM++ & Grad-CAM \\ \midrule \multirow{4}{*}{\STAB{\rotatebox[origin=c]{90}{\textbf{VGG16}}}} & \textbf{EBPG} & 55.44 & 46.29 & \textbf{55.94} \\ & \textbf{Bbox} & 51.7 & 55.59 & \textbf{55.6} \\ & \textbf{Drop\%} & 49.47 & 60.63 & \textbf{47.96} \\ & \textbf{Increase\%} & 31.08 & 23.89 & \textbf{31.47} \\ \midrule \multirow{4}{*}{\STAB{\rotatebox[origin=c]{90}{\textbf{ResNet-50}}}} & \textbf{EBPG} & 60.08 & 47.78 & \textbf{60.41} \\ & \textbf{Bbox} & 60.25 & 58.66 & \textbf{61.94} \\ & \textbf{Drop\%} & 35.80 & 41.77 & \textbf{34.49} \\ & \textbf{Increase\%} & 36.58 & 32.15 & \textbf{36.84} \\ \bottomrule \end{tabular} \caption{Results of quantitative analysis on the PASCAL VOC 2007 test set. For each metric, the best result is shown in bold. Except for Drop\%, higher is better for all metrics.
The results are reported in percentages.} \label{tab: gt_metrics} \end{table} \label{sssec:subsubhead} To compare our method with the other state-of-the-art CAM-based methods, we utilize two types of quantitative metrics. First, we deploy ground truth-based metrics, including the Energy-based Pointing game (\textbf{EBPG}) and Bounding box (\textbf{Bbox}), to assess our method's ability in accurate object localization and feature visualization compared to the baseline methods. Besides, we measure ``\textbf{Drop\%}" and ``\textbf{Increase\%}" to evaluate the faithfulness of the explanations by observing the model's behavior when it is fed only with the features denoted as important by an explanation algorithm. The description of the metrics is provided below. \subsubsection {Ground truth-based metrics} The Energy-based pointing game, developed in \cite{ScoreCAM}, quantifies the fraction of energy in each resultant explanation map $S$ captured in the corresponding ground truth mask $G$, as $EBPG = \frac{||S \odot G||_{1}}{||S||_{1}}$. On the other hand, Bounding box, as introduced in \cite{Schulz2020Restricting}, is a size-adaptive variant of mIoU. Denoting by $N$ the number of ground truth pixels in $G$, the Bbox score is calculated by counting the fraction of pixels in $S$ among the highest $N$ pixels that are located inside the mask $G$. \subsubsection{Drop/Increase rate} As introduced in \cite{GradCAMPP} and developed in \cite{AblationCAM}, these metrics measure the correlation of the explanation maps generated by explanation algorithms with the model's prediction scores, by quantifying the positive attributions captured and the negative attributions discarded, respectively. Given a model $\Psi(.)$, an input image $I_{i}$ from a dataset containing $K$ images, and an explanation map $S(I_{i})$, initially a threshold function $T(.)$ is applied on $S(I_{i})$ to extract the most important 15\% of pixels (based on $S(I_{i})$) from $I_{i}$ using point-wise multiplication.
The confidence scores on the masked images are then compared with the original scores as follows: \begin{equation} Drop\%=\frac{100}{K}\sum_{i=1}^{K}\frac{\text{ReLU}(\Psi(I_{i})-\Psi(I_{i}\odot T(I_{i})))}{\Psi(I_{i})} \end{equation} \begin{equation} Increase\%=\frac{100}{K}\sum_{i=1}^{K} \text{sign}(\Psi(I_{i}\odot T(I_{i}))-\Psi(I_{i})) \end{equation} \subsection{Discussion} \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{IGCAM_fig03.pdf} \caption{Qualitative comparison of baseline CAM-based XAI methods with Integrated Grad-CAM (our proposed method). The sample images are given to a ResNet-50 model trained on the PASCAL VOC 2007 dataset \cite{PASCALVOC}.} \label{fig:QRS} \end{figure} Every concrete explanation should satisfy two properties: ``faithfulness" and ``understandability". Faithfulness denotes that explanations should reflect the exact behavior of the target model, while understandability means that explanations should be interpretable enough from the users' end. Our developed method satisfies faithfulness and understandability better than Grad-CAM and Grad-CAM++. This is verified in table \ref{tab: gt_metrics} by the model truth-based and ground truth-based metrics, respectively. Also, as shown in Figs. \ref{fig:my_label} and \ref{fig:QRS}, our method has a greater ability to highlight the most crucial attributions, compared to the conventional methods. The qualitative images are for the ResNet-50 model, though our method's advantages are also visible on the VGG-16 model. Despite its superior performance, our method incurs more computational overhead than Grad-CAM and Grad-CAM++.
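For concreteness, the evaluation metrics defined in the previous subsection (EBPG and Drop\%/Increase\%) can be sketched as follows; here \texttt{model} and \texttt{masks} are hypothetical stand-ins for $\Psi$ and the output of the threshold function $T$, not names from our implementation:

```python
import numpy as np

def ebpg(S, G):
    # Energy-based pointing game: fraction of saliency energy inside
    # the ground-truth mask G (same shape as the explanation map S)
    return np.abs(S * G).sum() / np.abs(S).sum()

def drop_increase(model, images, masks):
    # Drop%/Increase% over K images; `model` maps an image to the class
    # confidence score, and `masks[i]` keeps the top-15% pixels of the
    # i-th explanation map
    drops, increases = [], []
    for img, mask in zip(images, masks):
        full, masked = model(img), model(img * mask)   # point-wise masking
        drops.append(max(full - masked, 0.0) / full)   # ReLU(...) / Psi(I)
        increases.append(float(masked > full))         # positive sign counted
    K = len(images)
    return 100.0 * sum(drops) / K, 100.0 * sum(increases) / K
```

A good explanation method concentrates saliency energy inside the ground-truth mask (high EBPG), loses little confidence when only its highlighted pixels are kept (low Drop\%), and frequently even gains confidence (high Increase\%).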
Conducting a complexity evaluation on 100 random images from the PASCAL VOC 2007 test set given to the ResNet-50 model, we observed that both of these methods run in 11.3 milliseconds on a P100-PCIe GPU with 16GB of memory, while Integrated Grad-CAM (with its interval step set to $20$) requires 54.8 milliseconds on average to operate on each image. Increasing the interval step slows down the method further, without any significant change in the resulting explanation maps. \section{Conclusion} \label{sec:illust} To deal with the fact that gradient-based CNN visualization approaches such as Grad-CAM are prone to miscalculating the features' values, we proposed Integrated Grad-CAM. Our method corrects the measurements for scoring the attributions captured by the CNN, since it applies the path integral of a defined gradient-based term. Our experiments show that our approach improves on Grad-CAM both in the precise localization of object regions and in interpreting the predictions made by CNNs. \bibliographystyle{IEEEbib}
\section{Introduction} \label{sec:introduction} The detection of gravitational waves (GWs) from the coalescence of two compact objects with masses that can be associated with compact stellar objects has been reported recently by the LIGO-Virgo Scientific Collaboration~\cite{Abbott2017, Abbott2020, Abbott2020b}. One of these GW detections, GW170817, was accompanied by an electromagnetic counterpart~\cite{Abbott2017b, Abbott2017d} and has therefore been associated with the merger of a binary system of neutron stars, which have long been proposed to be behind the production of short gamma-ray bursts \cite{Narayan92, Eichler89, Rezzolla:2011, Berger2013b}. Furthermore, the detection of a kilonova emission through the ejected material and the subsequent r-process nucleosynthesis \cite{Wanajo2014, Perego2014, Just2015, Bovard2017, Thielemann2017b, Radice2018a, Metzger2017, Soumi2020, Most2020e} has provided further evidence that the GW signal in GW170817 must have been produced by the merger of two compact matter objects. Combining the knowledge of nuclear physics, general relativity, and numerical-relativity simulations, this detection has certainly helped to deepen our understanding of the cold dense-matter equation of state (EOS) through tight constraints on the maximum mass, radii, and tidal deformability of neutron stars~\cite{Margalit2017, Bauswein2017b, Rezzolla2017, Ruiz2017, Annala2017, Radice2017b, Most2018, Tews2018a, De2018, Abbott2018b, Shibata2019, Koeppel2019, Nathanail2021}. Besides the more conventional scenario of the merger of purely hadronic compact stars leading to a purely hadronic merged object, other possibilities have been explored in great detail.
One of them is the possibility that the merging objects were hybrid (or twin) stars~\cite{Fattoyev2017, Paschalidis2017, Burgio2018, Montana2018, Gomes2018, Li2018b, Li2020}, that a phase transition to quark matter could have taken place after the merger~\cite{Most2018b, Bauswein2019, Weih2020}, or that the merger involved strange-quark stars~\cite{Zhou2017,Drago2018}. All of these different scenarios are in principle compatible with the GW signal of GW170817, which was necessarily limited to the inspiral only. The solution of the equations of general-relativistic hydrodynamics (GRHD) or general-relativistic magnetohydrodynamics (GRMHD) is an indispensable tool for the accurate modelling of these scenarios and the quantitative prediction of the signals produced by the inspiral and merger, be it through the gravitational radiation, the emission of neutrinos, or the ejection of matter. Over the years, the scenario of the merger of fully hadronic compact stars has been studied by GRHD and GRMHD simulations in great detail~\cite{Shibata99d, Duez2004b, Anderson2007, Baiotti08, Rezzolla:2011, Bauswein2011, Bernuzzi2013, Radice2016, Sekiguchi2016, Radice2018a, Lehner2016, Ruiz2020b, Bovard2017, Papenfort2018, Nedora2020, Most2020e}. Among the numerous results that have been obtained with these simulations (see \cite{Baiotti2016, Paschalidis2016} for some reviews), two are particularly relevant for the results presented in this work. The first one is that the spectral properties of the GW signal -- both during the inspiral and after the merger -- have been analysed in great detail and shown to follow quasi-universal relations in terms of the stellar tidal deformability or compactness~\cite{Bauswein2012a, Read2013, Bauswein2014, Takami:2014, Bernuzzi2014, Takami2015, Rezzolla2016, Bose2017}.
The second one is instead related to the matter ejected at the merger and after the merger \cite{Bauswein2013b, Palenzuela2015, Bovard2017, Dietrich2017c, Dietrich2017, Kyutoku2018, Papenfort2018, Radice2018a, Sekiguchi2016, Nedora2020, Chaurasia2020}, and the impact that it has on r-process nucleosynthesis, on the lifetime of the merged object \cite{Gill2019}, and on the maximum mass of compact stars \cite{Margalit2017, Rezzolla2017, Nathanail2021}. While the fully hadronic scenario has been covered in great detail, the alternative scenarios involving matter in different states -- either before or after the merger -- have so far been considered less extensively. The few investigations performed so far have in fact been limited to the analysis of the post-merger GW signal when a phase transition sets in after the merger and leads to clear signatures in the GW signal \cite{Most2018b, Bauswein2019, Weih2020}. In particular, these studies have highlighted that differences, sometimes significant, can be found in the post-merger GW spectrum and that the universal relations found in the case of hadronic stars are obviously broken if a considerable quark core is produced. The literature is even scarcer when it comes to simulations exploring the inspiral and merger of quark-star binaries composed of pure strange quark matter (SQM). Indeed, the first and only works so far are more than a decade old and have been obtained using smooth particle hydrodynamics and a conformally flat approximation to general relativity~\cite{Bauswein2009, Bauswein2010}. This is in great part due to the considerable additional difficulties that the numerical simulation of these objects implies and that originate from the very sharp decrease in density and enthalpy at the surface of the quark star. We recall that the SQM hypothesis was first proposed by Witten, who suggested that SQM, rather than nuclear matter, is the absolute ground state of matter~\cite{Witten84}.
In this hypothesis, SQM is mainly composed of up, down, and strange quarks, with a small fraction of electrons also present. Because of the self-bound properties of SQM, objects composed of this matter can exist in any size, from scales as small as those of nuclei to scales as large as that of a compact star. Indeed, in this hypothesis, a quark star composed of SQM has been proposed as a possible candidate for compact stars. The properties of single quark stars have been studied in numerous works \cite{Alcock86, Haensel1986, Itoh70, Xu2012, Drago2007, Yu2011, Drago2014a, Drago2016, Drago2016c, Drago2016d, Panotopoulos2017, Wiktorowicz2017, Bora2020}, while binary quark-star mergers have been explored far less~\cite{Bauswein2009, Bauswein2010, Lai2017b, Drago2018, Bucciantini2019, Lai2020}. The detection of the kilonova signal AT2017gfo associated with the GW170817 event has been considered by some as strong evidence against the existence of SQM. This is because the ejected material of a binary quark-star merger would be composed of SQM, which -- when assumed to be the most stable form of matter -- cannot be an efficient source of $r$-process nucleosynthesis. At the same time, recent studies have highlighted that the SQM scenario can still be reconciled with the signal from AT2017gfo if the SQM can evaporate into nucleons as a result of the high temperatures reached after the merger. In this case, most of the ejected SQM from the quark-star binary would have evaporated into nucleons and could have therefore contributed to the kilonova signal in AT2017gfo~\cite{Bucciantini2019, DePietri2019, Horvath2019}. With the goal of studying the scenario of the merger of quark stars -- and thus exploring the possibility of SQM evaporation in far greater detail -- we have carried out the first fully general-relativistic simulations of the inspiral and merger of quark stars.
Contrasting their evolution with a system of compact stars that have very similar properties in mass and compactness but are fully hadronic, we have been able to isolate three important features of the merger of quark stars. First, their GW spectral properties are in agreement with the quasi-universal behaviour found for hadronic stars during the inspiral, but differ in the post-merger phase. Second, because of the intrinsic self-boundness of these objects, the amount of ejected mass is smaller, with binary quark stars ejecting $\sim 20\%$ less mass than a corresponding hadronic binary. Finally, as is natural to expect for matter that is colder and more self-bound, the ejected matter exhibits considerably smaller tails in the corresponding distributions of velocity and entropy. The structure of the paper is as follows. In Sec.~\ref{sec:setup} we discuss in detail the mathematical and physical setup that we needed to develop in order to carry out the numerical simulations. Sec.~\ref{sec:oscil} is instead dedicated to a careful test of the validity of our approach when considering the oscillation properties of isolated quark and hadronic stars, and their match with perturbative studies. Sec.~\ref{sec:BQS} presents in detail the results of our simulations, including both the overall dynamics of the merger and the outcomes in terms of the GW signal and ejected matter. Finally, our conclusions and prospects for future work are presented in Sec.~\ref{sec:conclusion}, while Appendix \ref{sec:baryon mass} provides details on the technique that can be used when considering different values of the baryon mass. \section{Mathematical and Numerical Setup} \label{sec:setup} \subsection{Equations of state} \label{sec:EOS} The EOS of the SQM employed in our simulations was chosen to be the MIT2cfl EOS \cite{Zhou2017, zhu2020}.
This EOS makes use of the MIT bag model with additional perturbative QCD corrections~\cite{Fraga2001, Alford2005, Li2017, Zhou2017}, and satisfies the constraints of having a maximum mass above two solar masses, as required by the observations \cite{Demorest2010, Antoniadis2013}, and a tidal deformability compatible with the constraints from GW170817. In this EOS a colour-flavour-locked (CFL) phase is assumed to be present and consequently the number densities of all quark flavours [up (u), down (d), and strange (s)] are the same, i.e.,~ \begin{equation} n_{u} = n_{d} = n_{s}\,. \label{eq:cfl} \end{equation} As a result, the baryon number density defined as \begin{equation} n_{_{\rm B}} := \frac{1}{3}(n_{u} + n_{d} + n_{s})\,, \end{equation} can be directly converted to the rest-mass density $\rho$ that enters the hydrodynamic equations and is evolved numerically, as $\rho := m_{_{\rm B}} n_{_{\rm B}}$, where $m_{_{\rm B}}$ is the average baryonic mass. We note that while the value of $m_{_{\rm B}}$ is well defined for a baryon, this is not the case when considering SQM. In particular, we recall that the baryonic mass for hadronic matter is defined as the total energy density per baryon at vanishing pressure, i.e.,~ \begin{eqnarray} m_{_{\rm B}} := \frac{e(p=0)}{n_{_{\rm B}}(p=0)}\,. \label{eq:bmass} \end{eqnarray} As a result, when considering the MIT2cfl EOS employed here, we obtain a baryon mass $m_{_{\rm B}}\simeq 850\,{\rm MeV}$. We note that this value is smaller than the one assumed for hadronic matter, i.e.,~ $940\,{\rm MeV}$ (see, e.g.,~ \cite{DePietri2019}), but has been employed also by other groups (see, e.g.,~ \cite{Bauswein2009, Bhattacharyya2016, Drago2020}). Two comments should be made at this point.
First, the definition in Eq.~(\ref{eq:bmass}) naturally implies that at zero pressure the specific internal energy is also zero, i.e.,~ $\epsilon(p=0)=0$, so that the specific enthalpy at the stellar surface is $h(p=0)= 1 + \epsilon + p/\rho=1$, as one would expect. Second, as we will comment in more detail in Appendix \ref{sec:baryon mass}, a different value of the baryon mass inevitably introduces a discontinuity at the stellar surface, thus calling for a suitable rescaling in order to carry out the numerical simulations. Possibly the most serious challenge in modelling compact stars made of SQM is that -- because of their self-bound nature -- such objects are characterized by a sharp transition at the stellar surface, where the pressure goes to zero at a nonzero rest-mass density. As a result, a large, intrinsic density jump is present at the stellar surface; by contrast, hadronic stars have surfaces near which the density decreases rapidly, but goes to zero together with the pressure. When considering static solutions, as for example when constructing stellar models of isolated quark stars, such a density jump can be handled analytically by matching the stellar interior with the exterior vacuum~\cite{Damour:2009, Postnikov2010, Zhou2017, zhu2020}. However, in the context of a hydrodynamic simulation, such a discontinuity represents the exemplary condition for the development of a strong shock, which would lead to artificial oscillations in the best-case scenario or to a numerical failure in the more realistic one. Clearly, a treatment that smooths this strong discontinuity into a region of small but finite size is necessary for a numerical evolution. A simple solution at the level of the EOS consists in introducing a polytropic piece in the dependence of the pressure on the rest-mass density, i.e.,~ $p=k\rho^{\Gamma}$ with $k=8.12$ and $\Gamma=1.90$, thus effectively introducing a thin but nonzero ``crust'' in the quark star.
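To make the quantities introduced above concrete, the following is a minimal Python sketch of the rest-mass density conversion $\rho = m_{_{\rm B}} n_{_{\rm B}}$, the baryon-mass definition of Eq.~(\ref{eq:bmass}), and the polytropic crust; the unit conventions, constants, and function names are our own choices, not part of the actual simulation code.

```python
# Sketch of the conversions discussed above; constants and names are ours.
MEV_TO_G = 1.78266192e-27   # 1 MeV/c^2 in grams
M_B_QUARK_MEV = 850.0       # m_B for the MIT2cfl EOS, from Eq. (bmass)

def baryon_mass_mev(e_mev_fm3, n_b_fm3):
    """m_B := e(p=0)/n_B(p=0), both evaluated at vanishing pressure."""
    return e_mev_fm3 / n_b_fm3

def rest_mass_density(n_b_fm3, m_b_mev=M_B_QUARK_MEV):
    """rho = m_B * n_B, with n_B in fm^-3, returned in g/cm^3."""
    return m_b_mev * MEV_TO_G * (n_b_fm3 * 1.0e39)

def crust_pressure(rho, k=8.12, gamma=1.90):
    """Polytropic crust pressure p = k rho^Gamma, in the units in which
    k is quoted in the text."""
    return k * rho**gamma
```

For instance, at nuclear saturation density $n_{_{\rm B}} = 0.16\,{\rm fm}^{-3}$ the conversion yields $\rho \simeq 2.4\times 10^{14}\,{\rm g\,cm}^{-3}$.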
The presence of a thin crust in a quark star and its implications have been studied in detail in a number of works~\cite{Kettner95, Huang97, Wu2020}. The introduction of a crust changes, at least in principle, the tidal deformability. In practice, however, the change is extremely small, of only $1.3\%$. More precisely, the tidal deformabilities for a quark star with and without a crust are 789.3 and 778.9, respectively. A few important aspects of our novel approach need to be remarked upon at this point. First, although artificial, the introduction of the thin crust does not produce a perceptible variation in the global properties. While we will demonstrate this in Sec.~\ref{sec:oscil}, where we will compare the oscillation properties of an isolated star with the perturbative expectations, it suffices to say here that the variation in the gravitational mass after the introduction of the thin crust is minute, i.e.,~ $\sim 5\times 10^{-3}\,M_\odot$, as is its spatial extension, which is restricted to two grid cells and therefore has a width of $\simeq 240\,{\rm m}$. Second, since we have introduced a thin crust, our compact star could be assimilated to a ``hybrid star'', as it is effectively composed of two regions: a quark-matter and a nuclear-matter region, although the latter is extremely thin. Indeed, simulations of this type of compact star were performed and investigated in great detail in Refs.~\cite{Tsokaros2019b, Tsokaros2020b}. However, this similarity is potentially misleading because a strange quark star is actually composed of SQM that represents the ground state at any density. On the contrary, the quark matter in a hybrid star can exist as the ground state only if the density is sufficiently high, namely, at least larger than the saturation density. Hence, we prefer to regard ours as a strange star with a thin crust rather than a hybrid star. Finally, handling the MIT2cfl EOS in the actual merger requires the addition of a thermal part to the EOS.
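The quoted $1.3\%$ change follows directly from the two deformabilities given above:

```python
# Relative change in the tidal deformability due to the crust
# (both values are taken from the text).
lam_crust, lam_bare = 789.3, 778.9
rel_change = (lam_crust - lam_bare) / lam_bare   # ~ 1.3 per cent
```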
We do this in close analogy with what is done routinely for simulations of hadronic stars described by cold EOSs. In essence, we account for the additional shock heating during the merger and post-merger phases by including thermal effects via a ``hybrid EOS'', that is, by adding an ideal-fluid thermal component to the cold (subscript ``c'' below) EOS \cite{Rezzolla_book:2013} \begin{eqnarray} p & = & p_{\rm c} + p_{\rm th}\,, \label{eq:therm1} \\ p_{\rm th} & = & \rho\epsilon_{\rm th} (\Gamma_{\rm th} - 1)\,, \label{eq:therm2} \\ \epsilon_{\rm th} & = & \epsilon - \epsilon_{\rm c}(\rho)\,, \label{eq:therm3} \end{eqnarray} where $\Gamma_{\rm th}=1.75$ is the thermal adiabatic index. Since we find it important to contrast the dynamical behaviour of merging quark stars with the corresponding one of hadronic stars with the same total mass, we have also considered, for comparison, the merger of a binary system governed by a hadronic EOS. Our choice has fallen on the DD2 EOS \cite{Typel2010}, which is compatible with the present observational constraints, and whose tidal deformability for a $M=1.35\,M_\odot$ star is close to that of a quark star of the same mass. At the same time, the radius of the quark star is significantly smaller: $R=11.81\,\rm{km}$ for the MIT2cfl quark star versus $R=13.21\,\rm{km}$ for the DD2 hadronic star. In the framework of a comparative assessment of the dynamics of SQM and hadronic-matter binaries, and despite the fact that the hadronic DD2 EOS is a finite-temperature EOS employed in many GRHD or GRMHD simulations~\cite{Radice2017a, Bovard2017, Radice2018a, hanauske2019a, Most2019b}, we have adopted a hybrid-EOS approach [cf.~ Eqs.~(\ref{eq:therm1})--(\ref{eq:therm3})] also for the DD2 EOS, of which we have retained only the zero-temperature slice.
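Eqs.~(\ref{eq:therm1})--(\ref{eq:therm3}) can be sketched in a few lines of Python. The cold part here is a hypothetical stand-in polytrope (reusing the crust parameters quoted earlier), not the tabulated MIT2cfl or DD2 EOS, and the function names are our own.

```python
# Hybrid-EOS pressure, Eqs. (therm1)-(therm3); the cold EOS below is a
# placeholder polytrope, NOT the actual MIT2cfl/DD2 tables.
GAMMA_TH = 1.75   # thermal adiabatic index (from the text)

def p_cold(rho, k=8.12, gamma=1.90):
    return k * rho**gamma

def eps_cold(rho, k=8.12, gamma=1.90):
    # cold specific internal energy consistent with the polytrope above
    return k * rho**(gamma - 1.0) / (gamma - 1.0)

def pressure(rho, eps):
    """p = p_c + p_th with p_th = rho * eps_th * (Gamma_th - 1)."""
    eps_th = eps - eps_cold(rho)            # Eq. (therm3)
    p_th = rho * eps_th * (GAMMA_TH - 1.0)  # Eq. (therm2)
    return p_cold(rho) + p_th               # Eq. (therm1)
```

By construction, when $\epsilon = \epsilon_{\rm c}(\rho)$ the thermal contribution vanishes and the pressure reduces to the cold one.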
An immediate disadvantage of this approach is that the consistent knowledge of some thermodynamical quantities, such as the temperature or the entropy, is missing, and these quantities need to be estimated in alternative manners. In this case, additional assumptions -- some of which are not necessarily realistic -- are required to extract these quantities, at least to some degree. However, since the same approximations are made for both EOSs, the expectation is that the systematic differences we find in this way will persist also when considering more advanced and temperature-dependent EOSs for SQM. More specifically, in the case of an ideal-fluid EOS, the temperature $T$ is proportional to the average kinetic energy per particle and we can therefore express the specific internal energy $\epsilon$ as \begin{equation} \epsilon = \frac{k_t T}{m_{_{\rm B}}}\,, \end{equation} where $k_t$ is a constant and $m_{_{\rm B}}$ is the mass per baryon. We extend this expression to our EOSs by rewriting it as \begin{eqnarray} \epsilon = k_t \frac{(T - T_{\rm c})}{m_{_{\rm B}}} + \epsilon_{\rm c}(\rho)\,, \label{eq:eps_t} \end{eqnarray} where $T_{\rm c}$ is the temperature of the cold part of the EOS, which we take to be $T_{\rm c}=0.01\,{\rm MeV}$. Using now Eq.~(\ref{eq:eps_t}) and recalling that for transformations at constant density $d\epsilon = Tds$, we can compute the specific entropy as \begin{eqnarray} s = \frac{k_t}{m_{_{\rm B}}}\log\left(\frac{\epsilon - \epsilon_{\rm c}}{k_t T_{\rm c}/m_{_{\rm B}}} + 1\right)\,. \label{eq:entropy} \end{eqnarray} Finally, specifying $k_t = 20$ for both EOSs, we can compute the entropy per baryon $s_{_{\rm B}} = m_{_{\rm B}} s$ once $\epsilon$ and $\epsilon_{\rm c}$ are known.
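Equation~(\ref{eq:entropy}) can be sketched as follows; note that, since Eq.~(\ref{eq:eps_t}) implies $(\epsilon - \epsilon_{\rm c})/(k_t T_{\rm c}/m_{_{\rm B}}) + 1 = T/T_{\rm c}$, the entropy per baryon reduces to $s_{_{\rm B}} = k_t \log(T/T_{\rm c})$, which the sketch (with hypothetical inputs) makes explicit.

```python
import math

K_T = 20.0      # k_t, as specified in the text
T_COLD = 0.01   # T_c in MeV

def specific_entropy(eps, eps_c, m_b):
    """Eq. (entropy): s = (k_t/m_B) log((eps - eps_c)/(k_t T_c/m_B) + 1)."""
    return (K_T / m_b) * math.log((eps - eps_c) / (K_T * T_COLD / m_b) + 1.0)

def entropy_per_baryon(eps, eps_c, m_b):
    """s_B = m_B * s; equals k_t * log(T/T_c) by Eq. (eps_t)."""
    return m_b * specific_entropy(eps, eps_c, m_b)
```

For example, for $T = 1\,{\rm MeV}$ the entropy per baryon is $s_{_{\rm B}} = 20\log(100)$, independently of the value chosen for $m_{_{\rm B}}$.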
\subsection{Numerical setup: Initial data} The initial data for the binary stars were generated by making use of the publicly available \texttt{Lorene} code~\cite{Gourgoulhon-etal-2000:2ns-initial-data}, which is a multi-domain spectral-method code computing quasi-equilibrium irrotational binary configurations of compact stars. Using this code, and as discussed above, we have computed initial binary configurations of both quark stars and of hadronic stars. Independently of whether we have considered the $\rm{MIT2cfl}$ or the $\rm{DD2}$ EOS, the properties of the binary have been set to be the same: the initial separation in the binary was set to $45\,{\rm km}$ and the two stars have the same mass of $M=1.35\,M_\odot$. It is useful to remark that the introduction of a thin crust for the quark star was important also for the calculation of the initial data. Furthermore, the computation of the solution of binary quark stars with \texttt{Lorene} required extra care. In particular, because of the steep drop in the rest-mass density across the crust of the quark star and of the presence of discontinuities in higher-order derivatives at the crust-core interface (we recall that the EOS is continuous but with discontinuous derivatives at the crust-core interface), a single interior coordinate domain employed to cover the quark star turned out to be insufficient and prevented the convergence to an accurate solution. Fortunately, however, the addition of a second domain at higher resolution to cover the crust was sufficient to yield the convergence to an accurate solution. \begin{table}[t] \renewcommand{\arraystretch}{1.3} \caption{Properties of the quark and hadron stars considered here, distinguished according to whether they refer to isolated configurations or to binaries. Reported are the mass $M$, the baryon mass $M_b$, the radii $R$, and the tidal deformability $\Lambda$.
In addition, in the case of the binaries, we also report the relevant frequencies of the gravitational-wave signal ($f_{\rm max}, f_1$, and $f_2$), the ejected matter $M_{\rm ej}$ and the ejected baryon number $N_{\rm B}^{\rm ej}$.} \begin{center} \begin{tabular}{|l|c|r|r|} \hline \hline & & Quark star & Hadronic star \\ \hline ${\rm EOS}$ & ${\rm type}$ & ${\rm MIT2cfl}$ & ${\rm DD2}$ \\ \hline $M\ [M_\odot]$ & $({\rm isolated})$ & $1.40$ & $1.40$ \\ $M_{b}\ [M_\odot]$ & $({\rm isolated})$ & $1.57$ & $1.53$ \\ $R\ [{\rm km}]$ & $({\rm isolated})$ & $11.90$ & $13.22$ \\ $\Lambda$ & $({\rm isolated})$ & $657.78$ & $698.72$ \\ \hline $M\ [M_\odot]$ & $({\rm binary})$ & $1.35$ & $1.35$ \\ $M_{b}\ [M_\odot]$ & $({\rm binary})$ & $1.50$ & $1.47$ \\ $R\ [{\rm km}]$ & $({\rm binary})$ & $11.81$ & $13.21$ \\ $\Lambda$ & $({\rm binary})$ & $789.32$ & $857.69$ \\ $f_{\rm max}\ [{\rm kHz}]$ & $({\rm binary})$ & $1.684$ & $1.644$ \\ $f_1\ [{\rm kHz}]$ & $({\rm binary})$ & $1.919$ & $1.845$ \\ $f_2\ [{\rm kHz}]$ & $({\rm binary})$ & $2.399$ & $2.436$ \\ $M_{\rm ej}\ [10^{-3}\,M_\odot]$ & $({\rm binary})$ & $2.68$ & $2.94$ \\ $N_{\rm B}^{\rm ej}\ [10^{54}]$ & $({\rm binary})$ & $3.520$ & $3.533$ \\ \hline \hline \end{tabular} \label{tab:gw} \end{center} \end{table} \begin{figure*}[t] \centering \includegraphics[width=0.49\textwidth]{figures/oscil_t_quark.pdf} \includegraphics[width=0.49\textwidth]{figures/oscil_t_hadron.pdf} \caption{Evolutions of the normalised variations in the stellar central rest-mass density $\rho_c(t)$ for either an oscillating quark star (left panel) or for a hadronic star (right panel). 
Curves of different shading refer to different resolutions, with the high-resolution data being shown with the darkest shade.} \label{fig:oscil} \end{figure*} \subsection{Numerical setup: evolution equations} The simulations presented here have been performed with the publicly available GRHD code \texttt{WhiskyTHC}~\cite{Radice2012a, Radice2013b, Radice2013c}, which is a high-order, fully general-relativistic code for the solution of the equations of relativistic hydrodynamics and compatible with the \texttt{Einstein Toolkit} \cite{EinsteinToolkit_etal:2020_11}. The hydrodynamic equations were solved with a high-resolution shock-capturing (HRSC)~\cite{Rezzolla_book:2013} approach employing a local Lax-Friedrichs (LLF) flux splitting and the fifth-order monotonicity-preserving (MP5) reconstruction method~\cite{suresh_1997_amp, mignone_2010_hoc} within a finite-difference scheme. The spacetime was instead evolved by implementing the CCZ4 formulation~\cite{Alic:2011a, Alic2013} through the finite-differencing code \texttt{McLachlan} \cite{Brown:2008sb}. In order to cover a large enough spatial domain -- and hence compute accurately the information on the ejected matter while keeping a high resolution on the compact stars -- an adaptive-mesh-refinement (AMR) approach was employed and handled by the \texttt{CARPET} driver~\cite{Schnetter:2003rb}. The number of refinement levels depends on the problem considered and was of $3$ levels in the case of isolated stars and of $6$ levels in the case of merging binaries. In this way, the corresponding total extents of the computational domain were set to $24\, M_\odot\ (36\, {\rm km})$ and $2048\, M_\odot\ ( 3051\, {\rm km})$, respectively. Furthermore, each of the scenarios considered -- isolated stars or binary mergers -- has been evolved with simulations at different resolutions.
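The lengths quoted above in geometric units ($G=c=1$, lengths in $M_\odot$), as well as the grid spacings listed next, convert to SI via the constant $GM_\odot/c^2 \simeq 1476.6\,{\rm m}$; a small sketch (the numerical value of the constant is our own, rounded):

```python
# Geometric units (G = c = 1, lengths in M_sun) to metres;
# the conversion constant is G * M_sun / c^2 (value ours, rounded).
M_SUN_IN_M = 1476.625

def geo_to_metres(dx_msun):
    return dx_msun * M_SUN_IN_M
```

For instance, `geo_to_metres(0.16)` returns about $236\,{\rm m}$, matching the finest grid spacings quoted below.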
More specifically, for the finest grid we have used three resolutions in the case of isolated stars, i.e.,~ $dx = 0.16\, M_\odot\ (236\, {\rm m})$, $dx = 0.12\, M_\odot\, (177\, {\rm m})$, and $dx = 0.08\, M_\odot\ (118\, {\rm m})$, which we will indicate as ``low'', ``medium'', and ``high'' resolution in the following. Similarly, for both quark and hadronic stars, we have employed two resolutions in the case of merging binaries, i.e.,~ $dx = 0.25\, M_\odot\ (369\, {\rm m})$ and $dx = 0.16\, M_\odot\ (236\, {\rm m})$, which we will indicate as ``low'' and ``high'', respectively. Note that a resolution of $dx = 0.16\, M_\odot\ (236\, {\rm m})$, and even lower ones (see, e.g.,~ \cite{DePietri2016,Depietri2018}), have been used routinely in simulations of binary neutron stars and have been shown to be sufficient to capture accurately the dynamics of the system. \section{Results: isolated quark stars} \label{sec:oscil} As a first but very important test of the correct implementation of the rescaling procedure presented in the previous Section, we next discuss the results from the evolution of isolated and oscillating quark stars, comparing them with the corresponding evolutions obtained for fully hadronic stars. The analytical formulation and the numerical computation of the spectral properties of oscillating compact stars represent a classical problem that has been studied in detail in the literature~\cite{Chandrasekhar64, Campolattaro1, McDermott1988, Rezzolla2002b}, also in the case of quark stars \cite{Panotopoulos2017, Bora2020}. Similarly, the investigation of the dynamical response of a relativistic star to a perturbation has been studied extensively in the literature (see, e.g.,~ \cite{Font99, Font02c, Baiotti04} for some of the initial works). \begin{figure*}[t] \centering \includegraphics[width=0.49\textwidth]{figures/oscil_psd_quark.pdf} \includegraphics[width=0.49\textwidth]{figures/oscil_psd_hadron.pdf} \caption{Power spectral densities of the timeseries in Fig.
\ref{fig:oscil} when compared with the perturbative predictions for the eigenfrequencies, reported as vertical blue dashed lines. The left panel refers to a quark star, while the right one to a hadronic star; the same convention is followed for the different types of lines. In both cases, the insets show a magnification near the frequency of the $F_0$ mode. Note that the accuracy of the match increases as the resolution is increased.} \label{fig:psd_oscil} \end{figure*} Because this serves only as a test and is not meant to study the spectral properties of such stars, the oscillations we have considered are simple quasi-radial oscillations of initially static stars, whose perturbative eigenfrequencies can be obtained easily by solving two ordinary differential equations [see Eqs. (11)--(12) in Ref.~\cite{Bora2020}] that are coupled with the equations of stellar equilibrium, i.e.,~ the Tolman-Oppenheimer-Volkoff (TOV) equations (see, e.g.,~ \cite{Rezzolla_book:2013}). To make the comparison meaningful, both the isolated MIT2cfl quark stars and the hadronic DD2 stars have the same mass of $M=1.4\,M_{\odot}$. Solving the perturbation equations together with the TOV equations, we obtain the eigenfrequencies $F_i$ for different mode numbers $i$. In practice, we limit ourselves to the fundamental mode $F_0$ and the first overtone $F_1$. Note that both the static models and the eigenfrequencies are computed after introducing the crustal prescription in the EOS, as described in the previous Section. For the dynamical evolutions, on the other hand, we introduce an initial perturbation in the radial velocity with an eigenfunction corresponding to an $n=1$, $\ell=0$ oscillation mode under the Cowling approximation (i.e.,~ with the spacetime held fixed) \cite{Cowling41}, and an amplitude of $3\%$.
As customary, we report in Fig.~\ref{fig:oscil} the normalised variation in the stellar central density $\rho_c(t)$ for either the quark star discussed above (i.e.,~ with a thin crust, left panel) or the hadronic star (right panel). Curves of different shading refer to different resolutions, with the high-resolution data being shown with the darkest shade. Note that, especially for the quark star, the behaviour of the oscillations shows aspects of strong nonlinearity, i.e.,~ deviations from a simple harmonic oscillation, during the first few milliseconds, but these disappear rather soon and the oscillation becomes rather regular. As to be expected, these nonlinear features become more marked as the resolution is increased, while the deviations from a perfect conservation of the rest mass are $\lesssim 10^{-5}\%$. Note also that the imprint of different resolutions is more evident in the quark star than in the hadronic one. A phase difference can in fact be measured in the low- and medium-resolution evolutions for the quark star, which is instead absent for the hadronic star\footnote{Note that the amplitude of the oscillations is not necessarily related to the resolution employed in the simulation. Indeed, the amplitude evolution depends on a number of factors, such as the ability to capture the eigenmodes of the oscillations and the occurrence of small shocks at the interface between the star and the fluid outside it. As a result, oscillations of different amplitudes can be obtained even when using the same resolution but different reconstruction schemes (see the detailed discussion in Ref.~\cite{Radice2012a}).}. This is again easy to interpret as due to the much sharper density jump at the stellar surface in the case of a quark star, which therefore requires a higher resolution to produce a consistent behaviour.
However, for both stars, the low resolution is sufficient to capture a convergent behaviour and the accuracy simply increases as the resolution is increased. Finally, it should be noted that while the quark star exhibits oscillations that are almost symmetric with respect to the unperturbed value, the hadronic one does not. This is due to the fact that for the latter the oscillations are accompanied by a significant contribution from higher-order modes (see discussion below). \begin{figure*}[t] \centering \includegraphics[width=0.85\textwidth]{figures/rho_qh_100.pdf} \caption{Small-scale (i.e.,~ $110\,{\rm km}$) views of the rest-mass density at different but characteristic times during the merger of the two classes of binaries [i.e.,~ $5\,{\rm ms}$ before merger (upper-left), the merger time (upper-right), $10\,{\rm ms}$ (lower left), and $20\,{\rm ms}$ (lower right) after merger]. Each panel offers views of the $(x,y)$ plane (bottom parts) and of the $(x,z)$ plane (top parts), respectively. Note that, for each panel, the left part refers to quark stars, while the right one to the hadronic stars.} \label{fig:rho_2D_zoom} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=0.85\textwidth]{figures/s_qh_100.pdf} \caption{Same as in Fig.~\ref{fig:rho_2D_zoom}, but for the entropy per baryon.} \label{fig:s_2D_zoom} \end{figure*} Figure \ref{fig:psd_oscil} helps to compare and contrast the evolutions of quark and hadron stars by taking the Fourier transforms of the timeseries in Fig.~\ref{fig:oscil} and comparing the resulting power spectral densities (PSDs) with the perturbative predictions for the eigenfrequencies, reported as vertical blue dashed lines. In this way, it is apparent that the match between the dynamical and the perturbative results is worse for the quark star, especially at low resolutions (see the insets).
However, as the resolution is increased, both evolutions converge nicely to the perturbative values and differ from them with comparable errors, namely, $1.1\%$ ($3.1\%$) and $1.3\%$ ($3.5\%$) for the quark and hadron stars at high (medium) resolution, respectively. Note also that the low-resolution run of the quark star has a considerable amount of noise near the position of the $F_1$ frequency, but that this noise decreases with increasing resolution, leading to the clear appearance of an overtone in the high-resolution run. Finally, note that the amount of power in the two overtones is much larger in the case of the hadronic star, which exhibits a clear second overtone. This behaviour is indeed visible already in the timeseries reported in Fig.~\ref{fig:oscil} (right panel), which exhibits rapid variations near the maxima and minima of the oscillations. Also reported for comparison, and marked with a transparent blue dashed line, is the $F_0$ eigenfrequency of a strange star without a crust; note that this is larger and that the simulations do not converge to this value. In summary, the results presented here on the dynamics of isolated quark stars show that the approach presented in Sec.~\ref{sec:EOS} to handle the stellar surface is not only properly implemented, but also leads to evolutions that reproduce the spectral properties of quark stars and yield numerically consistent evolutions. At the same time, they show that, in general, quark stars require higher resolutions than the ones that would be needed for a hadronic star with comparable properties in terms of mass and tidal deformability. Once again, this is the inevitable drawback of having to model stars with a strong self-confinement and sharper density profiles at the surface. \section{Results: binary quark stars} \label{sec:BQS} In this section we discuss the dynamics of a binary quark-star merger and contrast it with the corresponding merger obtained with a hadronic EOS.
Before entering into the details, and notwithstanding that these are the first fully general-relativistic simulations of the merger of SQM compact objects, it is useful to stress the limitations of our approach. First, the treatment of the thermal part of the EOS is necessarily approximate and modelled with a hybrid-EOS approach for both EOSs [cf.~ Eqs. \eqref{eq:therm1}--\eqref{eq:therm3}]. As a result of this (forced) choice, the modelling of neutrinos, e.g.,~ via a leakage scheme \cite{Galeazzi2013} or more advanced approaches \cite{Radice2016, Weih2020b}, is not possible. Second, the modelling of the quark-evaporation processes in the ejected matter is de-facto ignored. While both processes are expected to have a significant impact on the electromagnetic kilonova signal and on the nucleosynthesis, they are not expected to play an important role in the gravitational-wave signal, which can therefore be considered robust. Third, for obvious reasons of computational cost, the two resolutions used here for the binaries are rather low, and the highest one corresponds to the ``low'' resolution in the case of the isolated stars discussed in the previous Section. However, as demonstrated above, such a resolution is sufficient to provide a sufficiently accurate description of the stellar behaviour, given that the relative error in the PSD for the low-resolution quark star is $\lesssim 4.6\%$. More importantly, by comparing and contrasting quark and hadronic stars with similar properties we can easily capture -- even at these low resolutions -- the systematic differences between the two stellar types, which we believe are therefore robust. \subsection{Overview of the dynamics} \label{ssec:overview} The main features of the dynamics of the binary systems of quark and hadronic stars are rather similar. Before the merger, the stars in the binary systems inspiral towards each other as a result of energy and angular-momentum loss via gravitational-wave emission.
During this stage, the matter effects are dominated by the tidal deformability of the two stars. This stage is shown in the top-left panels of Figs. \ref{fig:rho_2D_zoom}--\ref{fig:s_2D_zoom}, which report the small-scale (i.e.,~ $110\,{\rm km}$) views of the rest-mass density (Fig. \ref{fig:rho_2D_zoom}) or of the entropy per baryon (Fig. \ref{fig:s_2D_zoom}). Different panels refer to different but characteristic times during the merger [i.e.,~ $5\,{\rm ms}$ before merger (upper-left), the merger time (upper-right), $10\,{\rm ms}$ (lower left), and $20\,{\rm ms}$ (lower right) after merger] and offer views of the $(x,y)$ plane (bottom parts) and of the $(x,z)$ plane (top parts), respectively. More importantly for our comparison, for each panel, the left part refers to quark stars, while the right one to the hadronic stars. While not straightforward to analyse, Figs. \ref{fig:rho_2D_zoom}--\ref{fig:s_2D_zoom} are rich in information and help obtain a comprehensive overview of the properties of the dynamics of the two binaries. It should be noted that already in the inspiral phase the quark-star binary loses a smaller amount of material from the surface than the hadronic one. While this matter is not significant either in terms of gravitational-wave emission or of nucleosynthesis (the material has very small velocities, is distributed essentially isotropically, and is in good part gravitationally bound), it already shows a feature we will encounter again below. As the two stars approach each other, the amplitude of the emitted gravitational waves increases, achieving a maximum when the stars merge. At this point in time, intense tidal forces and strong shocks induce a significant and dynamical ejection of matter (see upper-right panels of Fig. \ref{fig:rho_2D_zoom}).
After the merger, a differentially rotating object is produced whose mass is larger than the maximum mass of a uniformly rotating configuration and that is normally referred to as a hypermassive neutron star (HMNS) in the case of hadronic EOSs, but that should be dubbed a hypermassive quark star (HMQS) in the case of the MIT2cfl EOS (see lower-left and lower-right panels of Fig. \ref{fig:rho_2D_zoom}). The dynamics of this post-merger phase is again rather similar between the two EOSs and follows well-known behaviours in terms of dynamical ejecta, tidal forces, and shock heating (see, e.g.,~ \cite{Bovard2017, Radice2018a, ShibataRev19}). In particular, the dynamical ejection of matter takes place mostly at low latitudes [i.e.,~ near the equatorial plane, as can be seen in the $(x,z)$ sections of Fig. \ref{fig:rho_2D_zoom}]. On the other hand, the material at high latitudes [i.e.,~ near the polar directions, as can be seen in the $(x,z)$ sections of Fig. \ref{fig:s_2D_zoom}] is considerably hotter and with a higher entropy per baryon. Furthermore, strong shocks taking place in the HMQS and in the HMNS lead to a pulsed dynamical mass ejection, which is apparent in the $(x,y)$ sections of Figs. \ref{fig:rho_2D_zoom} and \ref{fig:s_2D_zoom}, where it appears as a striped spiral-arm structure. Essentially all of this matter is gravitationally unbound and, being both hot and of high entropy, will be involved in the subsequent nucleosynthetic processes that cannot be modelled here. Overall, when inspecting the full set of Figs. \ref{fig:rho_2D_zoom}--\ref{fig:s_2D_zoom}, it is apparent that, although no major qualitative difference emerges -- as one would have expected given the very similar bulk properties of the components in the two binaries -- there are some quantitative differences in the dynamics of quark-star and hadron-star binaries. Postponing a more detailed comparison to Sec.
\ref{ssec:ejecta}, we can already appreciate a first important result of our comparative study: mass loss is smaller in SQM binaries than in hadronic binaries. On the other hand, the differences in temperature and entropy are much smaller, amounting to a factor of a few at most. \subsection{Gravitational-wave emission} \label{ssec:gw} The emission of gravitational waves from merging binaries of compact objects is obviously one of the most interesting outcomes of these processes and can be used to extract important information on the state of matter when these compact objects are stars. It is therefore natural to investigate whether the gravitational-wave signal computed in our simulations can be exploited to distinguish the dynamics of strange quark stars from that of hadronic stars. As customary, we can compute the strongest component of the gravitational-wave signal by extracting the $\ell=m=2$ mode of the Weyl-curvature scalar $\psi_4$ from our simulations, so that the GW amplitudes in the corresponding mode and in the two polarizations $+$ and $\times$ can be obtained by integrating $\psi_4$ twice in time \begin{eqnarray} h^{22}_{+} - i h^{22}_{\times} = \int^t_{-\infty}dt^\prime \int_{-\infty}^{t^\prime} dt^{\prime\prime} (\psi_4)_{22}\,. \label{eq:gw} \end{eqnarray} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{figures/gw_fre.pdf} \caption{\textit{Top panel:} gravitational-wave strain in the $\ell=m=2$ mode and in the $+$ polarisation, $h^{22}_+$. Curves of different shading refer to different resolutions, with the high-resolution data being shown with the darkest shade. The vertical black line marks the merger time, which corresponds to the time of maximum amplitude, and is used to align the waveforms in phase. \textit{Bottom panel:} instantaneous gravitational-wave frequencies relative to the strain in the top panel.
The same convention is followed for the line types.} \label{fig:gw_fre} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{figures/post_psd.pdf} \caption{Effective PSDs of the quark-star (red lines) and hadron-star binaries (black lines) at different resolutions. Also reported with filled circles are the frequencies at merger $f_{\rm max}$, as well as the sensitivity curves of advanced and next-generation gravitational-wave detectors such as advanced LIGO (adLIGO), the Einstein Telescope (ET), or Cosmic Explorer (CE). Note the presence of distinct spectral features, $f_1$ and $f_2$, whose frequencies are listed in Table~\ref{tab:gw}.} \label{fig:psd} \end{figure} Figure~\ref{fig:gw_fre} reports in its top panel the gravitational-wave strains $h^{22}_+$ for both the quark-star binary (red lines) and the hadron-star binary (black lines). As in previous figures, different shades refer to the two resolutions employed, with the high resolution being indicated with full shading. The various waveforms are aligned at the time of the merger, that is, the time when the amplitude reaches its first maximum. The bottom part of Fig.~\ref{fig:gw_fre}, on the other hand, reports the corresponding evolution of the instantaneous gravitational-wave frequencies, using the same convention for the line types. Overall, the gravitational-wave information provided in Fig.~\ref{fig:gw_fre} shows that the inspiral part of the binary dynamics is very similar for the two types of stars, with frequency evolutions that are essentially the same in this phase and up to the merger. However, small differences do develop after the merger and, as we will comment below, they can be used to extract important signatures of the occurrence of the merger of a quark-star binary.
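Equation~(\ref{eq:gw}) amounts to a double time integration of $(\psi_4)_{22}$. A naive time-domain sketch is shown below; production analyses typically perform this integration in the frequency domain (fixed-frequency integration) to suppress secular drifts, so this is only illustrative and the function name is ours.

```python
import numpy as np

def strain_from_psi4(t, psi4_22):
    """Naive double time integration of (psi_4)_22, cf. Eq. (gw):
    h+ - i hx = iint psi4 dt' dt''.  The result carries the usual
    integration-constant drifts that real pipelines remove."""
    dt = t[1] - t[0]
    hdot = np.cumsum(psi4_22) * dt   # first time integration
    h = np.cumsum(hdot) * dt         # second time integration
    return h                          # complex: h.real = h+, -h.imag = hx
```

As a sanity check, feeding in $\psi_4 = -\omega^2\sin(\omega t)$ (the second derivative of $\sin(\omega t)-\omega t$, which starts with zero value and zero derivative) recovers that function to within the discretization error.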
\begin{figure*}[t] \centering \includegraphics[width=0.45\textwidth]{figures/universal_lam.pdf} \hspace{0.5cm} \includegraphics[width=0.45\textwidth]{figures/universal_c.pdf} \caption{\textit{Left panel:} Quasi-universal relations for the frequency at merger $f_{\rm max}$ and the two post-merger frequencies $f_1$ and $f_2$ when shown as a function of the tidal deformability $\Lambda$ [cf.~Eqs.~\eqref{eq:fmax}, \eqref{eq:f1_1}, and \eqref{eq:f2_1}]. Shown with symbols are the values of the frequencies measured from the high-resolution simulations in the case of quark-star binaries (red symbols) or hadron-star binaries (black symbols). Reported for comparison with gray symbols are the frequencies when considering the temperature-dependent version of the DD2 EOS. \textit{Right panel:} same as in the left panel but when the quasi-universal relations are expressed in terms of the average stellar compactness $\mathcal{C}$ [cf.~Eqs.~\eqref{eq:f1_2} and \eqref{eq:f2_2}]. In both panels the shading refers to the uncertainties in the relations and the orange lines report the functional fit of Ref.~\cite{Bauswein2019}.} \label{fig:universal} \end{figure*} In order to highlight the differences that emerge after the merger we follow the methodology presented in Ref.~\cite{Takami2015} to process the post-merger gravitational-wave signal and perform a spectral analysis whose results are shown in Fig.~\ref{fig:psd}. First of all, we collect the gravitational-wave signal in the time interval $t\in [-1500, 4000]\,M_\odot \sim [-7.39, 19.70] \,{\rm ms}$, and perform a Fourier transform with a symmetric time-domain Tukey window function with parameter $\alpha=0.25$. This window function helps eliminate spurious oscillations in the computed PSD.
Furthermore, since we are interested only in the post-merger signal, we employ a fifth-order high-pass Butterworth filter for the low-frequency part of the signal, with a cutoff frequency set to be $f_{\rm cut} = f(t-t_{\rm merg} = -7.38\,{\rm ms})+0.5\,{\rm kHz}$. In this way we compute the effective PSD as~\cite{Takami2015} \begin{eqnarray} \tilde{h}(f) := \sqrt{\frac{|\tilde{h}_{+}(f)|^2 + |\tilde{h}_{\times}(f)|^2}{2}}\,, \label{eq:psd} \end{eqnarray} where $\tilde{h}_{+}(f)$ and $\tilde{h}_{\times}(f)$ are the Fourier transforms of the two gravitational-wave strains. At the same time, it is possible to compute the signal-to-noise ratio (SNR) as \begin{eqnarray} {\rm SNR} := \left[\int_0^{\infty} \frac{|2\tilde{h}(f)f^{1/2}|^2}{S_h(f)} \frac{df}{f}\right]^{1/2}\,, \label{eq:snr} \end{eqnarray} where $S_h(f)$ is the noise PSD of a given gravitational-wave detector. Figure~\ref{fig:psd} reports the effective PSDs of the two binaries (red and black lines for the quark-star and the hadron-star binaries, respectively) at the two different resolutions (dark and light shading for the high and low resolutions, respectively), together with the gravitational-wave frequencies at merger, i.e.,~ at amplitude maximum, $f_{\rm max}$ (filled circles; these frequencies are also used to match the amplitudes of the PSDs obtained at different resolutions). Also reported in Fig.~\ref{fig:psd} are the sensitivity curves of advanced and next-generation gravitational-wave detectors such as advanced LIGO (adLIGO), the Einstein Telescope (ET)~\cite{Punturo:2010, Punturo2010b}, or Cosmic Explorer (CE)~\cite{Abbott17}. In this way, it is possible to appreciate the presence of distinct spectral frequencies, i.e.,~ gravitational-wave spectral lines, which are named $f_1$ and $f_2$ following the convention of \cite{Takami2014}, and whose values are reported in Table~\ref{tab:gw}.
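As a concrete illustration of Eqs.~\eqref{eq:psd} and \eqref{eq:snr}, a minimal Python implementation of the windowing, high-pass filtering, effective PSD, and SNR could read as follows (the function names and the overall normalisation are illustrative assumptions; \texttt{scipy} provides the Tukey window and the Butterworth filter):

```python
import numpy as np
from scipy.signal import butter, sosfilt
from scipy.signal.windows import tukey

def effective_psd(t, h_plus, h_cross, f_cut, alpha=0.25):
    """Tukey-windowed, high-pass-filtered Fourier amplitudes of the
    two polarisations, combined as in the effective-PSD definition."""
    fs = 1.0 / (t[1] - t[0])
    sos = butter(5, f_cut, btype="highpass", fs=fs, output="sos")
    w = tukey(len(t), alpha=alpha)
    hp = np.fft.rfft(sosfilt(sos, h_plus) * w) / fs
    hc = np.fft.rfft(sosfilt(sos, h_cross) * w) / fs
    f = np.fft.rfftfreq(len(t), d=1.0 / fs)
    return f, np.sqrt(0.5 * (np.abs(hp) ** 2 + np.abs(hc) ** 2))

def snr(f, h_tilde, S_h):
    """Signal-to-noise ratio for a detector with noise PSD S_h(f),
    integrated over positive frequencies (trapezoidal rule)."""
    m = f > 0
    g = np.abs(2.0 * h_tilde[m] * np.sqrt(f[m])) ** 2 / (S_h[m] * f[m])
    return np.sqrt(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(f[m])))
```

In a realistic analysis, `S_h` would be the tabulated sensitivity curve of the detector of interest (adLIGO, ET, or CE) interpolated onto the frequency grid `f`.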
Note that a third frequency, $f_3$, can be obtained from the approximate relation $2f_2 \approx f_1 + f_3$, which models the dynamics of the two stellar cores in terms of repeated collisions and bounces while the HMNS/HMQS rotates. As first pointed out in Ref.~\cite{Read2013}, the values of $f_{\rm max}$ can be used as an important proxy of the EOS. Furthermore, this frequency was shown to follow a quasi-universal behaviour in terms of the tidal deformability. On the other hand, both Fig.~\ref{fig:gw_fre} and Fig.~\ref{fig:psd} clearly indicate that the values of $f_{\rm max}$ are rather similar for both quark and hadronic stars. This is because, although the radii of the stars in the two binaries are rather different, i.e.,~ $R_{_{\rm QS}} \simeq 11.8\,{\rm km}$ and $R_{_{\rm HS}}\simeq 13.2\,{\rm km}$ for the quark star (QS) and hadron star (HS), respectively (see Table \ref{tab:gw}), the corresponding tidal deformabilities are very similar, i.e.,~ $\Lambda_{_{\rm QS}} \simeq 790$ and $\Lambda_{_{\rm HS}} \simeq 860$. Indeed, in Ref.~\cite{Rezzolla2016}, and using a large set of purely hadronic EOSs, a universal relation was found between $f_{\rm max}$ and the tidal deformability\footnote{In Ref. \cite{Rezzolla2016}, the original form of Eq. \eqref{eq:fmax} was given in terms of the tidal deformability coefficient $\kappa_2^{\rm T}$, which, in the case of equal-mass binaries, is trivially related to $\Lambda$ as $\kappa_2^{\rm T} = \tfrac{3}{16}\Lambda$.}, namely \begin{equation} \log_{10}\left(\frac{f_{\rm max}}{{\rm Hz}}\right) \approx a_0 + a_1 \Lambda^{1/5} - \log_{10}\left(\frac{2\overline{M}}{M_\odot}\right)\,, \label{eq:fmax} \end{equation} where $a_0 = 4.186, a_1 = -0.140$, and $\overline{M}$ represents the average mass at infinite separation (in our simulations and for both binaries, $\overline{M}=1.35\, M_\odot$). The quasi-universal relation \eqref{eq:fmax} is shown with a green solid line in the left panel of Fig.
\ref{fig:universal} as a function of the tidal deformability $\Lambda$ and in the right panel as a function of the average stellar compactness $\mathcal{C} := \overline{M}/\overline{R}$, where $\overline{R}$ is the average radius. We recall that this is possible because another quasi-universal relation exists relating the tidal deformability $\Lambda$ to the stellar compactness $\mathcal{C}$, namely \cite{Yagi2017, Raithel2018} \begin{eqnarray} \mathcal{C} = 0.360 - 0.0355\,\ln\Lambda + 0.000705\,(\ln\Lambda)^2\,. \label{eq:c-lambda} \end{eqnarray} Also shown in Fig.~\ref{fig:universal} with red and black symbols are the values of the measured gravitational-wave frequencies at merger $f_{\rm max}$ for binary quark stars (red filled circles) and binary hadronic stars (black filled circles), respectively. When comparing the numerical results with the expected quasi-universal relation in terms of the tidal deformability (left panel of Fig.~\ref{fig:universal}), it is clear that the match is very good despite the very different nature of the two binaries and, in particular, the large difference in radii. This result may appear surprising at first, as one would expect the gravitational-wave frequency at merger to be directly related to the stellar radius; indeed, the ``contact'' frequency is normally computed as the Keplerian frequency when the two stars have a separation that is twice the stellar radius, $f_{\rm cont} := \mathcal{C}^{3/2}/(2\pi \overline{M})$ \cite{Damour:2012, Takami2014}. However, as recently pointed out in Ref.~\cite{Ng2020}, the gravitational-wave frequency at merger $f_{\rm max}$ is closely related to the quadrupolar ($\ell = 2$) $F$-mode oscillation ($F_{2F}$) of nonrotating and rotating models; furthermore, $F_{2F}$ can also be expressed quasi-universally in terms of the tidal deformability, so that the results shown in the left panel of Fig.
\ref{fig:universal} are essentially stating that quark and hadronic stars with similar tidal deformability -- and thus similar properties of their dense cores -- have similar quadrupolar $F_{2F}$ oscillation modes and, hence, similar gravitational-wave frequencies at merger $f_{\rm max}$. This is another important result of our comparative study: quark-star binaries have merger frequencies similar to those of hadron-star binaries with comparable tidal deformability. In turn, this implies that it may be difficult to distinguish the two classes of stars based only on the gravitational-wave signal during the inspiral. Note, however, that when expressed in terms of the stellar compactness (see right panel of Fig.~\ref{fig:universal}), the match between the measured merger frequencies and the expectations from the quasi-universal relations is reasonably good in the case of hadronic stars, but it is much worse for quark-star binaries. This hints at the fact that while expression \eqref{eq:fmax} could be used both for hadronic and quark-star binaries, the corresponding expression in terms of the stellar compactness needs to be corrected in the case of quark-star binaries. We will return to this point below. Figure~\ref{fig:universal} also shows additional quasi-universal relations for the $f_1$ and $f_2$ frequencies, both as a function of the tidal deformability (left panel) and of the average stellar compactness (right panel). For the first, low-frequency peak, the quasi-universal relations are expressed in terms of the tidal deformability and of the stellar compactness, respectively, as \cite{Takami2015, Rezzolla2016} \begin{eqnarray} f_1 & \approx & c_0 + c_1\, \Lambda^{1/5} + c_2\, \Lambda^{2/5} + c_3\, \Lambda^{3/5}\ {\rm kHz}\,, \label{eq:f1_1} \\ f_1 & \approx & b_0 + b_1\, \mathcal{C} + b_2\, \mathcal{C}^2 + b_3\, \mathcal{C}^3\ {\rm kHz}\,, \label{eq:f1_2} \end{eqnarray} where $b_0 = -35.17, b_1 = 727.99, b_2 = -4858.54, b_3 = 10989.88, c_0 = 45.19, c_1 = -31.11, c_2 = 7.50$, and $c_3 = -0.61$.
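For reference, the quasi-universal fits discussed so far are straightforward to evaluate numerically. The short Python sketch below (with function and variable names of our own choosing) computes $f_{\rm max}$ from Eq.~\eqref{eq:fmax}, the compactness from Eq.~\eqref{eq:c-lambda}, and the two polynomial fits for $f_1$, using the coefficients quoted in the text:

```python
import numpy as np

# coefficients as quoted in the text
A0, A1 = 4.186, -0.140                          # f_max(Lambda) fit
B = [-35.17, 727.99, -4858.54, 10989.88]        # f_1 in terms of C (kHz)
C_COEF = [45.19, -31.11, 7.50, -0.61]           # f_1 in terms of Lambda^{1/5} (kHz)

def f_max(Lam, Mbar=1.35):
    """Merger frequency in Hz from the quasi-universal fit in Lambda,
    for an average mass Mbar in solar masses."""
    return 10.0 ** (A0 + A1 * Lam ** 0.2 - np.log10(2.0 * Mbar))

def compactness(Lam):
    """Average stellar compactness from the C(Lambda) quasi-universal fit."""
    x = np.log(Lam)
    return 0.360 - 0.0355 * x + 0.000705 * x ** 2

def f1_of_C(C):
    """f_1 (kHz) from the cubic fit in the compactness."""
    return B[0] + B[1] * C + B[2] * C ** 2 + B[3] * C ** 3

def f1_of_Lam(Lam):
    """f_1 (kHz) from the cubic fit in Lambda^{1/5}."""
    x = Lam ** 0.2
    return C_COEF[0] + C_COEF[1] * x + C_COEF[2] * x ** 2 + C_COEF[3] * x ** 3
```

For $\Lambda \simeq 790$, the value quoted above for the quark-star binary, the two $f_1$ fits agree to within roughly $0.1\,{\rm kHz}$ once the compactness is taken from the $\mathcal{C}(\Lambda)$ relation.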
Similarly, for the second and most powerful feature in the post-merger spectrum, we use \cite{Rezzolla2016} \begin{align} \label{eq:f2_1} f_2 & \approx 5.832 - 0.800\, \Lambda^{1/5}\ {\rm kHz}\,, \\ \label{eq:f2_2} f_2 & \approx 5.832 - 123.016 \exp\left({-\sqrt{56.738\, \mathcal{C} + 4.930}}\right)\ {\rm kHz}\,, \end{align} where for the second relation \eqref{eq:f2_2} we have used expression \eqref{eq:c-lambda}. These frequencies are shown in Fig.~\ref{fig:universal} with green lines and shadings, together with their uncertainties as estimated in Ref.~\cite{Rezzolla2016}; furthermore, they are complemented by the expression for the $f_2$ frequency proposed in Eq. (1) of Ref.~\cite{Bauswein2019}, where it is dubbed $f_{\rm peak}$ (orange line and shading). A comparison with the numerical data clearly shows that the universal relations provide a rather accurate representation in the case of hadronic stars, both when expressed in terms of the tidal deformability (left panel) and of the average stellar compactness (right panel). The match remains very good also for quark stars, but only when the universal relations are given in terms of the tidal deformability (in the case of the $f_1$ frequencies, the relative differences are of $2.15\%$ and $0.22\%$ for binary quark stars and binary hadron stars, respectively). Furthermore, the match of the $f_2$ frequency of the hadron-star binary (black filled star) with the quasi-universal relation is only marginally acceptable when using expressions \eqref{eq:f2_1} and \eqref{eq:f2_2}, while it lies below the value estimated in Ref.~\cite{Bauswein2019} (the frequencies obtained when considering the temperature-dependent version of the DD2 EOS are reported for comparison with gray symbols).
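The two expressions for $f_2$ are easy to check for mutual consistency: when the compactness is taken from Eq.~\eqref{eq:c-lambda}, they agree to within a few Hz for the tidal deformabilities considered here. A minimal sketch (function names are illustrative):

```python
import numpy as np

def f2_of_Lam(Lam):
    """f_2 (kHz) from the quasi-universal fit in the tidal deformability."""
    return 5.832 - 0.800 * Lam ** 0.2

def f2_of_C(C):
    """f_2 (kHz) from the equivalent fit in the stellar compactness."""
    return 5.832 - 123.016 * np.exp(-np.sqrt(56.738 * C + 4.930))
```

This close agreement simply reflects the fact that the compactness-based fit was derived from the deformability-based one through the $\mathcal{C}(\Lambda)$ relation.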
However, the differences can be quite substantial when the numerical data for binary quark stars are compared with expressions \eqref{eq:f1_1} and \eqref{eq:f1_2} for the $f_1$ frequency, or with expressions \eqref{eq:f2_1} and \eqref{eq:f2_2} for the $f_2$ frequency. We can summarise these findings into another notable result of our comparative investigation: quark-star binaries have post-merger frequencies that obey quasi-universal relations derived from hadron-star binaries in terms of the tidal deformability, but not when expressed in terms of the average stellar compactness. Hence, it may be difficult to distinguish the two classes of stars based only on the post-merger frequencies if no information on the stellar radius is available. As conjectured above, this behaviour seems to suggest that new universal relations in terms of the stellar compactness need to be derived when considering binaries of SQM. This conjecture can only be verified once simulations with other EOSs for SQM have been performed. \subsection{Properties of the ejected material} \label{ssec:ejecta} As mentioned in the Introduction, besides gravitational waves, another important product of the merger of a binary system of compact stars is represented by the ejected matter, as this plays an important role in the subsequent nucleosynthesis and in the kilonova emission. We have also already remarked that the lack of a consistent treatment of the evaporation process of SQM to nucleons prevents us from establishing the actual role played by the matter ejected from the binary quark-star merger. We recall that the study of Ref.~\cite{Bucciantini2019} has concluded that only a small amount of quark matter can survive evaporation, thus yielding a very low density of SQM in the galaxy.
At the same time, the use of a hybrid-EOS approach to handle the thermal effects in the hadronic DD2 EOS -- which we have employed to achieve a consistent picture with the MIT2cfl EOS -- prevents us from making considerations on the nucleosynthetic yields and kilonova emission also for the binary hadron-star merger. Notwithstanding these limitations, we can nevertheless obtain interesting comparative measurements of the ejected matter in terms of bulk properties, such as the total amount of ejected rest mass for the two classes of stars and the corresponding distributions in velocity and entropy. To this end, and as done routinely in this type of simulations, we place a series of detectors in the form of coordinate 2-spheres at different distances from the origin and measure the amount of matter that crosses them. Considering as gravitationally unbound the matter satisfying the so-called ``geodesic'' criterion, namely, matter for which the covariant time component of the four-velocity is $u_t< -1$ (see \cite{Bovard2016} for a detailed discussion of various criteria for unbound matter), we can measure the effective mass lost by the binaries. This is shown as a function of time from the merger in Fig.~\ref{fig:outflow} for a detector placed at a radius of $500\, M_\odot \simeq 750\, {\rm km}$, where lines of different colours follow the same convention of the previous figures. When comparing the ejected mass from the quark-star binary with that from the hadron-star binary, the difference is apparent, with the former having $M_{\rm ej} = 2.68\times 10^{-3}\,M_{\odot}$ and the latter $M_{\rm ej} = 2.94\times 10^{-3}\,M_{\odot}$. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{figures/outflow.pdf} \caption{Ejected rest-mass as a function of time as measured by a detector placed at a radius of $500\, M_\odot \simeq 750\, {\rm km}$.
Lines of different colours follow the same convention of the previous figures: red for quark-star binaries and black for hadron-star binaries. Note that the data refers to the high-resolution simulations and that the ejected mass for the quark-star binaries is smaller.} \label{fig:outflow} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=0.49\textwidth]{figures/outflow_dist_v.pdf} \includegraphics[width=0.49\textwidth]{figures/outflow_dist_s.pdf} \caption{Distributions of the ejected matter in terms of the norm of the three-velocity (left panel) and of the entropy per baryon (right panel). Lines of different colours follow the same convention of the previous figures: red for quark-star binaries and black for hadron-star binaries. Note that the data refers to the high-resolution simulations and that quark-star binaries have much reduced tails in both quantities.}\label{fig:outflow_dist} \end{figure*} There are at least three explanations for this difference. The first is that, as mentioned repeatedly, SQM is self-bound and hence generally harder to eject. Second, quark stars are intrinsically more compact, so that the merged object will both be subject to stronger dynamical shocks and produce stronger gravitational fields, from which it is harder to eject matter. Third, and most importantly, a rest-mass difference is present between SQM and hadronic matter. We recall, in fact, that the average rest mass per baryon of SQM adopted here is $850\,{\rm MeV}$, which is smaller than the corresponding value for hadronic matter, $940\,{\rm MeV}$. Proceeding further in our comparison, we report in Fig.~\ref{fig:outflow_dist} the distributions of the ejected matter in terms of the norm of the three-velocity (left panel) and in terms of the entropy per baryon (right panel). Note that both distributions are relative to the ejected matter and hence provide a measure of the fraction of the ejecta with given properties.
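In practice, the measurement described above reduces, for each detector 2-sphere, to summing the rest mass of the fluid elements flagged as unbound by the geodesic criterion; the distributions in Fig.~\ref{fig:outflow_dist} are then mass-weighted histograms normalised to the total ejecta. A schematic Python version (the data layout is an illustrative assumption, not the actual post-processing code):

```python
import numpy as np

def ejected_mass(u_t, dM):
    """Total rest mass of fluid elements crossing the detector that are
    unbound according to the geodesic criterion u_t < -1, where u_t is
    the covariant time component of the four-velocity and dM the rest
    mass carried by each element."""
    return dM[u_t < -1.0].sum()

def ejecta_fraction_hist(q, u_t, dM, bins=50):
    """Mass-weighted distribution of a quantity q (e.g. the norm of the
    three-velocity or the entropy per baryon) over the unbound material,
    normalised so that the histogram sums to unity, i.e. it measures the
    fraction of the ejecta with given properties."""
    unbound = u_t < -1.0
    w = dM[unbound] / dM[unbound].sum()
    return np.histogram(q[unbound], bins=bins, weights=w)
```

More restrictive criteria (e.g. the Bernoulli criterion) would simply replace the condition `u_t < -1.0` with the corresponding inequality.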
Concentrating first on the left panel, it is clear that the two distributions are very similar and that differences appear only in the high-velocity tails, i.e.,~ $v\gtrsim 0.6$, which are larger by almost one order of magnitude in the case of binary hadron-star mergers. Although the actual amount of matter in these tails is tiny (i.e.,~ $\lesssim 10^{-8}\,M_{\odot}$), it could play a role when interacting with the interstellar medium and produce a re-brightening of the afterglow signal \cite{Hotokezaka2018} (see \cite{Most2020e} for a discussion of the role of fast ejecta). When considering instead the distribution in entropy (right panel of Fig.~\ref{fig:outflow_dist}) -- and bearing in mind that these measurements are performed in the same way for the two classes of stars, but are equally approximate -- it is possible to appreciate that quark-star mergers overall produce ejected matter with larger entropy. At the same time, hadron-star mergers are able to eject matter with very high entropy, i.e.,~ $s_{_{\rm B}} \gtrsim 60\,k_{_{\rm B}}/{\rm baryon}$, in amounts that are about one order of magnitude larger than for quark-star mergers. These differences in the entropy distributions may have an effect on the subsequent kilonova emission, but the degree to which this will happen remains unclear until the quark-evaporation mechanism is properly taken into account, more sophisticated EOSs are devised to estimate the impact of SQM in binary mergers\footnote{We note that the MIT2cfl EOS considered here and introduced in \cite{Zhou2017} does not include an electron fraction, thus making it impossible to account for the evolution of the electron fraction in the ejected matter.}, and more systematic investigations are carried out.
\section{Conclusion} \label{sec:conclusion} Since the hypothesis that SQM is the ground state of matter was formulated more than four decades ago, a vast literature has developed to investigate this scenario and to assess its viability against astronomical observations. In this way, a very large number of works have been presented in which the possibility that SQM could lead to compact stars, i.e.,~ quark stars, has been explored in great detail. Of this vast literature, however, only a very small fraction has been dedicated to the study of the dynamics of binary systems of quark stars. The scarcity of studies of this scenario, which is particularly striking after the detection of gravitational waves and of electromagnetic counterparts from the merger of low-mass compact binaries such as GW170817, probably has a twofold origin. From an observational side, there is the expectation that a merger of binary quark stars cannot be responsible for the observed kilonova emission in GW170817 and for the subsequent nucleosynthesis. From a theoretical side, on the other hand, the modelling of the inspiral and merger of a binary of quark stars is particularly challenging, as it is difficult to properly capture the large discontinuity that appears at the surface of these self-bound stars. The first of these difficulties has recently been addressed in a number of works that have invoked a quark-evaporation process into hadrons that could have taken place as a result of the high temperatures reached after the merger. In this case, therefore, most of the SQM ejected from the quark-star binary would have evaporated into nucleons and could therefore have contributed to the kilonova signal in AT2017gfo \cite{Bucciantini2019, DePietri2019, Horvath2019}.
Addressing the second difficulty, on the other hand, is the purpose of this work, where we have presented a suitable definition of the baryonic mass of SQM and a novel technique in which a very thin crust is added to the EOS to produce a smooth and gradual change of the specific enthalpy across the crust. The new approach also allows the use of different values of the baryonic mass, which can be introduced via a suitable rescaling of the evolved hydrodynamical variables. The introduction of this crust, whose rest-mass content is minute, i.e.,~ $\sim 5\times 10^{-3}\,M_\odot$, and whose spatial extension is restricted to two grid cells only, has been carefully tested by considering the oscillation properties of isolated quark stars. In this way, it was possible to show that the simulated dynamical response of the quark stars matches the perturbative predictions accurately and that the match becomes increasingly accurate as the numerical resolution is increased. This validation, which has been performed numerous times in the past for neutron stars, reaches levels of accuracy that are comparable with those obtained for hadronic stars. Using this novel technique we have been able to carry out the first fully general-relativistic simulations of the merger of binary strange quark stars. In addition, in order to best assess the features of this merging process that are specific to quark stars, we have carried out a systematic comparison of the dynamics of quark-star binaries with the corresponding behaviour exhibited by a binary of hadron stars having the same mass and very similar tidal deformability, namely, a binary described by the DD2 EOS. In this way, it was possible to highlight several important differences between the SQM and the hadronic stars, which represent the main results of our work. First, the dynamical mass loss of a quark-star binary is $\sim 20\%$ smaller than that of the corresponding hadronic binary.
Second, quark-star binaries have merger frequencies similar to those of hadron-star binaries with comparable tidal deformability. Hence, it may be difficult to distinguish the two classes of stars based only on the gravitational-wave signal during the inspiral. Third, quark-star binaries have post-merger frequencies that obey quasi-universal relations derived from hadron-star binaries in terms of the tidal deformability, but not when expressed in terms of the average stellar compactness. Hence, it may be difficult to distinguish the two classes of stars based only on the post-merger frequencies if no information on the stellar radius is available. Fourth, the matter ejected from quark-star binaries has much smaller tails in the velocity distributions, i.e.,~ for $v \gtrsim 0.6$; this lack of fast ejecta may be revealed by the corresponding lack of a radio re-brightening when the fast ejecta interact with the interstellar medium. Finally, while quark-star binaries eject material that, on average, has larger entropy per baryon, this material lacks the important tail of very high-entropy matter. Whether these differences in the ejected matter will be able to leave an imprint on the electromagnetic counterparts and on the nucleosynthetic yields remains unclear. The results presented here had to rely necessarily on a number of simplifying assumptions and can therefore be improved in a number of ways. First, by adopting EOSs for the SQM that have a proper treatment of the temperature dependence and a nonzero electron fraction. Doing so will allow us not only to describe the thermodynamical evolution after the merger accurately and self-consistently, but also to determine whether the ejected material will lead to the observed chemical abundances. Second, by employing a larger class of EOSs, so that it will be possible to establish whether the frequency at merger and the post-merger frequencies obey new universal relations when expressed in terms of the stellar compactness.
Finally, and most importantly, by adopting a detailed and quantitative description of the quark-evaporation mechanism, so that a consistent assessment can be made of the viability of binary quark-star mergers as sources of the electromagnetic counterpart in AT2017gfo. All of these improvements will be explored and presented in future works. \begin{acknowledgments} It is a pleasure to thank M. Hanauske, J. Papenfort, L. Weih, A. Drago, G. Pagliara, and A. Bauswein for useful input and comments. Z. Zhu acknowledges support from the China Scholarship Council (CSC). Support also comes in part from ``PHAROS'', COST Action CA16214; the LOEWE-Program in HIC for FAIR; and the ERC Synergy Grant ``BlackHoleCam: Imaging the Event Horizon of Black Holes'' (Grant No. 610058). The simulations were performed on the SuperMUC and SuperMUC-NG clusters at the LRZ in Garching, on the LOEWE cluster at the CSC in Frankfurt, and on the HazelHen cluster at the HLRS in Stuttgart. \end{acknowledgments} \bibliographystyle{apsrev4-2}
\documentclass{article} \usepackage{spconf,amsmath,graphicx} \usepackage{svg} \usepackage{amssymb} \usepackage{multirow} \usepackage{booktabs} \usepackage[ruled,vlined]{algorithm2e} \setlength\heavyrulewidth{0.3ex} \newcommand{\STAB}[1]{\begin{tabular}{@{}c@{}}#1\end{tabular}} \DeclareMathOperator*{\argmax}{\arg\!\max} \def\x{{\mathbf x}} \def\L{{\cal L}} \title{Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks} \name{% \begin{tabular}{@{}c@{}} Mahesh Sudhakar$^{\star}$, Sam Sattarzadeh$^{\star}$, Konstantinos N. Plataniotis$^{\star}$, \\ Jongseong Jang$^{\dagger}$, Yeonjeong Jeong$^{\dagger}$, Hyunwoo Kim$^{\dagger}$ \end{tabular}} \address{$^{\star}$Department of Electrical \& Computer Engineering, University of Toronto\\ $^{\dagger}$Fundamental Research Lab, LG AI Research} \begin{document} \maketitle \begin{abstract} Explainable AI (XAI) is an active research area that aims to interpret a neural network's decisions by ensuring transparency and trust in task-specific learned models. Recently, perturbation-based model analysis has shown better interpretation, but backpropagation techniques are still prevalent because of their computational efficiency. In this work, we combine both approaches into a hybrid visual explanation algorithm and propose an efficient interpretation method for convolutional neural networks. Our method adaptively selects the most critical features that mainly contribute towards a prediction to probe the model by finding the activated features. Experimental results show that the proposed method can reduce the execution time by up to 30\% while maintaining competitive interpretability, without compromising the quality of the generated explanations. \end{abstract} \begin{keywords} CNNs, Deep Learning, Explainable AI, Interpretable ML, Neural Network Interpretability.
\end{keywords} \section{INTRODUCTION} \label{sec:intro} Over the past few years, the availability of large amounts of digital data, advances in computing facilities, and easy access to many readily available pre-trained models have fueled the growth of deep learning. Although such models produce high accuracy in object recognition, the interpretability \cite{fan2020interpretability} of their decisions is also essential to convince stakeholders or to locate any potential bias in the underlying data. With AI currently being employed in various fields such as healthcare, consumer retail, and banking, it is high time to develop ``Responsible AI" \cite{arrieta2020explainable} for society. To ensure the uniformity of the training data's distribution, there has lately been an increase in modern open-source toolkits \cite{adebayo2016fairml, bellamy2019ai} that act as common frameworks to evaluate a model's fairness. Explainable AI (XAI) has recently been offering many algorithms to interpret a model's behavior. Based on their placement in the training process's timeline, XAI approaches can be broadly classified into \textit{ad-hoc} and \textit{post-hoc} methods. In terms of their ability to explain a single instance or the whole decision process, XAI methods can be classified into \textit{local} and \textit{global}. They can also be categorized into \textit{model-agnostic} and \textit{model-specific} methods, based on the requirement to specify the model's architecture. In this work, we study a recent \textit{post-hoc}, \textit{local}, and \textit{model-specific} XAI algorithm, Semantic Input Sampling for Explanation (SISE) \cite{sattarzadeh2020explaining}, developed for image classification tasks. Building on this method, we propose a way to improve its run-time while retaining its overall performance and without compromising the quality of the visual explanations.
Our approach introduces a novel way to adaptively select the most important feature information to be considered in the subsequent steps of the algorithm's operation. This modification acts as a smart filtering procedure that turns the existing method into an automated, unified solution by eliminating the need for an end-user to tune the hyper-parameters. To demonstrate this claim, we compare our approach with the original algorithm's performance in terms of visual explanation quality, overall benchmark analysis, and execution time. \section{BACKGROUND} \subsection{Existing methods} The prior works on \textit{post-hoc} visual XAI can be divided into three main groups: `backpropagation-based', `perturbation-based', and `CAM-based' methods. The backpropagation-based methods mainly operate by backpropagating signals from the output neuron of the model to the inputs \cite{IntegGrad, simonyan2013deep} or to the hidden nodes of the model \cite{srinivas2019full}, in order to calculate gradient \cite{simonyan2013deep} or relevance \cite{bach2015pixel} terms. Perturbation-based approaches rely on feed-forwarding the model with perturbed copies of the input image. They interpret the model's behavior using techniques such as probing the model with random masks \cite{petsiuk2018rise} or optimizing a perturbation mask for each input \cite{fong2019understanding}. Moreover, CAM-based methods are built on the Class Activation Mapping (CAM) method \cite{CAM} and are used specifically for CNNs, taking advantage of the ability of this type of network to perform weak object localization, as stated in \cite{zhou2014object}. Most of these methods are developed with backpropagation techniques \cite{GradCAM, GradCAMPP} or perturbation techniques \cite{ScoreCAM}. \subsection{Semantic Input Sampling for Explanation} SISE is a recent explanation method that spans all three of the mentioned groups of visual XAI methods, although it is generally classified as a perturbation-based algorithm.
SISE employs the feature information underlying the model's various depths to generate a saliency-based high-resolution visual explanation map. For a given trained classification model $\delta: I \rightarrow \mathbb{R}^{C}$ with $N$ convolutional blocks that outputs a confidence score over $C$ classes for each input image $I$, SISE generates a 2-dimensional explanation map $Y_{I,\delta(\lambda)}$ for $\lambda$ in the domain of feature maps $\Lambda$, through its four-phased architecture. In the first phase (\textit{Feature map Extraction}), the pooling layers $p_{l}$ of the model for $l \in \{1,..,N\}$ are targeted, and their corresponding feature maps $F_{k}^{[p]}$ for $k \in \{1,..,M^{p}\}$ are collected. As this operation is independent of the classifier part, there would be a large amount of irrelevant feature information about the background or other object classes (if present), in addition to the class of interest. The excess information is filtered out in the second phase (\textit{Feature map Selection}) based on the backpropagation scores. Here, the feature maps with positive gradients towards a particular class are selected and post-processed into attribution masks $A_{k}^{[p]}$, via bilinear interpolation followed by normalization in the range $[0,1]$. The generated attribution masks are then weighted according to their classification scores in the third phase (\textit{Attribution mask Scoring}) and later combined to form a layer visualization map $V_{I, \delta(\lambda)}^{[p]}$. The preceding steps are repeated for all pooling (down-sampling) layers $p_{l}$ of the network, and the resulting layer maps are then passed to the final phase (\textit{Fusion}) of the algorithm, where they are fused in a cascading manner through a series of operations including addition, multiplication, and adaptive binarization, to reach the final explanation map.
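As an illustration of the post-processing in the second phase, converting a selected feature map into an attribution mask essentially amounts to a bilinear upsampling to the input resolution followed by a min-max normalisation to $[0,1]$. A minimal sketch (the function name is our own choice, not from the SISE codebase):

```python
import numpy as np
from scipy.ndimage import zoom

def postprocess_feature_map(fmap, out_shape):
    """Bilinear upsampling of a low-resolution feature map to the input
    resolution, followed by min-max normalisation to [0, 1]."""
    zy = out_shape[0] / fmap.shape[0]
    zx = out_shape[1] / fmap.shape[1]
    up = zoom(fmap, (zy, zx), order=1)  # order=1 -> bilinear interpolation
    up = up - up.min()
    rng = up.max()
    return up / rng if rng > 0 else up  # guard against constant maps
```

In a deep-learning framework the same operation would typically be expressed with the framework's own interpolation primitive, but the numerical content is the same.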
\section{PROBLEM STATEMENT} The gradient-based feature map selection policy in SISE aims to distinguish the feature maps containing essential attributions for the model's prediction (`positive-gradient') from the ones representing outliers or background information. This is achieved using a threshold parameter $\mu$, set to 0 by default, that discards the maps with `negative-gradient' scores. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{hog_new.pdf} \caption{Histogram of the gradient values recorded from the feature maps in the last convolutional layer of a ResNet-50.} \label{fig:hog} \end{figure} However, most of the selected activation maps with positive gradients are relatively ineffective in the prediction procedure, thereby increasing SISE's computational overhead unnecessarily. We identify that the average gradient distribution of the positive-gradient feature maps follows a pattern, as in Fig. \ref{fig:hog}, where several trivial features are represented with low gradient values. Thus, only a small fraction of the feature maps, namely the most critical ones, needs to be passed to the third phase of SISE. Hence, we focus on developing an adaptive selection policy for the parameter $\mu$ of SISE that estimates the least number of features required to generate an explanation map without any notable compromise (and, in some cases, even a slight enhancement) in terms of visual quality.
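For illustration, SISE's default rule ($\mu = 0$) amounts to keeping only the positive-gradient maps; a minimal sketch with hypothetical feature maps and per-map average gradient scores:

```python
import numpy as np

def select_positive_gradient_maps(feature_maps, avg_gradients):
    """SISE's default selection (mu = 0): keep only the feature maps
    whose average gradient score towards the target class is positive."""
    return [fm for fm, g in zip(feature_maps, avg_gradients) if g > 0]

# toy example: 5 hypothetical feature maps with mixed gradient signs
maps = [np.full((7, 7), float(i)) for i in range(5)]
grads = [-0.2, 0.01, 0.5, -0.01, 0.3]
print(len(select_positive_gradient_maps(maps, grads)))  # 3
```

Note that, as discussed above, maps like the one with score $0.01$ survive this cut despite contributing little to the prediction.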
\section{ADAPTIVE MASK SELECTION} \begin{algorithm}[ht] \SetAlgoLined \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKwData{Parameter}{Parameter} \Input{An input image $I$ and a trained model $\delta$.} $\eta \leftarrow$ \text{post-processing function.}\\ $\zeta \leftarrow$ \text{heatmap fusion function.} \For{$n \leftarrow 1,...,N$}{ Select the pooling layer $p$ and collect feature maps $F_{k}^{[p]} \; \forall k \in \{ 1,..,M^{[p]} \}$\; Let the domain of the feature maps be $\Lambda^{[p]}$\; $\sigma_{k}^{[p]}$ = $\displaystyle\sum_{\lambda^{[p]} \in \Lambda^{[p]}} \frac{\partial \delta(I)}{\partial F_{k}^{[p]} (\lambda^{[p]})}$ $\And$ $\rho^{[p]}$ = $\max_{k}(\sigma_{k}^{[p]})$ \; $A_k^{[p]} \leftarrow []$ ; $\upsilon_k^{[p]} \leftarrow \sigma_k^{[p]}/\rho^{[p]}$ ; $\Upsilon^{[p]}\leftarrow\{\upsilon_k^{[p]} > 0 \,|\, k\in\{1,...,M^{[p]}\}\}$\; $\mu^{[p]}\leftarrow \Upsilon^{[p]}(\argmax_{j\in \{1,...,|\Upsilon^{[p]}|\}}(\tau^{[p]}(j)))$\; \ForEach{$k \in \{1,...,M^{[p]}\}$}{ \eIf{$\upsilon_{k}^{[p]}>\mu^{[p]}$}{ $A_{k}^{[p]}\leftarrow A_{k}^{[p]} \cup \eta (F_{k}^{[p]}) $\; }{ $A_{k}^{[p]}\leftarrow A_{k}^{[p]} $\; }} $V_{I, \delta(\lambda)}^{[p]}$ = $\mathbb{E}_{A^{[p]}} [\delta (I \odot m) \cdot C_{m}(\lambda)]$\; } SISE explanation: $Y_{I,\delta(\lambda)}$ = $\zeta (V_{I, \delta(\lambda)}^{[p]})$ \\ \Output{A 2D explanation map $Y_{I,\delta(\lambda)}$.} \caption{Ada-SISE: Adaptive Semantic Input Sampling for Explanation} \label{alg:Ada-SISE} \end{algorithm} \begin{figure}[ht] \centering \includegraphics[width=0.95\linewidth]{Ada-SISE-arch_compressed.pdf} \caption{Architecture of the proposed Ada-SISE XAI method.} \label{fig:arch} \end{figure} To tune the strictness of feature map selection adaptively for each layer, we employ an Otsu-based framework \cite{Otsu}. For a selected layer $p$, we collect the set of feature maps $F_k^{[p]}$ and their corresponding gradient values $\sigma_k^{[p]}$, and determine their maximum as $\rho^{[p]}$.
Denoting the normalized gradient values for the feature maps as $\upsilon_k^{[p]}=\frac{\sigma_k^{[p]}}{\rho^{[p]}}$, we define the set of positive-gradient feature maps as: \begin{equation} \Upsilon^{[p]} \equiv {\Upsilon^{[p]}}^{+}= \{\upsilon_k^{[p]} \,|\, \upsilon_k^{[p]} > 0,\; k\in\{1,...,M^{[p]}\} \} \end{equation} where $M^{[p]}$ is the number of feature maps extracted from layer $p$. Otsu's method is applied to the set of positive-gradient feature maps to compute a refined threshold on them, based on the histogram of their average gradient scores. Letting $\Upsilon^{[p]}(i)$ for $i \in \{1,...,|\Upsilon^{[p]}|\}$ be the $i$-th value in $\Upsilon^{[p]}$, we can formulate the mean value of the masks with gradient values lower/higher than $\Upsilon^{[p]}(i)$, respectively, as follows: \begin{equation} \omega^{[p]}_L(i)=\frac{\sum_{j=1}^{i} (\Upsilon^{[p]}(j))}{i} \times |\Upsilon^{[p]}| \end{equation} \begin{equation} \omega^{[p]}_H(i)=\frac{\sum_{j=i}^{|\Upsilon^{[p]}|} (\Upsilon^{[p]}(j))}{|\Upsilon^{[p]}|-i}\times |\Upsilon^{[p]}| \end{equation} If we set $\mu=\Upsilon^{[p]}(i)$ to divide the set of positive-gradient feature maps into low and high subsets, the inter-class variance of these sets is calculated as follows: \begin{equation} \tau^{[p]}(i)= \omega^{[p]}_L(i) \times \omega^{[p]}_H(i) \times \bigg[ \frac{|\Upsilon^{[p]}|-i}{|\Upsilon^{[p]}|}-\frac{i}{|\Upsilon^{[p]}|} \bigg]^2 \end{equation} which can be simplified as: \begin{equation} \tau^{[p]}(i)= \omega^{[p]}_L(i) \times \omega^{[p]}_H(i) \times \bigg[ \frac{|\Upsilon^{[p]}|-2i}{|\Upsilon^{[p]}|} \bigg]^2 \label{eq:obt} \end{equation} According to \cite{Otsu}, minimizing the intra-class variance for both classes simultaneously is equivalent to maximizing the inter-class variance in equation \eqref{eq:obt}.
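A sketch of this Otsu-style search, following the 1-indexed sums above (the variable names are ours, and we sort the scores so that the low/high split is well defined):

```python
import numpy as np

def otsu_threshold(upsilon):
    """Return mu = Upsilon(argmax_i tau(i)) using the inter-class
    variance tau(i) defined above; upsilon holds the positive
    normalized gradient scores of one layer."""
    u = np.sort(np.asarray(upsilon, dtype=float))  # ascending scores
    n = len(u)
    best_tau, best_i = -np.inf, 1
    for i in range(1, n):  # i = n would leave the high class empty
        w_low = u[:i].sum() / i * n             # omega_L(i)
        w_high = u[i - 1:].sum() / (n - i) * n  # omega_H(i), sum from j = i
        tau = w_low * w_high * ((n - 2 * i) / n) ** 2
        if tau > best_tau:
            best_tau, best_i = tau, i
    return u[best_i - 1]  # the paper's Upsilon is 1-indexed

# bimodal toy scores: many small gradients plus a few large ones
mu = otsu_threshold([0.01, 0.02, 0.03, 0.04, 0.8, 0.9, 1.0])
```

Feature maps with $\upsilon_k^{[p]} > \mu$ are then the only ones converted to attribution masks and scored.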
Hence, we can identify the most deterministic feature maps in each layer by applying the threshold that maximizes the inter-class variance: \begin{equation} \mu^{[p]}=\Upsilon^{[p]}\bigg( \argmax_{j\in \{1,...,|\Upsilon^{[p]}|\}} \big( \tau^{[p]}(j) \big) \bigg) \label{eq:thres} \end{equation} The $\argmax$ operation in equation \eqref{eq:thres} is computed by a simple search. Since the number of feature maps derived from a layer is not noticeably large, and since some of these feature maps are already discarded as negative-gradient activation maps, this search does not add any significant complexity to the SISE framework. We term our method Ada-SISE and show its architecture in Fig. \ref{fig:arch} and its methodology in Algorithm \ref{alg:Ada-SISE}. \section{RESULTS} To compare Ada-SISE's performance directly with SISE, experiments were performed on the test set of the Pascal VOC 2007 dataset \cite{PASCALVOC}. Two pre-trained models, a shallow VGG16 (with a test accuracy of 87.18\%) and a residual ResNet-50 network (with 87.96\% test accuracy), are loaded directly from the TorchRay library \cite{petsiuk2018rise} to replicate the original experimental setup. As it was reported in \cite{sattarzadeh2020explaining} that SISE matches or outperforms most state-of-the-art XAI methods such as Grad-CAM \cite{GradCAM}, RISE \cite{petsiuk2018rise} and Score-CAM \cite{ScoreCAM}, we restrict our comparisons to Extremal Perturbation \cite{fong2019understanding} (one of the more sophisticated perturbation-based methods) and SISE.
\subsection{Benchmark Analysis} \begin{table}[ht] \centering \begin{tabular}{c c c c c} \toprule & \multirow{2}{*}{\textbf{Metric}} & \textbf{Extremal} & \multirow{2}{*}{\textbf{SISE}} & \multirow{2}{*}{\textbf{Ada-SISE}}\\ & & \textbf{Perturbation} & & \\ \midrule \multirow{5}{*}{\STAB{\rotatebox[origin=c]{90}{\textbf{VGG16}}}} & \textbf{EBPG}& \textbf{61.19} & 60.54 & 60.79 \\ & \textbf{Bbox} & 51.2 & 55.68 & \textbf{55.73}\\ & \textbf{Drop\%} & 43.9 & \textbf{38.40} & 38.87\\ & \textbf{Increase\%} & 32.65 & 37.96 & \textbf{38.25}\\ \cmidrule(lr){2-5} & \textbf{Run-time} (s) & 87.42 & 5.96 & \textbf{4.23} \\ \midrule \multirow{5}{*}{\STAB{\rotatebox[origin=c]{90}{\textbf{ResNet-50}}}} & \textbf{EBPG} & 63.24 & 66.08 & \textbf{66.4} \\ & \textbf{Bbox} & 52.34 & 61.59 & \textbf{61.77} \\ & \textbf{Drop\%} & 39.38 & \textbf{30.92} & \textbf{30.92} \\ & \textbf{Increase\%} & 34.27 & 40.22 & \textbf{40.75} \\ \cmidrule(lr){2-5} & \textbf{Run-time} (s) & 78.37 & 9.21 & \textbf{6.29} \\ \bottomrule \end{tabular} \caption{Results of benchmark evaluation of Ada-SISE on pre-trained models on the PASCAL VOC 2007 \cite{PASCALVOC} dataset.} \label{tab: gt_metrics} \end{table} Table \ref{tab: gt_metrics} shows the benchmark evaluation of Ada-SISE with respect to various metrics, along with execution times. As these results were obtained with the same experimental setup as the SISE paper, readers can refer to \cite{sattarzadeh2020explaining} for further head-to-head comparisons of Ada-SISE with other state-of-the-art methods. Energy-Based Pointing Game (EBPG) \cite{ScoreCAM} and Bbox \cite{schulz2020restricting} use the available ground-truth annotations to determine the precision of an XAI algorithm. Meanwhile, Drop and Increase rates \cite{AblationCAM} measure the contribution of the pixels captured in the explanation map towards the model's predictive accuracy.
Ada-SISE outperforms SISE in almost all of the metrics\footnote{For each metric in Table \ref{tab: gt_metrics}, the best is shown in bold. Besides Drop\% and run-time (in seconds), higher is better for all other metrics.} while executing about 30\% faster. Fig. \ref{fig:qual} qualitatively compares the explanation maps on a ResNet-50 model, showing the ground-truth class and its annotations along with the model's corresponding confidence score for each image. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{qual_compressed.pdf} \caption{Comparison of Ada-SISE with SISE \cite{sattarzadeh2020explaining} on a ResNet-50 model with images from Pascal VOC 2007 dataset \cite{PASCALVOC} demonstrating their class-discriminative explanation ability.} \label{fig:qual} \end{figure} \subsection{Run-time Analysis} \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{Runtime_compressed.pdf} \caption{Comparison of the average run-times of Ada-SISE with SISE on a sample of images with a ResNet-50. } \label{fig:run-time} \end{figure} The bottleneck in SISE's run-time is its third phase, where a large number of positive-gradient feature maps are fed forward to compute their scoring weights. As Ada-SISE chooses only the fraction of them it considers crucial, our algorithm's run-time is reduced significantly in the scoring phase. Fig. \ref{fig:run-time} compares the run-times: Ada-SISE executes in under 6.3 seconds, while SISE takes about 9.21 seconds. The small rise in the execution time of Ada-SISE's second phase is the effect of our proposed adaptive thresholding procedure. The reported numbers are averages over 100 random images from the Pascal VOC dataset on an NVIDIA Tesla T4 GPU with 16 GB of RAM.
\subsection{Discussion} \begin{table}[ht] \centering \begin{tabular}{c c c c c c} \textbf{} & \textbf{$p_{1}$} & \textbf{$p_{2}$} & \textbf{$p_{3}$} & \textbf{$p_{4}$} & \textbf{$p_{5}$} \\ \toprule \textbf{No. of feature} & \multirow{2}{*}{64} & \multirow{2}{*}{256} & \multirow{2}{*}{512} & \multirow{2}{*}{1024} & \multirow{2}{*}{2048} \\ \textbf{maps available} & & & & &\\ \midrule \textbf{SISE} & 31 & 130 & 262 & 515 & 1008 \\ \midrule \textbf{Ada-SISE} & 26 & 114 & 179 & 420 & 551 \\ \bottomrule \end{tabular} \caption{The number of feature maps chosen by Ada-SISE (on average) over the five pooling layers of a ResNet-50 compared with that of SISE and the corresponding number of available maps.} \label{tab:discussion} \end{table} The number of feature maps selected for each pooling layer $p_{l}$ of the network was recorded over a sample of 500 images from the Pascal dataset, averaged, and reported in Table \ref{tab:discussion}. Although the deeper layers contribute more feature maps, Ada-SISE chooses only a fraction of them, justifying the run-time reduction after the second phase. This validates our claim that, by neglecting comparatively low gradient values, the dominant feature maps that contribute most towards a prediction can be extracted without compromising the explanation quality. Although an ablation study could identify a suitable value for $\mu$ by fine-tuning SISE through extensive experiments, such a solution would be profoundly dependent on the training data and brittle when applied to new, unseen data. Ada-SISE therefore generalizes SISE so that it scales to any application. \section{CONCLUSION} In this work, we propose Ada-SISE as an improvement to the recent SISE method that makes it a fully automated XAI algorithm. We also report a reduction in run-time and an overall improvement in the benchmark analysis without any loss of visual explanation quality.
In future work, the important features identified in this way could be used to analyze a model's behavior by studying the effect on the model's prediction when they are replaced with noise or with other classes' attributions. \bibliographystyle{IEEEbib}
\section{Introduction} \label{sec:introduction} In online learning, catastrophic forgetting refers to the tendency of artificial neural networks (ANNs) to forget previously learned information when presented with new information~\citep[p.~173]{french1991using}. Catastrophic forgetting presents a severe obstacle to the broad applicability of ANNs, as many important learning problems, such as reinforcement learning, are online learning problems. Efficient online learning is also core to the continual---sometimes called lifelong~\citep[p.~55]{chen2018lifelong}---learning problem. The existence of catastrophic forgetting is of particular relevance now, as ANNs have been responsible for a number of major artificial intelligence (AI) successes in recent years (e.g., \citet{taigman2014deepface}, \citet{mnih2015human}, \citet{silver2016mastering}, \citet{gatys2016image}, \citet{vaswani2017attention}, \citet{radford2019language}, \citet{senior2020improved}). Thus there is reason to believe that methods able to successfully mitigate catastrophic forgetting could lead to new breakthroughs in online learning problems. The significance of the catastrophic forgetting problem means that it has attracted much attention from the AI community. It was first formally reported in \citet{mccloskey1989catastrophic} and, since then, numerous methods have been proposed to mitigate it (e.g., \citet{kirkpatrick2017overcoming}, \citet{lee2017overcoming}, \citet{zenke2017continual}, \citet{masse2018alleviating}, \citet{sodhani2020toward}). Despite this, it remains an unsolved issue~\citep{kemker2018measuring}. This may be partly because the phenomenon itself---and what contributes to it---is poorly understood, with recent work still uncovering fundamental connections (e.g., \citet{mirzadeh2020understanding}). This paper is offered as a step forward in our understanding of the phenomenon of catastrophic forgetting.
In this work, we seek to improve our understanding of it by revisiting the fundamental questions of (1)~how we should quantify catastrophic forgetting, and (2)~to what degree the choices we make when designing learning systems affect the amount of catastrophic forgetting. To answer the first question, we compare several different existing measures of catastrophic forgetting: retention, relearning, activation overlap, and pairwise interference. We discuss each of these metrics in detail in Section~\ref{sec:measuring_catastrophic_forgetting}. We show that, despite each of these metrics providing a principled measure of catastrophic forgetting, the relative ranking of algorithms varies wildly between them. This result suggests that catastrophic forgetting is not a phenomenon that any single one of these metrics can effectively describe. As most existing research into methods to mitigate catastrophic forgetting rarely looks at more than one of these metrics, our results imply that a more rigorous experimental methodology is required in the research community. Based on our results, we recommend that work looking at inter-task forgetting in supervised learning must, at the very least, consider both retention and relearning metrics concurrently. For intra-task forgetting in reinforcement learning, our results suggest that pairwise interference may be a suitable metric, but that activation overlap should, in general, be avoided as a singular measure of catastrophic forgetting. To address the question of to what degree the choices we make when designing learning systems affect the amount of catastrophic forgetting, we look at how the choice of modern gradient-based optimizer used to train an ANN impacts the amount of catastrophic forgetting that occurs during training.
We empirically compare vanilla SGD, SGD with Momentum~\citep{qian1999momentum,rumelhart1986learning}, RMSProp~\citep{geoffrey-rmsprop}, and Adam~\citep{kingma2014adam} under the different metrics and testbeds. Our results suggest that selecting one of these optimizers over another does indeed result in a significant change in the catastrophic forgetting experienced by the learning system. Furthermore, our results ground previous observations about why vanilla SGD is often favoured in continual learning settings~\citep[p.~6]{mirzadeh2020understanding}: namely, that it frequently experiences less catastrophic forgetting than the more sophisticated gradient-based optimizers---with a particularly pronounced reduction when compared with Adam. To the best of our knowledge, this is the first work explicitly providing strong evidence of this. Importantly, in this work we are trying to better understand the phenomenon of catastrophic forgetting itself, not explicitly seeking to understand the relationship between catastrophic forgetting and performance. While that relation is important, it is not the focus of this work; we thus defer all discussion of it to Appendix~\ref{app:actual_performance_on_the_testbeds} of our supplementary material. The source code for our experiments is available at \url{https://github.com/dylanashley/catastrophic-forgetting/tree/arxiv}. \section{Related Work} \label{sec:related_work} This section connects several closely related works to our own and examines how our work complements them. The first of these, \citet{kemker2018measuring}, directly observed how different datasets and different metrics change the effectiveness of contemporary algorithms designed to mitigate catastrophic forgetting. Our work extends their conclusions to non-retention-based metrics and to more closely related algorithms.
\citet{hetherington1989catastrophic} demonstrated that the severity of the catastrophic forgetting shown in the experiments of \citet{mccloskey1989catastrophic} was reduced if catastrophic forgetting was measured with relearning-based rather than retention-based metrics. Our work extends their ideas to more families of metrics and a more modern experimental setting. \citet{goodfellow2013empirical} looked at how different activation functions affect catastrophic forgetting and whether or not dropout could be used to reduce its severity. Our work extends theirs to the choice of optimizer and of the metric used to quantify catastrophic forgetting. While we provide the first formal comparison of modern gradient-based optimizers with respect to the amount of catastrophic forgetting they experience, others have previously hypothesized that there could be such a relation. \citet{ratcliff1990connectionist} contemplated the effect of momentum on their classic results on catastrophic forgetting and briefly experimented to confirm that their conclusions applied under both SGD and SGD with Momentum. While they observed only small differences, our work demonstrates that a more thorough experiment reveals a much more pronounced effect of the optimizer on the degree of catastrophic forgetting. Furthermore, our work includes the even more modern gradient-based optimizers in our comparison (i.e., RMSProp and Adam), which---as noted by \citet[p.~6]{mirzadeh2020understanding}---are oddly absent from many contemporary learning systems designed to mitigate catastrophic forgetting. \section{Problem Formulation} \label{sec:problem_formulation} In this section, we define the two problem formulations we will be considering in this work. These problem formulations are online supervised learning and online state value estimation in undiscounted, episodic reinforcement learning.
The supervised learning task is to learn a mapping \(f: \mathbb{R}^n \rightarrow \mathbb{R}\) from a set of examples \((\vec{x}_0, y_0)\), \((\vec{x}_1, y_1)\), ..., \((\vec{x}_n, y_n)\). The supervised learning framework is a general one, as each $\vec{x}_i$ could be anything from an image to the full text of a book, and each $y_i$ could be anything from the name of an animal to the average amount of time needed to read something. In the incremental online variant of supervised learning, each example \((\vec{x}_t, y_t)\) only becomes available to the learning system at time $t$, and the learning system is expected to learn from only this example at time $t$. Reinforcement learning considers an agent interacting with an environment. Often this is formulated as a Markov Decision Process, where, at each time step $t$, the agent observes the current state of the environment \(S_{t} \in \mathcal{S}\), takes an action \(A_{t} \in \mathcal{A}\), and, for having taken action \(A_{t}\) when the environment is in state \(S_{t}\), subsequently receives a reward \(R_{t + 1} \in \mathbb{R}\). In episodic reinforcement learning, this continues until the agent reaches a terminal state \(S_{T} \in \mathcal{T} \subset \mathcal{S}\). In undiscounted policy evaluation in reinforcement learning, the goal is to learn, for each state, the expected sum of rewards received before the episode terminates when following a given policy~\citep[p.~74]{sutton2018reinforcement}. Formally we write this as: \[ \forall s \in \mathcal{S}, v_{\pi}(s) := \mathbb{E}_{\pi}\left[\sum_{t = 1}^{T} R_{t} \,|\, S_{0} = s\right] \] where $\pi$ is the policy mapping states to actions, and $T$ is the time step at which the episode terminates. We refer to $v_{\pi}(s)$ as the value of state $s$ under policy $\pi$.
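Concretely, under this undiscounted objective the target for each visited state is just the sum of the remaining rewards. A small helper (our own, for illustration) computes these Monte Carlo targets from one episode's recorded rewards:

```python
def undiscounted_returns(rewards):
    """Given the rewards R_1, ..., R_T of one episode, return the
    undiscounted return (sum of remaining rewards) from each step,
    i.e., the Monte Carlo target for v_pi at each visited state."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g += r
        returns.append(g)
    return list(reversed(returns))

# a 4-step episode with -1 per step until the final transition
print(undiscounted_returns([-1, -1, -1, 0]))  # [-3.0, -2.0, -1.0, 0.0]
```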
In the incremental online variant of value estimation in undiscounted episodic reinforcement learning, each transition \((S_{t - 1}, R_{t}, S_{t})\) only becomes available to the learning system at time $t$, and the learning system is expected to learn from only this transition at time $t$. \section{Measuring Catastrophic Forgetting} \label{sec:measuring_catastrophic_forgetting} In this section, we examine the various ways that have been proposed to measure catastrophic forgetting. The most prominent of these is \textit{retention}. Retention-based metrics directly measure the drop in performance on a set of previously-learned tasks after learning a new task. Retention has its roots in psychology (e.g., \citet{barnes1959fate}), and \citet{mccloskey1989catastrophic} used it as a measure of catastrophic forgetting. The simplest way of measuring the retention of a learning system is to train it on one task until it has mastered that task, then train it on a second task until it has mastered that second task, and then, finally, report the new performance on the first task. \citet{mccloskey1989catastrophic} used it in a two-task setting, but more complicated formulations exist for situations where there are more than two tasks (e.g., see \citet{kemker2018measuring}). An alternative to retention that likewise appears in both the psychology and machine learning literatures is \textit{relearning}. Relearning was the first formal metric used to quantify forgetting in the psychology community~\citep{ebbinghaus1885memory}, and was first used to measure catastrophic forgetting in \citet{hetherington1989catastrophic}. The simplest way of measuring relearning is to train a learning system on a first task to mastery, then train it on a second task to mastery, then train it on the first task to mastery again, and then, finally, report how much quicker the learning system mastered the first task the second time around versus the first time.
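As a sketch, assuming the two-task protocol above and logged phase lengths, the two metrics can be summarized as follows (the ratio form of relearning is our reading of "how much quicker" the task is remastered; the helper is hypothetical):

```python
def retention_and_relearning(acc_task1_after_task2, len_phase1, len_phase3):
    """Summarize inter-task forgetting in a two-task experiment.
    retention: accuracy on task 1 after mastering task 2;
    relearning: length of the first task-1 phase relative to the
    revisit phase (> 1 means task 1 was remastered faster)."""
    return acc_task1_after_task2, len_phase1 / len_phase3

# e.g., 400 steps to first mastery, 100 steps to remaster task 1
retention, relearning = retention_and_relearning(0.62, 400, 100)
print(retention, relearning)  # 0.62 4.0
```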
While in some problems relearning is of lesser import than retention, in others it is much more significant. A simple example of such a problem is one where forgetting is made inevitable by resource limitations, and the rapid reacquisition of knowledge is paramount. A third measure of catastrophic forgetting, \textit{activation overlap}, was introduced in \citet{french1991using}. In that work, \citeauthor{french1991using} argued that catastrophic forgetting is a direct consequence of the overlap of the distributed representations of ANNs. He then postulated that catastrophic forgetting could be measured by quantifying the degree of this overlap exhibited by the ANN. The original formulation of the activation overlap of an ANN given a pair of samples looks at the activations of the hidden units in the ANN and measures their element-wise minimum between the samples. To bring this idea in line with contemporary thinking (e.g., \citet{kornblith2019similarity}) and modern network design, we propose instead using the dot product of these activations between the samples. Mathematically, we can thus write the activation overlap of a network with hidden units $h_1$, $h_2$, ..., $h_n$ with respect to two samples $\vec{a}$ and $\vec{b}$ as \[ s(\vec{a}, \vec{b}) := \frac{1}{n} \sum\limits_{i=1}^{n} g_{h_i} (\vec{a}) \cdot g_{h_i} (\vec{b}) \] where $g_{h_i} (\vec{x})$ is the activation of the hidden unit $h_i$ given a network input $\vec{x}$. A more contemporary measure of catastrophic forgetting than activation overlap is \textit{pairwise interference}~\citep{riemer2019learning,liu2019sparse,ghiassian2020improving}. Pairwise interference seeks to explicitly measure how much a network's learning from one sample interferes with its learning on another sample. In this way, it captures the tendency of a network---under its current weights---to exhibit both positive transfer and catastrophic forgetting due to interference.
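Both pair-based quantities can be computed directly from a model's activations and loss; a minimal sketch, using hand-picked activations and a toy linear model with squared error (the setup and names are ours, for illustration):

```python
import numpy as np

def activation_overlap(g_a, g_b):
    """s(a, b): mean over the hidden units of the product of the
    activations produced by two samples a and b."""
    g_a, g_b = np.asarray(g_a, float), np.asarray(g_b, float)
    return float(g_a @ g_b) / len(g_a)

def pairwise_interference(theta, a, y_a, b, y_b, alpha=0.1):
    """Change in squared error on sample a caused by one SGD step on
    sample b, for a toy linear model f(x) = theta . x."""
    J = lambda th, x, y: float((th @ x - y) ** 2)
    before = J(theta, a, y_a)
    grad_b = 2 * (theta @ b - y_b) * b  # gradient of J on sample b
    after = J(theta - alpha * grad_b, a, y_a)
    return after - before

# disjoint (sparse) representations: no overlap, and no interference
print(activation_overlap([1, 0, 1, 0], [0, 1, 0, 1]))  # 0.0
theta = np.zeros(2)
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(pairwise_interference(theta, a, 1.0, b, 1.0))  # 0.0
```

With orthogonal inputs the update on $\vec{b}$ cannot change the error on $\vec{a}$, illustrating why overlapping representations are the source of interference.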
Mathematically, the pairwise interference of a network for two samples $\vec{a}$ and $\vec{b}$ at some instant $t$ can be written as \[ PI(\theta_t; \vec{a}, \vec{b}) := J(\theta_{t + 1}; \vec{a}) - J(\theta_{t}; \vec{a}) \] where $J(\theta_t; \vec{a})$ is the performance of the learning system with parameters $\theta_t$ on the objective function $J$ for $\vec{a}$, and $J(\theta_{t + 1}; \vec{a})$ is the performance on $J$ for $\vec{a}$ after performing an update at time $t$ using $\vec{b}$ as input. Assuming $J$ is a measure of error that the learning system is trying to minimize, lower values of pairwise interference suggest that less catastrophic forgetting is occurring. When comparing the above metrics, note that, unlike activation overlap and pairwise interference, retention and relearning require some explicit notion of ``mastery'' for a given task. Furthermore, note that activation overlap and pairwise interference can be reported at each step during the learning of a single task and can thus measure intra-task catastrophic forgetting, whereas retention and relearning can only measure inter-task catastrophic forgetting. Finally, note that activation overlap and pairwise interference are defined for pairs of samples, whereas retention and relearning are defined over an entire setting. Setting-wide variants of activation overlap and pairwise interference are estimated by averaging them over all pairs in some preselected set of examples. \section{Experimental Setup} \label{sec:experimental_setup} In this section, we design the experiments that will help answer our earlier questions: (1)~how we should quantify catastrophic forgetting, and (2)~to what degree the choices we make when designing learning systems affect the amount of catastrophic forgetting. To address these questions, we apply the four metrics from the previous section to four different testbeds.
For brevity, we defer less relevant details of our experimental setup to Appendix~\ref{app:additional_experimental_setup} of our supplementary material. The first two testbeds we use build on the MNIST~\citep{lecun1998gradient} and Fashion MNIST~\citep{xiao2017fashion} datasets, respectively, to create two four-class image classification supervised learning tasks. For both tasks, we separate the overall task into two distinct subtasks, where the first subtask only includes examples from the first and second classes, and the second subtask only includes examples from the third and fourth classes. We have the learning system learn these subtasks in four phases, wherein only the first and third phases contain the first subtask, and only the second and fourth phases contain the second subtask. Each phase transitions to the next only when the learning system has achieved mastery in that phase. Here, that means the learning system must maintain a running accuracy of $90\%$ in that phase for five consecutive steps. All learning here---and in the other two testbeds---is fully online and incremental. The third and fourth testbeds draw examples from an agent operating under a fixed policy in a standard undiscounted episodic reinforcement learning domain. For the third testbed, we use the Mountain Car domain~\citep{moore1990efficient,sutton1998reinforcement} and, for the fourth, the Acrobot domain~\citep{sutton1995generalization,dejong1994swinging,spong1989robot}. The learning system's goal in both testbeds is to learn, at each timestep, the value of the current state. Thus these are both reinforcement learning value estimation testbeds~\citep[p.~74]{sutton2018reinforcement}. There are several significant differences between the four testbeds that are worth noting. Firstly, the MNIST and Fashion MNIST testbeds' data-streams consist of multiple phases, each containing only i.i.d. examples.
However, the Mountain Car and Acrobot testbeds have only one phase each, and that phase contains strongly temporally-correlated examples. One consequence of this difference is that only intra-task catastrophic forgetting metrics can be used in the Mountain Car and Acrobot testbeds, so the retention and relearning metrics of Section~\ref{sec:measuring_catastrophic_forgetting} can only be measured in the MNIST and Fashion MNIST testbeds. While it is theoretically possible to derive semantically similar metrics for the Mountain Car and Acrobot testbeds, this is non-trivial as---in addition to them consisting of only a single phase---it is somewhat unclear what mastery means in these contexts. Another difference between the MNIST testbed and the other two testbeds is that in the MNIST testbed---since the network is solving a four-class image classification problem in four phases with not all digits appearing in each phase---some weights connected to the output units of the network will be protected from being modified in some phases. This property of these kinds of experimental testbeds has been noted previously in \citet[Section~6.3.2]{farquhar2018towards}. In the Mountain Car and Acrobot testbeds, no such weight protection exists. For each of the four testbeds, we use shallow feedforward ANNs trained through backpropagation~\citep{rumelhart1986learning}. We experiment with four different optimizers for training the aforementioned ANNs on each of the four testbeds. These optimizers are (1)~SGD, (2)~SGD with Momentum~\citep{qian1999momentum,rumelhart1986learning}, (3)~RMSProp~\citep{geoffrey-rmsprop}, and (4)~Adam~\citep{kingma2014adam}. For Adam, in accordance with the recommendations of Adam's creators~\citep{kingma2014adam}, we set $\beta_1$, $\beta_2$, and $\epsilon$ to $0.9$, $0.999$, and $10^{-8}$, respectively.
As Adam can be roughly viewed as a combination of SGD with Momentum and RMSProp, we might expect that if one of the two is particularly susceptible to catastrophic forgetting, so too would be Adam. Thus, there is some understanding to be gained by aligning their hyperparameters with those used by Adam. So, to be consistent with Adam, in RMSProp we set the coefficient for the moving average to $0.999$ and $\epsilon$ to $10^{-8}$, and, in SGD with Momentum, we set the momentum parameter to $0.9$. In the MNIST and Fashion MNIST testbeds, we select one $\alpha$ for each of the above optimizers by trying each of $2^{-3}$, $2^{-4}$, ..., $2^{-18}$ and selecting the value that minimized the average number of steps needed to complete the four phases. As the Mountain Car and Acrobot testbeds are likely to be harder for the ANN to learn, we select one $\alpha$ for each of these testbeds by trying each of $2^{-3}$, $2^{-3.5}$, ..., $2^{-18}$ and selecting the value that minimized the average area under the curve of the post-episode mean squared value error. We provide a sensitivity analysis for our selection of the coefficient for the moving average in RMSProp, for the momentum parameter in SGD with Momentum, as well as for our selection of $\alpha$ with each of the four optimizers. We limit this sensitivity analysis to the retention and relearning metrics in the MNIST testbed and extend it to the other metrics and testbeds in Appendix~\ref{app:additional_hyperparameter_sensitivity_analysis} of our supplementary material. \section{Results} \label{sec:results} Since we are only interested in the phenomenon of catastrophic forgetting itself, we report here only the learning systems' performance in terms of the metrics described in Section~\ref{sec:measuring_catastrophic_forgetting}, and skip reporting their performance on the actual problems.
The curious reader can refer to Appendix~\ref{app:actual_performance_on_the_testbeds} of our supplementary material for that information. \begin{figure} \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.65]{mnist_retention_and_relearning.pdf} \end{minipage} \hfill \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.65]{fashion_mnist_relearning.pdf} \end{minipage} \caption{Retention and relearning in the (left) MNIST testbed, and relearning in the (right) Fashion MNIST testbed (higher is better in both). Here, retention is defined as the learning system's accuracy on the first task after training it on the first task to mastery, then training it on the second task to mastery, and relearning is defined as the length of the first phase as a function of the third.} \label{fig:mnist_and_fashion_mnist_interference} \end{figure} \begin{figure} \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.575]{mnist_additional_interference.pdf} \end{minipage} \hfill \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.575]{fashion_mnist_additional_interference.pdf} \end{minipage} \caption{Activation overlap and pairwise interference exhibited by the four optimizers as a function of phase and step in phase in the (left) MNIST testbed, and (right) Fashion MNIST testbed (lower is better in both). Lines are averages of all runs currently in that phase and are only plotted for steps where at least half of the runs for a given optimizer are still in that phase. Standard error is shown with shading but is very small.} \label{fig:mnist_and_fashion_mnist_additional_interference} \end{figure} The left side of Figure~\ref{fig:mnist_and_fashion_mnist_interference} shows the retention and relearning of the four optimizers in the MNIST testbed, and the right side shows the relearning of the four optimizers in the Fashion MNIST testbed.
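Read concretely, the two definitions in the caption of Figure~\ref{fig:mnist_and_fashion_mnist_interference} can be sketched as follows. This is a hedged illustration: the log format is hypothetical, and we read ``length of the first phase as a function of the third'' as a savings-style ratio, which need not be the exact quantity plotted.

```python
def retention(accuracy_log):
    # Accuracy on task 1, evaluated right after task 2 reaches mastery.
    # `accuracy_log` maps (phase_just_mastered, eval_task) -> accuracy;
    # this log format is hypothetical, for illustration only.
    return accuracy_log[(2, 1)]

def relearning(phase_lengths):
    # Savings-style score: steps needed to master the task in phase 1,
    # divided by steps needed to re-master the same task in phase 3.
    # Values above 1 mean the task was relearned faster than first learned.
    return phase_lengths[1] / phase_lengths[3]
```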
Recall that, here, retention is defined as the learning system's accuracy on the first task after training it on the first task to mastery, then training it on the second task to mastery, and relearning is defined as the length of the first phase as a function of the third. When comparing the retention displayed by the optimizers in the MNIST testbed, RMSProp vastly outperformed the other three. However, when comparing relearning instead, SGD is the clear leader. In the Fashion MNIST testbed, retention was less than $0.001$ with each of the four optimizers. Nevertheless, the same trend with regard to relearning in the MNIST testbed results can be observed in the Fashion MNIST testbed results. Also notable here, Adam displayed particularly poor performance in all cases. The left side of Figure~\ref{fig:mnist_and_fashion_mnist_additional_interference} shows the activation overlap and pairwise interference of the four optimizers in the MNIST testbed, and the right side shows these in the Fashion MNIST testbed. Note that, in Figure~\ref{fig:mnist_and_fashion_mnist_additional_interference}, lines stop when at least half of the runs for a given optimizer have moved to the next phase. Also, note that activation overlap should be expected to increase here as training progresses since the network's representation for samples starts as random noise. Interestingly, the results under the MNIST and Fashion MNIST testbeds here are similar. Consistent with the retention and relearning metrics, Adam exhibited the highest amount of activation overlap here. However, in contrast to the retention and relearning metrics, RMSProp exhibited the second highest. Only minimal amounts are displayed with both SGD and SGD with Momentum.
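Activation overlap, in the spirit of \citet{french1991using}, can be formalized in several ways; one plausible sketch for nonnegative (e.g., ReLU) hidden activations is the shared activation mass between the hidden representations of two inputs. This is an illustration of the idea, not necessarily the exact estimator used in our experiments:

```python
def activation_overlap(h1, h2):
    # Shared activation mass between two hidden-activation vectors,
    # normalized by the smaller total activation, so identical vectors
    # score 1.0 and vectors with disjoint support score 0.0.
    shared = sum(min(a, b) for a, b in zip(h1, h2))
    norm = min(sum(h1), sum(h2)) or 1.0  # guard against all-zero vectors
    return shared / norm
```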
When compared with activation overlap, the pairwise interference reported in Figure~\ref{fig:mnist_and_fashion_mnist_additional_interference} seems to agree much more here with the retention and relearning metrics: SGD displays less pairwise interference than RMSProp, which, in turn, displays much less than either Adam or SGD with Momentum. \begin{figure} \centering \includegraphics[scale=0.575]{mountain_car_and_acrobot_interference.pdf} \caption{Activation overlap and pairwise interference exhibited by the four optimizers as a function of episode in the Mountain Car and Acrobot testbeds (lower is better). Lines are averages of all runs, and standard error is shown with shading but is very small.} \label{fig:mountain_car_and_acrobot_interference} \end{figure} Figure~\ref{fig:mountain_car_and_acrobot_interference} shows the activation overlap and pairwise interference of each of the four optimizers in the Mountain Car and Acrobot testbeds at the end of each episode. In Mountain Car, Adam exhibited both the highest mean and final activation overlap, whereas SGD with Momentum exhibited the least. However, in Acrobot, SGD with Momentum exhibited both the highest mean and final activation overlap. When looking at the post-episode pairwise interference values shown in Figure~\ref{fig:mountain_car_and_acrobot_interference}, again, some disagreement is observed. While SGD with Momentum seemed to do well in both Mountain Car and Acrobot, vanilla SGD did well only in Acrobot and did the worst in Mountain Car. Notably, pairwise interference in Mountain Car is the only instance under any of the metrics or testbeds of Adam being among the better two optimizers. 
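Pairwise interference admits a simple first-order reading: the change in loss on one example caused by a gradient step on another. The sketch below (plain Python, a toy one-feature linear model; the paper's exact estimator may differ) makes that reading concrete:

```python
def loss(w, x, y):
    # squared error of a linear model w = [slope, bias]
    return (w[0] * x + w[1] - y) ** 2

def grad(w, x, y):
    # gradient of the squared error with respect to [slope, bias]
    err = 2 * (w[0] * x + w[1] - y)
    return [err * x, err]

def pairwise_interference(w, xi, yi, xj, yj, lr=0.01):
    # Change in loss on (xj, yj) after one SGD step on (xi, yi);
    # positive values indicate interference, negative values transfer.
    before = loss(w, xj, yj)
    w_step = [wk - lr * gk for wk, gk in zip(w, grad(w, xi, yi))]
    return loss(w_step, xj, yj) - before
```

For compatible examples the value is negative (learning one helps the other); for conflicting targets on the same input it is positive.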
\begin{figure} \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.625]{mnist_interference_momentum_and_rms.pdf} \caption{Retention and relearning in the MNIST testbed for SGD with Momentum under different values of momentum, and RMSProp under different coefficients for the moving average (higher is better). Other hyperparameters were set to be consistent with Figure~\ref{fig:mnist_and_fashion_mnist_interference}.} \label{fig:mnist_interference_momentum_and_rms} \end{minipage} \hfill \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.625]{mnist_interference_step-size.pdf} \caption{Retention and relearning in the MNIST testbed for each optimizer under different values of $\alpha$ (higher is better). Other hyperparameters were set to be consistent with Figure~\ref{fig:mnist_and_fashion_mnist_interference}. Lines are averages of all runs, and standard error is shown with shading. Lines are only drawn for values of $\alpha$ for which no run under the optimizer resulted in numerical instability.} \label{fig:mnist_interference_step-size} \end{minipage} \end{figure} Figure~\ref{fig:mnist_interference_momentum_and_rms} shows the retention and relearning in the MNIST testbed for SGD with Momentum as a function of momentum, and RMSProp as a function of the coefficient of the moving average. As would be expected given the results for SGD, lower values of momentum produce less forgetting. Conversely, lower coefficients produce worse retention in RMSProp, but seem to have less effect on relearning. Note that, under all the variations shown here, in no instance does SGD with Momentum or RMSProp outperform vanilla SGD with respect to relearning. Similar to Figure~\ref{fig:mnist_interference_momentum_and_rms}, Figure~\ref{fig:mnist_interference_step-size} shows the retention and relearning of the four optimizers as a function of $\alpha$.
While---unsurprisingly---$\alpha$ has a large effect on both metrics, the effect is smooth with similar values of $\alpha$ producing similar values for retention and relearning. \section{Discussion} \label{sec:discussion} The results provided in Section~\ref{sec:results} allow us to reach several conclusions. First and foremost, as we observed a number of differences between the different optimizers over a variety of metrics and in a variety of testbeds, we can safely conclude that the choice of which modern gradient-based optimization algorithm is used to train an ANN has a meaningful and large effect on catastrophic forgetting. As we explored the most prominent of these optimizers, it is safe to conclude that this effect is likely impacting a large amount of contemporary work in the area. \begin{figure} \centering \includegraphics[scale=0.9]{top_2.pdf} \caption{Number of times each optimizer ranked either first or second under a metric in a testbed. In most of our results, a natural grouping was present between a pair of optimizers that did well and a pair of optimizers that did poorly. This grouping aligns with our earlier conjecture that Adam may be particularly susceptible to catastrophic forgetting when either SGD with Momentum or RMSProp is particularly susceptible. Thus, this figure summarizes the performance of each of the optimizers over the metrics and testbeds looked at.} \label{fig:top_2} \end{figure} We earlier postulated that Adam being viewable as a combination of SGD with Momentum and RMSProp could mean that if either of the two mechanisms exacerbated catastrophic forgetting, then this would carry over to Adam. Thus, it makes sense to look at how often each of the four optimizers was either the best or second-best under a given metric and testbed.
This strategy for interpreting the above results is supported by the fact that---in many of our experiments---the four optimizers could be divided naturally into one pair that did well and one pair that did poorly. The results of this process are shown in Figure~\ref{fig:top_2}. Looking at Figure~\ref{fig:top_2}, it is clear that Adam was particularly vulnerable to catastrophic forgetting and that SGD outperformed the other optimizers overall. These results suggest that Adam should generally be avoided---and ideally replaced by SGD---when dealing with a problem where catastrophic forgetting is likely to occur. We provide the exact rankings of the algorithms under each metric and testbed in Appendix~\ref{app:detailed_rankings}. When looking at SGD with Momentum as a function of momentum and RMSProp as a function of the coefficient of the moving average, we saw evidence that these hyperparameters have a pronounced effect on the amount of catastrophic forgetting. Since the differences observed between vanilla SGD and SGD with Momentum can be attributed to the mechanism controlled by the momentum hyperparameter, and since the differences between vanilla SGD and RMSProp can be similarly attributed to the mechanism controlled by the moving average coefficient hyperparameter, this is in no way surprising. However, as with what we observed with $\alpha$, the relationship between the hyperparameters and the amount of catastrophic forgetting was generally smooth; similar values of the hyperparameter produced similar amounts of catastrophic forgetting. Furthermore, the optimizer seemed to have a more substantial effect here. For example, the best retention and relearning scores for SGD with Momentum we observed were still only roughly as good as the worst such scores for RMSProp.
Thus, while these hyperparameters have a clear effect on the amount of catastrophic forgetting, it seems unlikely that a large difference in catastrophic forgetting can be easily attributed to a small difference in these hyperparameters. One metric that we explored was activation overlap. While \citet{french1991using} argued that more activation overlap is the cause of catastrophic forgetting and so can serve as a viable metric for it~(p.~173), in the MNIST testbed, activation overlap seemed to be in opposition to the well-established retention and relearning metrics. These results suggested that, while Adam suffers a lot from catastrophic forgetting, so too does RMSProp. Together, this suggests that catastrophic forgetting cannot be a consequence of activation overlap alone. Further studies must be conducted to understand why the unique representation learned by RMSProp here leads to it performing well on the retention and relearning metrics despite having a greater representational overlap. On the consistency of the results, the variety of rankings we observed in Section~\ref{sec:results} validates previous concerns regarding the challenge of measuring catastrophic forgetting. Between testbeds, as well as between different metrics in a single testbed, vastly different rankings were produced. While each testbed and metric was meaningful and thoughtfully selected, little agreement appeared between them. Thus, we can conclude that, as we hypothesized, catastrophic forgetting is a subtle phenomenon that cannot be characterized by only limited metrics or limited problems. When looking at the different metrics, the disagreement between retention and relearning is perhaps the most concerning. Both are derived from principled, crucial metrics for forgetting in psychology. As such, when in a situation where using many metrics is not feasible, we recommend ensuring that at least retention and relearning-based metrics are present.
If these metrics are not available due to the nature of the testbed, we recommend using pairwise interference, as it tended to agree more closely with retention and relearning than activation overlap. That being said, more work should be conducted to validate these recommendations. \section{Conclusion} \label{sec:conclusion} In this work, we sought to improve our understanding of catastrophic forgetting in ANNs by revisiting the fundamental questions of (1)~how we can quantify catastrophic forgetting, and (2)~how the choices we make when designing learning systems affect the amount of catastrophic forgetting that occurs during training. To answer these questions, we explored four metrics for measuring catastrophic forgetting: retention, relearning, activation overlap, and pairwise interference. We applied these four metrics to four testbeds from the reinforcement learning and supervised learning literature and showed that (1)~catastrophic forgetting is not a phenomenon which can be effectively described by either a single metric or a single family of metrics, and (2)~the choice of which modern gradient-based optimizer is used to train an ANN has a serious effect on the amount of catastrophic forgetting. Our results suggest that users should be wary of the optimization algorithm they use with their ANN in problems susceptible to catastrophic forgetting---especially when using Adam but less so when using SGD. When in doubt, we recommend simply using SGD without any kind of momentum and would advise against using Adam. Our results also suggest that, when studying catastrophic forgetting, it is important to consider many different metrics. We recommend using at least a retention-based metric and a relearning-based metric. If the testbed prohibits using those metrics, we recommend using pairwise interference.
Regardless of the metric used, though, research into catastrophic forgetting---like much research in AI---must be cognisant that different testbeds are likely to favor different algorithms, and results on single testbeds are at high risk of not generalizing. \section{Future Work} \label{sec:future_work} While we used various testbeds and metrics to quantify catastrophic forgetting, we only applied them to answer whether one particular set of mechanisms affected catastrophic forgetting. Moreover, no attempt was made to use the testbeds to examine the effect of mechanisms specifically designed to mitigate catastrophic forgetting. The decision not to focus on such methods was made as \citet{kemker2018measuring} already showed that these mechanisms' effectiveness varies substantially as both the testbed changes and the metric used to quantify catastrophic forgetting changes. \citeauthor{kemker2018measuring}, however, only considered the retention metric in their work, so some value exists in looking at these methods again under the broader set of metrics we explore here. In this work, we only considered shallow ANNs. Contemporary deep learning frequently utilizes networks with many---sometimes hundreds---of hidden layers. While \citet{ghiassian2020improving} showed that this might not be the most impactful factor in catastrophic forgetting~(p.~444), how deeper networks affect the nature of catastrophic forgetting remains largely unexplored. Thus, further research into this is required. One final opportunity for future research lies in the fact that, while we explored several testbeds and multiple metrics for quantifying catastrophic forgetting, there are many other, more complicated testbeds, as well as several still-unexplored metrics which also quantify catastrophic forgetting (e.g., \citet{fedus2020catastrophic}).
Whether the results of this work extend to significantly more complicated testbeds remains an important open question, as is the question of whether or not these results carry over to the control case of the reinforcement learning problem. Notably, though, it remains an open problem how exactly forgetting should be measured in the control case. \section*{Acknowledgements} The authors would like to thank Patrick Pilarsky and Mark Ring for their comments on an earlier version of this work. The authors would also like to thank Compute Canada for generously providing the computational resources needed to carry out the experiments contained herein. This work was partially funded by the European Research Council Advanced Grant AlgoRNN to J{\"{u}}rgen Schmidhuber (ERC no: 742870). \nocite{brockman2016openai} \nocite{geramifard2015rlpy} \nocite{ghiassian2017first} \nocite{glorot2010understanding} \nocite{glorot2011deep} \nocite{he2015delving} \nocite{jarrett2009best} \nocite{nair2010rectified} \printbibliography \clearpage
\section{Introduction} Recently, Dirac materials have attracted intense research interest following the celebrated discovery of a two-dimensional (2D) hexagonal allotropic atomic carbon, graphene \cite{Novoselov666}, because of its peculiar band structure and its fascinating properties \cite{AlessandroCresti2008,LuisEF2014}, largely due to the massless Dirac fermion behavior of the charge carriers. Owing to the excellent mechanical, magnetic and thermal properties of graphite monolayers, they can be used for the development of superconducting devices for micro-electromechanical and nano-electromechanical systems, leading to the development of the next generation of nanoelectronics \cite{RevModPhys.81.109,RevModPhys.83.407}. As the use of graphene sheets increases, understanding their mechanical behaviour becomes necessary and important for the design and analysis of graphene nanostructures and nanosystems. This opened a new field of research known as straintronics, which aims to tailor the electronic and optical properties by applying mechanical deformations \cite{Naumis2017}. Following this direction, many theoretical works have studied the effect of mechanical strains on the electronic properties \cite{GGNaumis2009,Rodriguez2016} using a tight-binding approach \cite{Zhang2010,Zhang2011} and effective Hamiltonians for low energies in the vicinity of the Dirac points \cite{Vozmediano2010,Oliva2013,Oliva2015,Guinea2009}. These electronic degrees of freedom are coupled to the structural lattice deformations, which allows one to modify the electronic properties in interesting ways \cite{Vozmediano2010,Amorim2015,Bastos2014, Chen2016, Deji2017}. It has been shown that the coupling of the electrons to out-of-plane deformations can be described by the Dirac equation in curved space \cite{Amorim2015,Volovik2014,Volovik2015,RichardKerner2012}.
Such coupling is due to the appearance of pseudo-magnetic fields caused by the deformations \cite{Vozmediano2010,Oliva2013,Oliva2015,Naumis2017,Oliva2016,Bastos2018}. In recent years, experimental evidence has been found that, in certain regimes, fluctuations in graphene membranes follow a Cauchy distribution that results in large movements and sudden changes in curvature by means of the \textit{mirror buckling} effect \cite{Ackerman2014, Ackerman2016,Thibado2014}. This mirror buckling effect was first attributed to heating by the scanning microscope. Later on, it was found that mirror buckling is always present and that the height of the flexural vibrations follows a L\'evy distribution with parameters $\alpha=1.5, \gamma=0$ \cite{Ackerman2016}. An unusual distribution of electron velocities was also found \cite{Ackerman2016}, and a theory has been proposed to explain it \cite{Kai2019}. However, this theory is based on treating carbon atoms within the classical kinetic theory of gases and the Fokker-Planck-Kolmogorov master equation; this scheme does not explicitly consider the contribution of out-of-plane acoustic modes, nor the fact that the membrane executes Brownian motion with rare large height excursions indicative of L\'evy walks. Thus, a more exhaustive study of this point is needed. Likewise, Mao et al. \cite{Mao2020} demonstrated that graphene monolayers placed on an atomically flat substrate can be forced to undergo a buckling transition, resulting in a periodically modulated pseudo-magnetic field, which in turn creates a `post-graphene' material with flat electronic bands. This buckling of 2D crystals offers a strategy for exploring interaction phenomena characteristic of flat bands. In addition, there is a growing interest in folded deformations because the transport properties of strained folds in graphene exhibit a rich behavior, ranging from Coulomb blockade to Fabry-P\'erot oscillations for different fold orientations.
Folds exhibiting strong confinement behave as electronic waveguides in the direction parallel to the fold axis, providing a new way to realize 1D conducting channels in 2D graphene by strain engineering \cite{Sandler2018}. In general, mechanical displacements of graphene cause strong changes in the vacuum-induced shifts of the transition frequency of an emitter and, because of its low mass and high $Q$ factor, make graphene a particularly attractive candidate for a wide class of sensors \cite{Muschik2014}. Most previous work on this topic has focused on studying the electron mobility through transport equations \cite{Castro_2010,Pereira2019}. In the present work, we study the effects on charge carriers due to the presence of pseudo-electromagnetic fields which model the case of vertical fluctuations due to folded deformations or flexural modes. These modes have a large phonon population originating from the quadratic phonon dispersion and are known to dominate the electron scattering \cite{Castro_2010} and thermal transport \cite{Feng2018,Balandin2020}. In particular, we show that for a certain kind of flexural field, one can make close contact with previous works on Dirac fermions in random electromagnetic potentials, besides its close relationship with the phase transition between the plateaus in quantum Hall states and the quasi-excitations in \textit{d}-wave superconductors \cite{Ichinose2002}. Then we show that for more general fields, the Coulomb gauge condition used in this work cannot be fulfilled.
It is important to remark that the methods presented here can be extended to study other optoelectronic properties in 2D materials, such as phosphorene \cite{Mehboudi5888} or borophene \cite{Naumis2017}; these effects can also be studied using the present methodology, as plane deformations or flexural waves can be considered as random pseudo-electromagnetic waves. In addition, the present results can be extended to new Dirac materials \cite{doi:10.1093/nsr/nwu080, doi:10.1080/00018732.2014.927109}. The work is organized as follows. In Sec. \ref{Model Hamiltonian}, we introduce the effective Hamiltonian for low energies and obtain the time-independent Schrödinger equation to be solved. In Sec. \ref{sec:Folded deformations}, we analyze the electronic properties of graphene with folded deformations. Finally, we present the conclusions in Section \ref{sec: conclusions}. \section{HAMILTONIAN MODEL \label{Model Hamiltonian}} Out-of-plane acoustic modes are characteristic vibrations in graphene. These low-frequency modes are easy to excite and carry most of the vibrational energy \cite{Jiang2015,Bastos2018}. They consist of a dynamic elongation, bending and torsion of the local bonds. The stretching or tension of the bonds is by far the most important effect for the electrons, since it causes the greatest impact on the tunneling parameter \cite{RevModPhys.81.109}. Some lattice deformations can be expressed by a gauge field using a Hamiltonian at low energies \cite{Bastos2018, Vozmediano2010}.
The low-energy Hamiltonian for non-interacting electrons in deformed graphene is a Dirac-type Hamiltonian given by \cite{Guinea2009,Bastos2018,Sasaki2008}: \begin{equation} \label{eq:Dirac-type Hamiltonian} \boldsymbol{\mathcal{\hat{H}}}_{\eta}(\boldsymbol{r})=v_{F} \boldsymbol{\sigma}_{\eta} \cdot \left( \boldsymbol{\hat{p}}- \eta \boldsymbol{A}(\boldsymbol{r},t) \right)+V(\boldsymbol{r},t) \mathbb{I}_{2 \times 2}, \end{equation} where $\boldsymbol{r}=(x,y)$ is the position vector, the subscript $\eta=\pm 1$ labels the Dirac points $\boldsymbol{K},\boldsymbol{K}\,'$ respectively; $v_{F}$ is the Fermi velocity ($v_{F}/c \approx 1/300$, where $c$ is the vacuum speed of light); $\boldsymbol{\hat{p}}=(\hat{p}_{x}, \hat{p}_{y})$ is the momentum operator for the charge carriers, $\boldsymbol{\sigma}_{\eta}=(\eta \sigma_{x}, \sigma_{y})$ is the vector of Pauli matrices, and $\boldsymbol{A}$ and $V$ are the pseudo vector and scalar potentials respectively, given by \cite{Guinea2009,Naumis2017,Oliva2015,Bastos2018} \begin{eqnarray} V(\boldsymbol{r},t)=g(\varepsilon_{xx}+ \varepsilon_{yy}) \label{eq:general scalar potential}\\ \boldsymbol{A}(\boldsymbol{r},t)=(A_{x},A_{y}) = \frac{\hbar \beta}{2 a_{cc}}(\varepsilon_{xx}-\varepsilon_{yy},-2 \varepsilon_{xy}) \label{eq:general vector potential} \end{eqnarray} The parameter $g$ takes values from $0$ to $20$ eV \cite{Vozmediano2010,Suzuura2002}, $a_{cc}=1{.}42$ \r{A} is the interatomic distance of the undeformed graphene lattice, and the dimensionless coefficient $\beta \approx 3{.}0$ measures the effect of the deformation on the hopping parameter. The coefficient $g$ refers to flexural changes of the membrane, while the term ${\hbar \beta }/{2a_{cc}}$ refers to changes in bond length; it requires more energy to change the bond lengths than to rearrange the positions of the atoms on the membrane \cite{Suzuura2002}.
Therefore, depending on how ``strong'' the deformations are, we can vary the value of $g$ within a range of energies, while the factor ${\hbar \beta}/{2a_{cc}}$ remains approximately constant (within the validity range of our model). In general, we can consider a displacement out of the plane, $h=h(\boldsymbol{r},t)$, and a displacement in the plane, $\boldsymbol{u}=\boldsymbol{u}(\boldsymbol{r},t)$. The strain tensor $\varepsilon_{\mu \nu}$ is given by \begin{equation} \label{eq:general stress tensor} \varepsilon_{\mu \nu}= \frac{1}{2} \left( \partial_{\mu} h \partial_{\nu} h \right)+ \frac{1}{2} \left( \partial_{\mu} u_{\nu}+ \partial_{\nu} u_{\mu} \right), \,\,\, \mu, \nu =x,y. \end{equation} We shall consider the simplest case, in which the deformation is only perpendicular to the plane, i.e., $\boldsymbol{u}=0$, so from Eq. \eqref{eq:general stress tensor} \begin{equation} \label{eq:particular stress tensor} \begin{split} \varepsilon_{xx}&= \frac{1}{2} \left( \partial_{x} h \right)^{2}\\ \varepsilon_{yy}&= \frac{1}{2} \left( \partial_{y} h \right)^{2}\\ \varepsilon_{xy}&= \frac{1}{2} \left( \partial_{x} h\right) \left( \partial_{y} h \right) \end{split} \end{equation} We introduce new variables defined as \begin{equation} \label{eq:random variables} \begin{split} l_{1}(\boldsymbol{r},t) &\equiv \left( \partial_{x} h \right)^{2} -\left( \partial_{y} h \right)^{2}\\ l_{2}(\boldsymbol{r},t)& \equiv 2 \left( \partial_{x} h \right) \left( \partial_{y} h \right) \end{split} \end{equation} which give us information about how ``strong'' the vertical displacements are. On the other hand, by making use of Eqs.
\eqref{eq:general scalar potential}, \eqref{eq:general vector potential}, \eqref{eq:particular stress tensor} and \eqref{eq:random variables}, we can rewrite the scalar and pseudo-vector potentials, \begin{equation} \label{eq: particular potentials} \begin{split} V(\boldsymbol{r},t)&= \frac{g}{2} \sqrt{l_{1}^{2}(\boldsymbol{r},t)+l_{2}^{2}(\boldsymbol{r},t)}\\ \boldsymbol{A}(\boldsymbol{r},t)&= \frac{\hbar \beta}{4 a_{cc}} \left[ l_{1}(\boldsymbol{r},t) \hat{x}- l_{2}(\boldsymbol{r},t) \hat{y} \right]. \end{split} \end{equation} From Eqs. \eqref{eq:Dirac-type Hamiltonian} and \eqref{eq: particular potentials}, the Hamiltonian is \begin{equation} \label{eq:Hamiltonian 1} \boldsymbol{\mathcal{\hat{H}}}_{\eta}(\boldsymbol{r})=\boldsymbol{\mathcal{\hat{H}}}_{0}(\boldsymbol{r})+\boldsymbol{W}(\boldsymbol{r},t)+ \frac{g}{2} |l(\boldsymbol{r},t)| \bm{\mathbbm{1}} \end{equation} where the hat is used to denote differential operators, as follows: \begin{equation} \label{eq:Hamiltonian} \begin{split} \boldsymbol{\mathcal{\hat{H}}}_{0}(\boldsymbol{r})&= v_{F} \left( \begin{array}{lcc} 0 & (\eta \hat{p}_{x}-i \hat{p}_{y})\\ \eta \hat{p}_{x}+ i \hat{p}_{y} & 0 \end{array} \right)\\ \boldsymbol{W}(\boldsymbol{r},t)&= \left( \begin{array}{lcc} 0 & - \eta \bar{\beta} l(\boldsymbol{r},t)\\ - \eta \bar{\beta} l^{*}(\boldsymbol{r},t)& 0 \end{array} \right) \end{split} \end{equation} where $l(\boldsymbol{r},t) \equiv \eta l_{1}(\boldsymbol{r},t)+ i l_{2}(\boldsymbol{r},t) $ and we defined the parameter $\bar{\beta}$ as \begin{equation} \bar{\beta}= \frac{v_F\hbar \beta}{4a_{cc}} \approx 3.476471 \text{ eV} \end{equation} The dynamic equation for the spinor $\Psi_{\eta}(\boldsymbol{r},t)$ follows a time-dependent Schrödinger-type equation \begin{equation} \label{eq:time-dependent schrodinger equation} \begin{split} i \hbar \frac{\partial}{\partial t} \Psi_{\eta}(\boldsymbol{r},t)= \boldsymbol{\mathcal{\hat{H}}}_{\eta}(\boldsymbol{r}) \Psi_{\eta}(\boldsymbol{r},t) \end{split}
\end{equation} where \begin{equation} \label{eq:spinor vector form } \Psi_{\eta}(\boldsymbol{r},t)= \left( \begin{array}{cc} \psi_{A}^{\eta}(\boldsymbol{r},t) \\ \psi_{B}^{\eta}(\boldsymbol{r},t) \end{array}\right) \end{equation} It is straightforward to prove that the Schrödinger-type equation \eqref{eq:time-dependent schrodinger equation} can be rewritten as \begin{equation} \label{eq:time-dependent schrodinger equation for both spinor components} \begin{split} i \hbar \frac{\partial \psi_{A}^{\eta}}{\partial t}&= \frac{ g}{2}|l| \psi_{A}^{\eta}+ \left[v_{F}(\eta \hat{p}_{x}-i \hat{p}_{y})- \eta \bar{\beta} l \right] \psi_{B}^{\eta},\\ i \hbar \frac{\partial \psi_{B}^{\eta}}{\partial t}&= \frac{ g}{2}|l| \psi_{B}^{\eta}+ \left[v_{F}(\eta \hat{p}_{x}+i \hat{p}_{y})- \eta \bar{\beta} {l^{*}} \right] \psi_{A}^{\eta}.\\ \end{split} \end{equation} Notice how the magnitude of the disorder enters the Dirac equation through the parameters $\bar{\beta}$ and $g$: while $g$ plays the role of a random local chemical potential, the term proportional to $\bar{\beta}$ acts as a random local magnetic field. Eq. (\ref{eq:time-dependent schrodinger equation for both spinor components}) is a complex stochastic equation. Instead of solving the time-dependent problem, we consider that the deformation process is adiabatic on the time scale of the electron dynamics. In such a case, we can suppose that the disorder is quenched, so that $l_1$ and $l_2$ are time-independent and Eq. (\ref{eq:Hamiltonian}) becomes a time-independent Hamiltonian with a spatial random potential $l(\boldsymbol{r},t)=l(\boldsymbol{r})$. Then Eq. \eqref{eq:time-dependent schrodinger equation} reduces to the time-independent Schrödinger equation $ \boldsymbol{\mathcal{\hat{H}}}_{\eta}(\boldsymbol{r}) \Psi_{\eta}(\boldsymbol{r})= E \Psi_{\eta}(\boldsymbol{r})$, and we are interested in finding the distribution of the eigenvalues of $\boldsymbol{\mathcal{\hat{H}}}_{\eta}(\boldsymbol{r})$ and the wavefunctions.
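The potentials in Eq. (\ref{eq: particular potentials}) rest on the pointwise identity $l_{1}^{2}+l_{2}^{2}=\left[(\partial_{x}h)^{2}+(\partial_{y}h)^{2}\right]^{2}$, i.e. $\sqrt{l_{1}^{2}+l_{2}^{2}}=2(\varepsilon_{xx}+\varepsilon_{yy})$, which connects them back to Eqs. (\ref{eq:general scalar potential}) and (\ref{eq:general vector potential}). A quick finite-difference check in Python (the profile $h$ below is an illustrative choice, not taken from the text):

```python
import math

def h(x, y):
    # illustrative out-of-plane profile (not from the paper)
    return math.sin(x) * math.cos(2 * y)

def identity_residual(x, y, d=1e-6):
    # central finite differences for the partial derivatives of h
    hx = (h(x + d, y) - h(x - d, y)) / (2 * d)
    hy = (h(x, y + d) - h(x, y - d)) / (2 * d)
    l1 = hx**2 - hy**2   # the variable l_1
    l2 = 2 * hx * hy     # the variable l_2
    # sqrt(l1^2 + l2^2) should equal hx^2 + hy^2 = 2(eps_xx + eps_yy)
    return abs(math.hypot(l1, l2) - (hx**2 + hy**2))
```

The residual vanishes (up to floating-point error) at every sampled point, since $(l_1)^2+(l_2)^2$ is a perfect square of $h_x^2+h_y^2$.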
\section{Folded Deformations} \label{sec:Folded deformations} To understand the changes induced by random flexural deformations, we study folded deformations. Fields of this kind have been observed experimentally in deformed graphene \cite{Kim2011,YI2016,Hallam2015}, and some studies exist for particular deformations \cite{RCarrillo2016,Nancy2018,Sandler2018}. In a general folded deformation, the field does not vary in one direction. Therefore, it can be written as \begin{equation} h(y)=\sum_{k=-k_c}^{k_c} a_k \exp(iky) \end{equation} with $a_{-{k}}=a_{{k}}^{*}$ since $h({y})$ is real; the coefficients $a_{{k}}$ can be deterministic or random variables. Here $k_c$ is a cutoff parameter, and in what follows all sums are understood to be restricted by it. $k_c$ can be estimated from the Bose-Einstein distribution and the frequency dispersion of flexural modes. From Eqs. (\ref{eq:general vector potential}) and (\ref{eq:general stress tensor}), the vector potential has only one nonvanishing component, \begin{equation} A_x(y)=\frac{\hbar \beta}{4 a_{cc}} \left[ \sum_{k} a_k k\exp(iky) \right]^{2}. \end{equation} The advantage of this particular deformation is that $\bm A(\bm{r})$ is in the Coulomb gauge, as it satisfies $\nabla \cdot \bm A(\bm{r})=0$, and can therefore be obtained as the derivative of a scalar field, \begin{equation} A_i=\epsilon_{ij}\partial_j \Phi(\boldsymbol{r}) \label{eq:DefPotential} \end{equation} where $\epsilon_{ij}$ is the 2D Levi-Civita tensor with $i=x,y$ and $j=x,y$.
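The reality of $h(y)$ and of $A_x(y)$ under the constraint $a_{-k}=a_k^{*}$ can be verified numerically. In the sketch below the prefactor $\hbar\beta/(4a_{cc})$ is replaced by an arbitrary constant and the coefficients are random illustrations; the key point is that $\sum_k a_k k e^{iky}$ comes out purely imaginary, so its square is real.

```python
import numpy as np

rng = np.random.default_rng(0)
kc = 5
C = 0.25  # placeholder for hbar * beta / (4 a_cc)

# Random positive-k Fourier coefficients; reality constraint a_{-k} = conj(a_k).
coeff = {k: rng.normal() + 1j * rng.normal() for k in range(1, kc + 1)}
coeff.update({-k: np.conj(coeff[k]) for k in range(1, kc + 1)})
coeff[0] = 0.0  # drop the constant height offset

y = np.linspace(0.0, 2.0 * np.pi, 512)
h = sum(coeff[k] * np.exp(1j * k * y) for k in coeff)       # height profile
S = sum(coeff[k] * k * np.exp(1j * k * y) for k in coeff)   # sum entering A_x
Ax = C * S**2

# a_{-k} = conj(a_k) makes h real and S purely imaginary, hence A_x real:
print(np.max(np.abs(h.imag)), np.max(np.abs(Ax.imag)))  # both at round-off level
```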
For this particular case, we express $\Phi(y)$ in terms of the following Fourier decomposition, \begin{equation} \Phi(y)=\Phi_0(y)+ \sum_{k \ne 0}e^{iky} \tilde{\Phi}(k)\label{eq:Phiy} \end{equation} with \begin{equation} \Phi_0(y)=\frac{\hbar \beta}{4 a_{cc}}\left(\sum_k k^{2}|a_k|^{2}\right)y \end{equation} and \begin{equation} \tilde{\Phi}(k)=-i\frac{\hbar \beta}{4 a_{cc}k} \left[ \sum_{k'}a_{k'}a_{k'-k}^{*}k'\left(k'-k\right) \right]\label{eq:FourierPhiy} \end{equation} The associated pseudomagnetic field is $\bm{B}=\bm{\nabla}^{2} \Phi (\bm{r})$. It is worthwhile noticing that although $\Phi_0(y)$ does not produce a pseudomagnetic field, it produces an Aharonov-Bohm-like effect, as it leads to a constant $\bm A(\bm{r})$. An interesting consequence of having a field derived from a potential is that for any flexural field, whether deterministic or random, the zero mode can always be constructed. From the Schrödinger equation and Eq. (\ref{eq:DefPotential}), we obtain that for $E=0$ and $g=0$ the wave function is \begin{equation} \psi_{\pm}(\bm{r})=(const.) (1\pm \sigma_z) \left( \begin{array}{c} e^{\Phi(y)} \\ e^{-\Phi(y)} \end{array} \right) \end{equation} where $\sigma_z$ is the Pauli $z$ matrix. Similar functions were studied years ago in the context of the integer quantum Hall transition \cite{Andreas1994}. It can be proved that for a random magnetic field whose vector potential follows a Gaussian white-noise distribution with mean zero and variance $\Delta_{A}$, such that the coefficients in Eq. (\ref{eq:Phiy}) satisfy \begin{equation} \langle \tilde{\Phi}(k)\tilde{\Phi}(k')\rangle=(2\pi)^{2}\delta(k-k')\frac{\Delta_A}{k^{2}}, \end{equation} the resulting wave function is multifractal \cite{Andreas1994}.
In a sample of size $L \times L$, the moments of the participation ratio $P_q(L)$, which measure multifractal localization \cite{Barrios_Vargas_2012}, \begin{equation} P_q(L)=\langle |\psi({\bm{r})}|^{2q}\rangle \end{equation} are given by \cite{Andreas1994} \begin{equation} P_q(L) \approx \frac{1}{L^{2+\tau(q)}} \end{equation} with \begin{equation} \tau(q)=2(q-1)+\frac{\Delta_A}{\pi}q(1-q) \end{equation} where $q$ need not be an integer. For large samples, the multifractal spectrum is dominated by its maximal value, from which the typical participation ratio follows \cite{Andreas1994}, \begin{equation} P_{\text{typical}}(L)=e^{\langle \ln|\Psi|^{2} \rangle}\approx \frac{1}{L^{2+\Delta_A/\pi}} \end{equation} Around these states and near the Fermi energy, the density of states (DOS) is \cite{Andreas1994} \begin{equation} \rho(E)=E^{\frac{2-z}{z}} \end{equation} with $z=2+\Delta_A/\pi$. Such wavefunction multifractality and DOS imply that an unusual electron velocity distribution will appear even in the simplest case of a Gaussian random flexural field, without resorting to Lévy distributions of membrane jumps in graphene. In any case, Lévy jumps would induce even more unusual distributions. We conclude by considering the particular contribution of the Aharonov-Bohm term, which produces interesting effects in graphene for some geometries \cite{deJuan2011} but has not been studied for random fields. First we write the Fourier coefficients $a_k$ as the sum of an average plus a fluctuating part, $a_k= \langle a_k \rangle +\delta a_k$.
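A minimal sketch of the scaling relations just quoted; the value of $\Delta_A$ is an arbitrary illustration. Note the built-in consistency checks: $\tau(1)=0$ (so $P_1(L)\sim L^{-2}$, as required by wave-function normalization) and $\tau(0)=-2$ (so $P_0$ counts the area).

```python
import numpy as np

# Multifractal exponent tau(q) and participation-ratio scaling P_q(L) ~ L^{-(2+tau(q))}.
def tau(q, Delta_A):
    return 2.0 * (q - 1.0) + (Delta_A / np.pi) * q * (1.0 - q)

Delta_A = 0.8   # illustrative disorder variance
L = 100.0       # illustrative sample size

P2 = L ** (-(2.0 + tau(2.0, Delta_A)))       # second participation moment
P_typ = L ** (-(2.0 + Delta_A / np.pi))      # typical participation ratio

print(tau(1.0, Delta_A), tau(0.0, Delta_A))  # → 0.0 -2.0
```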
If $a_k$ is Gaussian distributed with zero mean we have \begin{equation} \Phi_0(y)=\frac{\hbar \beta}{4 a_{cc}}\sum_k (\delta a_{k})^{2}k^{2}y \approx \frac{\pi}{6} \frac{\hbar \beta}{ a_{cc}}\Delta_A k_c^{3}y \end{equation} and thus the phase difference between particles with the same start and end points, but travelling along two different paths, is \begin{equation} \Delta \mathcal{\phi} =\left(\frac{d\Phi_0(y)}{dy}\mathcal{A} \right)\frac{e}{\hbar}=\frac{\pi}{6}\frac{\beta}{ a_{cc}} (\Delta_A k_c^{3}\mathcal{A}) e \label{eq:phasediference} \end{equation} where $\mathcal{A}$ is the area bounded by the two paths. For thermally activated fields, $k_c$ is determined by the temperature ($T$) through the Bose-Einstein population. As $\Delta_A \sim k_BT$, Eq. (\ref{eq:phasediference}) implies a strongly temperature-dependent phase shift. This result is in agreement with recent first-principles calculations based on density functional theory and the Boltzmann equation \cite{Tue_2020}. \section{Conclusions \label{sec: conclusions}} We studied the effects of folded flexural deformations on the electronic properties of graphene; such deformations are equivalent to electromagnetic fields in the Coulomb gauge. First, we studied general folded deformations, giving an expression for the zero modes, which lie at the Fermi level for half-filled systems. For random, Gaussian-distributed folded deformations, we made contact with works on the quantum Hall effect under random magnetic fields, showing that the density of states has a power-law behavior. In particular, we showed that there is a remarkable Aharonov-Bohm pseudo-effect and wavefunction multifractality, implying that an unusual electron velocity distribution appears even at this early stage of deformation. \begin{acknowledgments} We are grateful to Alejandro Pérez Riascos for their support and feedback in carrying out this work.
We thank UNAM-DGAPA PAPIIT project IN102620 and CONACYT project 1564464. A.E.C. thanks CONACYT for providing a scholarship. \end{acknowledgments} \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction} \noindent Tensor network states (TNS) have become the tool of choice for many-body quantum systems on the lattice \cite{cirac2009-rev,evenbly2014-rev}. They provide a sparsely parameterized manifold of wavefunctions that is well suited for the approximation of the low-energy states of local Hamiltonians \cite{molnar2015,hastings2006}. The \emph{bond dimension} $D$ controls the size of the manifold and thus the expressiveness of the class of states. In $1+1$ spacetime dimensions, matrix product states (MPS) \cite{fannes1992} are the simplest instance of TNS. They underpin the earlier density matrix renormalization group \cite{white1992,white1993,schollwock2005} and provide one of the most efficient numerical methods to study quantum chains. Their extension to $2+1$ dimensional problems, the projected entangled pair states (PEPS) \cite{verstraete2004}, while more difficult to optimize, have been used successfully in a wide range of non-trivial instances, including the Hubbard model. Aside from their numerical use, tensor network states are powerful theoretical tools \cite{cirac2020matrix}, \eg for the classification of topological \cite{schuch2010,bultinck2017} and symmetry protected \cite{pollmann2010,chen2011,schuch2011,chen2013} phases of matter. In light of the progress they enabled in the discrete setting, it is tempting to use TNS to solve problems in the continuum, and attack quantum field theories (QFTs). This can be done in two ways: (i) by discretizing the QFT first, then using the standard lattice TNS toolbox, and finally extrapolating the results; or (ii) by taking the continuum limit of TNS first, so as to apply them to the continuum model with no extrapolation needed. The first option has already provided numerically accurate results in many quantum field theories \cite{milsted2013,banuls2013,banuls2016,banuls2017,kadoh2019,delcamp2020}.
However, working directly in the continuum offers crucial advantages: spatial symmetries are preserved and the perilous continuum extrapolations of the results are avoided. It is this neater second option, which consists in defining a manifold of states directly in the continuum, that I am interested in here. In 2010, Verstraete and Cirac introduced continuous matrix product states (CMPSs) \cite{verstraete2010}, which are a proper continuum limit of MPS particularly adapted to $1+1$ dimensional non-relativistic field theories (like the Lieb-Liniger model). The corresponding numerical toolbox was then developed and extended in the past ten years \cite{ganahl2017,ganahl2018,tuybens2020variational}. More recently, an extension to $d+1$ ($d\geq 2$) dimensions, pushing PEPS to the continuum, was put forward in~\cite{tilloy2019}, but without a general numerical toolbox yet. These continuum ansätze are adapted to non-relativistic theories only, and cannot be used for relativistic QFTs without an additional momentum cutoff $\Lambda$. There are many ways to understand this limitation, which I summarize in sec. \ref{sec:UVproblems}. For relativistic QFTs, this partially defeats the purpose of going to the continuum in the first place, since one still needs to extrapolate the final results as $\Lambda \rightarrow + \infty$. This is particularly disappointing given that, at least in $1+1$ dimensions, relativistic QFTs are straightforward to define ``all the way down'', without any cutoff~\cite{glimm1987,fernandez1992}. My main objective in this paper is to introduce a modification of CMPS, the relativistic CMPS (RCMPS), that is adapted to relativistic QFT in $1+1$ dimensions, and does not require any additional cutoff, UV or IR.
In a nutshell, it is a state $\ket{Q,R}$ parameterized by two $D\times D$ complex matrices $Q,R$ and defined as \begin{equation} \ket{Q,R} = \mathrm{tr}\left\{\mathcal{P} \exp\left[\int \mathrm{d} x \, Q\otimes \mathds{1} + R\otimes a^\dagger(x)\right]\right\}\ket{0}_a. \end{equation} I will explain this formula in more detail later. Let me just mention here that $a^\dagger(x)$ is the Fourier transform of the momentum creation operator $a_k^\dagger$ diagonalizing the free part of the relativistic Hamiltonian under consideration and $\ket{0}_a$ the associated Fock vacuum. Crucially, $a^\dagger(x)$ is \emph{not} the creation operator $\psi^\dagger(x)$, local in the canonically conjugated fields, and used for non-relativistic CMPS. Using $a$ instead of $\psi$ is the only difference between CMPS and RCMPS. We will see later why using $a$ is a good idea and the consequences this choice has. I can mention one advantage already. Because RCMPSs allow one to work directly with the true theory without cutoff or extrapolation, the results one obtains are genuinely variational, and thus provide rigorous energy upper bounds. Further, while results based on a discretization or on a finite UV cutoff have limited validity at high momenta, RCMPS are exact in the UV. The price to pay is that RCMPS are obtained from CMPS via a change of basis $\psi,\psi^\dagger \rightarrow a,a^\dagger$ that is not local. In the new basis, the Hamiltonian density of a relativistic QFT is no longer strictly local, and only exponentially decaying. In practice, the lack of strict locality of the Hamiltonian density introduces minor complications in the computations. However, and perhaps surprisingly, these complications do not change the asymptotic scaling of the cost compared to regular CMPS, which remains $\propto D^3$.
Further, the energy density seems to converge fast as a function of $D$, as one would expect \cite{huang2015computing}, which means RCMPS provide an efficient class of states for relativistic QFTs. In addition to their direct use for relativistic QFT in $1+1$ dimensions, demonstrated in this paper, RCMPS could be used as an auxiliary step to solve non-relativistic QFTs in $2+1$ dimensions. Indeed, continuous generalizations of PEPS currently lack a general-purpose optimization algorithm. The missing routine is a function to solve a \emph{relativistic} QFT in one dimension less, that is in $1+1$ space-time dimensions \cite{tilloy2019}. The relativistic nature of this auxiliary theory comes from the need for Euclidean invariance in the $2$ space dimensions of the original non-relativistic theory \cite{tilloy2019}. For that task, RCMPS would fit the bill better than CMPS and preserve exact Euclidean invariance at short distances. The present paper is structured as follows. I first discuss standard CMPS and their limitations for relativistic theories in sec.~\ref{sec:cmps} before introducing and discussing the RCMPS ansatz in sec.~\ref{sec:rcmps}. I then explain how to compute expectation values in sec.~\ref{sec:computations} and show how to optimize the state to find ground states in sec.~\ref{sec:optimization}. The numerical results for the ground energy and correlation functions, which result from this optimization, are presented for the $\phi^4$ model in sec.~\ref{sec:application}. I finally discuss some extensions in sec.~\ref{sec:extensions}, namely a slight variation of RCMPS with a more general basis, and explain how one could obtain more general observables by extending known CMPS techniques. I conclude with a general discussion of the method in sec.~\ref{sec:discussion}, comparing it with renormalized Hamiltonian truncation (RHT).
A quicker presentation of the results, emphasizing the importance and difficulty of the variational method, is presented in a companion letter \cite{rcmps_letter}. \section{Standard continuous matrix product states}\label{sec:cmps} \subsection{Definition and main properties} \noindent Continuous matrix product states (CMPS) were introduced by Verstraete and Cirac \cite{verstraete2010} in 2010. They extend the successful matrix product states ansatz from the lattice to the continuum. Concretely, for a bosonic quantum field theory on the closed line $[-L,L]$ with (non-relativistic) creation and annihilation operators $(\psi^\dagger,\psi)$, a CMPS is the state \begin{equation} \ket{Q,R}_\mathrm{nr}= \mathrm{tr}\left\{\mathcal{P} \exp\! \left[\int_{-L}^L \!\mathrm{d} x \, Q\otimes \mathds{1} + R\otimes \psi^\dagger(x)\right]\right\}\ket{0}_\psi\, , \end{equation} where by definition $[\psi(x),\psi^\dagger(y)] = \delta(x-y)$, $\mathcal{P}$ is the path ordering operator, and $\ket{0}_\psi$ is the Fock vacuum annihilated by all the $\psi(x)$. The state is parameterized by two $D\times D$ matrices $Q$ and $R$, where $D$ is the bond dimension, and which contain all the variational freedom. The trace is taken over the associated $D$ dimensional Hilbert space, and implements periodic boundary conditions (more general boundary conditions are possible by inserting a matrix $B$ in the trace). CMPS can be derived from a genuine continuum limit of MPS and thus share many of their properties \cite{haegeman2013}. In particular, all the correlation functions of local operators can be computed explicitly and efficiently. 
This is seen by introducing the generating functional $\mathcal{Z}_{j',j}$: \begin{equation}\label{eq:generatingfunction} \mathcal{Z}_{j',j}=\frac{\bra{Q,R} \exp\left(\int j'\, \psi^\dagger \right) \exp\left(\int j\, \psi \right) \ket{Q,R}_\mathrm{nr}}{\langle Q,R|Q,R\rangle_\text{nr}} \,, \end{equation} which can be used to compute all normal-ordered correlation $N$-point functions of local operators, \eg \begin{align} \label{eq:correlexample} \langle\psi^\dagger(x) \psi(y)\rangle:=& \frac{\langle Q,R | \psi^\dagger(x) \psi(y)| Q,R\rangle_\mathrm{nr}}{\langle Q,R | Q,R\rangle_\textrm{nr}} \nonumber \\ =& \frac{\delta}{\delta j'(x)}\frac{\delta}{\delta j(y)} \mathcal{Z}_{j',j} \bigg|_{j,j'=0}\,. \end{align} Using Wick's theorem, one can show \cite{haegeman2013} that this generating functional has an exact expression: \begin{equation} \mathcal{Z}_{j',j} \! = \! \mathrm{tr}\left\{\mathcal{P} \exp\! \bigg[\int_{-L}^L \!\!\!\mathrm{d} x \, \mathbb{T} + j(x) R \otimes \mathds{1} +j'(x) \mathds{1}\otimes \bar{R}\bigg]\right\}\, \end{equation} where $\mathbb{T} = Q\otimes \mathds{1} + \mathds{1}\otimes \bar{Q} + R\otimes \bar{R}$ is the transfer operator and the trace is now taken over the tensor product of two copies of the original $D$ dimensional Hilbert space. For example, for the simple two-point function of eq. \eqref{eq:correlexample}, using the exact expression gives, for $x\geq y$: \begin{equation}\label{eq:twopointexplicit} \langle\psi^\dagger(x) \psi(y) \rangle \!=\! \mathrm{tr}\Big[ \e^{(L-x)\mathbb{T}} (\mathds{1}\otimes \bar{R}) \e^{(x-y)\mathbb{T}} (R\otimes\mathds{1}) \e^{(y+L)\mathbb{T}}\Big]. \end{equation} The state $\ket{Q,R}_\mathrm{nr}$ is parameterized in a redundant way, in the sense that different $Q,R$ pairs give the same state. In particular, conjugating both matrices with an invertible matrix $U$ keeps the state invariant. 
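As a concrete numerical illustration of eq.~\eqref{eq:twopointexplicit}, the sketch below builds the transfer operator $\mathbb{T}$ from random $Q,R$ (all values are arbitrary illustrations, with no gauge fixing) and evaluates the density $\langle\psi^\dagger(x)\psi(x)\rangle$, normalized by $\langle Q,R|Q,R\rangle = \mathrm{tr}\,\e^{2L\mathbb{T}}$. The result should come out real and non-negative up to round-off, as befits a particle density.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
D, L = 3, 4.0                 # bond dimension and half system size (illustrative)
I = np.eye(D)
Q = 0.3 * (rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D)))
R = 0.3 * (rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D)))

# Transfer operator T = Q (x) 1 + 1 (x) conj(Q) + R (x) conj(R).
T = np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())

def two_point(x, y):
    """<psi^dag(x) psi(y)> for x >= y, finite-L path-ordered trace formula."""
    num = np.trace(expm((L - x) * T) @ np.kron(I, R.conj())
                   @ expm((x - y) * T) @ np.kron(R, I)
                   @ expm((y + L) * T))
    return num / np.trace(expm(2 * L * T))  # normalize by <Q,R|Q,R>

n = two_point(0.5, 0.5)       # particle density at x = 0.5
print(n.real, abs(n.imag))    # real part non-negative, imaginary part ~ 0
```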
This freedom can be used to choose a gauge in which only the anti-Hermitian part $K$ of $Q$ is free \cite{haegeman2010}: \begin{equation}\label{eq:left_gauge} Q := -iK -\frac{1}{2} R^\dagger R\, . \end{equation} In this gauge, the transfer operator $\mathbb{T}$ is of the Lindblad form, hence negative, and generically has a single largest eigenvalue $\lambda_0=0$ with associated left and right eigenvectors $\bra{\ell_0}$ and $\ket{r_0}$. Using this form provides convenient expressions in the thermodynamic limit, as $\e^{L\mathbb{T}} \rightarrow \ket{r_0}\bra{\ell_0}$ when $L\rightarrow + \infty$. For example, eq. \eqref{eq:twopointexplicit} simplifies to: \begin{equation}\label{eq:twopoint_thermo} \langle\psi^\dagger(x) \psi(y) \rangle \!=\! \bra{\ell_0} (\mathds{1}\otimes \bar{R}) \e^{(x-y)\mathbb{T}} (R\otimes\mathds{1}) \ket{r_0}\, . \end{equation} In what follows, and in particular for the upcoming relativistic extension, I will always work directly in the thermodynamic limit. A CMPS is typically used to find the ground state of a non-relativistic QFT. The archetypal example is the Lieb-Liniger model with Hamiltonian $H_\text{LL}$: \begin{align} \label{eq:LL_hamiltonian} H_\text{LL} &=\int_\mathbb{R} \partial_x \psi^\dagger\partial_x\psi + c\, \psi^\dagger\psi^\dagger\psi\psi - \mu \psi^\dagger\psi \\ &= \int_\mathbb{R} h_\text{LL} \; . \end{align} With the generating functional, it is possible to compute the expectation value of the $3$ terms of the Hamiltonian density $\langle h_\text{LL}\rangle_{Q,R} = f(Q,R)$, where $f$ is a simple function containing expectation values of products of $Q$ and $R$ as in eq. \eqref{eq:twopoint_thermo}. One then minimizes the energy density over the matrices $Q$ (or $K$) and $R$ to get close to the ground state: \begin{equation} \ket{\mathsf{ground}}\simeq \ket{Q,R} \;\;\text{where}\;\; Q,R = \argmin \, \langle h_\text{LL}\rangle_{Q,R} . 
\end{equation} The approximation can get \emph{arbitrarily} good as the bond dimension (and thus the size of the variational manifold) is increased because CMPS are dense in the Fock space \cite{haegeman2013}. Further, since the method is variational, we also know that the energy we find for finite $D$ always upper bounds the true ground energy \begin{equation} \varepsilon_0 := \langle h_\text{LL}\rangle_{\textsf{ground}} \leq \langle h_\text{LL}\rangle_{Q,R} \, . \end{equation} In practice, one can simply feed the expression of the energy density as a function of $Q$ and $R$ to a standard numerical minimizer, as was done originally in \cite{verstraete2010}. However, it is typically much more efficient, especially for large $D$, to use more elaborate tangent-space methods \cite{vanderstraeten2019tangentspace}. I explain how they work in sec. \ref{sec:optimization}. In a nutshell, the latter require computing the exact gradient with respect to the variational parameters as well as the natural metric on the tangent space of CMPS induced by the real part of the Hilbert space scalar product. The optimization is then done through gradient descent on the corresponding differentiable manifold. At this stage, all that matters for us is that it can be done and that it is efficient: one can easily optimize CMPSs of large bond dimensions~\cite{ganahl2017} (without barren plateaus or other pathologies). \subsection{UV problems in relativistic theories}\label{sec:UVproblems} \noindent CMPS are well suited for non-relativistic theories, which they approximate well ``all the way down'', without a short-distance cut-off. However, once we move to relativistic QFT, it becomes necessary to introduce a UV cut-off. This is best understood on the free boson, which was discussed in the CMPS context in~\cite{stojevic2015}.
The free boson Hamiltonian is \begin{equation} H_\text{fb}=\frac{1}{2}\int_{\mathbb{R}} \pi^2+ (\partial_x \phi)^2 + m^2 \phi^2 \,, \end{equation} where $m$ is the mass and $\pi,\phi$ are canonically conjugated, $[\phi(x),\pi(y)]=i\delta(x-y)$. To deal with such a Hamiltonian with a CMPS, it is tempting to express the field operator and its conjugate as a function of a non-relativistic creation-annihilation pair $\psi^\dagger,\psi$: \begin{align} \phi &= \sqrt{\frac{1}{2\Lambda}}(\psi+\psi^\dagger) \, ,\\ \pi&= \sqrt{\frac{\Lambda}{2}}\; (\psi-\psi^\dagger) \, , \end{align} which introduces a new arbitrary mass scale $\Lambda$. In this basis, the Hamiltonian $H_\text{fb}$ is still the integral of a local density, and thus it is straightforward to evaluate the energy density on a CMPS $\ket{Q,R}_\text{nr}$. This unfortunately leads to divergences, both mild and serious. The mild one is that $H_\text{fb}$ is a priori not normal ordered in the $\psi^\dagger,\psi$ operator basis. This is simply solved by considering $:H_\text{fb}{:}_\mathrm{\psi}$ instead. The serious divergence comes from the fact that $:H_\text{fb}{:}_\mathrm{\psi}$ not only contains the standard non-relativistic kinetic energy $\partial_x\psi^\dagger\partial_x\psi$ but also $\partial_x\psi\partial_x\psi + \text{h.c.}$. The latter is divergent when evaluated on a generic CMPS \cite{haegeman2010}. This second divergence can be cured by adding an adapted UV regulator to the Hamiltonian and considering \cite{stojevic2015} \begin{equation} H^\Lambda_\text{fb}=\frac{1}{2}\int_{\mathbb{R}} \pi^2+ (\partial_x \phi)^2 + m^2 \phi^2 + \underset{ \text{regulator}}{\underbrace{\frac{1}{\Lambda^2} \left(\partial_x \pi\right)^2}}\, , \end{equation} which kills the problematic terms. To reach a fixed precision, one then needs to increase the bond dimension $D$ as the cutoff $\Lambda$ is lifted.
Similarly, for fermionic theories, it was observed in \cite{haegeman2010-relativistic} that one needs to add a UV cutoff to the Hamiltonian, this time not to ensure the finiteness of the results but to make the optimization problem well behaved. In a nutshell, without a cut-off, the CMPS reduces the energy density by approximating larger and larger momentum modes as the optimization proceeds, completely missing the IR, which contributes to the energy only in a subleading way. Independently of CMPS, this sensitivity to high frequencies was considered by Feynman to be one of the main difficulties preventing the use of variational methods in relativistic QFT \cite{feynman1988}. Zooming out from the CMPS peculiarities, this situation is not surprising, since we are working in the wrong operator basis. Indeed, the operator $:H_\text{fb}{:}_\psi$ is not bounded from below with this specific ordering. Hence, even if we manage to have a finite energy density when computing CMPS expectation values, we are always infinitely far from the true ground state. Further, the free boson is a conformal field theory (CFT) at short distances (the massless free boson). It is thus no surprise that a CMPS, which is adapted to systems with a gap and exponentially decaying correlation functions, completely fails to capture the short distance behavior of relativistic models. Yet another way to see the problem is that the ground state of the free boson has an infinite density of non-relativistic particles, whereas a CMPS always gives a finite density. Note that these UV problems are not made easier if one adds a relevant interaction, as the latter becomes negligible at short distances. The relativistic continuous matrix product state ansatz will solve these short distance problems already present in the free theory, but will also allow one to deal with interactions without additional difficulty.
\section{Relativistic continuous matrix product states} \label{sec:rcmps} \subsection{Intuition} \noindent In the standard textbook approach to quantum field theory \cite{peskin1995}, the problem of infinite particle density is solved at the very beginning by a change of basis (and in fact even of Hilbert space). Typically, one expands the field operator in new creation-annihilation modes that diagonalize the Hamiltonian: \begin{align}\label{eq:modeexpansion} \phi(x) &= \frac{1}{2\pi} \int \mathrm{d} k \sqrt{\frac{1}{2 \, \omega_k}} \left(\e^{ikx} a_k + \e^{-ikx} a^\dagger_k \right) \\ \pi(x) &= \frac{1}{2\pi i} \int \mathrm{d} k \sqrt{\frac{\omega_k}{2}} \left(\e^{ikx} a_k - \e^{-ikx} a^\dagger_k \right) \, , \end{align} where $\omega_k=\sqrt{k^2 + m^2}$ and $[a_k,a_{k'}^\dagger]=2\pi\delta(k-k')$. In condensed matter physics, this would be called a Bogoliubov transform. In this new basis, the Hamiltonian is diagonal \begin{equation} H_\text{fb} = \frac{1}{2\pi}\int_\mathbb{R} \mathrm{d} k \, \omega_k \; \frac{a^\dagger_k a_k + a_k a^\dagger_k}{2} \, . \end{equation} Then, normal ordering $H_\text{fb} \rightarrow :H_\text{fb}{:}_a$ removes the infinite vacuum energy contribution and one finds that the ground state is simply the Fock vacuum annihilated by all the $a_k$'s: $\ket{\textsf{ground}} = \ket{0}_a$. Further, the Hamiltonian $:H_\text{fb}{:}_a$ is a well-defined operator on the Fock space generated by creating relativistic particles from the vacuum. The crucial observation is that in $1+1$ dimensions, this normal ordering procedure is typically\footnote{Normal ordering is sufficient to make polynomials in the field well defined. For more general potentials, like $\cos(b\phi)$, normal ordering is sufficient only for $b$ small enough.} sufficient to cure all the divergences that can appear, even when adding interactions.
In particular, the $\phi^4$ Hamiltonian $H$, which we will consider in more detail later, \begin{equation}\label{eq:phi4split} H = :H_\text{fb}{:}_a + g \int_\mathbb{R} :\phi^4{:}_a\, , \end{equation} is a perfectly legitimate and regular Hamiltonian on the free Fock space. From now on, I will omit the subscript on the normal ordering, which will always be done with respect to $a,a^\dagger$ unless otherwise stated. The Fock space generated by acting with $a^\dagger_k$ on $\ket{0}_a$ is much more adapted to relativistic theories than the Fock space generated by acting with $\psi^\dagger$ on $\ket{0}_\psi$, because the former solves the scale-invariant short-distance behavior of the theory exactly with its vacuum. However, the operators $a_k,a_k^\dagger$ are not adapted to CMPS because the latter encode the state in real space, and not momentum space. Further, the Hamiltonian $H$ is not translation invariant in momentum, which is the kind of invariance CMPS are most adapted to. A tempting workaround is to simply Fourier transform $a_k$ to get $a(x)$ \begin{align}\label{eq:fourier} a(x) &= \frac{1}{2\pi}\int \mathrm{d} k \, \e^{ikx} a_k \, , \end{align} which satisfies $[a(x),a^\dagger(y)] = \delta(x-y)$. Note the crucial fact that $\psi(x) \neq a(x)$, as the Fourier transform \eqref{eq:fourier} does not contain the factors $\omega_k$. Intuitively, $a^\dagger(x)\ket{0}_a$ corresponds to a (bare) relativistic particle localized at $x$. Working with this new operator basis is the key to making CMPS suitable for relativistic theories.
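For completeness, the commutation relation quoted after eq.~\eqref{eq:fourier} follows in one line from $[a_k,a_{k'}^\dagger]=2\pi\delta(k-k')$: \begin{align} [a(x),a^\dagger(y)] &= \frac{1}{(2\pi)^2}\int \mathrm{d} k \, \mathrm{d} k' \, \e^{ikx}\,\e^{-ik'y}\, [a_k,a^\dagger_{k'}] \nonumber\\ &= \frac{1}{2\pi}\int \mathrm{d} k \, \e^{ik(x-y)} = \delta(x-y)\, . \end{align}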
\subsection{Definition and basic properties} \noindent The previous discussion naturally leads us to introduce the relativistic CMPS (RCMPS) ansatz: \begin{equation} \ket{Q,R} = \mathrm{tr}\left\{\mathcal{P} \exp\left[\int \mathrm{d} x \, Q\otimes \mathds{1} + R\otimes a^\dagger(x)\right]\right\}\ket{0}_a \end{equation} where $a^\dagger(x)$ is the Fourier transform of the creation operator diagonalizing the free part of the relativistic Hamiltonian under consideration. Note that we choose to work directly in the thermodynamic limit, and thus the state $\ket{Q,R}$ has no cut-off, UV or IR (although the latter is trivial to reintroduce). The properties of the state are the same as before if one replaces $\psi$ by $a$, as these operators obey the same algebra. A large part of the theory of CMPS can thus be reused. In particular, all normal-ordered $N$-point correlation functions of $a,a^\dagger$ can be computed efficiently using the same generating functional. Additionally, RCMPS are dense in the appropriate QFT Fock space (the manifold is maximally expressive), and thus the precision can be arbitrarily refined by increasing $D$. \subsection{Consequences for the Hamiltonian density} \label{sec:consequences} \noindent The main difference with the non-relativistic case is that, once written as a function of $a(x)$, the Hamiltonian of a (massive) relativistic theory is not strictly local, but only exponentially decaying. Indeed, a relativistic Hamiltonian is local in the field $\phi(x)$ and its conjugate, which do not have a local expression as a function of $a(x)$.
More precisely, \begin{align}\label{eq:convolution} \phi(x) &= \frac{1}{2\pi} \int \mathrm{d} k \sqrt{\frac{1}{2 \, \omega_k}} \left(\e^{ikx} a_k + \e^{-ikx} a^\dagger_k \right) \nonumber \\ &= \frac{1}{2\pi} \int \frac{\mathrm{d} k \, \mathrm{d} y}{\sqrt{2 \, \omega_k}} \left(\e^{ik(x-y)} a(y) + \e^{-ik(x-y)} a^\dagger(y) \right) \nonumber\\ &=\int \mathrm{d} y\; J(x-y) \left[a(y) + a^\dagger(y) \right]\, \end{align} where $J(x)$ is a smooth kernel for $x\neq 0$ (and not a Dirac distribution), which I will make explicit later. Consequently, the Hamiltonian density of a relativistic theory with terms of order up to $n$ in the field $\phi$ can be written as a function of $a(x)$ with $n$ integrals, a bit informally: \begin{align} H &= \int \mathrm{d} x\, h(x) \\ &=\int \! \mathrm{d} x \int \!\mathrm{d} x_1 \ldots \mathrm{d} x_n K(x_1,...,x_n) a^{(\dagger)}(x_1) ... a^{(\dagger)}(x_n) \, \end{align} where $a^{(\dagger)}$ is a compact notation to convey the fact that both creation and annihilation occur in a sum, and $K$ is a kernel that decays exponentially as a function of the difference of its arguments. Is this non-locality a problem? Technically, it certainly induces complications in evaluating the energy density: we have exact expressions for $N$ point functions of $a,a^\dagger$ which we then have to integrate over some kernel (instead of taking them at equal point for a local density). This is, however, not insurmountable, and as I will show in the next section, one can still compute the Hamiltonian density at a cost $\propto D^3$. More crucially, and provided we can optimize them, do we expect the expressive power of RCMPS to be lower for such Hamiltonians? There is no reason to think so, and in fact CMPS have already been applied to Hamiltonians with exponentially decaying interactions in the non-relativistic context \cite{rincon2015}. Intuitively, we are merely introducing a new length scale $m^{-1}$ that replaces the lattice scale in lattice models.
This is more physical than introducing a much smaller and arbitrary cut-off scale $\Lambda^{-1} \ll m^{-1}$ as was done previously. Other choices of length-scales that could further improve precision are discussed in sec. \ref{sec:extensions}. \section{Efficiently computing RCMPS expectation values} \label{sec:computations} \subsection{Naive direct evaluation} \noindent The operators $a^\dagger(x),a(y)$ have the same commutation relations as the $\psi^\dagger(x),\psi(y)$ of standard non-relativistic CMPS and thus all the formulas follow. In particular, one can straightforwardly compute the exact expectation values of normal-ordered products of $a^\dagger$ and $a$ using the generating functional \eqref{eq:generatingfunction}. Obtaining simple functions of the field $\phi$, \eg expectation values of normal-ordered monomials $\langle :\! \phi^n\!: \rangle$, is less straightforward because the field $\phi$ is obtained from $a,a^\dagger$ with a convolution \eqref{eq:convolution}. Hence, to compute the expectation value of $\langle :\! \phi^n\!: \rangle$, one \emph{a priori} needs to compute $n$ integrals (in fact $n-1$ using translation invariance) of exact $a,a^\dagger$ correlation functions $\langle a^{(\dagger)}(x_1) a^{(\dagger)}(x_2) \cdots a^{(\dagger)}(x_n) \rangle$. This is feasible for low bond dimension and can be used as a sanity check, but is prohibitively expensive for a full optimization of the state\footnote{This was, unfortunately, the strategy I first followed.}. \subsection{Vertex operators} \noindent For efficient computations of functions of the field $\phi$, the starting point is to compute expectation values of vertex operators: \begin{equation}\label{eq:vertex_definition} \langle V_b\rangle:=\bra{Q,R} :\! \e^{b\phi(x)}\!: \ket{Q,R}\, , \end{equation} which we may evaluate at $x=0$ without loss of generality because of translation invariance. Such an expectation value seems even more difficult to compute than field monomials at first sight.
Indeed, using the naive approach above and expanding the exponential \eqref{eq:vertex_definition} requires computing $n$ integrals at each order $n$. However, it turns out one can directly compute the expectation value without expanding the exponential. To this end, we simply express the vertex operators as a function of $a(x)$ \begin{align} :\! \e^{b\phi(0)}\!: &= :\!\exp\left[ \frac{b}{2\pi }\int \mathrm{d} x \! \int \frac{\mathrm{d} k}{\sqrt{2\omega_k}} \e^{-ikx} a(x) + \e^{ikx} a^\dagger(x) \right]\!\! :\nonumber\\ &= \exp\left[b \int \mathrm{d} x \, J(x) a^\dagger (x) \right]\exp\left[b \int \mathrm{d} x \, J(x) a (x)\right] \label{eq:ordered_vertex} \end{align} where $J$ is a real function \begin{align} J(x):&= \frac{1}{2\pi} \int \frac{\mathrm{d} k}{\sqrt{2\omega_k}} \, \e^{-ikx} \label{eq:J_def}\\ &= \frac{K_{1/4}(m|x|)}{2^{9/4}\sqrt{\pi}\, \Gamma(5/4)\, |x/m|^{1/4}} \, \end{align} and $K_{\nu}(x)$ (not to be confused with the matrix $K$) is the modified Bessel function of the second kind. We see from \eqref{eq:ordered_vertex} the crucial fact that the expectation value of a vertex operator is simply the generating functional itself, with $b J$ as source, \ie $\langle V_b\rangle= \mathcal{Z}_{bJ,bJ}$, thus explicitly \begin{equation} \langle V_b\rangle = \mathrm{tr} \left[\mathcal{P}\exp\int_\mathbb{R} \mathrm{d} x\, \mathbb{T} + bJ(x) \left(R \otimes \mathds{1} + \mathds{1} \otimes \bar{R} \right)\right] \,. \end{equation} Inside the trace, this is simply the solution of an ordinary differential equation that can be solved numerically. To reduce the computational cost, one can use the standard trick of matrix product states which consists in mapping the tensor product Hilbert space $\mathbb{C}^D\otimes \mathbb{C}^D$ to the space of matrices $\rho$ acting on $\mathbb{C}^D$.
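As a quick numerical sanity check of this closed form for $J$, here is a short Python sketch (it assumes the Bessel argument is the dimensionless combination $m|x|$, as required for the two expressions to agree): it compares the closed form with a direct evaluation of the Fourier integral, and verifies the exponential decay at the scale $m^{-1}$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, kv

m = 1.0  # the mass sets the decay scale of the kernel

def J_closed(x):
    # Closed form, with the Bessel argument taken as the dimensionless m|x|
    ax = abs(x)
    return kv(0.25, m * ax) / (2**2.25 * np.sqrt(np.pi) * gamma(1.25) * (ax / m)**0.25)

def J_fourier(x):
    # Direct evaluation of (1/2pi) int dk e^{-ikx}/sqrt(2 w_k)
    #                    = (1/pi) int_0^inf cos(kx)/sqrt(2 w_k) dk
    # using QUADPACK's oscillatory-weight routine for the infinite interval
    f = lambda k: 1.0 / np.sqrt(2.0 * np.hypot(k, m))
    val, _ = quad(f, 0.0, np.inf, weight='cos', wvar=abs(x), limlst=200)
    return val / np.pi

for x in (0.3, 1.0, 2.5):
    assert abs(J_closed(x) - J_fourier(x)) < 1e-5

# J decays exponentially at the scale 1/m: the log-slope at large |x| is close to -m
assert abs(np.log(J_closed(5.0) / J_closed(6.0)) - m) < 0.2
```

The agreement also makes visible the two properties used below: $J$ is smooth away from the origin, with an integrable $|x|^{-1/2}$ singularity at $x=0$ and $\e^{-m|x|}$ tails.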
Introducing the super-operators \begin{align} \mathcal{L}\cdot \rho &= -i[K,\rho] + R\rho R^\dagger - \frac{1}{2} \left(R^\dagger R \rho + \rho R^\dagger R \right) \label{eq:def_Lindblad}\\ \mathcal{R}(x) \cdot \rho &= J(x) (R\rho + \rho R^\dagger) \end{align} we have \begin{equation} \langle V_b\rangle = \mathrm{tr}\left\{\mathcal{P}\exp\left[\int_\mathbb{R} \mathrm{d} x \, \mathcal{L} + b \mathcal{R}(x)\right]\cdot \rho_\text{ss}\right\} \end{equation} where $\rho_\text{ss}$ is the stationary state of $\mathcal{L}$ normalized with trace $1$. Rewriting the path-ordered exponential as the solution of an ODE we have \begin{equation}\label{eq:vertex_Lindblad} \langle V_b\rangle = \lim_{x\rightarrow +\infty} \mathrm{tr} \left[\rho_x\right] \end{equation} where $\lim_{x\rightarrow-\infty} \rho_x = \rho_\text{ss}$ and \begin{equation}\label{eq:vertex_ode} \frac{\mathrm{d}}{\mathrm{d} x} \rho_x = \mathcal{L} \cdot \rho_x + b\mathcal{R}(x) \cdot \rho_x \end{equation} The limit \eqref{eq:vertex_Lindblad} is well defined because $J(x)$, and thus $\mathcal{R}(x)$, decreases exponentially fast at infinity and is integrable at $0$. Because of this fast decay at infinity, one could in fact use any density matrix as initial state. Using a simple ODE solver, \eg a backward differential formula (BDF) solver, one can obtain the limit in \eqref{eq:vertex_Lindblad} to arbitrary precision with only a reasonable number of subdivisions. The total computational cost is proportional to the cost of applying the super-operator $\mathcal{L} + b\mathcal{R}(x)$ on a density matrix and thus scales $\propto D^3$ only. \subsection{Field monomials} \noindent To compute field monomials, one can differentiate vertex operators with respect to their exponent $b$ \begin{equation} \langle :\phi^n\!:\rangle= \frac{\partial^n}{\partial b^n} \langle V_b\rangle \bigg|_{b=0} \, . \end{equation} This allows one to obtain $\langle :\phi^n\!:\rangle$ by directly differentiating the ODE \eqref{eq:vertex_ode}.
Doing so yields \begin{equation} \langle :\phi^n\!:\rangle = \lim_{x\rightarrow + \infty} \mathrm{tr}\left[\rho^{(n)}_x\right] \end{equation} where $\rho^{(k)}:= \partial_b^k \rho^b|_{b=0}$ obey $n$ coupled matrix ODEs \begin{equation} \frac{\mathrm{d}}{\mathrm{d} x} \rho^{(k)}_x = \mathcal{L}\cdot \rho^{(k)}_x + \mathcal{R}(x) \cdot \rho^{(k-1)}_x \end{equation} with the convention that $\rho^{(0)}_x\equiv \rho_\text{ss}$ and, for $k>0$, $\rho^{(k)}_{-\infty}=0$. Solving the ODEs above numerically provides arbitrarily accurate approximations of $\langle :\phi^n\!:\rangle$ at a cost $\propto n\times D^3$. \subsection{Kinetic term} \noindent In addition to exponentials and polynomials of the field $\phi$, it is important to be able to compute the expectation value of the free part of the Hamiltonian. For convenience, we consider directly the Hamiltonian for the \emph{massive} free boson, but since the mass term $m^2\langle :\!\phi^2\!:\rangle$ is also computable, it could be subsequently subtracted to obtain the pure kinetic term. The free boson Hamiltonian can be expressed as a function of the momentum space creation and annihilation operators $a_k^\dagger,a_k$ and reads \begin{equation} :H_\text{fb}: = \frac{1}{2\pi}\int \mathrm{d} k \, \omega_k \; a^\dagger_k a_k\, . \end{equation} The corresponding Hamiltonian density $h_\text{fb}(x)$ is \begin{equation} :h_\mathrm{fb}(x): = \frac{1}{2\pi} \int \mathrm{d} k \, \mathrm{d} y \, \omega_k\; \e^{ik(y-x)}a^\dagger(y) a(x) \, . \end{equation} As before, we would like to write this density as a derivative of a vertex operator. If we try to mirror the reasoning of the previous subsection, we face the issue that the natural source $\tilde{J}$ that appears now is the Fourier transform of $\sqrt{\omega_k}$. This is not a function but only a distribution. To get a true function, we can divide and multiply by $\omega_k^2=m^2+k^2$, and interpret the $k^2$ term in the numerator as a $\partial_x\partial_y$ derivative.
This gives \begin{equation} :h_\mathrm{fb}(x): = \frac{1}{2\pi} \int \mathrm{d} y \, \frac{\mathrm{d} k}{\omega_k}\; \e^{ik(y-x)} (m^2 + \partial_y \partial_x ) a^\dagger(y) a(x) \, . \end{equation} We are back to an expression that depends on the source $J(x)$ that we introduced before \eqref{eq:J_def} \begin{equation} \begin{split} \langle :h_\mathrm{fb}: \rangle =& 2 m^2 \left \langle \int \mathrm{d} x J(x) a^\dagger(x) \!\! \int \mathrm{d} y J(y) a(y)\right\rangle \\ + & 2 \left\langle \int \mathrm{d} x J(x) \partial_x a^\dagger(x) \!\! \int \mathrm{d} y J(y) \partial_y a(y) \! \right\rangle \end{split} \end{equation} except this time derivatives of $a,a^\dagger$ appear as well. In terms of generating functionals, \begin{align} \langle :h_\mathrm{fb}: \rangle =& 2 \left[m^2 \frac{\partial}{\partial b_1} \frac{\partial}{\partial b_2} \mathcal{Z}_{b_1J,b_2 J} + \frac{\partial}{\partial b_1} \frac{\partial}{\partial b_2} \mathcal{Y}_{b_1J,b_2 J} \right]_{b_{1,2}=0} \end{align} where $\mathcal{Y}_{j',j}$ is the generating functional of normal-ordered correlation functions of $\partial_x a^\dagger(x),\partial_y a(y)$. This generating functional also has an exact expression, which is easily derived by differentiating correlation functions obtained from $\mathcal{Z}$ with respect to position \begin{equation} \mathcal{Y}_{j',j} \! = \! \mathrm{tr}\left\{\mathcal{P} \exp\! \bigg(\int \mathrm{d} x \, \mathbb{T} + j [Q,R] \otimes \mathds{1} +j'\mathds{1}\otimes [\bar{Q}, \bar{R}]\bigg)\right\}\, . \end{equation} As before, $\frac{\partial}{\partial b_1} \frac{\partial}{\partial b_2} \mathcal{Z}_{b_1J,b_2 J}$ and $\frac{\partial}{\partial b_1} \frac{\partial}{\partial b_2} \mathcal{Y}_{b_1J,b_2 J}$ can be obtained by solving simple ODEs.
To this end, we introduce $\rho_x:=\rho^{b_1 b_2}_x$ with initial condition $\rho_{-\infty}=\rho_\text{ss}$ and dynamics \begin{equation} \frac{\mathrm{d}}{\mathrm{d} x} \rho_x = \mathcal{L} \cdot \rho_x + b_1 J(x) R \rho_x +b_2 J(x) \rho_x R^\dagger \end{equation} and $\sigma_x:=\sigma^{b_1 b_2}_x$ with initial condition $\sigma_{-\infty} = \rho_\text{ss}$ and dynamics \begin{equation} \frac{\mathrm{d}}{\mathrm{d} x} \sigma_x = \mathcal{L} \cdot \sigma_x + b_1 J(x) [Q,R] \sigma_x +b_2 J(x) \sigma_x [R^\dagger,Q^\dagger] \, . \end{equation} We further introduce notations for the partial derivatives $\rho^{(1,0)}:= \partial_{b_1} \rho|_{b_{1,2}=0}$, $\rho^{(0,1)}:= \partial_{b_2} \rho|_{b_{1,2}=0}$ and $\rho^{(1,1)}:= \partial_{b_1}\partial_{b_2} \rho|_{b_{1,2}=0}$. They obey \begin{align} \frac{\mathrm{d}}{\mathrm{d} x} \rho^{(1,0)}_x &= \mathcal{L}\cdot \rho^{(1,0)}_x + J(x) R \rho_\text{ss} \\ \frac{\mathrm{d}}{\mathrm{d} x} \rho^{(0,1)}_x &= \mathcal{L}\cdot \rho^{(0,1)}_x + J(x) \rho_\text{ss} R^\dagger \\ \frac{\mathrm{d}}{\mathrm{d} x} \rho^{(1,1)}_x &= \mathcal{L}\cdot \rho^{(1,1)}_x + J(x) R \rho_x^{(0,1)} + J(x) \rho_x^{(1,0)} R^\dagger \end{align} The same system of ODEs can be obtained for $\sigma$, replacing $R$ by $[Q,R]$ and $R^\dagger$ by $[R^\dagger,Q^\dagger]$. Finally, the expectation value we are looking for is obtained from the trace of the solutions \begin{equation} \langle :h_\mathrm{fb}: \rangle = 2 \lim_{x\rightarrow +\infty} \mathrm{tr}\left[m^2\rho^{(1,1)}_x + \sigma_x^{(1,1)}\right]\, . \end{equation} Hence the expectation value of the massive free boson Hamiltonian density $\langle :h_\mathrm{fb}: \rangle$ can be computed by solving 2 systems of 3 coupled matrix ODEs, and thus can be obtained to arbitrary precision at a cost $\propto D^3$.
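To make the vertex-operator pipeline concrete, here is a minimal self-contained sketch for a small random left-canonical pair $(Q,R)$ at $D=2$ (hypothetical data, not an optimized state). It finds $\rho_\text{ss}$ as the null vector of the vectorized Lindbladian, solves the vertex ODE \eqref{eq:vertex_ode}, and cross-checks the derivative at $b=0$ against the expression $\langle\phi\rangle = \mathrm{tr}[(R+R^\dagger)\rho_\text{ss}]\int J$, which follows from differentiating $\langle V_b\rangle$ once.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp
from scipy.special import gamma, kv

rng = np.random.default_rng(1)
D, m, X = 2, 1.0, 18.0              # bond dimension, mass, spatial cutoff

# Left-canonical RCMPS data: Q = -iK - R†R/2 with K Hermitian (random here)
A = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
K = (A + A.conj().T) / 2
R = 0.3 * (rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)))
Q = -1j * K - 0.5 * R.conj().T @ R

def J(x):
    ax = max(abs(x), 1e-10)         # integrable |x|^{-1/2} singularity at 0
    return kv(0.25, m * ax) / (2**2.25 * np.sqrt(np.pi) * gamma(1.25) * (ax / m)**0.25)

def lindblad(rho):                  # L·ρ = Qρ + ρQ† + RρR†, same as eq. (def_Lindblad)
    return Q @ rho + rho @ Q.conj().T + R @ rho @ R.conj().T

# Stationary state ρ_ss: (near-)null eigenvector of the vectorized Lindbladian
I = np.eye(D)
Lmat = np.kron(I, Q) + np.kron(Q.conj(), I) + np.kron(R.conj(), R)
w, V = np.linalg.eig(Lmat)
rho_ss = V[:, np.argmin(abs(w))].reshape(D, D, order='F')
rho_ss = rho_ss / np.trace(rho_ss)
rho_ss = (rho_ss + rho_ss.conj().T) / 2

def vertex(b):
    """<V_b> from dρ/dx = L·ρ + b J(x)(Rρ + ρR†), ρ(-X) = ρ_ss."""
    def rhs(x, y):
        rho = y.reshape(D, D)
        drho = lindblad(rho) + b * J(x) * (R @ rho + rho @ R.conj().T)
        return drho.ravel()
    sol = solve_ivp(rhs, (-X, X), rho_ss.ravel().astype(complex),
                    method='BDF', rtol=1e-9, atol=1e-12, max_step=0.05)
    return np.trace(sol.y[:, -1].reshape(D, D)).real

# b = 0 gives the norm of the state
assert abs(vertex(0.0) - 1.0) < 1e-6

# d<V_b>/db at b=0 is <φ> = tr[(R+R†)ρ_ss] ∫ J
phi_pred = np.trace((R + R.conj().T) @ rho_ss).real * 2 * quad(J, 0, X, limit=200)[0]
b = 0.05
phi_fd = (vertex(b) - vertex(-b)) / (2 * b)
assert abs(phi_fd - phi_pred) < 5e-3
```

The kinetic term proceeds identically, with the extra $\sigma$ system sourced by the commutators $[Q,R]$ and $[R^\dagger,Q^\dagger]$ in place of $R$ and $R^\dagger$.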
\section{Optimization} \label{sec:optimization} \subsection{Failure of naive optimization} \noindent Using the results in the previous section, one obtains an expression for the energy density of the form $\langle h\rangle = f(Q,R)$ where $f$ is a function of the matrices $R$ and $Q$ (in practice $K$) that can be evaluated efficiently on a classical computer at a cost $\propto D^3$. One may thus simply input this function to a standard minimizer, which will typically use a gradient computed by finite differences, and hope for the best. This is what was done in the original paper on CMPS applied to the Lieb-Liniger model~\cite{verstraete2010}. For our model, this approach works reasonably well up to $D=4$, after which all the standard optimizers (\eg L-BFGS or conjugate gradient) get stuck in plateaus. To understand why this happens and go beyond such small values of $D$, we need to do better, and use a tangent space approach. \subsection{Tangent space approach} \noindent It is well known that the notion of steepest descent in optimization depends on a choice of metric. More precisely, if we want to minimize a function $f(x)$ where $x=\{x^\mu\}_{\mu=1}^N$ is a vector of parameters (think of the coefficients of $R,Q$), we can go down the steepest descent direction $u$ defined by \begin{equation} u=\argmin_{\|u\|= 1} \langle \nabla f, u\rangle_x \, . \end{equation} This scalar product on the tangent space $\langle u,v \rangle_x = g_{\mu\nu}(x) u^\mu v^\nu$ and the associated metric $g_{\mu\nu}$ are a priori arbitrary. The notion of ``steep'' depends on a metric, and what is steep for the ``right'' metric $g_{\mu\nu}$ may look like a plateau for the naive $\delta_{\mu\nu}$ metric if $g_{\mu\nu}$ is singular. If there are many parameters, the naive metric has no reason to be good. But what is the right metric? It is one where the distance between parameter values is proportional to how much they change the function one optimizes.
An excellent choice of metric is thus given by the Hessian $\text{Hess}_{\mu\nu}:=\partial_\mu\partial_\nu f$ of the function one is optimizing. Taking this metric gives the descent direction $u \propto - [\text{Hess}^{-1}]^{\mu\nu} \partial_\nu f $ where $ [\text{Hess}^{-1}]^{\mu\nu}\text{Hess}_{\nu\rho} =\delta^\mu_\rho$, which corresponds to the famous Newton method. This matrix is costly to estimate for RCMPS, because it requires computing $\propto D^4$ derivatives of the energy density, instead of $\propto D^2$ if we only compute the gradient. There is another natural option that comes from the fact that, in our case, the tangent space is also a Hilbert space \cite{hackl2020}. Indeed, let us write $\ket{x} = \ket{Q,R}$ for a state in the manifold of RCMPS. Then a natural tangent space metric is simply the Hilbert one \begin{equation} g_{\mu\nu}(x):= \text{Re}\left[(\partial_\mu \bra{x})( \partial_\nu \ket{x})\right]. \end{equation} It provides a notion of distance between parameter values associated with how much they change the quantum state (instead of the energy). Further, in our case, it can be computed straightforwardly (it is instantaneous in comparison with the computation of the gradient). A more physical justification for the use of this metric is that it corresponds to (approximate) imaginary time evolution \cite{hackl2020}, which converges exponentially fast for a gapped system and an expressive enough state manifold. Indeed, upon an infinitesimal imaginary time evolution $\mathrm{d} \tau$ an RCMPS $\ket{x}$ evolves into $\ket{x} - \mathrm{d} \tau H \ket{x}$. This latter state no longer belongs to the RCMPS manifold, and to get an approximate evolution we need to project down the evolution to the tangent space. More precisely, we want to find a direction $u\in \mathbb{R}^N$ in the tangent space such that $u^\mu \partial_\mu \ket{x} \simeq - H \ket{x}$.
It is obtained by minimizing $\|u^\mu \partial_\mu \ket{x} + H \ket{x} \|^2$, which gives \begin{equation} u^\mu= - [g^{-1}]^{\mu\nu}\partial_\nu (\bra{x}H\ket{x})\, \end{equation} provided $g_{\mu\nu}$ is invertible, which we will assume here. This projected imaginary time evolution corresponds to the (imaginary) time dependent variational principle (TDVP) in the tensor network context \cite{vanderstraeten2019tangentspace}. In my opinion, the advantage of seeing imaginary TDVP simply as gradient descent with a different metric is that it makes it obvious that the time step does not need to be infinitesimal, and can be chosen optimally with a line search. In practice, I observed for $D\leq 4$ that quasi-Newton methods, which try to estimate the best metric (the Hessian) from the gradient at different iterations, are still reasonably efficient. For larger $D$, I found that the metric $g_{\mu\nu}$ becomes very singular near the ground state, which may explain why quasi-Newton methods fail to estimate the Hessian (which is likely very singular as well) and get stuck in plateaus. However, as we will see in sec. \ref{sec:algorithm}, the tangent space approach I presented converges fast even for large values of $D$, as one would expect. For the optimization of RCMPS at moderately large $D$, it is thus better to have an exact ``good'' metric than an approximation of the best metric. \subsection{Computing the metric} \noindent The metric can be computed easily following~\cite{vanderstraeten2019tangentspace}. The first step is to define the tangent space vectors \begin{align}\label{eq:tangent} \ket{V,W}_{Q,R} =\!\! \int\! \mathrm{d} x \left[V_{\alpha\beta} \frac{\delta }{\delta Q_{\alpha\beta}(x)} + W_{\alpha\beta} \frac{\delta }{\delta R_{\alpha\beta}(x)}\right] \! \ket{Q,R} . \end{align} The complex matrices $V,W$ parameterize the direction in the tangent space, and $Q,R$ the point on the RCMPS manifold.
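The effect of the metric on steepest descent is easy to demonstrate on a toy quadratic, deliberately ill-conditioned (this example is unrelated to any specific RCMPS computation; it only illustrates the plateau mechanism described above):

```python
import numpy as np

# Ill-conditioned quadratic f(x) = x.T H x / 2: plain gradient descent crawls
# along the flat direction, while descent preconditioned by the "right" metric
# (here the Hessian itself, i.e. Newton's method) converges essentially at once.
H = np.diag([1.0, 1e-4])            # Hessian, condition number 1e4
grad = lambda x: H @ x

x_plain = np.array([1.0, 1.0])
x_metric = np.array([1.0, 1.0])
eta = 1.0                            # stable step size (largest eigenvalue is 1)
for _ in range(100):
    x_plain = x_plain - eta * grad(x_plain)
    x_metric = x_metric - eta * np.linalg.solve(H, grad(x_metric))

assert np.linalg.norm(x_metric) < 1e-10   # metric-aware descent: converged
assert np.linalg.norm(x_plain) > 0.9      # naive descent: still on the plateau
```

In the RCMPS setting, the Hilbert-space metric plays this preconditioning role, at a far lower cost than the Hessian.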
We work in the translation invariant case, where $Q,R$ are taken position independent at the end, but the position argument $x$ in \eqref{eq:tangent} is necessary to know how operators are ordered. A crucial fact is that the tangent space is overparameterized and, with the left canonical choice \eqref{eq:left_gauge} we took for $Q$, \ie $Q = -iK -\frac{1}{2} R^\dagger R $, one is free to fix $V=-R^\dagger W$ without losing a linearly independent direction \cite{vanderstraeten2019tangentspace}. We may thus drop $V$ as a parameter, as it is fixed by $W$. With this choice, one can show \cite{vanderstraeten2019tangentspace} that the overlap between tangent vectors takes the particularly simple form \begin{equation}\label{eq:overlap} \begin{split} \langle W_1 | W_2 \rangle_{Q,R} &= \bra{\ell_0} W_2\otimes \bar{W}_1 \ket{r_0} \\ &= \mathrm{tr} [W_2\rho_\text{ss} W_1^\dagger] \end{split} \end{equation} where $\rho_\text{ss}$ is the (normalized) stationary state of the Lindbladian $\mathcal{L}$ defined in \eqref{eq:def_Lindblad}. We thus have $2D^2$ directions on the tangent space corresponding to the real and imaginary parts of the coefficients $W_{\alpha\beta}$. The metric is simply the bilinear map taking two $W$ and outputting the real part of \eqref{eq:overlap}. Note that the metric depends on the state only and is cheap to compute. Indeed, it does not require the resolution of ODEs, which are the numerical bottleneck of RCMPS. \subsection{Computing the gradient with an adjoint method} \noindent To compute the gradient of the energy density in the $2D^2$ independent directions, one a priori needs $\propto 2D^2$ computations of expectation values, each with a cost $\propto D^3$. However, using standard adjoint methods (a.k.a. backpropagation), one can compute the complete gradient with the same asymptotic cost as computing the energy, hence $\propto D^3$. In principle, this could be done by using a complex ODE solver compatible with automatic differentiation.
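Before specializing to the RCMPS ODEs, the adjoint trick can be illustrated in isolation on a generic linear ODE with a single parameter, checked against a finite difference (the matrices below are arbitrary placeholders): for $y' = (A_0 + b A_1) y$ and a terminal functional $F(b) = c\cdot y(T)$, one forward solve and one backward "costate" solve give the exact derivative $\mathrm{d}F/\mathrm{d}b = \int_0^T a(t)\cdot A_1 y(t)\, \mathrm{d}t$, with $a' = -(A_0+bA_1)^T a$ and $a(T)=c$.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
n, T, b = 3, 1.0, 0.3
A0 = rng.standard_normal((n, n)) - 1.5 * np.eye(n)   # shifted to keep dynamics tame
A1 = rng.standard_normal((n, n))
y0, c = rng.standard_normal(n), rng.standard_normal(n)
A = A0 + b * A1

# one forward solve for y(t), one backward solve for the costate a(t)
fwd = solve_ivp(lambda t, y: A @ y, (0, T), y0, dense_output=True,
                rtol=1e-10, atol=1e-12)
bwd = solve_ivp(lambda t, a: -A.T @ a, (T, 0), c, dense_output=True,
                rtol=1e-10, atol=1e-12)

# dF/db = ∫_0^T a(t)·A1 y(t) dt, here by a simple trapezoid rule
ts = np.linspace(0, T, 4001)
integrand = np.array([bwd.sol(t) @ A1 @ fwd.sol(t) for t in ts])
grad_adjoint = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts)))

# cross-check against a central finite difference of F(b) = c·y(T)
def F(bb):
    sol = solve_ivp(lambda t, y: (A0 + bb * A1) @ y, (0, T), y0,
                    rtol=1e-12, atol=1e-14)
    return c @ sol.y[:, -1]

h = 1e-5
grad_fd = (F(b + h) - F(b - h)) / (2 * h)
assert abs(grad_adjoint - grad_fd) < 2e-4 * max(1.0, abs(grad_fd))
```

The point of the construction is the cost: the full gradient comes from two solves plus a quadrature, instead of one solve per parameter.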
In practice, because the ODEs involved have a special form, it is easy, efficient, and illuminating to implement the adjoint method directly. I illustrate the idea on the example of the computation of the gradient of a vertex operator, but it applies equally easily to the gradients of field monomials and kinetic term. The expectation value of a vertex operator on a RCMPS is \begin{equation}\label{eq:vertex_2} \langle V_b\rangle_{Q,R} = \mathrm{tr}\left\{\mathcal{P}\exp\left[\int_\mathbb{R} \mathrm{d} x \, \mathcal{L} + b \mathcal{R}(x)\right]\cdot \rho_\text{ss}\right\}. \end{equation} Let us consider the gradient in the $W$ direction $\nabla_{W} \langle V_b\rangle_{Q,R}$ which is defined implicitly via \begin{equation} \langle V_b\rangle_{Q+\varepsilon V,R + \varepsilon W} = \langle V_b\rangle_{Q,R} + \varepsilon \nabla_{W} \langle V_b\rangle_{Q,R} + O(\varepsilon^2). \end{equation} Differentiating directly \eqref{eq:vertex_2} yields \begin{equation}\label{eq:gradient_raw} \nabla_{W} \langle V_b\rangle=\!\!\int \!\!\mathrm{d} y\, \mathrm{tr}\left\{ \mathcal{P}\e^{\int_{y}^{+\infty} \!\!\mathcal{L}^b}\!\!\cdot\!\nabla_{W}\mathcal{L}^b(y)\!\cdot\mathcal{P}\e^{\int_{-\infty}^y\!\! \mathcal{L}^b}\!\!\!\cdot \rho_\text{ss}\right\}. \end{equation} with the notation $\mathcal{L}^b(x) = \mathcal{L} + b\mathcal{R}(x)$ and \begin{equation}\label{eq:derivative_Lindblad} \begin{split} \nabla_{W}\mathcal{L}^b(y) \cdot \rho = &V\rho + \rho V^\dagger + \frac{1}{2} \left(R\rho W^\dagger+W\rho R^\dagger\right) \\ &+ b J(y) \left(W\rho + \rho W^\dagger\right)\, , \end{split} \end{equation} recalling again that $V=-R^\dagger W$. This gradient of $\mathcal{L}^b(y)$ appears in \eqref{eq:gradient_raw} between two evolution super-operators. 
Because a trace is taken at the end, we can replace the last part of the evolution from $y$ to $+\infty$ by its adjoint acting on the identity \begin{equation}\label{eq:gradient_adjoint} \begin{split} \nabla_{W} \langle V_b\rangle=\int \!\!\mathrm{d} y\, \mathrm{tr}\bigg\{ &\left[\mathcal{P}\e^{\int_{y}^{+\infty} \!\!\mathcal{L}^{b*}}\!\!\!\!\cdot \mathds{1}\right] \\ \times &\nabla_{W}\mathcal{L}^b(y)\!\cdot\left[\mathcal{P}\e^{\int_{-\infty}^y\!\! \mathcal{L}^b}\!\!\!\cdot \rho_\text{ss}\right]\bigg\}. \end{split} \end{equation} where the adjoint $\mathcal{L}^{b*}(y)$ of $\mathcal{L}^b(y)$ is defined as \begin{equation} \mathcal{L}^{b*}(y)\cdot \mathcal{O} = Q^\dagger\mathcal{O} + \mathcal{O} Q+ \frac{1}{2} R^\dagger \mathcal{O} R + b J(y) \left[R^\dagger \mathcal{O} + \mathcal{O}R \right]. \end{equation} Writing as before $\rho_x=\mathcal{P}\e^{\int_{-\infty}^x\!\! \mathcal{L}^b}\!\!\!\cdot \rho_\text{ss}$ the solution of the forward problem and $\mathcal{O}_x=\mathcal{P}\e^{\int_{x}^{+\infty} \!\!\mathcal{L}^{b*}}\!\!\!\!\cdot \mathds{1}$ the solution of the backward problem we get \begin{equation} \nabla_{W} \langle V_b\rangle =\int \mathrm{d} y \, \mathrm{tr} \left[\mathcal{O}_y \nabla_{W}\mathcal{L}^b(y)\cdot \rho_y\right] \end{equation} This can be further simplified exploiting the expression of $\nabla_{W}\mathcal{L}^b(y)$ \eqref{eq:derivative_Lindblad}, and one obtains all the components of the gradient from 2 matrices $M_W$ and $M_{W^\dagger}$ \begin{equation}\label{eq:gradient_compact} \nabla_{W} \langle V_b\rangle =\mathrm{tr}\left[M_W W + M_{W^\dagger} W^\dagger\right] \end{equation} where \begin{equation}\label{eq:matrices_integral} \begin{split} M_W&= \!\!\int \mathrm{d} y\, -\!\rho_y \mathcal{O}_y R^\dagger + \frac{1}{2} \rho_y R^\dagger \mathcal{O}_y + b J(y) \rho_y \mathcal{O}_y \\ M_{W^\dagger}&= \!\!\int \mathrm{d} y\, -\! R\mathcal{O}_y\rho_y + \frac{1}{2} \mathcal{O}_y R \rho_y + b J(y) \mathcal{O}_y \rho_y \,. 
\end{split} \end{equation} In practice, one computes $\rho_y$ and $\mathcal{O}_y$ by solving the corresponding ODEs. The matrices $M_W$ and $M_{W^\dagger}$ are then obtained by evaluating the integrals in \eqref{eq:matrices_integral} with an efficient numerical method like the tanh-sinh quadrature~\cite{mori2001}. This gives an algorithm with a cost $\propto D^3$ to compute the full gradient of the expectation value of a vertex operator. The gradients of the kinetic term and of other potentials can be computed efficiently with the same method. \subsection{Algorithm}\label{sec:algorithm} \noindent We now have all the pieces to understand the optimization algorithm. The first step is to start from an initial guess. One can certainly do much smarter, but I started from uniformly random $R$ and $K$ matrices. This gives a very high starting energy density, but it fortunately decreases fast enough that initialization is a secondary concern for the bond dimensions I probed. The second step is to compute the descent direction, obtained by acting with the inverse metric on the gradient. The gradient is computed with the backpropagation method described before, and is the most costly step, while the computation of the inverse metric is an immediate algebraic operation using \eqref{eq:overlap}. The third step is to move $R$ and $Q$ in the descent direction. The step size need not be small (as it would be in plain imaginary time evolution) and is chosen to approximately yield the maximal energy decrease at each step. In practice, I used a backtracking line search with the Armijo-Goldstein condition to find the right amount to move at each step. Note that this is very similar to what was done by Ganahl \textit{et al.} in \cite{ganahl2017} for standard CMPS and the Lieb-Liniger model. Like them, I observed that this approach speeds up the optimization by roughly 2 orders of magnitude compared to imaginary time evolution with an optimal but fixed time step.
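For reference, here is a minimal sketch of such a backtracking line search with the Armijo-Goldstein sufficient-decrease condition, on a toy objective standing in for the energy density (the function and parameters are illustrative, not the actual RCMPS functional):

```python
import numpy as np

def f(x):    return np.sum(x**4) + np.sum(x**2)   # toy stand-in for <h>
def grad(x): return 4 * x**3 + 2 * x

def armijo_step(x, d, t0=1.0, beta=0.5, c1=1e-4):
    """Backtrack from t0 until f(x + t d) <= f(x) + c1 t <grad f, d>."""
    t, fx, slope = t0, f(x), grad(x) @ d
    while f(x + t * d) > fx + c1 * t * slope:
        t *= beta
    return t

x = np.array([2.0, -1.5, 0.7])
for _ in range(200):
    d = -grad(x)                     # here: plain steepest descent direction
    x = x + armijo_step(x, d) * d

assert f(x) < 1e-10                  # the minimum of f is 0, at x = 0
```

In the actual algorithm, `d` is the metric-preconditioned direction, and the backtracking guarantees monotone energy decrease while allowing large steps far from convergence.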
Typically, results are converged after $\sim 10^2-10^4$ iterations depending on the coupling (convergence is slower near criticality), bond dimension (convergence is slower for large $D$), and random initial seed. \section{Application to the self-interacting scalar}\label{sec:application} \subsection{The model} \noindent To assess the soundness of RCMPSs, we consider the simplest non-trivial QFT in $1+1$ dimensions, the self-interacting scalar field with Hamiltonian \begin{equation}\label{eq:phi4Hamiltonian} H= \int_{\mathbb{R}} \mathrm{d} x :\!\left[ \frac{\pi^2}{2}+ \frac{(\partial_x \phi)^2}{2} + \frac{m^2}{2} \phi^2 + g\, \phi^4 \,\right]\!: . \end{equation} The normal ordering is again done with respect to the creation and annihilation operators $a_k^\dagger,a_k$ which diagonalize the quadratic part of the Hamiltonian and are defined in \eqref{eq:modeexpansion}. This model is a good case study because it is simple to define, even rigorously, as $H$ is a genuine renormalized Hamiltonian (self-adjoint, finite energy density). Yet, the model is not integrable, and carrying out accurate computations beyond the perturbative regime is non-trivial. The self-interacting scalar has been studied with a wide variety of methods: renormalized Hamiltonian truncation (without space-time discretization but finite size) \cite{rychkov2015}, infinite matrix product states (with space discretization but no finite size cutoff) \cite{milsted2013,vanhecke2019}, Monte-Carlo (with space-time discretization and finite size) \cite{schaich2009,bosetti2015,bronzin2019}, tensor network renormalization (with space-time discretization and finite size) \cite{kadoh2019,delcamp2020}, and, of course, (resummed) perturbative expansions (without cutoff, but perturbative)~\cite{serone2018}. Out of these works, let us mention two that are particularly relevant for our study as they are carried out in the Hamiltonian formalism.
The study of Milsted \textit{et al.} \cite{milsted2013} is the closest, in terms of method used, to what we shall do: the authors discretize the model in space, and find the ground state with translation invariant matrix product states, thus without IR cutoff, reaching unbeatable precision at the time. The drawback is the need to extrapolate the continuum limit, \ie to lift the UV cutoff. On the other hand, the remarkably pedagogical Hamiltonian truncation study of Rychkov and Vitale \cite{rychkov2015} is carried out directly in the continuum with the Hamiltonian \eqref{eq:phi4Hamiltonian}. In a nutshell, the authors introduce an IR and an energy cutoff that make the Hilbert space finite, and exactly diagonalize the resulting finite Hamiltonian matrix. In addition, they introduce a smart renormalization procedure which makes the convergence of the results faster as the cutoffs are lifted. The drawback is that the size of the Hilbert space (and thus the cost) is exponential in both cutoffs (I discuss it further in \ref{sec:discussion}). Further, while the renormalization procedure and careful extrapolations drastically improve precision, they destroy the variational nature of the results. My objective is not so much to improve upon these studies in terms of raw numerical precision as in terms of conceptual simplicity, robustness, and scaling: with RCMPS one can in principle find the ground state of \eqref{eq:phi4Hamiltonian} directly in the continuum limit, without UV or IR cutoff, and in a variational way (that is, with rigorous energy upper bounds). \subsection{Ground energy density} \noindent The ground state energy density is finite and negative for the model we consider. This is clear given that the Fock vacuum for the free part $\ket{0}_a$, which is no longer the ground state for $g\neq 0$, already gives a zero energy density because the Hamiltonian is normal ordered.
Optimizing the RCMPS for $m=1$ indeed shows that the energy density is negative and decreases as the coupling is increased (see Fig. \ref{fig:energy_density}), at first quadratically in the coupling constant $g$ (as expected from perturbation theory \cite{rychkov2015}). \begin{figure} \centering \includegraphics[width=.99\columnwidth]{energy_density.pdf} \caption{Ground state energy density of the $\phi ^4$ model as a function of the coupling $g$ for $m=1$. The RCMPS results are compared with the renormalized Hamiltonian truncation calculations from \cite{rychkov2015}, obtained with an IR cutoff $L=10$. Points are sampled more densely around the critical coupling $g_c\simeq 2.77$.} \label{fig:energy_density} \end{figure} Importantly, the convergence in bond dimension is fast for all values of the coupling, even deep in the non-perturbative regime (which kicks in roughly at $g\geq 0.1$). As we can see in Fig. \ref{fig:energy_density}, the energy as a function of $g$ is already qualitatively correct for $D=5$, and the points at larger bond dimensions ($D=10,15,20$) are essentially indistinguishable on this plot. At large coupling ($g \geq 3$), the results are substantially below those obtained with renormalized Hamiltonian truncation (RHT) \cite{rychkov2015}, which means they are more accurate since the method is truly variational. To estimate the error, it is not possible to compare with an exact solution ($\phi^4_2$ is not integrable), nor with earlier numerical results, \eg RHT, which are less precise even in their latest high precision development \cite{eliasmiro2017-2}. To get an accurate point of comparison, I simply considered a large $D$ estimate of the energy density as a reference to estimate the error at lower bond dimensions. For $D=32$, I obtained the rigorous bounds $\langle h \rangle_{g=1} \leq -0.039354$ and $\langle h \rangle_{g=2} \leq -0.157214$, which can be used as fairly good estimates of the true values.
The resulting errors for lower bond dimensions are shown in Fig. \ref{fig:error} for $g=1$ and $g=2$ and provide hints that the convergence to the ground energy is close to exponential in the bond dimension, or at least faster than any power law, as one expects from the discrete case \cite{huang2015computing}. Retrospectively, this fast convergence implies that our $D=32$ points of comparison are exact enough, with an error relative to the true ground state much smaller than that of the lower $D$ points whose error is being estimated. Note that the energy density obtained with RCMPS is the renormalized one and thus would be very difficult to estimate with a similar precision with lattice methods. Indeed, the latter give access to the total energy density, which diverges in the continuum limit. One would need to subtract the diverging tadpole part to get the finite renormalized contribution. As one gets closer to the continuum limit, obtaining this finite correction to a fixed precision requires a prohibitively high relative precision on the total energy. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{error.pdf} \caption{Relative error in the energy density as a function of the bond dimension $D$. The error is computed by using as reference the results at $D=32$: $\langle h \rangle_{g=1} \leq -0.0393547$ and $\langle h \rangle_{g=2} \leq -0.157214$. The latter should have a substantially lower error ($\simeq 10^{-5}$ and $\simeq 10^{-4}$ respectively) and thus be exact enough for the approximate computation of relative errors.} \label{fig:error} \end{figure} Finally, note that a second order phase transition occurs at $g_c\simeq 2.77$ (for $m=1$) \cite{delcamp2020}. Just like Rychkov and Vitale~\cite{rychkov2015} with RHT, and as expected from the similarity with the 2d Ising model, we see no sign of this transition in the ground state energy density.
As we will see, the transition appears more clearly once we look at observables like the expectation value of the field $\phi$ (which behaves like the Ising magnetization). \subsection{Observables} \noindent Once an RCMPS is optimized to approximate the ground state of a given Hamiltonian, expectation values of operators come essentially for free. As an illustration, I show the expectation values of $ \phi $ (Fig. \ref{fig:phi1}) and $:\!\phi^2\!:$ (Fig. \ref{fig:phi2}) but one could equally easily consider $:\!\phi^{42}\!:$ or $:\!\cosh(\phi)\!:$. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{phi1.pdf} \caption{Absolute value of $\langle\phi \rangle$ taken in the approximate RCMPS ground state. The symmetry breaking as the coupling is increased is manifest. Away from the critical point, convergence is extremely fast as a function of the bond dimension.} \label{fig:phi1} \end{figure} \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{phi2.pdf} \caption{$\langle:\!\phi^2\!:\rangle$ in the approximate RCMPS ground state. The phase transition can also be seen, although less clearly than in Fig. \ref{fig:phi1}, through the divergence of the first derivative near criticality.} \label{fig:phi2} \end{figure} The expectation value of $\phi$, which is the equivalent of the Ising magnetization for the $\phi^4$ model, is the most instructive. It shows a clear spontaneous symmetry breaking around the expected coupling $g_c\simeq 2.77$. To locate it more precisely (with a precision closer to the lattice extrapolations \cite{delcamp2020}) and estimate the critical exponents, one would need to compute more data points near the critical point, for a wide range of $D$, and use finite entanglement scaling techniques \cite{pollmann2009,stojevic2015,vanhecke2019}. This is left for future work. Consequently, the exceptional numerical accuracy of the ansatz is so far limited to the gapped phases on both sides of the critical point. 
\section{Extensions} \label{sec:extensions} \subsection{Adjustable characteristic length-scale} The core idea of RCMPS compared to CMPS is to use creation operators such that the theory is exactly solved at short distance. We used the pair $a(x),a^\dagger(x)$ associated to the free part of the theory with mass $m$. As we discussed, this introduces a length-scale $m^{-1}$ for the exponentially decaying support of the Hamiltonian density, which is reminiscent of the lattice scale for lattice models. But this scale is arbitrary in our case, and we could have chosen a different pair of creation-annihilation operators as long as the UV behavior is still exact. The simplest extension is to consider mass-adjustable RCMPS, that is, states of the form \begin{equation} \ket{Q,R,\tilde{m}} = \mathrm{tr}\left\{\mathcal{P} \exp\left[\int \mathrm{d} x \, Q\otimes \mathds{1} + R\otimes a_{\tilde{m}}^\dagger(x)\right]\right\}\ket{0}_{\tilde{m}} \end{equation} where $a_{\tilde{m}}$ are the operators diagonalizing the free theory of mass $\tilde{m}$, and $\ket{0}_{\tilde{m}}$ is the associated Fock vacuum. More explicitly, the field operators can be expanded in this new operator basis as \begin{align}\label{eq:modeexpansion_alt} \phi(x) &= \frac{1}{2\pi} \int \mathrm{d} k \sqrt{\frac{1}{2 \, \widetilde{\omega}_k}} \left(\e^{ikx} a_{\tilde{m},k} + \e^{-ikx} a^\dagger_{\tilde{m},k} \right) \\ \pi(x) &= \frac{1}{2\pi} \int \mathrm{d} k \sqrt{\frac{\widetilde{\omega}_k}{2}} \left(\e^{ikx} a_{\tilde{m},k} - \e^{-ikx} a^\dagger_{\tilde{m},k} \right) \, , \end{align} where $\widetilde{\omega}(k) = \sqrt{k^2+\tilde{m}^2}$. At short distances (large $k$), the state $\ket{Q,R,\tilde{m}}$ still solves the theory exactly, just like the RCMPS $\ket{Q,R}=\ket{Q,R,m}$, but the variable mass or inverse length-scale $\tilde{m} \neq m$ gives an additional degree of freedom one can optimize to better fit the IR. 
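As a quick numerical sanity check of this last claim (a minimal sketch, not from the paper), one can verify that the dispersion relations $\omega(k)=\sqrt{k^2+m^2}$ and $\widetilde{\omega}(k)=\sqrt{k^2+\tilde{m}^2}$ agree at large $k$, so the choice of $\tilde{m}$ only affects the IR:

```python
import math

def omega(k, m):
    # Relativistic dispersion relation sqrt(k^2 + m^2)
    return math.sqrt(k * k + m * m)

# Two different mass scales: the Hamiltonian mass m and an ansatz mass m~
# (the value 0.3 is an arbitrary illustrative choice).
m, m_tilde = 1.0, 0.3

# At small k the two dispersions differ substantially (IR physics)...
print(omega(0.1, m_tilde) / omega(0.1, m))
# ...but at large k the ratio tends to 1, so the short-distance (UV)
# behavior of the mode expansion is unaffected by the choice of m~.
for k in [10.0, 100.0, 1000.0]:
    print(k, abs(omega(k, m_tilde) / omega(k, m) - 1.0))
```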
The computations with the mass-adjustable RCMPS are more difficult than with the standard RCMPS, and the complications mainly come from the fact that the Hamiltonian density is not normal-ordered for the operators $a_{\tilde{m}},a^\dagger_{\tilde{m}}$. Evaluating the Hamiltonian density is thus tedious but doable, and I could carry out the optimization. However, for the few values of the coupling I tried, I obtained rather underwhelming results, only marginally improving the precision at the cost of a substantial increase in complexity and slower optimization. A more thorough study should be done in future works. One could expect that carefully optimizing the length-scale $\tilde{m}^{-1}$ near criticality would yield more meaningful improvements, since the gap becomes much smaller than the mass $m$ appearing in the Hamiltonian (corresponding to the one-loop renormalized mass). A more systematic exploration of Bogoliubov transformations would be interesting as well. In principle, one can change the $a,a^\dagger$ into new operators $b,b^\dagger$ by replacing $\omega_k$ in the mode expansion \eqref{eq:modeexpansion} with any function $\Omega_k$ with the same large-$k$ behavior, so that expectation values are still UV finite. This should provide a substantial gain in expressiveness at fixed bond dimension, but the computations would be more involved. Finally, I would like to emphasize that this idea of a non-local change of basis to make the continuum limit well behaved, or even simply to increase expressiveness, which comes at the cost of making the Hamiltonian exponentially decaying instead of local, could be used on the lattice as well. \subsection{Excitation spectrum and beyond} \noindent Once optimized, an RCMPS can be used to compute all equal-time correlation functions at no additional minimization cost. As I argued, this also gives some dynamical information because of Lorentz invariance, but can one get more? 
In principle, one can use TDVP in real time to evolve states, which means we have access to all dynamical properties. Note, however, that in that case one cannot use the trick of taking optimally large time steps, since, in real time, we no longer have a global minimization problem. Using standard tangent-space CMPS techniques, one also has access to the excitation spectrum \cite{vanderstraeten2019tangentspace}. In a nutshell, the idea is to diagonalize the Hamiltonian in the tangent space $\ket{V,W}_{Q,R}$, which is a vector space orthogonal to the ground state (in the gauge we chose). This would give approximations only to the excited states with zero momentum, but simple local modifications allow one to target states of non-zero momenta as well \cite{vanderstraeten2019tangentspace}. Carrying out such computations requires evaluating matrix elements of the form $\bra{V',W'} h \ket{V,W}_{Q,R}$, which should be doable. A comparison of this spectral data with what one could extract from the two-point function would be interesting. \subsection{Other quantum field theories} \noindent I illustrated the use of RCMPS on the self-interacting scalar field only, but it can in principle be applied to almost all theories in $1+1$ dimensions. Other bosonic theories with polynomial interactions (\eg $:\!\phi^6\! :$) can be treated without new techniques. Scalar theories with exponential potentials, like the Sine-Gordon and Sinh-Gordon models, can also be dealt with immediately, since expectation values of vertex operators are straightforward to compute. Fermionic theories could also be dealt with directly, with a minor subtlety related to regularity conditions \cite{haegeman2013}, that is, conditions on $R,Q$ one has to impose to make expectation values finite. 
The Gross-Neveu model \cite{gross1974}, which has already been studied with various UV cutoffs with tensor networks \cite{haegeman2010-relativistic,roose2020lattice}, would be an interesting candidate to probe the behavior of a ``just renormalizable'' theory. Alternatively, a large class of fermionic models, like the Thirring model, can already be dealt with using bosonization and the results of the present paper. There are however serious difficulties remaining to extend the method to relativistic QFT in $2+1$ and $3+1$ dimensions. The first is specific to the Hamiltonian formalism, as renormalized Hamiltonians are more difficult to define in higher dimensions: normal ordering is no longer sufficient and the renormalized Hamiltonian no longer acts on the free Fock space \cite{glimm1968}. Nonetheless, recent progress was made using renormalized Hamiltonian truncation \cite{elias-miro2020} and lightcone conformal truncation \cite{anand2020nonperturbative}, with promising numerical results, showing that the Hamiltonian approach is still reasonable in $2+1$ dimensions and in the continuum. The second difficulty is related to continuous tensor network states themselves, that is, the higher-dimensional equivalent of CMPS. In the non-relativistic setting, these states have been proposed in~\cite{tilloy2019}, but evaluating correlation functions is so far efficient only for Gaussian states~\cite{karanikolaou2020gaussian}. Indeed, the equivalent of the finite transfer matrix $\mathbb{T}$ of CMPS in $2+1$ dimensions is an operator acting on (two copies of) the Fock space of a $1+1$ dimensional relativistic QFT. In principle, one could bootstrap the present approach and evaluate correlation functions (or the energy density) in $2+1$ dimensions using a boundary RCMPS approach. In a nutshell, one would find the stationary state of the transfer matrix as a (large bond dimension) RCMPS. Every evaluation of an expectation value in $2+1$ dimensions would then be done at the cost of a full optimization in $1+1$ dimensions. 
This is the typical dimensional reduction obtained with tensor network approaches, where a physical dimension is traded for a variational optimization. Clearly, optimizing RCMPS is so far orders of magnitude too slow to be used as a routine to be called many times in a full optimization process. I hope the present work can stimulate progress in this direction. \section{Discussion} \label{sec:discussion} \noindent In this paper, I have presented a new class of states, the relativistic continuous matrix product states, that is adapted to relativistic quantum field theories. It is usable directly in the thermodynamic limit (no IR cutoff) and is exact at short distances, all the way down (no UV cutoff). The bond dimension $D$ controls the expressiveness of this class, and a state has $2D^2$ independent parameters. The comparison between RCMPS and Hamiltonian truncation is a good illustration of the difference between linear and non-linear methods. Hamiltonian truncation (up to its renormalization refinements) is a linear approach. The candidate ground state is expanded in a truncated Hilbert space. The energy is quadratic in the coefficients, and thus minimized efficiently as a linear problem. The price to pay is a lack of extensivity: the size of the Hilbert space grows exponentially as the size of the system increases for a fixed energy cutoff (and thus fixed precision). A side effect is that the number of parameters needs to be huge to reach good precision, with the candidate ground state written as a linear superposition of typically $10^4$ to $10^6$ states \cite{eliasmiro2017-2}. In contrast, using a continuous tensor network ansatz as we did, we work with a manifold of states, and the energy density is a highly non-linear function to optimize. Nonetheless, it can still be done efficiently, as I showed in Section~\ref{sec:optimization}. 
Further, having a non-linear class of states allows us to have an extensive ansatz, where going to the thermodynamic limit does not increase the number of parameters (in the translation invariant case). The results at $D=5$ provide an approximation to the ground state with only $2D^2=50$ independent real parameters which is already more accurate than RHT at strong coupling. This parsimonious encoding of the state translates into a better asymptotic behavior of the approximation. The error of Hamiltonian truncation decreases as a power law in the truncation energy $E_T$, while the cost is exponential in $E_T$ (see \eg \cite{eliasmiro2017-2}). In contrast, for RCMPS, the cost of the optimization is polynomial in $D$ while, at least for the model I considered, the error seems to converge exponentially fast to zero as a function of $D$. Rigorous results for MPS \cite{huang2015computing} lead one to expect a convergence at least faster than any power law. This favorable scaling should be explored with more careful numerical simulations, and it would be interesting to see if it can be proved rigorously. Nonetheless, even without such a proof, RCMPS already provide rigorous and rather tight energy upper bounds. These rigorous results are made possible because we work directly in the continuum, without the need for extrapolations. Naturally, there is a lot of work to do to generalize the RCMPS to a wider range of quantum field theories. But I hope the present paper will motivate others to pursue this necessary exploration. \begin{acknowledgments} \noindent I am grateful to have had discussions with Patrick Emonts, Tommaso Guaita, and Teresa Karanikolaou. They helped me realize the subtleties of minimization on a manifold, which was crucial for this work to succeed. I also thank Ignacio Cirac for helpful comments and for his support. Finally, I thank Jutho Haegeman, Karel Van Acoleyen, and Frank Verstraete for helpful comments on an earlier version. \end{acknowledgments}
\subsection{Main Proof} \begin{proof} The regret of C-UCBVI up to episode $K$ is: \begin{align*} R_K = \sum_{k=1}^{K}\left(V_1^*(s_{k,1})-V_1^{\pi_k}(s_{k,1})\right). \end{align*} Define the value function difference terms at level $h$ in episode $k$ by $\Delta_{k,h}:=V_h^*-V_h^{\pi_k}$ and $\widetilde{\Delta}_{k,h}:=V_{k,h}-V_h^{\pi_k}$. Define their realizations at state $s_{k,h}$ by $\delta_{k,h}:=\Delta_{k,h}(s_{k,h})$ and $\widetilde{\delta}_{k,h}:=\widetilde{\Delta}_{k,h}(s_{k,h})$. Under these notations, the regret can be written as: \begin{align*} R_K = \sum_{k=1}^{K}\delta_{k,1}. \end{align*} As a first step, we show that the optimism property, i.e. that the terms $V_{k,h}(s)$ upper-bound the optimal value functions $V_h^*(s)$ for all $(k,h,s)$ tuples, holds with high probability. Since we have no knowledge of the optimal value functions, which appear as a key term in the regret, it is hard to bound the terms $\delta_{k,1}$ directly. Lemma~\ref{lemma:optimism} shows that we can instead bound $\widetilde{\delta}_{k,1}$, which depends mostly on the information provided by the algorithm. \begin{lemma} [Optimism]\label{lemma:optimism} Define the optimism event as follows: \begin{align*} \mathcal{E}=\{V_{k,h}(s)\geq V_h^*(s), \forall k,h,s\}. \end{align*} Then $P(\mathcal{E})\geq 1-\delta$. \end{lemma} This can be proved by backward induction over $h$ for every $k$, showing that $\{q_{k,h}(s,\mathbf{z})\geq q_h^*(s,\mathbf{z}),\forall k,h,s,\mathbf{z}\}$ holds with high probability, where $q_h^*(s,\mathbf{z})\overset{\mathrm{def}}{=}\max_\pi q_h^\pi(s,\mathbf{z})$. By the construction of the $Q_{k,h}$ functions in Algorithm~\ref{algo:CUCBVI}, Lemma~\ref{lemma:optimism} can then be proved using the definitions of the value functions $q,Q$ and $V$. See the detailed proof in Section~\ref{app_sec:cucbvi_opt_proof}. 
According to Lemma~\ref{lemma:optimism}, the regret is upper bounded by: \begin{align} R_K = \sum_{k=1}^{K}\delta_{k,1} \leq \sum_{k=1}^{K}\widetilde{\delta}_{k,1}, \end{align} with probability $1-\delta$. In order to divide each $\widetilde{\delta}_{k,1}$ into several pieces, we bound every $\widetilde{\delta}_{k,h}$ recursively in terms of $\widetilde{\delta}_{k,h+1}$. At every episode $k$ and level $h$, we define the state transition kernel, the weighted bonus, and useful martingale terms as follows: \begin{align*} \hat{\mathbb{P}}_{k,h}^{\pi_k}(y|s_{k,h}) &= \sum_{\mathbf{z}\in\mathcal{Z}}\hat{\mathbb{P}}_k(y|s_{k,h},\mathbf{z})P(\mathbf{z}|s_{k,h},\pi_k(s_{k,h},h))\\ b_{k,h}&:= \sum_{\mathbf{z}}b_{k,h}(s_{k,h},\mathbf{z}) P(\mathbf{z}|s_{k,h},\pi_k(s_{k,h},h))\\ \epsilon_{k,h} &:= \mathbb{P}_h^{\pi_k}\widetilde{\Delta}_{k,h+1}-\widetilde{\Delta}_{k,h+1}(s_{k,h+1})\\ \varepsilon_{k,h} &:= \epsilon_{k,h}(s_{k,h}), \end{align*} where $b_{k,h}(s_{k,h},\mathbf{z}) = 7HL\sqrt{\frac{S}{N_k(s_{k,h},\mathbf{z})}}$ denotes the output of Algorithm~\ref{algo:bonus} with inputs $N_k(s_{k,h},\mathbf{z})>0$ and $\delta>0$. See the definition of $L$ (a logarithmic term) in Algorithm~\ref{algo:bonus}. In particular, we set $\hat{\mathbb{P}}_k(y|s_{k,h},\mathbf{z}) = 0$ and $b_{k,h}(s_{k,h},\mathbf{z}) = H\sqrt{S}$ when $N_k(s_{k,h},\mathbf{z}) = 0$. Using the above definitions and the procedures in Algorithm~\ref{algo:CUCBVI} and Algorithm~\ref{algo:CUCBQval}, we bound $\widetilde{\delta}_{k,h}$ recursively as follows: \begin{align} \widetilde{\delta}_{k,h}&\leq \widetilde{\delta}_{k,h+1}+\varepsilon_{k,h}+b_{k,h}+\left[(\hat{\mathbb{P}}_{k,h}^{\pi_k}-\mathbb{P}_{h}^{\pi_k})V_{h+1}^*\right](s_{k,h}) \notag\\ &+\left[(\hat{\mathbb{P}}_{k,h}^{\pi_k}-\mathbb{P}_{h}^{\pi_k})(V_{k,h+1}-V_{h+1}^*)\right](s_{k,h}).\label{equ:recursion} \end{align} Thus, it remains to bound each term in the above inequality separately. We bound the estimation error term in Claim~\ref{claim:cucbvi_err}. 
\begin{claim}\label{claim:cucbvi_err} For $\delta>0$, with probability at least $1-\delta$, the error term in~\eqref{equ:recursion} can be bounded by: \begin{align*} \left|(\hat{\mathbb{P}}_{k,h}^{\pi_k}-\mathbb{P}_{h}^{\pi_k})V_{h+1}^*(s_{k,h})\right| \leq \sum_{\mathbf{z}\in\mathcal{Z}}b_{k,h}(s_{k,h},\mathbf{z})P(\mathbf{z}|s_{k,h},\pi_k(s_{k,h},h)) = b_{k,h}. \end{align*} \end{claim} \begin{proof}[Proof for Claim~\ref{claim:cucbvi_err}] We first use Lemma 2 in~\citet{osband2014near} to obtain an $\ell_1$ bound for the empirical transition function. It ensures that for any $s,\mathbf{z}\in\mathcal{S}\times\mathcal{Z}$ such that $N_k(s,\mathbf{z})>0$, we have $\left\|\mathbb{P}(\cdot|s,\mathbf{z})-\hat{\mathbb{P}}_k(\cdot|s,\mathbf{z})\right\|_1\leq\sqrt{\frac{2S}{N_k(s,\mathbf{z})}\log\left(\frac{2}{\delta_k'}\right)}$ with probability at least $1-\delta_k'$. Setting $\delta_k' = \delta/(2SZk^2)$, by a union bound over $k,s,\mathbf{z}$ we have \begin{align} P\left(\left\|\mathbb{P}(\cdot|s,\mathbf{z})-\hat{\mathbb{P}}_k(\cdot|s,\mathbf{z})\right\|_1\leq\sqrt{\frac{2S}{N_k(s,\mathbf{z})}\log\left(\frac{2}{\delta_k'}\right)}, \forall k\in\mathbb{N}, s\in\mathcal{S},\mathbf{z}\in\mathcal{Z} \text{ s.t. } N_k(s,\mathbf{z})>0\right) \geq 1-\delta. \label{equ:1norm} \end{align} Next, we bound the estimation error term, which is the main interest of this claim. 
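The choice $\delta_k' = \delta/(2SZk^2)$ makes the union bound work because $\sum_{k\geq 1} 1/k^2 = \pi^2/6$, so the total failure probability is at most $\delta\pi^2/12 < \delta$. A purely illustrative numerical check (the sizes $S$ and $Z$ below are hypothetical, not from the paper):

```python
import math

# HYPOTHETICAL sizes for the state space S and intervention space Z,
# chosen only to make the arithmetic concrete.
S, Z, delta = 10, 4, 0.05

def delta_k(k):
    # Per-episode confidence level delta_k' = delta / (2 S Z k^2)
    return delta / (2 * S * Z * k * k)

# Union bound over episodes k and the S*Z pairs (s, z):
#   sum_k S * Z * delta_k(k) = (delta/2) * sum_k 1/k^2 -> delta * pi^2/12 < delta
total = sum(S * Z * delta_k(k) for k in range(1, 100001))
print(total, "<", delta)
```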
By writing out the expression and using the above inequality, with probability at least $1-\delta$, for all $k,h$ we have: \begin{align*} \left|(\hat{\mathbb{P}}_{k,h}^{\pi_k}-\mathbb{P}_{h}^{\pi_k})V_{h+1}^*(s_{k,h})\right| &= \left|\sum_{y\in\mathcal{S}}\left(\mathbb{P}(y|s_{k,h},\pi_k(s_{k,h},h))-\hat{\mathbb{P}}_k(y|s_{k,h},\pi_k(s_{k,h},h))\right)V_{h+1}^*(y)\right|\\ & = \left|\sum_{y\in\mathcal{S}}\sum_{\mathbf{z}\in\mathcal{Z}}\left(\mathbb{P}(y|s_{k,h},\mathbf{z})-\hat{\mathbb{P}}_k(y|s_{k,h},\mathbf{z})\right)P(\mathbf{z}|s_{k,h},\pi_k(s_{k,h},h))V_{h+1}^*(y)\right|\\ &\leq H\sum_{\mathbf{z}\in\mathcal{Z}}P(\mathbf{z}|s_{k,h},\pi_k(s_{k,h},h))\left\|\mathbb{P}(\cdot|s_{k,h},\mathbf{z})-\hat{\mathbb{P}}_k(\cdot|s_{k,h},\mathbf{z})\right\|_1\\ &\leq H \sum_{\mathbf{z}:N_k(s_{k,h},\mathbf{z})>0}P(\mathbf{z}|s_{k,h},\pi_k(s_{k,h},h)) \sqrt{\frac{2S}{N_k(s_{k,h},\mathbf{z})}\log\left(\frac{2}{\delta_k'}\right)} \text{ from~\eqref{equ:1norm}}\\ & + H\sum_{\mathbf{z}:N_k(s_{k,h},\mathbf{z})=0} \sqrt{S}P(\mathbf{z}|s_{k,h},\pi_k(s_{k,h},h)) \text{ by the Cauchy-Schwarz inequality on the 1-norm}\\ & \leq b_{k,h}, \end{align*} where the last inequality follows from the definition of $b_{k,h}$. \end{proof} In Claim~\ref{claim:cucbvi_err}, the only information about the value function $V$ we use is its upper bound $H$. Thus, by exactly the same approach, we bound the higher-order error term as follows: with probability at least $1-\delta$, for all $k,h$ we have \begin{align*} \left|(\hat{\mathbb{P}}_{k,h}^{\pi_k}-\mathbb{P}_{h}^{\pi_k})(V_{k,h+1}-V_{h+1}^*)(s_{k,h})\right| \leq b_{k,h}. \end{align*} Combining the recursion in~\eqref{equ:recursion} and the above two bounds, we have \begin{align} \widetilde{\delta}_{k,h} \leq \sum_{j=h}^{H}\left(\varepsilon_{k,j}+3b_{k,j}\right).\label{equ:recursion_sim} \end{align} We bound the summation of the bonus terms in Claim~\ref{claim:cucbvi_bonus}. 
\begin{claim}\label{claim:cucbvi_bonus} For any $\delta$, with probability at least $1-\delta$, we have: \begin{align} \sum_{k=1}^{K}\sum_{h=1}^{H}b_{k,h} \leq 14HL\sqrt{2STL}+14HSL\sqrt{ZT}+H\sqrt{S}SZ. \end{align} \end{claim} \begin{proof}[Proof for Claim~\ref{claim:cucbvi_bonus}] \begin{align*} \sum_{k=1}^{K}\sum_{h=1}^{H}b_{k,h} &= \sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{\mathbf{z}\in\mathcal{Z}}b_{k,h}(s_{k,h},\mathbf{z}) P(\mathbf{z}|s_{k,h},\pi_k(s_{k,h},h))\\ & = \sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{\mathbf{z}:N_k(s_{k,h},\mathbf{z})>0}7HL\sqrt{\frac{S}{N_k(s_{k,h},\mathbf{z})}} P(\mathbf{z}|s_{k,h},\pi_k(s_{k,h},h))\\ & + \sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{\mathbf{z}:N_k(s_{k,h},\mathbf{z})=0} H\sqrt{S} P(\mathbf{z}|s_{k,h},\pi_k(s_{k,h},h))\\ & = 7HL\sqrt{S}\sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{s\in\mathcal{S}}\sum_{\mathbf{z}:N_k(s,\mathbf{z})>0}P(\mathbf{z}|s,\pi_k(s,h))\sqrt{\frac{1}{N_k(s,\mathbf{z})}}\mathbb{1}_{\{s_{k,h} = s\}}\\ &+H\sqrt{S} \sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{s\in\mathcal{S}}\sum_{\mathbf{z}:N_k(s,\mathbf{z})=0}P(\mathbf{z}|s,\pi_k(s,h))\mathbb{1}_{\{s_{k,h} = s\}}\\ & = \underbrace{7HL\sqrt{S}\sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{s\in\mathcal{S}}\sum_{\mathbf{z}:N_k(s,\mathbf{z})>0}\sqrt{\frac{1}{N_k(s,\mathbf{z})}}\mathbb{1}_{\{s_{k,h} =s\}}\left(P(\mathbf{z}|s,\pi_k(s,h))-\mathbb{1}_{\{\mathbf{z}_{k,h}=\mathbf{z}\}}\right)}_{(a)}\\ &+\underbrace{7HL\sqrt{S}\sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{s\in\mathcal{S}}\sum_{\mathbf{z}:N_k(s,\mathbf{z})>0}\sqrt{\frac{1}{N_k(s,\mathbf{z})}}\mathbb{1}_{\{s_{k,h} = s,\mathbf{z}_{k,h} = \mathbf{z}\}}}_{(b)} \\ & + \underbrace{H\sqrt{S} \sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{s\in\mathcal{S}}\sum_{\mathbf{z}:N_k(s,\mathbf{z})=0}(P(\mathbf{z}|s,\pi_k(s,h))-\mathbb{1}_{\{\mathbf{z}_{k,h} = \mathbf{z}\}})\mathbb{1}_{\{s_{k,h} = s\}}}_{(a')}\\ & + \underbrace{H\sqrt{S} \sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{s\in\mathcal{S}}\sum_{\mathbf{z}\in\mathcal{Z}}\mathbb{1}_{\{\mathbf{z}_{k,h} = \mathbf{z}, s_{k,h} = s, 
N_k(s,\mathbf{z})=0\}}}_{(b')} \end{align*} We bound terms $(a)$ and $(a')$ above using Azuma's inequality: with probability $1-\delta$ we have \begin{align*} (a) &\leq 7HL\sqrt{S}\sqrt{2T\log\left(\frac{1}{\delta}\right)} \leq 7HL\sqrt{2STL}, \\ (a') & \leq H\sqrt{S}\sqrt{2T\log\left(\frac{1}{\delta}\right)} \leq H\sqrt{2STL}. \end{align*} We bound term $(b)$ using the pigeonhole principle: \begin{align*} (b)&\leq 7HL \sqrt{S}\sum_{s\in\mathcal{S}}\sum_{\mathbf{z}\in\mathcal{Z}}\int_{0}^{N_K(s,\mathbf{z})}\sqrt{\frac{1}{x}}dx\\ & = 7HL \sqrt{S}\sum_{s\in\mathcal{S}}\sum_{\mathbf{z}\in\mathcal{Z}}2\sqrt{N_K(s,\mathbf{z})}\\ &\leq 14HLS\sqrt{ZT} \text{ (by Cauchy-Schwarz)}. \end{align*} We bound term $(b')$ by $H\sqrt{S}SZ$ using the properties of its indicator function. Combining $(a)$, $(a')$, $(b)$ and $(b')$, we conclude the result. \end{proof} Now we bound the summation over the $\varepsilon_{k,h}$ terms, which are the only terms in~\eqref{equ:recursion} that remain to be bounded. By Azuma's inequality we have: \begin{align*} \sum_{k=1}^{K}\sum_{h=1}^{H}\varepsilon_{k,h} & = \sum_{k=1}^{K}\sum_{h=1}^{H}\left(\sum_{y\in\mathcal{S}}P(y|s_{k,h},\pi_k(s_{k,h},h))\widetilde{\Delta}_{k,h+1}(y)-\widetilde{\Delta}_{k,h+1}\left(s_{k,h+1}\right)\right)\leq 2H\sqrt{KH\log\left(\frac{1}{\delta}\right)}\leq 2H\sqrt{TL}. \end{align*} Back to \eqref{equ:recursion_sim}, combining the above bound with Claim~\ref{claim:cucbvi_bonus}, we have \begin{align*} R_K \leq \sum_{k=1}^{K}\widetilde{\delta}_{k,1} \leq \sum_{k=1}^{K}\sum_{h=1}^{H}\left(\varepsilon_{k,h}+3b_{k,h}\right) = \tilde{O}\left(HS\sqrt{ZT}\right), \end{align*} with probability at least $1-\delta$ (after rescaling the original $\delta$ by a constant factor $C$ via a union bound over the events above). We ignore lower-order terms which have no non-logarithmic dependency on $T$. 
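The integral comparison behind the pigeonhole step, $\sum_{n=1}^{N} n^{-1/2} \leq \int_0^N x^{-1/2}\,dx = 2\sqrt{N}$, can be verified numerically (a standalone sketch, independent of the proof):

```python
import math

# Check the pigeonhole/integral bound used for term (b):
#   sum_{n=1}^{N} 1/sqrt(n)  <=  integral_0^N x^(-1/2) dx  =  2 sqrt(N)
for N in [1, 10, 100, 10000]:
    partial = sum(1.0 / math.sqrt(n) for n in range(1, N + 1))
    bound = 2.0 * math.sqrt(N)
    print(N, partial, "<=", bound)
```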
\end{proof} \subsection{Proof for Lemma~\ref{lemma:optimism}} \label{app_sec:cucbvi_opt_proof} \begin{proof} We start by proving the lemma below: \begin{lemma}\label{lemma:claimUCB} For any $\delta>0$, with probability at least $1-\delta$, we have: \begin{align*} \left|\left((\mathbb{P}-\hat{\mathbb{P}}_k)V_{h+1}^*\right)(s,\mathbf{z})\right|\leq b_{k,h}(s,\mathbf{z}). \end{align*} \end{lemma} \begin{proof}[Proof for Lemma~\ref{lemma:claimUCB}] We follow the proof of Claim~\ref{claim:cucbvi_err} and use Inequality~\eqref{equ:1norm} and the definition of $b_{k,h}(s,\mathbf{z})$ to get \begin{align*} &P\left(\left|\left((\mathbb{P}-\hat{\mathbb{P}}_k)V_{h+1}^*\right)(s,\mathbf{z})\right|>b_{k,h}(s,\mathbf{z}) \text{ for some } k,h,\mathbf{z}\right) \\ \leq & P\left(\left\|\mathbb{P}(\cdot|s,\mathbf{z})-\hat{\mathbb{P}}(\cdot|s,\mathbf{z})\right\|_1>7L\sqrt{\frac{S}{N_k(s,\mathbf{z})}} \text{ for some } k,h,\mathbf{z} \text{ s.t. }N_k(s,\mathbf{z})>0\right)\leq \delta. \end{align*} \end{proof} Recall that $q_h^*(s,\mathbf{z})=\max_\pi q_h^{\pi}(s,\mathbf{z})$ is the maximal $q$ value that can be achieved by any policy. We use induction to prove that $\{q_{k,h}(s,\mathbf{z})\geq q_h^*(s,\mathbf{z})\}$ holds with high probability. For $h=H$, $q_{k,H}(s,\mathbf{z}) \geq \min\{H,R(s,\mathbf{z})+b_{k,H}(s,\mathbf{z})\} \geq R(s,\mathbf{z}) = q_H^*(s,\mathbf{z})$. Suppose $q_{k,h'}(s,\mathbf{z})\geq q_{h'}^*(s,\mathbf{z})$ holds for $h' = h+1,\ldots,H$; then \begin{align*} V_{k,h'}(s)-V_{h'}^*(s) &= \max_a Q_{k,h'}(s,a)-Q_{h'}^*(s,\pi^*(s,h'))\\ &\geq Q_{k,h'}(s,\pi^*(s,h'))-Q_{h'}^*(s,\pi^*(s,h'))\\ & = \sum_{\mathbf{z}\in\mathcal{Z}}P(\mathbf{z}|s,\pi^*(s,h'))\left(q_{k,h'}(s,\mathbf{z})-q_{h'}^*(s,\mathbf{z})\right)\\ &\geq 0 \text{ (by the induction hypothesis)}. \end{align*} Now we show that at level $h$, $\{q_{k,h}(s,\mathbf{z})\geq q_h^*(s,\mathbf{z})\}$ also holds with high probability. If $q_{k,h}(s,\mathbf{z}) = H$, it trivially holds. 
So we consider the case $q_{k,h}(s,\mathbf{z}) = R(s,\mathbf{z})+\left(\hat{\mathbb{P}}_{k}V_{k,h+1}\right)(s,\mathbf{z})+b_{k,h}(s,\mathbf{z})$. \begin{align*} q_{k,h}(s,\mathbf{z}) - q_h^*(s,\mathbf{z}) &= \left(\hat{\mathbb{P}}_{k}V_{k,h+1}\right)(s,\mathbf{z})+b_{k,h}(s,\mathbf{z})-\left(\mathbb{P}V_{h+1}^*\right)(s,\mathbf{z})\\ &\geq \left((\hat{\mathbb{P}}_k-\mathbb{P})V_{h+1}^*\right)(s,\mathbf{z})+b_{k,h}(s,\mathbf{z}) \text{ (by the induction hypothesis)}\\ &\geq 0 \text{ (holds with probability at least $1-\delta$ by Lemma~\ref{lemma:claimUCB})}. \end{align*} This shows that, with probability at least $1-\delta$, $\{q_{k,h}(s,\mathbf{z})\geq q_h^*(s,\mathbf{z})\}$ holds. Using the same argument as above, we conclude that $\{V_{k,h}(s)\geq V_h^*(s), \forall k,h,s\}$ holds with probability $1-\delta$. \end{proof} \section{Proof for Theorem~\ref{thm:CUCBVI}} \input{app_cmdp_regret.tex} \section{Proof for Theorem~\ref{thm:CFUCBVI}} \input{app_cfmdp_regret.tex} \section{Introduction}\label{sec:intro} \input{intro.tex} \section{Related Work}\label{sec:rel} \input{related.tex} \section{Preliminaries}\label{sec:setup} \input{setup.tex} \section{Causal UCBVI} \label{sec:C} \input{CUCBVI.tex} \section{Causal Factored-UCBVI} \label{sec:CF} \input{CFUCBVI.tex} \section{Causal Linear MDPs} \label{sec:CL} \input{linear_mod_graph.tex} \section{Experiments} \label{sec:exp} \input{experiments.tex} \section{Discussion} \label{sec:dis} \input{conclusion.tex} \section*{ACKNOWLEDGEMENT} This work was supported in part by NSF CAREER grant IIS-1452099 and an Adobe Data Science Research Award. \newpage \bibliographystyle{apalike} \subsection{RELATED WORK}
\section{Introduction} Let $G$ be a locally compact, second countable, unimodular group that is nondiscrete and noncompact, endowed with a Haar measure $\lambda$. We think of $\lambda$ as an inherent parameter of $G$, as all the notions trivially scale with $\lambda$. Throughout the paper, we will make these assumptions on $G$ except when stated otherwise. A \emph{point process} $\Pi$ on $G$ is a random closed and discrete subset of $G$. More precisely, it is a random variable taking values in the \emph{configuration space} of $G$: \[ \MM = \MM(G) = \{ \omega \subseteq G \mid \omega \text{ is closed and discrete} \}. \] When the law of $\Pi$ is invariant under the left $G$-action, we call $\Pi$ an \emph{invariant point process}. We do not assume the reader has any knowledge of point process theory and have made the paper as self-contained as possible. Invariant point processes are examples of probability measure preserving (pmp) actions. Recall that a pmp action is \emph{essentially free} (or simply \emph{free} for short) if the stabiliser of almost every point in the action space is trivial. In the particular case of point processes, this means that the set of points will almost surely have no symmetries. Our first theorem shows that actually \emph{every} free pmp action can be realised this way. \begin{thm} \label{minden}Every free pmp action of $G$ is isomorphic to a point process on $G$. \end{thm} Note that freeness is a necessary condition here, as can be seen from the action of $\RR^2$ on $\RR^2 / (\ZZ \times \RR)$. This action is, however, isomorphic to a point process on a homogeneous space of $\RR^2$. The proof of Theorem \ref{minden} exhibits an analogy between point processes of locally compact groups and the symbolic dynamics of countable groups. For a pmp action of a countable group $\Gamma$, every Borel partition of the underlying space gives rise to an invariant random coloring of $\Gamma$ by considering the orbit of a random point of the underlying space. 
Similarly, every cross section of a free pmp action of $G$, when one considers its intersection with the $G$-orbit of a random point, becomes a point process on $G$. So point processes serve as \emph{stochastic visualizations} of pmp actions of locally compact groups, just like invariant random colorings do for countable groups. This paper aims to show that this visualization leads to new and meaningful results. The correspondence above and the classical theorem of Forrest \cite{MR417388} on the existence of cross sections (see also \cite{MR3335405} and Section 3.B of \cite{kechris2019theory}) immediately yields that every free pmp action factors onto an invariant point process. The factor map can be upgraded to an isomorphism by using a \emph{marked} point process. These are random discrete subsets where each point carries a mark from some mark space (for example, a finite set of colours). Then Theorem \ref{minden} is proved by showing that every marked point process is isomorphic to an unmarked one, by ``spatially encoding'' the marks. We now introduce the cost of a point process $\Pi$ on $G$. A \emph{factor graph} $\mathscr{G}$ of $\Pi$ is an equivariantly and measurably defined graph $\mathscr{G}(\Pi)$ whose vertex set is $\Pi$. For example, for $r > 0$ one can define the distance graph to be the graph connecting the pairs $g, h \in \Pi$ with $d(g, h) < r$, where $d$ is a left-invariant metric on $G$. Informally speaking, the \emph{cost} of $\Pi$ is defined as the infimum of the average degrees over connected factor graphs of $\Pi$, suitably normalised to be an isomorphism invariant. For precise definitions see Section \ref{cost}. We then define cost for free pmp actions of $G$ via Theorem \ref{minden}, which is well-defined since cost is an isomorphism invariant. The cost of pmp actions of countable groups has been an active subject in the last twenty years; see Gaboriau's paper \cite{gaboriau2016around} and the survey \cite{furman2009survey} for the literature. 
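As a purely illustrative toy computation (a finite window with made-up parameters, not an invariant process), one can build such a distance graph on a sample of points and measure its average degree, the quantity whose normalised infimum over connected factor graphs defines the cost:

```python
import random

# Toy distance factor graph on a finite sample of points in the unit square:
# connect pairs at Euclidean distance < r and compute the average degree.
# All parameters here are arbitrary illustrative choices.
random.seed(0)
points = [(random.random(), random.random()) for _ in range(200)]
r = 0.15

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

edges = [(i, j)
         for i in range(len(points))
         for j in range(i + 1, len(points))
         if dist(points[i], points[j]) < r]

# Each edge contributes to the degree of both endpoints.
avg_degree = 2 * len(edges) / len(points)
print(avg_degree)
```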
It has been known in the community that cross sections naturally allow one to extend the notion of cost to free pmp actions of locally compact groups, but due to the lack of results, the notion stayed dormant. The first explicit appearance of the definition can be found in a recent paper of Carderi \cite{carderi2018asymptotic}. The definition we work with is essentially equivalent to his, but we develop it intrinsically as a point process theoretic notion. One of the most important families of processes on a discrete group is the family of Bernoulli percolations $\text{Ber}(p)$. The natural analogue of this family for nondiscrete groups is the \emph{Poisson point process} of intensity $t > 0$. Here the \emph{intensity} of an invariant point process is the expected number of points which fall in a set of unit volume. This quantity can be shown to be independent of the choice of set. An explicit description of Poisson point processes will be given later, but one should know that these processes are ``completely independent''. \begin{thm}\label{Poissmax} Poisson point processes have maximal cost among all free pmp actions of $G$. In particular, the cost of a Poisson point process does not depend on its intensity. \end{thm} We denote the cost of a Poisson point process on $G$ by $c_P(G)$. The above result can be viewed as a locally compact analogue of a result of Ab\'{e}rt and Weiss \cite{abert2013bernoulli}, where they show that for a countable group, Bernoulli actions have maximal cost among all pmp actions. A central open problem in cost theory is the Fixed Price problem of Gaboriau, which asks whether all free pmp actions of a countable group have the same cost. This is also open in the locally compact setting. \begin{question} Is it true that all free point processes on $G$ have the same cost?
\end{question} Gaboriau \cite{gaboriau2002invariants} asks if for a countable pmp equivalence relation, the cost of the relation equals its first $L^{2}$ Betti number $\beta_{1}$ plus $1$. Note that an affirmative answer to this would, via the cross section correspondence, imply an affirmative answer to Question 1 as well. Since the cost of any free process is at least one, a viable way to prove that a group has fixed price one is by showing that the Poisson point process admits connected factor graphs with average degree $1 + \e$ for all $\e > 0$. We succeed in this for the first nontrivial case, answering a question of Carderi \cite{carderi2018asymptotic}: \begin{thm} \label{zszorzat}Every free pmp action of $G\times\mathbb{Z}$ has cost one if $G$ is compactly generated. \end{thm} Our proof is truly stochastic in nature, as it essentially uses some special properties of Poisson point processes. In countable cost theory, it remains an open question whether the direct product $\Gamma \times \Delta$ of two infinite countable groups $\Gamma$ and $\Delta$ has fixed price one. It is known to hold if one of the groups contains a fixed price one subgroup. When trying to generalize Theorem \ref{zszorzat} to arbitrary products, we seem to hit a somewhat similar barrier. \begin{question} Let $G$ and $H$ be compactly generated but noncompact groups. Does $G \times H$ have fixed price one? \end{question} Another application of Theorem \ref{Poissmax} concerns the growth of the minimal number of generators (the rank gradient) for a sequence of lattices in semisimple real Lie groups. Recall that a discrete subgroup $\Gamma\leq G$ is a \emph{lattice} if it has finite covolume in $G$. Let $d(\Gamma)$ denote the minimal number of generators (also known as the \emph{rank}) of $\Gamma$.
When $G$ is a semisimple Lie group, $d(\Gamma)$ is finite and by a theorem of Gelander \cite{MR2863908}, we have \[ \frac{d(\Gamma)-1}{\mathrm{vol}(G/\Gamma)}\leq C \] for some constant $C$ only depending on $G$. A sequence of lattices $\Gamma_{n}$ in $G$ is \emph{Farber} if $G/\Gamma_{n}$ approximates $G$ in the Benjamini-Schramm topology, or, equivalently, if the corresponding invariant random subgroups weakly converge to the trivial subgroup. \begin{thm}\label{carderiextension} Let $G$ be a semisimple real Lie group and let $\Gamma_{n}$ be a Farber sequence of lattices in $G$. Then \[ \limsup_{n\to\infty}\frac{d(\Gamma_n)-1}{\mathrm{vol}(G/\Gamma_n)} \leq\mathrm{c}_{\mathrm{P}}(G)-1. \] \end{thm} In particular, if the Poisson point process has cost $1$ then the rank grows sublinearly in the covolume, for Farber sequences of lattices. This correspondence connects computing the cost of the Poisson process to some exciting open problems that have been investigated extensively. Theorem \ref{carderiextension} extends a result of Carderi \cite{carderi2018asymptotic}, who proved it for uniformly discrete (in particular, cocompact) Farber sequences. Here a sequence of lattices is \emph{uniformly discrete} if there exists $C>0$ such that the infimal injectivity radius of $G/\Gamma_{n}$ is bounded below by $C$ for all $n$. \begin{question} Let $G$ be a semisimple real Lie group that is not a compact extension of $\SL_{2}(\RR)$. Is $\mathrm{c}_{\mathrm{P}}(G)=1$? \end{question} Note that by work of Conley, Gaboriau, Marks, and Tucker-Drob the group $\SL_{2}(\RR)$ is treeable and has fixed price greater than $1$. We now showcase three concrete cases for a semisimple Lie group where computing the cost of the Poisson process would solve known problems of a different nature. Note that it is natural to ask about the cost of the Poisson process on the symmetric space $X$ of $G$ rather than on the group $G$ itself. As we show in Theorem \ref{symmetricspacecostmax}, these two invariants are equal.
\medskip \textbf{Case }$G=\SL_{2}(\CC)$ and $X=\HH^3$\textbf{.} If $\mathrm{c}_{\mathrm{P}}(G)>1$, then we get free point processes in $G$ with different cost. Moreover, we also get a countable equivalence relation whose first $L^2$ Betti number is not equal to its cost minus $1$, answering a question of Gaboriau \cite{gaboriau2002invariants}. If, on the other hand, $\mathrm{c}_{\mathrm{P}}(G)=1$, then we get that the Heegaard genus divided by the rank of the fundamental group of a (compact) hyperbolic $3$-manifold can get arbitrarily large. In fact, we obtain this for \emph{any} expander Farber sequence of hyperbolic $3$-manifolds, which is understood as the typical behavior. Indeed, by the work of Lackenby \cite{lackenby2002heegaard}, for expander sequences the Heegaard genus grows linearly, while using our work, the rank would grow sublinearly in the volume. Note that it is a longstanding open problem whether this ratio is absolutely bounded over all $3$-manifolds, and in fact it was only proved recently in the deep paper of Li \cite{li2013rank} that for compact hyperbolic $3$-manifolds, the Heegaard genus can \emph{differ} from the rank. \medskip \textbf{Case when }$G$\textbf{ has higher rank.} For these Lie groups, Fraczyk recently proved in a beautiful paper \cite{fraczyk2018growth} that the growth of the first mod $2$ homology vanishes for Farber sequences in $G$. Surprisingly, his method does not seem to carry over to odd primes, so for primes other than $2$, this is still an open problem. As the rank is an upper bound for the first mod $p$ homology of a discrete group, proving $\mathrm{c}_{\mathrm{P}}(G)=1$ would settle this problem. By a standard induction argument, proving $\mathrm{c}_{\mathrm{P}}(G)=1$ would show that any lattice in $G$ has fixed price $1$, a problem of Gaboriau that is still open for cocompact lattices.
\medskip \textbf{Case when }$G$\textbf{ has higher rank and property (T).} Using \cite{MR3664810}, for semisimple Lie groups with (T) one can actually omit the Farber condition. \begin{cor} Let $G$ be a higher rank semisimple real Lie group with property (T) and let $\Gamma_{n}$ be any sequence of lattices in $G$ with $\mathrm{vol}(G/\Gamma_n)\rightarrow \infty$. Then \[ \limsup_{n\rightarrow\infty}\frac{d(\Gamma_n)-1}{\mathrm{vol}(G/\Gamma_n)} \leq\mathrm{c}_{\mathrm{P}}(G)-1\text{.} \] \end{cor} In particular, if $\mathrm{c}_{\mathrm{P}}(G)=1$, then we get a \emph{totally uniform} vanishing theorem for the growth of rank for lattices in these groups, including $\SL_{d}(\RR)$ ($d\geq3$). It is shown in \cite{abert2017rank} that any Farber sequence in $\SL_{3}(\ZZ)$ has vanishing rank gradient, but the uniform version is wide open. Note that in their very recent paper, Lubotzky and Slutsky \cite{lubotzky2021asymptotic} showed that in the above situation, every sequence of non-uniform lattices will have rank gradient $0$. Their proof uses deep classical results on non-uniform lattices, like arithmeticity and the Congruence Subgroup Property, but in turn gives a much stronger upper bound for the number of generators than what we ask for: logarithmic in the covolume. In most cases, they can even improve this with a loglog factor. Their methods do not seem to readily generalize to cocompact lattices. Our purely geometric approach may have the potential to be applied more widely, but the price is that, being a limiting argument, it is not expected to yield such explicit estimates. The proof of Theorem \ref{Poissmax} uses the stochastic visualisation method to show that every free action is ``sufficiently rich'' in randomness to ``simulate'' the Poisson point process. In particular, connected factor graphs of the Poisson point process can be transferred to an arbitrary free process in a way that can at worst \emph{decrease} the average degree.
Simulation here refers to \emph{weak factoring}, a notion we introduce that is inspired by weak containment of actions, see the survey of Kechris and Burton \cite{kechris2019theory}. For invariant point processes $\Pi$ and $\Upsilon$, we say that $\Upsilon$ is a \emph{factor} of $\Pi$ if there exists a $G$-equivariant Borel map $\Phi: \MM \to \MM$ such that $\Phi(\Pi) = \Upsilon$. We say that $\Upsilon$ is a \emph{weak factor} of $\Pi$, or that $\Pi$ \emph{weakly factors} onto $\Upsilon$, if there exist factor maps $\Phi_1, \Phi_2, \ldots$ of $\Pi$ such that $\Phi_n(\Pi)$ converges weakly to $\Upsilon$. \begin{thm} \label{Poissweak}Let $\Pi$ be a free point process on $G$. Then $\Pi$ weakly factors onto the Poisson point process of intensity $t$, for all $t$. \end{thm} In particular, Poisson processes on $G$ of different intensities weakly factor onto each other. More is known in the amenable case: Ornstein and Weiss showed \cite{ornstein1987entropy} that for a large class of amenable groups, the Poisson point processes of different intensity are in fact \emph{isomorphic} as actions (see \cite{soo2019finitary} for an alternative construction on $\RR^n$ with additional properties). The proof of Theorem \ref{Poissweak} revolves around IID-marked processes. Let $[0,1]^\Pi$ denote the random $[0,1]$-marked subset of $G$ whose underlying set is $\Pi$ and whose marks are independent, identically distributed $\texttt{Unif}[0,1]$ random variables. We call this the \emph{IID of $\Pi$}. Once this definition and that of the Poisson point process are understood, one can readily see that the IID of \emph{any} process factors onto the Poisson point process. We then prove: \begin{thm} \label{putiid}Let $\Pi$ be a free point process on $G$. Then $\Pi$ weakly factors onto $[0,1]^\Pi$, its own IID. \end{thm} Somewhat surprisingly, it is not entirely trivial to show that weak factoring is a \emph{transitive} notion, but we are able to prove it.
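The IID marking just introduced can be sketched in a few lines. This is an illustration only, assuming points in $\RR$ and using Python's pseudorandom generator (the helper names are ours). The thinning below keeps exactly the points whose mark is at most $p$; this operation reappears in Section 1 as independent $p$-thinning.

```python
import random

def iid_marks(points, rng):
    """Attach an independent Unif[0,1] mark to each point of the configuration."""
    return {g: rng.random() for g in points}

def p_thinning(marked, p):
    """Keep exactly the points whose mark is at most p."""
    return [g for g, mark in marked.items() if mark <= p]

rng = random.Random(1)
points = list(range(10000))        # a stand-in configuration
marked = iid_marks(points, rng)
kept = p_thinning(marked, 0.3)
# Each point survives independently with probability 0.3.
assert abs(len(kept) / len(points) - 0.3) < 0.03
```

The point of the IID is exactly this kind of construction: the marks supply the extra randomness that deterministic factor maps of the bare process lack.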
Thus in particular, Theorem \ref{putiid} implies that free point processes weakly factor onto the Poisson point process. We next investigate how cost behaves with respect to factor maps. It is easy to see that it can only \emph{increase} under a factor map: if $\Pi$ factors onto $\Upsilon$, then $\cost(\Pi) \leq \cost(\Upsilon)$. In particular, this shows that cost is an \emph{isomorphism invariant} of actions. This monotonicity of cost under factor maps can be pushed further: \begin{thm} Suppose $\Pi$ weakly factors onto $\Upsilon$, as witnessed by a sequence of factor maps $\Phi_n(\Pi)$ weakly converging to $\Upsilon$. Under appropriate tightness conditions on $\Pi, \Upsilon$, and the sequence $\Phi_n$, we have $\cost(\Pi) \leq \cost(\Upsilon)$. \end{thm} See Section \ref{certainfactors} for a precise statement. This cost monotonicity theorem, limited as it is, is powerful enough to prove that the Poisson point process has maximal cost amongst all free processes. \medskip \noindent {\bf Acknowledgements.} The authors wish to thank the anonymous referee for a very thorough and helpful report. The second author thanks Mikolaj Fraczyk and Alessandro Carderi for helpful discussions. \medskip The paper is structured as follows. In Section 1, we give the basic definitions and notations of point processes for those who have never encountered them before, and describe the most important examples of point processes for our work. In Section 2, we introduce the \emph{Palm measure} of a point process and the rerooting groupoid. In Section 3, we define the \emph{cost} of an invariant point process and prove basic properties of it. In Section 4, we define \emph{weak factoring} of point processes and prove that (in certain circumstances) cost is \emph{monotone} with respect to weak factoring. We use this to show that the Poisson has maximal cost amongst all free processes.
In Section 5, we use the fact that the Poisson has maximal cost to give the first nontrivial examples of nondiscrete groups with fixed price. In Section 6, we connect the rank gradient of sequences of lattices in a group with the cost of the Poisson point process on said group. In Section 7, we discuss the modifications required to extend the above theory to invariant point processes on symmetric spaces. In the appendix, we include a summary of necessary technical facts from point process theory with references for proofs. No originality is claimed for this material. \section{Point processes and factors of interest} Let $(Z,d)$ denote a complete and separable metric space (a csms). A \emph{point process on $Z$} is a random discrete subset of $Z$. We will also study random discrete subsets of $Z$ that are \emph{marked} by elements of an additional csms $\Xi$. Typically $\Xi$ will be a finite set that we think of as colours. \begin{defn} The \emph{configuration space} of $Z$ is \[ \MM(Z) = \{ \omega \subset Z \mid \omega \text{ is locally finite} \}, \] and the \emph{$\Xi$-marked configuration space} of $Z$ is \[ \Xi^\MM(Z) = \{ \omega \subset Z \times \Xi \mid \omega \text{ is discrete, and if } (g, \xi) \in \omega \text{ and } (g, \xi') \in \omega \text{ then } \xi = \xi' \}. \] \end{defn} Note that $\Xi^\MM(Z) \subset \MM(Z \times \Xi)$. We think of a $\Xi$-marked configuration $\omega \in \Xi^\MM(Z)$ as a locally finite subset of $Z$ with labels on each of the points (whereas a typical element of $\MM(Z \times \Xi)$ is a locally finite subset where each point has possibly multiple marks). If $\omega \in \Xi^\MM(Z)$ is a marked configuration, then we will write $\omega_z$ for the unique element of $\Xi$ such that $(z, \omega_z) \in \omega$. The Borel structure on configuration spaces is exactly such that the following \emph{point counting functions} are measurable. Let $U \subseteq Z$ be a Borel set. 
It induces a function $N_U : \MM(Z) \to \NN_0 \cup \{ \infty \}$ given by \[ N_U(\omega) = \@ifstar{\oldabs}{\oldabs*}{\omega \cap U}. \] We will primarily be interested in point processes defined on locally compact and second countable (lcsc) groups $G$. Such groups admit a unique (up to scaling) left-invariant Haar measure $\lambda$; we fix such a choice. We will further assume that $G$ is \emph{unimodular}, although it is not strictly necessary for every argument in the paper. Recall: \begin{thm}[Struble's theorem, see Theorem 2.B.4 of \cite{MR3561300}] Let $G$ be a locally compact topological group. Then $G$ is second countable \emph{if and only if} it admits a proper\footnote{Recall that a metric is \emph{proper} if closed balls are compact.} left-invariant metric. \end{thm} Such a metric is unique up to coarse equivalence (bilipschitz if the group is compactly generated). We fix $d$ to be any such metric. We mostly consider the configuration space of a fixed group $G$. So out of notational convenience let us write $\MM = \MM(G)$ and $\Xi^\MM = \Xi^\MM(G)$. The latter here is an abuse of notation: formally $\Xi^\MM$ ought to denote the set of functions from $\MM$ to $\Xi$, but instead we are using it to denote the set of functions from \emph{elements} of $\MM$ to $\Xi$. Note that the marked and unmarked configuration spaces of $G$ are Borel $G$-spaces. To spell this out, $G \curvearrowright \MM$ by $g \cdot \omega = g\omega$ and $G \curvearrowright \Xi^\MM$ by \[ g \cdot \omega = \{(gx, \xi) \in G \times \Xi \mid (x, \xi) \in \omega \}. \] \begin{defn} A \emph{point process} on $G$ is an $\MM(G)$-valued random variable $\Pi : (\Omega, \PP) \to \MM(G)$. Its \emph{law} or \emph{distribution} $\mu_\Pi$ is the pushforward measure $\Pi_*(\PP)$ on $\MM(G)$. It is \emph{invariant} if its law is an invariant probability measure for the action $G \curvearrowright \MM(G)$.
The associated \emph{point process action} of an invariant point process $\Pi$ is $G \curvearrowright (\MM(G), \mu_\Pi)$. \end{defn} Some remarks and caveats are in order: \begin{itemize} \item Point processes which are not invariant are very much of interest, but will only come up when we discuss ``Palm processes''. Thus we will sometimes say ``point process'' when we strictly mean \emph{invariant} point process. \item Speaking properly, we are discussing \emph{simple} point processes, that is, those where each point has multiplicity one. We will discuss this more later. \item $\Xi$-marked point processes are defined similarly, with $\Xi^\MM$ taking the place of $\MM$. There isn't much difference between marked point processes and unmarked ones for our purposes (it's just a case of which is more convenient for the particular problem at hand). Thus ``point process'' might also mean ``marked point process''. \item One could certainly define point processes on a discrete group, but this is essentially percolation theory. We are specifically trying to understand the nondiscrete case, and so will assume $G$ is nondiscrete. \item The other case of interest we will discuss is $\Isom(S)$-invariant point processes on $S$, where $S$ is a Riemannian symmetric space. For instance, one would consider isometry invariant point processes on Euclidean space $\RR^n$ or hyperbolic space $\HH^n$. We will discuss this case more in Section \ref{homogeneousspaces}. \item Our interest in point processes is almost exclusively \emph{as actions}. We will therefore rarely distinguish between a point process proper and its distribution. Thus we may use expressions like ``suppose $\mu$ is a point process'' to mean ``suppose $\mu$ is the distribution of some point process''. \item The configuration space of any Polish space will be Polish, so the probability theory of point processes on such spaces is well behaved. 
The metric properties of configuration spaces that we require are listed in the appendix, with references for proofs. \end{itemize} \begin{defn} The \emph{intensity} of a point process $\mu$ is \[ \intensity(\mu) = \frac{1}{\lambda(U)} \EE_\mu \left[ N_U \right], \] where $U \subset G$ is any Borel set of positive (but finite) Haar measure, and $N_U(\omega) = \@ifstar{\oldabs}{\oldabs*}{\omega \cap U}$ is its point counting function. \end{defn} To see that the intensity is well-defined (that is, does not depend on our choice of $U$), observe that the function $U \mapsto \EE_\mu[N_U]$ defines a Borel measure on $G$ which inherits invariance from the shift invariance of $\mu$. So by uniqueness of Haar measure, it is some scaling of our fixed Haar measure $\lambda$ -- the intensity is exactly this multiplier. We also see that whilst the intensity depends on our choice of Haar measure, it scales linearly with it. Note that a point process has intensity zero if and only if it is empty almost surely. \subsection{Examples of point processes} \begin{example}[Lattice shifts] Let $\Gamma < G$ be a \emph{lattice}, that is, a discrete subgroup that admits an invariant probability measure $\nu$ for the action $G \curvearrowright G / \Gamma$. The natural map $\MM(G / \Gamma) \to \MM(G)$ given by \[ \omega \mapsto \bigcup_{a\Gamma \in \omega} a\Gamma \] is left-equivariant, and hence maps invariant point processes on $G / \Gamma$ to invariant point processes on $G$. In particular, we have the \emph{lattice shift}, given by choosing a $\nu$-random point $a\Gamma$. \end{example} \begin{example}[\textbf{Induction from a lattice}] Now suppose one also has a pmp action $\Gamma \curvearrowright (X, \mu)$. It is possible to \emph{induce} this to a pmp action of $G$ on $G / \Gamma \times X$. This can be described as an $X$-marked point process on $G$. To do this, fix a fundamental domain $\mathscr{F} \subset G$ for $\Gamma$.
Choose $f \in \mathscr{F}$ uniformly at random, and independently choose a $\mu$-random point $x \in X$. Let \[ \Pi = \{ (f\gamma, \gamma \cdot x) \in G \times X \mid \gamma \in \Gamma \}. \] Then $\Pi$ is a $G$-invariant $X$-marked point process. \end{example} In this way one can view point processes as generalised lattice shift actions. Note that there are groups without lattices (for instance Neretin's group, see \cite{MR2881324}), but every group admits interesting point processes, as we discuss now. The most fundamental of these is known as \emph{the Poisson point process}. We will define this after reviewing the Poisson distribution: Recall that a random integer $N$ is \emph{Poisson distributed with parameter $t > 0$} if \[ \PP[N = k] = \exp(-t)\frac{t^k}{k!} \quad \text{for all } k \in \NN_0.\] We write $N \sim \texttt{Pois}(t)$ to denote this. It is convenient to extend this definition to $t = 0$ and $t = \infty$ by declaring $N \sim \texttt{Pois}(0)$ when $N = 0$ almost surely and $N \sim \texttt{Pois}(\infty)$ when $N = \infty$ almost surely. \begin{defn} Let $X$ be a complete and separable metric space equipped with a non-atomic Borel measure $\lambda$. A point process $\Pi$ on $X$ is \emph{Poisson with intensity $t > 0$} if it satisfies the following two properties: \begin{description} \item[(Poisson point counts)] for all Borel $U \subseteq X$, the count $N_U(\Pi)$ is a Poisson distributed random variable with parameter $t \lambda(U)$, and \item[(Total independence)] for all \emph{disjoint} Borel sets $U, V \subseteq X$, the random variables $N_U(\Pi)$ and $N_V(\Pi)$ are \emph{independent}. \end{description} \end{defn} For reasons that may not be immediately apparent, the above defining properties are in fact equivalent. We will write $\PPP_t$ for the distribution of such a random variable, or simply $\PPP$ if the intensity is understood. We think of the Poisson point process as a completely random scattering of points in the group.
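As a quick sanity check of the distribution just recalled, one can verify numerically that the stated pmf sums to one and has mean $t$, so Poisson point counts over a set $U$ indeed have mean $t\lambda(U)$. The snippet is purely illustrative; it computes the pmf iteratively to avoid overflowing $t^k/k!$.

```python
import math

def pois_pmf(t, kmax):
    """Return [P[N = k] for k = 0..kmax], where P[N = k] = exp(-t) t^k / k!."""
    probs = [math.exp(-t)]
    for k in range(1, kmax + 1):
        probs.append(probs[-1] * t / k)  # multiply by t/k instead of recomputing t^k/k!
    return probs

t = 4.0
probs = pois_pmf(t, 100)
assert abs(sum(probs) - 1.0) < 1e-9                              # pmf sums to 1
assert abs(sum(k * p for k, p in enumerate(probs)) - t) < 1e-9   # mean equals t
```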
It is an analogue of Bernoulli site percolation for a continuous space. We now construct the process (somewhat) explicitly. Partition $G$ into disjoint Borel sets $U_1, U_2, \ldots$ of positive but finite volume. For each of these, independently sample from a Poisson distribution with parameter $t \lambda(U_i)$. Place that number of points in the corresponding $U_i$ (independently and uniformly at random). This description can be turned into an explicit sampling rule\footnote{That is, one can define a measurable function $f : \prod_n X_n \to \MM$ defined on an appropriate product of probability spaces such that the pushforward measure is the distribution of the Poisson point process.}, if one desires. For proofs of basic properties of the Poisson point process (such as the fact that it does not depend on the partition chosen above), see the first five chapters of Kingman's book \cite{MR1207584}. \begin{defn} A pmp action $G \curvearrowright (X, \mu)$ is \emph{ergodic} if for every $G$-invariant measurable subset $A \subseteq X$, we have $\mu(A) = 0$ or $\mu(A) = 1$. The action is \emph{mixing} if for all measurable $A, B \subseteq (X, \mu)$ we have \[ \lim_{g \to \infty} \mu(gA \cap B) = \mu(A)\mu(B). \] The action is \emph{essentially free} if $\stab_G(x) = \{1\}$ for $\mu$ almost every $x \in X$. In the case of point process actions we will sometimes use the term \emph{aperiodic} to refer to this. \end{defn} \begin{prop} The Poisson point process actions $G \curvearrowright (\MM, \PPP_t)$ on a noncompact group $G$ are essentially free and ergodic (in fact, mixing). \end{prop} A proof of freeness that is readily adaptable to our setting can be found as Proposition 2.7 of \cite{MR3664810}. For ergodicity and mixing, see the proof of the discrete case in Proposition 7.3 of the Lyons-Peres book \cite{MR3616205}. It directly adapts, once one knows the required cylinder sets exist. 
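The partition construction just described can be sketched as follows, assuming for illustration that the ambient space is an interval $[0, L) \subset \RR$ with Lebesgue measure. The Poisson sampler below uses Knuth's classical multiplication method, and all names are ours.

```python
import math
import random

def sample_poisson(t, rng):
    """Knuth's method: multiply uniforms until the product drops below exp(-t)."""
    threshold, k, prod = math.exp(-t), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

def poisson_process(intensity, length, rng, cell=1.0):
    """Partition [0, length) into cells, sample a Poisson count for each cell,
    then scatter that many uniform points inside the cell."""
    points, a = [], 0.0
    while a < length:
        b = min(a + cell, length)
        n = sample_poisson(intensity * (b - a), rng)
        points.extend(rng.uniform(a, b) for _ in range(n))
        a = b
    return sorted(points)

rng = random.Random(0)
counts = [len(poisson_process(2.0, 10.0, rng)) for _ in range(400)]
# E[N_{[0,10)}] = intensity * length = 20, matching (Poisson point counts).
assert abs(sum(counts) / len(counts) - 20.0) < 1.5
```

Counts in disjoint cells are generated independently by construction, mirroring the (Total independence) property; the fact that the resulting law does not depend on the chosen partition is the nontrivial point referred to above.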
Although the subscript $t$ suggests that the Poisson point processes form a continuum family of actions, this is not always the case: \begin{thm}[Ornstein-Weiss] Let $G$ be an amenable group which is not a countable union of compact subgroups. Then the Poisson point process actions $G \curvearrowright (\MM, \PPP_t)$ are all isomorphic. \end{thm} The following definition uses notation that does not appear in the literature (the object of course does, but there does not appear to be a symbolic representation for it): \begin{defn} If $\Pi$ is a point process, then its \emph{IID version} is the $[0,1]$-marked point process $[0,1]^\Pi$ with the property that conditional on its set of points, its labels are IID $\text{Unif}[0,1]$ random variables. If $\mu$ is the law of $\Pi$, then we will write $[0,1]^\mu$ for the law of $[0,1]^\Pi$. One can define the IID of a point process over spaces other than $[0,1]$ (for instance, $[n] = \{1, 2, \ldots, n\}$ with the counting measure), but we will only use the full IID. \end{defn} \begin{remark} As we've mentioned, $[0,1]$-marked point processes on $G$ are particular examples of point processes on $G \times [0,1]$. One can show (see Theorem 5.6 of \cite{MR3791470}) that the Poisson point process on $G \times [0,1]$ with respect to the product measure $\lambda \otimes \text{Leb}$ is just the IID version of the Poisson point process on $G$, a fact which we will make use of later. \end{remark} \begin{prop} The IID Poisson point process on a noncompact group is ergodic (and in fact mixing). \end{prop} This can be seen by viewing the IID Poisson on $G$ as the Poisson point process on $G \times S^1$, restricted to the subgroup $G$. Note that the restriction of a mixing action to a noncompact subgroup is mixing. \begin{remark} One can define ``the IID'' of any probability measure preserving countable Borel equivalence relation, see \cite{MR3813200}.
This construction is known as \emph{the Bernoulli extension}, and is ergodic if the base space is ergodic. \end{remark} \begin{prop} Let $\Pi$ be a point process on a group $G$ which is non-empty almost surely. Then $\@ifstar{\oldabs}{\oldabs*}{\Pi} = \infty$ almost surely if and only if $G$ is noncompact. \end{prop} \begin{proof} It is immediate that any point process on a compact group must be finite almost surely (as it is a discrete subset of the space). Now suppose $\Pi$ is a non-empty point process on $G$ which is finite almost surely. Then the IID of this process $[0,1]^\Pi$ still has this property. We define the following $G$-valued random variable: \[ f([0,1]^\Pi) = \text{ the unique } x \in \Pi \text{ with maximal label in } [0,1]^\Pi. \] The invariance of the point process translates into equivariance of the map $f : [0,1]^\MM \to G$. Thus this random variable's law is an invariant probability measure on $G$. Such a measure exists exactly when $G$ is compact. \end{proof} \subsection{Factors of point processes} \begin{defn} A \emph{point process factor map} is a $G$-equivariant and measurable map $\Phi : \MM \to \MM$. If $\mu$ is a point process and $\Phi$ is only defined $\mu$ almost everywhere, then we will call it a \emph{$\mu$ factor map}. We will be interested in two monotonicity conditions: \begin{itemize} \item if $\Phi(\omega) \subseteq \omega$ for all $\omega \in \MM$, we will call $\Phi$ a \emph{thinning} (and usually denote it by $\theta$), and \item if $\Phi(\omega) \supseteq \omega$ for all $\omega \in \MM$, we will call $\Phi$ a \emph{thickening} (and usually denote it by $\Theta$). \end{itemize} We use the same terms for marked point processes as well. \end{defn} \begin{remark}\label{thinningconfusio} There are \emph{two} possible ways to interpret the above monotonicity conditions for a $\Xi$-marked point process, depending on what you want to do with the mark space. 
One can consider \[ \Phi : \Xi^\MM \to \Xi^\MM, \text{ or } \Phi : \Xi^\MM \to \MM. \] In the former case, the definition above works verbatim. In the latter case, one should interpret a statement like ``$\omega \subseteq \Phi(\omega)$'' as ``$\omega$ is contained in the underlying set $\pi(\Phi(\omega))$ of $\Phi(\omega)$'', where $\pi : \Xi^\MM \to \MM$ is the map that forgets labels. \end{remark} \begin{example}[Metric thinning]\label{deltathinningdefn} Let $\delta > 0$ be a tolerance parameter. The \emph{$\delta$-thinning} is the equivariant map $\theta^\delta : \MM \to \MM$ given by \[ \theta^\delta(\omega) = \{ g \in \omega \mid d(g, \omega \setminus \{g\}) > \delta \}. \] When $\theta^\delta$ is applied to a point process, the result is always a $\delta$-separated point process\footnote{Probabilists refer to such processes as \emph{hard-core}.} (but possibly empty). We define $\theta^\delta$ in the same way for marked point processes (that is, it simply ignores the marks). \end{example} \begin{example}[Independent thinning]\label{independentthinning} Let $\Pi$ be a point process. The \emph{independent $p$-thinning} defined on its IID $[0,1]^\Pi$ is given by \[ \mathcal{I}_p([0,1]^\Pi) = \{g \in \Pi \mid ([0,1]^\Pi)_g \leq p \}, \] that is, we keep exactly the points whose mark is at most $p$. \end{example} One can show that independent $p$-thinning of the Poisson point process of intensity $t > 0$ yields the Poisson point process of intensity $pt$, as one would expect. See Chapter 5 of \cite{MR3791470} for further details. \begin{example}[Constant thickening]\label{constantthickening} Let $F \subset G$ be a finite set containing the identity $0 \in G$, and $\Pi$ be a point process which is \emph{$F$-separated} in the sense that $\Pi \cap \Pi f = \empt$ for all $f \in F\setminus\{0\}$. Then there is the associated thickening $\Theta^F(\Pi) = \Pi F$. It is intuitively obvious that $\intensity (\Theta^F(\Pi)) = \@ifstar{\oldabs}{\oldabs*}{F} \intensity (\Pi)$.
This can be formally established as follows: let $U \subseteq G$ be of unit volume. Then \begin{align*} \intensity (\Theta^F(\Pi) ) &= \EE\@ifstar{\oldabs}{\oldabs*}{U \cap \Pi F} && \text{By definition} \\ &= \sum_{f \in F} \EE\@ifstar{\oldabs}{\oldabs*}{U \cap \Pi f} && \text{By }F\text{-separation} \\ &= \sum_{f \in F} \EE \@ifstar{\oldabs}{\oldabs*}{Uf^{-1} \cap \Pi} && \\ &= \sum_{f \in F} \EE\@ifstar{\oldabs}{\oldabs*}{U \cap \Pi} && \text{By \emph{unimodularity}} \\ &= \@ifstar{\oldabs}{\oldabs*}{F} \intensity (\Pi). \end{align*} This is the first real appearance of our unimodularity assumption. In particular, we can demonstrate that $\intensity \Theta^F(\Pi) = \@ifstar{\oldabs}{\oldabs*}{F} \intensity \Pi$ is \emph{not} automatically true without unimodularity. For this, let $\Pi$ denote the unit intensity Poisson point process on $G$, and $F = \{0, f\}$ where $f \in G$ is chosen such that $\lambda(Uf^{-1}) < 1$. Then $\@ifstar{\oldabs}{\oldabs*}{Uf^{-1} \cap \Pi}$ is Poisson distributed with parameter $\lambda(Uf^{-1})$, and so by the above calculation $\intensity\Theta^F(\Pi) < 2 \cdot \intensity \Pi$. \end{example} Monotone maps have been investigated in the specific case of the Poisson point process on $\RR^n$. We note the following interesting theorems: \begin{thm}[Holroyd, Peres, Soo \cite{MR2884878}] Let $s > t > 0$. Then the Poisson point process on $\RR^n$ of intensity $s$ can be thinned to the Poisson point process of intensity $t$. That is, there exists an equivariant and deterministic thinning $\theta : (\MM(\RR^n), \PPP_s) \to (\MM(\RR^n), \PPP_t)$. \end{thm} \begin{thm}[Gurel-Gurevich and Peled \cite{MR3096589}] Let $t > s > 0$ be intensities. Then the Poisson point process on $\RR^n$ of intensity $s$ cannot be thickened to the Poisson point process of intensity $t$. That is, there is no equivariant and deterministic thickening $\Theta : (\MM(\RR^n), \PPP_s) \to (\MM(\RR^n), \PPP_t)$.
\end{thm} We stress in the above theorems the \emph{deterministic} nature of the maps. If one is allowed additional randomness (that is, one asks for a factor of IID map), then both theorems are easily established. We note the following fact, which we will use (and prove) later after developing some notation. \begin{example} If $\Pi$ is any non-empty point process, then its IID factors onto the Poisson (in fact, onto the IID Poisson). \end{example} \begin{defn} A \emph{factor $\Xi$-marking} of a point process is a $G$-equivariant map $\mathscr{C} : \MM \to \Xi^\MM$ such that the underlying subset in $G$ of $\mathscr{C}(\omega)$ is $\omega$. That is, $\mathscr{C}$ is a rule that assigns a mark from $\Xi$ to each point of $\omega$ in some deterministic way. Again, if $\mathscr{C}$ is only defined $\mu$-almost everywhere then we will call it a \emph{$\mu$-factor $\Xi$-marking}. \end{defn} \begin{example}\label{colouring} Let $\theta : \MM \to \MM$ be a thinning. Then the associated $2$-colouring is $\mathscr{C}_\theta : \MM \to \{0, 1\}^\MM$ given by \[ \mathscr{C}_\theta(\omega) = \{ (g, \mathbbm{1}[{g \in \theta(\omega)}]) \in G \times \{0, 1\} \mid g \in \omega \}. \] We will see that all markings are built out of thinnings in a similar way. \end{example} \begin{remark}\label{thinninglost} There is a difference between the \emph{thinning map $\theta$} and the resulting \emph{thinned process $\theta_*(\mu)$} that can be a source of confusion. Passing to the thinned process can (in principle) lose information about $\mu$. For example, let $\Pi$ denote a Poisson point process on $G$ and $\Upsilon$ an independent random shift of a lattice $\Gamma < G$. Define the following thinning $\theta : \MM \to \MM$ by \[ \theta(\omega) = \{ g \in \omega \mid g\Gamma \subseteq \omega \}. \] Then $\theta(\Pi \cup \Upsilon) = \Upsilon$, and so the thinning completely loses the Poisson point process.
\end{remark} \begin{defn}\label{inputoutputdefn} Let $\Phi : \MM \to \MM$ be a factor map. We think of its input $\omega$ as being red, its output $\Phi(\omega)$ as being blue, and their overlap $\omega \cap \Phi(\omega)$ as being purple. For $g \in \omega \cup \Phi(\omega)$, let $\texttt{Colour}(g) \in \{\text{Red, Blue, Purple}\}$ be \[ \texttt{Colour}(g) = \begin{cases} \text{Red} & \text{ If } g \in \omega \setminus \Phi(\omega), \\ \text{Blue} & \text{ If } g \in \Phi(\omega)\setminus \omega, \\ \text{Purple} & \text{ If } g \in \omega \cap \Phi(\omega). \end{cases} \] Now define $\Theta^\Phi : \MM \to \{\text{Red, Blue, Purple}\}^\MM$ to be the following \emph{input/output thickening} of $\Phi$ (see also Figure \ref{inputoutputfigure}): \[ \Theta^\Phi(\omega) = \{ (g, \texttt{Colour}(g)) \in G \times \{\text{Red, Blue, Purple}\} \mid g \in \omega \cup \Phi(\omega) \}. \] \begin{figure}[h] \includegraphics[scale=.4]{inputoutput.pdf} \centering \caption{This is how you should picture the input/output thickening of a factor map.}\label{inputoutputfigure} \end{figure} Let $\pi : \{\text{Red, Blue, Purple}\}^\MM \to \MM$ be the projection map that deletes red points and then forgets colours, that is, \[ \pi(\omega) = \{ g \in \omega \mid \omega_g \in \{\text{Blue, Purple}\} \}. \] \end{defn} \begin{remark}\label{factorsdecompose} Observe that $\Phi = \pi \circ \Theta^\Phi$ -- that is, an \emph{arbitrary} factor map decomposes as a thickening followed by a thinning. In this way we can often reduce the study of arbitrary factors to that of \emph{monotone} factors. \end{remark} \begin{defn} The \emph{space of graphs in $G$} is \[ \graph(G) = \{ (V, E) \in \MM(G) \times \MM(G \times G) \mid E \subseteq V \times V \}. \] This is a Borel $G$-space (with the diagonal action). A \emph{factor graph} is a measurable and $G$-equivariant map $\Phi : \MM(G) \to \graph(G)$ with the property that the vertex set of $\Phi(\omega)$ is $\omega$.
If a factor graph is connected, then we will refer to it as a \emph{graphing}. \end{defn} \begin{remark} The elements of $\graph(G)$ are technically directed graphs, possibly with loops, and without multiple edges between the same pair of vertices. It is possible to define (in a Borel way) whatever space of graphs one desires (coloured, undirected, etc.) by taking appropriate subsets of products of configuration spaces. \end{remark} \begin{remark} One might prefer to call the factor graphs defined above \emph{monotone} factor graphs. Our terminology follows that of probabilists, see for instance \cite{holroyd2005}. We have not found a use for the less restrictive factor graph concept. \end{remark} \begin{example}\label{distanceR} The \emph{distance-$R$} factor graph is the map $\mathscr{D}_R : \MM \to \graph(G)$ given by \[ \mathscr{D}_R(\omega) = \{ (g, h) \in \omega \times \omega \mid d(g, h) \leq R \}. \] The connectivity properties of this graph fall under the purview of continuum percolation theory, see for instance \cite{meester1996continuum}. \end{example} \section{The rerooting equivalence relation and groupoid} We now introduce a pair of algebraic objects that capture factors of a point process. For exposition's sake, we will first discuss unmarked point processes on a group $G$. \begin{defn} The \emph{space of rooted configurations on $G$} is \[ {\mathbb{M}_0}(G) = \{ \omega \in \MM(G) \mid 0 \in \omega \}. \] If $G$ is understood, then we will drop it from the notation for clarity. The \emph{rerooting equivalence relation} on ${\mathbb{M}_0}$ is the orbit equivalence relation of $G \curvearrowright \MM$ restricted to ${\mathbb{M}_0}$. Explicitly: \[ \Rel = \{ (\omega, g^{-1}\omega) \in {\mathbb{M}_0} \times {\mathbb{M}_0} \mid g \in \omega \}. \] This defines a countable Borel equivalence relation structure on ${\mathbb{M}_0}$.
It is degenerate whenever $\omega \in {\mathbb{M}_0}$ exhibits symmetries: for instance, the equivalence class of $\ZZ$ viewed as an element of ${\mathbb{M}_0}(\RR)$ is a singleton. We are usually interested in essentially free actions, where such difficulties will not occur. Nevertheless, we do care about lattice shift point processes and so we will introduce a groupoid structure that keeps track of symmetries. The \emph{space of birooted configurations} is \[ \Marrow = \{ (\omega, g) \in {\mathbb{M}_0} \times G \mid g \in \omega \}. \] We visualise an element $(\omega, g) \in \Marrow$ as the rooted configuration $\omega \in {\mathbb{M}_0}$ with an arrow pointing to $g \in \omega$ from the root (i.e., the identity element of $G$). The above spaces form a \emph{groupoid} $({\mathbb{M}_0}, \Marrow)$ which we will refer to as the \emph{rerooting groupoid}. Its unit space is ${\mathbb{M}_0}$ and its arrow space is $\Marrow$. We can identify ${\mathbb{M}_0}$ with ${\mathbb{M}_0} \times \{0\} \subset \Marrow$. The multiplication structure is as follows: we declare a pair of birooted configurations $(\omega, g), (\omega', h)$ in $\Marrow$ to be \emph{composable} if $\omega' = g^{-1}\omega$, in which case \[ (\omega, g) \cdot (\omega', h) := (\omega, gh). \] Note that if $\Gamma < G$ is a discrete subgroup (so $\Gamma \in {\mathbb{M}_0}(G)$), then the above multiplication on $\{\Gamma\} \times \Gamma \subset \Marrow(G)$ is just the usual one. The \emph{source map} $s : \Marrow \to {\mathbb{M}_0}$ and \emph{target map} $t : \Marrow \to {\mathbb{M}_0}$ are \[ s(\omega, g) = \omega, \text{ and } t(\omega, g) = g^{-1}\omega. \] \end{defn} Note that the rerooting groupoid is \emph{discrete} in the sense that $s^{-1}(\omega)$ is at most countable for all $\omega \in {\mathbb{M}_0}$. \begin{remark} Let $\Maper_0$ denote the set of rooted configurations $\omega$ that are \emph{aperiodic} in the sense that $\stab_G(\omega) = \{e\}$.
Then the restriction of the rerooting groupoid to $\Maper_0 \subseteq {\mathbb{M}_0}$ is \emph{principal}\footnote{Recall that a groupoid is \emph{principal} if its isotropy subgroups are all trivial. That is, the groupoid structure is just that of an equivalence relation.}. \end{remark} \begin{defn} If $\Xi$ is a space of marks, then the \emph{space of $\Xi$-marked rooted configurations} is \[ \Xi^{\mathbb{M}_0} = \{ \omega \in \Xi^\MM \mid \exists \xi \in \Xi \text{ such that } (0, \xi) \in \omega \}. \] The \emph{$\Xi$-marked rerooting groupoid} is defined as previously, with $\Xi^{\mathbb{M}_0}$ taking the place of ${\mathbb{M}_0}$. \end{defn} \subsection{Borel correspondences between the groupoid and factors}\label{borelcorrespondences} Suppose $\theta : \MM \to \MM$ is an equivariant and measurable thinning. Then we can associate to it a subset of the rerooting groupoid, namely \[ A_\theta = \{ \omega \in {\mathbb{M}_0} \mid 0 \in \theta(\omega) \}. \] This association has an inverse: given a Borel subset $A \subseteq {\mathbb{M}_0}$, we can define a thinning $\theta^A : \MM \to \MM$ by \[ \theta^A(\omega) = \{g \in \omega \mid g^{-1}\omega \in A \}. \] Thus we see that \emph{Borel subsets $A \subseteq {\mathbb{M}_0}$ of the rerooting groupoid correspond to Borel thinning maps $\theta : \MM \to \MM$}. In the $\Xi$-marked case, one associates to a subset $A \subseteq \Xi^{\mathbb{M}_0}$ a thinning $\theta^A : \Xi^\MM \to \Xi^\MM$.
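On finite windows this correspondence is easy to make concrete. The sketch below (our own naming, on $G = \RR$) represents a Borel set $A \subseteq {\mathbb{M}_0}$ by a predicate on finite rooted configurations; choosing the predicate ``the nearest other point is farther than $\delta$'' recovers the $\delta$-thinning of Example \ref{deltathinningdefn}.

```python
def thin(A, omega):
    """theta^A(omega) = {g in omega : g^{-1} omega in A}.

    Here A is a predicate on rooted configurations (finite subsets of R
    containing 0), standing in for a Borel subset of M_0; rerooting at g
    is the translation omega - g.
    """
    return {g for g in omega if A({x - g for x in omega})}

# The subset A = "d(0, omega \ {0}) > delta" recovers the delta-thinning.
delta = 1.0
A_delta = lambda rooted: all(abs(x) > delta for x in rooted if x != 0)

omega = {0.0, 0.4, 3.0, 5.0, 5.5, 9.0}
assert thin(A_delta, omega) == {3.0, 9.0}
```

By construction `thin` commutes with translations of `omega`, which is exactly the equivariance required of a thinning.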
In a similar way, we can see that if $P : {\mathbb{M}_0} \to [d]$ is a Borel partition of ${\mathbb{M}_0}$ into $d$ classes, then there is an associated factor $[d]$-colouring $\mathscr{C}^P : \MM \to [d]^\MM$ given by \[ \mathscr{C}^P(\omega) = \{ (g, P(g^{-1}\omega)) \in G \times [d] \mid g \in \omega \}, \] and given a factor $[d]$-colouring $\mathscr{C} : \MM \to [d]^\MM$ one associates the partition $P^\mathscr{C} : {\mathbb{M}_0} \to [d]$ given by \[ P^\mathscr{C}(\omega) = c, \text{ where } c \text{ is the unique element of } [d] \text{ such that } (0, c) \in \mathscr{C}(\omega). \] Again, these associations are mutual inverses. More generally, we see that \emph{Borel factor $\Xi$-markings $\mathscr{C} : \MM \to \Xi^\MM$ correspond to Borel maps $P : {\mathbb{M}_0} \to \Xi$}. Now suppose that $\mathscr{G} : \MM \to \graph(G)$ is an equivariant and measurable factor graph. Then we can associate to it a subset of the rerooting groupoid's arrow space, namely \[ \mathscr{A}_\mathscr{G} = \{ (\omega, g) \in \Marrow \mid (0,g) \in \mathscr{G}(\omega)\}. \] In the other direction, we associate to a subset $\mathscr{A} \subseteq \Marrow$ the factor graph $\mathscr{G}^\mathscr{A} : \MM \to \graph(G)$ given by \[ \mathscr{G}^\mathscr{A}(\omega) = \{ (g, h) \in \omega \times \omega \mid (g^{-1}\omega, g^{-1}h) \in \mathscr{A} \}. \] Thus we see that \emph{Borel subsets $\mathscr{A} \subseteq \Marrow$ of the rerooting groupoid's arrow space correspond to Borel factor (directed) graphs $\mathscr{G} : \MM \to \graph(G)$}. \begin{remark} If $\mu$ is a point process, then the correspondence still works in one direction: namely, we can associate subsets $A \subseteq {\mathbb{M}_0}$ (or $\mathscr{A} \subseteq \Marrow$) to $\mu$-thinnings $\theta^A: (\MM, \mu) \to \MM$ (or $\mu$-factor graphs $\mathscr{G}^\mathscr{A}: (\MM, \mu) \to \graph(G)$ respectively). We run into trouble in the other direction: suppose $\theta : \MM \to \MM$ is a thinning, but only defined $\mu$-almost everywhere.
We wish to restrict it to ${\mathbb{M}_0}$, but a priori this makes no sense -- ${\mathbb{M}_0}$ is a subset of measure zero. It turns out that there is a way to make sense of this due to equivariance, but it will require some more theory that we explain in the next section. \end{remark} \subsection{The Palm measure} We will now associate to a (finite intensity) point process $\mu$ a probability measure $\mu_0$ defined on the rerooting groupoid ${\mathbb{M}_0}$. When the ambient space is unimodular, this will turn the rerooting groupoid into a \emph{probability measure preserving (pmp) discrete groupoid}. Informally, the Palm measure of a point process $\Pi$ is the process conditioned to contain the root. A priori this makes no sense (the subset ${\mathbb{M}_0}$ has probability zero), but there is an obvious way one could try to interpret the statement: condition the process to contain a point in an $\e$-ball about the root, and take the limit as $\e$ goes to zero. See Theorem 13.3.IV of \cite{daley2007introduction} and Section 9.3 of \cite{MR3791470} for further details. We will instead take the following concept of \emph{relative rates} as our basic definition: \begin{defn} Let $\Pi$ be a point process of finite intensity with law $\mu$. Its (normalised) \emph{Palm measure} is the probability measure $\mu_0$ defined on Borel subsets of ${\mathbb{M}_0}$ by \[ \mu_0(A) := \frac{\intensity(\theta^A(\Pi))}{\intensity(\Pi)}, \] where $\theta^A$ is the thinning associated to $A \subseteq {\mathbb{M}_0}$. More explicitly, \[ \mu_0(A) := \frac{1}{\intensity(\mu) \lambda(U)} \EE_\mu \left[ \#\{g \in U \mid g^{-1}\omega \in A \} \right], \] where $U \subseteq G$ is any measurable set with $0 < \lambda(U) < \infty$. To make formulas simpler, we will often choose $U$ to be of unit volume. Alternatively, note that by the definition of intensity we may write \[ \mu_0(A) = \frac{\EE \left[ \#\{g \in U \mid g^{-1}\Pi \in A \} \right]}{\EE \@ifstar{\oldabs}{\oldabs*}{U \cap \Pi}}.
\] We also define the Palm measure of a $\Xi$-marked point process similarly, with $\Xi^{\mathbb{M}_0}$ taking the place of ${\mathbb{M}_0}$. A \emph{Palm version} of $\Pi$ is any random variable $\Pi_0$ with law $\mu_0$. That is, we require that for all Borel $B \subseteq {\mathbb{M}_0}$ we have \[ \PP[\Pi_0 \in B] = \mu_0(B). \] \end{defn} We now describe some \emph{Palm calculus}. If $\Pi$ is a point process with Palm version $\Pi_0$ and $\Phi(\Pi)$ is some factor of it, then we wish to express the Palm version $\Phi(\Pi)_0$ of $\Phi(\Pi)$ in terms of $\Pi_0$ and $\Phi$. The Palm calculus tells us how this is done. It will be sufficient for our purposes to compute the Palm measures of factors which are forgettings, thinnings, coloured thickenings, and colourings. In each case the answer is more or less obvious, so we will give an informal description of the answer and then verify that it satisfies the required property. \begin{example}[Forgetting labels] If $\Pi$ is a labelled point process, then the Palm version of $\Pi$ \emph{after} we forget the labels is the same thing as the Palm version $\Pi_0$ with its labels forgotten. We prove this after the following clarification: when talking about the Palm measure of a $\Xi$-marked point process, it is important in the above to choose the correct thinning. Recall from Remark \ref{thinningconfusio} that for a subset $A \subseteq \Xi^{\mathbb{M}_0}$ one can discuss \emph{two} possible kinds of thinnings, namely \[ \theta^A : \Xi^\MM \to \Xi^\MM \text{ or } \pi \circ \theta^A : \Xi^\MM \to \MM, \] where $\pi : \Xi^\MM \to \MM$ is the map that forgets labels. It is the \emph{former} kind of thinning one should take. Note that if $\Pi$ is a $\Xi$-marked point process, then its intensity remains the same if you forget the marks, that is, $\intensity \Pi = \intensity \pi(\Pi)$. More generally, the operation of taking the Palm version \emph{commutes} with forgetting labels. That is, $\pi(\Pi_0) = (\pi(\Pi))_0$.
To see this, let $B \subseteq {\mathbb{M}_0}$, and observe \begin{align*} \PP[ \pi(\Pi_0) \in B] &= \PP[\Pi_0 \in \pi^{-1}(B)] \\ &= \frac{\intensity \theta^{\pi^{-1}(B)}(\Pi)}{\intensity \Pi} \\ &= \frac{\intensity \pi(\theta^{\pi^{-1}(B)}(\Pi))}{\intensity \pi(\Pi)} \\ &= \frac{ \intensity \theta^B(\pi(\Pi))}{\intensity \pi(\Pi)} \\ &= \PP[ \pi(\Pi)_0 \in B], \end{align*} where we simply followed our nose. \end{example} \begin{example}[Lattice actions] If $\Gamma < G$ is a lattice, then the Palm measure of the associated lattice shift is just $\delta_\Gamma$ -- that is, the atomic measure on $\Gamma \in {\mathbb{M}_0}(G)$. More generally, if $\Gamma \curvearrowright (X, \mu)$ is a pmp action, then the Palm measure of the associated induced $X$-marked point process is its \emph{symbolic dynamics}. That is, the map $\Sigma : (X, \mu) \to X^\MM$ given by \[ \Sigma(x) = \{ (\gamma, \gamma^{-1} \cdot x) \in G \times X \mid \gamma \in \Gamma \} \] pushes forward $\mu$ to the Palm measure. In words, you sample a $\mu$-random point $x \in X$ and track its orbit under $\Gamma$ (the inverse is an artefact of our left bias). \end{example} \begin{remark} Suppose $\Pi$ is a finite intensity point process whose Palm measure is atomic, say $\Pi_0 = \Omega$ almost surely for some $\Omega \in {\mathbb{M}_0}$. Then $\Omega$ is a lattice in $G$. Note that $\Omega$ is automatically a discrete subset of $G$, and a simple mass transport argument shows that it is a subgroup. The covolume of this subgroup is the reciprocal of the intensity of $\Pi$. \end{remark} \begin{example}[Mecke-Slivnyak Theorem]\label{palmofpoisson} If $\Pi$ is a Poisson point process, then its Palm version has the same law as $\Pi \cup \{0\}$, where $0 \in G$ is the identity.
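For $G = \RR$ this can be tested numerically against the defining formula $\mu_0(A) = \EE \#\{g \in U \mid g^{-1}\Pi \in A\} / (\intensity(\Pi)\lambda(U))$: taking $A = \{\omega \in {\mathbb{M}_0} \mid \omega \cap ((-r,r)\setminus\{0\}) = \empt\}$, the Mecke-Slivnyak theorem predicts $\mu_0(A) = \PP[\Pi \cap (-r,r) = \empt] = e^{-2\lambda r}$. A simulation sketch (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, r, L, trials = 1.0, 0.5, 50.0, 400

total = 0
for _ in range(trials):
    # Poisson process on a window padded by r, so that every point of
    # U = [0, L] sees all of its neighbours within distance r.
    pts = np.sort(rng.uniform(-r, L + r, rng.poisson(lam * (L + 2 * r))))
    iso = np.ones(len(pts), dtype=bool)   # "no other point within r"
    iso[1:] &= np.diff(pts) > r
    iso[:-1] &= np.diff(pts) > r
    in_U = (pts >= 0) & (pts <= L)
    total += int(np.count_nonzero(iso & in_U))

# mu_0(A) estimated via E #{g in U : g^{-1} Pi in A} / (lam * |U|)
palm_est = total / (lam * L * trials)
assert abs(palm_est - np.exp(-2 * lam * r)) < 0.03
```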
In fact, this is a \emph{characterisation} of the Poisson point process: if the Palm measure of $\mu$ is obtained by simply adding the root\footnote{More formally, consider the map $F : \MM \to {\mathbb{M}_0}$ given by $F(\omega) = \omega \cup \{0\}$; by ``adding the root'' we mean the Palm measure $\mu_0$ is the pushforward $F_*\mu$.}, then $\mu$ is the Poisson point process (of some intensity). \end{example} The proof of the above fact can be found in Section 9.2 of \cite{MR3791470}. As a consequence, the Palm measure of the IID Poisson is the IID of the Palm measure of the Poisson itself. \begin{example}\label{palmofcolouring} The Palm version $\mathscr{C}^A(\Pi)_0$ of the $2$-colouring $\mathscr{C}^A : \MM \to \{0,1\}^\MM$ determined by a subset $A \subseteq {\mathbb{M}_0}$ (as in Example \ref{colouring}) is simply $\mathscr{C}^A(\Pi_0)$. \end{example} \begin{example}[Thinnings] The Palm version $\theta(\Pi)_0$ of a thinning $\theta = \theta^A$ of $\Pi$ (determined by a subset $A \subseteq {\mathbb{M}_0}$) is described in terms of its Palm version $\Pi_0$ as a conditional probability as follows: \[ \PP[\theta(\Pi)_0 \in B] = \PP[\theta(\Pi_0) \in B \mid \Pi_0 \in A] \] for any $B \subseteq {\mathbb{M}_0}$. That is, the Palm measure $\theta(\Pi)_0$ can be obtained by sampling from $\Pi_0$ conditioned on the event that the root is retained in the thinning, and then applying the thinning. To see this, first one should work from the definitions to show that $\theta^B(\theta^A(\Pi)) = \theta^{A \cap (\theta^A)^{-1}(B)}(\Pi)$. Therefore \begin{align*} \PP[(\theta(\Pi))_0 \in B] &= \frac{\intensity \theta^B(\theta^A(\Pi))}{\intensity \theta^A(\Pi)} && \\ &= \left.{\frac{\intensity \theta^{A \cap (\theta^A)^{-1}(B)}(\Pi)}{\intensity \Pi}} \middle/ {\frac{\intensity \theta^A(\Pi)}{\intensity \Pi}}\right.
&& \text{By the observation} \\ &= \frac{\PP[\Pi_0 \in A \cap (\theta^A)^{-1}(B)]}{\PP[\Pi_0 \in A]} && \\ &= \frac{\PP[\{\theta(\Pi_0) \in B\} \cap \{\Pi_0 \in A\}]}{\PP[\Pi_0 \in A]}, \end{align*} which is exactly the definition of the desired conditional probability. \end{example} \begin{example}\label{palmofthickening} Let $\Theta = \Theta^F$ be a constant thickening determined by $F \subset G$, as described in Example \ref{constantthickening}. If $\Pi$ is an $F$-separated process, then the Palm version $\Theta(\Pi)_0$ of the thickening $\Theta(\Pi)$ is as follows: sample from $\Pi_0$, and independently choose to root $\Theta(\Pi_0)$ at a uniformly chosen element $X$ of $F$. That is, $\Theta(\Pi)_0 \overset{d}{=} X^{-1} \Theta(\Pi_0)$. To see this, we compute\footnote{When we define the Palm measure of a set $B \subseteq {\mathbb{M}_0}$, we usually write ``$g \in U$'' rather than ``$g \in U \cap \Pi$'', as the condition $g^{-1} \Pi \in B$ already implies $g \in \Pi$.
For this computation it is better to really spell it out though.} as follows: \begin{align*} &\PP[\Theta(\Pi)_0 \in B] = \frac{1}{\intensity \Theta(\Pi)} \EE[ \#\{g \in U \cap \Pi F \mid g^{-1} \Theta(\Pi) \in B \} ] && \text{By definition} \\ &= \frac{1}{\@ifstar{\oldabs}{\oldabs*}{F}} \frac{1}{\intensity \Pi} \sum_{f \in F} \EE[ \#\{g \in U \cap \Pi f \mid g^{-1} \Theta(\Pi) \in B \} ] && \text{By Example \ref{constantthickening}} \\ &= \frac{1}{\@ifstar{\oldabs}{\oldabs*}{F}} \frac{1}{\intensity \Pi} \sum_{f \in F} \EE[ \#\{h \in Uf^{-1} \cap \Pi \mid h^{-1} \Pi \in \Theta^{-1}(fB) \} ] && \text{Writing } g = hf \text{, by equivariance} \\ &= \frac{1}{\@ifstar{\oldabs}{\oldabs*}{F}} \frac{1}{\intensity \Pi} \sum_{f \in F} \EE[ \#\{h \in U \cap \Pi \mid h^{-1} \Pi \in \Theta^{-1}(fB) \} ] && \text{By unimodularity} \\ &= \frac{1}{\@ifstar{\oldabs}{\oldabs*}{F}} \sum_{f \in F} \PP[ \Pi_0 \in \Theta^{-1}(fB)] && \text{By definition} \\ &= \frac{1}{\@ifstar{\oldabs}{\oldabs*}{F}} \sum_{f \in F} \PP[ f^{-1}\Theta(\Pi_0) \in B] && \\ &= \PP[X^{-1} \Theta(\Pi_0) \in B]. \end{align*} \end{example} The Palm measure has an associated integral equation, which we will refer to as ``the CLMM'', following the convention of \cite{baszczyszyn:cel-01654766}. It is also referred to as ``the refined Campbell theorem'' in \cite{last2011poisson} and \cite{vere2003introduction}, for example. \begin{thm}[Campbell-Little-Mecke-Matthes]\label{CLMM} Let $\mu$ be a finite intensity point process on $G$ with Palm measure $\mu_0$. Write $\EE$ and $\EE_0$ for the associated integral operators. If $f : G \times {\mathbb{M}_0} \to \RR_{\geq 0}$ is a measurable function (\emph{not} necessarily invariant in any way), then \[ \EE \left[\sum_{x \in \omega} f(x, x^{-1}\omega) \right] = \intensity(\mu) \EE_0 \left[ \int_G f(x, \omega) d\lambda(x)\right].
\] \end{thm} The proof of the above theorem is a standard monotone convergence argument, and as such we will only give the first step of the argument and leave the details to the reader. Observe that by definition of the Palm measure, for any $U \subseteq G$ of finite volume and any measurable $A \subseteq {\mathbb{M}_0}$, we have \[ \EE\left[\#\{ g\in U \mid g^{-1}\Pi \in A\} \right] = \intensity(\mu) \mu_0(A) \lambda(U). \] Now rewriting the left-hand side as a sum and the right-hand side as an integral, we see \[ \EE\left[\sum_{g \in \Pi} \mathbbm{1}[(g, g^{-1}\Pi) \in U \times A] \right] = \intensity(\mu) \int_{{\mathbb{M}_0}} \int_G \mathbbm{1}[(g, \omega) \in U \times A]d\lambda(g)d\mu_0(\omega), \] and observe that this is exactly the claimed theorem (in slightly different notation) in the case of $f(x,\omega) = \mathbbm{1}[(x, \omega) \in U \times A]$. The theorem follows for arbitrary $f$ by the monotone convergence theorem. \begin{remark}\label{VIF} If $\nu$ is a point process with $\nu_0 = \mu_0$, then $\nu = \mu$, that is, the Palm measure \emph{determines} the point process. To see this, we use the existence of a map $\mathscr{V} : [0,1] \times {\mathbb{M}_0} \to \MM$ with the property that if $\mu$ is \emph{any} point process with Palm measure $\mu_0$, then $\mathscr{V}_*(\text{Leb} \otimes \mu_0) = \mu$. This is a consequence of the \emph{Voronoi inversion formula}, see Section 9.4 of \cite{MR3791470}. \end{remark} \subsection{Unimodularity and the Mass Transport Principle}\label{unimodularity} The Mass Transport Principle is a powerful tool in percolation theory, see \cite{MR3616205} for an introduction and historical context. For the convenience of the reader, we include a proof of it for our context and in our notation, but no originality is claimed.
For further generalisations of the mass transport principle see \cite{kallenberg2011invariant}, \cite{gentner2011palm}, and Chapter 7 of \cite{kallenberg2017random}, and for further exposition in the context of point processes see \cite{baszczyszyn:cel-01654766}. The source and target maps $s, t : \Marrow \to {\mathbb{M}_0}$ induce a pair of measures on $\Marrow$ defined by \[ \muarrow^s(\mathscr{G}) = \int_{\mathbb{M}_0} \@ifstar{\oldabs}{\oldabs*}{s^{-1}(\omega) \cap \mathscr{G}} d\mu_0(\omega), \text{ and } \muarrow^t(\mathscr{G}) = \int_{\mathbb{M}_0} \@ifstar{\oldabs}{\oldabs*}{t^{-1}(\omega) \cap \mathscr{G}} d\mu_0(\omega). \] In our factor graph interpretation this corresponds to the expected outdegree and indegree of $\mathscr{G}$ respectively, where we view $\mathscr{G}$ as a \emph{directed} rooted graph. To see this, recall that for a rooted configuration $\omega \in {\mathbb{M}_0}$, \[ s^{-1}(\omega) = \{(\omega, g) \in {\mathbb{M}_0} \times G \mid g \in \omega\} \text{ and } t^{-1}(\omega) = \{(g^{-1}\omega, g^{-1}) \in {\mathbb{M}_0} \times G \mid g \in \omega \}, \] and that there is an edge from $0$ to $g$ in $\mathscr{G}(\omega)$ exactly when $(\omega, g) \in \mathscr{G}$, and an edge from $g$ to $0$ exactly when $(g^{-1}\omega, g^{-1}) \in \mathscr{G}$. Thus \[ \overrightarrow{\deg}_0({\mathscr{G}(\omega)}) = \@ifstar{\oldabs}{\oldabs*}{s^{-1}(\omega) \cap \mathscr{G}} \text{ and } \overleftarrow{\deg}_0({\mathscr{G}(\omega)}) = \@ifstar{\oldabs}{\oldabs*}{t^{-1}(\omega) \cap \mathscr{G}}. \] \begin{remark} We have had to adapt notation to suit our purposes. Usually a groupoid would be denoted by a letter like $\mathcal{G}$, and that is the set of arrows. Then its units would be denoted $\mathcal{G}_0$. We have tried to match this up with the necessary notation from point process theory as closely as possible.
We choose to denote outdegree by an expression like $\overrightarrow{\deg}_0({\mathscr{G}(\omega)})$ instead of $\deg^+_{\mathscr{G}(\omega)}(0)$ as the arrows are more evocative, and the subscript notation becomes very small (as in, for instance, $\deg^+_{\mathscr{G}(\Pi_0)}(0)$). \end{remark} \begin{prop}\label{pmpgroupoid} If $G$ is \emph{unimodular}, then $\muarrow^s = \muarrow^t$. That is, $(\Marrow, \muarrow)$ forms a discrete pmp groupoid. Equivalently, if $\Pi_0$ is the Palm version of any point process $\Pi$ on $G$, then \[ \EE\left[ \overrightarrow{\deg}_0({\mathscr{G}(\Pi_0)}) \right] = \EE\left[ \overleftarrow{\deg}_0({\mathscr{G}(\Pi_0)}) \right]. \] We will denote by $\muarrow$ this common measure $\muarrow^s = \muarrow^t$. \end{prop} \begin{proof}[Proof of Proposition \ref{pmpgroupoid}] Fix $U \subseteq G$ of unit volume. We compute: \begin{align*} \muarrow^s(\mathscr{G}) &= \EE_{\mu_0} \left[ \sum_{g \in \omega} \mathbbm{1}[{(\omega, g) \in \mathscr{G}}] \right] && \text{By definition} \\ &= \EE_{\mu_0} \left[\int_G \mathbbm{1}[{x \in U}] \sum_{g \in \omega} \mathbbm{1}[{(\omega, g) \in \mathscr{G}}] d\lambda(x) \right] && \\ &= \frac{1}{\intensity \mu} \EE_\mu \left[ \sum_{x \in \omega} \mathbbm{1}[{x \in U}] \sum_{g \in x^{-1}\omega} \mathbbm{1}[{(x^{-1}\omega, g) \in \mathscr{G}}] \right] && \text{By the CLMM} \\ &= \frac{1}{\intensity \mu} \EE_\mu \left[ \sum_{h \in \omega} \sum_{g^{-1} \in h^{-1}\omega} \mathbbm{1}[{hg^{-1} \in U}] \mathbbm{1}[{(gh^{-1}\omega, g) \in \mathscr{G}}] \right] && \\ &= \EE_{\mu_0} \left[ \int_G \sum_{g^{-1} \in \omega} \mathbbm{1}[{hg^{-1} \in U}] \mathbbm{1}[{(g\omega, g) \in \mathscr{G}}] d\lambda(h) \right] && \text{By the CLMM} \\ &= \EE_{\mu_0} \left[ \int_G \sum_{g \in \omega} \mathbbm{1}[{hg \in U}] \mathbbm{1}[{(g^{-1}\omega, g^{-1}) \in \mathscr{G}}] d\lambda(h) \right] && \\ &= \EE_{\mu_0} \left[\sum_{g \in \omega} \mathbbm{1}[{(g^{-1}\omega, g^{-1}) \in \mathscr{G}}] \int_G
\mathbbm{1}[{h \in Ug^{-1}}] d\lambda(h) \right] && \\ &= \EE_{\mu_0} \left[\sum_{g \in \omega} \mathbbm{1}[{(g^{-1}\omega, g^{-1}) \in \mathscr{G}}] \right] && \text{By unimodularity} \\ &= \muarrow^t(\mathscr{G}), \end{align*} as desired, where the CLMM is applied with \[ f_1(x,\omega) = \mathbbm{1}[x \in U] \sum_{g \in \omega} \mathbbm{1}[(\omega, g) \in \mathscr{G}] \] in the first instance and \[ f_2(x, \omega) = \sum_{g^{-1} \in \omega} \mathbbm{1}[xg^{-1} \in U]\mathbbm{1}[(g\omega, g) \in \mathscr{G}] \] in the second.\end{proof} \begin{defn} The \emph{Palm groupoid} of a point process $\Pi$ with law $\mu$ is $(\Marrow, \muarrow)$. If $\Pi$ is free, then this groupoid is principal, and thus we refer to $\Pi$'s \emph{Palm equivalence relation} $({\mathbb{M}_0}, \Rel, \mu_0)$. \end{defn} \begin{defn}\label{edgemeasure} Let $\Pi$ be a point process and $\mathscr{G}$ an \emph{undirected} factor graph of $\Pi$. Its \emph{edge density} is $\EE[ \deg_0(\mathscr{G}(\Pi_0))]$, where $\Pi_0$ is the Palm version of $\Pi$. \end{defn} By the above proposition, if $\mathscr{G}'$ is any \emph{orientation} of $\mathscr{G}$, then the edge density can be expressed as \[ \EE[ \deg_0(\mathscr{G}(\Pi_0))] = 2 \EE\left[ \overrightarrow{\deg}_0({\mathscr{G'}(\Pi_0)}) \right]. \] Strictly speaking, then, we should always work with \emph{directed} factor graphs, but because of this identity we will often think of factor graphs as undirected. \begin{thm}[The Mass Transport Principle]\label{MTP} Let $G$ be a unimodular group, and $\Pi$ a point process on $G$ with Palm version $\Pi_0$. Suppose $T : G \times G \times \MM \to \RR_{\geq 0}$ is a measurable function which is \emph{diagonally invariant} in the sense that $T(gx, gy; g\omega) = T(x, y; \omega)$ for all $g \in G$. Then \[ \EE \left[ \sum_{y \in \Pi_0} T(0, y; \Pi_0) \right] = \EE \left[ \sum_{x \in \Pi_0} T(x, 0; \Pi_0) \right].
\] \end{thm} We view $T(x, y; \Pi_0)$ as representing an amount of \emph{mass} sent from $x$ to $y$ when the configuration is $\Pi_0$. Thus the integrand on the left-hand side represents the total mass sent out from the root, and similarly the integrand on the right-hand side represents the total mass received by the root. \begin{proof}[Proof of Theorem \ref{MTP}] The mass transport principle follows from Proposition \ref{pmpgroupoid}. First, observe that as in the proof of the CLMM, there is an integral equation implied by Proposition \ref{pmpgroupoid} and a monotone convergence argument. To wit, \[ \EE\left[ \sum_{g \in \Pi_0} f(\Pi_0, g) \right] = \EE\left[ \sum_{g \in \Pi_0} f(g^{-1}\Pi_0, g^{-1})\right], \] where $f : \Marrow \to \RR_{\geq 0}$ is a measurable function (defined $\muarrow$ almost everywhere). We now apply the integral equation with $f(\omega, g) = T(0, g; \omega)$. Note that \[ f(g^{-1}\omega, g^{-1}) = T(0, g^{-1}; g^{-1}\omega) = T(g, 0; \omega), \] where the second equality is by diagonal invariance. Hence the integral equation for $f$ yields \[ \EE\left[ \sum_{g \in \Pi_0} T(0, g; \Pi_0) \right] = \EE\left[ \sum_{g \in \Pi_0} T(g, 0; \Pi_0)\right], \] which is exactly the mass transport principle expressed with a differently named integrating variable. \end{proof} \begin{remark}\label{palmformula} One can use the CLMM formula (see Theorem \ref{CLMM}) to express $\muarrow(\mathscr{G})$ without reference to the Palm measure. Let $U \subseteq G$ be of unit volume, and apply the formula to $f(x,\omega) = \mathbbm{1}[{x \in U}] \overrightarrow{\deg}_0({\mathscr{G}(\omega)})$, resulting in \[ \muarrow(\mathscr{G}) = \frac{1}{\intensity \Pi} \EE \left[ \sum_{x \in \Pi} \mathbbm{1}[{x \in U}] \overrightarrow{\deg}_x({\mathscr{G}(\Pi)}) \right] \] (note that by equivariance $\overrightarrow{\deg}_0({\mathscr{G}(x^{-1}\omega)}) = \overrightarrow{\deg}_x({\mathscr{G}(\omega)})$).
\end{remark} As an application of the CLMM, we will find an expression for the Palm version of general thickenings: \begin{example}[Palm measures of general thickenings]\label{palmofgeneralthickening} Suppose one has for each configuration $\omega \in \MM$ and each $g \in \omega$ a measurably defined \emph{finite} subset $F_\omega(g)$ satisfying the following properties: \begin{description} \item[Monotonicity:] $g \in F_\omega(g)$, \item[Separation:] If $g, h \in \omega$ are \emph{distinct} then $F_\omega(g) \cap F_\omega(h) = \empt$, and \item [Equivariance:] For all $\gamma \in G$, we have $F_{\gamma \omega}(\gamma g) = \gamma F_\omega(g)$. \end{description} Then one can define a thickening $\Theta : \MM \to \MM$ by \[ \Theta(\omega) = \bigsqcup_{g \in \omega} F_\omega(g). \] That is, each point $g \in \omega$ looks at the current configuration, and adds points $F_\omega(g)$ locally to it according to some equivariant rule. Every thickening has this form (see Definition \ref{voronoidefn} and the ensuing discussion). We refer to points of $\omega$ as \emph{progenitors} and points of $F_\omega(g)$ as $g$'s \emph{spawn} in $\omega$. It stands to reason that if $\Pi$ is a point process satisfying the above rules almost surely, then $\intensity \Theta(\Pi) = \EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} \cdot \intensity \Pi$. Just as in Example \ref{constantthickening} though, this will require unimodularity to prove, this time in the form of the Mass Transport Principle. Let us identify the thickening with its input/output version. Note that if we compute the Palm version of the latter, then we get it for the former by simply forgetting the labels. Our reason for doing this is simple: we need to be able to identify which points were progenitors and which points are spawn. This is only possible if we use the input/output version, but the downside is that it is more notationally cumbersome.
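The intensity identity can again be checked by simulation on $G = \RR$ before proving it. The rule below is our own toy instance of such an $F_\omega(g)$: a point spawns one extra point a third of the way to its right neighbour whenever the gap exceeds $1$ (so monotonicity, separation and equivariance all hold), and under the Palm measure of a Poisson process the gap to the right neighbour is exponential (up to negligible torus edge effects), giving $\EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = 1 + e^{-\lambda}$.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, L = 2.0, 20000.0

# Poisson point process on the torus R/LZ
pts = np.sort(rng.uniform(0.0, L, rng.poisson(lam * L)))
gaps = np.diff(np.append(pts, pts[0] + L))  # gap to the next point

# Equivariant rule: F_omega(g) = {g} if the gap after g is <= 1,
# and {g, g + gap/3} otherwise; each spawn lies strictly inside its gap,
# so the sets F_omega(g) are pairwise disjoint (separation).
spawn = (pts + gaps / 3)[gaps > 1.0] % L
thickened = np.concatenate([pts, spawn])

# Palm prediction: E|F_{Pi_0}(0)| = 1 + P[Exp(lam) > 1] = 1 + exp(-lam)
predicted = (1 + np.exp(-lam)) * lam
assert abs(len(thickened) / L - predicted) < 0.05
```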
We first verify that $\intensity \Theta(\Pi) = \EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} \cdot \intensity \Pi$. In order to apply mass transport, we need to know the following fact: \[ \PP[\Theta(\Pi)_0 \in A | 0 \text{ is a progenitor}] = \PP[\Theta(\Pi_0) \in A]. \] This follows from the definitions by similar manipulations to those we've already seen. With this fact in hand, define\footnote{An advantage of using the input/output version of the thickening is that we can exactly identify who spawned whom in a well-defined way.} a transport as follows: \[ T(x, y, \Theta(\Pi)) = \mathbbm{1}[x \text{ spawned } y \text{ in } \Theta(\Pi)]. \] Then the total mass received by the root is always one (as everyone is spawned by someone), and hence the expected mass received is one. The expected mass sent out is \[ \EE_0\left[ \mathbbm{1}[0 \text{ is a progenitor}] \cdot \#\{\text{spawn of } 0\}\right] = \PP[0 \text{ is a progenitor}] \cdot \EE[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}], \] where $\EE_0$ denotes expectation with respect to the Palm measure of $\Theta(\Pi)$, and the equality follows from the fact above and the definition of conditional probability. We have by the definition of progenitor \[ \PP[0 \text{ is a progenitor}] = \frac{\intensity \Pi}{\intensity \Theta(\Pi)}, \] so $\intensity \Theta(\Pi) = \EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} \cdot \intensity \Pi$ by the mass transport principle. We now express the Palm version of $\Theta(\Pi)$ in terms of $\Pi_0$ and $\Theta$. Note that for this to be defined we must assume $\Pi$ has finite intensity and that $\EE[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}] < \infty$.
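The intensity identity just proved can be sanity-checked by simulation. Below is a sketch under toy assumptions of our own (a Poisson number of points in a unit-volume window, and a thickening in which each point spawns itself plus, with probability $\tfrac{1}{2}$, one extra nearby point, so that $\EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = \tfrac{3}{2}$; only the point counts matter for the intensity):

```python
import math
import random

def sample_poisson(rng, lam):
    # Knuth-style inverse-transform sampling of a Poisson(lam) variate
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return k
        k += 1

def intensity_ratio(lam=5.0, trials=2000, seed=1):
    """Empirical intensity(Theta(Pi)) / intensity(Pi) for a toy
    thickening in which each point spawns 1 or 2 points with equal
    probability, so E|F| = 1.5."""
    rng = random.Random(seed)
    base_points = thick_points = 0
    for _ in range(trials):
        k = sample_poisson(rng, lam)      # points of Pi in one window
        base_points += k
        # each point spawns itself, plus one extra with probability 1/2
        thick_points += k + sum(rng.random() < 0.5 for _ in range(k))
    return thick_points / base_points

ratio = intensity_ratio()  # close to E|F| = 1.5
```

The empirical ratio of intensities concentrates around $\EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}$, as the identity predicts.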
Let \begin{itemize} \item $N$ be a random variable with \[ \PP[N = n] = \frac{n\PP[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = n]}{\EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}} = \frac{n\PP[0 \text{ spawns } n \text{ points of } \Theta(\Pi_0)]}{\EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}}, \] \item $\Upsilon^n$ denote $\Pi_0$ conditioned on the event $\{0 \text{ spawns } n \text{ points of } \Theta(\Pi_0)\}$, \item $X$ be a uniformly chosen element of $F_{\Upsilon^n}(0)$ (conditional on $\Upsilon^n$). \end{itemize} We claim that $X^{-1}\Theta(\Upsilon^N)$ is a Palm version of $\Theta(\Pi)$. In words, we are sampling from the Palm measure $\Pi_0$ biased\footnote{To see that some kind of size biasing is required, consider the point process $\ZZ + \texttt{Unif}[0,1] \subset \RR$, and define a thickening which leaves points marked $0$ as they are and adds a thousand points tightly packed around points marked $1$. A ``typical point'' of the resulting process should look more like a configuration with a thousand points near the origin, and the size biasing accounts for this.} towards the configurations that spawn more points, and then applying the thickening and rooting at one of the spawns uniformly at random. Let $A \subseteq \{\text{Red, Blue, Purple}\}^{\mathbb{M}_0}$. We find an expression for $\PP[\Theta(\Pi)_0 \in A]$ by using mass transport: define \[ T(x, y; \Theta(\Pi)) = \mathbbm{1}[\{x \text{ spawns } y \text{ in } \Theta(\Pi) \} \cap \{y \in \theta^A(\Theta(\Pi)) \}]. \] The expected mass received with respect to $T$ is exactly $\PP[\Theta(\Pi)_0 \in A]$.
The expected mass out is \begin{align*} &\EE\left[\sum_{y \in \Theta(\Pi)_0} T(0, y; \Theta(\Pi)_0) \right] \\ &= \EE\left[\mathbbm{1}[0 \text{ is a progenitor}] \cdot \#\{0 \text{ spawns } y \text{ with } y \in \theta^A(\Theta(\Pi)_0)\} \right] \\ &= \PP[0 \text{ is a progenitor}] \EE\left[\#\{0 \text{ spawns } y \text{ with } y \in \theta^A(\Theta(\Pi)_0)\} | 0 \text{ is a progenitor} \right]\\ &= \frac{1}{\EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}} \EE \left[\#\{y \in F_{\Pi_0}(0) \mid y^{-1}\Theta(\Pi_0) \in A\} \right] \\ &= \frac{1}{\EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}} \sum_n \EE \left[\#\{y \in F_{\Pi_0}(0) \mid y^{-1}\Theta(\Pi_0) \in A\} | \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = n \right] \PP[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = n]. \end{align*} We can now match up this expression with our earlier description of $X^{-1}\Theta(\Upsilon^N)$. Recall that if $Y \subseteq [n]$ is a random subset, then $\EE \@ifstar{\oldabs}{\oldabs*}{Y} = n \PP[X \in Y]$, where $X$ is a uniformly chosen element of $[n]$, independent of $Y$. \begin{align*} &\EE\left[\sum_{y \in \Theta(\Pi)_0} T(0, y; \Theta(\Pi)_0) \right] \\ &= \frac{1}{\EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}} \sum_n \EE \left[\#\{y \in F_{\Pi_0}(0) \mid y^{-1}\Theta(\Pi_0) \in A\} | \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = n \right] \PP[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = n] \\ &= \frac{1}{\EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}} \sum_n \EE \left[\#\{y \in F_{\Upsilon^n}(0) \mid y^{-1}\Theta(\Upsilon^n) \in A\} \right] \PP[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = n] \\ &= \sum_n n \PP[X^{-1}\Theta(\Upsilon^n) \in A] \frac{\PP[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = n]}{\EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}} \\ &= \sum_n \PP[X^{-1}\Theta(\Upsilon^n) \in A] \PP[N = n] \\ &= \PP[X^{-1} \Theta(\Upsilon^N) \in A], \end{align*} as desired. \end{example} In fact, every thickening can be expressed \`{a} la Example \ref{palmofgeneralthickening}, as we shall now see.
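Before moving on, note that the size-biased law $\PP[N = n] = n\,\PP[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = n] / \EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}$ used above is a genuine probability distribution, with mean $\EE\left[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}^2\right] / \EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}$. A small exact computation, with an illustrative spawn-size distribution of our own choosing:

```python
from fractions import Fraction

def size_biased(pmf):
    """Size-bias a pmf on the positive integers: P[N=n] = n*p_n / E,
    where E is the mean of the original pmf."""
    mean = sum(n * p for n, p in pmf.items())
    return {n: n * p / mean for n, p in pmf.items()}

# Each progenitor spawns 1 or 3 points with equal probability, E|F| = 2.
pmf = {1: Fraction(1, 2), 3: Fraction(1, 2)}
biased = size_biased(pmf)   # {1: 1/4, 3: 3/4}
```

The mean of the biased law is $5/2 = \EE\left[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}^2\right] / \EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}$, reflecting the same size-biasing phenomenon as the footnote above: a typical spawn is more likely to belong to a large family.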
\begin{defn}\label{voronoidefn} Let $\omega \in \MM$ be a configuration, and $g \in \omega$ one of its points. The associated \emph{Voronoi cell} is \[ V_\omega(g) = \{ x \in G \mid d(x, g) \leq d(x, h) \text{ for all } h \in \omega \}. \] The associated \emph{Voronoi tessellation} is the ensemble of closed sets $\{V_\omega(g)\}_{g \in \omega}$. \end{defn} Left-invariance of the metric $d$ implies that the Voronoi cells are equivariant in the sense that for all $\gamma \in G$, we have $V_{\gamma \omega}(\gamma g) = \gamma V_\omega(g)$. Note that discreteness of the configuration implies that the Voronoi tessellation forms a locally finite \emph{cover} of the ambient space by closed sets. We would like to think of these sets as forming a \emph{partition} of the ambient space, but this isn't necessarily true even in the measured sense: the boundaries of the Voronoi cells can have positive volume. For example, let $\Gamma$ be a discrete group and consider $\Gamma \times \{0\} \subset \Gamma \times \RR$. Lie groups and Riemannian symmetric spaces essentially avoid this deficiency, as hyperplanes\footnote{Sets of the form $\{x \in X \mid d(x, g) = d(x, h) \}$ for a fixed distinct pair $g, h \in X$.} have zero volume. So depending on the examples one is interested in one can assume that the Voronoi cells are essentially disjoint (that is, that their intersection is Haar null). If this property is necessary then one can make a small modification to ensure it: we introduce a \emph{tie breaking} function that allows points belonging to multiple Voronoi cells to decide which one they shall belong to. Take any\footnote{Recall that standard Borel spaces are isomorphic if they have the same cardinality.} Borel isomorphism $T : G \to \RR$. Let us define \[ \begin{split} V_\omega^T(g) = \{ x \in G \mid \text{for all } h \in \omega \setminus \{g\},\; d(x, g) < d(x, h)\\ \text{ or } d(x, g) = d(x,h) \text{ and } T(x^{-1}g) < T(x^{-1}h) \}. 
\end{split} \] Note that these tie-broken Voronoi cells form a \emph{measurable} partition of $G$. That is, we have traded the Voronoi cells being closed for them being genuinely disjoint. The equivariance property $V^T_{\gamma \omega}(\gamma g) = \gamma V^T_\omega(g)$ still holds as well. If $\Theta : \MM \to \MM$ is a thickening, then we simply define $F_\omega(g) = V_\omega(g) \cap \Theta(\omega)$. \subsection{Ergodicity and the factor correspondences in the measured category}\label{ergodicity} In this section we show how to extend the correspondences of Section \ref{borelcorrespondences} to the measured category, which connects the distribution $\mu$ of a point process to its Palm measure $\mu_0$, and objects understood to be defined $\mu$ almost everywhere with those defined $\mu_0$ almost everywhere. \begin{defn} A subset $A \subseteq \MM$ of unrooted configurations is \emph{shift-invariant} if for all $\omega \in A$ and $g \in G$, we have $g\omega \in A$. A subset $A_0 \subseteq {\mathbb{M}_0}$ of rooted configurations is \emph{rootshift invariant} if for all $\omega \in A_0$ and $g \in \omega$, we have $g^{-1}\omega \in A_0$. The groupoid $(\Marrow, \muarrow)$ is \emph{ergodic} if every rootshift invariant subset $A \subseteq {\mathbb{M}_0}$ has $\mu_0(A) = 0$ or $1$. \end{defn} Note that if $A \subseteq \MM$ is shift-invariant, then $A_0 := A \cap {\mathbb{M}_0}$ is rootshift invariant, and if $A_0 \subseteq {\mathbb{M}_0}$ is rootshift-invariant, then $A := GA_0$ is shift invariant. Thus shift-invariant subsets and rootshift-invariant subsets are in bijective correspondence. Moreover: \begin{prop}\label{transferprinciple} Let $\mu$ be a point process with Palm measure $\mu_0$. \begin{enumerate} \item If $A \subseteq {\mathbb{M}_0}$ is rootshift invariant, then $\mu_0(A) = \mu(GA)$. \item If $A \subseteq \MM$ is shift invariant, then $\mu_0(A \cap {\mathbb{M}_0}) = \mu(A)$. 
\end{enumerate} That is, under the correspondence between rootshift invariant subsets of ${\mathbb{M}_0}$ and shift invariant subsets of $\MM$, the measures $\mu_0$ and $\mu$ coincide. In particular, $G \curvearrowright (\MM, \mu)$ is ergodic \emph{if and only if} $(\Marrow, \muarrow)$ is ergodic. \end{prop} \begin{proof} We first prove the two statements about measures under an ergodicity assumption, and then deduce the general case from the ergodic one via the ergodic decomposition. First, suppose $G \curvearrowright (\MM, \mu)$ is ergodic, and let $A \subseteq {\mathbb{M}_0}$ be rootshift invariant. Then for any $U \subseteq G$ of unit volume, \begin{align*} \mu_0(A) &= \frac{1}{\intensity \mu} \EE_\mu\left[ \#\{g \in U \mid g^{-1}\omega \in A \} \right] && \text{By definition} \\ &= \frac{1}{\intensity \mu} \EE_\mu\left[ \@ifstar{\oldabs}{\oldabs*}{\omega \cap U} \mathbbm{1}[{\omega \in GA}] \right] && \text{By rootshift invariance of } A \\ &= \mu(GA) && \text{By ergodicity}. \end{align*} In particular, we see that $\mu_0(A)$ is zero or one, so the equivalence relation is ergodic. Now suppose $({\mathbb{M}_0}, \Rel, \mu_0)$ is ergodic, and let $A \subseteq \MM$ be shift invariant. \begin{align*} \mu_0(A \cap {\mathbb{M}_0}) &= \frac{1}{\intensity \mu} \EE_\mu \left[ \#\{g \in U \mid g^{-1}\omega \in A \cap {\mathbb{M}_0} \} \right] && \text{By definition} \\ &= \frac{1}{\intensity \mu} \EE_\mu\left[ \@ifstar{\oldabs}{\oldabs*}{\omega \cap U} \mathbbm{1}[{\omega \in A}] \right] && \text{By shift invariance of } A \\ &= \mu(A) && \text{By ergodicity}. \end{align*} For the general case, we appeal to the ergodic decomposition theorem (see \cite{MR1784210} for a proof): \begin{thm} Let $G$ be an lcsc group, and $G \curvearrowright (X, \mu)$ a pmp action on a standard Borel space.
Then there exists a standard Borel space $Y$ equipped with a probability measure $\nu$ and a family $\{ p_y \mid y \in Y\}$ of probability measures $p_y$ on $X$ with the following properties: \begin{enumerate} \item For every Borel $A \subset X$, the map $y \mapsto p_y(A)$ is Borel, and \[ \mu(A) = \int_Y p_y(A) d\nu(y). \] \item For every $y \in Y$, $p_y$ is an invariant and ergodic measure for the action $G \curvearrowright (X, p_y)$, \item If $y, y' \in Y$ are distinct, then $p_y$ and $p_{y'}$ are mutually singular. \end{enumerate} \end{thm} There is an almost identically stated version of the above theorem for pmp cbers as well. These two decompositions are essentially equivalent, in a way that we shall now discuss. If $(Y, \nu)$ and $\{p_y \mid y \in Y\}$ is the ergodic decomposition for $G \curvearrowright (\MM, \mu)$, then the Palm measures $(p_y)_0$ of the $p_y$ form an ergodic decomposition for $({\mathbb{M}_0}, \Rel, \mu_0)$. That is, for all $A \subseteq {\mathbb{M}_0}$ we have \[ \mu_0(A) = \int_Y (p_y)_0(A) d\nu(y). \] Applying the previous ergodic case to this yields the general formula. \end{proof} \begin{thm}\label{correspondencetheorem} Let $G$ be a locally compact and second countable group, and $\Pi$ an invariant point process on $G$ with law $\mu$. Then associated to this data is an $r$-discrete probability measure preserving groupoid $(\Marrow, \muarrow)$ called \emph{the Palm groupoid} of $\Pi$.
It has the following properties: \begin{itemize} \item Thinning maps $\theta : (\MM, \mu) \to \MM$ of $\Pi$ are in correspondence with Borel subsets $A$ of the unit space ${\mathbb{M}_0}$ of the Palm groupoid defined $\mu_0$ almost everywhere, \item Factor $\Xi$-markings $\mathscr{C} : (\MM, \mu) \to \Xi^\MM$ are in correspondence with Borel $\Xi$-valued maps $P$ defined on the unit space ${\mathbb{M}_0}$ of the Palm groupoid defined $\mu_0$ almost everywhere, and \item Factor graphs $\mathscr{G} : (\MM, \mu) \to \graph(G)$ of $\Pi$ are in correspondence with Borel subsets $\mathscr{A}$ of the arrow space $\Marrow$ of the Palm groupoid defined $\muarrow$ almost everywhere. \end{itemize} \end{thm} The Palm measure is well studied, but the equivalence relation structure seems to have been mostly overlooked. One can find two direct references to it: Example 2.2 in a paper of Avni \cite{avni2005spectral} and a question of Bowen in \cite{bowen2018all} (specifically, Questions and comments, item 1). We now prove Theorem \ref{correspondencetheorem}, building on Section \ref{borelcorrespondences}. The task here is to verify that under the correspondence, objects which are equal almost everywhere with respect to the point process are equal almost everywhere with respect to the Palm measure, and vice versa. \begin{lem}\label{extensionlemma} Let $\mu$ be a point process on $G$ with Palm measure $\mu_0$, and $X$ a Borel $G$-space. Let $\Phi, \Phi' : \MM \to X$ be equivariant Borel maps. Then \[ \Phi = \Phi' \;\; \mu \text{ almost everywhere \emph{if and only if} } \restr{\Phi}{{\mathbb{M}_0}} = \restr{\Phi'}{{\mathbb{M}_0}}\;\; \mu_0 \text{ almost everywhere}. \] \end{lem} \begin{proof} Observe that by equivariance the sets \[ \{ \omega \in \MM \mid \Phi(\omega) = \Phi'(\omega)\} \text{ and } \{ \omega \in {\mathbb{M}_0} \mid \Phi(\omega) = \Phi'(\omega) \} \] are shift invariant and rootshift invariant respectively.
So by Proposition \ref{transferprinciple} one is $\mu$-sure if and only if the other is $\mu_0$-sure, as desired. \end{proof} \begin{proof}[Proof of Theorem \ref{correspondencetheorem}] The method is essentially the same for thinnings and for markings, so we will just prove the thinning statement. To that end, let $\theta : (\MM, \mu) \to \MM$ be a thinning. Note that by our assumption that $\theta$ is equivariant, we have \[ \{ \omega \in \MM \mid \theta(\omega) \subseteq \omega \} \text{ has } \mu \text{ measure one}. \] This is a shift invariant set, so by Proposition \ref{transferprinciple} we have \[ \{ \omega \in {\mathbb{M}_0} \mid \theta(\omega) \subseteq \omega \} \text{ has } \mu_0 \text{ measure one}. \] We are now able to define $A = \{\omega \in {\mathbb{M}_0} \mid 0 \in \theta(\omega) \}$, and this will be our desired subset of $({\mathbb{M}_0}, \mu_0)$. It follows from equivariance that the thinning $\theta^A$ associated to $A$ satisfies \[ \restr{\theta^A}{{\mathbb{M}_0}} = \restr{\theta}{{\mathbb{M}_0}} \;\; \mu_0 \text{ almost everywhere,} \] so by Lemma \ref{extensionlemma} we have $\theta^A = \theta$ ($\mu$ almost everywhere). It remains to verify that if $A = B$ $\mu_0$ almost everywhere (that is, that $\mu_0(A \triangle B) = 0$), then $\theta^A = \theta^B$ ($\mu$ almost everywhere). Recall\footnote{This is a general fact about nonsingular cbers, and it follows from the fact that they can all be generated by actions of \emph{countable} groups.} that the \emph{saturation} of $A \triangle B$ \[ [A \triangle B] = \{ g^{-1}\omega \in {\mathbb{M}_0} \mid \omega \in A \triangle B \text{ and } g \in \omega \} \] is $\mu_0$ null if $A \triangle B$ is $\mu_0$ null. Observe that for $\omega \not\in [A \triangle B]$ we have $\theta^A(\omega) = \theta^B(\omega)$, and hence $\restr{\theta^A}{{\mathbb{M}_0}} = \restr{\theta^B}{{\mathbb{M}_0}}$ $\mu_0$ almost everywhere, and we finish by again applying Lemma \ref{extensionlemma}.
If $\mathscr{G}$ is a factor graph of $\mu$, then in the same fashion we see that it has a well-defined restriction to $({\mathbb{M}_0}, \mu_0)$. We then define \[ \mathscr{A} = \{ (\omega, g) \in {\mathbb{M}_0} \times G \mid (0, g) \in \mathscr{G}(\omega) \}. \] We must verify that if $\mathscr{A}, \mathscr{B} \subseteq \Marrow$ are subsets with $\muarrow(\mathscr{A} \triangle \mathscr{B}) = 0$, then their associated factor graphs $\mathscr{G}^{\mathscr{A}}$ and $\mathscr{G}^{\mathscr{B}}$ are equal $\mu$ almost everywhere. This assumption states \[ \int_{{\mathbb{M}_0}} \# \{g \in \omega \mid (\omega, g) \in \mathscr{A} \triangle \mathscr{B} \} d\mu_0(\omega) = 0 \] and hence the integrand is zero $\mu_0$ almost everywhere. By again considering the saturation of sets, we see that \[ \mu_0( \{\omega \in {\mathbb{M}_0} \mid (g^{-1}\omega, h) \in \mathscr{A} \triangle \mathscr{B} \text{ for some } g \in \omega \text{ and } h \in g^{-1}\omega \}) = 0, \] from which the argument finishes as in the case of thinnings. \end{proof} \subsection{Every free action is a point process, and the cross-section perspective}\label{crosssectionappendix} We have taken the perspective that point processes are an \emph{intrinsically interesting} class of pmp actions of lcsc groups to study. They are also a fairly general class: in this section we will prove Theorem \ref{minden}, that \emph{every} free and pmp action of a \emph{nondiscrete} lcsc group $G$ on a standard Borel measure space $(X, \mu)$ is abstractly isomorphic to a finite intensity point process. This is similar to the following fact: let $\Gamma \curvearrowright (X, \mu)$ be a pmp action of a discrete group $\Gamma$. The \emph{symbolic dynamics} of this action is the map \begin{align*} &\Sigma : (X, \mu) \to X^\Gamma \\ &\Sigma_x(\gamma) = \gamma^{-1}x. \end{align*} This is an injective and equivariant map, so we may identify the action $\Gamma \curvearrowright (X, \mu)$ with the invariant colouring action $\Gamma \curvearrowright (X^\Gamma, \Sigma_* \mu)$.
In this way, we see that all pmp actions of discrete groups are isomorphic to invariant colourings\footnote{If desired, one can fix a Borel isomorphism $X \cong [0,1]$ so that the colouring space is the same for all actions.}. A standard technique in the study of free pmp actions of lcsc groups is to analyse their associated \emph{cross-sections}. This will give an analogue of symbolic dynamics for nondiscrete groups. \begin{defn} Let $G \curvearrowright (X, \mu)$ be a pmp action on a standard Borel measure space $(X, \mu)$. A \emph{discrete cross-section} for the action is a Borel subset $Y \subset X$ such that for $\mu$-almost every $x \in X$ the set $\{g \in G \mid g^{-1}x \in Y \}$ is a closed and discrete non-empty subset of $G$. \end{defn} \begin{example} The set ${\mathbb{M}_0} \subset \MM$ is a discrete cross-section \emph{for all} non-empty point process actions $G \curvearrowright (\MM, \mu)$. \end{example} There is a sense in which this ${\mathbb{M}_0}$ is the \emph{only} cross-section, which we now discuss. Fix a discrete cross-section $Y$ for $G \curvearrowright X$. We associate to this data two maps \begin{align*} &\mathcal{V} : (X, \mu) \to \MM && \mathscr{V} : (X, \mu) \to Y^\MM \\ &\mathcal{V}_x = \{g \in G \mid g^{-1}x \in Y \} && \mathscr{V}_x = \{(g, g^{-1}x) \in G \times Y \mid g^{-1}x \in Y \}. \end{align*} These are equivariant maps, and the second one is always injective. In particular\footnote{Recall that an \emph{injective} map between standard Borel spaces is always a Borel isomorphism onto its image}, we see that every action which admits a cross-section also admits a point process factor, and is isomorphic to a \emph{marked} point process. Note that $\mathcal{V}^{-1}({\mathbb{M}_0}) = Y$. In this way we see that \emph{a discrete cross-section is the same thing as an unmarked point process factor}.
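The maps $\mathcal{V}$ and $\mathscr{V}$ can be made concrete in a toy finite setting. The sketch below uses choices of our own, not constructions from the text: $G = \ZZ/12$ acting by translation on $X = \ZZ/12 \times \{0, 1\}$ (a free action), with cross-section $Y = \{0\} \times \{0, 1\}$. As in the discussion above, the unmarked map $\mathcal{V}$ is equivariant but fails to be injective, while the marked map $\mathscr{V}$ is injective:

```python
n = 12
Y = {(0, 0), (0, 1)}            # cross-section: one point per "level"

def V(x):
    """Unmarked map: V_x = {g in Z/n : g^{-1} x in Y}."""
    a, c = x
    return frozenset(g for g in range(n) if ((a - g) % n, c) in Y)

def V_marked(x):
    """Marked map: records the point g^{-1} x of Y alongside g."""
    a, c = x
    return frozenset((g, ((a - g) % n, c)) for g in range(n)
                     if ((a - g) % n, c) in Y)

X = [(a, c) for a in range(n) for c in (0, 1)]
```

Here $\mathcal{V}_{(a,c)} = \{a\}$ forgets the second coordinate, so $(a, 0)$ and $(a, 1)$ have the same image; recording the mark $g^{-1}x \in Y$ restores injectivity, which is exactly why the action is isomorphic to a \emph{marked} point process.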
\begin{remark}[Terminological discussion] If $\mathcal{P}(\omega)$ is some property of discrete subsets $\omega$ in $G$, then we can investigate discrete cross-sections of actions $G \curvearrowright (X, \mu)$ such that the associated subset $\mathcal{V}_x$ satisfies $\mathcal{P}$ for $\mu$ almost every $x \in X$. For instance, $\mathcal{P}(\omega)$ might be the property ``$\omega$ is uniformly discrete'' or ``$\omega$ is a net'' (see Definition \ref{metricdefs} for the meaning of these terms). We will refer to a discrete cross-section such that $\mathcal{P}(\mathcal{V}_x)$ is satisfied for $\mu$ almost every $x \in X$ as a \emph{$\mathcal{P}$ cross-section}. Note that if $G \curvearrowright (\MM, \mu)$ is the Poisson point process action, then ${\mathbb{M}_0}$ is \emph{not} a lacunary cross-section. It is for this reason that we feel the terminology should be modified slightly. \end{remark} \begin{thm}[Forrest \cite{MR417388}, see also \cite{MR3335405}]\label{crosssectionsexist} Every free and \emph{nonsingular}\footnote{Recall that an action is \emph{nonsingular} if it preserves null sets, that is, if $\mu(A) = 0$ then $\mu(gA) = 0$ for all $g \in G$.} action of an lcsc group on a standard probability space admits a discrete cross-section. Moreover, the cross-section can be chosen to be uniformly separated and even a net. \end{thm} One sees that Theorem \ref{minden} is true by applying the above theorem with the unmarking technique of Proposition \ref{abstractlyisom}. \begin{remark} In fact, cross-sections of actions are known to exist in great generality; see \cite{kechris2019theory} for further examples. Our keen interest in \emph{free} actions is due to the fact that freeness allows us to identify the orbit $Gx$ of any point $x \in X$ with $G$ itself. One can run into issues in the absence of this. For instance, let $\RR \times \RR$ act on $\{\bullet\} \times \RR/\ZZ$ diagonally, where $\{\bullet\}$ denotes a singleton with trivial action.
Then $\{ (\bullet, 0) \}$ is a lacunary cross-section for the action. If we try to construct a map $\mathcal{V}$ as before, then we would map $(\bullet, x) \in \{\bullet\} \times \RR/\ZZ$ to the subset of $\RR^2$ \[ \mathcal{V}_{(\bullet, x)} = \RR \times \{ x + \ZZ \}. \] In this way one has constructed a \emph{random closed set} as a factor of the action, but it is not a point process. In fact, it is possible to view an \emph{arbitrary} pmp action as a kind of ``bundle'' of point processes over the various homogeneous spaces $G/H$, where $H$ ranges over the closed subgroups of $G$, but we will not explore this further. \end{remark} The following theorem is described as folklore in \cite{MR3335405}: \begin{thm}[Folklore theorem, see Proposition 4.3 of \cite{MR3335405}] Let $G$ be a unimodular lcsc group, and $G \curvearrowright (X, \mu)$ a pmp action on a standard Borel space. Fix a lacunary cross-section $Y \subset X$ for the action. Then: \begin{enumerate} \item The orbit equivalence relation of $G \curvearrowright X$ restricts to a cber $\Rel$ on $Y$. \item There exists an $\Rel$-invariant probability measure $\nu$ on $Y$. \item The action $G \curvearrowright (X, \mu)$ is ergodic if and only if the cber $(Y, \Rel, \nu)$ is ergodic. \item The group $G$ is noncompact if and only if the cber is aperiodic $\nu$ almost everywhere. \item The group $G$ is amenable if and only if the cber $(Y, \Rel, \nu)$ is amenable. \end{enumerate} \end{thm} The mathematical content of Theorem \ref{correspondencetheorem} can be viewed as a rediscovery of the above theorem with different proofs, together with an interpretation of factor constructions as objects living on the Palm groupoid. \begin{question} Is there a more point process theoretic method to construct discrete cross-sections of free pmp actions?
\end{question} We have seen that if $G \curvearrowright (X, \mu)$ is a free pmp action, then cross-sections are the same thing as point process factor maps, and that every choice of cross-section gives an isomorphic representation of the action as a marked point process. These ideas can be combined. Suppose $\Phi : (X, \mu) \to \MM$ is an equivariant factor map. Then $Y = \Phi^{-1}({\mathbb{M}_0})$ is a cross-section for the action $G \curvearrowright (X, \mu)$. We also have the isomorphism $\mathscr{V} : (X, \mu) \to Y^\MM$. These can be combined, and we see that the map $\Phi \circ \mathscr{V}^{-1} : Y^\MM \to \MM$ is simply the map that forgets labels. In other words, every extension of a point process is just the point process with an enriched mark space. \section{The cost of a point process}\label{cost} \subsection{Definition and monotonicity for factors} Our goal is to extend the notion of cost for pmp cbers to point processes. For further background on cost, see \cite{gaboriau2000cout}, \cite{gaboriau2010cost}, \cite{gaboriau2016around}, and \cite{kechris2004topics}. Informally speaking, the \emph{cost} of a point process is the ``cheapest'' way to wire it up. We look at all \emph{connected} factor graphs of the process and compute the expected degree at the origin in the Palm version. This is then suitably normalised to give an isomorphism invariant. \begin{defn}\label{groupoidcostdefn} Let $\Pi$ be a point process on $G$ (possibly marked) with finite but non-zero intensity. Its \emph{groupoid cost} is defined by \[ \cost(\Pi) - 1 = \intensity \mu \cdot \inf_{\mathscr{G}} \left\{ \frac{1}{2}\EE\left[\deg_0{\mathscr{G}(\Pi_0)}\right] - 1 \right\}, \] where the infimum is taken over all connected factor graphs $\mathscr{G}$ of $\Pi$ and $\Pi_0$ denotes the Palm version of $\Pi$. 
Equivalently by Remark \ref{palmformula}, \[ \cost(\Pi) - 1 = \inf_{\mathscr{G}}\left\{ \frac{1}{2}\EE\left[ \sum_{x \in U \cap \Pi} \deg_x{\mathscr{G}(\Pi)} \right] \right\} - \intensity(\Pi), \] where $U$ is a set of unit volume in $G$. \end{defn} \begin{remark} The cost respects the ergodic decomposition of a process, and so for this reason it suffices to consider ergodic processes. \end{remark} \begin{defn} The \emph{cost} of a group is the infimum of the cost of all its free point processes. A group is said to have \emph{fixed price} if all of its \emph{essentially free} point processes have the same cost. \end{defn} At the time of writing, no groups are known that fail to have fixed price. \begin{remark} We sometimes refer to Definition \ref{groupoidcostdefn} as the \emph{groupoid} cost, as it can be thought of as the infimal ``size'' of a generator of the groupoid $(\Marrow, \muarrow)$, in a way that we now discuss. Recall that (directed) factor graphs are in correspondence with subsets of $\Marrow$. We identify objects under this correspondence. One defines the \emph{product} of two factor graphs $\mathscr{G}, \mathscr{H} \subset \Marrow$ by taking all well-defined products. More explicitly, \[ \mathscr{G} \cdot \mathscr{H} = \{ (\omega, gh) \in \Marrow \mid (\omega, g) \in \mathscr{G} \text{ and } (g^{-1}\omega, h) \in \mathscr{H} \}. \] From the factor graph viewpoint, the edges of $\mathscr{G} \cdot \mathscr{H}$ are those pairs of vertices that can be reached by following an edge of $\mathscr{G}$ and then an edge of $\mathscr{H}$. A \emph{Borel generator} of $\Marrow$ is a Borel factor graph $\mathscr{G}$ such that \[ \langle \mathscr{G} \rangle := \bigcup_{n} \mathscr{G}^n = \Marrow. \] In other words, it is a \emph{connected} factor graph. If $\Pi$ is a point process with law $\mu$, then a generator of the measured groupoid $(\Marrow, \muarrow)$ is a factor graph $\mathscr{G}$ such that \[ \muarrow(\Marrow \setminus \langle \mathscr{G} \rangle) = 0.
\] In other words, it is a factor graph which is \emph{connected almost surely}. With these definitions, one can equivalently rephrase the probabilistic definition of the cost of $\Pi$ as \[ \cost(\Pi) - 1 = \intensity(\Pi) \cdot \inf_{\mathscr{G}}\left\{ \muarrow(\mathscr{G}) - 1\right\}, \] where $\mathscr{G}$ runs over all generators of $(\Marrow, \muarrow)$. \end{remark} \begin{example} If $\Pi$ is the lattice shift corresponding to $\Gamma < G$, then \[ \cost(\Pi) = 1 + \frac{d(\Gamma) - 1}{\covol(G / \Gamma) }, \] where $d(\Gamma)$ denotes the \emph{rank} of $\Gamma$, that is, its minimum number of generators. To see this, observe that by equivariance a factor graph of the lattice shift is determined by a \emph{single} subset $S \subset \Gamma$, and connects $x \in \Pi$ to all $xs \in \Pi$ for $s \in S$. The graph is connected exactly when $S$ generates $\Gamma$. The formula then follows from the definition of cost. \end{example} \begin{remark} In a concurrently appearing work \cite{mellick2021palm} by the second author, it is shown that the Palm equivalence relation of any free point process on an amenable group is hyperfinite almost everywhere. It follows that amenable groups (in particular $\RR^n$) have fixed price one. We will show that all groups of the form $G \times \RR$ have fixed price one. This gives an alternative proof that $\RR^n$ has fixed price one. It would be interesting to see a ``direct'' proof of this fact. That is, to exhibit \emph{reasonably explicit} connected factor graphs that have cost less than $1 + \e$ for every $\e > 0$. In \cite{coupier20132d} an explicit factor graph of the Poisson point process on $\RR^2$ is described and shown to be a connected and one-ended tree. It follows that it has cost one. \end{remark} \begin{lem}\label{costmonotone} Let $\Pi$ be a point process of finite intensity, and $\Phi$ a factor map of $\Pi$ such that $\Phi(\Pi)$ has finite intensity. Then \[ \cost(\Pi) \leq \cost(\Phi(\Pi)).
\] Thus cost is \emph{monotone} for factors. \end{lem} \begin{cor} If $\mu$ and $\nu$ are finite intensity point processes that factor onto each other, then $\cost(\mu) = \cost(\nu)$. In particular, the cost of $\mu$ only depends on its isomorphism class as an action. \end{cor} \begin{proof}[Proof of Lemma \ref{costmonotone}] Recall from Remark \ref{factorsdecompose} that $\Phi$ decomposes as the composition of a thinning $\pi$ and a thickening $\Theta^\Phi$. We prove \[ \cost(\Pi) \leq \cost(\Theta^\Phi(\Pi)) \leq \cost(\pi(\Theta^\Phi(\Pi))) = \cost(\Phi(\Pi)), \] where the last equality holds as $\Phi = \pi \circ \Theta^\Phi$. We prove the second inequality first, as it is simpler. For this we use the non-Palm definition of cost. To that end, let $\mathscr{G}$ be a graphing of $\Phi(\Pi)$ that $\e$-computes the cost, that is, with \[ \EE\left[\sum_{x \in U \cap \Phi(\Pi)} \overrightarrow{\deg}_x{\mathscr{G}(\Phi(\Pi))} \right]- \intensity (\Phi(\Pi)) \leq \cost(\Phi(\Pi)) - 1 + \e. \] We will use it to define a graphing $\mathscr{H}$ of the thickened process $\Theta^\Phi(\Pi)$. Recall that this process has three types of points: red, purple, and blue. Let $\mathscr{N}$ be the factor graph of $\Theta^\Phi(\Pi)$ that connects each red point $x$ to its nearest blue neighbour. If this is not well-defined, then we use the tie-breaking function $T : G \to \RR$ of Definition \ref{voronoidefn} to make it so in an equivariant way. That is, if $y_1, y_2, \ldots, y_n$ are the (finitely many!) blue points of $\Theta^\Phi(\Pi)$ that are closest to $x$, then let $y$ be the element that minimises $T(x^{-1}y_i)$ and add in a directed edge $x \to y$ to $\mathscr{N}$. We can view $\mathscr{G}$ as defining a factor graph on $\Theta^\Phi(\Pi)$, which lives on the blue and purple points. Now let $\mathscr{H}(\Theta^{\Phi}(\Pi)) = \mathscr{G}(\Phi(\Pi)) \sqcup \mathscr{N}(\Theta^\Phi(\Pi))$.
This is connected as an undirected graph, so by the definition of cost: \begin{align*} &\cost(\Theta^\Phi(\Pi)) - 1 \leq \EE\left[\sum_{x \in \Theta^\Phi(\Pi) \cap U}\overrightarrow{\deg}_x{\mathscr{H}(\Theta^\Phi(\Pi))}\right] - \intensity(\Theta^\Phi(\Pi)) \\ &= \EE\left[\sum_{x \in U \cap \Pi \setminus \Phi(\Pi)} 1 + \sum_{x \in U \cap \Phi(\Pi)} \overrightarrow{\deg}_x{\mathscr{G}(\Phi(\Pi))} \right] - \intensity (\Pi \setminus \Phi(\Pi)) - \intensity (\Phi(\Pi)) \\ &= \EE\left[\sum_{x \in U \cap \Phi(\Pi)} \overrightarrow{\deg}_x{\mathscr{G}(\Phi(\Pi))} \right]- \intensity (\Phi(\Pi)) \\ &\leq \cost(\Phi(\Pi)) - 1 + \e. \end{align*} As $\e$ was arbitrary, this proves the second inequality. For the other inequality, we use the explicit description of the Palm measure as in Example \ref{palmofgeneralthickening} and the Palm definition of cost. The idea of the proof is: we have a graphing defined on a larger subset, and we must push it onto a smaller subset somehow. We will simply transfer all edges of $\Theta^\Phi(\Pi)$ to $\Pi$ along the Voronoi cells. For $g \in \Pi$, let $F_\Pi(g) = V_\Pi(g) \cap \Theta^\Phi(\Pi)$. Let us call a graphing $\mathscr{G}$ of $\Theta^\Phi(\Pi)$ \emph{starlike} if for all $g \in \Pi$ and $x \in F_\Pi(g)$, we have $(g,x) \in \mathscr{G}$. If $\mathscr{G}$ is any graphing, then we can perturb it to find a starlike graphing of the same edge measure. Let us take this for granted for now and see how the proof concludes. Let $\mathscr{G}$ be a starlike graphing of $\Theta^\Phi(\Pi)$ that $\e$-computes the cost. Let us define a graphing $\mathscr{H}$ of $\Pi$ as follows: join $x, y \in \Pi$ by an edge in $\mathscr{H}(\Pi)$ if there exists $x' \in F_\Pi(x)$ and $y' \in F_\Pi(y)$ such that $x'$ and $y'$ are connected by an edge in $\mathscr{G}(\Theta^\Phi(\Pi))$. When we push $\mathscr{G}$ onto $\Pi$, some edges get killed. For instance, if two Voronoi cells have many edges of $\mathscr{G}$ between them, then those edges all collapse to a single edge of $\mathscr{H}$.
By assuming that the graphing is \emph{starlike} we are guaranteed to kill enough edges. In particular, we kill at least $\@ifstar{\oldabs}{\oldabs*}{F_\Pi(g)} - 1$ edges at each $g \in \Pi$. To make the proof more legible, we write $I_\Pi = \intensity(\Pi)$ and $I_{\Theta} = \intensity(\Theta^\Phi(\Pi))$, so that $I_\Theta = I_\Pi \cdot \EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}$. We compute the expected outdegree of $\mathscr{H}$ as follows: \begin{align*} &I_\Pi \cdot \EE\left[\overrightarrow{\deg}_0{\mathscr{H}(\Pi_0)} - 1\right] \leq I_\Pi \cdot \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} - \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}\right] \\ &= I_\Pi \cdot \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \right] - I_\Pi\cdot \EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} \\ &= I_\Pi \cdot \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \right] - I_\Theta. \end{align*} We now work on this first term. \begin{align*} &I_\Pi \cdot \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \right] = \frac{I_\Theta}{\EE \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}} \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \right]\\ &= \sum_{k \geq 1}\frac{I_\Theta}{\EE \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}} \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \Big| \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = k \right] \PP[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = k] \\ &= I_\Theta \sum_{k \geq 1} \frac{k}{\EE \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}} \EE\left[\frac{1}{\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}}\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \Big| \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = k \right] \PP[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = k] \\ &= I_\Theta \EE\left[\overrightarrow{\deg}_0(\mathscr{G}(\Theta^\Phi(\Pi)_0))\right], \end{align*} where we use the explicit description of
the Palm measure of a general thickening proven in Example \ref{palmofgeneralthickening}. Thus \[ I_\Pi \cdot \EE\left[\overrightarrow{\deg}_0{\mathscr{H}(\Pi_0)} - 1\right] \leq I_\Theta \EE\left[\overrightarrow{\deg}_0(\mathscr{G}(\Theta^\Phi(\Pi)_0)) - 1\right] \] proving $\cost(\Pi) \leq \cost(\Theta^\Phi(\Pi))$, as desired. At last, we must show how to perturb graphings to be starlike. The idea is simple: if some $g \in \Pi$ is not starlike, then there is some $x \in F_\Pi(g)$ such that $(g, x) \not\in \mathscr{G}$. However, there must be \emph{some} path from $g$ to $x$ in $\mathscr{G}$, so we pinch an edge from that path and thus rob Peter to pay Paul. In this way we can improve a given factor graph to be more starlike. By iterating in an appropriate way we can construct the desired factor graph. Let \[ \Pi' = \bigcup_{g \in \Pi} \{ h \in F_\Pi(g) \mid h \neq g \text{ and } (g, h) \not\in \mathscr{G} \} \] denote the subprocess of points that violate starlikeness. The edges of $\mathscr{G}$ are of three kinds according to how they interface with the Voronoi cells of $\Pi$: \begin{description} \item[Starlike edges,] those of the form $(g, h)$ where $g \in \Pi$ and $h \in F_\Pi(g)$, \item[Intracell edges,] those of the form $(h, h')$ where $h, h' \in F_\Pi(g)$ for some $g \in \Pi$ with neither $h$ nor $h'$ equal to $g$, and \item[Crossing edges,] those of the form $(h, h')$ with $h \in F_\Pi(g)$ and $h' \in F_\Pi(g')$ with $g, g' \in \Pi$ and $g \neq g'$.
\end{description} \begin{figure}[h]\label{edgesexample} \includegraphics[scale=0.4]{rewiringedgesexample.pdf} \centering \caption{A chunk of a point process, with starlike edges coloured magenta, intracell edges black, and crossing edges cyan.} \end{figure} We consider the space\footnote{By constructing an appropriate subset of a configuration space, one can encode these graphs as a standard Borel space.} $\mathcal{G}$ of marked factor graphs of $\Pi$ with the following properties: \begin{itemize} \item They are simply $\mathscr{G}$ as an unmarked graph, \item Points $h$ of $\Pi'$ receive \emph{either} the blank mark $\bullet$, \item \emph{or} they are marked by a non-backtracking path in $\mathscr{G}$ from $h$ to $g$, where $h \in F_\Pi(g)$, and one crossing or intracell edge of this path is coloured red, and \item each red edge appears in at most one of the paths of points of $\Pi'$. \end{itemize} These factor graphs are basically rewiring rules for $\mathscr{G}$. If $\mathscr{H} \in \mathcal{G}$, then each point of $\Pi'$ that receives a path label in $\mathscr{H}$ replaces the red edge of its path with its starlike edge (see Figure \ref{rewired}). This is an equivariant, measurable, and deterministic rule, so defines a factor graph of $\Pi$. \begin{figure}[h]\label{rewired} \includegraphics[scale=0.5]{rewired.pdf} \centering \caption{A single rewiring move. We stress that this move must be taken by \emph{all} chosen points simultaneously, and therein lies the rub.} \end{figure} Note that this rewiring doesn't change the edge measure of the graph (we rob Peter to pay Paul), and it remains connected.
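A single rewiring move can be sketched on a toy graph (all vertex names and edges invented): the edge count is preserved, and connectivity survives because the deleted red edge lies on a cycle with the new starlike edge.

```python
# Toy sketch of one rewiring move: delete the red edge on the path from h to
# its cell centre g, and add the starlike edge (g, h) instead.

def rewire(edges, red_edge, starlike_edge):
    """Swap one edge for another, keeping the edge count fixed."""
    new = set(edges)
    new.remove(red_edge)
    new.add(starlike_edge)
    return new

def connected(vertices, edges):
    # simple DFS connectivity check, treating edges as undirected
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v] - seen)
    return seen == set(vertices)

V = {"g", "h", "g2", "h2"}
E = {("g", "h2"), ("h2", "g2"), ("g2", "h")}   # h in g's cell, no edge (g, h)
E2 = rewire(E, red_edge=("h2", "g2"), starlike_edge=("g", "h"))
print(len(E2) == len(E), connected(V, E2))
```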
Let \[ \iota(\mathscr{H}) = \text{the intensity of } \Pi' \text{ points that receive path labels in } \mathscr{H} \] and \[ f(\mathscr{H}) = \sup\left\{\iota(\mathscr{H}') \mid \mathscr{H}' \in \mathcal{G} \text{ and } \mathscr{H} \preceq \mathscr{H}' \right\}, \] where we declare $\mathscr{H} \preceq \mathscr{H}'$ if every path label in $\mathscr{H}$ is present in $\mathscr{H}'$. That is, $\mathscr{H}'$ has simply replaced $\bullet$ labelled points in $\mathscr{H}$ by path labels. \begin{claim} There is a maximal element $\mathscr{G}_\infty$ (with respect to $\preceq$) of $\mathcal{G}$, and the rewiring of $\mathscr{G}$ associated to it is starlike. \end{claim} It's easy to find the maximal element. Let $\mathscr{G}_0$ denote $\mathscr{G}$ with every point of $\Pi'$ marked $\bullet$. Choose $\mathscr{G}_1 \in \mathcal{G}$ with $\mathscr{G}_0 \preceq \mathscr{G}_1$ such that \[ f(\mathscr{G}_0) \leq \iota(\mathscr{G}_1) + 1, \] and then inductively choose $\mathscr{G}_{n+1}$ with $\mathscr{G}_n \preceq \mathscr{G}_{n+1}$ such that \[ f(\mathscr{G}_n) \leq \iota(\mathscr{G}_{n+1}) + \frac{1}{n}. \] Let $\mathscr{G}_\infty$ denote the ``union'' of the $\mathscr{G}_n$, where we declare that path labels trump $\bullet$ labels. If $\mathscr{H} \in \mathcal{G}$ is a factor graph with $\mathscr{G}_\infty \preceq \mathscr{H}$, then $\mathscr{G}_n \preceq \mathscr{H}$ for all $n$, so \[ \iota(\mathscr{H}) \leq f(\mathscr{G}_n) \leq \iota(\mathscr{G}_{n+1}) + \frac{1}{n} \leq \iota(\mathscr{G}_\infty) + \frac{1}{n} \to \iota(\mathscr{G}_\infty). \] We conclude that $\mathscr{G}_\infty = \mathscr{H}$ almost surely since one process is a subset of the other. We will now prove by contradiction that the rewiring associated to $\mathscr{G}_\infty$ is starlike. Supposing it is not, we construct $\mathscr{H} \in \mathcal{G}$ with $\mathscr{G}_\infty \preceq \mathscr{H}$ and $\iota(\mathscr{G}_\infty) < \iota(\mathscr{H})$. This violates the maximality of $\mathscr{G}_\infty$. Let $\overline{\mathscr{G}}$ denote the result of rewiring $\mathscr{G}_\infty$. Set \[ \Pi_\times = \{ g \in \Pi \mid g \text{ is not starlike in } \overline{\mathscr{G}} \}. \] We are going to make these points more starlike.
For each $g \in \Pi_\times$, choose a point $x_g \in \Theta^\Phi(\Pi)$ in an equivariant and measurable way. More precisely, we consider the set \[ \{ y \in F_\Pi(g) \mid (g, y) \not \in \overline{\mathscr{G}} \} \] and choose $x_g$ to be the element minimising $I(g^{-1}y)$. Fix a nonbacktracking path $P(g, x_g)$ from $g$ to $x_g$ in $\overline{\mathscr{G}}$. We do this for all $g \in \Pi_\times$ simultaneously, again in an equivariant and measurable way: look at all paths between $g$ and $x_g$ of minimal length (as in number of $\overline{\mathscr{G}}$ edges used), and choose one using the Borel isomorphism $I$ in a similar way to before. Choose\footnote{If the process isn't ergodic then this $N$ should be a random variable, in any case one can manage.} $N$ so large that there is a positive intensity of points $g \in \Pi_\times$ with paths $P(g, x_g)$ of length at most $N$. We now construct our desired marked factor graph $\mathscr{H}$ as follows: \begin{itemize} \item Every point that has a path label in $\mathscr{G}_\infty$ retains its path label. \item Every point of $\Pi' \setminus \Pi_\times$ is marked $\bullet$. \item Every point $g$ of $\Pi_\times$ whose path $P(g, x_g)$ has length greater than $N$ is marked $\bullet$. \end{itemize} This leaves the points of $\Pi_\times$ whose paths are bounded by $N$. Note that this is a locally finite family -- each edge appears in at most finitely many of the paths $P(g, x_g)$. Every path in the rewired graph $\overline{\mathscr{G}}$ can be associated to a path in $\mathscr{G}_\infty$ itself -- every time the path uses a starlike edge that was added in the rewiring process, just go the long way in $\mathscr{G}$. We refer to this as the \emph{detour version} of the path. Now, to construct the remaining labels check if there are any paths $P(g, x_g)$ which contain an intracell edge $e = (h, h')$ with $h, h' \in F_\Pi(g)$. If so, then we label $g$ by the detour version of this path in $\mathscr{G}_\infty$ with $e$ coloured red.
Observe that the remaining paths must contain \emph{at least two} crossing edges. Each $g$ will \emph{apply} to the first crossing edge it sees on the path from $g$ to $x_g$. Each edge $(h, h')$ receives finitely many applicants $\{g_1, g_2, \ldots, g_k\}$. It chooses the element of this set which minimises $\min \{I(g_i^{-1}h), I(g_i^{-1}h')\}$. At last, we finish the construction of $\mathscr{H}$ by marking the points that were rejected with $\bullet$, and the remaining ones by the detour version of this path in $\mathscr{G}$ with their chosen edge coloured red. Then $\mathscr{G}_\infty \preceq \mathscr{H}$ by construction, but $\iota(\mathscr{H}) > \iota (\mathscr{G}_\infty)$. We are therefore able to replace $\mathscr{G}$ by $\overline{\mathscr{G}}$ and assume our factor graph is starlike, as desired. \end{proof} \begin{remark} The groupoid cost can really increase under a factor map: take the example of Remark \ref{thinninglost} with $\ZZ^n < \RR^n$ for $n > 1$. That is, consider the union of the $\ZZ^n$ lattice shift with the Poisson point process. As a free process, this has cost one. But it factors onto the lattice shift, which has cost greater than one. \end{remark} One can also prove cost monotonicity by invoking Gaboriau's theorem on the cost of complete sections: \begin{thm}[Proposition II.6 of \cite{gaboriau2000cout}, see also Theorem 21.1 of \cite{kechris2004topics}] If $(X, \Rel, \mu)$ is a pmp cber and $S \subseteq X$ is a complete section\footnote{That is, it meets almost every orbit of $X$.}, then \[ \cost_\mu(\Rel) - 1 = \mu(S) \left( \cost_{\mu | S}(\restr{\Rel}{S}) - 1 \right), \] where $\restr{\Rel}{S} = \Rel \cap (S \times S)$ is the restriction and \[ \mu | S := \frac{\mu( \bullet \cap S)}{\mu(S)} \] is the conditional measure. \end{thm} Suppose $\Phi : (\MM, \mu) \to \MM$ is a point process factor map with $\Phi_* \mu$ of finite intensity.
Then \[ Y := \Phi^{-1}({\mathbb{M}_0}) = \{ \omega \in \MM \mid 0 \in \Phi(\omega) \} \] forms a \emph{discrete cross section}\footnote{See Section \ref{crosssectionappendix} for the definition and further context.} for the action $G \curvearrowright (\MM, \mu)$. One can define a ``Palm measure'' $\mu_Y$ on $Y$ by replacing all references to ${\mathbb{M}_0}$ with $Y$, and similarly there is a rerooting equivalence relation $\Rel_Y$ on $Y$. This again forms a pmp cber. Then we have a morphism $\Phi : (Y, \Rel_Y, \mu_Y) \to ({\mathbb{M}_0}, \Rel, \Phi_* \mu)$ of pmp cbers, so \[ \cost_{\mu_Y}(\Rel_Y) \leq \cost_{\Phi_* \mu} (\Rel). \] One can see that $Y \cup {\mathbb{M}_0}$ \emph{also} forms a discrete cross section, and both $Y$ and ${\mathbb{M}_0}$ are complete sections for it. Then two applications of Gaboriau's theorem show \[ \cost_{\mu_Y}(\Rel_Y) = \cost_{\mu}(\Rel), \] thus proving cost monotonicity. \subsection{Unmarking} We have defined cost of groups by looking at all free \emph{unmarked} point processes on the group. This is no loss of generality: \begin{prop}\label{abstractlyisom} Every free point process $\Pi$ on a nondiscrete group with marks from a standard Borel space $\Xi$ is equivariantly isomorphic to an \emph{unmarked} point process. More precisely, if $\Pi$ has marks from a standard Borel space $\Xi$ and $\mu$ is its law, then there is a measurable and equivariant almost everywhere isomorphism $\Phi: (\Xi^\MM, \mu) \to (\MM, \Phi_*\mu)$. \end{prop} Since cost is an isomorphism invariant (even if one process is marked and the other isn't), this shows that one can't find point processes with lower cost by using some tricky mark space. We refer to Proposition \ref{abstractlyisom} as \emph{unmarking}. It should be easy to convince oneself that such a proposition will be true, although the details will necessarily be somewhat messy and ad hoc.
We call the technique used \emph{local encoding}, which is illustrated in the following example: \begin{figure}[h]\label{localencode} \includegraphics[scale=0.5]{localencode.pdf} \centering \caption{Locally encoding labels of a point process.} \end{figure} This is a point process in $\RR^2$ labelled by the set $\{+, -\}$, which we have coloured as cyan and magenta respectively in the diagram. The map $\Phi : \{+, -\}^\MM \to \MM$ takes the input configuration, and adds a small decoration around each point. In this case we are literally encoding $+$ marks as a plus symbol centred at each point and similarly for $-$ marks. Barring some exceptional circumstances, you should be able to convince yourself that $\Phi$ is an injective map, and thus is an isomorphism onto its image for many input processes. The proof for general $G$ works along the same lines. We will employ a general lemma that is no doubt well known to experts. For the convenience of the reader we translate a proof appearing in \cite{timar2004} and attributed to Yuval Peres into our language. \begin{lem}\label{independentsetsexist} Let $\mu$ be a \emph{free} point process on $G$, and $\mathscr{G}$ a locally finite measurable factor graph of $\mu$. Then one can equivariantly and measurably construct a non-trivial independent subset of $\mathscr{G}$. \end{lem} To spell this out, this means there exists a map $I : (\MM, \mu) \to \MM$ with the properties that \begin{itemize} \item $I(\omega) \subset \omega$ almost surely, and \item if $g, h \in I(\omega)$, then $g$ and $h$ are not connected in $\mathscr{G}(\omega)$. \end{itemize} \begin{proof} The key idea in the proof can be illustrated via the factor labelling $\odot : \MM \to {\mathbb{M}_0}^\MM$ given by \[ \odot(\omega) = \{ (g, g^{-1}\omega) \in G \times {\mathbb{M}_0}(G) \mid g \in \omega \}. \] Under $\odot$, each point $g$ of a configuration $\omega$ looks at what the configuration looks like from its own perspective, and records it as a label.
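On a finite configuration the egotistical labelling is easy to sketch (a toy with $G = \ZZ$ and an invented configuration):

```python
# Toy egotistical labelling on G = Z: each point g of the configuration
# omega records the configuration recentred at itself, g^{-1}omega = omega - g.
omega = {0, 2, 5}
labels = {g: frozenset(x - g for x in omega) for g in omega}

# every point receives a distinct label, as freeness requires
print(len(set(labels.values())) == len(omega))
```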
That is, it views itself as the centre of the universe (this is what the symbol $\odot$ is meant to represent, we will call the map \emph{egotistical} or \emph{self-centred}). Observe that $\mu$ is an (essentially) free action if and only if $\odot(\omega)$ has distinct labels almost surely. For if $g, h \in \omega$ receive the same label under the egotistical map, then $g^{-1}\omega = h^{-1}\omega$, i.e.\ $gh^{-1} \in \stab_G(\omega)$. Conversely, if $g \in \stab_G(\omega)$ is nontrivial, then for all $x \in \omega$ the label $x^{-1}\omega$ of $x$ is the same as that of $gx$, as $(gx)^{-1}\omega = x^{-1}\omega$. Fix a countable dense subset $Q \subset {\mathbb{M}_0}$. Let us define a thinning $I_q : \MM \to \MM$ for each $q \in Q$ by \[ I_q(\omega) = \{ g \in \omega \mid d(g^{-1}\omega, q) < d(h^{-1}\omega, q) \text{ for all } h \in \omega \text{ adjacent to } g \text{ in } \mathscr{G}(\omega) \}. \] Note that each $I_q(\omega)$ is an independent subset of $\mathscr{G}(\omega)$, but it is possibly empty. However, the \emph{union} over all $q$ of the $I_q$ is $\omega$ by freeness, so at least one such $I_q$ must define a non-empty independent subset, as desired. \end{proof} In particular, by applying the lemma to the factor graph $\mathscr{D}_R$ of Example \ref{distanceR}, one has: \begin{cor} Let $\Pi$ be a free point process. Then for all $R > 0$ one can deterministically, measurably, and equivariantly select a subset $\Pi_R \subset \Pi$ that is $R$ uniformly separated, in the sense that if $x$ and $y$ are distinct points of $\Pi_R$, then $d(x,y) > R$. \end{cor} Our proof will also make use of a technique we refer to as \emph{label trickery}: \begin{prop}[Label trickery]\label{labeltrickery} Let $\Pi$ be any free point process (possibly marked), and $\theta(\Pi)$ any nonempty thinning. Then there exists a marked point process $\Upsilon$ such that the underlying point set of $\Upsilon$ is $\theta(\Pi)$, and $\Upsilon$ is isomorphic to $\Pi$ \emph{as a pmp action}.
In particular, $\Upsilon$ is a free action. The same can be achieved with $\Upsilon$ having marks from the \emph{compact} space $[0,1]$. \end{prop} \begin{proof} Let $\Upsilon = \theta(\odot(\Pi))$, that is, \[ \Upsilon = \{ (g, g^{-1}\Pi) \in G \times {\mathbb{M}_0} \mid g \in \theta(\Pi) \}. \] Observe that this is an \emph{injective} map, as one can recover $\Pi$ uniquely from the knowledge of any point of $\Upsilon$ and its label, and so $\Upsilon$ is an isomorphic process to $\Pi$. For the second statement, simply fix a Borel isomorphism\footnote{It exists as ${\mathbb{M}_0}$ is a Polish space, and thus standard Borel, and all standard Borel spaces of the same cardinality are isomorphic.} $I : {\mathbb{M}_0} \to [0,1]$, and define \[ \Upsilon = \{ (g, I(g^{-1}\Pi)) \in G \times [0,1] \mid g \in \theta(\Pi) \}. \]\end{proof} \begin{proof}[Proof of Proposition \ref{abstractlyisom}] Suppose $\Pi$ is a free $\Xi$-marked point process with law $\mu$. We can (and do) assume that $\Pi$ is abstractly isomorphic to a uniformly separated process with a slightly different (but nevertheless standard Borel) mark space by using the previous two propositions. Let $X$ denote the space: \[ X = \{ \omega \in {\mathbb{M}_0}(B(0, \delta/100)) \mid \omega \cap B(0, \delta/200) = \{0\}, \text{ and } \forall x \in \omega \setminus \{0\}, \@ifstar{\oldabs}{\oldabs*}{\omega \cap B(x, \delta/200)} > 1 \}. \] This is a Borel subset of a standard Borel space, and hence standard Borel in its own right. One can readily see that it is uncountable, and hence there is a Borel isomorphism $I: \Xi \to X$. Define the following factor map: \begin{align*} &\Phi : \Xi^\MM \to \MM \\ &\Phi(\omega) = \bigcup_{x \in \omega} x I(\xi_x), \end{align*} where $\xi_x$ denotes the label of $x$ (that is, $(x, \xi_x) \in \omega$). This is an injective map: we can recover the underlying set of any input configuration to $\Phi$ by identifying the points which are $\delta/200$-isolated.
We can then uniquely recover their labels by applying the inverse of $I$ locally. \end{proof} \subsection{Cost is finite for compactly generated groups} \begin{prop}\label{finitecost} Suppose $G$ is \emph{compactly generated} by $S \subseteq G$. Then every free point process $\Pi$ on $G$ has finite cost. \end{prop} Implicitly we are assuming that $\Pi$ has finite intensity, so that its cost is defined. We recall some definitions and facts from metric geometry, see \cite{MR3561300} for further details in the specific context we are interested in. \begin{defn}\label{metricdefs} Let $(X, d)$ be a metric space. \begin{itemize} \item $(X, d)$ is \emph{coarsely connected} if there exists $c > 0$ such that for all $x, x' \in X$ there are points $x_1, x_2, \ldots, x_n \in X$ with $x = x_1$, $x_n = x'$, and $d(x_i, x_{i+1}) \leq c$ for all $i$. \item A subset $\omega \subseteq X$ is \emph{uniformly discrete} if there exists $\e > 0$ such that $d(x, y) > \e$ for all distinct $x, y \in \omega$. \item A subset $\omega \subseteq X$ is \emph{coarsely dense} if there exists $r > 0$ such that for every $x \in X$, $d(x, \omega) < r$. \item A \emph{Delone set} is a subset $\omega \subseteq X$ which is both uniformly discrete and coarsely dense. \item An \emph{$\e$-net} is a subset $\omega \subseteq X$ which is $\frac{\e}{2}$ uniformly discrete and $\e$ coarsely dense. \end{itemize} \end{defn} \begin{thm}[See Proposition 1.D.2 of \cite{MR3561300}] Let $G$ be an lcsc group with a left-invariant proper metric $d$ which generates its topology. Then $G$ is compactly generated if and only if it is coarsely connected. \end{thm} Note that if $X$ is coarsely connected, then so too is any coarsely dense subset of $X$. \begin{defn} Let $S \subseteq G$ be a compact and symmetric generating set. The \emph{Cayley factor graph} associated to $S$ is the map $\Cay(\bullet, S) : \MM \to \graph(G)$ given by \[ \Cay(\omega, S) = \{ (g, gs) \in \omega \times \omega \mid s \in S \}. 
\] \end{defn} Note that this graph is not necessarily connected, for instance for the Poisson point process. However, if $\Pi$ is a point process which is almost surely $c$-coarsely-connected for $c$ such that $B(0,c) \subseteq S$ then $\Cay(\Pi, S)$ is connected. This condition can always be satisfied by replacing $S$ with an appropriate power of the generating set $S^k$, since $S^k$ exhausts $G$ and in particular must contain $B(0,c)$ for $k$ sufficiently large. The following can be readily deduced from existing results in the literature (even removing the compact generation assumption), but we include a separate proof for completeness. \begin{prop}\label{factoronnet} Suppose $\Pi$ is a free and ergodic point process on a compactly generated group $G$. Then for every $R > 0$ there exists a \emph{finite intensity} thickening $\Theta$ of $\Pi$ such that $\Theta(\Pi)$ is almost surely $R$-coarsely-dense. Moreover, if $\Pi$ is $\delta$-separated (with $\delta < 2R$), then $\Theta$ will also be $\delta$-separated. \end{prop} \begin{proof} It suffices to prove the statement for ergodic processes. Fix $R > 0$. We will construct a factor map $\Phi$ of $\Pi$ such that $\Phi(\Pi)$ is $\frac{R}{2}$ uniformly separated and $\Theta(\Pi) := \Pi \sqcup \Phi(\Pi)$ is $R$-coarsely dense. The uniform separation then implies that this thickening has finite intensity. The idea of the proof is the following: observe that every uniformly separated subset of a metric space is a subset of a Delone set. You can prove this using the well-ordering principle or Zorn's lemma (according to your taste). Now consider a sample $\Pi$ from the point process. We know there are \emph{some} ways to add points to it to get something coarsely dense, the only difficulty is that we are required to make these choices equivariantly. We will select points that see the ``frontier'' of the process, which will then add points to cover a piece of the frontier.
At every stage the frontier gets smaller, and in the limit we cover the whole space. For configurations $\omega \in \MM$, let $\omega^t$ denote the following closed set \[ \omega^t = \bigcup_{g \in \omega} B(g, t), \] that is, the union of all \emph{closed} balls about the points of $\omega$. If $\Pi$ is a point process, then $\Pi^t$ is a random closed subset of $G$. We call a point $g \in \Pi$ \emph{on the frontier} if $B(g, c_1 R) \not\subseteq \Pi^R$, where $c_1 > 1$ is some parameter to be chosen later, and let $F(\Pi)$ denote the subset of frontier points of $\Pi$. This is a metrically defined condition, and hence equivariant. We will define a rule $\Phi_1(\Pi)$ that specifies a collection of points such that their $R$-balls cover all the $c_1 R$-balls of the frontier points of $\Pi$. We will then iterate this construction (so that $\Phi_2(\Pi)$'s $R$-balls cover the $c_2 R$-balls of $\Phi_1(\Pi) \cup \Pi$'s frontier points, for some $c_2 > c_1$, and so on). In this way we will find enough points to cover the whole space. Choose $c_1$ large such that $\PP[ \Pi^{c_1 R} \setminus \Pi^R \neq \empt] = 1$. If this is not possible, then the process is already $R$-coarsely-dense by ergodicity. One can decompose the frontier points of $\Pi$ as \[ F(\Pi) = \bigsqcup_n F_n(\Pi), \] where each $F_n(\Pi)$ is $10 c_1 R$ uniformly separated. This can be done by using the existence of a \emph{Borel kernel} of the factor graph $\mathscr{D}_{10 c_1 R}(\Pi_0)$ defined on the frontier points of $\Pi_0$, see Section 4 of \cite{KECHRIS19991} for further information on Borel kernels. Note that by using the Palm process, the resulting sets $F_n(\Pi)$ are equivariantly defined. We now fix an auxiliary (deterministic) $R$-net $\mathcal{N} \subset G$. If $W \subseteq G$ is a Borel region and $g \in G$, then let \[ N(g, W) = \{ x \in g^{-1}\mathcal{N} \mid B(x, R) \cap W \neq \empt \}. \] Note that $N(g,W)^R \supseteq W$, as $\mathcal{N}$ is coarsely dense. 
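The covering property of $N(g, W)$ can be sanity-checked in a toy computation (with $G = \RR$, an invented net, and an invented window $W$):

```python
# Toy check in G = R that the R-balls of N(g, W) cover W: take an R-net at
# the integer multiples of R, collect the net points whose R-ball meets W,
# and verify coverage on a fine grid of sample points in W.
R = 1.0
W = (2.3, 5.7)   # a bounded region of the frontier, as an interval

net = [k * R for k in range(-10, 11)]
N = [x for x in net if W[0] - R < x < W[1] + R]   # B(x, R) meets W

grid = [W[0] + i * (W[1] - W[0]) / 1000 for i in range(1001)]
print(all(any(abs(w - x) <= R for x in N) for w in grid))
```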
Define \[ \Phi_1(\Pi) = \bigcup_{g \in F_1(\Pi)} N(g, B(g, c_1 R) \setminus \Pi^R), \] and inductively \[ \Phi_{n+1}(\Pi) = \bigcup_{g \in F_{n+1}(\Pi)} N(g, B(g, c_1 R) \setminus \left(\Pi \cup \bigcup_{i \leq n} \Phi_i(\Pi) \right)^R ). \] Then \[ \Pi^{c_1 R} \subseteq \Pi^R \cup \bigcup_{n \geq 1} \Phi_n(\Pi)^R. \] We now repeat this procedure as many times as necessary (possibly countably infinitely many times) until we construct the desired thickening $\Theta$. The only care necessary is that one should choose the parameters $c_1 < c_2 < \cdots$ so that they tend to infinity, as \[ G = \bigcup_{n \geq 1} \Pi^{c_n R} \] for any such sequence. \end{proof} \begin{remark} In Section \ref{crosssectionappendix} we describe the connection between point processes and ``cross-sections'' of actions. The previous proposition can be deduced from the fact that every free action admits a ``cocompact cross-section''. A similar statement to the proposition directly phrased in terms of cross-sections can be found in Section 2 of \cite{slutsky2017lebesgue}, where it is shown that any cross-section can be extended to a cocompact cross-section. That proof works without the compact generation assumption. \end{remark} \begin{proof}[Proof of Proposition \ref{finitecost}] It suffices to consider the case where $\Pi$ is ergodic, see \cite{kechris2004topics} Corollary 18.6. If $\Pi$ is $\delta$-separated, then the thickening constructed in Proposition \ref{factoronnet} is $\delta$-separated and coarsely dense, so (for a suitable choice of generating set) its Cayley factor graph is a connected graphing of finite expected degree, and hence the thickening has finite cost. Cost is monotone for factors, so $\Pi$ itself has finite cost. Otherwise, choose $\delta$ sufficiently small so that the $\delta$-thinning of $\Pi$ is non-empty almost surely. By label trickery (Proposition \ref{labeltrickery}), $\Pi$ is isomorphic to a (marked) $\delta$-separated point process, reducing to the previous case.
\end{proof} \section{The Poisson point process has maximal cost} We begin with the observation that every IID process factors onto the Poisson: \begin{prop}\label{factorontopoisson} Let $\Pi$ be a point process on $G$. Then $[0,1]^\Pi$ factors onto the Poisson point process. \end{prop} \begin{proof} Fix a map $F : [0,1] \to \MM(G)$ such that if $\xi \sim \texttt{Unif}[0,1]$, then $F(\xi)$ is a Poisson point process on $G$ of unit intensity. We will use the Voronoi tessellation to simply glue independent copies of the Poisson point process in each cell, resulting in a Poisson point process. Define a factor map $\Phi([0,1]^\Pi)$ by \[ \Phi([0,1]^\Pi) = \bigcup_{g \in \Pi} \left( g \cdot F(\xi_g) \right) \cap V_{\Pi}^T(g) , \] where $\xi_g$ denotes the label of $g$ in $[0,1]^\Pi$. Then $\Phi([0,1]^\Pi)$ is the Poisson point process, by the independence and restriction properties of the Poisson process. \end{proof} In particular, the cost of every IID process is at most the cost of the Poisson. Our goal is to prove an \emph{asymptotic} version of this statement. We will show that every free point process ``weakly'' factors onto an IID process, and that cost is monotone for (certain) weak factors. This will prove that the Poisson point process has maximal cost amongst all free point processes. \subsection{Weak factoring and Ab\'{e}rt-Weiss for point processes} We have seen that cost is monotone under factor maps. We will now introduce a weaker version of factoring and investigate its relationship to cost: \begin{defn} Let $\Pi$ and $\Upsilon$ be point processes. Then $\Pi$ \emph{weakly factors} onto $\Upsilon$ if there is a sequence $\Phi^n$ of factors of $\Pi$ such that $\Phi^n(\Pi)$ weakly converges\footnote{For more information on weak convergence in the context of point processes, see Appendix \ref{metricproperties}.} to $\Upsilon$. \end{defn} The restive reader is advised to take a look at the statements of Theorem \ref{abertweiss} and Theorem \ref{costmonotonicity}.
These are the tools that will be used to prove the headline theorem of this section. The other results in this section are necessary but have a more routine flavour. \begin{thm}\label{iidamenableweakfactor} Let $\Pi$ and $\Upsilon$ be point processes on an amenable group $G$. If $\Pi$ is free, then $[0,1]^\Pi$ weakly factors onto $\Upsilon$. \end{thm} The proof of this uses a lemma, a proof of which can be found in a concurrently appearing paper by the second author: \begin{lem}\label{hyperfinitelemma} If $\Pi$ is a free point process on an amenable group $G$, then there exist factor partitions $\mathcal{P}_n(\Pi) = \{P^n_g\}_{g \in \Pi}$ with the following properties: \begin{description} \item[Equivariance:] $P^n_{\gamma g} = \gamma P^n_g$, \item[Partitioning:] For each $n$, $G$ is the union of $\{P^n_g\}_{g \in \Pi}$, and if $g, h \in \Pi$ then $P^n_g = P^n_h$ or $P^n_g \cap P^n_h = \empt$, \item[Increasing:] For each $n$, $P^n_g \subseteq P^{n+1}_g$, \item[Exhausting:] For all compact $C \subseteq G$, there exists $N$ and $g \in \Pi$ such that $C \subseteq P^N_g$, \item[Finite volume:] For all $n$, $0 < \lambda(P^n_g) < \infty$. \item[Finitariness:] For each $n$, $P^n_g \cap \Pi$ is finite (and contains $g$). \end{description} \end{lem} \begin{figure}[h]\label{clumpingfigure} \includegraphics[scale=0.5]{clumping.pdf} \centering \caption{The factor partitions from Lemma \ref{hyperfinitelemma} are ``clumpings'', and should be visualised like this. } \end{figure} \begin{proof}[Proof of Theorem \ref{iidamenableweakfactor}] Let $f : [0,1] \to \MM$ be a measurable map with $f(\xi) \sim \Upsilon$ if $\xi \sim \texttt{Unif}[0,1]$. Choose factor partitions $\mathcal{P}_n(\Pi) = \{P^n_g\}_{g \in \Pi}$ as in Lemma \ref{hyperfinitelemma}. Let $\Pi_n$ be an equivariantly defined subprocess of $\Pi$ which consists of one point chosen out of each cell $P^n_g$ -- we are able to do this by essential freeness.
For instance, fix a Borel isomorphism of ${\mathbb{M}_0}$ with $[0,1]$ and note that the induced label in $[0,1]$ at each point of $\Pi$ is distinct for all points (essential freeness), and thus we may choose $\Pi_n$ to consist of the point with maximal label amongst its cell. Define factors $\Phi_n$ as follows: \[ \Phi_n([0,1]^\Pi) = \bigcup_{g \in \Pi_n} \left( g \cdot f(\xi_g) \right) \cap P^n_g. \] That is, in each cell we glue a copy of the process $\Upsilon$ sampled according to the label $\xi_g$ on $g$. It follows immediately that $\Phi_n([0,1]^\Pi)$ weakly converges to $\Upsilon$: if $C \subseteq G$ is any compact stochastic continuity set for $\Upsilon$, then for sufficiently large $N$ it is entirely contained in some $P^N_g$, and thus for $n \geq N$ the point counts of $\Phi_n([0,1]^\Pi)$ inside $C$ are distributed exactly as those of $\Upsilon$. \end{proof} The following statement is due to Ab\'{e}rt and Weiss \cite{abert2013bernoulli} for discrete groups; we extend it to point processes: \begin{thm}\label{abertweiss} Let $\Pi$ be an essentially free point process on a noncompact group $G$. Then $\Pi$ weakly factors onto $[0,1]^\Pi$, its own IID. \end{thm} \begin{proof} It suffices to show that $\Pi$ weakly factors onto $[d]^\Pi$, where $[d] = \{ 1, 2, \ldots, d\}$ is equipped with the uniform measure. Here $[d]^\Pi$ is thus the \emph{finitary} IID of $\Pi$. This suffices as $[d]^\Pi$ weakly converges to $[0,1]^\Pi$ as $d \to \infty$. We will do this by constructing factor \emph{$[d]$-labellings} $\mathscr{C}_n$ of $\Pi$ such that $\mathscr{C}_n(\Pi)$ weakly converges to $[d]^\Pi$. To do this, we'll use the second moment method, hewing close to the original Ab\'{e}rt-Weiss recipe. The strategy will be as follows. Consider the set of $[d]$-labellings of $\Pi$. We will study a probabilistic model that produces a \emph{random element} of this space. We will show that this random \emph{deterministic colouring} satisfies certain constraints with positive probability.
In particular, there must exist a $[d]$-labelling satisfying those constraints. By adjusting the parameters of this model, one can produce the desired sequence $\mathscr{C}_n$. Fix a \emph{countable} weak convergence determining family $\{V_i\}$ as discussed at Lemma \ref{determiningclass}, so that the sets $V_i \subset G \times [d]$ are bounded stochastic continuity sets for $[d]^\Pi$. We will construct a sequence of factor colourings $\mathscr{C}_n$ of $\Pi$ such that for fixed $k$, \[ N_{\boldsymbol{V}_k}(\mathscr{C}_n \Pi) \text{ converges weakly to } N_{\boldsymbol{V}_k}([d]^\Pi), \] where $\boldsymbol{V}_k = (V_1, V_2, \ldots, V_k)$. Set $W_k = \bigcup_{i \leq k} V_i$ to be the total window. Formally this is a subset of $G \times [d]$, but we view it as a subset of $G$. For $\e > 0$ arbitrary, we choose $\delta > 0$ so small that $\mu(A_\delta) > 1 - \e$ and $(\mu \otimes \mu)(B_\delta) > 1 - \e$, where $\mu$ denotes the law of $\Pi$ and \[ A_\delta = \{ \omega \in \MM \mid \text{for all } g, h \in \omega \cap W_k, g \neq h \text{ implies } d(g^{-1}\omega, h^{-1}\omega) > \delta \} \] and \[ B_\delta = \{ (\omega, \omega') \in \MM \times \MM \mid \text{for all } (g, h) \in (\omega \cap W_k) \times (\omega' \cap W_k),\; d(g^{-1}\omega, h^{-1}\omega') > \delta \}. \] This is possible by essential freeness (for $A_\delta$) and essential freeness and noncompactness of $G$ (for $B_\delta$). To see this, let us formulate essential freeness in the following way: \[ \mu(\{ \omega \in \MM \mid \text{ for all } g, h \in \omega, g \neq h \text{ implies } g^{-1}\omega \neq h^{-1}\omega \}) = 1. \] This remains an almost sure event if we restrict $g$ and $h$ to lie in the window $W_k$. Now observe that, as $\delta$ tends to zero, $A_\delta$ increases to this almost sure event.
The argument for $B_\delta$ is similar, but depends on the fact that the set \[ B_0 = \{(\omega, \omega') \in \MM \times \MM \mid \text{for all } g \in \omega, h \in \omega', g^{-1}\omega \neq h^{-1}\omega' \} \] is $\mu \otimes \mu$ almost sure, which is less immediate and we now show. Observe that $\mu \otimes \mu$ defines a point process on $G \times G$ of intensity $\intensity(\mu)^2$, and Palm measure $\mu_0 \otimes \mu_0$. By the correspondence of measures between $\mu \otimes \mu$ and $\mu_0 \otimes \mu_0$, we equivalently ask that \[ (\mu_0 \otimes \mu_0)(\{ (\omega, \omega') \in {\mathbb{M}_0} \times {\mathbb{M}_0} \mid \omega \neq \omega'\}) = 1, \] or equivalently that $\mu_0$ has no atoms. This is contradicted by essential freeness: if $\omega$ is an atom of $\mu_0$, then $\omega$ is shift invariant, that is, $g^{-1}\omega = \omega$ for all $g \in \omega$. This implies that $\omega$ is in fact a \emph{subgroup} of $G$, and it is discrete by definition. Now the ergodic component of $\mu$ corresponding to $\omega$ is supported on $G\omega$, and thus defines a $G$-invariant probability measure on $G/\omega$, that is, it is a lattice shift. But $\mu$ was assumed to be essentially free, a contradiction. We now construct a \emph{random} colouring $\mathscr{C}$ of $\Pi$ in the following way: let \[ {\mathbb{M}_0} = \bigsqcup_i D_i, \text{ where } \diam(D_i) < \delta, \] be a partition of ${\mathbb{M}_0}$ into small measurable sets. By the correspondences we've described, any $[d]$-colouring of the sets $D_i$ corresponds to a factor colouring $\mathscr{C} : \MM \to [d]^\MM$ in the following way: \[ \mathscr{C}(\omega) = \{(g, c) \in \omega \times [d] \mid g^{-1}\omega \in {\mathbb{M}_0} \text{ is coloured by } c \}. \] We look at such $\mathscr{C}$ when the $D_i$ sets are coloured \emph{uniformly at random} by elements of $[d]$. To emphasise: we are considering a \emph{distribution} on \emph{deterministic} colourings.
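To make the construction concrete, here is a minimal sketch in Python, purely illustrative and outside the formal development: the interval $[0,1)$ stands in for the Palm space, three cells stand in for the sets $D_i$, and colouring the cells uniformly at random induces a deterministic colouring of points, with points in different cells receiving independent uniform colours.

```python
# Illustrative sketch: a *distribution* on *deterministic* colourings.
# The interval [0, 1) stands in for the Palm space, partitioned into
# cells (the sets D_i); colouring the cells uniformly at random induces
# a deterministic [d]-colouring of any point configuration.
from itertools import product

d = 2            # colours [d] = {0, 1}
n_cells = 3      # three cells D_0, D_1, D_2 partitioning [0, 1)

def cell_of(x):
    # the cell containing the "local configuration" x in [0, 1)
    return min(int(x * n_cells), n_cells - 1)

def induced_colour(cell_colouring, x):
    # a cell colouring induces a deterministic colour at x
    return cell_colouring[cell_of(x)]

# two points whose local configurations lie in *different* cells
x, y = 0.1, 0.5
assert cell_of(x) != cell_of(y)

# enumerate all d^3 equally likely cell colourings and tabulate the
# joint law of the induced pair of colours
counts = {}
for colouring in product(range(d), repeat=n_cells):
    pair = (induced_colour(colouring, x), induced_colour(colouring, y))
    counts[pair] = counts.get(pair, 0) + 1

# the pair is uniform on [d] x [d]: two independent uniform colours
for pair in product(range(d), repeat=2):
    assert counts[pair] == d ** (n_cells - 2)
```

The exact uniformity of the induced pair of colours is the finite analogue of the independence used in the moment computations that follow.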
For an integral vector $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_k) \in \NN_0^k$, we set \[ T_\alpha = \{ \omega \in [d]^\MM \mid (N_{V_1}(\omega), \ldots, N_{V_k}(\omega)) = \alpha \} \] to be the set of configurations whose point/colour statistics in $W_k$ are prescribed by $\alpha$. Note that $\mathscr{C}_*\mu(T_\alpha)$ is a random variable (whose source of randomness is $\mathscr{C}$). Given $k$ and $M$, we use the second moment method to prove the existence of $\mathscr{C}$ such that for all $\alpha \in \NN_0^k$ with $\@ifstar{\oldnorm}{\oldnorm*}{\alpha}_\infty \leq M$, \[ \@ifstar{\oldabs}{\oldabs*}{\mathscr{C}_*(\mu) (T_\alpha) - [d]^\mu(T_\alpha)} < \e. \] Then any sequence of such colourings with $k$ and $M$ tending to infinity and $\e$ tending to zero will witness that $\Pi$ weakly factors onto $[d]^\Pi$. Exchanging order of integration allows us to express the mean of $\mathscr{C}_*(\mu) (T_\alpha)$ as \begin{align*} \EE \left[ \mathscr{C}_*(\mu) (T_\alpha) \right] &= \EE[ \mu(\mathscr{C}^{-1}(T_\alpha))] \\ &= \EE\left[ \int_\MM \mathbbm{1}[\mathscr{C}(\omega) \in T_\alpha] d\mu(\omega)\right] \\ &= \int_\MM \EE\left[ \mathbbm{1}[\mathscr{C}(\omega) \in T_\alpha]\right] d\mu(\omega). \end{align*} Note that for $\omega \in A_\delta$, all pairs of distinct points $g,h \in \omega$ from the window $W_k$ have the property that $g^{-1}\omega$ and $h^{-1}\omega$ fall into different $D_i$ sets, and are therefore assigned \emph{independent} colours. Thus \[ \text{ for } \omega \in A_\delta, \quad \EE \left[\mathbbm{1}[\mathscr{C}(\omega) \in T_\alpha]\right] = [d]^\mu(T_\alpha). \] As $\mu(A_\delta) > 1 - \e$, it follows that \[ \@ifstar{\oldabs}{\oldabs*}{ \EE \left[ \mathscr{C}_*\mu(T_\alpha) \right] - [d]^\mu(T_\alpha) } < 2\e. \] We now work on the variance.
Again, exchanging the order of integration as before allows us to express the mean of $(\mathscr{C}_*(\mu) (T_\alpha))^2$ as \[ \EE \left[ (\mathscr{C}_*(\mu) (T_\alpha))^2 \right] = \iint_{\MM \times \MM} \EE \left[ \mathbbm{1}[\mathscr{C}(\omega) \in T_\alpha] \mathbbm{1}[\mathscr{C}(\omega') \in T_\alpha] \right] \,d\mu(\omega) \,d\mu(\omega'). \] By similar reasoning to before, for $(\omega, \omega') \in (A_\delta \times A_\delta) \cap B_\delta$, the colours one sees at points in $W_k$ will be independent. Thus for such $(\omega, \omega')$ we have \[ \EE \left[ \mathbbm{1}[\mathscr{C}(\omega) \in T_\alpha] \mathbbm{1}[\mathscr{C}(\omega') \in T_\alpha] \right] = \left( [d]^\mu(T_\alpha) \right)^2. \] Note that $(A_\delta \times A_\delta) \cap B_\delta = (A_\delta \times \MM) \cap (\MM \times A_\delta) \cap B_\delta$, so by the union bound $(\mu \otimes \mu)((A_\delta \times A_\delta) \cap B_\delta) > 1 - 3\e$. Putting this together, \[ \Var(\mathscr{C}_*(\mu) (T_\alpha)) = \EE \left[ (\mathscr{C}_*(\mu) (T_\alpha))^2 \right] - \left(\EE \left[ \mathscr{C}_*\mu(T_\alpha) \right]\right)^2 < 12\e. \] We now apply Chebyshev's inequality, which states that for any $c > 0$, \[ \PP\left[ \@ifstar{\oldabs}{\oldabs*}{\mathscr{C}_*(\mu) (T_\alpha) - \EE[\mathscr{C}_*(\mu) (T_\alpha)]} > c \right] < \frac{\Var(\mathscr{C}_*(\mu) (T_\alpha))}{c^2}. \] Our bounds on the mean and the variance of $\mathscr{C}_*(\mu) (T_\alpha)$ and the choice $c = \e^{\frac{1}{3}}$ yield \[ \PP\left[ \@ifstar{\oldabs}{\oldabs*}{\mathscr{C}_*(\mu) (T_\alpha) - [d]^\mu(T_\alpha)} > \e^{\frac{1}{3}}+2\e \right] < 12\e^{\frac{1}{3}}. \] Let $E_\alpha$ denote the event $\{\@ifstar{\oldabs}{\oldabs*}{\mathscr{C}_*(\mu) (T_\alpha) - [d]^\mu(T_\alpha)} < \e^{\frac{1}{3}} + 2\e \}$. Then by the union bound (there are at most $(M+1)^k$ relevant vectors $\alpha$) \[ \PP[ \bigcap_{\substack{\alpha \in \NN_0^k \\ \@ifstar{\oldnorm}{\oldnorm*}{\alpha}_\infty \leq M}} E_\alpha] \geq 1 - 12(M+1)^k \e^{\frac{1}{3}}.
\] In particular, by choosing $\e$ sufficiently small, such a colouring exists. \end{proof} \begin{prop}\label{weakfactoringtransitive} Suppose $\Pi$ and $\Upsilon$ are point processes, with $\Pi$ weakly factoring onto $\Upsilon$ and $\Psi(\Upsilon)$ being a factor of $\Upsilon$. Then $\Pi$ weakly factors onto $\Psi(\Upsilon)$. It follows that weak factoring is a \emph{transitive} notion. \end{prop} \begin{proof} We have seen that every factor map decomposes as the composition of a (coloured) thickening and a thinning. We are therefore able to reduce the problem to the cases where $\Psi$ is a thinning, a colouring, or a thickening. We will repeatedly use the following fact: if $\Pi$ weakly factors onto $\Psi_m(\Upsilon)$ for a sequence of factors $\Psi_m$, and $\Psi_m$ converges to $\Psi$ pointwise, then $\Pi$ weakly factors onto $\Psi(\Upsilon)$. We begin with the case of thinnings. Let $\Phi^n(\Pi)$ weakly converge to $\Upsilon$. We write $\mu$ and $\nu$ for the laws of $\Pi$ and $\Upsilon$ respectively. Let $\theta^A$ be a thinning of $\Upsilon$, determined by a subset $A \subseteq ({\mathbb{M}_0}, \nu_0)$ as in Theorem \ref{correspondencetheorem}. The idea of the proof is this: if $A$ were a $\nu_0$ continuity set, then the corresponding thinning $\theta^A : \MM \to \MM$ is continuous $\nu$ almost everywhere, and so $\theta^A(\Phi^n(\Pi))$ weakly converges to $\theta^A(\Upsilon)$ by Lemma \ref{continuity}. We handle the general case by approximating $A$ by $\nu_0$ continuity sets. \begin{claim} If $A$ is a $\nu_0$ continuity set, then $\theta^A : \MM \to \MM$ is continuous $\nu$ almost everywhere. \end{claim} To see this, recall the \emph{saturation} notion we used in the proof of Theorem \ref{correspondencetheorem}. We've assumed $\nu_0(\partial A) = 0$, and hence $\nu(G \partial A) = 0$ too. Then $\theta^A$ is continuous on the complement of this set. Note that if $\omega \not\in G \partial A$, then $g^{-1}\omega \not\in \partial A$ for all $g \in \omega$.
One can now see that if $\omega_n$ converges to $\omega$, then $\theta^A(\omega_n)$ restricted to any fixed radius ball is eventually equal to $\theta^A(\omega)$, as desired. For the general case, let $A_m \subseteq {\mathbb{M}_0}$ be $\nu_0$-continuity sets such that \[ \nu_0(A \triangle A_m) < \frac{1}{ 2^m}. \] Then for every $m$, we have $\theta^{A_m}(\Phi^n(\Pi)) \to \theta^{A_m}(\Upsilon)$ by our earlier argument. By Borel--Cantelli, almost surely the events $A \triangle A_m$ occur only finitely often, and the same holds for their saturations, so we see that $\theta^{A_m} \to \theta^{A}$ pointwise almost surely and hence also in distribution. By choosing an appropriate subsequence of $m$s and $n$s we find our desired sequence of factor maps. The above proof for thinnings can be immediately adapted to prove that $\Pi$ weakly factors onto $\Psi(\Upsilon)$ if $\Psi$ is any $[d]$-colouring of $\Upsilon$. Since any colouring is a pointwise limit of finitary colourings, we see that $\Pi$ weakly factors onto \emph{any} colouring of $\Upsilon$. Finally, suppose $\Psi$ is a thickening of $\Upsilon$. By using the Voronoi tessellation we may express $\Psi$ in the following form: \[ \Psi(\omega) = \bigcup_{g \in \omega} g F(g^{-1}\omega), \] where $F : {\mathbb{M}_0} \to {\mathbb{M}_0}$ is a measurable function. We say that $\Psi$ is a \emph{bounded range} thickening if there exists $C > 0$ such that $F(\Upsilon_0) \subseteq B(0, C)$ almost surely. It is easy to show that $\Psi$ is the pointwise limit of such thickenings, so we are reduced to this case. Define $I : {\mathbb{M}_0}(B(0, C))^\MM \to \MM$ by \[ I(\omega) = \bigcup_{g \in \omega} g \xi_g, \] where $\xi_g$ is the label of $g$ in $\omega$, that is, $(g, \xi_g) \in \omega$. This is the \emph{implementation} map: it takes a schema for a thickening and implements it. \begin{claim} The implementation map $I$ is continuous.
\end{claim} The task is to show that given $R, \e > 0$ there exist $S, \delta > 0$ such that if $\omega$ and $\omega'$ are $(S, \delta)$-wobbles of each other, then $I(\omega)$ and $I(\omega')$ are $(R, \e)$-wobbles of each other. The idea is simply that the behaviour of $I(\omega)$ in a ball of radius $R$ is determined by $\omega$ restricted to the ball of radius $R + C$, as the thickening is of bounded range $C$. By choosing $\delta$ sufficiently small (depending on the labels of the points in $\omega \cap B(0, R + C)$), we find our desired $S$ and $\delta$. With the claim in hand, the desired result follows from Lemma \ref{continuity}. \end{proof} \begin{cor}\label{amenableequivalent} Let $\Pi$ and $\Upsilon$ be point processes on an amenable group, with $\Pi$ free. Then $\Pi$ weakly factors onto $\Upsilon$. \end{cor} \begin{proof} By Theorem \ref{abertweiss} $\Pi$ weakly factors onto $[0,1]^\Pi$, and by Theorem \ref{iidamenableweakfactor} $[0,1]^\Pi$ weakly factors onto $\Upsilon$. Hence the claim follows from Proposition \ref{weakfactoringtransitive}. \end{proof} \begin{lem}\label{weakfactorimpliesiiweakfactor} Suppose $\Pi_n$ weakly converges to $\Pi$. Then $[0,1]^{\Pi_n}$ weakly converges to $[0,1]^{\Pi}$. \end{lem} \begin{proof} This can be seen, for instance, by verifying that the finite dimensional distributions of $[0,1]^{\Pi_n}$ weakly converge to those of $[0,1]^\Pi$. Recall that a $[0,1]$-marked point process on $G$ is just a particular kind of point process on $G \times [0,1]$. It therefore suffices to check weak convergence of the finite dimensional distributions against stochastic continuity sets of $[0,1]^\Pi$ in product form. To that end, let $\boldsymbol{V} = (V_1, V_2, \ldots, V_k)$ denote a collection of stochastic continuity sets for $\Pi$, and $[0, \boldsymbol{p}) = ([0, p_1), [0, p_2), \ldots, [0, p_k))$ a family of intervals in $[0,1]$.
We denote by $\boldsymbol{V} \times [0, \boldsymbol{p}) = (V_1 \times [0, p_1), \ldots, V_k \times [0, p_k))$ the corresponding stochastic continuity sets of $[0,1]^\Pi$. Fix an integral vector $\boldsymbol{\alpha} \in \NN_0^k$. We must show that $\PP[N_{\boldsymbol{V} \times [0, \boldsymbol{p})} [0,1]^{\Pi_n} = \boldsymbol{\alpha}]$ converges to $\PP[N_{\boldsymbol{V} \times [0, \boldsymbol{p})} [0,1]^{\Pi} = \boldsymbol{\alpha}]$. We find the following explicit expression simply by conditioning on $\boldsymbol{\beta}$, the vector of point counts of $\Pi_n$ in $\boldsymbol{V}$: \begin{align*} \PP[N_{\boldsymbol{V} \times [0, \boldsymbol{p})} [0,1]^{\Pi_n} = \boldsymbol{\alpha}] &= \sum_{\boldsymbol{\beta} \geq \boldsymbol{\alpha}} \PP[N_{\boldsymbol{V} \times [0, \boldsymbol{p})} [0,1]^{\Pi_n} = \boldsymbol{\alpha} \mid N_{\boldsymbol{V}} \Pi_n = \boldsymbol{\beta}] \PP[N_{\boldsymbol{V}} \Pi_n = \boldsymbol{\beta}]\\ &= \sum_{\boldsymbol{\beta} \geq \boldsymbol{\alpha}} \prod_{i=1}^k \binom{\beta_i}{\alpha_i} p_i^{\alpha_i} (1-p_i)^{\beta_i - \alpha_i} \PP[N_{\boldsymbol{V}} \Pi_n = \boldsymbol{\beta}], \end{align*} where by $\boldsymbol{\beta} \geq \boldsymbol{\alpha}$ we mean that $\beta_i \geq \alpha_i$ for each entry: given $\beta_i$ points in $V_i$, each independently carries a label in $[0, p_i)$ with probability $p_i$, so the count of labelled points is binomial. There is an identical expression for $\Pi$ (simply replace all instances of $\Pi_n$ by $\Pi$). The conclusion follows, as $\PP[N_{\boldsymbol{V}} \Pi_n = \boldsymbol{\beta}]$ converges to $\PP[N_{\boldsymbol{V}} \Pi = \boldsymbol{\beta}]$ for all $\boldsymbol{\beta}$, and these probability mass functions converge in $\ell^1$ by Scheff\'{e}'s lemma, justifying the limit of the sums. \end{proof} We have seen that all free point processes are able to \emph{weakly} factor onto their own IID. It is natural to ask if all this hassle was worth it -- can a point process always factor directly onto its own IID? \begin{thm}[Holroyd, Lyons, Soo \cite{MR2884878}] The Poisson point process cannot be split into two \emph{independent} Poisson point processes of lower intensity without additional randomness.
More precisely, there does not exist a \emph{deterministic} two colouring $\mathscr{C} : (\MM, \PPP) \to \{0, 1\}^\MM$ such that $\mathscr{C}_* \PPP$ is the IID $\texttt{Ber}(p)$ labelled Poisson point process for $0 < p < 1$. \end{thm} \begin{example} Some point processes \emph{can} factor onto their own IID, however. Note that taking the IID of a point process is idempotent, in the sense that \[ [0,1]^{[0,1]^\Pi} \cong ([0,1]^2)^\Pi \cong [0,1]^\Pi. \] For an unlabelled example, one can simply \emph{spatially implement} $[0,1]^\Pi$. That is, using the method sketched at Proposition \ref{abstractlyisom} one can find an unlabelled point process $\Upsilon$ (abstractly) isomorphic to $[0,1]^\Pi$, and thus $[0,1]^\Upsilon \cong \Upsilon$. \end{example} \subsection{Cost monotonicity for (certain) weak factors}\label{certainfactors} In this section we will always assume $G$ is compactly generated by $S \subset G$. \begin{question} Suppose $\Pi$ weakly factors onto $\Upsilon$. Is it true that $\cost(\Pi) \leq \cost(\Upsilon)$? That is, is cost monotone for weak factors? \end{question} This is the \emph{real} theorem that we would like to prove. We are only able to prove the following theorem, which implies that cost is monotone for certain weak factors: \begin{thm}\label{costmonotonicity} Suppose $\Pi_n$ is a sequence of point processes that weakly converges to $\Pi$. Then \[ \limsup_{n \to \infty} \cost(\Pi_n) \leq \cost(\Pi) \] holds in the following cases: \begin{enumerate} \item If there exist $\delta, R > 0$ such that $\Pi_n$ and $\Pi$ are all $\delta$ uniformly separated and $R$ coarsely dense. \item If all the $\Pi_n$ are free and $\Pi$ is $\delta$ uniformly separated. \end{enumerate} Moreover, the same statements are true if the point processes have labels from a \emph{compact} mark space $\Xi$. \end{thm} We will need an auxiliary lemma, which we will use again later: \begin{lem}\label{continuousthinning} Let $\Pi$ be a point process with law $\mu$.
Then for all but countably many $\delta > 0$, the $\delta$-metric-thinning map $\theta^\delta : \MM \to \MM$ is continuous $\mu$ almost everywhere. In particular, if $\Pi_n$ weakly converges to $\Pi$, then $\theta^\delta(\Pi_n)$ weakly converges to $\theta^\delta(\Pi)$. \end{lem} To prove the lemma, simply note that any $\delta$ such that $B_G(0, \delta)$ is a stochastic continuity set for $\Pi_0$ works. \begin{proof}[Proof of Theorem \ref{costmonotonicity}.] We prove (1), and then show how to reduce (2) to (1). By increasing $S$ if necessary, we may assume that the Cayley factor graph of $\Pi_n$ and $\Pi$ with respect to $S$ is connected almost surely. Denote the distributions of $\Pi_n$ and $\Pi$ by $\mu_n$ and $\mu$ respectively. We call a factor graph $\mathscr{G}$ a \emph{$\mu$-continuity factor graph} if it has the property that \[ \lim_{n \to \infty} \muarrow^n(\mathscr{G}) = \muarrow(\mathscr{G}). \] The same technique used to prove Proposition \ref{palmconvergence} shows that factor graphs of the form\footnote{Let us unpack the definition: there is an edge between $g, h \in \mathscr{G}_{A,V}(\omega)$ if $g^{-1}\omega \in A$ and $g^{-1}h \in V$. That is, each point $g$ decides if it will have edges (checks if $g^{-1}\omega \in A$), and then simply connects to all points in $gV$.} $\mathscr{G}_{A, V} = (A \times V) \cap \Marrow$, where $A \subseteq {\mathbb{M}_0}$ is a $\mu_0$ continuity set and $V \subseteq G$ is a bounded stochastic $\mu$ continuity set, are $\mu$-continuity factor graphs. The idea of the proof is that we will take a cheap graphing $\mathscr{G}$ for the limit process $\mu$, and use it to produce a cheap $\mu$-continuity graphing $\mathscr{H}$. The continuity property then gives us information about the costs of $\mu_n$, \emph{but only if we can ensure $\mathscr{H}$ is connected on $\Pi_n$}. This is why we assume coarse density.
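A toy illustration of why coarse density matters, in Python with $G = \RR$ standing in for a general group (illustrative only, not part of the formal development): on a $\delta$-separated, $R$-coarsely-dense configuration on the line, consecutive gaps are at most $2R$, so the bounded-range graph joining all pairs within distance $2R$ is automatically connected, while a shorter-range graph need not be.

```python
# Toy illustration with G = R: a delta-separated, R-coarsely-dense point
# configuration has consecutive gaps of at most 2R, so the bounded-range
# graph joining points within distance 2R is connected.
def n_components(points, rng):
    # union-find over the graph with an edge when |p - q| <= rng
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if abs(points[i] - points[j]) <= rng:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

delta, R = 0.5, 1.0
# a delta-separated, R-coarsely-dense configuration in [0, 10]
pts = [0.0, 1.7, 2.5, 4.3, 5.0, 6.9, 8.0, 9.8]
gaps = [b - a for a, b in zip(pts, pts[1:])]
assert all(delta <= g <= 2 * R for g in gaps)

assert n_components(pts, 2 * R) == 1             # bounded range suffices
assert n_components(pts, delta / 2) == len(pts)  # too short: all isolated
```

Without the density assumption a configuration may have arbitrarily long gaps, and no bounded-range factor graph can be connected on it.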
Note that by outer regularity of the measure $\muarrow$, for every factor graph $\mathscr{G}$ and $\e > 0$ there exists an \emph{open} factor graph $\mathscr{G}' \supseteq \mathscr{G}$ such that $\muarrow(\mathscr{G}') \leq \muarrow(\mathscr{G}) + \e$. Therefore in the definition of cost one can replace ``measurable graphing'' by ``open graphing''. \begin{claim} Every \emph{open} graphing $\mathscr{G}$ of $\mu$ contains a $\mu$-continuity factor graph $\mathscr{H}_N$ such that its $N$th power satisfies $\mathscr{H}_N^N \supseteq \Marrow \cap (H_\delta \times S)$. \end{claim} Here $H_\delta$ denotes the space of $\delta$ separated configurations as in Lemma \ref{hardcorecompact}. Note that this condition and the assumption on $R$ and $S$ imply that $\mathscr{H}_N$ is \emph{connected} on any $\delta$-uniformly separated and $R$ coarsely dense input. In particular, $\mathscr{H}_N(\Pi_n)$ is connected for every $n$. Let us assume that all of $\Pi_n$ and $\Pi$ have unit intensity and finish the proof: for any $\e > 0$, choose a graphing $\mathscr{G}$ of $\Pi$ such that $ \muarrow(\mathscr{G}) \leq \cost(\Pi) + \e$. Take $\mathscr{H} = \mathscr{H}_N$ as in the claim. Then: \begin{align*} \limsup_{n \to \infty} &\cost(\Pi_n) \leq \limsup_{n \to \infty} \muarrow^n(\mathscr{H}) && \text{as } \mathscr{H} \text{ is a graphing of } \Pi_n \\ &= \muarrow(\mathscr{H}) && \text{since } \mathscr{H} \text{ is a } \mu\text{-continuity graphing} \\ &\leq \muarrow(\mathscr{G}) && \text{as } \mathscr{H} \subseteq \mathscr{G} \\ &\leq \cost(\Pi) + \e && \text{by assumption on } \mathscr{G}. \end{align*} Since $\e > 0$ was arbitrary, this proves the result for unit intensity processes. In general, the steps are the same except with the more complicated formula for the cost of processes with non unit intensity. One just needs to know that if $\Pi_n$ weakly converges to a finite intensity process $\Pi$, then $\intensity(\Pi_n)$ converges to $\intensity(\Pi)$.
This follows from weak convergence by choosing a stochastic continuity set $U$ for $\Pi$ and noting that the point count function $N_U$ is then continuous and \emph{bounded}, as we are dealing with uniformly separated processes. It remains to prove the claim about $\mu$-continuity factor graphs. Recall from Lemma \ref{continuitysets} that ${\mathbb{M}_0}$ and $G$ admit \emph{topological bases} $\{A_i\}$ and $\{V_j\}$ consisting of $\mu_0$-continuity sets and $\mu$ stochastic continuity sets (respectively). So by definition of the subspace topology, $\Marrow$ admits a topological basis $\{\mathscr{G}_{A_i,V_j}\}$ consisting of $\mu$-continuity factor graphs. For each $k \in \NN$ define \[ \mathscr{H}_k = \bigcup_{\substack{i, j \leq k \\ \mathscr{G}_{A_i,V_j} \subseteq \mathscr{G}}} \mathscr{G}_{A_i,V_j}. \] Since $\mathscr{H}_k$ consists of \emph{finitely many} continuity factor graphs, it is itself a continuity factor graph. Each $\mathscr{H}_k$ is also open, and increases to $\mathscr{G}$ as $k$ tends to infinity. As $\mathscr{G}$ is generating, $\{\mathscr{H}_k^k\}_{k \in \NN}$ forms an open cover of the \emph{compact}\footnote{See Lemma \ref{hardcorecompact}.} space $\Marrow \cap (H_\delta \times S)$. In particular, there exists $N$ such that $\mathscr{H}_N^N \supseteq \Marrow \cap (H_\delta \times S)$, proving the claim. One sees that the essential feature in the above proof strategy was \emph{compactness}, and therefore it remains true for $\Xi$-labelled point processes if $\Xi$ is compact, as mentioned. With this observation in hand, we can now deduce statement (2) from statement (1). We will produce a weakly convergent sequence of separated and coarsely dense point processes, where each term has the same cost as $\Pi_n$ and the weak limit factors onto $\Pi$, and thus has cost at most the cost of $\Pi$. This proves the statement.
Choose $\delta' < \delta$ as in Lemma \ref{continuousthinning} so that the $\delta'$ metric thinning satisfies \[ \theta^{\delta'}(\Pi_n) \text{ weakly converges to } \theta^{\delta'}(\Pi) = \Pi. \] Now observe that by label trickery (see Proposition \ref{labeltrickery}) we can find $[0,1]$-labelled point processes $\Upsilon_n$, each isomorphic to the respective $\Pi_n$ and such that their underlying point set is $\theta^{\delta'}(\Pi_n)$. Note that $\Upsilon_n$ \emph{might not weakly converge}, but it will have subsequential weak limits. All such subsequential weak limits will be some kind of (possibly random) labelling of $\Pi$. To see this, let $\pi : [0,1]^\MM \to \MM$ be the map that forgets labels. Thus $\pi(\Upsilon_n) = \theta^{\delta'}(\Pi_n)$. Since $\pi$ is continuous, it preserves weak limits. Let $\Upsilon$ be any subsequential weak limit of $\Upsilon_n$, along a subsequence $n_k$. Then by continuity \[ \pi(\Upsilon) = \lim_{k \to \infty} \pi(\Upsilon_{n_k}) = \lim_{k \to \infty} \theta^{\delta'}(\Pi_{n_k}) = \Pi. \] Now let $\Theta_n(\Upsilon_n)$ be \emph{the input/output versions} of the $(\delta', R)$-Delone thickenings that exist from Proposition \ref{factoronnet}. Here we use that the $\Upsilon_n$ are free. By input/output we mean that we keep track of which points of the thickening are input and output, as in Definition \ref{inputoutputdefn}. In particular, \[ \cost(\Theta_n(\Upsilon_n)) = \cost(\Upsilon_n) = \cost(\Pi_n), \] where the first equality holds because we took the input/output version of the thickening. Let $\Upsilon'$ denote any subsequential weak limit of $\Theta_n(\Upsilon_n)$. Then $\Upsilon'$ factors onto $\Pi$, by a similar argument to the earlier one about forgetting certain labels. Putting this all together: \[ \limsup_{n \to \infty} \cost(\Pi_n) = \limsup_{n \to \infty} \cost(\Theta_n(\Upsilon_n)) \leq \cost(\Upsilon') \leq \cost(\Pi), \] where the middle inequality follows from part (1) applied along the subsequence converging to $\Upsilon'$, and the final inequality holds because cost can only increase under factors.
\end{proof} \begin{remark} In the second part of the proof, one might want to replace label trickery by something like ``each $\Pi_n$ is isomorphic to a random Delone set $\Upsilon_n$, which has subsequential weak limits, so choose one such limit $\Upsilon$...'', but then it's not clear what the cost of $\Upsilon$ has to do with the cost of $\Pi$. One would require the Delone-ification process to preserve weak limits in some sense in order to relate $\cost(\Upsilon)$ and $\cost(\Pi)$. \end{remark} \begin{remark} There is label trickery in \cite{abert2013bernoulli} too: it is always assumed there that the action is continuous on a compact space. \end{remark} \begin{thm}\label{poissonmax} If $\Pi$ is a free point process, then its cost is at most the cost of the IID Poisson point process on $G$. \end{thm} \begin{proof} We know by Theorem \ref{abertweiss} that $\Pi$ weakly factors onto $[0,1]^\Pi$, and $[0,1]^\Pi$ factors onto the IID Poisson. We would like to say ``so $\Pi$ weakly factors onto the IID Poisson, and hence has smaller cost by the cost monotonicity statement'', but our cost monotonicity statement is too weak for this, so we use a different argument. Note that $\Pi$ is \emph{abstractly isomorphic} to a $\delta$ uniformly separated process $\Pi'$ by Proposition \ref{abstractlyisom}. Then $\Pi'$ is also free and has the same cost as $\Pi$. Now $\Pi'$ weakly factors onto its own IID; more explicitly, there is a sequence of factor \emph{labellings} $\Phi_n(\Pi')$ weakly converging to $[0,1]^{\Pi'}$. Because $\Phi_n(\Pi')$ is a labelling of $\Pi'$, it is itself free and uniformly separated.
Putting it all together: \begin{align*} \cost(\Pi) &= \cost(\Pi') && \text{as they are isomorphic actions} \\ &\leq \limsup_{n \to \infty} \cost(\Phi_n(\Pi')) && \text{cost can only increase for factors} \\ &\leq \cost([0,1]^{\Pi'}) && \text{by Theorem \ref{costmonotonicity}} \\ &\leq \cost([0,1]^\PPP) && \text{as } [0,1]^{\Pi'} \text{ factors onto } [0,1]^\PPP, \end{align*} as desired. \end{proof} \section{Some fixed price one groups} Theorem \ref{poissonmax} is a strategy for proving that groups have fixed price, and we will use it to that end for $G \times \ZZ$. However, our argument is somewhat indirect and requires taking a further weak limit. It would be instructive to have a more direct argument: \begin{question} Can one \emph{explicitly} construct for every $\e > 0$ connected factor graphs of the IID Poisson on $G \times \ZZ$ of edge measure less than $1 + \e$? \end{question} To see what we mean by \emph{explicit}, one should consider the discrete case: if $\Gamma$ is a finitely generated group, then it is straightforward to construct factor of IID connected graphs of small edge measure on Bernoulli (site) percolation on $\Gamma \times \ZZ$. We would like a construction in that vein. So instead we will use the weak factoring strategy to reduce the above problem to a much simpler one, where we \emph{can} construct such factor graphs. \subsection{Groups of the form \texorpdfstring{$G \times \ZZ$}{G x Z}} \begin{defn} Let $\Pi$ be a point process on $G$. Its \emph{vertical coupling} on $G \times \ZZ$ is $\Delta(\Pi) = \Pi \times \ZZ$. \end{defn} Here $\Delta : \MM(G) \to \MM(G \times \ZZ)$ is induced by the diagonal embedding of $G$ into $G^\ZZ$. For this reason one might prefer to call $\Delta(\Pi)$ the \emph{diagonal coupling}, but this terminology will not be suitable when we go to $G \times \RR$. \begin{figure}[h] \includegraphics[scale=0.4]{GtimesZ.pdf} \centering \caption{How one should think of $G \times \ZZ$. 
Note that if $\Pi$ is a point process on $G$, then its vertical coupling is simply infinitely many copies of \emph{the same} points stacked on top of each other.} \end{figure} \begin{lem}\label{verticalcost} The IID version $[0,1]^{\Delta(\Pi)}$ of a vertically coupled process has cost one. \end{lem} The proof uses the fact that Bernoulli percolation of a factor graph can be implemented as a factor of IID. This sort of trick will be familiar to many, but we will nevertheless spell it out: \begin{defn} Let $\mathscr{G}$ be a factor graph of a point process $\Pi$. Its \emph{$\e$ edge percolation} is the factor graph $\mathscr{G}_\e$ defined on $[0,1]^\Pi$ in the following way: for points $g, h \in [0,1]^\Pi$ let \[ g \sim_{\mathscr{G}_\e} h \text{ whenever } g \sim_{\mathscr{G}} h \text{ and } \xi_g \oplus \xi_h < \e. \] Here $\oplus$ denotes addition of the labels modulo one. \end{defn} Observe that if $(g, h_1)$ and $(g, h_2)$ are edges of $\mathscr{G}(\Pi)$, then the random variables $\xi_g \oplus \xi_{h_1}$ and $\xi_g \oplus \xi_{h_2}$ are independent uniform once again. \begin{remark} If $\mathscr{G}$ is already a factor graph defined on $[0,1]^\Pi$, then we can implement $\mathscr{G}_\e$ on $[0,1]^\Pi$, that is, without adding further randomness (via the replication trick, see the discussion below). \end{remark} \begin{proof}[Proof of Lemma \ref{verticalcost}] Let $\mathscr{G}$ be any graphing of $\Pi$ with finite edge density. We \emph{lift} it to a factor graph $\mathscr{G}^\Delta$ of $\Delta(\Pi)$ in the following way: \[ (g, n) \sim_{\mathscr{G}^\Delta(\Pi)} (h, n) \text{ if and only if } g \sim_{\mathscr{G}(\Pi)} h, \] that is, as $\Delta(\Pi)$ is just copies of $\Pi$ stacked on every level of $G \times \ZZ$, then we simply copy $\mathscr{G}$ onto every level of $G \times \ZZ$ as well. Let $\mathscr{V}$ denote the factor graph of $\Delta(\Pi)$ consisting of \emph{vertical} edges, that is for every $(g, n) \in \Delta(\Pi)$ we have an edge to $(g, n+1)$. 
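The two properties of the mod-one rule that the percolation argument uses -- each edge survives with probability exactly $\e$, and edges sharing an endpoint survive independently -- can be checked exactly on a discrete grid standing in for the $\texttt{Unif}[0,1]$ labels (an illustration in Python, not part of the formal proof):

```python
# Exact check on an n-point grid (labels i/n standing in for Unif[0,1])
# of the mod-one percolation rule: an edge (g, h) survives when
# (xi_g + xi_h) mod 1 < eps.
n = 100          # grid size; the label of a point is i / n
eps_idx = 13     # eps = 0.13, chosen grid-aligned so all counts are exact

def survives(a, b):
    return (a + b) % n < eps_idx

# each edge survives with probability exactly eps:
# for every label a there are exactly eps_idx labels b that work
total = sum(survives(a, b) for a in range(n) for b in range(n))
assert total == eps_idx * n  # i.e. total / n^2 == eps

# edges (g, h1) and (g, h2) sharing the endpoint g survive independently
both = sum(survives(g, h1) and survives(g, h2)
           for g in range(n) for h1 in range(n) for h2 in range(n))
assert both == eps_idx ** 2 * n  # i.e. both / n^3 == eps^2
```

It is this pairwise independence along a shared endpoint that lets the sliding argument below retry a destroyed edge on the next level with a fresh coin flip.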
One can see that $\mathscr{V} \cup \mathscr{G}^\Delta$ is a connected factor graph. But this is also true when we percolate the edges level wise, that is, when we consider $\mathscr{V} \cup \mathscr{G}^\Delta_\e$. This is because if $(g, n) \sim_{\mathscr{G}^\Delta} (h, n)$ is an edge destroyed in the percolation, then we can slide up along vertical edges and consider the edge $(g, n+1) \sim_{\mathscr{G}^\Delta} (h, n+1)$ instead. Its chance of survival in the percolation is independent of the previous edge, and hence we get another go at crossing over. By sliding up far enough we are guaranteed to be able to cross. Finally, observe that the edge density of $\mathscr{V} \cup \mathscr{G}^\Delta_\e$ is $1$ plus $\e$ times the edge density of $\mathscr{G}$. \end{proof} \begin{lem}\label{factorimpliesiidfactor} Suppose $\Pi$ and $\Upsilon$ are point processes, and $\Pi$ factors onto $\Upsilon$. Then $[0,1]^\Pi$ factors onto $[0,1]^\Upsilon$. In particular, if $[0,1]^\Pi$ weakly factors onto $\Upsilon$ then $[0,1]^\Pi$ weakly factors onto $[0,1]^\Upsilon$ too. \end{lem} The proof of this uses the following \emph{replication trick}: note that the randomness in one $\texttt{Unif}[0,1]$ random variable $\xi$ is equivalent to the randomness in an entire IID sequence $\xi_1, \xi_2, \ldots$ of $\texttt{Unif}[0,1]$ random variables. More precisely, there is an isomorphism (as measure spaces) \[ I : ([0,1], \text{Leb}) \to ([0,1]^\NN, \text{Leb}^{\otimes \NN}). \] So if $\xi \sim \texttt{Unif}[0,1]$, then we will write $I(\xi) = (\xi_1, \xi_2, \ldots)$ for the associated IID sequence of $\texttt{Unif}[0,1]$ random variables. \begin{proof}[Proof of Lemma \ref{factorimpliesiidfactor}] Suppose $\Upsilon = \Phi(\Pi)$. If $g \in [0,1]^\Pi$, then we write $\xi^g$ for its label, and by the replication trick $\xi^g_1, \xi^g_2, \ldots$ for the associated IID sequence of $\texttt{Unif}[0,1]$ random variables.
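As an aside, the digit-splitting behind the replication trick can be sketched concretely. The genuine isomorphism $I$ uses a bijection $\NN \cong \NN \times \NN$ to produce infinitely many streams; the finite round-robin version below (Python, purely illustrative) splits the first few binary digits of one number in $[0,1)$ among $n$ streams.

```python
# Finite sketch of the replication trick: distribute the binary digits of
# xi in [0, 1) round-robin among n streams.  (The measure isomorphism I
# uses a bijection N ~ N x N to produce infinitely many streams; this
# finite version is only an illustration.)
def replicate(xi, n, n_bits):
    bits = []
    for _ in range(n * n_bits):      # binary expansion of xi
        xi *= 2
        bit = int(xi)
        bits.append(bit)
        xi -= bit
    # stream i reads off bits i, i + n, i + 2n, ...
    return [sum(bits[i + k * n] / 2 ** (k + 1) for k in range(n_bits))
            for i in range(n)]

# xi = 0.1011 in binary (= 0.6875); with n = 2 the streams read off the
# even- and odd-indexed bits: 0.11 in binary (= 0.75) and 0.01 (= 0.25)
assert replicate(0.6875, 2, n_bits=2) == [0.75, 0.25]
```

For a uniform $\xi$ the binary digits are IID fair bits, so disjoint sets of digits yield independent uniform variables; this is what makes the streams IID.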
We define a factor map $\widetilde{\Phi}$ of $[0,1]^\Pi$ as follows: \[ \widetilde{\Phi}([0,1]^\Pi) = \bigcup_{g \in [0,1]^\Pi} \{(h_i, \xi^g_i) \in G \times [0,1] \mid V_g(\Pi) \cap \Phi(\Pi) = (h_1, h_2, \ldots) \}, \] where we mean that $(h_1, h_2, \ldots)$ is any \emph{enumeration} of $V_g(\Pi) \cap \Phi(\Pi)$ performed in an equivariant way. (Note that the intersection $V_g(\Pi) \cap \Phi(\Pi)$ could be empty.) One such equivariant enumeration can be obtained as follows: look at the elements $h$ of $V_g(\Pi) \cap \Phi(\Pi)$ which are \emph{closest} to $g$. Then let $h_1$ be the element that minimises $T(g^{-1}h)$, where $T : G \to [0,1]$ is the tie-breaking function of Section \ref{voronoidefn}. Then let $h_2$ be the next smallest element, and so on, until you exhaust the closest elements. Then look at the batch of next closest elements and so on. One can check that this is an equivariant construction (any construction where you do the same thing at every point will be). Then $\widetilde{\Phi}([0,1]^\Pi) = [0,1]^\Upsilon$, as desired. For the second part, simply note that taking the IID is idempotent in the sense that $[0,1]^{([0,1]^\Pi)} \cong [0,1]^\Pi$, and apply Lemma \ref{weakfactorimpliesiiweakfactor}. \end{proof} \begin{prop}\label{GtimesZweakfactor} The IID Poisson on $G \times \ZZ$ weakly factors onto the vertically coupled Poisson of $G$. \end{prop} \begin{proof} We will construct factor maps $\Phi^n : [0,1]^{\MM(G \times \ZZ)} \to \MM(G \times \ZZ)$ that ``straighten'' the input in the following way: for a given input $\omega \in [0,1]^\MM$, we select a ``sparse'' subset of its points. Each of these is then \emph{propagated} upwards by placing copies of it on the levels above. This will converge to a vertically coupled process for suitable inputs. More precisely, let $\Pi$ denote the (unit intensity) IID Poisson on $G \times \ZZ$. We will denote points of $\Pi$ by $(g, l) \in G \times \ZZ$, and write $\Pi_{g, l}$ for its label.
We now define the factor map $\Phi^n$ in two stages as a thinning and then a thickening to simplify the analysis. Let \[ \Pi^{1/n} = \{(g, l) \in \Pi \mid \Pi_{g, l} \leq \frac{1}{n} \}, \] and $F_n = \{0\} \times \{0, 1, \ldots, n-1\}$. Set \[ \Phi^n \Pi = \Theta^{F_n}(\Pi^{1/n}), \] where we write $\Phi^n \Pi$ for $\Phi^n(\Pi)$ to conserve parentheses. Let us explain what this means: \begin{itemize} \item At the first step $\Pi \mapsto \Pi^{1/n}$, we independently thin $\Pi$ to get a subprocess of intensity $\frac{1}{n}$. By the discussion in Example \ref{independentthinning}, the resulting process $\Pi^{1/n}$ is simply a Poisson point process on $G \times \ZZ$ of intensity $\frac{1}{n}$. We refer to the points of $\Pi^{1/n}$ as \emph{progenitors}. \item Each progenitor $(g, l)$ spawns additional points with the same $G$-coordinate on the next $n-1$ levels above it. This is the map $\Pi^{1/n} \mapsto \Theta^{F_n}(\Pi^{1/n}) = \Phi^n \Pi $. \item By the discussion at Example \ref{constantthickening}, $\Phi^n \Pi$ is a process of unit intensity. \end{itemize} We will employ the following strategy to show that $\Phi^n \Pi$ weakly converges to the vertical Poisson: \begin{enumerate} \item The sequence $(\Phi^n \Pi)$ admits weak subsequential limits, which a priori might be random counting measures, \item These subsequential limits are actually simple point processes, \item All of these subsequential limits are vertical processes, and \item That process is the vertical Poisson. \end{enumerate} Recall that if $(x_n)$ is a relatively compact sequence and every subsequential limit of $(x_n)$ is $x$, then $x_n$ converges to $x$. By this basic fact and the above items, we can conclude that $(\Phi^n \Pi)$ weakly converges to the vertical Poisson. We now verify that $\{ \Phi^n \Pi \}$ is \emph{uniformly tight}, proving (1). 
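Before carrying this out, let us record the intensity bookkeeping behind the items above (a one-line sketch): the thinning retains each point independently with probability $\frac{1}{n}$ and the thickening places $|F_n| = n$ points for every progenitor, so
\[
\intensity(\Phi^n \Pi) = |F_n| \cdot \intensity(\Pi^{1/n}) = n \cdot \frac{1}{n} \cdot \intensity(\Pi) = 1 .
\]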
It suffices to verify that the distributions of point counts $N_C(\Phi^n \Pi)$, where $C = B_G(0, r) \times [L]$ denotes a cylinder whose base is a ball of radius $r$ and whose height (in levels) is $L$, are uniformly tight. Let $X_i$ denote the number of points in $B_G(0, r) \times \{i\}$ with label $\Pi_{g, i} \leq \frac{1}{n}$, that is, the number of progenitors on the $i$th level. Then the $X_i$ are IID Poisson random variables with parameter $\lambda(B_G(0,r)) / n$. One can explicitly describe the random variable $N_C(\Phi^n \Pi)$ in terms of the $X_i$, but for our purposes it is enough to observe that \[ N_C(\Phi^n \Pi) \leq L \sum_{i=1}^n X_i. \] The sum of independent Poisson random variables is again Poisson distributed (with parameter the sum of the parameters of the individual Poissons), so we see that $N_C(\Phi^n \Pi)$ is stochastically dominated by a random variable \emph{whose law does not depend on $n$}. Therefore $\{\Phi^n \Pi\}$ is uniformly tight. To prove item (2), note that the above shows that the point counts in $B_G(0, r) \times \{0\}$ for $\Phi^n \Pi$ are \emph{exactly} Poisson distributed with parameter $\lambda(B_G(0, r))$. Thus if $\Upsilon$ is any subsequential weak limit of $\Phi^n \Pi$ and $r$ is such that $B_G(0,r) \times \{0\}$ is a stochastic continuity set for $\Upsilon$, then $N_{B_G(0,r) \times \{0\}} \Upsilon$ will also be Poisson distributed. In particular, $\Upsilon$ must be a \emph{simple} point process: its point counts on arbitrarily small continuity sets are Poisson, which leaves no room for multiple points. For item (3), let $\Upsilon$ be any subsequential weak limit of $\Phi^n \Pi$. Observe that $\Upsilon$ is \emph{vertical} if and only if $(g, l) \in \Upsilon$ implies $(g, l+1) \in \Upsilon$. The idea is that this property is satisfied for most points of $\Phi^n \Pi$, and therefore must be preserved in the weak limit. Note that a process is vertical if and only if its Palm measure is vertical almost surely. We can now explicitly describe the Palm measure of $\Phi^n(\Pi)$.
Recall from Theorem \ref{palmofpoisson} that the Palm version $\Pi_0^{1/n}$ of $\Pi^{1/n}$ is simply $\Pi^{1/n} \cup \{(0,0)\}$. To express the Palm version of the $F_n$-thickening of $\Pi^{1/n}$ (\`{a} la Example \ref{palmofthickening}), it will be useful to introduce the following notation. For each $k \in \NN$, let \[ \Pi_k^{1/n} = \Pi_0^{1/n} \cdot (0, k) = \{ (g, l+k) \in G \times \ZZ \mid (g, l) \in \Pi_0^{1/n} \}. \] That is, you simply shift $\Pi_0^{1/n}$ up by $k$ levels. Then $\Phi^n(\Pi_0) = \Pi_0^{1/n} \cup \Pi_1^{1/n} \cup \cdots \cup \Pi_{n-1}^{1/n}$. Denote by $K$ a random integer chosen uniformly from $\{0, 1, \ldots, n-1\}$. Then the Palm version of $\Phi^n(\Pi)$ is \[ (\Phi^n \Pi)_0 = \Pi_{-K}^{1/n} \cup \Pi_{-K+1}^{1/n} \cup \cdots \cup \Pi_{-K+n-1}^{1/n}, \] where we use parentheses to stress that it is the Palm version of $\Phi^n \Pi$, not $\Phi^n$ applied to $\Pi_0$. Let us say that a rooted configuration $\omega \in {\mathbb{M}_0}(G \times \ZZ)$ \emph{has an $\e$-successor} if there is a point approximately above the root $(0,0)$ in $\omega$. More precisely, we define an \emph{event} \[ \{ \omega \text{ has an } \e\text{-successor} \} := \{ \omega \in {\mathbb{M}_0} \mid N_{B_G(0, \e) \times \{1\}} \omega \geq 1 \}. \] From this, we see \[ \PP[(\Phi^{n} \Pi)_0 \text{ has an } \e\text{-successor} ] \geq \frac{n-1}{n}, \] as $(\Phi^{n} \Pi)_0$ certainly has an $\e$-successor whenever $K < n-1$. Recall that $\Upsilon$ was any subsequential weak limit of $\Phi^n \Pi$. Fix a subsequence $n_i$ such that $\Phi^{n_i} \Pi$ weakly converges to $\Upsilon$. Choose a sequence $\e_k$ tending to zero such that $B_G(0, \e_k) \times \{1\}$ is a stochastic continuity set for $\Upsilon$. This is possible by Lemma \ref{continuitysets}.
Then for each $k$ \[ \frac{n_i-1}{n_i} \leq \PP[(\Phi^{n_i} \Pi)_0 \text{ has an } \e_k\text{-successor} ] \to \PP[\Upsilon_0 \text{ has an } \e_k\text{-successor} ]. \] So $\Upsilon_0$ has an $\e_k$-successor almost surely for every $k$, and hence has a $0$-successor. That is, $\Upsilon$ is a vertical process, at last proving item (3). Finally, for item (4) we observe that any vertical process is completely determined by its intersection with $G \times \{0\}$. We observed in the proof of item (2) that $\Upsilon$ is a Poisson point process on the $0$th level, so it must be the vertical Poisson, as desired. \end{proof} \begin{cor}\label{GtimesZcost} Groups of the form $G \times \ZZ$ have fixed price one. \end{cor} \begin{proof} By the previous proposition and Lemma \ref{factorimpliesiidfactor}, we know that the IID Poisson weakly factors onto the IID of the vertically coupled Poisson. Explicitly, there exist factor maps $\Phi^n : [0,1]^\MM \to [0,1]^\MM$ such that \[ \Phi^n \Pi \text{ weakly converges to } [0,1]^{\Delta(\PPP)}, \] where $\Pi$ is the IID Poisson on $G$ and\footnote{This is a slight abuse of notation: we were using $\PPP$ to denote the \emph{law} of the Poisson point process, but in the above expression we treat it as if it were a random variable. We do this to prevent the profusion of asterisks representing pushforwards of measures.} $\PPP$ is the Poisson on $G$. Choose $\delta < 1$ as in Lemma \ref{continuousthinning} such that metric $\delta$-thinning preserves the weak limit. Note that because $\delta < 1$, the thinning commutes with the vertical coupling: that is, $\theta^\delta(\Delta (\PPP))= \Delta(\theta^\delta \PPP)$. Therefore \[ \theta^\delta(\Phi^n \Pi) \text{ weakly converges to } [0,1]^{\Delta(\theta^\delta (\PPP))}.
\] Putting this all together, \begin{align*} \cost(\Pi) &\leq \limsup_{n \to \infty} \cost(\theta^\delta(\Phi^n \Pi)) && \text{As cost can only increase under factors} \\ &\leq \cost([0,1]^{\Delta(\theta^\delta (\PPP))}) && \text{By Theorem \ref{costmonotonicity}} \\ &= 1 && \text{By Lemma \ref{verticalcost}}. \end{align*} Since the IID Poisson has maximal cost, this proves that $G \times \ZZ$ has fixed price one. \end{proof} \begin{remark} With further percolation-theoretic assumptions on $G$, one can \emph{directly} show that $\cost(\Phi^n(\Pi)) \leq 1 + \e_n$, where $\e_n$ tends to zero. This is done by constructing factor graphs on $\Phi^n(\Pi)$. By using the Poisson net, one can prove an analogue of the Babson--Benjamini theorem \cite{10.2307/119068} and show that the distance $\mathscr{D}_R$ factor graph on the Poisson point process on a \emph{compactly presented} and one-ended group has a \emph{unique} infinite connected component if $R$ is sufficiently large. Now on $\Phi^n(\Pi)$, we construct a factor graph as follows: add in all vertical edges, and the $\mathscr{D}_R$ edges horizontally. Now percolate the horizontal edges. One can show that by adding a small number of edges to this, the result is a graph with a unique infinite connected component. \end{remark} \subsection{Groups of the form \texorpdfstring{$G \times \RR$}{G x R}} We now outline the modifications required to extend the $G \times \ZZ$ case to the following theorem: \begin{thm}\label{GtimesRfp} Groups of the form $G \times \RR$ have fixed price one. \end{thm} \begin{proof} The strategy will be exactly the same as in Proposition \ref{GtimesZweakfactor}. We define factor maps $\Phi^n$ of the IID Poisson $\Pi$ using the same formula as in the $G \times \ZZ$ case. We claim these weakly converge to a point process $\Upsilon$ which is \emph{vertical} in the sense that $(g, t) \in \Upsilon$ implies $(g, t + k) \in \Upsilon$ for all $k \in \ZZ$. First we show $\{\Phi^n (\Pi)\}$ is uniformly tight.
This works exactly as in the $G \times \ZZ$ case, except instead of counting progenitors $X_i$ on $G \times \{i\}$, we count them on $G \times [i, i+1)$ for $i \in \ZZ$. Next we show that any subsequential weak limit $\Upsilon$ of $\{ \Phi^n (\Pi) \}$ is not just a random counting measure, but an actual point process. This follows as in the $G \times \ZZ$ case, as $\Phi^n (\Pi)$ has the same distribution in $G \times [0, 1)$ as a Poisson point process on $G \times \RR$. The proof that $\Upsilon$ is a vertical point process works the same as in the $G \times \ZZ$ case. At this point one can observe that a vertical process is determined by its intersection with $G \times [0,1)$, and therefore $\Phi^n (\Pi)$ weakly converges to a unique point process $\Upsilon$. We now adapt Lemma \ref{verticalcost} to this context. Let $\pi : G \times \RR \to G$ denote the projection map; we show that if $\Upsilon$ is \emph{any} vertical point process such that the projection $\pi(\Upsilon)$ has finite cost, then the IID process $[0,1]^\Upsilon$ has cost one. Observe that \emph{if $\Upsilon$ is vertical}, then $\pi(\Upsilon)$ is discrete, and hence defines a point process on $G$. For contrast, observe that the projection $\pi(\Pi)$ of the Poisson point process $\Pi$ is almost surely dense, and hence does not define a point process on $G$. In the case of the $\Upsilon$ we construct as a weak limit, its projection $\pi(\Upsilon)$ is just the Poisson point process on $G$. Let $\mathscr{G}$ be a finite-cost graphing of $\pi(\Upsilon)$. We lift this to a factor graph of $\Upsilon$ in the following way: \[ (g_1, t_1) \sim_{\mathscr{H}(\Upsilon)} (g_2, t_2) \text{ when } g_1 \sim_{\mathscr{G}(\pi(\Upsilon))} g_2 \text{ and } |t_1 - t_2| < 1. \] Let $\mathscr{V}(\Upsilon)$ denote the set of \emph{vertical edges}, that is, \[ \mathscr{V}(\Upsilon) = \{ ((g, t), (g, t + 1)) \mid (g, t) \in \Upsilon \}.
\] Then as in Lemma \ref{verticalcost}, the vertical edges $\mathscr{V}(\Upsilon)$ together with an $\e$-percolation of $\mathscr{H}(\Upsilon)$ define a cheap connected factor graph of $\Upsilon$. \begin{figure}[h] \includegraphics[scale=0.5]{gtimesr.pdf} \centering \caption{A portion of a graphing on the projection of a vertical process, and how it might look when lifted. Note that it gets wobbled a bit in the process.} \end{figure} We conclude from this that $G \times \RR$ has fixed price one by the same kind of reasoning as in Corollary \ref{GtimesZcost}. \end{proof} \begin{remark} The limiting process here can be described as follows: sample from a Poisson on $G \times [0, 1)$, and then simply extend it periodically in the second coordinate. \end{remark} \section{Rank gradient of Farber sequences vs. cost} In this section we will discuss how to connect rank gradient to the cost of the Poisson point process in certain situations. We first work with general lcsc groups, and then specialise to semisimple Lie groups and prove a stronger theorem. \begin{defn} Let $(\Gamma_n)$ denote a sequence of lattices in a fixed group $G$. The sequence is \emph{Farber} if for every compact neighbourhood of the identity $V \subseteq G$ we have \[ \PP[a\Gamma_n a^{-1} \cap V = \{e\}] \to 1 \text{ as } n \to \infty, \] where $a\Gamma_n$ denotes a coset of $\Gamma_n$ chosen randomly according to the (normalised) finite $G$-invariant measure on $G/\Gamma_n$. \end{defn} Note that $a \Gamma_n a^{-1}$ is exactly the stabiliser of $a\Gamma_n$ for the action $G \curvearrowright G/\Gamma_n$. Thus the Farber condition says that the action on most points of the quotient is locally injective. Equivalently, the condition states that $a\Gamma_n \cap Va = \{a\}$ with high probability. It is this second form that we will actually use in the proof below.
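For orientation, consider the following standard special case (our addition, not used later): if $G$ is discrete and each lattice $\Gamma_n \trianglelefteq G$ is \emph{normal}, then $a \Gamma_n a^{-1} = \Gamma_n$ for every coset, the probability above is either zero or one, and the Farber condition becomes deterministic:
\[
\Gamma_n \cap V = \{e\} \quad \text{for all sufficiently large } n,
\]
for every finite subset $V \ni e$. For a decreasing chain of normal subgroups this is precisely the condition $\bigcap_n \Gamma_n = \{e\}$.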
We think of $a$ as being a point sampled randomly from a fundamental domain for $\Gamma_n$ in $G$, and the condition then states that the $V$-neighbourhood around this point $a$ meets the lattice shift $a\Gamma_n$ only trivially. \begin{defn} Let $(\Gamma_n)$ denote a sequence of lattices in a fixed group $G$. Its \emph{rank gradient} is \[ \RG(G, (\Gamma_n)) = \lim_n \frac{d(\Gamma_n) - 1}{\covol \Gamma_n}, \] whenever this limit exists. \end{defn} \begin{remark} If $G$ is discrete, then the $\Gamma_n$ are all finite-index subgroups. The Nielsen--Schreier formula \[ \frac{d(\Gamma_n) - 1}{[G : \Gamma_n]} \leq d(G) - 1 \] shows that the terms in the rank gradient are at least bounded. Gelander proved \cite{MR2863908} an analogue of this formula for lattices in connected semisimple Lie groups without compact factors. In the Seven Samurai paper \cite{MR3664810}, it is shown that if $G$ is a centre-free semisimple Lie group of higher rank with property (T), then \emph{any} sequence of irreducible lattices $(\Gamma_n)$ in $G$ is automatically Farber, as long as $\covol(\Gamma_n)$ tends to infinity. \end{remark} In the particular case of a decreasing \emph{chain} $\Gamma = \Gamma_1 > \Gamma_2 > \ldots$ of finite-index subgroups, Ab\'{e}rt and Nikolov showed \cite{MR2966663} that the rank gradient $\RG(\Gamma, (\Gamma_n))$ can be described as the \emph{groupoid} cost of an associated pmp action $\Gamma \curvearrowright \partial T(\Gamma, (\Gamma_n))$ on the boundary of a rooted tree. \subsection{Cocompact lattices in general groups} \begin{defn} We say that a lattice $\Gamma < G$ is \emph{$\delta$-uniformly discrete} if all of its \emph{right} cosets $\Gamma a \in \Gamma \backslash G$ are $\delta$-uniformly separated as subsets of $G$. That is, for all distinct pairs $\gamma_1, \gamma_2 \in \Gamma$, we have $d(\gamma_1 a, \gamma_2 a) \geq \delta$.
Equivalently, by left-invariance of the metric, $d(e, a^{-1} \gamma a ) \geq \delta$ for all $\gamma \in \Gamma$ not equal to the identity $e \in G$. If $(\Gamma_n)$ is a sequence of lattices, then we say it is $\delta$-uniformly discrete if each $\Gamma_n$ is $\delta$-uniformly discrete in the above sense. \end{defn} \begin{thm}\label{farbertheorem} Let $(\Gamma_n)$ be a Farber sequence of \emph{cocompact} lattices. Suppose further that the sequence is $\delta$-\emph{uniformly discrete} for some $\delta > 0$. If its rank gradient exists, then \[ \RG(G, (\Gamma_n)) \leq c_P(G) - 1, \] where $c_P(G)$ denotes the cost of the Poisson point process on $G$. In particular, if $G$ has fixed price one then the rank gradient vanishes. \end{thm} The above theorem is spiritually the same as one proved independently by Carderi in \cite{carderi2018asymptotic}, but in a drastically different language (namely, that of ultraproducts of actions). The theorem is therefore his, but we include our own proof as it has a different flavour. In the subsequent section we will discuss a similar theorem that applies to nonuniform lattices, at least with additional assumptions on the group. \begin{proof}[Proof of Theorem \ref{farbertheorem}] Recall that the cost of a lattice shift is \[ \cost(G \curvearrowright G/\Gamma_n) = 1 + \frac{d(\Gamma_n) - 1}{\covol \Gamma_n}, \] which is essentially the term appearing in the rank gradient definition. We would therefore like to take a weak limit of these actions to get some free point process, and then appeal to the cost monotonicity result. Of course, this is completely senseless: the intensity of the lattice shift tends to zero, so it weakly converges to the empty process. Therefore we \emph{thicken} the lattice shifts to get processes $\Pi_n$ with a nontrivial weak limit. This thickening procedure must be done correctly, so that we can apply our (weak) cost monotonicity result.
We will produce a sequence of $[0,1]$-marked point processes $\Pi_n$ such that \begin{itemize} \item each $\Pi_n$ is a $2\delta$-net, \item each $\Pi_n$ is a factor of the lattice shift $a\Gamma_n$, and so has cost \emph{at least} $1+\frac{d(\Gamma_n) - 1}{\covol \Gamma_n}$, and \item they have a subsequential weak limit $\Upsilon$ with IID $[0,1]$ labels. \end{itemize} Then \[ \RG(G, (\Gamma_n)) + 1 \leq \limsup_{n \to \infty} \cost(\Pi_n) \leq \cost(\Upsilon) \leq \mathrm{c}_\mathrm{P}(G) \] by the cost monotonicity result, as desired. View the space of \emph{right} cosets $\Gamma_n \backslash G$ as a compact metric space, where the distance between two cosets $\Gamma_n b_1, \Gamma_n b_2 \in \Gamma_n \backslash G$ is just their distance as closed subsets of $G$. Let $B_n = \{\Gamma_n b_1^n, \Gamma_n b_2^n, \ldots, \Gamma_n b^n_{k_n} \}$ be a $2\delta$-net in $\Gamma_n \backslash G$ for each $n$, where $\delta$ is the uniform discreteness parameter. We also choose $b_1^n = e$ for all $n$. This specifies a sequence of \emph{thickenings} $\Theta_n$ of the corresponding lattice shifts: that is, $a\Gamma_n \mapsto a B_n$. Note that $\Theta_n(a\Gamma_n)$ is a $2\delta$-net: indeed, $d(a\Gamma_n b_i^n, a\Gamma_n b_j^n) = d(\Gamma_n b_i^n, \Gamma_n b_j^n) \geq \delta$ for $i \neq j$ by our choice of $B_n$, and the points of $a\Gamma_n b_i^n$ are uniformly separated as well, exactly by our uniform discreteness assumption. It is also $2\delta$-coarsely dense, by the same property for $B_n$. Since $\{\Theta_n(a\Gamma_n)\}$ is a collection of random $2\delta$-nets, it is automatically uniformly tight, and all subsequential weak limits are $2\delta$-nets (and in particular, simple point processes). At this point we would like to apply cost monotonicity to $\Theta_n$ by passing to a subsequential weak limit. Our issue here is that one would have to demonstrate that this weak limit is a \emph{free} action in order to compare its cost to the cost of the Poisson.
To do this, one would have to use the Farber condition in an essential way. We bypass this by a labelling trick: note that the IID of \emph{any} point process is automatically a free action (as any two points of it will receive distinct values almost surely, killing any possible symmetries). So we will take the weak limit of IID labelled processes instead. Consider the action $G \curvearrowright G/\Gamma_n \times [0,1]$, where the action on the second coordinate is trivial. We refer to this as the \emph{periodic IID lattice shift}. It is simply the lattice shift, but with every point of it receiving \emph{the same} IID label. This is of course distinct from the IID of the lattice shift, but note \[ \cost(G \curvearrowright G/\Gamma_n) = \cost(G \curvearrowright G/\Gamma_n \times [0,1]) \] as cost is the integral of the cost over the ergodic decomposition (see Corollary 18.6 of \cite{kechris2004topics}). We write $(a\Gamma_n, \xi)$ for a sample from this space. Now we thicken as before, but this time distribute labels: let \[ \Theta_n(a \Gamma_n, \xi) = \bigcup_{i = 1}^{k_n} a \Gamma_n b_i^n \times \{\xi_i\}, \] where $\xi_1, \xi_2, \ldots$ is an IID sequence of $\texttt{Unif}[0,1]$ random variables produced as a factor of $\xi$. In other words, each point of the lattice shift adds points to the space and gives them an IID $[0,1]$ label (but note that each point starts with \emph{the same} label $\xi$, so this is not the IID of the thickening). Let $\Upsilon$ denote any subsequential weak limit of $\Theta_n(a \Gamma_n, \xi)$, and pass to that subsequence. Then $\pi(\Theta_n(a \Gamma_n, \xi))$ weakly converges to $\pi(\Upsilon)$, as the projection $\pi: G \times [0,1] \to G$ that forgets labels is continuous. Our task is to show that $\Upsilon = [0,1]^{\pi(\Upsilon)}$. The idea of the proof is the following: fix $C \subseteq G$ to be a bounded stochastic continuity set for $\pi(\Upsilon)$.
We want to prove that the labels of the points of $\Theta_n(a \Gamma_n, \xi)$ in $C$ are independent and uniform on $[0,1]$. They are already $\texttt{Unif}[0,1]$ by definition, so we must now consider their dependencies. Again, by definition, the labels of points of $C$ arising from \emph{distinct} $a\Gamma_n b_i^n$ are automatically independent. The only dependency issue that can arise is when $a\Gamma_n b_i^n \cap C$ has \emph{multiple} points. We will show that this is a vanishingly rare event. This will be achieved via the following lemma: \begin{lem}\label{farbermult} Let $C \subseteq G$ be compact. If $(\Gamma_n)$ is a Farber sequence and $B_n \subseteq G$ is any sequence of finite subsets, then $\PP[ \exists b \in B_n \text{ such that } |a\Gamma_n b \cap C| > 1] \to 0$. \end{lem} \begin{proof} Apply the Farber condition with any compact identity neighbourhood $V$ containing $CC^{-1}$. If $b \in B_n$ is such that there are distinct elements $a\gamma_1 b, a\gamma_2 b$ of $a\Gamma_n b \cap C$, then \[ (a\gamma_1 b)(a \gamma_2 b)^{-1} = a \gamma_1 \gamma_2^{-1} a^{-1} \text{ is in } CC^{-1}, \] so $a \gamma_1 \gamma_2^{-1} a^{-1} ab = a \gamma_1 \gamma_2^{-1} b \in Vab$, and this element is also in $a\Gamma_n b$. By the Farber condition, \[ a\Gamma_n b \cap Vab = \{ab\} \] with high probability, and so \[ \PP[ \exists b \in B_n \text{ such that } |a\Gamma_n b \cap C| > 1] \leq \PP[a\Gamma_n a^{-1} \cap V \neq \{e\}] \to 0, \] finishing the proof of the lemma. \end{proof} Let $\boldsymbol{V} = (V_1, V_2, \ldots, V_k)$ denote a collection of bounded stochastic continuity sets for $\pi(\Upsilon)$, and $[0, \boldsymbol{p}) = ([0, p_1), [0, p_2), \ldots, [0, p_k))$ a family of intervals in $[0,1]$. We denote by $\boldsymbol{V} \times [0, \boldsymbol{p}) = (V_1 \times [0, p_1), \ldots, V_k \times [0, p_k))$ the corresponding stochastic continuity sets of $[0,1]^{\pi(\Upsilon)}$. Let $C$ be a compact set large enough to contain $\bigcup_i V_i$.
On the complement of the event of the lemma (which has probability tending to one), \[ N_{\boldsymbol{V} \times [0, \boldsymbol{p})}(\Theta_n(a\Gamma_n, \xi)) = N_{\boldsymbol{V} \times [0, \boldsymbol{p})}([0,1]^{\Theta_n(a\Gamma_n)}), \] where by $\Theta_n(a\Gamma_n)$ we simply mean $\Theta_n(a\Gamma_n, \xi)$ with the labels erased. Therefore $\Theta_n(a\Gamma_n, \xi)$ converges weakly to $[0,1]^{\pi(\Upsilon)}$, so that $\Upsilon = [0,1]^{\pi(\Upsilon)}$, finishing the proof. \end{proof} \subsection{The rerooting groupoid for homogeneous spaces}\label{homogeneousspaces} We must also investigate point processes on the homogeneous spaces of groups. The setup we will consider is the action of $G$ on $X = G/K$, where $K \leq G$ is a \emph{compact} subgroup. This covers our principal case of interest, namely Riemannian symmetric spaces (such as $\Isom(\HH^d) \curvearrowright \HH^d$). All of the point process theoretic definitions (such as thinnings and factor graphs) can be readily adapted to this context. There are some minor subtleties that must be addressed, however. Our aim is to define a cost notion for $G$-invariant point processes on $X$, and relate it to cost for $G$-invariant point processes on $G$ itself. We will show: \begin{thm}\label{symmetricspacecostmax} Assume that the Poisson point process on $X = G/K$ is free\footnote{Some assumption is required. For instance, if $K$ contains an element of the centre of $G$ then the action will not be free.} as a $G$-action. Then \[ \sup_\Pi \cost_X(\Pi) = \sup_\Pi \cost_G(\Pi), \] where the supremum is taken over the set of free point processes on $X$ and on $G$ respectively. In particular, if $X$ has fixed price one then $G$ itself has fixed price one. \end{thm} \begin{remark} The appeal of the above theorem is that it allows one to prove fixed price for a group by working on the symmetric space instead, where the geometry is more apparent. As will be evident in the proof, it suffices to prove fixed price for the Poisson point process on $X$, which is a concrete probabilistic object.
\end{remark} Our starting point is to note that such spaces $X$ enjoy the properties we've been using\footnote{The limiting factor here is really the metric: a $G$-invariant Borel measure exists on $G/H$ in fairly great generality (it is an imposition on the modular functions of $G$ and $H$), but an invariant metric will only exist if $G/H$ is homeomorphic (in an appropriate sense) to a quotient $G'/H'$ with $H'$ compact in $G'$; see \cite{ANANTHARAMANDELAROCHE2013546} for further details.}: namely, an invariant proper metric that makes $X$ an lcsc space and a $G$-invariant Borel measure $\lambda_X$ on $X$. For the metric on $X$, one takes the Hausdorff metric: \[ d_X(aK, bK) = \max\{ \sup_{k_1 \in K} \inf_{k_2 \in K} d(ak_1, bk_2), \sup_{k_2 \in K} \inf_{k_1 \in K} d(ak_1, bk_2) \}, \] and for the measure $\lambda_X$ one takes the pushforward $\pi_* \lambda$, where $\pi : G \to X$ is the quotient map $a \mapsto aK$. We recall \emph{the mapping theorem} (see Section 2.3 of \cite{MR1207584} for further details): \begin{thm}[Mapping theorem]\label{mappingtheorem} Suppose $X$ is a standard Borel space with $\sigma$-finite measure $\lambda$, $\Pi$ is the Poisson point process with intensity measure $\lambda$, and $f : X \to Y$ is a measurable function. Then $f(\Pi)$ is the Poisson point process on $Y$ with intensity measure $f_* \lambda$, provided this measure has no atoms. \end{thm} In other words, a map between the base spaces $X$ and $Y$ induces a map from the Poisson point process on $X$ to the Poisson point process on $Y$ (with some intensity measure). It is intuitive that this should lead to an inequality on costs, and we will detail how this works. Our study splits into two cases, according to whether $K$ is Haar null or not (for $G$'s Haar measure, of course).
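The case split is governed by the atoms of the pushforward intensity measure (a quick check, included for completeness): applying the mapping theorem to the quotient map $\pi$, an atom of $\pi_* \lambda$ at $aK \in X$ has mass
\[
\pi_* \lambda(\{aK\}) = \lambda(\pi^{-1}(aK)) = \lambda(aK) = \lambda(K)
\]
by left-invariance of the Haar measure $\lambda$, so $\pi_* \lambda$ is atomless precisely when $\lambda(K) = 0$.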
If $\lambda(K) > 0$, then by the Steinhaus theorem\footnote{It states that if $A \subset G$ is a subset of a locally compact group with positive Haar measure, then $AA^{-1}$ contains an identity neighbourhood.} we have that $K$ is open, and hence a compact clopen subgroup of $G$. It will then also have countable index. This situation occurs for instance in the study of Cayley-Abels graphs of totally disconnected locally compact groups. In this case one is essentially looking at Bernoulli percolation on the quotient space. We will instead focus on the case that $\lambda(K) = 0$. One can consider $G$-invariant point processes $\Pi$ on $X$, which should be understood as random elements of $\MM(X)$ whose distribution is $G$-invariant. We may define the intensity of $\Pi$ as before: \[ \intensity(\Pi) = \frac{1}{\lambda_X(U)} \EE | \Pi \cap U |, \] where $U \subseteq X$ is any set of unit volume. One can further consider notions of thinnings, partitionings, cost, and so on. We follow our previous strategy of studying these by looking at the associated groupoid. Let us write $0$ for the element $K \in X$, which we view as the root of the space. Accordingly we will define the space of rooted configurations in $X$ as \[ \MM_0(X) = \{ \omega \in \MM(X) \mid 0 \in \omega \}. \] Note that the orbit equivalence relation of $G \curvearrowright \MM(X)$ \emph{no longer} restricts on $\MM_0(X)$ to an equivalence relation with countable classes. The solution to this problem is to take a quotient: \begin{prop} Let $K$ act on $\MM_0(X)$ by left multiplication. Then the action is \emph{smooth}, that is, the space of orbits $K \backslash \MM_0(X)$ is a standard Borel space. \end{prop} It is a general fact that compact groups \emph{always} act smoothly on standard Borel spaces (see Proposition 2.1.12 of \cite{MR776417} and its corollary), but it is possible to give a direct proof in this case.
The proof is very reminiscent of the section ``Extension to more general point processes'' of \cite{holroyd2003}. \begin{proof} We will construct a measurable function $F : {\mathbb{M}_0}(X) \to [0,1]$ with the property that $F(\omega) = F(\omega')$ if and only if $\omega' \in K\omega$. This is a characterisation of smoothness. Fix a family $\{U_n\}$ of open subsets of $X$ with the property that it \emph{determines} elements of ${\mathbb{M}_0}(X)$ in the sense that \[ \omega = \omega' \Leftrightarrow \{n \in \NN \mid \omega \cap U_n \neq \empt \} = \{n \in \NN \mid \omega' \cap U_n \neq \empt \}. \] Let $f : \{0,1\}^\NN \to [0,1]$ be any continuous order-preserving injection, and consider the map $F : {\mathbb{M}_0}(X) \to [0,1]$ given by \[ F(\omega) = \inf_{k \in K} f((\mathbbm{1}[U_n \cap k \cdot\omega \neq \empt])_n). \] Note that the component functions $\omega \mapsto \mathbbm{1}[U_n \cap \omega \neq \empt]$ are lower semicontinuous, so the infimum is attained. The function is constant on $K$-orbits by definition, and by the separating nature of the family $\{U_n\}$ it also takes distinct values for points in distinct orbits. \end{proof} A Borel thinning $\theta : \MM(X) \to \MM(X)$ corresponds to a Borel subset $A \subseteq {K \backslash} \MM_0(X)$. Note that the latter can be identified with a subset of $\MM_0(X)$ which is $K$-invariant in the sense that $KA = A$; we will make such identifications freely. To see why this $K$-invariance is required, consider the formula \[ \theta^A(\omega) := \{gK \in \omega \mid g^{-1}\omega \in A \}. \] For this to be well-defined, we need that the condition does not depend on our choice of coset representative $gK$. This is exactly asking for $K$-invariance of $A$. In the same way one can see that Borel factor $[d]$-colourings correspond to Borel partitions of $K \backslash \MM_0(X)$ indexed by $[d]$, and so on.
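The verification behind the last two paragraphs is worth spelling out once: if $g' = gk$ with $k \in K$ is another representative of the coset $gK \in \omega$, then
\[
(gk)^{-1} \omega = k^{-1}(g^{-1} \omega) \in A \iff g^{-1} \omega \in A
\]
precisely when $KA = A$, so membership of $gK$ in $\theta^A(\omega)$ is independent of the chosen representative; the same computation applies to factor colourings.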
If $\Pi$ is a $G$-invariant point process on $X$ with law $\mu$, then we may use the above to define its \emph{Palm measure} $\mu_0$ on $K \backslash \MM_0(X)$ as before: \begin{align*} \mu_0(A) &:= \frac{\intensity \theta^A(\Pi)}{\intensity \Pi} \\ &= \frac{1}{\intensity \Pi} \EE \left[ \# \{ gK \in \Pi \cap U \mid g^{-1} \Pi \in A \}\right], \text{ where } U \subseteq X \text{ has unit volume}. \end{align*} We equip $K \backslash \MM_0(X)$ with the following rerooting equivalence relation: \[ K\omega \sim_\Rel K\omega' \text{ if and only if } \exists aK \in \omega \text{ such that } K\omega' = K a^{-1} \omega. \] We can also define a groupoid structure. If one defines \[ \Marrow(X) = \{ (\omega, aK) \in \MM_0(X) \times X \mid aK \in \omega \}, \] then there is a natural diagonal action of $K$ on $\Marrow(X)$. The quotient is again standard Borel; we denote it $K \backslash \Marrow(X)$. Then $K \backslash \MM_0(X)$ and $K \backslash \Marrow(X)$ form the unit space and arrow space (respectively) of a groupoid, which we call \emph{the rerooting groupoid}. The source and target maps are defined as before: \begin{align*} &s, t : K \backslash \Marrow(X) \to K \backslash \MM_0(X),\\ &s(K\omega, KaK) = K\omega,\\ &t(K\omega, KaK) = Ka^{-1}\omega. \end{align*} A pair of arrows $(K\omega, KaK), (K\omega', KbK) \in K \backslash \Marrow(X)$ is deemed composable if $K\omega' = Ka^{-1}\omega$, in which case \[ (K\omega, KaK) \cdot (K\omega', KbK) := (K\omega, KabK). \] We can equip this groupoid with the Palm measure, resulting in an $r$-discrete pmp groupoid as before. \begin{defn} Let $\Pi$ be a $G$-invariant point process on $X$. Its \emph{groupoid cost} is \[ \cost_X(\Pi) - 1 = \inf_{\mathscr{G}}\left\{ \frac{1}{2}\EE\left[ \sum_{x \in U \cap \Pi} \deg_x{\mathscr{G}(\Pi)} \right] \right\} - \intensity(\Pi), \] where $U$ is a set of unit volume in $X$ and the infimum is taken over all \emph{connected} equivariantly defined factor graphs of $\Pi$.
The \emph{cost of $X$} is \[ \cost(X) := \inf \{ \cost_X(\Pi) \mid \Pi \text{ is an invariant free point process on } X \}. \] \end{defn} Aside from replacing $\MM_0(G)$ with $K \backslash \MM_0(X)$, our earlier arguments apply verbatim and prove the following: \begin{itemize} \item If $\Phi : \MM(X) \to \MM(X)$ is a $G$-equivariant factor map, then $\cost_X(\Pi) \leq \cost_X(\Phi(\Pi))$. In particular, $\cost_X(\Pi)$ only depends on the isomorphism class of $\Pi$, \item Every \emph{free} point process weakly factors onto its own IID, \item The Poisson point process \emph{on $X$} has maximal $\cost_X$ amongst free point processes on $X$, assuming the Poisson point process is itself free. \end{itemize} \begin{remark} In this level of generality, the Poisson point process on $X = G/K$ might not be free, so freeness must be assumed. For instance, let $G = \RR \times \RR/\ZZ$ and $K = \RR/\ZZ$. Then $K$ acts trivially on the quotient $X$, and thus \emph{no} $G$-invariant point process on $X$ is free (even their IID will not be free). These examples are rather contrived, however. \end{remark} \begin{thm}\label{equalcost} If $\Pi$ is a \emph{free} point process on $X$, then its $\cost_X$ is equal to its cost as a $G$-action. \end{thm} Recall from the introduction that the cost of a free pmp action of $G$ is defined by picking an isomorphic representation of the action as a point process, and taking the cost of that. This theorem will employ a ``whittling'' construction. Note that we can view point processes on $X$ as random closed subsets of $G$ (which happen to be unions of cosets of a fixed compact subgroup).
We are able to exploit freeness to \emph{deterministically} whittle this random closed subset to a point process: \begin{prop}\label{deterministiclift} If $\Pi$ is a free point process on $X$, then it admits a \emph{deterministic lift} to $G$: that is, there exists an invariant point process $\Upsilon = \Upsilon(\Pi)$ on $G$ such that \begin{itemize} \item $\Upsilon \subset \Pi$ almost surely, \item $\Upsilon$ intersects each coset $aK$ \emph{at most} once, and \item $\pi(\Upsilon) = \Pi$, where $\pi : G \to X$ denotes the quotient map. \end{itemize} In other words, we are able to select one element out of every coset $aK \in \Pi$ in an equivariant and deterministic way. \end{prop} \begin{proof}[Proof of Theorem \ref{equalcost}] Observe that the process $\Upsilon$ from Proposition \ref{deterministiclift} is isomorphic to $\Pi$, so $\cost(\Pi) = \cost(\Upsilon)$. We verify that $\cost(\Upsilon) = \cost_X(\Pi)$. Note that factor graphs of $\Pi$ and $\Upsilon$ can be bijectively identified, and so will have the same edge measures. Finally, they have the same intensity: choose $U \subseteq X$ with volume one, and observe that $\Pi \cap U$ and $\Upsilon \cap UK$ are in bijection, with $UK$ also having volume one. \end{proof} We will require a simple lemma, which essentially already appears in Lemma \ref{independentsetsexist} but which we isolate for clarity. It works for point processes on $G$ and on $X$. \begin{lem}\label{freeness} A point process $\Pi$ is free if and only if it admits a deterministic labelling by $[0,1]$ such that all of the labels are distinct (almost surely). \end{lem} \begin{proof} Clearly if such a labelling exists, then the process must be free. For the converse, let $I: \MM_0 \to [0,1]$ be a Borel isomorphism. Define a labelling by \begin{align*} &L : \MM \to [0,1]^\MM \\ &L(\omega) = \{ (x, I(x^{-1}\omega)) \mid x \in \omega \}.
\end{align*} Observe that two distinct points $x, y \in \omega$ receive the same label in $L(\omega)$ exactly when $I(x^{-1}\omega) = I(y^{-1}\omega)$, and so $xy^{-1}\omega = \omega$. If the process is free, then this never occurs, as desired. To run the proof on $X$, simply replace $\MM_0$ by $K \backslash \MM_0(X)$. \end{proof} \begin{proof}[Proof of Proposition \ref{deterministiclift}] By virtue of being free, we may use Theorem \ref{minden} to fix an isomorphism of $\Pi$ with a point process $\Pi'$ on $G$. The desired process $\Upsilon$ will be the result of pushing the points of $\Pi'$ into $\Pi$. Of course $\Pi'$ is itself a free process, so we may fix a deterministic labelling $L(\Pi')$ of its points \`{a} la Lemma \ref{freeness}. Assign each coset $aK$ of $\Pi$ to a point $x$ of $\Pi'$ in your preferred equivariant way. For instance, note that every such coset intersects some (finite but non-zero) number of Voronoi cells of $\Pi'$. For each $aK \in \Pi$, assign it to whichever of these cells has the germ with the highest label in $L(\Pi')$. We denote by $A_x$ the set of cosets in $\Pi$ that we assign to $x \in \Pi'$ in this way. Fix a Borel transversal $T \subset G$. Note that $xT$ is another Borel transversal for any $x \in G$, so $xT \cap aK$ selects the unique representative of $aK$ with respect to this transversal. Finally, set \[ \Upsilon = \bigcup_{\substack{x \in \Pi' \\ aK \in A_x}} xT \cap aK. \] This selects one representative from every coset in $\Pi$, and at every stage it was performed in an equivariant way, so it is the desired invariant point process. \end{proof} \begin{proof}[Proof of Theorem \ref{symmetricspacecostmax}] Let $\Pi$ denote the Poisson point process on $G$. Then by the mapping theorem (Theorem \ref{mappingtheorem}), the image $\Upsilon$ of $\Pi$ under the quotient map $G \to X$ is the Poisson point process on $X$.
Since $\cost_G$ can only increase under factor maps, we have \[ \cost_G(\Pi) \leq \cost_G(\Upsilon). \] But the Poisson point process has maximal cost amongst free $G$-actions, so there is equality. By Theorem \ref{equalcost}, \[ \cost_X(\Upsilon) = \cost_G(\Upsilon), \] and as discussed, the Poisson point process has maximal cost amongst all free point processes on $X$, finishing the proof. \end{proof} \begin{question} It is natural to ask if $G$ and $X$ have the same \emph{infimal} cost as well. \end{question} \subsection{Farber sequences in semisimple Lie groups} The goal of this section is to prove Theorem \ref{carderiextension} from the introduction, which we restate here: \begin{thm*} Let $G$ be a semisimple real Lie group and let $\Gamma_{n}$ be a Farber sequence of lattices in $G$. Then \[ \limsup_{n\to\infty}\frac{d(\Gamma_n)-1}{\mathrm{vol}(G/\Gamma_n)} \leq\mathrm{c}_{\mathrm{P}}(G)-1. \] \end{thm*} There would be a more straightforward (and general) proof of the above if a more general form of cost monotonicity were true; however, we are unable to prove (or disprove) the following statement: suppose $\Pi_n$ is a sequence of finite cost point processes that weakly converge to a random net $\Pi$. Is it true that \[ \limsup_{n \to \infty} \cost(\Pi_n) \leq \cost(\Pi) ? \] To prove Theorem \ref{carderiextension} we will use the geometric interpretation of being a Farber sequence -- specifically, see Corollary 3.3 of \cite{MR3664810}. In brief, it means that for all $r > 0$ the injectivity radius at a randomly chosen point of the quotient manifold $M_n = \Gamma_n \backslash X$ is larger than $r$ with high probability, where $X = G / K$ denotes the symmetric space of $G$. We will also heavily call upon the paper \cite{MR2863908}. Additionally, it will be assumed that the reader understands the proof of Theorem \ref{farbertheorem}.
\begin{proof}[Proof of Theorem \ref{carderiextension}] First, let us handle the special case of $G = \PSL_2(\RR)$, for which the theorem is true but for fundamentally different reasons. In this case the $\Gamma_n$ being discussed are fundamental groups of finite volume hyperbolic surfaces, and we only require that $\covol(\Gamma_n)$ tends to infinity. This allows us to eliminate additive constants in the following: \[ \lim_{n\to\infty}\frac{d(\Gamma_n)-1}{\mathrm{vol}(G/\Gamma_n)} = \lim_{n\to\infty}\frac{b_1(\Gamma_n)}{\mathrm{vol}(G/\Gamma_n)} = \lim_{n\to\infty}\frac{-\chi(G/\Gamma_n)}{\mathrm{vol}(G/\Gamma_n)} = \frac{1}{2\pi} = \beta_1(G) \leq \mathrm{c}_{\mathrm{P}}(G)-1 \] where $\beta_1(G)$ is the first $L^2$-Betti number of $G$. We are using the Gauss-Bonnet formula above and Gaboriau's result that the first $L^2$-Betti number of a relation is dominated by its cost minus $1$. Note that in \cite{conley2021one}, they prove (in particular) that (cross sections of) actions of $\PSL_2(\RR)$ are treeable and thus have fixed price equal to their $L^2$-Betti number plus one. Thus we actually have equality all the way above. We now handle the other cases. Let us denote by $a\Gamma_n$ the lattice shift corresponding to $\Gamma_n < G$. Let us summarise the proof: \begin{enumerate} \item We produce a sequence of uniformly separated factors $\Phi^n(a\Gamma_n)$ of the lattice shifts $G \curvearrowright G/\Gamma_n$. Note that by equivariance they must be of the form $\Phi^n(a\Gamma_n) = a\Gamma_n F_n$, and \[ \cost(G \curvearrowright G/\Gamma_n) - 1 = \frac{d(\Gamma_n) - 1}{\text{vol}(G/\Gamma_n)} \leq \cost(\Phi^n(a\Gamma_n)) - 1 \] as cost is monotone under factors. \item We show that $\Phi^n(a\Gamma_n)$ admits subsequential weak limits, and any such subsequential weak limit is a random net.
\item As in the proof of Theorem \ref{farbertheorem}, we now use the periodic IID lattice shift and distribute randomness, replacing $\Phi^n(a\Gamma_n)$ by marked processes which converge to an IID-labelled random net. \item Using results of \cite{MR2863908}, our desired inequality follows from cost monotonicity. \end{enumerate} We will show that the distance-$R$ factor graph $\mathscr{D}_R$ is connected on $\Phi^n(a\Gamma_n) = a\Gamma_n F_n$. Observe that, by left-invariance of the metric, this is true if and only if it is connected on $\Gamma_n F_n$. Observe that this is a union of finitely many \emph{right} cosets; that is, $\Gamma_n F_n$ is a finite subset of ${\Gamma_n}\backslash G$. We will show that $\mathscr{D}_R$ is connected by appealing to properties of the further quotient ${\Gamma_n}\backslash G/K = {\Gamma_n}\backslash X$. The essential result we use from \cite{MR2863908} is the following. As long as $G$ is not $\PSL(2,\RR)$, there exist constants $\e, \e' > 0$ and a sequence of \emph{connected} subsets $U_n \subset X$ such that \begin{itemize} \item The projection $\Gamma_n U_n \subset \Gamma_n \backslash X$ contains the $\e$-thick part of $\Gamma_n \backslash X$, and \item The projection $\Gamma_n U_n \subset \Gamma_n \backslash X$ is contained in the $\e'$-thick part of $\Gamma_n \backslash X$. \end{itemize} Here $\e$ is defined in Lemma 2.3 of \cite{MR2863908} and $\e' = \e/(2\mu)$, where $\mu$ is defined after the proof of Lemma 2.4. Crucially, these constants only depend on $G$. In the paper, our $U_n$ is denoted by $\widetilde{\psi}_{\leq 0}$ and it is a level set of a function inversely measuring the injectivity radius. \begin{claim} There exists a sequence $\Phi^n(a\Gamma_n)$ of factors that are uniformly separated and any subsequential weak limit of them is a random net. \end{claim} \begin{proof}[Proof of claim] We choose $F_n \subset G$ following Corollary 2.13 of \cite{MR2863908}.
We choose a maximal $\e'$-separated subset $E_n$ of $\Gamma_n U_n K \subset \Gamma_n \backslash X$. Then the union of $\e'$-balls around $E_n$ will cover $\Gamma_n U_n$ by maximality; hence, the set of points not covered by these balls lies in the $\e'$-thin part of $\Gamma_n \backslash X$. By the Farber condition, the density of these points tends to $0$ in $n$. This means that for any subsequential weak limit of the point processes $a\Gamma_n F_n$, the probability that every point of the configuration is at distance more than $\e'$ from the root of $X$ is $0$. That is, the union of $\e'$-balls around any subsequential weak limit will equal $X$ a.s., so the weak limit will be a net. The slight difference is that we now need the same on $G$, not on $X$. In order to do that, we pick a coset representative (randomly or deterministically) with respect to $K$. This can increase distances by at most a bounded amount, so even on $G$, the same argument holds with worse constants. \end{proof} As before, we distribute randomness from the $\Gamma_n$-periodic IID processes to $\Phi^n(a\Gamma_n)$. Call the resulting process $\Pi^n$. By passing to a subsequential weak limit, we can assume that $\Pi^n$ weakly converges to some process $\Upsilon$. As in Theorem \ref{farbertheorem}, the Farber condition ensures that $\Upsilon$ is in fact an IID labelled process (and in particular, its cost is at most the cost of the Poisson point process). Our final task is to relate the cost of the $\Pi^n$ processes to the cost of $\Upsilon$. We write $\mu_0^n$ for the Palm measure of $\Pi^n$ and $\mu_0$ for the Palm measure of $\Upsilon$. By the proof of Theorem \ref{costmonotonicity}, any factor graph which $\delta$-computes the cost of $\Upsilon$ contains a $\mu_0$-continuity factor graph $\mathscr{G}$ which is connected for $\Upsilon$. Thus \[ \limsup_{n \to \infty} \overrightarrow{\mu_0^n}(\mathscr{G}) \leq \overrightarrow{\mu_0}(\mathscr{G}).
\] A priori, there is no reason to expect that $\mathscr{G}$ is connected for \emph{any} $\Pi^n$, however. But note that by construction the graphing $\mathscr{G}$ has the property that for large enough $R > 0$ there exists a constant $N$ such that $\mathscr{G}^N(\omega) \supseteq \mathscr{D}_R(\omega)$ for all $\e'$-separated configurations $\omega$, where $\mathscr{D}_R$ denotes the distance-$R$ factor graph as usual. In particular, $\mathscr{G}$ is connected for the lattice shift factors $a\Gamma_n F_n$, as they are coarsely connected: by left-invariance of the metric, we may simply consider $\Gamma_n F_n$, and recall that its image in $X$ lies in the connected subset $U_n$. Now for any pair of points $x$ and $y$ in $\Gamma_n F_n$, take a path between their images $xK$ and $yK$ lying within $U_n$, and note that it induces a coarse path (with bounded jumps) between $x$ and $y$ themselves. Thus \[ \limsup_{n \to \infty} \frac{d(\Gamma_n) - 1}{\text{vol}(G/\Gamma_n)} \leq \limsup_{n \to \infty} \left( \cost(\Pi^n) - 1 \right) \leq \cost(\Upsilon) - 1 \leq \mathrm{c}_{\mathrm{P}}(G) - 1, \] as desired. \end{proof} \section{Introduction} Let $G$ be a locally compact, second countable, unimodular group that is nondiscrete and noncompact, endowed with a Haar measure $\lambda$. We think of $\lambda$ as an inherent parameter of $G$, as all the notions trivially scale with $\lambda$. Throughout the paper, we will make these assumptions on $G$ except when stated otherwise. A \emph{point process} $\Pi$ on $G$ is a random closed and discrete subset of $G$. More precisely, it is a random variable taking values in the \emph{configuration space} of $G$: \[ \MM = \MM(G) = \{ \omega \subseteq G \mid \omega \text{ is closed and discrete} \}. \] When the law of $\Pi$ is invariant under the left $G$-action, we call $\Pi$ an \emph{invariant point process}. We do not assume the reader has any knowledge of point process theory and have made the paper as self-contained as possible. Invariant point processes are examples of probability measure preserving (pmp) actions.
Recall that a pmp action is \emph{essentially free} (or simply \emph{free} for short) if the stabiliser of almost every point in the action space is trivial. In the particular case of point processes, this means that the set of points will almost surely have no symmetries. Our first theorem shows that actually \emph{every} free pmp action can be realised this way. \begin{thm} \label{minden}Every free pmp action of $G$ is isomorphic to a point process on $G$. \end{thm} Note that freeness is a necessary condition here, as can be seen from the action of $\RR^2$ on $\RR^2 / (\ZZ \times \RR)$. This action is, however, a point process on a homogeneous space of $\RR^2$. The proof of Theorem \ref{minden} exhibits an analogy between point processes of locally compact groups and the symbolic dynamics of countable groups. For a pmp action of a countable group $\Gamma$, every Borel partition of the underlying space gives rise to an invariant random colouring of $\Gamma$ by considering the orbit of a random point of the underlying space. Similarly, every cross section of a free pmp action of $G$, when considering its intersection with the $G$-orbit of a random point, will become a point process on $G$. So point processes serve as \emph{stochastic visualisations} of pmp actions of locally compact groups, just like invariant random colourings do for countable groups. This paper aims to show that this visualisation leads to new and meaningful results. The correspondence above and the classical theorem of Forrest \cite{MR417388} on the existence of cross sections (see also \cite{MR3335405} and Section 3.B of \cite{kechris2019theory}) immediately yield that every free pmp action factors onto an invariant point process. The factor map can be upgraded to an isomorphism by using a \emph{marked} point process. These are random discrete subsets where each point carries a mark from some mark space (for example, a finite set of colours).
Then Theorem \ref{minden} is proved by showing that every marked point process is isomorphic to an unmarked one, by ``spatially encoding'' the marks. We now introduce the cost of a point process $\Pi$ on $G$. A \emph{factor graph} $\mathscr{G}$ of $\Pi$ is an equivariantly and measurably defined graph $\mathscr{G}(\Pi)$ whose vertex set is $\Pi$. For example, one can define the distance graph for $r > 0$ to be the set of pairs $g, h \in \Pi$ with $d(g, h) < r$, where $d$ is a left-invariant metric on $G$. Informally speaking, the \emph{cost} of $\Pi$ is defined as the infimum of the average degrees over connected factor graphs of $\Pi$, suitably normalised to be an isomorphism invariant. For precise definitions see Section \ref{cost}. We then define the cost of free pmp actions of $G$ via Theorem \ref{minden}, which is well-defined since cost is an isomorphism invariant. The cost of pmp actions of countable groups has been an active subject over the last twenty years; see Gaboriau's paper \cite{gaboriau2016around} and the survey paper \cite{furman2009survey} for the literature. It has been known in the community that cross sections naturally allow one to extend the notion of cost to free pmp actions of locally compact groups, but due to the lack of results, the notion stayed dormant. The first explicit appearance of the definition can be found in a recent paper of Carderi \cite{carderi2018asymptotic}. The definition we work with is essentially equivalent to his, but we develop it intrinsically as a point process theoretic notion. One of the most important families of processes on a discrete group is that of Bernoulli percolations $\text{Ber}(p)$. The natural analogue of this family for nondiscrete groups is the \emph{Poisson point process} of intensity $t > 0$. Here the \emph{intensity} of an invariant point process is the expected number of points which fall in a set of unit volume. This quantity can be shown to be independent of the choice of set.
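As a toy illustration of intensity and the distance-$r$ factor graph (our own sketch, not from the paper: we take $G = \RR$ with Lebesgue measure and sample a Poisson process via i.i.d. exponential gaps), the empirical intensity approximates $t$, and the average degree of the distance-$r$ graph approximates $2rt$ away from the boundary:

```python
import random

random.seed(7)
t, T, r = 2.0, 200.0, 1.0   # intensity, window length, graph radius

# A Poisson process of intensity t on [0, T): successive gaps are i.i.d. Exp(t).
pts = []
x = random.expovariate(t)
while x < T:
    pts.append(x)
    x += random.expovariate(t)

intensity_hat = len(pts) / T   # empirical points per unit volume, close to t

# The distance-r factor graph: connect pairs of points at distance < r.
deg = [0] * len(pts)
for i in range(len(pts)):
    j = i + 1
    while j < len(pts) and pts[j] - pts[i] < r:
        deg[i] += 1
        deg[j] += 1
        j += 1
avg_deg = sum(deg) / len(deg)  # roughly 2 * r * t away from the boundary
```

The exponential-gap construction is specific to dimension one; it is a convenient stand-in here for the general sampling description given later.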
An explicit description of Poisson point processes will be given later, but one should know that these processes are ``completely independent''. \begin{thm}\label{Poissmax} Poisson point processes have maximal cost among all free pmp actions of $G$. In particular, the cost of a Poisson point process does not depend on its intensity. \end{thm} We denote the cost of a Poisson point process on $G$ by $c_P(G)$. The above result can be seen as a locally compact analogue of a result of Ab\'{e}rt and Weiss \cite{abert2013bernoulli}, where they show that for a countable group, Bernoulli actions have maximal cost among all free pmp actions. A central open problem in cost theory is the Fixed Price problem of Gaboriau, which asks whether all free pmp actions of a countable group have the same cost. This is also open in the locally compact setting. \begin{question} Is it true that all free point processes on $G$ have the same cost? \end{question} Gaboriau \cite{gaboriau2002invariants} asks if for a countable pmp equivalence relation, the cost of the relation equals its first $L^{2}$ Betti number plus $1$. Note that an affirmative answer to this would imply an affirmative answer to Question 1 as well, using the cross section correspondence. Since the cost of any free process is at least one, a viable way to prove that a group has fixed price one is by showing that the Poisson point process admits connected factor graphs with average degree $1 + \e$ for all $\e > 0$. We succeed in this for the first nontrivial case, answering a question of Carderi \cite{carderi2018asymptotic}: \begin{thm} \label{zszorzat}Every free pmp action of $G\times\mathbb{Z}$ has cost one if $G$ is compactly generated. \end{thm} Our proof is truly stochastic in nature, as it essentially uses some special properties of Poisson point processes.
In countable cost theory, it remains an open question if the direct product $\Gamma \times \Delta$ of two infinite countable groups $\Gamma$ and $\Delta$ has fixed price one. It is known to hold if one of the groups contains a fixed price one subgroup. When trying to generalize Theorem \ref{zszorzat} to arbitrary products, we seem to hit a somewhat similar barrier. \begin{question} Let $G$ and $H$ be compactly generated but noncompact groups. Does $G \times H$ have fixed price one? \end{question} Another application of Theorem \ref{Poissmax} concerns the growth of the minimal number of generators (the rank gradient) for a sequence of lattices in semisimple real Lie groups. Recall that a discrete subgroup $\Gamma\leq G$ is a \emph{lattice} if it has finite covolume in $G$. Let $d(\Gamma)$ denote the minimal number of generators (also known as the \emph{rank}) of $\Gamma$. When $G$ is a semisimple Lie group, $d(\Gamma)$ is finite and by a theorem of Gelander \cite{MR2863908}, we have \[ \frac{d(\Gamma)-1}{\mathrm{vol}(G/\Gamma)}\leq C \] for some constant $C$ only depending on $G$. A sequence of lattices $\Gamma_{n}$ in $G$ is \emph{Farber} if $G/\Gamma_{n}$ approximates $G$ in the Benjamini-Schramm topology, or, equivalently, if the corresponding invariant random subgroups weakly converge to the trivial subgroup. \begin{thm}\label{carderiextension} Let $G$ be a semisimple real Lie group and let $\Gamma_{n}$ be a Farber sequence of lattices in $G$. Then \[ \limsup_{n\to\infty}\frac{d(\Gamma_n)-1}{\mathrm{vol}(G/\Gamma_n)} \leq\mathrm{c}_{\mathrm{P}}(G)-1. \] \end{thm} In particular, if the Poisson point process has cost $1$ then the rank grows sublinearly in the covolume, for Farber sequences of lattices. This correspondence connects computing the cost of the Poisson process to some exciting open problems that have been investigated extensively.
Theorem \ref{carderiextension} extends a result of Carderi \cite{carderi2018asymptotic}, who proved it for uniformly discrete (in particular, cocompact) Farber sequences. Here a sequence of lattices is \emph{uniformly discrete} if there exists $C>0$ such that the injectivity radius of $G/\Gamma_{n}$ is bounded below by $C$ for all $n$. \begin{question} Let $G$ be a semisimple real Lie group that is not a compact extension of $\SL_{2}(\RR)$. Is $\mathrm{c}_{\mathrm{P}}(G)=1$? \end{question} Note that by work of Conley, Gaboriau, Marks, and Tucker-Drob the group $\SL_{2}(\RR)$ is treeable and has fixed price greater than $1$. We now showcase three concrete cases for a semisimple Lie group where computing the cost of the Poisson process would solve known problems of a different nature. Note that it is natural to ask about the cost of the Poisson process on the symmetric space $X$ of $G$ rather than on the group $G$ itself. As we show in Theorem \ref{symmetricspacecostmax}, these two invariants are equal. \medskip \textbf{Case }$G=\SL_{2}(\CC)$ and $X=\HH^3$\textbf{.} If $\mathrm{c}_{\mathrm{P}}(G)>1$, then we get free point processes on $G$ with different costs. Moreover, we also get a countable equivalence relation whose first $L^2$ Betti number is not equal to its cost minus $1$, answering a question of Gaboriau \cite{gaboriau2002invariants}. If, on the other hand, $\mathrm{c}_{\mathrm{P}}(G)=1$, then we get that the Heegaard genus divided by the rank of the fundamental group of a (compact) hyperbolic $3$-manifold can be arbitrarily large. In fact, we obtain this for \emph{any} expander Farber sequence of hyperbolic $3$-manifolds, which is understood as the typical behavior. Indeed, by the work of Lackenby \cite{lackenby2002heegaard}, for expander sequences the Heegaard genus grows linearly, while using our work, the rank would grow sublinearly in the volume.
Note that it is a longstanding open problem whether this ratio is absolutely bounded over all $3$-manifolds, and in fact it was only proved recently in the deep paper of Li \cite{li2013rank} that for compact hyperbolic $3$-manifolds, the Heegaard genus can \emph{differ} from the rank. \medskip \textbf{Case when }$G$\textbf{ has higher rank.} For these Lie groups, Fraczyk recently proved in a beautiful paper \cite{fraczyk2018growth} that the growth of the first mod $2$ homology vanishes for Farber sequences in $G$. Surprisingly, his method does not seem to carry over to odd primes, so for primes other than $2$, this is still an open problem. As the rank is an upper bound for the dimension of the first mod $p$ homology of a discrete group, proving $\mathrm{c}_{\mathrm{P}}(G)=1$ would settle this problem. By a standard induction argument, proving $\mathrm{c}_{\mathrm{P}}(G)=1$ would show that any lattice in $G$ has fixed price $1$, a problem of Gaboriau that is still open for cocompact lattices. \medskip \textbf{Case when }$G$\textbf{ has higher rank and property (T).} Using \cite{MR3664810}, for semisimple Lie groups with (T) one can actually omit the Farber condition. \begin{cor} Let $G$ be a higher rank semisimple real Lie group with property (T) and let $\Gamma_{n}$ be any sequence of lattices in $G$ with $\mathrm{vol}(G/\Gamma_n)\rightarrow \infty$. Then \[ \limsup_{n\rightarrow\infty}\frac{d(\Gamma_n)-1}{\mathrm{vol}(G/\Gamma_n)} \leq\mathrm{c}_{\mathrm{P}}(G)-1. \] \end{cor} In particular, if $\mathrm{c}_{\mathrm{P}}(G)=1$, then we get a \emph{totally uniform} vanishing theorem for the growth of rank for lattices in these groups, including $\SL(d,\RR)$ ($d\geq3$). It is shown in \cite{abert2017rank} that any Farber sequence in $\SL(3,\ZZ)$ has vanishing rank gradient, but the uniform version is wide open.
Note that in their very recent paper Lubotzky and Slutsky \cite{lubotzky2021asymptotic} showed that in the above situation, every sequence of non-uniform lattices will have rank gradient $0$. Their proof uses deep classical results on non-uniform lattices, like arithmeticity and the Congruence Subgroup Property, but in turn gives a much stronger upper bound for the number of generators than what we ask, logarithmic in the covolume. In most cases, they can even improve this with a loglog factor. Their methods do not seem to readily generalize to cocompact lattices. Our purely geometric approach may have the potential to be applied more widely, but the price is that, being a limiting argument, it is not expected to yield such explicit estimates. The proof of Theorem \ref{Poissmax} uses the stochastic visualisation method to show that every free action is ``sufficiently rich'' in randomness to ``simulate'' the Poisson point process. In particular, connected factor graphs of the Poisson point process can be transferred to an arbitrary free process in a way that can at worst \emph{decrease} the average degree. Simulation here refers to \emph{weak factoring}, a notion we introduce that is inspired by weak containment of actions; see the survey of Kechris and Burton \cite{kechris2019theory}. For invariant point processes $\Pi$ and $\Upsilon$, we say that $\Upsilon$ is a \emph{factor} of $\Pi$ if there exists a $G$-equivariant Borel map $\Phi: \MM \to \MM$ such that $\Phi(\Pi) = \Upsilon$. We say that $\Upsilon$ is a \emph{weak factor} of $\Pi$, or that $\Pi$ \emph{weakly factors} onto $\Upsilon$, if there exist factor maps $\Phi_1, \Phi_2, \ldots$ of $\Pi$ such that $\Phi_n(\Pi)$ converges weakly to $\Upsilon$. \begin{thm} \label{Poissweak}Let $\Pi$ be a free point process on $G$. Then $\Pi$ weakly factors onto the Poisson point process of intensity $t$, for all $t$. \end{thm} In particular, Poisson processes on $G$ of different intensities weakly factor onto each other.
More is known in the amenable case: Ornstein and Weiss showed \cite{ornstein1987entropy} that for a large class of amenable groups, the Poisson point processes of different intensities are in fact \emph{isomorphic} as actions (see \cite{soo2019finitary} for an alternative construction on $\RR^n$ with additional properties). The proof of Theorem \ref{Poissweak} revolves around IID-marked processes. Let $[0,1]^\Pi$ denote the random $[0,1]$-marked subset of $G$ whose underlying set is $\Pi$ and whose marks are independent and identically distributed $\texttt{Unif}[0,1]$ random variables. We call this the \emph{IID of $\Pi$}. Once this definition and that of the Poisson point process are understood, one can readily see that the IID of \emph{any} process factors onto the Poisson point process. We then prove: \begin{thm} \label{putiid}Let $\Pi$ be a free point process on $G$. Then $\Pi$ weakly factors onto $[0,1]^\Pi$, its own IID. \end{thm} Somewhat surprisingly, it is not entirely trivial to show that weak factoring is a \emph{transitive} notion, but we are able to prove it. Thus in particular, Theorem \ref{putiid} implies that free point processes weakly factor onto the Poisson point process. We next investigate how cost behaves with respect to factor maps. It is easy to see that it can only \emph{increase} under a factor map: if $\Pi$ factors onto $\Upsilon$, then $\cost(\Pi) \leq \cost(\Upsilon)$. In particular, this shows that cost is an \emph{isomorphism invariant} of actions. This monotonicity of cost under factor maps can be pushed further: \begin{thm} Suppose $\Pi$ weakly factors onto $\Upsilon$, as witnessed by a sequence of factor maps $\Phi_n(\Pi)$ weakly converging to $\Upsilon$. Under appropriate tightness conditions on $\Pi, \Upsilon$, and the sequence $\Phi_n$, we have $\cost(\Pi) \leq \cost(\Upsilon)$. \end{thm} See Section \ref{certainfactors} for a precise statement.
This cost monotonicity theorem, limited as it is, is powerful enough to prove that the Poisson point process has maximal cost amongst all free processes. \medskip \noindent {\bf Acknowledgements.} The authors wish to thank the anonymous referee for a very thorough and helpful report. The second author thanks Mikolaj Fraczyk and Alessandro Carderi for helpful discussions. \medskip The paper is structured as follows. In Section 1, we give the basic definitions and notations of point processes for those who have never encountered them before, and describe the most important examples of point processes for our work. In Section 2, we introduce the \emph{Palm measure} of a point process and the rerooting groupoid. In Section 3, we define the \emph{cost} of an invariant point process and prove basic properties of it. In Section 4, we define \emph{weak factoring} of point processes and prove that (in certain circumstances) cost is \emph{monotone} with respect to weak factoring. We use this to show that the Poisson has maximal cost amongst all free processes. In Section 5, we use the fact that the Poisson has maximal cost to give the first nontrivial examples of nondiscrete groups with fixed price. In Section 6, we connect the rank gradient of sequences of lattices in a group with the cost of the Poisson point process on said group. In Section 7, we discuss the modifications required to extend the above theory to invariant point processes on symmetric spaces. In the appendix, we include a summary of necessary technical facts from point process theory with references for proofs. No originality is claimed for this material. \section{Point processes and factors of interest} Let $(Z,d)$ denote a complete and separable metric space (a csms). A \emph{point process on $Z$} is a random discrete subset of $Z$. We will also study random discrete subsets of $Z$ that are \emph{marked} by elements of an additional csms $\Xi$. 
Typically $\Xi$ will be a finite set that we think of as colours. \begin{defn} The \emph{configuration space} of $Z$ is \[ \MM(Z) = \{ \omega \subset Z \mid \omega \text{ is locally finite} \}, \] and the \emph{$\Xi$-marked configuration space} of $Z$ is \[ \Xi^\MM(Z) = \{ \omega \subset Z \times \Xi \mid \omega \text{ is discrete, and if } (z, \xi) \in \omega \text{ and } (z, \xi') \in \omega \text{ then } \xi = \xi' \}. \] \end{defn} Note that $\Xi^\MM(Z) \subset \MM(Z \times \Xi)$. We think of a $\Xi$-marked configuration $\omega \in \Xi^\MM(Z)$ as a locally finite subset of $Z$ with labels on each of the points (whereas a typical element of $\MM(Z \times \Xi)$ is a locally finite subset where each point has possibly multiple marks). If $\omega \in \Xi^\MM(Z)$ is a marked configuration, then we will write $\omega_z$ for the unique element of $\Xi$ such that $(z, \omega_z) \in \omega$. The Borel structure on configuration spaces is exactly such that the following \emph{point counting functions} are measurable. Let $U \subseteq Z$ be a Borel set. It induces a function $N_U : \MM(Z) \to \NN_0 \cup \{ \infty \}$ given by \[ N_U(\omega) = \@ifstar{\oldabs}{\oldabs*}{\omega \cap U}. \] We will primarily be interested in point processes defined on locally compact and second countable (lcsc) groups $G$. Such groups admit a unique (up to scaling) left-invariant Haar measure $\lambda$; we fix such a choice. We will further assume that $G$ is \emph{unimodular}, although it is not strictly necessary for every argument in the paper. Recall: \begin{thm}[Struble's theorem, see Theorem 2.B.4 of \cite{MR3561300}] Let $G$ be a locally compact topological group. Then $G$ is second countable \emph{if and only if} it admits a proper\footnote{Recall that a metric is \emph{proper} if closed balls are compact.} left-invariant metric. \end{thm} Such a metric is unique up to coarse equivalence (bilipschitz if the group is compactly generated). We fix $d$ to be any such metric.
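As a concrete toy illustration of these definitions (in Python rather than the formalism above; the names \texttt{omega}, \texttt{N}, and \texttt{marks} are ours), one can represent a finite window of a configuration in $Z = \RR^2$, its point counting functions, and a $\Xi$-marking where each point carries exactly one mark:

```python
import numpy as np

# A finite piece of a configuration omega in Z = R^2: a locally finite
# set of points, stored as rows of an array.
omega = np.array([[0.3, 0.5], [1.2, 2.0], [1.9, 0.1], [3.4, 3.3]])

def N(U_low, U_high, omega):
    """The point counting function N_U(omega) = |omega ∩ U|,
    where U is the axis-aligned box [U_low, U_high]."""
    inside = np.all((omega >= U_low) & (omega <= U_high), axis=1)
    return int(np.sum(inside))

# A Xi-marked configuration: every point of omega carries exactly one
# mark from Xi, so we may store it as a map z -> omega_z.
marks = {tuple(z): m for z, m in zip(omega, ["red", "blue", "blue", "red"])}

print(N([0.0, 0.0], [2.0, 2.0], omega))  # counts the first three points
```

This is of course only a finite truncation: an actual configuration is an arbitrary locally finite subset, and the Borel structure is generated by the maps $N_U$.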
We mostly consider the configuration space of a fixed group $G$. So out of notational convenience let us write $\MM = \MM(G)$ and $\Xi^\MM = \Xi^\MM(G)$. The latter here is an abuse of notation: formally $\Xi^\MM$ ought to denote the set of functions from $\MM$ to $\Xi$, but instead we are using it to denote the set of functions from \emph{elements} of $\MM$ to $\Xi$. Note that the marked and unmarked configuration spaces of $G$ are Borel $G$-spaces. To spell this out, $G \curvearrowright \MM$ by $g \cdot \omega = g\omega$ and $G \curvearrowright \Xi^\MM$ by \[ g \cdot \omega = \{(gx, \xi) \in G \times \Xi \mid (x, \xi) \in \omega \}. \] \begin{defn} A \emph{point process} on $G$ is an $\MM(G)$-valued random variable $\Pi : (\Omega, \PP) \to \MM(G)$. Its \emph{law} or \emph{distribution} $\mu_\Pi$ is the pushforward measure $\Pi_*(\PP)$ on $\MM(G)$. It is \emph{invariant} if its law is an invariant probability measure for the action $G \curvearrowright \MM(G)$. The associated \emph{point process action} of an invariant point process $\Pi$ is $G \curvearrowright (\MM(G), \mu_\Pi)$. \end{defn} Some remarks and caveats are in order: \begin{itemize} \item Point processes which are not invariant are very much of interest, but will only come up when we discuss ``Palm processes''. Thus we will sometimes say ``point process'' when we strictly mean \emph{invariant} point process. \item Speaking properly, we are discussing \emph{simple} point processes, that is, those where each point has multiplicity one. We will discuss this more later. \item $\Xi$-marked point processes are defined similarly, with $\Xi^\MM$ taking the place of $\MM$. There isn't much difference between marked point processes and unmarked ones for our purposes (it's just a case of which is more convenient for the particular problem at hand). Thus ``point process'' might also mean ``marked point process''.
\item One could certainly define point processes on a discrete group, but this is essentially percolation theory. We are specifically trying to understand the nondiscrete case, and so will assume $G$ is nondiscrete. \item The other case of interest we will discuss is $\Isom(S)$-invariant point processes on $S$, where $S$ is a Riemannian symmetric space. For instance, one would consider isometry invariant point processes on Euclidean space $\RR^n$ or hyperbolic space $\HH^n$. We will discuss this case more in Section \ref{homogeneousspaces}. \item Our interest in point processes is almost exclusively \emph{as actions}. We will therefore rarely distinguish between a point process proper and its distribution. Thus we may use expressions like ``suppose $\mu$ is a point process'' to mean ``suppose $\mu$ is the distribution of some point process''. \item The configuration space of any Polish space will be Polish, so the probability theory of point processes on such spaces is well behaved. The metric properties of configuration spaces that we require are listed in the appendix, with references for proofs. \end{itemize} \begin{defn} The \emph{intensity} of a point process $\mu$ is \[ \intensity(\mu) = \frac{1}{\lambda(U)} \EE_\mu \left[ N_U \right], \] where $U \subset G$ is any Borel set of positive (but finite) Haar measure, and $N_U(\omega) = \@ifstar{\oldabs}{\oldabs*}{\omega \cap U}$ is its point counting function. \end{defn} To see that the intensity is well-defined (that is, does not depend on our choice of $U$), observe that the function $U \mapsto \EE_\mu[N_U]$ defines a Borel measure on $G$ which inherits invariance from the shift invariance of $\mu$. So by uniqueness of Haar measure, it is some scaling of our fixed Haar measure $\lambda$ -- the intensity is exactly this multiplier. We also see that whilst the intensity depends on our choice of Haar measure, it scales linearly with it.
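The well-definedness of the intensity can also be checked numerically. The following Monte Carlo sketch (our own illustration; the helper names are hypothetical) takes $G = \RR$ with Lebesgue measure as Haar measure and uses a uniformly random shift of the integer lattice $\ZZ \subset \RR$ (a lattice shift, as discussed in the next subsection); two windows $U$ of different sizes and positions give approximately the same value of $\frac{1}{\lambda(U)}\EE_\mu[N_U]$:

```python
import numpy as np

rng = np.random.default_rng(1)
reps = 4000

def shifted_lattice(lo, hi):
    """One sample of a uniformly shifted copy of Z, restricted to [lo, hi]."""
    u = rng.random()  # uniform shift in the fundamental domain [0, 1)
    return u + np.arange(np.floor(lo) - 1, np.ceil(hi) + 1)

def estimate_intensity(a, b):
    """Monte Carlo estimate of E[N_U] / lambda(U) for the window U = (a, b]."""
    total = 0
    for _ in range(reps):
        pts = shifted_lattice(a, b)
        total += np.sum((a < pts) & (pts <= b))
    return total / reps / (b - a)

# Different windows give (approximately) the same answer, illustrating
# that the intensity does not depend on the choice of U.
est1 = estimate_intensity(0.0, 1.0)
est2 = estimate_intensity(2.0, 4.5)
```

Both estimates are close to $1$, the intensity of the shifted lattice $\ZZ$ with respect to Lebesgue measure.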
Note that a point process has intensity zero if and only if it is empty almost surely. \subsection{Examples of point processes} \begin{example}[Lattice shifts] Let $\Gamma < G$ be a \emph{lattice}, that is, a discrete subgroup that admits an invariant probability measure $\nu$ for the action $G \curvearrowright G / \Gamma$. The natural map $\MM(G / \Gamma) \to \MM(G)$ given by \[ \omega \mapsto \bigcup_{a\Gamma \in \omega} a\Gamma \] is left-equivariant, and hence maps invariant point processes on $G / \Gamma$ to invariant point processes on $G$. In particular, we have the \emph{lattice shift}, given by choosing a $\nu$-random point $a\Gamma$. \end{example} \begin{example}[\textbf{Induction from a lattice}] Now suppose one also has a pmp action $\Gamma \curvearrowright (X, \mu)$. It is possible to \emph{induce} this to a pmp action of $G$ on $G / \Gamma \times X$. This can be described as an $X$-marked point process on $G$. To do this, fix a fundamental domain $\mathscr{F} \subset G$ for $\Gamma$. Choose $f \in \mathscr{F}$ uniformly at random, and independently choose a $\mu$-random point $x \in X$. Let \[ \Pi = \{ (f\gamma, \gamma \cdot x) \in G \times X \mid \gamma \in \Gamma \}. \] Then $\Pi$ is a $G$-invariant $X$-marked point process. \end{example} In this way one can view point processes as generalised lattice shift actions. Note that there are groups without lattices (for instance Neretin's group, see \cite{MR2881324}), but every group admits interesting point processes, as we discuss now. The most fundamental of these is known as \emph{the Poisson point process}. We will define this after reviewing the Poisson distribution: Recall that a random integer $N$ is \emph{Poisson distributed with parameter $t > 0$} if \[ \PP[N = k] = \exp(-t)\frac{t^k}{k!}.\] We write $N \sim \texttt{Pois}(t)$ to denote this. 
It is convenient to extend this definition to $t = 0$ and $t = \infty$ by declaring $N \sim \texttt{Pois}(0)$ when $N = 0$ almost surely and $N \sim \texttt{Pois}(\infty)$ when $N = \infty$ almost surely. \begin{defn} Let $X$ be a complete and separable metric space equipped with a non-atomic Borel measure $\lambda$. A point process $\Pi$ on $X$ is \emph{Poisson with intensity $t > 0$} if it satisfies the following two properties: \begin{description} \item[(Poisson point counts)] for all $U \subseteq X$ Borel, $N_U(\Pi)$ is a Poisson distributed random variable with parameter $t \lambda(U)$, and \item[(Total independence)] for all $U, V \subseteq X$ \emph{disjoint} Borel sets, the random variables $N_U(\Pi)$ and $N_V(\Pi)$ are \emph{independent}. \end{description} \end{defn} For reasons that may not be immediately apparent, the above defining properties are in fact equivalent. We will write $\PPP_t$ for the distribution of such a process, or simply $\PPP$ if the intensity is understood. We think of the Poisson point process as a completely random scattering of points in the group. It is an analogue of Bernoulli site percolation for a continuous space. We now construct the process (somewhat) explicitly. Partition $G$ into disjoint Borel sets $U_1, U_2, \ldots$ of positive but finite volume. For each of these, independently sample from a Poisson distribution with parameter $t \lambda(U_i)$. Place that number of points in the corresponding $U_i$ (independently and uniformly at random). This description can be turned into an explicit sampling rule\footnote{That is, one can define a measurable function $f : \prod_n X_n \to \MM$ defined on an appropriate product of probability spaces such that the pushforward measure is the distribution of the Poisson point process.}, if one desires.
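The construction just described can be sketched in a few lines of code. The following is our own illustrative sampler (the function name is not from any library), taking the ambient group to be $\RR^2$ with Lebesgue measure as Haar measure and sampling a single cell $U_i$ of the partition:

```python
import numpy as np

def sample_poisson_pp(t, window, rng):
    """Sample a Poisson point process of intensity t restricted to a box.

    `window` is a list of (low, high) pairs, one per coordinate; the box
    plays the role of one cell U_i of the partition of the group."""
    lows = np.array([lo for lo, hi in window])
    highs = np.array([hi for lo, hi in window])
    volume = np.prod(highs - lows)  # Haar (= Lebesgue) measure of the cell
    n = rng.poisson(t * volume)     # Poisson point count, parameter t * lambda(U_i)
    # Given the count, the points are placed independently and uniformly.
    return lows + (highs - lows) * rng.random((n, len(window)))

rng = np.random.default_rng(0)
pts = sample_poisson_pp(2.0, [(0.0, 10.0), (0.0, 10.0)], rng)
# The number of points concentrates around t * volume = 200.
```

Sampling each cell of a partition independently in this way and taking the union produces the process on all of the group.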
For proofs of basic properties of the Poisson point process (such as the fact that it does not depend on the partition chosen above), see the first five chapters of Kingman's book \cite{MR1207584}. \begin{defn} A pmp action $G \curvearrowright (X, \mu)$ is \emph{ergodic} if for every $G$-invariant measurable subset $A \subseteq X$, we have $\mu(A) = 0$ or $\mu(A) = 1$. The action is \emph{mixing} if for all measurable $A, B \subseteq (X, \mu)$ we have \[ \lim_{g \to \infty} \mu(gA \cap B) = \mu(A)\mu(B). \] The action is \emph{essentially free} if $\stab_G(x) = \{1\}$ for $\mu$ almost every $x \in X$. In the case of point process actions we will sometimes use the term \emph{aperiodic} to refer to this. \end{defn} \begin{prop} The Poisson point process actions $G \curvearrowright (\MM, \PPP_t)$ on a noncompact group $G$ are essentially free and ergodic (in fact, mixing). \end{prop} A proof of freeness that is readily adaptable to our setting can be found as Proposition 2.7 of \cite{MR3664810}. For ergodicity and mixing, see the proof of the discrete case in Proposition 7.3 of the Lyons-Peres book \cite{MR3616205}. It directly adapts, once one knows the required cylinder sets exist. Although the subscript $t$ suggests that the Poisson point processes form a continuum family of actions, this is not always the case: \begin{thm}[Ornstein-Weiss] Let $G$ be an amenable group which is not a countable union of compact subgroups. Then the Poisson point process actions $G \curvearrowright (\MM, \PPP_t)$ are all isomorphic. \end{thm} The following definition uses notation that does not appear in the literature (the object of course does, but there does not appear to be a symbolic representation for it): \begin{defn} If $\Pi$ is a point process, then its \emph{IID version} is the $[0,1]$-marked point process $[0,1]^\Pi$ with the property that conditional on its set of points, its labels are independent and identically distributed $\text{Unif}[0,1]$ random variables.
If $\mu$ is the law of $\Pi$, then we will write $[0,1]^\mu$ for the law of $[0,1]^\Pi$. One can define the IID of a point process over spaces other than $[0,1]$ (for instance, $[n] = \{1, 2, \ldots, n\}$ with the counting measure), but we will only use the full IID. \end{defn} \begin{remark} As we've mentioned, $[0,1]$-marked point processes on $G$ are particular examples of point processes on $G \times [0,1]$. One can show (see Theorem 5.6 of \cite{MR3791470}) that the Poisson point process on $G \times [0,1]$ with respect to the product measure $\lambda \otimes \text{Leb}$ is just the IID version of the Poisson point process on $G$, a fact which we will make use of later. \end{remark} \begin{prop} The IID Poisson point process on a noncompact group is ergodic (and in fact mixing). \end{prop} This can be seen by viewing the IID Poisson on $G$ as the Poisson point process on $G \times S^1$, restricted to $G$. Note that the restriction of a mixing action to a noncompact subgroup is mixing. \begin{remark} One can define ``the IID'' of any probability measure preserving countable Borel equivalence relation, see \cite{MR3813200}. This construction is known as \emph{the Bernoulli extension}, and is ergodic if the base space is ergodic. \end{remark} \begin{prop} Let $\Pi$ be a point process on a group $G$ which is non-empty almost surely. Then $\@ifstar{\oldabs}{\oldabs*}{\Pi} = \infty$ almost surely if and only if $G$ is noncompact. \end{prop} \begin{proof} It is immediate that any point process on a compact group must be finite almost surely (as it is a discrete subset of the space). Now suppose $\Pi$ is a non-empty point process on $G$ which is finite almost surely. Then the IID of this process $[0,1]^\Pi$ still has this property. We define the following $G$-valued random variable: \[ f([0,1]^\Pi) = \text{ the unique } x \in \Pi \text{ with maximal label in } [0,1]^\Pi. 
\] The invariance of the point process translates into equivariance of the map $f : [0,1]^\MM \to G$. Thus this random variable's law is an invariant probability measure on $G$. Such a measure exists exactly when $G$ is compact. \end{proof} \subsection{Factors of point processes} \begin{defn} A \emph{point process factor map} is a $G$-equivariant and measurable map $\Phi : \MM \to \MM$. If $\mu$ is a point process and $\Phi$ is only defined $\mu$ almost everywhere, then we will call it a \emph{$\mu$ factor map}. We will be interested in two monotonicity conditions: \begin{itemize} \item if $\Phi(\omega) \subseteq \omega$ for all $\omega \in \MM$, we will call $\Phi$ a \emph{thinning} (and usually denote it by $\theta$), and \item if $\Phi(\omega) \supseteq \omega$ for all $\omega \in \MM$, we will call $\Phi$ a \emph{thickening} (and usually denote it by $\Theta$). \end{itemize} We use the same terms for marked point processes as well. \end{defn} \begin{remark}\label{thinningconfusio} There are \emph{two} possible ways to interpret the above monotonicity conditions for a $\Xi$-marked point process, depending on what you want to do with the mark space. One can consider \[ \Phi : \Xi^\MM \to \Xi^\MM, \text{ or } \Phi : \Xi^\MM \to \MM. \] In the former case, the definition above works verbatim. In the latter case, one should interpret a statement like ``$\omega \subseteq \Phi(\omega)$'' as ``the underlying set $\pi(\omega)$ of $\omega$ is contained in $\Phi(\omega)$'', where $\pi : \Xi^\MM \to \MM$ is the map that forgets labels. \end{remark} \begin{example}[Metric thinning]\label{deltathinningdefn} Let $\delta > 0$ be a tolerance parameter. The \emph{$\delta$-thinning} is the equivariant map $\theta^\delta : \MM \to \MM$ given by \[ \theta^\delta(\omega) = \{ g \in \omega \mid d(g, \omega \setminus \{g\}) > \delta \}.
\] When $\theta^\delta$ is applied to a point process, the result is always a $\delta$-separated point process\footnote{Probabilists refer to such processes as \emph{hard-core}.} (but possibly empty). We define $\theta^\delta$ in the same way for marked point processes (that is, it simply ignores the marks). \end{example} \begin{example}[Independent thinning]\label{independentthinning} Let $\Pi$ be a point process. The \emph{independent $p$-thinning} defined on its IID $[0,1]^\Pi$ is given by \[ \mathcal{I}_p([0,1]^\Pi) = \{g \in \Pi \mid ([0,1]^\Pi)_g \leq p \}. \] \end{example} One can show that independent $p$-thinning of the Poisson point process of intensity $t > 0$ yields the Poisson point process of intensity $pt$, as one would expect. See Chapter 5 of \cite{MR3791470} for further details. \begin{example}[Constant thickening]\label{constantthickening} Let $F \subset G$ be a finite set containing the identity $0 \in G$, and $\Pi$ be a point process which is \emph{$F$-separated} in the sense that $\Pi \cap \Pi f = \empt$ for all $f \in F\setminus\{0\}$. Then there is the associated thickening $\Theta^F(\Pi) = \Pi F$. It is intuitively obvious that $\intensity (\Theta^F(\Pi)) = \@ifstar{\oldabs}{\oldabs*}{F} \intensity (\Pi)$. This can be formally established as follows: let $U \subseteq G$ be of unit volume. Then \begin{align*} \intensity (\Theta^F(\Pi) ) &= \EE\@ifstar{\oldabs}{\oldabs*}{U \cap \Pi F} && \text{By definition} \\ &= \sum_{f \in F} \EE\@ifstar{\oldabs}{\oldabs*}{U \cap \Pi f} && \text{By }F\text{-separation} \\ &= \sum_{f \in F} \EE \@ifstar{\oldabs}{\oldabs*}{Uf^{-1} \cap \Pi} && \\ &= \sum_{f \in F} \EE\@ifstar{\oldabs}{\oldabs*}{U \cap \Pi} && \text{By \emph{unimodularity}} \\ &= \@ifstar{\oldabs}{\oldabs*}{F} \intensity (\Pi). \end{align*} This is the first real appearance of our unimodularity assumption.
In particular, we can demonstrate that $\intensity \Theta^F(\Pi) = \@ifstar{\oldabs}{\oldabs*}{F} \intensity \Pi$ is \emph{not} automatically true without unimodularity. For this, let $\Pi$ denote the unit intensity Poisson point process on $G$, and $F = \{0, f\}$ where $f \in G$ is chosen such that $\lambda(Uf^{-1}) < 1$. Then $\@ifstar{\oldabs}{\oldabs*}{Uf^{-1} \cap \Pi}$ is Poisson distributed with parameter $\lambda(Uf^{-1})$, and so by the above calculation $\intensity\Theta^F(\Pi) < 2 \cdot \intensity \Pi$. \end{example} Monotone maps have been investigated in the specific case of the Poisson point process on $\RR^n$. We note the following interesting theorems: \begin{thm}[Holroyd, Peres, Soo \cite{MR2884878}] Let $s > t > 0$. Then the Poisson point process on $\RR^n$ of intensity $s$ can be thinned to the Poisson point process of intensity $t$. That is, there exists an equivariant and deterministic thinning $\theta : (\MM(\RR^n), \PPP_s) \to (\MM(\RR^n), \PPP_t)$. \end{thm} \begin{thm}[Gurel-Gurevich and Peled \cite{MR3096589}] Let $t > s > 0$ be intensities. Then the Poisson point process on $\RR^n$ of intensity $s$ cannot be thickened to the Poisson point process of intensity $t$. That is, there is no equivariant and deterministic thickening $\Theta : (\MM(\RR^n), \PPP_s) \to (\MM(\RR^n), \PPP_t)$. \end{thm} We stress in the above theorems the \emph{deterministic} nature of the maps. If one is allowed additional randomness (that is, one asks for a factor of IID map), then both a thinning and a thickening are easily constructed. We note the following fact, which we will use (and prove) later after developing some notation. \begin{example} If $\Pi$ is any non-empty point process, then its IID factors onto the Poisson (in fact, onto the IID Poisson). \end{example} \begin{defn} A \emph{factor $\Xi$-marking} of a point process is a $G$-equivariant map $\mathscr{C} : \MM \to \Xi^\MM$ such that the underlying subset in $G$ of $\mathscr{C}(\omega)$ is $\omega$.
That is, $\mathscr{C}$ is a rule that assigns a mark from $\Xi$ to each point of $\omega$ in some deterministic way. Again, if $\mathscr{C}$ is only defined $\mu$ almost everywhere then we will call it a \emph{$\mu$ factor $\Xi$-marking}. \end{defn} \begin{example}\label{colouring} Let $\theta : \MM \to \MM$ be a thinning. Then the associated $2$-colouring is $\mathscr{C}_\theta : \MM \to \{0, 1\}^\MM$ given by \[ \mathscr{C}_\theta(\omega) = \{ (g, \mathbbm{1}[{g \in \theta(\omega)}]) \in G \times \{0, 1\} \mid g \in \omega \}. \] We will see that all markings are built out of thinnings in a similar way. \end{example} \begin{remark}\label{thinninglost} There is a difference between the \emph{thinning map $\theta$} and the resulting \emph{thinned process $\theta_*(\mu)$} that can be a source of confusion. Passing to the thinned process (in principle) can lose information about $\mu$. For example, let $\Pi$ denote a Poisson point process on $G$ and $\Upsilon$ an independent random shift of a lattice $\Gamma < G$. Define the following thinning $\theta : \MM \to \MM$ by \[ \theta(\omega) = \{ g \in \omega \mid g\Gamma \subseteq \omega \}. \] Then $\theta(\Pi \cup \Upsilon) = \Upsilon$ almost surely, and so the thinning completely loses the Poisson point process. \end{remark} \begin{defn}\label{inputoutputdefn} Let $\Phi : \MM \to \MM$ be a factor map. We think of its input $\omega$ as being red, its output $\Phi(\omega)$ as being blue, and their overlap $\omega \cap \Phi(\omega)$ as being purple. For $g \in \omega \cup \Phi(\omega)$, let $\texttt{Colour}(g) \in \{\text{Red, Blue, Purple}\}$ be \[ \texttt{Colour}(g) = \begin{cases} \text{Red} & \text{ If } g \in \omega \setminus \Phi(\omega), \\ \text{Blue} & \text{ If } g \in \Phi(\omega)\setminus \omega, \\ \text{Purple} & \text{ If } g \in \omega \cap \Phi(\omega).
\end{cases} \] Now define $\Theta^\Phi : \MM \to \{\text{Red, Blue, Purple}\}^\MM$ to be the following \emph{input/output thickening} of $\Phi$ (see also Figure \ref{inputoutputfigure}): \[ \Theta^\Phi(\omega) = \{ (g, \texttt{Colour}(g)) \in G \times \{\text{Red, Blue, Purple}\} \mid g \in \omega \cup \Phi(\omega) \}. \] \begin{figure}[h] \centering \includegraphics[scale=.4]{inputoutput.pdf} \caption{This is how you should picture the input/output thickening of a factor map.}\label{inputoutputfigure} \end{figure} Let $\pi : \{\text{Red, Blue, Purple}\}^\MM \to \MM$ be the projection map that deletes red points and then forgets colours, that is, \[ \pi(\omega) = \{ g \in \omega \mid \omega_g \in \{\text{Blue, Purple}\} \}. \] \end{defn} \begin{remark}\label{factorsdecompose} Observe that $\Phi = \pi \circ \Theta^\Phi$ -- that is, an \emph{arbitrary} factor map decomposes as the composition of a thinning and a thickening. In this way we can often reduce the study of arbitrary factors to that of \emph{monotone} factors. \end{remark} \begin{defn} The \emph{space of graphs in $G$} is \[ \graph(G) = \{ (V, E) \in \MM(G) \times \MM(G \times G) \mid E \subseteq V \times V \}. \] This is a Borel $G$-space (with the diagonal action). A \emph{factor graph} is a measurable and $G$-equivariant map $\Phi : \MM(G) \to \graph(G)$ with the property that the vertex set of $\Phi(\omega)$ is $\omega$. If a factor graph is connected, then we will refer to it as a \emph{graphing}. \end{defn} \begin{remark} The elements of $\graph(G)$ are technically directed graphs, possibly with loops, and without multiple edges between the same pair of vertices. It's possible to define (in a Borel way) whatever space of graphs one desires (coloured, undirected, etc.) by taking appropriate subsets of products of configuration spaces. \end{remark} \begin{remark} One might prefer to call factor graphs as above \emph{monotone} factor graphs. Our terminology follows that of probabilists, see for instance \cite{holroyd2005}.
We have not found a use for the less restrictive factor graph concept. \end{remark} \begin{example}\label{distanceR} The \emph{distance-$R$} factor graph is the map $\mathscr{D}_R : \MM \to \graph(G)$ given by \[ \mathscr{D}_R(\omega) = \{ (g, h) \in \omega \times \omega \mid d(g, h) \leq R \}. \] The connectivity properties of this graph fall under the purview of continuum percolation theory, see for instance \cite{meester1996continuum}. \end{example} \section{The rerooting equivalence relation and groupoid} We now introduce a pair of algebraic objects that capture factors of a point process. For exposition's sake, we will first discuss unmarked point processes on a group $G$. \begin{defn} The \emph{space of rooted configurations on $G$} is \[ {\mathbb{M}_0}(G) = \{ \omega \in \MM(G) \mid 0 \in \omega \}. \] If $G$ is understood, then we will drop it from the notation for clarity. The \emph{rerooting equivalence relation} on ${\mathbb{M}_0}$ is the orbit equivalence relation of $G \curvearrowright \MM$ restricted to ${\mathbb{M}_0}$. Explicitly: \[ \Rel = \{ (\omega, g^{-1}\omega) \in {\mathbb{M}_0} \times {\mathbb{M}_0} \mid g \in \omega \}. \] This defines a countable Borel equivalence relation structure on ${\mathbb{M}_0}$. It is degenerate whenever $\omega \in {\mathbb{M}_0}$ exhibits symmetries: for instance, the equivalence class of $\ZZ$ viewed as an element of ${\mathbb{M}_0}(\RR)$ is a singleton. We are usually interested in essentially free actions, where such difficulties will not occur. Nevertheless, we do care about lattice shift point processes and so we will introduce a groupoid structure that keeps track of symmetries. The \emph{space of birooted configurations} is \[ \Marrow = \{ (\omega, g) \in {\mathbb{M}_0} \times G \mid g \in \omega \}. \] We visualise an element $(\omega, g) \in \Marrow$ as the rooted configuration $\omega \in {\mathbb{M}_0}$ with an arrow pointing to $g \in \omega$ from the root (ie, the identity element of $G$). 
The above spaces form a \emph{groupoid} $({\mathbb{M}_0}, \Marrow)$ which we will refer to as the \emph{rerooting groupoid}. Its unit space is ${\mathbb{M}_0}$ and its arrow space is $\Marrow$. We can identify ${\mathbb{M}_0}$ with ${\mathbb{M}_0} \times \{0\} \subset \Marrow$. The multiplication structure is as follows: we declare a pair of birooted configurations $(\omega, g), (\omega', h)$ in $\Marrow$ to be \emph{composable} if $\omega' = g^{-1}\omega$, in which case \[ (\omega, g) \cdot (\omega', h) := (\omega, gh). \] Note that if $\Gamma < G$ is a discrete subgroup (so $\Gamma \in {\mathbb{M}_0}(G)$), then the above multiplication on $\{\Gamma\} \times \Gamma \subset \Marrow(G)$ is just the usual one. The \emph{source map} $s : \Marrow \to {\mathbb{M}_0}$ and \emph{target map} $t : \Marrow \to {\mathbb{M}_0}$ are \[ s(\omega, g) = \omega, \text{ and } t(\omega, g) = g^{-1}\omega. \] \end{defn} Note that the rerooting groupoid is \emph{discrete} in the sense that $s^{-1}(\omega)$ is at most countable for all $\omega \in {\mathbb{M}_0}$. \begin{remark} Let $\Maper_0$ denote the set of rooted configurations $\omega$ that are \emph{aperiodic} in the sense that $\stab_G(\omega) = \{e\}$. Then the groupoid generated by $\Maper_0$ in ${\mathbb{M}_0}$ is \emph{principal}\footnote{Recall that a groupoid is \emph{principal} if its isotropy subgroups are all trivial. That is, the groupoid structure is just that of an equivalence relation}. \end{remark} \begin{defn} If $\Xi$ is a space of marks, then the \emph{space of $\Xi$-marked rooted configurations} is \[ \Xi^{\mathbb{M}_0} = \{ \omega \in \Xi^\MM \mid \exists \xi \in \Xi \text{ such that } (0, \xi) \in \omega \}. \] The \emph{$\Xi$-marked rerooting groupoid} is defined as previously, with $\Xi^{\mathbb{M}_0}$ taking the place of ${\mathbb{M}_0}$. 
\end{defn} \subsection{Borel correspondences between the groupoid and factors}\label{borelcorrespondences} Suppose $\theta : \MM \to \MM$ is an equivariant and measurable thinning. Then we can associate to it a subset of the rerooting groupoid, namely \[ A_\theta = \{ \omega \in {\mathbb{M}_0} \mid 0 \in \theta(\omega) \}. \] This association has an inverse: given a Borel subset $A \subseteq {\mathbb{M}_0}$, we can define a thinning $\theta^A : \MM \to \MM$ by \[ \theta^A(\omega) = \{g \in \omega \mid g^{-1}\omega \in A \}. \] Thus we see that \emph{Borel subsets $A \subseteq {\mathbb{M}_0}$ of the rerooting groupoid correspond to Borel thinning maps $\theta : \MM \to \MM$}. In the $\Xi$-marked case, one associates to a subset $A \subseteq \Xi^{\mathbb{M}_0}$ a thinning $\theta^A : \Xi^\MM \to \Xi^\MM$. In a similar way, we can see that if $P : {\mathbb{M}_0} \to [d]$ is a Borel partition of ${\mathbb{M}_0}$ into $d$ classes, then there is an associated factor $[d]$-colouring $\mathscr{C}^P : \MM \to [d]^\MM$ given by \[ \mathscr{C}^P(\omega) = \{ (g, P(g^{-1}\omega)) \in G \times [d] \mid g \in \omega \}, \] and given a factor $[d]$-colouring $\mathscr{C} : \MM \to [d]^\MM$ one associates the partition $P^\mathscr{C} : {\mathbb{M}_0} \to [d]$ given by \[ P^\mathscr{C}(\omega) = c, \text{ where } c \text{ is the unique element of } [d] \text{ such that } (0, c) \in \mathscr{C}(\omega). \] Again, these associations are mutual inverses. More generally, we see that \emph{Borel factor $\Xi$-markings $\mathscr{C} : \MM \to \Xi^\MM$ correspond to Borel maps $P : {\mathbb{M}_0} \to \Xi$}. Now suppose that $\mathscr{G} : \MM \to \graph(G)$ is an equivariant and measurable factor graph. Then we can associate to it a subset of the rerooting groupoid's arrow space, namely \[ \mathscr{A}_\mathscr{G} = \{ (\omega, g) \in \Marrow \mid (0,g) \in \mathscr{G}(\omega)\}.
\] In the other direction, we associate to a subset $\mathscr{A} \subseteq \Marrow$ the factor graph $\mathscr{G}^\mathscr{A} : \MM \to \graph(G)$ given by \[ \mathscr{G}^\mathscr{A}(\omega) = \{ (g, h) \in \omega \times \omega \mid (g^{-1}\omega, g^{-1}h) \in \mathscr{A} \}. \] Thus we see that \emph{Borel subsets $\mathscr{A} \subseteq \Marrow$ of the rerooting groupoid's arrow space correspond to Borel factor (directed) graphs $\mathscr{G} : \MM \to \graph(G)$}. \begin{remark} If $\mu$ is a point process, then the correspondence still works in one direction: namely, we can associate subsets $A \subset {\mathbb{M}_0}$ (or $\mathscr{A} \subseteq \Marrow$) to $\mu$-thinnings $\theta^A: (\MM, \mu) \to \MM$ (or $\mu$-factor graphs $\mathscr{G}^\mathscr{A}: (\MM, \mu) \to \graph(G)$ respectively). We run into trouble in the other direction: suppose $\theta : \MM \to \MM$ is a thinning, but only defined $\mu$ almost everywhere. We wish to restrict it to ${\mathbb{M}_0}$, but a priori this makes no sense -- that is a subset of measure zero. It turns out that there is a way to make sense of this due to equivariance, but it will require some more theory that we explain in the next section. \end{remark} \subsection{The Palm measure} We will now associate to a (finite intensity) point process $\mu$ a probability measure $\mu_0$ defined on the rerooting groupoid ${\mathbb{M}_0}$. When the ambient group is unimodular, this will turn the rerooting groupoid into a \emph{probability measure preserving (pmp) discrete groupoid}. Informally, the Palm measure of a point process $\Pi$ is the process conditioned to contain the root. A priori this makes no sense (the subset ${\mathbb{M}_0}$ has probability zero), but there is an obvious way one could interpret the statement: condition on the process containing a point in an $\e$-ball about the root, and take the limit as $\e$ goes to zero. See Theorem 13.3.IV of \cite{daley2007introduction} and Section 9.3 of \cite{MR3791470} for further details.
We will instead take the following concept of \emph{relative rates} as our basic definition: \begin{defn} Let $\Pi$ be a point process of finite intensity with law $\mu$. Its (normalised) \emph{Palm measure} is the probability measure $\mu_0$ defined on Borel subsets of ${\mathbb{M}_0}$ by \[ \mu_0(A) := \frac{\intensity(\theta^A(\Pi))}{\intensity(\Pi)}, \] where $\theta^A$ is the thinning associated to $A \subseteq {\mathbb{M}_0}$. More explicitly, \[ \mu_0(A) := \frac{1}{\intensity(\mu) \lambda(U)} \EE_\mu \left[ \#\{g \in U \mid g^{-1}\omega \in A \} \right], \] where $U \subseteq G$ is any measurable set with $0 < \lambda(U) < \infty$. To make formulas simpler, we will often choose $U$ to be of unit volume. Alternatively, note that by the definition of intensity we may write \[ \mu_0(A) = \frac{\EE \left[ \#\{g \in U \mid g^{-1}\Pi \in A \} \right]}{\EE \lvert U \cap \Pi \rvert}. \] We also define the Palm measure of a $\Xi$-marked point process similarly, with $\Xi^{\mathbb{M}_0}$ taking the place of ${\mathbb{M}_0}$. A \emph{Palm version} of $\Pi$ is any random variable $\Pi_0$ with law $\mu_0$. That is, we require that for all Borel $B \subseteq {\mathbb{M}_0}$ we have \[ \PP[\Pi_0 \in B] = \mu_0(B). \] \end{defn} We now describe some \emph{Palm calculus}. If $\Pi$ is a point process with Palm version $\Pi_0$ and $\Phi(\Pi)$ is some factor map, then we wish to express the Palm version $\Phi(\Pi)_0$ of $\Phi(\Pi)$ in terms of $\Pi_0$ and $\Phi$. The Palm calculus tells us how this is done. It will be sufficient for our purposes to compute the Palm measures of factors which are forgettings, thinnings, coloured thickenings, and colourings. In each case the answer is more or less obvious, so we will give an informal description of the answer and then verify that it satisfies the required property. 
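The defining window formula can be checked exactly in a toy case. Assume $G = \RR$ with Lebesgue measure and take $\Pi = \ZZ + \texttt{Unif}[0,1]$, the stationary unit-intensity lattice; every rooted view $g^{-1}\Pi$ with $g \in \Pi$ is the lattice $\ZZ$ itself, so $\mu_0(A)$ should be $1$ for $A = \{\omega : 1 \in \omega\}$ and $0$ for $A' = \{\omega : 1/2 \in \omega\}$. The window sizes below are illustrative assumptions.

```python
# Exact toy check of the window formula for the Palm measure, assuming
# G = R and the unit-intensity stationary lattice Pi = Z + Unif[0,1].
import random

def palm_estimate(A, trials=100, n=30):
    """Estimate mu_0(A) = E[#{g in U : g^{-1}Pi in A}] / (intensity * lambda(U))
    with U = [0, n); the intensity of Z + Unif[0,1] is 1."""
    total = 0
    for _ in range(trials):
        u = random.random()
        # Enough lattice points to evaluate A on every rooted view from U.
        pts = [u + k for k in range(-5, n + 5)]
        window = [g for g in pts if 0 <= g < n]
        # Rounding removes floating-point noise, so rooted views are exact.
        total += sum(1 for g in window if A(set(round(x - g, 9) for x in pts)))
    return total / (trials * n)

A  = lambda w: 1.0 in w   # the root has a neighbour at +1: always true on Z
A2 = lambda w: 0.5 in w   # never true on Z
print(palm_estimate(A), palm_estimate(A2))  # -> 1.0 0.0
```

Since every point of the window is counted for $A$ and none for $A'$, the estimator returns the exact values $1$ and $0$, matching the fact (recorded below) that the Palm measure of this lattice process is the point mass at $\ZZ$.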
\begin{example}[Forgetting labels] If $\Pi$ is a labelled point process, then the Palm measure of $\Pi$ \emph{after} we forget the labels is the same thing as forgetting the labels from the Palm measure $\Pi_0$. We prove this after the following clarification: When talking about the Palm measure for a $\Xi$-marked point process, it is important in the above to choose the correct thinning. Recall from Remark \ref{thinningconfusio} that for a subset $A \subseteq \Xi^{\mathbb{M}_0}$ one can discuss \emph{two} possible kinds of thinnings, namely \[ \theta^A : \Xi^\MM \to \Xi^\MM \text{ or } \pi \circ \theta^A : \Xi^\MM \to \MM, \] where $\pi : \Xi^\MM \to \MM$ is the map that forgets labels. It is the \emph{former} kind of thinning one should take. Note that if $\Pi$ is a $\Xi$-marked point process, then its intensity remains the same if you forget the marks, that is, $\intensity \Pi = \intensity \pi(\Pi)$. More generally, the operation of taking the Palm version \emph{commutes} with forgetting labels. That is, $\pi(\Pi_0) = (\pi(\Pi))_0$. To see this, let $B \subseteq {\mathbb{M}_0}$, and observe \begin{align*} \PP[ \pi(\Pi_0) \in B] &= \PP[\Pi_0 \in \pi^{-1}(B)] \\ &= \frac{\intensity \theta^{\pi^{-1}(B)}(\Pi)}{\intensity \Pi} \\ &= \frac{\intensity \pi(\theta^{\pi^{-1}(B)}(\Pi))}{\intensity \pi(\Pi)} \\ &= \frac{ \intensity \theta^B(\pi(\Pi))}{\intensity \pi(\Pi)} \\ &= \PP[ \pi(\Pi)_0 \in B], \end{align*} where we simply followed our nose. \end{example} \begin{example}[Lattice actions] If $\Gamma < G$ is a lattice, then the Palm measure of the associated lattice shift is just $\delta_\Gamma$ -- that is, the atomic measure on $\Gamma \in {\mathbb{M}_0}(G)$. More generally, if $\Gamma \curvearrowright (X, \mu)$ is a pmp action, then the Palm measure of the associated induced $X$-marked point process is its \emph{symbolic dynamics}. 
That is, the map $\Sigma : (X, \mu) \to X^\MM$ given by \[ \Sigma(x) = \{ (\gamma, \gamma^{-1} \cdot x) \in G \times X \mid \gamma \in \Gamma \} \] pushes forward $\mu$ to the Palm measure. In words, you sample a $\mu$-random point $x \in X$ and track its orbit under $\Gamma$ (the inverse is an artefact of our left bias). \end{example} \begin{remark} Suppose $\Pi$ is a finite intensity point process such that its Palm measure is atomic, say $\Pi_0 = \Omega$ almost surely where $\Omega \in {\mathbb{M}_0}$. Then $\Omega$ is a lattice in $G$. Note that $\Omega$ is automatically a discrete subset of $G$, and a simple mass transport argument shows that it is a subgroup. The covolume of this subgroup is the reciprocal of the intensity of $\Pi$. \end{remark} \begin{example}[Mecke-Slivnyak Theorem]\label{palmofpoisson} If $\Pi$ is a Poisson point process, then its Palm version has the same law as $\Pi \cup \{0\}$, where $0 \in G$ is the identity. In fact, this is a \emph{characterisation} of the Poisson point process: if the Palm measure of $\mu$ is obtained by simply adding the root\footnote{More formally, consider the map $F : \MM \to {\mathbb{M}_0}$ given by $F(\omega) = \omega \cup \{0\}$; by ``adding the root'' we mean the Palm measure $\mu_0$ is the pushforward $F_*\mu$.}, then $\mu$ is the Poisson point process (of some intensity). \end{example} The proof of the above fact can be found in Section 9.2 of \cite{MR3791470}. As a consequence, the Palm measure of the IID Poisson is the IID of the Palm measure of the Poisson itself. \begin{example}\label{palmofcolouring} The Palm version $\mathscr{C}^A(\Pi)_0$ of a $2$-colouring $\mathscr{C}^A : \MM \to \{0,1\}^\MM$ determined by a subset $A \subseteq {\mathbb{M}_0}$ (as in Example \ref{colouring}) is simply $\mathscr{C}^A(\Pi_0)$. 
\end{example} \begin{example}[Thinnings] The Palm version $\theta(\Pi)_0$ of a thinning $\theta = \theta^A$ of $\Pi$ (determined by a subset $A \subseteq {\mathbb{M}_0}$) is described in terms of the Palm version $\Pi_0$ of $\Pi$ as a conditional probability as follows: \[ \PP[\theta(\Pi)_0 \in B] = \PP[\theta(\Pi_0) \in B \mid \Pi_0 \in A] \] for any $B \subseteq {\mathbb{M}_0}$. That is, the Palm measure $\theta(\Pi)_0$ can be obtained by sampling from $\Pi_0$ conditioned on the event that the root is retained in the thinning, and then applying the thinning. To see this, first one should work from the definitions to show that $\theta^B(\theta^A(\Pi)) = \theta^{A \cap (\theta^A)^{-1}(B)}(\Pi)$. Therefore \begin{align*} \PP[(\theta(\Pi))_0 \in B] &= \frac{\intensity \theta^B(\theta^A(\Pi))}{\intensity \theta^A(\Pi)} && \\ &= \left.{\frac{\intensity \theta^{A \cap (\theta^A)^{-1}(B)}(\Pi)}{\intensity \Pi}} \middle/ {\frac{\intensity \theta^A(\Pi)}{\intensity \Pi}}\right. && \text{By the observation} \\ &= \frac{\PP[\Pi_0 \in A \cap (\theta^A)^{-1}(B)]}{\PP[\Pi_0 \in A]} && \\ &= \frac{\PP[\{\theta(\Pi_0) \in B\} \cap \{\Pi_0 \in A\}]}{\PP[\Pi_0 \in A]}, \end{align*} which is exactly the definition of the desired conditional probability. \end{example} \begin{example}\label{palmofthickening} Let $\Theta = \Theta^F$ be a constant thickening determined by $F \subset G$, as described in Example \ref{constantthickening}. If $\Pi$ is an $F$-separated process, then the Palm version $\Theta(\Pi)_0$ of the thickening $\Theta(\Pi)$ is as follows: sample from $\Pi_0$, and independently choose to root $\Theta(\Pi_0)$ at a uniformly chosen element $X$ of $F$. That is, $\Theta(\Pi)_0 \overset{d}{=} X^{-1} \Theta(\Pi_0)$. 
To see this, we compute\footnote{When we define the Palm measure of a set $B \subseteq {\mathbb{M}_0}$, we usually write ``$g \in U$'' rather than ``$g \in U \cap \Pi$'', as the condition $g^{-1} \Pi \in B$ already implies $g \in \Pi$. For this computation it is better to really spell it out though.} as follows: \begin{align*} &\PP[\Theta(\Pi)_0 \in B] = \frac{1}{\intensity \Theta(\Pi)} \EE[ \#\{g \in U \cap \Pi F \mid g^{-1} \Theta(\Pi) \in B \} ] && \text{By definition} \\ &= \frac{1}{\lvert F \rvert} \frac{1}{\intensity \mu} \sum_{f \in F} \EE[ \#\{g \in U \cap \Pi f \mid g^{-1} \Theta(\Pi) \in B \} ] && \text{By Example \ref{constantthickening}} \\ &= \frac{1}{\lvert F \rvert} \frac{1}{\intensity \mu} \sum_{f \in F} \EE[ \#\{g \in Uf^{-1} \cap \Pi \mid f^{-1}\Theta(g^{-1} \Pi) \in B \} ] && \text{By equivariance} \\ &= \frac{1}{\lvert F \rvert} \frac{1}{\intensity \mu} \sum_{f \in F} \EE[ \#\{g \in U \cap \Pi \mid f^{-1}\Theta(g^{-1} \Pi) \in B \} ] && \text{By unimodularity} \\ &= \frac{1}{\lvert F \rvert} \sum_{f \in F} \PP[ f^{-1}\Theta(\Pi_0) \in B] && \text{By definition} \\ &= \PP[X^{-1} \Theta(\Pi_0) \in B]. \end{align*} \end{example} The Palm measure has an associated integral equation, which we will refer to as ``the CLMM'', following the convention of \cite{baszczyszyn:cel-01654766}. It is also referred to as ``the refined Campbell theorem'' in \cite{last2011poisson} and \cite{vere2003introduction}, for example. \begin{thm}[Campbell-Little-Mecke-Matthes]\label{CLMM} Let $\mu$ be a finite intensity point process on $G$ with Palm measure $\mu_0$. Write $\EE$ and $\EE_0$ for the associated integral operators. 
If $f : G \times {\mathbb{M}_0} \to \RR_{\geq 0}$ is a measurable function (\emph{not} necessarily invariant in any way), then \[ \EE \left[\sum_{x \in \omega} f(x, x^{-1}\omega) \right] = \intensity(\mu) \EE_0 \left[ \int_G f(x, \omega) d\lambda(x)\right]. \] \end{thm} The proof of the above theorem is a standard monotone convergence argument, and as such we will only give the first step of the argument and leave the details to the reader. Observe that by definition of the Palm measure, for any $U \subseteq G$ of finite volume and any measurable $A \subseteq {\mathbb{M}_0}$, we have \[ \EE\left[\#\{ g\in U \mid g^{-1}\Pi \in A\} \right] = \intensity(\mu) \mu_0(A) \lambda(U). \] Rewriting the left-hand side as a sum and the right-hand side as an integral, we see \[ \EE\left[\sum_{g \in \omega} \mathbbm{1}[(g, g^{-1}\omega) \in U \times A] \right] = \intensity(\mu) \int_{{\mathbb{M}_0}} \int_G \mathbbm{1}[(g, \omega) \in U \times A]d\lambda(g)d\mu_0(\omega), \] and observe that this is exactly the claimed theorem (in slightly different notation) in the case of $f(x,\omega) = \mathbbm{1}[(x, \omega) \in U \times A]$. The theorem follows for arbitrary $f$ by the monotone convergence theorem. \begin{remark}\label{VIF} If $\nu$ is a point process with $\nu_0 = \mu_0$, then $\nu = \mu$; that is, the Palm measure \emph{determines} the point process. To see this, we use the existence of a map $\mathscr{V} : [0,1] \times {\mathbb{M}_0} \to \MM$ with the property that if $\mu$ is \emph{any} point process with Palm measure $\mu_0$, then $\mathscr{V}_*(\text{Leb} \otimes \mu_0) = \mu$. This is a consequence of the \emph{Voronoi inversion formula}, see Section 9.4 of \cite{MR3791470}. \end{remark} \subsection{Unimodularity and the Mass Transport Principle}\label{unimodularity} The Mass Transport Principle is a powerful tool in percolation theory, see \cite{MR3616205} for an introduction and historical context. 
For the convenience of the reader, we include a proof of it for our context and in our notation, but no originality is claimed. For further generalisations of the mass transport principle see \cite{kallenberg2011invariant}, \cite{gentner2011palm}, and Chapter 7 of \cite{kallenberg2017random}, and for further exposition in the context of point processes see \cite{baszczyszyn:cel-01654766}. The source and range maps $s, t : \Marrow \to {\mathbb{M}_0}$ induce a pair of measures on $\Marrow$ defined by \[ \muarrow^s(\mathscr{G}) = \int_{\mathbb{M}_0} \lvert s^{-1}(\omega) \cap \mathscr{G} \rvert d\mu_0(\omega), \text{ and } \muarrow^t(\mathscr{G}) = \int_{\mathbb{M}_0} \lvert t^{-1}(\omega) \cap \mathscr{G} \rvert d\mu_0(\omega). \] In our factor graph interpretation these correspond to the expected outdegree and indegree of $\mathscr{G}$ respectively, where we view $\mathscr{G}$ as a \emph{directed} rooted graph. To see this, recall that for a rooted configuration $\omega \in {\mathbb{M}_0}$, \[ s^{-1}(\omega) = \{(\omega, g) \in {\mathbb{M}_0} \times G \mid g \in \omega\} \text{ and } t^{-1}(\omega) = \{(g^{-1}\omega, g^{-1}) \in {\mathbb{M}_0} \times G \mid g \in \omega \}, \] and that there is an edge from $0$ to $g$ in $\mathscr{G}(\omega)$ exactly when $(\omega, g) \in \mathscr{G}$, and an edge from $g$ to $0$ exactly when $(g^{-1}\omega, g^{-1}) \in \mathscr{G}$. Thus \[ \overrightarrow{\deg}_0({\mathscr{G}(\omega)}) = \lvert s^{-1}(\omega) \cap \mathscr{G} \rvert \text{ and } \overleftarrow{\deg}_0({\mathscr{G}(\omega)}) = \lvert t^{-1}(\omega) \cap \mathscr{G} \rvert. \] \begin{remark} We have had to adapt notation to suit our purposes. Usually a groupoid would be denoted by a letter like $\mathcal{G}$, and that is the set of arrows. Then its units would be denoted $\mathcal{G}_0$. 
We have tried to match this up with the necessary notation from point process theory as closely as possible. We choose to denote outdegree by an expression like $\overrightarrow{\deg}_0({\mathscr{G}(\omega)})$ instead of $\deg^+_{\mathscr{G}(\omega)}(0)$ as the arrows are more evocative, and the subscript notation becomes very small (as in, for instance, $\deg^+_{\mathscr{G}(\Pi_0)}(0)$). \end{remark} \begin{prop}\label{pmpgroupoid} If $G$ is \emph{unimodular}, then $\muarrow^s = \muarrow^t$. That is, $(\Marrow, \muarrow)$ forms a discrete pmp groupoid. Equivalently, if $\Pi_0$ is the Palm version of any point process $\Pi$ on $G$, then \[ \EE\left[ \overrightarrow{\deg}_0({\mathscr{G}(\Pi_0)}) \right] = \EE\left[ \overleftarrow{\deg}_0({\mathscr{G}(\Pi_0)}) \right]. \] We will denote by $\muarrow$ this common measure $\muarrow^s = \muarrow^t$. \end{prop} \begin{proof}[Proof of Proposition \ref{pmpgroupoid}] Fix $U \subseteq G$ of unit volume. We compute: \begin{align*} \muarrow^s(\mathscr{G}) &= \EE_{\mu_0} \left[ \sum_{g \in \omega} \mathbbm{1}[{(\omega, g) \in \mathscr{G}}] \right] && \text{By definition} \\ &= \EE_{\mu_0} \left[\int_G \mathbbm{1}[{x \in U}] \sum_{g \in \omega} \mathbbm{1}[{(\omega, g) \in \mathscr{G}}] d\lambda(x) \right] && \\ &= \frac{1}{\intensity \mu} \EE_\mu \left[ \sum_{x \in \omega} \mathbbm{1}[{x \in U}] \sum_{g \in x^{-1}\omega} \mathbbm{1}[{(x^{-1}\omega, g) \in \mathscr{G}}] \right] && \text{By the CLMM} \\ &= \frac{1}{\intensity \mu} \EE_\mu \left[ \sum_{h \in \omega} \sum_{g^{-1} \in h^{-1}\omega} \mathbbm{1}[{hg^{-1} \in U}] \mathbbm{1}[{(gh^{-1}\omega, g) \in \mathscr{G}}] \right] && \\ &= \EE_{\mu_0} \left[ \int_G \sum_{g^{-1} \in \omega} \mathbbm{1}[{hg^{-1} \in U}] \mathbbm{1}[{(g\omega, g) \in \mathscr{G}}] d\lambda(h) \right] && \text{By the CLMM} \\ &= \EE_{\mu_0} \left[ \int_G \sum_{g \in \omega} \mathbbm{1}[{hg \in U}] \mathbbm{1}[{(g^{-1}\omega, g^{-1}) \in \mathscr{G}}] d\lambda(h) \right] && \text{Reindexing } g \mapsto g^{-1} \\ 
&= \EE_{\mu_0} \left[\sum_{g \in \omega} \mathbbm{1}[{(g^{-1}\omega, g^{-1}) \in \mathscr{G}}] \int_G \mathbbm{1}[{h \in Ug^{-1}}] d\lambda(h) \right] && \\ &= \EE_{\mu_0} \left[\sum_{g \in \omega} \mathbbm{1}[{(g^{-1}\omega, g^{-1}) \in \mathscr{G}}] \right] && \text{By unimodularity} \\ &= \muarrow^t(\mathscr{G}), \end{align*} as desired, where the first instance of the CLMM is applied with \[ f_1(x,\omega) = \mathbbm{1}[x \in U] \sum_{g \in \omega} \mathbbm{1}[(\omega, g) \in \mathscr{G}] \] and the second with \[ f_2(x, \omega) = \sum_{g^{-1} \in \omega} \mathbbm{1}[xg^{-1} \in U]\mathbbm{1}[(g\omega, g) \in \mathscr{G}]. \] \end{proof} \begin{defn} The \emph{Palm groupoid} of a point process $\Pi$ with law $\mu$ is $(\Marrow, \muarrow)$. If $\Pi$ is free, then this groupoid is principal, and thus we may refer to $\Pi$'s \emph{Palm equivalence relation} $({\mathbb{M}_0}, \Rel, \mu_0)$. \end{defn} \begin{defn}\label{edgemeasure} Let $\Pi$ be a point process and $\mathscr{G}$ an \emph{undirected} factor graph of $\Pi$. Its \emph{edge density} is $\EE[ \deg_0(\mathscr{G}(\Pi_0))]$, where $\Pi_0$ is the Palm version of $\Pi$. \end{defn} By the above proposition, if $\mathscr{G}'$ is any \emph{orientation} of $\mathscr{G}$, then the edge density can be expressed as \[ \EE[ \deg_0(\mathscr{G}(\Pi_0))] = 2 \EE\left[ \overrightarrow{\deg}_0({\mathscr{G'}(\Pi_0)}) \right]. \] Strictly speaking, then, we should work with \emph{directed} factor graphs, but for this reason we will often think of factor graphs as undirected. \begin{thm}[The Mass Transport Principle]\label{MTP} Let $G$ be a unimodular group, and $\Pi$ a point process on $G$ with Palm version $\Pi_0$. Suppose $T : G \times G \times \MM \to \RR_{\geq 0}$ is a measurable function which is \emph{diagonally invariant} in the sense that $T(gx, gy; g\omega) = T(x, y; \omega)$ for all $g \in G$. 
Then \[ \EE \left[ \sum_{y \in \Pi_0} T(0, y; \Pi_0) \right] = \EE \left[ \sum_{x \in \Pi_0} T(x, 0; \Pi_0) \right]. \] \end{thm} We view $T(x, y; \Pi_0)$ as representing an amount of \emph{mass} sent from $x$ to $y$ when the configuration is $\Pi_0$. Thus the integrand on the left-hand side represents the total mass sent out from the root, and similarly the integrand on the right-hand side represents the total mass received by the root. \begin{proof}[Proof of Theorem \ref{MTP}] The mass transport principle follows from Proposition \ref{pmpgroupoid}. First, observe that, as in the proof of the CLMM, Proposition \ref{pmpgroupoid} and a monotone convergence argument imply an integral equation. To wit, \[ \EE\left[ \sum_{g \in \Pi_0} f(\Pi_0, g) \right] = \EE\left[ \sum_{g \in \Pi_0} f(g^{-1}\Pi_0, g^{-1})\right], \] where $f : \Marrow \to \RR_{\geq 0}$ is a measurable function (defined $\muarrow$ almost everywhere). We now apply the integral equation with $f(\omega, g) = T(0, g; \omega)$. Note that \[ f(g^{-1}\omega, g^{-1}) = T(0, g^{-1}; g^{-1}\omega) = T(g, 0; \omega), \] where the second equality is by diagonal invariance. Hence the integral equation for $f$ yields \[ \EE\left[ \sum_{g \in \Pi_0} T(0, g; \Pi_0) \right] = \EE\left[ \sum_{g \in \Pi_0} T(g, 0; \Pi_0)\right], \] which is exactly the mass transport principle expressed with a differently named integrating variable. \end{proof} \begin{remark}\label{palmformula} One can use the CLMM formula (see Theorem \ref{CLMM}) to express $\muarrow(\mathscr{G})$ without reference to the Palm measure. 
Let $U \subseteq G$ be of unit volume, and apply the formula to $f(x,\omega) = \mathbbm{1}[{x \in U}] \overrightarrow{\deg}_0({\mathscr{G}(\omega)})$, resulting in \[ \muarrow(\mathscr{G}) = \frac{1}{\intensity \Pi} \EE \left[ \sum_{x \in \Pi} \mathbbm{1}[{x \in U}] \overrightarrow{\deg}_x({\mathscr{G}(\Pi)}) \right] \] (note that by equivariance $\overrightarrow{\deg}_0({\mathscr{G}(x^{-1}\omega)}) = \overrightarrow{\deg}_x({\mathscr{G}(\omega)})$). \end{remark} As an application of the CLMM, we will find an expression for the Palm version of general thickenings: \begin{example}[Palm measures of general thickenings]\label{palmofgeneralthickening} Suppose one has for each configuration $\omega \in \MM$ and each $g \in \omega$ a measurably defined \emph{finite} subset $F_\omega(g)$ satisfying the following properties: \begin{description} \item[Monotonicity:] $g \in F_\omega(g)$, \item[Separation:] if $g, h \in \omega$ are \emph{distinct} then $F_\omega(g) \cap F_\omega(h) = \emptyset$, and \item[Equivariance:] for all $\gamma \in G$, we have $F_{\gamma \omega}(\gamma g) = \gamma F_\omega(g)$. \end{description} Then one can define a thickening $\Theta : \MM \to \MM$ by \[ \Theta(\omega) = \bigsqcup_{g \in \omega} F_\omega(g). \] That is, each point $g \in \omega$ looks at the current configuration, and adds points $F_\omega(g)$ locally to it according to some equivariant rule. Every thickening has this form (see Definition \ref{voronoidefn} and the ensuing discussion). We refer to points of $\omega$ as \emph{progenitors} and points of $F_\omega(g)$ as $g$'s \emph{spawn} in $\omega$. It stands to reason that if $\Pi$ is a point process satisfying the above rules almost surely, then $\intensity \Theta(\Pi) = \EE\lvert F_{\Pi_0}(0) \rvert \cdot \intensity \Pi$. Just as in Example \ref{constantthickening} though, this will require unimodularity to prove, this time in the form of the Mass Transport Principle. 
Let us identify the thickening with its input/output version. Note that if we compute the Palm version of the latter, then we get it for the former by simply forgetting the labels. Our reason for doing this is simple: we need to be able to identify which points were progenitors and which points are spawn. This is only possible if we use the input/output version; the downside is that it is more notationally cumbersome. We first verify that $\intensity \Theta(\Pi) = \EE\lvert F_{\Pi_0}(0) \rvert \cdot \intensity \Pi$. In order to apply mass transport, we need to know the following fact: \[ \PP[\Theta(\Pi)_0 \in A \mid 0 \text{ is a progenitor}] = \PP[\Theta(\Pi_0) \in A]. \] This follows from the definitions by similar manipulations to those we have already seen. With this fact in hand, define\footnote{An advantage of using the input/output version of the thickening is that we can exactly identify who spawned whom in a well-defined way.} a transport as follows: \[ T(x, y; \Theta(\Pi)) = \mathbbm{1}[x \text{ spawned } y \text{ in } \Theta(\Pi)]. \] Then the total mass received by the root is always one (as everyone is spawned by someone), and hence the expected mass received is one. The expected mass sent out is \[ \EE_0\left[ \mathbbm{1}[0 \text{ is a progenitor}] \cdot \#\{\text{spawn of } 0\}\right] = \PP[0 \text{ is a progenitor}] \cdot \EE[\lvert F_{\Pi_0}(0) \rvert], \] where $\EE_0$ denotes expectation with respect to the Palm measure of $\Theta(\Pi)$, and the equality follows from the fact above and the definition of conditional probability. We have by the definition of progenitor \[ \PP[0 \text{ is a progenitor}] = \frac{\intensity \Pi}{\intensity \Theta(\Pi)}, \] so $\intensity \Theta(\Pi) = \EE\lvert F_{\Pi_0}(0) \rvert \cdot \intensity \Pi$ by the mass transport principle. We now express the Palm version of $\Theta(\Pi)$ in terms of $\Pi_0$ and $\Theta$. 
Note that for this to be defined we must assume $\Pi$ has finite intensity and that $\EE[\lvert F_{\Pi_0}(0) \rvert] < \infty$. Let \begin{itemize} \item $N$ be a random variable with \[ \PP[N = n] = \frac{n\PP[\lvert F_{\Pi_0}(0) \rvert = n]}{\EE\lvert F_{\Pi_0}(0) \rvert} = \frac{n\PP[0 \text{ spawns } n \text{ points of } \Theta(\Pi_0)]}{\EE\lvert F_{\Pi_0}(0) \rvert}, \] \item $\Upsilon^n$ denote $\Pi_0$ conditioned on the event $\{0 \text{ spawns } n \text{ points of } \Theta(\Pi_0)\}$, and \item $X$ be a uniformly chosen element of $F_{\Upsilon^n}(0)$ (conditional on $\Upsilon^n$). \end{itemize} We claim that $X^{-1}\Theta(\Upsilon^N)$ is a Palm version of $\Theta(\Pi)$. In words, we are sampling from the Palm measure $\Pi_0$ biased\footnote{To see that some kind of size biasing is required, consider the point process $\ZZ + \texttt{Unif}[0,1] \subset \RR$, and define a thickening which leaves points marked $0$ as they are and adds a thousand points tightly packed around points marked $1$. A ``typical point'' of the resulting process should look more like a configuration with a thousand points near the origin, and the size biasing accommodates for this.} towards the configurations that spawn more points, and then applying the thickening and rooting at one of the spawns uniformly at random. Let $A \subseteq \{\text{Red, Blue, Purple}\}^{\mathbb{M}_0}$. We find an expression for $\PP[\Theta(\Pi)_0 \in A]$ by using mass transport: define \[ T(x, y; \Theta(\Pi)) = \mathbbm{1}[\{x \text{ spawns } y \text{ in } \Theta(\Pi) \} \cap \{y \in \theta^A(\Theta(\Pi)) \}]. \] The expected mass in with respect to $T$ is exactly $\PP[\Theta(\Pi)_0 \in A]$. 
The expected mass out is \begin{align*} &\EE\left[\sum_{y \in \Theta(\Pi)_0} T(0, y; \Theta(\Pi)_0) \right] \\ &= \EE\left[\mathbbm{1}[0 \text{ is a progenitor}] \cdot \#\{0 \text{ spawns } y \text{ with } y \in \theta^A(\Theta(\Pi)_0)\} \right] \\ &= \PP[0 \text{ is a progenitor}] \EE\left[\#\{0 \text{ spawns } y \text{ with } y \in \theta^A(\Theta(\Pi)_0)\} \mid 0 \text{ is a progenitor} \right]\\ &= \frac{1}{\EE\lvert F_{\Pi_0}(0) \rvert} \EE \left[\#\{y \in F_{\Pi_0}(0) \mid y^{-1}\Theta(\Pi_0) \in A\} \right] \\ &= \frac{1}{\EE\lvert F_{\Pi_0}(0) \rvert} \sum_n \EE \left[\#\{y \in F_{\Pi_0}(0) \mid y^{-1}\Theta(\Pi_0) \in A\} \mid \lvert F_{\Pi_0}(0) \rvert = n \right] \PP[\lvert F_{\Pi_0}(0) \rvert = n]. \end{align*} We can now match up this expression with our earlier description of $X^{-1}\Theta(\Upsilon^N)$. Recall that if $Y \subseteq [n]$ is a random subset, then $\EE \lvert Y \rvert = n \PP[X \in Y]$, where $X$ is a uniformly chosen element of $[n]$, independent of $Y$. \begin{align*} &\EE\left[\sum_{y \in \Theta(\Pi)_0} T(0, y; \Theta(\Pi)_0) \right] \\ &= \frac{1}{\EE\lvert F_{\Pi_0}(0) \rvert} \sum_n \EE \left[\#\{y \in F_{\Pi_0}(0) \mid y^{-1}\Theta(\Pi_0) \in A\} \mid \lvert F_{\Pi_0}(0) \rvert = n \right] \PP[\lvert F_{\Pi_0}(0) \rvert = n] \\ &= \frac{1}{\EE\lvert F_{\Pi_0}(0) \rvert} \sum_n \EE \left[\#\{y \in F_{\Upsilon^n}(0) \mid y^{-1}\Theta(\Upsilon^n) \in A\} \right] \PP[\lvert F_{\Pi_0}(0) \rvert = n] \\ &= \sum_n n \PP[X^{-1}\Theta(\Upsilon^n) \in A] \frac{\PP[\lvert F_{\Pi_0}(0) \rvert = n]}{\EE\lvert F_{\Pi_0}(0) \rvert} \\ &= \sum_n \PP[X^{-1}\Theta(\Upsilon^n) \in A] \PP[N = n] \\ &= \PP[X^{-1} \Theta(\Upsilon^N) \in A], \end{align*} as desired. \end{example} In fact, every thickening can be expressed \`{a} la Example \ref{palmofgeneralthickening}, as we shall now see. 
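Mass transport arguments like the ones above are easy to sanity-check numerically. The following sketch assumes $G = \RR$ and a unit-rate Poisson process, and takes the transport $T(x, y; \omega) = \mathbbm{1}[y \text{ is the point of } \omega \text{ nearest to } x]$, which is diagonally invariant. The root always sends out mass exactly $1$, so by the Mass Transport Principle the expected mass received by the root, that is, the mean number of points whose nearest neighbour is the root, must also equal $1$. We sample the Palm version as $\Pi \cup \{0\}$ via the Mecke--Slivnyak theorem (Example \ref{palmofpoisson}); all numerical parameters are illustrative assumptions.

```python
# Numerical sanity check of the Mass Transport Principle for the transport
# "send unit mass to your nearest neighbour", assuming G = R and a unit-rate
# Poisson process, with the Palm version sampled as Pi u {0} (Mecke-Slivnyak).
import random

def poisson_points(rate, lo, hi, rng):
    """Poisson process on [lo, hi] via exponential gaps."""
    pts, x = [], lo + rng.expovariate(rate)
    while x < hi:
        pts.append(x)
        x += rng.expovariate(rate)
    return pts

rng = random.Random(0)
trials, total_in = 10000, 0
for _ in range(trials):
    w = poisson_points(1.0, -8.0, 8.0, rng) + [0.0]  # Palm version Pi u {0}
    # Mass received by the root: points (other than 0) whose nearest
    # neighbour in the configuration is the root.
    for x in w:
        if x != 0.0:
            nearest = min((p for p in w if p != x), key=lambda p: abs(p - x))
            total_in += (nearest == 0.0)
mass_in = total_in / trials
print(abs(mass_in - 1.0) < 0.05)  # expected mass in = expected mass out = 1
```

Up to Monte Carlo error and negligible boundary effects from the finite window, the expected mass received matches the mass sent out, as the theorem predicts.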
\begin{defn}\label{voronoidefn} Let $\omega \in \MM$ be a configuration, and $g \in \omega$ one of its points. The associated \emph{Voronoi cell} is \[ V_\omega(g) = \{ x \in G \mid d(x, g) \leq d(x, h) \text{ for all } h \in \omega \}. \] The associated \emph{Voronoi tessellation} is the ensemble of closed sets $\{V_\omega(g)\}_{g \in \omega}$. \end{defn} Left-invariance of the metric $d$ implies that the Voronoi cells are equivariant in the sense that for all $\gamma \in G$, we have $V_{\gamma \omega}(\gamma g) = \gamma V_\omega(g)$. Note that discreteness of the configuration implies that the Voronoi tessellation forms a locally finite \emph{cover} of the ambient space by closed sets. We would like to think of these sets as forming a \emph{partition} of the ambient space, but this isn't necessarily true even in the measured sense: the boundaries of the Voronoi cells can have positive volume. For example, let $\Gamma$ be a discrete group and consider $\Gamma \times \{0\} \subset \Gamma \times \RR$. Lie groups and Riemannian symmetric spaces essentially avoid this deficiency, as hyperplanes\footnote{Sets of the form $\{x \in X \mid d(x, g) = d(x, h) \}$ for a fixed pair of distinct points $g, h \in X$.} have zero volume. So, depending on the examples one is interested in, one can assume that the Voronoi cells are essentially disjoint (that is, that their pairwise intersections are Haar null). If this property is necessary then one can make a small modification to ensure it: we introduce a \emph{tie-breaking} function that allows points belonging to multiple Voronoi cells to decide which one they shall belong to. Take any\footnote{Recall that standard Borel spaces are isomorphic if they have the same cardinality.} Borel isomorphism $T : G \to \RR$. Let us define \[ \begin{split} V_\omega^T(g) = \{ x \in G \mid \text{for all } h \in \omega \setminus \{g\},\; d(x, g) < d(x, h)\\ \text{ or } \big( d(x, g) = d(x,h) \text{ and } T(x^{-1}g) < T(x^{-1}h) \big) \}. 
\end{split} \] Note that these tie-broken Voronoi cells form a \emph{measurable} partition of $G$. That is, we have traded the Voronoi cells being closed for them being genuinely disjoint. The equivariance property $V^T_{\gamma \omega}(\gamma g) = \gamma V^T_\omega(g)$ still holds as well. If $\Theta : \MM \to \MM$ is a thickening, then we simply define $F_\omega(g) = V^T_\omega(g) \cap \Theta(\omega)$. \subsection{Ergodicity and the factor correspondences in the measured category}\label{ergodicity} In this section we show how to extend the correspondences of Section \ref{borelcorrespondences} to the measured category, which connects the distribution $\mu$ of a point process with its Palm measure $\mu_0$, and objects defined $\mu$ almost everywhere with those defined $\mu_0$ almost everywhere. \begin{defn} A subset $A \subseteq \MM$ of unrooted configurations is \emph{shift-invariant} if for all $\omega \in A$ and $g \in G$, we have $g\omega \in A$. A subset $A_0 \subseteq {\mathbb{M}_0}$ of rooted configurations is \emph{rootshift-invariant} if for all $\omega \in A_0$ and $g \in \omega$, we have $g^{-1}\omega \in A_0$. The groupoid $(\Marrow, \muarrow)$ is \emph{ergodic} if every rootshift-invariant subset $A \subseteq {\mathbb{M}_0}$ has $\mu_0(A) = 0$ or $1$. \end{defn} Note that if $A \subseteq \MM$ is shift-invariant, then $A_0 := A \cap {\mathbb{M}_0}$ is rootshift-invariant, and if $A_0 \subseteq {\mathbb{M}_0}$ is rootshift-invariant, then $A := GA_0$ is shift-invariant. Thus shift-invariant subsets and rootshift-invariant subsets are in bijective correspondence. Moreover: \begin{prop}\label{transferprinciple} Let $\mu$ be a point process with Palm measure $\mu_0$. \begin{enumerate} \item If $A \subseteq {\mathbb{M}_0}$ is rootshift-invariant, then $\mu_0(A) = \mu(GA)$. \item If $A \subseteq \MM$ is shift-invariant, then $\mu_0(A \cap {\mathbb{M}_0}) = \mu(A)$. 
\end{enumerate} That is, under the correspondence between rootshift-invariant subsets of ${\mathbb{M}_0}$ and shift-invariant subsets of $\MM$, the measures $\mu_0$ and $\mu$ coincide. In particular, $G \curvearrowright (\MM, \mu)$ is ergodic \emph{if and only if} $(\Marrow, \muarrow)$ is ergodic. \end{prop} \begin{proof} We first prove the statements about measures under an ergodicity assumption, and then deduce the general case from the ergodic one. First, suppose $G \curvearrowright (\MM, \mu)$ is ergodic, and let $A \subseteq {\mathbb{M}_0}$ be rootshift-invariant. Then for any $U \subseteq G$ of unit volume, \begin{align*} \mu_0(A) &= \frac{1}{\intensity \mu} \EE_\mu\left[ \#\{g \in U \mid g^{-1}\omega \in A \} \right] && \text{By definition} \\ &= \frac{1}{\intensity \mu} \EE_\mu\left[ \lvert \omega \cap U \rvert \mathbbm{1}[{\omega \in GA}] \right] && \text{By rootshift-invariance of } A \\ &= \mu(GA) && \text{By ergodicity}. \end{align*} In particular, we see that $\mu_0(A)$ is zero or one, so the equivalence relation is ergodic. Now suppose $({\mathbb{M}_0}, \Rel, \mu_0)$ is ergodic, and let $A \subseteq \MM$ be shift-invariant. Then \begin{align*} \mu_0(A \cap {\mathbb{M}_0}) &= \frac{1}{\intensity \mu} \EE_\mu \left[ \#\{g \in U \mid g^{-1}\omega \in A \cap {\mathbb{M}_0} \} \right] && \text{By definition} \\ &= \frac{1}{\intensity \mu} \EE_\mu\left[ \lvert \omega \cap U \rvert \mathbbm{1}[{\omega \in A}] \right] && \text{By shift-invariance of } A \\ &= \mu(A) && \text{By ergodicity}. \end{align*} For the general case, we appeal to the ergodic decomposition theorem (see \cite{MR1784210} for a proof): \begin{thm} Let $G$ be an lcsc group, and $G \curvearrowright (X, \mu)$ a pmp action on a standard Borel space. 
Then there exists a standard Borel space $Y$ equipped with a probability measure $\nu$ and a family $\{ p_y \mid y \in Y\}$ of probability measures $p_y$ on $X$ with the following properties: \begin{enumerate} \item For every Borel $A \subseteq X$, the map $y \mapsto p_y(A)$ is Borel, and \[ \mu(A) = \int_Y p_y(A) d\nu(y). \] \item For every $y \in Y$, $p_y$ is an invariant and ergodic measure for the action $G \curvearrowright (X, p_y)$, and \item If $y, y' \in Y$ are distinct, then $p_y$ and $p_{y'}$ are mutually singular. \end{enumerate} \end{thm} There is an almost identically stated version of the above theorem for pmp cbers as well. These two decompositions are essentially equivalent, in a way that we shall now discuss. If $(Y, \nu)$ and $\{p_y \mid y \in Y\}$ is the ergodic decomposition for $G \curvearrowright (\MM, \mu)$, then the Palm measures $(p_y)_0$ of the $p_y$ form an ergodic decomposition for $({\mathbb{M}_0}, \Rel, \mu_0)$. That is, for all $A \subseteq {\mathbb{M}_0}$ we have \[ \mu_0(A) = \int_Y (p_y)_0(A) d\nu(y). \] Applying the previous ergodic case to this yields the general formula. \end{proof} \begin{thm}\label{correspondencetheorem} Let $G$ be a locally compact and second countable group, and $\Pi$ an invariant point process on $G$ with law $\mu$. Then associated to this data is an $r$-discrete probability measure preserving groupoid $(\Marrow, \muarrow)$ called \emph{the Palm groupoid} of $\Pi$. 
It has the following properties: \begin{itemize} \item Thinning maps $\theta : (\MM, \mu) \to \MM$ of $\Pi$ are in correspondence with Borel subsets $A$ of the unit space ${\mathbb{M}_0}$ of the Palm groupoid defined $\mu_0$ almost everywhere, \item Factor $\Xi$-markings $\mathscr{C} : (\MM, \mu) \to \Xi^\MM$ are in correspondence with Borel $\Xi$-valued maps $P$ defined on the unit space ${\mathbb{M}_0}$ of the Palm groupoid defined $\mu_0$ almost everywhere, and \item Factor graphs $\mathscr{G} : (\MM, \mu) \to \graph(G)$ of $\Pi$ are in correspondence with Borel subsets $\mathscr{A}$ of the arrow space $\Marrow$ of the Palm groupoid defined $\muarrow$ almost everywhere. \end{itemize} \end{thm} The Palm measure is well studied, but the equivalence relation structure seems to have been mostly overlooked. One can find two direct references to it: Example 2.2 in a paper of Avni \cite{avni2005spectral} and a question of Bowen in \cite{bowen2018all} (specifically, Questions and comments, item 1). We now prove Theorem \ref{correspondencetheorem}, building on Section \ref{borelcorrespondences}. The task here is to verify that under the correspondence, objects which are equal almost everywhere with respect to the point process are equal almost everywhere with respect to the Palm measure, and vice versa. \begin{lem}\label{extensionlemma} Let $\mu$ be a point process on $G$ with Palm measure $\mu_0$, and $X$ a Borel $G$-space. Let $\Phi, \Phi' : \MM \to X$ be equivariant Borel maps. Then \[ \Phi = \Phi' \;\; \mu \text{ almost everywhere \emph{if and only if} } \restr{\Phi}{{\mathbb{M}_0}} = \restr{\Phi'}{{\mathbb{M}_0}}\;\; \mu_0 \text{ almost everywhere}. \] \end{lem} \begin{proof} Observe that by equivariance the sets \[ \{ \omega \in \MM \mid \Phi(\omega) = \Phi'(\omega)\} \text{ and } \{ \omega \in {\mathbb{M}_0} \mid \Phi(\omega) = \Phi'(\omega) \} \] are shift invariant and rootshift invariant respectively.
So by Proposition \ref{transferprinciple} one is $\mu$-sure if and only if the other is $\mu_0$-sure, as desired. \end{proof} \begin{proof}[Proof of Theorem \ref{correspondencetheorem}] The method is essentially the same for thinnings and for markings, so we will just prove the thinning statement. To that end, let $\theta : (\MM, \mu) \to \MM$ be a thinning. Note that by our assumption that $\theta$ is equivariant, we have \[ \{ \omega \in \MM \mid \theta(\omega) \subseteq \omega \} \text{ has } \mu \text{ measure one}. \] This is a shift invariant set, so by Proposition \ref{transferprinciple} we have \[ \{ \omega \in {\mathbb{M}_0} \mid \theta(\omega) \subseteq \omega \} \text{ has } \mu_0 \text{ measure one}. \] We are now able to define $A = \{\omega \in {\mathbb{M}_0} \mid 0 \in \theta(\omega) \}$, and this will be our desired subset of $({\mathbb{M}_0}, \mu_0)$. It follows from equivariance that the thinning $\theta^A$ associated to $A$ satisfies \[ \restr{\theta^A}{{\mathbb{M}_0}} = \restr{\theta}{{\mathbb{M}_0}} \;\; \mu_0 \text{ almost everywhere,} \] so by Lemma \ref{extensionlemma} we have $\theta^A = \theta$ ($\mu$ almost everywhere). It remains to verify that if $A = B$ $\mu_0$ almost everywhere (that is, if $\mu_0(A \triangle B) = 0$), then $\theta^A = \theta^B$ ($\mu$ almost everywhere). Recall\footnote{This is a general fact about nonsingular cbers, and it follows from the fact that they can all be generated by actions of \emph{countable} groups.} that the \emph{saturation} of $A \triangle B$ \[ [A \triangle B] = \{ g^{-1}\omega \in {\mathbb{M}_0} \mid \omega \in A \triangle B \text{ and } g \in \omega \} \] is $\mu_0$ null if $A \triangle B$ is $\mu_0$ null. Observe that for $\omega \not\in [A \triangle B]$ we have $\theta^A(\omega) = \theta^B(\omega)$, and hence $\restr{\theta^A}{{\mathbb{M}_0}} = \restr{\theta^B}{{\mathbb{M}_0}}$ $\mu_0$ almost everywhere, and we finish by again applying Lemma \ref{extensionlemma}.
If $\mathscr{G}$ is a factor graph of $\mu$, then in the same fashion we see that it has a well-defined restriction to $({\mathbb{M}_0}, \mu_0)$. We then define \[ \mathscr{A} = \{ (\omega, g) \in {\mathbb{M}_0} \times G \mid (0, g) \in \mathscr{G}(\omega) \}. \] We must verify that if $\mathscr{A}, \mathscr{B} \subseteq \Marrow$ are subsets with $\muarrow(\mathscr{A} \triangle \mathscr{B}) = 0$, then their associated factor graphs $\mathscr{G}^{\mathscr{A}}$ and $\mathscr{G}^{\mathscr{B}}$ are equal $\mu$ almost everywhere. This assumption states \[ \int_{{\mathbb{M}_0}} \# \{g \in \omega \mid (\omega, g) \in \mathscr{A} \triangle \mathscr{B} \} d\mu_0(\omega) = 0 \] and hence the integrand is zero $\mu_0$ almost everywhere. By again considering saturations, we see that \[ \mu_0( \{\omega \in {\mathbb{M}_0} \mid (g^{-1}\omega, h) \in \mathscr{A} \triangle \mathscr{B} \text{ for some } g \in \omega \text{ and } h \in g^{-1}\omega \}) = 0, \] from which the argument finishes as in the case of thinnings. \end{proof} \subsection{Every free action is a point process, and the cross-section perspective}\label{crosssectionappendix} We have taken the perspective that point processes are an \emph{intrinsically interesting} class of pmp actions of lcsc groups to study. They are also a fairly general class: in this section we will prove Theorem \ref{minden}, that \emph{every} free and pmp action of a \emph{nondiscrete} lcsc group $G$ on a standard Borel measure space $(X, \mu)$ is abstractly isomorphic to a finite intensity point process. This is similar to the following fact: let $\Gamma \curvearrowright (X, \mu)$ be a pmp action of a discrete group $\Gamma$. The \emph{symbolic dynamics} of this action is the map \begin{align*} &\Sigma : (X, \mu) \to X^\Gamma \\ &\Sigma_x(\gamma) = \gamma^{-1}x. \end{align*} This is an injective and equivariant map, so we may identify the action $\Gamma \curvearrowright (X, \mu)$ with the invariant colouring action $\Gamma \curvearrowright (X^\Gamma, \Sigma_* \mu)$.
In this way, we see that all pmp actions of discrete groups are isomorphic to invariant colourings\footnote{If desired, one can fix a Borel isomorphism $X \cong [0,1]$ so that the colouring space is the same for all actions}. A standard technique in the study of free pmp actions of lcsc groups is to analyse their associated \emph{cross-sections}. This gives an analogue of symbolic dynamics for nondiscrete groups. \begin{defn} Let $G \curvearrowright (X, \mu)$ be a pmp action on a standard Borel measure space $(X, \mu)$. A \emph{discrete cross-section} for the action is a Borel subset $Y \subset X$ such that for $\mu$-almost every $x \in X$ the set $\{g \in G \mid g^{-1}x \in Y \}$ is a closed, discrete, and non-empty subset of $G$. \end{defn} \begin{example} The set ${\mathbb{M}_0} \subset \MM$ is a discrete cross-section \emph{for all} non-empty point process actions $G \curvearrowright (\MM, \mu)$. \end{example} There is a sense in which ${\mathbb{M}_0}$ is the \emph{only} cross-section, which we now discuss. Fix a discrete cross-section $Y$ for $G \curvearrowright X$. We associate to this data two maps \begin{align*} &\mathcal{V} : (X, \mu) \to \MM && \mathscr{V} : (X, \mu) \to Y^\MM \\ &\mathcal{V}_x = \{g \in G \mid g^{-1}x \in Y \} && \mathscr{V}_x = \{(g, g^{-1}x) \in G \times Y \mid g^{-1}x \in Y \}. \end{align*} These are equivariant maps, and the second one is always injective. In particular\footnote{Recall that an \emph{injective} map between standard Borel spaces is always a Borel isomorphism onto its image}, we see that every action which admits a cross-section also admits a point process factor, and is isomorphic to a \emph{marked} point process. Note that $\mathcal{V}^{-1}({\mathbb{M}_0}) = Y$. In this way we see that \emph{a discrete cross-section is the same thing as an unmarked point process factor}.
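The equivariance and injectivity of the symbolic dynamics map $\Sigma$ can be checked concretely in a finite toy model. The following sketch is ours, not part of the paper's argument: it verifies both properties for the translation action of $\ZZ/n$ on itself, where $\Sigma_x(\gamma) = \gamma^{-1}x$ becomes arithmetic modulo $n$.

```python
# Toy check of the symbolic dynamics map Sigma_x(gamma) = gamma^{-1} x
# for the translation action of the finite cyclic group Z/n on itself.
# (Illustrative only: the paper's setting is an lcsc group acting on a
# standard Borel space; finiteness here is purely for computability.)

def sigma(x, n):
    """Return Sigma_x as a dict gamma -> gamma^{-1} x in Z/n."""
    return {g: (x - g) % n for g in range(n)}

def translate(coloring, gamma, n):
    """The shift action on colourings: (gamma . c)(delta) = c(gamma^{-1} delta)."""
    return {d: coloring[(d - gamma) % n] for d in range(n)}

n = 7
for x in range(n):
    for gamma in range(n):
        # Equivariance: Sigma_{gamma x} = gamma . Sigma_x
        assert sigma((gamma + x) % n, n) == translate(sigma(x, n), gamma, n)

# Injectivity of Sigma: distinct points give distinct colourings.
colorings = [tuple(sorted(sigma(x, n).items())) for x in range(n)]
assert len(set(colorings)) == n
print("equivariance and injectivity verified for Z/%d" % n)
```

Injectivity is what allows the action to be identified with its image, the invariant colouring action; in the finite model it is visible directly since $\Sigma_x(e) = x$.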
\begin{remark}[Terminological discussion] If $\mathcal{P}(\omega)$ is some property of discrete subsets $\omega$ in $G$, then we can investigate discrete cross-sections of actions $G \curvearrowright (X, \mu)$ such that the associated subset $\mathcal{V}_x$ satisfies $\mathcal{P}$ for $\mu$ almost every $x \in X$. For instance, $\mathcal{P}(\omega)$ might be the property ``$\omega$ is uniformly discrete'' or ``$\omega$ is a net'' (see Definition \ref{metricdefs} for the meaning of these terms). We will refer to a discrete cross-section such that $\mathcal{P}(\mathcal{V}_x)$ is satisfied for $\mu$ almost every $x \in X$ as a \emph{$\mathcal{P}$ cross-section}. Note that if $G \curvearrowright (\MM, \mu)$ is the Poisson point process action, then ${\mathbb{M}_0}$ is \emph{not} a lacunary cross-section. It is for this reason that we feel the terminology should be modified slightly. \end{remark} \begin{thm}[Forrest \cite{MR417388}, see also \cite{MR3335405}]\label{crosssectionsexist} Every free and \emph{nonsingular}\footnote{Recall that an action is \emph{nonsingular} if it preserves null sets, that is, if $\mu(A) = 0$ then $\mu(gA) = 0$ for all $g \in G$.} action of an lcsc group on a standard probability space admits a discrete cross-section. Moreover, the cross-section can be chosen to be uniformly separated and even a net. \end{thm} One sees that Theorem \ref{minden} is true by applying the above theorem together with the unmarking technique of Proposition \ref{abstractlyisom}. \begin{remark} In fact, cross-sections of actions are known to exist in great generality, see \cite{kechris2019theory} for further examples. We restrict our attention to \emph{free} actions because freeness allows us to identify the orbit $Gx$ of any point $x \in X$ with $G$ itself. One can run into issues in the absence of this. For instance, let $\RR \times \RR$ act on $\{\bullet\} \times \RR/\ZZ$ diagonally, where $\{\bullet\}$ denotes a singleton with trivial action.
Then $\{ (\bullet, 0) \}$ is a lacunary cross-section for the action. If we try to construct a map $\mathcal{V}$ as before, then we would map $(\bullet, x) \in \{\bullet\} \times \RR/\ZZ$ to the subset of $\RR^2$ \[ \mathcal{V}_{(\bullet, x)} = \RR \times \{ x + \ZZ \}. \] In this way one has constructed a \emph{random closed set} as a factor of the action, but it is not a point process. In fact, it is possible to view an \emph{arbitrary} pmp action as a kind of ``bundle'' of point processes over the various homogeneous spaces $G/H$, where $H$ ranges over the closed subgroups of $G$, but we will not explore this further. \end{remark} The following theorem is described as folklore in \cite{MR3335405}: \begin{thm}[Folklore theorem, see Proposition 4.3 of \cite{MR3335405}] Let $G$ be a unimodular lcsc group, and $G \curvearrowright (X, \mu)$ a pmp action on a standard Borel space. Fix a lacunary cross-section $Y \subset X$ for the action. Then: \begin{enumerate} \item The orbit equivalence relation of $G \curvearrowright X$ restricts to a cber $\Rel$ on $Y$. \item There exists an $\Rel$-invariant probability measure $\nu$ on $Y$. \item The action $G \curvearrowright (X, \mu)$ is ergodic if and only if the cber $(Y, \Rel, \nu)$ is ergodic. \item The group $G$ is noncompact if and only if the cber is aperiodic $\nu$ almost everywhere. \item The group $G$ is amenable if and only if the cber $(Y, \Rel, \nu)$ is amenable. \end{enumerate} \end{thm} The mathematical content of Theorem \ref{correspondencetheorem} can be viewed as a rediscovery of the above theorem with different proofs, together with an interpretation of factor constructions as objects living on the Palm groupoid. \begin{question} Is there a more point process theoretic method to construct discrete cross-sections of free pmp actions?
\end{question} We have seen that if $G \curvearrowright (X, \mu)$ is a free pmp action, then cross-sections are the same thing as point process factor maps, and that every choice of cross-section gives an isomorphic representation of the action as a marked point process. These ideas can be combined. Suppose $\Phi : (X, \mu) \to \MM$ is an equivariant factor map. Then $Y = \Phi^{-1}({\mathbb{M}_0})$ is a cross-section for the action $G \curvearrowright (X, \mu)$. We also have the isomorphism $\mathscr{V} : (X, \mu) \to Y^\MM$. These can be combined, and we see that the map $\Phi \circ \mathscr{V}^{-1} : Y^\MM \to \MM$ is simply the map that forgets labels. In other words, every extension of a point process is just the point process with an enriched mark space. \section{The cost of a point process}\label{cost} \subsection{Definition and monotonicity for factors} Our goal is to extend the notion of cost for pmp cbers to point processes. For further background on cost, see \cite{gaboriau2000cout}, \cite{gaboriau2010cost}, \cite{gaboriau2016around}, and \cite{kechris2004topics}. Informally speaking, the \emph{cost} of a point process is the ``cheapest'' way to wire it up. We look at all \emph{connected} factor graphs of the process and compute the expected degree at the origin in the Palm version. This is then suitably normalised to give an isomorphism invariant. \begin{defn}\label{groupoidcostdefn} Let $\Pi$ be a point process on $G$ (possibly marked) with finite but non-zero intensity. Its \emph{groupoid cost} is defined by \[ \cost(\Pi) - 1 = \intensity \mu \cdot \inf_{\mathscr{G}} \left\{ \frac{1}{2}\EE\left[\deg_0{\mathscr{G}(\Pi_0)}\right] - 1 \right\}, \] where the infimum is taken over all connected factor graphs $\mathscr{G}$ of $\Pi$ and $\Pi_0$ denotes the Palm version of $\Pi$. 
Equivalently by Remark \ref{palmformula}, \[ \cost(\Pi) - 1 = \inf_{\mathscr{G}}\left\{ \frac{1}{2}\EE\left[ \sum_{x \in U \cap \Pi} \deg_x{\mathscr{G}(\Pi)} \right] \right\} - \intensity(\Pi), \] where $U$ is a set of unit volume in $G$. \end{defn} \begin{remark} The cost respects the ergodic decomposition of a process, and so for this reason it suffices to consider ergodic processes. \end{remark} \begin{defn} The \emph{cost} of a group is the infimum of the cost of all its free point processes. A group is said to have \emph{fixed price} if all of its \emph{essentially free} point processes have the same cost. \end{defn} At the time of writing there are no groups known that do not have this property. \begin{remark} We sometimes refer to the quantity in Definition \ref{groupoidcostdefn} as the \emph{groupoid} cost, as it can be thought of as the infimal ``size'' of a generator of the groupoid $(\Marrow, \muarrow)$, in a way that we now discuss. Recall that (directed) factor graphs are in correspondence with subsets of $\Marrow$. We identify objects under this correspondence. One defines the \emph{product} of two factor graphs $\mathscr{G}, \mathscr{H} \subset \Marrow$ by taking all well-defined products. More explicitly, \[ \mathscr{G} \cdot \mathscr{H} = \{ (\omega, gh) \in \Marrow \mid (\omega, g) \in \mathscr{G} \text{ and } (g^{-1}\omega, h) \in \mathscr{H} \}. \] From the factor graph viewpoint, the edges of $\mathscr{G} \cdot \mathscr{H}$ are those pairs of vertices that can be reached by following an edge of $\mathscr{G}$ and then an edge of $\mathscr{H}$. A \emph{Borel generator} of $\Marrow$ is a Borel factor graph $\mathscr{G}$ such that \[ \langle \mathscr{G} \rangle := \bigcup_{n} \mathscr{G}^n = \Marrow. \] In other words, it is a \emph{connected} factor graph. If $\Pi$ is a point process with law $\mu$, then a generator of the measured groupoid $(\Marrow, \muarrow)$ is a factor graph $\mathscr{G}$ such that \[ \muarrow(\Marrow \setminus \langle \mathscr{G} \rangle) = 0.
\] In other words, it is a factor graph which is \emph{connected almost surely}. With these definitions, one can equivalently rephrase the probabilistic definition of the cost of $\Pi$ as \[ \cost(\Pi) - 1 = \intensity(\Pi) \cdot \inf_{\mathscr{G}}\left\{ \muarrow(\mathscr{G}) - 1\right\}, \] where $\mathscr{G}$ runs over all generators of $(\Marrow, \muarrow)$. \end{remark} \begin{example} If $\Pi$ is the lattice shift corresponding to $\Gamma < G$, then \[ \cost(\Pi) = 1 + \frac{d(\Gamma) - 1}{\covol(G / \Gamma) }, \] where $d(\Gamma)$ denotes the \emph{rank} of $\Gamma$, that is, its minimum number of generators. To see this, observe that by equivariance a factor graph of the lattice shift is determined by a \emph{single} subset $S \subset \Gamma$, and connects $x \in \Pi$ to all $xs \in \Pi$ for $s \in S$. The graph is connected exactly when $S$ generates $\Gamma$. The formula then follows from the definition of cost. \end{example} \begin{remark} In a concurrently appearing work \cite{mellick2021palm} by the second author, it is shown that the Palm equivalence relation of any free point process on an amenable group is hyperfinite almost everywhere. It follows that amenable groups (in particular $\RR^n$) have fixed price one. We will show that all groups of the form $G \times \RR$ have fixed price one. This gives an alternative proof that $\RR^n$ has fixed price one. It would be interesting to see a ``direct'' proof of this fact. That is, to exhibit \emph{reasonably explicit} connected factor graphs that have cost less than $1 + \e$ for every $\e > 0$. In \cite{coupier20132d} an explicit factor graph of the Poisson point process on $\RR^2$ is described and shown to be a connected and one-ended tree. It follows that it has cost one. \end{remark} \begin{lem}\label{costmonotone} Let $\Pi$ be a point process of finite intensity, and $\Phi$ a factor map of $\Pi$ such that $\Phi(\Pi)$ has finite intensity. Then \[ \cost(\Pi) \leq \cost(\Phi(\Pi)).
\] Thus cost is \emph{monotone} for factors. \end{lem} \begin{cor} If $\mu$ and $\nu$ are finite intensity point processes that factor onto each other, then $\cost(\mu) = \cost(\nu)$. In particular, the cost of $\mu$ only depends on its isomorphism class as an action. \end{cor} \begin{proof}[Proof of Lemma \ref{costmonotone}] Recall from Remark \ref{factorsdecompose} that $\Phi$ decomposes as the composition of a thinning $\pi$ and a thickening $\Theta^\Phi$. We prove \[ \cost(\Pi) \leq \cost(\Theta^\Phi(\Pi)) \leq \cost(\pi(\Theta^\Phi(\Pi))) = \cost(\Phi(\Pi)), \] where the last equality holds as $\Phi = \pi \circ \Theta^\Phi$. We prove the second inequality first, as it is simpler. For this we use the non-Palm definition of cost. To that end, let $\mathscr{G}$ be a graphing of $\Phi(\Pi)$ that $\e$-computes the cost, that is, with \[ \EE\left[\sum_{x \in U \cap \Phi(\Pi)} \overrightarrow{\deg}_x{\mathscr{G}(\Phi(\Pi))} \right]- \intensity (\Phi(\Pi)) \leq \cost(\Phi(\Pi)) - 1 + \e. \] We will use it to define a graphing $\mathscr{H}$ of the thickened process $\Theta^\Phi(\Pi)$. Recall that this process has three types of points: red, purple, and blue. Let $\mathscr{N}$ be the factor graph of $\Theta^\Phi(\Pi)$ that connects each red point $x$ to its nearest blue neighbour. If this is not well-defined, then we use the tie-breaking function $T : G \to \RR$ of Section \ref{voronoidefn} to make it so in an equivariant way. That is, if $y_1, y_2, \ldots, y_n$ are the (finitely many!) blue points of $\Theta^\Phi(\Pi)$ that are closest to $x$, then let $y$ be the element that minimises $T(x^{-1}y_i)$ and add in a directed edge $x \to y$ to $\mathscr{N}$. We can view $\mathscr{G}$ as defining a factor graph on $\Theta^\Phi(\Pi)$, which lives on the blue and purple points. Now let $\mathscr{H}(\Theta^{\Phi}(\Pi)) = \mathscr{G}(\Phi(\Pi)) \sqcup \mathscr{N}(\Theta^\Phi(\Pi))$. 
This is connected as an undirected graph, so by the definition of cost: \begin{align*} &\cost(\Theta^\Phi(\Pi)) - 1 \leq \EE\left[\sum_{x \in \Theta^\Phi(\Pi) \cap U}\overrightarrow{\deg}_x{\mathscr{H}(\Theta^\Phi(\Pi))}\right] - \intensity(\Theta^\Phi(\Pi)) \\ &= \EE\left[\sum_{x \in U \cap \Pi \setminus \Phi(\Pi)} 1 + \sum_{x \in U \cap \Phi(\Pi)} \overrightarrow{\deg}_x{\mathscr{G}(\Phi(\Pi))} \right] - \intensity (\Pi \setminus \Phi(\Pi)) - \intensity (\Phi(\Pi)) \\ &= \EE\left[\sum_{x \in U \cap \Phi(\Pi)} \overrightarrow{\deg}_x{\mathscr{G}(\Phi(\Pi))} \right]- \intensity (\Phi(\Pi)) \\ &\leq \cost(\Phi(\Pi)) - 1 + \e. \end{align*} As $\e$ was arbitrary, this proves the second inequality. For the other inequality, we use the explicit description of the Palm measure as in Example \ref{palmofgeneralthickening} and the Palm definition of cost. The idea of the proof is: we have a graphing defined on a larger subset, and we must push it onto a smaller subset somehow. We will simply transfer all edges of $\Theta^\Phi(\Pi)$ to $\Pi$ along the Voronoi cells. For $g \in \Pi$, let $F_\Pi(g) = V_\Pi(g) \cap \Theta^\Phi(\Pi)$. Let us call a graphing $\mathscr{G}$ of $\Theta^\Phi(\Pi)$ \emph{starlike} if for all $g \in \Pi$ and $x \in F_\Pi(g)$, we have $(g,x) \in \mathscr{G}$. If $\mathscr{G}$ is any graphing, then we can perturb it to find a starlike graphing of the same edge measure. Let us take this for granted for now and see how the proof concludes. Let $\mathscr{G}$ be a starlike graphing of $\Theta^\Phi(\Pi)$ that $\e$-computes the cost. Let us define a graphing $\mathscr{H}$ of $\Pi$ as follows: join $x, y \in \Pi$ by an edge in $\mathscr{H}(\Pi)$ if there exist $x' \in F_\Pi(x)$ and $y' \in F_\Pi(y)$ such that $x'$ and $y'$ are connected by an edge in $\mathscr{G}(\Theta^\Phi(\Pi))$. When we push $\mathscr{G}$ onto $\Pi$, some edges get killed. For instance, if two Voronoi cells have many edges between them, then all but one get killed.
By assuming that the graphing is \emph{starlike} we are guaranteed to kill enough edges. In particular, we kill $\@ifstar{\oldabs}{\oldabs*}{F_\Pi(g)} - 1$ edges at each $g \in \Pi$. To make the proof more legible, we write $I_\Pi = \intensity(\Pi)$ and $I_{\Theta} = \intensity(\Theta^\Phi(\Pi))$, so that $I_\Theta = I_\Pi \cdot \EE[F_{\Pi_0}(0)]$. We compute its expected outdegree as follows: \begin{align*} &I_\Pi \cdot \EE\left[\overrightarrow{\deg}_0{\mathscr{H}(\Pi_0)} - 1\right] \leq I_\Pi \cdot \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} - \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}\right] \\ &= I_\Pi \cdot \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \right] - I_\Pi\cdot \EE\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} \\ &= I_\Pi \cdot \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \right] - I_\Theta. \end{align*} We now work on this first term. \begin{align*} &I_\Pi \cdot \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \right] = \frac{I_\Theta}{\EE \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}} \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \right]\\ &= \sum_{k \geq 1}\frac{I_\Theta}{\EE \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}} \EE\left[\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \Big| \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = k \right] \PP[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = k] \\ &= I_\Theta \sum_{k \geq 1} \EE\left[\frac{1}{\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)}}\sum_{x \in F_{\Pi_0}(0)}\overrightarrow{\deg}_x{\mathscr{G}(\Theta^\Phi(\Pi_0))} \Big| \@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = k \right] \PP[\@ifstar{\oldabs}{\oldabs*}{F_{\Pi_0}(0)} = k] \\ &= I_\Theta \EE\left[\overrightarrow{\deg}_0(\mathscr{G}(\Theta^\Phi(\Pi)_0))\right], \end{align*} where we use the explicit description of 
the Palm measure of a general thickening proven in Example \ref{palmofgeneralthickening}. Thus \[ I_\Pi \cdot \EE\left[\overrightarrow{\deg}_0{\mathscr{H}(\Pi_0)} - 1\right] \leq I_\Theta \EE\left[\overrightarrow{\deg}_0(\mathscr{G}(\Theta^\Phi(\Pi)_0)) - 1\right] \] proving $\cost(\Pi) \leq \cost(\Theta^\Phi(\Pi))$, as desired. At last, we must show how to perturb graphings to be starlike. The idea is simple: if some $g \in \Pi$ is not starlike, then there is some $x \in F_\Pi(g)$ such that $(g, x) \not\in \mathscr{G}$. However, there must be \emph{some} path from $g$ to $x$ in $\mathscr{G}$ so we pinch an edge from that path and thus rob Peter to pay Paul. In this way we can improve a given factor graph to be more starlike. By iterating in an appropriate way we can construct the desired factor graph. Let \[ \Pi' = \bigcup_{g \in \Pi} \{ h \in F_\Pi(g) \mid h \neq g \text{ and } (g, h) \not\in \mathscr{G} \} \] denote the subprocess of points that violate starlikeness. The edges of $\mathscr{G}$ are of three kinds according to how they interface with the Voronoi cells of $\Pi$: \begin{description} \item[Starlike edges,] those of the form $(g, h)$ where $g \in \Pi$ and $h \in F_\Pi(g)$, \item[Intracell edges,] those of the form $(h, h')$ where $h, h' \in F_\Pi(g)$ for some $g \in \Pi$ with neither of $h$ or $h'$ being $g$, and \item[Crossing edges,] those of the form $(h, h')$ with $h \in F_\Pi(g)$ and $h' \in F_\Pi(g')$ with $g, g' \in \Pi$ and $g \neq g'$. 
\end{description} \begin{figure}[h]\label{edgesexample} \includegraphics[scale=0.4]{rewiringedgesexample.pdf} \centering \caption{A chunk of a point process, with starlike edges coloured magenta, intracell edges black, and crossing edges cyan.} \end{figure} We consider the space\footnote{By constructing an appropriate subset of a configuration space, one can encode these graphs as a standard Borel space.} $\mathcal{G}$ of marked factor graphs of $\Pi$ with the following properties: \begin{itemize} \item They are simply $\mathscr{G}$ as an unmarked graph, \item Points $h$ of $\Pi'$ receive \emph{either} the blank mark $\bullet$, \item \emph{or} they are marked by a non backtracking path in $\mathscr{G}$ from $h$ to $g$, where $h \in F_\Pi(g)$, and one crossing or intracell edge of this path is coloured red, and \item each red edge appears in at most one of the paths of points of $\Pi'$. \end{itemize} These factor graphs are basically rewiring rules for $\mathscr{G}$. If $\mathscr{H} \in \mathcal{G}$, then each point of $\Pi'$ that receives a path label in $\mathscr{H}$ replaces the red edge of its path with its starlike edge (see Figure \ref{rewired}). This is an equivariant, measurable, and deterministic rule, so defines a factor graph of $\Pi$. \begin{figure}[h]\label{rewired} \includegraphics[scale=0.5]{rewired.pdf} \centering \caption{A single rewiring move. We stress that this move must be taken by \emph{all} chosen points simultaneously, and therein lies the rub.} \end{figure} Note that this rewiring doesn't change the edge measure of the graph (we rob Peter to pay Paul), and it remains connected.
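The trichotomy of edges above can be made concrete in a finite sketch. The snippet below is purely our illustration: `nearest_centre` and `classify_edge` are hypothetical helpers, nearest-point assignment stands in for the Voronoi cells $V_\Pi(g)$, and ties are ignored rather than broken equivariantly.

```python
import math

# Finite sketch of the three edge types (starlike, intracell, crossing).
# Points of the coarse process Pi are "centres"; every point of the
# thickened process is assigned to its nearest centre (its cell).

def nearest_centre(p, centres):
    """Assign a point to the centre of the cell containing it."""
    return min(centres, key=lambda c: math.dist(p, c))

def classify_edge(edge, centres):
    """Classify an edge (h, h') relative to the cell decomposition."""
    h, hp = edge
    g, gp = nearest_centre(h, centres), nearest_centre(hp, centres)
    if g != gp:
        return "crossing"       # endpoints lie in different cells
    if h == g or hp == g:
        return "starlike"       # one endpoint is the cell centre itself
    return "intracell"          # same cell, neither endpoint is the centre

centres = [(0.0, 0.0), (10.0, 0.0)]
assert classify_edge(((0.0, 0.0), (1.0, 1.0)), centres) == "starlike"
assert classify_edge(((1.0, 1.0), (2.0, -1.0)), centres) == "intracell"
assert classify_edge(((2.0, -1.0), (9.0, 1.0)), centres) == "crossing"
print("edge classification matches the three kinds in the proof")
```

In the rewiring argument, only intracell and crossing edges are eligible to be coloured red and traded for missing starlike edges.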
Let \[ \iota(\mathscr{H}) = \text{the intensity of } \Pi' \text{ points that receive path labels in } \mathscr{H} \] and \[ f(\mathscr{H}) = \sup\left\{\iota(\mathscr{H}') \mid \mathscr{H}' \in \mathcal{G} \text{ and } \mathscr{H} \preceq \mathscr{H}' \right\}, \] where we declare $\mathscr{H} \preceq \mathscr{H}'$ if every path label in $\mathscr{H}$ is present in $\mathscr{H}'$. That is, $\mathscr{H}'$ has simply replaced $\bullet$ labelled points in $\mathscr{H}$ by path labels. \begin{claim} There is a maximal element $\mathscr{G}_\infty$ (with respect to $\preceq$) of $\mathcal{G}$, and the rewiring of $\mathscr{G}$ associated to it is starlike. \end{claim} It's easy to find the maximal element. Choose $\mathscr{G}_1$ such that \[ f(\mathscr{G}) \leq \iota(\mathscr{G}_1) + 1, \] and then inductively choose $\mathscr{G}_{n+1}$ such that \[ f(\mathscr{G}_n) \leq \iota(\mathscr{G}_{n+1}) + \frac{1}{n}. \] Let $\mathscr{G}_\infty$ denote the ``union'' of the $\mathscr{G}_n$, where we declare that path labels trump $\bullet$ labels. If $\mathscr{H} \in \mathcal{G}$ is a factor graph with $\mathscr{G}_\infty \preceq \mathscr{H}$, then $\mathscr{G}_n \preceq \mathscr{H}$ for all $n$, so \[ \iota(\mathscr{H}) \leq f(\mathscr{G}_n) \leq \iota(\mathscr{G}_{n+1}) + \frac{1}{n} \to \iota(\mathscr{G}_\infty). \] We conclude that $\mathscr{G}_\infty = \mathscr{H}$ almost surely since one process is a subset of the other. We will now prove by contradiction that the rewiring associated to $\mathscr{G}_\infty$ is starlike. Supposing it is not, we will construct $\mathscr{H} \in \mathcal{G}$ with $\mathscr{G}_\infty \preceq \mathscr{H}$ and $\iota(\mathscr{G}_\infty) < \iota(\mathscr{H})$, violating the maximality of $\mathscr{G}_\infty$. Let $\overline{\mathscr{G}}$ denote the result of rewiring $\mathscr{G}_\infty$. Set \[ \Pi_\times = \{ g \in \Pi \mid g \text{ is not starlike in } \overline{\mathscr{G}} \}. \] We are going to make these points more starlike.
For each $g \in \Pi_\times$, choose a point $x_g \in \Theta^\Phi(\Pi)$ in an equivariant and measurable way. More precisely, we consider the set \[ \{ y \in F_\Pi(g) \mid (g, y) \not \in \overline{\mathscr{G}} \} \] and choose $x_g$ to be the element minimising $I(g^{-1}y)$. Fix a nonbacktracking path $P(g, x_g)$ from $g$ to $x_g$ in $\overline{\mathscr{G}}$. We do this for all $g \in \Pi_\times$ simultaneously, again in an equivariant and measurable way: look at all paths between $g$ and $x_g$ of minimal length (that is, the number of $\overline{\mathscr{G}}$ edges used), and choose one using the Borel isomorphism $I$ in a similar way to before. Choose\footnote{If the process isn't ergodic then this $N$ should be a random variable, in any case one can manage.} $N$ so large that there is a positive intensity of points $g \in \Pi_\times$ with paths $P(g, x_g)$ of length at most $N$. We now construct our desired marked factor graph $\mathscr{H}$ as follows: \begin{itemize} \item Every point in $\mathscr{G}_\infty$ that has a path label retains its path label. \item Every point of $\Pi' \setminus \Pi_\times$ is marked $\bullet$. \item Every point $g$ of $\Pi_\times$ whose path $P(g, x_g)$ has length greater than $N$ is marked $\bullet$. \end{itemize} This leaves the points of $\Pi_\times$ whose paths are bounded by $N$. Note that this is a locally finite family -- each edge appears on at most finitely many $P(g, x_g)$. Every path in the rewired graph $\overline{\mathscr{G}}$ can be associated to a path in $\mathscr{G}_\infty$ itself -- every time the path uses one of the starlike edges that was added in the rewiring process, just go the long way in $\mathscr{G}$. We refer to this as the \emph{detour version} of the path. Now, to construct the remaining labels, check whether there are any paths $P(g, x_g)$ which contain an intracell edge $e = (h, h')$ with $h, h' \in F_\Pi(g)$. If so, then we label $g$ by the detour version of this path in $\mathscr{G}_\infty$ with $e$ coloured red.
Observe that the remaining paths must contain \emph{at least two} crossing edges. Each $g$ will \emph{apply} to the first crossing edge it sees on the path from $g$ to $x_g$. Each edge $(h, h')$ receives finitely many applicants $\{g_1, g_2, \ldots, g_k\}$. It chooses the element of this set which minimises $\min \{I(g_i^{-1}h), I(g_i^{-1}h')\}$. At last, we finish the construction of $\mathscr{H}$ by marking the points that were rejected with $\bullet$, and the remaining ones by the detour version of this path in $\mathscr{G}$ with their chosen edge coloured red. Then $\mathscr{G}_\infty \preceq \mathscr{H}$ by construction, but $\iota(\mathscr{H}) > \iota (\mathscr{G}_\infty)$. We are therefore able to replace $\mathscr{G}$ by $\overline{\mathscr{G}}$ and assume our factor graph is starlike, as desired. \end{proof} \begin{remark} The groupoid cost can really increase under a factor map: take the example of Remark \ref{thinninglost} with $\ZZ^n < \RR^n$ for $n > 1$. That is, consider the union of the $\ZZ^n$ lattice shift with the Poisson point process. As a free process, this has cost one. But it factors onto the lattice shift, which has cost greater than one. \end{remark} One can also prove cost monotonicity by invoking Gaboriau's theorem on the cost of complete sections: \begin{thm}[Proposition II.6 of \cite{gaboriau2000cout}, see also Theorem 21.1 of \cite{kechris2004topics}] If $(X, \Rel, \mu)$ is a pmp cber and $S \subseteq X$ is a complete section\footnote{That is, it meets almost every orbit of $X$.}, then \[ \cost_\mu(\Rel) - 1 = \mu(S) \left( \cost_{\mu | S}(\restr{\Rel}{S}) - 1 \right), \] where $\restr{\Rel}{S} = \Rel \cap (S \times S)$ is the restriction and \[ \mu | S := \frac{\mu( \bullet \cap S)}{\mu(S)} \] is the conditional measure. \end{thm} Suppose $\Phi : (\MM, \mu) \to \MM$ is a point process factor map with $\Phi_* \mu$ of finite intensity.
Then \[ Y := \Phi^{-1}({\mathbb{M}_0}) = \{ \omega \in \MM \mid 0 \in \Phi(\omega) \} \] forms a \emph{discrete cross section}\footnote{See Section \ref{crosssectionappendix} for the definition and further context.} for the action $G \curvearrowright (\MM, \mu)$. One can define a ``Palm measure'' $\mu_Y$ on $Y$ by replacing all references to ${\mathbb{M}_0}$ with $Y$, and similarly there is a rerooting equivalence relation $\Rel_Y$ on $Y$. This again forms a pmp cber. Then we have a morphism $\Phi : (Y, \Rel_Y, \mu_Y) \to ({\mathbb{M}_0}, \Rel, \Phi_* \mu)$ of pmp cbers, so \[ \cost_{\mu_Y}(\Rel_Y) \leq \cost_{\Phi_* \mu} (\Rel). \] One can see that $Y \cup {\mathbb{M}_0}$ \emph{also} forms a discrete cross section, and both $Y$ and ${\mathbb{M}_0}$ are complete sections for it. Then two applications of Gaboriau's theorem show \[ \cost_{\mu_Y}(\Rel_Y) = \cost_{\mu}(\Rel), \] thus proving cost monotonicity. \subsection{Unmarking} We have defined the cost of groups by looking at all free \emph{unmarked} point processes on the group. This is no loss of generality: \begin{prop}\label{abstractlyisom} Every free point process $\Pi$ on a nondiscrete group with marks from a standard Borel space $\Xi$ is equivariantly isomorphic to an \emph{unmarked} point process. More precisely, if $\mu$ is the law of $\Pi$, then there is a measurable and equivariant almost everywhere isomorphism $\Phi: (\Xi^\MM, \mu) \to (\MM, \Phi_*\mu)$. \end{prop} Since cost is an isomorphism invariant (even if one process is marked and the other is not), this shows that one cannot find point processes with lower cost by using some tricky mark space. We refer to Proposition \ref{abstractlyisom} as \emph{unmarking}. It should be easy to convince oneself that such a proposition will be true, although the details will necessarily be somewhat messy and ad hoc.
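To make the encoding idea concrete before the formal argument, here is a minimal computational sketch in Python. All specifics here (the constants, the $\{+,-\}$ mark set, and the satellite scheme) are illustrative choices of ours, not taken from the proof: a base point of a well-separated planar configuration stays isolated at a small scale, while its mark is stored as a tight pair of nearby satellite points whose direction encodes the sign.

```python
import math

DELTA = 1.0  # assumed minimum separation of the input configuration (our choice)

def encode(marked_points):
    """Encode a finite {+,-}-marked, DELTA-separated planar configuration
    as an unmarked point set. Each base point remains DELTA/200-isolated;
    the mark is stored as a tight pair of satellites whose direction
    (horizontal for '+', vertical for '-') records the sign."""
    out = []
    for (x, y), mark in marked_points:
        out.append((x, y))
        angle = 0.0 if mark == '+' else math.pi / 2
        r, eps = DELTA / 150, DELTA / 1000  # satellite radius, pair half-gap
        cx, cy = x + r * math.cos(angle), y + r * math.sin(angle)
        # the pair is DELTA/500 apart, hence NOT DELTA/200-isolated,
        # so satellites cannot be mistaken for base points when decoding
        out.append((cx - eps, cy))
        out.append((cx + eps, cy))
    return out

def decode(points):
    """Recover the marked configuration: base points are exactly the
    DELTA/200-isolated ones; the satellite direction recovers the mark."""
    def isolated(p):
        return all(q == p or math.dist(p, q) > DELTA / 200 for q in points)
    result = []
    for p in points:
        if not isolated(p):
            continue  # a satellite, not a base point
        # sum the displacements of the nearby satellites to read the direction
        sats = [q for q in points if q != p and math.dist(p, q) < DELTA / 100]
        dx = sum(q[0] - p[0] for q in sats)
        dy = sum(q[1] - p[1] for q in sats)
        result.append((p, '+' if abs(dx) > abs(dy) else '-'))
    return result
```

The map `encode` is injective on such configurations, so `decode(encode(pts))` returns the input; this is the finite, planar analogue of the injectivity claim in the proposition.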
We call the technique used \emph{local encoding}, which is illustrated in the following example: \begin{figure}[h] \includegraphics[scale=0.5]{localencode.pdf} \centering \caption{Locally encoding labels of a point process.}\label{localencode} \end{figure} This is a point process in $\RR^2$ labelled by the set $\{+, -\}$, which we have coloured as cyan and magenta respectively in the diagram. The map $\Phi : \{+, -\}^\MM \to \MM$ takes the input configuration and adds a small decoration around each point. In this case we are literally encoding $+$ marks as a plus symbol centred at each point, and similarly for $-$ marks. Barring some exceptional circumstances, you should be able to convince yourself that $\Phi$ is an injective map, and thus an isomorphism onto its image for many input processes. The proof for general $G$ works along the same lines. We will employ a general lemma that is no doubt well known to experts. For the convenience of the reader we translate a proof appearing in \cite{timar2004} and attributed to Yuval Peres into our language. \begin{lem}\label{independentsetsexist} Let $\mu$ be a \emph{free} point process on $G$, and $\mathscr{G}$ a locally finite measurable factor graph of $\mu$. Then one can equivariantly and measurably construct a non-trivial independent subset of $\mathscr{G}$. \end{lem} To spell this out, this means there exists a map $I : (\MM, \mu) \to \MM$ with the properties that \begin{itemize} \item $I(\omega) \subset \omega$ almost surely, and \item if $g, h \in I(\omega)$, then $g$ and $h$ are not connected in $\mathscr{G}(\omega)$. \end{itemize} \begin{proof} The key idea in the proof can be illustrated via the factor labelling $\odot : \MM \to {\mathbb{M}_0}^\MM$ given by \[ \odot(\omega) = \{ (g, g^{-1}\omega) \in G \times {\mathbb{M}_0}(G) \mid g \in \omega \}. \] Under $\odot$, each point $g$ of a configuration $\omega$ looks at what the configuration looks like from its perspective, and records it as a label.
That is, it views itself as the centre of the universe (this is what the symbol $\odot$ is meant to represent; we will call the map \emph{egotistical} or \emph{self-centred}). Observe that $\mu$ is an (essentially) free action if and only if $\odot(\omega)$ has distinct labels almost surely. For if $g, h \in \omega$ receive the same label under the egotistical map, then $g^{-1}\omega = h^{-1}\omega$, i.e. $gh^{-1} \in \stab_G(\omega)$. Conversely, if $g \in \stab_G(\omega)$ is nontrivial, then for all $x \in \omega$ the label $x^{-1}\omega$ of $x$ is the same as that of $gx$, as $(gx)^{-1}\omega = x^{-1}\omega$. Fix a countable dense subset $Q \subset {\mathbb{M}_0}$. Let us define a thinning $I_q : \MM \to \MM$ for each $q \in Q$ by \[ I_q(\omega) = \{ g \in \omega \mid d(g^{-1}\omega, q) < d(h^{-1}\omega, q) \text{ for all } h \in \omega \text{ adjacent to } g \text{ in } \mathscr{G}(\omega) \}. \] Note that each $I_q(\omega)$ is an independent subset of $\mathscr{G}(\omega)$, but it is possibly empty. However, the \emph{union} over all $q$ of the $I_q(\omega)$ is $\omega$ by freeness, so at least one such $I_q$ must define a non-empty independent subset, as desired. \end{proof} In particular, by applying the lemma to the factor graph $\mathscr{D}_R$ of Example \ref{distanceR}, one has: \begin{cor} Let $\Pi$ be a free point process. Then for all $R > 0$ one can deterministically, measurably, and equivariantly select a subset $\Pi_R \subset \Pi$ that is $R$ uniformly separated, in the sense that if $x$ and $y$ are distinct points of $\Pi_R$, then $d(x,y) > R$. \end{cor} Our proof will also make use of a technique we refer to as \emph{label trickery}: \begin{prop}[Label trickery]\label{labeltrickery} Let $\Pi$ be any free point process (possibly marked), and $\theta(\Pi)$ any nonempty thinning. Then there exists a marked point process $\Upsilon$ such that the underlying point set of $\Upsilon$ is $\theta(\Pi)$, and $\Upsilon$ is isomorphic to $\Pi$ \emph{as a pmp action}.
In particular, $\Upsilon$ is a free action. The same can be achieved with $\Upsilon$ having marks from the \emph{compact} space $[0,1]$. \end{prop} \begin{proof} Let $\Upsilon = \theta(\odot(\Pi))$, that is, \[ \Upsilon = \{ (g, g^{-1}\Pi) \in G \times {\mathbb{M}_0} \mid g \in \theta(\Pi) \}. \] Observe that this is an \emph{injective} map, as one can recover $\Pi$ uniquely from the knowledge of any point of $\Upsilon$ and its label, and so $\Upsilon$ is an isomorphic process to $\Pi$. For the second statement, simply fix a Borel isomorphism\footnote{It exists as ${\mathbb{M}_0}$ is a Polish space, and thus standard Borel, and all standard Borel spaces of the same cardinality are isomorphic.} $I : {\mathbb{M}_0} \to [0,1]$, and define \[ \Upsilon = \{ (g, I(g^{-1}\Pi)) \in G \times [0,1] \mid g \in \theta(\Pi) \}. \]\end{proof} \begin{proof}[Proof of Proposition \ref{abstractlyisom}] Suppose $\Pi$ is a free $\Xi$-marked point process with law $\mu$. We can (and do) assume that $\Pi$ is abstractly isomorphic to a $\delta$-uniformly separated process with a slightly different (but nevertheless standard Borel) mark space by using the previous two propositions. Let $X$ denote the space: \[ X = \{ \omega \in {\mathbb{M}_0}(B(0, \delta/100)) \mid \omega \cap B(0, \delta/200) = \{0\}, \text{ and } \forall x \in \omega \setminus \{0\}, \@ifstar{\oldabs}{\oldabs*}{\omega \cap B(x, \delta/200)} > 1 \}. \] This is a Borel subset of a standard Borel space, and hence standard Borel in its own right. One can readily see that it is uncountable, and hence there is a Borel isomorphism $I: \Xi \to X$. Define the following factor map: \begin{align*} &\Phi : \Xi^\MM \to \MM \\ &\Phi(\omega) = \bigcup_{x \in \omega} x I(\xi_x), \end{align*} where $\xi_x$ denotes the label of $x$ (that is, $(x, \xi_x) \in \omega$). This is an injective map: we can recover the underlying set of any input configuration to $\Phi$ by identifying the points which are $\delta/200$-isolated.
We can then uniquely recover their labels by applying the inverse of $I$ locally. \end{proof} \subsection{Cost is finite for compactly generated groups} \begin{prop}\label{finitecost} Suppose $G$ is \emph{compactly generated} by $S \subseteq G$. Then every free point process $\Pi$ on $G$ has finite cost. \end{prop} Implicitly we are assuming that $\Pi$ has finite intensity, so that its cost is defined. We recall some definitions and facts from metric geometry, see \cite{MR3561300} for further details in the specific context we are interested in. \begin{defn}\label{metricdefs} Let $(X, d)$ be a metric space. \begin{itemize} \item $(X, d)$ is \emph{coarsely connected} if there exists $c > 0$ such that for all $x, x' \in X$ there are points $x_1, x_2, \ldots, x_n \in X$ with $x = x_1$, $x_n = x'$, and $d(x_i, x_{i+1}) \leq c$ for all $i$. \item A subset $\omega \subseteq X$ is \emph{uniformly discrete} if there exists $\e > 0$ such that $d(x, y) > \e$ for all distinct $x, y \in \omega$. \item A subset $\omega \subseteq X$ is \emph{coarsely dense} if there exists $r > 0$ such that for every $x \in X$, $d(x, \omega) < r$. \item A \emph{Delone set} is a subset $\omega \subseteq X$ which is both uniformly discrete and coarsely dense. \item An \emph{$\e$-net} is a subset $\omega \subseteq X$ which is $\frac{\e}{2}$ uniformly discrete and $\e$ coarsely dense. \end{itemize} \end{defn} \begin{thm}[See Proposition 1.D.2 of \cite{MR3561300}] Let $G$ be an lcsc group with a left-invariant proper metric $d$ which generates its topology. Then $G$ is compactly generated if and only if it is coarsely connected. \end{thm} Note that if $X$ is coarsely connected, then so too is any coarsely dense subset of $X$. \begin{defn} Let $S \subseteq G$ be a compact and symmetric generating set. The \emph{Cayley factor graph} associated to $S$ is the map $\Cay(\bullet, S) : \MM \to \graph(G)$ given by \[ \Cay(\omega, S) = \{ (g, gs) \in \omega \times \omega \mid s \in S \}. 
\] \end{defn} Note that this graph is not necessarily connected, for instance for the Poisson point process. However, if $\Pi$ is a point process which is almost surely $c$-coarsely-connected for $c$ such that $B(0,c) \subseteq S$ then $\Cay(\Pi, S)$ is connected. This condition can always be satisfied by replacing $S$ with an appropriate power $S^k$ of the generating set, since $S^k$ exhausts $G$ and in particular must contain $B(0,c)$ for $k$ sufficiently large. The following can be readily deduced from existing results in the literature (even removing the compact generation assumption), but we include a separate proof for completeness. \begin{prop}\label{factoronnet} Suppose $\Pi$ is a free and ergodic point process on a compactly generated group $G$. Then for every $R > 0$ there exists a \emph{finite intensity} thickening $\Theta$ of $\Pi$ such that $\Theta(\Pi)$ is almost surely $R$-coarsely-dense. Moreover, if $\Pi$ is $\delta$-separated (with $\delta < 2R$), then $\Theta$ will also be $\delta$-separated. \end{prop} \begin{proof} It suffices to prove the statement for ergodic processes. Fix $R > 0$. We will construct a factor map $\Phi$ of $\Pi$ such that $\Phi(\Pi)$ is $\frac{R}{2}$ uniformly separated and $\Theta(\Pi) := \Pi \sqcup \Phi(\Pi)$ is $R$-coarsely dense. The uniform separation then implies that this thickening has finite intensity. The idea of the proof is the following: observe that every uniformly separated subset of a metric space is a subset of a Delone set. One can prove this using the well-ordering principle or Zorn's lemma (according to taste). Now consider a sample $\Pi$ from the point process. We know there are \emph{some} ways to add points to it to get something coarsely dense; the only difficulty is that we are required to make these choices equivariantly. We will select points that see the ``frontier'' of the process, which will then add points to cover a piece of the frontier.
At every stage the frontier gets smaller, and in the limit we cover the whole space. For configurations $\omega \in \MM$, let $\omega^t$ denote the following closed set \[ \omega^t = \bigcup_{g \in \omega} B(g, t), \] that is, the union of all \emph{closed} balls about the points of $\omega$. If $\Pi$ is a point process, then $\Pi^t$ is a random closed subset of $G$. We call a point $g \in \Pi$ \emph{on the frontier} if $B(g, c_1 R) \not\subseteq \Pi^R$, where $c_1 > 1$ is some parameter to be chosen later, and let $F(\Pi)$ denote the subset of frontier points of $\Pi$. This is a metrically defined condition, and hence equivariant. We will define a rule $\Phi_1(\Pi)$ that specifies a collection of points such that their $R$-balls cover all the $c_1 R$-balls of the frontier points of $\Pi$. We will then iterate this construction (so that $\Phi_2(\Pi)$'s $R$-balls cover the $c_2 R$-balls of $\Phi_1(\Pi) \cup \Pi$'s frontier points, for some $c_2 > c_1$, and so on). In this way we will find enough points to cover the whole space. Choose $c_1$ large such that $\PP[ \Pi^{c_1 R} \setminus \Pi^R \neq \empt] = 1$. If this is not possible, then the process is already $R$-coarsely-dense by ergodicity. One can decompose the frontier points of $\Pi$ as \[ F(\Pi) = \bigsqcup_n F_n(\Pi), \] where each $F_n(\Pi)$ is $10 c_1 R$ uniformly separated. This can be done by using the existence of a \emph{Borel kernel} of the factor graph $\mathscr{D}_{10 c_1 R}(\Pi_0)$ defined on the frontier points of $\Pi_0$, see Section 4 of \cite{KECHRIS19991} for further information on Borel kernels. Note that by using the Palm process, the resulting sets $F_n(\Pi)$ are equivariantly defined. We now fix an auxiliary (deterministic) $R$-net $\mathcal{N} \subset G$. If $W \subseteq G$ is a Borel region and $g \in G$, then let \[ N(g, W) = \{ x \in g^{-1}\mathcal{N} \mid B(x, R) \cap W \neq \empt \}. \] Note that $N(g,W)^R \supseteq W$, as $\mathcal{N}$ is coarsely dense. 
Define \[ \Phi_1(\Pi) = \bigcup_{g \in F_1(\Pi)} N(g, B(g, c_1 R) \setminus \Pi^R), \] and inductively \[ \Phi_{n+1}(\Pi) = \bigcup_{g \in F_{n+1}(\Pi)} N(g, B(g, c_1 R) \setminus \left(\Pi \cup \bigcup_{i \leq n} \Phi_i(\Pi) \right)^R ). \] Then \[ \Pi^{c_1 R} \subseteq \Pi^R \cup \bigcup_{n \geq 1} \Phi_n(\Pi)^R. \] We now repeat this procedure as many times as necessary (possibly countably infinitely many times) until we construct the desired thickening $\Theta$. The only care necessary is that one should choose the parameters $c_1 < c_2 < \cdots$ so that they tend to infinity, as \[ G = \bigcup_{n \geq 1} \Pi^{c_n R} \] for any such sequence. \end{proof} \begin{remark} In Section \ref{crosssectionappendix} we describe the connection between point processes and ``cross-sections'' of actions. The previous proposition can be deduced from the fact that every free action admits a ``cocompact cross-section''. A similar statement to the proposition directly phrased in terms of cross-sections can be found in Section 2 of \cite{slutsky2017lebesgue}, where it is shown that any cross-section can be extended to a cocompact cross-section. That proof works without the compact generation assumption. \end{remark} \begin{proof}[Proof of Proposition \ref{finitecost}] It suffices to consider the case where $\Pi$ is ergodic, see \cite{kechris2004topics} Corollary 18.6. If $\Pi$ is $\delta$-separated, then the thickening constructed in Proposition \ref{factoronnet} has finite cost for the Cayley factor graph, which is connected (for a suitable choice of generating set). Cost is monotone for factors, so $\Pi$ itself has finite cost. Otherwise, choose $\delta$ sufficiently small so that the $\delta$-thinning of $\Pi$ is non-empty almost surely. By label trickery (Proposition \ref{labeltrickery}), $\Pi$ is isomorphic to a (marked) $\delta$-separated point process, reducing to the previous case.
\end{proof} \section{The Poisson point process has maximal cost} We begin with the observation that every IID process factors onto the Poisson: \begin{prop}\label{factorontopoisson} Let $\Pi$ be a point process on $G$. Then $[0,1]^\Pi$ factors onto the Poisson point process. \end{prop} \begin{proof} Fix a map $F : [0,1] \to \MM(G)$ such that if $\xi \sim \texttt{Unif}[0,1]$, then $F(\xi)$ is a Poisson point process on $G$ of unit intensity. We will use the Voronoi tessellation to glue independent copies of the Poisson point process in each cell, resulting in a Poisson point process. Define a factor map $\Phi([0,1]^\Pi)$ by \[ \Phi([0,1]^\Pi) = \bigcup_{g \in \Pi} \left( g \cdot F(\xi_g) \right) \cap V_{\Pi}^T(g) , \] where $\xi_g$ denotes the label of $g$ in $[0,1]^\Pi$. Then $\Phi([0,1]^\Pi)$ is the Poisson point process. \end{proof} In particular, the cost of every IID process is at most the cost of the Poisson. Our goal is to prove an \emph{asymptotic} version of this statement. We will show that every free point process ``weakly'' factors onto an IID process, and that cost is monotone for (certain) weak factors. This will prove that the Poisson point process has maximal cost amongst all free point processes. \subsection{Weak factoring and Ab\'{e}rt-Weiss for point processes} We have seen that cost is monotone under factor maps. We will now introduce a weaker version of factoring and investigate its relationship to cost: \begin{defn} Let $\Pi$ and $\Upsilon$ be point processes. Then $\Pi$ \emph{weakly factors} onto $\Upsilon$ if there is a sequence $\Phi^n$ of factors of $\Pi$ such that $\Phi^n(\Pi)$ weakly converges\footnote{For more information on weak convergence in the context of point processes, see Appendix \ref{metricproperties}.} to $\Upsilon$. \end{defn} The restive reader is advised to take a look at the statements of Theorem \ref{abertweiss} and Theorem \ref{costmonotonicity}.
These are the tools that will be used to prove the headline theorem of this section. The other results in this section are necessary but have a more routine flavour. \begin{thm}\label{iidamenableweakfactor} Let $\Pi$ and $\Upsilon$ be point processes on an amenable group $G$. If $\Pi$ is free, then $[0,1]^\Pi$ weakly factors onto $\Upsilon$. \end{thm} The proof of this uses a lemma, a proof of which can be found in a concurrently appearing paper by the second author: \begin{lem}\label{hyperfinitelemma} If $\Pi$ is a free point process on an amenable group $G$, then there exist factor partitions $\mathcal{P}_n(\Pi) = \{P^n_g\}_{g \in \Pi}$ with the following properties: \begin{description} \item[Equivariance:] $P^n_{\gamma g} = \gamma P^n_g$, \item[Partitioning:] For each $n$, $G$ is the union of $\{P^n_g\}_{g \in \Pi}$, and if $g, h \in \Pi$ then $P^n_g = P^n_h$ or $P^n_g \cap P^n_h = \empt$, \item[Increasing:] For each $n$, $P^n_g \subseteq P^{n+1}_g$, \item[Exhausting:] For all compact $C \subseteq G$, there exists $N$ and $g \in \Pi$ such that $C \subseteq P^N_g$, \item[Finite volume:] For all $n$, $0 < \lambda(P^n_g) < \infty$, \item[Finitariness:] For each $n$, $P^n_g \cap \Pi$ is finite (and contains $g$). \end{description} \end{lem} \begin{figure}[h] \includegraphics[scale=0.5]{clumping.pdf} \centering \caption{The factor partitions from Lemma \ref{hyperfinitelemma} are ``clumpings'', and should be visualised like this.}\label{clumpingfigure} \end{figure} \begin{proof}[Proof of Theorem \ref{iidamenableweakfactor}] Let $f : [0,1] \to \MM$ be a measurable map with $f(\xi) \sim \Upsilon$ if $\xi \sim \texttt{Unif}[0,1]$. Choose factor partitions $\mathcal{P}_n(\Pi) = \{P^n_g\}_{g \in \Pi}$ as in Lemma \ref{hyperfinitelemma}. Let $\Pi_n$ be an equivariantly defined subprocess of $\Pi$ which consists of one point chosen out of each cell $P^n_g$ -- we are able to do this by essential freeness.
For instance, fix a Borel isomorphism of ${\mathbb{M}_0}$ with $[0,1]$ and note that the induced label in $[0,1]$ at each point of $\Pi$ is distinct for all points (essential freeness), and thus we may choose $\Pi_n$ to consist of the point with maximal label within its cell. Define factors $\Phi_n$ as follows: \[ \Phi_n([0,1]^\Pi) = \bigcup_{g \in \Pi_n} g \cdot (f(\xi_g) \cap P^n_g). \] That is, in each cell we glue a copy of the process $\Upsilon$ sampled according to the label $\xi_g$ on $g$. It follows immediately that $\Phi_n([0,1]^\Pi)$ weakly converges to $\Upsilon$: if $C \subseteq G$ is any compact stochastic continuity set for $\Upsilon$, then for sufficiently large $N$ it is entirely contained in some $P^N_g$, and thus the point counts of $\Phi_n([0,1]^\Pi)$ inside $C$ are distributed exactly as those of $\Upsilon$. \end{proof} The following statement is due to Ab\'{e}rt and Weiss \cite{abert2013bernoulli} for discrete groups; we extend it to point processes: \begin{thm}\label{abertweiss} Let $\Pi$ be an essentially free point process on a noncompact group $G$. Then $\Pi$ weakly factors onto $[0,1]^\Pi$, its own IID. \end{thm} \begin{proof} It suffices to show that $\Pi$ weakly factors onto $[d]^\Pi$, where $[d] = \{ 1, 2, \ldots, d\}$ is equipped with the uniform measure. Here $[d]^\Pi$ is thus the \emph{finitary} IID of $\Pi$. This suffices as $[d]^\Pi$ weakly converges to $[0,1]^\Pi$ as $d \to \infty$. We will do this by constructing factor \emph{$[d]$-labellings} $\mathscr{C}_n$ of $\Pi$ such that $\mathscr{C}_n(\Pi)$ weakly converges to $[d]^\Pi$. To do this, we will use the second moment method, hewing close to the original Ab\'{e}rt-Weiss recipe. The strategy will be as follows. Consider the set of $[d]$-labellings of $\Pi$. We will study a probabilistic model that produces a \emph{random element} of this space. We will show that this random \emph{deterministic colouring} satisfies certain constraints with positive probability.
In particular, there must exist a $[d]$-labelling satisfying those constraints. By adjusting the parameters of this model, one can produce the desired sequence $\mathscr{C}_n$. Fix a \emph{countable} weak convergence determining family $\{V_i\}$ as discussed in Lemma \ref{determiningclass}, so that the sets $V_i \subset G \times [d]$ are bounded stochastic continuity sets for $[d]^\Pi$. We will construct a sequence of factor colourings $\mathscr{C}_n$ of $\Pi$ such that for fixed $k$, \[ N_{\boldsymbol{V}_k}(\mathscr{C}_n \Pi) \text{ converges weakly to } N_{\boldsymbol{V}_k}([d]^\Pi), \] where $\boldsymbol{V}_k = (V_1, V_2, \ldots, V_k)$. Set $W_k = \bigcup_{i \leq k} V_i$ to be the total window. Formally this is a subset of $G \times [d]$, but we view it as a subset of $G$. For $\e > 0$ arbitrary, we choose $\delta > 0$ so small that the events \[ A_\delta = \{ \omega \in \MM \mid \text{for all } g, h \in \omega \cap W_k, g \neq h \text{ implies } d(g^{-1}\omega, h^{-1}\omega) > \delta \} \] and \[ B_\delta = \{ (\omega, \omega') \in \MM \times \MM \mid \text{for all } (g, h) \in (\omega \cap W_k) \times (\omega' \cap W_k),\; d(g^{-1}\omega, h^{-1}\omega') > \delta \} \] satisfy $\mu(A_\delta) > 1 - \e$ and $(\mu \otimes \mu)(B_\delta) > 1 - \e$, where $\mu$ denotes the law of $\Pi$. This is possible by essential freeness (for $A_\delta$) and essential freeness and noncompactness of $G$ (for $B_\delta$). To see this, let us formulate essential freeness in the following way: \[ \mu(\{ \omega \in \MM \mid \text{ for all } g, h \in \omega, g \neq h \text{ implies } g^{-1}\omega \neq h^{-1}\omega \}) = 1. \] This remains an almost sure event if we restrict $g$ and $h$ to lie in the window $W_k$. Now observe that $A_\delta$ increases as $\delta$ tends to zero to this almost sure event.
The argument for $B_\delta$ is similar, but depends on the fact that the set \[ B_0 = \{(\omega, \omega') \in \MM \times \MM \mid \text{for all } g \in \omega, h \in \omega', g^{-1}\omega \neq h^{-1}\omega' \} \] is $\mu \otimes \mu$ almost sure, which is less immediate and we now show. Observe that $\mu \otimes \mu$ defines a point process on $G \times G$ of intensity $\intensity(\mu)^2$, and Palm measure $\mu_0 \otimes \mu_0$. By the correspondence of measures between $\mu \otimes \mu$ and $\mu_0 \otimes \mu_0$, we equivalently ask that \[ (\mu_0 \otimes \mu_0)(\{ (\omega, \omega') \in {\mathbb{M}_0} \times {\mathbb{M}_0} \mid \omega \neq \omega'\}) = 1, \] or equivalently that $\mu_0$ has no atoms. This is contradicted by essential freeness: if $\omega$ is an atom of $\mu_0$, then $\omega$ is shift invariant, that is, $g^{-1}\omega = \omega$ for all $g \in \omega$. This implies that $\omega$ is in fact a \emph{subgroup} of $G$, and it is discrete by definition. Now the ergodic component of $\mu$ corresponding to $\omega$ is supported on $G\omega$, and thus defines a $G$-invariant probability measure on $G/\omega$; that is, it is a lattice shift. But $\mu$ was assumed to be essentially free. We now construct a \emph{random} colouring $\mathscr{C}$ of $\Pi$ in the following way: let \[ {\mathbb{M}_0} = \bigsqcup_i D_i, \text{ where } \diam(D_i) < \delta, \] be a partition of ${\mathbb{M}_0}$ into small measurable sets. By the correspondences we have described, any $[d]$-colouring of the sets $D_i$ corresponds to a factor colouring $\mathscr{C} : \MM \to [d]^\MM$ in the following way: \[ \mathscr{C}(\omega) = \{(g, c) \in \omega \times [d] \mid g^{-1}\omega \in {\mathbb{M}_0} \text{ is coloured by } c \}. \] We look at such $\mathscr{C}$ when the $D_i$ sets are coloured \emph{uniformly at random} by elements of $[d]$. To emphasise: we are considering a \emph{distribution} on \emph{deterministic} colourings.
For an integral vector $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_k) \in \NN_0^k$, we set \[ T_\alpha = \{ \omega \in [d]^\MM \mid (N_{V_1}(\omega), \ldots, N_{V_k}(\omega)) = \alpha \} \] to be the set of configurations whose point/colour statistics in $W_k$ are prescribed by $\alpha$. Note that $\mathscr{C}_*\mu(T_\alpha)$ is a random variable (whose source of randomness is $\mathscr{C}$). Given $k$ and $M$, we use the second moment method to prove the existence of $\mathscr{C}$ such that for all $\alpha \in \NN_0^k$ with $\@ifstar{\oldnorm}{\oldnorm*}{\alpha}_\infty \leq M$, \[ \@ifstar{\oldabs}{\oldabs*}{\mathscr{C}_*(\mu) (T_\alpha) - [d]^\mu(T_\alpha)} < \e. \] Then any sequence of such colourings with $k, M$ tending to infinity and $\e$ tending to zero will witness that $\Pi$ weakly factors onto $[d]^\Pi$. Exchanging the order of integration allows us to express the mean of $\mathscr{C}_*(\mu) (T_\alpha)$ as \begin{align*} \EE \left[ \mathscr{C}_*(\mu) (T_\alpha) \right] &= \EE[ \mu(\mathscr{C}^{-1}(T_\alpha))] \\ &= \EE\left[ \int_\MM \mathbbm{1}[\mathscr{C}(\omega) \in T_\alpha] d\mu(\omega)\right] \\ &= \int_\MM \EE\left[ \mathbbm{1}[\mathscr{C}(\omega) \in T_\alpha]\right] d\mu(\omega). \end{align*} Note that for $\omega \in A_\delta$, all pairs of distinct points $g,h \in \omega$ from the window $W_k$ have the property that $g^{-1}\omega$ and $h^{-1}\omega$ fall into different $D_i$ sets, and are therefore assigned \emph{independent} colours. Thus \[ \text{ for } \omega \in A_\delta, \quad \EE \left[\mathbbm{1}[\mathscr{C}(\omega) \in T_\alpha]\right] = [d]^\mu(T_\alpha). \] As $\mu(A_\delta) > 1 - \e$, it follows that \[ \@ifstar{\oldabs}{\oldabs*}{ \EE \left[ \mathscr{C}_*\mu(T_\alpha) \right] - [d]^\mu(T_\alpha) } < 2\e. \] We now work on the variance.
Again, exchanging the order of integration in a similar way to before allows us to express the mean of $(\mathscr{C}_*(\mu) (T_\alpha))^2$ as \[ \EE \left[ (\mathscr{C}_*(\mu) (T_\alpha))^2 \right] = \iint_{\MM \times \MM} \EE \left[ \mathbbm{1}[\mathscr{C}(\omega) \in T_\alpha] \mathbbm{1}[\mathscr{C}(\omega') \in T_\alpha] \right] \,d\mu(\omega) \,d\mu(\omega'). \] By similar reasoning to before, for $(\omega, \omega') \in (A_\delta \times A_\delta) \cap B_\delta$, the colours one will see at points in $W_k$ will be independent. Thus for such $(\omega, \omega')$ we have \[ \EE \left[ \mathbbm{1}[\mathscr{C}(\omega) \in T_\alpha] \mathbbm{1}[\mathscr{C}(\omega') \in T_\alpha] \right] = \left( [d]^\mu(T_\alpha) \right)^2. \] Note that $(A_\delta \times A_\delta) \cap B_\delta = (A_\delta \times \MM) \cap (\MM \times A_\delta) \cap B_\delta$, so by the union bound $(\mu \otimes \mu)((A_\delta \times A_\delta) \cap B_\delta) > 1 - 3\e$. Putting this together, \[ \Var(\mathscr{C}_*(\mu) (T_\alpha)) = \EE \left[ (\mathscr{C}_*(\mu) (T_\alpha))^2 \right] - \left(\EE \left[ \mathscr{C}_*\mu(T_\alpha) \right]\right)^2 < 12\e. \] We now apply Chebyshev's inequality, which states that for any $c > 0$, \[ \PP\left[ \@ifstar{\oldabs}{\oldabs*}{\mathscr{C}_*(\mu) (T_\alpha) - \EE[\mathscr{C}_*(\mu) (T_\alpha)]} > c \right] < \frac{\Var(\mathscr{C}_*(\mu) (T_\alpha))}{c^2}. \] Our bounds on the mean and the variance of $\mathscr{C}_*(\mu) (T_\alpha)$ and the choice $c = \e^{\frac{1}{3}}$ yield \[ \PP\left[ \@ifstar{\oldabs}{\oldabs*}{\mathscr{C}_*(\mu) (T_\alpha) - [d]^\mu(T_\alpha)} > \e^{\frac{1}{3}}+2\e \right] < 12\e^{\frac{1}{3}}. \] Let $E_\alpha$ denote the event $\{\@ifstar{\oldabs}{\oldabs*}{\mathscr{C}_*(\mu) (T_\alpha) - [d]^\mu(T_\alpha)} < \e^{\frac{1}{3}} + 2\e \}$. Then by the union bound \[ \PP[ \bigcap_{\substack{\alpha \in \NN_0^k \\ \@ifstar{\oldnorm}{\oldnorm*}{\alpha}_\infty \leq M}} E_\alpha] \geq 1 - 12(M+1)^k \e^{\frac{1}{3}}.
\] In particular, by choosing $\e$ sufficiently small, such a colouring exists. \end{proof} \begin{prop}\label{weakfactoringtransitive} Suppose $\Pi$ and $\Upsilon$ are point processes, with $\Pi$ weakly factoring onto $\Upsilon$ and $\Psi(\Upsilon)$ being a factor of $\Upsilon$. Then $\Pi$ weakly factors onto $\Psi(\Upsilon)$. It follows that weak factoring is a \emph{transitive} notion. \end{prop} \begin{proof} We have seen that every factor map decomposes as the composition of a (coloured) thickening and a thinning. We are therefore able to reduce the problem to the cases where $\Psi$ is a thinning, a colouring, or a thickening. We will repeatedly use the following fact: if $\Pi$ weakly factors onto $\Psi_m(\Upsilon)$ for a sequence of factors $\Psi_m$, and $\Psi_m$ converges to $\Psi$ pointwise, then $\Pi$ weakly factors onto $\Psi(\Upsilon)$. We begin with the case of thinnings. Let $\Phi^n(\Pi)$ weakly converge to $\Upsilon$. We write $\mu$ and $\nu$ for the laws of $\Pi$ and $\Upsilon$ respectively. Let $\theta^A$ be a thinning of $\Upsilon$, determined by a subset $A \subseteq ({\mathbb{M}_0}, \nu_0)$ as in Theorem \ref{correspondencetheorem}. The idea of the proof is this: if $A$ were a $\nu_0$ continuity set, then the corresponding thinning $\theta^A : \MM \to \MM$ would be continuous $\nu$ almost everywhere, and so $\theta^A(\Phi^n(\Pi))$ would converge weakly to $\theta^A(\Upsilon)$ by Lemma \ref{continuity}. We handle the general case by approximating $A$ by $\nu_0$ continuity sets. \begin{claim} If $A$ is a $\nu_0$ continuity set, then $\theta^A : \MM \to \MM$ is continuous $\nu$ almost everywhere. \end{claim} To see this, recall the \emph{saturation} notion we used in the proof of Theorem \ref{correspondencetheorem}. We have assumed $\nu_0(\partial A) = 0$, and hence $\nu(G \partial A) = 0$ too. Then $\theta^A$ is continuous on the complement of this set. Note that if $\omega \not\in G \partial A$, then $g^{-1}\omega \not\in \partial A$ for all $g \in \omega$.
One can now see that if $\omega_n$ converges to $\omega$, then $\theta^A(\omega_n)$ restricted to any fixed radius ball is eventually equal to $\theta^A(\omega)$, as desired. For the general case, let $A_m \subseteq {\mathbb{M}_0}$ be $\nu_0$-continuity sets such that \[ \nu_0(A \triangle A_m) < \frac{1}{ 2^m}. \] Then for every $m$, we have $\theta^{A_m}(\Phi^n(\Pi)) \to \theta^{A_m}(\Upsilon)$ by our earlier argument. By Borel-Cantelli, $A \triangle A_m$ does not occur infinitely often. This is true for the saturation, so we see that $\theta^{A_m} \to \theta^{A}$ pointwise almost surely and hence also in distribution. By choosing an appropriate subsequence of $m$s and $n$s we find our desired sequence of factor maps. The above proof for thinnings can be immediately adapted to prove that $\Pi$ weakly factors onto $\Psi(\Upsilon)$ if $\Psi$ is any $[d]$-colouring of $\Upsilon$. Since any colouring is a pointwise limit of finitary colourings, we see that $\Pi$ weakly factors onto \emph{any} colouring of $\Upsilon$. Finally, suppose $\Psi$ is a thickening of $\Upsilon$. By using the Voronoi tessellation we may express $\Psi$ in the following form: \[ \Psi(\omega) = \bigcup_{g \in \omega} g F(g^{-1}\omega), \] where $F : {\mathbb{M}_0} \to {\mathbb{M}_0}$ is a measurable function. We say that $\Psi$ is a \emph{bounded range} thickening if there exists $C > 0$ such that $F(\Upsilon_0) \subseteq B(0, C)$ almost surely. It is easy to show that $\Psi$ is the pointwise limit of such thickenings, so we are reduced to this case. Define $I : {\mathbb{M}_0}(B(0, C))^\MM \to \MM$ by \[ I(\omega) = \bigcup_{g \in \omega} g \xi_g, \] where $\xi_g$ is the label of $g$ in $\omega$, that is, $(g, \xi_g) \in \omega$. This is the \emph{implementation} map: it takes a schema for a thickening and implements it. \begin{claim} The implementation map $I$ is continuous. 
\end{claim} The task is to show that given $R, \e > 0$ there exist $S, \delta > 0$ such that if $\omega$ and $\omega'$ are $(S, \delta)$-wobbles of each other, then $I(\omega)$ and $I(\omega')$ are $(R, \e)$-wobbles of each other. The idea is simply that the behaviour of $I(\omega)$ in a ball of radius $R$ is determined by $\omega$ restricted to the ball of radius $R + C$, as the thickening is of bounded range $C$. By choosing $\delta$ sufficiently small (depending on the labels of the points in $\omega \cap B(0, R + C)$), we find our desired $S$ and $\delta$. With the claim in hand, the desired result follows from Lemma \ref{continuity}. \end{proof} \begin{cor}\label{amenableequivalent} Let $\Pi$ and $\Upsilon$ be point processes on an amenable group, with $\Pi$ free. Then $\Pi$ weakly factors onto $\Upsilon$. \end{cor} \begin{proof} By Theorem \ref{abertweiss} $\Pi$ weakly factors onto $[0,1]^\Pi$, and by Theorem \ref{iidamenableweakfactor} $[0,1]^\Pi$ weakly factors onto $\Upsilon$. Hence the claim follows from Proposition \ref{weakfactoringtransitive}. \end{proof} \begin{lem}\label{weakfactorimpliesiiweakfactor} Suppose $\Pi_n$ weakly converges to $\Pi$. Then $[0,1]^{\Pi_n}$ weakly converges to $[0,1]^{\Pi}$. \end{lem} \begin{proof} This can be seen, for instance, by verifying that the finite dimensional distributions of $[0,1]^{\Pi_n}$ weakly converge to those of $[0,1]^\Pi$. Recall that a $[0,1]$-marked point process on $G$ is just a particular kind of point process on $G \times [0,1]$. It therefore suffices to check weak convergence of the finite dimensional distributions against stochastic continuity sets of $[0,1]^\Pi$ in product form. To that end, let $\boldsymbol{V} = (V_1, V_2, \ldots, V_k)$ denote a collection of stochastic continuity sets for $\Pi$, and $[0, \boldsymbol{p}) = ([0, p_1), [0, p_2), \ldots, [0, p_k))$ a family of intervals in $[0,1]$.
We denote by $\boldsymbol{V} \times [0, \boldsymbol{p}) = (V_1 \times [0, p_1), \ldots, V_k \times [0, p_k))$ the stochastic continuity set of $[0,1]^\Pi$. Fix an integral vector $\boldsymbol{\alpha} \in \NN_0^k$. We must show that $\PP[N_{\boldsymbol{V} \times [0, \boldsymbol{p})} [0,1]^{\Pi_n} = \boldsymbol{\alpha}]$ converges to $\PP[N_{\boldsymbol{V} \times [0, \boldsymbol{p})} [0,1]^{\Pi} = \boldsymbol{\alpha}]$. We find the following explicit expression simply by conditioning on $\boldsymbol{\beta}$, the vector of point counts in $\boldsymbol{V}$: \begin{align*} \PP[N_{\boldsymbol{V} \times [0, \boldsymbol{p})} [0,1]^{\Pi_n} = \boldsymbol{\alpha}] &= \sum_{\boldsymbol{\beta} \geq \boldsymbol{\alpha}} \PP[N_{\boldsymbol{V} \times [0, \boldsymbol{p})} [0,1]^{\Pi_n} = \boldsymbol{\alpha} \mid N_{\boldsymbol{V}} \Pi_n = \boldsymbol{\beta}] \PP[N_{\boldsymbol{V}} \Pi_n = \boldsymbol{\beta}]\\ &= \sum_{\boldsymbol{\beta} \geq \boldsymbol{\alpha}} \prod_{i=1}^k \binom{\beta_i}{\alpha_i} p_i^{\alpha_i} (1-p_i)^{\beta_i - \alpha_i} \PP[N_{\boldsymbol{V}} \Pi_n = \boldsymbol{\beta}], \end{align*} where by $\boldsymbol{\beta} \geq \boldsymbol{\alpha}$ we mean that $\beta_i \geq \alpha_i$ for each entry; the binomial factor appears because, given $\beta_i$ points each carrying an independent uniform label, the number of labels landing in $[0, p_i)$ is $\text{Binomial}(\beta_i, p_i)$. There is an identical expression for $\Pi$ (simply replace all instances of $\Pi_n$ by $\Pi$). The conclusion follows, as $\PP[N_{\boldsymbol{V}} \Pi_n = \boldsymbol{\beta}]$ converges to $\PP[N_{\boldsymbol{V}} \Pi = \boldsymbol{\beta}]$ for all $\boldsymbol{\beta}$, and pointwise convergence of probability mass functions upgrades to convergence of such sums (Scheff\'e's lemma). \end{proof} We have seen that all free point processes are able to \emph{weakly} factor onto their own IID. It is natural to ask if all this hassle was worth it -- can a point process always factor directly onto its own IID? \begin{thm}[Holroyd, Lyons, Soo\cite{MR2884878}] The Poisson point process cannot be split into two \emph{independent} Poisson point processes of lower intensity without additional randomness.
More precisely, there does not exist a \emph{deterministic} two-colouring $\mathscr{C} : (\MM, \PPP) \to \{0, 1\}^\MM$ such that $\mathscr{C}_* \PPP$ is the IID $\texttt{Ber}(p)$ labelled Poisson point process for $0 < p < 1$. \end{thm} \begin{example} Some point processes \emph{can} factor onto their own IID, however. Note that taking the IID of a point process is idempotent, in the sense that \[ [0,1]^{[0,1]^\Pi} \cong ([0,1]^2)^\Pi \cong [0,1]^\Pi. \] For an unlabelled example, one can simply \emph{spatially implement} $[0,1]^\Pi$. That is, using the method sketched in Proposition \ref{abstractlyisom} one can find an unlabelled point process $\Upsilon$ (abstractly) isomorphic to $[0,1]^\Pi$, and thus $[0,1]^\Upsilon \cong \Upsilon$. \end{example} \subsection{Cost monotonicity for (certain) weak factors}\label{certainfactors} In this section we will always assume $G$ is compactly generated by $S \subset G$. \begin{question} Suppose $\Pi$ weakly factors onto $\Upsilon$. Is it true that $\cost(\Pi) \leq \cost(\Upsilon)$? That is, is cost monotone for weak factors? \end{question} This is the \emph{real} theorem that we would like to prove. We are only able to prove the following theorem, which implies that cost is monotone for certain weak factors: \begin{thm}\label{costmonotonicity} Suppose $\Pi_n$ is a sequence of point processes that weakly converge to $\Pi$. Then \[ \limsup_{n \to \infty} \cost(\Pi_n) \leq \cost(\Pi) \] holds in the following cases: \begin{enumerate} \item If there exist $\delta, R > 0$ such that $\Pi_n$ and $\Pi$ are all $\delta$ uniformly separated and $R$ coarsely connected. \item If all the $\Pi_n$ are free and $\Pi$ is $\delta$ uniformly separated for some $\delta > 0$. \end{enumerate} Moreover, the same statements are true if the point processes have labels from a \emph{compact} mark space $\Xi$. \end{thm} We will need an auxiliary lemma, which we will use again later: \begin{lem}\label{continuousthinning} Let $\Pi$ be a point process with law $\mu$.
Then for all but countably many $\delta > 0$, the $\delta$-metric-thinning map $\theta^\delta : \MM \to \MM$ is continuous $\mu$ almost everywhere. In particular, if $\Pi_n$ weakly converges to $\Pi$, then $\theta^\delta(\Pi_n)$ weakly converges to $\theta^\delta(\Pi)$. \end{lem} To prove the lemma, simply note that any $\delta$ such that $B_G(0, \delta)$ is a stochastic continuity set for $\Pi_0$ works. \begin{proof}[Proof of Theorem \ref{costmonotonicity}.] We prove (1), and then show how to reduce (2) to (1). By increasing $S$ if necessary, we may assume that the Cayley factor graph of $\Pi_n$ and $\Pi$ with respect to $S$ is connected almost surely. Denote the distributions of $\Pi_n$ and $\Pi$ by $\mu_n$ and $\mu$ respectively. We call a factor graph $\mathscr{G}$ a \emph{$\mu$-continuity factor graph} if it has the property that \[ \lim_{n \to \infty} \muarrow^n(\mathscr{G}) = \muarrow(\mathscr{G}). \] The same technique used to prove Proposition \ref{palmconvergence} shows that factor graphs of the form\footnote{Let us unpack the definition: there is an edge between $g, h \in \mathscr{G}_{A,V}(\omega)$ if $g^{-1}\omega \in A$ and $g^{-1}h \in V$. That is, each point $g$ decides if it will have edges (checks if $g^{-1}\omega \in A$), and then simply connects to all points in $gV$.} $\mathscr{G}_{A, V} = (A \times V) \cap \Marrow$, where $A \subseteq {\mathbb{M}_0}$ is a $\mu_0$ continuity set and $V \subseteq G$ is a bounded stochastic $\mu$ continuity set, are $\mu$-continuity factor graphs. The idea of the proof is that we will take a cheap graphing $\mathscr{G}$ for the limit process $\mu$, and use it to produce a cheap $\mu$-continuity graphing $\mathscr{H}$. The continuity property then gives us information about the costs of $\mu_n$, \emph{but only if we can ensure $\mathscr{H}$ is connected on $\Pi_n$}. This is why we assume coarse density.
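The footnote's unpacking of $\mathscr{G}_{A,V}$ can be made concrete in a toy one-dimensional model (a hypothetical finite sketch, with real points acting by translation, not the construction on a general group $G$): each point recentres the configuration on itself, tests membership in $A$, and then connects to every point in its window $gV$.

```python
# Toy model of the factor graph G_{A,V}: an edge (g, h) whenever the
# recentred configuration g^{-1}.omega lies in A and h - g lies in V.

def recenter(omega, g):
    """The rooted configuration g^{-1}.omega: translate so g sits at the origin."""
    return [x - g for x in omega]

def factor_graph(omega, in_A, in_V):
    """All directed edges (g, h), g != h, with g^{-1}.omega in A and g^{-1}h in V."""
    return sorted((g, h) for g in omega for h in omega
                  if g != h and in_A(recenter(omega, g)) and in_V(h - g))

omega = [0.0, 0.5, 3.0]
in_A = lambda rooted: any(0 < abs(x) <= 1 for x in rooted)  # root sees a neighbour within 1
in_V = lambda v: abs(v) <= 2                                # V = ball of radius 2

edges = factor_graph(omega, in_A, in_V)
# 0.0 and 0.5 each "decide to emit" and lie in each other's window;
# 3.0 has no neighbour within distance 1, so it emits nothing.
assert edges == [(0.0, 0.5), (0.5, 0.0)]
```

Note the asymmetry built into the definition: only the deciding point $g$ needs to satisfy the $A$-condition, which is why the edges come out directed.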
Note that by outer regularity of the measure $\muarrow$, for every factor graph $\mathscr{G}$ and $\e > 0$ there exists an \emph{open} factor graph $\mathscr{G}' \supseteq \mathscr{G}$ such that $\muarrow(\mathscr{G}') \leq \muarrow(\mathscr{G}) + \e$. Therefore in the definition of cost one can replace ``measurable graphing'' by ``open graphing''. \begin{claim} Every \emph{open} graphing $\mathscr{G}$ of $\mu$ contains a $\mu$-continuity factor graph $\mathscr{H}_N$ such that its $N$th power satisfies $\mathscr{H}_N^N \supseteq \Marrow \cap (H_\delta \times S)$. \end{claim} Here $H_\delta$ denotes the space of $\delta$ separated configurations as in Lemma \ref{hardcorecompact}. Note that this condition and the assumption on $R$ and $S$ imply that $\mathscr{H}$ is \emph{connected} on any $\delta$-uniformly separated and $R$ coarsely dense input. In particular, $\mathscr{H}(\Pi_n)$ is connected for every $n$. Let us assume that all of $\Pi_n$ and $\Pi$ have unit intensity and finish the proof: for any $\e > 0$, choose a graphing $\mathscr{G}$ of $\Pi$ such that $ \muarrow(\mathscr{G}) \leq \cost(\Pi) + \e$. Take $\mathscr{H}$ as in the above procedure. Then: \begin{align*} \limsup_{n \to \infty} &\cost(\Pi_n) \leq \limsup_{n \to \infty} \muarrow^n(\mathscr{H}) && \text{as } \mathscr{H} \text{ is a graphing of } \Pi_n \\ &= \muarrow(\mathscr{H}) && \text{since } \mathscr{H} \text{ is a } \muarrow\text{-continuity graphing} \\ &\leq \muarrow(\mathscr{G}) && \text{as } \mathscr{H} \subseteq \mathscr{G} \\ &\leq \cost(\Pi) + \e && \text{by assumption on } \mathscr{G}. \end{align*} Since $\e > 0$ was arbitrary, this proves the result for unit intensity processes. In general, the steps are the same except with the more complicated formula for cost of processes with non-unit intensity. One just needs to know that if $\Pi_n$ weakly converges to a finite intensity process $\Pi$, then $\intensity(\Pi_n)$ converges to $\intensity(\Pi)$.
This follows from weak convergence: choose a bounded stochastic continuity set $U$ for $\Pi$; the point count function $N_U$ is then continuous, and it is \emph{bounded} because we are taking uniformly separated processes. It remains to prove the claim about $\mu$-continuity factor graphs. Recall from Lemma \ref{continuitysets} that ${\mathbb{M}_0}$ and $G$ admit \emph{topological bases} $\{A_i\}$ and $\{V_j\}$ consisting of $\mu_0$-continuity sets and $\mu$ stochastic continuity sets (respectively). So by definition of the subspace topology, $\Marrow$ admits a topological basis $\{\mathscr{G}_{A_i,V_j}\}$ consisting of $\mu$-continuity factor graphs. For each $k \in \NN$ define \[ \mathscr{H}_k = \bigcup_{\substack{i, j \leq k \\ \mathscr{G}_{A_i,V_j} \subseteq \mathscr{G}}} \mathscr{G}_{A_i,V_j}. \] Since $\mathscr{H}_k$ consists of \emph{finitely many} continuity factor graphs, it is itself a continuity factor graph. Each $\mathscr{H}_k$ is also open, and increases to $\mathscr{G}$ as $k$ tends to infinity. As $\mathscr{G}$ is generating, $\{\mathscr{H}_k^k\}_{k \in \NN}$ forms an open cover of the \emph{compact}\footnote{See Lemma \ref{hardcorecompact}.} space $\Marrow \cap (H_\delta \times S)$. In particular, there exists $N$ such that $\mathscr{H}_N^N \supseteq \Marrow \cap (H_\delta \times S)$, proving the claim. One sees that the essential feature in the above proof strategy was \emph{compactness}, and therefore it remains true for $\Xi$-labelled point processes if $\Xi$ is compact, as mentioned. With this observation in hand, we can now deduce statement (2) from statement (1). We will produce a weakly convergent sequence of separated and coarsely connected point processes, whose $n$th term has the same cost as $\Pi_n$ and whose weak limit factors onto $\Pi$, and hence has cost at most the cost of $\Pi$. This proves the statement.
Choose $\delta' < \delta$ as in Lemma \ref{continuousthinning} so that the $\delta'$ metric thinning satisfies \[ \theta^{\delta'}(\Pi_n) \text{ weakly converges to } \theta^{\delta'}(\Pi) = \Pi. \] Now observe that by label trickery (see Proposition \ref{labeltrickery}) we can find $[0,1]$-labelled point processes $\Upsilon_n$ each isomorphic to the respective $\Pi_n$ and such that their underlying point set is $\theta^{\delta'}(\Pi_n)$. Note that $\Upsilon_n$ \emph{might not weakly converge}, but it will have subsequential weak limits. All such subsequential weak limits will be some kind of (possibly random) labelling of $\Pi$. To see this, let $\pi : [0,1]^\MM \to \MM$ be the map that forgets labels. Thus $\pi(\Upsilon_n) = \theta^{\delta'}(\Pi_n)$. Since $\pi$ is continuous, it preserves weak limits. Let $\Upsilon$ be any subsequential weak limit of $\Upsilon_n$, along a subsequence $n_k$. Then by continuity \[ \pi(\Upsilon) = \lim_{k \to \infty} \pi(\Upsilon_{n_k}) = \lim_{k \to \infty} \theta^{\delta'}(\Pi_{n_k}) = \Pi. \] Now let $\Theta_n(\Upsilon_n)$ be \emph{the input/output versions} of the $(\delta', R)$-Delone thickenings that exist from Proposition \ref{factoronnet}. Here we use that $\Upsilon_n$ are free actions. By input/output we mean you keep track of which points of the thickening are input and output, as in Definition \ref{inputoutputdefn}. In particular, \[ \cost(\Theta_n(\Upsilon_n)) = \cost(\Upsilon_n) = \cost(\Pi_n), \] where the first equality holds because we took the input/output version of the thickening. Let $\Upsilon'$ denote any subsequential weak limit of $\Theta_n(\Upsilon_n)$. Then $\Upsilon'$ factors onto $\Pi$, by a similar argument to the earlier one about forgetting certain labels. Putting this all together: \[ \limsup_{n \to \infty} \cost(\Pi_n) = \limsup_{n \to \infty} \cost(\Theta_n(\Upsilon_n)) \leq \cost(\Upsilon') \leq \cost(\Pi), \] where the middle inequality is statement (1) applied along a subsequence realising the $\limsup$, and the final inequality holds because cost can only increase under factors.
\end{proof} \begin{remark} In the second part of the proof, one might want to replace label trickery by something like ``each $\Pi_n$ is isomorphic to a random Delone set $\Upsilon_n$, which has subsequential weak limits, so choose one such limit $\Upsilon$...'', but then it's not clear what the cost of $\Upsilon$ has to do with the cost of $\Pi$. One would require the Delone-ification process to preserve weak limits in some sense in order to relate $\cost(\Upsilon)$ and $\cost(\Pi)$. \end{remark} \begin{remark} There is label trickery in \cite{abert2013bernoulli} too: it is always assumed there that the action is continuous on a compact space. \end{remark} \begin{thm}\label{poissonmax} If $\Pi$ is a free point process, then its cost is at most the cost of the IID Poisson point process on $G$. \end{thm} \begin{proof} We know by Theorem \ref{abertweiss} that $\Pi$ weakly factors onto $[0,1]^\Pi$, and $[0,1]^\Pi$ factors onto the IID Poisson. We would like to say ``so $\Pi$ weakly factors onto the IID Poisson, and hence has smaller cost by the cost monotonicity statement'', but our cost monotonicity statement is too weak for this, so we use a different argument. Note that $\Pi$ is \emph{abstractly isomorphic} to a $\delta$ uniformly separated process $\Pi'$ by Proposition \ref{abstractlyisom}. Then $\Pi'$ is also free and has the same cost as $\Pi$. Now $\Pi'$ weakly factors onto its own IID: more explicitly, there is a sequence of factor \emph{labellings} $\Phi_n(\Pi')$ weakly converging to $[0,1]^{\Pi'}$. Because $\Phi_n(\Pi')$ is a labelling of $\Pi'$ it is itself free and uniformly separated.
Putting it all together: \begin{align*} \cost(\Pi) &= \cost(\Pi') && \text{as they are isomorphic actions} \\ &\leq \limsup_{n \to \infty} \cost(\Phi_n(\Pi')) && \text{cost can only increase for factors} \\ &\leq \cost([0,1]^{\Pi'}) && \text{by Theorem \ref{costmonotonicity}} \\ &\leq \cost([0,1]^\PPP) && \text{as } [0,1]^{\Pi'} \text{ factors onto } [0,1]^\PPP, \end{align*} as desired. \end{proof} \section{Some fixed price one groups} Theorem \ref{poissonmax} provides a strategy for proving that groups have fixed price, and we will use it to that end for $G \times \ZZ$. However, our argument is somewhat indirect and requires taking a further weak limit. It would be instructive to have a more direct argument: \begin{question} Can one \emph{explicitly} construct for every $\e > 0$ connected factor graphs of the IID Poisson on $G \times \ZZ$ of edge measure less than $1 + \e$? \end{question} To see what we mean by \emph{explicit}, one should consider the discrete case: if $\Gamma$ is a finitely generated group, then it is straightforward to construct connected factor-of-IID graphs of small edge measure on Bernoulli (site) percolation on $\Gamma \times \ZZ$. We would like a construction in that vein. So instead we will use the weak factoring strategy to reduce the above problem to a much simpler one, where we \emph{can} construct such factor graphs. \subsection{Groups of the form \texorpdfstring{$G \times \ZZ$}{G x Z}} \begin{defn} Let $\Pi$ be a point process on $G$. Its \emph{vertical coupling} on $G \times \ZZ$ is $\Delta(\Pi) = \Pi \times \ZZ$. \end{defn} Here $\Delta : \MM(G) \to \MM(G \times \ZZ)$ is induced by the diagonal embedding of $G$ into $G^\ZZ$. For this reason one might prefer to call $\Delta(\Pi)$ the \emph{diagonal coupling}, but this terminology will not be suitable when we go to $G \times \RR$. \begin{figure}[h] \includegraphics[scale=0.4]{GtimesZ.pdf} \centering \caption{How one should think of $G \times \ZZ$.
Note that if $\Pi$ is a point process on $G$, then its vertical coupling is simply infinitely many copies of \emph{the same} points stacked on top of each other.} \end{figure} \begin{lem}\label{verticalcost} The IID version $[0,1]^{\Delta(\Pi)}$ of a vertically coupled process has cost one. \end{lem} The proof uses the fact that Bernoulli percolation of a factor graph can be implemented as a factor of IID. This sort of trick will be familiar to many, but we will nevertheless spell it out: \begin{defn} Let $\mathscr{G}$ be a factor graph of a point process $\Pi$. Its \emph{$\e$ edge percolation} is the factor graph $\mathscr{G}_\e$ defined on $[0,1]^\Pi$ in the following way: for points $g, h \in [0,1]^\Pi$ let \[ g \sim_{\mathscr{G}_\e} h \text{ whenever } g \sim_{\mathscr{G}} h \text{ and } \xi_g \oplus \xi_h < \e. \] Here $\oplus$ denotes addition of the labels modulo one. \end{defn} Observe that if $(g, h_1)$ and $(g, h_2)$ are edges of $\mathscr{G}(\Pi)$, then the random variables $\xi_g \oplus \xi_{h_1}$ and $\xi_g \oplus \xi_{h_2}$ are independent uniform once again. \begin{remark} If $\mathscr{G}$ is already a factor graph defined on $[0,1]^\Pi$, then we can implement $\mathscr{G}_\e$ on $[0,1]^\Pi$, that is, without adding further randomness (via the replication trick, see the discussion below). \end{remark} \begin{proof}[Proof of Lemma \ref{verticalcost}] Let $\mathscr{G}$ be any graphing of $\Pi$ with finite edge density. We \emph{lift} it to a factor graph $\mathscr{G}^\Delta$ of $\Delta(\Pi)$ in the following way: \[ (g, n) \sim_{\mathscr{G}^\Delta(\Pi)} (h, n) \text{ if and only if } g \sim_{\mathscr{G}(\Pi)} h, \] that is, as $\Delta(\Pi)$ is just copies of $\Pi$ stacked on every level of $G \times \ZZ$, then we simply copy $\mathscr{G}$ onto every level of $G \times \ZZ$ as well. Let $\mathscr{V}$ denote the factor graph of $\Delta(\Pi)$ consisting of \emph{vertical} edges, that is for every $(g, n) \in \Delta(\Pi)$ we have an edge to $(g, n+1)$. 
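The independence observation above -- that two percolation labels $\xi_g \oplus \xi_{h_1}$ and $\xi_g \oplus \xi_{h_2}$ sharing the vertex $g$ are again independent uniforms -- can be checked exhaustively in a discretised model (a sketch, with $\mathbb{Z}_m$ standing in for $[0,1)$ and addition mod $m$ for $\oplus$):

```python
# Exhaustive check: if xi_g, xi_h1, xi_h2 are independent uniforms on Z_m,
# then the pair (xi_g + xi_h1 mod m, xi_g + xi_h2 mod m) is uniform on
# Z_m x Z_m, i.e. the two percolation variables are independent uniforms.
from collections import Counter
from itertools import product

m = 8
counts = Counter()
for a, b, c in product(range(m), repeat=3):   # all m**3 label triples, so exact
    counts[((a + b) % m, (a + c) % m)] += 1

assert len(counts) == m * m          # every pair value is attained...
assert set(counts.values()) == {m}   # ...equally often: the joint law is uniform
```

The same count shows why the sliding argument works: each time we slide up a level, the survival event of the shifted edge is a fresh, independent $\e$-chance.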
One can see that $\mathscr{V} \cup \mathscr{G}^\Delta$ is a connected factor graph. But this is also true when we percolate the edges level wise, that is, when we consider $\mathscr{V} \cup \mathscr{G}^\Delta_\e$. This is because if $(g, n) \sim_{\mathscr{G}^\Delta} (h, n)$ is an edge destroyed in the percolation, then we can slide up along vertical edges and consider the edge $(g, n+1) \sim_{\mathscr{G}^\Delta} (h, n+1)$ instead. Its chance of survival in the percolation is independent from the previous edge, and hence we get another go to cross over. By sliding up far enough we are guaranteed to be able to cross. Finally, observe that the edge density of $\mathscr{V} \cup \mathscr{G}^\Delta_\e$ is $1$ plus $\e$ times the edge density of $\mathscr{G}$. \end{proof} \begin{lem}\label{factorimpliesiidfactor} Suppose $\Pi$ and $\Upsilon$ are point processes, and $\Pi$ factors onto $\Upsilon$. Then $[0,1]^\Pi$ factors onto $[0,1]^\Upsilon$. In particular, if $[0,1]^\Pi$ weakly factors onto $\Upsilon$ then $[0,1]^\Pi$ weakly factors onto $[0,1]^\Upsilon$ too. \end{lem} The proof of this uses the following \emph{replication trick}: note that the randomness in one $\texttt{Unif}[0,1]$ random variable $\xi$ is equivalent to the randomness in an entire IID sequence $\xi_1, \xi_2, \ldots$ of $\texttt{Unif}[0,1]$ random variables. More precisely, there is an isomorphism (as measure spaces) \[ I : ([0,1], \text{Leb}) \to ([0,1]^\NN, \text{Leb}^{\otimes \NN}). \] So if $\xi \sim \texttt{Unif}[0,1]$, then we will write $I(\xi) = (\xi_1, \xi_2, \ldots)$ for the associated IID sequence of $\texttt{Unif}[0,1]$ random variables. \begin{proof}[Proof of Lemma \ref{factorimpliesiidfactor}] Suppose $\Upsilon = \Phi(\Pi)$. If $g \in [0,1]^\Pi$, then we write $\xi^g$ for its label, and by the replication trick $\xi^g_1, \xi^g_2, \ldots$ for the associated IID sequence of $\texttt{Unif}[0,1]$ random variables.
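One concrete choice of the replication isomorphism $I$ (a standard sketch; the argument above only needs that some such $I$ exists) reads bit $j$ of stream $i$ from binary digit $2^i(2j+1)$ of $\xi$. Since every positive integer factors uniquely as $2^i(2j+1)$, distinct streams read disjoint digits and together exhaust all of them, so for uniform $\xi$ (away from the null set of non-unique expansions) the streams are IID uniforms:

```python
# Bit-splitting version of the replication trick.

def pos(i, j):
    """Which binary digit of xi feeds bit j of stream i (i, j >= 0)."""
    return 2**i * (2 * j + 1)

def stream_bits(x_bits, i, k):
    """First k bits of stream i, given the leading binary digits x_bits of xi."""
    return [x_bits[pos(i, j) - 1] for j in range(k)]

# The positions partition {1, ..., N}: no digit is shared or missed.
N = 64
used = sorted(pos(i, j) for i in range(7) for j in range(32) if pos(i, j) <= N)
assert used == list(range(1, N + 1))

# Example: xi = 0.101010... in binary. Stream 0 reads the odd digits (all 1),
# stream 1 reads digits 4j + 2 (all 0).
x_bits = [1 if k % 2 == 0 else 0 for k in range(N)]
assert stream_bits(x_bits, 0, 8) == [1] * 8
assert stream_bits(x_bits, 1, 8) == [0] * 8
```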
We define a factor map $\widetilde{\Phi}$ of $[0,1]^\Pi$ as follows: \[ \widetilde{\Phi}([0,1]^\Pi) = \bigcup_{g \in [0,1]^\Pi} \{(h_i, \xi^g_i) \in G \times [0,1] \mid V_g(\Pi) \cap \Phi(\Pi) = (h_1, h_2, \ldots) \}, \] where we mean that $(h_1, h_2, \ldots)$ is any \emph{enumeration} of $V_g(\Pi) \cap \Phi(\Pi)$ performed in an equivariant way. Note that the intersection $V_g(\Pi) \cap \Phi(\Pi)$ could be empty. For instance, one can enumerate as follows: look at the elements $h \in V_g(\Pi) \cap \Phi(\Pi)$ which are \emph{closest} to $g$. Then let $h_1$ be the element that minimises $T(g^{-1}h)$, where $T : G \to [0,1]$ is the tie-breaking function of Section \ref{voronoidefn}. Then let $h_2$ be the next smallest element, and so on, until you exhaust the closest elements. Then look at the batch of next closest elements and so on. One can check that this is an equivariant construction (any construction where you do the same thing at every point will be). Then $\widetilde{\Phi}([0,1]^\Pi) = [0,1]^\Upsilon$, as desired. For the second part, simply note that taking the IID is idempotent in the sense that $[0,1]^{([0,1]^\Pi)} \cong [0,1]^\Pi$, and apply Lemma \ref{weakfactorimpliesiiweakfactor}. \end{proof} \begin{prop}\label{GtimesZweakfactor} The IID Poisson on $G \times \ZZ$ weakly factors onto the vertically coupled Poisson of $G$. \end{prop} \begin{proof} We will construct factor maps $\Phi^n : [0,1]^{\MM(G \times \ZZ)} \to \MM(G \times \ZZ)$ that ``straighten'' the input in the following way: for a given input $\omega \in [0,1]^\MM$, we select a ``sparse'' subset of its points. We \emph{propagate} each of these upwards by placing copies of it on the levels above. This will converge to a vertically coupled process for suitable inputs. More precisely, let $\Pi$ denote the (unit intensity) IID Poisson on $G \times \ZZ$. We will denote points of $\Pi$ by $(g, l) \in G \times \ZZ$, and write $\Pi_{g, l}$ for its label.
We now define the factor map $\Phi^n$ in two stages as a thinning and then a thickening to simplify the analysis. Let \[ \Pi^{1/n} = \{(g, l) \in \Pi \mid \Pi_{g, l} \leq \frac{1}{n} \}, \] and $F_n = \{0\} \times \{0, 1, \ldots, n-1\}$. Set \[ \Phi^n \Pi = \Theta^{F_n}(\Pi^{1/n}), \] where we write $\Phi^n \Pi$ for $\Phi^n(\Pi)$ to conserve parentheses. Let us explain what this means: \begin{itemize} \item At the first step $\Pi \mapsto \Pi^{1/n}$, we independently thin $\Pi$ to get a subprocess of intensity $\frac{1}{n}$. By the discussion in Example \ref{independentthinning}, the resulting process $\Pi^{1/n}$ is simply a Poisson point process on $G \times \ZZ$ of intensity $\frac{1}{n}$. We refer to the points of $\Pi^{1/n}$ as \emph{progenitors}. \item Each progenitor $(g, l)$ spawns additional points with the same $G$-coordinate on the next $n-1$ levels above it. This is the map $\Pi^{1/n} \mapsto \Theta^{F_n}(\Pi^{1/n}) = \Phi^n \Pi $. \item By the discussion at Example \ref{constantthickening}, $\Phi^n \Pi$ is a process of unit intensity. \end{itemize} We will employ the following strategy to show that $\Phi^n \Pi$ weakly converges to the vertical Poisson: \begin{enumerate} \item The sequence $(\Phi^n \Pi)$ admits weak subsequential limits, which a priori might be random counting measures, \item These subsequential limits are actually simple point processes, \item All of these subsequential limits are vertical processes, and \item That process is the vertical Poisson. \end{enumerate} Recall that if $(x_n)$ is a relatively compact sequence and every subsequential limit of $(x_n)$ is $x$, then $x_n$ converges to $x$. By this basic fact and the above items, we can conclude that $(\Phi^n \Pi)$ weakly converges to the vertical Poisson. We now verify that $\{ \Phi^n \Pi \}$ is \emph{uniformly tight}, proving (1). 
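The first bullet's claim -- that independent thinning of a Poisson process is again Poisson -- and the superposition fact used in the tightness argument below can both be verified at the level of point-count distributions. The following is a pure-stdlib sketch, truncating the (rapidly converging) series where the tail is already negligible:

```python
# (i) Bernoulli(p)-thinning a Poisson(lam) count yields a Poisson(p*lam) count;
# (ii) the sum of independent Poisson counts is Poisson with summed parameter.
from math import comb, exp, factorial, isclose

def pois(lam, k):
    """Poisson(lam) probability mass at k."""
    return exp(-lam) * lam**k / factorial(k)

lam, p, M = 3.0, 0.25, 150   # M truncates the series; the tail is negligible

for a in range(20):
    # (i): condition on the pre-thinning count b; keep each point with prob p.
    thinned = sum(comb(b, a) * p**a * (1 - p)**(b - a) * pois(lam, b)
                  for b in range(a, M))
    assert isclose(thinned, pois(p * lam, a), rel_tol=1e-9)

    # (ii): convolving Poisson(1.0) with Poisson(2.0) gives Poisson(3.0).
    conv = sum(pois(1.0, j) * pois(2.0, a - j) for j in range(a + 1))
    assert isclose(conv, pois(3.0, a), rel_tol=1e-9)
```

Fact (i) is exactly why the progenitor process $\Pi^{1/n}$ is a Poisson process of intensity $\frac{1}{n}$, and fact (ii) is what makes the level-wise progenitor counts easy to sum.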
It suffices to verify that the distributions of point counts $N_C(\Phi^n \Pi)$, where $C = B_G(0, r) \times [L]$ denotes a cylinder whose base is a ball of radius $r$ and its height (in levels) is $L$, are uniformly tight. Let $X_i$ denote the number of points in $B_G(0, r) \times \{i\}$ with label $\Pi_{g, i} \leq \frac{1}{n}$, that is, the number of progenitors on the $i$th level. Thus the $X_i$ are IID Poisson random variables with parameter $\lambda(B_G(0,r)) / n$. One can explicitly describe the random variable $N_C(\Phi^n \Pi)$ in terms of the $X_i$s, but for our purposes it is enough to observe that every point of $\Phi^n \Pi$ in $C$ sits above a progenitor lying at most $n-1$ levels below it, and each progenitor contributes at most $L$ points to $C$, so: \[ N_C(\Phi^n \Pi) \leq L \sum_{i=1-n}^{L} X_i. \] The sum of independent Poisson random variables is again Poisson distributed (with parameter the sum of the parameters of the individual Poissons), so $N_C(\Phi^n \Pi)$ is stochastically bounded by $L$ times a Poisson random variable of parameter at most $(L+1)\lambda(B_G(0,r))$, uniformly in $n$. Therefore $\{\Phi^n \Pi\}$ is uniformly tight. To prove item (2), note that the above shows that the point counts in $B_G(0, r) \times \{0\}$ for $\Phi^n \Pi$ are \emph{exactly} Poisson distributed with parameter $\lambda(B_G(0, r))$. Thus if $\Upsilon$ is any subsequential weak limit of $\Phi^n \Pi$ and $r$ is such that $B_G(0,r) \times \{0\}$ is a stochastic continuity set for $\Upsilon$, then $N_{B_G(0,r) \times \{0\}} \Upsilon$ will also be Poisson distributed. In particular, $\Upsilon$ must be a \emph{simple} point process. For item (3), let $\Upsilon$ be any subsequential weak limit of $\Phi^n \Pi$. Observe that $\Upsilon$ is \emph{vertical} if and only if $(g, l) \in \Upsilon$ implies $(g, l+1) \in \Upsilon$. The idea is that this property is satisfied for most points of $\Phi^n \Pi$, and therefore must be preserved in the weak limit. Note that a process is vertical if and only if its Palm measure is vertical almost surely. We can now explicitly describe the Palm measure of $\Phi^n(\Pi)$.
Recall from Theorem \ref{palmofpoisson} that the Palm version $\Pi_0^{1/n}$ of $\Pi^{1/n}$ is simply $\Pi^{1/n} \cup \{(0,0)\}$. To express the Palm version of the $F_n$-thickening of $\Pi^{1/n}$ (\`{a} la Example \ref{palmofthickening}), it will be useful to introduce the following notation. For each $k \in \NN$, let \[ \Pi_k^{1/n} = \Pi_0^{1/n} \cdot (0, k) = \{ (g, l+k) \in G \times \ZZ \mid (g, l) \in \Pi_0^{1/n} \}. \] That is, you simply shift $\Pi_0^{1/n}$ up by $k$ levels. Then $\Phi^n(\Pi_0) = \Pi_0^{1/n} \cup \Pi_1^{1/n} \cup \cdots \cup \Pi_{n-1}^{1/n}$. Denote by $K$ a random integer chosen uniformly from $\{0, 1, \ldots, n-1\}$. Then the Palm version of $\Phi^n(\Pi)$ is \[ (\Phi^n \Pi)_0 = \Pi_{-K}^{1/n} \cup \Pi_{-K+1}^{1/n} \cup \cdots \cup \Pi_{-K+n-1}^{1/n}, \] where we use parentheses to stress that it is the Palm version of $\Phi^n \Pi$, not $\Phi^n$ applied to $\Pi_0$. Let us say that a rooted configuration $\omega \in {\mathbb{M}_0}(G \times \ZZ)$ \emph{has an $\e$-successor} if there is a point approximately above the root $(0,0)$ in $\omega$. More precisely, we define an \emph{event} \[ \{ \omega \text{ has an } \e\text{-successor} \} := \{ \omega \in {\mathbb{M}_0} \mid N_{B_G(0, \e) \times \{1\}} \omega \geq 1 \}. \] From this, we see \[ \PP[(\Phi^{n} \Pi)_0 \text{ has an } \e\text{-successor} ] \geq \frac{n-1}{n}, \] as $(\Phi^{n} \Pi)_0$ certainly has an $\e$-successor whenever $K < n-1$. Recall that $\Upsilon$ was any subsequential weak limit of $\Phi^n \Pi$. Fix a subsequence $n_i$ such that $\Phi^{n_i} \Pi$ weakly converges to $\Upsilon$. Choose a sequence $\e_k$ tending to zero such that $B_G(0, \e_k) \times \{1\}$ is a stochastic continuity set for $\Upsilon$. This is possible by Lemma \ref{continuitysets}.
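The lower bound $\frac{n-1}{n}$ comes from the root's own progenitor column alone: the Palm root sits at a uniform one of the $n$ stacked copies and has a point directly above it unless it is the topmost copy. Exhausting the $n$ possible root positions reproduces exactly this in-column probability (a minimal sketch; other points of the process can only increase the probability of a successor):

```python
# Exhaust the uniform root level K in a single progenitor column: the
# column occupies levels 0..n-1 and the root at level K has a point
# directly above it unless K = n - 1.
from fractions import Fraction

def in_column_successor_probability(n):
    column = set(range(n))                            # levels of the n copies
    with_succ = sum(1 for k in column if k + 1 in column)
    return Fraction(with_succ, n)

assert in_column_successor_probability(5) == Fraction(4, 5)
assert all(in_column_successor_probability(n) == Fraction(n - 1, n)
           for n in range(1, 50))
```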
Then for each $k$ \[ \frac{n_i-1}{n_i} \leq \PP[(\Phi^{n_i} \Pi)_0 \text{ has an } \e_k\text{-successor} ] \to \PP[\Upsilon_0 \text{ has an } \e_k\text{-successor} ], \] so $\Upsilon_0$ has $\e_k$-successors almost surely for every $k$, and hence has $0$-successors. That is, $\Upsilon$ is a vertical process, at last proving item (3). Finally, for item (4) we observe that any vertical process is completely determined by its intersection with $G \times \{0\}$. We observed in the proof of item (2) that $\Upsilon$ is a Poisson point process on the $0$th level, so it must be the vertical Poisson, as desired. \end{proof} \begin{cor}\label{GtimesZcost} Groups of the form $G \times \ZZ$ have fixed price one. \end{cor} \begin{proof} By the previous proposition and Lemma \ref{factorimpliesiidfactor}, we know that the IID Poisson weakly factors onto the IID of the vertically coupled Poisson. Explicitly, there exist factor maps $\Phi^n : [0,1]^\MM \to [0,1]^\MM$ such that \[ \Phi^n \Pi \text{ weakly converges to } [0,1]^{\Delta(\PPP)}, \] where $\Pi$ is the IID Poisson on $G \times \ZZ$ and\footnote{This is a slight abuse of notation: we were using $\PPP$ to denote the \emph{law} of the Poisson point process, but in the above expression we treat it as if it were a random variable. We do this to prevent the profusion of asterisks representing pushforwards of measures.} $\PPP$ is the Poisson on $G$. Choose $\delta < 1$ as in Lemma \ref{continuousthinning} such that metric $\delta$-thinning preserves the weak limit. Note that because $\delta < 1$, the thinning commutes with the vertical coupling: that is, $\theta^\delta(\Delta (\PPP))= \Delta(\theta^\delta \PPP)$. Therefore \[ \theta^\delta(\Phi^n \Pi) \text{ weakly converges to } [0,1]^{\Delta(\theta^\delta (\PPP))}.
\] Putting this all together, \begin{align*} \cost(\Pi) &\leq \limsup_{n \to \infty} \cost(\theta^\delta(\Phi^n \Pi)) && \text{As cost can only increase under factors} \\ &\leq \cost([0,1]^{\Delta(\theta^\delta (\PPP))}) && \text{By Theorem \ref{costmonotonicity}} \\ &= 1 && \text{By Lemma \ref{verticalcost}}. \end{align*} Since the IID Poisson has maximal cost, this proves that $G \times \ZZ$ has fixed price one. \end{proof} \begin{remark} With further percolation-theoretic assumptions on $G$, one can \emph{directly} show that $\cost(\Phi^n(\Pi)) \leq 1 + \e_n$, where $\e_n$ tends to zero. This is by constructing factor graphs on $\Phi^n(\Pi)$. By using the Poisson net, one can prove an analogue of the Babson and Benjamini theorem \cite{10.2307/119068} and show that the distance $\mathscr{D}_R$ factor graph on the Poisson point process on a \emph{compactly presented} and one-ended group has a \emph{unique} infinite connected component if $R$ is sufficiently large. Now on $\Phi^n(\Pi)$, we construct a factor graph as follows: add in all vertical edges, and the $\mathscr{D}_R$ edges horizontally. Now percolate the horizontal edges. One can show that by adding a small amount of edges to this, the result is a graph with a unique infinite connected component. \end{remark} \subsection{Groups of the form \texorpdfstring{$G \times \RR$}{G x R}} We now outline the modifications required to extend the $G \times \ZZ$ case to the following theorem: \begin{thm}\label{GtimesRfp} Groups of the form $G \times \RR$ have fixed price one. \end{thm} \begin{proof} The strategy will be exactly the same as in Proposition \ref{GtimesZweakfactor}. We define factor maps $\Phi^n$ of the IID Poisson $\Pi$ using the same formula as in the $G \times \ZZ$ case. We claim these weakly converge to a point process $\Upsilon$ which is \emph{vertical} in the sense that $(g, t) \in \Upsilon$ implies $(g, t + n) \in \Upsilon$ for all $n \in \ZZ$. First we show $\{\Phi^n (\Pi)\}$ is uniformly tight. 
This works exactly as in the $G \times \ZZ$ case, except instead of counting progenitors $X_i$ on $G \times \{i\}$, we count them on $G \times [i, i+1)$ for $i \in \ZZ$. Next we show that any subsequential weak limit $\Upsilon$ of $\{ \Phi^n (\Pi) \}$ is not just a random counting measure, but an actual point process. This follows as in the $G \times \ZZ$ case, as $\Phi^n (\Pi)$ has the same distribution in $G \times [0, 1)$ as a Poisson point process on $G \times \RR$. The proof that $\Upsilon$ is a vertical point process works the same as in the $G \times \ZZ$ case. At this point one can observe that a vertical process is determined by its intersection with $G \times [0,1)$, and therefore $\Phi^n (\Pi)$ weakly converges to a unique point process $\Upsilon$. Let $\pi : G \times \RR \to G$ denote the projection map. We now adapt Lemma \ref{verticalcost} to this context, and show that if $\Upsilon$ is \emph{any} vertical point process such that the projection $\pi(\Upsilon)$ has finite cost, then the IID process $[0,1]^\Upsilon$ has cost one. Observe that \emph{if $\Upsilon$ is vertical}, then $\pi(\Upsilon)$ is discrete, and hence defines a point process on $G$. For contrast, observe that the projection $\pi(\Pi)$ of the Poisson point process $\Pi$ is almost surely dense, and hence does not define a point process on $G$. In the case of the $\Upsilon$ we construct as a weak limit, its projection $\pi(\Upsilon)$ is just the Poisson point process on $G$. Let $\mathscr{G}$ be a finite cost graphing of $\pi(\Upsilon)$. We lift this to a factor graph of $\Upsilon$ in the following way: \[ (g_1, t_1) \sim_{\mathscr{H}(\Upsilon)} (g_2, t_2) \text{ when } g_1 \sim_{\mathscr{G}(\pi(\Upsilon))} g_2 \text{ and } \@ifstar{\oldabs}{\oldabs*}{t_1 - t_2} < 1. \] Let $\mathscr{V}(\Upsilon)$ denote the set of \emph{vertical edges}, that is \[ \mathscr{V}(\Upsilon) = \{ ((g, t), (g, t + 1)) \mid (g, t) \in \Upsilon \}.
\] Then as in Lemma \ref{verticalcost}, the vertical edges $\mathscr{V}(\Upsilon)$ together with an $\e$-percolation of $\mathscr{H}(\Upsilon)$ defines a cheap connected factor graph of $\Upsilon$. \begin{figure}[h] \includegraphics[scale=0.5]{gtimesr.pdf} \centering \caption{A portion of a graphing on the projection of a vertical process, and how it might look when lifted. Note that it gets wobbled a bit in the process.} \end{figure} We conclude from this that $G \times \RR$ has fixed price one by the same kind of reasoning as in Corollary \ref{GtimesZcost}. \end{proof} \begin{remark} The limiting process here can be described as follows: sample from a Poisson on $G \times [0, 1)$, and then simply extend it periodically in the second coordinate. \end{remark} \section{Rank gradient of Farber sequences vs. cost} In this section we will discuss how to connect rank gradient to the cost of the Poisson point process in certain situations. We first work with general lcsc groups, and then specialise to semisimple Lie groups and prove a stronger theorem. \begin{defn} Let $(\Gamma_n)$ denote a sequence of lattices in a fixed group $G$. The sequence is \emph{Farber} if for every compact neighbourhood of the identity $V \subseteq G$ we have \[ \PP[a\Gamma_n a^{-1} \cap V = \{e\}] \to 1 \text{ as } n \to \infty, \] where $a\Gamma_n$ denotes a coset of $\Gamma_n$ chosen randomly according to the (normalised) finite $G$-invariant measure on $G/\Gamma_n$. \end{defn} Note that $a \Gamma_n a^{-1}$ is exactly the stabiliser of $a\Gamma_n$ for the action $G \curvearrowright G/\Gamma_n$. Thus the Farber condition says that the action on most points of the quotient is locally injective. Equivalently, the condition states that $a\Gamma_n \cap Va = \{a\}$ with high probability. It is this second form that we will actually use in the proof below.
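For example, the simplest source of Farber sequences (a standard observation, recorded here for orientation): if $G$ is discrete and $(\Gamma_n)$ is a decreasing chain of finite index \emph{normal} subgroups with $\bigcap_n \Gamma_n = \{e\}$, then $a\Gamma_n a^{-1} = \Gamma_n$ for every $a$, and any compact $V \subseteq G$ is finite, so \[ \PP[a\Gamma_n a^{-1} \cap V = \{e\}] = \PP[\Gamma_n \cap V = \{e\}] = 1 \text{ for all large } n. \] In this case the Farber condition holds deterministically, not merely with high probability.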
We think of $a$ as being a point sampled randomly from a fundamental domain for $\Gamma_n$ in $G$, and thus it states that the $V$-neighbourhood around this point $a$ meets the lattice shift $a\Gamma_n$ only trivially. \begin{defn} Let $(\Gamma_n)$ denote a sequence of lattices in a fixed group $G$. Its \emph{rank gradient} is \[ \RG(G, (\Gamma_n)) = \lim_n \frac{d(\Gamma_n) - 1}{\covol \Gamma_n}, \] whenever this limit exists. \end{defn} \begin{remark} If $G$ is discrete, then the $\Gamma_n$ are all finite index subgroups. The Nielsen-Schreier formula \[ \frac{d(\Gamma_n) - 1}{[G : \Gamma_n]} \leq d(G) - 1 \] shows that the terms in the rank gradient are at least bounded. For instance, for subgroups of the free group $F_2$ the bound is attained: a subgroup of index $k$ is free of rank $k + 1$, so every term equals $d(F_2) - 1 = 1$. Gelander proved \cite{MR2863908} an analogue of this formula for lattices in connected semisimple Lie groups without compact factors. In the Seven Samurai paper \cite{MR3664810}, it is shown that if $G$ is a centre-free semisimple Lie group of higher rank with property (T), then \emph{any} sequence of irreducible lattices $(\Gamma_n)$ in $G$ is automatically Farber, as long as $\covol(\Gamma_n)$ tends to infinity. \end{remark} In the particular case of a decreasing \emph{chain} $\Gamma = \Gamma_1 > \Gamma_2 > \ldots$ of finite index subgroups, Ab\'{e}rt and Nikolov showed \cite{MR2966663} that the rank gradient $\RG(\Gamma, (\Gamma_n))$ can be described as the \emph{groupoid} cost of an associated pmp action $\Gamma \curvearrowright \partial T(\Gamma, (\Gamma_n))$ on the boundary of a rooted tree. \subsection{Cocompact lattices in general groups} \begin{defn} We say that a lattice $\Gamma < G$ is \emph{$\delta$-uniformly discrete} if all of its \emph{right} cosets $\Gamma a \in \Gamma \backslash G$ are $\delta$-uniformly separated as subsets of $G$. That is, for all distinct pairs $\gamma_1, \gamma_2 \in \Gamma$, we have $d(\gamma_1 a, \gamma_2 a) \geq \delta$.
Equivalently by left-invariance of the metric, $d(e, a^{-1} \gamma a ) \geq \delta$ for all $\gamma \in \Gamma$ not equal to the identity $e \in G$. If $(\Gamma_n)$ is a sequence of lattices, then we say it is $\delta$-uniformly discrete if each $\Gamma_n$ is $\delta$-uniformly discrete in the above sense. \end{defn} \begin{thm}\label{farbertheorem} Let $(\Gamma_n)$ be a Farber sequence of \emph{cocompact} lattices. Suppose further that the sequence is \emph{uniformly discrete}. If its rank gradient exists, then \[ \RG(G, (\Gamma_n)) \leq c_P(G) - 1, \] where $c_P(G)$ denotes the cost of the Poisson point process on $G$. In particular, if $G$ has fixed price one then the rank gradient vanishes. \end{thm} The above theorem is spiritually the same as one proved independently by Carderi in \cite{carderi2018asymptotic}, but in a drastically different language (namely, that of ultraproducts of actions). The theorem is therefore his, but we include our own proof as it has a different flavour. In the subsequent section we will discuss a similar theorem which applies for nonuniform lattices, at least with additional assumptions on the group. \begin{proof}[Proof of Theorem \ref{farbertheorem}] Recall that the cost of a lattice shift is \[ \cost(G \curvearrowright G/\Gamma_n) = 1 + \frac{d(\Gamma_n) - 1}{\covol \Gamma_n}, \] which is essentially the term appearing in the rank gradient definition. We would therefore like to take a weak limit of these actions to get some free point process, and then appeal to the cost monotonicity result. Of course, this is completely senseless as stated: the intensity of the lattice shift tends to zero, so it weakly converges to the empty process. Therefore we \emph{thicken} the lattice shifts to get processes $\Pi_n$ with a nontrivial weak limit. This thickening procedure must be done correctly, so that we can apply our (weak) cost monotonicity result.
We will produce a sequence of $[0,1]$-marked point processes $\Pi_n$ such that \begin{itemize} \item each $\Pi_n$ is a $2\delta$-net, \item each $\Pi_n$ is a factor of the lattice shift $a\Gamma_n$, and so has cost \emph{at least} $1+\frac{d(\Gamma_n) - 1}{\covol \Gamma_n}$, and \item they have a subsequential weak limit $\Upsilon$ with IID $[0,1]$ labels. \end{itemize} Then \[ \RG(G, (\Gamma_n)) + 1 \leq \limsup_{n \to \infty} \cost(\Pi_n) \leq \cost(\Upsilon) \leq \mathrm{c}_\mathrm{P}(G) \] by the cost monotonicity result, as desired. View the space of \emph{right} cosets $\Gamma_n \backslash G$ as a compact metric space, where the distance between two cosets $\Gamma_n b_1, \Gamma_n b_2 \in \Gamma_n \backslash G$ is just their distance as closed subsets of $G$. For each $n$, let $B_n = \{\Gamma_n b_1^n, \Gamma_n b_2^n, \ldots, \Gamma_n b^n_{k_n} \}$ be a $2\delta$-net in $\Gamma_n \backslash G$, where $\delta$ is the uniform discreteness parameter. We also choose $b_1^n = e$ for all $n$. This specifies a sequence of \emph{thickenings} $\Theta_n$ of the corresponding lattice shifts: that is, $a\Gamma_n \mapsto a B_n$. Note that $\Theta_n(a\Gamma_n)$ is a $2\delta$-net: it is true that $d(a\Gamma_n b_i^n, a\Gamma_n b_j^n) = d(\Gamma_n b_i^n, \Gamma_n b_j^n) \geq \delta$ for $i \neq j$ by our choice of $B_n$, and points of $\Gamma_n b_i^n$ are uniformly separated too exactly by our uniform discreteness assumption. It is also $2\delta$-coarsely dense, by the same property for $B_n$. Since $\{\Theta_n(a\Gamma_n)\}$ is a collection of random $2\delta$-nets, it is automatically uniformly tight, and all subsequential weak limits are $2\delta$-nets (and in particular, simple point processes). At this point we would like to apply cost monotonicity to $\Theta_n$ by passing to a subsequential weak limit. Our issue here is that one would have to demonstrate that this weak limit is a \emph{free} action in order to compare its cost to the cost of the Poisson.
To do this, one would have to use the Farber condition in an essential way. We bypass this by a labelling trick: note that the IID of \emph{any} point process is automatically a free action (as any two points of it will receive distinct values almost surely, killing any possible symmetries). So we will limit on an IID labelled process instead. Consider the action $G \curvearrowright G/\Gamma_n \times [0,1]$, where the action on the second coordinate is trivial. We refer to this as the \emph{periodic IID lattice shift}. It is simply the lattice shift, but with every point of it receiving \emph{the same} IID label. This is of course distinct from the IID of the lattice shift, but note \[ \cost(G \curvearrowright G/\Gamma_n) = \cost(G \curvearrowright G/\Gamma_n \times [0,1]) \] as cost is the integral of the cost over the ergodic decomposition (see Corollary 18.6 of \cite{kechris2004topics}). We write $(a\Gamma_n, \xi)$ for a sample from this space. Now we thicken as before, but this time distribute labels: let \[ \Theta_n(a \Gamma_n, \xi) = \bigcup_{i = 1}^{k_n} a \Gamma_n b_i^n \times \{\xi_i\}, \] where $\xi_1, \xi_2, \ldots$ is an IID sequence of $\texttt{Unif}[0,1]$ random variables produced as a factor of $\xi$. In other words, each point of the lattice shift adds points to the space and gives them an IID $[0,1]$ label (but note that each point starts with \emph{the same} label $\xi$, so this is not the IID of the thickening). Let $\Upsilon$ denote any subsequential weak limit of $\Theta_n(a \Gamma_n, \xi)$, and pass to that subsequence. Then $\pi(\Theta_n(a \Gamma_n, \xi))$ weakly converges to $\pi(\Upsilon)$, as the projection $\pi: G \times [0,1] \to G$ that forgets labels is continuous. Our task is to show that $\Upsilon = [0,1]^{\pi(\Upsilon)}$. The idea of the proof is the following: fix $C \subseteq G$ to be a bounded stochastic continuity set for $\Upsilon$.
We want to prove that the labels of the points of $\Theta_n(a \Gamma_n, \xi)$ in $C$ are independent and uniform on $[0,1]$. They are already $\texttt{Unif}[0,1]$ by definition, so we must now consider their dependencies. Again, by definition, points of $C$ arising from \emph{distinct} $a\Gamma_n b_i^n$ are automatically independent. The only dependency issue that can arise is when $a\Gamma_n b_i^n \cap C$ has \emph{multiple} points. We will show that this is a vanishingly rare event. This will be achieved via the following lemma: \begin{lem}\label{farbermult} Let $C \subseteq G$ be compact. If $(\Gamma_n)$ is a Farber sequence and $B_n \subseteq G$ is any sequence of finite subsets, then $\PP[ \exists b \in B_n \text{ such that } \@ifstar{\oldabs}{\oldabs*}{a\Gamma_n b \cap C} > 1] \to 0$. \end{lem} \begin{proof} Apply the Farber condition with any compact neighbourhood $V$ of the identity containing $CC^{-1}$. If $b \in B_n$ is such that there are \emph{distinct} elements $a\gamma_1 b, a\gamma_2 b$ of $a\Gamma_n b \cap C$, then \[ (a\gamma_1 b)(a \gamma_2 b)^{-1} = a \gamma_1 \gamma_2^{-1} a^{-1} \text{ is in } CC^{-1}, \] so $a \gamma_1 \gamma_2^{-1} a^{-1} ab = a \gamma_1 \gamma_2^{-1} b \in Vab$, and this element is also in $a\Gamma_n b$. By the Farber condition, \[ a\Gamma_n b \cap Vab = \{ab\} \] with high probability, and so \[ \PP[ \exists b \in B_n \text{ such that } \@ifstar{\oldabs}{\oldabs*}{a\Gamma_n b \cap C} > 1] \leq \PP[a\Gamma_n a^{-1} \cap V \neq \{e\}] \to 0, \] finishing the proof of the lemma. \end{proof} Let $\boldsymbol{V} = (V_1, V_2, \ldots, V_k)$ denote a collection of bounded stochastic continuity sets for $\Upsilon$, and $[0, \boldsymbol{p}) = ([0, p_1), [0, p_2), \ldots, [0, p_k))$ a family of intervals in $[0,1]$. We denote by $\boldsymbol{V} \times [0, \boldsymbol{p}) = (V_1 \times [0, p_1), \ldots, V_k \times [0, p_k))$ the corresponding stochastic continuity sets of $[0,1]^{\pi(\Upsilon)}$. Let $C$ be a compact set large enough to contain $\bigcup_i V_i$.
On the complement of the event from the lemma, \[ N_{\boldsymbol{V} \times [0, \boldsymbol{p})}(\Theta_n(a\Gamma_n, \xi)) = N_{\boldsymbol{V} \times [0, \boldsymbol{p})}([0,1]^{\Theta_n(a\Gamma_n)}), \] where by $\Theta_n(a\Gamma_n)$ we simply mean $\Theta_n(a\Gamma_n, \xi)$ with the labels erased. Therefore $\Theta_n(a\Gamma_n, \xi)$ converges weakly to $[0,1]^{\pi(\Upsilon)}$; that is, $\Upsilon = [0,1]^{\pi(\Upsilon)}$, finishing the proof. \end{proof} \subsection{The rerooting groupoid for homogeneous spaces}\label{homogeneousspaces} One must also investigate point processes on the homogeneous spaces of groups. The setup which we will consider is the action of $G$ on $X = G/K$, where $K \leq G$ is a \emph{compact} subgroup. This covers our principal case of interest, namely Riemannian symmetric spaces (such as $\Isom(\HH^d) \curvearrowright \HH^d$). All of the point process theoretic definitions (such as thinnings and factor graphs) can be readily adapted to this context. There are some minor subtleties that must be addressed, however. Our aim is to define a cost notion for $G$-invariant point processes on $X$, and relate it to cost for $G$-invariant point processes on $G$ itself. We will show: \begin{thm}\label{symmetricspacecostmax} Assume that the Poisson point process on $X = G/K$ is free\footnote{Some assumption is required. For instance, if $K$ contains an element of the centre of $G$ then the action will not be free.} as a $G$-action. Then \[ \sup_\Pi \cost_X(\Pi) = \sup_\Pi \cost_G(\Pi), \] where the supremum is taken over the set of free point processes on $X$ and on $G$ respectively. In particular, if $X$ has fixed price one then $G$ itself has fixed price one. \end{thm} \begin{remark} The appeal of the above theorem is that it allows one to prove fixed price for a group by working on the symmetric space instead, where the geometry is more apparent. As will be evident in the proof, it suffices to prove fixed price for the Poisson point process on $X$, which is a concrete probabilistic object.
\end{remark} Our starting point is to note that such spaces $X$ enjoy the properties we've been using\footnote{The limiting factor here is really the metric: a $G$-invariant Borel measure exists on $G/H$ in fairly great generality (it is an imposition on the modular functions of $G$ and $H$), but an invariant metric will only exist if $G/H$ is homeomorphic (in an appropriate sense) to a quotient $G'/H'$ with $H'$ compact in $G$; see \cite{ANANTHARAMANDELAROCHE2013546} for further details.}: namely, an invariant proper metric that makes $X$ an lcsc space and a $G$-invariant Borel measure $\lambda_X$ on $X$. For the metric on $X$, one takes the Hausdorff metric: \[ d_X(aK, bK) = \max\{ \sup_{k_1 \in K} \inf_{k_2 \in K} d(ak_1, bk_2), \sup_{k_2 \in K} \inf_{k_1 \in K} d(ak_1, bk_2) \}, \] and for the measure $\lambda_X$ one takes the pushforward $\pi_* \lambda$, where $\pi : G \to X$ is the quotient map $a \mapsto aK$. We recall \emph{the mapping theorem} (see Section 2.3 of \cite{MR1207584} for further details): \begin{thm}[Mapping theorem]\label{mappingtheorem} Suppose $X$ is a standard Borel space with $\sigma$-finite measure $\lambda$, $\Pi$ is the Poisson point process with intensity measure $\lambda$, and $f : X \to Y$ is a measurable function. Then $f(\Pi)$ is the Poisson point process on $Y$ with intensity measure $f_* \lambda$, assuming this measure has no atoms. \end{thm} In other words, a map between the base spaces $X$ and $Y$ induces a map from the Poisson point process on $X$ to the Poisson point process on $Y$ (with some intensity measure). It is intuitive that this should lead to an inequality on costs, and we will detail how this works. Our study splits into two cases, according to whether $K$ is Haar null or not (for $G$'s Haar measure, of course).
If $\lambda(K) > 0$, then by the Steinhaus theorem\footnote{It states that if $A \subset G$ is a subset of a locally compact group with positive Haar measure, then $AA^{-1}$ contains an identity neighbourhood.} we have that $K$ is open, and hence a compact clopen subgroup of $G$. It will then also have countable index. This situation occurs for instance in the study of Cayley-Abels graphs of totally disconnected locally compact groups. In this case one is essentially looking at Bernoulli percolation on the quotient space. We will instead focus on the case that $\lambda(K) = 0$. One can consider $G$-invariant point processes $\Pi$ on $X$, which should be understood as random elements of $\MM(X)$ whose distribution is $G$-invariant. We may define the intensity of $\Pi$ as before: \[ \intensity(\Pi) = \frac{1}{\lambda_X(U)} \EE \@ifstar{\oldabs}{\oldabs*}{\Pi \cap U}, \] where $U \subseteq X$ is any set of unit volume. One can further consider notions of thinnings, partitionings, cost, and so on. We follow our previous strategy of studying these by looking at the associated groupoid. Let us write $0$ for the element $K \in X$, which we view as the root of the space. Accordingly we will define the space of rooted configurations in $X$ as \[ \MM_0(X) = \{ \omega \in \MM(X) \mid 0 \in \omega \}. \] Note that the orbit equivalence relation of $G \curvearrowright \MM(X)$ \emph{no longer} restricts to an equivalence relation with countable classes on $\MM_0(X)$. The solution to this problem is to take a quotient: \begin{prop} Let $K$ act on $\MM_0(X)$ by left multiplication. Then the action is \emph{smooth}, that is, the space of orbits $K \backslash \MM_0(X)$ is a standard Borel space. \end{prop} It is a general fact that compact groups \emph{always} act smoothly on standard Borel spaces (see Proposition 2.1.12 of \cite{MR776417} and its corollary), but it is possible to give a direct proof in this case.
The proof is very reminiscent of the section ``Extension to more general point processes'' of \cite{holroyd2003}. \begin{proof} We will construct a measurable function $F : \MM_0(X) \to [0,1]$ with the property that $F(\omega) = F(\omega')$ if and only if $\omega' \in K\omega$. This is a characterisation of smoothness. Fix a family $\{U_n\}$ of open subsets of $X$ with the property that it \emph{determines} elements of $\MM_0(X)$ in the sense that \[ \omega = \omega' \Leftrightarrow \{n \in \NN \mid \omega \cap U_n \neq \empt \} = \{n \in \NN \mid \omega' \cap U_n \neq \empt \}. \] Let $f : \{0,1\}^\NN \to [0,1]$ be any continuous order-preserving injection, and consider the map $F : \MM_0(X) \to [0,1]$ given by \[ F(\omega) = \inf_{k \in K} f((\mathbbm{1}[U_n \cap k \cdot\omega \neq \empt])_n). \] Note that the component functions $\omega \mapsto \mathbbm{1}[U_n \cap \omega \neq \empt]$ are lower semicontinuous, so the infimum is attained. The function is constant on $K$-orbits by definition, but by the separating nature of the family $\{U_n\}$ it also takes distinct values for points in distinct orbits. \end{proof} A Borel thinning $\theta : \MM(X) \to \MM(X)$ corresponds to a Borel subset $A \subseteq K \backslash \MM_0(X)$. Note that the latter can be identified with a subset of $\MM_0(X)$ which is $K$-invariant in the sense that $KA = A$; we will make such identifications freely. To see why this $K$-invariance is required, consider the formula \[ \theta^A(\omega) := \{gK \in \omega \mid g^{-1}\omega \in A \}. \] For this to be well-defined, we need that the condition does not depend on our choice of coset representative $gK$. This is exactly asking for $K$-invariance of $A$. In the same way one can see that Borel factor $[d]$-colourings correspond to Borel partitions of $K \backslash \MM_0(X)$ indexed by $[d]$, and so on.
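To illustrate the $K$-invariance condition with a concrete (and standard) example: $K$ fixes the root $0$ and acts on $X$ by isometries, so the ball $B(0, r)$ is $K$-invariant, and hence so is the set \[ A = \{ \omega \in \MM_0(X) \mid \@ifstar{\oldabs}{\oldabs*}{\omega \cap B(0, r)} \geq 2 \}. \] The associated thinning $\theta^A$ retains exactly those points of a configuration that have another point of the configuration within distance $r$.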
If $\Pi$ is a $G$-invariant point process on $X$ with law $\mu$, then we may use the above to define its \emph{Palm measure} $\mu_0$ on $K \backslash \MM_0(X)$ as before: \begin{align*} \mu_0(A) &:= \frac{\intensity \theta^A(\Pi)}{\intensity \Pi} \\ &= \frac{1}{\intensity \Pi} \EE \left[ \# \{ gK \in \Pi \cap U \mid g^{-1} \Pi \in A \}\right], \text{ where } U \subseteq X \text{ has unit volume}. \end{align*} We equip $K \backslash \MM_0(X)$ with the following rerooting equivalence relation: \[ K\omega \sim_\Rel K\omega' \text{ if and only if } \exists aK \in \omega \text{ such that } K\omega' = K a^{-1} \omega. \] We can also define a groupoid structure. If one defines \[ \Marrow(X) = \{ (\omega, aK) \in \MM_0(X) \times X \mid aK \in \omega \}, \] then there is a natural diagonal action of $K$ on $\Marrow(X)$. The quotient is again standard Borel; we denote it $K \backslash \Marrow(X)$. Then $K \backslash \MM_0(X)$ and $K \backslash \Marrow(X)$ form the unit space and arrow space (respectively) of a groupoid, which we call \emph{the rerooting groupoid}. The source and target maps are defined as before: \begin{align*} &s, t : K \backslash \Marrow(X) \to K \backslash \MM_0(X),\\ &s(K\omega, KaK) = K\omega,\\ &t(K\omega, KaK) = Ka^{-1}\omega. \end{align*} A pair of arrows $(K\omega, KaK), (K\omega', KbK) \in K \backslash \Marrow(X)$ are deemed composable if $K\omega' = Ka^{-1}\omega$, in which case \[ (K\omega, KaK) \cdot (K\omega', KbK) := (K\omega, KabK). \] We can equip this groupoid with the Palm measure, resulting in an $r$-discrete pmp groupoid as before. \begin{defn} Let $\Pi$ be a $G$-invariant point process on $X$. Its \emph{groupoid cost} is \[ \cost_X(\Pi) - 1 = \inf_{\mathscr{G}}\left\{ \frac{1}{2}\EE\left[ \sum_{x \in U \cap \Pi} \deg_x{\mathscr{G}(\Pi)} \right] \right\} - \intensity(\Pi), \] where $U$ is a set of unit volume in $X$ and the infimum is taken over all \emph{connected} equivariantly defined factor graphs of $\Pi$.
The \emph{cost of $X$} is \[ \cost(X) := \inf \{ \cost_X(\Pi) \mid \Pi \text{ is an invariant free point process on } X \}. \] \end{defn} Aside from replacing $\MM_0(G)$ with $K \backslash \MM_0(X)$, our earlier arguments apply verbatim and prove the following: \begin{itemize} \item If $\Phi : \MM(X) \to \MM(X)$ is a $G$-equivariant factor map, then $\cost_X(\Pi) \leq \cost_X(\Phi(\Pi))$. In particular, $\cost_X(\Pi)$ only depends on the isomorphism class of $\Pi$. \item Every \emph{free} point process weakly factors onto its own IID. \item The Poisson point process \emph{on $X$} has maximal $\cost_X$ amongst free $X$ processes, assuming the Poisson point process is free. \end{itemize} \begin{remark} In this level of generality, the Poisson point process on $X = G/K$ might not be free, and thus freeness must be assumed. For instance, let $G = \RR \times \RR/\ZZ$ and $K = \RR/\ZZ$. Then $K$ acts trivially on the quotient $X$, and thus \emph{no} $G$-invariant point process on $X$ is free (even their IID will not be free). These examples are rather contrived, however. \end{remark} \begin{thm}\label{equalcost} If $\Pi$ is a \emph{free} point process on $X$, then its $\cost_X$ is equal to its cost as a $G$-action. \end{thm} Recall from the introduction that the cost of a free pmp action of $G$ is defined by picking an isomorphic representation of the action as a point process, and taking the cost of that. This theorem will employ a ``whittling'' construction. Note that we can view point processes on $X$ as random closed subsets of $G$ (which happen to be unions of cosets of a fixed compact subgroup).
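To keep a concrete picture in mind (taking $G = \PSL_2(\RR)$, $K = \PSO(2)$ and $X = \HH^2$ as an example): a configuration $\omega \in \MM(X)$ corresponds to the closed subset \[ \pi^{-1}(\omega) = \bigcup_{aK \in \omega} aK \subseteq G, \] a disjoint union of fibres of the quotient map $\pi : G \to X$, one compact fibre for each point of $\omega$.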
We are able to exploit freeness to \emph{deterministically} whittle this random closed subset to a point process: \begin{prop}\label{deterministiclift} If $\Pi$ is a free point process on $X$, then it admits a \emph{deterministic lift} to $G$: that is, there exists an invariant point process $\Upsilon = \Upsilon(\Pi)$ on $G$ such that \begin{itemize} \item $\Upsilon \subset \Pi$ almost surely, \item $\Upsilon$ intersects each coset $aK$ \emph{at most} once, and \item $\pi(\Upsilon) = \Pi$. \end{itemize} In other words, we are able to select one element out of every coset $aK \in \Pi$ in an equivariant and deterministic way. \end{prop} \begin{proof}[Proof of Theorem \ref{equalcost}] Observe that the process $\Upsilon$ from Proposition \ref{deterministiclift} is isomorphic to $\Pi$, so $\cost(\Pi) = \cost(\Upsilon)$. We verify that $\cost(\Upsilon) = \cost_X(\Pi)$. Note that factor graphs of $\Pi$ and $\Upsilon$ can be bijectively identified, and so will have the same edge measures. Finally, they have the same intensity: choose $U \subseteq X$ with volume one, and observe that $\Pi \cap U$ and $\Upsilon \cap UK$ are in bijection, with $UK$ also having volume one. \end{proof} We will require a simple lemma, which essentially already appears in Lemma \ref{independentsetsexist} but which we isolate for clarity. It works for point processes on $G$ and on $X$. \begin{lem}\label{freeness} A point process $\Pi$ is free if and only if it admits a deterministic labelling by $[0,1]$ such that all of the labels are distinct (almost surely). \end{lem} \begin{proof} Clearly if such a labelling exists, then the process must be free. For the converse, let $I: \MM_0 \to [0,1]$ be a Borel isomorphism. Define a labelling by \begin{align*} &L : \MM \to [0,1]^\MM \\ &L(\omega) = \{ (x, I(x^{-1}\omega)) \mid x \in \omega \}.
\end{align*} Observe that two distinct points $x, y \in \omega$ receive the same label in $L(\omega)$ exactly when $I(x^{-1}\omega) = I(y^{-1}\omega)$, and so $xy^{-1}\omega = \omega$. If the process is free, then this never occurs, as desired. To run the proof on $X$, simply replace $\MM_0$ by $K \backslash \MM_0(X)$. \end{proof} \begin{proof}[Proof of Proposition \ref{deterministiclift}] By virtue of being free, we may use Theorem \ref{minden} to fix an isomorphism of $\Pi$ with a point process $\Pi'$ on $G$. The desired process $\Upsilon$ will be the result of pushing the points of $\Pi'$ into $\Pi$. Of course $\Pi'$ is itself a free process, so we may fix a deterministic labelling $L(\Pi')$ of its points \`{a} la Lemma \ref{freeness}. Assign each coset $aK$ of $\Pi$ to a point $x$ of $\Pi'$ in your preferred equivariant way. For instance, note that every such coset intersects some (finite but non-zero) number of Voronoi cells of $\Pi'$. For each $aK \in \Pi$, assign it to whichever of these cells has the germ with the highest label in $L(\Pi')$. We denote by $A_x$ the set of cosets in $\Pi$ that we assign to $x \in \Pi'$ in this way. Fix a Borel transversal $T \subset G$ for the cosets of $K$. Note that $xT$ is another Borel transversal for any $x \in G$, so $xT \cap aK$ selects the unique point representative of $aK$ with respect to this transversal. Finally, set \[ \Upsilon = \bigcup_{\substack{x \in \Pi' \\ aK \in A_x}} xT \cap aK. \] This selects one representative from every coset in $\Pi$, and at every stage it was performed in an equivariant way, so it is our desired invariant point process. \end{proof} \begin{proof}[Proof of Theorem \ref{symmetricspacecostmax}] Let $\Pi$ denote the Poisson point process on $G$. Then by the mapping theorem (Theorem \ref{mappingtheorem}), the image $\Upsilon$ of $\Pi$ under the quotient map $G \to X$ is the Poisson point process on $X$.
Since $\cost_G$ can only increase under factor maps, we have \[ \cost_G(\Pi) \leq \cost_G(\Upsilon). \] But the Poisson point process has maximal cost amongst free $G$-actions, so there is equality. By Theorem \ref{equalcost}, \[ \cost_X(\Upsilon) = \cost_G(\Upsilon), \] and as discussed, the Poisson point process has maximal cost amongst all free point processes on $X$, finishing the proof. \end{proof} \begin{question} It is natural to ask whether $G$ and $X$ have the same \emph{infimal} cost as well. \end{question} \subsection{Farber sequences in semisimple Lie groups} The goal of this section is to prove Theorem \ref{carderiextension} from the introduction, which we restate here: \begin{thm*} Let $G$ be a semisimple real Lie group and let $\Gamma_{n}$ be a Farber sequence of lattices in $G$. Then \[ \limsup_{n\to\infty}\frac{d(\Gamma_n)-1}{\mathrm{vol}(G/\Gamma_n)} \leq\mathrm{c}_{\mathrm{P}}(G)-1. \] \end{thm*} There would be a more straightforward (and general) proof of the above if a more general form of cost monotonicity were true; however, we are unable to prove (or disprove) the following statement: suppose $\Pi_n$ is a sequence of finite cost point processes that weakly converge to a random net $\Pi$. Is it true that \[ \limsup_{n \to \infty} \cost(\Pi_n) \leq \cost(\Pi) ? \] To prove Theorem \ref{carderiextension} we will use the geometric interpretation of being a Farber sequence -- specifically, see Corollary 3.3 of \cite{MR3664810}. In brief, it means that for all $r > 0$ the injectivity radius of a randomly chosen point of the quotient manifold $M_n = \Gamma_n \backslash X$ is larger than $r$ with high probability, where $X = G / K$ denotes the symmetric space of $G$. We will also heavily call upon the paper \cite{MR2863908}. Additionally, it will be assumed that the reader understands the proof of Theorem \ref{farbertheorem}.
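Let us also fix terminology used repeatedly below: for $\e > 0$, the \emph{$\e$-thick part} of the quotient $M_n = \Gamma_n \backslash X$ is \[ (M_n)_{\geq \e} = \{ x \in M_n \mid \mathrm{InjRad}_{M_n}(x) \geq \e \}, \] and its complement is the \emph{$\e$-thin part}. In this language, the geometric interpretation of the Farber condition says that the normalised volume of the $\e$-thick part of $M_n$ tends to one for every fixed $\e > 0$.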
\begin{proof}[Proof of Theorem \ref{carderiextension}] First, let us handle the special case of $G = \PSL_2(\RR)$, for which the theorem is true but for fundamentally different reasons. In this case the $\Gamma_n$ being discussed are fundamental groups of finite volume hyperbolic surfaces, and we only require that $\covol(\Gamma_n)$ tends to infinity. This allows us to eliminate additive constants in the following: \[ \lim_{n\to\infty}\frac{d(\Gamma_n)-1}{\mathrm{vol}(G/\Gamma_n)} = \lim_{n\to\infty}\frac{b_1(\Gamma_n)}{\mathrm{vol}(G/\Gamma_n)} = \lim_{n\to\infty}\frac{-\chi(G/\Gamma_n)}{\mathrm{vol}(G/\Gamma_n)} = \frac{1}{2\pi} = \beta_1(G) \leq \mathrm{c}_{\mathrm{P}}(G)-1 \] where $\beta_1(G)$ is the first $L^2$-Betti number of $G$. We are using the Gauss-Bonnet formula above and Gaboriau's result that the first $L^2$-Betti number of a relation is dominated by its cost minus one. Note that in \cite{conley2021one}, they prove (in particular) that (cross sections of) actions of $\PSL_2(\RR)$ are treeable and thus have fixed price equal to their $L^2$-Betti number plus one. Thus we actually have equality all the way above. We now handle the other cases. Let us denote by $a\Gamma_n$ the lattice shift corresponding to $\Gamma_n < G$. Let us summarise the proof: \begin{enumerate} \item We produce a sequence of uniformly separated factors $\Phi^n(a\Gamma_n)$ of the lattice shifts $G \curvearrowright G/\Gamma_n$. Note that by equivariance they must be of the form $\Phi^n(a\Gamma_n) = a\Gamma_n F_n$, and \[ \cost(G \curvearrowright G/\Gamma_n) - 1 = \frac{d(\Gamma_n) - 1}{\text{vol}(G/\Gamma_n)} \leq \cost(\Phi^n(a\Gamma_n)) - 1 \] as cost is monotone under factors. \item We show that $\Phi^n(a\Gamma_n)$ admits subsequential weak limits, and any such subsequential weak limit is a random net.
\item As in the proof of Theorem \ref{farbertheorem}, we now use the periodic IID lattice shift and distribute randomness, replacing $\Phi^n(a\Gamma_n)$ by marked processes which converge to an IID-labelled random net. \item Using results of \cite{MR2863908}, our desired inequality follows from cost monotonicity. \end{enumerate} We will show that the distance-$R$ factor graph $\mathscr{D}_R$ is connected on $\Phi^n(a\Gamma_n) = a\Gamma_n F_n$. Observe that, by left-invariance of the metric, this is true if and only if it is connected on $\Gamma_n F_n$. Observe that this is finitely many \emph{right} cosets, that is, $\Gamma_n F_n \subset {\Gamma_n}\backslash G$. We will show that $\mathscr{D}_R$ is connected by appealing to properties of the further quotient ${\Gamma_n}\backslash G/K = {\Gamma_n}\backslash X$. The essential result we use from \cite{MR2863908} is the following. As long as $G$ is not $\PSL_2(\RR)$, there exist constants $\e, \e' > 0$ and a sequence of \emph{connected} subsets $U_n \subset X$ such that \begin{itemize} \item The projection $\Gamma_n U_n \subset \Gamma_n \backslash X$ contains the $\e$-thick part of $\Gamma_n \backslash X$, and \item The projection $\Gamma_n U_n \subset \Gamma_n \backslash X$ is contained in the $\e'$-thick part of $\Gamma_n \backslash X$. \end{itemize} Here $\e$ is defined in Lemma 2.3 of \cite{MR2863908} and $\e' = \e/(2\mu)$, where $\mu$ is defined after the proof of Lemma 2.4. Crucially, these constants only depend on $G$. In the paper, our $U_n$ is denoted by $\widetilde{\psi}_{\leq 0}$ and it is a level set with respect to a function inversely measuring the injectivity radius. \begin{claim} There exists a sequence $\Phi^n(a\Gamma_n)$ of factors that are uniformly separated, and such that any subsequential weak limit of them is a random net. \end{claim} \begin{proof}[Proof of claim] We choose $F_n \subset G$ following Corollary 2.13 of \cite{MR2863908}.
We choose a maximal $\e'$-separated subset $E_n$ of $\Gamma_n U_n K \subset \Gamma_n \backslash X$. Then the union of $\e'$-balls around $E_n$ will cover $\Gamma_n U_n$ by maximality; hence the set of points not covered by these balls lies in the $\e'$-thin part of $\Gamma_n \backslash X$. By the Farber condition, the density of these points tends to $0$ in $n$. This means that for any subsequential weak limit of the point processes $a\Gamma_n F_n$, the probability that the identity is at distance more than $\e'$ from the root of $X$ is $0$. That is, the union of $\e'$-balls for any subsequential weak limit will equal $X$ a.s.; in other words, the weak limit will be a net. The slight difference is that we now need the same on $G$, not on $X$. In order to do that, we pick a coset representative (randomly or deterministically) with respect to $K$. This can increase the $X$-distance, but only by a bounded amount, so the same argument holds on $G$ with worse constants. \end{proof} As before, we distribute randomness from the $\Gamma_n$-periodic IID processes to $\Phi^n(a\Gamma_n)$. Call the resulting process $\Pi^n$. By passing to a subsequential weak limit, we can assume that $\Pi^n$ weakly converges to some process $\Upsilon$. As before in Theorem \ref{farbertheorem}, the Farber condition ensures that $\Upsilon$ is in fact an IID labelled process (and in particular, its cost is at most the cost of the Poisson point process). Our final task is to relate the cost of the $\Pi^n$ processes to the cost of $\Upsilon$. We write $\mu_0^n$ for the Palm measure of $\Pi^n$ and $\mu_0$ for the Palm measure of $\Upsilon$. By the proof of Theorem \ref{costmonotonicity}, any factor graph which $\delta$-computes the cost of $\Upsilon$ contains a $\mu_0$-continuity factor graph $\mathscr{G}$ which is connected for $\Upsilon$. Thus \[ \limsup_{n \to \infty} \overrightarrow{\mu_0^n}(\mathscr{G}) \leq \overrightarrow{\mu_0}(\mathscr{G}).
\] A priori, there is no reason to expect that $\mathscr{G}$ is connected for \emph{any} $\Pi^n$, however. But note that by construction the graphing $\mathscr{G}$ has the property that for large enough $R > 0$ there exists a constant $N$ such that $\mathscr{G}^N(\omega) \supseteq \mathscr{D}_R(\omega)$ for all $\e'$-separated configurations $\omega$, where $\mathscr{D}_R$ denotes the distance-$R$ factor graph as usual. In particular, $\mathscr{G}$ is connected for the lattice shift factors $a\Gamma_n F_n$, as they are coarsely connected: by left-invariance of the metric, we may simply consider $\Gamma_n F_n$, and recall that its image in $X$ lies in the connected subset $U_n$. Now for any pair of points $x$ and $y$ in $\Gamma_n F_n$, take a path between their images $xK$ and $yK$ lying within $U_n$, and note that it induces a coarse path (with bounded jumps) between $x$ and $y$ themselves. Thus \[ \limsup_{n \to \infty} \frac{d(\Gamma_n) - 1}{\text{vol}(G/\Gamma_n)} \leq \cost(\Pi^n) - 1 \] as desired. \end{proof}
\section{Introduction} \label{sec:intro} The question of how to quantify the relative importance of variables has intrigued researchers for years. While it was largely of academic interest early on, the question has taken on greater urgency in the last two decades, due to the increasing frequency of large data sets and the popularity of ``black box'' machine learning methods for which scoring the importance of variables may be the only means of interpretation; see \citet{Bring94}, \citet{Bi12} and \citet{Wei15} for excellent surveys. A prime black-box example is random forest (RF, \citet{RF}), which consists of hundreds of unpruned regression trees. Its permutation-based scheme to produce importance scores has been copied by many methods. Some researchers have observed that the orderings of RF scores do not always agree with those based on traditional methods. \citet{Bureau05} used RF to identify single-nucleotide polymorphisms (SNPs) predictive of disease and found that while SNPs that are highly associated with disease, as measured by Fisher's exact test, tend to have high RF scores, the two orderings do not match. \citet{Diaz06} selected genes in microarray data by iteratively removing 20\% of the genes with the lowest RF scores at each step. They found that this yielded a smaller set of genes than linear discriminant analysis, nearest neighbor and support vector machine methods, and that the RF results were more variable. 
\begin{table} \centering \caption{Variables in COVID data} \label{tab:covid:vars} \vspace{0.5em} \begin{tabular}{lp{4.7in}} \hline \texttt{died} & Died while hospitalized (0=no, 1=yes) \\ \texttt{agecat} & Age group (0=18--50, 1=50--59, 2=60--69, 3=70--79, 4=80--90 years) \\ \texttt{race} & White; Black or African American; Asian; Native Hawaiian or other Pacific Islander; American Indian or Alaska Native; Unknown \\ \texttt{sex} & Gender (male/female) \\ \texttt{aids} & AIDS/HIV (0=no, 1=yes) \\ \texttt{cancer} & Any malignancy, including lymphoma and leukemia, except malignant neoplasm of skin (0=no, 1=yes) \\ \texttt{cerebro} & Cerebrovascular disease (0=no, 1=yes) \\ \texttt{charlson} & Charlson comorbidity index (0--20) \\ \texttt{CHF} & Congestive heart failure (0=no, 1=yes) \\ \texttt{CPD} & Chronic pulmonary disease (0=no, 1=yes) \\ \texttt{dementia} & Dementia (0=no, 1=yes) \\ \texttt{diabetes} & Diabetes mellitus (0=no, 1=yes) \\ \texttt{hemipara} & Hemiplegia or paraplegia (0=no, 1=yes) \\ \texttt{metastatic} & Metastatic solid tumor (0=no, 1=yes) \\ \texttt{MI} & Myocardial infarction (0=no, 1=yes) \\ \texttt{mildliver} & Mild liver disease (0=no, 1=yes) \\ \texttt{modsevliv} & Moderate/severe liver disease (0=no, 1=yes) \\ \texttt{PUD} & Peptic ulcer disease (0=no, 1=yes) \\ \texttt{PVD} & Peripheral vascular disease (0=no, 1=yes) \\ \texttt{RD} & Rheumatic disease (0=no, 1=yes) \\ \texttt{renal} & Renal disease (0=no, 1=yes) \\ \hline \end{tabular} \end{table} The differences in orderings may be demonstrated on a data set from \citet{Harrison20} of 31,461 patients aged 18--90 years diagnosed with the COVID-19 disease between January 20 and May 26, 2020, in the United States. Table~\ref{tab:covid:vars} lists the 21 variables, which consist of death during hospitalization, age group, sex, race, 16 comorbidities, and Charlson comorbidity index (a risk score computed from the comorbidities). 
The authors estimated mortality risk by fitting a multiple linear logistic regression model, without Charlson index, to each age group. They found 10 variables statistically significant at the 0.05 level (without multiplicity adjustment), namely, race, sex, and history of myocardial infarction (\texttt{MI}), congestive heart failure (\texttt{CHF}), dementia, chronic pulmonary disease (\texttt{CPD}), mild liver disease (\texttt{mildliver}), moderate/severe liver disease (\texttt{modsevliv}), renal disease (\texttt{renal}), and metastatic solid tumor (\texttt{metastatic}). Figure~\ref{fig:covid:barplots} shows the importance scores of the top 10 variables obtained from 12 methods discussed below. There is substantial variation in the orderings, although \texttt{agecat}, \texttt{charlson}, and \texttt{renal} are ranked in the top 3 by 7 of the 12 methods. Of the variables that \citet{Harrison20} found statistically significant, \texttt{CPD} is not ranked in the top 10 by any method, and \texttt{mildliver} and \texttt{metastatic} are ranked in the top 10 only twice and once, respectively. On the other hand, the non-significant variables \texttt{cancer}, \texttt{cerebro}, \texttt{diabetes}, \texttt{hemipara}, and \texttt{PVD} are ranked in the top 10 by 5, 10, 7, 3, and 9 methods, respectively. Statistical significance is clearly not necessarily consistent with the importance scores. \begin{figure} \centering \resizebox{0.95\textwidth}{!}{\includegraphics*[46,25][586,771]{covidplots.ps}} \caption{Top 10 variables for COVID data; scores of LASSO, RANGER, RF, RFSRC, and RLT are averaged over 100 runs with different random seeds} \label{fig:covid:barplots} \end{figure} What is one to do in the face of such disparate results? One solution is to average the ranks across the methods, but this assumes that the methods are equally good. 
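The rank-averaging heuristic just mentioned can be sketched in a few lines. This is purely illustrative: the method names, variables, and scores below are invented, not taken from the COVID data.

```python
# Hypothetical example: combine variable rankings from several importance
# methods by averaging ranks. All scores below are invented for illustration.

def ranks(scores):
    """Rank variables by score; rank 1 = highest (most important) score."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {var: r for r, var in enumerate(order, start=1)}

# importance scores from three hypothetical methods A, B, C
method_scores = {
    "A": {"agecat": 0.9, "charlson": 0.7, "renal": 0.3},
    "B": {"agecat": 0.5, "charlson": 0.8, "renal": 0.2},
    "C": {"agecat": 0.6, "charlson": 0.4, "renal": 0.9},
}

variables = ["agecat", "charlson", "renal"]
avg_rank = {
    v: sum(ranks(s)[v] for s in method_scores.values()) / len(method_scores)
    for v in variables
}
print(avg_rank)  # lower average rank = more important overall
```

Because the average weights every method equally, a single badly biased method can drag an important variable down (or push an unimportant one up) in the combined ordering.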
\citet{Strobl07}, \citet{SZ08}, and others have shown that the scores from RF are unreliable because they are biased towards certain variable types. A method is said to be ``unbiased'' if all predictor variables have the same mean importance score when they are independent of the response variable. One goal of this paper is to find out if other methods are biased. Given a data set, bias may be uncovered by estimating the mean scores over random permutations of the response variable, keeping the values of the predictor variables fixed. Let $\mathtt{VI}_j(X)$ ($j=1,2,\ldots,J$) denote the importance score of variable $X$ in the $j$th permutation. Figure~\ref{fig:covid:perm} plots the values of $\overline{\mathtt{VI}}(X) = J^{-1} \sum_j \mathtt{VI}_j(X)$ in increasing order and their 2-standard error bars, for $J=1000$. An unbiased method should have all its error bars overlapping. The plots show that only CFOREST2, GUIDE, and RANGER have this property. \begin{figure} \centering \resizebox{0.9\textwidth}{!}{\includegraphics*[17,29][576,763]{covid_bias.ps}} \caption{Mean importance scores $\overline{\mathtt{VI}}$ (orange) and 2-SE bars (green) from 1000 random permutations of the dependent variable for COVID data. Variables ordered by increasing mean scores. CTREE and RPART are not included because they returned trees with no splits (and hence no importance scores) for all permutations.} \label{fig:covid:perm} \end{figure} Another interesting problem is to identify variables that are truly important. There have been few attempts at answering this question, despite its being central to variable selection---an important step if the total number of variables exceeds the sample size. Although it may be expected that all important variables should be included, \citet{lchen12} showed that under certain conditions, omitting the less important ones can yield a model with higher prediction accuracy. The remainder of this article is organized as follows.
Section~\ref{sec:guide} describes the GUIDE method of calculating importance scores. Section~\ref{sec:methods} reviews the other 11 methods. Section~\ref{sec:simul} presents the results of simulations to identify the biased methods and show their effects on the scores. Section~\ref{sec:predict} examines the extent to which the scores of each method are consistent with two measures of predictive power of the variables. Section~\ref{sec:threshold} describes a general procedure for producing a threshold score such that, with high probability, variables with scores less than the threshold are independent of the response. Section~\ref{sec:miss} shows how the GUIDE method applies to data with missing values. Section~\ref{sec:conclusion} concludes the article with some remarks. \section{GUIDE} \label{sec:guide} The GUIDE algorithm for constructing regression and classification trees is described in \citet{guide} and \citet{guide09}, respectively. It differs from CART \citep{cart} in every respect except tree pruning, where both employ the same cost-complexity cross-validation technique. CART uses greedy search to select the split that most decreases node impurity but GUIDE uses chi-squared tests to first select a split variable and then searches for the best split based on it that most decreases node impurity. This approach started in \citet{lv88} and evolved principally through \citet{chly94}, \citet{lohshih97}, and \citet{guide,guide09}. Besides reducing computation, it lets GUIDE avoid biases in variable selection inherent among greedy search methods. Another major difference between GUIDE and CART is how each deals with missing values in predictor variables. \citet{cruise} showed that CART's solution through surrogate splits is another source of selection bias. An initial importance scoring method based on GUIDE was proposed in \citet{lchen12}. 
Though not designed to be unbiased, it turned out to be approximately unbiased unless there is a mix of ordinal and categorical variables. We present here an improved version for regression that ensures unbiasedness. As in the previous method, it uses a weighted sum of chi-squared statistics obtained from a shallow (four-level) unpruned tree, but it adds conditional tests for interaction and a permutation-based step for bias adjustment. Given a node $t$, let $n_t$ denote the number of observations in $t$. \begin{enumerate} \item Fit a constant to the data in $t$ and compute the residuals. \item Define a class variable $Z$ such that $Z=1$ if the observation has a positive residual and $Z=2$ otherwise. \item \begin{enumerate} \item If $X_k$ is an ordinal variable, transform it to a categorical variable $X_k'$ with $m$ roughly equal-sized categories, where $m=3$ if $n_t < 60$ and $m=4$ otherwise. \item If $X_k$ is a categorical variable, define $X_k' = X_k$. \end{enumerate} \item If $X_k$ has missing values, add an extra category to $X_k'$ to hold the missing values. \item For $k=1,2, \ldots, K$, where $K$ is the number of variables, perform a contingency table chi-squared test of $X_k'$ versus $Z$ and denote its p-value by $p_1(k,t)$. \item \label{step:7} If $\min_k p_1(k,t) \geq 0.10/K$ (first Bonferroni correction), carry out the following interaction tests. \begin{enumerate} \item Transform each ordinal $X_k$ to a 3-level categorical variable $X_k'$. If $X_k$ has no missing values, $X_k'$ is $X_k$ discretized at the 33rd and 67th sample quantiles. If $X_k$ has missing values, $X_k'$ is $X_k$ discretized at the sample median with missing values forming the third category. If $X_k$ is a categorical variable, let $X_k' = X_k$. \item For every pair $(X'_j, X'_k)$ with $j < k$, perform a chi-squared test with the $Z$ values as rows and the $(X'_j, X'_k)$ values as columns and let $p_2(j,k,t)$ denote its p-value. 
\item Let $(X_{j'}', X_{k'}')$ be the pair of variables with the smallest value of $p_2(j,k,t)$. If $p_2(j',k',t) < 0.20\{K(K-1)\}^{-1}$ (second Bonferroni correction), redefine $p_1(j',t) = p_1(k',t) = p_2(j',k',t)$. \end{enumerate} \item Let $k^*$ be the smallest value of $k$ such that $p_1(k^*,t) = \min_k p_1(k,t)$. Find the split on $X_{k^*}$ yielding the largest decrease in node impurity (i.e., sum of squared residuals). \end{enumerate} After a tree is grown with four levels of splits, the importance score of $X_k$ is computed as \begin{equation} \label{eq:guide:raw} v(X_k) = \sum_t \sqrt{n_t} \, \chi^2_1(k,t) \end{equation} where the sum is over the intermediate nodes and $\chi^2_1(k,t)$ denotes the $(1-p_1(k,t))$-quantile of the chi-squared distribution with 1 degree of freedom. The factor $\sqrt{n_t}$ in (\ref{eq:guide:raw}) first appeared in \citet{lchen12} but was changed to $n_t$ in \citet{lhm15}; we revert it back to $\sqrt{n_t}$ to prevent the root node from dominating the scores. The values of $v(X_k)$ are slightly biased due partly to differences between ordinal and categorical variables and partly to the above step~\ref{step:7}. To remove the bias, we adjust the scores by their means computed under the hypothesis that the response variable ($Y$) is independent of the $X$ variables. Specifically, the $Y$ values are randomly permuted $B$ times (the default is $B=300$) with the $X$ values fixed, and a tree with four levels of splits is constructed for each permuted data set. Let $v^*_b(X_k)$ be the value of~(\ref{eq:guide:raw}) in permutation $b=1,2,\ldots,B$, and define $\bar{v}(X_k) = B^{-1} \sum_b v^*_b(X_k)$. The GUIDE \emph{bias-adjusted} variable importance score of $X_k$ is \begin{equation} \label{eq:guide:scaled} \mathtt{VI}(X_k) = v(X_k) / \bar{v}(X_k). \end{equation} \section{Other methods} \label{sec:methods} We briefly review the other methods here. \begin{description} \item[RPART.] This is an R version of CART \citep{rpart}. 
Let $s = \{X_i \in A\}$ denote a split of node $t$ for some variable $X_i$ and set $A$, and let $t_L$ and $t_R$ denote its left and right child nodes. Given a node impurity function $i(t)$ at $t$, let $\Delta(s,t) = i(t)-i(t_L)-i(t_R)$ be a measure of the goodness of the split. For regression trees, $i(t) = \sum_{i \in t} (y_i-\bar{y}_t)^2$, where $\bar{y}_t$ is the sample mean at $t$. CART partitions the data with the split $s(t)$ that maximizes $\Delta(s,t)$. To evaluate the importance of the variables as well as to deal with missing values, CART finds, for each $X_j$ ($j \neq i$), the surrogate split $\tilde{s}_j(t)$ that best predicts $s(t)$. The importance score of $X_j$ is $\sum_t \Delta(\tilde{s}_j(t),t)$, where the sum is over the intermediate nodes of the pruned tree \citep[p.~141]{cart}. RPART measures importance differently from CART \citep{rpartvignette}. Given a split $s(t)$ and a surrogate $\tilde{s}(t)$, let $k(s(t), \tilde{s}(t))$ be the total number of observations in $t_L$ and $t_R$ correctly sent by $\tilde{s}(t)$. Let $n_L$ and $n_R$ denote the number of observations in $t_L$ and $t_R$, respectively. The ``adjusted agreement'' between $s$ and $\tilde{s}$ is $a(s,\tilde{s}) = \{k(s,\tilde{s})-\max(n_L,n_R)\}/\min(n_L,n_R)$. Call $X_i$ a ``primary'' variable if it is in $s$ and a ``surrogate'' variable if it is in $\tilde{s}$. Let $P(i)$ and $S(i)$ denote the sets of intermediate nodes where $X_i$ is the primary and surrogate variable, respectively. RPART defines $\mathtt{VI}(X_i) = \sum_{t \in P(i)} \Delta(s(t),t) + \sum_{t \in S(i)} a(s(t),\tilde{s}(t))\Delta(\tilde{s}(t),t)$ as the importance score of $X_i$. As shown below, this method yields biased scores, because maximizing the decrease in node impurity induces a bias towards selecting variables that allow more splits \citep{WL94,lohshih97} and the surrogate split method itself induces a bias when there are missing values \citep{cruise}. \item[GBM.] 
This is gradient boosting machine \citep{Friedman01}. It uses functional gradient descent to build an ensemble of short CART trees. For a single tree, the importance score of a variable is the square root of the total decrease in node impurity (squared error in the case of regression) over the nodes where the variable appears in the split. For an ensemble, it is the root mean squared importance score of the variable over the trees \citep[p.~1217]{Friedman01}. We use the R function \texttt{gbm} \citep{gbm-man} to construct the GBM models and the \texttt{varImp} function in the \texttt{caret} package \citep{caret} to calculate the importance scores. \item[RF.] This is the R implementation of random forest \citep{Liaw02}. It has two measures for computing importance scores. The first is the ``decrease in accuracy'' of the forest in predicting the ``out-of-bag'' (OOB) data before and after random permutation of the predictor variable, where the OOB data are the observations not in the bootstrap sample. The second uses the ``decrease in node impurity,'' which is the average of the total decrease in node impurity of the trees. Partly due to CART's split selection bias, the decrease in node impurity measure is known to be unreliable \citep{Strobl07,SZ08}. The results reported here use the ``decrease in accuracy'' measure. \item[RANGER.] \citet{SZ08} used pseudovariables to correct the bias in RF's ``decrease in node impurity'' method. (Pseudovariables were employed earlier by \citet{WBS07}.) Given $K$ predictor variables $\mathbf{X} = (X_1, X_2, \ldots, X_K)$, another $K$ pseudovariables $\mathbf{Z} = (Z_1, Z_2, \ldots, Z_K)$ are added where the rows of $\mathbf{Z}$ are random permutations of the rows of $\mathbf{X}$. The RF algorithm is applied to the $2K$ predictors and the importance score of $X_i$ is adjusted by subtracting the score of $Z_i$ for $i=1,2,\ldots,K$. 
This approach requires more computer memory and increases computation time (a forest has to be constructed for each generation of $\mathbf{Z}$). \citet{NKW18} proposed using only a single generation of $\mathbf{Z}$ and storing only the permutation indices rather than the values of $\mathbf{Z}$. Their method is implemented in the \texttt{ranger} R package \citep{ranger}. Although storing only the permutation indices saves computer memory, the use of a single permutation adds another level of randomness to the already random results of RF. In serious applications, there are no savings in computation time because RANGER must be applied many times to stabilize the average importance scores. In the real data examples here, the RANGER scores are averages over 100 replications. \item[RFSRC.] This is another ensemble method similar to RF \citep{Ishwaran07,Ishwaran08}. The importance of a variable $X$ is measured by the difference between the prediction error of the OOB sample before and after $X$ is ``noised up''. ``Noising up'' here means that if an OOB observation encounters a split on $X$ at a node $t$, it is randomly sent to the left or right branch, with equal probability, at $t$ and \emph{all} its descendant nodes. Missing values in a predictor variable are imputed nodewise, by replacing each missing value with a randomly selected non-missing value in the node. The results for RFSRC here are obtained with the \texttt{randomForestSRC} R package \citep{rfsrc-r}. \item[RLT.] This method may be thought of as ``RF-on-RF.'' Called ``reinforcement learning trees'' \citep{RLT}, it constructs an ensemble of trees from bootstrap samples, but uses the RF permutation-based importance scoring method to select the most important variable to split each node in each tree. After the ensemble is constructed, the final importance scores are obtained using the RF permutation scheme. The results here are produced by the \texttt{RLT} package \citep{RLTpack}. \item[CTREE.]
This is the ``conditional inference tree'' algorithm of \citet{ctree}. It follows the GUIDE approach of using significance tests to select a variable to split each node of a tree. Unlike GUIDE, however, CTREE uses linear statistics based on a permutation test framework and, instead of pruning, it uses Bonferroni-type p-value thresholds to determine tree size. Further, the significance tests employ only observations with non-missing values in the $X$ variable being evaluated. Observations with missing values are passed through each split by means of surrogate splits as in CART. Importance scores are obtained as in RFSRC, except that an OOB observation with a missing value in the split variable at a node is randomly sent to the left or right child node with probabilities proportional to the sample sizes of the non-missing observations in the child nodes. \item[CFOREST.] This is an ensemble of CTREE trees from the \texttt{partykit} R package. Instead of bootstrap samples, it takes random subsamples (without replacement) of about two-thirds of the data to construct each tree. \citet{Strobl07} showed that this removes a bias in RF that gives higher scores to categorical variables with large numbers of categories. This is the default option in \texttt{partykit}, which we denote by CFOREST1. Another option, which we denote by CFOREST2, is conditional permutation of the variables, which \citet{Strobl08} proposed for reducing the bias in RF towards correlated variables. \item[LASSO.] This is linear regression with the lasso penalty. The importance score of an ordinal variable is the absolute value of its coefficient in the fitted model and that of a categorical variable is the average of the absolute values of the coefficients of its dummy variables. All variables (including dummy variables) are standardized to have mean 0 and variance 1 prior to model fitting. We use the R implementation in the \texttt{glmnet} package \citep{glmnet-man}. \item[BARTM.]
This is \texttt{bartMachine} \citep{Bleich14}, a Bayesian method of constructing a forest of regression trees using the BART \citep{BART} method. The underlying model is that the response variable is a sum of regression tree models plus homoscedastic Gaussian noise. Prior distributions must be specified for all unknown parameters, including the set of tree structures, terminal node parameters, and the Gaussian noise variance. According to \citet{Bleich14}, the importance of a variable is given by the relative frequency with which it appears in the splits in the trees. The results here are obtained from the R package \texttt{bartMachine} with default parameters. \end{description} \section{Simulation experiments} \label{sec:simul} We performed 6 simulation experiments (E0--E5) involving 11 predictor variables ($B_1$, $B_2$, $C_1$, $C_2$, $N_1$, $N_2$, $N_3$, $N_4$, $S_1$, $S_2$, $S_3$) to compare the performance of the methods. Variable sets $\{B_1\}$, $\{C_1\}$, $\{B_2, C_2\}$, $\{N_1, N_2, N_3, N_4\}$, and $\{S_1, S_2, S_3\}$ are mutually independent. Variable $B_1$ is Bernoulli with $P(B_1 = 1) = 0.50$, and $C_1$, $C_2$ are independent categorical variables taking values $1,2,\ldots, 10$ with equal probability 0.10. Variable $B_2 = I(C_2 \leq 5)$ is a binary variable derived from $C_2$. Variable $N_1$ is independent standard normal except in model~E2 (see below). The triple $(N_2, N_3, N_4)$ is multivariate normal with zero mean, unit variance, and constant correlation 0.90. The triple $(S_1, S_2, S_3)$ is obtained by setting $S_1 = \min(U_1, U_2)$, $S_2 = |U_1-U_2|$, and $S_3 = 1-\max(U_1,U_2)$, where $U_1$ and $U_2$ are independent and uniformly distributed variables on the unit interval, so that $S_1+S_2+S_3 = 1$ and $\mbox{cor}(S_i,S_j) = -0.50$ ($i \neq j$). Their purpose is to see if there are any effects of linear dependence on the importance scores.
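As a concrete illustration, the predictor distributions above can be generated as follows. This is a minimal numpy sketch under our own naming conventions and seed, not the authors' simulation code:

```python
# Sketch of one simulated sample of the predictors described in the text.
import numpy as np

rng = np.random.default_rng(0)
n = 400  # sample size per trial

B1 = rng.binomial(1, 0.5, n)     # Bernoulli with P(B1 = 1) = 0.5
C1 = rng.integers(1, 11, n)      # 10 equiprobable categories 1..10
C2 = rng.integers(1, 11, n)
B2 = (C2 <= 5).astype(int)       # binary variable derived from C2

# (N2, N3, N4): zero-mean, unit-variance normals with constant correlation 0.9
cov = 0.9 * np.ones((3, 3)) + 0.1 * np.eye(3)
N2, N3, N4 = rng.multivariate_normal(np.zeros(3), cov, n).T
N1 = rng.standard_normal(n)      # independent standard normal

# (S1, S2, S3): spacings of two uniforms; they sum to 1 and have
# pairwise correlation -0.5
U1, U2 = rng.uniform(size=(2, n))
S1 = np.minimum(U1, U2)
S2 = np.abs(U1 - U2)
S3 = 1 - np.maximum(U1, U2)

assert np.allclose(S1 + S2 + S3, 1.0)  # min + |diff| + (1 - max) = 1
```

The response for each model in Table~\ref{tab:outcome-model:new} is then $Y = \mu(X) + \epsilon$ with $\epsilon$ drawn as an independent standard normal vector.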
Table~\ref{tab:outcome-model:new} shows the models used to generate the dependent variable $Y = \mu(X) + \epsilon$, where $\mu(X)$ is a function of the predictor variables and $\epsilon$ an independent standard normal variable. Null model E0, where $Y$ is independent of the $X$ variables, tests for bias. The other models, which have one or two important variables, show the effects of bias on the scores. For each model, the scores are obtained from 1000 simulation trials, with random samples of 400 observations in each trial. Figure~\ref{fig:E0} shows the average scores and their 2-SE (simulation standard error) bars for model E0. Because the 2-SE bars should overlap if there is no selection bias, we see that only CFOREST2, CTREE, GUIDE, and RANGER are unbiased, with RANGER and, to a lesser degree, CFOREST2 exhibiting variance heterogeneity. BARTM, CFOREST1 and RF are biased towards correlated variables $N_2$, $N_3$ and $N_4$. GBM, RLT and RPART are biased towards categorical variables $C_1$ and $C_2$. RFSRC is biased against all categorical variables. LASSO is biased \emph{in favor} of $B_1$ but \emph{against} $B_2$. \begin{table} \centering \caption{Simulation models $Y = \mu(X) + \epsilon$, with $\epsilon$ independent standard normal} \label{tab:outcome-model:new} \vspace{0.5em} \begin{tabular}{@{}ll@{}} \hline E0 & $\mu(X) = 0$ \\ E1 & $\mu(X) = 0.2 N_2$ \\ E2 & $\mu(X) = 0.1(N_1 + N_2)$ \\ E3 & $\mu(X) = 0.2B_1$ \\ E4 & $\mu(X) = 0.2B_2 = 0.2 I(C_2 \leq 5)$ \\ E5 & $\mu(X) = 0.5\{I(B_1 = 0, C_1 \leq 5) + I(B_1 = 1, C_1>5)\}$ \\ \hline \end{tabular} \end{table} \begin{figure} \centering \resizebox{\textwidth}{!}{\rotatebox{-90}{\includegraphics*[25,26][581,767]{M0_raw.ps}}} \caption{Average importance scores with 2-SE error bars for model E0, where predictor variables are independent of $Y$.} \label{fig:E0} \end{figure} Figures~\ref{fig:E1}--\ref{fig:E5} show boxplots of the 1000 simulated scores for models~E1--E5. 
Boxplots of variables that affect $Y$ are drawn in red. We can draw the following conclusions. \begin{description} \item[E1.] The model is $\mu(X) = 0.2 N_2$, but the response is also associated with $N_3$ and $N_4$ through their correlation with $N_2$. Figure~\ref{fig:E1} shows that all but one method give their highest median scores to these three predictors. The exception is GBM---its strong bias towards variables $C_1$ and $C_2$ makes them likely to be incorrectly scored higher than $N_2$, $N_3$ and $N_4$. \item[E2.] The model is $\mu(X) = 0.1(N_1 + N_2)$, where $N_1$ is independent of $N_2$ but the latter is highly correlated with $N_3$ and $N_4$. We expect their scores to be larger than those of the other variables, with $N_1$ and $N_2$ being roughly equal and $N_3$ and $N_4$ close behind. Figure~\ref{fig:E2} shows this to be true of all methods except GBM, RF, RLT, and RPART. For RF and RPART, the presence of $N_3$ and $N_4$ raises the median score of $N_2$ above that of $N_1$. GBM again tends to incorrectly score $C_1$ and $C_2$ highest. RLT also frequently incorrectly scores these two categorical variables higher than $N_2$, $N_3$ and $N_4$. \item[E3.] The model is $\mu(X) = 0.2B_1$, with $B_1$ independent of the other predictors. All except GBM, LASSO, RF, RFSRC, RLT, and RPART are more likely to correctly score $B_1$ highest. GBM, RLT and RPART fail to do this due to bias towards $C_1$ and $C_2$. RF fails due to the high correlation of $N_2$, $N_3$ and $N_4$. CTREE and LASSO yield median scores of 0 for all predictors, including $B_1$. \item[E4.] The model is $\mu(X) = 0.2B_2$, but because $B_2 = I(C_2 \leq 5)$, the two should have the highest median importance scores. Only GUIDE, RANGER and possibly CFOREST2 have this property. BARTM and CFOREST1 give the highest median score to $B_2$ but a middling median score to $C_2$.
Conversely, due to their bias towards categorical variables, GBM and RLT give the highest median score to $C_2$ but a middling median score to $B_2$. As in model E3, CTREE and LASSO cannot reliably identify $B_2$ or $C_2$ as important because both methods yield 0 median scores for all predictors. \item[E5.] The model is $\mu(X) = 0.5\{I(B_1 = 0, C_1 \leq 5) + I(B_1 = 1, C_1>5)\}$, which has an interaction between $B_1$ and $C_1$. BARTM, CFOREST1, CFOREST2, GUIDE, and RANGER correctly give highest median scores to these two predictors. GBM and RPART give $B_1$ the lowest median score due to their bias against binary variables. RF incorrectly gives $B_1$ and $C_1$ low median scores due to its preference for correlated predictors. RFSRC incorrectly gives $B_1$ and $C_1$ low median scores due to its bias against binary and categorical predictors. RLT gives $C_1$ and $C_2$ the highest median scores due to its bias towards these two variables. CTREE and LASSO are again ineffective because both give zero median scores to all predictors. \end{description} Overall, CFOREST2, GUIDE, and RANGER are the only unbiased methods and consequently are among the most likely to correctly identify the important variables. \begin{figure} \centering \resizebox{\textwidth}{!}{\rotatebox{-90}{\includegraphics*[25,26][581,767]{M2_raw_box.ps}}} \caption{Boxplots of importance scores over 1000 trials for model E1, where $\mu(X) = 0.2N_2$ and $N_2, N_3$, $N_4$ are highly correlated. Variables associated with $Y$ are in red.} \label{fig:E1} \end{figure} \begin{figure} \centering \resizebox{\textwidth}{!}{\rotatebox{-90}{\includegraphics*[25,26][581,767]{M3_raw_box.ps}}} \caption{Boxplots of importance scores over 1000 trials for model E2, where $\mu(X) = 0.1(N_1+N_2)$ and $N_2, N_3, N_4$ are highly correlated.
Variables with effect on $Y$ are in red.} \label{fig:E2} \end{figure} \begin{figure} \centering \resizebox{\textwidth}{!}{\rotatebox{-90}{\includegraphics*[25,26][581,767]{M4_raw_box.ps}}} \caption{Boxplots of importance scores over 1000 trials for model E3, where $\mu(X) = 0.2 B_1$. Variables with effect on $Y$ are in red.} \label{fig:E3} \end{figure} \begin{figure} \centering \resizebox{\textwidth}{!}{\rotatebox{-90}{\includegraphics*[25,26][581,767]{M5_raw_box.ps}}} \caption{Boxplots of importance scores over 1000 trials for model E4, where $\mu(X) = 0.2 B_2$ and $B_2 = I(C_2 \leq 5)$. Variables with effect on $Y$ are in red.} \label{fig:E4} \end{figure} \begin{figure} \centering \resizebox{\textwidth}{!}{\rotatebox{-90}{\includegraphics*[25,26][581,767]{M8_raw_box.ps}}} \caption{Boxplots of importance scores over 1000 trials for model E5, where $\mu(X) = 0.5\{I(B_1 = 0, C_1 \leq 5) + I(B_1 = 1, C_1>5)\}$. Variables with effect on $Y$ are in red.} \label{fig:E5} \end{figure} \clearpage \section{Predictive importance} \label{sec:predict} ``Predictive importance'' may be interpreted as the effect of a variable on the prediction of a response, but it is not known which, if any, of the importance scoring methods directly measures the concept. BARTM scores variables by their frequencies of being chosen to split the nodes of the trees. GBM and RPART base their scores on decrease in impurity, and LASSO uses absolute values of regression coefficient estimates. CFOREST, CTREE, RANGER, RF, and RFSRC measure change in prediction accuracy after random permutation of the variables---an approach that \citet{Strobl08} call ``permutation importance.'' GUIDE scores may be considered as measures of ``associative importance,'' being based on chi-squared tests of association with the response variable at the nodes of a tree. To see how well the scores reflect predictive importance, we need a precise definition of the latter. 
Given predictor variables $X_1, X_2, \ldots, X_K$, consider the four models, \begin{eqnarray} Y & = & \mu + \epsilon \label{eq:model0} \\ Y & = & f_j(X_j) + \epsilon \label{eq:model1} \\ Y & = & g_j(X_1, \ldots, X_{j-1}, X_{j+1}, \ldots, X_K) + \epsilon \label{eq:model2} \\ Y & = & h(X_1, X_2, \ldots, X_K) + \epsilon \label{eq:model4} \end{eqnarray} where $\mu$ is a constant, $f_j$, $g_j$, and $h$ are arbitrary functions of their arguments, and $\epsilon$ is an independent variable with zero mean and variance possibly depending on the values of the $X$ variables. Equation~(\ref{eq:model0}) states that $E(Y)$ is independent of the predictors, (\ref{eq:model1}) states that it depends only on $X_j$, (\ref{eq:model2}) states that it depends on all variables except $X_j$, and (\ref{eq:model4}) allows dependence on all variables. Let $\hat{\mu}$, $\hat{f}_j$, $\hat{g}_j$, and $\hat{h}$ denote estimates of $\mu$, $f_j$, $g_j$, and $h$, respectively, obtained from a training sample and define \begin{eqnarray*} S_0 & = & E (Y - \hat{\mu})^2 \\ S_j & = & E (Y - \hat{f}_j(X_j))^2 \\ S_{-j} & = & E (Y - \hat{g}_j(X_1,\ldots,X_{j-1},X_{j+1},\ldots,X_K))^2\\ S & = & E (Y-\hat{h}(X_1,\ldots,X_K))^2 \end{eqnarray*} where the expectations are computed with $\hat{\mu}$, $\hat{f}_j$, $\hat{g}_j$, and $\hat{h}$ fixed. We call $(S_0 - S_j)$ the \emph{marginal predictive value} of $X_j$ because it is the difference in mean squared error between predicting $Y$ with and without $X_j$, ignoring the other predictors. We call $(S_{-j} - S)$ the \emph{conditional predictive value} of $X_j$ because it is the difference in mean squared error between predicting $Y$ without and with $X_j$, with the other predictors included. Correlations between the importance scores and marginal and conditional predictive values indicate how well the former reflects the latter. To compute the correlations for a given data set, we need first to estimate $\mu$, $f_j$, $g_j$, and $h$. 
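The marginal and conditional predictive values defined above are simple differences of mean squared errors. A minimal sketch (the `predictive_values` helper name and the toy MSE numbers are invented for illustration):

```python
import numpy as np

def predictive_values(S0, S, S_j, S_minus_j):
    """Marginal and conditional predictive values of each predictor X_j.

    S0        : MSE of the constant model (no predictors)
    S         : MSE of the full model (all predictors)
    S_j       : per-variable MSEs of the models using only X_j
    S_minus_j : per-variable MSEs of the models using all predictors except X_j
    """
    mpv = S0 - np.asarray(S_j, dtype=float)        # gain from adding X_j alone
    cpv = np.asarray(S_minus_j, dtype=float) - S   # gain from adding X_j last
    return mpv, cpv

# Toy MSE values for two predictors (invented for illustration).
mpv, cpv = predictive_values(S0=4.0, S=1.0, S_j=[2.5, 3.5], S_minus_j=[1.5, 1.25])
# mpv -> [1.5, 0.5], cpv -> [0.5, 0.25]
```

A negative \texttt{CPV}, as for \texttt{Batcr} below, means that dropping the variable from the full model does not hurt (and may even help) prediction.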
Here we use the average of 5 ensemble methods, namely, CFOREST, GBM, GUIDE forest, RANGER, and RFSRC to obtain the estimates. This helps to ensure that no scoring method has an unfair advantage. We use leave-one-out cross-validation to estimate $S_0$, $S_j$, $S_{-j}$, and $S$. Specifically, given a data set $\{(y_i, x_{i1}, \ldots, x_{iK})\}$, $i=1,2,\ldots, n$, define the vectors and matrices \begin{eqnarray*} \mathbf{x}_j & = & (x_{1j},x_{2j},\ldots, x_{nj})' \\ \mathbf{x}_j^{(-i)} & = & (x_{1j},x_{2j},\ldots, x_{i-1,j},x_{i+1,j}, \ldots, x_{nj})' \\ \mathbf{X} & = & (\mathbf{x}_1,\mathbf{x}_2, \ldots, \mathbf{x}_K) \\ \mathbf{X}^{(-i)} & = & (\mathbf{x}_1^{(-i)},\mathbf{x}_2^{(-i)}, \ldots, \mathbf{x}_K^{(-i)}) \\ \mathbf{X}_{(-j)} & = & (\mathbf{x}_1,\mathbf{x}_2, \ldots, \mathbf{x}_{j-1}, \mathbf{x}_{j+1}, \ldots,\mathbf{x}_K) \\ \mathbf{X}_{(-j)}^{(-i)} & = & (\mathbf{x}_1^{(-i)},\mathbf{x}_2^{(-i)}, \ldots, \mathbf{x}_{j-1}^{(-i)}, \mathbf{x}_{j+1}^{(-i)}, \ldots, \mathbf{x}_K^{(-i)}) \end{eqnarray*} where $(\mathbf{x}_j^{(-i)}, \mathbf{X}^{(-i)}, \mathbf{X}_{(-j)}^{(-i)})$ are $(\mathbf{x}_j, \mathbf{X}, \mathbf{X}_{(-j)})$ without the $i$th row and $(\mathbf{X}_{(-j)}, \mathbf{X}_{(-j)}^{(-i)})$ are $(\mathbf{X}, \mathbf{X}^{(-i)})$ without the $j$th column. Let $(\hat{f}_j^{(-i)}, \hat{g}_j^{(-i)}, \hat{h}^{(-i)})$ denote the function estimates of $(f_j, g_j, h)$ based on $(\mathbf{x}_j^{(-i)}, \mathbf{X}_{(-j)}^{(-i)}, \mathbf{X}^{(-i)})$, respectively, using the average of the 5 ensemble methods. 
Let $\bar{y} = n^{-1} \sum_k y_k$, $\bar{y}^{(-i)} = (n-1)^{-1} \sum_{k \neq i} y_k$ and define the leave-one-out mean squared errors \begin{eqnarray*} \hat{S}_0 & = & n^{-1} \sum_{i=1}^n (y_i - \bar{y}^{(-i)})^2 \\ \hat{S}_j & = & n^{-1} \sum_{i=1}^n \{y_i - \hat{f}_j^{(-i)}(x_{ij})\}^2 \\ \hat{S}_{-j} & = & n^{-1} \sum_{i=1}^n \{y_i - \hat{g}_j^{(-i)}(x_{i1}, x_{i2}, \ldots, x_{i,j-1}, x_{i,j+1}, \ldots, x_{iK})\}^2 \\ \hat{S} & = & n^{-1} \sum_{i=1}^n \{y_i - \hat{h}^{(-i)}(x_{i1}, x_{i2}, \ldots, x_{iK})\}^2. \end{eqnarray*} Denote the estimated marginal and conditional predictive values by $\mathtt{MPV}_j = \hat{S}_0-\hat{S}_j$ and $\mathtt{CPV}_j = \hat{S}_{-j}-\hat{S}$. We compute them for the following three real data sets. \begin{description} \item[Baseball.] The data give performance and salary information of 263 North American Major League Baseball players during the 1986 season \citep{baseball}. The response variable is log-salary and there are 22 predictor variables; see \citet{HV95} and references therein for definitions of the variables. The plot on the left side of Figure~\ref{fig:mar-con} shows a rather weak correlation of 0.318 between $\mathtt{CPV}$ and $\mathtt{MPV}$. Variable \texttt{Yrs} (number of years in the major leagues) has high values of $\mathtt{MPV}$ and $\mathtt{CPV}$ but \texttt{Batcr} (number of times at bat during career) has a high value of $\mathtt{MPV}$ and a negative value of $\mathtt{CPV}$. This implies that \texttt{Batcr} is an excellent predictor if it is used alone, but its addition after the other variables are included does not increase accuracy. \item[Mpg.] This data set gives the characteristics, price, and dealer cost of 428 new model year 2004 cars and trucks \citep{mpg}. We use 14 variables to predict city miles per gallon (mpg). The middle panel of Figure~\ref{fig:mar-con} shows that \texttt{Hp} (horsepower) has the highest values of $\mathtt{MPV}$ and $\mathtt{CPV}$. 
Variable \texttt{Make} (which has 38 categorical values) has the second highest $\mathtt{CPV}$ but its $\mathtt{MPV}$ is below average, indicating that its predictive power is mainly derived from interactions with other variables. The correlation between $\mathtt{CPV}$ and $\mathtt{MPV}$ is 0.378. \item[Solder.] \citet{CH92} used the data from a circuit board soldering experiment to demonstrate Poisson regression in R. The data, named \texttt{solder.balance} in the \texttt{rpart} R package, give the number of solder skips in a 5-factor unreplicated $3 \times 2 \times 4 \times 10 \times 3$ factorial experiment. Because not all scoring methods are applicable to Poisson regression, we use least squares with dependent variable the square root of the number of solder skips. The right panel of Figure~\ref{fig:mar-con} shows that $\mathtt{CPV}$ and $\mathtt{MPV}$ are almost perfectly correlated. This is a consequence of the factorial design. \end{description} \begin{figure} \centering \resizebox{\textwidth}{!}{\rotatebox{-90}{\includegraphics*[180,20][439,754]{imp-mar-con.ps}}} \caption{\texttt{CPV} versus \texttt{MPV} and their correlations for three data sets} \label{fig:mar-con} \end{figure} \begin{table} \centering \caption{Correlations between importance scores \texttt{VI} and marginal and conditional predictive values \texttt{MPV} and \texttt{CPV}} \label{tab:comp-cor} \vspace{0.5em} \begin{tabular}{l|ll|ll|ll} & \multicolumn{2}{c}{Baseball} & \multicolumn{2}{c}{Mpg} & \multicolumn{2}{c}{Solder} \\ Method & \texttt{MPV} & \texttt{CPV} & \texttt{MPV} & \texttt{CPV} & \texttt{MPV} & \texttt{CPV} \\ \hline BARTM & 0.75 & 0.68 & 0.85 & 0.48 & 0.4 & 0.46 \\ CFOREST1 & 0.87 & 0.1 & 0.82 & 0.28 & 1 & 1 \\ CFOREST2 & 0.82 & 0.16 & 0.69 & 0.78 & 0.99 & 1 \\ CTREE & 0.4 & 0.07 & 0.65 & 0.54 & 0.99 & 1 \\ GBM & 0.8 & 0.14 & 0.62 & 0.88 & 0.99 & 0.98 \\ GUIDE & 0.99 & 0.3 & 0.94 & 0.24 & 0.9 & 0.92 \\ LASSO & 0.19 & 0.59 & 0.75 & 0.55 & 0.73 & 0.76 \\ RANGER & 0.97 & 0.18 & 
0.96 & 0.33 & 1 & 1 \\ RF & 0.83 & 0.16 & 0.54 & 0.28 & 0.87 & 0.91 \\ RFSRC & 0.79 & 0.02 & 0.72 & 0.8 & 1 & 1 \\ RLT & 0.69 & 0 & 0.67 & 0.77 & 0.99 & 1 \\ RPART & 0.92 & 0.2 & 0.85 & 0.44 & 0.9 & 0.93 \\ \hline \end{tabular} \end{table} Table~\ref{tab:comp-cor} gives the correlations between the importance scores \texttt{VI} and each of \texttt{MPV} and \texttt{CPV} for each method and Figure~\ref{fig:mpv-cpv} shows them graphically. The results may be summarized as follows. \begin{description} \item[Baseball.] The importance scores are highly correlated with \texttt{MPV} for GUIDE and RANGER, but not for LASSO, where there is barely any correlation. On the other hand, the scores are weakly correlated with \texttt{CPV} for all methods except BARTM and LASSO. \item[Mpg.] GUIDE and RANGER are again the two methods with importance scores most highly correlated with \texttt{MPV}; the correlations for the other methods range from 0.54 for RF to 0.85 for BARTM and RPART. For \texttt{CPV}, GBM has the highest correlation of 0.88, followed by RFSRC (0.80) and CFOREST2 (0.78). \item[Solder.] Owing to the almost perfect correlation between \texttt{MPV} and \texttt{CPV}, their correlations with the importance scores are almost the same. BARTM and LASSO are the only two methods with correlations substantially below 0.90, indicating that they are measuring something besides \texttt{MPV} and \texttt{CPV}. \end{description} Across the three data sets, the importance scores of all methods, except for BARTM and LASSO, are consistent with \texttt{MPV}, with GUIDE, RANGER and RPART showing the highest consistency. Consistency with \texttt{CPV} is weaker and more variable between data sets. 
\begin{figure} \centering \resizebox{\textwidth}{!}{\rotatebox{-90}{\includegraphics*[170,16][447,777]{mpv-cpv.ps}}} \caption{Plots of (cor(\texttt{VI}, \texttt{CPV}), cor(\texttt{VI}, \texttt{MPV})) for three data sets; \texttt{B} = BARTM, \texttt{Cf1} = CFOREST1, \texttt{Cf2} = CFOREST2, \texttt{Ct} = CTREE, \texttt{Gb} = GBM, \texttt{Gu} = GUIDE, \texttt{L} = LASSO, \texttt{Ra} = RANGER, \texttt{Rf} = RF, \texttt{Rfs} = RFSRC, \texttt{Rl} = RLT, \texttt{Rp} = RPART} \label{fig:mpv-cpv} \end{figure} \section{Thresholding} \label{sec:threshold} It is useful to have a score threshold to identify the variables that are independent of the response. This is particularly desirable if the number of variables is large. Of the 12 scoring methods, only BARTM and GUIDE currently provide thresholds. We call a variable ``unimportant'' if it is independent of the response variable and ``important'' otherwise. Under the null hypothesis $H_0$ that all variables are unimportant, we define a ``Type I error'' as that of declaring at least one to be important. To control the probability of this error at significance level $\alpha$, \citet{Bleich14} randomly permute the $Y$ values several times, keeping the $X$ values fixed. They construct a BARTM forest for each set of permuted data, derive several candidate thresholds from the permutation distributions of the variable selection frequencies, and use cross-validation to choose among them. GUIDE similarly permutes the $Y$ values, keeping the $X$ values fixed. For $j=1,2,\ldots, 300$, let $u_j = \max_i \mathtt{VI}(X_i)$ denote the maximum value of the GUIDE importance scores for the $j$th permuted data set and let $u^*(\alpha)$ be the $(1-\alpha)$-quantile of the distribution of $\{u_1, u_2, \ldots, u_{300}\}$. Under $H_0$, the probability that one or more importance scores exceeds the value of $u^*(\alpha)$ is approximately $\alpha$. 
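The permutation scheme described above can be sketched as follows. The `permutation_threshold` helper and the toy correlation-based score are invented for illustration; GUIDE itself uses its own importance scores and 300 permutations.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_threshold(score_fn, y, X, n_perm=300, alpha=0.05):
    """(1 - alpha)-quantile of the maximum importance score under H0.

    score_fn(y, X) must return one importance score per column of X.
    """
    maxima = []
    for _ in range(n_perm):
        y_perm = rng.permutation(y)          # break any X-Y association
        maxima.append(np.max(score_fn(y_perm, X)))
    return np.quantile(maxima, 1.0 - alpha)

# Toy score: absolute correlation of each column of X with y.
def abs_cor(y, X):
    return np.array([abs(np.corrcoef(X[:, k], y)[0, 1]) for k in range(X.shape[1])])

X = rng.normal(size=(200, 5))
y = rng.normal(size=200)                     # y independent of X, so H0 holds
u_star = permutation_threshold(abs_cor, y, X)
```

Under $H_0$, scores exceed `u_star` with probability approximately $\alpha$.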
Bias adjustment of the importance scores defined in equation~(\ref{eq:guide:scaled}) requires one level of permutation. Calculation of $u^*(\alpha)$ requires a second level of permutation. To skip the second level, GUIDE uses the following approximation. In the permutations for bias adjustment, let $v_b = \max_i v^*_b(X_i)$, $b=1,2,\ldots,B$, denote the maximum unadjusted score, where $v^*_b(X_i)$ is defined above equation~(\ref{eq:guide:scaled}). Let $v^*(\alpha)$ denote the $(1-\alpha)$-quantile of $\{v_1, v_2, \ldots, v_B\}$. Let $s(X_i)$ be the unadjusted score for the unpermuted (real) data defined in (\ref{eq:guide:raw}). Finally, let $m$ denote the number of values of $s(X_i)$ greater than $v^*(\alpha)$. We declare the variables with the top $m$ values of the bias-adjusted scores $\mathtt{VI}(X_i)$ to be important. Let $\tilde{v}(\alpha)$ denote the average of the $m$th and $(m+1)$th largest values of $\mathtt{VI}(X_i)$. The GUIDE normalized importance scores are $\mathtt{VI}(X_i)/\tilde{v}(\alpha)$, so that variables with normalized scores less than 1.0 are considered unimportant. 
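The approximation above can be sketched as follows. The `normalized_scores` helper is hypothetical, and the sketch assumes at least one unadjusted score exceeds $v^*(\alpha)$ (i.e. $m \geq 1$) and that $m$ is less than the number of variables.

```python
import numpy as np

def normalized_scores(vi, s, v_perm_max, alpha=0.05):
    """GUIDE-style normalized importance scores (sketch).

    vi         : bias-adjusted scores VI(X_i) for the real data
    s          : unadjusted scores s(X_i) for the real data
    v_perm_max : v_b = max_i v*_b(X_i), one maximum per permutation b
    Assumes 1 <= m < len(vi).
    """
    v_star = np.quantile(v_perm_max, 1.0 - alpha)
    m = int(np.sum(s > v_star))              # unadjusted scores above v*(alpha)
    vi_sorted = np.sort(vi)[::-1]
    # threshold: average of the m-th and (m+1)-th largest adjusted scores
    v_tilde = 0.5 * (vi_sorted[m - 1] + vi_sorted[m])
    return vi / v_tilde                      # scores below 1.0 deemed unimportant

# Toy inputs: 4 variables, and 100 permutation maxima all equal to 1.
norm = normalized_scores(np.array([5.0, 3.0, 1.0, 0.5]),
                         np.array([4.0, 2.5, 0.8, 0.3]),
                         np.ones(100))
# norm -> [2.5, 1.5, 0.5, 0.25]; the top m = 2 variables score at least 1.0
```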
\begin{table} \centering \caption{Important variables (in alphabetical order for BARTM, in decreasing importance for GUIDE) for $\alpha = 0.05$} \label{tab:thresh} \vspace{1em} {\RaggedRight \begin{tabular}{l|p{2.5in}|p{2.5in}} Data & \multicolumn{1}{c|}{BARTM} & \multicolumn{1}{c}{GUIDE} \\ \hline COVID & \texttt{diabetes}, \texttt{race=Black} \texttt{or} \texttt{African American}, \texttt{race=Unknown}, \texttt{race=White} & \texttt{renal}, \texttt{charlson}, \texttt{agecat}, \texttt{MI}, \texttt{CHF}, \texttt{dementia}, \texttt{PVD}, \texttt{cerebro}, \texttt{cancer}, \texttt{diabetes}, \texttt{race}, \texttt{CPD}, \texttt{sex}, \texttt{metastatic}, \texttt{hemipara}, \texttt{modsevliv}, \texttt{mildliver} \\ \hline Baseball & \texttt{Hitcr}, \texttt{Rbcr}, \texttt{Runcr}, \texttt{Yrs} & \texttt{Batcr}, \texttt{Hitcr}, \texttt{Runcr}, \texttt{Rbcr}, \texttt{Wlkcr}, \texttt{Yrs}, \texttt{Hrcr}, \texttt{Hit86}, \texttt{Rb86}, \texttt{Bat86}, \texttt{Wlk86}, \texttt{Run86}, \texttt{Hr86}, \texttt{Pos86}, \texttt{Puto86} \\ \hline Mpg & \texttt{Cylin=3}, \texttt{Cylin=4}, \texttt{Enginsz}, \texttt{Hp}, \texttt{Make=Honda}, \texttt{Make=Kia}, \texttt{Make=Toyota}, \texttt{Type=car}, \texttt{Weight} & \texttt{Weight}, \texttt{Enginsz}, \texttt{Cylin}, \texttt{Hp}, \texttt{Dcost}, \texttt{Rprice}, \texttt{Width}, \texttt{Whlbase}, \texttt{Drive}, \texttt{Type}, \texttt{Make}, \texttt{Length}, \texttt{Region} \\ \hline Solder & \texttt{mask=B6}, \texttt{opening=small} & \texttt{opening}, \texttt{mask}, \texttt{solder}, \texttt{padtype} \\ \hline \end{tabular} } \end{table} Table~\ref{tab:thresh} lists the variables found to be important by BARTM and GUIDE in the COVID, Baseball, Mpg, and Solder data sets, using $\alpha = 0.05$. GUIDE orders the important variables by their \texttt{VI} values, but BARTM does not order them. The table shows that BARTM tends to find fewer important variables than GUIDE. 
Moreover, because it transforms each categorical variable into several indicator variables, BARTM may find some indicators important and others unimportant. For example, in the Solder data, BARTM finds only level \texttt{B6} of \texttt{mask} and level \texttt{small} of \texttt{opening} important. \section{Missing values} \label{sec:miss} Among the 12 scoring methods, only CFOREST1, GUIDE, RPART, and RFSRC are directly applicable to data with missing values. Because GUIDE treats missing values as a special type of observation, as described in Section~\ref{sec:guide}, its importance scores remain unbiased when there are missing values. To demonstrate this as well as observe the effect of missing values on CFOREST1, RPART and RFSRC, we apply the methods to a data set from a Bureau of Labor Statistics Consumer Expenditure (CE) Survey. The data consist of answers to more than 400 questions from 6464 survey respondents. The dependent variable is \texttt{INTRDVX}, the amount of interest and dividends from the previous year. About 25\% of the values of \texttt{INTRDVX} are missing, due to the question being inapplicable or the respondent refusing to answer it. Here we use the 4693 respondents with non-missing \texttt{INTRDVX} to obtain importance scores for its prediction. About 20\% of the other variables have missing values, with 67 of them having more than 95\% missing, including \texttt{STOCKX} (value of directly held stocks, bonds, mutual funds, etc.), which may be expected to be a good predictor of \texttt{INTRDVX}. See \citet{lecl19,LZZZ20} for more information on the variables. Figure~\ref{fig:bls:impscr} shows barplots of the scores of the top 15 variables for each method. \texttt{STOCKX} is ranked most important by GUIDE and second most important by RFSRC, but it is not ranked in the top 15 by CFOREST1 and RPART. At least one of \texttt{FINCBTAX} (income before tax) or \texttt{FINCATAX} (income after tax) is in the top 15 of all four methods. 
These two variables have no missing values. \begin{figure} \centering \resizebox{\textwidth}{!}{\rotatebox{-90}{\includegraphics*[21,26][300,768]{bls_imp_plots.ps}}} \caption{Variables with 15 highest importance scores in CE data} \label{fig:bls:impscr} \end{figure} \begin{figure} \centering \resizebox{\textwidth}{!}{\rotatebox{-90}{\includegraphics*[32,17][580,753]{bls_bias.ps}}} \caption{Mean importance scores $\overline{\mathtt{VI}}$ (orange) and 2-SE bars (green) from 1000 random permutations of the response variable for CE data. Variables ordered by increasing mean scores. GUIDE has fewer variables because it combines missing-value flag variables with their associated variables.} \label{fig:ce:boxplots} \end{figure} We can use the same procedure that produced Figure~\ref{fig:covid:perm} to find out if there is bias in the importance scores by randomly permuting the \texttt{INTRDVX} values while holding the values of the predictor variables fixed. Let $J$ be the number of permutations and $m_j(k)$ be the importance score of variable $X_k$ in permutation $j$ ($j = 1, 2, \ldots, J$). Figure~\ref{fig:ce:boxplots} plots $\bar{m}(k) = J^{-1} \sum_j m_j(k)$ (in orange, arranged in increasing order) and their 2-SE error bars (in green) for each method, with $J=1000$. GUIDE is the only method with unbiased scores, as its 2-SE bars completely overlap. The other three methods are biased. CFOREST1 is particularly biased against the three high-level categorical variables \texttt{HHID} (household identifier, 46 levels), \texttt{PSU} (primary sampling unit, 21 levels), and \texttt{STATE} (39 levels). RPART is biased in favor of \texttt{STATE}, \texttt{HHID}, and two 15-level categorical variables \texttt{OCCUCOD1} (respondent occupation) and \texttt{OCCUCOD2} (spouse occupation). RFSRC is biased towards the binary variable \texttt{DIRACC} (access to living quarters) and the continuous variable \texttt{JFS\_AMT} (annual value of food stamps). 
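The permutation-based bias check behind Figure~\ref{fig:ce:boxplots} can be sketched as follows. The `bias_check` helper is hypothetical and the scores are simulated, not taken from the CE data.

```python
import numpy as np

def bias_check(scores):
    """Mean score and 2-SE half-width per variable over J permutations.

    scores : J x K array, scores[j, k] = importance of X_k in permutation j.
    A method is judged unbiased if the intervals mean +/- 2*SE all overlap.
    """
    J = scores.shape[0]
    mean = scores.mean(axis=0)
    se = scores.std(axis=0, ddof=1) / np.sqrt(J)
    return mean, 2.0 * se

# Simulated unbiased scores: same distribution for every variable under H0.
rng = np.random.default_rng(1)
scores = rng.normal(loc=1.0, scale=0.2, size=(1000, 8))
mean, half = bias_check(scores)
# plot mean (sorted) with error bars [mean - half, mean + half] per variable
```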
\section{Conclusion} \label{sec:conclusion} We have presented an importance scoring method based on the GUIDE algorithm and compared it with 11 other methods in terms of bias and consistency with two measures of predictive importance. We say that a method is unbiased if the expected values of its scores are equal when all variables are independent of the response variable. We find that if the data do not have missing values, only CFOREST2, CTREE, GUIDE, and RANGER are unbiased. RF and RFSRC are biased against categorical variables; GBM is biased towards high-level categorical variables and against binary variables; RPART is biased against binary variables; and RLT is biased towards high-level categorical variables and against binary variables. BARTM, CFOREST1 and LASSO have biases that are hard to characterize. Only CFOREST1, GUIDE, RPART, and RFSRC are directly applicable to data with missing values. Among them, only GUIDE is unbiased. Unbiasedness of GUIDE is built into the method, through bias correction by random permutation of the values of the response variable. The technique is applicable to any scoring method that is not extremely biased, albeit at the cost of increasing computational time several hundred fold. \begin{figure} \centering \resizebox{\textwidth}{!}{\rotatebox{-90}{\includegraphics*[26,20][558,742]{cpu.ps}}} \caption{Average CPU times (sec.) for one set of importance scores; N is the sample size and K is the number of variables} \label{fig:cpu} \end{figure} Figure~\ref{fig:cpu} shows average computation times in seconds for each method to calculate one set of importance scores for the four data sets without missing values. The computations were performed on an Intel Xeon 2.40GHz computer with 56 cores and 240 GB memory. The timings are averages of 3 replications, to reduce the variability of randomized methods (CFOREST, GBM, LASSO, RANGER, RF, RFSRC, RLT) that employ random number seeds. 
In real applications the randomized methods will take much longer, because their importance scores have to be averaged over many replications. The barplot for the Baseball data is drawn on a log scale due to the unusually long computation time for RANGER; we attribute this to the presence of 3 categorical variables, each with 23 levels, in that data set. We use three data sets to examine whether the importance scores correlate well with two measures of predictive power, namely marginal predictive value (where other variables are ignored) and conditional predictive value (where other variables are fitted first). We find that the scores of many methods are highly correlated ($> 0.80$) with marginal predictive value, the exceptions being BARTM, CTREE, and LASSO. Correlations with conditional predictive values, however, are generally low, except for CFOREST2, GBM, RFSRC, and RLT, where the correlations range from 0.77 to 0.88 in one data set. Finally, we show how GUIDE constructs $100(1-\alpha)$\% threshold scores for distinguishing important from unimportant variables. The thresholds are constructed such that if all predictors are independent of the response, the probability that one or more of them score above the thresholds is $\alpha$. As with bias correction, the GUIDE threshold technique is applicable to other methods. \vskip 0.2in
\section{Introduction} Dealing with indeterminate information is unavoidable in practical situations \cite{Deng2020ScienceChina, Seiti2018, MURPHY20001, gao2019uncertaintyIJIS, Wang2018uncertainty}. Much research has been devoted to extracting the truly useful information contained in uncertainty \cite{meng2018fluid, deng2019zero, Yager2018}. To address this problem, many methods have been proposed, such as $Z$-numbers \cite{Zadeh2011, li2020newuncertainty, Jiang2019Znetwork, liu2019derive}, $D$-numbers \cite{Deng2012, Deng2014, Xiao2019a, deng2019evaluating, IJISTUDNumbers}, probability theory \cite{Yager2014, Zhang2020b, Jiang2019IJIS}, soft sets \cite{Feng2018, Feng2016}, Dempster-Shafer evidence theory \cite{Dempster1967Upper, book}, complex evidence theory \cite{Xiao2019complexmassfunction, Xiao2020b, Xiao2020CEQD, Xiao2020maximum}, evidential reasoning \cite{Jiang2019, Zhou2018, Liu2018a, liao2020deng}, fuzzy sets \cite{Zadeh1965, Zadeh1979, Xue2020entailmentIJIS, 8944285, Zhou2020} and the quantum mass function \cite{Gao2019quantummodel, Dai2020, Huang2019}. Among these, the quantum mass function \cite{Gao2019quantummodel} is a novel and broadly useful tool for solving, at the quantum level, problems that arise in traditional evidence theory. Many applications have been developed on the basis of classical evidence theory, such as pattern recognition \cite{Liu2019, Song2018}. However, for resolving conflicts and dispelling uncertainties at the quantum level, no previous method addresses the related issues. In other words, no reliable and effective method exists for handling the various errors that appear when the quantum mass function is used to process information in quantum form. 
Therefore, in this paper, a specific, customized method is proposed to avoid such problems when solving questions at the quantum level. Two dimensions of quantum basic probability assignment (QBPA) are designed to improve the reliability and availability of the original quantum mass function; the result is called the two-dimensional quantum mass function (TDQMF) and denoted $\mathbb{T} = \{\mathbb{Q}_{1},\mathbb{Q}_{2}\}$. The first dimension of a TDQMF consists of original QBPAs, whose frame of discernment, given in quantum form, changes according to the actual situation. The second dimension is a measure of the confidence level of the first; its elements are still QBPAs, but whereas the first dimension describes practical affairs in real life, the second is concerned with the QBPAs of the first dimension themselves and does not consider the whole system of judgements in the way the first dimension does. The frame of discernment of the second dimension is fixed and defined as $\Theta = \{\emptyset, Y, N, YN\}$. Here, proposition "$Y$" represents "approval", "$N$" means "disagreement" and "$YN$" denotes that no further information can be provided. The contributions of TDQMF can be listed as follows: \begin{itemize} \item[$\bullet$] TDQMF pioneers the amendment of the process of handling basic probability assignments in quantum form, which provides a convenient way to improve the reliability and validity of the judgements given by the original quantum mass function. \end{itemize} \begin{itemize} \item[$\bullet$] A completely novel rule of combination for TDQMFs is proposed to provide credible results about real situations. \end{itemize} \begin{itemize} \item[$\bullet$] A corresponding algorithm developed on the basis of TDQMF is designed to handle the various conditions and specific problems arising in practical applications. 
\end{itemize} All in all, TDQMF can handle problems in various fields of application in quantum form and opens a new direction for managing quantum information. The rest of this paper is organized as follows. The Preliminaries section introduces the concepts of the quantum mass function \cite{Gao2019quantummodel} and $Z$-numbers \cite{Zadeh2011}. The section on the proposed method gives the definition of TDQMF and devises a corresponding rule of combination. In the next section, four applications are provided to verify the correctness and effectiveness of TDQMF in four diverse areas: target recognition, decision making, income estimation and fault diagnosis. Conclusions are given in the last section. \section{Preliminaries} \label{Preliminaries} In this section, some related concepts, including the quantum mass function and $Z$-numbers, are briefly introduced. \subsection{Quantum mass function} \begin{myDef}(Quantum frame of discernment)\end{myDef} Assume $\Theta$ is a non-empty set whose elements $|A_{i} \rangle $ are mutually exclusive. The set $\Theta$ is then called the quantum frame of discernment and is defined as \cite{Gao2019quantummodel}: \begin{equation} |\Theta \rangle = \{|A_{1} \rangle, |A_{2} \rangle, |A_{3} \rangle, ... , |A_{i} \rangle, ... , |A_{n} \rangle\} \end{equation} \par The power set of $|\Theta \rangle$, which consists of $2^{n}$ elements, is defined as \cite{Gao2019quantummodel}: \begin{equation} 2^{|\Theta \rangle} = \{\emptyset, |A_{1} \rangle, |A_{2} \rangle, ..., |A_{n} \rangle, ..., \{|A_{1} \rangle, |A_{2} \rangle\}, ... 
, |\Theta \rangle\} \end{equation} \begin{myDef}(Quantum mass function)\end{myDef} On the basis of the quantum frame of discernment, the quantum mass function $\mathbb{Q}$ is defined as \cite{Gao2019quantummodel}: \begin{equation} \mathbb{Q}(|A \rangle) = \psi e^{j\theta} \end{equation} \par It satisfies the properties \cite{Gao2019quantummodel}: \begin{equation} \mathbb{Q}:2^{|\Theta \rangle} \rightarrow [0,1] \end{equation} \begin{equation} \mathbb{Q}(\emptyset) = 0 \end{equation} \begin{equation} \sum_{|A \rangle \subseteq |\Theta \rangle}|\mathbb{Q}(|A \rangle)| = 1 \end{equation} \par where the value of $|\mathbb{Q}(|A \rangle)|$ is equal to $\psi^{2}$. The quantum mass function is also called a quantum basic probability assignment (QBPA), in which the mass $\psi^{2}$ represents the level of belief in support of the quantum proposition $|A \rangle$. \begin{myDef}(Quantum rule of combination)\end{myDef} \par Assume there are two QBPAs $\mathbb{Q}_{1}$ and $\mathbb{Q}_{2}$; the rule of combination is defined as follows \cite{Gao2019quantummodel}: \begin{equation} \mathbb{Q}(|A \rangle) = \left\{ \begin{aligned} \frac{1}{1 - |\mathbb{K}|} \sum_{|B \rangle \cap |C \rangle = |A \rangle} \mathbb{Q}_{1}(|B \rangle) \times \mathbb{Q}_{2}(|C \rangle) \ \ \ \ |A \rangle \neq \emptyset \\ 0 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ |A \rangle = \emptyset \end{aligned} \right. \end{equation} \par The normalization step is defined as \cite{Gao2019quantummodel}: \begin{equation} |\mathbb{Q}(|A \rangle)| = \frac{|\mathbb{Q}(|A \rangle)|}{|\mathbb{Q}(|A \rangle)| + |\mathbb{Q}(|B \rangle)|+ ... 
+ |\mathbb{Q}(|A \rangle,|B \rangle)|+ ...} \end{equation} \par Besides, the normalization factor is defined as \cite{Gao2019quantummodel}: \begin{equation} \mathbb{K} = \sum_{|B \rangle \cap |C \rangle = \emptyset} \mathbb{Q}_{1}(|B \rangle) \times \mathbb{Q}_{2}(|C \rangle) \end{equation} \par In the quantum rule of combination, the factor $\mathbb{K}$ can be regarded as a kind of quantum probability, and $|\mathbb{K}|$ measures the degree of conflict among the QBPAs. \subsection{Z-numbers} The Z-number is a pioneering method for measuring the uncertainty in information that takes the reliability of the information into consideration. As the requirements for information processing have grown, the reliability of information has received more and more attention \cite{Zadeh2011}. \begin{myDef}A Z-number consists of two fuzzy numbers and its basic form is defined as $Z = (A,B)$. Assume there is an uncertain variable $X$ with which a Z-number is associated. The first component of the Z-number, $A$, is an estimate of the variable $X$. The second component, $B$, is a measure of the reliability of the first component, $A$.\end{myDef} \section{Proposed method: TDQMF, the two-dimensional quantum mass function} In this section, the relevant definitions are provided and, on the basis of them, the rule of combination for TDQMFs is developed. \subsection{The definition of TDQMF} Assume $\mathbb{Q}_{1}$ and $\mathbb{Q}_{2}$ are two QBPAs, the second of which measures the degree of reliability of the first. A TDQMF is then defined as: \begin{equation} \mathbb{T} = \{\mathbb{Q}_{1},\mathbb{Q}_{2}\} \end{equation} The properties it satisfies are: \par 1. 
The quantum frame of discernment for $\mathbb{Q}_{1}$ gives the power set $2^{|\Theta \rangle} = \{\emptyset, |A_{1} \rangle, |A_{2} \rangle, ..., |A_{n} \rangle, ..., \{|A_{1} \rangle, |A_{2} \rangle\}, \\... , |\Theta \rangle\}$. According to the definition of the frame of discernment, the mass of $|A_{1} \rangle$ represents the most direct support for the quantum proposition $|A_{1} \rangle$.\par 2. The quantum frame of discernment for $\mathbb{Q}_{2}$ is defined as $\Theta = \{\emptyset, Y, N, YN\}$, in which $"Y"$ denotes $"support"$ for $\mathbb{Q}_{1}$ and $"N"$ denotes $"not\ support"$ for $\mathbb{Q}_{1}$. In other words, the value of proposition $"Y"$ represents the degree of reliability of $\mathbb{Q}_{1}$ and the value of proposition $"N"$ represents the degree of unreliability of $\mathbb{Q}_{1}$. Proposition $"YN"$ expresses uncertainty and hesitation in judging whether $\mathbb{Q}_{1}$ is reliable or unreliable. Moreover, if the mass of $\mathbb{Q}_{1}$ is very close to the actual conditions, the proposition $"Y"$ in the frame of discernment of $\mathbb{Q}_{2}$ obtains a much higher value and the proposition $"N"$ is expected to get a much lower value. Conversely, if the mass of $\mathbb{Q}_{1}$ is far from the actual conditions, the proposition $"Y"$ obtains a much lower value and the proposition $"N"$ is expected to get a much higher value. 
It can be easily concluded that the second-dimensional judgement provides a much more convenient way to guarantee sufficient accuracy in decision making and pattern recognition.\\ \setlength{\tabcolsep}{4mm}{\begin{table}[h] \centering \caption{The frame of discernment for $\mathbb{Q}_{1}$ in Example 1.} \begin{spacing}{1.40} \begin{tabular}{l c c c c c}\hline $\mathbb{Q}_{1}$ &$\{x_{1}\}$& $\{x_{2}\}$&$\{x_{2}, x_{3}\}$\\\hline &$0.7071e^{0.7854j}$&$0.2828e^{0.7854j}$&$0.6782e^{0.4850j}$\\\hline \end{tabular} \end{spacing} \end{table}} \setlength{\tabcolsep}{4mm}{\begin{table}[h] \centering \caption{The frame of discernment for $\mathbb{Q}_{2}$ in Example 1.} \begin{spacing}{1.40} \begin{tabular}{l c c c c c}\hline $\mathbb{Q}_{2}$ &$\{Y\}$& $\{N\}$&$\{YN\}$\\\hline &$0.8306e^{0.7854j}$&$0.5385e^{0.7854j}$&$0.1414e^{0.7854j}$\\\hline \end{tabular} \end{spacing} \end{table}} \textbf{Example 1:} Assume the frame of discernment in the form of quantum for $\mathbb{Q}_{1}$ is $\Theta = \{x_{1},x_{2},x_{3}\}$; the propositions in the frame of discernment and their corresponding values are provided in Table 1. Moreover, assume there are 100 experts and 69 of them approve the judgements made by $\mathbb{Q}_{1}$. The remaining experts hold the opposite opinion, except for 2 experts who cannot make final decisions. Then the values of the propositions in the frame of discernment of $\mathbb{Q}_{2}$ are easily obtained; they are given in Table 2. \par \textbf{Note: }In order to simplify the process of transferring real numbers into the form of quantum (if needed), the value is divided equally and then allocated to the real part and the imaginary part under certain regulations.
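The conversion described in the Note can be sketched as follows (a minimal illustration in Python; the helper name \texttt{to\_quantum} is ours, not from the paper). A real-valued mass $p$ becomes an amplitude of modulus $\sqrt{p}$ with phase $\pi/4$, so that the squared modulus equals $p$ and the value is split equally between the real and imaginary parts, reproducing Table 2 from the experts' votes of Example 1:

```python
import cmath
import math

def to_quantum(p, phase=math.pi / 4):
    """Turn a real-valued mass p into a quantum amplitude whose squared
    modulus equals p; the default phase pi/4 gives equal real and
    imaginary squared parts, as in the Note above."""
    return math.sqrt(p) * cmath.exp(1j * phase)

# Example 1: 100 experts, 69 support, 29 oppose, 2 undecided.
q_Y  = to_quantum(69 / 100)   # cf. Table 2: 0.8306 e^{0.7854 j}
q_N  = to_quantum(29 / 100)   # cf. Table 2: 0.5385 e^{0.7854 j}
q_YN = to_quantum(2 / 100)    # cf. Table 2: 0.1414 e^{0.7854 j}
```

The moduli agree with Table 2 up to rounding, and the squared modulus of each amplitude recovers the original vote fraction.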
\subsection{The rule of combination of TDQMF} From the last part it can be seen that the second dimension of judgement plays a very important role in properly adjusting the values of the propositions in the frames of discernment of different evidences, so as to obtain reasonable and valid estimates of practical circumstances. Therefore, a special method to combine TDQMFs is urgently needed. In this section, an innovative rule of combination is proposed to produce accurate and valid judgements on practical situations. \begin{equation} \mathbb{Q}_{final}(\{x_{i}\}) = (\mathbb{Q}_{1}(\{x_{i}\}) + \mathbb{Q}_{1}(\{X_{x_{i}}\}) \times \frac{1}{n}) \times \mathbb{Q}_{2}(\{Y\}) + \mathbb{Q}_{1}(\{X_{non-x_{i}}\}) \times \mathbb{Q}_{2}(\{N\}) \ \ \ \ x_{i} \subset 2^{\Theta} \end{equation} \begin{equation} \mathbb{Q}_{final}(\{X_{x_{i}}\}) = \frac{n^{2} - n}{n^{2}} \times \mathbb{Q}_{1}(\{X_{x_{i}}\}) \times \mathbb{Q}_{2}(\{Y\})\ \ \ \ x_{i} \subset 2^{\Theta}\ \ and\ \ x_{i} \neq \Theta \end{equation} \begin{equation} \mathbb{Q}_{final}(\{{\Theta}\}) = \frac{n^{2} - n}{n^{2}} \times \mathbb{Q}_{1}(\{\Theta\}) + \mathbb{Q}_{2}(\{YN\}) \end{equation} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{d2} \caption{The procedure of combination of TDQMFs. Details of each step are offered in this flow chart.} \label{fig:1} \end{figure} \par where $\{X_{x_{i}}\}$ denotes a multiple proposition which contains the single proposition $x_{i}$, and $\{X_{non-x_{i}}\}$ denotes the propositions which do not contain $x_{i}$.
In the construction of the rule of combination, for maximal utilization of the given information, each multiple proposition is divided proportionally according to the number of single propositions it contains and distributed to those single propositions; this takes advantage of an efficient method called the pignistic transformation \cite{article}, which reduces uncertainty in handling ambiguous evidences. Besides, a negation of a negation of a proposition is regarded as an approval of that proposition. Therefore, the mass of a single proposition consists of the value of a direct approval of the proposition and that of a dual negation of it. However, it is not proper to eliminate the values of multiple propositions entirely, because all of the values are managed in the form of quantum. In order to reduce the uncertainty in the frame of discernment without losing accuracy in describing actual conditions, it is proposed that $\frac{n^{2} - n}{n^{2}}$ of the value of a multiple proposition is kept, in accordance with the definition of the quantum mass function, namely that a quantum value is a sum of squares of the real part and the imaginary part. Moreover, a dual negation of multiple propositions is not taken into consideration, because reducing uncertainties is the main goal in managing quantum information. Finally, the universal sets of the frames of discernment are treated separately, because the second-dimensional judgement may contain different levels of hesitation, and all of the provided information is supposed to be utilized to describe practical situations as fully as possible. \par After these modifications, all of the values obtained are expected to be combined using the quantum rule of combination $n - 1$ times to get the final descriptions of circumstances ($n$ is the number of evidences).
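The rule above can be sketched in a few lines of Python (a minimal sketch; the dict-based representation and the helper name \texttt{modify} are ours, not from the paper). Each QBPA maps frozenset propositions to complex amplitudes, the pignistic share of a multiple proposition is distributed to the singletons it contains, and the $\frac{n^{2}-n}{n^{2}}$ factor damps the multiple propositions; as a check, the first quantum evidence of Application 1 below is reproduced:

```python
import cmath

def modify(q1, q2, singletons, multiples):
    """TDQMF modification rule: q1 maps frozenset propositions to complex
    amplitudes, q2 is the reliability QBPA over {'Y', 'N', 'YN'}."""
    out = {}
    for s in singletons:
        # pignistic share of every multiple proposition containing s
        share = sum(q1[m] / len(m) for m in multiples if s <= m)
        # amplitudes of all propositions not containing s, weighted by N
        rest = sum(q1[t] for t in singletons if t != s)
        rest += sum(q1[m] for m in multiples if not s <= m)
        out[s] = (q1[s] + share) * q2['Y'] + rest * q2['N']
    for m in multiples:
        n = len(m)  # number of singletons inside the multiple proposition
        out[m] = (n * n - n) / (n * n) * q1[m] * q2['Y']
    return out

# First evidence of Application 1 (Tables 3 and 4):
x1, x2, x3 = frozenset({'x1'}), frozenset({'x2'}), frozenset({'x3'})
m12 = frozenset({'x1', 'x2'})
q1 = {x1: 0.5000 * cmath.exp(0.9271j), x2: 0.6403 * cmath.exp(0.8960j),
      x3: 0.4472 * cmath.exp(0.4636j), m12: 0.3741 * cmath.exp(1.0068j)}
q2 = {'Y': 0.8366 * cmath.exp(0.2970j), 'N': 0.4472 * cmath.exp(0.4636j)}
out = modify(q1, q2, [x1, x2, x3], [m12])  # |out[x1]| ≈ 1.0493, cf. Table 5
```

The resulting moduli and phases agree with the worked example of Application 1 up to rounding of the tabulated inputs.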
\par However, after the reallocation of the mass, a step of normalization is needed to guarantee that the sum of all the values of the propositions in the frame of discernment is exactly equal to 1. A part of the operation of normalization is defined as: \begin{equation} |\mathbb{Q}_{FINAL}(\{{X_{i}}\})| = \frac{|\mathbb{Q}_{final}(\{{X_{i}}\})|}{|\mathbb{Q}_{final}(\{{X_{1}}\})| + |\mathbb{Q}_{final}(\{{X_{2}}\})| + ... + |\mathbb{Q}_{final}(\{{X_{i}}\})| + ... + |\mathbb{Q}_{final}(\{{X_{k}}\})|} \ \ \ \ k = |2^{\Theta}| \end{equation} From the value obtained, $|\mathbb{Q}_{FINAL}(\{{X_{i}}\})|$, the ratio of the original values to the normalized ones, $\xi$, is easily calculated. Therefore, the real part and the imaginary part of the QBPAs can be adjusted by the extraction of a root of $\xi$, in the light of the definition of QBPAs. \par \textbf{Note: }When the value of $\mathbb{Q}_{2}(\{Y\})$ is equal to 1, the modified values of the singletons are equal to the ones modified by the pignistic transformation \cite{article} at the level of quantum. \subsection{Algorithm developed on the basis of TDQMF} \par \textbf{Step1: }Assume there is a group of TDQMFs, $\mathbb{T} = \{\mathbb{Q}_{1},\mathbb{Q}_{2}\}$. According to the proposed rule of combination of TDQMF, a combined quantum evidence, $\mathbb{Q}_{final}$, can be easily obtained. \par \textbf{Step2: }In order to ensure that the sum of the values of the propositions in the frame of discernment is equal to 1, a step of normalization is carried out. What should be pointed out is that the values which are calculated are the modulo masses of the propositions, not the ones in the form of quantum. \par \textbf{Step3: }Using the rule of combination of the quantum mass function, a final quantum evidence can be obtained. The process of combination can be briefly defined as: \begin{equation} \mathbb{Q}_{FINAL}(\{X_{i}\}) = (((\mathbb{Q}_{1}(\{X_{i}\}) \otimes \mathbb{Q}_{2}(\{X_{i}\})) \otimes \mathbb{Q}_{3}(\{X_{i}\})) \otimes ...)
\end{equation} \par \textbf{Step4: }According to the definition of the quantum mass function, calculate the modulo value of $\mathbb{Q}_{FINAL}(\{X_{i}\}) \ \ (0 \leq i \leq n)$. \par \textbf{Step5: }Find the proposition whose modulo value is the biggest among all the propositions. \begin{equation} |\mathbb{Q}_{SELECTED}| = \max{|\mathbb{Q}_{FINAL}(\{x_{i}\})|} \ \ (0 \leq i \leq n) \end{equation} \par \textbf{Step6: }There are two kinds of situations, and two different standards of judgement can be set accordingly. In the first case, the proposition which has the biggest value is directly selected as the target without any further operations. In the second case, a threshold $\curlyvee$ can be established to serve as a standard for choosing the proper target. If $|\mathbb{Q}_{SELECTED}| \geq \curlyvee$, then the proposition whose modulo value is equal to $|\mathbb{Q}_{SELECTED}|$ is selected as the target. \begin{equation} \mathbb{Q}_{SELECTED} = \{x_{i}\} \end{equation} \begin{equation} Target \leftarrow \{x_{i}\} \end{equation} \begin{algorithm} \caption{:The details of the proposed algorithm } \textbf{Input:} A group of TDQMFs which are defined as $\mathbb{T}_{j} = \{\mathbb{Q}_{1},\mathbb{Q}_{2}\}\ \ (0 \leq j \leq n)$\\ A frame of discernment in the form of quantum $|\Theta \rangle = \{|A_{1} \rangle, |A_{2} \rangle, |A_{3} \rangle, ... , |A_{i} \rangle, ... , |A_{n} \rangle\}$\\ A frame of discernment of the second dimension of judgement $\Theta = \{\emptyset, Y, N, YN\}$\\ \textbf{Output:} The information of the chosen target\\ $\textbf{for} \ j =1; \ j \leq n$ \ \textbf{do} \\ 1. Combine the two-dimensional quantum evidences $(\mathbb{T}_{j})$ to get the modified QBPAs using the rule of combination of TDQMF.\\ \textbf{end}\\ 2. Normalize every component contained in the modified QBPAs.\\ 3. Combine the QBPAs using the rule of combination of QBPAs to get the final results.\\ 4.
Calculate the modulo values of the different propositions contained in the final results.\\ 5. Select the proposition whose value is the biggest. --- $MAX = ||A_{1} \rangle|$\\ ------ $\textbf{for} \ i =1; \ i \leq n$ \ \textbf{do} \\ --------- $\textbf{if}\ \ ||A_{i} \rangle| > MAX$\\ ------------ $MAX = ||A_{i} \rangle|$\\ ------ \textbf{end}\\ 6.\textcircled{{\footnotesize 1}}: If there is no other limitation, the proposition which corresponds to the value of $MAX$ is selected as the target. \ \ \ \textcircled{{\footnotesize 2}}: If a threshold is set, a proposition can be chosen as the target only when its mass is equal to the value of $||A_{i} \rangle|$ and $||A_{i} \rangle|$ is bigger than the threshold. Otherwise, no decision can be made. \end{algorithm} On the contrary, if $|\mathbb{Q}_{SELECTED}| \leq \curlyvee$, then no target can be chosen, which means that no substantial decision can be made. \clearpage \section{Applications} \subsection{Application of target recognition} In the real world, there are lots of uncertainties, which leads to chaos when trying to figure out the practical situations. In order to solve the problems brought by uncertainties, researchers have put a lot of effort into identifying proper targets \cite{LiuF2020TFS, Han2018, Pan2020association, han2018evidential, deng2010target}. However, how to recognize the corresponding objects correctly at the level of quantum is still an open issue. The following application illustrates the efficiency and validity of TDQMF in target recognition. \subsubsection{Application 1} Assume there is a target-recognition system which receives different sources of information from multiple transducers, and three targets are the objects which this system tries to identify. Therefore, the frame of discernment of the first dimension can be defined as $\Theta = \{x_{1}, x_{2}, x_{3}\}$, whose focal elements are $\{x_{1}\}, \{x_{2}\}, \{x_{3}\}$ and $\{x_{1}, x_{2}\}$, and the frame of discernment of the second dimension is defined as $\Theta = \{Y, N, YN\}$.
Besides, a hypothesis is that only one target can be selected, and four pieces of TDQMF, $\mathbb{T} = \{\mathbb{Q}_{1},\mathbb{Q}_{2}\}$, are provided. The details of the first dimension of the TDQMFs are given in Table 3, and the details of the second dimension are given in Table 4. Moreover, exhaustive details of the calculation process are provided in this first application. \begin{table}[h]\footnotesize \centering \caption{Quantum evidences given by multiple sensors} \begin{spacing}{1.40} \begin{tabular}{c c c c c}\hline $Quantum \ \ evidences$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{x_{1}\}$&$\{x_{2}\}$&$\{x_{3}\}$&$\{x_{1}, x_{2}\}$\\ $\mathbb{Q}_{1}^{1}$&$0.5000e^{0.9271j}$&$0.6403e^{0.8960j}$&$0.4472e^{0.4636j}$&$0.3741e^{1.0068j}$\\ $\mathbb{Q}_{1}^{2}$&$0.5385e^{1.1902j}$&$0.7071e^{0.7853j}$&$0.3162e^{0.3217j}$&$0.3316e^{0.4404j}$\\ $\mathbb{Q}_{1}^{3}$&$0.5047e^{0.9827j}$&$0.6224e^{0.8081j}$&$0.4640e^{0.6477j}$&$0.3774e^{1.3026j}$\\ $\mathbb{Q}_{1}^{4}$&$0.4609e^{0.8621j}$&$0.6440e^{0.9397j}$&$0.4070e^{0.4855j}$&$0.4549e^{0.4964j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Experts' judgements on reliability of quantum evidences given by sensors} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Quantum \ \ judgements$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{Y\}$& $\{N\}$&$\{Y, N\}$&$\{\emptyset\}$\\ $\mathbb{Q}_{2}^{1}$&$0.8366e^{0.2970j}$&$0.4472e^{0.4636j}$&$0.3162e^{0.3217j}$&$0$\\ $\mathbb{Q}_{2}^{2}$&$0.7745e^{0.4423j}$&$0.4690e^{0.5493j}$&$0.4242e^{0.6004j}$&$0$\\ $\mathbb{Q}_{2}^{3}$&$0.8062e^{0.1243j}$&$0.5099e^{0.7587j}$&$0.3000e^{0.8410j}$&$0$\\ $\mathbb{Q}_{2}^{4}$&$0.7681e^{0.4242j}$&$0.5477e^{0.7519j}$&$0.3316e^{1.0764j}$&$0$\\\hline \end{tabular} \end{spacing} \end{table} First, it is supposed to use the
rule of combination of TDQMF to obtain the modified QBPAs, in which selecting the corresponding propositions correctly is the most important point. For example, the process of obtaining the modified results of the first piece of quantum evidence is given as: \par $\mathbb{Q}_{FINAL_{1}}^{1}(\{x_{1}\}) = (\mathbb{Q}_{1}^{1}(\{x_{1}\}) + \mathbb{Q}_{1}^{1}(\{x_{1}, x_{2}\}) \times \frac{1}{n}) \times \mathbb{Q}_{2}^{1}(\{Y\}) + \mathbb{Q}_{1}^{1}(\{X_{non-x_{1}}\}) \times \mathbb{Q}_{2}^{1}(\{N\}) = (0.5000e^{0.9271j} + 0.3741e^{1.0068j} \times \frac{1}{2}) \times 0.8366e^{0.2970j} + (0.6403e^{0.8960j} + 0.4472e^{0.4636j}) \times 0.4472e^{0.4636j} = 1.0493e^{1.2172j}$ \par $\mathbb{Q}_{FINAL_{1}}^{1}(\{x_{2}\}) = (\mathbb{Q}_{1}^{1}(\{x_{2}\}) + \mathbb{Q}_{1}^{1}(\{x_{1}, x_{2}\}) \times \frac{1}{n}) \times \mathbb{Q}_{2}^{1}(\{Y\}) + \mathbb{Q}_{1}^{1}(\{X_{non-x_{2}}\}) \times \mathbb{Q}_{2}^{1}(\{N\}) = (0.6403e^{0.8960j} + 0.3741e^{1.0068j} \times \frac{1}{2}) \times 0.8366e^{0.2970j} + (0.5000e^{0.9271j} + 0.4472e^{0.4636j}) \times 0.4472e^{0.4636j} = 1.0708e^{1.2290j}$ \par $\mathbb{Q}_{FINAL_{1}}^{1}(\{x_{3}\}) = (\mathbb{Q}_{1}^{1}(\{x_{3}\})) \times \mathbb{Q}_{2}^{1}(\{Y\}) + \mathbb{Q}_{1}^{1}(\{X_{non-x_{3}}\}) \times \mathbb{Q}_{2}^{1}(\{N\}) = (0.4472e^{0.4636j}) \times 0.8366e^{0.2970j} + (0.5000e^{0.9271j} + 0.6403e^{0.8960j} + 0.3741e^{1.0068j}) \times 0.4472e^{0.4636j} = 1.0024e^{1.1736j}$ \par $\mathbb{Q}_{FINAL_{1}}^{1}(\{x_{1}, x_{2}\}) = \frac{n^{2} - n}{n^{2}} \times \mathbb{Q}_{1}^{1}(\{x_{1}, x_{2}\}) \times \mathbb{Q}_{2}^{1}(\{Y\}) = \frac{2^{2} - 2}{2^{2}} \times 0.3741e^{1.0068j} \times 0.8366e^{0.2970j} = 0.1565e^{1.3038j}$ \par $\mathbb{Q}_{FINAL_{1}}^{1}(\{\Theta\}) = \frac{n^{2} - n}{n^{2}} \times \mathbb{Q}_{1}^{1}(\{\Theta\}) + \mathbb{Q}_{2}^{1}(\{YN\}) = 0 + 0.3162e^{0.3217j} = 0.3162e^{0.3217j}$ \par Besides, the modified results of all of the propositions are given in Table 5.
\begin{table}[h]\footnotesize \centering \caption{Quantum evidences after modification} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Combined \ \ TDQMF$ &\multicolumn{5}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{x_{1}\}$&$\{x_{2}\}$&$\{x_{3}\}$&$\{x_{1}, x_{2}\}$&$\{\Theta\}$\\ $\mathbb{Q}_{Modified_{1}}^{1}$&$1.0493e^{1.2172j}$&$1.0708e^{1.2290j}$&$1.0024e^{1.1736j}$&$0.1565e^{1.3038j}$&$0.3162e^{0.3217j}$\\ $\mathbb{Q}_{Modified_{1}}^{2}$&$0.9785e^{1.3347j}$&$1.0281e^{1.2555j}$&$0.9202e^{1.2418j}$&$0.1283e^{0.8830j}$&$0.4242e^{0.6004j}$\\ $\mathbb{Q}_{Modified_{1}}^{3}$&$1.0928e^{1.3458j}$&$1.0874e^{1.2762j}$&$1.0108e^{1.4370j}$&$0.1522e^{1.4270j}$&$0.3000e^{0.8410j}$\\ $\mathbb{Q}_{Modified_{1}}^{4}$&$1.0661e^{1.3481j}$&$1.1190e^{1.3277j}$&$1.1071e^{1.3731j}$&$0.1072e^{1.2313j}$&$0.3316e^{1.0764j}$\\\hline \end{tabular} \end{spacing} \end{table} Second, after obtaining modified QBPAs, a step of normalization is supposed to be executed to ensure that the sum of those modified QBPAs is still equal to 1 and meets the requirement of the definition of quantum mass function. Besides, all of the normalized values of QBPAs are listed in Table 6. 
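One way to realize the normalization step is sketched below (a hedged sketch in Python, assuming, as stated in the definition of the quantum mass function, that the modulo value of an amplitude is the sum of squares of its real and imaginary parts, i.e. $|z|^{2}$, and that phases are preserved while moduli are rescaled by the root of the ratio $\xi$; the helper names are ours):

```python
import cmath
import math

def modulo(z):
    """Modulo value of an amplitude: sum of squares of the real and
    imaginary parts, i.e. |z|^2."""
    return abs(z) ** 2

def normalize(q):
    """Rescale every amplitude by sqrt(xi), with xi = 1 / (sum of modulo
    values), so the modulo values sum to 1 while each phase is kept."""
    xi = 1.0 / sum(modulo(z) for z in q.values())
    return {prop: z * math.sqrt(xi) for prop, z in q.items()}
```

Since each amplitude is multiplied by a positive real number, only the moduli change; every proposition keeps its phase, which is one way to satisfy the requirement that the sum of the values equals 1.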
\begin{table}[h]\footnotesize \centering \caption{Quantum evidences after normalization} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Normalized \ \ quantum \ \ evidences$ &\multicolumn{5}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{x_{1}\}$&$\{x_{2}\}$&$\{x_{3}\}$&$\{x_{1}, x_{2}\}$&$\{\Theta\}$\\ $\mathbb{Q}_{Normalized_{1}}^{1}$&$0.5650e^{1.2172j}$&$0.5942e^{1.2010j}$&$0.5398e^{1.1736j}$&$0.0842e^{1.3038j}$&$0.1702e^{0.3217j}$\\ $\mathbb{Q}_{Normalized_{1}}^{2}$&$0.5595e^{1.3347j}$&$0.5879e^{1.2555j}$&$0.5262e^{1.2418j}$&$0.0734e^{0.8830j}$&$0.2425e^{0.6004j}$\\ $\mathbb{Q}_{Normalized_{1}}^{3}$&$0.5831e^{1.3459j}$&$0.5803e^{1.2762j}$&$0.5393e^{1.4370j}$&$0.0812e^{1.4270j}$&$0.1600e^{0.8410j}$\\ $\mathbb{Q}_{Normalized_{1}}^{4}$&$0.5523e^{1.3481j}$&$0.5797e^{1.3731j}$&$0.5735e^{1.3731j}$&$0.0540e^{1.3218j}$&$0.1637e^{1.0764j}$\\\hline \end{tabular} \end{spacing} \end{table} Third, the combination of the QBPAs is carried out according to the rule of combination of the quantum mass function to obtain the final values of the indicators of the targets. All of the combined results are given in Table 7. \begin{table}[h]\footnotesize \centering \caption{Quantum evidence after combination} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Combined \ \ quantum \ \ evidence$ &\multicolumn{5}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{x_{1}\}$&$\{x_{2}\}$&$\{x_{3}\}$&$\{x_{1}, x_{2}\}$&$\{\Theta\}$\\ $\mathbb{Q}_{Combined_{1}}$&$0.6140e^{1.3082j}$&$0.6918e^{1.1929j}$&$0.3799e^{1.2478j}$&$0.004e^{0.2703j}$&$0.0016e^{-0.5758j}$\\\hline \end{tabular} \end{spacing} \end{table} Fourth, calculate the modulo values of the different propositions according to the definition of QBPAs. All of the modulo values of the separate propositions are given in Table 8.
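Steps 5 and 6 of the algorithm (selection with an optional threshold $\curlyvee$) can be sketched as follows; the function name and the threshold value 0.5 are illustrative, not from the paper. Applied to the modulo values of Table 8, it selects $x_{2}$:

```python
def select_target(modulo_values, threshold=None):
    """Pick the proposition with the largest modulo value; when a
    threshold is given, return None if even the best value falls below it."""
    best = max(modulo_values, key=modulo_values.get)
    if threshold is not None and modulo_values[best] < threshold:
        return None  # no substantial decision can be made
    return best

# Modulo values of Table 8 (Application 1):
table8 = {'x1': 0.3770, 'x2': 0.4786, 'x3': 0.1443,
          'x1x2': 1.8814e-05, 'Theta': 2.7050e-06}
best = select_target(table8)       # → 'x2'
none = select_target(table8, 0.5)  # below the illustrative threshold → None
```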
\begin{table}[h]\footnotesize \centering \caption{Modulo values of different propositions} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Modulo \ \ values$ &\multicolumn{5}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{x_{1}\}$&$\{x_{2}\}$&$\{x_{3}\}$&$\{x_{1}, x_{2}\}$&$\{\Theta\}$\\ $\mathbb{Q}_{Modulo_{1}}$&$0.3770$&$0.4786$&$0.1443$&$1.8814\times10^{-5}$&$2.7050\times10^{-6}$\\\hline \end{tabular} \end{spacing} \end{table} Fifth, after checking the data provided in Table 8, it is easily seen that proposition $x_{2}$ has the biggest modulo value. Therefore, the selected proposition is $x_{2}$. Sixth, assuming there is no threshold, the objects, $\varpi$, corresponding to proposition $x_{2}$ are recognized as the expected targets. \clearpage \subsection{Application of decision making} TDQMF can not only play an important role in target recognition, but also has advantages in decision making, which has been studied in many aspects \cite{Fei2019b,Xiao2019a,Yager2014a,Xiao2020a,Xiao2020c}. Nevertheless, how to make reasonable decisions at the level of quantum is still a problem waiting to be solved. Compared with the original quantum mass function, TDQMF has higher accuracy in indicating appropriate objects and is more efficient in making determinations. The following example illustrates the superiority of TDQMF in decision making. \subsubsection{Application 2} Assume there is a securities company which makes decisions about whether to purchase a certain kind of stock or to sell another category of stock. According to the situation of the market, multiple situation-awareness sensors give information in the form of quantum. The information consists of two parts: the first part is the direct judgements made by the sensors on whether to purchase a certain kind of stock or to sell it. In addition, an extra status, "doing nothing", is also regarded as a kind of judgement.
And the second part of the information is the measure of the reliability of the first part, given by another group of transducers. Based on the practical situations, some basic frames of this specific problem can be defined. The frame of discernment in the form of quantum of the first part is defined as $\Theta = \{P, S, D, PSD\}$, where $P$ represents purchasing a kind of stock and $S$ denotes selling a kind of stock. Besides, $D$ means doing nothing and $PSD$ indicates that there is no idea about what to do. Moreover, the frame of discernment in the form of quantum of the second part is defined as $\Theta = \{Y, N, YN\}$. Detailed data about the first dimension are given in Table 9, and detailed data about the second dimension are given in Table 10. After processing, the data after modification are given in Table 11, the data after normalization are given in Table 12, and the data after combination are given in Table 13. Finally, the modulo values of the different propositions are given in Table 14.
\begin{table}[h]\footnotesize \centering \caption{Quantum evidences given by multiple transducers} \begin{spacing}{1.40} \begin{tabular}{c c c c c}\hline $Quantum \ \ evidences$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{P\}$&$\{S\}$&$\{D\}$&$\{PSD\}$\\ $\mathbb{Q}_{1}^{1}$&$0.7810e^{0.8760j}$&$0.3605e^{0.5880j}$&$0.4382e^{0.4738j}$&$0.2605e^{0.6957j}$\\ $\mathbb{Q}_{1}^{2}$&$0.7366e^{0.8430j}$&$0.3640e^{0.6489j}$&$0.3848e^{0.4287j}$&$0.4204e^{0.3133j}$\\ $\mathbb{Q}_{1}^{3}$&$0.8287e^{0.8451j}$&$0.3606e^{0.8050j}$&$0.3679e^{0.8238j}$&$0.2181e^{1.0946j}$\\ $\mathbb{Q}_{1}^{4}$&$0.7789e^{0.8398j}$&$0.4177e^{0.8362j}$&$0.4318e^{0.7362j}$&$0.1793e^{0.5914j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Another group of transducers' judgements on reliability of quantum evidences} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Quantum \ \ judgements$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{Y\}$& $\{N\}$&$\{Y, N\}$&$\{\emptyset\}$\\ $\mathbb{Q}_{2}^{1}$&$0.8275e^{0.7597j}$&$0.3905e^{0.6947j}$&$0.4032e^{0.5195j}$&$0$\\ $\mathbb{Q}_{2}^{2}$&$0.7920e^{0.8032j}$&$0.4177e^{0.8362j}$&$0.4450e^{0.7683j}$&$0$\\ $\mathbb{Q}_{2}^{3}$&$0.7009e^{0.7349j}$&$0.4404e^{0.6889j}$&$0.5609e^{0.7511j}$&$0$\\ $\mathbb{Q}_{2}^{4}$&$0.8287e^{0.7256j}$&$0.4263e^{0.6857j}$&$0.3623e^{0.1139j}$&$0$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Quantum evidences after modification} \begin{spacing}{1.40} \begin{tabular}{c c c c c}\hline $Combined \ \ TDQMF$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{P\}$&$\{S\}$&$\{D\}$&$\{PSD\}$\\ $\mathbb{Q}_{Modified_{1}}^{1}$&$1.0115e^{1.4983j}$&$0.8369e^{1.4011j}$&$0.8701e^{1.3761j}$&$0.5749e^{0.5725j}$\\ $\mathbb{Q}_{Modified_{1}}^{2}$&$0.9885e^{1.5038j}$&$0.8509e^{1.4554j}$&$0.8545e^{1.4184j}$&$0.7075e^{0.5934j}$\\ 
$\mathbb{Q}_{Modified_{1}}^{3}$&$0.9503e^{1.5674j}$&$0.8287e^{1.5496j}$&$0.8305e^{1.5519j}$&$0.6994e^{0.8211j}$\\ $\mathbb{Q}_{Modified_{1}}^{4}$&$1.0548e^{1.5216j}$&$0.9099e^{1.5071j}$&$0.9165e^{1.4888j}$&$0.4716e^{0.2305j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Quantum evidences after normalization} \begin{spacing}{1.40} \begin{tabular}{c c c c c}\hline $Normalized \ \ quantum \ \ evidences$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{P\}$&$\{S\}$&$\{D\}$&$\{PSD\}$\\ $\mathbb{Q}_{Normalized_{1}}^{1}$&$0.6032e^{1.4983j}$&$0.4991e^{1.4012j}$&$0.5189e^{1.3762j}$&$0.3429e^{0.5725j}$\\ $\mathbb{Q}_{Normalized_{1}}^{2}$&$0.5773e^{1.5038j}$&$0.4969e^{1.4554j}$&$0.4990e^{1.4184j}$&$0.4132e^{0.5934j}$\\ $\mathbb{Q}_{Normalized_{1}}^{3}$&$0.5711e^{1.5675j}$&$0.4980e^{1.5496j}$&$0.4990e^{1.5520j}$&$0.4203e^{0.8211j}$\\ $\mathbb{Q}_{Normalized_{1}}^{4}$&$0.6086e^{1.5216j}$&$0.5250e^{1.5072j}$&$0.5288e^{1.4888j}$&$0.2722e^{0.2305j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Quantum evidence after combination} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Combined \ \ quantum \ \ evidence$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{P\}$&$\{S\}$&$\{D\}$&$\{PSD\}$\\ $\mathbb{Q}_{Combined_{1}}$&$0.7050e^{0.8534j}$&$0.4887e^{0.6414j}$&$0.5134e^{0.6046j}$&$0.0211e^{1.5167j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Modulo values of different propositions} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Modulo \ \ values$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{P\}$&$\{S\}$&$\{D\}$&$\{PSD\}$\\ $\mathbb{Q}_{Modulo_{1}}$&$0.4970$&$0.2388$&$0.2636$&$0.0004$\\\hline \end{tabular} \end{spacing} \end{table} It can be easily obtained that the quantum evidences given by sensors has conclusion that the securities company is 
expected to purchase this kind of stock. The operation which is supposed to be carried out is clear, and decision-makers are able to distinguish the most reasonable way to manage the current situation. \clearpage \subsection{Application of income estimate} The last two applications illustrate the advantages of TDQMF in target recognition and decision making. In this application, a more specific and practical case is offered to examine the correctness and validity of TDQMF in making predictions based on actual conditions, namely income estimation. In fact, much research has been done on predicting future trends, even in the form of quantum \cite{zhou2019robust, Dai2020, Xiao2020CEQD, Fan2019timeseries, Zhaojy2019timeseries}. Focusing on the trend of changes of income, TDQMF is utilized to predict it based on the provided quantum evidences so as to give comparatively reasonable estimates. \subsubsection{Application 3} Assume there is a public company which needs a reasonable prediction of its expected future income. According to the business circumstances and the feedback of the investment market, different sensors make their own judgements, in the form of quantum, about the rough amount of income the company is expected to receive. The income of the company can be roughly divided into 3 ordered levels: "less than 1 billion", "between 1 billion and 10 billion" and "more than 10 billion". Propositions $\leq 1B$, $1B-10B$ and $\geq 10B$ represent the three statuses respectively. Therefore, the first dimension of the frame of discernment can be defined as $\Theta = \{\leq 1B, 1B-10B, \geq 10B, U\}$, where $U$ indicates that no prediction about the future income is made. Proposition $U$ can also be treated as a union set of propositions $\leq 1B$, $1B-10B$ and $\geq 10B$. Besides, the second dimension of the frame of discernment is defined as $\Theta = \{Y, N, YN\}$.
Detailed information about the first dimension is given in Table 15, and detailed information about the second dimension is given in Table 16. Moreover, the information after modification is given in Table 17, the information after normalization is given in Table 18, and the combined information is given in Table 19. Finally, the modulo masses of the different propositions are given in Table 20. \begin{table}[h]\footnotesize \centering \caption{Quantum evidences given by multiple sensors} \begin{spacing}{1.40} \begin{tabular}{c c c c c}\hline $Quantum \ \ evidences$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{\leq 1B\}$&$\{1B-10B\}$&$\{\geq 10B\}$&$\{U\}$\\ $\mathbb{Q}_{1}^{1}$&$0.4472e^{1.1071j}$&$0.3905e^{0.8760j}$&$0.7810e^{0.8760j}$&$0.2397e^{0.5840j}$\\ $\mathbb{Q}_{1}^{2}$&$0.4114e^{1.1180j}$&$0.4252e^{0.8519j}$&$0.7528e^{0.8794j}$&$0.2882e^{1.2164j}$\\ $\mathbb{Q}_{1}^{3}$&$0.4429e^{1.0768j}$&$0.3764e^{0.6913j}$&$0.7798e^{0.8579j}$&$0.2321e^{0.8683j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Other sensors' judgements on reliability of quantum evidences} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Quantum \ \ judgements$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{Y\}$& $\{N\}$&$\{Y, N\}$&$\{\emptyset\}$\\ $\mathbb{Q}_{2}^{1}$&$0.6640e^{0.9242j}$&$0.4924e^{1.1525j}$&$0.5626e^{0.6136j}$&$0$\\ $\mathbb{Q}_{2}^{2}$&$0.7211e^{0.5880j}$&$0.5000e^{0.6435j}$&$0.4795e^{0.4586j}$&$0$\\ $\mathbb{Q}_{2}^{3}$&$0.5336e^{0.2267j}$&$0.5580e^{0.6327j}$&$0.6389e^{0.2442j}$&$0$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Quantum evidences after modification} \begin{spacing}{1.40} \begin{tabular}{c c c c c}\hline $Combined \ \ TDQMF$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{\leq 1B\}$&$\{1B-10B\}$&$\{\geq 10B\}$&$\{U\}$\\
$\mathbb{Q}_{Modified_{1}}^{1}$&$0.9202e^{-1.1515j}$&$0.8982e^{-1.1348j}$&$0.9252e^{-1.1280j}$&$0.7223e^{0.6070j}$\\ $\mathbb{Q}_{Modified_{1}}^{2}$&$0.9496e^{-1.5475j}$&$0.9493e^{1.5676j}$&$1.0215e^{1.5541j}$&$0.6329e^{0.6688j}$\\ $\mathbb{Q}_{Modified_{1}}^{3}$&$0.9176e^{1.3873j}$&$0.8862e^{1.4104j}$&$0.8837e^{1.3070j}$&$0.7698e^{0.3619j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Quantum evidences after normalization} \begin{spacing}{1.40} \begin{tabular}{c c c c c}\hline $Normalized \ \ quantum \ \ evidences$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{\leq 1B\}$&$\{1B-10B\}$&$\{\geq 10B\}$&$\{U\}$\\ $\mathbb{Q}_{Normalized_{1}}^{1}$&$0.5285e^{-1.1515j}$&$0.5158e^{-1.1348j}$&$0.5313e^{-1.1280j}$&$0.4149e^{0.6070j}$\\ $\mathbb{Q}_{Normalized_{1}}^{2}$&$0.5269e^{-1.5475j}$&$0.5268e^{1.5676j}$&$0.5668e^{1.5541j}$&$0.3512e^{0.6688j}$\\ $\mathbb{Q}_{Normalized_{1}}^{3}$&$0.5296e^{1.3873j}$&$0.5115e^{1.4104j}$&$0.5101e^{1.3070j}$&$0.4443e^{0.3619j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Quantum evidence after combination} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Combined \ \ quantum \ \ evidence$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{\leq 1B\}$&$\{1B-10B\}$&$\{\geq 10B\}$&$\{U\}$\\ $\mathbb{Q}_{Combined_{1}}$&$0.5715e^{0.1205j}$&$0.5542e^{0.1467j}$&$0.5998e^{0.1749j}$&$0.0789e^{0.9842j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Modulo values of different propositions} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Modulo \ \ values$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{\leq 1B\}$&$\{1B-10B\}$&$\{\geq 10B\}$&$\{U\}$\\ $\mathbb{Q}_{Modulo_{1}}$&$0.3266$&$0.3072$&$0.3598$&$0.0062$\\\hline \end{tabular} \end{spacing} \end{table} It can be easily concluded that the prediction system gives a 
n estimate that the company is expected to receive an income higher than 10 billion. In this case, the ability of TDQMF to predict future trends is demonstrated. \clearpage \subsection{Application of fault diagnosis} In real life, how to correctly and accurately find errors in certain systems is a troublesome but crucial issue. As a result, fault diagnosis has been a heated topic in recent years, and many related studies have been carried out \cite{Zhang2020, Hang2014, Jiang2016, Basir2007, lin2018multisensor}. Most of them manage the problems of fault diagnosis from the angle of Dempster-Shafer theory \cite{Dempster1967Upper, book}, through which certain specific faults can be diagnosed with customised methods. However, seeking faults at the level of quantum is still vague and confusing. Therefore, the usage of TDQMF in diagnosing different kinds of faults under complex situations is verified in this application. \subsubsection{Application 4} Assume there is a circuit board in which one soldered dot decides whether the whole circuitry is interconnected and works normally. In the process of working, almost all malfunctions are caused by problems occurring in this soldered dot. Therefore, a group of sensors is arranged to report judgements, in the form of quantum, on the working status of the soldered dot. The specific status can be described as "normal", "faulted" or "sub-normal". Based on these practical situations, some basic elements of this specific problem can be defined. The frame of discernment in the form of quantum of the first dimension is defined as $\Theta = \{Normal, Faulted, Sub-normal, W\}$, where proposition $W$ indicates that there is no idea about the working status of the soldered dot. Proposition $W$ can also be considered as a union set of the other three propositions, because $W$ represents the uncertainty in measuring actual situations.
Beyond these, the frame of discernment in the form of quantum of the second dimension can be defined as $\Theta = \{Y, N, YN\}$. Detailed data about the first dimension are given in Table 21, and detailed data of the second dimension are given in Table 22. After processing, the modified data are given in Table 23, the normalized data in Table 24 and the combined data in Table 25. Finally, the modulo values of different propositions are given in Table 26. \begin{table}[h]\footnotesize \centering \caption{Quantum evidences given by multiple transducers} \begin{spacing}{1.40} \begin{tabular}{c c c c c}\hline $Quantum \ \ evidences$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{Normal\}$&$\{Faulted\}$&$\{Sub-normal\}$&$\{W\}$\\ $\mathbb{Q}_{1}^{1}$&$0.4964e^{1.0891j}$&$0.4219e^{1.0222j}$&$0.6560e^{0.9151j}$&$0.3808e^{0.6638j}$\\ $\mathbb{Q}_{1}^{2}$&$0.5069e^{1.1866j}$&$0.4579e^{1.0191j}$&$0.6312e^{0.8638j}$&$0.3671e^{0.3068j}$\\ $\mathbb{Q}_{1}^{3}$&$0.4695e^{1.1071j}$&$0.4661e^{0.9530j}$&$0.6519e^{0.8505j}$&$0.3703e^{0.6713j}$\\ $\mathbb{Q}_{1}^{4}$&$0.4632e^{1.0007j}$&$0.4455e^{0.8012j}$&$0.6801e^{0.8478j}$&$0.3525e^{0.4963j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Other transducers' judgements on the reliability of quantum evidences} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Quantum \ \ judgements$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{Y\}$& $\{N\}$&$\{Y, N\}$&$\{\emptyset\}$\\ $\mathbb{Q}_{2}^{1}$&$0.8000e^{0.3177j}$&$0.5793e^{0.3708j}$&$0.1555e^{0.6984j}$&$0$\\ $\mathbb{Q}_{2}^{2}$&$0.7564e^{0.4222j}$&$0.5232e^{0.4551j}$&$0.3923e^{0.5227j}$&$0$\\ $\mathbb{Q}_{2}^{3}$&$0.7854e^{0.3781j}$&$0.5770e^{0.4868j}$&$0.2236e^{0.4636j}$&$0$\\ $\mathbb{Q}_{2}^{4}$&$0.7430e^{0.5057j}$&$0.6352e^{0.4918j}$&$0.2104e^{0.6301j}$&$0$\\\hline \end{tabular} \end{spacing} \end{table}
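Across these applications, the final modulo value of each proposition appears to equal the squared magnitude of its combined quantum amplitude (compare the combined-evidence and modulo-value tables). A minimal Python sketch under that assumption, using the Application 4 values reported below in Tables 25 and 26:

```python
import numpy as np

# Combined quantum evidence (Table 25) for Application 4;
# amplitudes are r*e^{j*theta} as reported in the text.
combined = {
    "Normal":     0.5787 * np.exp(0.9625j),
    "Faulted":    0.5453 * np.exp(0.9548j),
    "Sub-normal": 0.6062 * np.exp(0.9340j),
    "W":          0.0064 * np.exp(-1.2502j),
}

# Assumption: the modulo value of a proposition is the squared
# magnitude of its combined quantum amplitude (matches Table 26).
modulo = {k: abs(v) ** 2 for k, v in combined.items()}

# The proposition with the largest modulo value is taken as the diagnosis.
diagnosis = max(modulo, key=modulo.get)
print(modulo)
print("diagnosis:", diagnosis)  # Sub-normal
```

The recovered values (0.3349, 0.2974, 0.3675, about $4\times10^{-5}$) reproduce Table 26, and the largest one singles out the "Sub-normal" proposition, matching the conclusion in the text.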
\begin{table}[h]\footnotesize \centering \caption{Quantum evidences after modification} \begin{spacing}{1.40} \begin{tabular}{c c c c c}\hline $Combined \ \ TDQMF$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{Normal\}$&$\{Faulted\}$&$\{Sub-normal\}$&$\{W\}$\\ $\mathbb{Q}_{Modified_{1}}^{1}$&$1.1152e^{1.3251j}$&$1.0980e^{1.3202j}$&$1.1475e^{1.3013j}$&$0.4093e^{0.6768j}$\\ $\mathbb{Q}_{Modified_{1}}^{2}$&$1.0663e^{1.3485j}$&$1.0054e^{1.3935j}$&$1.0438e^{1.3707j}$&$0.6336e^{0.4398j}$\\ $\mathbb{Q}_{Modified_{1}}^{3}$&$1.1029e^{1.3867j}$&$1.0981e^{1.3729j}$&$1.1318e^{1.3490j}$&$0.4679e^{0.5727j}$\\ $\mathbb{Q}_{Modified_{1}}^{4}$&$1.1367e^{1.3529j}$&$1.1362e^{1.3441j}$&$1.1613e^{1.3480j}$&$0.4443e^{0.5596j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Quantum evidences after normalization} \begin{spacing}{1.40} \begin{tabular}{c c c c c}\hline $Normalized \ \ quantum \ \ evidences$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{Normal\}$&$\{Faulted\}$&$\{Sub-normal\}$&$\{W\}$\\ $\mathbb{Q}_{Normalized_{1}}^{1}$&$0.5622e^{1.3251j}$&$0.5536e^{1.3202j}$&$0.5785e^{1.3014j}$&$0.2063e^{0.6768j}$\\ $\mathbb{Q}_{Normalized_{1}}^{2}$&$0.5589e^{1.3485j}$&$0.5270e^{1.3936j}$&$0.5472e^{1.3707j}$&$0.3321e^{0.4398j}$\\ $\mathbb{Q}_{Normalized_{1}}^{3}$&$0.5569e^{1.3867j}$&$0.5544e^{1.3729j}$&$0.5714e^{1.3491j}$&$0.2362e^{0.5727j}$\\ $\mathbb{Q}_{Normalized_{1}}^{4}$&$0.5593e^{1.3529j}$&$0.5591e^{1.3441j}$&$0.5715e^{1.3480j}$&$0.2186e^{0.5596j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Quantum evidence after combination} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Combined \ \ quantum \ \ evidence$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{Normal\}$&$\{Faulted\}$&$\{Sub-normal\}$&$\{W\}$\\ 
$\mathbb{Q}_{Combined_{1}}$&$0.5787e^{0.9625j}$&$0.5453e^{0.9548j}$&$0.6062e^{0.9340j}$&$0.0064e^{-1.2502j}$\\\hline \end{tabular} \end{spacing} \end{table} \begin{table}[h]\footnotesize \centering \caption{Modulo values of different propositions} \begin{spacing}{1.40} \begin{tabular}{c c c c c c}\hline $Modulo \ \ values$ &\multicolumn{4}{c}{$Values \ \ of \ \ propositions$}\\\hline &$\{Normal\}$&$\{Faulted\}$&$\{Sub-normal\}$&$\{W\}$\\ $\mathbb{Q}_{Modulo_{1}}$&$0.3349$&$0.2974$&$0.3675$&$4.1071\times10^{-5}$\\\hline \end{tabular} \end{spacing} \end{table} It can be easily concluded that the status of the soldered dot is sub-normal. According to the outcomes of the judgements, maintaining the dot is an important task for workers, so as to ensure the whole system stays in good condition. Therefore, TDQMF is able to find concrete faults, and people can figure out corresponding solutions utilizing feedback from TDQMF. \clearpage \section{Conclusion} The quantum mass function proposes a novel angle for handling problems that appear in traditional evidence theory. However, the defects of the original quantum mass function are also obvious: it may not reflect actual conditions appropriately or give reasonable judgements. In order to remedy these deficiencies, a two-dimensional quantum mass function is proposed in this paper. TDQMF not only takes actual situations into account, but also takes the reliability of the original judgements into consideration, so as to give an authentic description of specific affairs. It can be regarded as a vigorous addition to the quantum mass function which improves the confidence level and correctness of the given results. Besides, the developed algorithm is also an efficient tool to extract truly useful key points in many applications when managing uncertain and complex quantum information. To sum up, TDQMF provides a feasible and practical solution for combining QBPAs and handling uncertainties existing in frames of discernment in the form of quantum.
\section*{Acknowledgment} This research was funded by the Chongqing Overseas Scholars Innovation Program (No. cx2018077). \bibliographystyle{elsarticle-num}
\section{Introduction} Measurement incompatibility is a key feature of quantum theory which distinguishes it from the classical world \cite{Heinosaari_2016}. A pair of observables is incompatible if they are not measurable simultaneously, i.e., their outcomes cannot be obtained jointly via a single joint measurement. Today, the connections among incompatibility, non-locality and steering are well known \cite{Brunner__nonloc_incomp,bush_bell_steer_incomp}. Non-classical features like Bell inequality violation as well as steering can be demonstrated only using incompatible measurements \cite{Barnett_nonloc_incomp,Uola_steer_incomp}. It is also well known that incompatible measurements provide an advantage over compatible measurements in several information-theoretic tasks in quantum information theory \cite{qkd_d,mubs}. Measurement compatibility can be characterized as the existence of a common commuting Naimark extension, i.e., one constructed using the same ancilla state and Hilbert space \cite{Mitra_Naimark_incomp,PhysRevA.89.032121,BENEDUCI2017197}. It has recently been shown that there are several layers of classicality inside measurement compatibility \cite{Heino_layers_of_class, Mitra_layers_of_class}. Recently, a novel idea was presented in ref.\cite{Angelo} to get a better understanding of the non-classicality associated with incompatibility. The authors of \cite{Angelo} introduced the concept of the incompatibility of the physical context (IPC), which is a function of a given context, where a context comprises a quantum state and two measurements. Their measure of the IPC captures the notion of non-classicality associated with the context, as it vanishes when the state is maximally mixed or the measurements commute with each other. It was defined as the difference between the information remaining in a quantum state after the first sharp measurement and after the second sharp measurement.
Moreover, the IPC is also linked with the information leakage when an eavesdropper performs a measurement on the state being transferred in a QKD-like game \cite{note}. However, as we will show, the approach of \cite{Angelo} has several limitations. First, it restricts the information-theoretic agents Alice, Eve and Bob to specific quantum operations and does not consider the most general quantum operations, i.e., quantum instruments. Second, if we do not restrict Alice, Eve and Bob to specific quantum operations, their measure of the IPC can take negative values, which implies that the state after the second measurement by the eavesdropper Eve has more information than the state after the first measurement, which does not make physical sense. Third, it is not possible to extend this idea to generic POVM measurements through their approach without introducing quantum instruments. Fourth, in the presence of memory the IPC can increase, which is against the intuition that incompatibility is non-increasing as we add memory. In this work, we generalize the idea of the IPC to POVMs and modify the corresponding information measure. Our measure of the IPC can never be negative, and it is non-increasing on the addition of memory. We also demonstrate the usefulness of the modified IPC measure through a QKD-like scenario as an example. In this way our approach has a wider applicability. The rest of this paper is organised as follows: in Section \ref{sec:prelims}, we discuss the preliminary concepts necessary for this paper. Then, we discuss the limitations of the approach given in \cite{Angelo} and present our main results in Section \ref{sec:main}. Further, in Section \ref{sec:memory}, we include the presence of memory in our analysis. Finally, in Section \ref{sec:conc} we summarise our work and discuss future directions.
\section{Preliminaries}\label{sec:prelims} \subsection{Observables and Channels} An observable $A$ with outcome set $\Omega_A$ in quantum mechanics is a collection of positive Hermitian matrices $\{A(x)\mid x\in\Omega_A\}$ such that $\sum_xA(x)=\mathbb{I}$. A pair of observables $(A,B)$, acting on the same $d$-dimensional Hilbert space $\mathcal{H}$ and with outcome sets $\Omega_A$ and $\Omega_B$ respectively, is compatible if there exists a joint observable $\mathcal{G}$ acting on the same Hilbert space $\mathcal{H}$ with outcome set $\Omega_A\times\Omega_B$ such that for all $\rho\in\mathcal{S}(\mathcal{H})$, $x\in\Omega_A$ and $y\in\Omega_B$ \begin{eqnarray} A(x)=\sum_y\mathcal{G}(x,y); \quad B(y)=\sum_x\mathcal{G}(x,y) \end{eqnarray} where $\mathcal{S}(\mathcal{H})$ is the state space. Only for PVMs does compatibility imply commutativity. We denote the set of all observables as $\mathcal{O}$. On the other hand, a quantum channel is a CPTP map from one state space $\mathcal{S}(\mathcal{H}_1)$ to another state space $\mathcal{S}(\mathcal{H}_2)$, where $\mathcal{H}_1$ and $\mathcal{H}_2$ are two Hilbert spaces. We denote the concatenation of two quantum channels $\Lambda_1$ and $\Lambda_2$ as $\Lambda_1\circ\Lambda_2$. Therefore, for all $\rho\in\mathcal{S}(\mathcal{H})$, $(\Lambda_1\circ\Lambda_2)(\rho)=\Lambda_1(\Lambda_2(\rho))$. Consider two quantum channels $\Gamma:\mathcal{S}(\mathcal{H}_1)\rightarrow\mathcal{S}(\mathcal{H}_2)$ and $\Lambda:\mathcal{S}(\mathcal{H}_1)\rightarrow\mathcal{S}(\mathcal{H}^{\prime}_1)$. If there exists a quantum channel $\Theta:\mathcal{S}(\mathcal{H}^{\prime}_1)\rightarrow\mathcal{S}(\mathcal{H}_2)$ such that $\Gamma=\Theta\circ\Lambda$ holds, we denote it as $\Gamma\preceq\Lambda$. If both $\Gamma\preceq\Lambda$ and $\Lambda\preceq\Gamma$ hold, we denote it as $\Gamma\simeq\Lambda$ and say that $\Gamma$ and $\Lambda$ are concatenation equivalent. We denote the set of all channels concatenation equivalent to a channel $\Lambda$ as $[\Lambda]$.
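The marginal conditions above are easy to verify numerically for a pair of commuting unsharp qubit observables; the following sketch uses illustrative noise parameters and is not tied to any example in the text:

```python
import numpy as np

# Two commuting unsharp qubit observables obtained by smearing the
# Z-basis PVM with illustrative noise parameters lam and mu.
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
Id = np.eye(2)
lam, mu = 0.7, 0.5

A = {0: lam * P0 + (1 - lam) * Id / 2, 1: lam * P1 + (1 - lam) * Id / 2}
B = {0: mu * P0 + (1 - mu) * Id / 2, 1: mu * P1 + (1 - mu) * Id / 2}

# A joint observable for this commuting pair: G(x, y) = A(x) B(y).
G = {(x, y): A[x] @ B[y] for x in (0, 1) for y in (0, 1)}

# Marginal conditions: A(x) = sum_y G(x, y) and B(y) = sum_x G(x, y).
assert all(np.allclose(A[x], sum(G[(x, y)] for y in (0, 1))) for x in (0, 1))
assert all(np.allclose(B[y], sum(G[(x, y)] for x in (0, 1))) for y in (0, 1))
print("marginal conditions hold")
```

Since both observables are diagonal in the same basis, every effect $\mathcal{G}(x,y)$ is positive and the joint observable exists, as the compatibility definition requires.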
There exists a special type of channel known as a \textit{completely depolarising channel}, which we will use in the following section. A channel $\Sigma$ is called completely depolarising if for all $T\in\mathcal{L}^+(\mathcal{H})$, $\Sigma(T)=\text{Tr}(T)\eta$ for some fixed $\eta\in\mathcal{S}(\mathcal{H})$, where $\mathcal{L}^+(\mathcal{H})$ is the set of positive linear operators on the Hilbert space $\mathcal{H}$. We denote the set of all channels as $\mathcal{C}$. \subsection{Quantum Instruments and Measurement models} In quantum measurements, there are two equivalent concepts, namely measurement models and quantum instruments \cite{Lahti, Heinosaari_pauli}. Measurement models are descriptions of the measurement process, whereas instruments are its concise version. Consider a measured system $S$ associated with a Hilbert space $\mathcal{H}_S$ and density matrix $\rho$, and an ancilla system associated with another Hilbert space $\mathcal{H}_a$ and density matrix $\sigma_a$. To perform a measurement on the measured system, first a joint unitary $U$, acting on the Hilbert space $\mathcal{H}_S\otimes\mathcal{H}_a$, has to be applied on the composite system. Then, a pointer observable $A^{\prime}$ with outcome set $\Omega_A^{\prime}$ has to be measured on the ancilla system. If in this process the observable $A$, with the same outcome set as $A^{\prime}$, is measured on the system $S$, then for all $x\in \Omega_A$ and $\rho \in \mathcal{S}(\mathcal{H}_S)$ we have \begin{equation} \text{Tr}[\rho A(x)]=\text{Tr}[U(\rho\otimes \sigma_a)U^{\dagger}(\mathbb{I}\otimes A^{\prime}(x))]. \end{equation} The average post-measurement state is given by \begin{equation} \Lambda(\rho)=\text{Tr}_{\mathcal{H}_a}[U(\rho\otimes \sigma_a)U^{\dagger}]. \end{equation} Here, $\Lambda$ is a quantum channel.
This measurement model is specified by the quadruple $(\mathcal{H}_a,\sigma_a,U,A^{\prime})$.\\ A quantum instrument $\mathcal{I}$, through which the measurement of an observable $A$ can be implemented, is a collection of CP trace non-increasing maps $\{\Phi_x\}$ such that for all $x\in \Omega_A$ and $\rho \in \mathcal{S}(\mathcal{H}_S)$ we have \begin{equation} \text{Tr}[\rho A(x)]=\text{Tr}[\Phi_x(\rho)]\label{obs_Ins} \end{equation} and \begin{equation} \sum_x\Phi_x(\rho)=\Lambda(\rho)\label{obs_chan} \end{equation} where $\Lambda$ is a quantum channel. We call such an instrument an $A$-compatible instrument. If $\mathcal{I}=\{\Phi_x\}$ is an $A$-compatible instrument, then another instrument $\Theta\circ\mathcal{I}=\{\Theta\circ\Phi_x\}$ is also an $A$-compatible instrument \cite{Heinosaari_parent_channel}, where $(\Theta\circ\Phi_x)(\rho)=\text{Tr}[\Phi_x(\rho)]\Theta\left( \frac{\Phi_x(\rho)}{\text{Tr}[\Phi_x(\rho)]}\right)$. We denote the set of all $A$-compatible instruments as $\mathcal{J}_A$.\\ Therefore, given a measurement model $(\mathcal{H}_a,\sigma_a,U,A^{\prime})$, one can associate a quantum instrument $\mathcal{I}$ such that for all $x\in \Omega_A$ and $\rho \in \mathcal{S}(\mathcal{H}_S)$ we have \begin{equation} \text{Tr}[\Phi_x(\rho)]=\text{Tr}[U(\rho\otimes \sigma_a)U^{\dagger}(\mathbb{I}\otimes A^{\prime}(x))].\label{eq:Measurement_model/Instrument} \end{equation} Similarly, given a quantum instrument it is possible to find a measurement model such that equation \eqref{eq:Measurement_model/Instrument} holds \cite{Ozawa}. This implies that these two concepts are equivalent.\\ \subsection{Observable-Channel compatibility} A quantum channel $\Lambda$ is compatible with an observable $A$ if there exists a quantum instrument $\mathcal{I}=\{\Phi_x\}$ such that equations \eqref{obs_Ins} and \eqref{obs_chan} hold together. Otherwise, they are incompatible.
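Equations \eqref{obs_Ins} and \eqref{obs_chan} can be checked numerically for a Lüders-type instrument, $\Phi_x(\rho)=\sqrt{A(x)}\,\rho\,\sqrt{A(x)}$, which is one standard $A$-compatible instrument; the unsharp observable and state below are illustrative choices, not taken from the text:

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a positive semidefinite Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

# Illustrative unsharp qubit observable (a noisy Z measurement) and state.
Id = np.eye(2)
A = {0: 0.8 * np.diag([1.0, 0.0]) + 0.1 * Id,
     1: 0.8 * np.diag([0.0, 1.0]) + 0.1 * Id}
rho = np.array([[0.6, 0.2], [0.2, 0.4]])

# Lüders-type instrument: Phi_x(rho) = sqrt(A(x)) rho sqrt(A(x)).
Phi = {x: psd_sqrt(A[x]) @ rho @ psd_sqrt(A[x]) for x in A}

# Eq. (obs_Ins): outcome probabilities reproduce the Born rule.
assert all(np.isclose(np.trace(Phi[x]), np.trace(rho @ A[x])) for x in A)
# Eq. (obs_chan): the summed map is trace preserving, i.e., a channel.
assert np.isclose(np.trace(Phi[0] + Phi[1]), 1.0)
print("the Lüders instrument implements A")
```

Concatenating any channel $\Theta$ after these maps preserves both conditions, which is the statement quoted from \cite{Heinosaari_parent_channel} above.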
If a channel $\Lambda$ and an observable $A$ are compatible, we denote it as $\Lambda\hbox{\hskip0.75mm$\circ\hskip-1.0mm\circ$\hskip0.75mm} A$ \cite{Heinosaari_Galois}. We call $\Lambda$ an $A$-compatible channel. \textit{It is well known that completely depolarising channels are compatible with any observable} \cite{Heinosaari_pauli}. For a quantum channel $\Lambda\in\mathcal{C}$ and an observable $A\in\mathcal{O}$, the following sets were introduced in \cite{Heinosaari_Galois}: \begin{align} &\tau_{c}(\Lambda)=\{X\in\mathcal{O}\mid \Lambda\hbox{\hskip0.75mm$\circ\hskip-1.0mm\circ$\hskip0.75mm} X\};\\ &\sigma_c(A)=\{\Gamma\in\mathcal{C}\mid \Gamma\hbox{\hskip0.75mm$\circ\hskip-1.0mm\circ$\hskip0.75mm} A\}. \end{align} Let us now write down the following theorem, which was originally proved in ref.\cite{Heinosaari_parent_channel}: \begin{theorem} Let $A\in\mathcal{O}$ be an observable acting on the state space $\mathcal{S}(\mathcal{H})$ and $(V,\mathcal{K},\hat{A})$ be its Naimark extension, i.e., $\mathcal{K}$ is a Hilbert space, $V:\mathcal{H}\rightarrow \mathcal{K}$ is an isometry and $\hat{A}=\{\hat{A}(x)\}$ is a PVM such that $V^{\dagger}\hat{A}(x)V=A(x)$ for all $x\in\Omega_A$. Then, \begin{equation} \sigma_c(A)=\{\Lambda\in\mathcal{C}\mid \Lambda\preceq\Lambda_A\} \end{equation} where for any state $\rho$, $\Lambda_A(\rho)=\sum_x\hat{A}(x)V\rho V^{\dagger}\hat{A}(x)$.\label{th:parent_channel} \end{theorem} We call $\Lambda_A$ a parent channel of $\sigma_c(A)$, and we call the corresponding $A$-compatible instrument $\mathcal{I}_A$ a parent instrument in $\mathcal{J}_A$. Clearly, $\Lambda_A$ depends on the choice of the Naimark extension, but any two parent channels are concatenation equivalent. Therefore, we are free to choose it. \subsection{Holevo Bound}\label{sec:holevo} The Holevo bound captures the maximum classical information that can be extracted from an ensemble of quantum states \cite{Chuang}.
Suppose we have an ensemble $\mathcal{E}=\{p_X(x),\rho_x\}$, and our task is to determine the classical index $x$ by doing some measurements. The density operator corresponding to this ensemble has the form $\rho=\sum_xp_X(x)\rho_x$. Now, we can perform a measurement $\Lambda_y$, so that the information gained is the mutual information $I=I(X;Y)$ after the measurement, where $Y$ is the random variable corresponding to the measurement outcome. It is known that the maximum value of this mutual information is given by the Holevo bound \cite{Chuang,Wilde}, \begin{align}\label{eq:holevo} \chi(\mathcal{E})=S(\rho)-\sum_xp_X(x)S(\rho_x) \end{align} where $S(\rho)$ is the von Neumann entropy of the state $\rho$. It is interesting to note that the Holevo bound $\chi$ is also the mutual information of a classical-quantum state of the form $\rho_{CQ}=\sum_xp_X(x)\ket{x}\bra{x}\otimes\rho_x$. Under the action of a channel $\Lambda$ the ensemble transforms as $\mathcal{E}\rightarrow \mathcal{E}^{\prime}=\{p_X(x),\rho_x^{\prime}\}$. But we know that the mutual information is non-increasing under the action of channels \cite{Wilde}, which implies that the Holevo information is also non-increasing under the action of quantum channels, i.e., \begin{align}\label{eq:holevo_dec} \chi(\mathcal{E})\geq\chi(\mathcal{E}^{\prime}). \end{align} \subsection{Incompatibility of physical context}\label{sec:old_ipc} In a recent work \cite{Angelo}, the concept of the IPC was introduced, which was further used to show quantum resource covariance \cite{Angelo2}. To define this idea, we need the notion of a context. A context is defined as $\mathbb{C}=\{\rho,X,Y\}$, where $\rho$ is an arbitrary quantum state, and $X=\{X_i\}$ and $Y=\{Y_j\}$ are two observables, with $X_i$ and $Y_j$ as the respective eigenprojectors. Other than the definition of a context, we also need a game using which we define the incompatibility of a context $\mathbb{C}$.
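The Holevo quantity of Eq.~\eqref{eq:holevo} is straightforward to evaluate numerically; a minimal sketch with an illustrative two-state qubit ensemble (entropies in nats, as elsewhere in the text):

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr[rho ln rho], in nats."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def holevo(ensemble):
    """chi(E) = S(sum_x p_x rho_x) - sum_x p_x S(rho_x)."""
    rho = sum(p * r for p, r in ensemble)
    return vn_entropy(rho) - sum(p * vn_entropy(r) for p, r in ensemble)

# Illustrative ensemble: equal mixture of |0><0| and |+><+|; both states
# are pure, so chi reduces to S(rho) and lies strictly below ln 2.
ket0 = np.array([[1.0], [0.0]])
ketp = np.array([[1.0], [1.0]]) / np.sqrt(2)
E = [(0.5, ket0 @ ket0.T), (0.5, ketp @ ketp.T)]
print(f"chi = {holevo(E):.4f}")
```

Because the two pure states are non-orthogonal, $\chi(\mathcal{E})$ falls strictly below the one bit ($\ln 2$ nats) that two orthogonal states would give.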
The game goes like this. Alice prepares the quantum state $\rho$, which has some information content that can be quantified using any known measure. The authors in ref.\cite{Angelo} quantify the information of $\rho$ using \begin{align}\label{eq:infAngelo} I(\rho)=\ln d-S(\rho), \end{align} where $S(\rho)=-\text{Tr}(\rho\ln\rho)$ is the von Neumann entropy of $\rho$ and $d$ is the dimension of the Hilbert space. That this information is non-negative, i.e., $I(\rho)\geq0$, is ensured because $S(\rho)\leq\ln d$. After state preparation, Alice performs an unselective measurement of $X$ on the prepared state, so that $\rho$ transforms as \begin{align}\label{eq:xsharp} \rho \rightarrow \mathcal{N}_X(\rho)=\sum_{i=1}^dX_i\rho X_i. \end{align} After this operation the information content in the state $\mathcal{N}_X(\rho)$ is $I_1=I(\mathcal{N}_X(\rho))$. This state $\mathcal{N}_X(\rho)$ is then delivered to Bob, who verifies the information content of the state. In case Bob finds that there is no loss of information, Alice and Bob will agree that the channel is free from information leakage. But it might happen that there is an eavesdropper, Eve, who performs an unselective measurement of $Y$ on the state $\mathcal{N}_X(\rho)$ before it is delivered to Bob. The state is then transformed as \begin{align}\label{eq:ysharp} \mathcal{N}_X(\rho) \rightarrow (\mathcal{N}_{Y}\circ\mathcal{N}_{X})(\rho)=\mathcal{N}_{YX}(\rho)=\sum_{j=1}^dY_j\mathcal{N}_{X}(\rho) Y_j. \end{align} Thus, the information content in the state $\mathcal{N}_{YX}(\rho)$ is $I_2=I(\mathcal{N}_{YX}(\rho))$, and therefore the leakage in the information content is given by \begin{align}\label{eq:old_incomp_phys_cont} \mathscr{I}_C=I_1-I_2&=I(\mathcal{N}_{X}(\rho))-I(\mathcal{N}_{YX}(\rho)),\nonumber \\ &=S(\mathcal{N}_{YX}(\rho))-S(\mathcal{N}_{X}(\rho)). \end{align} Hence, only if $\mathscr{I}_C>0$ will Alice and Bob know that there is information leakage from the channel.
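For concreteness, the measure $\mathscr{I}_C$ of Eq.~\eqref{eq:old_incomp_phys_cont} can be evaluated for a simple qubit context; the state and measurement bases below are illustrative choices, not taken from \cite{Angelo}:

```python
import numpy as np

def vn_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def dephase(rho, pvm):
    """Unselective sharp measurement: N_X(rho) = sum_i X_i rho X_i."""
    return sum(P @ rho @ P for P in pvm)

# Illustrative qubit context: Z- and X-basis measurements, mixed state.
d = 2
Z = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
X = [plus, np.eye(2) - plus]
rho = np.diag([0.9, 0.1])

info = lambda r: np.log(d) - vn_entropy(r)   # I(rho) = ln d - S(rho)
I1 = info(dephase(rho, Z))                   # after Alice's measurement
I2 = info(dephase(dephase(rho, Z), X))       # after Eve's measurement
print(f"I_C = {I1 - I2:.4f}")                # > 0: incompatible context
```

Here Eve's $X$-basis measurement fully decoheres the $Z$-diagonal state to $\mathbb{I}/2$, so $I_2=0$ and all of Alice's information leaks, giving $\mathscr{I}_C>0$.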
Notice that $\mathscr{I}_C=0$ in two kinds of scenarios: 1) if $X$ and $Y$ commute with each other and 2) if $\rho$ is a maximally mixed state. In the first scenario $\mathcal{N}_X(\rho)=\mathcal{N}_{YX}(\rho)$ because the two operators are compatible with each other. In the second scenario $I_1=0$ and there is no information to lose, which results in $I_1=I_2$. Thus, we require the incompatibility $\mathscr{I}_C$ to be non-zero for Bob to detect any leakage of information. Hence, the concept of the IPC can be defined as \begin{definition} Context incompatibility is the resource encoded in a context $\mathbb{C}=\{\rho,X,Y\}$ that allows one to test the safety of a communication channel against information leakage. It is quantified as $\mathscr{I}_C=I_1-I_2=I(\mathcal{N}_{X}(\rho))-I(\mathcal{N}_{YX}(\rho))$. It is operationally related to the amount of information lost from the system under an external measurement. \end{definition} \section{Main Results}\label{sec:main} \subsection{Limitations of incompatibility of physical context} In this section, we discuss the limitations of the approach given in \cite{Angelo}. \begin{enumerate} \item First of all, according to the approach given in \cite{Angelo}, the post-measurement state after measuring a sharp observable $X\in\mathcal{O}$ on a quantum state $\rho$ is $\mathcal{N}_{X}(\rho)$. Therefore, to measure an observable $X$, Alice and Eve are both restricted to use a particular channel $\mathcal{N}_{X}\in \mathcal{C}$, or equivalently they are restricted to use a particular quantum instrument $\mathcal{I}_{X}=\{\Phi_{X}(x)\}$ such that $\sum_{x}\Phi_{X}(x)=\mathcal{N}_{X}$. Since we have no control, at least over the eavesdropper Eve, there is no reason to assume such a restriction. \item Second, to generalize the idea, suppose we remove this restriction, i.e., to measure an observable, Alice and Eve can now use all possible instruments that are compatible with that observable.
Then, suppose that to measure the observable $X$ Alice uses an arbitrary instrument $\mathcal{I}^{\prime}_{X}=\{\Phi^{\prime}_{X}(x)\}$ such that $\Lambda^{\prime}=\sum_x\Phi^{\prime}_{X}(x)$, and that to measure $Y$ Eve uses a special instrument $\mathcal{I}^{depo}_{Y}=\{\Phi^{depo}_{Y}(y)\}$ such that for all $\rho\in \mathcal{S}(\mathcal{H})$ and a fixed pure state $\eta$, $\Lambda^{depo}_{\eta}(\rho)=\sum_y\Phi^{depo}_{Y}(y)(\rho)=\eta$ is a completely depolarising channel. Now, as $S(\eta)=0$, from equation \eqref{eq:old_incomp_phys_cont} we have \begin{align} \mathscr{I}_\mathcal{C}&=I(\Lambda^{\prime}(\rho))-I((\Lambda^{depo}_{\eta}\circ\Lambda^{\prime})(\rho))\nonumber\\ &=-S(\Lambda^{\prime}(\rho))\nonumber \\ &\leq 0. \end{align} The negativity of the IPC implies that the post-measurement state of Eve has more information than the post-measurement state of Alice, which does not make sense, because the information which Alice sends to Bob cannot be increased by the eavesdropper. This problem occurs because the von Neumann entropy is not monotonically non-increasing under the action of a quantum channel. Therefore, in this general context, their information measure is not a proper information measure. \item Thirdly, as the post-measurement state of any POVM depends on the quantum instrument used to implement that POVM, their results cannot be generalised to POVMs without introducing quantum instruments, or equivalently without introducing measurement models. \end{enumerate} \textit{Therefore, in our attempt to generalize the idea of the IPC to POVMs, we need to modify the idea and present it in a different way, which we describe in the following subsections.} \subsection{Modified measure of Information leakage} In this section we present a generalization of the game presented in Sec.~\ref{sec:old_ipc}. Now, in the game, after the preparation of the state $\rho$, instead of only doing a sharp measurement we allow Alice to perform a more generic measurement.
Now, Alice performs her measurement with the POVM $A$ on the quantum state $\rho\in\mathcal{S}(\mathcal{H})$ using the $A$-compatible instrument $\mathcal{I}^{\prime}_A=\{\Phi_{A,x}\}$ such that $\Lambda^{\prime}_A=\sum_x\Phi_{A,x}$, and generates the ensemble $\mathcal{E}_A=\{p_x, \rho_x\}$, where $p_x=\text{Tr}[\Phi_{A,x}(\rho)]$ and $\rho_x=\frac{\Phi_{A,x}(\rho)}{\text{Tr}[\Phi_{A,x}(\rho)]}$. Here, $\Lambda^{\prime}_A$ is the quantum channel such that $\Lambda^{\prime}_A:\mathcal{S}(\mathcal{H})\rightarrow \mathcal{S}(\mathcal{K})$. Furthermore, we quantify the information content of the ensemble $\mathcal{E}_A$ via the Holevo bound as: \begin{align*} \chi(\rho,\mathcal{I}^{\prime}_A)=S(\Lambda_A^{\prime}(\rho))-\sum_xp_xS(\rho_x). \end{align*} This measure of information has been previously used to quantify information gain in ref.\cite{Zhengjun}. Similarly, the eavesdropper Eve performs the POVM measurement $B$ on the quantum state $\Lambda_A^{\prime}(\rho)\in\mathcal{S}(\mathcal{K})$ using the $B$-compatible instrument $\mathcal{I}^{\prime}_B=\{\Phi_{B,y}\}$ such that $\Lambda^{\prime}_B=\sum_y\Phi_{B,y}$, and generates the ensemble $\mathcal{E}_B=\{p_x, \Lambda^{\prime}_B(\rho_x)\}$. Here, $\Lambda^{\prime}_B$ is the quantum channel such that $\Lambda^{\prime}_B:\mathcal{S}(\mathcal{K})\rightarrow \mathcal{S}(\mathcal{K}^{\prime})$. It should be noted that, since Alice and Bob do not have access to Eve's measurement outcomes, her measurement can be represented using a channel. Now, the information remaining in the state $(\Lambda^{\prime}_B\circ \Lambda^{\prime}_A)(\rho)$ is given by its Holevo bound, i.e., \begin{align*} \chi(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}^{\prime}_B)=S((\Lambda^{\prime}_B\circ \Lambda^{\prime}_A)(\rho))-\sum_xp_xS(\Lambda^{\prime}_B(\rho_x)).
\end{align*} Therefore, Bob, who was expecting to receive an ensemble with information $\chi(\rho,\mathcal{I}^{\prime}_A)$, would receive a different ensemble with information content $\chi(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}^{\prime}_B)$. Thus, the new form of the information leakage of the channel is \begin{align} I^H_c(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}^{\prime}_B)&=\chi(\rho,\mathcal{I}^{\prime}_A)-\chi(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}^{\prime}_B) \nonumber \\ &=S(\Lambda_A^{\prime}(\rho))-S((\Lambda^{\prime}_B\circ \Lambda^{\prime}_A)(\rho))\nonumber\\ &+\sum_xp_xS(\Lambda^{\prime}_B(\rho_x))-\sum_xp_xS(\rho_x). \end{align} As the Holevo bound is monotonically non-increasing under the action of quantum channels, $I^H_c(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}^{\prime}_B)\geq 0$. When $I^H_c(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}^{\prime}_B)>0$, Alice and Bob will be able to detect the information leakage in the channel.\\ Now, if Eve is rational, her goal will be to minimize the leakage along with collecting information. Therefore, to measure $B$ she will choose an instrument for which $I^H_c(\rho,\mathcal{I}^{\prime}_A, \mathcal{I}^{\prime}_B)$ takes its minimum value. Now, let $\Lambda_B$ be a parent channel in $\sigma_c(B)$ and let the corresponding $B$-compatible instrument be $\mathcal{I}_B$. Then, since $\Lambda^{\prime}_B\preceq \Lambda_B$ holds for any other channel $\Lambda^{\prime}_B\in \sigma_c(B)$ and the Holevo bound is monotonically non-increasing under the action of a quantum channel, \begin{align} \chi(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}^{\prime}_B)\leq \chi(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}_B)~\forall\mathcal{I}^{\prime}_B. \end{align} Therefore, implementation of a parent instrument retains the maximum amount of accessible information, or equivalently the maximum Holevo bound.
Therefore, for a given instrument of Alice the minimum leakage of information is \begin{align} I^H_c(\rho,\mathcal{I}^{\prime}_A,B)&=\min_{\mathcal{I}^{\prime}_B}I^H_c(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}^{\prime}_B)\nonumber\\ &=\chi(\rho,\mathcal{I}^{\prime}_A)-\max_{\mathcal{I}^{\prime}_B}\chi(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}^{\prime}_B)\nonumber\\ &=\chi(\rho,\mathcal{I}^{\prime}_A)-\chi(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}_B)\nonumber\\ &=I^H_c(\rho,\mathcal{I}^{\prime}_A,\mathcal{I}_B). \end{align} Note that the choice of $B$ depends on the output state space $\mathcal{S}(\mathcal{K})$ of the quantum channel $\Lambda^{\prime}_A$, and in that sense it is arbitrary. Now, if Alice is also rational and does not know of the presence of Eve, she will try to create an ensemble with the most accessible information, so that the receiver, i.e., Bob, can get the best amount of information; equivalently, she will use an $A$-compatible instrument for which $\chi(\rho,\mathcal{I}^{\prime}_A)$ is maximal. Let $\Lambda_A$ be a parent channel in $\sigma_c(A)$ and let the corresponding $A$-compatible parent instrument be $\mathcal{I}_A$. Then, using the same arguments as above, \begin{align} \chi(\rho,\mathcal{I}^{\prime}_A)\leq \chi(\rho,\mathcal{I}_A)~\forall\mathcal{I}^{\prime}_A. \end{align} Therefore, Alice uses the instrument $\mathcal{I}_A$, and in this case the information leakage, which is minimal when Alice uses a parent channel from $\sigma_c(A)$, is given by: \begin{align} I^H_c(\rho,A,B)=I^H_c(\rho,\mathcal{I}_A,\mathcal{I}_B). \end{align} Clearly, if Eve uses any other quantum instrument (e.g., a dimension-preserving instrument) $\mathcal{I}^{\prime}_B$, then $I^H_c(\rho,\mathcal{I}_A,\mathcal{I}^{\prime}_B)\geq I^H_c(\rho,A,B)$. Therefore, assuming both Alice and Eve to be rational, $I^H_c(\rho,A,B)$ is the \textit{appropriate amount of information leak} when the parent instruments are used.
\subsection{Incompatibility of physical context: A modified version} First of all, we modify the notion of context so that $\mathbb{C}=\{\rho,\mathbb{X},\mathbb{Y}\}$, where $\mathbb{X}$ and $\mathbb{Y}$ are POVM measurements acting on $\mathcal{S}(\mathcal{H})$ and $\mathcal{S}(\mathcal{H}^{\prime})$ respectively. Since $\mathbb{X}$ and $\mathbb{Y}$ are given, to define the IPC we restrict Alice's instrument $\mathcal{I}^{\prime,\mathcal{H}^{\prime}}_\mathbb{X}=\{\Phi^{\prime,\mathcal{H}^{\prime}}_{\mathbb{X},x}\}$ such that $\Lambda^{\prime}_\mathbb{X}=\sum_x\Phi^{\prime,\mathcal{H}^{\prime}}_{\mathbb{X},x}$ and $\Lambda^{\prime}_\mathbb{X}:\mathcal{S}(\mathcal{H})\rightarrow\mathcal{S}(\mathcal{H}^{\prime})$. We denote the set of all such $\mathbb{X}$-compatible instruments as $\mathcal{J}^{\mathcal{H}^{\prime}}_\mathbb{X}$. Even with this restriction, being rational, Alice's goal will be to maximize $\chi(\rho,\mathcal{I}^{\prime,\mathcal{H}^{\prime}}_\mathbb{X})$. Suppose that, for some $\mathcal{I}_{\mathbb{X},max}^{\mathcal{H}^{\prime}}\in\mathcal{J}^{\mathcal{H}^{\prime}}_\mathbb{X}$, \begin{equation} \max_{\mathcal{I}_{\mathbb{X}}^{\mathcal{H}^{\prime}}} \chi(\rho,\mathcal{I}_{\mathbb{X}}^{\mathcal{H}^{\prime}})=\chi(\rho,\mathcal{I}_{\mathbb{X},max}^{\mathcal{H}^{\prime}}). \end{equation} Therefore, as in the previous subsection, in this case the appropriate amount of information leak is \begin{equation} \mathfrak{I}(\mathbb{C})=I^H_c(\rho,\mathcal{I}_{\mathbb{X},max}^{\mathcal{H}^{\prime}},\mathcal{I}_\mathbb{Y}). \end{equation} For the special case $\mathcal{H}^{\prime}=\mathcal{H}$, we have \begin{equation} \mathfrak{I}(\mathbb{C})=I^H_c(\rho,\mathcal{I}_{\mathbb{X},max}^{\mathcal{H}},\mathcal{I}_\mathbb{Y}).
\end{equation} Therefore, we can define the generalized version of the IPC as follows. \\ \begin{definition} Context incompatibility is the resource encoded in a context $\mathbb{C}=\{\rho,\mathbb{X},\mathbb{Y}\}$ that allows one to test the safety of the channel against information leakage. This resource is quantified via $\mathfrak{I}(\mathbb{C})=I^H_c(\rho,\mathcal{I}_{\mathbb{X},max}^{\mathcal{H}},\mathcal{I}_\mathbb{Y})$, where $\mathcal{I}_{\mathbb{X},max}^{\mathcal{H}}$ is the $\mathbb{X}$-compatible instrument that maximizes $\chi(\rho,\mathcal{I}^{\mathcal{H}}_\mathbb{X})$. Operationally, it is the proper information leakage in the channel caused by an external measurement on the state. \end{definition} Clearly, if Eve uses any other quantum instrument (e.g., a dimension-preserving instrument) $\mathcal{I}^{\prime}_{\mathbb{Y}}$, then $I^H_c(\rho,\mathcal{I}_{\mathbb{X},max}^{\mathcal{H}},\mathcal{I}^{\prime}_\mathbb{Y})\geq \mathfrak{I}(\mathbb{C})$. Moreover, if Alice performs a sharp measurement $X=\{X_i\}$, then from Theorem~\ref{th:parent_channel}, choosing $V=\mathbb{I}$, or equivalently choosing $\mathcal{H}=\mathcal{K}$ and $X_i=\hat{X_i}$, we get a parent channel $\Lambda_A=\mathcal{N}(X):\mathcal{S}(\mathcal{H})\rightarrow\mathcal{S}(\mathcal{H})$. Let $\mathcal{I}_{X}=\{\Phi_{X}\}$ be the corresponding $X$-compatible parent instrument. Since the implementation of a parent channel retains the maximum amount of accessible information or, equivalently, the maximum Holevo bound, we have $\mathcal{I}_{\mathbb{X},max}^{\mathcal{H}}=\mathcal{I}_{X}$. The proper information leakage then takes the following form: \begin{align} \mathfrak{I}(\mathbb{C})&=\chi(\rho,\mathcal{I}_{X})-\chi(\rho,\mathcal{I}_{X},\mathcal{I}_{\mathbb{Y}})\nonumber\\ &=S(\mathcal{N}_X(\rho))-S((\Lambda_{\mathbb{Y}}\circ\mathcal{N}_X)(\rho))\nonumber\\ &+\sum_xp_xS(\Lambda_{\mathbb{Y}}(\rho_x))-\sum_xp_xS(\rho_x), 
\end{align} where $\Lambda_\mathbb{Y}$ is the $\mathbb{Y}$-compatible parent channel corresponding to the $\mathbb{Y}$-compatible parent instrument $\mathcal{I}_{\mathbb{Y}}$. \subsection{Relation between the two definitions} Our generalization of the measure of the IPC takes a simplified form when we demand that both Alice and Eve perform rank-1 sharp measurements $X$ and $Y$ using parent instruments $\mathcal{I}_{X}\in\mathcal{J}_{X}$ and $\mathcal{I}_{Y}\in\mathcal{J}_{Y}$, where $\mathcal{N}_X\in\sigma_c(X)$ and $\mathcal{N}_Y\in\sigma_c(Y)$ are the corresponding channels. In this case $\mathfrak{I}(\mathbb{C})$ reads \begin{align}\label{eq:sharp_incompatibility} \mathfrak{I}(\mathbb{C})=\sum_xp_xS(\mathcal{N}_{Y}(\rho_x))-\mathscr{I}_C. \end{align} The above equation relates our generalized measure of the IPC with the measure $\mathscr{I}_C$ defined in \cite{Angelo}. To compare the two measures, we remind the reader that $\mathscr{I}_C$ is zero when 1) $X$ and $Y$ commute or 2) $\rho$ is a maximally mixed state (see Sec.~\ref{sec:old_ipc}). Coming to the new measure, we find that $\mathfrak{I}(\mathbb{C})=0$ whenever $X$ and $Y$ commute, because then the $\mathcal{N}_{Y}(\rho_x)$ are pure states. However, $\mathfrak{I}(\mathbb{C})$ is not necessarily zero when $\rho$ is a maximally mixed state: in Eq.~(\ref{eq:sharp_incompatibility}), $\mathscr{I}_C$ is zero but the entropies $S(\mathcal{N}_{Y}(\rho_x))$ are not. This implies that our measure captures the incompatibility of a context even when the state (belonging to the context) is maximally mixed. This is unlike the previous measure of the IPC $\mathscr{I}_C$ given in Ref.~\cite{Angelo}, according to which the context is compatible if the state is maximally mixed. 
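The latter point can be checked numerically. The sketch below (an illustration, assuming that both parent channels act as L\"uders channels of rank-1 projective measurements and that $\mathscr{I}_C=S(\rho_{A^{\prime}})-S(\rho_A)$) evaluates Eq.~(\ref{eq:sharp_incompatibility}) for the maximally mixed qubit state with $X=\sigma_z$ and $Y=\sigma_x$:

```python
import numpy as np

def S(rho):
    """Von Neumann entropy in nats."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

def luders(rho, projectors):
    """Assumed parent (Lüders) channel of a rank-1 projective measurement."""
    return sum(P @ rho @ P for P in projectors)

Pz = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]                    # sigma_z
Px = [np.full((2, 2), 0.5), np.array([[0.5, -0.5], [-0.5, 0.5]])]  # sigma_x

rho = np.eye(2) / 2                 # maximally mixed state
rho_A = luders(rho, Pz)             # after Alice's measurement X
rho_Ap = luders(rho_A, Px)          # after Eve's measurement Y
I_C = S(rho_Ap) - S(rho_A)          # old measure: vanishes here

frak_I = -I_C
for P in Pz:
    p = np.trace(P @ rho).real      # outcome probability p_x
    rho_x = P @ rho @ P / p         # post-measurement state rho_x
    frak_I += p * S(luders(rho_x, Px))   # sum_x p_x S(N_Y(rho_x))
```

For this context $\mathscr{I}_C$ vanishes while $\mathfrak{I}(\mathbb{C})=\ln 2$, reproducing the sensitivity of the new measure for the maximally mixed state.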
This difference arises from the fact that the Holevo quantity, unlike the information measure in Eq.~(\ref{eq:old_incomp_phys_cont}) which represents the extractable information, can be non-zero for an ensemble created by a measurement on a maximally mixed state. We show the importance of the new IPC measure through the following example. \begin{example}\label{ex:Application} Consider a scenario in which Alice randomly implements $\sigma_z$ and $\sigma_x$ measurements on the maximally mixed state with equal probabilities, generating for Bob the ensembles $\{\{\frac{1}{2},\ket{0}\bra{0}\},\{\frac{1}{2},\ket{1}\bra{1}\}\}$ and $\{\{\frac{1}{2},\ket{+}\bra{+}\},\{\frac{1}{2},\ket{-}\bra{-}\}\}$ respectively. This is a QKD-like situation. The possible measurement bases are known to the eavesdropper Eve, but she does not know which measurement is implemented in a particular run. Therefore, she randomly measures $\sigma_x$ or $\sigma_z$ on the ensemble created by Alice. In this case, if we use the measure of the IPC from Eq.~(\ref{eq:old_incomp_phys_cont}), we get the following average IPC: \begin{align} \mathscr{I}_{avg}&=\frac{1}{4}\mathscr{I}(\frac{\mathbb{I}}{2}, \sigma_z, \sigma_z)+\frac{1}{4}\mathscr{I}(\frac{\mathbb{I}}{2}, \sigma_z, \sigma_x)\nonumber\\ &+\frac{1}{4}\mathscr{I}(\frac{\mathbb{I}}{2}, \sigma_x, \sigma_z)+\frac{1}{4}\mathscr{I}(\frac{\mathbb{I}}{2}, \sigma_x, \sigma_x)\nonumber\\ &=0. \end{align} According to this analysis, the information leakage is not detectable. But it is a well-established fact that if Alice and Bob declare the bases of their measurements, Eve will be detected, since her operation disturbs the ensemble. Therefore, this measure of the IPC is not very useful here. 
Instead, if we use the modified measure of the IPC, we get \begin{align} \mathfrak{I}_{avg}&=\frac{1}{4}\mathfrak{I}(\frac{\mathbb{I}}{2}, \sigma_z, \sigma_z)+\frac{1}{4}\mathfrak{I}(\frac{\mathbb{I}}{2}, \sigma_z, \sigma_x)\nonumber\\ &+\frac{1}{4}\mathfrak{I}(\frac{\mathbb{I}}{2}, \sigma_x, \sigma_z)+\frac{1}{4}\mathfrak{I}(\frac{\mathbb{I}}{2}, \sigma_x, \sigma_x)\nonumber\\ &=\frac{1}{4}\mathfrak{I}(\frac{\mathbb{I}}{2}, \sigma_z, \sigma_x)+\frac{1}{4}\mathfrak{I}(\frac{\mathbb{I}}{2}, \sigma_x, \sigma_z)\nonumber\\ &=\frac{1}{2}\ln 2\nonumber \\ &\neq 0, \end{align} where we have used Eq.~\eqref{eq:sharp_incompatibility} to arrive at the last line. This non-zero value suggests that the information leakage can be detected, as expected. Hence, the modified IPC measure is useful in this scenario. \end{example} In this example we have assumed that Eve uses a parent instrument for her measurement. If she uses any instrument other than the parent instrument, the average information leakage will be higher than $\mathfrak{I}_{avg}$. \section{Incompatibility of physical context in the presence of memory}\label{sec:memory} Motivated by the work in \cite{berta}, where it was shown that in the presence of memory the total uncertainty of two measurements is reduced, we ask how the IPC changes in the presence of memory. To accommodate the memory, we modify our game slightly, for the scenario where we perform only rank-one projective measurements ${X}$ and ${Y}$. In the modified game, the initial state $\sigma_{in}$ is a subsystem of the bipartite state $\sigma_{in,M}$, where $M$ acts as the memory. After the ${X}$ measurement on the subsystem $\sigma_{in}$, Alice produces the bipartite ensemble ${\rho}_{AM}=\sum_xp_x\rho^x_{AM}$, where $\rho^x_M=\text{Tr}_A[\rho^x_{AM}]$ acts as the memory and Bob receives the subsystem $A$ prepared in the state $\rho_A=\text{Tr}_M[\rho_{AM}]$. 
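This construction of the post-measurement bipartite ensemble can be sketched numerically. The maximally entangled two-qubit input state below is an assumed illustration, not the general setup of the paper:

```python
import numpy as np

def S(rho):
    """Von Neumann entropy in nats."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# Assumed input: maximally entangled state of the system and the memory M
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
sigma_inM = np.outer(psi, psi)

# Alice's rank-one projective measurement X = sigma_z on the first subsystem
ensemble = []
for k in (ket0, ket1):
    PI = np.kron(np.outer(k, k), np.eye(2))
    p = np.trace(PI @ sigma_inM).real          # outcome probability p_x
    ensemble.append((p, PI @ sigma_inM @ PI / p))

rho_AM = sum(p * r for p, r in ensemble)       # post-measurement bipartite state
R = rho_AM.reshape(2, 2, 2, 2)                 # indices (a, m, a', m')
rho_A = np.trace(R, axis1=1, axis2=3)          # Tr_M[rho_AM]
rho_M = np.trace(R, axis1=0, axis2=2)          # Tr_A[rho_AM]
cond_entropy = S(rho_AM) - S(rho_M)            # conditional entropy S(A|M)
```

For this input $\rho_{AM}$ is perfectly classically correlated, so $S(A|M)=0$ even though $\rho_A=\mathbb{I}/2$ is maximally mixed; the memory fully resolves Bob's state.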
On this ensemble, if we use the approach of Ref.~\cite{Angelo}, the information content of $\rho_A$ conditioned on the memory $\rho_M$ is given by \begin{align*} I^{mem}_1=\ln d-S(A|M)=\ln d-S(\rho_{AM})+S(\rho_M), \end{align*} where $S(A|M)=S(\rho_{AM})-S(\rho_M)$ is the conditional entropy \cite{Wilde}. After the ${Y}$ measurement by Eve on $\rho_A$, the ensemble transforms as $\sum_xp_x\rho^x_{AM}\rightarrow\sum_xp_x(\mathcal{N}_Y\otimes \mathbb{I})(\rho^x_{AM})=\sum_xp_x\rho^x_{A'M}$, where $\mathbb{I}$ is the identity channel acting on the memory, so that the remaining information content of the state $\rho_{A'}$ is \begin{align*} I^{mem}_2=\ln d-S(A'|M)=\ln d-S(\rho_{A'M})+S(\rho_M). \end{align*} Therefore, in the presence of memory, the expression of the IPC takes the following form: \begin{align}\label{eq:old_incomp_phys_cont_mem} \mathscr{I}^{mem}_C&=I^{mem}_1-I^{mem}_2\nonumber \\ &=S(\rho_{A'M})-S(\rho_{AM}). \end{align} To compare the IPC with and without memory, we compare Eq.~(\ref{eq:old_incomp_phys_cont}) with Eq.~(\ref{eq:old_incomp_phys_cont_mem}), which gives \begin{align}\label{eq:oldIPCmemchange} \mathscr{I}_C-\mathscr{I}^{mem}_C&=[S(\rho_{A'})-S(\rho_{A'M})]-[S(\rho_{A})-S(\rho_{AM})]\nonumber \\ &=I^{coh}(M\rangle A')-I^{coh}(M\rangle A)\leq 0. \end{align} Here, $I^{coh}(M\rangle A)=S(\rho_A)-S(\rho_{AM})$ is the coherent information, which is non-increasing under the action of quantum channels \cite{schumacher1,schumacher2,Wilde}. This analysis tells us that the IPC is non-decreasing in the presence of memory, which seems contrary to the intuition that memory reduces the incompatibility. Next, we compute the IPC in the modified game with our approach. In our case, after the ${X}$ measurement, the extractable classical information from $\rho_A$ is the mutual information of the classical-quantum ensemble $\rho_{CA}=\sum_xp_x\ket{x}_C\bra{x}\otimes\rho^x_A$ (see Sec.~\ref{sec:holevo}). However, now it is conditioned on the memory $\rho_M$. 
Therefore, in the presence of memory the extractable information will be the mutual information between $\rho_C=\sum_xp_x\ket{x}_C\bra{x}$ and $\rho_A$, conditioned on the memory $\rho_M$, via the tripartite classical-quantum state $\rho_{CAM}=\sum_xp_x\ket{x}_C\bra{x}\otimes\rho^x_{AM}$, i.e., \begin{align*} \mathcal{X}^{mem}_1&=S(A:C|M)\\ &=S(A|M)+S(C|M)-S(AC|M) \\ &=S(\rho_{AM})-S(\rho_M)+S(\rho_{CM})-S(\rho_{CAM}). \end{align*} Here, we have simply expanded the conditional entropies to get the final form. Similarly, after Eve performs her measurement ${Y}$ on the subsystem $\rho_A$, the remaining mutual information between $\rho_{A'}$ and $\rho_C$ conditioned on the memory $\rho_M$, via the classical-quantum ensemble $\sum_xp_x\ket{x}_C\bra{x}\otimes\rho^x_{A'M}$, is given by \begin{align*} \mathcal{X}^{mem}_2&=S(A':C|M)\\ &=S(A'|M)+S(C|M)-S(A'C|M) \\ &=S(\rho_{A'M})-S(\rho_M)+S(\rho_{CM})-S(\rho_{CA'M}). \end{align*} Therefore, with our approach, the IPC in the presence of memory takes the following form: \begin{align}\label{eq:memory_incompatibility} \mathfrak{I}^{mem}(\mathbb{C})&=\mathcal{X}^{mem}_1-\mathcal{X}^{mem}_2\nonumber \\ &=S(\rho_{AM})-S(\rho_{A'M})-S(\rho_{ACM})+S(\rho_{A'CM}) \nonumber\\ &=S(\rho_{AM})-S(\rho_{A'M})-\sum_xp_xS(\rho^x_{AM})+\sum_xp_xS(\rho^x_{A'M}) \nonumber \\ &=S(\rho_{AM})-S(\rho_{A'M})+\sum_xp_xS(\rho^x_{A'}). \end{align} In the above calculation we have used the fact that the $\rho^x_A$ are pure states, so that $\rho^x_{AM}$ and $\rho^x_{A'M}$ are bipartite product states. Now, comparing the IPC without and with memory, from Eq.~(\ref{eq:sharp_incompatibility}) and Eq.~(\ref{eq:memory_incompatibility}) respectively, we have \begin{align}\label{eq:newIPCmemchange} &\mathfrak{I}(\mathbb{C})-\mathfrak{I}^{mem}(\mathbb{C})\nonumber \\ &=[S(\rho_A)-S(\rho_{AM})]-[S(\rho_{A'})-S(\rho_{A'M})]\nonumber\\ &= I^{coh}(M\rangle A)-I^{coh}(M\rangle A')\geq 0. 
\end{align} Thus, we find that with our approach the IPC is non-increasing in the presence of memory. This agrees with the intuition that the presence of memory should reduce the incompatibility, as the memory can be utilized to recover the lost information. Comparing Eq.~(\ref{eq:oldIPCmemchange}) with Eq.~(\ref{eq:newIPCmemchange}), we find that $\mathscr{I}_C-\mathscr{I}^{mem}_C=-(\mathfrak{I}(\mathbb{C})-\mathfrak{I}^{mem}(\mathbb{C}))$. This relation strongly indicates that the information leakage content $\mathscr{I}_C$ envisaged in Ref.~\cite{Angelo} is not capable of fully capturing the problems we face in typical quantum information processing scenarios. This analysis also validates our approach for quantifying the IPC.\\ \begin{example}[Comparison of incompatibilities of a physical context with two different memories]\label{ex:memory_vs_leakge} Let $\sigma_{in}=\alpha\ket{\lambda_1}\bra{\lambda_1}+\beta\ket{\lambda_2}\bra{\lambda_2}$ be a qubit state, and let $S_x=\{\ket{+}\bra{+},\ket{-}\bra{-}\}$ and $S_z=\{\ket{0}\bra{0},\ket{1}\bra{1}\}$ be the sharp spin measurements along the $x$ and $z$ directions respectively, where $\{\ket{\lambda_1}, \ket{\lambda_2}\}$ is the eigenbasis of $\sigma_{in}$. Here $0\leq\alpha,\beta\leq 1$ and $\alpha+\beta=1$. We take our physical context to be $\mathbb{C}_1=(\sigma_{in}, S_z, S_x)$ and consider the following case, where Alice uses a memory $M$ while keeping the input state $\sigma_{in}$ fixed.\\ Suppose Alice uses a qubit memory $M$ such that $\sigma_{in,M}=p\ket{\psi_{in,M}}\bra{\psi_{in,M}}+\frac{1-p}{4}\mathbb{I}_{in,M}$, where $0\leq p\leq 1$, $\ket{\psi_{in,M}}=\sqrt{\alpha^{\prime}}\ket{\lambda_1}\ket{\lambda^{\prime}_1}+\sqrt{\beta^{\prime}}\ket{\lambda_2}\ket{\lambda^{\prime}_2}$, $\mathbb{I}_{in,M}=\mathbb{I}_{4\times 4}$, $0\leq\alpha',\beta'\leq 1$, $\alpha'+\beta'=1$, and $\{\ket{\lambda^{\prime}_1}, \ket{\lambda^{\prime}_2}\}$ is the eigenbasis of $\sigma_{M}=\text{Tr}_{in}[\sigma_{in,M}]$. 
Alice chooses $\alpha^{\prime}$, $\beta^{\prime}$ and $p$ such that \begin{eqnarray} \alpha=p\alpha^{\prime}+\frac{1-p}{2},\label{eq:alpha_alpha^prime}\\ \beta=p\beta^{\prime}+\frac{1-p}{2}\label{eq:beta_beta^prime} \end{eqnarray} hold. Then, $\text{Tr}_{M}[\sigma_{in,M}]=\sigma_{in}$. For example, when $\alpha=\frac{1}{4}$ and $\beta=\frac{3}{4}$, one possible choice is $p=\frac{3}{4}$, $\alpha^{\prime}=\frac{1}{6}$ and $\beta^{\prime}=\frac{5}{6}$. The state of the memory is $\sigma_{M}=\text{Tr}_{in}[\sigma_{in,M}]=\alpha\ket{\lambda^{\prime}_1}\bra{\lambda^{\prime}_1}+\beta\ket{\lambda^{\prime}_2}\bra{\lambda^{\prime}_2}$. Let $q_{xy}={\braket{x|\lambda_y}}$, where $x\in\{0,1,+,-\}$ and $y\in\{1,2\}$, and let $n_i=\alpha^{\prime}\mid q_{i1}\mid^2+\beta^{\prime}\mid q_{i2}\mid^2$. The bipartite ensemble created by the $S_z$ measurement of Alice is $\{p^{\prime}_i,\sigma^i_{AM}\}$, where $p^{\prime}_i=\text{Tr}[(\ket{i}\bra{i}\otimes\mathbb{I})\sigma_{in,M}]= p\,n_i+\frac{1-p}{2}$, $\sigma^i_{AM}=\frac{(\ket{i}\bra{i}\otimes\mathbb{I})\sigma_{in,M}(\ket{i}\bra{i}\otimes\mathbb{I})}{\text{Tr}[(\ket{i}\bra{i}\otimes\mathbb{I})\sigma_{in,M}]}$ and $i\in\{0,1\}$. It can be easily checked that $\sigma^i_{AM}=\ket{i}\bra{i}\otimes[\frac{p\,n_i}{p^{\prime}_i}\ket{\phi^{\prime}_i}\bra{\phi^{\prime}_i}+\frac{1-p}{2p^{\prime}_i}\frac{\mathbb{I}}{2}]$, where $\ket{\phi^{\prime}_i}=\frac{1}{\sqrt{n_i}}(\sqrt{\alpha^{\prime}}q_{i1}\ket{\lambda^{\prime}_1}+\sqrt{\beta^{\prime}}q_{i2}\ket{\lambda^{\prime}_2})$. The post-measurement average bipartite state is $\sigma_{AM}=\sum_ip^{\prime}_i\sigma^i_{AM}$, whose reduction is $\sigma_A=\text{Tr}_M[\sigma_{AM}]=\sum_ip^{\prime}_i\ket{i}\bra{i}$. After Eve's $S_x$ measurement on the $A$ part, the average bipartite state becomes $\sigma_{A^{\prime}M}=\frac{\mathbb{I}}{2}\otimes\sigma_{M}$, where $\sigma_{M}=p\sum_in_i\ket{\phi^{\prime}_i}\bra{\phi^{\prime}_i}+(1-p)\frac{\mathbb{I}}{2}$, and the average state of the $A$ part becomes $\sigma_{A^{\prime}}=\frac{\mathbb{I}}{2}$. 
So, the reduction in the information leak is given by \begin{align} &\mathfrak{I}(\mathbb{C}_1)-\mathfrak{I}^{M}(\mathbb{C}_1)\nonumber \\ &=[S(\sigma_A)-S(\sigma_{AM})]-[S(\sigma_{A^{\prime}})-S(\sigma_{A^{\prime}M})]\nonumber\\ &=[S(\sigma_A)-S(\sigma_{AM})]-[S(\sigma_{A^{\prime}})-S(\sigma_{A^{\prime}})-S(\sigma_{M})]\nonumber\\ &=S(\sigma_{M})+S(\sigma_{A})-S(\sigma_{AM})=I(A:M)_{\sigma_{AM}}. \end{align} Now, consider the special case where $\{\ket{\lambda_1},\ket{\lambda_2}\}$ is the eigenbasis of $\sigma_y$, $\alpha=\frac{1}{4}$ and $\beta=\frac{3}{4}$. In this case, $\mid q_{ij}\mid^2=\frac{1}{2}$ for all $i\in\{0,1\}$ and $j\in\{1,2\}$. Also, from Eq.~\eqref{eq:alpha_alpha^prime} we get $\alpha^{\prime}=\frac{2p-1}{4p}$. Clearly, $\alpha^{\prime}\geq 0$ only for $p\geq \frac{1}{2}$. We plot the leakage difference $\mathfrak{I}(\mathbb{C}_1)-\mathfrak{I}^{M}(\mathbb{C}_1)$ with respect to $p$ in Fig.~\ref{fig:Plot}. \begin{figure}[hbt!] \includegraphics[width=8.4cm,height=6cm]{Plot.pdf} \caption{(Colour online) Plot of the concurrence and the mutual information of $\sigma_{in,M}$, and of the leakage difference, versus the parameter $p$. The concurrence, the mutual information of $\sigma_{in,M}$ and the leakage difference are all monotonically increasing with respect to the parameter $p$. All quantities are normalized, i.e., each has been divided by its maximum value.}\label{fig:Plot} \end{figure} To quantify the amount of memory we use the concurrence measure \cite{Wootters} and the mutual information of the initial bipartite state $\sigma_{in,M}$. From Fig.~\ref{fig:Plot} we see that the concurrence, the mutual information of $\sigma_{in,M}$ and the leakage difference all increase monotonically with $p$. We can also say that the information leakage difference is a monotonically increasing function of both the concurrence and the mutual information of the state $\sigma_{in,M}$. 
Equivalently, we can say that the leakage with memory is monotonically decreasing with increasing values of the concurrence and the mutual information. It can be observed from Fig.~\ref{fig:Plot} that the leakage difference is non-zero in the region $p\lessapprox 0.548$ where the concurrence vanishes. In this region the non-zero leakage difference can be attributed to the non-vanishing mutual information. \end{example} Example~\ref{ex:memory_vs_leakge} motivates the following conjecture: \begin{conjecture} As the correlation between the memory and the input state increases, the information leakage monotonically decreases. \end{conjecture} Therefore, conditional on the validity of this conjecture, we conclude that more memory correlation helps in reducing the leakage. \section{Conclusion}\label{sec:conc} In this work, we have derived the measure of the appropriate information leakage in all QKD-like games. Moreover, by introducing quantum instruments, we have generalised the notion of the IPC to POVMs. We have shown the relation between the previous approach and ours for sharp measurements. Our approach always leads to a non-negative measure of the IPC. We have also shown, through an example, that the modified IPC measure is more useful than the earlier IPC measure of Eq.~(\ref{eq:old_incomp_phys_cont}) in a QKD-like scenario. Also, on including memory, our measure of the IPC can never increase. In fact, in Example~\ref{ex:memory_vs_leakge} we have shown that the information leakage monotonically decreases as the correlation between the input state and the memory increases. Thus, we have successfully modified the notion of the IPC for generic measurements.\\ Our work opens up several future directions. Firstly, it would be useful to construct a resource theory of the IPC using our measure. Further, our measure can be a useful tool for generic information-theoretic tasks that involve transmission of classical information over quantum channels. 
We would like to explore how our generalized version of IPC can be related to incompatibility of POVMs. \bibliographystyle{apsrev4-1}
\section{Formal Definition of Proximity Measures} We detail here the different distance and similarity metrics used in this article. \begin{itemize} \item \textbf{Euclidean Distance:} The Euclidean distance is one of the most commonly used metrics measuring the dissimilarity between vectors: $euclidean(x^p,x^q)=\lVert x^p - x^q \rVert_2$. \item \textbf{Cosine Distance:} The cosine distance is also very commonly used in the literature. It can be defined as follows: $cosine(x^p,x^q) =1 - \dfrac{x^p\cdot x^q}{\lVert x^p \rVert_2 \lVert x^q \rVert_2}$. \item \textbf{Pearson's correlation:} Pearson's correlation is based on the covariance and assesses linear correlations between variables well: $Pearson(x^p,x^q) = \dfrac{cov(x^p,x^q)}{\sigma(x^p) \sigma(x^q)}$. \item \textbf{Spearman's rank correlation:} Spearman's correlation is a non-parametric measure of rank correlation; it assesses to what extent the relation between two variables can be represented by a monotonic function. It is calculated as $Spearman(x^p,x^q) = \dfrac{cov(rg(x^p),rg(x^q))}{\sigma(rg(x^p)) \sigma(rg(x^q))}$, where $rg(\cdot)$ denotes the ranks. \item \textbf{Kendall's rank correlation:} Similarly to Spearman's correlation, Kendall's correlation is a measure of rank correlation, considering the similarity of the ranking order of the observations for the two compared objects; it best captures non-linear monotonic dependencies. It is calculated as $Kendall(x^p,x^q)=2\dfrac{N_C - N_D}{n(n-1)}$, where $N_C$ is the number of concordant pairs and $N_D$ the number of discordant pairs. Pairs of observations $(x^p_u, x^p_v)$ and $(x^q_u,x^q_v)$ are said to be concordant if their ranks agree, i.e., $x^p_u > x^p_v \Leftrightarrow x^q_u > x^q_v$, and discordant if $x^p_u > x^p_v \Leftrightarrow x^q_u < x^q_v$. \item \textbf{Kullback-Leibler divergence:} The Kullback-Leibler divergence measures how different two probability distributions are. It represents the expectation that two distributions present a similar behavior. 
It is computed as $KL(x^p,x^q)=-\sum_u P(x^p_u) \log \dfrac{P(x^q_u)}{P(x^p_u)}$. We made this measure symmetric by considering $KL(x^p,x^q) + KL(x^q,x^p)$. \end{itemize} \section{Formal Definition of Clustering Evaluation Metrics} A number of different metrics were used to evaluate the different clustering algorithms for the sample clustering. In this section, we present these metrics formally. \begin{enumerate} \item \textbf{Adjusted Rand Index (ARI):} It is a similarity measure between a clustering $C'$ and a ground truth $C$. RI is based on the number of pairs of elements that are in different clusters in both $C$ and $C'$ (denoted $a$) or in the same cluster in both $C$ and $C'$ (denoted $b$): \begin{equation} RI = \dfrac{a+b}{\dbinom{n}{2}} \end{equation} RI is then corrected for chance by taking into account its expected value $E(RI)$: \begin{equation} ARI = \dfrac{RI - E(RI)}{max(RI) - E(RI)} \end{equation} \item \textbf{Normalized Mutual Information (NMI):} It is a normalized mutual information metric between a clustering $C'$ and the ground truth class $C$. \begin{equation} NMI = \dfrac{MI(C,C')}{mean(H(C),H(C'))} \end{equation} Mutual Information (MI) and Shannon Entropy (H) are defined in Section III-A of the manuscript. \item \textbf{Homogeneity:} Considering a clustering $C'$ and the ground truth $C$, homogeneity rewards clusterings in which each cluster of $C'$ contains only elements belonging to a single class of $C$. \begin{equation} homogeneity = 1 - \dfrac{H(C|C')}{H(C)} \end{equation} \item \textbf{Completeness:} Considering a clustering $C'$ and the ground truth $C$, completeness is the complement of homogeneity: it rewards clusterings in which all the elements of a class of $C$ are assigned to the same cluster of $C'$. \begin{equation} completeness = 1 - \dfrac{H(C'|C)}{H(C')} \end{equation} \item \textbf{Fowlkes-Mallows Score (FMS):} Considering a clustering $C'$ and the ground truth clustering $C$, this metric corresponds to the geometric mean of the pairwise precision and recall. 
\begin{equation} FMS = \dfrac{TP}{\sqrt{(TP+FP)(TP+FN)}} \end{equation} where $TP$, $FP$ and $FN$ denote the numbers of true positives, false positives and false negatives respectively. \end{enumerate} \section{Computational Complexity \& Running Times For Gene Clustering} \label{section:time} The computation time is an important parameter playing a significant role in the selection of a clustering algorithm. For each algorithm, the approximate average time needed for the clustering is presented in Table~II of the main manuscript. The different computational times have been measured using Intel(R) Xeon(R) CPU E5-4650 v2 @ 2.40GHz cores. In general, the computational time increases with the number of clusters for all the clustering methods. However, for the reported clusters of Table~II, LP-Stability remains one of the fastest, with a computational time of approximately 1.5h. K-Means needs approximately twice this time, due to the several iterations (in our case $100$) performed in order to account for different initialization conditions. CorEx is by far the most computationally expensive, requiring more than 5 days for the clustering, which makes this algorithm inefficient for data of such high dimensionality. \section{Gene Signature Composition} We detail here the $27$ genes of our proposed signature and their main functions. We also provide a brief summary of the analysis obtained using the GTEx Portal in July 2020 (www.gtexportal.org). 
\begin{itemize} \item HSFX1: DNA binding transcription, GTEx: Overexpressed in brain cerebellum, cerebellar hemisphere and ovary tissues \item C3P1: endopeptidase inhibitor activity, GTEx: Highly overexpressed in liver tissues \item CCDC30: Coiled-Coil Domain, GTEx: Slightly overexpressed in all brain tissues \item CNRIP1: cannabinoid receptor, GTEx: Expressed in many tissues, in particular all brain tissues \item CD53: regulation of cell development, GTEx: Highly expressed in blood and lymphocytes \item SPRR4: UV-induced cornification, GTEx: more expressed in sun-exposed tissues, particularly skin \item RIF1: DNA repair, GTEx: expressed in many tissues including heart, blood lymphocytes and brain \item COL1A2: collagen making, GTEx: highly overexpressed in cultured fibroblasts \item ZNF767: gene expression, GTEx: highly expressed in several tissues including uterus, vagina, ovary, brain cerebellum and cerebellar hemisphere \item CD3E: antigen recognition (linked to immunodeficiency), GTEx: more expressed in whole blood tissues \item MATR3: nucleic acid binding and nucleotide binding, GTEx: highly expressed in several tissues including uterus, vagina, ovary and brain \item NCAPH: Cell Cycle, Mitotic and Mitotic Prometaphase, GTEx: highly expressed in EBV-transformed lymphocytes and to a lesser extent in cultured fibroblasts \item ASH1L: transcriptional activators, GTEx: expressed in many tissues including heart, blood lymphocytes, uterus, vagina, ovary and brain \item ANKRD30A: DNA-binding transcription factor activity (related to breast cancer), GTEx: more expressed in breast mammary tissues \item GNA15: Among its related pathways are the CREB Pathway and Integration of energy metabolism, GTEx: especially overexpressed in oesophagus mucosa \item GADD45GIP1: Among its related pathways are Mitochondrial translation and Organelle biogenesis and maintenance, GTEx: expressed in many tissues, slightly overexpressed in cultured fibroblasts \item CD302: cell 
adhesion and migration, GTEx: especially overexpressed in lung and liver tissues \item SFTA3: Among its related pathways are Surfactant metabolism and Diseases of metabolism, GTEx: Overexpressed in lung and thyroid \item C1orf159: Protein Coding gene, GTEx: especially overexpressed in testis \item RPS8: Among its related pathways are Viral mRNA Translation and Activation of the mRNA upon binding of the cap-binding complex and eIFs, and subsequent binding to 43S, GTEx: especially overexpressed in ovary tissues \item ZEB2: Among its related pathways are MicroRNAs in cancer and TGF-beta Receptor Signaling, GTEx: especially overexpressed in spinal cord (c-1) and tibial nerve \item GSX1: sequence-specific DNA binding and proximal promoter DNA-binding transcription activator activity, RNA polymerase II-specific, GTEx: especially expressed in hypothalamus \item ADNP: Vasoactive intestinal peptide is a neuroprotective factor that has a stimulatory effect on the growth of some tumor cells and an inhibitory effect on others, GTEx: overexpressed in a number of tissues, especially in EBV-transformed lymphocytes, testis, ovary and uterus \item CLIP3: plays a role in T cell apoptosis by facilitating the association of tubulin and the lipid raft ganglioside GD3, GTEx: expressed in many tissues, slightly overexpressed in EBV-transformed lymphocytes \item YEATS2: Among its related pathways is Chromatin organization, GTEx: expressed in many tissues, overexpressed in EBV-transformed lymphocytes \item ACBD4: Among its related pathways are Metabolism and Peroxisomal lipid metabolism, GTEx: expressed in many tissues, overexpressed in liver, thyroid, uterus and vagina \item SNRPG: Among its related pathways are mRNA Splicing - Minor Pathway and Transport of the SLBP independent Mature mRNA, GTEx: expressed in many tissues, strongly overexpressed in EBV-transformed lymphocytes and cultured fibroblasts \end{itemize} \section{Samples Clustering Comparison} In Fig.~1, we present the sample 
clustering performed using the CorEx signature having maximal ES and DI. Due to the very low number of clusters, this clustering failed to provide a good sample clustering, giving quite intermixed clusters. This is the reason why, for the further evaluation of the CorEx algorithm, we used the gene signature provided by 25 genes (Fig.~5 of the main manuscript). Moreover, in Fig.~2 we present the influence of two different distance metrics on the sample clustering of the same gene signature (LP-Stability with 27 genes). In particular, Spearman's correlation tends to better separate the different tumors into different clusters, while Kendall's seems to generate clusters that group tumor-related samples. \begin{figure}[!t] \centering \includegraphics[scale=0.48]{../figure/signature_fail.jpg} \caption{\textbf{Gene Signature Assessment for the CorEx algorithm.} The graph depicts the distribution of the different tumor types in $10$ different clusters using the best signature produced by the CorEx algorithm ($5$ genes). One can observe that the different tumor types are quite intermixed across the different clusters, without any association between them. } \label{fig:signature_fail} \end{figure} \begin{figure*}[!t] \centering \includegraphics[scale=0.40]{../figure/sample_dist.jpg} \caption{\textbf{Gene Assessment performed with LP-Stability clustering and Kendall's correlation-based distance.} The plot presents the distribution of tumors using the signature produced by LP-Stability and Kendall's correlation-based distance (right) in comparison to Spearman's (left). This assessment is performed in order to compare the influence of the distance on the clustering. We can observe that there are some similar clusters between the two distances, such as the well defined clusters of GBM and LIHC together with the LUAD/LUSC cluster (Cluster $5$), a squamous cluster (Cluster $7$) and some well defined BRCA clusters (Clusters $1$ and $8$). 
Thus, regarding sample clustering, it appears that the good characterization of monotonic relations offered by the Spearman's rank correlation-based distance is better suited than the more general characterization of Kendall's.} \label{fig:sample_kendall} \end{figure*} In Table~I, we present a more detailed comparison of the distribution of the tumor types for the LP-Stability and K-Means signatures. Both signatures generate clusters that successfully associate lung tumors such as LUSC and LUAD (clusters 3 \& 4 respectively), squamous tumors mainly composed of the BLCA, CESC, LUSC and HNSC types (clusters 0 \& 8 and 1 \& 8 respectively) and smoking-related tumors mainly containing CESC, HNSC, READ, LUSC and LUAD (clusters 7 \& 8 respectively). Concerning BRCA, K-Means splits it into two different groups: one that consists mainly of BRCA samples, and a second one in which a minority of BRCA samples is grouped together with GBM, although these types are not closely related. On the other hand, LP-Stability clusters BRCA into several small unblended clusters that express the various molecular types of BRCA, and groups the remaining BRCA samples with the directly related OV type (cluster 3). These results are very promising, as they are in accordance with other recent omic studies. In particular, in [2] the authors used a large set of different omics data to define a clustering reporting pan-squamous clusters (LUSC, HNSC, CESC, BLCA), but also pan-gynecology clusters (BRCA, OV) and pan-lung clusters (LUAD, LUSC). They also reported the separation of BRCA into several clusters linked to basal, luminal, Chr 8q amp or HER2-amp subtypes. Moreover, both algorithms provided a good, almost perfect separation of the LIHC and GBM samples into well defined clusters. 
This separation indicates that these specific tumor types are very different from the rest, or even that at least one gene included in the produced signatures is differently expressed compared to the rest of the samples. \begin{table*}[!t] \caption{Discovery Power: A complete comparison of the distribution of the tumor types (above $10\%$) from the best performing algorithms: LP-Stability with $27$ genes using Kendall's correlation-based distance and K-Means with $30$ genes using the Euclidean distance. The last column indicates the algorithm that provided the best distribution for the specific tumor type; it highlights the superiority of the LP-Stability signature. } \label{tab:samples_compa} \centering \begin{tabular}{|c|c|c|c|} \hline Tumor Types & \textbf{LP-Stability} ($27$ genes) & \textbf{K-Means} ($30$ genes) & Best\\ \hline \hline BLCA & \begin{tabular}{@{}c@{}} $57\%$ BLCA $\Rightarrow$ $33\%$ cluster 8 \\ $26\%$ BLCA $\Rightarrow$ $10\%$ cluster 0 \\ $< 10\%$ BLCA $\Rightarrow$ clusters 1, 3, 7 \end{tabular} & \begin{tabular}{@{}c@{}} $54\%$ BLCA $\Rightarrow$ $59\%$ cluster 7 \\ $18\%$ BLCA $\Rightarrow$ $22\%$ cluster 1 \\ $14\%$ BLCA $\Rightarrow$ $7\%$ cluster 8\\ $< 10\%$ BLCA $\Rightarrow$ clusters 2, 4, 9\\ \end{tabular} & $\sim$\\ \hline BRCA & \begin{tabular}{@{}c@{}} $26\%$ BRCA $\Rightarrow$ $75\%$ cluster 1 \\ $20\%$ BRCA $\Rightarrow$ $100\%$ cluster 2 \\ $19\%$ BRCA $\Rightarrow$ $100\%$ cluster 6 \\ $18\%$ BRCA $\Rightarrow$ $100\%$ cluster 9\\ $10\%$ BRCA $\Rightarrow$ $20\%$ cluster 3\\ \textbf{Homogeneous Clusters or with related types} \end{tabular} & \begin{tabular}{@{}c@{}} $55\%$ BRCA $\Rightarrow$ $98\%$ cluster 0\\ $27\%$ BRCA $\Rightarrow$ $20\%$ cluster 4\\ $< 10\%$ BRCA $\Rightarrow$ clusters 1, 2, 7\\ Clusters unrelated to GBM type \end{tabular} & LP-Stability\\ \hline CESC & \begin{tabular}{@{}c@{}} $58\%$ CESC $\Rightarrow$ $15\%$ cluster 0\\ $38\%$ CESC $\Rightarrow$ $16\%$ cluster 8\\ \textbf{Squamous related clusters}
\end{tabular} & \begin{tabular}{@{}c@{}} $54\%$ CESC $\Rightarrow$ $15\%$ cluster 8\\ $25\%$ CESC $\Rightarrow$ $16\%$ cluster 1\\ $16\%$ CESC $\Rightarrow$ $16\%$ cluster 7\\ \textbf{Squamous mixed with non squamous} \end{tabular} & LP-Stability\\ \hline GBM & $100\%$ GBM $\Rightarrow$ $79\%$ cluster 7 & \begin{tabular}{@{}c@{}} $98\%$ GBM $\Rightarrow$ $57\%$ cluster 2 \\ Mixed with unrelated BRCA types \end{tabular} & LP-Stability\\ \hline HNSC & \begin{tabular}{@{}c@{}} $89\%$ HNSC $\Rightarrow$ $43\%$ cluster 0\\ $10\%$ HNSC $\Rightarrow$ $7\%$ cluster 8\\ \textbf{Squamous related clusters} \end{tabular} & \begin{tabular}{@{}c@{}} $86\%$ HNSC $\Rightarrow$ $62\%$ cluster 8\\ $11\%$ HNSC $\Rightarrow$ $18\%$ cluster 1\\ \textbf{Squamous related clusters} \end{tabular} & $\sim$\\ \hline LIHC & $90\%$ LIHC $\Rightarrow$ $100\%$ cluster 5 & $98\%$ LIHC $\Rightarrow$ $98\%$ cluster 5 & $\sim$\\ \hline READ & \begin{tabular}{@{}c@{}} $82\%$ READ $\Rightarrow$ $9\%$ cluster 8 \\ \textbf{Smoking related} \end{tabular} & \begin{tabular}{@{}c@{}} $55\%$ READ $\Rightarrow$ $10\%$ cluster 7 \\ $32\%$ READ $\Rightarrow$ $5\%$ cluster 4 \\ \textbf{Smoking related} \end{tabular} & $\sim$\\ \hline LUAD & \begin{tabular}{@{}c@{}} $80\%$ LUAD $\Rightarrow$ $85\%$ cluster 4 \\ \textbf{Lung cluster} \end{tabular} & \begin{tabular}{@{}c@{}} $93\%$ LUAD $\Rightarrow$ $83\%$ cluster 3\\ \textbf{Lung cluster} \end{tabular} & $\sim$\\ \hline LUSC & \begin{tabular}{@{}c@{}} $54\%$ LUSC $\Rightarrow$ $25\%$ cluster 0\\ $23\%$ LUSC $\Rightarrow$ $18\%$ cluster 8 \\ $15\%$ LUSC $\Rightarrow$ $15\%$ cluster 4\\ \textbf{Squamous and lung clusters} \end{tabular} & \begin{tabular}{@{}c@{}} $53\%$ LUSC $\Rightarrow$ $97\%$ cluster 6\\ $20\%$ LUSC $\Rightarrow$ $17\%$ cluster 3 \\ $11\%$ LUSC $\Rightarrow$ $21\%$ cluster 1\\ \textbf{Squamous and lung clusters} \end{tabular} & K-Means\\ \hline OV & \begin{tabular}{@{}c@{}} $92\%$ OV $\Rightarrow$ $60\%$ cluster 3\\ $< 5\%$ OV $\Rightarrow$ 
clusters 1, 8\\ \textbf{Cluster with related BRCA} \end{tabular} & \begin{tabular}{@{}c@{}} $71\%$ OV $\Rightarrow$ $86\%$ cluster 9 \\ $15\%$ OV $\Rightarrow$ $10\%$ cluster 4 \\ $10\%$ OV $\Rightarrow$ $7\%$ cluster 7 \\ $< 10\%$ OV $\Rightarrow$ clusters 0,2\\ \textbf{Mixed clusters} \end{tabular} & LP-Stability\\ \hline \end{tabular} \end{table*} \section{Gene Screening Analysis for the K-Means Algorithm ($30$ genes)} In this section we present a detailed gene screening analysis for the different clusters produced by the K-Means signature and presented in Fig.~5 of the main manuscript. We noticed that cluster $0$, a well-defined cluster containing mainly BRCA samples, presents enrichment in diverse biological processes such as regulation of transcription by RNA polymerase II, regulation of nucleobase-containing compound metabolic process and regulation of gene expression. Its most significant gene, C10orf32, has not been identified as a cancer-related gene; rather, it is related to lysosome movement. Cluster $1$ seems to be very intermixed, with diverse biological processes enriched. For the BRCA samples, different skin-related pathways, especially keratin, are enriched; these are important for several types of cancer. The most significant gene for the BRCA samples, PKP1, is related to molecular recruitment. However, different processes for other tumor types are also enriched. In particular, the LUSC samples do not present any significant gene with a high enough score; the one with the highest score is ANKRD13B, which is related to membrane-binding processes. HNSC samples are enriched in general RNA metabolic processes and DNA-binding. Their most significant gene, CADPS2, is involved in calcium binding and is especially important in autism. BLCA samples have no genes with scores above the considered threshold. Their most significant gene is FOXC1, which is involved in DNA-binding and has been shown to be of utmost interest in several types of cancer.
The CESC type does not have any significant gene, with TRIM8 having the highest score. This gene seems to be related to Interferon gamma signaling and the Innate Immune System, and its regulation has been shown to be altered in some cancers. After this analysis, it appears that this cluster contains rather heterogeneous samples without common biological processes, even if several are linked to cancer. Besides, the biological relevance of the cluster is not very clear, as we can observe very few significant genes per tumor type. Cluster $2$ groups GBM and BRCA samples. For the BRCA samples it presents an enrichment in voltage-gated calcium channel activity only. This biological pathway has been identified as a new target for BRCA in~[3]. The most enriched gene is CACNB2, an antigen involved in voltage-gated calcium channels. A study for the GBM samples cannot be performed, as all the GBM samples are in this cluster. Regarding cluster $3$, LUSC samples present enrichment in cilium activity and surfactant homeostasis. Their most significant gene, ARRB1, programs a desensitization to stimuli and seems to be of interest for the chemosensitivity of lung cancer. For the LUAD samples in this cluster, the only significant gene, NKX2-1, is a thyroid-specific gene also involved in morphogenesis. It has been found to be a prognostic marker in early-stage non-small cell lung cancers. Cluster $4$ consists mainly of BRCA samples, which are enriched in immune response processes. However, the most significant gene, ACTR3, codes for a complex essential for cell shape and motility, which is not related to immune response. Cluster $4$ seems quite heterogeneous concerning the processes and the significant genes. In particular, the HNSC samples are related to extra-cellular organization. Their most significant gene, KLF17, is related to DNA-binding transcription and is involved in epithelial-mesenchymal transition and metastasis in breast cancer.
READ samples do not have significant genes; the one with the highest score is GRM2, which is particularly involved in neurotransmission and central nervous system diseases. For the OV samples, the only significant gene is SPHK1, which regulates cell proliferation and cell survival and has been linked to ovarian cancer in~[1]. LUAD samples present few significant genes, which do not enrich any biological process. Their most significant gene, UCA1, plays a role in cell proliferation and has been proven to be of interest in bladder cancer. Cluster $5$ groups the entire LIHC tumor type, so a gene screening analysis is not possible. For cluster $6$, LUSC samples have very few significant genes. The enriched processes they are associated with are related to tissue development, Estrogen signaling and mammary gland morphogenesis. Their most significant gene is FRRS1, which is related to ferric-chelate reductase activity. Thus, this homogeneous cluster does not seem to contain a biologically meaningful subset of LUSC samples. \begin{table*}[!t] \caption{\textbf{Expression Power} of the sample clustering using as features, respectively, our proposed signature, a referential signature from the literature~[28] and the average performance over $10$ sets of randomly-selected genes of the same size as the proposed signature. We observe that the two best performing signatures are the ones produced with our pipeline.
The first uses K-Means clustering; the second, our proposed signature, uses LP-Stability.} \label{tab:EP} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Signature & ARI (\%) & NMI (\%) & Homogeneity (\%) & Completeness (\%) & FMS (\%) & \begin{tabular}[c]{@{}l@{}}Expression \\ Power (\%) \end{tabular} \\ \hline Random & 29+/-5 & 37+/-4 & 37+/-4 & 37+/-4 & 39+/-4 & 36 \\ \hline CorEx & 12 & 20 & 21 & 20 & 23 & 19 \\ \hline K-Means & 52 & 63 & 65 & 62 & 58 & 60 \\ \hline Referential & 34 & 41 & 42 & 40 & 42 & 40 \\ \hline \textbf{Proposed} & \textbf{33} & \textbf{52} & \textbf{52} & \textbf{53} & \textbf{43} & \textbf{46} \\ \hline \end{tabular} \end{table*} Cluster $7$ groups BLCA, BRCA, CESC, LUSC and READ samples. OV samples present very few significant genes, without enriched biological processes. BLCA samples present significant genes related to transcription, and their most significant gene, C17orf28, is related to several cancers. BRCA samples have very few significant genes and are only weakly enriched in mitosis processes, as there are only two enriched processes. Their most significant gene, FSD1, is related to a coiled-coil region. CESC samples are enriched in cilium organization and cell projection assembly, and their most significant gene is EPCAM, which is related to gastrointestinal carcinoma and is a target of immunotherapy. So, it does not present links with CESC or the other carcinomas of the cluster. It seems that these CESC samples would have been more suitable for cluster $3$, since the LUSC samples of that cluster are strongly enriched in the same pathways. Finally, READ samples do not present sufficiently significant genes. However, the most significant one is EFNB2, which is involved in several development processes, in particular in the nervous system and in erythropoiesis. This gene has also been found to be of interest in tumor growth.
For cluster $8$, LUSC samples have numerous significant genes enriched in epidermis-related processes and skin development pathways; in particular, the most significant gene, KRT14, is related to these processes. This might be related to a subset of non-small cell lung cancers characterized by Epidermal Growth Factor Receptor (EGFR) mutations. Similarly, the HNSC samples of cluster $8$ are also linked to keratin, epidermis and skin development processes, which also characterize a subtype of HNSC. Their most significant gene, LAD1, is related to structural molecule activity and codes for a protein involved in the basement membrane zone. We found the same pathways for the CESC samples, whose most significant gene is KRT5, and the BLCA samples, with KRT6A. Cluster $9$ mainly contains OV samples, which do not present sufficiently significant genes. However, their most significant gene, CLU, has been identified as a potential cancer target in~[4]. \section{Expression Power} We report here the detailed results of section VII-B-4 of the main manuscript. In particular, in Table~II we report the performance in terms of ARI, NMI, Homogeneity, Completeness, FMS and overall Expression Power (the average of the different metrics). One can observe that the best performance is reported by K-Means and by our proposed signature produced with LP-Stability. K-Means's superiority on several metrics is due to its very good performance on the BRCA samples, the most represented type in our dataset. Besides, those scores do not take into account the biological meaning of the clusters; however, they offer a general indication of how well-defined the sample clustering is.
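The five agreement metrics aggregated into the Expression Power are all available as standard library routines. The sketch below, assuming scikit-learn (the helper name `expression_power` is ours, not the paper's code), averages the five scores as described above:

```python
from sklearn.metrics import (adjusted_rand_score, normalized_mutual_info_score,
                             homogeneity_score, completeness_score,
                             fowlkes_mallows_score)

def expression_power(true_types, cluster_labels):
    """Agreement between sample clusters and known tumor types.

    Returns the five metrics of Table II plus their average
    (the overall Expression Power).
    """
    scores = {
        "ARI": adjusted_rand_score(true_types, cluster_labels),
        "NMI": normalized_mutual_info_score(true_types, cluster_labels),
        "Homogeneity": homogeneity_score(true_types, cluster_labels),
        "Completeness": completeness_score(true_types, cluster_labels),
        "FMS": fowlkes_mallows_score(true_types, cluster_labels),
    }
    # Average the five metrics before adding the aggregate entry.
    scores["ExpressionPower"] = sum(scores.values()) / len(scores)
    return scores
```

All five metrics are invariant to a relabeling of the clusters, so a clustering that recovers the tumor types under permuted labels still scores $100\%$.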
\section{Implementation Details for Tumor/Subtumor Classification} Regarding the supervised categorization of tumor type and subtype classes, the evaluated algorithms were: Nearest Neighbor, \{Linear, Sigmoid, Radial Basis Function (RBF), Polynomial Kernel\} Support Vector Machines (SVM), Gaussian Process, Decision Trees, Random Forests, AdaBoost, XGBoosting, Gaussian Naive Bayes, Bernoulli Naive Bayes, Multi-Layer Perceptron (MLP) \& Quadratic Discriminant Analysis. We selected the top classifiers with respect to balanced accuracy, ensuring both good performance and good generalisation. In particular, for tumor type classification the selection criteria were \textit{(i)} a high balanced accuracy (equal to or above $80\%$) on the validation set and \textit{(ii)} a small difference (smaller than $20\%$) in balanced accuracy between training and validation. For tumor subtype classification, we selected the top $5$ classifiers with respect to balanced accuracy presenting a small difference (smaller than $20\%$) in balanced accuracy between training and validation. For our experiments on tumor type classification with our proposed signature, the classifiers that fulfilled these criteria were: \{Linear, Polynomial, RBF Kernel\} SVM, Gaussian Process, Random Forest, MLP and XGBoosting. For the sake of conciseness, we do not detail the selected classifiers for the other experiments and signatures. The good performance of those top classifiers was leveraged through a majority voting scheme. To deal with the problem of the unbalanced dataset, each class received a weight inversely proportional to its size. Concerning the hyperparameters of the best performing classifiers, the SVMs were granted a regularization parameter of $10$, with a polynomial kernel of degree $4$ for the Polynomial method. In addition, the RBF SVM was granted a kernel coefficient of $3$.
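The class weighting and majority voting scheme described above can be sketched as follows. This is a minimal illustration assuming scikit-learn; the synthetic data and the three-member ensemble are ours, with hyperparameters mirroring those quoted in the text ($C=10$, polynomial degree $4$, RBF coefficient $3$, $100$ trees of depth $4$).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC

# class_weight="balanced" gives each class a weight inversely
# proportional to its frequency, as described in the text.
members = [
    ("poly_svm", SVC(kernel="poly", C=10, degree=4, class_weight="balanced")),
    ("rbf_svm", SVC(kernel="rbf", C=10, gamma=3, class_weight="balanced")),
    ("rf", RandomForestClassifier(n_estimators=100, max_depth=4,
                                  class_weight="balanced", random_state=0)),
]

# Illustrative stand-in for the 27-gene expression features.
X, y = make_classification(n_samples=300, n_features=27, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# voting="hard" implements the majority voting scheme.
vote = VotingClassifier(estimators=members, voting="hard").fit(X, y)
pred = vote.predict(X)
```

In practice the ensemble would contain all classifiers passing the selection criteria, and the hard vote returns the class predicted by the majority of the members.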
The Gaussian Process was granted an RBF kernel, and multi-class predictions were achieved through a one-versus-rest scheme. The Random Forest classifier was composed of $100$ Decision Trees of maximum depth $4$. The MLP classifier was used with an LBFGS optimizer, a ReLU activation, $3000$ maximum iterations and a batch size of $200$; the learning rate was updated through inverse scaling with the current step $t$, and early stopping was used as the termination criterion. XGBoosting was used with $n_{classes}$ regression trees at each boosting stage, a deviance loss, a learning rate of $0.5$ and $40$ boosting stages; when looking for the best split, $\sqrt{n_{features}}$ features were considered. \section{Full Tumor Type Classification Performance} In Table~III we present the training/validation results of the different classifiers using our proposed LP-Stability signature. Moreover, in Table~IV we present the training/test tumor classification results for our proposed signature. The table reports the performance of the selected algorithms together with the voting (ensemble) classifier. \begin{table*}[!h] \caption{\textbf{Predictive Power: Tumor Types, Proposed Signature.} Training-Validation tumor type classification performance using the proposed signature ($27$ genes).
Voting Classifier is composed of the classifiers having reached a balanced accuracy above $80\%$ on validation.} \centering \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Classifier} & \multicolumn{2}{l|}{Balanced Accuracy (\%)} & \multicolumn{2}{l|}{Weighted Precision (\%)} & \multicolumn{2}{l|}{Weighted Sensitivity (\%)} & \multicolumn{2}{l|}{Weighted Specificity (\%)} \\ \cline{2-9} & Training & Validation & Training & Validation & Training & Validation & Training & Validation \\ \hline Nearest Neighbors & 88 & 79 & 92 & 86 & 92 & 85 & 97 & 95 \\ \hline Linear SVM & 91 & 88 & 91 & 90 & 89 & 89 & 99 & 99 \\ \hline poly SVM & 98 & 91 & 97 & 92 & 96 & 92 & 100 & 99 \\ \hline sigmoid SVM & 55 & 50 & 70 & 67 & 50 & 49 & 94 & 94 \\ \hline RBF SVM & 98 & 89 & 96 & 91 & 96 & 90 & 100 & 98 \\ \hline Gaussian Process & 96 & 90 & 97 & 94 & 97 & 93 & 99 & 98 \\ \hline Decision Tree & 68 & 66 & 85 & 38 & 47 & 45 & 94 & 94 \\ \hline Random Forest & 93 & 89 & 94 & 92 & 92 & 90 & 99 & 99 \\ \hline MLP & 100 & 87 & 100 & 92 & 100 & 92 & 100 & 98 \\ \hline AdaBoost & 72 & 64 & 81 & 75 & 74 & 70 & 98 & 95 \\ \hline Gaussian Naive Bayes & 32 & 32 & 69 & 61 & 58 & 58 & 69 & 69 \\ \hline Bernoulli Naive Bayes & 59 & 59 & 75 & 71 & 74 & 75 & 90 & 91 \\ \hline QDA & 71 & 67 & 87 & 82 & 78 & 76 & 98 & 98 \\ \hline XGBoosting & 100 & 88 & 100 & 93 & 100 & 92 & 100 & 98 \\ \hline \textbf{Voting Classifier} & \textbf{99} & \textbf{92} & \textbf{98} & \textbf{94} & \textbf{98} & \textbf{94} & \textbf{100} & \textbf{99} \\ \hline \end{tabular} \end{table*} \begin{table*}[!h] \caption{\textbf{Predictive Power: Tumor Types, Proposed Signature.} Training-Test tumor type classification performance using the proposed signature ($27$ genes) after retraining on the entire Training-Validation set.} \centering \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Classifier} & \multicolumn{2}{l|}{Balanced Accuracy (\%)} & \multicolumn{2}{l|}{Weighted Precision (\%)} &
\multicolumn{2}{l|}{Weighted Sensitivity (\%)} & \multicolumn{2}{l|}{Weighted Specificity (\%)} \\ \cline{2-9} & Training & Test & Training & Test & Training & Test & Training & Test \\ \hline Linear SVM & 91 & 89 & 91 & 90 & 89 & 88 & 99 & 98 \\\hline poly SVM & 98 & 87 & 97 & 89 & 97 & 88 & 100 & 98 \\\hline RBF SVM & 98 & 88 & 97 & 90 & 97 & 88 & 100 & 99 \\\hline Gaussian Process & 95 & 88 & 97 & 92 & 97 & 92 & 99 & 98 \\\hline Random Forest & 92 & 90 & 93 & 92 & 91 & 90 & 99 & 99 \\\hline MLP & 100 & 87 & 100 & 90 & 100 & 89 & 100 & 98 \\\hline XGBoosting & 100 & 91 & 100 & 94 & 100 & 94 & 100 & 98 \\\hline \textbf{Voting Classifier} & \textbf{99} & \textbf{92} & \textbf{99} & \textbf{94} & \textbf{98} & \textbf{93} & \textbf{100} & \textbf{99} \\ \hline \end{tabular} \end{table*} In Tables~V and~VI we summarise the performance for the signature presented in~[28]. Using the referential signature~[28], only three classifiers were selected and used for the tumor classification, also reporting lower performance. \begin{table*}[!h] \caption{\textbf{Predictive Power: Tumor Types, Referential Signature~[28].} Training-Validation tumor type classification performance using the referential signature ($78$ genes).
Voting Classifier is composed of the classifiers having reached a balanced accuracy above $80\%$ on validation and presenting a difference in balanced accuracy between training and validation below $20\%$.} \centering \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Classifier}} & \multicolumn{2}{c|}{Balanced Accuracy (\%)} & \multicolumn{2}{c|}{Weighted Precision (\%)} & \multicolumn{2}{c|}{Weighted Sensitivity (\%)} & \multicolumn{2}{c|}{Weighted Specificity (\%)} \\ \cline{2-9} \multicolumn{1}{|c|}{} & Training & Validation & Training & Validation & Training & Validation & Training & Validation \\ \hline Nearest Neighbors & 85 & 70 & 87 & 72 & 86 & 72 & 97 & 95 \\ \hline Linear SVM & 88 & 83 & 87 & 84 & 86 & 84 & 98 & 98 \\ \hline poly SVM & 94 & 78 & 94 & 79 & 93 & 78 & 99 & 97 \\ \hline sigmoid SVM & 53 & 55 & 59 & 58 & 46 & 48 & 94 & 94 \\ \hline RBF SVM & 99 & 80 & 99 & 82 & 99 & 82 & 100 & 97 \\ \hline Gaussian Process & 95 & 86 & 96 & 88 & 96 & 88 & 99 & 98 \\ \hline Decision Tree & 50 & 45 & 54 & 46 & 54 & 52 & 90 & 90 \\ \hline Random Forest & 78 & 75 & 77 & 75 & 75 & 73 & 97 & 96 \\ \hline Neural Net & 100 & 80 & 100 & 80 & 100 & 80 & 100 & 97 \\ \hline AdaBoost & 54 & 52 & 54 & 56 & 53 & 55 & 93 & 93 \\ \hline Gaussian Naive Bayes & 30 & 31 & 56 & 48 & 41 & 41 & 83 & 84 \\ \hline Bernoulli Naive Bayes & 29 & 27 & 37 & 31 & 37 & 35 & 84 & 85 \\ \hline QDA & 78 & 65 & 84 & 73 & 83 & 73 & 97 & 96 \\ \hline Gradient Boosting & 99 & 76 & 100 & 82 & 100 & 82 & 100 & 97 \\ \hline \textbf{Voting Classifier} & \textbf{99} & \textbf{87} & \textbf{99} & \textbf{88} & \textbf{99} & \textbf{88} & \textbf{100} & \textbf{98} \\ \hline \end{tabular} \end{table*} \begin{table*}[!h] \caption{\textbf{Predictive Power: Tumor Types, Referential Signature~[28].} Training-Test tumor type classification performance using the referential signature ($78$ genes) after retraining on the entire Training-Validation set.} \centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Classifier} & \multicolumn{2}{l|}{Balanced Accuracy (\%)} & \multicolumn{2}{l|}{Weighted Precision (\%)} & \multicolumn{2}{l|}{Weighted Sensitivity (\%)} & \multicolumn{2}{l|}{Weighted Specificity (\%)} \\ \cline{2-9} & Training & Test & Training & Test & Training & Test & Training & Test \\ \hline Linear SVM & 88 & 82 & 87 & 81 & 86 & 81 & 98 & 98 \\ \hline RBF SVM & 99 & 80 & 99 & 81 & 99 & 81 & 100 & 97 \\ \hline Gaussian Process & 95 & 83 & 95 & 83 & 95 & 83 & 99 & 97 \\ \hline \textbf{Voting Classifier} & \textbf{100} & \textbf{85} & \textbf{100} & \textbf{89} & \textbf{100} & \textbf{89} & \textbf{100} & \textbf{98} \\ \hline \end{tabular} \end{table*} In Tables~VII and~VIII we present the performance for the random signatures. Using the random signatures, only two classifiers fulfilling the criteria were selected. These signatures report the lowest performance compared to the other two. \begin{table*}[!t] \caption{\textbf{Predictive Power: Tumor Types, Random Signatures.} Training-Validation tumor type classification average performance over $10$ random signatures ($27$ genes each).
Voting Classifier is composed of the classifiers having reached a balanced accuracy above $80\%$ on validation.} \centering \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Classifier} & \multicolumn{2}{l|}{Balanced Accuracy (\%)} & \multicolumn{2}{l|}{Weighted Precision (\%)} & \multicolumn{2}{l|}{Weighted Sensitivity (\%)} & \multicolumn{2}{l|}{Weighted Specificity (\%)} \\ \cline{2-9} & Training & Validation & Training & Validation & Training & Validation & Training & Validation \\ \hline Nearest Neighbors & 83+/-2 & 71+/-3 & 84+/-1 & 72+/-2 & 83+/-2 & 72+/-3 & 97+/-1 & 95+/-1 \\ \hline Linear SVM & 8+/-1 & 78+/-2 & 78+/-1 & 77+/-2 & 77+/-2 & 75+/-2 & 97+/-0 & 97+/-0 \\ \hline poly SVM & 94+/-1 & 79+/-2 & 93+/-2 & 78+/-2 & 92+/-2 & 78+/-2 & 99+/-0 & 97+/-0 \\ \hline sigmoid SVM & 39+/-6 & 37+/-7 & 55+/-4 & 56+/-5 & 34+/-6 & 34+/-6 & 94+/-1 & 94+/-1 \\ \hline RBF SVM & 9+/-1 & 8+/-2 & 88+/-2 & 8+/-2 & 88+/-2 & 8+/-2 & 98+/-0 & 97+/-0 \\ \hline Gaussian Process & 89+/-1 & 81+/-2 & 89+/-1 & 81+/-2 & 89+/-1 & 81+/-2 & 98+/-0 & 97+/-0 \\ \hline Decision Tree & 49+/-4 & 48+/-5 & 55+/-14 & 41+/-11 & 41+/-9 & 4+/-9 & 92+/-2 & 92+/-2 \\ \hline Random Forest & 76+/-2 & 75+/-3 & 75+/-2 & 73+/-3 & 73+/-2 & 71+/-3 & 97+/-0 & 96+/-0 \\ \hline Neural Net & 99+/-1 & 76+/-2 & 99+/-1 & 76+/-2 & 99+/-1 & 76+/-2 & 100+/-0 & 96+/-0 \\ \hline AdaBoost & 56+/-5 & 53+/-5 & 57+/-4 & 55+/-5 & 53+/-6 & 51+/-7 & 93+/-1 & 93+/-1 \\ \hline Gaussian Naive Bayes & 26+/-6 & 28+/-7 & 57+/-6 & 55+/-8 & 41+/-5 & 42+/-6 & 81+/-3 & 81+/-4 \\ \hline Bernoulli Naive Bayes & 28+/-7 & 28+/-7 & 42+/-4 & 34+/-8 & 38+/-5 & 38+/-5 & 86+/-3 & 86+/-3 \\ \hline QDA & 60+/-11 & 57+/-10 & 69+/-6 & 65+/-6 & 50+/-15 & 48+/-14 & 95+/-1 & 95+/-1 \\ \hline Gradient Boosting & 100+/-0 & 78+/-3 & 100+/-0 & 80+/-2 & 100+/-0 & 80+/-2 & 100+/-0 & 97+/-0 \\ \hline \textbf{Voting Classifier} & \textbf{94+/-1} & \textbf{83+/-2} & \textbf{93+/-1} & \textbf{82+/-2} & \textbf{93+/-1} & \textbf{82+/-2} & \textbf{99+/-0}
& \textbf{98+/-1} \\ \hline \end{tabular} \end{table*} \begin{table*}[!h] \caption{\textbf{Predictive Power: Tumor Types, Random Signatures.} Training-Test tumor type classification average performance over $10$ random signatures ($27$ genes each) after retraining on the entire Training-Validation set.} \centering \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Classifier} & \multicolumn{2}{l|}{Balanced Accuracy (\%)} & \multicolumn{2}{l|}{Weighted Precision (\%)} & \multicolumn{2}{l|}{Weighted Sensitivity (\%)} & \multicolumn{2}{l|}{Weighted Specificity (\%)} \\ \cline{2-9} & Training & Test & Training & Test & Training & Test & Training & Test \\ \hline RBF SVM & 90+/-1 & 79+/-2 & 88+/-2 & 79+/-1 & 88+/-2 & 78+/-2 & 98+/-0 & 97+/-0 \\ \hline Gaussian Process & 89+/-1 & 80+/-1 & 89+/-1 & 80+/-1 & 89+/-1 & 81+/-1 & 98+/-0 & 97+/-0 \\ \hline \textbf{Voting Classifier} & \textbf{96+/-5} & \textbf{84+/-2} & \textbf{95+/-5} & \textbf{87+/-3} & \textbf{94+/-7} & \textbf{86+/-4} & \textbf{99+/-1} & \textbf{97+/-1} \\ \hline \end{tabular} \end{table*} \section{Reviewer 1} Recommendation: Accept With No Changes Comments: This paper was very clear and easy to follow and makes a clear contribution to the field. Additional Questions: 1. Please explain how this manuscript advances this field of research and/or contributes something new to the literature.: This paper describes the application of the LP-Stability algorithm to cluster-based analysis of genes, and compares it with other standard clustering algorithms using various distance-based metrics. The paper uses TCGA data sets to compare the results of the various algorithms. While this paper is an extension of a previously published conference article, it cites the conference article and clearly indicates how the two articles differ. It also contains substantially more technical information than the conference article. 2. Is the manuscript technically sound?
Please explain your answer under Public Comments below.: Appears to be - but didn't check completely 1. Which category describes this manuscript?: Practice / Application / Case Study / Experience Report 2. How relevant is this manuscript to the readers of this periodical? Please explain your rating under Public Comments below.: Very Relevant 1. Are the title, abstract, and keywords appropriate? Please explain under Public Comments below.: Yes 2. Does the manuscript contain sufficient and appropriate references? Please explain under Public Comments below.: References are sufficient and appropriate 3. Does the introduction state the objectives of the manuscript in terms that encourage the reader to read on? Please explain your answer under Public Comments below.: Yes 4. How would you rate the organization of the manuscript? Is it focused? Is the length appropriate for the topic? Please explain under Public Comments below.: Satisfactory 5. Please rate the readability of the manuscript. Explain your rating under Public Comments below.: Easy to read 6. Should the supplemental material be included? (Click on the Supplementary Files icon to view files): Yes, as part of the digital library for this submission if accepted 7. If yes to 6, should it be accepted: As is Please rate the manuscript. Please explain your answer.: Excellent \section{Reviewer 2} Recommendation: Revise and resubmit as “new” Comments: This manuscript presented a comparison of three methods, namely LP-stability based clustering, K-mean and CorEx on selected TCGA gene expression. The manuscript was organized like a computer science paper but the major contribution is the results and implication of the selected methods. The authors generated a substantial amount of results, some sounds interesting. 
I have the following major concerns to this manuscript: \subsection{ Biological implications: Two types of successful application of sample-wise clustering analysis on cancer data include (i) disease stratification (such as the CMS classification of colon cancer) and (ii) like the immuno-landscape characterization cited in the manuscript. (i) focusing on subtype classification to optimize clinical relevance while (ii), with gene selection, focusing on characterization the range and possible classes of cancers with respect to a certain biological mechanism. However, in this manuscript, I cannot see significant biological implications from the results. Even the functions of the genes and clusters were discussed, how these clusters related to different disease property and clinical impact was not analyzed and discussed. I do not consider the current results can contribute significant insight to current understanding of cancer.} The point raised here is indeed a key point. We first want to highlight that the focus of this study, more than offering another analysis of clusters or another cancer signature, is to present an adaptable and general pipeline that enables obtaining such clusterings and signatures in any context and then analyzing them. Indeed, we feel that the standardization of comparisons between clustering techniques and signatures is a very important yet understudied aspect, which prevents clear advances in the field. An additional point we want to clarify is that we aim here to focus on the discovery power of the signature we establish, which justifies the use of an unsupervised feature selection technique to define it. Besides, the point of this pipeline is to define a signature that is compact, redundancy-free and yet informative. The selected genes are thus unbiased and not chosen to answer a particular predictive task. They might therefore not be well studied, so providing a full biological analysis of their function would be very challenging.
However, we paid special attention to this very relevant comment in our revision, and we believe that this newly submitted version brings sufficient proof of both the validity of the approach and the informativeness of the proposed signature. To highlight the quality of the clustering method gene-wise, we provide detailed results of the enrichment score and Dunn's Index of the clusters with the proposed method, two referential methods and random clustering. To justify the quality of the signature, we performed a sample-wise clustering using as features our proposed signature, a referential signature from the literature and random genes, showing the superior ability of our signature to separate samples into meaningful clusters, as proven by the study of the tumor type distribution and the analysis relying on the SAM method. In response to a comment of reviewer 2, we also added in this revised version a supervised classification task, to show that the signature, while containing only $27$ genes, still contains enough information, including tissue-specific information. We compared the results to those obtained using the referential signature from the literature and random genes, showing the higher informativeness of our signature. We also provide in the supplementary files the details of the genes involved and the functions they are associated with. We studied the enriched processes of the genes of the signature to prove that it is redundancy-free. On the kind advice of reviewer 2, we now report a summary of the GTEx portal analysis for each gene. \subsection{ Gene signature selection: The authors offered clear introduction of the methods and implications except how the 27 genes were selected.
As mentioned by the authors "The predictive power of the best signature per algorithm was further assessed by measuring its ability to separate in a completely unsupervised manner 10 different tumor types (Table II) through sample clustering.", if this is how the genes were selected, I highly doubt the genes are tissue specific genes, which should be tested by checking GETX data. I also concern the number of genes selected here are too small. How and what biological aspects can be well represented by the space spanned by the expression profile of the 27 selected genes need to be evaluated.} We have taken care of this very relevant comment. In the revised version we now state at the beginning of section III-B: "The best signature was selected for each clustering algorithm using the method detailed in section II-F on the clustering presenting the highest DI while having best ES." \subsection{Rationality of the analysis and comparison: Three top factors that affect the TCGA gene expression data are (i) copy number varation in cancer cell, (ii) tissue components (cancer, immune and stromal cells) and (iii) tissue specific genes. Due to the selection of the ~30 genes can be biased to tissue specific genes or possible copy number varation, I concern the pathway enrichment analysis against PPI is too limited. Check the enrichment to MsigDB pathways of different biological meaning.} This is a very relevant point. We are well aware of the limitations of the PPI enrichment. However, as we want to establish here an automatic and unbiased pipeline for the comparison of clusterings, we wanted to avoid the use of enrichment in specific pathways. Indeed, in doing so, we would have lost the generality of the comparison by considering only clusters that would be relevant according to some pre-defined functions or processes. Besides, we are interested in defining a new biological metric for clustering assessment, which is a very relevant topic though beyond the scope of this article.
Besides, anyone who would want to use our methodology for a different task could adapt both metrics we use at their convenience. \subsection{I recommend the authors to test the methods on selected genes of certain functions, such as the immune response or metabolism or certain cancer hallmarks, and mine the landscape of the samples from different cancer types. Then the biological meaning of each cluster can be better annotated and the features of the selected methods can be better seen.} We take good note of this very relevant remark. However, the aim of this study is to prove the validity of the methodology used and of the signature defined. We detail in our response in section II-A the different proofs we brought for these two points.

Additional Questions:

1. Please explain how this manuscript advances this field of research and/or contributes something new to the literature.: Mining the performance of sample-wise clustering methods on cancer transcriptomics data.

2. Is the manuscript technically sound? Please explain your answer under Public Comments below.: Yes

1. Which category describes this manuscript?: Practice / Application / Case Study / Experience Report

2. How relevant is this manuscript to the readers of this periodical? Please explain your rating under Public Comments below.: Relevant

1. Are the title, abstract, and keywords appropriate? Please explain under Public Comments below.: No

2. Does the manuscript contain sufficient and appropriate references? Please explain under Public Comments below.: References are sufficient and appropriate

3. Does the introduction state the objectives of the manuscript in terms that encourage the reader to read on? Please explain your answer under Public Comments below.: Yes

4. How would you rate the organization of the manuscript? Is it focused? Is the length appropriate for the topic? Please explain under Public Comments below.: Satisfactory

5. Please rate the readability of the manuscript.
Explain your rating under Public Comments below.: Readable - but requires some effort to understand

6. Should the supplemental material be included? (Click on the Supplementary Files icon to view files): Yes, as part of the digital library for this submission if accepted

7. If yes to 6, should it be accepted: After revisions. Please include explanation under Public Comments below.

Please rate the manuscript. Please explain your answer.: Good

\section{Reviewer 3}

Recommendation: Revise and resubmit as ``new''

Comments: In this manuscript, the authors proposed a linear programming-based clustering method to identify representative gene groups for molecular signature identification. Then, based on the identified gene clusters, the tumor samples were stratified into different groups. The proposed framework was applied to large-scale TCGA datasets. The gene clustering algorithm was compared to the K-Means and CorEx algorithms and the performance was evaluated by Dunn's Index and the enrichment on the PPI network. The sample clustering results were evaluated by matching the clusters to different tumor types. The reviewer has the following concerns:

Major comments:

\subsection{ Biomarker identification is a very important step for drug discovery and targeted therapy to improve disease prognosis and diagnosis. Most proposed methods chose supervised feature selection methods to identify the molecular signatures that associate with clinical variables. Why did the authors choose the unsupervised clustering method to identify signature genes? A better description is needed in the Introduction section.} We acknowledge the need to better justify our choice of relying on unsupervised feature selection instead of a supervised one. We were stating that ``Unsupervised techniques such as clustering are very efficient for the study of large high-dimensional datasets and aim to discover unknown indiscernible structures and correlations''.
In the revised version we therefore added in the introduction: ``Besides, unlike supervised techniques meant to select features for a specific purpose, unsupervised clustering enables the discovery of unknown patterns, associations and correlations in the data.'' \subsection{ The parameter 's' in equation S(q) on page 2 was not defined in the manuscript. In addition, how can the algorithm automatically select the number of clusters?} We corrected this oversight: the parameter 's' is the variable minimized in the infimum of the formula. We clarified that the automatic determination of the number of clusters is performed thanks to the optimization of the function C of the primal problem, which characterizes the assignment of a point to a given center. The number of clusters is thus indirectly determined by this function, as it is the number of centers that have a point assigned to them. Please refer to section II-B-1 for the complete definition of the optimization problem. \subsection{ Which PPI network was applied to calculate the Enrichment Score? A reference is needed. It is better to do enrichment analysis on KEGG pathways and Gene Ontology terms to evaluate the biological relevance of the gene sets.} Thank you for this relevant comment; we added the reference of the PPI network used (from the String Consortium) for computing the enrichment in section II-D of the revised paper. We gave details on the reasons for using the PPI network for enrichment in response II-C. \subsection{ It is better to use NMI and ARI to evaluate the sample clustering results in Figure 6. The higher the consistency with the tumor subtypes, the better the results.} NMI and ARI are indeed very relevant metrics for supervised clustering evaluation. However, they prevent any discovery of new patterns or associations between the samples, which we were particularly focusing on by using an unsupervised technique, as explained in response III-A.
\subsection{ On page 9, it was mentioned that 'we look for genes that are expressed differently for the samples of one tumor type in a cluster compared to the other samples of the same tumor type'. It is still unclear how to select the significant genes from each gene cluster.} Indeed, the details of the technique used for defining the significant genes (the SAM method) are given in section II-D. Besides, for clarity's sake, we added in the revised version a reference to this method section.

Minor comments:

\subsection{ It is unclear to me how the authors concluded that '10 times better' and '$35\%$ better' in the Abstract based on the experimental results.} These comparisons come from the average Dunn's Index (DI) and PPI Enrichment Score (ES) of our favored method (LP-Stability) and the reference clustering methods (K-Means and CorEx). We added a column with the average DI in the revised version for better clarity. One can see that LP-Stability has on average a DI of 38.5\% while K-Means has only 1.2\% and CorEx 0.6\%, so the score of LP-Stability is in fact more than 30 times higher; we put this more precise number in the abstract. Besides, we apologize for the typo: the improvement in average ES was in fact of 25\% and not 35\%; it has been corrected in the revised version. Finally, we do not take into account the results obtained with the random clustering, as in the abstract we compare to reference clustering techniques, for which random clustering does not qualify. \subsection{ Typos in the supplementary document. e.g., '?' in section III.} Thank you for this comment; it has been addressed in the revised version.

Additional Questions:

1. Please explain how this manuscript advances this field of research and/or contributes something new to the literature.: The authors proposed a linear programming based gene clustering algorithm to identify gene signatures that associate tumor types from large-scale gene expression data.

2.
Is the manuscript technically sound? Please explain your answer under Public Comments below.: Partially

1. Which category describes this manuscript?: Research/Technology

2. How relevant is this manuscript to the readers of this periodical? Please explain your rating under Public Comments below.: Relevant

1. Are the title, abstract, and keywords appropriate? Please explain under Public Comments below.: Yes

2. Does the manuscript contain sufficient and appropriate references? Please explain under Public Comments below.: Important references are missing; more references are needed

3. Does the introduction state the objectives of the manuscript in terms that encourage the reader to read on? Please explain your answer under Public Comments below.: Could be improved

4. How would you rate the organization of the manuscript? Is it focused? Is the length appropriate for the topic? Please explain under Public Comments below.: Satisfactory

5. Please rate the readability of the manuscript. Explain your rating under Public Comments below.: Readable - but requires some effort to understand

6. Should the supplemental material be included? (Click on the Supplementary Files icon to view files): Yes, as part of the digital library for this submission if accepted

7. If yes to 6, should it be accepted: After revisions. Please include explanation under Public Comments below.

Please rate the manuscript. Please explain your answer.: Fair

\end{document}

\section{Introduction} \footnotetext{Version submitted to IEEE Transactions on Bioinformatics and Computational Biology. © 2021 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.} \IEEEPARstart{O}{mics} data analysis - including genomics, transcriptomics and metabolomics - has greatly benefited from the tremendous advances in sequencing techniques~\cite{Kurian2014}, which have substantially increased both the quality and the quantity of data. These omics techniques are pivotal to the development of personalized medicine by enabling a better understanding of fine-grained molecular mechanisms~\cite{Hanahan2011}. In oncology, these techniques provide a more comprehensive insight into the intricacy of biological processes in cancers, giving momentum to molecular-type characterization through omics or even multi-omics approaches~\cite{ramaswamy2001multiclass,Chen2017}. Such a precise and robust characterization is a highly valuable asset for tumor characterization and provides significant insight into their treatment. Genomics, probably the most prominent omics technique, refers to the study of entire genomes, contrary to genetics, which interrogates individual variants or single genes~\cite{Hasin2017}. In this direction, novel methods study specific gene variants aimed at producing robust biomarkers, which contribute both to the response of patients to treatment~\cite{wan2010,Sun2018} and to the association with complex and Mendelian diseases~\cite{Dunne2017}. However, the relatively low number of samples per tumor subtype, along with the curse of dimensionality and the lack of ground truth, affects many of these studies~\cite{Drucker2013} and may prevent any statistically meaningful causal relation discovery.
Unsupervised clustering is a very efficient technique to study large high-dimensional datasets, aimed at discovering unknown indiscernible structures and correlations~\cite{Halkidi2001}. Clustering algorithms aspire to single out a group separation of the data favoring low variation inside the groups and high variation between groups. Nevertheless, there is a large variety of clustering approaches, relying on different properties and leading to significantly different outcomes. The main challenge of clustering is the definition of a metric/similarity function depicting the notion of closeness between the objects under consideration. This includes not only the intrinsic clustering properties the algorithm seeks to optimize but also the notion of distance involved. Qualitative and quantitative evaluation is a critical step towards the effective adoption of clustering and relies on independent and reliable measures for the proper comparison of parameters and methods. Numerous existing metrics assess the quality of the clusters from a statistical point of view, such as the Silhouette Value~\cite{Kaufman1987}, Dunn's Index~\cite{Kovacs2005} or, more recently, the Diversity Method~\cite{Kingrani2017}. In addition, in the presence of annotations, the Rand Index~\cite{hubert1985} can be used. Clustering evaluation is even more challenging in the case of genomics, as the clusters should also be biologically informative. Protein-Protein Interaction (PPI) networks and the Gene Ontology (GO) have recently been introduced in this sub-domain to assess the biological soundness of the clusters through Enrichment Scores~\cite{Wagner2015,Pepke2017}. \begin{figure*}[!t] \centering \includegraphics[scale=0.4]{pipeline} \caption{\textbf{Proposed Framework.} A general overview of the different steps of our process. Our proposed framework is composed of two steps. First, a clustering algorithm, here LP-Stability, is used to generate clusters of genes having similar expression profiles.
Then, the clustering that performs best on both mathematical and biological scores is selected as a gene signature. In the second step, the generated signature is used to perform sample clustering and sample classification. The performance of this step is evaluated by analysing the distribution of the samples into the different clusters or the performance on the classification tasks; here, the targets were the tumor type and subtype characterization.} \label{fig:pipeline} \end{figure*} The use of cluster analysis on RNA-seq transcriptomes is a wide-spread technique~\cite{Cowen2017} whose main goal is to define groups of genes that have similar expression profiles, proposing compact signatures~\cite{Dunne2017}. These robust signatures are necessary to identify associations with different biological processes, such as tumor types or cancer molecular subtypes, and to highlight genes coding for proteins interacting together or participating in the same biological process~\cite{vanDam2017}. The main advantages of unsupervised clustering compared to supervised approaches - or methods guided by specific biological functions or processes - are the ability to discover unknown patterns, associations and correlations in the genome. Furthermore, unsupervised discovery offers better tractability when applied to the tremendous number of genes. In addition, studies that perform clustering relying on a priori knowledge lead to redundant signatures and loss of information~\cite{Cantini2017}. This is one of the reasons why several studies focus on statistical pattern recognition methods such as the center-based K-Means~\cite{Macqueen1967}, the model-based CorEx~\cite{VerSteeg2014} or the stability-based LP-Stability~\cite{Komodakis2009} towards the identification of meaningful and predictive groups of genes as biomarkers~\cite{Bailey2018}.
In this direction, CorEx~\cite{Pepke2017} has recently been introduced to generate gene signatures evaluated and optimized on ovarian tumors, addressing this specific tumor type with a high-dimensional gene signature composed of several hundred genes. To deal with this high dimensionality, studies propose methods to combine and prune existing signatures to obtain a unique compact and informative signature~\cite{Cantini2017,thorsson2018}. Although dimension reduction through clustering is not new~\cite{Pepke2017}, the literature still lacks a thorough, mathematically and biologically meaningful comparison of clustering methods on the same database. In many studies, a single evaluation metric is used and there is no relevant comparison with other algorithms. By ``relevant'', we mean here that the optimization of the hyperparameters of the different baseline algorithms is ensured and compared through a fair evaluation metric. Mathematical metrics, for instance, are highly dependent on the property the algorithm is optimizing and on the distance notion considered. This bias can be evaluated through, for example, random clusters using different distance notions, to offer a fair comparison between the different algorithms. Finally, a few surveys~\cite{Oyelade2016} propose a thorough comparison using several evaluation criteria, albeit reporting results shown in several other studies without actually comparing the methods on a same database with all the criteria at once. In this paper, we introduce a novel unsupervised approach that is modular, scalable and metric-free towards the definition of a predictive gene signature, while proposing a complete methodology for the comparison, analysis and evaluation of genomic signatures.
The backbone of our methodology is a powerful graph-based unsupervised clustering method, the LP-Stability algorithm~\cite{Komodakis2009}, which has been successfully applied in various fields including, recently, genomics~\cite{battistella2019}. Our approach offers: \begin{enumerate}[label=(\roman*)] \item Standardization and automatization of gene clustering evaluation for the selection of the best distance notions, metrics, algorithms and hyperparameters; \item Creation of generic, low dimensional signatures using the gene expressions of all coding genes, including comparisons to random signatures to highlight statistical superiority; \item Systematic assessment of the biological power of gene signatures by evaluating the different tumor type and subtype associations via supervised (proving tissue-specificity and predictive power) and unsupervised (proving automatic discovery and expression power) techniques. By this, we demonstrate the power of the proposed gene signature (based on $27$ genes) compared to other methods in the literature; \item Thorough biological analysis of the processes involved in the sample clusters via gene screening techniques, affirming the robustness of the obtained results. \end{enumerate} \section{Overview of the Proposed Approach} The method presented in this paper is summarized in Fig.~\ref{fig:pipeline}. To evaluate and select the best gene signature, we introduce two distinct metrics checking both mathematical and biological properties. In particular, we used the mathematical assessment metric of Dunn's Index (DI)~\cite{Kovacs2005} and the biological one of Enrichment Score in PPI~\cite{Pepke2017}, which are both standard references for the assessment of clustering although they had never been combined. Then, a low dimensional aggregated gene signature is defined by combining representative genes from each cluster.
To prove the power of the discovered biomarker, a systematic and thorough evaluation regarding its biological and clinical relevance was performed. In particular, the signature was evaluated and compared through sample clustering and sample classification. As the target of the sample clustering, we chose the different tumor types and assessed the success of the clustering through sample distribution analysis and clustering evaluation metrics such as the Rand Index and Mutual Information. In addition, we used the method from~\cite{tusher2001significance} to obtain important genes for the samples of each cluster, which were associated with their pathways using~\cite{szklarczyk2018}. Finally, the last evaluation criterion was the performance in categorizing the cancer types and subtypes through supervised machine learning techniques. Our proposed signature has been compared against both signatures designed from commonly used algorithms for gene clustering~\cite{Macqueen1967,Pepke2017} and a recently proposed prominent gene signature~\cite{thorsson2018}. \section{Discovering Correlations in Gene Expressions} \label{section:methodo} \subsection{Algorithms} We consider $n$ points $S=\{x^1,...,x^n\}$ in a space of dimension $m$, where the coordinates of each point $x^p$ are denoted $x^p=(x^p_1,...,x^p_m)$. In this study, we consider several notions of dissimilarity $d$. If we denote by $k$ the number of clusters in a clustering $C$, then the clustering is a set of clusters $C=\{C_1,...,C_k\}$ such that $\forall 1\leq i,j \leq k$, $C_i \cap C_j = \emptyset$ and $\bigcup_{1\leq i\leq k} C_i = S$. The number of points in cluster $C_i$ is denoted by $n_i$. We call centroid $\mu_i$ of cluster $C_i$ the mean of the points of the cluster. Finally, we obtain a discrete random variable $X=\{X_1,...,X_b\}$ from a point $x\in S$ by binning its values into $b$ bins. We denote by $P(X)$ the probability mass function of $X$.
Then, the Shannon entropy $H$ of variable $X$ is defined by $H(X)=-\sum_{1\leq i \leq b} P(X_i)\ln P(X_i)$. \textbf{K-Means algorithm}~\cite{Macqueen1967} is a very popular and simple algorithm, well suited to data following Gaussian distributions. First, the algorithm draws an initial random set of cluster centroids. Then, until convergence, it iteratively determines $k$ clusters by assigning the points to their closest centroid and computing the new centroids $\mu_i$. The algorithm aims to solve \begin{equation} \min_C \sum_{i=1}^k \sum_{x\in C_i} d(x, \mu_i) \; . \end{equation} Only the number of clusters $k$ has to be defined beforehand. Generally, K-Means is used with the Euclidean distance for convergence reasons. A main drawback of this technique is that the random initialization is a source of nondeterminism and may cause instability of the cluster generation across different runs. We address this issue by selecting the best clustering over multiple iterations with different initializations. \textbf{CorEx algorithm}~\cite{VerSteeg2014} is a model-based algorithm that has been applied to various fields and, especially, on genes~\cite{Pepke2017} with great success. This algorithm aims to define a set $S'$ of $k$ latent factors accounting for most of the variance of the dataset $S$. Formally, it relies on the Total Correlation of discrete random variables $X^1,...,X^p$ defined by \begin{equation} TC(X^1,...,X^p)=\sum_{1 \leq i \leq p} H(X^i) - H(X^1,...,X^p) \end{equation} and the Mutual Information of two random variables \begin{equation} MI(X^i,X^j)= \sum_{X^i_p\in X^i}\sum_{X^j_q\in X^j}P(X^i_p,X^j_q)\log\dfrac{P(X^i_p,X^j_q)}{P(X^i_p)P(X^j_q)} \end{equation} where $P(X^i_p,X^j_q)$ is the joint probability function and $P(X^i_p),P(X^j_q)$ are the marginal probability functions.
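As an illustration, these information-theoretic quantities can be estimated directly from binned expression values. The sketch below (helper names are ours, not code from the paper) computes the Shannon entropy, the Mutual Information and the Total Correlation of discretized variables:

```python
import numpy as np

def entropy(x, bins=10):
    """Shannon entropy H(X) of a variable discretized into `bins` bins."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def mutual_information(x, y, bins=10):
    """MI(X, Y) estimated from the joint histogram of two variables."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = pxy / (px * py)
    return float(np.sum(pxy[nz] * np.log(ratio[nz])))

def total_correlation(X, bins=10):
    """TC(X^1,...,X^p): sum of marginal entropies minus joint entropy.
    The joint entropy is estimated by binning each variable and counting
    joint configurations, feasible only for a small number of variables."""
    marginals = sum(entropy(v, bins) for v in X)
    codes = np.stack([np.digitize(v, np.histogram(v, bins)[1][1:-1]) for v in X])
    _, counts = np.unique(codes, axis=1, return_counts=True)
    p = counts / counts.sum()
    joint_h = float(-np.sum(p * np.log(p)))
    return marginals - joint_h
```

As a sanity check, with identical binning one expects $MI(X,X)=H(X)$ and $TC(X,X)=2H(X)-H(X)=H(X)$.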
To guarantee a reliable definition of the latent variables, the algorithm minimizes the Total Correlation $TC(S|S')$, corresponding to the additional information brought by the points of $S$ compared to the latent factors of $S'$. Then, to obtain the clustering, each point $x^p$ is allocated to the cluster of the latent factor $f$ maximizing the mutual information $MI(X^p,f)$. Similar to K-Means, the only requirement of the CorEx algorithm is the number of clusters $k$. Moreover, CorEx is the only algorithm under consideration that directly operates on the gene expression matrix to build a statistical model, without using a given distance matrix. \textbf{LP-Stability algorithm}~\cite{Komodakis2009} is based on linear programming and relies on the same definition of clusters as K-Means, \textit{i.e.}, minimizing the distance between each point of a cluster and the center of the cluster. However, the novelty and interest of this technique is that, instead of taking centroids as cluster centers, it defines stable cluster centers. Formally, we aim to optimize the following linear system \begin{align} \begin{split} PRIMAL ~\equiv ~& \min_C \sum_{p,q} d(x^p,x^q)C(p,q) \\ & s.t.~ \sum_q C(p,q)=1 \\ & ~ ~~~C(p,q) \leq C(q,q) \\ & ~ ~~~C(p,q) \geq 0 \; . \end{split} \end{align} where $C(p,q)$ indicates that $x^p$ belongs to the cluster of center $x^q$. The formula corresponds to the minimization of the distance between a point and its cluster center while ensuring that each point belongs to one and only one cluster and that centers belong to their own cluster. The determination of the stable centers relies on the following notion of stability: \begin{align*} S(q)=\inf_{s} \{ s : {}&\text{with } d(q,q)+s \text{ in place of } d(q,q),\\ &\text{PRIMAL has no optimal solution}\\ &\text{with } C(q,q)>0 \} \; . \end{align*} The stability of a point is the maximum penalty the point can receive while remaining an optimal cluster center in PRIMAL.
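For small instances, the PRIMAL problem above can be handed directly to an off-the-shelf LP solver. The toy sketch below (our own illustration, not the authors' implementation) adds a uniform penalty to the self-distance $d(q,q)$, which plays the role of the penalty vector discussed in the text, and reads the cluster centers off the diagonal of $C$:

```python
import numpy as np
from scipy.optimize import linprog

def lp_stability_primal(points, penalty):
    """Solve the PRIMAL LP: min sum d'(p,q) C(p,q) subject to coverage
    (sum_q C(p,q) = 1) and center consistency (C(p,q) <= C(q,q)),
    with d'(q,q) = d(q,q) + penalty on the diagonal."""
    n = len(points)
    d = np.abs(points[:, None] - points[None, :]).astype(float)
    d[np.diag_indices(n)] += penalty          # penalised self-distance
    c = d.ravel()                             # objective over C(p,q)

    # Equality constraints: each point is fully assigned.
    A_eq = np.zeros((n, n * n))
    for p in range(n):
        A_eq[p, p * n:(p + 1) * n] = 1.0
    b_eq = np.ones(n)

    # Inequality constraints: a point can only join an open center.
    rows = []
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            row = np.zeros(n * n)
            row[p * n + q] = 1.0              # +C(p,q)
            row[q * n + q] = -1.0             # -C(q,q)
            rows.append(row)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    C = res.x.reshape(n, n)
    centers = np.where(np.diag(C) > 1e-6)[0]  # points acting as centers
    return res, centers
```

On six 1D points forming two well-separated groups, e.g. `np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])` with `penalty=0.5`, the LP opens one center per group (the middle point of each), illustrating how the penalty trades off the number of clusters against the assignment cost.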
Besides, to better exploit particular field constraints on the points or to better tune the number of clusters, a penalty value $S_q \geq 0$ can be added to each point $q$. We then consider the penalty vector $S$ weighting the distance $d$ such that $\forall q$, $d'(q,q)=d(q,q) + S_q$. Doing so, we impose a stronger minimal stability on the cluster centers, entailing a lower number of clusters. Let us denote by $\mathcal{Q}$ the set of stable cluster centers. The algorithm solves the clustering using the DUAL problem \begin{align} \begin{split} DUAL ~\equiv ~& \max_D D(h)=\sum_{p\in \mathcal{V}} h^p \\ & s.t.~ h^p = \min_{q\in \mathcal{V}}h(p,q) \\ & ~ ~~~\sum_{p\in \mathcal{V}} h(p,q)=\sum_{p\in \mathcal{V}} d(x^p,x^q) \\ & ~ ~~~h(p,q) \geq d(x^p,x^q) \; . \end{split} \end{align} where $h(p,q)$ corresponds to the minimal pseudo-distance between $x^p$ and $x^q$, and $h^p$ to the one from $x^p$. This DUAL problem is then conditioned by considering only centers in the set of stable points $\mathcal{Q}$: \begin{equation} DUAL_\mathcal{Q} = \max DUAL ~ s.t. ~h(p,q)=d(x^p,x^q), ~\forall\{p,q\}\cap \mathcal{Q} \neq \emptyset \; . \end{equation} This method presents several advantages. It is versatile and can integrate any metric function, while it does not make prior assumptions on the number of clusters or their distribution. It defines the clustering in a global manner, seeking an automatic selection of the cluster centers. For that matter, it relies on the optimization of the set of stable centers, as well as the assignment of each observation to the most appropriate cluster, meaning the one minimizing the distance to the center. This algorithm only requires a penalty vector $S$, influencing the number of clusters. \subsection{Proximity Measures} \label{section:dist} To tackle the high dimensionality of the data, combined with a low ratio between the number of samples and the dimension of each sample, we studied several different distance notions.
In this study we considered the following distances: the Euclidean distance, the Cosine distance, Pearson's correlation, Spearman's rank correlation, Kendall's rank correlation and the Kullback-Leibler divergence. These standard metrics are further detailed in the Supplementary Materials. Depending on the type of data or algorithm used, different proximity measures may heavily impact the performance of the clustering. The different correlations cover the range $[-1,1]$; the value is positive when the observations evolve in a similar way for the compared variables and negative when they evolve in opposite ways. High absolute values indicate high correlations between the observations. On the other hand, high values in terms of distance indicate observations that are not similar in the specific feature space. In this work, we used $\sqrt{2(1 - c)}$ to convert a correlation $c$ into a distance. For simplicity, distances coming from correlations will be referred to as correlation-based distances in the rest of the paper. \subsection{Unsupervised Gene Clustering Evaluation} To evaluate the performance of the gene clustering methods, we used both mathematical and biological criteria. The quality of the results was assessed using the biological relevance information brought by the Enrichment Score, while the prominent Dunn's Index statistical method was considered for the mathematical appropriateness of the clustering. \begin{itemize} \item \textbf{Enrichment Score (ES):} Enrichment is the most commonly adopted technique to assess biological relevance in an automatic manner~\cite{Pepke2017}. We considered here the enrichment in PPI by studying the proteins corresponding to the genes. Even if, contrary to enrichment in a given biological process, PPI does not integrate specific information about predefined pathways and biological processes, it fulfills our aim of an unbiased and general metric.
Enrichment for a cluster represents the probability of obtaining the same number of interactions in a random set of genes of the same size as the evaluated cluster. In particular, a cluster is considered as enriched if the p-value is below a given threshold (abbreviated by th). The ES corresponds to the proportion of enriched clusters in the clustering. To calculate the ES, the stringdb library based on the String PPI network~\cite{szklarczyk2018} was used. \item \textbf{Dunn's Index (DI):} The Dunn's Index~\cite{Kovacs2005} studies the ratio between inter-cluster and intra-cluster variance. The former is meant to be large, as the distributions in different clusters should be different. The latter has to be small, as we want points that are in a same cluster to follow a common distribution. Formally,\\ $Dunn(\mathcal{C}) = \dfrac{\min_{1\leq i,j \leq k} \delta(C_i,C_j)}{\max_{1 \leq i \leq k}\Delta(C_i)}$ where $\delta(C_i,C_j)$ is the distance between the two closest points of the clusters $C_i$ and $C_j$, and $\Delta(C_i)$ is the diameter of the cluster, \textit{i.e.}, the distance between the two farthest points of the cluster $C_i$. This assessment score is highly sensitive to extreme, poorly formed clusters, making it ideal for our problem. \end{itemize} \section{Definition of a Low Dimensional Cancer Signature} \label{section:methodo2} \subsection{Signature Selection} \label{section:ss} The ultimate goal of our approach is to produce low dimensional signatures as a byproduct of the unsupervised clustering outcome. To this end, the signatures were produced by selecting the most representative gene per cluster for the clusterings with the highest ES and DI performance. For the LP-Stability algorithm, the selected genes were the stable centers that the algorithm relies on; for the rest of the algorithms, we chose as representatives the cluster medoids, which are the genes closest to the cluster centroid.
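The Dunn's Index and the correlation-to-distance conversion $\sqrt{2(1-c)}$ described above can be sketched as follows (an illustrative implementation with our own function names, not the paper's code):

```python
import numpy as np
from scipy.stats import spearmanr

def correlation_distance(X):
    """Pairwise distance sqrt(2(1-c)) from Spearman's rank correlation,
    computed between the rows of X (one row per gene)."""
    c, _ = spearmanr(X, axis=1)
    # Clip to guard against tiny negative values from rounding.
    return np.sqrt(np.clip(2.0 * (1.0 - c), 0.0, None))

def dunn_index(D, labels):
    """Dunn's Index from a precomputed distance matrix D: the minimum
    inter-cluster distance (closest pair across clusters) divided by the
    maximum intra-cluster diameter (farthest pair within a cluster)."""
    labels = np.asarray(labels)
    clusters = np.unique(labels)
    inter, intra = np.inf, 0.0
    for i, ci in enumerate(clusters):
        mask_i = labels == ci
        intra = max(intra, D[np.ix_(mask_i, mask_i)].max())
        for cj in clusters[i + 1:]:
            mask_j = labels == cj
            inter = min(inter, D[np.ix_(mask_i, mask_j)].min())
    return inter / intra
```

Well-separated, compact clusters yield a large DI; for instance, two 1D clusters $\{0,1\}$ and $\{10,11\}$ under the absolute-difference distance give $DI = 9/1 = 9$.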
This choice is motivated by the fact that the stable centers of the clustering obtained through LP-Stability are also medoids. In complement, once a signature was selected, a redundancy analysis was performed using the STRING tool~\cite{szklarczyk2018} to decipher any biological process that was particularly over-represented, which would suggest redundancy of the information. In addition, the Genotype-Tissue Expression (GTEx) portal~\footnote{www.gtexportal.org} was used to assess the tissue specificity of the proposed signature. A good signature should present genes with different expression profiles over the different tissues. This tool offers, for each gene, a visual representation of its expression and regulation in different tissues. It relies on the analysis of multiple human tissues from donors to identify correlations between genotype and tissue-specific gene expression levels. \subsection{Sample Clustering: Discovery Power} \label{section:DP} In order to perform sample clustering, we compared several algorithms and distances. The most meaningful results were obtained with the K-Medoids method, a variant of K-Means, combined with the Spearman's rank correlation-based distance. The relevance of the obtained results was assessed by analyzing the partition of the different tumor types in the clusters. In particular, drawing on known biological evidence, we considered as meaningful the associations of lung tumors (LUSC, LUAD), squamous tumors (LUSC, HNSC, CESC), gynecologic tumors (BRCA, OV, CESC) and smoking related tumors (LUSC, LUAD, BLCA, CESC, HNSC). We disregarded sample types in a cluster representing less than $5\%$ of the total cluster size. We considered as poorly defined a cluster presenting fewer than $50$ samples, or a distribution of the sample types in the same proportions as in the whole dataset, as this would indicate random associations.
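For concreteness, the filtering rules above can be sketched as follows; the function name and the tolerance used to decide that a cluster merely mirrors the whole-dataset distribution are our own illustrative choices:

```python
from collections import Counter

def summarize_cluster(cluster_types, global_freq,
                      min_size=50, min_prop=0.05, tol=0.05):
    """Apply the sample-cluster filtering rules: drop tumor types below
    `min_prop` of the cluster, and flag the cluster as poorly defined when
    it has fewer than `min_size` samples or when its type proportions stay
    within `tol` of the whole-dataset proportions (a random-like cluster)."""
    counts = Counter(cluster_types)
    size = sum(counts.values())
    props = {t: c / size for t, c in counts.items()}
    # Keep only tumor types above the 5% threshold.
    kept = {t: p for t, p in props.items() if p >= min_prop}
    # Does the cluster simply reproduce the global type distribution?
    mirrors = all(abs(p - global_freq.get(t, 0.0)) <= tol
                  for t, p in props.items())
    poorly_defined = size < min_size or mirrors
    return kept, poorly_defined
```

For example, a 100-sample cluster dominated by LUAD and LUSC, in a cohort where those types are rare overall, would be retained with BRCA dropped below the $5\%$ threshold, whereas a 30-sample cluster would be flagged as poorly defined.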
Gene screening analysis was also used to identify the genes that are expressed differently across the sample clusters, thus indicating the biological processes involved. For that, we used the SAM method~\cite{tusher2001significance}, which aims to identify the genes that are differentially expressed between two groups of samples. SAM assesses the significance of the variations of the gene expression using a statistical t-test, providing a significance score and a False Discovery Rate (FDR). To better assess the relevance of separating samples of the same tumor type, we studied the genes that are expressed differently for each tumor type in a cluster compared to all the other samples of the same tumor type. We thus pinpointed significant genes for each cluster and each tumor type by cluster. Once more, the method in~\cite{szklarczyk2018} has been used for assessing the biological relevance of the clusters and their association to different tumor types, by studying the biological processes involved. A well-defined sample clustering is characterized by different clusters presenting different enriched biological processes and pathways, while different tumor types in a same cluster should be enriched in the same ones. \subsection{Sample Clustering: Expression Power} To assess how well the different tumor types have been separated, we used several different metrics (formal definitions are provided in Supplementary Materials). \begin{itemize} \item \textbf{Adjusted Rand Index (ARI)} is a similarity measure between a clustering $C'$ and a ground truth $C$. ARI is based on the proportion of pairs of elements that are in different clusters in both $C$ and $C'$ (denoted $a$) or in a same cluster in both $C$ and $C'$ (denoted $b$), adjusted for chance. \item \textbf{Normalized Mutual Information (NMI)} is a normalized measure of the mutual dependence between a clustering and the reference group. \item \textbf{Homogeneity:} Consider a clustering $C'$ and a ground truth $C$.
It rewards clusters of $C'$ whose elements all belong to a same class of $C$. \item \textbf{Completeness:} Consider a clustering $C'$ and a ground truth $C$. This counterpart of homogeneity rewards classes of $C$ whose elements are all assigned to a same cluster of $C'$. \item \textbf{Fowlkes-Mallow Score (FMS):} It corresponds to the geometric mean of the pairwise precision and recall. \end{itemize} \subsection{Supervised Tumor Types/SubTypes Categorization} The provided signatures were further assessed in a supervised setting in order to highlight their tissue specificity properties. The supervised framework for tumor types and subtypes categorization was adapted from~\cite{covid1}. This classification pipeline relies on an ensemble of machine learning classifiers, exploring the ones with strong generalisation power. The classifiers performing best in terms of balanced accuracy and generalisation are combined through a probabilistic consensus schema to provide the appropriate label. To evaluate the reported performance, we relied on classic machine learning metrics, namely balanced accuracy, weighted precision, weighted specificity and weighted sensitivity. The use of weighted metrics instead of the non-weighted ones is required here as we are considering a multi-class classification task with very unbalanced classes. The weighted scores (WS) were defined as \begin{equation} WS = \frac{1}{N}\sum_l N_l S_l \end{equation} where $N$ corresponds to the total number of samples, $N_l$ the number of samples with label $l$ and $S_l$ the non-weighted score in one-vs-rest classification for the class $l$. \section{Implementation Details} \label{section:ID} The parameters of each algorithm for the gene clustering were obtained using grid search.
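The weighted scores $WS$ defined above amount to a frequency-weighted average of per-class one-vs-rest scores; a minimal sketch (class names and score values are ours, for illustration only):

```python
def weighted_score(per_class_scores, class_counts):
    """WS = (1/N) * sum_l N_l * S_l: weight each one-vs-rest score
    by its class frequency, to handle unbalanced classes."""
    total = sum(class_counts.values())
    return sum(class_counts[l] * per_class_scores[l]
               for l in class_counts) / total

# toy example with three unbalanced classes
counts = {"BRCA": 80, "READ": 15, "GBM": 5}
scores = {"BRCA": 0.9, "READ": 0.6, "GBM": 0.4}
ws = weighted_score(scores, counts)  # (80*0.9 + 15*0.6 + 5*0.4) / 100
```

The majority class dominates the weighted score, which is the intended behavior when class proportions must be reflected in the aggregate metric.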
In order to benchmark the behavior of each algorithm on different numbers of clusters, we evaluated their performance for the following numbers of clusters: from $2$ to $10$ in steps of $1$; $15$, $20$ and $25$; and from $30$ to $100$ in steps of $10$ for the Random Clustering and K-Means algorithms, and in steps of $25$ for the CorEx algorithm because of its computational complexity. LP-Stability automatically determines the number of clusters. In order to create meaningful comparisons, we adjusted the penalty vector $S$ so as to obtain approximately the same number of clusters as with the rest of the algorithms. For comparison purposes, we used the same penalty for all the genes; for the LP-Stability algorithm, however, the penalty value could be adjusted and customized depending on the importance of specific genes. For the ES we reported the behavior of the algorithms with different threshold values, \textit{i.e.} $0.005$, $0.025$, $0.05$, $0.075$ and $0.1$. Furthermore, in reporting the DI value, each method has been evaluated with the same proximity measure it relies on. For K-Means, which is sensitive to initialization, we performed $100$ runs for each parameter setting and selected the best clustering based on DI only, to cope with the computational cost of the ES. This iterative process increases the computational time of the algorithm, but yields clusters with better statistical significance and more stable scores. Similarly, we performed $100$ repetitions of random clustering; having observed rather similar results across repetitions, we selected the clustering with the best DI score and report its results. For the sample clustering, we considered $10$ clusters corresponding to the actual $10$ tumor types. For the gene screening, we selected the most significant genes, namely those with a significance score of $7$, which corresponds to an FDR q-value close to zero in most cases, while for the biological processes we considered only the $10$ most enriched processes per screening.
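STRING's PPI enrichment statistic is more involved than we can reproduce here; as a simplified illustration of the idea behind the ES and its threshold, a hypergeometric over-representation p-value and the resulting proportion of enriched clusters can be sketched as follows (the null model and the function names are our assumptions, not the STRING implementation):

```python
from math import comb

def enrichment_p_value(n_hits, cluster_size, n_annotated, n_genes):
    """P(X >= n_hits) under a hypergeometric null: the probability that
    a random gene set of the same size contains at least as many
    annotated ('interacting') genes as the evaluated cluster."""
    p = 0.0
    for k in range(n_hits, min(cluster_size, n_annotated) + 1):
        p += (comb(n_annotated, k)
              * comb(n_genes - n_annotated, cluster_size - k)
              / comb(n_genes, cluster_size))
    return p

def enrichment_score(p_values, th=0.005):
    """ES: percentage of clusters whose p-value falls below threshold th."""
    return 100.0 * sum(p < th for p in p_values) / len(p_values)
```

For instance, a $5$-gene cluster drawn entirely from a $10$-gene annotated set within a $100$-gene background is enriched at any of the thresholds used above, while a cluster with no annotated gene trivially gets a p-value of $1$.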
\begin{table}[!t] \caption{\textbf{Description of the dataset used in this study.} The different tumors and tumor types together with the corresponding number of samples are summarised. Urothelial Bladder Carcinoma (BLCA), Breast Invasive Carcinoma (BRCA), Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma (CESC), Glioblastoma Multiforme (GBM), Head and Neck Squamous Cell Carcinoma (HNSC), Liver Hepatocellular Carcinoma (LIHC), Rectum Adenocarcinoma (READ), Lung adenocarcinoma (LUAD), Lung Squamous Cell Carcinoma (LUSC) and Ovarian Cancer (OV).} \label{tab:tumors-locations} \centering \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Tumor Type} & Clustering & \multicolumn{3}{c|}{Classification} \\ \cline{2-5} & \#Samples & \#Samples Types & \multicolumn{2}{c|}{\#Samples Subtypes:} \\ \hline BLCA & 427 & 129 & \multicolumn{2}{c|}{——} \\ \hline \multirow{5}{*}{BRCA} & \multirow{5}{*}{1212} & \multirow{5}{*}{1223} & Normal: & 144 \\ \cline{4-5} & & & LumA: & 582 \\ \cline{4-5} & & & LumB: & 220 \\ \cline{4-5} & & & Her2: & 83 \\ \cline{4-5} & & & Basal: & 194 \\ \hline CESC & 309 & —— & \multicolumn{2}{c|}{——} \\ \hline GBM & 171 & 827 & \multicolumn{2}{c|}{——} \\ \hline \multirow{4}{*}{HNSC} & \multirow{4}{*}{566} & \multirow{4}{*}{279} & Mesenchymal: & 75 \\ \cline{4-5} & & & Basal: & 87 \\ \cline{4-5} & & & Atypical: & 68 \\ \cline{4-5} & & & Classical: & 49 \\ \hline \multirow{3}{*}{LIHC} & \multirow{3}{*}{423} & \multirow{3}{*}{183} & iCluster1: & 65 \\ \cline{4-5} & & & iCluster2: & 55 \\ \cline{4-5} & & & iCluster3: & 63 \\ \hline LUAD & 576 & 230 & \multicolumn{2}{c|}{——} \\ \hline LUSC & 552 & 178 & \multicolumn{2}{c|}{——} \\ \hline \multirow{4}{*}{OV} & \multirow{4}{*}{307} & \multirow{4}{*}{489} & Proliferative: & 138 \\ \cline{4-5} & & & Mesenchymal: & 109 \\ \cline{4-5} & & & Differentiated: & 135 \\ \cline{4-5} & & & Immunoreactive: & 107 \\ \hline \multirow{2}{*}{READ} & \multirow{2}{*}{72} & \multirow{2}{*}{111} & CIN: & 102 \\ 
\cline{4-5} & & & GS: & 9 \\ \hline \end{tabular} \end{table} \begin{figure}[!b] \centering \scalebox{0.45}{ \begin{tikzpicture} \begin{axis}[ title={\textbf{Lp-Stability}}, xlabel=\textsc{Number of Clusters}, ylabel= \textsc{Enrichment Score}, xmin=0,xmax=100,ymin=92,ymax=100, legend style={at={(0.03,0.02)},anchor=south west} ] \addplot plot coordinates { (22, 100) (27, 100) (27, 100) (29, 100) (31, 100) (38, 100) (43, 97.6744186) (44, 97.6744186) (50, 98) (60, 98.33333333) (72, 98.61111111) (98, 95.91836735) }; \addplot plot coordinates { (22, 100) (27, 100) (27, 100) (29, 100) (31, 100) (38, 100) (43, 97.6744186) (44, 97.6744186) (50, 98) (60, 98.33333333) (72, 98.61111111) (98, 96.43877551) }; \addplot plot coordinates { (22, 100) (27, 100) (27, 100) (29, 100) (31, 100) (38, 100) (43, 97.6744186) (44, 97.6744186) (50, 98) (60, 98.33333333) (72, 98.61111111) (98, 96.93877551) }; \addplot plot coordinates { (22, 100) (27, 100) (27, 100) (29, 100) (31, 100) (38, 100) (43, 97.6744186) (44, 97.6744186) (50, 98) (60, 98.33333333) (72, 98.61111111) (98, 96.53877551) }; \addplot plot coordinates { (22, 100) (27, 100) (27, 100) (29, 100) (31, 100) (38, 100) (43, 97.6744186) (44, 97.6744186) (50, 98) (60, 98.33333333) (72, 98.61111111) (98, 96.33877551) }; \legend{$th=0.005$\\$th= 0.025$\\$th=0.05$\\$th=0.075$\\$th=0.10$\\} \end{axis} \end{tikzpicture}} \scalebox{0.45}{ \begin{tikzpicture} \begin{axis}[ title={\textbf{CorEx}}, xlabel=\textsc{Number of Clusters}, ylabel= \textsc{Enrichment Score}, xmin=0,xmax=100,ymin=50,ymax=100, legend style={anchor=north east} ] \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 100) (7, 100) (8, 100) (9, 100) (10, 90) (15, 80) (20, 80) (25, 72) (50, 52) (75, 57.703) (100, 53.6) }; \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 100) (7, 100) (8, 100) (9, 100) (10, 90) (15, 80) (20, 85) (25, 72) (50, 52) (75, 58.1081) (100, 58.4) }; \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 
100) (7, 100) (8, 100) (9, 100) (10, 100) (15, 80) (20, 85) (25, 72) (50, 52) (75, 58.1081) (100,60) }; \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 100) (7, 100) (8, 100) (9, 100) (10, 100) (15, 80) (20, 85) (25, 72) (50, 56) (75, 58.1081) (100,62.4) }; \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 100) (7, 100) (8, 100) (9, 100) (10, 100) (15, 86.66) (20, 85) (25, 72) (50, 57.99) (75, 62.16) (100, 67.2) }; \legend{$th=0.005$\\$th= 0.025$\\$th=0.05$\\$th=0.075$\\$th=0.10$\\} \end{axis} \end{tikzpicture}} \vspace{5mm} \scalebox{0.45}{ \begin{tikzpicture} \begin{axis}[ title={\textbf{K-Means}}, xlabel=\textsc{Number of Clusters}, ylabel= \textsc{Enrichment Score}, xmin=0,xmax=100,ymin=20,ymax=100, legend style={anchor=north east} ] \addplot plot coordinates { (2, 50) (3, 66.6667) (4, 75) (5, 80) (6, 83.333) (7, 85.71) (8, 87.5) (9, 77.7778) (10, 70) (15, 53.333) (20, 35) (25, 36) (30, 30) (35, 34.29) (40, 32.5) (50, 26) (60, 23.333) (70, 24.28) (80, 22.5) (90, 21.111) (100, 23) }; \addplot plot coordinates { (2, 50) (3, 66.6667) (4, 75) (5, 80) (6, 83.333) (7, 85.71) (8, 87.5) (9, 77.7778) (10, 70) (15, 60) (20, 35) (25, 36) (30, 33.333) (35, 37.14) (40, 35) (50, 26) (60, 25) (70, 24.28) (80, 25) (90, 26.66677) (100, 23) }; \addplot plot coordinates { (2, 50) (3, 66.6667) (4, 75) (5, 80) (6, 83.333) (7, 85.71) (8, 87.5) (9, 77.7778) (10, 70) (15, 60) (20, 40) (25, 40) (30, 33.333) (35, 37.14) (40, 37.5) (50, 28) (60, 25) (70, 25.714) (80, 28.75) (90, 27.7778) (100, 24) }; \addplot plot coordinates { (2, 50) (3, 66.6667) (4, 75) (5, 80) (6, 83.333) (7, 85.71) (8, 87.5) (9, 77.7778) (10, 70) (15, 60) (20, 45) (25, 40) (30, 33.333) (35, 37.14) (40, 37.5) (50, 30) (60, 25) (70, 25.714) (80, 28.75) (90, 27.7778) (100, 24) }; \addplot plot coordinates { (2, 50) (3, 66.6667) (4, 75) (5, 80) (6, 83.333) (7, 85.71) (8, 87.5) (9, 77.7778) (10, 70) (15, 60) (20, 45) (25, 40) (30, 36.6667) (35, 37.14) (40, 40) (50, 32) (60, 26.666) 
(70, 28.571) (80, 31.25) (90, 30) (100, 27) }; \legend{$th=0.005$\\$th= 0.025$\\$th=0.05$\\$th=0.075$\\$th=0.10$\\} \end{axis} \end{tikzpicture}} \scalebox{0.45}{ \begin{tikzpicture} \begin{axis}[ title={\textbf{Random}}, xlabel=\textsc{Number of Clusters}, ylabel= \textsc{Enrichment Score}, xmin=0,xmax=100,ymin=0,ymax=100, legend style={anchor=north east} ] \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 100) (7, 100) (8, 100) (9, 100) (10, 100) (15, 100) (20, 100) (25, 100) (30, 86.666) (40, 37.5) (50, 36) (60, 11.6667) (70, 10) (80, 7.5) (90, 2.22) (100, 5) }; \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 100) (7, 100) (8, 100) (9, 100) (10, 100) (15, 100) (20, 100) (25, 100) (30, 93.333) (40, 67.5) (50, 48) (60, 40) (70, 25.714) (80, 20) (90, 14.444) (100, 10) }; \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 100) (7, 100) (8, 100) (9, 100) (10, 100) (15, 100) (20, 100) (25, 100) (30, 93.333) (40, 82.5) (50, 64) (60, 50) (70, 31.428) (80, 36.25) (90, 24.444) (100, 15) }; \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 100) (7, 100) (8, 100) (9, 100) (10, 100) (15, 100) (20, 100) (25, 100) (30, 96.6667) (40, 85) (50, 64) (60, 56.6667) (70, 41.429) (80, 38.75) (90, 30) (100, 23) }; \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 100) (7, 100) (8, 100) (9, 100) (10, 100) (15, 100) (20, 100) (25, 100) (30, 96.6667) (40, 85) (50, 64) (60, 56.667) (70, 41.4286) (80, 38.75) (90, 30) (100, 23) }; \legend{$th=0.005$\\$th= 0.025$\\$th=0.05$\\$th=0.075$\\$th=0.10$\\} \end{axis} \end{tikzpicture}} \caption{\textbf{Evaluation of the clustering performance for different Enrichment Threshold values.} LP-Stability (upper left), CorEx (upper right), K-Means (lower left), Random (lower right). The figure presents the percentage of the enriched clusters for the threshold values of $0.005$, $0.025$, $0.05$, $0.075$, $0.1$ and using Kendall's correlation-based distance. 
The largest differences across enrichment thresholds are observed for Random Clustering when the number of clusters is relatively high. For the rest of the algorithms, and especially LP-Stability, the different thresholds only slightly impact the reported results.} \label{fig:gene_enrichment} \end{figure} \section{Dataset} In this study, we based our experiments on The Cancer Genome Atlas (TCGA) dataset~\cite{grossman2016}. TCGA is a comprehensive dataset including several data types such as DNA copy number, DNA methylation, mRNA expression, miRNA expression, protein expression, and somatic point mutation. We focused our study on tumor types relevant for radiotherapy and/or immunotherapy. For the gene clustering part, our dataset consists of \textbf{4615} samples (Table \ref{tab:tumors-locations}, second column). In particular, we investigated the following tumor types: Urothelial Bladder Carcinoma (BLCA), Breast Invasive Carcinoma (BRCA), Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma (CESC), Glioblastoma Multiforme (GBM), Head and Neck Squamous Cell Carcinoma (HNSC), Liver Hepatocellular Carcinoma (LIHC), Rectum Adenocarcinoma (READ), Lung Adenocarcinoma (LUAD), Lung Squamous Cell Carcinoma (LUSC) and Ovarian Cancer (OV). For each sample, we had the RNA-seq reads of \textbf{20 365} genes, processed and normalized using RNA-Seq by Expectation-Maximization (RSEM)~\cite{Li2011}. Several articles, such as~\cite{Salem2017}, address the challenging and important task of generating biomarkers for distinguishing tumor and subtumor types. In this study, we also focus on this task, basing our experiments on the cohort presented in~\cite{thorsson2018} and selecting samples from the $10$ locations used for the gene signature. This cohort consists of $3653$ samples (Table~\ref{tab:tumors-locations}, third and fourth columns). For the tumor subtype characterisation, we focused on subtypes that had more than $50\times n\_subtype$ samples.
At the end, $5$ different tumor types, namely BRCA, HNSC, LIHC, READ and OV, have been used for subtype classification. \section{Results and Discussion} This study has been designed around three pivotal, complementary aspects. The first relates to the gene clustering performance, assessing the definition of the signature with both a mathematical (DI) and a biological (ES) metric (section~\ref{section:res_gene_clust}). The second evaluates the ability of the signature to relevantly separate the different tumor samples in an unbiased manner, in particular through sample clustering (section~\ref{section:unsup}). The third characterizes the tissue specificity of the signature through classification tasks on tumor types and subtypes (section~\ref{section:classification}). To put the obtained results in context, comparisons with reference clustering methods and gene signatures are performed throughout the different evaluations. A global comparison with all references over all metrics is provided in section~\ref{section:compa}. \subsection{Results on Clustering Gene Data} \label{section:res_gene_clust} The obtained clusters were evaluated using both mathematical and biological evaluation criteria. Starting with the biological criteria, Fig.~\ref{fig:gene_enrichment} presents a comparison of the different ES per algorithm for different threshold ($th$) values. We observed that for the different clustering methods the threshold does not significantly change the behavior of the ES, indicating a strong statistical significance for the clusters. This is not the case for Random Clustering, for which, above $30$ clusters, one can observe an important disparity in ES across the different $th$ values. For the rest of the study, we will use $th=0.005$. \begin{figure*}[t!]
\centering \scalebox{0.58}{ \begin{tikzpicture} \begin{axis}[ title={\textbf{Random: Dunn's Index}}, xlabel=\textsc{Number of Clusters}, ylabel= \textsc{Dunn's Index}, xmin=0,xmax=100,ymin=0,ymax=50, legend style={anchor=north east} ] \addplot plot coordinates { (2, 0.000986) (3, 0.000986) (4, 0.000986) (5, 0.000986) (6, 0.000986) (7, 0.000986) (8, 0.000986) (9, 0.000986) (10, 0.000986) (15, 0.000986) (20, 0.000986) (25, 0.000986) (30, 0.000986) (40, 0.000986) (50, 0.000986) (60, 0.000986) (70, 0.000986) (80, 0.000986) (90, 0.000986) (100, 0.000986) }; \addplot plot coordinates { (2, 1.01) (3, 1.01) (4, 1.01) (5, 1.01) (6, 1.01) (7, 1.01) (8, 1.01) (9, 1.01) (10, 1.01) (15, 1.01) (20, 1.01) (25, 1.01) (30, 1.01) (40, 1.01) (50, 1.01) (60, 1.01) (70, 1.01) (80, 1.01) (90, 1.01) (100, 1.01) }; \addplot plot coordinates { (2, 20.2) (3, 16.7) (4, 13.5) (5, 13.5) (6, 13.5) (7, 13.5) (8, 16.7) (9, 13.51) (10, 13.51) (15, 13.51) (20, 13.51) (25, 13.51) (30, 13.51) (40, 13.51) (50, 13.51) (60, 13.51) (70, 13.51) (80, 13.51) (90, 13.51) (100, 13.51) }; \addplot plot coordinates { (2, 36.4) (3, 36.4) (4, 33.2) (5, 32.6) (6, 13.8) (7, 13.8) (8, 32.6) (9, 32.6) (10, 13.8) (15, 13.8) (20, 13.8) (25, 13.8) (30, 13.8) (40, 13.8) (50, 13.8) (60, 13.8) (70, 13.8) (80, 13.8) (90, 13.8) (100, 13.8) }; \addplot plot coordinates { (2, 0.000986) (3, 0.000986) (4, 0.000986) (5, 0.000986) (6, 0.000986) (7, 0.000986) (8, 0.000986) (9, 0.000986) (10, 0.000986) (15, 0.000986) (20, 0.000986) (25, 0.000986) (30, 0.000986) (40, 0.000986) (50, 0.000986) (60, 0.000986) (70, 0.000986) (80, 0.000986) (90, 0.000986) (100, 0.000986) }; \addplot plot coordinates { (2, 0.000986) (3, 0.000986) (4, 0.000986) (5, 0.000986) (6, 0.000986) (7, 0.000986) (8, 0.000986) (9, 0.000986) (10, 0.000986) (15, 0.000986) (20, 0.000986) (25, 0.000986) (30, 0.000986) (40, 0.000986) (50, 0.000986) (60, 0.000986) (70, 0.000986) (80, 0.000986) (90, 0.000986) (100, 0.000986) }; \end{axis} \end{tikzpicture}} \scalebox{0.58}{ 
\begin{tikzpicture} \begin{axis}[ title={\textbf{LP-Stability: Dunn's Index}}, xlabel=\textsc{Number of Clusters}, ylabel= \textsc{Dunn's Index}, xmin=0,xmax=100,ymin=0,ymax=50, legend style={anchor=north west} ] \addplot plot coordinates { (7, 0.000986) (7, 0.000986) (10, 0.000986) (16, 0.000986) (24, 0.000986) (35, 0.000986) }; \addplot plot coordinates { (24, 1.37) (34, 5.32) (34, 5.32) (39, 5.32) (39, 5.32) (47, 5.32) (50, 5.32) (52, 5.32) (59, 5.32) (67, 5.32) (80, 5.32) (89, 5.32) (110, 5.32) }; \addplot plot coordinates { (31, 20.3) (33, 20.3) (34, 20.3) (41, 20.3) (43, 20.3) (46, 20.3) (52, 20.3) (60, 20.3) (66, 20.3) (82, 20.3) (98, 19.5) }; \addplot plot coordinates { (22, 40.3) (27, 40.6) (27, 40.6) (29, 40.6) (31, 39.9) (38, 36.8) (43, 37.2) (44, 37.2) (50, 37.5) (60, 37.5) (72, 37.6) (98, 36.8) }; \addplot plot coordinates { (15, 5) (35, 1) (36, 1) (36, 1) (42, 1.3) (57, 2) (57, 2) (59, 2) (60, 2) (75, 2) }; \addplot plot coordinates { (15, 4.72) (35, 1) (37, 1) (37, 1) (38, 1) (40, 1) (53, 1) (55, 1) (56, 1) (64, 1) (66, 1) (82, 1) (97, 1) }; \end{axis} \end{tikzpicture}} \scalebox{0.58}{ \begin{tikzpicture} \begin{axis}[ title={\textbf{LP-Stability: Enrichment Score}}, xlabel=\textsc{Number of Clusters}, ylabel= \textsc{Enrichment Score}, xmin=0,xmax=100,ymin=0,ymax=105, legend style={at={(1.05,0.8)},anchor=north west} ] \addplot plot coordinates { (7, 85.7) (7, 100) (10, 90) (16, 93.7) (24, 87.5) (35, 26) }; \addplot plot coordinates { (24, 100) (34, 100) (34, 100) (39, 100) (39, 100) (47, 100) (50, 100) (52, 100) (59, 100) (67, 98.5) (80, 96.25) (89, 95.5) (110, 94.54) }; \addplot plot coordinates { (31, 100) (33, 100) (34, 97) (41, 97.56) (43, 97.67) (46, 97.82) (52, 98) (60, 98.33) (66, 98.48) (82, 97.56) (98, 96) }; \addplot plot coordinates { (22, 100) (27, 100) (27, 100) (29, 100) (31, 100) (38, 100) (43, 97.7) (44, 97.72) (50, 98) (60, 98.3) (72, 98.67) (98, 96) }; \addplot plot coordinates { (15, 5) (35, 1) (36, 1) (36, 1) (42, 1.3) (57, 2) (57,
2) (59, 2) (60, 2) (75, 2) }; \addplot plot coordinates { (15, 4.72) (35, 1) (37, 1) (37, 1) (38, 1) (40, 1) (53, 1) (55, 1) (56, 1) (64, 1) (66, 1) (82, 1) (97, 1) }; \legend{$Kullback-Leibler$\\$Pearson's$\\$Spearman's$\\$Kendall's$\\$Euclidean$\\$Cosine$\\} \end{axis} \end{tikzpicture}} \caption{\textbf{Evaluation of the clustering performance for different distances.} The performance of the different distances is presented for both Random Clustering (left), in terms of Dunn's Index, and LP-Stability (middle and right), in terms of Dunn's Index and Enrichment Score. Only DI results are presented for Random since the ES of a given clustering is not influenced by the distance used. Both ES and DI are presented in percentages as functions of the number of clusters. The figure highlights the superiority of the correlation-based distances, in particular Kendall's, on both the mathematical and the biological aspects.} \label{fig:gene_distances} \end{figure*} To select the best distance per method we used the DI metric. In Fig.~\ref{fig:gene_distances} one can observe the influence of the distance with respect to the number of clusters for the Random and LP-Stability methods. Comparing with random clustering, one can observe the bias that each distance introduces in the DI score. In particular, with correlation-based distances the reported DI scores are on average $10$ times higher. Thus, to account for this bias, our comparisons refer to the difference in DI score between a clustering and the corresponding random clustering for the same number of clusters and distance. Based on our experiments, we also noticed that the choice of distance greatly affects the performance of the clustering algorithm, with the correlation-based distances (especially Kendall's correlation) reporting in general higher performances. To ensure the biological meaning of the clusters, we also report the performance of the different distances for ES.
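The correlation-based distances compared here all take the form $d = 1 - \rho$ for a rank correlation $\rho$; for Kendall's correlation, which performed best in our experiments, a minimal no-ties sketch (the helper name is ours) is:

```python
def kendall_distance(x, y):
    """1 - Kendall's tau (no-ties version): tau is the normalized
    difference between concordant and discordant pairs."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            # +1 for a concordant pair, -1 for a discordant one
            s += 1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
    return 1.0 - 2.0 * s / (n * (n - 1))
```

Identical rankings give a distance of $0$, fully reversed rankings give $2$, and a single swapped pair among four elements gives $1/3$.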
Once again, the superiority of correlation-based distances, both in terms of performance and stability, is indicated. Notably, only the Euclidean distance fails to reach the maximal value of $100\%$. This is due to the unbalanced clusters that the Euclidean distance favors, leading to very small clusters that are less likely to be enriched. For the rest of the paper we selected Kendall's correlation-based distance when reporting LP-Stability and Random Clustering performances. In Table~\ref{tab:bests}, we summarize the performance of LP-Stability in comparison to the other algorithms based on both ES and DI scores, together with the reported number of clusters. Additional information about the average Enrichment and the average computational time per algorithm is also provided in the table. The best DI is achieved by LP-Stability with Kendall's correlation-based distance. Moreover, even though almost all the methods, except K-Means, reached an Enrichment Score of $100\%$, LP-Stability still reports the highest average Enrichment, with $96\%$, while CorEx reaches only $71\%$. Another interesting point from this analysis is the indication of the optimal number of clusters per algorithm. Only LP-Stability reports its best value with more than $25$ clusters, while the rest of the algorithms reach their best performance with fewer than $7$ clusters, or even only $2$ clusters if we consider DI alone. This might seem to be an argument in favor of the other algorithms, as they are able to define a more compact signature. However, such a low number of clusters indicates a failure to characterize any clustering structure, as it favors a disposition where genes are grouped altogether. This is also indicated by the low average ES and DI scores. A computational time study was presented in~\cite{battistella2019} and can be found in Supplementary Material.
\begin{table*}[!t] \caption{Comparison of the different evaluated algorithms in terms of PPI Enrichment Score (ES) with a threshold of $0.005$, Dunn's Index (DI), Average ES and computational time. The LP-Stability algorithm outperforms the rest, reporting the highest DI and Average ES scores and the lowest computational time.} \label{tab:bests} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Best ES} & \multicolumn{3}{c|}{Best DI} & \multirow{2}{*}{Average ES (\%)} & \multirow{2}{*}{Average DI (\%)} & \multirow{2}{*}{Time} \\ \cline{2-7} & ES (\%) & DI (\%) & Clusters & ES (\%) & DI (\%) & Clusters & & & \\ \hline Random & 100 & 36 & 2 & 100 & 36 & 2 & 54 & 19.8 & - \\ \hline K-Means (Euclidean) & 85.7 & 2.5 & 7 & 50 & 15.6 & 5 & 37 & 1.2 & 3h \\ \hline CorEx (Total Correlation) & 100 & 2.4 & 5 & 100 & 2.4 & 5 & 71 & 0.6 & \textgreater{}5 days \\ \hline LP-Stability (Kendall’s) & 100 & 40.6 & 27 & 100 & 40.6 & 27 & 96 & 38.5 & 1.5h \\ \hline \end{tabular} \end{table*} A thorough comparison of the different algorithms for varying numbers of clusters is presented in Fig.~\ref{fig:graph_comparison}. For both DI and ES, the superiority of the proposed LP-Stability over the other algorithms can be observed, both in terms of performance and of stability across the number of clusters. The reported results indicate that the proposed method can generate clusters that are both mathematically and biologically meaningful. Moreover, one can observe that for Random Clustering the reported enrichment is very high but drops dramatically beyond $30$ clusters, while the DI remains very low in all cases. This highlights the need to study both the mathematical performance and the stability of the biological score, as ES alone would not give significant results.
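This contrast between a high ES and a low DI for random partitions can be reproduced on a toy example with the Dunn's Index as defined in the metrics section (pure-Python sketch; the data and names are ours):

```python
def dunn_index(points, labels, dist):
    """Dunn's Index: smallest inter-cluster point distance divided by
    the largest intra-cluster diameter."""
    ks = sorted(set(labels))
    groups = {c: [p for p, l in zip(points, labels) if l == c] for c in ks}
    # separation: distance between the two closest points of any cluster pair
    sep = min(dist(a, b)
              for i, ci in enumerate(ks) for cj in ks[i + 1:]
              for a in groups[ci] for b in groups[cj])
    # diameter: distance between the two farthest points within a cluster
    diam = max(dist(a, b) for c in ks for a in groups[c] for b in groups[c])
    return sep / diam

euclid = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

points = [(0, 0), (0, 1), (10, 10), (10, 11)]
structured = [0, 0, 1, 1]  # respects the two natural groups
intermixed = [0, 1, 0, 1]  # a random-like partition mixing them
```

The structured partition scores an order of magnitude higher than the intermixed one, which is why DI exposes random clusterings that an enrichment score alone might not.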
\begin{figure*}[!t] \centering \scalebox{0.8}{ \begin{tikzpicture} \begin{axis}[ title={\textbf{Enrichment Score}}, xlabel=\textsc{Number of Clusters}, ylabel= \textsc{Enrichment Score}, xmin=0,xmax=100,ymin=0,ymax=100, legend style={at={(0.03,0.02)},anchor=south west} ] \addplot plot coordinates { (22, 100) (27, 100) (27, 100) (29, 100) (31, 100) (38, 100) (43, 97.6744186) (44, 97.6744186) (50, 98) (60, 98.33333333) (72, 98.61111111) (98, 95.91836735) }; \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 100) (7, 100) (8, 100) (9, 100) (10, 90) (15, 80) (20, 80) (25, 72) (50, 52) (75, 57.703) (100, 53.6) }; \addplot plot coordinates { (2, 50) (3, 66.6667) (4, 75) (5, 80) (6, 83.333) (7, 85.71) (8, 87.5) (9, 77.7778) (10, 70) (15, 53.333) (20, 35) (25, 36) (30, 30) (35, 34.29) (40, 32.5) (50, 26) (60, 23.333) (70, 24.28) (80, 22.5) (90, 21.111) (100, 23) }; \addplot plot coordinates { (2, 100) (3, 100) (4, 100) (5, 100) (6, 100) (7, 100) (8, 100) (9, 100) (10, 100) (15, 100) (20, 100) (25, 100) (30, 86.666) (40, 37.5) (50, 36) (60, 11.6667) (70, 10) (80, 7.5) (90, 2.22) (100, 5) }; \end{axis} \end{tikzpicture}} \scalebox{0.8}{ \begin{tikzpicture} \begin{axis}[ title={\textbf{Dunn's Index}}, xlabel=\textsc{Number of Clusters}, ylabel= \textsc{Dunn's Index}, xmin=0,xmax=100,ymin=0,ymax=50, legend style={at={(1.1,0.7)},anchor=north west} ] \addplot plot coordinates { (22, 40.3) (27, 40.6) (27, 40.6) (29, 40.6) (31, 39.9) (38, 36.8) (43, 37.2) (44, 37.2) (50, 37.5) (60, 37.5) (72, 37.6) (98, 36.8) }; \addplot plot coordinates { (2, 1.3) (3, 1.4) (4, 2.4) (5, 2.4) (6, 0.7) (7, 0.8) (8, 0.2) (9, 0.3) (10, 0.05) (15, 0.02) (20, 0.02) (25, 0.06) (50, 0.005) (75, 0.003) (100, 0.003) }; \addplot plot coordinates { (2, 15.6) (3, 3.29) (4, 2.54) (5, 2.90) (6, 2.55) (7, 1.89) (8, 1.59) (9, 1.59) (10, 1.25) (15, 0.83) (20, 0.834) (25, 0.489) (30, 0.516) (35, 0.555) (40, 0.507) (50, 0.577) (60, 0.461) (70, 0.495) (80, 0.485) (90, 0.48) (100, 0.47) }; \addplot 
plot coordinates { (2, 36.4) (3, 36.4) (4, 33.2) (5, 32.6) (6, 13.8) (7, 13.8) (8, 32.6) (9, 32.6) (10, 13.8) (15, 13.8) (20, 13.8) (25, 13.8) (30, 13.8) (40, 13.8) (50, 13.8) (60, 13.8) (70, 13.8) (80, 13.8) (90, 13.8) (100, 13.8) }; \legend{$LP-Stability$\\$CorEx$\\$K-Means$\\$Random$\\} \end{axis} \end{tikzpicture}} \caption{\textbf{Evaluation of the different clustering algorithms.} For each evaluated algorithm, the ES and the DI are presented as functions of the number of clusters, using Kendall's correlation-based distance. For both metrics, LP-Stability reports the highest and most stable values. Moreover, the other algorithms tend to report their highest scores for a very small number of clusters (often $2$), indicating their failure to discover clustering structures.} \label{fig:graph_comparison} \end{figure*} \subsection{Unsupervised Signature Assessment} \label{section:unsup} \subsubsection{Signature Selection} The signature was selected using the method detailed in section~\ref{section:ss}, on the clustering presenting the highest DI among the clusterings with the best ES. However, due to the relatively low number of genes in the signatures based on K-Means, CorEx or Random Clustering, the sample clusterings obtained with those signatures gave quite irrelevant, intermixed tumor types (Fig.~1 in Supplementary). To deal with this, and for comparison purposes, we used for all these algorithms the gene signatures produced with $25$ and $30$ genes; in the following, references to the CorEx and K-Means signatures denote those signatures. To facilitate future studies leveraging this work, we provide in additional material the clustering of genes and samples.
Regarding the evaluation of the enriched biological processes for the different signatures, we found that the LP-Stability signature ($27$ genes with Kendall's correlation-based distance) does not present any redundancy in the biological processes, in contrast to the K-Means signature ($30$ genes with Euclidean distance), which presents several hundred enriched biological processes. Moreover, the CorEx signature ($30$ genes with Total Correlation) presents a low biological redundancy, with only the phototransduction process being enriched. Our proposed gene signature using LP-Stability is composed of $27$ genes. Globally, these genes, whose descriptions are detailed in Supplementary Materials, are related to cell development and cell cycle (CD53, NCAPH, GNA15, GADD45GIP1, CD302, YEATS2), DNA transcription (HSFX1, CCDC30, MATR3, ASH1L, ANKRD30A, GSX1), gene expression (ZNF767, C1orf159, RPS8, ZEB2), DNA repair (RIF1), antigen recognition (ZNF767), apoptosis (C3P1, CLIP3) and mRNA splicing (SNRPG). We also have many genes specific to cancer or having a major impact on cancer (CD53, ANKRD30A, ZEB2, ADNP, SFTA3, ACBD4). All these processes are highly important and significant for cancer. We also report in Supplementary Materials, for each gene, the main tissues in which it is overexpressed, using the GTEx portal. Finally, even if many genes are related to specific tissue types such as brain, blood lymphocytes, liver or gynecologic tissues, the overall expression profile of each gene is unique. \subsubsection{Sample Clustering: Discovery Power} The predictive power of the best signature per algorithm, together with the random signature and the signature presented in~\cite{thorsson2018}, was further assessed by measuring their ability to separate $10$ different tumor types (Table~\ref{tab:tumors-locations}) in a completely unsupervised manner, through sample clustering.
In Fig.~\ref{fig:sample_clustering}, the results are presented for the LP-Stability signature (with $27$ clusters, ES $100\%$ and DI $40.6\%$ using Kendall's correlation-based distance), K-Means (with $30$ genes, ES of $30\%$ and a DI of $0.52\%$ using Euclidean distance), CorEx (with $25$ genes, ES $72\%$ and DI $0.06\%$ using Total Correlation), the Random Clustering signature (with $27$ genes, ES $86.6\%$ and DI of $13.8\%$ using Kendall's correlation-based distance) and the signature from~\cite{thorsson2018}. One can observe that the CorEx and Random signatures fail to properly separate the tumor types; for this reason, in the rest of the section we present a detailed comparison of the K-Means and LP-Stability signatures only. A thorough analysis of the clusterings of Fig.~\ref{fig:sample_clustering} is provided in~\cite{battistella2019} and in Supplementary Materials and shows evidence validating the relevance of the approach presented here. \begin{figure*}[!t] \centering \includegraphics[scale=0.5]{sample_clustering} \caption{\textbf{Gene Signature Assessment via tumor distribution analysis across 10 sample clusters generated in the different signatures' feature space.} The graph presents the distribution of the different tumors for the Random Clustering signature ($27$ genes), CorEx, K-Means, a referential gene signature~\cite{thorsson2018} and LP-Stability. The distribution of tumors for the Random and CorEx algorithms is quite intermixed, without many associations between the tumor types, while the K-Means, referential and LP-Stability signatures seem to favor some good tumor associations.} \label{fig:sample_clustering} \end{figure*} Another interesting point is that the distance used greatly affects the distribution of the different tumor types in the clustering. This demonstrates the importance of the distance selection in combination with the selected algorithm.
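As an illustration of the correlation-based distances discussed here, the following sketch (our own code, not the authors') turns Kendall's tau into a dissimilarity usable by a center-based clustering algorithm such as LP-Stability; the expression array shape is an assumption for the example.

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_distance(x, y):
    """Distance in [0, 2]: 0 for perfectly concordant expression
    profiles, 2 for perfectly discordant ones."""
    tau, _ = kendalltau(x, y)
    return 1.0 - tau

def pairwise_distances(expr):
    """Full symmetric distance matrix for an (n_genes, n_samples)
    expression array."""
    n = expr.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = kendall_distance(expr[i], expr[j])
    return d
```

Swapping `kendalltau` for `spearmanr` (or a plain Euclidean norm) yields the alternative distances compared in the text, which is what makes the distance choice an explicit hyperparameter of the pipeline.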
Based on our experiments, we noticed that Spearman's and Kendall's correlations provide the best sample clusterings for all the algorithms. In particular, Spearman's correlation tends to better separate the different tumors into different clusters, while Kendall's seems to generate clusters that group related tumor samples (Fig.~2 in Supplementary Materials). To compare our results with other methods in the literature, we assessed our gene signature against a knowledge-based signature of $78$ genes that has been proven to be appropriate for determining immune-related sample clusters~\cite{thorsson2018}. The obtained tumor distribution is presented in Fig.~\ref{fig:sample_clustering}, reporting quite intermixed associations. Again, LIHC and BRCA are separated properly, while the rest of the tumors are clustered in unrelated groups. This comparison indicates the need for compact signatures, highlighting at the same time the difficulty of capturing the full genome information as well as the need for an automatically computed signature to avoid redundancy and information loss. \subsubsection{Gene Screening Analysis} \label{section:screening} Screening analysis aims to identify the significant genes for each cluster, which are then used to determine the enriched biological processes per cluster. To determine those significant genes, we used the SAM method to look for genes that are expressed differently in the samples of one tumor type in a given cluster compared to the other samples of the same tumor type (see section~\ref{section:DP} for more details). We will refer to those genes as differentially expressed genes in the remainder of the article. Besides, the SAM scores allow us to rank significant genes; in the following, the most significant genes for a cluster, or for a tumor type within a cluster, are the genes reaching the highest SAM scores.
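The SAM statistic itself is not reproduced in the paper; the sketch below is a simplified SAM-like moderated score (mean difference divided by a pooled standard deviation plus a small regularizer $s_0$), shown only to illustrate the kind of score being ranked. The actual SAM method additionally uses permutations to assess significance, which is omitted here.

```python
import numpy as np

def sam_like_score(group_a, group_b, s0=0.1):
    """SAM-like moderated difference score between two expression groups.

    s0 damps the scores of low-variance genes, as in the SAM statistic;
    its value here (0.1) is an arbitrary placeholder.
    """
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n_a, n_b = len(a), len(b)
    diff = a.mean() - b.mean()
    pooled = np.sqrt(
        ((a.var(ddof=1) * (n_a - 1) + b.var(ddof=1) * (n_b - 1))
         / (n_a + n_b - 2)) * (1.0 / n_a + 1.0 / n_b)
    )
    return diff / (pooled + s0)
```

Ranking genes by the absolute value of such a score is what yields the "most significant genes" per cluster referred to in the analysis below.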
This method allows us to check whether the tumor types of a given cluster share genes related to similar biological processes, highlighting the biological relevance of the cluster. Meanwhile, it enables us to verify the relevance of the distribution of the same tumor types into different clusters by checking the absence of similar biological processes. In particular, a summary of the analysis for our proposed signature's sample clustering is presented in Table~\ref{tab:gene_screening}. In this section, we provide a detailed analysis per tumor type for each cluster for both the K-Means and LP-Stability selected signatures. \begin{table*}[!t] \caption{Analysis of the biological pathways and most significant genes per cluster for the sample clustering performed using our proposed signature of $27$ genes via LP-Stability algorithm and Kendall's correlation-based distance. The table highlights the separation between inflamed and non-inflamed tumors and the identification of well-known cancer subtypes such as BRCA.} \label{tab:gene_screening} \centering \scalebox{0.95}{ \begin{tabular}{|c| c| c| c| c| c|} \hline \multirow{2}{*}{\textbf{Clusters}} & \textbf{Significant} & \textbf{Tumor} & \textbf{Validated Role of} \textbf{the Gene} & \textbf{Biological Pathways} & \textbf{Key Feature} \\ & \textbf{Genes} & \textbf{Type} & \textbf{in Tumor Type} & & \textbf{of the cluster}\\\hline \multirow{4}{*}{Cluster 0} & IL4R & HNSC & Treatment target & Immune and defense response &\multirow{2}{*}{An inflamed solid } \\ \cline{2-4} & GNA15 & LUSC & Lung cancers treatment & Regulation of cell proliferation& \multirow{2}{*}{tumors cluster}\\ \cline{2-4} & \multirow{2}{*}{KRT5} & \multirow{2}{*}{CESC} & Biomarker distinguishing adenocarcinomas & \multirow{2}{*}{Interferon-gamma mediated process}& \\ & & & from squamous cell carcinomas~\cite{Xiao2017} & &\\\hline \multirow{2}{*}{Cluster 1} & \multirow{2}{*}{FPR3} & \multirow{2}{*}{BRCA} & \multirow{2}{*}{Immune inflammation
related~\cite{li2017}} & \multirow{2}{*}{Immune response related pathways} & An inflamed BRCA\\ &&&&& tumors cluster \\ \hline \multirow{2}{*}{Cluster 2} & \multirow{2}{*}{TTC28} & \multirow{2}{*}{BRCA} & \multirow{2}{*}{Related to basal breast cancer risk~\cite{Hamdi2016}} & \multirow{2}{*}{Basal plasma membrane} & A basal BRCA\\ &&&&& tumors cluster\\ \hline \multirow{5}{*}{Cluster 3} & \multirow{3}{*}{NDUFB10} & \multirow{3}{*}{BRCA} & Related to breast~\cite{Zhang2015} and ovarian~\cite{PermuthWey2011} & \multirow{2}{*}{Mitochondrial complexes and} & \multirow{5}{*}{A gynecologic tumors}\\ &&& cancers, poor prognosis for & \multirow{3}{*}{cells organization related processes} & \\ &&& esophageal lineage and LIHC &&\\\cline{2-4} & \multirow{2}{*}{SLC39A6} & \multirow{2}{*}{OV} & Poor prognosis for & & cluster linked to LIHC\\ & &&esophageal lineage and LIHC & & \\\hline \multirow{2}{*}{Cluster 4} & SFTA3 & LUAD & Related to LUAD and LUSC~\cite{Schicht2014, Xiao2017} & Pathways of immune response & \multirow{2}{*}{An inflamed lung tumors cluster}\\ \cline{2-4} & NAPSA & LUSC & Related to LUAD & Surfactant homeostasis & \\\hline \multirow{2}{*}{Cluster 5} & & \multirow{2}{*}{LIHC}& & & A pure complete LIHC\\ &&&&& tumor cluster\\ \hline \multirow{2}{*}{Cluster 6} & \multirow{2}{*}{FOXA1} & \multirow{2}{*}{BRCA} & Related to Breast & \multirow{2}{*}{Metabolic processes} & A luminal BRCA \\ &&& Luminal cancer~\cite{Cappelletti2017} && tumors cluster\\\hline \multirow{4}{*}{Cluster 7} & & GBM & Complete GBM cluster & Response to stimulus, &\\\cline{2-4} & ANGPTL5 & BRCA & Angiopoietin-like protein family & \multirow{2}{*}{cardiovascularity,} & A GBM tumors cluster \\ \cline{2-4} & \multirow{2}{*}{FERMT2} & \multirow{2}{*}{BLCA} & Related to various cancer & & with other tumors, all enriched\\ & &&including breast ones & blood vessels related & in cardiovascularity pathways\\\hline \multirow{2}{*}{Cluster 8} & UQCRH & BRCA & Mitochondrial Hinge protein 
related~\cite{Modena2003} & Metabolic processes & \multirow{2}{*}{Mis-splicing related tumors}\\ \cline{2-4} & AP1M2 & BLCA & Tyrosine-based signals & general compound processes & \\\hline \multirow{2}{*}{Cluster 9} & \multirow{2}{*}{CIRBP} & \multirow{2}{*}{BRCA} & \multirow{2}{*}{Driver of many cancers} & Related to alternative splicing & \multirow{2}{*}{Alternative splicing related BRCA}\\ & & && processes and organelles & \\ \hline \end{tabular}} \end{table*} Starting with the gene screening of LP-Stability, cluster $0$ is one of the most intermixed clusters. This cluster contains significant genes for different tumor types that are associated with immune and defense response and other inflammatory processes, with strong enrichment. Among the most significant genes we can report IL4R for HNSC, an Interleukin that is a treatment target for multiple cancers; GNA15 for LUSC, which has been highlighted in lung cancer treatment; and, for CESC, KRT5, which has been identified as a potential biomarker to distinguish adenocarcinomas from squamous cell carcinomas~\cite{Xiao2017}. Continuing our analysis with cluster $1$, which includes mainly BRCA samples, its most significant gene is FPR3. This gene seems to be related to immune inflammation and multiple cancers, including breast~\cite{li2017}. For cluster $2$, which is mainly composed of BRCA samples, we identified that it is a basal BRCA cluster. Indeed, its most significant gene, TTC28, is related to breast cancer and especially basal BRCA~\cite{Hamdi2016}. Besides, the cluster is enriched for the basal plasma membrane. Cluster $3$ is also a mixed cluster, mostly composed of BRCA and OV cancers. It seems to be related to mitochondrial complexes and organization. Its most significant gene is NDUFB10, which is related to breast~\cite{Zhang2015} and OV~\cite{PermuthWey2011} cancers and is also correlated to a decreased viability in the esophageal squamous lineage as well as in LIHC.
Cluster $4$ is a lung-related cluster, composed of LUAD/LUSC samples. LUAD samples are related to immune response and LUSC samples to surfactant homeostasis, which is linked to many lung diseases. The most significant gene for LUAD is SFTA3, a lung protein~\cite{Schicht2014} and a biomarker distinguishing LUAD and LUSC~\cite{Xiao2017}, whereas the most significant gene for LUSC samples is NAPSA, which has been proven to be of relevance for LUAD tumors. Cluster $5$ mostly consists of LIHC samples, grouping all the LIHC samples in this cluster. Similarly, cluster $7$ contains all the GBM tumors. Thus, the screening process is not applicable to them, as it compares samples from the same tumor type over different clusters. Cluster $6$ is a luminal breast cancer cluster, related to metabolic processes, which have already been studied in a breast cancer context~\cite{Schramm2010}. Its most significant gene seems to be FOXA1, a gene related to Estrogen-Receptor Positive Breast Cancer and Luminal Breast Carcinoma~\cite{Cappelletti2017}. Cluster $7$ is a GBM tumors cluster. It is interesting to notice that the next two dominant tumor types in the cluster, BRCA and BLCA, are related to cardiovascularity and blood vessels; their respective most significant genes are ANGPTL5 and FERMT2, the latter having been highlighted in GBM proliferation~\cite{Alshabi2019}. Cluster $8$ has no biological process linked to immune response, but presents a strong association to metabolic and structural processes. This group of processes has been found significant for BRCA~\cite{Read2018}. The most significant genes for this cluster are AP1M2 for BLCA samples, which interacts in tyrosine-based signals and has been considered in epithelial cell studies, and UQCRH for BRCA, a gene encoding a mitochondrial hinge protein that is important in soft tissue sarcomas and in particular in two cell lines of breast cancer and one of ovarian cancer~\cite{Modena2003}.
Finally, cluster $9$ is a cluster of BRCA tumors; its most significant gene is CIRBP, which is considered to be an oncogene in several cancers and in particular for BRCA. Cluster $9$ presents alternative splicing and coiled-coil processes. This analysis highlights that each cluster is enriched in similar biological processes, while the processes from different clusters differ. Moreover, it reveals that even if clusters $0$ and $8$ contain different tumor types, they present a homogeneity in their biological processes. Cluster $0$ is especially interesting as it contains inflamed tumor samples, and cluster $8$ non-inflamed samples. These two clusters contain all the CESC samples, proving once more the relevance of the LP-Stability signature, as it automatically and without any prior knowledge separates inflammatory and non-inflammatory CESC samples. This specific problem is an active field of research~\cite{Heeren2016}. Clusters $0$ and $8$ provide an even more valuable insight when studying the genes IFNG, STAT1, CCR5, CXCL9, CXCL10, CXCL11, IDO1, PRF1, GZMA, MHCII and HLA-DRA, highlighted in~\cite{ayers2017} for their major role in immunotherapy. Indeed, for each tumor type in cluster $0$, all or most of these genes are differentially expressed, which is not the case for cluster $8$, thus proving the specificity and clinical relevance of the separation of these clusters. On a second level, we analyzed the distribution of the BRCA cancer samples into different clusters, examining its clinical relevance. We chose to highlight BRCA in this comparison, as it is the most represented tumor type and it presents a variety of subtypes. BRCA samples are distributed into clusters $1$, $2$, $3$, $6$, $8$ and $9$ using the LP-Stability signature, featuring the main molecular subtypes of BRCA. In particular, cluster $1$ contains immune inflammatory samples, cluster $2$ basal samples and cluster $6$ the luminal Estrogen-Receptor Positive samples.
Additionally, cluster $3$ is a gynecologic cluster with BRCA samples presenting relations to OV samples. Cluster $8$ features mis-splicing related tumors, which are strongly related to BRCA samples~\cite{Koedoot2019}. Cluster $9$ is marked by alternative splicing, whose implications in cancers are well known and studied~\cite{singh2017}. It is also interesting to report that the hallmark genes BRCA1 and BRCA2 are positively and differentially expressed in the luminal BRCA cluster $6$, which attests to an over-expression of these genes for cluster $6$. This observation is consistent with~\cite{mahmoud2017}, where BRCA1 and BRCA2 were more expressed in luminal BRCA samples, as they are markers for good prognosis. Besides, these genes present an under-expression in cluster $3$, which is coherent as this mixed cluster groups BRCA and OV samples that are known to present a bad prognosis. For comparison, we performed the same analysis with the sample clustering produced by the K-Means algorithm with the $30$ genes. After the analysis, we observed that this signature seems to be very specific to the BRCA tumors while reporting weaker relevance in the separation of other samples. Indeed, we can observe that the separation of BRCA samples is rather meaningful, as in each cluster BRCA samples present relevant differentially expressed genes and enriched biological processes. However, the clusters lack homogeneity, as the different tumor types of the clusters present unrelated differentially expressed genes and enriched biological processes. Besides, the K-Means signature fails to properly characterize other tumor types. This issue might be explained by the over-representation of BRCA samples in our data set. A detailed analysis of the sample clustering using the K-Means signature can be found in Supplementary Materials. Additionally, in order to indicate the significance of the distance used, we considered different distances to perform the sample clustering.
Our experiments confirmed that the distance that gave the most biologically relevant clusters was Spearman's correlation-based distance. Moreover, after the screening analysis, we observed that the differentially expressed genes are not necessarily the genes selected in the signatures. This observation indicates that the strength of our approach is to combine genes that might not be the most informative taken individually but whose combination allows a good compact representation of the information brought by the whole genome for cancer tumors. It is also worth mentioning that the LP-Stability signature correctly separates immune inflammatory samples from the others for all tumor types. \subsubsection{Expression Power} The expression power of our signature was further evaluated using the ARI, NMI, homogeneity, completeness and FMS metrics and compared with the rest of the signatures. We call the average of those scores the Expression Power of the signature and report it in Fig.~\ref{fig:spider}. Detailed results for each score are provided in Supplementary Materials. For Random Clustering, we calculated the metrics on the average results of sample clusterings designed from $10$ random signatures. Overall, the performances of K-Means and LP-Stability are the best, with the former outperforming the latter. The good performance of K-Means could be due to the good separation of the BRCA clusters, the dominant tumor type in our dataset. \subsection{Tumor Types/Subtypes Classification Tasks} \label{section:classification} The predictive power of our proposed signature has been assessed in a supervised setting by classifying the samples according to their tumor and sub-tumor types. This experiment aims to evaluate the tissue-specific information captured by each signature. In Table~\ref{tab:types}, we report the performance on training and test for each signature using the same classification strategy.
Our experiments highlight that even random signatures with the relatively small number of $27$ genes report good performance, with a balanced accuracy of $84\%$. This shows that even a small number of genes is informative enough to perform a good separation of tumor types. However, our proposed signature reports the highest balanced accuracy, reaching $92\%$ and outperforming the referential signature~\cite{thorsson2018}, which reached a balanced accuracy of $85\%$. Regarding tumor subtypes classification, results averaged over all considered tumor types are provided in Table~\ref{tab:subtypes}. Our proposed method presents the highest performance, with a balanced accuracy of $68\%$, outperforming the other algorithms by at least $9\%$. This task is quite challenging, as we are using the same compact signature to characterize all the different tumor types at a fine molecular level. Considering the complexity of the task and the important number of different classes, the results obtained with the proposed signature are very promising. Indeed, it surpasses the random signatures' average balanced accuracy by $11\%$ and the referential signature, devised on this specific dataset, by $9\%$.
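Balanced accuracy, the headline metric in these classification tables, is the unweighted mean of per-class recalls, which keeps the over-represented BRCA class from dominating the score. A minimal self-contained version (our own helper, not the paper's code) is:

```python
from collections import Counter

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls over the classes present in y_true."""
    correct = Counter()
    total = Counter()
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)
```

For example, a classifier that predicts the majority class everywhere scores high plain accuracy on an imbalanced set but only $1/K$ balanced accuracy over $K$ classes, which is why the tables report the balanced variant.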
\begin{table*}[!t] \caption{\textbf{Tumor Types Classification} performance for the average of $10$ sets of randomly-selected genes of the same size as the proposed signature, and for the CorEx, K-Means, referential~\cite{thorsson2018} and our proposed signatures.} \label{tab:types} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Signature} & \multicolumn{2}{c|}{Balanced Accuracy (\%)} & \multicolumn{2}{c|}{Weighted Precision (\%)} & \multicolumn{2}{c|}{Weighted Sensitivity (\%)} & \multicolumn{2}{c|}{Weighted Specificity (\%)} \\ \cline{2-9} & Training & Test & Training & Test & Training & Test & Training & Test \\ \hline Random & 96+/-5 & 84+/-2 & 95+/-5 & 87+/-3 & 94+/-7 & 86+/-4 & 99+/-1 & 97+/-1 \\ \hline CorEx & 100 & 85 & 100 & 90 & 100 & 91 & 100 & 98 \\ \hline K-Means & 100 & 90 & 100 & 94 & 100 & 94 & 100 & 98 \\ \hline Referential {[}28{]} & 100 & 85 & 100 & 89 & 100 & 89 & 100 & 98 \\ \hline \textbf{Proposed} & \textbf{99} & \textbf{92} & \textbf{99} & \textbf{94} & \textbf{98} & \textbf{93} & \textbf{100} & \textbf{99} \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}[!t] \caption{\textbf{Tumor Subtypes Classification} performance for the average of $10$ sets of randomly-selected genes of the same size as the proposed signature, and for the CorEx, K-Means, referential~\cite{thorsson2018} and our proposed signatures.
Only the $5$ tumor types with more than $50 \times n\_subtypes$ samples were studied.} \label{tab:subtypes} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Signature} & \multicolumn{2}{c|}{Balanced Accuracy (\%)} & \multicolumn{2}{c|}{Weighted Precision (\%)} & \multicolumn{2}{c|}{Weighted Sensitivity (\%)} & \multicolumn{2}{c|}{Weighted Specificity (\%)} \\ \cline{2-9} & Training & Test & Training & Test & Training & Test & Training & Test \\ \hline Random & 81+/-11 & 57+/-9 & 85+/-8 & 66+/-10 & 82+/-9 & 62+/-7 & 87+/-12 & 74+/-23 \\ \hline CorEx & 82+/-19 & 59+/-14 & 83+/-18 & 70+/-11 & 81+/-20 & 65+/-8 & 94+/-6 & 71+/-36 \\ \hline K-Means & 85+/-12 & 53+/-24 & 89+/-10 & 67+/-15 & 79+/-20 & 56+/-19 & 96+/-3 & 69+/-38 \\ \hline Referential {[}28{]} & 90+/-11 & 59+/-7 & 91+/-9 & 68+/-10 & 90+/-9 & 67+/-12 & 97+/-4 & 70+/-35 \\ \hline \textbf{Proposed} & \textbf{85+/-11} & \textbf{68+/-9} & \textbf{90+/-6} & \textbf{73+/-13} & \textbf{82+/-16} & \textbf{63+/-9} & \textbf{93+/-6} & \textbf{89+/-6} \\ \hline \end{tabular} \end{center} \end{table*} \subsection{Global Comparison} \label{section:compa} In order to better summarize the different results and provide a fair comparison with the random, state-of-the-art and referential signatures, a spider chart is presented in Fig.~\ref{fig:spider}. The comparison focuses on $3$ different criteria: (i) criteria based on the gene clustering performance, in blue; (ii) criteria based on the informativeness of the signature for unsupervised clustering tasks, in green; and (iii) criteria based on the relevance of the signature for supervised classification tasks, in gold. Discovery Power is the proportion of tumor types that are relevantly grouped in the sample clustering according to related tumor types; the evaluation criteria are presented in section~\ref{section:DP}.
Expression Power corresponds to the average of the following clustering scores: ARI, NMI, homogeneity, completeness and FMS; the results are provided in Supplementary Materials. Predictive Power: Types is the balanced accuracy on test for the tumor types classification task; the results are provided in Table~\ref{tab:types}. Predictive Power: Subtypes is the average over all tumor types of the balanced accuracy on test for the tumor subtypes classification; the results are provided in Table~\ref{tab:subtypes}. Biological Relevance is the average ES of the gene clustering method; the results are provided in Table~\ref{tab:bests}. Mathematical Relevance is the average DI score of the gene clustering method; the results are provided in Table~\ref{tab:bests}. Decreasing Time Complexity is the average time taken for the gene clustering, the bigger the area in the chart the faster; the results are provided in Table~\ref{tab:bests}. Our proposed signature is shown to be largely superior, by at least $10\%$, to the random and referential signatures in all criteria except compactness. It is also superior to the other signatures designed using our pipeline with other prominent clustering methods. One interesting exception is the Tumor-Specific Expression Power of the K-Means-derived signature. The signature defined with K-Means differentiates the tumor types well, as also shown by Predictive Power: Types, but does not perform well on identifying the subtypes (Predictive Power: Subtypes). This is also reflected in the lower Discovery Power of K-Means compared to our proposed signature.
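The Expression Power aggregate defined above can be sketched directly with scikit-learn's clustering metrics (this helper and its name are ours; the paper's exact averaging is assumed to be a plain mean of the five scores):

```python
from sklearn import metrics

def expression_power(true_labels, cluster_labels):
    """Average of ARI, NMI, homogeneity, completeness and FMS between
    the predicted sample clusters and the tumor-type labels."""
    scores = [
        metrics.adjusted_rand_score(true_labels, cluster_labels),
        metrics.normalized_mutual_info_score(true_labels, cluster_labels),
        metrics.homogeneity_score(true_labels, cluster_labels),
        metrics.completeness_score(true_labels, cluster_labels),
        metrics.fowlkes_mallows_score(true_labels, cluster_labels),
    ]
    return sum(scores) / len(scores)
```

All five component scores are invariant to a permutation of cluster ids, so a clustering that exactly recovers the tumor types scores $1.0$ regardless of how the clusters are numbered.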
\begin{figure}[!t] \centering \includegraphics[scale=0.45]{spider} \caption{\textbf{Comparison of the different signatures.} Blue: criteria based on the gene clustering performance; Green: criteria based on the informativeness of the signature for unsupervised clustering tasks; Gold: criteria based on the relevance of the signature for supervised classification tasks.} \label{fig:spider} \end{figure} \section{Conclusions} In this paper, we present a framework for gene clustering definition and comparison, and for gene signature selection and evaluation in terms of redundancy, compactness and expression power. In particular, we present a mathematical and biological evaluation of gene clustering, an extensive sample clustering evaluation using quantitative and field-specific clinical and biological metrics, and a supervised approach for its association with tumor types and subtypes characterization. In this framework, we have shown the interest of using the LP-Stability algorithm, a powerful center-based clustering algorithm, for gene clustering. Moreover, the obtained clusters formulate a gene signature proving strong associations with tumor phenotypes. These results compete with those reported in the literature obtained by using a large set of different omics data. In addition, our compact signature has been compared with and proved to be more expressive than a prominent knowledge-based gene signature~\cite{thorsson2018}. An extensive biological analysis evidenced that the designed signature leads to sample clusters with high relevance and correlation to cancer-related processes and immune response, reporting promising results in tumor types and subtypes classification. In the future, we aim to extend the proposed method towards discovering stronger gene dependencies through higher-order relations between gene expression data, as well as further evaluating this biomarker for therapeutic treatment selection in the context of cancer.
\small{ \section{Acknowledgement} We would like to acknowledge the support of an Amazon Web Services grant and Pr. Stefano Soatto for fruitful discussions. We also acknowledge the ARC sign'it program and Siric Socrates INCA. This work was partially supported by the Fondation pour la Recherche M\'edicale (FRM; no. DIC20161236437) and by the ARC sign'it grant ARC: Grant SIGNIT201801286. We also thank Y. Boursin, M. Azoulay and the Gustave Roussy Cancer Campus DTNSI team for providing the infrastructure resources used in this work. } \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} The use of energy has been increasing exponentially over the last few years around the world. Specifically, the building sector alone consumes more than 40\% of the global energy produced worldwide \cite{cao2016building,Himeur2020AE}. This consumption is expected to increase by 1.3\% per year on average from 2018 to 2050 in organization for economic cooperation and development (OECD) countries (e.g. USA, Europe, Australia, etc.), while this rate will be more than 2\% for non-OECD countries (e.g. Middle East, China, Russia, etc.) \cite{economidou2020review}. Therefore, experts naturally assume that the rise of population and quality of life in various regions will result in a growing need for electricity-consuming devices and individualized equipment, and hence an increasing energy consumption rate \cite{pylsy2020buildings,Himeur2020iscas}. In order to alleviate this issue, recent research and development projects and initiatives have focused on developing nearly zero energy buildings (nZEB) in the last decade, which incorporate renewable and sustainable energy resources and energy management systems \cite{refat2020prospect}. However, these kinds of measures cannot be supported in all countries around the globe due to their high deployment cost \cite{huang2018uncertainty,lin2020towards}. Consequently, finding other cost-effective or no-cost energy saving solutions became the core of interest for the building energy community, especially solutions based on the use of information and communication technologies (ICT) \cite{Himeur2020IJIS-NILM-R}. One of these challenging approaches is behavioral change, which allows end-users to polish their energy consumption behaviors and trim their wasted energy without investing more time and elbow grease, but only by using recommender algorithms, artificial intelligence (AI) tools and already used smartphones \cite{azizi2019making,staddon2016intervening,belhadi65deep}.
In this respect, energy providers, policy makers and end-users in the building sector have become progressively aware of the importance of behavioral change in promoting energy saving and reducing carbon emissions \cite{fraternali2017encompass,casals2017serious}. In this context, a growing body of literature, projects and commercial products has recently arisen to explore sustainable behavior change, explicitly addressing the relation between attitudes and energy consumption in order to improve energy consumption behavior \cite{Sardianos2020GreenCom}. This is also due to the widespread use of AI, Internet of things (IoT) devices and other ICT tools, which have a positive impact on raising end-users' awareness, shaping their attitudes towards energy saving and boosting their achievements \cite{hwang2016efficient,Himeur2020icict}. While most of the research efforts have been directed towards developing and improving new technologies and materials that reduce wasted energy and promote energy saving, human-related aspects, especially those related to end-users' behavior \cite{Himeur2020IntelliSys}, have received less attention. Therefore, strategies and objectives must be set in order to shape the behaviors of buildings' end-users and owners \cite{becchio2018impact}. This can be achieved through developing context-aware recommender systems \cite{ashouri2018development} that combine the knowledge of AI, behavioral analytics and human decision-making processes, to implement powerful behavioral change support systems \cite{csimcsek2016semantic,Varlamis2020CCIS}, in which recommendations could be easily embedded into daily behaviors to reach an effective energy saving level. In this regard, changing the daily behavior of end-users has become a key challenge \cite{iweka2019energy,sardianos2020emergence}. This challenge requires training and awareness exercises, incentive recommendations and feedback assessments for inducing a permanent change.
Despite the success of recommender systems in different research and development applications (e.g. in healthcare, online shopping, movies, music, travel plans, etc.), there is still room for more research to improve their performance, especially in the energy saving domain. To our knowledge, no work has yet been dedicated to surveying the challenges, difficulties and future perspectives of energy saving recommender systems. As a result, there are still open questions in the energy research community about the reliability and efficacy of recommendation systems. To alleviate these issues, we provide this survey article, which performs an in-depth, critical analysis of energy saving recommender systems for buildings. More specifically, this paper identifies the open issues and critical challenges that impede the development of effective energy recommender systems that incorporate human-in-the-loop, by promoting energy efficiency behaviors and reducing carbon emissions. To that end, a taxonomy of energy recommender systems is presented. The currently available frameworks are described and the factors that impact the efficacy of current implementations are discussed. Such factors refer to the nature of the developed recommender algorithms, the computing platforms used to implement them, their objectives, the incentive measures utilized to motivate end-users and the evaluation metrics that fit the context of energy. Moving forward, a critical analysis and discussion is conducted to identify the limitations and difficulties encountered when developing energy recommender systems. Finally, we derive and decipher the issues that remain unresolved and attract increasing research interest, along with the hottest research directions for improving recommendation systems' performance.
To summarize, the main contribution axes of the paper could be outlined as follows: \begin{itemize} \item Propose the first review framework of recommender systems for energy efficiency in buildings. \item Develop a novel taxonomy of existing energy efficiency recommender systems by analyzing the nature of the different components used to build a recommender framework, e.g. (i) objective of the recommendations, (ii) methodology and algorithm of choice utilized by the recommender engine, (iii) computing platforms, and (iv) evaluation metrics and incentive measures. \item Perform an in-depth, critical analysis of the existing frameworks to identify the current challenges and difficulties that remain unresolved. \item Provide insights about the future orientations that could be targeted to overcome existing energy efficiency recommender systems' issues, to improve their quality and facilitate their applicability. \end{itemize} The remainder of this paper is organized as follows. Section \ref{sec:methodology} briefly explains the methodology that we followed in order to identify the works related to the development of energy saving recommender systems. Section \ref{sec:related_work} begins with an introduction to recommender systems, and then focuses on the analysis of human-driven energy efficiency frameworks of buildings, using recommendation systems for triggering and maintaining this behavioral change. Additionally, it summarizes the objectives of such systems, the methodologies and algorithms they use, computing platforms and evaluation metrics. Moving forward, Section \ref{sec4} conducts in-depth and critical analyses of existing energy recommender systems by discussing their limitations and issues. Following, Section \ref{sec5} presents current challenges and future orientations that should attract the attention of R\&D communities in the near and far future. Lastly, Section \ref{sec6} derives the final conclusions.
\section{Methodology} \label{sec:methodology} In order to perform our literature review, we based our methodology on the techniques presented in \cite{kitchenham2004procedures}. Identifying the need for a review is of equal importance to the results of the review. The need for this review derives from the fact that numerous systems and solutions have been developed in the field of energy efficiency for buildings, adding to the breadth of the research field. Our study reveals that there is not yet a systematic review that explains all the steps, from the conception of an energy saving solution to its delivery to end-users. By conducting this review, the following questions will be answered: \begin{enumerate} \item Why have recommender systems gained significant attention for energy efficiency in buildings? \item What are the main research directions that existing energy saving recommender system frameworks followed? \item What are the main objectives and which methodologies were used to achieve them? \end{enumerate} We performed our bibliometric research from the perspective of a narrative review. We searched for studies related to the use of recommendations for improving energy efficiency and promoting energy savings in buildings. Our search was conducted in the Scopus database, covering the period from 2000 to 2020. The following terms were searched in titles, abstracts and keywords: \enquote{recommendation}, \enquote{recommenders}, \enquote{recommender systems}, \enquote{energy saving}, \enquote{energy efficiency}, \enquote{buildings}, \enquote{behaviour}. 
A \textbf{\textit{search in Scopus}}, in the title, abstract and keyword fields \footnote{TITLE-ABS-KEY ( ( \enquote{energy saving} OR \enquote{energy efficiency} OR energy ) AND ( \enquote{recommendation} OR \enquote{recommender} OR \enquote{recommender systems} OR \enquote{recommendation systems} ) AND behaviour AND buildings )} returned \textbf{\textit{283 articles}}, which are broadly organized in \textbf{\textit{three major research directions}}, as we explain in the following paragraphs and depict in Fig.~\ref{fig:domain_survey}: \begin{enumerate} \item Recommendations for enhancing buildings' energy efficiency \item Intelligent systems that promote energy saving in buildings \item Recommender systems that put humans at the center of the decision making process for energy efficiency, in either of the previous cases or in both \end{enumerate} \begin{figure*}[!htb] \centering \includegraphics[width=0.8\columnwidth]{Fig1.pdf} \caption{The impact of energy efficiency in buildings in bringing human-in-the-loop.} \label{fig:domain_survey} \end{figure*} A large group of works focuses on buildings that are energy efficient by design. Such \enquote{high-performance} buildings employ energy optimization techniques, including natural ventilation, thermal storage and optimal window size and placement. The number of works in this field is vast, but most of them are only marginally related to recommender systems and human behavior, so we refer readers to a few survey works in this domain \cite{hauge2011user, taherahmadi2020toward}. Another major group of works focuses on the use of intelligent systems for monitoring and reducing buildings' unnecessary energy waste. The survey of Boodi et al. 
\cite{boodi2018intelligent} summarizes the state-of-the-art works in building energy management systems (BEMS) and distinguishes three types of models that combine environmental conditions, energy prices, comfort criteria and occupancy prediction in order to optimize the operation of heating, ventilation and air conditioning (HVAC) systems: white box, black box and gray box models. More work on intelligent systems for energy efficiency in buildings can be found in a few more surveys \cite{de2014intelligent, khajenasiri2017review, shareef2018review}. The third group of works employs recommendation systems and algorithms, usually as a complement to the previous two approaches (i.e. on energy efficiency and smart buildings). The final decision always rests with the human, who plays a vital role in the efficacy of the proposed solution. The recommendations in this group are targeted either to building owners (mainly for large public or commercial buildings) or to the occupants of residential buildings. The main difference between the two is that in the former case, recommendations refer to an energy plan or an energy saving strategy that can be used to balance between saving and comfort, whereas in the latter, the recommendations are about energy saving actions (e.g. device turn-off, or work shifting) that can have an immediate impact on the building's energy consumption. As depicted in Fig.~\ref{fig:domain_survey}, the focus of our survey is on recommender systems for energy efficiency and, more specifically, on the intersection of the three aforementioned domains. In the following, we examine various aspects of systems that combine intelligent systems and action recommendations to improve energy efficiency and convert conventional buildings into smart ones. The section that follows begins with an overview of recommender systems and answers the last question of our survey methodology by presenting the objectives and the methodologies used to achieve them. 
\section{Recommender systems for energy efficiency in buildings} \label{sec:related_work} According to the 2012 ACM Computing Classification System\footnote{https://www.acm.org/publications/class-2012}, recommender systems are categorized as information systems that focus on information retrieval tasks. Under this prism, it is now very common for more and more scenario-specific applications to adopt various kinds of recommender systems to serve their needs and goals. Although they were originally applied online for content personalization based on users' explicit or implicit preferences \cite{resnick1997recommender, schafer1999recommender}, they were soon extended to a wide range of different real-world applications \cite{martin2009recsys}, from place and people recommendations based on location \cite{bao2015recommendations} to action recommendations for reshaping energy profiles \cite{alsalemi2020achieving, sardianos2019want}. In addition, due to the high demand for personalization in most real-life scenarios, various approaches of adopting recommender systems have been proposed, based on the type of decision making the system has to support and the goal these recommendations have to meet. Although all these approaches still share the same logic as before (i.e. recommend items to users), they go beyond content personalization and consequently increase the requirements for recommended items, which, apart from matching a specific user's interests, also have to be novel, profitable, feasible in terms of the user context, and generated at the right place and moment \cite{eirinaki2018recommender}. \begin{figure*}[!htb] \centering \includegraphics[width=0.8\columnwidth]{Fig2.pdf} \caption{Phases of recommendation process.} \label{rec_phases} \end{figure*} The approach used for implementing a recommender system significantly depends on the goal and applicability of the system. 
As depicted in Fig.~\ref{rec_phases}, the main phases that each recommendation process consists of are: \begin{enumerate} \item \textbf{Information collection phase}: In this phase, all the necessary explicit or implicit user information is collected in order to profile the targeted users. Depending on the type of recommender system used, the information needed for user profiling might differ in volume or attributes, but at minimum it has to cover the case scenario for which the recommender system is used. It is critical, in this phase, to recognize and prepare this information, which will be used for system training and fine-tuning in the next phase. The efficiency and impact of the resulting recommendations rely on the utilized algorithm, but also depend on the quality of the training data. \item \textbf{Learning phase}: In this phase, the system extracts the most representative features and trains the model that best identifies and quantifies the relationship between the users and the \enquote{items} that the recommendation engine will create recommendations for. \item \textbf{Prediction and recommendation phase}: In the third and last phase of the recommendation process, the system predicts the \enquote{unknown} values of the user-to-item preferences using the pre-trained model, and ranks the items that are most likely to fit users' preferences. As a result, it adds the top ranked items to the list of recommendations that are to be presented to the user. Of course, filters can be employed to rule out items that do not match the user context, and more criteria can be used to improve the ranking of the items to be recommended \cite{castells2015novelty}. \end{enumerate} In the following sections, we present a taxonomy of energy saving recommender systems, in which we describe state-of-the-art research frameworks with the aim of identifying the most prominent and recent advances of this technology. 
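To make the three phases concrete, the following minimal sketch walks through them on toy data; the ratings, item names and the simple item-mean "model" are illustrative assumptions of ours, not taken from any cited framework:

```python
# Minimal sketch of the three recommendation phases on toy explicit
# ratings (user -> {item: rating}); all names and values are illustrative.

ratings = {                                     # Phase 1: information collection
    "u1": {"turn_off_ac": 5, "shift_laundry": 3},
    "u2": {"turn_off_ac": 4, "dim_lights": 5},
    "u3": {"dim_lights": 4, "shift_laundry": 2},
}

def train_item_means(ratings):                  # Phase 2: learning (item-mean model)
    sums, counts = {}, {}
    for prefs in ratings.values():
        for item, r in prefs.items():
            sums[item] = sums.get(item, 0) + r
            counts[item] = counts.get(item, 0) + 1
    return {item: sums[item] / counts[item] for item in sums}

def recommend(user, model, ratings, top_n=2):   # Phase 3: predict and rank
    unseen = [i for i in model if i not in ratings[user]]
    return sorted(unseen, key=lambda i: model[i], reverse=True)[:top_n]

model = train_item_means(ratings)
print(recommend("u1", model, ratings))          # unseen items ranked by mean rating
```

A real engine would replace the item-mean model with one of the learning methods surveyed below, but the phase boundaries stay the same.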
Fig.~\ref{taxonomy} illustrates a taxonomy of energy saving recommender systems that is introduced based on different parameters, including the objective of the recommender system, the recommendation methodology, the computing platforms, the evaluation metrics, and the measures employed to encourage end-users to adopt energy saving behavior. \begin{figure*}[!t] \centering \includegraphics[width=\columnwidth]{Fig3.pdf} \caption{Taxonomy of energy saving recommender systems.} \label{taxonomy} \end{figure*} \subsection{The objectives of energy efficiency recommender systems} The analysis performed in this article focuses on the third group of research works that we identified through the literature survey. As we explained earlier, the group comprises two main subgroups, each one targeting a different audience, and thus, having different objectives. \textbf{O1. Strategy recommenders:} The objective of such systems is to \textbf{\textit{recommend the best strategy}} for each case, whether it is a building, an energy consumption forecast model, or an operational energy setup for HVAC or lights. For example, \cite{Pinto2018} proposes a building energy efficiency recommender system that relies on case-based reasoning (CBR). In a similar direction, \cite{kaur2019energy} discusses a recommender system for selecting the operational light intensity that satisfies the user's comfort and suits both the user's activity and energy saving. Finally, \cite{cui2016short} presents a generalized structure for forecasting building energy trends based on building specifications. The solution is mostly targeted to energy prosumers and allows them to predict the total energy consumption of a building by choosing the right building profile. \textbf{O2. Action recommenders:} These systems are tailored to the everyday needs of building occupants and their main objective is to \textbf{\textit{recommend actions that minimise the energy footprint of occupants}}. 
The actions can either assume static users and inelastic needs, or build on their flexibility to move around the building or postpone their needs to a later time (e.g. using the laundry machine after hours). For example, \cite{cuffaro2017resource} develops a resource-oriented rule-based engine that generates advice for energy savings in the form of real-time alerts or logged incidents for monitoring purposes. On the other hand, \cite{Wei2018} focuses on commercial buildings and distinguishes two types of recommendations, which both aim to maximize the space usage: recommendations for occupants to move from one space to another, and recommendations for occupants to shift their schedule related to the building. In a more recent work \cite{wei2020deep}, the authors present a recommender system for reducing energy consumption in commercial buildings. They employ human-in-the-loop methodologies and utilize deep reinforcement learning in order to learn actions with energy saving capability and actively deliver recommendations to building end-users. Once again, the recommendations have to balance between user comfort and energy efficiency, but the final decision lies in the end-users' hands. For example, ReViCEE \cite{KAR2019135} analyzes historic energy usage fingerprints and provides individual and collaborative recommendations that balance between comfort and efficient usage of energy. \subsection{Methodologies and algorithms for energy efficiency recommendations} \label{sec3.2} In this section, we overview the recommendation system methodologies widely used in the building energy saving field. \vskip3mm \noindent \textbf{M1. Case-based} recommender systems are in essence rule-based systems, which recommend actions for one or more end-users by handling every end-user separately. 
Explicitly, specific energy usage habits and preferences are examined with reference to an ensemble of rules/heuristics and predetermined decisions that, if satisfied, trigger the associated energy efficiency actions. In \cite{Schweizer7424470}, a rule-based recommender system is implemented to effectively learn the energy consumption patterns and interests of end-users, and hence, allows them to independently promote energy efficiency. In this regard, a frequent-sequential data mining model is deployed to extract the characteristics of consumers. Similarly, in \cite{OSADCHIY2019535}, a pairwise association rule-based algorithm is proposed for drawing the collective preferences of groups of end-users. The resulting recommender system is able to provide personalised suggestions to users without requiring a complex rating approach. In \cite{Pinto2018}, the authors propose a case-based reasoning recommendation system, which is built on system knowledge (i.e. cases) representing the historical energy consumption actions, in order to shape the end-user behavior towards energy efficiency. Explicitly, the system is able to suggest energy efficiency actions to end-users at any moment of the day. This is done by analyzing their energy usage footprints and comparing them with the ones already stored in the knowledge base. In this line, in order to identify similar behaviors at each time-stamp, a k-Nearest Neighbor (KNN) approach is deployed, while a support vector machine (SVM)-based method is utilized to optimize the weighting parameters of every example. Moving forward, an expert system is then employed, which includes a set of ad-hoc rules for ensuring the application of the developed scheme to the case under consideration. Recently, in \cite{dahihande2020reducing}, pattern mining techniques are used for creating appliance usage profiles that consider the user's time context. 
Then, they filter out the most prominent time-appliance patterns that indicate the ideal behavior and, when they detect an appliance usage that deviates from the typical behavior, they send a recommendation notification to the end-user to interact accordingly with the device. \vskip3mm \noindent \textbf{M2. Collaborative filtering} methods assume that the ensemble of end-users select from a closed set of actions (or items). These explicit or implicit choices are used to filter (predict) the preference of any end-user for any action in the set of available actions (or items). Therefore, the actions recommended to each end-user represent those most preferred by him/her (or those having the highest estimated rating) \cite{morawski2017fuzzy} or those generated from the same group he/she belongs to \cite{castro2018group}. In this line, energy efficiency recommender systems using collaborative filtering utilize various intelligent agents, which interact proficiently and dynamically identify the end-users' preferences. This helps in promoting energy saving actions to consumers by providing them with personalized recommendations that appropriately fit their interests. For example, in \cite{Zhang8412100}, the appliance consumption data of a specific household are analyzed before predicting the rating levels of different energy usage plans and identifying the related user preferences for every plan. Moving forward, a filtering (prediction) model is used to allow users to choose suitable consumption plans with adequate tariffs. In the same way, the authors in \cite{ZHENG2020117775} opt to generate energy efficiency recommendations based on a dual-step procedure, i.e. extracting features and triggering tailored recommendations. Explicitly, at the first stage, a matrix representation is adopted to encode user preferences with reference to appliance usage. 
Following, a collaborative filtering approach is employed to detect similar users and generate personalized recommended actions using KNN clustering. In \cite{KAR2019135}, the authors design the ReViCEE recommender system, which delivers tailored recommendations to end-users at a university campus building in Singapore, helping them curtail their electricity usage. Accordingly, ReViCEE learns end-users' interests by analyzing historic energy usage fingerprints. In this regard, individual and collaborative preferences related to the use of lights are captured from current consumption patterns before triggering a set of recommendations to grant end-users the best compromise between visual comfort and energy saving. \vskip3mm \noindent \textbf{M3. Context-aware} recommender systems aim to produce more pertinent recommendations which are adjusted to the particular contextual circumstances of the end-user \cite{Adomavicius2015}. This can be based on the detection of historic energy consumption patterns and their underlying context, which can then be used to develop rule-based recommendations aiming to achieve end-user satisfaction \cite{sardianos2020model,RAZA201984}. They usually require a longer interaction of end-users with the system, in order to collect a larger amount of data that helps in better adapting the generated recommendations to the specific contextual situation of the consumer \cite{alsalemi2020micro}. The authors in \cite{Shigeyoshi2013} introduce a context-aware recommendation system based on (i) the analysis of power consumption footprints in various contexts; (ii) the maintenance of a base of historic formulated recommendations to avoid replicated recommendations; and (iii) a social survey for evaluating the recommendations' efficiency, in which 47 users adopted the recommender system's suggestions and provided feedback on their efficiency. 
The empirical assessment illustrates that when the recommendations are chosen randomly and in large numbers, they can overwhelm users and may have a detrimental impact on them. \vskip3mm \noindent \textbf{M4. Rasch-based} models represent a psychometric paradigm that is generally utilized to analyze the user's responses to a specific set of recommendations \cite{radha2016lifestyle}. It aims at identifying the best compromise between the user's behavior and the ability to implement the generated recommendations. In this line, a Rasch-based recommender system relies on performing a Rasch analysis, which expresses the probability of the end-user performing a particular recommendation as a function of the end-user's ability and the recommendation's difficulty \cite{Starke2015}. For example, the authors in \cite{Starke2017} investigate to what extent Rasch-based recommendations could help in reducing end-users' efforts, improving the system assistance while increasing the choice satisfaction and leading to the promotion of energy efficiency behaviors in buildings. To this end, a Rasch-based recommendation system is developed, where up to 79 energy-efficiency recommendations are generated to assist end-users in making correct energy usage actions, enhancing system support, collecting their feedback and rating their satisfaction. \vskip3mm \noindent \textbf{M5. Probabilistic relational models (PRM)} can be used to capture the knowledge hidden in the energy consumption data in a probabilistic manner, which represents the probability of an action to match certain usage patterns or preferences \cite{chulyadyo2014personalized}. PRMs replace the user-item preference matrix with a relational database, with \enquote{Users} and \enquote{Items} being the main entities and real transactions being the captured relationship between the two. When a transaction between a user and an item is recorded, the probabilities for users and items with similar attribute values increase. 
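The update just described (a recorded user-item transaction also raising the preference estimate of users with similar attribute values) can be sketched roughly as follows. This is a toy smoothed-count illustration of the idea, not an actual PRM implementation; the attribute names, spread factor and prior are our own assumptions:

```python
from collections import defaultdict

# Toy sketch of the PRM-style idea above: each recorded transaction raises
# the preference estimate for the involved user/item pair, and, to a lesser
# degree, for users sharing the same attribute values. Not a real PRM.

user_attrs = {"u1": {"home": "flat"}, "u2": {"home": "flat"}, "u3": {"home": "house"}}
counts = defaultdict(float)

def record_transaction(user, item, spread=0.5):
    counts[(user, item)] += 1.0
    for other, attrs in user_attrs.items():      # boost attribute-similar users
        if other != user and attrs == user_attrs[user]:
            counts[(other, item)] += spread

def preference(user, item, prior=1.0):
    # Smoothed relative frequency: a crude stand-in for the PRM probability
    total = sum(c for (u, _), c in counts.items() if u == user) + prior
    return (counts[(user, item)] + prior / 2) / total

record_transaction("u1", "dim_lights")
# u2 shares u1's attributes, so its estimate for "dim_lights" rises too
```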
In this sense, probabilistic relational paradigms are developed for predicting end-users' preferences and habits, where tailored recommendations are derived to motivate end-users to reduce their wasted energy \cite{kumar2001recommendation}. In \cite{Li7093924}, the authors record and analyze historical energy usage footprints in a work space via the use of a continuous Markov chain model. The model focuses on investigating time-series energy data using a multi-objective programming scheme. Next, tailored energy usage advice is drawn and each end-user is notified to establish energy saving measures and support the adoption of renewable energy solutions. Similarly, in \cite{wei2020deep}, a recommender system aiming at optimizing power usage in commercial buildings is introduced using a human-in-the-loop model. Moving forward, the energy saving approach for a building is implemented and modeled based on a Markov decision paradigm, and a deep reinforcement learning scheme is implemented to learn energy saving recommendations and engage consumers in effective sustainable behaviors. The implemented system helps in learning end-users' energy consumption actions/habits with good accuracy, and hence, results in providing end-users with useful and personalized recommendations. Finally, further experiments are conducted to employ a feedback rating procedure to evaluate user satisfaction and identify the best recommendations. \vskip3mm \noindent \textbf{M6. Fusion-based} models rely on the analysis of different kinds of data, such as energy consumption footprints, ambient conditions (i.e. temperature, humidity and luminosity), outdoor weather information and user preferences/habits, for producing better and well-timed recommendations \cite{OKU2011}. 
The so-called fusion-based recommendation systems adopt data fusion approaches, which either collect and analyze different kinds of data representations from distinct sources before making decisions \cite{xin2011effective} or include various sub-recommenders and aggregate their recommendation outputs, thus building a recommendation ensemble \cite{zhang2010fusion,himeur2020data}. For example, in \cite{wroblewska2020multimodal}, the authors introduce a multi-modal embedding fusion-based recommendation system that combines information from multiple sources and modalities. In \cite{ji2020brs}, a hybrid recommender system is proposed, which is based on the assumption that the end-user's choice is generally impacted by his/her direct (and even indirect) friends' preferences. The system fuses different kinds of data (e.g. social data, scores, and review patterns) and trains a preference prediction model, using a joint-representation learning process, to extract the best recommendations. In \cite{shambour2012trust}, the authors incorporate additional information from the consumer's social trust network as well as actionable semantic-domain knowledge, in order to improve the recommendation accuracy and increase the coverage. In the same manner, in \cite{wang2019collaborative}, by exploiting different social data sources (produced by the Internet, e.g. consumer profiles, social relationships, behaviors, preferences, etc.), a recommendation system using social data fusion is proposed. Explicitly, it aims to utilize social data fusion for identifying similar consumers, and hence, updating each consumer's ratings of recommended actions using those of similar consumers. In \cite{pradhan2020multi}, a multi-level fusion-based recommender system is developed to produce collaborator recommendations. It fuses deep learning and biased random walk models to provide tailored recommendations for potential end-users having the same preferences. 
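In its simplest form, the second fusion flavour mentioned above (several sub-recommenders whose outputs are aggregated into an ensemble) reduces to a weighted score merge. A minimal sketch, where the sub-recommender names, scores and weights are purely illustrative assumptions:

```python
# Illustrative ensemble fusion: aggregate per-item scores produced by
# several sub-recommenders with fixed weights; names/values are toy data.

def fuse(score_lists, weights):
    fused = {}
    for scores, w in zip(score_lists, weights):
        for item, s in scores.items():
            fused[item] = fused.get(item, 0.0) + w * s
    return fused

cf_scores  = {"turn_off_ac": 0.9, "dim_lights": 0.4}   # e.g. collaborative filtering
ctx_scores = {"turn_off_ac": 0.2, "dim_lights": 0.8}   # e.g. context-aware rules
fused = fuse([cf_scores, ctx_scores], weights=[0.6, 0.4])
best = max(fused, key=fused.get)
```

Real fusion-based systems replace the fixed weights with learned ones, but the aggregation step is the same in spirit.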
Following, in \cite{aiello2018decision}, a recommender system is introduced to support sustainable greenhouse management in buildings using a multi-sensor data aggregation based system. Specifically, contextual information, mathematical formulations and experts' knowledge are used and fused to help in generating more effective recommendations. \vskip3mm \noindent \textbf{M7. Deep learning-based} recommender systems have gained significant attention recently in various research topics, including visual recognition, healthcare, fraud detection, natural language processing, etc. \cite{HimeurCOGN2020}. Their use has spread due to their remarkable performance in many learning tasks, and additionally because of their interesting ability to learn characteristic representations from the ground up. The impact of deep learning has also extended to other research areas, in which it demonstrates its efficiency in retrieving information and triggering recommendations. Evidently, the field of deep learning in recommender systems is flourishing \cite{Zhang10.1145/3285029,app10072441}. In \cite{R2020113054}, with the aim of addressing the gap in collaborative filtering-based methods, a deep learning model is adopted. In effect, collaborative filtering systems suffer severely from the cold start issue, especially in the absence of historic data about the users and their energy consumption preferences. Moreover, the latent parameters learnt by these systems are naturally linear. To that end, deep learning is employed, where embeddings are deployed to represent users and their preferences, and thus to allow the learning of non-linear latent parameters. This approach better alleviates the cold start issue, since information about users and their preferences is embedded in the deep learning model. 
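As a point of reference for such embedding-based models, the linear latent-factor baseline they extend is plain matrix factorization. A hedged sketch trained with SGD on toy data (hyper-parameters, user/item names and ratings are our own illustrative assumptions, not the architecture of any cited system):

```python
import random

# Hedged sketch: plain matrix factorization trained by SGD -- the linear
# latent-factor baseline that deep embedding models build upon.
# Data, hyper-parameters and names are illustrative assumptions.

def factorize(triples, n_factors=2, lr=0.02, reg=0.01, epochs=3000, seed=7):
    rnd = random.Random(seed)
    P = {u: [rnd.gauss(0, 0.1) for _ in range(n_factors)] for u, _, _ in triples}
    Q = {i: [rnd.gauss(0, 0.1) for _ in range(n_factors)] for _, i, _ in triples}
    for _ in range(epochs):
        for u, i, r in triples:
            err = r - sum(a * b for a, b in zip(P[u], Q[i]))
            for f in range(n_factors):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)   # SGD step on user factors
                Q[i][f] += lr * (err * pu - reg * qi)   # SGD step on item factors
    return P, Q

data = [("u1", "plan_a", 5), ("u1", "plan_b", 1), ("u2", "plan_a", 4)]
P, Q = factorize(data)
predict = lambda u, i: sum(a * b for a, b in zip(P[u], Q[i]))
```

Deep variants replace the dot product $P_u \cdot Q_i$ with stacked non-linear layers over the same user and item embeddings, which is where the non-linear latent parameters discussed above come from.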
In \cite{app10144926}, a collaborative filtering approach, called DeepMF, which combines deep neural networks with matrix factorization, is introduced for improving both the predictions of end-users' preferences and the provided recommendations. In this context, DeepMF applies an iterative refinement of a matrix factorization paradigm based on a multilayer architecture, in which the knowledge acquired from one layer is used sequentially as input for the following layer, and so on. \vskip3mm \noindent \textbf{M8. Classical optimization} is proposed in addition to learning-based recommender systems to achieve an optimal energy usage in households and other kinds of buildings. Indeed, an essential part of the literature relies on classical optimization techniques, which can (i) provide relevant recommendations to both end-users and energy providers; and (ii) reduce wasted energy automatically through controlling energy demand and electrical devices \cite{shah2019review,tushar2019optimizing}. For example, in \cite{ceballos2019simulation}, the authors propose an energy optimization approach, which aims to predict the amount of energy used by the heating and cooling systems in a set of commercial or institutional buildings. Then, the potential impacts of various energy saving measures based on parameter optimization are investigated before recommending tailored actions to optimize energy consumption. Similarly, in \cite{rocha2015improving}, Rocha et al. discuss the potential of energy saving in buildings using efficient energy and policy measures. Accordingly, an optimization model that mimics a smart building energy management system is developed through the aggregation of the decisions on heating and cooling systems operations with decisions on energy demand. Moving forward, an optimization and ontology-driven multi-agent recommender system is introduced in \cite{anvari2017multi} to reduce wasted energy. 
It can monitor and optimize energy usage within integrated home/building and/or microgrid systems using different renewable energy resources and controllable loads. In this regard, several agents are developed and integrated together with the aim of improving their cooperation and optimizing the operation strategy of the whole energy system. In \cite{lu2020economic}, in order to optimize energy saving and guarantee the thermal comfort of the end-users, the uncertainties due to outdoor weather conditions, building parameters and human behaviors are thoroughly modeled. Following, an adaptive economic dispatch approach is introduced, which is based on conducting a thermal comfort management process using a two-step algorithm. Similarly, in \cite{di2017two}, aiming at reducing energy demand uncertainties and energy bills in households and small public buildings, a two-step energy monitoring framework is proposed. Specifically, uncertainties due to energy demand variations and prediction errors of renewable generation are detected before generating appropriate recommendations to reduce wasted energy. Moreover, in \cite{salakij2016model,yu2017model}, energy usage optimization is conducted by forecasting the heat and moisture transfer, which directly affect the indoor climate and the overall thermal performance of buildings. In \cite{paul2019real}, reducing wasted energy in various buildings is ensured by optimizing the consumed energy, taking into consideration the abrupt changes produced by rooftop solar generation and the real-time price of energy. Accordingly, a novel parameter called the load criticality rate is deployed, which represents the threshold value applied by each building's occupants to their power consumption. Furthermore, the energy reduction task is formulated as a stochastic, multi-objective optimization problem. 
In \cite{lu2019thermal}, an analytical model that describes the thermal dynamic characteristics of district heating networks in buildings is developed to optimize energy consumption while keeping an acceptable comfort level for the end-users. \subsection{Computing platforms} Aiming at bridging the gap between the development and the implementation of energy efficiency recommender systems, this section presents the main computing solutions that can be utilized, as standalone solutions or in combination, to implement these systems. \vskip3mm \noindent \textbf{P1. Cloud computing} relies on the use of cloud infrastructure principles to provide reliable and scalable methods to solve resource-intensive computational issues \cite{abbasi2019software}. The employment of cloud-based services and solutions in home automation and building energy monitoring has become widely popular. The cloud computing model facilitates a broad variety of processing applications, exploits the potential of IoT and alleviates IoT processing restrictions by moving the most demanding parts, such as deep learning algorithms \cite{AlsalemiRTDPCC2020}, to the cloud. The main challenge for cloud computing solutions remains the privacy of the IoT data \cite{zhou2017security} that are transferred and processed on the cloud platform. \vskip3mm \noindent \textbf{P2. Edge computing} receives increasing attention, even though transmitting data to the cloud for processing has been a central paradigm in recent decades, establishing cloud computing as the prevailing model in computing \cite{ren2019survey}. In effect, the rapid growth in the number of devices and in data traffic in the age of IoT places major burdens on the capacity-limited Internet and leads to unregulated service delays. By utilizing cloud resources alone, it becomes impossible to fulfill the delay-critical and context-aware service specifications of IoT apps. 
Faced with these problems, computing paradigms are moving from clustered cloud computing to dispersed edge computing. Edge computing allows some of the data processing to be done on the device (i.e. the edge) to lift some of the burden off the cloud server \cite{Alsalaemi2020sca,HIMEUR2020115872,Himeur2020icpr} and guarantee privacy, thanks to the decentralised processing. \vskip3mm \noindent \textbf{P3. Fog computing} has recently been used for developing distributed, low-energy recommender systems based on IoT architectures. Its main applications are in the domains of healthcare \cite{devarajan2019fog}, banking \cite{hernandez2020fog}, and information brokerage \cite{wang2019fog}. Despite the many advantages of fog computing, which comprise low latency, privacy, uninterrupted service and location awareness, there is still no application that combines fog computing with energy efficient recommendations. \vskip3mm \noindent \textbf{P4. Hybrid edge-cloud approaches} stand between cloud and edge computing, and their primary aim is to increase the flexibility of IoT systems by moving data processing from the edge to the cloud and vice-versa, depending on the system constraints \cite{linthicum_edge_nodate}. For example, when a small dataset is in use, hybrid systems decide to offload it to the edge device for rapid processing and for reducing the communication overhead, while in more demanding situations, data are pushed to the cloud in order to get more efficient results. Also, employing an edge-cloud architecture can help distribute computations between the edge and the cloud for performance optimization \cite{huang2019deepar}. This flexibility of hybrid approaches creates new, out-of-the-box possibilities for more powerful yet resource-efficient IoT solutions. \subsection{Evaluation metrics} Modelling a recommender system that fits the needs of the business/initiative encompasses an evaluation phase that tests the recommender's capabilities to the limits. 
A recommender system suggests items (or actions) to users, based on their expected preferences. Several metrics have been devised for this purpose that allow evaluating the system's performance in predicting and providing sensible recommendations that fit a given scenario. Among the long list of metrics that can be used for the evaluation of the provided recommendations \cite{gunawardana2015evaluating, wu2012evaluating}, we highlight those that we consider relevant to an energy efficiency recommendation system. \vskip3mm \noindent \textbf{E1. Rating accuracy} measures the deviation between the predicted ratings and the actual ratings assigned by a user to each recommended item. The simplest and most popular error metrics are the \textit{Mean Absolute Error (MAE)} and the \textit{Root Mean Square Error (RMSE)} \cite{chai2014root}, which are defined as follows: \begin{equation} \label{eq:accuracy} MAE= \dfrac{1}{|\hat{R}|}\sum_{\hat{r}_{ui}\in \hat{R}} |r_{ui}-\hat{r}_{ui}| \qquad \qquad RMSE= \sqrt{\dfrac{1}{|\hat{R}|}\sum_{\hat{r}_{ui}\in \hat{R}} (r_{ui}-\hat{r}_{ui})^2} \end{equation} where $r_{ui}$ is the actual rating of user $u$ for item $i$ and $\hat{r}_{ui}$ is the predicted rating. \vskip3mm \noindent \textbf{E2. Ranking accuracy} assumes that items are ranked in decreasing-rating order for each user and only the top items are presented. Such metrics evaluate the similarity in the order of rated items, providing a more robust evaluation than MAE or RMSE.
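As a minimal, self-contained sketch of the two error metrics in equation \eqref{eq:accuracy} (the rating values below are hypothetical, chosen only for illustration):

```python
import math

def mae(actual, predicted):
    # Mean Absolute Error: average absolute deviation |r_ui - r^_ui|
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)

def rmse(actual, predicted):
    # Root Mean Square Error: penalizes large deviations more heavily than MAE
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(predicted))

# Hypothetical ratings on a 1-5 scale
actual_ratings = [5, 3, 4, 2]       # r_ui
predicted_ratings = [4, 3, 5, 1]    # predicted r^_ui
print(mae(actual_ratings, predicted_ratings))   # 0.75
print(rmse(actual_ratings, predicted_ratings))  # ~0.866
```

Because RMSE squares the deviations before averaging, a single badly mispredicted rating inflates RMSE more than MAE, which is why the two are usually reported together.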
Pearson ($c$) and Spearman ($\rho$) correlation coefficients measure the linear relationship between two (parametric/non-parametric) variables, as defined by the following equations: \begin{equation} \label{eq:correlation} c(x,y)= \dfrac{1}{N-1}\dfrac{\sum^N_{i=1}(x_i-\bar{x})(y_i-\bar{y})}{s_x \,s_y} \qquad \qquad \rho(u,v) = \dfrac{1}{N-1}\dfrac{\sum^N_{i=1}(u_i-\bar{u})(v_i-\bar{v})}{s_u \,s_v} \end{equation} where $x_i$ and $y_i$ are the $i^{th}$ elements in the variables of interest, and $\bar{x}$ and $\bar{y}$ are the sample means. Similarly, $s_x$ and $s_y$ represent the sample standard deviation for a sample of size $N$. $u$ and $v$ are the ranked variables counterpart of $x$ and $y$. The values vary between -1 and 1 where the former exhibits a strong negative relation between two variables, and the latter being a positive one. A value of 0 indicates the absence of any linear relationship. Such metrics can evaluate the recommender system as a whole, regardless of the used rating scale \cite{herlocker2004evaluating}. They enable developers to assess the recommender's ability to provide ratings for energy suggestions that are consistent with existing users' ratings, thus, evaluating the quality of the given suggestions. According to an experimental work done by \cite{herlocker2002empirical}, both Pearson and Spearman produced quite similar results, thus, being redundant if both used. Therefore, one can only use either in the context of recommender systems. \vskip3mm \noindent \textbf{E3. Information retrieval metrics}, such as precision, recall, and F1 score, are among the most relevant metrics that can be used in recommendation systems context with slight modifications. Since the item ratings are usually in a 1-5 scale, a threshold value $T$ is used to convert an absolute rating to a binary prediction that classifies whether the item is relevant or not \cite{herlocker2004evaluating}. The metrics are depicted in equations (\ref{eq:recall}-\ref{eq:f1_score}). 
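The two correlation coefficients of E2 (equation \eqref{eq:correlation}) can be sketched in plain Python as follows; this is a minimal illustration whose rank helper assumes no ties, whereas a full Spearman implementation would assign average ranks to tied values:

```python
import math

def pearson(x, y):
    # Sample Pearson correlation: covariance divided by the product of
    # the sample standard deviations (all denominators use N - 1)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman correlation: Pearson applied to the ranks of x and y
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    return pearson(ranks(x), ranks(y))

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0 (perfectly linear)
print(spearman([1, 2, 3], [10, 20, 15]))    # 0.5
```

The second example shows why Spearman is the rank-based counterpart of Pearson: only the ordering of the values matters, not their magnitudes.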
In the context of recommender systems, a true positive (TP) indicates that the recommended energy saving action is relevant within the user context, or has been accepted by the user, while a false positive (FP) means that the recommended action is irrelevant. On the other hand, a false negative (FN) indicates that the recommender system failed to recommend an energy-efficient action that was actually performed by the user, and lastly, a true negative (TN) refers to the system not providing energy-efficient suggestions when the context does not demand one. \begin{equation} \label{eq:recall} Recall = \dfrac{TP}{TP+FN} \end{equation} \begin{equation} \label{eq:precision} Precision = \dfrac{TP}{TP+FP} \end{equation} \begin{equation} \label{eq:f1_score} F1 \,\, Score = 2\times\dfrac{Precision~{\times}~Recall}{Precision + Recall} = \dfrac{2TP}{2TP+FP+FN} \end{equation} To elaborate further, if excessive energy consumption is detected, multiple recommendations can be relevant to reduce it. For instance, if a television or a room's lighting were left on while no one is in the room, a recommendation to turn off either the television or the lighting would be relevant (i.e. a TP), reducing the household's energy consumption. On the other hand, an FP would be a suggestion to turn off the television or the lighting in a room that is currently occupied. \vskip3mm \noindent \textbf{E4. Serendipity and freshness} measure the novelty and variability of the items recommended to users, in an effort to avoid recommending the same items over and over. Serendipity measures the amount of surprise an accepted action generates for a user \cite{gunawardana2015evaluating}: it is as if the recommender system informs the user about a new piece of information accompanying an action, which is relevant but which he/she has not heard about before.
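A minimal sketch of equations \eqref{eq:recall}-\eqref{eq:f1_score} combined with the thresholding step described above; the ratings and the threshold value $T=3.5$ are hypothetical choices for illustration:

```python
def prf1(actual, predicted, T=3.5):
    # Binarize 1-5 ratings with threshold T, then count TP/FP/FN
    tp = fp = fn = 0
    for a, p in zip(actual, predicted):
        relevant, recommended = a >= T, p >= T
        if recommended and relevant:
            tp += 1
        elif recommended and not relevant:
            fp += 1
        elif not recommended and relevant:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return precision, recall, f1

# Hypothetical actual vs. predicted ratings for five suggested actions
p, r, f = prf1(actual=[5, 2, 4, 1, 3], predicted=[4, 4, 5, 2, 2])
print(p, r, f)  # 0.666..., 1.0, 0.8
```

In the example, one irrelevant action is recommended (an FP), so precision drops below 1 while recall stays perfect; F1 balances the two.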
From this perspective, a distance metric adapted from \cite{gunawardana2015evaluating}, defined by equation \eqref{eq:serendipity-distance}, is utilized to check the serendipitous virtue of the recommender system. It calculates the distance between a suggested action $a$ and a set of previous suggestions $A$ that the user considered. Here, $n_{A,\,c}$ is the number of suggested actions within the same group $c \in C$ in $A$, $n_A$ is the maximum number of suggested actions from a single class $c$ in $A$, and $c(a)$ is the class of action $a$. \begin{equation} \label{eq:serendipity-distance} d(a,A) = \dfrac{1+n_A-n_{A,\,c(a)}}{1+n_A} \end{equation} Equation \eqref{eq:serendipity-distance} estimates the distance a recommender system accumulates for a list of suggested actions. As an example, imagine that a user $i$ was exposed to a list of actions $A$ throughout a certain period. By dividing this set into two subsets of suggested observed actions $A^i_{o}$ and hidden ones $A^i_{h}$, the set $A^i_{o}$ can be utilized by the recommender system to generate energy-efficient action suggestions. Now, if the recommender is asked to generate 10 actions, we would like it to suggest valid and relevant suggestions to user $i$ that are NOT in the $A^i_{o}$ set, thus increasing the system's serendipitous virtue. This is measured by calculating the distance score $d(a,A^i_{o})$ for each $a \in A^i_{h}$, where the score is reduced if actions from the same class as $a$, denoted $c(a)$, are numerous, i.e. $n_{A,\,c(a)}$ is large. It is worth noting that, in a sense, serendipity works against the accuracy of the recommender system; it is therefore important to periodically check the relevance of the suggested actions, as users may refrain from using the recommender system if it keeps suggesting irrelevant ones \cite{gunawardana2015evaluating}.
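Equation \eqref{eq:serendipity-distance} can be sketched as follows; the action classes below (e.g. \texttt{lighting}, \texttt{hvac}, \texttt{standby}) are hypothetical labels introduced only for this example:

```python
from collections import Counter

def serendipity_distance(action_class, past_classes):
    # d(a, A) = (1 + n_A - n_{A,c(a)}) / (1 + n_A)
    # action_class : the class c(a) of the candidate action a
    # past_classes : classes of the previously suggested actions in A
    counts = Counter(past_classes)
    n_A = max(counts.values(), default=0)  # largest single-class count in A
    n_Ac = counts.get(action_class, 0)     # occurrences of c(a) in A
    return (1 + n_A - n_Ac) / (1 + n_A)

# The user already saw three "lighting" suggestions and one "hvac" suggestion
past = ["lighting", "lighting", "lighting", "hvac"]
print(serendipity_distance("lighting", past))  # 0.25 (little surprise)
print(serendipity_distance("standby", past))   # 1.0 (maximally serendipitous)
```

As the example shows, the distance shrinks towards its minimum when the candidate action belongs to the most frequently suggested class, and reaches 1 for a class the user has never been shown.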
Freshness, on the other hand, indicates the recommender's capability of suggesting new recommendations each time the user interacts with it \cite{mcnee2006making}. However, since the available actions to reduce energy consumption are limited by nature in small households, it is possible to adjust the definition such that the same action is not recommended within a span of a few hours/days. Thus, equation \eqref{eq:serendipity-distance} can be revisited and utilized every $k$ hours/days to ensure that an action $a$ was minimally suggested in the previous $k$ hours/days. \vskip3mm \noindent \textbf{E5. Acceptance ratio (AR)} aims to quantify the agreement that a certain user exhibits towards the energy efficiency suggestions provided by the recommender system. In numbers, the ratio allows the recommender system to estimate the underlying probability of its suggestions being accepted, either on a holistic level or for a certain energy efficiency suggestion, e.g. turning off the air conditioner, as seen in equation \eqref{eq:acceptance_ratio}. \begin{equation} \label{eq:acceptance_ratio} AR = \dfrac{1}{C}\sum_{c=1}^CAR_c\qquad \qquad AR_c = \dfrac{1}{M}\sum_{i=1}^M\dfrac{a_{i,c}}{r_{i,c}} \end{equation} where $r_{i,c}$ is a recommendation and $a_{i,c}$ is either 0 or 1 depending on whether the recommendation $r_{i,c}$ is accepted. $M$ is the number of recommendations per energy efficiency suggestion belonging to the same group $C$. This metric becomes important when the recommender decides to send a suggestion, as it helps to answer the question: which energy saving suggestion should the recommender system send, an extreme energy saving suggestion with a low acceptance rate, or a moderate one with a high acceptance rate? \vskip3mm \noindent \textbf{E6.
Other metrics} aim to evaluate: i) the \textit{Coverage} of recommendations in terms of the item or user space, meaning that the recommender system should be able to suggest all possible actions and to recommend actions to all users, ii) the \textit{Confidence} of the system in its recommendations, iii) the \textit{Trust} of users in them, iv) the \textit{Utility}, v) the \textit{Risk}, vi) the \textit{Robustness}, and many more \cite{alsalemi2020achieving}. \subsection{Incentive measures} Aiming at increasing the acceptance ratio of energy saving recommendations, recommender systems should provide users with incentives that encourage and motivate them to adopt more sustainable energy behaviors. In this context, several incentives have been deployed in energy efficiency recommendation systems, such as feedback rating, green currency and social comparison. \vskip3mm \noindent \textbf{I1. Feedback rating} is of significant importance, because it reflects the end-users' satisfaction with the delivered recommendations \cite{karjalainen2011consumer,karlin2015effects}. Therefore, gathering feedback ratings helps to efficiently adapt the recommendations to the users' preferences and interests, and thereby contributes substantially to delivering sustainable reductions in energy consumption. \vskip3mm \noindent \textbf{I2. Green currency} is tightly coupled with the rapid development and prevalent use of blockchain and cryptocurrencies. The energy research community has also been inspired to create green and/or $CO_{2}$ coins, which can be used as motivation towards energy efficiency. For instance, in \cite{garbi2019beneffice}, end-user engagement is promoted via the adoption of monetary rewards, named $CO_{2}$ credits, in recognition of energy savings and effective achievements. \vskip3mm \noindent \textbf{I3. Social comparison} has recently been adopted in various recommender systems with different applications (e.g.
tourism and travel, movies, e-commerce, e-learning, healthcare, etc.). In the energy efficiency scenario, social comparison aims to motivate electricity saving in households \cite{petkov2011engaging}. This research area relies on fundamental theories of social psychology, which can teach us how to encourage end-users to preserve energy by presenting individuals with better profiles as a reference \cite{jain2013can,du2016modelling}. More specifically, normalized comparison modules (that perform consumer rankings) are incorporated in eco-feedback tools, in order to help end-users compare their energy usage patterns with those of their peers and neighbors. The effectiveness of this incentive mechanism comes from the fact that end-users are highly influenced by the engagements and rankings of their peers on social networks \cite{wemyss2019does,morley2018digitalisation}. All in all, Table~\ref{SummaryRS} summarizes the characteristics of the recent and relevant energy efficiency recommender systems. It highlights their advantages and outlines the main information about their taxonomy, which could be extracted from the overview conducted above. \begin{table} [t!] \caption{Summary of the relevant energy efficiency recommender systems proposed in the literature.} \label{SummaryRS} \begin{center} \begin{tabular}{llllll} \hline {\small Framework} & {\small Type of RS} & {\small Advantages} & {\small Computing} & {\small Objective} & {\small Application} \\ & & & {\small Platform} & & {\small Environment} \\ \hline {\small Pinto et al. \cite{Pinto2018}} & {\small M1} & {\small Recommend operational light intensity} & {\small P1} & {\small O1} & {\small Households} \\ & & {\small satisfying users' comfort} & & & \\ {\small Kaur et al.
\cite{kaur2019energy}} & {\small M1} & {\small Predict energy consumption ratings and} & {\small P2} & {\small O2} & {\small Households} \\ & & {\small offer personalized recommendations} & & & \\ {\small Schweizer et al. \cite{Schweizer7424470}} & {\small M1} & {\small Learn the energy usage habits and} & {\small P1} & {\small O1} & {\small Households} \\ & & {\small interests of consumers} & & & \\ {\small Zhang et al. \cite{Zhang8412100}} & {\small M2} & {\small Generate tailored recommendations} & {\small P1} & {\small O2} & {\small Households} \\ & & {\small following predicted ratings of energy usage} & & & \\ {\small Zheng et al. \cite{ZHENG2020117775}} & {\small M2} & {\small Provide appliance-level consumption} & {\small P3} & {\small O2} & {\small Households} \\ & & {\small recommendations} & & & \\ {\small ReViCEE \cite{KAR2019135}} & {\small M2} & {\small Predict collaborative recommendations of} & {\small P4} & {\small O1} & {\small Households} \\ & & {\small light preferences of end-users} & & & \\ {\small Garcia et al. \cite{Garcia2017}} & {\small M2} & {\small Produce tailored advice on end-users'} & {\small P1} & {\small O2} & {\small Households} \\ & & {\small activities in similar scenarios} & & & \\ {\small Shigeyoshi et al. \cite{Shigeyoshi2013}} & {\small M3} & {\small Produce contextual based advice with} & {\small P3} & {\small O2} & {\small Households} \\ & & {\small social experiment ratings} & & & \\ {\small Luo et al. \cite{Luo2017}} & {\small M3} & {\small Tailored recommendations with textual} & {\small P1} & {\small O2} & {\small Households} \\ & & {\small appliance advertisements} & & & \\ {\small Wei et al.
\cite{Wei2018}} & {\small M3} & {\small Provide move and shift-schedule} & {\small P1} & {\small O1} & {\small Commercial} \\ & & {\small recommendations} & & & {\small buildings} \\ {\small REHAB-C \cite{SARDIANOS2020394}} & {\small M3} & {\small Tailored recommendations with feedback} & {\small P2, P3} & {\small O2} & {\small Academic} \\ & & {\small on end-users' preferences} & & & {\small buildings} \\ {\small Starke et al. \cite{Starke2015}} & {\small M4} & {\small Provide Rasch profile based recommen-} & {\small P3} & {\small O1} & {\small Households} \\ & & {\small dations of end-users' behavior} & & & \\ {\small Starke et al. \cite{Starke2017}} & {\small M4} & {\small Generate Rasch profile recommendations} & {\small P3} & {\small O1} & {\small Households} \\ & & {\small based on a social experiment} & & & \\ {\small Li et al. \cite{Li7093924}} & {\small M5} & {\small Provide tailored recommendations to support} & {\small P1} & {\small O1} & {\small Work spaces} \\ & & {\small the use of renewable energy solutions} & & & \\ {\small Wei et al. \cite{wei2020deep}} & {\small M5} & {\small Optimize power consumption using a} & {\small P1} & {\small O2} & {\small Commercial} \\ & & {\small human-in-the-loop model} & & & {\small buildings} \\ {\small Bravo et al. \cite{Bravo2019}} & {\small M6} & {\small Create energy saving recommendations} & {\small P2} & {\small O1} & {\small Households} \\ & & {\small based on electricity price} & & & \\ {\small Aiello et al. \cite{aiello2018decision}} & {\small M6} & {\small Fuse contextual information, mathematical} & {\small P2} & {\small O1} & {\small Public buildings} \\ & & {\small formulation and experts' knowledge} & & & \\ {\small Kiran et al. \cite{R2020113054}} & {\small M7} & {\small Alleviate the cold start issue} & {\small P1} & {\small O2} & {\small Households} \\ {\small Pinto et al.
\cite{app10144926}} & {\small M7} & {\small Improve predictions of consumers'} & {\small P3} & {\small O2} & {\small Public buildings} \\ & & {\small preferences and recommendations} & & & \\ {\small Rocha et al. \cite{rocha2015improving}} & {\small M8} & {\small Decision aggregation of heating and cooling} & {\small -} & {\small O2} & {\small Public buildings} \\ & & {\small systems operations with energy demand} & & & \\ {\small Anvari et al. \cite{anvari2017multi}} & {\small M8} & {\small Multi-agent based optimization} & {\small P1} & {\small O2} & {\small Households/} \\ & & & & & {\small public buildings} \\ \hline \end{tabular} \end{center} \end{table} \section{Critical analysis and discussion} \label{sec4} First of all, based on the overview of existing energy efficiency recommender systems conducted above and on the summary in Table~\ref{SummaryRS}, it is clear that most of the frameworks have been implemented using cloud computing. This is due to its various advantages, among them the flexible lease and release of computing resources according to the end-user's requirements \cite{ari2019enabling}. However, cloud computing has its own issues, such as the privacy preservation problem, which can occur whenever data leak beyond the service provider or information is deleted purposely \cite{schaefer2020management}. Additionally, technical issues can also occur because servers may go down, making it hard to regain access to the needed resources/data at the right moment and from anywhere. For instance, non-availability of services could be the result of a denial-of-service (DoS) attack \cite{mahjabin2017survey}.
Furthermore, it is worth noting that most of the recommender systems have been used to improve energy consumption behaviors in households, while other frameworks have discussed their utilization in commercial buildings, work spaces and other types of public buildings (e.g. hotels and hospitals). As for the objectives of the recommender systems, almost equal attention has been given to developing action recommendations and strategy recommendations in existing energy saving recommendation frameworks. On the other hand, recommender systems for energy efficiency face other difficulties and issues, which need to be overcome when developing reliable solutions. In this section, we focus on introducing them and on discussing the commercialization potential of energy saving recommender systems and related issues, i.e. identifying the main market barriers and market drivers \cite{himeur2020marketability}. \subsection{Limitations and difficulties} Although there has been significant progress in developing recommender systems, as discussed above, various issues that hinder the establishment of effective recommendation engines still exist. The most critical problems and difficulties that exercise a negative impact can be summarized as portrayed in Fig.~\ref{critical_analysis}, which outlines the main difficulties and commercialization issues discussed in the context of energy saving recommender systems. \subsubsection{Data sparseness} Recommender systems are mainly based on the analysis of historic consumer data, which usually comprise a few customer demographics and mainly customer ratings for items (or actions). Because a consumer may only rate a small number of the actions that are available on the recommendation platform, this leads to sparseness in the ratings for some actions or users \cite{natarajan2020resolving,jain2020efficient}.
Explicitly, this results in producing unreliable recommendations, which in turn could reduce consumer satisfaction \cite{zhang2020alleviating}. \begin{figure*}[!t] \centering \includegraphics[width=\columnwidth]{Fig4.pdf} \caption{A summary of the limitations and difficulties of energy saving recommender systems.} \label{critical_analysis} \end{figure*} \subsubsection{Cold start problem} This issue refers to the lack of data (i.e. ratings) for new consumers or new actions. The problem is more acute at the start of the recommendation generation process, when the platform still has only a few consumers and limited information about their preferences \cite{son2016dealing}. Explicitly stated user preferences can be used to enhance the customer profile, and user similarity metrics can be used to assign newcomers to existing customer clusters with similar preferences. Another problem occurs in systems that refresh their catalog of items or actions. When a new action is added as an option to the system, it does not have any ratings yet, so collaborative methods cannot recommend it. As a result, such an action cannot easily be recommended and is hence less likely to be noticed by consumers \cite{lika2014facing,liu2014promoting}. Probabilistic techniques that recommend new items with lower probabilities, as well as rule-based recommenders, can be used instead to tackle the cold start problem. \subsubsection{Lack of benchmark datasets} Due to the large variability in the types of buildings and in the respective objectives for energy savings, when developing a recommender system for this purpose it is of utmost importance to use energy consumption datasets from different environments (e.g. households, commercial buildings and industrial areas) to evaluate the developed systems and efficiently generate personalized recommendations.
In this regard, datasets are utilized as benchmarks for developing new recommender models and comparing them to existing systems under the same conditions. Therefore, datasets play a major role in the creation of successful recommendation systems, and most of the effective and robust recommender systems are those built upon large-scale datasets containing big amounts of consumers' data \cite{verbert2011dataset}. However, for the energy sector, another important issue that impedes the generation of efficient recommendations is the absence of appropriate datasets, along with the difficulties encountered in collecting them. Moreover, the lack of open-access repositories containing existing datasets makes the comparison of recommender systems very difficult, or even impossible \cite{drachsler2010issues, ccano2015characterization}. \subsubsection{Lack of reproducibility of results} As presented in Section \ref{sec3.2}, various recommender systems have been developed and utilized for generating energy saving recommendations. However, comparing their efficacy is a daunting task, since the assessment results can hardly be reproduced due to the absence of toolkits that support such tasks \cite{isinkaye2015recommendation}. Explicitly, the issues that hinder the reproduction of recommendation system results have put the recommender system community in a problematic situation \cite{dacrema2019troubling, ekstrand2011rethinking}. Researchers and developers (who require effective recommender algorithms and baselines against which to compare novel approaches) usually obtain very limited guidance from existing research and development sources \cite{HimeurENB2020}.
In this line, to alleviate these issues and enable reproducibility, the recommendation systems community needs to (i) review other research topics and draw inspiration from them; (ii) establish a common definition of reproducibility; (iii) capture and comprehend the decisive factors that impact reproducibility; (iv) perform more extensive experiments; (v) promote the development and use of recommender frameworks; and (vi) launch experimental platforms that include recent state-of-the-art algorithms and datasets, and establish best-practice guidelines for recommendation system research, which will also ensure a fair comparison among systems \cite{beel2016towards,ie2019recsim}. \subsection{Commercialization of developed solutions} \subsubsection{Market drivers} Energy conservation and energy security have been major concerns in recent years. Energy deficiency does not only affect the economy, culture and development of the world, but also contributes to global warming. This is why recommender systems play an imperative role in transforming end-user behavior towards improved energy efficiency. Therefore, it is of fundamental importance to investigate the main market drivers that will force the incorporation of energy-efficient and cost-effective technology, and to perform activities that help to steer the decision-making process for energy efficient buildings, thus providing an incentive for energy conservation \cite{trianni2013drivers}. An important driver is the EU energy and climate package \cite{climateenergy2030} for the year 2030, which includes the goals of a 40\% reduction of greenhouse gas emissions (compared to 1990 levels), a 27\% share of renewable energy sources (RES) in the EU-27 energy mix (up from 6.5\% today), and a 27\% reduction in primary energy consumption (saving 13\% compared to 2006 levels). These goals have led to the inception of many energy efficiency research initiatives, some of which incorporate recommender systems \cite{himeur2020data}.
A global driver, derived from the same incentives, is the strategic decision made by the top management of several firms to comply with the new energy saving plans. Such decisions include energy reductions within the firm itself, optimizations of the products' and/or services' production pipeline, and efforts to reduce the overall carbon footprint. The TARHSEED initiative is a prime example: this governmental initiative, which aims to raise awareness of energy saving activities and to reduce unnecessary energy usage in the country as a whole \cite{kaabi2012conservation, tarhseed_qatar_2020}, has been boosted by several advertising initiatives, standards and competitions, which have motivated both individuals and corporations to optimize their energy efficiency. \subsubsection{Market barriers} The technology readiness level (TRL), the compatibility (i.e. how feasible it is to integrate a solution with existing systems), and the business model behind a technological solution are crucial factors in its ability to change its target market. The same factors hold for energy saving recommender systems and affect their impact on the energy market landscape. The global energy landscape is changing, driven by the need to reduce emissions and increase the security of supply, while increasing the share of intermittent renewable energy in the energy mix. In this new landscape, the increasing power consumption requires maintaining the power grid reliability, regulating electricity flow with less mismatch between electricity generation and demand, and reducing the energy footprint. The optimization of scheduling, the improvement of energy quality and asset efficiency, the integration of dynamic pricing and the incorporation of more renewable electricity sources are among the continuous challenges of the traditional energy grid. For a recommender system to penetrate the market and establish its presence, there is a number of market barriers to consider.
First, technical barriers constitute a major obstacle to adopting recommender systems in mainstream products and services. Namely, there is a discrepancy between the different evaluation standards of recommender systems, i.e. there is no unified standard for objective benchmarking. Moreover, the lack of comprehensive datasets impedes the progress in creating powerful, dynamic recommenders. From a legal standpoint, several market barriers exist, which in the context of energy efficiency comprise a number of regional standards and regulations for residential and industrial energy consumption. For example, in Greece, the transposition of the European Directive 2009/28/EC was established in 2010: in order to acquire a new building permit from 2011 onwards, it is necessary either to cover 60\% of the annual demand for domestic hot water with solar thermal systems or to justify the technical obstacles in the event of non-compliance. This, in turn, puts additional obstacles in the way of commercializing recommender systems, as they have to comply with many standards to obtain international recognition. \section{Current challenges and future orientations} \label{sec5} The focus in this section is to provide insights on where current energy recommender systems research is heading, as well as on the related challenges that will attract considerable R\&D in the foreseeable and far future. Specifically, Fig.~\ref{current_future} summarizes the current challenges and future orientations of energy efficiency recommender systems, as addressed in this framework.
\begin{figure*}[!t] \centering \includegraphics[width=\columnwidth]{Fig5.pdf} \caption{Current challenges and future orientations.} \label{current_future} \end{figure*} \subsection{Current challenges} \subsubsection{Explainable AI \& recommender systems} The recent success of machine learning (ML) techniques in solving daily prediction or classification tasks has dramatically increased the number of applications that adopt ML models, using them as black boxes, which makes them difficult for end-users to understand \cite{gunning2017explainable}. The ability of an ML model to \enquote{explain itself and its actions} to the users is considered to be an emerging important factor for current modern AI applications transitioning to explainable AI models \cite{arrieta2020explainable,WU2021165}, as depicted in Fig.~\ref{fig:transition-of-recommendation-systems}. \begin{figure*}[!ht] \centering \includegraphics[width=\columnwidth]{Fig6.pdf} \caption{The transition to explainable AI.} \label{fig:transition-of-recommendation-systems} \end{figure*} In general, there are five key concepts that can describe each recommendation task, referred to as ``the five \textit{Ws} of recommendations'': $W$hen, $W$here, $W$ho, $W$hat and $W$hy \cite{zhang2020explainable}. The five \textit{Ws} represent, respectively, time-driven recommendations (\textit{when}), location-driven recommendations (\textit{where}), their social component (\textit{who}), application-driven recommendations (\textit{what}), and their explanations (\textit{why}). Following the emergence of explainable AI, the goal of \enquote{Explainable Recommendation Systems} is to offer helpful suggestions to consumers, accompanied by explanations that generally relate to the reasons for providing these recommendations or the advantages of selecting the suggested alternatives \cite{SAGI2020124,fernandez2020random}.
The key contribution of these explanations is that they boost the system's persuasiveness, the customers' comprehension and satisfaction, and offer an immediate reward to the user. Recent work on explanation-driven recommendations is centered around two core aspects: 1) the type of explanations generated (e.g. textual, visual, etc.); and 2) the algorithm or model employed to generate the explanation, which can be loosely classified into matrix factorization, topic modeling, graph-driven, deep learning, knowledge-graph, interaction rules, post-hoc models, etc. \cite{zhang2020explainable}. Explainable recommendations, classified by the nature of the explanation produced, are enumerated as follows: \begin{itemize} \item \textbf{User-based and item-based explanations}: This is a traditional type of explanation based on users' feedback; it is expressed as a statement of similarity among the system's different users (in the case of user-based explanations) or items (in the case of item-based recommendations). \item \textbf{Content-based explanations}: This type is solely based on the item's feature space (e.g. for book recommendations, the book type, the writer, etc.). \item \textbf{Textual explanations}: Here, the recommendations include explanation sentences that may be based on other users' reviews or on natural language processing techniques. \item \textbf{Visual explanations}: Visual explanations utilize item images for explainable recommendations, indicating the parts of an image that match item images the user might be interested in. \item \textbf{Social explanations}: These explanations refer to items that the user's \enquote{friends} in social networks or in a specific community also prefer. \item \textbf{Hybrid explanations}: Hybrid explanations refer to combinations of one or more of the previous types of explanations.
\end{itemize} Focusing on energy saving recommender systems, we currently identify a lack of recommendation systems in the area of energy sustainability that adopt the recent trends of explainable AI. Recent surveys, such as \cite{zhang2018explainable}, overview existing methods and outline the current research issues related to explainable recommendations. For instance, \cite{gao2019explainable} proposes a deep explicit attentive multi-view learning architecture for modeling the multi-level characteristics of explanations, while the framework in \cite{balog2019transparent} examines a set-based recommendation platform that generates textual and transparent explanations of film recommendations. Aiming at a knowledge-based scheme for explainable item recommendations, a technique to leverage external knowledge is proposed in \cite{catherine2017explainable}, based on adopting knowledge graphs when information from content and product/item reviews is unavailable for generating explanations. Interpretable models are based on transparent processes to decide the recommendation lists; hence, it is easier to generate appropriate explicit feature-level explanations that justify the recommender's suggestion of particular items \cite{zhang2014explicit}. In the same vein of graph-based models, He et al. \cite{he2015trirank} introduce a technique that ranks the vertices of a tripartite graph and furnishes explanations for the top-ranked user--item--aspect triplets. By contrast, in the field of energy saving and recommendations for energy-related behavior, there is limited literature that elaborates on the rules for producing a particular recommendation. The authors in \cite{grimaldo2019user} propose a user-centric, visual-analytics approach for interactive and explainable forecasting and analysis of electric power demand in prosumer settings.
Moreover, it has been advocated that this would be complemented by behavioral analysis to enable treating possible relationships between energy usage footprints and the interaction of prosumers with energy analysis tools, including customer portals and recommendation systems. \subsubsection{Psycho-cognitive recommender systems} Most current tailored energy recommender systems are, by and large, limited in terms of effectiveness \cite{sardianos2020data}. To improve their performance, incorporating cognitive and behavioral knowledge is essential, so that tailored recommendation frameworks become friendlier and more human-centric \cite{shi2015towards,zhao2014context}. This eventually enhances users' experience and loyalty and increases their satisfaction. To that end, more effort should be devoted to this direction, aiming at developing psycho-cognitive recommendation systems that generate personalized energy saving actions and advice based on consumers' preferences, emotional states and centers of interest \cite{shafto2016human,kopeinik2016improving}. Accordingly, psycho-cognitive recommender systems are new intelligent recommender systems that help in (i) comprehending end-users' preferences; (ii) detecting changes in end-users' habits and attitudes through time; (iii) predicting end-users' unknown choices and behavioral change; and (iv) investigating adaptive techniques for generating intelligent recommendations within a changing environment \cite{hamlabadi2017framework,kopeinik2017applying}. All these tasks combined could improve end-users' behaviors and increase their awareness towards a more sustainable and energy-efficient usage \cite{aguilar2017general,beheshti2020towards}.
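As a minimal illustration of task (ii) above, detecting a change in end-users' habits can be sketched as a comparison of recent and past usage windows. All names, window sizes and thresholds below are hypothetical assumptions; a deployed system would rely on a proper statistical change-point detector.

```python
def habit_change(daily_usage, window=7, threshold=0.5):
    """Flag a possible change in an end-user's habits by comparing the
    mean of the most recent window of daily values with the mean of the
    window just before it (illustrative sketch only)."""
    if len(daily_usage) < 2 * window:
        return False  # not enough history yet
    recent = daily_usage[-window:]
    previous = daily_usage[-2 * window:-window]
    mean_recent = sum(recent) / window
    mean_previous = sum(previous) / window
    return abs(mean_recent - mean_previous) > threshold

# Hypothetical daily air-conditioning hours: a stable week, then a jump
# (e.g. the end-user starts teleworking).
history = [2.0] * 7 + [4.5] * 7
print(habit_change(history))  # -> True
```

A flagged change would then trigger the adaptive recommendation mechanisms of task (iv), e.g. re-profiling the user before generating new advice.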
A typical representation of a psycho-cognitive recommender system is illustrated in Fig.~\ref{cog-rec}, in which three important mechanisms, namely knowledge-driven, cognition-driven and data-driven, are used to develop a psycho-cognitive recommendation framework. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{Fig7.pdf} \caption{Typical representation of a cognitive recommender system for energy saving.} \label{cog-rec} \end{figure} The same category also includes recommender systems that build on persuasiveness in order to maximize acceptance \cite{bothos2016recommender}. In some cases, the messages sent to the user are also personalized to match the user's preferences and values \cite{sardianos2020emergence}. Active learning is a key component in such approaches because it allows the system to continuously adapt to changing user needs and demands \cite{sanchez2020persuasion}. \subsubsection{Privacy preserving recommendations} The preservation of user privacy is a key requirement in several recommender systems, especially in online social communities. Several techniques have been proposed in the past, ranging from k-anonymity to differential privacy and homomorphic encryption. These frameworks can be split into three main groups: (i) perturbation-based techniques that introduce noise into the existing data \cite{puglisi2015content} without affecting the final recommendation result; (ii) encryption-based schemes that transform the original information within the main recommendation technique (e.g. within the matrix factorization component \cite{tang2017privacy}); and (iii) techniques that develop novel matrix factorization algorithms under local differential privacy (LDP) \cite{shin2018privacy}. In the case of content recommender systems, group-based approaches \cite{li2017efficient} implement the principles of k-anonymity in order to maintain recommendation efficiency without compromising user privacy.
Similar challenges arise for navigation solutions that rely on crowd-sourced data collection \cite{tseng2016privacy}. Another essential challenge in privacy preservation concerns the servers themselves, especially when they are unreliable (untrusted) or contain security weaknesses (vulnerabilities); in such cases, collecting consumers' feedback may result in cyber liability owing to data leakage \cite{wang2016trust}. Early works on the privacy of consumer data in electric load monitoring applications \cite{mclaughlin2011protecting} mostly focus on non-intrusive monitoring techniques to combat potential invasions of privacy \cite{himeur2020smart}. Later works minimize the amount of collected reference data through sampling; for example, the authors in \cite{englert2015enhancing} remove redundant energy traces that do not contribute new knowledge to the recommender system. In the same direction, privacy-preserving recommendation approaches aim at preserving consumers' privacy by hiding their rating feedback from servers and/or other consumers \cite{badsha2017privacy,jiang2019towards}. Fig.~\ref{priv-rec} presents a typical representation of an energy saving recommender system. It illustrates the sensitive information that needs to be encrypted before being submitted to the recommender system's server, such as energy consumption data (which provides an intruder with information about the presence/absence of end-users in their household), end-users' feedback and ratings, and private information (personal data, location, number of end-users, etc.) \cite{badsha2016practical,xu2018privacy}. After storing and processing the collected data, the recommender system encrypts the generated recommendations before sending them to the targeted end-user. Moreover, end-users can collaborate with each other to compute action similarity using their private keys.
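A minimal sketch of the perturbation-based group (i) above is the classical Laplace mechanism: each rating is perturbed locally before it ever reaches the server. The rating scale, epsilon value and function names below are illustrative assumptions, not the API of any cited scheme.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) via the inverse CDF (stdlib only)."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_rating(rating: float, epsilon: float, rng: random.Random,
                   lo: float = 1.0, hi: float = 5.0) -> float:
    """Perturb one bounded rating with epsilon-differential privacy.

    The Laplace mechanism uses noise of scale sensitivity/epsilon, where
    the sensitivity of a single rating in [lo, hi] is (hi - lo).  The
    noisy value is clipped back to the valid rating range before being
    sent to the recommender server.
    """
    scale = (hi - lo) / epsilon
    return min(hi, max(lo, rating + laplace_noise(scale, rng)))

rng = random.Random(0)
noisy = perturb_rating(4.0, epsilon=1.0, rng=rng)
assert 1.0 <= noisy <= 5.0  # the server only ever sees in-range values
```

Smaller values of epsilon inject more noise (stronger privacy) at the cost of rating fidelity; this is exactly the privacy--utility tradeoff the perturbation-based schemes above must navigate.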
On the other hand, with the arrival of blockchain technology, new opportunities have opened up to develop a new generation of recommender systems that can overcome privacy-preservation issues and protect consumers' data \cite{begum2019towards,pu2020efficient}. For example, Bosri et al. propose Private-Rec, a privacy-preserving recommender system using AI and blockchain \cite{bosri2020integrating}. Explicitly, blockchain is used to provide the end-user with a secure mechanism that exploits its distributed nature, where data can only be exchanged with the required permission. In \cite{casino2019efficient}, blockchain is deployed as the backbone of a decentralized recommender system, in which a secure architecture is introduced using decentralized locality-sensitive-hashing classification along with recommendation generation. \begin{figure*}[!t] \centering \includegraphics[width=\columnwidth]{Fig8.pdf} \caption{A typical privacy-preserving recommender system based on encryption and collaborative filtering.} \label{priv-rec} \end{figure*} \subsubsection{Time-aware recommendations} The concepts of time-aware and context-aware recommendations have been widely discussed in the recommender systems community. From the early works on movie recommendations \cite{gantner2010factorization} to the more recent works on time-aware point-of-interest recommendations \cite{yuan2013time, zhang2015ticrec}, several frameworks for modeling, computing and presenting time-aware recommendations have been proposed in the general domain \cite{stefanidis2013framework, campos2014time}. Recommender systems for the energy sector differ considerably from those used in other research topics. Explicitly, most existing models concentrate on recommending energy saving actions that fit consumers' preferences while giving little importance to temporal information and its influence on the recommendations \cite{linda2020effective}.
In this respect, we assert that further attention should be paid to time-aware recommendations in energy saving applications in buildings to push them into the foreground \cite{wang2018personalized,qian2019ears}. This kind of recommendation is particularly appropriate in emergency situations, e.g. the Coronavirus (COVID-19) pandemic, where real-time and time-aware recommendations should be provided according to the current situation. Explicitly, due to the mass restrictions imposed on people's movement, the rise of teleworking and online learning has led to an increase in energy consumption in domestic buildings \cite{beheshti2020towards,nilashi2020intelligent}. On the other hand, with the widespread use of ML tools, deep learning models are a promising approach for developing recommender systems that incorporate contextual information into neural collaborative filtering frameworks \cite{lamche2015context,unger2020context}. \subsubsection{Large-scale recommender systems} In the big data era, modern energy saving recommendation systems face tremendously increasing data volume and complexity due to the massive number of connected devices. Traditional computation algorithms and experience with small datasets may no longer be efficient. Therefore, developing robust recommender systems that are capable of processing large-scale data is becoming a challenging endeavour for several applications. The authors in \cite{eirinaki2018recommender} provide an interesting survey on the challenges and solutions for recommender systems in large-scale social networks. Data volume, variety, and velocity are the three major challenges for recommender systems in large social networks, which bring state-of-the-art collaborative filtering algorithms to their limits. Additionally, the large volatility of social network data (e.g.
new users and items added or removed on a daily basis) has raised interest in new evaluation metrics that promote recent \cite{sanchez2018time} and diverse \cite{kunaver2017diversity} entries (e.g. diversity, freshness, serendipity, etc.) and tackle the cold-start problem. From the ML and deep learning perspective \cite{batmaz2019review}, graph convolutional methods are gaining researchers' interest \cite{ying2018graph}, since they can summarize the graph structure of social networks and combine it with the side information that may be hidden in the items or in the relations among them. Compact latent factor models that combine content with ratings \cite{liu2016large} prove to be more efficient than plain collaborative filtering algorithms in tackling the cold-start problem. In order to balance the exploration--exploitation dilemma (exploiting known interesting items while exploring new ones) and continuously capture user feedback without relying on item context, multi-armed bandit approaches \cite{barraza2020introduction, zhou2017large, canamares2019multi, sanz2019simple} have recently been proposed. To solve the technical issues that may arise from scaling recommender systems to large datasets, several parallel and distributed algorithms have been proposed, which either rely on splitting the dataset using social or other information \cite{sardianos2017scaling, bathla2020scalable} and processing it in parallel, or on refactoring existing algorithms to take advantage of graphical processing units (GPUs) \cite{li2019efficient, li2017msgd, sardianos2018survey}. The issue of big data handling has also been studied in the domain of energy-efficient recommender systems for recommending energy plans \cite{zhang2018big}, providing actionable recommendations \cite{sardianos2020data} or improving comfort and energy efficiency in tandem \cite{kar2019revicee}.
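To make the exploration--exploitation tradeoff mentioned above concrete, a minimal epsilon-greedy bandit over energy-saving tips can be sketched as follows. The item names and acceptance probabilities are hypothetical, and epsilon-greedy is only the simplest member of the bandit family cited above.

```python
import random

def epsilon_greedy(accept_prob, n_rounds=20_000, epsilon=0.1, seed=0):
    """Epsilon-greedy recommender: with probability epsilon recommend a
    random item (exploration); otherwise recommend the item with the
    best empirical acceptance rate so far (exploitation)."""
    rng = random.Random(seed)
    items = list(accept_prob)
    counts = {i: 0 for i in items}
    successes = {i: 0 for i in items}

    def mean(item):
        return successes[item] / counts[item] if counts[item] else 0.0

    for _ in range(n_rounds):
        if rng.random() < epsilon:
            item = rng.choice(items)      # explore a random tip
        else:
            item = max(items, key=mean)   # exploit the best tip so far
        if rng.random() < accept_prob[item]:  # did the user accept it?
            successes[item] += 1
        counts[item] += 1
    return counts

# Hypothetical acceptance probabilities of three energy-saving tips.
counts = epsilon_greedy({"lower_thermostat": 0.30,
                         "unplug_standby": 0.55,
                         "switch_to_led": 0.45})
```

After enough rounds, the tip with the highest true acceptance probability accumulates most of the recommendations, while a constant fraction of traffic keeps probing the alternatives, which is precisely how such systems cope with newly added items.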
The solutions discussed so far in the pertinent literature focus on data sampling or compression. \subsection{Future orientations} This section highlights the most promising research directions that will have a significant impact on improving the effectiveness of energy saving recommender systems in the near future. \subsubsection{Explainable recommender systems} As described before, increasing the user's trust in the system and the system's transparency is an important concept in modern ML models, as well as a tool for maximizing recommendation acceptance in modern recommender systems. Thus, in the field of recommender systems for energy saving, the system has to accompany every recommendation for an energy saving action with: \begin{enumerate} \item an explanation of \textbf{why} this particular advice is suggested; \item a statistical fact on \textbf{what} the benefits of following the recommended action are. \end{enumerate} Hence, we have introduced the (EM)\textsuperscript{3} explainable recommender system for energy efficiency\footnote{(EM)$^3$: Consumer Engagement Towards Energy Saving Behavior by means of Exploiting Micro Moments and Mobile Recommendation Systems (\url{http://em3.qu.edu.qa/})}, and described the two most essential characteristics that recommendation explainability is based on, as depicted in Fig.~\ref{fig:explRecommendations-methodology} \cite{SARDIANOS2020394,sardianos2020smart}: \begin{enumerate} \item \textbf{Reasoning:} This feature explores the context of global recommendations and seeks to provide a thorough justification of why each recommendation has been made. This may include metadata about the \textit{end-user context} (e.g. the end-user is not present in the room), about the \textit{device consumption} (e.g. it has been turned on for a long time) or about the \textit{external circumstances} that cause the appliance to be switched off.
\item \textbf{Persuasion:} This aspect draws on end-users' expectations, motivations and long-term values, and adopts their ratings to pick the most suitable and relevant explanation for each recommended intervention. \end{enumerate} \begin{figure*}[!ht] \centering \includegraphics[scale=0.64]{Fig9.pdf} \caption{The flow of explainable recommendations generated in the (EM)\textsuperscript{3} framework.} \label{fig:explRecommendations-methodology} \end{figure*} Based on this strategy, and adopting a hybrid type of explanation in our approach, we embed a persuasion fact along with the textual explanation in the recommendation's body. Using this approach, we communicate the actual benefit the recommended action will achieve for the end-user, in an attempt to persuade him/her and increase his/her acceptance of the provided recommendations. Initial results of our evaluations show that such an approach can impact users' trust in the system and can bring an increase of 20\% in the acceptance ratio of the provided recommendations. \subsubsection{Visualization recommender systems} In this technological age, it is no secret that humans are attracted to visual media much more than to textual media; this can be witnessed in how millennials use technology nowadays. With this in mind, it can be argued that, in order to have a better recommendation dialogue with end-users, supporting recommendations with visual charts and evidence can make them significantly more persuasive. This by no means indicates that textual recommendations, i.e. the explanations provided by the recommenders, should be discarded; rather, the two are complementary. Visualizations and textual interpretations, hand in hand, can be fully integrated to structure the suggestions given by recommender systems, which are deemed to influence behavioral change.
All the frameworks discussed below have used visualizations in one way or another to influence behavioral change. A semantic smart home system for energy efficiency, namely SESAME, is proposed in \cite{Fensel2013} to provide daily and monthly overlays of energy consumption data, CO$_{2}$ footprint and financial impact; the user interface (UI) used to control the appliances and create certain rules is also shown. Fernández et al. \cite{RodriguezFernandez2016} showcase a heatmap exhibiting, in an hourly fashion, the usage of air conditioning facilities for a whole week; similarly, a friendly UI is provided to give users more control over the offered services through smart devices. The enCOMPASS framework pushes visualizations aided by context-aware collaborative recommender systems on mobile platforms to provide energy-efficient recommendations from a socio-technical point of view \cite{Fraternali2017,Fraternali2018}. Moreover, the framework includes two smartphone games to teach children the importance of rationalizing the consumption of both water and energy: the former is taught through the ``Drop! The Question'' application, developed within the SmartH2O framework, and the latter through the ``Funergy'' application \cite{Albertarelli2017}. Entropy, another framework, provides conventional time-series visualization of streaming sensor data in a desktop application to aid the recommender system \cite{Fotopoulou2017}, while both Bernard and HEMS-IoT offer a mobile application for that purpose \cite{Zorrilla2019,Machorro-Cano2020}. The CASER framework provides both web and mobile applications for data visualization, showcasing various visualizations, including time-series and heatmap charts, at both the household and substation (multiple households) levels \cite{Sitoula2019}.
Lastly, (EM)\textsuperscript{3} provides two distinct applications: the first, on iOS, showcases data visualization in recommendations \cite{Alsalemi2020}, and the second, on both Android and iOS and developed with React Native, studies the effect of different charts on end-users' understanding \cite{Al-Kababji2020}. Fig.~\ref{vis-rec} depicts the flowchart of the visualization recommender system developed in (EM)\textsuperscript{3}. From the previously illustrated work, three important prospects can be further investigated when integrating the data visualization pillar with recommendations. Firstly, further studies can be conducted to understand the impact different visualizations have on end-users, as in \cite{al2020interactive}, and to create novel data visualizations specifically designed for energy consumption data that are simple for end-users from different backgrounds. Secondly, the visualization can be used hand in hand with the suggested recommendation; in other words, the recommender system refers to the visualization and demonstrates the anomalous consumption through it. Thirdly, with the recent pandemic humanity is facing (i.e. COVID-19), people are working from home for social distancing and, thus, energy consumption has increased in the domestic sector \cite{BBC2020}; therefore, it is more important than ever to generate personalized and timely recommendations. \begin{figure*}[!htb] \centering \includegraphics[width=\columnwidth]{Fig10.pdf} \caption{Example of the visual recommender system proposed in the (EM)\textsuperscript{3} framework.} \label{vis-rec} \end{figure*} \subsubsection{Non-intrusive recommender systems} Non-intrusive load monitoring (NILM) has been deployed in different energy saving projects instead of submetering for detecting appliance-level consumption data and other related information, e.g.
when exactly a specific appliance is turned on/off, without the need to install further submeters \cite{Himeur2020IJIS-NILM,siddiqui:hal-02954362}. In this line, collecting energy consumption feedback using NILM has yielded good performance at low or no cost. On the one hand, developing efficient energy saving recommender systems relies on an accurate analysis of energy consumption data, especially at the appliance level, and on the detection of abnormal energy usage; tailored recommendations are then produced following the feedback of the anomaly detection stage. Submetering, however, is not scalable because it requires analyzing individual appliance consumption traces \cite{himeur2020detection}; thus, the development of non-intrusive recommendation systems is a promising solution to be investigated. On the other hand, for energy data analysis and visualization, NILM has been considered a scalable and practical alternative to submetering. However, the use of NILM in recommender systems has not been discussed before, since the aim of NILM is to provide appliance-level energy footprints. In this regard, it is of significant interest to assess the signal fidelity of the appliance fingerprints generated by existing NILM algorithms in order to develop effective non-intrusive recommender systems; this could also reveal end-users' preferences and related information \cite{himeur2020anomaly}. Consequently, by using NILM instead of submetering, the development cost of recommender systems will be significantly reduced \cite{Luo2017,Himeur2020IJIS-AD}. \subsubsection{Edge/Fog recommender systems} Energy recommender systems have become an essential solution for energy efficiency in buildings. A large number of existing frameworks focus on cloud-to-edge architectures, in which recommended energy saving actions are transmitted to the edge device (e.g. the consumer's smartphone) after the computing task is completed in the cloud server \cite{gong2020edgerec}.
Although these architectures achieve good efficiency, they are prone to noticeable delays in the system's feedback and the user's reaction because of the network bandwidth and latency between the cloud and the edge \cite{sun2020convergence}. By contrast, implementing the recommender algorithms directly on the edge allows real-time computing and identifies consumers' interests/preferences more accurately, thereby increasing their satisfaction and trust in the generated recommendations \cite{su2019edge,himeur2020emergence}. Thus, a great deal of attention has recently been devoted to developing and implementing recommender systems on edge and/or fog devices, which can tremendously reduce the computational time, minimize the cost of cloud hosting and ensure privacy preservation \cite{hidasi2018cutting,felfernig2017recommendation,sayed2021endorsing}. To that end, various frameworks have been proposed in different research fields to investigate the applicability of edge-based and fog-based recommender systems \cite{wang2020privacy,zhou2020cost}. For instance, aiming to satisfy the new requirements of recommender systems, e.g. low latency and uninterrupted service, Wang et al. \cite{wang2019fog} propose a fog-based recommendation framework based on collaborative filtering. It can overcome the problem of information overload in fog computing and helps in generating personalized recommendations for improving system performance. In the same manner, a fog-based recommender system that helps bridge the gap between the cloud and end devices is proposed in \cite{ibrahim2020fog}; this system has been used to improve the performance of E-Learning environments. Furthermore, in \cite{jabeen2019iot}, a fog-based recommender system is introduced to provide recommendations regarding lifestyle, dietary plans and exercises to an ensemble of cardiovascular disease patients.
In \cite{gong2020edgerec}, an edge-based recommender system is proposed to (i) capture end-users' real-time preferences more precisely; and (ii) generate personalized and satisfying recommendations. Fig.~\ref{fog-rec} describes a typical representation of a fog-based recommender system for multiple users, in which a hybrid computing scheme, using cloud and fog servers, is generally adopted to implement the main tasks of the recommendation framework. \begin{figure*}[!t] \centering \includegraphics[width=\columnwidth]{Fig11.pdf} \caption{Example of a typical representation of a fog-based energy recommender system.} \label{fog-rec} \end{figure*} \section{Conclusions} \label{sec6} This article presents a critical review of recommender systems for energy efficiency in buildings. Accordingly, a taxonomy of recommender systems is first built along different aspects. A critical analysis then highlights their strengths and limitations before deriving the current challenges and cutting-edge topics that can be targeted in the near future to improve their performance. By and large, energy saving recommendation systems are proving to be a promising solution to promote sustainability and reduce carbon emissions, especially with the widespread deployment of smart meters, IoT sensors and ML tools. Their evolution accompanies that of intelligent Internet systems. The first generation of recommendation frameworks was based on collaborative filtering, case-based, PRM and Rasch-based engines, analyzing only energy-based data. The second generation relies on information fusion and deep learning models, in which other kinds of data are gathered and transmitted to the recommender engine to be analyzed together with energy consumption footprints. This shift, which the second generation promotes, helps in generating more accurate recommendations.
Moving forward, we have also discussed the third generation of recommender systems for energy efficiency, which relies on adding other innovative modules to the recommender engines, i.e. explanations, visualizations and time-aware information processing. The use of edge computing technologies and edge AI is playing a major role in making the development of real-time recommendation systems a reality. Moreover, this improves the quality and acceptance of recommendations and increases end-users' satisfaction. In this line, future research will focus on fostering the existing systems and technologies to improve both the quality and applicability of recommendation frameworks. Concurrently, novel research directions will be pursued to develop a new generation of highly automated recommender systems via the use of (i) NILM strategies instead of conventional smart metering; (ii) edge computing as an alternative to cloud computing; and (iii) privacy-preserving recommendation systems to increase end-users' trust. Finally, it is worth noting that the application of recommender systems in the building energy sector is a very promising field, since it does not only recommend energy saving actions but can also be extended to help consumers acquire appliances. In this regard, several factors could impact the choice of the consumer, such as the energy consumption of the appliance, its manufacturer, its price and other specifications. However, this will make the development of more comprehensive energy efficiency recommender systems more challenging. In the grand scheme, recommender systems will remain a strong pillar in the future of artificial intelligence and behavior change. \section*{Acknowledgements}\label{acknowledgements} This paper was made possible by National Priorities Research Program (NPRP) grant No. 10-0130-170288 from the Qatar National Research Fund (a member of Qatar Foundation).
The statements made herein are solely the responsibility of the authors.
\section{Introduction} Multi-armed bandit (MAB) algorithms are often used in web services \citep{agarwal2016making, li2010contextual}, sensor networks \citep{tran2012long}, medical trials \citep{badanidiyuru2018bandits,rangi2019unifying}, and crowdsourcing systems \citep{rangi2018multi}. The distributed nature of these applications makes these algorithms prone to third-party attacks. For example, in web services, decision making critically depends on reward collection, and this is prone to attacks that can impact observations and monitoring, delay or tamper with rewards, produce link failures, and generally modify or delete information through the hijacking of communication links \citep{agarwal2016making,cardenas2008secure,rangi2021learning}. Making these systems secure requires an understanding of the regime in which the systems can be attacked, as well as designing ways to mitigate these attacks. In this paper, we study both of these aspects in a stochastic MAB setting. We consider a data poisoning attack, also referred to as a man-in-the-middle (MITM) attack. In this attack, there are three agents: the environment, the learner (MAB algorithm), and the attacker. At each discrete time step $t$, the learner selects an action $i_t$ among $K$ choices, and the environment then generates a reward $r_t(i_t)\in[0,1]$ corresponding to the selected action and attempts to communicate it to the learner. However, an adversary intercepts $r_t(i_t)$ and can contaminate it by adding noise $\epsilon_t(i_t)\in [-r_t(i_t),1-r_t(i_t)]$. It follows that the learner observes the contaminated reward $r^o_t(i_t)=r_t(i_t)+\epsilon_t(i_t)$, with $r^o_t(i_t)\in [0,1]$. Hence, the adversary acts as a ``man in the middle'' between the learner and the environment.
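As a toy illustration of this interaction (purely schematic, and not the attack construction analysed later in the paper), an oblivious attacker can zero out the target arm's rewards while staying inside the allowed noise range $\epsilon_t(i_t)\in[-r_t(i_t),\,1-r_t(i_t)]$; the budget value and function names below are hypothetical:

```python
import random

def observed_reward(r_true, budget, is_target):
    """One step of the man-in-the-middle channel.

    The attacker may add eps in [-r_true, 1 - r_true], so the learner
    always observes a value in [0, 1].  This toy oblivious attacker
    pushes the target arm's reward down to 0 while contamination
    budget remains, spending |eps| per attack.
    """
    eps = -r_true if (is_target and budget[0] >= r_true) else 0.0
    budget[0] -= abs(eps)
    r_obs = r_true + eps
    assert 0.0 <= r_obs <= 1.0
    return r_obs

rng = random.Random(1)
budget = [5.0]  # total contamination the attacker can inject
observations = [observed_reward(rng.random(), budget, is_target=True)
                for _ in range(3)]
```

The point of the sketch is only that the learner sees valid-looking rewards in $[0,1]$ throughout; how much total contamination and how many such attacks suffice to force linear regret is exactly what the bounds below quantify.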
We present an upper bound on both the amount of contamination, which is the total amount of additive noise injected by the attacker, and the number of attacks, which is the number of times the adversary contaminates the observations, sufficient to ensure that the regret of the algorithm is $\Omega(T)$, where $T$ is the total time of interaction between the learner and the environment. Additionally, we establish that this upper bound is order-optimal by providing a lower bound on the number of attacks and the amount of contamination. A typical way to protect a distributed system from a MITM attack is to employ a secure channel between the learner and the environment \citep{asokan2003man,sieka2007establishing,callegati2009man}. These secure channels ensure the CIA triad: confidentiality, integrity, and availability \citep{ghadeer2018cybersecurity,doddapaneni2017secure,goyal2019security}. Various ways to establish these channels have been explored in the literature \citep{asokan2003man,sieka2007establishing, haselsteiner2006security, callegati2009man}. An alternative way to provide security is by auditing, namely performing data verification \citep{karlof2003secure}. The idea of data verification, or of using trusted information, is also embraced in the learning literature, where a small number of observations is verified \citep{charikar2017learning,bishop2020optimal}. Establishing a secure channel, an effective auditing method, or access to trusted information is generally costly \citep{sieka2007establishing}. Hence, it is crucial to design algorithms that achieve security, namely whose performance is unaltered (or minimally altered) in the presence of an attack, while limiting the usage of these additional resources. Motivated by these observations, we consider a \emph{reward verification} model in which the learner can access verified (i.e. uncontaminated) rewards from the environment.
This verified access can be implemented through a secure channel between the learner and the environment, or through auditing. At any round $t$, the learner can decide whether to access the possibly contaminated reward $r^o_t(i_t)=r_t(i_t)+\epsilon_t(i_t)$, or to access the verified reward $r^o_t(i_t)=r_t(i_t)$. Since verification is costly, the learner faces two decisions: first, it must balance its performance in terms of regret against the number of verified-reward accesses; second, it must decide when to access a verified reward during the learning process. We design an order-optimal bandit algorithm which strategically plans the verification, and makes no assumptions on the attacker's strategy. Against this background, we make the following contributions in this paper: \begin{itemize} \item First, in Section~\ref{sec: characterization of attacks} we provide a tight characterisation of the total (expected) number of contaminations needed for a successful attack. Specifically, while it is well known that with an $O(\log{T})$ expected number of contaminations a strong attacker can successfully attack \emph{any} bandit algorithm (see Section~\ref{subsec: upper bound successful attack} for a more detailed discussion), it was not known to date whether this amount of contamination is necessary. We fill this gap by providing a matching lower bound on the amount of contamination (Theorem 1). This result is based on a novel insight into UCB's behaviour, which may be of independent interest. Specifically, we show that for arbitrary (even adversarial) reward sequences, UCB will pull every arm at least $\log(T/2)$ times for sufficiently large $T$. This conservativeness property of UCB guarantees its robustness against any attack strategy with $o(\log T)$ contaminations.
Note that we also extend the state-of-the-art results on the sufficient condition by proposing a simpler yet optimal attack scheme, which is oblivious to the bandit algorithm's actual behaviour (Proposition 1). \item We then consider bandit algorithms with verification as a means of defense against these attacks. In our first set of investigations, we consider the case of an unlimited number of verifications (Section~\ref{subsec: unlimited verification}). We first show that the minimum number of verifications needed to recover from any strong attack is $\Theta(\log{T})$ (Theorem 2 and Corollary 2). We then propose an Explore-Then-Commit (ETC) based method, called Secure-ETC, that can achieve full recovery from any attack with this optimal amount of verification (Observation 1). While Secure-ETC is simple, it might not stop the exploration phase before exceeding the time horizon. To avoid this situation, we also propose a UCB-like method called Secure-UCB, which also enjoys full recovery under an optimal verification scheme (Theorem 3). \item Finally, we consider the case when the number of verifications is bounded above by a budget $B$. We first show that if the attacker has an unlimited contamination budget, it is impossible to fully recover from the attack if the verification budget $B = o(T)$ (Theorem 4). However, when the attacker also has a finite contamination budget $C$, as typically assumed in the literature, we propose Secure-BARBAR, which achieves $\tilde{O}\bigg(\min\Big\{C, { T\log{({2}/{\beta})}}/{\sqrt{B}}\Big\}\bigg)$ regret against a weaker attacker (who has to place the contamination before seeing the actual pull of the bandit algorithm). It remains an intriguing open question whether there exist efficient but limited verification schemes against stronger attackers.
\end{itemize} \section{Preliminaries and Problem Statement} \label{sec: problem description} \subsection{Poisoning Attacks on Stochastic Bandits} We consider the classical stochastic bandit setting under data poisoning attacks. In this setting, a learner can choose from a set of $K$ actions for $T$ rounds. At each round $t$, the learner chooses an action $i_t\in [K]$, which triggers a reward $r_{t}(i_t)\in [0,1]$, and observes a possibly corrupted (and thus altered) reward $r^o_{t}(i_t)\in [0,1]$ corresponding to the chosen action. The reward $r_t(i)$ of action $i$ is sampled independently from a fixed unknown distribution associated with action $i$. Let $\mu_i$ denote the expected reward of action $i$ and $i^*=\mbox{argmax}_{i\in[K]}\mu_i$.\footnote{For convenience, we assume $i^*$ is unique, though all our conclusions hold when there are multiple optimal actions.} Also, let $\Delta(i)=\mu_{i^*}-\mu_i$ denote the difference between the expected rewards of actions $i^*$ and $i$. Finally, we assume that $\{\mu_i\}_{i\in[K]}$ are unknown to both the \emph{learner} and the \emph{attacker}. The reward $r^o_{t}(i_t)$ observed by the learner and the true reward $r_t(i_t)$ satisfy the relation \begin{equation} r^o_{t}(i_t)=r_t(i_t)+\epsilon_t(i_t), \end{equation} where the contamination $\epsilon_t(i_t)$ added by the attacker can be a function of $\{i_n\}_{n=1}^t$ and $\{r_n(i_n)\}_{n=1}^t$. Additionally, since $r^o_{t}(i_t)\in [0,1]$, we have that $\epsilon_t(i_t)\in [-r_t(i_t),1-r_t(i_t)]$. If $\epsilon_t(i_t) \neq 0$, then round $t$ is said to be \emph{under attack}. Hence, the \emph{number of attacks} is $\sum_{t=1}^T\mathbf{1}(\epsilon_t(i_t)\neq 0)$ and the \emph{amount of contamination} is $\sum_{t=1}^T |\epsilon_t(i_t)|$.
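These two attack-cost measures can be computed directly from a trace of contaminations; a minimal sketch (the function name is ours, for illustration only):

```python
def attack_statistics(contaminations):
    """Given per-round contaminations {epsilon_t(i_t)}, return the number
    of attacks (rounds with nonzero noise) and the total amount of
    contamination (sum of |epsilon_t(i_t)|), exactly as defined above."""
    num_attacks = sum(1 for e in contaminations if e != 0)
    amount = sum(abs(e) for e in contaminations)
    return num_attacks, amount
```

Note that since each $|\epsilon_t(i_t)|\leq 1$, the amount of contamination never exceeds the number of attacks.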
The regret $R^{\mathcal{A}}(T)$ of a learning algorithm $\mathcal{A}$ is the difference between the total expected true reward from the best fixed action and the total expected \emph{true} reward over $T$ rounds, namely \begin{equation}\label{eq:RegretOfAlg} R^{\mathcal{A}}(T)=T\mu_{i^*}-\mathbb{E}[\sum_{t=1}^T r_{t}(i_t)]. \end{equation} The objective of the learner is to minimize the regret $R^{\mathcal{A}}(T)$. In contrast, the objective of the attacker is to increase the regret to at least $\Omega(T)$. As a convention, we say the attack is ``successful'' only when it leads to $\Omega(T)$ regret \citep{jun2018adversarial,liu2019data}. The first question we address is the following. \noindent {\bf Question 1: } {\it Is there a \emph{tight characterization} of the amount of contamination and the number of attacks leading to a regret of~~$\Omega(T)$ in stochastic bandits?} \subsection{Remedy via Limited Reward Verification} It is well known that no stochastic bandit algorithm can be resilient to data poisoning attacks if the attacker has a sufficiently large amount of contamination \citep{liu2019data}. Therefore, to guarantee sub-linear regret when the attacker has an unbounded amount of contamination, it is necessary for the bandit algorithm to exploit additional (and possibly costly) resources. We consider one of the most natural resources --- \emph{verified rewards}. Namely, we assume that at any round $t$, the learner can choose to access the true, uncontaminated reward of the selected action $i_t$; that is, when \emph{round $t$ is verified} we have $r^o_t(i_t)=r_t(i_t)$. This process of accessing true rewards is referred to as \emph{verification}. If the learner performs verification at each round, then the regret of any bandit algorithm is clearly unaltered in the presence of an attacker. Unfortunately, this is unrealistic because verification is costly in practice.
Therefore, the learner has to carefully balance the regret and the number of verifications. This naturally leads to the second question that we aim to answer in this paper: \noindent {\bf Question 2: } {\it Is there a \emph{tight characterization} of the number of verifications needed by the learner to guarantee the optimal $O(\log T)$ regret for \emph{any} poisoning attack?} Finally, we consider the case of a limited amount of contamination from the attacker and a limited number of verifications from the bandit algorithm. In the direction of studying this trade-off between contamination and verification, the third question that we aim to answer in this paper is: \noindent {\bf Question 3: } {\it Can we improve upon the $\Omega{(C)}$ regret lower bound if the attacker's contamination budget is at most $C$, and the number of verifications that can be used by a bandit algorithm is also bounded above by a budget $B$?} In this paper we answer the three questions above. \section{Tight Characterization for the Cost of Poisoning Attack} \label{sec: characterization of attacks} In this section we show that if an attack can successfully induce $\Omega(T)$ regret for any bandit algorithm, both its expected number of attacks and its expected amount of contamination must be $\Theta(\log T)$. In other words, there exists a ``robust'' stochastic bandit algorithm that cannot be successfully attacked by any attacker with only $o(\log T)$ expected amount of contamination, and we show that the celebrated UCB algorithm satisfies this property. The key technical challenge in proving the above result is to show the sublinear regret of UCB against an \emph{arbitrary} poisoning attack using at most $o(\log T)$ amount of contamination. In order to prove this strong result, we discover a novel ``conservativeness'' property of the UCB algorithm, which may be of independent interest and has already found application in completely different tasks \cite{Shi2021Neurips}.
To complement and also to match the above lower bounds on any successful attack, we design a data poisoning attack that can indeed use an $O(\log T)$ expected number of attacks to induce $\Omega(T)$ regret for any order-optimal bandit algorithm, namely any algorithm which has $O(\log T)$ regret in the absence of attack. Since $r_t^o(i_t)\in [0,1]$, this implies that the attack requires at most $O(\log T)$ expected amount of contamination. \subsection{Lower Bound on the Contaminations} We show that there exists an order-optimal bandit algorithm --- in fact, the classical UCB algorithm --- which cannot be attacked with $o(\log T)$ amount of contamination by \emph{any} poisoning attack strategy. This implies that if an attacking strategy is required to be successful for all order-optimal bandit algorithms, then the amount of contamination needed is at least $\Omega(\log T)$. Since the amount of contamination is bounded above by the number of attacks, this also implies that any attacker requires at least $\Omega(\log T)$ attacks to be successful. While adversarial attacks on bandits have been extensively studied recently, to our knowledge such a lower bound on the attack strategy is novel and was not known before; previous results have mostly studied the upper bound, i.e., how much contamination is needed for a successful attack \cite{jun2018adversarial,liu2019data}. Here we briefly describe the well-known UCB algorithm \citep{auer2002finite}, and defer its details to Algorithm \ref{alg:UCB} in Appendix \ref{append:AlgUCB}. At each round $t\leq K$, UCB selects an action in a round-robin manner.
At each round $t>K$, the selected action $i_t$ has the maximum \emph{upper confidence bound}, namely \begin{equation}\label{eq:ucb-def} i_t= \mbox{argmax}_{i\in[K]} \bigg( \hat{\mu}_{t-1}(i)+ \sqrt{\frac{8\log t}{N_{t-1}(i)}} \bigg), \end{equation} where $N_t(i)=\sum_{n=1}^t\mathbf{1}(i_n=i)$ is the number of rounds action $i$ is selected until (and including) round $t$, and \begin{equation} \hat{\mu}_{t}(i)=\frac{ \sum_{n=1}^t r^o_{n}(i_n) \mathbf{1}(i_n = i)}{N_{t}(i)}, \end{equation} is the empirical mean of action $i$ until round $t$. Note that the algorithm uses the \emph{observed} rewards. The following Theorem \ref{thm:lowerBoundonUCB} establishes that the UCB algorithm has sublinear regret $o(T)$ under any poisoning attack if the amount of contamination is $o(\log T)$. The proof of Theorem \ref{thm:lowerBoundonUCB} crucially hinges on the following ``conservativeness'' property of the UCB algorithm, which may be of independent interest.\footnote{Indeed, Lemma \ref{lemma:Min number of pulls} has been applied in \citep{Shi2021Neurips} to the task of incentivized exploration in order to show that a \emph{principal} can get sufficient feedback from every arm even if the \emph{agent} who pulls arms has completely different preferences from the principal.} \begin{lemma}[Conservativeness of UCB] \label{lemma:Min number of pulls} Let $t_0$ be such that ${t_0}/{(\log (t_0))^2} \geq 36K^2$. Then for all $ t \geq t_0$ and any sequence of rewards $\{r^o_n(i)\}_{i\in [K],n\leq t}$ in $[0,1]$ (which can even be adversarial), UCB will select every action at least $\log (t/2)$ times up until round $t$. \end{lemma} Lemma \ref{lemma:Min number of pulls} follows from the very design of the UCB algorithm. Its proof does \emph{not} rely on the rewards being stochastic, and it holds deterministically --- i.e., at any time $t \geq t_0$, UCB will pull each action at least $\log (t/2)$ times. This lemma leads to the following theorem.
\begin{theorem}\label{thm:lowerBoundonUCB} For all $0<\epsilon<1$ and $\alpha>0$ such that $0<\epsilon\alpha\leq 1/2$, and for all $T > \max\{(t_0)^{\frac{1}{1-\alpha \epsilon}}, \exp{(4^\alpha)}\}$, if the total \emph{amount} of contamination by the attacker is $\sum_{n=1}^T |\epsilon_n(i_n)|\leq {(\log T)^{1-\epsilon}}$, then there exists a constant $c_1$ such that the expected regret of the UCB algorithm is \begin{equation} R^{UCB}(T)\leq c_1\big( T^{1-\alpha \epsilon} \max_i\Delta(i)+ \sum_{i \not = i^*}\log T/\Delta(i)\big), \end{equation} which implies that the regret $R^{UCB}(T)$ is $o(T)$. \end{theorem} The constant $\alpha$ in Theorem \ref{thm:lowerBoundonUCB} is an adjustable \emph{parameter} that controls the tradeoff between the scale of the time horizon $T$ ($T \geq \max\{(t_0)^{\frac{1}{1-\alpha \epsilon}}, \exp{(4^\alpha)}\}$) and the dominating term $T^{1-\alpha \epsilon} \max_i\Delta(i)$ in the regret. If $\epsilon$ is small, then a larger $\alpha$ leads to a smaller regret bound; however, $T$ must then be sufficiently large for the bound to apply. The upper bound on the expected regret in Theorem \ref{thm:lowerBoundonUCB} holds if the total amount of contamination is at most $(\log T)^{1-\epsilon}$. Furthermore, if the total number of attacks is at most $(\log T)^{1-\epsilon}$, then using $|\epsilon_t(i_t)|\leq 1$, we have that $\sum_{n=1}^T |\epsilon_n(i_n)|\leq {(\log T)^{1-\epsilon}}$. Hence, Theorem \ref{thm:lowerBoundonUCB} also establishes that if the total number of attacks is $o(\log T)$, then the expected regret of UCB is $o(T)$. Thus, the attacker requires at least $\Omega(\log T)$ amount of contamination (or number of attacks) to ensure its success.
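For concreteness, the selection rule of eq.~(\ref{eq:ucb-def}) can be sketched in a few lines (a minimal sketch under our own naming; `observed_reward` is an assumed oracle for $r^o_t(i)$, and this is not the paper's reference implementation). Running it against an adversarial reward sequence illustrates the conservativeness of Lemma~\ref{lemma:Min number of pulls}: even an arm whose observed rewards are always zero keeps being pulled, because its exploration bonus $\sqrt{8\log t / N_{t-1}(i)}$ grows whenever the arm is neglected:

```python
import math

def run_ucb(observed_reward, K, T):
    """Minimal UCB sketch: round-robin for the first K rounds, then pick
    the arm maximising mu_hat + sqrt(8 log t / N), using the possibly
    contaminated observed rewards. Returns the per-arm pull counts."""
    counts = [0] * K   # N_t(i): number of pulls of arm i
    sums = [0.0] * K   # running sums of observed rewards
    for t in range(1, T + 1):
        if t <= K:
            i = t - 1  # round-robin initialisation
        else:
            i = max(range(K),
                    key=lambda j: sums[j] / counts[j]
                    + math.sqrt(8 * math.log(t) / counts[j]))
        sums[i] += observed_reward(t, i)
        counts[i] += 1
    return counts

# Arm 1 is fully suppressed (always observed as 0), yet it is still
# pulled a non-negligible number of times.
counts = run_ucb(lambda t, i: 1.0 if i == 0 else 0.0, K=2, T=200)
```

This deterministic minimum number of pulls per arm is exactly what limits the damage an attacker with an $o(\log T)$ contamination budget can cause.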
The lower bound on the amount of contamination in Theorem~\ref{thm:lowerBoundonUCB} cannot be directly compared with the upper bound in Proposition~\ref{thm:constantAttack}, since the former assumes that the amount of contamination is bounded above by $o(\log{T})$ \emph{almost surely}, while the latter is a bound on the \emph{expected} amount of contamination. Instead, we consider the following corollary, which can be easily derived from Theorem~\ref{thm:lowerBoundonUCB} using Markov's inequality, and which establishes the lower bound on the expected amount of contamination necessary for a successful attack. \begin{corollary} \label{corr:PAC lower bound of attacker for UCB} For all $\epsilon\in (0,1)$ and $T$ such that the conditions in Theorem~\ref{thm:lowerBoundonUCB} are satisfied, if the expected amount of contamination by the attacker is at most $(\log{T})^{1-\epsilon}$, in other words $o(\log T)$, then the regret of UCB is $o(T)$. \end{corollary} \subsection{Matching Upper Bound on Contamination} \label{subsec: upper bound successful attack} We now show that there indeed exist attacks that can succeed with $O(\log T)$ attacks. Consider an attacker who tries to ensure that some action $i_A\in [K]$ is selected by the bandit algorithm at least $\Omega (T)$ times in expectation. This implies that the expected regret of the bandit algorithm is $\Omega(T)$ if $i_A\neq i^*$. We consider the following simple attack, which pulls the observed reward down to $0$ whenever the target suboptimal action $i_A$ is not selected. Namely, \begin{equation}\label{eq:attackStrategy1} r_t^o(i_t)=\begin{cases} r_t(i_t)&\mbox{ if } i_t=i_A,\\ 0 &\mbox{ if } i_t\neq i_A. \end{cases} \end{equation} Equivalently, the attacker adds $\epsilon_t(i_t)=-r_t(i_t)\mathbf{1}(i_t\neq i_A)$ to the true reward $r_t(i_t)$.
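The attack of eq.~(\ref{eq:attackStrategy1}) amounts to a one-line rule; a sketch (the function name is ours):

```python
def oblivious_attack(i_t, r_t, target_arm):
    """Leave the target arm's rewards untouched and zero out every other
    observation, i.e. epsilon_t = -r_t * 1(i_t != i_A)."""
    eps = -r_t if i_t != target_arm else 0.0
    return r_t + eps  # the observed reward r^o_t(i_t)
```

The rule needs no knowledge of the bandit algorithm's internal state or history, which is what makes the attack oblivious.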
Unlike the attacks in \cite{jun2018adversarial,liu2019data}, the attack in \eqref{eq:attackStrategy1} is oblivious to rewards, since it overwrites all reward observations with zero. The following proposition establishes an upper bound on the expected number of attacks sufficient for success. \begin{proposition}\label{thm:constantAttack} For any stochastic bandit algorithm $\mathcal{A}$ with expected regret in the \emph{absence} of attack given by \begin{equation}\label{eq:algo1} R^{\mathcal{A}}(T)=O\bigg(\sum_{i\neq i^*}\frac{\log^\alpha(T)}{(\Delta(i))^\beta}\bigg), \end{equation} where $\alpha\geq 1$ and $\beta\geq 1$, and for any target action $i_A\in [K]$, if an attacker follows strategy \eqref{eq:attackStrategy1}, then it will use an expected number of attacks \begin{equation}\label{eq:numConntam} \mathbb{E}[\sum_{t=1}^T \mathbf{1}(\epsilon_t(i_t)\neq 0)]=O\bigg({(K-1)\log^{\alpha}(T)}/{\mu_{i_A}^{\beta+1}}\bigg), \end{equation} an expected amount of contamination \begin{equation} \mathbb{E}[\sum_{t=1}^T|\epsilon_t(i_t)|]=O\bigg({(K-1)\log^{\alpha}(T)}/{\mu_{i_A}^{\beta+1}}\bigg), \end{equation} and it will force $\mathcal{A}$ to select the action $i_A$ at least $\Omega(T)$ times in expectation, namely $\mathbb{E}[\sum_{t=1}^T \mathbf{1}(i_t= i_A)]=\Omega(T)$. \end{proposition} Proposition \ref{thm:constantAttack} provides a relationship between the regret of the algorithm without attack and the number of attacks (or amount of contamination) sufficient to ensure that the target action $i_A$ is selected $\Omega(T)$ times, which also implies $R^{\mathcal{A}}(T)=\Omega(T)$ if $i_A\neq i^*$. Another important consequence of the proposition is that for an order-optimal algorithm such as UCB, we have $\alpha=1$ and $\beta=1$ in \eqref{eq:algo1}. Thus, the expected number of attacks and the expected amount of contamination are $O(\log T)$.
A small criticism of the attack strategy \eqref{eq:attackStrategy1} might be that it pulls down the reward ``too much''. This turns out to be fixable. In Appendix \ref{append:gap-attack}, we prove that a different type of attack that pulls the reward of any action $i \not = i_A$ down by an \emph{estimated} gap $\Delta = 2 \max \{ \mu_{i} - \mu_{i_A}, 0 \} $ (similar to the ACE algorithm in \cite{ma2018data}) will also succeed. However, the number of attacks will now be inversely proportional to $\min_{i\neq i_A}|\mu_i-\mu_{i_A}|^{\beta+1}$, rather than $ \mu_{i_A}^{\beta+1}$ as in Proposition \ref{thm:constantAttack}. \section{Verification based Algorithms} \label{sec: verification} In this section we explore the idea of using verifications to rescue our bandit model from reward contaminations. In particular, we first investigate the case when the number of verifications is not limited, and therefore our main goal is to minimize the number of verifications (along with aiming to restore the order-optimal logarithmic regret bound). We then discuss the case when the number of verifications is bounded above by a budget $B$ (typically of $o(T)$). \subsection{Saving Bandits with Unlimited Verifications} \label{subsec: unlimited verification} In this setting we assume that the number of verifications is not bounded above, and therefore our goal is to minimize the number of verifications required to restore the logarithmic regret bound. To do so, we first show that any successful verification based algorithm (i.e., one that can restore the logarithmic regret) requires $\Omega(\log{T})$ verifications. In particular, the following theorem establishes that for every consistent learning algorithm\footnote{A learning algorithm is consistent \citep{kaufmann2016complexity} if for all $t$, the action $i_{t+1}$ (a random variable) is measurable given the history $\mathcal{F}_{t}=\sigma (i_1,r^o_1(i_1), i_2,r^o_2(i_2), \ldots, i_{t},r^o_{t}(i_{t}))$.
} $\mathcal{A}$ and sufficiently large $T$, if the algorithm $\mathcal{A}$ uses $O((\log T)^{1-\alpha})$ verifications with $0<\alpha < 1$, then the expected regret is $\Omega{((\log T)^{\beta}})$ with $\beta>1$ in the MAB setting with verification. \begin{theorem}\label{thm:lowBoundVerification} Let $KL(i_1,i_2)$ denote the KL divergence between the reward distributions of actions $i_1$ and $i_2$. For all $0<\alpha<1$, $\beta>1$, and every consistent learning algorithm $\mathcal{A}$, there exists a time $t^*$ and an attacking strategy such that for all $T\geq 2t^*$ satisfying $(\log T)^{1-\alpha}+\beta\log (4\log T)\leq \log T,$ if the total number of verifications $N^s_T$ until round $T$ is \begin{equation}\label{eq:boundOnVeri} N^s_T<(\log T)^{1-\alpha} /\min_{i_1,i_2\in [K]}KL(i_1,i_2), \end{equation} then the expected regret of $\mathcal{A}$ is at least $\Omega((\log T)^\beta)$. \end{theorem} Theorem \ref{thm:lowBoundVerification} establishes that $\Omega(\log T)$ verifications are necessary to obtain $O(\log T)$ regret. Here, we assume that the number of verifications is bounded above \emph{almost surely}. Nevertheless, if instead the \emph{expected} number of verifications is bounded, we obtain the following similar bound. \begin{corollary} \label{corr:verification expected lower bound} For all $0<\alpha<1$, $\beta>1$, every consistent learning algorithm $\mathcal{A}$, and sufficiently large $T$ such that the requirements in Theorem \ref{thm:lowBoundVerification} are satisfied, there exists an attacking strategy such that if the \emph{expected number of verifications} $N^s_T$ until round $T$ satisfies $\mathbb{E}[N^s_T]<(\log T)^{1-\alpha} /\min_{i_1,i_2\in [K]}KL(i_1,i_2)$, then the expected regret of $\mathcal{A}$ is at least $\Omega((\log T)^\beta)$. \end{corollary} We now move on to design an algorithm that matches this optimal number of verifications.
Our algorithm is based on the following simple idea: contamination is only effective when the contaminated reward is used for estimating the mean reward values of the arms, and therefore for influencing the learnt order of the arms. As such, any algorithm that does not need these estimates most of the time would not suffer much from the contamination, provided that the remaining pulls (those whose observed rewards are used for mean estimation) are properly secured via verification. This idea naturally leads us to the explore-then-commit (ETC) type of bandit algorithms~\citep{garivier2016explore}, where in the first phase the algorithm aims to learn the optimal arm by solving a best arm identification (BAI) problem (exploration phase), and in the second (commit) phase it just repeatedly pulls the learnt best arm~\citep{kaufmann2016complexity}. It is clear that if the first phase is fully secured (i.e., every single pull within that phase is verified), then we can learn the best arm with high probability, and thus can ignore the contaminations within the second phase. The choice of the BAI algorithm for the exploration phase is important, though. In particular, any BAI with a fixed pulling budget would not work here, as it cannot guarantee logarithmic regret bounds~\citep{garivier2016explore}. On the other hand, BAI with fixed confidence suffices. In particular, we state the following: \begin{observation} \label{verification upper bound for Secure-ETC} Any ETC algorithm, where the exploration phase uses BAI with fixed confidence $\delta = \frac{1}{T}$ and every single pull in that phase is verified, enjoys an expected regret bound of $O\Big(\sum_{i \neq i^*}{\log{T}}/{\Delta_i}\Big)$. In addition, the expected number of verifications is bounded above by $O\Big(\sum_{i \neq i^*}{\log{T}}/{\Delta^2_i}\Big)$. \end{observation} We refer to the ETC algorithm enhanced with verification described in the above observation as Secure-ETC.
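As an illustration, a hypothetical sketch of this scheme (our own simplification, not the paper's specification): the exploration phase runs a fixed-confidence BAI --- here successive elimination with Hoeffding-style radii, one possible choice --- on \emph{verified} rewards only, and then commits to the surviving arm, so contaminated rewards in the commit phase are never used for learning. The oracles `true_reward` and `observed_reward` and all constants are assumptions for illustration:

```python
import math

def secure_etc(true_reward, observed_reward, K, T, delta):
    """Sketch of a verified ETC scheme: verified successive elimination at
    confidence delta, then a commit phase on the surviving arm. Returns the
    committed arm, the number of verifications used, and the commit-phase
    reward."""
    active = list(range(K))
    counts = [0] * K
    sums = [0.0] * K
    t = 0  # number of (verified) exploration pulls so far
    while len(active) > 1 and t < T:
        for i in list(active):  # one verified pull of every active arm
            sums[i] += true_reward(i)
            counts[i] += 1
            t += 1
        def radius(i):          # Hoeffding-style confidence radius
            return math.sqrt(math.log(4 * K * counts[i] ** 2 / delta)
                             / (2 * counts[i]))
        best = max(active, key=lambda i: sums[i] / counts[i])
        active = [i for i in active
                  if sums[i] / counts[i] + radius(i)
                  >= sums[best] / counts[best] - radius(best)]
    i_hat = max(active, key=lambda i: sums[i] / counts[i])
    committed = sum(observed_reward(i_hat) for _ in range(T - t))
    return i_hat, t, committed
```

With $\delta = 1/T$, a union bound over the elimination tests yields the high-probability guarantee behind Observation~\ref{verification upper bound for Secure-ETC}, and the number of verifications scales as $\sum_{i\neq i^*}\log(T)/\Delta_i^2$.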
The proof of Observation~\ref{verification upper bound for Secure-ETC} is simple and hence omitted from the main paper. Note that this result, together with Theorem \ref{thm:lowBoundVerification}, shows that Secure-ETC uses an order-optimal number of verifications and enjoys an order-optimal expected regret, irrespective of the attacker's strategy. The main drawback of Secure-ETC is that there is a positive probability that the algorithm may keep exploring until the end time $T$. While such a small-probability event turns out not to be an issue for its expected regret, one might prefer another type of algorithm which properly mixes exploration and exploitation. For such interested readers, we propose another algorithm, named Secure-UCB (for Secure Upper Confidence Bound), which integrates verification into the classical UCB algorithm, and also enjoys similar order-optimal regret bounds and an order-optimal expected number of verifications. Due to space limitations, we defer both the detailed description of Secure-UCB and its theoretical analysis to the appendix (see Appendix~\ref{appendix:verify with Secure-UCB} for more details). However, for the sake of completeness, we state the following theorem. \begin{theorem} \label{thm:SUCB_simple} For all $T$ such that $T\geq c_2\log T/\min_{i\neq i^*}\Delta^2(i)$, Secure-UCB performs $O(\log T)$ verifications in expectation, and the expected regret of the algorithm is $O(\log T)$ irrespective of the attacker's strategy. Namely, \begin{equation} \begin{split} \sum_{i\in [K]}\mathbb{E}[N^s_T(i)]&\leq c_3\big(\sum_{i\neq i^*}{\log T}/{\Delta^2(i)}\big), \end{split} \end{equation} \begin{equation} \begin{split} R(T)&\leq c_4\big(\sum_{i\neq i^*}{\log T}/{\Delta(i)}\big), \end{split} \end{equation} where $N^s_T(i)$ is the total number of verifications for arm $i$ until round $T$ and $c_2$, $c_3$ and $c_4$ are numerical constants (concrete values can be found in the appendix).
\end{theorem} It is worth noting that due to the sequential nature of UCB, designing a UCB-like algorithm with verification is far from trivial, and its technical analysis is therefore significantly more involved. \subsection{Saving Bandits with Limited Verifications} \label{subsec:limited verification} While unlimited verification can completely restore the original regret bounds, we will show next that this is unfortunately not the case if the number of verifications is bounded. In particular, we state the following negative result. \begin{theorem}\label{thm:LowerBoundFixed Budget} Consider an attacker with unlimited contamination budget. For any $T$, $K\geq 2$ and $N^s_T\geq K$, if the total number of verifications performed until round $T$ is at most $N^s_T$, then there exists a distribution over the assignment of rewards such that the expected \emph{gap-independent} regret of any learning algorithm is at least \begin{equation} R(T)\geq cT\sqrt{K/{N^s_T}}, \end{equation} where $c$ is a numerical constant. In addition, for any $T$, $K\geq 2$, and $N^s_T\geq K$, there exists a distribution over the assignment of rewards such that the expected cost, defined as the sum of the expected regret and the number of verifications, of any learning algorithm is at least $\Omega(T^{2/3})$. \end{theorem} We remark that the goal of Theorem \ref{thm:LowerBoundFixed Budget} is to demonstrate that, unlike the unlimited verification case in Subsection \ref{subsec: unlimited verification}, here it is impossible to fully recover from the attack --- in the sense of achieving order-optimal regret bounds as in the original bandit setting without attacks --- if $B = o(T)$, and this motivates our subsequent study (Theorem \ref{thm:Secure-BARBAR regret bound}) of regret bounds that scale with the budget $B$. For this purpose, it suffices to have a gap-independent lower bound as in Theorem \ref{thm:LowerBoundFixed Budget}.
Nevertheless, we acknowledge that an interesting research question is to see whether one can achieve a gap-dependent lower bound. This is out of the scope of our current paper and is an independent open question. \begin{algorithm}[t] \begin{algorithmic}[1] \STATE \textbf{Input}: confidences $\beta, \delta \in (0,1)$, time horizon $T$, verification budget $B$ \STATE Set $n^B_i = \Big\lfloor {B}/{K} \Big \rfloor$, $T_0 = B$, $\Delta^0_i = 1$ for all $i \in [K]$, and $\lambda = 1024\ln(\frac{8K}{\delta}\log_2{T})$ \FOR{ epochs $m = 1,2,\dots$} \STATE Set $n_i^m = \lambda (\Delta_i^{m-1})^{-2}$ for all $i \in [K]$, $N_m = \sum_{i=1}^{K}n^m_i$, and $T_m = T_{m-1} + N_m$ \FOR{$t= T_{m-1}$ \TO $T_m$ } \STATE choose arm $i$ with probability $n^m_i/N_m$ and pull it \STATE if $n^B_i > 0$ then \emph{verify the pull} (i.e., \emph{observe the true reward}), and reduce $n^B_i$ by $1$ \ENDFOR \STATE Let $S^m_i$ be the total \emph{observed} rewards from pulls of arm $i$ within epoch $m$ (including both verified and unverified ones) \STATE \textbf{If} $\;$ \emph{all} the pulls of arm $i$ were verified in epoch $m$ \textbf{then} $r^m_i = S^m_i/n^m_i$ \STATE \textbf{Else if} $S^m_i/n^m_i \geq \mu_i^B$ \textbf{then} $r^m_i = \min \Big\{S^m_i/n^m_i, \mu^B_i + \frac{\Delta^{m-1}_i}{16} + \sqrt{\frac{\ln{2/\beta}}{2n_B}}\Big\}$ \STATE \textbf{Else} $r^m_i = \max \Big\{S^m_i/n^m_i, \mu^B_i - \frac{\Delta^{m-1}_i}{16} - \sqrt{\frac{\ln{2/\beta}}{2n_B}}\Big\}$ \STATE Set $r^{m}_{*} = \max_{i}\{r_i^{m} - \Delta_i^{m-1}/16\}$, $\Delta^m_i = \max\{2^{-m}, r^{m}_{*} - r_i^{m}\}$ \ENDFOR \caption{Secure-BARBAR} \label{alg:Secure-BARBAR} \end{algorithmic} \end{algorithm} Now, this impossibility result relies on the assumption that the attacker has an unlimited contamination budget (or amount of contamination). 
One might ask what would happen if the attacker is also limited by a contamination budget $C$, as typically assumed in the relevant literature~\citep{gupta2019better,bogunovic2020stochastic,lykouris2018stochastic}. We now investigate this setting, in which the contamination budget is at most $C$, in more detail. To start with, we assume for now that the attacker can only place the contamination before seeing the actual actions of the bandit algorithm. We refer to this type of attacker as a \emph{weak} attacker, as opposed to the ones we have been dealing with so far in this paper (see Section 5 for a comprehensive comparison of different attacker models). We describe an algorithm that addresses this case in a provably efficient way. In particular, we introduce Secure-BARBAR (Algorithm~\ref{alg:Secure-BARBAR}), which is built on top of the BARBAR algorithm proposed by~\cite{gupta2019better}. The key differences are: (i) Secure-BARBAR sets up a verification budget $n^B_i$ for each arm $i$ and verifies pulls of that arm until this budget is depleted (lines $6$--$7$); and (ii) it uses the verified rewards to adjust the epoch estimates (lines $9$--$13$). By doing so, we achieve the following result: \begin{theorem} \label{thm:Secure-BARBAR regret bound} With probability at least $1-\delta - \beta$, the regret of Secure-BARBAR against any weak attacker with contamination budget $C$ is bounded by \begin{equation} \begin{split} &O\bigg(K\min\Big\{C, \frac{T \log{\frac{2}{\beta}}\ln(\frac{8K}{\delta}\log_2{T})}{\sqrt{B/K}}\Big\} \\ &\qquad \qquad + \sum_{i \neq i^*}\frac{\log{T}}{\Delta_i}\log{\Big(\frac{K}{\delta}\log{T}\Big)}\bigg). \end{split} \end{equation} \end{theorem} The regret bound is of order $\tilde{O}\bigg(\min\Big\{C, { T\log{({2}/{\beta})}}/{\sqrt{B}}\Big\}\bigg)$, which breaks the known $\Omega(C)$ lower bound of the non-verified setting when $C$ is large \cite{gupta2019better}.
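The estimate-adjustment step can be isolated as follows (a sketch under our own naming; `mu_B` stands for the verified estimate $\mu^B_i$ and `n_B` for the number of verified pulls of arm $i$): the unverified epoch mean is clipped to lie within a slack of $\Delta^{m-1}_i/16 + \sqrt{\ln(2/\beta)/(2n_B)}$ of the verified estimate, so an attacker cannot move the estimate far from what the verified pulls support:

```python
import math

def clip_epoch_mean(sample_mean, mu_B, delta_prev, n_B, beta):
    """Clip the unverified epoch mean towards the verified estimate mu_B,
    mirroring the estimate-adjustment lines of the Secure-BARBAR pseudocode:
    the adjusted estimate may deviate from mu_B by at most
    delta_prev / 16 + sqrt(ln(2/beta) / (2 * n_B))."""
    slack = delta_prev / 16 + math.sqrt(math.log(2 / beta) / (2 * n_B))
    if sample_mean >= mu_B:
        return min(sample_mean, mu_B + slack)
    return max(sample_mean, mu_B - slack)
```

A heavily contaminated epoch mean (say, $0.99$ when the verified estimate is $0.5$) is thus pulled back towards $\mu^B_i$, while an honest mean close to $\mu^B_i$ passes through unchanged.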
\paragraph{A note on efficient verification schemes against strong attackers.} In the case of strong attackers, with a careful combination of the idea described in Secure-BARBAR of incorporating the verified pulls into the estimate of the average reward at each round (lines $9$--$12$ in Algorithm~\ref{alg:Secure-BARBAR}), and the techniques used in the proof of Theorem 1 from~\cite{bogunovic2020stochastic},\footnote{The key step is to replace Lemma 1 from~\cite{bogunovic2020stochastic} with a verification-aware version, using ideas similar to those applied in the proof of Theorem~\ref{thm:Secure-BARBAR regret bound}.} we can prove the following result: with probability at least $1-\delta - \beta$, we can achieve a regret upper bound of $\tilde{O}\bigg(\min\big\{C, { T\log{({2}/{\beta})}}/{\sqrt{B}}\big\}\log{T}\bigg)$. This can be done by modifying the Robust Phase Elimination (RPE) algorithm described in~\cite{bogunovic2020stochastic} with the verification and estimation steps from Algorithm~\ref{alg:Secure-BARBAR}. The drawback of this approach is that it only works when the contamination budget $C$ is known in advance. Although~\cite{bogunovic2020stochastic} have also provided a method against strong attackers with unknown contamination budget $C$, their method can only achieve $\tilde{O}(C^2)$ regret under some restrictive constraints (e.g., $C$ has to be sufficiently small). In addition, it is not clear how to incorporate the ideas introduced for Secure-BARBAR into that approach in an efficient way (i.e., so as to significantly reduce the regret bound from $\tilde{O}(C^2)$). Given this, it remains future work to derive an efficient verification method against strong attackers with unknown contamination budget $C$ that yields regret bounds better than $\tilde{O}(C^2)$.
\section{Comparison of Attacker Models} \label{sec:comparison of attacker models} This section provides a more detailed comparison between the different attacker models from the (robust bandits) literature and their corresponding performance guarantees. In particular, at each round $t$, a \emph{weak attacker} has to place the contamination \emph{before} the actual action is chosen. On the other hand, a \emph{strong attacker} can observe both the chosen action and the corresponding reward before making the contamination. From the perspective of the contamination budget (i.e., the total amount of contamination), the budget can either be bounded above by a fixed threshold, or that bound may only hold in expectation. We refer to the former as a \emph{deterministic budget} and to the latter as an \emph{expected budget}. To date, the following three attacker models have been studied: (i) weak attacker with deterministic budget; (ii) strong attacker with deterministic budget; and (iii) strong attacker with expected budget. \paragraph{Weak attacker with deterministic budget.} For this attacker model, \cite{gupta2019better} have proposed a robust bandit algorithm (called BARBAR) that provably achieves $O(KC + (\log{T})^2)$ regret against a weak attacker with (unknown) deterministic budget $C$. They have also proved a matching regret lower bound of $\Omega(C)$. These results imply that in order to successfully attack BARBAR (i.e., to force an $\Omega(T)$ regret), a weak attacker with deterministic budget would need a contamination budget of $\Omega(T)$. \paragraph{Strong attacker with deterministic budget.} \cite{bogunovic2020stochastic} have shown that there is a phased-elimination-based bandit algorithm that achieves $O(\sqrt{T} + C\log{T})$ regret if $C$ is known to the algorithm, and $O(\sqrt{T} + C\log{T} + C^2)$ if $C$ is unknown.
Note that by moving from the weak attacker model to the strong one, we suffer an extra loss in achievable regret (i.e., from $O(C)$ to $O(C^2)$) in the case of unknown $C$. While the authors have also proved a matching regret lower bound of $\Omega(C)$ for the known budget case, they have not provided any similar result for the case of unknown budget. Nevertheless, their results show that in order to successfully attack their algorithm, an attacker of this type would need a contamination budget of $\Omega(T)$ when the contamination budget is known, and $\Omega(\sqrt{T})$ when it is unknown. \paragraph{Strong attacker with expected budget.} Our Proposition~\ref{thm:constantAttack} shows that this attacker can successfully attack any order-optimal algorithm with an $O(\log{T})$ expected contamination budget (note that~\cite{liu2019data} have also proved a similar, but somewhat weaker, result). We have also provided a matching lower bound on the necessary expected contamination budget against UCB. {It is worth noting that if the rewards are unbounded, then the attacker may use even less contamination (e.g., $O(\sqrt{\log{T}})$) to achieve a successful attack~\citep{zuo2020near}.} \paragraph{{Saving} bandit algorithms with verification.} The above-mentioned results also indicate that if an attacker uses a contamination budget $C$ (either deterministic or expected), the regret that any (robust) algorithm would suffer is $\Omega(C)$. A simple implication is that if an attacker has a budget of $\Theta(T)$ (e.g., he can contaminate all the rewards), then no algorithm can maintain a sub-linear regret if it can only rely on the observed rewards. Secure-ETC, Secure-UCB, and Secure-BARBAR break this $\Omega(C)$ regret barrier with verification.
In particular, the former two still enjoy an order-optimal regret of $O(\log{T})$ against any attacker (even one with a $\Theta(T)$ contamination budget) while only using $O(\log{T})$ verifications. The latter, when playing against a weak attacker, still suffers an increase in regret as $C$ grows, but this increase is no longer linear in $C$ as in the non-verified setting. \section{Conclusions} \label{sec: conclusions} In this paper we introduced a reward verification model for bandits to counteract data contamination attacks. Our contributions can be grouped as follows. We first revisited the analysis of the strong attacker and proved the first attack lower bound of $\Theta(\log{T})$ on the expected number of contaminations needed for a successful attack. This lower bound is tight: the contamination used by our oblivious attack scheme matches it. We then moved to verification-based approaches with unlimited verification, where we first provided two algorithms, Secure-ETC and Secure-UCB, which can recover from any attack with a logarithmic number of verifications. We also provided a matching lower bound on the number of verifications. For the case of limited verifications, we first showed that full recovery is impossible if the attacker has an unlimited contamination budget, unless the verification budget is $B = \Theta(T)$. In case the attacker is also limited by a budget $C$, we proposed Secure-BARBAR, which achieves a regret below the $\Omega(C)$ regret barrier when used against a weak attacker. As for future research, when facing a strong attacker with contamination budget $C$, we briefly discussed how an idea similar to Secure-BARBAR with limited verification can be used to achieve a regret bound better than $O(C\log{T})$. However, this idea requires that $C$ is known in advance.
It is an open question whether, for the case of unknown $C$, we can get a similar regret bound that is better than the one achievable in the non-verified case. Second, since bounding the contamination in expectation and almost surely leads to different results (see Section~\ref{sec:comparison of attacker models}), it would be interesting to study the setting where the number of verifications is bounded almost surely. Third, another interesting extension is a \emph{partial feedback verification} model, where the learner can only request feedback on whether the observed reward is corrupted or not, but cannot see the true reward. Finally, extending our study to RL is an intriguing future direction. \bibliographystyle{unsrtnat}
\section{Introduction and Background} \label{section:Introduction} Machine-type communication (MTC) is poised to increase sharply over the coming decades~\cite{liu2018sparse,munari2020grant}. This poses a great challenge to the foundations on which wireless infrastructures rely, as MTC often produces short packets and fleeting connections. The result is a constantly changing collection of devices wishing to transmit bursty data. In this context, it becomes impractical to adopt the enrollment-estimation-scheduling paradigm, as the cost of acquiring side information on the channel state becomes prohibitive. This reality is reflected in the current interest in grant-free schemes and uncoordinated wireless access. Another aspect of coordinated systems that is hard to maintain in MTC is the implicit agreement between an access point and an active device regarding the MAC-layer configuration. This consideration serves as a motivation behind the unsourced random access (URA) model introduced by Polyanskiy~\cite{polyanskiy2017perspective}. A distinguishing feature of this approach is that all active devices must share the same encoding function. The signal sent by a device therefore depends solely on the message payload, and not on its identity. If a device wishes to be recognized, then it must embed an identifier within its message. Altogether, the URA model captures several fundamental aspects of MTC, which has led to a rapid growth of the literature on the topic~\cite{ordentlich2017low,marshakov2019polar,calderbank2020chirrup,amalladinne2020coded,fengler2019sparcs-isit,amalladinne2020AMP,facenda2020efficient,decurninge2020tensor,shyianov2020massive}. An aspect of URA that has received little consideration is the heterogeneous scenario where various types of devices are present. In such situations, devices from distinct groups may wish to communicate messages of different lengths, and they may have varying power budgets.
This reality can be accommodated by allowing group-based encoding and separating groups in signal space \cite{hao2020exploration}. Interestingly, standard URA admits a compressed sensing (CS) interpretation in which the signals sent by active devices are collectively viewed as sparse indices in the space of possible messages. Under this viewpoint, the heterogeneous URA framework becomes similar to compressive demixing in very large dimensional spaces \cite{mccoy2014sharp}. In this article, we combine recent advances in coded compressed sensing~\cite{amalladinne2020coded,fengler2019sparcs-isit,amalladinne2020AMP} and established concepts in demixing~\cite{chen2001atomic,boyd2011distributed,hegde2012signal,amelunxen2014living,mccoy2014convexity} to develop a novel communication scheme for the two-class URA setting. \section{System Model and Encoding Process} \label{sec:System Model and Encoding Process} In this section, we begin by describing the two-class URA architecture. Consider the scenario in which $K_1$ active devices form group~1 and $K_2$ devices form group~2, each device wishing to send a message to the access point. These transmissions take place over an uncoordinated multiple access channel. Active devices are cognizant of frame boundaries and they collectively signal over $n$ channel uses (real degrees of freedom). The signal received at the destination is of the form \begin{equation} \label{equation:ChannelModel} \textstyle \ensuremath{\vec{y}} = \sum_{i=1}^{K_1} \ensuremath{\vec{x}}_i^{(1)} + \sum_{i=1}^{K_2} \ensuremath{\vec{x}}_i^{(2)} + \ensuremath{\vec{z}} \end{equation} where signal $\ensuremath{\vec{x}}_{\iota}^{(g)} \in \mathcal{C}_g \subset \mathbb{R}^n$ is the codeword transmitted by active device~$\iota \in \{1,\ldots,K_g\}$ in group~$g\in \{1,2\}$, and $\mathcal{C}_g$ is the codebook associated with group~$g$. The noise component $\ensuremath{\vec{z}}$ is composed of independent Gaussian elements, each with distribution $\mathcal{N}(0,1)$.
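As an illustration of the channel model in \eqref{equation:ChannelModel}, the following Python sketch simulates the superposition of codewords from the two groups plus unit-variance Gaussian noise. The uniform codeword choice and the random Gaussian codebooks are placeholder assumptions for the simulation only.

```python
import numpy as np

rng = np.random.default_rng(0)

def received_signal(C1, C2, K1, K2, n, rng):
    """Simulate y = sum_i x_i^(1) + sum_i x_i^(2) + z.

    C1, C2: codebooks as arrays of shape [num_codewords, n]; each active
    device picks one codeword, here uniformly at random for illustration.
    """
    picks1 = C1[rng.integers(0, len(C1), size=K1)]  # group-1 codewords
    picks2 = C2[rng.integers(0, len(C2), size=K2)]  # group-2 codewords
    z = rng.standard_normal(n)                      # N(0,1) noise, per model
    return picks1.sum(axis=0) + picks2.sum(axis=0) + z

n = 64
C1 = rng.standard_normal((8, n))  # toy codebook for group 1
C2 = rng.standard_normal((8, n))  # toy codebook for group 2
y = received_signal(C1, C2, K1=3, K2=2, n=n, rng=rng)
```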
Adhering to the URA framework, we emphasize that all the active devices within a group share the same codebook; consequently, $\ensuremath{\vec{x}}_{\iota}^{(g)}$ is a function of the payload of device~$\iota$, and not of its identity. Our proposed algorithm loosely parallels coded compressed sensing (CCS). The selection of a codeword mimics the encoding process obtained by combining the outer code of Amalladinne et al.~\cite{amalladinne2018coupled,amalladinne2020coded} and the SPARC-inspired inner encoding of Fengler et al.~\cite{fengler2019sparcs-isit,fengler2019sparcs}, albeit we must account for the two groups. More specifically, information bits are encoded using a concatenated structure as in~\cite{amalladinne2020AMP,amalladinne2020unsourced}. The outer code is an LDPC code over a large alphabet. Every output symbol is turned into an index vector, which contains zeros everywhere except for a single location where it features a one. These index vectors are concatenated into a SPARC-like vector~\cite{joseph2013fast,rush2017capacity,venkataramanan2019sparse}, which is subsequently multiplied by a sensing matrix. Such a construction is somewhat intricate, and it can hardly be explained in detail within a short article. We point the reader to~\cite{amalladinne2020coded} for an extensive description of the scheme. Below, we only provide a synopsis of the approach and, then, we emphasize the modifications that must be performed to accommodate two groups of devices. Consider a sequence of information bits $\ensuremath{\vec{w}} \in \{0, 1\}^w$. This sequence $\ensuremath{\vec{w}}$ is encoded using an LDPC code, leading to a codeword $\vec{{v}}$ of length $v = w + p$, which is then partitioned into $L$ blocks $\vec{{v}}(\ell)$ of length $v/L$, so that $\vec{{v}} = {\vec{{v}}(1)} {\vec{{v}}(2)} \cdots {\vec{{v}}(L)}$.
This LDPC code is designed to permit efficient decoding of several codewords on the same graph using belief propagation (BP) through the application of fast Fourier transform (FFT) or similar techniques~\cite{goupil2007fft}. As part of the next encoding step, each sub-block $\vec{{v}}(\ell)$ is transformed into an index vector of length $m = 2^{v/L}$. Mathematically, we have \begin{equation} \label{equation:IndexFunction} \begin{split} \ensuremath{\mathbf{m}}(\ell) &= f(\vec{{v}}(\ell)), \end{split} \end{equation} where the function $f(\cdot)$ is a bijection between $\{ 0, 1 \}^{v/L}$ and the standard basis elements of length $m$. The SPARC-like message $\ensuremath{\mathbf{m}}$ is subsequently obtained as the concatenation of these index vectors, that is, $\ensuremath{\mathbf{m}} = \ensuremath{\mathbf{m}}(1) \ensuremath{\mathbf{m}}(2) \cdots \ensuremath{\mathbf{m}}(L)$. In the original URA formulation, the binary vector $\ensuremath{\mathbf{m}}$ is leveraged to generate the transmit signal as $\ensuremath{\boldsymbol{\Phi}} \ensuremath{\mathbf{m}}$. This is illustrated in Fig.~\ref{figure:MessageEncoding}. The signal aggregate produced by all the active devices then assumes the form $\ensuremath{\boldsymbol{\Phi}} \ensuremath{\mathbf{s}}$, where $\ensuremath{\mathbf{s}} = \sum_i \ensuremath{\mathbf{m}}_i$ is the sum of the SPARC-like messages from all the active devices. \begin{figure}[bth] \centering \input{Figures/LDPC} \caption{ A depiction of the CCS encoding process in Sec.~\ref{sec:System Model and Encoding Process}. The function $f(\cdot)$ in \eqref{equation:IndexFunction} maps strings of bits to index vectors.} \label{figure:MessageEncoding} \end{figure} To accommodate two groups of devices, we must expand this construction. This warrants the creation of two distinct codes $\mathcal{C}_1$ and $\mathcal{C}_2$, both of which follow an encoding architecture akin to Fig.~\ref{figure:MessageEncoding}.
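The mapping $f(\cdot)$ of \eqref{equation:IndexFunction} and the concatenation of the index vectors into the SPARC-like message $\ensuremath{\mathbf{m}}$ can be sketched in a few lines of Python; the tiny block sizes below are purely for illustration.

```python
import numpy as np

def index_vector(bits):
    """f(.): bijection from a length-(v/L) bit block to a standard basis
    vector of length m = 2^{v/L}, one-hot at the block's integer value."""
    m = 2 ** len(bits)
    k = int("".join(map(str, bits)), 2)  # binary block -> integer index
    e = np.zeros(m, dtype=int)
    e[k] = 1
    return e

def sparc_message(v_bits, L):
    """Concatenate the L index vectors m(1) ... m(L) into the SPARC-like m."""
    blocks = np.array_split(np.asarray(v_bits), L)
    return np.concatenate([index_vector(list(b)) for b in blocks])

# Toy example: codeword of v = 6 bits, L = 3 blocks of 2 bits each,
# so each section has m = 4 entries and exactly one nonzero.
m_vec = sparc_message([1, 0, 0, 1, 1, 1], L=3)
```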
Thus, the received signal becomes \begin{equation} \label{equation:InnerChannel} \begin{split} \ensuremath{\vec{y}} &= \ensuremath{\boldsymbol{\Phi}}_1 \ensuremath{\mathbf{s}}_1 + \ensuremath{\boldsymbol{\Phi}}_2 \ensuremath{\mathbf{s}}_2 + \ensuremath{\vec{z}} . \end{split} \end{equation} Signal $\ensuremath{\mathbf{s}}_1$ is the sum of the SPARC-like vectors $\ensuremath{\mathbf{m}}^{(1)}_i$ encoded using $\mathcal{C}_1$; and $\ensuremath{\mathbf{s}}_2$ is the sum of the vectors $\ensuremath{\mathbf{m}}^{(2)}_i$ encoded using $\mathcal{C}_2$. We note that both $\ensuremath{\mathbf{s}}_1$ and $\ensuremath{\mathbf{s}}_2$ possess sparse structures. This hints at a connection between \eqref{equation:InnerChannel} and compressive demixing. \begin{remark} There are at least two ways to arrive at \eqref{equation:InnerChannel}. The simplest avenue is to encode the two groups separately, with devices in group~1 leading to $\ensuremath{\mathbf{s}}_1$ and devices in group~2, $\ensuremath{\mathbf{s}}_2$. The second option is to create layers as in \cite{hao2020exploration}. Suppose that messages within group~1 are smaller than those in group~2, i.e., $w_1 < w_2$. Then, all the devices in group~1 employ $\mathcal{C}_1$. On the other hand, the devices in group~2 encode their first $w_1$ bits with $\mathcal{C}_1$ and, after adding linking parities, they encode the remaining $w_2 - w_1$ bits using $\mathcal{C}_2$. These distinct approaches lead to essentially equivalent mathematical formulations, up to parameterization. Therefore, we discuss the simpler form with the understanding that findings extend to the two alternatives. \end{remark} \section{Demixing and Decoding Algorithm} Having established the encoding for our multi-class URA problem, we now turn our attention to the decoding process. Conceptually, the receiver seeks to recover $\ensuremath{\mathbf{s}}_1$ and $\ensuremath{\mathbf{s}}_2$; and it must disentangle the information messages embedded in each aggregate. 
The first task can be viewed as support recovery through compressive demixing, and the second objective corresponds to message disambiguation. As in~\cite{amalladinne2020AMP}, we propose a decoding framework that performs these two undertakings concurrently. The building blocks for our algorithm include BP on factor graphs~\cite{kschischang2001factor} and approximate message passing (AMP) applied to non-separable functions~\cite{berthier2020state}. \subsection{Belief Propagation (BP)} The application of BP to bipartite LDPC graphs is well understood~\cite{kschischang2001factor} and we shall only briefly review it here. Our notation can be summarized as follows: $\ensuremath{\boldsymbol{\lambda}}_{\ell}$ denotes the trivial factor associated with a local observation; $\ensuremath{\boldsymbol{\mu}}_{s \to a}$ represents the message from variable~$s$ to factor~$a$; and $\ensuremath{\boldsymbol{\mu}}_{a \to s}$ is the message from factor~$a$ to node~$s$. Under suitable conditions, BP efficiently computes marginal probabilities at every variable node~$s$. \begin{figure}[tbh] \centering \input{Figures/BP} \caption{ Our scheme employs factor graphs over large alphabets, one per group, to partially recover several codewords at once.} \label{figure:BeliefPropagation} \end{figure} Still, there is an important distinction between standard decoding on a factor graph and its use in URA settings. The decoder seeks to recover multiple messages on the same graph simultaneously. This distinguishing aspect and its implications are carefully described in~\cite{amalladinne2020unsourced}. Note that it forces the LDPC code to be over a very large alphabet. Also, it changes the way the $\{ \ensuremath{\boldsymbol{\lambda}}_{\ell} \}$ are initialized, a consideration we delay until the next section. Nevertheless, we can interpret vector $\ensuremath{\boldsymbol{\lambda}}_{\ell}$ for one group as a collection of local estimates.
Entry $\ensuremath{\boldsymbol{\lambda}}_{\ell}(k)$ is an estimate for the event that a designated device has produced basis element $\ensuremath{\mathbf{e}}_k$ within section~$\ell$, i.e., a proxy for $\mathrm{Pr} (\ensuremath{\mathbf{m}}_i(\ell) = \ensuremath{\mathbf{e}}_k)$. Once trivial factor nodes are initialized, the message passing rules are essentially standard. A message going from check node~$a_p$ to variable node~$s$ is given by \begin{equation} \label{equation:BP-Check2Variable} \ensuremath{\boldsymbol{\mu}}_{a_p \to s} (k) = \sum_{\ensuremath{\mathbf{k}}_{a_p}: k_p = k} \mathcal{G}_{a_p} \left( \ensuremath{\mathbf{k}}_{a_p} \right) \prod_{s_j \in N(a_p) \setminus s} \ensuremath{\boldsymbol{\mu}}_{s_j \to a_p} (k_j), \end{equation} where $N(a_p)$ is the graph neighborhood of $a_p$ and indicator function $\mathcal{G}_{a_p} (\cdot)$ assesses the local consistency of the index sub-vector $\ensuremath{\mathbf{k}}_{a_p} = ( k_s : s \in N(a_p) )$. Similarly, a message passed from variable node $s_{\ell}$ to check node $a$ can be expressed as \begin{equation} \label{equation:BP-Variable2Check} \ensuremath{\boldsymbol{\mu}}_{s_{\ell} \rightarrow a} (k) \propto \ensuremath{\boldsymbol{\lambda}}_{\ell} (k) \prod_{a_p \in N(s_{\ell}) \setminus a} \ensuremath{\boldsymbol{\mu}}_{a_p \to s_{\ell}} (k), \end{equation} where the `$\propto$' symbol indicates that the measure should be normalized. At any point in the iterative process, the belief vector on section~$\ell$ based on extrinsic information is proportional to the product of the messages from adjoining check factors, i.e. \begin{equation} \label{equation:EquivalentPriors} \ensuremath{\boldsymbol{\mu}}_{s_{\ell}}(k) = \prod_{a \in N(s_{\ell})} \ensuremath{\boldsymbol{\mu}}_{a \to s_{\ell}} (k) . 
\end{equation} The estimated marginal distribution of a specific device having transmitted index~$k$ at variable node~$s_{\ell}$ is proportional to the product of the current messages from all adjoining factors, including its intrinsic information, that is \begin{equation} \label{equation:BP-TildeM} \begin{split} p_{s_{\ell}} (k) &\propto \ensuremath{\boldsymbol{\lambda}}_{\ell} (k) \prod_{a \in N(s_{\ell})} \ensuremath{\boldsymbol{\mu}}_{a \to s_{\ell}} (k) = \ensuremath{\boldsymbol{\lambda}}_{\ell} (k) \ensuremath{\boldsymbol{\mu}}_{s_{\ell}} (k) . \end{split} \end{equation} A normalized version of $\ensuremath{\boldsymbol{\lambda}}_{\ell} (k) \ensuremath{\boldsymbol{\mu}}_{s_{\ell}} (k)$ can be viewed as an estimate for probability $\mathrm{Pr} \left( \ensuremath{\mathbf{m}}_i(\ell) = \ensuremath{\mathbf{e}}_k \right)$ (within a group), where $i$ is fixed. Again, in the context of multi-class URA and \eqref{equation:InnerChannel}, there are actually two factor graphs: one associated with $\mathcal{C}_1$ and the other, with $\mathcal{C}_2$. BP is similar in these two cases, which justifies our unified treatment. Below, we draw a distinction between the two codes wherever appropriate. It is also worth noting that, during joint decoding, we utilize BP in an unusual way with frequent reinitialization and only a few steps of BP per round. The motivation behind this strategy is explained below. \subsection{Approximate Message Passing (AMP)} In this section, we explain the component of the decoding algorithm associated with the inner code. The mathematical structure in \eqref{equation:InnerChannel} hints at several demixing algorithms found in the literature to address similar tasks \cite{boyd2011distributed,hegde2012signal,amelunxen2014living,mccoy2014convexity}. Yet, as mentioned in the introduction, a distinguishing feature of our setting lies in the very large dimensions of the raw index vectors. 
Much like in the case of CCS, this precludes the direct application of existing demixing methods. As such, we strive to create schemes that harness lessons from the past while also admitting a computationally tractable implementation. Specifically, we wish to leverage insights from demixing and transpose them into an AMP framework, which has recently been adapted to the standard URA formulation~\cite{fengler2019sparcs-isit,amalladinne2020AMP}. The appeal of AMP-based solutions stems, partly, from their low complexity and mathematical tractability~\cite{donoho2009message,bayati2011dynamics}. To conform to the structure of typical AMP formulations, we rewrite the received signal in \eqref{equation:InnerChannel} as \begin{equation} \label{equation:InnerChannel2} \begin{split} \ensuremath{\vec{y}} = \ensuremath{\mathbf{A}} \begin{bmatrix} d_1 \ensuremath{\mathbf{s}}_1 \\ d_2 \ensuremath{\mathbf{s}}_2 \end{bmatrix} + \ensuremath{\vec{z}} = \begin{bmatrix} \ensuremath{\mathbf{A}}_1 & \ensuremath{\mathbf{A}}_2 \end{bmatrix} \begin{bmatrix} d_1 \ensuremath{\mathbf{s}}_1 \\ d_2 \ensuremath{\mathbf{s}}_2 \end{bmatrix} + \ensuremath{\vec{z}} , \end{split} \end{equation} where $d_g \ensuremath{\mathbf{A}}_g = \ensuremath{\boldsymbol{\Phi}}_g$. In this abstraction, matrix $\ensuremath{\mathbf{A}}$ has independent, zero-mean Gaussian entries. The variance of these entries is selected so as to normalize the 2-norm of every column, $\mathbb{E} \left\| \ensuremath{\mathbf{A}}_{:,\iota} \right\|^2 = 1$. Constant $d_g$ is a non-negative amplitude; it accounts for the transmit power employed by devices in each group. We emphasize that \eqref{equation:InnerChannel2}, on top of having a standard AMP format, matches the canonical compressive demixing equation found in~\cite{amelunxen2014living}, albeit with an additional noise term.
In its current form, this representation invites a composite iterative AMP decoder that alternates between \eqref{equation:AMP-Residual} and \eqref{equation:AMP-Denoising}: \begin{gather} \ensuremath{\vec{z}}^{(t)} = \ensuremath{\vec{y}} - \ensuremath{\mathbf{A}} \begin{bmatrix} d_1 \ensuremath{\mathbf{s}}_1^{(t)} \\ d_2 \ensuremath{\mathbf{s}}_2^{(t)} \end{bmatrix} + \frac{\ensuremath{\vec{z}}^{(t-1)}}{n} \operatorname{div} \ensuremath{\boldsymbol{\eta}}^{(t-1)} \left( \ensuremath{\mathbf{r}}^{(t-1)} \right) \label{equation:AMP-Residual} \\ \begin{bmatrix} \ensuremath{\mathbf{s}}_1^{(t+1)} \\ \ensuremath{\mathbf{s}}_2^{(t+1)} \end{bmatrix} = \ensuremath{\boldsymbol{\eta}}^{(t)} \left( \ensuremath{\mathbf{r}}^{(t)} \right) = \begin{bmatrix} \ensuremath{\boldsymbol{\eta}}_1^{(t)} \left( \ensuremath{\mathbf{r}}_1^{(t)} \right) \\ \ensuremath{\boldsymbol{\eta}}_2^{(t)} \left( \ensuremath{\mathbf{r}}_2^{(t)} \right) \end{bmatrix}, \label{equation:AMP-Denoising} \end{gather} where the \emph{effective observation} $\ensuremath{\mathbf{r}}^{(t)}$ is given by \begin{equation} \label{equation:Effective-Observation} \ensuremath{\mathbf{r}}^{(t)} = \begin{bmatrix} \ensuremath{\mathbf{r}}_1^{(t)} \\ \ensuremath{\mathbf{r}}_2^{(t)} \end{bmatrix} = \ensuremath{\mathbf{A}}^{\mathrm{T}} \ensuremath{\vec{z}}^{(t)} + \begin{bmatrix} d_1 \ensuremath{\mathbf{s}}_1^{(t)} \\ d_2 \ensuremath{\mathbf{s}}_2^{(t)} \end{bmatrix} . \end{equation} The iterative process is initiated with $\ensuremath{\mathbf{s}}^{(0)} = \ensuremath{\mathbf{0}}$ and $\ensuremath{\vec{z}}^{(0)} = \ensuremath{\vec{y}}$. Equation \eqref{equation:AMP-Residual} is the computation of the \emph{residual} augmented by the characteristic AMP Onsager term~\cite{bayati2011dynamics}. The second equation is a state update, which is performed through denoising using function $\ensuremath{\boldsymbol{\eta}}^{(t)} (\cdot)$. The remainder of this section is devoted to defining and justifying the denoising functions. 
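The composite iterate \eqref{equation:AMP-Residual}--\eqref{equation:Effective-Observation} can be sketched as follows. This is a minimal illustration, not the full decoder: the denoisers are passed in as black boxes returning an estimate and its divergence, and the practical proxy $\tau_t^2 \approx \| \ensuremath{\vec{z}}^{(t)} \|^2/n$ is used in place of the state-evolution recursion.

```python
import numpy as np

def amp_demix(y, A, d1, d2, m1, m2, denoise1, denoise2, iters=10):
    """Composite AMP iterate for the two-class model (sketch).

    A = [A1 A2]; the state stacks s1 (length m1) over s2 (length m2).
    denoise_g(r_g, tau) must return (estimate of s_g, divergence of eta_g).
    """
    n = len(y)
    s1, s2 = np.zeros(m1), np.zeros(m2)
    z = y.copy()       # z^(0) = y
    div = 0.0          # divergence of D*eta at the previous iterate
    for _ in range(iters):
        x = np.concatenate([d1 * s1, d2 * s2])
        z = y - A @ x + (z / n) * div        # residual with Onsager term
        tau = np.linalg.norm(z) / np.sqrt(n) # tau_t^2 ~ ||z^(t)||^2 / n
        r = A.T @ z + x                      # effective observation
        s1, div1 = denoise1(r[:m1], tau)
        s2, div2 = denoise2(r[m1:], tau)
        div = d1 * div1 + d2 * div2          # div of D*eta, per the sum rule
    return s1, s2, z
```

With an orthonormal sensing matrix and a simple clipping denoiser, a noiseless planted signal is recovered exactly, which makes the iterate easy to sanity-check.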
An astounding property of AMP is that, under certain conditions, the effective observation is asymptotically distributed as the true state plus Gaussian noise. For the problem at hand, in the limit where the system gets larger, $\ensuremath{\mathbf{r}}^{(t)}$ becomes approximately distributed as \begin{equation} \begin{bmatrix} d_1 \ensuremath{\mathbf{s}}_1 \\ d_2 \ensuremath{\mathbf{s}}_2 \end{bmatrix} + \tau_t \ensuremath{\boldsymbol{\zeta}}_t, \end{equation} where the entries in $\ensuremath{\boldsymbol{\zeta}}_t$ are independent $\mathcal{N}(0,1)$ random variables and $\tau_t$ is a deterministic quantity. This phenomenon underlies the rationale behind our denoising functions. The first ingredient of our denoiser is the posterior mean estimate (PME) of Fengler et al.~\cite{fengler2019sparcs}, a low-complexity estimator that encourages sparsity in $\ensuremath{\mathbf{s}}^{(t)}$. Their work advocates an element-wise estimate of the form \begin{equation} \label{equation:OriginalPME} \hat{s}_{g} \left( q, r, \tau \right) = \frac{q \exp \left( - \frac{ \left( r - d_{g} \right)^2}{2 \tau^2} \right)} { q \exp \left( - \frac{ \left( r - d_{g} \right)^2}{2 \tau^2} \right) + (1-q) \exp \left( -\frac{r^2}{2 \tau^2} \right)} \end{equation} where $q$ is the prior probability of entry $s$ being equal to one. As its name suggests, $\hat{s}_{g} \left( q, r, \tau \right)$ is equal to the mean of $s$ under the aforementioned Gaussian approximation $r \sim d_g s + \tau \zeta$. This approach performs well for the single-class URA problem, especially in view of its low complexity. The PME denoiser found in~\cite{fengler2019sparcs} employs uninformative priors based on the known sparsity level of $\ensuremath{\mathbf{s}}$. The approach was subsequently improved after realizing that priors can be enhanced by performing one round of BP on the factor graph of the outer LDPC code~\cite{amalladinne2020unsourced}. 
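The element-wise PME \eqref{equation:OriginalPME} is straightforward to implement; the following vectorized Python version is a direct transcription of that formula.

```python
import numpy as np

def pme(q, r, d, tau):
    """Posterior mean estimate of a 0/1 entry s from r ~ d*s + tau*zeta,
    with prior Pr(s = 1) = q; works element-wise on arrays or scalars."""
    num = q * np.exp(-((r - d) ** 2) / (2 * tau ** 2))
    den = num + (1 - q) * np.exp(-(r ** 2) / (2 * tau ** 2))
    return num / den
```

With a uniform prior $q = 1/2$, the estimate equals $1/2$ exactly at $r = d/2$, and tends to $1$ (resp. $0$) as $r$ approaches $d$ (resp. $0$), as expected of a posterior mean.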
Specifically, in this latter case, $\ensuremath{\boldsymbol{\lambda}}_{\ell}$ is initialized based on the PME of the corresponding entries in $\ensuremath{\mathbf{s}}$. This vector estimate is then refined using \eqref{equation:BP-Check2Variable} and \eqref{equation:BP-Variable2Check}. Finally, the distribution afforded by \eqref{equation:EquivalentPriors} is then utilized as priors for a final round of PME computation, leading to the output of the denoiser. Details are available in~\cite{amalladinne2020unsourced} and, as such, we do not reproduce them in this article. Conceptually, the enhanced version provides a means to embed the local structure of the outer code in the AMP iterate. Accordingly, it can improve performance significantly while adding little to the computational complexity of the CCS approach. The dynamic PME denoiser can be adapted to the current context, with its multiple classes. We propose the following candidate implementation. As in \eqref{equation:AMP-Denoising}, we use two instances of the denoiser, one for each group, \begin{equation} \label{equation:PME-Denoiser} \ensuremath{\boldsymbol{\eta}}_g^{(t)}(\ensuremath{\mathbf{r}}_g) \quad g \in \{ 1, 2 \} . \end{equation} These functions are the dynamic PME denoiser defined by \begin{equation} \label{equation:PME-decomposition} \ensuremath{\boldsymbol{\eta}}_g^{(t)}(\ensuremath{\mathbf{r}}_g) = \hat{\ensuremath{\mathbf{s}}}_{g,1}(\ensuremath{\mathbf{r}}_g, \tau_t) \cdots \hat{\ensuremath{\mathbf{s}}}_{g,L_g}(\ensuremath{\mathbf{r}}_g, \tau_t) . \end{equation} The number of sections in \eqref{equation:PME-decomposition} matches the number of symbols produced by the LDPC code in $\mathcal{C}_g$ (see Fig.~\ref{figure:MessageEncoding}). 
Furthermore, individual entries are given by \begin{equation*} \hat{\ensuremath{\mathbf{s}}}_{g,\ell}(\ensuremath{\mathbf{r}}_g, \tau_t) = \left( \hat{s}_{g} \left( \ensuremath{\mathbf{q}}(\ell, k), \ensuremath{\mathbf{r}}(\ell, k), \tau_t \right) : k \in \{ 0, \ldots, m_{g} - 1 \} \right), \end{equation*} where $\{ \ensuremath{\mathbf{q}}(\ell, k) \}$ are the BP extrinsic estimates of \eqref{equation:EquivalentPriors} employed as prior probabilities in the PME. An important subtlety in extending previous results to the current, more elaborate scenario is that $\tau_t$ is shared by the two parallel denoisers $\ensuremath{\boldsymbol{\eta}}_1^{(t)}$ and $\ensuremath{\boldsymbol{\eta}}_2^{(t)}$. It must therefore be computed (or evaluated) through the joint state evolution, leading to \begin{equation*} \begin{split} \tau_{t+1}^2 = \sigma^2 + \lim_{n \rightarrow \infty} \frac{1}{n} \mathbb{E} \left\| \ensuremath{\mathbf{D}} \ensuremath{\boldsymbol{\eta}}^{(t)} \left( \begin{bmatrix} d_1 \ensuremath{\mathbf{s}}_1 \\ d_2 \ensuremath{\mathbf{s}}_2 \end{bmatrix} + \tau_{t} \ensuremath{\boldsymbol{\zeta}}_{t} \right) - \begin{bmatrix} d_1 \ensuremath{\mathbf{s}}_1 \\ d_2 \ensuremath{\mathbf{s}}_2 \end{bmatrix} \right\|^2, \end{split} \end{equation*} where $\ensuremath{\mathbf{D}} = \operatorname{diag} (d_1 \ensuremath{\mathbf{I}}, d_2 \ensuremath{\mathbf{I}})$ accounts for the signal amplitudes and $\sigma^2$ is the noise variance in \eqref{equation:ChannelModel}. In practice, $\tau_t$ is often approximated by $\tau_t^2 \approx \| \ensuremath{\vec{z}}^{(t)} \|^2/n$. A second nuance in AMP for multi-class URA is the computation of the Onsager correction.
To get the proper form, we note that \begin{equation} \label{equation:PME-Onsager} \operatorname{div} \ensuremath{\mathbf{D}} \ensuremath{\boldsymbol{\eta}}^{(t)} (\ensuremath{\mathbf{r}}) = d_1 \operatorname{div} \ensuremath{\boldsymbol{\eta}}_1^{(t)}(\ensuremath{\mathbf{r}}_1) + d_2 \operatorname{div} \ensuremath{\boldsymbol{\eta}}_2^{(t)}(\ensuremath{\mathbf{r}}_2) . \end{equation} Adapting results from \cite[Prop.~8]{amalladinne2020coded}, we then obtain \begin{equation} \label{equation:PME-OnsagerCorrection} \operatorname{div} \ensuremath{\mathbf{D}} \ensuremath{\boldsymbol{\eta}}^{(t)}(\ensuremath{\mathbf{r}}) = \frac{1}{\tau_t^2} \left( \big\| \ensuremath{\mathbf{D}}^2 \ensuremath{\boldsymbol{\eta}}^{(t)}(\ensuremath{\mathbf{r}}) \big\|_1 - \big\| \ensuremath{\mathbf{D}} \ensuremath{\boldsymbol{\eta}}^{(t)}(\ensuremath{\mathbf{r}}) \big\|^2 \right) . \end{equation} This compact form takes advantage of the identities \begin{align*} \big\|\ensuremath{\mathbf{D}} \ensuremath{\boldsymbol{\eta}}^{(t)}(\ensuremath{\mathbf{r}}) \big\|^2 &= \big\| d_1 \ensuremath{\boldsymbol{\eta}}_1^{(t)}(\ensuremath{\mathbf{r}}_1) \big\|^2 + \big\| d_2 \ensuremath{\boldsymbol{\eta}}_2^{(t)}(\ensuremath{\mathbf{r}}_2) \big\|^2 \\ \big\|\ensuremath{\mathbf{D}}^2 \ensuremath{\boldsymbol{\eta}}^{(t)}(\ensuremath{\mathbf{r}}) \big\|_1 &= \big\| d_1^2 \ensuremath{\boldsymbol{\eta}}_1^{(t)}(\ensuremath{\mathbf{r}}_1) \big\|_1 + \big\| d_2^2 \ensuremath{\boldsymbol{\eta}}_2^{(t)}(\ensuremath{\mathbf{r}}_2) \big\|_1 . \end{align*} These results crucially assume that the number of BP iterations on each LDPC factor graph is strictly less than the length of its shortest cycle. The computation of the Onsager correction would become more involved otherwise. In practice, this is not an issue because every AMP composite step entails the initialization of the local observations followed by a single round of BP on each graph.
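The closed-form divergence \eqref{equation:PME-OnsagerCorrection} can be checked numerically against a finite-difference derivative of the PME. The sketch below restates the element-wise PME and compares the two for a single group with scalar amplitude $d$; it is a sanity check, not part of the decoder.

```python
import numpy as np

def pme(q, r, d, tau):
    """Element-wise posterior mean estimate, as in the PME formula."""
    num = q * np.exp(-((r - d) ** 2) / (2 * tau ** 2))
    return num / (num + (1 - q) * np.exp(-(r ** 2) / (2 * tau ** 2)))

def onsager_div_closed_form(s_hat, d, tau):
    """div(D eta) = (||D^2 eta||_1 - ||D eta||^2) / tau^2 for the PME,
    where the entries of s_hat lie in [0, 1]."""
    return (np.sum(d * d * s_hat) - np.sum((d * s_hat) ** 2)) / tau ** 2
```

The identity holds because each PME entry satisfies $\partial \hat{s} / \partial r = (d/\tau^2)\, \hat{s}(1-\hat{s})$, so the summed derivative telescopes into the two norms above.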
Expression \eqref{equation:PME-OnsagerCorrection} can be substituted as the Onsager term in \eqref{equation:AMP-Residual}, thus completing the description of the AMP algorithm for the two-class setting. \subsection{Disambiguation} Once the AMP algorithm has converged, the messages embedded within $\hat{\ensuremath{\mathbf{s}}}_1$ and $\hat{\ensuremath{\mathbf{s}}}_2$ still need to be disentangled. The presence of the LDPC-based message passing within the denoising functions encourages local consistency, but may fall short of decoding. To address this issue, it suffices to run a version of the outer decoding algorithm first introduced in~\cite{amalladinne2020coded}. Starting from root sections, the algorithm keeps track of all possible paths. A message is added to the output list when it fulfills all the factors of the outer code. This step is somewhat standard, as it is done separately for $\mathcal{C}_1$ and $\mathcal{C}_2$. Additional details can be found in \cite{amalladinne2020coded,amalladinne2020AMP}. \section{Practical Considerations and Performance} As mentioned above, AMP is a powerful conceptual framework for the design and analysis of efficient algorithms. Still, in practice, it can be beneficial to deviate slightly from theoretical underpinnings, especially for exceedingly large spaces. Certain decisions can greatly reduce computational complexity. \subsection{Implementation Details} One modification commonly found in the CCS literature is to create $\ensuremath{\mathbf{A}}$ by sampling rows of a large Hadamard matrix (except the row composed of all ones), rather than generate it randomly. Empirical evidence suggests that this does not significantly affect performance, while circumventing the need to store a large matrix. Furthermore, the matrix multiplication steps in \eqref{equation:AMP-Residual} and \eqref{equation:Effective-Observation} can then be performed using the fast Walsh-Hadamard transform. 
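As a concrete illustration of this trick, the sketch below samples row indices of a Sylvester-ordered Hadamard matrix (skipping the all-ones row) and applies the sensing matrix to a vector through a fast Walsh-Hadamard transform, without ever materializing the matrix. Dimensions are illustrative, not those of the actual system:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (Sylvester ordering); len(x) must be a power of 2."""
    x = np.array(x, dtype=float)  # work on a copy
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

rng = np.random.default_rng(0)
N = 8                                                       # toy dimension (power of two)
rows = rng.choice(np.arange(1, N), size=3, replace=False)   # skip row 0 (all ones)

v = rng.standard_normal(N)
Av = fwht(v)[rows]                                          # A @ v without storing A

# sanity check against the explicit Sylvester Hadamard matrix
H = np.array([[(-1) ** bin(i & j).count("1") for j in range(N)] for i in range(N)])
assert np.allclose(Av, H[rows] @ v)
```

In practice the transform length matches the section size of the inner code, and a normalization factor would be applied to control column norms.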
For the problem at hand, we could create $\ensuremath{\mathbf{A}}$ in a similar manner. Yet, inspecting \eqref{equation:Effective-Observation} and \eqref{equation:PME-Denoiser}, we gather that $\ensuremath{\mathbf{r}}_1^{(t)}$ and $\ensuremath{\mathbf{r}}_2^{(t)}$ are employed separately within the denoiser. It therefore suffices to produce these sub-vectors independently. Thus, $\ensuremath{\mathbf{A}}_1$ and $\ensuremath{\mathbf{A}}_2$ can be generated in a disjoint fashion, possibly each through the sampling of a matrix amenable to fast transform techniques. This approach retains the structural properties of the problem, and lowers the computational load slightly. Nevertheless, it is important that $\ensuremath{\mathbf{A}}_1$ and $\ensuremath{\mathbf{A}}_2$ maintain good incoherence properties. Natural candidates for this step include standard Hadamard matrices, as well as matrices constructed using second-order Reed-Muller functions~\cite{howard2008fast}. These latter matrices feature good correlation properties and they have appeared in the context of single-class CCS in the past~\cite{calderbank2020chirrup}, although their use therein is quite different. \subsection{Numerical Simulations} We compare the performance of the proposed multi-class approach against two baselines for multi-class CCS. The first implementation executes (group) \emph{treating interference as noise} (TIN). That is, a single-class recovery algorithm for $\mathcal{C}_1$ is applied to $\ensuremath{\vec{y}}$; similar steps are followed for $\mathcal{C}_2$, again using $\ensuremath{\vec{y}}$ as the input. This scheme would arise naturally if the two device groups were assigned to distinct access points that are not cooperating, but nevertheless share the same spectral band. The second multi-class approach is group \emph{successive interference cancellation} (SIC).
Therein, a single-class recovery algorithm for $\mathcal{C}_1$ is applied to $\ensuremath{\vec{y}}$, which yields estimate $\hat{\ensuremath{\mathbf{s}}}_1$. Interference from group~1 is then subtracted from observation vector $\ensuremath{\vec{y}}$, leading to updated observation $\ensuremath{\vec{y}}_2 = \ensuremath{\vec{y}} - \ensuremath{\mathbf{A}}_1 \hat{\ensuremath{\mathbf{s}}}_1$. A single-class algorithm for $\mathcal{C}_2$ is subsequently used on the refined observation $\ensuremath{\vec{y}}_2$. The third option is the novel multi-class CCS-AMP algorithm introduced in this article. It can be categorized as joint decoding (JD). The system parameters for these three candidate schemes are identical. Group sizes are taken to be $K_1 = K_2 = 25$ active devices. The payloads are $w_1 = 128$ and $w_2 = 96$ bits. The rates of the outer codes are $1/2$ and $3/8$ for the first and second class, respectively. The message from every device is encoded into a block of $n=38400$ channel uses. The \emph{energy-per-bit}, denoted by ${E_{\mathrm{b}}}/{N_0}$, is identical for both groups; this implies that devices in group~1 are using more energy, as they are transmitting more information bits. The factor graphs for the outer code employed in our simulations are analogous to the designs in~\cite{amalladinne2020unsourced}. Information messages are partitioned into fragments of 16~bits and, hence, the alphabet size of the outer code in both cases is $2^{16}$. The factor graphs have parameters $L_1 = L_2 = 16$. Both $\ensuremath{\mathbf{A}}_1$ and $\ensuremath{\mathbf{A}}_2$ are formed by sampling rows of Hadamard matrices (excluding rows of all ones). The algorithm performs one round of BP, after reinitialization, per AMP composite step. Under the parameters above, the AMP algorithm converges rapidly. The disambiguation is subsequently performed on the factor graph of the outer code. The $K_g$ most promising codewords are returned as output of the last step.
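The group SIC baseline described above admits a compact sketch. Here `decode1` and `decode2` are hypothetical stand-ins for single-class recovery routines, and the arrays are toy-sized:

```python
import numpy as np

def group_sic(y, A1, decode1, decode2):
    """Group successive interference cancellation (sketch).

    decode1, decode2: placeholders for single-class recovery algorithms that
    map an observation vector to an estimated signal for their class.
    """
    s1_hat = decode1(y)       # recover class-1 signal from the raw observation
    y2 = y - A1 @ s1_hat      # peel class-1 interference from y
    s2_hat = decode2(y2)      # recover class-2 from the refined observation
    return s1_hat, s2_hat
```

With an oracle class-1 decoder in a noiseless toy model, the second class is recovered exactly, which is the best case the sequential scheme can hope for; errors in $\hat{\ensuremath{\mathbf{s}}}_1$ propagate into $\ensuremath{\vec{y}}_2$ otherwise.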
Results do not use phased decoding where top candidates are identified and peeled before repeating the overall CCS-AMP decoding process. This approach provides a slight improvement in performance, but comes at a substantial cost in terms of computational complexity. The three implementations described in this section feature essentially equivalent computational complexity. \begin{figure}[t] \centerline{\input{Figures/MultiClassCCS}} \caption{This plot shows a comparison of per-user error rate for the two groups under the three candidate implementations. The proposed novel multi-class URA algorithm performs best at essentially no additional computation load.} \label{fig:errorRatesK100trials} \end{figure} Illustrative numerical results are summarized in Fig.~\ref{fig:errorRatesK100trials}, where per-user probabilities of error are plotted for both groups and the three candidate implementations as a function of energy-per-bit ${E_{\mathrm{b}}}/{N_0}$. Every point on this graph is averaged over $100$ trials for statistical accuracy. As expected, group SIC outperforms group TIN; yet we emphasize that group~$1$, which is decoded first in SIC, experiences the exact same performance under these two schemes. Moreover, the multi-class CCS-AMP algorithm performs significantly better than the previous two schemes. This is hardly surprising because joint decoding typically outperforms sequential decoding schemes or TIN-based approaches. However, we emphasize two important aspects of our findings. First, the Onsager correction in the AMP iterate retains a simple mathematical form despite the presence of the two classes and the coupled state evolution equation. Second and more importantly, the superior performance of joint decoding comes at essentially no additional complexity. The same operations must be performed in sequence for all three implementations.
This particular aspect is somewhat surprising because joint decoding schemes are typically much more demanding from a computational point of view. In closing, it is worth noting that Fig.~\ref{fig:errorRatesK100trials} contains no benchmarks, other than those produced within this work. This is a byproduct of the fact that standard demixing algorithms are not meant to work at dimensions on the order of $2^{96}$ or $2^{128}$. \section{Conclusion} In this paper, an extension of coded compressed sensing to the multi-class unsourced random access scenario is studied. A joint decoding approach is proposed to concurrently decode messages emanating from multiple classes of users, leveraging insights from compressed demixing. Within the framework of approximate message passing (AMP), the computational complexity of the proposed scheme is the same as that of alternative implementations such as \textit{treating interference as noise} and \textit{successive interference cancellation}. However, these other schemes perform poorly compared to concurrent multi-class decoding. The ability to decode classes of devices jointly with essentially no added complexity is surprising and should be embraced. Our findings provide a blueprint for generic demixing in very high-dimensional spaces, which forms a promising avenue for future research. \newpage \bibliographystyle{IEEEbib}
\section{Conclusion} \label{sec:conclusion} We proposed a simple but effective knowledge distillation approach by introducing the novel student-friendly teacher network (SFTN). Our strategy sheds light on a new direction for knowledge distillation by focusing on the stage of training teacher networks. We train teacher networks along with their student branches, and then perform distillation from teachers to students. The proposed strategy turns out to achieve outstanding performance, and can be incorporated into various knowledge distillation algorithms easily. To demonstrate the effectiveness of our strategy, we conducted comprehensive experiments in diverse environments, which show consistent performance gains compared to the standard teacher networks regardless of architectural and algorithmic variations. The proposed approach is effective for achieving higher accuracy with reduced model sizes, but it has not been sufficiently verified in unexpected situations with out-of-distribution inputs, domain shifts, lack of training examples, etc. Also, it still relies on a large number of training examples and retains the limitations of high computational cost and potential privacy issues. \vspace{-0.1cm} \section{Acknowledgments and Disclosure of Funding} \label{sec:ack} This work was partly supported by Samsung Electronics Co., Ltd. (IO200626-07474-01) and Institute for Information \& Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) [2017-0-01779; XAI, 2021-0-01343; Artificial Intelligence Graduate School Program (Seoul National University)]. \section{Experiments} \label{sec:experiments} We evaluate the performance of SFTN in comparison to existing methods and analyze the characteristics of SFTN in various aspects. We first describe our experiment setting in Section~\ref{sub:experiment}.
Then, we compare results between SFTNs and the standard teacher networks with respect to classification accuracy in various knowledge distillation algorithms in Section~\ref{sub:benchmark}. The results from ablative experiments for SFTN and transfer learning are discussed in the rest of this section. \subsection{Experiment Setting} \label{sub:experiment} We perform evaluation on multiple well-known datasets including ImageNet~\cite{IMAGENET} and CIFAR-100~\cite{cifar09} using several different backbone networks such as ResNet~\cite{resnet}, WideResNet~\cite{WRN}, VGG~\cite{vggnet}, ShuffleNetV1~\cite{ShuffleNetv1}, and ShuffleNetV2~\cite{ShuffleNetv2}. For comprehensive evaluation, we adopt various knowledge distillation techniques, which include KD~\cite{KD}, FitNets~\cite{FitNet}, AT~\cite{AT}, SP~\cite{sp}, VID~\cite{vid}, RKD~\cite{RKD}, PKT~\cite{pkt}, AB~\cite{ABdistill}, FT~\cite{FT}, CRD~\cite{CRD}, SSKD~\cite{SSKD}, and OH~\cite{OH}. Among these methods, the feature distillation methods~\cite{FitNet,AT,sp,vid,RKD,pkt,ABdistill,FT,OH} conduct joint distillation with conventional KD~\cite{KD} during student training, which in practice results in higher accuracy than feature distillation alone. We also include comparisons with collaborative learning methods such as DML~\cite{DML} and KDCL~\cite{KDCL}, and a curriculum learning technique, RCO~\cite{RCO}. We have reproduced the results from the existing methods using the implementations provided by the authors of the papers. \begin{table*}[t] \centering \caption{Comparisons between SFTN and the standard teacher models on the CIFAR-100 dataset when the architectures of the teacher-student pairs are homogeneous. In all the tested algorithms, the students distilled from the teacher models given by SFTN outperform the ones trained from the standard teacher models.
All the reported results are based on the outputs of 3 independent runs.} \label{table:homogeneous_teacher_student} \scalebox{0.75}{% \begin{tabular}{c||ccc|ccc|ccc|ccc} Teacher/Student & \multicolumn{3}{c|}{WRN40-2/WRN16-2} & \multicolumn{3}{c|}{WRN40-2/WRN40-1} & \multicolumn{3}{c|}{resnet32x4/resnet8x4} & \multicolumn{3}{c}{resnet32x4/resnet8x2} \\ Teacher training & Stan. & SFTN & $\Delta$ & Stan. & SFTN & $\Delta$ & Stan. & SFTN & $\Delta$ & Stan. & SFTN & $\Delta$ \\ \hline\hline Teacher Acc. & 76.30 & 78.20 & & 76.30 & 77.62 & & 79.25 & 79.41 & & 79.25 & 77.89 & \\ Student Acc. w/o KD & & 73.41 & & & 72.16 & & & 72.38 & & & 68.19 & \\ \hline KD~\cite{KD} & 75.46 & 76.25 & $+$0.79 & 73.73 & 75.09 & $+$1.36 & 73.39 & 76.09 & $+$2.70 & 67.43 & 69.17 & $+$1.74 \\ FitNets~\cite{FitNet} & 75.72 & 76.73 & $+$1.01 & 74.14 & 75.54 & $+$1.40 & 75.34 & 76.89 & $+$1.55 & 69.80 & 71.07 & $+$1.27 \\ AT~\cite{AT} & 75.85 & 76.82 & $+$0.97 & 74.56 & 75.86 & $+$1.30 & 74.98 & 76.91 & $+$1.93 & 68.79 & 70.90 & $+$2.11 \\ SP~\cite{sp} & 75.43 & 76.77 & $+$1.34 & 74.51 & 75.31 & $+$0.80 & 74.06 & 76.37 & $+$2.31 & 68.39 & 70.03 & $+$1.64 \\ VID~\cite{vid} & 75.63 & 76.79 & $+$1.16 & 74.21 & 75.76 & $+$1.55 & 74.86 & 77.00 & $+$2.14 & 69.53 & 71.08 & $+$1.55 \\ RKD~\cite{RKD} & 75.48 & 76.49 & $+$1.01 & 73.86 & 75.11 & $+$1.25 & 74.12 & 76.62 & $+$2.50 & 68.54 & 70.91 & $+$2.36 \\ PKT~\cite{pkt} & 75.71 & 76.57 & $+$0.86 & 74.43 & 75.49 & $+$1.06 & 74.70 & 76.57 & $+$1.87 & 69.29 & 70.75 & $+$1.45 \\ AB~\cite{ABdistill} & 70.12 & 70.76 & $+$0.64 & 74.38 & 75.51 & $+$1.13 & 74.73 & 76.51 & $+$1.78 & 69.76 & 71.05 & $+$1.29 \\ FT~\cite{FT} & 75.6 & 76.51 & $+$0.91 & 74.49 & 75.11 & $+$0.62 & 74.89 & 77.02 & $+$2.13 & 69.70 & 71.11 & $+$1.40 \\ CRD~\cite{CRD} & 75.91 & 77.23 & $+$1.32 & 74.93 & 76.09 & $+$1.16 & 75.54 & 76.95 & $+$1.41 & 70.34 & 71.34 & $+$1.00 \\ SSKD~\cite{SSKD} & 75.96 & 76.80 & $+$0.84 & 75.72 & 76.03 & $+$0.31 & 75.95 & 76.85 & $+$0.90 & 69.34 & 70.29 & 
$+$0.96 \\ OH~\cite{OH} & 76.00 & 76.39 & $+$0.39 & 74.79 & 75.62 & $+$0.83 & 75.04 & 76.65 & $+$1.61 & 68.10 & 69.69 & $+$1.59 \\ \hline Best & 76.00 & 77.23 & $+$1.23 & 75.72 & 76.09 & $+$0.37 & 75.95 & 77.02 & $+$1.07 & 70.34 & 71.34 & $+$1.00 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[t] \centering \caption{Comparisons between SFTN and the standard teacher models on CIFAR-100 dataset when the architectures of the teacher-student pairs are heterogeneous. In all the tested algorithms, the student models distilled from the teacher models given by SFTN outperform the ones trained from the standard teacher models. All the reported results are based on the outputs of 3 independent runs. } \label{table:heterogeneous_teacher_student} \scalebox{0.75}{% \begin{tabular}{c||ccc|ccc|ccc|ccc} Teacher/Student & \multicolumn{3}{c}{resnet32x4/ShuffleV1} & \multicolumn{3}{c}{resnet32x4/ShuffleV2} & \multicolumn{3}{c}{ResNet50/VGG8} & \multicolumn{3}{c}{WRN40-2/ShuffleV2} \\ Teacher training & Stan. & SFTN & $\Delta$ & Stan. & SFTN & $\Delta$ & Stan. & SFTN & $\Delta$ & Stan. & SFTN & $\Delta$ \\ \hline\hline Teacher Acc. & 79.25 & 80.03 & & 79.25 & 79.58 & & 78.70 & 82.52 & & 76.30 & 78.21 & \\ Student Acc. 
w/o KD & &71.95 & & &73.21 & & &71.12 & & &73.21 & \\ \hline KD~\cite{KD} & 74.26 & 77.93 & $+$3.67 & 75.25 & 78.07 & $+$2.82 & 73.82 & 74.92 & $+$1.10 & 76.68 & 78.06 & $+$1.38 \\ FitNets~\cite{FitNet} & 75.95 & 78.75 & $+$2.80 & 77.00 & 79.68 & $+$2.68 & 73.22 & 74.80 & $+$1.58 & 77.31 & 79.21 & $+$1.90 \\ AT~\cite{AT} & 76.12 & 78.63 & $+$2.51 & 76.57 & 78.79 & $+$2.22 & 73.56 & 74.05 & $+$0.49 & 77.41 & 78.29 & $+$0.88 \\ SP~\cite{sp} & 75.80 & 78.36 & $+$2.56 & 76.11 & 78.38 & $+$2.27 & 74.02 & 75.37 & $+$1.35 & 76.93 & 78.12 & $+$1.19 \\ VID~\cite{vid} & 75.16 & 78.03 & $+$2.87 & 75.70 & 78.49 & $+$2.79 & 73.59 & 74.76 & $+$1.17 & 77.27 & 78.78 & $+$1.51 \\ RKD~\cite{RKD} & 74.84 & 77.72 & $+$2.88 & 75.48 & 77.77 & $+$2.29 & 73.54 & 74.70 & $+$1.16 & 76.69 & 78.11 & $+$1.42 \\ PKT~\cite{pkt} & 75.05 & 77.46 & $+$2.41 & 75.79 & 78.28 & $+$2.49 & 73.79 & 75.17 & $+$1.38 & 76.86 & 78.28 & $+$1.42 \\ AB~\cite{ABdistill} & 75.95 & 78.53 & $+$2.58 & 76.25 & 78.68 & $+$2.43 & 73.72 & 74.77 & $+$1.05 & 77.28 & 78.77 & $+$1.49 \\ FT~\cite{FT} & 75.58 & 77.84 & $+$2.26 & 76.42 & 78.37 & $+$1.95 & 73.34 & 74.77 & $+$1.43 & 76.80 & 77.65 & $+$0.85 \\ CRD~\cite{CRD} & 75.60 & 78.20 & $+$2.60 & 76.35 & 78.43 & $+$2.08 & 74.52 & 75.41 & $+$0.89 & 77.52 & 78.81 & $+$1.29 \\ SSKD~\cite{SSKD} & 78.05 & 79.10 & $+$1.05 & 78.66 & 79.65 & $+$0.99 & 76.03 & 76.95 & $+$0.92 & 77.81 & 78.34 & $+$0.53 \\ OH~\cite{OH} & 77.51 & 79.56 & $+$2.05 & 78.08 & 79.98 & $+$1.90 & 74.55 & 75.95 & $+$1.40 & 77.82 & 79.14 & $+$1.32 \\ \hline Best & 78.05 & 79.56 & $+$1.51 & 78.66 & 79.98 & $+$1.32 & 76.03 & 76.95 & $+$0.92 & 77.82 & 79.21 & $+$1.39 \\ \hline \end{tabular}% } \end{table*} \subsection{Main Results} \label{sub:benchmark} To show effectiveness of SFTN, we incorporate SFTN into various existing knowledge distillation algorithms and evaluate accuracy. We present implementation details and experimental results on CIFAR-100~\cite{cifar09} and ImageNet~\cite{IMAGENET} datasets. 
\subsubsection{CIFAR-100} CIFAR-100~\cite{cifar09} consists of 50K training images and 10K testing images in 100 classes. We select 12 state-of-the-art distillation methods to compare the accuracy of SFTNs with the standard teacher networks. To show the generality of the proposed approach, 8 pairs of teacher and student models have been tested in our experiment. The experiment setup for CIFAR-100 is identical to the one performed in CRD\footnote{https://github.com/HobbitLong/RepDistiller}; most experiments employ the SGD optimizer with learning rate 0.05, weight decay 0.0005 and momentum 0.9, while the learning rate is set to 0.01 in the ShuffleNet experiments. The hyperparameters for the loss function are set as $\lambda_\text{T}=1$, $\lambda_\text{R}^\text{CE}=1$, $\lambda_\text{R}^\text{KL}=3$, and $\tilde{\tau}=1$ in student-aware training while ${\tau=4}$ in knowledge distillation. Tables~\ref{table:homogeneous_teacher_student} and \ref{table:heterogeneous_teacher_student} demonstrate the full results on the CIFAR-100 dataset. Table~\ref{table:homogeneous_teacher_student} presents the distillation performance of all the compared algorithms when teacher and student pairs have the same architecture type while Table~\ref{table:heterogeneous_teacher_student} shows the results from teacher-student pairs with heterogeneous architecture styles. Both tables clearly demonstrate that SFTN is consistently better than the standard teacher network in all algorithms. The average difference between SFTN and the standard teacher is 1.58\% points, and the average difference between the best student accuracy of SFTN and the standard teacher is 1.10\% points. We note that the outstanding performance of SFTN is not only driven by the higher accuracy of teacher models achieved by our student-aware learning technique.
As observed in Tables~\ref{table:homogeneous_teacher_student} and \ref{table:heterogeneous_teacher_student}, the proposed approach often presents substantial improvement compared to the standard distillation methods despite similar or lower teacher accuracies. Refer to Section~\ref{sub:ablation_study} for further discussion of the relation between the accuracies of teacher and student networks. Figure~\ref{fig:comp_best_student} illustrates the accuracies of the best student models of the standard teacher and SFTN given teacher and student architecture pairs. Despite the small capacity of the students, the best student models of SFTN sometimes outperform the standard teachers, while only one best student of the standard teacher shows higher accuracy than its teacher. \begin{figure*}[t] \begin{center} \includegraphics[width=0.8\linewidth]{./images/fig3_rev2.png} \end{center} \vspace{-0.3cm} \caption{Accuracy comparison of the best students from SFTN with the standard teacher on CIFAR-100. The four best student models of SFTN (blue) outperform the standard teachers (gray) while only one best student of the standard teacher (red) achieves higher accuracy than its teacher (gray).} \label{fig:comp_best_student} \end{figure*} \subsubsection{ImageNet} ImageNet~\cite{IMAGENET} consists of 1.2M training images and 50K validation images for 1K classes. We adopt the standard PyTorch setup for ImageNet training for this experiment\footnote{https://github.com/pytorch/examples/tree/master/imagenet}. The optimization is given by SGD with learning rate 0.1, weight decay 0.0001 and momentum 0.9.
\begin{table*}[t] \centering \caption{Top-1 and Top-5 validation accuracy on ImageNet of SFTN in comparison to other methods.} \label{table:imagenet_result} \scalebox{0.75}{% \begin{tabular}{c||ccc|ccc} Teacher/Student & \multicolumn{6}{c}{ResNet50/ResNet34} \\ \hline & \multicolumn{3}{c|}{Top-1} & \multicolumn{3}{c}{Top-5} \\ Teacher training & Standard & SFTN & $\Delta$ & Standard & SFTN & $\Delta$ \\ \hline\hline Teacher Acc. & 76.45 & 77.43 & & 93.15 & 93.75 & \\ Student Acc. w/o KD & & 73.79 & & & 91.74 & \\ \hline KD~\cite{KD} & 73.55 & 74.14 & $+$0.59 & 91.81 & 92.21 & $+$0.40 \\ FitNets~\cite{FitNet} & 74.56 & 75.01 & $+$0.45 & 92.31 & 92.51 & $+$0.20 \\ SP~\cite{sp} & 74.95 & 75.53 & $+$0.58 & 92.54 & 92.69 & $+$0.15 \\ CRD~\cite{CRD} & 75.01 & 75.39 & $+$0.38 & 92.56 & 92.67 & $+$0.11 \\ OH~\cite{OH} & 74.56 & 75.01 & $+$0.45 & 92.36 & 92.56 & $+$0.20 \\ \hline Best & 75.01 & 75.53 & $+$0.52 & 92.56 & 92.69 & $+$0.13 \\ \hline \end{tabular} } \end{table*} The coefficients of individual loss terms are set as $\lambda_\text{T}=1$, $\lambda_\text{R}^\text{CE}=1$, and $\lambda_\text{R}^\text{KL}=1$, where $\tilde{\tau}=1$. We conduct the ImageNet experiment for 5 different knowledge distillation methods, where teacher models based on ResNet50 transfer knowledge to student networks with ResNet34. As presented in Table~\ref{table:imagenet_result}, SFTN consistently outperforms the standard teacher network in all settings. The best student accuracy via SFTN is higher than its counterpart via the standard teacher model by approximately 0.5\% points. This result implies that the proposed algorithm has great potential on large datasets as well.
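The loss coefficients used above ($\lambda_\text{T}$, $\lambda_\text{R}^\text{CE}$, $\lambda_\text{R}^\text{KL}$) enter a weighted training objective for the teacher and its student branch. The following numpy sketch illustrates such a weighted sum for a single example; it is a simplified assumption of the actual per-branch bookkeeping, with hypothetical function names and an illustrative choice of KL direction:

```python
import numpy as np

def softmax(z, tau=1.0):
    z = np.asarray(z, dtype=float) / tau
    e = np.exp(z - z.max())            # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(logits, label):
    return -np.log(softmax(logits)[label])

def kl_div(p, q):
    return float(np.sum(p * np.log(p / q)))

def sftn_loss(teacher_logits, branch_logits, label,
              lam_t=1.0, lam_ce=1.0, lam_kl=1.0, tau=1.0):
    """Weighted student-aware objective (illustrative sketch)."""
    q_t = softmax(teacher_logits, tau)  # soft teacher output
    q_r = softmax(branch_logits, tau)   # soft student-branch output
    return (lam_t * cross_entropy(teacher_logits, label)    # teacher CE
            + lam_ce * cross_entropy(branch_logits, label)  # branch CE
            + lam_kl * kl_div(q_t, q_r))                    # alignment term
```

When teacher and branch outputs agree, the KL term vanishes and the loss reduces to the two cross-entropy contributions; increasing `lam_kl` trades teacher accuracy for teacher-student alignment, matching the trend discussed in the ablation below.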
\subsection{Comparison with Collaborative and Curriculum Learning Methods} \label{sub:other_knowledge_transfer} Contrary to traditional knowledge distillation methods based on static pretrained teachers, collaborative learning approaches employ dynamic teacher networks trained jointly with students, and curriculum learning methods keep track of the optimization history of teachers for distillation. Table~\ref{table:other_knowledge_transfer} shows that SFTN outperforms the collaborative learning techniques such as DML~\cite{DML} and KDCL~\cite{KDCL}; the heterogeneous architectures turn out to be effective for mutual learning. On the other hand, the accuracy of SFTN is consistently higher than that of the curriculum learning method, RCO~\cite{RCO}, under the same and even harsher training condition in terms of the number of epochs. Although the identification of the optimal checkpoints may be challenging in the trajectory-based learning, SFTN improves its accuracy substantially with more iterations as shown in the results for SFTN-4. \begin{table*}[t] \caption{Comparison with collaborative and curriculum learning approaches on CIFAR-100. We employ KDCL-Na\"ive for ensemble logits of KDCL. RCO is based on the one-stage EEI (equal epoch interval) while RCO-EEI-4 adopts 4-anchor-point selection with the EEI strategy. Note that both RCO-EEI-4 and SFTN-4 are trained for $240\times4$ epochs.} \label{table:other_knowledge_transfer} \centering \scalebox{0.74}{% \begin{tabular}{c||cc|cc|cc|cc|cc|cc} Teacher & \multicolumn{2}{c|}{WRN40-2} & \multicolumn{2}{c|}{WRN40-2} & \multicolumn{2}{c|}{resnet32x4} & \multicolumn{2}{c|}{resnet32x4} & \multicolumn{2}{c|}{resnet32x4} & \multicolumn{2}{c}{ResNet50} \\ Student & \multicolumn{2}{c|}{WRN16-2} & \multicolumn{2}{c|}{WRN40-1} & \multicolumn{2}{c|}{resnet8x4} & \multicolumn{2}{c|}{ShuffleV1} & \multicolumn{2}{c|}{ShuffleV2} & \multicolumn{2}{c}{VGG8} \\\hline\hline Standard teacher Acc.
& \multicolumn{2}{c|}{76.30} & \multicolumn{2}{c|}{76.30} & \multicolumn{2}{c|}{79.25} & \multicolumn{2}{c|}{79.25} & \multicolumn{2}{c|}{79.25} & \multicolumn{2}{c}{78.70} \\ Student Acc. w/o KD & \multicolumn{2}{c|}{73.41} & \multicolumn{2}{c|}{72.16} & \multicolumn{2}{c|}{72.38} & \multicolumn{2}{c|}{71.95} & \multicolumn{2}{c|}{73.21} & \multicolumn{2}{c}{71.12} \\ & Stu. & $\Delta$ & Stu. & $\Delta$ & Stu. & $\Delta$ & Stu. & $\Delta$ & Stu. & $\Delta$ & Stu. & $\Delta$ \\\hline Standard & 75.46 & $+$2.05 & 73.73 & $+$1.57 & 73.39 & $+$1.01 & 74.26 & $+$2.31 & 75.25 & $+$2.04 & 73.82 & $+$2.70 \\ DML~\cite{DML} & 75.30 & $+$1.89 & 74.08 & $+$1.92 & 74.34 & $+$1.96 & 73.37 & $+$1.42 & 73.80 & $+$0.59 & 73.01 & $+$1.89 \\ KDCL~\cite{KDCL} & 75.45 & $+$2.04 & 74.65 & $+$2.49 & 75.21 & $+$2.83 & 73.98 & $+$2.03 & 74.30 & $+$1.09 & 73.48 & $+$2.36 \\ RCO~\cite{RCO} & 75.36 & $+$1.95 & 74.29 & $+$2.13 & 74.06 & $+$1.68 & 76.62 & $+$4.67 & 77.40 & $+$4.19 & 74.30 & $+$3.18 \\ SFTN & 76.25 & $+$2.84 & 75.09 & $+$2.93 & 76.09 & $+$3.71 & 77.93 & $+$5.98 & 78.07 & $+$4.86 & 74.92 & $+$3.80 \\\hline RCO-EEI-4~\cite{RCO} & 75.69 & $+$2.28 & 74.87 & $+$2.71 & 73.73 & $+$1.35 & 76.97 & $+$5.02 & 76.89 & $+$3.68 & 74.24 & $+$3.12 \\ SFTN-4 & 76.96 & $+$3.55 & 76.31 & $+$4.15 & 76.67 & $+$4.29 & 79.11 & $+$7.16 & 78.95 & $+$5.74 & 75.52 & $+$4.40 \\\hline \end{tabular}% } \end{table*} \subsection{Effect of Hyperparameters} \label{sub:ablation_study} SFTN computes the KL-divergence loss, $\mathcal{L}_\text{R}^\text{KL}$, to minimize the difference between the softmax outputs of teacher and student branches, which involves two hyperparameters: the temperature of the softmax function, $\tilde{\tau}$, and the weight for the KL-divergence loss term, $\lambda_\text{R}^\text{KL}$. We discuss the impact and trade-off issues of the two hyperparameters.
In particular, we present our observations that the student-aware learning is indeed helpful for improving the accuracy of student models, while maximizing the performance of teacher models may be suboptimal for knowledge distillation. \begin{table*}[t] \caption{Effect of varying $\tilde{\tau}$ in the KL-divergence loss of the student-aware training tested on CIFAR-100, where $\tau$ for the knowledge distillation is set to 4. The student accuracy is fairly stable over a wide range of the hyperparameter. Note that the accuracies of SFTNs and the student model are rather inversely correlated, which implies that the maximization of teacher models is not necessarily ideal for knowledge distillation.} \label{table:varying_tau} \centering \scalebox{0.75}{% \begin{tabular}{c||cccc|c||cccc|c} & \multicolumn{5}{c||}{Accuracy of SFTN} & \multicolumn{5}{c}{Student accuracy by KD} \\ \hline Teacher & \multicolumn{2}{c}{resnet32x4} & \multicolumn{2}{c|}{WRN40-2} & \multirow{2}{*}{Avg.} & \multicolumn{2}{c}{resnet32x4} & \multicolumn{2}{c|}{WRN40-2} & \multirow{2}{*}{Avg.} \\ Student & \hspace{-0.05cm}ShuffleV1\hspace{-0.05cm} & \hspace{-0.05cm}ShuffleV2\hspace{-0.05cm} & WRN16-2 & WRN40-1 & & \hspace{-0.05cm}ShuffleV1\hspace{-0.05cm} & \hspace{-0.05cm}ShuffleV2\hspace{-0.05cm} & WRN16-2 & WRN40-1 & \\ \hline\hline $\tilde{\tau}=1$ & 81.19 & 80.26 & 78.23 & 78.14 & 78.85 & 76.05 & 77.18 & 76.30 & 74.75 & \textbf{75.58} \\ $\tilde{\tau}=5$ & 81.23 & 81.56 & 79.22 & 78.31 & 79.54 & 75.36 & 75.59 & 76.31 & 73.64 & 75.10 \\ $\tilde{\tau}=10$ & 81.27 & 81.98 & 78.81 & 78.38 & 79.58 & 74.47 & 75.93 & 75.85 & 73.62 & 74.76 \\ $\tilde{\tau}=15$ & 81.89 & 81.74 & 79.27 & 78.63 & \textbf{79.74} & 74.78 & 75.65 & 75.79 & 73.49 & 74.81 \\ $\tilde{\tau}=20$ & 81.60 & 81.70 & 78.84 & 78.45 & 79.59 & 74.62 & 75.88 & 75.82 & 74.03 & 74.95 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[t] \caption{Effect of varying $\lambda_\text{R}^\text{KL}$ in the knowledge distillation via SFTN tested on
CIFAR-100. The accuracies of SFTNs and the corresponding students are not correlated while the accuracy gaps of the two models drop as $\lambda_\text{R}^\text{KL}$ increases.} \label{table:varying_s_kl} \centering \scalebox{0.75}{% \begin{tabular}{c||cccc|c||cccc|c} & \multicolumn{5}{c||}{Accuracy of SFTN} & \multicolumn{5}{c}{Student accuracy by KD} \\ \hline Teacher & \multicolumn{2}{c}{resnet32x4} & \multicolumn{2}{c|}{WRN40-2} & \multirow{2}{*}{Avg.} & \multicolumn{2}{c}{resnet32x4} & \multicolumn{2}{c|}{WRN40-2} & \multirow{2}{*}{Avg.} \\ Student & \hspace{-0.08cm}ShuffleV1\hspace{-0.08cm} & \hspace{-0.08cm}ShuffleV2\hspace{-0.08cm} & WRN16-2 & WRN40-1 & & \hspace{-0.08cm}ShuffleV1\hspace{-0.08cm} & \hspace{-0.08cm}ShuffleV2\hspace{-0.08cm} & WRN16-2 & WRN40-1 & \\ \hline\hline $\lambda_\text{R}^\text{KL}=1$ & 81.19 & 80.26 & 78.23 & 78.14 & \textbf{79.46} & 76.05 & 77.18 & 76.30 & 74.75 & 76.07 \\ $\lambda_\text{R}^\text{KL}=3$ & 78.70 & 79.80 & 77.83 & 77.57 & 78.48 & 77.36 & 78.56 & 76.20 & 74.71 & \textbf{76.71} \\ $\lambda_\text{R}^\text{KL}=6$ & 78.29 & 78.29 & 77.28 & 76.05 & 77.48 & 77.33 & 77.70 & 76.02 & 74.67 & 76.43 \\ $\lambda_\text{R}^\text{KL}=10$ & 73.02 & 75.01 & 75.03 & 73.51 & 74.14 & 75.57 & 76.62 & 74.19 & 73.08 & 74.87 \\ \hline Standard & 79.25 & 79.25 & 76.30 & 76.30 & 77.78 & 74.31 & 75.25 & 75.28 & 73.56 & 74.60 \\ \hline \end{tabular}% } \end{table*} \vspace{-0.2cm} \paragraph{Temperature of softmax function} The temperature parameter of the KL-divergence loss in \eqref{eq:sb_kl}, denoted by $\tilde{\tau}$, controls the softness of $\tilde{\mathbf{q}}_\text{T}$ and $\tilde{\mathbf{q}}_\text{R}^{i}$; as $\tilde{\tau}$ gets higher, the output of the softmax function becomes smoother. Despite the fluctuation in teacher accuracy, student models given by knowledge distillation via SFTN maintain fairly consistent results. 
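As a quick illustration of this smoothing effect (with arbitrary logits, not values from our experiments), raising the temperature pushes the softmax output toward the uniform distribution, i.e., increases its entropy:

```python
import numpy as np

def softened(logits, tau):
    """Softmax with temperature tau; larger tau yields smoother outputs."""
    z = np.asarray(logits, dtype=float) / tau
    e = np.exp(z - z.max())            # subtract max for numerical stability
    return e / e.sum()

def entropy(p):
    return float(-(p * np.log(p)).sum())

logits = [4.0, 1.0, 0.0]
ents = [entropy(softened(logits, tau)) for tau in (1, 5, 20)]
# entropy grows monotonically with the temperature for these logits
assert ents[0] < ents[1] < ents[2]
```

The softened targets expose the relative similarities among non-target classes, which is precisely the signal the KL term transfers from the teacher to the student branch.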
Table~\ref{table:varying_tau} also shows that the performance of SFTNs and the student models is rather inversely correlated. This result implies that a loosely optimized teacher model may be more effective for knowledge distillation according to this ablation study. \vspace{-0.2cm} \paragraph{Weight for KL-divergence loss} The hyperparameter $\lambda_\text{R}^\text{KL}$ facilitates knowledge distillation by making $\tilde{\mathbf{q}}_\text{T}$ similar to $\tilde{\mathbf{q}}_\text{R}^{i}$. However, it affects the accuracy of the teacher network negatively. Table~\ref{table:varying_s_kl} shows that the average accuracy gaps between SFTNs and the corresponding student models drop gradually as $\lambda_\text{R}^\text{KL}$ increases. One interesting observation is the student accuracy via SFTN with $\lambda_\text{R}^\text{KL}=10$ in comparison to its counterpart via the standard teacher; even though the standard teacher network is more accurate than SFTN by a large margin, its corresponding student accuracy is lower than that of SFTN. \iffalse \subsection{Effect of student branch size} \label{sub:student_branch_size} \begin{table*}[!htb] \caption{Effects of student branch size. All the reported accuracies in this table are computed by the outputs of 3 independent runs. Bold denotes the highest accuracy among the various student branch models.} \label{table:student_branch_size_result} \vskip 0.1in \centering \scalebox{0.78}{% \begin{tabular}{c||ccc|ccc|ccc} Models(Teacher/Student) & \multicolumn{3}{c|}{WRN40-2/resnet8x4} & \multicolumn{3}{c|}{resnet32x4/resnet8x4} & \multicolumn{3}{c}{resnet32x4/WRN16-2} \\\hline Student branch size & Small & Equal & Large & Small & Equal & Large & Small & Equal & Large \\ Student branch model & resnet8x2 & resnet8x4 & resnet32x4 & resnet8x2 & resnet8x4 & resnet32x4 & WRN16-1 & WRN16-2 & WRN40-2 \\\hline\hline Teacher Accuracy & 76.22 & 78.18 & 78.82 & 77.89 & 79.41 & 80.85 & 76.53 & 79.04 & 80.3 \\ Stu. Acc.
w/o KD & & 72.38 & & & 71.12 & & & 73.41 & \\ \hline KD & 74.08 & \textbf{75.46} & 74.84 & 75.19 & \textbf{76.09} & 75.19 & 75.14 & 76.33 & \textbf{76.37} \\ SP & 74.01 & \textbf{75.58} & 75.24 & 75.76 & \textbf{76.37} & 75.62 & 75.61 & \textbf{76.91} & 76.34 \\ FT & 74.55 & 75.83 & \textbf{76.03} & 76.54 & \textbf{77.02} & 76.48 & 76.26 & \textbf{77.07} & 76.81 \\ CRD & 75.8 & \textbf{76.94} & 76.52 & 76.72 & \textbf{76.95} & 76.54 & 76.55 & \textbf{77.39} & 77.27 \\ SSKD & 73.67 & 75.85 & \textbf{75.93} & 75.77 & \textbf{76.85} & 76.67 & 75.27 & 77.14 & \textbf{77.35} \\ OH & 74.79 & 75.98 & \textbf{76.09} & 75.69 & \textbf{76.65} & 76.38 & 75.85 & 76.87 & \textbf{77.00} \\\hline Average & 74.48 & \textbf{75.94} & 75.78 & 75.94 & \textbf{76.66} & 76.15 & 75.78 & \textbf{76.95} & 76.86 \\ Best & 75.8 & \textbf{76.94} & 76.52 & 76.72 & \textbf{77.02} & 76.67 & 76.55 & \textbf{77.39} & 77.35 \\\hline \end{tabular}% } \end{table*} Since SFTN employs a student branch for the student-aware training of a teacher network, it affects not only the distillation performance but also the performance of the teacher network itself. To analyze the relation between distillation performance and teacher performance, we have investigated how the size of the student branch affects the performance of SFTN itself and of the distilled student. Table~\ref{table:student_branch_size_result} shows that the best and average student accuracies are obtained when the student branch is equal to the student network, though the teacher's performance increases with the size of the student branch. Since a larger student branch has a different capability from the student network, the increased teacher performance cannot be easily distilled to the student network.
\fi \subsection{Versatility of SFTN} \label{sub:versatility} Although our teacher network obtained from the student-aware training procedure is specialized for a specific student model, it is also effective in transferring knowledge to student models with substantially different architectures. Table~\ref{table:student_branch_var} shows that the benefit of our method is preserved well as long as the student branch has similar capacity to the student models, where the model capacity is defined by the achievable accuracy via independent training without distillation. In addition, it shows that larger student branches are often effective in enhancing distillation performance while smaller student branches are not always helpful. In summary, these results imply that a teacher network in SFTN trained for a specific student architecture has the potential to transfer its knowledge to other types of student networks. \subsection{Use of Pretrained Teachers} \label{sub:pretrained_teacher} The main goal of knowledge distillation is to maximize the benefit in student networks, and the additional training cost may not be critical in many real applications. However, the increase in training cost originating from the student branch of the teacher network is still undesirable. We can sidestep this limitation by adopting pretrained teacher networks in the student-aware training stage. The training cost of SFTN teacher networks is reduced significantly by using pretrained models, and Table~\ref{table:computational_overhead} presents the tendency clearly. Compared to 240 epochs for the standard student-aware training, fine-tuning pretrained teacher networks only needs 60 epochs for convergence; we train the student branches only for the first 30 epochs and fine-tune the whole network for the remaining 30 epochs. Table~\ref{table:pretrained_teacher} shows that fine-tuned pretrained teacher networks have the potential to enhance distillation performance.
They achieve almost the same accuracy as the full SFTN in the 6 tested knowledge distillation algorithms. \begin{table*}[t] \caption{Effectiveness of knowledge distillation via SFTN when student models have different capacity compared to the one used in the student-aware training. This experiment is conducted on CIFAR-100. The numbers in bold and red denote the best and the second-best results. SB denotes student branch. {(M1: resnet8x2, M2: resnet8x4, M3: resnet32x4, M4: WRN16-1, M5: WRN16-2, M6: WRN40-2, M7: ShuffleV2)} } \label{table:student_branch_var} \centering \scalebox{0.74}{% \begin{tabular}{>{\centering}p{2.2cm}||>{\centering}p{0.75cm}>{\centering}p{0.75cm}>{\centering}p{0.75cm}>{\centering}p{0.75cm}>{\centering}p{0.75cm}>{\centering}p{0.75cm}>{\centering}p{0.75cm}|>{\centering}p{0.75cm}>{\centering}p{0.75cm}>{\centering}p{0.75cm}>{\centering}p{0.75cm}>{\centering}p{0.75cm}>{\centering}p{0.75cm} p{0.75cm}} Teacher/Student & \multicolumn{7}{c|}{WRN40-2/WRN16-2 (M6/M5)} & \multicolumn{7}{c}{resnet32x4/ResNet8x4 (M3/M2)} \\\hline SB capacity & N/A & \multicolumn{2}{c}{Smaller} & \multicolumn{2}{c}{Equal/similar} & \multicolumn{2}{c|}{Larger} & N/A & \multicolumn{2}{c}{Smaller} & \multicolumn{2}{c}{Equal/similar} & \multicolumn{2}{c}{Larger} \\ SB model & N/A & M1 & M4 & M5 & M7 & M3 & M6 & N/A & M4 & M1 & M2 & M7 & M3 & M6 \\ \hline\hline SB Acc. & -- & 68.19 & 67.10 & 73.41 & 73.21 & 79.25 & 76.30 & -- & 67.10 & 68.19 & 72.38 & 73.21 & 79.25 & 76.30 \\ Teacher Acc.
& 76.30 & 76.22 & 75.98 & 78.20 & 78.21 & 78.82 & 78.69 & 79.25 & 76.53 & 77.89 & 79.41 & 79.58 & 80.85 & 80.30 \\ \hline KD~\cite{KD} & 75.46 & 74.67 & 74.73 & \textbf{76.25} & \textcolor{red}{75.68} & 75.56 & 75.63 & 73.39 & 74.71 & 75.19 & \textbf{76.09} & \textcolor{red}{75.82} & 75.19 & 75.03 \\ SP~\cite{sp} & 75.43 & 74.75 & 75.29 & \textbf{76.77} & \textcolor{red}{76.56} & 76.13 & 76.08 & 74.06 & 75.31 & 75.76 & \textbf{76.37} & \textcolor{red}{76.09} & 75.62 & 75.36 \\ FT~\cite{FT} & 75.60 & 74.61 & 75.23 & \textcolor{red}{76.51} & \textbf{76.66} & 76.47 & 76.23 & 74.89 & 75.79 & 76.54 & \textbf{77.02} & 76.62 & 76.48 & \textcolor{red}{76.63} \\ CRD~\cite{CRD} & 75.91 & 76.14 & 76.07 & \textcolor{red}{77.23} & \textbf{77.45} & 77.06 & 76.70 & 75.54 & 76.38 & \textcolor{red}{76.72} & \textbf{76.95} & 76.64 & 76.54 & 76.46 \\ SSKD~\cite{SSKD} & 75.96 & 74.35 & 74.41 & \textbf{76.80} & 76.49 & \textcolor{red}{76.73} & 76.72 & 75.95 & 75.06 & 75.77 & \textbf{76.85} & 76.22 & \textcolor{red}{76.67} & 76.24 \\ OH~\cite{OH} & 76.00 & 74.95 & 74.97 & \textcolor{red}{76.39} & \textbf{76.49} & 76.27 & 76.15 & 75.04 & 75.65 & 75.69 & \textbf{76.65} & \textcolor{red}{76.48} & 76.38 & 76.44 \\\hline Average & 75.73 & 74.91 & 75.12 & \textbf{76.66} & \textcolor{red}{76.56} & 76.37 & 76.25 & 74.81 & 75.48 & 75.95 & \textbf{76.66} & \textcolor{red}{76.31} & 76.15 & 76.03 \\ Best & 76.00 & 76.14 & 76.07 & \textcolor{red}{77.23} & \textbf{77.45} & 77.06 & 76.72 & 75.95 & 76.38 & \textcolor{red}{76.72} & \textbf{77.02} & 76.64 & 76.67 & 76.63 \\\hline \end{tabular}% } \end{table*} \begin{table}[h] \centering \vspace{-0.4cm} \caption{Training time of SFTN teachers with pretrained models.
The additional training time in each SFTN with a pretrained teacher model is presented in the last column; it is significantly reduced compared to the standard SFTN teacher (the second-last column).} \label{table:computational_overhead} \vspace{0.2cm} \scalebox{0.8}{% \begin{tabular}{c|cccc} \multirow{2}{*}{Models (teacher/student)} & \multicolumn{4}{c}{Training time (sec)} \\ & Teacher & Student & SFTN teacher & SFTN teacher with a pretrained model \\\hline\hline resnet32x4/ShuffleV1 & 6,005 & 5,624 & 10,910 & 2,298 \\ resnet32x4/ShuffleV2 & 6,005 & 6,221 & 10,949 & 2,370 \\ WRN40-2/WRN16-2 & 3,940 & 1,745 & \ \ 6,028 & 1,178 \\ WRN40-2/WRN40-1 & 3,940 & 3,698 & \ \ 7,431 & 1,621 \end{tabular}% } \vspace{-0.2cm} \end{table} \begin{table}[t] \begin{minipage}{0.485\textwidth} \centering \caption{Performance of SFTN fine-tuned from a pretrained teacher (SFTN-FT) on CIFAR-100.} \vspace{0.1cm} \label{table:pretrained_teacher} \scalebox{0.75}{% \begin{tabular}{c|ccc} Teacher/Student & \multicolumn{3}{c}{resnet32x4/ShuffleV2} \\ Teacher training method & Standard & SFTN & SFTN-FT \\ \hline \hline Teacher Acc. & 79.25 & 80.03 & 80.41 \\ Student Acc.
w/o KD & & 71.95 & \\ \hline KD~\cite{KD} & 75.25 & 78.07 & 78.12 \\ SP~\cite{sp} & 76.11 & 78.38 & 78.51 \\ FT~\cite{FT} & 76.42 & 78.37 & 77.90 \\ CRD~\cite{CRD} & 76.35 & 78.43 & 78.88 \\ SSKD~\cite{SSKD} & 78.66 & 79.65 & 79.15 \\ OH~\cite{OH} & 78.08 & 79.98 & 79.68 \\ \hline Average & 76.81 & 78.81 & 78.71 \\ Best & 78.66 & 79.98 & 79.68 \\ \hline \end{tabular} } \end{minipage} \hspace*{-\textwidth} \hfill \begin{minipage}{0.485\textwidth} \centering \caption{Similarity measurements between teachers and students on CIFAR-100.} \vspace{0.37cm} \label{table:similarity} \scalebox{0.7}{% \begin{tabular}{c|cc|cc} Models (teacher/student) & \multicolumn{4}{c}{resnet32x4/ShuffleV2} \\ \hline Similarity metric & \multicolumn{2}{c|}{KL-divergence} & \multicolumn{2}{c}{CKA} \\ Teacher training method & Standard & SFTN & Standard & SFTN \\ \hline\hline KD~\cite{KD} & 1.10 & 0.47 & 0.88 & 0.95 \\ FitNets~\cite{FitNet} & 0.79 & 0.38 & 0.89 & 0.95 \\ SP~\cite{sp} & 0.95 & 0.45 & 0.89 & 0.95 \\ VID~\cite{vid} & 0.88 & 0.45 & 0.88 & 0.95 \\ CRD~\cite{CRD} & 0.81 & 0.43 & 0.88 & 0.95 \\ SSKD~\cite{SSKD} & 0.54 & 0.26 & 0.92 & 0.97 \\ OH~\cite{OH} & 0.85 & 0.37 & 0.90 & 0.96 \\ \hline AVG & 0.84 & 0.39 & 0.89 & 0.96 \\ \hline \end{tabular}% } \end{minipage} \end{table} \subsection{Similarity between Teacher and Student Representations} \label{sub:similarity} The similarity between teacher and student models is an important measure for knowledge distillation performance in the sense that a student network aims to resemble the output representations of a teacher network. We employ KL-divergence and CKA~\cite{cka} as similarity metrics, where lower KL-divergence and higher CKA indicate higher similarity. Table~\ref{table:similarity} presents the similarities between the representations of a teacher and a student based on ResNet32$\times$4 and ShuffleV2, respectively, which are given by various algorithms on the CIFAR-100 test set. 
The results show that the distillations from SFTNs always give higher similarity between the student models and the corresponding teacher networks; SFTN reduces the KL-divergence by more than 50\% on average while improving the average CKA by 7\% points compared to the standard teacher network. The improved similarity of SFTN is natural since it is trained with student branches to obtain student-friendly representations via the KL-divergence loss. \iffalse Figure~\ref{fig:comp_training}(a) illustrates that the KL-divergence loss for the distillation via SFTN converges faster and is optimized better than the knowledge distillation from the standard teacher network. Consequently, we observe higher test accuracy of SFTN as shown in Figure~\ref{fig:comp_training}(b). \begin{figure*}[t] \centering \begin{subfigure}{.49\linewidth} \centering \includegraphics[width=.85\linewidth]{./images/fig4_a.png} \subcaption{KL-divergence loss} \label{fig:comp_training_a} \end{subfigure} \centering \begin{subfigure}{.49\linewidth} \centering \includegraphics[width=.85\linewidth]{./images/fig4_b.png} \subcaption{Test accuracy} \label{fig:comp_training_b} \end{subfigure} \caption{\bhhan{Visualization of training and testing curves on CIFAR-100,} where ResNet32$\times$4 and ShuffleV2 are employed as the teacher and student networks, respectively. SFTN converges faster and shows improved test accuracy compared to the standard teacher models.} \label{fig:comp_training} \end{figure*} \fi \section{Experiments} \label{sec:experiments} We now evaluate the performance of SFTN in comparison to existing methods and analyze the characteristics of SFTN in various aspects. We first describe our experiment setting in Section~\ref{sub:experiment}. Then, we present the comparison results between SFTN and the standard teacher networks with respect to classification accuracy in various knowledge distillation algorithms in Section~\ref{sub:benchmark}.
The results from ablative experiments for SFTN and transfer learning are discussed in Sections~\ref{sub:ablation_study} and \ref{sub:transfer_learning}. We also measure the similarity between distilled students and their teachers; the results are presented in Section~\ref{sub:similarity}. \subsection{Experiment Setting} \label{sub:experiment} We perform evaluation on multiple well-known datasets including ImageNet~\cite{IMAGENET}, CIFAR-100~\cite{cifar09}, and STL10~\cite{STL10}. For the experiment, we select several different backbone networks such as ResNet~\cite{resnet}, WideResNet~\cite{WRN}, VGG~\cite{vggnet}, ShuffleNetV1~\cite{ShuffleNetv1}, and ShuffleNetV2~\cite{ShuffleNetv2}. For comprehensive evaluation, we adopt various knowledge distillation techniques, which include KD~\cite{KD}, FitNets~\cite{FitNet}, AT~\cite{AT}, SP~\cite{sp}, VID~\cite{vid}, RKD~\cite{RKD}, PKT~\cite{pkt}, AB~\cite{ABdistill}, FT~\cite{FT}, CRD~\cite{CRD}, SSKD~\cite{SSKD}, and OH~\cite{OH}. Among these methods, the feature distillation methods~\cite{FitNet,AT,sp,vid,RKD,pkt,ABdistill,FT,OH} conduct joint distillation with conventional KD~\cite{KD} during student training, which results in higher accuracy in practice than the feature distillation only. We have reproduced the results from the existing methods using the implementations provided by the authors. \begin{table*}[t] \centering \caption{Comparisons between SFTN and the standard teacher models on the CIFAR-100 dataset when the architectures of the teacher-student pairs are homogeneous. In all the tested algorithms, the student models distilled from the teacher models given by SFTN outperform the ones trained from the standard teacher models. All the reported accuracies in this table are computed from the outputs of 3 independent runs.
} \label{table:homogeneous_teacher_student} \scalebox{0.82}{% \begin{tabular}{c||ccc|ccc|ccc|ccc} Models (Teacher/Student) & \multicolumn{3}{c|}{WRN40-2/WRN16-2} & \multicolumn{3}{c|}{WRN40-2/WRN40-1} & \multicolumn{3}{c|}{ResNet32x4/ResNet8x4} & \multicolumn{3}{c}{VGG13/VGG8} \\ Teacher training method & Standard & SFTN & $\Delta$ & Standard & SFTN & $\Delta$ & Standard & SFTN & $\Delta$ & Standard & SFTN & $\Delta$ \\ \hline\hline Teacher Accuracy & 76.30 & 78.2 & & 76.30 & 77.62 & & 79.25 & 79.41 & & 75.38 & 76.76 & \\ Student accuracy w/o KD & & 73.41 & & & 72.16 & & & 72.38 & & & 71.12 & \\ \hline KD~\cite{KD} & 75.46 & 76.25 & $+$0.79 & 73.73 & 75.09 & $+$1.36 & 73.39 & 76.09 & $+$2.70 & 73.41 & 74.52 & $+$1.11 \\ FitNets~\cite{FitNet} & 75.72 & 76.73 & $+$1.01 & 74.14 & 75.54 & $+$1.40 & 75.34 & 76.89 & $+$1.55 & 73.49 & 74.38 & $+$0.89 \\ AT~\cite{AT} & 75.85 & 76.82 & $+$0.97 & 74.56 & 75.86 & $+$1.30 & 74.98 & 76.91 & $+$1.93 & 73.78 & 73.86 & $+$0.08 \\ SP~\cite{sp} & 75.43 & 76.77 & $+$1.34 & 74.51 & 75.31 & $+$0.80 & 74.06 & 76.37 & $+$2.31 & 73.37 & 74.62 & $+$1.25 \\ VID~\cite{vid} & 75.63 & 76.79 & $+$1.16 & 74.21 & 75.76 & $+$1.55 & 74.86 & 77.00 & $+$2.14 & 73.81 & 74.73 & $+$0.92 \\ RKD~\cite{RKD} & 75.48 & 76.49 & $+$1.01 & 73.86 & 75.11 & $+$1.25 & 74.12 & 76.62 & $+$2.50 & 73.52 & 74.48 & $+$0.96 \\ PKT~\cite{pkt} & 75.71 & 76.57 & $+$0.86 & 74.43 & 75.49 & $+$1.06 & 74.70 & 76.57 & $+$1.87 & 73.60 & 74.51 & $+$0.91 \\ AB~\cite{ABdistill} & 70.12 & 70.76 & $+$0.64 & 74.38 & 75.51 & $+$1.13 & 74.73 & 76.51 & $+$1.78 & 73.20 & 74.67 & $+$1.47 \\ FT~\cite{FT} & 75.6 & 76.51 & $+$0.91 & 74.49 & 75.11 & $+$0.62 & 74.89 & 77.02 & $+$2.13 & 73.64 & 74.30 & $+$0.66 \\ CRD~\cite{CRD} & 75.91 & 77.23 & $+$1.32 & 74.93 & 76.09 & $+$1.16 & 75.54 & 76.95 & $+$1.41 & 74.26 & 74.86 & $+$0.60 \\ SSKD~\cite{SSKD} & 75.96 & 76.80 & $+$0.84 & 75.72 & 76.03 & $+$0.31 & 75.95 & 76.85 & $+$0.90 & 74.94 & 75.66 & $+$0.72 \\ OH~\cite{OH} & 76.00 & 76.39 & $+$0.39 & 
74.79 & 75.62 & $+$0.83 & 75.04 & 76.65 & $+$1.61 & 73.94 & 74.72 & $+$0.78 \\ \hline Best & 76.00 & 77.23 & $+$1.23 & 75.72 & 76.09 & $+$0.37 & 75.95 & 77.02 & $+$1.07 & 74.94 & 75.66 & $+$0.72 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[t] \centering \caption{Comparisons between SFTN and the standard teacher models on the CIFAR-100 dataset when the architectures of the teacher-student pairs are heterogeneous. In all the tested algorithms, the student models distilled from the teacher models given by SFTN outperform the ones trained from the standard teacher models. All the reported accuracies in this table are computed from the outputs of 3 independent runs. } \label{table:heterogeneous_teacher_student} \scalebox{0.82}{% \begin{tabular}{c||ccc|ccc|ccc|ccc} Models (Teacher/Student) & \multicolumn{3}{c|}{ResNet32x4/ShuffleV1} & \multicolumn{3}{c|}{ResNet32x4/ShuffleV2} & \multicolumn{3}{c|}{ResNet50/VGG8} & \multicolumn{3}{c}{WRN40-2/ShuffleV2} \\ Teacher training method & Standard & SFTN & $\Delta$ & Standard & SFTN & $\Delta$ & Standard & SFTN & $\Delta$ & Standard & SFTN & $\Delta$ \\ \hline\hline Teacher Accuracy & 79.25 & 80.03 & & 79.25 & 79.58 & & 78.70 & 82.52 & & 76.30 & 78.21 & \\ Student accuracy w/o KD & &71.95 & & &73.21 & & &71.12 & & &73.21 & \\ \hline KD~\cite{KD} & 74.26 & 77.93 & $+$3.67 & 75.25 & 78.07 & $+$2.82 & 73.82 & 74.92 & $+$1.10 & 76.68 & 78.06 & $+$1.38 \\ FitNets~\cite{FitNet} & 75.95 & 78.75 & $+$2.80 & 77.00 & 79.68 & $+$2.68 & 73.22 & 74.80 & $+$1.58 & 77.31 & 79.21 & $+$1.90 \\ AT~\cite{AT} & 76.12 & 78.63 & $+$2.51 & 76.57 & 78.79 & $+$2.22 & 73.56 & 74.05 & $+$0.49 & 77.41 & 78.29 & $+$0.88 \\ SP~\cite{sp} & 75.80 & 78.36 & $+$2.56 & 76.11 & 78.38 & $+$2.27 & 74.02 & 75.37 & $+$1.35 & 76.93 & 78.12 & $+$1.19 \\ VID~\cite{vid} & 75.16 & 78.03 & $+$2.87 & 75.70 & 78.49 & $+$2.79 & 73.59 & 74.76 & $+$1.17 & 77.27 & 78.78 & $+$1.51 \\ RKD~\cite{RKD} & 74.84 & 77.72 & $+$2.88 & 75.48 & 77.77 & $+$2.29 & 73.54 & 74.70 & $+$1.16 &
76.69 & 78.11 & $+$1.42 \\ PKT~\cite{pkt} & 75.05 & 77.46 & $+$2.41 & 75.79 & 78.28 & $+$2.49 & 73.79 & 75.17 & $+$1.38 & 76.86 & 78.28 & $+$1.42 \\ AB~\cite{ABdistill} & 75.95 & 78.53 & $+$2.58 & 76.25 & 78.68 & $+$2.43 & 73.72 & 74.77 & $+$1.05 & 77.28 & 78.77 & $+$1.49 \\ FT~\cite{FT} & 75.58 & 77.84 & $+$2.26 & 76.42 & 78.37 & $+$1.95 & 73.34 & 74.77 & $+$1.43 & 76.80 & 77.65 & $+$0.85 \\ CRD~\cite{CRD} & 75.60 & 78.20 & $+$2.60 & 76.35 & 78.43 & $+$2.08 & 74.52 & 75.41 & $+$0.89 & 77.52 & 78.81 & $+$1.29 \\ SSKD~\cite{SSKD} & 78.05 & 79.10 & $+$1.05 & 78.66 & 79.65 & $+$0.99 & 76.03 & 76.95 & $+$0.92 & 77.81 & 78.34 & $+$0.53 \\ OH~\cite{OH} & 77.51 & 79.56 & $+$2.05 & 78.08 & 79.98 & $+$1.90 & 74.55 & 75.95 & $+$1.40 & 77.82 & 79.14 & $+$1.32 \\ \hline Best & 78.05 & 79.56 & $+$1.51 & 78.66 & 79.98 & $+$1.32 & 76.03 & 76.95 & $+$0.92 & 77.82 & 79.21 & $+$1.39 \\ \hline \end{tabular}% } \end{table*} \subsection{Main Results} \label{sub:benchmark} To show the effectiveness of SFTN, we incorporate it into various existing knowledge distillation algorithms and evaluate the resulting accuracy. We present implementation details and experimental results on the CIFAR-100~\cite{cifar09} and ImageNet~\cite{IMAGENET} datasets. \subsubsection{CIFAR-100} CIFAR-100~\cite{cifar09} is one of the common benchmark datasets for the evaluation of knowledge distillation algorithms. It consists of 50K training images and 10K testing images in 100 classes. We select 12 state-of-the-art distillation methods to compare the accuracy of SFTN with that of the standard teacher network. To show the generality of the proposed approach, 8 pairs of teacher and student models have been tested in our experiment. The experiment setup for CIFAR-100 is identical to the one performed in CRD\footnote{https://github.com/HobbitLong/RepDistiller}; most experiments employ the SGD optimizer with learning rate 0.05, weight decay 0.0005 and momentum 0.9 while the learning rate is set to 0.01 in the ShuffleNet experiments.
The hyper-parameters for the loss function are set as $\lambda_\text{T}=1$, $\lambda_\text{R}^\text{CE}=1$, $\lambda_\text{R}^\text{KL}=3$, and ${\tau=1}$. Tables~\ref{table:homogeneous_teacher_student} and \ref{table:heterogeneous_teacher_student} demonstrate the full results on the CIFAR-100 dataset. Table~\ref{table:homogeneous_teacher_student} presents the distillation performance of all the compared algorithms when the architecture styles of the teacher and student pairs are the same, while Table~\ref{table:heterogeneous_teacher_student} shows the distillation performance of teacher-student pairs with different architecture styles. Both tables clearly present that SFTN is consistently better than the standard teacher network in all experiments. The average difference between the standard teacher and SFTN is 1.49\% points, and the average difference between the best student accuracies of the standard teacher and SFTN is 1.06\% points. Figure~\ref{fig:comp_best_student} illustrates the accuracies of the best student models of both teacher networks given teacher and student architecture pairs. Despite the small capacity of the student networks, the 5 best student models of SFTN outperform the standard teachers while only one best student of the standard teacher shows higher accuracy than its teacher. \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{./images/fig3.png} \end{center} \vspace{-0.3cm} \caption{Accuracy comparison of the best students from SFTN with the standard teacher.
The 5 best student models of SFTN outperform the standard teachers while only one best student of the standard teacher shows higher accuracy than its teacher.} \label{fig:comp_best_student} \end{figure*} \subsubsection{ImageNet} \begin{table}[t] \centering \caption{Top-1 and Top-5 validation accuracy on ImageNet.} \label{table:imagenet_result} \scalebox{0.8}{% \begin{tabular}{c||ccc|ccc} \begin{tabular}[c]{@{}c@{}}Models\\(Teacher/Student)\end{tabular} & \multicolumn{6}{c}{ResNet50/ResNet34} \\ \hline & \multicolumn{3}{c|}{Top-1} & \multicolumn{3}{c}{Top-5} \\ Teacher training & Stan. & SFTN & $\Delta$& Stan. & SFTN & $\Delta$ \\ \hline\hline Teacher Acc. & 76.45 & 77.43 & & 93.15 & 93.75 & \\ Stu. Acc. w/o KD & & 73.79 & & & 91.74 & \\ \hline KD~\cite{KD} & 73.55 & 74.14 & $+$0.59 & 91.81 & 92.21 & $+$0.40 \\ FitNets~\cite{FitNet} & 74.56 & 75.01 & $+$0.45 & 92.31 & 92.51 & $+$0.20 \\ SP~\cite{sp} & 74.95 & 75.53 & $+$0.58 & 92.54 & 92.69 & $+$0.15 \\ CRD~\cite{CRD} & 75.01 & 75.39 & $+$0.38 & 92.56 & 92.67 & $+$0.11 \\ OH~\cite{OH} & 74.56 & 75.01 & $+$0.45 & 92.36 & 92.56 & $+$0.20 \\ \hline Best & 75.01 & 75.53 & $+$0.52 & 92.56 & 92.69 & $+$0.13 \\ \hline \end{tabular}% } \end{table} ImageNet~\cite{IMAGENET} is a large-scale dataset constructed for image classification. The dataset consists of 1.2M training images and 50K validation images for 1K classes. We adopt the standard Pytorch ImageNet training setup\footnote{https://github.com/pytorch/examples/tree/master/imagenet} for this experiment. The optimization is given by SGD with learning rate 0.1, weight decay 0.0001 and momentum 0.9. The coefficients of individual loss terms are set as $\lambda_\text{T}=1$, $\lambda_\text{R}^\text{CE}=1$, $\lambda_\text{R}^\text{KL}=1$, and $\tau=1$. We conduct the ImageNet experiment for 5 different knowledge distillation methods with one pair of teacher and student models, where teacher models based on ResNet50 distill knowledge to ResNet34 student networks.
As presented in Table~\ref{table:imagenet_result}, SFTN consistently outperforms the standard teacher network in all settings. Also, the best student of SFTN achieves higher top-1 accuracy than the best student of the standard teacher by approximately 0.5\% points. This result implies that the proposed SFTN works well on large datasets as well. \subsection{Effect of Hyperparameters} \label{sub:ablation_study} SFTN computes the KL-divergence loss to minimize the difference between the softmax outputs of teacher and student branches, which involves two relevant hyperparameters: the temperature of the softmax function, ${\tau}$, and the weight of the KL-divergence loss term, $\lambda_\text{R}^\text{KL}$. This section discusses the impact and trade-offs of the two hyperparameters. In particular, we present our observations that learning student-friendly representations is indeed helpful to improve the accuracy of student models while maximizing the performance of SFTN may be suboptimal for knowledge distillation. \begin{table*}[!htb] \caption{Effects of varying ${\tau}$ in the knowledge distillation via SFTN. The student accuracy is fairly stable over a wide range of the parameter.
Note that the accuracies of SFTN and student are rather inversely correlated, which implies that the maximization of teacher accuracy is not necessarily ideal for knowledge distillation.} \label{table:varying_tau} \centering \scalebox{0.82}{% \begin{tabular}{c||cccc|c||cccc|c} & \multicolumn{5}{c||}{Accuracy of SFTN} & \multicolumn{5}{c}{Student accuracy by KD~\cite{KD}} \\ \hline Teacher & ResNet32x4 & ResNet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} & ResNet32x4 & ResNet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} \\ Student & ShuffleV1 & ShuffleV2 & WRN16-2 & WRN40-1 & & ShuffleV1 & ShuffleV2 & WRN16-2 & WRN40-1 & \\ \hline ${\tau}$=1 & 81.19 & 80.26 & 78.23 & 78.14 & 78.85 & 76.05 & 77.18 & 76.30 & 74.75 & \textbf{75.58} \\ ${\tau}$=5 & 81.23 & 81.56 & 79.22 & 78.31 & 79.54 & 75.36 & 75.59 & 76.31 & 73.64 & 75.10 \\ ${\tau}$=10 & 81.27 & 81.98 & 78.81 & 78.38 & 79.58 & 74.47 & 75.93 & 75.85 & 73.62 & 74.76 \\ ${\tau}$=15 & 81.89 & 81.74 & 79.27 & 78.63 & \textbf{79.74} & 74.78 & 75.65 & 75.79 & 73.49 & 74.81 \\ ${\tau}$=20 & 81.60 & 81.70 & 78.84 & 78.45 & 79.59 & 74.62 & 75.88 & 75.82 & 74.03 & 74.95 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[!htb] \caption{Effects of varying $\lambda_\text{R}^\text{KL}$ in the knowledge distillation via SFTN.
The accuracies of SFTN and student are not correlated while the accuracy gaps of the two models drop as $\lambda_\text{R}^\text{KL}$ increases.} \label{table:varying_s_kl} \centering \scalebox{0.82}{% \begin{tabular}{c||cccc|c||cccc|c} & \multicolumn{5}{c||}{Accuracy of SFTN} & \multicolumn{5}{c}{Student accuracy by KD~\cite{KD}} \\ \hline Teacher & ResNet32x4 & ResNet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} & ResNet32x4 & ResNet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} \\ Student & ShuffleV1 & ShuffleV2 & WRN16-2 & WRN40-1 & & ShuffleV1 & ShuffleV2 & WRN16-2 & WRN40-1 & \\ \hline $\lambda_\text{R}^\text{KL}$=1 & 81.19 & 80.26 & 78.23 & 78.14 & \textbf{79.46} & 76.05 & 77.18 & 76.30 & 74.75 & 76.07 \\ $\lambda_\text{R}^\text{KL}$=3 & 78.70 & 79.80 & 77.83 & 77.57 & 78.48 & 77.36 & 78.56 & 76.20 & 74.71 & \textbf{76.71} \\ $\lambda_\text{R}^\text{KL}$=6 & 78.29 & 78.29 & 77.28 & 76.05 & 77.48 & 77.33 & 77.70 & 76.02 & 74.67 & 76.43 \\ $\lambda_\text{R}^\text{KL}$=10 & 73.02 & 75.01 & 75.03 & 73.51 & 74.14 & 75.57 & 76.62 & 74.19 & 73.08 & 74.87 \\ \hline Standard & 79.25 & 79.25 & 76.30 & 76.30 & 77.78 & 74.31 & 75.25 & 75.28 & 73.56 & 74.60 \\ \hline \end{tabular}% } \end{table*} \paragraph{Temperature of softmax function} The temperature parameter, denoted by ${\tau}$, controls the softness of $\mathbf{q}_\text{T}$ and $\mathbf{q}_\text{R}^{i}$; as ${\tau}$ gets higher, the output of the softmax function becomes smoother. The accuracy of the student model given by KD via SFTN remains fairly consistent across different temperatures. Table~\ref{table:varying_tau} also shows that the performance of SFTNs and student models is rather inversely correlated. In other words, a loosely optimized teacher model turns out to be more effective for knowledge distillation according to this ablation study.
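As an illustration of how ${\tau}$ smooths the soft targets that enter the KL term, the temperature-scaled softmax and the resulting divergence can be sketched in a few lines (a minimal numpy sketch; the logits and function names are ours for illustration, not taken from the SFTN implementation):

```python
import numpy as np

def softmax_t(logits, tau=1.0):
    """Temperature-scaled softmax; a higher tau yields a smoother distribution."""
    z = (logits / tau) - (logits / tau).max()   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl_div(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

teacher_logits = np.array([4.0, 1.0, 0.5])   # illustrative logits
student_logits = np.array([2.5, 1.5, 1.0])

# Raising tau softens both distributions, so the KL term shrinks as well.
for tau in (1.0, 5.0, 10.0):
    p, q = softmax_t(teacher_logits, tau), softmax_t(student_logits, tau)
    print(tau, p.round(3), round(kl_div(p, q), 4))
```

Note that common KD implementations additionally multiply the KL term by $\tau^2$ so that gradient magnitudes remain comparable across temperatures.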
\paragraph{Weight for KL-divergence loss} $\lambda_\text{R}^\text{KL}$ is a parameter that makes $\mathbf{q}_\text{T}$ similar to $\mathbf{q}_\text{R}^{i}$, and consequently facilitates knowledge distillation. However, it affects the accuracy of the teacher network negatively. Table~\ref{table:varying_s_kl} shows that the average accuracy gaps between SFTNs and student models drop gradually as $\lambda_\text{R}^\text{KL}$ increases. One interesting observation is the student accuracy via SFTN with $\lambda_\text{R}^\text{KL}=10$ compared to its counterpart via the standard teacher; even though the standard teacher network is more accurate than SFTN by a large margin, its corresponding student accuracy is lower than that of SFTN. More details about the similarity between student and teacher networks are discussed in Section~\ref{sub:similarity}. \subsection{Transferability} \label{sub:transfer_learning} The goal of transfer learning is to obtain versatile representations that adapt well to unseen datasets. To investigate the transferability of the student models distilled from SFTN, we perform experiments to transfer the student features learned on CIFAR-100 to STL10~\cite{STL10} and TinyImageNet~\cite{TinyImageNet}. The representations of the examples in CIFAR-100 are obtained from the last student block and frozen during transfer learning, and then we fit linear classifiers attached to the last student block to the target datasets. Table~\ref{table:transfer_result} presents transfer learning results on 5 different knowledge distillation algorithms using ResNet32$\times$4 and ShuffleV2 as teacher and student, respectively. Our experiments show that the accuracy of transfer learning on the student models derived from SFTN is consistently higher than that of the students associated with the standard teacher. The best student accuracy of SFTN even outperforms that of the standard teacher by 3.02\% points on STL10 and 4.06\% points on TinyImageNet.
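The linear-probe protocol used in this transfer experiment (frozen student features, only a linear classifier fit on the target data) can be sketched as follows; this is a schematic numpy version with synthetic features standing in for the frozen student-block representations, not the actual experimental pipeline:

```python
import numpy as np

def linear_probe(features, labels, n_classes, lr=0.5, epochs=300):
    """Fit a softmax classifier on frozen features; the backbone is never updated."""
    n, d = features.shape
    W = np.zeros((d, n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * features.T @ (p - onehot) / n   # cross-entropy gradient w.r.t. W
    return W

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=300)              # synthetic target-task labels
centers = rng.normal(size=(3, 8))                  # class means in feature space
features = centers[labels] + 0.1 * rng.normal(size=(300, 8))

W = linear_probe(features, labels, n_classes=3)
accuracy = np.mean((features @ W).argmax(axis=1) == labels)
```

Because the backbone is frozen, the quality of the transferred representations alone determines how well the linear classifier can separate the target classes.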
\begin{table*}[t] \centering \caption{The accuracy of student models on STL10 and TinyImageNet by transferring knowledge from the models trained on CIFAR-100.} \label{table:transfer_result} \scalebox{0.82}{% \begin{tabular}{c||ccc|ccc} Models (Teacher/Student) & \multicolumn{6}{c}{ResNet32x4/ShuffleV2} \\ \hline & \multicolumn{3}{c|}{CIFAR100 $\rightarrow$ STL10} & \multicolumn{3}{c}{CIFAR100 $\rightarrow$ TinyImageNet} \\ ~~~Teacher training method~~~ & ~Standard~ & ~SFTN~ & ~~$\Delta$~~ & ~Standard~ & ~~SFTN~~ & ~$\Delta$~ \\ \hline\hline Teacher accuracy & 69.81 & 76.84 & & 31.25 & 40.16 & \\ Student accuracy w/o KD & & 70.18 & & & 33.81 & \\ \hline KD~\cite{KD} & 67.49 & 73.81 & $+$6.32 & 30.45 & 37.81 & $+$7.36 \\ SP~\cite{sp} & 69.56 & 75.01 & $+$5.45 & 31.16 & 38.28 & $+$7.12 \\ CRD~\cite{CRD} & 71.70 & 75.80 & $+$4.10 & 35.50 & 40.87 & $+$5.37 \\ SSKD~\cite{SSKD} & 74.43 & 77.45 & $+$3.02 & 38.35 & 42.41 & $+$4.06 \\ OH~\cite{OH} & 72.09 & 76.76 & $+$4.67 & 33.52 & 39.95 & $+$6.43 \\ \hline Best & 74.43 & 77.45 & $+$3.02 & 38.35 & 42.41 & $+$4.06 \\ \hline \end{tabular}% } \end{table*} \subsection{Similarity} \label{sub:similarity} The similarity between student and teacher networks is an important measure for knowledge distillation in the sense that the student network essentially tries to resemble the outputs of the teacher network. Here, we use KL-divergence and CKA~\cite{cka} as metrics of the similarity between student and teacher networks, where lower KL-divergence and higher CKA indicate higher similarity. \begin{table}[t] \centering \caption{Similarity measurements between teachers and students on the CIFAR-100 test set. Higher CKA and lower KL indicate that the representations given by two models are more similar.
} \label{table:similarity} \scalebox{0.8}{% \begin{tabular}{c|cc|cc} Models (Teacher/Student) & \multicolumn{4}{c}{ResNet32x4/ShuffleV2} \\ \hline & \multicolumn{2}{c|}{KL-divergence} & \multicolumn{2}{c}{CKA} \\ Teacher training method & Standard & SFTN & Standard & SFTN \\ \hline\hline KD~\cite{KD} & 1.10 & 0.47 & 0.88 & 0.95 \\ FitNets~\cite{FitNet} & 0.79 & 0.38 & 0.89 & 0.95 \\ AT~\cite{AT} & 0.86 & 0.42 & 0.88 & 0.95 \\ SP~\cite{sp} & 0.95 & 0.45 & 0.89 & 0.95 \\ VID~\cite{vid} & 0.88 & 0.45 & 0.88 & 0.95 \\ RKD~\cite{RKD} & 0.88 & 0.47 & 0.88 & 0.95 \\ PKT~\cite{pkt} & 0.89 & 0.46 & 0.89 & 0.95 \\ AB~\cite{ABdistill} & 0.92 & 0.47 & 0.88 & 0.95 \\ FT~\cite{FT} & 0.80 & 0.42 & 0.88 & 0.95 \\ CRD~\cite{CRD} & 0.81 & 0.43 & 0.88 & 0.95 \\ SSKD~\cite{SSKD} & 0.54 & 0.26 & 0.92 & 0.97 \\ OH~\cite{OH} & 0.85 & 0.37 & 0.90 & 0.96 \\ \hline AVG & 0.86 & 0.42 & 0.89 & 0.95 \\ \hline \end{tabular}% } \end{table} Table~\ref{table:similarity} presents the similarities between the representations of a ResNet32$\times$4 teacher and a ShuffleV2 student given by various algorithms on the CIFAR-100 test set. The results show that the distillation from SFTN always gives higher similarity between the student model and the teacher network; SFTN reduces KL-divergence by 50\% on average while improving average CKA by 6\% points compared to the standard teacher network. Since SFTN is trained with student branches to obtain student-friendly representations via a KL-divergence loss, the improved similarity is quite natural. Figure~\ref{fig:comp_training}(a) illustrates that the KL-divergence loss for the distillation via SFTN converges faster and is optimized better than the knowledge distillation from the standard teacher network. Consequently, we observe higher test accuracy of SFTN as shown in Figure~\ref{fig:comp_training}(b).
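For reference, the linear variant of CKA used as one of the similarity metrics can be computed directly from two feature matrices; the sketch below (numpy, synthetic features standing in for network representations) also illustrates its invariance to orthogonal transformations and isotropic scaling:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices X (n x d1) and Y (n x d2)."""
    X = X - X.mean(axis=0)                      # center every feature dimension
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, 'fro') ** 2  # unnormalized similarity
    return hsic / (np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro'))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))                  # stand-in for teacher features
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal matrix
Y = 3.0 * X @ Q                                 # rotated and rescaled copy of X
# linear_cka(X, X) and linear_cka(X, Y) both equal 1 up to floating point,
# while two independent feature sets give a much smaller value.
```

This invariance is what makes CKA suitable for comparing representations of networks with different architectures, such as the teacher-student pairs studied here.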
\section{Introduction} \label{sec:introduction} Knowledge distillation~\cite{KD} is a well-known technique to learn compact deep neural network models with competitive accuracy, where a smaller network (student) is trained to simulate the representations of a larger one (teacher). The popularity of knowledge distillation is mainly due to its simplicity and generality; it is straightforward to learn a student model based on a teacher and there is no restriction about the network architectures of both models. The main goal of most approaches is how to transfer dark knowledge to student models effectively, given predefined and pretrained teacher networks. Although knowledge distillation is a promising and convenient method, it sometimes fails to achieve satisfactory performance in terms of accuracy. This is partly because the model capacity of a student is too limited compared to that of a teacher and knowledge distillation algorithms are suboptimal~\cite{kang2020towards,mirzadeh2019improved}. In addition to this reason, we claim that the consistency of teacher and student features is critical to knowledge transfer and the inappropriate representation learning of a teacher often leads to the suboptimality of knowledge distillation.
\begin{figure*}[ht] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{./images/fig_1_all_half_1.png} \subcaption{Standard knowledge distillation} \label{fig:sub-first} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{./images/fig_1_all_half_2.png} \subcaption{Student-friendly teacher network} \label{fig:sub-second} \end{subfigure} \caption{Comparison between the standard knowledge distillation and our approach. (a) The standard knowledge distillation trains teachers alone and then distills knowledge to students. (b) The proposed student-friendly teacher network trains teachers along with student branches, and then distills knowledge that is easier to transfer to students.} \label{fig:comparison} \end{figure*} We are interested in making a teacher network hold better transferable knowledge by providing the teacher with a snapshot of the student model at the time of its training. We take advantage of the typical structures of convolutional neural networks with multiple blocks and make the representations of each block in teachers easy to be transferred to students. The proposed approach aims to train teacher models friendly to students for facilitating knowledge distillation; we call the teacher model trained by this strategy student-friendly teacher network (SFTN). SFTN can be easily deployed in arbitrary distillation algorithms due to its generality in training models and transferring knowledge. SFTN is partly related to collaborative learning methods~\cite{DML, KDCL, PCL}, which may suffer from the correlation between the models trained jointly and fail to fully exploit knowledge in teacher models. On the other hand, SFTN is free from this limitation since it performs knowledge transfer from a teacher to a student in one direction via a two-stage learning procedure---student-aware training of teacher network followed by knowledge distillation from a teacher to a student.
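The second stage of this procedure relies on the standard distillation objective of \cite{KD}, which combines a cross-entropy term on the labels with a temperature-scaled KL term toward the teacher outputs. A minimal NumPy sketch of that objective; the temperature $\tau$ and weight $\alpha$ below are illustrative, not the values used in our experiments:

```python
import numpy as np

def softmax(z, t=1.0):
    """Softmax with temperature t, computed in a numerically stable way."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, tau=4.0, alpha=0.9):
    """(1-alpha)*CE(student, labels) + alpha*tau^2*KL(p_teacher || p_student)."""
    p_s = softmax(student_logits, tau)
    p_t = softmax(teacher_logits, tau)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean()
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return (1 - alpha) * ce + alpha * (tau ** 2) * kl

# toy check: identical logits make the KL term vanish,
# so the loss reduces to the weighted cross-entropy term
logits = np.array([[2.0, 0.5, -1.0]])
labels = np.array([0])
print(kd_loss(logits, logits, labels))
```

The $\tau^2$ factor keeps the gradient magnitudes of the KL term comparable across temperatures, which is the usual convention for this loss.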
Although the structure of a teacher network depends on target student models, it is sufficiently generic to be adopted by students with various architectures. Figure~\ref{fig:comparison} demonstrates the main difference between the proposed algorithm and the standard knowledge distillation methods. The following is the list of our main contributions: \begin{itemize} \item We adopt a student-aware teacher learning procedure before knowledge distillation, which enables teacher models to transfer their representations to students more effectively. \item The proposed approach is applicable to diverse architectures of teachers and students while it can be incorporated into various knowledge distillation algorithms. \item We demonstrate that the integration of SFTN into various baseline algorithms and models improves accuracy consistently by substantial margins. \end{itemize} \vspace{-0.2cm} The rest of the paper is organized as follows. We first discuss the existing knowledge distillation techniques in Section~\ref{sec:related}. Section~\ref{sec:student} describes the details of the proposed SFTN including the knowledge distillation algorithm. The experimental results with in-depth analyses are presented in Section~\ref{sec:experiments}, and we conclude the paper in Section~\ref{sec:conclusion}. \section{More Analysis} \begin{table}[h] \centering \vspace{-0.4cm} \caption{Accuracy on CIFAR-100-C~\cite{hendrycks2019robustness}. The results are averaged over the 19 distortion types of CIFAR-100-C.
The SFTN student by KD consistently achieves higher average accuracy than the standard student by KD.} \label{table:corrupted_cifar100} \vspace{0.2cm} \scalebox{0.68}{% \begin{tabular}{c||cccc|c||cccc|c} & \multicolumn{5}{c||}{Accuracy of Student by KD} & \multicolumn{5}{c}{Accuracy of SFTN Student by KD} \\\hline Teacher & resnet32x4 & resnet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} & resnet32x4 & resnet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} \\ Student & ShuffleV2 & ShuffleV1 & WRN16-2 & WRN40-1 & & ShuffleV2 & ShuffleV1 & WRN16-2 & WRN40-1 & \\\hline w/o distortion & 75.30 & 74.10 & 75.69 & 73.69 & 74.70 & 77.52 & 77.83 & 76.13 & 75.11 & \textbf{76.65} \\ intensity=1 & 63.84 & 63.84 & 61.83 & 61.29 & 62.70 & 66.01 & 65.15 & 62.35 & 61.60 & \textbf{63.78} \\ intensity=2 & 55.49 & 56.16 & 52.49 & 52.88 & 54.26 & 57.68 & 56.20 & 53.30 & 52.63 & \textbf{54.95} \\ intensity=3 & 50.34 & 51.08 & 46.91 & 47.50 & 48.96 & 52.26 & 50.82 & 47.79 & 47.31 & \textbf{49.55} \\ intensity=4 & 44.02 & 44.35 & 40.45 & 41.10 & 42.48 & 46.71 & 44.31 & 41.28 & 41.03 & \textbf{43.33} \\ intensity=5 & 34.29 & 35.28 & 30.66 & 31.66 & 32.97 & 35.37 & 34.08 & 31.39 & 31.74 & \textbf{33.15} \\\hline \end{tabular}% } \vspace{-0.2cm} \end{table} \subsection{Robustness of Data Distribution Shift} \label{sub:Robustness} Distilled models are typically deployed in resource-constrained systems for real-world applications, where out-of-distribution inputs and domain shifts are inevitable. We therefore employ CIFAR-100-C~\cite{hendrycks2019robustness} to evaluate the robustness of our models compared to the standard knowledge distillation. Table~\ref{table:corrupted_cifar100} demonstrates the benefit of SFTN on the CIFAR-100-C dataset, where the w/o distortion row presents the CIFAR-100 performance of the tested models. The proposed algorithm outperforms the standard knowledge distillation.
However, the average margins diminish gradually with an increase in the corruption intensities. This would be partly because highly corrupted data often suffer from the randomness of predictions and the knowledge distillation algorithms are prone to fail in making correct predictions without additional techniques. \subsection{Transferability} \label{sub:transfer_learning} \begin{table}[h] \centering \vspace{-0.4cm} \caption{The accuracy of student models on STL10~\cite{STL10} and TinyImageNet~\cite{TinyImageNet} by transferring knowledge from the models trained on CIFAR-100.} \label{table:transfer_result} \vspace{0.2cm} \scalebox{0.8}{% \begin{tabular}{c||ccc|ccc} Models (Teacher/Student) & \multicolumn{6}{c}{resnet32x4/ShuffleV2} \\ \hline & \multicolumn{3}{c|}{CIFAR100 $\rightarrow$ STL10} & \multicolumn{3}{c}{CIFAR100 $\rightarrow$ TinyImageNet} \\ ~~~Teacher training method~~~ & ~Standard~ & ~SFTN~ & ~~$\Delta$~~ & ~Standard~ & ~~SFTN~~ & ~$\Delta$~ \\ \hline\hline Teacher accuracy & 69.81 & 76.84 & & 31.25 & 40.16 & \\ Student accuracy w/o KD & & 70.18 & & & 33.81 & \\ \hline KD~\cite{KD} & 67.49 & 73.81 & $+$6.32 & 30.45 & 37.81 & $+$7.36 \\ SP~\cite{sp} & 69.56 & 75.01 & $+$5.45 & 31.16 & 38.28 & $+$7.12 \\ CRD~\cite{CRD} & 71.70 & 75.80 &$+$4.10 & 35.50 & 40.87 & $+$5.37 \\ SSKD~\cite{SSKD} & 74.43 & 77.45 & $+$3.02 & 38.35 & 42.41 & $+$4.06 \\ OH~\cite{OH} & 72.09 & 76.76 & $+$4.67 & 33.52 & 39.95 & $+$6.43 \\ \hline AVG & 71.05 & 75.77 & $+$4.71 & 33.80 & 39.86 & $+$6.07 \\ \hline \end{tabular}% } \vspace{-0.2cm} \end{table} The goal of transfer learning is to obtain versatile representations that adapt well on unseen datasets. To investigate transferability of the student models distilled from SFTN, we perform experiments to transfer the student features learned on CIFAR-100 to STL10~\cite{STL10} and TinyImageNet~\cite{TinyImageNet}. 
The representations of the examples in CIFAR-100 are obtained from the last student block and frozen during transfer learning, and then we fit the features to the target datasets using linear classifiers attached to the last student block. Table~\ref{table:transfer_result} presents transfer learning results with five different knowledge distillation algorithms using ResNet32$\times$4 and ShuffleV2 as teacher and student, respectively. Our experiments show that the accuracy of transfer learning on the student models derived from SFTN is consistently better than that of the students associated with the standard teacher. The average student accuracy of SFTN even exceeds that of the standard teacher by 4.71\% points on STL10~\cite{STL10} and 6.07\% points on TinyImageNet~\cite{TinyImageNet}. \subsection{Relationship between Teacher and Student Accuracies} Figure~\ref{fig:teacher_network_capacity} demonstrates the relationship between teacher and student accuracies. According to our experiments, higher teacher accuracy does not necessarily lead to better student models. Also, even in the case that the teacher accuracies of SFTN are lower than those of the standard method, the student models of SFTN consistently outperform the counterparts of the standard method. One possible explanation is that SFTN learns adaptive temperatures for the individual elements of a logit. Table~\ref{table:entropy_compare} shows that introducing student branches tempers the teacher outputs toward higher entropy, which allows student networks to become more similar to their teachers. This result implies that the accuracy gain of a teacher model is not the main reason for the better results of SFTN.
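The entropy values reported in Table~\ref{table:entropy_compare} are the mean Shannon entropy of the softmax outputs. A minimal NumPy sketch (the logit values below are purely illustrative) showing that softer logits, analogous to the outputs the student branches encourage in an SFTN teacher, yield higher entropy:

```python
import numpy as np

def softmax_entropy(logits):
    """Mean Shannon entropy of the softmax distributions over a batch of logits."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return float((-p * np.log(p)).sum(axis=-1).mean())

sharp = np.array([[10.0, 0.0, 0.0]])  # confident, near one-hot prediction
soft = np.array([[2.0, 1.0, 0.5]])    # softened logits (illustrative of an SFTN teacher)
print(softmax_entropy(sharp), softmax_entropy(soft))
```

Unlike a single global temperature, which scales all logits uniformly, the entropy increase learned by SFTN can differ per class, which is what we mean by adaptive temperatures.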
\begin{figure*}[ht] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{./images/Teacher_capa_b_bar.png} \subcaption{Teacher accuracy on CIFAR-100.} \label{fig:teacher_network_capacity_a} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{./images/Teacher_capa_a_bar.png} \subcaption{Student Accuracy by KD~\cite{KD}.\vspace{0.1cm}} \label{fig:teacher_network_capacity_b} \end{subfigure} \caption{Relationship between teacher and student accuracies tested on CIFAR-100, where ResNet with different sizes and MobileNetV2 are employed as teacher and student networks, respectively. In general, the teacher accuracy of SFTN is lower than that of the standard teacher network, but the student models of SFTN consistently outperform those of the standard method.} \label{fig:teacher_network_capacity} \end{figure*} \begin{table}[h] \centering \vspace{-0.4cm} \caption{The entropy of a teacher given by SFTN is higher than that of a teacher for the standard distillation.
This result implies that student-aware training learns adaptive temperatures for the individual elements of a logit, which would be better than the simple temperature scaling by a global constant employed in the standard knowledge distillation.} \label{table:entropy_compare} \vspace{0.2cm} \scalebox{0.8}{% \begin{tabular}{l|lllll} Teacher & ResNet18 & ResNet34 & ResNet50 & ResNet101 & ResNet152 \\ Student & \multicolumn{5}{c}{MobilenetV2} \\\hline\hline Student training entropy & \multicolumn{5}{c}{0.0041} \\ Standard teacher training entropy & 0.0004 & 0.0002 & 0.0002 & 0.0002 & 0.0002 \\ SFTN teacher training entropy & 0.0053 & 0.0041 & 0.0044 & 0.0048 & 0.0042 \\\hline\hline Student accuracy & \multicolumn{5}{c}{65.71} \\ Standard student accuracy & 68.39 & 67.05 & 68.27 & 68.60 & 68.71 \\ SFTN student accuracy & 69.08 & 68.77 & 69.15 & 68.99 & 68.90 \end{tabular}% } \vspace{-0.2cm} \end{table} \iffalse \begin{figure*}[t] \begin{center} \includegraphics[width=0.85\linewidth]{./images/teacher_capa_merge.png} \end{center} \caption{Effect of teacher network capacity on CIFAR-100, where five ResNet architectures and MobileNetV2 are employed as the teacher and student networks, respectively. Generally, the teacher accuracy of SFTN is lower than the standard, but the student accuracy of SFTN is consistently better than standard.} \label{fig:teacher_network_capacity2} \end{figure*} \fi \subsection{Training and Testing Curves} \label{sub:Training_and_Testing_Curves} Figure~\ref{fig:comp_training}(a) illustrates that the KL-divergence loss of SFTN for knowledge distillation converges faster than that of the standard teacher network. This is probably because, through the student-aware training with student branches, SFTN learns knowledge that is more transferable to the student model than the standard teacher network does. We believe that it leads to higher test accuracies of SFTN as shown in Figure~\ref{fig:comp_training}(b).
\begin{figure*}[ht] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.85\linewidth]{./images/fig4_a.png} \subcaption{KL-divergence loss during training on CIFAR-100.\vspace{0.1cm}} \label{fig:comp_training_a} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.85\linewidth]{./images/fig4_b.png} \subcaption{Test accuracy on CIFAR-100.} \label{fig:comp_training_b} \end{subfigure} \caption{Visualization of training and testing curves on CIFAR-100, where ResNet32$\times$4 and ShuffleV2 are employed as teacher and student networks, respectively. SFTN converges faster and shows improved test accuracy compared to the standard teacher models.} \label{fig:comp_training} \end{figure*} \subsection{Additional study of hyperparameters} In the main paper, we show the effects of ${\tau}$ and $\lambda_\text{R}^\text{KL}$. However, there are a few more hyperparameters that affect performance in the SFTN framework, so we present additional results for them here. Table~\ref{table:Branch_number} shows that the average student accuracy by KD of Branch1+2 is consistently better than that of the models with a single branch. Also, the average student accuracy by KD of a single branch is higher than that of standard KD. We also test the impacts of $\lambda_\text{R}^\text{CE}$ and $\lambda_\text{T}$, which control the weight of cross-entropy loss in the student branch and the teacher network, respectively. Table~\ref{table:lambada_R_CE} and \ref{table:lambada_T} show that our results are very consistent for the variations of $\lambda_\text{R}^\text{CE}$ and $\lambda_\text{T}$, achieving 76.83$\pm$0.10 and 76.55$\pm$0.43, respectively, while the accuracy of the standard distillation is 74.60. These additional results with respect to the various hyperparameter settings show the robustness of the SFTN framework. \vspace{1cm} \begin{table} \caption{Effects of number of student branches on CIFAR-100.
Branch1 and Branch2 denote the version that has a single student branch after $F_{T}^{1}$ and $F_{T}^{2}$, respectively while Branch1+2 indicates the model with student branches after $F_{T}^{1}$, $F_{T}^{2}$. Refer to Fig. 2(a) of the main paper for the definition of Branch1 and Branch2. } \label{table:Branch_number} \begin{subtable}{1\textwidth} \centering \scalebox{0.85}{% \begin{tabular}{c||cccc|c} \multicolumn{1}{l||}{} & \multicolumn{5}{c}{Accuracy of SFTN} \\\hline Teacher & resnet32x4 & resnet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} \\ Student & resnet8x2 & ShuffleV2 & WRN16-2 & ShuffleV2 & \\\hline Branch1 & 74.61 & 78.06 & 77.19 & 77.34 & 76.80 \\ Branch2 & 75.58 & 76.57 & 75.94 & 75.79 & 75.97 \\ Branch1+2 & 77.89 & 79.58 & 78.20 & 78.21 & \textbf{78.47} \\\hline standard & 79.25 & 79.25 & 76.30 & 76.30 & 77.78 \\\hline \end{tabular} } \caption{Results of teacher network trained with student-aware training}\label{table:Branch_number_a} \end{subtable} \bigskip \begin{subtable}{1\textwidth} \centering \scalebox{0.85}{% \begin{tabular}{c||cccc|c} \multicolumn{1}{l||}{} & \multicolumn{5}{c}{Student Accuracy by KD} \\\hline Teacher & resnet32x4 & resnet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} \\ Student & resnet8x2 & ShuffleV2 & WRN16-2 & ShuffleV2 & \\\hline Branch1 & 69.46 & 78.11 & 75.66 & 77.23 & 75.12 \\ Branch2 & 69.71 & 76.74 & 75.86 & 77.52 & 74.96 \\ Branch1+2 & 69.17 & 78.07 & 76.25 & 78.06 & \textbf{75.39} \\\hline standard & 67.43 & 75.25 & 75.46 & 76.68 & 73.71 \\\hline \end{tabular} } \caption{Results of student network by KD}\label{table:Branch_number_b} \end{subtable} \end{table} \begin{table} \caption{Effect of $\lambda_\text{R}^\text{CE}$ on CIFAR-100. Student accuracies by KD are consistent for the variation of $\lambda_\text{R}^\text{CE}$. 
} \label{table:lambada_R_CE} \begin{subtable}{1\textwidth} \centering \scalebox{0.85}{% \begin{tabular}{c||cccc|c} \multicolumn{1}{l||}{} & \multicolumn{5}{c}{Accuracy of SFTN} \\\hline Teacher & resnet32x4 & resnet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} \\ Student & resnet8x2 & ShuffleV2 & WRN16-2 & ShuffleV2 & \\\hline $\lambda_\text{R}^\text{CE}=1$ & 78.70 & 79.80 & 77.83 & 77.57 & 78.48 \\ $\lambda_\text{R}^\text{CE}=3$ & 79.71 & 79.98 & 78.41 & 77.94 & \textbf{79.01} \\ $\lambda_\text{R}^\text{CE}=5$ & 79.03 & 79.90 & 77.85 & 78.24 & 78.76 \\\hline standard & 79.25 & 79.25 & 76.30 & 76.30 & 77.78 \\\hline \end{tabular} } \caption{Results of teacher network trained with student-aware training}\label{table:lambada_R_CE_a} \end{subtable} \bigskip \begin{subtable}{1\textwidth} \centering \scalebox{0.85}{% \begin{tabular}{c||cccc|c} \multicolumn{1}{l||}{} & \multicolumn{5}{c}{Student Accuracy by KD} \\\hline Teacher & resnet32x4 & resnet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} \\ Student & resnet8x2 & ShuffleV2 & WRN16-2 & ShuffleV2 & \\\hline $\lambda_\text{R}^\text{CE}=1$ & 77.36 & 78.56 & 76.20 & 74.71 & 76.71 \\ $\lambda_\text{R}^\text{CE}=3$ & 77.74 & 78.11 & 76.55 & 75.04 & 76.86 \\ $\lambda_\text{R}^\text{CE}=5$ & 77.73 & 78.05 & 76.45 & 75.40 & \textbf{76.91} \\\hline standard & 74.31 & 75.25 & 75.28 & 73.56 & 74.60 \\\hline \end{tabular} } \caption{Results of student network by KD}\label{table:lambada_R_CE_b} \end{subtable} \end{table} \begin{table} \caption{Effect of $\lambda_\text{T}$ on CIFAR-100. 
Student accuracies by KD are consistent for the variation of $\lambda_\text{T}$.} \label{table:lambada_T} \begin{subtable}{1\textwidth} \centering \scalebox{0.85}{% \begin{tabular}{c||cccc|c} \multicolumn{1}{l||}{} & \multicolumn{5}{c}{Accuracy of SFTN} \\\hline Teacher & resnet32x4 & resnet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} \\ Student & resnet8x2 & ShuffleV2 & WRN16-2 & ShuffleV2 & \\\hline $\lambda_\text{T} =1$ & 78.70 & 79.80 & 77.83 & 77.57 & 78.48 \\ $\lambda_\text{T} =3$ & 80.37 & 81.04 & 78.41 & 78.65 & 79.62 \\ $\lambda_\text{T} =5$ & 80.59 & 81.23 & 78.41 & 78.46 & \textbf{79.67} \\\hline standard & 79.25 & 79.25 & 76.30 & 76.30 & 77.78 \\\hline \end{tabular} } \caption{Results of teacher network trained with student-aware training}\label{table:lambada_T_a} \end{subtable} \bigskip \begin{subtable}{1\textwidth} \centering \scalebox{0.85}{% \begin{tabular}{c||cccc|c} \multicolumn{1}{l||}{} & \multicolumn{5}{c}{Student Accuracy by KD} \\\hline Teacher & resnet32x4 & resnet32x4 & WRN40-2 & WRN40-2 & \multirow{2}{*}{AVG} \\ Student & resnet8x2 & ShuffleV2 & WRN16-2 & ShuffleV2 & \\\hline $\lambda_\text{T} =1$ & 77.36 & 78.56 & 76.20 & 74.71 & 76.71 \\ $\lambda_\text{T} =3$ & 77.66 & 78.33 & 76.68 & 75.22 & \textbf{76.97} \\ $\lambda_\text{T} =5$ & 76.45 & 77.14 & 76.19 & 74.75 & 76.13 \\\hline standard & 74.31 & 75.25 & 75.28 & 73.56 & 74.60 \\\hline \end{tabular} } \caption{Results of student network by KD}\label{table:lambada_T_b} \end{subtable} \end{table} \begin{table} \caption{Similarity results of various teacher-student pairs. 
The similarity between teacher and student in SFTN is consistently higher than in the standard knowledge distillation.} \label{table:additional_similarity} \vskip 0.1in \begin{subtable}{1\textwidth} \centering \scalebox{0.85}{% \begin{tabular}{c|cc|cc|cc|cc} Teacher model & \multicolumn{2}{c|}{resnet32x4} & \multicolumn{2}{c|}{resnet32x4} & \multicolumn{2}{c|}{WRN40-2} & \multicolumn{2}{c}{WRN40-2} \\ Student model & \multicolumn{2}{c|}{ShuffleV2} & \multicolumn{2}{c|}{ShuffleV1} & \multicolumn{2}{c|}{WRN16-2} & \multicolumn{2}{c}{WRN40-1} \\\hline Teacher training method & Standard & SFTN & Standard & SFTN & Standard & SFTN & Standard & SFTN \\\hline\hline KD~\cite{KD} & 1.10 & 0.47 & 1.08 & 0.43 & 0.72 & 0.26 & 0.91 & 0.35 \\ FitNets~\cite{FitNet} & 0.79 & 0.38 & 0.83 & 0.35 & 0.70 & 0.29 & 0.81 & 0.36 \\ SP~\cite{sp} & 0.95 & 0.45 & 0.90 & 0.37 & 0.07 & 0.26 & 0.80 & 0.32 \\ CRD~\cite{CRD} & 0.81 & 0.43 & 0.85 & 0.40 & 0.65 & 0.26 & 0.77 & 0.31 \\ SSKD~\cite{SSKD} & 0.54 & 0.26 & 0.57 & 0.23 & 0.51 & 0.19 & 0.55 & 0.21 \\ OH~\cite{OH} & 0.85 & 0.37 & 0.78 & 0.30 & 0.69 & 0.23 & 0.75 & 0.27 \\\hline AVG & 0.84 & 0.39 & 0.84 & 0.35 & 0.66 & 0.25 & 0.77 & 0.30 \\\hline \end{tabular}% } \caption{KL divergence results between teacher and student for various combinations of teacher-student architectures and knowledge distillation methods.
SFTN consistently generates more similar output distributions than the standard approaches.}\label{table:additional_KL} \end{subtable} \bigskip \begin{subtable}{1\textwidth} \centering \scalebox{0.85}{% \begin{tabular}{c|cc|cc|cc|cc} Teacher model & \multicolumn{2}{c|}{resnet32x4} & \multicolumn{2}{c|}{resnet32x4} & \multicolumn{2}{c|}{WRN40-2} & \multicolumn{2}{c}{WRN40-2} \\ Student model & \multicolumn{2}{c|}{ShuffleV2} & \multicolumn{2}{c|}{ShuffleV1} & \multicolumn{2}{c|}{WRN16-2} & \multicolumn{2}{c}{WRN40-1} \\\hline Teacher training method & Standard & SFTN & Standard & SFTN & Standard & SFTN & Standard & SFTN \\\hline\hline KD~\cite{KD} & 0.88 & 0.95 & 0.90 & 0.94 & 0.83 & 0.93 & 0.86 & 0.93 \\ FitNets~\cite{FitNet} & 0.89 & 0.95 & 0.91 & 0.95 & 0.84 & 0.92 & 0.86 & 0.93 \\ SP~\cite{sp} & 0.89 & 0.95 & 0.92 & 0.97 & 0.92 & 0.96 & 0.92 & 0.96 \\ CRD~\cite{CRD} & 0.88 & 0.95 & 0.91 & 0.96 & 0.84 & 0.94 & 0.85 & 0.92 \\ SSKD~\cite{SSKD} & 0.92 & 0.97 & 0.92 & 0.96 & 0.83 & 0.93 & 0.87 & 0.94 \\ OH~\cite{OH} & 0.90 & 0.96 & 0.92 & 0.97 & 0.84 & 0.95 & 0.88 & 0.94 \\\hline AVG & 0.89 & 0.96 & 0.91 & 0.96 & 0.85 & 0.94 & 0.87 & 0.94 \\\hline \end{tabular}% } \caption{CKA results between teacher and student for various combinations of teacher-student architectures and knowledge distillation methods. 
SFTN consistently generates more similar representations than the standard approaches.}\label{table:additional_CKA} \end{subtable} \begin{subtable}{1\textwidth} \centering \scalebox{0.85}{% \begin{tabular}{c|cc|cc|cc|cc} Teacher model & \multicolumn{2}{c|}{resnet32x4} & \multicolumn{2}{c|}{resnet32x4} & \multicolumn{2}{c|}{WRN40-2} & \multicolumn{2}{c}{WRN40-2} \\ Student model & \multicolumn{2}{c|}{ShuffleV2} & \multicolumn{2}{c|}{ShuffleV1} & \multicolumn{2}{c|}{WRN16-2} & \multicolumn{2}{c}{WRN40-1} \\\hline Teacher training method & Standard & SFTN & Standard & SFTN & Standard & SFTN & Standard & SFTN \\\hline\hline KD~\cite{KD} & 76.48 & 82.15 & 75.87 & 82.55 & 77.57 & 83.27 & 76.09 & 82.27 \\ FitNets~\cite{FitNet} & 78.77 & 83.53 & 77.48 & 83.80 & 77.77 & 82.76 & 76.09 & 82.04 \\ SP~\cite{sp} & 77.76 & 82.26 & 77.86 & 83.04 & 77.92 & 83.32 & 76.85 & 82.03 \\ CRD~\cite{CRD} & 78.19 & 82.37 & 76.99 & 82.63 & 78.39 & 83.48 & 77.40 & 82.75 \\ SSKD~\cite{SSKD} & 82.14 & 85.90 & 82.10 & 86.25 & 79.88 & 85.35 & 79.67 & 85.24 \\ OH~\cite{OH} & 80.33 & 84.52 & 80.34 & 85.48 & 78.50 & 84.64 & 77.59 & 83.66 \\\hline AVG & 78.95 & 83.46 & 78.44 & 83.96 & 78.34 & 83.80 & 77.28 & 83.00 \\\hline \end{tabular}% } \caption{Top-1 prediction agreement between teacher and student for various combinations of teacher-student architectures and knowledge distillation methods. SFTN consistently achieves higher top-1 agreement than the standard approaches.}\label{table:additional_top1} \end{subtable} \end{table} \subsection{Additional study of similarity} Table 9 of the main paper presents the KL-divergence and CKA for a single teacher-student pair (ResNet32$\times$4/ShuffleV2). To show the generality of the similarity results, Table~\ref{table:additional_similarity} presents additional measurements.
The results illustrate that the teacher given by student-aware training is consistently more similar to the corresponding student than the teacher in the standard knowledge distillation. Compared to the standard teacher network, SFTN achieves a 50\% reduction in KL-divergence, a 7\% point improvement in CKA, and a 5\% higher top-1 agreement on average. \subsection{CIFAR-100 Results with Error Bars} \begin{table*}[t] \centering \caption{Comparisons with error bars between SFTN and the standard teacher models on the CIFAR-100 dataset when the architectures of the teacher-student pairs are homogeneous. All the reported results are based on the outputs of 3 independent runs.} \label{table:homogeneous_teacher_student_error_bars} \scalebox{0.70}{% \begin{tabular}{c||cc|cc|cc|cc} Models (Teacher/Student) & \multicolumn{2}{c|}{WRN40-2/WRN16-2} & \multicolumn{2}{c|}{WRN40-2/WRN40-1} & \multicolumn{2}{c|}{ResNet32x4/ResNet8x4} & \multicolumn{2}{c}{VGG13/VGG8} \\ Teacher training method & Standard & SFTN & Standard & SFTN & Standard & SFTN & Standard & SFTN \\\hline\hline Teacher Accuracy & 76.30 & 78.20 & 76.30 & 77.62 & 79.25 & 79.41 & 75.38 & 76.76 \\ Student accuracy w/o KD & \multicolumn{2}{c|}{73.41} & \multicolumn{2}{c|}{72.16} & \multicolumn{2}{c|}{72.38} & \multicolumn{2}{c}{71.12} \\\hline KD~\cite{KD} & 75.46±0.23 & 76.25±0.14 & 73.73±0.21 & 75.09±0.05 & 73.39±0.15 & 76.09±0.32 & 73.41±0.10 & 74.52±0.34 \\ FitNet~\cite{FitNet} & 75.72±0.30 & 76.73±0.28 & 74.14±0.58 & 75.54±0.32 & 75.34±0.24 & 76.89±0.09 & 73.49±0.26 & 74.38±0.86 \\ AT~\cite{AT} & 75.85±0.27 & 76.82±0.24 & 74.56±0.11 & 75.86±0.27 & 74.98±0.12 & 76.91±0.15 & 73.78±0.33 & 73.86±0.15 \\ SP~\cite{sp} & 75.43±0.24 & 76.77±0.45 & 74.51±0.50 & 75.31±0.48 & 74.06±0.28 & 76.37±0.17 & 73.37±0.16 & 74.62±0.16 \\ VID~\cite{vid} & 75.63±0.28 & 76.79±0.12 & 74.21±0.05 & 75.76±0.20 & 74.86±0.37 & 77.00±0.22 & 73.81±0.12 & 74.73±0.45 \\ RKD~\cite{RKD} & 75.48±0.45 & 76.49±0.18
& 73.86±0.23 & 75.11±0.14 & 74.12±0.31 & 76.62±0.26 & 73.52±0.20 & 74.48±0.23 \\ PKT~\cite{pkt} & 75.71±0.38 & 76.57±0.22 & 74.43±0.30 & 75.49±0.12 & 74.7±0.33 & 76.57±0.08 & 73.60±0.07 & 74.51±0.13 \\ AB~\cite{ABdistill} & 70.12±0.18 & 70.76±0.11 & 74.38±0.61 & 75.51±0.07 & 74.73±0.18 & 76.51±0.25 & 73.20±0.26 & 74.67±0.23 \\ FT~\cite{FT} & 75.6±0.22 & 76.51±0.35 & 74.49±0.41 & 75.11±0.19 & 74.89±0.17 & 77.02±0.15 & 73.64±0.61 & 74.30±0.14 \\ CRD~\cite{CRD} & 75.91±0.25 & 77.23±0.09 & 74.93±0.30 & 76.09±0.47 & 75.54±0.57 & 76.95±0.41 & 74.26±0.37 & 74.86±0.46 \\ SSKD~\cite{SSKD} & 75.96±0.03 & 76.80±0.84 & 75.72±0.26 & 76.03±0.15 & 75.95±0.14 & 76.85±0.13 & 74.94±0.24 & 75.66±0.22 \\ OH~\cite{OH} & 76.00±0.07 & 76.39±0.14 & 74.79±0.19 & 75.62±0.27 & 75.04±0.21 & 76.65±0.11 & 73.94±0.24 & 74.72±0.17 \\\hline Best & 76.00±0.07 & 77.23±0.09 & 75.72±0.26 & 76.09±0.47 & 75.95±0.14 & 77.02±0.15 & 74.94±0.24 & 75.66±0.22 \\\hline \end{tabular}% } \end{table*} \begin{table*}[t] \centering \caption{Comparisons with error bars between SFTN and the standard teacher models on CIFAR-100 dataset when the architectures of the teacher-student pairs are heterogeneous. 
All the reported results are based on the outputs of 3 independent runs.} \label{table:heterogeneous_teacher_student_error_bars} \scalebox{0.70}{% \begin{tabular}{c||cc|cc|cc|cc} Models (Teacher/Student) & \multicolumn{2}{c|}{ShuffleV1/resnet32x4} & \multicolumn{2}{c|}{ShuffleV2/resnet32x4} & \multicolumn{2}{c|}{vgg8/ResNet50} & \multicolumn{2}{c}{ShuffleV2/wrn40-2} \\ Teacher training method & Standard & SFTN & Standard & SFTN & Standard & SFTN & Standard & SFTN \\\hline\hline Teacher Accuracy & 79.25 & 80.03 & 79.25 & 79.58 & 78.7 & 82.52 & 76.30 & 78.21 \\ Student accuracy w/o KD & \multicolumn{2}{c|}{71.95} & \multicolumn{2}{c|}{73.21} & \multicolumn{2}{c|}{71.12} & \multicolumn{2}{c}{73.21} \\\hline KD~\cite{KD} & 74.26±0.16 & 77.93±0.11 & 75.25±0.05 & 78.07±0.30 & 73.82±0.38 & 74.92±0.35 & 76.68±0.36 & 78.06±0.16 \\ FitNet~\cite{FitNet} & 75.95±0.23 & 78.75±0.20 & 77.00±0.19 & 79.68±0.14 & 73.22±0.37 & 74.80±0.21 & 77.31±0.21 & 79.21±0.25 \\ AT~\cite{AT} & 76.12±0.08 & 78.63±0.27 & 76.57±0.19 & 78.79±0.11 & 73.56±0.25 & 74.05±0.31 & 77.41±0.38 & 78.29±0.14 \\ SP~\cite{sp} & 75.80±0.29 & 78.36±0.18 & 76.11±0.40 & 78.38±0.38 & 74.02±0.41 & 75.37±0.13 & 76.93±0.07 & 78.12±0.08 \\ VID~\cite{vid} & 75.16±0.30 & 78.03±0.25 & 75.70±0.40 & 78.49±0.19 & 73.59±0.12 & 74.76±0.37 & 77.27±0.19 & 78.78±0.2 \\ RKD~\cite{RKD} & 74.84±0.23 & 77.72±0.60 & 75.48±0.05 & 77.77±0.39 & 73.54±0.09 & 74.70±0.34 & 76.69±0.23 & 78.11±0.11 \\ PKT~\cite{pkt} & 75.05±0.38 & 77.46±0.14 & 75.79±0.05 & 78.28±0.12 & 73.79±0.06 & 75.17±0.14 & 76.86±0.15 & 78.28±0.13 \\ AB~\cite{ABdistill} & 75.95±0.20 & 78.53±0.13 & 76.25±0.25 & 78.68±0.22 & 73.72±0.12 & 74.77±0.18 & 77.28±0.24 & 78.77±0.16 \\ FT~\cite{FT} & 75.58±0.10 & 77.84±0.11 & 76.42±0.45 & 78.37±0.16 & 73.34±0.29 & 74.77±0.42 & 76.80±0.41 & 77.65±0.14 \\ CRD~\cite{CRD} & 75.60±0.09 & 78.20±0.33 & 76.35±0.46 & 78.43±0.06 & 74.52±0.21 & 75.41±0.32 & 77.52±0.39 & 78.81±0.23 \\ SSKD~\cite{SSKD} & 78.05±0.15 & 79.10±0.32 & 78.66±0.32 & 
79.65±0.05 & 76.03±0.24 & 76.95±0.05 & 77.81±0.19 & 78.34±0.15 \\ OH~\cite{OH} & 77.51±0.27 & 79.56±0.12 & 78.08±0.18 & 79.98±0.27 & 74.55±0.16 & 75.95±0.12 & 77.82±0.16 & 79.14±0.23 \\\hline Best & 78.05±0.15 & 79.56±0.12 & 78.66±0.32 & 79.98±0.27 & 76.03±0.24 & 76.95±0.05 & 77.82±0.16 & 79.21±0.25 \\\hline \end{tabular}% } \end{table*} To provide variance information from multiple runs, Tables~\ref{table:homogeneous_teacher_student_error_bars} and \ref{table:heterogeneous_teacher_student_error_bars} show the CIFAR-100 results with error bars. The average difference between the error bars for the standard teacher and SFTN is 0.01\% points; therefore, the variance of SFTN is similar to that of the standard teacher. \section{Implementation Details} We present the details of our implementation for better reproducibility. \subsection{CIFAR-100} The models for CIFAR-100 are trained for 240 epochs with a batch size of 64, where the learning rate is reduced by a factor of 10 at the 150$^\text{th}$, 180$^\text{th}$, and 210$^\text{th}$ epochs. We use randomly cropped 32${\times}$32 images with 4-pixel padding and adopt horizontal flipping with a probability of 0.5 for data augmentation. Each channel in an input image is normalized to the standard Gaussian. \subsection{ImageNet} ImageNet models are trained for 100 epochs with a batch size of 256. We reduce the learning rate by an order of magnitude at the 30$^\text{th}$, 60$^\text{th}$, and 90$^\text{th}$ epochs. In the training phase, we perform random cropping with the range from 0.08 to 1.0, which denotes the relative size to the original image, while adjusting the aspect ratios by multiplying a random scalar value between 3/4 and 4/3 to the original ratio. All images are resized to 224${\times}$224 and flipped horizontally with a probability of 0.5 for data augmentation. In the validation phase, images are resized to 256${\times}$256, and then center-cropped to 224${\times}$224.
Each channel in an input image is normalized to the standard Gaussian. \section{Architecture Details} We present the architectural details of SFTN using VGG13 as the teacher and VGG8 as the student on CIFAR-100. VGG13 and VGG8 are modularized into 4 blocks based on the depths and the feature map sizes. VGG13 SFTN adds a student branch to the output of every teacher network block except the last one. Figures~\ref{fig:example_of_teacher}, \ref{fig:example_of_student} and \ref{fig:example_of_sftn} illustrate the architectures of the VGG13 teacher, the VGG8 student, and the VGG13 SFTN with a VGG8 student branch attached. Tables~\ref{table:vgg13_teacher_desc}, \ref{table:vgg8_student_desc} and \ref{table:vgg13_sftn_desc} describe the full details of the architectures. \begin{figure} \centering \begin{minipage}{.485\textwidth} \centering \includegraphics[width=0.6\linewidth]{./images/supp_1_1.png} \caption{Architecture of the VGG13 teacher model. ${ B_\text{T}^{i}}$ and ${ B_\text{S}^{i}}$ denote the ${i^\text{th}}$ block of the teacher network and the ${i^\text{th}}$ block of the student network, respectively. Table~\ref{table:vgg13_teacher_desc} shows a detailed description of the VGG13 teacher.} \label{fig:example_of_teacher} \end{minipage}% \hspace*{-\textwidth} \hfill \begin{minipage}{.485\textwidth} \centering \includegraphics[width=0.6\linewidth]{./images/supp_1_2.png} \caption{Architecture of the VGG8 student. ${ B_\text{T}^{i}}$ and ${ B_\text{S}^{i}}$ denote the ${i^\text{th}}$ block of the teacher network and the ${i^\text{th}}$ block of the student network, respectively. Table~\ref{table:vgg8_student_desc} shows a detailed description of the VGG8 student.} \label{fig:example_of_student} \end{minipage} \end{figure} \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{./images/supp_2.png} \end{center} \caption{Architecture of SFTN with the VGG13 teacher and a VGG8 student branch.
${ B_\text{T}^{i}}$, ${ B_\text{S}^{i}}$ and $\mathcal{T}^{i}$ denote the ${i^\text{th}}$ block of teacher network, the ${i^\text{th}}$ block of student network and teacher network feature transform layer, respectively. Table~\ref{table:vgg13_sftn_desc} shows detailed description of VGG13 SFTN attached VGG8 student branch.} \label{fig:example_of_sftn} \end{figure*} \onecolumn \FloatBarrier \begin{table}[ht] \centering \caption{VGG13 detailed teacher.} \label{table:vgg13_teacher_desc} \vskip 0.1in \resizebox{\textwidth}{!}{% \begin{tabular}{c||cccccccc} Layer & Input Layer & Input Shape & Filter Size & Channels & Stride & Paddings & Output Shape & Block \\\hline\hline Image & - & - & - & - & - & - & 3x32x32 & - \\\hline Conv2d-1 & Image & 3x32x32 & 3x3 & 64 & 1 & 1 & 64x32x32 & \multirow{13}{*}{${ B_\text{T}^{1}}$} \\ BatchNorm2d-2 & Conv2d-1 & 64x32x32 & - & 64 & - & - & 64x32x32 & \\ Relu-3 & BatchNorm2d-2 & 64x32x32 & - & - & - & - & 64x32x32 & \\ Conv2d-4 & Relu-3 & 64x32x32 & 3x3 & 64 & 1 & 1 & 64x32x32 & \\ BatchNorm2d-5 & Conv2d-4 & 64x32x32 & - & 64 & - & - & 64x32x32 & \\ Relu-6 & BatchNorm2d-5 & 64x32x32 & - & - & - & - & 64x32x32 & \\ MaxPool2d-7 & Relu-6 & 64x32x32 & 2x2 & - & 2 & 0 & 64x16x16 & \\ Conv2d-8 & MaxPool2d-7 & 64x16x16 & 3x3 & 128 & 1 & 1 & 128x16x16 & \\ BatchNorm2d-9 & Conv2d-8 & 128x16x16 & - & 128 & - & - & 128x16x16 & \\ Relu-10 & BatchNorm2d-9 & 128x16x16 & - & - & - & - & 128x16x16 & \\ Conv2d-11 & Relu-10 & 128x16x16 & 3x3 & 128 & 1 & 1 & 128x16x16 & \\ BatchNorm2d-12 & Conv2d-11 & 128x16x16 & - & 128 & - & - & 128x16x16 & \\ Relu-13 & BatchNorm2d-12 & 128x16x16 & - & - & - & - & 128x16x16 & \\\hline MaxPool2d-14 & Relu-13 & 128x16x16 & 2x2 & - & 2 & 0 & 128x8x8 & \multirow{7}{*}{${ B_\text{T}^{2}}$} \\ Conv2d-15 & MaxPool2d-14 & 128x8x8 & 3x3 & 256 & 1 & 1 & 256x8x8 & \\ BatchNorm2d-16 & Conv2d-15 & 256x8x8 & - & 256 & - & - & 256x8x8 & \\ Relu-17 & BatchNorm2d-16 & 256x8x8 & - & - & - & - & 256x8x8 & \\ Conv2d-18 & Relu-17 & 
256x8x8 & 3x3 & 256 & 1 & 1 & 256x8x8 & \\ BatchNorm2d-19 & Conv2d-18 & 256x8x8 & - & 256 & - & - & 256x8x8 & \\ Relu-20 & BatchNorm2d-19 & 256x8x8 & - & - & - & - & 256x8x8 & \\\hline MaxPool2d-21 & Relu-20 & 256x8x8 & 2x2 & - & 2 & 0 & 256x4x4 & \multirow{7}{*}{${ B_\text{T}^{3}}$} \\ Conv2d-22 & MaxPool2d-21 & 256x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \\ BatchNorm2d-23 & Conv2d-22 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-24 & BatchNorm2d-23 & 512x4x4 & - & - & - & - & 512x4x4 & \\ Conv2d-25 & Relu-24 & 512x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \\ BatchNorm2d-26 & Conv2d-25 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-27 & BatchNorm2d-26 & 512x4x4 & - & - & - & - & 512x4x4 & \\\hline Conv2d-28 & Relu-27 & 512x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \multirow{6}{*}{${ B_\text{T}^{4}}$} \\ BatchNorm2d-29 & Conv2d-28 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-30 & BatchNorm2d-29 & 512x4x4 & - & - & - & - & 512x4x4 & \\ Conv2d-31 & Relu-30 & 512x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \\ BatchNorm2d-32 & Conv2d-31 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-33 & BatchNorm2d-32 & 512x4x4 & - & - & - & - & 512x4x4 & \\\hline AvgPool2d-34 & Relu-33 & 512x4x4 & - & - & - & - & 512x1x1 & - \\\hline Linear-35 & AvgPool2d-34 & 512x1x1 & - & - & - & - & 100 & - \\\hline \end{tabular}% } \end{table} \FloatBarrier \FloatBarrier \begin{table}[ht] \centering \caption{VGG8 student model.} \label{table:vgg8_student_desc} \vskip 0.1in \resizebox{\textwidth}{!}{% \begin{tabular}{c||cccccccc} Layer & Input Layer & Input Shape & Filter Size & Channels & Stride & Paddings & Output Shape & Block \\\hline\hline Image & - & - & - & - & - & - & 3x32x32 & - \\\hline Conv2d-1 & Image & 3x32x32 & 3x3 & 64 & 1 & 1 & 64x32x32 & \multirow{7}{*}{${ B_\text{S}^{1}}$} \\ BatchNorm2d-2 & Conv2d-1 & 64x32x32 & - & 64 & - & - & 64x32x32 & \\ Relu-3 & BatchNorm2d-2 & 64x32x32 & - & - & - & - & 64x32x32 & \\ MaxPool2d-4 & Relu-3 & 64x32x32 & 2x2 & - & 2 & 0 & 64x16x16 & \\ Conv2d-5 & MaxPool2d-4 
& 64x16x16 & 3x3 & 128 & 1 & 1 & 128x16x16 & \\ BatchNorm2d-6 & Conv2d-5 & 128x16x16 & 2x2 & 128 & 1 & - & 128x16x16 & \\ Relu-7 & BatchNorm2d-6 & 128x16x16 & - & - & - & - & 128x16x16 & \\\hline Maxpool2d-8 & Relu-7 & 128x16x16 & 2x2 & - & 2 & 0 & 128x8x8 & \multirow{4}{*}{${ B_\text{S}^{2}}$} \\ Conv2d-9 & Maxpool2d-8 & 128x8x8 & 3x3 & 256 & 1 & 1 & 256x8x8 & \\ BatchNorm2d-10 & Conv2d-9 & 256x8x8 & - & 256 & - & - & 256x8x8 & \\ Relu-11 & BatchNorm2d-10 & 256x8x8 & - & - & - & - & 256x8x8 & \\\hline MaxPool2d-12 & Relu-11 & 256x8x8 & 2x2 & - & 2 & 0 & 256x4x4 & \multirow{4}{*}{${ B_\text{S}^{3}}$} \\ Conv2d-13 & MaxPool2d-12 & 256x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \\ BatchNorm2d-14 & Conv2d-13 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-15 & BatchNorm2d-14 & 512x4x4 & - & - & - & - & 512x4x4 & \\\hline Conv2d-16 & Relu-15 & 512x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \multirow{3}{*}{${ B_\text{S}^{4}}$} \\ BatchNorm2d-17 & Conv2d-16 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-18 & BatchNorm2d-17 & 512x4x4 & - & - & - & - & 512x4x4 & \\\hline AvgPool2d-19 & Relu-18 & 512x4x4 & - & - & - & - & 512x1x1 & - \\\hline Linear-20 & AvgPool2d-19 & 512x1x1 & - & - & - & - & 100 & - \\\hline \end{tabular}% } \end{table} \FloatBarrier \FloatBarrier \begin{table}[ht] \centering \caption{Details of SFTN architecture with VGG13 teacher and VGG8 student branch.} \label{table:vgg13_sftn_desc} \vskip 0.1in \resizebox{\textwidth}{!}{% \begin{tabular}{c||cccccccc} Layer & Input Layer & Input Shape & Filter Size & Channels & Stride & Paddings & Output Shape & Block \\\hline\hline Image & - & - & - & - & - & - & 3x32x32 & - \\\hline Conv2d-1 & Image & 3x32x32 & 3x3 & 64 & 1 & 1 & 64x32x32 & \multirow{13}{*}{ ${ B_\text{T}^{1}}$} \\ BatchNorm2d-2 & Conv2d-1 & 64x32x32 & - & 64 & - & - & 64x32x32 & \\ Relu-3 & BatchNorm2d-2 & 64x32x32 & - & - & - & - & 64x32x32 & \\ Conv2d-4 & Relu-3 & 64x32x32 & 3x3 & 64 & 1 & 1 & 64x32x32 & \\ BatchNorm2d-5 & Conv2d-4 & 64x32x32 & - & 64 & - & 
- & 64x32x32 & \\ Relu-6 & BatchNorm2d-5 & 64x32x32 & - & - & - & - & 64x32x32 & \\ MaxPool2d-7 & Relu-6 & 64x32x32 & 2x2 & - & 2 & 0 & 64x16x16 & \\ Conv2d-8 & MaxPool2d-7 & 64x16x16 & 3x3 & 128 & 1 & 1 & 128x16x16 & \\ BatchNorm2d-9 & Conv2d-8 & 128x16x16 & - & 128 & - & - & 128x16x16 & \\ Relu-10 & BatchNorm2d-9 & 128x16x16 & - & - & - & - & 128x16x16 & \\ Conv2d-11 & Relu-10 & 128x16x16 & 3x3 & 128 & 1 & 1 & 128x16x16 & \\ BatchNorm2d-12 & Conv2d-11 & 128x16x16 & - & 128 & - & - & 128x16x16 & \\ Relu-13 & BatchNorm2d-12 & 128x16x16 & - & - & - & - & 128x16x16 & \\\hline MaxPool2d-14 & Relu-13 & 128x16x16 & 2x2 & - & 2 & 0 & 128x8x8 & \multirow{7}{*}{ ${ B_\text{T}^{2}}$} \\ Conv2d-15 & MaxPool2d-14 & 128x8x8 & 3x3 & 256 & 1 & 1 & 256x8x8 & \\ BatchNorm2d-16 & Conv2d-15 & 256x8x8 & - & 256 & - & - & 256x8x8 & \\ Relu-17 & BatchNorm2d-16 & 256x8x8 & - & - & - & - & 256x8x8 & \\ Conv2d-18 & Relu-17 & 256x8x8 & 3x3 & 256 & 1 & 1 & 256x8x8 & \\ BatchNorm2d-19 & Conv2d-18 & 256x8x8 & - & 256 & - & - & 256x8x8 & \\ Relu-20 & BatchNorm2d-19 & 256x8x8 & - & - & - & - & 256x8x8 & \\\hline MaxPool2d-21 & Relu-20 & 256x8x8 & 2x2 & - & 2 & 0 & 256x4x4 & \multirow{7}{*}{${ B_\text{T}^{3}}$} \\ Conv2d-22 & MaxPool2d-21 & 256x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \\ BatchNorm2d-23 & Conv2d-22 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-24 & BatchNorm2d-23 & 512x4x4 & - & - & - & - & 512x4x4 & \\ Conv2d-25 & Relu-24 & 512x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \\ BatchNorm2d-26 & Conv2d-25 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-27 & BatchNorm2d-26 & 512x4x4 & - & - & - & - & 512x4x4 & \\\hline Conv2d-28 & Relu-27 & 512x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \multirow{6}{*}{${ B_\text{T}^{4}}$} \\ BatchNorm2d-29 & Conv2d-28 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-30 & BatchNorm2d-29 & 512x4x4 & - & - & - & - & 512x4x4 & \\ Conv2d-31 & Relu-30 & 512x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \\ BatchNorm2d-32 & Conv2d-31 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-33 & 
BatchNorm2d-32 & 512x4x4 & - & - & - & - & 512x4x4 & \\\hline AvgPool2d-34 & Relu-33 & 512x4x4 & - & - & - & - & 512x1x1 & - \\\hline Linear-35 & AvgPool2d-34 & 512x1x1 & - & - & - & - & 100 & - \\\hline \multicolumn{9}{c}{Student Branch 1} \\\hline\hline Conv2d-36 & Relu-13 & 128x16x16 & 1x1 & 128 & 1 & 0 & 128x16x16 & \multirow{3}{*}{$\mathcal{T}^{1}$} \\ BatchNorm2d-37 & Conv2d-36 & 128x16x16 & - & 128 & - & - & 128x16x16 & \\ Relu-38 & BatchNorm2d-37 & 128x16x16 & - & - & - & - & 128x16x16 & \\\hline Maxpool2d-39 & BatchNorm2d-37 & 128x16x16 & 2x2 & - & 2 & 0 & 128x8x8 & \multirow{4}{*}{${ B_\text{S}^{2}}$} \\ Conv2d-40 & Maxpool2d-39 & 128x8x8 & 3x3 & 256 & 1 & 1 & 256x8x8 & \\ BatchNorm2d-41 & Conv2d-40 & 256x8x8 & - & 256 & - & - & 256x8x8 & \\ Relu-42 & BatchNorm2d-41 & - & - & - & - & - & 256x8x8 & \\\hline MaxPool2d-43 & Relu-42 & 256x8x8 & 2x2 & - & 2 & 0 & 256x4x4 & \multirow{4}{*}{${ B_\text{S}^{3}}$} \\ Conv2d-44 & MaxPool2d-43 & 256x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \\ BatchNorm2d-45 & Conv2d-44 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-46 & BatchNorm2d-45 & - & - & - & - & - & 512x4x4 & \\\hline Conv2d-47 & Relu-46 & 512x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \multirow{3}{*}{${ B_\text{S}^{4}}$} \\ BatchNorm2d-48 & Conv2d-47 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-49 & BatchNorm2d-48 & - & - & - & - & - & 512x4x4 & \\\hline AvgPool2d-50 & Relu-49 & 512x4x4 & - & - & - & - & 512x1x1 & - \\\hline Linear-51 & AvgPool2d-50 & 512x1x1 & - & - & - & - & 100 & - \\\hline \end{tabular}% } \end{table} \FloatBarrier \FloatBarrier \begin{table}[ht] \renewcommand\thetable{7} \centering \caption{Continued from the previous table.} \vskip 0.1in \resizebox{\textwidth}{!}{% \begin{tabular}{c||cccccccc} Layer & Input Layer & Input Shape & Filter Size & Channels & Stride & Paddings & Output Shape & Block \\\hline\hline \multicolumn{9}{c}{Student Branch 2} \\\hline\hline Conv2d-52 & Relu-20 & 256x8x8 & 1x1 & 256 & 1 & 0 & 256x8x8 & 
\multirow{3}{*}{$\mathcal{T}^{2}$} \\ BatchNorm2d-53 & Conv2d-52 & 256x8x8 & - & 256 & - & - & 256x8x8 & \\ Relu-54 & BatchNorm2d-53 & 256x8x8 & - & - & - & - & 256x8x8 & \\\hline MaxPool2d-55 & Relu-54 & 256x8x8 & 2x2 & - & 2 & 0 & 256x4x4 & \multirow{4}{*}{${ B_\text{S}^{3}}$} \\ Conv2d-56 & MaxPool2d-55 & 256x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \\ BatchNorm2d-57 & Conv2d-56 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-58 & BatchNorm2d-57 & - & - & - & - & - & 512x4x4 & \\\hline Conv2d-59 & Relu-58 & 512x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \multirow{3}{*}{${ B_\text{S}^{4}}$} \\ BatchNorm2d-60 & Conv2d-59 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-61 & BatchNorm2d-60 & - & - & - & - & - & 512x4x4 & \\\hline AvgPool2d-62 & Relu-61 & 512x4x4 & - & - & - & - & 512x1x1 & - \\\hline Linear-63 & AvgPool2d-62 & 512x1x1 & - & - & - & - & 100 & - \\\hline \multicolumn{9}{c}{Student Branch 3} \\\hline\hline Conv2d-64 & Relu-27 & 512x4x4 & 1x1 & 512 & 1 & 0 & 512x4x4 & \multirow{3}{*}{$\mathcal{T}^{3}$} \\ BatchNorm2d-65 & Conv2d-64 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-66 & BatchNorm2d-65 & 512x4x4 & - & - & - & - & 512x4x4 & \\\hline Conv2d-67 & Relu-66 & 512x4x4 & 3x3 & 512 & 1 & 1 & 512x4x4 & \multirow{3}{*}{${ B_\text{S}^{4}}$} \\ BatchNorm2d-68 & Conv2d-67 & 512x4x4 & - & 512 & - & - & 512x4x4 & \\ Relu-69 & BatchNorm2d-68 & - & - & - & - & - & 512x4x4 & \\\hline AvgPool2d-70 & Relu-69 & 512x4x4 & - & - & - & - & 512x1x1 & - \\\hline Linear-71 & AvgPool2d-70 & 512x1x1 & - & - & - & - & 100 & - \\\hline \end{tabular}% } \end{table} \FloatBarrier \section{Related Work} \label{sec:related} Although deep learning has shown successful outcomes in various fields, it is still difficult to apply deep neural networks to real-world tasks due to their excessive requirement for computation and memory. There have been many attempts to reduce the computational cost of deep learning models, and knowledge distillation is one of the examples. 
Various computer vision~\cite{Chen2017LearningEO, facemodel, reidentification, styletransfer} and natural language processing~\cite{tinybert, mclkd, distillbert, distill_response} tasks often employ knowledge distillation to obtain efficient models. Recently, some cross-modal approaches~\cite{action_recognition, radio_signals, zhou2020domain} transfer knowledge across domains. This section summarizes the research efforts to improve the performance of models via knowledge distillation. \subsection{What to distill} Since Hinton et al.~\cite{KD} introduced the basic concept of knowledge distillation, where the dark knowledge in teacher models is given by the temperature-scaled representations of the softmax function, various kinds of information have been employed as the sources of knowledge for distillation from teachers to students. FitNets~\cite{FitNet} distills intermediate features of a teacher network, where the student network transforms the intermediate features using guided layers and then calculates the difference between the guided layers and the intermediate features of the teacher network. The position of distillation is shifted to the layers before the ReLU operations in \cite{OH}, which also proposes a novel activation function and a partial $L_2$ loss function for effective knowledge transfer. Zagoruyko and Komodakis~\cite{AT} argue the importance of attention and propose an attention transfer (AT) method from teachers to students, while Kim et al.~\cite{FT} compute the factor information of the teacher representations using an autoencoder, which is decoded by students for knowledge transfer. Relational knowledge distillation (RKD)~\cite{RKD} introduces a technique to transfer relational information such as distances and angles of features. CRD~\cite{CRD} maximizes mutual information between a teacher and a student via contrastive learning. There exist a couple of methods to perform knowledge distillation without teacher models.
For example, ONE~\cite{lan2018knowledge} distills knowledge from an ensemble of multiple students while BYOT~\cite{byot} transfers knowledge from deeper layers to shallower ones. Besides, SSKD~\cite{SSKD} distills self-supervised features of teachers to students for transferring richer knowledge. \subsection{How to distill} Several recent knowledge distillation methods focus on the strategy of knowledge distillation. Born again network (BAN)~\cite{BAN} demonstrates the effectiveness of sequential knowledge distillation via networks with an identical architecture. A curriculum learning method~\cite{RCO} employs the optimization trajectory of a teacher model to train students. Collaborative learning approaches~\cite{DML, KDCL, PCL} attempt to learn multiple models with distillation jointly, but their concept is not well-suited for asymmetric teacher-student relationships, which may lead to suboptimal convergence of student models. The model capacity gap between a teacher and a student is addressed in \cite{kang2020towards, efficacykd, mirzadeh2019improved}. TAKD~\cite{mirzadeh2019improved} employs an extra network to reduce the model capacity gap between teacher and student models, where a teacher transfers knowledge to a student via a teaching assistant network with an intermediate size. An early stopping technique for training teacher networks is proposed to obtain better transferable representations, and a neural architecture search is employed to identify a student model with the optimal size~\cite{kang2020towards}. Our work proposes a novel student-friendly learning technique for a teacher network to facilitate knowledge distillation.
\section{Student-Friendly Knowledge Distillation} \label{sec:student} \begin{figure}[t] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{./images/fig2_1_font.png} \subcaption{Student-aware training of a teacher network} \label{fig:overview_first} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{./images/fig2_2_font.png} \subcaption{Knowledge distillation} \label{fig:overview_second} \end{subfigure} \caption{Overview of the student-friendly teacher network (SFTN). In this figure, $\mathbf{F}$, $B$, $\mathcal{T}$, and $\mathbf{q}$ denote a feature map, a network block, a teacher network feature transform layer, and a softmax output, respectively, where the superscript denotes the network block index and the subscripts $\text{S}$, $\text{T}$, and $\text{R}$ indicate the student network, the teacher network, and the student branch in the teacher model, respectively. The loss for the teacher network ${\mathcal{L}_\text{T}}$ is given by \eqref{eq:loss_tn}, while the Kullback-Leibler loss $\mathcal{L}_\text{R}^\text{KL}$ and the cross-entropy loss $\mathcal{L}_\text{R}^\text{CE}$ are defined in \eqref{eq:sb_kl} and \eqref{eq:sb_ce}, respectively. (a) When training a teacher, SFTN optimizes $\mathbf{F}_\text{T}^{i}$ and $\mathbf{q}_\text{T}$ for better knowledge transfer to student networks. (b) In the distillation stage, the features in the teacher network, $\mathbf{F}_\text{T}^{i}$ and $\mathbf{q}_\text{T}$, are straightforwardly distilled to student networks with existing knowledge distillation algorithms.} \label{fig:overview} \end{figure} This section describes the details of the student-friendly teacher network (SFTN), which transfers the features of teacher models to student networks more effectively than standard distillation. Figure~\ref{fig:overview} illustrates the main idea of our method.
\subsection{Overview} \label{sub:overview} Conventional knowledge distillation approaches attempt to find a way of teaching student networks given the architecture of teacher networks. The teacher network is trained with a loss with respect to the ground-truth, but this objective is not necessarily beneficial for knowledge distillation to students. On the contrary, the SFTN framework aims to improve the effectiveness of knowledge distillation from the teacher to the student models. \vspace{-0.2cm} \paragraph{Modularizing teacher and student networks} We modularize teacher and student networks into multiple blocks based on the depth of layers and the feature map sizes. This is because knowledge distillation is often performed at every 3 or 4 blocks for accurate extraction and transfer of knowledge in teacher models. Figure~\ref{fig:overview} presents the case where both networks are modularized into 3 blocks, denoted by $\{ B_\text{T}^{1}, B_\text{T}^{2}, B_\text{T}^{3} \}$ and $\{ B_\text{S}^{1}, B_\text{S}^{2}, B_\text{S}^{3} \}$ for a teacher and a student, respectively. \vspace{-0.2cm} \paragraph{Adding student branches} SFTN augments student branches to a teacher model for the joint training of both parts. Each student branch is composed of a teacher network feature transform layer $\mathcal{T}$ and student network blocks. Note that $\mathcal{T}$ is similar to a guided layer in FitNets~\cite{FitNet} and transforms the dimensionality of the channel in $\mathbf{F}_\text{T}^{i}$ into that of $B_\text{S}^{i+1}$. Depending on the configuration of teacher and student networks, the transformation needs to increase or decrease the size of the feature maps. We employ 3${\times}$3 convolutions to reduce the size of $\mathbf{F}_\text{T}^{i}$ while 4${\times}$4 transposed convolutions are used to increase its size. Also, 1${\times}$1 convolutions are used when we do not need to change the size of $\mathbf{F}_\text{T}^{i}$.
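The selection rule for the transform layer $\mathcal{T}$ described above can be summarized in a short sketch. This is an illustration of the rule in the text, not code from the paper; in particular, the stride values for power-of-two size changes are our assumptions.

```python
def transform_spec(in_hw, out_hw):
    """Select the teacher feature transform layer T^i from spatial sizes.

    Returns (operation, kernel_size, stride). The operation choices follow
    the text: a 3x3 convolution shrinks the feature map, a 4x4 transposed
    convolution grows it, and a 1x1 convolution keeps the size unchanged
    (adjusting channels only). Strides assume power-of-two size changes.
    """
    if out_hw < in_hw:
        return ("conv2d", 3, in_hw // out_hw)        # reduce spatial size
    if out_hw > in_hw:
        return ("conv_transpose2d", 4, out_hw // in_hw)  # increase spatial size
    return ("conv2d", 1, 1)                          # same size, channel change only
```

For instance, transforming a $16{\times}16$ teacher feature map for a student block expecting $8{\times}8$ inputs selects a 3${\times}$3 convolution with stride 2.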
The features transformed to a student branch are forwarded separately to compute the logit of the branch. For example, as shown in Figure~\ref{fig:overview}(a), $\mathbf{F}_\text{T}^1$ in the teacher stream is transformed to fit $B_\text{S}^2$, which initiates a student branch to derive $\mathbf{q}_\text{R}^1$, while another student branch starts from the transformed features of $\mathbf{F}_\text{T}^2$. Note that $\mathbf{F}_\text{T}^{3}$ has no trailing teacher network block in the figure and has no associated student branch because it is directly utilized to compute the logit of the main teacher network. \vspace{-0.2cm} \paragraph{Training SFTN} The teacher network is trained along with multiple student branches corresponding to individual blocks in the teacher, where we minimize the differences in the representations between the teacher and the student branches. Our loss function is composed of three terms: the loss in the teacher network $\mathcal{L}_\text{T}$, the Kullback-Leibler loss $\mathcal{L}_\text{R}^\text{KL}$ in the student branches, and the cross-entropy loss $\mathcal{L}_\text{R}^\text{CE}$ in the student branches. The main loss term, $\mathcal{L}_\text{T}$, minimizes the error between $\mathbf{q}_\text{T}$ and the ground-truth, while $\mathcal{L}_\text{R}^\text{KL}$ encourages $\mathbf{q}_\text{R}^{i}$ and $\mathbf{q}_\text{T}$ to be similar to each other and $\mathcal{L}_\text{R}^\text{CE}$ makes $\mathbf{q}_\text{R}^{i}$ fit the ground-truth. \vspace{-0.2cm} \paragraph{Distillation using SFTN} As shown in Figure~\ref{fig:overview}(b), the conventional knowledge distillation technique is employed to simulate $\mathbf{F}_\text{T}^{i}$ and $\mathbf{q}_\text{T}$ by $\mathbf{F}_\text{S}^{i}$ and $\mathbf{q}_\text{S}$, respectively. The actual knowledge distillation step is straightforward because the representations of $\mathbf{F}_\text{T}^{i}$ and $\mathbf{q}_\text{T}$ have already been learned properly at the time of training SFTN.
We expect the performance of the student network distilled from the SFTN to be better than that obtained from the conventional teacher network. \subsection{Network Architecture} \label{sub:network} SFTN consists of a teacher network and multiple student branches. The teacher and student networks are divided into $N$ blocks, where the set of blocks in the teacher is given by $\mathbb{B}_\text{T} = \{B_\text{T}^{i}\}_{i=1}^{N}$ while the blocks in the student are denoted by $\mathbb{B}_\text{S} = \{ B_\text{S}^{i}\}_{i=1}^{N}$. Note that the last block in the teacher network does not have an associated student branch. Given an input of the network, $\mathbf{x}$, the output of the softmax function for the main teacher network, $\mathbf{q}_\text{T}$, is given by \begin{equation} \begin{split} \mathbf{q}_\text{T}(\mathbf{x} ; \tau) = \text{softmax} \left( \frac{\mathcal{F}_\text{T}(\mathbf{x})}{\tau} \right), \end{split} \label{eq:SFTN_output} \end{equation} where $\mathcal{F}_\text{T}$ denotes the logit of the teacher network and $\tau$ is the temperature of the softmax function. On the other hand, the output of the softmax function in the $i^\text{th}$ student branch, $\mathbf{q}_\text{R}^i$, is given by \begin{align} \mathbf{q}_\text{R}^i(\mathbf{F}_\text{T}^i ; \tau) = \text{softmax} \left( \frac{\mathcal{F}_\text{S}^i(\mathcal{T}^i(\mathbf{F}_\text{T}^i))}{\tau} \right), \label{eq:sb_i_output} \end{align} where $\mathcal{F}_\text{S}^i$ denotes the logit of the $i^\text{th}$ student branch. \subsection{Loss Functions} \label{sub:loss} The teacher network in the conventional knowledge distillation framework is trained only with $\mathcal{L}_\text{T}$. However, SFTN has additional loss terms, $\mathcal{L}_\text{R}^\text{KL}$ and $\mathcal{L}_\text{R}^\text{CE}$, as described in Section~\ref{sub:overview}.
The total loss function of SFTN, denoted by $\mathcal{L}_\text{SFTN}$, is given by \begin{equation} \mathcal{L}_\text{SFTN} = \lambda_\text{T} \mathcal{L}_\text{T} + \lambda_\text{R}^\text{KL} \mathcal{L}_\text{R}^\text{KL} + \lambda_\text{R}^\text{CE} \mathcal{L}_\text{R}^\text{CE}, \label{eq:loss_sftn} \end{equation} where $\lambda_\text{T}$, $\lambda_\text{R}^\text{KL}$ and $\lambda_\text{R}^\text{CE}$ are the weights of the individual loss terms. Each loss term is defined as follows. First, $\mathcal{L}_\text{T}$ is given by the cross-entropy between the teacher's prediction $\mathbf{q}_\text{T}$ and the ground-truth label $\mathbf{y}$ as \begin{equation} \mathcal{L}_\text{T} = \text{CrossEntropy}(\mathbf{q}_\text{T}, \mathbf{y}). \label{eq:loss_tn} \end{equation} The knowledge distillation loss, denoted by $\mathcal{L}_\text{R}^\text{KL}$, employs the KL divergence between $\mathbf{q}_\text{R}^i$ and $\mathbf{q}_\text{T}$, where the $N-1$ student branches, which exclude the last block in the teacher network, are considered together as \begin{equation} \mathcal{L}_\text{R}^\text{KL} = \frac{1}{N-1} \sum_{i=1}^{N-1} \text{KL}(\tilde{\mathbf{q}}_\text{R}^i || \tilde{\mathbf{q}}_\text{T}), \label{eq:sb_kl} \end{equation} where $\tilde{\mathbf{q}}_\text{R}^i$ and $\tilde{\mathbf{q}}_\text{T}$ denote smoother softmax function outputs with a larger temperature, $\tilde{\tau}$. The cross-entropy loss of the student network, $\mathcal{L}_\text{R}^\text{CE}$, is obtained by averaging the cross-entropy losses over all the student branches, which is given by \begin{equation} \mathcal{L}_\text{R}^\text{CE} = \frac{1}{N-1}\sum_{i=1}^{N-1} \text{CrossEntropy}(\mathbf{q}_\text{R}^i, \mathbf{y}). \label{eq:sb_ce} \end{equation} Note that we set $\tau$ to 1 for both cross-entropy losses, $\mathcal{L}_\text{T}$ and $\mathcal{L}_\text{R}^\text{CE}$.
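The training objective above can be sketched in plain NumPy. The snippet below is a minimal illustration of \eqref{eq:loss_sftn}--\eqref{eq:sb_ce}, not the actual implementation; the temperature $\tilde{\tau}=4$ and the unit loss weights are assumed values for illustration only.

```python
import numpy as np

def softmax(logits, tau=1.0):
    z = logits / tau
    z = z - z.max(axis=1, keepdims=True)       # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    q = softmax(logits)                         # tau = 1 for cross-entropy terms
    return -np.mean(np.log(q[np.arange(len(labels)), labels]))

def kl_div(p, q):
    # KL(p || q) averaged over the batch
    return np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1))

def sftn_loss(teacher_logits, branch_logits, labels,
              tau=4.0, lam_t=1.0, lam_kl=1.0, lam_ce=1.0):
    loss_t = cross_entropy(teacher_logits, labels)              # L_T, eq. (4)
    q_t = softmax(teacher_logits, tau)                          # smoothed teacher output
    loss_kl = np.mean([kl_div(softmax(z, tau), q_t)             # L_R^KL, eq. (5)
                       for z in branch_logits])
    loss_ce = np.mean([cross_entropy(z, labels)                 # L_R^CE, eq. (6)
                       for z in branch_logits])
    return lam_t * loss_t + lam_kl * loss_kl + lam_ce * loss_ce  # L_SFTN, eq. (3)
```

When a branch produces exactly the teacher's logits, its KL term vanishes and the total loss reduces to the weighted sum of the cross-entropy terms, as expected from the definitions.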
\section{Introduction} While there are many tools in probability theory for showing that a random variable is concentrated, there are few for proving \emph{anti-concentration} in a general setting. One family of results in this direction is Littlewood--Offord theory \cite{erdos-LwO,halasz,LittlewoodOfford,stanley,rogozin}, also known as small ball probability, which is a set of tools for obtaining upper bounds on the probability that a random sum is in a ``small'' set. This line of work has led to \emph{inverse Littlewood--Offord theorems} \cite{nguyen-vu,tao-vu,rudelson2008littlewood} which often present a useful dichotomy: either a certain random variable exhibits this anti-concentration or a special structure is present. This approach has been extended to low degree polynomials \cite{meka2016anti,kwan2019algebraic} although sharp results remain elusive in these non-linear cases. Another route towards anti-concentration is by a coupling approach: when the variable of interest is a function of a random environment, one can often couple two instances of the environment so that one instance of the variable is larger than the other. This approach was taken by Wehr and Aizenman \cite{wehr-aizenman} to yield lower bounds on certain variances in the context of the Ising model (and other related models) and is also taken up in other ad-hoc approaches to proving lower bounds on fluctuations~\cite{bollobas-janson,gong-houdre-lember,houdre-ma,houdre-matzinger,janson,lember-matzinger,rhee} which culminated in a recent unifying work of Chatterjee \cite{chatterjee}. For lower bounds specifically on the variance, there are also a few other tools available; the Cram\'er--Rao inequality \cite{cramer,rao} and a related approach by Cacoullos \cite{cacoullos} provide variance lower bounds for functions of i.i.d.\ random variables in terms of Fisher information. 
While these approaches are powerful, they all depend deeply on interpreting the random variable of interest as a function of a random environment, typically a family of i.i.d.\ variables, and thus do not apply to variables without such an interpretation. In this paper, we prove anti-concentration estimates for a random variable based solely on the location of the roots of its probability generating function. For a random variable $X \in \{0,\ldots,n\}$, we let $$ f_X(z) := \sum_{k} \mathbb{P}(X = k)z^k $$ be its probability generating function. We shall write $p_i = \mathbb{P}( X = i)$ when $X$ is clear from context. Our original motivation derives from a conjecture of Pemantle (see \cite{clt1}) and a related conjecture of Ghosh, Liggett and Pemantle \cite{GLP} on random variables with real stable probability generating functions. Pemantle conjectured that random variables $X \in \{0,\ldots,n\}$ for which $f_X$ has no roots in a sector $\{ z : |\arg(z) | < \delta \}$ are approximately normal, provided $\sigma(X) \gg 1$. We formulated the following natural conjecture, which, when combined with ideas from \cite{LPRS,clt1}, implies an important subcase of Pemantle's conjecture: the case in which the roots are bounded away from $0$ and $\infty$. \begin{conjecture}\label{conj:lin-var} Let $X \in \{0,\ldots,n\}$ be a random variable with $p_0 p_n > 0$ and probability generating function $f_X$ and let $\delta > 0 $, $R \geq 1$. If the zeros $\z$ of $f_X$ satisfy $ |\arg(\z)| \geq \delta$ and $R^{-1} \leq |\z| \leq R$ then \[ \mathrm{Var}(X) = \Omega_{R,\delta}( n ). \] \end{conjecture} While we ultimately resolved the conjecture of Pemantle by different means (see \cite{clt2}), Conjecture~\ref{conj:lin-var} remains of independent interest and motivates our work here.
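The roots of $f_X$ are easy to experiment with numerically. Since $f_X(e^t) = \mathbb{E}\, e^{tX}$, twice differentiating $\log f_X(e^t)$ at $t = 0$ gives $\mathrm{Var}(X)$, and factoring $f_X$ over its roots expresses this variance as $\sum_j \mathrm{Re}\big[-\zeta_j/(1-\zeta_j)^2\big]$. The sketch below (our own illustration, with arbitrary parameter choices) checks this for a sum of independent Bernoulli variables, whose probability generating function has real negative roots:

```python
import numpy as np

ps = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]    # Bernoulli parameters (arbitrary)

# f_X(z) = prod_i (1 - p_i + p_i z), stored as coefficients in ascending powers of z
coeffs = np.array([1.0])
for p in ps:
    coeffs = np.convolve(coeffs, [1.0 - p, p])

zeros = np.roots(coeffs[::-1])                    # np.roots expects descending powers

# Var(X) = sum_j d^2/dt^2 log|e^t - zeta_j| at t = 0 = sum_j Re[-zeta_j/(1-zeta_j)^2]
var_from_roots = np.sum(np.real(-zeros / (1.0 - zeros) ** 2))
var_exact = sum(p * (1.0 - p) for p in ps)        # variance of the Bernoulli sum
```

Here every root equals $-(1-p_i)/p_i$, so $|\arg(\zeta_j)| = \pi$, and the hypotheses of Conjecture~\ref{conj:lin-var} hold for any $\delta \leq \pi$ once the $p_i$ are bounded away from $0$ and $1$.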
What is perhaps surprising about this conjecture is that it says that random variables $X$ of this type have variance that is essentially as large as possible, as it is not hard to see that $\mathrm{Var}(X) = O_{R,\delta}(n)$. Indeed, under the assumptions of Conjecture \ref{conj:lin-var}, we have $$\mathrm{Var}(X) = \frac{d^2}{d t^2}\log f_X(e^{t})\, \bigg|_{t = 0} = \sum_{j = 1}^n \frac{d^2}{d t^2}\log |e^t - \zeta_j |\, \bigg|_{t = 0} = O_{R,\delta}(n)\,, $$ where $\z_1,\ldots,\z_n$ are the roots of $f_X$. Thus, Conjecture~\ref{conj:lin-var} implies that the variance of such random variables is \emph{determined} up to constant factors. In this paper we not only prove Conjecture~\ref{conj:lin-var}, but supply a near-optimal constant. In other words, we give a near-optimal lower bound for the variance of $X$ based on the smallest argument of a root and the smallest annulus that contains the zeros of $f_X$. \begin{theorem}\label{thm:Var-lower-bound} Let $X \in \{0,\ldots,n\}$ be a random variable with $p_0 p_n > 0$ and probability generating function $f_X$ and let $\delta > 0 $, $R \geq 1$. If the zeros $\z$ of $f_X$ satisfy $ |\arg(\z)| \geq \delta$ and $R^{-1} \leq |\z| \leq R$ then \[ \mathrm{Var}(X) \geq c R^{-2\pi/\delta} n ,\] where $c>0$ is an absolute constant. \end{theorem} We will in fact prove a slightly stronger lower bound, but we postpone this more technical statement to Section~\ref{sec:proof}. \subsection{Some corollaries of Theorem~\ref{thm:Var-lower-bound}} In \cite{clt1,clt2} we studied the relationship between zero-free regions of probability generating functions and central limit theorems, culminating in two sharp central limit theorems\footnote{See Lebowitz, Pittel, Ruelle and Speer's work \cite{LPRS} for earlier results on the relationship between zero-free regions and central limit theorems.}.
In \cite{clt2}, we resolved Pemantle's conjecture by showing that if the generating function of a random variable $X$ has no roots with argument less than $\delta$, then $X$ is approximately Gaussian provided $\mathrm{Var}(X) \gg \delta^{-2}$. Combining this \cite[Theorem 1.4]{clt2} with Theorem \ref{thm:Var-lower-bound} allows us to prove a quantitative central limit theorem for $X$. \begin{corollary}\label{cor:Gauss-flux} Let $X \in \{0,\ldots,n\}$ be a random variable with $p_0p_n> 0$, mean $\mu$, variance $\sigma^2$ and probability generating function $f_X$. Also let $\delta >0$, $R \geq 1$ and set $X^{\ast} := (X-\mu)\sigma^{-1}$. If the zeros $\z$ of $f_X$ satisfy $ |\arg(\z)| \geq \delta$ and $R^{-1} \leq |\z| \leq R$ then $$\sup_{t \in \mathbb{R}} \left|\P\left( X^{\ast} \leq t \right) - \P(Z \leq t) \right| \leq c \delta^{-1} R^{\pi/\delta} \cdot n^{-1/2}, $$ where $Z$ is a standard Gaussian random variable and $c > 0$ is an absolute constant. \end{corollary} Thus, while Theorem \ref{thm:Var-lower-bound} shows a lower bound on the variance of $X$, Corollary~\ref{cor:Gauss-flux} allows us to deduce that $X$ also has \emph{fluctuations} on the order of $\sqrt{\mathrm{Var}(X)} = \Theta_{R,\delta}(\sqrt{n})$, provided $n$ is large relative to $R^{2\pi/\delta}$. From Corollary \ref{cor:Gauss-flux}, we obtain a Littlewood--Offord type theorem for general random variables. To understand this result in the context of Littlewood--Offord theory, we recall the classical result of Erd\H{o}s~\cite{erdos-LwO} which says that if $X$ is of the form $X = \sum_{i=1}^n \varepsilon_iv_i$, where $\varepsilon_1,\ldots,\varepsilon_n \in \{-1,1\}$ are i.i.d. uniform and $v_1,\ldots,v_n \in \mathbb{R}$ are non-zero then \[ \max_{y}\, \mathbb{P}( X = y ) = O(n^{-1/2}). 
\] The following corollary of Theorem~\ref{thm:Var-lower-bound} says that a similar result holds even if $X$ is \emph{not} a sum of independent random variables: one needs only some control on the roots of the probability generating function of $X$. \begin{corollary}\label{cor:LwO} Let $X \in \{0,\ldots,n\}$ be a random variable with $p_0 p_n > 0$ and probability generating function $f_X$ and let $\delta >0$, $R\geq 1$. If the zeros $\z$ of $f_X$ satisfy $ |\arg(\z)| \geq \delta$ and $R^{-1} \leq |\z| \leq R$, then $$\max_y\, \P(X = y) \leq c \delta^{-1} R^{\pi/\delta} n^{-1/2} $$ where $c > 0$ is an absolute constant. \end{corollary} Again this is best possible, up to a factor of $2$ in the exponent of $R$. The reader may have anticipated an upper bound of the form $c(\mathrm{Var}(X))^{-1/2}$ in Corollary~\ref{cor:LwO}, which in the extreme case is a factor of $\delta^{-1}$ off of what we have. However this additional factor \emph{is} necessary, a fact which can be seen, for example, in the sharpness example of Section~\ref{subsec:discuss}. \subsection{Zeros of generating functions} \label{subsec:zeros} Perhaps surprisingly, many families of random variables, which are otherwise elusive, are known to have generating functions with zero-free regions. A classical instance is provided in the highly influential pair of 1952 works of Lee and Yang \cite{lee-yang,yang-lee}, which showed that the roots of the partition function in the ferromagnetic Ising model lie on the unit circle. That is, in our terminology, the probability generating function for ``up spins'' in the Ising model has all of its roots on the unit circle. Another classical example is provided by Heilmann and Lieb \cite{heilmann-leib70}, who showed that a similar special property is enjoyed by the random variable $X = |M|$, where $M$ is a uniformly chosen matching in a graph $G$: in this case the roots of the probability generating function are \emph{real}.
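The Heilmann--Lieb theorem is easy to witness on a small example. The sketch below (our own illustration; the choice of the cycle $C_8$ and the numerical tolerance are ours) enumerates all matchings by brute force and checks that the matching generating polynomial is real-rooted:

```python
import itertools
import numpy as np

# Brute-force check of real-rootedness for the matching generating
# polynomial of the cycle C_8 (an instance of the Heilmann--Lieb theorem).
edges = [(i, (i + 1) % 8) for i in range(8)]
counts = {}                                  # counts[k] = number of k-matchings
for r in range(len(edges) + 1):
    for sub in itertools.combinations(edges, r):
        verts = [v for e in sub for v in e]
        if len(verts) == len(set(verts)):    # edges pairwise disjoint: a matching
            counts[r] = counts.get(r, 0) + 1
coeffs = [counts.get(k, 0) for k in range(max(counts) + 1)]
roots = np.roots(coeffs[::-1])               # np.roots expects highest degree first
assert np.all(np.abs(roots.imag) < 1e-8)     # all roots are (numerically) real
```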
These results on zeros have significant implications for the study of these random systems. Lee and Yang connected the theory of zero-freeness to the non-existence of a phase transition: the ferromagnetic Ising model cannot have a phase transition as the external field $h$ varies, except at $h = 0$. The work of Heilmann and Lieb yields similar results for the study of ``monomer-dimer'' systems and, in fact, here tells us that there is no phase transition at all. While there are other techniques for ruling out phase transitions for the monomer-dimer model on amenable graphs such as $\mathbb{Z}^d$ \cite{van-den-berg} (see \cite{lebowitz-martin,preston} for analogous results for the Ising model), the zero-free approach remains the most robust route to proving there is no phase transition for these models on arbitrary graphs. From a more combinatorial perspective, Godsil \cite{godsil81}, in a classic work, used the work of Heilmann and Lieb to obtain a central limit theorem for the size of a random matching in a $d$-regular graph for fixed $d$. More recently, this was taken much further by Kahn \cite{kahn00} who, essentially relying on the work of Heilmann and Lieb, gave a nearly complete understanding of this phenomenon for matchings in graphs. Recently, it has proven fruitful to consider multivariate (or in the case of random variables, \emph{multi-dimensional}) versions of these polynomials. In the case of both the Ising and monomer-dimer models, \emph{multivariate} zero-free regions are now known and a general theory has developed around these results. One particular success has been had with \emph{stable polynomials}, which emerged in probability theory with the groundbreaking work of Borcea, Br\"and\'en and Liggett \cite{BBL}, who connected stable polynomials to a natural notion of negative dependence set out by Pemantle in his influential work on the subject \cite{pemantle2000}.
We call a random variable with a real stable generating function \emph{strong Rayleigh} and since the work of Borcea, Br\"and\'en and Liggett \cite{BBL} many random variables have been shown to be strong Rayleigh, such as the edges of a uniform spanning tree, independent Bernoulli random variables with conditioned sum, and more \cite{pemantle-survey}. Our original motivation was to solve a question raised by Ghosh, Liggett and Pemantle \cite{GLP} on the limiting shape of strong Rayleigh distributions. While we now know these distributions approximate multivariate Gaussians \cite{clt2}, Theorem~\ref{thm:Var-lower-bound} allows us to understand the scale of this normal shape, in all directions. More specifically, if the generating function $p(z_1,\ldots,z_d)$ of a random variable $(X_1,\ldots,X_d)$ is stable, then the probability generating function of $m_1 X_1 + \cdots + m_d X_d$, for non-negative integers $m_j$, is zero-free in the sector $\{|\arg(z)| < \pi / \max m_j \}$. Theorem \ref{thm:Var-lower-bound} then shows that no positive linear combination of $(X_1,\ldots,X_d)$ can be degenerate, assuming some control over the maximum and minimum root. Another light in which to view our results comes from the analytic theory of characteristic functions as studied by Yu. V. Linnik, Ostrovskii and others. We refer the reader to \cite{LinnikConj} and the book of Linnik and Ostrovskii \cite{linnikBook} and the references therein for more detail on this fascinating line of research. \subsection{Sharpness of results}\label{subsec:discuss} The sharpness of Theorem \ref{thm:Var-lower-bound} (and Corollaries~\ref{cor:LwO} and~\ref{cor:Gauss-flux}), up to a factor of $2$ in the exponent of $R$, is supplied by a natural class of random variables, which we describe here. Fix $R \geq 1$ and $\delta = \pi/k$ for some integer $k \geq 3$.
We choose $p = (1 + R^k)^{-1}$ and let $X_1,\ldots,X_{ n/k }$ be independent, identically distributed Bernoulli random variables where $p = \mathbb{P}(X_j = 1)$ and thus $\mathrm{Var}(X_j) = p(1-p)$. Now define $X$ to be the sum $$X = k \sum_{j = 1}^{ n/k }X_j.$$ One can then see that $f_X(z) = (p z^k + (1 - p))^{ n/k}$ and thus all roots have modulus $R$ and argument at least $\pi/k$ in absolute value. It is not hard to additionally show that \[ \mathrm{Var}(X) = \Theta\left( \delta^{-1} R^{-\pi/\delta}n \right), \] demonstrating that Theorem~\ref{thm:Var-lower-bound} is sharp up to the factor of $2$ in the exponent\footnote{Interestingly, the extra factor of $\delta^{-1}$ appears in our more detailed technical statement, Theorem~\ref{thm:Var-lower-bound-sharper}, while the exponent remains off by a factor of $2$.} of $R$. Likewise, one can show that this example satisfies \[ \max_{y}\, \mathbb{P}( X = y ) = \Theta\left( \delta^{-1/2}R^{\pi/2\delta} n^{-1/2} \right), \] provided $(n/k) R^{-k} \gg 1$. Thus it remains an interesting open problem to close the gap between this example and Theorem~\ref{thm:Var-lower-bound} and Corollary~\ref{cor:LwO}. \subsection{Outline of proof} The proof of Theorem~\ref{thm:Var-lower-bound} is broken into three principal steps. The first step draws on the results of our paper \cite{clt2} and is carried out in Section~\ref{sec:Var-and-phi} where we relate $\mathrm{Var}(X)$ to the value of the function $$\varphi_{\gamma}(z) = \log| f_X(z )| - \log |f_X(ze^{i\gamma})|$$ at $z=1$. The function $\varphi_{\gamma}$ has several nice properties that will be crucial for us: we will see that if $\gamma \approx c\delta$, the function $\varphi_\gamma$ is both positive and harmonic in some sector around the positive real axis and, in addition, the Taylor expansion of $\varphi_\gamma(1)$ in the variable $\gamma$ has leading term $\gamma^2 \mathrm{Var}(X)/2$, thus providing the link with the variance.
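This leading-term claim for $\varphi_\gamma(1)$ can be sanity-checked numerically. Here is a minimal sketch (our own illustration, with arbitrary parameters) for a binomial random variable, where $u_X(1) = 0$ so that $\varphi_\gamma(1) = -\log|f_X(e^{i\gamma})|$:

```python
import numpy as np

# phi_gamma(1) = u(1) - u(e^{i gamma}) = -log|f_X(e^{i gamma})| since u(1) = 0.
# For X ~ Bin(n, p) the leading term in gamma should be gamma^2 * Var(X) / 2.
n, p = 40, 0.3
var = n * p * (1 - p)
for gamma in (1e-2, 1e-3):
    phi = -n * np.log(abs(p * np.exp(1j * gamma) + 1 - p))
    assert abs(phi / (gamma**2 * var / 2) - 1) < 1e-2
```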
Bounding the variance in terms of $\varphi_\gamma(1)$ then amounts to showing that higher terms in this Taylor expansion may be disregarded. However, removing these ``higher'' terms is no small matter and it is at this point that we make essential use of the tools built up in our previous paper \cite{clt2}, which allow us to lower bound $\mathrm{Var}(X)$ in terms of $\varphi_\gamma(1)$ (Lemma~\ref{lem:Var-and-phi}) by controlling the higher cumulants in terms of $\mathrm{Var}(X)$. Indeed, this step makes heavy use of the fact that $f_X$ has non-negative coefficients. After Lemma \ref{lem:Var-and-phi} is in place, our path diverges from the ideas and results in \cite{clt2}. The second step in the proof of Theorem~\ref{thm:Var-lower-bound} is carried out in Section~\ref{sec:connection-to-mellin}, where we obtain a lower bound for $\varphi_{\gamma}(1)$ (Lemma \ref{lem:phi-at-least-H_r}) in terms of the value of a certain ``truncated Mellin transform'' $H_{M,\tau}(s)$. While this step may seem mysterious at this point, we will see later that this Mellin transform has some useful properties in our context that will allow us to get a good handle on it. To relate $\varphi_{\gamma}(1)$ to this truncated Mellin transform we (after some preparations) use the fact that $\log|f_X(z)|$ is harmonic in the sector $\arg(\zeta) \in (0,\delta)$ to write $\varphi_\gamma(r e^{i(\delta-\gamma)})$ as an integral around the boundary of the sector against some Poisson kernel $P(t)$. We are then able to truncate this integral and do a direct comparison to the similar-looking Mellin transform. Again we make use of the non-negativity hypothesis in this step. The final step, which is presented in Sections~\ref{sec:mellin-comp} and~\ref{sec:truncation}, is to control the value of this truncated Mellin transform $H_{M,\tau}(s)$. The key ingredient here is that in our situation of constrained roots, we have very good control over this object for small $s$.
Indeed, when we take $M \to \infty$ and $s \to 0$, we have that $H_{M,\tau}$ approaches $n\tau^2/2$, thereby providing the factor of $n$ in Theorem \ref{thm:Var-lower-bound}. We compute this limit by first calculating the (non-truncated) Mellin transform \emph{exactly}, in the $s \rightarrow 0$ limit. We then show that we can (carefully) truncate the integral without too much loss. The three lower bounds, $\mathrm{Var}(X)$ in terms of $\varphi_\gamma(1)$, $\varphi_\gamma(1)$ in terms of $H_{M,\tau}(s)$, and $H_{M,\tau}(s)$ in terms of $n \tau^2$, are then assembled in Section \ref{sec:proof} to prove Theorem \ref{thm:Var-lower-bound}. The main contribution of this paper, broadly speaking, is to understand how the constraints on the roots of $f_X$ and non-negativity of the coefficients of $f_X$ interact. Interestingly, using \emph{only} information on the zeros is not enough; only by using the full strength of this interaction are we able to deduce our results. Indeed, one can interpret the results of this paper as developing a toolkit for understanding how the non-negativity of $f_X$ interacts with information on the location of the roots. \section{Basic definitions and properties} \label{sec:definitions} In this section we introduce some of the central objects in this paper and state their basic properties. We refer the reader to our paper \cite{clt2} for a more careful treatment of some of the basic results mentioned in this section. For $z\in \mathbb{C} \setminus \{0\}$, we write $z = re^{i\theta}$, where $r>0$ and $\theta \in [-\pi,\pi]$, and then define $\arg(z) = \theta$. For $-\pi \leq \alpha \leq \beta \leq \pi$, we define the \emph{sector} \[S(\alpha,\beta) = \{ z \in \mathbb{C} : \arg(z) \in [\alpha,\beta] \};\] and define $S(\delta) = S(-\delta,\delta)$. We use the notation $f(x) = O(g(x))$ to denote $|f(x)| \leq Cg(x)$, for a positive constant $C$ and we use the notation $o_{x \rightarrow 0}(1)$ to denote a quantity that tends to zero as $x \rightarrow 0$.
For a random variable $X \in \{0,\ldots,n\}$ we let $f_X$ be its probability generating function and define the \emph{logarithmic potential of} $X$ to be \[ u_X(z) = \log|f_X(z)|. \] One of the reasons for the use of the logarithmic potential is immediately apparent: if $f_X$ has no zeros in an open set $\Omega$, then $u_X$ is a harmonic function on $\Omega,$ allowing us to appeal to tools available for working with harmonic functions. We now note and define a few basic properties of $u_X$. If $u$ is a function defined on $S(\alpha,\beta)$, we say that $u$ is \emph{symmetric on} $S(\alpha,\beta)$ if $u(z) = u(\bar{z})$, whenever $z,\bar{z} \in S(\alpha,\beta)$. Of course, if $u_X$ is the logarithmic potential of a random variable, then $u_X$ is symmetric due to the fact that $f_X$ has real coefficients: indeed, write \[ u_X(z) = \log|f_X(z)| = \log|\overline{f_X(z)}| = \log|f_X(\bar{z})| = u_X(\bar{z}). \] We now introduce a key notion that captures the property that $f_X$ has positive coefficients in terms of the logarithmic potential. We say that a function $u$ defined on a sector $S(\alpha,\beta)$ is \emph{weakly positive} if \begin{equation} \label{eq:weak-positivity} u(|z|) \geq u(z), \end{equation} for all $z \in S(\alpha,\beta)$. We shall make essential use of the fact that the logarithmic potential of a random variable is weakly positive; indeed, since $f_X$ has non-negative coefficients we have that \[ |f_X(|z|)| \geq |f_X(z)|, \] for all $z \in \mathbb{C}$. Weak positivity of $u_X$ follows by taking logarithms of both sides. The notion of weak-positivity has been studied in several papers before \cite{BE,deAngelis2,deAngelis3,deAngelis1,eremenko-fryntov,MS-strongPos}; in particular, Bergweiler and Eremenko \cite{BE} showed that weak positivity characterizes the logarithmic potentials of polynomials with non-negative coefficients up to limits. 
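As a quick numerical illustration of \eqref{eq:weak-positivity} (our own sketch, not needed for any proof), one can test the triangle-inequality argument above on a random polynomial with non-negative coefficients:

```python
import numpy as np

# A polynomial with non-negative coefficients satisfies |f(|z|)| >= |f(z)|
# by the triangle inequality, so u(z) = log|f(z)| is weakly positive.
rng = np.random.default_rng(0)
coeffs = rng.random(8)                      # non-negative coefficients p_0..p_7

def f(z):
    return sum(c * z**k for k, c in enumerate(coeffs))

for z in [0.5 * np.exp(1j), 2.0 * np.exp(-0.3j), np.exp(2.5j)]:
    # |sum p_k z^k| <= sum p_k |z|^k = f(|z|)
    assert abs(f(z)) <= f(abs(z)) + 1e-12
```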
We now introduce\footnote{This is the special case $b = 0$ of our notion of $b$-\emph{decreasing} from our previous paper \cite{clt2}.} an essential definition for the work in this paper. We say that a function $u$, defined on the sector $S(\delta)$, is \emph{rotationally decreasing} if $u(\rho e^{i\theta})$ is a decreasing function of $\theta \in [0,\delta]$, for all $\rho >0$. As we shall see in Section~\ref{sec:Var-and-phi}, the properties of being weakly positive and harmonic in $S(\delta)$ combine nicely to give us this enriched positivity property. We shall also draw upon a simple expansion of $u_X$ in terms of the roots of $f_X$. In particular, we have \begin{equation} \label{eq:exp-u-with-roots} u_X(z) = \sum_{|\z| < 1} \log\left|1 - \frac{\z}{z} \right| + \sum_{|\z| \geq 1 }\log\left|1 - \frac{z}{\z}\right| + N_X\log|z| + c_X \,, \end{equation} where $N_X := |\{ \z : |\z| < 1 \}|$, the sums are over the roots $\z$ of $f_X$ and $c_X \in \mathbb{R}$ is defined so that $u_X(1) = 0$. This last property is due to the fact that $f_X(1) = 1$. Since we will work in the case of $u_X$ harmonic in $S(\delta)$, we will often use the theory of Poisson integration, in which we write a value $u_X(z)$ in terms of an integral along the boundary of $S(\delta)$. The sector $S(\delta)$ is unbounded and so we will need some basic control over the asymptotic growth of $u_X(z)$ as $z \to \infty$ and $z \to 0$, both of which will be readily available. We say that a function $u$ on a sector $S(\alpha,\beta)$ has \emph{logarithmic growth} if \[ u(z) = O(\log |z|) \text{ as } z \to \infty, \text{ and } u(z) = O(\log |z|^{-1} ) \text{ as } z \to 0,\] for $z \in S(\alpha,\beta)$. Notice that since $f_X$ is a polynomial of degree at most $n$, $u_X$ has logarithmic growth. We now introduce an important companion to $u_X$ in this paper, the function $\varphi_{\gamma} = \varphi_{\gamma,u}$.
For $\gamma \in (0,\delta)$ and a function $u$ on $S(\delta)$, define \begin{equation} \label{eq:def-of-phi} \varphi_{\gamma}(z) := u(z) - u(e^{i\gamma}z). \end{equation} The importance of $\varphi_{\gamma}$ comes jointly from the fact that it is both positive and harmonic in a sector and from the fact that the leading term in the series expansion of $\varphi_\gamma(1)$ in $\gamma$ is $\gamma^2 \mathrm{Var}(X) / 2$. This second observation will be noted in Section~\ref{sec:Var-and-phi} and we record the first observation here. \begin{obs} \label{obs:basic-vp-facts} For $\delta >0$ and $\gamma \in (0,\delta)$, let $u$ be a symmetric function on $S(-\delta,\delta)$ and put $\varphi_{\gamma}(z) = \varphi_{\gamma,u}(z)$. \begin{enumerate} \item If $u$ is a harmonic function on $S(\delta)$ then $\varphi_{\gamma}(z)$ is harmonic on $S(-\delta,\delta-\gamma)$; \item If $u$ is rotationally decreasing in the sector $S(\delta)$ then $\varphi_{\gamma}$ is positive in $S(-\gamma/2,\delta-\gamma)$. \end{enumerate} \end{obs} \noindent This observation is not hard to check, but can also be found in \cite[Section 3]{clt2}. \section{Relating $\varphi_\gamma$ to the variance of $X$} \label{sec:Var-and-phi} In this section we prove that the variance of $X$ can be lower-bounded by $\varphi_{\gamma}(1)$ (defined at \eqref{eq:def-of-phi}) for $\gamma \in (0,\delta/2^{6})$. \begin{lemma} \label{lem:Var-and-phi} For $\delta \in (0,\pi)$ and $\gamma \in (0,\delta/2^{6})$, let $X \in \{0,\ldots,n\}$ be a random variable with $\mathrm{Var}(X) > 0$, for which $f_X(z)$ has no zeros in $S(\delta)$. Then \[ \mathrm{Var}(X) \geq c \gamma^{-2} \varphi_{\gamma}(1), \] where $c>0$ is an absolute constant and $\varphi_{\gamma} = \varphi_{\gamma,u_X}$. \end{lemma} To prove this, we rely heavily on tools developed by the authors in \cite{clt2}. Our first step is to use the following lemma which tells us that $u_X$ is rotationally decreasing in a sector.
\begin{lemma} \label{lem:decreasing} For $\delta >0$, let $u$ be a weakly positive, symmetric and harmonic function on the sector $S(\delta)$, which has logarithmic growth. Then $u$ is rotationally decreasing in $S(\delta/2)$. \end{lemma} \begin{proof} This follows from Lemma 4.1 in \cite{clt2}, by applying that lemma with $b \rightarrow 0$ and with $r$ taken sufficiently large so that its condition is satisfied.\end{proof} \vspace{2mm} We now note a useful series expansion of $u(e^{w})$. We refer the reader to \cite[Lemma 3.1]{clt2} for a more detailed proof of the facts stated here. If $u$ is symmetric and harmonic in $S(\delta)$ we may express $u(e^{w})$ as a series for all $w$ in a neighborhood of $0\in \mathbb{C}$. Indeed, if $u(1) = 0$ we may write \begin{equation} \label{eq:U-series} u(e^{w}) = \sum_{j\geq 1} a_j\Re\left( w^j \right), \end{equation} for all $w \in B(0,\delta)$, where $(a_j)_{j\geq 1}$ is a sequence of real numbers. The sequence $(a_j)_{j\geq 1}$ is very closely related to the \emph{cumulant sequence} of the random variable $X$, and in particular \begin{equation} \label{eq:var-is-2nd-cumulant} \mathbb{E} X = a_1, \quad \mathrm{Var}(X) = 2a_2. \end{equation} We call this sequence $(a_j)_{j\geq 1}$ the \emph{normalized cumulant sequence of} $X$ and, more generally, of a symmetric harmonic function on $S(\delta)$. Using the definition of $\varphi_\gamma = \varphi_{\gamma,u}$ at \eqref{eq:def-of-phi} along with \eqref{eq:U-series} we obtain an expansion for $\varphi_\gamma$: \begin{equation} \label{eq:phi-series} \varphi_{\gamma}(e^{w}) = \sum_{j\geq 2 } a_j\Re\left(w^j - (i\gamma + w)^j\right), \end{equation} for sufficiently small $w$, when $\gamma \in (0,\delta/2)$. We will then apply the following lemma which tells us that if a function is decreasing, weakly positive and symmetric we can obtain very tight control over the tail of the normalized cumulant sequence.
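For real $w = t$ the series \eqref{eq:U-series} is just the Taylor expansion of the cumulant generating function $\log f_X(e^t)$, so $a_j = \kappa_j/j!$ where $\kappa_j$ is the $j$-th cumulant of $X$. The following sketch (our own check, with arbitrary parameters) verifies the first two of these relations for a binomial random variable by finite differences:

```python
import numpy as np

# For real w = t, u(e^t) = log f_X(e^t) is the cumulant generating function,
# so in the series expansion a_j = kappa_j / j!; in particular a_1 = E[X]
# and a_2 = Var(X)/2.  We check this for X ~ Bin(n, p).
n, p = 30, 0.4

def u(t):
    return n * np.log(p * np.exp(t) + 1 - p)

h = 1e-4
a1 = (u(h) - u(-h)) / (2 * h)                  # coefficient of t
a2 = (u(h) - 2 * u(0.0) + u(-h)) / (2 * h**2)  # coefficient of t^2
assert abs(a1 - n * p) < 1e-6
assert abs(a2 - n * p * (1 - p) / 2) < 1e-3
```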
The following lemma is the special case $b=0$ of Lemma 6.1 in our paper \cite{clt2}. \begin{lemma}\label{lem:CumulantDecay} For $\varepsilon \in (0,1/2)$, let $u$ be a rotationally decreasing, symmetric and harmonic function on $B(1,2^{4}\varepsilon)$. Let $(a_j)_{j\geq 1}$ be the normalized cumulant sequence of $u$. If $(a_j)_{j\geq 2}$ is a non-zero sequence then for all $L \geq 2$ we have \begin{equation} \label{equ:CumulantFrac} \frac{ \sum_{j\geq L } |a_j|\varepsilon^j }{ \sum_{j\geq 2} |a_j|\varepsilon^j } \leq C \cdot 2^{-L}, \end{equation} where $C >0$ is an absolute constant. \end{lemma} We also need the following result from our paper \cite[Lemma 7.1]{clt2}, which says that if there is a small $k$ for which $|a_k|$ is large \emph{then} $|a_2| = \mathrm{Var}(X)/2$ must \emph{also} be large. This lemma can be seen as a quantitative form of a (non-quantitative) lemma co-discovered by De Angelis \cite{deAngelis1} and Bergweiler, Eremenko and Sokal \cite{BES}; further, it may be viewed as a quantitative version of Marcinkiewicz's classical theorem on cumulants \cite{lukacs,marcinkiewicz}: \begin{lemma}\label{lem:BES} For $s > 0 $ and $L> 2$, let $u$ be a weakly positive, symmetric and harmonic function on $B(1,2s)$ and let $(a_j)_{j}$ be its normalized cumulant sequence. If $(a_j)_{j\geq 2}$ satisfies \begin{equation} \label{equ:cutoff} \sum_{j = 2}^{L} |a_j|s^j \geq \sum_{j> L} |a_j|s^j, \end{equation} then there exists a real number $s_{\ast} > s2^{-6(L+1)}$ for which $ |a_2| \geq s^{j-2}_{\ast}|a_j|, $ for all $j \geq 2$.\end{lemma} \vspace{4mm} With these tools laid out, we are now in a position to prove the main result of this section.
\vspace{4mm} \noindent \emph{Proof of Lemma~\ref{lem:Var-and-phi}.} Let $X \in \{0,\ldots,n\}$ be a random variable, let $f_X$ be its probability generating function, let $u = u_X$ be its logarithmic potential and let $\varphi_{\gamma} = \varphi_{\gamma,u}$ be the function defined at \eqref{eq:def-of-phi} for $0 < \gamma < \delta/2^{6}$. Since $f_X$ has no zeros in $S(\delta)$, it follows that $u_X$ is harmonic on $S(\delta)$ and thus we may use the expansion of $\varphi_{\gamma}(e^w)$, \[ \varphi_{\gamma}(e^{w}) = \sum_{j \geq 2} a_j\Re\big( w^j - (i\gamma + w) ^j\big), \] as noted at \eqref{eq:phi-series}, which is valid for all $|w| \leq \delta/2$. Now set $w = 0 $ and apply the triangle inequality to obtain \[ |\varphi_{\gamma}(1)| \leq \sum_{j \geq 2} |a_j|\gamma^j. \] Since $u$ is weakly-positive, symmetric and harmonic in $S(\delta)$ and has logarithmic growth, we may apply Lemma~\ref{lem:decreasing} to see that $u$ is rotationally decreasing in $S(\delta/2)$. Seeking to apply Lemma~\ref{lem:CumulantDecay} with $\varepsilon = \gamma$, note that $B(1,2^4 \gamma) \subset B(1,\delta/4) \subset S(\delta/2)$ and so for all $L$, we have \begin{equation} \label{eq:app-of-cumulant-decay} \frac{ \sum_{j\geq L } |a_j|\gamma^j }{ \sum_{j\geq 2} |a_j|\gamma^j } \leq C \cdot 2^{-L}, \end{equation} where $C$ is a large but absolute constant. If we put $L = \log_2 C+1$, we have \[ \varphi_{\gamma}(1)/2 \leq \sum_{ j = 2}^{L} |a_j|\gamma^j,\] and thus by averaging there is a $j \in [L]$ for which \begin{equation} \label{eq:aj-and-vp} \varphi_{\gamma}(1)/(2\gamma^jL) \leq |a_j|. \end{equation} Now \eqref{eq:app-of-cumulant-decay} also tells us that \[\sum_{ j = 2}^{L} |a_j|\gamma^j \geq \sum_{ j>L} |a_j|\gamma^j,\] and so we may apply Lemma~\ref{lem:BES} to learn that there is an $s_{\ast} \geq \gamma 2^{-6(L+1)} =: \gamma c_0 $ for which $|a_2| \geq s_{\ast}^{j-2}|a_j|$.
Using this, along with \eqref{eq:aj-and-vp} gives \[ \mathrm{Var}(X) = 2|a_2| \geq \frac{1}{2}(\gamma c_0 )^{j-2}|a_j| \geq \varphi_{\gamma}(1)(c_0\gamma)^{-2} (\gamma c_0)^j\frac{1}{4L\gamma^j} \geq c\gamma^{-2}\varphi_{\gamma}(1) , \] where $c >0$ is an absolute constant. \qed \section{Connection to a Mellin Transform} \label{sec:connection-to-mellin} In the previous section, we showed that we could bound $\mathrm{Var}(X)$ in terms of $\varphi_{\gamma}(1)$. In this section, we take another step towards the proof of Theorem~\ref{thm:Var-lower-bound}, by obtaining a lower bound for $\varphi_{\gamma}(1)$ in terms of a function\footnote{As with $\varphi_{\gamma}$, we shall usually suppress the explicit dependence on $u$ as it will be clear from context.} $H_{M,\tau}(s) = H_{M,\tau,u}(s)$, which resembles a truncated version of a Mellin transform. In particular, for a harmonic function $u$ on $S(0,\tau)$ for $\tau \in (0,\pi)$, $s \in (0,1)$ and $M \geq 1$, we define \begin{equation}\label{eq:H_R-def} H_{M,\tau,u}(s) := \int_{1/M}^M (u(t) - u(e^{i\tau} t))t^{-(s + 1)}\,dt \, .\end{equation} We note that taking $M \to \infty$ yields the Mellin transform of our function $\varphi_\tau(t)$. The following lemma is the main result of this section and allows us to control $\varphi_{\gamma}$ in terms of $H_{M,\tau}$. \begin{lemma} \label{lem:phi-at-least-H_r} For $\delta \in (0,\pi)$, $M \geq 1$, $s \in (0,1)$ and for $\eta \in (0,1/2)$ set $\gamma = \eta \delta$ and let $u$ be a weakly positive harmonic function on $S(\delta)$ that has logarithmic growth. Then \begin{equation} \varphi_{\gamma}(1) \geq \frac{c_\eta H_{M,\delta}(s)}{\delta M^{2\pi/\delta + s}}, \end{equation} where $c_\eta >0$ is a constant depending only on $\eta$. \end{lemma} So far we have not made it clear why $H_{M, \delta}$ is any easier to work with than $\varphi_{\gamma}$ and the reader may be skeptical that Lemma~\ref{lem:phi-at-least-H_r} is of any use to us.
However, we shall see that $H_{M,\delta}(s)$ has a very different behavior for small $s >0$, which we are able to take full advantage of. The reader should also keep in mind that when we apply Lemma~\ref{lem:phi-at-least-H_r} in our proof of Theorem~\ref{thm:Var-lower-bound}, we will ultimately choose $M$ to be a multiple of $R$, $s \to 0$, and $\eta$ an explicit small constant. We now turn to the proof of Lemma~\ref{lem:phi-at-least-H_r}, which naturally breaks into three steps. In the first step we compare $\varphi_{\gamma}(1)$ with values $\varphi_{\gamma}(\rho e^{i(\delta-\gamma)/2})$ for all $\rho \approx 1$. In the second step we use the theory of \emph{Poisson integration} to express $\varphi_{\gamma}(\rho e^{i(\delta-\gamma)/2})$ in an integral form with positive integrand. Then, finally, we relate the integral form obtained in the second step to the integral $H_{M,\delta}(s)$. \subsection{Moving away from the boundary} Here we shall compare $\varphi_{\gamma}(1)$ to the values $\varphi_\gamma(\rho e^{i(\delta - \gamma)/2})$, where $\rho \approx 1$, by using the connection between harmonic functions and Brownian motion. This connection is well-known and we refer the reader to Chapters 7 and 8 in the book \cite{peres} for a detailed treatment. Here we need a basic estimate on the probability that a Brownian motion hits the top side of a particular polar-rectangle at its first exit. \begin{obs}\label{obs:B-motion-hits-side} For $\delta >0$ and $\eta \in (0,1/2)$, let $\gamma =\eta \delta$, let $(B_t)_t$ be a planar Brownian motion started at $1 \in \mathbb{C}$, let \[ R := \{ \rho e^{i\theta} : \rho \in [e^{-\delta},e^\delta], \theta \in [-\gamma/2, (\delta - \gamma)/2] \}, \] and let $T$ be the stopping time $T := \min\{t : B_t \in \partial R \}$. Then \[ \mathbb{P}\left( \arg(B_T) = (\delta - \gamma)/2 \right) \geq c_{\eta},\] for some constant $c_{\eta} >0$ depending only on $\eta$. 
\end{obs} This estimate together with the non-negativity of $\varphi_\gamma$ then allows us to compare $\varphi_\gamma(1)$ to values of $\varphi_\gamma$ away from the boundary of $S(0,\delta)$. \begin{lemma}\label{lem:vp-compare} For $\delta \in (0,\pi)$ and for $\eta \in (0,1/2)$, let $\gamma = \eta \delta$ and let $u$ be a weakly-positive, symmetric, harmonic function on $S(\delta)$ that has logarithmic growth. Then \[ \varphi_{\gamma}(1) \geq c_\eta \min_{\rho \in[e^{-\delta},e^\delta]} \varphi_{\gamma}(\rho e^{i(\delta-\gamma)/2}), \]where $c_\eta >0$ is a constant depending only on $\eta$. \end{lemma} \begin{proof} First note that since $u$ is a weakly-positive, symmetric, harmonic function on $S(\delta)$ with logarithmic growth, we can apply Lemma~\ref{lem:decreasing} to learn that $u$ is rotationally decreasing in $S(\delta/2)$. Thus, by Observation~\ref{obs:basic-vp-facts}, we see that $\varphi_\gamma$ is harmonic and non-negative in the region $$R= \{ \rho e^{i\theta} : \rho \in [e^{-\delta},e^\delta], \theta \in [-\gamma/2, (\delta - \gamma)/2] \}. $$ We now use the Brownian motion interpretation of harmonic functions to write \begin{equation} \label{eq:vp-boundary} \varphi_\gamma(1) = \mathbb{E}\, \varphi_\gamma(B_T )\,, \end{equation} where $B_t$ is a standard planar Brownian motion started at $1 \in R$ and $T= \min \{t : B_t \in \partial R\}$.
Using Observation~\ref{obs:B-motion-hits-side}, \eqref{eq:vp-boundary} and the fact that $\varphi_{\gamma}$ is non-negative on $R$ allows us to bound \[ \varphi_\gamma(1) \geq \mathbb{P}\left( \arg(B_T) = \frac{\delta - \gamma}{2} \right) \min_{\rho \in [e^{-\delta},e^{\delta}]} \varphi_\gamma(\rho e^{i(\delta - \gamma)/2} ) \geq c_{\eta} \min_{\rho \in [e^{-\delta},e^{\delta}]} \varphi_\gamma(\rho e^{i(\delta - \gamma)/2} ),\] as desired.\end{proof} \subsection{Obtaining an integral form} We now take our second step towards Lemma~\ref{lem:phi-at-least-H_r} by obtaining an integral form for the function $\varphi_{\gamma}$. To state this, let $\tau \in (0,\pi)$, set $g = \pi/\tau$, and then for each $z \in S(0,\tau)$ define the function $P_{z,\tau} :\mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{\geq 0}$ by \begin{equation} \label{eq:Def-of-P-Ker} P_{z,\tau}(t) = \frac{g t^{g - 1} \Im(z^g)}{\pi | z^g - t^g|^2}\,. \end{equation} The following lemma gives us our desired integral form of the function $\varphi_{\gamma}$ when evaluated at points of the form $\rho e^{i(\tau - \gamma)/2}$. \begin{lemma}\label{lem:IntFormForDiff} For $\tau \in (0,\pi)$ and $\gamma \in (0,\tau/2)$, let $u$ be a harmonic function on a neighborhood of $S(0,\tau)$ that has logarithmic growth. Then for $z(\rho) = \rho e^{i(\tau - \gamma)/2}$ we have \begin{equation} \label{equ:diffofUs} \varphi_{\gamma}(z(\rho)) = \int_{0}^{\infty} ( u(t) - u( te^{i\tau}) )P_{z,\tau/2}(t)\, dt, \end{equation} where $P_{z,\tau/2}(t)$ is defined at \eqref{eq:Def-of-P-Ker}.\end{lemma} Here, we require only two further properties of $P_{z,\tau/2}$: that $P_{z,\tau/2}(t) \geq 0$ for all $t >0$, and the following basic estimate on its growth. \begin{lemma} \label{lem:Ratio-PKernel-and-ts} For $M \geq 1$, $\tau \in (0,\pi)$ and $\eta \in (0,1/2)$ let $\gamma = \eta \tau$.
Then for $s \in (0,1)$ $$\min_{M^{-1} \leq t \leq M} \min_{e^{-\tau }\leq \rho \leq e^\tau } \{ t^{1 + s}P_{z(\rho),\tau/2}(t) \} \geq c_{\eta} \tau^{-1} M^{-2\pi/\tau - s}$$ where $z(\rho) = \rho e^{i(\tau - \gamma)/2}$ and $c_\eta > 0$ is a constant depending only on $\eta$. \end{lemma} The idea behind Lemma \ref{lem:IntFormForDiff} is to use the connection between harmonic functions and Brownian motion to write $u(\rho e^{i(\tau - \gamma)/2})$ as an integral over the boundary of $S(0,\tau/2)$ and $u(\rho e^{i(\tau + \gamma)/2})$ as an integral over the boundary of $S(\tau/2,\tau)$. By symmetry, the contributions of the integrals along the ray $\arg(z) = \tau/2$ will precisely cancel when we take the difference $u(\rho e^{i(\tau - \gamma)/2}) - u(\rho e^{i(\tau + \gamma)/2})$, giving the identity in Lemma~\ref{lem:IntFormForDiff}. We postpone the proofs of Lemmas~\ref{lem:IntFormForDiff} and \ref{lem:Ratio-PKernel-and-ts} to Appendix~\ref{sec:mellin-appendix}, as the details are not particularly interesting and distract somewhat from the main course of our proof. For now, we see how these pieces fit together to prove Lemma~\ref{lem:phi-at-least-H_r}. \vspace{4mm} \begin{proof}[Proof of Lemma~\ref{lem:phi-at-least-H_r}] Let $\tau \in (\delta/2,\delta)$. For $z = \rho e^{i(\tau - \gamma)/2}$ we apply Lemma~\ref{lem:IntFormForDiff} and write \begin{equation} \label{equ:Poisson-form-of-vp} \varphi_\gamma(z) = \int_{0}^\infty (u(t) - u(te^{i\tau})) P_{z,\tau/2}(t)\,dt\,. \end{equation} We now make crucial use of weak-positivity \eqref{eq:weak-positivity} (that is, $u(t) - u(te^{i\tau}) \geq 0$) and the fact that $P_{z,\tau/2}(t) \geq 0$, to write \[ \varphi_\gamma(z) \geq \int_{M^{-1}}^M(u(t) - u(te^{i\tau})) P_{z,\tau/2}(t)\,dt .
\] An application of Lemma~\ref{lem:vp-compare} together with weak-positivity and Lemma~\ref{lem:Ratio-PKernel-and-ts} gives \begin{align*} \varphi_\gamma(1) &\geq c_\eta \min_{e^{-\tau} \leq \rho \leq e^\tau} \int_{M^{-1}}^M (u(t) - u(te^{i\tau})) P_{z,\tau/2}(t)\,dt \\ &\geq c_\eta \min_{M^{-1} \leq t \leq M} \min_{e^{-\tau} \leq \rho \leq e^\tau} \{ t^{1 + s} P_{z,\tau/2}(t) \} \cdot \int_{M^{-1}}^M (u(t) - u(t e^{i\tau})) t^{-(1+ s)}\,dt \\ &\geq c_{\eta}c'_{\eta} \cdot \tau^{-1} M^{-2\pi/\tau - s} H_{M,\tau}(s)\,. \end{align*}Taking $\tau \uparrow \delta$ completes the proof of the Lemma. \end{proof} \section{A Mellin Transform Calculation} \label{sec:mellin-comp} The goal of this short section is to compute $H_{M,\tau}(s)$ when $M = +\infty$ and $s \rightarrow 0$. For this, we define \[ L_{\z,\tau}(t) := 2\log \left|1 - t\z^{-1} \right| - \log \left|1 - e^{i\tau}t \z^{-1} \right| - \log\left| 1 - e^{-i\tau}t \z^{-1} \right|,\] for $\tau \in (0,\pi)$ and $\z \in \mathbb{C} \setminus \{0\}$. Our main goal in this section is to show the following. \begin{lemma}\label{lem:whole-log-int} For $\tau \in (0,\pi)$, $\z \in \mathbb{C} \setminus \{0\}$ with $|\arg(\z)| > \tau$ and $s \in (0,1)$, we have \begin{equation} \label{eq:L-whole-int} \int_0^{\infty} L_{\z,\tau}(t) t^{-(s+1)}\, dt = \tau^2+ o_{s \rightarrow 0}(1). \end{equation} \end{lemma} This calculation is implicit in the work of Eremenko and Fryntov~\cite{eremenko-fryntov} and begins to reveal the special structure that emerges when $s$ is small. Indeed, angular information about the root $\z$ is lost as we send $s \rightarrow 0$. We also note the connection between $L_{\z,\tau}$ and our function $H_{M,\tau}$. \begin{lemma}\label{lem:L-form} Let $X \in \{0,\ldots,n\}$ be a random variable with probability generating function $f_X$. For all $M \geq 1$ and $\tau >0$, we have $$H_{M,\tau}(s) = \frac{1}{2} \sum_{\zeta} \int_{1/M}^M L_{\z,\tau}(t) t^{-(s+1)}\, dt ,$$ where the sum is over the roots $\{\z\}$ of $f_X$.
\end{lemma} \begin{proof} Set $u = u_X$ to be the logarithmic potential of $X$. The symmetry property of $u$ implies \[ u(t) - u(e^{i\tau}t) = \frac{1}{2}\left(2u(t) - u(e^{i\tau}t) - u(e^{-i\tau}t)\right), \] and then, using the definition of $H_{M,\tau}(s)$, we write \begin{equation} \label{eq:HR-in-bound} H_{M,\tau}(s) = \frac{1}{2}\int_{1/M}^M \left( 2u(t) - u(e^{i\tau}t) - u(e^{-i\tau}t)\right)t^{-(s+1)}\, dt. \end{equation} Now, using the expansion of $u$ in terms of its roots (as we noted at \eqref{eq:exp-u-with-roots}) gives \[ u(z) = \sum_{|\zeta| < 1} \log\left|1 - \frac{\zeta}{z} \right| + \sum_{|\z| \geq 1 }\log\left|1 - \frac{z}{\zeta}\right| + N_X\log|z| + c_X, \] which allows us to write \begin{equation}\label{eq:u-diff-as-L-sum} 2u(t) - u(e^{i\tau}t) - u(e^{-i\tau}t) = \sum_{|\z| \geq 1} L_{\z,\tau}(t) + \sum_{|\z| < 1} L_{\z^{-1},\tau}(1/t) = \sum_{\zeta} L_{\z,\tau}(t) \end{equation} by using the identity $L_{\zeta,\tau}(t) = L_{\zeta^{-1},\tau}(1/t)$. Using \eqref{eq:u-diff-as-L-sum} in \eqref{eq:HR-in-bound} and swapping the sum and integral completes the proof. \end{proof} \vspace{2mm} To aid in our calculation we define (abusing notation slightly), for $\theta \in \mathbb{R}$ and $\tau \in (0,\pi)$, \begin{equation} \label{eq:def-of-Lt} L_{\theta,\tau}(t) := 2\log \left|1 - e^{i\theta}t \right| - \log \left|1 - e^{i(\theta + \tau)}t\right| - \log\left| 1 - e^{i(-\theta +\tau)}t \right|. \end{equation} We also define the function $\phi_s(\theta)$ by first setting $\phi_s(\theta) = \cos(s(\theta - \pi))$, for $\theta \in [0,2\pi]$, and then extending this function periodically to all of $\mathbb{R}$. That is, we define $\phi_s(\theta) := \cos(s(\theta - \pi - 2k\pi))$, for all $\theta \in [2\pi k,2\pi(k+1)]$ and all $k \in \mathbb{Z}$. We note the following fact before moving on to the proof of Lemma~\ref{lem:whole-log-int}.
\begin{fact} \label{fact:int-formula} For $\theta \in \mathbb{R}$ and $s \in (0,1)$ we have $$\int_{0}^\infty \log\left|1 - e^{i\theta} t \right| t^{-(s+1)}\,dt = c_s \phi_s(\theta), $$ where $c_s = \pi/ (s \sin(\pi s))$. \end{fact} This identity appears as equation I.4.22 in \cite{mellin} as well as \cite{eremenko-fryntov}. \begin{proof}[Proof of Lemma~\ref{lem:whole-log-int}] Write $\z = \rho e^{i\theta}$. Changing variables, we write \begin{equation}\label{eq:chang-var} \int_0^{\infty} L_{\z,\tau}(t) t^{-(s+1)}\, dt = \rho^{-s}\int_0^{\infty} L_{-\theta,\tau}(t) t^{-(s+1)}\, dt \end{equation} and then applying Fact~\ref{fact:int-formula} gives \[ \int_0^{\infty} L_{-\theta,\tau}(t) t^{-(s+1)}\, dt = c_s\left(2\phi_s( - \theta) - \phi_s(\tau - \theta) - \phi_s(\theta+\tau)\right). \] Note that since $L_{-\theta,\tau} = L_{\theta,\tau}$, we may assume that $\theta \in [0,\pi]$. So, the periodicity of $\phi_s$ allows us to write the expression in the brackets as \[ 2\phi_s(2\pi - \theta) - \phi_s(2\pi + \tau - \theta) - \phi_s(\tau + \theta) \] which is equal to \begin{equation}\label{eq:mellin-L-cos} 2\cos(s(\pi - \theta)) - \cos(s(\tau - \theta + \pi)) - \cos(s( \tau + \theta -\pi)), \end{equation} by the definition of $\phi_s$ and the fact that $\theta \in [\tau,\pi]$, which implies that each of the arguments \[ 2\pi - \theta , 2\pi + \tau - \theta, \tau + \theta \in [0,2\pi]. \] We now use the Taylor expansion of cosine to express \eqref{eq:mellin-L-cos} as \[ s^2\tau^2 - \frac{s^4 \tau^2}{12}(6(\pi - \tau)^2 + \tau^2) + {E}_{\zeta}(s) = s^2\tau^2 + O(s^4), \] where $|{E}_\zeta(s)| \leq \frac{4}{6!}((s\pi)^6)$ for $s < 1$. We now notice that \[ \lim_{s \rightarrow 0} s^2c_s = \lim_{s\rightarrow 0} \frac{\pi s^2}{s \sin(\pi s)} = 1 \] and thus \[ \int_0^{\infty} L_{-\theta,\tau}(t) t^{-(s+1)}\, dt = c_s\left(s^2\tau^2 + O(s^4)\right) = \tau^2 + o_{s\rightarrow 0}(1) .
\] Putting this together with \eqref{eq:chang-var} finishes the proof for $\rho \geq 1$ after noting that $\rho^{-s} = 1 + o_{s \to 0}(1)$. The proof for $\rho\leq 1 $ is symmetric. \end{proof} \section{Truncating the Mellin Transform}\label{sec:truncation} In this section we prove the following lemma which will be essential to obtaining a lower bound on $H_{M,\tau}(s)$. \begin{lemma} \label{lem:mainLogBound} For $\varepsilon \in (0,1/2)$, $M \geq 1+\varepsilon$ and $\tau \in (0,\pi)$, let $\z \in \mathbb{C} \setminus \{0\}$ satisfy $\tau < |\arg(\z)| \leq \pi$ and $M^{-1} \leq |\z| \leq M$. Then, for sufficiently small $s >0 $, we have \begin{equation} \label{equ:trunc-geq0} \int_{1/M}^M L_{\z,\tau}(t)t^{-(s+1)}\, dt \geq 0. \end{equation} If we additionally have $|\arg(\z)| \geq \pi/4$ then, for sufficiently small $s >0$, we have \begin{equation} \label{eq:trunc-int} \int_{1/M}^M L_{\z,\tau}(t)t^{-(s+1)}\, dt \geq |\zeta|^{-s}2^{-10}\varepsilon \tau^2 (1 + o_{\tau \rightarrow 0}(1)). \end{equation} \end{lemma} We will also need the following cheap bound to deal with the case when $\delta$ is bounded away from $0$. \begin{lemma} \label{lem:mainLogBound2} For $\alpha \in (0,1)$, $M \geq 1$ and $\tau \in (0,\pi)$, let $\z \in \mathbb{C} \setminus \{ 0 \}$ be such that $(\alpha M)^{-1} \leq |\z| \leq \alpha M$ and $|\arg(\zeta)| > \tau$. Then we have \[ \int_{1/M}^M L_{\z,\tau}(t)t^{-(s+1)}\, dt \geq |\zeta|^{-s} \left( \tau^2/2 + O\left( \alpha^{1-s} \right) \right),\] for sufficiently small $s >0$. \end{lemma} We note that this bound is sufficient to prove a weaker version of Theorem \ref{thm:Var-lower-bound} of the form $\mathrm{Var}(X) \geq c_\delta R^{-2\pi/\delta} n$ if one does not care about the dependence on $\delta$.
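The conclusions of Lemma~\ref{lem:whole-log-int} and of the positivity claim \eqref{equ:trunc-geq0} can be checked numerically. The following sketch (our own illustration, not part of the proofs; it assumes \texttt{numpy} and \texttt{scipy}, and the helper names are ours) evaluates the Mellin integrals for the sample root $\z = e^{2i}$, which lies outside the sector of opening $\tau = 0.4$:

```python
import numpy as np
from scipy.integrate import quad

def L(zeta, tau, t):
    # L_{zeta,tau}(t) = 2 log|1 - t/zeta| - log|1 - e^{i tau} t/zeta| - log|1 - e^{-i tau} t/zeta|
    w = t / zeta
    return (2 * np.log(abs(1 - w))
            - np.log(abs(1 - np.exp(1j * tau) * w))
            - np.log(abs(1 - np.exp(-1j * tau) * w)))

def mellin(zeta, tau, s, lo=0.0, hi=np.inf):
    # numerically evaluate \int_lo^hi L_{zeta,tau}(t) t^{-(s+1)} dt
    val, _ = quad(lambda t: L(zeta, tau, t) * t ** (-(s + 1)), lo, hi, limit=400)
    return val

tau = 0.4
zeta = np.exp(2.0j)      # |zeta| = 1 and arg(zeta) = 2.0 > tau
for s in (0.1, 0.01):
    print(s, mellin(zeta, tau, s))       # approaches tau**2 = 0.16 as s -> 0

M = 10.0
print(mellin(zeta, tau, 0.01, lo=1 / M, hi=M))   # truncated integral is non-negative
```

Here the truncated integral is non-negative for an easy reason: $|\arg(\z)| = 2.0 \geq \pi/2$, so $L_{\z,\tau} \geq 0$ pointwise (compare part~\ref{part:theta-big} of Lemma~\ref{lem:sign-change} below).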
\subsection{A few preparations} To prove Lemmas~\ref{lem:mainLogBound} and \ref{lem:mainLogBound2} we need a few useful results about the family of functions $L_{\theta,\tau}$ first: \begin{obs} \label{obs:L-stuff} Let $\theta\in [-\pi,\pi]$ and $\tau \in (0,\pi)$. \begin{enumerate} \item \label{obs:symmetry} For $t>0$, we have $L_{\theta,\tau}(t) = L_{\theta,\tau}(1/t)$; \item \label{obs:decay} $|L_{\theta,\tau}(t)| = O(1/t)$ as $t\rightarrow \infty$ and $|L_{\theta,\tau}(t)| = O(t)$ as $t \rightarrow 0$. \end{enumerate} \end{obs} We now use the symmetry of $L_{\theta,\tau}(t)$ to obtain an estimate for a truncated version of the integral featured in Lemma~\ref{lem:whole-log-int}. \begin{lemma}\label{lem:int-0-to-1} For $\tau\in (0,\pi)$, $\tau < |\theta| \leq \pi$ and $s \in (0,1)$, we have that \[ \int_0^{1} L_{\theta,\tau}(t)t^{-(s+1)}\, dt = \tau^2/2 + o_{s\rightarrow 0}(1). \] \end{lemma} \begin{proof} Write $L(t) = L_{\theta,\tau}(t)$ and set \[ I_0 := \int_0^1 L(t) t^{-(s+1)}\, dt \quad\text{ and }\quad I_{\infty} := \int_1^{\infty} L(t) t^{-(s+1)} \, dt. \] Applying Lemma~\ref{lem:whole-log-int} to $\z = e^{-i\theta}$, we have \begin{equation} \label{eq:sum-I0-Iinfty} I_0 + I_{\infty} = \int_0^{\infty} L(t) t^{-(s+1)} \, dt = \tau^2 + o_{s\rightarrow 0}(1). \end{equation} Now note that by changing variables $t = 1/x$ and using the symmetry $L(1/x) = L(x)$ (Observation~\ref{obs:symmetry}), we have \[ I_0 = \int_1^{\infty} \frac{L(x)}{x^{1+s}} x^{2s} \, dx \] and thus we can see that $I_0 \approx I_{\infty}$ for small $s$. Indeed, using the fact that $L(t) = O(1/t)$ (Observation~\ref{obs:decay}) we have \begin{equation} \label{eq:diff-bound} |I_0 - I_{\infty}| = \left| \int_1^{\infty} \frac{L(t)}{t^{s+1}}(t^{2s}-1) \, dt \right| \leq C \int_{1}^{\infty} \frac{1}{t^{s+2}}(t^{2s}-1)\, dt, \end{equation} which tends to $0$ as $s \rightarrow 0$. To finish, note \eqref{eq:sum-I0-Iinfty} tells us that $I_0 + I_{\infty} \approx \tau^2$. 
Rearranging this gives \[ |2I_0 - \tau^2| \leq |I_0- I_{\infty}| + o_{s\rightarrow 0}(1) = o_{s\rightarrow 0}(1), \] as desired. \end{proof} \vspace{4mm} The following elementary lemmas will allow us to throw away parts of the integral that are negative. First we note that $L_{\theta,\tau}$ undergoes at most one sign-change on $[1,\infty)$. When working with the sign-change $\lambda(\theta,\tau)$, which we define in the following lemma, we subscribe to the convention that $1/\infty = 0$. \begin{lemma} \label{lem:sign-change} Let $\tau\in (0,\pi)$. \begin{enumerate} \item \label{part:theta-small} If $\theta \in (\tau,\pi/2)$, there exists a sign-change $\lambda = \lambda(\theta,\tau) > 1$ for which $L_{\theta,\tau}(t) \geq 0$ for $t \in [1,\lambda]$ and $L_{\theta,\tau}(t) \leq 0$, for $t \geq \lambda$; \item \label{part:theta-big} If $\theta \in [\pi/2,\pi)$ then $L_{\theta,\tau}(t) \geq 0$ for all $t > 0$. In this case, we define $\lambda(\theta,\tau) := +\infty$. \end{enumerate} \end{lemma} We now use this sign-change to lower bound the contribution of the integral near $1$ in the case of $|\theta| \geq \pi/4$: \begin{lemma}\label{lem:small-log-bound} For $\tau \in (0,\pi)$ and $\theta$ satisfying $\pi/4 < |\theta| \leq \pi$, let $\varepsilon \in (0,1/2)$ be such that $\lambda^{-1} \leq 1-\varepsilon$, where $\lambda = \lambda(\theta,\tau)$ is the sign change of $L_{\theta,\tau}$. Then \[ \int_{1-\varepsilon}^1 L_{\theta,\tau}(t)t^{-(s+1)} \, dt \geq \frac{\varepsilon \tau^2 }{2^{9}}(1 + o_{\tau \rightarrow 0}(1)). \] \end{lemma} As the proofs of Lemmas~\ref{lem:sign-change} and~\ref{lem:small-log-bound} are somewhat tedious, we postpone them to Appendix~\ref{sec:trunc-appendix}. \subsection{Proofs of Lemmas~\ref{lem:mainLogBound} and ~\ref{lem:mainLogBound2}} We now use Lemmas~\ref{lem:int-0-to-1}, \ref{lem:sign-change} and \ref{lem:small-log-bound} to prove Lemma~\ref{lem:mainLogBound}, the main result of this section.
\vspace{4mm} \begin{proof}[Proof of Lemma~\ref{lem:mainLogBound}] Let $\tau \in (0,\pi)$ and let $\z = \rho e^{i\theta}$ for $\rho >0$ and $\theta$ satisfying $\tau < |\theta| \leq \pi$. We first assume $\rho \geq 1$ and note that \begin{equation} \label{eq:trunc-change-of-var} \int_{1/M}^M L_{\z,\tau}(t) t^{-(s+1)}\, dt = \rho^{-s} \int_{1/(M\rho)}^{M/\rho} L_{\theta,\tau}(x) x^{-(s+1)}\, dx,\end{equation} by the change of variables $x = t/\rho$. We put $a := 1/(M\rho)$, $b := M/\rho$, $L(t) := L_{\theta,\tau}(t)$ and let $\lambda := \lambda(\theta,\tau)$ be the sign-change from Lemma~\ref{lem:sign-change}. We proceed in two different cases depending on whether $b \geq \lambda$ or $b < \lambda$. \noindent \textbf{Case 1} : Assume $b \geq \lambda$. Then $L(t) \leq 0$ for all $t \geq b$ and since \[ a = 1/(M\rho) \leq \rho/M = 1/b \leq 1/\lambda\] we have that $L(t) \leq 0$ for all $0 \leq t \leq a$. As a result, we have \[ \int_{a}^{b} L(t) t^{-(s+1)}\, dt \geq \int_{0}^{\infty} L(t) t^{-(s+1)}\, dt \geq \tau^2/2, \] where the last inequality holds by \eqref{eq:L-whole-int}, in Lemma~\ref{lem:whole-log-int}, for sufficiently small $s$. \noindent \textbf{Case 2} : Assume now that we have $1 \leq b \leq \lambda$. In this case we have that $L(t) \geq 0$ for all $t \in [1,b]$. We now break into two further cases. If $a \leq \lambda^{-1}$, then for sufficiently small $s$ we have \begin{equation} \label{eq:using-0-1-int} \int_{a}^{b} L(t) t^{-(s+1)} \, dt \geq \int_{0}^{b} L(t) t^{-(s+1)} \, dt \geq \int_{0}^{1} L(t) t^{-(s+1)} \, dt \geq \tau^2/4,\end{equation} where the second inequality follows from the fact that $\lambda \geq b \geq 1$. In the final case we have $a \geq \lambda^{-1}$ and hence $L(t) \geq 0$ for all $t \in [a,b]$, thus we have that \[ \int_{a}^{b} L(t) t^{-(s+1)} \, dt \geq 0.\] In the case that $|\theta| \geq \pi/4$, we have to additionally show \eqref{eq:trunc-int}.
Since $M \geq 1+\varepsilon$, we have $a \leq (1+\varepsilon)^{-1} \leq 1 - \varepsilon + \varepsilon^2 \leq 1-\varepsilon/2$, since $0 <\varepsilon < 1/2$. Thus \[ \int_{a}^{b} L(t) t^{-(s+1)} \, dt \geq \int^1_{1-\varepsilon/2} L(t) t^{-(s+1)} \, dt \geq\frac{\varepsilon \tau^2 }{2^{10}}(1 + o_{\tau \rightarrow 0}(1)), \] by using Lemma~\ref{lem:small-log-bound}. This completes the proof when $\rho \geq 1$. The case $\rho \leq 1$ is symmetric. \end{proof} The proof of Lemma~\ref{lem:mainLogBound2} is a simple application of Observation~\ref{obs:decay} and Lemma~\ref{lem:whole-log-int}. \begin{proof}[Proof of Lemma~\ref{lem:mainLogBound2}] Let $s\in (0,1)$, which we will take to be sufficiently small and write $\zeta = \rho e^{i\theta}$. Then we have \[ \int_{1/M}^M L_{\z,\tau}(t) t^{-(s+1)}\, dt = \rho^{-s}\int_{1/(M\rho)}^{M/\rho} L_{-\theta,\tau}(t) t^{-(s+1)}\, dt. \] Put $L(t) := L_{-\theta,\tau}(t)$ and define $I_1 := \int_0^{1/(M\rho)} L(t) t^{-(s+1)}\,dt$ and $I_2 :=\int_{M/\rho}^{\infty} L(t) t^{-(s+1)}\,dt$ so that \[ \int_{1/(M \rho)}^{M/\rho} L(t) t^{-(s+1)}\, dt = \int_0^{\infty} L(t)t^{-(s+1)}\, dt - I_1 - I_2 = \tau^2 - I_1 - I_2 + o_{s \rightarrow 0}(1).\] We now use that $L(t) = O(t)$, for $t$ small, and the fact that $1/(M\rho) \leq \alpha$ to bound \[ |I_1| = \left|\int_0^{1/(M\rho)} L(t)t^{-(s+1)}\, dt\right| \leq C\int_0^{1/(\rho M)} t^{-s}\, dt = O\left({\alpha^{1-s}} \right). \] Similarly, since $|L(t)| = O(1/t)$ for $t$ large, and $\rho/M \leq \alpha $, we have \[ |I_2| = \left|\int_{M/\rho}^{\infty} L(t)t^{-(s+1)}\, dt\right| \leq C \int_{M/\rho}^{\infty}t^{-(2+s)} \, dt = O\left(\alpha^{1+s}\right). \] Thus, for $s >0$ sufficiently small, the result follows from Lemma \ref{lem:whole-log-int}. \end{proof} \section{The Proof of Theorem~\ref{thm:Var-lower-bound} } \label{sec:proof} We now arrive, at last, at the proof of Theorem~\ref{thm:Var-lower-bound}. As we mentioned in the introduction, we actually prove a slightly sharper result.
\begin{theorem}\label{thm:Var-lower-bound-sharper} Let $X \in \{0,\ldots,n\}$ be a random variable with $p_0p_n >0$ and with probability generating function $f_X$ and let $\varepsilon \in (0,1), \delta >0, R \geq 1 +\varepsilon$. If the zeros $\z$ of $f_X$ satisfy $ |\arg(\z)| \geq \delta$ and $R^{-1} \leq |\z| \leq R$ then \[ \mathrm{Var}(X) \geq c \max\{\varepsilon,\delta\} \cdot \delta^{-1} R^{-2\pi/\delta} n,\] where $c>0$ is an absolute constant. \end{theorem} One last ingredient is the classical theorem of Obrechkoff \cite{obrechkoff} on the radial distribution of the set of zeros of a polynomial with non-negative coefficients. \begin{theorem} \label{thm:Os-thm} Let $f(z)$ be a polynomial with non-negative coefficients for which $f(0) \not= 0$. Then \[ \left| \left\lbrace \z\in \mathbb{C} : f(\z) = 0, |\arg(\z)| \leq \alpha \right\rbrace \right| \leq \frac{2\alpha}{\pi}\deg(f). \] \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:Var-lower-bound-sharper}] Let $u = u_X$ be the logarithmic potential of $X$. From the discussion in Section~\ref{sec:definitions}, we know $u$ to be weakly-positive, symmetric and harmonic in \[ S(\delta) = \{ z : |\arg(z)| \leq \delta \}\,.\] We also know that $u$ has logarithmic growth. Note that we may assume that $n \geq 1$, as the case $n=0$ leaves us with nothing to prove. Now, looking to apply Lemma~\ref{lem:Var-and-phi}, we note that $\mathrm{Var}(X) >0$, since $\mathbb{P}(X=0)\mathbb{P}(X=n)>0$. Choose $\gamma =\delta/2^{7}$ and apply Lemma~\ref{lem:Var-and-phi} to carry out our first step and obtain \begin{equation}\label{eq:final1} \mathrm{Var}(X) \geq c_1\gamma^{-2}\varphi_{\gamma}(1). \end{equation} We now introduce a parameter $M = \alpha^{-1}R$ for $\alpha \in (0,1)$ to be chosen later. 
Since $u$ is a weakly positive harmonic function on $S(\delta)$ with logarithmic growth, we may apply Lemma~\ref{lem:phi-at-least-H_r} with parameter $M$, $\tau \in (\delta/2,\delta)$ and $\eta = 2^{-7}$ to obtain \begin{equation}\label{eq:final2} \varphi_{\gamma}(1) \geq \frac{c_2 H_{M,\tau}(s)}{\tau M^{2\pi/\tau + s}},\end{equation} for all $s \in (0,1)$. We now use Lemma~\ref{lem:L-form} and \eqref{equ:trunc-geq0} in Lemma~\ref{lem:mainLogBound}, for sufficiently small $s$, to remove all terms with $|\arg(\z)| \leq \pi/4$ from the sum. Indeed, we have \begin{equation}\label{eq:final2.5} H_{M,\tau}(s) = \frac{1}{2}\sum_{\z} \int_{1/M}^M L_{\z,\tau}(t) t^{-(s+1)} \, dt \geq \frac{1}{2}\sum_{\z : |\arg(\z)| \geq \pi/4} \int_{1/M}^M L_{\z,\tau}(t) t^{-(s+1)}\, dt. \end{equation} There are now a few cases to consider. We let $\delta_0 >0$ be a (small) absolute constant (to be chosen later) and split into two cases, depending on whether $\delta > \delta_0$ or not. { \bf Case 1 :} We first consider an easy case, when $\delta > \delta_0$. In this case, we consider $\tau \in (\delta_0, \delta)$ and set $M := \alpha^{-1} R$, where $\alpha \in (0,1)$ is chosen to be a sufficiently small constant so that when we apply Lemma~\ref{lem:mainLogBound2} we get \begin{equation} \label{eq:2.75} \int_{1/M}^M L_{\z,\tau}(t)t^{-(s+1)}\, dt \geq R^{-s}\left( \tau^2/2 + O\left( \alpha^{1-s} \right)\right) \geq R^{-s}\tau^2/4, \end{equation} for sufficiently small $s$. So, with this choice, apply \eqref{eq:2.75} to \eqref{eq:final2.5} to obtain \begin{equation} \label{eq:2.99} H_{M,\tau}(s) \geq R^{-s}\frac{\tau^2}{8} \left(\sum_{\z : |\arg(\z)| \geq \pi/4} 1 \right) \geq \frac{\tau^2n}{32}, \end{equation} for sufficiently small $s>0$, where the second inequality follows from Obrechkoff's theorem, Theorem~\ref{thm:Os-thm}.
Chaining together \eqref{eq:final1} and \eqref{eq:final2} with \eqref{eq:2.99}, we obtain \[ \mathrm{Var}(X) \geq \frac{c_1}{\gamma^2}\varphi_{\gamma}(1) \geq \frac{c_3}{\gamma^2\tau} H_{M,\tau}(s)M^{-2\pi/\tau - s} \geq \alpha^{s}R^{-s}\alpha^{2\pi/\tau}\frac{c_3\tau}{32\gamma^2}R^{-2\pi/\tau}n. \] Thus recalling that $\gamma = \delta/2^{7}$, that $\alpha^{2\pi/\tau} \geq \alpha^{2\pi/\delta_0} >0$ is a constant, and that $\alpha^{s}R^{-s} \rightarrow 1$ as $s \rightarrow 0$, we may take $\tau \uparrow \delta$ to complete the proof. { \bf Case 2 :} In this case we may assume that $\delta$ satisfies $\delta_0 > \delta >0$, where $\delta_0$ is an absolute constant to be chosen later. We choose $M = (1+\delta)R$ and apply \eqref{eq:trunc-int} from Lemma~\ref{lem:mainLogBound} to each term in the sum in \eqref{eq:final2.5} to get \begin{equation} \label{eq:final3} H_{M,\tau}(s) \geq R^{-s}\left(\frac{\varepsilon_0 \tau^2 }{2^{11}}(1 + o_{\tau \rightarrow 0}(1))\right)\left( \sum_{\z : |\arg(\z)| \geq \pi/4} 1 \right) \geq \left(R^{-s}\frac{\varepsilon_0 \tau^2 }{2^{12}}(1 + o_{\tau \rightarrow 0}(1))\right)n,\end{equation} where $\varepsilon_0 = \max\{ \varepsilon, \delta \}$. We may now assume that $\delta_0$ is small enough so that $0 < \tau < \delta_0$ implies that $(1+o_{\tau \rightarrow 0}(1)) \geq 1/2$ in \eqref{eq:final3}. Thus \[H_{M,\tau}(s) \geq R^{-s}\frac{\varepsilon_0 \tau^2}{2^{14}}n \] and \[ \mathrm{Var}(X) \geq \frac{c_1}{\gamma^2}\varphi_{\gamma}(1) \geq \frac{c_2}{\gamma^2\tau} H_{M,\tau}(s)M^{-2\pi/\tau - s} \geq c_3(1+\delta)^{-4\pi/\delta} \varepsilon_0\tau^{-1}R^{-2\pi/\tau - s} n,\] for sufficiently small $s>0$, where we used that $\tau > \delta/2$. Since $(1+\delta)^{-4\pi/\delta}$ is bounded below, the result follows by taking $\tau \uparrow \delta$ and $s \rightarrow 0$. This completes the proof of Theorem~\ref{thm:Var-lower-bound}. \end{proof}
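Obrechkoff's theorem, which supplied the root count above, is easy to exercise numerically: for any polynomial with non-negative coefficients and $f(0) \neq 0$, the proportion of zeros in the sector $|\arg(\z)| \leq \alpha$ never exceeds $2\alpha/\pi$. A small sketch (our own illustration, assuming \texttt{numpy}; the helper name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sector_count(coeffs, alpha):
    """Count the zeros of sum_k coeffs[k] * z**k with |arg| <= alpha,
    and return it together with the Obrechkoff bound (2*alpha/pi)*deg."""
    roots = np.roots(coeffs[::-1])   # np.roots expects highest degree first
    count = int(np.sum(np.abs(np.angle(roots)) <= alpha))
    bound = (2 * alpha / np.pi) * (len(coeffs) - 1)
    return count, bound

for _ in range(50):
    c = rng.random(12) + 0.05        # non-negative coefficients with c[0] != 0
    for alpha in (0.2, 0.8, np.pi / 2):
        count, bound = sector_count(c, alpha)
        assert count <= bound
print("Obrechkoff bound holds on all samples")
```

For instance, $f(z) = 1 + z + z^2 + z^3 + z^4$ has its zeros at the primitive fifth roots of unity, so no zero has $|\arg(\z)| \leq 0.3$, in line with the bound $2(0.3)/\pi \cdot 4 < 1$.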
\section{Introduction} \label{intro} The rapidly growing size of deep neural networks (NNs) presents new challenges in their storage, computation, and power consumption for resource-constrained devices \citep{dean2012large,lecun2015deep}. This makes it crucial to compress and efficiently store the NN parameters. The most commonly used approach is to separate the problem of model compression from physical storage. Reliable digital storage media, fortified by error correcting codes (ECCs), provide error-free storage to users -- this allows researchers to develop model compression techniques independently from the precise characteristics of the devices used to store the compressed weights \citep{model_comp_old_survey, model_comp_new_survey}. Meanwhile, memory designers strive to create efficient storage by hiding physical details from users. Although the decoupled approach enables isolated investigation of model compression and simplifies the problem, it misses the opportunity to exploit the full capabilities of the storage device. With no context from data, memory systems dedicate the same amount of resources to each bit of stored information. To address this shortcoming, we investigate the joint optimization of NN model compression and physical storage -- specifically, we perform model compression with the additional knowledge of the memory's physical characteristics. This allows us to dedicate more resources to important bits of data, while relaxing the resources on less valuable bits. Our sole objective is to maximize the storage \emph{density} for a given NN model. Memory density is of great importance in resource-constrained devices such as mobile phones or augmented reality (AR) glasses. When comparing different memory technologies, density is measured as number of bits stored per silicon area. In this paper, we focus on one memory technology, namely, phase-change memory (PCM), as it is emerging as a compelling alternative to conventional memories. 
Therefore, density can be simply measured by counting memory cells, and our objective reduces to minimizing the number of memory cells used to store the given model. To achieve maximum density, we consider the extreme case of analog storage. Recent studies have demonstrated the promise of end-to-end analog memory systems for storing analog data, such as NN weights, as they have the potential to reach higher storage capacities than digital systems with a significantly lower coding complexity \citep{zarcone2018joint,zheng2018error, PMID:32747716}. However, analog devices are noisy. This presents several key challenges for the compression task. First, the noise characteristic of such memories is a non-linear function of the input value written onto the cell. Second, slight perturbations of the NN weights from the memory cell may cause the network's performance to plummet \citep{achille2019information}, which is not accounted for in most NN compression techniques. Motivated by the above challenges, we draw inspiration from classical information theory and coding theory to develop a framework for encoding and decoding NN parameters to be stored on analog devices \citep{shannon2001mathematical}. In particular, our method: (i) leverages existing compression techniques such as pruning \citep{guo2016dynamic,frankle2018lottery} and knowledge distillation (KD) \citep{distillation,polino2018model,xie2020self} to learn a compressed representation of the NN weights; and (ii) utilizes various coding strategies to ensure robustness of the compressed network against storage noise. To the best of our knowledge, this is the first work on NN compression for analog storage devices with \textit{realistic} noise characteristics, unlike previous work that only investigates white Gaussian noise \citep{upadhyayaerror,zhou2020noisy, fagbohungbe2020benchmarking}.
In particular, our contributions are: \begin{enumerate} \item We develop a variety of strategies to mitigate the effect of noise and preserve the NN performance on PCM storage devices. \item We present methods to combine these strategies with existing model compression techniques. \end{enumerate} Empirically, we evaluate the efficacy of our methods on MNIST, CIFAR-10 and ImageNet datasets and show that storage density can be increased $16$-fold on average compared to conventional error-free digital storage. \section{Preliminaries} \label{prelims} \subsection{Problem Statement and Notation} We consider a supervised learning problem where $x \in \mathcal{X} \subseteq \mathbb{R}^d$ is the input variable, and $y \in \mathcal{Y} = \{1, \ldots, k\}$ is the corresponding label. We use capital letters to denote random variables, e.g., $X$ and $Y$. We assume access to samples $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^n$ drawn from an underlying (unknown) joint distribution $p_{\rm{data}}(x,y)$, which are used to train a NN predictor $f_w: \mathcal{X} \rightarrow \mathcal{Y}$. This network, parameterized by the weights $w \in \mathcal{W}$ to be compressed and stored on analog storage devices (where $\mathcal{W}$ denotes the parameter space of NNs), indexes a conditional distribution $p_w(y|x)$ over the labels $y$ given the inputs $x$. The NN weights $w$ will be exposed to some input-dependent device noise $\epsilon(w)$ when written to the analog storage device, yielding noisy versions of the weights $\hat{w}=w+ \epsilon(w)$. In our experiments, we find that PCM noise dramatically hurts classification performance -- as a concrete example, the test accuracy of a ResNet-18 model trained on CIFAR-10 drops from 95.10\% (using $w$) to 10\% (using $\hat{w}$) after the weights are corrupted (via naive storage on analog PCM).
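The corruption model $\hat{w} = w + \epsilon(w)$ can be sketched with a toy input-dependent noise profile (a minimal illustration of ours; the profile $\sigma(w)$ below is an assumption for exposition, not the measured PCM characteristic, which also has an input-dependent mean shift):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_store(w, rng):
    # hat{w} = w + eps(w): zero-mean noise whose spread grows with |w|
    sigma = 0.02 + 0.10 * np.abs(w)   # assumed noise profile, for illustration only
    return w + rng.normal(0.0, sigma)

w = rng.uniform(-1.0, 1.0, size=50_000)   # stand-in for a flattened weight vector
w_hat = noisy_store(w, rng)
print("typical perturbation:", np.std(w_hat - w))
```

As the text notes, perturbations on this scale can already make a trained network's accuracy collapse, which motivates the reconstruction maps $g(\cdot)$ studied next.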
Thus to preserve NN performance even after it is stored on the analog device, we explore various strategies $g: \mathcal{W} \rightarrow \mathcal{W}$ for designing \textit{reconstructions} of the perturbed weights $g(\hat{w})$ such that the resulting distribution over the output labels $p_{g(\hat{w})}(y|x)$ is close to that of the original network $p_w(y|x)$. We note that this notion of ``closeness'' between the original weights $w$ and the reconstructed weights $g(\hat{w})$ has several interpretations. In Section~\ref{sparsity_driven}, we explore hand-designed methods to minimize the distance between $w$ and $g(\hat{w})$ in Euclidean space: $\min_{g} \left \Vert g(\hat{w})-w \right \Vert_2$. In Sections~\ref{sensitivity_driven}-\ref{ae}, we study ways to minimize the Kullback-Leibler (KL) divergence between these two output distributions: \begin{align} \begin{aligned} \min_{g} \mathbb{E} _{x \sim p_{\textrm{data}}(x)} [D_{KL}(p_w(y|x) || p_{g(\hat{w})}(y|x))] \end{aligned} \label{kl} \end{align} where we \textit{learn} the appropriate transformation $g(\cdot)$. \subsection{Phase Change Memory (PCM)} Phase-change memory (PCM) is an emerging memory technology that has the ability to store continuous values \citep{wong2010phase}. This makes PCM a compelling choice for future wearable devices. In particular, NN model parameters can be stored in high-density PCM arrays, thus reducing the total footprint of on-device memories. As mentioned previously, we only consider PCMs as the main storage for NNs, and aim to reduce the total number of memory cells used for storing a full model. To simulate a realistic storage, we use the PCM model derived from actual measurements collected from experiments on a PCM array. Details about the measurements are given in Appendix~\ref{PCM_app}. Figure~\ref{fig:channel} illustrates the PCM model as a \emph{noisy channel}. Each point in Figure~\ref{fig:channel} corresponds to a possible read (output) value for a given write (input) value.
For simplicity, we linearly map both the inputs and outputs to be within range $[-1, 1]$ to satisfy power constraints of the device. \begin{figure}[h] \centering \includegraphics[width=.27\textwidth]{figures/Channel_char_crop.jpg} \caption{Characteristics of the channel noise in a PCM cell, which is dependent on the input value. Each point corresponds to a possible cell output value for a given input.} \label{fig:channel} \end{figure} \begin{figure*}[t!] \centering % \subfigure[1 memory cell]{\includegraphics[width=.25\textwidth]{figures/f_channel_1_dev.jpg}} \subfigure[10 memory cells]{\includegraphics[width=.25\textwidth]{figures/f_channel_10_dev.jpg}} \subfigure[100 memory cells]{\includegraphics[width=.25\textwidth]{figures/f_channel_100_dev.jpg}} \caption{Behavior of $\phi=C \circ h$ (i.e., inverted channel) when outputs are read from an average over 1, 10, 100 cells.}\label{fig:channel_inv} \end{figure*} Our realistic PCM model has different noise mean and variance for each input. Using this model is an improvement over prior work that only investigates white Gaussian noise \citep{upadhyayaerror,zhou2020noisy, fagbohungbe2020benchmarking} when storing NN models on noisy storage. Conventionally, memory cells are used to store error-free digital data by allowing enough noise margin between written values. In the case of the model shown in Figure~\ref{fig:channel}, a maximum of 2 bits of data can be stored in each PCM cell. We set this as our baseline: a decoupled approach first does model compression, and then stores the compressed data on a PCM array at a rate of 2 bits per cell. For example, 1KB of data can be stored reliably on 4,096 PCM cells. Our method considers using the same PCM cells as analog storage devices. We assume that the read/write circuitry of the PCM has a precision equivalent to that of a single-precision (32-bit) floating point number.
This means that a floating point number can be directly written to a memory cell, and the data read back is also a 32-bit float, albeit with the added PCM noise. This simplification allows us to only focus on maximizing memory density, without worrying about read/write circuit complexity. We also assume that random access to model weights is not required, so there will be no ``memory packing'' problem when storing weights of variable sizes (i.e., different number of cells). Note that there exists a middle ground between the error-free 2 bit per cell case and the analog storage case. For instance, one can use a PCM cell to store 3 bits of data with some noise. We do not investigate this middle ground and only focus on the extreme case of analog storage, which provides a possible bound for the more practical middle ground. \section{Methodology} \subsection{Robust Coding Strategies} \label{robust_code} In this section, we devise several novel coding strategies for $g(\cdot)$ that can be applied to the model post-training to mitigate the effect of perturbations in the weights. For each strategy, we require a pre-mapping process to remove the non-linearity in the mean of the PCM response in Figure~\ref{fig:channel}. \textbf{Inverting the Channel:} The mean function of the PCM noise model $\mu: \mathcal{X} \rightarrow \mathbb{R}$ can be learned via a k-nearest neighbor classifier on the channel response measurements as shown in Figure~\ref{fig:channel}. We learn an inverse function $h = \mu^{-1}$ to remove the non-linearity in the mean, since $\phi = C \circ h$ is an identity function with zero-mean noise (Figure~\ref{fig:channel_inv}) where $C$ represents the PCM channel, i.e., $\phi(x) = x + \epsilon_0(x)$ where $\epsilon_0(x)$ is a zero-mean noise with input-dependent standard deviation $\sigma(x)$ due to the PCM channel.
Thus the relationship between the input weights $w_{in}$ to be stored and the output weights $w_{out}$ to be read is: \begin{align*} w_{out} = \frac{\phi(\alpha \cdot w_{in} - \beta) +\beta} {\alpha} \end{align*} where $\alpha$ and $\beta$ are scale and shift factors, respectively. Since the noise is zero-mean, we have: \begin{align*} w_{out} = \frac{\alpha \cdot w_{in} - \beta +\epsilon_0(w_{in}) +\beta} {\alpha}= w_{in}+\frac{\epsilon_0(w_{in})}{\alpha} \end{align*} If we use the noisy channel $\phi(\cdot)$ $r$ times (i.e., store the same weight on $r$ ``independent'' cells and average over the outputs), the relationship between $w_{in}$ and $w_{out}$ becomes: \begin{align*} w_{out}= w_{in} + \hat{\epsilon}(w_{in},r,\alpha) \end{align*} where the standard deviation of $\hat{\epsilon}(w_{in},r, \alpha)$ is $\frac{\sigma(w_{in})}{\alpha \cdot \sqrt{r}}$ (see Appendix~\ref{noise} for the derivation). This provides us with two tools to protect the NN weights against the zero-mean noise in $\phi(\cdot)$: \begin{enumerate}[label={}] \item \textbf{(Method \#1)} Increase the number of channel uses $r$ (increase the number of PCM cells per weight). \item \textbf{(Method \#2)} Increase the scale factor $\alpha$ under the condition that $\alpha \cdot w_{in}$ satisfies the cell input range limitation ($|\alpha \cdot w_{in}-\beta| \leq 1$). \end{enumerate} For the first method, we observe in Figure~\ref{fig:channel_inv} that the response becomes less noisy as we increase the number of PCM cells used, $r$. However, we desire to keep $r$ at a minimum for efficient storage. The second method allows us to make use of the full analog range by scaling the weights. This is particularly useful when storing weights with small magnitude.
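A quick Monte Carlo sketch of both tools, assuming an already-inverted channel with a constant (hypothetical) noise level $\sigma = 0.05$ rather than the true input-dependent $\sigma(w_{in})$:

```python
import numpy as np

SIGMA = 0.05   # hypothetical stand-in for the input-dependent std sigma(x)

def store_and_read(w_in, r, alpha, beta=0.0, rng=None):
    """Store one weight on r cells through the inverted channel phi and
    average the reads; phi is modeled as identity plus N(0, SIGMA^2) noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = alpha * w_in - beta                      # scale/shift into cell range
    reads = x + rng.normal(0.0, SIGMA, size=r)   # r independent noisy cells
    return (reads.mean() + beta) / alpha         # average, unshift, unscale

rng = np.random.default_rng(0)
outs = np.array([store_and_read(0.1, r=10, alpha=5.0, rng=rng)
                 for _ in range(20000)])
# Effective noise std should match sigma / (alpha * sqrt(r)):
empirical, predicted = outs.std(), SIGMA / (5.0 * np.sqrt(10))
```

Both $r$ and $\alpha$ shrink the effective noise, but only $r$ costs extra cells.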
But we cannot increase $\alpha$ without bound because of the device constraint, that is we need to satisfy $-1 \leq\alpha \cdot w_{in} -\beta\leq 1$ (in practice, this constraint is $-0.65 \leq\alpha \cdot w_{in} -\beta \leq 0.75$ since the remaining portion of the response is non-invertible, and therefore unusable for our purposes). Such limitations leave more to be desired for a general-purpose robust coding strategy for NN weights. \subsubsection{Sparsity-Driven Protection} \label{sparsity_driven} \setlength{\tabcolsep}{3pt} \begin{table*}[!t] \centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{clccccccccccc} \toprule & & 2056 cells & 512 cells & 64 cells & 32 cells & 16 cells & 8 cells & 4 cells & 3 cells & 2 cells & 1 cell & Additional Bits\\ \midrule &No Protection & 95.1 & 94.2 & 27.1 & 9.0 & 10.2 & 10.2 & 9.9 & 9.6 & 9.8 & 10.3 & 0\\ &SP &95.1 & 95.0 & 94.2 & 94.0 & 92.8 & 89.5 & 67.0 & 41.9 & 11.8 & 9.8 & 1\\ ResNet-18&AM+AR & 95.0 & 95.0 & 94.8 & 94.7 & 94.4 & 93.7 & 93.1 & 92.7 & 89.2 & 58.0 & 1\\ CIFAR-10&SP+AM &95.1 & 95.1 & 95.0 & 95.0 & 94.8 & 94.7 & 94.6 & 93.9 & 93.2 & 90.6 & 2\\ &SP+AM+AR &95.1 & 95.1 & 95.0 &\textbf{95.0} & \textbf{95.0} & \textbf{95.0} & \textbf{95.0} & \textbf{95.1} & \textbf{94.8} & \textbf{94.4} & 2\\ &SP+AM+AR+Sens. & 95.1 & 95.1 & 95.1 & \textbf{95.1} & \textbf{95.1} & \textbf{95.1} & \textbf{95.0} & \textbf{95.1} & \textbf{94.8} & \textbf{95.0} & 3\\ \midrule &No Protection & 67.6 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0\\ &SP & 75.6 & 0.044 & 0.009 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 1\\ ResNet-50&AM+AR & 76.0 & 75.0 & 66.7 & 66.4 & 63.2 & 57.5 & 34.1 & 22.4 & 0.037 & 0.001 & 1\\ ImageNet&SP+AM & 76.1 & 70.6 & 69.4 & 55.1 & 51.3 & 34.6 & 28.0 & 17.6 & 0.046 & 0.002 & 2\\ &SP+AM+AR & \textbf{76.6} & \textbf{75.4} & \textbf{74.5} & \textbf{74.2} & \textbf{73.8} & \textbf{73.8} & \textbf{72.8} & \textbf{72.2} & \textbf{70.2} & \textbf{66.0} & 2\\ &SP+AM+AR+Sens. 
& \textbf{76.6} & \textbf{75.8} & \textbf{75.6} & \textbf{75.3} & \textbf{74.8} & \textbf{74.9} & \textbf{74.5} & \textbf{73.7} & \textbf{73.0} & \textbf{68.8} & 3 \\ \bottomrule \\ \end{tabular} } \caption{Accuracy of ResNet-18 on CIFAR-10 and ResNet-50 on ImageNet when weights are perturbed by the PCM cells. Baseline accuracy (without noise) is $95.1 \%$ for CIFAR-10 and $76.6 \%$ for ImageNet. SP: sign protection, AM: adaptive mapping, AR: sparsity-driven adaptive redundancy, Sens.: sensitivity-driven adaptive redundancy. Results are averaged over three runs. Higher is better. } \label{tab:cifar_sparsity} \end{table*} Next, we exploit the observation that the weights of a fully-trained NN tend to be sparse (see Appendix~\ref{histogram_app}). Table~\ref{tab:cifar_sparsity} shows that the test accuracy of a ResNet-18 model on CIFAR-10 with weights corrupted by PCM noise drops from $95.10 \%$ to $10 \%$ (random prediction for CIFAR-10) when fewer than $64$ memory cells are used per weight. If PCM is used as digital storage, 16 PCM cells per weight would be enough, since PCM has a capacity of $2$ bits and one weight is a 32-bit float. Our goal is to preserve classification accuracy by using fewer than $16$ cells per weight to outperform digital storage. The (SP+AM+AR) row of Table~\ref{tab:cifar_sparsity} (the fifth row) shows that we achieve $95.10 \%$ accuracy with $3$ cells (and $94.44 \%$ even with $1$ cell) per weight using sparsity-driven protection; on the other hand, without sparsity-driven protection the NN performance is compromised even when using $512$ cells per weight, with an accuracy of $94.20 \%$ (see the No Protection row) -- more than a $512\times$ reduction in the required amount of storage. Figure~\ref{fig:comparison_imagenet} demonstrates the effectiveness of our approach in preserving NN performance on PCM (see Figure~\ref{fig:comparison_cifar_app} in the Appendix for CIFAR-10).
\textbf{Sign Protection:} When scaling the weights by $\alpha$ to fit them in the range $[-0.65, 0.75]$ of $\phi$ (Figure~\ref{fig:channel_inv}), small weights are mapped to values very close to zero. This is problematic because a majority of the trained NN weights have small magnitudes, and thus the NN with reconstructed weights will suffer a severe drop in performance due to sign errors (Appendix~\ref{histogram_app}). Therefore, we store the sign and magnitude of the weights separately. The sign can be represented by 1 bit, and 2 sign bits can be stored using 1 cell. When we store magnitudes instead of actual weights, we can use an $\alpha$ that is twice as large, reducing the variance of the noise via Method \#2. The (No Protection) and (SP) rows of Table~\ref{tab:cifar_sparsity} illustrate the effect of sign protection: the required memory to achieve a test accuracy of $94.1 \pm 0.1 \%$ on CIFAR-10 is reduced by a factor of $16$ (from $512$ cells to $32$ cells). \textbf{Adaptive Mapping:} Although protecting the sign bit leads to accuracy gains on CIFAR-10, we still require more than $16$ cells per weight to achieve the original accuracy without PCM noise. To further improve the efficiency, we use the observation that the majority of nonzero weights after pruning are quite small (see Appendix~\ref{histogram_app}). This implies that using different values of $\alpha$ depending on the magnitude of the weight (a larger scale factor $\alpha$ for small weights) can reduce the overall variance of the cell's noise via Method \#2. This requires an extra bit to indicate whether a weight is small or large, since two different $\alpha$'s are used in encoding and decoding. Together with the sign bit that we store separately, we need to store $2$ bits per weight, which can be done using $1$ PCM cell per weight.
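A minimal sketch of the combined sign-protection and adaptive-mapping encoding; the threshold `T` and the two scale factors below are hypothetical, and the real choices depend on the trained weight distribution and the usable cell range:

```python
# Hypothetical parameters: |w| <= T counts as "small"; small weights get a
# larger scale factor so they use more of the analog range (Method #2).
T = 0.05
ALPHA = {True: 10.0, False: 1.0}   # keyed by the "is small" flag

def encode(w):
    """Return (sign bit, small-flag bit, analog magnitude written to a cell)."""
    small = abs(w) <= T
    return (w < 0.0, small, ALPHA[small] * abs(w))

def decode(sign, small, cell_value):
    """Invert the mapping after reading the (possibly noisy) cell value."""
    mag = cell_value / ALPHA[small]
    return -mag if sign else mag

# The round trip is exact without channel noise; with noise, a small weight's
# larger alpha shrinks its effective perturbation by 10x here.
recovered = [decode(*encode(w)) for w in (-0.01, 0.3)]
```

The two flag bits (sign and small/large) are exactly the $2$ bits per weight stored on one extra PCM cell.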
With this strategy, according to the (SP) and (SP+AM) rows of Table~\ref{tab:cifar_sparsity}, we can increase the accuracy from $9.8 / 0.001 \%$ to $90.6 / 51.3 \%$ with an average of $1/16$ cells per weight on CIFAR-10$/$ImageNet, respectively. \textbf{Adaptive Redundancy:} Finally, we propose to vary the number of PCM cells used for larger and smaller weights. The average number of cells that we aim to minimize is: \begin{align*} r_{avg} = \frac{r_{small} \times N_{small} +r_{large} \times N_{large}}{N_{small}+N_{large}} \end{align*} where $N_{small}$ and $N_{large}$ are the numbers of small and large weights; $r_{small}$ and $r_{large}$ are the numbers of cells used per weight for small and large weights, respectively. Using more cells for larger weights (which are more critical for NN performance) increases the accuracy without increasing the average redundancy too much (Method \#1), since most weights are small, e.g., $N_{small}=11,168,312$ and $N_{large}=5,650$ in ResNet-18 trained on CIFAR-10. Sign protection and adaptive mapping mostly help protect the small weights, while adaptive redundancy protects large weights by reserving more resources for them. As shown in the (SP+AM) and (SP+AM+AR) rows of Table~\ref{tab:cifar_sparsity}, combining these strategies increases the accuracy by $4\% / 66 \%$ on CIFAR-10$/$ImageNet for the 1 cell per weight case without requiring any additional bits. \subsubsection{Sensitivity-Driven Protection} \label{sensitivity_driven} While we have demonstrated the success of sparsity-driven protection strategies for $g(\cdot)$ even with $1$ cell per weight for ResNet-18 on CIFAR-10, the method falls short for more complex datasets. For ImageNet, the accuracy of ResNet-50 with sparsity-driven protection and $1$ PCM cell per weight is $10.6 \%$ lower than the original accuracy without PCM noise ($76.6 \%$) (see Table~\ref{tab:cifar_sparsity}). To close this gap, we study Eq.~\ref{kl} in more detail.
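Plugging the ResNet-18/CIFAR-10 counts quoted in the text into the $r_{avg}$ formula shows why the overhead of adaptive redundancy is tiny; the choice of $16$ cells for large weights is an illustrative assumption:

```python
def avg_cells_per_weight(r_small, r_large, n_small, n_large):
    """Average number of PCM cells per weight under adaptive redundancy."""
    return (r_small * n_small + r_large * n_large) / (n_small + n_large)

# 1 cell for each small weight, a hypothetical 16 cells for each large weight:
r_avg = avg_cells_per_weight(1, 16, n_small=11_168_312, n_large=5_650)
# r_avg stays below 1.01 cells per weight on average.
```

Because $N_{large} \ll N_{small}$, even a generous $r_{large}$ barely moves the average.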
Let us define $\delta w$ as the final perturbation on the weights, i.e., $g(\hat{w})=w+\delta w$. We first approximate the KL divergence via a second-order Taylor expansion: \begin{align*} \mathbb{E} _{x \sim p_{\textrm{data}}(x)} [D_{KL}(p_w(y|x) || p_{w+\delta w}(y|x))] \\ \approx \delta w ^T F \delta w + O (||\delta w||^3) \end{align*} where $F$ is the Fisher Information Matrix \citep{martens2014new,grosse2016kronecker}: \begin{align*} F & := \mathbb{E} _{x,y \sim p_{\rm{data}}(x,y)}[\nabla_w \log p_w(y|x) \nabla_w ^T\log p_w(y|x)] \end{align*} Dropping the off-diagonal entries in $F$ yields: \begin{align*} \delta w ^T F \delta w & \approx \mathbb{E} _{x,y \sim p_{\rm{data}}(x,y)} \sum_{j=1}^d \left( \delta w_j \cdot \frac{\partial \log{p_w(y|x)}}{\partial w_j} \right)^2 \\ & = \sum_{j=1}^d \delta w_j^2 \mathbb{E} _{x,y \sim p_{\rm{data}}(x,y)} \left( \frac{\partial \log{p_w(y|x)}}{\partial w_j} \right)^2 \end{align*} where $d$ is the number of weights. Thus the KL divergence between the conditional distributions parameterized by the original ($w$) and perturbed weights ($g(\hat{w}) = w+\delta w$) that we aim to minimize is: \begin{align*} \mathbb{E} _{x \sim p_{\textrm{data}}(x)} [D_{KL}(p_w(y|x) || p_{w+\delta w} (y|x))] \\ \approx \frac{1}{N}\sum_{j=1}^d \delta w_j^2 \sum_{i=1}^N \left( \frac{\partial \log{p_w(y_i|x_i)}}{\partial w_j} \right)^2 \end{align*} where the expectation is approximated with Monte Carlo samples from the empirical data distribution. We note that for each weight $w_j$, we can estimate how much performance degradation a perturbation by $\delta w_j$ can cause by evaluating the term $ \frac{1}{N}\sum_{i=1}^N\left (\frac{\partial \log{p_w(y_i|x_i)}}{\partial w_j}\right )^2$. We call this term ``sensitivity'' and denote it as: $ s_j= \frac{1}{N}\sum_{i=1}^N\left(\frac{\partial \log{p_w(y_i|x_i)}}{\partial w_j}\right)^2$.
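The sensitivity terms $s_j$ can be estimated from per-sample gradients in one pass over the data. A toy sketch for binary logistic regression (a stand-in for the actual networks, where the same per-weight quantity would come from backpropagation):

```python
import numpy as np

def sensitivities(X, y, w):
    """s_j = (1/N) sum_i (d log p_w(y_i|x_i) / d w_j)^2 for a logistic
    model p_w(y=1|x) = sigmoid(w . x); the per-sample gradient of the
    log-likelihood is (y_i - p_i) * x_i."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    per_sample_grads = (y - p)[:, None] * X   # one gradient row per sample
    return (per_sample_grads ** 2).mean(axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(512, 4))
w = np.array([2.0, 0.0, -1.0, 0.0])
y = (rng.random(512) < 1.0 / (1.0 + np.exp(-X @ w))).astype(float)
s = sensitivities(X, y, w)   # one non-negative sensitivity per weight
```

Weights with large $s_j$ would then be assigned more PCM cells.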
For the storage of sensitive weights $w_j$ with large $s_j$, we should use more PCM cells, as slight perturbations of these weights may lead to significant changes in the output distributions (Method \#1). This will not affect the average number of cells per weight ($r_{avg}$) too much because the gradients of a fully trained network are known to be sparse (Appendix~\ref{histogram_app}). In fact, $\frac{\partial}{\partial w_j}\log{p_w(y_i|x_i)}$ is nothing but the Stochastic Gradient Descent (SGD) gradient that is computed at each iteration. Thus ordering the weights based on sensitivity can be done easily with one pass over the whole dataset. In our experiments, we combine sensitivity-driven protection and sparsity-driven protection to vary the number of memory cells per weight according to both magnitude and sensitivity. From the (SP+AM+AR) and (SP+AM+AR+Sens.) rows of Table~\ref{tab:cifar_sparsity}, sensitivity-driven protection provides a $2.8 \%$ improvement in test accuracy using $1$ cell per weight for ResNet-50 trained on ImageNet. Figure~\ref{fig:comparison_imagenet} illustrates that the sparsity-driven and sensitivity-driven protection strategies provide up to $2056$ times more efficient storage of NN models while preserving the accuracy against PCM noise. This makes analog storage for NN weights an appealing alternative to digital storage. We refer the reader to Figure~\ref{fig:comparison_cifar_app} for similar results on CIFAR-10. \subsection{KL Regularization for Robustness} \label{kl_training} The following set of techniques for constructing $g(\cdot)$ is designed to correct the errors that the robust coding strategies in the previous section fail to address. \subsubsection{Robust Training} \label{robust_train} For robust training, we regularize the standard cross-entropy loss with the KL divergence in Eq.~\ref{kl}.
Specifically, the loss function is as follows: \begin{align*} \mathcal{L}(w) &=\mathbb{E} _{x,y \sim p_{\rm{data}}} [-\log p_w(y|x)] \\ &\quad + \lambda \mathbb{E} _x [D_{KL}(p_w(y|x) || p_{g(\hat{w})}(y|x))] \end{align*} where we approximate the true expectation with Monte Carlo samples from the empirical data distribution. We note that the final perturbation $\delta w$ on the weights ($g(\hat{w})=w+\delta w$) can be thought of as noise or as the effect of pruning on $w$. In particular, when we apply pruning, we have: \begin{align*} \delta w_j & = 0 \ &\text{for a non-pruned weight } w_j, \\ \delta w_j & = -w_j \ &\text{for a pruned weight } w_j. \end{align*} In our experiments, we add Gaussian noise as the perturbation $\delta w$ (i.e., $g(\hat{w})=w+\delta w$ is a noisy weight) during robust training, with the hope that the trained network will be more robust to both PCM and Gaussian noise. We observe that the trained network is also more robust to pruning in addition to noise. In magnitude pruning, only the small weights are set to zero, that is, $\delta w_j$ is equal to $0$ (for large weights) or $-w_j$ (for small weights). In other words, $\delta w_j$ is always a small value. We hypothesize that this small perturbation due to pruning can be regarded as noise added onto the weights, and as a result a network trained with our robust strategy is more robust to both noise and pruning. Our experiments on CIFAR-10 and ImageNet verify that robust training has a protective effect against noise where robust coding strategies are not enough. \subsubsection{Robust Distillation} \label{distill} Distillation is a well-explored NN compression technique \citep{distillation}. The idea is to first train a teacher network that is large enough to capture a smooth probability distribution on labels, and then train a smaller student network by distilling the output probability information from the teacher network.
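The regularized objective can be sketched in numpy for a toy linear softmax classifier; the Gaussian perturbation stands in for $g(\hat{w}) = w + \delta w$, and the value of $\lambda$, the noise scale, and the model itself are illustrative choices:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def robust_loss(W, X, y, lam=0.5, noise_std=0.01, rng=None):
    """Cross-entropy plus lam * KL(p_W || p_{W + delta w}) for a linear
    softmax model, with delta w drawn as Gaussian noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    p_clean = softmax(X @ W)
    ce = -np.log(p_clean[np.arange(len(y)), y]).mean()
    W_noisy = W + rng.normal(0.0, noise_std, size=W.shape)   # w + delta w
    p_noisy = softmax(X @ W_noisy)
    kl = (p_clean * (np.log(p_clean) - np.log(p_noisy))).sum(axis=1).mean()
    return ce + lam * kl

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
W = rng.normal(size=(5, 3))
y = rng.integers(0, 3, size=64)
loss = robust_loss(W, X, y, rng=rng)
```

In training, fresh noise would be drawn at every step so the KL term penalizes predictions that are fragile under weight perturbations.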
We utilize distillation to optimize a compressed model (student) to be robust to PCM noise using the following noisy student loss function: \begin{align*} \mathcal{L}_s(w_s) & =(1-\lambda)\mathbb{E} _{x,y \sim p_{\rm{data}}} [-\log p_{\hat{w}_s}(y|x)] \\ &\quad + \lambda \mathbb{E} _x [D_{KL}(p_{w_t}(y|x) || p_{g(\hat{w}_s)}(y|x))] \end{align*} where $\lambda \in [0,1]$ is a scalar, $w_t$ and $w_s$ are the teacher and student weights, and $g(\hat{w}_s)= g(w_s+\epsilon(w_s))=w_s+\delta w_s$, with $\epsilon(w_s)$ being the PCM noise and $\delta w_s$ being the noise injected onto the student network's weights. We note the similarity between the noisy student loss function and the loss function of robust training. Although \cite{zhou2020noisy} provides an initial exploration into robust distillation for noisy analog storage, they specifically model the noise process with white Gaussian noise, whereas we leverage experimental data collected from real storage devices to build a realistic model of the PCM noise. \subsection{End-to-End Learning} \label{ae} Finally, we explore an end-to-end learning approach for $g(\cdot)$ that jointly optimizes over the model compression and the known characteristics of the noisy storage device. We take a probabilistic approach and assume access to a set of weights $\{W_j\}_{j=1}^K$ from $K$ different models that have been trained on the same dataset $\mathcal{D}$ -- this serves as an empirical distribution over NN weights, as different initializations of NNs typically converge to different local minima. Additionally, we assume that each element of the weight matrix satisfies $W_{ij} \sim \mathcal{N}(0,1)$ \citep{zhou2018non,dziugaitetowards}. To learn the compressed representation of the NN weights, we propose to maximize the mutual information (MI) between the original network weight $W$ and the compressed representation of the weight $Z$ that has been corrupted by the noisy channel.
Following \cite{choi2019neural}, our coding scheme can be represented by the following Markov chain: $W \rightarrow \hat{Z} \rightarrow Z \rightarrow \hat{W}$, where $\hat{Z}$ denotes the compressed weight and $Z = \hat{Z} + \epsilon(\hat{Z})$, where $\epsilon(\hat{Z})$ denotes the input-dependent PCM noise. Then: \begin{align*} I(W;Z) &= H(W) - H(W|Z) \\ &\geq H(W) + \mathbb{E}_{q_\phi(Z|W)}[\log p_\theta(W|Z)] \end{align*} where $q_\phi(Z|W)$ and $p_\theta(W|Z)$ denote variational approximations to the true posterior distributions $p(Z|W)$ and $p(W|Z)$, respectively \citep{barber2003algorithm}. Since $H(W)$ does not depend on the coding scheme, maximizing $I(W;Z)$ amounts to maximizing the term $\mathbb{E}_{q_\phi(Z|W)}[\log p_\theta(W|Z)]$. We train an autoencoder $g_{\phi,\theta}(\cdot)$ with a stochastic encoder and decoder parameterized by $\phi,\theta$ to learn this variational posterior via the above lower bound on the true MI. The decoder is trained to output a set of reconstructed weights such that its predictions are close to those of the original network. For the Gaussian channel noise model, $q_\phi(Z|W)$ will be distributed according to a Gaussian distribution and the lower bound will also simplify to a difference of quadratics. Therefore, our final objective function becomes: \begin{equation} \label{ae_loss} \min_{\phi,\theta} \mathbb{E}_{x,w}[D_{KL}(p_w(y|x)||p_{g_{\phi,\theta}(\hat{w})}(y|x)) + (g_{\phi,\theta}(\hat{w}) - w)^2] \end{equation} Our approach differs from KD in that we \textit{learn} the weight compression scheme rather than using a fixed student network architecture. \section{Experimental Results} \label{experiments} In this section, we empirically investigate: (1) whether the aforementioned protection strategies for $g(\cdot)$ help to preserve NN classification accuracy; and (2) the improvements in storage efficiency when using our protection strategies. All experimental results are averaged over 3-5 runs; for conciseness, we report only the average numbers and refer the reader to Appendix~\ref{confidence} for the complete results with confidence intervals.
The hardware setup is described in Appendix~\ref{PCM_app}. For all our experiments, we consider three image datasets: MNIST \citep{lecun2010mnist}, CIFAR-10 \citep{krizhevsky2009learning} and ImageNet \citep{imagenet_cvpr09}. We use the standard train/val/test splits. For the MNIST experiments, we use two architectures: LeNet \citep{lecun1998gradient} and a 3-layer MLP. For CIFAR-10, we use two types of ResNets \citep{he2016deep}: ResNet-18 and a slim version of ResNet-20. For ImageNet, we use a pretrained ResNet-50 from PyTorch \citep{he2016deep}. For additional details on model architectures and hyperparameters, we refer the reader to Appendix~\ref{hyperparams}. \subsection{Sparsity and Sensitivity Driven Protection} \begin{figure}[h] \centering \includegraphics[width=.4\textwidth]{figures/pcm_resnet_figure3_v3.png} \caption{Accuracy of ResNet-50 on ImageNet. SP: sign protection, AM: adaptive mapping, AR: sparsity-driven adaptive redundancy, Sens.: sensitivity-driven adaptive redundancy. Experiments are conducted three times.}\label{fig:comparison_imagenet} \end{figure} We show the effect of sparsity- and sensitivity-driven protection in Figure~\ref{fig:comparison_imagenet} on ResNet-50 for ImageNet (see Figure~\ref{fig:comparison_cifar_app} in the Appendix for CIFAR-10). The exact numerical results are given in Table~\ref{tab:cifar_sparsity}. In the CIFAR-10 experiments, the number of small weights was $N_{small}=11$M, the number of large weights was $N_{large}=5.6$K, and the average number of cells per weight under adaptive redundancy is at most $0.02$ above the numbers listed in the table. Similarly, in the ImageNet experiments, $N_{small}=25.4$M, $N_{large}=15$K, and the difference between the average number of cells per weight and the listed number is always smaller than $0.06$. These margins increase by only $0.01$ when we dedicate more cells to storing sensitive and large weights.
As shown in Table~\ref{tab:cifar_sparsity}, sparsity-driven protection strategies are enough for ResNet-18 on CIFAR-10 to preserve the classification accuracy ($95.10 \%$) with $3$ cells per weight, and with $1$ cell per weight the accuracy is only $0.66 \%$ lower. However, there is a $4 \%$ drop in the classification accuracy of ResNet-50 on ImageNet when fewer than $4$ cells per weight are used and sparsity-driven protection is applied alone. Sensitivity-driven protection is able to reduce this gap by about $3\%$. We report the test accuracy when adaptive mapping and adaptive redundancy are applied without sign protection in the (AM+AR) rows of Table~\ref{tab:cifar_sparsity} to show that sign protection is indeed a critical step. For instance, comparing the (AM+AR) and (SP+AM+AR) rows of Table~\ref{tab:cifar_sparsity}, sign protection increases the accuracy from $58 \%$ (AM+AR) to $94.4 \%$ (SP+AM+AR) on CIFAR-10. We also consider quantization as a way to improve the efficiency of digital storage. For instance, \citep{jacob2018quantization} shows that it is possible to preserve the accuracy with small loss by quantizing the weights to $8$ bits. This corresponds to $4$ PCM cells per weight since PCM has a capacity of $2$ bits. Table~\ref{tab:cifar_sparsity} shows that we outperform digital storage even when weights are quantized to $8$ bits, since the accuracy is preserved with $1$ cell per weight on CIFAR-10 (with $0.1\%$ loss) and with $3$ cells per weight on ImageNet (with $3.6 \%$ loss). We do not compare our results with more aggressive quantization techniques \citep{adaptiveQuant,scalableQuant, oktay2019scalable, DeepCABAC, choi2020universal, QuantNoise} that can achieve higher efficiency in digital storage, since they also add substantial complexity through multiple retraining stages. We also combine sparsity-driven protection with pruning to gain further storage efficiency improvements.
Results on sparsity driven protection strategies on $90 \%$ pruned ResNet-18 (on CIFAR-10) are in Appendix~\ref{pruning_app}. We retrain the pruned model for $20$ epochs to tune the nonzero weights. The accuracy of the pruned model without noise is $94.8 \%$ and we get $94.2 \%$ test accuracy with PCM noise using only $1$ cell per weight. Compared to the baseline (no compression and each $32$-bit weight is digitally stored on $16$ PCM cells), we provide $16 \times 10 = 160$ times more efficient storage with $90\%$ pruning and using $1$ cell per weight. \subsection{Robust Training} We compare robust training with naive training (training with no noise) in Table~\ref{tab:cifar_robust_train} on ResNet-18 (on CIFAR-10) and on ResNet-50 (on ImageNet). In our experiments, we use $\lambda = 0.5$ as the coefficient of the KL divergence term in the loss function. We tried different combinations of the sparsity and sensitivity driven strategies, and in all cases, robust training provides better robustness against PCM noise than the naive training. Robust training provides robustness against pruning as well. When ResNet-18 trained without noise is pruned with $90\%$ sparsity, the accuracy drops from $95.1 \%$ to $90.2 \%$. However, the same model trained with $N(0,0.006)$ gives $95.0 \%$ accuracy after $90\%$ pruning. \setlength{\tabcolsep}{3pt} \begin{table}[h] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{llcccc} \toprule & & Naive Train & Robust Train & Robust Train & Add. 
\\ & &(no noise) & (with N(0, 0.01)) & (with N(0, 0.006)) & Bits \\ \midrule &No Noise & 95.10 & 95.50 & \textbf{95.60} &0\\ &PCM (No Protection) & 9.70 & 8.30 & 9.90 &0\\ &PCM+SP & 9.70 & 10.63 & 10.33 &1\\ CIFAR-10&PCM+AP & 27.69 & \textbf{86.20} & 81.83 &1\\ &PCM+SP+AP & 90.60 & 94.73 & \textbf{94.80} &2\\ &PCM+AP+Sens & 36.21 & \textbf{86.73} & 78.93 &2\\ &PCM+SP+AP+Sens & 94.95 & \textbf{95.03} & \textbf{95.03} &3\\ \midrule &No Noise & 76.60 & 76.60 & 76.60 &0\\ &PCM (No Protection) & 0.001 & 0.001 & 0.001 &0\\ &PCM+SP & 0.001 & \textbf{0.004} & 0.002 &1\\ ImageNet&PCM+AP & 0.001 & \textbf{0.002} & 0.001 &1\\ &PCM+SP+AP & 66.00 & 69.00 & \textbf{69.20} &2\\ &PCM+AP+Sens & 0.003 & \textbf{0.006} & 0.005 &2\\ &PCM+SP+AP+Sens & 68.80 & \textbf{70.20} & \textbf{70.40} &3\\ \bottomrule \\ \end{tabular} } \caption{Robust training vs. naive training for ResNet-18 on CIFAR-10 and ResNet-50 on ImageNet. SP: sign protection, AP: adaptive protection (adaptive mapping+sparsity-driven adaptive redundancy), Sens.: sensitivity-driven adaptive redundancy.} \label{tab:cifar_robust_train} \end{table} \subsection{Robust Distillation} \setlength{\tabcolsep}{3pt} \begin{table}[h] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{ccccccc} \toprule & Number of & Teacher & Teacher & Student & Noisy Student & Add. \\ & PCM cells & ResNet-18 & ResNet-20 & ResNet-20 & ResNet-20 & Bits \\ \midrule No Noise & 16 & 95.70 & 92.50 & 92.90 & \textbf{93.00} &0\\ \midrule PCM+AP &3 & 16.23 & 48.38 & 73.38 & \textbf{81.75} &1\\ \midrule \centered{PCM+SP+AP} & \centered{1 \\ 3} & \centered{93.35 \\ 94.78} & \centered{86.30 \\ 89.73} & \centered{88.58 \\ 89.98} & \centered{\textbf{90.65} \\ \textbf{91.33}} & \centered{2} \\ \midrule PCM+AP+Sens. 
& 1 & 9.60 & 29.68 & 38.18 & \textbf{69.49} &2 \\ \midrule \centered{PCM+SP+AP+Sens.} & \centered{1 \\ 3} & \centered{93.36 \\ 94.90} & \centered{88.40 \\ 89.92} & \centered{88.96 \\ 90.44} & \centered{\textbf{91.10} \\ \textbf{91.78}} & \centered{3} \\ \bottomrule \\ \end{tabular} } \caption{Accuracy of ResNet-20 distilled from ResNet-18 on CIFAR-10. SP: sign protection, AP: adaptive protection (adaptive mapping+sparsity-driven adaptive redundancy), Sens.: sensitivity-driven adaptive redundancy. During the distillation of the noisy student, Gaussian noise $N(0,0.01)$ is injected onto the weights.} \label{tab:cifar_distill_PCM} \end{table} Table~\ref{tab:cifar_distill_PCM} shows the distillation results for ResNet-20 distilled from ResNet-18 (on CIFAR-10) with PCM noise applied at test time. We compare three networks: (1) a student ResNet-20 distilled without noise injection, (2) a student ResNet-20 distilled with Gaussian noise injection, and (3) a teacher ResNet-20 (a baseline) trained without noise. As shown in Table~\ref{tab:cifar_distill_PCM}, the student ResNet-20 distilled with noise injection outperforms both the teacher ResNet-20 and the student ResNet-20 distilled without noise when weights are perturbed by PCM cells at test time. For additional results with Gaussian noise injected at test time and additional experiments on MNIST, we refer the reader to Appendix~\ref{distill_exp_app}. \subsection{End-to-End Learning} Next, we explore the effectiveness of the end-to-end learning scheme in which we train an autoencoder on a \textit{set of model weights} in a synthetic example. We generate two sets of 2-D Gaussian mixtures: (1) an ``easy'' task in which we first sample a two-dimensional mean vector $\mu_1 \in \mathbb{R}^2$ where $\mu_{1i} \sim \mathcal{U}[-1,0)$ and $\mu_2 \in \mathbb{R}^2$ where $\mu_{2i} \sim \mathcal{U}[0,1)$ for $i \in \{1,2\}$.
After sampling these means, we draw a set of 50K points for the two mixture components: $x_1 \sim \mathcal{N}(\mu_1, I)$ and $x_2 \sim \mathcal{N}(\mu_2,I)$. This ensures that the two mixture components are well-separated. For the ``hard'' task, we sample overlapping mean vectors: both components of $\mu_1$ and $\mu_2$ are drawn from $\mathcal{U}[-1,+1]$ before sampling from their respective Gaussian mixture components. We generate 50 datasets per task, where each dataset has 50K data points. We train 50 logistic regression models, one per dataset, for both the ``easy'' and ``hard'' tasks (each model has 3 parameters). Then, we train an autoencoder on these sets of weights for both Gaussian and PCM channel noise. For the easy task with Gaussian noise, the autoencoder achieves an accuracy of 95.2\%; for PCM noise, it achieves 91.8\% accuracy. For the hard task with Gaussian noise, the autoencoder achieves an average accuracy of 78.8\% across all classifiers; for PCM channel noise, it achieves 78.6\% on average across all 50 classifiers. Interestingly, we find that the 1-D classifier representations in the autoencoder's latent space are semantically meaningful (Appendix~\ref{appendix_logreg}). \section{Related Work} \label{related} \textbf{Model Compression.} Compression of NN parameters has a rich history starting from the early works of \citep{OBD,OBS}, with techniques such as pruning \citep{ han2015learning, deep_compression,snip, frankle2018lottery, woodfisher, hydra}, quantization \citep{adaptiveQuant,scalableQuant, choi2020universal, QuantNoise} and KD \citep{distillation,polino2018model,xie2020self}. \cite{adversarialdistil} explores KD as a way to encourage robustness to adversarial perturbations in input space, while we specifically desire robustness to weight perturbations.
Although probabilistic approaches to model compression have been explored in \cite{louizos2017bayesian,reagan2018weightless,havasi2018minimal}, we additionally consider the physical constraints of the storage device used for memory as part of our learning scheme. Our end-to-end approach is most similar to \cite{oktay2019scalable}, in that they also learn a decoder to map NN weights into a latent space prior to quantization. However, our method is different in that our autoencoder also learns the appropriate compression scheme (with an encoder), we inject noise into our compressed weights rather than quantizing them, and we do not require training a NN from scratch. \textbf{Analog Computing in NNs.} A complementary line of work exists on utilizing analog components in NN training and/or inference \citep{leblebici, onchiplearning,binas2016precise, du2018analog, AnalogMemoryAcc, Joshi2020AccurateDN}. The common theme is performing computation in the analog domain, perhaps with in-memory computation capabilities, in order to reduce the overall computation cost. However, analog computing comes with its own complexities, such as inherent noise or lack of flexibility. In contrast, our work only focuses on storing NN models in analog cells and our objective is to increase storage density. Once the parameters are read from memory, the actual computation happens in the conventional digital domain. \section{Conclusion} \label{conclusion} In this work, we formalized the problem of jointly learning the model compression and physical storage to maximize memory utility in noisy analog storage devices. We introduced a variety of novel coding techniques for preserving NN classification accuracy even in the presence of weight perturbations, and demonstrated their effectiveness on MNIST, CIFAR-10, and ImageNet.
Additionally, we explored different training strategies that can be coupled with existing compression techniques such as distillation, and provided an initial foray into a fully end-to-end learning scheme for the joint problem. Extending our framework to allow for quantization would be interesting future work, as would exploring how to handle models trained on a variety of datasets and tasks in the end-to-end learning scheme. \section{Acknowledgement} The authors would like to thank TSMC Corporate Research for technical discussions. This work was supported in part by a Sony Stanford Graduate Fellowship. \section*{Appendix} \label{appendix} \renewcommand{\thesubsection}{\Alph{subsection}} \subsection{Additional Details on PCM Arrays} \label{PCM_app} \begin{figure*}[t!] \centering \subfigure[1-sigma error bar plot. ]{\includegraphics[width=.32\textwidth]{figures/errorbar.png}} \subfigure[Channel output distribution. ]{\includegraphics[width=.32\textwidth]{figures/dist.png}} \subfigure[Interpolated measurement data. ]{\includegraphics[width=.32\textwidth]{figures/Channel_char.jpg}} \caption{Characteristics of the channel noise in a PCM cell. (a) 1-standard-deviation error bars for each input level, where both the mean and variation of the output have a nonlinear relationship to the input value. (b) Distribution of output values corresponding to 31 distinct input values into the PCM array. (c) Channel response values for the PCM array.} \label{fig:channel_app} \end{figure*} The measurement data used in this paper to model the PCM channel noise is collected on a PCM array with one million cells. The memory cells start in their initial state and are then programmed with electrical pulses. The physical state of a memory cell (the output) can be continuously tuned by varying the programming pulse amplitude (the input). To collect the measurement data for this work, we programmed the array with 31 different pulse amplitudes, using more than 1000 cells for each pulse amplitude.
Figure~\ref{fig:channel_app}(a) shows the 1-standard-deviation error bars from cell-to-cell variation for each input level, where both the mean and the standard deviation of the output are non-linearly related to the input. Figure~\ref{fig:channel_app}(b) shows the histogram of output distributions corresponding to the 31 input levels; conditioned on the input level, the output is roughly Gaussian distributed. The channel response for input values between measurement points is then interpolated to construct the differentiable, continuous analog channel model used in this work, shown in Figure~\ref{fig:channel_app}(c). The differentiability of the model enables the end-to-end learning scheme shown in Figure~\ref{fig:appendix_logreg}. \subsection{Multiple Channel Usages} \label{noise} In this section, we provide additional details on the noisy channel and investigate how the noise changes as we vary the scale factor $\alpha$ and the number of channel usages $r$. Suppose we have a zero-mean noisy storage channel $\phi$ with a power constraint $|w_{in}| \leq P$, where $w_{in}$ denotes the input weights to be stored on the channel and $P$ is the power constraint ($P=1$ in our experiments). We must first map all the inputs $w_{in}$ into the range $[-P,P]$ to respect the physical constraints of the PCM storage device. Then, the relationship between the input weights $w_{in}$ to be stored and the output weights $w_{out}$ to be read is: \begin{align*} w_{out} = \frac{\phi(\alpha \cdot w_{in} - \beta) +\beta} {\alpha} \end{align*} where $\alpha$ and $\beta$ are scale and shift factors, respectively. Since the noise is zero-mean, we have: \begin{align*} w_{out} = \frac{\alpha \cdot w_{in} - \beta +\epsilon_0(w_{in}) +\beta} {\alpha}= w_{in}+\frac{\epsilon_0(w_{in})}{\alpha} \end{align*} where $\epsilon_0(w_{in})$ is zero-mean noise with input-dependent standard deviation $\sigma(w_{in})$.
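For concreteness, the scaled and shifted read/write model above can be sketched in a few lines of numpy. The Gaussian noise and the \texttt{sigma} profile below are illustrative stand-ins for the interpolated PCM measurements, and all constants are hypothetical:

```python
import numpy as np

def store_and_read(w_in, alpha=4.0, beta=0.0,
                   sigma=lambda x: 0.05 * (1 + np.abs(x)), seed=0):
    """Write weights through the noisy channel and read them back:
    w_out = (phi(alpha * w_in - beta) + beta) / alpha.
    phi adds zero-mean noise whose std depends on the programmed value;
    the Gaussian noise and the sigma() profile stand in for the
    interpolated PCM measurements."""
    rng = np.random.default_rng(seed)
    x = alpha * w_in - beta               # programmed channel input
    noise = rng.normal(0.0, sigma(x))     # input-dependent, zero-mean noise
    return (x + noise + beta) / alpha     # invert the scale/shift on read

rng = np.random.default_rng(1)
w = rng.uniform(-0.1, 0.1, size=10_000)   # weights already mapped into [-P, P]
err_small = np.abs(store_and_read(w, alpha=2.0) - w).mean()
err_large = np.abs(store_and_read(w, alpha=8.0) - w).mean()
# a larger alpha uses more of the channel's dynamic range, so the effective
# noise sigma(w_in)/alpha on the read-back weights shrinks
```

Scaling up by $\alpha$ before writing and dividing by $\alpha$ after reading is what divides the effective noise on the weights by $\alpha$ in the expression above.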
If we use the noisy channel $\phi$ $r$ times to store the same weight ($r$ cells per weight), we can average over the outputs: \begin{align} \begin{aligned} w_{out}&= \frac{1}{r}\sum_{i=1}^r \left(w_{in}+\frac{\epsilon_{0,i}(w_{in})}{\alpha}\right) \\ &= w_{in} + \frac{\frac{1}{r}\sum_{i=1}^r \epsilon_{0,i}(w_{in})}{\alpha} \end{aligned} \label{noise_eq} \end{align} Let us define a new random variable $\Tilde{\epsilon}(w_{in}, r) =\frac{1}{r}\sum_{i=1}^r \epsilon_{0,i}(w_{in})$, whose standard deviation is $\frac{\sigma(w_{in})}{\sqrt{r}}$. Then the standard deviation of $\hat{\epsilon}(w_{in}, r, \alpha)= \frac{\Tilde{\epsilon}(w_{in}, r)}{\alpha}$ in Eq.~\ref{noise_eq} is $\frac{\sigma(w_{in})}{\alpha \sqrt{r}}$. More precisely, we have the following expression: \begin{align*} w_{out}= w_{in} + \hat{\epsilon}(w_{in}, r,\alpha) \end{align*} where the standard deviation of $\hat{\epsilon}(w_{in},r,\alpha)$ is $\frac{\sigma(w_{in})}{\alpha \sqrt{r}}$. \subsection{Sparsity of Neural Network Weights and Gradients} \label{histogram_app} Figure~\ref{fig:histogram} shows the histograms of the weights and of the sensitivity terms ($s_j= \frac{1}{N}\sum_{i=1}^N\left(\frac{\partial \log{p_w(y_i|x_i)}}{\partial w_j}\right)^2$ from Section~\ref{sensitivity_driven}) of ResNet-18 after it is trained on CIFAR-10. The sparsity of the weights and sensitivity terms is critical in our experiments: since only a few weights have large magnitudes or large sensitivities, allocating more PCM cells to them has a negligible effect on the average number of cells per weight.
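The sensitivity terms $s_j$ can be computed directly from per-sample gradients of the log-likelihood. As a minimal sketch, we use a logistic-regression model, whose per-sample gradient is available in closed form, as an illustrative stand-in for ResNet-18; the data, weights, and the zeroed-out feature are all hypothetical:

```python
import numpy as np

def sensitivity(w, X, y):
    """s_j = (1/N) * sum_i (d log p_w(y_i | x_i) / d w_j)^2 for a
    logistic-regression model, whose per-sample log-likelihood gradient
    is (y_i - p_i) * x_i."""
    p = 1.0 / (1.0 + np.exp(-X @ w))    # p_w(y=1 | x_i)
    g = (y - p)[:, None] * X            # per-sample gradients, shape (N, d)
    return (g ** 2).mean(axis=0)        # average squared gradient per weight

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 3] = 0.0                           # a feature the model never sees vary
w = np.array([1.0, -2.0, 0.5, 3.0])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ w))).astype(float)
s = sensitivity(w, X, y)
# s[3] is exactly zero: a weight the loss is insensitive to would receive
# no extra PCM cells under sensitivity-driven redundancy
```

In the experiments these per-weight gradients come from backpropagation through the trained network rather than a closed form, but the averaging of squared gradients is the same.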
\begin{figure}[H] \label{histogram} \centering % \subfigure[Distribution of weights (ResNet-18 trained on CIFAR-10).]{\includegraphics[width=.35\textwidth]{figures/histogram_weight_CIFAR.jpg}} \subfigure[Distribution of sensitivity terms (ResNet-18 trained on CIFAR-10).]{\includegraphics[width=.35\textwidth]{figures/histogram_gradient_CIFAR.jpg}} \caption{Histogram of (a) weights of ResNet-18 trained on CIFAR-10, (b) sensitivity terms of ResNet-18 trained on CIFAR-10. Since both the weights and sensitivities are sparse, the increase in the average number of cells per weight due to adaptive redundancy and sensitivity-driven protection strategies is negligible. }\label{fig:histogram} \end{figure} \subsection{Additional Experimental Details} \label{hyperparams} In the following subsection, we provide additional details on the models, model architectures, and hyperparameters used in our experiments. \subsubsection{MNIST} We provide the architectural details and hyperparameters for LeNet and the MLP in Tables~\ref{table:lenet_arch} and~\ref{table:mlp_arch} respectively: \begin{table}[h!] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{c|c} \hline \textbf{Name}& \textbf{Component}\\ \hline conv1 & [$5 \times 5$ conv, 20 filters, stride 1], ReLU, $2 \times 2$ max pool \\ \hline conv2 & [$5 \times 5$ conv, 50 filters, stride 1], ReLU, $2 \times 2$ max pool \\ \hline Linear & Linear $800 \rightarrow 500$, ReLU \\ \hline Output Layer & Linear $500 \rightarrow 10$ \\ \hline \end{tabular} } \caption{LeNet architecture for MNIST experiments.} \label{table:lenet_arch} \end{table} \begin{table}[h!] 
\centering \begin{tabular}{c|c} \hline \textbf{Name}& \textbf{Component}\\ \hline Input Layer & Linear $784 \rightarrow 100$, ReLU \\ \hline Hidden Layer & Linear $100 \rightarrow 100$, ReLU \\ \hline Output Layer & Linear $100 \rightarrow 10$ \\ \hline \end{tabular} \caption{MLP architecture for MNIST experiments.} \label{table:mlp_arch} \end{table} For both the LeNet and MLP classifiers, we use a batch size of 100 and train for 100 epochs, early stopping at the best accuracy on the validation set. We use the Adam optimizer with learning rate $=0.001$, and $\beta_1=0.9, \beta_2=0.999$ with weight decay $=5e^{-4}$. For knowledge distillation, we use a temperature parameter of $T = 1.5$ and equally weight the contributions of the student network's cross entropy loss and the distillation loss ($\lambda=0.5$). \subsubsection{CIFAR-10} We provide the architectural details and hyperparameters for the ResNet-18 and the slim ResNet-20 used in our experiments in Tables~\ref{table:resnet_arch} and~\ref{table:resnet_arch2} below: \begin{table}[h!] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{c|c} \hline \textbf{Name}& \textbf{Component}\\ \hline conv1 & $3\times3$ conv, 64 filters. 
stride 1, BatchNorm \\ \hline Residual Block 1 & $ \begin{bmatrix} 3 \times 3 \text{ conv, } 64 \text{ filters} \\ 3 \times 3 \text{ conv, } 64 \text{ filters} \end{bmatrix} \times 2$ \\ \hline Residual Block 2 & $ \begin{bmatrix} 3 \times 3 \text{ conv, } 128 \text{ filters} \\ 3 \times 3 \text{ conv, } 128 \text{ filters} \end{bmatrix} \times 2$ \\ \hline Residual Block 3 & $ \begin{bmatrix} 3 \times 3 \text{ conv, } 256 \text{ filters} \\ 3 \times 3 \text{ conv, } 256 \text{ filters} \end{bmatrix} \times 2$ \\ \hline Residual Block 4 & $ \begin{bmatrix} 3 \times 3 \text{ conv, } 512 \text{ filters} \\ 3 \times 3 \text{ conv, } 512 \text{ filters} \end{bmatrix} \times 2$ \\ \hline Output Layer & $4 \times 4$ average pool stride 1, fully-connected, softmax \\ \hline \end{tabular} } \caption{ResNet-18 architecture for CIFAR-10 experiments.} \label{table:resnet_arch} \end{table} \begin{table}[h!] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{c|c} \hline \textbf{Name}& \textbf{Component}\\ \hline conv1 & $3\times3$ conv, 16 filters. stride 1, BatchNorm \\ \hline Residual Block 1 & $ \begin{bmatrix} 3 \times 3 \text{ conv, } 16 \text{ filters} \\ 3 \times 3 \text{ conv, } 16 \text{ filters} \end{bmatrix} \times 2$ \\ Residual Block 2 & $ \begin{bmatrix} 3 \times 3 \text{ conv, } 32 \text{ filters} \\ 3 \times 3 \text{ conv, } 32 \text{ filters} \end{bmatrix} \times 2$ \\ \hline Residual Block 3 & $ \begin{bmatrix} 3 \times 3 \text{ conv, } 64 \text{ filters} \\ 3 \times 3 \text{ conv, } 64 \text{ filters} \end{bmatrix} \times 2$ \\ \hline Output Layer & $7 \times 7$ average pool stride 1, fully-connected, softmax \\ \hline \end{tabular} } \caption{Slim ResNet-20 architecture for CIFAR-10 experiments.} \label{table:resnet_arch2} \end{table} For both ResNet-18 and slim ResNet-20, we use a batch size of 128 and train for 350 epochs, early stopping at the best accuracy on the validation set. 
We use SGD with learning rate $=0.1$, and momentum $=0.9$ and weight decay $=5e^{-4}$. \subsubsection{ImageNet} We provide the architectural details and hyperparameters for the ResNet-50 used in our experiments in Table~\ref{table:resnet_arch3}. \begin{table}[h!] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{c|c} \hline \textbf{Name}& \textbf{Component}\\ \hline conv1 & $3\times3$ conv, 64 filters. stride 1, BatchNorm \\ \hline Residual Block 1 & $ \begin{bmatrix} 1 \times 1 \text{ conv, } 64 \text{ filters} \\ 3 \times 3 \text{ conv, } 64 \text{ filters} \\ 1 \times 1 \text{ conv, } 256 \text{ filters} \end{bmatrix} \times 3$ \\ \hline Residual Block 2 & $ \begin{bmatrix} 1 \times 1 \text{ conv, } 128 \text{ filters} \\ 3 \times 3 \text{ conv, } 128 \text{ filters} \\ 1 \times 1 \text{ conv, } 512 \text{ filters} \end{bmatrix} \times 4$ \\ \hline Residual Block 3 & $ \begin{bmatrix} 1 \times 1 \text{ conv, } 256 \text{ filters} \\ 3 \times 3 \text{ conv, } 256 \text{ filters} \\ 1 \times 1 \text{ conv, } 1024 \text{ filters} \end{bmatrix} \times 6$ \\ \hline Residual Block 4 & $ \begin{bmatrix} 1 \times 1 \text{ conv, } 512 \text{ filters} \\ 3 \times 3 \text{ conv, } 512 \text{ filters} \\ 1 \times 1 \text{ conv, } 2048 \text{ filters} \end{bmatrix} \times 3$ \\ \hline Output Layer & $4 \times 4$ average pool stride 1, fully-connected, softmax \\ \hline \end{tabular} } \caption{ResNet-50 architecture for ImageNet experiments.} \label{table:resnet_arch3} \end{table} We use the pretrained ResNet-50 from PyTorch (\texttt{https://github.com/pytorch/vision/blob/\\master/torchvision/models/resnet.py}), with a batch size of 64. For the robust training experiments, we retrain the model for 20 epochs with early stopping at best accuracy. We use SGD with learning rate $=0.001$, and momentum $=0.9$ and weight decay $=5e^{-4}$. 
\subsection{Additional Experimental Results} \label{confidence} \subsubsection{Sparsity- and Sensitivity-Driven Protection} \begin{figure*}[ht] \centering % \subfigure[Perturbation by PCM cells.]{\includegraphics[width=.47\textwidth]{figures/figure_6a_Comparison_Channel_Noise_resnet18.png}} \subfigure[Perturbation by Gaussian noise.]{\includegraphics[width=.47\textwidth]{figures/figure6b_Comparison_Gaussian_Noise_resnet18.png}} \caption{Accuracy of ResNet-18 on CIFAR-10 when weights are perturbed by (a) PCM cells, (b) Gaussian noise. SP: sign protection, AM: adaptive mapping, AR: sparsity-driven adaptive redundancy. Experiments are conducted three times.}\label{fig:comparison_cifar_app} \end{figure*} \setlength{\tabcolsep}{3pt} \begin{table*}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccccccc} \toprule & 512 cells & 64 cells & 32 cells & 16 cells & 8 cells & 4 cells & 3 cells & 2 cells & 1 cell & Additional Bits\\ \midrule No Protection & 94.2($\pm$ 0.1)& 27.1 ($\pm$ 3.4) & 9.0 ($\pm$ 4.1) & 10.2 ($\pm$ 0.5) & 10.2 ($\pm$ 0.5) & 9.9 ($\pm$ 0.1) & 9.6 ($\pm$ 0.5) & 9.8 ($\pm$ 0.7) & 10.3 ($\pm$ 0.4) & 0\\ SP & 95.0 ($\pm$ 0.0) & 94.2 ($\pm$ 0.1) & 94.00 ($\pm$ 0.2) & 92.80 ($\pm$ 0.3) & 89.50 ($\pm$ 0.6) & 67.00 ($\pm$ 0.2) & 41.90 ($\pm$ 0.5) & 11.80 ($\pm$ 5.1) & 9.80 ($\pm$ 1.5) & 1\\ AM+AR & 95.0 ($\pm$ 0.0) & 94.8 ($\pm$ 0.1) & 94.70 ($\pm$ 0.1) & 94.40 ($\pm$ 0.1) & 93.70 ($\pm$ 0.2) & 93.10 ($\pm$ 0.2) & 92.70 ($\pm$ 0.2) & 89.20 ($\pm$ 2.6) & 58.00 ($\pm$ 8.2) & 1\\ SP+AM & 95.1 ($\pm$ 0.0) & 95.0 ($\pm$ 0.0) & 95.00 ($\pm$ 0.1) & 94.80 ($\pm$ 0.1) & 94.70 ($\pm$ 0.1) & 94.60 ($\pm$ 0.1) & 93.90 ($\pm$ 0.2) & 93.20 ($\pm$ 0.1) & 90.60 ($\pm$ 0.3) & 2\\ SP+AM+AR & 95.1 ($\pm$ 0.0)& 95.0 ($\pm$ 0.0) & \textbf{95.00} ($\pm$ 0.1) & \textbf{95.00} ($\pm$ 0.1) & \textbf{95.00} ($\pm$ 0.1) & \textbf{95.00} ($\pm$ 0.1) & \textbf{95.10} ($\pm$ 0.1) & \textbf{94.80} ($\pm$ 0.1) & \textbf{94.44} ($\pm$ 0.2) & 2\\ SP+AM+AR+Sens. 
& 95.1 ($\pm$ 0.0) & 95.1 ($\pm$ 0.0) & \textbf{95.10} ($\pm$ 0.1) & \textbf{95.07} ($\pm$ 0.1) & \textbf{95.11} ($\pm$ 0.1) & \textbf{95.03} ($\pm$ 0.2) & \textbf{95.14} ($\pm$ 0.1) & \textbf{94.80} ($\pm$ 0.1) & \textbf{94.95} ($\pm$ 0.3) & 3\\ \bottomrule \\ \end{tabular} } \caption{Accuracy of ResNet-18 on CIFAR-10 when weights are perturbed by the PCM cells. Baseline accuracy (when there is no noise) is $95.10 \%$. SP: sign protection, AM: adaptive mapping, AR: sparsity-driven adaptive redundancy, Sens.: sensitivity-driven adaptive redundancy. Reported results are averaged over three experimental runs.} \label{tab:cifar_sparsity_app} \end{table*} \setlength{\tabcolsep}{3pt} \begin{table*}[!t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{lccccccccccc} \toprule & 512 cells & 64 cells & 32 cells & 16 cells & 8 cells & 4 cells & 3 cells &2 cells & 1 cell & Additional Bits\\ \midrule No Protection & 0.001($\pm$ 0.0)& 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0\\ SP & 0.044($\pm$ 0.0) & 0.009 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 1\\ AM+AR & 75.0 ($\pm$ 0.5) & 66.7 ($\pm$ 0.8) & 66.4 ($\pm$ 2.5) & 63.2 ($\pm$ 2.5) & 57.5 ($\pm$ 2.2) & 34.1 ($\pm$ 3.4) & 22.4 ($\pm$ 3.0) & 0.037 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 2\\ SP+AM & 70.6 ($\pm$ 0.7) & 69.4 ($\pm$ 1.1) & 55.1 ($\pm$ 3.2) & 51.3 ($\pm$ 2.4) & 34.6 ($\pm$ 1.8) & 28.0 ($\pm$ 5.6) & 17.6 ($\pm$ 4.1) & 0.046 ($\pm$ 0.0) & 0.002 ($\pm$ 0.0) & 2\\ SP+AM+AR & 75.4 ($\pm$ 0.2) & 74.5 ($\pm$ 0.1) & 74.2 ($\pm$ 0.4) & 73.8 ($\pm$ 0.0) & 73.8 ($\pm$ 0.2) & 72.8 ($\pm$ 0.0) & 72.2 ($\pm$ 0.4) & 70.2 ($\pm$ 0.0) & 66.0 ($\pm$ 0.8) & 1\\ SP+AM+AR+Sens. 
& \textbf{75.8} ($\pm$ 0.0) & \textbf{75.6} ($\pm$ 0.0) & \textbf{75.3} ($\pm$ 0.0) & \textbf{74.8} ($\pm$ 0.1) & \textbf{74.9} ($\pm$ 0.1) & \textbf{74.5} ($\pm$ 0.1) & \textbf{73.7} ($\pm$ 0.0) & \textbf{73.0} ($\pm$ 0.2) & \textbf{68.8} ($\pm$ 0.3) & 3\\ \bottomrule \\ \end{tabular} } \caption{Accuracy of ResNet-50 on ImageNet when weights are perturbed by the PCM cells. Baseline accuracy (when there is no noise) is $76.6 \%$. SP: sign protection, AM: adaptive mapping, AR: sparsity-driven adaptive redundancy, Sens.: sensitivity-driven adaptive redundancy. Reported results are averaged over three experimental runs.} \label{tab:imagenet_sparsity_app} \end{table*} We give the full results of the sparsity-driven and sensitivity-driven protection experiments on CIFAR-10 and ImageNet, with confidence intervals, in Table~\ref{tab:cifar_sparsity_app} and Table~\ref{tab:imagenet_sparsity_app}. In Figure~\ref{fig:comparison_cifar_app}, we present experimental results with ResNet-18 on CIFAR-10. As can be seen from Figure~\ref{fig:comparison_cifar_app}(a), our strategies, namely sign protection, adaptive mapping, and adaptive redundancy, reduce the number of PCM cells per weight required to preserve accuracy from $1024$ to $1$. We also test our strategies against Gaussian noise. In the Gaussian experiments, we consider hypothetical storage devices with white Gaussian noise, where the standard deviation of the overall noise on the output (here $\hat{\epsilon}(r, \alpha)$ is not input-dependent) can be reduced by using the channel multiple times (Method \#1) and by using the channel at its power limit (Method \#2). Figure~\ref{fig:comparison_cifar_app}(b) shows that our strategies increase the standard deviation threshold at which the NN performance sharply drops from $0.005$ to $0.2$, i.e., robustness increases by $40$ times. We present the results of the Gaussian noise experiments for comparability with future work, although this scenario is not realistic.
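The protection strategies compared above can be sketched in a few lines. The threshold, cell counts, and the Gaussian stand-in channel below are illustrative, not the tuned settings used in the experiments:

```python
import numpy as np

def protect_and_store(w, channel, base_r=1, big_r=4, thresh=0.15):
    """Sign protection (SP) + sparsity-driven adaptive redundancy (AR), sketched.

    The sign bit is stored reliably and only the magnitude passes through
    the noisy channel; the few weights whose magnitude exceeds `thresh`
    are stored with `big_r` redundant cells and averaged on read, while the
    sparse majority use `base_r`."""
    signs, mags = np.sign(w), np.abs(w)
    r = np.where(mags > thresh, big_r, base_r)
    reads = np.array([np.mean([channel(m) for _ in range(k)])
                      for m, k in zip(mags, r)])
    w_hat = signs * np.clip(reads, 0.0, None)  # a read magnitude cannot be negative
    return w_hat, r.mean()                     # reconstruction, avg cells per weight

rng = np.random.default_rng(0)
channel = lambda x: x + rng.normal(0.0, 0.05)  # stand-in zero-mean noisy cell
w = rng.normal(0.0, 0.05, size=1000)           # sparse magnitudes ...
w[:20] = rng.choice([-1.0, 1.0], 20) * 0.8     # ... plus a few large weights
w_hat, avg_cells = protect_and_store(w, channel)
```

Because large-magnitude weights are rare, the average number of cells per weight stays close to \texttt{base\_r} even though the important weights get heavy redundancy.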
\subsubsection{Robust Pruning} \label{pruning_app} We give the full results of the robust pruning experiments with confidence intervals in Table~\ref{tab:cifar_pruning_app}. We apply one-shot pruning followed by $20$ epochs of retraining. When the sparsity- and sensitivity-driven strategies are applied, $3$ cells per weight are enough to preserve the original accuracy of the pruned model ($94.8 \%$). Moreover, if we use only $1$ cell per weight on average, the accuracy drop is $0.1 \%$, which is insignificant considering that pruning itself reduces the accuracy by $0.3\%$ (from $95.1\%$ to $94.8\%$). We note that this performance degradation due to pruning can be eliminated with robust training, explained in Section~\ref{robust_train}. Without pruning, the sparsity- and sensitivity-driven protection strategies provide a $16$-fold improvement in memory efficiency, since the number of cells per weight required to preserve the accuracy drops from $16$ to $1$ when we use analog storage instead of digital storage. Since our strategies combine well with pruning (Table~\ref{tab:cifar_pruning_app}), we obtain further gains, e.g., $10 \times 16=160$ times more efficient storage with $90\%$ pruning. \setlength{\tabcolsep}{3pt} \begin{table*}[!t] \centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lcccccc} \toprule & 10 cells & 5 cells & 3 cells & 2 cells & 1 cell & Additional Bits\\ \midrule No Protection & 10.1 ($\pm$ 0.2) & 9.2 ($\pm$ 0.3) & 9.9 ($\pm$ 0.3) & 10.1 ($\pm$ 0.4) & 9.1 ($\pm$ 0.3) & 0\\ SP & 76.5 ($\pm$ 0.1) & 22.5 ($\pm$ 1.1) & 13.1 ($\pm$ 0.8) & 10.1 ($\pm$ 0.4) & 10.0 ($\pm$ 0.5) & 1\\ SP+AM & 94.1 ($\pm$ 0.1) & 93.4 ($\pm$ 0.1) & 92.6 ($\pm$ 0.2) & 92.3 ($\pm$ 0.2) & 85.4 ($\pm$ 0.3) & 2\\ SP+AM+AR & \textbf{94.5} ($\pm$ 0.0) & \textbf{94.9} ($\pm$ 0.0) & \textbf{94.3} ($\pm$ 0.0) & \textbf{94.5} ($\pm$ 0.0) & \textbf{94.2} ($\pm$ 0.0)& 2\\ SP+AM+AR+Sens.
& \textbf{94.8} ($\pm$ 0.0) & \textbf{94.7} ($\pm$ 0.0) & \textbf{94.8} ($\pm$ 0.0) & \textbf{94.6} ($\pm$ 0.0) & \textbf{94.7}($\pm$ 0.0) & 3\\ \bottomrule \\ \end{tabular} } \caption{Accuracy of $90 \%$ pruned ResNet-18 on CIFAR-10 when weights are perturbed by the PCM cells. Baseline accuracy after the pruning (when there is no noise) is $94.8 \%$. SP: sign protection, AM: adaptive mapping, AR: sparsity-driven adaptive redundancy, Sens.: sensitivity-driven adaptive redundancy.} \label{tab:cifar_pruning_app} \end{table*} \subsubsection{Robust Training} \setlength{\tabcolsep}{3pt} \begin{table*}[!t] \centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lcccc} \toprule & Naive Training & Robust Training & Robust Training & Additional\\ &(no noise) & (with N(0, 0.01)) & (with N(0, 0.006)) & Bits \\ \midrule No Noise & 95.10 & 95.50 & \textbf{95.60} &0\\ PCM Noise (No Protection) & 9.70($\pm$ 1.0) & 8.30 ($\pm$ 0.4) & 9.90 ($\pm$ 1.1) &1\\ PCM Noise+SP & 9.70($\pm$ 1.5) & 10.63 ($\pm$ 0.2) & 10.33 ($\pm$ 0.2) &2\\ PCM Noise+SP+AP & 90.60($\pm$ 0.2) & 94.73 ($\pm$ 0.1) & \textbf{94.80} ($\pm$ 0.2) &2\\ PCM Noise+AP & 27.69($\pm$ 8.2) & \textbf{86.20} ($\pm$ 0.4) & 81.83 ($\pm$ 4.2) &1\\ PCM Noise+AP+Sens & 36.21($\pm$ 3.1) & \textbf{86.73} ($\pm$ 2.2) & 78.93 ($\pm$ 4.4) &2\\ PCM Noise+SP+AP+Sens & 94.95($\pm$ 0.3) & \textbf{95.03} ($\pm$ 0.1) & \textbf{95.03} ($\pm$ 0.1) &3\\ \bottomrule \\ \end{tabular} } \caption{Accuracy of ResNet-18 on CIFAR-10 when weights are perturbed by the PCM cells. Baseline accuracy (when there is no noise at train or test time) is $95.10 \%$. SP: sign protection, AP: adaptive protection (adaptive mapping+sparsity-driven adaptive redundancy), Sens.: sensitivity-driven adaptive redundancy. 
Experiments are conducted three times.} \label{tab:cifar_robust_train_app} \end{table*} \setlength{\tabcolsep}{3pt} \begin{table*}[!t] \centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lcccc} \toprule & Naive Training & Robust Training & Robust Training & Additional \\ &(no noise) & (with N(0, 0.01)) & (with N(0, 0.006)) & Bits \\ \midrule No Noise & 76.60 & 76.60 & 76.60 &0\\ PCM Noise (No Protection) & 0.001($\pm$ 0.0) & 0.001 ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) &0\\ PCM Noise+SP & 0.001($\pm$ 0.0) & \textbf{0.004} ($\pm$ 0.0) & 0.002 ($\pm$ 0.0) &1\\ PCM Noise+AP & 0.001($\pm$ 0.0) & \textbf{0.002} ($\pm$ 0.0) & 0.001 ($\pm$ 0.0) &1\\ PCM Noise+SP+AP & 66.00($\pm$ 0.8) & 69.00 ($\pm$ 0.3) & \textbf{69.20} ($\pm$ 0.2) &2\\ PCM Noise+AP+Sens & 0.003($\pm$ 0.0) & \textbf{0.006} ($\pm$ 0.0) & 0.005 ($\pm$ 0.0) &2\\ PCM Noise+SP+AP+Sens & 68.80 ($\pm$ 0.3) & \textbf{70.20} ($\pm$ 0.0) & \textbf{70.40} ($\pm$ 0.1) &3\\ \bottomrule \\ \end{tabular} } \caption{Accuracy of ResNet-50 on ImageNet when weights are perturbed by the PCM cells. Baseline accuracy (when there is no noise at train or test time) is $74.4 \%$. SP: sign protection, AP: adaptive protection (adaptive mapping+sparsity-driven adaptive redundancy), Sens.: sensitivity-driven adaptive redundancy. Experiments are conducted three times.} \label{tab:imagenet_robust_train_app} \end{table*} We give the full results of robust training experiments on CIFAR-10 and ImageNet with confidence intervals in Table~\ref{tab:cifar_robust_train_app} and Table~\ref{tab:imagenet_robust_train_app}. We use $\lambda=0.5$ as the coefficient of the KL regularization term in the loss function in Section~\ref{robust_train}. The level of the noise (standard deviation) injected during the training is adjusted according to $r$ and $\alpha$ values to be used at storage time since the noise at storage time has a standard deviation of $\frac{\sigma(w_{in})}{\alpha \sqrt{r}}$. 
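The noise-injection training described above can be sketched as follows. Logistic regression stands in for ResNet here, and the learning rate, step count, and \texttt{noise\_std} are hypothetical; in the experiments \texttt{noise\_std} is matched to $\sigma(w_{in})/(\alpha\sqrt{r})$:

```python
import numpy as np

def train_with_noise(X, y, noise_std=0.1, lr=0.1, steps=200, seed=0):
    """Robust training sketch: every forward pass perturbs the weights with
    zero-mean Gaussian noise, so gradients are taken at w + eps and the
    learned solution tolerates storage-time perturbations of that scale."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w_noisy = w + rng.normal(0.0, noise_std, size=w.shape)
        p = 1.0 / (1.0 + np.exp(-X @ w_noisy))  # forward pass with noisy weights
        w -= lr * X.T @ (p - y) / len(y)        # gradient step applied to clean w
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = (X @ np.array([1.0, 1.0]) > 0).astype(float)
w = train_with_noise(X, y)
acc = ((X @ w > 0) == (y > 0.5)).mean()  # clean accuracy stays high
```

The KL regularization term used in the actual loss is omitted here; the essential mechanism is that the forward pass sees perturbed weights while the update is applied to the clean ones.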
Tables~\ref{tab:cifar_robust_train_app} and~\ref{tab:imagenet_robust_train_app} show that robust training improves the robustness of the network against noise. We have also observed that the pruned network reaches higher accuracy after robust training than after naive training without noise injection (an increase from $90.2\%$ to $95.0\%$ without retraining). \subsubsection{Robust Distillation} \label{distill_exp_app} Knowledge distillation (KD) is a well-established NN compression method in which a large teacher network is trained with the loss $\mathcal{L}_t$ given by \begin{align*} \mathcal{L}_t = \mathbb{E}_{x,y \sim p_{data}}[-\log p_{w_t}(y|x)] \end{align*} where $w_t$ denotes the weights of the teacher network, and the probability of each class $i$ is the output of a high-temperature softmax activation applied to the logits: \begin{align*} y_{i} = \frac{\exp(z_i/T)}{\sum_j \exp(z_j/T)} \end{align*} where $z_i$ is the logit for class $i$. A temperature $T>1$ softens the output probabilities of the teacher network. Using the same temperature, a smaller student network can be trained with the following student loss function $\mathcal{L}_s$: \begin{align*} \mathcal{L}_s &= (1-\lambda)\mathbb{E}_{x,y \sim p_{data}}[-\log p_{w_s}(y|x)] \\ & \quad + \lambda \mathbb{E}_x[D_{KL}(p_{w_t}(y|x)||p_{w_s}(y|x))] \end{align*} where $w_s$ denotes the weights of the student network. It has been shown that a distilled student network can achieve a test accuracy that the same architecture trained directly cannot; in other words, a student network distilled from a teacher performs comparably to a larger network. This makes knowledge distillation a promising compression method. In addition to compression, distillation has been shown to be effective for other desired NN attributes such as generalizability and adversarial robustness \citep{adversarialdistil}.
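The temperature softmax and the two-term student loss above can be sketched in numpy. The logits below are made up for illustration, and $T=1.5$, $\lambda=0.5$ follow the values used in the experiments:

```python
import numpy as np

def softmax_T(z, T=1.5):
    """y_i = exp(z_i / T) / sum_j exp(z_j / T), numerically stabilized."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def student_loss(z_student, z_teacher, y_true, T=1.5, lam=0.5):
    """L_s = (1 - lam) * CE(hard labels, student) + lam * KL(teacher || student),
    with both networks' probabilities taken at temperature T."""
    p_s, p_t = softmax_T(z_student, T), softmax_T(z_teacher, T)
    ce = -np.log(p_s[np.arange(len(y_true)), y_true]).mean()
    kl = (p_t * np.log(p_t / p_s)).sum(axis=-1).mean()
    return (1.0 - lam) * ce + lam * kl

z_t = np.array([[4.0, 0.0, 0.0], [0.0, 4.0, 0.0]])  # confident teacher logits
z_s = np.array([[2.0, 1.0, 0.0], [0.0, 2.0, 1.0]])  # softer student logits
loss = student_loss(z_s, z_t, y_true=np.array([0, 1]))
# raising T flattens the targets: softmax_T(z, T=3) is softer than at T=1
```

When the student's logits match the teacher's, the KL term vanishes and only the cross-entropy term remains.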
In this work, we define ``robustness'' as preserving a network's downstream classification accuracy when noise is added to the weights. This is achieved in part by the robust training of the previous section, which makes a trained network robust to pruning and to noise on the weights. Here, we present a novel student loss function that makes a (compressed) student network more robust to noise with no change to the teacher network training: \begin{align*} \mathcal{L}_s &= (1-\lambda)\mathbb{E}_{x,y \sim p_{data}}[-\log p_{g(\hat{w}_s)}(y|x)] \\ & \quad + \lambda \mathbb{E}_x[D_{KL}(p_{w_t}(y|x)||p_{g(\hat{w}_s)}(y|x))] \end{align*} In the experiments, we use a temperature parameter of $T = 1.5$ and equally weight the contributions of the student network's cross entropy loss and the KL term ($\lambda=0.5$). As in robust training, the noise level (standard deviation) during training is adjusted according to the $r$ and $\alpha$ values to be used at storage time. We give the full results on CIFAR-10 with confidence intervals in Tables~\ref{tab:cifar_distill_PCM_app} and \ref{tab:cifar_distill_Gaussian_app}, with weights perturbed by PCM cells and Gaussian noise, respectively. We also present the same set of experiments on an MLP distilled from LeNet (on MNIST) in Tables~\ref{tab:mnist_distill_app} and \ref{tab:mnist_distill_Gaussian_app}. As in the CIFAR-10 experiments, noise injection during distillation makes the student network more robust to both PCM and Gaussian noise. \clearpage \setlength{\tabcolsep}{3pt} \begin{table*}[!t] \centering \resizebox{0.9\textwidth}{!}{ \begin{tabular}{ccccccc} \toprule & Number of & Teacher & Teacher & Student & Noisy Student & Add.
\\ & PCM cells & ResNet-18 & ResNet-20 & ResNet-20 & ResNet-20 & Bits \\ \midrule No Noise & 16 & 95.70 & 92.50 & 92.90 & \textbf{93.00} &0\\ \midrule PCM+AP &3 & 16.23 ($\pm$ 1.6) & 48.38 ($\pm$ 14.3) & 73.38 ($\pm$ 2.4) & \textbf{81.75 } ($\pm$ 1.5) &1\\ \midrule \centered{PCM+SP+AP} & \centered{1 \\ 3} & \centered{93.35 ($\pm$ 0.5) \\ 94.78 ($\pm$ 0.2)} & \centered{86.30 ($\pm$ 1.5)\\ 89.73 ($\pm$ 0.2) } & \centered{88.58 ($\pm$ 0.4) \\ 89.98 ($\pm$ 0.3) } & \centered{\textbf{90.65} ($\pm$ 0.7) \\ \textbf{91.33} ($\pm$ 0.2) } & \centered{2} \\ \midrule PCM+AP+Sens. & 1 & 9.60($\pm$ 0.4) & 29.68 ($\pm$ 0.4) & 38.18 ($\pm$ 7.2) & \textbf{69.49 } ($\pm$ 3.7) &2 \\ \midrule \centered{PCM+SP+AP+Sens.} & \centered{1 \\ 3} & \centered{93.36 ($\pm$ 0.2)\\ 94.90 ($\pm$ 0.2)} & \centered{88.40 ($\pm$ 1.4) \\ 89.92 ($\pm$ 0.5) } & \centered{88.96 ($\pm$ 0.7) \\ 90.44 ($\pm$ 0.6) } & \centered{\textbf{91.10} ($\pm$ 0.3) \\ \textbf{91.78 } ($\pm$ 0.2)} & \centered{3} \\ \bottomrule \\ \end{tabular} } \caption{Accuracy of ResNet-20 distilled from ResNet-18 on CIFAR-10 when weights are perturbed by the PCM cells. SP: sign protection, AP: adaptive protection (adaptive mapping+sparsity-driven adaptive redundancy), Sens.: sensitivity-driven adaptive redundancy. During the distillation of noisy student, a Gaussian noise with N(0,0.01) is injected onto the weights. 
Experiments are conducted five times.} \label{tab:cifar_distill_PCM_app} \end{table*} \setlength{\tabcolsep}{3pt} \begin{table*}[!t] \centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lccccc} \toprule & Teacher & Teacher & Student & Noisy Student & Additional \\ & ResNet-18 & ResNet-20 & ResNet-20 & ResNet-18 &Bits \\ \midrule No Noise & 95.70 & 92.50 & 92.90 & \textbf{93.00} &0 \\ N(0,0.01)+No Protection & 82.90 ($\pm$ 2.3) & 86.44 ($\pm$ 1.6) & 89.10 ($\pm$ 0.7) & \textbf{90.56} ($\pm$ 0.4) &0 \\ N(0,0.01)+SP+AP & 95.60 ($\pm$ 0.0) & 92.30 ($\pm$ 0.1) & 92.66 ($\pm$ 0.0) & \textbf{92.76} ($\pm$ 0.1) &2 \\ N(0,0.02)+No Protection & 10.88 ($\pm$ 1.2) & 45.36 ($\pm$ 8.8) & 65.50 ($\pm$ 7.0) & \textbf{77.78} ($\pm$ 3.7) &0 \\ N(0,0.02)+SP+AP & 95.70 ($\pm$ 0.0) & 91.68 ($\pm$ 0.4) & 92.30 ($\pm$ 0.1) & \textbf{92.36} ($\pm$ 0.2) &2 \\ \bottomrule \\ \end{tabular} } \caption{Accuracy of ResNet-20 distilled from ResNet-18 on CIFAR-10 when weights are perturbed by the Gaussian noise. SP: sign protection, AP: adaptive protection (adaptive mapping+sparsity-driven adaptive redundancy). During the distillation of noisy student, a Gaussian noise with N(0,0.01) is injected onto the weights. 
Experiments are conducted five times.} \label{tab:cifar_distill_Gaussian_app} \end{table*} \setlength{\tabcolsep}{3pt} \begin{table*}[t] \centering \resizebox{0.9\textwidth}{!}{ \begin{tabular}{lccccccc} \toprule & Teacher & Teacher & Student & Noisy Student & Noisy Student & Number of & Additional \\ & LeNet & MLP & MLP & MLP & MLP & PCM cells & Bits \\ & & (Student baseline) & (No Noise) & (with N(0,0.1)) & (with N(0,0.006)) & & \\ \midrule No Noise & 99.20 & 97.50 & 97.80 & 96.30 & \textbf{97.30} & 16 & 0\\ PCM+SP & 98.94 ($\pm$ 0.0) & 91.86 ($\pm$ 1.7) & 87.46 ($\pm$ 3.7) & \textbf{95.46}($\pm$ 0.1) & 93.12 ($\pm$ 1.4) & 1 & 1\\ PCM+AP & 98.60 ($\pm$ 0.2) & 92.76 ($\pm$ 1.6) & 95.40 ($\pm$ 1.0) & 95.78($\pm$ 0.2) & \textbf{96.04} ($\pm$ 0.4) &1 & 1\\ PCM+SP+AP & 99.20 ($\pm$ 0.0) & 97.04 ($\pm$ 0.1) & 97.46 ($\pm$ 0.2) & 96.12 ($\pm$ 0.1) & \textbf{97.58} ($\pm$ 0.1) & 1 & 2\\ PCM+AP+Sens. & 98.88 ($\pm$ 0.1) & 95.04 ($\pm$ 0.5) & 97.10 ($\pm$ 0.1) & 95.92($\pm$ 0.1) & \textbf{97.46} ($\pm$ 0.1) & 1 & 2 \\ PCM+SP+AP+Sens. & 99.20 ($\pm$ 0.0) & 97.20 ($\pm$ 0.0) & 97.72 ($\pm$ 0.1) & 96.14($\pm$ 0.1) & \textbf{97.80} ($\pm$ 0.1) & 1 & 3 \\ \bottomrule \\ \end{tabular} } \caption{Accuracy of MLP distilled from LeNet on MNIST when weights are perturbed by the PCM cells. SP: sign protection, AP: adaptive protection (adaptive mapping+sparsity-driven adaptive redundancy), Sens.: sensitivity-driven adaptive redundancy. 
Experiments are conducted five times.} \label{tab:mnist_distill_app} \end{table*} \setlength{\tabcolsep}{3pt} \begin{table*}[t] \centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lccccccc} \toprule & Teacher & Teacher & Student & Noisy Student & Noisy Student & Additional \\ & LeNet & MLP & MLP & MLP & MLP & Bits \\ & & (Student baseline) & (No Noise) & (with N(0,0.1)) & (with N(0,0.006)) & & \\ \midrule No Noise & 99.20 & 97.50 & 97.80 & 96.30 & \textbf{97.30} &0\\ N(0,0.1)+No Protection & 22.60 ($\pm$ 7.7) & 77.72 ($\pm$ 3.8) & 58.90 ($\pm$ 3.6) & \textbf{93.80} ($\pm$ 0.4) & 62.20 ($\pm$ 3.5) &0\\ N(0,0.1)+SP+AP & 99.22 ($\pm$ 0.0) & 95.92 ($\pm$ 0.3) & 95.96 ($\pm$ 0.8) & 95.82 ($\pm$ 0.2) & \textbf{97.02} ($\pm$ 0.2) & 2\\ N(0,0.2)+No Protection & 12.02 ($\pm$ 3.0) & 33.50 ($\pm$ 3.4) & 24.78 ($\pm$ 6.3) & \textbf{74.82} ($\pm$ 5.0) & 25.02 ($\pm$ 2.8) & 0\\ N(0,0.2)+SP+AP & 99.18 ($\pm$ 0.1) & 90.36 ($\pm$ 1.0) & 86.30 ($\pm$ 4.6) & \textbf{95.06} ($\pm$ 0.3) & 93.06 ($\pm$ 2.1) &2 \\ N(0,0.06)+No Protection & 99.24 ($\pm$ 0.0) & 97.38 ($\pm$ 0.1) & 97.82 ($\pm$ 0.1) & 96.28 ($\pm$ 0.0) & \textbf{97.86} ($\pm$ 0.0) & 0\\ N(0,0.06)+SP+AP & 99.20 ($\pm$ 0.0) & 97.50 ($\pm$ 0.0) & 97.80 ($\pm$ 0.0) & 96.30 ($\pm$ 0.0) & \textbf{97.90} ($\pm$ 0.0) &2 \\ \bottomrule \\ \end{tabular} } \caption{Accuracy of MLP distilled from LeNet on MNIST when weights are perturbed by the Gaussian noise. SP: sign protection, AP: adaptive protection (adaptive mapping+sparsity-driven adaptive redundancy). Experiments are conducted five times.} \label{tab:mnist_distill_Gaussian_app} \end{table*} \clearpage \subsubsection{Logistic Regression Experiments} The logistic regression experiments are conducted in two steps. First, we train a logistic regression model on each Gaussian mixture dataset -- recall that there are 50 datasets each for the ``easy'' (separated means) and ``hard'' (overlapping means) tasks.
For learning the classifier, we use a batch size of 128 and train each model with SGD, with learning rate = 0.1 and weight decay = 0.0005. We perform early stopping for each model on a held-out validation set. After training the logistic regression models, we use an autoencoder whose encoder and decoder are both MLPs with ReLU nonlinearities, as shown in Table~\ref{table:mlp_ae_arch}. The autoencoder for both the ``easy'' and ``hard'' tasks is trained for 10 epochs with a batch size of 100 using the Adam optimizer with learning rate = 0.001, $\beta_1 = 0.9$, $\beta_2 = 0.999$, and early stopping on a held-out validation set. We test the autoencoder on both Gaussian and PCM noise: that is, we corrupt the 1-dimensional encoder output (the z-representation of the classifier weights) with the appropriate perturbation before passing the encoded representation to the decoder. Figure~\ref{fig:appendix_logreg} provides an illustration of the autoencoding process for the PCM array, which is analogous to the Gaussian noise setup. \begin{table}[h] \centering \begin{tabular}{c|c} \hline \textbf{Name}& \textbf{Component}\\ \hline (Encoder) Input Layer & Linear $3 \rightarrow 1$, ReLU \\ \hline (Encoder) Hidden Layer & Linear $1 \rightarrow 1$ \\ \hline (Decoder) Hidden Layer & Linear $1 \rightarrow 1$, ReLU \\ \hline (Decoder) Output Layer & Linear $1 \rightarrow 3$ \\ \hline \end{tabular} \caption{MLP-based autoencoder architecture for synthetic experiments, trained on 50 logistic regression classifiers per task.} \label{table:mlp_ae_arch} \end{table} \label{appendix_logreg} \begin{figure}[!h] \centering \subfigure[Autoencoder for logistic regression experiment. ]{\includegraphics[width=.42\textwidth]{figures/autoencoder_v2.png}} \caption{Illustration of the autoencoder framework used for the logistic regression experiments.
The input weight $W$ is mapped by an encoder module to a compressed representation $\hat{Z}$, which is then perturbed by the PCM (or, analogously, Gaussian) noise channel to become a perturbed representation $Z$. This perturbed representation is then passed to the decoder, which produces a reconstructed weight $\hat{W}$.} \label{fig:appendix_logreg} \end{figure} In Figures~\ref{fig:logreg_z} and~\ref{fig:hard_data_z}, we qualitatively analyze the learned representations of the logistic regression classifier weights. In Figure~\ref{fig:logreg_z}(a), we plot all 50 datasets of the Gaussian mixtures (``easy task'') as well as the true decision boundaries for each of the 50 logistic regression models, each boundary colored by the magnitude of its z-representation as learned by the autoencoder. We plot the same for the ``hard task'' in Figure~\ref{fig:logreg_z}(b). For the easy task, we note that the classifiers are encoded by their relative location in input-space: those that are in the lower left corner of the scatter plot have smaller magnitudes than those on the upper right. For the hard task, however, the z-representations appear to be fairly random -- at first glance, there does not appear to be a particular correlation between the magnitudes of the z-encodings and the original classifier weights. We further explore this phenomenon for the hard task in Figure~\ref{fig:hard_data_z}. For two particular datasets (though the trend holds across all 50 datasets), we color the original data points by their mixture component as well as the true decision boundary by the magnitude of its z-encoding. As shown in Figure~\ref{fig:hard_data_z}(a) and (b), we find that the autoencoder has learned to map all classifiers with the positive class to the left of the decision boundary to z-representations with large magnitude; conversely, those with the positive class to the right of the decision boundary are encoded to z-representations with smaller magnitudes.
Through this preliminary investigation, we demonstrate that the end-to-end approach of jointly learning the compression scheme and accounting for the physical constraints of the storage device shows promise. \begin{figure*} \centering \subfigure[Logistic regression experiment, easy task. ]{\includegraphics[width=.47\textwidth]{figures/easy_task_z.png}} \subfigure[Logistic regression experiment, hard task. ]{\includegraphics[width=.47\textwidth]{figures/hard_task_z.png}} \caption{Qualitative visualizations of the learned representations in the logistic regression experiment. (a) shows all 50 datasets with the true decision boundaries colored by the magnitude of their z-representations for the ``easy task'', while (b) shows the analogous plot for the harder task with overlapping means.} \label{fig:logreg_z} \end{figure*} \begin{figure*} \centering \subfigure[Encoded classifier with large magnitude. ]{\includegraphics[width=.47\textwidth]{figures/hard_dataset_2.png}} \subfigure[Encoded classifier with small magnitude. ]{\includegraphics[width=.47\textwidth]{figures/hard_dataset_5.png}} \caption{Qualitative visualizations of the learned representations for the hard logistic regression task. We find in (a) that classifiers with large magnitudes in z-space have the positive labels to the left of the decision boundary, while (b) those with small magnitudes have the positive labels to the right of the decision boundary.} \label{fig:hard_data_z} \end{figure*}
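As a minimal, illustrative sketch of this setup (not the code used in the experiments), the forward pass of the autoencoder from Table~\ref{table:mlp_ae_arch} together with a Gaussian stand-in for the noise channel can be written in a few lines of NumPy; the weights below are random placeholders, whereas in the experiments they are learned end-to-end with Adam:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes follow Table mlp_ae_arch: encoder Linear 3->1 (ReLU), Linear 1->1;
# decoder Linear 1->1 (ReLU), Linear 1->3.  Random placeholder weights.
W_enc1, b_enc1 = rng.normal(size=(1, 3)), np.zeros(1)
W_enc2, b_enc2 = rng.normal(size=(1, 1)), np.zeros(1)
W_dec1, b_dec1 = rng.normal(size=(1, 1)), np.zeros(1)
W_dec2, b_dec2 = rng.normal(size=(3, 1)), np.zeros(3)

relu = lambda x: np.maximum(x, 0.0)

def encode(w):
    h = relu(W_enc1 @ w + b_enc1)
    return W_enc2 @ h + b_enc2          # 1-dimensional latent (z-hat)

def noise_channel(z, sigma=0.1):
    # Gaussian stand-in for the storage channel: z-hat -> z
    return z + rng.normal(scale=sigma, size=z.shape)

def decode(z):
    h = relu(W_dec1 @ z + b_dec1)
    return W_dec2 @ h + b_dec2          # reconstructed weight vector (w-hat)

w = np.array([0.5, -1.2, 0.3])          # a 3-d logistic-regression weight vector
w_hat = decode(noise_channel(encode(w)))
```

In the PCM experiments, `noise_channel` would be replaced by the programmed-and-read PCM perturbation of the latent.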
\section{Introduction} In this paper we are concerned with colorings of simple planar digraphs, i.e., orientations of planar graphs. We consider the following notion of coloring. \begin{definition} Let $F$ be a digraph. \begin{itemize} \item A digraph is \emph{$F$-free} if it has no induced subdigraph isomorphic to $F$. \item A coloring $c\mathop{:}V(D) \rightarrow \{0,1,\ldots,k-1\}$ of the vertices of a digraph $D$ is called an \emph{$F$-free $k$-coloring} or simply \emph{$F$-free} if each of the subdigraphs $D[c^{-1}(i)]$, $i=0,1,\ldots,k-1$, induced by the color classes is $F$-free. \end{itemize} \end{definition} There is an analogous definition of $F$-free coloring for undirected graphs, which has been the subject of study of several previous papers. Gimbel and Hartman~\cite{GIMBEL2003} studied \emph{subcolorings} of graphs, which are colorings in which every color class induces a disjoint union of cliques, or, equivalently, in which no induced $P_3$ is monochromatic. Generalizing this setting, Gimbel and Ne\v{s}et\v{r}il~\cite{Gimbel2010} studied the complexity of \emph{cograph colorings} of graphs, i.e., vertex-colorings in which every color class induces a cograph. As the latter are exactly the graphs avoiding $P_4$ as an induced subgraph, these are the $P_4$-free colorings. In both cases hardness results were obtained even for planar graphs: It was shown that deciding whether a given planar graph admits a $2$-subcoloring, a $3$-subcoloring, a $2$-cograph coloring, or a $3$-cograph coloring is \NP-complete in each case. All these results were further generalized by Broersma, Fomin, Kratochv\'il and Woeginger~\cite{Broersma2005}, who gave a complete dichotomous description of the complexity of coloring planar graphs avoiding a monochromatic induced copy of some connected planar graph $F$.
They proved that the $2$-coloring problem is \NP-hard if $F$ is a tree consisting of at least two edges and polynomially solvable in all other cases. The $3$-coloring problem was shown to be \NP-hard if $F$ is a path of positive length and polynomially solvable in all remaining cases. In this paper, we will study the directed version of this problem. For a given digraph $F$, we study the complexity of $F$-free $k$-coloring for planar digraphs as input. Here $U(F)$ denotes the underlying undirected graph of the digraph $F$; note that given a $U(F)$-free coloring of a graph, we can orient the edges arbitrarily and obtain an $F$-free coloring of the resulting orientation. We determine the complexity of $P$-free coloring of planar digraphs with $2$ or $3$ colors for every oriented path $P$. More generally, for a fixed digraph $F$ we would like to determine the complexity of the following decision problem. \begin{problem}[PLANAR $F$-FREE $k$-COLORING, $k$-$F$-PFC] Given a planar digraph $D$, determine whether or not $D$ admits an $F$-free coloring using $k$ colors. \end{problem} Note that the only interesting variants of the problem are for $k \in \{2,3\}$, as clearly for $k \ge 4$ and any non-trivial digraph $F$ on at least two vertices the $4$-color theorem implies that every simple planar digraph admits an $F$-free coloring with $k$ colors. On the other hand, if $|V(F)|=1$ or $k=1$, the problem admits (trivial) polynomial algorithms. Finally, if $F$ consists of isolated vertices only, no planar digraph on more than $4k\,|V(F)|$ vertices can have an $F$-free $k$-coloring. The reason for this is as follows: In an $F$-free $k$-coloring of a planar digraph $D$, every color class induces a planar subdigraph which contains no stable set of size $|V(F)|$. By the $4$-color theorem, this means that this subdigraph has at most $4(|V(F)|-1)<4|V(F)|$ vertices. Hence, in total a planar digraph which admits an $F$-free $k$-coloring has order at most $k\, \cdot\, 4|V(F)|$.
Hence, since relevant instances of this problem are bounded in size, any (brute-force) algorithm responds in constant time. \begin{observation} The $k$-$F$-PFC is polynomial-time solvable for $k \ge 4$ and every digraph $F$. \end{observation} Using the following fact from the literature we can further restrict our attention to the case where $F$ is an oriented tree. \begin{proposition}[\cite{thomassencycles}, \cite{thomassentriangles}] Let $G$ be a planar (undirected) graph, and let $n \ge 3$. Then $G$ admits a $C_n$-free $2$-coloring. \end{proposition} \begin{proof} The case $n=3$ is well-known and can be found for instance in \cite{thomassentriangles}, while the cases $n \ge 4$ are covered by \cite{thomassencycles}. \end{proof} \begin{corollary} The $k$-$F$-PFC is polynomial-time solvable for every $k \in \mathbb{N}$ and every digraph $F$ containing a (not necessarily directed) cycle. \end{corollary} For the case of $k=3$ colors, a more general statement holds true. \begin{proposition} Let $D$ be a planar digraph and $F$ an orientation of a graph of maximum degree at least $3$. Then $D$ admits an $F$-free $3$-coloring. \end{proposition} \begin{proof} The main result of~\cite{linear3} states that the vertex set of every planar graph can be partitioned into $3$ parts, each of them inducing a disjoint union of paths. This clearly yields an $F$-free coloring of every planar digraph $D$, hence we can solve the problem by returning 'true' for every planar input digraph $D$. \end{proof} On the negative side, one can show that the $F$-free $2$- and $3$-coloring problem becomes hard if $F$ is an orientation of a path. More precisely, the following are our main results. By $\mathcal{P}_n$ we denote the set of all orientations of the path $P_n$ on $n \in \mathbb{N}$ vertices. \begin{theorem} \label{thm:2col_NPhard} Let $n \in \mathbb{N}$ and $\vec{P} \in \mathcal{P}_n$. 
Then the $2$-$\vec{P}$-PFC is polynomial-time solvable for $n \le 2$ and \NP-complete for $n \ge 3$, even restricted to acyclic digraphs. \end{theorem} \begin{theorem}\label{thm:3col_NPhard} Let $n \in \mathbb{N}$ and $\vec{P} \in \mathcal{P}_n$. Then the $3$-$\vec{P}$-PFC is polynomial-time solvable for $n=1$ and \NP-complete for $n \ge 2$, even restricted to acyclic digraphs. \end{theorem} In particular, this means that for directed paths $P$ of arbitrary length there are orientations of planar graphs which require four colors for a $P$-free coloring. This is in contrast to Neumann-Lara's Two-Color-Conjecture that any orientation of a planar graph admits a $2$-coloring without a monochromatic directed cycle~\cite{NeumannLara1982}. The proofs of Theorem~\ref{thm:2col_NPhard} and Theorem~\ref{thm:3col_NPhard} are separated into two steps: first we reduce the problems to cases of small $n$ in Section~\ref{sec:remove}, and then we describe explicit \NP-hardness reductions for paths of lengths $1$, $2$ and $3$ in Sections~\ref{sec:basecases2},~\ref{sec:basecases25}, and~\ref{sec:basecases3}. In Section~\ref{sec:outer} we show for every orientation $\vec{P}$ of a path the existence of an acyclic outerplanar digraph without a $\vec{P}$-free $2$-coloring. We use this side result later in the \NP-hardness reductions. Note that the complexity of the $2$-$F$-PFC where the underlying undirected tree of $F$ has maximum degree at least $3$ remains partially open. We show that the problem is \NP-hard if the tree obtained from $F$ by successively deleting all leaves is a path of length $2$ or $3$. The complexity in the cases where this process yields a star $K_{1,n}$ remains undetermined. \paragraph{\textbf{Notation}} All digraphs considered in this paper are simple, that is, they contain neither loops nor parallel or anti-parallel arcs. An arc (or directed edge) in a digraph $D$ with tail $u$ and head $v$ is denoted by $(u,v)$.
By $V(D)$ and $A(D)$ we denote, respectively, the set of vertices and arcs in the digraph $D$. If $D$ is a directed graph, the \emph{underlying graph} $U(D)$ is the undirected graph obtained from $D$ by ignoring the orientations of the edges. Vice versa, if $G$ is an undirected graph, then any (simple) digraph $D$ with $U(D)=G$ is called an \emph{orientation} of $G$. A \emph{proper coloring} of an undirected graph $G$ is an assignment of colors to the vertices such that adjacent vertices receive distinct colors. \section{Outerplanar digraphs forcing a monochromatic oriented path}\label{sec:outer} In this section, we prove the following auxiliary result, which establishes the existence of acyclic outerplanar digraphs in which every $2$-coloring induces a monochromatic induced copy of a given oriented path. By an \emph{outerplanar digraph} we mean an orientation of an outerplanar graph (i.e., a graph which can be drawn in the plane with no edges intersecting and all vertices on the outer face). \begin{theorem}\label{outermonchromaticpaths} Let $\vec{P}$ be an oriented path. Then there exists an acyclic outerplanar digraph $\vec{O}(\vec{P})$ with the property that every $2$-coloring of its vertices contains a monochromatic induced copy of $\vec{P}$. \end{theorem} \begin{figure}[htb] \centering \includegraphics[scale = 0.6,page = 1]{figs/Fan_monochromaticP} \caption{The fan $F_5$ with a path of length $4$ and the root $u$.} \label{fig:Fan} \end{figure} \begin{figure} \centering \includegraphics[scale = 0.5, page = 2]{figs/Fan_monochromaticP} \caption{Illustration of $O^{4}$ with root $u$. } \label{fig:TreeOfFans} \end{figure} \begin{proof} Let $\vec{P} \in \mathcal{P}_{\ell+1}$ be an oriented path of length $\ell$. We construct an outerplanar graph starting with a fan $F = F_{\ell +1}$ consisting of a path on $\ell +1$ vertices $v_1, \ldots, v_{\ell+1}$ and an additional vertex $u$ which is adjacent to every vertex of the path. We call this additional vertex $u$ the root.
For an illustration see Figure~\ref{fig:Fan}. Starting from this fan $O^{1} = F$ with root $u$ we recursively construct a graph $O^{k+1}$ from the graph $O^k$ for every $k \leq \ell -1$. For this we add a copy $F_v$ of the fan, whose root is identified with $v$, for every vertex $v \in V(O^k)$ which has distance $k$ from the vertex $u$. The resulting graph $O^{\ell}$ (see Figure~\ref{fig:TreeOfFans}) has vertices with distance $\ell$ from $u$. Clearly this graph $O^{\ell}$ is outerplanar. We now orient the edges of the graph $O^{\ell}$ depending on the oriented path $\vec{P}$. The edges of the path in every copy of the fan $F$ are oriented in such a way that the resulting oriented path is isomorphic to the considered path $\vec{P}$. Furthermore, in every copy of the fan we orient the arcs between the root vertex of this copy and the other vertices in such a way that the induced path connecting every vertex with distance $\ell$ to the vertex $u$ of $O^{\ell}$ is isomorphic to the considered path $\vec{P}$. We denote the acyclic outerplanar digraph constructed in this way by $\vec{O}(\vec{P})$. We now consider $2$-colorings of $\vec{O}(\vec{P})$ and show that every $2$-coloring contains an induced monochromatic subgraph isomorphic to $\vec{P}$. Since there is an induced copy of $\vec{P}$ in every copy of the fan, both colors must be used on this induced path in order to avoid a monochromatic induced $\vec{P}$. Now we look at the color $c(u) \in \{ 0,1\}$ of the root. In every induced path isomorphic to $\vec{P}$ in a copy of a fan $F_v$ at least one vertex is colored with $c(u)$. Hence there is a monochromatic path from $u$ to a vertex with distance $\ell$ from $u$. Since we defined the orientation in such a way that these paths are isomorphic to $\vec{P}$, every $2$-coloring of $\vec{O}(\vec{P})$ contains a monochromatic $\vec{P}$.
\end{proof} \section{Removing leaves}\label{sec:remove} In this section, we show the following two results, which will be used as the ``inductive steps'' in the proofs of Theorem~\ref{thm:2col_NPhard} and Theorem~\ref{thm:3col_NPhard}, respectively. We need the following notation: Given an oriented tree $\vec{T}$, we denote by $\rm{lrem}(\vec{T})$ the oriented tree obtained from $\vec{T}$ by deleting all its leaves. \begin{proposition}\label{remove2col} Let $\vec{T}$ be an oriented tree. Then the $2$-${\rm lrem}(\vec{T})$-PFC reduces polynomially to the $2$-$\vec{T}$-PFC. This remains true for the restrictions of both problems to acyclic digraphs. \end{proposition} We obtain a similar result for the $3$-coloring problem. \begin{proposition}\label{remove3col} Let $\vec{P} \in \mathcal{P}_n$, $n \ge 4$. Then the $3$-${\rm lrem}(\vec{P})$-PFC reduces polynomially to the $3$-$\vec{P}$-PFC. This remains true for the restrictions of both problems to acyclic digraphs. \end{proposition} \begin{proof}[Proof of Proposition~\ref{remove2col}] Suppose we are given a planar digraph $D$ as an instance of the $2$-${\rm lrem}(\vec{T})$-PFC. Let $\ell$ be the number of leaves of $\vec{T}$. We construct a digraph $D'$ obtained from $D$ by adding for each vertex $v \in V(D)$ the disjoint copies $\vec{T}_{1,v}^+, \ldots, \vec{T}_{\ell,v}^+, \vec{T}_{1,v}^-, \ldots ,\vec{T}_{\ell,v}^- $ of $\vec{T}$ to $D$ and adding for all $2\ell$ disjoint copies all the arcs $(v,u)$, $u \in V(\vec{T}_{j,v}^+)$ and $(u,v)$, $u \in V(\vec{T}_{j,v}^-)$, $j = 1, \ldots, \ell$. The digraph $D'$ is still planar, since each of the attached copies $\vec{T}_{j,v}^+, \vec{T}_{j,v}^-$, $v \in V(D)$, $j = 1, \ldots, \ell$ is a tree and therefore an outerplanar graph. \paragraph{\textbf{Claim}} $D'$ admits a $\vec{T}$-free $2$-coloring if and only if $D$ admits a ${\rm lrem}(\vec{T})$-free $2$-coloring. \begin{proof} Suppose for the first direction we are given a $\vec{T}$-free $\{0,1\}$-coloring $c'$ of $D'$. 
Let $c\mathop{:=}c'|_{V(D)}$ be the $2$-coloring induced by $c'$ on $D$. We claim that $c$ is ${\rm lrem}(\vec{T})$-free. Towards a contradiction assume that for some $i \in \{0,1\}$ there is $X \subseteq c^{-1}(i) \subseteq V(D)$ such that $D[X]$ is isomorphic to ${\rm lrem}(\vec{T})$. Now, since $c'$ is a $\vec{T}$-free coloring of $D'$, in each of the copies $\vec{T}_{j,v}^+, \vec{T}_{j,v}^-$ for $v \in V(D)$ and $j = 1, \ldots, \ell$ there must exist vertices of both colors. By picking a vertex of color $i$ from each of these copies and adding them to $X$ we obtain a monochromatic vertex-set $X'$ in $(D',c')$ such that $D'[X']$ is isomorphic to the digraph obtained from ${\rm lrem}(\vec{T})$ by attaching to every vertex $t \in V({\rm lrem}(\vec{T}))$ new vertices $t_1^+, \ldots, t_{\ell}^+, t_1^-, \ldots, t_{\ell}^-$ with the arcs $(t,t_j^+)$ and $(t_j^-,t)$ for all $j = 1, \ldots, \ell$. Clearly, this digraph contains a copy of $\vec{T}$ as an induced subdigraph and hence $(D',c')$ contains a monochromatic copy of $\vec{T}$, which contradicts our assumption on $c'$. Hence, $c$ is indeed ${\rm lrem}(\vec{T})$-free. This shows the first implication. For the reverse direction, suppose $c\mathop{:}V(D) \rightarrow \{0,1\}$ is an ${\rm lrem}(\vec{T})$-free $2$-coloring of $D$. We extend this to a $2$-coloring $c'$ of $D'$ by properly coloring the vertices within each copy $\vec{T}_{j,v}^+,\vec{T}_{j,v}^-$, $j = 1, \ldots, \ell$, of $\vec{T}$ according to the bipartition of the underlying tree $T$. We claim that $c'$ is $\vec{T}$-free. Suppose towards a contradiction that for some $i \in \{0,1\}$ there is $X' \subseteq (c')^{-1}(i)$ such that $D'[X']$ is isomorphic to $\vec{T}$. By definition of $c'$, every vertex in $X' \setminus V(D)$ is incident to at most one monochromatic arc and hence must be a leaf of $D'[X']$.
We conclude that $X\mathop{:=}X' \cap V(D)$ is a monochromatic vertex-set in $(D,c)$ such that $D[X]$ is isomorphic to a digraph obtained from $\vec{T}$ by removing some of its leaves. Hence, $D[X]$ contains a monochromatic copy of ${\rm lrem}(\vec{T})$ as an induced subgraph. However, this contradicts our initial assumption on the coloring $c$ of $D$, and shows that indeed $c'$ defines a $\vec{T}$-free $2$-coloring of $D'$. This finishes the proof of the claimed equivalence. \phantom\qedhere \hfill $\triangle$ \end{proof} Since the sizes of $D$ and $D'$ are linearly related, we have found a polynomial reduction of the $2$-${\rm lrem}(\vec{T})$-PFC to the $2$-$\vec{T}$-PFC. This concludes the proof of the first part of the proposition. For the second claim in the proposition it suffices to verify that $D'$ is acyclic if and only if $D$ is acyclic, since then we can use the same polynomial reduction to also reduce the $2$-${\rm lrem}(\vec{T})$-PFC restricted to acyclic inputs to the $2$-$\vec{T}$-PFC with acyclic inputs. However, this directly follows since $D$ is an induced subdigraph of $D'$, and since each of the copies $\vec{T}_{j,v}^+,\vec{T}_{j,v}^-$, $v \in V(D)$, $j = 1 , \ldots, \ell$ of $\vec{T}$ themselves are clearly acyclic and separated from the rest of the graph by directed edge-cuts. This shows the second claim in the proposition and concludes the proof. \end{proof} The following proof of Proposition~\ref{remove3col} works analogously to the previous proof, except that we attach copies of the outerplanar digraphs described in Section~\ref{sec:outer}. \begin{proof}[Proof of Proposition~\ref{remove3col}] Suppose we are given a planar digraph $D$ as an instance of the $3$-${\rm lrem}(\vec{P})$-PFC. 
Let $D'$ be the digraph obtained from $D$ by adding for each vertex $v \in V(D)$ two disjoint copies $\vec{O}_v^+, \vec{O}_v^-$ of the outerplanar acyclic digraph $\vec{O}(\vec{P})$ from Theorem~\ref{outermonchromaticpaths} to $D$ and adding all the arcs $(v,u)$, $u \in V(\vec{O}_v^+)$ and $(u,v)$, $u \in V(\vec{O}_v^-)$. The digraph $D'$ is still planar, since each of the attached copies $\vec{O}_v^+, \vec{O}_v^-, v \in V(D)$ is outerplanar. The following claim follows analogously to the proof of Proposition~\ref{remove2col} using the fact that each outerplanar graph has chromatic number at most $3$ and that in each of the copies $\vec{O}_v^+, \vec{O}_v^-$ for all $v \in V(D)$ all three colors are needed, see Theorem~\ref{outermonchromaticpaths}. \paragraph{\textbf{Claim}} $D'$ admits a $\vec{P}$-free $3$-coloring if and only if $D$ admits an ${\rm lrem}(\vec{P})$-free $3$-coloring. \\ \iffalse \begin{proof} Suppose for the first direction we are given a $P$-free $3$-coloring $c':V(D') \rightarrow \{0,1,2\}$ of $D'$. Let $c\mathop{:=}c'|_{V(D)}$ be the $3$-coloring induced by $c'$ on $D$. We claim that $c$ is ${\rm lrem}(P)$-free. Suppose by way of contradiction that for some $i \in \{0,1,2\}$ there is $X \subseteq c^{-1}(i) \subseteq V(D)$ such that $D[X]$ is isomorphic to ${\rm lrem}(P)$. Now, since $c'$ is a $P$-free coloring of $D'$, Proposition~\ref{outermonchromaticpaths} implies that in each of the copies $\vec{O}_v^+, \vec{O}_v^-$ for $v \in V(D)$ there must exist vertices of all three colors. By picking a vertex of color $i$ from each of these copies and adding them to $X$ we obtain a monochromatic vertex-set $X'$ in $(D',c')$ such that $D'[X']$ is isomorphic to the digraph obtained from ${\rm lrem}(P)$ by attaching to every vertex $p \in V({\rm lrem}(P))$ two new vertices $p^+, p^-$ with the arcs $(p,p^+)$ and $(p^-,p)$.
Clearly, this digraph contains a copy of $P$ as an induced subdigraph and hence $(D',c')$ contains a monochromatic copy of $P$, which contradicts our assumption on $c'$. Hence, $c$ is indeed ${\rm lrem}(P)$-free. This shows the first implication. For the reverse direction, suppose we are given a ${\rm lrem}(P)$-free $3$-coloring $c\mathop{:}V(D) \rightarrow \{0,1,2\}$ of $D$. We extend this to a $3$-coloring $c'$ of $D'$ by properly coloring the vertices within each copy $\vec{O}_v^+, \vec{O}_v^-$ of $\vec{O}(P)$ with the colors $\{0,1,2\}$ (this is possible, since every outerplanar graph has chromatic number at most $3$). We claim that $c'$ is $P$-free. Suppose by way of contradiction that for some $i \in \{0,1,2\}$ there is $X' \subseteq V(D')$ such that $D'[X']$ is isomorphic to $P$. By definition of $c'$, every vertex in $X' \setminus V(D)$ is incident to at most one monochromatic arc and hence must be a leaf of $D'[X']$. We conclude that $X:=X' \cap V(D)$ is a monochromatic vertex-set in $(D,c)$ such that $D[X]$ is isomorphic to a digraph obtained from $P$ by removing a subset of its two endpoints. Hence, $D[X]$ contains a monochromatic copy of ${\rm lrem}(P)$ as an induced subgraph. However, this contradicts our initial assumption on the coloring $c$ of $D$, and shows that indeed, $c'$ defines a $P$-free $3$-coloring of $D'$. This finishes the proof of the claimed equivalence. \end{proof} \fi Since the sizes of $D$ and $D'$ are linearly related, we have found a polynomial reduction of the $3$-${\rm lrem}(\vec{P})$-PFC to the $3$-$\vec{P}$-PFC. This concludes the proof of the first part of the proposition. For the second claim in the proposition it suffices to verify that $D'$ is acyclic if and only if $D$ is acyclic, since then we can use the same polynomial reduction to also reduce the $3$-${\rm lrem}(\vec{P})$-PFC restricted to acyclic inputs to the $3$-$\vec{P}$-PFC with acyclic inputs. 
However, this directly follows since $D$ is an induced subdigraph of $D'$, and since each of the copies $\vec{O}_v^+,\vec{O}_v^-$, $v \in V(D)$ of $\vec{O}(\vec{P})$ themselves are acyclic (as guaranteed by Theorem~\ref{outermonchromaticpaths}) and separated from the rest of the graph by directed edge-cuts. This shows the second claim in the Proposition and concludes the proof. \end{proof} \section{$P$-free $2$-coloring for paths of length $2$}\label{sec:basecases2} In this section we show Theorem \ref{thm:2col_NPhard} for $n=3$. \begin{proposition}\label{3path2colors} For every $\vec{P} \in \mathcal{P}_3$, the $2$-$\vec{P}$-PFC is \NP-hard, even restricted to acyclic inputs. \end{proposition} \begin{proof} Up to isomorphism there are three different orientations of the path $P_3$. The directed path $\vec{P}_3$, an orientation where the middle vertex of $P_3$ is a sink, denoted by $\vec{V}_3$, and the one where the middle vertex is a source, denoted by $\revvec{V}_3$, see Figure \ref{fig:OrientationP3}. The latter two oriented paths are equivalent up to the reversal of all arcs, and hence it suffices to prove that $\vec{P}_3$-free $2$-coloring and $\vec{V}_3$-free $2$-coloring of planar acyclic digraphs is \NP-hard. To prove these results, we will reduce the $3$-SAT problem to each of the two problems. For this we need to introduce some gadgets. \begin{figure}[htb] \centering \includegraphics{figs/OrientationP3} \caption{The three different orientations of the path $P_3$.} \label{fig:OrientationP3} \end{figure} \paragraph{\textbf{\NP-hardness of the $2$-$\vec{P}_3$-PFC}} First we consider the orientation $\vec{P}_3$. The negator gadget as depicted in Figure \ref{fig:negatorP3} forces the vertices $x$ and $y$ to have different colors. 
\begin{lemma} \label{lem:negator2P3free} The negator gadget in Figure~\ref{fig:negatorP3} has the following properties: \begin{itemize} \item In every $\vec{P}_3$-free 2-coloring of the negator gadget, the vertices $x$ and $y$ must receive different colors. \item There is a $\vec{P}_3$-free 2-coloring of the negator gadget in which the unique incident edges of $x$ and $y$ are bichromatic. \item The negator gadget is acyclic. \end{itemize} \end{lemma} \begin{figure}[htb] \centering \includegraphics[page = 3,scale= 0.7]{figs/Negator} \\[2ex] \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[page = 1,scale= 0.7]{figs/Negator} \caption{} \label{fig:negatorP3} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[page = 4,scale = 0.7 ,valign = c]{figs/Negator} \caption{} \label{fig:NegatorV3} \end{subfigure} \caption{Negator gadget for (a) $\vec{P_3}$-free colorings and (b) for $\vec{V}_3$-free colorings.} \end{figure} \begin{proof} Assume towards a contradiction that $x$ and $y$ are both colored with the same color $i \in \{0,1\}$. The three vertices $v_1,v_2,v_3$ on the vertical path in the gadget (see Figure~\ref{fig:negatorP3}) induce a $\vec{P}_3$, hence both colors have to be used on these three vertices. The vertex colored with $i$ forces $x'$ and $y'$ to be colored with color $j \neq i$. But there is a vertex colored with $j$ in the vertical path, which is a contradiction since we obtain a monochromatic induced $\vec{P}_3$. Hence $c(x) \neq c(y)$ for all $\vec{P}_3$-free 2-colorings~$c$. It is easy to see that the negator gadget is acyclic and that there is a $\vec{P}_3$-free $2$-coloring with bichromatic edges $xx'$ and $yy'$. \end{proof} By concatenating two negator gadgets, we can force two vertices to have the same color. We call this gadget, depicted in Figure~\ref{fig:extender}, the \emph{extender gadget}.
We connect the two negator gadgets with endpoints $x_1,y_1$ and $x_2,y_2$ by identifying $y_1$ and $y_2$ such that the horizontal paths have reverse directions. \begin{figure}[htb] \centering \includegraphics[page = 2,scale= 0.7]{figs/Extender} \\[2ex] \includegraphics[page = 1,scale= 0.7]{figs/Extender} \caption{Extender gadget for $\vec{P_3}$-free and $\vec{V}_3$-free colorings.} \label{fig:extender} \end{figure} It follows directly from Lemma~\ref{lem:negator2P3free} that the extender gadget fulfills the following properties. \begin{corollary}\label{cor:extenderP3} The extender gadget in Figure~\ref{fig:extender} has the following properties: \begin{itemize} \item In every $\vec{P}_3$-free 2-coloring of the extender gadget the vertices $x$ and $y$ must receive the same color. \item There is a $\vec{P}_3$-free 2-coloring in which the unique incident edges of $x$ and $y$ are bichromatic. \item The extender gadget is acyclic. \end{itemize} \end{corollary} Another gadget we will need is the \emph{crossover gadget}. This gadget forces two pairs of antipodal vertices to have the same color. \begin{lemma} \label{lem:crossoverP3} The crossover gadget as depicted in Figure~\ref{fig:ColoringCrossoverP3} has the following properties: \begin{itemize} \item In every $\vec{P}_3$-free 2-coloring of the crossover gadget the vertices $x$ and $x'$ as well as the vertices $y$ and $y'$ have the same color. \item For every assignment $p\mathop{:}\{x,x',y,y'\} \to \{0,1\}$ with $p(x) = p(x')$ and $p(y) = p(y')$ there exists a $\vec{P}_3$-free 2-coloring $c$ of the crossover gadget such that $c(x) = c(x')= p(x)= p(x')$, $c(y) = c(y')= p(y)= p(y')$, and all edges incident to $x,x',y,y'$ in the gadget are bichromatic. \item The crossover gadget is acyclic.
\end{itemize} \end{lemma} \begin{figure}[htb] \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[page = 2,scale= 0.65]{figs/Crossover} \caption{} \label{fig:ColoringCrossoverP3} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[page = 3,scale = 0.65]{figs/Crossover} \caption{} \label{fig:crossoverV3} \end{subfigure} \caption{Crossover gadget for (a) $\vec{P}_3$ with possible $2$-colorings and (b) crossover gadget for $\vec{V}_3$.} \end{figure} \begin{proof} Using the negators and extenders of the crossover, we see immediately that $y$ and $y'$ must have the same color, say $0$. If $x$ is colored with $0$, then $x'$ must have color $0$ as well, as otherwise necessarily a monochromatic copy of $\vec{P}_3$ is created. Similarly, $c(x) = 1$ implies $c(x') = 1$. For an illustration of the cases see Figure~\ref{fig:ColoringCrossoverP3}. Since we can color the extender gadget such that arcs incident to $x,x',y,y'$ are bichromatic and any pre-coloring in which $x$ and $x'$ as well as $y$ and $y'$ have the same color can be extended to a $\vec{P}_3$-free coloring of the gadget, the second property is fulfilled. Moreover, the negator gadgets are arranged in such a way that they cannot contribute to an induced directed cycle. Hence the crossover gadget is acyclic. \end{proof} The last gadget we need is the \emph{clause gadget}. Assume there is a pre-coloring $c\mathop{:}\{t',x',y',z'\} \rightarrow \{0,1\}$ of some vertices of the gadget. \begin{lemma} \label{lem:clauseP3} The clause gadget as depicted in Figure~\ref{fig:clauseGadget} is acyclic, and a pre-coloring $c$ can be extended to a $\vec{P}_3$-free $\{0,1\}$-coloring if and only if $c(t') \in \{c(x'),c(y'),c(z')\}$.
\end{lemma} \begin{figure}[htb] \begin{subfigure}{0.48\textwidth} \centering \includegraphics[page = 1,scale= 0.7]{figs/Clause} \caption{} \label{fig:clauseGadget} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[page=2,scale = 0.7]{figs/Clause} \caption{} \label{fig:clauseV3} \end{subfigure} \caption{Clause gadget for (a) $\vec{P}_3$-free and (b) $\vec{V}_3$-free colorings.} \end{figure} \begin{proof} Assume that $c(t') = 1$. First we show that there is no $\vec{P}_3$-free $2$-coloring of the clause gadget with $c(x') = c(y') = c(z') = 0$. Assume there is such a coloring. Because $x'$ and $y'$ (resp. $y'$ and $z'$) together with a third vertex form an induced $\vec{P}_3$, the third vertex has to be colored with $1$. So $t'$ and the other two unlabeled vertices are all colored with $1$, a contradiction since they form an induced $\vec{P}_3$. It is easy to check that all other combinations lead to a $\vec{P}_3$-free $2$-coloring of the gadget. \end{proof} We are now ready to describe the reduction of $3$-SAT to the $2$-$\vec{P}_3$-PFC. For a given $3$-SAT formula $F = c_1 \wedge c_2 \wedge \ldots \wedge c_k$ with clauses $C = \{ c_1, \ldots , c_k \}$, each consisting of three literals, we construct a planar digraph $G_F$. Let $x_1, \ldots, x_m$ denote the literals in $F$ and $\overline{x_1}, \ldots, \overline{x_m}$ their negations. First we add a vertex for every literal $x_i$ and one for every negation $\overline{x_i}$. Furthermore we add an additional vertex $t$. We start connecting the vertices. Every pair $x_i$, $\overline{x_i}$ is connected by a negator. So we can be sure that in every $\vec{P}_3$-free $2$-coloring these two vertices receive different colors. For every clause $c_i = x_i \vee y_i \vee z_i$, we add a clause gadget such that $x'$ as depicted in Figure~\ref{fig:clauseGadget} is connected to the vertex corresponding to the literal $x_i$ by an extender gadget. Similarly, $y'$ is connected to $y_i$, $z'$ to $z_i$, and $t'$ to $t$.
Note that the graph constructed so far might be non-planar. Still we can draw the graph in such a way that crossings occur only between extender gadgets. We can remove those crossings successively by adding a crossover gadget instead. Let $x$ and $x'$ be two vertices connected by an extender gadget which is crossed by the extender gadget connecting $y$ and $y'$. Now we delete the two extender gadgets and add a crossover gadget instead between those four vertices. If we do this for every crossing between extender gadgets, we get a planar digraph. Since all gadgets are acyclic the constructed graph is acyclic as well. Note that we can always choose the orientation of the negator gadgets in such a way that they do not form an induced directed cycle. We can now conclude the proof that $2$-$\vec{P}_3$-PFC is \NP-hard by reducing $3$-SAT to $2$-$\vec{P}_3$-PFC. We claim that a given 3-SAT formula $F$ is satisfiable if and only if the graph $G_F$ has a proper $\vec{P}_3$-free $2$-coloring. If $F$ is satisfiable, we fix an assignment of the literals such that the formula is true. We color the vertex $t$ with color $1$. Furthermore, we assign the corresponding vertices color $1$ (resp.\ color $0$) if the corresponding literals are assigned true (resp.\ false). Using the properties of the extender and negator gadgets this coloring can be extended to a $2$-coloring of the digraph such that the outer arcs of the negator/extender gadgets are bichromatic. This $2$-coloring is $\vec{P}_3$-free since in the gadgets every $2$-coloring is $\vec{P}_3$-free and the outer arcs are bichromatic. If $F$ is unsatisfiable, then for every assignment of $\{0,1\}$ to the literals $x_1, \ldots , x_m$ there is a clause $c_i = x \vee y \vee z $ which is not satisfied; hence $x,y,z$ are assigned $0$, which corresponds to being colored with $0$. Hence the corresponding clause gadget admits no $\vec{P}_3$-free $2$-coloring extending this pre-coloring, which implies that the constructed digraph has no $\vec{P}_3$-free $2$-coloring.
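As an illustrative aside (not part of the formal reduction), the basic test underlying this argument, detecting a monochromatic induced copy of $\vec{P}_3$ in a $2$-colored digraph, can be sketched in Python; the vertex/arc/color encoding below is our own assumption, not taken from the construction:

```python
from itertools import permutations

def has_mono_induced_P3(vertices, arcs, color):
    """Return True iff the colored digraph contains a monochromatic
    induced copy of the directed path P3, i.e. vertices a, b, c with
    arcs a->b and b->c and no further arcs among {a, b, c}."""
    arcset = set(arcs)
    for a, b, c in permutations(vertices, 3):
        if (color[a] == color[b] == color[c]
                and (a, b) in arcset and (b, c) in arcset
                and not ({(a, c), (c, a), (b, a), (c, b)} & arcset)):
            return True
    return False
```

For instance, on the directed path $1\to 2\to 3$ a constant coloring yields a monochromatic copy, while a coloring that makes the middle vertex the other color does not.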
Since the digraph is constructed in polynomial time in the number of clauses and literals of the 3-SAT formula, this concludes the proof of the \NP-hardness of $2$-$\vec{P}_3$-PFC.\\ Note that a reduction from PLANAR 3-SAT would not simplify the proof since the crossover gadget would still be necessary. To be more precise, the additional vertex $t$, which we introduced in order to fix the color which represents the value \emph{true} for the variables and which is connected by an extender gadget to all clause gadgets, might still cause crossings. \paragraph{\textbf{\NP-hardness of the $2$-$\vec{V}_3$-PFC}} Analogously to the previous case of $2$-$\vec{P}_3$-PFC, we show the \NP-hardness of $2$-$\vec{V}_3$-PFC by reduction from 3-SAT. We define the negator, extender, crossover and clause gadgets similarly. It is easy to check that the negator gadget as depicted in Figure~\ref{fig:NegatorV3} fulfills the following conditions. \pagebreak[3] \begin{lemma}\nopagebreak[4] \noindent \begin{itemize} \item In every $\vec{V}_3$-free 2-coloring of the negator gadget, the vertices $x$ and $y$ must receive different colors. \item In any 2-coloring of a digraph containing the negator gadget as a subdigraph in such a way that edges outside the gadget are incident only to $x$ and $y$, there are no monochromatic copies of $\vec{V}_3$ using an edge of the negator gadget and one of the edges incident to $x$ or $y$ which does not belong to the negator gadget. \item The negator gadget is acyclic. \end{itemize} \end{lemma} The extender gadget is defined in the same manner as in the case of $\vec{P}_3$ by connecting two negator gadgets, see Figure \ref{fig:extender}. In the crossover gadget and the clause gadget, we change the orientation of some arcs as depicted in Figure \ref{fig:crossoverV3} and Figure \ref{fig:clauseV3}.
It is easy to show that these gadgets fulfill the same properties as in the case of $\vec{P}_3$, see Corollary \ref{cor:extenderP3}, Lemma \ref{lem:crossoverP3} and Lemma \ref{lem:clauseP3}. The reduction from 3-SAT works exactly as in the previous case. Hence we have proved the \NP-hardness of $2$-$P$-PFC for every $P \in \mathcal{P}_3$. \end{proof} \section{$P$-free $2$-coloring for paths of length $3$}\label{sec:basecases25} In this section, we prove Theorem~\ref{thm:2col_NPhard} in the case $n=4$. \begin{proposition}\label{4path2colors} Let $\vec{P} \in \mathcal{P}_4$. Then the $2$-$\vec{P}$-PFC is \NP-hard, even restricted to acyclic inputs. \end{proposition} \begin{proof} There are four non-isomorphic orientations of $P_4$ as illustrated in Figure~\ref{fig:OrientationP4}. Firstly $\vec{P}_4$, the directed path of length $3$; secondly $\vec{N}_4$, the anti-directed path of length $3$; the path $\vec{L}_4$ consisting of vertices $v_1,v_2,v_3,v_4$ and arcs $(v_1,v_2)$, $(v_2,v_3)$, $(v_4,v_3)$; as well as the path $\revvec{L}_4$ obtained from $\vec{L}_4$ by reversing all arcs. Since a given planar acyclic digraph $D$ is $\revvec{L}_4$-free $2$-colorable if and only if the (acyclic) digraph $\revvec{D}$ obtained from $D$ by reversing all arcs is $\vec{L}_4$-free $2$-colorable, it suffices to show the \NP-hardness of the $2$-$\vec{P}$-PFC restricted to acyclic inputs for $\vec{P} \in \{\vec{P_4},\vec{N}_4,\vec{L}_4\}$. \begin{figure}[htb] \centering \includegraphics{figs/OrientationP4} \caption{The four different orientations of the $P_4$.} \label{fig:OrientationP4} \end{figure} \paragraph{\textbf{\NP-hardness of the $2$-$\vec{N}_4$-PFC}} We show \NP-hardness by reducing from the $2$-$\vec{V}_3$-PFC restricted to acyclic inputs, which was shown \NP-complete in Section~\ref{sec:basecases2}. Recall that $\vec{V}_3$ is the orientation of the $P_3$ where the middle vertex is a sink. Suppose we are given a planar acyclic digraph $D$ as an input to the $2$-$\vec{N}_4$-PFC.
Let $D'$ be the planar digraph obtained from $D$ by adding for each vertex $v \in V(D)$ a disjoint copy $\vec{N}_4^v$ of $\vec{N}_4$ and connecting it to $v$ with the arcs $(v,u),$ $u \in V(\vec{N}_4^v)$. We claim that $D'$ is acyclic and that $D$ admits a $\vec{V}_3$-free $2$-coloring if and only if $D'$ has an $\vec{N}_4$-free $2$-coloring. Firstly, $D'$ is indeed acyclic, since each of the disjoint subgraphs $D$, $\vec{N}_4^v, v \in V(D)$ is acyclic, and no directed cycle can contain vertices from two distinct subgraphs, since no arc in $D'$ leaves any of the copies $\vec{N}_4^v, v \in V(D)$, back into $V(D)$. To prove the first direction of the equivalence, assume that $D$ admits a $\vec{V}_3$-free $2$-coloring $c\mathop{:}V(D) \rightarrow \{0,1\}$. Then we can extend this to a vertex-$2$-coloring $c'\mathop{:}V(D') \rightarrow \{0,1\}$ of $D'$ by coloring the vertices within each copy $\vec{N}_4^v$ according to a proper $\{0,1\}$-coloring of the (undirected) path $P_4$. We claim that $c'$ is $\vec{N}_4$-free: Suppose towards a contradiction that there is a monochromatic induced copy $x_1, (x_2,x_1), x_2, (x_2, x_3), x_3, (x_4,x_3),x_4$ of $\vec{N}_4$ in $D'$. Since $x_2,x_3,x_4$ induce a monochromatic $\vec{V}_3$ in $D'$, and since $c$ is a $\vec{V}_3$-free $2$-coloring of $D$, there must be $i \in \{2,3,4\}$ such that $x_i \notin V(D)$. Let $v \in V(D)$ be such that $x_i \in V(\vec{N}_4^v)$. If $i \in \{2,4\}$, then $(x_{i},x_{i-1}) \in A(D')$, and since there is no arc in $D'$ leaving $V(\vec{N}_4^v)$, we must have $x_{i-1} \in V(\vec{N}_4^v)$ as well. This however contradicts that $c'$ properly colors the copy $\vec{N}_4^v$. Hence, $i=3$. Then there is a $j \in \{2,4\}$ such that $x_j \neq v$. As we have $(x_j,x_3) \in A(D')$ and since $v$ is the only in-neighbor of $x_3$ outside $V(\vec{N}_4^v)$, we must have $x_j \in V(\vec{N}_4^v)$ as well.
This means that $(x_j,x_3)$ is a monochromatic edge in $\vec{N}_4^v$, again contradicting that $c'$ properly colors this copy. Hence, our assumption was wrong, and $c'$ is indeed an $\vec{N}_4$-free $2$-coloring of $D'$. For the reverse direction, assume that $c':V(D') \rightarrow \{0,1\}$ is an $\vec{N}_4$-free coloring of $D'$, and let $c$ be its restriction to $V(D)$. We claim that $c$ is $\vec{V}_3$-free. Suppose not, then let $x_1,x_2,x_3$ be the vertex-trace of a monochromatic copy of $\vec{V}_3$ in $D$ with sink $x_2$. Since $\vec{N}_4^{x_3}$ is an induced copy of $\vec{N}_4$ in $D'$, it must be bichromatic in the coloring $c'$. Hence, there is $x_4 \in V(\vec{N}_4^{x_3})$ such that $c'(x_4)=c'(x_3)$. This yields that $x_1,x_2,x_3,x_4$ induce a monochromatic copy of $\vec{N}_4$ in the coloring $c'$ of $D'$, a contradiction. This shows that $D$ is indeed $\vec{V}_3$-free $2$-colorable and proves the claimed equivalence. The correctness of the reduction and the fact that $D'$ can be constructed from $D$ in polynomial time show that the $2$-$\vec{N}_4$-PFC is \NP-hard for acyclic inputs, as claimed. \paragraph{\textbf{\NP-hardness of the $2$-$\vec{P}_4$-PFC}} With the same arguments as in the \NP-hardness proof of the $2$-$\vec{N}_4$-PFC in the last paragraph, it is easy to see that the $2$-$\vec{P}_4$-PFC is \NP-hard. The proof works by reducing from the $2$-$\vec{P}_3$-PFC restricted to acyclic inputs, which was shown \NP-complete in Section~\ref{sec:basecases2}. \paragraph{\textbf{\NP-hardness of the $2$-$\vec{L}_4$-PFC}} We show that deciding whether a given acyclic planar digraph has an $\vec{L}_4$-free $2$-coloring is \NP-hard by reducing from $3$-SAT. For this we need to introduce some gadgets. The first gadget, the \emph{extender gadget} (see Figure~\ref{fig:P4-extender}), enforces the same color on its two end vertices $x$ and $y$.
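The construction of $D'$ in the $\vec{N}_4$-reduction above (attach to each vertex $v$ a disjoint copy of $\vec{N}_4$ together with the connecting arcs $(v,u)$) is simple enough to sketch in code; the pair-based vertex labels are a hypothetical encoding of ours:

```python
def attach_N4_copies(vertices, arcs):
    """Sketch of D -> D': for each vertex v add a disjoint copy of the
    anti-directed path N4 on u1, u2, u3, u4 with arcs u2->u1, u2->u3,
    u4->u3, plus the connecting arcs (v, u) for every u in the copy."""
    new_vertices, new_arcs = list(vertices), list(arcs)
    for v in vertices:
        u1, u2, u3, u4 = [(v, i) for i in range(4)]  # fresh copy labels
        copy = [u1, u2, u3, u4]
        new_vertices += copy
        new_arcs += [(u2, u1), (u2, u3), (u4, u3)]  # the copy of N4
        new_arcs += [(v, u) for u in copy]          # v points into the copy
    return new_vertices, new_arcs
```

Since no arc leaves a copy, $D'$ stays acyclic whenever $D$ is, matching the argument above.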
\begin{figure}[htb] \centering \includegraphics[page = 1,scale= 0.7]{figs/P4-extender} \caption{The extender gadget for $\vec{L}_4$-free colorings.} \label{fig:P4-extender} \end{figure} The following properties of this gadget are readily verified. \begin{lemma} The extender gadget depicted in Figure~\ref{fig:P4-extender} has the following properties: \begin{itemize} \item In every $\vec{L}_4$-free $2$-coloring of the extender gadget the vertices $x$ and $y$ must receive the same color. \item There is an $\vec{L}_4$-free $2$-coloring of the extender gadget in which the unique incident edges of $x$ and $y$ are bichromatic. \item The extender gadget is acyclic. \end{itemize} \end{lemma} \begin{proof} Let $c$ be an $\vec{L}_4$-free $2$-coloring of the gadget. Then both colors $0$ and $1$ appear in the induced $\vec{L}_4$ in each of the two oriented fans. Since every pair of vertices together with the root vertices $a$ and $b$ of the fan forms an induced $\vec{L}_4$, it holds that $c(a) \neq c(b)$. The vertex $z$ between $x$ and $y$ is colored with one of the two colors. Without loss of generality assume $c(z) = c(a)$. Hence $x$ and $y$ must receive the color different from $c(z)$, since otherwise they would form a monochromatic induced $\vec{L}_4$. This shows $c(x) = c(y)$ and that the two unique incident edges are bichromatic. Clearly the gadget is acyclic. \end{proof} The next gadget is the \emph{negator gadget} (see Figure~\ref{fig:P4-negator}), which enforces distinct colors on its two end vertices. \begin{figure}[htb] \centering \includegraphics[page = 1,scale= 0.7]{figs/P_4-negator} \caption{The negator gadget for $\vec{L}_4$-free colorings.} \label{fig:P4-negator} \end{figure} \begin{lemma} The negator gadget as depicted in Figure~\ref{fig:P4-negator} has the following properties: \begin{itemize} \item In every $\vec{L}_4$-free $2$-coloring of the negator gadget, the vertices $x$ and $y$ receive different colors.
\item There is an $\vec{L}_4$-free $2$-coloring of the negator gadget in which the incident edges of $x$ and $y$ in this gadget are bichromatic. \item The negator gadget is acyclic. \end{itemize} \end{lemma} \begin{proof} Assume there is an $\vec{L}_4$-free $2$-coloring $c$ of the negator gadget such that $c(x) = c(y)$. Using the properties of the extender gadget, all vertices receive the same color $c(x)$. This gives a monochromatic $\vec{L}_4$, which is a contradiction. The second property follows from the corresponding property of the extender gadget, which admits a coloring with bichromatic incident edges. Furthermore the acyclicity of the extender gadget shows that the negator gadget is acyclic. \end{proof} The last gadget we need is the \emph{crossover gadget} (see Figure~\ref{fig:P4-crossover}), which enforces two pairs of antipodal vertices on its outer face to have the same color. \begin{figure}[htb] \centering \includegraphics[page = 1,scale= 0.7]{figs/P4-crossover} \caption{The crossover gadget for $\vec{L}_4$-free colorings.} \label{fig:P4-crossover} \end{figure} \begin{lemma}\label{obs:crossover} The crossover gadget as shown in Figure~\ref{fig:P4-crossover} has the following properties: \begin{itemize} \item In every $\vec{L}_4$-free $2$-coloring of the crossover gadget the vertices $x$ and $x'$ as well as the vertices $y$ and $y'$ have the same color. \item For every $p:\{x,y,x',y'\} \rightarrow \{0,1\}$ such that $p(x)=p(x'), p(y)=p(y')$, there exists an $\vec{L}_4$-free $2$-coloring $c$ of the crossover gadget such that $c(x)=c(x')=p(x)=p(x'), c(y)=c(y')=p(y)=p(y')$, and such that all incident edges of $x, y, x', y'$ in the gadget are bichromatic. \item The crossover gadget is acyclic. \end{itemize} \end{lemma} \begin{proof} The extender gadgets immediately show that $x$ and $x'$ have the same color in every $\vec{L}_4$-free $2$-coloring. Furthermore, in order to avoid a monochromatic $\vec{L}_4$, the vertices $y$ and $y'$ receive the same color.
The second and third properties follow from the properties of the negator and extender gadgets. \end{proof} We are now ready to describe the reduction of $3$-SAT to the $2$-$\vec{L}_4$-PFC with acyclic inputs. Suppose we are given a $3$-SAT formula on the literals $x_i, \overline{x_i},$ $i=1,\ldots,n$ and consisting of $m$ clauses $c_1,\ldots,c_m$ as an input to $3$-SAT. We construct an auxiliary acyclic digraph $D'$, which might not yet be planar, as follows: We have $2n$ vertices corresponding to $x_i, \overline{x_i}, i=1,\ldots,n$, and connect $x_i$ and $\overline{x_i}$ by a negator. For every $j=1,\ldots,m$, we add a disjoint copy of $\vec{L}_4$ on the vertices $x_{1,j}, x_{2,j},x_{3,j},x_{4,j}$ and arcs $(x_{1,j},x_{2,j}),(x_{2,j},x_{3,j}), (x_{4,j},x_{3,j})$. Letting $c_j=l_1^j \vee l_2^j \vee l_3^j$, for $i=1,2,3$ we connect $x_{i,j}$ with an extender to the vertex representing the literal $l_i^j$. We add a further special vertex $t$ which is connected via extenders to each of the vertices $x_{4,j}$, $j=1,\ldots,m$. Clearly, the so-defined digraph $D'$ can be constructed in polynomial time in $m$ and $n$ and is of polynomial size in $m$ and $n$. It is furthermore clear from the properties of the extender and negator gadgets that $D'$ is an acyclic digraph. We claim that $D'$ admits an $\vec{L}_4$-free $2$-coloring if and only if $c_1 \wedge \ldots \wedge c_m$ is satisfiable. For the first direction, suppose we are given an $\vec{L}_4$-free $\{0,1\}$-coloring $c$ of $D'$. W.l.o.g. let $c(t)=0$. Then by the properties of the negators, we have $c(x_i)=1-c(\overline{x_i}) \in \{0,1\}$ for $i=1,\ldots,n$. We claim that assigning the truth value $c(x_i)$ to each variable $x_i$ defines a truthful assignment for $c_1 \wedge \ldots \wedge c_m$. Suppose not, then there exists $j \in \{1,\ldots,m\}$ such that the colors of the vertices representing the three literals $l_1^j, l_2^j, l_3^j$ in $c_j$ are all $0$.
Since these vertices are connected by extenders to the vertices $x_{1,j},x_{2,j},x_{3,j}$ of $D'$, we have $c(x_{i,j})=0$, $i=1,2,3$. We further have $c(x_{4,j})=c(t)=0$, so that $x_{1,j},x_{2,j},x_{3,j},x_{4,j}$ induce a monochromatic copy of $\vec{L}_4$ in $D'$, a contradiction. Hence, we have indeed found a truthful assignment. For the reverse direction, suppose we are given an assignment of $0,1$-truth-values to the variables $x_1,\ldots,x_n$. We define a $\{0,1\}$-coloring of $D'$ by assigning to each vertex representing a literal its truth value, and coloring $t$ with color $0$. Further, every vertex $x_{i,j}$, $i=1,2,3, j=1,\ldots,m$ is colored with the truth value of the literal it is connected to by an extender, and the vertices $x_{4,j}$, $j=1,\ldots,m$ receive color $0$. The partial coloring defined so far has the property that the two end vertices of any extender in $D'$ have the same color, while end vertices of negator gadgets have distinct colors. Using the second property of extender and negator gadgets, we can extend this partial coloring to a full $\{0,1\}$-coloring $c$ of $D'$ by coloring the internal vertices of every extender or negator gadget by an $\vec{L}_4$-free coloring such that the incident edges of the two ends of such a gadget are bichromatic. We claim that the so-defined coloring $c$ is an $\vec{L}_4$-free coloring of $D'$. Suppose not, then there must be an induced copy $P$ of $\vec{L}_4$ in $D'$ which is monochromatic under $c$. We first observe that $P$ cannot contain any internal vertices of extender or negator gadgets. Indeed, since the coloring of the internal vertices of any gadget by definition is $\vec{L}_4$-free, if $P$ intersects the interior of a gadget, it would have to contain one of the two end vertices of the gadget and hence an incident edge of these vertices as well. This contradicts the fact that all such edges are by definition bichromatic.
Hence, $P$ is contained in the digraph obtained from $D'$ by deleting all internal vertices of extender and negator gadgets. The only non-singleton components of this digraph are constituted by the $m$ disjoint copies of $\vec{L}_4$ induced by $x_{1,j},x_{2,j},x_{3,j},x_{4,j}$, $j=1,\ldots,m$. This means that $P$ must equal one of these paths, i.e., $c(x_{1,j})=c(x_{2,j})=c(x_{3,j})=c(x_{4,j})=0$ for some $j$. By definition, this means that all three literals in the clause $c_j$ have truth value $0$, a contradiction. Hence, we proved the claimed equivalence. The digraph $D'$ might not be planar. In order to solve this issue, we consider a drawing of $D'$ in the plane in which the vertices $t,x_1,\overline{x_1},\ldots,x_n,\overline{x_n}$ are placed on a horizontal line $\ell_1$ such that the negator gadgets between the vertices $x_i, \overline{x_i}$ are drawn as thin horizontal boxes on top of the line $\ell_1$, and the vertices $x_{i,j}$, $i=1,2,3, j=1,\ldots,m$ are placed on a parallel horizontal line $\ell_2$ such that the $m$ disjoint paths $x_{1,j},x_{2,j},x_{3,j},x_{4,j}$, $j=1,\ldots,m$ are drawn crossing-free on $\ell_2$. All the extender gadgets now connect vertices on the line $\ell_1$ to the line $\ell_2$, so we can draw them within thin rectangular strips touching $\ell_1$ and $\ell_2$. The only possible crossings in this drawing are between pairs of such extender-strips spanned between $\ell_1$ and $\ell_2$. Note that the number of pairs of such strips which cross is $O(|V(D')|^4)=\poly(m,n)$. We now sequentially transform this drawing into a drawing of a planar digraph $D$: As long as there is a crossing between two extender-strips connecting vertices $a_1, b_1$ and $a_2,b_2$ in the drawing, we locally replace the crossing by a \emph{crossover} gadget and connect, with four disjoint extender gadgets, $a_1$ to the outer vertex $x$ of the crossover gadget, $b_1$ to $x'$, $a_2$ to $y$, $b_2$ to $y'$.
This can be done such that the crossover gadget and the four new extender gadgets do not intersect pairwise and do not intersect with any other features of the drawing. Performing this operation sequentially for all crossings between extender-strips in the drawing, we construct a planar digraph $D$ of polynomial size in $m$ and $n$. Since $D'$ and each of the extender and crossover gadgets are acyclic, $D$ is acyclic. Further, whenever we replace a pair of intersecting extender gadgets by four non-intersecting extender gadgets and a central crossover gadget, this has the same effect on transporting colors as the two original extender gadgets it replaces; hence every $\vec{L}_4$-free coloring of $D$ gives rise to a truthful assignment of the formula $c_1\wedge c_2\wedge \ldots \wedge c_m$ by the same arguments as above for $D'$. Vice versa, if $c_1\wedge c_2\wedge \ldots \wedge c_m$ has a truthful assignment, then there is an $\vec{L}_4$-free coloring $c'$ of $D'$. By the second item of Lemma~\ref{obs:crossover}, for every crossover gadget and for every extender gadget we used to replace a pair of crossing extender gadgets of $D'$, we can color these gadgets such that there are no monochromatic copies of $\vec{L}_4$ in the interior of any of the gadgets and such that all edges connecting an internal vertex of a gadget to the outside are bichromatic. Adding these colorings to the coloring of the vertices in $D'$ described by $c'$, we obtain a $2$-coloring $c$ of $D$. No monochromatic copy of $\vec{L}_4$ in $D$ with respect to the coloring $c$ can contain an internal vertex of one of these gadgets. Hence every such copy would have existed already in the coloring $c'$ of the digraph $D'$. This is a contradiction, which proves that $D$ is $\vec{L}_4$-free $2$-colorable. Summarizing, we have shown that $D$ is $\vec{L}_4$-free $2$-colorable iff $c_1\wedge c_2\wedge \ldots \wedge c_m$ admits a truthful assignment.
Since we can construct the polynomially sized planar and acyclic digraph $D$ from the formula in polynomial time in $m$ and $n$, this concludes the desired reduction showing that $3$-SAT polynomially reduces to the $2$-$\vec{L}_4$-PFC with acyclic inputs. This shows that the $2$-$\vec{L}_4$-PFC is \NP-hard, concluding the proof of Proposition~\ref{4path2colors}. \end{proof} We are now ready for the proof of Theorem~\ref{thm:2col_NPhard}. \begin{proof}[Proof of Theorem~\ref{thm:2col_NPhard}] If $n=1$ and $\vec{P}$ is the one-vertex-path, then the $2$-$\vec{P}$-PFC admits a trivial constant-time algorithm. If $n=2$ and $\vec{P}$ is the directed edge, then the $2$-$\vec{P}$-PFC amounts to testing whether the underlying graph of a given digraph $D$ is bipartite, which can be checked in polynomial time. For every $n \ge 3$ and every $\vec{P} \in \mathcal{P}_n$, the $2$-$\vec{P}$-PFC clearly is contained in \NP, since we can verify the correctness of a $\vec{P}$-free coloring of a given digraph $D$ in time $O(|V(D)|^{|V(\vec{P})|})$ by brute-force. We now prove the \NP-hardness of the $2$-$\vec{P}$-PFC, restricted to acyclic inputs, for all $\vec{P} \in \mathcal{P}_n$ and $n \ge 3$ by induction on $n$. The base cases $n \in \{3,4\}$ are covered by Propositions~\ref{3path2colors} and~\ref{4path2colors}. So assume for the inductive step that $n \ge 5$, $\vec{P} \in \mathcal{P}_n$ and that we have shown that the $2$-$\vec{P}$-PFC with acyclic inputs is \NP-hard for all oriented paths $\vec{P}$ on at least three and at most $n-1$ vertices. Then also the $2$-${\rm lrem}(\vec{P})$-PFC restricted to acyclic inputs is \NP-hard, since ${\rm lrem}(\vec{P})$ is a path on $n-2 \ge 3$ vertices. Proposition~\ref{remove2col} now implies the \NP-hardness of the $2$-$\vec{P}$-PFC, restricted to acyclic inputs. This concludes the proof by induction.
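As an aside, the brute-force certificate check invoked above for \NP-membership (checking all $O(|V(D)|^{|V(\vec{P})|})$ vertex tuples) can be sketched as follows; the digraph and path encodings are our own assumption, not part of the proof:

```python
from itertools import permutations

def is_path_free(vertices, arcs, color, p_vertices, p_arcs):
    """Return True iff the colored digraph contains no monochromatic
    induced copy of the oriented path P given by (p_vertices, p_arcs).
    Enumerates all ordered k-tuples of vertices, k = |V(P)|."""
    arcset = set(arcs)
    index = {p: i for i, p in enumerate(p_vertices)}
    k = len(p_vertices)
    for tup in permutations(vertices, k):
        if len({color[x] for x in tup}) > 1:
            continue  # not monochromatic
        wanted = {(tup[index[a]], tup[index[b]]) for a, b in p_arcs}
        present = {(x, y) for x in tup for y in tup
                   if x != y and (x, y) in arcset}
        if present == wanted:  # induced copy of P found
            return False
    return True
```

For example, with $\vec{P}_3$ encoded as vertices $0,1,2$ and arcs $(0,1),(1,2)$, a monochromatic directed path fails the check while a properly alternating coloring passes it.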
\end{proof} \section{$P$-free $3$-coloring for paths of length $1$ and $2$}\label{sec:basecases3} In this section, we prove Theorem~\ref{thm:3col_NPhard} for $n \in \{2,3\}$. Together with Proposition~\ref{remove3col} we will then be able to prove Theorem~\ref{thm:3col_NPhard} in its full generality. We need the following easy consequence of Theorem~\ref{outermonchromaticpaths}. By $\vec{V}_3$ we denote the oriented path of length two whose middle vertex is a sink. \begin{figure}[htb] \centering \includegraphics[scale= 0.6, page =3]{figs/Fan_monochromaticP} \caption{The acyclic outerplanar digraph $\vec{O} = \vec{O}(\vec{V}_3) $ with a $3$-coloring such that there is a unique vertex of color $0$.} \label{fig:OuterplanarV3crit} \end{figure} \begin{observation}\label{criticaloutermonochromaticpaths} The acyclic outerplanar digraph $\vec{O} = \vec{O}(\vec{V}_3) $ as introduced in Section~\ref{sec:outer} and illustrated in Figure~\ref{fig:OuterplanarV3crit} has the following properties: \begin{itemize} \item it has no $\vec{V}_3$-free $2$-coloring, but \item it admits a $\vec{V}_3$-free coloring with colors $\{0,1,2\}$ with a unique vertex $x_0 \in V(\vec{O})$ of color $0$. \end{itemize} \end{observation} \begin{proposition}\label{23path3colors} Let $\vec{P} \in \mathcal{P}_2 \cup \mathcal{P}_3$. Then the $3$-$\vec{P}$-PFC is \NP-hard, even restricted to acyclic inputs. \end{proposition} \begin{proof} If $P \in \mathcal{P}_2$, then $P$ is simply a directed edge between two vertices. Hence, a $P$-free $3$-coloring of a planar digraph $D$ is equivalent to a proper $3$-coloring of its underlying planar graph. The claim now directly follows from the well-known fact that deciding whether a given planar graph is properly $3$-colorable is \NP-hard~\cite{garey}, and since we can provide a given graph with an acyclic orientation in polynomial time. Now let $P \in \mathcal{P}_3$.
Then $P$ is isomorphic to one of the orientations $\vec{P}_3$, $\vec{V}_3$ or $\revvec{V}_3$ of $P_3$, see Figure \ref{fig:OrientationP3}. Again we may assume $P \in \{\vec{P}_3,\vec{V}_3\}$. \paragraph{\textbf{\NP-hardness of the $3$-$\vec{V}_3$-PFC}} We again show \NP-hardness by reducing from the $3$-colorability of undirected planar graphs. Suppose we are given a planar graph $G$ as an input to the $3$-colorability problem. Let $D$ be an acyclic orientation of $G$, and let $D'$ be the planar digraph obtained from $D$ by adding for each vertex $v \in V(D)$ a disjoint copy $\vec{O}_v$ of the acyclic outerplanar digraph $\vec{O}$ as given by Observation~\ref{criticaloutermonochromaticpaths} and connecting it to $v$ with the arcs $(u,v),$ $u \in V(\vec{O}_v)$. We claim that $\chi(G) \le 3$ if and only if $D'$ has a $\vec{V}_3$-free $3$-coloring. To prove the first direction, assume that $G$ admits a proper $3$-coloring $c\mathop{:}V(G)=V(D) \rightarrow \{0,1,2\}$. Then we can extend this to a vertex-$3$-coloring $c'\mathop{:}V(D') \rightarrow \{0,1,2\}$ of $D'$ by coloring the vertices within each copy $\vec{O}_v$ according to a $\vec{V}_3$-free $\{0,1,2\}$-coloring of $\vec{O}$ in which only one vertex receives color $c(v)$ (the existence of such a coloring is guaranteed by Observation~\ref{criticaloutermonochromaticpaths}). We claim that $c'$ is $\vec{V}_3$-free: Suppose towards a contradiction there was a monochromatic copy of $\vec{V}_3$ in $D'$ induced by the vertices $x_1,x_2,x_3$ of color $i \in \{0,1,2\}$. Let $x_2$ be the sink of this copy. If $x_2 \in V(D')\setminus V(D)$, then we have $\{x_1,x_2,x_3\} \subseteq \{x_2 \} \cup N_{D'}^-(x_2) \subseteq V(\vec{O}_v)$ for some $v \in V(D)$, contradicting that by definition $c'|_{\vec{O}_v}$ is $\vec{V}_3$-free. So we must have $x_2 \in V(D)$. Since $c$ is a proper coloring of $G$ and $c(x_1)=c(x_2)=c(x_3)=i$, we must have $x_1, x_3 \in V(\vec{O}_{x_2})$.
However, this contradicts our definition of $c'$, according to which there is exactly one vertex in $\vec{O}_{x_2}$ of color $c(x_2)=i$ under $c'$. This contradiction shows that our assumption was wrong; indeed, $c'$ is a $\vec{V}_3$-free coloring of $D'$. For the reverse direction, assume that $c'\mathop{:}V(D') \rightarrow \{0,1,2\}$ is a $\vec{V}_3$-free coloring of $D'$, and let $c$ be its restriction to $V(D)=V(G)$. If this were not a proper coloring of $G$, there would be an arc $(v_1,v_2) \in A(D)$ with $c'(v_1)=c(v_1)=c(v_2)=c'(v_2)\mathop{=:}i$. Let $u_2$ be a vertex in $V(\vec{O}_{v_2})$ such that $c'(u_2)=c(v_2)=i$ (such a vertex must exist by Observation~\ref{criticaloutermonochromaticpaths}). Then the vertices $v_1,v_2,u_2$ induce a monochromatic copy of $\vec{V}_3$ in $D'$, contradicting our choice of $c'$. This shows that indeed $c$ is a proper $3$-coloring of $G$ and hence $\chi(G) \le 3$. Since the digraph $D'$ can be constructed from $G$ in polynomial time, the above equivalence yields a reduction of the $3$-coloring problem on planar graphs to the $3$-$\vec{V}_3$-PFC on acyclic planar digraphs. This concludes the proof. \paragraph{\textbf{\NP-hardness of the $3$-$\vec{P}_3$-PFC}} We show \NP-hardness by reducing from the $3$-colorability of undirected planar graphs. The proof is analogous to the \NP-hardness proof of the $3$-$\vec{V}_3$-PFC in the last paragraph. \end{proof} We are now ready for the proof of Theorem~\ref{thm:3col_NPhard}. \begin{proof}[Proof of Theorem~\ref{thm:3col_NPhard}] If $n=1$ and $P$ is the one-vertex-path, then the $3$-$P$-PFC admits a trivial constant-time algorithm. For every $n \ge 2$ and every $P \in \mathcal{P}_n$, the $3$-$P$-PFC clearly is contained in \NP, since we can verify the correctness of a $P$-free coloring of a given digraph $D$ in time $O(|V(D)|^{|V(P)|})$ by brute-force.
We now prove the \NP-hardness of the $3$-$P$-PFC, restricted to acyclic inputs, for all $P \in \mathcal{P}_n$ and $n \ge 2$ by induction on $n$. The base cases $n \in \{2,3\}$ are covered by Proposition~\ref{23path3colors}. So assume for the inductive step that $n \ge 4$, $P \in \mathcal{P}_n$ and that we have shown that the $3$-$P$-PFC with acyclic inputs is \NP-hard for all oriented paths $P$ on at least two and at most $n-1$ vertices. Then also the $3$-${\rm lrem}(P)$-PFC restricted to acyclic inputs is \NP-hard, since ${\rm lrem}(P)$ is a path on $n-2 \ge 2$ vertices. Proposition~\ref{remove3col} now implies the \NP-hardness of the $3$-$P$-PFC, restricted to acyclic inputs. This concludes the proof by induction. \end{proof} \bibliographystyle{abbrv}
\section[Introduction]{Introduction} Let $g\geq3$. It is well-known that a \emph{Prym curve} is a pair $(C,\alpha)$, where $C$ is a smooth curve of genus $g=p_g(C)$ and $\alpha$ is a non-trivial $2$-torsion point of $\operatorname{Pic}^0(C)$. In the following, we will consider the so-called \emph{Prym-canonical map}, that is, the rational map $$\phi_{|\omega_C(\alpha)|}:C\dashrightarrow \mathbb{P}^{g-2}$$ defined by $|\omega_C(\alpha)|$. In general, the pair $(C, \omega_C(\alpha))$ is called a \emph{Prym-canonical curve}. The complete linear system $|\omega_C(\alpha)|$ is base point free unless $C$ is hyperelliptic and $\alpha\simeq \mathcal{O}_{C}(p-q)$, with $p$ and $q$ ramification points of the $g^1_2$. Moreover, it defines an embedding if and only if $C$ does not have a $g^1_4$ such that $\alpha\sim \mathcal{O}_{C}(a+b-x-y)$, where $2(a+b)$ and $2(x+y)$ are members of the $g^1_4$ (see \cite{CDGK}, Lemma $2.1$). If $\phi_{|\omega_C(\alpha)|}$ is an embedding, we say that $C \simeq \phi(C) \subset \mathbb{P}^{g-2}$ is a \emph{Prym-canonical (embedded) curve}. If $g<5$, then the Prym-canonical map cannot be an embedding, as observed in \cite{CDGK}, so we will work only with $g\geq5$. \par\bigskip\noindent We say that a \emph{surface $X$ has Prym-canonical hyperplane sections} if it can be birationally realized in some projective space $\mathbb{P}^{g-1}$, for $g\geq5$, such that a general hyperplane section $C$ of $X$ is a smooth Prym-canonical (embedded) curve of genus $g$. \par\bigskip\noindent In this paper we will analyze the complex projective surfaces with Prym-canonical hyperplane sections up to birational equivalence. In particular, Section $2$ is devoted to studying the first properties of surfaces with Prym-canonical hyperplane sections. We will show that these surfaces are birationally equivalent to ruled surfaces, to $\mathbb{P}^2$, or to Enriques surfaces.
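\par\bigskip\noindent For the reader's convenience, we recall the standard computation showing that the Prym-canonical map indeed lands in $\mathbb{P}^{g-2}$: since $\alpha$ is a non-trivial $2$-torsion point, $h^0(\alpha)=0$ and $\alpha^{-1}\simeq\alpha$, so by Riemann--Roch and Serre duality
$$h^0(\omega_C(\alpha))=\deg(\omega_C(\alpha))-g+1+h^1(\omega_C(\alpha))=(2g-2)-g+1+h^0(\alpha)=g-1,$$
whence $|\omega_C(\alpha)|$ maps $C$ to $\mathbb{P}^{g-2}$.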
In any case, there is only one effective antibicanonical divisor $W'$ on $X'$, the minimal resolution of the singularities of $X$; moreover, if $\pi:X'\rightarrow X$ denotes this resolution, then the antibicanonical divisor of $X'$ is contracted by $\pi$, and every singularity $x\in X$ such that $\pi^{-1}(x)$ does not meet $\operatorname{supp}(W')$ is a rational double point. We will also show that surfaces with Prym-canonical hyperplane sections birationally equivalent to non-rational ruled surfaces over a base curve of genus $q>0$ have non-rational singularities such that the sum of the geometric genera of their singularities is $q$. \par\bigskip\noindent To the best of our knowledge, the only known examples of surfaces with Prym-canonical hyperplane sections are the Enriques surfaces and a surface in $\mathbb{P}^5$ of degree $10$ obtained as the image of the blow-up of $\mathbb{P}^2$ at the $10$ nodes of an irreducible rational plane curve of degree $6$. In Section $3$ we will construct new examples of these surfaces. In particular, we will construct four new examples: one birationally equivalent to an elliptic ruled surface, another birationally equivalent to a ruled surface over a base curve of genus $q\geq3$, yet another birationally equivalent to a rational ruled surface, and finally a new example of a surface with Prym-canonical hyperplane sections birationally equivalent to $\mathbb{P}^2$. \par\bigskip \textit{Acknowledgements.} The results of this paper are contained in my PhD thesis. I would like to express my deepest gratitude to my three advisors, Ciro Ciliberto, Concettina Galati and Andreas Leopold Knutsen, for their useful and indispensable advice, and I would also like to acknowledge PhD funding from the Department of Mathematics and Computer Science of the University of Calabria and funding from the research project "Families of curves: their moduli and their related varieties" (CUP E81|18000100005, P.I.
Flaminio Flamini) in the framework of Mission Sustainability 2017 - Tor Vergata University of Rome. \par\bigskip \section{Preliminary results} We recall that a surface $X\subseteq \mathbb{P}^{g-1}$ is a surface with Prym-canonical hyperplane sections if its general hyperplane section $C$ is a Prym-canonically embedded curve. We start with the following remarks. \begin{rems} \begin{itemize} \item The generic hyperplane section of $X$ is irreducible and smooth, whence $X$ has at most isolated singularities. \item Let $\pi:X'\rightarrow X$ be the minimal resolution of singularities of $X$ and let $C'=\pi^*C$ be the inverse image of a general hyperplane section. For a general $C' \in |\pi^*C|$, we have that $C' \cong C$ and $\mathcal{O}_{C'}(C') \cong \mathcal{O}_C(1) \cong \omega_{C'}(\alpha)$, with $\alpha$ a non-trivial two-torsion element of $\operatorname{Pic}^0(C')$. By the adjunction formula and because $X'$ is smooth, we can say that $\alpha\sim-K_{X'}|_{C'}$, in particular $K_{X'}\cdot C'=0$. \end{itemize} \end{rems} \par\bigskip\noindent From now on, we will assume that $C$ is projectively normal with respect to its embedding in $\mathbb{P}^{g-2}$. For example, if $\phi_{|\omega_C(\alpha)|}:C\hookrightarrow \mathbb{P}^{g-2}$ is a Prym-canonical embedding and the Clifford index $\operatorname{Cliff}(C)\geq3$, then $C$ is projectively normal with respect to the given embedding (see Theorem $1$, \cite{GL}). In general, there are also projectively normal curves $C\subseteq\mathbb{P}^{g-2}$ with $\operatorname{Cliff}(C)<3$. \par\bigskip\noindent We state some general properties of surfaces with Prym-canonical hyperplane sections. \begin{thm} \label{first properties} Let $X$ be a surface with Prym-canonical hyperplane section $C$ of genus $g\geq 5$ and let $\pi:X'\rightarrow X$ be the minimal resolution of its singularities.
If $C$ is projectively normal with respect to its embedding in $\mathbb{P}^{g-2}$, then: \begin{itemize} \item $h^1(\mathcal{O}_{X}(n))=0$ and $h^2(\mathcal{O}_{X}(n))=0$ for any $n\geq0$, in particular $h^1(\mathcal{O}_X)=0$ and $h^2(\mathcal{O}_X)=0$, whence $p_a(X)=0$; \item $X$ is projectively normal; \item the Kodaira dimension $\kappa(X')$ equals $-\infty$ or $0$; \item $\deg(X)=2g-2$. \end{itemize} \end{thm} \begin{proof} \begin{itemize} \item By assumption, $C$ is a projectively normal curve in $\mathbb{P}^{g-2}$, thus $$H^0(\mathcal{O}_{\mathbb{P}^{g-1}}(n))\twoheadrightarrow H^0(\mathcal{O}_C(n)),$$ for any $n\geq0$. As a consequence, the map $H^0(\mathcal{O}_{X}(n))\rightarrow H^0(\mathcal{O}_C(n))$ is surjective for any $n\geq0$. \noindent Let us consider the exact sequence \begin{equation}\label{exact2} 0\rightarrow \mathcal{O}_X(n-1)\rightarrow \mathcal{O}_X(n)\rightarrow \mathcal{O}_C(n)\rightarrow0. \end{equation} \noindent Then, the second part of the long exact sequence associated with (\ref{exact2}) is $$0\rightarrow H^1(\mathcal{O}_X(n-1))\rightarrow H^1(\mathcal{O}_X(n))\rightarrow H^1(\mathcal{O}_C(n))\rightarrow $$ \begin{equation}\label{exact22} \rightarrow H^2(\mathcal{O}_X(n-1))\rightarrow H^2(\mathcal{O}_X(n))\rightarrow 0. \end{equation} \noindent By Serre's Theorem, there is a sufficiently large $n_0$ such that $h^1(\mathcal{O}_X(n))=0$, for any $n\geq n_0$. From the exact sequence (\ref{exact22}) and applying descending induction on $n$, we obtain that $h^1(\mathcal{O}_X(n))=0$, for any $n\geq0$. \par\noindent It is clear that $H^1(\mathcal{O}_C(n))=H^1(\mathcal{O}_C(n(K_C+\alpha)))$. For $n=1$, we have that $h^1(\mathcal{O}_C(K_C+\alpha))=h^0(\mathcal{O}_C(-\alpha))=0$. Moreover, since $\deg(K_C+\alpha)=2g-2$, we have $\deg(n(K_C+\alpha))>2g-2$ for $n\geq2$, so $h^1(\mathcal{O}_C(n(K_C+\alpha)))=0$ (see \cite{H}, Example $IV.1.3.4$).
Again by Serre's Theorem and applying descending induction on $n$, from the long exact sequence (\ref{exact22}) we can conclude that $h^2(\mathcal{O}_X(n))=0$, for any $n\geq0$. \item To prove that $X$ is projectively normal, it is enough to show that $X$ is normal and that the map $H^0(\mathcal{O}_{\mathbb{P}^{g-1}}(n))\rightarrow H^0(\mathcal{O}_X(n))$ is surjective for any $n\geq0$. \noindent Let $\eta:\widetilde{X}\rightarrow X$ be the normalization of $X$. We consider the following exact sequence on $X$: \begin{equation}\label{exact1} 0\rightarrow \mathcal{O}_X\rightarrow \eta_{*}\mathcal{O}_{\widetilde{X}}\rightarrow F\rightarrow 0, \end{equation} where $\operatorname{supp}(F)\subset \operatorname{Sing}(X)$. Since $X$ has isolated singularities, then $F\cong H^0(F)=\oplus_{i=1}^{s}(\widetilde{\mathcal{O}_i}/\mathcal{O}_i)$, where $\mathcal{O}_i$ is the local ring of $x_i$ on $X$ and $(\eta_{*}\mathcal{O}_{\widetilde{X}})_{x_i}=\widetilde{\mathcal{O}_i}$ is the normalization of $\mathcal{O}_i$ in the function field of $X$, with $\operatorname{Sing}(X)=\{x_1,...,x_s\}$. \noindent We know that $H^0(\mathcal{O}_X)=k$ because $X$ is irreducible. By the properties of the pushforward and because $\widetilde{X}$ is irreducible, we have $H^0(\eta_{*}\mathcal{O}_{\widetilde{X}})\cong H^0(\mathcal{O}_{\widetilde{X}})\cong k$. Moreover $h^1(\mathcal{O}_X)=0$ by the previous part of this Theorem. From the long exact sequence associated with (\ref{exact1}), we obtain $h^0(F)=0$. By definition of $F$, it follows that $\widetilde{\mathcal{O}_i}\cong \mathcal{O}_i$, for any $i=1,...,s$. We conclude that $X$ is normal. \noindent The surjectivity of $H^0(\mathcal{O}_{\mathbb{P}^{g-1}}(n))\rightarrow H^0(\mathcal{O}_X(n))$ is trivial for $n=0$; for $n>0$ we proceed by induction.
Let us consider the following diagram, where $H$ is a general hyperplane in $\mathbb{P}^{g-1}$ and $C=X\cap H$: \bigskip \begin{flushleft} \begin{tikzcd} 0 \arrow[r] & H^0(\mathcal{O}_{\mathbb{P}^{g-1}}(n-1)) \arrow[r] \arrow[d, "r_1"] & H^0(\mathcal{O}_{\mathbb{P}^{g-1}}(n)) \arrow[r]\arrow[d,"r_2"] & H^0(\mathcal{O}_H(n)) \arrow[r]\arrow[d,"r_3"] & 0 \\ 0 \arrow[r] & H^0(\mathcal{O}_X(n-1)) \arrow[r] & H^0(\mathcal{O}_X(n)) \arrow[r] & H^0(\mathcal{O}_C(n)) \arrow[r] & 0 \\ \end{tikzcd} \end{flushleft} \par\noindent We observe that $r_3$ is surjective because $C$ is projectively normal and $r_1$ is surjective by the inductive hypothesis. Then $r_2$ is also surjective and the claim is proved. \item Consider the following exact sequence: $$0\rightarrow \mathcal{O}_{X'}(-C'+mK_{X'})\rightarrow \mathcal{O}_{X'}(mK_{X'})\rightarrow \mathcal{O}_{C'}(mK_{X'})\rightarrow0.$$ Since $-K_{X'}|_{C'}\sim \alpha$, for $\alpha$ a non-zero two-torsion element of $\operatorname{Pic}^0(C')$, then we have that $(-C'+mK_{X'})\cdot C'=-C'^2=2-2g<0$. Whence $h^0(\mathcal{O}_{X'}(-C'+mK_{X'}))=0$: if this divisor were effective, it would be a fixed component of $|C'|$, which is a linear system without base locus by definition. At the same time $$ h^0(\mathcal{O}_{C'}(mK_{X'}))= \begin{cases} 0 & \text{if } m \text{ is odd,} \\ 1 & \text{if } m \text{ is even.} \end{cases} $$ \bigskip\noindent Consequently, the plurigenus $P_m(X'):=h^0(\mathcal{O}_{X'}(mK_{X'}))\leq h^0(\mathcal{O}_{C'}(mK_{X'}))\leq1$. Then the Kodaira dimension $\kappa(X')=-\infty$ or $0$. \item We have $\deg(X)=\deg(C)=C^2=\deg(K_C+\alpha)=2g-2$. \end{itemize} \end{proof} \begin{rems}\label{minima Enri} \end{rems} \begin{enumerate} \item Since the Kodaira dimension is a birational invariant for smooth varieties, if $\kappa(X')=-\infty$, then the minimal model $X''$ of $X'$ is a ruled surface or $\mathbb{P}^2$ (see \cite{H}, Theorem $V.6.1$).
\item If $\kappa(X')=0$, then $X'$ is a minimal Enriques surface. Let us show this. \noindent Let us suppose that $X'$ is not minimal. Then there is a $(-1)$-curve $E'$ on $X'$. Since $\kappa(X')=0$, there is $m>0$ such that $|mK_{X'}|$ contains only one effective divisor $D'$. It is obvious that $E'$ is a component of $D'$. Now $\mathcal{O}_{C'}(D')\cong \mathcal{O}_{C'}(mK_{X'})\cong \mathcal{O}_{C'}$ because $-K_{X'}|_{C'}$ is a non-zero two-torsion element and $m$ is even, as seen in the proof of Theorem \ref{first properties}. Therefore $D'$ and consequently $E'$ are contracted to a point on $X$ by $\pi$, contradicting the minimality of the resolution $\pi$. \noindent It is true that $12K_{X'}\sim0$ because $X'$ is minimal and $\kappa(X')=0$ (see \cite{H}, Theorem $V.6.3$). By the classification of minimal surfaces, there is a smallest $m\geq1$ such that $mK_{X'}\sim0$ and the possibilities are $m\in\{1,2,3,4,6\}$ (see \cite{E} and \cite{CE}). If $m=1$, then $K_{X'}\sim0$, whence $\mathcal{O}_{C'}(C')\cong \mathcal{O}_{C'}(K_{C'}-K_{X'})\cong \mathcal{O}_{C'}(K_{C'})$. This is not possible because $C'$ is a Prym-canonical curve. So we exclude the cases in which $X'$ is a $K3$ surface or an abelian surface. \noindent If $X'$ were a hyperelliptic surface, it would contain no curves with negative self-intersection, so $X=X'$ would be smooth by Mumford's Theorem (see \cite{MUM}, Chapter $1$). By definition, a hyperelliptic surface is irregular, contradicting the first point of Theorem \ref{first properties}. In conclusion, $X'$ is a minimal Enriques surface. \end{enumerate} \par\bigskip We now determine the possible singularities of a surface $X$ with Prym-canonical hyperplane sections. The following Proposition computes the geometric genera of the singularities that occur on $X$.
\begin{prop}\label{sum sing} With the same assumptions as before, if $\operatorname{Sing}(X)=\{x_1,...,x_s\}$ is the locus of the singular points of $X$, then:\begin{itemize} \item if $X$ is birationally equivalent to an Enriques surface or $\mathbb{P}^2$, then $X$ can only contain rational singularities; \item if $X$ is birationally equivalent to a ruled surface $X''$ over a base curve of genus $q\geq0$, then $\sum_{i=1}^{s}p_g(x_i)=q$, where $p_g(x_i)$ is the geometric genus of the singular point $x_i$. \end{itemize} \end{prop} \begin{proof} These results can be obtained using the following exact sequence, which one gets from the Leray spectral sequence for the sheaf $\mathcal{O}_{X'}$ and the morphism $\pi$ (see \cite{GH}, p. 462): $$0\rightarrow H^1(\mathcal{O}_X)\rightarrow H^1(\mathcal{O}_{X'})\rightarrow H^0(R^1\pi_*\mathcal{O}_{X'})\rightarrow H^2(\mathcal{O}_X)\rightarrow ...$$ We know that $h^1(\mathcal{O}_{X})=h^2(\mathcal{O}_X)=0$ by Theorem \ref{first properties}, so $$\sum_{i=1}^{s}p_g(x_i)=h^0(R^1\pi_*\mathcal{O}_{X'})=h^1(\mathcal{O}_{X'}),$$ where $p_g(x_i)$ is the geometric genus of the singular point $x_i$. \begin{itemize} \item If $X$ is birationally equivalent to an Enriques surface, then $X'$ is a minimal Enriques surface by Remark \ref{minima Enri}. So $h^1(\mathcal{O}_{X'})=0$ by definition and $\sum_{i=1}^{s}p_g(x_i)=0$, whence $X$ can only contain rational singularities. \noindent If $X$ is birationally equivalent to $\mathbb{P}^2$, then it is clear that $h^1(\mathcal{O}_{X'})=0$ and $\sum_{i=1}^{s}p_g(x_i)=0$. This proves the first part of the Proposition. \item If $X$ is birationally equivalent to a ruled surface $X''$ over a base curve of genus $q\geq0$, then $h^1(\mathcal{O}_{X''})=q(X'')=q$. Consequently $q(X'')=q(X')=q$. So $\sum_{i=1}^{s}p_g(x_i)=h^1(\mathcal{O}_{X'})=q$. \end{itemize}\end{proof} \par\bigskip\noindent The following results give further information on the singularities that occur on $X$.
First of all, we sketch the proof of a preliminary lemma. \begin{lemma} Let $X$ be a surface with Prym-canonical hyperplane section $C$ and let $\pi:X'\rightarrow X$ be the minimal resolution of singularities of $X$. Then $\pi_*(2K_{X'})\sim 0$. \end{lemma} \begin{proof} Let $C_m\in |mC|$ be smooth and satisfying $C_m\cap\operatorname{Sing}(X)=\varnothing$. We put $C'_m=\pi^{*}(C_m)$. We have that $-2K_{X'}\cdot C'_m\sim-2mK_{X'}\cdot C'\sim0$ in the Chow ring $A(X')$. Since rational equivalence and linear equivalence coincide on a curve, then $-2K_{X'}|_{C'_m}\sim 0$, for any $m\geq1$. \par\bigskip\noindent We also observe that $\pi_*(2K_{X'})|_{C_m}\sim0$. Indeed, since $\pi$ is an isomorphism in a neighbourhood of $C'_m$, then $\pi_*\mathcal{O}_{C'_m}\cong \mathcal{O}_{C_m}$ and $\mathcal{O}_{C'_m}\cong \pi^*(\mathcal{O}_{C_m})$. Using the projection formula (see \cite{H}, Exercise $II.5.1$) and the previous results, we obtain that $\mathcal{O}_{C_m}\cong \pi_*(\mathcal{O}_{C'_m})\cong \pi_*(2K_{X'}\otimes \mathcal{O}_{C'_m})=\pi_*(2K_{X'}\otimes \pi^*\mathcal{O}_{C_m})\cong \pi_*(2K_{X'})\otimes \mathcal{O}_{C_m}$. \bigskip\noindent By a known result of Zariski (\cite{Z}, Theorem $4$), if $\pi_*(2K_{X'})|_{C_m}\sim0$ for any $m$, then there exists a divisor $D$ such that $D\sim \pi_*(2K_{X'})$ and $D|_{C_m}\sim 0$. Since $C_m$ is very ample on $X$, then $D\sim0$. So $\pi_*(2K_{X'})\sim0$. This proves the lemma. \end{proof} \par\bigskip \begin{thm}\label{unique abc} We have $\dim|-2K_{X'}|=0$; in particular, if $W'$ is the effective antibicanonical divisor on $X'$, then either $W'\sim 0$ or $\operatorname{supp}(W')=\pi^{-1}(\{x_1,...,x_r\})$ for certain singularities $x_i\in X$, $i=1,...,r$. \end{thm} \begin{proof} Since $\pi_*(2K_{X'})\sim 0$ by the previous Lemma, then either $2K_{X'}\sim 0$ or there is a bicanonical divisor $2K_{X'}$ on $X'$ with support in $\pi^{-1}(\operatorname{Sing}(X))$.
In the latter case, let $2K_{X'}=\sum m_iF_i -\sum n_jG_j$ be the decomposition into irreducible components, with $m_i,n_j\in \mathbb{N}_{>0}$ and $F_i\neq G_j$, for all $i,j$. Let $F=\sum m_i F_i$ and $G=\sum n_jG_j$. \noindent Suppose $F\neq0$. By Mumford's Theorem (see \cite{MUM}, Chapter $1$), we have that $F_i^2<0$ for any $i$ and, because the intersection form on $\pi^{-1}(\operatorname{Sing}(X))$ is negative definite, also $F^2<0$. So there is an $i_0$ such that $F\cdot F_{i_0}<0$. Up to relabelling, we may assume $i_0=1$. It is obvious that $F_1\cdot G\geq0$. Since $F_1$ is an irreducible component, then $$0\leq p_a(F_1)=1+\frac{1}{2}F_1\cdot (F_1+K_{X'})=1+\frac{1}{2}F_1^2+\frac{1}{4}F_1\cdot F-\frac{1}{4}F_1\cdot G.$$ \noindent The only possibility is $F_1^2=-1$. Thus $F_1$ is a $(-1)$-curve, contradicting the minimality of $\pi$. Hence $F=0$. \noindent Thus either $2K_{X'}\sim 0$ or there are effective antibicanonical divisors with support in $\pi^{-1}(\operatorname{Sing}(X))$. Then $\dim|-2K_{X'}|=0$. \bigskip\noindent Let $|-2K_{X'}|=\{W'\}$. To conclude the proof, it only remains to show that, if $x\in \operatorname{Sing}(X)$ is such that $\pi^{-1}(x)$ meets $\operatorname{supp}(W')$, then $\pi^{-1}(x)$ does not contain curves which are not part of $\operatorname{supp}(W')$. \noindent Suppose that there is an irreducible curve $E\subset \pi^{-1}(x)$ which is not part of $\operatorname{supp}(W')$. Since $X$ is normal by Theorem \ref{first properties}, Point $2.$, then $\pi^{-1}(x)$ is connected, so we may assume that $E$ intersects $W'$, i.e. $E\cdot W'>0$. Then $$0\leq p_a(E)=1+\frac{1}{2}E^2+\frac{1}{2}E\cdot K_{X'}=1+\frac{1}{2}E^2-\frac{1}{4}E\cdot W'.$$ Again by Mumford's Theorem, we have $E^2<0$. So the only possibility for the previous inequality to hold is $E\cdot W'=2$, $E^2=-1$ and $p_a(E)=0$. This contradicts the minimality of $\pi$.
\end{proof} \begin{rem} By the previous Theorem, we observe that, if $X$ is smooth, then $X=X'$ and $W'\sim 0$. Since $p_a(X)=p_g(X)=0$ by Theorem \ref{first properties}, Point $1.$, then $X$ is an Enriques surface by \cite{H}, Theorem $V.6.3$. \end{rem} \par\bigskip \begin{lemma}\label{supp antibican} If $W'$ is the unique effective antibicanonical divisor on $X'$, then every singularity $x\in X$ such that $\pi^{-1}(x)$ does not meet $\operatorname{supp}(W')$ is a rational double point. \end{lemma} \begin{proof} Let $x\in X$ be a singularity such that $\pi^{-1}(x)$ does not meet $\operatorname{supp}(W')$. Let $T$ be an irreducible component of the connected curve $\pi^{-1}(x)$; then $T\cdot W'=0$. So $$0\leq p_a(T)=1+\frac{1}{2}T^2+\frac{1}{2}T\cdot K_{X'}=1+\frac{1}{2}T^2-\frac{1}{4}T\cdot W'=1+\frac{1}{2}T^2.$$ By Mumford's Theorem $T^2<0$, so the only possibility for the inequality above to hold is $T^2=-2$ and $p_a(T)=0$. Then all the irreducible components of $\pi^{-1}(x)$ are smooth rational curves with self-intersection $-2$. We call these curves $E_i$, for $i=1,...,n$. \par\bigskip\noindent We can prove that $x$ must be a rational singularity using \cite{M}, Proposition-Definition $2.1$. Let $Z_0=\sum_{i=1}^{n}a_i E_i$, with $a_i\geq0$ not all zero, be the fundamental cycle associated with $x$. First of all, $p_a(Z_0)=1+\frac{1}{2}Z_0^2+\frac{1}{2}Z_0\cdot K_{X'}$. Now $Z_0\cdot K_{X'}=-\frac{1}{2}Z_0\cdot W'=-\frac{1}{2}\sum_{i=1}^{n}a_i E_i\cdot W'=0$, while $Z_0^2<0$ since $Z_0$ is contracted by $\pi$. So $p_a(Z_0)<1$. \noindent By \cite{Tom}, Lemma $1.1$, we have that $p_a(E_i)\leq p_a(Z_0)$ for every $E_i$ contained in $Z_0$. Since $p_a(E_i)=0$ as computed before, the only possible case is $p_a(Z_0)=0$, so $x$ is a rational singularity. \par\bigskip\noindent In conclusion, $x\in X$ is a rational double point. Indeed, by the adjunction formula, the self-intersection $Z_0^2=2p_a(Z_0)-2-Z_0\cdot K_{X'}=2p_a(Z_0)-2=-2$.
\end{proof} \begin{rem} We have already seen that, if $X$ is birationally equivalent to an Enriques surface, then $X$ can only contain rational singularities. \noindent Moreover, by the previous Lemma, we conclude that it can only contain rational double points as singularities. \end{rem} \section{Examples} \noindent In this section, we will construct examples of surfaces with Prym-canonical hyperplane sections. \subsection{Surfaces with Prym-canonical hyperplane sections birationally equivalent to ruled surfaces} \par\bigskip\noindent Let us fix some notation for minimal smooth ruled surfaces, following \cite{H}, Chapter $V.2$. \par\bigskip\noindent If $X''$ is a minimal smooth ruled surface and $p:X''\rightarrow \Gamma$ is the natural map onto the base curve $\Gamma$ of genus $q\geq 0$, then $X''=\mathbb{P}_{\Gamma}(\mathcal{E})$, where $\mathcal{E}$ is a normalized locally free sheaf of rank $2$ on $\Gamma$. \noindent Let $\wedge^2(\mathcal{E})=\mathcal{O}_{\Gamma}(D)$, for $D\in \operatorname{Div}(\Gamma)$. The integer $e=-\deg(D)$ is an invariant of $X''$. \par\noindent Let $C_0$ be a section of $p$ such that $\mathcal{O}_{X''}(C_0)=\mathcal{O}_{X''}(1)$. Then $C_0^2=-e$. Moreover, we recall that the canonical divisor $K_{X''}\sim -2C_0+(K_{\Gamma}+D)\cdot f$, where $(K_{\Gamma}+D)\cdot f$ denotes the divisor $p^*(K_{\Gamma}+D)$ by abuse of notation, with $K_{\Gamma}+D$ a divisor on $\Gamma$. \par\noindent Finally, if $\mathcal{E}$ is decomposable, i.e. $\mathcal{E}=\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(D)$, then we will denote by $C_1$ a fixed section of $p$ disjoint from $C_0$. Thus $C_1^2=e$ and $C_1\sim C_0-D\cdot f$. \subsubsection{\textbf{The minimal model is a non-rational ruled surface}} \par\bigskip\noindent We start by recalling a simple example of a surface with Prym-canonical hyperplane sections.
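The intersection numbers just recalled can be double-checked symbolically. The following sketch is our own illustration (the encoding of divisor classes as pairs is an assumption of ours, not part of the paper): it records the pairing on $\operatorname{Num}(X'')$ in the basis $(C_0,f)$, with $C_0^2=-e$, $C_0\cdot f=1$, $f^2=0$, and verifies $C_1^2=e$, $C_0\cdot C_1=0$ and the standard values $K_{X''}\cdot f=-2$, $K_{X''}^2=8(1-q)$.

```python
import sympy as sp

e, q = sp.symbols('e q')

# Intersection pairing on Num(X'') in the basis (C_0, f):
# C_0^2 = -e, C_0.f = 1, f^2 = 0.
# A divisor x*C_0 + (pullback of a degree-y divisor on Gamma) is encoded as (x, y).
M = sp.Matrix([[-e, 1], [1, 0]])

def dot(A, B):
    # intersection number of two encoded divisor classes
    return sp.expand((sp.Matrix([A]) * M * sp.Matrix(2, 1, B))[0])

C0 = (1, 0)
f = (0, 1)
C1 = (1, e)              # C_1 ~ C_0 - D.f, with deg D = -e
K = (-2, 2*q - 2 - e)    # K_{X''} ~ -2C_0 + (K_Gamma + D).f

assert dot(C1, C1) == e                          # C_1^2 = e
assert dot(C0, C1) == 0                          # consistent with C_0, C_1 disjoint
assert dot(K, f) == -2                           # adjunction on a fibre
assert sp.simplify(dot(K, K) - 8*(1 - q)) == 0   # standard value K^2 = 8(1-q)
```

The checks are purely formal consequences of the three pairing values fixed above.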
\par\bigskip\noindent Let $X''=\mathbb{P}_{\Gamma}(\mathcal{O}_{\Gamma}\oplus \mathcal{O}_{\Gamma}(D))$ be a minimal ruled surface over a base curve $\Gamma$ of genus $q\geq5$, with $D\sim -K_{\Gamma}+\alpha$, where $\alpha$ is a non-trivial two-torsion divisor and $K_{\Gamma}$ is the canonical divisor of $\Gamma$, and let $L''=|C_1|$ be a linear system on $X''$. By \cite{CDGK}, Lemma $2.1$, if $\Gamma$ is non-hyperelliptic and does not admit a $g^1_4$, then a general hyperplane section of $X''$ is Prym-canonically embedded. The map $i_{L''}:X''\dashrightarrow \mathbb{P}^{q-1}$ defined by the linear system $L''$ sends the fibres of $X''$ to lines, since $C''\cdot f =C_1\cdot f=1$ for a general $C''\in L''$. It is not difficult to prove that $X=i_{L''}(X'')$ is a cone over a Prym-canonically embedded curve with only one singularity and, if a general hyperplane section $C$ of $X$ is projectively normal, then the geometric genus of this singularity is $q$ (see Proposition \ref{sum sing}). \begin{rem}\label{rem cone} Let us consider $L''\subseteq|aC_0+\Delta\cdot f|$, for $a\geq2$ and $\Delta\in \operatorname{Pic}(\Gamma)$. If $X''=\mathbb{P}_{\Gamma}(\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(D))$ is a minimal ruled surface over a base curve $\Gamma$ of genus $q\geq5$, then the images of the fibres of $X''$ under the map $i_{L''}$ associated with $L''$ are not lines, since $C''\cdot f=(aC_0+\Delta\cdot f)\cdot f=a>1$, for $C''$ a general curve in $L''$. Furthermore, there is no other family of rational curves on $X''$ mapped to lines by $i_{L''}$, because the genus of the base curve $\Gamma$ is $q>0$. Indeed, by the Riemann-Hurwitz formula (see \cite{H}, Corollary $IV.2.4$), there is no curve of genus $0$ (other than the fibres) on a ruled surface over a base curve of genus $q>0$. Hence we never obtain $X=i_{L''}(X'')$ as a cone.
\end{rem} \par\bigskip Now we focus our attention on surfaces with Prym-canonical hyperplane sections birationally equivalent to ruled surfaces $X''$ over a non-hyperelliptic base curve of genus $q\geq3$. \begin{prop}\label{example} Let $X''=\mathbb{P}_{\Gamma}(\mathcal{O}_{\Gamma}\oplus \mathcal{O}_{\Gamma}(D))$ be a minimal ruled surface over a non-hyperelliptic smooth base curve $\Gamma$ of genus $q\geq3$, for $D\in \operatorname{Div}(\Gamma)$. Let $L''=|aC_1|$ be a linear system with $a\geq2$. If $D\sim -K_{\Gamma}+\alpha$, for $\alpha$ a non-zero two-torsion divisor, then the image $X=i_{L''}(X'')\subseteq\mathbb{P}^{a^2(q-1)}$ of the morphism associated with $L''$ has Prym-canonical hyperplane sections and only one singularity. In particular, if a general hyperplane section $C$ of $X$ is projectively normal, then the geometric genus of the only singularity $x$ is $p_g(x)=q$.\end{prop} \begin{proof} It is easy to prove that the linear system $L''=|aC_1|$ is base-point free using \cite{FP}, Proposition $36$, for any $a\in \mathbb{N}_{\geq2}$. Moreover, since $(C'')^2>0$, for $C''$ a general element of $L''$, then, by Bertini's Theorem, $C''$ is smooth and irreducible. \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}1:}$ We have that $-K_{X''}$ is not effective, while $-2K_{X''}$ is. \begin{proof} We know that $h^0(\mathcal{O}_{X''}(-2K_{X''}))=h^0(\mathcal{O}_{X''}(4C_0))=h^0(\mathcal{O}_{\Gamma})+h^0(\mathcal{O}_{\Gamma}(D))+h^0(\mathcal{O}_{\Gamma}(2D))+h^0(\mathcal{O}_{\Gamma}(3D))+h^0(\mathcal{O}_{\Gamma}(4D))$ by \cite{FP}, Lemma $35$. Since $\deg(D)=2-2q<0$, then $h^0(\mathcal{O}_{X''}(-2K_{X''}))=1$ and $-2K_{X''}$ is effective. \par\bigskip\noindent On the other hand, we have that $h^0(\mathcal{O}_{X''}(-K_{X''}))=h^0(\mathcal{O}_{X''}(2C_0-(K_{\Gamma}+D)\cdot f))= h^0(\mathcal{O}_{\Gamma}(-K_{\Gamma}-D))+h^0(\mathcal{O}_{\Gamma}(-K_{\Gamma}))+h^0(\mathcal{O}_{\Gamma}(-K_{\Gamma}+D))$ by \cite{FP}, Lemma $35$.
Since $\deg(D-K_{\Gamma})=4-4q<0$ and $\deg(-K_{\Gamma})<0$, then $h^0(-K_{X''})=h^0(\mathcal{O}_{\Gamma}(-K_{\Gamma}-D))=h^0(\mathcal{O}_{\Gamma}(-\alpha))=0$, so $-K_{X''}$ is not effective. \end{proof} \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}2:}$ We prove that $\mathcal{O}_{C''}(-K_{X''})\ncong \mathcal{O}_{C''}$ while $\mathcal{O}_{C''}(-2K_{X''})\cong \mathcal{O}_{C''}$, where $C''\in L''$ is a general curve. \begin{proof} Since $-2K_{X''}$ is effective as seen in Claim $1$ and $C''\cdot(-2K_{X''})=(aC_0-aD\cdot f)\cdot (4C_0)=4aC_0^2-4a\deg(D)=0$, then $\mathcal{O}_{C''}(-2K_{X''})\cong \mathcal{O}_{C''}$. \par\bigskip\noindent Clearly also $C''\cdot(-K_{X''})=0$. Since $h^0(\mathcal{O}_{X''}(-K_{X''}))=0$ as seen in Claim $1$, if we prove that $h^1(\mathcal{O}_{X''}(-K_{X''}-C''))=0$, then, from the exact sequence $$0\rightarrow \mathcal{O}_{X''}(-K_{X''}-C'')\rightarrow \mathcal{O}_{X''}(-K_{X''})\rightarrow \mathcal{O}_{C''}(-K_{X''})\rightarrow 0,$$ we have that $h^0(\mathcal{O}_{C''}(-K_{X''}))=0$, which implies that $\mathcal{O}_{C''}(-K_{X''})\ncong \mathcal{O}_{C''}$. \noindent Thus, by Serre Duality, we have that $h^1(\mathcal{O}_{X''}(-K_{X''}-C''))=h^1(\mathcal{O}_{X''}(2K_{X''}+C''))$. If we prove that $K_{X''}+C''$ is ample, then, by the Kodaira vanishing Theorem (see \cite{H}, Remark $III.7.15$), we have that $h^1(\mathcal{O}_{X''}(2K_{X''}+C''))=0$ and the claim is proved. \noindent By \cite{H}, Proposition $V.2.20$, if $a>2$, then $K_{X''}+C''$ is ample and the claim follows; if instead $a=2$, then $K_{X''}+C''$ is not ample. In this case, suppose by contradiction that $\mathcal{O}_{C''}(-K_{X''})\cong \mathcal{O}_{C''}$. Then the image of $X''$ under $i_{L''}$ is a surface with canonical hyperplane sections. By \cite{Epe1}, Corollary $5.4$, $X''$ then contains only one effective anticanonical divisor.
This contradicts Claim $1$, so the claim is also satisfied for $a=2$.\end{proof} \par\bigskip\noindent We know that $h^0(\mathcal{O}_{\Gamma})=1$. Using the Riemann-Roch Theorem and $h^0(\mathcal{O}_{\Gamma}(\alpha))=0$, we also have that $h^0(\mathcal{O}_{\Gamma}(-D))=h^0(\mathcal{O}_{\Gamma}(\alpha))+2q-2+1-q=(2q-2)+1-q$ and $h^0(\mathcal{O}_{\Gamma}(-mD))=m(2q-2)+1-q$, for any $m\in \mathbb{N}_{>1}$. Hence, using \cite{FP}, Lemma $35$, we obtain that $$h^0(\mathcal{O}_{X''}(L''))=(1+\cdots+a)(2q-2)+a(1-q)+1=a(a+1)(q-1)+a(1-q)+1=a^2(q-1)+1.$$ \noindent So $X=i_{L''}(X'')$ is contained in $\mathbb{P}^{a^2(q-1)}$, with $a^2(q-1)\geq4\cdot 2=8$. \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}3:}$ It remains to show that $L''$ defines a birational morphism $i_{L''}$, in particular an isomorphism outside $C_0$, and that $i_{L''}|_{C''}$ is a Prym-canonical embedding, for $C''\in L''$ a general divisor. \begin{proof} \par\bigskip We can prove that $L''$ defines a birational map, in particular an isomorphism outside $C_0$, using \cite{FP}, Theorem $38$. Indeed, this happens if $-aD$ is very ample and if $|-aD+D|$, $|-aD+(a-1)D|=|-D|$ and $|-aD+aD|$ are base-point free. \noindent The last case is trivial. By \cite{H}, Corollary $IV.3.2$, since $\deg(-aD)=a(2q-2)\geq 4q-4=2q+(2q-4)\geq2q+1$, then $-aD$ is very ample. Again by \cite{H}, Corollary $IV.3.2$, if $a\geq3$, since $\deg(-aD+D)=a(2q-2)+(2-2q)=(a-1)(2q-2)\geq 4q-4=2q+(2q-4)\geq2q$, then $|-aD+D|$ is base-point free; for $a=2$ one has $-aD+D=-D$, treated below. \noindent It remains to show that $|-D|$ is base-point free. Since $D\sim -K_{\Gamma}+\alpha$, if $|-D|$ had base points, then $\Gamma$ would be hyperelliptic (see \cite{CDGK}, Lemma $2.1$). This contradicts our assumptions. So $L''$ defines a morphism $i_{L''}$ which is an isomorphism outside $C_0$. \bigskip\noindent A general $C''\sim aC_1$ in $L''$ is disjoint from $C_0$ by definition, so $$i_{L''}|_{C''}:C''\rightarrow \mathbb{P}^{a^2(q-1)-1}$$ is an embedding.
By the adjunction formula, we have that $$L''|_{C''}\cong K_{C''}-K_{X''}|_{C''}$$ but in Claim $2$ we have already proved that $-K_{X''}|_{C''}$ is a non-trivial two-torsion divisor, so $i_{L''}|_{C''}$ is a Prym-canonical embedding. \end{proof} \par\bigskip\noindent We observe that, since $i_{L''}|_{C''}:C''\rightarrow\mathbb{P}^{g-2}$ by definition of the Prym-canonical map, then $g=g(C'')=a^2(q-1)+1.$ \noindent The image $x\in X$ of $-2K_{X''}\sim 4C_0$ under $i_{L''}$ is a singular point. There are no other possible singularities because $i_{L''}$ is an isomorphism outside $C_0$. We have found examples of surfaces in $\mathbb{P}^{a^2(q-1)}$ with Prym-canonical hyperplane sections birationally equivalent to non-rational ruled surfaces, for $a\geq 2$ and $q\geq3$. \noindent If a general hyperplane section $C$ of $X$ is projectively normal, then, by Proposition \ref{sum sing}, the singularity $x$ has geometric genus equal to $q$. \end{proof} \par\bigskip We can construct an example of a surface with Prym-canonical hyperplane sections birationally equivalent to an elliptic ruled surface $X''$. \begin{exa} Let $\Gamma$ be an elliptic curve and let $X''=\mathbb{P}_{\Gamma}(\mathcal{O}_{\Gamma}\oplus \mathcal{O}_{\Gamma}(D))$ be a minimal ruled surface with base curve $\Gamma$, for $D\in \operatorname{Div}(\Gamma)$. If $Q_i$, for $i=1,2,3$, is a general point on $\Gamma$ and $\alpha,\beta\in\Gamma$ are two points such that $\alpha-\beta$ is a non-trivial two-torsion element of $\operatorname{Pic}^0(\Gamma)$, then we assume that $D=-Q_1-Q_2-Q_3-\alpha+\beta$. So $e=-\deg(D)=3$. \par\bigskip\noindent We consider the linear system $|3C_1|=|3C_0+3(Q_1+Q_2+Q_3+\alpha-\beta)\cdot f|$. It is easy to prove that this linear system is base-point free using \cite{FP}, Proposition $36$, so, by Bertini's Theorem, its general element $\mathcal{L}$ is smooth and, since $\mathcal{L}^2=9C_1^2=9e>0$, it is also irreducible. \noindent We call $f_i:=Q_i\cdot f$, for $i=1,2,3$.
For any fibre $f$, we have that $\mathcal{L}\cdot f=3$, so we can fix the $9$ points $$Z:=\{x_{1,1},x_{1,2},x_{1,3},x_{2,1},x_{2,2},x_{2,3},x_{3,1},x_{3,2},x_{3,3}\}$$ of intersection between $\mathcal{L}$ and the fibres $f_i$, for $i=1,2,3$. Since $3C_1$ is disjoint from $C_0$, we can assume that $Z\cap C_0=\varnothing$. So we can consider the linear system $L''\subset|3C_1|$ on $X''$ with $Z$ as base locus. In particular, we suppose that every curve $C''\in L''$ passes simply through the $9$ points, so $\mathcal{L}$ is an element of $L''$. By \cite{FP}, Lemma $35$, we have that $h^0(\mathcal{O}_{X''}(3C_0+3(Q_1+Q_2+Q_3+\alpha-\beta)\cdot f))=19$, so $\dim L''\geq18-9=9$. Since $\mathcal{L}\in L''$ and smoothness is an open condition, the general element $C''$ of $L''$ is smooth. \par\bigskip\noindent We know that $-K_{X''}\sim 2C_0-D\cdot f$ and $h^0(\mathcal{O}_{X''}(-K_{X''}))=4$ by \cite{FP}, Lemma $35$. \noindent We can show that $h^0(\mathcal{O}_{X''}(-K_{X''})\otimes\mathcal{I}_{Z})=0$. Indeed, if we suppose that there is an effective divisor $T\in |\mathcal{O}_{X''}(-K_{X''})\otimes\mathcal{I}_{Z}|$, then $T\cdot C_0=2C_0^2-\deg(D)=2(-3)+3<0$, so $C_0$ is a fixed component of $T$. Moreover $T\cdot f=2$, but $T$ contains $3$ points of each fibre $f_i$, so it also contains $f_1,f_2,f_3$. Then $$|\mathcal{O}_{X''}(-K_{X''})\otimes\mathcal{I}_{Z}|=|\mathcal{O}_{X''}(2C_0-D\cdot f)\otimes\mathcal{I}_{Z}|=C_0+f_1+f_2+f_3+|C_0+(\alpha-\beta)\cdot f|,$$ so $$\dim |\mathcal{O}_{X''}(-K_{X''})\otimes\mathcal{I}_{Z}|=\dim |C_0+(\alpha-\beta)\cdot f|.$$ \noindent By \cite{FP}, Lemma $35$, we have that $h^0(\mathcal{O}_{X''}(C_0+(\alpha-\beta)\cdot f))=h^0(\mathcal{O}_{\Gamma}(\alpha-\beta))+h^0(\mathcal{O}_{\Gamma}(\alpha-\beta-Q_1-Q_2-Q_3-\alpha+\beta))=0$. Hence no such effective divisor $T$ exists. \par\bigskip\noindent It is not difficult to prove that $h^0(\mathcal{O}_{X''}(-2K_{X''}))=10$ and, since $-2K_{X''}\sim 4C_0+2(Q_1+Q_2+Q_3)\cdot f$ and the divisor $4C_0+2(Q_1+Q_2+Q_3)\cdot f$ contains $Z$ with multiplicity $2$, then also $h^0(\mathcal{O}_{X''}(-2K_{X''})\otimes\mathcal{I}_{Z})>0$.
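The three dimension counts above ($19$, $4$ and $10$) can be checked mechanically. The sketch below is our own illustration, assuming the pushforward formula $h^0(\mathcal{O}_{X''}(nC_0+\Delta\cdot f))=\sum_{k=0}^{n}h^0(\mathcal{O}_{\Gamma}(\Delta+kD))$ for the decomposable case (as in \cite{FP}, Lemma $35$) together with Riemann-Roch on the elliptic curve $\Gamma$; the helper names are ours.

```python
def h0_elliptic(d, trivial_if_zero=True):
    # Riemann-Roch on an elliptic curve: h^0(L) = deg L if deg L > 0;
    # h^0 = 1 for the trivial bundle, 0 for a non-trivial bundle of degree 0.
    if d > 0:
        return d
    return 1 if d == 0 and trivial_if_zero else 0

def h0_ruled(n, deg_delta, deg_D=-3):
    # h^0(O_{X''}(n C_0 + Delta.f)) = sum_{k=0}^{n} h^0(O_Gamma(Delta + kD)),
    # here with deg D = -3. In each of the three sums below, the unique
    # degree-zero summand is exactly the trivial bundle O_Gamma.
    return sum(h0_elliptic(deg_delta + k * deg_D) for k in range(n + 1))

print(h0_ruled(3, 9))   # h^0(3C_0 + 3(Q_1+Q_2+Q_3+alpha-beta).f) = 19
print(h0_ruled(2, 3))   # h^0(-K_{X''}) = 4
print(h0_ruled(4, 6))   # h^0(-2K_{X''}) = 10
```

Each call reproduces the corresponding value used in the example.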
\par\bigskip\noindent Let $\phi: X'\rightarrow X''$ be the blow-up of $X''$ at the $9$ points defining $Z$. Let $E_{i,j}\subset X'$ be the exceptional divisor over $x_{i,j}$, for $i,j\in\{1,2,3\}$, and let $\widetilde{f}_i$ be the strict transform of $f_i$, for $i=1,2,3$. With abuse of notation, we call $C_0:=\phi^*(C_0)$ (we remark that $\phi^*(C_0)$ is the strict transform of $C_0$ since $Z\cap C_0=\varnothing$), $f_{\alpha}:=\phi^*(\alpha\cdot f)$ and $f_{\beta}:=\phi^*(\beta\cdot f)$. Let $L'$ be such that $L''=\phi_*L'$. Then the strict transform $C'\in L'$ of a general $C''\in L''$ is of the form \begin{eqnarray*} C' = \phi^*(C'')- \sum_{i,j=1}^{3}E_{i,j} &\sim & 3C_0+3\sum_{i=1}^{3}\widetilde{f_i}+3f_{\alpha}-3f_{\beta}+2\sum_{i,j=1}^{3}E_{i,j}. \end{eqnarray*} \bigskip\noindent Instead, using \cite{H}, Proposition $V.3.3$, we obtain that \begin{eqnarray*} -K_{X'}= \phi^*(-K_{X''})- \sum_{i,j=1}^{3}E_{i,j}&\sim & 2C_0+\sum_{i=1}^{3}\widetilde{f_i}+f_{\alpha}-f_{\beta} \end{eqnarray*} \noindent and $$-2K_{X'}\sim 4C_0+2\widetilde{f_1}+2\widetilde{f_2}+2\widetilde{f_3}.$$ \par\bigskip\noindent It is clear that $h^0(\mathcal{O}_{X'}(-K_{X'}))=h^0(\mathcal{O}_{X''}(-K_{X''})\otimes\mathcal{I}_{Z})=0$ while $h^0(\mathcal{O}_{X'}(-2K_{X'}))=h^0(\mathcal{O}_{X''}(-2K_{X''})\otimes\mathcal{I}_{Z})>0$. \par\bigskip We now prove, step by step, that the general hyperplane section $C'$ of $X'$ is a Prym-canonically embedded curve. \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}1:}$ We have that $\mathcal{O}_{C'}(-K_{X'})\ncong \mathcal{O}_{C'}$ and $\mathcal{O}_{C'}(-2K_{X'})\cong\mathcal{O}_{C'}$, where $C'\in L'$ is a general curve. In particular, $-K_{X'}|_{C'}$ is a non-zero two-torsion divisor.
\begin{proof} The intersection $$C'\cdot( -K_{X'})=6C_0^2+6\sum_{i=1}^{3}C_0\cdot \widetilde{f_i}+3\sum_{i=1}^{3}(C_0\cdot \widetilde{f_i}+\widetilde{f_i}^2)+2[\widetilde{f_1}\cdot(E_{1,1}+E_{1,2}+E_{1,3})+$$$$\widetilde{f_2}\cdot(E_{2,1}+E_{2,2}+E_{2,3})+\widetilde{f_3}\cdot(E_{3,1}+E_{3,2}+E_{3,3})]=6(-3)+6(3)+3(3-9)+2(3+3+3)=0$$ \par\bigskip\noindent and clearly also $C'\cdot (-2K_{X'})=0$. \noindent Since $-2K_{X'}$ is effective, the antibicanonical divisor of $X'$ is contracted by $\phi_{L'}$; in particular $\mathcal{O}_{C'}(-2K_{X'})\cong \mathcal{O}_{C'}$. \noindent On the contrary, we have that $h^0(\mathcal{O}_{X'}(-K_{X'}))=0$, so $-K_{X'}$ is not effective. As seen in Claim $2$ of Proposition \ref{example}, if we prove that $h^1(\mathcal{O}_{X'}(-K_{X'}-C'))=0$, then $\mathcal{O}_{C'}(-K_{X'})\ncong \mathcal{O}_{C'}$. \par\bigskip\noindent Thus, by Serre Duality, it is clear that $h^1(\mathcal{O}_{X'}(-K_{X'}-C'))=h^1(\mathcal{O}_{X'}(2K_{X'}+C'))$. If we prove that $K_{X'}+C'$ is big and nef, then, by the Kawamata-Viehweg vanishing Theorem (see \cite{K} and \cite{V}), the first cohomology $h^1(\mathcal{O}_{X'}(2K_{X'}+C'))=0$. \par\bigskip\noindent In our case, since $2(\alpha-\beta)\sim 0$, we have $2f_{\alpha}-2f_{\beta}\sim 0$ and the divisor $$K_{X'}+C'\sim C_0+2\widetilde{f_1}+2\widetilde{f_2}+2\widetilde{f_3}+2E_{1,1}+...+2E_{3,3}. $$ \noindent Since $K_{X'}+C'$ is written as a sum of irreducible and effective curves, to prove that $K_{X'}+C'$ is nef it is enough to prove that $(K_{X'}+C')\cdot \delta\geq0$ for each of its irreducible components $\delta$. With some simple computations we obtain that this is true and, because the intersections between $K_{X'}+C'$ and its components are strictly positive, $K_{X'}+C'$ is also big. So the claim is satisfied.\end{proof} \par\bigskip\noindent It is not difficult to compute that $C'^2=18$, so, by the adjunction formula, the genus $g(C')=1+\frac{1}{2}(C'^2+K_{X'}\cdot C')=10$.
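\par\bigskip\noindent As a consistency check of this last computation (a sketch, using only intersection numbers fixed above): on $X''$ we have $C_1^2=(C_0-D\cdot f)^2=C_0^2-2\deg(D)=-3+6=3$, so $C''^2=(3C_1)^2=27$ and, after blowing up the $9$ simple points of $Z$, $$C'^2=27-9=18, \qquad g(C')=1+\frac{1}{2}(18+0)=10.$$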
We also observe that $C'$ is smooth because it is the strict transform of a general element $C''$ of $L''$, which is smooth. Since $-K_{X'}|_{C'}$ is a non-zero two torsion divisor as seen in Claim $1$, we have that $L'|_{C'}= |K_{C'}-K_{X'}|_{C'}|$ defines a Prym-canonical map $$\phi_{L'|_{C'}}:C'\dashrightarrow \mathbb{P}^8.$$ \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}2:}$ The rational map $\phi_{L'|_{C'}}:C'\dashrightarrow \mathbb{P}^8$ is an embedding, for any general curve $C'\in L'$. \begin{proof} First of all, we know that, if $L'|_{C'}$ has base points, then $C'$ is hyperelliptic by \cite{CDGK}, Lemma $2.1$. Moreover, since $C'\cdot \widetilde{f}=3$, where $\widetilde{f}$ is the pullback of a general fibre $f$ of $X''$, $C'$ is also a $3:1$ covering of the elliptic curve $\Gamma$. This is not possible by the Castelnuovo-Severi inequality, since otherwise we would have $10=g(C')\leq2\cdot0+3\cdot1+1\cdot 2=5$ (see \cite{A}). Thus $L'|_{C'}$ is base-point free. \par\bigskip\noindent Thanks to \cite{CDGK}, Corollary $2.2$, we know that, if $C'$ is not bielliptic, then $L'|_{C'}$ is an embedding. Because $C'$ is a triple cover of $\Gamma$ as observed before, $C'$ cannot be bielliptic, again by the Castelnuovo-Severi inequality, since otherwise we would have $10\leq2\cdot 1+3\cdot 1+1\cdot 2=7$. Thus the claim is proved. \end{proof} \par\bigskip\noindent At this point, since $L'|_{C'}$ is base-point free, it is clear that $L'$ is also base-point free. \noindent Since the restriction $L'|_{C'}$ defines an embedding for each generic curve $C'\in L'$, $\phi_{L'}$ is a birational map, generically $1:1$. \par\bigskip\noindent We have already shown that $\dim(L'')=\dim(L')\geq9$. From the exact sequence $$0\rightarrow \mathcal{O}_{X'}(C'-C')\rightarrow \mathcal{O}_{X'}(C')\rightarrow \mathcal{O}_{C'}(C')\rightarrow 0,$$ we conclude that $h^0(\mathcal{O}_{X'}(C'))\leq10$ since $\mathcal{O}_{X'}(C'-C')\cong \mathcal{O}_{X'}$ and $h^0(\mathcal{O}_{C'}(C'))=9$.
So we have that $h^0(\mathcal{O}_{X'}(C'))=10$. \par\bigskip\noindent Then $X'$ has hyperplane sections that are Prym-canonically embedded and, in particular, we have found a new surface $X=\phi_{L'}(X')\subset \mathbb{P}^9$ with Prym-canonical hyperplane sections. Since the antibicanonical divisor of $X'$ is connected, its image $x\in X$ by $\phi_{L'}$ is a singular point. There are other possible rational double singularities on $X$ whose exceptional divisors on $X'$ do not intersect $-2K_{X'}$. \noindent If a general hyperplane section $C$ of $X$ is projectively normal, then, by Proposition \ref{sum sing}, the geometric genus $p_g(x)$ is equal to $1$. \end{exa} \par\bigskip\begin{rem} We can compute how many moduli the pair $(X'',L'')$ of the previous example depends on. \bigskip\noindent The choice of the elliptic curve $\Gamma$ depends on one parameter. In addition we fix a divisor $D=-Q_1-Q_2-Q_3-\alpha+\beta$ of degree $-3$, where $Q_i$ is a general point on $\Gamma$, for $i=1,2,3$, and $\alpha-\beta$ is a non-zero two torsion element of $\Gamma$. \noindent We know that there are only three non-zero two torsion points on $\Gamma$. On the other hand, we observe that $|Q_1+Q_2+Q_3|$ is a linear system of dimension $2$, so the choice of $\mathcal{O}_{\Gamma}(D)$ depends on $3-2=1$ parameter. \noindent Moreover, every automorphism of $\Gamma$ lifts to an automorphism of $X''$, meaning that, if $\phi:\Gamma\rightarrow \Gamma$ is an automorphism, then $X''=\mathbb{P}_{\Gamma}(\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(D))\cong \linebreak \mathbb{P}_{\Gamma}(\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(\phi^*(D)))$. The group $\operatorname{Aut}(\Gamma)$ has dimension $1$. \bigskip\noindent To construct the surface with Prym-canonical hyperplane sections of the previous example, we also fix a linear system $L''\subset|3C_1|=|3C_0-3D\cdot f|$ with $9$ simple base points. The linear system $|3C_1|$ depends on the parameters fixed before.
The $9$ simple base points are the points of intersection between a general element $\mathcal{L}\in |3C_1|$ and the three fibres $f_i:=Q_i\cdot f$, for $i=1,2,3$. The choice of the effective divisor in a linear system of the type $|Q_1+Q_2+Q_3|$ that defines the three fibres $f_1,f_2,f_3$ depends on $2$ parameters. In addition, as seen in the previous example, the nine points $\{x_{1,1},x_{1,2},x_{1,3},x_{2,1},x_{2,2},x_{2,3},x_{3,1},x_{3,2},x_{3,3}\}$ impose independent conditions on the linear system $L''$, so they depend on $9$ parameters. \par\bigskip\noindent The choice of the pair $(X'',L'')$ depends on $1+1-1+2+9=12$ parameters. \par\bigskip\noindent We know that $\dim(\operatorname{Aut}(\mathbb{P}^3))=15$. We can consider $X''$ as the blowing up of the vertex of the cone $C_{X''}$ on a plane cubic of $\mathbb{P}^3$. If $C_{\Gamma}$ is the base curve of $C_{X''}$, there are $\infty^8$ plane cubics isomorphic to $C_{\Gamma}$. Since we can choose the vertex among all the possible points of $\mathbb{P}^3$, always obtaining isomorphic cones, there are $\infty^{(8+3)}=\infty^{11}$ isomorphic cones of the type of $C_{X''}$ in $\mathbb{P}^3$. \noindent Thus there are $\infty^4$ automorphisms of $\mathbb{P}^3$ that fix $X''$, so, in conclusion, the pair $(X'',L'')$ depends on $12-4=8$ parameters. \par\bigskip\noindent Since the surface constructed in the previous example depends on $8$ moduli while a general Enriques surface depends on $10$ moduli, the generic Enriques surface can degenerate to one of these surfaces, since they depend on fewer parameters. \end{rem} \subsubsection{\textbf{The minimal model is a rational ruled surface}} \par\bigskip\noindent We construct an example of a surface with Prym-canonical hyperplane sections birationally equivalent to a rational ruled surface.
\begin{exa} Let $\Gamma$ be a smooth rational curve and let $X''=\mathbb{P}_{\Gamma}(\mathcal{O}_{\Gamma}\oplus\mathcal{O}_{\Gamma}(D))$ be a minimal ruled surface with base curve $\Gamma$, for $D\in \operatorname{Div}(\Gamma)$. We assume that $e=4$, so $\deg(D)=-4$. Hence $X''$ is a Hirzebruch surface $F_4$. \par\bigskip\noindent We know that $-K_{X''}\sim2C_0-(K_{\Gamma}+D)\cdot f$, where $\deg(-K_{\Gamma}-D)=2+4=6$. We put $$-K_{X''}=2C_0+2\sum_{i=1}^{3}F_i,$$ where $F_1,F_2$ and $F_3$ are distinct and fixed fibres. We also set $$W''=4C_0+3F+3\sum_{i=1}^{3}F_i\in |-2K_{X''}|,$$ where $F$ is a generic fibre distinct from $F_i$, for $i=1,2,3$. \par\bigskip\noindent We consider the linear system $|4C_1|=|4C_0-4D\cdot f|$ on $X''$. Every element in $|4C_1|$ intersects every fibre of $X''$ in $4$ points since $4C_1\cdot f=4$. In addition, we know that $h^0(\mathcal{O}_{X''}(4C_1))=45$ by \cite{FP}, Lemma $35$. \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}1:}$ There is a smooth curve $\mathcal{L}\in |4C_1|$ such that $\mathcal{L}$ is tangent to each of $F$, $F_1$, $F_2$ and $F_3$ at two points. \begin{proof} We can assume that $X''=F_4$ is the blowing up of the vertex of a cone in $\mathbb{P}^5$ on a rational normal curve of $\mathbb{P}^4$. It is clear that $C_0$ is the exceptional divisor associated with the vertex. \noindent Let $E=F+F_1+F_2+F_3$ be the intersection curve between the cone and a hyperplane $H_0$ of $\mathbb{P}^5$ passing through the vertex of the cone. With abuse of notation, the total transform of $E$ on $X''$ is $E=C_0+F+F_1+F_2+F_3$. \par\bigskip\noindent It is obvious that the linear systems $|2C_1|$ and $|3C_1|$ are base-point free. Then they respectively contain a general quadric $Q$ and a general cubic $C$, which are smooth by Bertini's Theorem. \noindent It is clear that $2Q$ intersects every fibre in two points with multiplicity $2$.
We call $\{x_j,x_{1,j},x_{2,j},x_{3,j}\}$, for $j=1,2$, the intersection points between $2Q$ and $F+F_1+F_2+F_3$. \noindent Let us consider a pencil $\mathcal{P}$ generated by $2Q$ and $E+C$. By Bertini's Theorem, its curves may have singular points only on the base locus of the pencil. At this point we observe that $2Q\cdot C= 2(2C_1)\cdot (3C_1)=12\cdot 4=48$ since $C_1$ is a rational normal curve of degree $4$. These $48$ points are base points for the pencil, different from $\{x_j,x_{1,j},x_{2,j},x_{3,j}\}$, with $j=1,2$, by the generality of $C$. Now $E+C$ has only $16$ singular points since $E$ has only $4$ singular points on $C_0$, while $C$ is smooth and disjoint from $C_0$ by its generality and $E\cdot C= (C_0+4F)\cdot 3C_1=12$. Since $Q$ is also disjoint from $C_0$ and it is general, these $16$ points are different from the $48$ base points. Then $E+C$ is smooth at the $48$ base points. The same is true for $2Q$. Hence also a general divisor $\mathcal{L}$ in the pencil $\mathcal{P}$ is smooth at the $48$ base points. \noindent Moreover $2Q|_{E}=\{x_1,x_2,x_{1,1},x_{1,2},x_{2,1},x_{2,2},x_{3,1},x_{3,2}\}$. These give $8$ further base points for $\mathcal{P}$. Since $2Q$ passes through $\{x_j,x_{1,j},x_{2,j},x_{3,j}\}$, for $j=1,2$, with multiplicity $2$ and since $E+C$ simply passes through the eight points ($E$ contains the fibres $F,F_1,F_2,F_3$ and $C$ does not contain these $8$ points), a general curve $\mathcal{L}$ is smooth at these $8$ points and, in particular, it is tangent to $F,F_1,F_2,F_3$ at $\{x_1,x_2,x_{1,1},x_{1,2},x_{2,1},x_{2,2},x_{3,1},x_{3,2}\}$. \noindent Finally, we observe that $2Q\sim 2(2C_1)=4C_1$ and, similarly, $C\sim 3C_1$ and $E\sim C_0+4F$, so $E+C\sim 4C_1$. Then we have found a smooth curve $\mathcal{L}\in |4C_1|$ tangent to $F$ at $x_1$ and $x_2$ and tangent to $F_i$ at $x_{i,1}$ and $x_{i,2}$, for $i=1,2,3$.
\end{proof} \par\bigskip In the following figure, we analyze what happens when blowing up all the intersection points between $\mathcal{L}$ and $W''$, including infinitely near ones. We observe that, since $\mathcal{L}\sim 4C_1$ and $4C_1$ is disjoint from $C_0$, $\mathcal{L}$ does not intersect $C_0$. With abuse of notation, we will denote the strict transforms of $C_0$, $F_i$ and $\mathcal{L}$ by the same names. \noindent In the figure, we only focus on $F_1$; the situation is the same for $F_2,F_3$ and $F$. \bigskip \begin{itemize} \item{STEP 1} We blow up the intersection points $x_{i,1}$ and $x_{i,2}$ on $X''$; \bigskip \item{STEP 2} In $X''_1:=Bl_{x_{1,1},x_{1,2},x_{2,1},x_{2,2},x_{3,1},x_{3,2}}(X'')$, the curve $\mathcal{L}$ simply passes through the infinitely near base points $y_{i,1}$ and $y_{i,2}$, for $i=1,2,3$. We also blow up these six points; \bigskip \item{STEP 3} Again $\mathcal{L}$ intersects the exceptional divisors $E_{i,1}$ and $E_{i,2}$ respectively in $z_{i,1}$ and $z_{i,2}$ on $X''_2:=Bl_{y_{1,1},y_{1,2},y_{2,1},y_{2,2},y_{3,1},y_{3,2}}(X''_1)$, for $i=1,2,3$. We obtain $Y=Bl_{z_{1,1},z_{1,2},z_{2,1},z_{2,2},z_{3,1},z_{3,2}}(X''_2)$ blowing up these other six points. \end{itemize} \par\bigskip \begin{figure}[h] \centering \includegraphics [scale=0.6]{img01} \end{figure} \begin{figure}[h] \includegraphics [scale=0.6]{img02} \label{fig:1} \end{figure} \newpage \noindent With the same techniques as before, we also blow up $\{x_1,x_2,y_1,y_2,z_1,z_2\}\subset \mathcal{L}\cap F$ (as seen in the previous figure, they are infinitely near points). We define $$X':=Bl_{z_1,z_2}(Bl_{y_1,y_2}(Bl_{x_1,x_2}(Y))).$$ In $X'$, there are no intersection points between $\mathcal{L}$ and $-2K_{X'}$.
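\par\bigskip\noindent Before counting dimensions, we can check the value $h^0(\mathcal{O}_{X''}(4C_1))=45$ quoted above, again assuming the decomposition of sections which is presumably the content of \cite{FP}, Lemma $35$: since $4C_1=4C_0+16f$ on $F_4$, $$h^0(\mathcal{O}_{X''}(4C_1))=\sum_{k=0}^{4}h^0(\mathcal{O}_{\mathbb{P}^1}(16-4k))=17+13+9+5+1=45.$$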
\par\bigskip After observing how the blowing up works, we consider $L''\subset|4C_1|$ as the linear system of the curves of $|4C_1|$ simply passing through $$Z:=\{x_1,x_2,y_1,y_2,z_1,z_2,x_{i,1},x_{i,2},y_{i,1},y_{i,2},z_{i,1},z_{i,2}\}, \hspace{0.2cm} for \hspace{0.2cm} i=1,2,3.$$ Then $\dim L''\geq 44-6\cdot 4=20$. It is clear that $\mathcal{L}$ is an element of $L''$ and, since smoothness is an open condition, the general element $C''$ of $L''$ is smooth. \par\bigskip\noindent With the same notation as before, we can obtain that $$-K_{Y}=2C_0+\sum_{i=1}^{3}(2F_i+J_{i,1}+J_{i,2}+2E_{i,1}+2E_{i,2}+B_{i,1}+B_{i,2})$$ while $$W''_Y=4C_0+3F+\sum_{i=1}^{3}(3F_i+J_{i,1}+J_{i,2}+2E_{i,1}+2E_{i,2})\in |-2K_Y|.$$ \par\bigskip\noindent At this point, we use the fact that, if $M$ is an effective divisor and the $N_i$ are irreducible divisors with $M\cdot N_1<0$, $(M-N_1)\cdot N_2<0$, and so on, then $\sum N_i$ is a partial fixed part of $|M|$. Thus, inductively, one can verify that the whole divisor $2C_0+\sum_{i=1}^{3}(2F_i+J_{i,1}+J_{i,2}+2E_{i,1}+2E_{i,2}+B_{i,1}+B_{i,2})$ is a fixed component of $|-K_{Y}|$, so this is the only effective curve in its linear system. In addition, the part $4C_0+\sum_{i=1}^{3}(3F_i+J_{i,1}+J_{i,2}+2E_{i,1}+2E_{i,2})$ is the fixed part of $|-2K_{Y}|$ while its variable part is $3F$. \par\bigskip\noindent Similarly, we can compute that $$-K_{X'}\sim2C_0+\sum_{i=1}^{3}(2F_i+J_{i,1}+J_{i,2}+2E_{i,1}+2E_{i,2}+B_{i,1}+B_{i,2})+$$$$-J_1-J_2-2E_1-2E_2-3B_1-3B_2.$$ \bigskip\noindent This is clearly not effective. Instead $$W'=4C_0+3F+J_1+J_2+2E_1+2E_2+\sum_{i=1}^{3}(3F_i+J_{i,1}+J_{i,2}+2E_{i,1}+2E_{i,2})\in |-2K_{X'}|$$ is effective. In addition, the whole divisor $4C_0+3F+J_1+J_2+2E_1+2E_2+\sum_{i=1}^{3}(3F_i+J_{i,1}+J_{i,2}+2E_{i,1}+2E_{i,2})$ is a fixed component of $|-2K_{X'}|$ and consequently it is the only effective divisor in its linear system.
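\par\bigskip\noindent A minimal instance of this fixed-part criterion can already be seen on $X''=F_4$ itself: with $-K_{X''}\sim 2C_0+6f$ and $C_0^2=-4$ we get $$(-K_{X''})\cdot C_0=2(-4)+6=-2<0,$$ so $C_0$ is a fixed component of $|-K_{X''}|$; the computations on $Y$ and $X'$ above iterate exactly this argument, one irreducible component at a time.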
\par\bigskip\noindent Since all the divisors on $\Gamma$ of the same degree are linearly equivalent, we observe that $-4D\cdot f\sim 4F+4\sum_{i=1}^{3}F_i$, so we can assume that $$C''\sim4C_0+4F+4\sum_{i=1}^{3}F_i,$$ where $C''$ is a general element in $L''$. Then, its strict transform $C'$ on $X'$ is linearly equivalent to $$C'\sim 4C_0+4F+\sum_{j=1}^{2}(3J_j+6E_j+5B_j)+$$$$+\sum_{i=1}^{3}(4F_i+3J_{i,1}+3J_{i,2}+6E_{i,1}+6E_{i,2}+5B_{i,1}+5B_{i,2}).$$ If $\phi:X'\rightarrow X''$ is the blowing up of $X''$ along the points of $Z$, let $L'$ be such that $L''=\phi_*L'$, with $C'$ a general element. \par\bigskip Step by step, we can prove that a general hyperplane section $C'$ of $X'$ is a Prym-canonically embedded curve. \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}2:}$ We have that $\mathcal{O}_{C'}(-K_{X'})\ncong \mathcal{O}_{C'}$ while $\mathcal{O}_{C'}(-2K_{X'})\cong\mathcal{O}_{C'}$. In particular, $-K_{X'}|_{C'}$ is a non-zero two torsion divisor. \begin{proof} It is easy to compute that $C'\cdot W'=0$. Since $W'$ is effective, it is contracted by the map defined by $L'$; in particular $\mathcal{O}_{C'}(-2K_{X'})\cong \mathcal{O}_{C'}$. \noindent It is clear that also $C'\cdot (-K_{X'})=0$, but this time we have that $h^0(\mathcal{O}_{X'}(-K_{X'}))=0$. As seen in Claim $2$ of Proposition \ref{example}, it is sufficient to show that \linebreak$h^1(\mathcal{O}_{X'}(-K_{X'}-C'))=0$ to prove that $\mathcal{O}_{C'}(-K_{X'})\ncong \mathcal{O}_{C'}$. \noindent Using Serre Duality and the Kawamata-Viehweg vanishing Theorem (see \cite{K} and \cite{V}), if we prove that $K_{X'}+C'$ is big and nef, then the claim is satisfied.
\noindent Now $$K_{X'}+C'=2C_0+\sum_{i=1}^{3}(2F_i+2J_{i,1}+2J_{i,2}+4E_{i,1}+4E_{i,2}+4B_{i,1}+4B_{i,2})+$$$$+4F+4J_1+4J_2+8E_1+8E_2+8B_1+8B_2.$$ \noindent Since $K_{X'}+C'$ is written as a sum of irreducible and effective curves, to prove that $K_{X'}+C'$ is nef it is enough to prove that $(K_{X'}+C')\cdot \delta\geq0$ for each of its irreducible components $\delta$. It is possible to compute that this is true. Because the intersections between $K_{X'}+C'$ and its components are strictly positive, $K_{X'}+C'$ is also big. So the claim is satisfied.\end{proof} \par\bigskip\noindent It is not difficult to compute that $C'^2=40$ and $K_{X'}\cdot C'=0$, so, by the adjunction formula, the genus $g(C')=1+\frac{1}{2}(C'^2+K_{X'}\cdot C')=21$. We also observe that $C'$ is smooth because it is the strict transform of a general element $C''$ of $L''$, which is smooth. Since $-K_{X'}|_{C'}$ is a non-zero two torsion divisor as seen in Claim $2$, we have that $L'|_{C'}= |K_{C'}-K_{X'}|_{C'}|$ defines a Prym-canonical map $$\phi_{L'|_{C'}}:C'\dashrightarrow \mathbb{P}^{19}.$$ \par\bigskip\noindent We have already observed that $\dim(L'')=\dim(L')\geq20$. From the exact sequence $$0\rightarrow \mathcal{O}_{X'}(C'-C')\rightarrow \mathcal{O}_{X'}(C')\rightarrow \mathcal{O}_{C'}(C')\rightarrow 0,$$ we conclude that $h^0(\mathcal{O}_{X'}(C'))\leq21$ since $\mathcal{O}_{X'}(C'-C')\cong \mathcal{O}_{X'}$ and $h^0(\mathcal{O}_{C'}(C'))=20$. So we have that $h^0(\mathcal{O}_{X'}(C'))=21$ and $\phi_{L'}(X')\subseteq\mathbb{P}^{20}.$ \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}3:}$ The rational map $\phi_{L'|_{C'}}:C'\dashrightarrow \mathbb{P}^{19}$ is an embedding, for any general curve $C'\in L'$. \begin{proof} First of all, we know that, if the Prym-canonical system $L'|_{C'}$ has base points, then $C'$ is hyperelliptic by \cite{CDGK}, Lemma $2.1$. \noindent Let us suppose that $C'$ is hyperelliptic. Since we know that $C'\subset X'$ is isomorphic to $C''\sim 4C_1$ on $X''$, $C''$ is also hyperelliptic.
\noindent The self-intersection $C''^2=(4C_1)^2=64$. Furthermore $C''$ is nef since $C''\sim 4C_0+4F+4\sum_{i=1}^{3}F_i$ and $(4C_0+4F+4\sum_{i=1}^{3}F_i)\cdot C_0=0$ and $(4C_0+4F+4\sum_{i=1}^{3}F_i)\cdot F=(4C_0+4F+4\sum_{i=1}^{3}F_i)\cdot F_i=4$. Moreover, by \cite{H}, Proposition $IV.5.2$, we have that $|K_{C''}|$ is not very ample; precisely, it does not separate any pair of points $p$ and $q$ such that $p+q$ is a member of the $g^1_2$ on $C''$. By the adjunction formula, we also have that $|K _{X''} + C''|$ does not separate such $p$ and $q$. \noindent By \cite{R}, Theorem $1$, there exists an effective divisor $E$ on $X''$ passing through $p$ and $q$ such that $C''\cdot E<4$. Since $C''\sim 4C_1$, this forces $C''\cdot E=0$. This is not possible, and thus $E$ cannot exist. We exclude the case of $C''$ hyperelliptic, and hence $L'|_{C'}$ is base-point free. \par\bigskip\noindent Furthermore, we can prove that $L'|_{C'}$ defines a birational map. Indeed, if this did not happen, we would have $C'$ bielliptic and the image of $X'$ via the map associated with $L'$ would be a surface in $\mathbb{P}^{20}$ with elliptic sections (see \cite{CDGK}, Corollary $2.2$). Since $20>9$, the image surface in $\mathbb{P}^{20}$ could not be a Del Pezzo surface but it would be an elliptic cone. However, $X'$ is a rational surface, so it cannot cover an elliptic cone. Then $L'|_{C'}$ defines a birational map. \par\bigskip\noindent More precisely, we can also show that $L'|_{C'}$ defines an embedding, for any general $C'\in L'$. By \cite{CDGK}, Lemma $2.1$, we know that $L'|_{C'}$ does not separate $p$ and $q$ (possibly infinitely near) if and only if $C'$ has a $g^1_4$ and $-K_{X'}|_{C'}\sim \mathcal{O}_{C'}(p+q-x-y)$, where $2(p+q)$ and $2(x+y)$ are members of the $g^1_4$. \par\bigskip\noindent We know that $C'\cong C''$ and $C''\sim 4C_1$ has a $g^1_4$ defined by the fibres of the ruled surface $X''$. This is the only one.
Indeed, if $C'$ had two $g^1_4$, then there would be a map $\psi:C'\rightarrow \mathbb{P}^1\times\mathbb{P}^1$. If $\psi$ were a birational map, the image curve would be of type $(4,4)$ on $\mathbb{P}^1\times\mathbb{P}^1$. Then its geometric genus would be at most $(4-1)(4-1)=9$. Since $C'$ has genus $21$, this case is excluded. Thus $\psi$ would be a $2:1$ map onto a curve $D$. The image curve $D$ would be a curve of type $(2,2)$ on $\mathbb{P}^1\times\mathbb{P}^1$, so its geometric genus would be $g(D)\leq1$. Since $C'$ is non-hyperelliptic as seen before, $g(D)=1$ and $C'$ is bielliptic. Then $C'$ admits a singular correspondence. By Corollary $2.2$ of \cite{Cil-VdV}, the map determined by the linear system $|C'|$ is not birational; in particular it is $2:1$ onto a surface with elliptic sections. We have already excluded this possibility, so $C''$ has only one $g^1_4$. \par\bigskip\noindent It is clear that $C''\cap F\sim C''\cap F_1\sim C''\cap F_2\sim C''\cap F_3$ and we know that $C''$ is tangent at two points to these four fibres. So we have four pairs of points $(p,q)\in F$, $(x,y)\in F_1$, $(z,w)\in F_2$ and $(a,b)\in F_3$ such that $2(p+q)\sim 2(x+y)$ and so on for all the possible cases. Since $C'$ is the strict transform of $C''$, it has the same characteristics as $C''$ and, after the blowing up, the four pairs of points that satisfy this property are the intersection points between $C'$ and $B_{i,j}$ and $C'$ and $B_j$, for $i=1,2,3$ and $j=1,2$. Now, with abuse of notation and using the expression of $-K_{X'}$ seen before, we have that $$-K_{X'}|_{C'}=\sum_{i=1}^{3}(B_{i,1}+B_{i,2})|_{C'}-(3B_1+3B_2)|_{C'}=$$$$=x+y+w+z+a+b-3p-3q\sim x+y-w-z+a+b-p-q.$$ \bigskip\noindent At this point, we observe that $$x+y-w-z+a+b-p-q\nsim x+y-p-q$$ otherwise, if $a+b-w-z\sim 0,$ then $C'$ would have a $g^1_2$. Hence $L'|_{C'}$ separates each pair of points and defines an embedding.
\end{proof} \par\bigskip\noindent At this point, since $L'|_{C'}$ is base-point free, it is clear that $L'$ is also base-point free. \noindent Since the restriction $L'|_{C'}$ defines an embedding for each generic curve $C'\in L'$, $\phi_{L'}$ is a birational map, generically $1:1$. \noindent Then $X'$ has hyperplane sections that are Prym-canonically embedded. In particular, $\phi_{L'}(X')$ is a surface with Prym-canonical hyperplane sections. \par\bigskip\noindent We have found a new surface $X=\phi_{L'}(X')\subset \mathbb{P}^{20}$ with Prym-canonical hyperplane sections. Since $W'$ is connected, the image $x\in X$ of $W'$ is a rational singular point (see Proposition \ref{sum sing}). There are other possible rational double singularities on $X$ whose exceptional divisors on $X'$ do not intersect $-2K_{X'}$. \end{exa} \subsection{More surfaces with Prym-canonical hyperplane sections birationally equivalent to $\mathbb P^2$} \par\bigskip\noindent We construct a new example of such a surface whose minimal model is $X''=\mathbb{P}^2$. \begin{exa}\label{Case 2} Let $X''=\mathbb{P}^2$ and suppose that $-2K_{X''}$ is represented by an irreducible sextic with $10$ nodes $\{x_1,...,x_{10}\}$. Let $L''$ be a linear system of curves of degree $18$ with base points $\{x_1,...,x_{10}\}\subset X''$ of multiplicity respectively $r_i=4$, for $i=1,2,3$, and $r_i=6$, for $i=4,...,10$. Let $X'=Bl_{\{x_1,...,x_{10}\}}(\mathbb{P}^2)$ be the blowing up of $X''$ along the base points of $L''$ and let $L'$ be the strict transform of $L''$. We observe that the anticanonical divisor $-K_{X'}$ is not effective. \noindent Let $$C'\sim 18 l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10}E_i$$ be a general curve in $L'$, where $E_i$ is the exceptional divisor associated with $x_i$, for $i=1,...,10$.
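\par\bigskip\noindent As a quick check of the construction (using $l^2=1$, $E_i^2=-1$ and $l\cdot E_i=0$), the strict transform $6l-2\sum_{i=1}^{10}E_i$ of the sextic satisfies $$C'\cdot\Big(6l-2\sum_{i=1}^{10}E_i\Big)=18\cdot6-3\cdot(4\cdot2)-7\cdot(6\cdot2)=108-24-84=0.$$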
It is obvious that $$\deg(C'|_{C'})=18^2-3\cdot 16-7\cdot36=24.$$ We have that $h^0(\mathcal{O}_{X'}(C'))\geq\binom{20}{2}-3\frac{4\cdot5}{2}-7\frac{6\cdot 7}{2}=13$, so $\phi_{L'}(X')=X\subset \mathbb{P}^r$, for $r\geq12$. \par\bigskip\noindent Since $-2K_{X'}\sim J=6l-2\sum_{i=1}^{10}E_i$ is effective and $C'\cdot (-2K_{X'})=0$ by construction, $\mathcal{O}_{C'}(-2K_{X'})\cong \mathcal{O}_{C'}$. So $L'$ contracts $J$ to a single point since $J$ is irreducible. Moreover, since $J$ is also rational, $\phi_{L'}(J)$ is a rational singularity of multiplicity $4$ because the fundamental cycle $Z_0=J$ is such that $Z_0^2=J^2=-4$. \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}1:}$ The dimension $\dim(L')=12$, so $\phi_{L'}(X')=X\subseteq\mathbb{P}^{12}$. \begin{proof} We can consider the following exact sequence, already tensored with $\mathcal{O}_{X'}(C')$: \begin{equation}\label{P2 1} 0\rightarrow\mathcal{O}_{X'}(C'-J)\rightarrow\mathcal{O}_{X'}(C')\rightarrow\mathcal{O}_{J}(C')\rightarrow0. \end{equation} \noindent Since $\mathcal{O}_{C'}(-2K_{X'})\cong \mathcal{O}_{C'}$ and $J\in |-2K_{X'}|$, we have $\mathcal{O}_{J}(C')\cong \mathcal{O}_{J}\cong \mathcal{O}_{\mathbb{P}^1}$ since $J$ is rational. Thus we can rewrite (\ref{P2 1}) as \begin{equation}\label{P2 2} 0\rightarrow\mathcal{O}_{X'}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i)\rightarrow\mathcal{O}_{X'}(18l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10}E_i)\rightarrow\mathcal{O}_{\mathbb{P}^1}\rightarrow0. \end{equation} \noindent Similarly we obtain that $$0\rightarrow\mathcal{O}_{X'}(6l-2\sum_{i=4}^{10}E_i)\rightarrow\mathcal{O}_{X'}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i)\rightarrow$$ \begin{equation}\label{P2 3} \rightarrow\mathcal{O}_{J}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i)\rightarrow0.
\end{equation} \bigskip\noindent It is possible to choose a quintuple of points among the $10$ nodes $\{x_1,...,x_{10}\}$ of $J$ such that no three of these points are aligned; then an irreducible conic passing through this quintuple of points exists. Up to renaming the nodes of $J$, we suppose that a conic passing through $\{x_4,...,x_8\}$ exists. \noindent So let us consider the following exact sequences: \begin{equation}\label{P2 3bis} 0\rightarrow\mathcal{O}_{X'}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i)\rightarrow\mathcal{O}_{X'}(6l-2\sum_{i=4}^{10}E_i)\rightarrow\mathcal{O}_{2l-\sum_{i=4}^{8}E_i}(6l-2\sum_{i=4}^{10}E_i)\rightarrow0; \end{equation} $$0\rightarrow\mathcal{O}_{X'}(3l-\sum_{i=4}^{10}E_i)\rightarrow\mathcal{O}_{X'}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i)\rightarrow$$ \begin{equation}\label{P2 3bisbis} \rightarrow\mathcal{O}_{l-E_9-E_{10}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i)\rightarrow0. \end{equation} \bigskip\noindent It is obvious that $h^0(\mathcal{O}_{X'}(3l-\sum_{i=4}^{10}E_i))=\binom{5}{2}-7=3$. Since it is an effective divisor on $X'$ and it has the expected dimension, $h^1(\mathcal{O}_{X'}(3l-\sum_{i=4}^{10}E_i))=0$. \noindent Because $l-E_9-E_{10}$ is rational and $(l-E_9-E_{10})\cdot(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i)=0$, we have $h^0(\mathcal{O}_{l-E_9-E_{10}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i))=1$ and $h^1(\mathcal{O}_{l-E_9-E_{10}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i))=0$. \noindent From the exact sequence (\ref{P2 3bisbis}), we conclude that $h^0(\mathcal{O}_{X'}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i))=3+1=4$ and $h^1(\mathcal{O}_{X'}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i))=0$.
\par\bigskip\noindent Using the Riemann-Roch Theorem, since $J$ and $2l-\sum_{i=4}^{8}E_i$ are rational, we obtain that $$h^0(\mathcal{O}_{J}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i))=4+1=5$$ and $$h^0(\mathcal{O}_{2l-\sum_{i=4}^{8}E_i}(6l-2\sum_{i=4}^{10}E_i))=2+1=3.$$ \par\bigskip\noindent From the exact sequence (\ref{P2 3bis}), we conclude that $h^0(\mathcal{O}_{X'}(6l-2\sum_{i=4}^{10}E_i))=7$ and $h^1(\mathcal{O}_{X'}(6l-2\sum_{i=4}^{10}E_i))=0$. Again, from the exact sequence (\ref{P2 3}), we have that $h^0(\mathcal{O}_{X'}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i))=12$ and $h^1(\mathcal{O}_{X'}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i))=0$. \noindent Finally, from the exact sequence (\ref{P2 2}), we obtain that $$h^0(\mathcal{O}_{X'}(18l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10} E_i))=12+1=13.$$ Then the claim is proved. \end{proof} \par\bigskip Step by step we want to show that a general hyperplane section of $X'$ is a Prym-canonically embedded curve. \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}2:}$ We prove that $\mathcal{O}_{C'}(-K_{X'})\ncong \mathcal{O}_{C'}$. \begin{proof} If we show that $h^1(\mathcal{O}_{X'}(-K_{X'}-C'))=0$, then, using the long exact sequence associated with $$0\rightarrow \mathcal{O}_{X'}(-K_{X'}-C')\rightarrow \mathcal{O}_{X'}(-K_{X'})\rightarrow \mathcal{O}_{C'}(-K_{X'})\rightarrow 0$$ and observing that $-K_{X'}$ is not effective, we have that $h^0(\mathcal{O}_{C'}(-K_{X'}))=0$, thus $\mathcal{O}_{C'}(-K_{X'})\ncong \mathcal{O}_{C'}$. \noindent Since $$-K_{X'}-C'\sim -15l+3\sum_{i=1}^{3}E_i+5\sum_{i=4}^{10}E_i,$$ it is not effective and $h^0(\mathcal{O}_{X'}(-K_{X'}-C'))=0$. By Serre Duality, we have that $h^2(\mathcal{O}_{X'}(-K_{X'}-C'))=h^0(\mathcal{O}_{X'}(2K_{X'}+C'))=h^0(\mathcal{O}_{X'}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i))=12$ as proved in Claim $1$.
\noindent Using the Riemann-Roch Theorem, we conclude that $-h^1(\mathcal{O}_{X'}(-K_{X'}-C'))+h^2(\mathcal{O}_{X'}(-K_{X'}-C'))=-h^1(\mathcal{O}_{X'}(-K_{X'}-C'))+12=\frac{1}{2}(-15l+3\sum_{i=1}^{3}E_i+5\sum_{i=4}^{10}E_i) \cdot(-12l+2\sum_{i=1}^{3}E_i+4\sum_{i=4}^{10}E_i)+1-0=12$, so $h^1(\mathcal{O}_{X'}(-K_{X'}-C'))=0$ and the claim is proved.\end{proof} \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}3:}$ There are irreducible curves of degree $18$ with exactly $3$ quadruple points and $7$ points of multiplicity six at the ten nodes of $J$. \begin{proof} We observe that curves of the type $J+D$, with $D\in|12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i|$ and $J$ fixed part, are contained in $L'=|18l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10}E_i|$. \noindent As proved in Claim $1$, we have that $\dim|18l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10}E_i|=12$ while $\dim |12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i|=11$, so the reducible curves $J+D$ do not fill up the whole linear system of the curves of degree $18$. As a consequence of Bertini's Theorem (see \cite{Ak}, p. $1$), the generic curve of $L'$ is irreducible (indeed the curves of the linear system $L'$ with fixed part $J$ define a sublinear system, this sublinear system is not composed with a pencil, and a fortiori neither is the linear system $L'$). \noindent Also curves of the type $2J+F$, with $F\in |6l-2\sum_{i=4}^{10}E_i|$ and $2J$ fixed part, are contained in $L'$. Since these special curves of $L'$ have exactly quadruple points at three of the $10$ nodes of $J$ and points of multiplicity $6$ at seven of the $10$ nodes of $J$, the generic curves of the linear system $L'$ have the same property. Thus irreducible curves of degree $18$ with exactly quadruple points at three of the $10$ nodes of $J$ and points of multiplicity $6$ at the remaining nodes of $J$ exist.
\end{proof} \par\bigskip\noindent Therefore the arithmetic genus, which is equal to the geometric genus of $C'$, is $$g(C')=\frac{17\cdot 16}{2}-3\frac{4\cdot 3}{2}-7\frac{6\cdot 5}{2}=13$$ by the Pl\"{u}cker Formula. \par\bigskip\noindent It remains to show that $L'$ defines an embedding outside the contracted curve $J$. \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}4:}$ The linear system $L'$ is base-point free. \begin{proof} Let $\overline{X'}=Bl_{x_{11}}(X')$, where $x_{11}$ is a point of a general $C'\in L'$. If $\overline{L'}=|18l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10}E_i-E_{11}|$, for $E_{11}$ the exceptional divisor associated with $x_{11}$, then $L'$ is base-point free if and only if $$\dim( \overline{L'})=\dim(L')-1,$$ for any point $x_{11}\in C'$, for a general $C'\in L'$. \par\bigskip\noindent We have already proved that $\dim(L')=12$, while $\dim(\overline{L'})\geq\binom{20}{2}-3\frac{4\cdot 5}{2}-7\frac{6\cdot 7}{2}-1-1=11$. We observe that, since $x_{11}\in C'$ and $C'$ and $J$ are disjoint by assumption, $\mathcal{O}_{\overline{J}}(\overline{C'})\cong \mathcal{O}_{\mathbb{P}^1}$, where $\overline{C'}$ is a general curve in $\overline{L'}$ and $\overline{J}$ is the strict transform of $J$ on $\overline{X'}$. \noindent Similarly to the exact sequences (\ref{P2 2}), (\ref{P2 3}), we have the following: \begin{equation}\label{P2 4} 0\rightarrow\mathcal{O}_{\overline{X'}}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i-E_{11})\rightarrow\mathcal{O}_{\overline{X'}}(18l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10}E_i-E_{11})\rightarrow\mathcal{O}_{\mathbb{P}^1}\rightarrow0 \end{equation} $$ 0\rightarrow\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11})\rightarrow\mathcal{O}_{\overline{X'}}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i-E_{11})\rightarrow$$ \begin{equation}\label{P2 5} \rightarrow\mathcal{O}_{\overline{J}}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i-E_{11})\rightarrow0.
\end{equation} \noindent As in Claim $1$, we suppose that an irreducible conic passing through $\{x_4,...,x_8\}$ exists. \begin{itemize} \item If $x_{11}\in 2l-\sum_{i=4}^{8}E_i$, we can consider the exact sequence $$0\rightarrow \mathcal{O}_{\overline{X'}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i)\rightarrow\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11})\rightarrow$$ \begin{equation}\label{P2 6} \rightarrow \mathcal{O}_{2l-\sum_{i=4}^{8}E_i-E_{11}}(6l-2\sum_{i=4}^{10}E_i-E_{11})\rightarrow 0. \end{equation} \noindent From the exact sequence (\ref{P2 3bisbis}), we know that $h^0(\mathcal{O}_{\overline{X'}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i))=4$ and $h^1(\mathcal{O}_{\overline{X'}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i))=0$. \noindent Since $h^0(\mathcal{O}_{2l-\sum_{i=4}^{8}E_i-E_{11}}(6l-2\sum_{i=4}^{10}E_i-E_{11}))=2$ by Riemann-Roch's Theorem, then $h^0(\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11}))=6$ and $h^1(\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11}))=0$ from the exact sequence (\ref{P2 6}). \item If $x_{11}\notin 2l-\sum_{i=4}^{8}E_i$, then we consider the following \bigskip $$0\rightarrow\mathcal{O}_{\overline{X'}}(4l-\sum_{i=4}^{8}E_i-2E_9-2E_{10}-E_{11})\rightarrow\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11})\rightarrow $$ \begin{equation}\label{P2 7} \rightarrow\mathcal{O}_{2l-\sum_{i=4}^{8}E_i}(6l-2\sum_{i=4}^{10}E_i-E_{11})\rightarrow0. \end{equation} \begin{itemize} \item[$\blacklozenge$] If $x_{11}\in l-E_9-E_{10}$, we consider \bigskip $$ 0\rightarrow \mathcal{O}_{\overline{X'}}(3l-\sum_{i=4}^{10}E_i)\rightarrow\mathcal{O}_{\overline{X'}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i-E_{11})\rightarrow$$ \begin{equation}\label{P2 8} \rightarrow\mathcal{O}_{l-E_{9}-E_{10}-E_{11}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i-E_{11})\rightarrow0. 
\end{equation} It is obvious that $h^0(\mathcal{O}_{\overline{X'}}(3l-\sum_{i=4}^{10}E_i))=3$ and $h^0(\mathcal{O}_{l-E_{9}-E_{10}-E_{11}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i-E_{11}))=0$. From the exact sequence (\ref{P2 8}), we obtain that $h^0(\mathcal{O}_{\overline{X'}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i-E_{11}))=3$. \item[$\blacklozenge$] If $x_{11}\notin l-E_{9}-E_{10}$, we can consider the following exact sequences \bigskip $$0\rightarrow\mathcal{O}_{\overline{X'}}(3l-\sum_{i=4}^{11}E_i)\rightarrow\mathcal{O}_{\overline{X'}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i-E_{11})\rightarrow$$ \begin{equation}\label{P2 9} \rightarrow\mathcal{O}_{l-E_9-E_{10}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i-E_{11})\rightarrow0; \end{equation} \bigskip $$0\rightarrow\mathcal{O}_{\overline{X'}}(l-\sum_{i=9}^{11}E_i)\rightarrow\mathcal{O}_{\overline{X'}}(3l-\sum_{i=4}^{11}E_i)\rightarrow$$ \begin{equation}\label{P2 10} \rightarrow\mathcal{O}_{2l-\sum_{i=4}^{8}E_i}(3l-\sum_{i=4}^{11}E_i)\rightarrow0. \end{equation} \noindent By assumption we have that $h^0(\mathcal{O}_{\overline{X'}}(l-\sum_{i=9}^{11} E_i))=0$. Since $h^0(\mathcal{O}_{2l-\sum_{i=4}^{8}E_i}(3l-\sum_{i=4}^{11}E_i))=2$, from the exact sequence (\ref{P2 10}) we conclude that $h^0(\mathcal{O}_{\overline{X'}}(3l-\sum_{i=4}^{11}E_i))\leq2$. Since $h^0(\mathcal{O}_{\overline{X'}}(3l-\sum_{i=4}^{11}E_i))\geq\binom{5}{2}-8=2$, equality holds. \par\bigskip\noindent Moreover $h^0(\mathcal{O}_{\overline{X'}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i-E_{11}))\geq\binom{6}{2}-6-3-3=3$. Since $h^0(\mathcal{O}_{l-E_9-E_{10}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i-E_{11}))=1$, from the exact sequence (\ref{P2 9}) we have that $h^0(\mathcal{O}_{\overline{X'}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i-E_{11}))\leq3$, so equality holds. \end{itemize} \par\bigskip\noindent In both previous cases, we have found $h^0(\mathcal{O}_{\overline{X'}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i-E_{11}))=3$.
Since this is the expected dimension, $h^1(\mathcal{O}_{\overline{X'}}(4l-\sum_{i=4}^{8}E_i-2\sum_{i=9}^{10}E_i-E_{11}))=0$. \noindent Using the Riemann-Roch Theorem, we have that \linebreak$h^0(\mathcal{O}_{2l-\sum_{i=4}^{8}E_i}(6l-2\sum_{i=4}^{10}E_i-E_{11}))=3$. So we obtain that $h^0(\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11}))=6$ and $h^1(\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11}))=0$ from the exact sequence (\ref{P2 7}). \end{itemize} \par\bigskip\noindent In both cases we have that $h^0(\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11}))=6$. \par\bigskip\noindent With the same techniques as before, from the exact sequence (\ref{P2 5}), we have that $h^0(\mathcal{O}_{\overline{X'}}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i-E_{11}))=11$ and $h^1(\mathcal{O}_{\overline{X'}}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i-E_{11}))=0$ and finally, from the exact sequence (\ref{P2 4}), we obtain that $h^0(\mathcal{O}_{\overline{X'}}(18l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10}E_i-E_{11}))=12$. \noindent Then we can conclude that $L'$ is base-point free. \end{proof} \par\bigskip\noindent By Bertini's Theorem, since $L'$ is base-point free, the generic $C'\in L'$ is smooth. \par\bigskip\noindent $\mathrm{CLAIM \hspace{0.2cm}5:}$ The linear system $L'$ defines an embedding outside the contracted curve $J$. \begin{proof} It is sufficient to show that $\dim\overline{L'}=\dim (L')-2$, where either $\overline{L'}=|18l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10}E_i-E_{11}-E_{12}|$, for $E_{11}$ and $E_{12}$ the exceptional divisors associated with any two distinct points $x_{11}$ and $x_{12}$ not belonging to $J$, or $\overline{L'}=|18l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10}E_i-E_{11}-2E_{12}|$, for $E_{11}$ and $E_{12}$ the exceptional divisors associated with any two infinitely near points $x_{11}$ and $x_{12}$ not belonging to $J$. \bigskip\noindent We have already proved that $\dim(L')=12$.
Moreover we have that $\dim(\overline{L'})\geq\binom{20}{2}-3\frac{4\cdot 5}{2}-7\frac{6\cdot 7}{2}-2-1=10$. If $\overline{C'}$ is a general curve in $\overline{L'}$ and $\overline{J}$ is the strict transform of $J$ on $\overline{X'}$, then $\overline{C'}\cdot \overline{J}=0$ since we choose $x_{11}$ and $x_{12}$ not belonging to $J$. \bigskip\noindent We will show that $L'$ defines an embedding outside the contracted curve $J$ assuming $x_{11}$ and $x_{12}$ distinct. The proof is similar for $x_{11}$ and $x_{12}$ infinitely near. \noindent We can consider the following exact sequences: $$ 0\rightarrow\mathcal{O}_{\overline{X'}}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i-E_{11}-E_{12})\rightarrow$$ \begin{equation}\label{flu 1}\rightarrow\mathcal{O}_{\overline{X'}}(18l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10}E_i-E_{11}-E_{12})\rightarrow\mathcal{O}_{\mathbb{P}^1}\rightarrow0; \end{equation} $$0\rightarrow\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12})\rightarrow\mathcal{O}_{\overline{X'}}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i-E_{11}-E_{12})\rightarrow$$ \begin{equation}\label{flu 2} \rightarrow\mathcal{O}_{\overline{J}}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i-E_{11}-E_{12})\rightarrow0 \end{equation} $$0\rightarrow\mathcal{O}_{\overline{X'}}(2\sum_{i=1}^{3}E_i-E_{11}-E_{12})\rightarrow\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12})\rightarrow$$ \begin{equation}\label{flu 3} \rightarrow\mathcal{O}_{\overline{J}}(6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12})\rightarrow0. \end{equation} \noindent It is clear that $h^0(\mathcal{O}_{\overline{X'}}(2\sum_{i=1}^{3}E_i-E_{11}-E_{12}))=0$. Again, by Serre Duality, we have that $h^2(\mathcal{O}_{\overline{X'}}(2\sum_{i=1}^{3}E_i-E_{11}-E_{12}))=0$. Using the Riemann-Roch Theorem, we obtain that $-h^1(\mathcal{O}_{\overline{X'}}(2\sum_{i=1}^{3}E_i-E_{11}-E_{12}))=\frac{1}{2}(2\sum_{i=1}^{3}E_i-E_{11}-E_{12})\cdot(3l+\sum_{i=1}^{3}E_i-\sum_{i=4}^{10}E_i-2E_{11}-2E_{12})+1=-4$.
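\noindent Explicitly, using the intersection relations $l^2=1$, $E_i^2=-1$ and $l\cdot E_i=E_i\cdot E_j=0$ for $i\neq j$ on the blow-up, the product above expands as
\begin{align*}
\Big(2\sum_{i=1}^{3}E_i-E_{11}-E_{12}\Big)\cdot\Big(3l+\sum_{i=1}^{3}E_i-\sum_{i=4}^{10}E_i-2E_{11}-2E_{12}\Big)&=\sum_{i=1}^{3}2E_i^2+2E_{11}^2+2E_{12}^2\\
&=-6-2-2=-10,
\end{align*}
so that $\frac{1}{2}\cdot(-10)+1=-4$, as stated.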
\noindent Since $\overline{J}\cdot (6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12})=8$, $h^0(\mathcal{O}_{\overline{J}}(6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12}))=9$. Moreover $h^0(\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12}))\geq\binom{8}{2}-7\cdot 3-2=5$. To show that equality holds, it is sufficient to prove that $h^1(\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12}))=0$. \noindent We observe that curves of the type $(3l-\sum_{i=4}^{12}E_i)+F$, with $F\in|3l-\sum_{i=4}^{10}E_i|$ and $3l-\sum_{i=4}^{12}E_i$ as fixed part, are contained in $|6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12}|$. \noindent We have that $\dim|6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12}|\geq5$ while $\dim |3l-\sum_{i=4}^{10}E_i|=\binom{5}{2}-7=\frac{5\cdot 4}{2}-7=3$, so the reducible curves of the type $(3l-\sum_{i=4}^{12}E_i)+F$ do not fill up the whole linear system of the curves of degree $6$. As a consequence of Bertini's Theorem (see \cite{Ak}, p.~$1$), the generic curve $D$ of $|6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12}|$ is irreducible (indeed the curves of the linear system $|6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12}|$ with fixed part $3l-\sum_{i=4}^{12}E_i$ define a sublinear system which is not composed with a pencil, and a fortiori neither is the linear system $|6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12}|$). \noindent Let us consider the exact sequence \begin{equation}\label{flu 55} 0\rightarrow \mathcal{O}_{\overline{X'}}\rightarrow \mathcal{O}_{\overline{X'}}(D)\rightarrow \mathcal{O}_{D}(D)\rightarrow 0. \end{equation} \noindent Since $D^2=36-28-2=6$ and $p_a(D)=\frac{5\cdot 4}{2}-7=3$, we have $h^1(\mathcal{O}_{D}(D))=0$ (see \cite{H}, Example $IV.1.3.4$). Since $h^1(\mathcal{O}_{\overline{X'}})=0$ ($\overline{X'}$ being a rational surface), \linebreak$h^1(\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12}))=0$ from the exact sequence (\ref{flu 55}). Consequently $h^0(\mathcal{O}_{\overline{X'}}(6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12}))=5$ from the exact sequence (\ref{flu 3}).
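\noindent The value $p_a(D)=3$ can also be recovered by adjunction on $\overline{X'}$, where $K_{\overline{X'}}=-3l+\sum_{i=1}^{12}E_i$:
$$p_a(D)=\frac{1}{2}\,D\cdot(D+K_{\overline{X'}})+1=\frac{1}{2}\Big(6l-2\sum_{i=4}^{10}E_i-E_{11}-E_{12}\Big)\cdot\Big(3l+\sum_{i=1}^{3}E_i-\sum_{i=4}^{10}E_i\Big)+1=\frac{18-14}{2}+1=3.$$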
\noindent Since $\overline{J}$ is rational, $h^0(\mathcal{O}_{\overline{J}}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i-E_{11}-E_{12}))=5$, so, from the exact sequence (\ref{flu 2}), we have that $h^0(\mathcal{O}_{\overline{X'}}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i-E_{11}-E_{12}))=10$ and $h^1(\mathcal{O}_{\overline{X'}}(12l-2\sum_{i=1}^{3}E_i-4\sum_{i=4}^{10}E_i-E_{11}-E_{12}))=0.$ \noindent Finally, from the exact sequence (\ref{flu 1}), we obtain that $h^0(\mathcal{O}_{\overline{X'}}(18l-4\sum_{i=1}^{3}E_i-6\sum_{i=4}^{10}E_i-E_{11}-E_{12}))=11$. The claim is proved. \end{proof} \par\bigskip\noindent We have found a new example of a rational surface $X\subset\mathbb{P}^{12}$ of degree $\deg(X)=\frac{C'^2}{\deg \phi_{L'}}=24$ with Prym-canonical hyperplane sections and only one singularity, a quartic rational singularity. \end{exa}
\section{Introduction} \label{sec:intro} Consider an $M/M/1$ first-come-first-served queue where customers arrive according to a Poisson process with rate $\lambda$ and the service times for each customer are independently and identically distributed according to an exponential distribution with parameter $\mu$. After being served, each customer either successfully completes the service and departs from the system with probability $q$, or the service fails and the customer immediately joins the end of the queue to wait to be served again until she successfully completes it. We define the sojourn time as the total time a customer spends in the system, so it includes both the waiting time and the service time. Upon arriving at the queue, the newly arrived customer observes the number of customers in the system, and by considering the trade-off between her expected sojourn time and the reward due to a successful service completion, she makes a decision to join the queue or balk depending on the number of customers present when she arrives. The cost is assumed to be linear in the sojourn time with rate $C$. To non-dimensionalise the model, we set $C=1$ for the rest of the paper. The reward to a customer when she successfully completes her service, which is assumed to be identical across customers, is denoted by $R_0$. Let $R$ be the reward that a customer actually obtains when she leaves the system. In this first model, customers are not allowed to leave until they successfully complete their service. Hence, the random variable $R$ is equal to the reward $R_0$ with probability one, but the reason that we have introduced it is that the reward is truly random for the system with reneging that we consider later. Indeed, for that system the random variable $R$ is equal to the reward $R_0$ with some probability less than one and equal to zero with some positive probability. 
Customers decide to join as long as their expected payoff, which is defined as the difference between their expected reward and their expected cost, is positive. See Figure \ref{F1} for an illustration of the system. The sojourn time of a customer depends on the service times of all customers that are served before she leaves the system and, if she has to repeat her service, it is possible that some of these services are for customers who joined the queue after her. It follows that her expected reward depends on the joining strategy of other customers. As a consequence, the best response of each customer is a function of both the position at which she joins the system and the other customers’ strategies. For this reason, it is natural to consider the decision problem in a game theoretic framework and to look into the Nash equilibrium strategy for each customer (see \citet{HH03}). \begin{figure} \centering \includegraphics[width=10cm]{fbq_illustration_NR.pdf} \caption{An $M/M/1$ feedback queue with strategic customers when reneging is not allowed.} \label{F1} \end{figure} The study of the instantaneous Bernoulli feedback queue goes back to \citet{T63}, in which he obtained the expected total waiting time for the $M/G/1$ feedback queue by deriving the joint transform of the distribution of the queue length and the remaining service time. \citet {DMS80}, \citet{D81}, and \citet{DK84} further studied the queue length, the total sojourn time, and the waiting time. \citet{T87} applied the instantaneous Bernoulli feedback queue to study packet transmissions in an error-prone channel with probability of successful transmission $q$. This transmission style is similar to segmented message transmission with the number of segments in a message geometrically distributed with mean $1/q$. \citet{FH05}, and \citet{GH09} studied queueing networks with feedback loops and intelligent customers. 
However, the customers in their model are not strategic in the sense that their decisions do not depend on the others' decisions, which is different from our setting. \citet{AS98} analysed a system of observable egalitarian processor sharing queues, where customers decide to join or to balk after observing the number of customers in the system upon their arrival, and are not allowed to renege at any stage after joining. They calculated numerically the symmetric threshold equilibrium strategy for the case of Poisson arrivals and proposed a dynamic learning scheme, which converges to the symmetric Nash equilibrium strategy. \citet{AKLL18} considered a polling system with two queues where a single server serves the two nodes in a cyclic fashion with exhaustive service. Customers can choose which queue to join upon arrival. They analysed the Nash equilibrium strategies under three scenarios of available information of the queue lengths or the position of the server at decision epochs, and obtained the Nash equilibrium strategies via a new iterative algorithm. In both \cite{AS98} and \cite{AKLL18}, customers' best response is affected by future arrivals. In this paper, we study an $M/M/1$ queue with instantaneous Bernoulli feedback, and allow each customer to determine whether to join the queue or not after observing the number of customers in the system. Similar to \citet{AS98}, in our model, a tagged customer's sojourn time is affected by the joining behavior of future arrivals. This model was first analysed in an unpublished technical report by \citet{BC13}. They considered a first-come-first-served $GI/G/1$ Bernoulli feedback queue with arriving customers observing the number of customers in the system before deciding whether to join or not, but not allowed to renege. 
Although arriving customers see the stationary distribution of the number of customers in the system, due to the balking and different joining positions, the distribution observed by feedback customers requires further analysis (see \citet[Section 2.10]{W88}, \citet{BVD97}), which makes the expected sojourn time computation nontrivial. In this paper, we efficiently obtain the expected payoff of a joining customer for any parameter set using matrix analytic methods (see \citet{N81}), which can also be easily extended to other models. In particular, we compute a customer's conditional expected payoff based on her joining position and the other customers’ threshold values, by solving Poisson's equation for a discrete-time nonhomogeneous quasi-birth-and-death process. Then we explicitly propose the Nash equilibrium strategies (pure or mixed) of threshold type. Every time a customer joins at the end of the queue due to a service failure, the time she has already spent becomes a sunk cost. Also, it is possible that her conditions have deteriorated with time. For example, the system could have been empty when a tagged customer first arrived at the queue, but has become overcrowded before she goes to the end of the queue due to a service failure, because of a large number of arrivals during her first service. Thus, such a customer might want to renege if allowed. But once she chooses to remain, the residual time until her next service has an Erlang distribution, which has an increasing hazard rate. Thus, if it is worth remaining in the system, it is worth waiting until the next service. In the second part of this paper, we assume that customers are allowed to renege every time they join the end of the queue according to the same threshold strategy with which they choose to join the system.
That is, if customers choose to join the system if and only if the number of customers in the system is less than or equal to some threshold value, then they will leave the system after a service failure if and only if the number exceeds the same threshold value. With matrix analytic methods, we can easily compute the Nash equilibrium threshold when reneging is permitted, and compare it with the equilibrium threshold value in the non-reneging case. We show that the customers' equilibrium threshold value when reneging is allowed is greater. However, for some parameter values, their expected payoff can decrease. The paper is organised as follows. In Section 2 we introduce the basics of the $M/M/1$ feedback queue, and precisely define our threshold joining strategy, which is specified by a real-valued threshold. We also derive an analytical expression for a tagged customer's position-dependent expected sojourn time if the other customers always choose to join. In Section \ref{sec: NRcase} we obtain numerically the expected sojourn time and the expected payoff of a tagged customer conditioned on her joining position and the threshold strategy used by others, via matrix analytic methods. Then we propose a threshold Nash equilibrium strategy. In Section 4 we assume that customers are allowed to leave after joining and their reneging threshold is the same as the one with which they choose to join the system. We compute the Nash equilibrium threshold when reneging is permitted and compare it with that in the non-reneging case. In Section 5 we present two paradoxes observed in the non-reneging and the reneging case. In Section 6 we analyse the optimal social welfare in both the non-reneging and the reneging cases, and prove that allowing reneging does not change the socially optimal threshold and optimal social welfare. 
\section{Preliminaries} \label{sec: Pre} \subsection{Joining strategies} \label{sec: Pre: Jpolicies} We assume that the queue starts at time $0$ with an initial number of customers according to a distribution $\pi(0)$ which is supported on the nonnegative integers. The number of customers in the system is observable to any arriving customer before she decides to join or not to join. For $r = 1,2,\ldots$, let $u_r$ be a function that maps the numbers $1,2, \ldots$ to the interval $[0,1]$ such that $u_r(i)$ is the probability that the $r$th arriving customer chooses to join if there are $i-1$ customers in front of her (including the one in service), which would mean that she starts in position $i$. We call the function $u_r$ the {\it joining strategy} for customer $r$ and $\bm{u}^\infty \equiv (u_1,u_2,\ldots )$ the {\it joining strategy profile} for the population. If $u_r(i)$ depends only on $i$, then the joining strategy profile is {\it symmetric}, in which case $\bm{u}^\infty = (u,u,\ldots)$ (see \citet[p3]{HH03}). Next, we introduce the definition of a threshold strategy. This threshold strategy was first proposed in \citet{H96}, and was also used in \citet{HH97}. \begin{definition} {\rm (symmetric threshold strategy)}. \label{D1} For any $x \in \mathbb{R}^+$, the symmetric threshold strategy with threshold value $x$ has components \begin{equation} u^{(x)}(i)\equiv \begin{cases} 1 & \text{if}\ \, i \leq n \\[+6pt] p & \text{if}\ \, i=n +1 \\[+6pt] 0 & \text{if}\ \, i \geq n +2 \,, \end{cases} \end{equation} where $n \equiv \lfloor x \rfloor$ and $p \equiv x-n$. \end{definition} A customer who adopts threshold $x$ always chooses to join at a position which is less than or equal to $x$. She chooses to join at position $\lfloor x \rfloor+1$ with probability $x - \lfloor x \rfloor$, and refuses to join at any position greater than $\lfloor x \rfloor+1$.
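In code, the strategy of Definition \ref{D1} reads as follows (an illustrative sketch; the helper name \texttt{u\_x} is ours, not from the paper):

```python
import math

def u_x(x, i):
    """Joining probability u^{(x)}(i) of the symmetric threshold strategy:
    join surely up to position n = floor(x), with probability p = x - n at
    position n + 1, and balk at any higher position."""
    n = math.floor(x)
    p = x - n
    if i <= n:
        return 1.0
    if i == n + 1:
        return p
    return 0.0
```

For example, $u^{(2.5)}(2)=1$, $u^{(2.5)}(3)=0.5$ and $u^{(2.5)}(4)=0$.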
In their unpublished report \cite[Theorem 6]{BC13}, Brooms and Collins claimed that any symmetric equilibrium joining strategy must be a threshold strategy. However their proof lacks detail, so we are going to treat this result with caution. If it is correct then our threshold strategy in Theorem 1 is the unique symmetric subgame perfect equilibrium strategy. \subsection{Basics of a single-server feedback queue} \label{sec: Pre: BasicsFeedback} For the single-server feedback queue in Figure \ref{F1}, in the time interval $[0, \infty)$, we denote by $\xi(t)$ and $\tau_r$ the number of customers in the system at time $t$ and the arrival time of the $r$th customer, respectively. Then $\xi_r:=\xi(\tau_r)$ is the position at which the $r$th customer joins the system where, when $\xi_r = 1$, the customer immediately goes into service. To work out the Nash equilibrium strategy, we arbitrarily select a customer as our tagged customer, and calculate her optimal response based on different strategies adopted by others. We are interested in the symmetric Nash equilibrium strategy, that is the strategy which is the best response when others use it too. We denote the total sojourn time of the tagged customer in the system when the other customers all use threshold $x$ by $W^{(x)}$. Consistent with this notation, $W^{(\infty)}$ is the total sojourn time of a tagged customer in the system when all the other arriving customers always join and are not allowed to renege later. From \citet[Theorem 1]{T63}, if $\lambda < q \mu$, then when all customers always join and are not allowed to renege later, the process $\lbrace \xi(t), 0 \leq t < \infty \rbrace$ has a unique stationary distribution \[ \pi_i:= \lim\limits_{t \rightarrow \infty}\mathbb{P} \lbrace \xi(t) = i \rbrace = \left(1-\frac{\lambda}{q\mu}\right)\left(\frac{\lambda}{q\mu}\right)^i \, (i=0,1,\cdots)\,. 
\] Furthermore, \citet[Section VI]{T63} gave the Laplace-Stieltjes transform of the unconditional stationary waiting time. We use similar techniques to obtain the conditional expected sojourn time given the joining position of each customer. In the stationary regime, for $i = 1,2,\ldots$, let \begin{align} &P_i(w):= \mathbb{P}\{W^{(\infty)} \leq w \, , \, \xi_r = i\} \\ &\Pi_i(s):= \int_{0}^{\infty}e^{-sw}dP_i(w) \,. \end{align} Then for $|z| \leq 1$, $\mathfrak{R}(s) \geq 0$, \begin{equation} U(s,z) := \sum_{i=1}^{\infty} \Pi_i(s)z^i = \left(1-\frac{\lambda}{q \mu}\right)\sum_{k = 1}^{\infty}\frac{q (1-q)^{k-1}}{a_k(s,z)-b_k(s,z)} \,, \label{eq6} \end{equation} where \begin{equation} \begin{bmatrix} a_k(s,z) \\ b_k(s,z) \end{bmatrix} = \begin{bmatrix} \frac{\displaystyle \mu+\lambda+s}{\displaystyle \mu} & -q \\ \frac{\displaystyle \lambda}{\displaystyle \mu} & (1-q) \end{bmatrix}^k \begin{bmatrix} 1 \\ \frac{\displaystyle\lambda z}{\displaystyle q\mu} \end{bmatrix} \,. \end{equation} To obtain $\displaystyle \int_{0}^{\infty}w \, dP_i(w)$, we take the derivative of $U(s,z)$ with respect to $s$ and set $s=0$: \begin{align} \sum_{i=1}^{\infty} \left(\int_{0}^{\infty}w \, dP_i(w)\right) z^i &= -\frac{\partial U(s,z)}{\partial s} \bigg|_{s=0} \\ &=\left(1-\frac{\lambda}{q \mu}\right)\frac{\lambda z\left((2-q)q\mu-(1-q)\lambda z\right)}{(\lambda z-q\mu)^2((q-1)\lambda-(q-2)q\mu)} \\ &=\sum_{i = 1}^{\infty} \left(1-\frac{\lambda}{q \mu}\right) \, \frac{(i+1-q)\,\lambda^i}{(q\mu)^i((q-1)\lambda-(q-2)q\mu)}z^i \,. \label{eq3} \end{align} Hence, the stationary expected sojourn time of a tagged customer if she joins at position $i$, and all other customers always choose to join upon arrival is \begin{equation} w_{i,i}^{(\infty)}:=\mathbb{E}\left(W^{(\infty)} \mid \xi_r = i\right) = \frac{\displaystyle\int_{0}^{\infty}w \, dP_i(w)}{\displaystyle \pi_i} = \frac{i+1-q}{(q-1)\lambda-(q-2)q\mu} \,.
\label{closed_form_waiting_time} \end{equation} \section{The Case When Customers Cannot Renege} \label{sec: NRcase} \subsection{Expected payoff} \label{sec: NRcase: EP} In this paper, we assume that customers are homogeneous, which means that they value receiving service identically and they place the same per unit time value on their waiting, and we focus on symmetric threshold strategies defined in Definition \ref{D1}. When a customer arrives and sees $j-1$ customers already in the system, she will join the queue at the $j$th place. When every customer adopts threshold $x$ and the system starts with less than $\lceil x \rceil+1$ customers, the tagged customer, upon arrival, can observe at most $\lceil x \rceil$ people in the system. If she chooses to join, her position is at most $\lceil x \rceil + 1$. Let $w^{(x)}_{i,j}$ be the expected remaining time until the tagged customer departs the system, if there are $j$ customers in the system, she is in position $i\leq j$ and all the other customers use threshold $x$. So if a customer joins in position $j$, her expected sojourn time will be $w_{j,j}^{(x)}$. On the other hand, when she leaves the queue she will obtain a reward $R_0$, and her expected payoff when she is in position $i$, there are $j$ customers in total, and the other customers are using threshold $x$, is thus $z_{i,j}^{(x)} \equiv \mathbb{E}(R) - w_{i,j}^{(x)} = R_0 - w_{i,j}^{(x)}$. We shall show that the vector \[ \bm{w}^{(x)} = \left(w_{1,1}^{(x)}, w_{1,2}^{(x)}, w_{2,2}^{(x)}, \ldots, w_{1,\lceil x \rceil +1}^{(x)}, \ldots, w_{\lceil x \rceil,\lceil x \rceil +1}^{(x)}, w_{\lceil x \rceil +1,\lceil x \rceil +1}^{(x)}\right)^T \] satisfies a version of Poisson's equation. In Section \ref{sec: Rcase} where we consider a model with reneging, customers do not always get the reward, and we proceed by writing Poisson's equation for the expected payoff $z_{i,j}^{(x)}$ directly.
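A quick numerical sanity check (our sketch; the parameter values and helper names are illustrative): since a failed service returns the customer to the queue without changing the queue length, the queue-length process behaves as a birth-death chain with up-rate $\lambda$ and down-rate $q\mu$, and the stationary distribution of a truncated version of this chain matches the geometric law above; the final function codes the closed form \eqref{closed_form_waiting_time}.

```python
import numpy as np

# Illustrative parameters satisfying the stability condition lam < q*mu.
lam, mu, q = 0.4, 0.6, 0.7
N = 600  # truncation level for the queue-length chain

# A failed service (prob. 1-q) sends the customer back without changing the
# queue length, so the length process is birth-death with rates lam and q*mu.
Q = np.zeros((N + 1, N + 1))
for i in range(N):
    Q[i, i + 1] = lam        # arrival
    Q[i + 1, i] = q * mu     # effective departure rate
np.fill_diagonal(Q, -Q.sum(axis=1))

# Stationary distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

rho = lam / (q * mu)
assert np.allclose(pi[:50], (1 - rho) * rho ** np.arange(50), atol=1e-6)

def w_inf(i):
    """Closed-form conditional expected sojourn time of a customer joining
    at position i when everyone always joins."""
    return (i + 1 - q) / ((q - 1) * lam - (q - 2) * q * mu)
```

With these parameters $\rho=\lambda/(q\mu)\approx 0.95$, so a fairly deep truncation is needed before the tail mass becomes negligible.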
To compute $\bm{w}^{(x)} $, we construct a continuous-time quasi-birth-and-death process (QBD) on the state space $\mathcal{S} = \left\{(i,j) \, : \, 1 \leq i \leq j \leq \lceil x \rceil + 1 \right\}$ with its level $j$ denoting the total number of customers including the customer in service in the system, and its phase $i$ denoting the position of the tagged customer. Then we construct the embedded discrete-time QBD obtained by observing this continuous-time Markov chain at its transition points and write $w_{i,j}^{(x)}$ conditioning on the first transition out of state $(i,j)$ in \eqref{eq:wij}. Specifically, the expected time until the next transition is $ \frac{1}{\lambda+\mu}$. The next transition is an arrival with probability $ \frac{\lambda}{\lambda+\mu}$. When $j < \lfloor x \rfloor$, the arriving customer joins the system with probability $1$; when $j = \lfloor x \rfloor$, the arriving customer joins the system with probability $p$; when $j = \lfloor x \rfloor+1$ or $\lfloor x \rfloor+2$, the arriving customer balks. The next transition is a service completion with probability $\frac{\mu}{\lambda+\mu}$, after which a customer leaves the system with probability $q$ and joins the end of the system with probability $1-q$. Hence, if the customer in service is the tagged one ($i=1$), when she finishes her service, her future sojourn time is $0$ with probability $q$, otherwise, her next position is $j$. When the customer in service is not the tagged one, the position of the tagged customer decreases by $1$, the total number of customers decreases by $1$ with probability $q$ but stays unchanged with probability $1-q$. 
From the aforementioned reasoning, \begin{align} \label{eq:wij} &\lefteqn{w_{i,j}^{(x)} \, = } \\ & \frac{1}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu} \, \left( w_{i,j+1}^{(x)} \, \mathbbm{ 1 }_{\lbrace j < \lfloor x \rfloor \rbrace} + \left( p \, w_{i,j+1}^{(x)} + (1-p) \, w_{i,j}^{(x)} \right) \, \mathbbm{ 1 }_{\lbrace j = \lfloor x \rfloor \rbrace} + w_{i,j}^{(x)} \, \mathbbm{ 1 }_{\lbrace j = \lfloor x \rfloor+1, \lfloor x \rfloor+2 \rbrace} \right) \, \notag \\ &\, + \, \frac{\mu}{\lambda+\mu}\, \left( (1-q) \, w_{j,j}^{(x)} \, \mathbbm{ 1 }_{\lbrace i = 1 \rbrace} + \left( q \, w^{(x)}_{i-1,j-1} + (1-q) \, w_{i-1,j}^{(x)} \right) \, \mathbbm{ 1 }_{\lbrace i > 1 \rbrace} \right) \,.\notag \end{align} Thus, we can obtain $\bm{w}^{(x)}$ by solving Poisson's equation \begin{equation} \left(I-P^{(x)} \right) \bm{w}^{(x)} \, = \, \frac{1}{\lambda+\mu} \, \bm{e}, \label{eq: poisson1} \end{equation} where $P^{(x)}$ is defined in Appendix \ref{appendix:1.1}, and $\bm{e}$ denotes a vector of $1$'s of the appropriate size. We have shown that $w_{i,j}^{(x)}$ can be obtained by solving a system of linear equations. However, the number of equations is quadratic in $\lfloor x \rfloor$. Thus, it is necessary to come up with an efficient way of carrying out the calculation. Equation (\ref{eq: poisson1}) is Poisson’s equation for a level dependent QBD, where the defining matrices $A^{(j)}_1, A^{(j)}_0, A^{(j)}_{-1}$ are given in Appendix \ref{appendix:1.1}. Due to the special structure of QBDs, we propose Algorithm \ref{alg_poisson} to solve $\bm{w}^{(x)}$ based on the methodology in \citet{DLL13}. See \citet[Chapter 12]{LR99} for a detailed explanation of the matrices $\Gamma^{(j)}, U^{(j)}$ and $G^{(j)}$ for a level dependent QBD which are used in Algorithm \ref{alg_poisson}, noting that the matrix $\Gamma^{(j)}$ in this paper has the same meaning as matrix $R^{(j)}$ in \cite{LR99}. 
We use $\Gamma$ to differentiate it from the reward $R$ that is obtained by the customers after they complete their service. \begin{algorithm} \caption{}\label{Poisson} \begin{algorithmic}[1] \Procedure{Calculate $U^{(j)}$, $\Gamma^{(j)}$, $G^{(j)}$}{} \Comment{The $U^{(j)}, \Gamma^{(j)}, G^{(j)}$ of $P^{(x)}$} \State $U^{(\lceil x \rceil+1)} \gets A_0^{(\lceil x \rceil+1)}$ \State $\Gamma^{(\lceil x \rceil+1)} \gets A_1^{(\lceil x \rceil)} \, (\mathbf{I}- U^{(\lceil x \rceil+1)})^{-1}$ \State $G^{(\lceil x \rceil+1)} \gets (\mathbf{I}- U^{(\lceil x \rceil+1)})^{-1} \, A_{-1}^{(\lceil x \rceil+1)}$ \For {$j = \lceil x \rceil:2$} \State $U^{(j)} \gets A_0^{(j)} + A_1^{(j)}\, G^{(j+1)}$ \State $\Gamma^{(j)} \gets A_1^{(j-1)} \, (\mathbf{I}- U^{(j)})^{-1}$ \State $G^{(j)} \gets (\mathbf{I}- U^{(j)})^{-1} \, A_{-1}^{(j)}$ \EndFor \State \textbf{end} \EndProcedure \Procedure{Poisson's Equation}{$U^{(j)}, \Gamma^{(j)}, G^{(j)}, \frac{1}{\lambda+\mu}$} \State $y(1) \gets 0$ \For {$j = 2:\lceil x \rceil+1$} \State $y(j) \gets \frac{1}{\lambda+\mu}(\mathbf{I} - U^{(j)})^{-1} \, (\bm{e}_{j}+ \sum_{k = j}^{\lceil x \rceil} \Pi_{l=j+1: k+1} \, \Gamma^{(l)} \, \bm{e}_{k+2}) + \, G^{(j)} \, y(j-1)$ \EndFor \State \textbf{end} \State $y(1) \gets \frac{1}{\lambda + \mu} + A_1^{(1)}\, y(2)$ \State $w(1) = \frac{y(1)}{1-(A_0^{(1)} \, + \, A_1^{(1)} \, G^{(2)})}$ \Comment{Expected sojourn time} \For {$j = 2: \lceil x \rceil+1$} \State $w(\frac{j(j-1)}{2}+1: \frac{j(j-1)}{2}+j) = y(j) \, + \, \Pi_{l=j:2} \, G^{(l)} \, w(1)$ \EndFor \State \textbf{end} \State \textbf{return} $w$ \EndProcedure \end{algorithmic} \label{alg_poisson} \end{algorithm} \begin{figure} \centering \includegraphics[width = 0.7\textwidth]{waiting_time.png} \caption{Expected sojourn time of the tagged customer ($\lambda = 0.4, \mu = 0.6, q = 0.7$).} \label{WT} \end{figure} We plot $w_{j,j}^{(x)}$ for $0 \leq x \leq 10$ in Figure \ref{WT}. Several observations can be made.
First, $w_{j,j}^{(x)}$ exists only when $\lceil x \rceil \geq j-1$. The reason follows from the explanation at the beginning of this section that the tagged customer cannot be in a position greater than $\lceil x \rceil + 1$. Second, $w_{j,j}^{(x)}$ increases in $j$ for any $1 \leq j \leq \lceil x \rceil+1$, and increases in $x$ when $x \geq 1$. This property was proved in \citet{BC13} via coupling, and their proof works for $GI/G/1$ feedback queues. For an $M/M/1$ feedback queue, we propose an alternative proof in Lemmas \ref{lemmaWI} and \ref{lemmaWx}. Third, when $x \in [0,1)$, as long as the tagged customer is in the system, no newly arriving customer will join the system, hence the expected sojourn time of the tagged customer is independent of $x$. Indeed, from \eqref{eq:wij}, we explicitly have \begin{equation} w_{1,1}^{(x)} = \frac{1}{\mu q} \qquad w_{2,2}^{(x)} = \frac{3-q}{\mu q (2-q)} > \frac{1}{\mu q} \qquad 0 \leq x \leq 1 \,. \end{equation} Finally, as expected, $w_{j,j}^{(x)}$ approaches $w_{j,j}^{(\infty)}$ as $x$ increases. Our results are stated in Lemmas \ref{lemmaWI} and \ref{lemmaWx} below, the proofs of which appear in Appendix \ref{appendix:proof of lemma1 and 2}. \begin{lemma} \label{lemmaWI} $w^{(x)}_{j,j}$ is increasing in $j$ for $1 \leq j \leq \lceil x \rceil +1 \,.$ \end{lemma} \textbf{Remark} At the expense of making the calculation more intricate, we can prove that $w_{j,j}^{(x)}$ is strictly increasing in $j$. We omit the details. \begin{lemma} \label{lemmaWx} For any two threshold policies $x_1$ and $x_2$ with $x_1 < x_2$, \begin{equation} w_{i,j}^{(x_1)} < w_{i,j}^{(x_2)} \qquad 1 \leq i \leq j \leq \lceil x_1 \rceil +1 \,. \end{equation} \end{lemma} \subsection{The Nash equilibrium threshold} \label{sec: NRcase: NashE} In Lemmas \ref{lemmaWI} and \ref{lemmaWx}, we have proved that $w_{j,j}^{(x)}$ is increasing in $j$ and $x$, so $z_{j,j}^{(x)}$ is decreasing in $j$ and $x$.
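As a numerical sanity check on the recursion \eqref{eq:wij}, for $x \in [0,1)$ the three unknowns $w_{1,1}^{(x)}, w_{1,2}^{(x)}, w_{2,2}^{(x)}$ satisfy a $3 \times 3$ linear system whose solution reproduces the closed forms above. A minimal sketch (the parameter values $\lambda = 0.4$, $\mu = 0.6$, $q = 0.7$ match Figure \ref{WT}):

```python
import numpy as np

# Illustrative parameters, matching the figure settings.
lam, mu, q = 0.4, 0.6, 0.7

# For x in [0,1) no arriving customer joins behind the tagged one, so the
# recursion closes on the states (i,j) in {(1,1), (1,2), (2,2)}.
# Multiplying each one-step equation by (lam + mu) gives:
#   (lam+mu) w11 = 1 + lam w11 + mu (1-q) w11
#   (lam+mu) w12 = 1 + lam w12 + mu (1-q) w22
#   (lam+mu) w22 = 1 + lam w22 + mu (q w11 + (1-q) w12)
A = np.array([
    [mu * q,       0.0,            0.0],
    [0.0,          mu,            -mu * (1 - q)],
    [-mu * q,     -mu * (1 - q),   mu],
])
w11, w12, w22 = np.linalg.solve(A, np.ones(3))

# Agreement with the closed forms derived in the text.
assert abs(w11 - 1 / (mu * q)) < 1e-12
assert abs(w22 - (3 - q) / (mu * q * (2 - q))) < 1e-12
```

The same direct-solve approach applies for any fixed threshold; the QBD-based Algorithm \ref{alg_poisson} simply exploits the block structure to do it more efficiently.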
We know from the beginning of Section \ref{sec: NRcase} that the position where the tagged customer can join is at most $\lceil x \rceil + 1$ if the other customers use threshold $x$ and the system starts with fewer than $\lceil x \rceil$ customers. If we refer to the highest position that the tagged customer is willing to join, when others use threshold $x$, as the best response, and let $\mathcal{BR}(x)$ denote it, then $\mathcal{BR}(x) = \max\{ j: z^{(x)}_{j,j} \geq 0, \, 1 \leq j \leq \lceil x \rceil +1 \}$. If $R_0$ is large, then there will be values of $x$ for which $z^{(x)}_{\lceil x \rceil +1, \lceil x \rceil +1} \geq 0$ and so $\mathcal{BR}(x) = \lceil x \rceil + 1$. On this part of the domain, $\mathcal{BR}(x)$ is (obviously) an increasing step function. However, as $x$ increases, there must be a value $x^*$ for which $z^{(x^*)}_{\lceil x^* \rceil +1, \lceil x^* \rceil +1} < 0$. To see this, observe that a customer arriving at position $j$ must wait for at least $j$ services and so $w^{(x)}_{\lceil x \rceil +1, \lceil x \rceil +1} \geq (\lceil x \rceil +1)/\mu$ and so when $x > R_0\mu - 1$, \begin{eqnarray} z^{(x)}_{\lceil x \rceil +1, \lceil x \rceil +1} & = & R_0 - w^{(x)}_{\lceil x \rceil +1, \lceil x \rceil + 1} \\ & \leq & R_0 - (\lceil x \rceil +1)/\mu \\ & < & 0. \end{eqnarray} For $x>x^*$, $\mathcal{BR}(x) < \lceil x \rceil + 1$ and, on this part of the domain, Lemma \ref{lemmaWx} ensures that $\mathcal{BR}(x)$ is a monotone decreasing step function. There are now two possibilities: \begin{itemize} \item there is an integer $m$ such that $\mathcal{BR}(m) = m$, or \item there is an integer $m$ such that $\mathcal{BR}(m) = m+1$ and $\mathcal{BR}(m+1) \leq m$. \end{itemize} These are illustrated in Figure \ref{fig:BR}(a), and in Figures \ref{fig:BR}(b) and (c), respectively.
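For $x \in [0,1)$ the best response can be written down explicitly from the closed forms $w_{1,1}^{(x)} = 1/(\mu q)$ and $w_{2,2}^{(x)} = (3-q)/(\mu q(2-q))$ given earlier. A minimal sketch (balking is encoded as $0$, and the parameter values are illustrative):

```python
# Best response for x in [0,1): the tagged customer can join at position 1 or 2.
# BR(x) = max{ j : z_{j,j}^{(x)} = R0 - w_{j,j}^{(x)} >= 0 }, with 0 meaning balk.
def best_response_small_x(R0, mu, q):
    w11 = 1 / (mu * q)
    w22 = (3 - q) / (mu * q * (2 - q))
    if R0 >= w22:
        return 2
    if R0 >= w11:
        return 1
    return 0

mu, q = 0.6, 0.7  # illustrative values; here w11 is about 2.38 and w22 about 4.21
assert best_response_small_x(5.0, mu, q) == 2
assert best_response_small_x(3.0, mu, q) == 1
assert best_response_small_x(2.0, mu, q) == 0
```

For larger thresholds the same maximization applies, with the $w_{j,j}^{(x)}$ values obtained from Algorithm \ref{alg_poisson} instead of closed forms.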
\begin{figure}[h] \subcaptionbox{$\mathcal{BR}(m) = m$}% {\includegraphics[width=0.29\linewidth]{BR1.pdf}} \subcaptionbox{$\mathcal{BR}(m) = m+1,\, \mathcal{BR}(m+1) = m$}% {\includegraphics[width=0.34\linewidth]{BR2.pdf}} \hspace{\fill} \subcaptionbox{$\mathcal{BR}(m) = m+1,\, \mathcal{BR}(m+1) < m$}% {\includegraphics[width=0.34\linewidth]{BR3.pdf}} \caption{Best Response.} \label{fig:BR} \end{figure} For the purpose of presenting the Nash equilibrium, for $m = 1, 2, \cdots$, let $\alpha_m = w_{m,m}^{(m)}, \beta_m = w_{m+1,m+1}^{(m)}$ (see Figure \ref{NEp}). Let $\chi_m$ be the solution to $w_{m+1,m+1}^{(\chi_m)} = R_0$. We prove in the following lemma that $\chi_m$ exists and is unique. \begin{lemma} There exists a unique $\chi_m$ such that $w_{m+1,m+1}^{(\chi_m)} = R_0$. \end{lemma} Proof. For $x \in (m,m+1)$, from Equation \eqref{eq: poisson1}, \begin{equation} w_{m+1,m+1}^{(x)} = \frac{1}{\lambda+\mu} \left( \left( I-P^{(x)} \right)^{-1}\, \bm{e} \right)_{\frac{(m+1)(m+2)}{2}} \,, \end{equation} where $P^{(x)}$ is defined in Appendix \ref{appendix:1.1}. The matrix $P^{(x)}$ is substochastic for any $x$, as the sum of the $\left(\frac{j(j+1)}{2}+1\right)$th row of $P^{(x)}$ is less than $1$ for $j = 1, 2 \, \ldots, \lceil x \rceil + 1$. From a Corollary to the Perron-Frobenius Theorem (\citet[page 8]{S06}), $|r^{(x)}| < 1 $ for any eigenvalue $r^{(x)}$. Thus, any real eigenvalue $1-r^{(x)}$ of $I-P^{(x)}$ must be greater than 0. Hence $|I-P^{(x)}| \ne 0$. Next, we write $x$ as $m+p$. From its expression, $P^{(x)}$ is continuous in $p$, so is $I-P^{(x)}$. Since the entries of the inverse matrix can be written as rational functions of the entries of the original matrix, and the denominators of these rational functions are non-zero for all $x$, $w_{m+1,m+1}^{(x)}$ is continuous in $p$. Hence, $w_{m+1,m+1}^{(x)}$ is continuous in $x$ for $x \in (m,m+1)$. 
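Since $w_{m+1,m+1}^{(x)}$ is continuous on $(m,m+1)$, as just shown, and strictly increasing in $x$ by Lemma \ref{lemmaWx}, the root $\chi_m$ can be located numerically by bisection. A minimal sketch, in which `sojourn_time` is a hypothetical stand-in for an evaluation of $w_{m+1,m+1}^{(x)}$ (in practice obtained via Equation \eqref{eq: poisson1}); the toy function used below is purely illustrative:

```python
def find_chi(sojourn_time, m, R0, tol=1e-10):
    """Bisection for chi_m in (m, m+1) solving sojourn_time(x) == R0,
    assuming sojourn_time is continuous and strictly increasing there."""
    lo, hi = float(m), float(m + 1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sojourn_time(mid) < R0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy monotone stand-in for w_{m+1,m+1}^{(x)} on (1, 2), for testing only:
# 3 + (x-1)^2 equals 3.25 exactly at x = 1.5.
chi = find_chi(lambda x: 3.0 + (x - 1.0) ** 2, m=1, R0=3.25)
assert abs(chi - 1.5) < 1e-8
```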
Also, it follows from Lemmas \ref{lemmaWI} and \ref{lemmaWx} that $\beta_m > \alpha_m$ and $\alpha_{m+1} > \beta_m$, respectively. If $\beta_m < R_0 < \alpha_{m+1}$, then \[ \beta_m = w_{m+1,m+1}^{(m)} < R_0 < w_{m+1,m+1}^{(m+1)} = \alpha_{m+1} \,. \] Since $w_{m+1,m+1}^{(x)}$ is continuous and strictly increasing in $x \in (m,m+1)$, there is a unique $\chi_m \in (m,m+1)$ such that $w_{m+1,m+1}^{(\chi_m)} = R_0$. We describe the Nash equilibrium strategy for the feedback queueing system in the following theorem. \begin{theorem}\label{Theorem:NE} There exists an equilibrium threshold strategy with threshold value \begin{equation}\label{NE} x_e= \begin{cases} 0 & \text{if} \ R_0 < \alpha_1 \,,\\[+6pt] \chi_0 & \text{if} \ R_0 = \alpha_1 \,,\\[+6pt] m & \text{if} \ \alpha_m \leq R_0 \leq \beta_m \quad m = 1,2, \ldots,\\[+6pt] \chi_m & \text{if} \ \beta_m < R_0 < \alpha_{m+1} \quad m = 1,2,\ldots \,, \end{cases} \end{equation} where $\chi_0 \in [0,1]$. \end{theorem} \begin{figure} \centering \includegraphics[width = .7\textwidth]{NE_policy.pdf} \caption{The Nash equilibrium threshold policy.} \centering \label{NEp} \end{figure} Proof. A customer will choose to join the queue if and only if her reward covers her expected sojourn cost. \begin{itemize} \item When $R_0< \alpha_1$, even if the tagged customer is the only one in the system, her expected sojourn time satisfies $\alpha_1 > R_0$. Thus, her best option is balking. The same analysis works for the other customers. So balking is the Nash equilibrium strategy. \item When $R_0= \alpha_1$, if the other customers are all using threshold $\chi_0 \in \left[0,1\right]$, there is at most one customer in the system when the tagged customer arrives. For the tagged customer, when she observes one person in the system, her best response is balking, since her expected sojourn time satisfies $\beta_1 > \alpha_1 = R_0$. When she observes that the system is empty, her expected payoff is zero. Thus, she is indifferent between joining an empty system and balking.
Actually, she gains nothing by either strategy. The same analysis works for any other customer, so any threshold strategy with threshold value $\chi_0 \in \left[0,1\right]$ is a Nash equilibrium strategy. \item When $\alpha_m \leq R_0 \leq \beta_m$, the tagged customer's expected sojourn time satisfies $w_{m,m}^{(m)} \leq R_0 \leq w_{m+1,m+1}^{(m)}$, so \begin{equation} z_{m+1,m+1}^{(m)} \leq 0 \leq z_{m,m}^{(m)}\,. \end{equation} Hence the tagged customer's best response is $m$ when others adopt threshold $m$. So threshold $m$ is a Nash equilibrium strategy. \item When $\beta_m < R_0 < \alpha_{m+1}$, if other customers all adopt threshold $\chi_m$, the tagged customer gains nothing when she joins at position $m+1$, so her best response is any threshold strategy with threshold value between $m$ and $m+1$ (including $\chi_m$). Thus, $\chi_m$ is the Nash equilibrium threshold. \hfill $\square$ \end{itemize} From Theorem \ref{Theorem:NE}, $m$ is either the Nash equilibrium threshold or the integer part of it. The Nash equilibrium threshold is not an integer when $R_0 \in (\beta_m, \alpha_{m+1})$. Figure \ref{fig:BR}(b) represents this case with $\mathcal{BR}(m) = m+1$ and $\mathcal{BR}(m+1) = m$. Figure \ref{fig:BR}(c) depicts the case with $\mathcal{BR}(m) = m+1$ and $\mathcal{BR}(m+1) = m-1$. In both cases, the tagged customer is indifferent between $m$ and $m+1$ when others use a threshold between $m$ and $m+1$, and the conclusion of Theorem \ref{Theorem:NE} holds. Intuitively speaking, a Nash equilibrium is said to be {\em evolutionarily stable} if it cannot be invaded by any alternative strategy that is initially rare (see \citet{S86}). \begin{definition} \textbf{Evolutionarily stable strategy (ESS).} A Nash equilibrium strategy $x$ is said to be an ESS if either (i) $x$ is the unique best response against itself or (ii) for any $x' \ne x$ which is a best response against $x$, $x$ is better than $x'$ as a response to $x'$ itself.
That is, with $U(x',x)$ denoting a customer's expected payoff when she uses $x'$ and others use $x$, for all $x'\not = x$, either \begin{align} &U(x, x) > U(x', x)\,, \, \mbox{or} \\ &U(x, x) = U(x', x) \quad \mbox{and} \quad U(x, x') > U(x', x') \,. \end{align} \end{definition} To show that the Nash equilibrium strategy with threshold value $x_e$ is an ESS, we first define the total expected payoff of a tagged customer who adopts threshold $x$ when the other customers all adopt threshold $x'$: \begin{equation} \label{eq:TotalEP} U(x,x'): = \sum_{i=1}^{\lfloor x \rfloor \wedge (\lceil x' \rceil + 1)} \pi^{(x')}_{i-1} \, z_{i,i}^{(x')} + (x - \lfloor x \rfloor) \, \pi^{(x')}_{\lfloor x \rfloor} \, z_{\lfloor x \rfloor+1,\lfloor x \rfloor+1}^{(x')} \, \mathbbm{1}_{\{\lfloor x \rfloor < \lceil x' \rceil + 1\}} \,, \end{equation} where $\pi^{(x)}_j$, $0 \leq j \leq \lceil x \rceil$, is the stationary distribution of the number of customers in the system when everyone adopts threshold $x$. We prove that the $x_e$-threshold strategy is an ESS in the following corollary. \begin{corollary} The threshold strategy with threshold value $x_e$ is an ESS when $R_0 \neq \alpha_1$. \end{corollary} Proof. We have already proved that when the other customers adopt threshold strategy $x_e$, there is no better strategy than $x_e$ for the tagged customer, that is, $U(x_e, x_e) \geq U(x, x_e)$. \begin{itemize} \item When $R_0< \alpha_1$, $U(0,0) = 0 > U(x,0)$ for any $x>0$. Thus, balking is an ESS. \item When $R_0= \alpha_1$, for any $0 \leq \chi_0,\chi_0'\leq 1$, $U(\chi_0,\chi_0) = U(\chi_0',\chi_0) = 0$ and $U(\chi_0,\chi_0') = U(\chi_0',\chi_0') = 0$. Thus, $\chi_0 \in [0,1]$ is not an ESS. \item When $\alpha_m \leq R_0 \leq \beta_m$, $U(m,m) > U(m',m)$ for any $m' \ne m$. Thus, $m$ is an ESS.
\item When $\beta_m < R_0 < \alpha_{m+1}$, it follows from the definition of $\chi_m$ that $z_{m+1,m+1}^{(\chi_m)} = R_0 - w_{m+1,m+1}^{(\chi_m)} = 0$, so for any $\chi_m' \in [m, m+1]$ \begin{equation} U(\chi_m', \chi_m) = U(\chi_m, \chi_m) = \sum_{i=1}^{m} \pi^{(\chi_m)}_{i-1} \, z_{i,i}^{(\chi_m)} \,. \end{equation} Furthermore, when $m \leq \chi_m' <\chi_m < m+1$, it follows from the fact that $z_{m+1,m+1}^{(x)}$ is decreasing in $x$ that \begin{equation} z_{m+1,m+1}^{(\chi_m')} >0 \,. \end{equation} Since $\lfloor \chi_m' \rfloor = \lfloor \chi_m \rfloor = m$, the first summations in $U(\chi_m', \chi_m')$ and $U(\chi_m, \chi_m')$ are equal. However, $(\chi_m' - \lfloor \chi_m' \rfloor) < (\chi_m - \lfloor \chi_m \rfloor)$, hence the second term in $U(\chi_m',\chi_m') $ is less than the second term in $U(\chi_m, \chi_m')$. So $U(\chi_m',\chi_m') < U(\chi_m,\chi_m')$. Similarly, when $\chi_m < \chi_m' \leq m+1$, $z_{m+1,m+1}^{(\chi_m')} <0$ but $(\chi_m - \lfloor \chi_m \rfloor) < (\chi_m' - \lfloor \chi_m' \rfloor)$. Following similar lines, we have $U(\chi_m',\chi_m') < U(\chi_m,\chi_m')$. Thus, $\chi_m$ is an ESS. $ \hfill \square $ \end{itemize} \begin{figure} \centering \includegraphics[width=10cm]{fbq_illustration_R.pdf} \caption{An M/M/1 feedback queue with strategic customers when reneging is allowed.} \label{RC} \end{figure} \section{The Case When Customers Can Renege} \label{sec: Rcase} Every time a customer rejoins at the end of the queue due to a service failure, it is possible that her conditions have deteriorated with time. Hence customers might have an incentive to {\it renege}, that is depart from the queue, when their service fails. Figure \ref{RC} is an illustration of an $M/M/1$ feedback queue when reneging is allowed. In this section, we focus on the Nash equilibrium threshold when the customers are allowed to renege, and compare it with the equilibrium threshold when they cannot renege. 
In order to make comparisons between the two cases, we abbreviate the non-reneging case as the $N$-case and the reneging case as the $R$-case. \subsection{The expected payoff} \label{sec: Rcase:EP} In our model, every time a customer rejoins the end of the queue, she faces a similar situation as that when she first chooses to join. Thus, we restrict our attention to policies where the customer must use the same threshold when she chooses to balk or renege. In contrast to Section \ref{sec: NRcase}, the expected payoff of the tagged customer is affected by her future reneging decisions. In particular, if she chooses to renege, she will not receive the reward $R_0$. So we use $\hat{z}^{(x_{tag}, x)}_{i,j}$ to denote the tagged customer's expected payoff, which is the difference between the expected reward and her expected sojourn cost, given that she is at position $i$ and uses threshold strategy $x_{tag}$, there are $j$ customers in the system, and the other customers all adopt threshold $x$. It will turn out that the relevant value of $x_{tag}$ that we need to consider for the purpose of calculating the Nash equilibrium occurs when $x _{tag} = \lfloor x \rfloor +1$. This satisfies the equation \begin{align} & \hat{z}_{i,j}^{(\lfloor x \rfloor +1,x)} = -\frac{1}{\lambda+\mu} \\ &+ \frac{\lambda}{\lambda+\mu} \, \left( \hat{z}_{i,j+1}^{(\lfloor x \rfloor +1,x)} \, \mathbbm{1}_{\lbrace j < \lfloor x \rfloor \rbrace} +\left(p \, \hat{z}_{i,j+1}^{(\lfloor x \rfloor +1,x)} + (1-p) \, \hat{z}_{i,j}^{(\lfloor x \rfloor +1,x)} \right) \, \mathbbm{1}_{\lbrace j = \lfloor x \rfloor \rbrace} \right. \notag \\ & \left. + \hat{z}_{i,j}^{(\lfloor x \rfloor +1,x)} \, \mathbbm{1}_{\lbrace j = \lfloor x \rfloor +1 \rbrace} \right) \notag + \frac{\mu}{\lambda+\mu} \left( \left(q \, R_0 + (1-q) \, \hat{z}_{j,j}^{(\lfloor x \rfloor +1,x)} \right) \, \mathbbm{1}_{\lbrace i = 1 \rbrace} \right. \notag \\ & \left. 
+ \left((q + (1-q) \, (1-p) \, \mathbbm{1}_{\lbrace j=\lfloor x \rfloor+1 \rbrace}) \, \hat{z}_{i-1,j-1}^{(\lfloor x \rfloor +1,x)} + (1-q) \, (1-(1-p) \mathbbm{1}_{\lbrace j=\lfloor x \rfloor+1 \rbrace}) \, \hat{z}_{i-1,j}^{(\lfloor x \rfloor +1,x)} \right) \right) \notag \,, \end{align} where $p$ is the fractional part of $x$ as defined in Definition \ref{D1}. Hence, we can calculate $\hat{z}_{i,j}^{(\lfloor x \rfloor +1,x)}$ via Poisson's equation \begin{equation} \label{eq: NER1} \left(I-\hat{P}^{(\lfloor x \rfloor +1,x)} \right) \, \bm{\hat{z}}^{(\lfloor x \rfloor +1,x)} \, = \, \bm{{g}} \,, \end{equation} where the matrix $\hat{P}^{(\lfloor x \rfloor +1,x)}$ and the vector $\bm{g}$ are defined in Appendix \ref{appendix: 2.1}, and \begin{equation} \bm{\hat{z}}^{(\lfloor x \rfloor +1,x)} = (\hat{z}_{1,1}^{(\lfloor x \rfloor +1,x)}, \hat{z}_{1,2}^{(\lfloor x \rfloor +1,x)}, \hat{z}_{2,2}^{(\lfloor x \rfloor +1,x)}, \cdots, \hat{z}_{\lfloor x \rfloor,\lfloor x \rfloor+1}^{(\lfloor x \rfloor +1,x)}, \hat{z}_{\lfloor x \rfloor+1,\lfloor x \rfloor+1}^{(\lfloor x \rfloor +1,x)})^T \,. \end{equation} In Section \ref{sec: NRcase: NashE}, we derived the Nash equilibrium threshold value by finding the $m$ that satisfies one of the cases in Figure \ref{fig:BR}. In the $N$-case, this means that only $z^{(\lfloor x \rfloor)}_{\lfloor x \rfloor, \lfloor x \rfloor}$, $z^{(\lfloor x \rfloor)}_{\lfloor x \rfloor+1, \lfloor x \rfloor+1}$ and $z^{(x)}_{\lfloor x \rfloor+1, \lfloor x \rfloor+1}$ matter in calculating the Nash equilibrium, although the tagged customer can join at position $\lceil x \rceil +1$.
Similarly, in the $R$-case we only care about $\hat{z}^{(\lfloor x \rfloor, \lfloor x \rfloor)}_{\lfloor x \rfloor, \lfloor x \rfloor}$, $\hat{z}^{(\lfloor x \rfloor+1, \lfloor x \rfloor)}_{\lfloor x \rfloor+1, \lfloor x \rfloor+1}$, and $\hat{z}^{(\lfloor x \rfloor+1, x)}_{\lfloor x \rfloor+1, \lfloor x \rfloor+1}$, so we calculate $\hat{z}^{(\lfloor x \rfloor+1, \lfloor x \rfloor)}_{i,j}$ only for $1 \leq i \leq j \leq \lfloor x \rfloor+1$. In the $R$-case, when others use threshold $x$ and the tagged customer uses threshold $x_{tag} \geq \lceil x \rceil$, the queue size is never greater than $\lceil x \rceil$ at a time point where the tagged customer's service has failed, so the tagged customer will never renege after joining if she uses $x_{tag}$, even though other customers may do so. Hence the calculation of $\bm{\hat{z}}^{(x_{tag}, x)}$ when $x_{tag} \geq \lceil x \rceil$ can be transferred to the calculation of the expected sojourn time. If we define $\hat{w}^{(x_{tag}, x)}_{i,j}$ as the expected sojourn time of the tagged customer in the $R$-case, given that she is at position $i$ and uses threshold strategy $x_{tag}$, there are $j$ customers in the system, and the other customers all adopt threshold $x$, then when $x_{tag} = \lfloor x \rfloor +1$, {\begin{align} \bm{\hat{w}}^{(\lfloor x \rfloor +1, x)}=\left( \hat{w}_{1,1}^{(\lfloor x \rfloor +1,x)}, \hat{w}_{1,2}^{(\lfloor x \rfloor +1,x)}, \hat{w}_{2,2}^{(\lfloor x \rfloor +1,x)}, \cdots, \hat{w}_{\lfloor x \rfloor,\lfloor x \rfloor+1}^{(\lfloor x \rfloor +1,x)}, \hat{w}_{\lfloor x \rfloor+1,\lfloor x \rfloor+1}^{(\lfloor x \rfloor +1,x)}\right)^T \,, \end{align}} satisfies a version of Poisson's equation similar to Equation \eqref{eq: poisson1}, \begin{equation} \label{eq: poisson2} \left(I-\hat{P}^{(\lfloor x \rfloor+1, x)}\right) \, \bm{\hat{w}}^{(\lfloor x \rfloor+1, x)} = \frac{1}{\lambda+\mu} \, \bm{e} \,, \end{equation} and $\bm{\hat{z}}^{(\lfloor x \rfloor +1, x)} = R_0 \, \bm{e}-\bm{\hat{w}}^{(\lfloor x \rfloor +1, x)}$.
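For a fixed threshold pair, Equation \eqref{eq: poisson2} is again a plain linear solve followed by the payoff relation $\bm{\hat{z}} = R_0\,\bm{e} - \bm{\hat{w}}$. A minimal sketch, with a small placeholder substochastic matrix standing in for $\hat{P}^{(\lfloor x \rfloor+1, x)}$ (the true matrix is defined in the appendix):

```python
import numpy as np

lam, mu, R0 = 0.4, 0.6, 5.0  # illustrative parameters

# Placeholder substochastic transition matrix (NOT the true P-hat from the
# appendix; it only illustrates the shape of the computation).
P_hat = np.array([
    [0.4, 0.3, 0.0],
    [0.2, 0.4, 0.3],
    [0.0, 0.3, 0.4],
])

# Solve (I - P_hat) w_hat = e / (lam + mu) for the expected sojourn times ...
w_hat = np.linalg.solve(np.eye(3) - P_hat, np.ones(3) / (lam + mu))
# ... and the expected payoffs follow as z_hat = R0 * e - w_hat.
z_hat = R0 * np.ones(3) - w_hat

assert np.all(w_hat > 0)          # sojourn times are positive
assert np.allclose(z_hat + w_hat, R0)
```

Substochasticity of $\hat{P}$ guarantees that $I-\hat{P}$ is nonsingular, so the solve is well defined, mirroring the argument used for $P^{(x)}$ above.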
Similar to the $N$-case, an equilibrium strategy exists and can be computed using Algorithm \ref{alg_poisson}. We compare ${\hat{z}}^{(\lfloor x \rfloor +1,x)}_{j,j}$ and ${z}^{(x)}_{j,j}$ for $j = 1, \ldots, \lfloor x \rfloor+1$ in Lemma \ref{lemma:CompareRNR1}, the proof of which appears in Appendix \ref{appendix:lemma3}. \begin{lemma} \label{lemma:CompareRNR1} When $\lfloor x \rfloor < x$, \begin{align} \label{eq:compare1} \hat{z}^{(\lfloor x \rfloor +1,x)}_{j,j} \geq z^{(x)}_{j,j} \quad \text{for} \quad j = 1, \ldots, \lfloor x \rfloor+1 \,. \end{align} When $\lfloor x \rfloor = x$, \begin{align} \label{eq:compare2} \hat{z}^{(\lfloor x \rfloor, \,x )}_{j, j} = \hat{z}^{(\lfloor x \rfloor +1,x)}_{j,j} = z^{(x)}_{j,j} \quad \text{for} \quad j = 1, \ldots, \lfloor x \rfloor \,. \end{align} \end{lemma} One interpretation of Lemma \ref{lemma:CompareRNR1} is as follows. When the other customers adopt threshold $x$, for a customer who never reneges, her expected payoff is higher if the other customers are allowed to renege. When $x = \lfloor x \rfloor > 0$, the number of customers in the system never exceeds $\lfloor x \rfloor$ if the tagged customer joins at a position less than $\lfloor x \rfloor+1$; if the tagged customer joins at the $\left(\lfloor x \rfloor+1\right)$th position, the customer who is in service when she joins will leave the system with probability $1$: either the service will complete successfully or the customer will renege when the service fails. Thus if the tagged customer joins at position $\lfloor x \rfloor+1$, then she is better off when others can renege, but there is no difference between the $N$-case and the $R$-case when the position at which the tagged customer joins is less than $\lfloor x \rfloor+1$. \textbf{Remark}. We can prove that the strict inequality holds in Lemma \ref{lemma:CompareRNR1} when $x > \lfloor x \rfloor$.
Also, an argument similar to that in Lemmas \ref{lemmaWI} and \ref{lemmaWx} can be used to show that for $1 \leq i \leq j \leq \lfloor x \rfloor+1$, $\hat{z}_{i,j}^{(\lfloor x \rfloor+1,x)}$ is strictly decreasing in $j$, and in $x$ when $x > 1$. \subsection{The Nash equilibrium and its comparison with the $N$-case} \label{sec: Rcase:NashE} As in the $N$-case, to work out the Nash equilibrium in the $R$-case, we need to draw the best response plot and investigate the intersection point of $\mathcal{BR}(x)$ and $x$. When $\hat{z}^{(m+1, m)}_{m+1,m+1} \leq 0 \leq \hat{z}^{(m,m)}_{m,m}$, the tagged customer's best response when others adopt $m$ is also $m$, which is the case in Figure \ref{fig:BR}(a). When $\hat{z}^{(m+1, x)}_{m+1,m+1} = 0$ with $x \in (m,m+1)$, the tagged customer is indifferent between $m$ and $m+1$ when others use threshold $x$, which is the case in Figure \ref{fig:BR}(b). Before we work out the Nash equilibrium strategy in the $R$-case and compare it with the $N$-case, we first define $NE(R_0, \lambda, \mu, q)$ and $\hat{NE}(R_0, \lambda, \mu, q)$ as the Nash equilibrium thresholds under the parameter set $R_0, \lambda, \mu, q$ in the $N$-case and the $R$-case, respectively, and use $x_e$ and $\hat{x}_e$ for short when they correspond to the same parameter set. Similar to our use of $\alpha_m$ and $\beta_m$ in the $N$-case, we let $\gamma_m := \hat{w}_{m+1,m+1}^{(m+1,m)}$ to help explain the Nash equilibrium in the $R$-case, which is described in the following theorem. \begin{theorem} \label{theorem:NER} The Nash equilibrium threshold value when reneging is allowed is greater than or equal to that when reneging is not allowed. \end{theorem} Proof. There are three scenarios. \begin{itemize} \item When $R_0 \in [\alpha_m, \gamma_m]$, then \begin{align} & \hat{z}_{m+1,m+1}^{(m+1,m)}= R_0 - \hat{w}_{m+1,m+1}^{(m+1,m)} = R_0-\gamma_m \leq 0 \\ & z_{m,m}^{(m)} = R_0 - w_{m,m}^{(m)} = R_0-\alpha_m \geq 0 \,.
\end{align} Hence $z_{m+1,m+1}^{(m)} < \hat{z}_{m+1,m+1}^{(m+1,m)} \leq 0 \leq \hat{z}_{m,m}^{(m,m)} = z_{m,m}^{(m)}$ with the first inequality and the equality following from Lemma \ref{lemma:CompareRNR1}. The tagged customer's best response is $m$ if others' strategy is $m$ in both the $N$-case and the $R$-case, hence $\hat{x}_e = x_e = m$. This case is depicted in Figure \ref{NER}(a). \item When $R_0 \in (\gamma_m, \beta_m]$, then \begin{align} & \hat{z}_{m+1,m+1}^{(m+1,m)}= R_0 - \hat{w}_{m+1,m+1}^{(m+1,m)} = R_0-\gamma_m > 0 \\ & z_{m+1,m+1}^{(m)} = R_0 - w_{m+1,m+1}^{(m)} = R_0-\beta_m \leq 0 \,. \end{align} Hence $\hat{z}_{m+1,m+1}^{(m+1,m+1)} < z_{m+1,m+1}^{(m)} \leq 0 < \hat{z}_{m+1,m+1}^{(m+1,m)} < \hat{z}_{m,m}^{(m+1,m)} = z_{m,m}^{(m)}$ with the first inequality, the last inequality and the equality following from Lemma \ref{lemma:CompareRNR1}. The tagged customer's best response is $m$ if others' strategy is $m$ in the $N$-case. In the $R$-case, $\hat{x}_e = \lbrace x: \, \hat{z}_{m+1,m+1}^{(m+1, x)} = 0 \rbrace$ since the tagged customer is indifferent between joining at position $m+1$ and balking if others adopt threshold $\hat{x}_e$. In this case, $\hat{x}_e > x_e = m$, and it is depicted in Figure \ref{NER}(b). \item When $R_0 \in (\beta_m, \alpha_{m+1})$, then \begin{equation} \hat{z}_{m+1,m+1}^{(m+1,m+1)} = z_{m+1,m+1}^{(m+1)} < 0 < z_{m+1,m+1}^{(m)} < \hat{z}_{m+1,m+1}^{(m+1,m)} \,, \end{equation} with the first equality and the last inequality following from Lemma \ref{lemma:CompareRNR1}. The Nash equilibrium strategies are \[ x_e = \lbrace x : \, z_{m+1,m+1}^{(x)} = 0 \rbrace \qquad \hat{x}_e = \lbrace x : \hat{z}_{m+1,m+1}^{(m+1, x)} = 0 \rbrace \,, \] which are mixed in both cases. It follows from Equation \eqref{eq:compare1} that $\hat{x}_e > x_e$. Note that $x_e$ here is the same as calculated in Theorem \ref{Theorem:NE}. This case is depicted in Figure \ref{NER}(c).
\hfill $\square$ \end{itemize} \begin{figure}[h] \centering{\includegraphics[width=0.36\linewidth]{NeTag.pdf}}\\ \subcaptionbox{$R_0 \in [\alpha_m, \gamma_m]$}% {\includegraphics[width=0.32\linewidth]{NE-reneging-3.pdf}} \hspace{\fill} \subcaptionbox{$R_0 \in (\gamma_m, \beta_m]$}% {\includegraphics[width=0.32\linewidth]{NE-reneging-2.pdf}} \hspace{\fill} \subcaptionbox{$R_0 \in (\beta_m, \alpha_{m+1})$}% {\includegraphics[width=0.32\linewidth]{NE-reneging-1.pdf}} \caption{An illustration of the Nash equilibrium threshold comparison.} \label{NER} \end{figure} \section{Two Paradoxes} \label{sec:Paradox} In the $N$-case, every customer remains in the system until she successfully completes her service and receives the reward $R_0$. Increasing $R_0$ can increase customers' incentive to join, but it also makes the system more crowded. In this situation, does everyone become better off when the reward $R_0$ increases? To answer this question, we observe that there are parameter settings where the equilibrium expected payoff decreases as $R_0$ increases. This paradoxical behaviour is discussed in the following. \begin{paradox} \label{paradox1} In the $N$-case, let $x_k = NE(r_k, \lambda, \mu, q)$, $k= 1,2$. Then for $m = 1,2$, $z^{(x_1)}_{m,m} > z^{(x_2)}_{m,m}$ if $\beta_m < r_1 < r_2 < \alpha_{m+1}$, where $m$ is the integer part of the Nash equilibrium threshold. In other words, if $R_0 \in (\beta_m, \alpha_{m+1})$, increasing $R_0$ will make everyone joining at position $m$ worse off. \end{paradox} Proof. As in Definition \ref{D1}, $p$ is the fractional part of $x$. When $x \in (1,2)$, \begin{equation} w_{2,2}^{(x)} - w_{1,1}^{(x)} = \frac{\mu+\lambda p}{\mu (-\mu q^2 + 2\mu q + \lambda p)} = \frac{1}{\mu \, \left(1-\frac{\mu\,(1-q)^2}{\mu+\lambda \, p}\right)} \,, \end{equation} which is decreasing in $p$.
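This monotonicity of the difference in $p$ is easy to confirm numerically; a quick sketch with the illustrative parameter values $\lambda = 0.4$, $\mu = 0.6$, $q = 0.7$ used elsewhere in the paper:

```python
lam, mu, q = 0.4, 0.6, 0.7  # illustrative parameters

# Difference w_{2,2}^{(x)} - w_{1,1}^{(x)} for x = 1 + p, p in (0,1),
# using the closed form from the proof above.
def diff_w(p):
    return (mu + lam * p) / (mu * (-mu * q**2 + 2 * mu * q + lam * p))

values = [diff_w(p) for p in (0.1, 0.3, 0.5, 0.7, 0.9)]
# Strictly decreasing in p, as claimed.
assert all(a > b for a, b in zip(values, values[1:]))
```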
When $x \in (2,3)$, \begin{equation} w_{3,3}^{(x)} - w_{2,2}^{(x)} = \frac{1}{\mu (1-\mu^2 (1-q)^2\, f(p))} \,, \end{equation} where $\displaystyle f(p)=\frac{\lambda +2 \lambda p q+\mu q^3-3 \mu q^2-\lambda q+3 \mu q}{(\mu +\lambda p) \left(\lambda \mu +\lambda ^2 p+\lambda \mu p q+\mu ^2 q^3-3 \mu ^2 q^2+3 \mu ^2 q\right)}$, which is decreasing in $p$. See Appendix \ref{appendix:fp} for the derivative of the function $f(p)$. Hence $w_{3,3}^{(x)} - w_{2,2}^{(x)}$ is decreasing in $p$. From Theorem \ref{Theorem:NE}, if $\beta_m < r_1 < r_2 < \alpha_{m+1}$, then $x_k$, which is the Nash equilibrium threshold when $R_0 = r_k$, satisfies \begin{equation} w_{m+1,m+1}^{(x_k)} = r_k \qquad x_k \in (m,m+1) \qquad k = 1,2 \,. \end{equation} Thus, for $m =1,2$, \begin{equation} \Scale[0.9]{ z_{m,m}^{(x_1)} = r_1 - w_{m,m}^{(x_1)} =w_{m+1,m+1}^{(x_1)} - w_{m,m}^{(x_1)} > w_{m+1,m+1}^{(x_2)} - w_{m,m}^{(x_2)} = r_2 - w_{m,m}^{(x_2)} = z_{m,m}^{(x_2)} \,. } \end{equation} \hfill $\square$ In Paradox \ref{paradox1}, we have proved that if $R_0 \in (\beta_m, \alpha_{m+1})$ for $m = 1,2$, increasing $R_0$ makes everyone joining at position $m$ worse off. We conjecture that this phenomenon holds for any $m$. Our numerical experience indicates that this is the case. However, the proof has eluded us. We have proved in the previous section that customers have a higher incentive to join the system if they are allowed to renege later. However, with more customers joining, the system can be more crowded. So we are interested in the question: when customers are given the right to leave, do they become better off? To answer this question, we first need to work out the equilibrium expected payoff in the $R$-case. In contrast to the $N$-case, customers may renege before they successfully complete the service in the $R$-case, thus their expected payoff cannot be calculated as the difference between $R_0$ and their expected sojourn cost.
By similar reasoning to Equation \eqref{eq:wij}, it follows that \begin{align} &\Scale[0.97]{\hat{z}_{i,j}^{(x,x)} = -\frac{1}{\lambda+\mu}}\\ &\Scale[0.97]{+ \frac{\lambda}{\lambda+\mu} \, \left( \hat{z}_{i,j+1}^{(x,x)} \, \mathbbm{1}_{\lbrace j < \lfloor x \rfloor \rbrace} + \left(p \, \hat{z}_{i,j+1}^{(x,x)} + (1-p) \, \hat{z}_{i,j}^{(x,x)}\right) \, \mathbbm{1}_{\lbrace j = \lfloor x \rfloor \rbrace} +\hat{z}_{i,j}^{(x,x)} \, \mathbbm{1}_{\lbrace j = \lfloor x \rfloor+1 \rbrace} \right) \notag } \\ & \,\Scale[0.97]{+ \frac{\mu}{\lambda+\mu} \, \left[\left( q \, R_0 + (1-q) \, (1-(1-p) \mathbbm{1}_{\lbrace j=\lfloor x \rfloor+1 \rbrace}) \, \hat{z}_{j,j}^{(x,x)} \right) \, \mathbbm{1}_{\lbrace i = 1 \rbrace} \right. }\\ &\Scale[0.97]{\left. \, + \left( (q + (1-q) \, (1-p) \, \mathbbm{1}_{\lbrace j=\lfloor x \rfloor+1 \rbrace}) \, \hat{z}_{i-1,j-1}^{(x,x)} + (1-q) \, (1-(1-p) \mathbbm{1}_{\lbrace j=\lfloor x \rfloor+1 \rbrace}) \, \hat{z}_{i-1,j}^{(x,x)} \right) \, \mathbbm{1}_{\lbrace i > 1 \rbrace}\right] \notag .} \end{align} Thus, we can obtain $\hat{z}_{i,j}^{(x,x)}$ via Poisson's equation \begin{equation} \left(I-\hat{P}^{(x,x)} \right) \, \bm{\hat{z}}^{(x,x)} \, = \, \bm{{g}} \,, \end{equation} where $\hat{P}^{(x,x)}$ is defined in Appendix \ref{appendix: 3.1}, and \begin{align} \begin{split} &\bm{\hat{z}}^{(x,x)} = (\hat{z}_{1,1}^{(x,x)}, \hat{z}_{1,2}^{(x,x)}, \hat{z}_{2,2}^{(x,x)}, \hat{z}_{1,3}^{(x,x)}, \hat{z}_{2,3}^{(x,x)}, \hat{z}_{3,3}^{(x,x)},\cdots, \hat{z}_{\lceil x \rceil,\lceil x \rceil+1}^{(x,x)}, \hat{z}_{\lceil x \rceil+1,\lceil x \rceil+1}^{(x,x)})^T \,. \end{split} \end{align} We are interested in the expected payoff when every customer uses $\hat{x}_e$. In Lemma \ref{lemma:CompareRNR1}, we have proved that if $\hat{x}_e = \lfloor \hat{x}_e \rfloor$, then $\bm{\hat{z}}^{(\hat{x}_e,\hat{x}_e)}=\bm{{z}}^{(x_e)}$. In the following, we prove that if $\hat{z}^{(m+1,\hat{x}_e)}_{m+1,m+1} = 0$, then $\bm{\hat{z}}^{(\hat{x}_e,\hat{x}_e)} = \bm{\hat{z}}^{(m+1,\hat{x}_e)}$.
\begin{lemma} \label{lemma:mm1same} If $\hat{z}^{(m+1,\hat{x}_e)}_{m+1,m+1} = 0$, then $\hat{z}^{(\hat{x}_e,\hat{x}_e)}_{i,j} = \hat{z}^{(m+1,\hat{x}_e)}_{i,j}$ for any $1 \leq i \leq j \leq m+1$. \end{lemma} Proof. First, when $\hat{z}^{(m+1,\hat{x}_e)}_{m+1,m+1} = 0$, the tagged customer is indifferent between joining or not joining at position $m+1$. In other words, joining with any probability at position $m+1$ will result in a zero expected payoff for her. Hence, $\hat{z}^{(x,\hat{x}_e)}_{m+1,m+1} = 0$ for any $x \in [m, m+1]$, including $\hat{x}_e$. For a general state $(i,j)$, consider two queues in which the other customers use threshold $\hat{x}_e$ and the tagged customer is in state $(i,j)$: she uses threshold $m+1$ in queue 1 and $\hat{x}_e$ in queue 2. By coupling the customer arrival processes, the joining decisions, the service processes and the service success outcomes for every customer, including the tagged one, each customer arrives at, and joins or balks from, the two queues at the same times, and the customer in service completes her service and rejoins or leaves both queues at the same times, until the first time the tagged customer's service fails when the queue size, including her, is $m+1$. When this happens, $\hat{z}^{(x, \hat{x}_e)}_{m+1,m+1} = 0$ for any $x \in [m,m+1]$, because $\hat{z}^{(m+1,\hat{x}_e)}_{m+1,m+1} = 0$. Hence, the tagged customer in the two queues either follows exactly the same sample path, or reaches the state $(m+1, m+1)$, where her remaining expected payoff is $0$ regardless of her decision. So the tagged customer receives the same expected payoff in both queues. \hfill $\square$ When $R_0 \in [\alpha_m, \gamma_m]$, we have $x_e = \hat{x}_e=m$, so $z^{(m)}_{i,i} = \hat{z}^{(m,m)}_{i,i}$, and the stationary distributions in the $N$-case and the $R$-case are the same. Hence there is no difference between the two cases.
When $R_0 \in (\gamma_m, \alpha_{m+1}]$, we observe that both the equilibrium expected payoff and the total expected payoff decrease when reneging is allowed. For $R_0 \in (\gamma_m, \beta_m]$, we prove this paradoxical behaviour in the following. \begin{paradox} \label{paradox2} If $R_0 \in (\gamma_m, \beta_m]$, then \[ \hat{z}_{i,i}^{(\hat{x}_e,\hat{x}_e)} < z_{i,i}^{({x}_e)} \quad \text{for } i = 1, \cdots, m \,. \] Furthermore, the total expected payoff under equilibrium satisfies \[ \sum_{i = 1}^{m+1} \hat{\pi}^{(\hat{x}_e)}_{i-1} \, \hat{z}_{i,i}^{(\hat{x}_e,\hat{x}_e)} < \sum_{i = 1}^{m}\pi^{(x_e)}_{i-1} z_{i,i}^{({x}_e)} \,, \] where $\pi^{(x)}_k$ and $\hat{\pi}^{(x)}_k$ denote the stationary distributions of the number of customers in the system in the $N$-case and the $R$-case, respectively. \end{paradox} Proof. If $R_0 \in (\gamma_m, \beta_m]$, then $\hat{x}_e > x_e = m$. Since the Nash equilibrium threshold in the $R$-case is fractional, it follows from Lemma \ref{lemma:mm1same} that $\hat{z}_{m+1,m+1}^{(\hat{x}_e, \hat{x}_e)} = \hat{z}_{m+1,m+1}^{(m+1, \hat{x}_e)} = 0$ and $\hat{z}_{i,i}^{(\hat{x}_e, \hat{x}_e)} = \hat{z}_{i,i}^{(m+1, \hat{x}_e)}$. Consequently, \begin{equation} \label{eq:com1} z^{(x_e)}_{i,i} = z^{(m)}_{i,i}= \hat{z}^{(m+1, m)}_{i,i} > \hat{z}^{(m+1, \hat{x}_e)}_{i,i} = \hat{z}^{(\hat{x}_e, \hat{x}_e)}_{i,i} \qquad 1 \leq i \leq m\,, \end{equation} with the second equality following from Lemma \ref{lemma:CompareRNR1}, and the inequality following from the fact that $\hat{z}_{i,i}^{(m+1,x)}$ is decreasing in $x$.
\begin{figure}[H] \centering \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm, scale=0.6 , transform shape] \clip(-2.84,-1.3) rectangle (9.04,3.58); \node (zero) [draw, circle, minimum size=1.4cm] at (-2,1cm) {$0$}; \node (one) [draw, circle, minimum size=1.4cm] at (0,1cm) {$1$}; \node (two) [draw, circle, minimum size=1.4cm] at (2,1cm) {$2$}; \node (enminusone) [draw, circle, minimum size=1cm] at (4,1cm) {$\lfloor x \rfloor-1$}; \node (en) [draw, circle, minimum size=1.4cm] at (6,1cm) {$\lfloor x \rfloor$}; \node (enplusone) [draw, circle, minimum size=1cm] at (8,1cm) {$\lfloor x \rfloor+1$}; \draw [->] (zero) .. controls +(0.5,1.5) and +(-0.5,1.5) .. node [midway, above] {$\lambda$} (one); \draw [->] (one) .. controls +(0.5,1.5) and +(-0.5,1.5) .. node [midway, above] {$\lambda$} (two); \draw [->, very thick, dashed] (two) .. controls +(0.5,1.5) and +(-0.5,1.5) .. (enminusone); \draw [->] (enminusone) .. controls +(0.5,1.5) and +(-0.5,1.5) .. node [midway, above] {$\lambda$} (en); \draw [->] (en) .. controls +(0.5,1.5) and +(-0.5,1.5) .. node [midway, above] {$\lambda p$} (enplusone); \draw [->] (enplusone) .. controls +(-0.5,-1.5) and +(0.5,-1.5) .. node [midway, below] {$\mu q$} (en); \draw [->] (en) .. controls +(-0.5,-1.5) and +(0.5,-1.5) .. node [midway, below] {$\mu q$} (enminusone); \draw [->, very thick, dashed] (enminusone) .. controls +(-0.5,-1.5) and +(0.5,-1.5) .. (two); \draw [->] (two) .. controls +(-0.5,-1.5) and +(0.5,-1.5) .. node [midway, below] {$\mu q$} (one); \draw [->] (one) .. controls +(-0.5,-1.5) and +(0.5,-1.5) .. 
node [midway, below] {$\mu q$} (zero); \end{tikzpicture} \caption{Transition rate diagram when the threshold is $x$ when reneging is not allowed.} \centering \label{fig:TR1} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm,scale=0.6 , transform shape] \clip(-2.84,-1.3) rectangle (9.04,3.58); \node (zero) [draw, circle, minimum size=1.4cm] at (-2,1cm) {$0$}; \node (one) [draw, circle, minimum size=1.4cm] at (0,1cm) {$1$}; \node (two) [draw, circle, minimum size=1.4cm] at (2,1cm) {$2$}; \node (enminusone) [draw, circle, minimum size=1cm] at (4,1cm) {$\lfloor x \rfloor-1$}; \node (en) [draw, circle, minimum size=1.4cm] at (6,1cm) {$\lfloor x \rfloor$}; \node (enplusone) [draw, circle, minimum size=1cm] at (8,1cm) {$\lfloor x \rfloor+1$}; \draw [->] (zero) .. controls +(0.5,1.5) and +(-0.5,1.5) .. node [midway, above] {$\lambda$} (one); \draw [->] (one) .. controls +(0.5,1.5) and +(-0.5,1.5) .. node [midway, above] {$\lambda$} (two); \draw [->, very thick, dashed] (two) .. controls +(0.5,1.5) and +(-0.5,1.5) .. (enminusone); \draw [->] (enminusone) .. controls +(0.5,1.5) and +(-0.5,1.5) .. node [midway, above] {$\lambda$} (en); \draw [->] (en) .. controls +(0.5,1.5) and +(-0.5,1.5) .. node [midway, above] {$\lambda p$} (enplusone); \draw [->] (enplusone) .. controls +(-0.5,-1.5) and +(0.5,-1.5) .. node [midway, below] {\small $\mu q + \mu (1-q)(1-p)$} (en); \draw [->] (en) .. controls +(-0.5,-1.5) and +(0.5,-1.5) .. node [midway, below] {$\mu q$} (enminusone); \draw [->, very thick, dashed] (enminusone) .. controls +(-0.5,-1.5) and +(0.5,-1.5) .. (two); \draw [->] (two) .. controls +(-0.5,-1.5) and +(0.5,-1.5) .. node [midway, below] {$\mu q$} (one); \draw [->] (one) .. controls +(-0.5,-1.5) and +(0.5,-1.5) .. 
node [midway, below] {$\mu q$} (zero); \end{tikzpicture} \caption{Transition rate diagram when the threshold is $x$ when reneging is allowed.} \centering \label{fig:TR2} \end{figure} Next, we calculate the stationary distribution of the number of customers in the system in the $N$-case and the $R$-case. Figures \ref{fig:TR1} and \ref{fig:TR2} depict the transition rate diagrams for both cases, given that each customer uses threshold $x$. Let $\displaystyle \rho = \frac{\lambda}{\mu q}$. It follows from the detailed balance equations that, for $k = 0, \cdots, \lfloor x \rfloor$, \begin{align} &\pi_k^{(x)} = \frac{\displaystyle\rho^{k}}{\displaystyle(x-\lfloor x \rfloor) \,\rho^{\lfloor x \rfloor+1}+\frac{\rho^{\lfloor x \rfloor+1}-1}{\rho-1}} \\ &\hat{\pi}_k^{(x)} = \frac{\displaystyle\rho^{k}}{\displaystyle\frac{\lambda (x - \lfloor x \rfloor)}{\mu q+\mu(1-q)(1-(x - \lfloor x \rfloor))} \rho^{\lfloor x \rfloor}+\frac{\rho^{\lfloor x \rfloor+1}-1}{\rho-1}} \notag \,, \end{align} and \begin{align} &\pi_{ \lfloor x \rfloor +1}^{(x)} = \frac{\displaystyle(x-\lfloor x \rfloor) \, \rho^{\lfloor x \rfloor+1}}{\displaystyle(x - \lfloor x \rfloor) \,\rho^{\lfloor x \rfloor+1}+\frac{\rho^{\lfloor x \rfloor+1}-1}{\rho-1}} \\ &\hat{\pi}_{\lfloor x \rfloor+1}^{(x)} = \frac{\displaystyle \frac{\lambda \, (x - \lfloor x \rfloor)}{\mu q+\mu(1-q)(1-(x - \lfloor x \rfloor))} \rho^{\lfloor x \rfloor}}{\displaystyle \frac{\lambda \, (x - \lfloor x \rfloor)}{\mu q+\mu(1-q)(1-(x - \lfloor x \rfloor))} \rho^{\lfloor x \rfloor}+\frac{\rho^{\lfloor x \rfloor+1}-1}{\rho-1}} \,. \notag \end{align} Since $\hat{x}_e > x_e = m$, \begin{equation} \label{eq:SdCom} \hat{\pi}^{(\hat{x}_e)}_k < \pi^{(x_e)}_k \quad \text{for } k=0, \cdots, m \,, \qquad \hat{\pi}^{(\hat{x}_e)}_{m+1} > 0 = \pi^{(x_e)}_{m+1} \,.
\end{equation} Hence \begin{equation} \sum_{i = 1}^{m+1} \hat{\pi}^{(\hat{x}_e)}_{i-1} \, \hat{z}_{i,i}^{(\hat{x}_e,\hat{x}_e)} = \sum_{i = 1}^{m} \hat{\pi}^{(\hat{x}_e)}_{i-1} \, \hat{z}_{i,i}^{(\hat{x}_e,\hat{x}_e)} < \sum_{i = 1}^{m}\pi^{(x_e)}_{i-1} z_{i,i}^{({x}_e)} \,, \end{equation} with the equality following from $\hat{z}_{m+1,m+1}^{(m+1, \hat{x}_e)} = 0$, and the inequality following from Equations \eqref{eq:com1} and \eqref{eq:SdCom}. \hfill $\square$ It can be seen that although the expected payoff $z_{m+1,m+1}^{(x_e)}$ is smaller than $\hat{z}_{m+1,m+1}^{(\hat{x}_e,\hat{x}_e)}$, customers do not actually join at position $m+1$ in the $R$-case equilibrium, and so this state is not included in the total expected payoff. \begin{figure}[H] \centering{\includegraphics[width=0.38\linewidth]{Paradox2Tag.pdf}}\\ \subcaptionbox{$R_0 \in (\gamma_m, \beta_m]$}% {\includegraphics[width=0.5\linewidth]{Paradox2-1.pdf}} \subcaptionbox{$R_0 \in (\beta_m, \alpha_{m+1})$}% {\includegraphics[trim={0 1.5mm 0 0 },clip, width=0.5\linewidth]{Paradox2-2.pdf}} \caption{Allowing reneging can make everyone worse off.} \label{fig:pa2} \end{figure} We illustrate Equation (\ref{eq:com1}) in Figure \ref{fig:pa2}(a) via an example with $R_0 = 7.5, \lambda = 1, \mu = 0.8, q = 0.4$. The Nash equilibrium threshold is $2$ and $2.167$ in the $N$-case and the $R$-case, respectively. The blue stars represent $\bm{z}^{(x_e)}$ with $z^{(x_e)}_{1,1} > z^{(x_e)}_{2,2} > 0 > z^{(x_e)}_{3,3}$, and the triangles represent $\bm{\hat{z}}^{(\hat{x}_e, \hat{x}_e)}$ with $\hat{z}^{(\hat{x}_e, \hat{x}_e)}_{1,1} > \hat{z}^{(\hat{x}_e, \hat{x}_e)}_{2,2} > \hat{z}^{(\hat{x}_e, \hat{x}_e)}_{3,3} = 0$. It can be observed that $z_{i,i}^{(x_e)} > \hat{z}_{i,i}^{(\hat{x}_e,\hat{x}_e)}$ for $i =1,2$. In Paradox \ref{paradox2}, we proved that when $R_0 \in (\gamma_m, \beta_m]$, allowing reneging makes everyone worse off.
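The comparison in Equation \eqref{eq:SdCom} is easy to check numerically from the closed-form stationary distributions above. A minimal sketch, using the parameters of the Figure \ref{fig:pa2}(a) example ($\lambda = 1$, $\mu = 0.8$, $q = 0.4$, thresholds $2$ and $2.167$; the helper function is ours, and it assumes $\rho \neq 1$):

```python
import math

def stationary(lam, mu, q, x, reneging=False):
    """Stationary distribution over states 0..floor(x)+1 under threshold x,
    from the detailed-balance closed forms (assumes rho != 1)."""
    rho = lam / (mu * q)
    n = math.floor(x)
    frac = x - n
    geom = (rho ** (n + 1) - 1) / (rho - 1)          # sum rho^0 + ... + rho^n
    if reneging:                                     # R-case top-state mass
        top = lam * frac / (mu * q + mu * (1 - q) * (1 - frac)) * rho ** n
    else:                                            # N-case top-state mass
        top = frac * rho ** (n + 1)
    D = top + geom                                   # normalising constant
    pis = [rho ** k / D for k in range(n + 1)]
    pis.append(top / D)                              # state floor(x)+1 (0 if x integer)
    return pis

pi_N = stationary(1.0, 0.8, 0.4, 2.0)                   # N-case, x_e = 2
pi_R = stationary(1.0, 0.8, 0.4, 2.167, reneging=True)  # R-case, hat-x_e = 2.167
assert abs(sum(pi_N) - 1) < 1e-9 and abs(sum(pi_R) - 1) < 1e-9
# Equation (eq:SdCom): hat-pi_k < pi_k for k = 0,...,m and hat-pi_{m+1} > 0.
assert all(pi_R[k] < pi_N[k] for k in range(3)) and pi_R[3] > 0
```

Since the unnormalised masses $\rho^k$ agree for $k \le m$ and only the normalising constant grows in the $R$-case, the inequality holds state by state.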
Next, we use numerical examples to show that allowing reneging can make everyone worse off when $R_0 \in (\beta_m, \alpha_{m+1})$; in fact, the paradox was observed in every example we examined. We first illustrate $z_{i,i}^{(x_e)}>\hat{z}_{i,i}^{(\hat{x}_e,\hat{x}_e)}$ for any $1 \leq i \leq \lfloor x_e \rfloor$ in Figure \ref{fig:pa2}(b) via an example with $R_0 = 7.8, \lambda = 1, \mu = 0.8, q = 0.4$. The Nash equilibrium threshold is $2.073$ and $2.327$ in the $N$-case and the $R$-case, respectively. The blue stars represent $\bm{z}^{(x_e)}$ with $z^{(x_e)}_{1,1} > z^{(x_e)}_{2,2} > z^{(x_e)}_{3,3} = 0$, and the triangles represent $\bm{\hat{z}}^{(\hat{x}_e, \hat{x}_e)}$ with $\hat{z}^{(\hat{x}_e, \hat{x}_e)}_{1,1} > \hat{z}^{(\hat{x}_e, \hat{x}_e)}_{2,2} > \hat{z}^{(\hat{x}_e, \hat{x}_e)}_{3,3} = 0$. It can be observed that $z_{i,i}^{(x_e)} > \hat{z}_{i,i}^{(\hat{x}_e,\hat{x}_e)}$ for $i =1,2$. \begin{table} \centering \begin{tabular}{|l | c | c | c | } \hline \quad $R_0,\lambda, \mu,q$ & $7.8, 1, 0.8, 0.4 $ & $4.4, 1, 0.8, 0.8$ & $13.5, 0.8, 1, 0.2 $ \, \\ [0.5ex] \hline $x_e \qquad \, \hat{x}_e$ & $2.073 \quad 2.327$ & $2.345 \quad 2.444$ & $2.529 \quad 2.872$ \\ \hline $z_{1,1}^{(x_e)} \quad \hat{z}_{1,1}^{(\hat{x}_e, \hat{x}_e)}$ & $ 3.245 \quad 2.964$ & $2.599 \quad 2.591$ & $3.740 \quad 3.546$ \\ \hline $z_{2,2}^{(x_e)} \quad \hat{z}_{2,2}^{(\hat{x}_e, \hat{x}_e)}$ & $1.661 \quad 1.292$ & $1.271 \quad 1.259$ & $1.514 \quad 1.283$ \\ \hline $\pi_{0}^{(x_e)} \quad \hat{\pi}_{0}^{(\hat{x}_e)}$ & $0.063 \quad 0.053$ & $0.158 \quad 0.154$ & $ 0.018 \quad 0.017$ \\ \hline $\pi_{1}^{(x_e)} \quad \hat{\pi}_{1}^{(\hat{x}_e)}$ & $0.195 \quad 0.165$ & $0.247 \quad 0.241$ & $0.073 \quad 0.069 $ \\ \hline \end{tabular} \caption{Numerical examples when $R_0 \in (\beta_m, \alpha_{m+1})$} \label{tab:pa2} \end{table} Next, we list the Nash equilibrium thresholds, the stationary probabilities of having $0$ or $1$ customers in the queue, and the expected payoffs of three examples with different values of $R_0, \lambda, \mu$, and
$q$, given in Table \ref{tab:pa2}, to show that not only $z_{i,i}^{(x_e)}>\hat{z}_{i,i}^{(\hat{x}_e,\hat{x}_e)}$, but also $\pi_{i-1}^{(x_e)}>\hat{\pi}_{i-1}^{(\hat{x}_e)}$ for any $1 \leq i \leq \lfloor x_e \rfloor$. It follows from the transition rate diagrams in Figures \ref{fig:TR1} and \ref{fig:TR2} that, for $i = 1, \ldots, m-1$, the transition rates from state $i$ to state $i+1$, and vice versa, are identical in the $N$-case and the $R$-case, so the only difference between the two stationary distributions is the normalisation constant. The greater constant in the $R$-case gives the first $m$ states less probability mass than in the $N$-case; only the final state compensates. The first example has the same parameters as the example in Figure \ref{fig:pa2}. When the Nash equilibrium threshold is fractional, $z_{m+1,m+1}^{(x_e)} = \hat{z}_{m+1,m+1}^{(\hat{x}_e, \hat{x}_e)} = 0$, where $m$ is the integer part of the equilibrium threshold, so we omit these values from the table. Also, to compare $\sum_{i = 1}^{m+1} {\pi}^{({x}_e)}_{i-1} \, {z}_{i,i}^{(x_e)}$ and $\sum_{i = 1}^{m+1} \hat{\pi}^{(\hat{x}_e)}_{i-1} \, \hat{z}_{i,i}^{(\hat{x}_e,\hat{x}_e)}$, we only need ${\pi}^{({x}_e)}_i$ and $\hat{\pi}^{(\hat{x}_e)}_i$ for $i = 0, \cdots, m-1$, so we omit ${\pi}^{({x}_e)}_k$ and $\hat{\pi}^{(\hat{x}_e)}_k$ for $k = m, m+1$. In Table \ref{tab:pa2}, the Nash equilibrium thresholds of the three examples are all fractional and share the same integer part, $2$. We observe that $z_{i,i}^{(x_e)} > \hat{z}_{i,i}^{(\hat{x}_e,\hat{x}_e)}$ and $\pi_{i-1}^{(x_e)}>\hat{\pi}_{i-1}^{(\hat{x}_e)}$ for $i =1,2$. \section{Social Welfare} \label{sec:SW} In the previous section, we showed that allowing reneging can make every customer worse off. If the goal is to maximise the social welfare, defined as the total expected net benefit of all customers, how does reneging affect it?
In this section, we calculate and compare the socially optimal threshold in the $N$-case and the $R$-case. \subsection{Social welfare in the $N$-case} When the customers all adopt threshold $x$, the state transition rate diagram in the non-reneging case is shown in Figure \ref{fig:TR1}, and the social welfare is \begin{align} S^{N} (x) :&= \lambda \left(\sum_{k = 1}^{\lfloor x \rfloor} \pi_{k-1}^{(x)} z_{k,k}^{(x)}+ (x-\lfloor x \rfloor) \pi_{\lfloor x \rfloor}^{(x)} z_{\lfloor x \rfloor+1,\lfloor x \rfloor+1}^{(x)}\right)\\ & = \lambda R_0 \left(\sum_{k = 0}^{\lfloor x \rfloor-1} \pi_k^{(x)}+ (x-\lfloor x \rfloor) \pi_{\lfloor x \rfloor}^{(x)} \right)-\sum_{k = 0}^{\lceil x \rceil} \, k \, \pi_{k}^{(x)} \,, \end{align} where the second equality follows from Little's law. The explicit expression is given in Appendix \ref{appendix:sw}. \begin{proposition} \label{pro1} The social welfare $S^{N}(x)$ is unimodal in $x$. \end{proposition} Proof. We first take the derivative of $S^{N}(x)$: \begin{equation}\label{Sod1} \frac{dS^{N}(x)}{dx} = \Scale[0.91]{\begin{cases} \displaystyle \frac{\rho^{\lfloor x \rfloor} \left(R_0 \lambda (\rho-1)^2 - \rho \,\left(1 - 2 \rho + \lfloor x \rfloor (1-\rho) + \rho^{\lfloor x \rfloor+2}\right)\right)}{(1 + \rho^{\lfloor x \rfloor+1} ((x-\lfloor x \rfloor)(1-\rho) - 1))^2} \qquad & \text{when }x > \lfloor x \rfloor \\ \text{undefined} & \text{when }x = \lfloor x \rfloor \,. \end{cases}} \end{equation} To see that $S^{N}(x)$ is unimodal, let \begin{equation} f(k) = 1 - 2 \rho + k (1-\rho) + \rho^{2 + k} \,, \quad k = 0,1,2,\cdots \,, \end{equation} and observe that the numerator in the first case of Equation \eqref{Sod1} can be written as \[ \rho^{\lfloor x \rfloor}\left(R_0 \lambda(\rho-1)^2 -\rho f(\lfloor x \rfloor) \right)\,. \] When $\rho \neq 1$, \begin{equation} f(k+1) - f(k) = \rho^{2 + k} (\rho - 1) + (1-\rho) = (1-\rho)\left(1-\rho^{2 + k} \right) > 0 \,.
\end{equation} We assume that $R_0 > \frac{\displaystyle 1}{\displaystyle \mu q}$ to avoid the trivial case where the reward is smaller than the expected cost of the service time even when a customer does not have to wait. Hence \begin{equation} f(0) = (1-\rho)^2 < \frac{R_0 \lambda }{\rho}(1-\rho)^2 \,, \end{equation} since $R_0 \lambda / \rho = R_0 \mu q > 1$, and \begin{equation} \lim\limits_{k \rightarrow \infty} f(k) = \infty > \frac{R_0 \lambda }{\rho}(1-\rho)^2, \end{equation} and so \begin{equation} \rho \, f(0) < \cdots < R_0 \lambda (1-\rho)^2< \cdots < \rho \,\lim\limits_{k \rightarrow \infty} f(k) \,. \end{equation} Thus, there exists an integer $n_N^{\star}$ such that $\dfrac{dS^{N}}{dx} > 0$ when $x < n_N^{\star}$ and $\dfrac{dS^{N}}{dx} \leq 0$ when $x > n_N^{\star}$. That is, $S^{N}(x)$ is unimodal and $n_N^{\star}$ is the socially optimal threshold. $ \hfill \square $ It can be observed that $n_N^{\star} = \lfloor \nu \rfloor$, where $\nu$ satisfies \begin{equation} R_0 \mu q -\nu = \frac{\rho}{(1-\rho)^2} \left( \nu (1-\rho)- 1+ \rho^{\nu} \right) \,. \end{equation} This coincides with Naor's result for the non-feedback $M/M/1$ queue in \citet[Section 4]{N69}. In other words, from the perspective of society, the feedback parameter $q$ affects the social welfare only by lowering the effective service rate from $\mu$ to $\mu q$. In addition, the socially optimal threshold is an integer even though customers are allowed to use fractional thresholds. Figure \ref{SWNR} (the blue curve) illustrates how the social welfare varies with the threshold value. \subsection{Social welfare in the $R$-case} When customers are allowed to renege after they join, the social welfare calculation is more involved, as not every customer who chooses to join contributes $R_0$ to the social welfare. On this account, to calculate the social welfare in the $R$-case, we need to work out the probability that a joining customer reneges before she successfully completes the service. Denote this probability by $\tilde{p}^{(x)}$ when every customer uses threshold $x$.
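The characterisation of $n_N^{\star}$ in the proof of Proposition \ref{pro1} lends itself to a direct numerical computation: scan the increasing sequence $f(k)$ until $\rho f(k)$ exceeds $R_0 \lambda (1-\rho)^2$. A minimal sketch (the function is ours, assuming $\rho \neq 1$ and $R_0 > 1/(\mu q)$):

```python
def socially_optimal_threshold(R0, lam, mu, q, k_max=1000):
    """Integer n* maximising S^N (and, as shown below, S^R):
    the first k at which the derivative dS/dx becomes non-positive."""
    rho = lam / (mu * q)
    target = R0 * lam * (1 - rho) ** 2
    for k in range(k_max):
        f_k = 1 - 2 * rho + k * (1 - rho) + rho ** (k + 2)
        if rho * f_k >= target:   # numerator of dS/dx turns non-positive here
            return k
    raise ValueError("k_max too small")

# Parameters of the example in Figure SWNR: lambda=1, mu=0.8, q=0.8, R0=18,
# for which the socially optimal threshold reported in the text is 3.
assert socially_optimal_threshold(18, 1.0, 0.8, 0.8) == 3
```

Since $f(k)$ is strictly increasing, the scan terminates at the unique sign change of the derivative.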
In order to obtain $\tilde{p}^{(x)}$, we first calculate the distribution of the number of customers in the system observed by each feedback customer, \[ \tilde{\pi}^{(x)}_{k} = \frac{\mu (1-q) \, \hat{\pi}^{(x)}_{k+1}}{\sum_{j=0}^{\lfloor x \rfloor} \,\mu (1-q) \, \hat{\pi}^{(x)}_{j+1} } = \frac{\hat{\pi}^{(x)}_{k+1}}{\sum_{j=0}^{\lfloor x \rfloor} \, \hat{\pi}^{(x)}_{j+1} } \,, \qquad k = 0, \cdots, \lfloor x \rfloor \,. \] A joining customer can only renege when her service fails and there are $\lfloor x \rfloor$ other customers in the system; in that case, she reneges with probability $1-p$. So a joining customer reneges at her $k$th feedback with probability \begin{equation} \tilde{p}^{(x)}_k : = (1-q)(1-p)\,\tilde{\pi}^{(x)}_{\lfloor x \rfloor} \, \left((1-q) \left(1-(1-p)\,\tilde{\pi}^{(x)}_{\lfloor x \rfloor} \right)\right)^{k-1} \,. \end{equation} Hence, the probability that a joining customer reneges before she successfully completes the service is given by the geometric sum \begin{align} \tilde{p}^{(x)} =\sum_{k = 1}^{\infty}\tilde{p}^{(x)}_k &=\frac{(1-q)\, (1-p)\,\tilde{\pi}^{(x)}_{\lfloor x \rfloor} }{1-(1-q) \, \left(1-(1-p)\,\tilde{\pi}^{(x)}_{\lfloor x \rfloor} \right)} \,. \end{align} Then the social welfare in the $R$-case is \begin{align} S^{R} (x):&= \lambda \left(\sum_{k = 1}^{\lfloor x \rfloor} \hat{\pi}_{k-1}^{(x)} \, \hat{z}_{k,k}^{(x)} + (x - \lfloor x \rfloor) \, \hat{\pi}_{\lfloor x \rfloor}^{(x)} \, \hat{z}_{\lfloor x \rfloor+1,\lfloor x \rfloor+1}^{(x)}\right) \, \\ &= \lambda \, R_0 \, \left(\sum_{k = 0}^{\lfloor x \rfloor-1} \hat{\pi}_k^{(x)} + (x - \lfloor x \rfloor) \, \hat{\pi}_{\lfloor x \rfloor}^{(x)} \right) \left(1-\tilde{p}^{(x)} \right) -\sum_{k = 0}^{\lceil x \rceil} \, k \, \hat{\pi}_{k}^{(x)} \,.
\end{align} Taking the derivative of $S^{R}(x)$, we obtain \begin{equation} \label{Sod2} \frac{dS^{R}(x)}{dx} = \Scale[0.9]{ \begin{cases} \displaystyle \frac{q \,\rho^{\lfloor x \rfloor} \left(R_0 \lambda (\rho-1)^2 - \rho \,\left(1 - 2 \rho + \lfloor x \rfloor (1-\rho) + \rho^{\lfloor x \rfloor+2}\right)\right)}{(1 + \rho^{\lfloor x \rfloor+1} ((x-\lfloor x \rfloor) \, (1-q \rho)-1))^2} \qquad & \text{when }x > \lfloor x \rfloor \\ \text{undefined} & \text{when }x = \lfloor x \rfloor \,. \end{cases}} \end{equation} Following a similar argument to that in Proposition \ref{pro1}, there exists a socially optimal threshold $n_R^{\star}$ such that $\dfrac{dS^{R}(x)}{dx} > 0$ when $x < n_R^{\star}$ and $\dfrac{dS^{R}(x)}{dx} \leq 0$ when $x > n_R^{\star}$. The part of Equation \eqref{Sod2} that determines the sign of $\displaystyle \frac{dS^{R}(x)}{dx}$ is $R_0 \lambda (\rho-1)^2 - \rho \, \left(1 - 2 \rho + \lfloor x \rfloor (1-\rho) + \rho^{\lfloor x \rfloor+2} \right)$, which is the same as in Equation \eqref{Sod1}, so $n_R^{\star} = n_N^{\star}$. When the threshold is an integer, joining customers never renege, so there is no difference between the $N$-case and the $R$-case; in particular, the socially optimal threshold and the optimal social welfare coincide in both cases. In Figure \ref{SWNR}, an example of the social welfare in the $N$-case (blue) and the $R$-case (red) is plotted; the socially optimal threshold is $n_N^{\star} = n_R^{\star}=3$. The figure also indicates that the social welfare in the non-reneging case is greater than in the reneging case when customers use a threshold $x < n_N^{\star}$, but lower when they use $x > n_N^{\star}$. A possible explanation is that, when the customers' threshold is below the socially desired value, too few customers use the service, and allowing reneging makes this worse.
On the other hand, when the customers' threshold is greater than the socially desired value, the social welfare is lower because joining customers inflict negative externalities on others \citep{HO16}; allowing reneging makes it easier to leave, which improves the situation. \begin{figure} \centering \includegraphics[width=10cm]{SWNR.pdf} \caption{Social welfare when $\lambda = 1, \, \mu = 0.8, \, q = 0.8, \, R_0 = 18$.} \label{SWNR} \end{figure} \section*{Acknowledgments} \noindent P. G. Taylor's research is supported by the Australian Research Council (ARC) Laureate Fellowship FL130100039 and the ARC Centre of Excellence for the Mathematical and Statistical Frontiers (ACEMS). M. Fackrell's research is supported by the ARC Centre of Excellence for the Mathematical and Statistical Frontiers (ACEMS). J. Wang would like to thank the University of Melbourne for supporting her work through the Melbourne Research Scholarship. \clearpage
\section{Introduction} \label{sec:intro} In recent years, intensive research has been carried out in the field of autonomous driving \cite{Urmson2008,Ziegler2014a,Kunz2015}. Thereby, motion planning is a crucial requirement and one of the most challenging aspects for automated vehicles. As early as 2007, impressive automated systems for complex urban scenarios with interacting vehicles were presented as part of the well-known Urban Challenge initiated by the Defense Advanced Research Projects Agency (DARPA) \cite{Urmson2008}. In 2013, the Mercedes S-Class Bertha was able to drive fully autonomously more than \SI{100}{\kilo\meter} from Mannheim to Pforzheim in Germany \cite{Ziegler2014a, Ziegler2014}. A popular architecture for the motion planning system follows the idea that a behavior planning module decides on a strategic maneuver option, which is passed to a trajectory planning module where a feasible trajectory is calculated. A practicable approach for behavior planning is to generate a maneuver option in a rule-based fashion using heuristics. However, this limits the capabilities for foresighted motion planning in complex environments \cite{Ziegler2015}. For this reason, more foresighted but still efficient behavior and trajectory planning systems are widely investigated. A popular concept for graph-based behavior planning is presented in \cite{Hubmann2016}: a speed profile along a given reference path is obtained using graph-search methods. To this end, a graph is generated where nodes represent states and edges represent actions. The idea is to extract a rough behavior trajectory over a planning horizon $t_\text{hor} \approx \SI{10}{\second}$ in order to enable foresighted behavior planning. The action set consists of discrete acceleration values and the temporal discretization is $\Delta t = \SI{1}{\second}$.
There exist several approaches which extend this concept of behavior planning to, e.g., short-term lateral motion \cite{Zhan2017}, merging behavior on highways \cite{Ward2018a} or courteous behavior at intersections \cite{Speidel2019}. In \cite{Zhang2020}, closed-loop forward simulation implementing high-level policies is used to generate subsequent states in the graph, in contrast to discrete acceleration or velocity values. The forward simulation is done using the Intelligent Driver Model (IDM) \cite{Treiber2000} and the Pure Pursuit Controller \cite{Coulter1992}. However, due to the computational complexity of the approach, the concept is restricted to a horizon of $t_\text{hor} = \SI{8}{\second}$ and a large temporal discretization of $\Delta t = \SI{2}{\second}$. Further, only one policy change is allowed within the planning horizon $t_\text{hor}$. A similar method is used in \cite{Lenz2016}, where cooperative behavior for highway scenarios is generated using Monte Carlo Tree Search. In this concept, the IDM as well as pre-defined acceleration actions are employed. Driver models have also been successfully used in various other concepts to efficiently generate socially compliant behavior. For example, in \cite{Graf2019} the IDM-based MOBIL model \cite{Kesting2007} is utilized to decide whether a lane change is desirable. Based on the previous discussion, in this work a motion planning framework is developed that enables foresighted and courteous behavior using graph-search methods, extending our concept presented in \cite{Speidel2019}. The main idea is to utilize different control and driver models which are known to generate preferable actions for specific scenarios. Consequently, we are able to improve the performance of graph-based behavior planning and of the driven trajectories compared to related work \cite{Hubmann2016, Speidel2019}.
In order to still assure real-time capabilities and significantly reduce calculation times, we propose action selection strategies as well as efficient admissible heuristics, which are applicable in interactive urban scenarios. \section{Methodology} \label{sec:methodology} \begin{figure} \centering \input{ArchitecurePicture.tex} \vspace*{-0.5cm} \caption{System Overview} \label{fig:Architecture} \end{figure} The concept follows the modular architecture of behavior and trajectory planning as shown in Figure~\ref{fig:Architecture}. Preceding modules provide environmental data including map data as well as state estimations of other vehicles with according predictions. Thereby, a set of predicted trajectories for each other vehicle with corresponding uncertainties is received. The goal of the graph-based behavior planning is to obtain a rough behavior trajectory. In general, planning is done relative to the center line of the current road lane. Therefore, a node in the graph is represented by a state vector \begin{equation} \boldsymbol{x}_k = [s_k, d_k, \theta_k, \kappa_k, v_k, a_k]^\mathsf{T}, \quad k \in [0,\dots, T] \,, \end{equation} where $s$ is the longitudinal position along the lane, $d$ the lateral distance to the lane, $\theta$ the orientation, $\kappa$ the curvature, $v$ the velocity, $a$ the acceleration and $k$ the corresponding time step. The end of the planning horizon $t_\text{hor}$ is denoted by the index $T$. The lane relative position $[s,d]$ can be transformed to the classic representation $[x,y]$ in Cartesian coordinates and vice versa. For further details, the reader is referred to \cite{Werling2010}. The expansion of a node, i.e. the generation of possible subsequent states, is done using different models which will be presented in Section~\ref{subsec:contextAwareBranching}. The time discretization of subsequent nodes is $\Delta t = \SI{1}{\second}$. Beginning from the root node, i.e. 
the current state, a graph is generated up to the planning horizon $t_\text{hor} \approx \SI{10}{\second}$. An exemplary part of the graph structure is shown in Figure~\ref{fig:graph}. The optimal behavior trajectory is extracted using the A*-search algorithm, where the search is guided by the admissible heuristic functions presented in Section~\ref{subsec:heuristicFunctions}. The generation of the resulting trajectory in the trajectory planning module is based on the approach presented in \cite{Speidel2019}. The general idea is to use polynomials in order to interpolate between the behavior trajectory states. In contrast to our previous work, we also consider lateral optimization. In the end, the resulting trajectory is passed to the controller which generates the input for the actuators. \subsection{Branching Strategy} \label{subsec:contextAwareBranching} \begin{figure} \centering \input{graphPicture.tex} \vspace*{-0.4cm} \caption{Exemplary part of the utilized graph structure for behavior planning. Nodes represent states connected by edges associated with actions contained in \mbox{$\mathcal{A} = \{\alpha^{(0)},\dots, \alpha^{(n)}\}$}. The concept of MOBIL-based action selection is demonstrated for a possible lane change maneuver, which is not expanded originating from $\boldsymbol x_k^{(i)}$, indicated by dashed lines. However, a possible lane change is considered originating from $\boldsymbol x^{(i)}_{k+1}$.} \label{fig:graph} \end{figure} The idea of the branching strategy is to combine the advantages of pre-defined acceleration actions and model-based actions which generate preferable behavior for different scenarios, inspired by \cite{Lenz2016}. To avoid expanding all actions, only a subset of actions is expanded at each node. In the following, this process of choosing which action to expand at which node is also referred to as action selection.
In general, the ideas of \cite{Lenz2016} are extended by additional control models as well as more sophisticated action selection strategies. Further, in this work, the behavior planning is embedded into a holistic framework generating comfortable trajectories. In addition, the solution of the behavior planning guarantees the existence of a feasible solution in the trajectory planning module, as all kinematic and collision constraints are considered during forward simulation. In general, longitudinal actions $\mathcal{A}_\text{lon}$ and lateral actions $\mathcal{A}_\text{lat}$ can be distinguished, where the resulting action set is $\mathcal{A} = \mathcal{A}_\text{lon} \times \mathcal{A}_\text{lat} = \{\alpha^{(0)},\dots, \alpha^{(n)}\}$. Hereafter, the different actions and corresponding action selection strategies are presented. \newline \textbf{Longitudinal actions:} For longitudinal action generation, acceleration and velocity targets are distinguished. First, the acceleration targets are discussed. These consist of pre-defined accelerations $a^{(i)} \in \{-2, -1, 0, 1, 2\} \SI{}{\meter\per\second\squared}$ and the acceleration $a^{\text{(idm)}}$ according to the IDM. The action $a^{\text{(idm)}}$ is expanded during car-following scenarios, as it is able to model comfortable and human-like following behavior. To avoid expanding similar states, in car-following scenarios only generic acceleration targets $a^{(i)}$ with \mbox{$|a^{(i)} - a^{\text{(idm)}}| > \SI{0.5}{\meter\per\second\squared}$} are expanded. Further, we require \mbox{$|a_{k+1} - a_k| \leq \SI{1.9}{\meter\per\second\squared}$}.
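For reference, the IDM car-following acceleration $a^{\text{(idm)}}$ follows the standard form of \cite{Treiber2000}; a minimal sketch (the parameter values below are illustrative, not the ones tuned in our planner):

```python
def idm_acceleration(v, v_lead, gap, v_desired,
                     a_max=1.0, b_comf=1.5, s0=2.0, T=1.5, delta=4):
    """Intelligent Driver Model (Treiber et al. 2000).
    v, v_lead: ego and leader speed [m/s]; gap: bumper-to-bumper gap [m].
    Parameter values here are illustrative defaults."""
    dv = v - v_lead                                   # approach rate
    # desired dynamic gap s*
    s_star = s0 + max(0.0, v * T + v * dv / (2 * (a_max * b_comf) ** 0.5))
    # free-road term minus interaction term
    return a_max * (1 - (v / v_desired) ** delta - (s_star / gap) ** 2)

# Far behind a faster leader, the model essentially drives freely towards
# the desired speed, so the commanded acceleration stays moderate:
a = idm_acceleration(v=10.0, v_lead=15.0, gap=100.0, v_desired=13.89)
assert -2.0 < a < 1.0
```

During node expansion, the returned value would play the role of $a^{\text{(idm)}}$ against which the generic targets $a^{(i)}$ are filtered.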
The longitudinal state transition model for acceleration targets is defined by \begin{multline} \setlength\arraycolsep{1.5pt} \begin{bmatrix} s_{k+1} \\ v_{k+1} \\ a_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & \Delta t & \frac{1}{2}\Delta t^2\\ 0 & 1 & \Delta t\\ 0 & 0 & 1\\ \end{bmatrix} \begin{bmatrix} s_k \\ v_k \\ a_k \\ \end{bmatrix} + \begin{bmatrix} \frac{1}{6}\Delta t^3\\[1pt] \frac{1}{2}\Delta t^2 \\ \Delta t \end{bmatrix} \dot{a}_k\,, \label{eqn:caTransition} \end{multline} where $\dot{a}_k = (a^{(i)} - a_{k})/ \Delta t$ and, as a result, $a_{k+1} = a^{(i)}$. The velocity targets are defined by the desired velocity $v_d$ and standstill $v_0 = 0$, where the acceleration at the target is constrained to $0$. The expansion of these actions is triggered if the target velocity is reachable within $\Delta t$. As state transition model, the concept of C1-continuous time-optimal trajectories summarized in \cite{Knierim2012} is employed. Thus, comfort is ensured by restricted and continuous jerk. In general, C1-continuous time-optimal trajectories also allow emergency maneuvers at the kinematic limits. However, in this work, we limit our scope to non-safety-critical scenarios. For further details, the reader is referred to \cite{Knierim2012}. \newline \textbf{Lateral actions:} The lateral action set is given by the different road lanes which can be targeted to drive on. Therefore, $\mathcal{A}_\text{lat} = \{r_\text{l}, r_\text{c}, r_\text{r}\}$, where $r_\text{l}$ represents the lane to the left, $r_\text{c}$ the current lane and $r_\text{r}$ the lane to the right. Using $r_\text{c}$, the motion can be modeled purely longitudinally along the center line of the current lane. In order to perform a lane change to $r_\text{l}$ or $r_\text{r}$, the Pure Pursuit Controller is employed as lateral transition model and, consequently, the vehicle state is regarded in Cartesian coordinates.
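The Pure Pursuit steering law used as the lateral transition model can be sketched as follows (the wheelbase value and function signature are illustrative, not taken from our implementation):

```python
import math

def pure_pursuit_steering(x, y, yaw, target, wheelbase=2.7):
    """Pure Pursuit lateral control (Coulter 1992): steering angle that
    places the rear axle on a circular arc through the look-ahead point.
    target: (x, y) look-ahead point on the target lane's center line."""
    dx, dy = target[0] - x, target[1] - y
    alpha = math.atan2(dy, dx) - yaw          # heading error to the point
    l_d = math.hypot(dx, dy)                  # look-ahead distance
    return math.atan2(2.0 * wheelbase * math.sin(alpha), l_d)

# A look-ahead point straight ahead requires no steering:
assert abs(pure_pursuit_steering(0, 0, 0, (10, 0))) < 1e-12
# A point to the left requires a positive (left) steering angle:
assert pure_pursuit_steering(0, 0, 0, (10, 5)) > 0
```

Combined with a longitudinal acceleration target, forward-simulating this controller over $\Delta t$ yields the successor state of a lane-change edge, including the resulting $\kappa$ and $\dot\kappa$.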
The steering behavior defined by the Pure Pursuit Controller is combined with different acceleration targets for longitudinal behavior. This allows restricting $\kappa$, $\dot{\kappa}$ and the absolute acceleration $a_{\text{abs}}$ already during behavior planning, which ensures feasible solutions in the trajectory planning module. The context in which a lane change is explored is defined by the MOBIL model \cite{Kesting2007}, which is known to generate human-like decision-making for lane change behavior \cite{Graf2019}. Thereby, it is estimated whether a lane change is favorable for the combined costs of all involved vehicles. \subsection{Cost and Heuristic Functions} \label{subsec:heuristicFunctions} The costs attributed to a node are defined by \begin{equation} \vspace{-0.1cm} J = w_\text{f} j_\text{f} + w_\text{c} j_\text{c} + w_v j_v + w_a j_a + w_{\dot{a}} j_{\dot{a}} + w_{\text{lc}} j_\text{lc}\,, \end{equation} where $j_\text{f}$ represents costs for the spatio-temporal distance to the vehicle in front, $j_\text{c}$ are courtesy costs that arise if the ego vehicle pulls out or drives in front of another vehicle \cite{Speidel2019}. The difference to the desired velocity is regarded by $j_v$ and the comfort is optimized by costs $j_a$ and $j_{\dot{a}}$ for larger absolute values of $a$ and $\dot{a}$. Further, costs $j_\text{lc}$ arise for lane changes. The single cost terms can be weighted with the according cost weighting $w$. In order to generate courteous and safe behavior, a set of predicted trajectories for each other vehicle is considered, where the corresponding uncertainties are incorporated by the single cost terms. To further improve the runtime, admissible heuristics are developed which can be calculated online. The idea is to use a linear combination of heuristic terms rather than directly modeling one overall heuristic. If the heuristics for the single cost terms are admissible, the linear combination remains admissible \cite{Russell2016}. 
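The node cost $J$ above amounts to a dot product of weights and cost terms; a short sketch with made-up term values and weights (the keys and numbers are illustrative, not the authors' tuning):

```python
# Illustrative computation of the node cost J as the weighted sum of
# the single cost terms from the text. All values are made up.

def node_cost(terms, weights):
    """J = sum_i w_i * j_i over matching keys."""
    return sum(weights[k] * terms[k] for k in terms)

J = node_cost(
    terms={"f": 1.2, "c": 0.0, "v": 2.5, "a": 0.4, "jerk": 0.1, "lc": 1.0},
    weights={"f": 1.0, "c": 2.0, "v": 0.5, "a": 0.3, "jerk": 0.2, "lc": 0.8},
)
```

The weights $w$ trade off the single objectives, e.g., a larger $w_\text{c}$ makes the planner more courteous at the cost of progress.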
Therefore, the heuristic is given by \begin{equation} h_{\text{all}} = \sum_{i=k+1}^{T} w_\text{f} j_{\text{f},i}^\text{min} + w_\text{c} j_{\text{c},i}^\text{min} + w_v j_{v,i}^\text{min} + w_a j_{a,i}^\text{min} + w_{\dot{a}} j_{\dot{a},i}^\text{min} \,, \end{equation} where $j^\text{min}_{(\cdot),i}$ represent the minimal costs for the corresponding cost term that arise at time $i$ originating from the currently expanded node $\boldsymbol{x}_k$. In the following, the calculation of the single minimal cost terms is explained. The terms $ j_{\dot{a},i}^\text{min}$ and $ j_{a,i}^\text{min}$ are determined by the minimal necessary jerk and acceleration to avoid a collision with the vehicle in front. The term $j_{\text{f},i}^\text{min}$ can be estimated by calculating the maximum possible distance to the vehicle in front at $i$. The same applies for $j_{\text{c},i}^\text{min}$, where the maximum possible distance to the vehicle behind is calculated. The minimal arising velocity costs $j_{v,i}^\text{min}$ are given if the ego vehicle accelerates with maximum acceleration to the desired speed. Even though the single minimal cost terms result in low estimated heuristic costs, the evaluation shows that the combination of all heuristics leads to a significant reduction of calculation times, while the solution remains optimal. \begin{figure*}[ht] \setlength\columnsep{5pt} \begin{multicols}{3} \vspace{-0.65cm} \input{eval_result1.tikz}\; \input{eval_result2.tikz}\; \input{eval_result3.tikz} \end{multicols} \vspace{-0.675cm} \begin{multicols}{2} \input{eval_result_vel.tikz}\;\;\;\;\;\;\;\;\;\;\;\;\columnbreak \input{eval_result_acc.tikz} \end{multicols} \vspace*{-0.75cm} \caption{ Evaluation scenario, where the ego vehicle performs a left turn, while maintaining comfortable behavior and courtesy towards other traffic participants. In the first row, a top view of the scene is depicted for three different points in time. 
The planned trajectory is represented by colored dots, where red denotes the planned position at $T$. The ego vehicle is red and other vehicles are blue. The center lines of the road lanes are depicted in black. The second row shows the velocity and acceleration of the driven trajectory. } \label{fig:evalLeftTurn} \end{figure*} \section{EVALUATION} The evaluation is done using real-world map data contained in a high-precision digital map of Ulm (Germany) including lane-changes, intersections, roundabouts and on-ramp scenarios \cite{Kunz2015}. The concept is implemented in C++ using the A*-search algorithm of the DOSL library \cite{Bhattachary2017}. Runtimes are obtained using an Intel Xeon E5-1660 v4 CPU with 3.2 GHz utilizing a single thread. Other vehicles are simulated with random acceleration uniformly distributed in $[ \SI{-1}{\meter\per\second\squared},\SI{1}{\meter\per\second\squared}]$ in each time step. In general, for each of the evaluations about 250 scenarios were analyzed, including lane change, highway on-ramp, roundabout and intersection scenarios. \subsection{Branching and Heuristic Functions} At first, the action selection strategy is investigated. The corresponding findings are summarized in Table~\ref{tab:MobilEval}. In order to measure the comfort of resulting trajectories, both average squared acceleration $\varnothing a^2$ and jerk $\varnothing \dot{a}^2$ are regarded, as they are also incorporated into the cost function during trajectory planning. The results show that our model-based action selection strategy only has minor influence on the quality of resulting trajectories. This emphasizes that during car-following scenarios exploration of similar states is omitted and that the MOBIL model yields well-suited decision making for lane changes when integrated into the graph-based framework. Thereby, the runtime is reduced by 90\% compared to a more passive action selection strategy similar to the defined preconditions in \cite{Lenz2016}. 
Consequently, the proposed action selection strategy enables the usage of the extended action set for real-time application, without increased trajectory costs. {\renewcommand{\arraystretch}{1.0} \begin{table} \caption{Evaluation of the action selection strategy, including strategies for car-following and lane changing using the \mbox{MOBIL} model. Thereby, the performance of the proposed action selection strategy is compared with a more passive, i.e. less restrictive, one similar to the preconditions presented in \cite{Lenz2016}. Runtimes for behavior planning as well as average squared jerk and acceleration of driven trajectories are shown. No heuristic functions ($h_0$) are used for the comparison. \vspace{-0.5cm}} \label{tab:MobilEval} \begin{center} \begin{tabular}{c| l l} & \makecell{proposed \\ ($h_0$)} & passive\\ \hline $\varnothing$ runtime [$\SI{}{\milli\second}$]&\textbf{18.09} & 189.61\\ max runtime [$\SI{}{\milli\second}$]&\textbf{158.33} & {1588.67}\\ $\varnothing \dot{a}^2$ $[(\SI{}{\meter}/\SI{}{\second}^3)^2]$&{0.056}& 0.056\\ $\varnothing a^2$ $[(\SI{}{\meter}/\SI{}{\second}^2)^2]$&{0.39} & {0.39}\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \end{table} } Further, the heuristic functions as well as the overall branching strategy were evaluated, where Table~\ref{tab:IDMEval} shows corresponding results. As baseline the concepts \cite{Speidel2019,Hubmann2016} are used, which implement a similar branching strategy. It is shown that comfort is higher for the proposed approach, as $\varnothing a^2$ is slightly reduced and $\varnothing \dot{a}^2$ is nearly 25\% lower. These results emphasize the idea that model knowledge of driver and control models can be effectively used in order to improve comfort, in the trade-off against longer runtimes. However, the proposed heuristic functions are able to effectively reduce the runtime. It is demonstrated that the average runtime can be reduced by about $17\%$. 
The maximum runtime is even improved by about $60\%$, from $\SI{158.3}{\milli\second}$ to $\SI{62.67}{\milli\second}$, while the solution of behavior planning remains optimal. Slight divergences of driven trajectories occur due to numerical issues. As a result, runtimes for behavior planning are significantly reduced even compared to \cite{Speidel2019,Hubmann2016}, while the quality of driven trajectories is improved using the proposed branching strategy. {\renewcommand{\arraystretch}{1.0} \begin{table} \caption{Evaluation of the proposed behavior planning module, where runtimes for behavior planning as well as average squared jerk and acceleration for driven trajectories are shown. The evaluation scenarios were investigated using the proposed branching strategy and the branching strategy implemented in \cite{Hubmann2016, Speidel2019}. Further, the proposed approach is shown with heuristic functions $h_\text{all}$ and without usage of heuristic functions $h_0$. \vspace{-0.5cm}} \label{tab:IDMEval} \begin{center} \begin{tabular}{c| l l l} &\makecell{proposed \\ ($h_0$)} & \makecell{proposed \\ ($h_\text{all}$)} & \makecell{\cite{Hubmann2016, Speidel2019} }\\ \hline $\varnothing$ runtime $[\SI{}{\milli\second}]$ &{18.09} &\textbf{14.96} & {14.79}\\ max runtime $[\SI{}{\milli\second}]$ &158.33 &\textbf{62.67}& 103.00\\ $\varnothing \dot{a}^2$ $[(\SI{}{\meter}/\SI{}{\second}^3)^2]$&\textbf{0.056} &\textbf{0.055}& {0.073}\\ $\varnothing a^2$ $[(\SI{}{\meter}/\SI{}{\second}^2)^2]$&\textbf{0.39} &\textbf{0.39} & {0.40}\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \end{table} } \subsection{Motion Planning Framework} In order to give insight into the overall performance and the resulting trajectories of the motion planning framework, an exemplary scenario is depicted in Figure~\ref{fig:evalLeftTurn}. In general, an urban left turn scenario is regarded without right-of-way. 
At time \tikz\node[circle,fill=white,draw=black,thick,inner sep=1pt]{\scriptsize 1};, the ego vehicle \tikz\node[circle,fill=white,draw=red,thick,inner sep=1pt]{\textcolor{red}{\footnotesize e}}; slowly approaches the intersection, while vehicle \tikz\node[circle,fill=white,draw=blue,thick,inner sep=1pt]{\textcolor{blue}{\scriptsize 4}}; crosses it. Afterwards, vehicle \tikz\node[circle,fill=white,draw=blue,thick,inner sep=1pt]{\textcolor{blue}{\scriptsize 2}}; and \tikz\node[circle,fill=white,draw=blue,thick,inner sep=1pt]{\textcolor{blue}{\scriptsize 3}}; approaching from the right have to be considered. Taking the turn in front of vehicle \tikz\node[circle,fill=white,draw=blue,thick,inner sep=1pt]{\textcolor{blue}{\scriptsize 2}}; would cause too high courtesy costs; thus, the ego vehicle merges between vehicle \tikz\node[circle,fill=white,draw=blue,thick,inner sep=1pt]{\textcolor{blue}{\scriptsize 2}}; and \tikz\node[circle,fill=white,draw=blue,thick,inner sep=1pt]{\textcolor{blue}{\scriptsize 3}}; at \mbox{time \tikz\node[circle,fill=white,draw=black,thick,inner sep=1pt]{\scriptsize 3};}. In addition, it is worth noting that during the analysis of all 250 scenarios, the maximum measured overall calculation time for motion planning was $\SI{89.33}{\milli\second}$ using the presented approach. Further, none of the scenarios led to any collisions despite random behavior of other traffic participants. This emphasizes the capability of the framework to handle complex urban scenarios. \section{CONCLUSION} In this work, we presented a motion planning framework for autonomous vehicles in urban environments utilizing graph-search methods. The proposed branching strategy and admissible heuristic functions yield trajectories attributed with lower costs, while the runtime is reduced significantly compared to related work. 
As future work, we plan to implement the concept on the research vehicle of Ulm University and to validate it in real-world public traffic. \bibliographystyle{IEEEbib}
\section{Competitive Coevolution} \label{sec:approach} Figure~\ref{fig:coevolution} describes the OPAM algorithm for finding optimal priority assignments, which employs multi-objective, two-population competitive coevolution. The algorithm first randomly initializes two populations $\mathbf{A}$ and $\mathbf{P}$ for task-arrival sequences and priority assignments, respectively (lines 13--15). For $\mathbf{A}$, OPAM randomly varies task arrivals of aperiodic tasks to create $\var{ps}_a$ task-arrival sequences, according to the input task descriptions $D$. Regarding $\mathbf{P}$, OPAM randomly creates $\var{ps}_p$ priority assignments that may include one defined by engineers if available. \begin{figure}[t] \begin{lstlisting}[style=Alg] Algorithm Search optimal priority assignments Input $D$: task descriptions Input $n_c$: number of coevolution cycles //budget Input $\var{ps}_a$: population size //task-arrival sequences Input $\var{ps}_p$: population size //priority assignments Input $\var{cp}_a$: crossover probability //task-arrival sequences Input $\var{cp}_p$: crossover probability //priority assignments Input $\var{mp}_a$: mutation probability //task-arrival sequences Input $\var{mp}_p$: mutation probability //priority assignments Input $\mathbf{E}$: set of task-arrival sequences //external evaluation Output $\mathbf{B}$: best Pareto front //initialize populations $\mathbf{A} \leftarrow \fun{randomize\_arrivals}(D,\var{ps}_a)$ $\mathbf{P} \leftarrow \fun{randomize\_priorities}(D,\var{ps}_p)$ for $n_c$ times do ?\vrule? //evolution: find worst-case sequences of task arrivals ?\vrule? //objective: deadline misses ?\vrule? $\fun{evaluate\_internal\_fitness\_arrivals}(\mathbf{A},\mathbf{P})$ ?\vrule? $\mathbf{A} \leftarrow \fun{breed\_arrivals}(\mathbf{A},\mathbf{P},\var{cp}_a,\var{mp}_a)$ //GA ?\vrule? ?\vrule? //evolution: find best-case priority assignments ?\vrule? //objectives: safety margins and constraints ?\vrule? 
$\fun{evaluate\_internal\_fitness\_priorities}(\mathbf{P},\mathbf{A})$ ?\vrule? $\mathbf{P} \leftarrow \fun{breed\_priorities}(\mathbf{P},\mathbf{A},\var{cp}_p,\var{mp}_p)$ //NSGAII ?\vrule? ?\vrule? //external fitness evaluation ?\vrule? //objectives: safety margins and constraints ?\vrule? $\fun{evaluate\_external\_fitness}(\mathbf{P},\mathbf{E})$ ?\vrule? $\mathbf{B} \leftarrow \fun{select\_best}(\mathbf{P} \cup \mathbf{B})$ return $\mathbf{B}$ \end{lstlisting} \caption{Multi-objective two-population competitive coevolution for finding optimal priority assignments.} \label{fig:coevolution} \end{figure} The two populations sequentially evolve during the allotted analysis budget (see line 17 in Figure~\ref{fig:coevolution}). The best priority assignment is the one that makes tasks schedulable and maximizes the magnitude of safety margins, while satisfying engineering constraints for a given worst sequence of task arrivals. Hence, searching for the best priority assignments involves searching for the worst sequences of task arrivals. We create two populations $\mathbf{A}$ and $\mathbf{P}$ searching for the worst arrival sequences and the best priority assignments, respectively. The fitness values of task-arrival sequences in $\mathbf{A}$ are computed based on how well they challenge the priority assignments in $\mathbf{P}$, i.e., maximizing the magnitude of deadline misses (line 20). Likewise, the priority assignments in $\mathbf{P}$ are evaluated based on how well they perform against the task-arrival sequences in $\mathbf{A}$, i.e., maximizing the magnitude of safety margins while satisfying constraints (line 25). Once the two populations are assessed against each other, OPAM generates the next populations based on the computed fitness values (lines 21 and 26). OPAM tailors the breeding mechanisms of steady-state genetic algorithms (GA)~\citep{Whitley1988} for $\mathbf{A}$ and NSGAII~\citep{Deb2002} for $\mathbf{P}$. 
OPAM uses two types of fitness functions, namely internal and external fitness evaluations, which play a different and complementary role as described below. The two internal fitness evaluations in lines 20 and 25 of the listing in Figure~\ref{fig:coevolution} aim at selecting individuals -- task-arrival sequences and priority assignments -- for breeding the next $\mathbf{A}$ and $\mathbf{P}$ populations. OPAM evaluates the external fitness for the $\mathbf{P}$ population of priority assignments to find a best Pareto front (lines 28--31). As shown in lines 20 and 25, the internal fitness values of individuals in $\mathbf{A}$ (resp. $\mathbf{P}$) are computed based on how they perform with respect to individuals in $\mathbf{P}$ (resp. $\mathbf{A}$). Hence, an individual's internal fitness is assessed through interactions with competing individuals. For example, a priority assignment in the first generation may have acceptable fitness values regarding safety margins and constraint satisfaction with respect to the first generation of task-arrival sequences, which are likely far from worst-case sequences. However, priority assignment fitness may get worse in later generations as the task-arrival sequences evolve towards larger deadline misses. Thus, if OPAM simply monitors internal fitness, it cannot reliably detect coevolutionary progress as an individual's internal fitness changes according to competing individuals. The problem of monitoring progress in coevolution has been observed in many studies~\citep{Ficici2004,Popovici2012}. To address it, OPAM computes external fitness values of priority assignments in $\mathbf{P}$ based on a set $\mathbf{E}$ of task-arrival sequences generated independently from the coevolution process. By doing so, OPAM can observe the monotonic improvement of external fitness for priority assignments. 
We note that, in general, if interactions between two competing populations are finite and any interaction can be examined with non-zero probability at any time, monotonicity guarantees that a coevolutionary algorithm converges to a solution~\citep{Popovici2012}. We note that our approach for evolving task-arrival sequences is based on past work~\citep{Briand2005}, where a specific genetic algorithm configuration was proposed to find worst-case task-arrival sequences. One significant modification is that OPAM accounts for task relationships -- resource-dependency and task triggering relationships -- and a multi-core scheduling policy based on simulations to evaluate the magnitude of deadline misses. Following standard practice~\citep{Ralph2020}, the next sections describe OPAM in detail by defining the representations, the scheduler, the fitness functions, and the evolutionary algorithms for coevolving the task-arrival sequences and priority assignments. We then describe the external fitness evaluation of OPAM. \subsection{Representations} \label{subsec:representations} OPAM coevolves two populations of task-arrival sequences and priority assignments. A task-arrival sequence is defined by the tasks' inter-arrival time characteristics (see Section~\ref{sec:problem}). A priority assignment is defined by a function that maps priorities to tasks. \noindent\textbf{Task-arrival sequences.} Given a set $J$ of tasks to be scheduled, a feasible sequence of task arrivals is a set $A$ of tuples $(j, \fun{at}_k(j))$ where $j \in J$ and $\fun{at}_k(j)$ is the $k$th arrival time of a task $j$. Thus, a solution $A$ represents a valid sequence of task arrivals of $J$ (see valid $\fun{at}_k(j)$ computation in Section~\ref{sec:problem}). Let $\mathbb{T} = [0, \mathbf{T}]$ be the time period during which a scheduler receives task arrivals. The size of $A$ is equal to the number of task arrivals over the $\mathbb{T}$ time period. 
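A feasible task-arrival sequence as defined above can be constructed as in the following sketch, where periodic tasks arrive at fixed periods and aperiodic arrivals are drawn from their inter-arrival ranges; the interface and names are illustrative, not OPAM's actual code:

```python
import random

# Illustrative construction of a feasible task-arrival sequence A over
# the scheduling period [0, T]. A task is described by its inter-arrival
# range (pmin, pmax); pmin == pmax models a periodic task. The task
# descriptions and the interface are assumptions.

def generate_arrivals(tasks, T, rng=None):
    """tasks: dict name -> (pmin, pmax); returns sorted (name, time) tuples."""
    rng = rng or random.Random(0)
    A = []
    for name, (pmin, pmax) in tasks.items():
        t = 0
        while True:
            # next arrival: fixed period, or a draw from [pmin, pmax]
            t += pmin if pmin == pmax else rng.randint(pmin, pmax)
            if t > T:
                break
            A.append((name, t))
    return sorted(A, key=lambda x: x[1])
```

Because aperiodic inter-arrival times are drawn from a range, the number of arrivals, and hence the size of $A$, varies across generated sequences.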
Due to the varying inter-arrival times of aperiodic tasks (Section~\ref{sec:problem}), the size of $A$ will vary across different sequences. \noindent\textbf{Priority assignments.} Given a set $J$ of tasks to be scheduled, a feasible priority assignment is a list $\vv{P}$ of priority $\fun{pr}(j)$ for each task $j \in J$. OPAM assigns a non-negative integer to a priority $\fun{pr}(j)$ of $j$ such that priorities are comparable to one another. The size of $\vv{P}$ is equal to the number of tasks in $J$. Each task in $J$ has a unique priority. Hence, a priority assignment $\vv{P}$ is a permutation of all tasks' priorities. We note that these characteristics of priority assignments are common in many real-time analysis methods~\citep{Audsley2001,Davis2007,Zhao2017} and industrial systems (e.g., see our six industrial case study systems described in Section~\ref{subsec:industrial subjects}). \subsection{Simulation} \label{subsec:simulation} OPAM relies on simulation for analyzing the schedulability of tasks in a scalable way. For instance, an inter-arrival time of a software update task in a satellite system can be up to approximately three months. In such cases, conducting an analysis based on an actual scheduler is prohibitively expensive. Also, applying an exhaustive technique for schedulability analysis typically does not scale to an industrial system (e.g., see our experiment results using a model checker described in Section~\ref{subsec:threats}). Instead, OPAM uses a real-time task scheduling simulator, named OPAMScheduler, which applies a scheduling policy, i.e., single-queue multi-core scheduling policy~\citep{Arpaci2018}, based on discrete simulation time events. Note that we chose the single-queue multi-core scheduling policy for OPAMScheduler since our case study systems (described in Section~\ref{subsec:industrial subjects}) rely on this policy. 
OPAMScheduler takes as input a feasible task-arrival sequence $A$ and a priority assignment $\vv{P}$ for scheduling a set $J$ of tasks. It then outputs a schedule scenario as a set $S$ of tuples $(j,\fun{at}_k(j),\fun{et}_k(j))$ where $\fun{at}_k(j)$ and $\fun{et}_k(j)$ are the $k$th arrival and end time values of a task $j$, respectively (see Section~\ref{sec:problem}). For each task $j$, OPAMScheduler computes $\fun{et}_k(j)$ based on its WCET and scheduling policy while accounting for task relationships (see the $\fun{dp}(j,j^\prime)$ resource-dependency relationship and the $\fun{tr}(j,j^\prime)$ task triggering relationship in Section~\ref{sec:problem}). To simulate the worst-case executions of tasks, OPAMScheduler assigns tasks' WCETs to their execution times. \begin{sloppypar} OPAMScheduler implements a single-queue multi-core scheduling policy~\citep{Arpaci2018}, which schedules a task $j$ with explicit priority $\fun{pr}(j)$ and deadline $\fun{dl}(j)$. When tasks arrive, OPAMScheduler puts them into a single queue that contains tasks to be scheduled. At any simulation time, if there are tasks in the queue and multiple cores are available to execute tasks, OPAMScheduler first fetches a task $j$ from the queue in which $j$ has the highest priority $\fun{pr}(j)$. OPAMScheduler then allocates task $j$ to any available core. Note that if task $j$ shares a resource with a running task $j^\prime$ in another core, i.e., the $\fun{dp}(j,j^\prime)$ resource-dependency relationship holds, $j$ will be blocked until $j^\prime$ releases the shared resource. \end{sloppypar} OPAMScheduler works under the assumption that context switching time is negligible, which is also a working assumption in many scheduling analysis methods~\citep{Liu1973,Audsley2001,Alesio2015}. Note that the assumption is practically valid and useful at an early development step in the context of real-time analysis. 
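A minimal, non-preemptive sketch of such a single-queue multi-core, fixed-priority dispatch loop is given below. It assumes negligible context-switch time and, for brevity, omits resource dependencies and task triggering; all names and structures are illustrative, not OPAMScheduler's actual implementation:

```python
import heapq

# Sketch of a single-queue multi-core, fixed-priority scheduler with
# negligible context-switch time. Arriving tasks enter one shared ready
# queue; whenever a core is free, the highest-priority ready task is
# dispatched to it and runs for its WCET (non-preemptive simplification).

def simulate(arrivals, priority, wcet, n_cores):
    """arrivals: list of (task, arrival_time); returns {(task, k): end_time}."""
    events = sorted(arrivals, key=lambda x: x[1])
    free_at = [0] * n_cores        # time at which each core becomes available
    ready, count, out = [], {}, {}
    i, t = 0, 0
    while i < len(events) or ready:
        t = max(t, min(free_at))   # advance to the earliest free core
        # admit all tasks that have arrived by time t into the single queue
        while i < len(events) and events[i][1] <= t:
            task, at = events[i]
            k = count[task] = count.get(task, 0) + 1
            heapq.heappush(ready, (-priority[task], at, task, k))
            i += 1
        if not ready:              # idle until the next arrival
            t = events[i][1]
            continue
        _, at, task, k = heapq.heappop(ready)
        core = free_at.index(min(free_at))
        free_at[core] = max(t, at) + wcet[task]   # execute for the task's WCET
        out[(task, k)] = free_at[core]
    return out
```

For example, two tasks arriving at time 0 with WCETs 3 and 2 finish at times 3 and 5 on one core, but at times 3 and 2 on two cores.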
For instance, our collaborating partner, LuxSpace, accounts for the waiting time of tasks due to context switching between tasks through adding some extra time to WCET estimates at the task design stage. Note that OPAM can be applied with any scheduling policy, including those that account for context switching time and multiple queues. \subsection{Fitness functions} \label{subsec:fitness} \noindent\textbf{Internal fitness: deadline misses.} Given a feasible task-arrival sequence $A$ and a priority assignment $\vv{P}$, we formulate a function, $\fun{fd}(A,\vv{P})$, to quantify the degree of deadline misses regarding a set $J$ of tasks to be scheduled. To compute $\fun{fd}(A,\vv{P})$, OPAM runs OPAMScheduler for $A$ and $\vv{P}$ and obtains a schedule scenario $S$. We denote by $\fun{dist}_k(j)$ the distance between the end time and the deadline of the $k$th arrival of task $j$ observed in $S$ and define $\fun{dist}_k(j) = \fun{et}_k(j) - \big(\fun{at}_k(j) + \fun{dl}(j)\big)$ (see Section~\ref{sec:problem} for the notation end time $\fun{et}_k(j)$, arrival time $\fun{at}_k(j)$, and deadline $\fun{dl}(j)$). We denote by $\fun{lk}(j)$ the last arrival index of a task $j$ in $A$. Given a set $J$ of tasks to be scheduled, the $\fun{fd}(A,\vv{P})$ function is defined as follows: \begin{equation*} \fun{fd}(A,\vv{P}) = \sum_{j \in J, k \in [1,\mathit{lk}(j)]}2^{\fun{dist}_k(j)} \end{equation*} Note that $\fun{fd}(A,\vv{P})$ is defined as an exponential equation. Hence, when all task executions observed in a schedule scenario $S$ meet their deadlines, $\fun{fd}(A,\vv{P})$ is a small value as any distance $\fun{dist}_k(j)$ between the task end time and the deadline of the $k$th arrival of task $j$ is a negative value. In contrast, deadline misses result in positive values for $\fun{dist}_k(j)$. In such cases, $\fun{fd}(A,\vv{P})$ is a large value. 
The exponential form of $\fun{fd}(A,\vv{P})$ was precisely selected for this reason, to assign large values for deadline misses but small values when deadlines are met. By doing so, $\fun{fd}(A,\vv{P})$ prevents undesirable solutions in which many task executions meeting their deadlines obfuscate a smaller number of deadline misses. Following the principles of competitive coevolution, individuals in a population $\mathbf{A}$ of task-arrival sequences need to be assessed by pitting them against individuals in the other population $\mathbf{P}$ of priority assignments. We denote by $\fun{fd}(A,\mathbf{P})$ the internal fitness function that quantifies the overall magnitude of deadline misses across all priority assignments $\vv{P} \in \mathbf{P}$, regarding a set $J$ of tasks to be scheduled. The $\fun{fd}(A,\mathbf{P})$ fitness is used for breeding the next population of task-arrival sequences. OPAM aims to maximize $\fun{fd}(A,\mathbf{P})$, defined as follows: \begin{equation*} \fun{fd}(A,\mathbf{P}) = \sum_{\vv{P} \in \mathbf{P}}\fun{fd}(A,\vv{P})/|\mathbf{P}| \end{equation*} \noindent\textbf{Internal fitness: safety margins.} Given a feasible priority assignment $\vv{P}$ and a task-arrival sequence $A$, we denote by $\fun{fs}(\vv{P},A)$ the magnitude of safety margins regarding a set $J$ of tasks to be scheduled. The computation of $\fun{fs}(\vv{P},A)$ is similar to the computation of $\fun{fd}(A,\vv{P})$ regarding the use of OPAMScheduler, which outputs a schedule scenario $S$. The difference is that OPAM reverses the sign of $\fun{fd}(A,\vv{P})$ as OPAM aims at maximizing the magnitude of safety margins. 
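The deadline-miss fitness $\fun{fd}$ and its sign-reversed counterpart $\fun{fs}$ can be sketched as follows, with each task execution given as an (arrival time, end time, relative deadline) triple; this interface is an illustrative assumption:

```python
# Sketch of the deadline-miss fitness fd(A, P): dist_k(j) is the end
# time minus the absolute deadline (arrival + relative deadline) of the
# k-th arrival of task j. The exponential sum rewards deadline misses
# (dist > 0) far more than it credits met deadlines (dist < 0), so a
# few misses cannot be obfuscated by many met deadlines.

def fd(executions):
    """executions: list of (arrival_time, end_time, relative_deadline)."""
    return sum(2.0 ** (et - (at + dl)) for (at, et, dl) in executions)

def fs(executions):
    """Safety-margin fitness: the sign-reversed deadline-miss fitness."""
    return -fd(executions)
```

A task finishing 5 time units before its deadline contributes only $2^{-5}$, whereas one finishing 2 units late contributes $2^{2}$, dominating the sum.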
Given a set $J$ of tasks to be scheduled, the $\fun{fs}(\vv{P},A)$ function is defined as follows: \begin{equation*} \fun{fs}(\vv{P},A) = \sum_{j \in J, k \in [1,\mathit{lk}(j)]}-2^{\fun{dist}_k(j)} \text{\quad(i.e., }{-}\fun{fd}(A,\vv{P})\text{)} \end{equation*} Given two populations $\mathbf{P}$ and $\mathbf{A}$ of priority assignments and task-arrival sequences, similar to internal fitness $\fun{fd}(A,\mathbf{P})$, priority assignments in $\mathbf{P}$ need to be assessed against task-arrival sequences in $\mathbf{A}$. We formulate an internal fitness function, $\fun{fs}(\vv{P},\mathbf{A})$, to quantify the overall magnitude of safety margins across all task-arrival sequences $A \in \mathbf{A}$, regarding a set $J$ of tasks to be scheduled and a priority assignment $\vv{P}$. OPAM relies on the $\fun{fs}(\vv{P},\mathbf{A})$ function to breed the next population of priority assignments. OPAM aims to maximize $\fun{fs}(\vv{P},\mathbf{A})$, which is defined as follows: \begin{equation*} \fun{fs}(\vv{P},\mathbf{A}) = \sum_{A \in \mathbf{A}}\fun{fs}(\vv{P},A)/|\mathbf{A}| \end{equation*} \noindent\textbf{Internal fitness: constraints.} Given a priority assignment $\vv{P}$, we formulate an internal fitness function, $\fun{fc}(\vv{P})$, to quantify the degree of satisfaction of soft constraints set by engineers. Such a function is required as we recast the satisfaction of such constraints into an optimization problem, in order to minimize constraint violations. Specifically, OPAM accounts for the following constraint: aperiodic tasks should have lower priorities than those of periodic tasks. Recall from Section~\ref{sec:motivation} that engineers consider this constraint to be desirable. We denote by $\fun{lp}(\vv{P})$ the lowest priority of periodic tasks in $\vv{P}$. 
For a set $J$ of tasks to be scheduled, OPAM aims to maximize $\fun{fc}(\vv{P})$, which is defined as follows: \begin{equation*} \fun{fc}(\vv{P}) = \sum_{j \in J} \begin{cases} \fun{lp}(\vv{P}) - \fun{pr}(j) \text{, if $j$ is an aperiodic task}\\ 0 \text{, otherwise} \end{cases} \end{equation*} Greater $\fun{pr}(j)$ values denote higher priorities. Given a priority assignment $\vv{P}$, if $\fun{pr}(j)$ for an aperiodic task $j$ is lower than the priority of any of the periodic tasks, $\fun{lp}(\vv{P}) - \fun{pr}(j)$ is a positive value. OPAM measures the difference between priorities of aperiodic and periodic tasks. By doing so, $\fun{fc}(\vv{P})$ rewards aperiodic tasks that satisfy the above constraint and consistently penalizes those that violate it. Hence, OPAM aims at maximizing $\fun{fc}(\vv{P})$. \noindent\textbf{External fitness: safety margins and constraints.} To examine the quality of priority assignments and monitor the progress of coevolution, OPAM takes as input a set $\mathbf{E}$ of task-arrival sequences created independently from the coevolution process. Given a set $\mathbf{E}$ of task-arrival sequences and a priority assignment $\vv{P}$, OPAM utilizes $\fun{fs}(\vv{P},\mathbf{E})$ and $\fun{fc}(\vv{P})$ described above as external fitness functions for quantifying the magnitude of safety margins and the extent of constraint satisfaction, respectively. As $\mathbf{E}$ does not change over the coevolution process, $\fun{fs}(\vv{P},\mathbf{E})$ is used for evaluating a priority assignment $\vv{P}$ since it is not impacted by the evolution of task-arrival sequences. Hence, external fitness functions ensure that OPAM monitors the progress of coevolution in a stable manner. 
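The constraint fitness $\fun{fc}$ above can be sketched as follows; keying tasks by name and the function interface are illustrative assumptions:

```python
# Sketch of the constraint fitness fc(P): each aperiodic task is
# rewarded by its distance below the lowest periodic priority lp(P)
# and penalized (negative contribution) when it is above lp(P).
# Greater numbers denote higher priorities.

def fc(priorities, periodic):
    """priorities: dict task -> priority; periodic: set of periodic tasks."""
    lp = min(priorities[j] for j in periodic)   # lowest periodic priority
    return sum(lp - p for j, p in priorities.items() if j not in periodic)
```

With periodic priorities $\{5, 4\}$ and aperiodic priorities $\{1, 2\}$, the constraint holds and $\fun{fc} = (4-1) + (4-2) = 5$; an aperiodic task above the lowest periodic priority contributes negatively.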
Given two populations $\mathbf{P}$ and $\mathbf{A}$ of priority assignments and task-arrival sequences, we recall that the $\fun{fd}(A,\mathbf{P})$ internal fitness function quantifies the overall magnitude of deadline misses across all priority assignments in $\mathbf{P}$ for the given sequence of task arrivals $A$. The $\fun{fs}(\vv{P},\mathbf{A})$ internal fitness function quantifies the overall magnitude of safety margins across all sequences of task arrivals in $\mathbf{A}$ for the given priority assignment $\vv{P}$. Hence, the internal fitness of $A$ (resp. $\vv{P}$) is assessed through interactions with competing individuals in $\mathbf{P}$ (resp. $\mathbf{A}$). Therefore, if OPAM relies only on the internal fitness functions, it cannot gauge the progress of coevolution in a stable manner as an individual's internal fitness depends on competing individuals. We note that soft-deadline tasks are also required to execute within a reasonable time, i.e., their (soft) deadline. As the above fitness functions return quantified degrees of deadline misses and safety margins, OPAM uses the same fitness functions for both soft and hard deadline tasks. \subsection{Evolution: Worst-case task arrivals} \label{subsec:evolution arrivals} \begin{figure}[t] \begin{lstlisting}[style=Alg] Algorithm Task-arrival sequences evolution Input $\mathbf{A}$: population of task-arrival sequences Input $\mathbf{P}$: population of priority assignments Input $\var{cp}_a$: crossover probability //task-arrival sequences Input $\var{mp}_a$: mutation probability //task-arrival sequences Output $\mathbf{A}$: population of task-arrival sequences //evaluate internal fitness values for ?$\color{javagreen}\mathbf{A}$? for each $A_i \in \mathbf{A}$ ?\vrule? for each $\vv{P}_l \in \mathbf{P}$ ?\vrule? ?\vrule? $S \leftarrow \fun{simulate}(A_i,\vv{P}_l)$ //OPAMScheduler ?\vrule? ?\vrule? //?$\color{javagreen}\fun{dist}_k(j)$? is computed based on ?$\color{javagreen}S$? ?\vrule? ?\vrule? 
$\fun{fd}(A_i,\vv{P}_l) = \sum_{j \in J, k \in [1,\mathit{lk}(j)]}2^{\fun{dist}_k(j)}$ ?\vrule? $\fun{fd}(A_i,\mathbf{P}) = \sum_{\vv{P}_l \in \mathbf{P}}\fun{fd}(A_i,\vv{P}_l)/|\mathbf{P}|$ //breed task-arrival sequences $\var{parents} \leftarrow \fun{select\_arrivals}(\mathbf{A})$ $\var{offspring} \leftarrow \fun{crossover\_arrivals}(\var{parents},\var{cp}_a)$ $\var{offspring} \leftarrow \fun{mutate\_arrivals}(\var{offspring},\var{mp}_a)$ //evaluate internal fitness values for ?$\color{javagreen}\var{offspring}$? for each $A_i \in \var{offspring}$ ?\vrule? for each $\vv{P}_l \in \mathbf{P}$ ?\vrule? ?\vrule? $S \leftarrow \fun{simulate}(A_i,\vv{P}_l)$ //OPAMScheduler ?\vrule? ?\vrule? //?$\color{javagreen}\fun{dist}_k(j)$? is computed based on ?$\color{javagreen}S$? ?\vrule? ?\vrule? $\fun{fd}(A_i,\vv{P}_l) = \sum_{j \in J, k \in [1,\mathit{lk}(j)]}2^{\fun{dist}_k(j)}$ ?\vrule? $\fun{fd}(A_i,\mathbf{P}) = \sum_{\vv{P}_l \in \mathbf{P}}\fun{fd}(A_i,\vv{P}_l)/|\mathbf{P}|$ $\mathbf{A} \leftarrow \fun{replace\_arrivals}(\mathbf{A},\var{offspring})$ return $\mathbf{A}$ \end{lstlisting} \caption{A steady-state GA-based algorithm for evolving task-arrival sequences.} \label{fig:GA} \end{figure} The algorithm in Figure~\ref{fig:GA} describes in detail the evolution of task-arrival sequences in lines 18--21 of the listing in Figure~\ref{fig:coevolution}. OPAM adapts a steady-state Genetic Algorithm (GA)~\citep{Luke2013} for evolving task-arrival sequences. As shown in lines 8--14, OPAM first evaluates each task-arrival sequence in the $\mathbf{A}$ population against the $\mathbf{P}$ population of priority assignments. OPAM executes OPAMScheduler to obtain a schedule scenario $S$ for a task-arrival sequence $A_i \in \mathbf{A}$ and a priority assignment $\vv{P}_l \in \mathbf{P}$ (line 11). OPAM then computes the internal fitness $\fun{fd}(A_i,\mathbf{P})$ capturing the magnitude of deadline misses (lines 12--14). 
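The internal fitness computation in lines 12--14 can be sketched as follows. This is a minimal sketch under the assumption that the $\fun{dist}_k(j)$ distances (positive values indicating deadline misses) have already been extracted from the schedule scenario $S$; the list-based representation is an illustration, not OPAM's actual data structure.

```python
# Hedged sketch of fd. Assumption: `dists` holds the dist_k(j) values that
# OPAMScheduler derives from a schedule scenario S (positive = deadline miss).

def fd_single(dists):
    """fd(A_i, P_l): sum of 2^dist over all executions of all tasks."""
    return sum(2 ** d for d in dists)

def fd(dists_per_assignment):
    """fd(A_i, P): fd(A_i, P_l) averaged over the population P."""
    return (sum(fd_single(d) for d in dists_per_assignment)
            / len(dists_per_assignment))
```

The exponential rewards task-arrival sequences that push executions close to, or past, their deadlines far more than those that leave large safety margins.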
We note that a steady-state GA iteratively breeds offspring, assesses their fitness, and then reintroduces them into a population. However, OPAM computes the internal fitness of all task-arrival sequences in $\mathbf{A}$ at every generation. This is because internal fitness is computed in relation to $\mathbf{P}$, which is coevolving with $\mathbf{A}$. Breeding the next population is done by using the following genetic operators: (1)~\emph{Selection:} OPAM selects candidate task-arrival sequences using a tournament selection technique, with the tournament size equal to two, which is the most common setting~\citep{Gendreau2010} (line 17 in Figure~\ref{fig:GA}). (2)~\emph{Crossover:} Selected candidate task-arrival sequences serve as parents to create offspring using a crossover operation (line 18). (3)~\emph{Mutation:} The offspring are then mutated (line 19). Below, we describe our crossover and mutation operators. \emph{Crossover.} A crossover operator is used to produce offspring by mixing traits of parent solutions. OPAM modifies the standard one-point crossover operator~\citep{Luke2013} as two parent task-arrival sequences $A_p$ and $A_q$ may have different sizes, i.e., $|A_p| \neq |A_q|$. Let $J = \{j_1, j_2, \ldots, j_m\}$ be a set of tasks to be scheduled. Our crossover operator first randomly selects an aperiodic task $j_r \in J$. For all $i \in [1,r]$ and $j_i \in J$, OPAM then swaps all $j_i$ arrivals between the two task-arrival sequences $A_p$ and $A_q$. Since $J$ is fixed for all solutions, OPAM can cross over two solutions that may have different sizes. \emph{Mutation.} OPAM uses a heuristic mutation algorithm. For a task-arrival sequence $A$, OPAM mutates the $k$th task arrival time $\fun{at}_k(j)$ of an aperiodic task $j$ with a given mutation probability. OPAM chooses a new arrival time value of $\fun{at}_k(j)$ based on the $[\mathit{pmin}(j), \mathit{pmax}(j)]$ inter-arrival time range of $j$.
If such a mutation of the $k$th arrival time of $j$ does not affect the validity of the $k{+}1$th arrival time of $j$, the mutation operation ends. Specifically, let $d$ be a mutated value of $\fun{at}_k(j)$. In case $\fun{at}_{k+1}(j) \in [d + \mathit{pmin}(j), d + \mathit{pmax}(j)]$, OPAM returns the mutated $A$ task-arrival sequence. After mutating the $k$th arrival time $\fun{at}_k(j)$ of a task $j$ in a solution $A$, if the $k{+}1$th arrival becomes invalid, OPAM corrects the remaining arrivals of $j$. Let $o$ and $d$ be, respectively, the original and mutated $k$th arrival time of $j$. For all the arrivals of $j$ after $d$, OPAM first updates their original arrival time values by adding the difference $d-o$. Let $\mathbb{T} = [0,\mathbf{T}]$ be the scheduling period. OPAM then removes some arrivals of $j$ if they are mutated to arrive after $\mathbf{T}$ or adds new arrivals of $j$ while ensuring that all tasks arrive within $\mathbb{T}$. As shown in lines 20--26 in Figure~\ref{fig:GA}, the internal fitness of the generated offspring is computed based on the $\mathbf{P}$ population. OPAM then updates the $\mathbf{A}$ population of task-arrival sequences by comparing the offspring and individuals in $\mathbf{A}$ (line 27). We note that when a system is only composed of periodic tasks, OPAM will skip evolving for worst-case arrival sequences as arrivals of periodic tasks are deterministic (see Section~\ref{sec:problem}). Nevertheless, OPAM will optimize priority assignments based on given arrivals of periodic tasks. When needed, OPAM can be easily extended to manipulate offset and period values for periodic tasks, in a way identical to how we currently handle inter-arrival times for aperiodic tasks. 
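The crossover and mutation operators described above can be sketched as follows. This is a minimal sketch under stated assumptions: a task-arrival sequence is represented as a dict mapping each task to its sorted list of arrival times, and the repair policy of appending new arrivals at the maximum inter-arrival time is an illustrative choice, not necessarily OPAM's exact one.

```python
import random

# Hedged sketches of the crossover and repair-based mutation described above.
# Assumptions: dict-of-arrival-lists representation; refilling the horizon
# at pmax intervals after a shift is an illustrative repair policy.

def crossover_arrivals(A_p, A_q, tasks, r=None):
    """One-point crossover over tasks: swap the arrivals of tasks
    j_1..j_r between the two parents (r drawn at random by default)."""
    if r is None:
        r = random.randrange(1, len(tasks) + 1)
    child_p, child_q = dict(A_p), dict(A_q)
    for j in tasks[:r]:
        child_p[j], child_q[j] = A_q[j], A_p[j]
    return child_p, child_q

def mutate_arrival(arrivals, k, new_time, pmin, pmax, T):
    """Mutate the k-th arrival to new_time; if the (k+1)-th arrival
    becomes invalid, shift the later arrivals by the same offset and
    trim/extend so that all arrivals stay within [0, T]."""
    nxt = arrivals[k + 1] if k + 1 < len(arrivals) else None
    if nxt is not None and new_time + pmin <= nxt <= new_time + pmax:
        # the (k+1)-th arrival stays valid: mutate in place, no repair
        return arrivals[:k] + [new_time] + arrivals[k + 1:]
    delta = new_time - arrivals[k]
    repaired = arrivals[:k] + [t + delta for t in arrivals[k:]]
    repaired = [t for t in repaired if t <= T]       # drop arrivals past T
    while repaired and repaired[-1] + pmax <= T:     # refill up to the horizon
        repaired.append(repaired[-1] + pmax)
    return repaired
```

Because crossover swaps arrivals task by task rather than position by position, the two parents never need to contain the same number of arrivals.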
\subsection{Evolution: Best-case priority assignments} \label{subsec:evolution priorities} \begin{figure}[t] \begin{lstlisting}[style=Alg] Algorithm Priority assignments evolution Input $\mathbf{A}$: population of task-arrival sequences Input $\mathbf{P}$: population of priority assignments Input $\var{ps}_p$: population size //priority assignments Input $\var{cp}_p$: crossover probability //priority assignments Input $\var{mp}_p$: mutation probability //priority assignments Output $\mathbf{P}$: population of priority assignments //evaluate internal fitness values for ?$\color{javagreen}\mathbf{P}$? for each $\vv{P}_i \in \mathbf{P}$ ?\vrule? for each $A_l \in \mathbf{A}$ ?\vrule? ?\vrule? $S \leftarrow \fun{simulate}(A_l,\vv{P}_i)$ //OPAMScheduler ?\vrule? ?\vrule? //?$\color{javagreen}\fun{dist}_k(j)$? is computed based on ?$\color{javagreen}S$? ?\vrule? ?\vrule? $\fun{fs}(\vv{P}_i,A_l) = \sum_{j \in J, k \in [1,\mathit{lk}(j)]}-2^{\fun{dist}_k(j)}$ ?\vrule? $\fun{fs}(\vv{P}_i,\mathbf{A}) = \sum_{A_l \in \mathbf{A}}\fun{fs}(\vv{P}_i,A_l)/|\mathbf{A}|$ ?\vrule? ?$\fun{fc}(\vv{P}_i) = \sum_{j \in J} \begin{cases} \fun{lp}(\vv{P}_i) - \fun{pr}(j) \text{, if $j$ is an aperiodic task}\\ 0 \text{, otherwise} \end{cases}$? //breed priority assignments $\vv{R} \leftarrow \fun{sort\_non\_dominated\_fronts}(\mathbf{P})$ $\fun{assign\_crowding\_distance}(\vv{R})$ $\mathbf{P}_\alpha \leftarrow \fun{NSGAII\_breed}(\vv{R},\var{ps}_p,\var{cp}_p,\var{mp}_p)$ //evaluate internal fitness values for ?$\color{javagreen}\mathbf{P}_\alpha$? for each $\vv{P}_i \in \mathbf{P}_\alpha$ ?\vrule? for each $A_l \in \mathbf{A}$ ?\vrule? ?\vrule? $S \leftarrow \fun{simulate}(A_l,\vv{P}_i)$ //OPAMScheduler ?\vrule? ?\vrule? //?$\color{javagreen}\fun{dist}_k(j)$? is computed based on ?$\color{javagreen}S$? ?\vrule? ?\vrule? $\fun{fs}(\vv{P}_i,A_l) = \sum_{j \in J, k \in [1,\mathit{lk}(j)]}-2^{\fun{dist}_k(j)}$ ?\vrule?
$\fun{fs}(\vv{P}_i,\mathbf{A}) = \sum_{A_l \in \mathbf{A}}\fun{fs}(\vv{P}_i,A_l)/|\mathbf{A}|$ ?\vrule? ?$\fun{fc}(\vv{P}_i) = \sum_{j \in J} \begin{cases} \fun{lp}(\vv{P}_i) - \fun{pr}(j) \text{, if $j$ is an aperiodic task}\\ 0 \text{, otherwise} \end{cases}$? $\vv{R} \leftarrow \fun{sort\_non\_dominated\_fronts}(\mathbf{P} \cup \mathbf{P}_\alpha)$ $\fun{assign\_crowding\_distance}(\vv{R})$ $\mathbf{P} \leftarrow \fun{select\_archive}(\vv{R},\var{ps}_p)$ return $\mathbf{P}$ \end{lstlisting} \caption{An NSGAII-based algorithm for evolving priority assignments.} \label{fig:NSGAII} \end{figure} Figure~\ref{fig:NSGAII} shows the evolution procedure of priority assignments, which refines lines 23--26 in Figure~\ref{fig:coevolution}. OPAM tailors the Non-dominated Sorting Genetic Algorithm version 2 (NSGAII)~\citep{Deb2002} to generate a non-dominated set of equally viable priority assignments, representing the best trade-offs found among the given internal fitness functions. This is referred to as a Pareto non-dominated front~\citep{Knowles2000}, where the dominance relation over priority assignments is defined as follows: a priority assignment $\vv{P}$ dominates another priority assignment $\vv{P}^\prime$ if $\vv{P}$ is not worse than $\vv{P}^\prime$ in all fitness values, and $\vv{P}$ is strictly better than $\vv{P}^\prime$ in at least one fitness value. NSGAII has been applied to many multi-objective optimization problems~\citep{Langdon2010,Shin2018,Wang2020}. OPAM maintains a population $\mathbf{P}$ of priority assignments as an archive that contains the best priority assignments discovered during coevolution. Unlike a standard application of NSGAII, in our study, we need to reevaluate the internal fitness values for priority assignments in $\mathbf{P}$ at every generation as the internal fitness values are computed based on the $\mathbf{A}$ population of task-arrival sequences, which coevolves.
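The dominance relation just defined can be sketched as follows, assuming fitness vectors of equal length whose components are all maximized (matching the internal fitness functions above).

```python
# Hedged sketch of the Pareto dominance check: P dominates P' iff it is
# no worse on every fitness value and strictly better on at least one.
# Assumption: all objectives are maximized.

def dominates(f_p, f_q):
    return (all(a >= b for a, b in zip(f_p, f_q))
            and any(a > b for a, b in zip(f_p, f_q)))
```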
As shown in lines 9--16 in Figure~\ref{fig:NSGAII}, OPAM first computes the internal fitness functions that measure the magnitude of safety margins and the extent of constraint satisfaction. OPAM then sorts non-dominated Pareto fronts (line 19) and assigns crowding distance (line 20) to introduce diversity among non-dominated priority assignments~\citep{Deb2002}. For breeding the next population of priority assignments (line 21 in Figure~\ref{fig:NSGAII}), OPAM applies the following standard genetic operators~\citep{Sivan2008} that have been applied to many similar problems~\citep{Islam2012,Marchetto2016,Shin2018}: (1)~\emph{Selection.} OPAM uses a binary tournament selection based on non-domination ranking and crowding distance. The binary tournament selection has been used in the original implementation of NSGAII~\citep{Deb2002}. (2)~\emph{Crossover.} OPAM applies a partially mapped crossover (PMX)~\citep{Goldberg1985}. PMX ensures that the generated offspring are valid permutations of priorities. (3)~\emph{Mutation.} OPAM uses a permutation swap method for mutating a priority assignment. This mutation method interchanges two randomly-selected priorities in a priority assignment according to a given mutation probability. For the generated population $\mathbf{P}_\alpha$ of priority assignments, OPAM computes the two internal fitness functions (lines 22--29 in Figure~\ref{fig:NSGAII}). OPAM then sorts non-dominated Pareto fronts for the union of the current $\mathbf{P}$ and next $\mathbf{P}_\alpha$ populations (line 30), assigns crowding distance (line 31), and selects the best archive by accounting for the computed non-domination ranking and crowding distance (line 32).
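For illustration, the crowding distance assignment used in lines 20 and 31 can be sketched as follows. This is a minimal version of NSGA-II's crowding distance~\citep{Deb2002}, not OPAM's actual code; the list-of-fitness-tuples representation is an assumption.

```python
# Hedged sketch of NSGA-II crowding distance: boundary solutions on each
# objective get an infinite distance so the extremes of a front survive;
# interior solutions accumulate the normalized gap between their neighbors.

def crowding_distance(front):
    n, m = len(front), len(front[0])
    dist = {i: 0.0 for i in range(n)}
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue  # all solutions equal on this objective
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][obj]
                        - front[order[pos - 1]][obj]) / (hi - lo)
    return dist
```

Solutions with larger crowding distances sit in less crowded regions of the front and are preferred during selection, which promotes diversity.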
\subsection{External fitness evaluation} \label{subsec:external} \begin{figure}[t] \begin{lstlisting}[style=Alg] Algorithm External fitness evaluation Input $\mathbf{E}$: set of task-arrival sequences //external evaluation Input $\mathbf{P}$: population of priority assignments Input $\var{ps}_p$: population size //priority assignments Input $\var{cp}_p$: crossover probability //priority assignments Input $\var{mp}_p$: mutation probability //priority assignments Output $\mathbf{B}$: best Pareto front of priority assignments //evaluate external fitness values for ?$\color{javagreen}\mathbf{P}$? for each $\vv{P}_i \in \mathbf{P}$ ?\vrule? for each $E_l \in \mathbf{E}$ ?\vrule? ?\vrule? $S \leftarrow \fun{simulate}(E_l,\vv{P}_i)$ //OPAMScheduler ?\vrule? ?\vrule? //?$\color{javagreen}\fun{dist}_k(j)$? is computed based on ?$\color{javagreen}S$? ?\vrule? ?\vrule? $\fun{fs}(\vv{P}_i,E_l) = \sum_{j \in J, k \in [1,\mathit{lk}(j)]}-2^{\fun{dist}_k(j)}$ ?\vrule? $\fun{fs}(\vv{P}_i,\mathbf{E}) = \sum_{E_l \in \mathbf{E}}\fun{fs}(\vv{P}_i,E_l)/|\mathbf{E}|$ ?\vrule? ?$\fun{fc}(\vv{P}_i) = \sum_{j \in J} \begin{cases} \fun{lp}(\vv{P}_i) - \fun{pr}(j) \text{, if $j$ is an aperiodic task}\\ 0 \text{, otherwise} \end{cases}$? $\vv{R} \leftarrow \fun{sort\_non\_dominated\_fronts}(\mathbf{P} \cup \mathbf{B})$ $\fun{assign\_crowding\_distance}(\vv{R})$ $\mathbf{B} \leftarrow \fun{select\_best\_front}(\vv{R})$ //?$\color{javagreen}|\mathbf{B}| \le |\mathbf{P}|$? return $\mathbf{B}$ \end{lstlisting} \caption{An algorithm for evaluating external fitness and finding the best Pareto front.} \label{fig:external} \end{figure} Figure~\ref{fig:external} shows an algorithm that computes the external fitness functions and finds the best Pareto front, which refines lines 28--31 in Figure~\ref{fig:coevolution}. To monitor the coevolution progress in a stable manner, OPAM takes as input a set $\mathbf{E}$ of task-arrival sequences that are generated independently from the coevolution process.
We use an adaptive random search technique~\citep{Chen2010} to sample task-arrival sequences in order to create $\mathbf{E}$. The adaptive random search extends the naive random search by maximizing the Euclidean distance between the sampled points, thereby maximizing the diversity of task-arrival sequences in $\mathbf{E}$. As shown in lines 9--16 in Figure~\ref{fig:external}, OPAM computes the two external fitness values for each priority assignment in the $\mathbf{P}$ population based on a given set $\mathbf{E}$ of task-arrival sequences. OPAM then sorts non-dominated Pareto fronts for the union of the $\mathbf{P}$ population and the current best Pareto front (line 17), assigns crowding distance (line 18), and selects the best Pareto front by accounting for the computed non-domination ranking and crowding distance (line 19). OPAM adopts NSGAII in order to maximize the diversity of priority assignments in the best Pareto front. \section{Conclusion} \label{sec:conclusion} We developed OPAM, a priority assignment method for real-time systems, which aims to find equally viable priority assignments that maximize the magnitude of safety margins and the extent to which engineering constraints are satisfied. OPAM uses a novel approach, based on multi-objective, competitive coevolutionary search, that simultaneously evolves different species, i.e., populations of priority assignments and stress test scenarios, which compete with one another with opposite objectives: the former tries to minimize the chances of deadline misses while the latter attempts to maximize them. We evaluated OPAM on a number of synthetic systems as well as six industrial systems from different domains. The results indicate that OPAM is able to find significantly better solutions than both those manually defined by engineers based on expert knowledge and those obtained by our baselines: random search and sequential search.
Further, OPAM scales linearly with the number of tasks in a system and the time required to simulate task executions. Execution times on our industrial systems are practically acceptable. \textcolor{rev3}{In the future, we will continue to study the problem of optimal priority assignment by accounting for (1)~priority assignments that change dynamically, (2)~WCET value ranges that account for non-deterministic computation times, (3)~interrupt handling routines that execute differently compared to real-time tasks, and (4)~hybrid scheduling policies that combine multiple standard policies.} We also plan to develop a real-time task modeling language to specify task characteristics such as resource dependencies, triggering relationships, engineering constraints, and behaviors of real-time tasks and to facilitate real-time system analysis, e.g., optimal priority assignment and schedulability analysis. In addition, we would like to incorporate additional analysis capabilities into OPAM in order to verify whether or not a system satisfies the required properties, e.g., schedulability of tasks and absence of deadlocks, for a given priority assignment. For example, statistical model checking~\citep{Legay2010} may allow us to verify whether tasks meet their deadlines for a given priority assignment with a probabilistic guarantee. In the long term, we plan to more conclusively validate the usefulness of OPAM by applying it to additional case studies in different application domains. \subsection{Evaluation metrics} \label{subsec:metrics} \noindent\textbf{Multi-objective evaluation metrics.} In order to fairly compare the results of search algorithms, based on existing guidelines~\citep{Chen2020} for assessing multi-objective search algorithms, we use complementary quality indicators: \emph{Hypervolume} (HV)~\citep{Zitzler1999}, \emph{Pareto Compliant Generational Distance} (GD+)~\citep{Ishibuchi2015}, and \emph{Spread} ($\Delta$)~\citep{Deb2002}. 
To compute the GD+ and $\Delta$ quality indicators, following the usual procedure~\citep{Chen2020}, we create a reference Pareto front as the union of all the non-dominated solutions obtained from all runs of the algorithms being compared. Identifying the optimal (ideal) Pareto front is typically infeasible for a complex optimization problem~\citep{Chen2020}. Key features of the three quality indicators are described below. \begin{itemize}[leftmargin=1em] \item HV is defined to measure the volume in the objective space that is covered by members of a Pareto front generated by a search algorithm~\citep{Zitzler1999}. The higher the HV values, the more optimal the search outputs. \item GD+ is defined to measure the distance between the points on a Pareto front obtained from a search algorithm and the nearest points on a reference Pareto front~\citep{Ishibuchi2015}. GD+ modifies Generational Distance (GD)~\citep{Veldhuizen1998} to account for the dominance relations when computing the distances. The lower the GD+ values, the more optimal the search outputs. \item $\Delta$ is defined to measure the extent of spread among the points on a Pareto front computed by a search algorithm~\citep{Deb2002}. We note that OPAM aims at obtaining a wide variety of equally-viable priority assignments on a Pareto front (see Section~\ref{sec:approach}). The lower the $\Delta$ values, the more uniformly spread out the search outputs. \end{itemize} \noindent\textbf{Interpretable metrics.} The two external fitness functions described in Section~\ref{sec:approach} mainly aim at effectively guiding search. It is, however, difficult for practitioners to interpret the computed fitness values. Hence, to assess the usefulness of OPAM from a practitioner perspective, we measure (1)~the safety margins from tasks' completion times to their deadlines across our experiments and (2)~the number of constraint violations in a priority assignment.
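For illustration, the GD+ indicator described above can be sketched as follows. This is a minimal sketch assuming all objectives are minimized (for maximized objectives the sign of the per-component difference flips); it is not the jMetal implementation we actually use.

```python
import math

# Hedged sketch of GD+ (objectives assumed minimized): each point of the
# obtained front is charged its dominance-aware distance to the nearest
# reference point; components on which the point is already better than
# the reference contribute nothing.

def gd_plus(front, reference):
    def d_plus(a, r):
        return math.sqrt(sum(max(ai - ri, 0.0) ** 2
                             for ai, ri in zip(a, r)))
    return sum(min(d_plus(a, r) for r in reference) for a in front) / len(front)
```

Because only the "worse" components count, a front point that dominates its nearest reference point is charged a distance of zero, which is what makes GD+ Pareto compliant.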
In addition, we measure the execution time and memory usage of OPAM. \noindent\textbf{Statistical comparison metrics.} To statistically compare our experiment results, we use the Mann-Whitney U-test~\citep{Mann1947} and Vargha and Delaney's $\hat{A}_{12}$ effect size~\citep{Vargha2000}, which have been frequently applied for evaluating search-based algorithms~\citep{Arcuri2010, Hemmati2013, Shin2018}. The Mann-Whitney U-test determines whether two independent samples are likely to belong to the same distribution. We set the level of significance, $\alpha$, to 0.05. Vargha and Delaney's $\hat{A}_{12}$ measures probabilistic superiority -- effect size -- between search algorithms. Two algorithms are considered to be equivalent when the value of $\hat{A}_{12}$ is 0.5. \subsection{Parameter tuning and implementation} \label{subsec:param} \noindent\textbf{Parameters for coevolutionary search.} For the coevolutionary search parameters, we set the population size to 10, the crossover rate to 0.8, and the mutation rate to $1/|J|$, where $|J|$ denotes the number of tasks. We apply these parameter values for both the evolution of task-arrival sequences and priority assignments (see Section~\ref{sec:approach}). These values are determined based on existing guidelines~\citep{Arcuri2011,Sayyad2013} and previous work~\citep{Lee2020}. We determine the number of coevolution cycles (see Section~\ref{sec:approach}) based on an initial experiment. We applied OPAM to the six industrial subjects and ran it 50 times for each subject. From the experiment results, we observed that there is no notable difference in the Pareto fronts generated after 1000 cycles. Hence, we set the number of coevolution cycles to 1000 in our experiments, i.e., EXP1, EXP2, and EXP3 described in Section~\ref{subsec:design}.
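The $\hat{A}_{12}$ effect size described above can be sketched as follows: it estimates the probability that a value drawn from one sample exceeds a value drawn from the other, counting ties as one half (a minimal sketch, not our actual statistics tooling).

```python
# Hedged sketch of Vargha and Delaney's A12: the probability that a value
# from sample x is larger than one from sample y, with ties counted as 0.5.

def a12(x, y):
    greater = sum(1 for xi in x for yi in y if xi > yi)
    ties = sum(1 for xi in x for yi in y if xi == yi)
    return (greater + 0.5 * ties) / (len(x) * len(y))
```

A value of 0.5 indicates no difference between the two algorithms; values near 0 or 1 indicate that one algorithm is almost always better than the other.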
\noindent\textbf{Parameters for evaluating fitness functions.} To evaluate external fitness functions, we use a set of task-arrival sequences that are generated independently from the coevolution process (see Section~\ref{subsec:external}). We use an adaptive random search~\citep{Chen2010} to generate a set $\mathbf{E}$ of task-arrival sequences, which varies task arrival times within the specified inter-arrival time ranges of aperiodic tasks. We set the size of $\mathbf{E}$ to 10. From our initial experiment, we observed that this is sufficient to compute the external fitness functions of OPAM within a reasonable time, i.e., less than 15s. We note that $\mathbf{E}$ contains two default sequences of task arrivals: (seq.~1)~aperiodic tasks always arrive at their maximum inter-arrival times and (seq.~2)~aperiodic tasks always arrive at their minimum inter-arrival times. By having these two sequences of task arrivals as initial elements in $\mathbf{E}$, the adaptive random search finds other sequences of task arrivals that maximize the diversity of elements in $\mathbf{E}$. If a system contains only periodic tasks, the simulation time is often set to the least common multiple (LCM) of their periods to account for all possible arrivals~\citep{Peng1997}. However, as the six industrial subjects include aperiodic tasks, this is not applicable. For the experiments with the six industrial subjects, we set the simulation time to the maximum of the LCM of the periodic tasks' periods and the largest inter-arrival time among the aperiodic tasks. By doing so, all possible arrival patterns of periodic tasks are examined and every aperiodic task arrives at least once during simulation. Recall from Section~\ref{subsec:evolution arrivals} that OPAM varies arrival times of aperiodic tasks to find worst-case sequences of task arrivals. We note that the parameters mentioned above could probably be further tuned to improve the performance of our approach.
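The simulation-horizon rule described above can be sketched as follows (a minimal sketch using Python's `math.lcm`, which requires Python 3.9+; the argument representation is an assumption).

```python
import math

# Hedged sketch of the simulation-horizon rule: the horizon is the larger
# of the LCM of the periodic tasks' periods and the longest inter-arrival
# time among the aperiodic tasks.

def simulation_time(periods, max_interarrivals):
    return max(math.lcm(*periods), max(max_interarrivals))
```

For example, with periodic periods 4 and 6 (LCM 12) and a maximum aperiodic inter-arrival time of 30, the horizon is 30, so the hyperperiod is fully covered and every aperiodic task arrives at least once.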
However, since our current setting allowed us to convincingly support our conclusions, we do not report further experiments on tuning those values. \noindent\textbf{Implementation.} We implemented OPAM by extending jMetal~\citep{Durillo2011}, which is a metaheuristic optimization framework supporting NSGAII and GA. We conducted our experiments using the high-performance computing cluster~\citep{Varrette2014} at the University of Luxembourg. To account for randomness, we repeated each run of OPAM 50 times for all experiments. Each run of OPAM was executed on a different node (equipped with five 2.5GHz cores and 20GB memory) of the cluster, and took less than 16 hours. \subsection{Results} \label{subsec:results} \begin{figure}[t] \begin{center} \subfloat[ICS]{ \includegraphics[width=0.45\columnwidth]{figs_rq1-scatter-ICS.pdf} \label{fig:rq1scatter ics} } \hspace*{\fill} \subfloat[CCS]{ \includegraphics[width=.45\columnwidth]{figs_rq1-scatter-CCS.pdf} \label{fig:rq1scatter ccs} } \hfill \subfloat[UAV]{ \includegraphics[width=.45\columnwidth]{figs_rq1-scatter-UAV.pdf} \label{fig:rq1scatter uav} } \hspace*{\fill} \subfloat[GAP]{ \includegraphics[width=.45\columnwidth]{figs_rq1-scatter-GAP.pdf} \label{fig:rq1scatter gap} } \hfill \subfloat[HPSS]{ \includegraphics[width=.45\columnwidth]{figs_rq1-scatter-HPSS.pdf} \label{fig:rq1scatter hpss} } \hspace*{\fill} \subfloat[ESAIL]{ \includegraphics[width=0.45\columnwidth]{figs_rq1-scatter-ESAIL.pdf} \label{fig:rq1scatter esail} } \end{center} \caption{Pareto fronts obtained by OPAM and RS for the six industrial subjects: (a)~ICS, (b)~CCS, (c)~UAV, (d)~GAP, (e)~HPSS, and (f)~ESAIL. The fitness values are computed based on each subject's set $\mathbf{E}$ of task-arrival sequences (see Section~\ref{subsec:param}).
The points located closer to the bottom left of each plot are considered to be better priority assignments when compared to points closer to the top right.} \label{fig:rq1scatter} \end{figure} \noindent\textbf{RQ1.} Figure~\ref{fig:rq1scatter} shows the best Pareto fronts obtained with 50 runs of OPAM and RS, for the six industrial study subjects described in Section~\ref{subsec:industrial subjects}. The fitness values presented in the figures are computed based on each subject's set $\mathbf{E}$ of task-arrival sequences (see Section~\ref{subsec:param}), which is created independently from OPAM and RS. Figures~\ref{fig:rq1scatter ics}, \ref{fig:rq1scatter uav}, \ref{fig:rq1scatter gap}, \ref{fig:rq1scatter hpss}, and \ref{fig:rq1scatter esail} indicate that OPAM finds significantly better solutions than RS for ICS, UAV, GAP, HPSS, and ESAIL. Regarding CCS (see Figure~\ref{fig:rq1scatter ccs}), it is difficult to conclude anything based only on visual inspection. Hence, we compared Pareto fronts obtained by OPAM and RS using the three quality indicators HV, GD+, and $\Delta$, described in Section~\ref{subsec:metrics}. \begin{figure*}[t] \begin{center} \subfloat[HV]{ \includegraphics[width=.49\columnwidth]{figs_rq1-HV.pdf} \label{fig:rq1 hv} } \hfill \subfloat[GD+]{ \includegraphics[width=0.49\columnwidth]{figs_rq1-GD+.pdf} \label{fig:rq1 gd} } \hfill \subfloat[$\Delta$]{ \includegraphics[width=0.49\columnwidth]{figs_rq1-GS.pdf} \label{fig:rq1 sp} } \end{center} \caption{Comparing OPAM and RS using the three quality indicators: (a)~HV, (b)~GD+, and (c)~$\Delta$. The boxplots (25\%-50\%-75\%) show the quality values obtained from 50 runs of OPAM and RS. 
The quality values are computed based on the Pareto fronts obtained by the algorithms and each subject's set $\mathbf{E}$ of task-arrival sequences (see Section~\ref{subsec:param}).} \label{fig:rq1} \end{figure*} Figure~\ref{fig:rq1} depicts distributions of HV (Figure~\ref{fig:rq1 hv}), GD+ (Figure~\ref{fig:rq1 gd}), and $\Delta$ (Figure~\ref{fig:rq1 sp}) for the six industrial subjects. The boxplots in the figures present the distributions (25\%-50\%-75\%) of the quality values obtained from 50 runs of OPAM and RS. The quality values are computed based on the Pareto fronts obtained by the algorithms and each subject's set $\mathbf{E}$ of task-arrival sequences (see Section~\ref{subsec:param}). In the figures, statistical comparisons of the two corresponding distributions are summarized using p-values and $\hat{A}_{12}$ values, as described in Section~\ref{subsec:metrics}, under each subject name. As shown in Figures~\ref{fig:rq1 hv} and \ref{fig:rq1 gd}, OPAM obtains better distributions of HV and GD+ compared to RS for all six subjects. All the differences are statistically significant as the p-values are below 0.05. Regarding $\Delta$, as depicted in Figure~\ref{fig:rq1 sp}, OPAM yields higher diversity in Pareto front solutions than RS for the following subjects: UAV, GAP, and HPSS. For ICS, CCS, and ESAIL, OPAM and RS obtain similar $\Delta$ values. From Figures~\ref{fig:rq1 hv} and \ref{fig:rq1 gd}, and Table~\ref{tbl:subjects}, we also observe that the higher the number of aperiodic tasks in a subject, the larger the differences in HV and GD+ between OPAM and RS. Hence, for these two quality indicators, OPAM outperforms RS more significantly for more complex search problems. Note that the number of aperiodic tasks is one of the main factors that drives the degree of uncertainty in task arrivals. \begin{table}[!p] \caption{Comparing OPAM and RS using the three quality indicators: HV, GD+, and $\Delta$. 
Average quality values computed based on 50 runs of OPAM and RS using the different sets of task-arrival sequences (see Section~\ref{subsec:design}).} \vspace{-1.2em} \fontsize{8}{8}\selectfont \def0.5{0.5} \begin{center} \begin{tabularx}{\columnwidth}{m{2.5em}@{}c@{\hspace{0.3em}}r@{\hspace{1em}} RRRRRR} \toprule \addlinespace[0.5em] \multicolumn{3}{c}{} & \multicolumn{1}{c}{\textbf{ICS}} & \multicolumn{1}{c}{\textbf{CCS}} & \multicolumn{1}{c}{\textbf{UAV}} & \multicolumn{1}{c}{\textbf{GAP}} & \multicolumn{1}{c}{\textbf{HPSS}} & \multicolumn{1}{c}{\textbf{ESAIL}} \\ \addlinespace[0.2em] \midrule \multirow{9}{*}{\rotatebox{90}{ \parbox{8em}{\centering{$\mathbf{T}^{10}_{a}$ \\ (adaptive, size 10)}} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{1.0000} & \cellcolor{blue!30}\textbf{0.7168} & \cellcolor{blue!30}\textbf{0.8923} & \cellcolor{blue!30}\textbf{0.8864} & \cellcolor{blue!30}\textbf{0.9629} & \cellcolor{blue!30}\textbf{0.9998} \\ & & \textbf{RS} & 0.9000 & 0.6633 & 0.7488 & 0.6278 & 0.5120 & 0.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.02$\vert$0.55 & 0.00$\vert$0.80 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{0.0000} & \cellcolor{blue!30}\textbf{0.0203} & \cellcolor{blue!30}\textbf{0.0068} & \cellcolor{blue!30}\textbf{0.0067} & \cellcolor{blue!30}\textbf{0.0073} & \cellcolor{blue!30}\textbf{0.0135} \\ & & \textbf{RS} & 0.0883 & 0.0472 & 0.0745 & 0.0780 & 0.1380 & 1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.02$\vert$0.45 & 0.00$\vert$0.04 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 1.0000 & 0.7650 & \cellcolor{blue!30}\textbf{0.4256} & \cellcolor{blue!30}\textbf{0.3631} & \cellcolor{blue!30}\textbf{0.5355} & 0.9433 \\ & & \textbf{RS} & 0.9766 & \cellcolor{gray!20}\textbf{0.5879} & 0.6112 & 0.6605 & 0.7508 & 
1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.16$\vert$0.52 & 0.00$\vert$0.76 & 0.00$\vert$0.15 & 0.00$\vert$0.03 & 0.00$\vert$0.12 & 0.08$\vert$0.47 \\ \midrule \multirow{9}{*}{\rotatebox{90}{ \parbox{8em}{\centering{$\mathbf{T}^{10}_{w}$ \\ (worst, size 10)}} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & 0.0000 & \cellcolor{blue!30}\textbf{0.7878} & \cellcolor{blue!30}\textbf{0.9152} & \cellcolor{blue!30}\textbf{0.9280} & \cellcolor{blue!30}\textbf{0.9652} & \cellcolor{blue!30}\textbf{0.9997} \\ & & \textbf{RS} & 0.0000 & 0.7591 & 0.7782 & 0.6743 & 0.5180 & 0.0000 \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.01$\vert$0.65 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & 0.0000 & 0.0809 & \cellcolor{blue!30}\textbf{0.0053} & \cellcolor{blue!30}\textbf{0.0042} & \cellcolor{blue!30}\textbf{0.0108} & \cellcolor{blue!30}\textbf{0.0135} \\ & & \textbf{RS} & 0.0200 & 0.0866 & 0.0740 & 0.0760 & 0.1405 & 1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.16$\vert$0.48 & 0.75$\vert$0.52 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 1.0000 & 0.7012 & \cellcolor{blue!30}\textbf{0.4508} & \cellcolor{blue!30}\textbf{0.4009} & \cellcolor{blue!30}\textbf{0.4872} & 0.9433 \\ & & \textbf{RS} & 0.9600 & \cellcolor{gray!20}\textbf{0.4764} & 0.6032 & 0.7002 & 0.7328 & 1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.16$\vert$0.52 & 0.00$\vert$0.79 & 0.00$\vert$0.22 & 0.00$\vert$0.03 & 0.00$\vert$0.11 & 0.08$\vert$0.47 \\ \midrule \multirow{9}{*}{\rotatebox{90}{ \parbox{8em}{\centering{$\mathbf{T}^{10}_{r}$ \\ (random, size 10)}} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & 0.0000 & \cellcolor{blue!30}\textbf{0.8976} & \cellcolor{blue!30}\textbf{0.9792} & \cellcolor{blue!30}\textbf{0.9449} & \cellcolor{blue!30}\textbf{0.9837} & \cellcolor{blue!30}\textbf{0.9999} \\ & & \textbf{RS} & 0.0000 & 
0.8517 & 0.8191 & 0.6879 & 0.5183 & 0.0000 \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.00$\vert$0.90 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & 0.0000 & \cellcolor{blue!30}\textbf{0.0806} & \cellcolor{blue!30}\textbf{0.0035} & \cellcolor{blue!30}\textbf{0.0043} & \cellcolor{blue!30}\textbf{0.0211} & \cellcolor{blue!30}\textbf{0.0134} \\ & & \textbf{RS} & 0.0200 & 0.1252 & 0.0912 & 0.0789 & 0.1580 & 1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.16$\vert$0.48 & 0.00$\vert$0.09 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 1.0000 & 0.8662 & \cellcolor{blue!30}\textbf{0.4603} & \cellcolor{blue!30}\textbf{0.3951} & \cellcolor{blue!30}\textbf{0.4728} & 0.9433 \\ & & \textbf{RS} & 0.9600 & \cellcolor{gray!20}\textbf{0.6579} & 0.6331 & 0.7035 & 0.7617 & 1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.16$\vert$0.52 & 0.00$\vert$0.73 & 0.00$\vert$0.20 & 0.00$\vert$0.02 & 0.00$\vert$0.05 & 0.08$\vert$0.47 \\ \midrule \multirow{9}{*}{\rotatebox{90}{ \parbox{8.5em}{\centering{$\mathbf{T}^{500}_{a}$ \\ (adaptive, size 500)}}\hspace{-0.3em} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{1.0000} & \cellcolor{blue!30}\textbf{0.7032} & \cellcolor{blue!30}\textbf{0.9424} & \cellcolor{blue!30}\textbf{0.9089} & \cellcolor{blue!30}\textbf{0.9803} & \cellcolor{blue!30}\textbf{0.9999} \\ & & \textbf{RS} & 0.9000 & 0.6518 & 0.7893 & 0.6561 & 0.5167 & 0.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.02$\vert$0.55 & 0.00$\vert$0.86 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{0.0000} & \cellcolor{blue!30}\textbf{0.0159} & \cellcolor{blue!30}\textbf{0.0035} & \cellcolor{blue!30}\textbf{0.0051} & \cellcolor{blue!30}\textbf{0.0064} & 
\cellcolor{blue!30}\textbf{0.0134} \\ & & \textbf{RS} & 0.0883 & 0.0393 & 0.0850 & 0.0746 & 0.1422 & 1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.02$\vert$0.45 & 0.00$\vert$0.03 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 1.0000 & 0.7842 & \cellcolor{blue!30}\textbf{0.4715} & \cellcolor{blue!30}\textbf{0.3680} & \cellcolor{blue!30}\textbf{0.4850} & 0.9433 \\ & & \textbf{RS} & 0.9766 & \cellcolor{gray!20}\textbf{0.5354} & 0.6357 & 0.6850 & 0.7565 & 1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.16$\vert$0.52 & 0.00$\vert$0.84 & 0.00$\vert$0.21 & 0.00$\vert$0.01 & 0.00$\vert$0.09 & 0.08$\vert$0.47 \\ \midrule \multirow{9}{*}{\rotatebox{90}{ \parbox{8em}{\centering{$\mathbf{T}^{500}_{w}$ \\ (worst, size 500)}} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{1.0000} & \cellcolor{blue!30}\textbf{0.6535} & \cellcolor{blue!30}\textbf{0.9223} & \cellcolor{blue!30}\textbf{0.9307} & \cellcolor{blue!30}\textbf{0.9635} & \cellcolor{blue!30}\textbf{0.9997} \\ & & \textbf{RS} & 0.9000 & 0.6050 & 0.7791 & 0.6770 & 0.5032 & 0.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.02$\vert$0.55 & 0.00$\vert$0.77 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{0.0000} & \cellcolor{blue!30}\textbf{0.0302} & \cellcolor{blue!30}\textbf{0.0037} & \cellcolor{blue!30}\textbf{0.0040} & \cellcolor{blue!30}\textbf{0.0054} & \cellcolor{blue!30}\textbf{0.0136} \\ & & \textbf{RS} & 0.0883 & 0.0545 & 0.0768 & 0.0763 & 0.1408 & 1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.02$\vert$0.45 & 0.00$\vert$0.09 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 1.0000 & 0.7899 & \cellcolor{blue!30}\textbf{0.4640} & \cellcolor{blue!30}\textbf{0.4077} & \cellcolor{blue!30}\textbf{0.5083} 
& 0.9433 \\ & & \textbf{RS} & 0.9766 & \cellcolor{gray!20}\textbf{0.5910} & 0.6114 & 0.7052 & 0.7448 & 1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.16$\vert$0.52 & 0.00$\vert$0.84 & 0.00$\vert$0.22 & 0.00$\vert$0.02 & 0.00$\vert$0.11 & 0.08$\vert$0.47 \\ \midrule \multirow{9}{*}{\rotatebox{90}{ \parbox{8.1em}{\centering{$\mathbf{T}^{500}_{r}$ \\ (random, size 500)}} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{1.0000} & \cellcolor{blue!30}\textbf{0.6936} & \cellcolor{blue!30}\textbf{0.9742} & \cellcolor{blue!30}\textbf{0.9481} & \cellcolor{blue!30}\textbf{0.9810} & \cellcolor{blue!30}\textbf{0.9999} \\ & & \textbf{RS} & 0.9000 & 0.6401 & 0.8138 & 0.6904 & 0.5183 & 0.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.02$\vert$0.55 & 0.00$\vert$0.85 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{0.0000} & \cellcolor{blue!30}\textbf{0.0169} & \cellcolor{blue!30}\textbf{0.0031} & \cellcolor{blue!30}\textbf{0.0041} & \cellcolor{blue!30}\textbf{0.0062} & \cellcolor{blue!30}\textbf{0.0134} \\ & & \textbf{RS} & 0.0883 & 0.0394 & 0.0914 & 0.0794 & 0.1420 & 1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.02$\vert$0.45 & 0.00$\vert$0.03 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 1.0000 & 0.7415 & \cellcolor{blue!30}\textbf{0.4637} & \cellcolor{blue!30}\textbf{0.4077} & \cellcolor{blue!30}\textbf{0.4854} & 0.9433 \\ & & \textbf{RS} & 0.9766 & \cellcolor{gray!20}\textbf{0.5251} & 0.6358 & 0.7042 & 0.7535 & 1.0000 \\ & & $p\vert\hat{A}_{12}$ & 0.16$\vert$0.52 & 0.00$\vert$0.80 & 0.00$\vert$0.20 & 0.00$\vert$0.03 & 0.00$\vert$0.09 & 0.08$\vert$0.47 \\ \bottomrule \addlinespace[0.5em] \multicolumn{9}{l}{\parbox[t]{0.95\linewidth}{ \colorbox{blue!30}{\textbf{n.nnnn}}: OPAM outperforms RS \quad\quad \colorbox{gray!20}{\textbf{n.nnnn}}: RS 
outperforms OPAM}} \\ \end{tabularx} \end{center} \label{tbl:rq1QIs} \end{table} Given the Pareto priority assignments obtained by OPAM and RS, we further assessed the quality values of the solutions by evaluating them with different sets of task-arrival sequences. As described in Section~\ref{subsec:design}, we created six test sets of task-arrival sequences for each subject by varying the sequence generation methods and the number of task-arrival sequences in a set (see $\mathbf{T}_{a}^{10}$, $\mathbf{T}_{w}^{10}$, $\mathbf{T}_{r}^{10}$, $\mathbf{T}_{a}^{500}$, $\mathbf{T}_{w}^{500}$, and $\mathbf{T}_{r}^{500}$ described in Section~\ref{subsec:design}). Table~\ref{tbl:rq1QIs} reports the average quality values measured by HV, GD+, and $\Delta$ based on 50 runs of OPAM and RS with the different test sets of task-arrival sequences. The results indicate that OPAM significantly outperforms RS in most comparison cases. Specifically, out of a total of 108 comparisons, OPAM outperforms RS 87 times (see the blue-colored cells related to OPAM in Table~\ref{tbl:rq1QIs}). Regarding $\Delta$, RS outperforms OPAM for the CCS subject (see the gray-colored cells related to RS in Table~\ref{tbl:rq1QIs}). As shown in Table~\ref{tbl:subjects}, CCS has only 3 aperiodic tasks and RS was therefore able to find better solutions with respect to $\Delta$ for such a simple subject. \begin{mdframed}[style=RQFrame] \emph{The answer to {\bf RQ1} is that} OPAM significantly outperforms RS with respect to HV and GD+. In particular, OPAM performs considerably better than RS when more aperiodic tasks are involved. 
\end{mdframed} \begin{figure}[!th] \begin{center} \subfloat[ICS]{ \includegraphics[width=0.45\columnwidth]{figs_rq2-scatter-ICS.pdf} \label{fig:rq2scatter ics} } \hspace*{\fill} \subfloat[CCS]{ \includegraphics[width=.45\columnwidth]{figs_rq2-scatter-CCS.pdf} \label{fig:rq2scatter ccs} } \hfill \subfloat[UAV]{ \includegraphics[width=.45\columnwidth]{figs_rq2-scatter-UAV.pdf} \label{fig:rq2scatter uav} } \hspace*{\fill} \subfloat[GAP]{ \includegraphics[width=.45\columnwidth]{figs_rq2-scatter-GAP.pdf} \label{fig:rq2scatter gap} } \hfill \subfloat[HPSS]{ \includegraphics[width=.45\columnwidth]{figs_rq2-scatter-HPSS.pdf} \label{fig:rq2scatter hpss} } \hspace*{\fill} \subfloat[ESAIL]{ \includegraphics[width=0.45\columnwidth]{figs_rq2-scatter-ESAIL.pdf} \label{fig:rq2scatter esail} } \end{center} \caption{Pareto fronts obtained by OPAM and SEQ for the six industrial subjects: (a)~ICS, (b)~CCS, (c)~UAV, (d)~GAP, (e)~HPSS, and (f)~ESAIL. The fitness values are computed based on each subject's set $\mathbf{T}_{a}^{500}$ of task-arrival sequences (see Section~\ref{subsec:design}). The points located closer to the bottom left of each plot are considered to be better priority assignments when compared to points closer to the top right.} \label{fig:rq2scatter} \end{figure} \noindent\textbf{RQ2.} To compare OPAM and SEQ, we first visually inspect the best Pareto fronts obtained from 50 runs of OPAM and SEQ for the six study systems described in Section~\ref{subsec:industrial subjects} by varying the test sets of task-arrival sequences for each subject (see $\mathbf{T}_{a}^{10}$, $\mathbf{T}_{w}^{10}$, $\mathbf{T}_{r}^{10}$, $\mathbf{T}_{a}^{500}$, $\mathbf{T}_{w}^{500}$, and $\mathbf{T}_{r}^{500}$ described in Section~\ref{subsec:design}), which are created independently from OPAM and SEQ. Overall, we observed that OPAM finds significantly better priority assignments in most cases. 
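The best Pareto fronts compared here are, in essence, the non-dominated subsets of the solutions found across runs. The following sketch illustrates non-dominated filtering for two minimized objectives; the function names are ours for illustration, not part of OPAM:

```python
def dominates(q, p):
    # q dominates p if q is no worse in every objective and strictly
    # better in at least one (both objectives are minimized)
    return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

def pareto_front(points):
    # keep only the points that no other point dominates
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For instance, `pareto_front([(1, 3), (2, 2), (3, 1), (3, 3)])` drops `(3, 3)`, which `(2, 2)` dominates; the remaining non-dominated points form the kind of front plotted in these scatter plots.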
For example, Figure~\ref{fig:rq2scatter} depicts the best Pareto fronts obtained by OPAM and SEQ when the fitness values are computed based on each subject's test set $\mathbf{T}_{a}^{500}$ of 500 task-arrival sequences, which are generated with adaptive random search. The results clearly show that OPAM outperforms SEQ with respect to producing more optimal Pareto fronts for ICS, CCS, UAV, HPSS, and ESAIL. For GAP, the visual inspection is not sufficient to provide any conclusions. Hence, we further compare OPAM and SEQ based on the quality indicators described in Section~\ref{subsec:metrics}. \begin{table}[p] \caption{Comparing OPAM and SEQ using the three quality indicators: HV, GD+, and $\Delta$. Average quality values computed based on 50 runs of OPAM and SEQ using the different sets of task-arrival sequences (see Section~\ref{subsec:design}).} \vspace{-1.2em} \fontsize{8}{8}\selectfont \def0.5{0.5} \begin{center} \begin{tabularx}{\columnwidth}{m{2.5em}@{}c@{\hspace{0.3em}}r@{\hspace{1em}} RRRRRR} \toprule \addlinespace[0.5em] \multicolumn{3}{c}{} & \multicolumn{1}{c}{\textbf{ICS}} & \multicolumn{1}{c}{\textbf{CCS}} & \multicolumn{1}{c}{\textbf{UAV}} & \multicolumn{1}{c}{\textbf{GAP}} & \multicolumn{1}{c}{\textbf{HPSS}} & \multicolumn{1}{c}{\textbf{ESAIL}} \\ \addlinespace[0.2em] \midrule \multirow{15}{*}{\rotatebox{90}{ \parbox{8em}{\centering{$\mathbf{T}^{10}_{a}$ \\ (adaptive, size 10)}} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & 0.0000 & \cellcolor{blue!30}\textbf{0.6052} & \cellcolor{blue!30}\textbf{0.6011} & \cellcolor{blue!30}\textbf{0.6088} & \cellcolor{blue!30}\textbf{0.6290} & \cellcolor{blue!30}\textbf{0.9808} \\ & & \textbf{SEQ} & 0.0000 & 0.4172 & 0.5354 & 0.5868 & 0.6086 & 0.4470 \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.00$\vert$1.00 & 0.00$\vert$0.95 & 0.00$\vert$0.76 & 0.02$\vert$0.63 & 0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{0.0000} & 
\cellcolor{blue!30}\textbf{0.0244} & \cellcolor{blue!30}\textbf{0.0175} & \cellcolor{blue!30}\textbf{0.0148} & \cellcolor{blue!30}\textbf{0.0529} & \cellcolor{blue!30}\textbf{0.0249} \\ & & \textbf{SEQ} & 0.2191 & 0.0835 & 0.0350 & 0.0201 & 0.0625 & 0.1887 \\ & & $p\vert\hat{A}_{12}$ & 0.00$\vert$0.01 & 0.00$\vert$0.00 & 0.00$\vert$0.01 & 0.00$\vert$0.25 & 0.00$\vert$0.26 & 0.00$\vert$0.03 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 1.0000 & 0.7653 & 0.4239 & 0.3343 & 0.5297 & 0.9444 \\ & & \textbf{SEQ} & \cellcolor{gray!20}\textbf{0.0200} & \cellcolor{gray!20}\textbf{0.5656} & \cellcolor{gray!20}\textbf{0.3628} & \cellcolor{gray!20}\textbf{0.2875} & 0.5706 & \cellcolor{gray!20}\textbf{0.8285} \\ & & $p\vert\hat{A}_{12}$ & 0.00$\vert$0.99 & 0.00$\vert$0.81 & 0.01$\vert$0.64 & 0.01$\vert$0.65 & 0.33$\vert$0.44 & 0.00$\vert$0.75 \\ \midrule \multirow{15}{*}{\rotatebox{90}{ \parbox{8em}{\centering{$\mathbf{T}^{10}_{w}$ \\ (worst, size 10)}} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & 0.0000 & \cellcolor{blue!30}\textbf{0.7345} & \cellcolor{blue!30}\textbf{0.6258} & \cellcolor{blue!30}\textbf{0.6290} & \cellcolor{blue!30}\textbf{0.7460} & \cellcolor{blue!30}\textbf{0.9059} \\ & & \textbf{SEQ} & 0.0000 & 0.6794 & 0.5933 & 0.5928 & 0.6856 & 0.5046 \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.00$\vert$0.82 & 0.00$\vert$0.82 & 0.00$\vert$0.88 & 0.00$\vert$0.87 & 0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & 0.0000 & 0.0912 & \cellcolor{blue!30}\textbf{0.0191} & \cellcolor{blue!30}\textbf{0.0131} & \cellcolor{blue!30}\textbf{0.0340} & \cellcolor{blue!30}\textbf{0.0724} \\ & & \textbf{SEQ} & 0.0000 & \cellcolor{gray!20}\textbf{0.0695} & 0.0272 & 0.0211 & 0.0667 & 0.1720 \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.00$\vert$0.86 & 0.00$\vert$0.12 & 0.00$\vert$0.14 & 0.00$\vert$0.03 & 0.00$\vert$0.07 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 
1.0000 & 0.7009 & 0.4835 & 0.3616 & \cellcolor{blue!30}\textbf{0.4695} & 0.9470 \\ & & \textbf{SEQ} & 1.0000 & \cellcolor{gray!20}\textbf{0.5376} & \cellcolor{gray!20}\textbf{0.3111} & \cellcolor{gray!20}\textbf{0.3054} & 0.5453 & \cellcolor{gray!20}\textbf{0.7547} \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.00$\vert$0.74 & 0.00$\vert$0.83 & 0.01$\vert$0.66 & 0.01$\vert$0.35 & 0.00$\vert$0.67 \\ \midrule \multirow{15}{*}{\rotatebox{90}{ \parbox{8em}{\centering{$\mathbf{T}^{10}_{r}$ \\ (random, size 10)}} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & 0.0000 & \cellcolor{blue!30}\textbf{0.8720} & \cellcolor{blue!30}\textbf{0.8653} & \cellcolor{blue!30}\textbf{0.6340} & 0.7714 & \cellcolor{blue!30}\textbf{0.9055} \\ & & \textbf{SEQ} & 0.0000 & 0.5478 & 0.7246 & 0.5879 & 0.7935 & 0.1139 \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.00$\vert$0.99 & 0.00$\vert$1.00 & 0.00$\vert$0.92 & 0.06$\vert$0.39 & 0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & 0.0000 & \cellcolor{blue!30}\textbf{0.0911} & \cellcolor{blue!30}\textbf{0.0205} & \cellcolor{blue!30}\textbf{0.0160} & \cellcolor{blue!30}\textbf{0.0472} & \cellcolor{blue!30}\textbf{0.0718} \\ & & \textbf{SEQ} & 0.0000 & 0.1358 & 0.0882 & 0.0277 & 0.0646 & 0.2838 \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.00$\vert$0.01 & 0.00$\vert$0.00 & 0.00$\vert$0.10 & 0.00$\vert$0.19 & 0.00$\vert$0.06 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 1.0000 & 0.8605 & 0.4644 & 0.3825 & 0.4658 & 0.9456 \\ & & \textbf{SEQ} & 1.0000 & \cellcolor{gray!20}\textbf{0.5896} & \cellcolor{gray!20}\textbf{0.4072} & \cellcolor{gray!20}\textbf{0.3253} & 0.4620 & \cellcolor{gray!20}\textbf{0.9670} \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.00$\vert$0.82 & 0.02$\vert$0.64 & 0.01$\vert$0.66 & 0.90$\vert$0.49 & 0.00$\vert$0.67 \\ \midrule \multirow{15}{*}{\rotatebox{90}{ \parbox{8.5em}{\centering{$\mathbf{T}^{500}_{a}$ \\ (adaptive, size 
500)}}\hspace{-0.3em} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & 0.0000 & \cellcolor{blue!30}\textbf{0.6781} & \cellcolor{blue!30}\textbf{0.7134} & \cellcolor{blue!30}\textbf{0.6261} & \cellcolor{blue!30}\textbf{0.7332} & \cellcolor{blue!30}\textbf{0.9744} \\ & & \textbf{SEQ} & 0.0000 & 0.4854 & 0.6179 & 0.5981 & 0.7056 & 0.3571 \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$0.83 & 0.00$\vert$0.73 & 0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{0.0000} & \cellcolor{blue!30}\textbf{0.0174} & \cellcolor{blue!30}\textbf{0.0140} & \cellcolor{blue!30}\textbf{0.0134} & \cellcolor{blue!30}\textbf{0.0320} & \cellcolor{blue!30}\textbf{0.0285} \\ & & \textbf{SEQ} & 0.2191 & 0.0727 & 0.0549 & 0.0197 & 0.0565 & 0.2153 \\ & & $p\vert\hat{A}_{12}$ & 0.00$\vert$0.01 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 0.00$\vert$0.20 & 0.00$\vert$0.08 & 0.00$\vert$0.04 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 1.0000 & 0.7833 & 0.4964 & 0.3588 & \cellcolor{blue!30}\textbf{0.4564} & 0.9442 \\ & & \textbf{SEQ} & \cellcolor{gray!20}\textbf{0.0200} & 0.7319 & \cellcolor{gray!20}\textbf{0.4002} & 0.3315 & 0.5312 & \cellcolor{gray!20}\textbf{0.8554} \\ & & $p\vert\hat{A}_{12}$ & 0.00$\vert$0.99 & 0.23$\vert$0.57 & 0.00$\vert$0.72 & 0.07$\vert$0.60 & 0.02$\vert$0.36 & 0.00$\vert$0.75 \\ \midrule \multirow{8}{*}{\rotatebox{90}{ \parbox{8em}{\centering{$\mathbf{T}^{500}_{w}$ \\ (worst, size 500)}} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & 0.0000 & 0.4732 & \cellcolor{blue!30}\textbf{0.6330} & \cellcolor{blue!30}\textbf{0.6181} & \cellcolor{blue!30}\textbf{0.6990} & \cellcolor{blue!30}\textbf{0.8755} \\ & & \textbf{SEQ} & 0.0000 & \cellcolor{gray!20}\textbf{0.5564} & 0.5958 & 0.5792 & 0.6800 & 0.1183 \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.00$\vert$0.04 & 0.00$\vert$0.85 & 0.00$\vert$0.90 & 0.00$\vert$0.70 & 
0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{0.0000} & 0.0511 & \cellcolor{blue!30}\textbf{0.0141} & \cellcolor{blue!30}\textbf{0.0135} & \cellcolor{blue!30}\textbf{0.0258} & \cellcolor{blue!30}\textbf{0.0911} \\ & & \textbf{SEQ} & 0.2191 & \cellcolor{gray!20}\textbf{0.0343} & 0.0267 & 0.0226 & 0.0336 & 0.2849 \\ & & $p\vert\hat{A}_{12}$ & 0.00$\vert$0.01 & 0.00$\vert$0.96 & 0.00$\vert$0.05 & 0.00$\vert$0.11 & 0.00$\vert$0.24 & 0.00$\vert$0.06 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 1.0000 & 0.7569 & 0.4950 & 0.3751 & 0.5379 & 0.9469 \\ & & \textbf{SEQ} & \cellcolor{gray!20}\textbf{0.0200} & 0.7259 & \cellcolor{gray!20}\textbf{0.3315} & \cellcolor{gray!20}\textbf{0.3139} & 0.5102 & \cellcolor{gray!20}\textbf{0.8957} \\ & & $p\vert\hat{A}_{12}$ & 0.00$\vert$0.99 & 0.43$\vert$0.55 & 0.00$\vert$0.82 & 0.01$\vert$0.66 & 0.20$\vert$0.57 & 0.00$\vert$0.67 \\ \midrule \multirow{8}{*}{\rotatebox{90}{ \parbox{8.1em}{\centering{$\mathbf{T}^{500}_{r}$ \\ (random, size 500)}} }} & \multirow{3}{*}{\textbf{HV}} & \textbf{OPAM} & 0.0000 & \cellcolor{blue!30}\textbf{0.6646} & \cellcolor{blue!30}\textbf{0.8446} & \cellcolor{blue!30}\textbf{0.6321} & \cellcolor{blue!30}\textbf{0.7087} & \cellcolor{blue!30}\textbf{0.8782} \\ & & \textbf{SEQ} & 0.0000 & 0.4876 & 0.7242 & 0.5839 & 0.6786 & 0.1965 \\ & & $p\vert\hat{A}_{12}$ & 1.00$\vert$0.50 & 0.00$\vert$1.00 & 0.00$\vert$1.00 & 0.00$\vert$0.93 & 0.00$\vert$0.72 & 0.00$\vert$1.00 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{GD+}} & \textbf{OPAM} & \cellcolor{blue!30}\textbf{0.0000} & \cellcolor{blue!30}\textbf{0.0184} & \cellcolor{blue!30}\textbf{0.0172} & \cellcolor{blue!30}\textbf{0.0165} & \cellcolor{blue!30}\textbf{0.0327} & \cellcolor{blue!30}\textbf{0.0900} \\ & & \textbf{SEQ} & 0.2191 & 0.0684 & 0.0791 & 0.0285 & 0.0580 & 0.2620 \\ & & $p\vert\hat{A}_{12}$ & 0.00$\vert$0.01 & 0.00$\vert$0.00 & 0.00$\vert$0.00 & 
0.00$\vert$0.09 & 0.00$\vert$0.06 & 0.00$\vert$0.06 \\ \addlinespace[0.5em] & \multirow{3}{*}{\textbf{$\Delta$}} & \textbf{OPAM} & 1.0000 & 0.7449 & 0.5059 & 0.3960 & \cellcolor{blue!30}\textbf{0.4502} & 0.9472 \\ & & \textbf{SEQ} & \cellcolor{gray!20}\textbf{0.0200} & 0.6798 & \cellcolor{gray!20}\textbf{0.4156} & \cellcolor{gray!20}\textbf{0.3341} & 0.5148 & \cellcolor{gray!20}\textbf{0.8546} \\ & & $p\vert\hat{A}_{12}$ & 0.00$\vert$0.99 & 0.19$\vert$0.58 & 0.00$\vert$0.71 & 0.01$\vert$0.66 & 0.03$\vert$0.38 & 0.00$\vert$0.67 \\ \bottomrule \addlinespace[0.5em] \multicolumn{9}{l}{\parbox[t]{0.95\linewidth}{ \colorbox{blue!30}{\textbf{n.nnnn}}: OPAM outperforms SEQ \quad\quad \colorbox{gray!20}{\textbf{n.nnnn}}: SEQ outperforms OPAM}} \\ \end{tabularx} \end{center} \label{tbl:rq2QI-SEQ} \end{table} Table~\ref{tbl:rq2QI-SEQ} compares the quality values measured by HV, GD+, and $\Delta$ for the six study subjects. To fairly compare the priority assignments obtained by OPAM and SEQ, we assess them with the test sets of task-arrival sequences for each subject (see $\mathbf{T}_{a}^{10}$, $\mathbf{T}_{w}^{10}$, $\mathbf{T}_{r}^{10}$, $\mathbf{T}_{a}^{500}$, $\mathbf{T}_{w}^{500}$, and $\mathbf{T}_{r}^{500}$ described in Section~\ref{subsec:design}). Table~\ref{tbl:rq2QI-SEQ} reports the average quality values computed based on 50 runs of OPAM and SEQ. In Table~\ref{tbl:rq2QI-SEQ}, the statistical comparisons of the corresponding distributions are reported using p-values and $\hat{A}_{12}$ values. As shown in Table~\ref{tbl:rq2QI-SEQ}, we compared OPAM and SEQ 108 times by varying the study subjects, the quality indicators, the number of task-arrival sequences, and the task-arrival sequence generation methods. Out of 108 comparisons, OPAM significantly outperforms SEQ 63 times. Specifically, out of 36 HV comparisons, OPAM obtains better HV values than SEQ 28 times.
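The $\hat{A}_{12}$ values reported next to the p-values are Vargha-Delaney effect sizes: the probability that a randomly drawn quality value from one group exceeds one from the other, with 0.5 meaning no difference. A minimal sketch of the statistic (this implementation is ours, for illustration only):

```python
def a12(xs, ys):
    # Vargha-Delaney effect size: P(x > y) plus half of P(x == y),
    # computed over all pairs from the two samples; 0.5 means no difference
    greater = sum(1 for x in xs for y in ys if x > y)
    ties = sum(1 for x in xs for y in ys if x == y)
    return (greater + 0.5 * ties) / (len(xs) * len(ys))
```

For example, identical samples give 0.5, while a sample whose values always win gives 1.0, matching extreme cells such as 0.00$\vert$1.00 in the tables.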
For ICS (6 HV comparisons), the differences in HV values between OPAM and SEQ are not statistically significant. In only one HV comparison for CCS, SEQ outperforms OPAM (see the gray-colored cell related to HV and CCS in Table~\ref{tbl:rq2QI-SEQ}). To interpret these results, one must recall from Table~\ref{tbl:subjects} that ICS and CCS have only three aperiodic tasks, which limits the degree of uncertainty in task arrivals, and therefore represent simple cases. Out of 36 GD+ comparisons, OPAM outperforms SEQ 32 times. SEQ outperforms OPAM only twice, in both cases for CCS. Hence, overall, the results indicate that OPAM outperforms SEQ, in terms of generating more optimal Pareto fronts, when the subjects feature a considerable degree of uncertainty in task arrivals and therefore make our search problem more complex. Otherwise, the differences are neither statistically nor practically significant. Regarding $\Delta$, which focuses on the diversity of solutions on the Pareto front, SEQ outperforms OPAM 24 times out of 36 comparisons (see the gray-colored cells related to $\Delta$ in Table~\ref{tbl:rq2QI-SEQ}). However, since OPAM produces enough alternative priority assignments spread across the Pareto fronts (as visible from the solutions obtained by OPAM in Figure~\ref{fig:rq2scatter}), these differences in $\Delta$ have limited implications in practice. \begin{mdframed}[style=RQFrame] \emph{The answer to {\bf RQ2} is that} OPAM significantly outperforms SEQ with respect to HV and GD+ in the presence of more than a few aperiodic tasks, and therefore higher uncertainty in task arrivals. OPAM therefore generates solutions on a Pareto front that is closer to the unknown, optimal one. In other words, coevolution is a suitable and successful strategy for finding better priority assignments in complex systems. \end{mdframed} \begin{table}[t] \caption{Execution times and memory usage required to run OPAM for the six industrial subjects.
Average values computed based on 50 runs of OPAM are reported.} \begin{center} \begin{tabular}{lrr} \toprule \multicolumn{1}{c}{Subject} & \multicolumn{1}{c}{Execution time (s)} & \multicolumn{1}{c}{Memory usage (MB)} \\ \midrule ICS & 104.34 & 89.97 \\ CCS & 165.50 & 111.85 \\ UAV & 1455.35 & 312.85 \\ GAP & 2819.03 & 730.29 \\ HPSS & 226.98 & 127.77 \\ ESAIL & 55844.23 & 2879.79 \\ \bottomrule \end{tabular} \end{center} \label{tab:rq3industrial} \end{table} \noindent\textbf{RQ3.} Table~\ref{tab:rq3industrial} reports the average execution times and memory usage required to run OPAM for the six industrial subjects, over 50 runs. As shown in Table~\ref{tab:rq3industrial}, finding optimal priority assignments for ESAIL requires the largest execution time ($\approx$15.5h) and memory usage ($\approx$2.9GB), compared to the other subjects. We note that such execution time and memory usage are acceptable as OPAM can be executed offline in practice. \begin{figure}[t] \begin{center} \subfloat[Number of tasks ($n$)]{ \includegraphics[width=0.45\columnwidth]{figs_rq3-time-Exp1.pdf} \label{fig:rq3time exp1} } \hspace*{\fill} \subfloat[Ratio of aperiodic tasks ($\gamma$)]{ \includegraphics[width=.45\columnwidth]{figs_rq3-time-Exp2.pdf} \label{fig:rq3time exp2} } \hfill \subfloat[Range factor ($\mu$)]{ \includegraphics[width=.45\columnwidth]{figs_rq3-time-Exp3.pdf} \label{fig:rq3time exp3} } \hspace*{\fill} \subfloat[Simulation time ($T$)]{ \includegraphics[width=.45\columnwidth]{figs_rq3-time-Exp4.pdf} \label{fig:rq3time exp4} } \end{center} \caption{Execution times of OPAM when varying the values of the following parameters: (a)~number of tasks $n$, (b)~ratio of aperiodic tasks $\gamma$, (c)~range factor $\mu$, and (d)~simulation time $T$.
The boxplots (25\%-50\%-75\%) show the execution times obtained from 500 runs of OPAM, i.e., 50 runs for each of the 10 synthetic subjects with the same configuration.} \label{fig:rq3time} \end{figure} \begin{figure}[htp] \begin{center} \subfloat[Number of tasks ($n$)]{ \includegraphics[width=0.45\columnwidth]{figs_rq3-memory-Exp1.pdf} \label{fig:rq3memory exp1} } \hspace*{\fill} \subfloat[Ratio of aperiodic tasks ($\gamma$)]{ \includegraphics[width=.45\columnwidth]{figs_rq3-memory-Exp2.pdf} \label{fig:rq3memory exp2} } \hfill \subfloat[Range factor ($\mu$)]{ \includegraphics[width=.45\columnwidth]{figs_rq3-memory-Exp3.pdf} \label{fig:rq3memory exp3} } \hspace*{\fill} \subfloat[Simulation time ($T$)]{ \includegraphics[width=.45\columnwidth]{figs_rq3-memory-Exp4.pdf} \label{fig:rq3memory exp4} } \end{center} \caption{Memory usage of OPAM when varying the values of the following parameters: (a)~number of tasks $n$, (b)~ratio of aperiodic tasks $\gamma$, (c)~range factor $\mu$, and (d)~simulation time $T$. The boxplots (25\%-50\%-75\%) show the memory usage obtained from 500 runs of OPAM, i.e., 50 runs for each of the 10 synthetic subjects with the same configuration.} \label{fig:rq3memory} \end{figure} Figures~\ref{fig:rq3time} and \ref{fig:rq3memory} show, respectively, the execution times and memory usage from EXP3.1 (a), EXP3.2 (b), EXP3.3 (c), and EXP3.4 (d), described in Section~\ref{subsec:design}. The boxplots in the figures show distributions (25\%-50\%-75\%) obtained from 50 $\times$ 10 runs of OPAM for a set of 10 synthetic subjects, which are created with the same experimental setting. Regarding execution time, Figures~\ref{fig:rq3time exp1} and \ref{fig:rq3time exp4} show that the execution time of OPAM is linear both in the number of tasks and in the simulation time.
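The claimed linear trends can be checked by fitting a line to the median execution times (or memory usage) against each parameter value; a minimal ordinary-least-squares sketch (the sample data used below are made up for illustration, not measurements from our experiments):

```python
def linear_fit(xs, ys):
    # ordinary least squares for y = a * x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx
```

Small residuals around the fitted line support linearity, while a near-zero slope indicates no dependence, as observed for the ratio of aperiodic tasks and the range factor.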
As for the memory usage of OPAM, the results in Figures~\ref{fig:rq3memory exp1} and \ref{fig:rq3memory exp4} indicate that memory usage is linear both in the number of tasks and in the simulation time. However, the results depicted in Figures~\ref{fig:rq3time exp2}, \ref{fig:rq3time exp3}, \ref{fig:rq3memory exp2}, and \ref{fig:rq3memory exp3} indicate that neither the execution time nor the memory usage of OPAM is correlated with the remaining two parameters: the ratio of aperiodic tasks and the range factor. Therefore, we expect OPAM to scale well as the number of tasks and the simulation time increase. \noindent\parbox{\textwidth}{ \begin{mdframed}[style=RQFrame] \emph{The answer to {\bf RQ3} is that} the execution time and memory usage of OPAM are linear in the number of tasks and simulation time, thus scaling to industrial systems. Further, across our experiments, OPAM takes at most 15.5h using 2.9GB of memory to optimize priority assignments, an acceptable result since this is done offline. \end{mdframed} } \begin{figure*}[!th] \begin{center} \subfloat[ICS]{ \includegraphics[width=0.45\columnwidth]{figs_rq4-scatter-ICS.pdf} \label{fig:rq4scatter ics} } \hspace*{\fill} \subfloat[CCS]{ \includegraphics[width=.45\columnwidth]{figs_rq4-scatter-CCS.pdf} \label{fig:rq4scatter ccs} } \hfill \subfloat[UAV]{ \includegraphics[width=.45\columnwidth]{figs_rq4-scatter-UAV.pdf} \label{fig:rq4scatter uav} } \hspace*{\fill} \subfloat[GAP]{ \includegraphics[width=.45\columnwidth]{figs_rq4-scatter-GAP.pdf} \label{fig:rq4scatter gap} } \hfill \subfloat[HPSS]{ \includegraphics[width=.45\columnwidth]{figs_rq4-scatter-HPSS.pdf} \label{fig:rq4scatter hpss} } \hspace*{\fill} \subfloat[ESAIL]{ \includegraphics[width=0.45\columnwidth]{figs_rq4-scatter-ESAIL.pdf} \label{fig:rq4scatter esail} } \end{center} \caption{Comparing Pareto solutions obtained by OPAM and priority assignments defined by engineers for the six industrial subjects: (a)~ICS, (b)~CCS, (c)~UAV, (d)~GAP, (e)~HPSS, and (f)~ESAIL.
The points located closer to the bottom left of each plot are considered to be better priority assignments when compared to points closer to the top right.} \label{fig:rq4scatter} \end{figure*} \noindent\textbf{RQ4.} Figure~\ref{fig:rq4scatter} compares, with respect to external fitness (see the $\fun{fs}()$ and $\fun{fc}()$ fitness functions and the set $\mathbf{E}$ of sequences of task arrivals described in Section~\ref{subsec:external}), the Pareto solutions obtained by OPAM against the priority assignments defined by engineers for the six industrial subjects: ICS (Figure~\ref{fig:rq4scatter ics}), CCS (Figure~\ref{fig:rq4scatter ccs}), UAV (Figure~\ref{fig:rq4scatter uav}), GAP (Figure~\ref{fig:rq4scatter gap}), HPSS (Figure~\ref{fig:rq4scatter hpss}), and ESAIL (Figure~\ref{fig:rq4scatter esail}). As shown in the figure, the solutions obtained by OPAM clearly outperform the priority assignments defined by engineers regarding the two external objectives: the magnitude of safety margins and the extent to which constraints are satisfied. 
\begin{table}[t] \caption{Comparing safety margins from the task executions of ESAIL when using our optimized priority assignment and the one defined by engineers.} \begin{center} \begin{tabular}{ccrrr} \toprule \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{c}{Periodic tasks} & \multicolumn{1}{c}{Aperiodic tasks} & \multicolumn{1}{c}{All tasks} \\ \midrule \multirow{4}{*}{Engineer} & Min& -44.5& 9.4& -44.5 \\ & Max& 1879.7& 59710.3& 59710.3 \\ & Avg.& 126.6& 52.6& 78.1 \\ & Median& 82.1& 9.4& 48.1 \\ \midrule \multirow{4}{*}{OPAM} & Min& 48.1& 9.4& 9.4 \\ & Max& 1879.7& 59707.2& 59707.2 \\ & Avg.& 129.8& 57.2& 82.3 \\ & Median& 85.7& 9.4& 48.1 \\ \midrule \multirow{4}{*}{\% Difference} & Min& 208.09\%& 0.00\%& 121.12\% \\ & Max& 0.00\%& -0.01\%& -0.01\% \\ & Avg.& 2.53\%& 8.89\%& 5.33\% \\ & Median& 4.38\%& 0.00\%& 0.00\% \\ \bottomrule \multicolumn{5}{l}{\footnotesize $\ast$ Unit of time: ms} \end{tabular} \end{center} \label{tbl:rq4margins} \end{table} Table~\ref{tbl:rq4margins} summarizes safety margins from the task executions of ESAIL when using one of our priority assignments optimized by OPAM and the one defined by engineers at LuxSpace. Note that we focus on ESAIL as it is not possible to access the engineers who developed the other five industrial subjects reported in the literature~\citep{Locke1990,Traore2006,Frati2008,Marius2010,Anssi2011}. For comparison, we chose the bottom-left solution in Figure~\ref{fig:rq4scatter esail} since it is optimal for the constraint fitness, which is the same as the fitness value of the priority assignment defined by engineers, and the differences in safety margin fitness among our solutions are negligible. As shown in Table~\ref{tbl:rq4margins}, our optimized priority assignment significantly outperforms the one of engineers. Our solution increases safety margins, on average, by 5.33\% compared to the engineers' solution. 
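The ``\% Difference'' rows in Table~\ref{tbl:rq4margins} are consistent with a relative change from the engineers' value to OPAM's, normalized by the magnitude of the engineers' value (so an improvement over a negative margin comes out positive). A sketch of that arithmetic; note that recomputing the average rows from the rounded table entries deviates slightly, since the published averages are computed from unrounded data:

```python
def pct_diff(engineer, opam):
    # relative change from the engineers' value to OPAM's, in percent,
    # normalized by the magnitude of the engineers' value
    return 100.0 * (opam - engineer) / abs(engineer)

# minimum periodic-task safety margin: -44.5 ms (engineers) -> 48.1 ms (OPAM);
# round(pct_diff(-44.5, 48.1), 2) reproduces the 208.09 reported in the table
```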
For aperiodic tasks, our solution decreases safety margins by 0.01\% (a 3.1ms difference) when the safety margins being compared are the maximum margins observed in both solutions (see the maximum safety margins, 59710.3ms obtained by the engineers' solution and 59707.2ms obtained by OPAM, in Table~\ref{tbl:rq4margins}). Such a small decrease is however negligible in the context of ESAIL as the maximum safety margin obtained by our solution is still large, i.e., $\approx$1 minute. For periodic tasks, we note that our solution increases safety margins by 208.09\% when the safety margins being compared are the minimum margins observed in both solutions (see the minimum safety margins, -44.5ms obtained by the engineers' solution and 48.1ms obtained by OPAM, in Table~\ref{tbl:rq4margins}). Note that the minimum safety margin of -44.5ms obtained with the engineers' solution indicates that a task violates its deadline. In the context of ESAIL, which is a mission-critical system, such a gain in safety margins for periodic task executions is important because the hard deadlines of periodic tasks are more critical than the soft deadlines of aperiodic tasks. Investigating practitioners' perceptions of the benefits of OPAM is necessary for adopting OPAM in practice. To do so, we draw on the qualitative reflections of three software engineers at LuxSpace, with whom we have been collaborating on this research. They have had four to seven years of experience developing satellite systems at LuxSpace, and more than 50 years of collective industry experience. All the reflections are based on observations made throughout our interactions. The engineers at LuxSpace deemed OPAM to be an improvement over their current practice as it allows them to perform domain-specific trade-off analysis among Pareto solutions and is useful in practice to support decision making with respect to their task design.
Encouraged by the promising results, we are now applying OPAM to new systems in collaboration with LuxSpace. \noindent\parbox{\textwidth}{ \begin{mdframed}[style=RQFrame] \emph{The answer to {\bf RQ4} is that} OPAM helps optimize priority assignments such that they outperform those manually defined by engineers based on domain expertise. Our results show that OPAM, compared to current practice, increases safety margins, on average, by 5.33\%. \end{mdframed} } \subsection{Research questions} \label{subsec:RQs} \noindent\textbf{RQ1 (Sanity check):} \textit{How does OPAM perform compared with Random Search?} For search-based solutions, this RQ is an important \emph{sanity check} to ensure that success is not due to the search problem being easy~\citep{Arcuri2014}. Our conjecture is that a search-based algorithm, although expensive, will significantly outperform naive random search (RS). \noindent\textbf{RQ2 (Coevolution):} \textit{Is competitive coevolution suitable to find best-case priority assignments?} We conjecture that a coevolutionary algorithm is a suitable solution to address the priority assignment problem since it is solved, in practice, through a competitive, interactive process between the development and testing teams. To answer this RQ, we compare OPAM with a sequential approach that first looks for worst-case sequences of task arrivals and then tries to find best-case priority assignments. \noindent\textbf{RQ3 (Scalability):} \textit{Can OPAM find (near-)optimal solutions for large-scale systems within a reasonable time budget?} In this RQ, we investigate the scalability of OPAM by conducting experiments with systems of various sizes, including six industrial and several synthetic subjects. We study the relationship between OPAM's performance measures and the characteristics of study subjects.
\noindent\textbf{RQ4 (Usefulness):} \textit{How do priority assignments generated by OPAM compare with priority assignments defined by engineers?} OPAM can be considered useful only when it finds priority assignments that show benefits over those defined (manually) by engineers with domain expertise. This RQ therefore compares the quality of priority assignments generated by OPAM with those defined by engineers. We further discuss the usefulness of OPAM from a practical perspective, based on the feedback received from engineers at LuxSpace. \subsection{Experimental Design} \label{subsec:design} This section describes how we design experiments to answer the RQs described in Section~\ref{subsec:RQs}. We conducted four experiments, EXP1, EXP2, EXP3, and EXP4, as described below. \noindent\textbf{EXP1.} To answer RQ1, EXP1 compares OPAM with our baseline, which relies on random search, to ensure that the effectiveness of OPAM is not due to the search problem being simple. Our baseline, named RS, replaces GA with a random search for finding worst-case sequences of task arrivals and NSGAII with a random search for finding best-case priority assignments. Note that RS uses the same internal and external fitness functions (see Section~\ref{subsec:fitness}) and also maintains the best populations during search; however, it does not employ any genetic operators, i.e., crossover and mutation. In EXP1, we applied OPAM and RS to the six industrial subjects described in Section~\ref{subsec:industrial subjects}. Recall from Section~\ref{subsec:fitness} that OPAM uses a set $\mathbf{E}$ of task-arrival sequences that are generated independently from the coevolution process in order to monitor the coevolution progress in a stable manner. As OPAM and RS use the same set $\mathbf{E}$ of task-arrival sequences, EXP1 first compares OPAM and RS based on $\mathbf{E}$.
In addition, EXP1 examines how well the solutions, i.e., priority assignments, found by OPAM and RS perform with other sequences of task arrivals. To do so, we create six sets of sequences of task arrivals for each study subject by varying the method to generate task-arrival sequences and the number of task-arrival sequences. Note that task-arrival sequences generated by different methods are valid with respect to the inter-arrival times defined in each study subject. Below we describe the six sets of task-arrival sequences generated for each subject. \begin{itemize}[leftmargin=1em] \item $\mathbf{T}_{a}^{10}$: A set of task-arrival sequences generated by using an adaptive random search technique~\citep{Chen2010} that aims at maximizing the diversity of task-arrival sequences. The $\mathbf{T}_{a}^{10}$ set contains 10 sequences of task arrivals. \item $\mathbf{T}_{w}^{10}$: A set of task-arrival sequences generated by using a stress test case generation method that aims at maximizing the chances of deadline misses in task executions. The stress test case generation method extends prior work~\citep{Briand2005}. The extended method uses the fitness function regarding deadline misses and genetic operators that OPAM introduces for evolving worst-case task-arrival sequences (see Section~\ref{sec:approach}). The $\mathbf{T}_{w}^{10}$ set contains 10 sequences of task arrivals. \item $\mathbf{T}_{r}^{10}$: A set of task-arrival sequences generated randomly. The $\mathbf{T}_{r}^{10}$ set has 10 sequences of task arrivals. \item $\mathbf{T}_{a}^{500}$: A set of task-arrival sequences generated by using the adaptive random search technique. The $\mathbf{T}_{a}^{500}$ set contains 500 sequences of task arrivals. \item $\mathbf{T}_{w}^{500}$: A set of task-arrival sequences generated by using the stress test case generation method. The $\mathbf{T}_{w}^{500}$ set contains 500 sequences of task arrivals. 
\item $\mathbf{T}_{r}^{500}$: A set of task-arrival sequences generated randomly. The $\mathbf{T}_{r}^{500}$ set has 500 sequences of task arrivals. \end{itemize} \noindent\textbf{EXP2.} To answer RQ2, EXP2 compares OPAM with a priority assignment method, named SEQ, that relies on one-population search algorithms. SEQ first finds a set of worst-case sequences of task arrivals using GA with the fitness function that measures the magnitude of deadline misses (see $\fun{fd}()$ in Section~\ref{subsec:fitness}) and the genetic operators described in Section~\ref{subsec:evolution arrivals}. Given a set of worst-case task-arrival sequences obtained from GA, SEQ then aims at finding best-case priority assignments using NSGAII with the fitness functions that quantify the magnitude of safety margins and the degree of constraint satisfaction (see $\fun{fs}()$ and $\fun{fc}()$, respectively, in Section~\ref{subsec:fitness}) and the genetic operators described in Section~\ref{subsec:evolution priorities}. We note that SEQ does not use the external fitness functions as it does not coevolve task-arrival sequences and priority assignments. Hence, the numbers of fitness evaluations of the two methods are not comparable. To fairly compare OPAM and SEQ, we set the same time budget for the two methods. Specifically, we first measure the execution time of OPAM for analyzing each subject. We then split the execution time in half and set each half time as the execution budget of the GA and NSGAII steps in SEQ for the corresponding subject. In order to assess the quality of priority assignments obtained from OPAM and SEQ, we use the sets of task-arrival sequences described in EXP1, i.e., $\mathbf{T}_{a}^{10}$, $\mathbf{T}_{w}^{10}$, $\mathbf{T}_{r}^{10}$, $\mathbf{T}_{a}^{500}$, $\mathbf{T}_{w}^{500}$, and $\mathbf{T}_{r}^{500}$, which are created independently from the two methods. 
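To make the contrast between SEQ's two-phase pipeline and OPAM-style competitive coevolution concrete, the following toy sketch coevolves arrival sequences (the "predators") against priority permutations (the "prey") on a deliberately simplified model: one core, one job per task, integer time steps. Everything here — the model, the mutation operators, the population settings, and all helper names — is an illustrative assumption of ours, not OPAM's actual implementation.

```python
import random

def simulate(arrivals, wcets, prio):
    """Single-core preemptive fixed-priority simulation, one job per task,
    integer time. prio[i] is task i's priority (lower value = higher
    priority). Returns each task's completion time."""
    n = len(wcets)
    rem = list(wcets)
    done = [0] * n
    t = 0
    while any(r > 0 for r in rem):
        ready = [i for i in range(n) if arrivals[i] <= t and rem[i] > 0]
        if not ready:                      # idle until the next arrival
            t = min(arrivals[i] for i in range(n) if rem[i] > 0)
            continue
        i = min(ready, key=lambda k: prio[k])  # run the highest-priority job
        rem[i] -= 1
        t += 1
        if rem[i] == 0:
            done[i] = t
    return done

def min_margin(arrivals, wcets, deadlines, prio):
    """Smallest safety margin (absolute deadline minus completion time)."""
    done = simulate(arrivals, wcets, prio)
    return min(arrivals[i] + deadlines[i] - done[i] for i in range(len(wcets)))

def coevolve(wcets, deadlines, max_arrival, gens=20, pop=8, seed=0):
    """Two-population competitive coevolution: arrival sequences try to
    minimize the margin, priority permutations try to maximize it."""
    rng = random.Random(seed)
    n = len(wcets)
    arr_pop = [[rng.randint(0, max_arrival) for _ in range(n)] for _ in range(pop)]
    pri_pop = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        # External fitness: a priority assignment is judged by its margin
        # under the nastiest arrivals; an arrival sequence is judged by how
        # small a margin it forces on the current best priorities.
        pri_pop.sort(key=lambda p: min(min_margin(a, wcets, deadlines, p)
                                       for a in arr_pop), reverse=True)
        arr_pop.sort(key=lambda a: min(min_margin(a, wcets, deadlines, p)
                                       for p in pri_pop[:3]))
        # Replace the weaker half of each population with mutants of the stronger half.
        for k in range(pop // 2, pop):
            p = pri_pop[k - pop // 2][:]
            i, j = rng.sample(range(n), 2)
            p[i], p[j] = p[j], p[i]        # swap mutation on the permutation
            pri_pop[k] = p
            a = arr_pop[k - pop // 2][:]
            a[rng.randrange(n)] = rng.randint(0, max_arrival)  # re-sample one arrival
            arr_pop[k] = a
    return pri_pop[0], arr_pop[0]
```

SEQ, by contrast, would run the arrival-sequence search to completion first and only then search for priorities against the frozen worst cases; in the real tool, the internal and external fitness functions of Section~\ref{subsec:fitness} and NSGAII play the roles of these toy counterparts.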
\noindent\textbf{EXP3.} To answer RQ3, EXP3 examines not only the six industrial subjects but also 370 synthetic subjects. We create the synthetic subjects to study correlations between the execution time and memory usage of OPAM and the following parameters: the number of tasks ($n$), a (part-to-whole) ratio of aperiodic tasks ($\gamma$), a range factor for maximum inter-arrival times ($\mu$), and simulation time ($T$), as described in Sections~\ref{subsec:synthetic subjects} and~\ref{sec:approach}. We note that we chose to control parameters $n$, $\gamma$, and $\mu$ because they are the main parameters on which engineers have control to define tasks in real-time systems. Simulation time $T$ obviously impacts the execution time of OPAM as well; EXP3 aims at modeling these correlations precisely and quantifying them experimentally. Regarding the other factors, e.g., those defining task relationships and platform cores, we did not control them since they already exhibit significant diversity across the six industrial subjects. Recall from Section~\ref{subsec:synthetic subjects} that we use the task generation procedure presented in Figure~\ref{fig:synthetic} to synthesize tasks. For EXP3, we set some parameter values of the procedure as follows: (1)~Target utilization $u_t = 0.7$, which is a common objective in the development of a real-time system in order to guarantee the schedulability of tasks~\citep{Fineberg1967,Durr2019}. (2)~The range of task periods $[\var{pd}_\var{min}, \var{pd}_\var{max}] = [10\text{ms}, 1\text{s}]$, which are common values in many real-time systems~\citep{Emberson2010,Baruah2011}. (3)~The granularity of task periods $g = 10\text{ms}$ in order to increase realism as most of the task periods in our industrial subjects are multiples of 10ms. Because of some degree of randomness in the procedure of Figure~\ref{fig:synthetic}, we create ten synthetic subjects per configuration. Below we further describe how synthetic subjects are created for each controlled experiment.
\emph{EXP3.1.} To study the correlations between the execution time and memory usage of OPAM with the number of tasks $n$, we create nine sets of ten synthetic subjects such that no two sets have the same number of tasks. Specifically, we create sets with 10, 15, ..., 50 tasks, respectively. We set the ratio of aperiodic tasks to $\gamma = 0.4$ since, on average, the ratio of aperiodic to periodic tasks in our industrial subjects is 2/3. We set the range factor to $\mu = 2$, which is determined based on the inter-arrival times of aperiodic tasks in our industrial subjects. We set the simulation time $T$ to 2s in order to ensure that any aperiodic task arrives at least once during that time. We note that, given the maximum task period $\var{pd}_\var{max} = 1\text{s}$ and the range factor $\mu = 2$, the maximum inter-arrival time of an aperiodic task is at most 2s (see Section~\ref{subsec:synthetic subjects}). \emph{EXP3.2.} To study the correlations between the execution time and memory usage of OPAM with the ratio of aperiodic tasks $\gamma$, we create ten sets of synthetic subjects by setting this ratio to the following values: 0.05, 0.10, ..., 0.50. We set the number of tasks to 20 ($n = 20$), which is the average number of tasks in our six industrial subjects. The other parameters, i.e., the range factor ($\mu = 2$) and the simulation time ($T = 2\text{s}$), are set as discussed in EXP3.1. \emph{EXP3.3.} To study the correlations between the execution time and memory usage of OPAM with the range factor $\mu$ that is used to determine the maximum inter-arrival times, we create nine sets of synthetic subjects by setting $\mu$ to 2, 3, ..., 10. We set the simulation time as follows: $T = 10\text{s}$. This ensures that any aperiodic task arrives at least once during the simulation time when $\mu$ is at most 10 (see Section~\ref{subsec:synthetic subjects}).
The other parameters, i.e., the number of tasks ($n = 20$) and the ratio of aperiodic tasks ($\gamma = 0.4$), are set as discussed in EXP3.1 and EXP3.2. \emph{EXP3.4.} To study the correlations between the execution time and memory usage of OPAM with the simulation time $T$, we create nine sets of synthetic subjects by setting $T$ to 2s, 3s, ..., 10s. The other parameters, i.e., the number of tasks ($n = 20$), the ratio of aperiodic tasks ($\gamma = 0.4$), and the range factor ($\mu = 2$), are set as discussed in EXP3.1 and EXP3.2. \noindent\textbf{EXP4.} To answer RQ4, EXP4 compares priority assignments optimized by OPAM and those defined by engineers. We apply OPAM to the six industrial subjects (see Section~\ref{subsec:industrial subjects}) which include priority assignments defined by practitioners. Note that we focus here on the ESAIL subject in collaboration with our industry partner, LuxSpace; the other five subjects are from the literature~\citep{Alesio2015} and hence we can only collect feedback from practitioners for ESAIL.
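The four EXP3 grids above can be enumerated programmatically; a minimal sketch follows, where the dictionary keys and defaults are our shorthand for the parameters $n$, $\gamma$, $\mu$, and $T$ (with $T$ in seconds), and each configuration is later instantiated as ten synthetic subjects:

```python
def exp3_configurations():
    """Enumerate the configuration grids of EXP3.1-EXP3.4: one controlled
    parameter varies per experiment, the others keep their default values."""
    base = {"n": 20, "gamma": 0.4, "mu": 2, "T": 2}
    grids = {
        "EXP3.1": ("n", [10 + 5 * i for i in range(9)]),                 # 10, 15, ..., 50 tasks
        "EXP3.2": ("gamma", [round(0.05 * i, 2) for i in range(1, 11)]), # 0.05, 0.10, ..., 0.50
        "EXP3.3": ("mu", list(range(2, 11))),                            # 2, 3, ..., 10 (with T = 10s)
        "EXP3.4": ("T", list(range(2, 11))),                             # 2s, 3s, ..., 10s
    }
    configs = {}
    for exp, (param, values) in grids.items():
        rows = []
        for v in values:
            c = dict(base)
            c[param] = v
            if exp == "EXP3.3":
                c["T"] = 10  # longer simulation so every aperiodic task arrives at least once
            rows.append(c)
        configs[exp] = rows
    return configs
```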
\subsection{Industrial study subjects} \label{subsec:industrial subjects} \begin{table*}[t] \caption{Description of the six industrial subject systems: number of periodic and aperiodic tasks, resource dependencies, triggering relations, and platform cores.} \label{tbl:subjects} \begin{center} \begin{tabular}{lccccc} \toprule & \multicolumn{2}{c}{Task types} & \multicolumn{2}{c}{Relationships} & Platform \\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-6} \multicolumn{1}{c}{System} & Periodic & Aperiodic & Dependencies & Triggering & Cores \\ \midrule ICS & 3 & 3 & 3 & 0 & 3 \\ CCS & 8 & 3 & 3 & 6 & 2 \\ UAV & 12 & 4 & 4 & 0 & 3 \\ GAP & 15 & 8 & 6 & 5 & 2 \\ HPSS & 23 & 9 & 5 & 0 & 1 \\ ESAIL & 11 & 14 & 0 & 0 & 1 \\ \bottomrule \end{tabular} \end{center} \end{table*} \textcolor{rev3}{To evaluate RQs in realistic and diverse settings, we apply OPAM to six industrial study subjects from different domains, i.e., the aerospace, automotive, and avionics domains. Specifically, we obtained one case study subject from our industry partner, LuxSpace. We found the other five industrial study subjects in the literature~\citep{Alesio2015}, which, consistent with the LuxSpace system, all assume a single-queue, multi-core, fixed-priority scheduling policy. Note that OPAM uses the same scheduling policy (described in Section~\ref{subsec:simulation}) as in \citeauthor{Alesio2015}'s work. This policy uses fixed priorities that are determined offline and therefore do not change dynamically.} Table~\ref{tbl:subjects} summarizes the relevant attributes of these subjects, presenting the number of periodic and aperiodic tasks, resource dependencies, triggering relations, and platform cores. \textcolor{rev3}{The subjects are characterized by real-time parameters, e.g., periods, deadlines, and priorities, described in Section~\ref{sec:problem}.
We note that all the study subjects are deadlock-free systems as they do not have circular resource dependencies.} Regarding task priorities, all tasks in the six subjects have fixed priorities, which are defined by experts in their domains. The full task descriptions (including WCET, inter-arrival times, periods, deadlines, priorities, and relationship details) of the subjects are available online~\citep{Artifacts}. The main missions of the six subjects are described as follows: \begin{sloppypar} \begin{itemize}[leftmargin=1em] \item ICS is an ignition control system that checks the status of an automotive engine and corrects any errors of the engine~\citep{Frati2008}. The system was developed by Bosch GmbH\footnote{Bosch {GmbH}: https://www.bosch.com/}. \item CCS is a cruise control system that acquires data from vehicle sensors and maintains the specified vehicle speed~\citep{Anssi2011}. Continental AG\footnote{Continental {AG}: https://www.continental.com} developed the system. \item UAV is a mini unmanned air vehicle that follows dynamically defined way-points and communicates with a ground station to receive instructions~\citep{Traore2006}. The system was developed in collaboration with the University of Poitiers, France, and ENSMA\footnote{ENSMA: https://www.ensma.fr/}. \item GAP is a generic avionics platform for a military aircraft~\citep{Locke1990}. The system was designed in a joint project with Carnegie Mellon University, the US Navy, and IBM\footnote{IBM: https://www.ibm.com/}, aiming at supporting several missions regarding air-to-surface attacks. \item HPSS is a satellite system for two satellites, named Herschel and Planck~\citep{Marius2010}. The two satellites share the same computational architecture, although they have different scientific missions. Herschel aims at studying the origin and evolution of stars and galaxies. Planck's primary mission is the study of the relic radiation from the Big Bang.
ESA\footnote{ESA: https://www.esa.int/} carried out the HPSS project. \item ESAIL is a microsatellite for tracking ships worldwide by detecting messages that ships radio-broadcast (see Section~\ref{sec:motivation}). LuxSpace, our industry partner, developed ESAIL in an ESA project. \end{itemize} \end{sloppypar} \subsection{Synthetic study subjects} \label{subsec:synthetic subjects} To investigate RQ3, we use synthetic subjects in order to freely control key parameters in real-time systems. We create a set of tasks by adopting a well-known procedure~\citep{Emberson2010} for synthesizing real-time tasks, which has been applied in many schedulability analysis studies~\citep{Davis2008,Zhang2009,Davis2011,Grass2018,Durr2019}. \lstset{morekeywords={continue,not,is}, escapeinside={?}{?}} \begin{figure}[t] \begin{lstlisting}[style=Alg] Algorithm Synthetic task generation Input ${n}$: number of tasks Input $u_t$: target utilization Input $\var{pd}_\var{min}$: minimum task period Input $\var{pd}_\var{max}$: maximum task period Input $g$: granularity of task periods Input $\gamma$: ratio of aperiodic tasks Input $\mu$: range factor to determine maximum inter-arrival ?times? Output $\mathbf{S}$: set of tasks $\mathbf{S} \leftarrow$ $\{\}$, $\mathbf{C} \leftarrow$ $\{\}$ // synthesize a set of periodic tasks $\mathbf{U} \leftarrow \fun{UUniFast\_discard}(n, u_t)$ // task utilizations $\mathbf{I} \leftarrow \fun{generate\_task\_periods}(n, \var{pd}_\var{min}, \var{pd}_\var{max}, g)$ // task periods for each $j \in [1,n]$ ?\vrule?
$\mathbf{C}$ $\leftarrow$ $\mathbf{C} \cup \{U_j {\cdot} I_j\}$, where $U_j \in \mathbf{U}$ and $I_j \in \mathbf{I}$ // WCETs $ \mathbf{S} \leftarrow \fun{generate\_task\_set}(\mathbf{I}, \mathbf{C})$ // set of tasks // convert some periodic tasks to aperiodic tasks $ \mathbf{S} \leftarrow \fun{convert\_aperiodic\_tasks}(\mathbf{S}, \gamma, \mu)$ return $\mathbf{S}$ \end{lstlisting} \caption{An algorithm for synthesizing a set of tasks.} \label{fig:synthetic} \end{figure} Figure~\ref{fig:synthetic} describes a procedure that synthesizes a set of real-time tasks. For a given number $n$ of tasks and a target utilization $u_t$, the procedure first generates a set $\mathbf{U}$ of task utilization values by using the UUniFast-Discard algorithm~\citep{Davis2011} (line 13). The UUniFast-Discard algorithm is devised to give an unbiased distribution of utilization values, where a utilization $U_j \in \mathbf{U}$ is a positive value and $\sum_{U_j \in \mathbf{U}} U_j = u_t$. The procedure then generates a set $\mathbf{I}$ of $n$ task periods according to a log-uniform distribution within a range $[\var{pd}_\var{min}, \var{pd}_\var{max}]$, i.e., given a task period (random variable) $I_j \in \mathbf{I}$, $\log{I_j}$ follows a uniform distribution (line 14 in Figure~\ref{fig:synthetic}). For example, when the minimum and maximum task periods are $\var{pd}_\var{min} = 10\text{ms}$ and $\var{pd}_\var{max} = 1000\text{ms}$, respectively, the procedure generates (approximately) an equal number of tasks in time intervals [10ms, 100ms] and [100ms, 1000ms]. The parameter $g$ is used to choose the granularity of the periods, i.e., task periods are multiples of $g$. Such a distribution of task periods provides a reasonable degree of realism with respect to what is usually observed in real systems~\citep{Baruah2011}. 
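For illustration, here is a minimal Python sketch of the generation steps just described (lines 13--16 of Figure~\ref{fig:synthetic}) together with the aperiodic conversion of line 19. The helper names and the dictionary-based task layout are ours, not the paper's artifacts; the generator follows the UUniFast-Discard and log-uniform drawing described above.

```python
import math
import random

def uunifast_discard(n, u_t, rng):
    """UUniFast-Discard: unbiased utilizations summing to u_t, discarding
    draws where any single task would exceed full utilization of one core."""
    while True:
        us, s = [], u_t
        for i in range(1, n):
            nxt = s * rng.random() ** (1.0 / (n - i))
            us.append(s - nxt)
            s = nxt
        us.append(s)
        if all(u <= 1.0 for u in us):
            return us

def log_uniform_period(pd_min, pd_max, g, rng):
    """Period drawn log-uniformly from [pd_min, pd_max] and rounded down to
    a multiple of the granularity g."""
    p = math.exp(rng.uniform(math.log(pd_min), math.log(pd_max + g)))
    return max(pd_min, g * int(p // g))

def synthesize_tasks(n, u_t, pd_min, pd_max, g, seed=0):
    """Lines 13-16: utilizations, periods, and WCETs C_j = U_j * I_j."""
    rng = random.Random(seed)
    us = uunifast_discard(n, u_t, rng)
    periods = [log_uniform_period(pd_min, pd_max, g, rng) for _ in range(n)]
    wcets = [u * p for u, p in zip(us, periods)]
    return [{"period": p, "wcet": c} for p, c in zip(periods, wcets)]

def convert_aperiodic(tasks, gamma, mu, rng):
    """Line 19: turn a fraction gamma of the tasks into aperiodic ones;
    pmin(j) = I_j and pmax(j) = x * I_j with x drawn from (1, mu]
    (approximated here by random.uniform's closed interval)."""
    k = int(gamma * len(tasks))
    for t in rng.sample(tasks, k):
        x = rng.uniform(1.0, mu)
        t["pmin"], t["pmax"] = t["period"], x * t["period"]
        t.pop("period")
    return tasks
```

With the EXP3 defaults ($n = 20$, $u_t = 0.7$, periods in [10ms, 1s], $g = 10$ms), the generated periods are multiples of 10 and the utilizations sum to the target by construction.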
As shown in lines 15--16 of the procedure in Figure~\ref{fig:synthetic}, a set $\mathbf{C}$ of task WCETs is computed based on the set $\mathbf{U}$ of task utilization values and the set $\mathbf{I}$ of task periods. Specifically, a task WCET $C_j \in \mathbf{C}$ is computed as $C_j = U_j \cdot I_j$. As per line 17 of the listing in Figure~\ref{fig:synthetic}, the procedure synthesizes a set $\mathbf{S}$ of tasks. A task $j$ is characterized by a period $I_j$ and a WCET $C_j$, and it is associated with a deadline $\fun{dl}(j)$ and a priority $\fun{pr}(j)$. According to the rate-monotonic scheduling policy~\citep{Liu1973}, tasks' deadlines are equal to their periods and tasks with shorter periods are given higher priorities. To synthesize aperiodic tasks, the procedure converts some periodic tasks to aperiodic tasks according to a given ratio $\gamma$ of aperiodic tasks among all tasks (see line 19 in Figure~\ref{fig:synthetic}). A range factor $\mu$ is used to determine maximum inter-arrival times of aperiodic tasks. Specifically, for a task $j$ to be converted, the procedure sets the minimum inter-arrival time $\fun{pmin}(j)$ as $\fun{pmin}(j) = I_j$. The procedure then selects a uniformly distributed value $x$ from the range $(1, \mu]$ and computes the maximum inter-arrival time $\fun{pmax}(j)$ as $\fun{pmax}(j) = x \cdot I_j$. \subsection{Threats to Validity} \label{subsec:threats} To mitigate the main threats that arise from not accounting for random variation, we compared OPAM against RS under identical parameter settings. We present all the underlying parameters and provide the full package of our experiments to facilitate replication. Also, we ran OPAM 50 times for each study subject and compared results using statistical analysis, i.e., Mann-Whitney U-test and Vargha and Delaney's $\hat{A}_{12}$. We note that there are prior studies that aim at optimizing priority assignments such as OPA~\citep{Audsley1991} and RPA~\citep{Davis2007}.
However, to our knowledge, none of the existing works offer ways to analyze trade-offs among equally viable priority assignments with respect to safety margins and the satisfaction of constraints. Nevertheless, we attempted to compare OPAM with an extension of an existing method, e.g., RPA~\citep{Davis2007}. To do so, we first applied an exhaustive schedulability analysis technique to the ESAIL subject -- our motivating case study -- in order to verify whether the ESAIL tasks are schedulable for a given priority assignment. Note that existing priority assignment techniques are built on such schedulability analysis methods, which are therefore a prerequisite. We chose UPPAAL~\citep{Behrmann2004}, a model checker, for schedulability analysis as it has been used in real-time system studies~\citep{Marius2010,Yu2010,Yalcinkaya2019}. However, our experiment results using UPPAAL for ESAIL showed that it was not able to complete the analysis task, even after 5 days of execution, for a single priority assignment. We were therefore not able to perform experimental comparisons with existing priority assignment methods. Since this evaluation is not the main focus of this article, we point the reader to the UPPAAL specification of ESAIL available online~\citep{Artifacts}. Recall from Section~\ref{subsec:simulation} that OPAM assigns tasks' WCETs to their execution times when it simulates the worst-case executions of tasks while varying task arrival times. In many real-time systems studies~\citep{Briand2005, Guan2009, Lin2009, Anssi2011, Zeng2014, Alesio2015, Durr2019}, static WCETs are often used instead of varying task execution times for the purpose of real-time analysis. For example, practitioners typically use WCETs to estimate the lowest bound of CPU utilization required to properly apply the rate monotonic scheduling policy~\citep{Fineberg1967} to their systems. 
\textcolor{rev3}{Similarly, OPAM assumes that near-worst-case schedule scenarios can be simulated by assigning tasks' WCETs to their execution times and varying tasks' arrival times using search. A near-worst-case schedule scenario entails that the magnitude of deadline misses is maximized when tasks execute as per this scenario. Under this working assumption, we were able to empirically evaluate the sanity, coevolution, scalability, and usefulness aspects of OPAM (see Section~\ref{sec:eval}). The results indicate that OPAM is a promising and useful tool.} However, the formal proof of whether or not the WCET assumption holds in the system model described in Section~\ref{sec:problem} requires complex analysis, accounting for varying task arrival times, triggering relationships, resource dependencies, and multiple cores. When task execution times need to be varied during simulation, engineers can adapt OPAM by utilizing Monte-Carlo simulation~\citep{Kroese2014} to account for such variations. The main threat to external validity is that our results may not generalize to other systems. We mitigate potential biases and errors in our experiments by drawing on real industrial subjects from different domains and several synthetic subjects. Specifically, we selected two subjects from the aerospace domain, two from the automotive domain, and two from the avionics domain. The positive feedback obtained from LuxSpace and the encouraging results from our industrial case studies indicate that OPAM is a scalable and practical solution. \textcolor{rev3}{Furthermore, we believe OPAM introduces a promising avenue for addressing the problem of priority assignment by applying coevolutionary algorithms, even for systems that use other scheduling policies, e.g., priority inheritance. 
In order for OPAM to support different scheduling policies, the main requirement is to replace the existing simulator (described in Section~\ref{sec:approach}) with a new simulator supporting the desired scheduling policy. In our approach, the coevolution part of OPAM is separated from the scheduling policy, which is contained in the simulator. Hence, we deem the expected changes for the coevolution part of OPAM to be minimal. Future studies are nevertheless necessary to investigate how OPAM can be adapted to find near-optimal priority assignments for other real-time systems in different contexts. } \section{Evaluation} \label{sec:eval} This section describes our evaluation of OPAM through six industrial case studies from different domains and several synthetic subjects. Our full evaluation package is available online~\citep{Artifacts}. \input{tex_ev_rq} \input{tex_ev_subjects} \input{tex_ev_synthetic} \input{tex_ev_setup} \input{tex_ev_metrics} \input{tex_ev_param} \input{tex_ev_results} \input{tex_ev_validity} \section{Introduction} \label{sec:intro} Mission-critical systems are found in many different application domains, such as aerospace, automotive, and healthcare domains. The success of such systems depends on both functional and temporal correctness. For functional correctness, systems are required to provide appropriate outputs in response to the corresponding stimuli. Regarding temporal correctness, systems are supposed to generate outputs within specified time constraints, often referred to as deadlines. The systems that have to comply with such deadlines are known as real-time systems~\citep{Liu2000}. Real-time systems typically run multiple tasks in parallel and rely on a real-time scheduling policy to decide which tasks should have access to processing cores, i.e., CPUs, at any given time. 
While developing a real-time system, one of the most common problems that engineers face is the assignment of priorities to real-time tasks in order for the system to meet its deadlines. Based on priorities of real-time tasks, the system's task scheduler determines a particular order for allocating real-time tasks to processing cores. Hence, a poorly designed priority assignment makes the system scheduler execute tasks in an order that is far from optimal; the system will then likely violate its performance and timing constraints, i.e., deadlines. In real-time systems, the problem of optimally assigning priorities to tasks is important not only to avoid deadline misses but also to maximize \emph{safety margins} from task deadlines and is subject to \emph{engineering constraints}. Tasks may exceed their expected execution times due to unexpected interrupts. For example, it is infeasible to test an aerospace system exhaustively on the ground such that potential environmental uncertainties, e.g., those related to space radiations, are accounted for. Hence, engineers assign optimal priorities to tasks such that the remaining times from tasks' completion times to their deadlines, i.e., safety margins, are maximized to cope with potential uncertainties. Furthermore, engineers typically have to account for additional engineering constraints, e.g., they assign higher priorities to critical tasks that must always meet their deadlines compared to the tasks that are less critical or non-critical. A brute force approach to find an optimal priority assignment would have to examine all $n!$ distinct priority assignments, where $n$ denotes the number of tasks. Furthermore, for a given priority assignment, schedulability analysis, which determines whether or not tasks will always complete their executions within their specified deadlines, is, in general, known to be a hard problem~\citep{Audsley2001}.
Thus, optimizing priority assignments is also a hard problem because the space of all possible system states to explore in order to find optimal priority assignments is very large. Most of the prior works on optimizing priority assignments provide analytical methods~\citep{Fineberg1967,Leung1982,Audsley1991,Davis2007,Chu2008,Davis2009,Davis2012}, which rely on well-defined system models and are very restrictive. For example, they assume that tasks are independent, i.e., tasks do not share resources~\citep{Davis2016,Zhao2017}. Industrial systems, however, are typically not compatible with such (simple) system models. In addition, none of the existing work addresses the problem of optimizing priority assignments by simultaneously accounting for multiple objectives, such as safety margins and engineering constraints, as discussed above. Search-based software engineering (SBSE) has been successfully applied in many application domains, including software testing~\citep{Wegener1997,Wegener1998,Lin2009,Arcuri2010,Shin2018}, program repair~\citep{Weimer2009,Tan2016,Abdessalem2020}, and self-adaptation~\citep{Andrade2013,Chen2018,Shin2020}, where the search spaces are very large. Despite the success of SBSE, engineering problems in real-time systems have received much less attention in the SBSE community. In the context of real-time systems, there exists limited work on finding stress test scenarios~\citep{Briand2005} and predicting worst-case execution times~\citep{Lee2020}, which complements our work. In practice, priority assignments result from an interactive process between the development and testing teams. While developing a real-time system, developers assign priorities to real-time tasks in the system and then testers stress the system to check whether or not the system meets its specified deadlines. If testers find a problematic condition under which any of the tasks violates its deadline, developers have to modify the priority assignment to address the problem. 
The back-and-forth between the development and testing teams continues until a priority assignment that does not lead to any deadline miss is found or one that yields the least critical deadline misses is identified. The process is, however, not automated. In this article, we use metaheuristic search algorithms to automate the process of assigning priorities to real-time tasks. To mimic the interactive back-and-forth between the development and testing teams, we use competitive coevolutionary algorithms~\citep{Luke2013}. Coevolutionary algorithms are a specialized class of evolutionary search algorithms. They simultaneously coevolve two populations (also called species) of (candidate) solutions for a given problem. They can be cooperative or competitive. Such competitive coevolution is similar to what happens in nature between predators and prey. For example, faster prey escape predators more easily, and hence they have a higher probability of generating offspring. This impacts the predators, because they need to evolve as well to become faster if they want to feed and survive~\citep{Meneghini2016}. Hence, the two species, i.e., predators and prey, have coevolved competitively. We note that no species has the competing traits of both predators and prey simultaneously, as such a species could not evolve to survive. In our context, priority assignments defined by developers can be seen as prey and stress test scenarios as predators. The priority assignments need to evolve so that stress testing is not able to push the system into breaking its real-time constraints. Dually, stress test scenarios should evolve to be able to break the system when there is a chance to do so. \noindent\textbf{Contributions.} We propose an \underline{O}ptimal \underline{P}riority \underline{A}ssignment \underline{M}ethod for real-time systems (OPAM).
Specifically, we apply multi-objective, two-population competitive coevolution~\citep{Popovici2012} to address the problem of finding near-optimal priority assignments, aiming at maximizing both the magnitude of safety margins from deadlines and the extent of constraint satisfaction. In OPAM, two species, corresponding to priority assignments and stress testing, coevolve synchronously and compete against each other to find the best possible solutions. We evaluated OPAM by applying it to six complex, industrial systems from different domains, including the aerospace, automotive, and avionics domains, and several synthetic systems. Our results show that: (1)~OPAM finds significantly better priority assignments compared to our baselines, i.e., random search and sequential search, (2)~the execution time of OPAM scales linearly with the number of tasks in a system and the time required to simulate task executions, and (3)~OPAM priority assignments significantly outperform those manually defined by engineers based on domain expertise. We note that OPAM is the first attempt to apply coevolutionary algorithms to address the problem of priority assignment. Further, it enables engineers to explore trade-offs among different priority assignments with respect to two objectives: maximizing safety margins and satisfying engineering constraints. Our full evaluation package is available online~\citep{Artifacts}. \noindent\textbf{Organization.} The remainder of this article is structured as follows: Section~\ref{sec:motivation} motivates our work. Section~\ref{sec:problem} defines our specific problem of priority assignment in practical terms. Section~\ref{sec:relatedwork} discusses related work. Sections~\ref{sec:overview} and \ref{sec:approach} describe OPAM. Section~\ref{sec:eval} evaluates OPAM. Section~\ref{sec:conclusion} concludes this article.
\section{Motivating case study} \label{sec:motivation} We motivate our work using an industrial case study from the satellite domain. Our case study concerns a mission-critical real-time satellite, named ESAIL~\citep{ESAIL}, which has been developed by LuxSpace -- a leading system integrator for microsatellites and aerospace systems. ESAIL tracks vessels' movements over the entire globe as the satellite orbits the earth. The vessel-tracking service provided by ESAIL requires real-time processing of the messages received from vessels in order to ensure that their voyages are safe, with the assistance of accurate, prompt route provisions. Also, as ESAIL orbits the planet, it must be oriented into the proper position on time in order to provide its services correctly. Hence, ESAIL's key operations, implemented as real-time tasks, need to be completed within acceptable times, i.e., deadlines. Engineers at LuxSpace analyze the schedulability of ESAIL across different development stages. At an early design stage, the engineers use a priority assignment method that extends the rate monotonic scheduling policy~\citep{Fineberg1967}, a theoretical priority assignment algorithm used in real-time systems. At a later development stage, if the engineers find that a real-time task of ESAIL cannot complete its execution within its deadline, they, in our study context, reassign priorities to tasks in order to address the problem of deadline violations. The rate monotonic policy assigns priorities to tasks that arrive periodically and must be completed within a certain amount of time, i.e., periodic tasks with hard deadlines. According to the policy, periodic tasks that arrive frequently have higher priorities than tasks that arrive rarely.
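As a concrete illustration, the rate monotonic rule can be sketched as follows. This is a minimal sketch, not LuxSpace's actual implementation; the task names, the millisecond periods, and the convention that a larger number means a higher priority are illustrative assumptions.

```python
# Rate monotonic priority ordering: the shorter a task's period, the higher
# its priority. Larger numbers denote higher priorities (assumed convention).

def rate_monotonic_priorities(periods):
    """Map each task to a priority rank; shorter periods get higher ranks."""
    ordered = sorted(periods, key=lambda task: periods[task], reverse=True)
    return {task: rank + 1 for rank, task in enumerate(ordered)}

# Hypothetical periods in milliseconds for two ESAIL-like tasks.
priorities = rate_monotonic_priorities({"vessel_tracking": 100,
                                        "position_control": 150})
# vessel_tracking arrives more frequently, so it receives the higher priority.
```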
In ESAIL, for example, if the vessel-tracking task arrives every 100ms and the satellite-position control task arrives every 150ms, the former has a higher priority than the latter. However, the rate monotonic policy does not account for tasks that arrive irregularly and should be completed within a reasonable amount of time, i.e., aperiodic tasks with soft deadlines. ESAIL contains aperiodic tasks with soft deadlines as well, such as a task for updating software. Hence, the engineers extend the rate monotonic policy to assign priorities to all tasks of ESAIL. The extensions are as follows: First, the engineers assign priorities to periodic tasks based on the rate monotonic policy. Second, the engineers assign lower priorities to aperiodic tasks than to periodic tasks. As aperiodic tasks with soft deadlines are typically considered less critical than periodic tasks with hard deadlines, the engineers aim to ensure that periodic tasks complete their executions within their deadlines by assigning lower priorities to aperiodic tasks. The engineers use a heuristic to assign priorities to aperiodic tasks. They treat aperiodic tasks as (pseudo-)periodic tasks by setting the aperiodic tasks' (expected) minimum inter-arrival times as their fixed arrival periods, which makes these pseudo-periodic tasks arrive as frequently as possible. The engineers then apply the rate monotonic policy to the aperiodic tasks with these synthetic periods, while ensuring that aperiodic tasks have lower priorities than periodic tasks. A priority assignment made at an early design stage keeps changing while developing ESAIL due to various reasons, such as changes in requirements and implementation constraints. At later development stages, instead of relying on the extended rate monotonic policy, the engineers assign priorities based on their domain expertise, manually inspecting schedulability analysis results.
Hence, a priority assignment at later development stages often does not follow the extended rate monotonic policy. For example, as aperiodic tasks are also expected to be completed within a reasonable amount of time, some aperiodic tasks may have higher priorities than some periodic tasks as long as all tasks remain schedulable. \begin{sloppypar} Engineers at LuxSpace, however, are still faced with the following issues: (1)~Their priority assignment method, which extends the rate monotonic scheduling policy, assigns priorities to tasks in order to ensure only that tasks are schedulable. However, engineers have a pressing need to understand the quality of priority assignments in detail, as different assignments impact ESAIL operations differently. For example, once ESAIL is launched into orbit, the satellite operates in the space environment, which is inherently impossible to fully test on the ground. Unexpected space radiation may trigger unusual system interrupts, which have not been observed on the ground, resulting in overruns of ESAIL tasks' executions. In such cases, a priority assignment assessed on the ground may not be able to tolerate such unexpected uncertainties. Hence, engineers need a priority assignment that enables ESAIL tasks to tolerate unpredictable uncertainties as much as possible while remaining schedulable. (2)~Engineers at LuxSpace assign priorities to tasks without any systematic assistance. Instead, they rely on their expertise and the current practices described above to manually assign priorities that ensure that tasks are schedulable. To this end, we are collaborating with LuxSpace to develop a solution for addressing these issues in assigning task priorities. \end{sloppypar} \section{Approach Overview} \label{sec:overview} Finding an optimal priority assignment is an inherently interactive process.
In practice, once engineers assign priorities to the real-time tasks in a system, testers stress the system to find a condition, i.e., a particular sequence of task arrivals, under which a task execution violates its deadline. Testers typically use a simulator or hardware equipment to stress the system by triggering plausible worst-case arrivals of tasks that maximize the likelihood of deadline misses. If testers find task arrivals that induce deadline misses, the task arrivals are reported to engineers, who fix the problem by reassigning priorities. This interactive process of assigning priorities and testing schedulability continues until both engineers and testers are convinced that the tasks meet their deadlines. For such intrinsically interactive problem-solving domains, we conjecture that coevolutionary algorithms are suitable solutions. A coevolutionary algorithm is a search algorithm that mutually adapts two (or more) different species, e.g., in our study, the two populations of priority assignments and task-arrival sequences, acting as foils against one another. Specifically, we apply multi-objective, two-population competitive coevolution~\citep{Luke2013} to address our problem of finding optimal priority assignments (see Section~\ref{sec:problem}). In our approach, the two populations of priority assignments and stress test scenarios, i.e., task-arrival sequences, evolve synchronously, competing with each other in order to search for optimal priority assignments that maximize the magnitude of safety margins from deadlines and the extent of constraint satisfaction. Note that better priority assignments enable a system to achieve larger safety margins. Hence, those priority assignments have a higher chance of passing stress test scenarios. This impacts the stress test scenarios because they need to evolve as well, aiming at inducing deadline misses in the system.
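The two-population competitive scheme can be sketched as follows. This is a simplified sketch under stated assumptions: the fitness function, the list encoding of individuals, and the truncation-plus-mutation reproduction are all stand-ins for OPAM's actual operators, which are described in Section~\ref{sec:approach}.

```python
import random

# Two-population competitive coevolution (a sketch): priority assignments
# play the role of prey, task-arrival sequences the role of predators.
# fitness(a, s) is assumed to return the safety margin of assignment a
# under scenario s; a real implementation would simulate task executions.

def coevolve(assignments, scenarios, fitness, generations=10):
    """Evolve both populations, each evaluated against the other."""
    for _ in range(generations):
        # An assignment is fit if it keeps a margin under every scenario.
        assignments.sort(key=lambda a: min(fitness(a, s) for s in scenarios),
                         reverse=True)
        # A scenario is fit if it shrinks the margin of the best assignments.
        scenarios.sort(key=lambda s: min(fitness(a, s) for a in assignments))
        assignments = truncate_and_mutate(assignments)
        scenarios = truncate_and_mutate(scenarios)
    return assignments, scenarios

def truncate_and_mutate(population):
    """Keep the fittest half and refill with mutated copies (sketch)."""
    survivors = population[: max(1, len(population) // 2)]
    return survivors + [mutate(p) for p in survivors]

def mutate(individual):
    """Placeholder mutation: swap two genes of a list-encoded individual."""
    clone = list(individual)
    if len(clone) > 1:
        i, k = random.sample(range(len(clone)), 2)
        clone[i], clone[k] = clone[k], clone[i]
    return clone
```

Each population's fitness is defined only relative to the current members of the other population, which is what makes the search competitive rather than a pair of independent optimizations.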
Recall from Section~\ref{sec:relatedwork} that most of the existing SBSE research relies on search algorithms using a single population~\citep{Chen2018,Abdessalem2020,Shin2020}. However, such algorithms do not fit the problem of priority assignment targeted here. When (1)~the two competing traits, i.e., task arrivals and priority assignments, are encoded together in an individual of a single population, and (2)~two contradicting fitness functions regarding safety margins and deadline misses, which are exact opposites, assess such individuals, the notion of Pareto optimality is not applicable. In that case, maximizing the magnitude of safety margins necessarily entails minimizing the magnitude of deadline misses. Hence, a single-population search algorithm cannot make Pareto improvements, i.e., it cannot improve safety margins (resp. deadline misses) without degrading deadline misses (resp. safety margins). Specifically, the dominance relation over such individuals does not exist because, if an individual $I$ is strictly better than another individual $I^\prime$ in one fitness value, $I$ is always worse than $I^\prime$ in the other fitness value. Hence, we are not able to obtain equally viable solutions with respect to the contradicting objectives using such a method. \begin{figure}[t] \centering\includegraphics[width=.9\columnwidth]{figs_overview} \caption{An overview of our \underline{O}ptimal \underline{P}riority \underline{A}ssignment \underline{M}ethod for real-time systems (OPAM).} \label{fig:overview} \end{figure} Figure~\ref{fig:overview} shows an overview of our proposed solution: \underline{O}ptimal \underline{P}riority \underline{A}ssignment \underline{M}ethod for real-time systems (OPAM). OPAM requires as input task descriptions defined by engineers, which specify task characteristics and their relationships (see Section~\ref{sec:problem}).
Given such input task descriptions, the ``find worst task arrivals'' and ``find best priority assignments'' steps aim at generating worst-case sequences of task arrivals and best-case priority assignments, respectively. A task-arrival sequence is worst-case when the magnitude of deadline misses, i.e., the amount of time from task deadlines to task completion times, is maximized when tasks arrive as defined in the sequence. Note that if there is no deadline miss, a task-arrival sequence is considered worst-case if tasks complete their executions as close to their deadlines as possible. In contrast, a priority assignment is best-case when the magnitude of safety margins is maximized. Beyond maximizing safety margins, the ``find best priority assignments'' step accounts for satisfying engineering constraints in assigning priorities to tasks. OPAM synchronously evolves the two competing populations of task-arrival sequences and priority assignments generated by these two steps. OPAM then outputs a set of priority assignments that are Pareto optimal with respect to the magnitude of safety margins and the extent of constraint satisfaction. Hence, OPAM allows engineers to perform domain-specific trade-off analysis among Pareto solutions and is useful in practice to support decision making with respect to their task design. For example, suppose engineers develop a weakly hard real-time system~\citep{Bernat2001} that can tolerate occasional deadline misses. In that case, engineers may consider a few deadline misses as less important (as long as their consequences are negligible) than the overall magnitude of safety margins in their trade-off analysis. Section~\ref{sec:approach} describes OPAM in detail.
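Since OPAM's output is a Pareto-optimal set over two objectives (safety margins and constraint satisfaction), the underlying dominance check can be sketched as follows; the encoding of each candidate as a pair of objective values is an illustrative assumption.

```python
# Pareto front over two maximization objectives, e.g., safety margin and
# constraint satisfaction. Each candidate is a pair of objective values.

def dominates(p, q):
    """p dominates q if p is no worse in every objective and better in one."""
    return (all(pi >= qi for pi, qi in zip(p, q))
            and any(pi > qi for pi, qi in zip(p, q)))

def pareto_front(candidates):
    """Keep the candidates that no other candidate dominates."""
    return [p for p in candidates
            if not any(dominates(q, p) for q in candidates)]

# (2, 2) is dominated by (4, 4); the three remaining candidates are
# pairwise incomparable and together form the Pareto front.
front = pareto_front([(5, 3), (4, 4), (3, 5), (2, 2)])
```

The incomparable candidates on the front are the ``equally viable'' solutions among which engineers perform their trade-off analysis.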
\section{Problem description} \label{sec:problem} \begin{figure}[t] \centering\includegraphics[width=.9\columnwidth]{figs_conceptualmodel} \caption{A conceptual model representing the key abstractions to analyze optimal priority assignments.} \label{fig:conceptual model} \end{figure} This section defines the task, scheduler, and schedulability concepts, which extend the concepts defined in our previous work~\citep{Lee2020} by augmenting them with the notions of safety margins, constraints in assigning priorities, and relationships between real-time tasks. We then describe the problem of optimizing priority assignments such that we maximize the magnitude of safety margins and the degree of constraint satisfaction. Figure~\ref{fig:conceptual model} shows an overview of the conceptual model that represents the key abstractions required to analyze optimal priority assignments for real-time systems. The entities in the conceptual model are described below. \textbf{Task.} We denote by $j$ a real-time task that should complete its execution within a specified deadline after it is activated (or arrives). Every real-time task $j$ has the following properties: a priority denoted by $\fun{pr}(j)$, a deadline denoted by $\fun{dl}(j)$, and a worst-case execution time (WCET) denoted by $\fun{wcet}(j)$. The task priority $\fun{pr}$ determines whether the execution of a task is preempted by another task. Typically, a task $j$ preempts the execution of a task $j^\prime$ if the priority of $j$ is higher than the priority of $j^\prime$, i.e., $\fun{pr}(j) > \fun{pr}(j^\prime)$. \textcolor{rev3}{The $\fun{pr}(j)$ priority is a fixed value assigned to task $j$. Such fixed priorities are determined offline; hence, they are not changed online for any reason.
Note that a real-time task scheduler that relies on fixed priorities is applied in all the study subjects in this article (see Section~\ref{subsec:industrial subjects}) and is commonly used in industrial systems~\citep{Briand2005, Guan2009, Lin2009, Anssi2011, Zeng2014, Alesio2015, Durr2019, Lee2020Panda}.} The $\fun{dl}(j)$ function determines the deadline of a task $j$ relative to its arrival time. A task deadline can be either \emph{hard} or \emph{soft}. A hard deadline of a task $j$ requires that $j$ \emph{must} complete its execution within its deadline $\fun{dl}(j)$ after $j$ is activated. While violations of hard deadlines are not acceptable, violations of soft deadlines may be tolerated to some extent, depending on the operating context of a system. Note that we use a metaheuristic search relying on fitness functions quantifying the degrees of deadline misses, safety margins, and constraint satisfaction. Such functions do not depend on the nature of the deadlines. Our approach outputs a set of priority assignments that are Pareto optimal with respect to safety margins and constraint satisfaction. Engineers then perform domain-specific trade-off analysis among Pareto solutions. Hence, in this article, we handle hard and soft deadline tasks in the same manner. Real-time tasks are either \emph{periodic} or \emph{aperiodic}. Periodic tasks, which are typically triggered by timed events, are invoked at regular intervals specified by their \emph{period}. We denote by $\fun{pd}(j)$ the period of a periodic task $j$, i.e., the fixed time interval between subsequent activations (or arrivals) of $j$. Any task that is not periodic is called aperiodic. Aperiodic tasks have irregular arrival times and are activated by external stimuli, which occur irregularly.
In real-time analysis, based on domain knowledge, we typically specify a minimum inter-arrival time, denoted by $\fun{pmin}(j)$, and a maximum inter-arrival time, denoted by $\fun{pmax}(j)$, indicating the minimum and maximum time intervals between two consecutive arrivals of an aperiodic task $j$. In the real-time literature, sporadic tasks are often separately defined as having irregular arrival intervals and hard deadlines~\citep{Liu2000}. In our conceptual definitions, however, we do not introduce a new notation for sporadic tasks because the deadline and period concepts defined above sufficiently characterize them. Note that for a periodic task $j$, we have $\fun{pmin}(j) = \fun{pmax}(j) = \fun{pd}(j)$, whereas for an aperiodic task $j$, we have $\fun{pmax}(j) > \fun{pmin}(j)$. \textbf{Task relationships.} The execution of a task $j$ depends not only on its own parameters described above, e.g., its priority $\fun{pr}(j)$ and period $\fun{pd}(j)$, but also on its relationships with other tasks. Relationships between tasks are typically determined by task interactions related to accessing shared resources and triggering arrivals of other tasks~\citep{Alesio2012}. Specifically, if two tasks $j$ and $j^\prime$ access a shared resource $r$ in a mutually exclusive way, $j$ may be blocked from executing for the period during which $j^\prime$ accesses $r$. We denote by $\fun{dp}(j,j^\prime)$ the resource-dependency relation between tasks $j$ and $j^\prime$ that holds if $j$ and $j^\prime$ have mutually exclusive access to a shared resource $r$ such that they cannot be executed in parallel or preempt each other, but one can execute only after the other has completed accessing $r$. The other type of relationship between tasks is related to a task $j$ triggering the arrival of another task $j^\prime$. This is a common interaction between tasks~\citep{Locke1990,Anssi2011,Alesio2015}.
For example, $j$ may hand over some of its workload to $j^\prime$ for performance or reliability reasons. We denote by $\fun{tr}(j,j^\prime)$ the triggering relation between tasks $j$ and $j^\prime$ that holds if $j$ triggers the arrival of $j^\prime$. We note that both relationships are defined at the level of tasks, following prior works~\citep{Locke1990,Anssi2011,Alesio2015} describing the five industrial case study systems used in our experiments (see Section~\ref{subsec:industrial subjects}). \textbf{Scheduler.} Let $J$ be a set of tasks to be scheduled by a real-time scheduler. A scheduler then dynamically schedules executions of the tasks in $J$ according to the tasks' arrivals and the scheduler's scheduling policy over the scheduling period $\mathbb{T} = [0,\mathbf{T}]$. We denote by $\fun{at}_k(j)$ the $k$th arrival time of a task $j \in J$. The first arrival of a periodic task $j$ does not always occur immediately at the system start time ($0$). Such an offset from the system start time to the first arrival time $\fun{at}_1(j)$ of $j$ is denoted by $\fun{offset}(j)$. For a periodic task $j$, the $k$th arrival of $j$ within $\mathbb{T}$ satisfies $\fun{at}_k(j) \leq \mathbf{T}$ and is computed by $\fun{at}_k(j) = \fun{offset}(j) + (k-1) \cdot \fun{pd}(j)$. For an aperiodic task $j^\prime$, $\fun{at}_k(j^\prime)$ is determined based on the $(k{-}1)$th arrival time of $j^\prime$ and its minimum and maximum inter-arrival times. Specifically, for $k > 1$, $\fun{at}_k(j^\prime) \in [\fun{at}_{k-1}(j^\prime)+\fun{pmin}(j^\prime), \fun{at}_{k-1}(j^\prime)+\fun{pmax}(j^\prime)]$ and, for $k = 1$, $\fun{at}_1(j^\prime) \in [\fun{pmin}(j^\prime), \fun{pmax}(j^\prime)]$, where $\fun{at}_k(j^\prime) < \mathbf{T}$. A scheduler reacts to a task arrival at $\fun{at}_k(j)$ by scheduling the execution of $j$.
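The arrival-time definitions above can be sketched as follows. The uniform sampling of aperiodic inter-arrival gaps is an illustrative assumption; any value in $[\fun{pmin}(j), \fun{pmax}(j)]$ is admissible.

```python
import random

# Arrival times within the scheduling period [0, T], following the
# definitions above: a periodic task arrives at offset + (k-1)*pd, and
# each aperiodic inter-arrival gap lies in [pmin, pmax] (sampled here).

def periodic_arrivals(offset, pd, T):
    """All arrival times offset + (k-1)*pd that fall within [0, T]."""
    arrivals, k = [], 1
    while offset + (k - 1) * pd <= T:
        arrivals.append(offset + (k - 1) * pd)
        k += 1
    return arrivals

def aperiodic_arrivals(pmin, pmax, T):
    """One random admissible arrival sequence of an aperiodic task."""
    arrivals, t = [], random.uniform(pmin, pmax)  # first arrival
    while t < T:
        arrivals.append(t)
        t += random.uniform(pmin, pmax)  # next inter-arrival gap
    return arrivals

# A periodic task with offset 0 and period 8 over [0, 23] arrives at 0, 8, 16.
```

Note the asymmetry in the definitions: a periodic arrival may fall exactly at $\mathbf{T}$, whereas aperiodic arrivals are strictly before $\mathbf{T}$.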
Depending on the scheduling policy (e.g., the rate monotonic scheduling policy for single-core systems~\citep{Fineberg1967} or a single-queue multi-core scheduling policy~\citep{Arpaci2018}), an arrived task $j$ may not start its execution as soon as it arrives when higher-priority tasks are executing on all processing cores. Also, task executions may be interrupted due to preemption. We denote by $\fun{et}_k(j)$ the completion time for the $k$th arrival of a task $j$. According to the worst-case execution time of a task $j$, we have: $\fun{et}_k(j) \ge \fun{at}_k(j) + \fun{wcet}(j)$. During system operation, a scheduler generates a \emph{schedule scenario} which describes a sequence of task arrivals and their completion time values. We define a schedule scenario as a set $S$ of tuples $(j, \fun{at}_k(j), \fun{et}_k(j))$ indicating that a task $j$ has arrived at $\fun{at}_k(j)$ and completed its execution at $\fun{et}_k(j)$. Due to a degree of randomness in task execution times and aperiodic task arrivals, a scheduler may generate a different schedule scenario for different runs of a system. \begin{figure}[t] \begin{center} \subfloat[Schedule scenario $S$]{ \parbox{1\columnwidth}{\centering \includegraphics[width=.9\columnwidth]{figs_schedule1} \label{fig:schedule 1} } } \subfloat[Schedule scenario $S^\prime$]{ \parbox{1\columnwidth}{ \centering \includegraphics[width=.9\columnwidth]{figs_schedule2} \label{fig:schedule 2} } } \caption{Example schedule scenarios $S$ and $S^\prime$ of three tasks: $j_1$, $j_2$, and $j_3$. (a)~The $S$ schedule scenario is produced when $\fun{pr}(j_1) = 3$, $\fun{pr}(j_2) = 2$, and $\fun{pr}(j_3) = 1$.
(b)~The $S^\prime$ schedule scenario is produced when $\fun{pr}(j_1) = 1$, $\fun{pr}(j_2) = 3$, and $\fun{pr}(j_3) = 2$.} \label{fig:schedule} \end{center} \end{figure} Figure~\ref{fig:schedule} shows two schedule scenarios $S$ (Figure~\ref{fig:schedule 1}) and $S^\prime$ (Figure~\ref{fig:schedule 2}) produced by a scheduler over the $[0,23]$ time period of a system run. Both $S$ and $S^\prime$ describe executions of three tasks, $j_1$, $j_2$, and $j_3$, arriving at the same time stamps (see $at_i$ in the figures). In both scenarios, the aperiodic task $j_1$ is characterized by: $\fun{pmin}(j_1) = 5$, $\fun{pmax}(j_1) = 13$, $\fun{dl}(j_1) = 4$, and $\fun{wcet}(j_1) = 2$. The aperiodic task $j_2$ is characterized by: $\fun{pmin}(j_2) = 3$, $\fun{pmax}(j_2) = 10$, $\fun{dl}(j_2) = 4$, and $\fun{wcet}(j_2) = 1$. The periodic task $j_3$ is characterized by: $\fun{pd}(j_3) = 8$, $\fun{dl}(j_3) = 7$, and $\fun{wcet}(j_3) = 3$. The priorities of the three tasks in $S$ (resp. $S^\prime$) satisfy the following: $\fun{pr}(j_1) > \fun{pr}(j_2) > \fun{pr}(j_3)$ (resp. $\fun{pr}(j_2) > \fun{pr}(j_3) > \fun{pr}(j_1)$). In both scenarios, task executions can be preempted depending on their priorities. Then, $S$ is defined by $S = \{(j_1, 5, 7)$, $\ldots$, $(j_2, 4, 5)$, $\ldots$, $(j_3, 8, 14)$, $(j_3, 16, 19)\}$; and $S^\prime$ is defined by $S^\prime = \{(j_1, 5, 7)$, $\ldots$, $(j_2, 4, 5)$, $\ldots$, $(j_3, 8, 12)$, $(j_3, 16, 19)\}$. \textbf{Schedulability.} Given a schedule scenario $S$, a task $j$ is \emph{schedulable} if $j$ completes its execution before its deadline, i.e., for all $\fun{et}_k(j)$ observed in $S$, $\fun{et}_k(j) \le \fun{at}_k(j) + \fun{dl}(j)$. Let $J$ be a set of tasks to be scheduled by a scheduler. The set $J$ of tasks is then schedulable if, for every schedule scenario $S$ of $J$, no task $j \in J$ misses its deadline.
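Using the tuple representation of a schedule scenario, the schedulability check above can be sketched as follows; the string task names and the deadline table, taken from the example values, are illustrative.

```python
# Schedulability over a schedule scenario: every tuple (j, at, et) must
# satisfy et <= at + dl(j). Deadlines follow the example tasks above.

deadlines = {"j1": 4, "j2": 4, "j3": 7}

def schedulable(scenario, dl):
    """True iff no task completes later than its arrival plus its deadline."""
    return all(et <= at + dl[task] for task, at, et in scenario)

# The known tuples of the example scenario S: j1 arrives at 5 and completes
# at 7, j2 arrives at 4 and completes at 5, j3 at 8/14 and 16/19.
S = [("j1", 5, 7), ("j2", 4, 5), ("j3", 8, 14), ("j3", 16, 19)]
# All completions meet their deadlines, so this fragment of S is schedulable.
```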
As shown in the schedule scenarios $S$ and $S^\prime$ presented in Figures~\ref{fig:schedule 1} and \ref{fig:schedule 2}, respectively, all three tasks, $j_1$, $j_2$, and $j_3$, are schedulable. However, we note that the overall amounts of remaining time, i.e., safety margins, from the tasks' completions to their deadlines differ between $S$ and $S^\prime$ (see the second completion times and deadlines of $j_1$, $j_2$, and $j_3$ in $S$ and $S^\prime$) because $S$ and $S^\prime$ are produced by using different priority assignments. Engineers typically wish to assign priorities to real-time tasks so as to maximize such safety margins, as discussed below. \textbf{Problem.} In real-time systems, fixed priorities are typically assigned to tasks~\citep{Davis2016,Lee2020Panda}. Finding an appropriate priority assignment is important not only for ensuring the schedulability of a system but also for maximizing the safety margins within which a system can tolerate unexpected execution time overheads. For example, if an unpredictable error occurs and triggers check-point mechanisms~\citep{Davis2007}, which re-execute part or all of a task $j$, then the execution time of $j$ unexpectedly overruns. Hence, engineers need an optimal priority assignment that maximizes the overall remaining times from task completion times to task deadlines, i.e., safety margins. While assigning priorities to tasks, engineers also account for constraints that are often, but not always, domain-specific. For example, aperiodic tasks' priorities should be lower than those of periodic tasks because periodic tasks are often more critical than aperiodic tasks. Hence, engineers develop a system that prioritizes executions of periodic tasks over aperiodic tasks. Recall from Section~\ref{sec:motivation} that this constraint is desired by engineers.
When needed, however, engineers can violate the constraint to some extent in order to ensure that aperiodic tasks complete within a reasonable amount of time while periodic tasks meet their deadlines. Constraints can be either \emph{hard} constraints, which must be satisfied, or \emph{soft} constraints, which are desired to be satisfied. In our study, hard constraints, e.g., that a running task's priority must be higher than a ready task's priority, need to hold while scheduling tasks and are enforced by the scheduler. In the context of optimizing priority assignments, we focus on maximizing the extent to which soft constraints are satisfied. In the remainder of this article, we refer to a soft constraint simply as a constraint. Our work aims at optimizing priority assignments to maximize the safety margins while satisfying such constraints. Specifically, for a set $J$ of tasks to be analyzed, we define three concepts as follows: (1)~a priority assignment for $J$, denoted by $\vv{P}$, (2)~the magnitude of safety margins for a priority assignment $\vv{P}$, denoted by $\fun{fs}(\vv{P})$, and (3)~the degree of constraint satisfaction, denoted by $\fun{fc}(\vv{P})$. We note that Section~\ref{subsec:fitness} describes in detail how we optimize $\vv{P}$ and compute $\fun{fs}(\vv{P})$ and $\fun{fc}(\vv{P})$. Our study aims at finding a set $\mathbf{B}$ of best possible priority assignments that are Pareto optimal~\citep{Knowles2000} such that each priority assignment $\vv{P} \in \mathbf{B}$ maximizes both $\fun{fs}(\vv{P})$ and $\fun{fc}(\vv{P})$, and the priority assignments in $\mathbf{B}$ are equally viable. \section{Related Work} \label{sec:relatedwork} This section discusses related research strands in the areas of priority assignment, real-time analysis using exhaustive techniques, search-based analysis in real-time systems, and coevolutionary analysis in software engineering.
\begin{table}[t] \caption{Comparing our work, OPAM, with existing priority assignment techniques with respect to the properties captured in their underlying system models.} \resizebox{\textwidth}{!}{ \begin{tabular}{l@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c@{ }c} \toprule \multicolumn{1}{c}{Properties} & OPAM & RMPO & DMPO & OPA & OPA-MLD & RPA & FNR-PA & PRPA & OPTA & EPAF \\ \midrule \makecell[l]{Periodic\\ task} & $\circ$ & $\circ$ & $\circ$ & $\circ$ & $\circ$ & $\circ$ & $\circ$ & $\circ$ & $\circ$ & $\circ$ \\ \arrayrulecolor{lightgray}\hline\arrayrulecolor{black} \makecell[l]{Aperiodic\\ task} & $\circ$ & & & $\circ$ & $\circ$ & $\circ$ & $\circ$ & $\circ$ & & \\ \arrayrulecolor{lightgray}\hline\arrayrulecolor{black} \makecell[l]{Resource\\ dependency} & $\circ$ & & & & & & & & & \\ \arrayrulecolor{lightgray}\hline\arrayrulecolor{black} \makecell[l]{Triggering\\ relationship} & $\circ$ & & & & & & & & & \\ \arrayrulecolor{lightgray}\hline\arrayrulecolor{black} \makecell[l]{Multi-core\\ system} & $\circ$ & & & $\circ$ & $\circ$ & $\circ$ & $\circ$ & $\circ$ & & \\ \arrayrulecolor{lightgray}\hline\arrayrulecolor{black} \makecell[l]{Safety\\ margin} & $\circ$ & & & & & $\circ$ & $\circ$ & $\circ$ & & \\ \arrayrulecolor{lightgray}\hline\arrayrulecolor{black} \makecell[l]{Engineering\\ constraint} & $\circ$ & & & & $\circ$ & & & & & \\ \bottomrule \end{tabular} } \label{tbl:comp-relatedwork} \end{table} \noindent\textbf{Priority assignment.} The problem of optimally assigning priorities to real-time tasks has been widely studied~\citep{Fineberg1967,Liu1973,Leung1982,Audsley1991,Tindell1994,George1996,Audsley2001,Davis2007,Chu2008,Davis2009,Davis2011,Davis2012,Davis2016,Zhao2017,Hatvani2018}. 
\cite{Fineberg1967} reported early work that relies on a simple system model, assuming, for example, that all tasks arrive periodically, tasks run on a single processing core, tasks' deadlines are equal to their periods, and task executions are independent of one another. They proposed a priority assignment method, named rate-monotonic priority ordering (RMPO), that assigns higher priorities to the tasks with shorter periods. RMPO can find a feasible priority assignment that guarantees that periodic tasks are schedulable whenever such a priority assignment exists~\citep{Liu1973}. \cite{Leung1982} extended RMPO to relax one of the underlying assumptions made in RMPO. Specifically, their priority assignment approach, known as deadline-monotonic priority ordering (DMPO), accounts for task deadlines that can be less than or equal to their periods. In contrast to our work, however, these methods are often not applicable to industrial systems that are not compatible with their simplified system models. Recall from Section~\ref{sec:problem} that a realistic system typically consists of both periodic and aperiodic tasks, and that task executions depend on their relationships, i.e., resource dependencies and triggering relationships, with other tasks. \cite{Audsley2001} designed a priority assignment method, named optimal priority assignment (OPA), that relies on an existing schedulability analysis method $M$. OPA is guaranteed to find a feasible priority assignment that is schedulable according to $M$ if such a priority assignment exists. OPA is applicable to more complex systems than those supported by the methods mentioned above, i.e., RMPO and DMPO. Specifically, OPA can find a feasible priority assignment even in the following situations: (1)~First arrivals of periodic tasks occur after some offset time~\citep{Audsley1991}. (2)~Aperiodic tasks have arbitrary deadlines~\citep{Tindell1994}. (3)~Task executions are scheduled based on a non-preemptive scheduling policy~\citep{George1996}.
(4)~Tasks run on multiple processing cores~\citep{Davis2011}. Unlike our approach, which accounts for two objectives, safety margins and engineering constraints (see Section~\ref{sec:problem}), OPA attempts to find a feasible priority assignment whose only objective is to make all tasks schedulable. Note that such a feasible priority assignment does not necessarily maximize safety margins, as discussed in Section~\ref{sec:problem}. Hence, a feasible priority assignment obtained by OPA is often fragile: it is sensitive to any changes in task executions and unable to accommodate unexpected overheads in task execution times, which are commonly observed in industrial systems~\citep{Davis2007}. OPA has been extended by several works~\citep{Davis2007,Chu2008,Davis2009,Davis2012}. \cite{Davis2007} presented a robust priority assignment method (RPA) with a degree of tolerance for unexpected overruns of task execution times. \cite{Chu2008} introduced an extended OPA algorithm (OPA-MLD) that minimizes the lexicographical distance between the desired priority assignment and the one obtained by the algorithm. OPA-MLD enables important tasks to have higher priorities. \cite{Davis2012} proposed an RPA extension (FNR-PA) to make RPA work when a system allows task preemption to be deferred for some interval of time. \cite{Davis2009} developed a probabilistic robust priority assignment method (PRPA) that makes a real-time system less likely to violate its deadlines. Even though the prior works mentioned above improve OPA to some extent, they assume that task executions are independent of one another. In contrast to these existing approaches, OPAM accounts for dependencies among task executions, i.e., resource dependencies and triggering relationships (see our problem description in Section~\ref{sec:problem}). Some recent priority assignment techniques address scalability.
\cite{Hatvani2018} presented an optimal priority and preemption-threshold assignment algorithm (OPTA) that attempts to decrease the computation time for finding a feasible priority assignment. OPTA uses a heuristic to traverse the problem space while pruning infeasible paths, enabling efficient and effective exploration. \cite{Zhao2017} introduced an effective priority assignment framework (EPAF) that combines a commercial solver for integer linear programs and their problem-specific optimization algorithm. However, these methods rely on simple system models that assume, for example, task executions to be independent and running on a single processing core. Therefore, the applicability of these techniques is limited. In contrast, recall from Sections~\ref{sec:motivation} and \ref{sec:problem} that our approach aims at scaling to complex industrial systems while accounting for realistic system characteristics regarding task periods, inter-arrival times, resource dependencies, triggering relationships, and multiple processing cores. Table~\ref{tbl:comp-relatedwork} compares our work, OPAM, with the other priority assignment techniques mentioned above. As shown in the table, prior works rely on very restrictive system models. In particular, existing work assumes that task executions are independent of one another. However, task dependencies such as resource dependencies and triggering relationships are commonly observed in industrial systems. In addition, we note that no existing solution simultaneously accounts for safety margins and engineering constraints. Hence, to our knowledge, OPAM is the first attempt to provide engineers with a set of equally viable priority assignments, allowing trade-off analysis with respect to the two objectives: maximizing safety margins and satisfying engineering constraints.
\noindent\textbf{Real-time analysis using exhaustive techniques.} Constraint programming and model checking have been applied to conclusively and exhaustively verify whether or not a system meets its deadlines~\citep{Kwiatkowska2011, Alesio2012, Nejati2012, Alesio2013}. Existing research on priority assignment based on OPA relies on such exhaustive techniques to prove the schedulability of a set of tasks for a given priority assignment. We note that schedulability analysis is, in general, an NP-hard problem~\citep{Davis2016} that cannot be solved in polynomial time. As a result, exhaustive techniques based on model checking and constraint solving are often unable to analyze large industrial systems such as ESAIL -- our motivating case study system -- described in Section~\ref{sec:motivation}. To assess whether exhaustive techniques could scale to ESAIL, as discussed in Section~\ref{subsec:threats}, we performed a preliminary experiment using UPPAAL~\citep{Behrmann2004}, a model checker for real-time systems. We observed that UPPAAL was not able to verify the schedulability of ESAIL tasks for a fixed priority assignment even after letting it run for several days (see Section~\ref{subsec:threats} for more details). \noindent\textbf{Search-based analysis in real-time systems.} In real-time systems, most of the existing works that use search-based techniques focus on testing~\citep{Wegener1997,Wegener1998,Briand2005,Lin2009,Arcuri2010}. \citeauthor{Wegener1997}~(\citeyear{Wegener1997}, \citeyear{Wegener1998}) introduced a testing approach based on a genetic algorithm that aims to check computation time, memory usage, and task synchronization by analyzing the control flow of a program. \cite{Briand2005} applied a genetic algorithm to find stress test scenarios for real-time systems. \cite{Lin2009} proposed a search-based approach to check whether a real-time system meets its timing and security constraints.
\cite{Arcuri2010} presented a black-box system testing approach based on a genetic algorithm. Beyond testing real-time systems, \citeauthor{Nejati2013}~(\citeyear{Nejati2013}, \citeyear{Nejati2014}) developed a search-based trade-off analysis technique that helps engineers balance satisfying temporal constraints and keeping the CPU time usage at an acceptable level. \cite{Lee2020} combined a search algorithm and machine learning to estimate safe ranges of worst-case task execution times within which tasks likely meet their deadlines. In contrast to these prior works, OPAM addresses the problem of optimally assigning priorities to real-time tasks while accounting for multiple objectives regarding safety margins and engineering constraints, thus enabling Pareto (trade-off) analysis. Further, OPAM uses a multi-objective, competitive coevolutionary search algorithm, which has rarely been applied to date in studies of real-time systems, as discussed next. \noindent\textbf{Coevolutionary analysis in software engineering.} Despite the success of search-based software engineering (SBSE) in many application domains including software testing~\citep{Wegener1997,Wegener1998,Lin2009,Arcuri2010,Shin2018}, program repair~\citep{Weimer2009,Tan2016,Abdessalem2020}, and self-adaptation~\citep{Andrade2013,Chen2018,Shin2020}, coevolutionary algorithms have been applied in only a few prior studies~\citep{Wilkerson2010,Wilkerson2012,Boussaa2013}. \citeauthor{Wilkerson2010}~(\citeyear{Wilkerson2010}, \citeyear{Wilkerson2012}) presented a coevolution-based approach to automatically correct software. Their work introduced a program representation language to facilitate their automated corrections. \cite{Boussaa2013} developed a code-smells detection approach. The main idea is to evolve two competing populations of code-smell detection rules and artificial code-smells.
Unlike these prior works, we study the problem of optimally assigning priorities to tasks in real-time systems. To our knowledge, we are the first to address the priority assignment problem using a multi-objective, competitive coevolutionary search algorithm.
\section{Introduction} The COVID-19 pandemic is an ongoing pandemic caused by the novel coronavirus (SARS-CoV-2)\cite{cov19wiki}. The symptoms are highly variable, and the virus, which spreads through the air and contaminated surfaces, is highly contagious. As of January 2021, there has yet to be a small molecule drug that is specific and effective for COVID-19. During the pandemic, countries around the world made efforts to overcome the difficulties, further reflecting the importance of unity, cooperation, and resource sharing. We are continuously exploring the value chain provided by artificial intelligence (AI) in the drug discovery process. In terms of pathological mechanisms, AI natural language processing (NLP) technology can replace manual curation of data and efficiently collect and sort data from global databases. Data mining is a process in which algorithms convert raw data into useful structured data. This technique is then integrated with NLP algorithms to analyze and organize the collected information from areas such as a disease field, either through rule-based text mining or a model-based tool. In our effort, we have launched the GHDDI Targeting COVID-19 platform \cite{targetingcov19}. Since its launch on January 29, the platform has been continuously updated and maintained, with new functions and modules added over time. These functional modules include an NLP data mining module for SARS-CoV-2 small molecule drug in vitro experimental data that is updated daily with new experimental information, an automated NLP COVID-19 clinical trial module allowing up-to-date summarization of clinical trial data, and an NLP-based scientific literature recommendation module. Overall, we present the details behind these three modules to support real-time scientific intelligence of COVID-19. \section{Methods} \label{sec:methods} In this section, we briefly introduce the three modules and how they were built using different databases.
All of the NLP systems were built using a standard Python 3.6 environment from Anaconda and associated packages mentioned below. Our system's backend and database are hosted on a Ubuntu 18.04 server using the same environment. \subsection{Data Aggregation and Preprocessing} \label{sec:mdap} The data was aggregated from a variety of sources. Through automated download scripts and given Application Programming Interfaces (API), abstract data was downloaded from PubMed, preprint sources, and dimensions.ai \cite{dimensions2020} using the string query "SARS-Cov-2 OR COVID-19 OR novel coronavirus". These data sources were compiled together and the string data was cleaned using simple Python scripts, for example by lower-casing all words and removing noise such as spaces or tabs. Duplicate entries were then removed in a sequence of steps by dropping records with duplicate DOI, title, and abstract strings, respectively. This aggregated dataset is used for the subsequent NLP workflows and models. \subsection{Data Dictionaries} \label{sec:mdd} Several dictionaries were compiled and utilized for information filtering and information extraction. First, a dictionary of all drug names was compiled using DrugBank drug names and aliases \cite{drugbank}, the FDA drug list \cite{fdalist}, and ChEMBL \cite{chembl2015}. All of these drug names were compiled into one list, and string length was computed to filter out outliers. Overall, the final list consisted of unigrams, bigrams, and trigrams; it also included drug names with a string length between 5 and 75 characters. The second dictionary involved a filter list to clean out unwanted items from the drug dictionary. In the DrugBank database, several entries that do not necessarily represent drug names can be found, such as large biologic molecules and antigens.
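As an illustration, the compilation and filtering steps just described might be sketched as follows; the drug names, blacklist entries, and function name are toy examples, and only the length and n-gram thresholds come from the text:

```python
def build_drug_dictionary(raw_names, blacklist=()):
    """Sketch of the dictionary compilation: merge names from the
    source lists, keep unigrams through trigrams whose string length
    falls between 5 and 75 characters, then drop blacklisted
    non-drug entries (e.g. large biologics, antigens)."""
    kept = set()
    for name in raw_names:
        name = name.strip().lower()
        # keep 1- to 3-word names with length in [5, 75]
        if 1 <= len(name.split()) <= 3 and 5 <= len(name) <= 75:
            kept.add(name)
    return sorted(kept - {b.strip().lower() for b in blacklist})

# Toy inputs: "X" is too short, "human plasma product" is blacklisted.
drugs = build_drug_dictionary(
    ["Remdesivir", "Hydroxychloroquine", "interferon beta", "X",
     "human plasma product"],
    blacklist=["human plasma product"],
)
# drugs == ['hydroxychloroquine', 'interferon beta', 'remdesivir']
```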
Likewise, in a similar Kaggle competition \cite{kaggle2020}, a list of filtered items was compiled, and this list was aggregated and used to filter out unwanted terms in the final drug dictionary used in subsequent workflows. \subsection{Preclinical Data Extraction} \label{sec:mpde} This step uses the aggregated dataset mentioned in the previous section. The dataset is first sent through a filter of keywords [EC50, IC50, CC50] to get a subset of only those abstracts mentioning these keywords. Then, each abstract from the subset is matched against the corpus of known drug names and sentences are extracted. If a sentence contains both a keyword and a drug name, the sentence is searched for a numerical value or descriptive phrase describing the relationship in that sentence. This is done using either regex (Rule 1) or a Spacy noun chunk model (Rule 2). Using regex, the system extracts the numerical value closest in word distance to the keyword, following rule-based logic. Similarly, the Spacy noun chunk model extracts the noun chunk describing the keyword. The Spacy model is an open-source English language model; its features include POS tagging, noun chunk extraction, and named-entity recognition. A noun chunk is a descriptive phrase that is significantly related to a keyword. The list of extracted values is then mapped onto the drug names as direct correlations. Additionally, if the sentences mentioning the drug and the keyword are different, the system still tries to extract a value as above but reports the drug name and experimental assay value relationship as an indirect correlation. These results are all tabulated and updated to the website. \begin{itemize} \item Rule 1: Using a regex query, all numbers are extracted. The closest numerical token to the experimental keyword is mapped to the closest drug name. \item Rule 2: Using a Spacy model, all noun chunks are extracted from the sentence.
The noun chunk closest in distance to the experimental keyword is identified and mapped. \end{itemize} Using these two rules, all data following this logic can be extracted and mapped. Because this text mining procedure is done using a list of known drugs, several metrics are used to validate the workflow. We evaluated the text mining results based on a similar text mining study\cite{medex}. In that study, 25 unique text items (notes) were randomly sampled and manually reviewed as a gold standard. Accordingly, we evaluated Precision, Recall, and F-measure on the preclinical data mining results by randomly sampling 25 papers by DOI. \begin{equation} \label{eq:precision} P = N_{correct}/N_{total} \end{equation} \begin{equation} \label{eq:recall} R = N_{correct}/N_{total\ possible} \end{equation} The above equations follow the standard definitions of precision and recall in an information extraction context\cite{ting2010}. \subsection{NLP Topic Model Recommendation Engine} \label{sec:mntmre} The aggregated dataset built in Section 2.1 is utilized in this step. Figure 2 shows the monthly number of articles uploaded onto our database. As a result of this large number, it was important to split the articles into different categories and recommend them by topic. After preprocessing, the data is checked and tagged for n-grams up to trigrams. Additionally, the data is lemmatized and stop words are removed; a bag-of-words is subsequently created for each abstract. A Latent Dirichlet Allocation (LDA) algorithm is used to build the topic model. This is an unsupervised machine learning model that measures the distribution of words and attempts to cluster this distribution into a specified number of hidden distributions. The word distributions per abstract determine which topic or hidden distribution that abstract best fits into.
The bag-of-words object is then sent to the Gensim LDA API \cite{lda2020} for model training, and the resulting Python pickle objects and metadata were used for daily updates. A gridsearch optimization of this topic model was performed by maximizing the coherence score; the best-scoring model, which in this dataset contained 30 topics, was used for the final output, where each topic was hand-labeled. After this model was trained, inference was performed on the original dataset, and each abstract was assigned a topic. This result was recorded, and a data-driven filter was used to remove topics that did not contain the number of papers required to form a topic. Then, for each topic, the papers are ranked and sorted by the model’s output weight (the gamma value in the Gensim model), and the top 10 papers are output into a final tabulated format. This can then be done for new data, which can be automatically updated in the future for this module. \subsection{Clinical Trials Text Mining} \label{sec:mctdm} In the clinical trials module, the open-access Figshare data shared by dimensions.ai was used \cite{dimensions2020}. As of January 2021, there were 7000 clinical trial records around the world. Clinical trials cover human experiments in different phases relating to drugs or biologics, including vaccines. The data is first preprocessed similarly to the methods mentioned above. The data is then tagged for unigrams, bigrams, and trigrams similar to the NLP topic model. Afterwards, the information extraction process clusters the clinical trials into one of the following three types.
\item Given the keyword “vaccine” and all its derivatives, the database was searched for these keywords and a list of vaccine clinical trials was compiled and output. This list is filtered and trials containing words in the blacklist are removed. \item Given several keywords relating to biological products, such as plasma, antibody, stem cell, and all of their derivative words, a list of biological products was compiled and output. This list is filtered and trials containing words in the blacklist are removed. \end{itemize} Using these three rules, all information pertaining to drugs, biologicals, and vaccines was extracted from the tabulated data. The data was visualized in our information portal \cite{targetingcov19} together with word clouds for biological drug and vaccine trials as a validation. \begin{figure} \centering \includegraphics[scale=0.666]{Picture0} \caption{Workflow of the real-time system used to update the modules.} \label{fig:Picture0} \end{figure} \subsection{Real-time Updates} \label{sec:mrtu} All of these modules are supported by real-time daily updates provided by a server set up to update automatically. This system, as shown in Figure 1, provides daily incremental updates of clinical trial data and research articles using open APIs provided by PubMed, Figshare, and other sources. This data is then stored in our database, which is updated daily. After updating the database, we use metadata to track changes in the clinical trial and research article databases. The clinical trial script is run automatically every day and completes the data processing if there is a new update. Likewise, new articles recently added to our database are preprocessed and then run through the preclinical NLP processing workflow, and updates are appended to a master list that is updated on our portal. Finally, the entire abstracts database is preprocessed and then run through the topic model; afterwards, the top 10 titles are uploaded per topic to the recommendation page.
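The metadata-based update check described above could be sketched as follows; the file name, JSON layout, and function name are assumptions for illustration, not the portal's actual schema:

```python
import datetime
import json
import pathlib
import tempfile

def needs_update(meta_path, today=None):
    """Hypothetical sketch of the daily incremental-update check:
    a module's pipeline re-runs only when the stored last-processed
    date is older than today's date."""
    today = today or datetime.date.today().isoformat()
    meta = json.loads(pathlib.Path(meta_path).read_text())
    # ISO-8601 date strings compare correctly as plain strings
    return meta.get("last_processed", "") < today

# Toy usage with a temporary metadata file.
with tempfile.TemporaryDirectory() as d:
    meta_file = pathlib.Path(d) / "clinical_trials.json"
    meta_file.write_text(json.dumps({"last_processed": "2021-01-01"}))
    stale = needs_update(meta_file, today="2021-01-15")  # True: re-run
```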
This model is retrained and updated monthly as new data becomes available. \section{Results} \label{sec:results} The results of our modules are presented below. Full results can be found on the Targeting COVID-19 GitHub portal \cite{targetingcov19}. This section gives an in-depth description of the results that were published to the website. \begin{figure} \centering \includegraphics[scale=1]{Picture1} \caption{Count of papers published every month starting from January 2020.} \label{fig:Picture1} \end{figure} Figure 2 visualizes the number of papers uploaded to the databases by month. With the number of papers published increasing exponentially from March 2020, it became impossible to track all experimental and clinical results published in journals or uploaded onto preprint services. Therefore, we used this data to automatically mine and extract valuable information that may be of use to scientists of different fields all around the world. \subsection{Small Molecule Drug Text Mining} For small molecule drugs, the compiled drug dictionary is used and the matches are tabulated in the following tables. Table 1 shows the top 20 most common drugs found through information extraction of clinical trial records, where there are currently over 1100 clinical trials for small molecule drugs, while Table 2 shows several of the best experimental results of small molecule drugs extracted from preclinical studies literature text. It is noted that the units are extracted from the sentence containing the experimental value; standard units are typically given as a molar concentration such as micromolar or nanomolar.
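Because the extracted unit strings vary ("um", "micro", and so on), a hypothetical normalization step to micromolar might look like the following; the conversion table and function name are assumptions, not part of the deployed system:

```python
# Hypothetical conversion factors to micromolar; the unit spellings
# mirror the free-text variants seen in extracted sentences.
UNIT_TO_MICROMOLAR = {"um": 1.0, "micro": 1.0, "micromolar": 1.0,
                      "nm": 1e-3, "nano": 1e-3, "nanomolar": 1e-3}

def to_micromolar(value, unit):
    """Normalize an extracted (value, unit) pair to micromolar."""
    return float(value) * UNIT_TO_MICROMOLAR[unit.strip().lower()]

ec50 = to_micromolar("0.0022", "micro")  # 0.0022 uM, as for nafamostat
```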
\begin{table} \caption{Top 20 known small molecule drugs undergoing COVID-19 clinical trials.} \centering \begin{tabular}{ll} \toprule \textbf{Treatment} & \textbf{Count} \\ \midrule Hydroxychloroquine & 153 \\ Ritonavir & 65 \\ Lopinavir & 61 \\ Azithromycin & 60 \\ Tocilizumab & 55 \\ Ivermectin & 51 \\ Favipiravir & 38 \\ Remdesivir & 33 \\ Chloroquine & 32 \\ Colchicine & 24 \\ Dexamethasone & 23 \\ Methylprednisolone & 23 \\ Enoxaparin & 22 \\ Nitazoxanide & 20 \\ Ruxolitinib & 19 \\ Anakinra & 15 \\ Angiotensin & 15 \\ Heparin & 15 \\ Baricitinib & 14 \\ Interferon beta & 14 \\ \bottomrule \end{tabular} \label{tab:cttable} \end{table} \begin{table} \caption{Five small molecule drugs with in vitro assay results.} \centering \begin{tabular}{llll} \toprule \textbf{Drug name} & \textbf{Assay} & \textbf{Value} & \textbf{Units (uM)} \\ \midrule Nafamostat & IC50 & 0.0022 & micro \\ Azithromycin & EC50 & 0.008 & um \\ Pralatrexate & EC50 & 0.008 & um \\ Adenosine & EC50 & 0.01 & um \\ Remdesivir & EC50 & 0.01 & um \\ \bottomrule \end{tabular} \label{tab:pctable} \end{table} \subsection{Text Mining Examples from Unstructured Abstract Text} Using the rules previously described in the Methods section, Figure 3 shows several examples of direct correlation sentences for nafamostat (also shown in Table \ref{tab:pctable}), each labelled with an experimental value and the experiment type, drawn from three different article abstracts\cite{naf1} \cite{naf2} \cite{naf3}. Nafamostat is a small molecule drug which had the best experimental value out of all of the extracted data samples. These sentences were taken directly from the preprint or published abstracts aggregated in our database, and Figure 3 visualizes what the rule-based search engine looked for in each abstract. It is noted that several other drugs are also labeled in Figure 3 for visualization purposes, but these drugs are not described in further detail.
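The rule-based matching illustrated in Figure 3 (Rule 1) can be sketched roughly as follows; the sentence is a toy example and the token-distance heuristic is a simplified assumption, not the exact production logic:

```python
import re

ASSAY_KEYWORDS = ("ec50", "ic50", "cc50")

def rule1_extract(sentence, drug_dictionary):
    """Sketch of Rule 1: within one sentence, find the numerical
    token closest (in word distance) to an assay keyword and pair
    it with a dictionary drug name found in the same sentence."""
    tokens = sentence.lower().split()
    kw_idx = [i for i, t in enumerate(tokens)
              if any(k in t for k in ASSAY_KEYWORDS)]
    num_idx = [i for i, t in enumerate(tokens)
               if re.fullmatch(r"\d+(\.\d+)?", t.strip("(),;."))]
    drug = next((d for d in drug_dictionary if d in sentence.lower()), None)
    if not (kw_idx and num_idx and drug):
        return None
    # pick the number with the smallest distance to any keyword
    best = min(num_idx, key=lambda i: min(abs(i - k) for k in kw_idx))
    return drug, tokens[best].strip("(),;.")

hit = rule1_extract(
    "Nafamostat inhibited SARS-CoV-2 infection with an EC50 of 0.0022 uM .",
    ["nafamostat", "remdesivir"],
)
# hit == ('nafamostat', '0.0022')
```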
\begin{figure} \centering \includegraphics[scale=0.5]{Picture2} \caption{Several sentences from different abstracts containing the drug “nafamostat”. All drugs were labeled in red, experiments in green, and numerical values in blue.} \label{fig:Picture2} \end{figure} \subsection{Topic Model Examples} Table 3 lists five of the topics taken from the LDA topic model along with their manually assigned labels. The top topics were taken from a grid-search optimized number of topics while maximizing the coherence score using a corpus with over 20,000 abstracts. The best topic model was found to contain 30 topics. Several of these topics had very few samples in their clusters; these topics were removed from the final presentation using a data-driven approach. Several paper titles for each topic are shown in Figure 4, justifying the manual label attached to each LDA topic. \begin{table} \caption{Topic keywords for five select topics in our optimized topic model.} \centering \begin{tabular}{ccccc} \toprule AI & Mental Health & Disease Analysis & Genetics & PPE \\ \midrule covid & covid & covid & sars\_cov & mask \\ ct & health & risk & protein & use \\ score & mental & age & ace & air \\ use & pandemic & high & viral & respirator \\ image & anxiety & population & virus & particle \\ diagnosis & participant & mortality & human & surface \\ pneumonia & report & factor & cell & wear \\ feature & study & infection & host & environmental \\ lung & survey & disease & analysis & device \\ base & psychological & increase & genome & transmission \\ \bottomrule \end{tabular} \label{tab:tmtable1} \end{table} \begin{figure} \centering \caption{Select titles of the five topics in our optimized topic model in Table 3. Some of the title names were truncated because of the large string size.} \includegraphics[scale=.55]{table4} \label{fig:table4} \end{figure} \section{Discussion} \label{sec:discuss} We have developed several automatic modules from the openly available data.
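Throughout the discussion we report precision, recall, and F-measure. Under the standard definitions (taking F-measure as the balanced F1, the harmonic mean of precision and recall — an assumption, since the paper does not spell out the variant), these reduce to a few lines:

```python
def precision_recall_f1(n_correct, n_extracted, n_possible):
    """P = correct extractions / all extractions made,
    R = correct extractions / all possible correct extractions,
    F1 = harmonic mean of P and R."""
    p = n_correct / n_extracted
    r = n_correct / n_possible
    return p, r, 2 * p * r / (p + r)

# Toy counts: 8 correct out of 10 extracted, 16 possible.
p, r, f1 = precision_recall_f1(n_correct=8, n_extracted=10, n_possible=16)
# p == 0.8, r == 0.5, f1 == 0.615...
```

With the precision and recall reported later in Table 4 (0.808 and 0.689), this formula gives an F1 of about 0.744, consistent with the reported 0.743 up to rounding.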
The full data results are publicly available at our website \textbf{COVID-19: GHDDI Info Sharing Portal}: \url{https://ghddi-ailab.github.io/Targeting2019-nCoV} \subsection{Evaluation of Text Mining Results} We evaluated a random subset of article abstracts against a gold standard that was manually read and labeled for the same information extraction task. The system's output was compared with this manual gold standard, and the results are reported in Table 4. \begin{table} \caption{Results of the data extraction system on a random subset of abstracts.} \centering \begin{tabular}{ll} \toprule \textbf{Metric} & \textbf{Value} \\ \midrule Precision & 0.808 \\ Recall & 0.689 \\ F1 Score & 0.743 \\ \bottomrule \end{tabular} \label{tab:dm_pr} \end{table} The precision of the system was assessed to be around 0.8, showing that our system can indeed extract most of the drug names and experimental values correctly. This validates that our system can be used to recommend articles for users to follow up on. After a manual review of the articles, it was found that many of the drug names that could not be extracted were missing from the drug dictionary we had compiled. Additionally, wrong experimental values and wrong drug name mappings were attributed to the fact that the rule-based system cannot robustly handle some content, such as when multiple numerical values appear at multiple locations in a sentence. More fine-tuning of this system’s rules is needed to boost the precision. Note, however, that very high precision is not critical in this module, as its intended purpose is mainly to gather and recommend preclinical studies for further research. Model-based systems built in the future may be able to rectify these mistakes and output a higher-precision final result. \subsection{Evaluation of n-grams} In an earlier iteration of the clinical trial analysis module, only unigrams were used for drug data extraction.
This caused errors such as “chloroquine” and “phosphate” being double counted in some instances. Another error involved instances where the keyword “interferon” was present in the clinical trial, but the drug dictionary did not have a unigram instance of this keyword; therefore, the drug could not be matched with the dictionary. However, upon adding bigrams and trigrams to this module, “interferon beta” and “interferon alpha” were both successfully extracted from the clinical trial data. Likewise, this addition was tried in the preclinical workflow, but it was ultimately not included in the current module because preliminary analysis showed no significant improvement in the precision of relevant data extraction. In fact, the opposite occurred, and only unfiltered noise data was extracted using n-grams. This is likely because, unlike the somewhat cleaned and structured clinical trial text, the abstract text is completely freeform, so the backend algorithm best captures different pieces of information, such as an experimental value or a reported experiment, using single-token keywords. The inclusion of multi-word sequences in this workflow proved superfluous and confusing. Similarly, n-grams up to trigrams were built into the topic model data preprocessing pipeline. This was because some word pairs are necessary for topics to be accurate and differentiable. One key example shown in Table \ref{tab:tmtable1} is 'sars cov' being one token instead of the two tokens 'sars' and 'cov'. This gives the LDA model cleaner input data, especially when differentiating between the “SARS” virus and “SARS-CoV-2”. \subsection{Topic Model Recommendation System} In Figure 5, the optimal topic model by maximum coherence contained 30 topics. Several topics did not include many papers, so they were excluded in the final results.
This filter was done using a data-driven technique in which topics that contained less than a fifth of the average number of papers per topic were excluded from the final results. This meant that topics clustered with very few papers were excluded from the final module results, keeping the recommended papers per topic clear and differentiable, as evidenced in Figure 4. The figure shows that most of the top 5 highest-weighted papers from the corpus in each topic had titles pertinent to that topic. One example is that the hand-labeled "AI" topic indeed contained papers with titles discussing deep learning or CT images. Another interesting example is the "PPE" topic, where the clustered papers had titles discussing N95 respirators, masks, and respiratory aerosols. \begin{figure} \centering \includegraphics[scale=.65]{Picture31} \caption{The coherence score compared to the number of topics. This gave an optimized number of topics for the final topic model.} \label{fig:Picture31} \end{figure} \subsection{Limitations and Future Work} The major limitation of the clinical trial and preclinical text mining sections is that both rely on known drug dictionaries for the exact text search. This means that all extracted drug names are limited to known, approved, or experimental drugs. However, some experimental drugs have not yet been added to the database, so tracking newly published data on experimental drugs is a major challenge for future AI models. The granularity of the preliminary database search was impacted by two important factors: computational cost and precision. In the preclinical text mining and information extraction workflow, we have tried to optimize the performance of the initial preclinical abstract search to have high precision while minimizing the computational cost of text mining.
Therefore, during the development of this workflow, we examined many different search queries to obtain an optimal number of abstracts to text mine. Some keywords that were used include: “covid”, “coronavirus”, “preclinical”, “in vitro”, “EC50”, and “experiment”. While all of these keywords yielded papers of interest, we optimized our search by first using keywords such as “coronavirus” and “sars-cov-2” and then searching for the keywords “EC50” and “IC50”. Compared to searching keywords such as “preclinical” and/or “in vitro”, this improved the precision of mining texts of interest while minimizing the number of papers without useful information, since searching the more general keywords yields more samples with useless information and increases the computational effort. In our workflow, we primarily used a rule-based text-mining system to extract drug names and experimental values, as evidenced in Table 1, Table 2, and Figure 3. Due to the nature of the pandemic, the first efforts all revolved around drug repurposing; therefore, the use of a dictionary for text mining should be sufficient for this purpose, as all known drug names and aliases were captured in our dictionary. However, if this system were to track a long-term disease or pandemic, a more robust data extraction system should be built. In our preliminary analysis, a Scispacy named-entity recognition (NER) model\cite{scispacy} was assessed and compared to the rule-based system. As Figure 6 shows, although unique chemicals are identified, the precision is very low compared to the rule-based system, especially since it captures strings such as "h=15.26" and "ptdtytsvylgkfrg" as chemicals. Future work can include building an NER model with curated data labels from a set of papers in one specific disease area, or an NER model trained with more specific data labels to avoid false positives with experimental values and protein sequences.
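The staged keyword search discussed earlier in this subsection can be sketched as follows; the keyword lists are abbreviated examples, not the full query set:

```python
def two_stage_search(abstracts):
    """Sketch of the staged query: first keep only coronavirus-related
    abstracts, then keep those mentioning assay keywords, so the
    costlier mining step sees fewer texts."""
    disease_terms = ("coronavirus", "sars-cov-2", "covid")
    assay_terms = ("ec50", "ic50", "cc50")
    stage1 = [a for a in abstracts
              if any(t in a.lower() for t in disease_terms)]
    return [a for a in stage1
            if any(t in a.lower() for t in assay_terms)]

subset = two_stage_search([
    "SARS-CoV-2 was inhibited with an EC50 of 1.2 uM",
    "Influenza assay with an EC50 of 3 uM",
    "COVID-19 epidemiology in 2020",
])
# only the first abstract survives both stages
```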
\begin{figure} \centering \includegraphics[scale=0.5]{Picture3} \caption{A comparison of an NER model (left) and our rule-based system (right).} \label{fig:Picture3} \end{figure} As an extension to these text mining modules, there are several potential applications of AI models. Once the data mining modules are mature, data curation efforts for small molecule drugs can be reduced and automated, since robust research datasets can be produced\cite{medextractr}. Furthermore, the backend scripts can all be extended and developed for other use-cases. One such use-case is a module performing data mining on known genes and knock-in, knock-down, or knock-out relationships. Additionally, the topic model can be useful for a variety of scientific fields, including cancer or infectious diseases. Future work can look into the topic models of these fields or at a specific area within one of them. \section{Conclusion} \label{sec:conclusion} The modules presented on our portal and in this article showcase NLP techniques that may be useful in a global pandemic where large volumes of text data are generated daily. Our modules aim to ease the burden of reading thousands of articles daily by narrowing them down to recommended ones that are automatically updated by our text mining systems. This allows a significant reduction in time spent reading articles and more time dedicated to coronavirus research. These modules also have the potential to be scaled to other applications in the life sciences. \bibliographystyle{unsrtnat}
\section*{Acknowledgments} \begin{sloppypar} The successful installation, commissioning, and operation of the Pierre Auger Observatory would not have been possible without the strong commitment and effort from the technical and administrative staff in Malarg\"ue. We are very grateful to the following agencies and organizations for financial support: \end{sloppypar} \begin{sloppypar} Argentina -- Comisi\'on Nacional de Energ\'\i{}a At\'omica; Agencia Nacional de Promoci\'on Cient\'\i{}fica y Tecnol\'ogica (ANPCyT); Consejo Nacional de Investigaciones Cient\'\i{}ficas y T\'ecnicas (CONICET); Gobierno de la Provincia de Mendoza; Municipalidad de Malarg\"ue; NDM Holdings and Valle Las Le\~nas; in gratitude for their continuing cooperation over land access; Australia -- the Australian Research Council; Brazil -- Conselho Nacional de Desenvolvimento Cient\'\i{}fico e Tecnol\'ogico (CNPq); Financiadora de Estudos e Projetos (FINEP); Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de Rio de Janeiro (FAPERJ); S\~ao Paulo Research Foundation (FAPESP) Grants No.~2019/10151-2, No.~2010/07359-6 and No.~1999/05404-3; Minist\'erio da Ci\^encia, Tecnologia, Inova\c{c}\~oes e Comunica\c{c}\~oes (MCTIC); Czech Republic -- Grant No.~MSMT CR LTT18004, LM2015038, LM2018102, CZ.02.1.01/0.0/0.0/16{\textunderscore}013/0001402, CZ.02.1.01/0.0/0.0/18{\textunderscore}046/0016010 and CZ.02.1.01/0.0/0.0/17{\textunderscore}049/0008422; France -- Centre de Calcul IN2P3/CNRS; Centre National de la Recherche Scientifique (CNRS); Conseil R\'egional Ile-de-France; D\'epartement Physique Nucl\'eaire et Corpusculaire (PNC-IN2P3/CNRS); D\'epartement Sciences de l'Univers (SDU-INSU/CNRS); Institut Lagrange de Paris (ILP) Grant No.~LABEX ANR-10-LABX-63 within the Investissements d'Avenir Programme Grant No.~ANR-11-IDEX-0004-02; Germany -- Bundesministerium f\"ur Bildung und Forschung (BMBF); Deutsche Forschungsgemeinschaft (DFG); Finanzministerium Baden-W\"urttemberg; Helmholtz Alliance for Astroparticle 
Physics (HAP); Helmholtz-Gemeinschaft Deutscher Forschungszentren (HGF); Ministerium f\"ur Innovation, Wissenschaft und Forschung des Landes Nordrhein-Westfalen; Ministerium f\"ur Wissenschaft, Forschung und Kunst des Landes Baden-W\"urttemberg; Italy -- Istituto Nazionale di Fisica Nucleare (INFN); Istituto Nazionale di Astrofisica (INAF); Ministero dell'Istruzione, dell'Universit\'a e della Ricerca (MIUR); CETEMPS Center of Excellence; Ministero degli Affari Esteri (MAE); M\'exico -- Consejo Nacional de Ciencia y Tecnolog\'\i{}a (CONACYT) No.~167733; Universidad Nacional Aut\'onoma de M\'exico (UNAM); PAPIIT DGAPA-UNAM; The Netherlands -- Ministry of Education, Culture and Science; Netherlands Organisation for Scientific Research (NWO); Dutch national e-infrastructure with the support of SURF Cooperative; Poland -- Ministry of Science and Higher Education, grant No.~DIR/WK/2018/11; National Science Centre, Grants No.~2013/08/M/ST9/00322, No.~2016/23/B/ST9/01635 and No.~HARMONIA 5--2013/10/M/ST9/00062, UMO-2016/22/M/ST9/00198; Portugal -- Portuguese national funds and FEDER funds within Programa Operacional Factores de Competitividade through Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (COMPETE); Romania -- Romanian Ministry of Education and Research, the Program Nucleu within MCI (PN19150201/16N/2019 and PN19060102) and project PN-III-P1-1.2-PCCDI-2017-0839/19PCCDI/2018 within PNCDI III; Slovenia -- Slovenian Research Agency, grants P1-0031, P1-0385, I0-0033, N1-0111; Spain -- Ministerio de Econom\'\i{}a, Industria y Competitividad (FPA2017-85114-P and FPA2017-85197-P), Xunta de Galicia (ED431C 2017/07), Junta de Andaluc\'\i{}a (SOMM17/6104/UGR), Feder Funds, RENATA Red Nacional Tem\'atica de Astropart\'\i{}culas (FPA2015-68783-REDT) and Mar\'\i{}a de Maeztu Unit of Excellence (MDM-2016-0692); USA -- Department of Energy, Contracts No.~DE-AC02-07CH11359, No.~DE-FR02-04ER41300, No.~DE-FG02-99ER41107 and No.~DE-SC0011689; National Science Foundation, Grant
No.~0450696; The Grainger Foundation; Marie Curie-IRSES/EPLANET; European Particle Physics Latin American Network; and UNESCO. \end{sloppypar} \section{Introduction} \label{sec:intro} Ultrahigh energy cosmic rays (UHECRs) are particles coming from outer space, with energies exceeding $10^{18}\,$eV. They provide the only experimental opportunity to explore particle physics beyond energies reachable by Earth-based accelerators, which go up to cosmic ray energies of $9\times 10^{16}\,$eV. The Pierre Auger Observatory~\cite{ThePierreAuger:2015rma} detects extensive air showers that are initiated by the UHECRs colliding with the nuclei in the atmosphere. Information about UHECRs is extracted using simulations based on hadronic interaction models which rely on extrapolations of accelerator measurements to unexplored regions of phase space, most notably the forward and highest-energy region. In addition, accelerator experiments at the highest energies either probe the interactions between protons or of protons with heavy nuclei, while most interactions within air showers are between pions and light nuclei. A further challenge is that the UHECR mass has to be measured despite not being yet completely decoupled from the hadronic uncertainties. The observable with the least dependence on hadronic interactions is the atmospheric depth at which the longitudinal development of the electromagnetic (EM) component of the shower reaches the maximum number of particles, namely $X_\text{max}$~\cite{Linsley77a}. In hadronic cascades the energy of each interacting particle is distributed among the secondaries, mostly pions. Neutral pions rapidly decay into two photons feeding a practically decoupled electromagnetic cascade (other resonances decaying into $\pi^0$'s, electrons, and/or photons also contribute). 
Charged pions (and other long-lived mesons like kaons) tend to further interact until their individual energies are below a critical value, below which they are more likely to decay. Muons, which are products of hadronic decays, are thus predominantly produced in the final shower stages. In sufficiently inclined showers, the pure EM component is absorbed in the atmosphere and the particles that reach the ground (muons and muon decay products) directly sample the muon content~\cite{inclinedReco,Valino:2009dv}, reflecting the hadronic component of the shower. Air showers are mainly detected at the Pierre Auger Observatory by the surface detector (SD), an array of water-Cherenkov detector stations, and the fluorescence detector (FD), consisting of 24 fluorescence telescopes. By selecting the subsample of events reconstructed with both the SD and FD, and with zenith angles exceeding $62^\circ$, both the muon content and the energy of the shower are simultaneously measured. The results obtained indicate that all the simulations underestimate the number of muons in the showers~\cite{Aab:2014pza,Sciutto:2019pqs}. These analyses come with the caveat that they cannot distinguish a muon rescaling from a shift in the absolute energy scale of the FD measurement. However, muon content and energy scale were disentangled in a complementary technique based on showers with zenith angles below $60^\circ$. Using the longitudinal profile of the shower in the atmosphere obtained with the FD and the signals at the ground measured with the SD, it was shown that the muonic component still has to be scaled up to match observed data, while no rescaling of the EM component and the FD energy is required~\cite{Aab:2016hkv}. The measurements with the FD also show that both the position of the shower maximum in the atmosphere ($X_\text{max}$) and the entire shape of the EM shower are well described by the simulations~\cite{Aab:2014kda,Aab:2018jpg}. 
At lower energies, down to $\sim 10^{17.3}\,$eV, in a measurement using the subarray of buried scintillators of the Pierre Auger Observatory, a direct count of the muons independent of EM contamination was obtained, which also shows that simulations produce too few muons~\cite{Aab:2020frk}. There is much evidence that all the simulations underpredict the average number of muons in the showers: a comprehensive study of muon number measurements made with different experiments has shown that the muon deficit in simulations starts around $\sim 10^{16}\,$eV and steadily increases with energy. Depending on model and experiment, the deficit at $\sim 10^{20}\,$eV ranges between tens of percent up to a factor of 2~\cite{whispICRC2019}. The increased statistics obtained at the Pierre Auger Observatory allow us to now take a further step and explore fluctuations in the number of muons between showers, hereinafter referred to as {\it physical fluctuations}. The ratio of the physical fluctuations to the average number of muons (relative fluctuations) has been shown to be mostly dominated by the first interaction, rather than the lower energy interactions deeper in the shower development~\cite{Fukui:1960aa,Cazon:2018gww}. Here, we exploit the sensitivity of fluctuations to the first interaction to explore hadronic interactions well above the energies achievable in accelerator experiments. \section{Methodology} \label{sec:measurement} Our analysis here is based on the set of inclined air showers ($62^\circ{}<\theta<80^\circ{}$) that are reconstructed both with the SD and FD between January~1,~2004{} and December~31,~2017{}. For each event, we obtain independent measurements of the muon content (with the SD) and the calorimetric energy (with the FD). 
To ensure the showers can be reconstructed with small uncertainties, we select only events with at least four triggered stations in the SD array and we further require that all the stations surrounding the impact point of the shower on the ground are operational at the time of the event. Only events with good atmospheric conditions (few clouds and a low aerosol content) are accepted in order to guarantee a good energy reconstruction with the FD. In addition, it is required that the entire shower profile and, in particular, $X_\text{max}$ is within the field of view of our telescopes. Since heavy primaries penetrate the atmosphere less than light ones, the acceptance with this selection would be mass dependent. To avoid this bias, we constrain the field of view to the region where all values of $X_\text{max}$ are accepted. Further details are given in~\cite{Aab:2014pza,Hexpo_2011}. These selection criteria result in a total number of events of 786{}. In addition, only events with energy larger than $4 \times 10^{18}\,$eV, which ensures full trigger efficiency of the SD~\cite{inclinedReco}, are used to extract the fluctuations (281{}~events). The number of muons is reconstructed by fitting a 2D model of the lateral profile of the muon density at the ground to the observed signals in the SD array. The free parameters of the fit are the zenith and azimuth angles of the shower, the impact point of the shower on the ground (shower core position), and a normalization factor with respect to a reference muon density profile in simulated proton showers at $10^{19}\,$eV~\cite{inclinedReco}. There exists a residual pure EM component in showers with low zenith angles and stations very close to the shower core position (at $400\,$m and $64^\circ$ it is $\sim6\%$), which has been subtracted using a parametrization~\cite{Valino:2009dv}. 
The dimensionless normalization factor we obtain from the fit is then transformed to the dimensionless quantity $R_\mu$, which is given by the integrated number of muons at the ground divided by a reference given by the average number of muons in simulated proton showers at $10^{19}\,$eV and the given zenith angle. At $10^{19}\,$eV and an inclination of $60^\circ$, $R_\mu = 1$ corresponds to $2.148 \times 10^7$ muons. For more details, see~\cite{Aab:2014pza}. In the following, we refer to $R_\mu$ as the number of muons for short. The calorimetric energy of the air showers $E_{\rm cal}$ is reconstructed by integrating the longitudinal shower profiles observed with the FD~\cite{Abraham:2009pm,Aab:2018jpg}. The total energy of the shower is then obtained by adding the average energy carried away by muons and neutrinos, the so-called invisible energy $E~=~E_{\rm cal}+E_{\rm inv}$. At $10^{19}\,$eV, $E_{\rm inv}$ accounts for $14\%$ of the total energy in air showers~\cite{Aab:2019cwj,supplement,Barbosa:2003dc,Risse:2003fw,Pierog2005}. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{calibration.pdf} \caption{\label{fig:meas-calib} Number of muons as a function of the measured energy. The black line is the fitted $\langle \rmu \rangle=a\,[E/(10^{19}\,\mathrm{eV})]^b$. Markers on the top of the frame define the bins in which the fluctuations are evaluated. The numbers give the events in each bin. The effect of the uncertainty of the absolute energy scale is indicated by $\sigma_{\rm sys}(E)$. } \end{figure} In Fig.~\ref{fig:meas-calib} the muon number $R_\mu$ is shown as a function of the measured energy. Markers on the top of the frame define the bins in energy for which we will extract the fluctuations, with the number of events in each bin shown above. The bins are chosen such that the number of events in each is similar. 
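Converting the dimensionless $R_\mu$ back into an absolute muon count is a one-line operation; the reference value $2.148 \times 10^7$ holds at the quoted reference condition ($10^{19}\,$eV, $60^\circ$ zenith), so applying it elsewhere is only illustrative:

```python
# R_mu = 1 corresponds to 2.148e7 muons at the reference condition
# (1e19 eV, 60 degree zenith) quoted in the text.
N_REF = 2.148e7

def muon_count(r_mu, n_ref=N_REF):
    """Absolute number of muons at the ground for a given R_mu."""
    return r_mu * n_ref

print(f"{muon_count(1.0):.3e}")  # 2.148e+07 muons at R_mu = 1
```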
Based on models of air shower development and given the gradual change of the composition in this energy range (single logarithmic dependence on energy)~\cite{Aab:2014kda,UngerKH,Matthews:2005sd,Engel:2011zzb}, the number of muons is related to the primary energy by a single power law \begin{equation} \langle \rmu \rangle(E)=a [E/(10^{19}\,\mathrm{eV})]^b \ , \label{eq:powerlaw} \end{equation} which can be fitted following a procedure described in the text below. The best-fit parameters are given at the beginning of the next section. The scattering in the data has three sources: experimental uncertainties in the energy $s_{E}$ and in the muon number $s_{\mu}$ from event reconstruction (both represented by the error bars), and the \textit{physical fluctuations} in the muon number denoted as $\sigma$. Given Eq.~\eqref{eq:powerlaw}, the variance of the muon number is $\sigma^2 + b^2 \langle s_{E} \rangle^2 + \langle s_{\mu} \rangle^2$. In this Letter, we adopt a method based on maximizing the likelihood of a probability distribution function (PDF). The PDF incorporates the various contributions to the fluctuations, treating each energy bin independently while also accounting for the effect of the migration of events between bins~\cite{Aab:2014pza,Dembinski:2015wqa}. The model assumes that measurements of $E$ and $R_\mu$ follow Gaussian distributions centered at the true value, with widths given by the detector resolution $s_{E}$ and $s_{\mu}$, which are the uncertainties obtained in each individual event reconstruction~\cite{inclinedReco,FDenergyICRC2019}. Physical fluctuations are also assumed to follow a Gaussian distribution of width $\sigma$. Simulations have shown this is an acceptable approximation given the event number in each bin. The total PDF is obtained through the convolution of the detector response and the physical fluctuations with the probability distribution of the hybrid events measured at the Pierre Auger Observatory. 
The log-likelihood function is then given by \begin{align} \ln \mathcal{L}(a,b,\hat{\sigma}_1,\ldots,\hat{\sigma}_6)~&=~\sum_i \ln \left [ \sum_{k=0}^{6} \, \int\limits_{E_{k-1}}^{E_k} \mathrm{d}E \, h(E) \, C(E) \, \right. \nonumber \\ & \left. \times \exp{ \left ( -\frac{1}{2} \frac{(E_i -E)^2 }{s^2_{E}} \right ) } \right. \nonumber \\ & \left. \times \exp{ \left ( -\frac{1}{2 } \frac{( R_{\mu,i}-\langle \rmu \rangle(E)\,)^2 }{s^2_{\mu} + (\hat{\sigma}_k \cdot \langle \rmu \rangle(E))^2 } \right ) } \right ] \ . \label{eq:likelihood} \end{align} The probability of hybrid events $h(E)$ (product of the energy spectrum of cosmic rays and the efficiency of detection) can be obtained from the data, as explained in~\cite{Dembinski:2015wqa} and~\cite{Aab:2020frk,SpectrumPRD}. The rhs of Eq.~\eqref{eq:likelihood} depends on the parameters $a$ and $b$ via Eq.~\eqref{eq:powerlaw}. To obtain the energy dependence of the fluctuations, we parametrize $\sigma$ by six independent values such that $\sigma(E)~=~\hat{\sigma}_k \cdot \langle \rmu \rangle(E)$ where the constants $\hat{\sigma}_k$ are the relative fluctuations in the $k$th energy bin with limits $[E_{k-1}, E_k]$, where $k$ runs from one to six. In Eq.~\eqref{eq:likelihood}, $k=0$ corresponds to the contributions from the interval $[0,E_{\rm thr}]$ where the SD is not fully efficient. The fluctuations here are assumed to take the value of the first fitted bin $\hat{\sigma}_0 \equiv \hat{\sigma}_1$. The sum over the index $i$ in Eq.~\eqref{eq:likelihood} (the usual sum over the log-likelihoods of events) includes only events above the energy threshold of $4 \times 10^{18}\,$eV. The function $C(E)$ is the normalization factor from the double Gaussian. The result of the fit for the parameters $a$ and $b$ are shown in Fig.~\ref{fig:meas-calib}. The fluctuations are shown in Fig.~\ref{fig:result-sigma}. 
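The role of the three contributions to the spread can be illustrated with a simplified moment-based sketch: working in relative terms, the measured variance is the sum of the physical variance and the two detector-resolution terms, so the physical fluctuation follows by subtraction in quadrature. The numbers below are illustrative inputs, and this back-of-the-envelope estimate is not the analysis itself, which maximizes the likelihood of Eq.~\eqref{eq:likelihood} and also accounts for bin migration:

```python
import math

def intrinsic_fluctuation(total_rel_var, b, rel_sE, rel_smu):
    """Subtract the detector-resolution terms in quadrature:
    Var_rel = sigma_rel^2 + b^2 * (s_E/E)^2 + (s_mu/Rmu)^2."""
    phys_var = total_rel_var - (b * rel_sE) ** 2 - rel_smu ** 2
    return math.sqrt(max(phys_var, 0.0))

# Illustrative inputs close to the values quoted around 1e19 eV:
# ~16% total spread, slope b ~ 1, <s_E/E> ~ 8.4%, <s_mu/Rmu> ~ 10%.
print(round(intrinsic_fluctuation(0.16 ** 2, 1.0, 0.084, 0.10), 3))  # 0.092
```

The result is of the same order as the fitted central value $\sigma/\langle R_\mu \rangle \approx 0.1$, showing why both resolutions must be well constrained before the physical fluctuations can be extracted.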
The distribution of the number of muons and the PDF in the individual energy bins can be found in the Supplemental Material~\cite{supplement}. The dominant systematic uncertainties of $\sigma$ come from the uncertainties in the resolutions $s_{E}$ and $s_{\mu}$. For $s_{\mu}$ we estimate the uncertainty using simulations and data. In simulations, the uncertainty was estimated by the spread in a sample of simulated showers, where each shower is reconstructed multiple times, each time changing only the impact point at the ground. For data, we reconstruct the same event multiple times, leaving out the signals from one of the detector stations. The average relative resolution $\langle s_{\mu}/R_\mu \rangle$ and its systematic uncertainty are thus $(10 \pm 3)\,$\%\xspace at $10^{19}\,$eV. We verified the values of $s_{E}$ by studying the difference in the energy reconstruction of events measured independently by two or more FD stations. The width of the distribution of these energy differences is found to be compatible with $s_{E}$. We therefore take the statistical 1-$\sigma$ uncertainties of this cross check as a conservative upper limit of the systematic uncertainty of $s_{E}$~\footnote{The resolution systematics estimated in~\cite{FDenergyICRC2019} are smaller by about a factor three, but were derived for different quality cuts than the ones applied here.}. The average relative energy resolution $\langle s_E/E\rangle$ is about $(8.4 \pm 2.9)\,$\%\xspace at $10^{19}\,$eV. We have further confirmed that there are no significant contributions to the fluctuations from differences between the individual FD stations, nor from the long-term performance evolution of the SD and FD detectors. Any residual electromagnetic component in the signal would affect the lower zenith angles more. We therefore split the event sample at the median zenith angle ($66^\circ$) and compare the resulting fluctuations.
We find no significant difference between the more and the less inclined sample. In another test, we do find a small modulation of $\langle \rmu \rangle$ with the azimuth angle ($<1\%$), which we correct for. This modulation is related to the approximations used in the reconstruction, which deal with the azimuthal asymmetry of the muon densities at the ground due to the Earth's magnetic field~\cite{inclinedReco}. Finally, we have run an end-to-end validation of the whole analysis method described in this Letter on samples of simulated proton, helium, oxygen and iron showers. Because of the almost linear relation between $R_\mu$ and $E$, the systematic uncertainty on $\sigma$ due to the uncertainty of the absolute energy scale of $14\,$\%~\cite{FDenergyICRC2019} practically cancels out in the relative fluctuations. The systematic uncertainty in the absolute scale of $R_\mu$ of $11\,$\%~\cite{Aab:2014pza} drops out for the same reason. The systematic effects for the bin around $10^{19}\,$eV are summarized in Table~\ref{tab:systematics}. Over all energies, the systematic uncertainties are below $8\%$. \begin{table} \begin{ruledtabular} \caption{\label{tab:systematics} Contributions to the systematic uncertainty in the relative fluctuations around $10^{19}\,$eV ($10^{18.97}\,$eV to $10^{19.15}\,$eV). The central value is $\sigma / \langle \rmu \rangle~=~0.102 \pm 0.029~(\mathrm{stat.}) \pm 0.007~(\mathrm{syst.})$.
} \begin{tabular}{ccr} Source of uncertainty & & Uncertainty \\ \hline $E$ absolute scale & $\langle E \rangle $ & $<0.1$ \% \\ $E$ resolution & $s_E$ & 4.6 \% \\ $R_{\mu}$ absolute scale & $\langle \rmu \rangle$ & 0.5 \% \\ $R_{\mu}$ resolution & $s_{\mu}$ & 5.2 \% \\ $R_\mu$ azimuthal modulation & $ \langle \rmu \rangle(\phi)$ & 0.5 \% \\ \hline Total systematics& & 7.0 \% \\ \end{tabular} \end{ruledtabular} \end{table} \section{Results and discussion} \label{sec:results} \begin{figure} \centering \includegraphics[width=\columnwidth]{rmu_fluctuations_postlhc_band2.pdf} \caption{\label{fig:result-sigma} Measured relative fluctuations in the number of muons as a function of the energy and the predictions from three interaction models for proton (red) and iron (blue) showers. The gray band represents the expectations from the measured mass composition interpreted with the interaction models. The statistical uncertainty in the measurement is represented by the error bars. The total systematic uncertainty is indicated by the square brackets.} \end{figure} The best-fit value for the average relative number of muons at $10^{19}\,$eV (parameter $a$) is $\langle \rmu \rangle(10^{19}\,\si{eV}) = 1.86 \pm 0.02 \,(\mathrm{stat.}) ~_{-0.31}^{+ 0.36}\,(\mathrm{syst.})$. For the slope (parameter $b$) we find $\text{d} \langle \lnrmu \rangle / \text{d} \ln E = 0.99 \pm 0.02 \,(\mathrm{stat.}) ~_{-0.03}^{ +0.03}\,(\mathrm{syst.})$. These values are consistent with the values previously reported~\cite{Aab:2014pza,supplement}. The measured relative fluctuations as a function of the energy are shown in Fig.~\ref{fig:result-sigma}. We note that the measurement falls within the range that is expected from current hadronic interaction models for pure proton and pure iron primaries~\cite{Pierog:2009zt,Pierog:2013ria,Ostapchenko:2010vb,Ahn:2009wx,PhysRevD.102.063002,Bergmann:2006yz,Pierog:2004re,fluka,fluka2}. 
To estimate the effect of a mixed composition, we take the fractions of the four mass components (proton, helium, nitrogen and iron) derived from the $X_\text{max}$ measurements~\cite{Aab:2014kda,Bellido:2017cgf,sibyllFractionComment} and, using the simulations of the pure primaries, calculate the corresponding fluctuations in the number of muons. The gray band in Fig.~\ref{fig:result-sigma} encompasses the predicted $\sigma/\langle \rmu \rangle$ of the three interaction models QGSJET\,II-04\xspace, EPOS-LHC\,\xspace, and {Sibyll}\xspace~2.3d given the inferred composition mix for each~\cite{supplement}. \begin{figure} \centering \includegraphics[width=\columnwidth]{umbrella_single_rmu_shade_line_19-0.pdf} \caption{\label{fig:result-umbrella} Data (black, with error bars) compared to models for the fluctuations and the average number of muons for showers with a primary energy of $10^{19}\,$eV. Fluctuations are evaluated in the energy range from $10^{18.97}\,$eV and $10^{19.15}\,$eV. The statistical uncertainty is represented by the error bars. The total systematic uncertainty is indicated by the square brackets. The expectation from the interaction models for any mixture of the four components $p$, He, N, Fe is illustrated by the colored contours. The values preferred by the mixture derived from the $X_\text{max}$ measurements are indicated by the star symbols. The shaded areas show the regions allowed by the statistical and systematic uncertainties of the $X_\text{max}$ measurement~\cite{contourComment}.} \end{figure} In Fig.~\ref{fig:result-umbrella}, the effects of different composition scenarios on both the fluctuations and the average number of muons can be shown by drawing, at a fixed primary energy of $10^{19}\,$eV, the relative fluctuations $\sigma/\langle R_\mu \rangle$ against the average number of muons $\langle R_\mu \rangle$. 
Given any one of the interaction models, any particular mixture of the four components p, He, N, and Fe falls somewhere within one of the areas enclosed by the corresponding colored lines. The points of pure composition in this contour are labeled accordingly. For each model, the expected values for $\sigma/\langle R_\mu \rangle$ and $\langle R_\mu \rangle$ given the composition mixture obtained from the $X_{\rm max}$ measurements~\cite{Aab:2014kda} are indicated within each contour by the correspondingly colored star marker. The shaded areas surrounding the star markers indicate the statistical and systematic uncertainties inherited from the $X_{\rm max}$ measurements~\cite{contourComment}. Finally, our measurement with statistical and systematic uncertainty is shown by the black marker. Within the uncertainty, none of the predictions from the interaction models and the $X_{\rm max}$ composition (star markers) are consistent with our measurement. The predictions from the interaction models QGSJET\,II-04\xspace, EPOS-LHC\,\xspace, and {Sibyll}\xspace~2.3d can be reconciled with our measurement by an increase in the average number of muons of 43\%{}, 35\%{}, and 26\%{}, respectively. For the fluctuations, no rescaling is necessary for any model. Taken together, the average value and fluctuations of the muon flux constrain the way hadronic interaction models should be changed to agree with air shower data. To see this, we briefly discuss the origin of the fluctuations. The average number of muons in a proton shower of energy $E$ has been shown in simulations to scale as $\langle N^{*}_\mu \rangle= C E^\beta$ where $\beta \simeq 0.9$~\cite{Fukui:1960aa,Engel:2011zzb,Matthews:2005sd,Cazon:2018gww}.
If we assume all the secondaries from the first interaction produce muons following the same relation as given for protons above, we obtain the number of muons in the shower as \begin{equation} N_\mu = \sum_{j=1}^{m} C~E_j^\beta = \langle N^{*}_\mu \rangle \sum_{j=1}^{m} x_j^\beta = \langle N_\mu^{*} \rangle ~ \alpha_1 \ , \label{eq:alpha1} \end{equation} where the index $j$ runs over the $m$ secondary particles which reinteract hadronically and $x_j=E_j/E$ is the fraction of energy fed to the hadronic shower by each \footnote{The energy fed to the electromagnetic shower in the first interaction is $E-\sum_{j=1}^m E_j$, and it rapidly decreases for subsequent generations~\cite{Cazon:2019mtd}.}. In this expression, the fluctuations in $N_\mu$ are induced by $\alpha_1$ in the first generation, which fluctuates because the multiplicity $m$ and the energies $x_j$ of the secondaries fluctuate~\cite{Cazon:2018gww}. We can continue this reasoning for the subsequent generations to obtain \begin{equation} \frac{N_\mu}{\langle N^{*}_\mu \rangle} = \alpha_1 \cdot \alpha_2 \cdots \alpha_i \cdots \alpha_n \ , \label{eq:nmu-model} \end{equation} where the index $i$ runs over the $n$ generations, until the cascade stops. We note that, for the calculation of $\alpha_2$, in the second generation, there are $m$ particles contributing. Assuming the distributions of the $\alpha$'s for each one are similar, when adding up the muons produced by each, the fluctuations produced by one are statistically likely to be compensated by another. In other words, the $\alpha_2$ distribution is narrower by a factor $\sim 1/\sqrt{m}$. The deeper the generation, the narrower the corresponding $\alpha_i$ is expected to be. As a result, the dominant part of the fluctuations comes from the first interaction. This has also been observed with simulations.
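The $1/\sqrt{m}$ narrowing argument can be reproduced with a toy Monte Carlo. All ingredients here (the multiplicity range, the assumed 75\% hadronic energy fraction, and the flat energy splitting) are ad hoc assumptions chosen for illustration, not the shower simulations used in this Letter:

```python
import random
import statistics

random.seed(1)
BETA = 0.9  # muon-number scaling exponent, N_mu ~ E^beta

def alpha_one_interaction():
    """alpha = sum_j x_j^beta for a single hadronic interaction.
    Toy model: random multiplicity and a random split of the energy
    that stays in the hadronic channel (assumed ~75% here)."""
    m = random.randint(10, 50)  # ad hoc multiplicity range
    w = [random.random() for _ in range(m)]
    s = sum(w)
    return sum((0.75 * wi / s) ** BETA for wi in w)

def alpha_generation(n_parents):
    """Effective alpha of a generation fed by n_parents sub-showers."""
    return statistics.fmean(alpha_one_interaction() for _ in range(n_parents))

first = [alpha_generation(1) for _ in range(2000)]    # first interaction
second = [alpha_generation(30) for _ in range(2000)]  # averaged over ~30 parents
ratio = statistics.stdev(first) / statistics.stdev(second)
print(f"spread ratio, first vs. second generation: {ratio:.1f}")
```

Because the second-generation $\alpha$ averages over $\sim 30$ sub-showers, its spread shrinks by roughly $1/\sqrt{30} \approx 1/5.5$, which is the mechanism by which the first interaction comes to dominate the muon-number fluctuations.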
The model can be generalized for primary nuclei with mass $A$ using the superposition model and fixing the number of participants to $A$ protons, which reduces the different contributions to the fluctuations by a factor $\sim 1/\sqrt{A}$. There are two options to increase the average number of muons in air showers. One is to increase $\alpha$ in a specific generation, notably the first where the energy is the highest and exotic phenomena could conceivably play a role, i.e.\ $\alpha_1\to \alpha_1+\delta \alpha_1$. Note that, if only the first generation is modified (implying some sort of threshold effect for new physics), the increase in $N_\mu$ is linear with the modification. There are several examples in the literature where this approach has been used assuming different mechanisms~\cite{Aloisio:2014dua,AlvarezMuniz:2012dd,Anchordoqui:2016oxy,Farrar:2019cid,Farrar:2013sfa}. For the fluctuations, the change depends on the model. Alternatively, the number of muons can be increased by introducing small deviations in the hadronic energy fraction $\delta \alpha$ in all generations. Accumulated along a number $n$ of generations, these small deviations build up as $N_\mu \propto (\alpha+\delta \alpha)^n$. For instance, a 5\% deviation per generation converts into $\sim 30\%$ deviation after six generations~\footnote{The number of generations is difficult to define as it depends on the details of the measurement and the showers, like the zenith angle, the muon energy threshold, distance from shower axis. Six is a reasonable minimum of generations before the muons reach the Auger detectors~\cite{Matthews:2005sd,AlvarezMuniz:2002ne}.}. On the other hand, a change of 5\% in the fluctuations of $\alpha$ is not amplified in the muon fluctuations because of the suppression in later generations. 
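The contrast between the two options can be made explicit with a two-line computation (a sketch of the arithmetic in the text, not a shower model):

```python
# Two ways of increasing the muon number discussed above: a one-off
# change in the first interaction (linear in delta) versus a small
# per-generation shift that compounds over n generations.
def first_generation_boost(delta):
    return 1.0 + delta            # N_mu grows linearly with the modification

def per_generation_boost(delta, n=6):
    return (1.0 + delta) ** n     # compounds over n generations

print(round(first_generation_boost(0.05), 3))   # 1.05
print(round(per_generation_boost(0.05, 6), 3))  # 1.34, the ~30% quoted above
```

Six generations is taken here only because the text cites it as a reasonable minimum before the muons reach the detectors; a larger $n$ compounds the effect further.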
This approach characterizes the increase in the number of muons in the current hadronic interaction models with regard to previous models~\cite{Grieder:1973x1,Pierog:2006qv,Ostapchenko:2013pia,Drescher:2007hc,PhysRevD.102.063002}. It is also compatible with the increase of the discrepancy in the average number of muons across a wide range of energies reported in~\cite{whispICRC2019}. The finding of the present analysis, that the fluctuations are consistent with model predictions, means that the increase in muon number may be a small effect accumulating over many generations, or a very particular modification of the first interaction that changes $N_\mu$ without changing the fluctuations~\cite{supplement}. \section{Summary} \label{sec:conclusion} We have presented for the first time a measurement of the fluctuations in the number of muons in inclined air showers, as a function of the UHECR primary energy. Within the current uncertainties, the relative fluctuations show no discrepancy with respect to the expectation from current high-energy hadronic interaction models and the composition taken from $X_\text{max}$ measurements. This agreement between models and data for the fluctuations, combined with the significant deficit in the predicted total number of muons, points to the origin of the models' muon deficit being a small deficit at every stage of the shower that accumulates along the shower development, rather than a discrepancy in the first interaction. Adjustments to models to address the current muon deficit must therefore not alter the predicted relative fluctuations. The Pierre Auger Observatory is currently undergoing an upgrade that includes the deployment of scintillators on top of the SD stations~\cite{AugerPrime} to help disentangle the muonic and electromagnetic content of the showers, as well as an array of radio antennas~\cite{radioUpgrade}.
It has been shown that radio arrays can provide an estimate of the calorimetric energy~\cite{radiationEnergy}, and therefore, it will soon be possible to perform an analysis similar to the one presented here with much larger statistics using hybrid events measured by the high-duty-cycle radio and surface detector arrays~\cite{radioUpgrade}. \begin{acknowledgments} \input{acknowledgments} \end{acknowledgments} \input{letter.bbl.tex} \clearpage \onecolumngrid \input{supplement.tex} \end{document} \section*{Supplemental material: Measurement of the fluctuations in the number of muons in extensive air showers with the Pierre Auger Observatory} \subsection{Distribution of the number of muons and raw fluctuations} The distribution of the relative number of muons $(R_\mu-\langle \rmu \rangle)/\langle \rmu \rangle$ in the data in the six energy bins is shown in Fig.~\ref{fig:distributions}. The best-fit model for the data is shown in gray, the physical distribution is shown in blue. The data is well described by a Gaussian. The relative variance in the data, $V/\langle \rmu \rangle^2$, and the average relative resolutions of the muon and energy measurements are shown in Fig.~\ref{fig:meas-variance}. \begin{figure*}[h] \centering \includegraphics[width=0.98\textwidth]{distributions-new.pdf} \caption{Distribution of the relative number of muons in six bins of energy from $10^{18.6}\,$eV to $10^{19.8}\,$eV. The model for the full distribution is shown in gray, the inferred intrinsic distribution of the number of muons is shown by the filled-in curve.} \label{fig:distributions} \end{figure*} \begin{figure}[h] \centering \includegraphics[width=0.48\columnwidth]{relative_fluctuations.pdf} \caption{\label{fig:meas-variance} Black points show the total relative fluctuations in $R_\mu$ as a function of the shower energy (left axis for the variance and right axis for the standard deviation). 
Blue and pink points show the average relative resolution in $R_\mu$ ($\langle s_{\mu} / R_\mu \rangle$) and $E$ ($\langle s_{E} /E\rangle$) respectively. The error bars show the statistical uncertainty.} \end{figure} \newpage \subsection{Detailed comparison between interaction models and measurement} In Fig.~\ref{fig:result-average} the average number of muons in each bin of energy is shown. The model predictions for proton and iron primaries are shown as well. In Fig.~\ref{fig:rmu-vs-xmax} the measurements of the average number of muons (left panel) and the relative fluctuations (right panel) are shown as a function of the energy. The predictions from interaction models given the measured composition are shown for each model individually. In Figs.~\ref{fig:sig-rmu-xmax} and~\ref{fig:avg-rmu-xmax} the measurements of the average number of muons and the relative fluctuations are compared with the predictions from the interaction models separately. All models, given the measured composition, reproduce the fluctuation measurement. In the case of the average number of muons, none of the models yields enough muons to describe the data. In Fig.~\ref{fig:avg-rmu-vs-avg-xmax} the measurements of $\langle \xmax \rangle$ and $\langle \lnrmu \rangle$ at $10^{19}\,$eV are compared. Both quantities scale linearly with $\langle \ln A \rangle$, meaning the predictions for different primary compositions fall on a line. 
\begin{figure} \centering \includegraphics[width=0.48\columnwidth]{rmu_average_postlhc.pdf} \caption{\label{fig:result-average} Measured average number of muons as a function of the energy and the predictions from three interaction models for proton (red) and iron (blue) showers.} \end{figure} \begin{figure*} \centering \includegraphics[width=0.48\columnwidth]{rmu_average_vs_xmax_allinone.pdf} \hfill \includegraphics[width=0.48\columnwidth]{rmu_fluctuations_vs_xmax_allinone.pdf} \caption{\label{fig:rmu-vs-xmax} Left panel: Average number of muons measured as a function of the energy together with the predictions from three interaction models given the composition measured with $X_\text{max}$. The line is the best fit of the form $\langle \rmu \rangle[E]=a (E/(10^{19}\,\mathrm{eV}))^b$. Right panel: Relative fluctuations in the number of muons measured as a function of the energy together with the predictions from three interaction models given the composition measured with $X_\text{max}$.} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{rmu_fluctuations_vs_xmax_all.pdf} \caption{\label{fig:sig-rmu-xmax} Relative fluctuations in the number of muons measured as a function of the energy. The three panels show the predictions for the measured composition from EPOS-LHC\,\xspace (left), QGSJET\,II-04\xspace (middle) and {Sibyll}\xspace~2.3d (right). The lines show the predictions for pure proton (red) and iron (blue). Fitting $\sigma(E)/\langle \rmu \rangle~=~p_0 + p_1 \, \log_{10}(E/10^{19}\mathrm{eV})$ to the measurement, yields $p_0~=~0.12 \pm 0.01$ and the slope $p_1~=~-0.10 \pm 0.04$. The average slope predicted for pure proton (iron) primaries is $-0.01$ ($-0.003$). The values of $\chi^2/\mathrm{n.d.f.}$ between the trend expected from the measured composition and the measured fluctuations are $3.0/6$, $2.5/6$ and $4.3/6$ for EPOS-LHC\,\xspace, QGSJET\,II-04\xspace and {Sibyll}\xspace~2.3d, respectively. 
} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{rmu_average_vs_xmax_all.pdf} \caption{\label{fig:avg-rmu-xmax} Average number of muons measured as a function of the energy. The three panels show the predictions for the measured composition from EPOS-LHC\,\xspace (left), QGSJET\,II-04\xspace (middle) and {Sibyll}\xspace~2.3d (right). The lines show the predictions for pure proton (red) and iron (blue). } \end{figure*} \begin{figure} \centering \includegraphics[width=0.48\columnwidth]{avg_rmu_vs_avg_xmax_postlhc_19-0.pdf} \caption{Average logarithmic muon content, $\langle \lnrmu \rangle$, as a function of the average shower depth, $\langle \xmax \rangle$.} \label{fig:avg-rmu-vs-avg-xmax} \end{figure} \subsection{Independence of the muon and energy measurements} The direct contribution from muons to the calorimetric energy through the excitation of the molecules in the air is below $5$\% and thus the fluctuations introduced in $E_{\rm cal}$ by muons are negligible (see [18,19]). For showers of a given total energy, due to the conservation of energy, $E_{\rm inv}$ and $E_{\rm cal}$ are anti-correlated on an event-by-event basis and $E_{\rm inv}$ depends on $R_\mu$. However, the fluctuations in $E_{\rm inv}$ due to the fluctuations in $R_\mu$ are very small (around 1\% at $10^{19}\,$eV relative to $E$ (see [20])), such that in practice the determination of the two variables $E$ and $R_\mu$ can be considered to be independent measurements. The value of $0.1$ we find for the relative fluctuations at $10^{19}\,$eV is consistent with this estimation of the fluctuations in the invisible energy. \subsection{Number of muons and its fluctuations} The average number of muons in a proton shower of energy $E$ has been shown in simulations to scale as $N^{*}_\mu(E)~=~C ~E^\beta$ where $\beta \simeq 0.9$ (see main text for references). 
If we assume all the secondaries from the first interaction produce muons following the same relation as given for protons above, we obtain the number of muons in the shower as \begin{equation} N_\mu(E) = \sum_{j=1}^{m} C~E_j^\beta = N^{*}_\mu(E) \sum_{j=1}^{m} x_j^\beta ~=~ N_\mu^{*}(E) ~ \alpha_1 \ , \label{eq:alpha1} \end{equation} where index $j$ runs over $m$ secondary particles which reinteract hadronically and $x_j=E_j/E$ is the fraction of energy fed to the hadronic shower by each. In this expression the fluctuations in $N_\mu$ are induced by $\alpha_1$ in the first generation, which fluctuates because the multiplicity $m$ and the energies $x_j$ of the secondaries fluctuate. Consider a ``toy'' interaction producing only pions, all with the same energy, of which only a fraction $f$ are charged and contribute to the hadron cascade. This model has no fluctuations and should by construction give $\alpha_1=1$, which follows from Eq.~\eqref{eq:alpha1} if we identify the average number of muons for proton showers with $N^{*}_{\mu}(E)$, which coincides with our definition. This incidentally implies the condition $\beta=\log (m)/ \log (m/f)$, which is the same as that obtained in [21,22] ($\beta \simeq 0.90$ for $f=2/3$ and $m\sim 50$). In a more realistic scenario $\alpha_1$ fluctuates because the particles do not have the same energy and $f$ (the ratio of charged pions) and $m$ fluctuate.
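As an illustrative numerical check of the toy-model relation above (a sketch of our own, not part of the analysis code), the exponent $\beta = \log m/\log(m/f)$ can be evaluated for the quoted values $f=2/3$ and $m\sim 50$:

```python
import math

def beta_toy(m, f):
    """Toy-model muon exponent beta = log(m) / log(m/f), for an interaction
    with multiplicity m of which a fraction f feeds the hadronic cascade."""
    return math.log(m) / math.log(m / f)

# Values quoted in the text: charged-pion fraction f = 2/3, multiplicity m ~ 50
print(beta_toy(50, 2 / 3))  # ~0.906, consistent with beta ~= 0.90
```

With $f=1$ (all secondaries hadronic) the exponent is exactly 1, as expected, and larger multiplicities push $\beta$ closer to 1.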
\section{Introduction} The axion \cite{Weinberg:1977ma,Wilczek:1977pj} is a hypothetical pseudoscalar particle predicted in the Peccei-Quinn (PQ) mechanism \cite{Peccei:1977hh,Peccei:1977ur}. The PQ mechanism extends the Standard Model (SM) of particle physics to solve the so-called strong-CP problem of Quantum Chromodynamics by adding an extra spontaneously broken global U(1) symmetry, which is anomalous. If the symmetry is broken at a high scale \cite{Kim:1979if,Shifman:1979if,Zhitnitsky:1980tq,Dine:1981rt}, the particle is very weakly interacting and long-lived, and becomes a well-motivated dark matter candidate \cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah}. The PQ symmetry can be spontaneously broken before or after inflation, leading to very different axionic dark matter scenarios. If the symmetry is broken before inflation, the Universe is filled by a homogeneous axion field, which produces zero-momentum axions through the vacuum realignment mechanism \cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah}. On the other hand, in the post-inflationary PQ symmetry breaking, the phase transition happens in the standard cosmology, producing axionic cosmic strings \cite{Vilenkin:1982ks,Davis:1986xc}. These defects are a variety of global cosmic string \cite{Hindmarsh:1994re,Vilenkin:2000jqa}, which live until the QCD confinement transition, when they form hybrid string-wall composites and are annihilated \cite{Vilenkin:1982ks,Sikivie:1982qv,Georgi:1982ph}. The strings formed in this last scenario are the ones studied in this paper. Axion strings release energy mainly into pseudo-Goldstone radiation, both during their evolution as well as in the string-wall system collapse. This radiation constitutes an initially degenerate gas of axions with a non-thermal distribution. 
The complicated non-linear dynamics at the QCD transition imprints density fluctuations which provide the seed for the axion minicluster formation through early gravitational collapse \cite{Hogan:1988mp,Kolb:1993zz,Kolb:1994fi,Kolb:1995bu}. These miniclusters could be detected by their distinctive small-scale lensing signals \cite{Hogan:1988mp,Kolb:1995bu,Fairbairn:2017dmf,Fairbairn:2017sil}. The evolution of axion strings and axion radiation is governed by the classical field equations of the underlying scalar field theory, and due to their non-linearities, lattice simulations are required to go beyond order-of-magnitude estimates. In recent years, several groups have studied the evolution of axion strings and the production of axions using lattice simulations \cite{Yamaguchi:1998gx,Yamaguchi:1999yp,Yamaguchi:1999dy,Yamaguchi:2002sh, Hiramatsu:2010yu,Hiramatsu:2012gg,Kawasaki:2014sqa,Fleury:2015aca,Lopez-Eiguren:2017dmc,Klaer:2017qhr,Klaer:2017ond,Gorghetto:2018myk,Kawasaki:2018bzv,Vaquero:2018tib,Buschmann:2019icd,Klaer:2019fxc,Hindmarsh:2019csc,Gorghetto:2020qws,Gorghetto:2021fsn}. An accurate calculation of the total number density of axions produced is essential for an accurate calculation of the axion energy density, which if matched to the dark matter density today, gives a prediction for the axion mass. While the axion number density is not very sensitive to the string density at the QCD transition (see \cite{Dine:2020pds} for a discussion), high accuracy in the mass estimate is required for resonant cavity searches, which are currently targeting axion masses appropriate for production by vacuum realignment \cite{Braine:2019fqb}. The prediction of the axion density depends on having an accurate description of axion string evolution. As the string evolution takes place from the PQ transition at around $10^{10}$ GeV, to the QCD transition at 100 MeV (a factor $10^{11}$), the results from numerical simulations must be extrapolated. 
It is important to have a physical basis for the extrapolation. Such a physical model is the one-scale model \cite{Kibble:1984hp} and its velocity-dependent improvement \cite{Martins:1996jp,Martins:2000cs,Martins:2018dqg}, which we discuss in more detail below. It predicts that the string network should approach a scaling solution, where the mean string separation grows in proportion to cosmic time $t$, and the RMS velocity of the strings is constant. By scaling we mean that at distances much larger than the string width, network length scales such as the mean string separation are proportional to the cosmic time $t$. In the standard scaling picture the dynamical evolution of the network is independent of the string width and tension. The justification is based on approximating the string dynamics by the Nambu-Goto equations of motion, in which the string tension drops out and the string width plays no role. The general picture of network evolution is that strings are initially in a dense tangle of loops and infinite strings, with mean separation set by the correlation length of the field and the cooling rate \cite{Kibble:1976sj,Zurek:1996sj}. They decay by the collapse of loops of string, both those initially present and those chopped off from the infinite strings. The mean separation grows, until it becomes of order $t$, the cosmic time. This is approximately as fast as causality allows. By this time the temperature has dropped many orders of magnitude, and friction with the cosmic plasma is negligible. Strings evolve essentially in vacuum, in the background provided by the rest of the matter in the universe. In a previous paper \cite{Hindmarsh:2019csc}, we presented results on the scaling dynamics of axion strings. 
We showed that the network evolution was consistent with standard scaling, and obtained an asymptotic value for the dimensionless length density of axion string $\zeta$, which is proportional to the mean number of Hubble lengths of string per Hubble volume, $\zeta_\infty = 1.19 \pm 0.20$. This is consistent with, and improves on the accuracy of, estimates in earlier works \cite{Yamaguchi:1998gx,Yamaguchi:1999yp,Yamaguchi:1999dy,Yamaguchi:2002sh, Hiramatsu:2010yu,Hiramatsu:2012gg,Kawasaki:2014sqa,Lopez-Eiguren:2017dmc}. Equivalently, the mean string separation $\xi$ is always about half a horizon length. In \cite{Hindmarsh:2019csc} we also showed how the presence of the scaling solution can be disguised, either by the choice of variable to study, or by the choice of initial conditions. In this light, claims of a slow or logarithmic growth in the dimensionless length density \cite{Gorghetto:2018myk,Kawasaki:2018bzv,Vaquero:2018tib,Buschmann:2019icd,Martins:2018dqg,Klaer:2019fxc,Gorghetto:2020qws} are to be interpreted as an approach to scaling from initially low values of $\zeta$ that is not completed before the simulation ends. It is important to note that the only significant difference between the simulations of the different groups is the method for preparing the initial conditions of the field, which determines the initial string separation, and that there are no significant differences in the subsequent evolution of the string network. All but one group report $\zeta \lesssim 1$ at the end of the simulation, consistent with our estimate. The exception \cite{Buschmann:2019icd} explicitly discounts the reliability of their high value, due to a non-standard string-finding algorithm. The initial configurations in this work start with random fields with several different initial correlation lengths $l_{\phi}$ in order to cover a range of initial string separations, which tests the sensitivity of the system to the initial conditions. 
The evolution of the system has been carried out using both the true physical field equations and the Press-Ryden-Spergel (PRS) method \cite{Press:1989yh} to simulate strings with constant comoving width. Simulating (a priori unphysical) strings with constant comoving width allows for a longer period of scaling, thus giving insight into the long-term behaviour of a system of strings. We extend the study in \cite{Hindmarsh:2019csc} by analysing the root-mean-square velocity $v$ of the networks alongside the mean string separation in units of cosmic time, $x = \xi/t$. We demonstrate that the evolution of the simulations at later stages can be well described by a two-parameter velocity-dependent one-scale (VOS) model \cite{Martins:1996jp,Martins:2000cs,Martins:2018dqg} where all simulations tend asymptotically to a common point in the phase space $(x_*,v_*)$, a fixed point of the VOS dynamical system. The dynamical systems analysis predicts that the approach to the fixed point is governed by a pair of complex exponents with negative real parts, a stable spiral. We find good quantitative accord with the prediction near the fixed point, where the model is supposed to be a good description. Due to this good accord, we obtain a more precise estimate of the scaling density of strings than in our previous analysis \cite{Hindmarsh:2019csc}. We find that the physical and constant comoving width systems have fixed points which are consistent with each other. Further away from the fixed point, the qualitative agreement is good. The VOS model predicts that initially overdense ($x<x_*$) networks will accelerate, and evolve towards an underdense ($x>x_*$) network as the energy-loss mechanism (production and decay of string loops) overcompensates. The approach to scaling from the underdense side is a common feature of simulations. 
The model also predicts that very underdense networks take a long time to reach scaling, often longer than half-box crossing time, consistent with the very underdense simulations of Refs.~\cite{Gorghetto:2018myk,Gorghetto:2020qws}. \section{Model and network parameters} \label{sec:ModelSims} \subsection{Field dynamics} The simplest axion models include a singlet scalar field with a U(1) symmetry, $\Phi$, with action \begin{equation} S=\int d^4 x \sqrt{-g} \Big( \frac{1}{2} \partial_{\mu} \Phi \partial^{\mu}\Phi- \frac{1}{4}\lambda(\Phi^2-\eta^2)^2 \Big), \label{eq:ac} \end{equation} where we have written the field as a two-component vector, and the U(1) symmetry is realised as a rotation on the vector. In a FLRW metric, and when the field is coupled to a thermal bath of weakly-coupled particles, the equations of motion take the form \begin{equation} {\Phi}''+2\frac{{a'}}{a}{\Phi'}-\nabla^2 \Phi = -a^2 \lambda (\Phi^2-\eta^2(T))\Phi, \label{eq:eom} \end{equation} where $a$ is the scale factor, a prime denotes differentiation with respect to conformal time $\tau$, and in the radiation era $a \propto \tau$. The free energy of the system is minimised at the field magnitude $\eta(T)$, where $\eta^2(T) = d(T_\text{c}^2 - T^2)$, $T_\text{c} \simeq \eta$ is the critical temperature of the PQ phase transition, and $d$ is a constant computable in perturbation theory. For $T \gg T_\text{c}$, it is energetically favourable for the field to fluctuate around $\Phi = 0$. Well below the critical temperature it is energetically favourable for the magnitude of the field to take the value $\eta$, with a massless pseudoscalar fluctuation mode (the axion) and a scalar mode of mass $\ms = \sqrt{2\lambda}\eta$. During the phase transition, the direction in field space is chosen at random in uncorrelated regions of the universe, with the result that the field is forced to stay zero along lines \cite{Kibble:1976sj}. These lines form the cores of the axion strings \cite{Davis:1986xc}. 
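A minimal discretisation of the equation of motion above can be sketched as follows. This is an illustrative sketch only: the grid size, time step, the identification $a=\tau$ for the radiation era, and the semi-implicit treatment of the Hubble damping term are our own choices, not taken from the paper's simulation code.

```python
import numpy as np

def laplacian(phi, dx):
    # Nearest-neighbour Laplacian on a periodic lattice; phi has shape (2, N, N, N)
    lap = -6.0 * phi
    for axis in (1, 2, 3):
        lap += np.roll(phi, 1, axis=axis) + np.roll(phi, -1, axis=axis)
    return lap / dx**2

def evolve_step(phi, pi, tau, dt, dx, lam=1.0, eta=1.0):
    """One update of Phi'' + 2(a'/a)Phi' - lap(Phi) = -a^2 lam (|Phi|^2 - eta^2) Phi,
    taking a = tau (radiation era), so that a'/a = 1/tau."""
    a = tau
    force = laplacian(phi, dx) - a**2 * lam * ((phi**2).sum(axis=0) - eta**2) * phi
    # Semi-implicit handling of the damping term 2 pi / tau, for stability
    pi = ((1.0 - dt / tau) * pi + dt * force) / (1.0 + dt / tau)
    phi = phi + dt * pi
    return phi, pi
```

The two field components realise the U(1) vector $\Phi$, and the nonlinear term drives $|\Phi|$ towards $\eta$ away from string cores.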
The size of the core is approximately $w_0 = \ms^{-1}$. \subsection{Network parameters from field averages} \label{sec:ScObs} The subsequent evolution of the string network can be tracked by the string length $\ell$ and the RMS velocity $v$ of the strings. A couple of estimators for $\ell$ are possible. We define the winding length $\ell_\text{w}$ as the number of plaquettes pierced by strings multiplied by the physical lattice spacing $a\delta x$, corrected by a factor of $2/3$ to compensate for the Manhattan effect \cite{Fleury:2015aca}. Such plaquettes are identified by calculating the ``winding'' phase of the field around each plaquette of the lattice \cite{Vachaspati:1984dz}. This is an estimate of the length of string measured in the ``universe frame'', that is, by observers comoving with the expansion of the universe. Other measures of length are based on the observation that the energy of a string configuration is proportional to its length, and the estimators are constructed using local functions of the fields. To simplify the discussion, we will first neglect the expansion of the universe. Consider a weighted total energy \begin{equation} E = E_\pi + E_{D} + E_V \end{equation} with the functions \begin{eqnarray} E_{\pi} &=& \frac{1}{2} \int d^3 x \Pi^2 \mathcal{W}(\Phi), \\ E_{D} &=& \frac{1}{2} \int d^3 x (\nabla\Phi)^2 \mathcal{W}(\Phi), \\ E_V &=& \int d^3 x V(\Phi) \mathcal{W}(\Phi), \end{eqnarray} where $\Pi = (\partial_t \Phi)$ and $V(\Phi)=\frac{1}{4}\lambda(\Phi^2-\eta^2)^2$. The function $\mathcal{W}(\Phi)$ is a local function of the fields which is strongly peaked near $\Phi = 0$, and zero for $|\Phi| = \eta$, so that it picks out strings. We call the three functions defined above the weighted kinetic, gradient and potential energy respectively. Suppose that all the energy in the volume $\mathcal{V}$ is in the form of global strings, centered on the line ${\mathbf{X}}(\sigma,t)$. 
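Before continuing, the plaquette-based winding estimate described above can be sketched as follows (a minimal illustration with function names of our own choosing; the actual lattice code is not shown in the paper):

```python
import math

def wrap(dtheta):
    """Map a phase difference into [-pi, pi)."""
    return (dtheta + math.pi) % (2.0 * math.pi) - math.pi

def plaquette_winding(theta):
    """Integer winding of the field phase around one plaquette, given the four
    corner phases in cyclic order; a nonzero result flags a pierced plaquette."""
    n = len(theta)
    total = sum(wrap(theta[(i + 1) % n] - theta[i]) for i in range(n))
    return round(total / (2.0 * math.pi))

# A vortex centred inside the plaquette winds the phase once around
print(plaquette_winding([0.0, math.pi / 2, math.pi, 3 * math.pi / 2]))  # 1
```

Summing $|{\rm winding}|$ over all plaquettes and multiplying by $\frac{2}{3} a\delta x$ then gives the Manhattan-corrected winding length $\ell_\text{w}$.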
The coordinate $\sigma$ is chosen so that \(|{\mathbf{X}}'| = (1 - \dot{\mathbf{X}}^2)^\frac{1}{2} ,\) where the prime represents the derivative with respect to $\sigma$, and the dot the derivative with respect to $t$. We denote the total rest-frame length of string \begin{equation} \SlenRes = \int d\sigma. \end{equation} Writing local rest frame space coordinates ${\mathbf{x}}_\text{s}$, and fields measured in the local rest frame with the subscript s, the fields of a piece of string moving with orthogonal velocity $\dot {\mathbf{X}}$ are \begin{eqnarray} \Pi({\mathbf{x}},t) &=& \gamma \dot{\mathbf{X}} \cdot \nabla \Phi_\text{s}({\mathbf{x}}_\text{s}), \\ \nabla\Phi({\mathbf{x}},t) &=& \gamma \hat{{\mathbf{v}}} (\hat{{\mathbf{v}}} \cdot \nabla \Phi_\text{s}({\mathbf{x}}_\text{s}) ) + \nabla^\perp\Phi({\mathbf{x}},t), \end{eqnarray} where $\hat{\mathbf{v}}$ is a unit vector in the direction of $\dot{\mathbf{X}}$, $\gamma =1/\sqrt{1 - \dot{\mathbf{X}}^2}$ is the boost factor, and \begin{equation} \nabla^\perp_i\Phi({\mathbf{x}},t) = (\delta_{ij} - \hat{v}_i\hat{v}_j) \nabla_j \Phi({\mathbf{x}},t). \end{equation} Choosing the local rest frame so that the string is oriented in the $z_\text{s}$ direction, a string moving with velocity $\dot{\mathbf{X}}$ has scalar kinetic energy \begin{eqnarray} E_{\pi} &=& \frac14 \int dx_\text{s}dy_\text{s} (\nabla\Phi_\text{s})^2 \mathcal{W}(\Phi_\text{s}) \int d\sigma \dot{\mathbf{X}}^2\,, \label{e:Epi} \end{eqnarray} gradient energy \begin{eqnarray} E_{D} &=& \frac14 \int dx_\text{s}dy_\text{s} (\nabla\Phi_\text{s})^2\mathcal{W}(\Phi_\text{s}) \int d\sigma \left( 1 + \frac{1}{\gamma^2} \right), \label{e:ED} \end{eqnarray} and potential energy \begin{equation} E_V = \int dx_\text{s}dy_\text{s}V(\Phi_\text{s})\mathcal{W}(\Phi_\text{s}) \int d\sigma \frac{1}{\gamma^2} . 
\label{e:EV} \end{equation} The total energy is therefore \begin{equation} E = \mu ( 1 - f_V v^2 ) \SlenRes, \end{equation} where \begin{eqnarray} \mu &=& \int dx_\text{s}dy_\text{s} \left[ \frac{1}{2} (\nabla\Phi_\text{s})^2 \mathcal{W}(\Phi_\text{s}) + V(\Phi_\text{s}) \mathcal{W}(\Phi_\text{s}) \right] \label{e:StrMuDV} \end{eqnarray} is the $\mathcal{W}$-weighted mass per unit length of a static string, with $f_V$ the fraction contributed by the potential energy density, and we have defined an RMS velocity $v$ through \begin{equation} v^2 = \frac{1}{\SlenRes} \int d\sigma \dot{\mathbf{X}}^2 . \end{equation} A convenient choice for the weight function is \begin{equation} \mathcal{W} = V(\Phi) , \end{equation} for which $\mu = 0.892\eta^2$ and $f_V = 0.368$.\footnote{These numbers are obtained from a code implementing a relaxation method on the discretised radial energy functional, with $800$ lattice points and lattice spacing $\eta dr = 0.01$. The convergence criterion was that the change in energy in an update should be less than $10^{-5}\eta$. } Besides the energy, we can also calculate the Lagrangian \begin{eqnarray} L &=& E_\pi - E_{D} - E_V, \end{eqnarray} finding \begin{eqnarray} L &=& -\mu ( 1 - v^2)\SlenRes. \label{e:ASlag} \end{eqnarray} Combining the total energy $E$ and the Lagrangian estimators, an estimate for both the rest-frame length $\SlenRes$ and the mean square velocity can be obtained \begin{eqnarray} \SlenRes &=& \frac{E + f_V L}{\mu(1 - f_V)} \label{eq:slenres}, \\ v_L^2 &=& \frac{E + L}{E + f_V L}, \label{eq:vel_lag} \end{eqnarray} where the subscript $L$ denotes the use of the Lagrangian to obtain the estimate. An alternative way of estimating the string velocity comes from the pressure, \begin{equation} p\mathcal{V} = E_\pi - \frac{1}{3}E_{D} - E_V, \end{equation} which depends on the rest frame length and RMS velocity as \begin{equation} p\mathcal{V}=\frac{1}{3}\mu\SlenRes\left[ (2v^2-1)-f_V(2-v^2)\right] . 
\end{equation} It is then straightforward to derive another mean square velocity estimator \begin{equation} v_\omega^2 = \frac{1 + 3\omega + 2f_V}{2 + f_V(1 + 3\omega) }, \label{eq:vel_w} \end{equation} where $\omega = p\mathcal{V}/E$ is the equation of state parameter of the strings. A third estimate for the string velocity can be constructed from the ratio of the kinetic to gradient energies \cite{Hindmarsh:2017qff}, \begin{equation} R_{\text{s}} = \frac{E_{\pi}}{E_{D}}, \end{equation} which can be rearranged to give \begin{equation} v_{\text{s}}^2 = \frac{2R_{\text{s}} }{1+ R_{\text{s}} }. \label{eq:vel_s} \end{equation} Given that we only have three independent underlying quantities $E_\pi$, $E_D$, and $E_V$, only three independent estimators can be derived from them: one length, and two velocity estimators. The winding length is not derived from the weighted energies, and so is an independent length estimator. As it is the ordinary Euclidean length of the curve traced by the string, it can be represented as \begin{equation} \label{e:LenWinRes} \SlenWin = \int d\sigma |{\mathbf{X}}'| = \SlenRes \left\langle \gamma^{-1} \right\rangle . \end{equation} Note that the average of $\gamma^{-1}$ is not in general equal to $(1 - v^2)^{1/2}$. In a cosmological simulation one can express the string length in terms of Hubble lengths per Hubble volume, or \begin{equation} \label{e:ZetDef} \zeta = \frac{\ell t^2}{\mathcal{V}} . \end{equation} When investigating scaling in string networks, it is more transparent to parametrise the string density by the mean string separation, which is obtained from measures of the string length via \begin{equation} \label{e:XiDef} \xi = \sqrt{\frac{\mathcal{V}}{\ell}} . 
\end{equation} In this work we will use two length estimators, which will define two different mean string separation estimators: when the length estimator used is the rest-frame estimator $\SlenRes$, we will define $\xi_{\rm r}$; and when the length estimator used is the winding length estimator $\SlenWin$, we will define $\xi_{\rm w}$. The above estimators were derived for a Minkowski space-time. In an expanding background, one can view the space-time coordinates as representing comoving position and conformal time, from which physical lengths follow by multiplication by the scale factor $a$. \section{Simulations and scaling observable results} We solve a discretised version of the equations of motion (\ref{eq:eom}) on a cubic lattice with periodic boundary conditions, evolving the system in conformal time $\tau$. The results we present in this section are extracted from the same set of simulations analysed in \cite{Hindmarsh:2019csc}, where lattices with $4096$ sites per dimension were used with spatial resolution of $\delta x\eta=0.5$ and conformal time steps of $\delta \tau\eta=0.1$. In addition, we use a set of simulations with a larger initial correlation length, but otherwise identical. In the following we only summarise the procedure and refer the reader to \cite{Lopez-Eiguren:2017dmc,Hindmarsh:2019csc} for more detailed descriptions of the method. The field configuration is initiated at conformal time $\tau_{\rm start}$\ by setting the scalar field canonical momentum to zero and the scalar field to be a Gaussian random field with power spectrum $P_{\Phi}(k)={A}\left[{1+(kl_{\phi})^2}\right]^{-1}$, where $A$ is chosen so that $\langle \Phi^2 \rangle=\eta^2$ and $l_{\phi}$ is the field correlation length in comoving coordinates. We use different values of $l_{\phi}$ in order to cover a range of string separations in the initial conditions. 
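The initial-condition construction just described can be sketched as follows (illustrative only; the lattice size, random seed and normalisation step are our own choices):

```python
import numpy as np

def gaussian_initial_field(N, dx, l_phi, eta=1.0, seed=0):
    """Two-component Gaussian random field with power spectrum
    P(k) = A / (1 + (k l_phi)^2), rescaled so that <Phi^2> = eta^2."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    pk = 1.0 / (1.0 + (kx**2 + ky**2 + kz**2) * l_phi**2)
    phi = np.empty((2, N, N, N))
    for c in range(2):
        white = rng.normal(size=(N, N, N))
        phi[c] = np.fft.ifftn(np.fft.fftn(white) * np.sqrt(pk)).real
    phi *= eta / np.sqrt((phi**2).sum(axis=0).mean())  # fixes the amplitude A
    return phi
```

Filtering white noise with $\sqrt{P(k)}$ in Fourier space gives the required two-point function, and the final rescaling enforces the variance condition exactly.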
In order to allow the strings to form, and to remove the energy excess in the field fluctuations around the string configurations, we evolve this configuration with a diffusion equation with unit diffusion constant until conformal time $\tau_{\rm diff}$. We then apply the second-order time evolution equation (\ref{eq:eom}). Similarly to our previous paper, we extract data from simulations with both fixed comoving string width and fixed physical string width. We promote the scalar self-coupling constant to be a time dependent parameter $\lambda = \lambda_0/a^{2(1-s)}$ following the PRS method \cite{Press:1989yh}. This makes the comoving string width decrease with conformal time as \begin{equation} w(\tau) = \frac{w_0}{a^s(\tau)} \, . \label{e:StrWid} \end{equation} The physical equation of motion, where the physical string width remains constant at $w_0 = 1/(\sqrt{2\lambda_0}\,\eta)$, and the comoving width decreases with time, corresponds to $s=1$. With $s=0$ the comoving width is constant at $w_0$ and the physical string width increases in time. For the $s=1$ case, it is difficult to avoid the string width being larger than the Hubble length at early times, which also means that the relaxation of the field to its equilibrium value takes longer than a Hubble time. In order to speed up the string formation, we arrange the time-dependence of the coupling so that strings are formed and diffused with a constant comoving width, equal to their final comoving width. At the end of the diffusion period, the string width is much smaller than its physical value $w_0$. The string width is then allowed to grow by setting $s = -1 $ until $\tau_{\rm cg}$, which is when the string core has expanded to its correct physical width $w_0$. After conformal time $\tau_{\rm cg}$, the physical evolution with $s=1$ starts. We call this procedure core growth. Simulations end at conformal time \ensuremath{\tau_{\rm end}}. 
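The relation between the PRS time-dependent coupling and the comoving string width can be checked directly (a sketch with function names of our own choosing; everything is expressed through the scale factor, so no choice of $a(\tau)$ is needed):

```python
import math

def scalar_coupling(a, lam0, s):
    """PRS time-dependent self-coupling, lambda = lambda0 / a^(2(1-s))."""
    return lam0 / a ** (2.0 * (1.0 - s))

def comoving_width(a, lam0, eta, s):
    """Comoving string width w = 1/(a m_s) with m_s = sqrt(2 lambda) eta;
    this reduces to w0 / a^s, i.e. Eq. (StrWid)."""
    ms = math.sqrt(2.0 * scalar_coupling(a, lam0, s)) * eta
    return 1.0 / (a * ms)

lam0, eta = 0.5, 1.0
w0 = 1.0 / (math.sqrt(2.0 * lam0) * eta)
for s in (1, 0, -1):          # physical, PRS, and core-growth stages
    for a in (1.0, 2.0, 4.0):
        assert abs(comoving_width(a, lam0, eta, s) - w0 / a**s) < 1e-12
```

The three values of $s$ correspond to the physical evolution, the constant-comoving-width (PRS) evolution, and the core-growth stage described above.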
Table \ref{tab:sims} contains all simulation parameter choices that have been considered in the procedures described above. Four simulations with different random number seeds were carried out at each parameter choice, for a total of 28 runs. The data are analysed in cosmic time $t = (\tau/\ensuremath{\tau_{\rm end}})^2\ensuremath{\tau_{\rm end}}/2$. \begin{table}[h!] \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c||c||c|} \hline Model & $s=1$ & $s=0$ \\\hline $l_{\phi}\eta$ & (5,10,20,40) & (5,10,20) \\ \hline $\tau_{\rm start}\eta$ & 50 & 50 \\ $\tau_{\rm diff}\eta$ & 70 & 70 \\ $s_{\rm cg}$ & -1 & -- \\ $\tau_{\rm cg}\eta$ & 271.11 & -- \\ \ensuremath{\tau_{\rm end}\eta} & 1050 & 1050 \\ \hline \end{tabular} \caption{\label{tab:sims} Run parameters used in simulations. See text for explanation.} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{xi_compare_4ks1CL5.pdf} \includegraphics[width=\columnwidth]{xi_compare_4ks0CL5.pdf} \caption{ Comparison of the mean string separation defined from the winding length estimator $\xi_{\rm w}$ (solid black) and the rest-frame estimator $\xi_{\rm r}$ (solid blue) as presented after Eq.~(\ref{e:XiDef}), from a single simulation with correlation length $l_{\phi}\eta=5$ and $s=1$ (upper panel) and $s=0$ (lower panel). The dashed black line corresponds to the winding estimator $\xi_{\rm w}$ modified via Eq.~(\ref{e:LenWinRes}). \label{fig:xiScaling_comp}} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{ratio_l_4ks1rs.pdf} \includegraphics[width=\columnwidth]{ratio_l_4ks0rs.pdf} \caption{ Ratios of the winding length estimator $\SlenWin$ to rest-frame estimator $\SlenRes$ (solid line) for different correlation lengths. Each line corresponds to a single simulation. In dashed, we plot $(1 - v^2_\text{s})^{1/2}$. 
\label{fig:xiratio}} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{xi_scaling_4ks1r.pdf} \includegraphics[width=\columnwidth]{xi_scaling_4ks0r.pdf} \caption{ Mean string separation $\xi_r$ from simulations with $s=1$ (top panel) and $s=0$ (bottom panel) with initial field correlation lengths $l_{\phi} \eta=5$ (black), $l_{\phi} \eta=10$ (red), $l_{\phi} \eta=20$ (blue) and $l_{\phi} \eta=40$ (green - only for $s=1$). The solid line represents the mean over realisations of $\xi$ at each time, with the shaded regions showing the 1-$\sigma$ variation. The vertical green line corresponds to the end of the core growth period, after which the system is evolved with the physical equations of motion in the $s=1$ case. \label{fig:xiScaling}} \end{figure} Figure \ref{fig:xiScaling_comp} shows the comparison of the evolution of the mean string separation for $\xi_{\rm r}$ and $\xi_{\rm w}$ presented in the previous section (\ref{e:XiDef}) for a single run with $l_{\phi}\eta=5$. The upper panel is for simulations with $s=1$ and the lower panel for $s=0$. Their growth is consistent with a linear asymptote, as extensively studied in \cite{Hindmarsh:2019csc}. This is the expectation from the standard picture of scaling in axion string networks \cite{Vilenkin:1982ks,Kibble:1984hp,Martins:1996jp}. Note that in \cite{Hindmarsh:2019csc}, the winding length estimator $\xi_{\rm w}$ was used to establish the asymptotic linear growth; here we establish that the rest-frame estimator also grows linearly, as expected. The ratio of the winding length estimator $\SlenWin$ to the rest frame estimator $\SlenRes$ is plotted in Fig.~\ref{fig:xiratio} (solid line), which according to Eq.~(\ref{e:LenWinRes}) is an estimate of $\vev{\gamma^{-1}}$, where $\gamma$ is the Lorentz factor of the string. 
For comparison, we plot $(1 - v^2_\text{s})^{1/2}$, using the scalar field estimator (\ref{eq:vel_s}), whose time-dependence in the simulations is discussed later in this section. As pointed out in the previous section, the two quantities are not necessarily equal, but empirically we observe that they are close by the end of the simulation. The closeness of $(1 - v^2_\text{s})^{1/2}$ to $\vev{\gamma^{-1}}$ is also observed in Fig.~\ref{fig:xiScaling_comp}, where we show as a dashed line the winding estimator multiplied by $(1 - v^2_\text{s})^{1/2}$. We choose $\SlenRes$ as the length estimator for the rest of this work, as it is better suited to the dynamical modelling we carry out. It can be related to the winding length through the factor of approximately $0.8$ shown in Fig.~\ref{fig:xiratio}. The evolution of the mean string separation for all simulations is shown in Fig.~\ref{fig:xiScaling}. The solid line represents the mean obtained by averaging over realisations and the shaded regions the $1\sigma$ standard deviations. Uncertainties are calculated by propagating the fluctuations in the weighted energies \eqref{e:Epi}, \eqref{e:ED} and \eqref{e:EV}. We use black for $l_{\phi}\eta=5$, red for $l_{\phi}\eta=10$, blue for $l_{\phi}\eta=20$, and green for $l_{\phi}\eta=40$ (only in simulations with $s=1$). The end of the core growth period is shown as a vertical green dashed line. These figures extend the results of Fig.~1 in Ref.~\cite{Hindmarsh:2019csc}, which shows the winding length estimator only for $s=1$, and a subset of the initial correlation lengths. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{v_compare_4ks1CL5.pdf} \includegraphics[width=\columnwidth]{v_compare_4ks0CL5.pdf} \caption{ Comparison of velocity estimators presented in Sec.~\ref{sec:ScObs} for a simulation with correlation length $l_{\phi} \eta=5$ and $s=1$ (upper panel) and $s=0$ (lower panel). 
The values of the scalar field velocity estimator $v_{\text{s}}$ (\ref{eq:vel_s}), the equation of state velocity estimator $v_\omega$ (\ref{eq:vel_w}) and the Lagrangian-derived velocity estimator $v_L$ (\ref{eq:vel_lag}) are shown in black, red and blue, respectively. \label{fig:v_comp}} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{v_scaling_4ks1r.pdf} \includegraphics[width=\columnwidth]{v_scaling_4ks0r.pdf} \caption{ Velocities from scalar field estimator $v_{s}$ (\ref{eq:vel_s}) from simulations with $s=1$ (top panel) and $s=0$ (bottom panel) with initial field correlation lengths $l_{\phi} \eta=5$ (black), $l_{\phi} \eta=10$ (red), $l_{\phi} \eta=20$ (blue) and $l_{\phi} \eta=40$ (green - only for $s=1$). The solid line represents the mean over realisations at each time, with the shaded regions showing the 1-$\sigma$ variation. The vertical green line corresponds to the end of the core growth period. \label{fig:v_s_Scaling}} \end{figure} We now turn to the velocity estimators. To establish their consistency, we plot all three for the same run in Fig.~\ref{fig:v_comp}, with $s=1$ in the top panel and $s=0$ in the bottom panel. As mentioned in the previous section, only two are independent, but the fact that all three are so close gives confidence that they are indeed estimating a global translational velocity of a string-like solution, rather than field fluctuations in regions where $\Phi$ is close to zero, which is a potential contaminant of velocity estimators. Uncertainties in velocities are calculated by propagating the fluctuations in the weighted energies \eqref{e:Epi}, \eqref{e:ED} and \eqref{e:EV}. We find that the largest fluctuations are in the weighted potential energy $E_V$. We therefore choose the estimator $v_\text{s}$ derived from kinetic and gradient energies only (\ref{eq:vel_s}) as the mean square string velocity estimator, and show the means and uncertainties in Fig.~\ref{fig:v_s_Scaling}. 
We see that after an initial period of acceleration, there is a decreasing trend, approaching what appears to be a constant value at the end of the simulations. The maximum velocity is larger for the fields with smaller correlation length in the initial conditions, as is consistent with string-like behaviour, where acceleration is proportional to curvature. For the case of strings with constant physical width ($s=1$), the RMS velocity is approximately constant during the core growth phase, and then approaches an asymptote more slowly than the $s=0$ simulations. Figures~\ref{fig:xiScaling} and \ref{fig:v_s_Scaling} show that, independently of the initial field correlation length, all simulations are compatible, \textit{i.e.\ } all of them give separation and velocity data which are within $1\sigma$ of each other. Moreover, the behaviour of both estimators ($\xi$ and $v$) qualitatively agrees with the standard scaling, showing a tendency towards linear growth in $\xi$ and a constant RMS velocity. There is a departure from standard scaling in the earlier phases of the simulations, which needs to be understood in order to improve the estimates of the asymptotic behaviour of $v$ and $\xi$, or more precisely the asymptotic values of the {scaled mean string separation}, \begin{equation} x = \xi/t . \label{x} \end{equation} In our previous paper \cite{Hindmarsh:2019csc}, which studied scaling using $\xi_\text{w}$ only, we observed that the average slope $ \Delta \xi / \Delta t$ converged more quickly to a constant than $\xi / t$. We therefore used the slope of the curve of $\xi$ against $t$ as our estimator of the asymptotic value $x_*$. The linear fit can have a significant constant term, which we parameterised in terms of the intercept with the time axis, the time offset, $t_0$. The value of $t_0$ has no physical importance, and instead parametrises an effect of the initial conditions. 
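The slope-and-offset estimate just described can be illustrated with a short numerical sketch (synthetic data standing in for the simulation output; the values of $x_*$ and $t_0$ used here are hypothetical):

```python
import numpy as np

# Synthetic data obeying xi = x_*(t - t_0), mimicking the late-time linear
# growth described in the text; x_* and t_0 values here are hypothetical.
x_star_true, t0_true = 0.81, -15.0
rng = np.random.default_rng(1)
t = np.linspace(200.0, 500.0, 50)
xi = x_star_true * (t - t0_true) + rng.normal(0.0, 0.5, t.size)

# Linear fit xi = a*t + b: the slope a estimates x_*, and the intercept
# with the time axis, t0 = -b/a, is the time offset.
a, b = np.polyfit(t, xi, 1)
t0 = -b / a
print(a, t0)
```

The ratio $\xi/t$ approaches the slope only as $t_0/t$ decays, which is why the slope converged more quickly than $\xi/t$ in the earlier analysis.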
In this paper we make use of RMS velocity data, which gives extra information about the approach to scaling, and avoids the need for $t_0$. We will see that in doing so we improve the accuracy of the estimate of $x_*$, while remaining consistent with our previous estimate. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{phaseplane_contour_4ks1_paper.pdf} \includegraphics[width=\columnwidth]{phaseplane_contour_4ks0_paper.pdf} \caption{ Phase space plot for $v_s$ and $x_{\rm r}$ for $s=1$ (top panel) and $s=0$ (bottom panel), with the same colour scheme for the different initial correlation lengths as in the previous figure. Larger dots are plotted every 20 cosmic time units, starting at cosmic time $t\eta=4$. As time increases, all curves spiral in towards an apparent fixed point, analysed in Section \ref{sec:fits}. \label{fig:phaseplane}} \end{figure} In Fig.~\ref{fig:phaseplane} we show the evolution of the network in the phase space $(x,v)$, where the {scaled mean string separation} is measured with the rest-frame string length $x_{\rm r}$ and the RMS velocity is measured using the scalar field energies $v_\text{s}$. The phase space representation shows clearly the different regimes in which the network evolves. At the end of the diffusion period the strings are accelerated by their curvature, and the RMS velocity increases rapidly while the inter-string distance remains nearly constant. For $s=1$ simulations, the diffusive evolution is followed by the core growth period, which is part of the preparation of the initial conditions. During the core growth period, velocities remain approximately constant. The scaled mean string separation, however, changes, and it changes differently for different initial correlation lengths: for correlation lengths $l_{\phi}\eta=5,\ 10$, the scaled mean string separation grows, whereas for correlation lengths $l_{\phi}\eta=20,\ 40$ it decreases. 
Finally, when the physical equations of motion (\ref{eq:eom}) are being solved, the system starts to spiral towards an apparent common fixed point for all simulations. Estimating the position of the fixed point and hence the asymptotic values of $x$ and $v$ is the subject of the next two sections. It is interesting to note the qualitative difference in the velocity evolution between the core growth era and the physical equations of motion. In the core growth era the velocity remains constant after the initial acceleration: this constant depends on the initial conditions. It is only after the physical evolution sets in that the velocity starts evolving towards its asymptote. Note that the core growth era corresponds to evolution with the fixed scale hierarchy $\ms/H$ explored in Ref.~\cite{Klaer:2019fxc}. We will discuss this observation in the final section. The initial conditions of the field and the time at which they are set determine the simulation's starting point in the phase space. Figure~\ref{fig:phaseplane} shows that by varying the initial field correlation length one can choose whether to start on the left hand side or on the right hand side of the hypothetical fixed point, corresponding to strings being either above or below their scaling density. As mentioned before, the time offset $t_0$ depends on the initial condition, and an approach to the fixed point from the right corresponds to $t_0<0$. \section{Phase space analysis with the VOS model} In this section we model our results as a dynamical system. The model best adapted to a network of strings is the velocity-dependent one-scale (VOS) model \cite{Kibble:1984hp,Martins:1996jp,Martins:2018dqg}. This class of models assumes a statistical distribution of string configurations and velocities which has a universal form, parametrised by the string separation $\xi$ (or equivalently the length $\ell$ in a volume $\mathcal{V}$) and RMS velocity $v$. 
When applied to Nambu-Goto strings, the VOS model describes ``long'' strings only, that is, either infinite strings, or string loops with total length greater than some threshold of order $\xi$. In our simulations, string lengths and velocities are measured over the whole string network, including loops. As a string network's total length is dominated by strings winding around the simulation's periodic box, the distinction should not be important for a first approximation. The movement of the long strings causes a segment of string of length $\xi$ to encounter others at a rate of order $v/\xi$. The encounter causes the string to reconnect, producing a loop. If this loop is smaller than $\xi$ it radiates and shrinks without further encounters, apart from self-intersections. The net result is the loss of energy from the string network into axions and massive scalar radiation. The string motion is a balance between the acceleration caused by the curvature, and the Hubble damping. There can also be damping due to the preferential loss of energy from fast-moving segments of string, which encounter others more rapidly: this effect is neglected in the simplest models. There is also direct energy loss from long strings in the form of radiation, which we will not distinguish from energy loss via loops in our modelling. The equations of motion for a system of Nambu-Goto strings, together with the assumptions above, lead to the following dynamical system, \begin{eqnarray} \frac{d\xi}{dt} &=& H\xi(1+v^2) + \frac{1}{2} cv, \label{eq:xiVOS} \\ \frac{dv}{dt} &=& (1-v^2)\left(\frac{k}{\xi}-2Hv\right), \label{eq:vVOS} \end{eqnarray} where $H$ is the Hubble parameter. The model has two phenomenological parameters, $k$ and $c$. The parameter $c$ describes the efficiency of the energy loss mechanism, while the parameter $k$ describes the correlation between the string curvature vector and the velocity. For exactly Nambu-Goto strings, $k$ is a function of velocity. 
However, as a first approximation, and because the strings we are studying are not exactly Nambu-Goto, we will take $k$ to be a constant. Using the dimensionless mean string separation variable $x$ (\ref{x}), and taking $H = 1/(2t)$, as appropriate for a radiation-dominated universe, \begin{eqnarray} t \dot{x}&=& \frac{1}{2} x \left(v^2 - 1\right) +\frac{c}{2} v, \\ t \dot{v} &=& \left(1-v^2\right)\left(\frac{k}{x}- v\right). \end{eqnarray} This dynamical system has a fixed point in the relevant region $x \ge 0$, $0 \le v < 1$, \begin{equation} \label{e:FixPoi} x_*=\sqrt{k( c+k)} , \qquad v_*=\sqrt{\frac{k}{( c+k)}} . \end{equation} From here, one can express the parameters $c$ and $k$ in terms of the fixed point values, \begin{equation} k = x_*v_*, \quad c = \frac{x_*}{v_*} ( 1 - v_*^2). \label{fp} \end{equation} Small perturbations $(\delta x, \delta v)$ evolve to the fixed point according to \begin{equation} t \frac{d}{dt} \left( \begin{array}{c} \delta x \\ \delta v \end{array} \right) = M_* \left( \begin{array}{c} \delta x \\ \delta v \end{array} \right) , \end{equation} where \begin{equation} M_*= \left( \begin{array}{cc} \frac{1}{2}(v_*^2-1)& \frac{ x_*}{2v_*}(1+v_*^2)\\ \frac{v_*}{x_*}(v_*^2-1)&v_*^2-1 \end{array} \right) . \end{equation} The eigenvalues $\sigma_\pm$ of $M_*$ are \[ \sigma_\pm= -\frac{3}{4}\kappa\pm\sqrt{\frac{9}{16}\kappa^2-\kappa} , \] where $\kappa = 1 - v_*^2$. Since $0<\kappa<1$, the eigenvalues $\sigma_\pm$ are complex, with negative real part. Therefore, the fixed point is a stable spiral. We plot flows in the phase diagram predicted by the VOS model in Fig.~\ref{f:PhaPlaFlo} for the global best fit $(x_*,v_*)$, along with the mean values of selected $(x,v)$ from the simulations. The stable spiral form is clearly visible in the streamlines. In the next section we explain the fitting procedure from which the global best fit $(x_*,v_*)$ was obtained. 
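These properties are straightforward to verify numerically. The following sketch (illustrative only; the values of $k$ and $c$ are chosen near the $s=1$ global means found by the fits) computes the fixed point and the eigenvalues of $M_*$, and integrates the scaled system in the variable $u=\ln t$, in which it becomes autonomous:

```python
import numpy as np

k, c = 0.49, 0.84  # illustrative values, close to the s=1 global means

# Fixed point (e:FixPoi) of the scaled VOS system
x_s = np.sqrt(k * (c + k))
v_s = np.sqrt(k / (c + k))

# Jacobian M_* at the fixed point and its eigenvalues sigma_pm
M = np.array([[0.5 * (v_s**2 - 1.0), x_s * (1.0 + v_s**2) / (2.0 * v_s)],
              [v_s * (v_s**2 - 1.0) / x_s, v_s**2 - 1.0]])
sigma = np.linalg.eigvals(M)  # complex pair, Re sigma = -(3/4)(1 - v_*^2)

# In u = ln t the scaled system is autonomous; integrate it with RK4.
def rhs(y):
    x, v = y
    return np.array([0.5 * x * (v**2 - 1.0) + 0.5 * c * v,
                     (1.0 - v**2) * (k / x - v)])

y = np.array([0.2, 0.1])  # dense, slow initial network
h, n = 0.01, 1500         # u runs from 0 to 15
for _ in range(n):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

print(x_s, v_s)  # fixed point
print(sigma)     # stable spiral: negative real part, nonzero imaginary part
print(y)         # the trajectory ends close to (x_*, v_*)
```

The integrated trajectory traces out the same spiral approach to $(x_*,v_*)$ as the simulation data in Fig.~\ref{fig:phaseplane}.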
\section{Fits and asymptotic behaviour} \label{sec:fits} In this section we measure the degree to which the evolution dictated by the VOS model presented in the previous section, Eqs.~(\ref{eq:xiVOS}) and (\ref{eq:vVOS}), is compatible with our simulations, by performing a fitting analysis. We compute the $\chi^2$ value for each set of runs with a given $(s,l_{\phi})$ as \begin{equation} \chi^2=\sum_i \frac{(O_i-E_i)^2}{\sigma_i^2}, \end{equation} where $O_i$ is the observed value, $E_i$ the expected value on the basis of the model, and $\sigma_i$ the uncertainty in the observed value. In our case, the observed values are the time series data $(x,v)$ recorded from our simulations. We use a bootstrapping method to create the time series for a specific $l_{\phi}$. For each case we have four different runs, out of which we create a bootstrapped time series by randomly choosing, at each time step $i$, the value from one of the runs. This procedure is performed four times, so that four different bootstrapped time series are created. The observed values and their uncertainties are then obtained by averaging and computing the standard deviation from those bootstrapped realisations. The expected value is the value predicted by the VOS model, as described below. The set of observations is taken in the time range $[t_{\text{fit}}, t_{\text{end}}]$, where the start of the fitting period is $t_\text{fit}\eta=171.4$ (conformal $\tau_\text{fit}\eta = 600$). Note that the $l_{\phi}\eta=40$ case is present only for $s=1$. We explore the two-dimensional parameter space using a grid of size $100\times100$, with the priors $0.35<k<0.7$ and $0.6<c<1$. These are set by preliminary analysis of a wider parameter space. \begin{table}[h!] 
\renewcommand{\arraystretch}{1.5} \scalebox{0.95}{ \begin{tabular}{|c|c|c|c|c|c|} \hline $s$ & $l_{\phi}\eta$ & $k$ & $c$ & $x_*$ & $v_*$ \\ \hline \multirow{3}{*}{0} & 5 & $0.474\pm0.006$ & $0.811\pm0.010$ & $0.780\pm0.009$ & $0.607\pm0.002$\\ & 10 & $0.486\pm0.003$ & $0.845\pm0.017$ & $0.805\pm0.007$ & $0.604\pm0.004$\\ & 20 & $0.497\pm0.010$ & $0.764\pm0.010$ & $0.792\pm0.010$ & $0.628\pm0.005$\\ \hline \multicolumn{2}{|c|}{Mean} & $0.487\pm0.013$ & $0.803\pm0.032$ & $0.793\pm0.012$ & $0.615\pm0.012$\\ \hline \hline \multirow{4}{*}{1} & 5 & $0.459\pm0.008$ & $0.829\pm0.016$ & $0.768\pm0.013$ & $0.597\pm0.003$\\ & 10 & $0.485\pm0.011$ & $0.856\pm0.032$ & $0.806\pm0.021$ & $0.601\pm0.004$\\ & 20 & $0.519\pm0.004$ & $0.888\pm0.020$ & $0.854\pm0.006$ & $0.607\pm0.005$\\ & 40 & $0.521\pm0.005$ & $0.797\pm0.015$ & $0.829\pm0.006$ & $0.629\pm0.005$\\ \hline \multicolumn{2}{|c|}{Mean} & $0.494\pm0.027$ & $0.843\pm0.039$ & $0.814\pm0.037$ & $0.609\pm0.014$\\ \hline \end{tabular}} \caption{\label{tab:ckxv} Inferred best-fit values of model parameters $c$ and $k$, and asymptotic values of $x$ and $v$ for each correlation length in $s=0$ and $s=1$. These values are obtained by fitting a set of 20 bootstrap realisations for each correlation length, using data with $t > 171.4$ (conformal time 600). The uncertainties on the global means for $s=0$ and $s=1$ are the standard deviations of the mean values for each correlation length. } \end{table} The best-fit values of $c$ and $k$, and the asymptotic scaled mean string separation and velocity ($x_*$ and $v_*$) for each $(s,l_{\phi})$ can be found in Table~\ref{tab:ckxv}, where fits were taken with $\ensuremath{t_{\rm fit}}\eta = 171.4$. The final mean values for $s=0,1$ are obtained by averaging over different values of $l_{\phi}$, and the quoted uncertainties correspond to the resulting standard deviations. The errors obtained by quadrature combination of bootstrap errors were systematically smaller than the standard deviations. 
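The bootstrap construction and the $\chi^2$ evaluation can be sketched as follows (synthetic series stand in for the four runs; the model prediction is a constant stand-in, whereas in the actual fit it comes from integrating the VOS equations at each grid point $(k,c)$):

```python
import numpy as np

rng = np.random.default_rng(7)
n_runs, n_boot, n_steps = 4, 20, 100

# Four mock time series standing in for the four runs at one (s, l_phi).
runs = 0.8 + 0.02 * rng.standard_normal((n_runs, n_steps))

# Bootstrapped series: at each time step, pick the value from a random run.
boot = np.empty((n_boot, n_steps))
for b in range(n_boot):
    pick = rng.integers(0, n_runs, n_steps)
    boot[b] = runs[pick, np.arange(n_steps)]

# Observed values O_i and uncertainties sigma_i from the bootstrap ensemble.
O = boot.mean(axis=0)
sigma = boot.std(axis=0, ddof=1)

# chi^2 against a model prediction E_i (a constant stand-in here; in the
# fit, E_i would come from the VOS model for a trial (k, c)).
E = np.full(n_steps, 0.8)
chi2 = np.sum((O - E) ** 2 / sigma**2)
print(chi2 / n_steps)
```

Scanning such a $\chi^2$ over a grid of trial $(k,c)$ values and taking the minimum is the essence of the procedure described above.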
Mean values of the parameters for other values of $\ensuremath{t_{\rm fit}}$ are shown in Table~\ref{tab:ckxv_early}. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{phaseplane_stream_4ks1_paper.pdf} \includegraphics[width=\columnwidth]{phaseplane_stream_4ks0_paper.pdf} \caption{\label{f:PhaPlaFlo} Phase plane for $s=1$ (top), $s=0$ (bottom), with streamlines for the best-fit values of the VOS model parameters shown in Table~\ref{tab:ckxv_early}, with $\ensuremath{t_{\rm fit}}\eta = 50$. The colour scheme is the same as in previous figures. The larger markers correspond to the same points as in Fig.~\ref{fig:phaseplane}, with empty circles denoting points with $t\eta < 50$.} \end{figure} Figure \ref{f:PhaPlaFlo} shows the evolution of the simulations alongside the streamlines of the VOS dynamical system calculated using the inferred global mean values $(x_*, v_*)$, obtained by fitting with $\ensuremath{t_{\rm fit}}\eta = 50$, the earliest time from which we start fitting (see Table~\ref{tab:ckxv_early}). It can be seen that after an initial relaxation period, the simulation data follow the spiral-like evolution towards the fixed point. A key feature is that initial conditions with $x < x_*$ tend to flow to states with $v > v_*$, and then around to $x> x_*$, corresponding to string networks less dense than scaling. For a more quantitative comparison between our simulations and the VOS model, we show in Figure~\ref{f:Resi} the relative difference between the simulation time series data and the VOS best-fit model for each $(s, l_{\phi})$, where the initial conditions for the integration of the VOS equations are set at $t_{\text{fit}}$. Shaded regions correspond to the uncertainties propagated from simulation estimators. It can be observed that the mean relative difference always lies below the $5\%$ level, with zero deviation always within the errors. \begin{table}[h!] 
\renewcommand{\arraystretch}{1.5} \scalebox{0.95}{ \begin{tabular}{|c|c|c|c|c|c|} \hline $s$ & $t_{\text{fit}}\eta$ & $k$ & $c$ & $x_*$ & $v_*$ \\ \hline \multirow{7}{*}{0} & 50 & $0.486\pm0.027$ & $0.804\pm0.008$ & $0.793\pm0.030$ & $0.614\pm0.011$\\ & 100 & $0.487\pm0.018$ & $0.800\pm0.018$ & $0.792\pm0.022$ & $0.615\pm0.007$\\ & 150 & $0.484\pm0.017$ & $0.804\pm0.034$ & $0.790\pm0.020$ & $0.613\pm0.010$\\ & 171 & $0.486\pm0.012$ & $0.807\pm0.041$ & $0.792\pm0.013$ & $0.613\pm0.013$\\ & 233 & $0.478\pm0.011$ & $0.808\pm0.053$ & $0.783\pm0.026$ & $0.610\pm0.010$\\ & 305 & $0.450\pm0.015$ & $0.789\pm0.077$ & $0.746\pm0.025$ & $0.604\pm0.022$\\ & 386 & $0.450\pm0.036$ & $0.745\pm0.115$ & $0.732\pm0.029$ & $0.616\pm0.040$\\ \hline \multirow{7}{*}{1} & 50 & $0.493\pm0.057$ & $0.840\pm0.013$ & $0.810\pm0.062$ & $0.607\pm0.024$\\ & 100 & $0.493\pm0.043$ & $0.841\pm0.015$ & $0.811\pm0.045$ & $0.607\pm0.020$\\ & 150 & $0.498\pm0.031$ & $0.837\pm0.021$ & $0.815\pm0.036$ & $0.611\pm0.012$\\ & 171 & $0.496\pm0.030$ & $0.843\pm0.039$ & $0.814\pm0.037$ & $0.609\pm0.014$\\ & 233 & $0.492\pm0.028$ & $0.835\pm0.046$ & $0.808\pm0.037$ & $0.609\pm0.013$\\ & 305 & $0.484\pm0.028$ & $0.854\pm0.083$ & $0.804\pm0.032$ & $0.602\pm0.026$\\ & 386 & $0.483\pm0.060$ & $0.846\pm0.102$ & $0.799\pm0.050$ & $0.604\pm0.045$\\ \hline \end{tabular}} \caption{\label{tab:ckxv_early} Global mean values of model parameters $c$ and $k$, and asymptotic values of $x$ and $v$, with different start times for the fit $\ensuremath{t_{\rm fit}}$. } \end{table} Table~\ref{tab:ckxv_early} contains the global mean values obtained applying the same bootstrap procedure as in the previous analysis. The last four fit start times are those used for the linear fitting procedure carried out in Ref.~\cite{Hindmarsh:2019csc}. A remarkably good agreement is obtained in the global means when comparing early and late fits. Earlier fits have a smaller scatter in $c$, but a larger scatter in $k$. 
This can be related to the spread of velocities in the initial conditions. The spread is minimised for the intermediate times $\ensuremath{t_{\rm fit}} = 150, 171$ and $233$. We quote results from $\ensuremath{t_{\rm fit}} = 171$, which is also the earliest fitting time from our previous paper \cite{Hindmarsh:2019csc}, meaning that a direct comparison of the methods can be made. The simplest VOS model therefore gives a good quantitative description of the joint evolution of the string separation and RMS velocity. We have experimented with fitting for additional parameters $q$ and $d$ (with $\beta = 1$ and $r = 1$) in the VOS model presented in Ref.~\cite{Correia:2019bdl}, but our preliminary analysis shows that the preferred values for these additional parameters are compatible with zero. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{xi_vos_diff_4ks1_paper.pdf} \includegraphics[width=\columnwidth]{v_vos_diff_4ks1_paper.pdf} \caption{\label{f:Resi} Relative difference between the VOS prediction and the simulation data of the dimensionless string separation $x$ and RMS velocity $v$. The shaded bands represent errors propagated from the simulations' energy estimators.} \end{figure} It is also interesting to study the network evolution in terms of the length density parameter $\zeta$ (\ref{e:ZetDef}), and by plotting against the logarithm of time one can emphasise the earlier times when the network is further away from scaling. Fig.~\ref{f:Log} shows our $s=1$ rest-frame length data plotted this way, along with the best-fit VOS models for each correlation length, and their extrapolation to larger values of time. The asymptotic $\zeta_{\text{r},*}$ obtained from the overall mean values of fit parameters in Table~\ref{tab:ckxv} is also depicted, for which we obtain $\zeta_{\text{r},*} = 1.50 \pm 0.11$. The central value is shown as a solid purple line and its corresponding errors in shaded purple bands. 
Note that all simulations approach the asymptotic $\zeta$ from below, and are still slowly increasing at the end of the simulation, but remain within $20\%$ of its asymptotic value. The increase is most noticeable for simulations which start very underdense, and therefore have further to evolve to reach scaling. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{zeta_vosbehaviour_4ks1_paper.pdf} \includegraphics[width=\columnwidth]{v_vosbehaviour_4ks1_paper.pdf} \caption{\label{f:Log} String network evolution expressed in terms of the rest-frame length density parameter $\zeta$ and RMS velocity $v$, plotted against $\log(t\eta)$, with VOS models (dotted line) that correspond to best-fit values for each $l_{\phi}$ shown in Table~\ref{tab:ckxv}. The prediction of the VOS model is plotted from $t_{\rm fit}$ on. The horizontal dashed black line and grey band in the top panel show the mean and uncertainty obtained from our previous analysis \cite{Hindmarsh:2019csc}, translated from the universe-frame length used in that paper by multiplying by $(1 - v_*^2)^{-1/2}$. The solid purple line and shaded purple bands are the mean and uncertainty obtained from the analysis in this work. The vertical green dashed line marks the end of the core growth phase, and the fit start time $\ensuremath{t_{\rm fit}}$ is indicated by the vertical grey dashed line. } \end{figure} We also include the value of the asymptotic length density parameter reported in our previous paper \cite{Hindmarsh:2019csc} in dashed black with its corresponding uncertainty in grey. In \cite{Hindmarsh:2019csc} the universe-frame string length $\SlenWin$ was used as the measure of the string length, and the value of $x_*$ estimated by linear fits to $\xi_\text{w}$ against $t$, from which we obtained an estimate of the asymptotic value $\zeta_{*} = 1.19 \pm 0.20$. 
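The boost between the universe-frame and rest-frame density parameters used below is a one-line check (central values taken from the text):

```python
import math

zeta_univ = 1.19   # universe-frame estimate from the earlier analysis
v_star = 0.609     # asymptotic RMS velocity (s=1 global mean)

# Rest-frame density parameter: zeta_r = zeta * (1 - v_*^2)^(-1/2)
zeta_rest = zeta_univ / math.sqrt(1.0 - v_star**2)
print(f"{zeta_rest:.2f}")  # ~1.50
```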
In this work we use the rest-frame length estimator; by (\ref{e:LenWinRes}) the corresponding string density parameter is $\zeta_{\text{r},*} = \zeta_{*} (1 - v_*^2)^{-1/2}$, which gives $\zeta_{\text{r},*} = 1.51\pm 0.25$. As the figure shows, the agreement with our previous result is very good. \section{Discussion and conclusions} \label{sec:Con} In this paper we have studied axion string networks in the radiation era by measuring the root-mean-square velocity of the strings $v$ and the mean string separation $\xi$. The strings are modelled by a scalar field with two real components with a spontaneously broken O(2) symmetry, simulated in periodic cubic lattices. Performing a phase space analysis in the variables $x = \xi/t$ and $v$, we find good evidence for the existence of a fixed point, which shows that the system reaches a scaling regime. This prompted us to continue the analysis in the framework of the velocity-dependent one-scale (VOS) model. The VOS model assumes a statistical distribution of string positions and velocities which can be adequately described by the two parameters mentioned above. By assuming that the strings follow approximately Nambu-Goto trajectories, that they reconnect with a fixed high probability when they cross, and that the loops so formed annihilate quickly, the VOS model reduces the network evolution to a simple dynamical system, a pair of first-order non-linear ordinary differential equations. The equations have a fixed point $(x_*,v_*)$, which describes a scaling network, that is, one whose mean separation increases linearly with time, and with constant RMS velocity. We have fitted the results of a set of numerical simulations in the radiation era to a two-parameter VOS model, with initially random fields with several different initial correlation lengths $l_{\phi}$, using both the true physical field equations and the PRS approximation. 
In terms of the core growth parameter $s$, the comoving string width behaves as $w = w_0 a^{-s}$. In this paper we have used $s=1$, which corresponds to the true physical case, and $s=0$, which corresponds to a string with constant comoving width. We find that the two-parameter VOS model gives a good qualitative and quantitative description of the network evolution, with parameters given in Table \ref{tab:ckxv}. Qualitatively, the initial acceleration of the string network results in an RMS velocity which overshoots the fixed point $v_*$, as the Hubble length in our initial conditions is larger than the string separation, meaning that the dynamical system is underdamped. The higher velocity results in more rapid loop formation, and hence an increase in the mean string separation. This decreases the acceleration, and hence the RMS velocity. The net result is a curved approach to the fixed point in the $(x,v)$ plane, clearly visible in Fig.~\ref{f:PhaPlaFlo}. In assessing the quantitative success of the two-parameter VOS model, we observe that the residuals of the fits in Fig.~\ref{f:Resi} are consistent with zero, and that the fixed points given in Table \ref{tab:ckxv} for different initial correlation lengths are remarkably similar. The fluctuations between the fixed point estimates are slightly larger than the bootstrap fitting errors would predict, which suggests that the model could be tuned slightly, or that the fitting errors have been underestimated. A preliminary investigation shows that the more complex model of Ref.~\cite{Correia:2019bdl} does not improve the fit. A more thorough exploration of VOS models and a more accurate estimate of the fixed point could be obtained with a wider range of initial correlation lengths and initial times. 
Translating the values of Table \ref{tab:ckxv} to the universe-frame length density parameter $\zeta$, estimated in our previous paper by linear fitting, we find \begin{eqnarray} \zeta_* &=& 1.20 \pm 0.09 \; (s=1), \\ \zeta_* &=& 1.25 \pm 0.04 \; (s=0). \end{eqnarray} These values are consistent with our previous determination, with an improved accuracy arising from the joint fit with the velocity data in the context of the VOS model. An important consequence of the description in terms of a dynamical system is that the approach to the fixed point is determined by a pair of complex exponents $\sigma_\pm$, whose real part is $-\frac{3}{4}( 1- v_*^2) \simeq -0.47$. Hence, even when close to the fixed point, the approach can be rather slow. If the initial string separation $\xi_\text{i}$ is chosen far away from its scaling value $x_* t_\text{i}$, it may not get within $1\sigma$ of its scaling value (as determined by the VOS model) by the end of the simulation, which has to be chosen as $L/2$ for a box of side $L$ in systems like this one with degrees of freedom propagating at the speed of light. This is particularly noticeable for initial conditions which are very underdense, i.e.~with $x_\text{i} \gg x_*$. When the length density parameter $\zeta$ is plotted against the logarithm of cosmic time $\log(t\eta)$, one sees a slow drift up towards the fixed point value. Other groups have also noticed this feature of underdense initial conditions \cite{Gorghetto:2018myk,Kawasaki:2018bzv,Vaquero:2018tib,Buschmann:2019icd,Klaer:2019fxc,Gorghetto:2020qws}. As we have explained elsewhere \cite{Hindmarsh:2019csc} this does not signal a breakdown of the standard scaling picture. Our analysis in the framework of the VOS model shows that slow approaches to the fixed point from values of $\zeta$ less than its fixed point value are to be expected, and indeed, nearly all simulations to date have final values of $\zeta$ less than our estimated fixed point. 
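The slowness of this approach can be made concrete with a quick estimate (assuming $v_* \simeq 0.61$ from the fits):

```python
import math

v_star = 0.609  # asymptotic RMS velocity from the fits (assumed value)
re_sigma = -0.75 * (1.0 - v_star**2)  # real part of sigma_pm, ~ -0.47

# Near the fixed point, deviations decay as (t/t_i)^{Re sigma}, so a
# factor-10 reduction in the deviation requires cosmic time to grow by
t_ratio = 10.0 ** (-1.0 / re_sigma)
print(re_sigma, t_ratio)  # roughly two orders of magnitude in t
```

That is, even a network prepared close to scaling needs cosmic time to grow by a factor of order $10^2$ to shrink its distance from the fixed point tenfold, well beyond the dynamic range of current simulations.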
If the slow upward drift in $\zeta$ were an asymptotic feature of axion string networks, one would expect to see final values of $\zeta$ significantly above the fixed-point value in the largest simulations. However, the maximum values of $\zeta$ obtained in the most recent (and therefore largest) simulations are nearly all below the value of $\zeta_*$ computed in this work. These values are (all of them measured in the universe frame): $\zeta\simeq0.9$ in the physical case and $\zeta\simeq1.2$ using the PRS approximation in \cite{Gorghetto:2020qws}, $\zeta\simeq 1.1$ in the physical case in \cite{Klaer:2019fxc} and $\zeta\simeq1.4$ using the PRS approximation in \cite{Vaquero:2018tib}. In \cite{Kawasaki:2018bzv} the authors show the physical evolution of $\zeta$ for three different ratios of the Hubble scale to the string width at the time of the PQ phase transition, giving $\zeta\simeq 1.3$, $\zeta\simeq 1.1$ and $\zeta\simeq 0.9$. In \cite{Buschmann:2019icd}, the quoted value $\zeta\simeq4$ is not consistent with those of other groups, but the authors of that work caution that the method they use to detect strings gives only a rough estimate of $\zeta$. They also suggest that a better method will render their results comparable with the ones in \cite{Gorghetto:2020qws}, and therefore, also with other groups. In summary, our data and fits already show that the straightforward and physically motivated simplest VOS model provides a good description of the evolution of the string network, consistent with the standard scaling picture: an asymptotically constant dimensionless length density and RMS velocity. This gives confidence that our results can be extrapolated over the many orders of magnitude required for predictions of the axion number density. As pointed out in Ref.~\cite{Hindmarsh:2019csc}, predictions from a scaling string network will be around 50\% higher than recent estimates \cite{Klaer:2017ond}. 
We also comment on a suggestion that simulations with growing comoving string width shed light on the asymptotic behaviour of axion string networks \cite{Klaer:2019fxc}. If the string width grows in proportion to the horizon ($w \propto \tau$), the field equations have a scale symmetry, and it can be argued that a fixed point must exist. We use this growth in width in the core growth phase of our $s=1$ simulations, where it can be seen that this phase is characterised by a constant RMS velocity, but not a consistent one. A smaller initial correlation length gives a larger RMS velocity in the core growth phase (see the top panels in Figs.~\ref{f:Log} and \ref{fig:v_s_Scaling}). This is understandable in that the initial acceleration is proportional to the curvature. However, as the mean string separation increases, the RMS velocity of the strings does not decrease, as would be expected from the decrease in the average acceleration. While it seems that $x$ is evolving towards a value $x_* \simeq 0.8$, there is little sign of a definite value of the RMS velocity. It seems therefore that networks in the core growth phase can be used to estimate the fixed point in $x$, but not the RMS velocity. As a final remark, we note that the asymptotic scaling behaviour presented here should also apply to generic global string networks, such as those in axion models beyond the canonical QCD scenario \cite{Svrcek:2006yi}. A general observational consequence of scaling in a system of topological defects is a scale-invariant gravitational wave power spectrum \cite{Figueroa:2012kw,Figueroa:2020lvo} during radiation domination. This suggests that the recent claim that axion string networks produce a tilted gravitational wave spectrum \cite{Gorghetto:2021fsn}, based on an assumed logarithmic growth in the string density throughout the radiation era, should be revisited. \begin{acknowledgments} We are grateful to Daniel Cutting and Daniel G. 
Figueroa for comments on the draft manuscript. MH (ORCID ID 0000-0002-9307-437X) acknowledges support from the Science and Technology Facilities Council (grant number ST/L000504/1) and the Academy of Finland (grant number 333609). JL (ORCID ID 0000-0002-1198-3191) and JU (ORCID ID 0000-0002-4221-2859) acknowledge support from Eusko Jaurlaritza (IT-979-16) and PGC2018-094626-B-C21 (MCIU/AEl/FEDER,UE). ALE (ORCID ID 0000-0002-1696-3579) is supported by the National Science Foundation grant PHY-1820872. ALE is grateful to the Early Universe Cosmology group of the University of the Basque Country for their generous hospitality and useful discussions. This research was supported by the Munich Institute for Astro- and Particle Physics (MIAPP) which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2094 – 390783311. This work has been possible thanks to the computational resources on the STFC DiRAC HPC facility obtained under the dp116 project. Our simulations also made use of facilities at the i2Basque academic network and CSC Finland. \end{acknowledgments}
\section{Introduction} Topology optimization (TO) seeks the optimum material distribution given an objective function, a design domain, and a set of boundary conditions. Each optimization is an iterative process, often requiring hundreds of iterations. From a computational standpoint, most of the available computational effort in a 3D TO run is invested in solving the analysis equations at every iteration. The associated computational cost, which is significant for any problem that approaches real conditions, typically depends on the dimensionality and resolution of the domain, on the number of design variables being used, as well as on the numerical solution procedure itself. A good and relatively recent review of the popular numerical solution procedures can be found in \cite{sigmund2013topology} and a detailed discussion of the computational cost involved can be found in \cite{amir2011reducing} and \cite{limkilde2018reducing}. Very recently, various machine learning algorithms have been proposed to tackle the massive computational cost of established gradient-based TO methods, including convolutional neural networks (CNN) \cite{banga20183d}, generative adversarial networks \cite{li2019non}, and conditional generative adversarial networks \cite{yu2019deep}. While these approaches are promising, the training needed by these methods requires large datasets that are computationally expensive to generate, as discussed below. Furthermore, these methods have not been shown to be capable of handling multiple boundary conditions, domains with different topologies and geometries, or high-resolution 3D domains. 
In this paper we introduce a transfer learning method based on a convolutional neural network that (1) can handle high-resolution 3D design domains of various shapes and topologies; (2) supports real-time design space explorations as the domain and boundary conditions change; (3) requires a much smaller set of high-resolution examples for the improvement of learning in a new task compared to traditional deep learning networks; and (4) is multiple orders of magnitude more efficient than the established gradient-based methods. \section{Background} The published literature on topology optimization has exploded over the last two decades to include methods that use shape and topological derivatives or evolutionary algorithms formulated on various geometric representations and parametrizations. Different approaches have been developed to search for the optimal shape, including approaches based on material density functions \cite{bendsoe1989optimal}; level sets \cite{allaire2002level,chen2007shape}; topological derivatives \cite{eschenauer1994bubble,novotny2003topological}; phase fields \cite{bourdin2003design}; and several other variations \cite{sigmund2013topology}. One popular and established approach based on material density, known as Solid Isotropic Material with Penalization (SIMP) \cite{bendsoe1989optimal}, finds the optimal topology by changing the material density of elements in the unit interval $[0,1]$. The methods based on topological derivatives aim to predict the sensitivity of the problem to the addition of an infinitesimal hole at prescribed locations inside the design domain, and this information is used to generate new holes \cite{sokolowski2009topological}. One of the key challenges of all these methods is the immense computational cost associated with 3D topology optimization problems, so it is not surprising that there is an extensive body of work dedicated to improving the computational cost of TO. 
Parallel computing has been employed in \cite{borrvall2001large} for large scale topology optimization by dividing the domain into sub-domains that are independently solved on separate processors. The results presented in \cite{borrvall2001large} include an optimization of a domain with 24x80x128 elements in 234 minutes using 16 processors. The work described in \cite{wu2015system} relies on a high-performance multigrid GPU solver to find the optimum solution of models with millions of elements. Their method solved on the GPU a 200x100x100 cantilever beam with 50 iterations, and volume fraction of 0.8, in 2.4 minutes. Observe that standard SIMP implementations \cite{andreassen2011efficient,liu2014efficient} require more than 120 iterations for convergence, even for simple cantilever beams. Design space adjustment and refinement is employed in \cite{jang2008design} to speed up the computations for large-scale domains. This work performs TO for an MMB beam with 10800 elements in 5.5 minutes, which is roughly a third of the time required by a SIMP implementation to produce a solution to the same problem. Other papers, such as \cite{kim2012new}, have employed a reduction of the number of design variables to decrease the computational cost, and showed the capability to produce an optimal solution of a cantilever beam in 10.12 minutes compared to the 20.2 minutes required by a SIMP-based solver. As mentioned above, machine learning methods have been recently used to tackle the computational efficiency of the topology optimization problem. For example, the work discussed in \cite{lynch2019machine} uses machine learning to tune the numerical parameters that control the convergence of established topology optimization algorithms, avoiding the manual tuning, which is computationally costly. 
CNNs have been applied to estimate the optimal topologies for 3D low resolution beams \cite{banga20183d}, or 2D domains \cite{sosnovika2017neural,lin2018investigation,gaymann2019deep,o2019standard}. Moreover, CNNs have been very recently used to predict the optimum 2D structure for simple low resolution 2D domains \cite{zhang2019deep}. In that paper, the authors claim a generalization ability of their network primarily because they explicitly input into the network the initial displacement and strain fields as well as the volume fraction, rather than the explicit boundary conditions. However, they use 80,000 training samples and 10,000 test samples, which is simply prohibitive for any problem that is of reasonable complexity. At the same time, different versions of GANs have been used to optimize the topology of 3D cantilever beams \cite{rawat2019application}, 2D low resolution cantilever beams \cite{rawat2019novel,shen2019new}, and 2D high resolution beams \cite{li2019non,yu2019deep}. Variational auto-encoders (VAE) and support vector regression (SVR) have been applied to 2D low resolution domains with different boundary conditions \cite{guo2018indirect}, and to cantilever beams \cite{lei2019machine}, respectively. All these methods can reduce the TO computational time for a given domain resolution, set of boundary conditions, and initial domain as long as large amounts of data are available. However, applying these methods to tasks that the algorithms have not been trained on requires new large sets of training data. This is not only impractical for any topology optimization problem of reasonable complexity, but also makes these methods unsuitable for design space explorations. A key assumption of most machine learning algorithms is that the training data and task data are in the same feature space and have the same distribution. This is a reasonable assumption as long as the training data is relatively painless to generate and abundant. 
However, this is definitely not the case in many engineering applications, including topology optimization. Transfer learning has emerged as a promising learning paradigm that has the potential to greatly improve the learning performance by limiting the amount of training that needs to be performed to adapt the algorithms to new scenarios. It aims to imitate one of the distinctive features of human intelligence, that is, to effectively transfer previously learned knowledge to new domains \cite{weiss2016survey, torrey2010transfer}. Transfer learning has been successfully used in medical image processing \cite{khatami2018sequential}, and brain-computer interface calibration \cite{hossain2018multiclass}. \begin{figure}[t] \centering \includegraphics[width=0.8\Columnwidth]{transfer3.jpg} \caption{A schematic diagram of transfer learning, after \cite{Pan2010survey}.} \label{fig1} \end{figure} \subsection{Contributions and Outline} In this paper we propose a transfer learning approach that we developed specifically for topology optimization. We show that the method produces highly accurate predictions of the optimal 3D topologies at real-time rates for non-trivial 2D and 3D high resolution TO problems. Furthermore, we show that the proposed method serves as the first practical underlying framework for real-time 3D design explorations based on topology optimization, and that fine tuning/retraining the proposed learning algorithm for new tasks can be done with a much smaller high-resolution dataset than traditional deep learning networks require. To the best of our knowledge, this paper documents the first promising attempt to use transfer learning for topology optimization. The rest of the paper is organized as follows. Section \ref{formulation:sec} presents the formulation of the proposed method, as well as a detailed description of the implementation and data generation. 
This section also includes a discussion of the source and target networks as well as of the two metrics that we used to evaluate our network. Section \ref{results} illustrates the generality and flexibility of our approach by providing a variety of 2D and 3D examples with different resolutions, boundary conditions and design spaces, including design domains that are \textit{unseen} to the source network. These examples show that the proposed method supports real-time design space explorations as the domain and boundary conditions change and is multiple orders of magnitude more efficient than the established methods for both 2D and 3D design scenarios. Finally, Section \ref{conclusions:sec} summarizes the key advantages and limitations of the proposed method and of its potential applications. \begin{figure*}[t] \centering \includegraphics[height=4cm]{flow-chart.png} \caption{A diagram of the proposed method for 3D topology optimization.} \label{fig:2} \end{figure*} \section{Problem Formulation}\label{formulation:sec} We describe the proposed method in the context of the well known minimum compliance topology optimization problem, in which the objective is a ``global response'' of the structure. However, our method can be equally well applied to TO problems that consider more local responses in their objective functions, such as stress-based TO \cite{le2010stress}. The general topology optimization method aims to find the spatial distribution of material $\rho(\mathbf{x})$ that minimizes an objective function $f(\Omega, \rho)$, subject to various constraints $g_i \leq 0$, where the state field $\mathbf{u}$ satisfies a given state equation. It is common to assume that the objective function is expressed as an integral over a local function such as the strain energy density \cite{sigmund2013topology}, and to solve the problem using the finite element method. 
In the SIMP method, each element is associated with a density variable $\rho_{e} \in [0,1]$, where $0$ corresponds to an empty element, and $1$ to an element completely filled with material. This optimization problem can be formulated as \begin{eqnarray}\label{eq.1} \min &:& f(\Omega, \mathbf{\rho}) = \mathbf{U^{t}KU} = \sum E_{e}(\rho_{e})\mathbf{u}_{e}^{t}\mathbf{k}^0_{e}\mathbf{u}_{e} \\ \mathrm{s.t.} &:& \mathbf{KU} = \mathbf{F} \\ & & \sum \rho_{e}v_{e} \leq V_{max} \\ & & 0\leq \rho \leq 1 \end{eqnarray} where $\Omega$ is the domain; $\mathbf{\rho}$ is the design variable vector of densities; $\mathbf{U}$ and $\mathbf{F}$ are the global displacement and force vectors, respectively; $\mathbf{K}$ is the global stiffness matrix; $\mathbf{u}_{e}$ is the element displacement vector; $\mathbf{k}^0_{e}$ is the element stiffness matrix for an element with unit Young's modulus; $v_{e}$ and $\rho_{e}$ are the volume and density of element $e$, respectively; and $V_{max}$ is the volume upper bound. $E_{e}(\rho_{e})$ is the element's Young's modulus determined by the element density $\rho_{e}$: \begin{equation} E_{e}(\rho_{e}) = E_{min} + \rho_{e}^{p}(E_{0}-E_{min}), \hspace{1cm} \rho_{e} \in[0,1] \label{eq.2} \end{equation} In equation (\ref{eq.2}), $E_{0}$ represents the stiffness of the solid material, $E_{min}$ is a very small stiffness assigned to void elements (so that the global stiffness matrix remains non-singular), and $p$ is a penalization factor \cite{sigmund2007morphology,sigmund2013topology}. \subsection{Topology Optimization with Transfer Learning} One of the distinctive features of human intelligence is the ability to effectively transfer previously learned knowledge to new application domains. We all use this capability every single day even without realizing it \cite{Steiner2001}. In contrast, most machine learning algorithms are trained on and function only on well defined tasks. 
Transfer learning aims to improve this limitation by transferring the knowledge from a source task to improve the performance in a target task that is different from but related to the source task. Every transfer learning algorithm uses specific learning algorithms to learn the tasks at hand, which is why transfer learning is often described as an extension of those learning algorithms. In a broad sense, transfer learning deals with three different questions: (1) \textit{what} information should be transferred; (2) \textit{how} to transfer the information; and (3) \textit{when} to transfer it. As an example, for the topology optimization task discussed in this paper, we transfer the weights and biases of all layers of the source network except for the last layer. To address the second question, we implemented a mechanism to transfer this knowledge from the source network to the target network with minimal retraining. The third question deals with establishing the cases when the knowledge transfer should be performed, i.e., the validity of the transfer learning method \cite{Pan2010survey}, and this is often addressed by measuring the performance on new tasks. This is also the approach that we take in this paper. Because optimizing a high resolution domain using state of the art gradient-based TO methods, including the popular SIMP method, is always computationally very expensive, generating sufficient training data for practical design scenarios becomes a crucial bottleneck. This is why we developed a deep transfer learning method based on a fully convolutional neural network, which allows the knowledge obtained from training the algorithm on low resolution models to be reused on high resolution cases with different design domains and boundary conditions that the source network has not been trained on. Figure \ref{fig1} illustrates how transfer learning works. 
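The answer to the \textit{what} question, reusing every source layer except the last, can be sketched abstractly with the networks represented as ordered dictionaries of layer parameters. The layer names, the dictionary representation, and the placeholder weights below are hypothetical, not the authors' implementation:

```python
from collections import OrderedDict

def transfer_weights(source, target):
    """Copy all source-layer parameters into the target except those
    of the last source layer, which is dropped before transfer.
    Layers present only in the target keep their fresh (randomly
    initialised) parameters and are trained on the small
    high-resolution dataset."""
    kept = list(source)[:-1]          # drop the last source layer
    out = OrderedDict(target)
    for name in kept:
        out[name] = source[name]
    return out

# Hypothetical layer names and weight placeholders, for illustration:
source = OrderedDict(conv1="w1", conv2="w2", out="w_last")
target = OrderedDict(conv1=None, conv2=None, up1=None, out_hr=None)
merged = transfer_weights(source, target)
```

After the transfer, only the fresh layers (here `up1` and `out_hr`) need substantial training, which is why a much smaller high-resolution dataset suffices.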
First, a source model is built and trained with a large amount of low resolution data, which is relatively inexpensive from a computational point of view. This is followed by transferring what the source model learned to a target model operating on different but related tasks. This process allows the use of a much smaller amount of data to retrain/fine tune the target model to improve learning in the target task. In our implementation, the source and target models are CNN based encoder-decoders, with the source model trained on a large amount of low resolution (and thus relatively inexpensive) data, and the target model retrained for the new task with relatively small high-resolution datasets. The overall architecture is illustrated in Fig.\ref{fig:2}. The input is the design space and the boundary conditions. For the examples presented in this paper we only considered externally applied forces, but adding externally applied torques is straightforward. During the final step, we build the target network described in section \ref{networks} by augmenting the source network with additional layers, and train it with the high resolution data. \subsection{Network Architecture and Network Training}\label{networks} Convolutional Neural Networks (CNNs) have been successfully used in object classification and segmentation tasks. More recently, a number of papers have shown CNNs to be trainable and effective on large datasets for solving inverse problems in imaging, such as denoising, deconvolution, super-resolution, and medical image reconstruction. These inverse problems typically involve the determination of an image from noisy measurements. Since the datasets output by SIMP can loosely be considered as a special case of these inverse problems, the architectures of our source and target networks are inspired by some of the work reviewed in \cite{mccann2017convolutional} and elsewhere. \begin{figure*}[h!] 
\centering \includegraphics[height=8cm]{2D/NET1.png} \caption{The architecture of the source and target networks. Numbers below the boxes denote the number of filters used.} \label{fig:4} \end{figure*} Our source network is a two-dimensional encoder-decoder based convolutional neural network (CNN) illustrated in Fig.\ref{fig:4}. The encoder part includes eight convolutional layers with the rectified linear unit (ReLU) activation function, and three max-pooling layers. The decoder part of our source network includes three transposed convolutional layers and seven convolutional layers using the ReLU activation function. We trained the network using the ADAM optimizer \cite{lehman2010revising}, which is a standard gradient-based optimization algorithm. The ADAM optimizer finds the optimal weights and biases of the network that minimize the loss between the predicted structures and the simulated structures according to the mean squared error (MSE). We built our target network on top of the source network as illustrated in Figure \ref{fig:4}. To the end of the source network we added one transposed convolutional layer to increase the output dimension of the pre-trained network to the higher resolution, as well as three trainable convolutional layers. To the front of the network, we added a rescaling function to downsample the higher resolution input to the target network to the lower resolution required by the source network. Prior to training the target network, we remove the last layer of the pre-trained network. This modification is based on measuring the accuracy of the predictions as described in the next section. This modified target network is trained as described in section \ref{results} with ADAM as the optimizer and MSE as the loss function. Importantly, we trained the source network only once for a given dimension of the space (2D or 3D). 
This trained source network is integrated within the target network, which is retrained/fine tuned with smaller datasets for new design domains and/or boundary conditions. \subsection{Data Generation} We used freely available SIMP-based topology optimization finite element codes \cite{andreassen2011efficient,liu2014efficient} to generate our training and test cases, and we modified the codes to automate the training/test case generation for different domains and boundary conditions. The inputs to the codes are the following design variables and physical quantities: voxelized domain geometry, volume fraction, filter radius (see \cite{sigmund2007morphology,bourdin2001filters} for details), loading boundary conditions (number, magnitudes, directions, spatial locations), and displacement boundary conditions. We prescribed to the SIMP codes a volume fraction of 0.5 and a filter radius of 1.5. We sampled the magnitudes of the force components using uniform random sampling within the range $[-100,100]$N. The spatial location on the domain boundary where the external load was applied was chosen based on a uniform random sampling within prescribed ranges along the coordinate axes. For example, the force components $P_x$ and $P_y$ for a 2D beam domain are applied within the ranges $[\frac{b_x}{2},b_x]$ and $[0, b_y]$, respectively, where $b_x$ and $b_y$ are the beam dimensions in the $x$ and $y$ directions. We also used a discrete random sampling to select one of the defined displacement boundary constraints illustrated in Fig.\ref{fig:3}. 
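The randomized load generation described above can be sketched as follows. The force and position ranges and the three constraint cases are taken from the text; the function name, the dictionary output, and the string labels for the constraint cases are assumptions for illustration, not the authors' actual code:

```python
import random

def sample_load_case(bx, by, seed=None):
    """Draw one random 2D load case: force components uniform in
    [-100, 100] N, application point uniform in [bx/2, bx] x [0, by],
    and one of three displacement-constraint cases chosen uniformly."""
    rng = random.Random(seed)
    Px = rng.uniform(-100.0, 100.0)
    Py = rng.uniform(-100.0, 100.0)
    x = rng.uniform(bx / 2.0, bx)    # load applied on the right half
    y = rng.uniform(0.0, by)
    bc = rng.choice(["cantilever", "simply_supported",
                     "constrained_cantilever"])
    return {"Px": Px, "Py": Py, "x": x, "y": y, "bc": bc}

case = sample_load_case(bx=80, by=40, seed=0)
```

Seeding the generator makes each training/test case reproducible, which is convenient when the same randomized cases must be fed to both the SIMP solver and the network.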
\begin{figure*}[t] \centering \includegraphics[width=2\Columnwidth]{Beams.png} \caption{The randomly selected displacement boundary constraint cases: (a) Cantilever Beam, (b) Simply Supported Beam, (c) Constrained Cantilever, and (d) The boundary conditions for the cases shown in Figures \ref{fig:cube} and \ref{fig:cubesph}.} \label{fig:3} \end{figure*} With this random sampling of the boundary conditions we generated two datasets: a low-resolution dataset for our source network, and a smaller, high resolution dataset for the target model. These two datasets, with distinct elements, were then split into training and testing datasets, where the latter was at least 20\% of the former. Details about the sizes of these datasets are provided in Sections \ref{2DStructures} and \ref{3DStructures}. The individual sizes of the testing datasets for our examples are shown in Tables \ref{tab:2Daccuracies} and \ref{tab:3Daccuracies}. We use matrices to store the data fed into the five channels of our network architecture. For example, we used five channels for the 2D cases, as follows: \begin{enumerate} \item First channel: Initial density value for each voxel. \item Second channel: Constraints in the $x$ direction initialized to zero, then the elements corresponding to the constrained elements are set to 1. \item Third channel: Constraints in the $y$ direction initialized to zero, then the elements corresponding to the constrained elements are set to 1. \item Fourth channel: Force value in the $x$ direction at each voxel. \item Fifth channel: Force value in the $y$ direction at each voxel. 
\end{enumerate} \subsection{Evaluating the Network}\label{net:eval:sec} We employed the following two criteria for the evaluation of our method against state of the art methods: \begin{enumerate} \item \textit{Mean Squared Error} (MSE), which is defined as: % \[ MSE = \frac{\sum_{j=1}^{M}\sum_{i=1}^{N} (y_{pred}^{ij} - y_{true}^{ij})^2}{N \cdot M} \] % where $N$ and $M$ are the number of rows and columns, and $y_{pred}^{ij}$ and $y_{true}^{ij}$ are the predicted and reference values of the element located in the $i^{th}$ row and $j^{th}$ column, respectively. \item \textit{Binary Accuracy} (BA), a widely used measure that compares the binarized values of the predicted elements with the actual values \cite{christen2007quality}: % \[BA = \frac{TP + TN}{N_{tot}}\] % where $TP$ (True Positive) is the number of elements correctly predicted as 1, $TN$ (True Negative) is the number of elements correctly predicted as 0, and $N_{tot} = N \cdot M$ is the total number of elements. Prior to calculating the binary accuracy, we rounded the element values to the nearest integer, that is, either to 0 or 1. \end{enumerate} As the average of the squared deviations, the Mean Squared Error is a second sample moment about the origin of the error, and is the minimum variance unbiased estimator \cite{birch1983classroom}. For a given predictor, the closer MSE is to zero, the better the predictor performance. Furthermore, accuracy reflects the overall ratio of correct predictions. A common version of this measure, namely the binary accuracy, uses binarized density values in the ground-truth and predicted domains, and shows how accurately the network predicts the existence of material in each voxel. The values of the binary accuracy lie within the unit interval, and the closer the binary accuracy is to 1, the better the prediction. 
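The two evaluation metrics can be sketched for a pair of small density fields; the list-of-lists grid representation and the toy values are assumptions for illustration:

```python
def mse(pred, true):
    """Mean squared error over an N x M grid of densities."""
    rows, cols = len(pred), len(pred[0])
    return sum((pred[i][j] - true[i][j]) ** 2
               for i in range(rows) for j in range(cols)) / (rows * cols)

def binary_accuracy(pred, true):
    """Fraction of elements whose rounded (0/1) densities agree:
    (TP + TN) / total number of elements."""
    rows, cols = len(pred), len(pred[0])
    hits = sum(round(pred[i][j]) == round(true[i][j])
               for i in range(rows) for j in range(cols))
    return hits / (rows * cols)

pred = [[0.9, 0.1], [0.49, 0.8]]
true = [[1.0, 0.0], [0.51, 1.0]]
# The 0.49 vs 0.51 element has a tiny squared error yet counts as a
# binary miss, illustrating how the two metrics can disagree.
```

On this toy grid the MSE is small (0.0151) while the binary accuracy is 0.75, because the near-threshold element rounds to different sides of 0.5.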
In this work, we use the binary accuracy to measure how accurate our network is in predicting the existence of material, and we use MSE to estimate\footnote{These two evaluation metrics do not always agree. For example, assume that the predicted and actual values are 0.49 and 0.51, respectively. The MSE is 0.04\%, which suggests the prediction is very accurate, but binary accuracy treats the predicted value as 0 and the actual value as 1, and implies a binary accuracy of 0. A more detailed discussion can be found in standard texts on probability.} how close the density predictions are to the ground-truth values. Our experiments achieved an average binary accuracy and MSE of around 95\% and 3\%, respectively. All predictions were performed on a Dell Intel Xeon Processor E5-2650 v3 CPU with 64 GB RAM and an Nvidia Quadro K2200 4GB GPU. All training and test cases were generated on the UConn HPC facility running the Red Hat RHEL7 operating system. For the optimal topologies predicted by our algorithms, we also provide the corresponding compliance error relative to the compliance of the ground truth optimal structures. Since our predicted structures are directly output by our algorithms without any post-processing/beautification steps, the resulting compliance errors shown for the examples are particularly promising. \section{Results}\label{results} \subsection{2D Structures}\label{2DStructures} \subsubsection*{Comparison with Ground Truth (SIMP)} We used a freely available Matlab code \cite{andreassen2011efficient} to generate the ground truth results. For the 2D examples, we trained the source network with 8,000 low resolution cases (40 x 80), and fine-tuned the target model (various resolutions, as shown in Table \ref{tab:2Daccuracies}) with 1500 high resolution cases. Importantly, the source network is trained only once and the learned information is reused. 
The high resolution cases used for fine tuning the target network have the same resolution as the test cases used in our examples and shown in Table \ref{tab:2Daccuracies}. To generate the training data for all 2D examples, boundary conditions have been randomly selected from one of the 3 cases shown in Figure \ref{fig:3}(a-c), and the location, orientation and magnitude of the external force have also been randomized. We first compared our approach with the performance of the SIMP solver \cite{andreassen2011efficient} for 2D structures in terms of the output accuracy, as described in section \ref{net:eval:sec}, and the average time required to obtain the optimal topology. For this evaluation, we used 2,000 low resolution test cases for the source network and a smaller set of high resolution cases with randomly generated boundary conditions, as summarized in Table \ref{tab:2Daccuracies}. Fig.\ref{fig:2Dbeam-rect} shows a side-by-side comparison between the optimal topologies output by our method and the corresponding ground-truth results (i.e., SIMP-based optimal structures) for different domains and boundary conditions. Specifically, Figure \ref{fig:2Dbeam-recta} shows the optimal 2D structure output by our source network alone on a low resolution design domain as well as the corresponding MSE, binary accuracy and compliance error. Moreover, Figures \ref{fig:2Dbeam-rect}(b-f) show the optimal 2D structures output by our target network for different domains and boundary conditions, and for design domains with higher resolutions. The predicted and ground truth solutions are not only visually similar, but the corresponding average MSE and average binary accuracy are around 3.6\% and 95\%, respectively, and the resulting compliance error is around 8.7\%. Figures \ref{fig:7} and \ref{fig:9} show a similar comparison for domains that have different geometry, topology and boundary conditions that the source network has \textit{not} been trained on. 
The examples shown in Figures \ref{fig:7} and \ref{fig:9} also use different volume fractions (i.e., 0.3 and 0.4, respectively). Importantly, these Figures illustrate not only the performance of our predictions, but also the fact that the same source network can be used to build different target models for substantially different design spaces and boundary conditions. \begin{figure*}[p!] \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=.5\linewidth]{2D/4080/4080.png} \caption{} \label{fig:2Dbeam-recta} \end{subfigure} \hspace{-4.3cm} \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=.5\linewidth]{2D/80160/80160.png} \caption{} \label{fig:2Dbeam-rectb} \end{subfigure} \hspace{-4.3cm} \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=.5\linewidth]{2D/120160/120160.png} \caption{} \label{fig:2Dbeam-rectc} \end{subfigure} \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=.5\linewidth]{2D/120240/120240.png} \caption{} \label{fig:2Dbeam-rectd} \end{subfigure} \hspace{-4.3cm} \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=.5\linewidth]{2D/160320/160320.png} \caption{} \label{fig:2Dbeam-recte} \end{subfigure} \hspace{-4.3cm} \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=.5\linewidth]{2D/200400/200400.png} \caption{} \label{fig:2Dbeam-rectf} \end{subfigure} \caption{Comparison between the ground truth (SIMP optimized) 2D structures and our predictions of the optimal structures. Fig. (a) shows the prediction of our source network alone. Figs. (b-f) show the prediction of the optimal structures output by the fine tuned target model. The individual quality metrics for our predictions are presented in Table \ref{tab:2Daccuracies}, and the prediction time is shown in Table \ref{tab:predictiontime}.} \label{fig:2Dbeam-rect} \end{figure*} \begin{figure*}[ht!] 
\centering \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=\Columnwidth]{2D/Lshape/Lshape.png} \caption{} \label{fig:7a} \end{subfigure} \hspace{-1cm} \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=\Columnwidth]{2D/Lshapewithhole/Lshapewithhole.png} \caption{} \label{fig:7b} \end{subfigure} \caption{Predicted optimal structures versus ground truth (SIMP optimized) for high resolution domains: (a) an L-shaped domain of genus 0, and (b) L-shaped domain with a hole (genus 1). The individual quality metrics for our predictions are presented in Table \ref{tab:2Daccuracies}, and the prediction time is shown in Table \ref{tab:predictiontime}. Observe that the source network for the 2D examples has only been trained on the domains shown in Figure \ref{fig:2Dbeam-rect}, so the domains shown in this Figure are \textit{unseen} to the source network. } \label{fig:7} \end{figure*} \begin{figure*}[ht!] \centering \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=\Columnwidth]{2D/curved/curved.png} \caption{} \label{fig:9a} \end{subfigure} \hspace{-0.3cm} \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=\Columnwidth]{2D/frame/frame.png} \caption{} \label{fig:9b} \end{subfigure} \caption{Predicted optimal structures versus ground truth (SIMP optimized) for high resolution domains that have different geometry and topology. The individual quality metrics for our predictions are presented in Table \ref{tab:2Daccuracies}, and the prediction time is shown in Table \ref{tab:predictiontime}. Observe that the source network for the 2D examples has only been trained on the domains shown in Figure \ref{fig:2Dbeam-rect}, so the domains shown in this Figure are \textit{unseen} to the source network. } \label{fig:9} \end{figure*} The quality metrics for the individual examples have been compiled in Table \ref{tab:2Daccuracies}. 
Furthermore, we show in Table \ref{tab:predictiontime} the time required by our transfer learning-based method to make predictions for several 2D optimal topologies: the average prediction time is 0.017 seconds per design case versus 138.0 seconds for the SIMP solver. These quality metrics are very promising, particularly so for the resolutions considered in our experiments. Moreover, as the resolution increases, the level of detail that is picked up by our transfer learning-based predictor increases as well. Notably, the normalized prediction time increases at a much slower rate compared to that of the SIMP algorithm. \begin{table*}[h!] \centering \caption{2D structures: MSE, binary accuracies, and compliance error relative to SIMP.} \begin{tabular}{lcSSSSS} \toprule Design Domain & Resolution & {\shortstack{Number of \\ test cases}} & {MSE} & {\shortstack{Binary \\Accuracy}} & {\shortstack{Compliance \\ Error}} & {\shortstack{Compliance \\ Error Std.}}\\ \midrule from Fig. \ref{fig:2Dbeam-recta} (predicted by the source network) & 40 x 80 & 2000 & 2.14\% & 96.61\% & 2.1\% & 0.054 \\ from Fig. \ref{fig:2Dbeam-rectb} & 80 x 160 & 750 & 3.70\% & 94.50\% & 3.65\% & 0.055 \\ from Fig. \ref{fig:2Dbeam-rectc} & 120 x 160 & 500 & 3.45\% & 94.61\% & 4.81\% & 0.056 \\ from Fig. \ref{fig:2Dbeam-rectd} & 120 x 240 & 625 & 4.83\% & 94.59\% & 4.93\% & 0.062 \\ from Fig. \ref{fig:2Dbeam-recte} & 160 x 320 & 375 & 4.29\% & 94.46\% & 6.54\% & 0.080 \\ from Fig. \ref{fig:2Dbeam-rectf} & 200 x 400 & 375 & 4.35\% & 94.55\% & 9.57\% & 0.106 \\ Curved beam (Fig. \ref{fig:9a}) & 80 x 160 & 500 & 4.01\% & 94.44\% & 9.2\% & 0.088 \\ Curved beam (Fig. \ref{fig:9a}) & 120 x 240 & 750 & 4.13\% & 93.94\% & 11.3\% & 0.091 \\ Frame (Fig. \ref{fig:9b}) & 80 x 160 & 500 & 2.61\% & 95.53\% & 5.39\% & 0.070 \\ Frame (vol. fr. = 0.4) (Fig. \ref{fig:9b}) & 120 x 240 & 500 & 3.00\% & 94.80\% & 15.50\% & 0.182 \\ L shape (vol. fr. = 0.3) (Fig.
\ref{fig:7a}) & 120 x 240 & 500 & 2.50\% & 95.75\% & 13.20\% & 0.130 \\ L shape w/hole (vol. fr. = 0.3) (Fig. \ref{fig:7b}) & 120 x 240 & 500 & 2.60\% & 95.72\% & 20.00\% & 0.210 \\ \bottomrule \textit{Average} & & & 3.46\% & 94.95\% & 8.85\% & 0.098 \\ \bottomrule \end{tabular} \label{tab:2Daccuracies} \end{table*} \begin{table}[h!] \centering \caption{2D Structures: Comparison of computational time of our predictions vs. the SIMP algorithm.} \begin{tabular}{l S S} \toprule Resolution & {\shortstack{SIMP \\(sec. per case)}} & {\shortstack{Our method \\(sec. per case)}} \\ \midrule 80 x 160 (Fig. \ref{fig:2Dbeam-rectb}) & 24 & 0.0093 \\ 120 x 160 (Fig. \ref{fig:2Dbeam-rectc}) & 36 & 0.010 \\ 120 x 240 (Fig. \ref{fig:2Dbeam-rectd}) & 80 & 0.015 \\ 160 x 320 (Fig. \ref{fig:2Dbeam-recte}) & 200 & 0.022 \\ 200 x 400 (Fig. \ref{fig:2Dbeam-rectf}) & 350 & 0.030 \\ \bottomrule \textit{Average} & 138 & 0.017 \\ \bottomrule \end{tabular} \label{tab:predictiontime} \end{table} \begin{figure}[h!] \centering \includegraphics[width=\Columnwidth]{2D/cnn3.png} \caption{Comparison between our predictions and those of a traditional CNN-based algorithm trained on 1500 and 1650 high resolution data, respectively. Our source network has been trained with low resolution cases as well, and we took into account the time required to generate these low resolution cases when selecting the number of high resolution cases for the traditional CNN-based algorithm.} \label{fig:8} \end{figure} \subsubsection*{Comparison With Traditional Deep Learning Methods} In order to compare our transfer learning-based method with other published deep learning methods for topology optimization, we examined two criteria. Specifically, we compare our method in terms of: \begin{enumerate} \item time required to generate equivalent training data producing similar prediction performance; and \item prediction accuracy with the training data generated in the same amount of time. 
\end{enumerate} One key advantage of our method compared to traditional deep learning methods is that it requires a much smaller number of high-resolution cases to train the target network than to train an equivalent deep learning network. For example, we used 8,000 low resolution 2D cases and 1500 high resolution cases to obtain the high quality predictions shown in Figures \ref{fig:2Dbeam-rect}-\ref{fig:9}. Moreover, the source network has to be trained only once with the 8,000 low resolution 2D cases. On the other hand, training a deep learning network to produce a prediction of similar quality requires at least 8,000 high resolution cases for every design space and every type of boundary conditions. This is a significant time difference that becomes more severe as the resolution of the design space increases and as we move to 3D. For example, the SIMP algorithm \cite{andreassen2011efficient} requires 5 seconds to produce the optimal topology for every low resolution (40 x 80) case and 350 seconds for every high resolution case (200 x 400). Thus, training our source and target networks requires \textit{5.3 times} fewer high resolution cases than the equivalent CNN, and about $5$ times less computation time. To put this in perspective, by using this particular SIMP algorithm, generating equivalent training datasets for our method is approximately $620$ hours faster than for an equivalent CNN. This difference grows rapidly with the fidelity of the desired results. To be able to compare the prediction accuracies of transfer learning-based vs. deep learning-based methods for the same amount of time, we replicated our target network in terms of layers, loss function, optimizer and so on, and trained it as a normal deep learning network.
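The essential difference between the two setups lies in which weights are updated when training on the high resolution data: in the transfer learning setup, the layers inherited from the source network are frozen, and only the remaining layers are fine-tuned. A toy numpy sketch of this idea (the shapes and the two-layer linear model are ours, purely illustrative, and much simpler than the actual CNN):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the source/target split: a frozen feature layer taken from
# the source network and a small trainable head fine-tuned on high resolution
# data. The shapes and the model are illustrative only.
W_frozen = rng.normal(size=(8, 4))   # inherited from the source network, kept fixed
W_head = rng.normal(size=(4, 1))     # fine-tuned on the high resolution cases

def forward(x):
    return np.tanh(x @ W_frozen) @ W_head

def fine_tune_step(x, y, lr=0.01):
    """One gradient step that updates only the head; W_frozen is never touched."""
    global W_head
    h = np.tanh(x @ W_frozen)
    err = h @ W_head - y
    W_head = W_head - lr * (h.T @ err) / len(x)
```

Training the same architecture as a "normal" deep network corresponds to updating both weight matrices, which is what requires the much larger high resolution dataset.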
We determined the average total time needed to generate the training sets for our method for the example shown in Figure \ref{fig:2Dbeam-rectf}, and then generated as many high-resolution cases for the deep CNN as possible in the same amount of time. Finally, we trained the deep CNN with this dataset and compared the quality of the predictions of our method with those of the corresponding deep CNN, as illustrated in Figure \ref{fig:8}. This experiment clearly, although unsurprisingly, illustrates the significant superiority of our predictions compared to other deep learning methods based on data generated in the \textit{same} amount of time. Furthermore, the much smaller size of the training dataset required by our transfer learning network implies that the corresponding training time is also much smaller than that required by traditional deep learning networks, as illustrated in Table \ref{tab:2Dtrainingtime}. \begin{table*}[h!] \centering \caption{2D Structures: Comparison of the corresponding training time. The last two rows show the training time required by an equivalent Deep CNN.} \sisetup{group-separator={,},group-minimum-digits = 4} \begin{tabular}{ l S S S S} \toprule {Resolution} & {\shortstack{Number of \\ training cases}} & {\shortstack{Training time \\ (seconds, per epoch)}} & {\shortstack{Number of \\ epochs}} & {\shortstack{Training time \\ (minutes)}} \\ \midrule 40 x 80 (Fig. \ref{fig:2Dbeam-recta}) & 8000 & 62.87 & 29 & 30.3 \\ 80 x 160 (Fig. \ref{fig:2Dbeam-rectb}) & 1500 & 26 & 4 & 1.73 \\ 120 x 160 (Fig. \ref{fig:2Dbeam-rectc}) & 1500 & 27.19 & 5 & 2.26 \\ 120 x 240 (Fig. \ref{fig:2Dbeam-rectd}) & 1500 & 34.88 & 10 & 5.81 \\ 160 x 320 (Fig. \ref{fig:2Dbeam-recte}) & 1500 & 41.25 & 8 & 5.5 \\ 200 x 400 (Fig. \ref{fig:2Dbeam-rectf}) & 1500 & 58.68 & 9 & 8.80 \\ \bottomrule 200 x 400 (CNN) (Fig.
\ref{fig:8}) & 1650 & 67.43 &28 & 31.46 \\ 200 x 400 (CNN) & 8000 & 307.54 & 29 & 148.64 \\ \bottomrule \end{tabular} \label{tab:2Dtrainingtime} \end{table*} \subsection{3D structures} \label{3DStructures} \begin{figure*}[h!] \vspace{3pt} \begin{center} \begin{tabular}{cc} \includegraphics[width=\Columnwidth]{3D/408010/408010.png} \hspace{15pt} & \includegraphics[width=\Columnwidth]{3D/8016010/8016010.png} \hspace{15pt} \\ (a) & (b)\\ \end{tabular} \\ \begin{tabular}{cc} \includegraphics[width=1\Columnwidth]{3D/1hole/1hole.png} \hspace{15pt} & \includegraphics[width=1\Columnwidth]{3D/2hole/2hole.png} \hspace{15pt} \\ (c) & (d) \\ \end{tabular} \\ \end{center} \caption{The predicted and ground truth optimal structures for 3D design spaces, including their symmetric difference. Figures (a) and (b) show a simple parallelepipedic domain with two resolutions and figures (c) and (d) show a beam with one or two holes, respectively. The individual quality metrics for our predictions are presented in Table \ref{tab:3Daccuracies}, and the prediction time is shown in Table \ref{tab:3Dpredictiontime}.} \label{fig:3Dbeam} \vspace{3pt} \hrule \end{figure*} \begin{figure*}[h!] \centering \begin{subfigure}[t]{\textwidth} \centering \includegraphics[width=2\Columnwidth]{3D/differentBC.png} \caption{} \label{fig:cubea} \end{subfigure} \hspace{-0.3cm} \begin{subfigure}[t]{\textwidth} \centering \includegraphics[width=2\Columnwidth]{3D/differentBC2.png} \caption{} \label{fig:cubeb} \end{subfigure} \caption{Predicted versus ground truth structures for two different domains as well as the boundary conditions shown in Figure \ref{fig:3}(d). Both domains and boundary conditions are unseen to the source network. The individual quality metrics for our predictions are presented in Table \ref{tab:3Daccuracies}, and the prediction time is shown in Table \ref{tab:3Dpredictiontime}.} \label{fig:cube} \end{figure*} \begin{figure*}[h!] 
\centering \includegraphics[width=\textwidth]{3D/differentBC3.png} \caption{A domain and boundary conditions that were not in the training set for our source model, but it was included in the much smaller dataset used for fine-tuning the target model. The individual quality metrics for our predictions are presented in Table \ref{tab:3Daccuracies}, and the training time is shown in Table \ref{tab:3Dtrainingtime}.} \label{fig:cubesph} \end{figure*} We also applied our transfer-learning based method to 3D domains and we used a freely available Matlab Code \cite{liu2014efficient} to generate the ground truth results. For the 3D examples, we used 12,000 low resolution data ($20 \times 40 \times 10$) to train the source model, and 1500 high(er) resolution data for fine-tuning the target model. Solving the TO problem in 3D is notoriously time consuming. Thus, to generate the ground truth cases for the examples used in this section, we limited the number of iterations of the SIMP solver to 150. To generate the training datasets for all 3D examples with parallelepipedic domains (Figures \ref{fig:3Dbeam}), the boundary conditions were randomly chosen from one of the 3 cases shown in Figures \ref{fig:3}(a-c), and the location, orientation and magnitude have also been randomized. The cases shown in Figures \ref{fig:cube} and \ref{fig:cubesph} used randomized boundary conditions according to Figure\ref{fig:3}(d). We first show in Figures \ref{fig:3Dbeam}(a) \& (b) the comparison between our predicted and the ground truth (SIMP) 3D optimal structures for two beams obtained for two different resolutions of the design space, namely $40 \times 80 \times 10$ and $80 \times 160 \times 10$. Moreover, Figures \ref{fig:3Dbeam}(c) \& (d) show the same comparison for beams that have different topologies for a $40 \times 80 \times 10$ resolution. 
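The symmetric difference included in Figure \ref{fig:3Dbeam} can be computed as the voxelwise XOR of the two binarised structures. A minimal sketch (assuming a 0.5 density threshold; the function name is ours):

```python
import numpy as np

def symmetric_difference(pred, truth, threshold=0.5):
    """Voxels present in exactly one of the two binarised structures.

    pred, truth : 3D arrays of voxel densities in [0, 1].
    Returns the boolean mask and the number of differing voxels.
    """
    diff = np.logical_xor(pred >= threshold, truth >= threshold)
    return diff, int(diff.sum())
```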
In order to illustrate the difference between the two solutions more clearly, we also provide the symmetric difference between the ground truth and the predicted result, i.e., the voxels that are in either one but not the other structure. Importantly, our method can provide impressive performance even for cases for which the source network has not been specifically trained. As an example, consider the design space illustrated in Figure \ref{fig:cubesph}, which was not part of the training set for our source model, but was included in the much smaller dataset used to fine-tune the target model. The predicted optimal structures for this new design problem, which are summarized in Figures \ref{fig:cube} \& \ref{fig:cubesph}, are still highly accurate. Table \ref{tab:3Daccuracies} shows the average MSE, binary accuracy and compliance error for the 3D cases described above. Note that the average MSE and binary accuracy are also around 3\% and 95\% for these 3D cases. Furthermore, we summarize in Table \ref{tab:3Dpredictiontime} the time required by our method to predict the 3D optimal structures, and compare these times with those required by the SIMP method to reach 150 iterations for the same 3D problems. Our method is consistently multiple orders of magnitude faster than the SIMP method, and achieves real-time rates even for our preliminary and non-optimized implementation. \begin{table*}[ht!] \centering \caption{3D structures: MSE, Binary Accuracy and Compliance Error relative to SIMP.} \begin{tabular}{lcSSSS} \toprule Design Domain & Resolution & {\shortstack{Number of \\ test cases}} & {MSE} & {\shortstack{Binary \\Accuracy}} & {\shortstack{Compliance \\ Error}}\\ \midrule Domain & 20 x 40 x 10 & 1000 & 2.04\% & 95.62\% & 1.56\% \\ Domain (Fig. \ref{fig:3Dbeam}a) & 40 x 80 x 10 & 300 & 3.14\% & 94.31\% & 2.43\% \\ Domain (Fig. \ref{fig:3Dbeam}b) & 80 x 160 x 10 & 100 & 3.1\% & 93.9\% & 10.1\% \\ With hole (Fig.
\ref{fig:3Dbeam}c) & 40 x 80 x 10 & 150 & 3.45\% & 94.00\% & 2.05\% \\ With 2 hole (Fig. \ref{fig:3Dbeam}d) & 40 x 80 x 10 & 150 & 3.52\% & 94.31\% & 2.85\% \\ Cube (Fig. \ref{fig:cubea}) & 40 x 40 x 40 & 175 & 3.28\% & 93.29\% & 9.9\% \\ Cube with holes (Fig. \ref{fig:cubeb}) & 40 x 40 x 40 & 180 & 3.51\% & 93.11\% & 7.5\% \\ Dome with holes (Fig. \ref{fig:cubesph}) & 40 x 40 x 40 & 200 & 2.41\% & 95.71\% & 0.38\% \\ \bottomrule \textit{Average} & & & 3.05\% & 94.28\% & 4.60\% \\ \bottomrule \end{tabular} \label{tab:3Daccuracies} \end{table*} \begin{table}[ht!] \centering \caption{3D structures: comparison of prediction time vs. SIMP algorithm.} \sisetup{group-separator={,},group-minimum-digits = 4} \begin{tabular}{lSS} \toprule Resolution & {\shortstack{SIMP \\(sec. per case)}} & {\shortstack{Our method \\(sec. per case)}} \\ \midrule 20 x 40 x 10 & 300 & 0.015 \\ 40 x 80 x 10 (Fig. \ref{fig:3Dbeam}a) & 4500 & 0.031 \\ 80 x 160 x 10 (Fig. \ref{fig:3Dbeam}b) & 7500 & 0.04 \\ 40 x 40 x 40 (Fig. \ref{fig:cube}) & 5550 & 0.033 \\ \bottomrule \textit{Average} & 4462.5 & 0.029 \\ \bottomrule \end{tabular} \label{tab:3Dpredictiontime} \end{table} \begin{table*}[ht!] \centering \caption{3D structures: Training time} \sisetup{group-separator={,},group-minimum-digits = 4} \begin{tabular}{lSSSS} \hline {Resolution} & {\shortstack{Number of \\ training cases}} & {\shortstack{Training time \\ (seconds, per epoch)}} & {\shortstack{Number of \\ epochs}} & {\shortstack{Training time \\ (minutes)}} \\ \hline 20 x 40 x 10 & 12000 & 1101.87 & 20 & 367.25 \\ 40 x 80 x 10 (Fig. \ref{fig:3Dbeam}a) & 1500 & 174.56 & 5 & 14.54 \\ 80 x 160 x 10 (Fig. \ref{fig:3Dbeam}b) & 1500 & 290 & 15 & 72.5 \\ 40 x 40 x 40 (Cube) (Fig. \ref{fig:cubea}) & 1900 & 252 & 20 & 84 \\ 40 x 40 x 40 (Cube with holes) (Fig. \ref{fig:cubeb}) & 1700 & 230 & 30 & 115 \\ 40 x 40 x 40 (Dome with 2 holes) (Fig. 
\ref{fig:cubesph}) & 1500 & 202 & 35 & 117.8 \\ \hline \end{tabular} \label{tab:3Dtrainingtime} \end{table*} As in the 2D case described above, generating a comparable training dataset requires much less time for our method than for an equivalent deep CNN. By extrapolating from our 3D experiments, generating the training dataset for our method takes at least 5 times less time than generating the comparable dataset for the equivalent deep CNN, and the difference increases with the resolution. For example, for the case shown in Figure \ref{fig:cubesph}, by using the algorithm described in \cite{liu2014efficient} run in parallel on 100 processors, for a $40 \times 40 \times 40$ resolution, and with the optimization stopped early at the $150^{th}$ iteration for each case, the dataset generation for our method requires approximately 152 hours (6.3 days) less than the time required to generate an equivalent dataset for the deep CNN, even with such an aggressive parallelization. At higher resolutions, and assuming that one can use a highly parallelized algorithm, such as the GPU implementation described in \cite{wu2015system} that computes an optimal topology for a cantilever beam at a $200 \times 100 \times 100$ resolution in 144 seconds, our transfer learning-based method would need at least 388 hours (16.2 days!) less time than what is required to generate the equivalent training set for the deep CNN. \subsection{Generalizability of Network Predictions} For some of the examples shown in this paper, the solution manifolds of our low-resolution and high-resolution domains can already be considered dissimilar. Specifically, our source network is trained on rectangular/parallelepipedic low-resolution domains of genus 0, but the target network is fine-tuned on domains whose geometry, topology and boundary conditions are unseen to the source network.
Nevertheless, to further confirm the generalization performance of our method, we also considered the types of problems from \cite{cang2019one}, which use local density constraints for the topology optimization. It is important to note that we used here the same source network trained on the dataset as explained above, but only fine-tuned the target network on the data set used in \cite{cang2019one}. The results are shown in Figure \ref{fig:generalizability}(a). Note that the predictions of our network are visually very similar to the ground truth. Moreover, the work presented in \cite{cang2019one} used 7000 training cases, while our target network only required 1500 cases for fine-tuning. The fact that we used the same source network for this new, more complex TO problem strongly affirms that knowledge is being transferred between our source and target networks. Moreover, we have also explored the heat sink design problem, which results in a tree-like optimal structure with very thin branches. The ground truth solution in this case has genus 0, so this can be considered more of a size rather than a topology optimization problem. \begin{figure*}[ht!] \centering \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=\Columnwidth]{2D/local-density.png} \caption{} \end{subfigure} \hspace{-0.3cm} \begin{subfigure}[t]{.5\textwidth} \centering \includegraphics[width=\Columnwidth]{2D/heat-sink.png} \caption{} \end{subfigure} \caption{Predicted optimal structures versus ground truth (SIMP optimized) for problems with local density constraints \cite{cang2019one} and heat sink design. 
The fact that we used the same source network for these new, more complex TO problems strongly affirms that knowledge is being transferred between our source and target networks.} \label{fig:generalizability} \end{figure*} We adapted our network to the heat sink design problem to further illustrate the capabilities of our method and the knowledge transfer between our source and target networks. Figure \ref{fig:generalizability}(b) shows the network performance for some of these test cases, including the predicted optimal solutions before and after the application of the density threshold, i.e., integer rounding for the density values. While capturing the very thin members of the optimal structure remains a limitation, which is discussed in the next section, our results indicate that knowledge is transferred between the source and the target network, as intended, even for the case of heat sink design. It is also important to note that in all the examples presented in the paper we used the same source network trained only once, as described above. \subsection{Current Limitations in Capturing Thin Members} Our transfer learning network has not been designed to achieve high prediction performance for very thin members. We hypothesize that there are several reasons for this behavior. First, our current network is not sufficiently complex, mainly due to GPU limitations imposed by our hardware on the size of the layer output during training. A more complex network would likely improve prediction performance for the thin members. On the other hand, a more complex network would need more training data, which is expensive to generate with the SIMP algorithms that we used. Second, the thin members that we predict with a somewhat lower accuracy have a width of 1-2 pixels/voxels, which is at or below 1\% of the largest dimension of the domain.
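The effect of the density threshold on thin members can be illustrated with a small example (the density values below are hypothetical): an element survives only if its predicted density is at least 0.5, so a thin member predicted just below the threshold disappears entirely.

```python
import numpy as np

# Hypothetical predicted density field: a solid member, a thin member whose
# densities are predicted just below 0.5, and void.
density = np.array([
    [0.95, 0.92, 0.90],   # solid member: survives the threshold
    [0.45, 0.48, 0.44],   # thin member: eliminated by integer rounding
    [0.02, 0.01, 0.03],   # void
])

# Integer rounding of the densities, as applied in all our examples.
binary = (density >= 0.5).astype(int)
```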
Increasing the resolution relative to the width of the members would also improve prediction performance for the thin members. This, in turn, would require higher resolution training data, whose computational cost increases exponentially with the resolution, as well as a more complex network architecture. Finally, we applied in all our examples a simple integer rounding as a threshold for the densities of the individual elements. This, in turn, eliminated from the solutions that we report all pixels/voxels whose density was below 0.5, including those that belonged to the thin members. However, our network predicts a much more nuanced density field, as shown in the last row of Figure \ref{fig:generalizability}. \subsection{Transfer Learning in Conjunction with SIMP} One way to take full advantage of the information output by our network without increasing the complexity of the network architecture and the data resolution is to couple our transfer learning-based predictions to SIMP to obtain a more accurate definition of the thin members. Table \ref{tab:TO+TL} shows the time required for the SIMP algorithm to generate the optimal structure by using as input the prediction output by our network. Specifically, for a 200 x 400 resolution domain, the SIMP algorithm that we used calculates such an optimum structure in 8 seconds (with an average of 4.5 seconds) compared to 350 seconds needed by a normal SIMP optimization starting with the full design domain. \begin{table}[ht!] \centering \caption{Average time required by SIMP to generate the optimal solution starting from our predicted structure.} \begin{tabular}{l S} \toprule Resolution & {\shortstack{Time \\(sec. per case)}}\\ \midrule 80 x 160 (Fig. \ref{fig:2Dbeam-rectb}) & 1 \\ 120 x 160 (Fig. \ref{fig:2Dbeam-rectc}) & 3.5 \\ 120 x 240 (Fig. \ref{fig:2Dbeam-rectd}) & 4 \\ 160 x 320 (Fig. \ref{fig:2Dbeam-recte}) & 6 \\ 200 x 400 (Fig. 
\ref{fig:2Dbeam-rectf}) & 8 \\ \bottomrule \textit{Average} & 4.5 \\ \bottomrule \end{tabular} \label{tab:TO+TL} \end{table} \section{Conclusions} \label{conclusions:sec} We proposed in this paper a highly efficient and accurate non-iterative topology optimization method that uses transfer learning on a convolutional neural network architecture. Our method uses low resolution datasets to train a source network and a much smaller high resolution dataset to fine-tune a target network. The knowledge captured by the source network, which is trained only once, is transferred to the target network, so that the latter requires a much smaller number of training cases than an equivalent deep CNN to make predictions with the same level of accuracy. We provided numerous examples to show that the proposed method produces predictions of the optimal 3D topologies at real-time rates for non-trivial 3D high resolution TO problems. Furthermore, we showed that the proposed method can produce accurate predictions efficiently for various design spaces, boundary conditions, and volume fractions, including for cases that have not been part of the source network's training set. Our experiments achieved an average binary accuracy and MSE around 95\% and 3\%, respectively, at real-time rates in both 2D and 3D. Like any other data-driven method, our approach inherits any existing data biases in the datasets. In our experiments, we reduced the biases by randomizing the input used to generate the ground truth structures. Moreover, the capability of this method to explore regions of the design space for which the algorithm has not been trained has the same limitations as most other transfer learning algorithms. In addition, when the source task and the target task are not similar enough, negative transfer may occur and the algorithm performance may fail to improve \cite{torrey2010transfer} without additional information. This, however, is not unlike what happens in real life.
Consider one of the traditional examples used to explain transfer learning, namely that of learning how to ride a bicycle. These skills can clearly be transferred by bicycle riders to learning how to ride other two-wheeled devices, such as motorcycles or scooters. However, the same bicycle riding skills cannot be easily employed to ride, for example, unicycles, as anyone who has tried to ride a unicycle can attest. Perhaps the key bottleneck of any data-driven TO method is the computational cost to generate suitable training data, which is computationally demanding for gradient-based optimization algorithms, particularly so in 3D. On one hand, our experiments show that the proposed transfer learning-based method requires much less time than an equivalent deep CNN to generate the training dataset needed to reach the same accuracy. On the other hand, employing more efficient gradient-based approaches to generate ground truth optimal structures is needed to be able to perform careful studies of how to best train the proposed transfer learning-based method and to better understand its generalization capabilities, scalability and limitations. Fortunately, recent advances in software and hardware architectures, such as the recently announced optimized physics libraries from AMD and NVIDIA, which promise to include FEA capabilities, come at the right time and with the potential to dramatically speed up the data generation for our purposes. Generalizability is a critical aspect of any machine learning-based method. On one hand, we discussed in Section \ref{results} the capability of the proposed transfer learning method to transfer knowledge between the source and the target network, and we illustrated the generalizability of its predictions.
However, a more complete treatment of this difficult question, which has not been addressed yet in the literature, would likely involve constructions of local subspaces that approximate the solution manifolds corresponding to the low and high resolution problems, and definitions of new metrics that would measure the ``distance'' between the corresponding approximations of the solution manifolds. Very likely, an effective distance would require the projection of these approximations of the solution manifolds onto some common subspace, but there are a number of important open problems that need to be solved first. Perhaps the current efforts in the mathematical optimization community focused on Reduced Basis Methods could provide some insight into these issues. It is also possible that modifying the type of input to replace or augment the explicit boundary conditions by one or more physical fields, such as displacement and stress/strain, would further improve the generalization capabilities of the proposed method. However, such a study is outside the scope of this paper. Nevertheless, the proposed approach shows that transfer learning can serve as a practical underlying framework for performing real-time 3D design space explorations with topology optimization. To the best of our knowledge, this paper documents the first attempt to use transfer learning for topology optimization and provides exciting and important directions for future research. \section*{Acknowledgement} This work was supported in part by the National Science Foundation grants CMMI-1462759, IIS-1526249, and CMMI-1635103.
\section{Context-Responsive Labeling Management} \label{sec:method} Our approach first positions labels in AR space and then performs a context-responsive computation that removes occlusions, applies level-of-detail strategies, and enforces coherent label placement. In this section, we detail the proposed technique. \subsection{Positioning Labels in AR} \label{ssec::arpos} In \textcolor{black}{a} preprocessing step, \textcolor{black}{we map} the geographical locations from the real world to our Cartesian AR world space. \textcolor{black}{This considers} the GPS position of the user's device, the GPS location of the POIs, and the compass orientation of the device~\cite{gpsPositioning}. The labels are oriented \textcolor{black}{towards} the user's position by aligning the normal vectors of the labels with the AR device in the AR world space. Once this initial label positioning is done, a perspective projection from the AR world space into the screen space of the device is performed. \textcolor{black}{In doing so, we can position the labels in AR spatially relative to the position of the user to support exploration and navigation as shown by Guarese~et~al.~\cite{guarese}.} In principle, existing frameworks, like the \emph{AR + GPS Location SDK} package~\cite{gpslocation} or the \emph{Wikitude AR SDK} package~\cite{wikitude}, can be used to map real-world objects to the AR world space. Unfortunately, in our experiments, these techniques are not stable \textcolor{black}{due to} the inaccurate GPS sensor~\cite{lowCost} or compass~\cite{kuhlmann} \textcolor{black}{data} of mobile devices. To test and assess the quality of the coherence strategies for the \emph{occlusion management} and \emph{level-of-detail management}, \textcolor{black}{we predefine the positions of labels} at the $(x, z)$-coordinates in the AR world space.
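The mapping from GPS coordinates to the $(x, z)$-coordinates of the AR world space can be sketched with a local equirectangular approximation (a minimal sketch; the function name, the rotation convention, and the omission of any sensor filtering are our simplifications):

```python
import math

def gps_to_ar_xz(user_lat, user_lon, poi_lat, poi_lon, heading_deg):
    """Map a POI's GPS position to (x, z) in a user-centred AR world space.

    heading_deg is the compass orientation of the device (0 = facing north).
    Uses a local equirectangular approximation, valid for nearby POIs.
    """
    R = 6371000.0  # mean Earth radius in metres
    d_lat = math.radians(poi_lat - user_lat)
    d_lon = math.radians(poi_lon - user_lon)
    # Metric offsets of the POI east and north of the user.
    east = R * d_lon * math.cos(math.radians(user_lat))
    north = R * d_lat
    # Rotate into the device frame so that +z points along the view direction.
    h = math.radians(heading_deg)
    x = east * math.cos(h) - north * math.sin(h)
    z = east * math.sin(h) + north * math.cos(h)
    return x, z
```

Labels can then be ordered by their distance $\sqrt{x^2 + z^2}$ to the user.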
The existing libraries do not provide stable label positions, which would introduce incoherent label behavior that does not originate from the proposed \emph{coherence management}. Once the labels are placed, we order them based on their distance to the user for subsequent computations. \section{Conclusion and Future Work} \label{sec:conclude} We present a context-responsive labeling framework in Augmented Reality, which allows us to introduce rich-content labels associated with POIs. The label management strategy suppresses label occlusions and incoherent label movements caused by translations and rotations of the device during user interaction. \textcolor{black}{The framework} presents an alternative approach for spatial data navigation. The \emph{level-of-detail management} takes the position of the user and label density in the view volume into account. The computed levels-of-detail for each label avoid excessive vertical stacking of labels, while still retaining basic information, which depends on the object distance. To further reduce visual clutter, we introduce the concept of super labels, which group a set of labels. Smooth transitions have been implemented in our \emph{coherence management} to avoid flickering and enable stable label movement. The evaluation shows the applicability of the proposed approach. As a future direction, we will investigate techniques to overcome the drawback of seeing only the labels that are in the current view \textcolor{black}{volume.} The user should still anticipate POIs outside the view volume and retain \textcolor{black}{a} global overview of the annotated scene as with 2D maps. One possibility would be including the technique presented by Lin~et~al.~\cite{kaipaper} to depict labels that are currently outside the view volume \textcolor{black}{and place hints at the display border} of the device.
Considering the positioning accuracy, it would be interesting to include so-called \emph{Dual-Frequency GPS} \cite{dualfrequency} or \emph{Continuous Operating Reference Stations} (CORS) \cite{cors} as investigated by related work to improve the sensor accuracy of mobile devices \cite{kuhlmann}. \textcolor{black}{A selection scheme with the integration of service providers (e.g., OpenStreetMap or Google Maps with large POI data) could improve the system usability.} \section{Qualitative Evaluation} \label{sec:evaluate} We conducted an online survey to evaluate the effectiveness \textcolor{black}{and the applicability of our approach. Primarily, we aim to confirm the appropriateness of the selected design principles. The evaluation is based on users' preferences and examines task performance in terms of \textcolor{black}{required} time and result accuracy. The hypotheses of our study are summarized as follows: \begin{itemize} \item[\textbf{(H1)}] The design principle of removing label occlusions has a higher priority than showing precise positions of labels. \item[\textbf{(H2)}] Rich label design in AR leads to a better POI exploration and decision-making experience in contrast to plain text labels. \item[\textbf{(H3)}] Users can perform faster route planning tasks using our system \textcolor{black}{compared to} conventional maps. \end{itemize} } \textcolor{black}{We further decompose our hypotheses into four main tasks as summarized in Table~\ref{tab:tasks} \textcolor{black}{for} an online questionnaire. In the future, we plan to do an in-person user study as a primary next step. For each measurable task, time and accuracy were collected. After each task, we also asked participants to provide reasons regarding their experience when performing the task. At the end of the entire questionnaire, we requested general feedback and collected some personal information for further analysis (e.g., age, educational background, experience with AR devices, and so forth).
Privacy agreements were obtained prior to the user study, and the collected data is carefully stored without identifying the participants. In total, we recruited \textcolor{black}{$30$} participants, who are experienced with visualization techniques and are graduate students of visual computing. The age of the participants ranges from 24 to 64 years, with the majority being in their late twenties or early thirties. One limitation of the user study is the limited access to a general audience, although experience in visual computing helps the participants to answer the questions smoothly. We performed a within-subjects study design, where we \textcolor{black}{tested} all variable conditions for a participant in order to analyze individual behaviors in more depth. Questions in each task are also randomized to avoid a learning effect. For more details, \textcolor{black}{we refer} to the accompanying supplementary materials. } \begin{table}[ht!] \centering \scriptsize \begin{tabular}{|c|l|} \hline Tasks & \textcolor{black}{Goal of the investigation and question samples} \\ \hline \hline Task 1 & \textcolor{black}{Impact of occlusion on attribute tasks and comparative tasks} \\ & \textcolor{black}{Q1: What is the waiting time of an attraction?} \\ & \textcolor{black}{Q2: Which attraction has the minimal waiting time?} \\ \hline Task 2 & \textcolor{black}{Effectiveness} of levels-of-detail \\ & \textcolor{black}{Q3: Which themed area has the minimal waiting} \\ & \textcolor{black}{\hspace{5mm}time?
(with LOD variations)} \\ & \textcolor{black}{Q4: Which LOD do you prefer?} \\ \hline Task 3 & Effectiveness of 2D maps and our AR encoding \\ & \textcolor{black}{Q5: Choose the attraction with the minimal} \\ & \textcolor{black}{\hspace{5mm} waiting time in the specified themed area} \\ \hline Task 4 & Combinatorial features in our system \\ & \textcolor{black}{Q6: Provide your feedback on different configuration settings} \\ \hline \end{tabular} \caption{Overview of the tasks in the user study.} \label{tab:tasks} \end{table} \textbf{(H1)} \textcolor{black}{addresses the importance of resolving label occlusions in AR.} As described in Section~\ref{sec:related}, existing work confirms the importance of resolving occlusions in AR to support the decision-making process by the users~\cite{grassetimage}. \textcolor{black}{In Task~1, we show participants a few snapshots (see supplementary materials) of our system, and ask the participants to} determine the waiting time of the specified attraction (Q1) and select the attractions with minimal waiting times (Q2). Only three participants managed to select the correct waiting times when occlusions occurred, \textcolor{black}{and the participants} stated that the waiting times were not recognizable \textcolor{black}{in such a situation. Figure~\ref{fig:q1-3} \textcolor{black}{summarizes} task completion time and accuracy.
The time needed to answer the questions could be decreased (Q1 from $33.26~s$ to $12.6~s$, Q2 from $21.39~s$ to $13.7~s$) and the number of correct answers could be increased (Q1 from $10~\%$ to $86.67~\%$, Q2 from $3.33~\%$ to $100~\%$) when showing results with our \emph{occlusion management} (Figure~\ref{fig:q1-3}).} \textcolor{black}{$24$} participants explicitly stated that it was difficult or impossible to select the correct answers \textcolor{black}{if} information was occluded, and \textcolor{black}{$24$} participants agreed that the occlusion-free positioning eases decision-making processes when investigating the labels. \textcolor{black}{For hypothesis \textbf{(H2)}, we design questions in Task~2, where participants need to take several attributes} into account to answer the questions. \textcolor{black}{In Q3, the participants were} asked to select a themed area of the amusement park with the lowest average waiting time. \textcolor{black}{We showed participants images with labels of different LOD settings, including text labels, labels with the lowest LOD, and super labels.} The time needed to answer questions \textcolor{black}{for} this task is summarized in Figure~\ref{fig:q1-3}(a). The participants, in general, spent more time if only text labels are present \textcolor{black}{($52.05~s$ on average), since they presumably needed} to calculate the correct numbers to answer the questions properly. If we present information using the lowest LOD, a shorter time \textcolor{black}{($23.52~s$ on average)} was required in comparison to pure text labels. \textcolor{black}{Using super labels achieved a similar performance; participants spent} \textcolor{black}{$21.61~s$} to answer the questions. If the waiting time is depicted using text labels or labels in the lowest LOD, the themed area with the minimal average waiting times was correctly selected by \textcolor{black}{$73.33~\%$} of the participants.
\textcolor{black}{$90~\%$} of the participants selected the correct answers if the super labels were shown (Figure~\ref{fig:q1-3}(b)). In Q4, we asked participants which LOD they prefer. We presented text labels, labels in one of the three LODs, and labels in dynamic LODs as computed by our \emph{level-of-detail management}. The dynamic LODs were chosen as the favorite approach by \textcolor{black}{$40~\%$} of the participants. \textcolor{black}{$53.33~\%$} of the participants preferred the \textcolor{black}{highest} LOD. Participants who selected the dynamic LODs as their favorite design emphasized that the vertical stacking of labels is reduced while detailed information about close attractions is preserved. The participants who chose the \textcolor{black}{highest LOD} as their favorite design appreciated the detailed information that can be used \textcolor{black}{in} decision making. \textcolor{black}{It is surprising} that they were not disturbed \textcolor{black}{or annoyed} by the excessive vertical stacking of the labels. The dynamic LODs avoid this excessive vertical stacking while presenting more information about close labels and less information about far labels. \textcolor{black}{To check vertical stacking, Figure~\ref{fig:q5}(a)} \textcolor{black}{compares} the \textcolor{black}{highest} LOD and dynamic LODs. The more information is included for \textcolor{black}{a label, the higher the chance that the label needs to be shifted upwards and stacked.} We recorded the $y$-coordinate from the highest label of the two methods as a representative value for each themed area. \textcolor{black}{The height of the stacked labels} can be effectively reduced \textcolor{black}{when} using the dynamic LODs. For hypothesis \textbf{(H3)}, we \textcolor{black}{aim to} compare the decision-making effectiveness when using 2D \textcolor{black}{paper maps} or our AR encoding in Task~3.
We again \textcolor{black}{measured} the task completion time and accuracy when using a Tokyo Disneyland map versus our visualization. \textcolor{black}{As a preprocessing step, we first removed other POIs (e.g., shops or restaurants) and kept the $35$ major attractions of the official 2D map of the amusement park, to increase the fairness of the comparison. More details about the task are included in the supplementary material.} \textcolor{black}{$60~\%$ of the participants correctly selected the attractions with minimal waiting times in a themed area when using the 2D map and $83.33~\%$ when the AR encoding was employed (Figure~\ref{fig:q5}(b)). The average time that the participants needed to select an attraction using the 2D map was $58.79~s$, while they spent $32.18~s$ on average when using our approach, which clearly shows a reduced effort for POI selection (Figure~\ref{fig:q5}(c))}. \begin{figure}[th!] \centering \setlength{\tabcolsep}{2pt} \begin{tabular}{c|c} \includegraphics[width=0.49\linewidth]{images/evaluation/New/Q13Time.pdf} & \includegraphics[width=0.49\linewidth]{images/evaluation/New/Q13Accuracy.pdf} \\ \textcolor{black}{(a) Time} & \textcolor{black}{(b) Accuracy} \\ \end{tabular} \textcolor{black}{\caption{(a) Task completion times (in seconds) and (b) accuracy of Q1 to Q3. The error bars represent the standard errors.} \label{fig:q1-3} } \vspace{1mm} \centering{ \begin{tabular}{c|c|c} \includegraphics[width=0.49\linewidth]{images/evaluation/LabelStacking.pdf} & \includegraphics[width=0.3\linewidth]{images/evaluation/New/Q52DARAcc.pdf} & \includegraphics[width=0.16\linewidth]{images/evaluation/New/Q52DARTime.pdf} \\ \textcolor{black}{(a) Height of labels} & \textcolor{black}{(b) Accuracy} & \textcolor{black}{(c) Time} \\ \end{tabular} } \textcolor{black}{\caption{(a) Combined height of the stacked labels. (b) Accuracy and (c) task completion times (in seconds) of Q5.
The error bars show the standard errors.} \label{fig:q5} } \end{figure} \textcolor{black}{In the feedback session, participants were allowed to freely comment on the \textcolor{black}{presented} approach. Videos were shown highlighting the dynamic behavior of our tool when the user interacts with the system.} Two participants mentioned that they prefer 2D maps over AR since 2D maps \textcolor{black}{give a global top view. However, they performed the tasks in the user study better with the AR setting}. \textcolor{black}{We believe that both 2D maps and AR systems} have strengths and weaknesses \textcolor{black}{depending on the tasks and use cases. Our study indicates that, for navigation purposes, AR systems can be more practical.} Two participants \textcolor{black}{also suggested combining 2D maps with AR systems as done by Veas et al.~\cite{Veas:2012:TVCG}. This could allow us to exploit the advantages of both approaches and achieve a result similar to \emph{Google Maps} and \emph{Google Street View}.} \textcolor{black}{Other} participants would prefer super labels combined with the \textcolor{black}{highest} LOD. \textcolor{black}{This could} reduce visual clutter\textcolor{black}{, but might lead to a higher} vertical stacking of labels compared to dynamic LODs. \textcolor{black}{We, therefore, allow users to adjust the thresholds for switching LODs, to accommodate this preference.} The occlusion handling and the smooth transitions \textcolor{black}{were positively} mentioned by participants in the general feedback. Examples include: \emph{"I really like the occlusion management, to my eyes, it's almost seamless."} or, \emph{"Active occlusion handling is much superior to no occlusion handling."}. \textcolor{black}{Super label aggregation has been another popular and specifically mentioned feature}.
Participants appreciated the overview of the themed areas, giving feedback such as, \emph{"I like the super label transitions if there are many attractions because it gives a good overview of an area."}, \textcolor{black}{and} \textit{"I like the super label transitions the most."}. \textcolor{black}{Overall, all participants expressed interest in using our system for navigation purposes.} \section{Introduction} \label{sec:intro} We schedule and \textcolor{black}{plan routes} irregularly in our everyday life. For example, we visit offices, go to restaurants, or see doctors, in order to accomplish necessary tasks. In some cases, such as visiting medical doctors or popular restaurants, one has to wait in a queue until being able to \textcolor{black}{proceed}. \textcolor{black}{This is time-inefficient and most people try to avoid it.} Normally, if a person needs to decide the next place to visit, he or she can extract knowledge about the targets of interest. Then a decision is made based on the corresponding experience or by referring to locations on a map. 2D maps are one of the most popular methods that describe the geospatial information \textcolor{black}{of objects, to give an overview of the object positions in a certain area. \textcolor{black}{With} a 2D map for navigation, users need to remap or translate the objects on the map to the real environment, to} understand the relationships and distances to these objects~\cite{guarese}. This inevitably \textcolor{black}{strains} our cognition. It is also the reason why some people cannot quickly \textcolor{black}{locate} themselves on \textcolor{black}{a} 2D map or find the correct direction or orientation immediately. \textcolor{black}{\emph{Augmented Reality (AR)} and \emph{Mixed Reality (MR)} have} been proposed to overlay information directly \textcolor{black}{on} the real-world environment with a lower complexity by instructing users \textcolor{black}{in an effective way~\cite{McMahon:2015:JSET, Ens:2019:JHCS}.
In this paper, we use AR \textcolor{black}{as our technique of choice} for the explanation. Displaying texts or images in \textcolor{black}{AR or MR} allows us to acquire information encoded with geotagged data and stored in GISs. It is also known that using AR for guiding users in exploring the real environment can be more effective in comparison to a 2D representation~\cite{Devaux:2018:IV}.} \textcolor{black}{In mixed environments}, \emph{points of interest (POIs)} are often associated with text labels~\cite{hedgehog, imagebased, nextgen} in order to present additional information (e.g., name, category, etc.). For example, an Augmented Reality Browser (ARB) enables us to embed and show relevant data in a real-world environment. Technically, POIs are registered at certain geographical positions via GPS coordinates. Based on the current position and the viewing angle of the device, the POIs are annotated and the corresponding labels are then projected to the screen of the user's device. Naive labeling strategies \textcolor{black}{can} lead to occlusion problems between objects, especially in an environment with a dense arrangement of POIs. Additionally, properly selecting the level of detail at which a label presents information can help to avoid overcrowded \textcolor{black}{situations}. Moreover, retaining the consistency between \textcolor{black}{successive} frames also enables us to maintain a good user experience and to avoid motion sickness. Based on the aforementioned findings, we summarize that a good AR labeling framework should address: \begin{itemize} \item[\textbf{(P1)}] \textbf{The occlusion of densely placed labels in AR space.} Occlusion removal has been considered a primary design criterion \textcolor{black}{in visualization approaches. It reflects user preferences and also allows the system} to present information explicitly~\cite{Wu:2013:EuroVis}.
\item[\textbf{(P2)}] \textbf{\textcolor{black}{Limited} information provided by plain text.} As summarized by Langlotz~et~al.~\cite{nextgen}, labels in AR often \textcolor{black}{contain} plain text rather than richer content, such as figures or hybrids of texts and figures. \item[\textbf{(P3)}] \textbf{Label incoherence due to the \textcolor{black}{movement} of mobile devices.} During the interaction with \textcolor{black}{an AR system}, the user may frequently change positions or viewing angles. This leads to unwanted flickering that impacts \textcolor{black}{information consistency}~\cite{imagebased}. \end{itemize} In this paper, we develop a context-responsive framework to optimize label placement in AR. \textcolor{black}{By \emph{context-responsive}, we refer to taking contextual attributes, such as GPS positions, mobile orientations, etc., into account. The system responds to the user with an appropriate positioning of labels. The} approach contains three major components: (1) \emph{occlusion management}, (2) \emph{level-of-detail management}, and (3) \emph{coherence management}, which are essential for the approach to be context-responsive. \textcolor{black}{The} \emph{occlusion management} eliminates overlapping labels by adjusting the positions of occluded labels with a greedy approach to achieve fast performance. Then, a levels-of-detail scheme is introduced to select the \textcolor{black}{appropriate} level in \textcolor{black}{a} hierarchy and present it based on how densely \textcolor{black}{packed} the labels are in the view volume of the user. We construct a 3D scene to manipulate and control the movement of labels, \textcolor{black}{enhancing} the user experience. We introduce a novel approach to manage label placement tailored to AR. It enables an interactive environment \textcolor{black}{with} continuous changes of device positions and orientations.
A survey by Preim~et~al.~\cite{preim1} concluded that existing labeling techniques often \textcolor{black}{resolve} overlapping labels once the camera stops moving or the camera position is assumed to be fixed to begin with. \textcolor{black}{Approaches} often project labels to a 2D plane to \textcolor{black}{determine} the occlusions and \textcolor{black}{then} perform occlusion removal. However, object movement in 3D is not obvious in the 2D projections of a 3D scene, which leads to temporal inconsistencies that \textcolor{black}{are} harmful to label readability~\cite{hedgehog}. \textcolor{black}{{\v C}mol{\'i}k et al.~\cite{Cmolik:TVCG:2020} summarized the difficulty of retaining label coherence due to many discontinuities of objects projected into 2D images.} As \textcolor{black}{in the} sequence of snapshots \textcolor{black}{in} Figure~\ref{fig:teaser}, we treat labels as objects in a 3D scene and \textcolor{black}{apply} our management strategies \textcolor{black}{for better quality control.} In summary, \textcolor{black}{the} main technical contributions are: \begin{itemize} \item A fast label occlusion removal technique for mobile devices. \item A clutter-aware level-of-detail management. \item A 3D object arrangement that retains label coherence. \item \textcolor{black}{A prototype to demonstrate the applicability of our approach~\cite{Koeppel:2021:repo}}. \end{itemize} The remainder of \textcolor{black}{the} paper is structured as follows: Section~\ref{sec:related} presents previous work and relates our approach \textcolor{black}{to} existing research. An overview of our design principles and system \textcolor{black}{is} described in Section~\ref{sec:overview}. In Section~\ref{sec:method}, we detail the methodology and technical \textcolor{black}{aspects}. The implementation is explained and use cases are demonstrated in Section~\ref{sec:result}, followed by \textcolor{black}{an evaluation} in Section~\ref{sec:evaluate}. 
The limitations are explained in Section~\ref{ssec:limitations}, and we conclude this work and provide future research directions in Section~\ref{sec:conclude}. \section{Limitations} \label{ssec:limitations} The limitations of our system are inherited from the \textcolor{black}{hardware, especially the} accuracy of mobile GPS. The position and \textcolor{black}{particularly} the rotation \textcolor{black}{data from} the available Xiaomi Mi A2 smartphone and the Google Nexus C tablet \textcolor{black}{are} not consistent \textcolor{black}{based on our experience.} A less coherent behavior of our system \textcolor{black}{follows} as the \textcolor{black}{sensor data from} each of the two devices is not stable. \textcolor{black}{This, unfortunately, limits the capability to fully utilize the application, although we envision that} this will eventually be solved by \textcolor{black}{newer technologies}. \textcolor{black}{To eliminate these errors, we thus} present the results using predefined label positions \textcolor{black}{in AR 3D world space.} \textcolor{black}{This allows us to avoid those errors induced by the hardware (e.g., changes in the device position and viewing angle) that could influence the coherence of labels. It will} be interesting to collaborate with researchers focusing on high-precision GPS positioning systems. \textcolor{black}{Another limitation is that the data could contain many POIs with long text descriptions. If each label had to be large enough to show the full text, not much background information could be \textcolor{black}{depicted} eventually. The current aggregation of labels to super labels is straightforward and can be easily extended based on the use cases.} \textcolor{black}{One important decision criterion for the \emph{occlusion management} and the \emph{level-of-detail management} is the position of the user.
The ordering of the labels based on the position of the user influences the resulting labeling.} \textcolor{black}{Furthermore, one limitation is} the loss of the global overview using AR compared to 2D maps as \textcolor{black}{mentioned by related work \cite{grassetimage, firstnavigation, guarese} and two user study participants}. Users need to interact with the system and look \textcolor{black}{in} different directions to see all the labels. The AR view only depicts the labels that are currently in front of the user in the respective view volume. We could \textcolor{black}{in the future} introduce additional labels on the sides of the screen to \textcolor{black}{provide hints to invisible objects}. \subsection{Level-Of-Detail Management} \label{ssec:module2} \textcolor{black}{Labels occupy space that is a scarce resource on a mobile device}, especially if many labels should be shown simultaneously. To reduce \textcolor{black}{unwanted visual clutter, we introduce an} LOD concept for labels~\cite{matkovic} and incorporate a \emph{level-of-detail management} in the pipeline (Figure~\ref{fig:overview}(d)). The LOD is also computed based on the \textcolor{black}{sorted distances of labels} and the label density. The LOD selection consists of two steps: LOD calculation and super label aggregation. In our implementation, the \textcolor{black}{lowest} LOD occupies the least space and includes a colored rectangle and an icon (Figure~\ref{fig:lodsfirst}(a)). The middle LOD presents a colored rectangle, the icon, and an iconic image (photo) of the POI (Figure~\ref{fig:lodsfirst}(a)). The \textcolor{black}{highest LOD contains} a text tag and occupies the most space (Figure~\ref{fig:lodsfirst}(a)). The level-of-detail for each label changes when the user navigates through the scene. \begin{comment} \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{images/method/LODs.eps} \caption{Our three LODs for each label.
The arrows between them indicate the possible state changes over time.} \label{fig:lodsfirst} \end{figure} \end{comment} \subsubsection{LOD Calculation} The LOD for each label depends on the distance to the user and the label density. For each label, a virtual view volume aligned to the $(x, z)$ ground plane is constructed to mimic the user looking in the direction of the label. The horizontal viewing direction along the $(x, z)$ ground plane and the vector from the user to the position of each label are used. If the angle between these two vectors is above a threshold $t$ ($45^{\degree}$ by default in our system), the label is located outside the aligned view volume. \textcolor{black}{We} split the view volume: each label below the threshold $m_1$ ($20^{\degree}$ by default) receives the \textcolor{black}{highest} LOD, until one label \textcolor{black}{exceeds} the angle $m_1$. \textcolor{black}{The remaining labels are displayed in the middle LOD until reaching the threshold} $m_2$ ($30^{\degree}$ by default). \textcolor{black}{If a label exceeds $m_2$,} it will be displayed in the \textcolor{black}{lowest} LOD. \textcolor{black}{The threshold angles can} be changed according to user preferences. The LODs of all labels are consistent when the viewing angle of the device changes for the current user position. The \emph{level-of-detail management} provides coherent movement when rotating the AR device. The LODs for the labels are updated if the user \textcolor{black}{moves.} \subsubsection{Super Label Aggregation} To further reduce visual clutter, we introduce super labels \textcolor{black}{that group individual labels} \textcolor{black}{(see Section~\ref{ssec:encoding})}. The position of a super label is calculated as the average of the $(x,z)$-positions of the individual labels that are part of the aggregation in the 3D scene.
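The angle-based LOD selection and the super-label centroid described above can be sketched as follows. This is a simplified per-label variant with illustrative function and parameter names (the actual system assigns LODs sequentially over the distance-sorted list, so details may differ):

```python
import math

def assign_lods(user_pos, view_dir_deg, labels, m1=20.0, m2=30.0, t=45.0):
    """Assign an LOD to each label from the horizontal angle between the
    viewing direction and the user-to-label vector (per-label variant).
    labels maps a name to its (x, z) ground-plane position."""
    ux, uz = user_pos
    lods = {}
    for name, (x, z) in labels.items():
        bearing = math.degrees(math.atan2(x - ux, z - uz))
        # Angular offset from the viewing direction, folded to [0, 180]
        off = abs((bearing - view_dir_deg + 180.0) % 360.0 - 180.0)
        if off > t:
            lods[name] = 'outside'   # outside the aligned view volume
        elif off <= m1:
            lods[name] = 'high'
        elif off <= m2:
            lods[name] = 'mid'
        else:
            lods[name] = 'low'
    return lods

def super_label_position(member_positions):
    """Place a super label at the centroid of its members' (x, z) positions."""
    xs = [p[0] for p in member_positions]
    zs = [p[1] for p in member_positions]
    return sum(xs) / len(xs), sum(zs) / len(zs)
```

For a user at the origin looking along $+z$, a label straight ahead receives the highest LOD, one at roughly $27^{\degree}$ the middle LOD, and one beyond $45^{\degree}$ is outside the view volume.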
A predefined grouping \textcolor{black}{(i.e., themed areas of amusement parks)} of labels is necessary to compute the super labels, \textcolor{black}{while unsupervised clustering algorithms can also be directly applied.} Considering the position of the user, we do not aggregate labels of the closest predefined group. Individual labels in the close surroundings of the user are always displayed and not aggregated, supporting the exploration process. \textcolor{black}{We} only aggregate individual labels to super labels if the user is located outside of the \textcolor{black}{respective label group.} \subsection{Occlusion Management} \label{ssec:occclusion} Showing many labels simultaneously on a mobile device will, unfortunately, lead to occlusions of labels, especially if the annotated POIs are close to each other or even hidden by other \textcolor{black}{labels (Figure~\ref{fig:manage}(a)).} Point-feature labeling has been extensively investigated due to its NP-hardness, even when looking for an optimal solution just in 2D~\cite{nphard}. In our setting, occlusions change over time, since the users move. Fast responsive management strategies are required to update the scene regularly. Viewing angle and position changes of the user need to be accounted for to guarantee smooth state transitions and to eliminate unwanted flickering. We \textcolor{black}{perform} the entire occlusion handling in the 3D scene, overcoming the label positioning inconsistencies caused by viewing angle changes. The occlusion handling consists of two steps: occlusion detection and shift computation. \subsubsection{Occlusion Detection} \textcolor{black}{We employ ray tracing to detect occlusions, which is different from existing approaches~\cite{labelsurvey}}. As the labels have been sorted by the distance to the user, the occlusions are detected and solved iteratively from label $l_1$ to label $l_n$ of the sorted list $S$.
For each label $l_i$, the origins of four rays are set to the location of the user's device in AR. \textcolor{black}{The rays run through the corner points} of \textcolor{black}{label $l_i$ as shown in Figure~\ref{fig:manage}(b).} If another label is hit during the ray traversals, \textcolor{black}{an occlusion occurs.} To ensure that all possible occlusions will be detected, we assume that \textcolor{black}{labels} closer to the viewer are either larger or as large as labels \textcolor{black}{farther away}. This allows us to \textcolor{black}{use just} four rays to detect 3D occlusions effectively. The approach works for rectangular shapes or rectangular bounding boxes of polygonal shapes and could be extended to polygons \textcolor{black}{or 3D objects (e.g., buildings in MR)}. \textcolor{black}{Other configurations can be accommodated by increasing the number of rays.} Figure~\ref{fig:manage}(b) gives an example, where label $l_1$ (orange) is in front of label $l_2$ (red). In this case, the corner ray $1$ of label $l_2$ collides with label $l_1$, indicating that label $l_1$ occludes label $l_2$. Since we assume that closer labels are always larger or as large as \textcolor{black}{farther away labels}, no \textcolor{black}{occluding} labels will be missed during the occlusion detection. \subsubsection{Shift Computation} \textcolor{black}{Once the occlusions are detected, we can iteratively shift the labels greedily in the order of increasing distance.} Since the labels are shifted from the closest to the farthest one, the label $l_i$ will be located either at its initial $(x,z)$-coordinates or above the previous label $l_{i -1}$ along the y-axis. Figure~\ref{fig:manage}(c) illustrates the basic shift of label $l_2$. The blue lines represent the \textcolor{black}{corner rays for occlusion detection} and the gray line shows the traversed ray for calculating the occlusion free position of label $l_2$. 
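The detection-and-shift loop can be sketched in simplified form. We reduce the four corner rays to a side-view angular-overlap test (our simplification for illustration, not the implemented ray tracer): labels arrive sorted by increasing distance, and an occluded label is lifted until its angular span clears all closer labels, mirroring the shift $d = |y_2' - y_2|$:

```python
def resolve_occlusions(labels):
    """Greedy shift computation in a simplified side view.  Each label is
    (distance, y_bottom, height), sorted by increasing distance; the user
    sits at the origin.  A closer label occludes a farther one when their
    angular (elevation) spans overlap.  Returns the shifted y_bottoms."""
    placed = []                          # (ang_bottom, ang_top) of closer labels
    new_y = []
    for dist, y_bottom, height in labels:
        ang_b = y_bottom / dist          # small-angle elevation of bottom edge
        ang_t = (y_bottom + height) / dist
        moved = True
        while moved:                     # repeat until no closer label occludes
            moved = False
            for pb, pt in placed:
                if ang_b < pt and ang_t > pb:
                    ang_b = pt           # lift the label just above the occluder
                    ang_t = ang_b + height / dist
                    moved = True
        placed.append((ang_b, ang_t))
        new_y.append(ang_b * dist)       # back to world-space y coordinate
    return new_y
```

For two labels at distances $10$ and $20$ that both start at $y = 0$, the farther label is lifted so that it appears directly above the closer one, while labels whose angular spans do not overlap keep their initial $y$-coordinates.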
Figure~\ref{fig:manage}(d) depicts an occlusion-free result after \textcolor{black}{shifting} label $l_2$, where the shift distance $d$ is $|y_2'-y_2|$. \textcolor{black}{Szirmay-Kalos et al.~\cite{worstcase} proved that the ray-tracing approach requires at least logarithmic computation time in the worst case with respect to the number of scene objects. On the other hand, modern platforms already provide real-time ray-tracing~\cite{unity}.} In our approach, the \emph{occlusion management} takes $O(n^2)$ time if labels are aligned in a sequence \textcolor{black}{along} the current viewing direction. The current label $l_i$ \textcolor{black}{possibly} needs to be shifted above each label in front of it. We show a comparison with different label alignments in Section~\ref{sec:result}. \textcolor{black}{The greedy label placement terminates} as soon as no other label in front occludes label $l_i$. \section{Context-Responsive Framework} \label{sec:overview} \textcolor{black}{Based on the taxonomy by Wiener et al.~\cite{Wiener:2009:SCC}, our approach supports aided and unaided wayfinding tasks. We can directly highlight the destination label and assist users in combining decision-making processes, memory processes, learning processes, and planning processes for finding the overall best destinations.} The effort to identify objects in AR is low~\cite{guarese} because real-world objects can be directly annotated~\cite{firstnavigation, grassetimage} and AR navigation \textcolor{black}{demands less user focus} compared to \textcolor{black}{other map techniques~\cite{McMahon:2015:JSET}.} The responsive framework is inspired by Hoffswell et al.~\cite{Hoffswell:2020:CHI}, who proposed a taxonomy for responsive visualization design, which is essential to present information based on the device context.
In principle, our design has three major components, including (1) occlusion management, (2) level-of-detail management, and (3) coherence management, which aim to solve the problems \textbf{(P1-P3)}, respectively. We first introduce the encoding of labels \textcolor{black}{beyond plain text,} followed by an overview of the \textcolor{black}{presented} approach. \subsection{Label Encoding} \label{ssec:encoding} The label encoding \textcolor{black}{reduces the limitations in existing work and solves \textbf{(P2)}. We introduce additional types of labels beyond mere text labels, as suggested by Langlotz~et~al.~\cite{nextgen}}. We use color to encode \textcolor{black}{scalar variables of each POI~\cite{suitablecomp,mazza}.} In general, the users can choose a color scheme and a scale according to their preferences. A label consists of several of the following components: \begin{itemize} \item a text tag containing the name of the POI, \item an iconic image (photo) of the POI, \item an icon encoding the type of the POI, and \item a color-coded rectangle representing a scalar value of the POI. \end{itemize} \begin{figure}[tbh!] \centering \setlength{\tabcolsep}{1pt} \begin{tabular}{cc} \includegraphics[width=0.48\linewidth]{images/method/LODs.pdf} & \includegraphics[width=0.48\linewidth]{images/method/SuperSubLabelExample.pdf} \\ \textcolor{black}{(a) Label encoding, three LODs} & \textcolor{black}{(b) Super label} \\ \end{tabular} \caption{An example label encoding \textcolor{black}{(\emph{Tokyo Disneyland Dataset})}.} \label{fig:lodsfirst} \end{figure} \begin{figure*}[htb!]
\centering{ \setlength{\tabcolsep}{1pt} \begin{tabular}{ccccc} \includegraphics[width=0.15\linewidth]{images/overview/0Input.pdf} & \includegraphics[width=0.15\linewidth]{images/overview/1ARPos.pdf} & \includegraphics[width=0.15\linewidth]{images/overview/2Occlusion.pdf} & \includegraphics[width=0.15\linewidth]{images/overview/3LOD.pdf} & \includegraphics[width=0.15\linewidth]{images/overview/4Coherency.pdf} \\ (a) Input & (b) Positioning labels in AR & (c) Occlusion management& (d) Level-of-detail management& (e) Coherence management \\ \end{tabular} } \caption{The input scenario (a), positioning of labels in AR (b), and the three management strategies of our approach (c)-(e).} \label{fig:overview} \end{figure*} \textcolor{black}{In Figure~\ref{fig:lodsfirst}, labels concerning the \emph{Tokyo Disneyland Dataset} are shown.} POIs are attractions in this case. \textcolor{black}{Attractions can be categorized into three types, i.e., \emph{thrilling}, \emph{adventure}, and \emph{children}, each of which is depicted through a type icon.} Figure~\ref{fig:lodsfirst}(a) provides an explanatory label annotating an attraction of the dataset. The text tag depicts the name of the attraction and the waiting time (e.g., \emph{Big Thunder Mountain $100$ min}). The iconic image shows a photo of the train of the attraction and the type icon indicates that it is a thrilling attraction. The colored \textcolor{black}{(rectangular) backgrounds of the labels encode} the corresponding waiting times. \subsection{\textcolor{black}{Pipeline} of the Approach} Figure~\ref{fig:overview} gives an overview of our approach. We first position labels of POIs in AR (Figure~\ref{fig:overview}(a) as a top view and (b) as a front view) and perform the proposed three management strategies. We \textcolor{black}{process the objects} in the 3D scene using a Cartesian world coordinate system, where the $xz$-plane is parallel to the ground plane. 
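The label encoding above can be illustrated with a short sketch. The following Python code (the actual system is built in Unity; the names and the exact component sets are illustrative) models a label's four components and the subset drawn at each LOD, following the composition used in our comparison: the lowest LOD shows the color-coded rectangle and the type icon, the middle LOD adds the iconic image, and the highest LOD adds the text tag.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Label:
    # The four label components listed above; field names are illustrative
    text_tag: str                # e.g., "Big Thunder Mountain 100 min"
    iconic_image: Optional[str]  # path to a photo of the POI
    type_icon: str               # e.g., "thrilling", "adventure", "children"
    scalar_value: float          # color-coded value, e.g., waiting time in minutes

def components_for_lod(lod: int) -> list:
    """Components drawn at LOD 0 (lowest) to LOD 2 (highest)."""
    comps = ["color_rect", "type_icon"]
    if lod >= 1:
        comps.append("iconic_image")
    if lod >= 2:
        comps.append("text_tag")
    return comps
```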
Figure~\ref{fig:overview}(a) depicts a top view of our coordinate system: the $x$-axis and $z$-axis define the ground plane, and the $y$-axis points vertically upwards from the ground plane. The input to our system is a set of POIs $P = \{p_1,p_2,...,p_n\}$ and a set of labels $L = \{l_1,l_2,...,l_n\}$, for example, manually selected by the users or downloaded from an online database. In the \emph{positioning labels in AR} preprocessing (Section~\ref{ssec::arpos}), for each POI $p_i$, the corresponding label $l_i$ is initially \textcolor{black}{placed} perpendicularly to the ground plane in the world coordinate system (Figure~\ref{fig:overview}(b)). Currently, each POI $p_i$ has one associated label $l_i$ describing the attributes of the POI. We also assume that the $(x, z)$-coordinates of each annotated POI are more important than the $y$-coordinate, since the $(x, z)$-coordinates are essential to indicate the relative positions of the POIs \textcolor{black}{as suggested by prior work~\cite{firstnavigation, guarese}.} The \emph{occlusion management} strategy (Section~\ref{ssec:occclusion}) \textcolor{black}{addresses} \textbf{(P1)} and resolves occlusions of labels considering the current configuration of the device. The labels are first sorted by distance from the device \textcolor{black}{into} a list $S$, from the nearest to the farthest positions. With this information, we resolve occlusions starting with the closest label and using a greedy approach (see Figure~\ref{fig:overview}(c)). The greedy approach iteratively shifts labels so that the lowest $y$-position of each label becomes visible. This allows effective execution of the occlusion handling on mobile devices, where computation power is limited compared to desktop computers. The occlusion strategy provides a solution to \textcolor{black}{otherwise} inconsistently moving labels when the viewing angle of the AR device changes~\cite{hedgehog}.
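The greedy occlusion management can be sketched as follows. This is a simplified abstraction of our own: it replaces the actual ray casts with an overlap test along the $y$-axis for labels lined up along the viewing direction, and it exhibits the $O(n^2)$ worst case discussed earlier.

```python
def occlusion_management(labels):
    """Greedy occlusion resolution sketch.

    Each label is a dict with 'dist' (distance to the device), 'y' (bottom
    edge), and 'h' (height). Labels are processed from nearest to farthest;
    each label is shifted upwards until its bottom edge is no longer hidden
    behind any label in front of it."""
    order = sorted(labels, key=lambda l: l["dist"])  # list S, near to far
    placed = []
    for lab in order:
        for front in placed:             # every label closer to the device
            top = front["y"] + front["h"]
            if lab["y"] < top:           # lowest y-position occluded
                lab["y"] = top           # shift above the occluder
        placed.append(lab)
    return order
```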
In the \emph{level-of-detail management} (Section~\ref{ssec:module2}), we introduce four distinct types of label encodings for \textbf{(P2)} to represent three levels-of-detail (LODs, Figure~\ref{fig:lodsfirst}(a)) of an individual label and one \emph{super label} to indicate an aggregated group of labels for visual clutter reduction (Figure~\ref{fig:lodsfirst}(b)). \textcolor{black}{The \emph{level-of-detail management} depicts} a different amount of information for each label (see Figure~\ref{fig:overview}(d)). The LOD of a label $l_i$ is selected according to the distance of the annotated POI to the device and the label density in the view volume. For convenience, we assume that \textcolor{black}{close labels get at least as much screen space as distant labels}, since it is natural to show objects larger when they are close by. \textcolor{black}{However, different configurations can also be incorporated by adding rays in the occlusion detection.} Super labels (Figure~\ref{fig:lodsfirst}(b)) are representative labels that \textcolor{black}{depict a set of aggregated labels in order to reduce visual clutter. Figure~\ref{fig:lodsfirst}(b) gives} an example of a super label for the \emph{Tokyo Disneyland Dataset}. The themed area \emph{Adventureland} is aggregated and \textcolor{black}{the blue background color} of the super label encodes the average waiting time. \textcolor{black}{A color legend at the bottom of the label presents the individual waiting times of the aggregated attractions in this themed area.} \begin{comment} \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{images/method/LODs.eps} \caption{Our label encoding with three LODs and a super label. The arrows between them indicate the possible state changes over time while the arrows at the super label show, which labels belong to it.
\label{fig:lodsfirst} \end{figure} \end{comment} \emph{Positioning labels in AR}, \emph{occlusion management}, and \emph{level-of-detail management} are smoothly updated in the \emph{coherence management} module (Figure~\ref{fig:overview}(e)). To avoid flickering that inevitably reduces coherency~\cite{imagebased}, the labels are not moved or changed immediately, but follow a common animation policy that strategically updates changes over time (Section~\ref{ssec:coherencemanagement}) to solve problem \textbf{(P3)}. \section{Related Work} \label{sec:related} \textcolor{black}{We} present a novel responsive approach \textcolor{black}{considering} label occlusion, visual clutter, and coherence simultaneously. We \textcolor{black}{discuss} related work to identify our contributions \textcolor{black}{by first covering general navigation techniques, and then specific labeling topics in different applications and spaces.} \subsection{Spatial Identification and Navigation} \label{ssec:navi} Spatial cognition studies show how people acquire experience and knowledge to identify where they are, how to continue the journey, and visit places effectively~\cite{Waller:2012:APA}. \textcolor{black}{Maps are classical tools used to detect positions and extract spatial information throughout human history~\cite{wu:2020:eurovis}, while modern maps often use markers to identify and highlight the \textcolor{black}{locations} of POIs.} 2D maps may not always be optimal since the 2D information needs to be translated to the real environment~\cite{guarese}. An alternative, and perhaps more intuitive, way is to map the information \textcolor{black}{directly to the physical environment. 
McMahon et al.~\cite{McMahon:2015:JSET} compared paper maps and Google Maps to AR or more specifically hand-held AR~\cite{Sereno:2021:TVCG}, which better supports people in terms of activating their navigation skills.} \textcolor{black}{Willett et al.~\cite{Willett:2017:TVCG} introduced embedded data representations, a taxonomy describing the challenges of showing data in physical space, and mentioned that occlusion problems have not yet been fully resolved. } Bell et al.~\cite{firstnavigation} proposed \textcolor{black}{a} pioneering view-management approach to project objects \textcolor{black}{onto the} screen while resolving occlusions or to arrange similar objects close to each other. Guarese and Maciel~\cite{guarese} investigated MR to assist navigation tasks by overlaying the real environment with virtual holograms. Schneider et al.~\cite{schneider} investigated an AR navigation concept, where the system projects the content onto a vehicle’s windshield to assist driving behaviors. \subsection{Labeling in Various Spaces (2D, 3D, VR, and AR)} \label{ssec:labeling} Labeling is an automatic approach to position text or image labels in order to efficiently communicate additional information about POIs. It improves the clarity and understandability of the \textcolor{black}{underlying information~\cite{labelsurvey}.} \textcolor{black}{Internal labels} are overlaid onto their reference objects. External labels are placed outside the objects and are connected to them by leader lines. Recently, {\v C}mol{\'i}k~et~al.~\cite{Cmolik:TVCG:2020} have introduced \emph{Mixed Labeling} that facilitates the integration of internal and external labeling \textcolor{black}{in 2D}. Labeling techniques have been extensively investigated in geovisualization, where resolving occlusions and leader crossings~\cite{Lin:2010:pvis} are primary aesthetic criteria to ensure good readability.
Besides 2D labeling, in digital map services, such as Google Maps \textcolor{black}{and} other GISs, scales have been considered to improve user interaction. Active range optimization, for example, uses rectangular pyramids to \textcolor{black}{eliminate} label-placement conflicts across different zoom levels~\cite{Been:2010:cg,Wu:2017:EuroVis}. Labeling of 3D scenes has been mainly investigated in medical applications~\cite{Oeltze:2014:vcbm}, usually focusing on complex mesh and volume scenes\textcolor{black}{, as well as} intuitiveness for navigation. \textcolor{black}{ Maass and D{\"o}llner~\cite{Maass:2006:WSCG} developed a labeling technique to dynamically attach labels to the hulls of objects in a 3D scene. Later they extended this billboard concept by taking occlusion with labels and scene elements into account~\cite{Maass:2008:CAG}. } The approach by Kou{\v r}il et al.~\cite{kouril-2018-LoL} \textcolor{black}{annotates} a complex 3D scene, involving multiple instances across multiple scales in a dense 3D biological environment. \textcolor{black}{Occlusion in these approaches is detected after projecting objects into 2D, which makes it hard to maintain coherence.} \textcolor{black}{Handheld} Augmented Reality has become useful as the computing power of mobile devices \textcolor{black}{has increased}. One advantage of using AR is to overlay information directly \textcolor{black}{on} the real world that the user is familiar with. \textcolor{black}{ For example, White and Feiner~\cite{White:2009:CHI} proposed \emph{SiteLens}, a situated visualization that embeds relevant data of the POIs in AR. Veas et al.~\cite{Veas:2012:TVCG} investigated outdoor AR applications, where they focused on multiple-view coordination and occlusion with objects in the background. Labels, however, were not their main focus. 
} As referred to in most of the \textcolor{black}{following} papers, occlusions between labels have been considered a primary issue in AR applications~\cite{nextgen, grassetimage,imagebased}. Grasset~et~al.~\cite{grassetimage} proposed a view management technique to annotate landmarks \textcolor{black}{in} an image. \textcolor{black}{Edge detection and image saliency} are integrated to identify unimportant regions for text label placement. Jia~et~al.~\cite{imagebased} investigated a similar strategy, \textcolor{black}{incorporating human placement preferences as a set of constraints} to improve the work by Grasset~et~al.~\cite{grassetimage}. \textcolor{black}{Both prototypes were implemented for desktop computers due to the poor \textcolor{black}{temporal performance} on mobile devices}. Tatzgern~et~al.~\cite{hedgehog} developed a pioneering approach that considers labels as 3D objects in the scene to avoid unstable labels due to view changes. The approach estimates the center position of an object and moves labels along a 3D pole, which attaches to the object. Another proposed scenario constrains label movement to a predefined 2D view plane. \textcolor{black}{This technique is limited to} annotating \textcolor{black}{objects} in front of the camera. \textcolor{black}{Existing work tends to directly solve label occlusions in 2D or \textcolor{black}{to} project labels from 3D to 2D and apply 2D solutions. \textcolor{black}{These techniques cannot avoid label inconsistencies}~\cite{Cmolik:TVCG:2020}. In contrast to existing approaches, we handle labels as objects in the 3D scene. This allows us to compensate for incoherent label movement caused by viewing angle changes of the device.} \textcolor{black}{We integrate} the labeling technique \textcolor{black}{into} 3D to retain stability and introduce additional visual variables, including text, images, icons, and colors, to enrich the corresponding visual representation.
Our label encoding also varies in order to balance the information provided by POIs. More design choices will be explained in Section~\ref{sec:overview}. \section{Experimental Results} \label{sec:result} To assess the applicability of our technique, we investigate three different use cases, including a (1) \emph{Synthetic Dataset}, a (2) \emph{Local Shops Dataset}, and the (3) \emph{Tokyo Disneyland Dataset}. The \emph{Synthetic Dataset} shows different variations of label layouts. The \emph{Local Shops Dataset} provides a real-world example, where the labels are close and next to each other. The \emph{Tokyo Disneyland Dataset} presents another real-world scenario, \textcolor{black}{where the labels are spread out in the 3D scene.} We use Unity as the visualization platform~\cite{unity} and incorporate the Vuforia engine~\cite{vuforia} to arrange objects in AR. The images shown in this section were taken using a Xiaomi Mi A2 device (Qualcomm Snapdragon $660$ processor and $4$ GB RAM) \textcolor{black}{with} Android $10$ in portrait mode. \subsection{Synthetic Dataset} \label{subsec:syntheticdata} We \textcolor{black}{study three different label layouts of} the \emph{Synthetic Dataset} (Figure \ref{fig:syntheticlayout}) and compute the execution time measured on the mobile device Xiaomi Mi A2. The three layouts are a circle layout, a grid layout, and a line layout, \textcolor{black}{listed in order of increasing computational cost.} This ordering is based on the fact that if more labels are hidden in the current viewing direction, more occlusion removal steps are necessary. Figure~\ref{tab:computationtimes} gives the execution times of all layouts in milliseconds based on a variation of label numbers.
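The three synthetic layouts can be generated with a few lines of code. The sketch below (a simplified stand-in for our Unity scene setup) uses the layout parameters reported in this section, i.e., a circle of radius $1,000$ world space units, a $4,000 \times 4,000$ grid with $\sqrt{n}$ labels per row, and a line with $90$ units between consecutive labels, and places labels in the ground plane.

```python
import math

def circle_layout(n, radius=1000.0):
    # Labels evenly spaced on a circle in the ground plane
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def grid_layout(n, size=4000.0):
    # floor(sqrt(n)) labels per row; a partial last row if n is not a square
    per_row = max(int(math.isqrt(n)), 1)
    step = size / max(per_row - 1, 1)
    return [((i % per_row) * step, (i // per_row) * step) for i in range(n)]

def line_layout(n, spacing=90.0):
    # Worst case: labels placed one behind another along the viewing direction
    return [(0.0, i * spacing) for i in range(n)]
```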
The labels in this dataset have a height and width of $120$ world space units \textcolor{black}{by default in Unity.} The circle layout (Figure~\ref{fig:syntheticlayout}(a)) requires the least computation time to resolve occlusions since many labels are initially \textcolor{black}{arranged without occlusion issues}. The radius of the circle layout is set to $1,000$ world space units \textcolor{black}{in this experiment.} The grid layout (Figure~\ref{fig:syntheticlayout}(b)) distributes the labels equally, leading to densely \textcolor{black}{placed labels} in the scene. \textcolor{black}{In our setting, the} number of labels per row is equal to $\sqrt{n}$, where $n$ is \textcolor{black}{the total number of labels in} Figure~\ref{tab:computationtimes}. If $\sqrt{n}$ is not an \textcolor{black}{integer}, the layout contains one partial label row in the grid. The size of the grid is \textcolor{black}{$4,000 \times 4,000$} world space units and includes both near and far labels in \textcolor{black}{the} world space. The line layout (Figure~\ref{fig:syntheticlayout}(c)) represents the worst case \textcolor{black}{example.} The labels are located \textcolor{black}{one after another, which leads to the maximum number of $i-1$ shifts} for each label $l_i$. The labels are placed $90$ world space units behind each other. As shown in Figure \ref{tab:computationtimes}, resolving occlusions for the grid layout leads to higher computation times than the circle layout, \textcolor{black}{but} lower computation times compared to the line layout. \begin{figure}[tb!]
\centering \setlength{\tabcolsep}{1pt} \begin{tabular}{ccc} \includegraphics[width=0.33\linewidth]{images/results/Synthetic/SyntheticCircle.pdf} & \includegraphics[width=0.33\linewidth]{images/results/Synthetic/SyntheticGrid.pdf} & \includegraphics[width=0.33\linewidth]{images/results/Synthetic/SyntheticLine.pdf} \\ (a) & (b) & (c) \\ \end{tabular} \caption{An example of the \emph{Synthetic Dataset} in top view with the displayed results beneath. Labels are arranged on a \textcolor{black}{(a) circle, (b) grid, and (c) line.}} \label{fig:syntheticlayout} \vspace{2mm} \centering \includegraphics[width=\linewidth]{images/results/Synthetic/SyntheticTime.pdf} \caption{\textcolor{black}{Computation times for removing occlusions}.} \label{tab:computationtimes} \end{figure} \begin{comment} \begin{table*}[tbh!] \scriptsize \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{9}{|c|}{\textbf{\emph{Synthetic Dataset} Occlusion Handling Performance}} \\ \hline \hline \multicolumn{3}{|c|}{\textbf{Circle Layout}} & \multicolumn{3}{|c|}{\textbf{Grid Layout}} & \multicolumn{3}{|c|}{\textbf{Line Layout}}\\ \hline \hline \textbf{\#Labels} & \textbf{Time (ms)} & \textbf{\#Raycasts} & \textbf{\#Labels} & \textbf{Time (ms)} & \textbf{\#Raycasts} & \textbf{\#Labels} & \textbf{Time (ms)} & \textbf{\#Raycasts} \\ \hline \hline 10 & 1 & 36 & 10 & 1 & 40 & 10 & 5 & 180 \\ \hline 20 & 2 & 76 & 20 & 2 & 80 & 20 & 25 & 760 \\ \hline 30 & 4 & 116 & 30 & 5 & 160 & 30 & 60 & 1696\\ \hline 40 & 5 & 156 & 40 & 8 & 244 & 40 & 115 & 3076 \\ \hline 50 & 7 & 196 & 50 & 12 & 324 & 50 & 195 & 4860 \\ \hline 75 & 12 & 296 & 75 & 43 & 804 & 75 & 570 & 11060 \\ \hline 100 & 20 & 396 & 100 & 124 & 2012 & 100 & 1340 & 19760 \\ \hline \end{tabular} \caption{Computation times for the occlusion handling measured on a Xiaomi Mi A2 smartphone for different synthetic label layouts.} \label{tab:computationtimes} \end{table*} \end{comment} \subsection{Local Shops Dataset} \label{subsec:shopsdata} The
\emph{Local Shops Dataset} contains shop locations, types of shops, and the number of people inside a shop (per m$^2$) of a strip mall (Figure~\ref{fig:shop}). The icons indicate the respective shop types (e.g., clothing, shoes, and groceries). Considering the current COVID-19 regulations, we encode the number of people per m$^2$ to identify the customer density \textcolor{black}{or COVID-19 safety measure} in the shop in real-time. In Figure~\ref{fig:shop}, we use a color scale from white to red. \textcolor{black}{The text displays the name and measure accordingly. Figure~\ref{fig:shop} gives} an explanatory result, in which the device is tilted. \textcolor{black}{As shown here, the placement of the labels is thereby not influenced.} The rectangular labels remain parallel to the ground. \subsection{Tokyo Disneyland Dataset} \label{subsec:disneydata} The \emph{Tokyo Disneyland} is one of the most popular amusement parks in the world. \textcolor{black}{Visitors often need to line up for hours to enjoy a specific attraction,} and many magazines and blogs guide visitors to optimize their one-day visit~\cite{Bricker:2020:dtb}. The amusement park consists of $35$ big attractions, all of which we mark as POIs \textcolor{black}{in our system} to give an overview of the park. In the park, themed areas, such as the \emph{Westernland}, are \textcolor{black}{subregions grouping several attractions for convenience.} We use the themed areas of the amusement park to aggregate labels \textcolor{black}{and present each area using the corresponding super label.} \textcolor{black}{Once the \emph{positioning labels in AR} preprocessing has been performed, labels might} initially be occluded. Figure~\ref{fig:comp12} compares results for the same position and viewing angle.
\textcolor{black}{Initially, the labels are occluded as shown in Figure~\ref{fig:comp12}(a) and the respective occlusion-free result is \textcolor{black}{given} in Figure~\ref{fig:comp12}(b).} \textcolor{black}{Since the occlusion-free results are independent of} the viewing angle of the device, no incoherent label movement occurs when the user rotates the device. The occlusions are resolved for all the labels around the users as explained in Section~\ref{sec:overview} and Section~\ref{ssec:occclusion}. Labels closer to the user are more likely to stay close to their initial positions than labels that are farther away. \textcolor{black}{The two closest labels in Figure~\ref{fig:comp12}} are \emph{Big Thunder Mountain} and \emph{Mark Twain's Riverboat}, showing iconic images of a train and a boat. The positions of these two labels are not changed. \textcolor{black}{Labels that are occluded by these two labels will be shifted upwards during the \emph{occlusion management}.} Figure~\ref{fig:areatransition1} depicts \textcolor{black}{the transition of a super label to its} individual labels. \textcolor{black}{The super label represents the} \emph{Westernland} themed area of the \emph{Tokyo Disneyland}. \begin{figure}[tb!] \setlength\arraycolsep{0pt} \begin{minipage}[b]{0.18\textwidth} \centering{ \includegraphics[width=\linewidth]{images/results/Shops/RotationDifference.eps} \\ } \vspace{5mm} \caption{\textcolor{black}{An example with a $45^{\circ}$~tilted mobile device}. \vspace{0.5mm} } \label{fig:shop} \end{minipage} \quad \begin{minipage}[b]{0.275\textwidth} \centering{ \begin{tabular}{cc} \setlength{\tabcolsep}{0pt} \includegraphics[width=0.43\linewidth]{images/results/Compare1.pdf} & \includegraphics[width=0.43\linewidth]{images/results/Compare2.pdf} \\ \textcolor{black}{(a)} & \textcolor{black}{(b)} \\ \end{tabular} } \caption{Occlusions that occur in (a) are resolved in (b).} \label{fig:comp12} \end{minipage}% \end{figure} \begin{figure*}[th!]
\centering \includegraphics[width=0.95\linewidth]{images/results/AreaTransition1.png} \caption{Transition from a super label to the individual labels for each POI over time.} \label{fig:areatransition1} \end{figure*} \begin{figure*}[tbh!] \begin{minipage}{0.80\textwidth} \centering{ \setlength{\tabcolsep}{1pt} \begin{tabular}{cccc} \includegraphics[width=0.24\linewidth]{images/results/AllOneLOD/Lowest.pdf} & \includegraphics[width=0.24\linewidth]{images/results/AllOneLOD/Middle.pdf} & \includegraphics[width=0.24\linewidth]{images/results/AllOneLOD/Highest.pdf} & \includegraphics[width=0.24\linewidth]{images/results/AllOneLOD/Dynamic.pdf} \\ \textcolor{black}{(a) Lowest LOD} & \textcolor{black}{(b) Middle LOD} & \textcolor{black}{(c) Highest LOD} & \textcolor{black}{(d) Dynamic LODs} \\ \end{tabular} } \caption{A comparison of different LODs and dynamic LODs (\textcolor{black}{applying} the \protect\emph{level-of-detail management})} \label{fig:compareLODsTilt} \end{minipage}\hfill \begin{minipage}{0.2\textwidth} \centering{ \setlength{\tabcolsep}{1pt} \begin{tabular}{cc} \includegraphics[width=0.48\linewidth]{images/results/Lateral/AdventureLeft.pdf} & \includegraphics[width=0.48\linewidth]{images/results/Lateral/AdventureRight.pdf} \\ \textcolor{black}{(a)} & \textcolor{black}{(b)}\\ \includegraphics[width=0.48\linewidth]{images/results/Lateral/WesternLeft.pdf} & \includegraphics[width=0.48\linewidth]{images/results/Lateral/WesternRight.pdf} \\ \textcolor{black}{(c)} & \textcolor{black}{(d)} \\ \vspace{-7mm} \end{tabular} } \textcolor{black}{ \caption{Lateral transitions} \vspace{2mm} \label{fig:latTransNew} } \end{minipage} \end{figure*} Figure~\ref{fig:compareLODsTilt} \textcolor{black}{presents different} LODs of the themed area \emph{Westernland}. Figure \ref{fig:compareLODsTilt}(a) \textcolor{black}{shows} all labels in the \textcolor{black}{lowest} LOD consisting of a colored rectangle encoding the waiting time and an icon indicating the attraction type. 
This LOD provides the \textcolor{black}{simplest} overview of the attractions, \textcolor{black}{and} it presents the least amount of information as only the attraction \textcolor{black}{types} and the color encodings are included. Figure \ref{fig:compareLODsTilt}(b) illustrates the middle LOD adding an iconic image to the encoding. In this case, the type icon is less dominant than in the \textcolor{black}{lowest} LOD. Figure \ref{fig:compareLODsTilt}(c) depicts the \textcolor{black}{highest} LOD by adding a text tag stating the name and the exact waiting time of the attraction in minutes. This LOD provides the most detailed information. However, resolving occlusions requires more vertical stacking of labels than for the \textcolor{black}{lowest} and middle LODs. Figure \ref{fig:compareLODsTilt}(d) \textcolor{black}{presents the label placement of the themed area \emph{Westernland} once the dynamic LOD selection is enabled.} This solution constitutes a compromise \textcolor{black}{concerning} the presented amount of information \textcolor{black}{and label displacement. It includes} detailed information about close attractions and \textcolor{black}{keeps} the vertical stacking of labels low compared to the \textcolor{black}{highest} LOD. The preferred LOD might vary depending on the use case and the user's preference (see Section~\ref{sec:evaluate}). Each LOD has its benefits and drawbacks, with dynamic LODs being the most versatile as they present detailed information about close labels and avoid excessive vertical stacking (see Section~\ref{sec:evaluate}). \textcolor{black}{Figure~\ref{fig:latTransNew} exemplifies lateral translations of the user and the resulting label arrangements. Figure~\ref{fig:latTransNew}(a) and Figure~\ref{fig:latTransNew}(c) correspond to the initial positions. In Figure~\ref{fig:latTransNew}(b) and Figure~\ref{fig:latTransNew}(d), the user moved laterally to the right. 
The label positions are updated smoothly depending on the movement of the user.} \subsection{Coherence Management} \label{ssec:coherencemanagement} To avoid unwanted flickering, we incorporate smooth transitions for each movement and change. \textcolor{black}{Smooth transitions are applied} if positions of labels change to be occlusion-free during the interaction with the system, if LODs of labels change, or if labels are aggregated to super labels. \textcolor{black}{We investigated ten different easing functions, including linear, and various quadratic and cubic equations, for the transitions to further increase the coherency. \textcolor{black}{For comparison, we refer readers to the supplementary videos.} We believe that the ease-in ease-out sine function (Eq.~\ref{eq:completion}) represents the best easing function as it provides harmonic transitions. The easing function can be changed based on user preferences. Let $t_{transition}$ be the duration for \textcolor{black}{a} transition to be completed. The variables $t_{start}$ and $t_{current}$ indicate the start time and the current time \textcolor{black}{during} the transition. The function $e(t_{current})$ represents \textcolor{black}{the} easing function for\textcolor{black}{ a smooth transition}:} \textcolor{black}{ \begin{eqnarray} e(t_{current}) = -\frac{1}{2} \left( \cos\left( \pi \, \frac{t_{current} - t_{start}}{t_{transition}} \right) - 1 \right). \label{eq:completion} \end{eqnarray} } \subsubsection{Smooth Occlusion Transitions} Due to the interaction of the user, occlusion-free label positions may vary from one frame to the next. If the labels were simply displayed at the newly calculated positions, the labels \textcolor{black}{might} abruptly change their positions, which degrades the user experience since the labels do not move in a coherent way.
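The easing function of Eq.~(\ref{eq:completion}) translates directly to code. A minimal Python sketch (the actual implementation lives in Unity):

```python
import math

def ease_in_out_sine(t_current, t_start, t_transition):
    # Eq. (eq:completion): 0 at the start of the transition, 1 at the end
    return -0.5 * (math.cos(math.pi * (t_current - t_start) / t_transition) - 1)
```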
\textcolor{black}{To allow the user to better keep track of the labels, we implemented smooth transitions from the previous locations of the labels to the newly calculated ones. \textcolor{black}{We interpolate between }the original and the newly calculated positions of the labels. The position of label ${l_i}$ is updated every frame until it reaches its destination. Let $p_{goal}({l_i})$ be the new occlusion-free label position and $p_{start}({l_i})$ the label position at the start of the transition. We calculate the current position of label ${l_i}$ as:} \begin{eqnarray} \textcolor{black}{\vec{p}(l_{i}) = \vec{p}_{start}({l_i}) + (\vec{p}_{goal}({l_i}) - \vec{p}_{start}({l_i})) * e(t_{current})}. \label{eq:occlTrans} \end{eqnarray} \subsubsection{Smooth LOD Transitions} If the LOD of a label changes, the transition needs to be smoothed to avoid flickering and allow a coherent user experience. The LODs of labels change over time, and we adapt the alpha channel to achieve a smooth transition. In this way, the iconic images, the icons, and the text tags fade in or out using \textcolor{black}{ \begin{eqnarray} \alpha(l_{i})=\begin{cases} e(t_{current}), & b = 1 \\ 1 - e(t_{current}), & b = 0, \end{cases} \label{eq:lodtrans} \end{eqnarray} where} $\alpha(l_{i})$ is the alpha value of the iconic image, the icon, or the text tag of label $l_{i}$. \textcolor{black}{Since our easing function $e$ (in Eq.(\ref{eq:completion})) returns a value between $0$ and $1$, the result can be used to set the alpha channel in Eq.(\ref{eq:lodtrans})}. The variable $b$ indicates whether the object should become invisible ($b=0$) or visible ($b=1$). \subsubsection{Smooth Aggregation Transitions} \textcolor{black}{If} labels are aggregated to super labels, individual labels will be moved to the respective super label positions in the scene. Simultaneously, we fade in the super labels and fade out the labels by interpolating the alpha channels.
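The position and LOD transitions of Eqs.~(\ref{eq:occlTrans}) and (\ref{eq:lodtrans}) can be sketched as follows; the function names are illustrative, and the easing value $e$ is assumed to come from the easing function described above:

```python
def lerp_position(p_start, p_goal, e):
    # Eq. (occlTrans): component-wise interpolation driven by the easing value e
    return tuple(s + (g - s) * e for s, g in zip(p_start, p_goal))

def lod_alpha(e, b):
    # Eq. (lodtrans): fade in (b = 1) or fade out (b = 0)
    return e if b == 1 else 1 - e
```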
\textcolor{black}{If} individual labels are aggregated, the labels move towards their super label and disappear. If an aggregation is split up again, coherency is achieved analogously. If the alpha channel of a super label is decreased, the individual labels reappear over time and \textcolor{black}{move} back to their respective positions (Eqs.(\ref{eq:superal}), (\ref{eq:superas}), and (\ref{eq:superpos})). Let $l_i$ be a label that will be aggregated into \textcolor{black}{a} super label $l_{s}$. \textcolor{black}{The alpha values of $l_i$ and $l_{s}$ and the position of $l_i$ are computed as follows:} \textcolor{black}{ \begin{eqnarray} \alpha(l_{i}) = 1 - e(t_{current}) \label{eq:superal} \end{eqnarray} \begin{eqnarray} \alpha(l_{s}) = e(t_{current}) \label{eq:superas} \end{eqnarray} \begin{eqnarray} \vec{p}(l_{i}) = \vec{p}_{start}(l_i) + (\vec{p}(l_{s}) - \vec{p}_{start}(l_i)) * e(t_{current}) \label{eq:superpos} \end{eqnarray} }
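One animation step of the aggregation transition, combining Eqs.~(\ref{eq:superal})--(\ref{eq:superpos}), can be sketched as:

```python
def aggregate_step(label_pos_start, super_pos, e):
    """One frame of the aggregation transition: the individual label fades out
    and moves towards the super label, which fades in; e is the easing value."""
    alpha_label = 1 - e                        # Eq. (superal)
    alpha_super = e                            # Eq. (superas)
    pos = tuple(s + (p - s) * e                # Eq. (superpos)
                for s, p in zip(label_pos_start, super_pos))
    return alpha_label, alpha_super, pos
```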
\subsection{Data Collection} We collected the dataset from an online platform, \textit{Codeforces}~\cite{codeforces}, that organizes programming contests regularly. Each contest consists of a set of problems to which users submit their solutions. The online judge system automatically evaluates each solution for correctness using several test cases (typically 5 to 13, though the number varies among problems) and reports the corresponding runtime and memory usage. The dataset contains several unique solutions to each problem with varying runtime and memory usage characteristics, from which a deep learning model can ``learn''. We developed a Python tool to automatically retrieve a list of contests from the Codeforces website using their provided API, disregarding any contest that has not yet finished. Subsequently, our data collection tool issues an API request for a list of submission IDs for each contest in the retrieved list. For each problem, our tool parses all submissions, ignoring those marked by Codeforces as incorrect solutions. Finally, for each test case, our tool stores the problem along with the source code, source language, runtime, and memory usage in a database. This process results in a total of $4,313,322$ correct solutions spanning $1,278$ problems. The distribution of solutions is highly skewed, ranging from $6$ problems with over $40,000$ submissions each to $600$ problems with fewer than $1,000$ submissions each. Each problem is unique with regard to its difficulty and popularity. For training, it is crucial to select problem sets that have a sufficient number of solutions and are of sufficient difficulty such that runtime and memory usage across solutions show non-trivial variability. In this paper, we only focus on submissions written in C++, and the tests are averaged to obtain a mean runtime for each problem. However, the tool is generic in its ability to collect all programs written in all languages.
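The filtering steps of the collection pipeline can be sketched as follows. The field names (\texttt{phase}, \texttt{verdict}, \texttt{programmingLanguage}) follow the public Codeforces API responses (\texttt{contest.list} and \texttt{contest.status}); this is a simplified illustration, not the actual code of our tool.

```python
def finished_contests(contests):
    # Keep only contests that have already finished (contest.list response)
    return [c for c in contests if c.get("phase") == "FINISHED"]

def correct_cpp_solutions(submissions):
    # Keep accepted C++ submissions (contest.status response); submissions
    # with any verdict other than "OK" are marked incorrect and ignored
    return [s for s in submissions
            if s.get("verdict") == "OK"
            and "C++" in s.get("programmingLanguage", "")]
```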
In Section~\ref{sec:results}, we present the performance of models built from nine problems spanning seven groups of algorithms. Table~\ref{tab:DATA} presents statistics and descriptions of these nine selected problems (Tags A-I). \change{These problems are automatically selected based on having sufficient variation in execution times and more than $100$ correct solutions. While not all of the algorithms in our dataset represent scientific applications, Dynamic Programming (DP) and Graph Traversal are two of the $13$ dwarfs of scientific computing, and shortest-path algorithms are core to several commercial applications.} \subsection{Generating Code Pairs} As described in the previous section, we formulate the problem of comparative performance analysis as correlating $\delta(Code)$ for a pair of source codes with $\delta(performance)$. This formulation takes a differential approach instead of predicting the absolute performance of a new application or a variant of an existing one, which can be intractable given that execution time depends on a large number of variables. To facilitate this formulation, our pipeline automatically generates pairs of codes from the selected problems for training a robust model. Since each ordering of two codes can be considered a unique pair, $N$ submissions yield a total of $N^{2}$ possible pairs. Though data-driven approaches such as deep neural networks are typically known to require large amounts of data, we argue that not all possible pairs are required to train a robust model. Not all pairs add unique information for the model to learn, and repetitive training leads to overfitting. Hence, in Section~\ref{sec:dataneed}, we evaluate the data requirements for building a robust model. The state-of-practice method for reducing the training dataset is to use random subsets of the input. Hence, we explore the use of random subsets (of code pairs) of varying sizes for training the models and make appropriate recommendations.
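As a sketch (function name ours), enumerating the $N^{2}$ ordered pairs and drawing a random subset for training might look like:

```python
import itertools
import random

def sample_pairs(n_submissions, frac=0.75, seed=0):
    """Enumerate all ordered pairs of submission indices (N^2 including
    self-pairs, as counted in the text) and keep a random fraction."""
    pairs = list(itertools.product(range(n_submissions), repeat=2))
    random.Random(seed).shuffle(pairs)
    return pairs[: int(frac * len(pairs))]
```

With `frac=0.75`, this matches the 75\%-of-pairs sampling used in the data-requirement experiments later in the paper.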
For every pair of programs, we generate the target variable (binary) as follows: if the first element of the pair has a higher execution time, we label it as positive, otherwise negative. This formulation emulates a developer looking to determine if a new version of the program will have a lower (improved) runtime. Our experiments also study the impact of including the two-way ordering of every pair of codes, i.e., $(a, b)$ and $(b,a)$, on the model accuracy. \subsection{Model Evaluation and Generalization} \label{sec:LAYERS} \begin{figure}[t] \vspace{-0.1in} \captionsetup{font=small,skip=0pt} \centering \includegraphics[width=\columnwidth]{figs/boxwithline.pdf} \caption{The overall model evaluation and generalizability of our proposed tree-LSTM approach compared to a traditional GCN method. The X-axis shows the training dataset, and the Y-axis shows the accuracy. The lines show the accuracy of models in classifying the performance differences (lower or otherwise) between random pairs of disjoint submissions for the same problem (as the training problem). The boxplots show the models' accuracy in classifying the performance difference between random pairs of disjoint submissions from all other problems except the training one.} \label{fig:boxplot} \vspace{-0.1in} \end{figure} This experiment's objective is two-fold: (a) test the effectiveness of our proposed tree-LSTM architecture compared to GCN in building source-code representations and their impact on model accuracy, and (b) determine how well the predictive model can generalize to unseen problems. For this experiment, we train models on submissions from a problem and measure the accuracy of the model in predicting the change in performance (positive or negative) for unseen submissions from (i) the same problem \change{(disjoint set)}, (ii) a different problem from the same algorithmic group, and (iii) other problems from diverse algorithmic groups.
\change{Figure~\ref{fig:boxplot} shows training datasets along the X-axis. The boxplots along the Y-axis show the models' accuracy in classifying the performance difference between random pairs of disjoint submissions from diverse problems. The line plots show the accuracy of $p_{i}$ vs $p_{i}$, where the training and testing datasets are disjoint submissions of the problem $p_{i}$.} \noindent \textbf{Generalization: } \change{From Figure~\ref{fig:boxplot}, we can observe that a model built for a specific problem predicts the label of a disjoint set of submissions from (i) the same problem with up to 81\% accuracy (line chart for training set \texttt{E}); (ii) different problems from different algorithmic groups with up to 80\% accuracy (boxplot for the problem \texttt{E}). Further investigation shows that the highest accuracy occurs when the model built on the constructive-algorithm problem \texttt{E} classifies random pairs of submissions from the DFS problem \texttt{G}. Also, a model built on one problem can accurately classify the difference in execution times between random pairs of disjoint submissions from other problems of the same algorithmic group with up to 82\% accuracy. For example, a model built on the DFS problem \texttt{F} classifies submissions from another DFS-based problem, \texttt{G}, with up to 82\% accuracy. These observations show that the model is learning problem characteristics and not just memorizing programming constructs.} \change{To test our approach's generalizability, we build a model using a large dataset obtained by randomly selecting 100 submissions from each of 100 different problems with sufficient variation in execution times and more than 1000 correct solutions. We denote this dataset as \texttt{MP} (for multiple problems). We then evaluate the model trained on \texttt{MP} in predicting performance differences of submissions from both A-I and these $100$ problems (disjoint test set).
Figure~\ref{fig:boxplot} shows that the predictive model trained on \texttt{MP} can accurately classify the performance difference between random pairs of solutions with up to 84\% accuracy for A-I (boxplot) and 73\% accuracy for a disjoint set of \texttt{MP} submissions (line chart).} \noindent \textbf{Tree-LSTM vs GCN: } From Figure~\ref{fig:boxplot}, we can observe that the prediction task with the tree-LSTM-based embeddings consistently outperforms the one built using the GCN model. The tree-LSTM-based representations capture crucial hierarchical information about code structure that a generic graph-based model fails to capture, which explains GCN's poor performance. \begin{table}[t] \captionsetup{font=small,skip=0pt} \renewcommand{\arraystretch}{1.2} \centering \begin{tabular}{|c|c|c|c|} \hline & F & G & I \\ \hline F & .80 & .72 & .67 \\ \hline G & .82 & .76 & .68 \\ \hline I & .76 & .67 & .77 \\ \hline \end{tabular} \caption{Models trained and evaluated on different problems in similar algorithm groups (DFS and Graphs). Rows indicate the training dataset, and columns display the test set. While problems F and G share the same algorithmic classes (DFS, Graphs, and Trees), problem I has only a partial overlap (DFS, DP, Graphs).
This result indicates that a greater overlap in problem characteristics results in higher prediction accuracy.} \vspace{-10pt} \label{table:GEN} \end{table} \begin{table}[t] \footnotesize \renewcommand{\arraystretch}{1.2} \centering \begin{tabular}{|c|c|c|} \hline \rowcolor{gray!20} Layers & Uni-Directional & Bi-Directional \\ \hline \multirow{2}{*}{1} &0.773 & 0.769\\\cline{2-3} &0.780 & 0.78 \\ \hline \multirow{2}{*}{2} &0.765 & 0.767\\\cline{2-3} &0.789 & 0.786 \\ \hline \multirow{2}{*}{3} &0.766 & 0.77\\\cline{2-3} &0.783 & 0.767 \\ \hline \hline \rowcolor{gray!20} \multicolumn{3}{|c|}{Alternating layers} \\ \hline \multicolumn{3}{|c|}{\textbf{0.77} (A) and \textbf{0.804} (C)} \\ \hline \end{tabular} \captionsetup{font=small,skip=0pt} \caption{Prediction performance of the proposed approach on \textit{problem sets A and C}. We report the results obtained using different architectural choices; for each layer count, the upper row corresponds to problem set A and the lower row to problem set C.} \label{table:LAYERS} \vspace{-0.2in} \end{table} \begin{figure}[t] \captionsetup{font=small,skip=0pt} \centering \includegraphics[width=.4\textwidth]{roc} \caption{ROC curve on the validation set obtained using the multi-layer alternating Tree-LSTM architecture on \textit{problem set A}.} \label{fig:ROC} \vspace{-0.2in} \end{figure} \subsection{ROC vs Accuracy Metric} In addition to the accuracy metric, we evaluate all our models based on the receiver operating characteristic (ROC) curve to study how the prediction task's performance varies as the confidence threshold changes. The confidence threshold on the models' output (probability) determines whether the performance difference between two code pairs should be classified as positive or negative. Increasing the confidence threshold lowers the false positive rate and, correspondingly, the true positive rate.
A lower false-positive rate means that if a model classifies the change in execution time as increasing, the application developer can confidently invest the time and effort to resolve coding inefficiencies (perhaps by using other tools). For example, in Figure \ref{fig:ROC}, we can observe that the ROC curve for problem \texttt{A}, obtained using the $3$-layer alternating tree-LSTM architecture, achieves a high area under the curve of $0.85$, i.e., a randomly chosen positive pair is ranked above a randomly chosen negative pair $85\%$ of the time. Since this measure agrees with the accuracy metric, we only report the experiments' accuracy scores. \subsection{Impact of Architectural Choice} This experiment's objective is to evaluate the impact of the different architectural choices for the tree-LSTM (the best representation-learning model for source code, as found in Section~\ref{sec:LAYERS}) on the overall prediction accuracy. Table~\ref{table:LAYERS} presents the impact of three architectural choices on the accuracy of the prediction task. We increase the number of layers from $1$ to $3$ for the uni- and bi-directional architectures and observe an insignificant change in the accuracy. The bi-directional architecture is significantly more complicated and takes much longer to train since information is combined from both the forward and backward passes at every layer. The lack of improvement in model accuracy indicates overfitting due to the arbitrary increase in model complexity. The alternating architecture produces representations at least as effective as those of the other architectures; e.g., it improves the downstream predictive task's performance by 2\% for problem C. The alternating architecture combines information once during the forward pass followed by a backward pass, thereby gathering more information than a uni-directional one, while its accuracy is similar to that of the bi-directional architecture and it is faster to train.
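The area under the ROC curve used above can be computed without any ML library via the rank-sum (Mann-Whitney) formulation; a minimal sketch, with tie handling omitted:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive example
    is scored above a randomly chosen negative one (no tied scores)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # rank-sum of positives minus its minimum possible value, normalized
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A perfectly separated validation set yields an AUC of 1.0, while a random classifier yields about 0.5.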
\subsection{Impact of Data Sampling and Augmentation} \label{sec:dataneed} \begin{figure}[t] \captionsetup{font=small,skip=0pt} \centering \includegraphics[width=\columnwidth]{figs/perf_vs_numsub_pairs_all.pdf} \caption{(a) Accuracy of the model based on a percentage of maximum pairs for 2048 submissions. (b) Accuracy changes based on training set characteristics. } \label{fig:pairs} \vspace{-0.2in} \end{figure} A crucial aspect of modern machine learning methods is the complex trade-off between the task complexity and the amount of training data required to produce reliable models. Hence, we study the impact of data sampling on our approach's observed performance. We vary (i) the number of submissions, with a fixed ratio of pairs in every case, and (ii) the number of pairs for a given number of submissions. \noindent \textbf{Impact of the number of submissions during training: } First, we increase the number of submissions in the training set from $32$ to $4096$ by powers of $2$. For each case, we construct the training pairs by selecting a random $75\%$ of all possible pairs in that case. For all tests, we use the same test set of submissions. Figure \ref{fig:pairs}(a) shows results for problem set A with the multi-layer alternating tree-LSTM architecture. Figure \ref{fig:pairs}(a) shows that the accuracy steadily improves as the number of submissions grows. However, beyond $1000$ submissions, the returns diminish. Since data collection and annotation are time-consuming, the ability to build a reliable model from a moderate number of training samples improves this methodology's practicality. \noindent \textbf{Impact of the percentage of pairs during training: } We also investigate how many pairs should be included from those submissions for training. We perform this study by increasing the percentage of pairs used for a fixed number of submissions. In particular, we fix the number of submissions at $2048$ and vary the ratio of pairs used.
Interestingly, Figure \ref{fig:pairs}(b) shows that the accuracy initially improves rapidly as the number of (randomly chosen) pairs increases, achieving an accuracy improvement of 10\%. However, the accuracy score then begins to dip as we continue to include more pairs, since complex models such as deep networks tend to overfit when the training data's complexity is high. This observation motivates the need for further investigation of sampling strategies for optimal performance. \noindent \textbf{Impact of the ordering of pairs during training: } We evaluate how important it is for the model to train on both orderings of a single pair ($(a,b)$ and $(b, a)$). We compare a training set containing only one ordering of each pair to a model trained on the same number of overall pairs, with half being the reverses of the others. We find that the accuracy improves marginally, by up to 2\%, from using symmetrical pairs as opposed to non-symmetrical ones (the figure is not included due to space limitations). \subsection{Prediction Sensitivity} \label{sec:sensitivity} \begin{figure}[t] \captionsetup{font=small,skip=0pt} \centering \includegraphics[width=\columnwidth]{figs/perf_vs_std_all.pdf} \caption{Studying the sensitivity of the proposed approach.} \label{fig:SENS} \vspace{-0.2in} \end{figure} When comparing execution times, small differences are less significant than large ones; e.g., a 1-millisecond difference is far less meaningful than a 4000-millisecond one, yet the classifier treats both identically. To evaluate how sensitive the model predictions are to the variation in the submissions' execution times, we sort the evaluation sets and record accuracy for pairs whose difference exceeds a certain threshold. Figure \ref{fig:SENS} shows the results for models trained on problems A, B, and C. For these three problems, we observe that the accuracy of the prediction task consistently improves as the minimum difference that the model needs to resolve increases.
Further investigation uncovers that a massive difference in execution time for source code typically comes from having either loop constructs (e.g., for, while) or significantly longer code. Hence, it becomes easier for the model to spot discriminatory structure in the source code when source code variants differ significantly in execution times. The execution times of all the problems in this dataset are reasonably close. These results indicate that as we move toward problems with a more significant difference in execution time between versions, the model will perform better. \begin{figure}[t] \captionsetup{font=small,skip=0pt} \centering \includegraphics[width=\columnwidth]{figs/embeddings_all.pdf} \caption{Visualization of the learned representations of nodes and ASTs obtained using t-SNE. (a) Two-Dimensional Representation of the node embeddings. Green are operations, red are other expressions, blue are statements, yellow are literal values, and black are support nodes. (b) Two-Dimensional Representation of the AST latent representations. Each color corresponds to one of the problem sets.} \label{fig:EMBED} \vspace{-0.2in} \end{figure} \subsection{Visualizing Learned Representations} We initialize nodes with random embedding vectors, and the model subsequently learns representations of each node from the data. To evaluate the effectiveness of the learning process, we map the embeddings down from the $\lambda = 120$ dimensional space to a two-dimensional space and plot in Figure \ref{fig:EMBED}. For the low-dimensional projection, we leverage the unsupervised, non-linear t-Distributed Stochastic Neighbor Embedding (t-SNE) technique~\cite{maaten2008visualizing} that is primarily used for data exploration and visualizing high-dimensional data. 
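The projection step can be sketched as follows; to keep the example dependency-free we substitute a plain PCA projection for the paper's t-SNE (both map the $\lambda = 120$-dimensional embeddings to 2-D, though t-SNE is non-linear):

```python
import numpy as np

def project_2d(embeddings):
    """Project (n, 120)-dimensional embeddings to 2-D via the top two
    principal components (a linear stand-in for t-SNE)."""
    X = np.asarray(embeddings, dtype=float)
    X = X - X.mean(axis=0)            # center before extracting directions
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T               # coordinates along the top-2 directions
```

In practice one would call `sklearn.manifold.TSNE(n_components=2).fit_transform(X)` for the figure itself; the PCA version above only illustrates the dimensionality reduction.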
\noindent\textbf{Node representations: } In a t-SNE plot, nearby nodes have similar representations, nodes aligned along a single axis share some similarity, and nodes separated across both axes are significantly different and should therefore have differing representations. Figure~\ref{fig:EMBED}(a) shows that the tree-LSTM model discovers that the string and char literal representations are closely related. The model learns that the \texttt{plus plus} and \texttt{plus assign} operators are close in nature and hence groups them closely. Since the \texttt{for} and \texttt{while} representations share similar values along a single axis, the model can encode their similarities into the representations while still capturing their differences. \noindent\textbf{Code representations: } Similarly, we use our model to generate code embeddings for three different problems with $100$ submissions each and project those to two-dimensional representations (red, blue, and green). In Figure~\ref{fig:EMBED}(b), we observe that the model creates distinctly different representations for each problem. We also observe that the problems represented in red and green often form neighboring clusters, indicating that they are more similar to each other than to the problem in blue.
\titlespacing{\section}{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt} \titlespacing{\paragraph}{0em}{.4em}{.5em} \titlespacing{\section}{0em}{-1em}{-1em} \titlespacing{\subsection}{0em}{-1em}{-1em} \titlespacing{\subsubsection}{0em}{-.5em}{-.5em} \setlength{\parindent}{0em} \setlength{\parskip}{.5em} \section{Introduction} \label{sec:intro} \input{introduction} \section{Dataset Description} \label{sec:dataset} \input{dataset} \section{Proposed Approach} \label{sec:overview} \input{overview} \section{Methods} \label{sec:methodology} \input{method} \section{Experimental Setup} \label{sec:setup} \input{setup} \section{Results} \label{sec:results} \input{eval} \section{Discussions} \label{sec:future-work} \input{future} \section{Related Work} \label{sec:related-work} \input{related} \section{Conclusion} \label{sec:conclusions} \input{conclusions} \bibliographystyle{plainnat} \subsection{AST Generation} \label{sec:GENAST} The first step towards applying deep neural networks to code is to create an appropriate representation. In general, code can be treated as a text excerpt and processed with standard language-modeling tools such as word or document embeddings. However, we advocate using abstract syntax trees (ASTs), since they are better descriptors of code structure. This transformation introduces an additional challenge: the neural network must leverage the inherent tree structure of ASTs.
To generate the ASTs, we use the ROSE~\cite{dan2011rose} compiler infrastructure. ROSE is a flexible, portable, and scalable source-to-source compiler infrastructure widely used in the scientific community, spanning national laboratories, universities, and industry. \change{The AST from ROSE is modified to include only internal nodes that are part of the source code's function definitions. This process removes irrelevant information from the tree and allows models to train faster.} For simplicity, the source code's function definitions are all set as children of a root node. \change{While these simplifications make the embedding learning process simpler, a more fine-grained representation of the tree nodes could provide additional information for the model to exploit.} Finally, the AST generation process outputs a list of the node IDs and a list of links between nodes to represent the tree. \subsection{Constructing Node Embeddings} \label{sec:NODEREP} We assign a unique ID to each type of internal node (e.g., \texttt{for}, \texttt{while}), consistent across all trees in the database. A node type gets the same ID even when it appears multiple times in the same tree. Following standard practices in the natural language processing area, our machine learning pipeline transforms the user-defined tokens (IDs in ASTs) into vector representations. A naive approach for constructing a vector representation from an AST is one-hot encoding, where each ID is assigned a $1$-sparse binary vector with the value $1$ at the location of the ID and $0$ elsewhere. Since such a vector's size is a function of the total number of unique IDs in the dataset, it is high-dimensional (often referred to as a \textit{cursed} representation). Such representations can lead to severe overfitting when building predictive models. Hence, we investigate a different approach in this work that assigns a specific vector representation to each ID using an embedding lookup structure.
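A minimal sketch of such an embedding lookup; the vocabulary size $D$ below is illustrative, while $\lambda = 120$ matches the embedding size used in the paper:

```python
import numpy as np

LAMBDA, D = 120, 500          # embedding size lambda; D = number of unique node-type IDs (illustrative)
rng = np.random.default_rng(0)
E = rng.standard_normal((D, LAMBDA))   # lambda x D trainable parameters, randomly initialized

def embed(node_ids):
    """Map a sequence of AST node-type IDs to their embedding rows;
    repeated occurrences of a type receive the identical vector."""
    return E[np.asarray(node_ids)]
```

Unlike a $D$-dimensional one-hot vector, each lookup returns a dense $\lambda$-dimensional row that training can subsequently tune.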
We fix the dimensionality of the embedding at $\lambda$ and initialize the embeddings randomly. The neural network training process can subsequently tune the embeddings. The total number of parameters to be optimized in this step is $\lambda \times D$, where $D$ is the total number of unique IDs in our dataset. This embedding layer allows the model to infer similarities between nodes in terms of their performance impact much more effectively than simple one-hot encoding. Once encoded, these embeddings are passed along to train a model for generating representations for the entire tree. In this paper, we initialize using random embeddings. In the future, we will investigate using pre-trained embeddings by adapting word embedding techniques (e.g., Skip-gram~\cite{mikolov2013efficient}, GloVe~\cite{pennington2014glove}) from the Natural Language Processing (NLP) literature. \subsection{Training Models} Following the state of practice in neural networks, we advocate using multiple layers in our tree-LSTM architecture. In this design, the hidden states at the end of one layer are used as the next layer's node representations. This process typically leads to greater refinement of each sub-tree's representation, as each layer provides a better representation of the tree structure. In addition to this native implementation, which we refer to as the \textit{uni-directional} tree-LSTM, we also consider two variants: the \textit{bi-directional} tree-LSTM and the \textit{alternating} tree-LSTM. In the first variant, we allow the tree transition to be bi-directional, i.e., from root to leaf nodes and from leaves to the root. As illustrated in Figure \ref{fig:3LAYER}(b), two different tree-LSTMs run independently, with one having hidden states going from child to parent and the other going from parent to child. The parent node copies its representation to all its children instead of just sending its representation to a single node.
This information propagation pattern enables a node's hidden state to include information from both its children and its ancestors. Finally, the two representations are concatenated to form a unified representation for a node. Since our approach uses the final root-node representation to make the prediction, the downward pass in the bi-directional variant's final layer is not required. In the second variant of the tree-LSTM, bi-directional training is simplified by alternating forward and backward passes. As shown in Figure \ref{fig:3LAYER}(c), a $3$-layer tree-LSTM is constructed with $2$ forward layers and $1$ backward layer between them. Compared to bi-directional training, this contains only half the number of parameters to train, avoids overfitting in practice, and produces highly effective latent representations for ASTs. In our experiments, we find that the alternating tree-LSTM consistently produces the best performance in all cases. \subsection{Classifier Design} \label{sec:classifier} Since our approach's overall objective is to perform comparative analysis, we first concatenate the hidden representations of the two ASTs and subsequently pass the result to a fully connected classifier with a sigmoid activation. This classifier's number of parameters is $2d$, where $d$ is the size of the latent representation in our tree-LSTM. We then compute the binary cross-entropy loss between the predicted probabilities and the correct labels to optimize the network's unknown parameters. \subsection{Problem Formulation} Formally, let us denote the ASTs for a pair of source codes by $p_i$ and $p_j$, respectively, where every $p_i \in \mathrm{P}$ is a submission pertinent to the problem $\mathrm{P}$. We define a deep feature extractor $F$ that processes an AST to produce a latent representation ${z} \in \mathrm{Z}$, where $\mathrm{Z}$ denotes the latent space. Mathematically, this process is expressed as $F: \mathrm{P} \mapsto \mathrm{Z}$.
Since the goal is to predict if the AST of $p_j$ is expected to have a lower execution time than $p_i$, we first concatenate their features to produce $\bar{{z}}_{ij} = [{z}_i, {z}_j]$. As a result, when the dimensionality of the latent space $\mathrm{Z}$ is $d$, the size of the concatenated feature $\bar{{z}}_{ij}$ becomes $2d$. The classifier function $C$ maps the concatenated feature $\bar{{z}}_{ij} \in \bar{\mathrm{Z}}$ into the target variable $y_{ij} \in \mathrm{Y}$, where the output space $\mathrm{Y}$ is discrete and assumes one of the two values $0$ or $1$. In other words, the classifier mapping can be expressed as $C: \bar{\mathrm{Z}} \mapsto \mathrm{Y}$. \change{The feature extractor $F$ consists of two components. First, it learns to represent each code construct of an AST (a \textit{node} of the tree) using an embedding lookup function that assigns a feature vector to each node depending on its type (e.g., \texttt{for loops} or \texttt{if statements}). Second, it learns a representation $\mathbf{z}$ for the entire AST or a sub-tree using the deep learning algorithm. In this paper, we propose the tree-LSTM for automatic feature representation. Section~\ref{sec:tree-lstm} discusses our rationale in detail. Our proposed approach jointly infers the representations for each of the nodes and subsequently for the entire AST. This automated learning approach alleviates the need for any manual feature engineering, which presents a nontrivial challenge for source code.} The classifier is implemented as a feed-forward network and produces the likelihood of one version of the code being superior to the other in terms of expected performance.
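A sketch of this classifier head, with untrained random weights for illustration (dimension $d$ and initialization are assumptions, not the trained model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 100                                   # size of each root representation z
rng = np.random.default_rng(0)
w = rng.standard_normal(2 * d) * 0.01     # the classifier's 2*d parameters
b = 0.0

def classify(z_i, z_j):
    """Concatenate the two latent codes and score with a sigmoid unit;
    the output approximates P(t_i >= t_j)."""
    return sigmoid(w @ np.concatenate([z_i, z_j]) + b)

def bce(p, y):
    """Binary cross-entropy between predicted probability p and label y."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))
```

Training would minimize `bce` over the labeled pairs, backpropagating through both the classifier and the tree-LSTM feature extractor.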
This likelihood is then encoded into a decision label using Equation~\ref{eqn:decision}: \begin{gather} pred(p_{i}, p_{j}) \rightarrow \begin{cases} 0 & t_{i} < t_{j}\text{; } p_{i} \text{ is faster}\\ 1 & t_{i} \ge t_{j}\text{; } p_{j} \text{ is faster or equivalent} \end{cases} \label{eqn:decision} \vspace{-0.2in} \end{gather} \subsection{Tree-Structured LSTMs for AST Modeling} \label{sec:tree-lstm} Since conventional solutions such as fully connected networks and convolutional neural networks typically operate on unstructured high-dimensional or image data, they cannot directly handle ASTs. Consequently, there has been a recent surge in deep learning techniques designed specifically for these challenging data types. The tree-LSTM~\cite{tai2015improved} is a deep learning architecture for processing tree-structured data. Recent work has demonstrated that leveraging the hierarchical structure of languages, both natural and programming, gives models salient characteristics of the data and improves performance on downstream tasks~\cite{tai2015improved,eriguchi2016tree}. Consequently, we propose using the tree-structured LSTM, a specific construction of recurrent neural networks, to produce concise vector representations for each source code. Intuitively, our model uses hierarchical accumulation to encode each non-terminal node's representation by aggregating the hidden states of all of its descendants. The accumulation process occurs in two stages. First, the model induces the value states of non-terminals with hierarchical embeddings, which helps the model become aware of the hierarchical and sibling relationships between the nodes. Second, the model performs an upward cumulative-sum operation on each \textit{target node} \change{(the node whose representation is being learned)}, accumulating all elements in the branches originating from the target node down to its descendant leaves.
In Section~\ref{sec:results}, we thoroughly evaluate the performance of the tree-LSTM against Graph Convolution Network (GCN)-based deep representation learning models suitable for structured graph data. \change{We demonstrate that the accuracy of the predictions is higher with embeddings built using the tree-LSTM than with those of the GCN.} Broadly, the tree-LSTM is a recurrent neural network (RNN)~\cite{williams1989learning}, designed to perform feature extraction from arbitrary-length sequence data via the recursive application of a transition function on a hidden state vector. It operates by taking in the hidden state of the previous element of the sequence and the input for the current element. At each time step $t$, the hidden state vector $h_{t}$ is a function of the input vector $x_{t}$ (the vector representation of the $t^{th}$ element in the sequence) and the previous hidden state $h_{t-1}$. Consequently, $h_{t}$ can be interpreted as a concise representation of the sequence of elements observed up to time $t$. In a typical RNN, the transition function is implemented as follows: \begin{equation} h_t = \tanh(W x_t + U h_{t-1} + b), \end{equation}\noindent where $W$, $U$, and $b$ are learnable parameters, and $\tanh$ denotes the hyperbolic tangent nonlinearity. An inherent limitation of RNNs is that, as the sequence length grows, the problem of \textit{exploding} or \textit{vanishing} gradients makes training very difficult~\cite{pascanu2013difficulty}. The LSTM architecture addresses this limitation in learning long-term dependencies by introducing a memory cell that preserves states over long periods of time~\cite{sundermeyer2012lstm, graves2005framewise}. For a time step $t$, an LSTM unit typically comprises an input gate $i_{t}$, a forget gate $f_{t}$, an output gate $o_{t}$, a memory cell state $c_{t}$, and a hidden state $h_{t}$.
Intuitively, the forget gate controls the extent to which the memory cell's previous state is forgotten, the input gate controls how much each unit is updated, and the output gate controls the exposure of the internal memory state. In a nutshell, the hidden state vector in an LSTM unit is a gated, partial view of the internal memory cell state. Mathematically, the transition equations are as follows: \begin{align} i_t &= \sigma(W^i x_t + U^i h_{t-1} + b^i),\nonumber \\ f_t &= \sigma(W^f x_t + U^f h_{t-1} + b^f),\nonumber \\ o_t &= \sigma(W^o x_t + U^o h_{t-1} + b^o),\nonumber \\ u_t &= \tanh(W^u x_t + U^u h_{t-1} + b^u),\nonumber \\ c_t &= i_t \odot u_t + f_t \odot c_{t-1},\nonumber \\ h_t &= o_t \odot \tanh({c_t}). \end{align} A limitation of this architecture is that it allows only strictly sequential information propagation. In the case of ASTs, however, information flows from multiple children to a given parent node. \change{Hence, we adopt a tree-structured architecture of the LSTM to deal with information flow through an AST~\cite{tai2015improved}}. The crucial difference between an LSTM unit and a tree-LSTM unit is that the gating vectors and memory cell updates of the latter depend on the states of possibly many child units. Additionally, instead of a single forget gate, the tree-LSTM unit contains one forget gate for each child, which allows it to selectively leverage information from each child. Section~\ref{sec:NODEREP} describes the input vector at each node. Figure~\ref{fig:3LAYER}(a) shows that the transition function is applied to the leaf nodes first and then progressively moves up the tree to the root node.
Mathematically, this can be described as follows, where $\mathcal{C}(j)$ denotes the set of children of a node $j$: \begin{align} \tilde{h}_j &= \sum_{k \in \mathcal{C}(j)} h_k,\nonumber \\ i_j &= \sigma(W^i x_j + U^i \tilde{h}_j + b^i),\nonumber \\ f_{jk} &= \sigma(W^f x_j + U^f h_k + b^f), \nonumber \\ o_j &= \sigma(W^o x_j + U^o \tilde{h}_j + b^o), \nonumber \\ u_j &= \tanh(W^u x_j + U^u \tilde{h}_j + b^u), \nonumber \\ c_j &= i_j \odot u_j + \sum_{k \in \mathcal{C}(j)} f_{jk} \odot c_k,\nonumber \\ h_j &= o_j \odot \tanh({c_j}). \end{align}Note that, after processing the entire AST \change{(or a selected sub-tree)}, the classifier function uses the final hidden representation at the root node (of the sub-tree) for prediction. \subsection{System} We run our experiments on the Google Cloud Platform (GCP). Specifically, we use a machine with eight virtual CPUs, 30 GB RAM, and one NVIDIA Tesla P100 GPU with 16 GB memory. Each virtual CPU is backed by a 2.30 GHz Intel(R) processor with 4 cores, based on the x86\_64 architecture. The GPU is equipped with NVIDIA's CUDA 10.0 toolkit. \subsection{Deep Representation Learning Techniques} \label{sec:gcn} In this paper, we evaluate the efficacy of our proposed tree-LSTM based representation learning technique (described in Section~\ref{sec:tree-lstm}) against that of a more generic one, the Graph Convolution Network (GCN)~\cite{kipf2016semi,schlichtkrull2018modeling}. The GCN is a generalization of Convolutional Neural Networks (CNN)~\cite{kalchbrenner2014convolutional} from regular grid data to graph-structured data; it stacks multiple graph convolution layers to extract high-level node representations. A GCN model takes the graph-structured data as input and generates a vector representing a source-code. The GCN applies semi-supervised node classification, which classifies each node of the tree to help decide the type for the whole AST.
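The child-sum equations above amount to a post-order traversal of the AST; the following is a minimal scalar sketch on a hypothetical toy tree (illustrative only, not our actual model or node representation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def tree_lstm(node, params):
    """Child-sum tree-LSTM: returns (h_j, c_j) for `node`, recursing over children.

    `node` is (x_j, [children]); `params` maps gate name -> (W, U, b) scalars.
    """
    x, children = node
    states = [tree_lstm(child, params) for child in children]  # post-order
    h_tilde = sum(h for h, _ in states)                        # summed child states
    gate = lambda g, h: params[g][0] * x + params[g][1] * h + params[g][2]
    i = sigmoid(gate("i", h_tilde))
    o = sigmoid(gate("o", h_tilde))
    u = math.tanh(gate("u", h_tilde))
    # One forget gate per child, each computed from that child's own hidden state.
    c = i * u + sum(sigmoid(gate("f", h)) * c for h, c in states)
    h = o * math.tanh(c)
    return h, c

# A tiny AST: a root with two leaves; the root's hidden state feeds the classifier.
tree = (1.0, [(0.5, []), (-0.5, [])])
params = {g: (0.1, 0.2, 0.0) for g in ("i", "f", "o", "u")}
root_h, root_c = tree_lstm(tree, params)
```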
We extend the GCN model by creating a wrapper layer that combines information from an internal node's directly connected nodes. The significant difference between GCN and tree-LSTM lies in the information flow to each internal node: GCN leverages all neighboring nodes, whereas tree-LSTM leverages only the parent-child relationships. The source-code embeddings are then passed from GCN to the classifier, as described in Section~\ref{sec:classifier}. \subsection{Hyper-parameter Tuning} \label{sec:hyper} For automated hyper-parameter tuning, we leverage the Optuna optimization framework~\cite{akiba2019optuna}. For GCN, we observe that the most critical parameters to tune for the given downstream prediction task are the number of convolution layers and the size of the hidden layer. We vary the number of convolutional layers from 1 to 16 for GCN (more than that exhausts GPU memory) and the hidden layer's size from 8 to 256. Our experiments show that 6 convolutional layers with a hidden layer of size 117 achieve the best accuracy ($68.5\%$). For tree-LSTM, we use $100$ hidden states and feature embedding vectors of length $120$. With these parameters, we achieve the best accuracy ($73\%$) for the large dataset.
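To make the GCN baseline of Section~\ref{sec:gcn} concrete, a single graph-convolution layer in the propagation style of \cite{kipf2016semi} can be sketched in pure Python (a toy example with hypothetical dimensions, not our actual implementation):

```python
import math

def gcn_layer(adj, feats, weight):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W), pure Python."""
    n = len(adj)
    # Add self-loops, then symmetrically normalize by node degree.
    a_hat = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    # Aggregate neighbor features ((norm @ feats) @ weight), then apply ReLU.
    agg = [[sum(norm[i][k] * feats[k][j] for k in range(n))
            for j in range(len(feats[0]))] for i in range(n)]
    return [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(len(weight))))
             for j in range(len(weight[0]))] for i in range(n)]

# Two connected nodes with 1-dimensional features and an identity weight:
# A + I = [[1,1],[1,1]], degrees 2, so each node ends up averaging the features.
H1 = gcn_layer(adj=[[0.0, 1.0], [1.0, 0.0]], feats=[[1.0], [3.0]], weight=[[1.0]])
```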
\section{Introduction} \label{sec:intro} We consider level-set percolation for the Gaussian free field on the cable system of transient weighted graphs. This model often undergoes a phase transition at level zero, which was first proved in \cite{MR3502602} on $\mathbb{Z}^d,$ $d\geq3,$ then on trees in \cite{MR3492939} and \cite{MR3765885}, and later on a large class of transient graphs, possibly with positive killing measure, in \cite{DrePreRod3}. In many cases, this phase transition is also particularly well understood in the near-critical regime, see \cite{DiWiLu} on $\mathbb{Z}^d,$ $d\geq3,$ or the recent paper \cite{DrePreRod5} for additional results on general graphs. We will prove that this behaviour of the phase transition, although typical, does not extend to every transient weighted graph. In \cite{DrePreRod3}, two simple criteria on the underlying graph are introduced, which together imply that the critical parameter associated with this percolation problem is equal to zero: condition \eqref{capcondition}, which says that the capacity of any unbounded set is infinite, and $\mathit{\mathbf{h}}_{\text{kill}}<1,$ see \eqref{defh0}, which says that with positive probability the discrete random walk on the graph will not be killed (by the killing measure). Canonical examples of graphs verifying \eqref{capcondition} and $\mathit{\mathbf{h}}_{\text{kill}}<1$ are massless (i.e.\ with zero killing measure) vertex-transitive graphs. In this paper, we are interested in understanding the limits of this result, and we present various examples of graphs which answer the following questions positively: If either \eqref{capcondition} or $\mathit{\mathbf{h}}_{\text{kill}}<1$ fails, is it possible to find a graph with strictly positive or negative critical parameter? Can one find a graph such that the critical parameter is equal to zero, but \eqref{capcondition} or $\mathit{\mathbf{h}}_{\text{kill}}<1$ does not hold?
While trying to answer these questions, we will prove several results which are interesting in their own right: another characterization of random interlacements when $\mathit{\mathbf{h}}_{\text{kill}}\equiv1,$ see \eqref{disinterh0=1}, or Corollary \nolinebreak \ref{killedinterdes} for a more general result, and an isomorphism between the Gaussian free field and the $\mathit{\mathbf{h}}$-transform of random interlacements for any harmonic function $\mathit{\mathbf{h}},$ see Theorem \nolinebreak \ref{couplingintergffh}, which holds under the same conditions as the isomorphism in Theorem \nolinebreak \ref*{4couplingintergff} of \cite{DrePreRod3}. When $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}},$ this corresponds to an isomorphism between the Gaussian free field and killed random interlacements, that is, the trajectories in the random interlacement process which are killed, see Corollary \nolinebreak \ref{h0transformiso}, and when $\mathit{\mathbf{h}}$ is the potential kernel on $\mathbb{Z}^2,$ this corresponds to an isomorphism between the pinned free field and two-dimensional random interlacements, see Theorem \nolinebreak \ref{couplingintergffdim2}. We use the same setting and notation as in \cite{DrePreRod3}, which we now describe briefly, and we refer to Section \ref{sec:notation} for details. We consider a transient weighted graph $ \mathcal{G}= (\overline{G},\bar{\lambda},\bar{\kappa}),$ where $\overline{G}$ is a finite or countably infinite set, the weights $\bar{\lambda}_{x,y},$ $x,y\in{\overline{G}},$ describe the rate at which the canonical jump process on $\overline{G}$ jumps to a neighbor, and the killing measure $\overline{\kappa}_x,$ $x\in{\overline{G}},$ describes the rate at which it is killed.
We allow the killing measure $\overline{\kappa}$ to be infinite, and, using network equivalence, we define a triplet $(G,\lambda,\kappa)$ so that $\kappa$ is finite, $\{x\in{\overline{G}}:\bar{\kappa}_x<\infty\}\subset G,$ and the restriction of the jump process on $(G,\lambda,\kappa)$ to $\{x\in{\overline{G}}:\bar{\kappa}_x<\infty\}$ corresponds to the jump process on $(\overline{G},\bar{\lambda},\bar{\kappa}),$ see around \eqref*{4eq:defGfinite} in \cite{DrePreRod3} for details. When $\bar{\kappa}$ is finite, which will be the case in most of the examples in this article, this simply corresponds to the choice $(\overline{G},\bar{\lambda},\bar{\kappa})=(G,\lambda,\kappa).$ Unless explicitly mentioned otherwise, we will assume that the jump process on $\mathcal{G}$ is transient. One can naturally associate to $\mathcal{G}$ the cable system, or metric graph, $\tilde{\mathcal{G}},$ corresponding to a continuous version of the graph where each edge $e=\{x,y\}$ is replaced by an open interval $I_e$ linking $x$ to $y,$ and where we add a half-open interval $I_x$ starting in each vertex $x.$ \phantomsection \label{deftildeged}We denote by $\tilde{\mathcal{G}}^{-}$ the subset of $\tilde{\mathcal{G}}$ consisting only of the edges $I_e$ for $e\in{E},$ that is, removing the edges $I_x$ starting from $x$ for all $x\in{G}.$ One can then define a diffusion $X=(X_t)_{t\geq0}$ on the cable system $\tilde{\mathcal{G}},$ starting in $x$ under the probability $P_x^{\tilde{\mathcal{G}}}.$ It behaves like a Brownian motion inside the continuous edges and like the jump process on $\mathcal{G}$ on the vertices. The diffusion $X$ then stays in $\tilde{\mathcal{G}}$ until a time $\zeta\in{[0,\infty]},$ after which it remains in some cemetery state $\Delta,$ and, as $t\nearrow\zeta,$ either $X_t$ reaches the open end of the cable $I_x$ for some $x\in{G},$ and we say that $X$ has been killed, or $X_t$ exits every bounded and connected set, and we say that $X$ blows up or survives.
Note that, when starting in $x\in{G},$ the event that $X$ is killed corresponds to the event that the jump process on $\mathcal{G}$ is killed (i.e.\ by the killing measure). We define $\mathit{\mathbf{h}}_{\text{kill}}$ as the probability to be killed and $\mathit{\mathbf{h}}_{\text{surv}}$ as the probability to blow up: for all $x\in{\tilde{\mathcal{G}}},$ \begin{equation} \label{defh0} \mathit{\mathbf{h}}_{\text{kill}}(x)\stackrel{\mathrm{def.}}{=}P^{\tilde{\mathcal{G}}}_x\big((X_t)_{t\geq0}\text{ is killed}\big)\text{ and }\mathit{\mathbf{h}}_{\text{surv}}(x)\stackrel{\mathrm{def.}}{=}P^{\tilde{\mathcal{G}}}_x\big((X_t)_{t\geq0}\text{ blows up}\big)=1-\mathit{\mathbf{h}}_{\text{kill}}(x). \end{equation} The Gaussian free field on $\tilde{\mathcal{G}}$ is then defined under some probability $\P^G$ as the centered Gaussian field $(\phi_x)_{x\in{\tilde{\mathcal{G}}}}$ with covariance function given by \begin{equation} \label{defGFF} \mathbb{E}^G[\phi_x\phi_y]=g(x,y), \end{equation} where $g(x,y)$ is the average time spent in $y$ by the diffusion $X$ under $P_x^{\tilde{\mathcal{G}}},$ see \eqref{Greendef} for a precise definition. We are interested in the level sets of the Gaussian free field, defined as \begin{equation} \label{deflevelsets} E^{\geq h}\stackrel{\mathrm{def.}}{=}\{x\in{\tilde{\mathcal{G}}}:\phi_x\geq h\}. \end{equation} The critical parameter associated with the percolation of the level sets of the Gaussian free field on the cable system is defined as \begin{equation} \label{defh*} \tilde{h}_*\stackrel{\mathrm{def.}}{=}\inf\big\{h\in\mathbb{R}:\,\P^{G}(E^{\geq h}\text{ contains an unbounded connected component})=0\big\}, \end{equation} where we say that a connected set $F\subset\tilde{\mathcal{G}}$ is unbounded if and only if $F\cap G$ is infinite. In \cite{DrePreRod3}, two main results are proved about the percolation of the Gaussian free field on the cable system. 
First, \begin{equation} \label{eq:h0<1thenh*>0} \text{if $\mathit{\mathbf{h}}_{\text{kill}}<1,$ then $\tilde{h}_*\geq0,$} \end{equation} see \eqref*{4ifhkill<1thenh_*>0} in \cite{DrePreRod3}, where we write $\mathit{\mathbf{h}}_{\text{kill}}<1$ when $\mathit{\mathbf{h}}_{\text{kill}}(x)<1$ for all $x\in{\tilde{\mathcal{G}}},$ or equivalently $\mathit{\mathbf{h}}_{\text{kill}}(x)<1$ for some $x\in{\tilde{\mathcal{G}}}.$ This first result is actually an easy consequence of an extension of the isomorphism between random interlacements and the Gaussian free field on the cable system, Proposition \nolinebreak 6.3 in \cite{MR3502602}, to massive weighted graphs, see \eqref*{4couplingusualiso} and below in \cite{DrePreRod3} for details. Let us now introduce the following conditions \begin{equation} \label{capcondition} \tag{Cap} \mathrm{cap}(A)=\infty\text{ for all unbounded, closed, connected sets }A\subset \widetilde{\mathcal{G}}, \end{equation} where $\mathrm{cap}(A)$ is the capacity of the set $A,$ which one can interpret as the size of the set $A$ from the point of view of the diffusion $X,$ see \eqref{defcap} and below for a precise definition, and \begin{equation} \label{0bounded} \tag{Sign} E^{\geq0}\text{ contains }\P^G\text{-a.s.\ only bounded connected components}. \end{equation} Note that if \eqref{0bounded} holds, then $\tilde{h}_*\leq 0.$ The second main result of interest from \cite{DrePreRod3} is that \begin{gather} \label{eq:capfinite} \P^G\big(\mathrm{cap}(E^{\geq0}(x_0))<\infty\big)=1\text{ for all } x_0\in{\tilde{\mathcal{G}}}, \\\text{ and therefore \eqref{capcondition} $\Longrightarrow$ \eqref{0bounded},}\label{eq:capimplies0bounded} \end{gather} see Theorem \nolinebreak \ref*{4T:main},1) in \cite{DrePreRod3}. 
The implication \eqref{eq:capimplies0bounded} follows from the fact that, under condition \eqref{capcondition}, every closed and connected set with finite capacity is bounded, and generalizes results on $\mathbb{Z}^d,$ $d\geq3,$ from \cite{MR3502602} and on trees from \cite{MR3492939} and \cite{MR3765885}. It moreover follows from \eqref{eq:h0<1thenh*>0} and \eqref{eq:capimplies0bounded} that if \eqref{capcondition} is verified and $\mathit{\mathbf{h}}_{\text{kill}}<1,$ then $\tilde{h}_*=0.$ In this article, we present some (non-trivial) examples related to the implications in \eqref{eq:h0<1thenh*>0} and \eqref{eq:capimplies0bounded}, in order to better understand whether the two conditions \eqref{capcondition} and $\mathit{\mathbf{h}}_{\text{kill}}<1$ are optimal. We sum up these examples in the following theorem. \begin{The} \label{mainth} \begin{enumerate}[1)] \item There exist graphs with $\mathit{\mathbf{h}}_{\text{kill}}\equiv1$ and $\tilde{h}_*\geq0,$ see Corollary \nolinebreak \ref{Cor:h_0=1andh_*=0}, and so \eqref{eq:h0<1thenh*>0} is not an equivalence. \item On $(d+1)$-regular trees with large killing measure, or on almost any graph with sub-exponential volume growth such that $\kappa\geq c$ and $\lambda\leq C,$ we have $\tilde{h}_*<0,$ see Theorem \nolinebreak \ref{The:h_*<0}, and so the implication \eqref{eq:h0<1thenh*>0} is not trivial. \item There exists a graph for which \eqref{0bounded} holds, but for which this property cannot be directly deduced from \eqref{eq:capfinite}, see Proposition \nolinebreak \ref{Z20counterexample} and the discussion below \eqref{capGfinite}. In particular, \eqref{capcondition} does not hold for this graph, and so the implication \eqref{eq:capimplies0bounded} is not an equivalence. \item There exist graphs for which \eqref{0bounded} does not hold, see Proposition \nolinebreak \ref{h*infinity}, and so the implication \eqref{eq:capimplies0bounded} is not trivial.
\end{enumerate} \end{The} In order to obtain most of the examples in Theorem \nolinebreak \ref{mainth}, a major role will be played by random interlacements, which were initially introduced on $\mathbb{Z}^d,$ $d\geq3,$ in \cite{MR2680403}, then on any transient massless graph in \cite{MR2525105}, and extended to the cable system in \cite{MR3502602}. We explain in detail how to extend the definition of the random interlacement process to the cable system of any transient weighted massive graph in Section \ref{sec:defmassinter}, see in particular Theorem \nolinebreak \ref{nuexists}, as a Poisson point process of doubly non-compact trajectories modulo time-shift, and we denote by ${\cal I}^u\subset\tilde{\mathcal{G}}$ the set of points visited by at least one of these trajectories, see Section \ref{sec:notation} for details. Random interlacements are linked to the Gaussian free field via an isomorphism theorem, first derived in \cite{MR2892408} on discrete graphs and in \cite{MR3502602} on the cable system, and then strengthened in \cite{MR3492939} and \cite{DrePreRod3}. In order to illustrate the role of random interlacements in the study of percolation for the level sets of the Gaussian free field on $\tilde{\mathcal{G}},$ we recall the following consequence of the isomorphism theorem, which follows from Theorem \nolinebreak \ref*{4T:main},2) in \cite{DrePreRod3}: \begin{equation} \label{levelsetsvsIu} \begin{gathered} \text{ if \eqref{0bounded} holds, then }E^{\geq-\sqrt{2u}}\text{ has the same law as }\mathcal{C}_u\cup E^{\geq 0}\text{ under }\P^G\otimes\P^I, \\\text{ where }\mathcal{C}_u\text{ denotes the closure of the union of the connected components of} \\\text{the sign clusters $\{x\in{\tilde{\mathcal{G}}}:|\phi_x|>0\}$ intersecting the interlacement set ${\cal I}^u.$} \end{gathered} \end{equation} Let us now comment on the proofs and the four classes of examples in Theorem \nolinebreak \ref{mainth} in more detail.
The first example consists of a class of graphs with $\mathit{\mathbf{h}}_{\text{kill}}\equiv1,$ killing measure diverging to infinity, and total weight from a vertex at generation $n$ to all its children also diverging to infinity, see \eqref{condtreeh_*=0}, \eqref{condtreedyadich*=0} or \eqref{condlineh_*=0} for precise conditions. We prove that $\tilde{h}_*\geq0$ on these graphs using the coupling with random interlacements described in \eqref{levelsetsvsIu}. When $\mathit{\mathbf{h}}_{\text{kill}}\equiv1,$ one can describe random interlacements on $\tilde{\mathcal{G}}^{-}$ as follows: \begin{equation} \label{disinterh0=1} \begin{gathered} \text{if $\mathit{\mathbf{h}}_{\text{kill}}\equiv1,$ then the trace on $\tilde{\mathcal{G}}^{-}$ of the random interlacement process has the same} \\\text{law as a Poisson point process with intensity }u\sum_{x\in{G}}\kappa_xP^{\tilde{\mathcal{G}}^{-}}_x\text{ modulo time-shift,} \end{gathered} \end{equation} where $P^{\tilde{\mathcal{G}}^{-}}_x$ is the law of the trace $X^{\tilde{\mathcal{G}}^{-}}$ of $X$ on $\tilde{\mathcal{G}}^{-},$ see Section \ref*{4S:I_x} of \cite{DrePreRod3} for a more precise description of this law and \eqref*{4traceonG} and above in \cite{DrePreRod3} for a definition of the trace of a process. The description \eqref{disinterh0=1} of random interlacements is a direct consequence of our construction of random interlacements, see Theorem \nolinebreak \ref{nuexists}, and its proof can be found below Corollary \nolinebreak \ref{killedinterdes}. In Section \ref{sec:h0=1h_*=0}, we will use the description \eqref{disinterh0=1} to prove that, on the class of graphs that we consider, the random interlacement set either always contains a supercritical Galton-Watson tree or percolates along some fixed infinite path, and thus always contains an unbounded connected component.
Using \eqref{levelsetsvsIu} will, in turn, let us deduce that $\tilde{h}_*\geq0,$ and in fact $\tilde{h}_*=0$ under some additional conditions, see Corollary \nolinebreak \ref{Cor:h_0=1andh_*=0}. Note that one can find examples of such graphs with either sub-exponential volume growth or exponential volume growth, see the examples below Corollary \ref{Cor:h_0=1andh_*=0}, and that one can also find examples of graphs with $\tilde{h}_*>0$ and $\mathit{\mathbf{h}}_{\text{kill}}\equiv1,$ see Remark \ref{h0=1h_*=infinity}. The second class of examples in Theorem \nolinebreak \ref{mainth} consists of graphs with $\lambda\leq C,$ $\kappa\geq c,$ or just exponential decay of the Green function, see \eqref{condgvscap}, and which either have sub-exponential volume growth, see \eqref{condsizeball} for a more precise condition, or are $(d+1)$-regular trees with large enough killing measure. The proof of the inequality $\tilde{h}_*<0$ relies on a suitable renormalization scheme, see \eqref{defLk} and below. We first prove a quantitative bound on the probability that a cluster of $E^{\geq-h}$ has diameter $L$ for small, but positive, $h$ (depending on $L$ in an explicit form) by combining \eqref{levelsetsvsIu} and a result about the two-point function for the sign clusters of the Gaussian free field, Proposition \nolinebreak 5.2 in \cite{MR3502602}, see Lemma \nolinebreak \ref{iniren} for details. We then iterate this bound using our renormalization scheme to remove the dependency of $h$ on $L,$ relying on decoupling inequalities for the Gaussian free field from \cite{MR3325312}, see Lemma \nolinebreak \ref{lemmainduction}. One can then easily check that this last bound indeed implies that $\tilde{h}_*<0,$ and in fact we show that the probability that a component of $E^{\geq h}$ has diameter at least $L$ decays exponentially fast for some $h<0,$ see \eqref{connectingtoballdecayexpo}.
Combining this with \eqref{levelsetsvsIu} leads, in turn, to similar results for the random interlacement set, see Corollary \nolinebreak \ref{cor:u_*>0}. The third example in Theorem \nolinebreak \ref{mainth} is more challenging to find than a graph simply verifying \eqref{0bounded} but not \eqref{capcondition}. Indeed, as noted in Remark \ref*{4R:mainresults1},\ref*{4signwithoutcap} in \cite{DrePreRod3}, one can easily obtain such graphs by simply adding vertices to a graph verifying \eqref{capcondition}, and so \eqref{0bounded} by \eqref{eq:capimplies0bounded}, so that the new graph does not verify \eqref{capcondition} but still verifies \eqref{0bounded}, see the beginning of Section \ref{sec:Z20} for details. For these examples, one can however still almost directly deduce \eqref{0bounded} from \eqref{eq:capfinite}. In order to avoid this reasoning, and, in a sense, find a real counterexample to the implication \eqref{eq:capimplies0bounded}, we introduce a condition \eqref{capGfinite} on the graph $\mathcal{G},$ under which no information about the percolation of $E^{\geq0}$ can be obtained from \eqref{eq:capfinite}, see the discussion below \eqref{capGfinite} for details. One can interpret this condition \eqref{capGfinite} as a stronger form of the complement of the condition \eqref{capcondition}. An example of a graph verifying \eqref{0bounded} and \eqref{capGfinite} is the graph $\mathbb{Z}^{2,0},$ which corresponds to the two-dimensional square lattice with infinite killing measure at the origin and zero killing measure everywhere else. It is easy to prove that \eqref{capGfinite} holds for $\mathbb{Z}^{2,0},$ and so \eqref{capcondition} does not hold, but in order to prove that \eqref{0bounded} holds, we introduce the notion of the Doob transform $\mathcal{G}_{\mathit{\mathbf{h}}}$ of the graph $\mathcal{G},$ where $\mathit{\mathbf{h}}$ is a harmonic function on $\tilde{\mathcal{G}},$ see Definition \ref{harmo}.
The diffusion $X$ on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}$ corresponds to a time-changed version of the usual $\mathit{\mathbf{h}}$-transform of the diffusion $X$ on $\tilde{\mathcal{G}},$ see for example Chapter 11 in \cite{MR2152573}, and we refer to \eqref{semigrouph} for a more precise statement. The sign clusters of the Gaussian free field on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}$ correspond to the sign clusters of the Gaussian free field on $\tilde{\mathcal{G}},$ see \eqref{eq:GFFh}, and so \eqref{0bounded} holds for $\mathcal{G}$ if and only if it holds for $\mathcal{G}_{\mathit{\mathbf{h}}}.$ In particular, if \eqref{capcondition} holds on $\mathcal{G}_{\mathit{\mathbf{h}}}$ for some harmonic function $\mathit{\mathbf{h}},$ then \eqref{0bounded} holds on $\mathcal{G}$ by \eqref{eq:capimplies0bounded}, see Corollary \nolinebreak \ref{capimplieseverythingh}. \label{pagedefa}In the case of $\mathbb{Z}^{2,0},$ let $({\bf a}(x))_{x\in{\tilde{\mathbb{Z}}}^2}$ be the continuous potential kernel associated to the diffusion $X$ on $\tilde{\mathbb{Z}}^2,$ defined as in (4.15) of \cite{MR2677157} for $x\in{\mathbb{Z}^2},$ with ${\bf a}$ constant on $I_x$ for each $x\in{\mathbb{Z}^2},$ and linear on $I_e$ for each edge $e$ of $\mathbb{Z}^2.$ It is a classical result that the potential kernel ${\bf a}$ is harmonic, see Proposition \nolinebreak 4.4.2 in \cite{MR2677157}. We check that \eqref{capcondition} holds for $\mathbb{Z}^{2,0}_{\bf a},$ and this proves that $\mathbb{Z}^{2,0}$ indeed verifies \eqref{0bounded}, see Proposition \nolinebreak \ref{Z20counterexample}. Finally, let us comment on the last class of examples in Theorem \nolinebreak \ref{mainth}.
They consist of $(d+1)$-regular trees such that the length of the edge between a vertex at generation $n$ and one of its children is $1/(2\alpha^n),$ $\alpha<1.$ One can show under some conditions on $\alpha$ and $d,$ see \eqref{condondalpha}, that, for some vertex $x_0,$ $E^{\geq h}(x_0)$ is included in a supercritical Galton-Watson tree for all $h\in\mathbb{R},$ and so $\tilde{h}_*=\infty,$ see Proposition \nolinebreak \ref{h*infinity}. One can find such trees with zero killing measure, or with $\mathit{\mathbf{h}}_{\text{kill}}\equiv1,$ depending on the choice of $\alpha,$ see Remark \ref{endremark},\ref{alphadexist}. We finish this section by mentioning some interesting results that we obtain along the way. In Section \ref{sec:Doob}, we use the notion of $\mathit{\mathbf{h}}$-transform of the graph $\mathcal{G}$ to prove that one can define a notion of $\mathit{\mathbf{h}}$-transform of random interlacements, see Definition \ref{defhtransforminter}, and an isomorphism between the Gaussian free field on $\mathcal{G}$ and the $\mathit{\mathbf{h}}$-transform of random interlacements similar to Theorem \nolinebreak 2.4 in \cite{MR3492939}, under the same condition as in Theorem \nolinebreak \ref*{4T:main},2) in \cite{DrePreRod3}, see Theorem \nolinebreak \ref{couplingintergffh}. In particular, it implies a coupling similar to \eqref{levelsetsvsIu} but for the $\mathit{\mathbf{h}}$-transform of random interlacements, see \eqref{levelsetsvsIuh}. At the end of Section \ref{sec:Doob}, we gather some interesting consequences of this isomorphism when choosing $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}}$ or $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{surv}},$ see Corollaries \nolinebreak \ref{h0transformiso} and \ref{couplingintergffK}. 
For the moment, in order to illustrate more precisely the kind of results one can obtain using the $\mathit{\mathbf{h}}$-transform of random interlacements and the Gaussian free field, we present a theorem concerning two-dimensional random interlacements and the pinned Gaussian free field. We consider the lattice $\mathbb{Z}^2$ with its usual edge set $E_2,$ weights $\frac14$ between every two neighbors of $\mathbb{Z}^2$ and $0$ otherwise, and killing measure equal to $0,$ and we denote by $\tilde{\mathbb{Z}}^2$ the associated cable system. Let us also define $\mathbb{Z}^2_n$ as the graph with the same vertices and weights as $\mathbb{Z}^2,$ but with killing measure equal to infinity on $B_n^c$ and zero on $B_n,$ where $B_n$ is the discrete ball of radius $n$ centered at the origin. Even though the graph $\mathbb{Z}^2$ is not transient, one can define a pinned version of the Gaussian free field $(\phi^p_x)_{x\in{\tilde{\mathbb{Z}}^2}}$ under some probability $\P^{G,p}$ with covariance function given by \begin{equation} \label{def2dgff} \begin{split} \mathbb{E}^{G,p}[\phi_x^p\phi_y^p]&=\lim_{n\rightarrow\infty}\mathbb{E}^G_{\tilde{\mathbb{Z}}^2_n}[(\phi_x-\phi_0)(\phi_y-\phi_0)] \\&=\lim\limits_{n\rightarrow\infty}g_{\tilde{\mathbb{Z}}^2_{n}}(x,y)-g_{\tilde{\mathbb{Z}}^2_{n}}(x,0)-g_{\tilde{\mathbb{Z}}^2_{n}}(y,0)+g_{\tilde{\mathbb{Z}}^2_{n}}(0,0) \end{split} \end{equation} for all $x,y\in{\tilde{\mathbb{Z}}^2}.$ The limit in \eqref{def2dgff} exists for each $x,y\in{\mathbb{Z}^2},$ see (2.27) in \cite{MR3936156}, and, since one can obtain the value of $g_{\tilde{\mathbb{Z}}^2_{n}}(x,y),$ $x,y\in{\tilde{\mathbb{Z}}_n^2},$ from its value on $\mathbb{Z}_n^2$ by interpolation, see (2.1) in \cite{MR3502602}, it is easy to show that the limit in \eqref{def2dgff} in fact exists for each $x,y\in{\tilde{\mathbb{Z}}^2}.$ One can moreover show that $\phi^p$ corresponds to the Gaussian free field for the graph $\mathbb{Z}^{2,0},$ see Lemma \nolinebreak \ref{le:phiZ2otherdef}.
We are interested in the percolation of the level sets $E_{\bf a}^{p,\geq h}=\{x\in{\tilde{\mathbb{Z}}^2}:\,\phi_x^p\geq h\times {\bf a}(x)\},$ $h\in\mathbb{R},$ of the pinned free field on the cable system, and we denote by $E_{\bf a}^{p,\geq h}(x_0)$ the connected component of $x_0\in{\tilde{\mathbb{Z}}^2}$ in $E_{\bf a}^{p,\geq h}.$ Note that one could also consider percolation for the usual level sets of the pinned field $\{x\in{\tilde{\mathbb{Z}}^2}:\,\phi_x^p\geq h\},$ but the phase transition is then trivial, see Remark \ref{endrkdim2},\ref{normalpinnedlevelsets}, and the level sets $E_{\bf a}^{p,\geq h}$ appear more naturally in the context of the isomorphism with random interlacements, see Theorem \nolinebreak 5.5 in \cite{MR3936156}. In \cite{MR3475663}, a definition of random interlacements on $\mathbb{Z}^2$ was given using the ${\bf a}$-transform of the random walk on $\mathbb{Z}^2;$ it corresponds to a Poisson soup of trajectories conditioned on never hitting the origin. We extend this definition here to a point process $\omega^{(2)}$ of trajectories on the cable system $\tilde{\mathbb{Z}}^2$ under some probability $\P^{I,2},$ see \eqref{defRIdim2}. We denote by $\ell_{x,u}^{(2)}$ the continuous field of local times associated with the trajectories in $\omega^{(2)}$ with label at most $u,$ and by ${\cal I}^u_2$ the associated interlacement set. For all closed sets $F\subset\tilde{\mathbb{Z}}^2$ such that $0\in{F},$ we have \begin{equation} \label{defIudim2} \P^{I,2}({\cal I}^u_2\cap F=\varnothing)=\exp\big(-u\mathrm{cap}_{\tilde{\mathbb{Z}}^2}(F)\big), \end{equation} where $\mathrm{cap}_{\tilde{\mathbb{Z}}^2}(A),$ see \eqref{capdim2}, is an extension of the usual definition of two-dimensional capacity to the cable system, as defined in Section 6.6 in \cite{MR2677157} for instance. Note that since $\mathbb{Z}^2$ is recurrent, this definition of capacity differs from the usual definition of capacity in the rest of the paper, see \eqref{defequicap}.
Moreover, the normalization in \eqref{defIudim2} corresponds to the one from \cite{MR3936156}, and differs by a constant from \cite{MR3475663}, but is more natural in our context. As a consequence of our results about the $\mathit{\mathbf{h}}$-transform and \cite{DrePreRod3}, we obtain that the critical parameter associated with the percolation of $E^{p,\geq h}_{\bf a}$ is equal to zero, as well as the law of the capacity of the level sets of the pinned Gaussian free field. We also obtain an isomorphism between the pinned Gaussian free field and two-dimensional random interlacements, which corresponds to a signed version of Theorem \nolinebreak 5.5 in \cite{MR3936156} on the cable system. The proof of Theorem \nolinebreak \ref{couplingintergffdim2} appears at the end of Section \ref{sec:Z20}. \begin{The} \label{couplingintergffdim2} The sign clusters $E^{p,\geq 0}_{\bf a}$ of the pinned free field in dimension two are $\P^{G,p}$-a.s.\ bounded, and for all $h<0$ the level sets $E^{p,\geq h}_{\bf a}$ contain an unbounded connected component with positive $\P^{G,p}$-probability. Moreover, for all $h,u\geq0$ and $x_0\in{\tilde{\mathbb{Z}}^2},$ \begin{equation} \label{eq:laplacecapkilleddim2} \mathbb{E}^{G,p}\left[\exp\left(-u\mathrm{cap}_{\tilde{\mathbb{Z}}^2}\big({E}^{p,\geq h}_{\bf a}(x_0)\cup\{0\}\big)\right)\mathds{1}_{\phi_{x_0}^p\geq h\times {\bf a}(x_0)}\right] =\P^{G,p}\big(\phi_{x_0}^p\geq {\bf a}(x_0)\sqrt{2u+h^2}\big) \end{equation} and \begin{equation} \label{eqcouplingintergffdim2} \begin{gathered} \big(\phi_x^p\mathds{1}_{x\notin{\mathcal{C}_u^{2}}}+\sqrt{2\ell_{x,u}^{(2)}+(\phi_x^p)^2}\mathds{1}_{x\in{\mathcal{C}_u^{2}}}\big)_{x\in{\tilde{\mathbb{Z}}^2}}\text{ has the same law under }\P^{I,2}\otimes\P^{G,p} \\\text{ as }\big(\phi_x^p+\sqrt{2u}{\bf a}(x)\big)_{x\in{\tilde{\mathbb{Z}}^2}}\text{ under }\P^{G,p}\text{ for all }u\geq0, \end{gathered} \end{equation} where $\mathcal{C}_u^2$ denotes the closure of the union of the connected components of the sign
clusters $\{x\in{\tilde{\mathbb{Z}}^2}:|\phi_x^p|>0\}$ intersecting the random interlacement set ${\cal I}^u_2.$ \end{The} Note that Sections \ref{sec:defmassinter}+\ref{sec:h0=1h_*=0}, Section \ref{sec:h_*<0}, Sections \ref{sec:Doob}+\ref{sec:Z20}, and Section \ref{sec:example}, which correspond to the proofs of the respective items in Theorem \nolinebreak \ref{mainth}, can essentially be read independently, except where explicitly mentioned otherwise. \vspace{2mm} \noindent{\bf Acknowledgments.} The author thanks Alexander Drewitz and Pierre-François Rodriguez for several useful discussions about the various problems solved in this article. \section{Notation and definition} \label{sec:notation} We consider a massive weighted graph $\mathcal{G}=(\overline{G},\bar{\lambda},\bar{\kappa}),$ where $\bar{\kappa}$ is possibly infinite, to which we associate an equivalent triplet $(G,\lambda,\kappa),$ as defined in \eqref*{4eq:defGfinite} in \cite{DrePreRod3}, for which $\kappa$ is finite. $G$ is then a finite or countable set of vertices, $\lambda=(\lambda_{x,y})_{x,y\in{G}}\in{[0,\infty)^{G\times G}}$ are called weights, and $\kappa=(\kappa_x)_{x\in{G}}\in{[0,\infty)^G}$ is called the killing measure. We assume that the associated graph with vertex set $G$ and edge set $E =\{ \{x,y\} : x,y\in{G},\, \lambda_{x,y}>0\}$ is connected and locally finite.
One can associate to the weighted graph $\mathcal{G}$ its canonical jump process, that is the continuous-time Markov chain on $\overline{G}$ which jumps from $x \in \overline{G}$ to $y \in \overline{G}$ at rate $\bar{\lambda}_{x,y}$ and is killed at rate $\bar{\kappa}_x.$ We define the total weight of a vertex $x\in{G}$ as $\lambda_x=\kappa_x+\sum_{y\sim x}\lambda_{x,y},$ where $y\sim x$ means that $\{x,y\}\in{E}.$ The cable system $\tilde{\mathcal{G}}$ associated to $\mathcal{G}$ is defined by gluing together open segments $I_e$ with length $\rho_{e}=1/(2\lambda_{x,y}),$ $e=\{x,y\}\in{E},$ through their endpoints, and gluing the closed endpoint of half-open intervals $I_x$ with length $\rho_x=1/(2\kappa_x)$ to $x,$ $x\in{G}.$ For all $e=\{x,y\}\in{E}$ and $t\in{[0,\rho_{e}]}$ we denote by $x+t\cdot I_e$ the point of $I_e$ at distance $t$ from $x,$ that is $x=x+0\cdot I_e=y+\rho_e\cdot I_e,$ and similarly for all $x\in{G}$ and $t\in{[0,\rho_x)},$ we denote by $x+t\cdot I_x$ the point of $I_x$ at distance $t$ from $x.$ One can endow $\tilde{\mathcal{G}}$ with a distance $d_{\tilde{\mathcal{G}}},$ or simply $d$ when there is no ambiguity about the choice of the graph $\mathcal{G},$ such that $d_{\tilde{\mathcal{G}}}(x,y)$ is the length of the shortest path between $x$ and $y$ when replacing the length of $I_e$ by $1$ for each $e\in{E\cup G},$ through some given increasing bijection $[0,\infty)\rightarrow[0,1)$ for $I_x$ when $\kappa_x=0.$ The associated metric space $\tilde{\mathcal{G}}$ is a Polish space, and a connected set $K$ is compact for this topology if and only if $K\cap G$ is finite, $K\cap \overline{I_e}$ is a compact connected subset of $\overline{I_e}$ for all $e\in{E},$ and $K\cap I_x$ is a compact connected subset of $I_x$ for all $x\in{G}.$ A connected set $F$ is unbounded if and only if $F\cap G$ is infinite.
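The canonical jump process described above is straightforward to simulate from the rates alone. The following is a minimal illustrative sketch (the dictionary encoding of the weights and the function name are ours, not notation from this paper), assuming a finite toy graph with strictly positive killing so that the chain a.s.\ dies:

```python
import random

def simulate_jump_process(weights, kappa, x0, seed=0):
    """Simulate the canonical jump process on (G, lambda, kappa).

    From a vertex x with total weight lambda_x = kappa_x + sum_y lambda_{x,y},
    the chain waits an Exp(lambda_x) holding time, is killed with probability
    kappa_x / lambda_x, and otherwise jumps to a neighbour y with probability
    lambda_{x,y} / lambda_x."""
    rng = random.Random(seed)
    path, x, t = [x0], x0, 0.0
    while True:
        lam_x = kappa[x] + sum(weights[x].values())
        t += rng.expovariate(lam_x)        # Exp(lambda_x) holding time at x
        u = rng.random() * lam_x
        if u < kappa[x]:                   # killed at rate kappa_x
            return path, t
        u -= kappa[x]
        for y, w in weights[x].items():    # jump at rate lambda_{x,y}
            if u < w:
                x = y
                break
            u -= w
        path.append(x)

# Toy graph: two vertices joined by an edge of weight 1, unit killing measure.
weights = {"a": {"b": 1.0}, "b": {"a": 1.0}}
kappa = {"a": 1.0, "b": 1.0}
path, lifetime = simulate_jump_process(weights, kappa, "a")
```

With strictly positive killing at every vertex, the number of jumps before death is geometric, which is the discrete counterpart of the role played by $\kappa$ on the cable system.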
Let $m$ be the Lebesgue measure on $\tilde{\mathcal{G}},$ that is the sum of the Lebesgue measures on each $I_e,$ $e\in{E\cup G},$ with the normalization $m(I_e)=\rho_e$ for all $e\in{E\cup G},$ and let $W_{\tilde{\mathcal{G}}}^+$ be the set of continuous functions from $[0,\infty)$ to $\tilde{\mathcal{G}}\cup\{\Delta\},$ where $\Delta$ is some cemetery point, that is, for each $w\in{W_{\tilde{\mathcal{G}}}^+}$ there exists a time $\zeta\in{[0,\infty]}$ such that $w_{|[0,\zeta)}$ is continuous on $\tilde{\mathcal{G}}$ and $w(t)=\Delta$ for all $t\geq\zeta.$ We also define $W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}$ the set of forwards trajectories in $W_{\tilde{\mathcal{G}}}^+$ which are killed, that is escape $\tilde{\mathcal{G}}$ through some $I_x,$ $x\in{G},$ and $W_{\tilde{\mathcal{G}}}^{\mathcal{S},+}$ the set of forwards trajectories in $W_{\tilde{\mathcal{G}}}^+$ which blow up, that is exit every bounded and connected set before time $\zeta.$ Let $X_t$ be the projection function at time $t$ for all $t\geq0,$ and $\mathcal{W}_{\tilde{\mathcal{G}}}^+$ the $\sigma$-algebra generated by $X_t,$ $t\geq0.$ For all measures $\tilde{m}$ on $\tilde{\mathcal{G}},$ that is such that $\tilde{m}_{|I_e}$ is a measure on $(I_e,\mathcal{B}(I_e))$ for all $e\in{E\cup{{G}}},$ and all measurable functions $f:\tilde{\mathcal{G}}\rightarrow\mathbb{R},$ we define \begin{equation*} (f,f)_{\tilde{m}}\stackrel{\mathrm{def.}}{=}\sum_{e\in{E\cup G}}\int_{I_e}f^2\,\mathrm{d}\tilde{m}_{|I_e}, \end{equation*} $L^2(\tilde{\mathcal{G}},\tilde{m})=\{f:\,(f,f)_{\tilde{m}}<\infty\},$ and $(f,g)_{\tilde{m}}$ the associated scalar product on $L^2(\tilde{\mathcal{G}},\tilde{m}).$ Let also $D(\tilde{\mathcal{G}},\tilde{m})\subset L^2(\tilde{\mathcal{G}},\tilde{m})$ be the space of functions $f\in{C_0(\tilde{\mathcal{G}})},$ the closure of the space of functions with compact support with respect to the $\|\cdot\|_{\infty}$ norm, such that $f_{|I_e}\in{W^{1,2}(I_e,\tilde{m}_{|I_e}})$ for all $e\in{E\cup{G}}$ and \begin{equation*}
\sum_{e\in E\cup{G}}\|f_{|I_e}\|_{W^{1,2}(I_e,\tilde{m}_{|I_e})}^2<\infty. \end{equation*} Following \cite{MR3152724}, the canonical Brownian motion on $\tilde{\mathcal{G}}$ is then defined by taking probabilities $P_x^{\tilde{\mathcal{G}}},$ or simply $P_x$ when there is no ambiguity about the choice of the graph $\mathcal{G},$ $x\in{\tilde{\mathcal{G}}},$ under which the process $X$ is an $m$-symmetric diffusion on $\tilde{\mathcal{G}}$ starting in $x$ and with associated Dirichlet form on $L^2(\tilde{\mathcal{G}},m)$ \begin{equation} \label{Dirichlet} \mathcal{E}_{\tilde{\mathcal{G}}}(f,g)\stackrel{\text{def.}}{=}\frac12(f',g')_m\text{ for all }f,g\in{D({\tilde{\mathcal{G}}},m)}. \end{equation} We refer to Section \ref*{4s:usefulresults} in \cite{DrePreRod3} for more details and properties of the cable system $\tilde{\mathcal{G}}$ and its associated diffusion $X.$ If $F$ is either a subset of $G$ or a union of edges $I_e$ for $e\in{A\subset G\cup E},$ we denote by $X^F$ the trace of $X$ on $F,$ that is the time changed process with respect to the positive continuous additive functional corresponding to the time spent by $X$ in $F,$ see above \eqref*{4traceonG} in \cite{DrePreRod3} for details. The diffusion $X$ then behaves locally like a Brownian motion on each $I_e,$ $e\in{G\cup E},$ and \begin{equation} \label{printonG} \text{the trace }Z\stackrel{\text{def.}}{=}X^G\text{ of }X\text{ on }G\text{ has the same law as the canonical jump process on }(G,\lambda,\kappa). \end{equation} One can also see $\{x\in{\overline{G}}:\,\bar{\kappa}_x<\infty\}$ as a subset of $G,$ and then the law of the trace of $X$ on $\{x\in{\overline{G}}:\,\bar{\kappa}_x<\infty\}$ is the same as the law of the canonical jump process on $\mathcal{G},$ which justifies our choice of $(G,\lambda,\kappa).$ We also denote by $(\hat{Z}_n)_{n\in\mathbb{N}}$ the discrete-time skeleton of $Z;$ i.e. 
the sequence of elements of $G$ visited by the process $Z,$ with the convention that $\hat{Z}_n =\Delta$ for all large enough $n$ if $Z$ gets killed. Unless explicitly mentioned otherwise, we will from now on assume that the graph $\mathcal{G}$ is transient, that is, $\ell_{y}(\zeta)$ is $P_x^{\tilde{\mathcal{G}}}$-a.s.\ finite for all $x,y\in{\tilde{\mathcal{G}}}$, where $(\ell_y(t))_{y\in{\tilde{\mathcal{G}}},t\geq0}$ is the continuous field of local times with respect to $m$ associated with $X.$ We can now define the Green function on $\tilde{\mathcal{G}}$ by taking \begin{equation} \label{Greendef} g_{\tilde{\mathcal{G}}}(x,y)=E_x^{\tilde{\mathcal{G}}}[\ell_y(\zeta)]\text{ for all } x,y\in{\tilde{\mathcal{G}}}, \end{equation} where $E_x^{\tilde{\mathcal{G}}}$ denotes expectation with respect to $P^{\tilde{\mathcal{G}}}_x.$ Denoting by $\phi_x,$ $x\in{\tilde{\mathcal{G}}},$ the coordinate maps on the space $C(\tilde{\mathcal{G}},\mathbb{R}),$ endowed with the $\sigma$-algebra they generate, we define a probability $\P^G_{\tilde{\mathcal{G}}}$ on $C(\tilde{\mathcal{G}},\mathbb{R})$ so that \eqref{defGFF} holds, and we call $\phi$ under $\P^G_{\tilde{\mathcal{G}}}$ the Gaussian free field on $\tilde{\mathcal{G}}.$ We often write $g$ instead of $g_{\tilde{\mathcal{G}}}$ and $\P^G$ instead of $\P^G_{\tilde{\mathcal{G}}}$ when there is no ambiguity about the choice of the graph $\mathcal{G}.$ We refer to \cite{MR3502602} or Section \ref*{4subsec:GFF} in \cite{DrePreRod3} for a description of the main properties of the Gaussian free field, and in particular on how to construct the Gaussian free field on the cable system from the discrete Gaussian free field on $G.$ If $A\subset\tilde{\mathcal{G}}$ is a set without accumulation point in $\tilde{\mathcal{G}},$ it follows from Lemma \nolinebreak \ref*{4GA} in \cite{DrePreRod3} that \begin{equation} \label{eq:enhancements} \begin{gathered} \text{there exists a unique graph }\mathcal{G}^{A}\text{ with vertex set
}G^{A}\stackrel{\mathrm{def.}}{=}A\cup G \\\text{ such that the trace of }X\text{ on }G^{A}\text{ has the same law under }P_x^{\tilde{\mathcal{G}}^{A}}\text{ and }P_x^{\tilde{\mathcal{G}}}. \end{gathered} \end{equation} The graph $\mathcal{G}^{A}$ informally corresponds to the graph obtained by using network equivalence to add, for each $e\in{E\cup G},$ the points of $A\cap I_e$ as new vertices, adapting the weights and killing measure so that each edge $I_e,$ $e\in{E\cup G},$ of $\tilde{\mathcal{G}}$ corresponds to a union of edges of $\tilde{\mathcal{G}}^A$ with total length $\rho_e.$ Since $\tilde{\mathcal{G}}^A$ has additional half-open intervals $I_x,$ $x\in{G^A},$ we will often consider $\tilde{\mathcal{G}}$ as a subset of $\tilde{\mathcal{G}}^{A}.$ We now define the equilibrium measure of a set $F\subset G$ by \begin{equation} \label{defequicap} e_{F,\tilde{\mathcal{G}}}(x)\stackrel{\mathrm{def.}}{=}\lambda_xP_x^{\tilde{\mathcal{G}}}(\tilde{H}_F=\infty)\text{ for all }x\in{G}, \end{equation} where $\tilde{H}_F=\inf\{n\geq1:\hat{Z}_n\in{F}\}$ is the first time at or after time one at which the discrete-time Markov chain $\hat{Z}$ on $G$ returns to $F,$ which is equal to $\infty$ if $\hat{Z}$ never returns to $F.$ When $F\subset\tilde{\mathcal{G}}$ is a closed set, we define the hitting time $H_F$ of $F$ by $H_F=\inf\{t\geq0:\,X_t\in{F}\},$ with $\inf\varnothing=\zeta,$ the last exit time $L_F$ of $F$ by $L_F=\sup\{t\geq0:X_t\in{F}\},$ with $\sup\varnothing=0$ and let \begin{equation} \label{defpartialext} \hat{\partial}F=\left\{x\in{ F}:\,P_x\left(X_{L_F}=x,L_F\in{(0,\zeta)}\right)>0\right\}.
\end{equation} Note that $\hat{\partial}F$ does not have any accumulation point since $I_e$ contains at most two points of $\hat{\partial}F$ for all $e\in{E\cup G}.$ One can then extend the definition \eqref{defequicap} of the equilibrium measure to any closed set $F\subset\tilde{\mathcal{G}}$ by considering the graph $\mathcal{G}^{\hat{\partial}F}$ similarly as in \eqref*{4defequilibriumcable} in \cite{DrePreRod3}. We can now define the capacity via \begin{equation} \label{defcap} \mathrm{cap}_{\tilde{\mathcal{G}}}(K)=\sum_{x\in{\hat{\partial}K}}e_{K,\tilde{\mathcal{G}}}(x)\text{ for all compacts }K\subset\tilde{\mathcal{G}}. \end{equation} One can then extend this definition of the capacity to any closed set $F\subset\tilde{\mathcal{G}}$ by approximating $F$ by an increasing sequence of compacts, see \eqref*{4defcapinfinity} in \cite{DrePreRod3}. Note that the capacity of a non-compact set is not necessarily equal to the total mass of its equilibrium measure: for instance on $\mathbb{Z}^3$ with unit weights and zero killing measure, the capacity of $\mathbb{Z}^3$ is infinite, but $e_{\mathbb{Z}^3,\tilde{\mathbb{Z}}^3}(x)=0$ for all $x\in{\mathbb{Z}^3}.$ We simply write $e_{F}(x)$ instead of $e_{F,\tilde{\mathcal{G}}}(x)$ and $\mathrm{cap}(F)$ instead of $\mathrm{cap}_{\tilde{\mathcal{G}}}(F)$ when there is no ambiguity about the choice of the graph $\mathcal{G}.$ We turn to the definition of random interlacements on $\tilde{\mathcal{G}}.$ The random interlacement measure was first defined on $\mathbb{Z}^d,$ $d\geq3,$ in \cite{MR2680403}, and then on any discrete transient graph with $\kappa\equiv0$ in \cite{MR2525105}. It was then extended to the cable system of $\mathbb{Z}^d$ in \cite{MR3502602} using the fact that one can obtain the diffusion $X$ by adding Brownian excursions on the edges to a discrete random walk on $\mathbb{Z}^d,$ and this proof can easily be extended to any transient graph on which discrete random interlacements exist.
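As an elementary illustration of \eqref{defequicap} and \eqref{defcap} (a standard identity, recalled here for the reader's convenience rather than taken from the references): for a single vertex $x_0\in{G},$ the number of visits of $\hat{Z}$ to $x_0$ under $P_{x_0}^{\tilde{\mathcal{G}}}$ is geometric with success probability $P_{x_0}^{\tilde{\mathcal{G}}}(\tilde{H}_{\{x_0\}}=\infty),$ so that $g(x_0,x_0)=\big(\lambda_{x_0}P_{x_0}^{\tilde{\mathcal{G}}}(\tilde{H}_{\{x_0\}}=\infty)\big)^{-1}$ and

```latex
\mathrm{cap}_{\tilde{\mathcal{G}}}(\{x_0\})
= e_{\{x_0\},\tilde{\mathcal{G}}}(x_0)
= \lambda_{x_0}P_{x_0}^{\tilde{\mathcal{G}}}\big(\tilde{H}_{\{x_0\}}=\infty\big)
= \frac{1}{g(x_0,x_0)}.
```

In particular, the capacity of a point is large exactly when the walk started there is unlikely to return, in agreement with the interpretation of $\mathrm{cap}$ as a measure of how easily a set is hit.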
Let us now recall this definition. For any $x\in{\hat{\partial} F},$ we define \begin{equation} \label{defPxF} \text{$P^{F,\tilde{\mathcal{G}}}_x$ the law of $(X_{t+L_F})_{t\geq0}$ under $P_x(\cdot\,|\,X_{L_F}=x,L_F\in{(0,\zeta)}),$} \end{equation} or simply $P^{F}_x$ when there is no ambiguity about the choice of the graph $\mathcal{G}.$ Note that a.s.\ $X_{L_F}\in{\hat{\partial}F}$ if $L_F\in{(0,\zeta)},$ since $\hat{\partial}F$ has no accumulation points. Using similar ideas as in the proof of (1.56) in \cite{MR2932978}, one can use the Markov property to prove that when considering only events which depend on the trace $Z$ of $X$ on $G,$ the probability $P_x^{F,\tilde{\mathcal{G}}}$ can be rewritten for all $F\subset\tilde{\mathcal{G}}$ with $\hat{\partial}F\subset G$ as follows \begin{equation} \label{PxFforZ} P^{F,\tilde{\mathcal{G}}}_x(Z\in{\cdot})=P^{\tilde{\mathcal{G}}}_x(Z\in{\cdot}\,|\,\tilde{H}_F=\infty)\text{ for all }x\in{\hat{\partial}F}. \end{equation} We now define the set of doubly non-compact trajectories $W_{\tilde{\mathcal{G}}}$ as the set of continuous functions from $\mathbb{R}$ to $\tilde{\mathcal{G}}\cup\{\Delta\},$ which take values in $\tilde{\mathcal{G}}$ between times $\zeta^-\in{[-\infty,\infty)}$ and $\zeta^+\in{(-\infty,\infty]},$ and are equal to $\Delta$ on $(\zeta^-,\zeta^+)^c.$ We denote by $p_{\tilde{\mathcal{G}}}^*(w)$ the equivalence class of $w$ modulo time-shift for each $w\in{W_{\tilde{\mathcal{G}}}},$ and $W_{\tilde{\mathcal{G}}}^*=\{p_{\tilde{\mathcal{G}}}^*(w),\,w\in{W_{\tilde{\mathcal{G}}}}\}.$ We define $\mathcal{W}_{\tilde{\mathcal{G}}}$ the $\sigma$-algebra on $W_{\tilde{\mathcal{G}}}$ generated by the coordinate functions, and $\mathcal{W}_{\tilde{\mathcal{G}}}^*=\{A\subset W_{\tilde{\mathcal{G}}}^*:(p_{\tilde{\mathcal{G}}}^*)^{-1}(A)\in{\mathcal{W}_{\tilde{\mathcal{G}}}}\}.$ For each closed set $F\subset\tilde{\mathcal{G}}$ and $w\in{W_{\tilde{\mathcal{G}}}},$ we denote by $H_F(w)=\inf\{t\in\mathbb{R}:\,w(t)\in{F}\},$
the first hitting time of $F,$ with the convention $\inf\varnothing=\zeta^+.$ Let \begin{equation} \label{defWFG} W_{F,\tilde{\mathcal{G}}}^0=\{\zeta^-<H_F=0<\zeta^+\}\text{ and }W_{F,\tilde{\mathcal{G}}}^*=p_{\tilde{\mathcal{G}}}^*(W_{F,\tilde{\mathcal{G}}}^0), \end{equation} that is $W_{F,\tilde{\mathcal{G}}}^*$ is the set of trajectories in $W_{\tilde{\mathcal{G}}}^*$ which do not start in $F$ but hit $F$ in finite time. If $F$ is compact, then $W_{F,\tilde{\mathcal{G}}}^*$ is simply the set of trajectories hitting $F.$ The forwards part of a trajectory $w\in{W_{\tilde{\mathcal{G}}}}$ is $(w(t))_{t\geq0}$ and its backwards part is $(w(-t))_{t\geq0},$ and we denote by $\mathcal{W}_{F,\tilde{\mathcal{G}}}^0$ the set of events $B\in{\mathcal{W}_{\tilde{\mathcal{G}}}},$ $B\subset W_{F,\tilde{\mathcal{G}}}^0,$ such that $B$ is equal to the set of trajectories $w$ with forwards part in $B^+=\{(w(t))_{t\geq0}:\,w\in{B}\}$ and backwards part in $B^-=\{(w(-t))_{t\geq0}:\,w\in{B}\}.$ We define a measure $Q_{F,\tilde{\mathcal{G}}}$ on $\mathcal{W}_{\tilde{\mathcal{G}}},$ whose restriction to $\mathcal{W}_{F,\tilde{\mathcal{G}}}^0$ is given by \begin{equation} \label{defQK} Q_{F,\tilde{\mathcal{G}}}\stackrel{\mathrm{def.}}{=}\sum_{x\in{\hat{\partial} F}}e_{F,\tilde{\mathcal{G}}}(x)P_x^{\tilde{\mathcal{G}}}({\cdot^+})P^{F,\tilde{\mathcal{G}}}_x({\cdot^-}), \end{equation} and such that $Q_{F,\tilde{\mathcal{G}}}(A)=0$ for all $A\in{\mathcal{W}_{\tilde{\mathcal{G}}}}$ with $A\cap W_{F,\tilde{\mathcal{G}}}^0=\varnothing.$ One can then define a unique measure $\nu$ on $(W^*_{\tilde{\mathcal{G}}},\mathcal{W}^*_{\tilde{\mathcal{G}}}),$ such that for each compact $K$ of $\tilde{\mathcal{G}}$ the measure of any event included in $W^0_{K,\tilde{\mathcal{G}}}$ is the same under $Q_{K,\tilde{\mathcal{G}}}$ and under $\nu,$ modulo time-shift, see \eqref{definter} for an exact formula.
As explained above, this statement is classical when $\kappa\equiv0,$ see \cite{MR2525105} and \cite{MR3502602}, and can actually be extended to massive graphs, see Remark \ref*{4R:nomassconversion} in \cite{DrePreRod3}. In Section \ref{sec:defmassinter}, we will give a direct proof of the existence of this measure on the cable system of massive graphs, see Theorem \nolinebreak \ref{nuexists}, and additionally prove that the previous description of the measure $\nu$ can be extended to closed sets $F$ instead of only compacts $K,$ which will later be useful, see Corollary \nolinebreak \ref{killedinterdes} and the proof of Proposition \nolinebreak \ref{Prop:condtreeh_*=0}. We now define the random interlacement process $\omega$ under some probability $\P^I_{\tilde{\mathcal{G}}},$ or $\P^I$ when there is no ambiguity about the choice of the graph $\mathcal{G},$ as a Poisson point process with intensity measure $\nu\otimes\lambda,$ where $\lambda$ is the Lebesgue measure on $(0,\infty).$ We also denote by $\omega_u$ the point process, which consists of the trajectories in $\omega$ with label less than $u,$ by $(\ell_{x,u})_{x\in{\tilde{\mathcal{G}}}}$ the continuous total field of local times with respect to $m$ on $\tilde{\mathcal{G}}$ of $\omega_u$ and by ${\cal I}^u=\{x\in{\tilde{\mathcal{G}}}:\ \ell_{x,u}>0\}$ the interlacement set at level $u.$ The trace $\hat{\omega}_u\stackrel{\mathrm{def.}}{=}\omega_u^{G}$ of $\omega_u$ on $G$ corresponds to a random interlacement process on the discrete graph $\mathcal{G},$ and the random interlacement set ${\cal I}^u$ is characterized by the following relation \begin{equation} \label{defIu} \P^I({\cal I}^u\cap F=\varnothing)=\exp(-u\mathrm{cap}(F))\text{ for all closed sets $F$}. 
\end{equation} When $0<\mathit{\mathbf{h}}_{\text{kill}}<1,$ see \eqref{defh0}, there are four types of trajectories in the interlacement process: either their forwards and backwards parts are killed, or their forwards and backwards parts blow up, or their backwards parts are killed and forwards parts blow up, or their forwards parts are killed and backwards parts blow up, and we denote the respective sets of trajectories by $W_{\tilde{\mathcal{G}}}^{\mathcal{K},*},$ $W_{\tilde{\mathcal{G}}}^{\mathcal{S},*},$ $W_{\tilde{\mathcal{G}}}^{\mathcal{KS},*}$ and $W_{\tilde{\mathcal{G}}}^{\mathcal{SK},*}.$ We call respectively killed random interlacements, surviving random interlacements, killed-surviving random interlacements and surviving-killed random interlacements the point processes corresponding to each type of trajectory, which are Poisson point processes with respective intensity measures $\nu^{\mathcal{K}},$ $\nu^{\mathcal{S}},$ $\nu^{\mathcal{KS}}$ and $\nu^{\mathcal{SK}}$ (that is $\nu^{\mathcal{K}}(A)=\nu(A\cap W_{\tilde{\mathcal{G}}}^{\mathcal{K},*})$ for instance). These notions of killed and surviving random interlacements are not used here to prove Theorem \nolinebreak \ref{mainth}. However, in Sections \ref{sec:defmassinter} and \ref{sec:Doob}, we are going to prove general results about random interlacements, which in turn will be useful to find examples as in Theorem \nolinebreak \ref{mainth}, but are also interesting in their own right, see Theorem \nolinebreak \ref{nuexists} and Theorem \nolinebreak \ref{couplingintergffh}. We will then use killed or surviving interlacements to give a first illustration of the independent interest of these results, see Corollaries \nolinebreak \ref{killedinterdes}, \ref{h0transformiso} and \ref{couplingintergffK}. We refer to Section 6 in \cite{DrePreRod5} for an example of a proof where killed interlacements and killed-surviving interlacements play an essential role.
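The splitting of $\omega$ into these four point processes is an instance of Poisson thinning: restricting a Poisson point process to disjoint measurable classes of points yields Poisson processes with the restricted intensities (a standard fact about Poisson processes, not specific to interlacements). A toy numerical sketch, with names and the discrete setting of our own choosing:

```python
import random

def split_poisson(lam, probs, seed=0):
    """Sample N ~ Poisson(lam) points and color each one independently
    with category i with probability probs[i]; by Poisson thinning the
    category counts are themselves Poisson(lam * probs[i]) variables."""
    rng = random.Random(seed)
    # Sample Poisson(lam) by counting Exp(lam) arrivals before time 1.
    n, t = 0, rng.expovariate(lam)
    while t < 1.0:
        n += 1
        t += rng.expovariate(lam)
    counts = [0] * len(probs)
    for _ in range(n):                 # independent coloring of each point
        u, i = rng.random(), 0
        while u >= probs[i]:
            u -= probs[i]
            i += 1
        counts[i] += 1
    return n, counts

# Four equally likely categories, mimicking the K / S / KS / SK decomposition.
n, counts = split_poisson(lam=10.0, probs=[0.25, 0.25, 0.25, 0.25])
```

Here the four category counts play the role of the numbers of trajectories of each type in $\omega_u$ hitting a fixed compact, with the probabilities replaced in the actual construction by the relative masses of $\nu^{\mathcal{K}},$ $\nu^{\mathcal{S}},$ $\nu^{\mathcal{KS}}$ and $\nu^{\mathcal{SK}}.$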
\begin{Rk} \label{rk:killedonARI} For a set $A\subset G,$ we say that a trajectory on $\tilde{\mathcal{G}}$ is killed on $A$ if it reaches the open end of the cable $I_x$ for some $x\in{A}.$ One could also want to study ``killed on $A$'' random interlacements, that is the point process consisting of the trajectories in the random interlacement process $\omega$ whose forwards and backwards trajectories have been killed on $A,$ or ``surviving on $A^c$'' random interlacements, that is the point process consisting of the trajectories in the random interlacement process $\omega$ whose forwards and backwards trajectories have not been killed on $A.$ Let $A_{\infty}=\{x+\rho_x(1-2^{-n})\cdot I_x:\,x\in{A^c\cap G},n\in\mathbb{N}\},$ and $\mathcal{G}^{A_{\infty}}$ the graph defined in \eqref{eq:enhancements}. Since $\kappa^{A_{\infty}}_x=0$ for all $x\in{A_{\infty}\cup (G\setminus A)},$ killed trajectories in $W_{\tilde{\mathcal{G}}^{A_{\infty}}}^+$ are always killed on $I_x$ for some $x\in{A}.$ Therefore, the trace on $\tilde{\mathcal{G}}(\subset\tilde{\mathcal{G}}^{A_{\infty}})$ of the killed random interlacement process under $\P^{I}_{\tilde{\mathcal{G}}^{A_{\infty}}}$ has the same law as the killed on $A$ random interlacement process under $\P^I_{\tilde{\mathcal{G}}},$ and the trace on $\tilde{\mathcal{G}}(\subset\tilde{\mathcal{G}}^{A_{\infty}})$ of the surviving random interlacement process under $\P^{I}_{\tilde{\mathcal{G}}^{A_{\infty}}}$ has the same law as the surviving on $A^c$ random interlacement process under $\P^I_{\tilde{\mathcal{G}}}.$ Therefore in the sequel we will mainly focus on killed and surviving random interlacements, since all our definitions and results could be extended to killed on $A,$ or surviving on $A^c,$ random interlacements by considering the graph $\mathcal{G}^{A_{\infty}}$ instead of $\mathcal{G}.$ If now $A\subset \overline{G},$ that is we allow $\bar{\kappa}_x=\infty$ for some $x\in{A},$ we say that a trajectory on $\tilde{\mathcal{G}}$ is
killed on $A$ if this trajectory is killed on $A'$ for the graph $(G,\lambda,\kappa),$ where $A'\subset G$ is the union of $A\cap G$ and all the vertices $z\in{G}$ for which there exist $x\in{A\cap G^c}$ and $y\in{G}$ with $\bar{\lambda}_{x,z}>0$ and $\bar{\lambda}_{z,y}>0.$ We adapt the definition of killed on $A$ and surviving on $A^c$ random interlacements accordingly, and similarly as before, it is enough to study killed, or surviving, random interlacements to obtain results on killed on $A,$ or surviving on $A^c,$ random interlacements, even if $A\subset\overline{G},$ which is what we will do from now on. \end{Rk} \section{Random interlacements on massive graphs} \label{sec:defmassinter} In this section, we give an alternative definition of killed random interlacements, or simply random interlacements when $\mathit{\mathbf{h}}_{\text{kill}}=1,$ on the cable system of massive weighted graphs. We first present the last exit decomposition \eqref{lastexitdec} of the diffusion $X$ on $\tilde{\mathcal{G}},$ that is a decomposition of the law of $X$ before and after the last time $L_F$ at which $X$ visits a closed set $F$ of $\tilde{\mathcal{G}}.$ We then use this last exit decomposition to describe the restriction of the measure $\nu$ underlying random interlacements to $W_{F,\tilde{\mathcal{G}}}^0,$ as defined in \eqref{defWFG}, for any closed set $F,$ see Theorem \nolinebreak \ref{nuexists}. Finally, we use this description to obtain the law of the restriction of the killed, killed-surviving and surviving-killed interlacement measures to $\tilde{\mathcal{G}}^{-},$ see Corollary \nolinebreak \ref{killedinterdes}. Using Theorems \nolinebreak 4.1.2 and 4.2.4 in \cite{MR2778606}, one can associate to the diffusion $X$ a symmetric family of transition densities $(p_t(x,y))_{t>0},$ $x,y\in{\tilde{\mathcal{G}}},$ such that \begin{equation} \label{greenpt} P_x(X_t\in{\mathrm{d}}y)=p_t(x,y)m(\mathrm{d}y),\text{ and then } g(x,y)=\int_0^{\infty}p_t(x,y)\,\mathrm{d} t.
\end{equation} The fact that the formula \eqref{greenpt} for the Green function holds, recall the definition \eqref{Greendef}, can be for instance deduced from Theorem \nolinebreak 3.6.5 in \cite{MR2250510}. Let us now recall some useful results from Section 2 of \cite{MR1278079} about the existence of Markovian bridges, which we apply to our $m$-symmetric diffusion $X.$ Under $P_x,$ the process $(p_{t-s}(X_s,y))_{s\in{[0,t)}}$ is a martingale, and thus we can define \begin{equation*} P_{x,y,t}(A)\stackrel{\mathrm{def.}}{=}\frac{E_{x}[p_{t-s}(X_s,y)\mathds{1}_A]}{p_t(x,y)}\text{ for all }A\in{\mathcal{F}_s:=\sigma(X_u,u\leq s)}\text{ and }0\leq s<t, \end{equation*} and this definition is consistent. One can extend the definition of $P_{x,y,t}$ to a probability measure on $\mathcal{F}_t,$ which informally corresponds to the law of a bridge of length $t$ between $x$ and $y$ for $X.$ Applying the optional stopping theorem to the martingale $(p_{t-s}(X_s,y))_{s\in{[0,t)}},$ see for instance Theorem \nolinebreak 3.2 in Chapter II of \cite{MR1725357}, we have that for all $t>0$ and stopping times $T$ \begin{equation} \label{stoppedbridge} E_x[p_{t-T}(X_T,y)\mathds{1}_{A,T<t}]=P_{x,y,t}(A,T<t)p_t(x,y)\text{ for all }A\in{\mathcal{F}_T}, \end{equation} where $\mathcal{F}_T=\{F\in{\mathcal{F}_t}:F\cap\{T\leq s\}\in\mathcal{F}_s\text{ for all }s<t\}$ is the $\sigma$-algebra associated with $T.$ Moreover by $m$-symmetry of $X,$ we have for all $t>0$ and $x,y\in{\tilde{\mathcal{G}}}$ that \begin{equation} \label{symbridge} (X_{t-s})_{s\in{[0,t]}}\text{ has the same law under } P_{x,y,t}\text{ as }(X_s)_{s\in{[0,t]}}\text{ under }P_{y,x,t}.
\end{equation} Using \eqref{symbridge}, one can derive a decomposition for stopping times on the reversed time scale: for all random times $\tau$ such that $\{\tau\geq t\}$ is in $\sigma(X_{t+u},u\geq0),$ we have that a.s.\ \begin{equation} \label{revmarkovbridge} (X_s)_{s\in{[0,\tau]}}\text{ has the same law under } P_{x}(\cdot\,|\,\mathcal{G}_{\tau})\text{ as }(X_s)_{s\in{[0,\tau]}}\text{ under }P_{x,X_{\tau},\tau}, \end{equation} where $\mathcal{G}_{\tau}=\sigma(\tau,X_{\tau+u},u\geq0).$ Using results for general Hunt processes, see either Theorem \nolinebreak 8 in \cite{MR336827}, Proposition \nolinebreak 5.9 in \cite{MR334335} or Theorem \nolinebreak 2.12 in \cite{MR521533}, under $P_x,$ if $L_F\in{(0,{\zeta})}$ then $(X_{s+L_F})_{s>0}$ is a Markov process depending on the past only through $X_{L_F},$ and so we have for all $x\in{\tilde{\mathcal{G}}},$ on the event $L_F\in{(0,{\zeta})},$ that \begin{equation} \label{lawafterLK} (X_{s+L_F})_{s\geq0}\text{ has the same law under } P_{x}(\cdot\,|\,L_F,X_{L_F})\text{ as }(X_s)_{s\geq0}\text{ under }P_{X_{L_F}}^{F}, \end{equation} where $P_{\cdot}^F$ is defined in \eqref{defPxF}. Combining \eqref{revmarkovbridge} and \eqref{lawafterLK}, one can thus describe the law of $(X_t)_{t\geq0}$ both before and after the last visit $L_F$ of $F.$ Let us now describe the law of $L_F$ and $X_{L_F}.$ Following the proof of (1.56) in \cite{MR2932978} and \eqref*{4exitequi} in \cite{DrePreRod3}, we moreover have that \begin{equation} \label{exitequi} P_y(X_{L_F}=x,L_F\in{(0,\zeta)})={g(y,x)e_{F}(x)}\text{ for all }x,y\in{\tilde{\mathcal{G}}}. \end{equation} This leads to the following description of the law of $L_F$ and $X_{L_F}.$ \begin{Lemme} For all closed sets $F\subset\tilde{\mathcal{G}},$ $x\in{\tilde{\mathcal{G}}}$ and $y\in{\hat{\partial}F},$ we have \begin{equation} \label{disLK} P_x(L_F\in{\mathrm{d}t},L_F\in{(0,\zeta)},X_{L_F}=y)=p_t(x,y)e_{F}(y)\mathrm{d}t.
\end{equation} \end{Lemme} \begin{proof} For all $t>0,$ we have by the Markov property at time $t$ and \eqref{exitequi} \begin{equation*} P_x(t< L_F<\zeta,X_{L_F}=y)=E_x\big[P_{X_t}(X_{L_F}=y,0<L_F<\zeta)\big]=E_x\big[g(X_t,y)\big]e_{F}(y). \end{equation*} Using \eqref{stoppedbridge}, we moreover have $E_x[p_{s-t}(X_t,y)]=p_s(x,y)$ for all $s>t,$ and so by \eqref{greenpt} \begin{equation*} E_x\big[g(X_t,y)\big]=\int_{t}^{\infty}E_x\big[p_{s-t}(X_t,y)\big]\,\mathrm{d} s=\int_{t}^{\infty}p_s(x,y)\,\mathrm{d} s, \end{equation*} and we can conclude. \end{proof} We are now ready to give the last exit decomposition of $(X_t)_{t\geq0}$ before and after time $L_F.$ We denote by $W_{\tilde{\mathcal{G}}}^{+,f}$ the set of continuous trajectories in $\tilde{\mathcal{G}}$ with finite length, that is of continuous functions from $[0,t]$ to $\tilde{\mathcal{G}}$ for some $t>0.$ Let $\pi_t:\{w\in W_{\tilde{\mathcal{G}}}^+:\,t<\zeta\}\rightarrow W_{\tilde{\mathcal{G}}}^{+,f}$ be the map $w\mapsto w_{|[0,t]}$ and $\mathcal{W}_{\tilde{\mathcal{G}}}^{+,f}$ be the $\sigma$-algebra generated by $w\mapsto ((w(st))_{s\in{[0,1]}},t)$ when we endow $\{w:[0,1]\rightarrow\tilde{\mathcal{G}}:\,w\text{ is continuous}\}\times(0,\infty)$ with the product topology. For each $A_1\in{\mathcal{W}_{\tilde{\mathcal{G}}}^{+,f}}$ and $A_2\in{\mathcal{W}_{\tilde{\mathcal{G}}}^+},$ using \eqref{revmarkovbridge} with $\tau=L_{F},$ we have that for all $x\in{\tilde{\mathcal{G}}}$ and $y\in{\hat{\partial} F}$ \begin{align*} &P_x\big((X_t)_{t\in{[0,L_F]}}\in{A_1},(X_{t+L_F})_{t\geq 0}\in{A_2},X_{L_F}=y,L_F\in{(0,\zeta)}\big) \\&=E_x\Big[\mathds{1}_{(X_{t+L_{F}})_{t\geq 0}\in{A_2},X_{L_{F}}=y,L_F\in{(0,\zeta)}}(P_{x,y,L_{F}}\circ\pi_{L_F}^{-1})(A_1)\Big] \\&=P^{F}_y(A_2)E_x\Big[\mathds{1}_{X_{L_{F}}=y,L_F\in{(0,\zeta)}}(P_{x,y,L_{F}}\circ\pi_{L_F}^{-1})(A_1)\Big], \end{align*} where we used \eqref{lawafterLK} in the last equality.
By \eqref{disLK}, we moreover have that \begin{equation*} E_x\Big[\mathds{1}_{X_{L_{F}}=y,L_F\in{(0,\zeta)}}(P_{x,y,L_{F}}\circ\pi_{L_F}^{-1})(A_1)\Big]=e_{F}(y)\int_{0}^\infty (P_{x,y,s}\circ\pi_{s}^{-1})(A_1)p_s(x,y)\,\mathrm{d} s. \end{equation*} Summing over $y$ in $\hat{\partial} F,$ see \eqref{defpartialext}, we thus obtain the following last exit decomposition for all closed sets $F\subset\tilde{\mathcal{G}},$ $x\in{\tilde{\mathcal{G}}},$ $A_1\in{\mathcal{W}_{\tilde{\mathcal{G}}}^{+,f}}$ and $A_2\in{\mathcal{W}_{\tilde{\mathcal{G}}}^{+}}$ \begin{equation} \label{lastexitdec} \begin{split} &P_x\big((X_t)_{t\in{[0,L_F]}}\in{A_1},(X_{t+L_F})_{t\geq 0}\in{A_2},L_F\in{(0,\zeta)}\big) \\&=\sum_{y\in{\hat{\partial} F}}e_{F}(y)P^{F}_y(A_2)\int_{0}^\infty P_{x,y,s}(\pi_s^{-1}(A_1))p_s(x,y)\,\mathrm{d} s. \end{split} \end{equation} The last exit decomposition \eqref{lastexitdec} lets us directly prove that random interlacements exist on the cable system of any massive transient graph, as defined below \eqref{defQK}. Recall the definitions of $W_{F,\tilde{\mathcal{G}}}^*$ from \eqref{defWFG} and $Q_{F,\tilde{\mathcal{G}}}$ from \eqref{defQK}. \begin{The} \label{nuexists} There exists a unique measure $\nu$ on $W^*_{\tilde{\mathcal{G}}}$ such that for all closed sets $F\subset\tilde{\mathcal{G}}$ \begin{equation} \label{definter} \nu(A)=Q_{F,\tilde{\mathcal{G}}}\big((p_{\tilde{\mathcal{G}}}^*)^{-1}(A)\big)\text{ for all }A\in{\mathcal{W}_{\tilde{\mathcal{G}}}^*},A\subset W_{F,\tilde{\mathcal{G}}}^*.
\end{equation} \end{The} \begin{proof} Let us fix a compact $K$ and a closed set $F$ with $K\subset F.$ For all $A\in{\mathcal{W}_{K,\tilde{\mathcal{G}}}^0},$ let $A'=\{(w(t+H_{F}))_{t\in\mathbb{R}}:\,w\in{A}\},$ with the convention that $w(t+H_{F})=\Delta$ for all $t\in\mathbb{R}$ if $H_F=\zeta^-.$ In order to prove \eqref{definter}, it is enough to prove that \begin{equation} \label{QK=QK'} Q_{K,\tilde{\mathcal{G}}}(A)=Q_{F,\tilde{\mathcal{G}}}(A')\text{ for all $A\in{\mathcal{W}_{K,\tilde{\mathcal{G}}}^0}$ such that $A'\in{\mathcal{W}_{F,\tilde{\mathcal{G}}}^0}$}. \end{equation} Indeed, one can then define $\mathds{1}_{W_{K,\tilde{\mathcal{G}}}^*}\nu=Q_{K,\tilde{\mathcal{G}}}\circ(p^*_{\tilde{\mathcal{G}}})^{-1}$ for all compacts $K$ of $\tilde{\mathcal{G}},$ and this definition is consistent by \eqref{QK=QK'}, and we can conclude by taking a sequence of compacts increasing to $\tilde{\mathcal{G}}.$ The uniqueness of $\nu$ is clear since $W_{K,\tilde{\mathcal{G}}}^*$ increases to $W^*_{\tilde{\mathcal{G}}}$ as $K$ increases to $\tilde{\mathcal{G}},$ and \eqref{QK=QK'} directly implies \eqref{definter}. Let us now prove \eqref{QK=QK'}. Using \eqref{defPxF}, \eqref{defQK} and \eqref{exitequi} we have \begin{equation*} Q_{K,\tilde{\mathcal{G}}}(A)=\sum_{x\in{\hat{\partial} K}}\frac{1}{g(x,x)}P_x(A^+)P_x\big((X_{t+L_K})_{t\geq 0}\in{A^-},X_{L_K}=x,L_K\in{(0,\zeta)}\big), \end{equation*} where $A^+$ is the forwards part of $A$ and $A^-$ its backwards part, as defined above \eqref{defQK}.
Since $A'\in{\mathcal{W}_{F,\tilde{\mathcal{G}}}^0},$ taking $A^{\pm}=\{(w(t))_{t\in{[0,H_{K}]}}:\,w\in{A'}\},$ one can easily check that $L_K\in{(0,\zeta)}$ and $(X_{t+L_K})_{t\geq 0}\in{A^-}$ if and only if $0<L_K\leq L_F<\zeta,$ $(X_{t+L_{F}})_{t\geq 0}\in{(A')^-}$ and $(X_{-t+L_{F}})_{t\in{[0,L_{F}-L_K]}}\in{A^\pm}.$ Therefore using \eqref{lastexitdec} for $F$ and \eqref{symbridge}, we obtain that for all $x\in{\hat{\partial} K}$ \begin{align*} &P_x\big((X_{t+L_K})_{t\geq 0}\in{A^-},X_{L_K}=x,L_K\in{(0,\zeta)}\big) \\&=\sum_{y\in{\hat{\partial} F}}\hspace{-1mm}e_{F}(y)P^{F}_y\big((A')^-\big)\int_0^{\infty}\hspace{-2mm}P_{x,y,s}\big((X_{s-t})_{t\in{[0,s-L_K]}}\in{A^{\pm}},X_{L_K}=x,L_K\in{(0,s]}\big)p_s(x,y)\,\mathrm{d} s \\&=\sum_{y\in{\hat{\partial} F}}\hspace{-1mm}e_{F}(y)P^{F}_y\big((A')^-\big)\int_0^{\infty}\hspace{-2mm}P_{y,x,s}\big((X_{t})_{t\in{[0,H_K]}}\in{A^{\pm}},X_{H_K}=x,H_K\in{[0,s)}\big)p_s(y,x)\,\mathrm{d} s. \end{align*} Moreover by \eqref{stoppedbridge}, we can write \begin{align*} &\int_0^{\infty}P_{y,x,s}\big((X_{t})_{t\in{[0,H_K]}}\in{A^\pm},X_{H_K}=x,H_K\in{[0,s)}\big)p_s(y,x)\,\mathrm{d} s \\&=\int_{0}^\infty E_y\big[p_{s-H_K}(x,x)\mathds{1}_{(X_{t})_{t\in{[0,H_K]}}\in{A^\pm},X_{H_K}=x,H_K\in{[0,s\wedge\zeta)}}\big]\,\mathrm{d} s \\&=E_y\Big[\mathds{1}_{(X_{t})_{t\in{[0,H_K]}}\in{A^\pm},X_{H_K}=x,H_K<\zeta}\int_{H_K}^{\infty}p_{s-H_K}(x,x)\,\mathrm{d} s\Big] \\&=g(x,x)P_y\big((X_{t})_{t\in{[0,H_K]}}\in{A^\pm},X_{H_K}=x,H_K<\zeta\big), \end{align*} where we used \eqref{greenpt} in the last equality.
Combining the previous equations, we thus obtain by the strong Markov property at time $H_K$ that \begin{align*} Q_{K,\tilde{\mathcal{G}}}(A)&=\sum_{x\in{\hat{\partial} K},y\in{\hat{\partial} F}}\hspace{-3mm}e_{F}(y)P_x(A^+)P_y\big((X_{t})_{t\in{[0,H_K]}}\in{A^\pm},X_{H_K}=x,H_K<\zeta\big) P^{F}_y\big((A')^-\big) \\&=\sum_{x\in{\hat{\partial} K},y\in{\hat{\partial} F}}\hspace{-3mm}e_{F}(y)P_y\big((A')^+,X_{H_K}=x,H_K<\zeta\big) P^{F}_y\big((A')^-\big) \\&=Q_{F,\tilde{\mathcal{G}}}(A'), \end{align*} where we used in the second equality the fact that $(X_t)_{t\geq0}\in{(A')^+}$ if and only if $H_K<\zeta,$ $(X_t)_{t\in{[0,H_K]}}\in{A^{\pm}}$ and $(X_{t+H_K})_{t\geq0}\in{A^+},$ and we can conclude. \end{proof} The main interest of Theorem \nolinebreak \ref{nuexists} is that, contrary to the usual definition of the intensity measure $\nu,$ see for instance Theorem \nolinebreak 1.1 in \cite{MR2680403}, it applies to any closed set $F,$ and not only to compact, or finite, sets. If $F$ is a closed set such that $P_x(L_F=\zeta)=0$ for all $x\in{\tilde{\mathcal{G}}},$ then for all $A\in{\mathcal{W}_{\tilde{\mathcal{G}}}^*}$ such that each $w\in{A}$ hits $F,$ we have by \eqref{defWFG}, \eqref{defQK} and \eqref{definter} that $\nu(A\cap W_{F,\tilde{\mathcal{G}}}^*)=\nu(A),$ which leads to the following description of the trajectories in the random interlacement process hitting $F.$ Start independently for each $x\in{\hat{\partial}F}$ a number $N_x$ of doubly-infinite trajectories, where $N_x$ has law $\text{Poi}(ue_F(x)),$ each trajectory hitting $x$ at time zero, with forwards trajectory having law $P_x$ and backwards trajectory having law $P_x^{F}.$ Then by \eqref{definter} the point process consisting of all these trajectories modulo time-shift has the same law as the trajectories in $\omega_u$ hitting $F.$ Examples of such sets $F$ are $F=G$ if $\mathit{\mathbf{h}}_{\text{kill}}\equiv1,$ or $F=\mathbb{Z}^k\times\{0\}^{d-k}$ if $\mathcal{G}=\mathbb{Z}^d$ with $\kappa\equiv0$ and
$k\in{\{1,\dots,d-3\}},$ which follows easily from Wiener’s test, see Theorem \nolinebreak 2.2.5 in \cite{MR1117680}. \begin{Rk} \begin{enumerate}[1)] \item Similarly as in (1.40) of \cite{MR2680403} or (2.16) of \cite{MR3167123}, it is easy to show that random interlacements on the cable system are invariant under time reversal. Indeed for all connected compacts $K$ of $\tilde{\mathcal{G}}$ we have by \eqref{defQK}, \eqref{lastexitdec} and \eqref{symbridge} that for all $A,A''\in{W_{\tilde{\mathcal{G}}}^+}$ and $A'\in{W_{\tilde{\mathcal{G}}}^{+,f}},$ \begin{align*} &Q_{K,\tilde{\mathcal{G}}}\big((X_{-t})_{t\geq0}\in{A},(X_t)_{t\in{[0,L_K]}}\in{A'},(X_t)_{t\geq L_K}\in{A''}\big) \\&=\sum_{x,y\in{\hat{\partial} K}}e_{K}(x)e_{K}(y)P^{K}_y(A'')P^{K}_x(A)\int_{0}^{\infty}P_{x,y,s}(\pi_s^{-1}(A'))p_s(x,y)\,\mathrm{d} s \\&=Q_{K,\tilde{\mathcal{G}}}\big((X_{-t})_{t\geq0}\in{A''},(X_{L_K-t})_{t\in{[0,L_K]}}\in{A'},(X_t)_{t\geq L_K}\in{A}\big). \end{align*} Denoting by $\check{\nu}$ the image of $\nu$ under time reversal, taking a sequence of compacts increasing to $\tilde{\mathcal{G}},$ we thus directly obtain by \eqref{definter} that \begin{equation} \label{nuinvrev} \nu=\check{\nu}. 
\end{equation} \item One can find a result similar to \eqref{definter} but with \begin{equation} \label{defWFGprime} W_{F,\tilde{\mathcal{G}}}^{'0}=\{\zeta^-<0=L_F<\zeta^+\}\text{ and }W_{F,\tilde{\mathcal{G}}}^{'*}=p_{\tilde{\mathcal{G}}}^*(W_{F,\tilde{\mathcal{G}}}^{'0}) \end{equation} instead of $W_{F,\tilde{\mathcal{G}}}^{0}$ and $W_{F,\tilde{\mathcal{G}}}^{*},$ see \eqref{defWFG}, where $L_F(w)=\sup\{t\in\mathbb{R}:\,w(t)\in{F}\}$ for all $w\in{W_{\tilde{\mathcal{G}}}},$ with the convention $\sup\varnothing=\zeta^-.$ Indeed defining $\mathcal{W}_{F,\tilde{\mathcal{G}}}^{'0}$ as the set of events $B\in{\mathcal{W}_{\tilde{\mathcal{G}}}},$ $B\subset W_{F,\tilde{\mathcal{G}}}^{'0},$ which are a product of $B^+$ and $B^-,$ and taking \begin{equation} \label{defQKprime} Q_{F,\tilde{\mathcal{G}}}'\stackrel{\mathrm{def.}}{=}\sum_{x\in{\hat{\partial} F}}e_{F,\tilde{\mathcal{G}}}(x)P_x^{\tilde{\mathcal{G}}}({\cdot^-})P^{F,\tilde{\mathcal{G}}}_x({\cdot^+}), \end{equation} we have by \eqref{nuinvrev} that \begin{equation} \label{definterprime} \nu(A)=Q_{F,\tilde{\mathcal{G}}}'\big((p_{\tilde{\mathcal{G}}}^*)^{-1}(A)\big)\text{ for all }A\in{\mathcal{W}_{\tilde{\mathcal{G}}}^*},A\subset W_{F,\tilde{\mathcal{G}}}^{'*}. \end{equation} \end{enumerate} \end{Rk} Let us now see another consequence of Theorem \nolinebreak \ref{nuexists}. We define $G_{\kappa}=\{x\in{G}:\,\kappa_x>0\},$ and recall the definition of $\tilde{\mathcal{G}}^{-}$ from page \pageref{deftildeged}, and of killed, killed-surviving and surviving-killed random interlacements from below \eqref{defIu}. 
\begin{Cor} \label{killedinterdes} Let $\tilde{\nu}^\mathcal{K},$ $\tilde{\nu}^{\mathcal{KS}}$ and $\tilde{\nu}^{\mathcal{SK}}$ be the probabilities on $(W_{\tilde{\mathcal{G}}},\mathcal{W}_{\tilde{\mathcal{G}}})$ given by \begin{align*} \nonumber\tilde{\nu}^{\mathcal{K}}\stackrel{\mathrm{def.}}{=}\sum_{x\in{G_{\kappa}}}\kappa_x\mathit{\mathbf{h}}_{\text{kill}}(x)P^{\tilde{\mathcal{G}}}_x\big(\cdot^+\,|\,\mathcal{W}_{\tilde{\mathcal{G}}}^{\mathcal{K},+}\big)P^{\overline{I_x^c},\tilde{\mathcal{G}}}_x\big(\cdot^-\big)\text{ on }\mathcal{W}_{\tilde{\mathcal{G}}^{-},\tilde{\mathcal{G}}}^{0}, \\\tilde{\nu}^{\mathcal{KS}}\stackrel{\mathrm{def.}}{=}\sum_{x\in{G_{\kappa}}}\kappa_x\mathit{\mathbf{h}}_{\text{surv}}(x)P^{\tilde{\mathcal{G}}}_x\big(\cdot^+\,|\,\mathcal{W}_{\tilde{\mathcal{G}}}^{\mathcal{S},+}\big)P^{\overline{I_x^c},\tilde{\mathcal{G}}}_x\big(\cdot^-\big)\text{ on }\mathcal{W}_{\tilde{\mathcal{G}}^{-},\tilde{\mathcal{G}}}^{0}, \\\nonumber\tilde{\nu}^{\mathcal{SK}}\stackrel{\mathrm{def.}}{=}\sum_{x\in{G_{\kappa}}}\kappa_x\mathit{\mathbf{h}}_{\text{surv}}(x)P^{\overline{I_x^c},\tilde{\mathcal{G}}}_x\big(\cdot^+\big)P^{\tilde{\mathcal{G}}}_x\big(\cdot^-\,|\,\mathcal{W}_{\tilde{\mathcal{G}}}^{\mathcal{S},+}\big)\text{ on }\mathcal{W}_{\tilde{\mathcal{G}}^{-},\tilde{\mathcal{G}}}^{'0}, \end{align*} and such that $\tilde{\nu}^{\mathcal{K}}(A)=\tilde{\nu}^{\mathcal{KS}}(A)=0$ for all $A\in\mathcal{W}_{\tilde{\mathcal{G}}}$ with $A\cap W_{\tilde{\mathcal{G}}^{-},\tilde{\mathcal{G}}}^0=\varnothing,$ and $\tilde{\nu}^{\mathcal{SK}}(A)=0$ for all $A\in\mathcal{W}_{\tilde{\mathcal{G}}}$ with $A\cap W_{\tilde{\mathcal{G}}^{-},\tilde{\mathcal{G}}}^{'0}=\varnothing.$ Then \begin{equation} \label{newdescriptionnuK} \nu^{\mathcal{K}}(A)=\tilde{\nu}^{\mathcal{K}}\big((p_{\tilde{\mathcal{G}}}^*)^{-1}(A)\big)\text{ for all }A\in{\mathcal{W}_{\tilde{\mathcal{G}}}^{*}},A\subset W^{\mathcal{K},*}_{\tilde{\mathcal{G}}}\cap
W^{*}_{\tilde{\mathcal{G}}^{-},\tilde{\mathcal{G}}}, \end{equation} and similarly for killed-surviving random interlacements if $A\subset W^{\mathcal{KS},*}_{\tilde{\mathcal{G}}}\cap W^{*}_{\tilde{\mathcal{G}}^{-},\tilde{\mathcal{G}}},$ and for surviving-killed random interlacements if $A\subset W^{\mathcal{SK},*}_{\tilde{\mathcal{G}}}\cap W^{'*}_{\tilde{\mathcal{G}}^{-},\tilde{\mathcal{G}}}$. \end{Cor} \begin{proof} Let us first consider killed random interlacements. We have $\hat{\partial}\tilde{\mathcal{G}}^{-}=G_{\kappa},$ $e_{\tilde{\mathcal{G}}^{-}}(x)=e_{G}(x)=\lambda_xP_x(\tilde{H}_G=\infty)=\kappa_x$ and $P^{\tilde{\mathcal{G}}^{-}}_x=P^{\overline{I_x^c}}_x$ since $L_{\overline{I_x^c}}=L_{\tilde{\mathcal{G}}^{-}}$ on the event $\{L_{\tilde{\mathcal{G}}^{-}}\in{(0,\zeta)},X_{L_{\tilde{\mathcal{G}}^{-}}}=x\}=\{L_{\overline{I_x^c}}\in{(0,\zeta)},X_{L_{\overline{I_x^c}}}=x\}$ for all $x\in{G_{\kappa}}.$ Therefore $\tilde{\nu}^{\mathcal{K}}(A)=Q_{\tilde{\mathcal{G}}^{-},\tilde{\mathcal{G}}}(A)$ for all sets $A\subset W_{\tilde{\mathcal{G}}}$ with $A^+,A^-\subset W_{\tilde{\mathcal{G}}}^{\mathcal{K},+},$ and so \eqref{newdescriptionnuK} follows readily from \eqref{definter}. The proof is similar for killed-surviving interlacements, as well as for surviving-killed interlacements using \eqref{defQKprime} and \eqref{definterprime} with $F=\tilde{\mathcal{G}}^{-}.$ \end{proof} Corollary \nolinebreak \ref{killedinterdes} provides us with the following description of the point process consisting of the trajectories of killed interlacements hitting $\tilde{\mathcal{G}}^{-}$: for each $x\in{G},$ take a Poisson number of trajectories with parameter $u\kappa_x\mathit{\mathbf{h}}_{\text{kill}}(x),$ each independent, with law $P_x(\cdot\,|\,W^{\mathcal{K},+}_{\tilde{\mathcal{G}}})$ for their forwards part and $P_x^{\overline{I_x^c}}$ for their backwards part. 
Then the point process which consists of all these trajectories modulo time-shift has the same law as the point process consisting of the trajectories of killed interlacements hitting $\tilde{\mathcal{G}}^{-}.$ Note that $P_x^{\overline{I_x^c}}$ can also be seen as the law of a $\text{BES}^3(0)$ process on $I_x$ starting in $x$ and stopped when reaching the open end of $I_x,$ see for instance Theorem \nolinebreak 4.5, Chapter XII in \cite{MR1725357}. We will see another description of killed random interlacements as an $\mathit{\mathbf{h}}_{\text{kill}}$-transform of random interlacements in Corollary \nolinebreak \ref{h0transformiso}. When $\mathit{\mathbf{h}}_{\text{kill}}\equiv1$ killed random interlacements and random interlacements coincide, and one can thus describe the trace on $\tilde{\mathcal{G}}^{-}$ of the random interlacement process, that is the point process consisting of the trace on $\tilde{\mathcal{G}}^{-}$ of each trajectory in the random interlacement process $\omega_u$ hitting $\tilde{\mathcal{G}}^{-},$ as in \eqref{disinterh0=1}. One can describe similarly the discrete killed random interlacement process, that is the point process consisting of the trajectories in $\hat{\omega}_u$ whose forwards and backwards parts are killed, as well as killed-surviving and surviving-killed random interlacements. Note that finitary interlacements, as introduced in \cite{MR3962876}, are a special case of killed random interlacements, and \eqref{disinterh0=1} can be seen as a generalization of Proposition \nolinebreak 4.1 in \cite{MR3962876}, and we refer to Remark \ref{linkwithfinitary} for additional results on finitary random interlacements.
\begin{Rk} \label{rqkilled} \begin{enumerate}[1)] \item \label{desgiveall}If one applies \eqref{newdescriptionnuK} to a new graph $\mathcal{G}'$ which is the same graph as $\mathcal{G},$ plus an additional vertex $x+t_x\cdot I_x$ on each $I_x,$ $x\in{G}$ and $t_x\in{(0,\rho_x)},$ then \eqref{newdescriptionnuK} describes the law of $\nu^\mathcal{K}$ on $(\tilde{\mathcal{G}}')^{-},$ that is on $\tilde{\mathcal{G}}^{-}$ and on $[x,x+t_x\cdot I_x](\subset I_x),$ $x\in{G}.$ We can approximate the whole cable system $\tilde{\mathcal{G}}$ in that way by letting $t_x\rightarrow\rho_x$ for all $x\in{G},$ and thus \eqref{newdescriptionnuK} is enough to obtain the complete law of $\nu^{\mathcal{K}}.$ One cannot however find a direct description similar to \eqref{newdescriptionnuK} for the complete law of $\nu^{\mathcal{K}}$ since for all $x\in{G}$ with $\kappa_x>0,$ \begin{equation*} \nu^{\mathcal{K}}([x,x+t\cdot I_x])\geq e_{[x,x+t\cdot I_x]}(x+t\cdot I_x)\mathit{\mathbf{h}}_{\text{kill}}(x+t\cdot I_x)\rightarrow{\infty} \end{equation*} as $t\nearrow\rho_x$ by a similar argument as in the proof of \eqref*{4capIx} of \cite{DrePreRod3}, and so there is an infinite number of trajectories in the killed random interlacement process hitting $I_x$ (or in fact $[x+t\cdot I_x,x+\rho_x\cdot I_x)$ for any $t<\rho_x$). \item \label{killedonA} If $A\subset G,$ Corollary \nolinebreak \ref{killedinterdes} still holds for killed on $A$ random interlacements instead of killed random interlacements when replacing $G_{\kappa}$ by $G_{\kappa}^{A_{\infty}}=\{x\in{A}:\,\kappa_x>0\},$ killed trajectories $W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}$ by killed on $A$ trajectories, and $\mathit{\mathbf{h}}_{\text{kill}}$ by the probability to be killed on $A,$ which follows from considering the graph $\mathcal{G}^{A_{\infty}}$ as in Remark \ref{rk:killedonARI}. 
\end{enumerate} \end{Rk} \section{Some trees with \texorpdfstring{$\mathit{\mathbf{h}}_{\text{kill}}\equiv1$}{hkill=1} and \texorpdfstring{$\tilde{h}_*=0$}{h*=0}} \label{sec:h0=1h_*=0} We present a class of weighted graphs for which $\mathit{\mathbf{h}}_{\text{kill}}\equiv1$ and the critical parameter $\tilde{h}_*=0,$ thus proving that the implication \eqref{eq:h0<1thenh*>0} is not an equivalence. First, in Proposition \nolinebreak \ref{Prop:condtreeh_*=0}, we use the description \eqref{disinterh0=1} of random interlacements when $\mathit{\mathbf{h}}_{\text{kill}}\equiv1$ (or more precisely Corollary \ref{killedinterdes}) and the link \eqref{eq:capimplies0bounded} between the Gaussian free field and random interlacements to prove that $\tilde{h}_*\geq0$ under one of the conditions \eqref{condtreeh_*=0}, \eqref{condtreedyadich*=0} or \eqref{condlineh_*=0} on the growth of the weights, killing measure and number of neighbors in a subset of our graph. Combined with a simple condition implying $\mathit{\mathbf{h}}_{\text{kill}}\equiv1,$ this provides us with the desired class of graphs, see Corollary \nolinebreak \ref{Cor:h_0=1andh_*=0}.
We say that $\mathcal{G}$ contains a tree $(T_i)_{i\geq0}$ if $T_i,$ $i\geq0,$ is a sequence of disjoint subsets of $G$ such that $|T_0|=1$ and each vertex of $T_{i+1}$ has a unique neighbour in $T_i$ for all $i\geq0,$ and we say that $(T_i)_{i\geq0}$ is binary if $|\{y\in{T_{n+1}}:\,y\sim x\}|=2$ for all $x\in{T_n}$ and $n\geq0.$ \begin{Prop} \label{Prop:condtreeh_*=0} Let $\mathcal{G}$ be a transient graph such that either \begin{gather} \label{condtreeh_*=0} \text{$\mathcal{G}$ contains a tree }(T_i)_{i\geq0}\text{ with }\inf_{x\in{T_n}}\sum_{\substack{y\in{T_{n+1}}\\y\sim x}}1\wedge\frac{\lambda_{x,y}\kappa_x}{\lambda_x}\tend{n}{\infty}\infty, \\\label{condtreedyadich*=0} \text{or $\mathcal{G}$ contains a binary tree }(T_i)_{i\geq0}\text{ with }\inf_{x\in{T_n}}\min_{\substack{y\in{T_{n+1}}\\y\sim x}}\frac{\lambda_{x,y}\kappa_x}{\lambda_x}\tend{n}{\infty}\infty, \\\label{condlineh_*=0} \text{or there exists an infinite connected path } (x_1,x_2,\dots)\subset G^\mathbb{N}\text{ with }\frac{\lambda_{x_n,x_{n+1}}\kappa_{x_n}}{\lambda_{x_n}\log(n)}\tend{n}{\infty}\infty, \end{gather} then $\tilde{h}_*\geq0.$ \end{Prop} \begin{proof} Let us first assume that \eqref{condtreeh_*=0} holds.
For each $x\in{T_n}$ and $y\in{T_{n+1}},$ the probability that a discrete-time trajectory starting at $x$ first jumps to $y$ is $\lambda_{x,y}/\lambda_x.$ Starting an independent Poisson number of discrete trajectories at $x$ with parameter $u\kappa_x$ for all $x\in{T_i},$ $i\geq0,$ let us denote by $A_u\subset G$ the union over $n\in\mathbb{N}$ of the sets of vertices at generation $n+1$ visited at time one by a trajectory starting at generation $n.$ The average number of neighbors $y\in{T_{n+1}\cap A_u}$ of a vertex $x\in{T_n}$ is then \begin{equation} \label{averagenumberneighbor} \sum_{\substack{y\in{T_{n+1}}\\y\sim x}}\Big(1-\exp\Big(-\frac{u\lambda_{x,y}\kappa_x}{\lambda_x}\Big)\Big)\geq \frac12\sum_{\substack{y\in{T_{n+1}}\\y\sim x}}1\wedge \frac{u\lambda_{x,y}\kappa_x}{\lambda_x}\geq\frac{u\wedge1}{2}\sum_{\substack{y\in{T_{n+1}}\\y\sim x}}1\wedge \frac{\lambda_{x,y}\kappa_x}{\lambda_x}. \end{equation} Therefore, if $n$ is large enough, the intersection of $A_u$ and all the descendants of a vertex at generation $n$ stochastically dominates a supercritical Galton-Watson tree, and thus contains an infinite connected component with positive probability. If we denote by $\tilde{A}_u\subset\tilde{\mathcal{G}}$ the set obtained from $A_u$ by adding the cable between each $x\in{A_u}$ and its first ancestor, we thus have that $\tilde{A}_u$ contains an unbounded connected component with positive probability.
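As a quick numerical sanity check (not part of the proof), the two elementary inequalities behind \eqref{averagenumberneighbor}, namely $1-e^{-a}\geq\frac12(1\wedge a)$ and $1\wedge(ua)\geq(u\wedge1)(1\wedge a)$ for $a,u\geq0,$ can be verified on a grid with the following Python sketch:

```python
import math

def check_average_neighbor_bounds():
    """Verify the two inequalities used in (averagenumberneighbor):
    1 - exp(-a) >= (1/2) * min(1, a)   and
    min(1, u*a) >= min(u, 1) * min(1, a)   for a, u >= 0."""
    grid = [i * 0.01 for i in range(1, 2001)]   # a in (0, 20]
    levels = [0.1, 0.5, 1.0, 2.0, 10.0]         # sample values of u
    first = all(1.0 - math.exp(-a) >= 0.5 * min(1.0, a) for a in grid)
    second = all(min(1.0, u * a) >= min(u, 1.0) * min(1.0, a)
                 for u in levels for a in grid)
    return first and second
```

Both checks pass; analytically, the first inequality follows from $1-e^{-a}\geq a-a^2/2\geq a/2$ for $a\leq1$ and from $1-e^{-a}\geq1-e^{-1}>\frac12$ for $a\geq1.$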
It moreover follows from Corollary \ref{killedinterdes} that the trace on $\tilde{\mathcal{G}}^-$ of the sum of the killed random interlacements process and the killed-surviving random interlacements process has the same law as a Poisson point process with intensity $\sum_{x\in{G}}u\kappa_xP_x^{\tilde{\mathcal{G}}^-}$, and so ${\cal I}^u\cap\tilde{\mathcal{G}}^{-}$ stochastically dominates $\tilde{A}_u.$ Therefore, ${\cal I}^u$ contains an unbounded connected component with positive probability for all $u>0.$ If \eqref{0bounded} holds, it moreover follows from \eqref{levelsetsvsIu} that ${\cal I}^u$ is stochastically dominated by $E^{\geq-\sqrt{2u}},$ and so $E^{\geq-\sqrt{2u}}$ contains an unbounded connected component with positive probability for all $u>0,$ that is $\tilde{h}_*\geq0.$ If \eqref{0bounded} does not hold, it is clear that $\tilde{h}_*\geq0$ by monotonicity. If \eqref{condtreedyadich*=0} holds, then for each $u>0,$ there exists $N\in\mathbb{N}$ such that for all $n\geq N$ the average number of neighbors in $T_{n+1}\cap A_u$ of each $x\in{T_n},$ see \eqref{averagenumberneighbor}, is at least $2(1-e^{-1})>1,$ and we can conclude similarly as before. Let us finally assume that \eqref{condlineh_*=0} holds. For all $u>0$ we have \begin{equation*} \sum_{n\in\mathbb{N}}\exp\Big(-\frac{u\lambda_{x_n,x_{n+1}}\kappa_{x_n}}{\lambda_{x_n}}\Big)=\sum_{n\in\mathbb{N}}\left(\frac{1}{n}\right)^{\frac{u\lambda_{x_n,x_{n+1}}\kappa_{x_n}}{\lambda_{x_n}\log(n)}}<\infty. \end{equation*} Therefore, by the Borel-Cantelli lemma, there a.s.\ exists $N\in\mathbb{N}$ such that the random interlacements process at level $u$ on $\mathcal{G}$ contains for all $n\geq N$ a trajectory starting in $x_n$ and visiting $x_{n+1}$ at time $1,$ and so ${\cal I}^u$ contains an unbounded connected component, and we can conclude similarly as above.
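The rewriting of the series in the last display rests on the identity $e^{-b}=n^{-b/\log n}$ for $n\geq2,$ and summability then holds as soon as the exponent $b_n/\log n$ tends to infinity. A small Python sketch, a sanity check only, with an arbitrary illustrative sequence of exponents:

```python
import math

def check_identity():
    # exp(-b) = n ** (-b / log n): the rewriting used for the series
    for n in range(2, 200):
        b = 3.0 + 0.1 * n
        assert abs(math.exp(-b) - n ** (-b / math.log(n))) < 1e-12
    return True

def tail_sum(lo, hi):
    # summand n**(-beta_n) with beta_n = 2 + log log n -> infinity
    return sum(n ** (-(2.0 + math.log(math.log(n)))) for n in range(lo, hi))

def check_convergence():
    # the tail of the series is negligible, consistent with summability
    return tail_sum(10_000, 100_000) < 1e-2
```

The choice $\beta_n=2+\log\log n$ is only an example of an exponent diverging slowly; any sequence with $b_n/\log n\to\infty$ behaves the same way.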
\end{proof} It is moreover easy to find a condition which implies that $\mathit{\mathbf{h}}_{\text{kill}}\equiv1$: it is enough that the probability for a discrete trajectory to be killed at each step is uniformly bounded from below by a positive constant. \begin{Cor} \label{Cor:h_0=1andh_*=0} Let $\mathcal{G}$ be a graph verifying either \eqref{condtreeh_*=0}, \eqref{condtreedyadich*=0} or \eqref{condlineh_*=0} such that $\kappa_x\geq c$ and $\frac{\kappa_x}{\lambda_x}\geq c'$ for all $x\in{G}.$ Then $\tilde{h}_*=0$ and $\mathit{\mathbf{h}}_{\text{kill}}\equiv1.$ \end{Cor} \begin{proof} Since $\frac{\kappa_x}{\lambda_x}\geq c',$ the canonical random walk on $\mathcal{G}$ has probability at least $c'>0$ to be killed at each step, and so $\mathit{\mathbf{h}}_{\text{kill}}\equiv1.$ Moreover, since for all compacts $K\subset G,$ we have $e_K(x)=\lambda_xP_x(\tilde{H}_K=\infty)\geq\kappa_x\geq c$ for all $x\in{K},$ we deduce that $\mathrm{cap}(A)\geq c|A|$ for all $A\subset G.$ Thus by \eqref*{4capconditiondis} in \cite{DrePreRod3} we have that \eqref{capcondition} holds, and so $\tilde{h}_*\leq0$ by \eqref{eq:capimplies0bounded}. We can now conclude by Proposition \ref{Prop:condtreeh_*=0}. \end{proof} Let us give three canonical examples of graphs verifying the conditions of Corollary \ref{Cor:h_0=1andh_*=0}, which illustrate the difference between the three conditions \eqref{condtreeh_*=0}, \eqref{condtreedyadich*=0} and \eqref{condlineh_*=0}.
\begin{enumerate}[(a)] \item any tree with constant weights such that the number $N_n$ of children of each vertex at generation $n$ and the killing measure $\kappa_n$ of each vertex at generation $n$ verify $\kappa_n\geq cN_n$ and $N_n\tend{n}{\infty}\infty,$ \item the $(d+1)$-regular tree, $d\geq2,$ such that the weight $\lambda_n$ of each edge between a vertex at generation $n$ and a vertex at generation $n+1$ and the killing measure $\kappa_n$ of each vertex at generation $n$ verify $\kappa_n\geq c(\lambda_n\wedge\lambda_{n-1})$ and $\lambda_n\tend{n}{\infty}\infty,$ \item any graph $\mathcal{G}$ with $\kappa_x\geq c$ and $\kappa_x/\lambda_x\geq c'$ for all $x\in{G}$ containing an infinite connected path $(x_1,x_2,\dots)\subset G^\mathbb{N}$ with $(\kappa_{x_n}\wedge\lambda_{x_n,x_{n+1}})/\log(n)\tend{n}{\infty}\infty.$ \end{enumerate} Note that all these examples have an unbounded killing measure, and in fact one can easily show that this is always the case for any graph verifying either \eqref{condtreeh_*=0}, \eqref{condtreedyadich*=0} or \eqref{condlineh_*=0}. Example \nolinebreak (c) shows that one can find graphs with sub-exponential volume growth verifying $\tilde{h}_*=0$ and $\mathit{\mathbf{h}}_{\text{kill}}\equiv1$ if the killing measure and weights increase sufficiently fast along some path, and example \nolinebreak (b) shows that it is in fact enough that they diverge to infinity on the $(d+1)$-regular tree. We refer to Remark \ref{rkboundedlambda} for an explanation of why these are important examples in view of Theorem \nolinebreak \ref{The:h_*<0}. Finally, example \nolinebreak (a) shows that it is also possible to find graphs verifying $\tilde{h}_*=0$ and $\mathit{\mathbf{h}}_{\text{kill}}\equiv1$ with bounded weights $\lambda_{x,y},$ $x,y\in{G},$ or equivalently bounded lengths $\rho_{e},$ $e\in{E},$ of the cables.
\begin{Rk} \label{h0=1h_*=infinity} In Proposition \nolinebreak \ref{h*infinity}, we also give an example of a graph verifying $\mathit{\mathbf{h}}_{\text{kill}}\equiv1$ for which $\tilde{h}_*=\infty$ (this is the case when $\alpha\leq\frac1d$), which also shows that the implication \eqref{eq:h0<1thenh*>0} is not an equivalence. The advantage of Corollary \nolinebreak \ref{Cor:h_0=1andh_*=0} is that it shows that even under condition \eqref{capcondition}, under which $\tilde{h}_*\leq0,$ the implication $\mathit{\mathbf{h}}_{\text{kill}}<1\ \Rightarrow\ \tilde{h}_*=0$ is still not an equivalence. \end{Rk} \section{Proof of \texorpdfstring{$\tilde{h}_*<0$}{h*<0} on massive graphs} \label{sec:h_*<0} In this section we prove that $\tilde{h}_*<0$ on almost any massive graph with sub-exponential volume growth, bounded weights and $\kappa\geq c,$ and on a class of trees including the $(d+1)$-regular tree with unit weights and large enough constant killing measure, see Theorem \nolinebreak \ref{The:h_*<0} and Corollary \nolinebreak \ref{Cor:h_*<0} for exact statements. This shows in particular that the implication \eqref{eq:h0<1thenh*>0} is not trivial, and in fact that one has $\tilde{h}_*<0$ on most typical examples of graphs with $\mathit{\mathbf{h}}_{\text{kill}}\equiv1.$ Finally, we use the isomorphism between random interlacements and the Gaussian free field to derive similar results for the interlacement set on the cable system in Corollary \nolinebreak \ref{cor:u_*>0}. Throughout this section, we will use $c$ and $C$ for constants changing from place to place, and numbered constants $c_0,$ $C_0,\dots$ for fixed constants, which appear in increasing numerical order.
For each $L\geq0$ and $x\in{G},$ let us define the discrete ball $B(x,L)=\{y\in{G}:d(x,y)< L\}$ with internal boundary $\partial B(x,L)=\{y\in{B(x,L)}:\,\exists\,z\sim y,z\notin{B(x,L)}\},$ \begin{equation} \label{defgb} g(L)=\sup_{x\in{G},y\in{B(x,L)^c}}g(x,y)\text{ and }b(L)=\sup_{x\in{G}}|\partial B(x,L)|. \end{equation} \begin{The} \label{The:h_*<0} Assume that $\lambda_x\leq C$ for all $x\in{G},$ and that there exist $C<\infty$ and $c_0>0$ such that \begin{equation} \label{condgvscap} g(L)\leq C\exp(-c_0L)\text{ for all }L\geq0. \end{equation} Assume moreover that either there exist $\alpha>0$ and $c>0$ such that \begin{equation} \label{condsizeball} b(L)\leq \exp\left(\frac{cL}{\log(L)^{1+\alpha}}\right)\text{ for all }L\geq2, \end{equation} or $\mathcal{G}$ is a tree and there exist $C<\infty$ and $c_1\in{(0,c_0\cdot c_2)},$ where $c_2\in{(0,1]}$ is some absolute constant independent of the choice of $\mathcal{G},$ such that \begin{equation} \label{condsizeballtree} b(L)\leq C\exp\left(c_1L\right)\text{ for all }L\geq1. 
\end{equation} Then there exist $h<0,$ $C<\infty$ and $c>0$ such that \begin{equation} \label{connectingtoballdecayexpo} \P^G(x\leftrightarrow \partial B(x,L)\text{ in }E^{\geq h})\leq C\exp(-cL)\text{ for all }x\in{G}\text{ and }L\geq1, \end{equation} where $\{x\leftrightarrow \partial B(x,L)\text{ in }E^{\geq h}\}$ is the event that there exists a continuous path $\pi\subset E^{\geq h}$ between $x$ and $\partial B(x,L).$ In particular, $\tilde{h}_*(\mathcal{G})<0.$ \end{The} Before proving Theorem \nolinebreak \ref{The:h_*<0}, let us give a few examples of graphs on which it is easy to check that the conditions of Theorem \nolinebreak \ref{The:h_*<0} are fulfilled, and thus $\tilde{h}_*<0.$ \begin{Cor} \label{Cor:h_*<0} If $\mathcal{G}$ is either a graph such that \eqref{condsizeball} holds and there exist $c>0$ and $C<\infty$ with $\kappa_x\geq c$ and $\lambda_x\leq C$ for all $x\in{G},$ or if $\mathcal{G}$ is a $(d+1)$-regular tree endowed with unit weights and constant killing measure $\kappa\equiv \hat{\kappa}\in{[C_0,\infty)},$ where $C_0\in{(0,\infty)}$ is a fixed constant depending on $d,$ then \eqref{connectingtoballdecayexpo} holds and $\tilde{h}_*(\mathcal{G})<0.$ \end{Cor} \begin{proof} Let us first assume that \eqref{condsizeball} holds and there exist $c>0$ and $C<\infty$ with $\kappa_x\geq c$ and $\lambda_x\leq C$ for all $x\in{G}.$ By Theorem \nolinebreak \ref{The:h_*<0}, one only needs to prove that \eqref{condgvscap} holds. 
Under these conditions, the probability that the discrete random walk $\hat{Z}$ on $G$ is killed at each step is uniformly bounded from below by $\inf_{x\in{G}}\kappa_x/\lambda_x.$ Therefore, if the graph distance between $x$ and $y$ is $L,$ we have that $P_x(H_y<\infty)\leq (1-\inf_{x\in{G}}\kappa_x/\lambda_x)^L,$ and that the number of times a random walk starting in $y$ returns to $y$ is stochastically dominated by a geometric random variable with parameter $\kappa_y/\lambda_y.$ Therefore \eqref{condgvscap} holds since \begin{equation} \label{boundonGL} g(L)\leq \sup_{x\in{G}}\frac{1}{\kappa_x}\exp\Big(\log\Big(1-\inf_{x\in{G}}\frac{\kappa_x}{\lambda_x}\Big)L\Big). \end{equation} Let us now assume that $\mathcal{G}$ is a $(d+1)$-regular tree endowed with unit weights and constant killing measure $\kappa\equiv \hat{\kappa}\in{[C_0,\infty)}.$ It is clear that \eqref{condsizeballtree} holds for $c_1=\log(d+1)$ and that $\lambda_x=(d+1)+\hat{\kappa}$ for all $x\in{G},$ and thus in view of Theorem \nolinebreak \ref{The:h_*<0}, we only need to prove that \eqref{condgvscap} holds for some $c_0$ such that $\log(d+1)\leq c_0\cdot c_2.$ Let us call $p=p(\hat{\kappa},d)=P_x(H_y<\zeta)$ for some neighbors $x$ and $y,$ which does not depend on the choice of these neighbors by transitivity. Since for all $x,y\in{G},$ the random walk starting in $x$ visits $y$ if and only if it visits the first vertex on the geodesic between $x$ and $y,$ then the second and so on, one can easily show by the Markov property that $P_x(H_y<\zeta)=p^{d(x,y)}.$ By transitivity, we immediately obtain that \eqref{condgvscap} holds for $c_0=\log(1/p).$ But since $p(\hat{\kappa},d)\rightarrow0$ when $\hat{\kappa}\rightarrow\infty,$ one can choose the constant $C_0$ large enough so that $\log(d+1)\leq \log(1/p(\hat{\kappa},d))\cdot c_2$ if $\hat{\kappa}\geq C_0,$ and we can conclude.
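The geometric decay in \eqref{boundonGL} can be illustrated on a toy example, not the general setting of the corollary: for the nearest-neighbour walk on $\mathbb{Z}$ killed with probability $q$ at each step, the hitting probability of a point at distance $\ell$ is $r^{\ell}$ with $r=(1-\sqrt{1-p^2})/p$ and $p=1-q,$ and $r\leq p$ recovers the per-step killing bound. A Python sketch of this one-dimensional check:

```python
import math

def hit_factor(q):
    """Per-unit-distance hitting factor r for the walk on Z killed with
    probability q at each step: r solves r = p*(1 + r**2)/2, p = 1 - q."""
    p = 1.0 - q
    return (1.0 - math.sqrt(1.0 - p * p)) / p

def check_killing_bound():
    for q in (0.05, 0.1, 0.3, 0.5, 0.9):
        p, r = 1.0 - q, hit_factor(q)
        # r is indeed a fixed point of r -> p*(1 + r^2)/2
        assert abs(r - p * (1.0 + r * r) / 2.0) < 1e-12
        # hence P_x(H_y < infinity) = r**d(x,y) <= p**d(x,y),
        # the analogue of the exponential bound (boundonGL)
        assert 0.0 < r <= p
    return True
```

The inequality $r\leq p$ is equivalent to $1-p^2\leq\sqrt{1-p^2},$ which holds since $\sqrt{1-p^2}\leq1.$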
\end{proof} Corollary \nolinebreak \ref{Cor:h_*<0} indicates that when the weights and killing measure are uniformly bounded away from $0$ and $\infty,$ we typically have $\tilde{h}_*(\mathcal{G})<0.$ This holds for almost any graph with sub-exponential volume growth, for instance for the typical example of the massive $d$-dimensional lattice $\mathbb{Z}^d,$ $d\geq 3,$ or for $(d+1)$-regular trees, which have exponential volume growth when $d\geq2,$ when the killing measure is large enough. Note that on the contrary $\tilde{h}_*(\mathcal{G})\geq0$ in the massless case $\kappa\equiv0$ by \eqref{eq:h0<1thenh*>0}. We now turn to the proof of Theorem \nolinebreak \ref{The:h_*<0}, and we are first going to show that the probability in \eqref{connectingtoballdecayexpo} can be made arbitrarily small for a suitable choice of $h,$ depending on $L,$ which will follow from the coupling \eqref{levelsetsvsIu} between random interlacements and the Gaussian free field and a result from \cite{MR3502602} about the probability of connecting two vertices in the level sets at level $0.$ Under the assumption \eqref{condgvscap}, to simplify notation we define \begin{equation} \label{defc2} c_3=\frac{c_0}{2}, \end{equation} for all $L>0$ \begin{equation} \label{defpL} p(L)=\left\{\begin{array}{ll} \sup_{x\in{G}}\mathrm{cap}(B(x,L))&\text{if }\eqref{condsizeball}\text{ holds,}\\ \sup_{x\in{G},y\in{\partial B(x,L)}}\mathrm{cap}([x,y])&\text{if }\mathcal{G}\text{ is a tree,} \end{array}\right. \end{equation} where $[x,y]\subset\tilde{\mathcal{G}}$ denotes the geodesic path between $x$ and $y$ in the cable system, and for all $h>0$ \begin{equation} a(L,h)=\left\{\begin{array}{ll} \sup_{x\in{G}}\P^G\left(x\leftrightarrow \partial B(x,L)\text{ in }E^{\geq -h}\right)&\text{if }\eqref{condsizeball}\text{ holds,}\\ \sup_{x\in{G},y\in{\partial B(x,L)}}\P^G\left(x\leftrightarrow y\text{ in }E^{\geq -h}\right)&\text{if }\mathcal{G}\text{ is a tree.} \end{array}\right.
\end{equation} Note that there are trees such that \eqref{condsizeball} holds, and one can then choose for instance the first definition for $p(L)$ and $a(L,h).$ \begin{Lemme} \label{iniren} Let $\mathcal{G}$ be a graph such that $\lambda_x\leq C$ for all $x\in{G},$ \eqref{condgvscap} holds and either \eqref{condsizeball} holds or $\mathcal{G}$ is a tree. Then there exists $C<\infty$ such that \begin{equation} \label{eq:iniren} a\left(L,\frac{t}{\sqrt{p(L)}}\right)\leq t^2+e^{-c_3L}\text{ for all }L\geq C\text{ and }t\geq0. \end{equation} \end{Lemme} \begin{proof} First note that \begin{equation} \label{Greenimpliescap} \text{ if \eqref{condgvscap} is verified, then the condition \eqref{capcondition} holds.} \end{equation} Indeed, let $A\subset G$ be a finite connected set with diameter $n+1,$ and for each $i\in\{0,\dots,n\}$ let $x_i\in{A}$ be such that $d(x_0,x_i)=i.$ Then by \eqref*{4variational} in \cite{DrePreRod3} and \eqref{condgvscap} we have \begin{equation*} \mathrm{cap}(A)\geq\left(\frac{1}{(n+1)^2}\sum_{i,j=0}^ng(x_i,x_j)\right)^{-1}\geq\left(\frac{2}{n+1}\sum_{k=0}^{n}C\exp(-ck)\right)^{-1}\geq cn, \end{equation*} and \eqref{Greenimpliescap} then follows from \eqref*{4capconditiondis} in \cite{DrePreRod3}. Let us take $u=t^2/(2p(L)),$ and first assume that \eqref{condsizeball} holds. Recalling the definition of $\mathcal{C}_u$ from \eqref{levelsetsvsIu}, if $x\leftrightarrow \partial B(x,L)$ in $\mathcal{C}_u\cup \{z\in{\tilde{\mathcal{G}}}:\,\phi_z>0\},$ then either ${\cal I}^u\cap B(x,L)\neq\varnothing$ or $x\leftrightarrow \partial B(x,L)$ in $\{z\in{\tilde{\mathcal{G}}}:\,|\phi_z|>0\}.$ By symmetry of the Gaussian free field, \eqref{Greenimpliescap}, \eqref{eq:capimplies0bounded} and \eqref{levelsetsvsIu}, we obtain \begin{equation*} \P^G\Big(x\leftrightarrow \partial B(x,L)\text{ in }E^{\geq -\frac{t}{\sqrt{p(L)}}}\Big)\leq \P^I({\cal I}^u\cap B(x,L)\neq\varnothing)+2\P^G(x\leftrightarrow\partial B(x,L)\text{ in }E^{\geq0}).
\end{equation*} By \eqref{defIu} and \eqref{defgb} we moreover have \begin{equation*} \P^I({\cal I}^u\cap B(x,L)\neq\varnothing)=1-\exp\big(-u\mathrm{cap}(B(x,L))\big)\leq 1-\exp\left(-\frac{t^2}{2}\right)\leq t^2. \end{equation*} Moreover, by Propositions \nolinebreak 2.1 and 5.2 in \cite{MR3502602}, since $g(y,y)\geq \lambda_y^{-1}\geq c$ for all $y\in{G},$ we have by a union bound that for all $L$ large enough \begin{equation*} \P^G(x\leftrightarrow\partial B(x,L)\text{ in }E^{\geq0})\leq Cb(L)\arcsin(Cg(L))\leq \exp(-c_3L), \end{equation*} where we used \eqref{condgvscap} and \eqref{condsizeball} in the last inequality, as well as the fact that $\arcsin(x)\leq Cx$ for all $x\leq1.$ Let us now assume that $\mathcal{G}$ is a tree and fix some $x\in{G},$ $L\geq1,$ $y\in{\partial B(x,L)}$ and $t\geq0.$ We can prove similarly as before that since $\mathcal{G}$ is a tree \begin{equation*} \P^G\Big(x\leftrightarrow y\text{ in }E^{\geq -\frac{t}{\sqrt{p(L)}}}\Big)=\P^G\Big([x,y]\subset E^{\geq -\frac{t}{\sqrt{p(L)}}}\Big)\leq \P^I({\cal I}^u\cap [x,y]\neq\varnothing)+2\P^G( x\leftrightarrow y\text{ in } E^{\geq0}). \end{equation*} Using \eqref{defIu} and Propositions \nolinebreak 2.1 and 5.2 in \cite{MR3502602}, we can conclude. \end{proof} Lemma \nolinebreak \ref{iniren} implies that the probability in \eqref{connectingtoballdecayexpo} can be made arbitrarily small by taking $h=\frac{t}{\sqrt{p(L)}},$ $t$ small enough and $L$ large enough. This will serve as the base of a renormalization scheme, similar to the one presented in Section 7 of \cite{MR3420516}, that we now explain. For some $L_0>0$ we define recursively \begin{equation} \label{defLk} L_{k+1}=2L_k\Big(1+\frac{1}{(k+1)^{1+\alpha}}\Big)\text{ for all }k\geq0, \end{equation} where $\alpha$ is the same constant as in \eqref{condsizeball} if \eqref{condsizeball} holds, and $\alpha=1$ otherwise. 
Then there exists a constant $C_1<\infty$ depending only on $\alpha$ such that \begin{equation} \label{ren:boundonL_k} 2^kL_0\leq L_k\leq C_12^kL_0\text{ for all }k\in\mathbb{N}. \end{equation} Let us also define for all $t\geq0$ and $k\in\mathbb{N}_0$ \begin{equation} h_k(t)=\frac{t}{\sqrt{p(L_0)}}\left(1-\sum_{i=1}^{k}\frac{c}{i^{1+\alpha}}\right), \end{equation} where the constant $c=c(\alpha)$ is chosen small enough so that $h_{\infty}(t)>0$ for all $t>0,$ where $h_{\infty}(t)$ is the limit of $h_k(t)$ as $k\rightarrow\infty.$ For any $h\in\mathbb{R}$ we have \begin{equation} \label{Lk+1inclusLk} \{x\leftrightarrow \partial B(x,L_{k+1})\text{ in }E^{\geq h}\}\subset\bigcup_{y\in{\partial B(x,L_{k+1})}}\{y\leftrightarrow \partial B(y,L_k)\text{ in }E^{\geq h}\}\cap\{x\leftrightarrow \partial B(x,L_k)\text{ in }E^{\geq h}\}, \end{equation} and if $\mathcal{G}$ is a tree, for any $y\in{\partial B(x,L_{k+1})},$ letting $z$ and $z'$ be the unique points on the geodesic between $x$ and $y$ such that $z\in{\partial B(x,L_k)}$ and $z'\in{\partial B(y,L_k)},$ \begin{equation} \label{Lk+1inclusLktree} \{x\leftrightarrow y\text{ in }E^{\geq h}\}\subset\{y\leftrightarrow z'\text{ in }E^{\geq h}\}\cap\{x\leftrightarrow z\text{ in }E^{\geq h}\}. \end{equation} The two events on the right-hand side of \eqref{Lk+1inclusLk} and \eqref{Lk+1inclusLktree} are measurable with respect to the field on distant sets since for all $y\in{\partial{B(x,L_{k+1})}}$ \begin{equation} \label{xfarfromy} d\big(B(y,L_k),B(x,L_k)\big)\geq \frac{L_k}{(k+1)^{1+\alpha}}, \end{equation} upon choosing $L_0$ large enough. One can use the decoupling inequality from \cite{MR3325312} to prove that these events are almost uncorrelated, up to some sprinkling parameter, as we now explain.
Similarly to Section 6 of \cite{DrePreRod2}, one can adapt the proofs of Corollary \nolinebreak 1.3 and Proposition \nolinebreak 1.4 in \cite{MR3325312} to the cable system of any transient weighted graph to show the following: for all $L\geq1,$ $\delta>0,$ $x_1,x_2\in{G}$ with $s\stackrel{\mathrm{def.}}{=}d(B(x_1,L),B(x_2,L))>0,$ and all increasing events $A_1$ and $A_2$ such that, for each $i\in{\{1,2\}},$ $A_i\subset C(\tilde{\mathcal{G}},\mathbb{R})$ is measurable with respect to the $\sigma$-algebra generated by the coordinate functions and depends only on the values of the function on the edges $I_{\{x,y\}},$ $x,y\in{B(x_i,L)},$ we have \begin{equation} \label{decoupling} \begin{split} \P^G(\phi\in{A_1\cap A_2})&\leq\P^G(\phi+\delta\in{A_1})\P^G(\phi+\delta\in{A_2}) +2b(L)\exp\left(-\frac{\delta^2}{8g(s)}\right). \end{split} \end{equation} Combining \eqref{Lk+1inclusLk}, \eqref{xfarfromy} and \eqref{decoupling} with Lemma \nolinebreak \ref{iniren}, we can now derive a bound on $a(L_k,h_k(t)).$ \begin{Lemme} \label{lemmainduction} Let $\mathcal{G}$ be a graph such that $\lambda_x\leq C$ for all $x\in{G}$ and \eqref{condgvscap} holds. If \eqref{condsizeball} holds, then there exist constants $C_2<\infty,$ depending only on $\alpha,$ and $C<\infty$ such that for all $t\in{(0,1/2]}$ and $L_0\geq C$ with \begin{equation} \label{L0mustbelarge} t^2\geq CL_0\exp\left(-\frac{c_3L_0}{C_2}\right), \end{equation} and for all $k\in\mathbb{N}_0,$ we have \begin{equation} \label{toproverecursively} a\big(L_k,h_k(t)\big)\leq \frac12\exp\left(C2^kL_0\sum_{i=0}^k\frac{1}{\log(L_i)^{1+\alpha}}\right)(t^2+e^{-c_3L_0})^{2^k}. \end{equation} If $\mathcal{G}$ is a tree such that \eqref{condsizeballtree} holds, then there exists $C<\infty$ such that for all $t\in{(0,1/2]}$ and $L_0\geq C$ verifying \eqref{L0mustbelarge}, and for all $k\in\mathbb{N}_0,$ we have \begin{equation} \label{toproverecursivelytree} a\big(L_k,h_k(t)\big)\leq \frac12(2(t^2+e^{-c_3L_0}))^{2^k}.
\end{equation} \end{Lemme} \begin{proof} We fix some $t\in{(0,1/2]}$ and first prove \eqref{toproverecursively} by induction on $k$ under the assumptions \eqref{L0mustbelarge} and \eqref{condsizeball}. The statement for $k=0$ follows directly from \eqref{eq:iniren} if $L_0$ is large enough. Let us now assume that \eqref{toproverecursively} holds for some $k\in{\mathbb{N}_0}.$ Combining \eqref{Lk+1inclusLk}, \eqref{xfarfromy} and \eqref{decoupling} we obtain by a union bound that \begin{equation} \label{eq:recursion1} a\big(L_{k+1},h_{k+1}(t)\big)\leq b(L_{k+1})\left(a\big(L_k,h_k(t)\big)^2+2b(L_k)\exp\left(-\Big(\frac{ct}{\sqrt{p(L_0)}(k+1)^{1+\alpha}}\Big)^2\frac{1}{8g(s_k)}\right)\right), \end{equation} where $s_k=\frac{L_k}{(k+1)^{1+\alpha}}.$ By \eqref{toproverecursively}, we moreover have that \begin{equation} \label{eq:recursion2} \begin{split} b(L_{k+1})a\big(L_k,h_k(t)\big)^2&\leq\frac14b(L_{k+1})\exp\left(C2^{k+1}L_0\sum_{i=0}^k\frac{1}{\log(L_i)^{1+\alpha}}\right)(t^2+e^{-c_3L_0})^{2^{k+1}}\\&\leq\frac14\exp\left(C2^{k+1}L_0\sum_{i=0}^{k+1}\frac{1}{\log(L_i)^{1+\alpha}}\right)(t^2+e^{-c_3L_0})^{2^{k+1}}, \end{split} \end{equation} where the last inequality holds by \eqref{condsizeball} and \eqref{ren:boundonL_k} when choosing the constant $C$ large enough, independently of $t$ and $k.$ Moreover, by \eqref{condgvscap}, \eqref{condsizeball} and \eqref{ren:boundonL_k}, noting that \begin{equation*} p(L_0)\leq C\sum_{k=0}^{L_0}b(k)\leq CL_0\exp\Big(\frac{cL_0}{\log(L_0)^{1+\alpha}}\Big)\leq g(s_k)^{-1/2} \end{equation*} for $L_0$ large enough, independently of $k,$ we have \begin{align*} \left(\frac{ct}{\sqrt{p(L_0)}(k+1)^{1+\alpha}}\right)^2\frac{1}{8g(s_k)}\geq \frac{ct^2g(s_k)^{-\frac12}}{(k+1)^{2(1+\alpha)}}&\geq \frac{ct^2}{(k+1)^{2(1+\alpha)}}\exp\left(\frac{c_02^kL_0}{2(k+1)^{1+\alpha}}\right) \\&\geq ct^22^{k+1}\exp(c_3L_0/C_2), \end{align*} for $L_0$ large enough, where $C_2$ is a constant depending only on $\alpha.$ Therefore, upon choosing $C$ 
large enough, independently of $k,$ if \eqref{L0mustbelarge} holds then by \eqref{condsizeball} and \eqref{ren:boundonL_k} \begin{equation} \label{eq:recursion3} 2b(L_k)\exp\left(-\Big(\frac{ct}{\sqrt{p(L_0)}(k+1)^{1+\alpha}}\Big)^2\frac{1}{8g(s_k)}\right)\leq \frac14e^{-c_3L_02^{k+1}}\leq \frac14(t^2+e^{-c_3L_0})^{2^{k+1}}, \end{equation} and by \eqref{condsizeball} and \eqref{ren:boundonL_k}, upon choosing $C$ large enough, \begin{equation} \label{eq:recursion4} b(L_{k+1})\leq\exp\left(C2^{k+1}L_0\sum_{i=0}^{k+1}\frac{1}{\log(L_i)^{1+\alpha}}\right). \end{equation} We can easily conclude that \eqref{toproverecursively} holds for $k+1$ by combining \eqref{eq:recursion1}, \eqref{eq:recursion2}, \eqref{eq:recursion3} and \eqref{eq:recursion4}. The proof of \eqref{toproverecursivelytree} is similar when $\mathcal{G}$ is a tree and \eqref{condsizeballtree} holds. Indeed, when $k=0,$ \eqref{toproverecursivelytree} is just \eqref{eq:iniren}. Let us now assume that \eqref{toproverecursivelytree} holds for some $k\in\mathbb{N}_0.$ Combining \eqref{Lk+1inclusLktree}, \eqref{xfarfromy} and \eqref{decoupling} we have \begin{equation} \label{eq:recursion1tree} a\big(L_{k+1},h_{k+1}(t)\big)\leq a\big(L_k,h_k(t)\big)^2+2b(L_k)\exp\left(-\left(\frac{ct}{\sqrt{p(L_0)}(k+1)^{1+\alpha}}\right)^2\frac{1}{8g(s_k)}\right). \end{equation} Moreover, one can easily prove that \eqref{eq:recursion3} still holds under condition \eqref{condsizeballtree} since $p(L_0)\leq CL_0\leq g(s_k)^{-1/2}$ if $L_0$ is large enough, and we can easily deduce from \eqref{eq:recursion1tree} that \eqref{toproverecursivelytree} also holds for $k+1.$ \end{proof} In view of Lemma \nolinebreak \ref{lemmainduction}, we are now ready to prove Theorem \nolinebreak \ref{The:h_*<0}. \begin{proof}[Proof of Theorem \nolinebreak \ref{The:h_*<0}] We take $t\equiv t(L_0)=e^{-c_3L_0/(4C_2)},$ then \eqref{L0mustbelarge} holds if $L_0$ is large enough. Let us first assume that \eqref{condsizeball} holds. 
By Lemma \nolinebreak \ref{lemmainduction} and \eqref{ren:boundonL_k} one can fix $L_0$ large enough, so that for all $k\in\mathbb{N}_0$ \begin{align*} a\big(L_k,h_k(t)\big)&\leq\frac12\exp\left(C2^kL_0\sum_{i=0}^{\infty}\frac{1}{\big(i+\log(L_0)\big)^{1+\alpha}}\right)\left(2\exp(-cL_0)\right)^{2^k} \\&\leq2^{2^k}\exp\left(C2^k\frac{L_0}{\log(L_0)^{\alpha}}\right)\exp\big(-c2^{k}L_0\big) \\&\leq\exp\big(-c2^{k}L_0\big)\leq\exp(-cL_k). \end{align*} For any $L\geq1,$ let us take $k\in\mathbb{N}_0$ such that $L_{k}\leq L\leq L_{k+1},$ then for all $x\in{G}$ by \eqref{ren:boundonL_k} \begin{equation*} \P^G\big(x\leftrightarrow \partial B(x,L)\text{ in }E^{\geq -h_{\infty}(t)}\big)\leq a\big(L_k,h_k(t)\big)\leq\exp(-cL_k)\leq\exp(-cL). \end{equation*} Taking the limit as $L\rightarrow\infty,$ we get that the component of $x$ in $E^{\geq -h_{\infty}(t)}$ is $\P^G$-a.s.\ bounded for all $x\in{G},$ and by a union bound $E^{\geq -h_{\infty}(t)}$ contains $\P^G$-a.s.\ only bounded components, that is $\tilde{h}_*(\mathcal{G})\leq -h_{\infty}(t)<0.$ Let us now assume that $\mathcal{G}$ is a tree such that \eqref{condsizeballtree} holds. We then have by a union bound that \begin{align*} \P^G\big(x\leftrightarrow \partial B(x,L_k)\text{ in }E^{\geq -h_{k}(t)}\big)&\leq b(L_k)a\big(L_k,h_k(t)\big) \\&\leq C\exp(c_1C_1L_02^k)4^{2^k}\exp\Big(-\frac{c_3}{2C_2\vee1}L_02^k\Big), \end{align*} where we used \eqref{condsizeballtree}, \eqref{ren:boundonL_k} and \eqref{toproverecursivelytree} in the last inequality. In view of \eqref{defc2}, since $C_1$ and $C_2$ depend only on $\alpha$ and we can choose $\alpha=1$ when $\mathcal{G}$ is a tree, we can define the absolute constant \begin{equation*} c_2\stackrel{\text{def.}}{=}\frac{1}{4C_1(2C_2\vee1)}, \end{equation*} and, if $c_1\leq c_0\cdot c_2,$ then, taking $L_0$ large enough, we can easily conclude that \eqref{connectingtoballdecayexpo} holds and $\tilde{h}_*(\mathcal{G})<0$ similarly as before. 
\end{proof} \begin{Rk} \label{rkboundedlambda} If a graph verifies \eqref{condsizeball} and the conditions of example \nolinebreak (c) below Corollary \nolinebreak \ref{Cor:h_0=1andh_*=0}, then \eqref{condgvscap} holds by \eqref{boundonGL}. Similarly, if $\mathcal{G}$ is the $(d+1)$-regular tree, $d\geq2,$ as in example \nolinebreak (b) below Corollary \nolinebreak \ref{Cor:h_0=1andh_*=0} but with $\kappa_n=t\lambda_n$ for some $t>0$ and a sequence $\lambda_n$ increasing to infinity, then it verifies \eqref{condgvscap} with $c_0=\log(1+t/(d+1))$ by \eqref{boundonGL}, as well as \eqref{condsizeballtree} with $c_1=\log(d+1),$ and in particular $c_1< c_0\cdot c_2$ if $t$ is chosen large enough. By Corollary \nolinebreak \ref{Cor:h_0=1andh_*=0}, we have $\tilde{h}_*=0$ on these graphs, and so the condition $\lambda_x\leq C$ in Theorem \nolinebreak \ref{The:h_*<0} is necessary. It is however not clear whether one could replace the condition \eqref{condgvscap} by $\mathit{\mathbf{h}}_{\text{kill}}\equiv1,$ which is necessary in view of \eqref{eq:h0<1thenh*>0}, and allow graphs with exponential growth, instead of assuming either \eqref{condsizeball} or that the graph is a tree with fast enough decay of the Green function. \end{Rk} One can easily derive from Theorem \nolinebreak \ref{The:h_*<0} and \eqref{levelsetsvsIu} a similar result for the random interlacement set ${\cal I}^u$ on the cable system. \begin{Cor} \label{cor:u_*>0} Let $\mathcal{G}$ be any graph satisfying the assumptions of either Theorem \nolinebreak \ref{The:h_*<0} or Corollary \nolinebreak \ref{Cor:h_*<0}. There exist $u>0$ and $c>0$ such that \begin{equation} \label{connectingtoballdecayexpoIu} \P^I(x\leftrightarrow \partial B(x,L)\text{ in }{\cal I}^{u})\leq\exp(-cL)\text{ for all }x\in{G}\text{ and }L\geq1, \end{equation} and in particular the critical parameter associated with the percolation of ${\cal I}^u$ on the cable system is positive.
\end{Cor} \begin{proof} It follows from \eqref{Greenimpliescap}, \eqref{eq:capimplies0bounded} and \eqref{levelsetsvsIu} that ${\cal I}^u$ is stochastically dominated by $E^{\geq -\sqrt{2u}}.$ The inequality \eqref{connectingtoballdecayexpoIu} then follows from \eqref{connectingtoballdecayexpo} for $u=h^2/2.$ Moreover taking the limit as $L\rightarrow\infty,$ we obtain that ${\cal I}^u$ contains $\P^I$-a.s.\ only bounded components for any $u$ verifying \eqref{connectingtoballdecayexpoIu}, and we can conclude. \end{proof} \begin{Rk} \begin{enumerate}[1)] \label{linkwithfinitary} \item In \cite{cai2021rigorous}, a result similar to Corollary \nolinebreak \ref{cor:u_*>0} is proven. They consider finitary random interlacements, which by Proposition \nolinebreak 4.1 in \cite{MR3962876}, see also \eqref{disinterh0=1}, correspond to random interlacements on the graph $\mathcal{G}^T=(\mathbb{Z}^d,\lambda^T,\kappa^T),$ $d\geq3,$ where $\lambda^T_{x,y}=\frac{T}{T+1}$ and $\kappa^T_x=\frac{2d}{T+1}.$ If ${\cal I}^u(\subset\tilde{\mathcal{G}})$ contains only bounded components, then the set of edges crossed by a trajectory in the random interlacement process contains only finite components, which happens a.s.\ for $u$ small enough by Corollary \nolinebreak \ref{cor:u_*>0}. This is thus an alternative proof of the bound $u_c>0$ from Theorem \nolinebreak 4 in \cite{cai2021rigorous}. 
\item \label{h_*u_*finite}One can easily show that $\tilde{h}_*>-\infty$ and that the critical parameter associated with the percolation of ${\cal I}^u,$ $u>0,$ as a subset of the cable system or as a set of edges crossed by trajectories in the random interlacement process, is finite on any graph $\mathcal{G}$ with uniformly bounded weights and killing measure, that is, $c\leq \kappa_x\leq C$ and $c\leq \lambda_{x,y}\leq C,$ and such that $p_c<1,$ where $p_c$ is the critical parameter for Bernoulli bond percolation on $\mathcal{G}.$ We refer to \cite{DcGoRaSeYa} for a review of the literature and a proof of the inequality $p_c<1$ under rather general conditions, which incidentally uses the property \eqref{eq:h0<1thenh*>0}. Finitary random interlacements on $\mathbb{Z}^d$ clearly fulfill all these hypotheses, and combining this with the previous remark, we obtain Theorem \nolinebreak 4 in \cite{cai2021rigorous}, but on a more general class of graphs. Indeed, let us start, for each $x\in{G},$ a Poisson number of independent trajectories starting at $x$ on $\tilde{\mathcal{G}}^{-}$ with parameter $u\kappa_x.$ Then by \eqref{disinterh0=1}, ${\cal I}^u\cap\tilde{\mathcal{G}}^{-}$ has the same law as the set of points visited by one of these trajectories. For each edge $e=\{x,y\},$ the number of trajectories starting at either $x$ or $y$ and first crossing $e$ has law \begin{equation*} \text{Poi}\Big(\frac{u\kappa_x\lambda_{x,y}}{\lambda_x}+\frac{u\kappa_y\lambda_{x,y}}{\lambda_y}\Big). \end{equation*} Therefore, for any $u$ large enough so that $1-\exp\big(-u\lambda_{x,y}(\kappa_x\lambda_x^{-1}+\kappa_y\lambda_y^{-1})\big)>p_c$ for all $\{x,y\}\in{E},$ there is an infinite connected component of edges crossed by the discrete killed random interlacement process, and thus ${\cal I}^u(\subset\tilde{\mathcal{G}})$ contains an unbounded connected component with positive probability.
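The Poisson law displayed above is an instance of standard Poisson thinning: a trajectory started at $x$ first crosses $e=\{x,y\}$ exactly when its first jump goes to $y,$ which happens with probability $\lambda_{x,y}/\lambda_x,$ so that
\begin{equation*}
\mathrm{Poi}(u\kappa_x)\xrightarrow{\text{thinning by }\lambda_{x,y}/\lambda_x}\mathrm{Poi}\Big(\frac{u\kappa_x\lambda_{x,y}}{\lambda_x}\Big),
\end{equation*}
and summing with the independent contribution of the trajectories started at $y$ gives the stated parameter.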
One can easily check that \eqref{capcondition} holds for a graph with uniformly bounded weights and killing measure since $e_A(x)\geq \kappa_x\geq c$ for all $x\in{G}$ and $A\subset G.$ In particular, using \eqref{eq:capimplies0bounded} and \eqref{levelsetsvsIu}, we know that $E^{\geq-\sqrt{2u}}$ contains an infinite connected component with positive probability if ${\cal I}^u$ does, and we can conclude. \end{enumerate} \end{Rk} \section{Doob \texorpdfstring{$\mathit{\mathbf{h}}$}{h}-transform} \label{sec:Doob} In this section, we introduce the notion of the Doob $\mathit{\mathbf{h}}$-transform $\mathcal{G}_{\mathit{\mathbf{h}}}$ of a graph $\mathcal{G},$ when $\mathit{\mathbf{h}}:\tilde{\mathcal{G}}\rightarrow(0,\infty)$ is an harmonic function, so that the diffusion $X$ on the cable system ${\tilde{\mathcal{G}}_\mathit{\mathbf{h}}}$ of $\mathcal{G}_{\mathit{\mathbf{h}}}$ is related to the $\mathit{\mathbf{h}}$-transform of the diffusion $X$ on $\tilde{\mathcal{G}},$ see \eqref{semigrouph}. One can then also relate the law of the Gaussian free field on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}$ to the Gaussian free field on $\tilde{\mathcal{G}},$ see \eqref{eq:GFFh}, from which one can deduce an effective criterion for \eqref{0bounded} to hold in terms of $\mathit{\mathbf{h}}$-transform, see Corollary \nolinebreak \ref{capimplieseverythingh}, which will be useful in Section \ref{sec:Z20}. 
Introducing the notion of $\mathit{\mathbf{h}}$-transform of random interlacements, see Definition \ref{defhtransforminter}, we finally use results from \cite{DrePreRod3} to obtain under condition \eqref{0bounded} the law of the $\mathit{\mathbf{h}}$-transform of the capacity of the level sets $E_{\mathit{\mathbf{h}}}^{\geq h}(x_0)$ and an isomorphism between the Gaussian free field and the $\mathit{\mathbf{h}}$-transform of random interlacements for various choices of the harmonic function $\mathit{\mathbf{h}},$ see Theorem \nolinebreak \ref{couplingintergffh} and Corollaries \nolinebreak \ref{h0transformiso} and \ref{couplingintergffK}. \begin{Def} \label{harmo} We say that a function $\mathit{\mathbf{h}}:\tilde{\mathcal{G}}\rightarrow(0,\infty)$ is harmonic on $\tilde{\mathcal{G}}$ if for all $x\in{G}$ \begin{enumerate}[1)] \item $\mathit{\mathbf{h}}(\partial I_x)\stackrel{\text{def.}}{=}\lim\limits_{t\nearrow\rho_x}\mathit{\mathbf{h}}(x+t\cdot I_x)$ exists and is finite, \item if $e=\{x,y\}\in{E}$ or $e=x\in{G},$ then $t\mapsto \mathit{\mathbf{h}}(x+t\cdot I_e)\in{C^2([0,\rho_e),(0,\infty))}$ and \begin{equation*} \frac{\mathrm{d}^2\mathit{\mathbf{h}}(x+t\cdot I_e)}{\mathrm{d}t^2}=0\text{ for all }t\in{[0,\rho_e)}, \end{equation*} \item and for all $x\in{G}$ \begin{equation} \label{defharmonic} \Big(\frac{\mathrm{d}\mathit{\mathbf{h}}(x+t\cdot I_{x})}{\mathrm{d}t}+\sum_{y\sim x}\frac{\mathrm{d}\mathit{\mathbf{h}}(x+t\cdot I_{\{x,y\}})}{\mathrm{d}t}\Big)\Big|_{t=0}=0. 
\end{equation} \end{enumerate} We define the $\mathit{\mathbf{h}}$-transform $\mathcal{G}_{\mathit{\mathbf{h}}}$ of the graph $\mathcal{G}$ as the graph with vertex set $G_{\mathit{\mathbf{h}}}=G,$ with weights $\lambda^{(\mathit{\mathbf{h}})}_{x,y}=\mathit{\mathbf{h}}(x)\mathit{\mathbf{h}}(y)\lambda_{x,y},$ $x,y\in{G},$ and with killing measure $\kappa^{(\mathit{\mathbf{h}})}_x=\kappa_x\mathit{\mathbf{h}}(x)\mathit{\mathbf{h}}(\partial I_x),$ $x\in{G}.$ \end{Def} Note that this corresponds to the usual definition of harmonicity, that is, the generator associated with the form $\mathcal{E}_{\tilde{\mathcal{G}}}$ from \eqref{Dirichlet} vanishes when applied to $\mathit{\mathbf{h}},$ see around (2.1) in \cite{MR3152724} for a description of this generator. In order to prove that the graph $\mathcal{G}_{\mathit{\mathbf{h}}}$ is part of the setting described at the beginning of Section \ref{sec:intro}, we only need to prove that the diffusion $X$ on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}$ is transient, which follows from \eqref{relationGreenfunctionh}. The conditions 2) and 3) of Definition \ref{harmo} can be equivalently restated as follows, respectively: for all $e\in{E\cup G}$ and $t\in{[0,\rho_e)},$ \begin{equation} \label{hformulaonedges} \mathit{\mathbf{h}}(x+t\cdot I_e)=\begin{cases}2t\lambda_{x,y}\mathit{\mathbf{h}}(y)+\big(1-2t\lambda_{x,y}\big)\mathit{\mathbf{h}}(x)&\text{if }e=\{x,y\}\in{E} \\2t\kappa_x\mathit{\mathbf{h}}(\partial I_x)+\big(1-2t\kappa_x\big)\mathit{\mathbf{h}}(x)&\text{if }e=x\in{G}, \end{cases} \end{equation} and for all $x\in{G}$ \begin{equation} \label{hformulaoutsideofedge} \kappa_x\mathit{\mathbf{h}}(\partial I_x)+\sum_{y\sim x}\lambda_{x,y}\mathit{\mathbf{h}}(y)=\lambda_x\mathit{\mathbf{h}}(x).
\end{equation} In particular, the total weight of a vertex ${x}\in{{G}_{\mathit{\mathbf{h}}}}$ is $\lambda_{{x}}^{(\mathit{\mathbf{h}})}=\mathit{\mathbf{h}}(x)^2\lambda_x.$ Note that since $\mathit{\mathbf{h}}>0,$ the edge set $E_{\mathit{\mathbf{h}}}$ of $\mathcal{G}_{\mathit{\mathbf{h}}}$ is equal to $E,$ and we will often identify the edges and vertices of $\mathcal{G}_{\mathit{\mathbf{h}}}$ with the edges and vertices of $\mathcal{G}.$ Let us define a function $\psi_\mathit{\mathbf{h}}:\tilde{\mathcal{G}}\rightarrow\tilde{\mathcal{G}}_\mathit{\mathbf{h}}$ such that for all $e=\{x,y\}\in{E}$ or $e=x\in{G}$ \begin{equation} \label{defpsih} \psi_\mathit{\mathbf{h}}(x+t\cdot I_e)\stackrel{\mathrm{def.}}{=}x+\frac{t}{\mathit{\mathbf{h}}(x)\mathit{\mathbf{h}}(x+t\cdot I_e)}\cdot I_{e}\text{ for all }t\in{[0,\rho_e)}, \end{equation} where, with a slight abuse of notation, $I_e\subset\tilde{\mathcal{G}}$ on the left-hand side of \eqref{defpsih} and $I_e\subset\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}$ on the right-hand side of \eqref{defpsih}. We also take $\psi_{\mathit{\mathbf{h}}}(\Delta)=\Delta.$ Using \eqref{hformulaonedges}, one can easily check that this definition does not depend on the choice of the endpoint $x$ or $y$ of $I_e$ when $e=\{x,y\}\in{E},$ and that $\psi_\mathit{\mathbf{h}}$ is bijective.
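Note that the total weight identity $\lambda_{{x}}^{(\mathit{\mathbf{h}})}=\mathit{\mathbf{h}}(x)^2\lambda_x$ stated above follows in one line from Definition \ref{harmo} and \eqref{hformulaoutsideofedge}, the first equality below being the definition of the total weight on $\mathcal{G}_{\mathit{\mathbf{h}}}$:
\begin{equation*}
\lambda^{(\mathit{\mathbf{h}})}_{x}=\sum_{y\sim x}\lambda^{(\mathit{\mathbf{h}})}_{x,y}+\kappa^{(\mathit{\mathbf{h}})}_x=\mathit{\mathbf{h}}(x)\Big(\sum_{y\sim x}\lambda_{x,y}\mathit{\mathbf{h}}(y)+\kappa_x\mathit{\mathbf{h}}(\partial I_x)\Big)=\mathit{\mathbf{h}}(x)^2\lambda_x.
\end{equation*}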
For any forwards trajectory $w^+\in{W^+_{\tilde{\mathcal{G}}_\mathit{\mathbf{h}}}}$ on $\tilde{\mathcal{G}}_\mathit{\mathbf{h}},$ we define the time change \begin{equation} \label{defthetah} \theta_{\mathit{\mathbf{h}}}^{w^+}(t)\stackrel{\mathrm{def.}}{=}\inf\Big\{s\geq0:\,\int_0^s\mathit{\mathbf{h}}\big(\psi_\mathit{\mathbf{h}}^{-1}(w^+(u))\big)^{4}\,\mathrm{d}u>t\Big\}\text{ for all }t\in{[0,\infty)}, \end{equation} with the conventions $\mathit{\mathbf{h}}(\Delta)=0$ and $\inf\varnothing=\zeta,$ and \begin{equation} \label{defxi} (\xi_\mathit{\mathbf{h}}(w^+))(t)\stackrel{\mathrm{def.}}{=}\psi_\mathit{\mathbf{h}}^{-1}\big(w^+({\theta_{\mathit{\mathbf{h}}}^{w^+}(t)})\big)\text{ for all }t\in{[0,\infty)}. \end{equation} The process $\xi_{\mathit{\mathbf{h}}}(X)$ is thus a stochastic process on $\tilde{\mathcal{G}}$ under $P^{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}_{\psi_{\mathit{\mathbf{h}}}(x)},$ $x\in{\tilde{\mathcal{G}}},$ and we call it the $\mathit{\mathbf{h}}$-transform of $X.$ Indeed, we prove in \eqref{semigrouph} that if $T_t^{\tilde{\mathcal{G}}},$ $t\geq0,$ denotes the semigroup on $L^2(\tilde{\mathcal{G}},m)$ associated with $X$ under $P_{x}^{\tilde{\mathcal{G}}},$ $x\in{\tilde{\mathcal{G}}},$ one can relate the semigroup associated with $\xi_{\mathit{\mathbf{h}}}(X)$ to $T_t^{\tilde{\mathcal{G}}},$ in a way which corresponds to the usual definition of the $\mathit{\mathbf{h}}$-transform, see for instance Chapter 11 of \cite{MR2152573}.
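As a sanity check, note that the constant function $\mathit{\mathbf{h}}\equiv{\bf 1}$ is harmonic on $\tilde{\mathcal{G}},$ since \eqref{hformulaoutsideofedge} then reduces to $\kappa_x+\sum_{y\sim x}\lambda_{x,y}=\lambda_x,$ and in this case
\begin{equation*}
\mathcal{G}_{{\bf 1}}=\mathcal{G},\quad\psi_{{\bf 1}}=\mathrm{id},\quad\theta_{{\bf 1}}^{w^+}(t)=t\quad\text{and hence}\quad\xi_{{\bf 1}}(X)=X.
\end{equation*}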
Moreover, one can also relate the local times associated with $\xi_{\mathit{\mathbf{h}}}(X)$ to the local times associated with $X$ under $P_{\cdot}^{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}},$ and the Gaussian free field on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}$ and on $\tilde{\mathcal{G}}.$ \begin{Prop} \label{corh} Let $\mathit{\mathbf{h}}$ be an harmonic function on $\tilde{\mathcal{G}}.$ Under $P^{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}_{\psi_{\mathit{\mathbf{h}}}(x)},$ $x\in{\tilde{\mathcal{G}}},$ \begin{equation} \label{semigrouph} \text{the semigroup on $L^2(\tilde{\mathcal{G}},\mathit{\mathbf{h}}^2\cdot m)$ associated with $\xi_{\mathit{\mathbf{h}}}(X)$ is $f\mapsto\frac{1}{\mathit{\mathbf{h}}}T_t^{\tilde{\mathcal{G}}}(f\mathit{\mathbf{h}}),$} \end{equation} with respect to the measure $m,$ \begin{equation} \label{eq:localtimesh} \text{the field of local times associated with $\xi_{\mathit{\mathbf{h}}}(X)$ is } \big(\mathit{\mathbf{h}}(x)^2\ell_{\psi_{\mathit{\mathbf{h}}}(x)}(\theta_{\mathit{\mathbf{h}}}^{X}(t))\big)_{t\geq0,x\in{\tilde{\mathcal{G}}}}, \end{equation} and the Gaussian field \begin{equation} \label{eq:GFFh} \big(\mathit{\mathbf{h}}(x)\phi_{\psi_{\mathit{\mathbf{h}}}(x)}\big)_{x\in{\tilde{\mathcal{G}}}}\text{ has the same law under }\P_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}^G\text{ as }(\phi_x)_{x\in{\tilde{\mathcal{G}}}}\text{ under }\P_{\tilde{\mathcal{G}}}^G. \end{equation} \end{Prop} Similar links between the graph $\mathcal{G}_{\mathit{\mathbf{h}}}$ and the $\mathit{\mathbf{h}}$-transform have already been noticed for the discrete graph in specific contexts, see the proof of Proposition \nolinebreak 4.6 in \cite{LuSaTa} or the Appendix of \cite{MR4091511}, and Proposition \nolinebreak \ref{corh} can be seen as a generalization of these results, and its proof is presented in the Appendix. 
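In particular, writing $g_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}$ for the Green function of the diffusion on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}},$ matching covariances on both sides of \eqref{eq:GFFh}, which are centered Gaussian fields, gives
\begin{equation*}
g_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}\big(\psi_{\mathit{\mathbf{h}}}(x),\psi_{\mathit{\mathbf{h}}}(y)\big)=\frac{g_{\tilde{\mathcal{G}}}(x,y)}{\mathit{\mathbf{h}}(x)\mathit{\mathbf{h}}(y)}\text{ for all }x,y\in{\tilde{\mathcal{G}}},
\end{equation*}
so that the finiteness of $g_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}$ is equivalent to that of $g_{\tilde{\mathcal{G}}},$ in accordance with the transience of $X$ on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}$ noted below Definition \ref{harmo}.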
In view of \eqref{eq:GFFh}, one can transfer the results \eqref{eq:h0<1thenh*>0} and \eqref{eq:capimplies0bounded} about the level sets for the Gaussian free field on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}$, defined as in \eqref{deflevelsets}, to similar results for the sets \begin{equation} \label{defEf} E^{\geq h}_{\mathit{\mathbf{h}}}=\{x\in{\tilde{\mathcal{G}}}:\phi_x\geq h\times\mathit{\mathbf{h}}(x)\}\text{ and }E^{\geq h}_{\mathit{\mathbf{h}}}(x_0)=\text{ the connected component of }x_0\text{ in }E^{\geq h}_{\mathit{\mathbf{h}}}. \end{equation} It then follows directly from \eqref{eq:GFFh} that for all $h\in\mathbb{R}$ and $x_0\in{\tilde{\mathcal{G}}}$ \begin{equation} \label{killedlevelsetsvsh0transform} E^{\geq h}(\psi_{\mathit{\mathbf{h}}}(x_0))\text{ has the same law under }\P_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}^G\text{ as }\psi_{\mathit{\mathbf{h}}}\big(E_{\mathit{\mathbf{h}}}^{\geq h}(x_0)\big)\text{ under }\P_{\tilde{\mathcal{G}}}^G, \end{equation} where $E^{\geq h}(x_0)=E^{\geq h}_{\bf 1}(x_0)$ is the connected component of $x_0$ in $E^{\geq h}.$ \begin{Cor} \label{capimplieseverythingh} \begin{enumerate}[1)] \item If there exists an harmonic function $\mathit{\mathbf{h}}$ on $\tilde{\mathcal{G}}$ such that \eqref{capcondition} is verified for $\mathcal{G}_{\mathit{\mathbf{h}}},$ then \eqref{0bounded} holds. 
\item If $\mathit{\mathbf{h}}$ is an harmonic function on $\tilde{\mathcal{G}}$ such that $\mathit{\mathbf{h}}_{\text{kill}}<1$ on $\mathcal{G}_{\mathit{\mathbf{h}}},$ then $E^{\geq h}_{\mathit{\mathbf{h}}}$ contains an unbounded connected component with $\P^G_{\tilde{\mathcal{G}}}$ positive probability for all $h<0.$ \end{enumerate} \end{Cor} \begin{proof} Noting that $E_{\mathit{\mathbf{h}}}^{\geq 0}=E^{\geq 0}$ for any harmonic function $\mathit{\mathbf{h}}$ on $\tilde{\mathcal{G}},$ 1) follows directly from \eqref{eq:capimplies0bounded} and \eqref{killedlevelsetsvsh0transform}, whereas 2) follows directly from \eqref{eq:h0<1thenh*>0} and \eqref{killedlevelsetsvsh0transform}. \end{proof} In the rest of this section, we will deduce from Proposition \nolinebreak \ref{corh} and Theorem \nolinebreak \ref*{4T:main},2) in \cite{DrePreRod3} an explicit formula for the law of the level sets $E^{\geq h}_{\mathit{\mathbf{h}}}$ from \eqref{defEf}, as well as various isomorphisms between the Gaussian free field and either the $\mathit{\mathbf{h}}$-transform of random interlacements, see Theorem \nolinebreak \ref{couplingintergffh}, or killed or surviving random interlacements, see Corollary \nolinebreak \ref{h0transformiso}, or the trajectories in the random interlacement process never hitting a compact $K,$ see Corollary \nolinebreak \ref{couplingintergffK}, which will lead to a proof of Theorem \nolinebreak \ref{couplingintergffdim2} in Section \ref{sec:Z20}. These results are interesting and widely generalize similar theorems in dimension 2, see Theorems \nolinebreak 5.3 and 5.5 in \cite{MR3936156}, or on finite graphs, see Proposition \nolinebreak 2.4 in \cite{MR4091511}. 
They are however not needed to prove our main result Theorem \nolinebreak \ref{mainth}, and the hurried reader can directly skip to Section \ref{sec:Z20}, see in particular Proposition \nolinebreak \ref{Z20counterexample} therein, to understand how to use Corollary \nolinebreak \ref{capimplieseverythingh} to find a graph as in Theorem \nolinebreak \ref{mainth},3). Let us now define the $\mathit{\mathbf{h}}$-transform of random interlacements. If $w^*\in{W_{\tilde{\mathcal{G}}_\mathit{\mathbf{h}}}^*},$ we denote by $\xi_{\mathit{\mathbf{h}}}^*(w^*)$ the trajectory in $W_{\tilde{\mathcal{G}}}^*$ which corresponds to taking the image modulo time-shift of a trajectory with backwards part $\xi_{\mathit{\mathbf{h}}}((w(-t))_{t\geq0})$ and forwards part $\xi_{\mathit{\mathbf{h}}}((w(t))_{t\geq0}),$ for some $w\in{(p_{\tilde{\mathcal{G}}_\mathit{\mathbf{h}}}^*)^{-1}(w^*)},$ and one can easily check that this definition does not depend on the choice of $w.$ To simplify notation, we also denote by $\xi_{\mathit{\mathbf{h}}}^*:W_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}^*\times[0,\infty)\rightarrow W_{\tilde{\mathcal{G}}}^*\times[0,\infty)$ the map which associates $(\xi_{\mathit{\mathbf{h}}}^*(w^*),u)$ to $(w^*,u).$ \begin{Def} \label{defhtransforminter} If $\mathit{\mathbf{h}}$ is an harmonic function on $\tilde{\mathcal{G}},$ under $\P_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}^I,$ let us define $\omega^{\mathit{\mathbf{h}}}=\omega\circ(\xi_{\mathit{\mathbf{h}}}^*)^{-1},$ the $\mathit{\mathbf{h}}$-transform of the random interlacement process, and for all $u>0$ we denote by $(\ell^{\mathit{\mathbf{h}}}_{x,u})_{x\in{\tilde{\mathcal{G}}}}$ the family of local times with respect to $m$ associated with $\omega_u^{\mathit{\mathbf{h}}},$ the point process of trajectories in $\omega^{\mathit{\mathbf{h}}}$ with label at most $u,$ and by ${\cal I}^u_{\mathit{\mathbf{h}}}=\{x\in{\tilde{\mathcal{G}}}:\ell_{x,u}^{\mathit{\mathbf{h}}}>0\}$ the $\mathit{\mathbf{h}}$-transform of the
interlacement set. \end{Def} In \cite{DrePreRod3}, an explicit formula for the law of the capacity of level sets of the Gaussian free field was given, as well as a signed version of the isomorphism between the Gaussian free field and local times of random interlacements on the cable system under the condition \eqref{0bounded}, generalizing the isomorphism from \cite{MR3492939}. Applying these results to the $\mathit{\mathbf{h}}$-transform $\mathcal{G}_{\mathit{\mathbf{h}}}$ of the graph $\mathcal{G},$ we thus obtain the law of the capacity on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}$ of $\psi_{\mathit{\mathbf{h}}}(E_{\mathit{\mathbf{h}}}^{\geq h}(x_0)),$ $h\geq0,$ as well as an isomorphism between the Gaussian free field and the $\mathit{\mathbf{h}}$-transform of random interlacements on the cable system. \begin{The} \label{couplingintergffh} Let $\mathcal{G}$ be a transient graph and $\mathit{\mathbf{h}}$ an harmonic function on $\tilde{\mathcal{G}}.$ If \eqref{0bounded} is verified, then for all $h\geq0,$ \begin{equation} \label{eq:laplacecaph} \begin{split} &\mathbb{E}^G_{\tilde{\mathcal{G}}}\left[\exp\left(-u\mathrm{cap}_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}\big(\psi_{\mathit{\mathbf{h}}}({E}^{\geq h}_{\mathit{\mathbf{h}}}(x_0))\big)\right)\mathds{1}_{\phi_{x_0}\geq h\times\mathit{\mathbf{h}}(x_0)}\right]\\&=\P^G_{\tilde{\mathcal{G}}}\big(\phi_{x_0}\geq \mathit{\mathbf{h}}(x_0)\sqrt{2u+h^2}\big)\text{ for all }u\geq0\text{ and }x_0\in{\tilde{\mathcal{G}}}. 
\end{split} \end{equation} Moreover, \eqref{eq:laplacecaph} for $h=0$ is equivalent to \begin{equation} \label{eqcouplingintergffh} \begin{gathered} \big(\phi_x\mathds{1}_{x\notin{\mathcal{C}_u^{\mathit{\mathbf{h}}}}}+\sqrt{2\ell_{x,u}^{\mathit{\mathbf{h}}}+\phi_x^2}\mathds{1}_{x\in{\mathcal{C}_u^{\mathit{\mathbf{h}}}}}\big)_{x\in{\tilde{\mathcal{G}}}}\text{ has the same law under }\P^{I}_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}\otimes\P^{G}_{\tilde{\mathcal{G}}} \\\text{ as }\big(\phi_x+\sqrt{2u}{\mathit{\mathbf{h}}}(x)\big)_{x\in{\tilde{\mathcal{G}}}}\text{ under }\P^G_{\tilde{\mathcal{G}}}\text{ for all }u\geq0, \end{gathered} \end{equation} where $\mathcal{C}_u^{\mathit{\mathbf{h}}}$ denotes the closure of the union of the connected components of the sign clusters $\{x\in{\tilde{\mathcal{G}}}:|\phi_x|>0\}$ intersecting ${\cal I}_{\mathit{\mathbf{h}}}^u.$ \end{The} \begin{proof} By \eqref{killedlevelsetsvsh0transform} for $h=0,$ if \eqref{0bounded} holds for $\mathcal{G},$ then it also holds for $\mathcal{G}_{\mathit{\mathbf{h}}}.$ Applying Theorem \nolinebreak \ref*{4mainresultcap} in \cite{DrePreRod3} to the graph $\mathcal{G}_{\mathit{\mathbf{h}}}$ and using \eqref{eq:GFFh} and \eqref{killedlevelsetsvsh0transform}, we obtain that \eqref{0bounded} implies \eqref{eq:laplacecaph} for all $h\geq0.$ Moreover, it follows easily from \eqref{eq:localtimesh} that for all $u>0$ \begin{equation} \label{eq:localtimeRIh} (\ell_{x,u}^{\mathit{\mathbf{h}}})_{x\in{\tilde{\mathcal{G}}}}\text{ has the same law under }\P_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}^{I}\text{ as }(\mathit{\mathbf{h}}(x)^2\ell_{\psi_{\mathit{\mathbf{h}}}(x),u})_{x\in{\tilde{\mathcal{G}}}}\text{ under }\P_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}^{I}. 
\end{equation} Applying Theorem \nolinebreak \ref*{4couplingintergff} in \cite{DrePreRod3} to the graph $\mathcal{G}_{\mathit{\mathbf{h}}},$ and using \eqref{eq:GFFh} and \eqref{eq:localtimeRIh}, we obtain that \eqref{eq:laplacecaph} for $h=0$ is equivalent to \eqref{eqcouplingintergffh}. \end{proof} \begin{Rk} \begin{enumerate}[1)] \item The isomorphism \eqref{eqcouplingintergffh} generalizes the coupling presented in \eqref{levelsetsvsIu} since it directly implies that \begin{equation} \label{levelsetsvsIuh} \text{ if \eqref{0bounded} holds, then }E^{\geq-\sqrt{2u}}_{\mathit{\mathbf{h}}}\text{ has the same law as }\mathcal{C}_u^{\mathit{\mathbf{h}}}\cup E^{\geq 0}\text{ under }\P^G\otimes\P^I, \end{equation} where $\mathcal{C}_u^{\mathit{\mathbf{h}}}$ is defined similarly to $\mathcal{C}_u$ but for the $\mathit{\mathbf{h}}$-transform of random interlacements, see below \eqref{eqcouplingintergffh}. \item Following \cite{DrePreRod3}, one could obtain several other results on $E_{\mathit{\mathbf{h}}}^{\geq h},$ $h\in\mathbb{R},$ when $\mathit{\mathbf{h}}$ is a harmonic function on $\tilde{\mathcal{G}}$: $E^{\geq{h}}_{\mathit{\mathbf{h}}}(x_0)$ is non-compact with $\P^G_{\tilde{\mathcal{G}}}$-positive probability for all $h<0;$ $\mathrm{cap}_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}\big(\psi_{\mathit{\mathbf{h}}}({E}^{\geq 0}(x_0))\big)<\infty$ $\P^G_{\tilde{\mathcal{G}}}$-a.s.\ for all $x_0\in{\tilde{\mathcal{G}}}$ by Theorem \nolinebreak \ref*{4mainresult} in \cite{DrePreRod3}; formulas similar to \eqref{eq:laplacecaph} for $h<0$ under condition \eqref{capcondition} for $\mathcal{G}_{\mathit{\mathbf{h}}},$ see \eqref*{4lawforhnegative} and \eqref*{4eq:capinfinity} in \cite{DrePreRod3}; an equivalence between \eqref{eq:laplacecaph} for $h=0$ and \eqref{eq:laplacecaph} for all $h>0,$ see \eqref*{4equivisom} in \cite{DrePreRod3}; or another formulation of the isomorphism \eqref{eqcouplingintergffh}, see \eqref*{4eqcouplingintergff} in \cite{DrePreRod3}.
Finally, for \emph{any} transient graph $\mathcal{G},$ one could also obtain a "squared" version of the isomorphism \eqref{eqcouplingintergffh}, either on the discrete graph $G$ as in \cite{MR2892408} or on the cable system $\tilde{\mathcal{G}}$ as in \cite{MR3502602}. These results could also be extended to all the consequences of Theorem \nolinebreak \ref{couplingintergffh} for $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}},$ $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{surv}}$ or $\mathit{\mathbf{h}}=a$ gathered in Corollaries \nolinebreak \ref{h0transformiso} and \ref{couplingintergffK} and Theorem \nolinebreak \ref{couplingintergffdim2}. \item Let us describe the analogue of the $\mathit{\mathbf{h}}$-transform but for the discrete graph $G.$ We identify $G_{\mathit{\mathbf{h}}}$ and $G$ and define for all continuous-time trajectories $\overline{w}^+$ on $G$ and $t\geq0$ \begin{align*} \overline{\theta}_{\mathit{\mathbf{h}}}^{\overline{w}^+}(t)&\stackrel{\mathrm{def.}}{=}\inf\Big\{s\geq0:\,\int_0^s\mathit{\mathbf{h}}\big(\overline{w}^+(u)\big)^{2}\,\mathrm{d}u>t\Big\} \\&=\inf\Big\{s\geq0:\,\sum_{x\in{G}}\ell_{x}(s)\mathit{\mathbf{h}}(x)^2>t\Big\}, \end{align*} with the conventions $\mathit{\mathbf{h}}(\Delta)=0$ and $\inf\varnothing={\zeta},$ and \begin{equation*} (\overline{\xi}_\mathit{\mathbf{h}}(\overline{w}^+))(t)\stackrel{\mathrm{def.}}{=}\overline{w}^+\big({\overline{\theta}_{\mathit{\mathbf{h}}}^{\overline{w}^+}(t)}\big)\text{ for all }t\in{[0,\infty)}.
\end{equation*} Then the results from Proposition \nolinebreak \ref{corh} still hold when replacing $\xi_{\mathit{\mathbf{h}}}$ by $\overline{\xi}_{\mathit{\mathbf{h}}},$ $\psi_{\mathit{\mathbf{h}}}$ by the identity, the diffusion $X$ by the jump process $Z,$ and the Gaussian free field $(\phi_x)_{x\in{\tilde{\mathcal{G}}}}$ on $\tilde{\mathcal{G}}$ by the Gaussian free field $(\phi_x)_{x\in{G}}$ on $G.$ One can deduce this statement from Proposition \nolinebreak \ref{corh} by using the fact that $Z$ is the trace of $X$ on $G,$ or prove it directly, see the proof of Proposition \nolinebreak 4.6 in \cite{LuSaTa} for a proof of a similar statement. We can then also define the $\mathit{\mathbf{h}}$-transform of discrete random interlacements directly with $\overline{\xi}_{\mathit{\mathbf{h}}}$ similarly as in Definition \ref{definter}, and, if \eqref{eq:laplacecaph} holds for $h=0,$ obtain a version of the isomorphism \eqref{eqcouplingintergffh} on the discrete graph $G$ between the Gaussian free field on $G$ and the $\mathit{\mathbf{h}}$-transform of discrete random interlacements similar to \eqref*{4eqcouplingintergffdis} in \cite{DrePreRod3}. \end{enumerate} \end{Rk} Let us now give some applications of Theorem \nolinebreak \ref{couplingintergffh} for some particular choices of the harmonic function $\mathit{\mathbf{h}}.$ The result \eqref{semigrouph} implies that $\xi_{\mathit{\mathbf{h}}}(X)$ corresponds under $\P^G_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}$ to the $\mathit{\mathbf{h}}$-transform of $X$ under $\P^G_{\tilde{\mathcal{G}}},$ see for instance Chapter 11 of \cite{MR2152573}, and when $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}},$ see \eqref{defh0}, one can then classically relate the law of the diffusion $X$ on $\tilde{\mathcal{G}}$ conditioned on being killed with the $\mathit{\mathbf{h}}_{\text{kill}}$-transform of $X,$ see Theorem \nolinebreak 11.26 in \cite{MR2152573}.
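In words, the time change $\overline{\theta}_{\mathit{\mathbf{h}}}$ lets a holding interval of length $s$ at a point $x$ contribute $s\,\mathit{\mathbf{h}}(x)^2$ to the new clock, while intervals on which $\mathit{\mathbf{h}}$ vanishes are excised. The following minimal Python sketch makes this explicit for a piecewise-constant trajectory; the pair-list encoding and the function name are ours, purely for illustration:

```python
def time_change(path, h):
    """h-transform time change: a piecewise-constant trajectory, encoded
    as a list of (state, holding_time) pairs, keeps the same jump chain
    but each holding time s at state x becomes s * h(x)**2; intervals
    where h vanishes contribute nothing to the new clock and disappear."""
    return [(x, s * h(x) ** 2) for x, s in path if h(x) > 0]

# a trajectory spending time 1 at a, 2 at b, then 0.5 at a again
path = [("a", 1.0), ("b", 2.0), ("a", 0.5)]
h = {"a": 2.0, "b": 3.0}
print(time_change(path, h.get))  # [('a', 4.0), ('b', 18.0), ('a', 2.0)]
```

This matches the first display above: over a holding interval of length $s$ at $x,$ the integral $\int_0^s\mathit{\mathbf{h}}\big(\overline{w}^+(u)\big)^{2}\,\mathrm{d}u$ grows exactly by $s\,\mathit{\mathbf{h}}(x)^2.$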
Therefore, the law of $X$ on $\tilde{\mathcal{G}}$ conditioned on being killed can be related to the diffusion $X$ on the $\mathit{\mathbf{h}}_{\text{kill}}$-transform $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}}$ of $\tilde{\mathcal{G}},$ and since the proof of this result is short, we include it below for completeness. Similarly, the law of $X$ on $\tilde{\mathcal{G}}$ conditioned on blowing up can be related to the law of $X$ on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{surv}}}.$ \begin{Lemme} \label{Le:h0transformX} If $\mathcal{G}$ is a graph with $\mathit{\mathbf{h}}_{\text{kill}}\neq0,$ then the function $\mathit{\mathbf{h}}_{\text{kill}}$ is harmonic on $\tilde{\mathcal{G}}.$ Moreover, for all $x\in{\tilde{\mathcal{G}}},$ the diffusion \begin{equation} \label{h0transformX} \xi_{\mathit{\mathbf{h}}_{\text{kill}}}(X)\text{ has the same law under }P_{\psi_{\mathit{\mathbf{h}}_{\text{kill}}}(x)}^{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}}}\text{ as }X\text{ under }P_x^{\tilde{\mathcal{G}}}(\cdot\,|\,W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}). \end{equation} The same results also hold when replacing $\mathit{\mathbf{h}}_{\text{kill}}$ by $\mathit{\mathbf{h}}_{\text{surv}}$ and $W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}$ by $W_{\tilde{\mathcal{G}}}^{\mathcal{S},+}.$ \end{Lemme} \begin{proof} We only do the proof for $\mathit{\mathbf{h}}_{\text{kill}};$ the proof for $\mathit{\mathbf{h}}_{\text{surv}}$ is similar.
If $e=\{x,y\}\in{E}$ and $t\in{[0,\rho_e]},$ then the probability, starting from $x+t\cdot I_e,$ that $X$ hits $y$ before $x$ is $t\rho_e^{-1},$ see for instance equation 3.0.4 (b) in Part II of \cite{MR1912205}, and by the Markov property $\mathit{\mathbf{h}}_{\text{kill}}(x+t\cdot I_e)=t\rho_{e}^{-1}\mathit{\mathbf{h}}_{\text{kill}}(y)+(1-\rho_{e}^{-1}t)\mathit{\mathbf{h}}_{\text{kill}}(x)=2t\lambda_{x,y}\mathit{\mathbf{h}}_{\text{kill}}(y)+(1-2t\lambda_{x,y})\mathit{\mathbf{h}}_{\text{kill}}(x).$ Similarly, if $x\in{G}$ and $t\in{[0,\rho_x)},$ $\mathit{\mathbf{h}}_{\text{kill}}(x+t\cdot I_x)=2t\kappa_x+(1-2t\kappa_x)\mathit{\mathbf{h}}_{\text{kill}}(x).$ Since $\mathit{\mathbf{h}}_{\text{kill}}(\partial I_x)=1$ if $\kappa_x\neq0,$ we deduce that \eqref{hformulaonedges} holds. Moreover, for each $x\in{G}$ the formula \eqref{hformulaoutsideofedge} for $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}}$ follows easily from the Markov property for $\hat{Z}$ at time one, and the function $\mathit{\mathbf{h}}_{\text{kill}}$ is thus harmonic on $\tilde{\mathcal{G}}.$ For all $x\in{\tilde{\mathcal{G}}},$ $t\in{[0,\infty)}$ and functions $f\in{L^2(\tilde{\mathcal{G}},\mathit{\mathbf{h}}^2\cdot m)}$ we have by the Markov property at time $t$ \begin{equation*} E^{\tilde{\mathcal{G}}}_x\big[f(X_t)\,|\,W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}\big]=\frac{1}{\mathit{\mathbf{h}}_{\text{kill}}(x)} E^{\tilde{\mathcal{G}}}_x\big[f(X_t)\mathds{1}_{W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}}\big]=\frac{1}{\mathit{\mathbf{h}}_{\text{kill}}(x)}E^{\tilde{\mathcal{G}}}_x\left[f(X_t)\mathit{\mathbf{h}}_{\text{kill}}(X_t)\right], \end{equation*} and \eqref{h0transformX} follows from \eqref{semigrouph}.
\end{proof} \begin{Rk} One can use \eqref{eq:GFFh} for $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{surv}},$ which is harmonic when $\mathit{\mathbf{h}}_{\text{kill}}<1$ in view of Lemma \nolinebreak \ref{Le:h0transformX}, to find an alternative proof of \eqref{eq:h0<1thenh*>0}, without using the isomorphism with random interlacements as in \cite{DrePreRod3}. Indeed, following the reasoning of \cite{MR914444}, see also the Appendix of \cite{MR3765885}, one can directly prove using the Markov property for the Gaussian free field that $E^{\geq h}(x_0)$ contains an unbounded connected component with positive probability for all $h<0$ and $x_0\in{\tilde{\mathcal{G}}}$ on any graph $\mathcal{G}$ with $\kappa\equiv0.$ If $\mathcal{G}$ is a graph such that $\mathit{\mathbf{h}}_{\text{kill}}<1,$ then $\kappa^{(\mathit{\mathbf{h}}_{\text{surv}})}\equiv0,$ and we thus have by \eqref{killedlevelsetsvsh0transform} that $E_{\mathit{\mathbf{h}}_{\text{surv}}}^{\geq h}$ contains an unbounded connected component with $\P^G_{\tilde{\mathcal{G}}}$-positive probability for all $h<0.$ In particular, since $\mathit{\mathbf{h}}_{\text{surv}}\leq 1,$ we obtain that $E^{\geq h}(x_0)$ contains an unbounded connected component with $\P^G_{\tilde{\mathcal{G}}}$-positive probability for all $h<0,$ that is $\tilde{h}\geq0.$ \end{Rk} As a consequence, one can also relate killed random interlacements on $\tilde{\mathcal{G}},$ as defined below \eqref{defIu}, with random interlacements on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}},$ and surviving random interlacements on $\tilde{\mathcal{G}}$ with random interlacements on $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{surv}}},$ and apply these results to Theorem \nolinebreak \ref{couplingintergffh}.
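To make the Doob $\mathit{\mathbf{h}}_{\text{kill}}$-transform concrete at the level of a discrete chain, here is a minimal Python sketch; the $3$-state sub-stochastic kernel is a toy example of ours, and $p_{\mathit{\mathbf{h}}}(x,y)=p(x,y)\mathit{\mathbf{h}}(y)/\mathit{\mathbf{h}}(x)$ is the standard discrete Doob transform, see for instance Chapter 11 of \cite{MR2152573}:

```python
import numpy as np

# Toy transient chain on {0,1,2}: from each state the walk moves with the
# sub-stochastic kernel p, is killed with probability q, and "escapes to
# infinity" with the remaining mass (chain chosen for illustration only).
p = np.array([[0.0, 0.5, 0.0],
              [0.3, 0.0, 0.3],
              [0.0, 0.4, 0.0]])
q = np.array([0.2, 0.1, 0.1])          # killing probabilities
# h_kill(x) = P_x(walk is eventually killed) solves h = q + p h
h = np.linalg.solve(np.eye(3) - p, q)

# Doob h-transform: conditioned on being killed, the walk has kernel
# p_h(x,y) = p(x,y) h(y)/h(x) and conditional killing probability q(x)/h(x)
p_h = p * h[None, :] / h[:, None]
q_h = q / h

# rows of p_h plus the conditional killing sum to one: the conditioned
# walk no longer escapes to infinity
assert np.allclose(p_h.sum(axis=1) + q_h, 1.0)
```

The assertion is precisely the harmonicity identity $\mathit{\mathbf{h}}_{\text{kill}}(x)=q(x)+\sum_y p(x,y)\mathit{\mathbf{h}}_{\text{kill}}(y)$ divided by $\mathit{\mathbf{h}}_{\text{kill}}(x)$: conditioned on being killed, the walk is again Markov, with kernel $p_{\mathit{\mathbf{h}}}$ and killing probability $q(x)/\mathit{\mathbf{h}}_{\text{kill}}(x).$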
\begin{Cor} \label{h0transformiso} If $\mathcal{G}$ is a graph with $\mathit{\mathbf{h}}_{\text{kill}}\neq0,$ then the random interlacement process \begin{equation} \label{h0transforminter} \omega^{\mathit{\mathbf{h}}_{\text{kill}}}\text{ has the same law under }\P^{I}_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}}}\text{ as the killed random interlacement process under }\P^{I}_{\tilde{\mathcal{G}}}, \end{equation} and for all connected compacts $K\subset\tilde{\mathcal{G}},$ \begin{equation} \label{capGhvscapkilled} \mathrm{cap}_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}}}\big(\psi_{\mathit{\mathbf{h}}_{\text{kill}}}(K)\big)=\sum_{x\in{\hat{\partial}}K}\lambda_x\mathit{\mathbf{h}}_{\text{kill}}(x)^2P_x^{\tilde{\mathcal{G}}^{\hat{\partial}K}}(\tilde{H}_{K}=\infty\,|\,W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}). \end{equation} Similar results hold when replacing $\mathit{\mathbf{h}}_{\text{kill}}$ by $\mathit{\mathbf{h}}_{\text{surv}},$ killed random interlacements by surviving random interlacements, and $W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}$ by $W_{\tilde{\mathcal{G}}}^{\mathcal{S},+}.$ In particular, if \eqref{0bounded} holds, or simply \eqref{capcondition} for $\mathcal{G},$ then \eqref{eq:laplacecaph} for $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}}$ provides us with the law of the capacity, given by \eqref{capGhvscapkilled}, of the $\mathit{\mathbf{h}}_{\text{kill}}$ level sets of the Gaussian free field, and \eqref{eqcouplingintergffh} for $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}}$ provides us with an isomorphism between the Gaussian free field and killed random interlacements, and similarly for the $\mathit{\mathbf{h}}_{\text{surv}}$ level sets of the Gaussian free field and surviving random interlacements when $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{surv}}.$ \end{Cor} \begin{proof} By \eqref{defequicap} and \eqref{h0transformX} we have for all finite sets $K\subset G$ and all $x\in{G}$ that \begin{equation}
\label{eKh0} \begin{split} e_{{\psi_{\mathit{\mathbf{h}}_{\text{kill}}}(K)},\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}}}(\psi_{\mathit{\mathbf{h}}_{\text{kill}}}(x))&=\lambda_{\psi_{\mathit{\mathbf{h}}_{\text{kill}}}(x)}^{(\mathit{\mathbf{h}}_{\text{kill}})}P_{\psi_{\mathit{\mathbf{h}}_{\text{kill}}}(x)}^{{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}}}}(\tilde{H}_{\psi_{\mathit{\mathbf{h}}_{\text{kill}}}(K)}=\infty) \\&=\lambda_x\mathit{\mathbf{h}}_{\text{kill}}(x)^2P_x^{\tilde{\mathcal{G}}}(\tilde{H}_{K}=\infty\,|\,W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}). \end{split} \end{equation} We thus obtain \eqref{capGhvscapkilled} when $K\subset G$ is finite, and one can extend it to any compact $K\subset\tilde{\mathcal{G}}$ by considering the graph $\mathcal{G}^{\hat{\partial} K}$ from \eqref{eq:enhancements}. We now turn to the proof of the identity \eqref{h0transforminter} for random interlacements. By definition of random interlacements, see for instance \eqref{definter}, it is enough to prove that \begin{equation} \label{QKh} Q_{\psi_{\mathit{\mathbf{h}}_{\text{kill}}}(K),\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}}}\circ(\xi_{\mathit{\mathbf{h}}_{\text{kill}}}^*\circ p_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}}}^*)^{-1}(A)=Q_{K,\tilde{\mathcal{G}}}\circ (p_{\tilde{\mathcal{G}}}^*)^{-1}(A\cap W_{\tilde{\mathcal{G}}}^{\mathcal{K},*}) \end{equation} for all finite sets $K\subset G$ and measurable sets $A\subset W_{K,\tilde{\mathcal{G}}}^{*}.$ Using \eqref{h0transformX}, one can easily prove that \begin{align*}
P^{\psi_{\mathit{\mathbf{h}}_{\text{kill}}}(K),\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}}}_{\psi_{\mathit{\mathbf{h}}_{\text{kill}}}(x)}(\xi_{\mathit{\mathbf{h}}_{\text{kill}}}(X)\in{\cdot})&=\frac{P_x^{\tilde{\mathcal{G}}}\big(X_{L_K}=x,L_K\in{(0,\zeta)}\big)}{P_x^{\tilde{\mathcal{G}}}\big(X_{L_K}=x,L_K\in{(0,\zeta)},W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}\big)}P^{K,\tilde{\mathcal{G}}}_x(\cdot,W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}) \\&=\Big(P_x^{K,\tilde{\mathcal{G}}}\big(W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}\big)\Big)^{-1}P^{K,\tilde{\mathcal{G}}}_x(\cdot,W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}) \\&=\frac{P_x^{\tilde{\mathcal{G}}}(\tilde{H}_K=\infty)}{P_x^{\tilde{\mathcal{G}}}\big(\tilde{H}_K=\infty,W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}\big)}P^{K,\tilde{\mathcal{G}}}_x(\cdot,W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}), \end{align*} where we used \eqref{PxFforZ} in the last equality. Combining with \eqref{eKh0} and Lemma \nolinebreak \ref{Le:h0transformX}, we obtain \eqref{QKh}, and thus \eqref{h0transforminter}. The proof is similar for surviving random interlacements. Using Theorem \nolinebreak \ref{couplingintergffh} and Corollary \nolinebreak \ref{capimplieseverythingh} with $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}}$ and $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{surv}},$ we can conclude.
\end{proof} \begin{Rk} \label{rkisokilled} \begin{enumerate}[1)] \item One can characterize the killed random interlacement set similarly to the random interlacement set in \eqref{defIu}: the probability that no trajectory in the killed random interlacement process hits a closed set $F$ is given by $\exp(-u\mathrm{cap}_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}}}(\psi_{\mathit{\mathbf{h}}_{\text{kill}}}(F))),$ see \eqref{capGhvscapkilled} for an explicit formula, and similarly for surviving random interlacements when replacing $\mathit{\mathbf{h}}_{\text{kill}}$ by $\mathit{\mathbf{h}}_{\text{surv}}.$ \item When $\kappa\not\equiv0$ and $\{x\in{G}:\,\kappa_x>0\}$ is finite, \eqref{eqcouplingintergffh} for $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}}$ can be seen as a reformulation of a signed version of the second Ray-Knight theorem on the cable system. Indeed, one can then define the graph $\mathcal{G}^*$ which corresponds to $\mathcal{G},$ but replacing the open end of each $I_x,$ $x\in{G}$ with $\kappa_x>0,$ by a common vertex $x_*,$ and using \eqref{h0transforminter}, one can show that the law of the excursions on $G$ of $(X_t)_{t<\tau_u^{x_*}}$ under $P^{\tilde{\mathcal{G}}^*}_{x_*}(\cdot\,|\,\tau_u^{x_*}<\zeta)$ is the same as the law of the trace of the killed random interlacement process on $G$ under $\P^{KI}_{\tilde{\mathcal{G}}},$ where $\tau_u^{x_*}=\inf\{s>0:\ell_{x_*}(s)>u\},$ see \eqref*{4RidisonG*} in \cite{DrePreRod3} for a proof of a similar statement. One can then easily replace $\ell_{\cdot,u}^{\mathit{\mathbf{h}}_{\text{kill}}}$ by $\ell_{\cdot}(\tau_u^{x_*})$ in \eqref{eqcouplingintergffh}, which corresponds to a version of Theorem \nolinebreak 8 in \cite{LuSaTa} on the cable system.
In particular, following the proof of Theorem \nolinebreak 8 in \cite{LuSaTa}, on any transient graph such that $\kappa\not\equiv0$ and $\{x\in{G}:\,\kappa_x>0\}$ is finite, we obtain that \eqref{eqcouplingintergffh} for $\mathit{\mathbf{h}}_{\text{kill}}$ holds, and thus \eqref{eq:laplacecaph} for $\mathit{\mathbf{h}}_{\text{kill}}$ as well. \item One can prove the isomorphism \eqref{eqcouplingintergffh} for $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}}$ directly, without using \cite{DrePreRod3} and Doob $\mathit{\mathbf{h}}$-transforms. Indeed, let $K_n,$ $n\in\mathbb{N},$ be a sequence of finite subsets of $G$ increasing to $G,$ $\kappa^{(n)}=\kappa\mathds{1}_{K_n},$ and $\mathcal{G}_n$ be the same graph as $\mathcal{G},$ but with killing measure $\kappa^{(n)}$ instead of $\kappa.$ Since $\{x\in{G}:\kappa^{(n)}_x>0\}$ is finite, as explained before, one can use a version of Theorem \nolinebreak 8 in \cite{LuSaTa} on the cable system to obtain \eqref{eqcouplingintergffh} on $\mathcal{G}_n$ for $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}}$ and $n\in\mathbb{N}.$ Using the description of killed random interlacements from \eqref{newdescriptionnuK} and Remark \nolinebreak \ref{rqkilled},\ref{desgiveall}), one can compare for each $n\in\mathbb{N}$ the killed interlacement measures on the whole cable system of $\tilde{\mathcal{G}}_n$ and $\tilde{\mathcal{G}},$ instead of their restriction to compacts as in Lemma \nolinebreak \ref*{4limitKn} in \cite{DrePreRod3}.
Proceeding as in the proof of Lemma \nolinebreak \ref*{4couplingisalwaystrue} in \cite{DrePreRod3}, one can then approximate killed random interlacements on $\tilde{\mathcal{G}}$ by killed random interlacements on the sequence $\tilde{\mathcal{G}}_n,$ decreasing to $\tilde{\mathcal{G}},$ to obtain that, when $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}},$ \eqref{eqcouplingintergffh} holds for ${\mathcal{G}}$ if \eqref{0bounded} or \eqref{eq:laplacecaph} holds for $h=0.$ Following the proof of Proposition \nolinebreak \ref*{4couplingimplytheorem} in \cite{DrePreRod3}, we can then also prove that \eqref{eqcouplingintergffh} implies \eqref{eq:laplacecaph} for all $h\geq0$ when $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{kill}}.$ It is less clear how to obtain a direct proof of the isomorphism \eqref{eqcouplingintergffh} for $\mathit{\mathbf{h}}=\mathit{\mathbf{h}}_{\text{surv}},$ without using $\mathit{\mathbf{h}}$-transforms. \item We have $E^{\geq0}=E^{\geq0}_{\mathit{\mathbf{h}}_{\text{kill}}}=E^{\geq0}_{\mathit{\mathbf{h}}_{\text{surv}}},$ and so Theorem \nolinebreak \ref{couplingintergffh} provides us not only with an explicit formula for the capacity of the sign clusters of the Gaussian free field on the cable system of any graph $\mathcal{G}$ verifying \eqref{0bounded} when $\mathit{\mathbf{h}}\equiv1,$ but also for the capacity of the sign clusters on the graph $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{kill}}}$ when $\kappa\not\equiv0,$ as given in \eqref{capGhvscapkilled}, or on the graph $\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}_{\text{surv}}}$ when $\mathit{\mathbf{h}}_{\text{kill}}<1,$ given similarly as in \eqref{capGhvscapkilled} but with $\mathit{\mathbf{h}}_{\text{surv}}$ and $W_{\tilde{\mathcal{G}}}^{\mathcal{S},+}$ instead of $\mathit{\mathbf{h}}_{\text{kill}}$ and $W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}.$ \item \label{couplingkilledonA} One also obtains for any $A\subset\overline{G}$ results similar to Corollary
\nolinebreak \ref{h0transformiso} for $\mathit{\mathbf{h}}=P_x(X\text{ is killed on }A),$ by replacing $W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}$ by $\{X\text{ is killed on }A\}$ and killed interlacements by killed on $A$ random interlacements, as defined in Remark \ref{rk:killedonARI}, or for $\mathit{\mathbf{h}}=P_x(X\text{ is not killed on }A),$ when replacing $W_{\tilde{\mathcal{G}}}^{\mathcal{K},+}$ by $\{X\text{ is not killed on }A\}$ and killed interlacements by surviving on $A^c$ random interlacements. This follows from considering the graph $\mathcal{G}^{A_{\infty}}$ as in Remark \ref{rk:killedonARI}. \end{enumerate} \end{Rk} Let us now give a consequence of Corollary \nolinebreak \ref{h0transformiso}, namely Corollary \nolinebreak \ref{couplingintergffK}, which is another example of the results one can obtain from our Doob $\mathit{\mathbf{h}}$-transform method. It states that, for any compact $K$ of $\tilde{\mathcal{G}},$ one can prove results similar to Theorem \nolinebreak \ref{couplingintergffh} but for the Gaussian free field conditioned on being equal to $0$ on $K$ and the trajectories in the random interlacement process $\omega_u$ avoiding $K.$ In particular, we obtain an isomorphism similar to \eqref{eqcouplingintergffh} between these two objects, which can be seen as a generalization of Theorem \nolinebreak 5.3 in \cite{MR3936156}.
Recalling the definition of the hitting time $H_K$ from above \eqref{defpartialext}, we define $\mathit{\mathbf{h}}_K(x)=P_x^{\tilde{\mathcal{G}}}(H_K=\zeta)$ for all $x\in{\tilde{\mathcal{G}}},$ $\omega^{(K)}$ the trajectories in the random interlacement process $\omega$ never hitting $K,$ $\ell^{(K)}_{\cdot,u}$ the total local times of the trajectories in $\omega^{(K)}$ with label at most $u$ and ${\cal I}^u_{(K)}=\{x\in{\tilde{\mathcal{G}}}:\ell^{(K)}_{x,u}>0\}.$ Let us also define for all compacts $K,K'$ such that $K'\subset K^c,$ $\hat{\partial} K'\subset \hat{\partial} G$ and $\hat{\partial} K\subset\hat{\partial} G,$ \begin{equation} \label{defcapK} \begin{gathered} e^{(K)}_{K',\tilde{\mathcal{G}}}(x)\stackrel{\mathrm{def.}}{=}\lambda_x\mathit{\mathbf{h}}_K(x)P_x^{\tilde{\mathcal{G}}}\big(\tilde{H}_{K'}={\infty},H_K=\zeta\big)\text{ for all }x\in{\hat{\partial} K'}\\\text{ and }\mathrm{cap}_{\tilde{\mathcal{G}}}^{(K)}(K')=\sum_{x\in\hat{\partial} K'}e^{(K)}_{K',\tilde{\mathcal{G}}}(x).
\end{gathered} \end{equation} Using \eqref{eq:enhancements}, one can extend this definition of capacity to any compact $K,K'$ of $\tilde{\mathcal{G}}$ with $K'\subset K^c.$ Finally, let ${E}^{\geq h}_{(K)}(x_0)$ be the cluster of $x_0\in{K^c}$ in $\{x\in{K^c}:\phi_x\geq h\times\mathit{\mathbf{h}}_K(x)\}.$ \begin{Cor} \label{couplingintergffK} Let $\mathcal{G}$ be a transient graph satisfying \eqref{capcondition} and $K$ a compact of $\tilde{\mathcal{G}}.$ The identities \eqref{eq:laplacecaph} and \eqref{eqcouplingintergffh} still hold when replacing $\P_{\tilde{\mathcal{G}}}^G$ by $\P_{\tilde{\mathcal{G}}}^G(\cdot\,|\,\phi_{|K}=0),$ $\mathrm{cap}_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}\big(\psi_{\mathit{\mathbf{h}}}(E_{\mathit{\mathbf{h}}}^{\geq h}(x_0))\big)$ by $\mathrm{cap}_{\tilde{\mathcal{G}}}^{(K)}\big(E_{(K)}^{\geq h}(x_0)\big),$ $\mathit{\mathbf{h}}(x)$ by $\mathit{\mathbf{h}}_K(x),$ $\P_{\tilde{\mathcal{G}}_{\mathit{\mathbf{h}}}}^I$ by $\P_{\tilde{\mathcal{G}}}^I,$ $\ell_{x,u}^{\mathit{\mathbf{h}}}$ by $\ell_{x,u}^{(K)},$ and ${\cal I}_{\mathit{\mathbf{h}}}^u$ by ${\cal I}_{(K)}^u.$ \end{Cor} \begin{proof} By \eqref{eq:enhancements}, we can assume without loss of generality that $\hat{\partial} K\subset G.$ Up to considering each connected component of $K^c$ individually, we will assume that $K^c$ is connected. We also assume that $\mathit{\mathbf{h}}_K>0,$ otherwise the result is trivially true since then $\mathrm{cap}^{(K)}(K')=0$ for all compacts $K'\subset K^c$ and ${\cal I}^u_{(K)}=\varnothing$ a.s.
We call $\mathcal{G}_{K^c}=(\overline{G}_{K^c},\bar{\lambda}_{K^c},\bar{\kappa}_{K^c})$ the graph such that $(\overline{G}_{K^c},\bar{\lambda}_{K^c})=({G},{\lambda}),$ $\bar{\kappa}_{K^c}={\kappa}$ on $K^c\cap G,$ and $\bar{\kappa}_{K^c}=\infty$ on $K\cap G.$ Similarly as in the beginning of Section \ref{sec:notation}, we associate to $\mathcal{G}_{K^c}$ an equivalent triplet $(G_{K^c},\lambda_{K^c},\kappa_{K^c})$ with $\kappa_{K^c}<\infty.$ We finally denote by $\mathcal{G}'=\mathcal{G}^{G_{K^c}}=(G',\lambda',\kappa')$ the enhancement of $\mathcal{G}$ containing $G_{K^c}$ in its vertex set, see \eqref{eq:enhancements}. One can then identify $\tilde{\mathcal{G}}_{K^c}$ with $K^c(\subset\tilde{\mathcal{G}})$ and, using Theorem \nolinebreak 4.4.2 in \cite{MR2778606}, show that the law of $(X_t)_{t<H_K}$ under $P^{\tilde{\mathcal{G}}}_x$ is $P^{\tilde{\mathcal{G}}_{K^c}}_x$ for all $x\in{K^c},$ and that $(Z_t)_{t<H_K},$ see \eqref{printonG}, has the same law under $P^{\tilde{\mathcal{G}}'}_x$ as $Z$ under $P^{\tilde{\mathcal{G}}_{K^c}}_x$ for all $x\in{G_{K^c}}.$ Moreover, using the Markov property for the Gaussian free field, see (1.8) in \cite{MR3492939} for instance, one can easily see that \begin{equation} \label{phionGvsphionGAinfinity} (\phi_x)_{x\in{K^c}}\text{ has the same law under }\P_{\tilde{\mathcal{G}}}^G(\cdot\,|\,\phi_{|K}=0)\text{ as }(\phi_x)_{x\in{\tilde{\mathcal{G}}_{K^c}}}\text{ under }\P_{\tilde{\mathcal{G}}_{K^c}}^G. 
\end{equation} One can identify trajectories in $W_{\tilde{\mathcal{G}}_{K^c}}^+$ which are not killed on $\hat{\partial} K,$ as defined in Remark \ref{rk:killedonARI}, with trajectories in $W_{\tilde{\mathcal{G}}}^+$ which do not hit $\hat{\partial}K,$ which correspond $P_x^{\tilde{\mathcal{G}}}$-a.s.\ to trajectories in $W_{\tilde{\mathcal{G}}}^+$ which do not hit $K$ since $\mathit{\mathbf{h}}_K>0.$ One can thus show for all compacts $K'$ of $\tilde{\mathcal{G}}_{K^c}$ that \begin{equation} \label{eK'Kvssurviving} e_{K',\tilde{\mathcal{G}}_{K^c}}(x)P^{K',\tilde{\mathcal{G}}_{K^c}}_x\big(\cdot,X\text{ is not killed on $\hat{\partial}K$}\big)=e_{K',\tilde{\mathcal{G}}}(x)P^{K',\tilde{\mathcal{G}}}_x(\cdot,H_K=\zeta)\text{ for all }x\in{\tilde{\mathcal{G}}_{K^c}}. \end{equation} The proof of \eqref{eK'Kvssurviving} is easy when $\hat{\partial} K'\subset G_{K^c}$ and when considering events involving only the discrete process $Z$ on $G_{K^c}$ since for all $x\in{\hat{\partial}K'}$ by \eqref{defequicap} and \eqref{PxFforZ} \begin{align*} e_{K',\tilde{\mathcal{G}}_{K^c}}(x)P^{K',\tilde{\mathcal{G}}_{K^c}}_x\big(Z\in{\cdot},Z\text{ not killed on }\hat{\partial}K\big)\hspace{-1mm}&=\lambda'_xP_x^{\tilde{\mathcal{G}}_{K^c}}(Z\in{\cdot}, Z\text{ not killed on }\hat{\partial}K,\tilde{H}_{K'}=\infty) \\&=\lambda'_xP_x^{\tilde{\mathcal{G}}'}(Z\in{\cdot},H_K=\zeta,\tilde{H}_{K'}=\infty) \\&=e_{K',\tilde{\mathcal{G}}'}(x)P^{K',\tilde{\mathcal{G}}'}_x(Z\in{\cdot},H_K=\zeta), \end{align*} and $Z$ has the same law under $P^{K',\tilde{\mathcal{G}}'}_{\cdot}$ as the trace of $X$ on $G'=G_{K^c}$ under $P^{K',\tilde{\mathcal{G}}}_{\cdot}.$ In more generality, the equality \eqref{eK'Kvssurviving} can be justified for instance using the last exit decomposition \eqref{lastexitdec} from Section \ref{sec:defmassinter}.
Therefore, by \eqref{defQK} we obtain that \begin{equation} \label{survivingisneverhittingA} \begin{gathered} \omega^{(K)}\text{ has the same law under }\P^I_{\tilde{\mathcal{G}}}\text{ as}\\\text{the surviving on $(\hat{\partial} K)^c$ random interlacement process under }\P^{I}_{\tilde{\mathcal{G}}_{K^c}}. \end{gathered} \end{equation} Note that since $\mathrm{cap}_{\tilde{\mathcal{G}}_{K^c}}(F)\geq\mathrm{cap}_{\tilde{\mathcal{G}}}(F)$ for all $F\subset G\cap K^c,$ if condition \eqref{capcondition} holds for $\mathcal{G},$ then it also holds for $\mathcal{G}_{K^c}.$ Using Corollary \nolinebreak \ref{h0transformiso} for the graph $\mathcal{G}_{K^c}^{(\hat{\partial}K)_{<\infty}}$ and the identity \begin{equation*} \lambda_xP_x^{\tilde{\mathcal{G}}_{K^c}}(X\text{ is not killed on }\hat{\partial}K)^2P_x^{\tilde{\mathcal{G}}_{K^c}^{\hat{\partial} K'}}\big(\tilde{H}_{K'}=\infty\,|\,X\text{ is not killed on }\hat{\partial}K\big)=e_{K',\tilde{\mathcal{G}}}^{(K)}(x) \end{equation*} for all $K'\subset G\cap K^c$ and $x\in{\hat{\partial} K'},$ one can easily conclude as explained in Remark \ref{rkisokilled},\ref{couplingkilledonA}). \end{proof} \begin{Rk} One could also find results similar to Corollary \nolinebreak \ref{couplingintergffK} for the Gaussian free field conditioned on being equal to $0$ on $K$ and the trajectories in the surviving random interlacement process not hitting $K,$ replacing $\mathit{\mathbf{h}}_K(x)$ by $P_x^{\tilde{\mathcal{G}}}(H_K=\zeta,W_{\tilde{\mathcal{G}}}^{\mathcal{S},+}),$ and adapting the definition of the capacity in \eqref{defcapK}. This can be proved using directly Corollary \nolinebreak \ref{h0transformiso} for surviving random interlacements on $\tilde{\mathcal{G}}_{K^c},$ as defined in the proof of Corollary \nolinebreak \ref{couplingintergffK}.
Another possibility is to consider the trajectories in the killed random interlacement process not hitting $K,$ which can be proved using Corollary \nolinebreak \ref{h0transformiso} for killed on $K^c\cap G$ random interlacements on $\tilde{\mathcal{G}}_{K^c}.$ \end{Rk} \section{A non-trivial graph verifying (\ref{0bounded}) but not (\ref{capcondition})} \label{sec:Z20} In this section, we give examples of graphs for which \eqref{capcondition} does not hold, but \eqref{0bounded} holds, thus showing that the implication \eqref{eq:capimplies0bounded} is not an equivalence. We first give trivial examples of such graphs, in the sense that \eqref{0bounded} can still be easily deduced from the property \eqref{eq:capfinite}, even if \eqref{capcondition} does not hold. We then present a condition \eqref{capGfinite}, stronger than the complement of \eqref{capcondition}, under which one cannot deduce \eqref{0bounded} from \eqref{eq:capfinite} anymore, and give an example of a graph verifying \eqref{capGfinite} and \eqref{0bounded}, by considering the graph $\mathbb{Z}^{2,0}$ corresponding to the two-dimensional lattice $\mathbb{Z}^2$ with constant weights, infinite killing measure at the origin, and zero killing measure everywhere else, see Proposition \nolinebreak \ref{Z20counterexample}. Finally, we deduce Theorem \nolinebreak \ref{couplingintergffdim2} from Theorem \nolinebreak \ref{couplingintergffh}. Let us first explain what we mean by a non-trivial example, and to this end recall the trivial example given in Remark \ref*{4R:mainresults1},\ref*{4signwithoutcap} of \cite{DrePreRod3}. Let $0$ be the origin of the graph $\mathbb{Z}^3$ with unit weights and zero killing measure, $A\subset I_0$ be some infinite set without accumulation point, and let $\mathcal{G}^*=(\mathbb{Z}^3)^A,$ see \eqref{eq:enhancements}.
Then by \eqref*{4capIx} in \cite{DrePreRod3} we have $\mathrm{cap}_{\tilde{\mathcal{G}}^*}(A)=\mathrm{cap}_{\tilde{\mathbb{Z}}^3}(I_0)=\mathrm{cap}_{\tilde{\mathbb{Z}}^3}(\{0\})<\infty,$ and so the graph $\mathcal{G}^*$ does not verify condition \eqref{capcondition} in view of \eqref*{4capconditiondis} in \cite{DrePreRod3}. But all the unbounded connected components of ${\mathbb{Z}}^3$ have infinite capacity, and so by \eqref{eq:capfinite} the intersection with $\mathbb{Z}^3(\subset \tilde{\mathcal{G}}^*)$ of each connected component of $E^{\geq0}(\subset\tilde{\mathcal{G}}^*)$ is finite $\P^G_{\tilde{\mathcal{G}}^*}$-a.s. The intersection of each connected component of $E^{\geq 0}$ with $A$ is also finite, since the Gaussian free field on $I_0$ has the same law as a Brownian motion starting at $\phi_0$ with variance $2$ at time $1$ as explained in Section \ref*{4subsec:GFF} of \cite{DrePreRod3}, and so \eqref{0bounded} holds for $\mathcal{G}^*.$ Actually, $E^{\geq0}$ is not only bounded but also compact by Lemma \nolinebreak \ref*{41stpartofmaincor} in \cite{DrePreRod3}. Using a similar procedure, one could modify any graph $\mathcal{G}$ verifying \eqref{capcondition} and \eqref{0bounded} such that $\kappa_x=0$ for some vertex $x\in{G}$ by adding infinitely many vertices on the cable $I_x$ to obtain a graph which does not verify \eqref{capcondition} anymore, since $I_x$ now corresponds to an unbounded set with finite capacity, but for which \eqref{0bounded} can still be deduced from \eqref{eq:capfinite}, since every unbounded connected component of $G$ still has infinite capacity. More generally, even when \eqref{capcondition} does not hold, one can often get some information from \eqref{eq:capfinite} about the shape of the connected components of $E^{\geq0},$ which may help in proving \eqref{0bounded}.
However, if we assume that \begin{equation} \label{capGfinite} \mathrm{cap}(F)<\infty\text{ for some closed and connected set }F\subset\tilde{\mathcal{G}}\text{ with bounded complement,} \end{equation} then the result \eqref{eq:capfinite} is equivalent to $\P^G(\mathrm{cap}\big(E^{\geq0}(x_0)\cap F^c\big)<\infty)=1$ for all $x_0\in{\tilde{\mathcal{G}}},$ which does not provide us with any information on the boundedness of $E^{\geq 0}(x_0)$ since $F^c$ is bounded. Note that if $\mathcal{G}$ is unbounded, then \eqref{capGfinite} implies that \eqref{capcondition} does not hold. What we mean by ``a non-trivial graph verifying \eqref{0bounded} but not \eqref{capcondition}'' is thus a graph verifying \eqref{0bounded} and \eqref{capGfinite}. Such a graph does not percolate at level $0,$ but one cannot deduce this result from the information \eqref{capGfinite}, as opposed to graphs verifying (or almost verifying) \eqref{capcondition}, and we now present an example of such a graph. Let $\mathbb{Z}^{2}$ be the graph with weights $\frac14$ between neighbors in $\mathbb{Z}^2$ and zero killing measure, and let $\mathbb{Z}^{2,0}$ be the same graph as $\mathbb{Z}^2$ but with infinite killing measure at the origin. Identifying $\tilde{\mathbb{Z}}^{2,0}$ with $\tilde{\mathbb{Z}}^2\setminus I_0,$ let us denote for each $n\in\mathbb{N}$ by $B(n)$ the subset of $\tilde{\mathbb{Z}}^{2,0}$ identified with $\{x\in{\tilde{\mathbb{Z}}^2\setminus I_0}:\,0<d_{\tilde{\mathbb{Z}}^2}(0,x)\leq n\}.$ With a slight abuse of notation, let us also denote by ${\bf a}$ the restriction to $\tilde{\mathbb{Z}}^{2,0}(\subset\tilde{\mathbb{Z}}^2)$ of the potential kernel ${\bf a}$ defined on page \pageref{pagedefa}.
Since ${\bf a}$ is linear on $I_e$ for each edge and vertex $e$ of $\mathbb{Z}^{2,0},$ it verifies \eqref{hformulaonedges}, and since it also verifies \eqref{hformulaoutsideofedge} by Proposition \nolinebreak 4.4.2 in \cite{MR2677157}, we obtain that ${\bf a}$ is harmonic on $\tilde{\mathbb{Z}}^{2,0},$ in the sense of Definition \ref{defharmonic}. One can relate the capacity of a set on the graph $\mathbb{Z}_{\bf a}^{2,0},$ as defined in Definition \ref{harmo}, to the usual definition of the two-dimensional capacity. Indeed, let us define for all closed sets $A\subset\tilde{\mathbb{Z}}^2$ such that $0\in{A}$ \begin{equation} \label{capdim2} \mathrm{cap}_{\tilde{\mathbb{Z}}^2}(A)\stackrel{\mathrm{def.}}{=}\mathrm{cap}_{\tilde{\mathbb{Z}}^{2,0}_{\bf a}}\big(\psi_{\bf a}(A\setminus I_0)\big). \end{equation} This definition is consistent with the usual definition of the two-dimensional capacity $\mathrm{cap}_{\mathbb{Z}^2}$ from Section 6.6 in \cite{MR2677157}, since by Proposition \nolinebreak 2.2 in \cite{MR3475663} \begin{equation} \label{capdim2rest} \mathrm{cap}_{\tilde{\mathbb{Z}}^2}(A)=\mathrm{cap}_{\mathbb{Z}^2}(A)\text{ for all finite sets }A\subset\mathbb{Z}^2\text{ with }0\in{A}. \end{equation} Using Corollary \nolinebreak \ref{capimplieseverythingh}, we can now easily show that $\mathbb{Z}^{2,0}$ verifies \eqref{0bounded}. \begin{Prop} \label{Z20counterexample} \eqref{0bounded} and \eqref{capGfinite} are verified for the graph $\mathbb{Z}^{2,0}.$ \end{Prop} \begin{proof} It is clear that $n\mapsto\mathrm{cap}_{\mathbb{Z}^{2,0}}\big(\overline{B(n)\setminus B(1)}\big)$ is constant since a trajectory started on the boundary of $B(n)$ will return to $B(n)$ with probability $1,$ and so $\overline{B(1)^c}$ has finite capacity, from which \eqref{capGfinite} readily follows.
It follows from Lemma \nolinebreak 6.6.7\,(b) in \cite{MR2677157}, \eqref{capdim2} and \eqref{capdim2rest} that for all $n\in\mathbb{N}$ and connected sets $A_n\subset\mathbb{Z}^{2,0}_{\bf a}$ with diameter at least $n$ containing $\{1,0\}$ \begin{equation*} \mathrm{cap}_{\tilde{\mathbb{Z}}_{\bf a}^{2,0}}(A_n)\geq C\log(n+1), \end{equation*} for some constant $C>0.$ If $A\subset \mathbb{Z}^{2,0}_{\bf a}$ is infinite and connected, let us denote by $A'\subset \mathbb{Z}^{2,0}_{\bf a}$ a finite connected set connecting $A$ to $\{1,0\}.$ By subadditivity of the capacity, see Proposition \nolinebreak 6.6.2 in \cite{MR2677157}, we have that \begin{equation*} \mathrm{cap}_{\tilde{\mathbb{Z}}_{\bf a}^{2,0}}(A)\geq \mathrm{cap}_{\tilde{\mathbb{Z}}_{\bf a}^{2,0}}(A\cup A')-\mathrm{cap}_{\tilde{\mathbb{Z}}_{\bf a}^{2,0}}(A')\geq C\log(n+1)-\mathrm{cap}_{\tilde{\mathbb{Z}}_{\bf a}^{2,0}}(A') \end{equation*} for all $n\in\mathbb{N}$ since the diameter of $A\cup A'$ is at least $n.$ Using \eqref*{4capconditiondis} in \cite{DrePreRod3} and letting $n\rightarrow\infty,$ we obtain that condition \eqref{capcondition} holds for the graph $\mathbb{Z}_{\bf a}^{2,0},$ and so \eqref{0bounded} holds for $\mathbb{Z}^{2,0}$ by Corollary \nolinebreak \ref{capimplieseverythingh}. \end{proof} \begin{Rk} By Corollary \nolinebreak \ref{capimplieseverythingh}, if there exists a harmonic function $\mathit{\mathbf{h}}$ on $\tilde{\mathcal{G}}$ such that \eqref{capcondition} is verified for $\mathcal{G}_{\mathit{\mathbf{h}}},$ then \eqref{0bounded} holds, and it is an interesting open question to know whether there exists a non-trivial graph verifying \eqref{0bounded} but not \eqref{capcondition} on $\mathcal{G}_{\mathit{\mathbf{h}}}$ for all harmonic functions $\mathit{\mathbf{h}}$ on $\tilde{\mathcal{G}}.$ Note that $\mathbb{Z}^{2,0}$ is not such a graph, see the proof of Proposition \nolinebreak \ref{Z20counterexample}.
\end{Rk} One can also relate the Gaussian free field on $\tilde{\mathbb{Z}}^{2,0}$ to the pinned field $\phi^p$ defined in \eqref{def2dgff}. \begin{Lemme} \label{le:phiZ2otherdef} The field $(\phi_x^p)_{x\in{\tilde{\mathbb{Z}}^2}\setminus I_0}$ has the same law under $\P^{G,p}$ as $(\phi_x)_{x\in\tilde{\mathbb{Z}}^{2,0}}$ under $\P^G_{\tilde{\mathbb{Z}}^{2,0}}.$ \end{Lemme} \begin{proof} Let $\mathbb{Z}_{n}^{2,0}$ be the same graph as $\mathbb{Z}^{2,0},$ but with infinite killing measure outside of $B(n).$ The Gaussian free field under $\P^G_{\tilde{\mathbb{Z}}_n^{2,0}}$ converges in law to the Gaussian free field under $\P^G_{\tilde{\mathbb{Z}}^{2,0}}$ since by the Markov property at the first time $H_{B(n)^c}$ that $X$ hits $B(n)^c$ we have for all $x,y\in{\tilde{\mathbb{Z}}^{2,0}}$ \begin{align*} g_{\tilde{\mathbb{Z}}^{2,0}}(x,y)-g_{\tilde{\mathbb{Z}}^{2,0}_n}(x,y)&=E_x^{\tilde{\mathbb{Z}}^{2,0}}\big[g_{\tilde{\mathbb{Z}}^{2,0}}(X_{H_{B(n)^c}},y)\mathds{1}_{H_{B(n)^c}<\zeta}\big] \\&\leq g_{\tilde{\mathbb{Z}}^{2,0}}(y,y)P_x^{\tilde{\mathbb{Z}}^{2,0}}(H_{B(n)^c}<\zeta)\tend{n}{\infty}0. \end{align*} Moreover, it follows from (5.30) in \cite{MR3936156}, whose proof can easily be extended to the cable system, that the Gaussian free field under $\P^G_{\tilde{\mathbb{Z}}_n^{2,0}}$ converges in law to $(\phi_x^p)_{x\in{\tilde{\mathbb{Z}}^2}\setminus I_0}$ under $\P^{G,p},$ and we can conclude. \end{proof} Combining Lemma \nolinebreak \ref{le:phiZ2otherdef} with Proposition \nolinebreak \ref{Z20counterexample} and Theorem \nolinebreak \ref{couplingintergffh} will let us easily deduce Theorem \nolinebreak \ref{couplingintergffdim2}.
Recalling Definition \ref{defhtransforminter}, let us first define the two-dimensional random interlacement process $\omega^{(2)}$ under a probability $\P^{I,2}$ as a point process of trajectories on $\tilde{\mathbb{Z}}^2$ such that \begin{equation} \label{defRIdim2} \omega^{(2)}\text{ has the same law under }\P^{I,2}\text{ as }\omega^{\bf a}\text{ under }\P_{\tilde{\mathbb{Z}}^{2,0}_{\bf a}}^I, \end{equation} where we identified trajectories on $\tilde{\mathbb{Z}}^{2,0}$ with trajectories on $\tilde{\mathbb{Z}}^2$ avoiding $I_0.$ This definition is consistent with the previous definitions of two-dimensional random interlacements on the discrete graph $\mathbb{Z}^2,$ since the trace of $\omega^{(2)}$ on $\mathbb{Z}^2$ corresponds to the interlacement process defined above Corollary \nolinebreak 4.3 in \cite{MR3936156}, and its discrete-time skeleton to the interlacement process defined above Definition 2.1 in \cite{MR3475663}. The characterization \eqref{defIudim2} of the corresponding interlacement set ${\cal I}_2^u$ moreover directly follows from \eqref{defIu}, Definition \ref{defhtransforminter}, \eqref{capdim2} and \eqref{defRIdim2}. \begin{proof}[Proof of Theorem \nolinebreak \ref{couplingintergffdim2}] Since ${\bf a}(0)=0,$ we have $\kappa^{({\bf a})}\equiv0$ on $\mathbb{Z}^{2,0},$ and so $E_{\bf a}^{\geq h}$ contains an unbounded connected component with $\P^{G}_{\mathbb{Z}^{2,0}}$ positive probability for all $h<0$ by Corollary \nolinebreak \ref{capimplieseverythingh}.
Since \eqref{0bounded} holds for $\mathbb{Z}^{2,0}$ by Proposition \nolinebreak \ref{Z20counterexample}, one can combine Lemma \nolinebreak \ref{le:phiZ2otherdef} with the results from Theorem \nolinebreak \ref{couplingintergffh} for $\mathcal{G}=\mathbb{Z}^{2,0}$ and $\mathit{\mathbf{h}}={\bf a}$ with \eqref{capdim2} and \eqref{defRIdim2} to obtain Theorem \nolinebreak \ref{couplingintergffdim2}, noting that \eqref{eq:laplacecapkilleddim2} and \eqref{eqcouplingintergffdim2} trivially extend to $I_0$ since $\mathrm{cap}_{\tilde{\mathbb{Z}}^2}(I_0)=0,$ ${\cal I}_2^u\cap I_0=\varnothing$ $\P^{I,2}$-a.s.\ and ${\bf a}(x)=0$ for all $x\in{I_0}.$ \end{proof} \begin{Rk} \label{endrkdim2} \begin{enumerate}[1)] \item One could also try to prove Theorem \nolinebreak \ref{couplingintergffdim2} using Corollary \nolinebreak \ref{couplingintergffK}, similarly to the proof of Theorem \nolinebreak 5.5 from Theorem \nolinebreak 5.3 in \cite{MR3936156}. Indeed, one can easily deduce from Lemma \nolinebreak 3.1 in \cite{MR3936156} that for all finite sets $K\subset{\mathbb{Z}^2\setminus\{0\}}$ \begin{equation*} \lim\limits_{n\rightarrow\infty}\frac{4}{\pi^2}\log^2(n)\mathrm{cap}_{\tilde{\mathbb{Z}}^2_n}^{(\{0\})}(K)=\mathrm{cap}_{\mathbb{Z}^2}(K\cup\{0\}), \end{equation*} by (3.7) in \cite{MR3936156} that $2\pi^{-1}\log(n)P_x^{\tilde{\mathbb{Z}}^{2}_n}(H_0=\zeta)\tend{n}{\infty}{\bf a}(x)$ for all $x\in{\mathbb{Z}^2},$ and by (5.30) in \cite{MR3936156} that $(\phi_x)_{x\in{\mathbb{Z}}^{2}_n}$ under $\P^G_{\tilde{\mathbb{Z}}^{2}_n}(\cdot\,|\,\phi_0=0)$ converges in law to $(\phi_x^p)_{x\in{{\mathbb{Z}}^2}}$ under $\P^{G,p}.$ Therefore, taking the limit in the version of \eqref{eq:laplacecaph} from Corollary \nolinebreak \ref{couplingintergffK} with $K=\{0\},$ $\mathcal{G}=\mathbb{Z}_n^2,$ $u=4\pi^{-2}\log^2(n)u'$ and $h=2\pi^{-1}\log(n)h',$ and extending the previous results to the cable system, could give us
\eqref{eq:laplacecapkilleddim2}. Moreover, by Corollary \nolinebreak 4.3 in \cite{MR3936156}, the law of the trace of $\omega_{4\pi^{-2}\log^2(n)u'}^{(0)}$ on $\mathbb{Z}_n^2$ under $\P^G_{\tilde{\mathbb{Z}}_n^2}$ converges to the law of the trace of $\omega^{(2)}_{u'}$ on $\mathbb{Z}^2$ under $\P^{I,2},$ and one could also try similarly to prove \eqref{eqcouplingintergffdim2} by taking the limit in the version of \eqref{eqcouplingintergffh} from Corollary \nolinebreak \ref{couplingintergffK}. This strategy can be effectively applied to prove that the squares of the processes on both sides of \eqref{eqcouplingintergffh} have the same law, which corresponds to proving Theorem \nolinebreak 5.5 in \cite{MR3936156} but on the cable system. However, the weak convergence results for the Gaussian fields and random interlacements from \cite{MR3936156} do not seem robust enough to prove rigorously that Theorem \nolinebreak \ref{couplingintergffdim2} follows from Corollary \nolinebreak \ref{couplingintergffK} using the previously explained strategy, see for instance the proof of Lemma \nolinebreak \ref*{4couplingisalwaystrue} in \cite{DrePreRod3} which requires more robust convergence results, and using instead the ${\bf a}$-transform directly on the graph $\mathbb{Z}^{2,0}$ solves this problem. \item Some results similar to \eqref{eq:laplacecapkilleddim2} and \eqref{eqcouplingintergffdim2} also hold for the usual level sets $E^{p,\geq h}=\{x\in{\tilde{\mathbb{Z}}^2}:\,\phi_x^{p}\geq h\}$ of the pinned free field.
Indeed, \eqref{0bounded} holds on $\mathbb{Z}^{2,0}$ by Proposition \nolinebreak \ref{Z20counterexample}, and using Theorem \nolinebreak \ref*{4mainresultcap} in \cite{DrePreRod3} and Lemma \nolinebreak \ref{le:phiZ2otherdef}, one can thus obtain the law of the capacity (in terms of the graph $\mathbb{Z}^{2,0}$) of the usual level sets $E^{p,\geq h}$ of the pinned free field for all $h\geq0.$ Moreover, by Theorem \nolinebreak \ref*{4couplingintergff} in \cite{DrePreRod3}, one can also obtain an isomorphism similar to \eqref{eqcouplingintergffdim2} but between the pinned free field on $\mathbb{Z}^{2}$ and random interlacements on $\mathbb{Z}^{2,0},$ with $\sqrt{2u}\,{\bf a}(x)$ replaced by $\sqrt{2u}.$ \item \label{normalpinnedlevelsets}One may also want to investigate percolation for the usual level sets $E^{p,\geq h}$ of the pinned free field. Since $E^{p,\geq0}=E^{p,\geq0}_{\bf a},$ it follows from Theorem \nolinebreak \ref{couplingintergffdim2} and monotonicity that $E^{p,\geq h}$ contains $\P^{G,p}$-a.s.\ only bounded connected components for all $h\geq0.$ Moreover, using \eqref{levelsetsvsIu} and Lemma \nolinebreak \ref{le:phiZ2otherdef}, it is clear that for all $u\geq0,$ $E^{p,\geq-\sqrt{2u}}$ contains unbounded connected components with $\P^{G,p}$ positive probability if and only if $\mathcal{C}_u^p$ contains unbounded connected components with $\P^{G,p}\otimes\P^{I}_{\tilde{\mathbb{Z}}^{2,0}}$ positive probability, where $\mathcal{C}_u^p$ denotes the closure of the union of the connected components of $\{x\in{\tilde{\mathcal{G}}}:|\phi_x^{p}|>0\}$ intersecting ${\cal I}^u.$ Since $\mathit{\mathbf{h}}_{\text{kill}}\equiv1$ on $\tilde{\mathbb{Z}}^{2,0},$ the interlacements on $\tilde{\mathbb{Z}}^{2,0}$ consist only of trajectories whose forward and backward parts have been killed in $\{0\},$ and so ${\cal I}^u$ is $\P^I_{\tilde{\mathbb{Z}}^{2,0}}$-a.s.\ bounded.
Therefore, since $\{x\in{\tilde{\mathcal{G}}}:|\phi_x^{p}|>0\}$ contains $\P^{G,p}$-a.s.\ only bounded connected components, we obtain that $E^{p,\geq h}$ contains $\P^{G,p}$-a.s.\ only bounded components for all $h\in\mathbb{R},$ that is, the associated critical parameter is equal to $-\infty.$ \end{enumerate} \end{Rk} \section{A graph with infinite critical parameter} \label{sec:example} In this section, we give an example of a graph for which the critical parameter $\tilde{h}_*,$ see \eqref{defh*}, is strictly positive, and in fact infinite, thus proving that the implication \eqref{eq:capimplies0bounded} is not trivial. For any $\alpha\in{(0,1)}$ and $d\in\mathbb{N},$ $d\geq2,$ we define $\mathbb{T}_d^{\alpha}$ as the rooted $(d+1)$-regular tree, such that, denoting by $T_n$ the set of vertices in $\mathbb{T}_d^{\alpha}$ at generation $n$ (seen from the root), \begin{equation*} \lambda_{x,y}^{(\alpha)}=\alpha^n\text{ if }x\in{T_n}\text{ and }y\in{T_{n+1}}, \end{equation*} and $0$ otherwise. Moreover, we take $\kappa^{(\alpha)}=0$ if $\alpha>\frac{1}{d}$ and $\kappa^{(\alpha)}=\mathds{1}_{\varnothing}$ otherwise, where $\varnothing$ denotes the root of the tree.
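On $\mathbb{T}_d^{\alpha}$ the one-step law of the discrete-time walk at $x\in T_n,$ $n\geq1,$ can be read off directly from the conductances: the parent edge has weight $\alpha^{n-1}$ and each of the $d$ children edges has weight $\alpha^{n},$ so the probability of moving away from the root is $d\alpha/(1+d\alpha),$ independently of $n$; this matches \eqref{eq:RWdrift}. A minimal numerical check:

```python
import math

def outward_probability(d, alpha, n):
    """One-step probability that the walk at x in T_n (n >= 1) moves to a child,
    computed from the raw edge weights (zero killing away from the root)."""
    w_parent = alpha ** (n - 1)
    w_children = d * alpha ** n
    return w_children / (w_parent + w_children)

# The ratio d*alpha / (1 + d*alpha) is independent of the generation n:
for n in (1, 5, 20):
    p = outward_probability(d=3, alpha=0.5, n=n)
    assert math.isclose(p, 3 * 0.5 / (1 + 3 * 0.5))

# Outward drift iff alpha > 1/d:
assert outward_probability(3, 0.5, 1) > 0.5      # alpha = 1/2 > 1/3
assert outward_probability(3, 0.2, 1) < 0.5      # alpha = 1/5 < 1/3
```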
Since for $x\in{T_n}$, $n \geq 1$, and $\alpha \in (0,1),$ \begin{equation} \label{eq:RWdrift} P_x^{\mathbb{T}_d^{\alpha}}(\hat{Z}_1\in{T_{n+1}})=d\frac{\alpha^n}{\alpha^{n-1}+d\alpha^n}=\frac{d\alpha}{1+d\alpha} \stackrel{\alpha>\frac{1}{d}}{>}\frac{1}{2}, \end{equation} we have that $\mathbb{T}_d^{\alpha}$ is a transient graph for all $\alpha\in{(0,1)}$ and $d\in\mathbb{N},$ $d\geq2$ (when $\alpha\leq\frac{1}{d},$ transience follows from the non-zero killing measure at the root). Taking $A$ to be an infinite connected line containing exactly one vertex per generation, and noting that the equilibrium measure of any set at a point $x\in{T_n}$ is at most $\alpha^{n-1}+d\alpha^n,$ one can easily show that $\mathrm{cap}(A)<\infty,$ and so condition \eqref{capcondition} is not verified for $\mathbb{T}_d^{\alpha}.$ \begin{Prop} \label{h*infinity} There exists a constant $\overline{C}<\infty,$ such that for any $\alpha\in{(0,1)}$ and $d\in\mathbb{N},$ $d\geq2,$ with \begin{equation} \label{condondalpha} d\Big(1-\exp\Big(-\frac{\sqrt{\alpha}}{d\alpha+1}\Big)\Big)>\overline{C}, \end{equation} the set $E^{\geq h}$ contains $\P^G_{\tilde{\mathbb{T}}_d^{\alpha}}$-a.s.\ an unbounded connected component for all $h\in\mathbb{R},$ and so $\tilde{h}_*=\infty.$ \end{Prop} \begin{proof} Using the Markov property for the Gaussian free field, see (1.8) in \cite{MR3492939} for instance, one can construct the Gaussian free field on $(\tilde{\mathbb{T}}_d^{\alpha})^{-}$ (where $(\tilde{\mathbb{T}}_d^{\alpha})^{-}$ is defined as on page \pageref{deftildeged} by removing the edges $I_x,$ $x\in{\mathbb{T}_d^\alpha},$ from $\tilde{\mathbb{T}}_d^{\alpha}$) recursively in the generation $n$ as follows.
Let $Y_x,$ $x\in{\mathbb{T}_d^{\alpha}},$ be a family of i.i.d.\ $\mathcal{N}(0,1)$-distributed random variables under $\P,$ and let $\psi_0=Y_0(g_{\mathbb{T}_d^{\alpha}}(0,0))^{1/2}.$ Recursively in~$n \geq 0$, we then define \begin{equation} \label{eq:tree_markov} \psi_x\stackrel{\mathrm{def.}}{=}\psi_{x^-}P_x(H_{\{x^-\}}<\infty)+Y_x\sqrt{g_{T_n^c}(x,x)}, \text{ for all }x\in{T_{n+1}}, \end{equation} where $x^-$ is the first ancestor of $x,$ i.e., the neighbor of $x$ on a geodesic path from $x$ to $0,$ and $g_{T_n^c}(x,x)$ is the Green function defined as in \eqref{Greendef} but for the diffusion killed on exiting $T_n^c.$ Using again the Markov property for the Gaussian free field, one can then easily prove that $(\psi_x)_{x\in{\mathbb{T}_d^{\alpha}}}$ has the same law as $(\phi_x)_{x\in{\mathbb{T}_d^{\alpha}}}$ under $\P^G_{\mathbb{T}_d^{\alpha}}$. Moreover, let $B^e,$ $e\in{E},$ be a family of independent processes, such that for each edge $e=\{x,y\}\in{E}$ between $x\in{T_n}$ and $y\in{T_{n+1}},$ $B^e$ is a Brownian bridge of length $\frac{1}{2\alpha^n}$ between $0$ and $0$ of a Brownian motion with variance $2$ at time $1,$ and let \begin{equation*} \psi_{x+t\cdot I_e}=2\alpha^nt\psi_y+(1-2\alpha^nt)\psi_x+B_{t}^e\text{ for all }t\in{\big[0,1/2\alpha^n\big]} \end{equation*} (cf.\ the beginning of Section \ref{sec:notation} for notation). Then $(\psi_x)_{x\in{(\tilde{\mathbb{T}}_d^{\alpha})^{-}}}$ has the same law as $(\phi_x)_{x\in{(\tilde{\mathbb{T}}_d^{\alpha})^{-}}}$ under $\P^G_{\mathbb{T}_d^{\alpha}};$ cf.\ Section 2 of \cite{DrePreRod} for the proof of an analogous construction on $\mathbb{Z}^d,$ $d\geq3$.
Now for each $x\in{T_{n+1}},$ with $A_x=\{ \psi_{x}\geq(\lambda_{x}^{(\alpha)})^{-1/2} \}$, in view of \eqref{eq:tree_markov} we have that \begin{equation*} \P\big(A_x\,\big|\,\psi_{x^-}\big)\mathds{1}_{ A_{x^-}}\geq\P\big(Y_x\geq(\lambda_{x}^{(\alpha)}g_{T_n^c}(x,x))^{-1/2}\big)\geq\P(Y_0\geq 1); \end{equation*} indeed, the inequality on the right-hand side follows since under $P_x^{\tilde{\mathbb{T}}_d^{\alpha}}$, $Z$ spends at least an exponential time with parameter $\lambda_x^{(\alpha)}$ in $x\in T_{n+1}$ before hitting $T_n$ and so $g_{T_n^c}(x,x)\lambda_x^{(\alpha)}\geq1.$ Moreover, using the exact formula for the distribution of the maximum of a Brownian bridge, see e.g. \cite{MR1912205}, Chapter IV.26, we have for all $x\in{T_{n+1}}$ and $n$ large enough, writing $e=\{x,x^-\},$ on the event $A_x \cap A_{x^-}$, \begin{align*} \P\big(\psi_y\geq (\lambda_x^{(\alpha)})^{-\frac14} \, \forall\,y\in{I_{e}}\,|\,\psi_x,\psi_{x^-}\big)&=1-\exp\big(-2\alpha^n(\psi_x-(\lambda_x^{(\alpha)})^{-\frac14})(\psi_{x^-}-(\lambda_x^{(\alpha)})^{-\frac14})\big) \\&\geq1-\exp\big(-\alpha^n(\lambda_x^{(\alpha)})^{-\frac12}(\lambda_{x^-}^{(\alpha)})^{-\frac12}\big) \\&= 1-\exp\Big(-\frac{\sqrt{\alpha}}{d\alpha+1}\Big). \end{align*} Hence, for all $n$ large enough and any $y\in{T_{n+1}},$ the intersection of the cluster of $y$ in $\{x\in{(\tilde{\mathbb{T}}_d^{\alpha}})^{-}:\,\psi_x\geq(\lambda_x^{(\alpha)})^{-\frac14}\}$ with $\mathbb{T}_d^{\alpha}$ stochastically dominates an independent Galton-Watson tree rooted at $y$, with average number of children equal to \begin{equation*} d\Big(1-\exp\Big(-\frac{\sqrt{\alpha}}{d\alpha+1}\Big)\Big)\P(Y_0\geq 1). 
\end{equation*} Choosing $\overline{C}=\P(Y_0\geq1)^{-1},$ we thus obtain that $\{x\in{(\tilde{\mathbb{T}}_d^{\alpha})^-}:\,\psi_x\geq(\lambda_x^{(\alpha)})^{-\frac14}\}$ contains {$\P$-a.s.}\ an unbounded connected component if $d\big(1-\exp\big(-\frac{\sqrt{\alpha}}{d\alpha+1}\big)\big)>\overline{C},$ and since $\lambda_x^{(\alpha)}\rightarrow{0}$ as $d(x,0)\rightarrow{\infty},$ we deduce that $\tilde{h}_*=\infty.$ \end{proof} \begin{Rk} \label{endremark} \begin{enumerate}[1)] \item \label{alphadexist}In both cases, $\alpha>\frac1d$ as well as $\alpha\leq\frac1d,$ it is possible to find $\alpha\in{(0,1)}$ and $d\in\mathbb{N},$ $d\geq2$ such that \eqref{condondalpha} holds. For instance, one can take $\alpha=\frac{a}{d}$ for arbitrary fixed $a>0,$ and choose $d$ large enough. In particular, in view of \eqref{eq:RWdrift}, this provides us with graphs such that $\kappa\equiv0$ and $\tilde{h}_*=\infty$ when $\alpha>\frac1d,$ or with $\mathit{\mathbf{h}}_{\text{kill}}\equiv1$ and $\tilde{h}_*=\infty$ when $\alpha\leq\frac1d.$ \item By means of suitable enhancements, cf.\ \eqref{eq:enhancements}, one readily derives from Proposition \nolinebreak \ref{h*infinity} an example of a graph $\mathcal{G}$ with unit weights and zero killing, on which $\tilde{h}_*=\infty$. Indeed, fix some $d\in2\mathbb{N}$ such that $d\big(1-\exp\big(-\frac{\sqrt{2/d}}{3}\big)\big)>\overline{C}.$ Consider the set $A \subset (\tilde{\mathbb{T}}_d^{2/d})^{-}$ (attached to the weights $\lambda^{(2/d)}, \kappa^{(2/d)}$ as above) with $A \cap I_{\{x^-,x\}}=\{ x^-+(k/2)\cdot I_{\{x^-,x\}}: 1\leq k \leq (d/2)^{n-1}-1\},$ for all $x\in{T_n}$, $n\geq 1$. 
Then \eqref*{4eq:GAsubsetG} in \cite{DrePreRod3} yields that $ \tilde{\mathbb{T}}_d^{2/d} \subset \tilde{\mathcal{G}}$, where $\mathcal{G}\stackrel{\text{def.}}{=}(({\mathbb{T}}_d^{2/d})^A, \lambda^A,\kappa^A)$, with $\lambda^A \equiv 1$ and $\kappa^A\equiv 0$, whence $\tilde{h}_*(\tilde{\mathcal{G}})=\infty$ by Proposition \nolinebreak \ref{h*infinity}. \item \label{counterexampleh*cap<h*}By Theorem \nolinebreak \ref*{4mainresult} in \cite{DrePreRod3}, we have that $\mathrm{cap}(E^{\geq 0}(x_0))<\infty$ $\P^G_{\tilde{\mathbb{T}}^{\alpha}_d}$-a.s.\ for all $x_0\in{\tilde{\mathcal{G}}}$, and so $\tilde{h}_*^{{\rm cap}}\leq0,$ where $\tilde{h}_*^{{\rm cap}}$ is the critical parameter associated to the percolation of $E^{\geq h}$ in terms of capacity, see \eqref*{4defh*cap} in \cite{DrePreRod3}. In particular, when $\alpha>\frac1d$ and \eqref{condondalpha} holds, then $\mathbb{T}_d^{\alpha}$ is an example of a graph for which the inequality \eqref*{4capcomboukappa} in \cite{DrePreRod3} is strict. \item \label{h*infiniteandlaw0}In Corollary \nolinebreak \ref*{4dichotomy} in \cite{DrePreRod3}, it is proved that, under the condition that \eqref{eq:laplacecaph} holds for $h=0$ and $\mathit{\mathbf{h}}\equiv1,$ \eqref{0bounded} implies $\tilde{h}_*=\infty.$ If one could prove that there exist $\alpha\in{(0,1)}$ and $d\geq2$ fulfilling \eqref{condondalpha} such that $\mathbb{T}_d^{\alpha}$ verifies \eqref{eq:laplacecaph} for $h=0$ and $\mathit{\mathbf{h}}\equiv1,$ this would show that this implication in Corollary \nolinebreak \ref*{4dichotomy} of \cite{DrePreRod3} is not trivial. We hope to come back to this question soon. \end{enumerate} \end{Rk}
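To see concretely that the regime of Proposition \ref{h*infinity} is non-empty (the first item of Remark \ref{endremark}), note that with $\overline{C}=\P(Y_0\geq1)^{-1}$ and $Y_0\sim\mathcal{N}(0,1),$ condition \eqref{condondalpha} amounts to the Galton-Watson mean offspring number $d\big(1-e^{-\sqrt{\alpha}/(d\alpha+1)}\big)\P(Y_0\geq1)$ from the proof exceeding $1.$ A quick numerical sketch for $\alpha=a/d$ with $a=1$:

```python
import math

def mean_offspring(d, alpha):
    """Mean offspring number of the dominating Galton-Watson tree:
    d * (1 - exp(-sqrt(alpha)/(d*alpha + 1))) * P(Y_0 >= 1), Y_0 ~ N(0,1)."""
    p_y = math.erfc(1 / math.sqrt(2)) / 2         # P(Y_0 >= 1) ~ 0.1587
    return d * (1 - math.exp(-math.sqrt(alpha) / (d * alpha + 1))) * p_y

# With alpha = a/d the mean grows like sqrt(a*d)/(a+1) * P(Y_0 >= 1),
# so condition (condondalpha) holds once d is large enough:
a = 1.0
for d in (10, 100, 1000, 10000):
    print(d, round(mean_offspring(d, a / d), 3))
# the mean exceeds 1 (supercriticality, i.e. the condition holds) from d = 1000 on
```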
\section{Introduction} \label{sec1} While it is often difficult to determine the symmetry property of Cooper pairs (called pairing symmetry in this work)~\cite{Ishida:1998aa,Luke:1998aa,PhysRevLett.110.077003,PhysRevB.90.220502,PhysRevX.7.011032,PhysRevB.96.180507,doi:10.7566/JPSJ.87.093703,Pustogow:2019aa,PhysRevB.100.094530,doi:10.7566/JPSJ.89.034712,Kivelson:2020aa,chronister2020evidence,Ran684,PhysRevLett.123.217001,PhysRevLett.123.217002,PhysRevB.100.220504,Jiao:2020aa,PhysRevResearch.2.032014,bae2020anomalous,hayes2020weyl,ishizuka2020periodic,PhysRevB.86.100507,PhysRevB.87.180503,PhysRevB.89.020509,PhysRevB.89.140504,doi:10.7566/JPSJ.84.054705}, superconducting nodes---the geometry of gapless regions in the Bogoliubov quasiparticle spectrum---are a key ingredient in identifying pairing symmetries. For example, power-law behaviors of the specific heat and the magnetic penetration depth are signatures of nodal superconductivity. Therefore, theoretical predictions of superconducting nodes are helpful in clarifying the possible properties of unconventional superconductivity. Inspired by the series of discoveries of heavy-fermion superconductors such as CeCu$_2$Si$_2$~\cite{PhysRevLett.43.1892} and UPt$_3$~\cite{PhysRevLett.52.679}, superconducting order parameters are classified by irreducible representations of point groups~\cite{Volovik1984, PhysRevB.30.4000,10.1143/PTP.74.221,10.1143/PTP.75.442,Sigrist-Ueda}. Since the order parameters are described by basis functions of the irreducible representations in these theories, the intersection between Fermi surfaces and regions where the basis functions vanish is understood as superconducting nodes. Indeed, such analyses succeed in explaining the nodes of certain superconductors like cuprate superconductors~\cite{RevModPhys.72.969}. However, recent intensive studies have revealed that such analyses do not take into account multiband (orbital) effects or the presence of nonsymmorphic symmetries.
As a result, novel symmetry-protected nodes~\cite{PhysRevLett.116.177001,PhysRevLett.118.127001,PhysRevB.96.094526,PhysRevB.96.214514,Kimeaao4513,PhysRevLett.120.057002,PhysRevX.8.011029} have been missed in these theories. For example, although Ref.~\onlinecite{PhysRevB.32.2935} argued that symmetry-protected line nodes could not exist in odd-parity superconductors, several works provide counterexamples in the presence of nonsymmorphic symmetries~\cite{PhysRevB.52.15093,PhysRevB.80.100506,PhysRevLett.117.217002,PhysRevB.94.174502,PhysRevB.95.024508}. UPt$_3$ is a prototypical example of a material that exhibits such symmetry-protected line nodes~\cite{PhysRevB.52.15093,PhysRevB.80.100506,PhysRevLett.117.217002,PhysRevB.94.174502,PhysRevB.95.024508}. Another example is surface nodes called Bogoliubov Fermi surfaces. When the time-reversal symmetry (TRS) is broken, the Bogoliubov Fermi surfaces can be realized by a pseudo magnetic field arising from interband Cooper pairs~\cite{PhysRevLett.118.127001,PhysRevB.98.224509}. Recently, three approaches to overcoming the insufficiency of the previous studies have been proposed. The first approach is based on the group-theoretical analysis of representations of the Cooper pair wave functions~\cite{PhysRevB.80.100506,PhysRevLett.118.207001,doi:10.7566/JPSJ.86.023703,PhysRevB.95.024508,Sumita-Yanase}. In the presence of the inversion symmetry, the theory tells us which pairing symmetries force gap functions to vanish on the mirror plane~\cite{PhysRevB.80.100506,Sumita-Yanase}. Thus, when Fermi surfaces are located on the mirror planes, line nodes exist in the mirror planes because of such pairing symmetries.
The second approach is based on homotopy theory~\cite{Teo-Kane2010,PhysRevB.83.064505,PhysRevB.83.224511,doi:10.1143/JPSJ.81.011013, Matsuura_2013,PhysRevLett.110.240404,PhysRevB.90.024516,PhysRevB.90.205136,PhysRevLett.114.096804,PhysRevB.92.214514,PhysRevB.94.134512,AZ-node,Kobayashi-Sumita-Yanase-Sato,Sumita-Nomoto-Shiozaki-Yanase,kim2020linking}. In the presence of the inversion and internal symmetries, we define zero-, one-, and two-dimensional topological charges that protect nodes at generic points. Then, depending on the dimensions of the defined topological charges, the shapes of protected nodes, such as line and surface nodes, are determined. The last approach is the $k\cdot p$ model analysis, which discusses the number of symmetry-allowed mass terms and the dispersion in $k\cdot p$ models~\cite{PhysRevLett.108.140405,Yang:2014aa,PhysRevB.90.115111,Chen:2015aa,PhysRevX.5.011029,PhysRevLett.115.036807,PhysRevLett.116.186402}. Despite the significant progress reported in these works, existing theories cover only simple symmetry settings such as generic points or the mirror planes. In other words, high-symmetry settings such as the rotation and the screw axes in the glide planes, which commonly occur in realistic materials, lie outside their scope. Therefore, a comprehensive theory to classify and predict superconducting nodes for arbitrary symmetry classes has long been awaited. \add{To achieve this goal, we need to answer the following two questions: \begin{enumerate} \setlength{\itemsep}{-2pt} \item[(I)] Is there a way to comprehensively classify nodes pinned to high-symmetric momenta (often called symmetry-enforced nodes)? \item[(II)] Can we classify topologically protected nodes not pinned to particular momenta, which can freely move in planes or the entire Brillouin zone?
\end{enumerate} } \add{In this work, we propose a novel approach to symmetry-enforced nodes on arbitrary lines in momentum space, which answers question (I).} Our method is based on two techniques to clarify the shapes of nodes pinned to the lines. First, we employ the symmetry-based analysis of band topology~\cite{Po2017,TQC,Watanabeeaat8685,PhysRevX.8.031069,PhysRevX.8.031070,QuantitativeMappings,Ono-Watanabe2018,SI_Adrian,TQC_review,MTQC,Ono-Yanase-Watanabe2019,Skurativska2019,SI_Shiozaki,Ono-Po-Watanabe2020,SI_Luka,Ono-Po-Shiozaki2020,huang2020faithful}. Symmetry representations of wave functions play a pivotal role in the theory. In particular, there exist necessary conditions on symmetry representations for a gapped phase, referred to as compatibility relations~\cite{PhysRevB.59.5998,refId0,MICHEL2001377,PhysRevX.7.041069,Po2017,TQC}. Conversely, if some compatibility relations are violated, the system must be gapless. Suppose that we find a gapless point on a line, which originates from a violated compatibility relation. When compatibility relations between the line and its neighborhood exist, we find that the region where the compatibility relation is violated is a line or a surface; that is, line or surface nodes must exist. Although compatibility relations are powerful tools for understanding nodes, they alone cannot provide complete information about the geometry of nodes. More precisely, when there are no compatibility relations between the line and its neighborhood, we cannot judge whether the gapless point on the line is a genuine point node. Then, the classification of point nodes on the lines can compensate for the incompleteness of compatibility relations. The results are mainly classified into three types: (i) genuine point nodes, (ii) loop or surface nodes shrinking to a point, and (iii) neither point nodes nor such shrunk loop or surface nodes.
If the classification result on the line is type (ii) or (iii), the gapless point on the line is considered part of a line or surface node. This work differs from existing studies in two respects. One is that our symmetry-based approach can be applied to any symmetry setting, for example, in the absence of the inversion symmetry or in the presence of several nonsymmorphic symmetries. In fact, we apply the framework to all \add{nonmagnetic} and magnetic space groups, considering all the possible pairing symmetries that belong to one-dimensional single-valued representations of the point groups. The classification tables we obtained are tabulated in the Supplementary Materials. Furthermore, the symmetry-based approach can be further refined to answer question (II), which will also be discussed in the present paper. The other is that our framework leads to an efficient algorithm to detect and diagnose nodes in realistic materials, \add{requiring only the pairing symmetry and information about the irreducible representations of Bloch wave functions at high-symmetry momenta}. Our results will therefore help narrow down the candidate pairing symmetries in realistic superconductors through comparison with experimental results on the existence or absence of nodes. The remaining part of this paper is organized as follows. \add{In Sec.~\ref{sec2}, we provide an overview of our study, which enables readers who are not interested in all details to understand our ideas and results.} In Sec.~\ref{sec3}, we introduce several ingredients used to formulate our theory. We devote Sec.~\ref{sec4} to establishing the classification of point nodes on the lines in the presence of point group symmetries. In Sec.~\ref{sec5}, we integrate the point-node classifications into the symmetry-based analysis to classify nodal structures pinned to the lines. In Sec.~\ref{sec6}, we discuss how to apply our theory to the detection of nodes in realistic superconductors.
As a demonstration, we apply our algorithm to CaPtAs, in which broken TRS is observed~\cite{PhysRevLett.124.207001}. We show that this material is expected to have small Bogoliubov Fermi surfaces. \add{In Sec.~\ref{sec7}, we comment on nodes at generic momenta and the relationship between such nodes and symmetry-based analysis, which will be an answer to question (II) toward a complete classification of topologically stable nodes.} We conclude the paper with an outlook on future work in Sec.~\ref{sec8}. Several details are included in appendices to avoid digressing from the main subjects. \section{Overview of this study} \label{sec2} Our major goal is to establish a systematic framework to classify various nodes pinned to lines in momentum space. To achieve this, we will integrate compatibility relations and point-node classifications. In this section, we provide an overview of our strategy and results. \add{Throughout the present paper, \textit{gapless point} means a point in momentum space where the bulk gap in the Bogoliubov quasiparticle spectrum closes. It does not imply that the gapless point is always a genuine point node. As shown in the following discussions, a gapless point on a line connecting two momenta is sometimes part of a line or surface node.} \textit{Emergent Altland-Zirnbauer classes and zero-dimensional topological invariants}.---In principle, a complete diagnosis of nodal structures requires computing all topological charges that protect nodes. In this work, we adopt an alternative way:~we characterize any Bogoliubov quasiparticle spectrum by zero-dimensional topological invariants at various momenta. To accomplish this, we first identify emergent Altland-Zirnbauer (EAZ) classes at a point in momentum space.
\add{Here, \textit{emergent} means that such a symmetry class is not a global internal symmetry class but a local one for an irreducible representation at a point in momentum space.} Once the EAZ classes are determined for each irreducible representation at various momenta, we define zero-dimensional topological invariants in the topological periodic table~\cite{PhysRevB.78.195125,Kitaev_bott,Ryu_2010} (see also Table~\ref{tab:EAZ}). Let us illustrate the notion of EAZ classes through spinful space group $P2/m$ with $B_g$ pairing. In this symmetry setting, the system possesses TRS $\mathcal{T}$, the particle-hole symmetry (PHS) $\mathcal{C}$, the two-fold rotation $C_{2}^{y}$ along the $y$-axis satisfying the anticommutation relation $\{\mathcal{C}, C_{2}^{y}\}=0$~\cite{Note3}, and the inversion $I$ obeying the commutation relation $[\mathcal{C}, I]=0$. Let $H_{\bm{k}}$ and $\psi_{m\bm{k}}$ be the Hamiltonian and its eigenvectors, with $H_{\bm{k}}\psi_{m\bm{k}} = E_{m\bm{k}}\psi_{m\bm{k}}$, where the Bogoliubov quasiparticle spectrum $E_{m\bm{k}}$ is labeled by the band index $m$ and the momentum $\bm{k}$. Since the combined symmetries $I\mathcal{C}$ and $I\mathcal{T}$ do not change $\bm{k}$, $(I\mathcal{C})\psi_{m\bm{k}}$ and $(I\mathcal{T})\psi_{m\bm{k}}$ are also eigenvectors of $H_{\bm{k}}$ with the energies $-E_{m\bm{k}}$ and $E_{m\bm{k}}$, respectively. We begin by focusing on a generic momentum $\bm{k}$ in the two-dimensional plane invariant under the mirror symmetry $M_{y} = IC_{2}^y$. In this plane, the eigenvectors $\psi_{m\bm{k}}$ of $H_{\bm{k}}$ are also those of $M_y$ with mirror eigenvalues $\xi_{m\bm{k}} = \pm i$. Then $(I\mathcal{C})\psi_{m\bm{k}}$ and $(I\mathcal{T})\psi_{m\bm{k}}$ have the mirror eigenvalues $\xi_{m\bm{k}}$ and $-\xi_{m\bm{k}}$, respectively.
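The eigenvalue flip of the antiunitary partner can be checked numerically in a minimal two-component toy (our own example, not the paper's Hamiltonian): we represent the mirror by $i\sigma_z$, with eigenvalues $\pm i$, and the antiunitary symmetry by $i\sigma_y K$, which commutes with the mirror in the antiunitary sense.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

M = 1j * sz    # mirror in a two-component basis, eigenvalues +/- i
UT = 1j * sy   # unitary part of an antiunitary symmetry, T psi = UT psi*

# T commutes with the mirror antiunitarily: UT M* UT^{-1} = M
assert np.allclose(UT @ M.conj() @ np.linalg.inv(UT), M)

psi = np.array([1, 0], dtype=complex)   # mirror eigenvector, M psi = +i psi
assert np.allclose(M @ psi, 1j * psi)

psi_T = UT @ psi.conj()                 # the antiunitary partner of psi
assert np.allclose(M @ psi_T, -1j * psi_T)  # opposite mirror eigenvalue -i
```

The partner lands in the opposite mirror sector, which is exactly the mechanism that pairs the two irreducible representations in the text.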
This implies that the combined symmetry $I\mathcal{C}$ does not change the mirror sector but $I\mathcal{T}$ exchanges the two sectors, which results in class D as the EAZ symmetry class of each mirror sector at the point $\bm{k}$ (see Fig.~\ref{fig:overview} (a)). As in the case of the mirror plane, exactly the same discussion applies to any point on the rotation-symmetric line (except for higher-symmetry momenta). Then, we find that the EAZ symmetry class of each rotation-eigenvalue sector at the point is class D (see Fig.~\ref{fig:overview} (b)). For EAZ class D, the Pfaffian invariants $p_{\bm{k}}^{\pm i}$ are defined. \begin{figure}[t] \begin{center} \includegraphics[width=0.99\columnwidth]{fig_overview_EAZ.pdf} \caption{\label{fig:overview}Illustration of the action of symmetries discussed in Sec.~\ref{sec2}. There are two irreducible representations in the mirror plane ($k_z$-$k_x$ plane) [(a)] and the rotation axis ($k_y$-axis) [(b)]. They are invariant under $I\mathcal{C}$ but exchanged by $I\mathcal{T}$. As a result, the EAZ classes for the irreducible representations are class D, and thus two $\mathbb{Z}_2$ topological invariants (Pfaffian invariants) are defined at every point in the mirror plane and the rotation axis (except for high-symmetry points).} \end{center} \end{figure} \textit{Diagnosis of nodal structures based on compatibility relations}.---As seen in the preceding discussion, zero-dimensional topological invariants are defined at each momentum. Then, the question is whether these zero-dimensional topological invariants are fully independent or not. In general, for the gapped region in momentum space, these zero-dimensional topological invariants are subject to symmetry constraints. Topological invariants do not change as long as the system remains in the same topological phase during a continuous deformation (see Fig.~\ref{fig:overview_CR}(a)).
Thus, when we regard momenta as the parameters of the deformation, the zero-dimensional topological invariants must be the same over the gapped region. In this work, we refer to such constraints on zero-dimensional topological invariants as \textit{compatibility relations}. Conversely, if the zero-dimensional topological invariants change between two points, the Bogoliubov quasiparticle spectrum must have gapless points on this line (see Fig.~\ref{fig:overview_CR}(b)). The existence of a gapless point pinned to the line immediately implies that there are two regions in which the zero-dimensional topological invariants are different from each other (see the upper panel of Fig.~\ref{fig:overview_CR}(c)). Next, we discuss the diagnosis of the shape of nodes when we find a gapless point originating from the change of zero-dimensional topological invariants on a line. Suppose that there exist compatibility relations between the two subdivisions and their neighborhoods. Furthermore, since the gradient of the dispersion does not usually diverge, it is natural to think that neighborhoods of the regions on the line are gapped. However, due to the compatibility relations, the two neighborhoods also have different topological invariants. Therefore, the boundary of these neighborhoods leads to a line node (see the lower panel of Fig.~\ref{fig:overview_CR}(c)). When the system is three-dimensional, the same discussion can be further applied to the line node and its three-dimensional neighborhood (see Fig.~\ref{fig:overview_CR}(d)). \begin{figure}[t] \begin{center} \includegraphics[width=0.99\columnwidth]{fig_overview_CR.pdf} \caption{\label{fig:overview_CR}Illustration of diagnosis based on compatibility relations. (a, b) Bogoliubov quasiparticle spectrum along the line connecting two momenta $\bm{k}_1$ and $\bm{k}_2$. The spectrum satisfies the compatibility relations on the line in (a), while it does not in (b).
(c) Two divisions of the line connecting $\bm{k}_1$ and $\bm{k}_2$ and a nodal line in a plane containing the line. Here, the red shaded region and the others have different values of the topological invariants, whose boundary results in a nodal line. (d) Surface node. When there are compatibility relations between the plane and its three-dimensional neighborhood, the regions in (c) are extended out of the plane, and the boundary surface is the surface node. } \end{center} \end{figure} Again, we discuss the case for space group $P2/m$ with $B_g$ pairing. Let us start with the mirror plane. We pick two momenta $\bm{k}_1$ and $\bm{k}_2$, which are not high-symmetry points. We also suppose that different Pfaffian invariants are assigned, say, $p_{\bm{k}_1}^{\pm i} = 1$ and $p_{\bm{k}_2}^{\pm i} = 0$. Then, a gapless point must be on the line between $\bm{k}_1$ and $\bm{k}_2$, as discussed above. In the mirror plane, there exists a compatibility relation such that $p_{\bm{k}}^{\pm i}$ must be the same for the gapped regions. As a result, we find that the situation is actually the same as Fig.~\ref{fig:overview_CR}(c) and that the gapless point is part of the line node. On the other hand, the situation for rotation axes is different from that for the mirror plane. There are no compatibility relations between a point in the rotation axis and generic momenta. In such a case, one might think that a point node is the only possibility. However, we cannot conclude that the gapless point is a genuine point node. \add{The possibility of a line node protected by one-dimensional topological invariants, such as the Berry phase and the winding number, still remains since the absence of compatibility relations only guarantees that there are no line or surface nodes protected by zero-dimensional topological invariants.} In summary, compatibility relations can tell us part of the nodal structure but cannot completely determine it.
\textit{Gapless point classifications on lines}.---In such a case, we need another tool to distinguish the two possibilities of a genuine point node and a line node. This is achieved by the classifications of two-dimensional massive Dirac Hamiltonians near gapless points on the line \begin{align} \label{eq:overview_cc-ham} H_{(k_1, k_2)} &= k_1 \gamma_1 + k_2\gamma_2 + \delta k_3 \gamma_0, \end{align} where $k_1$ and $k_2$ are momenta in the directions perpendicular to the line, and $\delta k_3$ is a displacement from the gapless point in the direction of the line. The gamma matrices $\gamma_{0}, \gamma_1$, and $\gamma_2$ anticommute with each other. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\columnwidth]{CC_result.pdf} \caption{\label{fig:CC_result}Illustration of results of point-node classifications on a line: (i) The gapless point on the line is a genuine point node. (ii) A point node formed by multiple gapless points can be realized, but such a point node is actually a shrunk loop or surface node. Since there is no reason for two gapless points to be at the same position, it is natural to consider that a shrunk loop or surface node exists in such a case. (iii) The gapless point must be part of a line or surface node. } \end{center} \end{figure} After classifying the Dirac Hamiltonians, we find three types of gapless points: (i) a genuine point node [Fig.~\ref{fig:CC_result} (i)], (ii) a shrunk loop or surface node [Fig.~\ref{fig:CC_result} (ii)], and (iii) part of line or surface nodes [Fig.~\ref{fig:CC_result} (iii)]. It should be noted that the shrinking for case (ii) is not forced by symmetries. In other words, case (ii) indicates that such loop and surface nodes can shrink to a point just by deformations. In this work, we consider that such shrinkable nodes are realized as loop or surface nodes. Indeed, the classification result for the rotation axis in space group $P2/m$ with $B_g$ pairing is case (iii), as shown in Sec.~\ref{sec4:p2m}.
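A quick numerical illustration of Eq.~\eqref{eq:overview_cc-ham}, using $2\times 2$ Pauli matrices as the anticommuting gammas (a minimal sketch of case (i), assuming no further symmetry-allowed mass term exists):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dirac(k1, k2, dk3):
    # H = k1 gamma_1 + k2 gamma_2 + dk3 gamma_0 with anticommuting gammas
    return k1 * sx + k2 * sy + dk3 * sz

# the spectrum is +/- sqrt(k1^2 + k2^2 + dk3^2): gapped unless all vanish
for k in [(0.3, -0.2, 0.1), (0.0, 0.0, 0.5), (1.0, 0.0, 0.0)]:
    E = np.linalg.eigvalsh(dirac(*k))
    assert np.isclose(E[1], np.sqrt(sum(x**2 for x in k)))

# the gap closes only at the isolated point k1 = k2 = dk3 = 0
assert np.allclose(np.linalg.eigvalsh(dirac(0.0, 0.0, 0.0)), 0.0)
```

When a symmetry-allowed fourth matrix anticommuting with all three gammas exists, a mass term can gap out the crossing or displace it, which is what distinguishes cases (ii) and (iii) from this one.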
Thus, the gapless point on the rotation axis is not a genuine point node. \textit{Unification of compatibility relations and point-node classifications}.---Unifying compatibility relations and point-node classifications, we finally arrive at our classification scheme for the gapless points on lines, which is summarized in Fig.~\ref{fig:flow}. As a preparation for the classifications, we decompose momentum space into points, lines, polygons, and polyhedrons (called $0$-cells, $1$-cells, $2$-cells, and $3$-cells, respectively, in this work) [cf. Fig.~\ref{fig:p4mm}]. Suppose that we have a generator of gapless points on the line. \add{Here, \textit{generator} means a gapless point induced by a change of zero-dimensional topological invariants for irreducible representations. In other words, the generator has the minimum number of gapless states at a point on the line, which cannot be split due to symmetry constraints.} We first check whether compatibility relations between the line and adjacent polygons exist or not. Let us begin by discussing the case where they exist. Then, we further examine whether compatibility relations exist between the polygons and adjacent polyhedrons. If they exist, the gapless point is part of a surface node [S(A) in Fig.~\ref{fig:flow}]. Otherwise, the gapless point is part of a line node pinned on the polygons [L(A) in Fig.~\ref{fig:flow}]. On the other hand, when the compatibility relations between the line and adjacent polygons do not exist, we ask whether the gapless point on the line is a genuine point node from the results of point-node classifications. When the gapless point belongs to case (i) of the point-node classification, it is a genuine point node [P(B) in Fig.~\ref{fig:flow}]. If the gapless point is not consistent with the existence of a genuine point node, i.e., the point-node classification result is case (ii) or (iii), we conclude that the gapless point is part of a line node [L(B) in Fig.~\ref{fig:flow}].
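The decision flow of Fig.~\ref{fig:flow} can be sketched as a short function (our own paraphrase; the argument names are hypothetical, only the output labels S(A), L(A), P(B), L(B) follow the text):

```python
def classify_gapless_point(cr_line_to_polygon, cr_polygon_to_polyhedron,
                           point_node_case):
    # cr_line_to_polygon / cr_polygon_to_polyhedron: do compatibility
    # relations exist between the 1-cell and adjacent 2-cells, and between
    # the 2-cells and adjacent 3-cells?  point_node_case: result "i",
    # "ii", or "iii" of the point-node classification on the 1-cell.
    if cr_line_to_polygon:
        # compatibility relations propagate the invariant mismatch outward
        return "S(A)" if cr_polygon_to_polyhedron else "L(A)"
    # no compatibility relations: fall back on the point-node classification
    return "P(B)" if point_node_case == "i" else "L(B)"

# the two P2/m with B_g examples discussed in the text:
assert classify_gapless_point(True, False, None) == "L(A)"    # mirror plane
assert classify_gapless_point(False, False, "iii") == "L(B)"  # rotation axis
```

This is only the branching logic; the actual inputs are computed from the Wigner criteria and the cell decomposition introduced in Sec.~\ref{sec3}.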
Note that, since stable surface nodes require zero-dimensional topological charges, which are actually equivalent to zero-dimensional topological invariants, they are always diagnosable by compatibility relations. In this work, we classify the nodes pinned to the lines in all nonmagnetic and magnetic space groups with a concrete decomposition of momentum space. All results are summarized as tables in Supplementary Materials, which contain the information about positions and shapes of nodes. \begin{figure*}[t] \begin{center} \includegraphics[width=1.8\columnwidth]{flow.pdf} \caption{\label{fig:flow}A flowchart of our classification scheme. We focus on a generator of gapless points on a line (called 1-cell). We separately perform gapless point classifications on the 1-cell. Then, we ask if compatibility relations between the line and adjacent polygons exist or not. If yes, we examine whether compatibility relations exist between the polygons and their three-dimensional neighborhoods. If they exist, the gapless point is part of a surface node, denoted by S(A). Otherwise, the gapless point is part of a line node pinned on the polygons, denoted by L(A). Next, we consider the case where the compatibility relations between the line and adjacent polygons do not exist. In such a case, we ask if the generator coincides with a genuine point node from the results of gapless point classifications on 1-cells. When the gapless point belongs to case (i) of the point-node classification, it is a genuine point node, denoted by P(B). If the gapless point is not consistent with the existence of a genuine point node, i.e., the point-node classification result is case (ii) or (iii), we conclude that the gapless point is part of a line node, denoted by L(B). } \end{center} \end{figure*} \begin{table}[t] \begin{center} \caption{\label{tab:overview_P2m}Part of classification table for space group $P2/m$ with $B_g$ pairing.
The first and second columns represent the boundary points of the line where a gapless point exists. Labels of irreducible representations (irrep) are shown in the third column, which follows the notation in Ref.~\onlinecite{Bilbao}. The fourth column is the classification $\mathbb{Z}$ or $\mathbb{Z}_2$, and the fifth column gives the type of node. Here P, L, and S denote point, line, and surface nodes, respectively. In addition, while (A) means that the shape of the node is determined only by compatibility relations, (B) indicates that gapless point classifications are necessary. } \begin{tabular}{c|c|c|c|c} \hline HSP1 & HSP2 & irrep & classification & type of node \\ \hline\hline $\left(0,0,0\right)$&$\left(\frac{1}{2},0,0\right)$&$\bar{F}_3$&$\mathbb{Z}_2$&\text{L(A)}\\ $\left(0,0,0\right)$&$\left(0,\frac{1}{2},0\right)$&$\bar{\Lambda}_3$&$\mathbb{Z}_2$&\text{L(B)}\\ \hline \end{tabular} \end{center} \end{table} \textit{Applications to materials}.---Our classification leads to an efficient way to diagnose nodal structures in realistic superconductors. The procedure consists of two steps. One is to perform density-functional theory (DFT) calculations and compute irreducible representations in the normal phase at high-symmetry points, which leads to zero-dimensional topological invariants at high-symmetry points under the weak-pairing assumption~\cite{SI_Luka,Ono-Po-Shiozaki2020} (see Sec.~\ref{sec6} for more details). The other is to check if the obtained zero-dimensional topological invariants satisfy compatibility relations or not. Examining compatibility relations between zero-dimensional topological invariants at two high-symmetry points, we can detect the positions of gapless points on the line between the two high-symmetry points. Furthermore, referring to the classification tables, we can also understand the shape of nodes. For example, let us suppose that we have a superconductor crystallized in space group $P2/m$ with $B_g$ pairing.
In this space group, there are eight high-symmetry points, at which four irreducible representations are defined and labeled by $1,2,3,$ and $4$. Then, their EAZ classes are class D, and four Pfaffian invariants are defined at these points. We further suppose that the Pfaffian invariants for the irreducible representations $1$ and $2$ at $\Gamma$ are nontrivial and that the others are trivial. After examining if compatibility relations are satisfied, we find various violated ones. Here, let us focus on the violated compatibility relations on $(0,0,0)$-$(1/2,0,0)$ and $(0,0,0)$-$(0,1/2,0)$. Then, referring to Table~\ref{tab:overview_P2m}, we immediately see that the gapless points on these lines are part of line nodes. It is worth noting that our framework has been implemented in an automatic program. In Ref.~\onlinecite{Tang-Ono-Wan-Watanabe2021}, the authors have developed a subroutine that enables us to perform the diagnosis of nodal structures simply by uploading suitably formatted results of DFT calculations. \section{Formalism} \label{sec3} In Sec.~\ref{sec2}, we have provided an overview of our ideas to classify nodes on symmetric lines. In this section, we explain several ingredients needed to implement the systematic classifications, which will be discussed in Secs.~\ref{sec4} and \ref{sec5}. \subsection{BdG Hamiltonian and Symmetry representations} In this work, we always consider superconductors which can be described by the Bogoliubov–de Gennes (BdG) Hamiltonian \begin{align} \label{eq:BdG} H_{\bm{k}}&=\begin{pmatrix} h_{\bm{k}}& \Delta_{\bm{k}} \\ \Delta_{\bm{k}}^{\dagger} & -h_{-\bm{k}}^{*} \end{pmatrix}, \end{align} where $h_{\bm{k}}$ and $\Delta_{\bm{k}}$ denote the normal-phase Hamiltonian and the superconducting gap function, respectively~\cite{Note1}.
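The built-in particle-hole structure of Eq.~\eqref{eq:BdG} can be verified in a one-band toy model (a Kitaev-chain-like example of our own, not taken from the paper), where $C = \tau_x K$ relates $H_{\bm{k}}$ to $-H_{-\bm{k}}$:

```python
import numpy as np

def H_bdg(k, t=1.0, mu=0.5, delta=0.3):
    # 1D one-band toy BdG Hamiltonian: h_k is even in k, Delta_k is odd
    h = -2.0 * t * np.cos(k) - mu
    d = 1j * delta * np.sin(k)
    return np.array([[h, d], [np.conj(d), -h]])  # here -h_{-k}^* = -h_k

# particle-hole symmetry C = tau_x K: C H_k C^{-1} = -H_{-k}
tau_x = np.array([[0, 1], [1, 0]], dtype=complex)
for k in np.linspace(-np.pi, np.pi, 7):
    assert np.allclose(tau_x @ H_bdg(k).conj() @ tau_x, -H_bdg(-k))
```

The same check, with $\tau_x$ replaced by the appropriate unitary, carries over to multiband gap functions satisfying the fermionic antisymmetry $\Delta_{-\bm{k}}^{T} = -\Delta_{\bm{k}}$.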
\add{Here, we choose the gauge such that the BdG Hamiltonian is periodic in $\bm{k}$, i.e., $H_{\bm{k}+\bm{G}} = H_{\bm{k}}$ for reciprocal lattice vectors $\bm{G}$.} Suppose that the normal phase is invariant under a magnetic space group (MSG) $\mathcal{M} = \mathcal{G} + \mathcal{A}$, where $\mathcal{G}$ is a space group and $\mathcal{A}$ is the antiunitary part of $\mathcal{M}$. \add{Note that the notion of an MSG contains all ordinary space groups with and without TRS. For instance, when every element in $\mathcal{A}$ is the product of TRS and an element of $\mathcal{G}$, the MSG is nothing but a space group with TRS.} An MSG $\mathcal{M}$ always has a subgroup $T$ consisting of all lattice translations. An element $g\in \mathcal{M}$ transforms a point $\bm{r}$ in the real space to $g\bm{r} = p_g\bm{r}+\bm{t}_g$, where $p_g$ is an element of $\text{O}(3)$ and $\bm{t}_g$ represents a lattice translation or a fractional translation. Because of the existence of PHS $\mathcal{C}$ in the BdG Hamiltonian, the full symmetry group $G$ is divided into the following four parts \begin{align} G &= \mathcal{M} + \mathcal{M} \mathcal{C} \nonumber\\ \label{eq:MSG} &= \mathcal{G} + \mathcal{A} + \mathcal{P} + \mathcal{J}, \end{align} where $\mathcal{P}=\mathcal{G}\mathcal{C}$ and $\mathcal{J}=\mathcal{A}\mathcal{C}$ are the sets of particle-hole like and chiral like symmetries. We recall symmetry representations of $G$ in momentum space. We introduce two maps $c, \phi: G \rightarrow \mathbb{Z}_2=\{-1,1\}$. Here, $\phi_g = +1\ (-1)$ means $g$ is unitary (antiunitary), and $c_g=+1\ (-1)$ represents that $g$ commutes (anticommutes) with the Hamiltonian $H_{\bm{k}}$. Accordingly, an element $g\in \mathcal{M}$ transforms a point $\bm{k}$ in momentum space into $g\bm{k}=\phi_gp_g\bm{k}$.
In addition, the representation $\rho_{\bm{k}}(g)$ is expressed by \begin{align} \rho_{\bm{k}}(g) &= \begin{cases} U_{\bm{k}}(g) \quad \text{for }\phi_g = +1,\\ U_{\bm{k}}(g)K \quad \text{for }\phi_g = -1, \end{cases} \end{align} and $\rho_{\bm{k}}(g)$ satisfies \begin{align} \label{eq:trans_ham} \rho_{\bm{k}}(g)H_{\bm{k}} &= \begin{cases} H_{g\bm{k}}\rho_{\bm{k}}(g)\quad \text{for }c_g = +1,\\ -H_{g\bm{k}}\rho_{\bm{k}}(g)\quad \text{for }c_g = -1, \end{cases} \end{align} where $U_{\bm{k}}(g)$ and $K$ are a unitary matrix and the complex conjugation operator, respectively. Note that $U_{\bm{k}}(g)$ is a projective representation, i.e., the following relation holds \begin{align} \rho_{g'\bm{k}}(g) \rho_{\bm{k}}(g') = z_{g,g'}\rho_{\bm{k}}(gg'), \end{align} where $z_{g,g'}\in \text{U}(1)$ is a projective factor of $G$. For spinless systems, we can always choose $z_{g, g'} = +1$ for $g,g' \in \mathcal{G}$ or $\mathcal{A}$. Let us consider a point $\bm{k}$ in momentum space. For this point, we introduce a little group $\mathcal{G}_{\bm{k}}= \{h \in \mathcal{G}| h\bm{k} = \bm{k} +^\exists\bm{G}\}$, where $\bm{G}$ is a reciprocal lattice vector. Since elements $h \in \mathcal{G}_{\bk}$ are symmetries of $H_{\bm{k}}$, we can simultaneously block-diagonalize $H_{\bm{k}}$ and $U_{\bm{k}}(h)$ such that \begin{align} \label{eq:block-diag} U_{\bm{k}}(h) &=\text{diag}\left[U_{\bm{k}}^{\alpha_1}(h)\otimes\mathds{1}_{m_1}, \cdots, U_{\bm{k}}^{\alpha_n}(h)\otimes\mathds{1}_{m_n}\right],\\ \label{eq:block-diag2} H_{\bm{k}}&=\text{diag}\left[\mathds{1}_{d^{\alpha_1}}\otimes H^{\alpha_1}_{\bm{k}}, \cdots, \mathds{1}_{d^{\alpha_n}}\otimes H^{\alpha_n}_{\bm{k}}\right], \end{align} where $U_{\bm{k}}^{\alpha}(h)$ is an irreducible representation of $\mathcal{G}_{\bm{k}}$. Here, $d^{\alpha}$ and $m_{\alpha}$ are the dimensions of $U_{\bm{k}}^{\alpha}(h)$ and $H^{\alpha}_{\bm{k}}$, respectively~\cite{SI_Luka,Ono-Po-Shiozaki2020}.
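The block structure of Eqs.~\eqref{eq:block-diag} and \eqref{eq:block-diag2} follows from $[U_{\bm{k}}(h), H_{\bm{k}}] = 0$ alone, which a small numerical sketch makes concrete (our own toy: a diagonal unitary with two one-dimensional irrep sectors, and a random Hermitian matrix symmetrized over the group it generates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# a unitary symmetry with eigenvalues +i and -i (two 1D irreps)
U = np.diag([1j, 1j, -1j, -1j])

# symmetrize a random Hermitian matrix over the cyclic group generated by U
H0 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H0 = H0 + H0.conj().T
H = sum(np.linalg.matrix_power(U, m) @ H0 @ np.linalg.matrix_power(U, -m)
        for m in range(4)) / 4
assert np.allclose(U @ H, H @ U)   # [U, H] = 0 by construction

# projectors onto the +i / -i eigenspaces block-diagonalize H:
# no matrix elements connect different irrep sectors
P_plus = (np.eye(n) - 1j * U) / 2  # projector onto the +i eigenspace
P_minus = np.eye(n) - P_plus
assert np.allclose(P_plus @ H @ P_minus, 0.0)
```

For higher-dimensional irreps the same projection technique yields the tensor-product structure $\mathds{1}_{d^{\alpha}}\otimes H^{\alpha}_{\bm{k}}$ via Schur's lemma.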
One often considers the finite group $G_{\bm{k}}/T$, where $G_{\bm{k}}$ is a subgroup of $G$ and is defined in the same way as $\mathcal{G}_{\bk}$. In the literature~\cite{Bradley}, $G_{\bm{k}}/T$ is referred to as the ``little co-group.'' Importantly, $G_{\bm{k}}/T$ is isomorphic to a magnetic point group with PHS. We can always relate representations of $G_{\bm{k}}$ to those of $G_{\bm{k}}/T$, and we define the representation $\sigma_{\bm{k}}(g)$ of $G_{\bm{k}}/T$ by \begin{align} \sigma_{\bm{k}}(g) &= \begin{cases} U_{\bm{k}}(g)e^{-i \bm{k} \cdot \bm{t}_g} \quad \text{for }\phi_g = +1,\\ U_{\bm{k}}(g)e^{-i \bm{k} \cdot \bm{t}_g} K \quad \text{for }\phi_g = -1, \end{cases} \end{align} where $\bm{t}_g$ is a fractional translation or zero. Correspondingly, the projective factors also change as \begin{align} \sigma_{\bm{k}}(g)\sigma_{\bm{k}}(h) &= z_{g,h}^{\bm{k}}\sigma_{\bm{k}}(gh), \end{align} where $z_{g,h}^{\bm{k}}=z_{g,h}e^{-i \bm{k} \cdot (p_g \bm{t}_h- \phi_g \bm{t}_h)}$. Using these projective factors $z_{g,h}^{\bm{k}}$, we can obtain irreducible representations $u_{\bm{k}}^{\alpha}$ of $\mathcal{G}_{\bk}/T$, which are simply related to the irreducible representations $U_{\bm{k}}^{\alpha}$ of $\mathcal{G}_{\bk}$ by \begin{align} U_{\bm{k}}^{\alpha}(g) &= u_{\bm{k}}^{\alpha}(g)e^{-i\bm{k}\cdot \bm{t}_g}. \end{align} \subsection{Cell decomposition} \label{sec3:cell} \begin{figure*}[t] \begin{center} \includegraphics[width=1.6\columnwidth]{p4mm_v2.pdf} \caption{\label{fig:p4mm}Cell decomposition for $p4mm$. We first find an asymmetric unit of the BZ, illustrated in the left panel. The red and black arrows signify orientations of 1-cells and 2-cells, respectively. Then, we rotate each $p$-cell in the unit by the four-fold rotation symmetry. Finally, mapping them by the mirror symmetry, we arrive at the cell decomposition shown in the right panel.} \end{center} \end{figure*} Here, we explain the \textit{cell decomposition} of the Brillouin zone (BZ)~\cite{Shiozaki2018}.
In this work, we divide the BZ into points, lines, polygons, and polyhedrons, which are called 0-cells, 1-cells, 2-cells, and 3-cells, respectively. Before moving on to the formal discussions, we begin by introducing an example. Let us consider the wallpaper group $p4mm$ in two dimensions. Here we describe a way to find the cell decomposition shown in Fig.~\ref{fig:p4mm}, in which $0$-cells, $1$-cells, and $2$-cells are represented by orange circles, solid red lines, and pink polygons, respectively. We first find an asymmetric unit of the BZ, and then decompose the asymmetric unit into three $0$-cells (orange circles), three $1$-cells (solid red lines), and a $2$-cell (pink plane) in the left panel of Fig.~\ref{fig:p4mm}. Finally, we apply symmetry operations to this asymmetric unit and obtain the cell decomposition of the entire BZ: \begin{align} \label{eq:C0_p4mm} \mathcal{C}_0 &= \{\Gamma, \text{X}, \text{M}, \text{X}_1, \text{M}_1, \text{X}_2, \text{M}_2, \text{X}_3, \text{M}_3 \}, \\ \label{eq:C1_p4mm} \mathcal{C}_1 &= \{a, b, c, a_1, b_1, c_1, a_2, b_2, c_2,a_3, b_3, c_3, c_4, c_5, c_6, c_7\}, \\ \label{eq:C2_p4mm} \mathcal{C}_2 &= \{\alpha, \alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \alpha_6, \alpha_7\}, \end{align} where $\mathcal{C}_p\ (p = 0, 1,2)$ represents the set of $p$-cells. Note that, although various $p$-cells are equivalent or symmetry-related to other $p$-cells, we here assign different labels to them. For example, $\text{X}_2=(-\pi,0)$ is equivalent to $\text{X}=(\pi,0)$ and $\text{X}_1=(0,\pi)$ is symmetry-related to $\text{X}$. We proceed to explain the construction for arbitrary symmetry settings. As in the above example, we first find an asymmetric unit of the BZ and divide the asymmetric unit into the set of $p$-cells $\{D^{p}_{i}\}_i$ for $p=0,1,\cdots, d$. Next, we copy the decomposition of the asymmetric unit throughout the entire BZ by using crystalline symmetries.
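The copying step can be illustrated for the star of $\text{X}=(\pi,0)$ in $p4mm$ (a sketch of ours, using the eight point-group operations of $C_{4v}$ as integer matrices):

```python
import numpy as np

# the eight point-group operations of p4mm (C4v) as integer matrices
C4 = np.array([[0, -1], [1, 0]])
Mx = np.array([[1, 0], [0, -1]])
E = np.eye(2, dtype=int)
ops = [np.linalg.matrix_power(C4, n) @ M for n in range(4) for M in (E, Mx)]

# the star of X = (pi, 0): its images under all point-group operations
X = np.array([np.pi, 0.0])
star = {tuple(np.round(g @ X, 6)) for g in ops}

# four 0-cells, matching the labels X, X_1, X_2, X_3 in C_0 above
assert len(star) == 4
assert tuple(np.round([np.pi, 0.0], 6)) in star
assert tuple(np.round([0.0, np.pi], 6)) in star
```

Equivalent copies such as $\text{X}_2=(-\pi,0)$ are deliberately kept as distinct labels here, mirroring the convention of the text; the identifications up to reciprocal lattice vectors enter only later, in the $E_1$-pages.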
In other words, we define the entire set of $p$-cells by \begin{align} \mathcal{C}_p \equiv \bigcup_{i} \bigcup_{g \in G/T} D_{g(i)}^{p}, \end{align} where $D_{g(i)}^{p} = gD^{p}_{i}$. Note that, in this construction, some $p$-cells are equivalent or symmetry-related to others up to reciprocal lattice vectors. However, we do not identify such $p$-cells with others in the procedure of cell decomposition, and we will take into account these identifications in the construction of $E_1$-pages in Sec.~\ref{sec3:E1}. Each $p$-cell satisfies the following conditions: \begin{enumerate} \setlength{\itemsep}{-2pt} \item[(i)] The intersection of any two $p$-cells in $\mathcal{C}_p$ is an empty set, i.e., $D^{p}_{i}\cap D^{p}_{j}=\emptyset\ (i\neq j)$. \item[(ii)] Any point in a $p$-cell $D_{i}^{p}$ is invariant under symmetries or transformed to points in different $p$-cells by symmetries, namely, $g\bm{k} = \bm{k}+^{\exists}\bm{G}$ or $g\bm{k} \in D_{g(i)}^{p}$ if $\bm{k} \in D_{i}^{p}$. \item[(iii)] The boundary $\partial D_{i}^{p}$ consists of $(p-1)$-cells for $p\geq 1$. \item[(iv)] Each $p$-cell ($p\geq 1$) is oriented in a symmetric manner. \item[(v)] No two boundary $p$-cells of a $(p+1)$-cell are equivalent or symmetry-related to each other. \end{enumerate} For our purpose of systematically diagnosing nodes pinned to lines in the BZ, condition (v) is crucial. In Appendix~\ref{app:cell_3D}, we provide units of the 3D BZ for each type of lattice. \subsection{Emergent Altland-Zirnbauer classes} \label{sec3:EAZ} \begin{table} \begin{center} \caption{\label{tab:EAZ}The classification of zero-dimensional topological phases for each EAZ symmetry class. Topological indices $p_{\bm{k}}^{\alpha}$ and $N_{\bm{k}}^{\alpha}$ in the table are defined by Eqs.~\eqref{eq:Pf} and \eqref{eq:int}.
Here, $\mathcal{W}_{\bm{k}}[\alpha]$ represents the triple of the results of the Wigner criteria $(W^{\alpha}_{\bm{k}}(\mathcal{A}) , W^{\alpha}_{\bm{k}}(\mathcal{P}), W^{\alpha}_{\bm{k}}(\mathcal{J}))$ defined by Eqs.~\eqref{eq:wigner_C}-\eqref{eq:wigner_G}. } \begin{tabular}{c|c|c|c} \hline EAZ & $\mathcal{W}_{\bm{k}}[\alpha]$ & classification & index \\ \hline\hline A & $(0,0,0)$ & $\mathbb{Z}$ & $N_{\bm{k}}^{\alpha}$ \\ AIII & $(0,0,1)$ & $0$ & None \\ AI & $(1,0,0)$ & $\mathbb{Z}$ & $N_{\bm{k}}^{\alpha}$ \\ BDI & $(1,1,1)$ & $\mathbb{Z}_2$ & $p_{\bm{k}}^{\alpha}$ \\ D & $(0,1,0)$ & $\mathbb{Z}_2$ & $p_{\bm{k}}^{\alpha}$ \\ DIII & $(-1,1,1)$ & $0$ & None \\ AII & $(-1,0,0)$ & $2\mathbb{Z}$ & $N_{\bm{k}}^{\alpha}$ \\ CII & $(-1,-1,1)$ & $0$ & None \\ C & $(0,-1,0)$ & $0$ & None \\ CI & $(1,-1,1)$ & $0$ & None \\ \hline \end{tabular} \end{center} \end{table} \add{Symmetries in $\mathcal{A}, \mathcal{P},$ and $\mathcal{J}$ in Eq.~\eqref{eq:MSG} sometimes keep a sector $H_{\bm{k}}^{\alpha}$ in Eq.~\eqref{eq:block-diag2} unchanged, and other times transform it to another sector. The symmetries that leave $H_{\bm{k}}^{\alpha}$ unchanged lead to an effective internal symmetry class for each irreducible representation on a $p$-cell, which is referred to as the emergent Altland-Zirnbauer symmetry class (EAZ class).} In the following, we discuss how to determine the effects of symmetries in $\mathcal{A}, \mathcal{P},$ and $\mathcal{J}$. In our construction of the cell decomposition, the little groups $\mathcal{G}_{\bk}$ coincide for all points $\bm{k}$ in a $p$-cell $D^p$, and therefore the common little group is denoted by $\mathcal{G}_{D^p}$. In the same way as $\mathcal{G}_{D^p}$, we define a subset $\mathcal{V}_{D^p}$ of $\mathcal{V}$ by $\mathcal{V}_{D^p} = \{v \in \mathcal{V}| v\bm{k} = \bm{k} +^\exists\bm{G}\ \text{for}\ \forall \bm{k} \in D^p\}$, where $\mathcal{V}=\mathcal{A}, \mathcal{P}, \mathcal{J}$.
Then, we identify the actions of time-reversal like, particle-hole like, and chiral like symmetries on each $H_{\bm{k}}^{\alpha}$ by the Wigner criteria~\cite{Bradley,Shiozaki2018} \begin{align} \label{eq:wigner_C} W^{\alpha}_{D^p}(\mathcal{P}) &=\frac{1}{\vert \mathcal{P}_{\bm{k}}/T \vert}\sum_{c \in \mathcal{P}_{\bm{k}}/T }z^{\bm{k}}_{c, c}\chi_{\bm{k}}^{\alpha}(c^2) \in \{0, \pm 1\},\\ W^{\alpha}_{D^p}(\mathcal{A}) &=\frac{1}{\vert \mathcal{A}_{\bm{k}}/T \vert}\sum_{a \in \mathcal{A}_{\bm{k}}/T }z^{\bm{k}}_{a, a}\chi_{\bm{k}}^{\alpha}(a^2) \in \{0, \pm 1\},\\ \label{eq:wigner_G} W^{\alpha}_{D^p}(\mathcal{J}) &= \frac{1}{\vert \mathcal{G}_{\bm{k}}/T \vert} \sum_{g \in \mathcal{G}_{\bm{k}}/T } \frac{z^{\bm{k}}_{\gamma, \gamma^{-1}g\gamma}}{z^{\bm{k}}_{g, \gamma}} [\chi^{\alpha}_{\bm{k}}(\gamma^{-1} g \gamma)]^{*}\chi^{\alpha}_{\bm{k}}(g)\\ &\in\{0,1\}\nonumber, \end{align} where $\chi_{\bm{k}}^{\alpha}(g) =\mathrm{tr}[u_{\bm{k}}^{\alpha}(g)]$ for $\bm{k} \in D^p$ and $\gamma$ is a chiral like symmetry. Note that, in fact, it is enough for our purpose to consider a single point $\bm{k}$ in $D^p$. When $W_{D^p}^{\alpha}(\mathcal{V}) = 0$, additional symmetries in $\mathcal{V}_{D^p}$ transform $H^{\alpha}_{\bm{k}}$ into another sector $H^{\beta}_{\bm{k}}$. On the other hand, when $W_{D^p}^{\alpha}(\mathcal{V}) = \pm 1$, $H^{\alpha}_{\bm{k}}$ is invariant under the additional symmetries. Then, the EAZ symmetry class for $H_{\bm{k}}^{\alpha}$ is determined by \begin{align} \label{eq:wigner} \mathcal{W}_{D^p}[\alpha]\equiv(W^{\alpha}_{D^p}(\mathcal{A}) , W^{\alpha}_{D^p}(\mathcal{P}), W^{\alpha}_{D^p}(\mathcal{J})).
\end{align} Depending on the EAZ symmetry classes, the following zero-dimensional topological invariants are assigned to each sector $H_{\bm{k}}^{\alpha}$~\cite{SI_Luka, Ono-Po-Shiozaki2020} (see Table~\ref{tab:EAZ}): \begin{align} \label{eq:Pf} p_{\bm{k}}^{\alpha} &\equiv \frac{1}{i\pi}\mathrm{log}\frac{\mathrm{Pf}[U(H_{\bm{k}}^{\alpha})]}{\mathrm{Pf}[U(H_{\bm{k}}^{\alpha})^{\text{vac}}]}\mod 2,\\ \label{eq:int} N_{\bm{k}}^{\alpha} &\equiv n_{\bm{k}}^{\alpha} - (n_{\bm{k}}^{\alpha})^{\text{vac}}. \end{align} To define the above topological invariants, we introduce a reference Hamiltonian $H_{\bm{k}}^{\text{vac}}$ in the same symmetry setting~\cite{Skurativska2019,SI_Shiozaki,Ono-Po-Watanabe2020,SI_Luka,Ono-Po-Shiozaki2020}. In Eqs.~\eqref{eq:Pf} and \eqref{eq:int}, $(H_{\bm{k}}^{\alpha})^{\text{vac}}$ denotes the counterpart of $H_{\bm{k}}^{\alpha}$ for $H_{\bm{k}}^{\text{vac}}$, and $U$ is the particle-hole-like symmetry for $H_{\bm{k}}^{\alpha}$ satisfying $UU^*=+1$ and $U (H_{\bm{k}}^{\alpha})^*=-H_{\bm{k}}^{\alpha} U$. We also define $n_{\bm{k}}^{\alpha}$ and $(n_{\bm{k}}^{\alpha})^{\text{vac}}$ as the numbers of occupied states in $H_{\bm{k}}^{\alpha}$ and $(H_{\bm{k}}^{\alpha})^{\text{vac}}$, respectively. In practice, we can always construct an appropriate reference $H_{\bm{k}}^{\text{vac}}$ from $H_{\bm{k}}$. For example, since the vacuum is always topologically trivial, $H_{\bm{k}}$ in the limit of infinite chemical potential is often used as $H_{\bm{k}}^{\text{vac}}$~\cite{Skurativska2019,Ono-Po-Shiozaki2020}. In fact, we will adopt this definition of a reference Hamiltonian in Sec.~\ref{sec6}. \subsection{$E_1$-pages} \label{sec3:E1} As seen in the preceding discussions, the Wigner criteria in Eqs.~\eqref{eq:wigner_C}-\eqref{eq:wigner_G} tell us the EAZ class for each irreducible representation.
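To make Eq.~\eqref{eq:Pf} concrete, consider the simplest class-D setting: a single $2\times 2$ Hamiltonian written in an antisymmetric (Majorana-like) form, so that its Pfaffian is just the off-diagonal entry. The following sketch is a toy illustration under these assumptions, not the general algorithm of the references above; the $\mathbb{Z}_2$ invariant reduces to the relative sign of the Pfaffians of the Hamiltonian and the reference:

```python
import numpy as np

def pfaffian_2x2(A):
    """The Pfaffian of a 2x2 antisymmetric matrix is simply A[0, 1]."""
    assert np.allclose(A, -A.T)
    return A[0, 1]

def z2_invariant(m, m_vac=1.0):
    """Toy version of p = (1/(i*pi)) log(Pf[A(m)]/Pf[A(m_vac)]) mod 2
    for the Majorana-form Hamiltonian A(m) = [[0, m], [-m, 0]]."""
    A = np.array([[0.0, m], [-m, 0.0]])
    A_vac = np.array([[0.0, m_vac], [-m_vac, 0.0]])
    ratio = pfaffian_2x2(A) / pfaffian_2x2(A_vac)
    # log(ratio)/(i*pi) is 0 for ratio > 0 and 1 for ratio < 0 (mod 2)
    return int(np.round(np.log(complex(ratio)).imag / np.pi)) % 2

print(z2_invariant(+0.5))  # 0: Pfaffian has the same sign as the reference
print(z2_invariant(-0.5))  # 1: sign change of the Pfaffian -> nontrivial
```

The invariant is insensitive to the magnitude of $m$; only the sign of the Pfaffian relative to the reference enters, which is why a single reference Hamiltonian per symmetry setting suffices.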
Then, let us define abelian groups $E_{1}^{p,0}$ in the following, which can be interpreted as the classification of $\{H_{\bm{k}}^{\alpha}\}_{\alpha}$ at points $\bm{k}$ inside $p$-cells. The total set $\mathcal{C}_p$ of $p$-cells consists of $N_p$ subsets (so-called ``stars'' in the literature~\cite{Bradley}) defined by $S_{D^{p}_{i}} =\{D_{g(i)}^{p}=gD^{p}_{i}\vert g \in G\}$, where $N_p$ is the number of subsets and $D^{p}_{i}$ is a representative $p$-cell of the subset $S_{D^{p}_{i}}$. The representatives form a set of independent $p$-cells \begin{align} F^p \equiv \{D^{p}_{i}\}_{i=1}^{N_p}. \end{align} \add{In Ref.~\onlinecite{Shiozaki2018}, the abelian groups $E_{1}^{p,0}$ (called $E_1$-pages) are defined by the direct sum of twisted equivariant $K$-groups~\cite{Freed2013} on $p$-cells in $F^p $. It turns out that $E_{1}^{p,0}$ is the direct sum of the classifications of zero-dimensional topological phases of $\{H_{\bm{k}}^{\alpha}\}_{\alpha}$ (defined in Eq.~\eqref{eq:block-diag2}) at a point $\bm{k}$ in each $D^{p}_{i} \in F^p$. Then, $E_{1}^{p,0}$ is completely determined by $\mathcal{W}_{D^{p}_{i}}[\alpha]$ for each irreducible representation and each $D^{p}_{i} \in F^p$. In other words, $E_{1}^{p,0}$ is defined by} \begin{align} \label{eq:E1-def1} E_{1}^{p,0} &\equiv \bigoplus_{i\vert_{D^{p}_{i}\in F^p}}\left(\mathbb{Z}_{2}^{\oplus_{\alpha}}\oplus \mathbb{Z}^{\oplus_{\beta}} \right), \end{align} \add{where the summation runs over the labels of irreducible representations $\alpha$ and $\beta$ subject to the following conditions: \begin{enumerate} \setlength{\itemsep}{-2pt} \item[(a)] $\mathcal{W}_{D^{p}_{i}}[\alpha]\in\{(0,1,0), (1,1,1)\}$ and $\mathcal{W}_{D^{p}_{i}}[\beta]\in\{(0,0,0), (1,0,0), (-1,0,0)\}$ (see Eq.~\eqref{eq:wigner}); \item[(b)] When an irreducible representation on $D^{p}_{i}$ is related to other ones by antiunitary and chiral-like symmetries, only one of the related irreducible representations on $D^{p}_{i}$ is taken into account.
\end{enumerate} } As discussed in Ref.~\onlinecite{Shiozaki2018}, $E_{1}^{p,0}$ for $p\geq 1$ has several different interpretations. \add{For $p\geq 1$, $E_{1}^{p,0}$ represents the set of gapless states with $(p-1)$-dimensional gapless regions in the Bogoliubov quasiparticle spectrum on $p$-cells.} \add{ Intuitively, it can also be understood as changes of zero-dimensional topological invariants on $p$-cells. Let us focus on a $1$-cell. Then, we define the same zero-dimensional topological invariants for any point on the $1$-cell, as explained in Sec.~\ref{sec3:EAZ}. However, these invariants need not take the same values at all points in the $1$-cell. Regarding momentum as a deformation parameter, the system must have gapless points on the 1-cell if the zero-dimensional topological invariants differ between points on the line [See Fig.~\ref{fig:E1}]. Possible changes of zero-dimensional topological invariants on the 1-cell are equivalent to the classifications of zero-dimensional topological phases of $\{H_{\bm{k}}^{\alpha}\}_{\alpha}$ at a point $\bm{k}$ on the 1-cell, which is the first interpretation of $E_{1}^{1,0}$. In the same way as $E_{1}^{1,0}$, $E_{1}^{2,0}$ and $E_{1}^{3,0}$ can be understood as the sets of gapless lines and surfaces on $2$- and $3$-cells, respectively [See Fig.~\ref{fig:E1}]. Note that gapless points and lines for $E_{1}^{1,0}$ and $E_{1}^{2,0}$ are not always genuine point and line nodes. In other words, they are often parts of higher-dimensional nodes. } \begin{figure*}[t] \begin{center} \includegraphics[width=1.9\columnwidth]{E1-page.pdf} \caption{\label{fig:E1}Illustration of elements of $E_{1}^{p,0}$. For $p=0$, the elements are gapped states at $0$-cells. As for $p\ (p \geq 1)$, there are two $p$-dimensional regions in which the zero-dimensional topological invariants are different from each other.
Since the zero-dimensional topological invariants must be the same for gapped regions, the boundary results in gapless states on $p$-cells.} \end{center} \end{figure*} Based on these interpretations, we can characterize any system by a list of band labels \begin{align} \mathfrak{n}^{(p)} &=(\mathfrak{p}_{D^{p}_{1}}^{\alpha_1}, \mathfrak{p}_{D^{p}_{1}}^{\alpha_2}, \cdots, \mathfrak{N}_{D^{p}_{1}}^{\beta_1}, \cdots, \mathfrak{p}_{D^{p}_{2}}^{\alpha'_1}, \cdots,\mathfrak{N}_{D^{p}_{2}}^{\beta'_1}, \cdots), \end{align} where $\mathfrak{p}_{D_{i}^{p}}^{\alpha}$ and $\mathfrak{N}_{D_{i}^{p}}^{\beta}$ are $\mathbb{Z}_2$-valued and $\mathbb{Z}$-valued band labels, respectively. \add{While band labels for $p=0$ are nothing but the zero-dimensional topological invariants in Eqs.~\eqref{eq:Pf} and \eqref{eq:int}, those for $p$-cells $(p \geq 1)$ represent changes of the zero-dimensional topological invariants.} Correspondingly, the abelian group $E_{1}^{p,0}$ is formulated by \begin{align} \label{eq:E1-topo} E^{p, 0}_{1} =\bigoplus_{i\vert_{D^{p}_{i}\in F^p}}\left(\bigoplus_{\alpha}\mathbb{Z}_{2}[\mathfrak{b}^{(p)}_{D^{p}_{i},\alpha}] \oplus \bigoplus_{\beta} \mathbb{Z}[\mathfrak{b}^{(p)}_{D^{p}_{i},\beta}] \right), \end{align} where $\{\mathfrak{b}^{(p)}_{D^{p}_{i},\alpha}\}$ denotes the set of \add{generators of $E^{p, 0}_{1}$} in terms of which an arbitrary $\mathfrak{n}^{(p)}$ can be expanded, and the summations over $\alpha$ and $\beta$ are the same as in Eq.~\eqref{eq:E1-def1}. \add{In addition, $\mathbb{Z}_{2}[\mathfrak{b}^{(p)}_{D^{p}_{i},\alpha}]$ and $\mathbb{Z}[\mathfrak{b}^{(p)}_{D^{p}_{i},\beta}]$ represent the abelian groups generated by $\mathfrak{b}^{(p)}_{D^{p}_{i},\alpha}$ and $\mathfrak{b}^{(p)}_{D^{p}_{i},\beta}$, respectively.} In this work, we construct the generator $\mathfrak{b}^{(p)}_{D^{p}_{i},\alpha}$ as follows. Each $\mathfrak{b}^{(p)}_{D^{p}_{i},\alpha}$ is generated by an irreducible representation $U_{D^{p}_{i}}^{\alpha}$ at a $p$-cell $D_{i}^p$ in $\mathcal{C}_p$.
As explained in Sec.~\ref{sec3:cell}, we include the equivalence or symmetry relations among $p$-cells in the basis. We consider a $p$-cell $D^{p}_{i}$, and suppose that we have a nontrivial band label $\mathfrak{p}_{D^{p}_{i}}^{\alpha} = 1$ or $\mathfrak{N}_{D^{p}_{i}}^{\alpha} = 1$ for an irreducible representation $U_{D^{p}_{i}}^{\alpha}$. Then, band labels on equivalent or symmetry-related $p$-cells are determined by those on $D^{p}_{i}$. We first derive the relation between the irreducible representations $U_{D^{p}_{g(i)}}^{\alpha'}$ and $U_{D^{p}_{i}}^{\alpha}$: \begin{align} U_{D^{p}_{g(i)}}^{\alpha'}(h') &= \begin{cases} \frac{z_{h', g}}{z_{g, g^{-1} h' g}}U_{D^{p}_{i}}^{\alpha}(g^{-1}h'g)\quad\text{for }\phi_g = +1 \\ \frac{z_{h', g}}{z_{g, g^{-1} h' g}}[U_{D^{p}_{i}}^{\alpha}(g^{-1}h'g)]^{*}\quad\text{for }\phi_g = -1 \end{cases}, \end{align} where $g\in G$ and $h' \in \mathcal{G}_{D^{p}_{g(i)}}$. Since the spectrum of $H_{D^{p}_{g(i)}}^{\alpha'}$ is the same as that of $H_{D^{p}_{i}}^{\alpha}$, the band labels at $D^{p}_{g(i)}$ follow straightforwardly: \begin{align} \label{eq:pf_rel} \mathfrak{p}_{D^{p}_{g(i)}}^{\alpha'} &= \mathfrak{p}_{D^{p}_{i}}^{\alpha}, \\ \mathfrak{N}_{D^{p}_{g(i)}}^{\alpha'}&= \begin{cases} \mathfrak{N}_{D^{p}_{i}}^{\alpha}\quad\text{for }c_g = +1 \\ -\mathfrak{N}_{D^{p}_{i}}^{\alpha}\quad\text{for }c_g = -1 \end{cases}. \end{align} As a result, we can obtain the set of band labels such that only $\mathfrak{p}_{D^{p}_{i}}^{\alpha}$ $(\mathfrak{N}_{D^{p}_{i}}^{\alpha})$ and the associated band labels are $1$ ($1$ or $-1$ \add{for EAZ classes A and AI; $2$ or $-2$ for EAZ class AII}). Indeed, this is exactly what we call $\mathfrak{b}^{(p)}_{D^{p}_{i},\alpha}$. To make our understanding clearer, let us discuss a simple example: a one-dimensional even-parity superconductor in class D. We first decompose an asymmetric unit into two 0-cells $\Gamma$, X and a 1-cell $a$ as illustrated in Fig.~\ref{fig:1Dex} (a).
By acting with the inversion symmetry $I$ on the unit, we find the cell decomposition: \begin{align} \mathcal{C}_0 &\equiv \{\Gamma, \text{X}, \text{X}'=I\text{X}\},\\ \mathcal{C}_1 &\equiv \{a, a'=Ia\}, \end{align} where $F^{0}=\{\Gamma,\text{X}\}$ and $F^1=\{a\}$. We then obtain the classifications of each irreducible representation at the $0$-cells and $1$-cells. Figure~\ref{fig:1Dex} (a) illustrates the action of the particle-hole-like symmetries on each sector of the Hamiltonians at each cell in $F^p$, and we find that the EAZ class for each inversion eigenvalue at $\Gamma$ and X is class D, and the EAZ class at $a$ is also class D. Therefore, $E_{1}^{0,0}=(\mathbb{Z}_2)^4$ and $E_{1}^{1,0}=\mathbb{Z}_2$. Next, we formulate the $E_1$-pages in the form of Eq.~\eqref{eq:E1-topo}. We define the Pfaffian invariants $\mathfrak{p}_{D^{0}\in \mathcal{C}_0}^{\alpha=\pm}$~\cite{Ryu_2010} for each inversion eigenvalue at the 0-cells, and they form the set of band labels $(\mathfrak{p}_{\Gamma}^{+},\mathfrak{p}_{\Gamma}^{-},\mathfrak{p}_{\text{X}}^{+},\mathfrak{p}_{\text{X}}^{-},\mathfrak{p}_{\text{X}'}^{+},\mathfrak{p}_{\text{X}'}^{-})$. On the other hand, since the 1-cells are invariant under the combination of PHS $\mathcal{C}$ and the inversion symmetry $I$ with $(\mathcal{C} I)^2=+1$, the Pfaffian invariant can also be defined on the $1$-cells $a$ and $a'$. Correspondingly, the set of band labels for the 1-cells is $(\mathfrak{p}_{a},\mathfrak{p}_{a'})$. We then construct the basis vectors of $E_{1}^{0,0}$ and $E_{1}^{1,0}$. From Eq.~\eqref{eq:pf_rel}, we find $\mathfrak{p}_{\text{X}'}^{\pm} = \mathfrak{p}_{\text{X}}^{\pm}$ and $\mathfrak{p}_{a'} = \mathfrak{p}_{a}$.
Therefore, we obtain \begin{align} \label{eq:basis1} \mathfrak{b}_{\Gamma,+}^{(0)} &= (1,0,0,0,0,0);\\ \mathfrak{b}_{\Gamma,-}^{(0)} &= (0,1,0,0,0,0);\\ \mathfrak{b}_{\text{X},+}^{(0)} &= (0,0,1,0,1,0);\\ \mathfrak{b}_{\text{X},-}^{(0)} &= (0,0,0,1,0,1);\\ \label{eq:basis5} \mathfrak{b}_{a}^{(1)} &= (1,1), \end{align} and they generate $E_{1}^{0,0}$ and $E_{1}^{1,0}$ as \begin{align} E_{1}^{0,0}&=\mathbb{Z}_2[\mathfrak{b}_{\Gamma,+}^{(0)} ]\oplus\mathbb{Z}_2[\mathfrak{b}_{\Gamma,-}^{(0)} ]\oplus\mathbb{Z}_2[\mathfrak{b}_{\text{X},+}^{(0)} ]\oplus\mathbb{Z}_2[\mathfrak{b}_{\text{X},-}^{(0)} ],\\ E_{1}^{1,0}&=\mathbb{Z}_2[\mathfrak{b}_{a}^{(1)}], \end{align} which are illustrated in Fig.~\ref{fig:1Dex} (b). \begin{figure}[t] \begin{center} \includegraphics[width=1.0\columnwidth]{1D_example_v3.pdf} \caption{\label{fig:1Dex}Illustration of the 1D even-parity superconductors. (a) An asymmetric unit of the BZ and EAZ classes for cells in $F^0=\{\Gamma,\text{X}\}$ and $F^1=\{a\}$. Here, the red arrows signify orientations of the 1-cell. (b) Illustrative description of $E_{1}^{0,0}$ and $E_{1}^{1,0}$. The entries in brackets represent the band structures of generators. (c,d) The physical process of $d_{1}^{0,0}$. For the system with $(\mathfrak{p}_{\Gamma}^{+},\mathfrak{p}_{\Gamma}^{-},\mathfrak{p}_{\text{X}}^{+},\mathfrak{p}_{\text{X}}^{-}) = (1,0,1,0)$, $d_{1}^{0,0}$ does not generate a gapless point on the 1-cells [(c)]. On the other hand, for the system with $(\mathfrak{p}_{\Gamma}^{+},\mathfrak{p}_{\Gamma}^{-},\mathfrak{p}_{\text{X}}^{+},\mathfrak{p}_{\text{X}}^{-}) = (0,0,1,0)$, $d_{1}^{0,0}$ enforces a gapless point on each 1-cell [(d)].
In the figure, we omit $\mathfrak{p}_{\text{X}'}^{+},\mathfrak{p}_{\text{X}'}^{-},$ and $\mathfrak{p}_{a'}$ since $\mathfrak{p}_{\text{X}'}^{\pm}=\mathfrak{p}_{\text{X}}^{\pm}$ and $\mathfrak{p}_{a'}=\mathfrak{p}_a$.} \end{center} \end{figure} \subsection{Compatibility relations} \label{sec3:CR} In this subsection, we discuss constraints on the zero-dimensional topological invariants, which are called \textit{compatibility relations} and were developed in Refs.~\onlinecite{PhysRevX.7.041069,Po2017,TQC,SI_Luka,Ono-Po-Shiozaki2020}. Compatibility relations will be utilized in Sec.~\ref{sec5}. Before moving on to the general discussion, we begin by showing the compatibility relations in the 1D even-parity superconductors discussed in Sec.~\ref{sec3:E1}. As shown in Sec.~\ref{sec3:E1}, the Pfaffian invariants are defined for each inversion-eigenvalue sector at $\Gamma$ and X. Note that the sum of the Pfaffian invariants $p_{k}^{+}+p_{k}^{-}\ (k=\Gamma,\text{X})$ is also a Pfaffian invariant, defined for the total Hamiltonian rather than for each inversion-eigenvalue sector. Thus, when the system is fully gapped, $p_{k}^{+}+p_{k}^{-}\ (k=\Gamma,\text{X})$ must coincide with the Pfaffian invariant at any point in the 1-cells, i.e., \begin{align} p_{k \in a}&=p_{\Gamma}^{+}+p_{\Gamma}^{-} = p_{\text{X}}^{+}+p_{\text{X}}^{-}, \\ p_{k \in a'}&=p_{\Gamma}^{+}+p_{\Gamma}^{-} = p_{\text{X}'}^{+}+p_{\text{X}'}^{-}. \end{align} This is what we refer to as compatibility relations. Compatibility relations also lead to relations between band labels on $0$-cells and $1$-cells.
Since the band label for $1$-cells can be understood as the change of the zero-dimensional topological invariants, the difference of Pfaffian invariants between $\Gamma$ and $\text{X}\ (\text{X}')$ results in $\mathfrak{p}_{a}\ (\mathfrak{p}_{a'})$, i.e., \begin{align} \label{eq:CR_1D} \begin{pmatrix} \mathfrak{p}_{a}\\\mathfrak{p}_{a'} \end{pmatrix}&= \begin{pmatrix} 1 & 1 & -1 & -1 & 0 & 0 \\ 1 & 1 & 0 & 0 & -1 & -1 \\ \end{pmatrix} \begin{pmatrix} \mathfrak{p}_{\Gamma}^{+}\\\mathfrak{p}_{\Gamma}^{-}\\\mathfrak{p}_{\text{X}}^{+}\\\mathfrak{p}_{\text{X}}^{-}\\\mathfrak{p}_{\text{X}'}^{+} \\ \mathfrak{p}_{\text{X}'}^{-} \end{pmatrix}. \end{align} Then, we generalize the above discussions. Let $D^{(p+1)}$ be a $(p+1)$-cell, and let $D^{p}$ be a boundary $p$-cell of $D^{(p+1)}$. Since $\mathcal{G}_{D^{(p+1)}}$ is a subgroup of $\mathcal{G}_{D^{p}}$ (or coincides with it) in our cell decomposition, an irreducible representation $U_{D^{p}}^{\alpha}$ of $\mathcal{G}_{D^{p}}$ can always be constructed from irreducible representations on $D^{(p+1)}$: \begin{align} \label{eq:irrep_decomposition} U_{D^{p}}^{\alpha}(g) = \bigoplus_{\beta}c_{D^p,D^{p+1}}^{\alpha\beta}U^{\beta}_{D^{(p+1)}}(g), \end{align} where $c_{D^p,D^{p+1}}^{\alpha\beta}$ is a non-negative integer obtained from the orthogonality of irreducible representations, $c_{D^p,D^{p+1}}^{\alpha\beta}=\frac{1}{\vert\mathcal{G}_{D^{(p+1)}}/T\vert}\sum_{g\in\mathcal{G}_{D^{(p+1)}}/T}\left(\chi_{D^{(p+1)}}^{\beta}(g)\right)^{*}\chi_{D^{p}}^{\alpha}(g)$. Given the decomposition in Eq.~\eqref{eq:irrep_decomposition}, the number of irreducible representations $U^{\beta}_{D^{p+1}}$ included in $U_{\bm{k}}(g)$ follows from the numbers at $D^p$ (denoted by $n_{D^{p}}^{\alpha}$). This relation is described by $n_{D^{(p+1)}}^{\beta} = \sum_{\alpha}n_{D^{p}}^{\alpha}c_{D^p,D^{p+1}}^{\alpha\beta}$~\cite{Bradley, Po2017}.
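As a quick numerical check of Eq.~\eqref{eq:CR_1D}, one can apply the compatibility-relation matrix to a vector of $\mathbb{Z}_2$-valued band labels; the following minimal sketch (the ordering of labels follows the 1D even-parity example above) shows which configurations force gapless points on the 1-cells:

```python
import numpy as np

# Compatibility-relation matrix of Eq. (CR_1D): rows give (p_a, p_a');
# columns act on (p_G+, p_G-, p_X+, p_X-, p_X'+, p_X'-); labels are Z2-valued.
M = np.array([[1, 1, -1, -1, 0, 0],
              [1, 1, 0, 0, -1, -1]])

def pfaffians_on_1cells(labels):
    """Apply the compatibility relations modulo 2."""
    return M.dot(labels) % 2

# A configuration satisfying all relations: p_G+ = p_X+ = p_X'+ = 1
print(pfaffians_on_1cells([1, 0, 1, 0, 1, 0]))  # [0 0] -> fully gapped allowed

# After a band inversion at Gamma, the relations are violated
print(pfaffians_on_1cells([0, 0, 1, 0, 1, 0]))  # [1 1] -> gapless point on each 1-cell
```

The signs in the matrix are immaterial modulo 2 here, but they matter once $\mathbb{Z}$-valued labels and cell orientations enter the general relations below.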
Accordingly, when the system is fully gapped, the zero-dimensional topological invariants in Eqs.~\eqref{eq:Pf} and \eqref{eq:int} at $\bm{k}' \in D^{(p+1)}$ are related to those at $\bm{k} \in D^{p}$, which we refer to as compatibility relations. There exist the following four types of compatibility relations~\cite{Ono-Po-Shiozaki2020}: \begin{align} \label{eq:CR-1} p_{\bm{k}'}^{\beta} &= \sum_{\alpha}c_{D^p,D^{p+1}}^{\alpha\beta}p_{\bm{k}}^{\alpha}+\sum_{\gamma}c_{D^p,D^{p+1}}^{\gamma\beta}N_{\bm{k}}^{\gamma} \mod 2,\\ \label{eq:CR-2} p_{\bm{k}'}^{\beta} &= 0 \mod 2,\\ \label{eq:CR-3} N_{\bm{k}'}^{\beta} &= \sum_{\alpha}c_{D^p,D^{p+1}}^{\alpha\beta}N_{\bm{k}}^{\alpha},\\ \label{eq:CR-4} N_{\bm{k}'}^{\beta} &= 0. \end{align} Using the compatibility relations, we construct a map from $E_{1}^{p,0}$ to $E_{1}^{p+1,0}$. Band labels at all boundary $p$-cells $D^{p}_{i}$ of $D^{p+1}$ contribute to those at $D^{p+1}$. Taking into account the orientations of cells, we have the following relations: \begin{align} \label{eq:gCR-1} \mathfrak{p}_{D^{p+1}}^{\beta} &= \sum_{i}\delta_{D_{i}^p,D^{p+1}}\left[\sum_{\alpha}c_{D^{p}_{i},D^{p+1}}^{\alpha\beta}\mathfrak{p}_{D^{p}_{i}}^{\alpha}+\sum_{\gamma}c_{D^{p}_{i},D^{p+1}}^{\gamma\beta}\mathfrak{N}_{D^{p}_{i}}^{\gamma}\right], \\ \label{eq:gCR-3} \mathfrak{N}_{D^{p+1}}^{\beta}&= \sum_{i}\sum_{\alpha}\delta_{D_{i}^p,D^{p+1}}c_{D^{p}_{i},D^{p+1}}^{\alpha\beta}\mathfrak{N}_{D^{p}_{i}}^{\alpha}, \end{align} where $\delta_{D_{i}^p,D^{p+1}}=0$ when $D^{p+1}$ is not adjacent to $D^{p}_{i}$ and $\delta_{D_{i}^p,D^{p+1}}=1\ (-1)$ if $D^{p+1}$ is adjacent to $D^{p}_{i}$ and the orientation of $D^{p}_i$ agrees (disagrees) with the orientation induced by the $(p+1)$-cell $D^{p+1}$. Note that, while all coefficients are non-negative in Eqs.~\eqref{eq:CR-1}-\eqref{eq:CR-4}, some coefficients in Eqs.~\eqref{eq:gCR-1}-\eqref{eq:gCR-3} can be negative.
By computing the above relations for all $(p+1)$-cells, one can construct a matrix in terms of band labels at $p$-cells. \begin{figure*}[t] \begin{center} \includegraphics[width=1.2\columnwidth]{d1.pdf} \caption{\label{fig:d1}Illustration of the physical process of $d_{1}^{p,0}$. (a) For a given set of the zero-dimensional topological invariants at $0$-cells, $d_{1}^{0,0}$ determines whether gapless points should exist on the adjacent $1$-cells. In the figure, we focus on two $0$-cells (denoted by $\bm{k}_1$ and $\bm{k}_2$) and the 1-cell connecting $\bm{k}_1$ to $\bm{k}_2$. (b) For gapless points on $1$-cells, $d_{1}^{1,0}$ tells us whether the gapless points should be extended to the adjacent $2$-cells. In the figure, we discuss two $2$-cells adjacent to the 1-cell $D^1$ and illustrate the case where the gapless points are extended. } \end{center} \end{figure*} Rewriting the matrix constructed by Eqs.~\eqref{eq:gCR-1} and \eqref{eq:gCR-3} in terms of the basis vectors of $E_{1}^{p,0}$ and $E_{1}^{p+1,0}$, we obtain a map from $E_{1}^{p,0}$ to $E_{1}^{p+1,0}$ \begin{align} d_{1}^{p,0}: E_{1}^{p,0} \rightarrow E_{1}^{p+1,0}, \end{align} which is called the \textit{first differential}~\cite{Shiozaki2018}. One can see that $d_{1}^{p,0}$ always satisfies $d_{1}^{p+1,0}\circ d_{1}^{p,0}=0$, that is, $d_{1}^{p+1,0}\left(d_{1}^{p,0}(\mathfrak{n}^{(p)})\right) = \bm{0}$. \add{Physically, a nontrivial $d_{1}^{p,0}$ connects states on $p$-cells to those on $(p+1)$-cells, as illustrated in Fig.~\ref{fig:d1}. Since $E_{1}^{p,0}$ contains only local information about $p$-cells, it does not capture global structures. Then, $d_{1}^{p,0}$ determines the relation between $p$-cells and $(p+1)$-cells. For $p=0$, the nontrivial $d_{1}^{0,0}$ tells us whether gapped states at $0$-cells can be connected without closing the gap on $1$-cells. In other words, if $d_{1}^{0,0}(\mathfrak{n}^{(0)})=\bm{0}$ holds, all zero-dimensional topological invariants at 0-cells satisfy all compatibility relations.
On the other hand, when $d_{1}^{0,0}(\mathfrak{n}^{(0)})\neq\bm{0}$, some compatibility relations are violated, which implies that gapless points exist on the 1-cells. As for $p \geq 1$, a nontrivial $d_{1}^{p,0}$ connects the gapless states on $p$-cells to those on $(p+1)$-cells. More concretely, a nontrivial $d_{1}^{1,0}$ checks whether the gapless point on a $1$-cell, an element of $E_{1}^{1,0}$, is extended to the adjacent $2$-cells, which results in gapless lines on the $2$-cells, an element of $E_{1}^{2,0}$. In the same way, $d_{1}^{2,0}$ examines whether a gapless line on a $2$-cell, an element of $E_{1}^{2,0}$, is linked to gapless surfaces on the $3$-cells. } In Secs.~\ref{sec5} A and B, we will explain in more detail the interpretation of $d_{1}^{p,0}$ and how to incorporate the first differentials into classifications of nodes. Let us discuss $d_{1}^{0,0}$ for the above 1D example. Using the bases in Eqs.~\eqref{eq:basis1}-\eqref{eq:basis5}, we rewrite the matrix in Eq.~\eqref{eq:CR_1D} as \begin{align} \label{eq:d1_ex} M_{d_{1}^{0,0}}&= \begin{array}{c|cccc} & \mathfrak{b}_{\Gamma,+}^{(0)} & \mathfrak{b}_{\Gamma,-}^{(0)} & \mathfrak{b}_{\text{X},+}^{(0)} & \mathfrak{b}_{\text{X},-}^{(0)}\\ \hline \mathfrak{b}_{a}^{(1)} & 1 & 1 & -1 & -1 \\ \end{array}, \end{align} which is actually a matrix representation of $d_{1}^{0,0}$. To see the physical meaning of $d_{1}^{0,0}$, let us discuss the band structures in Fig.~\ref{fig:1Dex} (c) and (d). We start with the band structure that corresponds to $\mathfrak{n}^{(0)} = \mathfrak{b}_{\Gamma,+}^{(0)} + \mathfrak{b}_{\text{X},+}^{(0)}$ in Eqs.~\eqref{eq:basis1}-\eqref{eq:basis5}, which implies that $d_{1}^{0,0}(\mathfrak{n}^{(0)})=0$. Thus, there are no gapless points on the 1-cells [Fig.~\ref{fig:1Dex} (c)]. On the other hand, let us suppose that a band inversion at $\Gamma$ occurs, resulting in $\mathfrak{n}'^{(0)} = \mathfrak{b}_{\text{X},+}^{(0)}$.
From Eq.~\eqref{eq:d1_ex}, we find $d_{1}^{0,0}(\mathfrak{n}'^{(0)}) = \mathfrak{b}_{a}^{(1)}$. As shown in Fig.~\ref{fig:1Dex} (d), gapless points must exist on the $1$-cells. This is what we have mentioned above. \section{Classification of gapless points on 1-cell} \label{sec4} In this section, we discuss the method to classify locally stable point nodes on $1$-cells. The Hamiltonian near a gapless point on a $1$-cell is described by \begin{align} \label{eq:cc-ham} H_{(k_1, k_2)} &= k_1 \gamma_1 + k_2\gamma_2 + \delta k_3 \gamma_0, \end{align} where $k_1$ and $k_2$ are momenta in the directions perpendicular to the $1$-cell, and $\delta k_3$ is a displacement from the gapless point in the direction of $D^1$. The gamma matrices $\gamma_{0}, \gamma_1$, and $\gamma_2$ anticommute with each other. Then, the classification of the gapless points on 1-cells of 3D systems is equivalent to that of the above Dirac Hamiltonian. Ref.~\onlinecite{Cornfeld-Chapman} has shown that one can redefine any point-group symmetry as an onsite symmetry while keeping the classifications of \add{massive Dirac Hamiltonians} unchanged, which we will refer to as Cornfeld-Chapman's method. Refs.~\onlinecite{Cornfeld-Chapman, Shiozaki-CC} have also classified 3D \add{massive Dirac Hamiltonians} in the presence of nonmagnetic and magnetic point group symmetries by using the method. \add{In the following, applying Cornfeld-Chapman's method~\cite{Cornfeld-Chapman} to classifications of 2D massive Dirac Hamiltonians on 1-cells, we will reveal that the results are classified into three cases: (i) The gapless point on the 1-cell is a genuine point node. (ii) The gapless point on the 1-cell is a shrunk loop or surface node. (iii) Neither stable point nodes nor such shrunk nodes exist on the 1-cell. } This will be integrated into the compatibility relations discussed in Sec.~\ref{sec5}.
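The point-node structure behind Eq.~\eqref{eq:cc-ham} can be checked directly. With the minimal $2\times 2$ choice $\gamma_1=\sigma_x$, $\gamma_2=\sigma_y$, $\gamma_0=\sigma_z$ (an illustrative assumption; the general gamma matrices may be larger), the spectrum is $\pm\sqrt{k_1^2+k_2^2+\delta k_3^2}$, so the gap closes only at the single point $(k_1,k_2,\delta k_3)=(0,0,0)$:

```python
import numpy as np

# Minimal anticommuting gamma matrices (Pauli matrices), an illustrative choice
g1 = np.array([[0, 1], [1, 0]], dtype=complex)      # gamma_1 = sigma_x
g2 = np.array([[0, -1j], [1j, 0]], dtype=complex)   # gamma_2 = sigma_y
g0 = np.array([[1, 0], [0, -1]], dtype=complex)     # gamma_0 = sigma_z

def gap(k1, k2, dk3):
    """Energy gap of H = k1*g1 + k2*g2 + dk3*g0, cf. Eq. (cc-ham)."""
    H = k1 * g1 + k2 * g2 + dk3 * g0
    ev = np.linalg.eigvalsh(H)  # ascending eigenvalues
    return ev[-1] - ev[0]

print(round(gap(0.0, 0.0, 0.0), 6))  # 0.0: the point node itself
print(round(gap(0.3, 0.4, 0.0), 6))  # 1.0 = 2*sqrt(0.3^2 + 0.4^2)
```

Because the gap grows linearly in every direction away from the node, a nonzero $\delta k_3$ acts as a mass term, which is why the classification problem reduces to that of a 2D massive Dirac Hamiltonian.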
\subsection{Cornfeld-Chapman's method for 2D systems} \label{sec4:CC} Suppose that there exists a gapless point on a $1$-cell (denoted by $D^1$). Let us discuss the massive Dirac Hamiltonian in Eq.~\eqref{eq:cc-ham} near $D^1$. To apply Cornfeld-Chapman's method to the massive Dirac Hamiltonian, we consider the little co-group in the following discussion; the Hamiltonian is then symmetric under $G_{D^1}/T$, i.e., $H_{(k_1, k_2)}$ satisfies \begin{align} \sigma_{\bm{k}}(g)H_{(k_1, k_2)}&= \begin{cases} H_{r_g(k_1, k_2)}\sigma_{\bm{k}}(g) \quad \text{for }c_g = +1,\\ -H_{r_g(k_1, k_2)}\sigma_{\bm{k}}(g) \quad \text{for }c_g = -1, \end{cases} \end{align} where $r_g$ is an element of $\text{O}(2)$. Generally, $r_g$ can be written as \begin{align} r_g &= \begin{cases} \begin{pmatrix} \cos \theta_g & -\sin\theta_g\\ \sin\theta_g & \cos \theta_g \\ \end{pmatrix} \quad \text{for } \det r_g =+1,\\ \begin{pmatrix} -\cos \theta_g & -\sin\theta_g\\ -\sin\theta_g & \cos \theta_g \\ \end{pmatrix}\quad \text{for }\det r_g = -1. \end{cases} \end{align} For simplicity, we hereafter write $s_g = \det r_g$. In the following, we will make all elements of $G_{D^1}/T$ onsite. First, we introduce onsite symmetries and define their representations by \begin{align} \label{eq:onsite-symm} \widetilde{\sigma}(g) \equiv \gamma_1^{\frac{1-s_g}{2}}e^{\frac{\theta_g}{2}\gamma_1\gamma_2}\sigma_{\bm{k}}(g) \quad\text{for }\forall g\in G_{D^1}/T. \end{align} By performing explicit calculations, one can verify \begin{align} \widetilde{\sigma}(g)H_{(k_1, k_2)} &= s_g c_g H_{(k_1, k_2)} \widetilde{\sigma}(g),\\ \widetilde{\sigma}(g)\widetilde{\sigma}(h) &= (s_gc_g)^{\frac{1-s_h}{2}}z'_{g,h}z^{\bm{k}}_{g,h}\widetilde{\sigma}(gh), \end{align} where $z'_{g,h}$ is determined by \begin{align} \gamma_1^{\frac{1-s_h}{2}}e^{\frac{\theta_h}{2}\gamma_1\gamma_2}\gamma_1^{\frac{1-s_g}{2}}e^{\frac{\theta_g}{2}\gamma_1\gamma_2} &= z'_{g,h}\gamma_1^{\frac{1-s_{gh}}{2}}e^{\frac{\theta_{gh}}{2}\gamma_1\gamma_2}.
\end{align} Note that, when $\sigma_{\bm{k}}(g)$ with $s_g=-1$ commutes (anticommutes) with $H_{(k_1, k_2)}$, $\widetilde{\sigma}(g)$ anticommutes (commutes) with $H_{(k_1, k_2)}$. In other words, unitary (chiral-like) symmetries with $s_g =-1$ become onsite chiral (unitary) symmetries. The same thing happens to antiunitary symmetries. As a result, we have another decomposition of the symmetry group $G_{D^1}/T=\widetilde{\mathcal{G}} + \widetilde{\mathcal{A}} + \widetilde{\mathcal{P}} + \widetilde{\mathcal{J}}$, where each subset is defined by \begin{align} \widetilde{\mathcal{G}} &= \{g \in G_{D^1}/T | s_gc_g=1, \phi_g=1\}, \\ \widetilde{\mathcal{A}} &= \{g \in G_{D^1}/T | s_gc_g=1,\phi_g=-1\}, \\ \widetilde{\mathcal{P}} &= \{g \in G_{D^1}/T | s_gc_g=-1,\phi_g=-1\}, \\ \widetilde{\mathcal{J}} &= \{g \in G_{D^1}/T | s_gc_g=-1, \phi_g=1\}. \end{align} \add{It is well known that 2D Dirac Hamiltonians in the presence of onsite symmetries are classified by the second homotopy group of the classifying space~\cite{Teo-Kane2010}. Then, our next task is to identify the classifying space. Similar to Eqs.~\eqref{eq:block-diag} and \eqref{eq:block-diag2}, we can block-diagonalize $\widetilde{\sigma}(g)\ (g\in \widetilde{\mathcal{G}} )$ and $H_{(k_1, k_2)}$ such that \begin{align} \label{eq:CCblock-diag} &\widetilde{\sigma}(g) =\text{diag}\left[\tilde{u}^{\widetilde{\alpha}_1}(g)\otimes\mathds{1}_{m_1}, \cdots, \tilde{u}^{\widetilde{\alpha}_n}(g)\otimes\mathds{1}_{m_n}\right],\\ \label{eq:CCCblock-diag2} &H_{(k_1, k_2)}=\text{diag}\left[\mathds{1}_{d^{\widetilde{\alpha}_1}}\otimes h^{\widetilde{\alpha}_1}, \cdots, \mathds{1}_{d^{\widetilde{\alpha}_n}}\otimes h^{\widetilde{\alpha}_n}\right], \end{align} where $\tilde{u}^{\widetilde{\alpha}}(g)$ is an irreducible representation of $\widetilde{\mathcal{G}}$. Here, $d^{\widetilde{\alpha}}$ and $m_{\widetilde{\alpha}}$ are the dimensions of $\tilde{u}^{\widetilde{\alpha}}(g)$ and $h^{\widetilde{\alpha}}$, respectively.
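The rotation factor $e^{\frac{\theta_g}{2}\gamma_1\gamma_2}$ in Eq.~\eqref{eq:onsite-symm} works because conjugation by it undoes the momentum rotation in the $(k_1,k_2)$ plane. This can be verified numerically with a minimal $2\times 2$ choice of gamma matrices (an illustrative assumption), using the closed form $e^{\frac{\theta}{2}\gamma_1\gamma_2}=\cos\frac{\theta}{2}\,\mathds{1}+\sin\frac{\theta}{2}\,\gamma_1\gamma_2$, valid since $(\gamma_1\gamma_2)^2=-\mathds{1}$:

```python
import numpy as np

# Minimal 2x2 realization (an assumption for illustration)
g1 = np.array([[0, 1], [1, 0]], dtype=complex)      # gamma_1
g2 = np.array([[0, -1j], [1j, 0]], dtype=complex)   # gamma_2
g12 = g1 @ g2                                       # (g1 g2)^2 = -1

def H(k):
    """Kinetic part of the Dirac Hamiltonian, H = k1*g1 + k2*g2."""
    return k[0] * g1 + k[1] * g2

def R(theta):
    """exp((theta/2) * gamma_1 gamma_2) in closed form."""
    return np.cos(theta / 2) * np.eye(2) + np.sin(theta / 2) * g12

theta, k = 0.73, np.array([0.4, -1.1])
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta), np.cos(theta)]])

# Conjugation undoes the rotation of momentum: R H(r_theta k) R^{-1} = H(k)
lhs = R(theta) @ H(rot @ k) @ np.linalg.inv(R(theta))
print(np.allclose(lhs, H(k)))  # True
```

This is exactly what makes $\widetilde{\sigma}(g)$ a momentum-independent (onsite) operator: the explicit $r_g$ dependence of the symmetry action is absorbed into the gamma-matrix rotation, at the cost of the modified factor system $z'_{g,h}z^{\bm{k}}_{g,h}$.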
For each sector $h^{\widetilde{\alpha}}$, we again use the Wigner criteria by replacing $z_{g,h}^{\bm{k}}$ in Eqs.~\eqref{eq:wigner_C}-\eqref{eq:wigner_G} with $(s_gc_g)^{\frac{1-s_h}{2}}z'_{g,h}z^{\bm{k}}_{g,h}$, i.e., \begin{align} \label{eq:CCwigner_C} &\widetilde{W}^{\widetilde{\alpha}}(\widetilde{\mathcal{P}}) =\frac{1}{\vert \widetilde{\mathcal{P}} \vert}\sum_{c \in \widetilde{\mathcal{P}}}(s_cc_c)^{\frac{1-s_c}{2}}z'_{c,c}z_{c,c}^{\bm{k}}\widetilde{\chi}^{\widetilde{\alpha}}(c^2) \in \{0, \pm 1\},\\ &\widetilde{W}^{\widetilde{\alpha}}(\widetilde{\mathcal{A}}) =\frac{1}{\vert \widetilde{\mathcal{A}} \vert}\sum_{a \in \widetilde{\mathcal{A}}}(s_ac_a)^{\frac{1-s_a}{2}}z'_{a,a}z_{a,a}^{\bm{k}}\widetilde{\chi}^{\widetilde{\alpha}}(a^2) \in \{0, \pm 1\},\\ \label{eq:CCwigner_G} &\widetilde{W}^{\widetilde{\alpha}}(\widetilde{\mathcal{J}})= \frac{1}{\vert \widetilde{\mathcal{G}} \vert} \sum_{g \in \widetilde{\mathcal{G}}} \frac{(s_\gamma c_\gamma)^{\frac{1-s_{\gamma^{-1}g\gamma}}{2}}z'_{\gamma,\gamma^{-1}g\gamma}z^{\bm{k}}_{\gamma, \gamma^{-1}g\gamma}}{(s_gc_g)^{\frac{1-s_\gamma}{2}}z'_{g,\gamma}z^{\bm{k}}_{g, \gamma}} \nonumber \\ &\quad\quad\quad\quad\quad\quad\quad\quad \times[\widetilde{\chi}^{\widetilde{\alpha}}(\gamma^{-1} g \gamma)]^{*}\widetilde{\chi}^{\widetilde{\alpha}}(g)\in\{0,1\}, \end{align} where $\widetilde{\chi}^{\widetilde{\alpha}}(g) = \mathrm{tr}[\widetilde{u}^{\widetilde{\alpha}}(g)]$ and $\gamma$ is an element of $\widetilde{\mathcal{J}}$. Correspondences between the results of the Wigner criteria and the classifying spaces C$_s$ and R$_s$ are summarized in Table~\ref{tab:CC-EAZ}. As a result, we classify the Dirac Hamiltonian in Eq.~\eqref{eq:cc-ham} by $\pi_{2}(\text{C}_s)$ or $\pi_{2}(\text{R}_s)$ for each irreducible representation $\tilde{u}^{\widetilde{\alpha}}$~\cite{Teo-Kane2010}. } \begin{table} \begin{center} \caption{\label{tab:CC-EAZ}Classification of EAZ symmetry classes.
The subscripts $\mathcal{T},\mathcal{C}$, and $\Gamma$ signify that irreducible representations are related by the onsite antiunitary and chiral symmetries.} \begin{tabular}{c|c|c|c} \hline EAZ & $\widetilde{\mathcal{W}}[\widetilde{\alpha}]$ & classifying space & $\pi_2$\\ \hline\hline A, A$_\mathcal{T}$, A$_\mathcal{C}$, A$_\Gamma$, A$_{\mathcal{T},\mathcal{C}}$ & $(0,0,0)$ & C$_0$ & $\mathbb{Z}$ \\ AIII, AIII$_\mathcal{T}$ & $(0,0,1)$ & C$_1$ & $0$ \\ \hline AI, AI$_\mathcal{C}$ & $(1,0,0)$ & R$_0$ & $\mathbb{Z}_2$ \\ BDI & $(1,1,1)$& R$_1$ & $0$ \\ D, D$_\mathcal{T}$ & $(0,1,0)$ & R$_2$ & $2\mathbb{Z}$ \\ DIII & $(-1,1,1)$& R$_3$ & $0$\\ AII, AII$_\mathcal{C}$ & $(-1,0,0)$ & R$_4$ & $0$ \\ CII & $(-1,-1,1)$& R$_5$ & $0$ \\ C, C$_\mathcal{T}$ & $(0,-1,0)$ & R$_6$ & $\mathbb{Z}$ \\ CI & $(1,-1,1)$& R$_7$ & $\mathbb{Z}_2$\\ \hline \end{tabular} \end{center} \end{table} \subsection{Character decomposition formulas} \label{sec4:cc-formulas} As explained in the previous subsection, we can classify the two-dimensional Dirac Hamiltonians in Eq.~\eqref{eq:cc-ham} on 1-cells. The next step is to map the generating two-dimensional Dirac Hamiltonians to elements of $E_{1}^{1,0}$. This can be achieved by the orthogonality of irreducible representations. In this subsection, we will derive formulas to obtain the elements of $E_{1}^{1,0}$ corresponding to the generating Dirac Hamiltonians. The formulas are summarized in Table~\ref{tab:formulas}. Let us suppose that we have one of the generating Dirac Hamiltonians on a 1-cell and the onsite symmetries in Eq.~\eqref{eq:onsite-symm}. Then, we can construct the symmetries of $G_{D^1}/T$ by \begin{align} \label{eq:rep2} \sigma_{\bm{k}}(g)&=e^{-\frac{\theta_g}{2}\gamma_1\gamma_2}\gamma_1^{\frac{1-s_g}{2}}\widetilde{\sigma}(g). \end{align} What we have to do is to extract the irreducible representations contained in the representation $\sigma_{\bm{k}}(g)$ in Eq.~\eqref{eq:rep2}, which yield the band labels on the 1-cell.
Using the orthogonality of irreducible representations, we obtain $\mathfrak{N}_{D^1}^{\beta}$ and $\mathfrak{p}_{D^1}^{\beta}$ by \begin{align} \label{eq:formula_Z} \mathfrak{N}_{D^1}^{\beta}&=\frac{1}{\vert\mathcal{G}_{\bm{k}}/T\vert}\sum_{g\in \mathcal{G}_{\bm{k}}/T}[\chi^{\beta}_{\bm{k}}(g)]^{*}\mathrm{tr}[\gamma_0 e^{-\frac{\theta_g}{2}\gamma_1\gamma_2}\gamma_1^{\frac{1-s_g}{2}}\widetilde{\sigma}(g)],\\ \label{eq:formula_Z2} \mathfrak{p}_{D^1}^{\beta}&=\frac{1/2}{\vert\mathcal{G}_{\bm{k}}/T\vert}\sum_{g\in \mathcal{G}_{\bm{k}}/T}[\chi^{\beta}_{\bm{k}}(g)]^{*}\mathrm{tr}[e^{-\frac{\theta_g}{2}\gamma_1\gamma_2}\gamma_1^{\frac{1-s_g}{2}}\widetilde{\sigma}(g)]\mod 2, \end{align} where the factor $\gamma_0$ in Eq.~\eqref{eq:formula_Z} makes occupied and unoccupied bands contribute to the band labels with opposite signs. After performing the same procedure for all irreducible representations of $\mathcal{G}_{\bm{k}}$, we get the element of $E_{1}^{1,0}$ corresponding to one of the generating Dirac Hamiltonians. In fact, for each EAZ class we can derive the formulas by fixing the form of the generating Hamiltonians and representations, as summarized in Table~\ref{tab:formulas}. Here, we show the formulas for class A$_\mathcal{C}$ as an example.
The generating Hamiltonian and representations can be represented by \begin{align} \label{eq:gene_ham_AC} H_{(k_1, k_2)} &= k_1 \tau_1 + k_2\tau_2 + \delta k_3 \tau_3,\\ \widetilde{\sigma}(\mathcal{C}) &= \begin{cases} i \tau_2 \sigma_1K\quad \text{for } [\widetilde{\sigma}(\mathcal{C})]^2 = -1,\\ \tau_2 \sigma_2 K \quad \text{for } [\widetilde{\sigma}(\mathcal{C})]^2 = +1, \end{cases} \\ \label{eq:gene_rep_AC} \widetilde{\sigma}(g) &= \tau_0 \begin{pmatrix} \widetilde{u}^{\widetilde{\alpha}}(g) & \\ & \widetilde{u}^{\widetilde{\mathcal{C}}\widetilde{\alpha}}(g) \end{pmatrix}\quad \text{for }\forall\widetilde{g} \in \widetilde{\mathcal{G}}, \end{align} where $\widetilde{u}^{\widetilde{\mathcal{C}}\widetilde{\alpha}}$ denotes the particle-hole-related irreducible representation of $\tilde{u}^{\widetilde{\alpha}}$. In addition, $\widetilde{\mathcal{C}}$ is the generator of $\widetilde{\mathcal{P}}$, and $\sigma_{\mu}$ and $\tau_\mu (\mu=0,1,2,3)$ are Pauli matrices representing different degrees of freedom. By substituting Eqs.~\eqref{eq:gene_ham_AC} and \eqref{eq:gene_rep_AC} into Eqs.~\eqref{eq:formula_Z} and \eqref{eq:formula_Z2}, we get \begin{align} \label{eq:AC-Z} \mathfrak{N}_{D^1}^{\beta}&=\frac{-2i}{\vert G_{D^1} \vert}\sum_{g \in \mathcal{G}_{D^1}/T}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\nonumber\\ &\quad\quad\quad\quad\quad\quad\quad\quad\times\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\mathcal{C}}\widetilde{\alpha}}(g)\right),\\ \label{eq:AC-Z2} \mathfrak{p}_{D^1}^{\beta}&=\frac{1}{\vert G_{D^1} \vert}\sum_{g \in \mathcal{G}_{D^1}/T}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\nonumber\\ &\quad\quad\quad\quad\quad\quad\quad\times\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\mathcal{C}}\widetilde{\alpha}}(g)\right)\ \mathrm{mod}\ 2, \end{align} where $\widetilde{\chi}^{\widetilde{\alpha}}(g) = \mathrm{tr}[\widetilde{u}^{\widetilde{\alpha}}(g)]$. 
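The reduction from the trace formula of Eq.~\eqref{eq:formula_Z} to the character form of Eq.~\eqref{eq:AC-Z} can be checked numerically. A minimal sketch, assuming the concrete Clifford representation $\gamma_0=\tau_3$, $\gamma_1=\tau_1$, $\gamma_2=\tau_2$ and one-dimensional onsite irreducible representations (the phases below are arbitrary placeholders, not characters of any specific group):

```python
import numpy as np

t0 = np.eye(2, dtype=complex)
t1 = np.array([[0, 1], [1, 0]], complex)
t2 = np.array([[0, -1j], [1j, 0]], complex)
t3 = np.diag([1.0, -1.0]).astype(complex)

def trace_formula(theta, u_a, u_ca):
    """tr[gamma_0 e^{-(theta/2) gamma_1 gamma_2} sigma~(g)] for a unitary g
    (s_g = +1), with sigma~(g) = tau_0 ⊗ diag(u^a(g), u^{Ca}(g)) as in
    Eq. (gene_rep_AC); here gamma_0 = tau_3, gamma_1 = tau_1, gamma_2 = tau_2."""
    A = t1 @ t2                                   # (gamma_1 gamma_2)^2 = -1
    rot = np.cos(theta / 2) * t0 - np.sin(theta / 2) * A
    return np.trace(np.kron(t3 @ rot, np.diag([u_a, u_ca])))

def character_formula(theta, u_a, u_ca):
    """The summand of Eq. (AC-Z): -2i sin(theta/2) (chi^a + chi^{Ca})."""
    return -2j * np.sin(theta / 2) * (u_a + u_ca)

for theta in (0.0, np.pi / 3, np.pi / 2, np.pi):
    u_a, u_ca = np.exp(0.4j), np.exp(-0.4j)       # placeholder 1D characters
    assert np.isclose(trace_formula(theta, u_a, u_ca),
                      character_formula(theta, u_a, u_ca))
```

The identity holds because $\mathrm{tr}[\tau_3 e^{-i\theta\tau_3/2}]=-2i\sin(\theta/2)$ factors out of the Kronecker product, leaving the sum of the two particle-hole-related characters.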
\add{Finally, we find that the results are classified into three cases: \begin{enumerate} \setlength{\itemsep}{-2pt} \item[(i)] One of the generating Dirac Hamiltonians is mapped to a generator of $E_{1}^{1,0}$. In this case, the gapless point on the 1-cell is a genuine point node. \item[(ii)] The obtained element of $E_{1}^{1,0}$ for a generating Dirac Hamiltonian does not coincide with any generator of $E_{1}^{1,0}$. In other words, the obtained element is composed of multiple basis vectors of $E_{1}^{1,0}$, which implies that multiple point nodes must be realized simultaneously. However, these gapless points need not be at the same momentum. In such a case, the gapless points are actually parts of shrunk loop or surface nodes. \item[(iii)] The classification of Dirac Hamiltonians is trivial, i.e., the second homotopy group discussed in Sec.~\ref{sec4:CC} is trivial, which implies that no point or shrinkable nodes exist. Thus, the gapless point is part of line or surface nodes. \end{enumerate} } \textcolor{black}{One might sometimes notice that the degeneracy of a point node differs from the dimension of the corresponding Dirac Hamiltonians for case (i). In such a case, trivial gapped states exist in the energy spectrum. The existence of Dirac Hamiltonians in Eq.~\eqref{eq:cc-ham} ensures that the point node is stable in the sense of K-theory, i.e., against adding trivial degrees of freedom. It is tempting to think that our results have missed stable nodes in the sense of fragile topological phases~\cite{PhysRevLett.121.126402}, i.e., line or surface nodes when no trivial degrees of freedom are added. However, when we consider quadratic and cubic terms, we can explicitly construct minimal-dimension Dirac Hamiltonians. This implies that such fragile nodes do not exist on 1-cells. See Appendix~\ref{app:remark} for details.
} \begin{table*}[t] \begin{center} \caption{\label{tab:formulas}Formulas to obtain elements of $E_{1}^{1,0}$ corresponding to generating Dirac Hamiltonians. Here, $\chi^{\alpha}$ and $\widetilde{\chi}^{\widetilde{\alpha}}$ are characters of the little co-group $\mathcal{G}_{D^1}/T$ and the onsite symmetry group $\widetilde{\mathcal{G}}$, respectively. The first column represents the EAZ classes of irreducible representations $\tilde{u}^{\widetilde{\alpha}}$. In addition, $\widetilde{\mathcal{T}}\widetilde{\alpha}$, $\widetilde{\mathcal{C}}\widetilde{\alpha}$, and $\widetilde{\Gamma}\widetilde{\alpha}$ are labels of the irreducible representations related by the time-reversal, the particle-hole, and the chiral symmetries, respectively. Derivations of these formulas are included in Appendix~\ref{app:formulas}.} \begin{tabular}{c|l|l} \hline EAZ & \quad\quad\quad\quad\quad\quad \quad Formula for the map to $\mathbb{Z}$ & \quad \quad\quad\quad\quad\quad\quad Formula for the map to $\mathbb{Z}_2$ \\ \hline\hline A & $\frac{-2i}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\widetilde{\chi}^{\widetilde{\alpha}}(g)$ & $\frac{1}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\widetilde{\chi}^{\widetilde{\alpha}}(g)$\\ \hline A$_\mathcal{T}$ & $\frac{-2i}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)-\widetilde{\chi}^{\widetilde{\mathcal{T}}\widetilde{\alpha}}(g)\right)$ & $\frac{1}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\mathcal{T}}\widetilde{\alpha}}(g)\right)$ \\ \hline A$_\mathcal{C}$ & $\frac{-2i}{\vert G_{D^1} \vert}\sum_{g \in
G_{D^1}}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\mathcal{C}}\widetilde{\alpha}}(g)\right)$ & $\frac{1}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\mathcal{C}}\widetilde{\alpha}}(g)\right)$ \\ \hline A$_\Gamma$ & $\frac{-2i}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)-\widetilde{\chi}^{\widetilde{\Gamma}\widetilde{\alpha}}(g)\right)$ & $\frac{1}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\Gamma}\widetilde{\alpha}}(g)\right)$ \\ \hline \multirow{2}{*}{A$_{\mathcal{T},\mathcal{C}}$} & $\frac{-2i}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*$ & $\frac{1}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*$\\ &\quad\quad\quad\quad\quad$\times\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)-\widetilde{\chi}^{\widetilde{\mathcal{T}}\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\mathcal{C}}\widetilde{\alpha}}(g)-\widetilde{\chi}^{\widetilde{\Gamma}\widetilde{\alpha}}(g)\right)$ & \quad\quad\quad\quad\quad$\times\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\mathcal{T}}\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\mathcal{C}}\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\Gamma}\widetilde{\alpha}}(g)\right)$ \\ \hline C & $\frac{-2i}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\widetilde{\chi}^{\widetilde{\alpha}}(g)$ & $\frac{1}{\vert G_{D^1} \vert}\sum_{g \in 
G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\widetilde{\chi}^{\widetilde{\alpha}}(g)$\\ \hline C$_\mathcal{T}$ & $\frac{-2i}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)-\widetilde{\chi}^{\widetilde{\mathcal{T}}\widetilde{\alpha}}(g)\right)$ & $\frac{1}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\mathcal{T}}\widetilde{\alpha}}(g)\right)$ \\ \hline D & $\frac{-4i}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\widetilde{\chi}^{\widetilde{\alpha}}(g)$ & $\frac{2}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\widetilde{\chi}^{\widetilde{\alpha}}(g)$\\ \hline D$_\mathcal{T}$ & $\frac{4i}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)-\widetilde{\chi}^{\widetilde{\mathcal{T}}\widetilde{\alpha}}(g)\right)$ & $\frac{2}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\mathcal{T}}\widetilde{\alpha}}(g)\right)$ \\ \hline AI & \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$-$ & $\frac{2}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\widetilde{\chi}^{\widetilde{\alpha}}(g)$\\ \hline AI$_\mathcal{C}$ & \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$-$ & $\frac{2}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)+\widetilde{\chi}^{\widetilde{\mathcal{C}}\widetilde{\alpha}}(g)\right)$\\ \hline CI & 
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad$-$ & $\frac{2}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\cos\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\widetilde{\chi}^{\widetilde{\alpha}}(g)$\\ \hline \end{tabular} \end{center} \end{table*} \subsection{Example} It is instructive to discuss concrete symmetry settings. Here we consider four examples in the presence of PHS $\mathcal{C}$: MSGs $P2/m1'$ with $B_g$ pairing, $P21'$ with $B$ pairing, $P4$ with $^1E$ pairing, and $Pmc2_11'$ with $A_2$ pairing. After classifying the Dirac Hamiltonians in Eq.~\eqref{eq:cc-ham} as discussed in Sec.~\ref{sec4:CC}, we obtain elements of $E^{1,0}_{1}$ corresponding to generating Dirac Hamiltonians by using the formulas in Sec.~\ref{sec4:cc-formulas}. The results in this subsection will be used in Sec.~\ref{sec5:example}, where the physical consequences will also be discussed. \subsubsection{$P2/m1'$ with $B_g$ pairing} \label{sec4:p2m} We first discuss spinful MSG $P2/m1'$, and recall that this MSG has the two-fold rotation $C_{2}^{y}$ along the $y$-axis, the inversion $I$, and the TRS $\mathcal{T}$. For $B_g$ pairing, $\{\sigma(\mathcal{C}), \sigma(C_{2}^{y})\} = 0$ and $[\sigma(\mathcal{C}), \sigma(I)]=0$ hold. Let us consider a two-fold rotation symmetric line as the 1-cell $D^1$ [see Fig.~\ref{fig:cell_p2m} (a)]. The little co-group is given by the following subsets: \begin{align} \mathcal{G}_{D^1}/T &=\{e, C_{2}^{y}\}, \\ \mathcal{A}_{D^1}/T &=\{I\mathcal{T}, (IC_{2}^{y})\mathcal{T}\}, \\ \mathcal{P}_{D^1}/T &=\{I\mathcal{C}, (IC_{2}^{y})\mathcal{C} \}, \\ \mathcal{J}_{D^1}/T &=\{\Gamma\equiv\mathcal{T}\mathcal{C}, C_{2}^{y}\Gamma\}, \end{align} where $e$ denotes the identity element.
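The (anti)commutation relations quoted above can be verified with an explicit Bogoliubov--de Gennes representation. A minimal sketch, assuming the spin-1/2 matrices $u(C_{2}^{y})=-is_y$, $u(I)=s_0$, $\mathcal{T}=-is_yK$, $\sigma(\mathcal{C})=\tau_xK$, and the $B_g$ gap parities $\chi_{C_2}=-1$, $\chi_I=+1$; these explicit matrices are illustrative conventions, not fixed by the text:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
tx = np.array([[0, 1], [1, 0]], complex)        # Nambu tau_x

def bdg(u, chi):
    """BdG representation diag(u, chi * u^*) of a unitary crystal symmetry,
    where chi = ±1 is the parity of the gap function under that symmetry."""
    z = np.zeros((2, 2), complex)
    return np.block([[u, z], [z, chi * u.conj()]])

UC = np.kron(tx, s0)                 # unitary part of sigma(C) = UC K
sC2 = bdg(-1j * sy, -1)              # B_g: the gap is odd under C2^y
sI = bdg(s0, +1)                     # B_g: the gap is even under inversion
UIT = np.kron(np.eye(2), -1j * sy)   # unitary part of sigma(I T) = UIT K

# For an antiunitary A = U K and a unitary V: AV = ±VA  <=>  U V^* = ±V U
assert np.allclose(UC @ sC2.conj(), -sC2 @ UC)    # {sigma(C), sigma(C2^y)} = 0
assert np.allclose(UC @ sI.conj(), sI @ UC)       # [sigma(C), sigma(I)] = 0
assert np.allclose(UIT @ UIT.conj(), -np.eye(4))  # [sigma(I T)]^2 = -1
```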
To perform the procedures in Sec.~\ref{sec4:CC}, we define generators of the onsite symmetry group by \begin{align} \label{eq:tC2} \widetilde{\sigma}(C_{2}^{y}) &\equiv \gamma_1\gamma_2 \sigma(C_{2}^{y}),\\ \label{eq:tIT} \widetilde{\sigma}(I\mathcal{T}) &\equiv \sigma(I\mathcal{T}),\\ \label{eq:tIC} \widetilde{\sigma}(I\mathcal{C}) &\equiv \sigma(I\mathcal{C}). \end{align} One can verify that $s_g=+1$ for all elements in $\mathcal{G}_{D^1}/T$, and then the onsite unitary symmetry group is $\widetilde{\mathcal{G}}=\{e,C_{2}^{y} \}$. Since $[\widetilde{\sigma}(C_{2}^{y})]^2=-[\sigma(C_{2}^{y})]^2 =+1$, there are two one-dimensional irreducible representations $\widetilde{u}^{\widetilde{\alpha}}(C_{2}^{y}) = \alpha\ (\alpha=\pm 1)$. The representations in Eqs.~\eqref{eq:tC2}-\eqref{eq:tIC} possess the same commutation and anticommutation relations as $\sigma(C_{2}^{y})$, $\sigma(I\mathcal{T})$, and $\sigma(I\mathcal{C})$, i.e., \begin{align} \{\widetilde{\sigma}(C_{2}^{y}),\widetilde{\sigma}(I\mathcal{C})\} &= 0,\\ [\widetilde{\sigma}(C_{2}^{y}), \widetilde{\sigma}(I\mathcal{T})] &= 0,\\ [\widetilde{\sigma}(I\mathcal{T})]^2&=-1. \end{align} As a result, we find that the EAZ classes for $\alpha=\pm 1$ are class AII$_\mathcal{C}$, whose classification is $\pi_2(\text{R}_4) = 0$. \add{This result corresponds to case (iii), and therefore no point node is stable on this line. We will see that the gapless point is part of line nodes in Sec.~\ref{sec5}.} \begin{figure} \begin{center} \includegraphics[width=0.6\columnwidth]{fig_cell_v2.pdf} \caption{\label{fig:cell_p2m}Illustrations of cell decomposition for the half BZ in $P2/m1'$ (a) and the quarter BZ in $Pmc2_11'$ (b). Here we omit orientations except for the 1-cells denoted by $D^1$. In both (a) and (b), 2-cells adjacent to the 1-cell $D^1$ are colored red.
In (b), blue and yellow planes represent the mirror and glide planes of MSG $Pmc2_11'$, respectively.} \end{center} \end{figure} \subsubsection{$P21'$ with $B$ pairing} \label{sec4:p2} We next consider MSG $P21'$, which is generated by the two-fold rotation $C_{2}^{y}$ along the $y$-axis and the TRS $\mathcal{T}$. For $B$ pairing, PHS anticommutes with the two-fold rotation, i.e., $\{\sigma(\mathcal{C}), \sigma(C_{2}^{y})\} = 0$. Again, let us consider a two-fold rotation symmetric line as the 1-cell $D^1$ in Fig.~\ref{fig:cell_p2m} (a). Unlike the case of MSG $P2/m1'$, there exist only the following unitary and chiral parts in the little co-group: \begin{align} \mathcal{G}_{D^1}/T &=\{e, C_{2}^{y}\}, \\ \mathcal{J}_{D^1}/T &=\{\Gamma, C_{2}^{y}\Gamma\}. \end{align} To perform the procedures in Sec.~\ref{sec4:CC}, we define generators of onsite symmetries by Eq.~\eqref{eq:tC2} and $\widetilde{\sigma}(\Gamma) \equiv \sigma(\Gamma)$, and we find \begin{align} \label{eq:tG} \{\widetilde{\sigma}(\Gamma), \widetilde{\sigma}(C_{2}^{y})\} = 0. \end{align} Since $[\widetilde{\sigma}(C_{2}^{y})]^2=+1$, we have two one-dimensional irreducible representations $\widetilde{u}^{\widetilde{\alpha}}(C_{2}^{y}) = \alpha\ (\alpha=\pm 1)$ whose EAZ classes are class A$_\Gamma$. Therefore, the Dirac Hamiltonians on the 1-cell are classified into $\pi_2(\text{C}_0)=\mathbb{Z}$. The final step is to map the generating Dirac Hamiltonian of $\mathbb{Z}$ to an element of $E_{1}^{1,0}$.
This can be accomplished by \begin{align} \label{eq:A_G} \mathfrak{N}_{D^1}^{\beta}&=\frac{-2i}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\nonumber\\ &\quad\quad\quad\quad\quad\quad\quad\quad \times\left(\widetilde{\chi}^{\widetilde{\alpha}}(g)-\widetilde{\chi}^{\widetilde{\Gamma}\widetilde{\alpha}}(g)\right), \end{align} where $\widetilde{\chi}^{\widetilde{\Gamma}\widetilde{\alpha}}$ is the character of the irreducible representation chiral-symmetry related to $\widetilde{\chi}^{\widetilde{\alpha}}$. By substituting the irreducible representations in Table~\ref{tab:irreps_P2} into Eq.~\eqref{eq:A_G}, we obtain the band labels of the generating Dirac Hamiltonian \begin{align} \label{eq:gene_P2} (\mathfrak{N}^{1},\mathfrak{N}^{2}) &= (2, -2), \end{align} which corresponds to twice a basis vector of $E_{1}^{1,0}$. \add{This result corresponds to case (ii), which indicates that the gapless point is realized by a loop or surface node shrinking to the point. To see this, we consider a concrete Dirac Hamiltonian near the gapless point \begin{align} \label{eq:gene_ham_AG} H_{(k_1, k_2)} &= k_1 \tau_1 + k_2 \tau_2\sigma_3 + \delta k_3\tau_3,\\ \label{eq:gene_rep_AG} \widetilde{\sigma}(C_2) &= \sigma_3,\\ \sigma(C_2) &= e^{-i\tfrac{\pi}{2}\tau_3\sigma_3}\widetilde{\sigma}(C_2) =-i \tau_3,\\ \sigma(\Gamma) &=\widetilde{\sigma}(\Gamma) = \tau_2\sigma_2, \end{align} where we consider $\tilde{\alpha} =1$. Then, we add a symmetric perturbation \begin{align} M=m_0 \sigma_1 + m_1 \sigma_3 + m_2\tau_3 + m_3\tau_3\sigma_2 \end{align} to the Dirac Hamiltonian in Eq.~\eqref{eq:gene_ham_AG}. As a result, we obtain a loop node shown in Fig.~\ref{fig:CC_example} (a). } \begin{figure}[t] \begin{center} \includegraphics[width=0.9\columnwidth]{CC_example.pdf} \caption{\label{fig:CC_example}Illustration of the annihilation process of gapless points.
Here white solid circles denote gapless points and $\pm$ represent the sign of the winding numbers.} \end{center} \end{figure} \begin{table}[t] \begin{center} \caption{\label{tab:irreps_P2}Irreducible representations of the onsite symmetry group $\widetilde{\mathcal{G}}$ and $\mathcal{G}_{D^1}/T$ for MSG $P2/m1'$ and $P21'$.} \begin{tabular}{c|c|c|c|c|c} \hline & EAZ of $P2/m1' (B_g)$ & EAZ of $P21' (B)$ & irrep $\widetilde{\alpha}$ & $e$ & $C_{2}^{y}$ \\ \hline \multirow{2}{*}{$\tilde{\mathcal{G}}$}& AII$_C$& A$_\Gamma$ & $1$ & $1$ & $1$\\ & AII$_C$ & A$_\Gamma$ & $2$ & $1$ & $-1$\\ \hline\hline & EAZ of $P2/m1' (B_g)$ & EAZ of $P21' (B)$ & irrep $\beta$ & $e$ & $C_{2}^{y}$ \\ \hline \multirow{2}{*}{$\mathcal{G}_{D^1}/T$} & D & A & $1$ & $1$ & $i$ \\ & D & A & $2$ & $1$ & $-i$ \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{$P4$ with $^1E$ pairing} \label{sec4:p4} Next, we discuss the four-fold rotation symmetric line in spinful MSG $P4$, which is the same 1-cell $D^1$ in Fig.~\ref{fig:cell_p2m} (a) with the axes exchanged. Since this MSG does not have TRS, the little co-group $G_{D^1}/T$ has only a unitary part $\mathcal{G}_{D^1}/T = \{e, C_{4}^z, (C_{4}^z)^2, (C_{4}^z)^3\}$. Then, the onsite symmetry group also has a unitary part generated by \begin{align} \label{eq:tC4} \widetilde{\sigma}(C_{4}^{z}) &\equiv e^{\frac{\pi}{4}\gamma_1\gamma_2}\sigma(C_{4}^{z}), \end{align} where $[\widetilde{\sigma}(C_{4}^{z})]^4 = +1$. There are four irreducible representations of $\widetilde{\mathcal{G}}$ in Table~\ref{tab:P4}, and therefore gapless points on the line are classified into $\mathbb{Z}^4$. 
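The class-A formula of Table~\ref{tab:formulas} can be evaluated numerically for this $C_4$-symmetric line. A minimal sketch, assuming the rotation angles $\theta_{C_4^k}=\pi k/2$ and the characters listed in Table~\ref{tab:P4}:

```python
import numpy as np

k = np.arange(4)                     # group elements C4^k; all unitary, s_g = +1
theta = np.pi * k / 2                # rotation angles theta_g of C4^k
# characters of the four onsite irreps (Table tab:P4, top block)
chi_onsite = np.array([np.ones(4), 1j**k, (-1j)**k, (-1.0)**k + 0j])
# characters of the four spinful little co-group irreps (Table tab:P4, bottom block)
chi_line = np.exp(1j * np.outer(np.array([1, 3, -3, -1]) * np.pi / 4, k))
# class-A formula of Table tab:formulas: N^beta for each onsite irrep alpha
N = (-2j / 4) * np.einsum('k,bk,ak->ab', np.sin(theta / 2),
                          chi_line.conj(), chi_onsite)
print(np.round(N.real).astype(int))  # one row of band labels per alpha
```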
We can map the generating Dirac Hamiltonians to elements of $E_{1}^{1,0}$ by the following formula \begin{align} \label{eq:A} \mathfrak{N}_{D^1}^{\beta}&=\frac{-2i}{\vert G_{D^1} \vert}\sum_{g \in G_{D^1}}\delta_{s_g,1}\sin\frac{\theta_g}{2}[\chi_{D^1}^{\beta}(g)]^*\widetilde{\chi}^{\widetilde{\alpha}}(g), \end{align} where $\beta$ denotes the labels of irreducible representations of $\mathcal{G}_{D^1}/T$ in Table~\ref{tab:P4}. As a result, we obtain the band labels \begin{align} \label{eq:p4-point} (\mathfrak{N}^{1}_{D^1},\mathfrak{N}^{2}_{D^1}, \mathfrak{N}^{3}_{D^1},\mathfrak{N}^{4}_{D^1}) &= \begin{cases} (-1,0,0,1)\quad \text{for}\ \widetilde{\alpha} = 1\\ (1,-1,0,0)\quad \text{for}\ \widetilde{\alpha} = 2\\ (0,0,1,-1)\quad \text{for}\ \widetilde{\alpha} = 3\\ (0,1,-1,0)\quad \text{for}\ \widetilde{\alpha} = 4, \end{cases} \end{align} which do not correspond to any single basis vector of $E_{1}^{1,0}$ but to linear combinations of them. \add{The result is case (ii), i.e., the gapless point is actually a shrunk loop or surface node. To see this, let us discuss a concrete Dirac Hamiltonian near the gapless point \begin{align} \label{eq:gene_ham_A} H_{(k_1, k_2)} &= k_1 \sigma_1 + k_2 \sigma_2 + \delta k_3\sigma_3,\\ \label{eq:gene_rep_A} \widetilde{\sigma}(C_4) &= \sigma_0,\\ \sigma(C_4) &= e^{-\tfrac{\pi}{4}\sigma_1\sigma_2}\widetilde{\sigma}(C_4) =\begin{pmatrix} e^{-i\tfrac{\pi}{4}} & \\ & e^{i\tfrac{\pi}{4}} \end{pmatrix}. \end{align} The Dirac Hamiltonian and the symmetry representation correspond to the case of $\tilde{\alpha} =1$. We add a $C_4$-symmetric perturbation $M = \text{diag}(m_0, m_1)$ to the Dirac Hamiltonian. As shown in Fig.~\ref{fig:CC_example} (b), the Hamiltonian with the perturbation exhibits a surface node.
} \begin{table}[t] \begin{center} \caption{\label{tab:P4}Irreducible representations of the onsite symmetry group $\widetilde{\mathcal{G}}$ and $\mathcal{G}_{D^1}/T$ for $P4$.} \begin{tabular}{c|c|c|cccc} \hline & EAZ & irrep $\widetilde{\alpha}$ & $e$ & $C_{4}^{z}$ & $(C_{4}^{z})^2$ & $(C_{4}^{z})^3$ \\ \hline \multirow{4}{*}{$\tilde{\mathcal{G}}$} & A & $1$ & $1$ & $1$ & $1$ & $1$\\ & A & $2$ & $1$ & $i$ & $-1$ & $-i$ \\ & A & $3$ & $1$ & $-i$ & $-1$ & $i$ \\ & A & $4$ & $1$ & $-1$ & $1$ & $-1$ \\ \hline\hline & EAZ & irrep $\beta$ & $e$ & $C_{4}^{z}$ & $(C_{4}^{z})^2$ & $(C_{4}^{z})^3$ \\ \hline \multirow{4}{*}{$\mathcal{G}_{D^1}/T$} & A & $1$ & $1$ & $e^{i \tfrac{\pi}{4}}$ & $i$ & $e^{i \tfrac{3\pi}{4}}$ \\ & A & $2$ & $1$ & $e^{i \tfrac{3\pi}{4}}$ & $-i$ & $e^{i \tfrac{\pi}{4}}$ \\ & A & $3$ & $1$ & $e^{-i \tfrac{3\pi}{4}}$ & $i$ & $e^{-i \tfrac{\pi}{4}}$ \\ & A & $4$ & $1$ & $e^{-i \tfrac{\pi}{4}}$ & $-i$ & $e^{-i \tfrac{3\pi}{4}}$ \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{$Pmc2_11'$ with $A_2$ pairing} \label{sec4:pmc2} Lastly, we discuss the nonsymmorphic and noncentrosymmetric MSG $Pmc2_11'$ with $A_2$ pairing. Here we consider the 1-cell on the boundary of the BZ denoted by $D^1$ in Fig.~\ref{fig:cell_p2m} (b). The little co-group consists of the following four parts: \begin{align} \label{eq:G/T_26} \mathcal{G}_{D^1}/T &=\{e, M_{y}\}, \\ \mathcal{A}_{D^1}/T &=\{C_{2}^{z}\mathcal{T}, M_{x}\mathcal{T}\}, \\ \mathcal{P}_{D^1}/T &=\{C_{2}^{z}\mathcal{C}, M_{x}\mathcal{C}\}, \\ \label{eq:Gamma/T_26} \mathcal{J}_{D^1}/T &=\{\Gamma, M_y\Gamma\}. \end{align} We define generators of the onsite symmetry group by \begin{align} \label{eq:tMy} \widetilde{\sigma}(M_{y}) &\equiv \gamma_1\sigma(M_{y}),\\ \widetilde{\sigma}(M_{x}\mathcal{T}) &\equiv \gamma_1\gamma_2\sigma(M_{x}\mathcal{T}),\\ \widetilde{\sigma}(M_{x}\mathcal{C}) &\equiv \gamma_1\gamma_2\sigma(M_{x}\mathcal{C}).
\end{align} Then, the onsite symmetry group is composed of the following symmetries: \begin{align} \widetilde{\mathcal{G}} &= \{e, M_y\Gamma\}, \\ \widetilde{\mathcal{A}} &= \{M_x\mathcal{T}, M_y M_x\mathcal{C}\},\\ \widetilde{\mathcal{P}} &= \{M_x \mathcal{C},M_y M_x\mathcal{T}\},\\ \widetilde{\mathcal{J}} &= \{\Gamma, M_y\}. \end{align} One can explicitly verify $[\widetilde{\sigma}(M_y\Gamma)]^2 = -1$ and $[\widetilde{\sigma}(M_y\Gamma), \widetilde{\sigma}(M_{x}\mathcal{T})] = [\widetilde{\sigma}(M_y\Gamma), \widetilde{\sigma}(M_{x}\mathcal{C})] = 0$. These relations imply that the EAZ classes for irreducible representations $\widetilde{u}^{\widetilde{\alpha}}(M_y\Gamma) = i\alpha\ (\alpha=\pm 1)$ are class AIII$_\mathcal{T}$, and therefore the classification is $\pi_2(\text{C}_1)=0$. \add{The result corresponds to case (iii), and the gapless point on the 1-cell is part of line or surface nodes.} \section{Unification of compatibility relations and point-node classifications} \label{sec5} In this section, we integrate the classifications of gapless points discussed in Sec.~\ref{sec4} into the compatibility relations in Sec.~\ref{sec3:CR}, which results in a unified way to diagnose the shapes of nodes. We first explain the general scheme to classify nodes on 1-cells, and then we apply the scheme to several symmetry settings: MSGs $P2/m1'$ with $B_g$ pairing, $P21'$ with $B$ pairing, $P4$ with $^1E$ pairing, and $Pmc2_11'$ with $A_2$ pairing. \subsection{Revisiting compatibility relations and the first differential} \label{sec5:d1} \begin{figure}[t] \begin{center} \includegraphics[width=1\columnwidth]{fig_differential.pdf} \caption{\label{fig:diff}Illustration of nodes near a gapless point at a 1-cell. (a) Two divisions of the 1-cell and nodal lines in adjacent 2-cells. Here, the red shaded region and others have different values of the topological invariants, whose boundary results in a nodal line. (b) Surface node.
When there are compatibility relations between the 2-cell and adjacent 3-cells, the regions in (a) are extended to 3-cells, and the boundary surface is the surface node.} \end{center} \end{figure} Before moving on to the scheme to classify nodes at 1-cells, let us revisit the first differentials $d_{1}^{p,0}$ for $p=1,2$. \add{Here, we discuss the reason why $d_{1}^{p,0}$ can be understood as connecting gapless states on $p$-cells and $(p+1)$-cells.} Suppose that there exists a gapless point at a 1-cell, which involves the changes of zero-dimensional topological invariants at $\bm{k}$-points in the 1-cell. In other words, there are two parts of the 1-cell which have different zero-dimensional topological invariants. The gapless point at the 1-cell need not be a genuine point node in the BZ; in general, it might be part of line or surface nodes. We further assume that compatibility relations between points in the 1-cell and in adjacent 2-cells exist. \add{Although the zero-dimensional topological invariants for the above two parts of the $1$-cell are different, all points in the $1$-cell obey the same compatibility relations with points in the adjacent $2$-cells. Then, the compatibility relations and the different topological invariants of the two parts lead to two regions on the $2$-cell with different zero-dimensional topological invariants} (see Fig.~\ref{fig:diff} (a)). As a result, the boundary line of these two regions results in a line node. In fact, $d_{1}^{1,0}$ informs us of the existence or absence of such line nodes. Focusing on one of the adjacent 2-cells, we can apply the same discussion to this 2-cell. Namely, when compatibility relations between the 2-cell and adjacent 3-cells exist, $d_{1}^{2,0}$ examines whether the above two regions with different zero-dimensional topological invariants are extended to 3-cells (see Fig.~\ref{fig:diff} (b)).
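This diagnosis can be phrased as integer linear algebra on the band-label groups. A minimal sketch with purely hypothetical matrices for $d_{1}^{1,0}$ and $d_{1}^{2,0}$, chosen only to satisfy $d_{1}^{2,0}\circ d_{1}^{1,0}=0$ and not taken from any actual MSG:

```python
import numpy as np

# Hypothetical first differentials: columns index band-label bases on p-cells,
# rows index bases on (p+1)-cells.
d1 = np.array([[1, 1],
               [-1, -1]])    # d_1^{1,0}: E_1^{1,0} -> E_1^{2,0}
d2 = np.array([[1, 1]])      # d_1^{2,0}: E_1^{2,0} -> E_1^{3,0}
assert np.all(d2 @ d1 == 0)  # the relation d_1^{2,0} ∘ d_1^{1,0} = 0

def node_type(n, point_labels):
    """Diagnose the node on a 1-cell carrying band labels n."""
    image = d1 @ n
    if np.any(image != 0):                        # case (A)
        restricted = np.zeros_like(image)
        restricted[0] = image[0]                  # keep one adjacent 2-cell only
        return "surface" if np.any(d2 @ restricted != 0) else "line"
    # case (B): a genuine point node only if n is an integer combination of
    # the labels produced by the point-node classification
    if point_labels.size:
        c, *_ = np.linalg.lstsq(point_labels.astype(float),
                                n.astype(float), rcond=None)
        if np.allclose(point_labels @ np.round(c), n):
            return "point"
    return "line through generic momenta"

# e.g. a label (1, -1) that is annihilated by d1 but is not an integer
# multiple of a point-node label (2, -2) is diagnosed as a line node
print(node_type(np.array([1, -1]), np.array([[2], [-2]])))
```

The hypothetical matrices are only meant to make the two-step logic concrete; in practice both differentials are computed from the compatibility relations of the MSG at hand.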
In the following, we formulate the above processes in a systematic manner based on $E_{1}^{p,0}$ and $d_{1}^{p,0}$. \subsection{Classifications of nodes on 1-cells} \label{sec5:general} \begin{figure*}[t] \begin{center} \includegraphics[width=2\columnwidth]{diagnosis.pdf} \caption{\label{fig:diagnosis}Illustration of the diagnostic scheme for case (A). We begin by acting with $d_{1}^{1,0}$ on a generator of $E_{1}^{1,0}$ corresponding to a basis $\mathfrak{b}_{D^1}^{\alpha}$. Then, we obtain the gapless lines on the $2$-cells adjacent to the 1-cell $D^1$, which together form an element of $E_{1}^{2,0}$. Next, we focus on one of the adjacent $2$-cells. In the figure, we pick $D_1^2$ from the two $2$-cells. In other words, we consider only $\mathfrak{b}_{D_1^2}^{\beta_1}$ of $\mathfrak{b}_{D_1^2}^{\beta_1}+\mathfrak{b}_{D_2^2}^{\beta_2}$ in the figure. Finally, we map $\mathfrak{b}_{D_1^2}^{\beta_1}$ to an element of $E_{1}^{3,0}$ by $d_{1}^{2,0}$ and examine whether the mapped element is trivial or not.} \end{center} \end{figure*} As discussed in the preceding section, compatibility relations tell us if the change of zero-dimensional topological invariants at a $p$-cell makes domain walls of the changes at $(p+1)$-cells. This process is formulated in terms of $E_{1}^{p,0}$ and $d_{1}^{p,0}$. Recall that $E_{1}^{1,0}$ can be interpreted as the set of gapless states on 1-cells, and let us suppose that we have the set of band labels $\mathfrak{n}^{(1)} =\mathfrak{b}^{(1)}_{D^1,\alpha}$, where $\mathfrak{b}^{(1)}_{D^1, \alpha}$ is a basis vector of $E_{1}^{1,0}$ generated by an irreducible representation $U_{D^1}^{\alpha}$ at a 1-cell $D^1$ (see Sec.~\ref{sec3:E1}). Applying the above strategy to the 1-cell, there are two cases: (A) $d_{1}^{1,0}(\mathfrak{n}^{(1)})\neq \bm{0}$ and (B) $d_{1}^{1,0}(\mathfrak{n}^{(1)})= \bm{0}$. We first consider case (A).
Since $d_{1}^{1,0}(\mathfrak{n}^{(1)})$ is an element of $E_{1}^{2,0}$, $d_{1}^{1,0}(\mathfrak{n}^{(1)})$ can be expanded by the basis vectors of $E_{1}^{2,0}$ as \begin{align} \label{eq:d1n} d_{1}^{1,0}(\mathfrak{n}^{(1)}) &= \sum_{i}\left( \sum_{\alpha}r_{D^{2}_{i}, \alpha}^{(2)}\mathfrak{b}_{D^{2}_{i},\alpha}^{(2)} + \sum_{\beta}m_{D^{2}_{i}, \beta}^{(2)}\mathfrak{b}_{D^{2}_{i},\beta}^{(2)}\right), \end{align} where $r_{D^{2}_{i}, \alpha}^{(2)}\in \mathbb{Z}_2$ and $m_{D^{2}_{i}, \beta}^{(2)} \in \mathbb{Z}$. This equation tells us that the gapless point on the 1-cell is extended to adjacent 2-cells with the nontrivial coefficients in Eq.~\eqref{eq:d1n}, which results in gapless lines on the $2$-cells. As a result, the gapless point on the 1-cell is part of line nodes or surface nodes. To distinguish between these two possibilities, we further examine whether $d_{1}^{2,0}$ is nontrivial. One might recall the relation $d_{1}^{2,0}\circ d_{1}^{1,0}=0$ and think that $d_{1}^{2,0}$ is useless for this purpose. \add{However, when we focus on only one of the adjacent $2$-cells, the same discussion can be applied to that $2$-cell. In other words, picking only one basis vector from Eq.~\eqref{eq:d1n}, we can discuss the action of $d_{1}^{2,0}$ on the picked basis vector, as in the case of $E_{1}^{1,0}$ (see Fig.~\ref{fig:diagnosis} for an intuitive illustration).} If there exists a basis vector such that $d_{1}^{2,0}(\mathfrak{b}_{D^{2}_{i},\alpha}^{(2)})\neq 0$ in Eq.~\eqref{eq:d1n}, the gapless point on the 1-cell is part of a surface node. Otherwise, it is part of a line node. Next, we discuss the case (B) where $d_{1}^{1,0}(\mathfrak{n}^{(1)})=\bm{0}$. Since the relation indicates the absence of any domain walls discussed above, one might expect that the gapless point on the 1-cell is a genuine point node. However, this is not always true.
\add{The gapless point on the $1$-cell is a genuine point node only if $\mathfrak{n}^{(1)}$ is a member of the gapless-point classification, i.e., $\mathfrak{n}^{(1)}$ can be expanded in the band labels obtained from the gapless-point classifications in Sec.~\ref{sec4:cc-formulas}. If not, the gapless point on the line is part of line nodes extended from the 1-cell to 3-cells, i.e., generic momenta.} Using the above scheme, we classify nodes on all 1-cells for any MSG $\mathcal{M}$, taking into account all the possible one-dimensional irreducible representations of the superconducting gaps, \add{the conditions $\mathcal{C}^2=\pm 1$}, and the spinful/spinless nature of the systems. The results are tabulated in the Supplementary Materials. In Appendix~\ref{app:cell_3D}, we explain the cell decomposition for the 3D BZ which we used in the classifications. \subsection{Examples} \label{sec5:example} In the following, we will apply the above scheme to concrete symmetry settings. As mentioned in Secs.~\ref{sec1} and \ref{sec2}, our scheme is applicable to complex symmetry settings, e.g., noncentrosymmetric systems and rotation axes in the glide planes, which are out of the scope of previous studies. After we reproduce the results of previous works for spinful superconductors in MSG $P2/m1'$ with $B_g$ pairing by our method, we show that our method can detect nodal structures for those in MSGs $P21'$, $P4$, and $Pmc2_11'$, which are noncentrosymmetric, TR-breaking, or nonsymmorphic MSGs. The results are summarized in Table~\ref{tab:overview}. \begin{table*}[t] \begin{center} \caption{\label{tab:overview}Summary of classification results for examples discussed in this work. Space groups and pairing symmetries are shown in the first and second columns. The third and fourth columns give the boundary points of the line on which a gapless point exists. The fifth column is the label of an irreducible representation (irrep), which follows the notation in Ref.~\onlinecite{Bilbao}.
The sixth column shows the classification $\mathbb{Z}$ or $\mathbb{Z}_2$, and the seventh column gives the type of node. Here P, L, and S denote point, line, and surface nodes, respectively. In addition, while (A) means that the shape of the node is determined only by compatibility relations, (B) indicates that point-node classifications are necessary. } \begin{tabular}{c|c|c|c|c|c|c} \hline SG & pairing & HSP1 & HSP2 & irrep & classification & type of node \\ \hline\hline \multirow{2}{*}{$P2/m$ with TRS} & \multirow{2}{*}{$B_g$} & $(0,0,0)$ & $(0,\frac{1}{2},0)$ &$\bar{\Lambda}_3$ & $\mathbb{Z}_2$ & L(B) \\ & & $(0,0,0)$ & $(\frac{1}{2},0,0)$ & $\bar{\text{F}}_3$ & $\mathbb{Z}_2$ & L(A) \\ \hline $P2$ with TRS & $B$ & $(0,0,0)$ & $(0,\frac{1}{2},0)$ & $\bar{\Lambda}_3$ & $\mathbb{Z}$ & L(B) \\ \hline \multirow{4}{*}{$P4$ without TRS} & \multirow{4}{*}{$^1E$} & \multirow{4}{*}{$(0,0,0)$} & \multirow{4}{*}{$(0,0,\frac{1}{2})$} & $\bar{\Lambda}_5$ & $\mathbb{Z}$ & S(A) \\ & & & & $\bar{\Lambda}_6$ & $\mathbb{Z}$ & S(A) \\ & & & &$\bar{\Lambda}_7$ & $\mathbb{Z}$ & S(A) \\ & & & & $\bar{\Lambda}_8$ & $\mathbb{Z}$ & S(A) \\ \hline $Pmc2_1$ with TRS & $A_2$ & $(0,0,\frac{1}{2})$ & $(\frac{1}{2},0,\frac{1}{2})$ & $\bar{\text{A}}_3$ & $\mathbb{Z}_2$ & L(B) \\ \hline \end{tabular} \end{center} \end{table*} \subsubsection{$P2/m1'$ with $B_g$ pairing} \label{sec5:p2mBg} Let us begin with the 1-cell $D^1$ in Fig.~\ref{fig:cell_p2m} (a), which is the rotation axis in the BZ for $P2/m1'$ with $B_g$ pairing. On the 1-cell, there are two irreducible representations listed in Table~\ref{tab:irreps_P2}. Ref.~\onlinecite{Sumita-Nomoto-Shiozaki-Yanase} has shown that line nodes pinned to the rotation axes can exist in this symmetry setting, although the derivation was not presented. Here, we show that the line nodes pinned to the rotation axes can be stable by using $d_{1}^{1,0}$ and our point-node classifications.
Let us suppose that we have $\mathfrak{n}^{(1)} = \mathfrak{b}_{D^1}^{(1)}$ in which $\mathfrak{p}_{D^1}^{1}, \mathfrak{p}_{D^1}^{2}$, $\mathfrak{p}_{\mathcal{T} D^1}^{1}$, and $\mathfrak{p}_{\mathcal{T} D^1}^{2}$ equal $1$. We first identify the 2-cells adjacent to the 1-cell $D^1$ from Fig.~\ref{fig:cell_p2m} (a). The EAZ classes at the 2-cells are class DIII due to the existence of $I\mathcal{T}$ and $I\mathcal{C}$ with $(I\mathcal{T})^2=-1$ and $(I\mathcal{C})^2=+1$, and hence compatibility relations among them do not exist. This results in $d_{1}^{1,0}(\mathfrak{n}^{(1)}) = 0$, which indicates that the gapless point on the 1-cell is not extended to the 2-cells. Next, we classify stable point nodes on the 1-cell. As discussed in Sec.~\ref{sec4:p2m}, we find there are no stable point nodes on the 1-cell. Therefore, we conclude that the gapless point is part of a line node extended from the 1-cell to 3-cells. This line node is protected by the one-dimensional winding number $W$ defined by the chiral symmetry at the 3-cells. This is precisely what Ref.~\onlinecite{Sumita-Nomoto-Shiozaki-Yanase} has proposed. We then discuss the mirror plane at $k_y = 0$. Let us focus on the 1-cell $b$ in Fig.~\ref{fig:p4mm} and suppose that we have $\mathfrak{n}^{(1)} = \mathfrak{b}_{b}^{(1)}$ which has $\mathfrak{p}_{b}^{\pm}=1$ for irreducible representations $U_{b}^{\pm}(M_y) = \pm i$. The 2-cells $\alpha$ and $\alpha_7$ are adjacent to the 1-cell $b$ and belong to the same symmetry class. Consequently, compatibility relations among them exist, and $d_{1}^{1,0}( \mathfrak{b}_{b}^{(1)}) = \mathfrak{b}_{\alpha}^{(2)}+\mathfrak{b}_{\alpha_7}^{(2)}$. Here, $\mathfrak{b}_{\alpha}^{(2)}$ $(\mathfrak{b}_{\alpha_7}^{(2)})$ is a basis of $E_{1}^{2,0}$ in which $\mathfrak{p}_{\alpha}^{\pm}$ $(\mathfrak{p}_{\alpha_7}^{\pm})$ and associated band labels equal $1$. 
As discussed in Sec.~\ref{sec5:general}, $d_{1}^{1,0}(\mathfrak{n}^{(1)}) \neq 0$ indicates that the gapless point on the 1-cell $b$ should be extended to the adjacent 2-cells. Since EAZ classes of all 3-cells are class DIII, there are no compatibility relations, i.e., $d_{1}^{2,0} = 0$. Therefore, we conclude that the gapless point on the 1-cell $b$ is classified into $\mathbb{Z}_2$ and is part of the line node in the mirror plane. Our result is consistent with the result of the group-theoretical analysis in Ref.~\onlinecite{Sumita-Yanase}. \subsubsection{$P21'$ with $B$ pairing} \label{sec5:p2} Next, we consider the same 1-cell as that in Sec.~\ref{sec5:p2mBg}, but without the inversion symmetry. In this case, the system can have line nodes pinned to the rotation axes. Irreducible representations $U_{D^1}^{\beta=1,2}$ and their EAZ classes are listed in Table~\ref{tab:irreps_P2}. We again assume that we have $\mathfrak{n}^{(1)} = \mathfrak{b}_{D^1}^{(1)}$ in which $\mathfrak{N}_{D^1}^{1}=-\mathfrak{N}_{D^1}^{2}=+1$ and associated band labels equal $1$ or $-1$. Unlike the above case, the 2-cells are invariant only under $\Gamma$, and hence their EAZ classes are class AIII. As with the case of Sec.~\ref{sec5:p2mBg}, this implies that there are no compatibility relations among them, i.e., $d_{1}^{1,0}(\mathfrak{n}^{(1)}) = 0$, and the gapless point on the 1-cell is not extended to the 2-cells. As shown in Sec.~\ref{sec4:p2}, since $(\mathfrak{N}_{D^1}^{1}, \mathfrak{N}_{D^1}^{2}) = (1, -1)$ is not a member of the gapless point classification in Eq.~\eqref{eq:gene_P2}, we conclude that $\mathfrak{b}_{D^1}$ indicates the existence of line nodes pinned to the rotation axes. This is consistent with the fact that the winding number $W$ does not change after breaking the inversion symmetry of the system in Sec.~\ref{sec5:p2mBg}. 
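The protecting invariant can also be evaluated numerically. The sketch below (a minimal illustration, not the full classification machinery) computes the 1D winding number of the pair $(\varepsilon_{\bm k},\lambda_{\bm k})$ entering the chiral-symmetric Hamiltonian of Eq.~\eqref{eq:p2_model} below, along a small loop in momentum space; the loop radius and discretization are arbitrary choices.

```python
import math

# Normal-state dispersion and off-diagonal pairing term of the model
# in Eq. (eq:p2_model): H = eps * tau_z + lam * tau_y.
def eps(kx, ky, kz, mu=1.0):
    return 3 - math.cos(kx) - math.cos(ky) - math.cos(kz) - mu

def lam(kx, ky, kz):
    return math.sin(kx) + 2 * math.sin(kz)

def winding(center_ky, r=0.2, n=2000):
    """1D winding number of (eps, lam) along a loop of radius r in the
    (kx, ky) plane at kz = 0, centered at (kx, ky) = (0, center_ky)."""
    total, prev = 0.0, None
    for i in range(n + 1):
        t = 2 * math.pi * i / n
        kx, ky = r * math.cos(t), center_ky + r * math.sin(t)
        ang = math.atan2(lam(kx, ky, 0.0), eps(kx, ky, 0.0))
        if prev is not None:
            d = ang - prev
            d -= 2 * math.pi * round(d / (2 * math.pi))  # unwrap phase jumps
            total += d
        prev = ang
    return round(total / (2 * math.pi))
```

For $\mu=1$ the node on the rotation axis sits at $(0,\pi/2,0)$: a loop encircling it yields winding $\pm1$ (the sign depends on orientation conventions), while a loop away from the node yields $0$.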
To verify the existence of such line nodes, let us consider the following model \begin{align} \label{eq:p2_model} H_{\bm{k}}&=(3-\cos k_x - \cos k_y - \cos k_z-\mu)\tau_z \nonumber\\ &\quad\quad\quad\quad\quad\quad+ (\sin k_x+2\sin k_z)\tau_y,\\ \rho(C_{2}^{y}) &= -i\tau_z\sigma_y,\\ \rho(\mathcal{T}) &= i\sigma_y,\\ \rho(\mathcal{C}) &= \tau_x, \end{align} where $\sigma_{i=x,y,z}$ and $\tau_{j=x,y,z}$ are Pauli matrices which represent different degrees of freedom. After computing the region where the spectrum is gapless, we find the line node shown in Fig.~\ref{fig:p2_line}. This is the line node that we have discussed above. \begin{figure}[t] \begin{center} \includegraphics[width=0.65\columnwidth]{p2_line.pdf} \caption{\label{fig:p2_line}The nodal line of the tight-binding model in Eq.~\eqref{eq:p2_model} for $\mu = +1$.} \end{center} \end{figure} The question is whether $\mathfrak{n}^{(1)} = 2\mathfrak{b}_{D^1}^{(1)}$ corresponds to a point node or not. In the following, we show that the above line node can exist even in this case. To explain this, we start with the case where there are two of the above line nodes generated by $\mathfrak{n}^{(1)} = 2\mathfrak{b}_{D^1}^{(1)}$, illustrated in Fig.~\ref{fig:p2} (a). By rotating one of the lines, the winding numbers can be cancelled. Then, we get two pairs of point nodes in Fig.~\ref{fig:p2} (b). However, in the absence of symmetries other than MSG $P21'$ with PHS, there is no reason why the two gapless points on the 1-cell should sit at the same point. Finally, each pair again forms a line node, illustrated in Fig.~\ref{fig:p2} (c). As a result, $\mathfrak{n}^{(1)} = 2\mathfrak{b}_{D^1}^{(1)}$ indicates the existence of line nodes in Fig.~\ref{fig:p2} (c), and therefore nodes on the 1-cell are classified into $\mathbb{Z}$, whose elements are line nodes of \textit{case (B)}. \begin{figure}[t] \begin{center} \includegraphics[width=0.99\columnwidth]{p2_B_line_v2.pdf} \caption{\label{fig:p2}Deformation of nodal structures in MSG $P21'$. 
Two line nodes pinned to the rotation axis are protected by 1D winding numbers (a). These line nodes can be deformed into point nodes without closing the gap at 0-cells (b). Since there is no reason why two point nodes should sit at the same position, each pair of split gapless points is again part of a line node.} \end{center} \end{figure} \subsubsection{$P4$ with $^1E$ pairing} \label{sec5:p4} Next, we discuss MSG $P4$ with $^1E$ pairing, which is generated by the four-fold rotation symmetry $C_{4}^{z}$. We consider the 1-cell $D^1$ in Fig.~\ref{fig:p4} (a). In the following, we show that a gapless point on the 1-cell is part of surface nodes. Irreducible representations $U_{D^1}^{\beta}\ (\beta=1,2,3,4)$ and their EAZ classes are tabulated in Table~\ref{tab:P4}. Suppose that we have $\mathfrak{n}^{(1)} = \mathfrak{b}_{D^1, \beta=1}^{(1)}$ which has $\mathfrak{N}^{1}_{D^1} = -\mathfrak{N}^{3}_{\mathcal{C} D^1} = +1$. Although there are eight 2-cells adjacent to $D^1$ [colored in Fig.~\ref{fig:p4} (a)], only two of them are independent due to the presence of $C_{4}^{z}$. Here, we choose the blue planes $D^{2}_{1}$ and $D^{2}_{2}$ in Fig.~\ref{fig:p4} (a) as independent adjacent 2-cells. Since the EAZ classes at $D^1$, the adjacent 2-cells, and 3-cells are the same, compatibility relations exist. Accordingly, $d_{1}^{1,0}(\mathfrak{n}^{(1)}) = \mathfrak{b}_{D^{2}_1}^{(2)} - \mathfrak{b}_{D^{2}_2}^{(2)}$, in which $\mathfrak{N}_{D_{i=1,2}^2}=+1$ and associated band labels equal $1$ or $-1$. We further find $d_{1}^{2,0}(\mathfrak{b}_{D^{2}_1} )\neq 0$ and $d_{1}^{2,0}(\mathfrak{b}_{D^{2}_2} ) \neq 0$, which implies that a gapless point on the 1-cell is part of surface nodes. Note that the discussions and results for other values of $\beta$ do not change. As shown in Sec.~\ref{sec4:p4}, when $\mathfrak{n}^{(1)}$ is a linear combination of $\{\mathfrak{b}_{D^1, \beta}^{(1)}\}_{\beta=1}^{4}$, point nodes on the 1-cell can exist. 
However, the same logic as in Sec.~\ref{sec5:p2} applies, and therefore the point nodes can be inflated, which results in sphere nodes (Bogoliubov Fermi surfaces) pinned to the 1-cell, as in the right panel of Fig.~\ref{fig:p4} (b). Ref.~\onlinecite{PhysRevLett.125.237004} has discussed such Bogoliubov Fermi surfaces in multi-component superconductors without the inversion symmetry, although it has not discussed their symmetry protection. \subsubsection{$Pmc2_11'$ with $A_{2}$ pairing} \label{sec5:pmc21} Finally, we discuss the nonsymmorphic and noncentrosymmetric MSG $Pmc2_11'$ with $A_{2}$ pairing. We focus on the 1-cell $D^1$ on the boundary of the BZ [see Fig.~\ref{fig:cell_p2m} (b)], which is invariant under the glide symmetry $G_{y}$. There are two irreducible representations $U_{D^1}^{\pm}(G_y) = \pm 1$ of $\mathcal{G}_{D^1}$, and their EAZ classes are class D. Let us consider that we have $\mathfrak{n}^{(1)}=\mathfrak{b}^{(1)}_{D^1}$ in which $\mathfrak{p}_{D^1}^{\pm} = 1$ and associated band labels are nontrivial. As shown in Fig.~\ref{fig:cell_p2m} (b), three 2-cells adjacent to $D^1$ exist. The EAZ classes of the 2-cells in the $k_y=0$ plane and the $k_z=\pi$ plane are class A and class DIII, respectively. Consequently, there are no compatibility relations, i.e., $d_{1}^{1,0}(\mathfrak{n}^{(1)}) = \bm{0}$. In addition, as shown in Sec.~\ref{sec4:pmc2}, there are no locally stable point nodes. As a result, we arrive at a line node pinned to the 1-cell $D^1$, which extends from the 1-cell to 3-cells. Interestingly, such line nodes on the 1-cell do not exist in the symmorphic MSG $Pmm21'$ with $A_{2}$ pairing, whose point group is the same as that of $Pmc2_11'$. In $Pmm21'$ with $A_{2}$ pairing, the line node pinned to the 1-cell $D^1$ is understood by the compatibility relations. This is an example where nonsymmorphic symmetries change the classifications of nodes. 
As shown in this example, our method can capture the shape of nodes even in the presence of nonsymmorphic symmetries and in the absence of the inversion symmetry. \begin{figure}[t] \begin{center} \includegraphics[width=0.99\columnwidth]{fig_p4_v2.pdf} \caption{\label{fig:p4}(a) A half BZ in MSG $P4$. Here, the blue planes $D^{2}_{1}$ and $D^{2}_{2}$ are adjacent 2-cells to $D^1$ and red ones are symmetry-related to $D^{2}_{1}$ and $D^{2}_{2}$. (b) Deformation of nodal structures in MSG $P4$.} \end{center} \end{figure} \section{Applications to materials} \label{sec6} In this section, we provide an efficient algorithm to diagnose the shape of nodes, which needs only the zero-dimensional topological invariants at 0-cells as input data. Since the energy scale of the superconducting gaps in most superconductors is believed to be much smaller than that of the normal phase~\cite{PhysRevB.81.134508,PhysRevB.81.220504,PhysRevLett.105.097001,Ono-Yanase-Watanabe2019,Ono-Po-Watanabe2020,Ono-Po-Shiozaki2020}, assuming the pairing symmetry, we can obtain the input data from DFT calculations by the following formulas: \begin{align} \mathfrak{p}_{\bm{k}}^{\alpha} &= n_{\bm{k}}^{\alpha}\vert_{\text{occ}},\\ \mathfrak{N}_{\bm{k}}^{\alpha} &= n_{\bm{k}}^{\alpha}\vert_{\text{occ}} - n_{-\bm{k}}^{\tilde{\alpha}}\vert_{\text{occ}}, \end{align} where $n_{\bm{k}}^{\alpha}\vert_{\text{occ}}$ is the number of irreducible representations labeled by $\alpha$ in the normal phase, and $\tilde{\alpha}$ is a label of the particle-hole conjugate irreducible representation of $\alpha$. We also demonstrate our scheme through a simple tight-binding model and the recently discovered superconductor CaPtAs. \subsection{Efficient algorithm for detection of nodal structures} In Sec.~\ref{sec5}, we have classified nodes on the 1-cells, and we have shown that the basis of $E_{1}^{1,0}$ can largely determine the shape of nodes. Here we recall that $d_{1}^{0,0}$ is a map from $E_{1}^{0,0}$ to $E_{1}^{1,0}$. 
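The two formulas above amount to a few lines of bookkeeping. The sketch below uses hypothetical irrep labels and multiplicities (the actual inputs come from DFT calculations as described); $\mathbb{Z}_2$-valued labels would additionally be reduced mod 2, which is omitted here for simplicity.

```python
def band_labels(n_occ, ph_conjugate):
    """Band labels at 0-cells from occupied-irrep multiplicities:
        p[k, a] = n_k^a |occ
        N[k, a] = n_k^a |occ - n_{-k}^{~a} |occ
    n_occ[k][a] counts occupied irreps a at momentum k; n_occ is
    assumed to contain -k for every k (at TRIMs, -k = k up to a
    reciprocal lattice vector).  ph_conjugate[a] is the particle-hole
    conjugate irrep label of a."""
    p, N = {}, {}
    for k, counts in n_occ.items():
        mk = tuple(-q for q in k)  # the momentum -k
        for a, n in counts.items():
            p[(k, a)] = n
            N[(k, a)] = n - n_occ[mk][ph_conjugate[a]]
    return p, N
```

For instance, with hypothetical multiplicities $n_\Gamma^{1}\vert_{\rm occ}=2$, $n_\Gamma^{2}\vert_{\rm occ}=1$ and particle-hole conjugate pair $(1,2)$, one obtains $\mathfrak{N}_\Gamma^{1}=+1$ and $\mathfrak{N}_\Gamma^{2}=-1$.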
The map $d_{1}^{0,0}$ thus enables us to infer nodal structures on the 1-cells from information at the 0-cells. First, let us assume that we have the set of band labels at the 0-cells $\mathfrak{n}^{(0)}$ and $d_{1}^{0,0}(\mathfrak{n}^{(0)})\neq 0$. By expanding $d_{1}^{0,0}(\mathfrak{n}^{(0)})$ in the basis of $E_{1}^{1,0}$, we find which coefficients are nontrivial. Referring to the results of the classifications in Sec.~\ref{sec5}, we diagnose the shape of nodal structures, i.e., whether gapless points on the 1-cells are point nodes or parts of line/surface nodes. To demonstrate the scheme, we consider a simple tight-binding model of MSG $P2/m1'$ with $B_g$ pairing: \begin{align} \label{eq:p2m_model} H_{\bm{k}}&=(3-\cos k_x - \cos k_y - \cos k_z-\mu)\tau_z \nonumber\\ &\quad\quad\quad\quad+ (\sin k_x+2\sin k_z)\sin k_y \tau_y\sigma_y,\\ \rho(I) &= \mathds{1},\\ \rho(C_{2}^{y}) &= -i\tau_z\sigma_y,\\ \rho(\mathcal{T}) &= i\sigma_y,\\ \rho(\mathcal{C}) &= \tau_x, \end{align} where $\sigma_{i=x,y,z}$ and $\tau_{j=x,y,z}$ are Pauli matrices which represent different degrees of freedom. Using this model, we show that the above algorithm can detect the nodal structures discussed in Sec.~\ref{sec5:p2mBg}. After computing Pfaffian invariants in Eq.~\eqref{eq:Pf} for all 0-cells, we find $\mathfrak{p}_{\Gamma}^{1} = \mathfrak{p}_{\Gamma}^{2} = 1$ and others equal zero, where $\mathfrak{p}_{\Gamma}^{1}$ and $\mathfrak{p}_{\Gamma}^{2}$ are band labels for irreducible representations $(U_{\Gamma}^{1}(I), U_{\Gamma}^{1}(C_{2}^{y}) )=(1,+i)$ and $(U_{\Gamma}^{2}(I),U_{\Gamma}^{2}(C_{2}^{y}) )=(1,-i)$. This set of band labels corresponds to a basis of $E_{1}^{0,0}$ denoted by $\mathfrak{b}_{\Gamma, 1}^{(0)}$, and we get $d_{1}^{0,0}(\mathfrak{b}_{\Gamma, 1}^{(0)}) = \mathfrak{b}_{a}^{(1)}+\mathfrak{b}_{b}^{(1)}+\mathfrak{b}_{a_1}^{(1)}+\mathfrak{b}_{b_1}^{(1)}+\mathfrak{b}_{D^1}^{(1)}$, where we use the same labels of 1-cells in Figs.~\ref{fig:p4mm} and \ref{fig:cell_p2m}(a). 
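For this particular model the spectrum is available in closed form: the kinetic term $\tau_z$ and the pairing term $\tau_y\sigma_y$ anticommute, so the quasiparticle energies are $\pm\sqrt{\varepsilon_{\bm k}^2+\lambda_{\bm k}^2}$ (each branch doubly degenerate), and the nodes of Eq.~\eqref{eq:p2m_model} are exactly the zeros of the gap function sketched below.

```python
import math

def gap(kx, ky, kz, mu=1.0):
    """BdG gap 2*sqrt(eps^2 + lam^2) of the model in Eq. (eq:p2m_model).
    Since tau_z and tau_y*sigma_y anticommute, the spectrum is
    +-sqrt(eps^2 + lam^2), each branch twofold degenerate."""
    eps = 3 - math.cos(kx) - math.cos(ky) - math.cos(kz) - mu
    lam = (math.sin(kx) + 2 * math.sin(kz)) * math.sin(ky)
    return 2 * math.hypot(eps, lam)

# node on the C2 rotation axis (the 1-cell D^1) at ky = pi/2 for mu = 1
on_axis = gap(0.0, math.pi / 2, 0.0)
# node in the mirror plane ky = 0, where cos kx + cos kz = 1
in_mirror = gap(math.pi / 3, 0.0, math.pi / 3)
```

Both values vanish (up to floating-point rounding), while generic momenta are gapped, reproducing the line node pinned to the rotation axis and the line node in the mirror plane.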
This indicates that gapless points exist on the 1-cells $a,b,a_1,b_1$, and $D^1$. As discussed in Sec.~\ref{sec5:p2mBg}, the gapless point on the 1-cell $b$ is part of line nodes in the mirror plane. Similarly, gapless points on the 1-cells $a, a_1,$ and $b_1$ are also extended to their adjacent 2-cells in the plane. Taking into account symmetry relations among 2-cells, we find that a line node in the mirror plane encircles the $\Gamma$ point. On the other hand, we have shown that the gapless point on the rotation axis is also part of a line node pinned to the axis. We verify that our method correctly captures the nodes of the tight-binding model shown in Fig.~\ref{fig:p2m}. \begin{figure}[t] \begin{center} \includegraphics[width=0.7\columnwidth]{p2m_node.pdf} \caption{\label{fig:p2m}The nodal lines of the tight-binding model in Eq.~\eqref{eq:p2m_model} for $\mu = +1$. The blue plane is the mirror symmetric plane.} \end{center} \end{figure} \subsection{Material example} In this subsection, we apply the above algorithm to the realistic superconductor CaPtAs, whose MSG is $I4_1md1'$. A recent experiment~\cite{PhysRevLett.124.207001} has reported time-reversal symmetry breaking and signatures of point nodes. The broken TRS indicates that the order parameter belongs to the $^1E$ or $^2E$ representation of the point group $C_4$. Then, MSG $I4_1md1'$ is reduced to $I4_1$. Here, we assume that the superconducting gap belongs to the $^1E$ representation. Ref.~\onlinecite{Ono-Po-Shiozaki2020} has computed irreducible representations by QUANTUM-ESPRESSO~\cite{qe1,qe2} and \textit{qeirreps}~\cite{qeirreps} and found that $\mathfrak{p}_{\Gamma}^{4} = 1$ and $\mathfrak{N}_{\Gamma}^{1} = -\mathfrak{N}_{\Gamma}^{3}=-1$, where the labels of irreducible representations follow Table~\ref{tab:P4}. Then, the set of band labels $\mathfrak{n}^{(0)}$ corresponds to $-\mathfrak{b}_{\Gamma,1}^{(0)} + \mathfrak{b}_{\Gamma,4}^{(0)}$. 
In the following, we show that this superconducting material is expected to have small Bogoliubov Fermi surfaces. We check if this material satisfies compatibility relations, i.e., $d_{1}^{0,0}(\mathfrak{n}^{(0)}) = \bm{0}$. After computing $d_{1}^{0,0}(\mathfrak{n}^{(0)})$, we find $d_{1}^{0,0}(\mathfrak{n}^{(0)}) = -\mathfrak{b}_{D^1,1}^{(1)}+\mathfrak{b}_{D^1,3}^{(1)}$, where $D^1$ denotes the rotation-symmetric line between $\Gamma=(0,0,0)$ and $\text{Z}=(0,0,2\pi)$. In fact, the symmetry setting on this line is exactly the same as that in Sec.~\ref{sec5:p4}, and hence the nodal structures are also the same. Since $d_{1}^{0,0}(\mathfrak{n}^{(0)})$ corresponds to a set of band labels listed in Eq.~\eqref{eq:p4-point}, we expect that this material has small Bogoliubov Fermi surfaces as discussed in Sec.~\ref{sec5:p4} (see Fig.~\ref{fig:p4} (b)). Our result does not necessarily contradict the experimental observation. Since the superconducting gaps in most superconductors are considered to be very small, it is natural to expect that the Bogoliubov Fermi surfaces are also small. Further experiments to distinguish between this case and \textit{exact} point nodes are awaited. \section{Further extension to nodes at generic points} \label{sec7} Thus far, we have focused on nodes pinned to 1-cells. However, in general, nodes can exist at generic points. In this section, we discuss how to extend our symmetry-based approach to nodes at generic points through the example of the mirror plane in MSG $P2/m1'$ with $B_u$ pairing. Here, we decompose the mirror plane following the cell decomposition in Fig.~\ref{fig:p4mm} and discuss the 1-cell denoted by $b$ there. After applying the method in Sec.~\ref{sec4} to the 1-cell, we find that the classification of gapless points is $\mathbb{Z}$. 
The generating Hamiltonian is \begin{align} H_{(k_1,k_2)} &= k_1 \tau_y + k_2 \tau_{x}\sigma_z+\delta k_3\tau_z,\\ \sigma(I\mathcal{C}) &= i \tau_y K, \\ \sigma(I\mathcal{T}) &= i \tau_z \sigma_y K,\\ \sigma(M_y) &= i \tau_z \sigma_x, \end{align} where $k_1$ is perpendicular to both the mirror plane and the 1-cell, $k_2$ is perpendicular to the 1-cell but parallel to the mirror plane, and $\delta k_3$ is a displacement from the gapless point in the direction of the 1-cell. The gapless point is protected by the mirror winding number~\cite{PhysRevLett.113.046401}. On the other hand, since the EAZ class at the 1-cell is class AIII, there are no topological invariants, which implies that gapless points pinned to the 1-cell do not exist. In fact, we can add symmetric perturbation terms which shift the gapless point in the $k_2$-direction. Therefore, gapless points can locally exist everywhere in the mirror plane. The question is whether these gapless points are globally stable. In the following, we show that only two gapless points can globally exist in the plane. To explain this, let us suppose that there are four gapless points in the plane as shown in Fig.~\ref{fig:mirror}. Since $C_{2}^{y}$ anticommutes with PHS, $C_{2}^{y}$ changes the sign of the winding number (see Appendix~\ref{app:winding}). As discussed above, the gapless points can freely move in the plane, and therefore two winding numbers with opposite signs can be canceled. This indicates that only one pair of gapless points can globally exist. Symmetry indicators in this symmetry class can detect the globally stable gapless points. The symmetry indicator group is $(\mathbb{Z}_2)^2 \times \mathbb{Z}_4$, whose $\mathbb{Z}_2$-parts originate from lower dimensions. 
The $\mathbb{Z}_4$ index is defined by \begin{align} z_4 &= \frac{1}{4}\sum_{K \in \text{TRIMs}}\left(\mathfrak{N}_{K}^{+}-\mathfrak{N}_{K}^{-}\right)\mod 4, \end{align} where $\mathfrak{N}_{K}^{\pm}$ is the band label for irreducible representations $U_{K}^{\pm}(I) = \pm 1$ at the time-reversal invariant momenta (TRIMs). If the system is fully gapped, $z_4 = 1,3$ indicates that the mirror Chern number modulo $2$ equals $1$. However, nontrivial mirror Chern numbers are forbidden in this symmetry setting~\cite{PhysRevLett.111.056403}. Therefore, we conclude that $z_4 = 1,3$ indicates the existence of gapless points. Actually, the above annihilation procedure can be understood as the ``second differential'' $d_{2}^{p,0}$ in the theory of the Atiyah--Hirzebruch Spectral Sequence~\cite{Shiozaki2018}. Although establishing full classifications of nodes at generic points and the relationship between symmetry indicators and the nodes are interesting issues, they are beyond the scope of this paper. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\columnwidth]{mirror_winding.pdf} \caption{\label{fig:mirror}Illustration of the annihilation process of gapless points. Here, white solid circles denote gapless points and $\pm$ represent the sign of the winding numbers.} \end{center} \end{figure} \section{Conclusion and Outlook} \label{sec8} In this work, we have established a systematic framework to classify superconducting nodes pinned to any line in momentum space. After decomposing the BZ of all MSGs into points (0-cells), lines (1-cells), planes (2-cells), and polyhedra (3-cells), we have applied our method to the lines and obtained comprehensive classifications of nodes pinned to the lines. Moreover, our theory has resulted in a highly efficient way to diagnose nodes in superconducting materials. As a demonstration, we have analyzed the nodes in CaPtAs assuming a time-reversal-breaking pairing and pointed out that this material can have small Bogoliubov Fermi surfaces. 
Our work opens up various possibilities for future studies. Although our results cover a wide range of nodes, nodes at generic points are missing, as discussed in Sec.~\ref{sec7}. The symmetry-based approach can be further refined to detect such nodes, and we leave deriving the full relationships between symmetry indicators and the nodes as future work. This type of study will give us more information about nodes pinned to lines, as follows. Suppose that a system violates compatibility relations, which indicates the existence of nodes pinned to 1-cells as discussed in Sec.~\ref{sec6}. Since we can always forget about the symmetries that impose the violated compatibility relations on the system, we can apply symmetry indicators for lower symmetry classes to the system, as discussed in Ref.~\onlinecite{PhysRevResearch.2.022066}. Then, the symmetry indicators will clarify the topological nature behind the nodes. The integration of our algorithm with DFT calculations enables a comprehensive investigation of nodes in the materials listed in databases. Such studies help to identify possible pairing symmetries of unconventional superconductors compatible with experimental observations. We hope that our study will lead to a deep understanding of superconductivity in discovered superconductors. \begin{acknowledgments} We thank Hoi Chun Po, Shuntaro Sumita, Takuya Nomoto, and Haruki Watanabe for fruitful discussions. In particular, KS thanks Takuya Nomoto for sharing ideas on how the first differential detects the nodal structure in the early stages of the project. SO is also grateful to Yohei Fuji for valuable comments on the manuscript. The work of SO is supported by The ANRI Fellowship and KAKENHI Grant No. JP20J21692 from the Japan Society for the Promotion of Science. The work of KS is supported by PRESTO, JST (Grant No. JPMJPR18L4) and CREST, JST (Grant No. JPMJCR19T2). 
\textit{Note added}.---After posting the preprint of this work (arXiv:2102.07676), Ref.~\onlinecite{wu2021symmetryenforced} appeared, which is based on a similar idea but discusses only gapless states in the normal phase. However, this work differs from Ref.~\onlinecite{wu2021symmetryenforced} in terms of the formulation and the mathematical approach. Note that, as stressed in this paper, compatibility relations do not completely determine superconducting nodes pinned to lines in momentum space. Therefore, our unification of compatibility relations and point-node classifications plays a vital role in the classification of superconducting nodes. \end{acknowledgments}
\section{INTRODUCTION} The three millimeter window has recently been exploited to search for high-redshift dusty star-forming galaxies (DSFGs) through the detection of their dust continuum emission (e.g. \citealt{Zavala2018c,Zavala2021a,Gonzalez-Lopez2019a}). Thanks to the negative $K$-correction at this wavelength, 3\,mm-selected galaxies are expected to lie at $z>2$ (at the depth of the current surveys; \citealt{Casey2018b}). The current searches have been enabled by the deep continuum maps obtained after collapsing spectroscopic observations, which are typically conducted in this waveband when targeting CO emission lines in high-redshift galaxies. ``COS-3mm-1'', the source studied in this work ($\alpha=\,$10:02:36.8, $\delta=\,$+02:08:40.6), was serendipitously discovered $\sim25''$ away from the phase center of an ALMA band 3 observation aimed at detecting CO in a galaxy at $z\sim1.5$ (\citealt{Williams2019a}). The galaxy was detected at $8\sigma$ with a 3\,mm flux density of $S_{\rm3mm}=150\rm\,\mu Jy$, and it is just marginally detected at the $\approx2-3\sigma$ level in {\it Spitzer}/IRAC, SCUBA-2 850$\,\mu$m, and VLA 3\,GHz (after deblending the original maps with the ALMA positional prior). The system drops out of deep optical and near-infrared imaging, including {\it HST}/CANDELS, Subaru, and UltraVISTA, and it is also undetected in {\it Spitzer}/MIPS and in all the {\it Herschel} bands (\citealt{Williams2019a}). All these photometric data points were used by \citet{Williams2019a} to constrain the SED of the galaxy and its photometric redshift. The SED-fitting results suggest a high-redshift solution of $z_{\rm phot}=5.5^{+1.2}_{-1.1}$, with a 90\% probability of lying at $z>4.1$.\\ \section{ARCHIVAL DATA} With the aim of finding additional data to further constrain the redshift of this galaxy, I search for public ALMA observations around the position of ``COS-3mm-1'' using the ALMA Science Archive Query. 
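Archive cone searches typically take decimal-degree coordinates, so the sexagesimal position quoted above must first be converted (a stdlib sketch; the archive query itself, e.g. via the \texttt{astroquery} ALMA module, is omitted here):

```python
def hms_to_deg(h, m, s):
    """Right ascension in sexagesimal hours -> decimal degrees."""
    return 15.0 * (h + m / 60.0 + s / 3600.0)

def dms_to_deg(d, m, s, sign=+1):
    """Declination in sexagesimal degrees -> decimal degrees."""
    return sign * (d + m / 60.0 + s / 3600.0)

ra = hms_to_deg(10, 2, 36.8)   # alpha = 10:02:36.8
dec = dms_to_deg(2, 8, 40.6)   # delta = +02:08:40.6
print(round(ra, 4), round(dec, 4))  # 150.6533 2.1446
```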
Besides the original data in which the source was identified (ALMA project code: 2018.1.01739.S; PI: C. Williams), an additional program (2015.1.00861.S; PI: J. Silverman) was publicly available in the archive (see \citealt{Silverman2018a} for details on the data). These ALMA observations are centered $\sim40''$ away from the position of the source of interest. The primary beam response at the position of ``COS-3mm-1'' is around $0.29$. The data, which cover the frequencies $85.5-89.4\,$GHz and $97.7-101.5\,$GHz, were re-analyzed following the standard ALMA pipeline scripts. Then, to search for emission lines, the spectral windows were imaged using natural weighting of the visibilities in order to maximize the signal-to-noise ratio (SNR). The RMS reached over a $100\rm\,km\,s^{-1}$ channel width was around 0.05\,mJy/beam.\\ \section{RESULTS} A tentative emission line was found at $\nu=100.84$\,GHz (Figure \ref{fig1}). The peak SNR is estimated to be around $3.0$, while the SNR of the integrated line is $\approx4.7$. The line is relatively well fitted by a Gaussian function with a line-width of $550\pm110\,\rm km\,s^{-1}$ and an integrated flux density of $0.08\pm0.02\,\rm Jy\,km\,s^{-1}$. As revealed by the line moment-0 map shown in Figure \ref{fig1}, the spatial location of the line is coincident with the 3\,mm continuum detection from \citet{Williams2019a}, implying that, if real, this emission line arises from the 3\,mm-selected galaxy. Assuming the line is real and that it corresponds to a transition of carbon monoxide (the most common emission line in DSFGs), the most likely redshift solutions based on its photometric redshift constraints are $z=4.715$, $5.857$, and $6.999$, corresponding to the $\rm ^{12}CO(5\to4)$, $\rm ^{12}CO(6\to5)$, and $\rm ^{12}CO(7\to6)$ transitions, respectively. 
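As a cross-check, the three redshift solutions quoted above follow directly from $z=\nu_{\rm rest}/\nu_{\rm obs}-1$ with the standard CO rotational rest frequencies (values below taken from standard spectral-line catalogs):

```python
# CO rotational rest frequencies in GHz (from standard line catalogs)
CO_REST = {5: 576.2679, 6: 691.4731, 7: 806.6518}

nu_obs = 100.84  # GHz, frequency of the tentative emission line

# redshift solution for each candidate CO(J -> J-1) identification
for J in sorted(CO_REST):
    z = CO_REST[J] / nu_obs - 1
    print(f"CO({J}->{J-1}): z = {z:.3f}")
```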
Here, I adopt $z=5.857\pm0.001$\footnote{Interestingly, ``MAMBO-9'', another DSFG $\sim0.5$\,deg away from the source studied in this work (equivalent to around $10\rm\,Mpc$), lies at $z=5.850$ (\citealt{Casey2019a}). The confirmation of more galaxies with similar redshifts within this region of the sky might imply the existence of a large-scale galaxy proto-cluster structure. } as the redshift of the source since it is the closest solution to the maximum of the posterior redshift distribution found by \citet{Williams2019a}.\\ \begin{figure}[t] \begin{center} \vspace{0.5cm} \includegraphics[scale=0.75]{fig1_v3.png} \caption{Band 3 spectrum extracted at the position of ``COS-3mm-1''. The orange line represents the best-fit Gaussian function to the tentative emission line. The center frequency of the line, the line-width, and the integrated line flux density are also indicated in the figure. The inset (black and white) plot shows the 3\,mm continuum detection from \citet{Williams2019a} with blue contours indicating the moment zero map of the line at 1, 2, and 3$\sigma$. \label{fig1}} \end{center} \end{figure} The $^{12}\rm CO(6\to5)$ line luminosity of $L'_{\rm CO(6\to5)}=(2.55\pm 0.52)\times10^{9}\,\rm K\,km\,s^{-1}\,pc^2$ can be used to calculate the molecular gas mass of ``COS-3mm-1'' by, first, estimating the $\rm CO(1\to0)$ line luminosity and, then, assuming a CO-to-H$_2$ conversion factor. I assume the line luminosity ratio reported by \citet{Bothwell2013a} ($L'_{\rm CO(6\to5)}/L'_{\rm CO(1\to0)}=0.21\pm0.04$) and a CO-to-H$_2$ conversion factor of $\alpha_{\rm CO}=4.6\,\rm M_\odot\,(K\,km\,s^{-1}\,pc^2)^{-1}$. Based on these assumptions, I infer a gas mass of $M_{\rm gas}= (6\pm 2)\times 10^{10}\rm M_\odot$, which implies a depletion time scale ($t_{\rm dep}\equiv M_{\rm gas}/\rm SFR$) of $\sim200\rm\,Myr$ (adopting the star formation rate of $\sim300\,\rm M_\odot\,yr^{-1}$ derived by \citealt{Williams2019a}). 
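The gas-mass and depletion-time arithmetic above can be reproduced in a few lines (input values as quoted in the text; the quoted uncertainties are not propagated here):

```python
# Inputs quoted in the text
L_co65 = 2.55e9      # L'_CO(6-5) in K km/s pc^2
r65 = 0.21           # L'_CO(6-5) / L'_CO(1-0) (Bothwell et al. 2013)
alpha_co = 4.6       # Msun per (K km/s pc^2)
sfr = 300.0          # Msun/yr (Williams et al. 2019)

L_co10 = L_co65 / r65          # inferred CO(1-0) line luminosity
M_gas = alpha_co * L_co10      # molecular gas mass, ~6e10 Msun
t_dep = M_gas / sfr / 1e6      # depletion time in Myr, ~200 Myr
print(f"M_gas = {M_gas:.1e} Msun, t_dep = {t_dep:.0f} Myr")
```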
This value is higher than the typical gas depletion timescales estimated for the bright population of high-redshift DSFGs, which are on the order of a few tens of Myr (e.g. \citealt{Aravena2016a}). Hence, ``COS-3mm-1'' is probably undergoing a long-lived star formation phase instead of the typical short-lived starburst episodes triggered by galaxy mergers (see discussion by \citealt{Jimenez2020a}). This relatively long gas depletion timescale is in better agreement with those estimated for isolated star-forming disks at lower redshifts, and might imply that the population of faint DSFGs (with $\rm SFR\lesssim500\rm\,M_\odot\,yr^{-1}$) forms stars through a smooth star formation mode over hundreds of Myr. Nevertheless, the low significance of this detection prevents a robust confirmation of the redshift of this source and its physical properties. Follow-up observations are thus required not only to confirm this line, but also to identify a second line to unambiguously constrain its redshift. At $z=5.85$ the $^{12}\rm CO(5\to4)$ transition is redshifted to $\sim84.0$\,GHz, very close to the low frequency edge of the ALMA Band 3, making its detection with ALMA unfeasible. The $^{12}\rm CO(5\to4)$ line is better suited for NOEMA (NOrthern Extended Millimeter Array) or LMT (Large Millimeter Telescope) observations, although its detection might require exposure times of several hours. The $^{12}\rm CO(7\to6)$ transition is redshifted to $\sim117.6\,$GHz, a frequency covered only by the NOEMA interferometer. The [CII]-$158\mu$m transition will be observed at $\sim277.2\,$GHz, within the ALMA Band 7 coverage. Given that [CII] is one of the strongest FIR lines, its detection represents a promising way to confirm the redshift of this source. 
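The observed frequencies quoted for these follow-up lines follow from $\nu_{\rm obs}=\nu_{\rm rest}/(1+z)$ at the adopted $z=5.857$ (rest frequencies below from standard spectral-line catalogs):

```python
REST = {                       # rest frequencies in GHz
    "CO(5-4)": 576.2679,
    "CO(7-6)": 806.6518,
    "[CII]158um": 1900.5369,
}
z = 5.857  # adopted redshift

for line, nu_rest in REST.items():
    print(f"{line}: nu_obs = {nu_rest / (1 + z):.1f} GHz")
```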
Other emission lines like the [OIII]-$88\mu$m or the [OI]-$63\mu$m, which have previously been detected in high-redshift galaxies (\citealt{Hashimoto2018a,Rybak2020a}), also lie within the ALMA coverage, but their expected line luminosities are more uncertain and the lines are redshifted into the high-frequency ALMA bands, which require very good observing conditions. Alternatively, the redshift of this source can be confirmed with JWST observations, through the detection of rest-frame ultraviolet and optical emission lines, such as Ly$\alpha$, [OII]$\,\lambda3727$, H$\beta$, [OIII]$\,\lambda\lambda4959,5007$, and/or H$\alpha$.\\ \section{CONCLUSIONS} The tentative emission line reported in this work suggests that, if real, ``COS-3mm-1'' is the highest redshift galaxy selected at 3\,mm and one of the highest-redshift DSFGs known to date. The only three DSFGs identified via (sub-)millimeter observations with higher spectroscopic redshifts are all gravitationally lensed and, with the exception of G09-83808 (\citealt{Zavala2018a}), they are rare extreme galaxies with SFRs exceeding $1000\,\rm M_\odot\,yr^{-1}$ (\citealt{Riechers2013a,Strandet2017a}). Therefore, this source could, potentially, serve as an important benchmark for $z>5$ DSFG studies for the whole extragalactic community, particularly if it is confirmed to be at the same redshift as ``MAMBO-9'' (\citealt{Casey2019a}), which would suggest the existence of a galaxy proto-cluster at $z=5.85$ in this field. The confirmation of its redshift and its physical properties is thus imperative. \acknowledgments Jorge A. Zavala thanks Christina Williams, Caitlin Casey, and Sinclaire Manning for their feedback on this manuscript. \section*{DATA AVAILABILITY} This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2015.1.00861.S and ADS/JAO.ALMA \#2018.1.01739.S, archived at \url{https://almascience.nrao.edu/alma-data/archive}. \newpage
\section*{Introduction} Given an abelian variety $A$ defined over a number field $\mathcal K$, its global root number $w(A/\mathcal K)$ is the sign appearing in the conjectural functional equation of its completed $L$-function. Granting the general Birch--Swinnerton-Dyer conjecture, ${w(A/\mathcal K)=-1}$ exactly when the Mordell--Weil rank is odd. Due to Deligne \autocite{deligne_eq_fonctionelle}, we can define $w(A/\mathcal K)$ unconditionally by computing the local root numbers $w(A_v/\mathcal K_v)$ of $A$ at each place $v$ of $\mathcal K$. For each infinite place we have $w(A_v/\mathcal K_v)=(-1)^{\dim A}$. If $A$ has good reduction at a finite place $v$, then $w(A_v/\mathcal K_v)=1$. This allows us to define \[w(A/\mathcal K)=\prod_v w(A_v/\mathcal K_v),\] the product being taken over all places of $\mathcal K$. The local root numbers at places of bad reduction are signs $\pm1$ and are defined in a general way as we explain next. Let $p$ be a prime number, let $K/\mathbb Q_p$ be a finite extension with an algebraic closure $\overline{K}$, and let $A/K$ be an abelian variety. We choose another prime number $\ell\neq p$ and consider the $\ell$-adic Galois representation $\rho_\ell$ on the étale cohomology group $H^1_{\et}(A_{\overline{K}},\mathbb Q_\ell)$. Applying Grothendieck's monodromy construction we obtain a complex Weil--Deligne representation $\WD(\rho_\ell)$, whose isomorphism class does not depend on $\ell$ (see, e.g., \autocite[Cor.~1.15]{sabitova_root}). Next, following Deligne, after choosing an additive character $\psi$ on $K$ and a Haar measure $\mathop{}\!\mathrm{d} x$ on $K$, we consider the $\epsilon$-factor $\epsilon(\WD(\rho_\ell),\psi,\mathop{}\!\mathrm{d} x)\in\mathbb C^\times$. 
The local root number is then defined as \[w(A/K):=\frac{\epsilon(\WD(\rho_\ell),\psi,\mathop{}\!\mathrm{d} x)}{|\epsilon(\WD(\rho_\ell),\psi,\mathop{}\!\mathrm{d} x)|}.\] We note that $w(A/K)$ does not depend on $\ell$, $\psi$, or $\mathop{}\!\mathrm{d} x$; see, e.g., \autocite[\S11,\S12]{rohrlich}. It follows (see, e.g., \autocite[Prop.~3.1]{chai_semiab}) from the semi-stable reduction theorems and the theory of $p$-adic uniformization that there exists an abelian variety $B/K$ with potentially good reduction and an extension $S$ of $B$ by a torus $T$ such that the rigid analytification of $A$ is a quotient of the analytification of $S$ by a lattice. Then, it follows from the result of Sabitova \autocite[Prop.~1.10]{sabitova_root} that $w(A/K)$ can be determined by computing $w(B/K)$ and the Galois action on $T$. In this paper we treat the case when $A/K$ itself has potentially good reduction. This condition is equivalent to $T=0$ and, by the criterion of Néron--Ogg--Shafarevich, to the condition that the image of inertia via $\rho_\ell$ is finite. By Serre--Tate \autocite[p.~497, Cor.~2]{serre_tate} the representation $\rho_\ell$ is at most tamely ramified whenever $p> 2\dim(A)+1$. If $A/K$ is an elliptic curve with potentially good reduction, formulas for root numbers have been given by Rohrlich \autocite{rohrlich_formulas} when $p\geq5$, by Kobayashi \autocite{kobayashi} when $p=3$, and by the Dokchitsers \autocite{dd_root_ellc2} when $p=2$. The case of Jacobians having semistable reduction has been studied by Brumer--Kramer--Sabitova \autocite{brumer_kramer_sabitova}. For general abelian varieties, the case when $\rho_\ell$ is tamely ramified has been studied by Bisatt \autocite{bisatt}. \subsection{The main setup and results} We consider a curve $C$ of genus $2$ defined over a $5$-adic field $K$. Let $J(C)/K$ be its Jacobian surface. Our aim is to produce a formula for $w(C/K):=w(J(C)/K)$ in terms of other invariants of $C/K$. 
We suppose that $J(C)/K$ has potentially good reduction and that the associated Galois representation $\rho_\ell$ is wildly ramified. We suppose further that $\rho_\ell$ has the maximal possible inertia image, isomorphic to the semi-direct product $C_5\rtimes C_8$ where $C_8$ acts on $C_5$ via $C_8\twoheadrightarrow \Aut(C_5)$. By choosing a Weierstrass equation we define a discriminant $\Delta\in K^\times$, whose class in $K^\times/(K^\times)^2$ does not depend on the choice of the equation, see \ref{subs:g2_var_change}. Let $k_K$ be the residue field of $K$. We denote by $\lege{\cdot}{k_K}$ the Legendre symbol on $k^\times_K$ and by $(\cdot,\cdot)_K$ the quadratic Hilbert symbol on $K^\times\times K^\times$. We denote by $v_K$ the valuation of $K$ normalized so that $v_K(K^{\times})=\mathbb Z$. Let $F_5$ denote the Frobenius group on $5$ elements, defined as the semi-direct product $F_5=C_5\rtimes C_4$ where $C_4$ acts faithfully on $C_5$. \begin{thm}[{Prop.~\ref{prop:hyperell_eq_spec}, Prop.~\ref{prop:hyperell_max_equiv}, Thm.~\ref{thm:max_ramif_rootN}}]\label{thm:max_g2_statement} Let $C/K$ be a smooth projective curve of genus $2$ defined over a $5$-adic field $K$. We suppose that the associated $\rho_\ell$ has finite inertia image of order divisible by $5$. There exists an equation $Y^2=P(X)$ defining $C/K$ with monic, irreducible $P\in K[X]$ of degree $5$ having integral coefficients and a constant term $a_6$ of valuation prime to $5$. The image of inertia of $\rho_\ell$ is the maximal possible, i.e. isomorphic to $C_5\rtimes C_8$, if and only if any of the following equivalent conditions holds: \begin{enumerate} \item For any discriminant $\Delta$ of $C/K$ the valuation $v_K(\Delta)$ is odd; \item The $\mathbb F_2$-linear Galois representation on the $2$-torsion points $J(C)[2]$ has inertia image isomorphic to the Frobenius group $F_5$; \item The Artin conductor $a(C/K)$ of $\rho_\ell$ is odd. 
\end{enumerate} In this case, the root number is given by \[w(C/K)=(-1)^{[k_K:\mathbb F_5]+1}\cdot \lege{v_K(a_6)}{k_K}\cdot(\Delta,a_6)_K.\] \end{thm} \begin{rem} The setting of Thm.~\ref{thm:max_g2_statement} is a particular case of the study by Coppola \autocite{coppola_max}, where a description of $\rho_\ell$ is given. Very recently, building on Coppola's results, Bisatt \autocite[Thm.~2.1]{bisatt_wild} produced similar formulas for root numbers of hyperelliptic curves.\end{rem} \subsection{Structure of the paper} In Section~\ref{sect:wild_char_rootN} we recall some theory of $\epsilon$-factors of one-dimensional Weil representations and give formulas for root numbers in some wild ramification cases by using explicit local class field theory. In Section~\ref{sect:wild_jac2} we present some properties of genus $2$ curves with wild ramification. In Section~\ref{sect:spec_f} we employ Artin--Schreier theory in order to study $\rho_\ell$ via the automorphisms of curves over finite fields. In Section~\ref{sect:max_inertia} we prove a few characterizations of the maximal ramification case and exploit some of its implications. Section~\ref{sect:max_proof} is dedicated to proving the formula of Thm.~\ref{thm:max_g2_statement}, where we connect the results of Section~\ref{sect:wild_char_rootN} to a particular Weierstrass equation. \subsection*{Acknowledgements} I thank my thesis advisors Adriano Marmora and Rutger Noot as well as Kęstutis Česnavičius and Takeshi Saito for their remarks and suggestions concerning the manuscript. I also thank Jeff Yelton for answering my questions about his results on the splitting fields of the $4$-torsion of Jacobians. The study presented in this paper constitutes a part of my doctoral thesis. The results of Thm.~\ref{thm:max_g2_statement} have been obtained independently of the preprint \autocite{bisatt_wild}. \section*{Notation and conventions} \begin{comment} Let $p$ be a prime number. 
\end{comment} Let $p$ be a prime number, and let $K/\mathbb Q_p$ be a finite extension. We adopt the convention that every algebraic extension of $K$ used in this text is a subfield of $\overline{K}$. We fix the following notation. \vspace{3mm} {\centering \begin{tabular}{cp{0.40\textwidth}|}\label{table:notation_p_adic} $v_K$ & the valuation of $K$ normalized by $v_K(K^\times)=\mathbb Z$;\\ $\mathcal O_K$ & the ring of integers; \\ $\mathfrak{m}_K$ & the maximal ideal;\\ $\varpi_K$ & a uniformizer;\\ $k_K$ & the residue field;\\ $q_K$ & the order $|k_K|$;\\ $\mathfrak{m}^n_K$ & the subgroup $\varpi^n_K\mathcal O_K\subset K$ for any $n\in\mathbb Z$;\\ $U^n_K$ & $1\!+\!\mathfrak{m}^n_K$ for $n\!\geqslant\!1$, and $U^0_K\!=\!\mathcal O_K^\times$; \\ $\lege{\cdot}{k_K}$ & the Legendre symbol on $k_K^\times$;\\ $(\cdot,\cdot)_K$ & the quadratic Hilbert symbol on $K^\times\times K^\times$;\\ \end{tabular} \begin{tabular}{cp{0.35\textwidth}} $\overline{K}$ & an algebraic closure of $K$;\\ $\overline{k}_K$ & the residue field of $\overline{K}$;\\ $\Gamma_K$ & the group $\Gal(\overline{K}/K)$;\\ $W_K$ & the Weil subgroup of $\Gamma_K$; \\ $I_K$ & the inertia subgroup; \\ $I_K^{\wild}$ & the wild inertia subgroup;\\ $\varphi_K$ & a lift in $W_K$ of the geometric Frobenius;\\ $\chi_{\unr}$ & the unramified (cy\-clo\-to\-mic) character $W_{\mathbb Q_p}\to \mathbb C^\times$ such that $\chi_{\unr}(\varphi_K)=q_K^{-1}$ for every finite $K/\mathbb Q_p$.\\ \end{tabular}} \vspace{3mm} By a \textit{Weil representation} on a complex vector space $V$ we mean a group homomorphism $\rho:W_K\to \GL(V)$ such that $\rho(I_K)$ is finite. For any $s\in\mathbb C^\times$, its Tate twist is $\rho(s):=\rho\otimes\chi_{\unr}^s$. Let $\theta_K: K^\times\cong W_K^{\ab}$ be Artin's reciprocity map normalized to send a uniformizer to the class of a geometric Frobenius lift. It follows that $||\cdot||_K:=\chi_{\unr}\circ\theta_K$ is the non-Archimedean norm on $K$ induced by $v_K$. 
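For $K=\mathbb Q_p$ with $p$ odd, the two quadratic symbols fixed in the table above admit elementary closed formulas (standard facts, not specific to this paper): writing $a=p^\alpha u$ and $b=p^\beta v$ with $u,v$ units, one has $(a,b)_p=(-1)^{\alpha\beta\frac{p-1}{2}}\lege{u}{\mathbb F_p}^\beta\lege{v}{\mathbb F_p}^\alpha$. A minimal sketch, assuming $K=\mathbb Q_p$ and integer arguments:

```python
# Minimal sketch for K = Q_p, p an odd prime (standard closed formulas).

def legendre(u, p):
    """Legendre symbol (u/p) for an integer u prime to the odd prime p."""
    return 1 if pow(u % p, (p - 1) // 2, p) == 1 else -1

def hilbert(a, b, p):
    """Quadratic Hilbert symbol (a, b)_p for nonzero integers a, b and odd p.
    Write a = p^alpha * u, b = p^beta * v with u, v prime to p; then
    (a, b)_p = (-1)^(alpha*beta*(p-1)/2) * (u/p)^beta * (v/p)^alpha."""
    alpha, u = 0, a
    while u % p == 0:
        alpha, u = alpha + 1, u // p
    beta, v = 0, b
    while v % p == 0:
        beta, v = beta + 1, v // p
    sign = (-1) ** (alpha * beta * ((p - 1) // 2))
    return sign * legendre(u, p) ** beta * legendre(v, p) ** alpha
```

For instance, `hilbert(2, 5, 5) == -1` since $2$ is a non-square modulo $5$, matching the identity $(u,\varpi_K)_K=\lege{\overline{u}}{k_K}$ used later in the text.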
For every finite Galois extension $L/K$, the map $\theta_K$ induces an isomorphism $\theta_{L/K}:K^\times/\mathcal N_{L/K}(L^\times)\cong \Gal(L/K)^{\ab}$, where $\mathcal N_{L/K}:L^\times \to K^\times$ is the norm map. Given schemes $X$, $S$, $S'$ as well as morphisms $X\to S$ and $S'\to S$, we will write $X_{S'}:=X\times_S S'$, and also $X_{R'}:=X\times_R R':=X_{S'}$ if $S'=\Spec R'$ and $S=\Spec R$ are affine. \section{Root numbers and explicit class field theory}\label{sect:wild_char_rootN} Let $p>2$ be a prime number and let $K/\mathbb Q_p$ be a finite extension. \subsection{Choice of an additive character}\label{subs:add_char_ex} By an \textit{additive character} we mean a locally constant group homomorphism $\psi:K\to\mathbb C^\times$. By $n(\psi)$ we denote the largest integer $n$ such that $\psi$ is trivial on $\mathfrak{m}_K^{-n}$, called the \textit{level} of $\psi$. In order to simplify the computations of the root number we fix a particular character. Let $\psi_k$ be the composition \[\psi_k:\mathcal O_K\twoheadrightarrow k_K\xrightarrow{\tr_{k_K/\mathbb F_p}}\mathbb Z/p\mathbb Z\xhookrightarrow{\exp\left(\frac{2\pi i}{p}\cdot\right)}\mathbb C^\times.\] We see that $\psi_k$ is trivial on $\mathfrak{m}_K$. Since $\mathbb C^\times$ is divisible, $\psi_k$ can be extended non-uniquely to an additive character of $K$, which we again denote by $\psi_k$. Since $\tr_{k_K/\mathbb F_p}$ is nontrivial, independently of the choice of the extension, we have $n(\psi_k)=-1$. Moreover, every additive character of level $-1$ is given by $x\mapsto\psi_k(cx)$ for some $c\in\mathcal O_K^\times$. \subsection{$\psi$-gauges of Weil characters}\label{subs:gauges} Let $\chi:W_K\to\mathbb C^\times$ be a one-dimensional ramified Weil representation. We identify $\chi$ with a quasi-character of $K^\times$ via $\theta_K$. The Artin conductor $a(\chi)$ is the smallest integer $a$ such that $\chi$ is trivial on $U_K^a$. 
Let $n:=\big\lfloor\frac{a(\chi)+1}{2}\big\rfloor.$ The map $x\mapsto \chi(1+x)$ defined for $x\in\mathfrak{m}_K^{n}$ is additive, trivial on $\mathfrak{m}_K^{a(\chi)}$, and extends to an additive character $\psi_\chi$ of $K$ with $n(\psi_\chi)=a(\chi)$. Let $\varpi_K\in\mathfrak{m}_K$ be a uniformizer. Then $x\mapsto \psi_\chi(\varpi_K^{a(\chi)-1}x)$ has level $-1$. Thus, there exists $c_\chi \in K^\times$, called a \textit{$\psi_k$-gauge of $\chi$}, of valuation $-a(\chi)+1$, unique modulo $\mathfrak{m}_K^{-n+1}$, such that for all $x\in\mathfrak{m}_K^n$, \begin{equation}\label{eq:gauge_def}\chi(1+x)=\psi_k(c_{\chi}x).\end{equation} \subsection{Epsilon factors of characters}\label{subs:root_n_char} In addition to the setting of \ref{subs:gauges}, we fix a Haar measure $\mathop{}\!\mathrm{d} x$ on $K$. We recall from \autocite[(3.4.3.2)]{deligne_eq_fonctionelle} that the \textit{$\epsilon$-factor} of $\chi$ relative to $\psi_k$ and $\mathop{}\!\mathrm{d} x$ is defined as the integral \begin{equation}\label{eq:eps_def}\epsilon(\chi,\psi_k,\mathop{}\!\mathrm{d} x):= \int_{\varpi_K^{-a(\chi)+1}\mathcal O_K^\times}\chi^{-1}(x)\psi_k(x)\mathop{}\!\mathrm{d} x.\end{equation} We will be mainly interested in the \textit{root number} \[w(\chi,\psi_k):=\frac{\epsilon(\chi,\psi_k,\mathop{}\!\mathrm{d} x)}{|\epsilon(\chi,\psi_k,\mathop{}\!\mathrm{d} x)|},\] which does not depend on $\mathop{}\!\mathrm{d} x$. For $a,b\in\mathbb C^\times$ we will write $a\approx b$ whenever $ab^{-1}$ is contained in the subgroup of $\mathbb C^\times$ generated by positive real numbers and the complex roots of unity of $p$-power orders. We note that if $p\neq2$ and $a,b\in\{-1,1\}$ are such that $a\approx b$, then $a=b$. \subsection{}\label{sect:weil_char_to_finite} The group $\chi(I_K)$ is finite and cyclic; we denote its order by $ep^r$ with $e$ prime to $p$. We view the restriction $\chi |_{I_K}$ as a character of the group $\Gal(K^{\ab}/K^{\unr})$. 
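In the simplest tame case, $K=\mathbb Q_p$ with $\chi$ the quadratic residue character (so $a(\chi)=1$), the integral \eqref{eq:eps_def} collapses, up to the Haar-measure normalization, to a classical quadratic Gauss sum. A toy numerical sketch of this case (my own normalization, for illustration only):

```python
# For K = Q_p, chi the quadratic character mod p, psi_k(x) = exp(2*pi*i*x/p):
# epsilon is proportional to G = sum_{x=1}^{p-1} (x/p) exp(2*pi*i*x/p),
# and the root number is w = G / |G| with |G| = sqrt(p) (Gauss).
import cmath
import math

def legendre(x, p):
    """Legendre symbol (x/p) for x prime to the odd prime p."""
    return 1 if pow(x % p, (p - 1) // 2, p) == 1 else -1

def gauss_sum(p):
    """Classical quadratic Gauss sum for the odd prime p."""
    return sum(legendre(x, p) * cmath.exp(2j * math.pi * x / p)
               for x in range(1, p))

# Gauss's theorem: G = sqrt(p) if p = 1 mod 4 (w = 1),
#                  G = i*sqrt(p) if p = 3 mod 4 (w = i).
w = {p: gauss_sum(p) / abs(gauss_sum(p)) for p in (3, 5, 13)}
```

This also previews the sums $G_{\psi_k}$ and the identity $G_{\psi_k}^2=\lege{-1}{k_K}q_K$ invoked later in the proof of the main epsilon-factor theorem.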
The group $\ker(\chi|_{I_K})$ cuts out an abelian extension $L'/K$ containing $K^{\unr}$. The closure of the subgroup generated by $\varphi_K$ in $\Gal(L'/K)$ cuts out a totally ramified abelian extension $L/K$, such that $L'=LK^{\unr}$. We then have canonical isomorphisms \[\bigslant{\Gal(K^{\ab}/K^{\unr})}{\ker(\chi|_{I_K})}\cong \Gal(L'/K^{\unr})\cong \Gal(L/K).\] The restriction $\chi|_{I_K}$ induces a faithful complex one-dimensional representation of the finite group $\Gal(L/K)$, and thus $\Gal(L/K)$ must be cyclic. Let $M/K$ be the unique subextension of $L/K$ of degree $p^r$. Then $\chi^e|_{I_K}$ has order $p^r$ and induces a faithful character of the cyclic group $\Gal(M/K)$. The following is an amalgamation of some of the results of \autocite{kobayashi} and \autocite{abbes_saito}. \begin{thm}\label{thm:eps_gauges} Let $\psi_k$ be as in \ref{subs:add_char_ex}. Let $\chi:W_K\to \mathbb C^\times$ be a Weil character such that $|\chi(I_K^{\wild})|=p$. Let $ep=|\chi(I_K)|$ with $e$ prime to $p$. Let $M/K$ be as in \ref{sect:weil_char_to_finite}. We denote by $\sigma\in\Gal(M/K)$ the generator that is sent to $\exp(\frac{2\pi i}{p})$ via $\chi$. Let $\varpi_M$ be a uniformizer of $M$, and let $\delta_\chi:= \mathcal N_{M/K}(1-\frac{\sigma(\varpi_M)}{\varpi_M})$. Let us write $\delta_\chi=u\varpi_K^{v_K(\delta_\chi)}$ with $u\in\mathcal O_K^\times$, whose class in $k_K^\times$ we denote by $\overline{u}$. \begin{enumerate}\item If $a(\chi)$ is even, then $\epsilon(\chi,\psi_k,\mathop{}\!\mathrm{d} x)\approx \chi(\delta_\chi)$; \item If $a(\chi)$ is odd, and $p\equiv 1\bmod 4$, then \[\epsilon(\chi,\psi_k,\mathop{}\!\mathrm{d} x)\approx -\chi(\delta_\chi)\cdot\lege{2\overline{u}}{k_K}\cdot(-1)^{[k_K:\mathbb F_p]}.\] \end{enumerate} \end{thm} \begin{lem}\label{lem:delta_gauge} We have $v_K(\delta_\chi)=a(\chi)-1$ and $c_\chi\delta_\chi \in U^1_K$. In particular, $\chi^{-1}(c_\chi)\approx \chi(\delta_\chi)$. 
\end{lem} \begin{proof} The lemma is essentially proved in \autocite[p.~618]{kobayashi}. We adapt Kobayashi's argument in our setting. Let $t$ be the largest integer such that the $t$-th ramification subgroup $G_t$ of $\Gal(M/K)$ is nontrivial. We then have $G^t=G_t=\Gal(M/K)$ and $G^{t'}=\{1\}$ for $t'>t$, see \autocite[V.\S3]{serre_localfields}. The reciprocity map (see \autocite[XV.\S2]{serre_localfields}) and $\chi$ induce a commutative diagram \begin{equation} \begin{tikzcd} \bigslant{U_K^t}{U_K^{t+1}\mathcal N_{M/K}(U_M^t)} \arrow[r, "\sim"] & G^t= \Gal(M/K) \arrow[r, hookrightarrow ,"\chi^e|_{I_K}"] & \mathbb C^\times \\ U_K^t \arrow[u, two heads] \arrow[r, hookrightarrow] & \Gal(K^{\ab}/K^{\unr}) \arrow[u, two heads] \arrow[r, "\chi|_{I_K}"] & \arrow[u, "z\mapsto z^e"'] \mathbb C^\times. \end{tikzcd}\end{equation} As $e$ is prime to $p$ we observe that $a(\chi)=a(\chi^e)$ and that $a(\chi^e)=t+1$. Since $\sigma\in G_t\!\setminus\! G_{t+1}$, by using \autocite[IV.Prop.~5]{serre_localfields} we obtain \begin{equation}v_K(\delta_\chi)=v_M(1-\frac{\sigma(\varpi_M)}{\varpi_M})=t=a(\chi)-1.\end{equation} Applying \autocite[XV.\S3,~Exercise~1]{serre_localfields} shows that for all $v\in U_K^t$, \begin{equation}\label{eq:explicit_lcft} \theta_{M/K}(v) = \sigma^{\tr_{k_K/\mathbb F_p}\left((v-1)/\delta_\chi \bmod \mathfrak{m}_K \right)}. \end{equation} For every $x\in\mathfrak{m}_K^{a(\chi)-1}\subseteq\mathfrak{m}_K^n$, taking the image of \eqref{eq:explicit_lcft} by $\chi^e$, we obtain $\chi^e(1+x)=\psi_k(e\delta_\chi^{-1}x)$, and taking the $e$-th power of \eqref{eq:gauge_def} gives $\chi^e(1+x)=\psi_k(ec_\chi x)$. We note that $e\delta_\chi^{-1}\mathfrak{m}_K^{a(\chi)-1}=\mathcal O_K$. Thus, \[\psi_k((1-\delta_\chi c_\chi)\mathcal O_K)=\psi_k((e\delta_{\chi}^{-1}-ec_\chi)\mathfrak{m}_K^{a(\chi)-1})=1.\] Therefore, we must have $1-\delta_\chi c_\chi\in\mathfrak{m}^{-n(\psi_k)}_K=\mathfrak{m}_K$. 
The last part of the lemma follows from the fact that $\chi(U_K^1)$ is a finite $p$-group. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:eps_gauges}] We shall apply \autocite[Prop.~8.7,~(ii)]{abbes_saito}, which allows us to express the epsilon factor using a refined $\psi_k$-gauge $c$ of $\chi$. Abbes--Saito prove that there exists an element $c\in K$, unique modulo $\mathfrak{m}_K^{-n+1}$, such that for every $x\in\mathfrak{m}_K^{a(\chi)-n}$ we have \[\chi\left(1+x+{\textstyle \frac{x^2}{2}}\right)=\psi_k(cx).\] Let $\tau:k_K\to K$ be the Teichmüller lift. We consider the quadratic Gauss sum \[G_{\psi_k}:=\sum_{x\in k_K}\psi_k(\tau(x)^2).\] The formulas $G_{\psi_k}=\sum_{x\in k_K^\times}\lege{x}{k_K}\psi_k(\tau(x))$ and $G_{\psi_k}^2= \lege{-1}{k_K}q_K$ are well-known (see, e.g., \autocite[\S1.1]{gauss_sums_book}). The Abbes--Saito formula \autocite[\nopp{}(8.7.3)]{abbes_saito} can be rewritten as \begin{equation}\label{eq:abbes_saito}\epsilon(\chi,\psi_k,\mathop{}\!\mathrm{d} x)\approx \chi^{-1}(c)\psi_k(c)\lege{-1}{k_K}^{\binom{a(\chi)}{2}}\!\!G_{\psi_k}^{-a(\chi)}\times \begin{cases}1 \!&\text{if $a(\chi)$ is even,}\\ (\!-2c,\varpi_K)_K \!&\text{if $a(\chi)$ is odd}.\end{cases}\end{equation} Since $c$ is also a $\psi_k$-gauge of $\chi$, we have $\chi^{-1}(c)\approx\chi(\delta_\chi)$ by Lemma~\ref{lem:delta_gauge}. For $r\in\mathbb Z$ large enough, $\psi_k(p^rc)=1$, so $\psi_k(c)\approx1$. If $a(\chi)$ is even, then it is straightforward to verify that (1) holds. We assume the hypotheses of (2). Then $\lege{-1}{k_K}=1$, and $G_{\psi_k}\approx-(-1)^{[k_K:\mathbb F_p]}$, see \autocite[Thm.~11.5.4]{gauss_sums_book}. We also have $(-2,\varpi_K)_K=\lege{2}{k_K}$. Taking into account Lemma~\ref{lem:delta_gauge} and making the relevant substitutions into \eqref{eq:abbes_saito} we are left to prove that $(c,\varpi_K)_K=\lege{\overline{u}}{k_K}$. Lemma~\ref{lem:delta_gauge} also shows that $c\in u^{-1}\varpi_K^{-a(\chi)+1}U^1_K$. 
Since $a(\chi)$ is odd and $U^1_K$ is pro-$p$, the Hilbert symbol is trivial on $\varpi_K^{-a(\chi)+1}U^1_K$, thus \[(c,\varpi_K)_K=(u,\varpi_K)_K=\lege{\overline{u}}{k_K}.\qedhere\] \end{proof} \begin{prop}\label{prop:kobayashi_D_delta} We continue in the situation of Thm.~\ref{thm:eps_gauges}. Let $\alpha\in \mathcal O_M$ be such that $p\nmid v_M(\alpha)$, and let $D_{\alpha,\chi}:=\mathcal N_{M/K}\left(1-\frac{\sigma(\alpha)}{\alpha}\right)$. Then \[D_{\alpha,\chi}\equiv v_M(\alpha)\delta_\chi\bmod U_K^1.\] \end{prop} \begin{proof} A detailed proof when $p=3$ can be found in \autocite[p.~614]{kobayashi}. It generalizes for any $p>2$ without significant modifications. \end{proof} \begin{cor}\label{cor:char_rootN_D} If $a(\chi)$ is even and $\alpha\in\mathcal O_M$ is such that $p\nmid v_M(\alpha)$, then \[\epsilon(\chi,\psi_k,\mathop{}\!\mathrm{d} x)\approx\chi\left(\frac{D_{\alpha,\chi}}{v_M(\alpha)}\right).\] \end{cor} \begin{proof} Follows from Thm.~\ref{thm:eps_gauges}.(1) and Prop.~\ref{prop:kobayashi_D_delta}. \end{proof} \section{Curves of genus $2$ and wild ramification}\label{sect:wild_jac2} \subsection{Generalities}\label{subs:curve_g2} Let $K$ be a $p$-adic local field with $p\neq2$, and let $C/K$ be a smooth, projective, and geometrically connected curve of genus 2 defined over $K$. The curve $C/K$ is hyperelliptic (see \autocite[7.~Prop.~4.9]{liu_book}), and there exists an open affine subvariety $C_{\aff}$ of $C$ which is defined by a single Weierstrass equation \begin{equation}\label{eq:hyperell_eq} Y^2=P(X), \end{equation} where $P\in K[X]$ has simple roots and has degree $5$ or $6$. 
\subsection{Discriminants} Following \autocite[\S2]{liu_models} we define the \textit{discriminant} of an equation \eqref{eq:hyperell_eq} in terms of the discriminant of the polynomial $P$: let $a_0$ be the leading coefficient of $4P$, then \begin{equation}\label{eq:disc_def} \Delta(P):=\begin{cases} 2^{-12}\disc(4P) & \text{if }\deg P=6,\\ 2^{-12}a_0^2\disc(4P)& \text{if } \deg P=5.\end{cases} \end{equation} In particular, if $P(X)=(X-\alpha_1)(X-\alpha_2)(X-\alpha_3)(X-\alpha_4)(X-\alpha_5)$, then \begin{equation}\label{eq:disc_unitary} \Delta(P)=2^8\prod_{1\leqslant i<j\leqslant 5}(\alpha_i-\alpha_j)^2. \end{equation} We note that $\Delta(P)\neq0$ since $C$ is non-singular. \subsection{Change of variables}\label{subs:g2_var_change} The equation \eqref{eq:hyperell_eq} is unique up to a transformation \begin{equation}\label{eq:hyperell_change_var} X'=\frac{aX+b}{cX+d}, \hspace{5mm} Y'=\frac{eY}{(cX+d)^3}\end{equation} where $\begin{pmatrix}a&b \\ c&d \end{pmatrix}\in\GL_2(K)$ and $e\in K^\times.$ If $Y'^2=P'(X')$ is obtained from \eqref{eq:hyperell_eq} via \eqref{eq:hyperell_change_var}, then the new discriminant is \begin{equation}\label{eq:var_change_disc}\Delta(P')=e^{20}\begin{vmatrix}a&b\\c&d\end{vmatrix}^{-30}\! \Delta(P) \, .\end{equation} As an immediate consequence, the class of a discriminant in $K^\times/(K^\times)^2$ does not depend on the choice of a Weierstrass equation. \subsection{Root numbers and semi-stable reduction} Let $C/K$ be as in \ref{subs:curve_g2} and let $J(C)/K$ be its Jacobian. We denote by $\mathcal J(C/K)_k^\circ$ the neutral component of the special fiber of the Néron model of $J(C)/K$. Let $\ell\neq p$ be a prime number. We have an isomorphism of $\ell$-adic $\Gamma_K$-representations \[H^1_{\et}(C_{\overline{K}},\mathbb Q_\ell)\cong H^1_{\et}(J(C)_{\overline{K}},\mathbb Q_\ell),\] we denote either of them by $\rho_\ell$. 
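The normalization \eqref{eq:disc_def}--\eqref{eq:disc_unitary} of the discriminant can be checked numerically. A quick sanity check with the (purely illustrative) monic quintic $P(X)=X^5-2$, whose discriminant is $5^5\cdot(-2)^4=50000$ by the classical formula $\disc(X^5+a)=5^5a^4$, so that $\Delta(P)=2^8\cdot50000$:

```python
# Check Delta(P) = 2^8 * prod_{i<j}(alpha_i - alpha_j)^2 for a monic quintic,
# on the example P(X) = X^5 - 2 with disc(P) = 5^5 * (-2)^4 = 50000.
import cmath

def pairwise_disc(roots):
    """prod_{i<j} (r_i - r_j)^2 over a list of complex roots."""
    d = 1.0 + 0j
    for i in range(len(roots)):
        for j in range(i + 1, len(roots)):
            d *= (roots[i] - roots[j]) ** 2
    return d

# The five roots of X^5 - 2 are 2^(1/5) * zeta_5^k, k = 0, ..., 4.
roots = [2 ** 0.2 * cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]
disc = pairwise_disc(roots)   # ~ 50000 up to rounding (real)
delta = 2 ** 8 * disc         # the discriminant Delta(P) of the text
```

The same value is obtained from \eqref{eq:disc_def} directly, since $\disc(4P)=4^{8}\disc(P)$ and $a_0=4$ give $2^{-12}\cdot4^2\cdot4^8=2^8$ for monic $P$.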
The root number $w(C/K):=w(\rho_\ell)$ is defined via the complex Weil--Deligne representation associated to $\rho_\ell$ (see, e.g., \autocite{rohrlich}). Let $L/K$ be a finite extension over which $C$ has stable reduction. Then $J(C_L)/L$ has semi-stable reduction, i.e. $\mathcal J(C_L/L)_{k_L}^\circ$ is an extension of an abelian variety by a torus. Using Sabitova's decomposition \autocite[Prop.~1.10]{sabitova_root} we can separate the contributions to $w(C/K)$ coming from the abelian and the toric parts of $\mathcal J(C_L/L)_{k_L}^\circ$. \subsection{Hypotheses} We suppose from now on that $J(C_L)/L$ has good reduction or, equivalently, that $J(C)/K$ has potentially good reduction. It follows from the semi-stable reduction theorems that this happens exactly when $\rho_\ell(I_K)$ is finite. In this case we write $|\rho_\ell(I_K)|=ep^r$ with $e$ coprime to $p$. We further suppose that $r\geq1$, i.e. $\rho_\ell$ is wildly ramified. Due to Serre--Tate \autocite[p.~497,~Cor.~2]{serre_tate}, necessarily $p\leq5$. We will later suppose that $p=5$. \subsection{Inertially minimal extensions}\label{subs:im_ext} It follows from the Néron--Ogg--Shafarevich criterion that $J(C)$ attains good reduction over $L':=\overline{K}^{\ker \rho_\ell|_{I_K}}$ and that $L'/K^{\unr}$ is the minimal such extension. We call an algebraic extension $L/K$ \textit{inertially minimal (IM) for $J(C)/K$} if $I_L=I_{L'}= \ker \rho_\ell|_{I_K}$. In other words, $L/K$ is IM if and only if $J(C)$ has good reduction over $L$ and has bad reduction over every proper subextension of $K^{\unr}L/K^{\unr}$. \subsection{Good reduction and torsion}\label{subs:red_torsion} For $m\geq1$ we denote by $J(C)[m]$ the subgroup of $m$-torsion points of $J(C)(\overline{K})$ and by $K(J(C)[m])$ the smallest extension of $K$ over which all the points of $J(C)[m]$ are rational. 
For $m\geqslant 3$ coprime to $p$, it follows from Serre--Tate \autocite[Cor.~3, p.~498]{serre_tate} that the extension $K(J(C)[m])/K$ is IM for $J(C)/K$. Similarly, for $p\neq 2$, Serre~\autocite{serre_2torsion} shows that $|\rho_\ell(I_{K(J(C)[2])})|\leq2$. Thus, if $K(J(C)[2])/K$ is not an IM extension, then there is a totally ramified quadratic extension $L/K(J(C)[2])$ such that $L/K$ is IM for $J(C)/K$. Therefore, if $p\neq 2$, then the groups $\rho_\ell(I^{\wild}_K)$ and $I^{\wild}(K(J(C)[2])/K)$ are isomorphic. \subsection{We suppose for the rest of the section that $p=5$.}\label{subs:inertia_class} The groups $\rho_\ell(I_K)$ associated to abelian surfaces have been classified by Silverberg--Zarhin \autocite[Thm.~1.7]{silverberg_zarhin}. In our case $\rho_\ell(I_K)$ is one of the four groups (in the notation of \autocite{group_names}) satisfying the inclusions $C_5\subset C_{10}\subset \Dic_5\subset C_5\rtimes C_8$. More precisely, the group $\rho_\ell(I_K)$ has the form $C_5\rtimes C_{2^i}$ where $C_{2^i}$ is a subgroup of $C_8$ acting on $C_5$ with kernel $C_2\cap C_{2^i}\subset C_8$. Recall the Frobenius group $F_5=C_5\rtimes C_4$ where $C_4$ acts faithfully on $C_5$. In particular, $F_5$ and $\Dic_5$ are not isomorphic. \begin{prop}\label{prop:wild5_good_equiv} Suppose that $\rho_\ell$ is wildly ramified and let $L/K$ be a finite extension. If $J(C)$ has semi-stable reduction over $L$, then $C$ has good reduction over $L$, i.e. the minimal regular model $\mathcal C'/\mathcal O_L$ is smooth. \end{prop} \begin{proof} From classical theorems we know that $C$ has semi-stable reduction over $L$, and that there exists a stable (flat) model $\mathcal C'_{\can}/\mathcal O_L$ of $C_L/L$. The ring $R:=\mathcal O_{K^{\unr}L}$ is strictly Henselian and $(\mathcal C' _{\can})_R$ is a stable model of $C_{K^{\unr}L}/K^{\unr}L$. Wild ramification of $\rho_\ell$ implies that $5$ divides $[K^{\unr}L:K^{\unr}]$. 
By studying the possible orders of automorphisms of stable curves, Liu \autocite[Cor.~4.1.(4)]{liu_stable_red} showed that $(\mathcal C'_{\can})_{\overline{k}_L}/{\overline{k}_L}$ must be smooth. It follows that $(\mathcal C' _{\can})_R/R$ and hence $\mathcal C' _{\can}/\mathcal O_L$ are smooth. We may use \autocite[10.~Prop.~1.21]{liu_book} to conclude that $\mathcal C'/\mathcal O_L$ is smooth. \end{proof} \begin{rem}\label{rem:good_r_curve_jac} The hypotheses that $\rho_\ell$ is wildly ramified and that $K$ is $5$-adic are essential. The curve $C_L/L$ might have bad reduction even if $J(C)$ has good reduction over $L$. On the other hand, \autocite[Example~8, p.~246]{BLRNeron} shows that the non-rational irreducible components of $\mathcal C'_{\overline{k}_L}$ correspond to nontrivial abelian varieties as quotients of $\mathcal J(C_L/L)^\circ_{\overline{k}_L}$. Using this it can be shown in general that if $\mathcal J(C_L/L)^\circ_{\overline{k}_L}$ is a simple abelian variety, then $C_L/L$ has good reduction. \end{rem} \subsection{An explicit IM extension}\label{subs:A_5_J_10} Let $Y^2=P(X)$ be a hyperelliptic equation defining $C/K$. Generalizing the results of Kraus~\autocite{kraus}, Liu \autocite[\S5.1]{liu_algo} provides an explicit description, in terms of invariants of $P$, of the tame part of the minimal extension $L'/K^{\unr}$ over which $C$ has stable reduction. By Prop.~\ref{prop:wild5_good_equiv}, this extension is precisely the IM extension for $J(C)/K$ defined in~\ref{subs:im_ext}. Liu~\autocite[\S2.1]{liu_algo} defines a so-called affine invariant $A_5$. After Prop.~\ref{prop:hyperell_eq_spec} we will always have $A_5=1$. We fix an $8$th root of \[\beta:=-A_5^{-6}\Delta(P)\] in $\overline{K}$, which we denote by $\beta^{1/8}$, and let $\beta^{1/4}:=(\beta^{1/8})^2$. Let $L'_{\tame}/K^{\unr}$ be the maximal tamely ramified subextension of $L'/K^{\unr}$; then $L'/L'_{\tame}$ is totally wildly ramified of degree $5$. 
Liu proves that \begin{equation}\label{eq:liu_tame} L'_{\tame}=K^{\unr}(\beta^{1/8}).\end{equation} Let $\nu:=v_K(\beta)$, $M:=K(J(C)[2])$, $N:=K\left(\beta^{1/8}\right)$, $H:=K\left(\beta^{1/4}\right)$, and $L:=MN$. We fix a primitive $8$th root of unity $\zeta_8\in\overline{K}$. Let us recall from \autocite[3.39,~Cor.~2.11]{mumford_tata2} that $M$ is the splitting field of $P$. The extension $M/K$ is finite Galois. The extension $L/K$ is finite but not necessarily Galois. \begin{prop}\label{prop:explicit_IM} The extension $L/K$ is IM for $J(C)/K$ or, equivalently, $L'=K^{\unr}L$. The extension $L(\zeta_8)/K$ is Galois. In particular, if the residue degree $f(K/\mathbb Q_5)$ is even, then $L/K$ is Galois. The extension $L/H$ is always Galois. \end{prop} \begin{proof} From \ref{subs:red_torsion} we have $|\rho_\ell(I_M)|\leqslant 2$, so $\rho_\ell|_{\Gamma_M}$ is at most tamely ramified. It now follows from \eqref{eq:liu_tame} that $J(C)$ has good reduction over $LK^{\unr}$, so $L'\subset LK^{\unr}$. On the other hand, we have $L'\supset L_{\tame}'= NK^{\unr}$, and $I_{L'}$ acts trivially on $J(C)[2]$ by the Néron--Ogg--Shafarevich criterion, so $L'\supset LK^{\unr}$. Therefore, $I_{L'}= I_L$. The Galois closure of $N/K$ is $N(\zeta_8)/K$, so $L(\zeta_8)/K$ is Galois. Since $\mathbb Q_5$ contains the $4$th roots of unity and no primitive $8$th roots of unity, $K$ contains $\zeta_8$ if and only if $f(K/\mathbb Q_5)$ is even. The extension $HM/K$ is Galois since $H/K$ and $M/K$ are. Thus, $L/H$ is Galois as the compositum of $HM/H$ and $N/H$. \end{proof} \begin{prop}\label{prop:hyperell_irred_5} The group $\Gal(M/K)$ is isomorphic to a subgroup of $F_5$ (as in \ref{subs:inertia_class}). As a consequence, the polynomial $P$ has an irreducible factor over $K$ of degree 5. \end{prop} \begin{proof} We recall that $\deg P=5$ or $6$, so we may view $\Gal(M/K)=\Gal(P)$ as a subgroup of $S_5$ or $S_6$, respectively. 
Since the wild inertia subgroup of $\Gal(P)$ is normal of order $5$, the group $\Gal(P)$ must be contained in the normalizer $G$ of the subgroup generated by a $5$-cycle in $S_5$ or $S_6$. We naturally have $F_5\subseteq G$ and, in fact, equality holds because for $n=5,6$ we have \[|G|=\frac{|S_n|}{\#\{5\text{-Sylow subgroups in }S_n\}}=\frac{n!}{\frac{n!}{4\cdot5\cdot (n-5)!}}=20.\] If $P$ were irreducible over $K$ and had degree $6$, then $\Gal(P)$ would have a subgroup of index $6$, which is impossible since $|\Gal(P)|$ divides $20$. On the other hand, since $\Gal(P)$ contains a $5$-cycle, $P$ must have an irreducible factor of degree at least $5$. \end{proof} \begin{prop}\label{prop:inertia_disc_val} The group $\rho_\ell(I_K)$ is isomorphic to $C_5\rtimes C_8$, $\Dic_5$, $C_{10}$, or $C_5$ if and only if $\nu\equiv1\bmod 2$, $\nu\equiv2\bmod 4$, $\nu\equiv 4\bmod 8$, or $\nu\equiv0\bmod 8$, respectively. In particular, by denoting the ramification index of $L/K$ by $e(L/K)$ we have $40\mid e(L/K)\cdot\nu$. \end{prop} \begin{proof} By \eqref{eq:liu_tame} and Prop.~\ref{prop:explicit_IM}, the tame ramification index of $L/K$ is determined by the residue $\nu\bmod 8$ and is exactly the maximal prime-to-$5$ divisor of $|\rho_\ell(I_K)|$. The group $\rho_\ell(I_K)$ can then be identified from the classification in \ref{subs:inertia_class}. \end{proof} \begin{prop}\label{prop:almost_abelian} Let $\sigma\in I_K^{\wild}$ and let $\tau\in \Gamma_K$ denote a lift of a topological generator of the tame inertia group $I_K^{\tame}$. Let $\varphi_L\in \Gamma_L$ and $\varphi_{L(\zeta_8)}\in \Gamma_{L(\zeta_8)}$ be lifts of the geometric Frobenii. Then: \begin{enumerate} \item The images $\rho_\ell(\sigma)$, $\rho_\ell(\tau^4)$, and $\rho_\ell(\varphi_L)$ commute; \item The images $\rho_\ell(\tau)$ and $\rho_\ell(\varphi_{L(\zeta_8)})$ commute.
\end{enumerate} In particular, $\rho_\ell(\varphi_{L(\zeta_8)})$ is central in $\rho_\ell(\Gamma_K).$ \end{prop} \begin{proof} Prop.~\ref{prop:explicit_IM} shows that $\rho_\ell|_{I_L}$ is trivial and that $L'=LK^{\unr}$. Thus, for (1) we only need to show that the classes $\sigma I_L$, $\tau^4 I_L$, and $\varphi_L I_L $ in $\Gal(L'/K)=\Gamma_K/I_L$ commute. From \ref{subs:inertia_class} we have $\sigma^5\in I_L$ and $\tau^8\in I_L$. We note that the subfield of $L'$ fixed by $\varphi_LI_L$ is $L$. Let $F/K$ be the subextension of $M/K$ fixed by the unique $5$-Sylow subgroup of $\Gal(M/K)$. We claim that $L/F$ is abelian. We observe that $L/F$ is the compositum of the $C_5$-extension $M/F$ and the maximal at most tamely ramified subextension $L_{\tame}/F$ of $L/F$. By \ref{subs:red_torsion}, the ramification index of $L_{\tame}/F$ is at most $2$, so $L_{\tame}/F$ must be abelian. It follows that $L/F$ is abelian. The extension $L'/F$ is abelian as the compositum of $L/F$ and $K^{\unr}F/F$. We observe that the closure of the subgroup generated by $\tau^4I_K^{\wild}$ in $I_K$ cuts out the unique extension of $K^{\unr}$ of degree 4, which contains $F$ (we have $[F:K]\mid 4$ by Prop.~\ref{prop:hyperell_irred_5}). Thus, the class $\tau^4I_L$ is in $\Gal(L'/F)$. On the other hand, $\sigma I_L$ and $\varphi_L I_L$ are also in $\Gal(L'/F)$, so they all commute. For (2) we first note that, for $\gamma\in I_K$, we have $\eta:=\gamma\varphi_{L(\zeta_8)}\gamma^{-1}\varphi_{L(\zeta_8)}^{-1}\in I_K$. Since $L(\zeta_8)/K$ is Galois by Prop.~\ref{prop:explicit_IM}, we have $\gamma\varphi_{L(\zeta_8)}\gamma^{-1}\in\Gamma_{L(\zeta_8)}$, and thus $\eta\in \Gamma_{L(\zeta_8)}\cap I_K=I_{L}$. Thus, $\rho_\ell(\eta)$ is trivial, and hence (2) holds. \end{proof} \subsection{Particular form of hyperelliptic equation}\label{subs:hyperell_eq_spec} If $\deg P=6$, then Prop.~\ref{prop:hyperell_irred_5} shows that $P$ has a root in $K$. 
By applying a change of variables \eqref{eq:hyperell_change_var} that sends this root to the point at infinity, we may assume that the curve $C/K$ is defined by a Weierstrass equation $Y^2=P(X)$ with $P$ monic and irreducible of degree $5$. Applying Liu's results \autocite[Prop.~5.1]{liu_algo} gives the following \begin{prop}\label{prop:hyperell_eq_spec} There exists an equation \[Y^2=X^5+a_2X^4+\ldots +a_6,\] which defines $C/K$ with $a_2,\ldots,a_6\in\mathcal O_K$ such that $v_K(a_6)\in\{1,2,3,4,6,7,8,9\}$. With respect to this equation, we have $A_5=1$. \end{prop} \begin{rem} Prop.~\ref{prop:hyperell_eq_spec} is an analogue of Tate's algorithm for elliptic curves. Indeed, Liu also proved that the integer $v_K(a_6)$ determines the geometric type of the minimal regular model of $C/K$. These types correspond to the Namikawa--Ueno \autocite{namikawa_ueno} types [VIII-i] and [IX-i] for $i=1,2,3,4$. \end{rem} \begin{comment} \subsection{Possible Namikawa--Ueno types} Continuing to assume that $p=5$ and that $\rho_\ell|_{I^{\wild}_K}$ is nontrivial, we may determine the Namikawa--Ueno (NU) type (see \autocite{namikawa_ueno}) of the geometric fiber $\mathcal C_{\overline{k}_K}$ using the coefficient $a_6$ from Prop.~\ref{prop:hyperell_eq_spec}. Each type corresponds to one of the rows in the table below. We convert the Namikawa--Ueno notation to the one used in \autocite{ogg_classif} and then apply the results from \autocite[\S5.2]{liu_g2_disc_cond} to complete every column of \autoref{table:namikawa_ueno} except the last one (see \ref{subs:hyperell_cond} and \ref{subs:diff_disc_cond} for the notation). \begin{table}[h!]
\centering \caption{Geometric reduction types}\label{table:namikawa_ueno} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $v_K(a_6)$ & NU type & Ogg type & $m(C/K)$ & $\mathcal{P}_{\overline{k}_K}$ & $d$ & $v_K(\Delta_{\min})-a(\rho_\ell)$ \\[0.4ex] \hline 1 & [VIII-1] & [0] & 1 & $\mathbb P^1$ & 1 & 0\\ \hline 3 & [VIII-2] & [7] & 9 & $\mathbb P^1$ & 1 & 8\\ \hline 7 & [VIII-3] & [16] & 4 & $2\mathbb P^1$ &3 & 4 \\ \hline 9 & [VIII-4] & [20] & 13 & $\mathbb P^1$ & 1 & 12\\ \hline 2 & [IX-1] & [8] & 5 & $\mathbb P^1$ & 1 & 4\\ \hline 4 & [IX-2] & [36] & 3 & $\mathbb P^1$ & 1 & 2\\ \hline 6 & [IX-3] & [21] & 11 & $\mathbb P^1$ & 1 & 10\\ \hline 8 & [IX-4] & [44] & 9 & $\mathbb P^1$ & 1 & 8\\ \hline \end{tabular} \end{table} \begin{cor}\label{cor:delta-cond=even} We have \[v_K(\Delta_{\min})-a(\rho_\ell)=m(C/K)+\frac{d-3}{2}.\] In particular, $v_K(\Delta_{\min})-a(\rho_\ell)$ is positive and even. \end{cor} \begin{proof} The formula is obtained by combining \eqref{eq:liu_Art_formula} and \eqref{eq:liu_Art_delta}. The quantities on the right-hand side of the equation can be read from Table~\ref{table:namikawa_ueno}. \end{proof} \begin{rem} The corollary above generalizes Ogg's formula for elliptic curves $v_K(\Delta_{\min})-a(\rho_\ell)=m(C/K)-1$. \end{rem} \begin{cor}\label{cor:hyperell_val_m} For $a_6$ as in Prop.~\ref{prop:hyperell_eq_spec} we have \begin{enumerate} \item $v_K(a_6)\equiv1+a(\rho_\ell)-v_K(\Delta_{\min})\equiv 2d-m(C/K)\bmod 5;$ \item $\lege{v_K(a_6)}{\mathbb F_5}=\lege{m(C/K)+3}{\mathbb F_5}$. \end{enumerate} \end{cor} \begin{proof} Both claims are straightforward to verify using Table~\ref{table:namikawa_ueno}. \end{proof} \end{comment} \section{Galois action on the special fiber}\label{sect:spec_f} Let $k$ be a finite field of some characteristic $p>2$. We denote by $k(y)$ the field of rational functions in one variable over $k$. \subsection{Artin--Schreier curves}\label{subs:artin_schreier_curves} We briefly recall some basic Artin--Schreier theory. 
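As a concrete companion to the recollections that follow, the small point counts used later in this section (for instance in the proof of Lemma~\ref{lem:traces_C10}) can be reproduced by brute force. The Python sketch below takes $p=5$ and realizes $\mathbb F_{25}$ as $\mathbb F_5[t]/(t^2-2)$; this model of $\mathbb F_{25}$, like the script itself, is purely illustrative.

```python
# Brute-force affine point counts on the Artin-Schreier curve
# x^5 - x = y^2 over F_5 and F_25.  F_25 is modeled as F_5[t]/(t^2 - 2):
# the squares mod 5 are {0, 1, 4}, so t^2 - 2 is irreducible.
p = 5

# Elements of F_25 are pairs (u, v) representing u + v*t with t^2 = 2.
def mul(a, b):
    (u1, v1), (u2, v2) = a, b
    return ((u1 * u2 + 2 * v1 * v2) % p, (u1 * v2 + v1 * u2) % p)

def sub(a, b):
    return ((a[0] - b[0]) % p, (a[1] - b[1]) % p)

def power(a, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, a)
    return r

F25 = [(u, v) for u in range(p) for v in range(p)]

# Affine points over F_25 satisfying x^5 - x = y^2.
affine25 = sum(1 for x in F25 for y in F25
               if sub(power(x, p), x) == mul(y, y))

# Affine points over F_5: x^5 = x for every x in F_5, so y must vanish.
affine5 = sum(1 for x in range(p) for y in range(p)
              if (x**p - x) % p == (y * y) % p)

print(affine5, affine25)
```

Both affine counts equal $5$, so adding the single point at infinity gives $|C_{1,0}(\mathbb F_{25})|=p+1=6$, in agreement with the count in the proof of Lemma~\ref{lem:traces_C10}.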
If $f\in k(y)$ is not in the image of the map $g\mapsto g^p-g$, then the equation $x^p-x=f$ defines a smooth projective curve $C_f$ over $k$ together with a finite morphism $\pi:C_f\to\mathbb P^1_{k}$ of degree $p$. In other words, the function field $k(C_f)=k(x,y)$ is a cyclic extension of $k(y)$ of degree $p$. Conversely, every cyclic extension of $k(y)$ of degree $p$ is of this form. We may assume that $f$ is in \textit{standard form}, i.e., that each pole of $f$ has order prime to $p$; this is well known, see \autocite[\S2]{hesse_AS}. Then, $f$ has poles at exactly the branch points of $\pi$. In particular, if $\pi$ has a unique branch point in $\mathbb P_k^1(k)$ which is the pole of $y$, then $f$ is a polynomial in $y$. If this is the case, then the genus of $C_f$ is given by $g(C_f)=\frac{(\deg f-1)(p-1)}{2}$, see, e.g., \autocite[3.7.8.(d)]{stichtenoth_ff}. For every $a\in k^{\times}$ and $c\in k$ we denote by $C_{a,c}$ the Artin--Schreier curve given by the equation $x^p-x-c=ay^2$. \begin{lem}\label{lem:traces_C10} Let $p\equiv1\bmod 4$. On the curve $C_{1,0}/\mathbb F_p$ we have the automorphisms $\sigma_1:(x,y)\mapsto(x+1,y)$, $\iota:(x,y)\mapsto(x,-y)$, and the endomorphism $F:(x,y)\mapsto(x^p,y^p)$. They commute pairwise and, for all $n,r,f\in\mathbb Z$, the trace of the pullback $(\iota^n\circ\sigma_1^r\circ F^f)^*$ on $H^1_{\et}((C_{1,0})_{\overline{\mathbb F}_p},\mathbb Q_\ell)$ is given by \[\Tr (\iota^n\circ\sigma_1^r\circ F^f)^*= \begin{cases} (-1)^{n+1}p^{f/2} & \text{ if $f$ is even and $p\nmid r$}, \\ (-1)^np^{f/2}(p-1)&\text{ if $f$ is even and $p\mid r$}, \\ (-1)^{n+1}\lege{r}{\mathbb F_p}p^{\frac{f+1}{2}}& \text{ if $f$ is odd.}\end{cases}\] \end{lem} \begin{proof} It is straightforward to verify that $\sigma_1$, $F$, and $\iota$ commute. The hyperelliptic involution $\iota$ acts as multiplication by $-1$ on the Jacobian variety, so $(\iota^n)^*=(-\Id)^n$.
The curve $C_{1,0}$ has genus $\frac{p-1}{2}$, and $\dim H^1_{\et}((C_{1,0})_{\overline{\mathbb F}_p},\mathbb Q_\ell)=p-1$. We recall from the classical theory that the action of $F^*$ is semisimple and its eigenvalues have absolute value $\sqrt{p}$. We claim that $(F^2)^*$ acts as multiplication by $p$. For this we only need to show $\Tr(F^2)^*=p(p-1)$. The Lefschetz trace formula \[\Tr(F^2)^* =1+p^2-| C_{1,0}(\mathbb F_{p^2})|\] leaves us to prove $| C_{1,0}(\mathbb F_{p^2})|=p+1$. For every $x\in\mathbb F_{p^2}$ we have $\Tr_{\mathbb F_{p^2}/\mathbb F_p}(x^p)=\Tr_{\mathbb F_{p^2}/\mathbb F_p}(x)$. It follows that every affine point $(x,y)\in C_{1,0}(\mathbb F_{p^2})$ must be such that $\Tr_{\mathbb F_{p^2}/\mathbb F_p}(y^2)=0$, which is equivalent to $y^2+y^{2p}=0$. The non-zero solutions of the latter satisfy $y^{2(p-1)}=-1$; raising this to the odd power $\frac{p+1}{2}$ gives $1=y^{p^2-1}=-1$, a contradiction. Thus, the affine points of $C_{1,0}(\mathbb F_{p^2})$ are $(x,0)$ with $x\in\mathbb F_{p}$, and thus the claim holds. The polynomial $X^2-p$ is irreducible over $\mathbb Z$. Since the characteristic polynomial of $F^*$ is in $\mathbb Z[X]$, it must be $(X^2-p)^{\frac{p-1}{2}}$. We have $0=(\sigma_1^p)^*-\Id=(\sigma_1^*-\Id)\Phi_p(\sigma_1^*)$ where $\Phi_p\in\mathbb Z[X]$ is the $p$-th cyclotomic polynomial. The characteristic polynomial $P_{\sigma_1}$ of $\sigma_1^*$ is in $\mathbb Z[X]$, so its monic irreducible divisors can only be $X-1$ or $\Phi_p$. Since $\deg P_{\sigma_1}=p-1$, we must have $P_{\sigma_1}(X)=(X-1)^{p-1}$ or $P_{\sigma_1}=\Phi_p$. The first case is impossible since $\sigma_1^*$ is nontrivial. Thus, $\Tr(\sigma_1^r)^*=-1$ if $r$ is prime to $p$, and $\Tr(\sigma_1^r)^*=p-1$ otherwise. The formulas for the case when $f$ is even hence follow. If $f$ is odd, then \begin{equation}\label{eq:trace_f_odd} \Tr (\iota^n\circ\sigma_1^r\circ F^f)^*=(-1)^np^{\frac{f-1}{2}}\Tr(\sigma_1^r\circ F)^*.
\end{equation} We use the Lefschetz formula \[\Tr(\sigma_1^r\circ F)^*=1+p-\left|\Fix(\sigma_1^r\circ F)\right|.\] The affine points $(x,y)\in C_{1,0}(\overline{\mathbb F}_p)$ fixed by $\sigma_1^r\circ F$ satisfy $x=x^p+r$ and $y=y^p$, so $y\in\mathbb F_p$ and $-r=x^p-x=y^2$. Since $p\equiv1\bmod4$, the latter equation has exactly $\lege{-r}{\mathbb F_p}+1=\lege{r}{\mathbb F_p}+1$ solutions in $y$ for each $r\in\mathbb F_p$. Each solution $y$ gives exactly $p$ solutions $x$ of $x^p-x=y^2$. We have proved that $\sigma_1^r\circ F$ has exactly $p\big(\lege{r}{\mathbb F_p}+1\big)+1$ fixed points, so \begin{equation}\label{eq:trace_a_frob} \Tr(\sigma_1^r\circ F)^*=-\lege{r}{\mathbb F_p}p. \end{equation} Substituting \eqref{eq:trace_a_frob} into \eqref{eq:trace_f_odd} finishes the proof. \end{proof} \begin{rem} If $p\equiv 3\bmod4$, then using similar methods one can compute the cardinality $|C_{1,0}(\mathbb F_{p^2})|=p(2p-1)+1$ and prove that $(F^*)^2+p=0$. Consequently, analogous formulas for the traces can be given. \end{rem} \subsection{The 5-adic setting}\label{subs:5-adic_wild} We now continue in the situation where $K$ is $5$-adic and $C/K$ is a curve of genus $2$ whose $\ell$-adic representation has cyclic wild inertia image $\rho_\ell(I_K^{\wild})$ of order $5$. Recall the notation of \ref{subs:A_5_J_10}. \subsection{Galois action on the smooth model}\label{subs:extend_galois_model} By Prop.~\ref{prop:explicit_IM} and Prop.~\ref{prop:wild5_good_equiv}, the curve $C_{L}/L$ has good reduction, so its minimal regular model $\mathcal C'/\mathcal O_{L}$ is smooth. For every finite Galois extension $K'/K$ containing $L$, the minimal regular model of $C_{K'}/K'$ is given by the base change $\mathcal C'_{\mathcal O_{K'}}:=\mathcal C'\times_{\mathcal O_L}\mathcal O_{K'}$.
Every element of $\Gal(K'/K)$ gives a $K'$-semilinear automorphism $C_{K'}\xrightarrow{\sim} C_{K'}$, which extends uniquely to an $\mathcal O_{K'}$-semilinear automorphism $\mathcal C'_{\mathcal O_{K'}}\xrightarrow{\sim} \mathcal C'_{\mathcal O_{K'}}$ (see, e.g., \autocite[Corollary~1.2]{liu_neron}). Passing to the projective limit shows that each $\gamma\in\Gamma_K$ induces an $\mathcal O_{\overline{K}}$-semilinear automorphism $\gamma_{\mathcal C'}:\mathcal C'_{\mathcal O_{\overline{K}}}\xrightarrow{\sim} \mathcal C'_{\mathcal O_{\overline{K}}}$. This morphism preserves the special fiber, so we obtain a $\overline{k}_K$-semilinear $\Gamma_K$-action on $\mathcal C'_{\overline{k}_K}$. \begin{comment} so we obtain a commutative diagram \begin{equation}\label{eq:galois_fiber_diag}\begin{tikzcd}[row sep=scriptsize,column sep=scriptsize] \mathcal C'_{\overline{k}_K}\arrow[rrr, "\gamma_{\mathcal C'}"]\arrow[ddd]\arrow[dr]&[-10pt] & &[-10pt] \mathcal C'_{\overline{k}_K}\arrow[dl]\arrow[ddd]&\\[-15pt] &\mathcal C'_{\mathcal O_{\overline{K}}}\arrow[r]\arrow[d]&\mathcal C'_{\mathcal O_{\overline{K}}}\arrow[d] &\\ &{\Spec\mathcal O_{\overline{K}}}\arrow[r,"\gamma"] & { \Spec\mathcal O_{\overline{K}}}&\\[-11pt] {\Spec\overline{k}_K}\arrow[rrr]\arrow[ur, shorten <= -.4em, shorten >= -.4em]& & &{\Spec\overline{k}_K}\arrow[ul, shorten <= -.4em, shorten >= -.4em]& \end{tikzcd}\end{equation} \end{comment} By functoriality, $\Gamma_K$ acts on $H^1_{\et}(\mathcal C'_{\overline{k}_K},\mathbb Q_\ell)$, and, for every $n\in\mathbb Z$ prime to $p$, the smooth base change theorem provides an isomorphism of $\Gamma_K$-modules \begin{equation}\label{eq:et_coh_fiber_iso} H^1_{\et}(C_{\overline{K}},\mathbb Z/n\mathbb Z)\cong H^1_{\et}(\mathcal C'_{\overline{k}_{K}},\mathbb Z/n\mathbb Z).\end{equation} We note that every element $\gamma\in\Gamma_L$ acts on $\mathcal C'_{\mathcal O_{\overline{K}}}=\mathcal C'\times_{\mathcal O_L}\mathcal O_{\overline{K}}$ as $\id \times\gamma$.
Since $I_K$ acts trivially on $\overline{k}_K$, the group $I_L$ acts trivially on $\mathcal C'_{\overline{k}_K}$, thus inducing an action of $I_K/I_L$ on $\mathcal C'_{\overline{k}_K}$ by $\overline{k}_K$-automorphisms. We obtain a chain of group homomorphisms \begin{equation}\label{eq:autom_inj} I_K/I_L\hookrightarrow \Aut(\mathcal C'_{\overline{k}_K})\to\Aut\!\left(H^1_{\et}(\mathcal C'_{\overline{k}_K},\mathbb Q_\ell)\right)\xrightarrow{\sim}\Aut\!\left(H^1_{\et}(C_{\overline{K}},\mathbb Q_\ell)\right).\end{equation} \begin{prop}\label{prop:iso_Ca0} Let $\sigma\in I_K^{\wild}$. The induced automorphism $\sigma_{\mathcal C'}$ on $\mathcal C'_{\overline{k}_K}$ descends to $k_L$, and $\mathcal C'_{k_L}$ is $k_L$-isomorphic to $C_{a,0}$ for some $a\in k_L^\times$. The automorphism of $C_{a,0}$ induced by $\sigma_{\mathcal C'}$ is given by $\sigma_a^r:(x,y)\mapsto (x+r,y)$ with some $r\in\mathbb F_p$. The image $\rho_\ell(\sigma)$ is nontrivial if and only if $r\neq0$. \end{prop} \begin{proof} If $\rho_\ell(\sigma)=\Id$, then $\sigma\in I_L$, so $\sigma_{\mathcal C'}$ is the identity on $\mathcal C'_{\overline{k}_K}$ by \eqref{eq:autom_inj}. In the same way, if $\rho_\ell(\sigma)\neq\Id$, then the class of $\sigma$ in $I_K/I_L$ has order $5$, so it induces an automorphism on $\mathcal C'_{\overline{k}_K}$ of order $5$. We have seen in Prop.~\ref{prop:almost_abelian} that the classes of $\sigma$ and $\varphi_L$ commute in $\Gamma_K/I_L$. It follows that they commute as scheme-automorphisms of $\mathcal C'_{\overline{k}_L}$, which means that $\sigma_{\mathcal C'}$ descends to a $k_L$-automorphism of $\mathcal C'_{k_L}$. The main arguments for the second part are given in \autocite{roquette} and \autocite{homma_aut}, which we specialize to our situation. Let $\Gamma\simeq C_5$ be the image of $I_K^{\wild}$ in $\Aut(\mathcal C'_{k_L})$, and let $\pi:\mathcal C'_{k_L}\to \mathcal C'_{k_L}/\Gamma$ be the quotient map, which is defined over $k_L$.
As a consequence of the Hurwitz formula, \autocite[Remark~1.2.(A).(b)]{homma_aut} shows that $\Gamma$ fixes a unique closed point $P$ in $\mathcal C'_{k_L}$, and that $\mathcal C'_{k_L}/\Gamma$ has genus zero. Since $\Gamma$ commutes with $(\varphi_L)_{\mathcal C'}$, the point $(\varphi_L)_{\mathcal C'}(P)$ is also fixed by $\Gamma$, so $(\varphi_L)_{\mathcal C'}(P)=P$, meaning that $P$ is a $k_L$-rational point. Then $\pi(P)$ is $k_L$-rational, so $\pi$ is indeed a finite $k_L$-morphism $\mathcal C'_{k_L}\to \mathbb P^1_{k_L}$ of degree $5$ ramified only at $P$. Let $k_{L}(\mathcal C'_{k_L})$ denote the function field of $\mathcal C'_{k_L}$; then $k_L(\mathcal C'_{k_L})^{\Gamma}$ is a rational function field over $k_L$, and we let $y$ be a generator having a (unique) pole at $P$. Since $k_L(\mathcal C'_{k_L})/k_L(y)$ is cyclic of order $5$, applying Artin--Schreier theory we have $k_L(\mathcal C'_{k_L})=k_L(x,y)$ satisfying an equation $x^5-x=f$ with $f\in k_L(y)$. Furthermore, since the pole of $y$ is the unique branch point of $\pi$, we may suppose that $f\in k_L[y]$. Since $\mathcal C'_{k_L}$ has genus $2$, the genus formula from \ref{subs:artin_schreier_curves} forces $\deg f=2$. We may further suppose that $f(y)=ay^2+c$ with $a,c\in k_L$, $a\neq0$; thus we have a $k_L$-isomorphism $\mathcal C'_{k_L}\simeq C_{a,c}$. With our particular choice of $L/K$ in \ref{subs:A_5_J_10}, the points of $J(C)[2]$ are rational over $L$. The isomorphism \eqref{eq:et_coh_fiber_iso} implies that the points of $J(C_{a,c})[2]$ are $k_L$-rational, which means that the polynomial $x^5-x-c$ splits completely over $k_L$. By translating $x$ by one of the roots we find that $\smash{\mathcal C'_{k_L}}\simeq C_{a,0}$ as $k_L$-schemes. Lastly, every $\gamma\in\Gamma$ fixes $y$, so $x-\gamma(x)$ is a root of $X^5-X=0$, thus giving $\gamma=\sigma_a^r$ for some $r\in\mathbb F_5$, and $r=0$ if and only if $\gamma$ is trivial. \end{proof} \begin{prop}\label{prop:repr_traces} We fix $\sigma\in I_K^{\wild}$.
Let $a\in k_L^\times$ and $r\in\mathbb F_5$ be as in Prop.~\ref{prop:iso_Ca0}. For every $m,n\in\mathbb Z$ we have \[\Tr\rho_\ell(\sigma^m\varphi_L^n)=\begin{cases} -\lege{a}{k_L}^n 5^{\frac{n[k_L:\mathbb F_5]}{2}}& \text{ if $n[k_L:\mathbb F_5]$ is even and $5\nmid m$,} \\ \lege{a}{k_L}^n 4\cdot 5^{\frac{n[k_L:\mathbb F_5]}{2}}& \text{ if $n[k_L:\mathbb F_5]$ is even and $5\mid m$,}\\ -\lege{a}{k_L}^n \lege{rm}{\mathbb F_5}5^{\frac{n[k_L:\mathbb F_5]+1}{2}}& \text{ if $n[k_L:\mathbb F_5]$ is odd.} \end{cases}\] \end{prop} \begin{proof} By Prop.~\ref{prop:iso_Ca0}, the automorphism induced by $\sigma$ on $C_{a,0}$ is given by $\sigma_a^r:(x',y')\mapsto(x'+r, y')$. From the classical theory of Frobenius actions on étale cohomology we know that the morphism $F_{q_L}:(x',y')\mapsto(x'^{q_L}, y'^{q_L})$ of $C_{a,0}$ induces the action of $\varphi_L$ on $H^1_{\et}(\mathcal C'_{\overline{k}_K},\mathbb Q_\ell)$. We fix a square root $\sqrt{a}\in\overline{k}_K$. Then there is a $\overline{k}_K$-isomorphism $C_{1,0}\to C_{a,0}$ given by $(x,y)\mapsto (x,\frac{y}{\sqrt{a}})$. Using this isomorphism we compute that the $\overline{k}_K$-automorphism on $C_{1,0}$ induced by $\sigma_a^r$ descends to $\mathbb F_5$ and is exactly $\sigma_1^r$. Similarly, $F_{q_L}$ induces $F^{[k_L:\mathbb F_5]}\circ\iota$ on $C_{1,0}$ if $\lege{a}{k_L}=-1$ or $F^{[k_L:\mathbb F_5]}$ if $\lege{a}{k_L}=1$. Therefore, \[\Tr\rho_\ell(\sigma^m\varphi^n_L)=\lege{a}{k_L}^n\cdot\Tr\left(\sigma_1^{rm}\circ F^{n[k_L:\mathbb F_5]}\right)^*.\] The desired formulas follow from Lemma~\ref{lem:traces_C10}. \end{proof} \subsection{Square classes of differences of Weierstrass roots}\label{subs:diff_square} Let $Y^2=P(X)$ be a Weierstrass equation defining $C/K$ with $P\in K[X]$ monic of degree $5$ as in Prop.~\ref{prop:hyperell_eq_spec}. In particular, $A_5=1$. Any element $\sigma\in I^{\wild}_K$ for which $\rho_\ell(\sigma)$ is nontrivial acts transitively on the roots of $P$.
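The valuation bookkeeping with root differences carried out in this subsection rests on two elementary facts: translating the roots leaves all differences unchanged, and rescaling $X$ by $\delta^2$ divides each of the $10$ squared differences by $\delta^4$, so their product picks up a factor $\delta^{-40}$. A minimal sketch with exact rational arithmetic (the sample roots and the value of $\delta$ are illustrative, not taken from the text):

```python
from fractions import Fraction

def diff_prod(roots):
    """Product of squared root differences: the discriminant of the
    monic quintic with these roots, up to the fixed factor 2^8."""
    prod = Fraction(1)
    for i in range(len(roots)):
        for j in range(i + 1, len(roots)):
            prod *= (roots[i] - roots[j]) ** 2
    return prod

alphas = [Fraction(n) for n in (0, 1, 3, 7, 12)]   # illustrative "roots"
delta = Fraction(4)                                # illustrative scaling

# Translate by alpha_1, then rescale X by delta^2.
transformed = [(a - alphas[0]) / delta**2 for a in alphas]

# Each of the 10 squared differences contributes delta^(-4): weight -40.
assert diff_prod(transformed) == diff_prod(alphas) / delta**40
```

This weight $-40$ is exactly the term $-40\,v_L(\delta)$ appearing in the discriminant computation of this subsection.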
We fix a root $\alpha_1\in M$ of $P$; the other roots are then $\alpha_i:=\sigma^{i-1}(\alpha_1)$. Following Prop.~\ref{prop:iso_Ca0}, there exists $a\in k_L^\times$ such that $\mathcal C'_{k_L}\simeq C_{a,0}$ over $k_L$, and $\sigma$ induces $\sigma_a^r\in\Aut(C_{a,0})$ for some $r\in\mathbb F_p^\times.$ We note that the curve $C_{a,0}$ is $k_L$-isomorphic to the curve defined by the equation $y'^2=x'^5-a^4x'$, where $y'=a^3y$ and $x'=ax$. We have $\sigma_a^r:(x',y')\mapsto(x'+ar,y')$. \begin{prop}\label{prop:diff_square} The following properties hold: \begin{enumerate} \item The valuation of $\alpha_i-\alpha_j$ is the same for every $i\neq j$; \item Assume that $[k_L:\mathbb F_5]$ is odd. There exists $\sigma\in I_K^{\wild}$ such that $\lege{ar}{k_L}=1$. In this case, $\alpha_1-\sigma(\alpha_1)\in(L^\times)^2$. \end{enumerate} \end{prop} \begin{proof} (1) Since $C_L/L$ has good reduction by Prop.~\ref{prop:wild5_good_equiv}, there exists an affine variable change over $L$ which transforms $Y^2=P(X)$ into a Weierstrass equation with coefficients in $\mathcal O_L$ and an invertible discriminant (see \autocite[Lemme~3]{liu_models}). An affine transformation modifies all $v_L(\alpha_i-\alpha_j)$ by adding the same constant $v$. The new discriminant has zero valuation; since each $v_L(\alpha_i-\alpha_j)+v$ is nonnegative and their sum computes this valuation, we must have $v_L(\alpha_i-\alpha_j)+v=0$ for all $i\neq j$. (2) The existence of $\sigma$ such that $\lege{ar}{k_L}=1$ follows from $\lege{r}{k_L}=\lege{r}{\mathbb F_5}$. The extension $H/K$ from \ref{subs:A_5_J_10} is at most tamely ramified, so $\sigma$ acts trivially on it. By Prop.~\ref{prop:explicit_IM}, the extension $L/H$ is Galois, so $\sigma$ acts on $L$. Then, $\sigma$ induces an $L$-semilinear automorphism of $C_L/L$. The action of $\sigma$ on the function field $K(X,Y)$ with $Y^2=P(X)\in K[X]$ is trivial.
Applying the change of variables $X=X'+\alpha_1$ gives the equation \[Y^2=P'(X'):=X'(X'-\alpha_2+\alpha_1)\ldots(X'- \alpha_5+\alpha_1)\in M[X'].\] The $\sigma$-action extends $M$-semilinearly to $M(X,Y)=M(X',Y)$ and \[\sigma(X')=X'-\alpha_2+\alpha_1.\] With $P$ as in Prop.~\ref{prop:hyperell_eq_spec}, we have $40\mid e(L/K)v_K(\Delta(P))$ by Prop.~\ref{prop:inertia_disc_val}. Let $\varpi_L$ be any uniformizer of $L$ and $\delta:=\varpi_L^{\frac{e(L/K)v_K(\Delta(P))}{40}}$. After applying another change of variables $Y=\delta^5Y''$, $X'=\delta^2X''$ we obtain \[Y''^2=P''(X''):=X''\left(X''-\frac{\alpha_2-\alpha_1}{\delta^2}\right)\ldots\left(X''-\frac{\alpha_5-\alpha_1}{\delta^2}\right),\] and $\sigma(X'')=X''-\frac{\alpha_2-\alpha_1}{\delta^2}$. The formula \eqref{eq:var_change_disc} gives \[v_L\left(\Delta(P'')\right)=v_L\left(\delta^{-100} \cdot \delta^{60}\Delta(P)\right)=e(L/K)v_K\left(\Delta(P)\right)-40v_L(\delta)=0.\] For all $i\neq j$, applying part (1) gives \[v_L\left(\frac{\alpha_i-\alpha_j}{\delta^2}\right)=\frac{1}{20}v_L(\Delta(P))-2v_L(\delta)=0.\] It follows that $Y''^2=P''(X'')$ defines a smooth model $\mathcal{W}/\mathcal O_L$ of $C_L/L$, which is unique up to isomorphism. Its reduction $\mathcal W_{k_L}/k_L$ must be $k_L$\protect\nobreakdash-\hspace{0pt}{}isomorphic to the curve $C_{a,0}/k_L$, defined by $y'^2=x'^5-a^4x'$. Let $x''$ denote the class of $X''$ in the function field of $\mathcal W_{k_L}$. By construction, the points at infinity of both of these models are fixed by the $k_L$-linear automorphisms induced by $\sigma$. Since on each curve there is a unique such fixed point (proven in \autocite{homma_aut}), there must be an affine variable change $y''=uy'$, $x''=bx'+c$ for some $u,b,c \in k_L$. Then $b^5=u^2$, so $b$ is a square in $k_L$.
On one hand, as pointed out in \ref{subs:diff_square}, we have \[\sigma(x'')=b\sigma(x')+c=bx'+bar+c,\] and on the other hand, from the construction of $P''$, we have \[\sigma(x'')=bx'+c+\left(\frac{\alpha_1-\alpha_2}{\delta^2}\bmod \mathfrak{m}_L\right).\] Thus, the class of $\frac{\alpha_1-\alpha_2}{\delta^2}$ in $k_L$ is $bar$, which is a square, so $\alpha_1-\alpha_2\in (L^\times)^2$. \end{proof} \begin{prop}\label{prop:root_diff_square}\vphantom{a} \begin{enumerate} \item We have $H\subset M$. \item For all $k\neq l$, the element $\alpha_k-\alpha_l$ is a square in $L(\zeta_8)$. \end{enumerate} \end{prop} \begin{proof} For (2), by replacing $\sigma$ with some power, without loss of generality we may assume that $k=1$, $l=2$. Applying \eqref{eq:disc_unitary} gives \begin{equation}\label{eq:beta_expr} -\beta=\Delta(P)=2^{8}\prod_{i<j}(\alpha_i-\alpha_j)^2=2^{8}(\alpha_1-\alpha_2)^{20}\prod_{i<j}\left(\frac{\alpha_i-\alpha_j}{\alpha_1-\alpha_2}\right)^2.\end{equation} The wild ramification group $I^{\wild}(M/K)$ acts trivially on $M^\times/U^1_M$, so \[\frac{\alpha_i-\alpha_1}{\alpha_2-\alpha_1}=\sum_{k=0}^{i-2}\frac{\sigma^k(\alpha_2-\alpha_1)}{\alpha_2-\alpha_1}\equiv i-1\bmod \mathfrak{m}_M.\] Then \[\prod_{i<j}\left(\frac{\alpha_i-\alpha_j}{\alpha_1-\alpha_2}\right)^2\equiv \prod_{i<j}(j-i)^2\equiv(288)^2\equiv -1 \bmod \mathfrak{m}_M.\] Since $U_M^1$ is $8$-divisible, it follows that $\beta\in (M^\times)^4$, thus giving (1). Recall that $\beta\in(L^\times)^8$, thus $(\alpha_1-\alpha_2)^4\in(L^\times)^8$. It follows that $\alpha_1-\alpha_2$ is a square in $L(\zeta_8)^\times$, thus proving (2). \end{proof} \begin{rem} Prop.~\ref{prop:root_diff_square} must be contrasted with Prop.~\ref{prop:diff_square}. Unless $L=L(\zeta_8)$, only half of the differences $\alpha_i-\alpha_j$ are squares in $L$. \end{rem} \begin{prop}\label{prop:a-vdelta_even} For any discriminant $\Delta$ of $C/K$ we have $v_K(\Delta)\equiv a(\rho_\ell)\!\bmod \!2$. 
\end{prop} \begin{proof} This is derived from \autocite{liu_g2_disc_cond}. First, $v_K(\Delta)\equiv v_K(\Delta_{\min})\bmod 2$ for $\Delta_{\min}$ associated to a so-called minimal equation. Then, using \autocite[Prop.~1, Thm.~1]{liu_g2_disc_cond} we have $v_K(\Delta_{\min})-a(\rho_\ell)=m-1+\frac{d-1}{2}$ where $m$ is the number of irreducible components of the special geometric fiber of the minimal regular model of $C/K$ and $d$ is a geometric invariant of $C/K$ defined in \autocite[\S5.2]{liu_g2_disc_cond}. Finally, for each possible geometric type of the minimal regular model Liu computes $d$. In our case, $m$ is $1,3,4,5,9,11,$ or $13$ and $d=1$ for each value of $m$, except $d=3$ when $m=4$. \end{proof} \section{Maximal inertia action over $5$-adic fields}\label{sect:max_inertia} We continue in the setting of \ref{subs:5-adic_wild}. \begin{prop}\label{prop:hyperell_max_equiv} The following are equivalent: \begin{enumerate} \item $v_K(\Delta)$ is odd for any discriminant $\Delta$ of $C/K$; \item The extension $M/K$ is totally ramified and $\Gal(M/K)\simeq F_5$; \item $\rho_\ell(I_K)\simeq C_5\rtimes C_8$; \item $a(C/K)$ is odd. \end{enumerate} \end{prop} \begin{proof} Prop.~\ref{prop:inertia_disc_val} gives (1)$\Leftrightarrow$(3), and Prop.~\ref{prop:a-vdelta_even} gives (1)$\Leftrightarrow$(4). Prop.~\ref{prop:explicit_IM} shows that $\rho_\ell(I_K)$ has a quotient isomorphic to the inertia subgroup of $\Gal(M/K)$. Then \ref{subs:inertia_class} shows that (2) implies (3). Suppose (3); then $L/K$ has ramification index $40$. By \ref{subs:red_torsion}, the ramification index of $M/K$ is at least $20$. Statement~(2) now follows from Prop.~\ref{prop:hyperell_irred_5}. \end{proof} \subsection{Maximal ramification hypothesis}\label{subs:max_inertia_hyp} From now on we suppose that $\rho_\ell(I_K)\simeq C_5\rtimes C_8$.
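The group-theoretic facts about $C_5\rtimes C_8$ used in what follows (its element orders, its $10$ conjugacy classes, and hence that its irreducible representations have dimension $1$ or $4$) can be checked by brute force. The Python sketch below models the group with $C_8$ acting on $C_5$ through multiplication by $2$; this concrete choice of action is an assumption for illustration only, since any action through an automorphism of order $4$ yields an isomorphic group.

```python
from itertools import product

# C_5 x| C_8 of order 40: elements (a, b), with b acting on C_5 as
# multiplication by 2^b (an action of order 4) -- an illustrative model.
P, Q = 5, 8

def mult(g, h):
    (a, b), (c, d) = g, h
    return ((a + pow(2, b, P) * c) % P, (b + d) % Q)

G = list(product(range(P), range(Q)))
e = (0, 0)

def inv(g):
    return next(h for h in G if mult(g, h) == e)

def order(g):
    n, x = 1, g
    while x != e:
        x, n = mult(x, g), n + 1
    return n

orders = {order(g) for g in G}                     # element orders

# Conjugacy classes, by brute force.
classes = {frozenset(mult(mult(h, g), inv(h)) for h in G) for g in G}

# Commutators; they fill out the subgroup C_5 x {0} of order 5.
comms = {mult(mult(g, h), mult(inv(g), inv(h))) for g in G for h in G}

# With 10 classes and |G/[G,G]| = 40/5 = 8 linear characters, the two
# remaining irreducible degrees d satisfy 2*d^2 = 40 - 8 = 32, so d = 4.
print(sorted(orders), len(classes), len(comms))
```

In particular, the presence of elements of order $8$ and the absence of faithful representations of dimension less than $4$ are the two features exploited in the propositions below.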
The complex Weil--Deligne representation attached to $\rho_\ell$ is given by the Weil representation $\rho:=\rho_\ell|_{W_K}\otimes_{\mathbb Q_\ell}\mathbb C$ and the trivial monodromy operator. \begin{prop}\label{prop:L_tot_ram} The extension $L/K$ is totally ramified, and $[L:M]=2$, $[M:H]=5$, and $[H:K]=4$. \end{prop} \begin{proof} It follows from Prop.~\ref{prop:root_diff_square}.(1) that $H\subset N\cap M$. Prop.~\ref{prop:hyperell_max_equiv} shows that $M/K$ is totally ramified of degree $20$ and that $[H:K]=4$. Therefore, $M/H$ is totally ramified of degree $5$, and $N\cap M = H$. From Prop.~\ref{prop:hyperell_max_equiv} we also see that $[N:H]=2$ and that $L/K$ has ramification index $40$, so $[L:K]=[H:K][N:H][M:H]=40$. It follows that $L/K$ is totally ramified. \end{proof} \begin{prop}\label{prop:tot_gal_max} In the notation of \autocite{group_names}, we have \[\Gal(L(\zeta_8)/K)\simeq\begin{cases}C_5\rtimes C_8 & \text{if $[k_K:\mathbb F_5]$ is even,} \\ C^2_2.F_5 &\text{if $[k_K:\mathbb F_5]$ is odd.}\end{cases}\] \end{prop} \begin{proof} The inertia subgroup $I(L(\zeta_8)/K)\subset \Gal(L(\zeta_8)/K)$ is isomorphic to $C_5\rtimes C_8$ and has index at most $2$ (from Prop.~\ref{prop:explicit_IM} and Prop.~\ref{prop:L_tot_ram}). It remains to show that if $L(\zeta_8)/L$ is nontrivial, then $\Gal(L(\zeta_8)/K)\simeq C^2_2.F_5$. In this case we have $\Gal(L(\zeta_8)/M)\simeq C_2^2$ since $L/M$ is totally ramified of degree $2$. It follows from Prop.~\ref{prop:hyperell_max_equiv} that $\Gal(L(\zeta_8)/K)$ is an extension $G$ of $F_5$ by $C_2^2$. The extension cannot be split, because otherwise $\Gal(L(\zeta_8)/K)$ would have $C_2^2\rtimes C_4$ as a $2$-Sylow subgroup, which has exponent $4$ and therefore has no element of order $8$. In order to identify $\Gal(L(\zeta_8)/K)$ as $C^2_2.F_5$ by using \autocite{group_names} we are left to show that the extension $G$ is non-central, i.e. 
that the subgroup $C_2^2\subset G$ which identifies with $\Gal(L(\zeta_8)/M)\subset \Gal(L(\zeta_8)/K)$ is non-central. Indeed, $\Gal(L(\zeta_8)/M)$ cannot be central because $L/K$ is not Galois. \end{proof} \begin{prop}\label{prop:40_induced} Under the hypothesis of \ref{subs:max_inertia_hyp} the following statements hold: \begin{enumerate} \item The representation $\rho$ is irreducible; \item There exist characters $\chi$ and $\chi'$ of $W_H$ such that \begin{equation}\label{eq:restr_fact} \rho|_{W_H}\simeq\chi\oplus \chi^{-1}(-1)\oplus \chi'\oplus\chi'^{-1}(-1); \end{equation} \item If $\chi$ is any of the four direct summands in \eqref{eq:restr_fact}, then \[\rho\simeq \Ind_{W_H}^{W_K}\chi,\] and the Artin conductor $a(\chi)$ is even. \end{enumerate} \end{prop} \begin{proof} We observe that every irreducible representation of $C_5\rtimes C_8$ necessarily has dimension 1 or 4 (see, e.g., \autocite{group_names}). It follows that $\rho|_{I_K}$ is irreducible: otherwise it would be a direct sum of $1$\protect\nobreakdash-\hspace{0pt}{}dimensional representations, forcing the image $\rho(I_K)$ to be abelian. Thus, (1) holds. The extension $L/H$ is the compositum of the $C_5$-extension $M/H$ and the quadratic extension $N/H$, so $\Gal(L/H)\simeq C_{10}$. It follows that $LK^{\unr}/H$ is abelian. Therefore, $\rho|_{W_H}$ has abelian image and splits into $1$-dimensional factors \begin{equation}\label{eq:decomp_chi} \rho|_{W_H}\simeq\chi_1\oplus\chi_2\oplus\chi_3\oplus\chi_4.\end{equation} Frobenius reciprocity gives a nontrivial morphism of representations \[\Ind_{W_H}^{W_K}\chi_1\to \rho.\] Since $\rho$ is irreducible, the morphism is surjective and, in fact, is an isomorphism because $\dim\Ind_{W_H}^{W_K}\chi_1 =4 = \dim\rho$. Thus, (3) holds. Since $\rho$ is wildly ramified, by using an explicit construction of the induced representation we observe that $\chi_1$ must be wildly ramified.
The twisted representation $\rho(\frac{1}{2})$ is symplectic with respect to the Weil pairing, so, in particular, the dual of $\rho$ is $\rho^*\cong \rho(1)$ and $\det\rho=\chi_{\unr}^{-2}$. Then \eqref{eq:decomp_chi} gives \[\rho|_{W_H}\simeq \left(\rho|_{W_H}\right)^*(-1)\simeq\chi_1^{-1}(-1) \oplus \chi_2^{-1}(-1)\oplus \chi_3^{-1}(-1)\oplus \chi_4^{-1}(-1).\] The wild ramification of $\chi_1$ implies that $\chi_1\not\simeq \chi_1^{-1}(-1)$, so we may suppose that $\chi_2\simeq\chi_1^{-1}(-1)$. We then have $\chi_4\simeq\chi_3^{-1}(-1)$. Setting $\chi=\chi_1$ and $\chi'=\chi_3$ gives (2). By Prop.~\ref{prop:hyperell_max_equiv}, $a(C/K)$ is odd. Since $H/K$ is totally tamely ramified of degree $4$, from \autocite[\S10.(a2)]{rohrlich} we have $a(C/K)=a(\rho)=a(\chi)+3$.\end{proof} \begin{prop}\label{prop:rat_4-tors} Each point of $J(C)[4]$ is rational over $L(\zeta_8)$. \end{prop} \begin{proof} Let $Y^2=P(X)$ be as in Prop.~\ref{prop:hyperell_eq_spec}, and let $\alpha_1,\ldots,\alpha_5\in\overline{K}$ be the roots of $P$, so that $M=K(\alpha_1,\ldots,\alpha_5)$. Let $\widetilde{M}:=\mathbb Q(\sqrt{-1},\alpha_1,\ldots,\alpha_5)\subset M$. Then, $C$ and $J(C)$ are defined over $\widetilde{M}$, and it follows from \autocite[Remark~4.2]{yelton_4-tors} that \[\widetilde{M}(J(C)[4])=\widetilde{M}\left(\left(\sqrt{\alpha_i-\alpha_j}\right)_{i<j}\right).\] The proposition now follows from Prop.~\ref{prop:root_diff_square}.(2). \end{proof} \begin{cor}\label{cor:twisted_trivial} The map $\rho(\varphi_{L(\zeta_8)})$ acts as multiplication by the scalar $ \sqrt{q_{L(\zeta_8)}}$. As an immediate consequence, the twisted representation $\rho(\frac{1}{2})$ is trivial on $W_{L(\zeta_8)}$. \end{cor} \begin{proof} Since $\rho(\varphi_{L(\zeta_8)})$ is central in $\Img(\rho)$ by Prop.~\ref{prop:almost_abelian}, it acts as multiplication by a scalar $z\in\mathbb C^\times$. From \eqref{eq:restr_fact} we see that $z=z^{-1}q_{L(\zeta_8)}$, so $z=\pm\sqrt{q_{L(\zeta_8)}}$. 
We note that $\sqrt{q_{L(\zeta_8)}}$ is always an integral power of $5$, thus, in particular, $z\equiv\pm1\bmod4$. On the other hand, Prop.~\ref{prop:rat_4-tors} implies that $\rho_2(\varphi_{L(\zeta_8)})\in\Aut_{\mathbb Z_2}\left(H^1_{\et}(C_{\overline{K}},\mathbb Z_2)\right)$ satisfies $\rho_2(\varphi_{L(\zeta_8)})\equiv \Id{}\bmod4$. We therefore conclude that $z=\sqrt{q_{L(\zeta_8)}}$. \end{proof} \section{Computation of root numbers}\label{sect:max_proof} We assume the hypotheses of \ref{subs:5-adic_wild} and \ref{subs:max_inertia_hyp} and prove our main result. \begin{thm}\label{thm:max_ramif_rootN} Let $a_6$ be as in Prop.~\ref{prop:hyperell_eq_spec}, and let $\Delta$ be the discriminant associated to any Weierstrass equation defining $C/K$. The root number of $C/K$ is given by \[w(C/K)=(-1)^{[k_K:\mathbb F_5]+1}\cdot\lege{v_K(a_6)}{k_K}\cdot (\Delta,a_6)_K.\] \end{thm} Let $\psi_k:K\to\mathbb C^\times$ be the additive character from \ref{subs:add_char_ex}. For the general theory and the formulas of root numbers the reader may refer to \autocite{rohrlich}. \subsection{Root number of an induced representation}\label{subs:root_induced} We have $\rho=\Ind_{W_H}^{W_K}\chi$ from Prop.~\ref{prop:40_induced}, so the formula \autocite[\S11.($\epsilon$2)]{rohrlich} gives \begin{equation}\label{eq:root_n_ind_formula} w(C/K)=w(\chi,\psi_k\circ \Tr_{K/H})\cdot w(\Ind_{W_H}^{W_K}\mathds{1},\psi_k). \end{equation} \begin{lem}\label{lem:root_n_ind_id_4} We have $w(\Ind_{W_H}^{W_K}\mathds{1},\psi_k)=-1$. \end{lem} \begin{proof} The representation $\Ind_{W_H}^{W_K}\mathds{1}$ is isomorphic to the regular representation of $\Gal(H/K)\simeq C_4$. Let $\chi_4:W_K\to \mathbb C^\times$ denote a totally ramified character of order $4$ such that $\ker\chi_4=W_{H}$. 
We then have a decomposition \begin{equation}\label{eq:ind_1_dec} \Ind_{W_H}^{W_K}\mathds{1}\simeq \mathds{1}\oplus \chi_4^2\oplus\chi_4\oplus \chi_4^{-1},\end{equation} and thus multiplicativity of root numbers \autocite[\S11.($\epsilon$1)]{rohrlich} gives \[w(\Ind_{W_H}^{W_K}\mathds{1},\psi_k)=w(\chi_4^2,\psi_k)\cdot w(\chi_4\oplus \chi_4^{-1},\psi_k).\] The properties from \autocite[\S12~Lemma]{rohrlich} give \[w(\chi_4\oplus \chi_4^{-1},\psi_k)=\chi_4(\theta_K(-1)),\] where $\theta_K$ is Artin's reciprocity map. We have $\chi_4(\theta_K(-1))=1$ exactly when $-1$ is a 4th power in $K^\times$, so \[w(\chi_4\oplus \chi_4^{-1},\psi_k)=(-1)^{[k_K:\mathbb F_5]}.\] In order to compute $w(\chi_4^2,\psi_k)$ we apply the formula \autocite[\nopp (8.7.1)]{abbes_saito} with $\beta=1$ there and $\tau(\chi_4^2,\psi_k)=-G_{[k_K:\mathbb F_5]}(\chi_4^2)=(-\sqrt{p})^{[k_K:\mathbb F_5]}$ (we use \autocite[Thm.~11.5.2]{gauss_sums_book}), which gives \[w(\chi_4^{2},\psi_k)=(-1)^{[k_K:\mathbb F_5]+1},\] thus completing the proof of the lemma. \end{proof} \subsection{Connection with a Weierstrass equation}\label{subs:gauge_weierstrass_eq} Let $Y^2=P(X)$ be the Weierstrass equation for $C/K$ from Prop.~\ref{prop:hyperell_eq_spec}. We fix a root $\alpha_1\in M$ of $P$, then $M=H(\alpha_1)$. Let $\chi$ be as in Prop.~\ref{prop:40_induced}, and let $\sigma\in I_H$ be an element such that \begin{equation}\label{eq:chi_sigma_img}\chi(\sigma)=e^{\frac{2\pi i}{5}}.\end{equation} It follows that $\sigma$ restricts to a generator of $\Gal(M/H)\simeq C_5$. 
Let $\alpha_j:=\sigma^{j-1}(\alpha_1)$ be the roots of $P$, and let $d_{\alpha_1,\chi}:=\mathcal N_{M/H}(\alpha_1-\alpha_2).$ We have $\mathcal N_{M/H}(\alpha_1)=-a_6$ and \begin{equation}v_M(\alpha_1)=v_H(\mathcal N_{M/H}(\alpha_1))=v_H(a_6)=4v_K(a_6).\end{equation} Since $a(\chi)$ is even by Prop.~\ref{prop:40_induced}.(3), applying Cor.~\ref{cor:char_rootN_D} (with $K=H$ there) gives (recall the notation $\approx$ from \ref{subs:root_n_char}) \begin{align} w(\chi,\psi_k\circ\Tr_{H/K})& \approx \chi\circ\theta_H\left(v_M(\alpha_1)\cdot \mathcal N_{M/H}(\alpha_1)\right)^{-1}\cdot \chi\circ\theta_H(d_{\alpha_1,\chi}) \nonumber \\ & \approx \chi\circ\theta_H(-4v_K(a_6)a_6)^{-1}\cdot \chi\circ\theta_H(d_{\alpha_1,\chi}). \label{eq:chi_rootN_inter} \end{align} Recall that $\det\rho=\chi_{\unr}^{-2}$. Let $t:W_K^{\ab}\to W_H^{\ab}$ be the transfer map. Deligne's determinant formula \autocite[508]{deligne_eq_fonctionelle} gives \[ \chi_{\unr}^{-2}=\det\Ind_{W_H}^{W_K}\chi=\det\Ind_{W_H}^{W_K}\mathds{1} \cdot \chi\circ t. \] Composing with $\theta_K$ and taking into account the decomposition \eqref{eq:ind_1_dec} gives \[||\cdot||^{-2}_K=\chi_4^2\circ\theta_K \cdot (\chi\circ\theta_H)|_{K^\times}.\] Since $-4v_K(a_6)a_6\in K^\times$ and $||\cdot||_K\approx 1$, the above gives \begin{equation}\label{eq:chi_on_K} \chi\circ\theta_H(-4v_K(a_6)a_6) \approx \chi_4^2\circ\theta_K(-4v_K(a_6)a_6).\end{equation} Since $-\beta$ is a norm from $K(\sqrt{\beta})$, we have $\chi_4^2\circ\theta_K(-\beta)=1$. Therefore, $\chi_4^2\circ\theta_K$ is equal to the Hilbert symbol $(\beta,\cdot)_K$, since both are quadratic ramified characters trivial on $-\beta$. Since $\beta$ differs from any discriminant $\Delta$ of $C/K$ by a square in $K^\times$, we have $(\beta,\cdot)_K=(\Delta,\cdot)_K$. 
Applying this to \eqref{eq:chi_on_K} together with the formula \autocite[V.(3.4)]{neukirch} gives \begin{equation}\label{eq:chi_rootN_a6} \chi\circ\theta_H(-4v_K(a_6)a_6)\approx \lege{v_K(a_6)}{k_K}\cdot(\Delta,a_6)_K.\end{equation} Plugging \eqref{eq:chi_rootN_a6} into \eqref{eq:chi_rootN_inter} we obtain \begin{equation}\label{eq:chi_rootN_a6_m} w(\chi,\psi_k\circ \Tr_{H/K})\approx\lege{v_K(a_6)}{k_K}\cdot(\Delta,a_6)_K\cdot \chi\circ\theta_H(d_{\alpha_1,\chi}).\end{equation} \begin{lem}\label{lem:chi_d_even} If $[k_K:\mathbb F_5]$ is even, then $\chi\circ\theta_H(d_{\alpha_1,\chi})\approx 1.$ \end{lem} \begin{proof} Here we have $L(\zeta_8)=L$. Then Prop.~\ref{prop:root_diff_square}.(2) implies that \[\alpha_1-\alpha_2\in\mathcal N_{L(\zeta_8)/M}(L(\zeta_8)^\times),\] thus $d_{\alpha_1,\chi}$ is a norm from $L(\zeta_8)^\times$. Since $\rho(\frac{1}{2})=\Ind_{W_H}^{W_K}(\chi(\frac{1}{2}))$, Cor.~\ref{cor:twisted_trivial} implies that $\chi(\frac{1}{2})\circ\theta_H$ is trivial on $\mathcal N_{L(\zeta_8)/H}(L(\zeta_8)^\times)$. Then, \begin{equation} \chi\circ\theta_H(d_{\alpha_1,\chi})=||d_{\alpha_1,\chi}||_H^{-\frac{1}{2}} \cdot \left(\chi({\textstyle\frac{1}{2}})\circ\theta_H\right)(d_{\alpha_1,\chi})=||d_{\alpha_1,\chi}||_H^{-\frac{1}{2}}\approx 1. \qedhere \end{equation} \end{proof} \begin{lem}\label{lem:chi_gauss_frob} Suppose that $[k_K:\mathbb F_5]$ is odd. Let $a\in k_L$ and $r\in\mathbb F_5$ be associated to $\sigma$ as in Prop.~\ref{prop:iso_Ca0}. Then for every geometric Frobenius lift $\varphi_L\in W_L$, we have \[\chi(\varphi_L)=-\lege{ar}{k_L}\sqrt{q_K}.\] \end{lem} \begin{proof} Recall from Prop.~\ref{prop:L_tot_ram} that $L/K$ is totally ramified, so $k_L=k_K$. Since $[k_K:\mathbb F_5]$ is odd, $q_K=q_L=\sqrt{q_{L(\zeta_8)}}$, and $\lege{\cdot}{\mathbb F_5}$ is the restriction of $\lege{\cdot}{k_L}$ to $\mathbb F_5$. Let $\chi'$ be the other character appearing in Prop.~\ref{prop:40_induced}. 
From Prop.~\ref{prop:repr_traces} we have $\Tr\rho(\sigma)=-1$, which, together with \eqref{eq:chi_sigma_img}, forces \begin{equation}\label{eq:chis_sigma} \chi'(\sigma)\in\left\{(e^{\frac{2\pi i}{5}})^{2}, (e^{\frac{2\pi i}{5}})^{3}\right\}.\end{equation} Cor.~\ref{cor:twisted_trivial} implies that the eigenvalues of $\rho(\varphi_L)$ are $\pm \sqrt{q_L}$. From Prop.~\ref{prop:repr_traces} we have $\Tr\rho(\varphi_L)=0$, so there exists some $w=\pm 1$ such that \begin{equation}\label{eq:chis_frob}\chi(\varphi_L)=w\sqrt{q_L} \hspace{0.5cm}\text{and}\hspace{0.5cm} \chi'(\varphi_L)=-w\sqrt{q_L}.\end{equation} Using \eqref{eq:chi_sigma_img}, \eqref{eq:chis_sigma}, and \eqref{eq:chis_frob} together with a classical formula for Gauss sums \autocite[\S1.1]{gauss_sums_book} gives \[\Tr\rho(\sigma\varphi_L)=w\sqrt{q_L}\left(e^{\frac{2\pi i}{5}}+(e^{\frac{2\pi i}{5}})^{4}-(e^{\frac{2\pi i}{5}})^{2}-(e^{\frac{2\pi i}{5}})^{3}\right)=w\sqrt{5q_L}.\] It now follows from Prop.~\ref{prop:repr_traces} that $w=-\lege{ar}{k_L}$. \end{proof} \subsection{Choosing $\chi$}\label{subs:choice_chi} We assume that $[k_K:\mathbb F_5]$ is odd; then $[k_L:\mathbb F_5]$ is also odd. Although the root number $w(\chi,\psi_k\circ\Tr_{H/K})$ does not depend on the choice of the character $\chi$ in Prop.~\ref{prop:40_induced}.(3), in order to carry out a detailed computation we will need to fix a particular $\chi$. Depending on whether $a$ is a square in $k_L^\times$, we may choose $\sigma$ and, consequently, $\chi$ so that $\lege{ar}{k_L}=1$ while \eqref{eq:chi_sigma_img} still holds. 
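The Gauss-sum evaluation used above, $e^{\frac{2\pi i}{5}}+(e^{\frac{2\pi i}{5}})^{4}-(e^{\frac{2\pi i}{5}})^{2}-(e^{\frac{2\pi i}{5}})^{3}=\sqrt 5$, is the classical quadratic Gauss sum over $\mathbb F_5$ (the squares modulo $5$ are $\{1,4\}$, the non-squares $\{2,3\}$, and $5\equiv 1\bmod 4$). It is easy to sanity-check numerically; the following Python snippet is purely illustrative and not part of the argument:

```python
import cmath

# zeta = e^{2*pi*i/5}; squares mod 5 are {1, 4}, non-squares are {2, 3}.
zeta = cmath.exp(2j * cmath.pi / 5)

# Quadratic Gauss sum over F_5: sum of chi(a) * zeta^a over a = 1, ..., 4.
gauss_sum = zeta + zeta**4 - zeta**2 - zeta**3

# Since 5 = 1 (mod 4), the classical evaluation gives +sqrt(5);
# the imaginary parts of the four terms cancel in pairs.
```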
\begin{lem}\label{lem:chi_d_odd} If $[k_K:\mathbb F_5]$ is odd and $\chi$ is as in \ref{subs:choice_chi}, then $\chi\circ\theta_H(d_{\alpha_1,\chi})\approx -1.$ \end{lem} \begin{proof} Applying Lemma~\ref{lem:chi_gauss_frob} for the chosen $\chi$ gives \[\chi(\varphi_L)=-\sqrt{q_K}.\] On the other hand, Prop.~\ref{prop:root_diff_square}.(2) tells us that $\alpha_1-\alpha_2$ is a square in $L$, so there exists some $b\in L$ such that $\alpha_1-\alpha_2=\mathcal N_{L/M}(b)$. Prop.~\ref{prop:root_diff_square}.(1) and Prop.~\ref{prop:L_tot_ram} give \[v_L(b)=v_M(\alpha_1-\alpha_2)=\frac{1}{20}v_M(\Delta(P))= v_K(\Delta(P)).\] It now follows from Prop.~\ref{prop:hyperell_max_equiv} that $v_L(b)$ is odd. The restriction $\chi|_{W_L}$ is unramified, so the above discussion shows that \[\chi\circ\theta_H(d_{\alpha_1,\chi})=\chi\circ\theta_L(b)=\chi(\varphi_L)^{v_L(b)}=\left(-\sqrt{q_K}\right)^{v_L(b)}\approx -1.\qedhere\] \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:max_ramif_rootN}] When $[k_K:\mathbb F_5]$ is even we use Lemma~\ref{lem:chi_d_even}, and when $[k_K:\mathbb F_5]$ is odd we choose $\chi$ as in \ref{subs:choice_chi} and use Lemma~\ref{lem:chi_d_odd} in order to obtain $\chi\circ\theta_H(d_{\alpha_1,\chi})\approx(-1)^{[k_K:\mathbb F_5]}$. Plugging the latter into \eqref{eq:chi_rootN_a6_m} gives \begin{equation}\label{eq:chi_rootN_final} w(\chi,\psi_k\circ\Tr_{H/K})\approx(-1)^{[k_K:\mathbb F_5]}\cdot\lege{v_K(a_6)}{k_K}\cdot (\Delta,a_6)_K.\end{equation} Combining \eqref{eq:chi_rootN_final} and Lemma~\ref{lem:root_n_ind_id_4} into \eqref{eq:root_n_ind_formula} proves the relation $\approx$ between the two sides of the formula of Thm.~\ref{thm:max_ramif_rootN}. Since both sides take values in $\{1,-1\}$, the theorem holds (see \ref{subs:root_n_char}). \end{proof} \subsection{An example} This is \autocite[\href{https://www.lmfdb.org/Genus2Curve/Q/896875/a/896875/1}{genus $2$ curve 896875.a.896875.1}]{lmfdb}. 
Let $C/\mathbb Q$ be the hyperelliptic curve defined by \[Y^2=P(X):=X^5 + \frac{5}{4}X^4 - \frac{5}{2}X^3 - \frac{5}{4}X^2 + \frac{5}{2}X + \frac{1}{4}.\] Its discriminant is $\Delta=-5^5 \cdot 7 \cdot 41$, and a smooth model over $\mathbb Z_2$ can be given. Over $7$ the reduction is semi-stable, the singular point of the special fiber is given by $X=5$, $Y=0$, and we have $P(X)\equiv(X-5)^2H_7(X)\bmod 7$ with $H_7(X)=X^3 +6X^2 + X + 4$ separable over $\mathbb F_7$. Over $41$ the reduction is again semi-stable, the singular point is at $X=12$, $Y=0$, and we have $P(X)\equiv (X-12)^2H_{41}(X)\bmod 41$ with $H_{41}(X)=X^3+15X^2+29X+21$ separable over $\mathbb F_{41}$. We apply \autocite[Lemma~6.7]{brumer_kramer_sabitova} to compute $w(C/\mathbb Q_7)=-\lege{H_7(5)}{\mathbb F_7}=-\lege{4}{\mathbb F_7}=-1$ and $w(C/\mathbb Q_{41})=-\lege{H_{41}(12)}{\mathbb F_{41}}=-\lege{34}{\mathbb F_{41}}=1.$ We observe that $P(X+1)$ is Eisenstein over $\mathbb Z_5$, so the $\Gamma_{\mathbb Q_5}$-action on $J(C)[2]$ is wildly ramified. Thus, $\rho_\ell$ is wildly ramified for every $\ell\neq5$, and $C/\mathbb Q_5$ has potentially good reduction by Prop.~\ref{prop:wild5_good_equiv}. The equation $Y^2=P(X+1)$ satisfies the conditions of Prop.~\ref{prop:hyperell_eq_spec} with $a_6=\frac{5}{4}$, so Thm.~\ref{thm:max_ramif_rootN} applies to give \[w(C/\mathbb Q_5)=\left(-5^5 \cdot 7 \cdot 41,\frac{5}{4}\right)_{\mathbb Q_5}=-1.\] The global root number is then $w(C/\mathbb Q)=1$, which is compatible with the Hasse--Weil and the BSD conjectures since both analytic and Mordell--Weil ranks of $J(C)/\mathbb Q$ are $2$ (see \autocite{lmfdb}).
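The local computations in this example are easy to reproduce by machine. The following Python sketch is illustrative only: the helper names \texttt{legendre}, \texttt{v5}, and \texttt{hilbert5} are ours, and \texttt{hilbert5} uses the standard tame formula for the Hilbert symbol at $p=5$ (where the sign $(-1)^{\alpha\beta(p-1)/2}$ is trivial since $5\equiv 1\bmod 4$). It checks the two Legendre symbols, the Eisenstein property of $P(X+1)$ over $\mathbb Z_5$, and the value $(\Delta,a_6)_{\mathbb Q_5}=-1$:

```python
from fractions import Fraction
from math import comb

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

# Root numbers at the semi-stable places 7 and 41: w = -legendre(H_p(x0), p).
w7 = -legendre(5**3 + 6 * 5**2 + 5 + 4, 7)               # H_7(5) = 284
w41 = -legendre(12**3 + 15 * 12**2 + 29 * 12 + 21, 41)   # H_41(12) = 4257

# Coefficients of P(X), listed from the leading term down.
P = [Fraction(1), Fraction(5, 4), Fraction(-5, 2),
     Fraction(-5, 4), Fraction(5, 2), Fraction(1, 4)]

# Coefficients of P(X+1), same ordering: expand each c * (X+1)^n.
Q = [Fraction(0)] * 6
for k, c in enumerate(P):
    n = 5 - k
    for j in range(n + 1):
        Q[5 - j] += c * comb(n, j)   # contribution c * comb(n, j) * X^j

def v5(q):
    """5-adic valuation of a nonzero rational."""
    v, num, den = 0, q.numerator, q.denominator
    while num % 5 == 0:
        num //= 5
        v += 1
    while den % 5 == 0:
        den //= 5
        v -= 1
    return v

# Eisenstein over Z_5: leading coefficient a unit, positive valuation in the
# middle, and constant term of valuation exactly 1.
eisenstein = Q[0] == 1 and all(v5(q) >= 1 for q in Q[1:]) and v5(Q[5]) == 1

def hilbert5(a, b):
    """Hilbert symbol (a, b) over Q_5 via the tame formula (u/5)^beta (v/5)^alpha."""
    a, b = Fraction(a), Fraction(b)
    al, be = v5(a), v5(b)
    u, v = a / Fraction(5) ** al, b / Fraction(5) ** be
    leg = lambda q: legendre(q.numerator * pow(q.denominator, -1, 5), 5)
    return leg(u) ** be * leg(v) ** al

w5 = hilbert5(Fraction(-(5**5) * 7 * 41), Fraction(5, 4))   # (Delta, a_6)_{Q_5}
```

Running this reproduces $w(C/\mathbb Q_7)=-1$, $w(C/\mathbb Q_{41})=1$, $w(C/\mathbb Q_5)=-1$, and confirms that $P(X+1)$ is Eisenstein with constant term $a_6=\frac54$.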
\section{Introduction and main result} \label{sec:1} We study the time evolution of an incompressible inviscid fluid in the whole space $\bb R^3$, with axial symmetry and without swirl, when the vorticity is sharply concentrated on $N$ annuli of radii $r_i\approx r_0$ and thickness $\varepsilon$. In particular, we consider the limit $\varepsilon \to 0$. A similar problem was investigated some years ago in \cite{BCM00} for a single vortex ring, showing that it translates with a constant speed. Recently, in \cite{BuM2}, the analysis has been extended to the case of $N$ vortices, also obtaining a stronger localization property, but restricted to short positive times. In the present paper, we study the problem for arbitrary times. The motion of an incompressible inviscid fluid is governed by the Euler equations, which for a fluid of unit density in three dimensions with velocity $\bs u = \bs u(\bs\xi,t)$ decaying at infinity read \begin{equation} \label{vorteq} \partial_t \bs\omega + (\bs u\cdot \nabla) \bs\omega = (\bs \omega\cdot \nabla) \bs u \,, \end{equation} \begin{equation} \label{u-vort} \bs u(\bs\xi,t) = - \frac{1}{4\pi} \int_{\bb R^3}\! \mathrm{d} \bs\eta \, \frac{(\bs\xi-\bs\eta) \wedge \bs\omega(\bs\eta,t)}{|\bs\xi-\bs\eta|^3} \,, \end{equation} where $\bs\omega = \bs \omega(\bs\xi,t) = \nabla \wedge \bs u(\bs\xi,t)$ is the vorticity, $\bs\xi = (\xi_1,\xi_2,\xi_3)$ denotes a point in $\bb R^3$, and $t\in \bb R_+$ is the time. The equations are completed by the initial conditions. It is worthwhile to emphasize that the incompressibility condition $\nabla\cdot \bs u=0$ is automatically verified in view of Eq.~\eqref{u-vort}. Denoting by $(z,r,\theta)$ the cylindrical coordinates, we recall that a vector field $\bs F$ with cylindrical components $(F_z, F_r, F_\theta)$ is called axisymmetric without swirl if $F_\theta=0$ and $F_z$ and $F_r$ are independent of $\theta$. 
The axisymmetry is preserved by the evolution Eqs.~\eqref{vorteq}, \eqref{u-vort}. Moreover, when restricted to axisymmetric velocity fields $\bs u(\bs\xi,t) = (u_z(z,r,t), u_r(z,r,t), 0)$, the vorticity is \begin{equation} \label{omega} \bs\omega = (0,0,\omega_\theta) = (0,0,\partial_z u_r - \partial_r u_z) \end{equation} and, denoting henceforth $\omega_\theta$ by $\omega$, Eq.~\eqref{vorteq} reduces to \begin{equation} \label{omeq} \partial_t \omega + (u_z\partial_z + u_r\partial_r) \omega - \frac{u_r\omega}r = 0 \,. \end{equation} Finally, by Eq.~\eqref{u-vort}, $u_z = u_z(z,r,t)$ and $u_r=u_r(z,r,t)$ are given by \begin{align} \label{uz} u_z & = - \frac{1}{2\pi} \int\! \mathrm{d} z' \!\int_0^\infty\! r' \mathrm{d} r' \! \int_0^\pi\!\mathrm{d} \theta \, \frac{\omega(z',r',t) (r\cos\theta - r')}{[(z-z')^2 + (r-r')^2 + 2rr'(1-\cos\theta)]^{3/2}} \,, \\ \label{ur} u_r & = \frac{1}{2\pi} \int\! \mathrm{d} z' \!\int_0^\infty\! r' \mathrm{d} r' \! \int_0^\pi\!\mathrm{d} \theta \, \frac{\omega(z',r',t) (z - z')}{[(z-z')^2 + (r-r')^2 + 2rr'(1-\cos\theta)]^{3/2}} \,. \end{align} In conclusion, the axisymmetric solutions to the Euler equations are the solutions to Eqs.\ \eqref{omeq}, \eqref{uz}, and \eqref{ur}. We notice that Eq.~\eqref{omeq} means that the quantity $\omega/r$ remains constant along the flow generated by the velocity field, i.e., \begin{equation} \label{cons-omr} \frac{\omega(z(t),r(t),t)}{r(t)} = \frac{\omega(z(0),r(0),0)}{r(0)} \,, \end{equation} with $(z(t),r(t))$ solution to \begin{equation} \label{eqchar} \dot z(t) = u_z(z(t),r(t),t) \,, \qquad \dot r(t) = u_r(z(t),r(t),t) \,. \end{equation} In the case of non-smooth initial data, Eqs.~\eqref{uz}, \eqref{ur}, \eqref{cons-omr}, and \eqref{eqchar} can be taken as a weak formulation of the Euler equations in the framework of axisymmetric solutions. 
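The kernels appearing in Eqs.~\eqref{uz} and \eqref{ur} can be cross-checked against the three-dimensional Biot--Savart law Eq.~\eqref{u-vort}. The following Python sketch (a numerical illustration under the assumption of a unit-circulation vortex filament at $(z',r')$; the function names are ours) evaluates both expressions by midpoint quadrature at a point off the filament; they agree, and the induced azimuthal velocity vanishes, consistently with the absence of swirl:

```python
import math

def biot_savart_ring(z, r, zp, rp, n=20000):
    """(u_z, u_r, u_theta) at the point (z, r, theta=0) induced by a
    unit-circulation vortex filament on the circle {r = rp, z = zp},
    from the 3D Biot-Savart line integral of Eq. (u-vort)."""
    uz = ur = ut = 0.0
    dth = 2 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        # Cartesian displacement from the filament point, and unit tangent.
        vx, vy, vz = r - rp * math.cos(th), -rp * math.sin(th), z - zp
        tx, ty = -math.sin(th), math.cos(th)              # t_z = 0
        d3 = (vx * vx + vy * vy + vz * vz) ** 1.5
        w = rp * dth / (4 * math.pi * d3)
        uz += w * (tx * vy - ty * vx)                     # (t x v)_z
        ur += w * (ty * vz)                               # (t x v)_x
        ut += w * (-tx * vz)                              # (t x v)_y
    return uz, ur, ut

def ring_velocity_kernels(z, r, zp, rp, n=20000):
    """Same (u_z, u_r) from the axisymmetric kernels of Eqs. (uz)-(ur)."""
    uz = ur = 0.0
    dth = math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        D = ((z - zp) ** 2 + (r - rp) ** 2
             + 2 * r * rp * (1 - math.cos(th))) ** 1.5
        uz -= rp * (r * math.cos(th) - rp) / (2 * math.pi * D) * dth
        ur += rp * (z - zp) * math.cos(th) / (2 * math.pi * D) * dth
    return uz, ur
```

For instance, evaluating both at $(z,r)=(0.3,1.4)$ for a filament at $(z',r')=(0,1)$, the two computations agree to quadrature accuracy and $u_\theta$ is zero to rounding error.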
An equivalent weak formulation is obtained from Eq.~\eqref{omeq} by a formal integration by parts, \begin{equation} \label{weq} \frac{\mathrm{d}}{\mathrm{d} t} \omega_t[f] = \omega_t[u_z\partial_z f + u_r\partial_r f + \partial_t f] \,, \end{equation} where $f = f(z,r,t)$ is any bounded smooth test function and \[ \omega_t[f] := \int\! \mathrm{d} z \!\int_0^\infty\! \mathrm{d} r \, \omega(z,r,t) f(z,r,t) \,. \] It is known that the global (in time) existence and uniqueness of a weak solution to the associated Cauchy problem holds when the initial vorticity is a bounded function with compact support contained in the open half-plane $\Pi:=\lbrace(z,r):r>0\rbrace$, see, for instance, \cite[Page 91]{MaP94} or \cite[Appendix]{CS}. In particular, it can be shown that the support of the vorticity remains in the open half-plane $\Pi$ at any time (note that a point in the half-plane $\Pi$ corresponds to a circumference in the three-dimensional space $\bb R^3$). The special class of axisymmetric solutions without swirl is known in the literature as \textit{smoke rings} (or \textit{vortex rings}), because such solutions preserve their shape in time while translating with a constant speed along the $z$-axis. The knowledge of such solutions is very old, but the first rigorous proof of the existence and properties of these solutions (in the stationary case) goes back to \cite{FrB74, AmS89}, by means of variational methods. Other information and references on axisymmetric solutions without swirl can be found in \cite{ShL92}. We consider in the present paper the special class of initial data for which the vorticity is initially very concentrated. 
We mean that, given a small parameter $\varepsilon\in (0,1)$, we take initial data for which the vorticity has compact support contained in $N$ disks, that is \begin{equation} \label{in} \omega_\varepsilon(z,r,0) = \sum_{i=1}^N \omega_{i,\varepsilon}(z,r,0) \,, \end{equation} where $\omega_{i,\varepsilon}(z,r,0)$, $i=1,\ldots, N$, are functions with definite sign whose support is contained in $\Sigma(\zeta^i|\varepsilon)$; here $\Sigma(\xi|\rho)$ denotes the open disk of center $\xi$ and radius $\rho$, so that \begin{equation} \label{initial} \Lambda_{i,\varepsilon}(0) := \mathop{\rm supp}\nolimits\, \omega_{i,\varepsilon}(\cdot,0) \subset \Sigma(\zeta^i|\varepsilon), \end{equation} with \[ \overline{\Sigma(\zeta^i|\varepsilon)} \subset\Pi\quad \forall\, i\,, \qquad \Sigma(\zeta^i|\varepsilon)\cap \Sigma(\zeta^j|\varepsilon)=\emptyset\quad \forall\, i \ne j, \] for fixed $\zeta^i = (z_i,r_i)\in \Pi$. We assume also that \begin{equation} \label{2D} \min_i r_i > 2D\quad \forall\, i\,, \qquad |r_i-r_j| \ge 2D \quad \forall\, i \ne j \,, \end{equation} where $D$ is a positive fixed constant. This means that the radii of the annuli are bounded away from the symmetry axis and are mutually distinct. In view of \eqref{cons-omr}, the decomposition Eq.~\eqref{in} extends to positive times by setting \begin{equation} \label{in-t} \omega_\varepsilon(z,r,t) = \sum_{i=1}^N \omega_{i,\varepsilon}(z,r,t) \,, \end{equation} with $\omega_{i,\varepsilon}(x,t)$ the time evolution of the $i$th vortex ring, \begin{equation} \label{cons-omr_ni} \omega_{i,\varepsilon}(z(t),r(t),t) := \frac{r(t)}{r(0)} \omega_{i,\varepsilon}(z(0),r(0),0)\,. \end{equation} We focus on the case of a fluid with a large vorticity concentration. Therefore, in order to have nontrivial (i.e., neither vanishing nor diverging) limiting velocities of the vortex rings, the initial data have to be chosen appropriately. The correct choice can be inferred by considering the simplest case of a vortex ring alone, of intensity $N_\varepsilon := \int\! \mathrm{d} z \int_0^\infty\!
\mathrm{d} r \, \omega_\varepsilon(z,r,0)$ and supported in a small region of diameter $\varepsilon$. It is well known that it moves along the $z$-direction with an approximately constant speed proportional to $N_\varepsilon |\log \varepsilon|$, see \cite {Fr70}. With this in mind, we assume that there are $N$ real parameters $a_1,\ldots, a_N$, called \textit{vortex intensities}, such that \begin{equation} \label{ai} |\log\varepsilon| \int\!\mathrm{d} z \!\int_0^\infty\!\mathrm{d} r\, \omega_{i,\varepsilon}(z,r,0) = a_i \quad \forall\,i=1,\ldots,N \,. \end{equation} Finally, to avoid too large vorticity concentrations, we further assume there is a constant $M>0$ such that \begin{equation} \label{Mgamma} |\omega_{i,\varepsilon}(z,r,0)| \le \frac{M}{\varepsilon^2|\log\varepsilon|} \quad \forall\, (z,r)\in \Pi \quad \forall\, i=1,\ldots,N\,. \end{equation} Now, we can state the main result of the paper. \begin{theorem} \label{thm:1} Assume the initial data $\omega_\varepsilon(x,0)$ verify Eqs.~\eqref{in}, \eqref{initial}, \eqref{2D}, \eqref{ai}, and \eqref{Mgamma}, and define \begin{equation} \label{free_m} \zeta^i(t) := \zeta^i + \frac{a_i}{4\pi r_i} \begin{pmatrix} 1 \\ 0 \end{pmatrix} t \,, \quad i=1,\ldots,N\,. \end{equation} Then, for any $T>0$ the following holds true. For any $\varepsilon$ small enough there are $\zeta^{i,\varepsilon}(t)\in \Pi$, $t\in [0,T]$, $ i=1,\ldots,N$, and $R_\varepsilon>0$ such that \[ \lim_{\varepsilon\to 0}|\log\varepsilon| \int_{\Sigma(\zeta^{i,\varepsilon}(t)|R_\varepsilon)}\!\mathrm{d} z\,\mathrm{d} r\, \omega_{i,\varepsilon}(z,r,t) = a_i\quad \forall\, i=1,\ldots,N, \quad \forall\, t\in [0,T]\,, \] with \[ \lim_{\varepsilon\to 0} R_\varepsilon = 0, \qquad \lim_{\varepsilon\to 0} \zeta^{i,\varepsilon}(t) = \zeta^i(t) \quad \forall\, t\in [0,T]\,. 
\] \end{theorem} \begin{remark} \label{rem:1} For the sake of concreteness, we make the assumption Eq.~\eqref{2D}, which guarantees $|\zeta^i(t)-\zeta^j(t)|\ge 2D$ for any $i\ne j$ and $t\ge 0$. On the other hand, as will be clear from the proof, the result is true for any choice of initial conditions $\{\zeta^i\}$ and intensities $\{a_i\}$ provided that the trajectories $\{\zeta^i(t)\}$ remain separated from each other at any positive time. In the general case, instead, the statement of the theorem remains valid only for $T<T_*$, where $T_*$ is the first collapsing time (obviously, the initial data which produce collapses are exceptional). \end{remark} \begin{remark} \label{rem:2} Sometimes, the Euler equations are considered with initial vorticity highly concentrated around a generic curve, say $\Gamma= \{\bs \gamma_\sigma\}_{\sigma\in [0,1]} \subset \bb R^3$, see, e.g., \cite{MaB02}. Of course, additional assumptions are needed to analyze the time evolution. Here, the main feature is the so-called LIA (local induction approximation), in which the vorticity remains concentrated around a \textit{vortex filament} $\Gamma(t)= \{\bs\gamma_\sigma(t)\}_{\sigma\in [0,1]}$, whose time velocity $\dot{\bs \gamma}_\sigma(t)$ depends on the curvature and is directed along the binormal vector. In the present paper, we exhibit a situation in which this approximation is rigorously derived. \end{remark} \begin{remark} \label{rem:3} The effect of a viscosity perturbation in the derivation of the vortex model has been discussed in the literature \cite{CS,Mar90,Mar98,Mar07,Gal11}, but this topic is outside the scope of the present analysis. \end{remark} \begin{remark} \label{rem:4} In this paper, we show that for certain classes of concentrated initial data the time evolution is closely related to the dynamics of a particular system of particles. 
Actually, the relation between the solution of the Euler equations and the time evolution of some special particle systems is more general, and it is at the basis of an approximation method, called the ``vortex method'', widely used in the literature, see, e.g., \cite{CGP14} or the textbook \cite{MaP94}. \end{remark} The strategy in the proof of Theorem \ref{thm:1} is the same as in previous works on the topic. (We quote here only the more recent ones \cite{BuM1,BuM2}, and refer the reader to the references therein.) We first show the corresponding result for a ``reduced system'', where a vortex ring alone moves under the action of a suitable external time-dependent vector field. The result for the original model is then achieved by treating the motion of each vortex ring as that of a reduced system, in which the external field describes the force due to its interaction with the other rings. The key tool in the planar case \cite{BuM1} is a sharp a priori estimate on the moment of inertia, which is not available in the axisymmetric case because the velocity field is not a Lipschitz function. To overcome this problem, in \cite{BuM2} the energy conservation is used to control the growth in time of the moment of inertia, which allows us to build up an iterative scheme to deduce the sharp localization property, but the price to pay is that this scheme converges only for short times. Theorem \ref{thm:1} extends the result of \cite{BuM2} globally in time, and the strategy behind this improvement relies on the following observation. A suitable decomposition of the velocity field shows that its non-Lipschitz part is directed along the $z$-axis, which suggests that the vorticity should stay more localized along the radial direction. Indeed, this is true and allows us to deduce a sharper estimate on a different quantity, the ``axial moment of inertia''. 
This new estimate makes it possible to build an iterative scheme as in \cite{BuM2}, now convergent for any positive time, thus yielding a sharp localization property globally in time. The plan of the paper is the following. In the next section we introduce the reduced system and prove Theorem \ref{thm:1} as a corollary of the analogous result for this system, which is proved in Sections \ref{sec:3} and \ref{sec:4}. Finally, in Appendix \ref{app:a} we extend to the reduced system a concentration property of the vorticity distribution, proved in \cite{BCM00} for the case of a vortex alone. This property is necessary to characterize the axial motion and its proof relies on an accurate control of the time variation of the energy. \section{Reduction to a single vortex problem} \label{sec:2} We rename the variables by letting \begin{equation} \label{nv} x = (x_1,x_2) := (z,r) \end{equation} and extend the vorticity to a function on the whole plane by setting $\omega_\varepsilon(x,t) = 0$ for $x_2\le 0$, so that $x=(x_1,x_2) \in\bb R^2$ henceforth. In this way, the equations of motion Eqs.~\eqref{uz}, \eqref{ur}, \eqref{cons-omr}, and \eqref{eqchar} take the following form, \begin{equation} \label{u=} u(x,t) = \int\!\mathrm{d} y\, H(x,y)\, \omega_\varepsilon(y,t)\,, \end{equation} \begin{equation} \label{cons-omr_n} \omega_\varepsilon(x(t),t) = \frac{x_2(t)}{x_2(0)} \omega_\varepsilon(x(0),0) \,, \end{equation} \begin{equation} \label{eqchar_n} \dot x(t) = u(x(t),t) \,, \end{equation} where $u(x,t) = (u_1(x,t), u_2(x,t))$ and the kernel $H(x,y) = (H_1(x,y),H_2(x,y))$ is given by \begin{align} \label{H1} H_1(x,y) & = \frac{1}{2\pi} \int_0^\pi\!\mathrm{d} \theta \, \frac{y_2(y_2 - x_2\cos\theta)}{\big[|x-y|^2 + 2x_2y_2(1-\cos\theta)\big]^{3/2}} \,, \\ \label{H2} H_2(x,y) & = \frac{1}{2\pi} \int_0^\pi\!\mathrm{d} \theta \, \frac{y_2 (x_1-y_1) \cos\theta}{\big[|x-y|^2 + 2x_2y_2(1-\cos\theta)\big]^{3/2}} \,. 
\end{align} The ``reduced system'' describes the motion of a single vortex ring in a suitable external time-dependent vector field, which simulates the interaction with the other vortices. This system is defined by Eqs.~\eqref{u=}, \eqref{cons-omr_n}, and, in place of Eq.~\eqref{eqchar_n}, \begin{equation} \label{eqchar_nF} \dot x(t) = u(x(t),t) + F^\varepsilon(x(t),t)\,. \end{equation} The initial datum $\omega_\varepsilon(x,0)$ and the time dependent vector field $F^\varepsilon$ are assumed to satisfy the following conditions. \begin{assumption} \label{ass:1} The function $\omega_\varepsilon(x,0)$ is non-negative (resp.~non-positive) and there is $M>0$ and $a>0$ (resp.~$a<0$) such that \begin{equation} \label{MgammaF} 0 \le |\omega_\varepsilon(x,0)| \le \frac{M}{\varepsilon^2|\log\varepsilon|} \quad \forall\, x\in\bb R^2\,, \qquad |\log\varepsilon|\int\!\mathrm{d} y\, \omega_\varepsilon(y,0) =a\,. \end{equation} Moreover, there exists $\zeta^0 = (z_0,r_0)$, with $r_0>0$, such that \begin{equation} \label{initialF} \Lambda_\varepsilon(0) := \mathop{\rm supp}\nolimits\, \omega_\varepsilon(\cdot,0) \subset \Sigma(\zeta^0|\varepsilon)\,. \end{equation} Finally, $F^\varepsilon=(F^\varepsilon_1,F^\varepsilon_2)$ is a continuous and globally Lipschitz vector field, and it enjoys the following properties. \begin{itemize} \item[(a)] $\bs F^\varepsilon = (F^\varepsilon_z,F^\varepsilon_r,F^\varepsilon_\theta) := (F^\varepsilon_1,F^\varepsilon_2,0)$ has zero divergence, i.e., $\partial_{x_1}(x_2 F^\varepsilon_1) + \partial_{x_2}(x_2 F^\varepsilon_2) = 0$. \item[(b)] There exist $C_F, L >0$ such that, for any $\varepsilon\in (0,1)$ and $t\ge 0$, \begin{equation} \label{2Lipsc} |F^\varepsilon(x,t)| \le \frac{C_F}{|\log\varepsilon|}\,, \quad |F^\varepsilon(x,t) - F^\varepsilon(y,t)| \le \frac{L}{|\log\varepsilon|} |x-y|\qquad \forall\,x,y\in\bb R^2\,. 
\end{equation} \end{itemize} \end{assumption} \begin{theorem} \label{thm:2} Under Assumption \ref{ass:1}, let \begin{equation} \label{zett} \zeta(t) = \zeta^0+ \frac{a}{4\pi r_0} \begin{pmatrix} 1 \\ 0 \end{pmatrix} t\,. \end{equation} Then, for each $T>0$ the following holds true. \begin{itemize} \item[(1)] For any $k\in \big(0,\frac 14\big)$ there is $C_k>0$ such that, for any $\varepsilon$ small enough, \[ \Lambda_\varepsilon(t) := \mathop{\rm supp}\nolimits\, \omega_\varepsilon(\cdot,t) \subset \{x\in \bb R^2 \colon |x_2-r_0| \le C_k |\log\varepsilon|^{-k}\} \quad \forall\,t\in [0,T]\,. \] \item[(2)] For any $\varepsilon$ small enough there are $\zeta^\varepsilon(t)\in \Pi$, $t\in [0,T]$, and $\varrho_\varepsilon>0$ such that \[ \lim_{\varepsilon\to 0}|\log\varepsilon| \int_{\Sigma(\zeta^\varepsilon(t)|\varrho_\varepsilon)}\!\mathrm{d} x\, \omega_\varepsilon(x,t) = a \,, \] with \[ \lim_{\varepsilon\to 0} \varrho_\varepsilon = 0, \quad \lim_{\varepsilon\to 0} \zeta^\varepsilon(t) = \zeta(t) \,. \] \end{itemize} \end{theorem} \subsection{Proof of Theorem \ref{thm:1}} \label{sec:2.3} Given $T$ as in the statement of the theorem, we fix $R<D$ and let \[ T_\varepsilon := \max\left\{t\in [0,T] \colon |x_2 - r_i| \le R \;\; \forall\, x\in \Lambda_{i,\varepsilon}(s) \;\; \forall\, s\in [0,t]\;\;\forall\, i=1,\ldots,N \right\}. \] By continuity, from Eq.~\eqref{in} and \eqref{initial} it follows that $T_\varepsilon >0$ for any $\varepsilon$ sufficiently small. Moreover, in view of Eq.~\eqref{2D}, for any $t\in [0,T_\varepsilon ]$ the rings evolve with supports $\Lambda_{i,\varepsilon}(t)$ that remain separated from each other by a distance larger than or equal to $2(D-R)$, and hence their mutual interaction remains bounded and Lipschitz. 
More precisely, during the time interval $[0,T_\varepsilon]$, the $i$-th vortex ring $\omega_{i,\varepsilon}(x,t)$ evolves according to a reduced system, with external field in Eq.~\eqref{eqchar_nF} given by \begin{equation} \label{fk1} F^{i,\varepsilon}(x,t) = \sum_{j: j\ne i} \int\!\mathrm{d} y\, \tilde H(x,y)\, \omega_{j,\varepsilon}(y,t)\;, \end{equation} where $\tilde H(x,y)$ is any smooth kernel such that, e.g., $\tilde H (x,y) = H(x,y)$ if $|x-y|\ge D-R$. In view of the explicit form of $H$ in Eqs.~\eqref{H1} and \eqref{H2}, and of the assumption Eq.~\eqref{ai}, $\tilde H$ can be chosen such that $\bs F^{i,\varepsilon} := (F^{i,\varepsilon}_1,F^{i,\varepsilon}_2,0)$ has zero divergence\footnote{This mollification is obtained by modifying the stream function associated to the field, which always exists for axisymmetric flow without swirl \cite[Section 2]{FrB74}.} and, for some constant $\overline C>0$, any $i=1,\ldots, N$, and $t\in [0,T_\varepsilon]$, \[ |F^{i,\varepsilon}(x,t)| \le \frac{\overline C}{|\log\varepsilon|}, \quad |F^{i,\varepsilon}(x,t) - F^{i,\varepsilon}(y,t)| \le \frac{\overline C}{|\log\varepsilon|} |x-y|\quad \forall\,x,y\in\bb R^2. \] We then apply Theorem \ref{thm:2} to the evolution of the $i$-th vortex ring, with parameters $(a_i,\zeta^i,T,k)$ in place of $(a,\zeta^0,T,k)$, and conclude that, for any $\varepsilon$ small enough, \smallskip\noindent (1) $ |x_2 - r_i| \le C_k|\log\varepsilon|^{-k}$ for any $x\in \Lambda_{i,\varepsilon}(t)$, $t\in[0, T_\varepsilon]$, and $i=1,\ldots,N$, \smallskip\noindent (2) there are $\zeta^{i,\varepsilon}(t)\in \Pi$, $i=1,\ldots,N$, and $\varrho_\varepsilon>0$ such that \[ \lim_{\varepsilon\to 0}|\log\varepsilon| \int_{\Sigma(\zeta^{i,\varepsilon}(t)|\varrho_\varepsilon)}\!\mathrm{d} x\, \omega_{i,\varepsilon}(x,t) = a_i\,, \] with \[ \lim_{\varepsilon\to 0} \varrho_\varepsilon = 0, \quad \lim_{\varepsilon\to 0} \zeta^{i,\varepsilon}(t) = \zeta^i(t) \,.
\] By continuity, $T_\varepsilon=T$ for any $\varepsilon$ small enough, and Theorem \ref{thm:1} is thus proved. \qed \section{The reduced system: analysis of the radial motion} \label{sec:3} The proof of Theorem \ref{thm:2} is split into two parts. The first one, which is the content of the present section, concerns the sharp localization property of the vorticity along the radial direction as stated in item (1) of Theorem \ref{thm:2}. Without loss of generality, we consider the case $a=1$, hence Eq.~\eqref{MgammaF} reads \begin{equation} \label{MgammaFb} 0 \le \omega_\varepsilon(x,0) \le \frac{M}{\varepsilon^2|\log\varepsilon|} \quad \forall\, x\in\bb R^2, \qquad |\log\varepsilon|\int\!\mathrm{d} y\, \omega_\varepsilon(y,0) =1. \end{equation} The following weak formulation will be used, which is a direct generalization of Eq.~\eqref{weq}, \begin{equation} \label{weqF} \frac{\mathrm{d}}{\mathrm{d} t} \int\!\mathrm{d} x\, \omega_\varepsilon(x,t) f(x,t) = \int\!\mathrm{d} x\, \omega_\varepsilon(x,t) \big[ (u+F^\varepsilon) \cdot \nabla f + \partial_t f \big](x,t) \,, \end{equation} where $f = f(x,t)$ is any bounded smooth test function. Moreover, the kernel $H(x,y)$ in Eq.~\eqref{u=} can be split as in \cite[Lemma 3.3]{BuM2}, where it is shown that the most singular part of $H(x,y)$ is given by the kernel $K(x-y)$ corresponding to the planar case, \begin{equation} \label{vel-vor3} K(x) = \nabla^\perp G(x)\,, \quad G(x) := - \frac{1}{2\pi} \log|x|\,, \end{equation} where $v^\perp :=(v_2,-v_1)$ for $v = (v_1,v_2)$. More precisely, for any $x,y\in \Pi$, \begin{equation} \label{sH} H(x,y) = K(x-y) + L(x,y) + \mc R(x,y)\,, \end{equation} where \begin{equation} \label{bc_bound} L(x,y) = \frac{1}{4\pi x_2} \log\frac {1+|x-y|}{|x-y|}\begin{pmatrix} 1 \\ 0 \end{pmatrix} \end{equation} and there exists $C_0>0$ such that, for any $x,y\in \Pi$, \begin{equation} \label{sR} |\mc R(x,y)| \le C_0 \frac{1+x_2+\sqrt{x_2y_2} \big(1+ |\log(x_2y_2)|\big)}{x_2^2}\,.
\end{equation} \noindent \textit{A notation warning:} In what follows, we shall denote by $C$ a generic positive constant, whose numerical value may change from line to line and may depend on the parameters $\zeta^0=(z_0,r_0)$ and $M$ appearing in Theorem \ref{thm:2} and Eq.~\eqref{MgammaFb}, as well as on the given time $T$. \smallskip As claimed at the beginning of the section, our goal is to show that, under Assumption \ref{ass:1}, for any $T>0$ and $k\in\big(0,\frac 14\big)$, if $\varepsilon$ is small enough then \begin{equation} \label{eq:prop1} |x_2 - r_0| \le \frac{C}{|\log\varepsilon|^k} \qquad \forall\, x\in \Lambda_\varepsilon(t) \quad \forall\, t\in [0,T]\,. \end{equation} We let \[ T^0_\varepsilon := \max\left\{t\in [0,T] \colon \frac{r_0}{2} \leq x_2 \leq \frac32 r_0\;\; \forall\, x \in\Lambda_\varepsilon(s)\;\; \forall\, s\in [0,t] \right\} \] and assume hereafter $\varepsilon < r_0/2$ so that $T^0_\varepsilon >0$ in view of Eq.~\eqref{initialF}. In what follows, we show that, for any $k\in\big(0,\frac 14\big)$, \begin{equation} \label{stimGa} |x_2 - r_0| \le \frac{C}{|\log\varepsilon|^k} \qquad \forall\, x\in \Lambda_\varepsilon(t) \quad \forall\, t\in [0,T^0_\varepsilon]\,, \end{equation} provided $\varepsilon$ is small enough. By continuity, this implies that $T^0_\varepsilon = T$ for $\varepsilon$ sufficiently small, from which Eq.~\eqref{eq:prop1} follows. The proof of Eq.~\eqref{stimGa} is quite long, so it is divided into three preliminary lemmas plus a conclusion. Preliminarily, it is useful to decompose the velocity field according to Eq.~\eqref{sH}, writing \begin{equation} \label{decom_u} u(x,t) = \widetilde u(x,t) + \int\!\mathrm{d} y\, L(x,y)\, \omega_\varepsilon(y,t) + \int\!\mathrm{d} y\, \mc R(x,y)\, \omega_\varepsilon(y,t) \,, \end{equation} where $\widetilde u(x,t)=\int\!\mathrm{d} y\, K(x-y)\, \omega_\varepsilon(y,t)$.
\begin{lemma} \label{lem:1} The following estimates hold true, \begin{equation} \label{bc_bound2} \int\!\mathrm{d} y\, |L(x,y)|\, \omega_\varepsilon(y,t) \le C \,, \quad \int\!\mathrm{d} y\, |\mc R(x,y)| \, \omega_\varepsilon(y,t) \le \frac{C}{|\log\varepsilon|} \qquad \forall\, t\in [0,T^0_\varepsilon]\,. \end{equation} \end{lemma} \begin{proof} From Eq.~\eqref{bc_bound} and \eqref{sR} it follows that \begin{equation} \label{lrest} |L(x,y)| \le \frac{1}{2\pi r_0}\log\frac {1+|x-y|}{|x-y|}\,,\quad |\mc R(x,y)| \le C \quad \forall\, x,y\in \Lambda_\varepsilon(t)\quad \forall\, t\in [0,T^0_\varepsilon]\,, \end{equation} while, from Eq.~\eqref{cons-omr_n}, \eqref{MgammaFb}, and the definition of $T^0_\varepsilon$, \begin{equation} \label{omega_t} |\omega_\varepsilon(x,t)| \le \frac{3M}{\varepsilon^2|\log\varepsilon|} \quad \forall\, t\in [0,T^0_\varepsilon]\,. \end{equation} Since $\log\frac {1+|x-y|}{|x-y|}$ diverges monotonically as $y\to x$, the maximum of the function $\int\!\mathrm{d} y\, \log\frac {1+|x-y|}{|x-y|}\, \omega_\varepsilon(y,t)$ is achieved when we rearrange the vorticity mass as close as possible to the singularity. Therefore, in view of Eq.~\eqref{omega_t}, \begin{equation} \label{311b} \begin{split} \int\!\mathrm{d} y\, |L(x,y)|\, \omega_\varepsilon(y,t) & \le \frac{1}{2\pi r_0} \int\!\mathrm{d} y\, \log\frac {1+|x-y|}{|x-y|}\, \omega_\varepsilon(y,t) \\ & \le \frac{3M}{\varepsilon^2|\log\varepsilon|r_0} \int_0^{\bar\rho}\!\mathrm{d} \rho\, \rho \, \log\frac{1+\rho}{\rho} \\ & = \frac{3M}{\varepsilon^2|\log\varepsilon|r_0} \bigg\{\frac{\bar\rho^2}{2} \log\frac{1+\bar\rho}{\bar\rho} - \frac 12 \int_0^{\bar\rho}\!\mathrm{d} \rho\, \frac{\rho}{1+\rho} \bigg\}, \end{split} \end{equation} with $\bar\rho$ such that $3\pi\bar\rho^2 M/(\varepsilon^2|\log\varepsilon|)= 1/|\log\varepsilon|$, from which the first estimate in Eq.~\eqref{bc_bound2} follows.
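Indeed, the defining relation for $\bar\rho$ gives $\bar\rho^2 = \varepsilon^2/(3\pi M)$, so that the prefactor of the logarithmic term in the last line of Eq.~\eqref{311b} reduces to \[ \frac{3M}{\varepsilon^2|\log\varepsilon|\, r_0}\, \frac{\bar\rho^2}{2} = \frac{1}{2\pi r_0 |\log\varepsilon|}\,, \] while $\log\frac{1+\bar\rho}{\bar\rho} \le \log\frac{2}{\bar\rho} \le C|\log\varepsilon|$ for $\varepsilon$ small; since the remaining integral in Eq.~\eqref{311b} enters with a negative sign, the whole right-hand side is bounded by a constant.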
Finally, we observe that, by Liouville's theorem and Eq.~\eqref{cons-omr_n}, since the vector field $\bs F^\varepsilon$ in Assumption \ref{ass:1}-(a) has zero divergence, \begin{equation} \label{w=1} \int\! \mathrm{d} y\, \omega_\varepsilon(y,t) = \int\! \mathrm{d}\bs\xi\, \frac{\omega_\varepsilon(\bs\xi,t)}{r} = \int\! \mathrm{d}\bs\xi_0 \, \frac{\omega_\varepsilon(\bs\xi_0,0)}{r_0} = \int\! \mathrm{d} y\, \omega_\varepsilon(y,0) =\frac{1}{|\log\varepsilon|}\, , \end{equation} where we have used the coordinate transformation $\bs\xi=\phi^t(\bs\xi_0)$, with $\phi^t$ the flow generated by $\dot{\bs\xi} = \bs u(\bs\xi,t) + \bs F^\varepsilon(\bs\xi,t)$. Therefore, the second estimate in Eq.~\eqref{bc_bound2} is a consequence of the second one in Eq.~\eqref{lrest}. \end{proof} We denote by $B_\varepsilon(t)=(B_{\varepsilon,1}(t), B_{\varepsilon,2}(t))$ the center of vorticity of the blob, defined by \begin{equation} \label{c.m.} B_\varepsilon(t) = \frac{\int\! \mathrm{d} x\, x\, \omega_\varepsilon(x,t)}{\int\! \mathrm{d} x\, \omega_\varepsilon(x,t)} =|\log\varepsilon|\int\! \mathrm{d} x\, x\, \omega_\varepsilon(x,t) \,, \end{equation} and by $I_\varepsilon(t)$ the axial moment of inertia with respect to $x_2=B_{\varepsilon,2}(t)$, i.e., \begin{equation} \label{moment} I_\varepsilon(t) = \int\! \mathrm{d} x\, \left(x_2-B_{\varepsilon,2}(t)\right)^2 \omega_\varepsilon(x,t)\,. \end{equation} Since $\Lambda_\varepsilon(t)$ is compact, the time derivatives of $B_{\varepsilon,2}(t)$ (in this section, we are only interested in this component) and $I_\varepsilon(t)$ can be computed by means of Eq.~\eqref{weqF}. To this end, we first observe that the time derivative of $M_2 := \int\!\mathrm{d} x\, \omega_\varepsilon(x,t)\, x_2^2$ is $\dot M_2 = \int\!
\mathrm{d} x \, \omega_\varepsilon(x,t) \,2 x_2 F^\varepsilon_2(x,t)$ (it is a conserved quantity in the absence of an external field, see Appendix \ref{app:a}), so that \begin{equation} \label{growth B} \dot B_{\varepsilon,2}(t) = |\log\varepsilon|\int\! \mathrm{d} x\,\omega_\varepsilon(x,t)\, \left(F^\varepsilon_2(x,t) +\int\!\mathrm{d} y\, \mc R_2(x,y)\, \omega_\varepsilon(y,t) \right), \end{equation} \begin{equation} \begin{split} \label{growth moment} \dot I_\varepsilon(t) & = 2 \int\! \mathrm{d} x\,\omega_\varepsilon(x,t)\, (x_2 - B_{\varepsilon,2}(t)) F^\varepsilon_2(x,t) \\ & \quad - 2 B_{\varepsilon,2}(t) \int\!\mathrm{d} x\, \omega_\varepsilon(x,t) \int\!\mathrm{d} y\, \mc R_2(x,y)\, \omega_\varepsilon(y,t) \,, \end{split} \end{equation} where we have used the expression Eq.~\eqref{decom_u} for $u(x,t)$, the identities \[ \int\! \mathrm{d} x\,\omega_\varepsilon(x,t) \,(x_2-B_{\varepsilon,2}(t)) = 0\,, \quad \int\!\mathrm{d} x\, \widetilde u(x,t)\,\omega_\varepsilon(x,t) = 0\,, \] (which follow from the definition of center of vorticity and the explicit form of $K(x-y)$ in Eq.~\eqref{vel-vor3}), and the fact that $L_2(x,y)=0$, see Eq.~\eqref{bc_bound}. \begin{lemma} \label{lem:2} The following estimate holds, \begin{equation} \label{Iee} I_\varepsilon(t) \le \frac{C}{|\log\varepsilon|^2} \qquad \forall\, t\in [0,T^0_\varepsilon]\,. \end{equation} \end{lemma} \begin{proof} By Eqs.~\eqref{2Lipsc}, \eqref{bc_bound2}, \eqref{growth B}, and \eqref{growth moment} we have that, for any $t\in [0,T^0_\varepsilon]$, $|\dot B_{\varepsilon,2}(t)| \le C/|\log\varepsilon|$ (hence $|B_{\varepsilon,2}(t)| \le C$) and \[ |\dot I_\varepsilon(t)| \le \frac{C}{|\log\varepsilon|} \int\! \mathrm{d} x\, |x_2-B_{\varepsilon,2}(t)| \, \omega_\varepsilon(x,t) + \frac{C}{|\log\varepsilon|^2} \le \frac{C}{|\log\varepsilon|^{3/2}}\sqrt{I_\varepsilon(t)} + \frac{C}{|\log\varepsilon|^2}\,, \] where in the last estimate we used the Cauchy-Schwarz inequality and Eq.~\eqref{w=1}.
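In more detail (setting, for this argument only, $\Phi_\varepsilon(t) := \max_{s\in [0,t]} I_\varepsilon(s)$), an integration of this bound on $[0,t]$ gives \[ \Phi_\varepsilon(t) \le I_\varepsilon(0) + \frac{Ct}{|\log\varepsilon|^{3/2}}\,\sqrt{\Phi_\varepsilon(t)} + \frac{Ct}{|\log\varepsilon|^2}\,, \] and solving this quadratic inequality in $\sqrt{\Phi_\varepsilon(t)}$ yields \[ \sqrt{I_\varepsilon(t)} \le \sqrt{\Phi_\varepsilon(t)} \le \frac{Ct}{|\log\varepsilon|^{3/2}} + \sqrt{I_\varepsilon(0)} + \frac{\sqrt{Ct}}{|\log\varepsilon|}\,. \]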
Eq.~\eqref{Iee} now follows by integration of the last differential inequality since the initial data imply $I_\varepsilon(0) \leq 4\varepsilon^2$. \end{proof} \begin{lemma} \label{lem:3} Recall $\Lambda_\varepsilon(t)=\mathop{\rm supp}\nolimits\omega_\varepsilon(\cdot,t)$ and define \begin{equation} \label{Rt} R_t:= \max\{|x_2-B_{\varepsilon,2}(t)|\colon x\in \Lambda_\varepsilon(t)\}\,. \end{equation} Given $x_0\in\Lambda_\varepsilon(0)$, let $x(x_0,t)$ be the solution to Eq.~\eqref{eqchar_nF} with initial condition $x(x_0,0) = x_0$ and suppose at time $t\in (0,T^0_\varepsilon]$ it happens that \begin{equation} \label{hstimv} |x_2(x_0,t)-B_{\varepsilon,2}(t)| = R_t\,. \end{equation} Then, at this time $t$, \begin{equation} \label{stimv} \frac{\mathrm{d}}{\mathrm{d} t} |x_2(x_0,t)- B_{\varepsilon,2}(t)| \leq \frac{C}{|\log\varepsilon|}+\frac{1}{\pi R_t |\log\varepsilon|} + \sqrt{\frac{C m_t(R_t/2)}{ \varepsilon^2 |\log\varepsilon|}}\,, \end{equation} where the function $m_t(\cdot)$ is defined by \begin{equation} \label{mt} m_t(h) = \int_{|y_2-B_{\varepsilon,2}(t)|>h}\!\mathrm{d} y\,\omega_\varepsilon(y,t)\,. \end{equation} \end{lemma} \begin{proof} We observe that the proof is similar to that given in \cite[Lemma 2.5]{BuM1}. 
Letting $x=x(x_0,t)$, by Eqs.~\eqref{eqchar_nF}, \eqref{decom_u}, \eqref{w=1}, and \eqref{growth B} we have, \begin{equation} \label{distance1} \begin{split} & \frac{\mathrm{d}}{\mathrm{d} t} |x_2(x_0,t)- B_{\varepsilon,2}(t)| = \big(u_2(x,t) + F^\varepsilon_2(x,t) - \dot B_{\varepsilon,2}(t)\big) \frac{x_2-B_{\varepsilon,2}(t)}{|x_2-B_{\varepsilon,2}(t)|} \\ & \qquad\qquad = V(x,t) \frac{x_2-B_{\varepsilon,2}(t)}{|x_2-B_{\varepsilon,2}(t)|} + \int\!\mathrm{d} y\, K_2(x-y)\, \omega_\varepsilon(y,t) \frac{x_2-B_{\varepsilon,2}(t)}{|x_2-B_{\varepsilon,2}(t)|}\,, \end{split} \end{equation} with \begin{equation} \label{j} \begin{split} V(x,t) & = F^\varepsilon_2(x,t) +\int\!\mathrm{d} z\, \mc R_2(x,z)\, \omega_\varepsilon(z,t) \\ & \quad - |\log\varepsilon| \int\! \mathrm{d} y\,\omega_\varepsilon(y,t)\, \left(F^\varepsilon_2(y,t)+\int\!\mathrm{d} z\, \mc R_2(y,z)\, \omega_\varepsilon(z,t) \right). \end{split} \end{equation} From Eqs.~\eqref{2Lipsc}, \eqref{bc_bound2}, and \eqref{w=1} we have \begin{equation} \label{distance4} |V(x,t)| \le \frac{C}{|\log\varepsilon|} \quad \forall\, t\in [0,T^0_\varepsilon]\,. \end{equation} For the last term in Eq.~\eqref{distance1}, we split the integration region into two parts, the set $A_1= \{y\in \Lambda_\varepsilon(t) \colon |y_2 - B_{\varepsilon,2}(t)|\le R_t/2\}$ and the set $A_2 = \{y\in \Lambda_\varepsilon(t) \colon R_t/2 < |y_2 - B_{\varepsilon,2}(t)| \le R_t\}$. Then, \begin{equation} \label{in A_1,A_2} \int\!\mathrm{d} y\, K_2(x-y)\, \omega_\varepsilon(y,t) \frac{x_2-B_{\varepsilon,2}(t)}{|x_2-B_{\varepsilon,2}(t)|} = H_1 + H_2\,, \end{equation} where \begin{equation} \label{in A_1} H_1 = \frac{x_2-B_{\varepsilon,2}(t)}{|x_2-B_{\varepsilon,2}(t)|} \int_{A_1}\! \mathrm{d} y\, K_2(x-y)\, \omega_\varepsilon(y,t) \end{equation} and \begin{equation} \label{in A_2} H_2 = \frac{x_2-B_{\varepsilon,2}(t)}{|x_2-B_{\varepsilon,2}(t)|} \int_{A_2}\! \mathrm{d} y\, K_2(x-y)\, \omega_\varepsilon(y,t)\,. 
\end{equation} We consider first the contribution due to the set $A_1$. Recalling Eq.~\eqref{vel-vor3}, after introducing the new variables $x'=x-B_\varepsilon(t)$, $y'=y-B_\varepsilon(t)$, we get, \begin{equation} \label{in H_11} |H_1| \leq \frac{1}{2\pi} \int_{|y_2'|\leq R_t/2}\! \mathrm{d} y'\, \frac{1}{|x'-y'|}\, \omega_\varepsilon(y'+B_\varepsilon(t),t)\,. \end{equation} From Eq.~\eqref{hstimv} we have $|x'_2| = R_t$, and hence $|y_2'| \le R_t/2$ implies $|x'-y'|\ge |x_2'-y_2'|\geq R_t/2$, so that \begin{equation} |H_1| \leq \frac{1}{\pi \,R_t} \int_{|y_2'|\leq R_t/2}\! \mathrm{d} y'\, \omega_\varepsilon(y'+B_\varepsilon(t),t) \leq \frac{1}{\pi R_t |\log\varepsilon|}\,. \label{H_14} \end{equation} We bound now $H_2$. Again by Eq.~\eqref{vel-vor3}, \begin{equation*} |H_2| \le \frac{1}{2\pi} \int_{A_2}\! \mathrm{d} y\, \frac 1{|x-y|} \, \omega_\varepsilon(y,t)\,. \end{equation*} The function $|x-y|^{-1}$ diverges monotonically as $y\to x$, and so the maximum of the integral is obtained when we rearrange the vorticity mass as close as possible to the singularity. By Eq.~\eqref{omega_t} and since, by Eq.~\eqref{mt}, $m_t(R_t/2)$ is equal to the total amount of vorticity in $A_2$, this rearrangement gives, \begin{equation} \label{h2} |H_2| \le \frac{3M\varepsilon^{-2}}{2\pi |\log\varepsilon|} \int_{\Sigma (0|r)}\!\mathrm{d} y'\, \frac{1}{|y'|} = \frac{3M\varepsilon^{-2}}{ |\log\varepsilon|} r \,, \end{equation} where the radius $r$ is such that $3\pi r^2 M/(\varepsilon^2|\log\varepsilon|) = m_t(R_t/2)$. The estimate Eq.~\eqref{stimv} now follows by Eqs.~\eqref{distance1}, \eqref{distance4}, \eqref{in A_1,A_2}, \eqref{H_14}, and \eqref{h2}. \end{proof} We determine now the behavior of the function $m_t (\cdot)$ introduced in Eq.~\eqref{mt} when its argument goes to $0$. The proof will be adapted from that of \cite[Proposition 3.4]{BuM2}. \begin{lemma} \label{lem:mt} Let $m_t$ be defined as in Eq.~\eqref{mt}.
For each $\ell>0$ and $k \in \big(0, \frac 14\big)$, \begin{equation} \label{smt} \lim_{\varepsilon\to 0} \, \max_{ t \in [0, T^0_\varepsilon]} \varepsilon^{-\ell} m_t \left(\frac{1}{|\log\varepsilon|^k} \right) = 0\,. \end{equation} \end{lemma} \begin{proof} Given $R\ge 2h^\alpha$, $h>0$, and \begin{equation} \label{alpha_delta} \alpha=\frac{1-k}{1+k}-\delta \, , \qquad \delta\in \left(0, \frac{1-2k}{1+k}\right) \, , \end{equation} let $W_{R,h}(x_2)$, with $x_2$ the second component of $x = (x_1, x_2)$, be a non-negative smooth function, such that \begin{equation} \label{W1} W_{R,h}(x_2) = \begin{cases} 1 & \text{if $|x_2|\le R$}, \\ 0 & \text{if $|x_2|\ge R+h$}, \end{cases} \end{equation} and its derivative $W_{R,h}'$ satisfies \begin{equation} \label{W2} | W_{R,h}'(x_2)| < \frac{C}{h}\,, \end{equation} \begin{equation} \label{W3} |W_{R,h}'(x_2)-W_{R,h}'(y_2)| < \frac{C}{h^2}\,|x_2-y_2|\leq \frac{C}{h^2}\,|x - y|\,. \end{equation} We introduce the quantity \begin{equation} \label{mass 1} \mu_t(R,h) = \int\! \mathrm{d} x \, \big[1-W_{R,h}(x_2-B_{\varepsilon,2}(t))\big]\, \omega_\varepsilon (x,t)\,, \end{equation} which is a mollified version of $m_t$ satisfying \begin{equation} \label{2mass 3} \mu_t(R,h) \le m_t(R) \le \mu_t(R-h,h)\,. \end{equation} Hence it is sufficient to prove \eqref{smt} with $\mu_t$ in place of $m_t$. Since the function $t\mapsto \mu_t(R,h)$ is differentiable, we can compute its time derivative, by Eq.~\eqref{weqF} with test function $f(x,t) =1- W_{R,h}(x_2-B_{\varepsilon,2}(t))$ and then using Eqs.~\eqref{decom_u} and \eqref{growth B}. We have, \begin{equation} \label{mu_t} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \mu_t(R,h) = & - \int\! 
\mathrm{d} x \, \nabla W_{R,h}(x_2-B_{\varepsilon,2}(t)) \cdot [ u(x,t)+F^\varepsilon(x,t)-\dot{B}_\varepsilon(t) ] \, \omega_\varepsilon(x,t) \\ = & - \int \mathrm{d} x \: W_{R,h}'(x_2-B_{\varepsilon,2}(t)) [u_2(x,t)+F^\varepsilon_2(x,t)-\dot{B}_{\varepsilon,2}(t) ] \, \omega_\varepsilon(x,t) \\ = & -H_3- H_4 \,, \end{aligned} \end{equation} with \begin{equation*} \begin{split} H_3 & = \int\! \mathrm{d} x\, W_{R,h}'(x_2-B_{\varepsilon,2}(t)) \int\!\mathrm{d} y \, K_2(x-y)\, \omega_\varepsilon(y,t)\, \omega_\varepsilon(x,t) \\ & = \frac 12 \int\! \mathrm{d} x \! \int\! \mathrm{d} y\, \omega_\varepsilon(x,t)\, \omega_\varepsilon(y,t) \\ & \quad \times \left[ W_{R,h}'(x_2-B_{\varepsilon,2}(t)) - W_{R,h}'(y_2-B_{\varepsilon,2}(t))\right] K_2(x-y) \,, \\ H_4 & = |\log\varepsilon|\int\! \mathrm{d} x\, W_{R,h}'(x_2-B_{\varepsilon,2}(t))\, \omega_\varepsilon(x,t) V(x,t)\,, \end{split} \end{equation*} where the antisymmetry of $K$ has been used to obtain the second expression for $H_3$, and $V(x,t)$ is defined in Eq.~\eqref{j}. Note that, in view of Eqs.~\eqref{distance4} and \eqref{W2}, and the fact that $W_{R,h}'(z)$ is zero if $|z|\leq R$, \begin{equation} \label{acca4} H_4 \le \frac{C}{h |\log\varepsilon|} m_t(R) \quad \forall\, t\in [0,T^0_\varepsilon]\,. \end{equation} Now we treat $H_3$. We introduce the new variables $x'=x-B_\varepsilon(t)$, $y'=y-B_\varepsilon(t)$ (as done previously), define $\widetilde\omega_\varepsilon(z,t) := \omega_\varepsilon(z+B_\varepsilon(t),t)$, and \[ f(x',y') := \frac 12 \widetilde\omega_\varepsilon(x',t)\, \widetilde\omega_\varepsilon(y',t) \, [W_{R,h}'(x_2')- W_{R,h}'(y_2')] K_2(x'-y') \,, \] whence $H_3 = \int\! \mathrm{d} x' \! \int\! \mathrm{d} y'\, f(x',y')$. We note that $f(x',y')$ is a symmetric function of $x'$ and $y'$ and that, by Eq.~\eqref{W1}, in order to be different from zero it is necessary that either $|x_2'|\ge R$ or $|y_2'|\ge R$. Therefore, \[ \begin{split} H_3 &= \bigg[ \int_{|x_2'| > R}\! \mathrm{d} x' \! \int\!
\mathrm{d} y' + \int\! \mathrm{d} x' \! \int_{|y_2'| > R}\! \mathrm{d} y' - \int_{|x_2'| > R}\! \mathrm{d} x' \! \int_{|y_2'| > R}\! \mathrm{d} y'\bigg]f(x',y') \\ & = 2 \int_{|x_2'| > R}\! \mathrm{d} x' \! \int\! \mathrm{d} y'\,f(x',y') - \int_{|x_2'| > R}\! \mathrm{d} x' \! \int_{|y_2'| > R}\! \mathrm{d} y'\,f(x',y') \\ & = H_3' + H_3'' + H_3'''\,, \end{split} \] with \[ \begin{split} H_3' & = 2 \int_{|x_2'| > R}\! \mathrm{d} x' \! \int_{|y_2'| \le R-h^\alpha}\! \mathrm{d} y'\,f(x',y') \,, \\ H_3''& = 2 \int_{|x_2'| > R}\! \mathrm{d} x' \! \int_{|y_2'| > R-h^\alpha}\! \mathrm{d} y'\,f(x',y')\,, \\ H_3''' & = - \int_{|x_2'| > R}\! \mathrm{d} x' \! \int_{|y_2'| > R}\! \mathrm{d} y'\,f(x',y')\,. \end{split} \] By the properties of $W_{R,h}$, we have $W_{R,h}'(y_2') =0$ for $|y_2'| \le R$. In particular, $W_{R,h}'(y_2') = 0$ for $|y_2'| \le R-h^\alpha$, hence \[ H_3' = \int_{|x_2'| > R}\! \mathrm{d} x' \, \widetilde\omega_\varepsilon(x',t) W_{R,h}'(x_2') \int_{|y_2'| \le R-h^\alpha}\! \mathrm{d} y'\, K_2(x'-y') \, \widetilde\omega_\varepsilon(y',t) \] and therefore, in view of Eq.~\eqref{W2}, \begin{equation} \label{a1'} |H_3'| \le \frac{C}{h} m_t(R) \sup_{|x_2'| > R} |A_3(x',t)|\,, \end{equation} with \[ A_3(x',t) = \int_{|y_2'| \le R-h^\alpha}\! \mathrm{d} y'\, K_2(x'-y') \, \widetilde\omega_\varepsilon(y',t) \,. \] We note that if $|x_2'| > R$ then $|y_2'| \le R-h^\alpha$ implies $|x'-y'|\ge |x_2'-y_2'|\ge h^\alpha$, hence \[ \begin{split} |A_3(x',t)| & \le \frac{1}{2\pi} \int_{|y_2'|\leq R-h^\alpha}\! \mathrm{d} y'\, \frac{\widetilde\omega_\varepsilon(y',t) }{|x'-y'|} \\ & \le \frac{1}{2 \pi h^\alpha} \int_{|y_2'|\le R-h^\alpha} \! \mathrm{d} y'\, \widetilde\omega_\varepsilon(y',t) \le \frac{1}{2\pi h^\alpha |\log\varepsilon|}\,. \end{split} \] We then obtain, by Eq.~\eqref{a1'}, \begin{equation} \label{H_14b} |H_3'| \le \frac{C}{h^{1+\alpha} |\log\varepsilon|} m_t(R)\,.
\end{equation} From Eq.~\eqref{W3}, using Chebyshev's inequality and $R\geq 2 h^\alpha$, \[ |H_3''| + |H_3'''| \le \frac{C}{h^2} \int_{|x_2'| \ge R}\! \mathrm{d} x' \! \int_{|y_2'| \ge R-h^\alpha}\! \mathrm{d} y'\,\widetilde\omega_\varepsilon(y',t) \, \widetilde\omega_\varepsilon(x',t) \le \frac{C I_\varepsilon(t)}{ h^2 R^2}m_t(R) \,. \] Finally, by Eq.~\eqref{Iee}, \begin{equation} \label{a1s} |H_3| \le C \left( \frac{1}{h^{1+\alpha}|\log\varepsilon|} + \frac{1}{h^2 R^2 |\log\varepsilon|^2} \right) m_t(R) \quad \forall\, t\in [0,T^0_\varepsilon]\,. \end{equation} From estimates Eqs.~\eqref{a1s} and \eqref{acca4}, recalling Eq.~\eqref{mu_t}, we get, \begin{equation} \label{equ_mm} \frac{\mathrm{d}}{\mathrm{d}t} \mu_t (R,h) \leq A_\varepsilon(R, h) m_t(R) \quad \forall\, t\in [0,T^0_\varepsilon]\,, \end{equation} where \[ A_\varepsilon(R, h) = C \left(\frac{1}{h^{1+\alpha} |\log\varepsilon|} + \frac{1}{h^2 R^2 |\log\varepsilon|^2} +\frac{1}{h |\log\varepsilon|}\right). \] Therefore, by Eqs.~\eqref{2mass 3} and \eqref{equ_mm}, \begin{equation} \mu_t (R,h) \le \mu_0 (R,h) + A_\varepsilon(R, h) \int_0^t {\mathrm{d}} s \, \mu_s (R-h, h) \quad \forall\, t\in [0,T^0_\varepsilon]\,. \end{equation} We assume now $\varepsilon$ sufficiently small, and we iterate the last inequality $n=\lfloor|\log\varepsilon|\rfloor$ times (denoting by $\lfloor a\rfloor$ the integer part of $a>0$), from \[ R_0 =\frac{1}{|\log\varepsilon|^{k}} \quad\text{to}\quad R_n=\frac{1}{2|\log\varepsilon|^{k}}\,, \] where $R_n = R_0 -n h$, and consequently \[ h= \frac{1}{2 n |\log\varepsilon|^{k}} \,. \] This procedure is legitimate because, in this range of $R$, the assumption $R\ge 2 h^\alpha$ (under which Eq.~\eqref{equ_mm} has been deduced) is satisfied.
Indeed, in view of Eq.~\eqref{alpha_delta} we have \[ h^\alpha \approx C \left( \frac{1}{|\log\varepsilon|^{1+k}} \right)^\alpha = C \left( \frac{1}{|\log\varepsilon|^{1+k}} \right)^{\frac{1-k}{1+k}-\delta} = \frac{C}{|\log\varepsilon|^{1-k-(1+k)\delta}}\,, \] with $k<1-k-(1+k)\delta$, and therefore, if $\varepsilon$ is small enough, $h^\alpha \ll R_n$. Moreover, the quantity $A_\varepsilon(R,h)$ is bounded by $C |\log\varepsilon |^q$ with $q<1$; in fact, \[ \begin{split} \frac{1}{h^{1+\alpha} |\log\varepsilon|} & \le C \frac{\left( |\log\varepsilon|^{k+1} \right)^{1+\alpha}}{|\log\varepsilon|} \le C |\log\varepsilon |^{1-\delta(k+1)} \,, \\ \frac{1}{h^2 R^2 |\log\varepsilon|^2} & \le C \frac{|\log\varepsilon|^{4k+2} }{|\log\varepsilon|^2} \leq C |\log\varepsilon|^{4k}\,, \\ \frac{1}{h |\log\varepsilon|} & \le C |\log\varepsilon|^{k} \,. \end{split} \] In conclusion, \[ \begin{split} \mu_t(R_0-h,h) & \le \mu_0(R_0-h,h) + \sum_{j=1}^{n-1} \mu_0(R_j,h) \frac{(C |\log\varepsilon|^q t)^j}{j!} \\ & \quad + \frac{(C |\log\varepsilon|^q )^{n}}{(n-1)!} \int_0^t\!\mathrm{d} s\, (t-s)^{n-1}\mu_s(R_{n},h) \quad \forall\, t\in [0,T^0_\varepsilon]\,. \end{split} \] Since $\Lambda_\varepsilon(0) \subset \Sigma(\zeta^0|\varepsilon)$, we can determine $\varepsilon$ small enough so that $\mu_0(R_j,h)=0$ for any $j=0,\ldots,n$, hence, for any $t\in [0, T^0_\varepsilon]$, \begin{equation} \label{mass 15'} \mu_t(R_0-h,h) \le \frac{(C |\log\varepsilon|^q)^{n}}{(n-1)!} \int_0^t\!\mathrm{d} s\, (t-s)^{n-1}\mu_s(R_{n},h) \le \frac{(C |\log\varepsilon|^q t)^{n}}{n!}\,, \end{equation} where in the last inequality we have used the trivial bound $\mu_s(R_{n},h) \le 1$. Therefore, using also Eq.~\eqref{2mass 3}, Stirling's formula, and $n=\lfloor|\log\varepsilon|\rfloor$, \[ m_t(R_0) \le \mu_t(R_0 -h,h) \le \frac{C}{|\log\varepsilon|^{(1-q)|\log\varepsilon|}} \quad \forall\, t\in [0,T^0_\varepsilon]\,, \] which implies Eq.~\eqref{smt}.
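For completeness, we spell out the role of Stirling's formula in the last step: since $n! \ge (n/e)^n$ and $n = \lfloor|\log\varepsilon|\rfloor \ge \frac12 |\log\varepsilon|$ for $\varepsilon$ small, for $t\le T$ we have \[ \frac{(C |\log\varepsilon|^q t)^{n}}{n!} \le \left( \frac{e\, C\, T\, |\log\varepsilon|^q}{n} \right)^{n} \le \left( \frac{C'}{|\log\varepsilon|^{1-q}} \right)^{n}, \] with $C' = 2e\,C\,T$, and the right-hand side vanishes faster than any fixed power of $\varepsilon$ as $\varepsilon\to 0$, since $(1-q)\log|\log\varepsilon|\to\infty$.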
\end{proof} \begin{remark} \label{rem:rc} In \cite[Prop.~3.4]{BuM2} a similar concentration result is deduced for the vorticity mass outside a small disk, but only for small times. This is due to the non-Lipschitz term Eq.~\eqref{bc_bound} (not present in our case), which leads to an estimate like Eq.~\eqref{mass 15'} but with $q=1$. We also remark that a weaker estimate $C/|\log\varepsilon|^\theta$ with $\theta\in (1,2)$ for the axial moment of inertia instead of Eq.~\eqref{Iee} would also lead to Eq.~\eqref{mass 15'} (choosing $k\in(0,\frac{\theta-1}4)$ in this case). \end{remark} \begin{proof}[Proof of Eq.~\eqref{stimGa}] In view of Eqs.~\eqref{2Lipsc}, \eqref{bc_bound2}, and \eqref{w=1}, from Eq.~\eqref{growth B}, and since $|B_{\varepsilon,2}(0)-r_0|\le \varepsilon$, we have, \begin{equation} \label{compact2} |B_{\varepsilon,2}(t) - r_0| \le \frac{C}{|\log\varepsilon|} \quad \forall\, t\in [0,T^0_\varepsilon]\,. \end{equation} Therefore, it is sufficient to show that, given $k\in \big(0,\frac 14\big)$, \begin{equation} \label{lamb} |x_2 - B_{\varepsilon,2}(t)| \le \frac{C}{|\log\varepsilon|^k} \quad \forall\, x\in \Lambda_\varepsilon(t) \quad \forall\, t\in [0,T^0_\varepsilon]\,, \end{equation} provided $\varepsilon$ is small enough. To this end, we first notice that, in view of Lemma \ref{lem:3} and Eq.~\eqref{Rt}, for any $x_0\in \Lambda_\varepsilon(0)$ and $t\in [0,T^0_\varepsilon]$ we have $|x_2(x_0,t) -B_{\varepsilon,2}(t)| \le R_t$, and whenever $|x_2(x_0,t)- B_{\varepsilon,2}(t)|=R_t$ the differential inequality Eq.~\eqref{stimv} holds true.
We claim that this implies \begin{equation} \label{suppRt} \Lambda_{\varepsilon}(t) \subset \{x\in \mathbb{R}^2\colon |x_2-B_{\varepsilon,2}(t)|< \rho(t) \} \quad \forall\, t\in [s_0,s_1] \quad \forall\, [s_0,s_1] \subseteq [0,T^0_\varepsilon]\,, \end{equation} provided $\rho(t)$ solves \begin{equation} \label{eqdiffRt} \dot{\rho}(t) = \frac{2C}{|\log\varepsilon|}+\frac{2}{\pi |\log\varepsilon|\rho(t)} + g(t)\,, \end{equation} with initial datum $\rho(s_0) > R_{s_0}$ and $g(t)$ any smooth function which is an upper bound for the last term in Eq.~\eqref{stimv}. Indeed, $|x_2-B_{\varepsilon,2}(s_0)| < \rho(s_0)$ for any $x\in \Lambda_\varepsilon(s_0)$ and, arguing by contradiction, if there were a first time $t_*\in (s_0,s_1]$ such that $|x_2(x_0,t_*)- B_{\varepsilon,2}(t_*)| = \rho(t_*)$ for some $x_0\in \Lambda_\varepsilon(0)$, then necessarily $\rho(t_*) = R_{t_*}$; hence, by Eq.~\eqref{stimv}, $\dot \rho (t_*)$ would be strictly larger than $\frac{\mathrm{d}}{\mathrm{d} t}|x_2(x_0,t)- B_{\varepsilon,2}(t)|\big|_{t=t_*}$, which contradicts the characterization of $t_*$ as the first time at which the graph of $t\mapsto |x_2(x_0,t)- B_{\varepsilon,2}(t)|$ crosses that of $t\mapsto \rho(t)$. Now, let \[ t_0 = \sup\{t\in [0,T^0_\varepsilon] \colon R_s < 3|\log\varepsilon|^{-k} \;\; \forall\, s\in [0,t]\}\,. \] If $t_0 = T^0_\varepsilon$ then Eq.~\eqref{lamb} is already achieved, otherwise we set \[ t_1 = \sup\{t\in [t_0,T^0_\varepsilon] \colon R_s > 2|\log\varepsilon|^{-k} \;\; \forall\, s\in [t_0,t]\} \] and consider $\rho(t)$ as in Eq.~\eqref{eqdiffRt}, relative to the interval $[s_0,s_1] = [t_0,t_1]$ and such that \[ \rho(t_0) = 4|\log\varepsilon|^{-k}\,, \quad g(t) \le \frac{C\varepsilon^{(\ell -2)/2}}{|\log\varepsilon|^{1/2}} \qquad \forall\, t\in [t_0,t_1]\,, \] for a fixed $\ell >2$.
We note that $\rho(t_0) > R_{t_0}=3|\log\varepsilon|^{-k}$ and that, since $R_t\ge 2|\log\varepsilon|^{-k}$ for any $t\in [t_0,t_1]$, Eq.~\eqref{smt} guarantees that the above condition on $g(t)$ is compatible with the requirement that the latter be an upper bound for the last term in Eq.~\eqref{stimv}. Now, as $\rho(t) \ge R_t \ge 2|\log\varepsilon|^{-k}$ for any $t\in [t_0,t_1]$, the second term in the right-hand side of Eq.~\eqref{eqdiffRt} is bounded by $C|\log\varepsilon|^{k-1}$, and therefore, since $k\in \big(0,\frac 14 \big)$, from Eq.~\eqref{eqdiffRt} we deduce that \[ \dot \rho(t) \leq \frac{C}{|\log\varepsilon|^{3/4}} \qquad \forall\, t\in [t_0,t_1]\,, \] which, integrated from $t_0$ to $t$, gives \[ \rho(t) \le \rho(t_0) + \frac{CT }{|\log\varepsilon|^{3/4}} \le \frac{C}{|\log\varepsilon|^k} \qquad \forall\, t\in [t_0,t_1]\,. \] Clearly, if $t_1=T^0_\varepsilon$ we are done. Otherwise, we can repeat the same argument in the intervals $[t_0',t_1'] \subseteq [t_1,T^0_\varepsilon]$ defined analogously to $[t_0,t_1]$ (if any). Eq.~\eqref{lamb} is thus proved. \end{proof} \section{The reduced system: analysis of the axial motion} \label{sec:4} In this section we prove item (2) of Theorem \ref{thm:2}, remarking that, as in the previous section, we always assume $a=1$. We first state a concentration result, which shows that a large part of the vorticity remains confined in a disk whose size vanishes as $\varepsilon\to 0$. \begin{lemma} \label{lem:4} Consider the reduced system defined by Eqs.~\eqref{u=}, \eqref{cons-omr_n}, and \eqref{eqchar_nF}.
Under Assumption \ref{ass:1}, with Eq.~\eqref{MgammaFb} in place of Eq.~\eqref{MgammaF}, for each $T>0$ there are $\varepsilon_1\in (0,1)$, $C_1>0$ and $q_\varepsilon(t)\in \bb R^2$, such that \begin{equation} \label{lem1b} |\log\varepsilon| \int_{\Sigma(q_\varepsilon(t), \varepsilon|\log\varepsilon|)}\!\mathrm{d} x\, \omega_\varepsilon(x,t) \ge 1 - \frac{C_1}{\log|\log\varepsilon|} \qquad \forall\, t\in [0,T] \quad \forall\, \varepsilon\in (0,\varepsilon_1]\,. \end{equation} \end{lemma} This is the content of \cite[Lemma 3.1]{BuM2} and it is an extension of the analogous result in \cite{BCM00}, where the case without external field is considered. However, since the proof given in \cite{BuM2} contains an error, we provide the corrected proof in Appendix \ref{app:a}. The following proposition shows that in the limit $\varepsilon\to 0$ the center of vorticity moves with constant speed along the $x_1=z$ axis. \begin{proposition} \label{prop:2} Under Assumption \ref{ass:1}, for any $T>0$, \begin{equation} \label{bz} \lim_{\varepsilon\to 0} \max_{t\in [0, T]} |B_\varepsilon(t) - \zeta(t)| = 0\,, \end{equation} with $\zeta(t)$ as in Eq.~\eqref{zett}. \end{proposition} \begin{proof} Since $\Lambda_\varepsilon(t)$ is compact, we can use Eq.~\eqref{weqF} as in the derivation of Eq.~\eqref{growth B} and compute, \begin{equation} \label{bpunto} \begin{split} \dot B_\varepsilon(t) & = |\log\varepsilon|\frac{\mathrm{d}}{\mathrm{d} t} \int\!\mathrm{d} x\, x\,\omega_\varepsilon(x,t) = |\log\varepsilon| \int\!\mathrm{d} x\, \omega_\varepsilon(x,t) \, (u+F^\varepsilon)(x,t) \\ & = |\log\varepsilon|\int\!\mathrm{d} x\, \omega_\varepsilon(x,t) \, F^\varepsilon (x,t) \\ & \quad + |\log\varepsilon|\int\!\mathrm{d} x\, \omega_\varepsilon(x,t) \int\!\mathrm{d} y\, [L(x,y) + \mc R(x,y)]\omega_\varepsilon(y,t)\,, \end{split} \end{equation} where we used Eq.~\eqref{decom_u} and $\int\!\mathrm{d} x\, \omega_\varepsilon(x,t) \widetilde u(x,t) =0$.
In what follows, we fix $T>0$ and $k\in \big(0,\frac 12\big)$, and assume the parameter $\varepsilon$ so small that Eq.~\eqref{lem1b} holds and Eq.~\eqref{stimGa} implies $T^0_\varepsilon=T$. Therefore, from Eqs.~\eqref{bpunto} and \eqref{2Lipsc}, in view of Eqs.~\eqref{bc_bound}, \eqref{bc_bound2}, and \eqref{lrest}, we have, \begin{equation} \label{b12} |\dot B_{\varepsilon,1}(t) - Q_\varepsilon(t)| + |\dot B_{\varepsilon,2}(t)| \le \frac{C}{|\log\varepsilon|}\quad \forall\, t\in [0, T] \,, \end{equation} where \[ Q_\varepsilon(t) := |\log\varepsilon|\int\!\mathrm{d} x\, \omega_\varepsilon(x,t) \frac{1}{4\pi x_2} \int\!\mathrm{d} y\, \log\frac {1+|x-y|}{|x-y|} \omega_\varepsilon(y,t) \,. \] To determine the behavior of $Q_\varepsilon(t)$ as $\varepsilon\to 0$, we decompose, \[ Q_\varepsilon(t) = Q_\varepsilon^1(t) + Q_\varepsilon^2(t)\,, \] with \[ \begin{split} Q_\varepsilon^1(t) & := |\log\varepsilon| \int_{\Sigma(q_\varepsilon(t), \varepsilon|\log\varepsilon|)}\!\mathrm{d} x\, \omega_\varepsilon(x,t) \\ & \quad \times \frac{1}{4\pi x_2} \int_{\Sigma(q_\varepsilon(t), \varepsilon|\log\varepsilon|)}\!\mathrm{d} y\, \log\frac {1+|x-y|}{|x-y|} \omega_\varepsilon(y,t)\,. \end{split} \] The rest $Q_\varepsilon^2(t) = Q_\varepsilon(t) - Q_\varepsilon^1(t)$ is the sum of three terms, each of which is the integral of a function which, in view of Eq.~\eqref{lrest}, is bounded by \[ \mc G(x,y) := \frac{1}{2\pi r_0} \log\frac {1+|x-y|}{|x-y|} \omega_\varepsilon(x,t)\omega_\varepsilon(y,t)\,, \] and in the integration domain at least one of the variables $x$ and $y$ lies in the set $\Sigma(q_\varepsilon(t), \varepsilon|\log\varepsilon|)^\complement$.
Therefore, since $\mc G$ is a symmetric function, \[ \begin{split} Q_\varepsilon^2(t) & \le \frac{3|\log\varepsilon|}{2\pi r_0} \int_{\Sigma(q_\varepsilon(t),\varepsilon|\log\varepsilon|)^\complement}\!\mathrm{d} x\, \omega_\varepsilon(x,t) \int\!\mathrm{d} y\, \log\frac {1+|x-y|}{|x-y|} \omega_\varepsilon(y,t) \\ & \le \frac{C}{\log|\log\varepsilon|} \,, \end{split} \] where we first bounded the $\mathrm{d} y$-integral as done in Eq.~\eqref{311b}, and then we used Eq.~\eqref{lem1b}. Concerning $Q_\varepsilon^1(t)$, we obtain a lower bound by bounding the function $\frac{1}{4\pi x_2}\log\frac {1+|x-y|}{|x-y|}$ from below on the domain of integration and applying again Eq.~\eqref{lem1b}, \begin{equation} \label{q2} \begin{split} Q_\varepsilon^1(t) & \ge \frac{|\log\varepsilon|}{4\pi (q_{\varepsilon,2}(t)+\varepsilon|\log\varepsilon|)} \log\frac {1+2\varepsilon|\log\varepsilon|}{2\varepsilon|\log\varepsilon|} \bigg(\int_{\Sigma(q_{\varepsilon}(t),\varepsilon|\log\varepsilon|)}\!\mathrm{d} x\, \omega_\varepsilon(x,t)\bigg)^2 \\ & \ge \frac{|\log\varepsilon|}{4\pi (q_{\varepsilon,2}(t)+\varepsilon|\log\varepsilon|)} \log\frac {1+2\varepsilon|\log\varepsilon|}{2\varepsilon|\log\varepsilon|} \frac{1}{|\log \varepsilon|^2}\bigg( 1 - \frac{C_1}{\log|\log\varepsilon|}\bigg)^2\,.
\end{split} \end{equation} On the other hand, by Eqs.~\eqref{omega_t} and \eqref{311b}, we can obtain an upper bound for $Q_\varepsilon^1(t)$, \begin{equation} \label{q3} \begin{split} Q_\varepsilon^1(t) & \le \frac{1}{4\pi (q_{\varepsilon,2}(t)-\varepsilon|\log\varepsilon|)} \sup_x \int\!\mathrm{d} y\, \log\frac {1+|x-y|}{|x-y|} \omega_\varepsilon(y,t) \\ & \le \frac{3M}{2\varepsilon^2 |\log\varepsilon|(q_{\varepsilon,2}(t)-\varepsilon|\log\varepsilon|)} \bigg\{\frac{\bar\rho^2}{2} \log\frac{1+\bar\rho}{\bar\rho} - \frac 12 \int_0^{\bar\rho}\!\mathrm{d} \rho\, \frac{\rho}{1+\rho} \bigg\}\,, \end{split} \end{equation} with $\bar\rho$ such that $3\pi\bar\rho^2 M/(\varepsilon^2|\log\varepsilon|)= 1/|\log\varepsilon|$. Now, in view of Eq.~\eqref{lem1b}, the disk $\Sigma(q_\varepsilon(t), \varepsilon|\log\varepsilon|)$ must have nonempty intersection with $\Lambda_\varepsilon(t)$. Therefore, since we are assuming $\varepsilon$ so small that $T^0_\varepsilon=T$, from Eq.~\eqref{stimGa} we deduce that \begin{equation} \label{q2r0} \max_{t\in[0,T]}|q_{\varepsilon,2}(t)-r_0| \leq \frac{C}{|\log\varepsilon|^k}+ \varepsilon |\log\varepsilon|\,. \end{equation} We conclude that the right-hand sides of both Eqs.~\eqref{q2} and \eqref{q3} converge to $1/(4\pi r_0)$ as $\varepsilon\to 0$, so that, in view of Eq.~\eqref{b12}, \begin{equation} \label{b1b} \lim_{\varepsilon\to 0} \max_{t\in [0, T]} \bigg|B_{\varepsilon,1}(t) - \bigg(z_0 +\frac{t}{4\pi r_0}\bigg) \bigg| = 0 \,, \end{equation} which, together with Eq.~\eqref{compact2}, proves Eq.~\eqref{bz} (recall we fixed $a=1$). \end{proof} From Eq.~\eqref{lem1b} and Proposition \ref{prop:2}, the proof of item (2) of Theorem \ref{thm:2} is completed if we show that \begin{equation} \label{q1b1} \lim_{\varepsilon\to 0} \sup_{t\in [0,T]} [B_\varepsilon(t) - q_\varepsilon(t)] = 0 \end{equation} (actually, by Eq.~\eqref{q2r0}, the convergence of the second component is already known, but this does not shorten the proof).
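As a purely numerical aside (not part of the argument), one can illustrate how slowly the bounds in Eqs.~\eqref{q2} and \eqref{q3} approach $1/(4\pi r_0)$: the convergence is only logarithmic in $\varepsilon$. The sketch below evaluates the Eq.~\eqref{q2}-style lower bound with the simplifying (hypothetical) choices $q_{\varepsilon,2}(t)+\varepsilon|\log\varepsilon| \to r_0$ and the factor $(1-C_1/\log|\log\varepsilon|)^2$ dropped, and reports its ratio to the limit value.

```python
import math

def lower_bound_ratio(eps, r0=1.0):
    # Ratio of the simplified Eq. (q2)-style lower bound to its limit
    # 1/(4*pi*r0); by construction the r0-dependence cancels.
    L = abs(math.log(eps))
    bound = (L / (4 * math.pi * r0)) \
        * math.log((1 + 2 * eps * L) / (2 * eps * L)) / L**2
    return bound * 4 * math.pi * r0

# logarithmic convergence: the ratio creeps towards 1 only as eps -> 0
for eps in (1e-6, 1e-30, 1e-300):
    print(eps, lower_bound_ratio(eps))
```

This makes concrete why quantitative statements in the proof are phrased in powers of $|\log\varepsilon|$ rather than powers of $\varepsilon$.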
To this aim, we set $\Sigma_t = \Sigma(q_\varepsilon(t), \varepsilon|\log\varepsilon|)$ and compute, \[ \begin{split} |B_\varepsilon(t) - q_\varepsilon(t)| & \le |\log\varepsilon|\int\! \mathrm{d} x\, |x - q_\varepsilon(t)| \omega_\varepsilon(x,t) \\ & = |\log\varepsilon|\int_{\Sigma_t}\! \mathrm{d} x\, |x - q_\varepsilon(t)| \omega_\varepsilon(x,t) + |\log\varepsilon|\int_{\Sigma_t^\complement}\! \mathrm{d} x\, |x - q_\varepsilon(t)| \omega_\varepsilon(x,t) \\ & \le \varepsilon |\log\varepsilon| + \frac{C_1}{\log|\log\varepsilon|} |B_\varepsilon(t) - q_\varepsilon(t)| \\ & \quad + |\log\varepsilon|\int_{\Sigma_t^\complement}\! \mathrm{d} x\, |x - B_\varepsilon(t)| \omega_\varepsilon(x,t)\,, \end{split} \] where we used Eq.~\eqref{lem1b}. Therefore, by assuming $\varepsilon$ so small that $2C_1 \le \log|\log\varepsilon|$, \[ \begin{split} |B_\varepsilon(t) - q_\varepsilon(t)| & \le 2 \varepsilon |\log\varepsilon| \\ & \quad + 2 \sqrt{|\log\varepsilon|\int_{\Sigma_t^\complement}\! \mathrm{d} x\, \omega_\varepsilon(x,t)} \sqrt{|\log\varepsilon|\int_{\Sigma_t^\complement}\! \mathrm{d} x\, |x - B_\varepsilon(t)|^2 \omega_\varepsilon(x,t)} \\ & \le 2 \varepsilon |\log\varepsilon| + 2 \sqrt{\frac{C_1}{\log|\log\varepsilon|}} \sqrt{ |\log\varepsilon| J_\varepsilon(t)}\,, \end{split} \] where we applied again Eq.~\eqref{lem1b} and the Cauchy--Schwarz inequality, and introduced the moment of inertia with respect to the center of vorticity, defined as \begin{equation} \label{J} J_\varepsilon(t) = \int \!\mathrm{d} x\, |x-B_\varepsilon(t)|^2 \omega_\varepsilon(x,t)\,. \end{equation} Now, we claim that \begin{equation} \label{Jst} J_\varepsilon(t)\le \frac{C}{|\log\varepsilon|} \quad \forall\, t\in [0,T]\,, \end{equation} from which Eq.~\eqref{q1b1} follows in view of the above estimate on $|B_\varepsilon(t) - q_\varepsilon(t)|$.
To prove the claim, we compute the time derivative of $J_\varepsilon(t)$, by using Eq.~\eqref{weqF}, \[ \dot J_\varepsilon(t) = 2\int \!\mathrm{d} x\, \omega_\varepsilon(x,t) \, (x-B_\varepsilon(t)) \cdot (u(x,t)+F^\varepsilon(x,t) - \dot B_\varepsilon(t)) \,, \] so that, in view of Eq.~\eqref{bpunto}, \[ \begin{split} \dot J_\varepsilon(t) & = 2 \int \mathrm{d} x \, \omega_\varepsilon(x,t) \left[ u(x,t)-|\log\varepsilon|\int \mathrm{d} y \, \omega_\varepsilon(y,t) \, u(y,t) \right] \cdot (x-B_\varepsilon(t)) \\ & \quad + 2 \int \mathrm{d} x \, \omega_\varepsilon(x,t) \left[ F^\varepsilon(x,t)-|\log\varepsilon|\int \mathrm{d} y \, \omega_\varepsilon(y,t) \, F^\varepsilon(y,t) \right] \cdot (x-B_\varepsilon(t)) \,. \end{split} \] We consider first the term containing $F^\varepsilon$ and note that, by definition of $B_\varepsilon(t)$, \[ \begin{split} & \int \mathrm{d} x\, \omega_\varepsilon(x,t)\, (x-B_\varepsilon(t)) \cdot \int\! \mathrm{d} y \, \omega_\varepsilon(y,t)\, F^\varepsilon (y,t) = 0 \,, \\ & \int\! \mathrm{d} x\, \omega_\varepsilon(x,t)\, (x-B_\varepsilon(t)) \cdot F^\varepsilon(B_\varepsilon(t), t) =0 \,. \end{split}\] We thus obtain, \[ \begin{split} & 2 \left|\int\! \mathrm{d} x \, \omega_\varepsilon(x,t) \left[ F^\varepsilon(x,t)-|\log\varepsilon|\int\!\mathrm{d} y\, \omega_\varepsilon(y,t) \, F^\varepsilon (y,t) \right] \cdot (x-B_\varepsilon(t)) \right| \\ & \quad = 2 \left| \int\! \mathrm{d} x \, \omega_\varepsilon(x,t) \left[ F^\varepsilon(x,t)- F^\varepsilon (B_\varepsilon(t),t) \right] \cdot (x-B_\varepsilon(t)) \right| \\ & \quad \le 2\int\! \mathrm{d} x \, \omega_\varepsilon(x,t)\, \frac{ L}{|\log\varepsilon|} |x-B_\varepsilon(t)|^2 \le \frac{2L}{|\log\varepsilon|} \, J_\varepsilon(t)\,, \end{split} \] where, in the last line, we used Eq.~\eqref{2Lipsc}. For the term containing $u$, we have analogously, \[ \int\! \mathrm{d} x\, \omega_\varepsilon(x,t) \, (x-B_\varepsilon(t)) \cdot \int\! \mathrm{d} y \, \omega_\varepsilon(y,t)\, u(y,t) =0\,. 
\] Moreover, by the antisymmetry of $K$ and using Eq.~\eqref{decom_u}, \begin{equation} \label{antisym} \int\! \mathrm{d} x \, \omega_\varepsilon(x,t) \, \widetilde{u}(x,t) = \int\! \mathrm{d} x \int\! \mathrm{d} y \, \omega_\varepsilon(x,t)\,\omega_\varepsilon(y, t) \, K(x-y) = 0, \end{equation} so that, as $(x-y) \cdot K(x-y) =0$, \[ \begin{split} \int\! \mathrm{d} x \, \omega_\varepsilon(x,t) \, x \cdot \widetilde{u}(x,t) = & \int\! \mathrm{d} x \int\! \mathrm{d} y \, \omega_\varepsilon(x,t)\,\omega_\varepsilon(y, t) \, x \cdot K(x-y) \\ = & \int\! \mathrm{d} x \int\! \mathrm{d} y \, \omega_\varepsilon(x,t)\,\omega_\varepsilon(y, t) \, y \cdot K(x-y)\,, \end{split} \] which implies that this integral is also zero, by the antisymmetry of $K$. Therefore, \[ \begin{split} & 2 \left| \,\int \mathrm{d} x \, \omega_\varepsilon(x,t) \left[ u(x,t)-|\log\varepsilon|\int \mathrm{d} y \, \omega_\varepsilon(y,t) \, u(y,t) \right] \cdot (x-B_\varepsilon(t)) \, \right| \\ & \quad \le 2 \int\! \mathrm{d} x \, \omega_\varepsilon(x,t) \left|\int\!\mathrm{d} y\, L(x,y)\, \omega_\varepsilon(y,t) + \int\!\mathrm{d} y\, \mc R(x,y)\, \omega_\varepsilon(y,t) \right| \, |x-B_\varepsilon(t)| \\ & \quad \le C \int\! \mathrm{d} x \, \omega_\varepsilon(x,t) \, |x-B_\varepsilon(t)| \le \frac{C}{|\log\varepsilon|^{1/2}} \, \sqrt{J_\varepsilon(t)}\,, \end{split} \] where we have used Eq.~\eqref{bc_bound2} and the Cauchy--Schwarz inequality. In conclusion, \[ |\dot J_\varepsilon(t)| \le \frac{2L}{|\log\varepsilon|}\,J_\varepsilon(t) +\frac{C}{|\log\varepsilon|^{1/2}} \, \sqrt{J_\varepsilon(t)}\,. \] Recalling that the initial data imply $J_\varepsilon(0) \le 4\varepsilon^2$, this differential inequality implies Eq.~\eqref{Jst}. The proof of item (2) of Theorem \ref{thm:2} is thus completed.
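For the reader's convenience, we sketch the standard Gr\"onwall-type argument behind the last step (on any interval where $J_\varepsilon>0$; the degenerate case is handled by continuity). Setting $u_\varepsilon = \sqrt{J_\varepsilon}$,

```latex
\[
\dot u_\varepsilon(t) = \frac{\dot J_\varepsilon(t)}{2\sqrt{J_\varepsilon(t)}}
\le \frac{L}{|\log\varepsilon|}\, u_\varepsilon(t)
  + \frac{C}{2\,|\log\varepsilon|^{1/2}}\,,
\]
so that, by Gr\"onwall's lemma and $u_\varepsilon(0) \le 2\varepsilon$,
\[
u_\varepsilon(t) \le \mathrm{e}^{LT/|\log\varepsilon|}
\bigg( 2\varepsilon + \frac{CT}{2\,|\log\varepsilon|^{1/2}} \bigg)
\le \frac{C'}{|\log\varepsilon|^{1/2}}
\qquad \forall\, t\in [0,T]\,,
\]
whence $J_\varepsilon(t) = u_\varepsilon(t)^2 \le C/|\log\varepsilon|$.
```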
\section{Introduction} For all the successes of modern physics over the last century-and-a-half, it has left us with three apparently incompatible branches: the nonlinear and deterministic General Relativity, the linear but indeterministic Quantum Theory, and the nonlinear and deterministic, but uncomputable, Chaos Theory. For us to have a Theory of Everything that describes all observed physical phenomena, we need a way to at least unite the first two, so we can describe physical phenomena at any scale. However, due to their differing takes on the determinacy of the universe, this has so far proved difficult. Invariant Set Theory (IST) attempts to unify these three disparate branches by using insight from Chaos Theory to create a fully local and deterministic model of quantum phenomena \cite{Palmer1995Spin,Palmer2009ISP,Palmer2011ISH,Palmer2012Butterfly,Palmer2012Quantum,Palmer2014Lorenz,Palmer2016IST,Palmer2017Gravitational,Palmer2018Experimental,Palmer2019Bell,Palmer2020Discretization,Palmer2020FQXi}. It does so by assuming that the universe is a deterministic dynamical system evolving precisely on a fractal invariant set in state space. The natural metric to describe distances on a fractal set is the $p$-adic metric. This replaces the standard Euclidean metric of distance between states in state space. A consequence of this is that putative counterfactual states which lie in the fractal gaps of the invariant set are to be considered distant from states which do lie on the invariant set, even though from a Euclidean perspective such distances may appear small. Given the uncomputability of the possible states on any given fractal attractor, we cannot in advance distinguish between states allowed and disallowed by this metric; hence, in IST, the appearance of randomness despite the underlying determinism. $p$-adic numbers form a backbone of modern number theory and as such allow quantum physics to be described within a finite number-theoretic framework.
An example is the notion of complementarity which underpins the uncertainty principle in quantum mechanics. In Invariant Set Theory, complementarity emerges from number-theoretic properties of trigonometric functions, for example that $\cos \phi$ is not a rational number when $\exp(i \phi)$ is a primitive $p$th root of unity. The complex Hilbert space of standard quantum mechanics arises as a singular limit of Invariant Set Theory when $p$ is set equal to infinity. However, despite showing how the vast majority of quantum phenomena can be described deterministically, the theory deviates from standard quantum physics in some of its predictions---mainly in ways which stem from the necessary finiteness of the $p$-adic metric used. In this paper, we give these key points of deviation, and investigate the extent to which they could be used to experimentally test the theory. \section{Entanglement Limits} In standard quantum theory, there is no limit to the number of quantum objects which can be maximally entangled; in IST, however, there is. Here, we codify this limit, and design experiments to test if it can be probed. For this, we use the $M$-qubit W state \cite{Dur2000Three,Rai2009Possibility}, \begin{equation}\label{EqW} \ket{W^M}=\frac{1}{\sqrt{M}}\sum^{M-1}_{i=0}\ket{0}^{\otimes i}\ket{1}\ket{0}^{\otimes (M-1-i)} \end{equation} (where $\ket{\psi}^{\otimes M}$ is the tensor product of $\ket{\psi}$ with itself $M$ times). For instance, the $M=3$ W state is \begin{equation} \ket{W^3}=\frac{\ket{100}+\ket{010}+\ket{001}}{\sqrt{3}}\,. \end{equation} The W state is a maximally entangled state of $M$ qubits---and in standard quantum theory, there is no limit to how high $M$ can be. However, in IST, the large-but-finite dimension of the $p$-adic metric provides a limit: in this case, to the number of qubits that can be maximally entangled.
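As an illustrative sketch (our own construction, using a sparse amplitude representation; not part of the original analysis), the W state of Eq.~\eqref{EqW} can be built and checked numerically:

```python
import math

def w_state(M):
    """Return the nonzero amplitudes of |W^M> as a dict over bitstrings."""
    amp = 1.0 / math.sqrt(M)
    state = {}
    for i in range(M):
        bits = ['0'] * M
        bits[i] = '1'          # exactly one excitation, in mode i
        state[''.join(bits)] = amp
    return state

w3 = w_state(3)
# |W^3> = (|100> + |010> + |001>)/sqrt(3)
print(sorted(w3))                       # ['001', '010', '100']
norm = sum(a * a for a in w3.values())
print(round(norm, 12))                  # 1.0
```

The representation stores only the $M$ nonzero amplitudes rather than all $2^M$, which is what makes the state tractable even for the large $M$ discussed below.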
For multiple-qubit entanglement, this limit is codified in \cite{Palmer2020Discretization} as a maximum of $\log_2N$ maximally entangled qubits in a $p$-adic system where the equatorial great circle of the Bloch sphere consists of $N$ equally-spaced discrete points. \begin{figure} \centering \includegraphics[width=\linewidth]{InfBSEntgl.pdf} \caption{The first 4 iterates of a set-up to create the entangled state $\ket{W^M}$ (as given in Eq.\ref{EqW}), where $M=2^I$ at the $I^{th}$ iterate. The diagonal blue lines are 50:50 beamsplitters, the diagonal grey lines mirrors, the yellow oval a single-photon source, and the black lines the possible paths of the photon. Given this maximally entangles $2^I$ qubits, IST predicts entanglement generated by an experiment like this should begin to fail after $I=\log_2\log_2N$ iterations, where the two spherical dimensions of the Bloch sphere are each $N$-discrete. We can test whether this entanglement holds or fails by putting mirrors at the ends of each path: if the photon returns with 100$\%$ probability to the input port, it was maximally entangled; if each beamsplitter splits it evenly, such that it only returns $2^{-I}$ of the time, the entanglement has decayed completely; return probabilities in between indicate intermediate levels of entanglement decay.} \label{fig:InfBS} \end{figure} A system of maximally-entangled photon-vacuum qubits can be created using a single photon and a number of mirrors and 50:50 beamsplitters, as shown in Fig.\ref{fig:InfBS}. This naturally forms a W state across $M$ qubits, and, by standard quantum theory, could in principle be extended to arbitrarily large $M$. However, this disagrees with IST, which allows a maximum of $M=\log_2N$ entangled qubits, where the two orthogonal spherical dimensions of the Bloch sphere ($\theta$ and $\phi$) are each discrete in $N$ divisions.
While $p$ is expected to be very large, each qubit will only have been affected by $I=\log_2\log_2N$ beamsplitters, so, for a realistic experimental beamsplitter loss of $0.1\%$, the chance of losing a given qubit to decoherence only reaches $1\%$ once the system has entangled over 1000 qubits, which is only possible in IST if $N\geq10^{250}$. Further, an advantage of the W state is that, even if decoherence effectively measures one of the qubits, so long as the result is 0 (the photon is not in that mode), the collapse leaves the remaining qubits maximally entangled in the $(M-1)$-qubit W state. \begin{figure} \centering \includegraphics[width=\linewidth]{SurvQubits.pdf} \caption{The survival probability of each qubit for a given set of entangled qubits created using the experiment in Fig.\ref{fig:InfBS}, and the maximum number of entangled qubits that can be created in a version of IST where the $p$-adicity causes the Bloch sphere to be split into $N$ divisions in each angular direction. This shows how this beamsplitter experiment allows us to test this entanglement limit for very high-$p$ versions of IST, due to the comparative lack of loss-induced decoherence on the W state created.} \label{fig:Surv} \end{figure} Even if we obtain this state, we need to prove it is entangled. Gr\"{a}fe et al \cite{Grafe2014WState} and Heilmann et al \cite{Heilmann2015HighOrderW} have done this for 8- and 16-qubit W states respectively, confirming that they generated an entangled W state of that size (assuming they inputted a single photon), and Wang et al's integrated silicon photonics chip could be used to do this for a 32-qubit W state \cite{Wang2018Chip}. It is an ongoing problem to specifically discern an entanglement-confirming optical layout for an arbitrarily-large W state, but Lougovski et al give the quantum-information-theoretical groundwork for doing so \cite{Lougovski2009VerifyingWN}.
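The loss estimate above can be sketched in a few lines (using the $0.1\%$ per-beamsplitter loss quoted in the text as an illustrative value):

```python
import math

def survival_after_entangling(M, loss=0.001):
    """Per-qubit survival probability after the I = log2(M) beamsplitters
    each path passes through when entangling M = 2^I qubits (Fig. 1 setup)."""
    I = math.log2(M)
    return (1 - loss) ** I

# entangling 1024 qubits needs I = 10 beamsplitters per path:
p = survival_after_entangling(1024)
print(round(1 - p, 4))   # per-qubit loss probability, about 1%

# certification (below) doubles the beamsplitter count per mode,
# squaring the survival probability:
print(round(p ** 2, 3))  # about 0.98
```

The point is that the loss probability grows only logarithmically with the number of entangled qubits, which is what makes the test viable for very large $N$.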
This involves using beamsplitters to shift the optical-path modes to instead each represent one possible permutation of phase combinations for the sub-components (ignoring the global phase of the state): for instance, for the 4-qubit W state, combining beamsplitters after the state creation so as to have each final path act to project on one of the 4 states \begin{equation} \begin{split} \ket{W_1^4}=(\ket{1000}+\ket{0100}+\ket{0010}+\ket{0001})/2\\ \ket{W_2^4}=(\ket{1000}-\ket{0100}-\ket{0010}+\ket{0001})/2\\ \ket{W_3^4}=(\ket{1000}+\ket{0100}-\ket{0010}-\ket{0001})/2\\ \ket{W_4^4}=(\ket{1000}-\ket{0100}+\ket{0010}-\ket{0001})/2 \end{split} \end{equation} Doing this means a consistent detection on just one of the paths over many runs (e.g. the one corresponding to just $\ket{W_1^4}$) indicates a pure entangled state is consistently being created (specifically here the state $\ket{W_1^4}$). Were the entanglement to break, the detections would begin to spread between the targeted state $\ket{W_1^4}$ and the other three states, until, for a maximally mixed state, each detector would click 25$\%$ of the time. In the same way, for the $I^{th}$ iterate, consisting of $M=2^I$ qubits, there is a way (using just linear optical components) to project the eventual state onto one of the $2^I$ phase permutations of $\ket{W^M}$, and so detect with certainty that a pure entangled state of $M$ qubits was created. Interestingly, preparing these states to certify entanglement requires each optical mode to again only interact with $I$ beamsplitters, to allow us to certify $M=2^I$-qubit entanglement, which simply squares the survival probability, meaning for 1000 qubits, it becomes $\sim98\%$ rather than $\sim99\%$.
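The four phase-permuted states above are mutually orthogonal (they are, up to normalisation, rows of a $4\times4$ Hadamard-type transform), so the projective measurement just described is well defined; a quick check on the amplitude vectors directly:

```python
# Amplitude vectors of |W_1^4> .. |W_4^4> in the ordered basis
# (|1000>, |0100>, |0010>, |0001>)
states = [
    [0.5,  0.5,  0.5,  0.5],   # W_1
    [0.5, -0.5, -0.5,  0.5],   # W_2
    [0.5,  0.5, -0.5, -0.5],   # W_3
    [0.5, -0.5,  0.5, -0.5],   # W_4
]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

# <W_i|W_j> = delta_ij
gram = [[inner(u, v) for v in states] for u in states]
print(gram)  # the 4x4 identity matrix
```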
Again, given the resilience of the overall state to loss-induced decoherence, and the fact that Lougovski et al show this certification method also allows us to detect any entangled states of fewer than $M$ qubits, this loss probability poses very little issue to our test of IST - not to mention that, despite the loss, the total number of surviving (maximally-entangled) qubits tends to infinity as $I$ tends to infinity, rather than peaking at a certain value. \begin{figure} \centering \includegraphics[width=\linewidth]{TapsterRaritySource.png} \caption{Type I spontaneous parametric downconversion source for the generation of pairs of position-and-momentum-entangled photons, as given by Rarity and Tapster \cite{Rarity1990Cone}. The generated position of each photon on the cone can be viewed as a W state of arbitrary number of qubits $N$, and so the system of the two photons is a double-W state of $2N$ qubits. This arbitrary number of qubits $N$ can be lower-bounded as the resolution of a circular single-photon position detector array used to detect where on the circle each photon is emitted.} \label{fig:RTSource} \end{figure} This W state-based experimental analysis of IST can be extended by looking at an experiment such as that given by Rarity and Tapster, where a pair of photons are generated in a cone of possible positions, with the angular position of one anti-correlated with the position of the other \cite{Rarity1990Cone}. We show this in Fig.\ref{fig:RTSource}. Considering just one photon in the cone, this is equivalent to a W state where $M$ is the number of sectors into which you subdivide the cone. Adding a second photon, position-entangled with the first, doubles the number of entangled qubits in the system. Rarity and Tapster also give a way to prove these photons are entangled - by interfering them to violate a Bell inequality. 
However, as this is done assuming their position is a continuous variable, we need to adjust it to determine up to how large a discrete dimension the entanglement can be shown to hold. This can be done by making a set of $2M$ apertures on the circumference of the cone, and splitting the ring into two half-circumferences. After this, similarly to what we do in Fig.\ref{fig:InfBS}, we can iteratively combine adjacent apertures to get position-momentum entanglement between adjacent apertures, and, once this projects to equal superpositions across all $M$ apertures on each half-circle, record the detected position for each half-circle's photon. By comparing the final detected positions between the upper and lower half-circumferences, and seeing if they still correlate, we can confirm this double-W$_M$ state. While the phase between the upper photon and some other discrete division in the upper half will be random, it will be the same as the phase between the lower photon and the corresponding discrete division in the lower half. The correlation is always the same, but specific phases at different points on the circumference are not. This is why, using the two photons (and two split half-circles), we can prove the correlations still exist; a similar (though continuous) method was used by Rarity and Tapster to provably violate a Bell Inequality. \section{No Continuous Variables} A second, related implication of IST is that it permits no continuous quantum variables. Due to the necessarily finite dimensions of the $p$-adic metric used in IST, the space of states allowed must also be finite. Given we can lower-bound the number of states allowed by the dimension of the Hilbert space we use (to replicate classical information theory), we can say that the existence of a qudit of dimension $d$ implies a state space of at least dimension $d$ (e.g. a qubit requires at least two distinct states, 0 and 1; a qutrit requires three states, 0, 1 and 2, etc.).
Hardy extends this argument: to satisfy his axioms for quantum theory, between any two pure states of a system there needs to be a continuous reversible transformation available that takes one to the other. To allow this, Hardy argues a qudit of dimension $d$ requires a state space of dimension $d^2$ \cite{Hardy2001Axioms}. This means that, for continuous variables to exist, given they have an infinite-dimensional Hilbert space \cite{Braunstein2005QIwithCVs}, there must be an infinite number of allowed states, in violation of IST. Therefore, in IST, there can be no continuous quantum variables. In standard quantum physics, a number of variables are held to be continuous, for instance position, momentum, electric field strength, and time \cite{Weedbrook2012Gaussian}. Therefore, for IST to hold true, all of these variables, currently thought continuous, would actually need to be discrete: of very high (but finite) dimension. While a number of theories/approaches hold one or another of these variables to be continuous (e.g. position in Loop Quantum Gravity, or time in certain toy models of the Universe), the idea that all previously-thought continuous variables are actually discrete would be controversial. \section{Gravity Is Inherently Decoherent} \begin{figure*} \centering \includegraphics[width=\linewidth]{QuantumGravityDiag.png} \caption{The experiment described by Bose et al \cite{Bose2017EntWitnessQG}, and separately by Marletto and Vedral \cite{Marletto2017GravIndEnt}, for testing the ability of gravity to entangle two masses. Two masses, $m_i$ for $i\in\{1,2\}$ are separated from each other by distance $d$. Both are initially in state $\ket{C}_i$, with embedded spin $(\ket{\uparrow}+\ket{\downarrow})/\sqrt{2}$.
They are then both admitted into Stern-Gerlach devices, which put them both into the spin-dependent superposition $(\ket{L,\uparrow}_i+\ket{R,\downarrow}_i)/\sqrt{2}$, where $\ket{L}_i$ and $\ket{R}_i$ are separated from each other by distance $\Delta x_i$. They are left in these superpositions for time $\tau$, during which, if gravity is quantum-coherent, evolution under mutual gravitational attraction $h_{00}$ would entangle the two particles, adding relevant phases to both. After time $\tau$, an inverse Stern-Gerlach device is applied to each to return them both to their initial states (potentially modulo the phases applied by $h_{00}$). By applying this process, and measuring spin correlations between the two particles after each run, we can detect if relative phases have been applied to each, and so if gravity is coherent. For IST to hold, gravity must be decoherent, and so cannot entangle two masses, meaning no alteration of phases will be detected.} \label{fig:QGravExp} \end{figure*} IST has been described as not so much a quantum theory of gravity (like String Theory and Loop Quantum Gravity), but a gravitational theory of the quantum \cite{Palmer2016IST}. Aside from its deterministic nature, nowhere is this more apparent than in how it views the regime where gravitational and quantum effects should both be present. In \textit{Invariant Set Theory}, it is described as positing no gravitons and so no supersymmetry (spin-2 gravitons typically being seen as hinting at supersymmetry). Instead, it suggests that gravity is inherently decoherent, turning gravitationally-affected superpositions into maximally mixed states. IST also suggests that effects typically considered signs of dark matter/dark energy are instead due to the ``smearing" of energy-momentum on space-times neighbouring $\mathcal{M}_U$ on $I_U$ influencing curvature of $\mathcal{M}_U$. 
This smearing avoids precise singularities in $\mathcal{M}_U$, this avoidance being a key goal of many previous attempts to quantise General Relativity. However, Palmer admits in that paper that all of this still requires quantification. We attempt to begin this here. Palmer suggests an alteration of the Einstein Field Equation (EFE) based on the presence and effects of possible universes $\mathcal{M}'_U$ on our universe $\mathcal{M}_U$, leading to the equation instead being \begin{equation} \begin{split} G_{\mu\nu}&(\mathcal{M}_U)=\\ &\frac{8\pi G}{c^4}\int_{\mathcal{N}(\mathcal{M}_U)}T_{\mu\nu}(\mathcal{M}'_U)F(\mathcal{M}_U,\mathcal{M}'_U)d\mu \end{split} \end{equation} where $F(\mathcal{M}_U,\mathcal{M}'_U)$ is some propagator to be determined and $d\mu$ is a suitably normalised Haar measure in some neighbourhood $\mathcal{N}(\mathcal{M}_U)$ on $I_U$ \cite{Palmer2016IST}. Note this also sets the cosmological constant $\Lambda$ sometimes seen in the EFE to zero, given the theory separately accounts for dark matter and the acceleration of the expansion of the universe. This gravitational decoherence could be tested by experiments that involve putting heavy objects in spatial superpositions, allowing them to gravitationally interact, returning the spatial superposition components back to a single position, and seeing by the resulting interference pattern if there are any signs of entanglement between the objects (see Fig.\ref{fig:QGravExp}) \cite{Bose2017EntWitnessQG,Marletto2017GravIndEnt}.
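To get a feel for the magnitudes involved in such an experiment, the sketch below evaluates the mutual gravitational phases $\phi, \phi_{LR}, \phi_{RL}$ defined below, of the form $Gm_1m_2\tau/(\hbar r)$, for illustrative parameters of the order discussed in \cite{Bose2017EntWitnessQG} (the specific values here are our assumptions, chosen only to show the orders of magnitude):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant, J s

def grav_phase(m1, m2, tau, separation):
    """Phase accumulated under mutual gravitational attraction."""
    return G * m1 * m2 * tau / (hbar * separation)

# illustrative (hypothetical) parameters
m1 = m2 = 1e-14          # kg, a microdiamond-scale mass
d, dx = 450e-6, 250e-6   # m, separation and superposition splitting
tau = 2.5                # s, interaction time

phi    = grav_phase(m1, m2, tau, d)
phi_RL = grav_phase(m1, m2, tau, d - dx)
phi_LR = grav_phase(m1, m2, tau, d + dx)
# the entangling phase differences come out of order one radian,
# hence detectable in principle
print(phi_RL - phi, phi_LR - phi)
```

The point of the estimate is that for mesoscopic masses and second-scale interaction times the relative phases are $O(1)$ radians, so the witness measurement described next is not hopelessly small.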
In this set-up, assuming gravity is coherent, the combined state of the two masses at different points in the experiment is \begin{equation} \begin{split} |\Psi&_{Init}\rangle_{12}=(\ket{\uparrow}_1 +\ket{\downarrow}_1)(\ket{\uparrow}_2+\ket{\downarrow}_2)\ket{C}_1\ket{C}_2/2 \\ |\Psi&(t=0)\rangle_{12}= (\ket{L,\uparrow}_1+\ket{R,\downarrow}_1)(\ket{L,\uparrow}_2+\ket{R,\downarrow}_2)/2 \\ |\Psi&(t=\tau)\rangle_{12}=\frac{e^{i\phi}}{2}\Big(\ket{L,\uparrow}_1(\ket{L,\uparrow}_2 +e^{i\Delta\phi_{LR}}\ket{R,\downarrow}_2)\\ &+\ket{R,\downarrow}_1(e^{i\Delta\phi_{RL}}\ket{L,\uparrow}_2+\ket{R,\downarrow}_2)\Big) \\ |\Psi&_{End}\rangle_{12}=\ket{C'}_1\ket{C'}_2=\Big(\ket{\uparrow}_1(\ket{\uparrow}_2+e^{i\Delta\phi_{LR}}\ket{\downarrow}_2) \\ &+\ket{\downarrow}_1(e^{i\Delta\phi_{RL}}\ket{\uparrow}_2+\ket{\downarrow}_2)\Big)\ket{C}_1\ket{C}_2/2 \end{split} \end{equation} where \begin{equation} \begin{split} &\Delta\phi_{LR}=\phi_{LR}-\phi,\;\Delta\phi_{RL}=\phi_{RL}-\phi\\ &\phi_{RL}\approx\frac{Gm_1m_2\tau}{\hbar(d-\Delta x)}, \;\phi_{LR}\approx\frac{Gm_1m_2\tau}{\hbar(d+\Delta x)}\\ &\phi\approx\frac{Gm_1m_2\tau}{\hbar d} \end{split} \end{equation} However, if gravity is not coherent, there are two possible final states: if gravity does not also collapse the state, the final state will be equivalent to the initial one ($\ket{\Psi_{Init}}_{12}=\ket{\Psi_{End}}_{12}$); or, if gravity does collapse the superposition, each particle $i$ will be forced into the (spin) maximally mixed state \begin{equation} \rho_{MM,i}=\ketbra{C}{C}_i\otimes(\ketbra{\uparrow}{\uparrow}_i+\ketbra{\downarrow}{\downarrow}_i)/2\,. \end{equation} By measuring spin correlations to estimate the entanglement witness $\mathcal{W}=|\langle\sigma^{(1)}_{x}\otimes \sigma^{(2)}_{z}\rangle-\langle\sigma^{(1)}_{y}\otimes \sigma^{(2)}_{z}\rangle|$, we can distinguish the entangled state from the two other possible final states (if $\mathcal{W}>1$, the state is entangled), and so see if gravity is coherent;
for IST to hold, $\mathcal{W}$ needs to be less than or equal to $1$. \section{Conclusions} We have identified points of difference between Invariant Set Theory and standard quantum theory. While these are not fatal to IST, they provide potential avenues to experimentally test the theory, to see whether its deterministic, fractal-attractor-based structure is compatible with observed reality. \begin{acknowledgements} This work was supported by the University of York's EPSRC DTP grant EP/R513386/1, and the Quantum Communications Hub funded by the EPSRC grant EP/M013472/1. \end{acknowledgements} \bibliographystyle{apsrev4-2}
\section{Introduction} All graphs considered in the sequel are finite and simple (without loops and multiple edges). A \emph{bisection} of a cubic graph $G$ is a partition of its vertex set into two disjoint subsets $\mathcal{B}$ and $\mathcal{W}$ such that $|\mathcal{B}|=|\mathcal{W}|$. For simplicity, we shall identify a bisection of $G$ with the $2$-vertex-colouring (not necessarily proper) of $G$, in which every vertex in $\mathcal{B}$ and $\mathcal{W}$ is given colour $1$ (black) and $2$ (white), respectively. In the figures that follow, black and white vertices shall be depicted as filled and unfilled circular vertices, respectively. To distinguish between coloured and uncoloured vertices, the latter shall have a black square shape. A \emph{monochromatic component} is a connected component induced by a colour class of a vertex-colouring. A \emph{$k$-bisection} of a graph $G$ is a bisection of $G$ such that each monochromatic component consists of at most $k$ vertices. Therefore, a \emph{$2$-bisection} is a bisection in which each monochromatic component is a single vertex or an edge. A $2$-colouring of the vertices of a graph is said to be \emph{balanced} if the two colour classes are of the same cardinality. Otherwise, it is said to be \emph{unbalanced}. In this paper we shall consider $2$-bisections in bridgeless cubic graphs, in particular, with respect to the following recent conjecture by Ban and Linial. \begin{conjecture}[Ban--Linial \cite{banlinial}]\label{banlinial conjecture} Every bridgeless cubic graph admits a $2$-bisection, except for the Petersen graph. \end{conjecture} We remark that, for cubic graphs, $2$-bisections are equivalent to the $4$-weak bisections introduced by Esperet \emph{et al.} in \cite{esperet17}, although, in general, the two definitions do not coincide. In \cite{esperet17}, it was shown that every cubic graph (not necessarily bridgeless) admits a $3$-bisection.
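The exceptional role of the Petersen graph in Ban--Linial's Conjecture can be verified directly by computer: with the standard labelling (outer $5$-cycle on $0,\dots,4$, spokes $i\sim i+5$, inner pentagram on $5,\dots,9$), enumerating all $\binom{10}{5}=252$ balanced $2$-colourings shows that each one has a monochromatic component on at least $3$ vertices. A brute-force sketch:

```python
from itertools import combinations

# Petersen graph: outer 5-cycle, spokes, inner pentagram
edges = [(i, (i + 1) % 5) for i in range(5)]             # outer cycle
edges += [(i, i + 5) for i in range(5)]                  # spokes
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]    # pentagram

def max_mono_component(cls):
    """Size of the largest component of the subgraph induced by cls."""
    cls, seen, best = set(cls), set(), 0
    for v in cls:
        if v in seen:
            continue
        stack, comp = [v], 0
        seen.add(v)
        while stack:                      # depth-first search from v
            u = stack.pop()
            comp += 1
            for a, b in edges:
                for x, y in ((a, b), (b, a)):
                    if x == u and y in cls and y not in seen:
                        seen.add(y)
                        stack.append(y)
        best = max(best, comp)
    return best

# a 2-bisection needs both colour classes to induce components on <= 2 vertices
has_2_bisection = any(
    max_mono_component(black) <= 2
    and max_mono_component(set(range(10)) - set(black)) <= 2
    for black in combinations(range(10), 5)
)
print(has_2_bisection)  # False: the Petersen graph admits no 2-bisection
```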
Recently this result was extended to the class of simple subcubic graphs by Cui and Liu \cite{cuiliu}, and to the class of subcubic multigraphs by Mattiolo and Mazzuoccolo \cite{mattiolo}. Abreu \emph{et al.} proved Ban--Linial's Conjecture for cycle permutation graphs \cite{abreu}, and for bridgeless claw-free cubic graphs \cite{abreuclaw}. In \cite{esperet17}, Esperet \emph{et al.} also showed that a possible counterexample to Ban--Linial's Conjecture must admit circular flow number at least $5$, where $5$ is the largest possible circular flow number according to Tutte's $5$-Flow Conjecture (see \cite{tarsi}). Furthermore, since properly $3$-edge-colourable cubic graphs (Class I cubic graphs) admit a $2$-bisection (see Proposition 7 in \cite{banlinial}), for a possible counterexample to Conjecture \ref{banlinial conjecture} one must search in the class of bridgeless cubic graphs which do not admit a proper $3$-edge-colouring (Class II bridgeless cubic graphs). The graphs in the latter class are the notorious \emph{snarks} which are critical to many conjectures in graph theory (see for example \cite{snarky}). We note that in the literature, the word snark is often reserved for Class II bridgeless cubic graphs which are cyclically $4$-edge-connected and have girth at least $5$; however, in what follows, we shall not be specifically making use of this refinement because, as already mentioned in \cite{abreu}, there is no evidence that a minimal counterexample to Ban--Linial's Conjecture is cyclically $4$-edge-connected and has girth at least $5$, if such a counterexample exists. Therefore, in the sequel, snarks shall represent Class II bridgeless cubic graphs. The least number of perfect matchings needed to cover the edge set of a bridgeless cubic graph $G$ is said to be the \emph{excessive index} of $G$, and the Berge--Fulkerson Conjecture states that every bridgeless cubic graph has excessive index at most $5$ (see \cite{fulkerson,mazzuoccolo}).
Class I cubic graphs have excessive index $3$, whilst snarks have excessive index at least $4$. In general, snarks having excessive index exactly $4$ seem to be more manageable than the rest, that is, than those snarks which cannot be covered by four perfect matchings. In fact, two very well-known conjectures---the Cycle Double Cover Conjecture and the Fan--Raspaud Conjecture---are true for graphs with excessive index at most $4$ (see \cite{hou, Steffen, ZhangBook} and \cite{fanraspaud,fouquet}, respectively), whilst they are still widely open for the other snarks. Snarks which cannot be covered by four perfect matchings have been the subject of many papers such as \cite{esperetmazzuoccolo,snarky,macajova}. As already mentioned before, two infinite families of bridgeless cubic graphs which were shown to satisfy Ban--Linial's Conjecture are cycle permutation graphs and bridgeless claw-free cubic graphs. In \cite{fouquet}, cycle permutation graphs were shown to have excessive index at most $4$, apart from the Petersen graph which has excessive index $5$. On the other hand, it is conjectured \cite{vahan} that bridgeless claw-free cubic graphs have excessive index at most $4$ (see also \cite{521,522}). Snarks with excessive index $4$ can have circular flow number $5$ (see \cite{goedgebeur}), and so can be critical to Conjecture \ref{banlinial conjecture}. At the same time, snarks with excessive index at least $5$ can have circular flow number strictly less than $5$ \cite{macajova circular}, and consequently admit a $2$-bisection by \cite{esperet17}. However, as far as we know, apart from very few sporadic computational results (see Observation 4.35\footnote[2]{The Petersen graph is the only snark up to $36$ vertices which does not admit a $2$-bisection.} in \cite{abreu}), no snarks having both their circular flow number and excessive index at least $5$ were studied with respect to Ban--Linial's Conjecture.
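The exceptional role of the Petersen graph is small enough to confirm by exhaustive search. The sketch below (assuming the standard labelling: outer $5$-cycle $0,\ldots,4$, inner pentagram $5,\ldots,9$, spokes $i$--$i+5$) runs over all $\binom{10}{5}=252$ balanced $2$-vertex-colourings and finds that none has every monochromatic component of order at most two.

```python
from itertools import combinations

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i--i+5.
PETERSEN = {
    0: [1, 4, 5], 1: [0, 2, 6], 2: [1, 3, 7], 3: [2, 4, 8], 4: [3, 0, 9],
    5: [0, 7, 8], 6: [1, 8, 9], 7: [2, 5, 9], 8: [3, 5, 6], 9: [4, 6, 7],
}

def has_small_components(adj, cls):
    """True iff every component induced by the vertex set cls has <= 2 vertices."""
    seen = set()
    for v in cls:
        if v in seen:
            continue
        size, stack = 1, [v]
        seen.add(v)
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w in cls and w not in seen:
                    seen.add(w)
                    size += 1
                    stack.append(w)
        if size > 2:
            return False
    return True

def count_2_bisections(adj):
    """Count balanced colourings whose monochromatic components have order <= 2."""
    n, total = len(adj), 0
    for black in combinations(adj, n // 2):
        black = set(black)
        white = set(adj) - black
        if has_small_components(adj, black) and has_small_components(adj, white):
            total += 1
    return total

print(count_2_bisections(PETERSEN))  # 0: the Petersen graph admits no 2-bisection
```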
Furthermore, Esperet \emph{et al.} \cite{esperet17} stated that several graphs obtained from the Petersen graph were checked with respect to Ban--Linial's Conjecture and they do in fact admit a $2$-bisection (or equivalently, a $4$-weak bisection), however, the authors say that they are far from offering something in the direction of a general proof. In this sense, we think that our main result dealing with treelike snarks, whose circular flow number and excessive index are both at least $5$, is another step towards further providing evidence to the correctness of Conjecture \ref{banlinial conjecture} and surpassing previously encountered hurdles, since treelike snarks heavily depend on the Petersen graph. \section{Treelike snarks} Treelike snarks were introduced by Abreu \emph{et al.} \cite{treelike} in an attempt to further enrich the family of snarks which cannot be covered by four perfect matchings. In the same paper, the authors also show that the circular flow number of treelike snarks is at least $5$, and that these snarks admit a $5$-cycle double cover. Before proceeding with the definition of these snarks, we introduce multipoles which generalise the notion of graphs. A \emph{multipole} $M$ consists of a set of vertices $V(M)$ and a set of generalised edges such that each generalised edge is either an edge in the usual sense (that is, it has two endvertices) or a dangling edge. A \emph{dangling edge} is a generalised edge having exactly one endvertex, and a \emph{$k$-pole} is a multipole with $k$ dangling edges. The set of dangling edges of $M$ is denoted by $\partial M$ whilst the set of edges of $M$ having two endvertices is denoted by $E(M)$. Two dangling edges are \emph{joined} if they are both deleted and their endvertices are made adjacent. In a similar way, a dangling edge with endvertex $x$ is said to be \emph{joined} to a vertex $y$, if the dangling edge is deleted and $x$ and $y$ are made adjacent.
Let $s$ and $t$ be two adjacent vertices of the Petersen graph $P$ and let $P'$ be the graph obtained after deleting $s$ and $t$ (and all the edges incident to them) from $P$. Consider the $4$-pole $M$ obtained from $P'$ such that $V(M)=V(P')$, $E(M)=E(P')$, and each vertex of degree $2$ in $P'$ has exactly one dangling edge incident to it in $M$. We define the \emph{left dangling edges} to be the ones corresponding to the edges originally incident to $s$ and not $t$, and the \emph{right dangling edges} to be the ones corresponding to the edges originally incident to $t$ and not $s$. We also partition the four dangling edges in ordered pairs, say $(l_1,l_2)$, referred to as the first and second left dangling edges, and $(r_1,r_2)$, referred to as the first and second right dangling edges. Due to the symmetry of the Petersen graph the order of the dangling edges in each set of the partition is not relevant, but for simplicity and consistency, we shall assume that $(l_1,l_2)$ and $(r_1,r_2)$ look as in Figure \ref{figure petersen 4pole}. This $4$-pole is said to be a \emph{Petersen $4$-pole}. \begin{figure}[h] \centering \includegraphics[width=0.65\textwidth]{petersen4pole} \caption{The Petersen graph and a Petersen $4$-pole} \label{figure petersen 4pole} \end{figure} A \emph{Halin graph} is a plane graph consisting of a planar representation of a tree without degree $2$ vertices, and a circuit on the set of its leaves (see \cite{Hal64}). We remark that leaves are the vertices in a tree having degree $1$. Let $H$ be a cubic Halin graph consisting of the tree $T$ and the circuit $K$. 
A \emph{treelike snark} $G$ is the bridgeless cubic graph that can be obtained by the following procedure: \begin{itemize} \item for every leaf $x$ of $T$, we add two new vertices, say $x_1$ and $x_2$, and the edges $xx_1$ and $xx_2$; and \item for every edge $xy$ of $K$, with $x$ being the predecessor of $y$ with respect to the clockwise orientation of $K$, the edge $xy$ is replaced with a Petersen $4$-pole, such that the first and second left dangling edges of this Petersen $4$-pole are joined to $x_1$ and $x_2$, respectively, whilst the first and second right dangling edges are joined to $y_1$ and $y_2$, respectively. \end{itemize} Two leaves $x$ and $y$ of $T$ are called \emph{consecutive} if they are adjacent in the circuit $K$, and we shall say that the Petersen $4$-pole of $G$ replacing the edge $xy$ of $K$ is \emph{in between} the two leaves $x$ and $y$. Moreover, two consecutive leaves are said to be \emph{near} if they have distance two in $T$, that is, they have a common neighbour in $T$. We remark that $T$ always has two near leaves. Similarly, two Petersen $4$-poles $A$ and $B$ are called \emph{consecutive} if there exist three consecutive leaves $x,y,z$ (that is, $x$ and $y$ are consecutive and $y$ and $z$ are consecutive) such that $A$ is in between $x$ and $y$, and $B$ is in between $y$ and $z$. Again, we say that the leaf $y$ is \emph{in between} the Petersen $4$-poles $A$ and $B$. We next consider various $2$-vertex-colourings of a Petersen $4$-pole, not necessarily balanced, in which each monochromatic component consists of at most two vertices. For simplicity, the latter shall be referred to as the \emph{monochromatic property}. A $2$-vertex-colouring of the Petersen $4$-pole is said to be \emph{all $1$-balanced} if only the four endvertices of the dangling edges are coloured $1$ (see Figure \ref{figure type a}). For short, a Petersen $4$-pole given this colouring is said to be an all $1$-balanced Petersen $4$-pole. 
An all $2$-balanced Petersen $4$-pole is defined in a similar way by interchanging the colours $1$ and $2$ in the $4$-pole. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{typea} \caption{An all $1$-balanced and an all $2$-balanced Petersen $4$-pole} \label{figure type a} \end{figure} We also consider a balanced $2$-vertex-colouring of a Petersen $4$-pole (respecting the monochromatic property) in which exactly two endvertices of the four dangling edges are coloured $1$. Such a colouring is said to be \emph{$(i,j)$-balanced}, where $i$ and $j$ belong to $\{1,2\}$ and are equal to the colour of the endvertices of the first left and first right dangling edges, respectively (see Figure \ref{figure type b}). A Petersen $4$-pole having this colouring is said to be an $(i,j)$-balanced Petersen $4$-pole. When $i$ and $j$ are equal, say to $1$, we distinguish between the \emph{left} and \emph{right} $(1,1)$-balanced Petersen $4$-poles as follows: the left (similarly right) $(1,1)$-balanced colouring corresponds to the one in which the endvertex of the first left (similarly right) dangling edge has a neighbour in the Petersen $4$-pole which is coloured $1$. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{typeb} \caption{A $(1,2)$-balanced and a left $(1,1)$-balanced Petersen $4$-pole} \label{figure type b} \end{figure} Finally, we consider a $2$-vertex-colouring of a Petersen $4$-pole which once again respects the monochromatic property but is now unbalanced, that is, the two colour classes do not have the same cardinality. In this colouring $|\mathcal{B}|=|\mathcal{W}|+2$ (or vice-versa) and exactly three of the endvertices of the dangling edges of a Petersen $4$-pole are given the same colour, say $1$, without loss of generality (see Figure \ref{figure type c}). 
If, for example, the endvertex of the first left (similarly right) dangling edge is the one having colour $2$, the colouring and the Petersen $4$-pole are said to be \emph{{\scriptsize $^{2}$}$1$-unbalanced} (similarly $1${\scriptsize $^{2}$}-unbalanced). The {\scriptsize $_{2}$}$1$- and $1${\scriptsize $_{2}$}-unbalanced Petersen $4$-poles are defined in a similar manner. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{typec} \caption{A {\scriptsize $_{2}$}$1$-unbalanced and a $2${\scriptsize $^{1}$}-unbalanced Petersen $4$-pole} \label{figure type c} \end{figure} One can clearly see that the above constitute (up to symmetry) all the possible $2$-vertex-colourings of a Petersen $4$-pole in which each monochromatic component has at most two vertices. \subsection{Main result} \begin{theorem}\label{main theorem} Every treelike snark admits a $2$-bisection. \end{theorem} \begin{proof} Let $G$ be a treelike snark. Suppose that $G$ does not admit a $2$-bisection. We assume that the defining tree $T$ of $G$ is of minimum order. Observation 4.35 in \cite{abreu} states that the Petersen graph is the only snark up to $36$ vertices which does not admit a $2$-bisection. Consequently, the smallest treelike snark on $34$ vertices has a $2$-bisection (see also Figure \ref{figure windmill}), and so we can assume that $T$ has at least two degree $3$ vertices. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{windmill} \caption{A $2$-bisection of the smallest treelike snark} \label{figure windmill} \end{figure} As depicted in Figure \ref{figure induction}, consider two near leaves of $T$, say $x$ and $y$, and let $e$ and $f$ be the two edges of $T$ incident to $x$ and $y$, respectively. Moreover, let $g$ be the edge of $T$ adjacent to $e$ and $f$. Consider the three consecutive Petersen $4$-poles $A,B,C$ such that $x$ is in between $A$ and $B$, and $y$ is in between $B$ and $C$. 
Let $a_{1}$ and $a_{2}$ be the two endvertices of the first and second right dangling edges of the Petersen $4$-pole $A$, respectively, and let $c_{1}$ and $c_{2}$ be the two endvertices of the first and second left dangling edges of the Petersen $4$-pole $C$, respectively. Let $x_{1}$ and $x_{2}$ be the two neighbours of $x$ in $G$ adjacent to $a_{1}$ and $a_{2}$, respectively. Let $y_{1}$ and $y_{2}$ be the two neighbours of $y$ in $G$ adjacent to $c_{1}$ and $c_{2}$, respectively. Let $u$ be the vertex of $T$ adjacent to both $x$ and $y$, and let $v$ be the other neighbour of $u$ in $T$. We construct a smaller treelike snark $G'$ following the procedure presented in Figure \ref{figure induction}. Since $T$ has at least two vertices of degree $3$, the resulting graph $G'$ is indeed another treelike snark. Let $T'$ be the tree defining $G'$. As in Figure \ref{figure induction}, $z$ is the leaf in between the Petersen $4$-poles $A$ and $C$ in $G'$. In this case, let $z_{1}$ be the common neighbour of $a_{1}$ and $c_{1}$, and let $z_{2}$ be the common neighbour of $a_{2}$ and $c_{2}$. \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{biggersmaller} \caption{Induction step in Theorem \ref{main theorem}} \label{figure induction} \end{figure} Since $|V(T')|<|V(T)|$, $G'$ admits a $2$-bisection. In what follows we show that we can always extend the $2$-vertex-colouring corresponding to a $2$-bisection of $G'$ to a $2$-vertex-colouring of $G$ which shall give rise to a $2$-bisection of the bigger treelike snark $G$, a contradiction to our initial assumption. We consider five cases, depending on the colours of $a_{1},a_{2},c_{1},c_{2}$ in $G'$.\\ \noindent\textbf{Case I.} The colours of $a_{1},a_{2},c_{1},c_{2}$ in $G'$ are all the same. Without loss of generality, assume that the colours of $a_{1},a_{2},c_{1},c_{2}$ are all $1$. Consequently, the colours of $(z_{1},z,z_{2})$ are $(2,1,2)$. The colour given to $v$ can be either $1$ or $2$.
Suppose first that the colour of $v$ is $1$. We colour $G$ as follows. The vertices in $G$ corresponding to $V(G')-\{z_{1},z,z_{2}\}$ are assigned the same colour given to them in $G'$, and the vertices $(x_{1},x,x_{2})$ and $(y_{1},y,y_{2})$ are both assigned the colours $(2,1,2)$, that is, the colours given to $(z_{1},z,z_{2})$ in $G'$. The vertex $u$ is given colour $2$ and consequently, this partial $2$-vertex-colouring of $G$, that is, with the vertices of $B$ still uncoloured, respects the monochromatic property, and, so far, there are two more vertices coloured $2$. However, since the colours of $a_{1},a_{2},c_{1},c_{2}$ are all $1$, and the colours of $x_{1},x_{2},y_{1},y_{2}$ are all $2$, we can let $B$ be a ${1}${\scriptsize $_{2}$}-unbalanced Petersen $4$-pole, giving a $2$-bisection of $G$, a contradiction to our initial assumption. Therefore, $v$ must be coloured $2$ in $G'$. The vertices in $G$ corresponding to $V(G')-\{z_{1},z,z_{2}\}$ are assigned the same colour given to them in $G'$, and the vertices $(x_{1},x,x_{2})$ and $(y_{1},y,y_{2})$ are assigned the colours $(2,2,1)$ and $(2,1,2)$, respectively. The vertex $u$ is coloured $1$ as $z$. Consequently, this partial $2$-vertex-colouring is balanced. If $a_{2}$ does not have a neighbour in $A$ which is coloured $1$, then every monochromatic component in this partial $2$-vertex-colouring has order at most two. In this case, if we colour $B$ as a left $(1,1)$-balanced Petersen $4$-pole, a $2$-bisection of $G$ is obtained, a contradiction. Therefore, $a_{2}$ must have a neighbour in $A$ coloured $1$. Since $a_{1}$ is coloured $1$, $A$ is either a {\scriptsize $^{2}$}$1$- or a {\scriptsize $_{2}$}$1$-unbalanced Petersen $4$-pole. 
Consequently, if we change the colour of $a_{2}$ from $1$ to $2$, $A$ becomes a $(2,1)$- or a right $(1,1)$-balanced Petersen $4$-pole, giving a partial $2$-vertex-colouring of $G$ respecting the monochromatic property, with the vertices coloured $2$ now being two more than those coloured $1$. However, colouring $B$ as a {\scriptsize $_{2}$}$1$-unbalanced Petersen $4$-pole, a $2$-bisection of $G$ is obtained, a contradiction. Thus, Case I cannot occur.\\ \textbf{Case II.} In $G'$, the vertices $a_{1},a_{2}$ are coloured $1$ and the vertices $c_{1},c_{2}$ are coloured $2$, or vice-versa. Without loss of generality, assume that the colours of $a_{1},a_{2}$ are $1$, and the colours of $c_{1},c_{2}$ are $2$. We first claim that the colours of $z_{1}$ and $z_{2}$ must be the same. For, suppose not. If $z_{1}$ is coloured $1$, then $z$ must be coloured $2$, but this implies that $z,z_{2},c_{2}$ are all coloured $2$, a contradiction. Thus, without loss of generality, we assume that $z_{1},z_{2}$ are coloured $2$, implying that $z$ is coloured $1$. We consider two cases, depending on whether $v$ is coloured $1$ or $2$. Suppose first that $v$ is coloured $1$. We colour $G$ as follows. The vertices in $G$ corresponding to $V(G')-\{z_{1},z,z_{2}\}$ are assigned the same colour given to them in $G'$, and the vertices $(x_{1},x,x_{2})$ and $(y_{1},y,y_{2})$ are both assigned the colours $(2,1,2)$. The vertex $u$ is given colour $2$ and consequently, every monochromatic component in this partial $2$-vertex-colouring has order at most two. However, there are two more vertices coloured $2$ so far. Since the colours of $a_{1},a_{2}$ are both $1$, and the colours of $x_{1},x_{2}$ are both $2$, we can let $B$ either be a {\scriptsize $_{2}$}$1$- or a {\scriptsize $^{2}$}$1$-unbalanced Petersen $4$-pole, giving a $2$-bisection of $G$. This is a contradiction, and so $v$ must be coloured $2$ in $G'$. 
As before, we proceed by colouring the vertices in $G$ corresponding to $V(G')-\{z_{1},z,z_{2}\}$ with the same colour given to them in $G'$. The vertices $(x_{1},x,x_{2})$ are assigned the colours $(2,1,2)$, and since $c_{1},c_{2}$ are both coloured $2$, we can colour $(y_{1},y,y_{2})$ with the colours $(1,2,1)$. The vertex $u$ is coloured $1$ as $z$. One can see that all the monochromatic components in this partial $2$-vertex-colouring have order at most two, but, so far, there are two more vertices coloured $1$. If we let $B$ be a {\scriptsize $_{1}$}$2$-unbalanced Petersen $4$-pole, we obtain a $2$-bisection of $G$, a contradiction once again. Therefore, we cannot have Case II.\\ \textbf{Case III.} In $G'$, the vertices $a_{1},c_{1}$ are coloured $1$ and the vertices $a_{2},c_{2}$ are coloured $2$, or vice-versa. Without loss of generality, assume that the colours of $a_{1},c_{1}$ are $1$, and the colours of $a_{2},c_{2}$ are $2$. In this case, $(z_{1},z_{2})$ must be coloured $(2,1)$. Without loss of generality, assume that $v$ is coloured $1$. Therefore, $z$ must be coloured $2$. We colour $G$ as follows. The vertices in $G$ corresponding to $V(G')-\{z_{1},z,z_{2}\}$ are assigned the same colour given to them in $G'$, and the vertices $(x_{1},x,x_{2})$ and $(y_{1},y,y_{2})$ are assigned the colours $(1,2,1)$ and $(2,1,1)$, respectively. The vertex $u$ is given colour $2$ as $z$. If the vertex $a_{1}$ has no neighbour in $A$ coloured $1$, then this partial $2$-vertex-colouring respects the monochromatic property and, so far, there are two more vertices coloured $1$. However, letting $B$ be a $2${\scriptsize $^{1}$}-unbalanced Petersen $4$-pole gives a $2$-bisection of $G$, a contradiction. Therefore, $a_{1}$ has a neighbour in $A$ coloured $1$. Therefore, $A$ must be a right $(1,1)$- or a $(2,1)$-balanced Petersen $4$-pole. 
If we change the colour $1$ of $a_{1}$ to $2$, $A$ becomes a {\scriptsize $^{1}$}$2$- or a {\scriptsize $_{1}$}$2$-unbalanced Petersen $4$-pole. Because of this step, the resulting partial $2$-vertex-colouring of $G$ is now balanced and respects the monochromatic property. However, if we let $B$ be an all $2$-balanced Petersen $4$-pole, we get that $G$ has a $2$-bisection, a contradiction. Hence, we cannot have Case III.\\ \textbf{Case IV.} In $G'$, the vertices $a_{1},c_{2}$ are coloured $1$ and the vertices $a_{2},c_{1}$ are coloured $2$, or vice-versa. Without loss of generality, assume that the colours of $a_{1},c_{2}$ are $1$, and the colours of $a_{2},c_{1}$ are $2$. As in Case II, $z_{1}$ and $z_{2}$ must have the same colour, and so, without loss of generality, we can assume that these two vertices are both coloured $2$. Consequently, $z$ must be coloured $1$. We consider two cases depending on whether $v$ is coloured $1$ or $2$. Suppose first that the colour of $v$ is $1$. We extend the colouring of $G'$ to $G$ as follows. The vertices in $G$ corresponding to $V(G')-\{z_{1},z,z_{2}\}$ are assigned the same colour given to them in $G'$, and the vertices $(x_{1},x,x_{2})$ and $(y_{1},y,y_{2})$ are both assigned the colours $(2,1,2)$. The vertex $u$ is given colour $2$ and consequently, every monochromatic component in this partial $2$-vertex-colouring has order at most two, and, so far, there are two more vertices coloured $2$. However, letting $B$ be a {\scriptsize $^{2}$}$1$-unbalanced Petersen $4$-pole gives a $2$-bisection of $G$, a contradiction. Therefore, $v$ must be coloured $2$. The vertices in $G$ corresponding to $V(G')-\{z_{1},z,z_{2}\}$ are assigned the same colour given to them in $G'$, but this time the vertices $(x_{1},x,x_{2})$ and $(y_{1},y,y_{2})$ are assigned the colours $(2,2,1)$ and $(2,1,2)$, respectively. The vertex $u$ is coloured $1$ as $z$.
As a result, we can see that this partial $2$-vertex-colouring is balanced and respects the monochromatic property. However, letting $B$ be an all $1$-balanced Petersen $4$-pole gives a $2$-bisection of $G$, a contradiction once again. Hence, Case IV cannot occur either, and only Case V remains.\\ \textbf{Case V.} Exactly three of $a_{1},a_{2},c_{1},c_{2}$ have the same colour in $G'$. Without loss of generality, assume that $a_{1},a_{2},c_{1}$ have colour $1$ and $c_{2}$ has colour $2$. Consequently, the colour of $z_{1}$ is $2$. We consider three cases depending on the colours of $(v,z,z_{2})$. When the colour of $v$ is $1$, $(z,z_{2})$ can be coloured $(1,2)$ or $(2,1)$. Suppose we have the former case. We colour $G$ as follows. The vertices in $G$ corresponding to $V(G')-\{z_{1},z,z_{2}\}$ are assigned the same colour given to them in $G'$, and the vertices $(x_{1},x,x_{2})$ and $(y_{1},y,y_{2})$ are both assigned the colours $(2,1,2)$. The vertex $u$ is given colour $2$ and consequently, the resulting partial $2$-vertex-colouring respects the monochromatic property and, so far, there are two more vertices coloured $2$. However, if we let $B$ be a $1${\scriptsize $^{2}$}-unbalanced Petersen $4$-pole, a $2$-bisection of $G$ is obtained, a contradiction. Therefore, the colours of $(z,z_{2})$ are $(2,1)$. We colour $G$ as follows. The vertices in $G$ corresponding to $V(G')-\{z_{1},z,z_{2}\}$ are assigned the same colour given to them in $G'$, but this time the vertices $(x_{1},x,x_{2})$ and $(y_{1},y,y_{2})$ are assigned the colours $(2,1,2)$ and $(2,1,1)$, respectively. The vertex $u$ is coloured $2$ as $z$, and consequently, the resulting partial $2$-vertex-colouring is balanced and respects the monochromatic property. However, if we let $B$ be a right $(1,1)$-balanced Petersen $4$-pole, a $2$-bisection of $G$ is obtained, a contradiction. Hence, we must have that the colours of $(v,z,z_{2})$ are $(2,1,2)$.
The vertices in $G$ corresponding to $V(G')-\{z_{1},z,z_{2}\}$ are assigned the same colour given to them in $G'$, and the vertices $(x_{1},x,x_{2})$ and $(y_{1},y,y_{2})$ are assigned the colours $(2,1,2)$ and $(2,2,1)$, respectively. The vertex $u$ is given colour $1$ as $z$, and consequently, this partial $2$-vertex-colouring is balanced and every monochromatic component has order at most two. However, if we let $B$ be an all $1$-balanced Petersen $4$-pole, a $2$-bisection of $G$ is obtained, a contradiction once again. \\ Since there are no more cases to consider, our initial assumption is wrong. Therefore, every treelike snark admits a $2$-bisection, proving our theorem. \end{proof} Finally, we remark that it would be quite intriguing to see whether the above proof can be further extended to accommodate a recent class of snarks which contains treelike snarks: \emph{Halin snarks}. These snarks, introduced in \cite{macajova} by M\'a\v{c}ajov\'a and \v{S}koviera, were shown to have excessive index at least $5$. Moreover, whenever the building blocks (referred to as Halin fragments) used for the construction of a Halin snark are exclusively made up of decollineators (see \cite{macajova} for definition) originating from snarks having circular flow number at least $5$, the circular flow number of the resulting Halin snark is at least $5$ as well. In the above proof, we explicitly took advantage of the symmetry of Petersen $4$-poles and of the behaviour of $2$-vertex-colourings (and their corresponding monochromatic components) in such a pole. In this sense, we remark that in general it is still not clear what assumptions need to be made with regards to the building blocks of Halin snarks in order to tackle this more general problem.
\section{Introduction} In this note we will study stationary measures for certain types of iterated function schemes. We begin with an important example. \subsection{Bernoulli convolutions} The study of the properties of Bernoulli convolutions was greatly advanced by two influential papers of Paul Erd\"os from 1939 \cite{erdos1939} and 1940 \cite{erdos1940} and has remained an active area of research ever since. We briefly recall the definition: given $0 < \lambda <1$ we can associate the Bernoulli convolution measure $\mu_\lambda$ on the real line corresponding to the distribution of the series \begin{equation} \label{eq:rv} \xi = \sum_{k=0}^\infty \xi_k\lambda^k \end{equation} where $(\xi_k)_{k=0}^\infty$ are independent random variables assuming values $\pm 1$ with equal probability. Equivalently, this is the probability measure given by the weak-star limit of the measures $$ \mu_\lambda = \lim_{n\to +\infty} \frac{1}{2^n} \sum_{i_0, \dots, i_{n-1} \in \{0,1\}} \delta\left(\sum_{j=0}^{n-1} (-1)^{i_j} \lambda^j\right), $$ where $\delta(y)$ is the Dirac delta probability measure supported on $y$. The properties of these measures have been studied in great detail. We refer the reader to recent surveys by Gou\"ezel~\cite{G18} and Hochman~\cite{H18} for an overview of existing results. The properties of the measure $\mu_\lambda$ are very sensitive to the choice of $\lambda$. For example, if $0 < \lambda < \frac{1}{2}$ then $\mu_\lambda$ is supported on a Cantor set and is singular with respect to Lebesgue measure, but if $\lambda = \frac{1}{2}$ then the measure $\mu_{1/2}$ equals the normalized Lebesgue measure on $[-2,2]$. For $\frac{1}{2} < \lambda < 1$ the situation is more subtle. In this case the measure $\mu_\lambda$ is supported on the closed interval $\left[-\frac{1}{1-\lambda}, \frac{1}{1-\lambda} \right]$.
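A quick Monte Carlo sketch of the defining series (the function name is ours) illustrates the case $\lambda=\frac12$: truncating the series at finitely many terms and sampling the signs $\xi_k$ uniformly produces points that stay within $[-2,2]$, the support of the normalised Lebesgue measure that arises in the limit.

```python
import random

def sample_bernoulli_convolution(lam, n_terms=60, n_samples=10000, seed=0):
    """Monte Carlo samples of the truncated series sum_{k<n_terms} xi_k * lam**k,
    with the signs xi_k = ±1 independent and equally likely."""
    rng = random.Random(seed)
    powers = [lam ** k for k in range(n_terms)]
    return [sum(rng.choice((-1, 1)) * p for p in powers)
            for _ in range(n_samples)]

samples = sample_bernoulli_convolution(0.5)
# For lam = 1/2 the limiting measure is normalised Lebesgue measure on [-2, 2];
# every truncated sum is bounded by sum_k (1/2)**k < 2 in absolute value.
print(min(samples) >= -2 and max(samples) <= 2)  # True
print(sum(samples) / len(samples))               # empirical mean, close to 0
```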
It was conjectured by Erd\"os in 1940~\cite{erdos1940}, and proved by Solomyak in 1995~\cite{solomyak}, that for almost all $\lambda \in (\frac{1}{2},1)$ (with respect to Lebesgue measure) the measure $\mu_\lambda$ is absolutely continuous. Recently Shmerkin~\cite{S14},~\cite{S15}, developing the method of Hochman~\cite{hochman}, improved this result to show that the set of $\frac{1}{2} < \lambda < 1$ for which $\mu_\lambda$ is {\it not } absolutely continuous has zero Hausdorff dimension. On the other hand, it was shown by Erd\"os in~\cite{erdos1939} that this exceptional set of values is non-empty. We will be concerned with another, though related, aspect of the Bernoulli convolutions $\mu_\lambda$, namely their Hausdorff dimension. \begin{definition} \label{def:HDmeasure} The \emph{Hausdorff dimension} of a probability measure $\mu$ is defined by \begin{equation}\label{eq:dimHmu} \dim_H(\mu) : = \inf \{\dim_H(X) \mid \mbox{ $X$ is a Borel set with } \mu(X)=1\}, \end{equation} where $\dim_H(X)$ stands for the Hausdorff dimension of a set $X$, see Section~\ref{subsection:dimH} for definition. \end{definition} \noindent Any measure $\mu$ which is absolutely continuous with respect to Lebesgue measure automatically satisfies $\dim_H(\mu) =1$, and therefore the result of Shmerkin implies that $\dim_H(\mu_\lambda)=1$ for all but an exceptional set of parameters $\lambda$ of zero Hausdorff dimension. Furthermore, Varj\'u~\cite{V} recently proved the stronger result that $\dim_H(\mu_\lambda) =1$ for all transcendental $\lambda$. Therefore, it remains to consider the set of algebraic parameter values. It turns out that for a certain class of algebraic numbers, namely the reciprocals of Pisot numbers, it is possible to compute the Hausdorff dimension~$\dim_H(\mu_\lambda)$ explicitly, subject to computer resources. We briefly recall the definition.
\begin{definition} A \emph{Pisot number}~$\beta$ is a real algebraic integer strictly greater than one, all of whose (Galois) conjugates, excluding itself, lie strictly inside the unit circle. \end{definition} The Pisot numbers form a closed subset of $\mathbb R$ and have Hausdorff dimension strictly less than~$1$. The smallest Pisot number is $\beta_{min} = 1.3247\ldots$ (a root of $x^3 - x - 1=0$). The first progress on the dimension of Bernoulli convolutions was made by Erd\"os in~\cite{erdos1939}, where he showed that if $\lambda$ is the reciprocal of a Pisot number, then $\mu_\lambda$ is not absolutely continuous. Garsia~\cite{G63} improved on the Erd\"os result by showing that $\dim_H(\mu_\lambda) < 1$ whenever $\lambda$ is the reciprocal of a Pisot number. This phenomenon is called \emph{dimension drop} and it remains unknown whether Pisot numbers are the only numbers with this property. Alexander and Zagier estimated $\dim_H(\mu_\lambda)$ in the case where $\lambda=\frac{2}{1+\sqrt5}$ is the reciprocal of the golden mean, and Grabner, Kirschenhofer and Tichy~\cite{GKT02} gave examples of explicit algebraic numbers~$\lambda$, the so-called ``multinacci'' numbers, for which the dimension drop takes place. The values they computed are amongst the smallest known values for the dimension of Bernoulli convolutions. For example, they estimated that \begin{align*} \mbox{when } &\lambda^3 - \lambda^2 - \lambda - 1 = 0, && \mbox{ then }\dim_H(\mu_\lambda) = 0.980409319534731\ldots \\ \mbox{when } &\lambda^4 - \lambda^3 - \lambda^2 -\lambda -1 = 0, && \mbox{ then } \dim_H(\mu_\lambda) = 0.986926474333800\ldots. \end{align*} The technique developed in~\cite{GKT02} has been subsequently extended to a wider class of algebraic parameter values, cf.~\cite{AFKP} and~\cite{HKPS19}; however, the limitation of this method is that it requires studying each parameter value independently.
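As a stdlib-only numerical illustration (the helper names are ours), one can locate the smallest Pisot number from its minimal polynomial $x^3-x-1$ by bisection and, using Vieta's formula (the product of the three roots equals $1$), confirm that its two complex conjugates have common modulus $\beta^{-1/2}<1$.

```python
def real_root(f, lo, hi, tol=1e-12):
    """Bisection for a sign-changing root of f on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Minimal polynomial x^3 - x - 1 of the smallest Pisot number (the plastic number).
f = lambda x: x**3 - x - 1
beta = real_root(f, 1, 2)  # f(1) = -1 < 0 < 5 = f(2)

# The remaining two roots form a complex-conjugate pair; since the product of
# all three roots is 1 (Vieta), each conjugate has modulus beta**-0.5.
conj_modulus = beta ** -0.5

print(round(beta, 4))                  # 1.3247
print(beta > 1 and conj_modulus < 1)   # True: beta is a Pisot number
```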
It is therefore a basic problem to get a \emph{uniform} lower bound on $\dim_H \mu_\lambda$ for $\frac12<\lambda<1$ and to identify possible dimension drops. A simplifying observation is that \begin{equation} \label{eq:sqrt} \dim_H(\mu_\lambda) \geq \dim_H(\mu_{\lambda^2}) \end{equation} and thus it suffices to get a lower bound for $\frac{1}{2}< \lambda < \frac{1}{\sqrt{2}}$. \begin{remark} The inequality~\eqref{eq:sqrt} is established, in particular, in~\cite[Proposition 2.1]{HS18} for algebraic parameter values~$\lambda$, but it is easy to see that it holds for all $\frac12\le \lambda \le 1$, since for any probability measure $\mu$ we have that $\dim_H \mu*\mu \ge \dim_H \mu$. \end{remark} Our first main result on the dimension of Bernoulli convolutions is a collection of piecewise-constant uniform lower bounds over increasingly finer partitions of the parameter space. \begin{theorem}\label{t:main} \begin{enumerate} \item\label{i:1} The dimension of Bernoulli convolutions $\mu_\lambda$ for any $\frac12 < \lambda < 1$ satisfies \[ \dim_H \mu_{\lambda} \ge G_0 := 0.96399. \] \item\label{i:2} Moreover, the dimension of Bernoulli convolutions $\mu_\lambda$ is bounded from below by a piecewise-constant function $G_1$ with $8$ intervals of continuity, $\dim_H \mu_{\lambda} \ge G_1(\lambda)$, where the values of~$G_1$ are given in Table~\ref{tab:g1-bernoulli} for $0.5 \le \lambda \le 0.8$. \item\label{i:3} The previous bound can be further refined. The dimension of Bernoulli convolutions $\mu_\lambda$ is bounded from below by a piecewise-constant function $G_2$ corresponding to approximately~$10000$ intervals, $\dim_H \mu_{\lambda} \ge G_2(\lambda)$, where the graph of the function~$G_2$ is presented in Figure~\ref{fig:plotBC}, with the particularly interesting region $0.5 < \lambda < 0.575$ presented in Figure~\ref{fig:g2-bernoulli}.
\end{enumerate} \end{theorem} In the proof, we derive~\ref{i:2} from~\ref{i:3} and~\ref{i:1} from~\ref{i:2}, rather than establishing each estimate independently. We choose to give the statement in three parts for clarity of exposition. \begin{remark} Our proof is computer assisted and these bounds are not sharp, at least in the following sense: using a finer partition of the parameter space one could obtain even better lower bounds. This, of course, requires more computer time. \end{remark}
\parbox[c][60mm][t]{70mm}{ \bigskip \begin{tabular}{|c|c|} \hline Interval & $G_1$ \\ \hline
$[0.5000 , 0.5037 )$ & $0.9900$ \\
$[0.5037 , 0.5181 )$ & $0.9800$ \\
$[0.5181 , 0.5200 )$ & $0.9700$ \\
$[0.5200 , 0.5430 )$ & $0.9785$ \\
$[0.5430 , 0.5451 )$ & $0.9639$ \\
$[0.5451 , 0.5527 )$ & $0.9785$ \\
$[0.5527 , 0.5703 )$ & $0.9850$ \\
$[0.5703 , 0.8000 )$ & $0.9900$ \\
\hline \end{tabular} \captionof{table}{Values of $G_1$.} \label{tab:g1-bernoulli} }%
\parbox[c][60mm][t]{85mm}{ \includegraphics[width=85mm,height=50mm]{dimBCreduced.pdf} \captionof{figure}{Plots of $G_0$, $G_1$ and~$G_2$.} \label{fig:g2-bernoulli} }
The behaviour of the lower bound function~$G_2$ appears to be quite intriguing; in particular, the largest dimension drops seem to correspond to the reciprocals of the limit points of the set of Pisot numbers, see Section~\ref{s:algebraicLambda} for further discussion and Figure~\ref{fig:plotBC} for detailed plots. To the best of our knowledge, the best result to date is due to Feng and Feng~\cite{FF21}; they obtained a global lower bound of $\dim_H(\mu_\lambda) \geq 0.9804085$. They give an alternative approach for computing a lower bound for $\dim_H(\mu_\lambda)$, which uses the conditional entropy.
Three years earlier Hare and Sidorov~\cite{HS18} showed that $\dim_H(\mu_\lambda) \geq 0.82$. Their method depends on a result of Hochman and uses the fact that the dimension of~$\mu_\lambda$ can be expressed in terms of the Garsia entropy; most advances on this problem are based on this idea. Our approach is different from both and is rooted in the connection between iterated function schemes and random processes. In addition to uniform estimates, it allows us to compute good lower bounds on $\dim_H(\mu_\lambda)$ for individual values~$\lambda$. The following set of algebraic numbers, intimately related to Pisot numbers, is also extensively studied. \begin{definition} A Salem number is an algebraic integer $\sigma >1$ of degree at least 4, conjugate to $\sigma^{-1}$, all of whose conjugates, excluding $\sigma$ and $\sigma^{-1}$, lie on the unit circle. \end{definition} We refer to a survey by Smyth~\cite{Sm15} for an introduction to the topic. The set of limit points of Salem numbers contains the Pisot numbers. We have computed lower bounds for the reciprocals of Salem numbers, thus providing partial supporting evidence that there is no dimension drop for these parameter values. \begin{theorem} \label{thm:salemnum} \begin{enumerate} \item For every one of the $99$ values $\frac{1}{2} < \lambda < 1$ which is the reciprocal of a Salem number of degree at most $10$ one has that $\dim_H(\mu_\lambda) \geq 0.98546875$. Detailed estimates are tabulated in Appendix~\ref{ap:salem}. \item One can also consider the $47$ known so-called small Salem numbers $\frac{10}{13}< \lambda < 1$ and show that $\dim_H(\mu_\lambda) \geq 0.999453125$. Lower bounds on the dimensions of the Bernoulli convolutions for the reciprocals of small Salem numbers are presented in Appendix~\ref{ap:smallsalem}.
\end{enumerate} \end{theorem} Another conjecture suggests that there exists~$\varepsilon>0$ such that for any $\lambda\in (1-\varepsilon,1)$ the dimension of the measure~$\mu_\lambda$ equals~$1$. In particular, Breuillard--Varj\'u~\cite{BV19} showed that there exists $\varepsilon > 0$ so that $\dim_H(\mu_\lambda) =1$ for $1-\varepsilon < \lambda < 1$ \emph{under the assumption that Lehmer's conjecture holds}. Lehmer's conjecture states that the Mahler measure of any nonzero noncyclotomic irreducible polynomial with integer coefficients is bounded below by some constant $c > 1$. It implies, in particular, that there exists a smallest Salem number. As another application of our method, we give an asymptotic lower bound on $\dim_H \mu_\lambda$ as $\lambda \to 1$ in Section~\ref{s:abounds}. More precisely, we establish the following result. \begin{theorem} \label{thm:near1} There exist $c > 0$ and $\varepsilon > 0$ so that $\dim_H(\mu_\lambda) \geq 1 - c (1-\lambda)$ for $1-\varepsilon < \lambda < 1.$ \end{theorem} The Bernoulli convolutions are a special case of a far more general construction of self-similar measures, which we describe next. \subsection{Iterated function schemes with similarities} \label{subsection:ifs} Let $k$ be fixed. Given $0 < \lambda < 1$ and $\bar c \in \mathbb R^k$, consider a collection $\mathcal{S} = \{f_j, j = 1,\ldots,k\}$ of $k$ contraction similarities defined by $$ f_j \colon \mathbb{R} \to \mathbb{R}; \qquad f_j(x) = \lambda x \emr{\, +\,} c_j, \mbox{ for } 1\leq j \leq k. $$ Let $\bar p = (p_1, \cdots, p_k)$ be a probability vector where $0 < p_j < 1$ and $\sum\limits_{j=1}^k p_j=1$. \begin{definition} \label{def:IFS} We call a triple $\mathcal{S}(\lambda, \bar c, \bar p)$ an iterated function scheme of similarities. We will omit the dependence on $\bar c$ and $\bar p$ in the sequel when it leads to no confusion.
\end{definition} \begin{definition} A probability measure $\mu$ is called a {\it stationary measure} for the contractions $f_1, \cdots, f_k$ and the probability vector $\bar p$ if it satisfies $$ \mu = \sum_{j=1}^k p_j (f_j)_*\mu, $$ i.e., $\int F(x) d\mu(x) = \sum\limits_{j=1}^k p_j \int F(f_jx) d\mu(x)$, for all bounded continuous functions~$F$. \end{definition} \noindent The existence and the uniqueness of stationary measures in this setting follows from the work of Hutchinson~\cite{hutchinson}. In this note we are particularly concerned with the following two systems. The first one has the Bernoulli convolution as its stationary measure. \begin{example}[Function scheme for Bernoulli convolutions] \label{ex:BCsystem} Given a real number $\frac12<\lambda<1$ consider the iterated function scheme of two maps $f_0$, $f_1$ given by $f_j(x) = \lambda x + j$, $j=0,1$ and the probability vector $\bar p = \left(\frac12,\frac12\right)$. Then the stationary measure $\mu = \mu_\lambda$ corresponds to the distribution of the random variable $$ \sum_{k=0}^\infty \eta_k \lambda^k, $$ where the $\eta_k$ are i.i.d.\ random variables taking values $0$ and $1$ with equal probability. This agrees with formula~\eqref{eq:rv} up to the change of variables $\xi_k = 2\eta_k-1$. \end{example} \begin{example}[$\{0,1,3\}$-system]\label{ex:013} We can consider the contractions $f_1, f_2, f_3: \mathbb R \to \mathbb R$ defined by $$ f_1(x) = \lambda x, \quad f_2(x) = \lambda x+1, \quad f_3(x) = \lambda x+3, $$ and the probability vector $p = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$.
For the corresponding stationary measure $\mu_{\lambda}^{0,1,3}$ it is known that for almost all $\frac{1}{4} \leq \lambda \leq \frac{1}{3}$ with respect to Lebesgue measure we have \[ \dim_{H}(\mu_{\lambda}^{0,1,3}) = \frac{\log 3}{\log \lambda^{-1}} \] (this equality also holds for all $\lambda<\frac{1}{4}$) and for almost all $\frac{1}{3} \leq \lambda \leq \frac{2}{5}$ with respect to Lebesgue measure we have $\dim_{H}(\mu^{0,1,3}_\lambda) = 1$; see~\cite{KSS95},~\cite{PS95}. \end{example} The next theorem provides a lower bound for~$\dim_H(\mu_\lambda^{0,1,3})$. \begin{theorem}\label{thm:dim013} The dimension of the stationary measure $\mu_{\lambda}^{0,1,3}$ for the $\{0,1,3\}$-system has the following lower bounds. \begin{enumerate} \item\label{013i} For any $\lambda \in [\frac{1}{4},\frac{2}{5}]$ we have that $ \dim_H(\mu_{\lambda}^{0,1,3}) \ge G^{0,1,3}_0 := \min\left\{\frac{\log 3}{\log \lambda^{-1}},1\right\} - 0.2.$ \item\label{013ii} Moreover, $\dim_H(\mu_\lambda^{0,1,3})$ is bounded from below by a piecewise-continuous function $$ G_1^{0,1,3}(\lambda)|_{I_k} = \min\left\{\frac{\log 3}{\log \lambda^{-1}},1\right\}-c_k $$ with $11$ intervals of continuity $I_k$, $k = 1, \ldots, 11$, which are given in Table~\ref{tab:013}, together with the corresponding values $c_k$. In other words, for any $\lambda \in [\frac14, \frac25]$ $$ \dim_H(\mu_{\lambda}^{0,1,3}) \ge G_1^{0,1,3}(\lambda). $$ \item\label{013iii} The estimate from part~\ref{013ii} can be refined further. The dimension $\dim_H(\mu_\lambda^{0,1,3})$ is bounded from below by a piecewise-continuous function $G_2^{0,1,3}$ with approximately~$10000$ intervals of continuity, $\dim_H(\mu_{\lambda}^{0,1,3}) \ge G_2^{0,1,3}(\lambda)$. The graph of the function~$G_2^{0,1,3}$ is presented in Figure~\ref{fig:013}.
\end{enumerate} \end{theorem}
\parbox[c][70mm][t]{55mm}{ \smallskip \begin{tabular}{|c|c|} \hline Interval $I_k$ & $c_k$ \\ \hline
$[0.2500, 0.2630]$&$ 0.0350 $ \\
$[0.2630, 0.2650]$&$ 0.0550 $ \\
$[0.2650, 0.2800]$&$ 0.0350 $ \\
$[0.2800, 0.2820]$&$ 0.0650 $ \\
$[0.2820, 0.2980]$&$ 0.0350 $ \\
$[0.2980, 0.3210]$&$ 0.0850 $ \\
$[0.3210, 0.3320]$&$ 0.1100 $ \\
$[0.3320, 0.3350]$&$ 0.2000 $ \\
$[0.3350, 0.3450]$&$ 0.1100 $ \\
$[0.3450, 0.3670]$&$ 0.0800 $ \\
$[0.3670, 0.4045]$&$ 0.0400 $ \\
\hline \end{tabular} \captionof{table}{Table of values of $c_k$.} \label{tab:013} }%
\parbox[c][75mm][t]{105mm}{ \includegraphics[width=105mm,height=65mm]{dim013reduced.pdf} \captionof{figure}{Plots of $G^{0,1,3}_1$ and~$G^{0,1,3}_{2}$.} \label{fig:013} }
\begin{remark} An alternative version of part~\ref{013i} could be: for any $0.25<\lambda<0.4$ we have that $\dim_H \mu_\lambda^{0,1,3} \ge G_0^{0,1,3}(\lambda)$, where $$ G_0^{0,1,3}(\lambda) := \begin{cases} \frac{\log 3}{\log \lambda^{-1}}-0.11, & \mbox{ if } 0.25<\lambda<0.3210; \\ \frac{\log 3}{\log \lambda^{-1}}-0.2, &\mbox{ if } 0.3210<\lambda < 0.3250; \\ 0.89, &\mbox{ otherwise. } \end{cases} $$ In particular, we see that the largest dimension drop seems to take place at $\lambda = \frac13$. For this parameter value the dimension can be computed explicitly~\cite{KV2021} following the method of~\cite{GKT02}; more precisely, $$ \dim_H\mu_{1/3}^{0,1,3} =0.83703915049\pm10^{-10}. $$ \end{remark} As in the case of Bernoulli convolutions, the biggest dimension drops appear to correspond to the reciprocals of the limit points of hyperbolic numbers\footnote{An algebraic number is called hyperbolic if all its Galois conjugates lie inside the unit circle.}. However, in contrast to the Pisot numbers in the interval $(1,2)$, the limit set of hyperbolic numbers in the interval $\left(\frac52,4\right)$ is not very well studied.
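As a heuristic illustration only (not part of the rigorous bounds above), the stationary measure $\mu_\lambda^{0,1,3}$ can be approximated by random iteration of the three maps, the so-called chaos game; the Python sketch below simulates such an orbit. The function name, step count and seed are our own choices.

```python
import random

def chaos_game_013(lam, n_steps=20_000, seed=1):
    """Random iteration x_{n+1} = lam * x_n + c with c drawn uniformly
    from {0, 1, 3}; the empirical distribution of the orbit approximates
    the stationary measure of the {0,1,3}-system."""
    rng = random.Random(seed)
    x, orbit = 0.0, []
    for _ in range(n_steps):
        x = lam * x + rng.choice((0.0, 1.0, 3.0))
        orbit.append(x)
    return orbit

orbit = chaos_game_013(1 / 3)
# The attractor is contained in [0, 3/(1 - lam)] = [0, 4.5] for lam = 1/3.
assert all(0.0 <= x <= 4.5 for x in orbit)
```

A histogram of `orbit` gives a rough picture of $\mu_{1/3}^{0,1,3}$; as with any sampling, it cannot by itself detect the dimension drop at $\lambda = \frac13$.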
We give a detailed plot of $G_2^{0,1,3}$ in Figure~\ref{fig:plot013} and discuss its features in Section~\ref{s:algebraicLambda}. We obtain lower bounds for the Hausdorff dimension of Bernoulli convolutions and for the stationary measures of the $\{0,1,3\}$-system using the same method, which we outline in the next section. \subsection{Approach to lower bounds for Hausdorff dimension} \label{ss:approach} The Hausdorff dimension of a measure is an important characteristic which is generally difficult to estimate, both numerically and analytically. We introduce two alternative characteristics of dimension type, namely, the correlation dimension and the Frostman dimension, which are easier to estimate and give a lower bound on the Hausdorff dimension. Whilst the numerical results suggest that in the case of iterated function schemes with similarities the Frostman dimension and the Hausdorff dimension behave very differently, the correlation dimension appears to exhibit the same dependence on parameter values as expected from the Hausdorff dimension. To sum up, our approach is the following: \indent\parbox{0.8\textwidth}{ \begin{enumerate} \item[Step 1:] Replace the Hausdorff dimension with the correlation dimension or the Frostman dimension; \item[Step 2:] Compute a lower bound for the correlation dimension or the Frostman dimension. \end{enumerate} } \subsubsection{Affine iterated function schemes with similarities} We begin by defining the correlation dimension, which bounds the Hausdorff dimension from below (cf. Lemma~\ref{l:inequality}). It was introduced in~\cite{PGH83} as a characteristic of dimension type. The notion was subsequently formalised by Pesin in~\cite{P93}, see also~\cite{CHY97} and~\cite{SS98}. We will give a formal definition later in Section~\ref{subsection:dimC}. We proceed by introducing one of our main tools, a \emph{symmetric diffusion operator} associated to an iterated function scheme.
Let $\mathcal{S}(\lambda,\bar c, \bar p)$ be an iterated function scheme of similarities. Assume that $J_\lambda \subset \mathbb R$ is an interval such that $f_i(J_\lambda)\subset J_\lambda$ for all $i=1,\dots,k$. For the stationary measure $\mu_\lambda$ we then have that $\supp \mu_\lambda \subset J_\lambda$. We say that an interval $J$ is $\lambda$-\emph{admissible} for an iterated function scheme~$\mathcal{S}(\lambda,\bar c,\bar p)$ if \begin{equation} \label{eq:admit} \mbox{ Interior}(J) \supset \overline{ \{x - y \mid x,y \in J_\lambda \} }. \end{equation} This is illustrated in Figure~\ref{fig:admissible}. Given a (possibly infinite) set of parameter values~$\Lambda\subset[0,1]$ we say that the interval $J$ is {\it $\Lambda$-admissible} if it is an admissible interval for all $\lambda\in\Lambda$. \begin{figure} \centering \includegraphics{intervals4.pdf} \caption{An iterated function scheme of three similarities $f_0(x) = \frac{x}{3}$, $f_1(x)=\frac{x}3+1$, and $f_3(x)=\frac{x}{3}+3$ and an admissible interval~$J$.} \label{fig:admissible} \end{figure} \begin{definition} \label{def:symdif} Given an iterated function scheme of similarities~$\mathcal{S}(\lambda,\bar c, \bar p)$, for any $\alpha\in (0,1)$ we define the \emph{symmetric diffusion operator} $\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$ by \begin{equation} \label{eq:dif-twoway} [\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}\psi](x) := \lambda^{-\alpha} \cdot \sum_{i,j=1}^k p_i p_j \cdot \psi\left(\frac{x -c_i+c_j}{\lambda}\right). \end{equation} We consider this operator to be acting on the space of all functions on the real line; however, the subset of nonnegative functions $$ \left\{\emr{\psi}: \mathbb R \to \mathbb R^{+} \mid \supp \emr{\psi} \subseteq \overline{\{ x-y \mid x, y \in J_\lambda \}} \right\} $$ is invariant with respect to~$\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$.
\end{definition} \begin{remark} Although the difference between the two operators $\mathcal{D}^{(2)}_{\alpha_1,\mathcal{S}}$ and $\mathcal{D}^{(2)}_{\alpha_2,\mathcal{S}}$ is only in the scaling factor, we prefer to keep this factor as a part of the definition. \end{remark} We are now ready to state a key result, which is the basis for our numerical method. \begin{theorem}\label{t:certificate-D2} Let~$\mathcal{S}(\lambda,\bar c, \bar p)$ be an iterated function scheme of similarities. Assume that for some $\alpha>0$ there exist an admissible compact interval $J \subset \mathbb R$ and a function $\psi:\mathbb{R}\to\mathbb{R}^+$ with $\supp \psi \subset J$ which is positive and bounded away from~$0$ and from infinity on~$J$, such that for any $x \in J$ \begin{equation} \label{eq:D2-test} [\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}\psi](x) < \psi(x). \end{equation} Then the correlation, and hence the Hausdorff, dimension of the $\mathcal{S}$-stationary measure $\mu$ is bounded from below by~$\alpha$: \[ \dim_H (\mu) \ge D_2(\mu)\ge \alpha. \] \end{theorem} Theorem~\ref{t:certificate-D2} allows us to obtain rigorous lower estimates for the correlation dimension~$D_2(\mu)$ of the stationary measure $\mu$ for a single parameter value $\lambda$ (and thus for the Hausdorff dimension $\dim_H\mu$), once a suitable test function~$\psi$ is found. This also provides us with a way to find an asymptotic lower bound and to prove Theorem~\ref{thm:near1}. \begin{example} To illustrate the way Theorem~\ref{t:certificate-D2} is applied, we may choose $\lambda=0.75$ and the function $\psi(x) = 1-0.2|x|$, and apply the operator $\mathcal D^{(2)}_{0.2,\mathcal S}$. It is clear that we may choose $J = [-4,4]$. Then Figure~\ref{fig:thm5ex} shows that $\mathcal D^{(2)}_{0.2,\mathcal S} \psi (x) < \psi(x)$ and therefore $\dim_H \mu \ge 0.2$.
\begin{figure} \centerline{ \includegraphics{thm5ex.pdf} } \caption{The image of the function $\psi = 1 - 0.2|x|$ is strictly smaller than $\psi$.} \label{fig:thm5ex} \end{figure} \end{example} We next want to adapt Theorem~\ref{t:certificate-D2} to prepare for a computer-assisted proof of Theorems~\ref{t:main},~\ref{thm:salemnum} and~\ref{thm:dim013}. In Section~\ref{ss:efbounds-2} we modify the operator $\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$ to obtain an operator~${\mathcal D}_{\alpha,\Lambda,\mathcal J}$ which preserves a subspace of piecewise constant functions, and amend Theorem~\ref{t:certificate-D2} so that a common test function can be used for an open set of parameter values~$\Lambda=(\lambda-\varepsilon,\lambda+\varepsilon)$. This adaptation allows us to choose the test function to be piecewise constant on intervals with rational endpoints and to verify the hypothesis of Theorem~\ref{t:certificate-D2} numerically, thus providing us with a means to obtain a uniform lower bound for the (correlation, and hence Hausdorff) dimension of the corresponding stationary measures. Afterwards, in Section~\ref{ss:efbounds-1} we give an iterative procedure to construct a test function for the operators~$\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$ and ${\mathcal D}_{\alpha,\Lambda,\mathcal J}$. A natural question arises. Assume that $D_2(\mu_\lambda)>\alpha$. Does there exist a test function~$\psi$ so that~\eqref{eq:D2-test} holds? The next result gives an affirmative answer. \begin{theorem}\label{t:finding-D2} Let $\mu_\lambda$ be the unique stationary measure of a scheme of contraction similarities~$\mathcal{S}(\lambda)$. Then for any $\alpha<D_2(\mu_\lambda)$ the hypothesis of Theorem~\ref{t:certificate-D2} holds.
In other words, there exist an admissible interval $J$ and a piecewise constant function $\psi$ with $\supp \psi \subset J$ which is positive and bounded away from 0 and from infinity on $J$, and such that for any $x \in J$ \begin{equation*} [\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}\psi](x) < \psi(x). \end{equation*} \end{theorem} As the reader will see, the technique in the proof of Theorem~\ref{t:certificate-D2} exploits the fact that the maps are similarities with the same scaling coefficient. In \S\ref{ss:ingen} we generalise the method to study other types of iterated function schemes at the expense of weaker estimates. \subsubsection{General uniformly contracting schemes} \label{ss:ingen} Let us denote by~$B_r(x)$ the neighbourhood of a point~$x$ of radius~$r$. We will be concerned with iterated function schemes~$\mathcal T(\bar f, \bar p, J)$, where $J \subset \mathbb R$ is a compact interval, $\bar f = (f_1, \ldots, f_n)$ is a finite collection of uniformly contracting $C^{1+\varepsilon}$ diffeomorphisms of~$\mathbb R$ which preserve the interval~$J$, i.e. $f_j (J) \subset J$ for $1 \le j \le n$, and $\bar p$ is a probability vector. Following Hochman~\cite[\S4.1]{H12}, we say that the measure~$\mu$ is \emph{$\alpha$-regular} if there exists a constant $C$ such that for any $r>0$ and any $x$ we have that \begin{equation} \label{eq:regmeasure} \mu(B_r(x))< C r^\alpha. \end{equation} One example of $\alpha$-regular measures is given by Bernoulli convolutions~\cite[Proposition 2.2]{FL09}. We introduce the following dimension-type characteristic of a compactly supported probability measure~$\mu$ on $\mathbb R$, which is sometimes referred to as the \emph{Frostman dimension}~\cite{FFK20} (in the context of $\mathbb R^n$) or the \emph{lower Ahlfors dimension} (in the context of general separable metric spaces).
It is defined as the supremum of the regularity exponents: $$ D_1(\mu): = \sup\{\alpha \mid \exists C: \quad \forall x,r \quad \mu\left( B_r(x) \right) <Cr^{\alpha} \}. $$ \begin{remark} We would like to warn the reader that the Frostman dimension does not satisfy all the conditions which a dimension of a measure is expected to satisfy; in particular, it is not closed under countable unions. We will see in Lemma~\ref{lem:d2d1} that $D_1(\mu) \le D_2(\mu)$ for any probability measure~$\mu$. It is not hard to show that it is also a lower bound for the packing dimension, as well as for other dimensions which can be defined using the local dimension. \end{remark} A pair of complementary results, Theorems~\ref{t:certificate-D1} and~\ref{t:finding-D1} below, allows one to get a lower bound on the \emph{Frostman} dimension~$D_1(\mu)$ of the stationary measure of an iterated function scheme~$\mathcal T(\bar f, \bar p, J)$ in terms of an associated linear operator. \begin{definition} Given an iterated function scheme $\mathcal T(\bar f, \bar p, J)$, for any $\alpha\in (0,1)$ we define the associated \emph{asymmetric diffusion operator} $\mathcal{D}^{(1)}_{\alpha,\mathcal T}$ by \begin{equation} \mathcal{D}^{(1)}_{\alpha,\mathcal T}[\psi](x) := \sum_{j=1}^n p_j \cdot |(f_{j}^{-1})'(x)|^{\alpha} \cdot \psi(f_j^{-1}(x)). \label{eq:dif-oneway} \end{equation} We consider this operator to be acting on the space of all functions on the real line, although it preserves nonnegative functions supported on~$J$. \end{definition} \begin{remark} Comparing~\eqref{eq:dif-oneway} with~\eqref{eq:dif-twoway}, we see that the operator $\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$ for the Bernoulli convolution system described in Example~\ref{ex:BCsystem} corresponds to the operator~$\mathcal{D}^{(1)}_{\alpha,\mathcal T}$ for the system of three contractions $$ \mathcal T:=\{f_1(x) = \lambda x - 1, f_2(x)=\lambda x, f_3(x) = \lambda x+1\} $$ and probability vector $\overline p = (0.25,0.5,0.25)$.
\end{remark} By analogy with $\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$, the operator $\mathcal{D}^{(1)}_{\alpha,\mathcal T}$ gives us a way to obtain a lower bound for the Frostman dimension. We denote by $B_r(J)$ the closed neighbourhood of the interval $J$ of radius~$r$. \begin{theorem} \label{t:certificate-D1} Assume that for some $\alpha>0$ there exist $r>0$ and a function $\psi:\mathbb{R}\to\mathbb{R}_+$, supported on~$B_r(J)$, positive on $B_r(J)$ and bounded away from $0$ and from infinity on $B_r(J)$, such that \begin{equation*} \forall x\in B_r(J) \quad [\mathcal{D}^{(1)}_{\alpha,\mathcal T} \psi](x) < \psi(x). \end{equation*} Then the measure~$\mu$ is $\alpha$--regular. \end{theorem} \emr{We now give a simple example to illustrate Theorem~\ref{t:certificate-D1} in action. \begin{example} We may consider an iterated function scheme $\mathcal T$ consisting of two maps $f_1(x) = 0.65x$ and $f_2(x) = 0.6x +1$ with probabilities $p_1 = p_2 = 0.5$. Then for the stationary measure $\mu$ we get $\supp \mu \subset J = [0,2.7]$. If we choose the function $\psi(x) = 1 - 0.4 |x- 1.25|$ on~$J$ and apply $\mathcal D^{(1)}_{0.35,\mathcal T}$ we see that $\mathcal D^{(1)}_{0.35,\mathcal T} \psi (x) < \psi(x)$. This is illustrated in Figure~\ref{fig:ex2}. Therefore we conclude that the Frostman dimension of the stationary measure of this system is bounded from below by~$0.35$. \begin{figure} \centerline{\includegraphics{thm7ex2.pdf}} \caption{The image of the function $\psi(x) = 1-0.4|x-1.25|$ is strictly smaller than $\psi$. } \label{fig:ex2} \end{figure} \end{example}} As in the case of Theorem~\ref{t:finding-D2} for the correlation dimension, our next result states that any lower bound can be found using this method. \begin{theorem}\label{t:finding-D1} Let $\mu$ be the stationary measure of the iterated function scheme $\mathcal T(\bar f, \bar p, J)$. Then for any $\alpha < D_1(\mu)$ the hypothesis of Theorem~\ref{t:certificate-D1} holds.
In other words, there exist a neighbourhood $B_r(J)$ and a piecewise constant function~$\psi$ with $\supp \psi \subset B_r(J)$, which is positive and bounded away from 0 and from infinity on $B_r(J)$, and such that \begin{equation*} \forall x\in B_r(J) \quad [\mathcal{D}^{(1)}_{\alpha,\mathcal T}\psi](x) < \psi(x). \end{equation*} \end{theorem} This test function can be constructed using a process similar to the one used in the construction of the test function for the diffusion operator~$\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$, described in~\S\ref{ss:efbounds-1}. \section{Dimension of a measure} In this section we collect together some preparatory material on different dimensions of a measure and their properties. A good reference for the background reading is the book by Falconer~\cite{F90}. See also the work by Mattila et al.~\cite{MMR00} for a discussion and comparison of notions of dimension of a measure. It is convenient to summarize some useful notation for the sequel. For any set $X\subset \mathbb R$ we denote by $\mathcal F_X$ the set of real-valued positive functions, bounded away from zero and from infinity on $X$ and vanishing on $\mathbb R \setminus X$. We equip the set of functions $\mathcal F_X$ with a partial order: we write $f\prec g$ if $f(x)<g(x)$ for all $x\in X$, and $f\preccurlyeq g$ if $f(x)\le g(x)$ for all $x\in X$. Given a finite partition $\mathcal X \eqdef \{X_j, \, j =1,\ldots,N\}$, $X = \cup_{j=1}^N X_j$, we denote by $\mathcal F_{\mathcal X}$ the subset of piecewise-constant functions associated to the partition $\mathcal X$. Given a collection of $n$ maps $f_j$, $j=1,\ldots, n$, we use multi-index notation for a composition of~$k$ of them, namely, we denote $f_{\underline j_k} := f_{j_1} \circ \cdots \circ f_{j_k}$, where $\underline j_k = (j_1,\ldots,j_k) \in \{1,\ldots,n\}^k$. Finally, we denote by $\mathds{1\!}_X$ the indicator function of $X$.
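In this notation, the hypothesis of Theorem~\ref{t:certificate-D1} reads $\mathcal D^{(1)}_{\alpha,\mathcal T}\psi \prec \psi$, and for a concrete scheme it can be probed numerically on a fine grid. The Python sketch below does this for the two-map scheme $f_1(x)=0.65x$, $f_2(x)=0.6x+1$ considered earlier, with a tent-shaped test function and the exponent $\alpha=0.2$, both chosen by us for this illustration to leave a comfortable margin; a floating-point grid check is of course only a heuristic, and a rigorous verification would require interval arithmetic.

```python
# Heuristic grid check of the test-function inequality D1[psi] < psi for the
# two-map scheme f1(x) = 0.65x, f2(x) = 0.6x + 1 with p1 = p2 = 1/2.
# The exponent alpha = 0.2 and the test function are our own choices here.

ALPHA = 0.2
P = (0.5, 0.5)
CONTRACTIONS = (0.65, 0.6)   # slopes of f1, f2
SHIFTS = (0.0, 1.0)          # translations of f1, f2

def psi(y):
    """Tent-shaped test function, clamped to stay nonnegative."""
    return max(0.0, 1.0 - 0.4 * abs(y - 1.25))

def diffused(x):
    """[D^(1)_alpha psi](x) = sum_j p_j |(f_j^{-1})'(x)|^alpha psi(f_j^{-1}(x))."""
    total = 0.0
    for p, lam, c in zip(P, CONTRACTIONS, SHIFTS):
        inv = (x - c) / lam              # f_j^{-1}(x)
        total += p * (1.0 / lam) ** ALPHA * psi(inv)
    return total

# Check the strict inequality on a fine grid over J = [0, 2.7].
grid = [2.7 * i / 5000 for i in range(5001)]
margin = min(psi(x) - diffused(x) for x in grid)
assert margin > 0.0
```

Since both $\psi$ and its image are piecewise linear here, positivity of the margin at the finitely many kink points already implies the inequality on all of $J$; the grid is simply a convenient way to cover them.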
\subsection{Hausdorff dimension}\label{subsection:dimH} We briefly recall the definition of the Hausdorff dimension of a set $X \subset \mathbb R$. Given $s>0$ and $\delta>0$ we define the $s$-dimensional Hausdorff content of~$X$ by $$ H_\delta^s(X) := \inf \left\{ \sum_{i} (\mathrm{diam}(U_i))^s \mid \{U_i\} \mbox{ is a cover for $X$ and $\sup_i \{\hbox{\rm diam}(U_i)\} \leq \delta$} \right\}, $$ where the infimum is taken over all countable covers of~$X$ by open sets whose diameter is at most $\delta$. We next remove the $\delta$ dependence by defining the $s$-dimensional Hausdorff measure of $X$ by $$ H^s(X) := \lim_{\delta \to 0} H_\delta^s(X) \in [0, +\infty]. $$ Finally, we come to the definition of the Hausdorff dimension of the set $X$. \begin{definition} \label{def:Hdim} The {\it Hausdorff dimension} of $X$ is defined by \begin{equation} \dim_H(X) := \inf\{ s \geq 0 \mid H^s(X) =0\}. \end{equation} \end{definition} In particular, the Hausdorff dimension of Borel sets (Definition~\ref{def:Hdim}) is used in the definition of the Hausdorff dimension of probability measures (Definition~\ref{def:HDmeasure}). \subsection{Correlation dimension}\label{subsection:dimC} A convenient method to obtain a lower bound on the Hausdorff dimension $\dim_H(\mu)$ is a standard technique called \emph{the potential principle} (see~\cite[p.~44]{P}) which allows one to relate the Hausdorff dimension of a measure to the convergence of the integral of powers of the distance function. \begin{definition} \label{def:emu} We define \emph{the energy} of a probability measure~$\mu$ by \begin{equation}\label{eq:a-int} I(\mu,\alpha) := \int_{\mathbb R}\int_{\mathbb R} (d(x,y))^{-\alpha} \, \mu(dx) \, \mu(dy), \end{equation} whenever the right-hand side is finite. \end{definition} \begin{definition}\label{def:D2} The {\it correlation dimension} of the measure $\mu$ is defined by \begin{equation} \label{eq:D2-bis} D_2(\mu) = \sup\{\alpha \colon I(\mu,\alpha)< + \infty \}.
\end{equation} \end{definition} \noindent This is a special case of the more general $q$-dimensions $D_q(\mu)$ defined analogously~\cite{P93}. The correlation dimension of~$\mu$ gives a handy lower bound on the Hausdorff dimension of~$\mu$. \begin{lemma}\label{l:inequality} $\dim_H(\mu) \geq D_2(\mu)$. \end{lemma} \begin{proof} The principle involved is described, for instance, in the book by Falconer~\cite[Theorem~4.13]{F90} for sets, or in the book by Mattila~\cite[\S8]{M92} for measures. \end{proof} The following simple result turns out to be very fruitful. \begin{corollary}\label{l:integral} If for a Borel probability measure $\mu$ and $0 < \alpha < 1$ the energy $I(\mu, \alpha)$ is finite, then $\dim_H(\mu) \geq \alpha$. \end{corollary} \begin{remark} Developing the method proposed in~\cite{HKPS19}, it is possible to show~\cite{KV2021} that the strict inequality $\dim_H(\mu_\lambda) > D_2(\mu_\lambda)$ holds, for example, for some Pisot values of the parameter~$\lambda$, both in the case of Bernoulli convolutions as described in Example~\ref{ex:BCsystem} and in the case of the $\{0,1,3\}$-system as described in Example~\ref{ex:013}. \end{remark} We now recall that the convolution of a continuous function~$f$ and a probability measure~$\mu$ is the function given by $(f*\mu) (x)= \int_{\mathbb{R}}f(x-z) d \mu(z)$. This brings us to the last dimension notion we discuss in this work. \subsection{Frostman dimension} \label{ss:pem} Since this notion is not very well known, we begin by introducing it. \begin{definition} \label{def:D1} Let us fix the function $f_\alpha(r)=|r|^{-\alpha}$. Let $\mu$ be a compactly supported probability measure.
We define its \emph{Frostman dimension} by \begin{align} D_1(\mu)& = \sup\{\alpha \hbox{ : } \exists C: \quad \forall x,r \quad \mu(B_{r}(x))<Cr^{\alpha} \} \tag{\ref{def:D1}.1} \label{eq:D1.1} \\ & = \sup\{\alpha \hbox{ : } \exists C: \quad \forall x \quad (f_\alpha *\mu)(x) <C \} \tag{\ref{def:D1}.2} \label{eq:D1.2} \\ & = \sup\{\alpha \hbox{ : } \text{the convolution } \, f_\alpha *\mu \text{ is a continuous function} \}. \tag{\ref{def:D1}.3} \label{eq:D1.3} \end{align} \end{definition} \begin{remark} It is easy to see that the three expressions for $D_1$ give the same value. Indeed, it follows from the Chebyshev inequality that for any $\alpha$ and $C$ such that $(f_\alpha *\mu)(x) <C$ we have that $\mu(B_{r}(x))<Cr^{\alpha}$, so~\eqref{eq:D1.2} implies~\eqref{eq:D1.1}. Since $\supp\mu$ is a compact set, the convolution $(f_\alpha * \mu)(x) \to 0$ as $x \to \infty$. Therefore if the function $f_\alpha * \mu$ is continuous, it is also bounded. Hence~\eqref{eq:D1.3} implies~\eqref{eq:D1.2}. Finally, let us show that if $\mu(B_{r}(x))<Cr^{\alpha}$ for some $\alpha$ and $C$, then for any $\alpha^\prime<\alpha$ the convolution $f_{\alpha^\prime} * \mu$ is continuous. Indeed, for any $x$ and $\varepsilon>0$ we have the estimate \begin{equation} \label{eq:eps-conv} \int_{B_{\varepsilon}(x)} |x-y|^{-\alpha'} \, d\mu(y) \le \int_0^{\varepsilon} r^{-\alpha'} d(Cr^{\alpha}) = \frac{\alpha C}{\alpha-\alpha'}\cdot \varepsilon^{\alpha-\alpha'} \to 0 \quad \mbox{ as }\varepsilon\to 0. \end{equation} On the other hand, the convolution of $\mu$ with the function $\bar f_{\alpha^\prime}(r)=\max(|r|,\varepsilon)^{-\alpha^\prime}$ is a convolution of a probability measure with a continuous bounded function and hence is continuous. It follows from~\eqref{eq:eps-conv} that these convolutions converge uniformly to $f_{\alpha'}*\mu$, and therefore the latter is everywhere finite and continuous as a uniform limit of continuous functions. Thus~\eqref{eq:D1.1} implies~\eqref{eq:D1.3}.
\end{remark} It is also not difficult to see that the Frostman dimension is not larger than the correlation dimension. \begin{lemma} \label{lem:d2d1} For any compactly supported probability measure $\mu$, \begin{equation}\label{eq:D1D2} D_1(\mu)\le D_2(\mu). \end{equation} \end{lemma} \begin{proof} Let us consider the function $f_\alpha(r) = |r|^{-\alpha}$. Then for any $\alpha$ such that the convolution $f_\alpha*\mu$ is bounded, one has that \[ I(\mu,\alpha)=\int_\mathbb{R} (f_\alpha*\mu)(x) d\mu(x) < + \infty. \] Therefore \[ \{\alpha \mid \exists C: \quad \forall x \quad (f_\alpha *\mu)(x) <C \} \subset \{\alpha \mid I(\mu,\alpha)<+\infty \}, \] and the desired inequality~\eqref{eq:D1D2} follows. \end{proof} \section[Computing lower bounds]{Computing uniform lower bounds on dimension} \label{s:efbounds} We begin by modifying the diffusion operator and Theorem~\ref{t:certificate-D2} in preparation for the computer-assisted proofs of Theorems~\ref{t:main},~\ref{thm:salemnum} and~\ref{thm:dim013}. \subsection{Extension to open set of parameters} \label{ss:efbounds-2} We keep the notation of Section~\ref{subsection:ifs}. Let $\mathcal{S}(\lambda, \bar c, \bar p)$ be an iterated function scheme of~$n$ similarities with the same scaling coefficient~$\lambda$ and probability vector~$\bar p$. Given $0<\alpha<1$ and a subset $\Lambda \subset [0,1]$, let $J$ be a $\Lambda$-admissible interval and let $\mathcal J = \{J_1,\ldots,J_N\}$ be a partition of $J$. The modified diffusion operator we introduce below preserves the subspace $\mathcal F_{\mathcal J}$ of piecewise constant functions associated to the partition~$\mathcal J$.
\begin{definition} We define a finite rank nonlinear diffusion operator $ \mathcal D_{\alpha,\Lambda,\mathcal J}: \mathcal F_{\mathcal J} \to \mathcal F_{\mathcal J}$ by \begin{equation} \label{eq:D-Lambda} \mathcal D_{\alpha,\Lambda, \mathcal J} \psi |_{J_k} = (\inf \Lambda)^{-\alpha} \sum_{i,j=1}^n p_i p_j \sup_{x\in J_k, \, \lambda\in\Lambda} \psi \left( \frac{x- c_i + c_j }\lambda \right), \quad 1 \le k \le N. \end{equation} \end{definition} We see directly from the definition that for any $\lambda \in \Lambda$ and any $\psi \in \mathcal F_{\mathcal J}$ we have $$ \mathcal{D}^{(2)}_{\alpha,\mathcal{S}(\lambda)} \psi \preccurlyeq \mathcal D_{\alpha,\Lambda,\mathcal J} \psi. $$ The following adaptation of Theorem~\ref{t:certificate-D2} to the operator~$\mathcal D_{\alpha,\Lambda,\mathcal J}$ follows immediately. \begin{theorem} \label{thm:5star} Assume that for some $\alpha>0$ and a set $\Lambda\subset[0,1]$ there exists a $\Lambda$-admissible interval~$J_\Lambda$, its partition~$\mathcal J$, and a piecewise-constant function $\psi \in \mathcal F_{\mathcal J}$ which is positive and bounded away from~$0$ and from infinity on~$J_\Lambda$, such that \begin{equation} \label{eq:D2-hat-test} \mathcal D_{\alpha,\Lambda,\mathcal J}\psi \preccurlyeq \psi. \end{equation} Then for any $\lambda \in \Lambda$ the correlation dimension of the $\mathcal{S}_\lambda$-stationary measure $\mu_\lambda$ is bounded from below by~$\alpha$: \[ D_2(\mu_\lambda)\ge \alpha. \] \end{theorem} \emr{ We can illustrate the principle with the following example. \begin{example} In the setting of the Bernoulli convolution with $\lambda = 0.75$ we may choose an interval $\Lambda = [\lambda - 10^{-8}, \lambda+10^{-8}]$ and $J = [-4.1, 4.1]$.
Applying the operator $\mathcal D_{0.2, \Lambda, \mathcal J}$ to the function \begin{align*} \psi(x) = 0.15 \cdot \mathds{1}_{[-4,4]}+0.1 \cdot \mathds{1}_{[-2.7,2.7]} &+ 0.15 \cdot \mathds{1}_{[-2.1,2.1]} + 0.1\cdot \mathds{1}_{[-1.6,1.6]} + 0.125\cdot \mathds{1}_{[-1.5,1.5]} \\ &+ 0.1 \cdot \mathds{1}_{[-1.2,1.2]} + 0.125 \cdot \mathds{1}_{[-0.5,0.5]} + 0.15\cdot \mathds{1}_{[-0.25,0.25]}, \end{align*} depicted in Figure~\ref{fig:t5star}, we get that $\mathcal D_{0.2, \Lambda, \mathcal J} \psi \prec \psi$ and conclude that $D_2(\mu_\lambda) \ge 0.2$, and hence $\dim_H \mu_\lambda \ge 0.2$, for all $\lambda \in \Lambda$. \begin{figure} \centerline{\includegraphics{thm5star.pdf}} \caption{The image of the piecewise constant function~$\psi$ is strictly smaller than~$\psi$.} \label{fig:t5star} \end{figure} \end{example} } Therefore in order to show that $D_2( \mu_\lambda) \ge \alpha$ for all $\lambda \in \Lambda$, it is sufficient to find a $\Lambda$-admissible interval $J_\Lambda$, its partition $\mathcal J$, and a piecewise constant function~$\psi$ associated to $\mathcal J$, with $\psi|_{J_\Lambda} > 0$, such that $\mathcal D_{\alpha,\Lambda,\mathcal J} \psi \preccurlyeq \psi$, and then apply Theorem~\ref{thm:5star}. \begin{remark} Furthermore, by refining the partition in the construction of the operator $\mathcal D_{\alpha,\Lambda,\mathcal J}$ and choosing smaller intervals $\Lambda$, in the limit we obtain the correlation dimension. In particular, this implies the well-known fact that the correlation dimension is lower semicontinuous. \end{remark} \subsection{Constructing the test function} \label{ss:efbounds-1} The construction of a suitable test function~$\psi$ which satisfies the hypothesis of Theorem~\ref{t:certificate-D2} is based on the following general result for monotone operators. \begin{notation} Given an operator $A$ acting on real-valued functions and a small number $\vartheta > 0$ we introduce \begin{equation} \label{eq:hatop} [\hatop{A} f](x) :=\min([Af](x)+\vartheta,f(x)).
\end{equation} Observe that if $A$ preserves the subset of positive functions, then $\hatop A $ also does so. Furthermore, if $A$ preserves the subspace of continuous functions, then $\hatop A$ preserves this subspace too. \end{notation} We say that an operator $A \colon \mathcal F_J \to \mathcal F_J$ is \emph{monotone} if for any $f, g \in \mathcal F_J$ such that $g \preccurlyeq f$ we have that $A g \preccurlyeq Af$. Note that we do not require the operator~$A$ to be linear in the definition of monotonicity. We will need the following easy general statement. \begin{lemma} \label{l:min} Let~$A\colon \mathcal F_X\to\mathcal F_X$ be a monotone operator, and let $\vartheta>0$ be a real number. Assume that for some function $f\in \mathcal F_X$ and for some~$x_0 \in X$ we have that $[\hatop A f](x_0) = [Af](x_0)+\vartheta$. Then for any~$k \ge 1 $ \[ [A\hatop{A}^k f](x_0)+\vartheta\le [\hatop{A}^kf](x_0). \] \end{lemma} \begin{proof} We argue by induction on~$k$. By definition of $\hatop A$ we have that $\hatop{A}f \preccurlyeq f$. Together with monotonicity of $A$ and the assumption $[\hatop A f](x_0) = [Af](x_0)+\vartheta$, this gives the base case $k=1$: \[ [A \hatop A f](x_0)+\vartheta \le [Af](x_0)+\vartheta = [\hatop{A} f ](x_0). \] For the inductive step, let us assume that \[ [A \hatop{A}^k f](x_0)+\vartheta \le [\hatop{A}^k f ](x_0). \] Then $[\hatop{A}^{k+1}f](x_0) = [A\hatop{A}^kf](x_0)+\vartheta$, and therefore, using monotonicity and the fact that $\hatop{A}^{k+1} f \preccurlyeq \hatop{A}^{k} f$, we get $$ [A \hatop{A}^{k+1} f](x_0) + \vartheta \le [A \hatop{A}^k f](x_0) + \vartheta = [\hatop{A}^{k+1} f] (x_0). $$ \end{proof} \begin{proposition} \label{prop:auxop} Let $J \subset \mathbb R$ be a closed interval. Let~$A\colon \mathcal F_J\to \mathcal F_J$ be a monotone operator. Let $\vartheta>0$ be an arbitrarily small real number. Assume that for some $n>0$ and $f \in \mathcal F_J$ we have that ${\hatop A}^n f \prec f$. Then $$ A {\hatop A}^{n} f \preccurlyeq {\hatop A}^{n} f.
$$ \end{proposition} \begin{proof} By definition of $\hatop A$, for any function $f \in \mathcal F_J$ we have $\hatop{A}f\preccurlyeq f$. Since $\hatop{A}$ is monotone, we deduce that \begin{equation*} {\hatop A}^n f \preccurlyeq {\hatop A}^{n-1}f \preccurlyeq \cdots \preccurlyeq f. \end{equation*} Since for every $x\in J$ we have $[\hatop{A}^n f](x) < f(x)$, there exists $0\le m(x) \le n-1$ such that the strict inequality $[{\hatop A}^{m(x)+1} f](x)<[{\hatop A}^{m(x)} f](x)$ holds. Therefore \[ [A{\hatop A}^{m(x)} f](x)+\vartheta\le [{\hatop A}^{m(x)} f](x). \] Applying Lemma~\ref{l:min} to the function ${\hatop A}^{m(x)} f$ with $k=n-m(x)$, we get \[ [A{\hatop A}^n f](x)+\vartheta\le [{\hatop A}^n f](x), \] and the result follows. \end{proof} \medskip Our numerical results are based on the following corollaries, which follow immediately from Theorems~\ref{t:certificate-D2} and~\ref{thm:5star}. \begin{corollary} \label{cor:single} If there exists an admissible interval $J_\lambda$ such that the hypothesis of Proposition~\ref{prop:auxop} is satisfied for $A := \mathcal{D}^{(2)}_{\alpha,\mathcal{S}(\lambda)}$ and $f:= \mathds{1}_{J_\lambda}$, then $D_2(\mu_\lambda) \geq \alpha$. \end{corollary} We use the last proposition in order to find a suitable test function $\psi$ for Theorem~\ref{thm:5star}. \begin{corollary} \label{cor:uniform} If there exist a $\Lambda$-admissible interval~$J_\Lambda$ and its partition $\mathcal J$ such that the hypothesis of Proposition~\ref{prop:auxop} is satisfied for $A := {\mathcal D}_{\alpha,\Lambda,\mathcal J}$ and $f := \mathds{1}_{J_\Lambda} $, then $D_2(\mu_\lambda) \geq \alpha$ for all $\lambda \in \Lambda$. \end{corollary} Note that Corollary~\ref{cor:single} can only be applied to rational parameter values $\lambda \in \mathbb Q$, which can be represented in computer memory exactly.
In order to study irrational parameter values, such as Pisot or Salem numbers, we need to apply Corollary~\ref{cor:uniform} to a tiny interval $\Lambda$ with rational endpoints containing the irrational parameter value we would like to study. \subsection[Practical implementation]{Practical implementation: computing lower bounds for $D_2(\mu)$} The following method, based on Corollaries~\ref{cor:single} and~\ref{cor:uniform}, can be used to obtain a lower bound on the correlation dimension of a stationary measure of an iterated function scheme of similarities. \subsubsection{Verifying a conjectured value} \label{sss:verifyalpha} First let us assume that we would like \emph{to check} whether $\alpha$ is a lower bound for the correlation dimension of the stationary measures of an iterated function scheme of similarities $\mathcal{S}(\lambda,\bar c, \bar p)$ for an open set of parameter values $\lambda \in \Lambda$. Then we proceed as follows: \begin{enumerate} \item Fix a $\Lambda$-admissible interval $J_\Lambda$. \item Choose a partition $\mathcal J$ of the interval $J$ consisting of $N$ intervals of the same length. From the point of view of efficiency of the practical implementation, it is better to choose the length of the intervals of the partition to be comparable with $|\Lambda|$. In our computations, we often take $N$ so that $$ \frac12 N |\Lambda| \le |J| \le 2 |\Lambda| N. $$ \item Introduce the operator $A:=\mathcal D_{\alpha,\Lambda,\mathcal J}$. \item Take the piecewise-constant function $\mathds{1}_J$ and $\vartheta>0$ and compute the images ${\hatop A}^n \mathds{1}_J$, which are piecewise-constant functions associated to the partition $\mathcal J$. \item If we find $n_0$ so that ${\hatop A}^{n_0} \mathds{1}_J\prec \mathds{1}_J$, then we conclude that $D_2(\mu_\lambda) \ge \alpha$ for all $\lambda \in \Lambda$.
\end{enumerate} \emr{We can give a simple example to illustrate the method.} \begin{example} \emr{ Let us show that for the Bernoulli convolution measure with $\lambda \in (0.74,0.76)$ we have $D_2(\mu_\lambda) \ge 0.75$. The corresponding iterated function scheme consists of two maps with the probability vector $\bar p$: $$ f_0(x)= \lambda x, \quad f_1(x) = \lambda x+1; \qquad \bar p = \Bigl( \frac12,\frac12\Bigr). $$ Then $\supp \mu_{\lambda} \subseteq [0,4.17]$ and we can choose an admissible interval~$J = [-4.5,4.5]$. We can also consider a uniform partition of $J$ consisting of $120$ intervals. } \emr{ We then choose $\vartheta=10^{-3}$, and compute the images of $\mathds{1}_J$ under $\hatop A$ for $A = \mathcal D_{0.75,\Lambda,\mathcal J}$. It turns out that~$25$ iterations are sufficient; in particular, we have that $\max {\hatop A}^{25} \mathds{1}_J < 0.99$. This is illustrated in Figure~\ref{fig:simple}. } \begin{figure}[h] \centerline{ \includegraphics{simple1.pdf} } \caption{The function ${\hatop A}^{25} \mathds{1}_J\prec\mathds{1}_J$ for $A = \mathcal D_{0.75,\Lambda,\mathcal J}$ and $\vartheta = 10^{-3}$; the support $\supp \mu_{0.75}$ and the admissible interval~$J$ are also shown.} \label{fig:simple} \end{figure} \end{example} \emr{When implementing an iterative method in practice, one common concern is the accumulation of rounding errors.
The following remark explains why in the present case this is not a significant issue.} \begin{remark} \emr{Applying the operator $\widehat A_\vartheta$ to a given piecewise-constant function $\psi$ changes its value on one of the intervals only if the \emph{computed} value $A_{\mathrm{comp}}[\psi]$ is below the actual value of $\psi$ on this interval by at least $\vartheta$; otherwise the value stays unchanged.} \emr{ Assume that the rounding errors never exceed $\frac{1}{2}\vartheta$; this is easily ensured in practice: for instance, we typically chose $\vartheta = 10^{-8}$ while making the calculations in quadruple precision, in other words, with numbers carrying $32$ significant digits. } \emr{ Then, the computed image $\widehat A_{\vartheta,\mathrm{comp}}[\psi]$ is lower-bounded by the true image with half the ``added value'': \[ \widehat A_{\vartheta/2}[\psi] \preccurlyeq \widehat A_{\vartheta,\mathrm{comp}}[\psi]. \] By induction, it is then easy to see that after an \emph{arbitrary} number $n$ of iterations one has \[ \widehat A_{\vartheta/2}^n[\psi] \preccurlyeq \widehat A_{\vartheta, \mathrm{comp}}^{n}[\psi]. \] Thus, if after some number $n$ of iterations the \emph{computations} provide $\widehat A_{\vartheta, \mathrm{comp}}^{n}[\mathds{1}_J] \prec \mathds{1}_J$, where $A=\mathcal{D}^{(2)}_{\alpha,\mathcal{S}(\lambda)}$, one actually gets the desired $\widehat A_{\vartheta/2}^n[\mathds{1}_J] \prec \mathds{1}_J$ and hence the applicability of Corollary~\ref{cor:single}. In our computations, to avoid problems with handling strict inequalities, we have been asking for the inequality \[ (\widehat{\mathcal{D}_{\alpha,\Lambda,\mathcal{J}}})_{\vartheta, \mathrm{comp}}^{n}[\mathds{1}_J] \prec 0.995\cdot \mathds{1}_J.
\] } \end{remark} \emr{Next, we describe what to do if an attempt to certify a conjectured lower bound $\alpha$ is unsuccessful.} \begin{remark} \label{rem:droplam} It is possible that after a large number of iterations $n$, there exists an interval $\mathcal J_k$ of the partition $\mathcal J$ where we have the equality $$ [ {\hatop A }^{n}\mathds{1}_J] |_{\mathcal J_k} = 1. $$ Then we cannot reach a definitive conclusion, as there are three possibilities: \begin{enumerate} \item the number of iterations $n$ is not large enough, \item the intervals of the partition $\mathcal J$ are too long, or the interval $\Lambda$ is too long, \item the number $\alpha$ is not a lower bound, i.e. there exists $\lambda \in \Lambda$ such that $D_2(\mu_\lambda) \le \alpha$. \end{enumerate} In this case we could try to increase the number of iterations, to choose a finer partition, or to drop the conjectured value $\alpha$ to some $\alpha^\prime < \alpha$ and to consider the operator $A :=\mathcal D_{\alpha^\prime,\Lambda,\mathcal J}$. It follows from Proposition~\ref{prop:D21} that, provided our guess for the lower bound was correct, we will be able to justify it using this approach, subject to available computer resources. \end{remark} Our method can be used not only \emph{to verify a suggested lower bound}, but also to find a lower bound for the correlation dimension or to improve an existing lower bound. \subsubsection{Computing a lower bound} \label{sss:compute} Assume that we would like to improve an existing lower bound $d_1 < D_2(\mu_\lambda)$ using no more than $K$ iterations of the operator and piecewise constant functions with no more than $N$ intervals. Additionally assume that there is an upper bound $D_2(\mu_\lambda)<d_2$.
Then we can fix $\varepsilon>0$ and a uniform partition $\mathcal J$, and search for two neighbouring candidate values of $\alpha$: a smaller one satisfying condition~\ref{it:left} and a larger one satisfying condition~\ref{it:right} below, where $A :=\mathcal D_{\alpha,\Lambda,\mathcal J}$: \begin{enumerate} \item\label{it:left} there exists $k<K$: ${\hatop A}^{k} \mathds{1}_J \prec \mathds{1}_J$, \item\label{it:right} for all $k \le K$: ${\hatop A}^{k} \mathds{1}_J \not\prec \mathds{1}_J$. \end{enumerate} One approach to find~$\alpha$ would be to apply the well-known bisection method to the interval $(d_1,d_2)$. However, to obtain good estimates, one has to allow for a large number of iterations before dropping the conjectured lower bound, and this is very time-consuming. In other words, a negative answer is expensive, as we have to examine all the possibilities described in Remark~\ref{rem:droplam}. It is therefore more efficient to use a partition of the interval $(d_1,d_2)$ into $M\!: =\![\sqrt{(d_2\!-\!d_1)\varepsilon^{-1}}]$ intervals of equal length and to test the values $\alpha_k = d_1+k\cdot \frac{d_2-d_1}M $, $k = 1,\ldots,M$, using the method explained in~\S\ref{sss:verifyalpha}. We then want to find a $0 \le k < M$ such that for $A_k :=\mathcal D_{\alpha_k,\Lambda,\mathcal J}$ there exists $n < K$ with the property $$ {\widehat A_{k,\vartheta}}^{n} \mathds{1}_J \prec \mathds{1}_J \qquad \mbox{ and } \qquad {\widehat A_{k+1,\vartheta}}^{K} \mathds{1}_J \not\prec \mathds{1}_J. $$ Then we repeat the procedure again, dividing the interval $\left(\alpha_k, \alpha_{k+1}\right)$ into $M$ intervals of length $\varepsilon$. This way we need to apply all $K$ iterations only twice (to confirm the second condition~\ref{it:right}) to find the desired value~$\alpha$. Finally, we note that in order to compute a good lower bound on a large interval of parameter values, we consider \emph{a cover} of this interval by a large number of small overlapping intervals and compute a lower bound on each of them.
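The inner verification step invoked by this search is easy to sketch. The following Python fragment is our own illustration, not the authors' implementation: it uses plain floating-point arithmetic (a rigorous version would round outward, as in the remark on rounding errors above) and runs on the documented Bernoulli example $\Lambda=[0.74,0.76]$, $\alpha=0.75$, $J=[-4.5,4.5]$, $N=120$, $\vartheta=10^{-3}$. The supremum over a cell is overestimated by taking the maximum over all partition cells meeting the image interval, which only makes the certificate harder to obtain, hence safe.

```python
import math

# Bernoulli convolution example from the text: maps x -> lam*x + c, c in {0, 1},
# equal probabilities; d = c_j - c_i takes values -1, 0, +1 with weights 1/4, 1/2, 1/4.
ALPHA, LAM_MIN, LAM_MAX = 0.75, 0.74, 0.76
J_LO, J_HI, N_CELLS = -4.5, 4.5, 120
THETA = 1e-3
H = (J_HI - J_LO) / N_CELLS                       # cell width
SHIFTS = [(-1.0, 0.25), (0.0, 0.5), (1.0, 0.25)]  # (d, summed weight p_i p_j)
PREF = LAM_MIN ** (-ALPHA)                        # prefactor (inf Lambda)^{-alpha}

def cell_sup(psi, lo, hi):
    """Upper bound for sup of psi (extended by zero outside J) over [lo, hi]."""
    i0 = max(0, int(math.floor((lo - J_LO) / H)))
    i1 = min(N_CELLS - 1, int(math.floor((hi - J_LO) / H)))
    return max(psi[i0:i1 + 1]) if i0 <= i1 else 0.0

def diffusion(psi):
    """One application of the operator D_{alpha,Lambda,J} of eq. (D-Lambda)."""
    out = []
    for k in range(N_CELLS):
        a, b = J_LO + k * H, J_LO + (k + 1) * H
        total = 0.0
        for d, w in SHIFTS:
            # range of (x + d)/lambda over x in [a, b] and lambda in Lambda
            ends = [(a + d) / LAM_MIN, (a + d) / LAM_MAX,
                    (b + d) / LAM_MIN, (b + d) / LAM_MAX]
            total += w * cell_sup(psi, min(ends), max(ends))
        out.append(PREF * total)
    return out

psi = [1.0] * N_CELLS                 # start from the indicator of J
for _ in range(300):                  # iterate hat-A: min(A psi + theta, psi)
    psi = [min(v + THETA, p) for v, p in zip(diffusion(psi), psi)]

print(max(psi))
```

If the printed maximum is strictly below $1$, the hypothesis of Proposition~\ref{prop:auxop} is verified for $f=\mathds{1}_J$, and Theorem~\ref{thm:5star} then yields $D_2(\mu_\lambda)\ge 0.75$ for every $\lambda\in\Lambda$.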
\begin{remark} We would like to emphasize that the value $\alpha+\varepsilon$ is not an upper bound for the correlation dimension, as it depends on the number~$K$ of iterations allowed and the number~$N$ of intervals for the space of piecewise constant functions, and might increase (together with $\alpha$) when we increase those values. \end{remark} \begin{definition} We call~$\varepsilon$ the \emph{refinement} parameter. \end{definition} In \emr{Subsection~\ref{sss:mainproof}} we give details of the application of our method to computing lower bounds for the correlation dimension of Bernoulli convolutions. \emr{We conclude this subsection by presenting the following alternative approach to the use of the diffusion operator, which was suggested to us by an anonymous referee. This suggestion is particularly helpful, and we are glad to be able to present it here:} \begin{remark} \emr{For a given sufficiently small interval $\Lambda$ of values of $\lambda$ one can: \begin{itemize} \item Take the initial function $\psi_0=\mathds{1}_J$, a sufficiently small $\vartheta$, an initial lower bound $\alpha_0$, and a threshold $t\in (0,1)$; \item Apply the operator ${\hatop A}$, where $A=\mathcal D_{\alpha_0,\Lambda,\mathcal J}$, until the maximum descends below the chosen threshold. Let $k$ be the smallest number such that \begin{equation} \label{eq:psik} \psi:={\hatop A}^{k}\mathds{1}_J \prec t \cdot \mathds{1}_J. \end{equation} In particular, as $t<1$, this implies that ${\hatop A}[\psi]\preccurlyeq\psi$; \item Then, one gets a lower bound $\alpha$ for the correlation dimension, choosing its value to be the maximal one for which $\mathcal D_{\alpha,\Lambda,\mathcal J}[\psi]\preccurlyeq \psi$, by setting \begin{equation}\label{eq:new-alpha} \alpha:=\alpha_0+ \log_{\lambda_{\min}} \max_J \frac{\mathcal D_{\alpha_0,\Lambda,\mathcal J}[\psi]}{\psi}= \log_{\lambda_{\min}} \max_J \frac{\mathcal D_{0,\Lambda,\mathcal J}[\psi]}{\psi}, \end{equation} where $\lambda_{\min}=\min \Lambda$.
Observe that since one can actually use \emph{any} function $\psi$ in order to look for the lower bound $\alpha$ (then applying Theorem~\ref{thm:5star}), rounding errors during the iterations are not much of an issue. Indeed, there is only \emph{one} iteration and one division applied in~\eqref{eq:new-alpha}, with the denominator bounded from below by $\vartheta$ due to the construction of~$\psi$, and, compared to the precision of the calculations, $\vartheta$ is not a small number at all. \end{itemize} } \end{remark} \emr{This method works quite well in practice. For instance, taking $\vartheta=10^{-7}$ and $\alpha_0=0.82$ (the lower bound of Hare and Sidorov), taking $\Lambda$ to be a $0.5\cdot 10^{-5}$-neighborhood of some $\lambda$ and partitioning the interval $J=[-r,r]$ into $4\cdot 10^4$ intervals, where $r=\frac{1.1}{1-\lambda_{\max}}$, $\lambda_{\max}=\max \Lambda$, and choosing the threshold $t=\frac{1}{20}$ (which is quite small, so that $k$ in~\eqref{eq:psik} is quite large), one gets the estimates \begin{itemize} \item $\alpha = 0.9923757365$ for the Fibonacci value $\lambda = 0.6180339887$ (compare with the lower bound $0.992395833333$ in Table~\ref{tab:multinacci}); \item $\alpha = 0.9642020738$ for the tribonacci value $\lambda = 0.54368901$ (compare with the lower bound $0.964214555664$ in Table~\ref{tab:multinacci}); \item $\alpha = 0.999641567$ for one of the Salem numbers $\lambda = 0.71363917$ (compare with the lower bound $0.999687500$ in the table in Section~\ref{ap:salem}). \end{itemize} } \subsubsection{Proof of Theorem~\ref{t:main}} \label{sss:mainproof} Recall that $\dim_H \mu_\lambda \ge \dim_H \mu_{\lambda^2}$, and therefore it is sufficient to compute a lower bound for $\lambda \in [0.5,0.8]$. To obtain a uniform lower bound on the correlation dimension $D_2(\mu_\lambda)$ for Bernoulli convolution measures $\mu_\lambda$ on the entire interval of parameter values $[0.5,0.8]$, we proceed in two steps.
First, we consider a cover of the interval $[0.5,0.8]$ by $100$ overlapping intervals of the same size. We then apply the method explained in~\S\ref{sss:compute} with $N = 7 \cdot 10^6$ partition intervals for the test function, and set the maximum for the number of iterations of the diffusion operator to $K=150$. We choose a lower bound $d_1 = 0.82$, an upper bound $d_2 = 1$ and set the refinement parameter to $\varepsilon = 0.01$. The computation takes about 10 minutes for each interval and can be done in parallel; the result is presented in Table~\ref{tab:lowerbound}. \begin{table}[h] \begin{center} \begin{tabular}{|ccc||ccc|} \hline $\Lambda$ & & $\alpha$ & $\Lambda$ & & $\alpha$ \\ \hline $[0.500 ,0.515]$&& $0.96612$& $[0.566 ,0.569]$&& $0.96612$\\ $[0.515 ,0.518]$&& $0.95402$& $[0.569 ,0.614]$&& $0.97822$\\ $[0.518 ,0.542]$&& $0.96370$& $[0.614 ,0.617]$&& $0.96612$\\ $[0.542 ,0.545]$&& $0.95402$& $[0.617 ,0.743]$&& $0.97580$\\ $[0.545 ,0.554]$&& $0.96612$& $[0.743 ,0.800]$&& $0.96612$\\ $[0.554 ,0.566]$&& $0.97580$& && \\ \hline \end{tabular} \end{center} \caption{Uniform lower bounds for the correlation dimension of Bernoulli convolution measures, after the first step.} \label{tab:lowerbound} \end{table} Afterwards, we use the bounds we computed as an initial guess for the corresponding parameters $\lambda$ and improve them by applying the same method again. This time, based on the first estimates, we take uniform covers of $[0.499,0.575]$ and $[0.572,0.8]$ by $5000$ intervals each. We then use $N = 10^7$ intervals for the space of piecewise-constant functions; set the maximum $K = 1000$ for the number of iterations for the diffusion operator; and choose $\varepsilon = 10^{-4}$ as the refinement parameter. This second computation takes about two weeks with 32 threads running in parallel. The result is presented in Figure~\ref{fig:plotBC}. 
In support of the conjecture that dimension drops occur at Pisot parameter values, we identified minimal polynomials of algebraic numbers which seem to correspond to the bigger drops and verified that they are Pisot values, i.e. all their Galois conjugates lie inside the unit circle. \begin{remark} It follows from the \emph{overlaps conjecture} of Simon~\cite{Si96}, which was proved by Hochman~\cite{hochman} for algebraic parameter values, that the dimension drop occurs only for the roots of polynomials with coefficients~$\{-1,0,1\}$. We see that some of the polynomials indicated in the plot have~$\pm2$ among their coefficients. This does not contradict the result of Hochman, because the polynomials we give are the minimal polynomials. Each of the polynomials with coefficients $\pm2$ becomes a polynomial with coefficients $\{-1, 0, 1\}$ after multiplying by an appropriate factor. For instance, $x^5-x^3-2x^2-2x-1$ after multiplying by $(x-1)$ becomes $x^6-x^5-x^4-x^3+x+1$. \end{remark} \begin{figure} \begin{subfigure}{150mm} \includegraphics{genpic.pdf} \caption{Plot of lower bounds for the correlation dimension of Bernoulli convolution measures for $0.5\le \lambda \le 0.8$, with more detailed plots in~\ref{fig:bc1},~\ref{fig:bc2}, \ref{fig:bc3},~\ref{fig:bc4},~\ref{fig:bc5} below.} \end{subfigure} \begin{subfigure}{150mm} \includegraphics{BClam4.pdf} \caption{$0.499 < \lambda < 0.526$ } \label{fig:bc1} \end{subfigure} \begin{subfigure}{150mm} \includegraphics{BClam3.pdf} \caption{$0.523 < \lambda < 0.548$ } \label{fig:bc2} \end{subfigure} \caption{The plot of the piecewise constant function $G_{2}(\lambda)$, which gives lower bounds on the correlation dimension $D_2(\mu_\lambda)$ of Bernoulli convolutions.
} \label{fig:plotBC} \end{figure} \begin{figure} \ContinuedFloat \begin{subfigure}{150mm} \includegraphics{BClam2.pdf} \caption{$0.548 < \lambda < 0.58$ } \label{fig:bc3} \end{subfigure} \begin{subfigure}{150mm} \includegraphics{BClam1.pdf} \caption{$0.58 < \lambda < 0.625$ } \label{fig:bc4} \end{subfigure} \begin{subfigure}{150mm} \includegraphics{BClam0.pdf} \caption{$0.625 < \lambda < 0.81$ } \label{fig:bc5} \end{subfigure} \caption{(Continued). The plot of the piecewise constant function $G_{2}(\lambda)$, which gives lower bounds on the correlation dimension $D_2(\mu_\lambda)$ of Bernoulli convolutions. The polynomials indicated are the minimal polynomials of the corresponding values $\lambda$, which are Pisot. } \end{figure} Based on the graph of the lower bound function~$G_2(\lambda)$ shown in Figure~\ref{fig:plotBC} we \emph{conjecture} that for the reciprocal Fibonacci ($\lambda = 2/(1+\sqrt 5)$) and ``tribonacci'' ($\lambda = \beta^{-1}$, where $\beta$ is the largest root of $x^3-x^2-x-1$) parameter values there exists a sequence $\lambda_n^\prime$ of Pisot numbers with $\lambda_n^\prime \to \lambda$ as $n\to\infty$ such that $D_2(\mu_{\lambda'_n})<1-\varepsilon$ for some $\varepsilon>0$ and, moreover, $\lim_{n\to \infty} D_2(\mu_{\lambda'_n}) < 1$. \subsubsection{Proof of Theorem~\ref{thm:dim013} } In contrast to the case of Bernoulli convolutions, no a priori estimates on the dimension drop were known. First, we consider a cover of the interval of parameter values $(0.249,0.334)$ by $100$ overlapping intervals $\Lambda_k$, $k=1,\ldots,100$. We set the limit $K=200$ for the number of iterations and $N=10^5$ for the number of intervals for piecewise constant functions. We then choose $\varepsilon = 0.01$ as the refinement parameter and $d_1=0.5$, $d_2 = -\frac{\log 3}{\log\inf\Lambda_k}$ as lower and upper bounds, respectively. Applying the algorithm described in Section~\ref{sss:compute} we obtain rough estimates.
The result is presented in Table~\ref{tab:013step1}. Afterwards, we improve this estimate. We choose the refinement parameter $\varepsilon = 10^{-4}$, set $N = 10^7$ to be the number of intervals for the step function, set $K = 1000$ for the maximal number of iterations, and take the previously computed lower bounds as initial guesses. The lower bounds for $\lambda \in (0.333,0.401)$ are computed by applying the same steps with $d_1 = 0$ and $d_2 = 1$. \begin{table} \centering \begin{tabular} {|cc||cc|} \hline $\Lambda$ & $\alpha$ & $\Lambda$ & $\alpha$ \\ \hline $[0.25000,0.26501]$& $0.77082$ & $[0.32839, 0.33173]$ &$0.85657$ \\ $[0.26501,0.26918]$& $0.79581$ & $[0.33173, 0.33434]$ &$0.79659$ \\ $[0.26918,0.28169]$& $0.80051$ & $[0.33434, 0.33702]$ &$0.87375$ \\ $[0.28169,0.28669]$& $0.83549$ & $[0.33702, 0.34372]$ &$0.89479$ \\ $[0.28669,0.29086]$& $0.85245$ & $[0.34372, 0.34908]$ &$0.91583$ \\ $[0.29086,0.30587]$& $0.83663$ & $[0.34908, 0.35712]$ &$0.94388$ \\ $[0.30587,0.30838]$& $0.87025$ & $[0.35712, 0.36717]$ &$0.91583$ \\ $[0.30838,0.31338]$& $0.86111$ & $[0.36717, 0.37722]$ &$0.95440$ \\ $[0.31338,0.32089]$& $0.88076$ & $[0.37722, 0.38526]$ &$0.95791$ \\ $[0.32089,0.32839]$& $0.86694$ & $[0.38526, 0.40000]$ &$0.98246$ \\ \hline \end{tabular} \caption{ Uniform lower bounds for the correlation dimension of the stationary measure in the $\{0,1,3\}$-problem, after the first step. } \label{tab:013step1} \end{table} The result is presented in Figure~\ref{fig:plot013}. We managed to identify minimal polynomials of algebraic numbers which seem to correspond to some of the biggest dimension drops and verified that the corresponding parameter values are reciprocals of hyperbolic numbers. In the case of the $\{0,1,3\}$-system, the overlaps conjecture implies that the dimension drop \emph{for algebraic parameter values} takes place only for the roots of polynomials with coefficients $\{0,\pm1,\pm2,\pm 3\}$, and the polynomials we have identified satisfy this property.
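The root-location checks behind these verifications are elementary to reproduce. The sketch below is our own illustration (not the authors' tooling): it locates all roots of a monic integer polynomial with the Durand-Kerner iteration in pure Python and tests the Pisot property, i.e. exactly one root outside the unit circle with all Galois conjugates strictly inside; relaxing the second condition to ``no root of modulus one'' gives the corresponding hyperbolicity test. As examples we take the golden-mean polynomial and the quintic $x^5-x^3-2x^2-2x-1$ quoted in the remark on Bernoulli convolutions above.

```python
def durand_kerner(coeffs, iters=500):
    """All complex roots of a monic polynomial, coefficients in descending
    powers, e.g. [1, -1, -1] for x^2 - x - 1 (Durand-Kerner iteration)."""
    n = len(coeffs) - 1
    def p(z):
        acc = 0j
        for c in coeffs:
            acc = acc * z + c          # Horner evaluation
        return acc
    zs = [(0.4 + 0.9j) ** k for k in range(1, n + 1)]   # standard initial guesses
    for _ in range(iters):
        new = []
        for i, z in enumerate(zs):
            denom = 1 + 0j
            for j, w in enumerate(zs):
                if j != i:
                    denom *= z - w
            new.append(z - p(z) / denom)
        zs = new
    return zs

def is_pisot(coeffs, eps=1e-8):
    """One root of modulus > 1, all remaining roots strictly inside the
    unit circle (the defining property of a Pisot number)."""
    mods = sorted(abs(z) for z in durand_kerner(coeffs))
    return mods[-1] > 1 + eps and all(m < 1 - eps for m in mods[:-1])

print(is_pisot([1, -1, -1]))               # x^2 - x - 1 (golden mean)
print(is_pisot([1, 0, -1, -2, -2, -1]))    # x^5 - x^3 - 2x^2 - 2x - 1
```

Both checks return \texttt{True}, matching the claims about the polynomials indicated in the plots.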
\begin{figure} \begin{subfigure}{150mm} \includegraphics{genpic013.pdf} \caption{Plot of lower bounds for the correlation dimension of the stationary measures of the $\{0,1,3\}$-system for $0.249 \le \lambda \le 0.401$, with more detailed plots in~\ref{fig:013-p1},~\ref{fig:013-p2}, \ref{fig:013-p3},~\ref{fig:013-p4},~\ref{fig:013-p5} below.} \end{subfigure} \begin{subfigure}{150mm} \includegraphics{013p0.pdf} \caption{$0.25<\lambda<0.283$} \label{fig:013-p1} \end{subfigure} \begin{subfigure}{150mm} \includegraphics{013p1.pdf} \caption{$0.2825<\lambda<0.3128 $} \label{fig:013-p2} \end{subfigure} \caption{The plot of the piecewise constant function $G^{0,1,3}_{2}(\lambda)$, which gives lower bounds on the correlation dimension $D_2(\mu^{0,1,3}_\lambda)$ of the stationary measure for the $\{0,1,3\}$-system. The polynomials indicated are the minimal polynomials of the corresponding values $\lambda$, which are hyperbolic.} \label{fig:plot013} \end{figure} \begin{figure} \ContinuedFloat \begin{subfigure}{150mm} \includegraphics[height=67mm]{013p2.pdf} \caption{$0.312<\lambda<0.345 $} \label{fig:013-p3} \end{subfigure} \begin{subfigure}{150mm} \includegraphics[height=67mm]{013p3.pdf} \caption{$0.3445<\lambda<0.3715 $} \label{fig:013-p4} \end{subfigure} \begin{subfigure}{150mm} \includegraphics[height=67mm]{013p4.pdf} \caption{$0.3710<\lambda<0.4 $} \label{fig:013-p5} \end{subfigure} \caption{(Continued). The plot of the piecewise constant function $G^{0,1,3}_{2}(\lambda)$, which gives lower bounds on the correlation dimension $D_2(\mu^{0,1,3}_\lambda)$ of the stationary measure for the $\{0,1,3\}$-system. } \end{figure} \subsection{Selected algebraic parameter values} \label{s:algebraicLambda} So far we have applied our method to compute uniform lower bounds on the dimension of the stationary measures. As we already highlighted at the end of \S\ref{ss:efbounds-1}, in order to get a lower bound on $D_2(\mu_\lambda)$ for an algebraic~$\lambda$, we need to consider a small interval containing the value.
In this section, we compute a lower bound on the correlation, and hence Hausdorff, dimension of $\mu_\lambda$ for selected algebraic values and compare our results with the existing data. For some specific values these results are not as accurate as existing estimates. For other parameter values, e.g. Salem numbers, we give a new improved lower bound. We begin by recalling some known results. In~\cite{G63} Garsia introduced a notion of entropy of an algebraic number~$\lambda$ (also see~\cite{HKPS19} for an alternative definition) $$ h(\lambda) = \lim_{N \to +\infty} -\frac{1}{N\, 2^N} \sum_{i_1, \ldots, i_N \in \{0,1\}} \log \left( \frac{1}{2^N}\hbox{\rm Card}\left\{ j_1, \ldots, j_N \in \{0,1\} \hbox{ : } \sum_{k=1}^N(i_k-j_k)\lambda^k = 0 \right\} \right). $$ Garsia entropy was first used to estimate the Hausdorff dimension of the Bernoulli convolution corresponding to the golden mean~$\lambda = \frac{2}{1+\sqrt5}$. The method has been subsequently extended in~\cite{GKT02} to the roots of the polynomials $$ P_n(x) = x^n - x^{n-1} - \ldots - x - 1. $$ In Table~\ref{tab:multinacci} we give a comparison of the lower bounds we have computed using the diffusion operator and the results of Grabner et al.~\cite{GKT02}. This illustrates that our bounds are quite close to the known values.
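For the golden-mean parameter the finite-level entropies in Garsia's formula can be evaluated exactly, since the collisions $\sum_k(i_k-j_k)\lambda^k=0$ can be detected in integer arithmetic in $\mathbb Z[\lambda]$ using $\lambda^2=1-\lambda$. The Python sketch below (our illustration, not part of the paper's computations) evaluates $H_N/N$ and the corresponding dimension estimate $H_N/(N\log(1/\lambda))$, which converges from above to $\dim_H(\mu_\lambda)\approx 0.99571$:

```python
import math
from collections import Counter
from itertools import product

LAMBDA = (math.sqrt(5) - 1) / 2      # golden-mean parameter, lambda^2 = 1 - lambda

def garsia_estimate(N):
    """H_N / (N log(1/lambda)): converges from above to dim_H(mu_lambda)."""
    # lambda^k = a_k + b_k*lambda, with (a_{k+1}, b_{k+1}) = (b_k, a_k - b_k)
    powers, a, b = [], 0, 1
    for _ in range(N):
        powers.append((a, b))
        a, b = b, a - b
    # distribution of sum_k i_k lambda^k, each value represented exactly
    # by the integer pair (sum i_k a_k, sum i_k b_k)
    counts = Counter()
    for bits in product((0, 1), repeat=N):
        counts[(sum(i * p[0] for i, p in zip(bits, powers)),
                sum(i * p[1] for i, p in zip(bits, powers)))] += 1
    total = 2 ** N
    H = -sum(c / total * math.log(c / total) for c in counts.values())
    return H / (N * math.log(1 / LAMBDA))

print(garsia_estimate(14))
```

By subadditivity of the entropies, $H_{2N}/2N \le H_N/N$, so successive doublings of $N$ give improving upper bounds on the dimension.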
\begin{table} \centering \begin{tabular}{|ccc||ccc|} \hline $n$ & $\dim_H(\mu_\lambda)$ & $\alpha$ & $n$ & $\dim_H(\mu_\lambda)$ & $\alpha$ \\ $2$ & $ 0.995713126685555$ & $ 0.992395833333 $ & $6$ & $ 0.996032591584967$ & $ 0.990673828125 $ \\ $3$ & $ 0.980409319534731$ & $ 0.964214555664 $ & $7$ & $ 0.997937445507094$ & $ 0.994959490741 $ \\ $4$ & $ 0.986926474333800$ & $ 0.973324567994 $ & $8$ & $ 0.998944915449832$ & $ 0.997343750000 $ \\ $5$ & $ 0.992585300274171$ & $ 0.983559570313 $ & $9$ & $ 0.999465368055570$ & $ 0.998640046296 $ \\ \hline \end{tabular} \caption{Comparison of the lower bound for the correlation dimension $\alpha < D_2(\mu_\lambda)$ computed using the diffusion operator and the Hausdorff dimension computed in~\cite[\S4]{GKT02} for multinacci parameter values.} \label{tab:multinacci} \end{table} The connection between $h(\lambda)$ and $\dim_H(\mu_\lambda)$ is given by a result of Hochman~\cite{hochman}: \begin{equation} \label{eq:entropy} \dim_H(\mu_\lambda) = \min\left\{-\frac{h(\lambda)}{\log\lambda},1\right\}. \end{equation} However, despite~\eqref{eq:entropy} being exact, the value $h(\lambda)$ is difficult to estimate for all but a small number of algebraic examples; see~\cite{AFKP} (also~\cite{lalley}), which gave algorithms to compute the entropy based on Lyapunov exponents of random matrix products. For several explicit (non-Pisot) examples they showed that $\frac{h(\lambda)}{\log(\lambda^{-1})} > 1$; together with~\eqref{eq:entropy} this implies $\dim_H(\mu_\lambda) =1$. On the other hand, Breuillard and Varj\'u~\cite{BV20} gave an estimate on $h(\lambda)$ in terms of the Mahler measure~$M_\lambda$: \begin{equation} \label{eq:BV20} c \cdot \min\left\{1, \log M_\lambda\right\} \le h(\lambda) \le \min\left\{1, \log M_\lambda\right\}. \end{equation} Non-rigorous numerical calculations suggest that one can take $c=0.44$. The upper bound in~\eqref{eq:BV20} is often strict.
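As a numerical illustration of~\eqref{eq:entropy} and~\eqref{eq:BV20} (a sketch of ours, using the non-rigorous constant $c=0.44$ mentioned above), take the tribonacci parameter $\lambda = \beta^{-1}$, where $\beta$ is the Pisot root of $x^3 - x^2 - x - 1$; since the Mahler measure is invariant under reversing a polynomial, $M_\lambda = \beta$:

```python
import math

def bisect_root(f, lo, hi, tol=1e-12):
    """Locate a root of f in [lo, hi] by bisection, assuming f(lo) < 0 < f(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Pisot root of x^3 - x^2 - x - 1 (the tribonacci number).
beta = bisect_root(lambda x: x**3 - x**2 - x - 1, 1.0, 2.0)
lam = 1.0 / beta                 # the contraction ratio, in (1/2, 1)

# Mahler measure of lambda: the minimal polynomial of lambda is the
# reversal of that of beta, so M_lambda = beta.
log_M = math.log(beta)

# Breuillard--Varju: c*min(1, log M) <= h(lambda) <= min(1, log M),
# with the non-rigorous constant c = 0.44.
h_lo = 0.44 * min(1.0, log_M)
h_hi = min(1.0, log_M)

# Hochman: dim_H(mu_lambda) = min(h(lambda)/log(1/lambda), 1), hence
dim_lo = min(h_lo / math.log(1.0 / lam), 1.0)
```

The resulting bound $\dim_H(\mu_\lambda) \ge 0.44$ is far from sharp here: Table~\ref{tab:multinacci} (row $n=3$) gives the much better lower bound $0.9642$, with Hausdorff dimension $0.9804$.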
In particular, it is known that $h(\lambda) < \log M_\lambda$, provided~$\lambda$ has no Galois conjugates on the unit circle~\cite{BV20}. The dimension of the Bernoulli convolution measure for certain hyperbolic parameter values~$\lambda$ can be computed explicitly, too. In a recent work~\cite{HKPS19} Hare et al. considered hyperbolic algebraic numbers of degree~$5$. For a number of them they showed that the stationary measure has full Hausdorff dimension~\cite[Tables 5.1, 5.2]{HKPS19}. We present our lower bound for the correlation dimension for comparison in Table~\ref{t:old}, which shows that our lower bounds are accurate to~$3$ decimal places. \begin{table}[h] \begin{center} \begin{tabular}{|cccl|} \hline $\lambda $ & $\alpha$ & $ \beta=\lambda^{-1} $ & \qquad \quad polynomial \\ $ 0.862442360254 $ & $ 0.999609375000 $&$1.159497777573$ & $x^5 + x^4 - x^3 - x^2 - 1$ \\ $ 0.827590407756 $ & $ 0.999687500000 $&$1.208327199818$ & $x^5 - x^4 + x^3 - x - 1$ \\ $ 0.874449227129 $ & $ 0.999609375000 $&$1.143576972768$ & $x^5 + x^3 - x^2 - x - 1$ \\ $ 0.710434255787 $ & $ 0.999843750000 $&$1.407589783086$ & $x^5 - x^4 + x^3 - x^2 - x - 1$ \\ $ 0.779544663821 $ & $ 0.999765625000 $&$1.282800135015$ & $x^5 - x^3 - x^2 + x - 1$ \\ $ 0.791906429308 $ & $ 0.999765625000 $&$1.262775452996$ & $x^5 - x^4 + x^2 - x - 1$ \\ $ 0.786151377757 $ & $ 0.999765625000 $&$1.272019649514$ & $x^4 - x^2 - 1$ \\ $ 0.699737022113 $ & $ 0.999843750000 $&$1.429108319838$ & $x^5 - x^3 - x^2 - 1$ \\ $ 0.779544663821 $ & $ 0.999765625000 $&$1.282800135015$ & $x^5 - x^3 - x^2 + x - 1$ \\ $ 0.800094994405 $ & $ 0.999765625000 $&$1.249851588864$ & $x^5 - x^4 + x^3 - x^2 - 1$ \\ $ 0.876611867657 $ & $ 0.999609375000 $&$1.140755717433$ & $x^5 + x^4 - x^3 - x - 1$ \\ $ 0.655195524260 $ & $ 0.999843750000 $&$1.526261952307$ & $x^5 - x^4 - x^2 - x + 1$ \\ $ 0.848374895732 $ & $ 0.999687500000 $&$1.178724176105$ & $x^4 + x^3 - x^2 - x - 1$ \\ $ 0.819172513396 $ & $ 0.999687500000 $&$1.220744084605$ & $x^4 - x
- 1$ \\ $ 0.774804113215 $ & $ 0.999765625000 $&$1.290648801346$ & $x^4 - x^3 + x^2 - x - 1$ \\ $ 0.730440478359 $ & $ 0.999843750000 $&$1.369036943635$ & $x^5 - x^3 - x^2 - x + 1$ \\ $ 0.833363173425 $ & $ 0.999687500000 $&$1.199957031806$ & $x^5 - x^3 + x^2 - x - 1$ \\ \hline \end{tabular} \end{center} \caption{Lower bound $\alpha < D_2(\mu)$ for the correlation dimension for selected algebraic numbers for which it is known~\cite{HKPS19} that $\dim_H(\mu_\lambda)=1$, computed using~$5\cdot10^6$ partition intervals and~$500$ iterations of the diffusion operator with the refinement parameter $\varepsilon=10^{-4}$. The value of the root~$\beta$ is given to simplify the comparison with~\cite{HKPS19}.} \label{t:old} \end{table} On the other hand, there are a number of algebraic parameter values to which the method presented in~\cite{HKPS19} does not apply; they are listed in~\cite[Table 5.3]{HKPS19}. For these values we give a new lower bound in Table~\ref{t:new}. \begin{table}[h] \begin{center} \begin{tabular}{|cccl|} \hline $\lambda $ & $\alpha$ & $ \beta=\lambda^{-1} $ & \qquad \quad polynomial \\ $0.593423522613 $ & $ 0.998714285714 $ & $ 1.685137110165 $ & $ z^5-z^4-z^2-z-1 $\\ $0.595089298038 $ & $ 0.999000000000 $ & $ 1.680420070225 $ & $ z^5-z^4-z^3-z+1 $\\ $0.557910446633 $ & $ 0.997857142857 $ & $ 1.792402357824 $ & $ z^5-z^4-z^3-z^2+z-1 $ \\ $0.712452611946 $ & $ 0.999800000000 $ & $ 1.403602124874 $ & $ z^5-z^4-z^2+z-1 $ \\ $0.645200388386 $ & $ 0.999800000000 $ & $ 1.549906072594 $ & $ z^5-z^4-z^3+z-1 $ \\ $0.667960707496 $ & $ 0.999800000000 $ & $ 1.497094048762 $ & $ z^5-z^4-z-1 $ \\ $0.808730600479 $ & $ 0.999600000000 $ & $ 1.236505703391 $ & $ z^5-z^3-1 $ \\ $0.837619774827 $ & $ 0.999722222222 $ & $ 1.193859111321 $ & $ z^5-z^2-1 $ \\ $0.856674883855 $ & $ 0.999652777778 $ & $ 1.167303978261 $ & $ z^5-z-1 $\\ $0.889891245776 $ & $ 0.999513888889 $ & $ 1.123732821001 $ & $ z^5+z^4-z^2-z-1 $ \\ \hline \end{tabular} \end{center} \caption{Lower bound
$\alpha<D_2(\mu)$ for the correlation dimension for selected algebraic numbers for which there are no previous lower bounds, computed using~$5\cdot10^6$ partition intervals and~$500$ iterations of the diffusion operator and the refinement parameter $\varepsilon =10^{-4}$. The value of the root~$\beta$ is given to simplify the comparison with~\cite{HKPS19}. } \label{t:new} \end{table} \subsubsection{Estimates for Salem numbers: Proof of Theorem~\ref{thm:salemnum}} In a recent work, Breuillard and Varj\'u state an open problem~\cite[Problem 3]{BV20}, asking whether it is true that $h(\lambda) = \log M_\lambda$ for all Salem parameter values $\lambda \in \left(\frac12,1\right)$. This equality would imply that $\dim_H(\mu_\lambda)=1$ for Salem parameter values. We apply the method described in~\S\ref{sss:compute} to $99$ Salem numbers of degree no more than~$10$ and to~$47$ small Salem numbers. Our computational set-up was as follows. First, we compute each Salem number $s_k$ with an accuracy of $10^{-32}$ and consider a neighbourhood of radius $\delta=10^{-8}$, i.e. $\Lambda_k = B_\delta(s_k)$. Then, based on the existing results, we choose $d_1 = 0.98$ and $d_2 = 1$, as conjectured lower and upper bounds. The number of intervals for piecewise constant functions is $N=6\cdot10^5$, and the allowed number of iterations for the diffusion operator is $K=300$. We also set the refinement parameter $\varepsilon = 10^{-4}$. The detailed results are presented in Appendices \S\ref{ap:salem} and~\S\ref{ap:smallsalem}. \section[Proof of Theorem 1.8]{Asymptotic bounds: proof of Theorem~\ref{thm:near1} } \label{s:abounds} We have provided a uniform lower bound on the correlation dimension $D_2(\mu_\lambda)$ of Bernoulli convolution measures $\mu_\lambda$. Now we will give an asymptotic lower bound for $D_2(\mu_\lambda)$ in a neighbourhood of~$1$ using the diffusion operator approach.
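Before turning to the proof, here is a quick numerical sanity check (our own illustration) of the certificate it constructs: for $\lambda = 1-\varepsilon$ and $\alpha = 1 - c\varepsilon$ with $c > \frac32$, the Gaussian $f_\varepsilon(x) = \exp(-\delta(1-\varepsilon)^2x^2)$, $\delta = 2\varepsilon - \varepsilon^2$, satisfies $\mathcal{D}^{(2)}_{\alpha,\mathcal{S}} f_\varepsilon \prec f_\varepsilon$ pointwise on a grid:

```python
import math

eps = 0.01
c = 1.6                               # any c > 3/2 should work
alpha = 1.0 - c * eps
lam = 1.0 - eps
delta = 2 * eps - eps**2

def f(x):
    # certificate function from the proof of the near-1 asymptotics
    return math.exp(-delta * (1 - eps)**2 * x**2)

def D_f(x):
    # symmetric diffusion operator for the Bernoulli scheme with maps
    # x -> lam*x - 1, x -> lam*x + 1 taken with equal probabilities
    return lam**(-alpha) * 0.25 * (
        f((x + 1) / lam) + 2 * f(x / lam) + f((x - 1) / lam)
    )

grid = [k / 100.0 for k in range(-2000, 2001)]    # x in [-20, 20]
max_ratio = max(D_f(x) / f(x) for x in grid)      # should stay below 1
```

The margin at the binding point $x=0$ is of order $(c-\frac32)\varepsilon^2$, which is why the condition $c > \frac32$ is needed.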
\begin{proof}[of Theorem~\ref{thm:near1}] Given a small $\varepsilon>0$ let us set $\lambda = 1 - \varepsilon$. Then the symmetric diffusion operator~\eqref{eq:dif-twoway} takes the form $$ [\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}\psi](x) = (1-\varepsilon)^{-\alpha}\frac14 \left(\psi\left(\frac{x+1}{1-\varepsilon}\right) + 2 \psi\left(\frac{x}{1-\varepsilon}\right) + \psi\left(\frac{x-1}{1-\varepsilon}\right) \right) $$ In order to prove the result, it is sufficient to find a function $f_\varepsilon$ such that for any $c>\frac32$ and for any $\alpha < 1-c\varepsilon$ we have that \begin{equation} \label{eq:dsfe} \mathcal{D}^{(2)}_{\alpha,\mathcal{S}} f_\varepsilon \prec f_\varepsilon. \end{equation} We will specify the function $f_\varepsilon$ explicitly. Let us introduce a shorthand notation $\delta:=2\varepsilon-\varepsilon^2>0$ and define \begin{equation} \label{rhs:eq} f_\varepsilon(x):= \exp(-\delta(1-\varepsilon)^2x^2) = \exp(-\delta x^2) \cdot \exp(\delta^2 x^2) \end{equation} It is not difficult to see that $f_\varepsilon$ satisfies~\eqref{eq:dsfe}. Indeed, note that $\frac{1+\exp(-\delta)}2\le (1-\varepsilon)^{1-c\varepsilon}$ for any $\varepsilon$ sufficiently small and any $c>\frac32$. Therefore to establish~\eqref{eq:dsfe} it is sufficient to show that \begin{equation} \label{near1:eq} \frac14\left(f_\varepsilon\left(\frac{x+1}{1-\varepsilon}\right)+ f_\varepsilon\left(\frac{x-1}{1-\varepsilon}\right)+2f_\varepsilon\left(\frac{x}{1-\varepsilon}\right)\right)\le \frac{1+\exp(-\delta)}{2}f_\varepsilon(x). \end{equation} To prove \eqref{near1:eq} we first note that $$ \begin{aligned} f_\varepsilon\left(\frac{x+1}{1-\varepsilon}\right) + f_\varepsilon\left(\frac{x-1}{1-\varepsilon}\right) &= \exp(-\delta (x+1)^2) + \exp(-\delta(x-1)^2) \cr &= \exp(-\delta) \cdot \exp(-\delta x^2) \left( \exp(-2 \delta x) + \exp(2\delta x)\right) \cr &= 2 \exp(-\delta) \cdot \exp(-\delta x^2) \cdot \cosh(2\delta x). 
\end{aligned} $$ Moreover, since $f_\varepsilon\left(\frac{x}{1-\varepsilon}\right) = \exp(-\delta x^2)$ we conclude for the left hand side of~\eqref{near1:eq} that \begin{equation} \label{lhs:eq} \frac14\left(f_\varepsilon\left(\frac{x+1}{1-\varepsilon}\right)+ f_\varepsilon\left(\frac{x-1}{1-\varepsilon}\right)+2f_\varepsilon\left(\frac{x}{1-\varepsilon}\right)\right) = \frac12 \exp(-\delta x^2) \cdot \left(1 + \cosh(2\delta x) \exp(-\delta) \right). \end{equation} Combining~\eqref{rhs:eq} and~\eqref{lhs:eq} we see that~\eqref{near1:eq} is equivalent to $$ 1 + \exp(-\delta)\cosh(2\delta x) \le (1+\exp(-\delta) )\exp \left( \delta^2 x^2\right), $$ which, in turn, is equivalent to \begin{equation}\label{ineq:eq} \frac{1} {1+\exp(-\delta) } + \frac{\exp(-\delta)} {1+\exp(-\delta) }\cosh(2\delta x) \le \exp \left( \delta^2 x^2\right). \end{equation} To establish (\ref{ineq:eq}) it is sufficient to show that $$ \frac12+\frac12\cosh(2\delta x) \le \exp\left( \delta^2 x^2\right). $$ This last inequality can be established by comparison of the Taylor series coefficients term by term. More precisely, for $k\ge1$ the coefficient in front of the term $(\delta x)^{2k}$ of the function $\frac12+\frac12\cosh(2\delta x)$ is $\frac{2^{2k-1}}{(2k)!}$, the corresponding coefficient of the function $\exp\left(\delta^2 x^2\right)$ is equal to $\frac{1}{k!}$, and $2^{2k-1}\, k! \le (2k)!$ for all $k \ge 1$. \end{proof} \section[Diffusion operator $\mathcal{D}^{(2)}_{\alpha,\lambda}$]{Diffusion operator $\mathcal{D}^{(2)}_{\alpha,\lambda}$ and correlation dimension} \label{section:diffusion} We would like to start by explaining the idea behind the diffusion operator and its connection with the correlation dimension, which led us to it. Let us recall the energy integral~\eqref{eq:a-int} $$ I(\mu,\alpha) = \int_\mathbb{R} \int_\mathbb{R} |x-y|^{-\alpha} \mu(dx) \mu(dy) $$ and the definition of the correlation dimension~\eqref{eq:D2-bis}: $D_2(\mu) = \sup\{\alpha \colon I(\mu,\alpha) \mbox{ is finite } \}$.
In other words, $I(\mu,\alpha)$ is finite for any $\alpha < D_2(\mu)$. Let $\mathcal{S}(\lambda,\bar c, \bar p)$ be an iterated function scheme of~$n$ similarities $f_j(x) = \lambda x - c_j$, $j = 1, \ldots, n$, and let $\mu_\lambda$ be its stationary measure. Then $\mu_\lambda$ is the fixed point of the operator on Borel probability measures $$ T_{\mathcal{S}} \colon \mu \mapsto \sum_{j=1}^n p_j {f_j}_* \mu. $$ We would now like to study the induced action of~$T_{\mathcal{S}}$ on~$I(\mu,\alpha)$. To this end, we want to incorporate $I(\mu,\alpha)$ into a family. More precisely, we consider a family of functions given by \begin{equation} \label{eq:psi-def} \psi_{\alpha,\mu}:\mathbb R\to\mathbb R^+\cup \{+\infty\}, \qquad \psi_{\alpha,\mu}(r) : = \int_{\mathbb{R}}\int_{\mathbb{R}} |(x-y)-r|^{-\alpha} \, \mu(dx) \, \mu(dy). \end{equation} \begin{notation} We denote by $-\mu$ the push-forward of the measure~$\mu$ under $x \mapsto -x$. \end{notation} In the sequel, we will need the following technical lemma which helps us to decide whether or not $\psi_{\alpha,\mu}(r)$ is finite\footnote{Or, in other words, whether the function $|x-y-r|^{-\alpha}$ is integrable with respect to $\mu \times \mu$.}. \begin{lemma} \label{lem:psi} Let $\mu$ be a probability measure. Assume that $I(\mu,\alpha) = \psi_{\alpha,\mu}(0)$ is finite. Then $\psi_{\alpha,\mu}(r)$ is finite for any $r \in \mathbb{R}$ and, moreover, we have that $\psi_{\alpha,\mu}(r) \le \psi_{\alpha,\mu}(0)$. In particular, $\psi_{\alpha,\mu}$ is a continuous function. \end{lemma} \begin{proof} Let us denote $\nu := \mu*(-\mu)$. It is easy to see that its Fourier transform is a nonnegative function: $$ \hat \nu(t) = \int_\mathbb{R} e^{-itz} \nu(dz) = \int_\mathbb{R} \int_\mathbb{R} e^{-it(x-y)} \mu(dx) \mu(dy) = \hat \mu(t) \overline{\hat \mu(t)} = |\hat \mu(t)|^2 \ge 0. $$ We may write $$ \psi_{\alpha,\mu}(r) = \int_{\mathbb{R}^2} |x-y-r|^{-\alpha} \mu(dx) \mu(dy) = \int_\mathbb{R} |z - r|^{-\alpha} \nu(dz).
$$ Then the desired inequality $\psi_{\alpha,\mu}(r)\le\psi_{\alpha,\mu}(0)$ for all $r \in \mathbb R$ is equivalent to $$ \int_{\mathbb R} |z-r|^{-\alpha} \nu (d z) \le \int_{\mathbb{R}} | z|^{-\alpha} \nu(d z) . $$ Let us consider the function $f_\alpha(s) \eqdef |s|^{-\alpha}$. Its Fourier transform is known\footnote{One possible approach is via the Gamma function. We rewrite $|s|^{-\alpha} = \frac{2 \pi^{\alpha/2}}{\Gamma(\alpha/2)}\int_0^\infty t^{\alpha-1} e^{-\pi t^2 s^2} dt$ and compute the Fourier transform of the latter by swapping the order of integrals.} to be \begin{equation} \label{eq:FTf} \hat f_\alpha(t) = \frac{\pi^{\alpha-1/2}\Gamma\left( (1-\alpha)/2 \right)}{\Gamma(\alpha/2)} |t|^{\alpha-1} = C_\alpha |t|^{\alpha-1} \ge 0, \quad 0<\alpha<1, \end{equation} where $C_\alpha = \frac{\pi^{\alpha-1/2}\Gamma\left( (1-\alpha)/2 \right)}{\Gamma(\alpha/2)}$. Therefore $\widehat{f_\alpha * \nu} = \hat f_\alpha \cdot \hat \nu $ is real and non-negative. Using the inverse Fourier transform formula we obtain an upper bound: \begin{multline*} \psi_{\alpha,\mu}(r) = \int_{\mathbb R} |z-r|^{-\alpha} \nu (dz) = (f_\alpha * \nu )(r) = \frac{1}{2\pi} \int_{\mathbb R} e^{itr}\hat f_\alpha(t) \cdot \hat \nu (t) dt \\ \le \frac{1}{2\pi} \int_{\mathbb R} \hat f_\alpha(t) \cdot \hat \nu (t) dt = \psi_{\alpha,\mu}(0) <\infty. \end{multline*} The function $\psi_{\alpha,\mu}$ is continuous since it is an inverse Fourier transform of an $L^1$ function. \end{proof} \begin{remark} \label{rem:a-bound} Let $m = \inf\{ x \mid \supp \mu*(-\mu) \subseteq [-x,x]\}$ and let $J = [-a,a]\supsetneq [-m,m]$. Then $\psi_{\alpha,\mu}$ has a bounded continuous extension to $\mathbb R \setminus J$ with an upper bound $$ \psi_{\alpha,\mu}(r)\le (a-m)^{-\alpha} \quad \mbox{ for all } r, \, |r| > a, $$ since $|z-r| \ge |r| - m > a - m$ for all $z \in \supp \mu*(-\mu)$. \end{remark} Therefore, the behaviour of $\psi_{\alpha,\mu}$ on $\supp\mu*(-\mu)$ is the most important to us.
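Lemma~\ref{lem:psi} is easy to test numerically. The sketch below (our own illustration) takes for $\mu$ the uniform measure on $[0,1]$ --- the Bernoulli convolution with $\lambda = \frac12$ --- and estimates $\psi_{\alpha,\mu}(r)$ by Monte Carlo, checking that it is maximal at $r=0$:

```python
import random

random.seed(0)
alpha = 0.5
n = 200_000

# mu = uniform measure on [0, 1]; psi_{alpha,mu}(r) = E |x - y - r|^{-alpha}
xs = [random.random() for _ in range(n)]
ys = [random.random() for _ in range(n)]

def psi(r):
    # sample average of |x - y - r|^{-alpha} over independent draws from mu
    return sum(abs(x - y - r) ** (-alpha) for x, y in zip(xs, ys)) / n

psi_values = {r: psi(r) for r in (0.0, 0.25, 0.5, 1.0)}
```

For this $\mu$ one can compute $\psi_{1/2,\mu}(0) = \frac83$ in closed form, which the sample average reproduces approximately; the values decrease monotonically as $|r|$ grows towards the edge of $\supp\mu*(-\mu) = [-1,1]$.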
The next Proposition ties together the symmetric diffusion operator $\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$, a family of functions $\psi_{\alpha,\mu}$, and the action on measures $T_{\mathcal{S}}$. \begin{proposition} \label{prop:dspsi} Let $\mathcal{S}(\lambda,\bar c, \bar p)$ be an iterated function scheme of~$n$ similarities. Let~$\mu$ be a probability measure such that $\supp \mu \subset J$ for a closed interval~$J$. Assume that for some $\alpha>0$ the function $\psi_{\alpha,\mu}$ is bounded. Then $$ \psi_{\alpha,T_{\mathcal{S}} \mu} = \mathcal{D}^{(2)}_{\alpha,\mathcal{S}}\psi_{\alpha,\mu}. $$ \end{proposition} \begin{proof} For convenience, recall the definition of the symmetric diffusion operator~\eqref{eq:dif-twoway}: $$ [\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}\psi](x) := \lambda^{-\alpha} \cdot \sum_{i,j=1}^n p_i p_j \cdot \psi\left(\frac{x+c_i-c_j}{\lambda}\right). $$ By straightforward computation, \begin{align*} \psi_{\alpha,T_{\mathcal{S}}\mu}(r) &= \int \int|y-(x-r)|^{-\alpha} T_{\mathcal{S}}\mu(dx) T_{\mathcal{S}} \mu(dy) \\ &=\int\int |y - (x -r)|^{-\alpha} \left(\sum p_j{f_j}_*\mu\right) (dx) \left( \sum p_k {f_k}_* \mu \right) (dy) \\ &=\sum_{j,k} p_jp_k \int \int |f_j(x) - (f_k(y)-r)|^{-\alpha} \mu(dx) \mu(dy) \\ &=\sum_{j,k} p_j p_k \int \int |\lambda x - c_j - \lambda y + c_k + r|^{-\alpha} \mu(dx) \mu(dy) \\ &= \lambda^{-\alpha}\sum_{j,k} p_j p_k \int \int \left|x-y+\lambda^{-1}(r-c_j+c_k)\right|^{-\alpha} \mu(dx) \mu(dy) \\ &=\lambda^{-\alpha} \sum_{j,k} p_j p_k \psi_{\alpha,\mu}\left(\lambda^{-1}(r-c_j+c_k)\right) = [\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}\psi_{\alpha,\mu}](r). \end{align*} \end{proof} \begin{corollary} \label{cor:fixedpoint} Let $\mu$ be the unique stationary measure of an iterated function scheme $\mathcal{S}(\lambda,\bar c, \bar p)$. Assume that $I(\mu,\alpha)$ is finite. Then $\psi_{\alpha,\mu}$ is a fixed point of $\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$.
\end{corollary} \begin{remark} Of course the constant function $f \equiv 1 \in L^\infty(\mathbb R)$ is an eigenvector for the diffusion operator with the eigenvalue $\lambda^{-\alpha}$, but it does not satisfy the hypothesis of Theorem~\ref{t:certificate-D2} and is of no use to us. \end{remark} In the next section we give a proof for Theorem~\ref{t:certificate-D2}, which provides the grounds for the numerical estimates of the correlation dimension. \subsection{Random processes viewpoint} The random processes viewpoint will be used in the arguments for Theorems~\ref{t:certificate-D2}, \ref{t:finding-D2}, \ref{t:certificate-D1}, and~\ref{t:finding-D1}. We would like therefore to make a preparatory description of the setup. \begin{definition} \label{def:compproc} Let $\mathcal{S}\left(\lambda,\bar c,\bar p \right)$ be an iterated function scheme. Consider the set of pairwise differences $\{d_k \mid d_k = c_i - c_j \}$ and the probability vector $q_k = \sum\limits_{i,j \colon c_i - c_j = d_k} p_i p_j$, and define \emph{the complementary} iterated function scheme~$\mathcal{S}(\lambda,\bar d, \bar q)$ by $$ g_k(x) = \lambda x - d_k. $$ \end{definition} If $\mu$ is the unique stationary measure of $\mathcal{S}(\lambda,\bar c, \bar p)$ then the unique stationary measure of the complementary iterated function scheme is $\mu*(-\mu)$. Since the measure $\mu$ is compactly supported, we may define: \begin{equation} \label{eq:suppmu-mu} m : = \inf\{x \mid \supp(\mu*(-\mu)) \subseteq [-x,x] \}. \end{equation} The symmetric diffusion operator can be written in terms of the maps of the scheme~$\mathcal{S}(\lambda,\bar d, \bar q)$: \begin{equation} \label{eq:d2-back} [\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}\psi](x) = \lambda^{-\alpha} \sum_{i,j=1}^n p_i p_j \psi\left(\frac{x+c_i-c_j}{\lambda}\right) = \lambda^{-\alpha} \sum_k q_k \psi\left( g_k^{-1}(x)\right) .
\end{equation} This observation brings us to the idea of introducing the backward process associated to $\mathcal{S}(\lambda,\bar d, \bar q)$, which can be defined as follows. Let $x$ and $y$ be two independent $\mu$-distributed random points \begin{equation} \label{eq:xy} x = \sum_{k=0}^\infty \xi_k\lambda^k, \quad y = \sum_{k=0}^\infty \eta_k \lambda^k, \end{equation} where $\xi_k, \eta_k$ are i.i.d. random variables assuming values $c_j$ with probabilities $p_j$, $j = 1, \ldots, n$. Consider the random process given by renormalized differences \begin{equation} \label{eq:zkproc} z_k := \lambda^{-k-1}\left( \sum_{j=0}^k \xi_j\lambda^j - \sum_{j=0}^k \eta_j\lambda^j \right). \end{equation} It is easy to see that $z_{k+1} = \lambda^{-1} (z_k + \zeta)$, where $\zeta = \xi_{k+1} -\eta_{k+1}$ is a random variable assuming the values $d_m=c_i-c_j$ with probabilities $q_m = \sum\limits_{i,j \colon c_i - c_j = d_m} p_i p_j$ introduced in Definition~\ref{def:compproc}. \begin{definition} \label{def:backproc} We call $z_k$ the backward process associated to~$\mathcal{S}(\lambda,\bar d,\bar q)$. \end{definition} With this notation, the symmetric diffusion operator takes the form \begin{equation} \label{eq:d2-expect} [\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}\psi](x) = \lambda^{-\alpha} \sum_k q_k \psi\left( g_k^{-1}(x)\right) = \lambda^{-\alpha} \mathbb E \left(\psi\left(\frac{x+\zeta}\lambda\right) \right). \end{equation} \noindent We conclude this preparatory discussion by commenting on the r\^ole of the admissible interval. \begin{lemma} \label{lem:return} If a trajectory of the random process $z_n$ leaves an admissible interval~$J$ for the scheme $\mathcal{S}(\lambda,\bar c, \bar p)$, then it never returns to it. In other words, if there exists $k$ such that $z_k \not\in J$, then $z_n\not\in J$ for any $n>k$. \end{lemma} \begin{proof} Evidently, $\supp \mu*(-\mu) \subseteq \{x - y \mid x, y \in \supp \mu\}$.
At the same time, $$ \frac{\max_j |d_j|}{1-\lambda} = \max_j |d_j| \sum_{k=0}^\infty \lambda^k \in \supp \mu*(-\mu) $$ and in particular $\frac{\max_j |d_j|}{1-\lambda} \le m$, where $m$ is defined by~\eqref{eq:suppmu-mu}. Thus $\max_j |d_j| \le m(1-\lambda)$. Assume that $J = [b_1,b_2] \supsetneq [-m,m]$ is an admissible interval and $z_k \not\in J$. Without loss of generality we may assume that $z_k > b_2$; then $$ |z_{k+1}| = \Bigl|\frac{z_k+\zeta}{\lambda}\Bigr| \ge \frac{|z_k| - \max_j|d_j|}{\lambda} \ge \frac{b_2 - \max_j|d_j|}\lambda \ge \frac{b_2 - m(1-\lambda)}\lambda > b_2, $$ where the last inequality holds since $b_2 > m$. The case $z_k< b_1$ is similar. \end{proof} \subsubsection{Proof of Theorem~\ref{t:certificate-D2}} For the convenience of the reader, we recall the statement. \paragraph{Theorem~\ref{t:certificate-D2}.} {\it Let $\mathcal{S}(\lambda,\bar c, \bar p)$ be an iterated function scheme of similarities. Assume that for some $\alpha>0$ there exists an admissible compact interval $J \subset \mathbb R$ and a function $\psi \in \mathcal F_J$ such that \begin{equation*} [\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}\psi] \prec \psi. \end{equation*} Then the correlation dimension of the $\mathcal{S}$-stationary measure $\mu$ is bounded from below by~$\alpha$:} \[ D_2(\mu)\ge \alpha. \] The proof of Theorem~\ref{t:certificate-D2} relies on the following lemma which relates the time for which the backward process of the complementary iterated function scheme remains in an admissible interval~$J$ to the correlation dimension of the measure~$\mu$. \begin{lemma} \label{lem:pdim} Let $J = [-a,a]$ be an admissible interval for $\mathcal{S}(\lambda,\bar c,\bar p)$ with the stationary measure~$\mu$. Let $z_n$ be the backward process for the complementary scheme. If $\mathbb P(z_n \in J) \le C_0 \lambda^{\alpha n}$ for some constant $C_0$, independent of $z_0$, then $D_2 (\mu) \ge \alpha$. \end{lemma} \begin{proof} Let~$m$ be as defined in~\eqref{eq:suppmu-mu}.
Let $x$ and $y$ be two independent $\mu$-distributed random variables defined by~\eqref{eq:xy}, and let the backward process~$z_n$ be defined by~\eqref{eq:zkproc}. Observe that the difference between $|x-y|$ and the finite sum $\left|\sum_{j=0}^n (\xi_j - \eta_j) \lambda^j\right|$ is no more than $m\lambda^{n+1}$. By a straightforward calculation, we have \begin{equation*} \begin{split} \mathbb P \left(z_n \in J \right) &= \mathbb P\left(\lambda^{-n-1}\left|\sum_{j=0}^n (\xi_j - \eta_j) \lambda^j\right| \le a \right) \\& = \mathbb P\left( \left|\sum_{j=0}^n (\xi_j - \eta_j) \lambda^j\right| \le a \lambda^{n+1} \right) \ge \mathbb P\left( |x-y| \le (a-m) \lambda^{n+1} \right). \end{split} \end{equation*} Therefore the hypothesis of the Lemma implies $\mathbb P\left( |x-y| \le (a-m)\lambda^{n+1}\right) \le C_0 \lambda^{\alpha n}$. Choosing, for a given $0 < r < (a-m)\lambda$, the integer $n$ with $(a-m)\lambda^{n+2} < r \le (a-m)\lambda^{n+1}$, we conclude that \begin{equation} \label{eq:l6p1} \mathbb P( |x-y| \le r) \le C_0 \lambda^{-2\alpha} (a - m)^{-\alpha} r^\alpha =: C_1(m,a,\alpha) r^\alpha. \end{equation} In order to show that $D_2 (\mu) \ge \alpha$ it is sufficient to show that for any $\alpha^\prime < \alpha$ the integral $\int_{\mathbb{R}^2} |s-t|^{-\alpha^\prime} \mu(ds) \mu(dt)$ is finite. Indeed, $$ \int_{\mathbb{R}^2} |s-t|^{-\alpha^\prime} \mu(ds) \mu(dt) = \mathbb E (|x-y|^{-\alpha^\prime} ) = \int_{\mathbb{R}} \mathbb P(|x-y|^{-\alpha^\prime} > r ) d r. $$ Evidently, $\mathbb P(|x-y|<r) \le C_1(m,a,\alpha) r^\alpha$ implies $\mathbb P (|x-y|^{-\alpha^\prime} > r) \le \min(1, C_2 \cdot r^{-\alpha/\alpha^{\prime}})$ for some constant~$C_2$, which depends on~$m$, $a$, $\alpha$, and~$\alpha^\prime$ only. Hence for some constants~$C_3$ and~$C_4$, which depend on~$m$,~$a$,~$\alpha$, and~$\alpha^\prime$, but do not depend on~$r$, we have that $$ \int_{\mathbb{R}^2} |s-t|^{-\alpha^\prime} \mu(ds) \mu(dt) \le \int_{0}^{C_3} 1 \, d r + \int_{C_3}^{+\infty} C_2 \, r^{-\alpha/\alpha^{\prime}} dr < C_4, $$ since $\frac{\alpha}{\alpha^{\prime}}>1$.
\end{proof} \bigskip Finally, we can proceed to the proof of Theorem~\ref{t:certificate-D2}. We use the same notation as above. \smallskip \begin{proof}[of Theorem~\ref{t:certificate-D2}] By the hypothesis of the Theorem there exists $\theta>0$ such that for any $x \in J$ we have that $\theta < \psi(x) <\theta^{-1}$. Consider a discrete random process defined by~$w_n = \lambda^{-\alpha n} \psi(z_n)$. Then for any $z_n \in J$ taking into account~\eqref{eq:d2-expect}, we compute \begin{equation} \begin{split} \mathbb E (w_{n+1} \mid z_n ) & = \mathbb E (\lambda^{-\alpha (n+1)} \psi(z_{n+1}) \mid z_n) = \lambda^{-\alpha(n+1)} \mathbb E \left(\psi \left(\lambda^{-1} (z_n+\zeta) \right) \mid z_n \right) \\ &= \lambda^{-\alpha (n+1)} \mathbb E \left(\psi \left( \lambda^{-1}( z_n+\zeta) \right)\right) = \lambda^{-\alpha n} \mathcal{D}^{(2)}_{\alpha,\mathcal{S}} \psi ( z_n ) \\& \le \lambda^{-\alpha n} \psi (z_n) = w_n. \end{split} \label{eq:wnmart} \end{equation} Thus the process $w_n$ is a supermartingale, as $\mathbb E(w_{n+1} \mid z_n ) \le w_n$. In particular, \[ \mathbb E w_n\le w_0=\psi(z_0)=\psi(0)<\theta^{-1}. \] On the other hand, \[ \mathbb E w_n = \lambda^{-\alpha n} \cdot \mathbb E \psi (z_n) \ge \lambda^{-\alpha n} \cdot \inf \psi |_{J} \cdot \mathbb P(z_n \in J) \ge \lambda^{-\alpha n} \cdot \theta \cdot \mathbb P(z_n \in J). \] Therefore $\mathbb P (z_n \in J) \le \theta^{-2} \lambda^{\alpha n}$ and the Theorem follows from Lemma~\ref{lem:pdim}. \end{proof} \subsection{Effectiveness of the algorithm} Let $\mu$ be the stationary measure of an iterated function scheme of similarities $\mathcal{S}(\lambda,\bar c, \bar p)$. In this section we shall show that for any $\alpha < D_2(\mu)$ the method described in Section~\ref{sss:verifyalpha} will be able to confirm this inequality, subject to computer resources and time. In other words we shall show the following. 
\begin{proposition} \label{prop:D21} Let $\mu$ be the stationary measure of an iterated function scheme of similarities $\mathcal{S}(\lambda,\bar c, \bar p)$. Assume that $\alpha < D_2(\mu)$. Then there exist: \begin{enumerate} \item A sufficiently small $\varepsilon>0$ and an admissible interval $J_\Lambda$ for $\Lambda = B_\varepsilon(\lambda)$; \item A sufficiently fine partition $\mathcal J$ of $J_\Lambda$; \item A sufficiently large $n \in \mathbb N$; and \item A sufficiently small $\vartheta >0$, \end{enumerate} so that the hypothesis of Corollary~\ref{cor:uniform} holds; more precisely, for $A =\mathcal D_{\alpha,\Lambda,\mathcal J}$ we have $$ \hatop A^n \mathds{1}_{\mathcal J} \prec \mathds{1}_{\mathcal J}. $$ \end{proposition} Theorem~\ref{t:finding-D2} follows immediately from Proposition~\ref{prop:D21}. We begin with the following technical fact. \begin{lemma} \label{lem:find-D2} Let $\mathcal{S}(\lambda,\bar c,\bar p)$ be an iterated function scheme. Let $\mu$ be the unique stationary measure and assume that $\alpha<D_2(\mu)$. Then for any sufficiently small $\varepsilon>0$ there exist an admissible interval~$J$, a continuous function~$\varphi$, $\varphi|_J > \theta > 0$, and $n$ such that $(\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^n \varphi\prec(1-\varepsilon)\varphi$. \end{lemma} \begin{proof} Since $\alpha<D_2(\mu)$, the integral $I(\mu,\alpha)$ is finite and by Lemma~\ref{lem:psi} the function $$ \psi_{\alpha,\mu}(r) = \int\int |x-y-r|^{-\alpha} d\mu(x)d\mu(y) $$ is finite for all~$r \in \mathbb R$. Moreover, by Corollary~\ref{cor:fixedpoint} it is a fixed point of the symmetric diffusion operator~$\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$. Let $\mathcal{S}(\lambda,\bar d, \bar q)$ be the complementary iterated function scheme of $N$ similarities. Let $m$ be as defined in~\eqref{eq:suppmu-mu}, so that $\supp \mu*(-\mu) \subseteq [-m,m]$.
Let us choose an admissible interval $J:=[-a,a]$ and consider the intersection of its preimages under the maps $g_j(x) = \lambda x - d_j$: $$ \tilde J: = \bigcap_{j=1}^N g_j^{-1}([-a,a]) = [- \tilde a, \tilde a] \supsetneq [-a,a] \supsetneq [-m,m]. $$ Then for any $x\notin J$ and any $j=1,\dots,N$ one has $g_j^{-1}(x)\notin \tilde J$. We define a continuous function~$\varphi$ by \begin{equation} \label{eq:psi-cut} \varphi(r) = \begin{cases} \psi_{\alpha,\mu}(r), & \mbox{ if } |r| \le a , \\ \psi_{\alpha,\mu}(r) \cdot \frac{\tilde a-r}{\tilde a-a}, & \mbox{ if } a <r< \tilde a, \\ \psi_{\alpha,\mu}(r) \cdot \frac{\tilde a+r}{\tilde a- a}, & \mbox{ if } -\tilde a <r< - a, \\ 0, & \mbox{ otherwise.} \end{cases} \end{equation} \begin{figure} \centering \includegraphics[height=55mm,width=120mm]{eigfun.pdf} \caption{The construction of function $\varphi$ in Lemma~\ref{lem:find-D2}. It agrees with $\psi_{\alpha,\mu}$ on the admissible interval $J$, is linear on $\widetilde J \setminus J$ and vanishes outside of $\widetilde J$.} \label{fig:funphi} \end{figure} It is easy to see that $\varphi(r) \le \psi_{\alpha,\mu}(r)$ for all $r \in \mathbb R$. Taking into account monotonicity of the diffusion operator~\eqref{eq:d2-back} we obtain for any $r\in J$ \begin{equation}\label{eq:Ds-phi} [\mathcal{D}^{(2)}_{\alpha,\mathcal{S}} \varphi](r) \le [\mathcal{D}^{(2)}_{\alpha,\mathcal{S}} \psi_{\alpha,\mu}](r) = \psi_{\alpha,\mu}(r) =\varphi(r). \end{equation} On the other hand, for any $r\in (-\tilde a, \tilde a) \setminus J$ one has $[\mathcal{D}^{(2)}_{\alpha,\mathcal{S}} \varphi](r) =0<\varphi(r)$, where the equality is due to Lemma~\ref{lem:return} and formula~\eqref{eq:d2-expect} for~$\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$. Together these two observations give $\mathcal{D}^{(2)}_{\alpha,\mathcal{S}} \varphi \preccurlyeq \varphi$. We shall now show that for sufficiently large~$n$ we have $ [(\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^n \varphi](r)<\varphi(r) $ for all $r\in J$.
Indeed, let $d_k = \max_{i,j}(c_i - c_j)$ be the largest difference. Then for any $x \in [0,a]$ we have that $$ g_k^{-n} (x) \ge d_k \sum_{j=1}^n \lambda^{-j} \ge d_k \lambda^{-n} \to \infty \mbox{ as } n \to \infty, $$ and the case $x \in [-a,0]$ is similar. Therefore we may choose~$n$ such that for any $x \in J$ there exists a sequence $\underline j_n$ such that $g_{\underline j_n}^{-1}(x) \not\in J$. We may write for any $r\in J$ \begin{equation} \begin{split} [(\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^n\varphi](r) &= \sum_{\underline j_n} q_{j_1} \ldots q_{j_n} \varphi\left(g_{\underline j_n}^{-1} (r)\right) \\ &= \sum_{\underline j_n \, : \, g_{\underline j_n}^{-1}(r) \in J} q_{j_1} \ldots q_{j_n} \psi_{\alpha,\mu} \left(g_{\underline j_n}^{-1} (r)\right) + \sum_{\underline j_n \, : \, g_{\underline j_n}^{-1}(r) \not\in J} q_{j_1} \ldots q_{j_n} \varphi\left(g_{\underline j_n}^{-1} (r)\right) \\ &< \sum_{\underline j_n \, : \, g_{\underline j_n}^{-1}(r) \in J} q_{j_1} \ldots q_{j_n} \psi_{\alpha,\mu} \left(g_{\underline j_n}^{-1} (r)\right) + \sum_{\underline j_n \, : \, g_{\underline j_n}^{-1}(r) \not\in J} q_{j_1} \ldots q_{j_n} \psi_{\alpha,\mu} \left(g_{\underline j_n}^{-1} (r)\right) \\ &= \psi_{\alpha,\mu}(r) = \varphi(r), \end{split} \label{eq:Ds-itn} \end{equation} where the equality in the second line comes from the fact that $\varphi|_J=\psi_{\alpha,\mu}|_J$, while the inequality in the third is due to the strict inequality between the corresponding terms of the sums over the set $\{\underline j_n : g_{\underline j_n}^{-1}(r) \not\in J\}$, which is non-empty by the choice of~$n$. Note that $(\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^n\varphi$ is a continuous function, and a strict inequality \[ [(\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^n\varphi](r) < \varphi(r) \] for all~$r \in J$ implies, taking into account compactness of $J$, that for some $\varepsilon > 0$ one has \[ (\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^n\varphi \prec (1-\varepsilon) \varphi. \] \end{proof} We now proceed to prove Proposition~\ref{prop:D21}.
\medskip \begin{proof}[Proof of Proposition~\ref{prop:D21}] Let an admissible interval $J$ be fixed. By Lemma~\ref{lem:find-D2} we know that there exist $n$, $\varepsilon$, $\theta>0$ and a function $\varphi$ with $\varphi(r)>\theta$ such that $\mathcal{D}^{(2)}_{\alpha,\mathcal{S}} \varphi \preccurlyeq \varphi$ and for all $r\in J$ \[ [(\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^n\varphi](r) < (1-\varepsilon) \varphi(r). \] Note that for $c=\frac{1}{\theta}$ we have $\mathds{1}_J\preccurlyeq c \varphi$, and in particular for every~$m$ we get \[ (\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^{mn}(\mathds{1}_J) \preccurlyeq (\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^{mn}(c \varphi) = c (\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^{mn}(\varphi) \preccurlyeq c (1-\varepsilon)^m \varphi. \] Then for~$m$ sufficiently large so that $c(1-\varepsilon)^m \cdot \max_J \varphi<\frac 12$, we obtain a strict inequality \[ (\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^{mn}(\mathds{1}_J)\prec \frac{1}{2} \mathds{1}_J. \] Let us fix $n':=nm$. Note that~$\varphi$ and its images under $\mathcal{D}^{(2)}_{\alpha,\mathcal{S}}$ are continuous. The finite rank operator $\mathcal D_{\alpha,\Lambda, \mathcal J}$ depends continuously on the partition $\mathcal J$ and on $\Lambda$, therefore \[ \sup_J \left| (\mathcal D_{\alpha,\Lambda, \mathcal J})^{n'} \varphi - (\mathcal{D}^{(2)}_{\alpha,\mathcal{S}})^{n'} \varphi \right| \to 0 \] as the mesh of the partition~$\mathcal J$ tends to zero and $M \to \infty$. In particular, for a sufficiently fine partition and a sufficiently large $M$ one has for all $x \in J$ \[ (\mathcal D_{\alpha,\Lambda, \mathcal J})^{n'}(c \varphi)(x) < \frac{1}{2}. \] Since the operator $\mathcal D_{\alpha,\Lambda, \mathcal J}$ is monotone, the latter implies \[ (\mathcal D_{\alpha,\Lambda, \mathcal J})^{n'}(\mathds{1}_J) \prec \frac{1}{2}\mathds{1}_J.
\] Now, let us denote $A=\mathcal D_{\alpha,\Lambda, \mathcal J}$; then for any nonnegative $\psi$ we get $\widehat{A}_0 \psi \preccurlyeq A \psi$, and hence \[ \widehat{A}_0^{n'} (c\varphi) \preccurlyeq A^{n'} (c\varphi) \prec \frac{1}{2}\mathds{1}_J. \] On the other hand, $(\hatop A)^{n'} (c\varphi)$ converges to $\widehat{A}_0^{n'} (c\varphi)$ uniformly as $\vartheta\to 0$. Hence for all sufficiently small $\vartheta$ we get \[ (\hatop A)^{n'}(c \varphi) < \frac{1}{2} \] everywhere on $J$. Finally we conclude \[ (\hatop A)^{n'}(\mathds{1}_J) \prec \frac{1}{2}\mathds{1}_J. \] \end{proof} \subsubsection{Proof of Theorem~\ref{t:finding-D2}} Let $\mathcal{S}(\lambda)$ be an iterated function scheme of similarities and let $\mu$ be its unique stationary measure. Assume that $\alpha < D_2(\mu)$. Then by Proposition~\ref{prop:D21} there exist an interval $\Lambda\ni\lambda$, an admissible interval $J_\Lambda$, and its partition~$\mathcal J$ such that for the finite rank diffusion operator $A = \mathcal D_{\alpha,\Lambda,\mathcal J}$ we have $\varphi:=\hatop{A}^n \mathds{1}_{J_\Lambda} \prec \mathds{1}_{J_\Lambda}$. Then by Proposition~\ref{prop:auxop} we have $\mathcal D_{\alpha,\Lambda,\mathcal J} \varphi \prec \varphi$; in other words, the function $\varphi$ satisfies the hypothesis of Theorem~\ref{thm:5star}. \section[Diffusion operator $\mathcal{D}^{(1)}_{\alpha,\lambda}$]{Diffusion operator $\mathcal{D}^{(1)}_{\alpha,\lambda}$ and regularity of the measure} In this section we consider general iterated function schemes of orientation-preserving contracting $C^{1+\varepsilon}$ diffeomorphisms and show that the diffusion operator approach can be used to get a lower bound on the regularity exponent of the stationary measure. We briefly recall the setting. Let $J \subset \mathbb R$ be a compact interval.
Consider an iterated function scheme~$\mathcal T(\bar f, \bar p, J)$ consisting of~$n$ uniformly contracting diffeomorphisms $f_j \colon \mathbb R \to \mathbb R$, $f_j \in C^{1+\varepsilon}(\mathbb R)$, which preserve the interval $J$: $f_j (J) \subset J$ for $j = 1, \ldots, n$, and of a probability vector $\bar p = (p_1, \ldots, p_n)$. Let $\mu$ be the stationary measure so that $ \sum_{j=1}^n p_j {f_j}_* \mu = \mu $; evidently, $\supp \mu \subset J$. The asymmetric diffusion operator is defined by~\eqref{eq:dif-oneway}: \begin{equation*} \mathcal{D}^{(1)}_{\alpha,\mathcal T}[\psi](x) := \sum_{j=1}^n p_j \cdot |(f_{j}^{-1})'(x)|^{\alpha} \cdot \psi(f_j^{-1}(x)). \end{equation*} \begin{example}[Bernoulli convolution revisited] In the special case of the Bernoulli convolution scheme~$\mathcal{S}$ as defined in Example~\ref{ex:BCsystem} and $\alpha=1$ the operator $\mathcal{D}^{(1)}_{1,\mathcal{S}} \colon L^1(\mathbb R) \to L^1(\mathbb R)$ has a fixed point $\mathcal{D}^{(1)}_{1,\mathcal{S}} h = h$ precisely when $\mu$ has an $L^1$ density, i.e., $\frac{d\mu}{dx} = h \in L^1(\mathbb R)$. \end{example} We will need the following technical fact for the proof of Theorem~\ref{t:certificate-D1}. \begin{lemma} \label{lem:dist} Let $\mathcal T(\bar f, \bar p)$ be an iterated function scheme of uniformly contracting $C^{1+\varepsilon}$-diffeomorphisms which preserve a compact interval~$J$. Then the distortion is uniformly bounded. In other words, there exist two constants $c_1$ and $c_2$ such that for any sequence $\underline j_n$ the distortion of the composition $f_{\underline j_n} = f_{j_n} \circ \ldots \circ f_{j_1}$ satisfies, for all $ x,y \in J$, $$ e^{c_1} < \frac{f_{\underline j_n}^\prime (x)}{f_{\underline j_n}^\prime (y)} < e^{c_2}. $$ \end{lemma} \begin{proof} The argument generalises the classical argument for a single function, which can be found, for instance, in~\cite[\S 3.2]{MS93}.
More precisely, we define the distortion of~$f$ on the interval $J$ by $$ \varkappa(f,J) := \max_J \log f^\prime - \min_J \log f^\prime. $$ It is easy to see that it is subadditive with respect to composition; in particular, for any $f$, $g$ we have $$ \varkappa(f\circ g, J) \le \varkappa (g,J) + \varkappa(f, g(J)). $$ Since by assumption the $f_j$ are uniformly contracting $C^{1+\varepsilon}$ diffeomorphisms, there exist constants $C = C(\mathcal T)$ and $\lambda<1$ such that $\varkappa(f_j,I) \le C \cdot |I|^\varepsilon$ for any subinterval $I \subset J$ and $f_j^\prime(x) < \lambda$ for all $j = 1, \ldots, n$ and for any $x \in J$. Therefore \begin{multline*} \varkappa(f_{\underline j_n}, J) \le \sum_{k=1}^n \varkappa(f_{j_k}, f_{j_{k-1}}\circ \ldots \circ f_{j_1}(J)) \le C \sum_{k=1}^n |f_{j_{k-1}}\circ \ldots \circ f_{j_1}(J) |^\varepsilon \\ \le C \sum_{k=0}^{n-1} \lambda^{k\varepsilon} |J|^{\varepsilon} \le \frac{ C |J|^\varepsilon}{1-\lambda^\varepsilon}, \end{multline*} and the result follows. \end{proof} \subsection{Proof of Theorem~\ref{t:certificate-D1} } For the convenience of the reader, we recall the statement. \paragraph{Theorem~\ref{t:certificate-D1}.} \emph{Assume that for some $\alpha>0$ there exists a function $\psi \in \mathcal F_{B_r(J)} $ such that for any $x \in B_r(J)$ we have that \begin{equation*} [\mathcal{D}^{(1)}_{\alpha,\mathcal T}\psi](x) < \psi(x). \end{equation*} Then the measure~$\mu$ is $\alpha$--regular. } \begin{proof} Let $I = B_\delta(x) \subset J$ be an interval of length~$|I|=2\delta$. We shall show that there exists $c \in \mathbb R$ such that \begin{equation} \label{eq:d1-proof1} \mu(I) \le c \cdot \delta^\alpha. \end{equation} We will use a random process approach, as in the previous section. Let us first consider the random process \begin{equation} \label{eq:past} F_{\underline \omega_n}(x) = f_{\omega_1}\circ \ldots \circ f_{\omega_n}(x), \end{equation} where the $\omega_m$ are i.i.d. random variables with $\mathbb P(\omega_m = j) = p_j$.
Using the induced action on the stationary measure~$\mu$ we define another random process by \begin{equation} \label{eq:xi} \xi_n(\omega) = \mu\left(F_{\underline \omega_n}^{-1}(I)\right). \end{equation} It follows from the invariance of~$\mu$ that the process~$\xi$ is a martingale. Indeed, \begin{equation*} \mathbb E \xi_n (\omega) = \sum_{\underline j_n} p_{j_1} \ldots p_{j_n} \mu(f_{\underline j_n}^{-1}(I)) = \mu(I) = \xi_0. \end{equation*} By assumption the maps are uniformly contracting, therefore their inverses are uniformly expanding, and by compactness the derivatives are bounded on $B_r(J)$. Let us denote by $\beta_{min} $ and $\beta_{max} $ the lower and the upper bound, respectively: $$ 1 < \beta_{min}:= \inf_{\stackrel{x \in B_r(J)}{1 \le j \le n}} (f_j^{-1})^\prime(x ) \le \sup_{\stackrel{x \in B_r(J)}{1 \le j \le n}} (f_j^{-1})^\prime(x) =: \beta_{max} $$ and consider the stopping time \begin{equation} \label{eq:stopt} T(\omega) = \min\left\{ n \mid F_{\underline \omega_n}^{-1}(x) \not\in B_r(J) \mbox{ or } \left|F_{\underline \omega_n}^{-1}(I)\right| > \frac{r}{\beta_{max}} \right\}. \end{equation} Since the inverse maps expand~$I$ by a factor of at least $\beta_{min}$ per step, while before the stopping time $\left|F_{\underline \omega_n}^{-1}(I)\right| \le \frac{r}{\beta_{max}}$, we have for any $\omega$ $$ T(\omega) \le \frac{\log\left(\frac{r}{2\beta_{max}\delta}\right)}{\log \beta_{min}} + 1. $$ Consider the backward random process associated with the inverses of the diffeomorphisms~$f_j$ \begin{equation} \label{eq:eta} \eta_n (\omega) = \left( ( F_{\underline \omega_n}^{-1})^\prime (x)\right)^\alpha \cdot \psi (F_{\underline \omega_n}^{-1}(x)) . \end{equation} We claim that $\eta$ is a supermartingale.
Indeed, by assumption $$ [\mathcal{D}^{(1)}_{\alpha,\mathcal T} \psi] (F_{\underline \omega_n}^{-1}(x) ) \le \psi (F_{\underline \omega_n}^{-1}(x) ), $$ and therefore, by the chain rule, \begin{align*} \mathbb E (\eta_{n+1} \mid \omega_1 \ldots \omega_n) ={}& \left( (F_{\underline \omega_n} ^{-1})^\prime (x) \right) ^\alpha \cdot\sum_{j_{n+1}=1}^n p_{j_{n+1}} \left( (f^{-1}_{j_{n+1}})^\prime \left( F_{\underline \omega_n} ^{-1} (x)\right) \right)^\alpha \cdot \psi \left(f_{j_{n+1}}^{-1}( F_{\underline \omega_n}^{-1}(x) ) \right) \\ ={}& \left( (F_{\underline \omega_n} ^{-1})^\prime (x) \right)^\alpha \cdot [\mathcal{D}^{(1)}_{\alpha,\mathcal T} \psi] (F_{\underline \omega_n}^{-1}(x) ) \\ \le{}& \left( (F_{\underline \omega_n} ^{-1})^\prime (x) \right)^\alpha \psi(F_{\underline \omega_n}^{-1}(x) ) = \eta_n . \end{align*} In particular, since $T(\omega)$ is finite, \begin{equation} \label{eq:expeta} \mathbb E \eta_{T(\omega)} \le \eta_0 = \psi (x). \end{equation} We next want to consider the expectations $\mathbb E \xi_{T(\omega)}$ and $\mathbb E \eta_{T(\omega)}$. By definition~\eqref{eq:stopt} of $T(\omega)$ at least one of the following events takes place: \begin{align*} A :& = \left[F_{\underline \omega_T}^{-1}(x) \not\in B_r(J)\right], \\ B :& = \left[ F_{\underline \omega_T}^{-1}(x) \in B_r(J) \mbox{ and } \left|F_{\underline \omega_T}^{-1}(I)\right| > \frac{r}{\beta_{max}} \right]. \end{align*} We claim that if~$B$ doesn't occur, then~$\eta_{T(\omega)}=0$ and $ \xi_{T(\omega)}=0$. Indeed, the first follows from~\eqref{eq:eta} and the fact that $\supp \psi \subset B_r(J)$. For the second, note that, by definition~\eqref{eq:stopt} of $T(\omega)$, in this case we have $F_{\underline \omega_{T-1}}^{-1}(x) \in B_r(J)$ and $\left|F_{\underline \omega_{T-1}}^{-1}(I)\right| \le \frac{r}{\beta_{max}} $.
Therefore $$ \left| F_{\underline \omega_T}^{-1}(I)\right| \le \beta_{max} \cdot \frac{r}{\beta_{max}} = r, $$ hence $F_{\underline \omega_T}^{-1}(I) \cap J = \varnothing$, and $\xi_{T(\omega)} = \mu( F_{\underline \omega_T}^{-1}(I))=0$. Now assume that~$B$ occurs. Then $\left| F_{\underline \omega_T}^{-1}(I)\right| \le r$ and therefore $F_{\underline \omega_T}^{-1}(I)\subset B_{2r}(J)$. By Lemma~\ref{lem:dist} applied to the interval $B_{2r}(J)$, we see that there exist~$c_1$ and~$c_2$ such that for any $y\in I$ $$ e^{c_1} \le \frac{(F_{\underline \omega_T}^{-1})^\prime(x)}{(F_{\underline \omega_T}^{-1})^\prime(y)} \le e^{c_2}. $$ Indeed, denoting $\bar x:=(F_{\underline \omega_T}^{-1})(x)$ and $\bar y:=(F_{\underline \omega_T}^{-1})(y)$, we have $\bar x,\bar y\in B_{2r}(J)$ and \[ \frac{(F_{\underline \omega_T}^{-1})^\prime(x)}{(F_{\underline \omega_T}^{-1})^\prime(y)} = \frac{(F_{\underline \omega_T})^\prime(\bar y)}{(F_{\underline \omega_T})^\prime(\bar x)}. \] In particular, since $|I| = 2\delta$, \begin{equation} \label{eq:difbound} (F_{\underline \omega_T}^{-1})^\prime(x) \ge e^{c_1} \frac{|F_{\underline \omega_T}^{-1} (I)|}{|I|} \ge \frac{e^{c_1} r}{2\delta \cdot \beta_{max}}. \end{equation} We have an upper bound for $\mu(I) = \mu(B_\delta(x))$: \begin{equation} \label{eq:01} \mu(I) = \mathbb E \xi_{T(\omega)} = \mathbb E \mu(F_{\underline \omega_T}^{-1}(I)) = \mathbb E \left(\mu(F_{\underline \omega_T}^{-1} (I)) \cdot \mathds{1\!}_B \right) \le \mathbb P(B), \end{equation} since $\mu(F_{\underline \omega_T}^{-1}(I)) \le 1$ and if $B$ doesn't take place, then $\mu(F_{\underline \omega_T}^{-1} (I)) =0$. By the hypothesis of the theorem, there exists $c_3>0$ such that for any $x \in B_r(J)$ we have that $ \frac1{c_3} < \psi(x) < c_3$.
Therefore, using~\eqref{eq:expeta} and~\eqref{eq:difbound}, we obtain \begin{equation} c_3 \ge \psi(x) = \eta_0 \ge \mathbb E \eta_{T(\omega)} = \mathbb E \left( \left( (F_{\underline \omega_T}^{-1} )^\prime (x) \right)^\alpha\cdot \psi (F_{\underline \omega_T}^{-1}(x) ) \right) \ge \frac1{c_3} \cdot \Bigl(\frac{e^{c_1} r}{2\delta \cdot \beta_{max}}\Bigr)^\alpha \cdot \mathbb P(B). \end{equation} In particular, we obtain an upper bound $\mathbb P(B) \le c_4 \cdot \delta^\alpha$, and the desired estimate~\eqref{eq:d1-proof1} follows from~\eqref{eq:01}. \end{proof} \subsection{Proof of Theorem~\ref{t:finding-D1} } Theorem~\ref{t:finding-D1} follows immediately from Proposition~\ref{prop:auxop} and Proposition~\ref{prop:D11} below. We begin with the following lemma, which is analogous to Lemma~\ref{lem:find-D2}. However, in the case of a general iterated function scheme~$\mathcal T(\bar f, \bar p, J)$ we do not know the eigenfunction of $\mathcal{D}^{(1)}_{\alpha,\mathcal T}$. \begin{lemma} \label{lem:find-D1} Let~$\mu$ be the stationary measure of $\mathcal T(\bar f,\bar p, J)$. Assume that $\alpha < D_1(\mu)$. Then there exists $r>0$ such that $$ \lim_{n\to \infty} \bigl\|(\mathcal{D}^{(1)}_{\alpha,\mathcal T})^n \mathds{1\!}_{B_r(J)} \bigr\|_\infty = 0. $$ \end{lemma} \begin{proof} Let us introduce the shorthand notation $I := B_r(J)$ and \begin{equation*} \psi_n := (\mathcal{D}^{(1)}_{\alpha,\mathcal T})^n \mathds{1}_I. \end{equation*} Since $\mathcal{D}^{(1)}_{\alpha,\mathcal T}$ preserves non-negative functions, it is sufficient to show that \begin{equation} \label{eq:psin} \psi_n(x) \to 0 \mbox{ as } n \to \infty \mbox{ uniformly in } x. \end{equation} Let $x \in I$ be fixed.
We may rewrite~$\psi_n$ using the definition of the asymmetric operator~\eqref{eq:dif-oneway} as follows \begin{equation}\label{eq:D-ap} \psi_n (x) = \sum_{\underline j_n} p_{j_1} \ldots p_{j_n} ((F_{\underline j_n}^{-1})^\prime (x))^{\alpha} \mathds{1}_I (F_{\underline j_n}^{-1}(x)), \end{equation} where $F_{\underline j_n}=f_{j_1}\circ \dots \circ f_{j_n}$. By Lemma~\ref{lem:dist} there exists $c_1$ such that for any word $\underline j_n$ and any $y_1, y_2 \in I$ \[ e^{-c_1}<\frac{F_{\underline j_n}^\prime(y_1)}{F_{\underline j_n}^\prime (y_2)}<e^{c_1}. \] In particular, there exists $c_2>0$ such that for any $y \in I$ we have \begin{equation}\label{eq:F-lower} F_{\underline j_n}^\prime (y) \ge c_2 |F_{\underline j_n} (I)| . \end{equation} Let us collect the nonzero terms in the right hand side of~\eqref{eq:D-ap} according to the length of $F_{\underline j_n}(I)$. More precisely, given $\delta>0$ consider the set of words \[ R_{\delta} := \{ \underline j_n \mid x \in F_{\underline j_n}(I), \quad \delta < |F_{\underline j_n} (I)|\le 2\delta \}. \] Note that since~$\mu$ is stationary, $\mu = \sum_{\underline j_n} p_{\underline j_n} (F_{\underline j_n})_* \mu$, and $\supp ((F_{\underline j_n})_* \mu)\subset F_{\underline j_n} (I)$, we have for any word $\underline j_n \in R_{\delta}$ that $ \supp (F_{\underline j_n})_* \mu \subset B_{2\delta}(x)$. Hence $$ \mu(B_{2\delta}(x)) \ge \sum_{ \underline j_n \in R_\delta} p_{\underline j_n} ((F_{\underline j_n})_* \mu)(B_{2\delta}(x)) = \sum_{\underline j_n \in R_\delta} p_{\underline j_n} \cdot 1 = \mathbb P (R_{\delta}). $$ By assumption $\alpha < D_1(\mu)$. Then there exists $ \alpha < \bar \alpha < D_1(\mu) $ such that \begin{equation} \label{eq:probup} \mathbb P (R_{\delta}) \le \mu(B_{2\delta}(x)) \le \const \cdot (2\delta)^{\bar \alpha}.
\end{equation} Note that a term of the sum~\eqref{eq:D-ap} corresponding to a given $\underline j_n$ is nonzero only if $F_{\underline j_n}^{-1}(x)\in I$ or, equivalently, if $x\in F_{\underline j_n}(I)$. It follows from~\eqref{eq:F-lower} that for any $\underline j_n \in R_\delta$ we have \begin{equation} \label{eq:fprimeup} (F_{\underline j_n}^{-1})'(x) = \frac{1}{F_{\underline j_n}'(y)} \le \frac{1}{c_2\delta}, \end{equation} where $y = F_{\underline j_n}^{-1}(x) \in I$. Thus, combining~\eqref{eq:probup} with~\eqref{eq:fprimeup} we get an upper bound for the part of the sum from~\eqref{eq:D-ap} which corresponds to~$\underline j_n \in R_{\delta}$ \begin{equation} \label{eq:sumrd} \sum_{\underline j_n \in R_{\delta}} p_{\underline j_n} ((F_{\underline j_n}^{-1})'(x))^{\alpha} \mathds{1}_I(F_{\underline j_n}^{-1}(x)) \le \mathbb P(R_{\delta}) \cdot \frac{1}{(c_2 \delta)^{\alpha}} \le c_5 \cdot \delta^{\bar\alpha-\alpha}. \end{equation} Let us denote $\delta_n:=\max_{\underline j_n} |F_{\underline j_n}(I)|$. Since by assumption the $f_j$ are uniformly contracting, $\delta_n \to 0 $ as $n \to \infty$ exponentially fast. Then all $R_{\delta}$ corresponding to $\delta > \delta_n$ are empty, and therefore \[ \{ \underline j_n \mid x \in F_{\underline j_n}(I) \}= \bigcup_{k=0}^{\infty} R_{2^{-k} \delta_n}. \] Thus \begin{align*} \psi_n(x) &= (\mathcal{D}^{(1)}_{\alpha,\mathcal T})^n \mathds{1}_I (x) = \sum_{\underline j_n} p_{\underline j_n} ((F_{\underline j_n}^{-1})^\prime(x))^\alpha \mathds{1}_I (F_{\underline j_n}^{-1}(x)) \\ &\le \sum_{k=0}^{\infty} \sum_{\underline j_n \in R_{2^{-k}\delta_n}} p_{\underline j_n} ((F_{\underline j_n}^{-1})'(x))^{\alpha} \mathds{1}_I(F_{\underline j_n}^{-1}(x)) \le \const \cdot \sum_{k=0}^\infty (2^{-k} \delta_n)^{\bar \alpha - \alpha} = \const \cdot \delta_n^{\bar \alpha-\alpha}. \end{align*} Since $\delta_n \to 0$ as $n\to \infty$ and $\bar \alpha > \alpha$, the estimate~\eqref{eq:psin} follows.
\end{proof} Discretization of the operator~$\mathcal{D}^{(1)}_{\alpha,\mathcal T}$ can be defined similarly to the discretization of the symmetric operator $\mathcal{D}^{(2)}_{\alpha, \mathcal S}$ given by~\eqref{eq:D-Lambda}. Let $\mathcal J$ be a partition of $B_r(J)$ into $N$ intervals. We introduce a non-linear finite rank operator \begin{equation} \label{eq:D1-Lambda} \mathcal{D}^{(1)}_{\alpha, \mathcal J} \psi |_{J_k} = \sum_{j=1}^n p_j \sup_{x\in J_k} |(f_j^{-1})^\prime(x)|^\alpha \cdot \sup_{x\in J_k} \psi \left( f_j^{-1}(x) \right), \quad 1 \le k \le N. \end{equation} An analogue of Proposition~\ref{prop:D21} holds for $\mathcal{D}^{(1)}_{\alpha, \mathcal J}$ with obvious modifications. \begin{proposition} \label{prop:D11} Let $\mu$ be the stationary measure of an iterated function scheme of diffeomorphisms $\mathcal T(\bar f ,\bar p, J)$. Assume that $\alpha < D_1(\mu)$. Then there exist: \begin{enumerate} \item a sufficiently small interval $B_r(J)$; \item a sufficiently fine partition $\mathcal J$ of $B_r(J)$; \item a sufficiently large $k \in \mathbb N$; and \item a sufficiently small $\vartheta >0$, \end{enumerate} so that for $A = \mathcal{D}^{(1)}_{\alpha,\mathcal J}$ we have that $\hatop A^k \mathds{1}_{B_r(J)} \prec \mathds{1}_{B_r(J)} $. \end{proposition} \begin{proof} The argument is along the same lines as the proof of Proposition~\ref{prop:D21}. However, in this case we do not know the eigenfunction, so we proceed as follows. By Lemma~\ref{lem:find-D1} there exists $r>0$ such that \begin{equation} \label{eq:propD11:0} \lim_{n\to \infty} \bigl\|(\mathcal{D}^{(1)}_{\alpha,\mathcal T})^n \mathds{1\!}_{B_r(J)} \bigr\|_\infty = 0. \end{equation} Let us consider a continuous function~$\psi$ defined by $$ \psi(x) = \begin{cases} 1, & \mbox{ if } x \in B_{r/2} (J); \\ 0, & \mbox{ if } x \not\in B_{r}(J); \\ \mbox{linear}, & \mbox{ otherwise.
} \end{cases} $$ Since the operator $\mathcal{D}^{(1)}_{\alpha,\mathcal T}$ is monotone on~$\mathcal F_{B_r(J)}$, and $\psi \preccurlyeq \mathds{1}_{B_r(J)}$, it follows from~\eqref{eq:propD11:0} that $$ \lim_{n\to \infty} \bigl\|(\mathcal{D}^{(1)}_{\alpha,\mathcal T})^n \psi \bigr\|_\infty = 0. $$ In particular, we may choose $k$ such that $(\mathcal{D}^{(1)}_{\alpha,\mathcal T})^k \psi \prec \frac12 \mathds 1_{B_r(J)}$. As in the proof of Proposition~\ref{prop:D21}, the function $\psi$ and all its images $(\mathcal{D}^{(1)}_{\alpha,\mathcal T})^m \psi$ are continuous. Hence, as the size~$\varepsilon$ of intervals of the partition~$\mathcal J$ decreases to zero, the images of $\psi$ under the iterations of the finite rank operator converge \[ \sup_{B_r(J)} \left| (\mathcal{D}^{(1)}_{\alpha,\mathcal J})^k \psi - (\mathcal{D}^{(1)}_{\alpha,\mathcal T})^k \psi \right|\to 0. \] Let us denote $A:=\mathcal{D}^{(1)}_{\alpha,\mathcal J}$. Then for a sufficiently fine partition $\mathcal J$, we obtain $$ A^k \psi = (\mathcal{D}^{(1)}_{\alpha,\mathcal J})^k \psi \prec \frac34\mathds{1}_{B_r(J)}. $$ The latter implies for ${\hatop A}$ defined by~\eqref{eq:hatop} \[ \widehat{A}_0^k \psi \preccurlyeq A^k \psi \prec \frac34\mathds{1}_{B_r(J)}. \] Finally, $(\hatop A)^k \psi$ converges uniformly to $(\widehat A_0)^k \psi$ as $\vartheta\to 0$, hence for all $\vartheta$ sufficiently small we get $\widehat{A}_\vartheta^k \psi \prec \frac56\mathds{1}_{B_r(J)}$. On the other hand, by monotonicity ${\hatop A}^k \mathds{1}_{B_{r/2}(J)} \preccurlyeq \widehat{A}_\vartheta^k \psi$. Hence everywhere on $B_{r/2}(J)$ we have $$ {\hatop A}^k \mathds{1}_{B_{r/2}(J)}(x) < \mathds{1}_{B_{r/2}(J)}(x), $$ and thus with respect to partial order on $\mathcal F_{B_{r/2}(J)}$ \[ {\hatop A}^k \mathds{1}_{B_{r/2}(J)} \prec \mathds{1}_{B_{r/2}(J)}. \] \end{proof} Numerical experiments show that the Frostman dimension behaves differently to correlation dimension and to Hausdorff dimension. 
We give estimates for multinacci numbers for comparison in Table~\ref{tab:frost}. In particular, it appears that in the case of Bernoulli convolutions the measure $\mu_\lambda$ corresponding to the root of $x^4-x^3-x^2-x-1$ has a smaller regularity exponent than the measure corresponding to the root of $x^3 - x^2 -x-1$. \begin{table} \centering \begin{tabular}{|cccc|} \hline $n$ & $\dim_H(\mu_\lambda)$ & $\alpha_2 < D_2(\mu)$ & $\alpha_1 < D_1(\mu)$ \\ $2$ & $ 0.995713126685$ & $ 0.992395833333 $ & $0.940215301807$ \\ $3$ & $ 0.980409319534$ & $ 0.964214555664 $ & $0.853037293349$ \\ $4$ & $ 0.986926474333$ & $ 0.973324567994 $ & $0.844963475586$ \\ $5$ & $ 0.992585300274$ & $ 0.983559570313 $ & $0.854479046521$ \\ $6$ & $ 0.996032591584$ & $ 0.990673828125 $ & $0.866685890042$ \\ $7$ & $ 0.997937445507$ & $ 0.994959490741 $ & $0.880046136101$ \\ $8$ & $ 0.998944915449$ & $ 0.997343750000 $ & $0.891195693964$ \\ $9$ & $ 0.999465368055$ & $ 0.998640046296 $ & $0.900999615532$ \\ \hline \end{tabular} \caption{Hausdorff dimension and lower bounds on the correlation and Frostman dimensions for the multinacci parameter values, i.e.\ the largest roots of $x^n - x^{n-1} - \ldots -x -1$. } \label{tab:frost} \end{table} \begin{remark} Nevertheless, the nonsymmetric operator can be applied to Bernoulli convolution measures to show that at the other end of the range of parameters there exist $c > 0$ and $ \varepsilon > 0$ such that $\dim_H(\mu_\lambda) \geq 1-\frac{c}{\log (\lambda-\frac{1}{2})^{-1}} $ for $\frac{1}{2}< \lambda < \frac{1}{2}+ \varepsilon$. Indeed, it suffices to apply $N=\lfloor\log_2 (\lambda-\frac{1}{2})^{-1}\rfloor$ iterations of $\mathcal{D}^{(1)}_{\alpha,\mathcal{S}}$ to the initial function $\mathds{1}_{[-0.1,2.1]}$.
The scalar factor $2^{-N}\lambda^{-N\alpha}$ will be no larger than $\frac14$, while each point of $J=[-0.1,2.1]$ will be covered by at most two images of~$J$ under compositions of~$N$ maps of the iterated function scheme which corresponds to Bernoulli convolutions as defined in Example~\ref{ex:BCsystem}: $f_{0}(x)=\lambda x$ and $f_1(x)=\lambda x+1$. \end{remark}
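To make the operator $\mathcal{D}^{(1)}_{\alpha,\mathcal S}$ concrete, here is a small numerical sketch (an illustration on a grid with linear interpolation, not the computation behind Table~\ref{tab:frost}). It uses the Bernoulli convolution branches in the sign convention $f_0(x)=\lambda x$, $f_1(x)=\lambda x + 1$, for which $J=[-0.1,2.1]$ is forward-invariant, and checks the change-of-variables identity underlying the $L^1$/density example above: for $\alpha = 1$ each application of $\mathcal{D}^{(1)}_{1,\mathcal S}$ preserves the total mass $\int \psi$.

```python
import numpy as np

lam = 0.51
x = np.linspace(-1.0, 3.0, 400001)

def D1(psi, alpha):
    """One step of the asymmetric diffusion operator on grid values of psi.
    For the affine branches f_j(x) = lam*x + d_j one has
    (f_j^{-1})'(x) = 1/lam, and we take p_0 = p_1 = 1/2."""
    out = np.zeros_like(psi)
    for d in (0.0, 1.0):
        out += 0.5 * lam ** (-alpha) * np.interp((x - d) / lam, x, psi,
                                                 left=0.0, right=0.0)
    return out

psi = np.where(np.abs(x - 1.0) <= 1.1, 1.0, 0.0)   # indicator of J = [-0.1, 2.1]
mass = np.trapz(psi, x)
for _ in range(5):
    psi = D1(psi, alpha=1.0)
# for alpha = 1 each step is a convex combination of changes of variables,
# so the total mass is preserved up to interpolation error
assert abs(np.trapz(psi, x) - mass) < 1e-2 * mass
```

For $\alpha < 1$ the same loop, with the number of iterations and $\lambda$ chosen as in the remark, is exactly the certificate computation sketched there.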
\section{The simple harmonic oscillator}\label{sec:SHO} The discussion in this section, which partially follows {Ref.~\cite{Baumann}}, gives a short review of the standard discussion on the simple harmonic oscillator (SHO). Of particular interest is the result obtained in Eq.~\eqref{ModeSolution}, which is referred to in the subsequent sections while specifying the quantization procedure of the scalar perturbations of the metric $g_{\mu\nu}$ and the scalar field $\phi$. The action for the simple harmonic oscillator is given by \begin{equation} S_{\mathrm{\tiny SHO}} = \int \text{d} t \,\mathcal{L}_{\mathrm{SHO}},\quad\text{where}\quad \mathcal{L}_{\mathrm{SHO}}= \frac{{\dot{x}}^2}{2} - \frac{1}{2}\omega^2 x^2, \end{equation} with $x$ satisfying the equation of motion (EOM) \begin{equation}\label{SHO} \ddot{x}+\omega^2 x=0. \end{equation} Upon quantization, $x$ is treated as an operator $\hat{x}$, which can be written in terms of the creation $\hat a^\dag$ and annihilation $\hat a$ operators as \begin{equation}\label{SHOModeExpansion} \hat{x} = v(t)\hat{a}+v^{*}(t)\hat{a}^{\dagger}. \end{equation} {Equation~\eqref{SHO}} for the position operator $\hat{x}$ implies that the mode $v(t)$ in Eq.~\eqref{SHOModeExpansion} satisfies \begin{equation}\label{ModeEquation} \ddot{v}+\omega^2 v=0 {\implies} v(t)= A\exp(-i\omega t)+B\exp(i\omega t). \end{equation} Moreover, the commutation relation between $\hat{x}$ and its canonical conjugate $\hat{p}=\frac{\partial\mathcal{L}}{\partial\dot{\hat{x}}}=\dot{\hat{x}}$, yields an additional constraint on the mode $v$ \begin{equation}\label{ConstraintOne} v{\dot{v}}^{*} - v^{*}\dot{v} = i {\implies} |A|^2 - |B|^2 = \frac{1}{2\omega}. \end{equation} Here, we have set $\hbar=1$, and used the fact that $\left[\hat{a},\hat{a}^{\dagger}\right]=1$. 
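As a quick cross-check of the algebra above (a side computation, not part of the standard derivation), one can verify Eq.~\eqref{ModeEquation} and the Wronskian identity behind Eq.~\eqref{ConstraintOne} with a computer algebra system:

```python
import sympy as sp

t, omega = sp.symbols('t omega', positive=True)
A, B = sp.symbols('A B')   # complex mode coefficients of Eq. (ModeEquation)
v = A * sp.exp(-sp.I * omega * t) + B * sp.exp(sp.I * omega * t)
vc = sp.conjugate(v)

# the general solution satisfies the mode equation v'' + omega^2 v = 0
assert sp.simplify(sp.diff(v, t, 2) + omega**2 * v) == 0

# the Wronskian combination of Eq. (ConstraintOne) is time independent and
# equals 2*i*omega*(|A|^2 - |B|^2); setting it to i reproduces
# |A|^2 - |B|^2 = 1/(2*omega)
w = sp.expand(v * sp.diff(vc, t) - vc * sp.diff(v, t))
assert sp.simplify(
    w - 2 * sp.I * omega * (A * sp.conjugate(A) - B * sp.conjugate(B))) == 0
```

The oscillatory cross terms $\propto e^{\pm 2i\omega t}$ cancel identically, which is why the left-hand side of Eq.~\eqref{ConstraintOne} can consistently be set to the constant $i$.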
In terms of the creation and annihilation operators, the Hamiltonian $\hat{H}_{\mathrm{SHO}} = \frac{1}{2}\hat{p}^2+\frac{1}{2}\omega^2\hat{x}^2$ reads \begin{equation} \hat{H}_{\mathrm{SHO}} = \left(\frac{\dot{v}^2+\omega^2 v^2}{2}\right)\hat{a}\hat{a}+\left(\frac{{\dot{v}}^{*2}+\omega^2v^{*2}}{2}\right)\hat{a}^{\dagger}\hat{a}^{\dagger}+\left(\frac{{|\dot{v}|}^2+\omega^2|v|^2}{2}\right)\left(\hat{a}\hat{a}^{\dagger}+\hat{a}^{\dagger}\hat{a}\right). \end{equation} The vacuum state $\ket{0}$ of the annihilation operator $\hat{a}$ is defined as the state that satisfies $\hat{a}\ket{0}=0.$ Demanding that this state is an eigenstate of the Hamiltonian yields an additional constraint on the mode $v$ \begin{equation}\label{ConstrainTwo} \dot{v}^2+\omega^2 v^2=0 {\implies} \dot{v}= \pm i\omega v. \end{equation} The fact that $\dot{v}\propto v$, implies that either $A$ or $B$ in Eq.~\eqref{ModeEquation} is zero. From Eq.~\eqref{ConstraintOne} it can be seen that $A$ cannot be zero, and therefore, the solution of the mode $v(t)$ reads \begin{equation}\label{ModeSolution} v(t)=\frac{1}{\sqrt{2\omega}}\exp(-i\omega t). \end{equation} The standard result obtained in Eq.~\eqref{ModeSolution} specifies the complete time evolution of the operator $\hat{x}(t)$ in terms of the creation and annihilation operators. It will also be useful for completing the quantization of the scalar perturbations in section \ref{sec:PertQuant}. \section{Single field inflation}\label{sec:Single Field Inflation} \subsection{Scalar field in FLRW Universe} Throughout our calculations, we work with a $(-,+,+,+)$ signature, and in reduced Planck units with $\hbar=c=1$, $M_{\mathrm{P}}\equiv 1/\sqrt{8\pi G}$. 
The action for a scalar field $\phi$ with a generic metric $g_{\mu\nu}$ reads \begin{equation} S =\int \text{d}^4x\sqrt{-g}\left[\frac{M^2_{\mathrm{P}}}{2}R-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi - V(\phi)\right], \label{action} \end{equation} where the scalar potential $V(\phi)$ is assumed to have the properties necessary for slow-roll inflationary dynamics. In addition, $R$ and $g$ are the Ricci scalar and the determinant of the metric tensor $g_{\mu\nu}$, respectively. The variation of the action in Eq.~\eqref{action} with respect to the scalar field $\phi$ leads to the Klein-Gordon equation, given by \begin{equation}\label{KGGeneral} \partial_{\mu}\left[\sqrt{-g}g^{\mu\nu}\partial_{\nu}\phi\right] - \sqrt{-g}V,_{\phi}=0. \end{equation} As mentioned before, the full inflationary dynamics is studied by decomposing the scalar field and the metric into a classical background part $\bar{\phi}$ and $\bar{g}_{\mu\nu}$ and their respective perturbations $\delta\phi$ and $\delta g_{\mu\nu}$. Therefore, we have \begin{equation}\label{background split} {\phi=\bar{\phi}+\delta\phi\,,\qquad g_{\mu\nu}=\bar{g}_{\mu\nu} + \delta g_{\mu\nu}}. \end{equation} {The classical background dynamics of $\bar{\phi}$ and $\bar{g}_{\mu\nu}$ is computed first, independently of the perturbations $\delta\phi$ and $\delta g_{\mu\nu}$. In order to do so, one has to make a choice for the background metric. Following the standard procedure, we take this to be the flat Friedmann-Lema\^itre-Robertson-Walker (FLRW) metric \begin{equation} g_{\mu\nu}=a^2(\eta)\eta_{\mu\nu}, \end{equation} where $\eta$ is the conformal time, and $\eta_{\mu\nu}$ the Minkowski metric.
On this background, and for a homogeneous background field $\phi(\eta)$, the two Friedmann equations and the Klein-Gordon equation read \begin{align} 3 M^2_{\mathrm{P}}h^2 ={}& \frac{\dot{\phi}^2}{2}+a^2V\left(\phi\right),\label{Friedmann1}\\ h^2 - \dot{h} ={}& \frac{\dot{\phi}^2}{2 M^2_{\mathrm{P}}},\label{Friedmann2}\\ \ddot{\phi}+2h\dot{\phi}+a^2V,_{\phi}={}&0.\label{KGBack} \end{align} In the above equations, we have defined $h\equiv\dot{a}/a$, and the dot denotes a derivative with respect to the conformal time $\eta$. We work with a convention where $\partial_i$ denotes the derivative with respect to the spatial comoving coordinate $x^i$ and $V,_{\phi}\equiv \partial V/\partial\phi$. For a suitable choice of $V(\phi)$, the solutions to Eqs.~\eqref{Friedmann1}--\eqref{KGBack} yield $\phi(\eta)$ and $a(\eta)$ such that, until the end of inflation, $V(\phi(\eta))$ remains (almost) constant and the scale factor $a$ expands exponentially starting from its initial value at the beginning of inflation. \subsection{Action for the perturbations} Next, we compute the dynamics of the perturbations of the scalar field $\delta\phi (\eta,\boldsymbol{x})$ and the metric $\delta g_{\mu\nu}(\eta,\boldsymbol{x})$ over the flat FLRW background metric. To first order, the scalar field perturbation $\delta\phi (\eta,\boldsymbol{x})$ couples only to the scalar perturbations of the metric \cite{Bardeen1980, Uzan}. On a flat FLRW metric, the most general form of the metric including scalar perturbations is given by \cite{Bardeen1980,Sasaki1986,Uzan} \begin{equation}\label{metricgeneric} \mathrm{d}s^2=a^2(\eta)\left[-\left(1+2A\right)\mathrm{d}\eta^2+2 B_{,i}\mathrm{d}x^i\mathrm{d}\eta+\left((1+2\psi)\delta_{ij}+2E,_{ij}\right)\mathrm{d}x^i\mathrm{d}x^{j}\right].
\end{equation} Two of the functions among $A(\eta,\boldsymbol{x})$, $B(\eta,\boldsymbol{x})$, $\psi(\eta,\boldsymbol{x})$, $E(\eta,\boldsymbol{x})$ can be removed by working in a specific gauge \cite{lyth_liddle_2009,Riotto,Uzan}. In order to derive the EOM governing the dynamics of perturbations, it is convenient to work in a gauge where $\psi=0$ and $B=0$ \cite{Nakamura:1996da}. In this gauge, the metric reduces to \begin{equation}\label{metricGauge} \mathrm{d}s^2=a^2(\eta)\left[-\left(1+2A\right)\mathrm{d}\eta^2+\left(\delta_{ij}+2E,_{ij}\right)\mathrm{d}x^i\mathrm{d}x^{j}\right]. \end{equation} After taking the Fourier transform of the perturbations over the comoving coordinates $x^i$, the matrix representation of the metric $g_{\mu \nu}$ reads \begin{equation}\label{Perturbed Metric} g_{\mu\nu}(\k)=\bar{g}_{\mu\nu}+\delta g_{\mu\nu}(\k), \end{equation} where we have \begin{equation}\label{Perturbed Metric Matrix} {\bar{g}}_{\mu\nu}= \begin{pmatrix} -a^2(\eta)& 0\\ 0 & \delta_{ij} a^2(\eta) \end{pmatrix},\qquad \delta g_{\mu\nu}(\k)= \begin{pmatrix} -2A_{\k} a^2(\eta)& 0\\ 0 & \left(-2k_ik_j E_{\k}\right)a^2(\eta) \end{pmatrix}. \end{equation} In the matrix form the off-diagonal zeros represent the space-time $g_{0i}$ components of the metric. The perturbed Klein-Gordon equation is obtained by perturbing Eq.~\eqref{KGGeneral}, and retaining only the terms that are first order in the perturbations $A$, $E$, and $\delta\phi$. It is given by \begin{equation}\label{KGPerturbed1} \partial_{\mu}\left[(\delta\sqrt{-g})\bar{g}^{\mu\nu}\partial_{\nu}\bar{\phi}\right]+ \partial_{\mu}\left[\sqrt{-\bar{g}}(\delta g^{\mu\nu})\partial_{\nu}\bar{\phi}\right]+\partial_{\mu}\left[\sqrt{-\bar{g}}\bar{g}^{\mu\nu}\partial_{\nu}(\delta\phi)\right]- (\delta\sqrt{-g})V,_{\bar{\phi}}- \sqrt{-\bar{g}}V,_{\bar{\phi}\bar{\phi}}\delta\phi=0, \end{equation} where the overhead bar denotes the unperturbed version of the variable. 
By imposing the homogeneity of the background scalar field $\bar{{\phi}}$ (i.e. $\partial_{i}\bar{\phi}=0$), and by using Eq.~\eqref{Perturbed Metric Matrix} and Eq.~\eqref{KGBack} in Eq.~\eqref{KGPerturbed1}, one can simplify the latter, which in Fourier space becomes \begin{equation}\label{KGPerturbed2} \ddot{\delta\phi}_{\k}+2h\dot{\delta\phi}_{\k}+(k^2+a^2 V,_{\bar{\phi}{\bar{\phi}}})\delta\phi_{\k}+2A_{\k}a^2V,_{\bar{\phi}} - \dot{\bar{\phi}}(\dot{A_{\k}}+k^2 \dot{E_{\k}})=0\,, \end{equation} where $k^2\equiv \k\cdot\k$, and $\delta\phi_{\k}$ is the Fourier component of the inhomogeneous field perturbation $\delta\phi(\x,\eta)$. Using Einstein's equations $\delta G_{\mu\nu}= \frac{1}{M^2_{\mathrm{P}}}\delta T_{\mu\nu}$ for the scalar perturbations, the metric perturbations $A_{\k}$ and $E_{\k}$ can both be expressed in terms of $\delta\phi_{\k}$ and its derivative as \cite{Nakamura:1996da} \begin{align} 2A_{\k} ={}&\frac{\dot{\bar{\phi}}\delta\phi_{\k}}{M^2_{\mathrm{P}}h},\label{Einstein1}\\ \dot{A_{\k}}+ k^2\dot{E_{\k}}={}&\frac{\delta\phi_{\k}}{M^2_{\mathrm{P}}}\frac{d}{d\eta}\left(\frac{\dot{\bar{\phi}}}{h}\right)\label{Einstein2}. \end{align} Substituting Eqs.~\eqref{Einstein1}, \eqref{Einstein2} and \eqref{KGBack} into Eq.~\eqref{KGPerturbed2}, we get \begin{equation} \ddot{\delta\phi_{\k}}+2h\dot{\delta\phi_{\k}}+\left[k^2+a^2 V,_{\phi\phi}-\frac{1}{M^2_{\mathrm{P}}a^2}\frac{d}{d\eta}\left(\frac{a^2\dot{\phi}^2}{h}\right)\right]\delta\phi_{\k}=0, \end{equation} where we have dropped the bar over the unperturbed classical background field $\bar{\phi}$. Defining the rescaled field variable $u_{\k}$ as \begin{equation} u_{\k}=a \delta \phi_{\k}, \end{equation} the friction term proportional to $\dot{\delta\phi}_{\k}$ disappears, and the EOM for $u_{\k}$ becomes \begin{equation}\label{MukhSasPart1} \ddot{u}_{\k}+\left[k^2+a^2V,_{\phi\phi}-\frac{\ddot{a}}{a}-\frac{1}{M^2_{\mathrm{P}}a^2}\frac{d}{d\eta}\left(\frac{a^2\dot{\phi}^2}{h}\right)\right]u_{\k} =0.
\end{equation} Furthermore, we define the factor $z$ as \begin{equation}\label{ZFactor} z \equiv a M_\mathrm{P}\sqrt{2 \epsilon}/c_s\,, \end{equation} where $c_s$ is the speed of sound ($c_s=1$ during inflation, and $c_s=1/\sqrt{3}$ during the radiation dominated era). In terms of the cosmic time $dt = a(\eta)d\eta$, the slow roll parameter $\epsilon$ is given by \begin{equation}\label{SlowRollOne} \epsilon={} -\frac{1}{H^2}\frac{\text{d} H}{\text{d} t}, \end{equation} where $H$ is the Hubble parameter $H=a^{-1}\frac{d}{dt}a$. Using Eq.~\eqref{Friedmann1} and Eq.~\eqref{Friedmann2} during inflation we get $z={a\dot{\phi}}/{h}$. Using Eq.~\eqref{KGBack} and Eq.~\eqref{Friedmann2}, as well as the equations resulting from taking their time derivative, Eq.~\eqref{MukhSasPart1} takes the compact form \begin{equation}\label{MukhSas2} \ddot{u}_{\k}+\left(k^2 - \frac{\ddot{z}}{z}\right)u_{\k}=0. \end{equation} This equation, governing the dynamics of $u_\k$, can also be obtained from the following action functional \begin{equation}\label{ActionPert} \delta S^{(2)} = \frac{1}{2}\int \text{d}\eta\int \text{d} \x \left[\dot{u}^2-\delta^{ij}\partial_{i}u\partial_j u+\frac{\ddot{z}}{z}u^2\right]\,, \end{equation} which can be computed independently by considering the action of Eq.~\eqref{action} to second order in the scalar perturbations \cite{Mukhanov1988, Sasaki1986, MUKHANOV1992203}. We make a short comment here about our choice of the gauge, where $\psi=B=0$. In this gauge, the gauge-invariant field perturbation $\delta\phi_{G}$, which is defined as \cite{Sasaki1986,Mukhanov1988} \begin{equation}\label{MukhSasVar} \delta\phi_{G}\equiv\delta\phi - \dot{\phi}\frac{\psi}{h}, \end{equation} is equal to $\delta\phi$ since $\psi=0$. Thus, in this gauge, one can also identify $u$ with its corresponding gauge-invariant rescaled field $u_G\equiv a\delta\phi_{G}$. In general, one can work with the gauge-invariant variable $\delta\phi_{G}$ defined in Eq.~\eqref{MukhSasVar} from the very beginning and arrive at the gauge-invariant version of the action in Eq.~\eqref{ActionPert} with $u$ replaced by $u_G$ \cite{Sasaki1986, Mukhanov1988, MUKHANOV1992203}. From now on, we will be working with the gauge-invariant variables, but without explicitly retaining the index `$G$' in the subscript. \subsection{Quantizing the perturbations}\label{sec:PertQuant} In order to allow for sufficient inflation, a successful inflationary model must have an almost flat potential $V(\phi)$, with the scalar field slowly rolling down its small but finite slope \cite{Riotto, lyth_liddle_2009, Uzan}. Equation \eqref{Friedmann1} translates this demand into a dynamics in which the rate of change of the Hubble parameter is small. The slow roll dynamics requires $\epsilon$ to be small. The smallness of $\epsilon$ can be qualitatively understood as the condition that the `kinetic energy' of the scalar field $\dot{\phi}^2/a^2$ is much smaller than the `potential energy' $V(\phi)$. The smallness of the kinetic term follows from Eq.~\eqref{KGBack} if one chooses a suitable inflationary potential with negligible first and second derivatives, $\left|M^2_{\mathrm{P}}\left(V,_{\phi}/V\right)^2\right|\ll 1$ and $|M^2_{\mathrm{P}}V,_{\phi\phi}/V|\ll 1$. In conformal time, we have $\epsilon = \dot{\phi}^2/(2M^2_{\mathrm{P}}h^2)$. Since in the slow-roll approximation both the kinetic term and the acceleration of the scalar field can be neglected, we take $\epsilon\ll 1$ and treat $\epsilon$ as a constant. In this limit one obtains, for a perfect de Sitter spacetime, \begin{equation} a(\eta) \approx -1/({H_{\text{inf}}} \eta), \end{equation} where we have approximated $H_{\text{inf}}$ to be a constant, which follows from Eq.~\eqref{Friedmann1} for a flat potential $V$ under the slow-roll approximation.
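A quick finite-difference check (illustrative, not part of the text) that the de Sitter scale factor $a(\eta)=-1/(H_{\text{inf}}\eta)$ has the properties used in what follows, namely $h=\dot{a}/a=-1/\eta$ and $\ddot{a}/a=2/\eta^2$:

```python
import numpy as np

# Numerical verification that a(eta) = -1/(H_inf * eta) gives
# h = a'/a = -1/eta and a''/a = 2/eta^2 (de Sitter relations).
H_inf = 1.0e-5                         # illustrative value
eta = np.linspace(-100.0, -1.0, 200001)
a = -1.0 / (H_inf * eta)

a_p = np.gradient(a, eta)              # a'  (central differences)
a_pp = np.gradient(a_p, eta)           # a''

# Interior points only, away from the one-sided boundary stencils
assert np.allclose(a_p[5:-5] / a[5:-5], -1/eta[5:-5], rtol=1e-6)
assert np.allclose(a_pp[5:-5] / a[5:-5], 2/eta[5:-5]**2, rtol=1e-3)
```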
Under the same approximation scheme we obtain the standard expressions \begin{equation}\label{SlowRollApprox} \frac{\ddot{z}}{z}\approx \frac{\ddot{a}}{a} \approx \frac{2}{\eta^2}. \end{equation} Using the result of Eq.~\eqref{SlowRollApprox} in Eq.~\eqref{MukhSas2} we get \begin{equation}\label{MukhSas2PDS} \ddot{u}_k+\left(k^2 - \frac{2}{\eta^2}\right)u_k =0. \end{equation} In analogy to the SHO mode expansion in Eq.~\eqref{SHOModeExpansion}, upon quantization the status of the classical perturbation $u(\eta,\x)$ is raised to that of a field operator $\hat{u}(\eta,\x)$ which can be written as \begin{equation} \hat{u}(\eta,\x)={}\int \frac{\text{d}\k}{\left(2\pi\right)^{3/2}}\exp(i\k\cdot\x)\hat{u}_{\k}(\eta),\label{CanonicalVariable} \end{equation} where the Fourier component $\hat{u}_{\k}(\eta)$ is given in terms of the modes $v_k(\eta)$, and the creation and annihilation operators as \begin{equation} \hat{u}_{\k}(\eta)={} v_k(\eta)\hat{a}_{\k} + v^*_k(\eta)\hat{a}^\dagger _{-\k}. \label{fieldopu} \end{equation} The general solution for the modes $v_k(\eta)$ is given by \begin{equation} v_k(\eta) = \frac{A\left(-\frac{\cos(k\eta)}{k\eta}-\sin(k\eta)\right)}{\sqrt{2k}}+\frac{B\left(-\cos(k\eta)+\frac{\sin(k\eta)}{k\eta}\right)}{\sqrt{2k}}. \end{equation} To fix the free parameters $A$ and $B$, one imposes the Bunch-Davies vacuum condition \cite{Riotto,lyth_liddle_2009,Uzan}. This condition demands that the system be in the ground state $|0\rangle$ as $\eta\rightarrow -\infty$, which must then also be an eigenstate of the Hamiltonian corresponding to the action functional for $u$ [cf.~Eq.~\eqref{ActionPert}]. Since the differential equation satisfied by the modes in this limit is identical to that of an SHO [cf. Eq.~\eqref{ModeEquation}], following a procedure analogous to the one described in section \ref{sec:SHO}, we must have \begin{equation} \left.v_k(\eta)\right|_{\eta\rightarrow -\infty} = \frac{\exp(-i k\eta)}{\sqrt{2k}}. \end{equation} This gives $B=-1$ and $A=i$.
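One can verify numerically (a sanity check, not part of the text) that the Bunch-Davies choice $A=i$, $B=-1$ turns the general solution above into the closed-form de Sitter mode of Eq.~\eqref{modeDS}, and that this mode indeed solves Eq.~\eqref{MukhSas2PDS}:

```python
import numpy as np

k = 1.3                                     # illustrative comoving wavenumber
eta = np.linspace(-20.0, -1.0, 2000)

# General solution with the Bunch-Davies coefficients A = i, B = -1
A, B = 1j, -1.0
v = (A * (-np.cos(k*eta)/(k*eta) - np.sin(k*eta))
     + B * (-np.cos(k*eta) + np.sin(k*eta)/(k*eta))) / np.sqrt(2*k)

# Closed-form de Sitter mode, Eq. (modeDS)
v_ds = np.exp(-1j*k*eta) * (1 - 1j/(k*eta)) / np.sqrt(2*k)
assert np.allclose(v, v_ds)

# The mode solves v'' + (k^2 - 2/eta^2) v = 0 (finite-difference residual)
d2v = np.gradient(np.gradient(v, eta), eta)
residual = d2v + (k**2 - 2/eta**2) * v
assert np.max(np.abs(residual[5:-5])) < 1e-2
```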
In the perfect de Sitter limit, the full solution $v_k(\eta)$ then reads \cite{Uzan,lyth_liddle_2009} \begin{equation} v_k(\eta)=\frac{e^{-i\eta k} \left(1-\frac{i}{\eta k}\right)}{(2k)^{1/2}}. \label{modeDS} \end{equation} \subsection{Power spectrum} The power spectrum $\mathcal{P}_{u}$ of the operator $\hat{u}(\eta,\x)$ is defined to be \cite{Riotto,lyth_liddle_2009,Baumann,Uzan} \begin{equation} \left\langle0|\hat{u}^2(\eta,\x)\right|0\rangle \equiv \int \text{d}\ln(k)\,\mathcal{P}_{u}(k,\eta). \label{PowerSpectrumdef} \end{equation} The expectation value $\left\langle0|\hat{u}^2(\eta,\x)\right|0\rangle$ is independent of $\x$. Indeed, using Eq.~\eqref{CanonicalVariable} along with Eq.~\eqref{fieldopu}, we get \begin{equation}\label{MeanSqauredExp} \left\langle0|\hat{u}^2(\eta,\x)\right|0\rangle =\int\int \frac{\text{d}\k \text{d}\k'}{(2\pi)^3} e^{i(\k+\k')\cdot \x}v_k(\eta)v^{*}_{k'}(\eta)\langle 0|\hat{a}_{\k}\hat{a}^{\dagger}_{-\k'}|0\rangle. \end{equation} Using the commutation relation of the creation and annihilation operators, we see that the expectation value is independent of $\x$ and that the power spectrum $\mathcal{P}_{u}(k,\eta)$ is given by \begin{equation} \mathcal{P}_{u}(k,\eta) = \frac{k^3}{2\pi^2}|v_k(\eta)|^2. \end{equation} In the superhorizon limit $k|\eta|\ll 1$, i.e. when we consider a cosmological perturbation of wavelength larger than the length scale $1/(aH_\text{inf})$, the expression for the mode in Eq.~\eqref{modeDS} simplifies to \cite{Uzan} \begin{equation} v_k(\eta) \overset{k|\eta|\ll 1}{\approx} -\frac{i}{\sqrt{2}}\frac{1}{\eta k^{3/2}}. \end{equation} Using this result in Eq.~\eqref{PowerSpectrumdef}, we obtain that the power spectrum in the superhorizon limit is given by \begin{equation}\label{PowerSpectrumModes} \mathcal{P}_{u}(k,\eta)\approx\frac{1}{\left(2\pi\eta\right)^2}.
\end{equation} A central quantity of interest for computing inflationary observables is the comoving curvature perturbation $\hat{\mathcal{R}}$, which is related to $\hat{u}$ as \begin{equation} \hat{\mathcal{R}}\equiv \hat{u}/z. \label{defcomcurvpert} \end{equation} The power spectrum $\mathcal{P}_\mathcal{R}$ of $\hat{\mathcal{R}}$ can therefore be obtained from the power spectrum of $\mathcal{P}_{u}$ as $\mathcal{P}_{\mathcal{R}}=\mathcal{P}_{u}/z^2$. More explicitly, it is given by \begin{equation} \mathcal{P}_{\mathcal{R}} \approx \frac{1}{2\epsilon M^2_{\mathrm{P}}}\left(\frac{H}{2\pi}\right)^2. \end{equation} We {point out} that, for a perfect de Sitter solution of the modes [cf.~Eq.~\eqref{modeDS}], this result is independent of both the conformal time $\eta$ and the scale $k$. As remarked in the main text, in a more general treatment, where the {slow-roll} dependence is taken into account in the time evolution of the Hubble parameter, the power spectrum acquires a mild scale dependence. The power spectrum is then parametrized as \cite{Riotto,lyth_liddle_2009,Uzan} \begin{equation} {\mathcal{P}_{\mathcal{R}}=A^*_{\mathcal{R}}\left(\frac{k}{k_{*}}\right)^{n^*_{\mathcal{R}}-1}.\label{PowerLawPh}} \end{equation} The values of $A^*_{\mathcal{R}}$ and $n^*_{\mathcal{R}}$ are constrained by Planck data \cite{Akrami2018} at the pivot scale $k_{*}=0.05\,\mathrm{Mpc^{-1}}$ to be \begin{equation} {A_{\mathcal{R}}^{*}= \left(2.099\pm0.014\right)\times 10^{-9},\quad\text{and}\quad n_{\mathcal{R}}^{*}= 0.9649 \pm 0.0042,\label{PlanckData}} \end{equation} at the $68\%$ confidence level. As mentioned in the main text, we ignore the scale dependence of the power spectrum $\mathcal{P}_{\mathcal{R}}$, and work with a perfect de Sitter solution for the modes reported in Eq.~\eqref{modeDS}. 
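As a numerical cross-check of the superhorizon expressions (illustrative parameter values; not part of the text), one can confirm that $\mathcal{P}_{u}=k^3|v_k|^2/2\pi^2$ approaches $1/(2\pi\eta)^2$ for $k|\eta|\ll 1$, and that $\mathcal{P}_{u}/z^2$ reproduces the de Sitter result for $\mathcal{P}_{\mathcal{R}}$ quoted above:

```python
import numpy as np

# Superhorizon check of the power spectrum (illustrative parameter values).
eta = -1.0e3                  # conformal time during inflation
k = 1.0e-5                    # comoving mode with k*|eta| = 0.01 << 1

v = np.exp(-1j*k*eta) * (1 - 1j/(k*eta)) / np.sqrt(2*k)   # Eq. (modeDS)
P_u = k**3 / (2*np.pi**2) * np.abs(v)**2
P_u_superhorizon = 1 / (2*np.pi*eta)**2
assert abs(P_u / P_u_superhorizon - 1) < 1e-3   # P_u -> 1/(2 pi eta)^2

# P_R = P_u / z^2 reproduces (1/(2 eps M_P^2)) (H/2pi)^2 on superhorizon scales
eps, M_P, H_inf = 0.01, 1.0, 1.0e-5
a = -1 / (H_inf * eta)                          # de Sitter scale factor
z = a * M_P * np.sqrt(2*eps)                    # c_s = 1 during inflation
P_R = P_u / z**2
assert abs(P_R / (H_inf**2 / (8*np.pi**2*eps*M_P**2)) - 1) < 1e-3
```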
Since the correction induced by the collapse dynamics on $A^*_{\mathcal{R}}$ will turn out to be several orders of magnitude below $10^{-9}$, the scale dependence of the correction term is irrelevant for our purposes. \section{Dynamical collapse models}\label{sec:Dynamical Collapse Model} Dynamical collapse models are phenomenological models which modify the standard Schr\"odinger evolution through the addition of nonlinear and stochastic terms. The nonlinearity allows the breakdown of superpositions during a measurement, while the stochasticity is necessary in order to avoid faster-than-light signalling \cite{Bassi2013}. To correctly describe both the dynamics of microscopic systems, which are successfully described by quantum mechanics, and that of classical macroscopic systems, one requires that these effects grow stronger with the size of the system. Here, we focus on the Continuous Spontaneous Localization (CSL) model \cite{Pearle1989,Ghirardi1990}, which is the most studied among the dynamical collapse models. The model is defined through the following stochastic differential equation \begin{equation}\begin{aligned} \text{d} \ket{\psi}=&\left[-i\hat{H} \text{d} t + \frac{\sqrt{\gamma}}{m_0} \int \text{d} \x \left[\hat{M}(\x) - {\braket{\hat{M}(\x)}}\right] \text{d} W_t(\x) \right.\\ &\left.-\frac{\gamma}{2 m_0^2} \int \text{d} \x \text{d} \y {\left[\hat{M}(\x)-\braket{\hat{M}(\x)} \right]G(\x-\y) \left[\hat{M}(\y)-\braket{\hat{M}(\y)}\right]} \text{d} t \right]\ket{\psi}, \label{CSL} \end{aligned}\end{equation} where $\hat{H}$ is the Hamiltonian of the system, $\gamma$ is a phenomenological parameter of the model encoding the strength of the collapse process and ${\braket{\,\cdot\,}}$ denotes the expectation value on the state $\ket\psi$.
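To illustrate the localization mechanism encoded in Eq.~\eqref{CSL} (a standard textbook consequence, sketched here with illustrative units; the specific numbers are not from the text): for a single particle of mass $m$ in a superposition of two locations separated by a distance $d$, the induced master equation damps the spatial coherences at the rate $\Gamma=\lambda\,(m/m_0)^2\left(1-e^{-d^2/4\rC^2}\right)$, with $\lambda=\gamma/(4\pi\rC^2)^{3/2}$:

```python
import numpy as np

# Toy check of CSL decoherence between two localized positions separated by d.
# For M(x) ~ m [delta(x-x1)|1><1| + delta(x-x2)|2><2|], the double commutator
# damps rho_12 at Gamma = gamma*(m/m0)^2*(G(0) - G(d)).
r_C = 1.0              # localization length (illustrative units)
gamma = 0.1
d = 3.0                # separation of the superposed positions
m_over_m0 = 1.0

G = lambda x: np.exp(-x**2 / (4*r_C**2)) / (4*np.pi*r_C**2)**1.5
lam = gamma / (4*np.pi*r_C**2)**1.5

Gamma = gamma * m_over_m0**2 * (G(0.0) - G(d))
Gamma_closed = lam * m_over_m0**2 * (1 - np.exp(-d**2 / (4*r_C**2)))
assert np.isclose(Gamma, Gamma_closed)

# Euler evolution of the 2x2 density matrix for the state (|1> + |2>)/sqrt(2)
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
dt, T = 1e-3, 5.0
for _ in range(int(T/dt)):
    rho[0, 1] += -Gamma * rho[0, 1] * dt
    rho[1, 0] += -Gamma * rho[1, 0] * dt

# Populations untouched, coherences exponentially suppressed
assert np.isclose(rho[0, 0].real, 0.5)
assert np.isclose(abs(rho[0, 1]), 0.5*np.exp(-Gamma*T), rtol=1e-2)
```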
The noise field $W_t(\x)$, defined at each point of space, is characterized by the correlation \begin{equation} \mathbb{E}\left[ \xi_t(\x) \xi_{t'}(\y) \right]=G(\x-\y)\delta(t-t'), \qquad \text{where} \qquad G(\x-\y)=\frac{1}{(4 \pi \rC^2)^{3/2}}e^{-\frac{(\x-\y)^2}{4 \rC^2}}, \label{CSLnoise} \end{equation} and $\xi_t(\x)=\text{d} W_t(\x)/\text{d} t$. Here, $\mathbb{E}[\cdot]$ indicates the stochastic average, and $\rC$ denotes the second phenomenological parameter of the model. Finally, the operator $\hat{M}(\x)$ in Eq.~\eqref{CSL} is the mass density operator, given by \begin{equation} \hat{M}(\x)= \sum_j m_j \hat{a}_j^\dagger(\x) \hat{a}_j(\x), \end{equation} where the operators $\hat{a}_j^\dagger (\x)$ and $\hat{a}_j(\x)$ are the creation and annihilation operators of a particle of type $j$ at the point $\x$. The stochastic differential equation for $\ket{\psi}$ in Eq.~\eqref{CSL} leads to the following master equation \begin{equation} \frac{\text{d} \hat{\rho}}{\text{d} t}=-\frac{\gamma}{2m_0^2} \int \text{d} \x \text{d} \y \,G(\x-\y) \left[\hat{M}(\x),\left[\hat{M}(\y),\hat{\rho}\right]\right], \label{CSLmastereq} \end{equation} where the density operator $\hat{\rho}$ is defined as $\hat{\rho}=\mathbb{E}[\ket{\psi}\bra{\psi} ]$. The expectation value $\mathbb{E} [\bra{\psi} \hat{O} \ket{\psi} ]$ of an arbitrary operator $\hat{O}$ can be calculated in terms of the density operator $\hat{\rho}$ in Eq.~\eqref{CSLmastereq} as $\mathbb{E}[\bra{\psi}\hat{O}\ket{\psi} ]=\text{Tr} [\hat{O}\hat{\rho} ]$. Therefore, in order to calculate the expectation values of observables, one may use any unravelling that yields the same master equation as the CSL model. One such unravelling is provided by the following linear equation in the Stratonovich form \begin{equation} \frac{\text{d} \ket{\psi}}{\text{d} t}= -i\left[\hat{H}+ \frac{\sqrt{\gamma}}{m_0}\int \text{d} \x\, \hat{M}(\x){\xi}_t(\x)\right]\ket{\psi}.
\end{equation} Indeed, both the latter equation and Eq.~\eqref{CSL} yield the same master equation for the statistical operator $\hat \rho$ \cite{Bassi2003}. From the above equation, we can interpret the action of the CSL model as the addition of a stochastic Hamiltonian \begin{equation} \hat{H}_{\text{\tiny CSL}}=\frac{\sqrt{\gamma}}{m_0} \int \text{d} \x \hat{M}(\x) {\xi}_t(\x), \end{equation} to the standard one. \subsection{Interaction picture framework} We study the effects of dynamical collapse models by employing a perturbative approach. We split the total Hamiltonian $\hat{H}$ as \begin{equation} \hat{H}=\hat{H}_{0}+ \hat{H}_{\text{\tiny CSL}}, \end{equation} where $\hat{H}_{0}$ is the Hamiltonian of the system and $\hat{H}_{\text{\tiny CSL}}$ is the additional contribution due to dynamical collapse models in the Schr\"{o}dinger picture. To quantify the correction induced by collapse models, we perform the calculations in the interaction picture. In this picture \cite{Schlosshauer2007}, we identify $\hat{H}_{0}$ as the time-dependent background Hamiltonian. The operators $\hat{O}^\text{\tiny I}$ and the states $\left|\psi^\text{\tiny I}(t)\right\rangle$ are given by \begin{equation} \hat{O}^\text{\tiny I} (t)=\hat{U}_{0}^{-1}(t,t_0) \hat{O} \hat{U}_{0}(t,t_0)\,,\qquad {\ket{\psi^\text{\tiny I}(t)}}=\hat{U}_{\text{\tiny CSL}}(t,t_0)\ket{\psi(t_0)}, \label{operint} \end{equation} where \begin{equation} \hat{U}_{0}(t,t_0)=\mathcal{T} \left\{ \exp \left[-i \int_{t_0}^t \text{d} t' \hat{H}_{0}(t') \right] \right\}\,,\qquad \hat{U}_{\text{\tiny CSL}}(t,t_0)=\mathcal{T} \left\{\exp \left[-i\int_{t_0}^t d t'\hat{H}_{\text{\tiny CSL}}^\text{\tiny I} (t') \right]\right\}, \end{equation} with $\mathcal{T}$ denoting the time-ordering operator and $\hat{H}_{\text{\tiny CSL}}^\text{\tiny I}$ the Hamiltonian $\hat{H}_{\text{\tiny CSL}}$ in the interaction picture.
We consider the stochastic Hamiltonian $\hat{H}_{\text{\tiny CSL}}$ to be given by \begin{equation} \hat{H}_{\text{\tiny CSL}}(t)=\frac{\sqrt{\gamma}}{m_0} \int \text{d} \x\, \xi_{t}(\x) \hat{L}_{\text{\tiny CSL}} (t,\x), \label{StochasticHamiltonian} \end{equation} where the pair $(t,\x)$ denotes the temporal and spatial coordinates and $\hat{L}_{\text{\tiny CSL}}(t,\x)$ is an operator yet to be specified. The parameter $\gamma$ encodes the strength of the collapse process and $m_0$ denotes a reference mass taken equal to that of a nucleon. We define $\xi_t(\x)$ as in Eq.~\eqref{CSLnoise}. Taking into account the time dependence of both the operator $\hat{O}^\text{\tiny I}(t)$ and the state $\ket{\psi^\text{\tiny I}(t)}$, and retaining only the leading order term in $\gamma$, we get \begin{align} {\braket{\hat{O}}}&=\bra{\psi^\text{\tiny I}(t)}\hat{O}^\text{\tiny I}(t)\ket{\psi^\text{\tiny I}(t)}\\ &\approx \bra{\psi(t_0)}\left[\hat{1}+i\int_{t_0}^t \text{d} t' \hat{H}_{\text{\tiny CSL}}^\text{\tiny I} (t') - \int_{t_0}^t \int_{t_0}^{t'} \text{d} t' \text{d} t'' {\hat{H}_{\text{\tiny CSL}}^\text{\tiny I}(t'') \hat{H}_{\text{\tiny CSL}}^\text{\tiny I}(t')} \right]\hat{O}^\text{\tiny I}(t) \left[\hat{1}-i\int_{t_0}^t \text{d} t' \hat{H}_{\text{\tiny CSL}}^\text{\tiny I} (t') \right.\\\nonumber &\left.-\int_{t_0}^t \int_{t_0}^{t'} \text{d} t' \text{d} t'' \hat{H}_{\text{\tiny CSL}}^\text{\tiny I}(t') \hat{H}_{\text{\tiny CSL}}^\text{\tiny I} (t'') \right]\ket{\psi(t_0)}\\\nonumber &=\bra{\psi(t_0)}\left[\hat{O}^\text{\tiny I}(t)-i \int_{t_0}^{t} \text{d} t' \left[\hat{O}^\text{\tiny I}(t),\hat{H}_{\text{\tiny CSL}}^\text{\tiny I} (t') \right]-\int_{t_0}^t \int_{t_0}^{t'} \text{d} t' \text{d} t'' {\left[\hat{H}_{\text{\tiny CSL}}^\text{\tiny I}(t''),\left[\hat{H}_{\text{\tiny CSL}}^\text{\tiny I}(t'),\hat{O}^\text{\tiny I}(t) \right] \right]} \right]\ket{\psi(t_0)}\,.
\end{align} By taking the stochastic average over all the realizations of the noise, one obtains \begin{equation} \begin{split} &{\overline{O}\equiv\mathbb{E}[\braket{\hat{O}}]} \approx \bra{\psi(t_0)}\hat{O}^\text{\tiny I}(t)\ket{\psi(t_0)}- \frac{i \sqrt{\gamma}}{m_0} \int_{t_0}^{t} \text{d} t' \int \text{d} \x' \mathbb{E}[\xi_{t'}(\x')] \bra{\psi(t_0)}\left[\hat{O}^\text{\tiny I}(t),\hat{L}_{\text{\tiny CSL}}^\text{\tiny I} (t',\x') \right]\ket{\psi(t_0)} \\ &- \frac{\gamma}{m_0^2} \int_{t_0}^{t}\int_{t_0}^{t'} \text{d} t' \text{d} t'' \int \text{d} \x' \int \text{d} \x'' {\mathbb{E}\left[\xi_{t''}(\x'')\xi_{t'}(\x') \right] \bra{\psi(t_0)}\left[\hat{L}_{\text{\tiny CSL}}^\text{\tiny I}(t'',\x''),\left[\hat{L}_{\text{\tiny CSL}}^\text{\tiny I}(t',\x'),\hat{O}^\text{\tiny I}(t) \right] \right]\ket{\psi(t_0)}}. \end{split} \end{equation} Using Eq.~\eqref{CSLnoise}, together with the fact that the noise has zero mean, $\mathbb{E}[\xi_{t'}(\x')]=0$, we get \begin{equation}\label{StochAvOp} \begin{split} {\overline{O}} & \approx \bra{\psi(t_0)} \hat{O}^\text{\tiny I}(t) \ket{\psi(t_0)}\\ &- \frac{\lambda}{2 m_0^2} \int_{t_0}^{t} \text{d} t' \int \text{d} \x' \int \text{d} \x'' {e^{-\frac{(\x''-\x')^2}{4 \rC^2}} \bra{\psi(t_0)}\left[\hat{L}_{\text{\tiny CSL}}^\text{\tiny I}(t',\x''),\left[\hat{L}_{\text{\tiny CSL}}^\text{\tiny I}(t',\x'),\hat{O}^\text{\tiny I}(t) \right] \right]\ket{\psi(t_0)}}, \end{split} \end{equation} where $\lambda=\gamma/(4 \pi \rC^2)^{3/2}$ is the collapse rate.
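The second-order expansion above can be checked on a toy model. The following sketch (a two-level system with an illustrative deterministic perturbation standing in for the stochastic CSL Hamiltonian; none of the numbers are from the text) verifies that $\hat{O}^{\text{\tiny I}}-i\int[\hat{O}^{\text{\tiny I}},\hat{V}^{\text{\tiny I}}(t')]-\int\!\!\int_{t''<t'}[\hat{V}^{\text{\tiny I}}(t''),[\hat{V}^{\text{\tiny I}}(t'),\hat{O}^{\text{\tiny I}}]]$ reproduces the exact expectation value to leading orders in the coupling:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H0 = sz                 # "free" Hamiltonian
V = 1e-3 * sx           # weak perturbation (analogue of H_CSL)
O = sz                  # observable
psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)
t = 1.5

def U(H, s):            # exp(-i H s) for Hermitian H, via eigendecomposition
    w, P = np.linalg.eigh(H)
    return (P * np.exp(-1j * w * s)) @ P.conj().T

def to_I(M, s):         # interaction picture: U0^{-1} M U0
    U0 = U(H0, s)
    return U0.conj().T @ M @ U0

# Exact expectation value
psi_t = U(H0 + V, t) @ psi0
exact = (psi_t.conj() @ O @ psi_t).real

# Second-order expansion, discretized in time
ts = np.linspace(0.0, t, 400)
dt = ts[1] - ts[0]
OI = to_I(O, t)
Vs = [to_I(V, s) for s in ts]

first = sum(OI @ Vi - Vi @ OI for Vi in Vs) * dt      # int [O^I, V^I(t')]
second = np.zeros((2, 2), dtype=complex)
cum = np.zeros((2, 2), dtype=complex)                 # int_{t''<t'} V^I(t'')
for Vi in Vs:
    C = Vi @ OI - OI @ Vi                             # [V^I(t'), O^I]
    second += (cum @ C - C @ cum) * dt                # [V^I(t''), [V^I(t'), O^I]]
    cum += Vi * dt

approx = (psi0.conj() @ (OI - 1j * first - second) @ psi0).real
zeroth = (psi0.conj() @ OI @ psi0).real

assert abs(exact - zeroth) > 1e-3     # the perturbation has a visible effect
assert abs(approx - exact) < 1e-4     # the expansion captures it
```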
Therefore, we obtain \begin{equation} {\overline{O} = \braket{\hat{O}(t)}_0+ \delta \overline{O}(t)_{\text{\tiny CSL}}}, \label{correctexpec} \end{equation} where \begin{align}\label{PrDeltaPr} {\braket{\hat{O}(t)}_0}&=\bra{\psi(t_0)}\hat{O}^\text{\tiny I}(t)\ket{\psi(t_0)}, \nonumber\\ {\delta \overline{O}(t)_{\text{\tiny CSL}}}&=- \frac{\lambda}{2m_0^2} \int_{t_0}^{t} \text{d} t' \int \text{d} \x' \int \text{d} \x'' {e^{-\frac{(\x''-\x')^2}{4 \rC^2}} \bra{\psi(t_0)}\left[\hat{L}_{\text{\tiny CSL}}^\text{\tiny I}(t',\x''),\left[\hat{L}_{\text{\tiny CSL}}^\text{\tiny I}(t',\x'),\hat{O}^\text{\tiny I}(t) \right] \right]}\ket{\psi(t_0)}. \end{align} The term {$\braket{\hat{O}(t)}_0$} represents the expectation value of the operator $\hat{O}(t)$ in the standard scenario, and the term ${\delta \overline{O}(t)_{\text{\tiny CSL}}}$ stands for the modification to the expectation value of the operator $\hat{O}$ due to CSL. \section{Dynamical collapse within a cosmological setting} In what follows, we implement the framework of the previous section within a cosmological setting. Working in conformal time $\eta$ and comoving coordinates $\x$, we set the noise $\xi_\eta(\x)$ to be such that \begin{equation}\label{correlationComoving} \mathbb{E}[\xi_{\eta}(\x)]=0,\quad\text{and}\quad \mathbb{E}[\xi_{\eta}(\x)\xi_{\eta'}(\y)]=\frac{\delta(\eta-\eta')}{a(\eta')} G(\x-\y), \qquad \text{where} \qquad G(\x-\y)=\frac{1}{(4 \pi \rC^2)^{3/2}}e^{-\frac{a^{2}(\eta')(\x-\y)^2}{4 \rC^2}}. \end{equation} Here, the scale factor $a(\eta)$ is introduced in the above definitions so that $\xi_t(\x_p)$ has the same properties as in the standard CSL model, when expressed in terms of the cosmic time $t$ and the physical coordinates $\x_p$.
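As a consistency check on Eq.~\eqref{correlationComoving} (numerical, with illustrative unit choices not taken from the text), the comoving smearing kernel integrates to unity over the physical volume $a^3\,\text{d}\x$, exactly as the standard CSL kernel does over physical coordinates, and the collapse rate satisfies $\lambda=\gamma\,G(0)$:

```python
import numpy as np

r_C = 1.0        # localization length (illustrative units)
gamma = 0.3
a = 2.7          # scale factor at some fixed conformal time (illustrative)

# Comoving kernel G(x) = (4 pi r_C^2)^{-3/2} exp(-a^2 x^2 / 4 r_C^2)
r = np.linspace(0.0, 12.0*r_C/a, 200001)      # comoving radius
dr = r[1] - r[0]
G = np.exp(-a**2 * r**2 / (4*r_C**2)) / (4*np.pi*r_C**2)**1.5

# Integrate over the physical volume element a^3 d^3x (trapezoid rule)
integrand = 4*np.pi * r**2 * G * a**3
norm = np.sum((integrand[1:] + integrand[:-1]) / 2) * dr
assert abs(norm - 1.0) < 1e-6                 # normalized like standard CSL

lam = gamma / (4*np.pi*r_C**2)**1.5
assert np.isclose(lam, gamma * G[0])          # lambda = gamma * G(0)
```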
For this setting, we have that the expectation value of the operator $\hat O$ can be expressed as in Eq.~\eqref{correctexpec}, with Eq.~\eqref{PrDeltaPr} now reading \begin{align}\label{PrDeltaPr2} {\braket{\hat{O}(\eta)}_0}&=\bra{0}\hat{O}^\text{\tiny I}(\eta)\ket{0}\,, \nonumber\\ {\delta \overline{O}(\eta)_{\text{\tiny CSL}}}&=- \frac{\lambda}{2m_0^2} \int_{\eta_0}^{\eta} \frac{\text{d} \eta'}{a(\eta')} \int \text{d} \x' \int \text{d} \x'' e^{-\frac{a^2(\eta')(\x''-\x')^2}{4 \rC^2}} \bra{0}\left[\hat{L}_{\text{\tiny CSL}}^\text{\tiny I}(\eta',\x''),\left[\hat{L}_{\text{\tiny CSL}}^\text{\tiny I}(\eta',\x'),\hat{O}^\text{\tiny I}(\eta) \right] \right]\ket{0}. \end{align} In the above expressions, we have set the initial state of the system to be the Bunch-Davies vacuum state $\ket{0}$. We will study the modifications due to CSL of the expectation value of the squared comoving curvature perturbation $\hat{\mathcal{R}}^2$ [cf. Eq.~\eqref{defcomcurvpert}] over two cosmological epochs. The first one is the phase of cosmological inflation described in section \ref{sec:inflation} and the second one is the radiation dominated era described in section \ref{sec:Radiation dominated era}. These two epochs are separated at $\eta=\eta_{e}$, the end of inflation, by a phase of reheating. For a simplified treatment, as in Ref.~\cite{Martin2020}, we assume that the collapse dynamics does not introduce any substantial correction during this phase connecting the two epochs of interest. Naturally, in the first epoch $\eta_0$ corresponds to the beginning of inflation, and the correction to ${\overline{\mathcal{R}^2}}$ due to collapse models is computed at the end of inflation. During the radiation dominated epoch, the initial time is taken to be the end of inflation and the correction to ${\overline{{\mathcal{R}}^2}}$ is computed at the end of the radiation dominated era, $\eta_{r}$.
\subsection{Inflation} \label{sec:inflation} We now specify the operator $\hat{{L}}_{\text{\tiny CSL}}$. As motivated in the main text, we take it to be the standard Hamiltonian density of the scalar perturbations, $\hat{L}_{\text{\tiny CSL}}=\hat{\mathcal{H}}_{\text{\tiny CSL}}=\hat{\mathcal{H}}_{0}$. Thus, combining Eq.~\eqref{operint} and Eq.~\eqref{StochasticHamiltonian}, we have \begin{equation}\label{HIDC} \hat{H}^{\tiny I}_{\text{\tiny CSL}}(\eta)=\frac{\sqrt{\gamma}}{m_0} \int \text{d} \x \xi_{\eta}(\x) \hat{\mathcal{H}}^{\tiny I}_{\text{\tiny CSL}} (\eta,\x), \end{equation} where $\hat{\mathcal{H}}^{\tiny I}_{\text{\tiny CSL}}=\hat{U}_{0}^{-1}\hat{\mathcal{H}}_{0} (\eta,\x)\hat{U}_{0}$ represents the Hamiltonian density of the scalar perturbations in the interaction picture. This coincides with the Hamiltonian density of the scalar perturbations in the Heisenberg picture in standard cosmology, where one does not have additional contributions coming from collapse dynamics. During inflation, $\hat{\mathcal{H}}^{\tiny I}_{\text{\tiny CSL}}=\hat{\mathcal{H}}^{\text{h}}_{\text{inf}}$, where the latter is the standard inflationary Hamiltonian density corresponding to the action in Eq.~\eqref{ActionPert}, and is given by \begin{equation} \hat{\mathcal{H}}^{\text{h}}_{\text{inf}}=\frac{1}{2} \left\{\dot{\hat{u}}^2 (\eta,\x) + \delta^{ij}\partial_{i}{\hat{u}(\eta,\x)}\partial_j {\hat{u}(\eta,\x)}-\frac{2}{\eta^2}\hat{u}^2 (\eta,\x) \right\}\,.
\end{equation} Inserting the field operator $\hat{u}(\eta,\x)$ [cf.~Eq.~\eqref{CanonicalVariable}], we get the following normal-ordered Hamiltonian density \begin{equation} \hat{\mathcal{H}}_{\text{\tiny CSL}}^{\tiny I}(\eta,\x)=\int \int \text{d} \q \text{d} \p \frac{e^{i(\p + \q) \cdot \x}}{2(2 \pi)^3}\left(b_\eta^{\p,\q}\hat{a}_\p \hat{a}_\q+ d_\eta^{\p,\q}{\hat{a}_{-\q}^\dagger \hat{a}_\p}+ b^{*\p,\q}_\eta \hat{a}_{-\p}^\dagger \hat{a}_{-\q}^\dagger+ d^{*\p,\q}_\eta \hat{a}_{-\p}^\dagger \hat{a}_\q \right),\label{HamiltDensityStruc} \end{equation} where we introduced \begin{equation} b_\eta^{\p,\q}=j_\eta^{p,q}-\left(\p \cdot \q+\frac{2}{\eta^2}\right)f_\eta^{p,q},\quad d_\eta^{\p,\q}=l_\eta^{p,q}-\left(\p \cdot \q+\frac{2}{\eta^2}\right)g_\eta^{p,q}, \end{equation} with \begin{equation} f_\eta^{p,q}=v_p(\eta)v_q(\eta),\, \quad g_\eta^{p,q}=v_p(\eta) v^*_q(\eta), \quad j_\eta^{p,q}=\dot{v}_p(\eta)\dot{v}_q(\eta), \quad l_\eta^{p,q}=\dot{v}_p(\eta)\dot{v}^*_q(\eta).\label{fandg} \end{equation} Now, the quantity of interest to be compared with cosmological observations is ${\mathbb{E}[\braket{(\hat{\mathcal{R}}-\braket{\hat{\mathcal{R}}})^2}]}$. However, for a collapse operator which is quadratic in the perturbations, and hence in the creation and annihilation operators, the CSL contribution to $\braket{\hat{\mathcal{R}}}$ vanishes, as one can deduce by explicit substitution in Eq.~\eqref{PrDeltaPr2}. Thus, one can focus on ${\overline{\mathcal{R}^2}}$ only, which can be obtained from \begin{equation}\label{Oinflation} \hat{O}^{\tiny I} (\eta)={\hat{\mathcal{R}}^2(\eta,\x)}=\frac{\hat{u}^2(\eta,\x)}{z^2}=\frac{\hat{u}^2(\eta,\x)}{2 {\epsilon_{\text{inf}}} M_{\mathrm{P}}^2 a^2(\eta)}.
\end{equation} From Eq.~\eqref{PrDeltaPr2}, the correction induced by the dynamical collapse model is encoded in the term \begin{equation} {\delta {\overline{\mathcal{R}^2}(\eta)}_{\text{\tiny CSL}}}=- \frac{\lambda}{2m_0^2} \int_{\eta_0}^{\eta} \frac{\text{d} \eta'}{a(\eta')}\int \text{d} \x' \int\text{d} \x'' {e^{-\frac{a^2(\eta')(\x''-\x')^2}{4 \rC^2}} \bra{0}\left[\hat{\mathcal{H}}_{\text{\tiny CSL}}^{\tiny I}(\eta',\x''),\left[\hat{\mathcal{H}}_{\text{\tiny CSL}}^{\tiny I}(\eta',\x'),\hat{\mathcal{R}}^2(\eta,\x) \right] \right]}\ket{0}.\label{deltaRinflation} \end{equation} Similarly to the result in Eq.~\eqref{MeanSqauredExp}, by expressing the Hamiltonian density $\hat{\mathcal{H}}_{\text{\tiny CSL}}^{\tiny I}(\eta,\x)$ and the comoving curvature perturbation $\hat{\mathcal{R}}(\eta,\x)$ in terms of the creation and annihilation operators, an explicit calculation shows that the correction term ${\delta \overline{\mathcal{R}^2}(\eta)_{\text{\tiny CSL}}}$ is also independent of $\x$. More explicitly, the correction term becomes \begin{equation}\label{deltaRinflationFourier} {\delta {\overline{\mathcal{R}^2}(\eta)}_{\text{\tiny CSL}}}=-\frac{ \lambda \rC^3}{8 {\epsilon_\text{inf}} M_{\mathrm{P}}^2 m_0^2 a^2(\eta) \pi^{9/2}} \int_{\eta_0}^{\eta} \frac{\text{d} \eta'}{a^4(\eta')} \int \int \text{d} \q \text{d} \p \,e^{-\frac{\rC^2}{a^2(\eta')}(\q+\p)^2} \mathfrak{Re} \left[ b_{\eta'}^{\q,\p}\left\lbrace d_{\eta'}^{-\q,-\p}(f_\eta^{q,-q})^*-(b_{\eta'}^{-\q,-\p})^*g_\eta^{q,-q}\right\rbrace \right].
\end{equation} Notice that, as the exponential is invariant under the interchange of the integration variables $\p$ and $\q$, the properties of the functions $b_{\eta}^{\q,\p}$, $d_{\eta}^{\q,\p}$, $f_{\eta}^{q,p}$, and $g_{\eta}^{q,p}$ allow one to write the above result as \begin{equation} {\delta \overline{\mathcal{R}^2}(\eta)_\text{\tiny CSL}}=-\frac{ \lambda \rC^3}{8 {\epsilon_{\text{inf}}} M_{\mathrm{P}}^2 m_0^2 a^2(\eta) \pi^{9/2}} \int_{\eta_0}^{\eta} \frac{\text{d} \eta'}{a^4(\eta')} \int\int \text{d} \q \text{d} \p \,e^{-\frac{\rC^2}{a^2(\eta')}(\q+\p)^2}\mathcal{F}_{\eta'}^{\p,\q}, \label{spectrumcorrInf} \end{equation} where \begin{equation} \mathcal{F}_{\eta'}^{\p,\q}=\mathfrak{Re}\left[b_{\eta'}^{\p,\q}d_{\eta'}^{\q,\p}(f_\eta^{q,q})^*-b_{\eta'}^{\p,\q}(b_{\eta'}^{\q,\p})^*g_\eta^{p,p} \right].\label{curlyFanalytic} \end{equation} We are interested in calculating the correction $\delta \mathcal{P}_\mathcal{R}$ at the end of inflation, $\eta=\eta_e$. By substituting for $v_k(\eta)$ the expression in Eq.~\eqref{modeDS}, we obtain the following expression for $\mathcal{F}_{\eta'}^{\p,\q}$ \begin{align} &\mathcal{F}_{\eta'}^{\p,\q}=\nonumber\\ &\mathfrak{Re}\left[\frac{1}{{8}{\eta'} ^8 p^3 q^4}\left(1+\frac{i}{\eta_e q}\right)^2 e^{-2 i q (\eta' -\eta_e)}\left[\left(\left(-{\eta'} ^2 p^2+i \eta' p+1\right) \left({\eta'} ^2 q^2-i {\eta'} q-1\right)-({\eta'} p-i) ({\eta'} q-i) \left({\eta'} ^2 (\p \cdot \q)+2\right)\right)\right. \right.
\nonumber\\ &\left.\left(\left({\eta'} ^2 p^2+i {\eta'} p-1\right) \left({\eta'} ^2 q^2-i{\eta'} q-1\right)-({\eta'} p+i) ({\eta'} q-i) \left({\eta'} ^2 (\p \cdot \q)+2\right)\right)\right] -\nonumber\\ &\frac{1}{{8}{\eta'} ^8 p^4 q^3}\left(1-\frac{i}{\eta_e p}\right) \left(1+\frac{i}{\eta_e p}\right)\left[ \left(\left(-{\eta'} ^2 p^2+i {\eta'} p+1\right) \left({\eta'} ^2 q^2-i {\eta'} q-1\right)-({\eta'} p-i) ({\eta'} q-i) \left({\eta'} ^2 (\p \cdot \q)+2\right)\right)\right.\nonumber\\ &\left. \left.\left(-\left({\eta'} ^2 p^2+i {\eta'} p-1\right) \left({\eta'} ^2 q^2+i {\eta'} q-1\right)-({\eta'} p+i) ({\eta'} q+i) \left({\eta'} ^2 (\p \cdot \q)+2\right)\right)\right]\right]. \label{curlyFanalytInf} \end{align} For times close to the end of inflation, the condition $q |\eta'| \ll 1$ is satisfied for the modes of cosmological interest \cite{Uzan}. At earlier times, if this condition is not satisfied, then the exponential appearing in Eq.~\eqref{spectrumcorrInf} suppresses the corresponding contributions. Indeed, during inflation the exponential function in Eq.~\eqref{spectrumcorrInf} becomes $\exp{\left(-\rC^2 {H_{\text{inf}}}^2 {\eta^{\prime}}^2(\p+\q)^2\right)}$. To have an estimate of the orders of magnitude involved, we notice that since the GRW value of $\rC\sim 10^{27}M^{-1}_{\mathrm{P}}$ \cite{Ghirardi1986} is much bigger than $H^{-1}_{\text{inf}}\sim 10^{5}M^{-1}_{\mathrm{P}}$ (corresponding to $H_{\text{inf}}\sim 10^{-5}M_{\mathrm{P}}$), it is safe to assume that $\rC {H_{\text{inf}}}\gg 1$ during inflation. Therefore, we can safely expand Eq.~\eqref{curlyFanalytInf} in powers of $q \eta'$ (or $p \eta'$), which to leading order gives \begin{equation} \mathcal{F}_{\eta'}^{\p,\q}=\frac{1}{8 p^3q^4{\eta'}^8}\left(-\frac{2q^4{\eta}^4_e}{9}+\frac{16 q^4 \eta_e \eta'^3}{9} -\frac{4p^3q{\eta'}^6}{{\eta}^2_e}-\frac{32q^4{\eta'}^6}{9\eta_e^2}\right).
\label{curlyFapproxInf} \end{equation} Here, the leading order expression presented above contains only the terms that would survive after computing the integral in Eq.~\eqref{spectrumcorrInf}. That is, terms that are related by the exchange of $\p$ and $\q$ but appear with opposite signs do not contribute to the integral and therefore do not appear in the effective leading order expression in Eq.~\eqref{curlyFapproxInf}. Since $|\eta_{e}|\ll|\eta'|$, the last two terms on the RHS of Eq.~\eqref{curlyFapproxInf} give the dominant contribution to the corrections, with $\mathcal{F}_{\eta'}^{\p,\q}\approx-1/(2q^3\eta^2_e\eta'^2) - 4/(9\eta^2_e\eta'^2p^3)$. Since $\p$ and $\q$ are dummy integration variables, to leading order $\mathcal{F}_{\eta'}^{\p,\q}$ effectively depends only on $q$ (or $p$). After completing the $\p$ integral in Eq.~\eqref{spectrumcorrInf}, which now becomes a standard three-dimensional Gaussian integral, we get \begin{equation} {\delta\overline{\mathcal{R}^2}(\eta_e)_{\text{\tiny CSL}}}\approx-\frac{17}{36}\frac{\lambda {H_{\text{inf}}}^3}{\pi^2 {\epsilon_{\text{inf}}} M_{\mathrm{P}}^2 m_0^2}\int_{\eta_0}^{\eta_e} \text{d}\ln{\eta} \int \text{d} \ln{q}\,. \end{equation} Following the definition of the power spectrum $\mathcal{P}$ [cf. Eq.~\eqref{PowerSpectrumdef}], we identify the correction $\delta\mathcal{P}_{\mathcal{R}}$ to the power spectrum $\mathcal{P}_\mathcal{R}$ with ${\delta\overline{\mathcal{R}^2}(\eta_e)_{\text{\tiny CSL}}}=\int \text{d} \ln q \,\delta\mathcal{P}_\mathcal{R}$. Therefore, we obtain \begin{equation} \delta \mathcal{P}_\mathcal{R}\approx-\frac{17}{36} \frac{\lambda {H_{\text{inf}}}^3}{\pi^2 {\epsilon_{\text{inf}}} M_{\mathrm{P}}^2 m_0^2}\ln \left( \frac{\eta_e}{\eta_0}\right).
\end{equation} \subsection{Radiation dominated era}\label{sec:Radiation dominated era} During the radiation dominated era, instead of Eq.~\eqref{MukhSas2}, we have \begin{equation}\label{EOMRad} \ddot{u}_k+\frac{1}{3}k^2 u_k=0, \end{equation} since, in general, the action for the rescaled variable $u$ reads \cite{Mukhanov2005} \begin{equation} \delta S^{(2)}=\frac{1}{2}\int \text{d} \eta \int \text{d} \x \left[\dot{u}^2-c_s^2\delta^{ij}\partial_i u\partial_j u + \frac{\ddot{z}}{z}u^2 \right], \label{Actiongeneral} \end{equation} with $c_s$ being the speed of sound defined before. As stated before, for simplicity we neglect the reheating stage in our analysis \cite{Martin2020}. The scale factor during the radiation dominated era can then be approximated as \begin{equation} a(\eta)=\frac{1}{{H_\text{inf}} \eta_e^2}(\eta-2 \eta_e)\,. \label{scalefactorrad} \end{equation} By using this expression of $a(\eta)$ in the definition of $\epsilon$, one can show that $\epsilon=2$ during the radiation dominated era. From this result, as well as the linear dependence of the scale factor on $\eta$, the definition of $z$ in Eq.~\eqref{ZFactor} leads to $\ddot{z}=0$. This explains the absence of a term proportional to $\ddot{z}$ in the EOM~\eqref{EOMRad}. We can decompose $u$ in terms of the modes $v_k(\eta)$ as in Eq.~\eqref{fieldopu}, where now the mode satisfies \begin{equation} \ddot{v}_k(\eta)+\frac{1}{3}k^2 v_k(\eta)=0. \end{equation} Following the approach of Ref.~\cite{Martin2020}, the initial conditions required to specify the solution for the modes during the radiation dominated era are fixed by matching the curvature perturbation and its derivative at the end of inflation. The full solution for $v_k(\eta)$ then becomes \cite{comment} \begin{equation} v_k(\eta)\!\!=\!\!\frac{\sqrt{3}}{2 \eta_e^2 \sqrt{{\epsilon_{\text{inf}}}} k^{5/2}}e^{-ik \eta_e}\!\left\{\left[(1\!+\!\sqrt{3})(k \eta_e)^2\!-\!
\sqrt{3}\!-\!i(1\!+\!\sqrt{3})k \eta_e \right]e^{-ik \frac{\eta-\eta_e}{\sqrt{3}}}\!\!+\!\!\left[(1\!-\!\sqrt{3})(k\eta_e)^2 \!+\! \sqrt{3}\!-\!i(1\!-\!\sqrt{3}) k \eta_e\right]e^{ik \frac{\eta-\eta_e}{\sqrt{3}}} \right\}. \label{radiationmode} \end{equation} Now, one obtains the Hamiltonian density during the radiation dominated era from Eq.~\eqref{Actiongeneral}. This reads \begin{equation} \hat{\mathcal{H}}_{\text{\tiny CSL}}^{\tiny I}(\eta,\x)=\hat{\mathcal{H}}_{\text{rad}}^\text{h}(\eta,\x)=\frac{1}{2}\left(\dot{\hat{u}}^2 (\eta,\x) + \frac{1}{3}\delta^{ij}\partial_i\hat{u}(\eta,\x)\partial_j\hat{u}(\eta,\x) \right). \label{HamDensRad} \end{equation} From the expression of the operator $\hat{u}(\eta,\x)$ in Eq.~\eqref{CanonicalVariable}, and its decomposition in terms of the modes $v_k(\eta)$ of Eq.~\eqref{fieldopu}, straightforward calculations lead to an expression for $\hat{\mathcal{H}}_{\text{\tiny CSL}}^{\tiny I}(\eta,\x)$ that has the structure of Eq.~\eqref{HamiltDensityStruc}, but with the functions $b_\eta^{\q,\p}$ and $d_\eta^{\q,\p}$ modified as \begin{equation} b_\eta^{\q,\p}=j_\eta^{q,p}-\frac{1}{3}(\q \cdot \p)f_\eta^{q,p},\quad d_\eta^{\q,\p}=l_\eta^{q,p}-\frac{1}{3}(\q \cdot \p)g_\eta^{q,p}, \label{Frad} \end{equation} with the functions appearing on the RHS of Eq.~\eqref{Frad} defined as in Eq.~\eqref{fandg}. During the radiation dominated era, the same operator $\hat{O}^{\tiny I}(\eta)={\hat{\mathcal{R}}^2}(\eta,\x)$ defined in Eq.~\eqref{Oinflation} reads \begin{equation} {\hat{\mathcal{R}}^2}(\eta,\x)=\frac{\hat{u}^2 (\eta,\x)}{12 M_{\mathrm{P}}^2 a^2(\eta)}, \label{perturboprad} \end{equation} where the scale factor is given by Eq.~\eqref{scalefactorrad}. We have used the fact that $\epsilon(\eta_{e}\leq\eta\leq \eta_{r})=2$ and $c^2_s=1/3$ during the radiation dominated era (with $\eta_r$ denoting the conformal time at the end of this stage).
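As a small numerical cross-check of the mode dynamics above (a sketch, not part of the derivation), one can integrate the radiation-era mode equation $\ddot{v}_k + \tfrac{1}{3}k^2 v_k = 0$ and compare with a pure plane wave $e^{-ik c_s(\eta-\eta_0)}$, confirming that the modes oscillate at the sound speed $c_s = 1/\sqrt{3}$:

```python
import cmath
import math

def integrate_mode(k, eta0, eta1, v0, dv0, steps=4000):
    """RK4 integration of the radiation-era mode equation
    v'' = -(k^2/3) v, i.e. a harmonic oscillator of frequency
    k*c_s with c_s = 1/sqrt(3)."""
    h = (eta1 - eta0) / steps

    def f(state):
        v, dv = state
        return (dv, -(k ** 2 / 3.0) * v)

    v, dv = v0, dv0
    for _ in range(steps):
        k1 = f((v, dv))
        k2 = f((v + 0.5 * h * k1[0], dv + 0.5 * h * k1[1]))
        k3 = f((v + 0.5 * h * k2[0], dv + 0.5 * h * k2[1]))
        k4 = f((v + h * k3[0], dv + h * k3[1]))
        v += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        dv += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return v

# Positive-frequency initial data should reproduce the plane wave
# e^{-i k c_s (eta - eta0)} up to integration error.
k, eta0, eta1 = 2.0, 0.0, 5.0
cs = 1.0 / math.sqrt(3.0)
v_num = integrate_mode(k, eta0, eta1, 1.0 + 0.0j, -1j * k * cs)
v_exact = cmath.exp(-1j * k * cs * (eta1 - eta0))
print(abs(v_num - v_exact))  # small integration error
```

The toy wavenumber and time interval are arbitrary illustration values; the full solution in Eq.~(radiationmode) above is just a particular superposition of these two plane-wave branches fixed by the matching at $\eta_e$.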
From Eq.~\eqref{PrDeltaPr2}, the contribution to the modification of the comoving curvature power spectrum during the radiation dominated era is given by Eq.~\eqref{deltaRinflation}, where one substitutes $\eta_0$ with $\eta_e$ and $\eta$ with $\eta_r$. Using Eqs.~\eqref{HamDensRad},~\eqref{perturboprad} and~\eqref{scalefactorrad}, and calculating the double commutator explicitly, we obtain \begin{equation} {\delta \overline{\mathcal{R}^2}(\eta_r)_\text{\tiny CSL}}=-\frac{ \lambda \rC^3}{48 M_{\mathrm{P}}^2 m_0^2 a^2(\eta_r) \pi^{9/2}} \int_{\eta_e}^{\eta_r} \frac{\text{d} \eta'}{a^4(\eta')} \int\int \text{d} \q \text{d} \p\, e^{-\frac{\rC^2}{a^2(\eta')}(\q+\p)^2}\mathcal{F}_{\eta'}^{\p,\q}, \label{spectrumcorrRad} \end{equation} where, using Eqs.~\eqref{fandg} and~\eqref{Frad}, the function $\mathcal{F}_{\eta'}^{\p,\q}$ [cf. Eq.~\eqref{curlyFanalytic}] is given by \begin{align}\label{CurlyFRad} \begin{split} &\mathcal{F}_{\eta'}^{\p,\q}=-\frac{1}{8 p^5 q^5 \epsilon_{\text{inf}}^3 \eta_e^4}9 e^{-\frac{2 i (p (\eta' -\eta_e)+q (\eta' +\eta_r-2 \eta_e))}{\sqrt{3}}} \left( 4 e^{\frac{2 i (p+q) (\eta' -\eta_e)}{\sqrt{3}}} (\p \cdot \q) \left(-2 q \eta_e \left(q^3 \eta_e^3-2 q \eta_e+\sqrt{3} i\right)-3\right) p^5 \right.\\ &\left.+4 e^{\frac{2 i (p (\eta' -\eta_e)+q (\eta' +2 \eta_r-3 \eta_e))}{\sqrt{3}}} (\p \cdot \q) \left(2 q \eta_e \left(-q^3 \eta_e^3+2 q \eta_e+\sqrt{3} i\right)-3\right) p^5\right.\\ &\left.+2 e^{\frac{2 i (p+2 q) (\eta' -\eta_e)}{\sqrt{3}}} q^3 \left(p^2 q^2-(\p \cdot \q)^2\right) \left(4 p^4 \eta_e^4-2 p^2 \eta_e^2+3\right)+2 e^{\frac{2 i (p \eta' +2 q \eta_r-(p+2 q) \eta_e)}{\sqrt{3}}} q^3 \left(p^2 q^2-(\p \cdot \q)^2\right) \left(4 p^4 \eta_e^4-2 p^2 \eta_e^2+3\right)\right.\\ &\left.+4 e^{\frac{2 i (p (\eta' -\eta_e)+q (\eta' +\eta_r-2 {\eta_e}))}{\sqrt{3}}} \left(4 p^4 q^3 (p q+\p \cdot \q)^2 \eta_e^4-2 p^2 q^2 \left(2 (\p \cdot \q) p^3+q^3 p^2+q (\p \cdot \q)^2\right) \eta_e^2\right.\right.\\ &\left.\left.+3 \left(2 (\p \cdot \q) p^5+q^5 p^2+q^3 (\p
\cdot \q)^2\right)\right)+e^{\frac{4 i (p \eta' +q \eta_r-(p+q) \eta_e)}{\sqrt{3}}} q^3 (-p q+\p \cdot \q)^2 \left(2 p \eta_e \left(p^3 \eta_e^3-2 p \eta_e-\sqrt{3}i\right)+3\right) \right.\\ &\left.+e^{\frac{4 i (p+q) (\eta' -\eta_e)}{\sqrt{3}}} q^3 (p q+\p \cdot \q)^2 \left(2 p \eta_e \left(p^3 \eta_e^3-2 p \eta_e-\sqrt{3}i \right)+3\right) \right.\\ &\left.+2 e^{\frac{2 i (2 p \eta' +q \eta' +q \eta_r-2 (p+q) \eta_e)}{\sqrt{3}}} q^3 \left(p^2 q^2-(\p \cdot \q)^2\right) \left(2 p \eta_e \left(p^3 \eta_e^3-2 p \eta_e-\sqrt{3} i\right)+3\right)\right.\\ &\left.+e^{\frac{4 i q (\eta' -\eta_e)}{\sqrt{3}}} q^3 (-p q+\p \cdot \q)^2 \left(2 p \eta_e \left(p^3 \eta_e^3-2 p \eta_e+\sqrt{3} i\right)+3\right)+e^{\frac{4 i q (\eta_r-\eta_e)}{\sqrt{3}}} q^3 (p q+\p \cdot \q)^2 \left(2 p \eta_e \left(p^3 \eta_e^3-2 p \eta_e+\sqrt{3} i\right)+3\right)\right.\\ &\left.+2 e^{\frac{2 i q (\eta' +\eta_r-2 \eta_e)}{\sqrt{3}}} q^3 \left(p^2 q^2-(\p \cdot \q)^2\right) \left(2 p \eta_e \left(p^3 \eta_e^3-2 p \eta_e+\sqrt{3} i\right)+3\right)\right). \end{split} \end{align} We now consider, as in the inflationary era, the expansion of $\mathcal{F}_{\eta'}^{\p,\q}$ in powers of $q \eta'$, $q\eta_e$, and $q\eta_r$. The leading order term reads \begin{equation}\label{CurlyFRadApprox} \mathcal{F}_{\eta'}^{\p,\q}\approx-\frac{54}{q^3 \epsilon_{\text{inf}}^3 \eta_e^4}\,, \end{equation} where the terms that are symmetric in $\p$ and $\q$ but appear with opposite signs have been discarded in the effective leading order expression of Eq.~\eqref{CurlyFRadApprox}, as they would yield a zero contribution to the integral in Eq.~\eqref{spectrumcorrRad}. Using the leading order expansion of Eq.~\eqref{CurlyFRadApprox} in Eq.~\eqref{spectrumcorrRad}, as was the case for the inflationary era, the $\p$ integral becomes a standard three-dimensional Gaussian integral.
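The $\p$ integration just mentioned relies on the shift invariance of the three-dimensional Gaussian: $\int \text{d}\p\, e^{-\alpha(\p+\q)^2} = (\pi/\alpha)^{3/2}$ for any $\q$. A quick numerical sanity check (a sketch with arbitrary values of $\alpha$ and $\q$, independent of the physical constants):

```python
import math

def gaussian_3d_integral(alpha, q=(0.0, 0.0, 0.0), L=10.0, n=200):
    """Approximate \int d^3p exp(-alpha (p+q)^2) over [-L, L]^3 by a
    midpoint rule.  Shifting p -> p - q factorizes the integral into
    three identical 1D Gaussians, so the result is (pi/alpha)^{3/2},
    independent of the shift q."""
    h = 2 * L / n

    def one_dim(qi):
        return h * sum(
            math.exp(-alpha * (-L + (i + 0.5) * h + qi) ** 2)
            for i in range(n)
        )

    return one_dim(q[0]) * one_dim(q[1]) * one_dim(q[2])

print(gaussian_3d_integral(1.0))                    # ~ pi^{3/2} ~ 5.568
print(gaussian_3d_integral(1.0, (0.5, -1.0, 2.0)))  # same value: shift-invariant
```

This is exactly why, once $\mathcal{F}_{\eta'}^{\p,\q}$ is independent of $\p$ at leading order, the $\p$ integral contributes only the constant factor $(\pi a^2(\eta')/\rC^2)^{3/2}$.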
Completing the $\p$ integral first, we get \begin{equation} {\delta \overline{\mathcal{R}^2}(\eta_r)_{\text{\tiny CSL}}}\approx\frac{9 \lambda {H_\text{inf}}^3 \eta_e^2}{2 M_{{\mathrm{P}}}^2 {\epsilon_\text{inf}^3} (\eta_r-2 \eta_e)^2 \pi^2 m_0^2} \int_{\eta_e}^{\eta_r}\text{d} \ln(\eta'-2 \eta_e)\int \text{d} \ln q. \end{equation} Therefore, the correction $\delta \mathcal{P}_\mathcal{R}$ to the power spectrum $\mathcal{P}_\mathcal{R}$ is given by \begin{equation} \delta \mathcal{P}_\mathcal{R}\approx\frac{9 \lambda {H_\text{inf}^3} \eta_e^2}{2 M_{{\mathrm{P}}}^2 {\epsilon_{\text{inf}}^3} (\eta_r-2 \eta_e)^2 \pi^2 m_0^2}\ln \left(\frac{2 \eta_e-\eta_r}{\eta_e} \right). \end{equation} We must note that, strictly speaking, due to the coupling between the modes in Eq.~\eqref{spectrumcorrRad}, the cosmological modes which might be outside the horizon can also receive contributions, through the $\p$ integral, from subhorizon modes satisfying $p\eta'\gg 1$, for which the approximation scheme breaks down. This was not a problem during inflation because in that era the scale factor is given by $a=-\frac{1}{{H_{\text{inf}}}\eta'}$. Consequently, the exponential function in Eq.~\eqref{spectrumcorrInf} becomes $\exp{\left(-\rC^2 {H_{\text{inf}}}^2 {\eta^{\prime}}^2(\p+\q)^2\right)}$, and as explained before, since typically $\rC\gg 1/{H_{\text{inf}}}$, it becomes necessary to have $p^2{\eta'}^2\ll 1$ (and $ q^2{\eta'}^2\ll 1$) for the integrand to be non-negligible. Due to the modified functional dependence of the scale factor during the radiation dominated era given in Eq.~\eqref{scalefactorrad}, the exponential function becomes $\exp{\left(-\rC^2 {H_\text{inf}}^2\frac{\eta^4_{e}}{(\eta'-2 \eta_{e})^2}\left(\p+\q\right)^2\right)}$. Clearly, it is no longer necessary to have $p\eta'\ll 1$ for the exponential not to be suppressed.
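The contrast between the two eras can be illustrated with a toy sketch (the numbers below are hypothetical, chosen only to make the trend visible, not the physical GRW values): during inflation the magnitude of the Gaussian exponent grows like ${\eta'}^2$ into the past, while during the radiation era it decays as $\eta'$ grows, so subhorizon modes are no longer cut off.

```python
def exponent_inflation(p_plus_q, eta, rC=1.0e3, H=1.0):
    """|exponent| of exp(-rC^2 (p+q)^2 / a^2) with a = -1/(H eta):
    grows like eta^2, so only modes with p*eta, q*eta << 1 survive."""
    return (rC * H * eta * p_plus_q) ** 2

def exponent_radiation(p_plus_q, eta, eta_e=-1.0, rC=1.0e3, H=1.0):
    """Same quantity with a = (eta - 2 eta_e)/(H eta_e^2): it decays
    as eta grows, so the Gaussian no longer suppresses subhorizon modes."""
    return (rC * H * eta_e ** 2 * p_plus_q / (eta - 2 * eta_e)) ** 2

# Deep in (toy) inflation the suppression is enormous; late in the
# (toy) radiation era it fades away for the same p + q.
print(exponent_inflation(0.1, -100.0))   # large -> integrand killed
print(exponent_radiation(0.1, 100.0))    # order one -> no suppression
```

The assumed parameter values (`rC`, `H`, `eta_e`) are placeholders; only the scaling with $\eta'$ in each era matters for the argument.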
While the condition $p\eta_{e}\ll 1$ still remains valid, as all the modes of interest are outside the horizon at the end of inflation, the expansion in $p\eta'$ and $p\eta_{r}$ needs further justification. In order to see this we notice that the exact expression in Eq.~\eqref{CurlyFRad} depends on $\eta'$ and $\eta_r$ only via the terms $p\eta'$ and $p\eta_r$ appearing in the oscillating phases. Thus, when a mode enters the horizon during the radiation dominated era (i.e. the mode $p$ now satisfies $p\eta'\gg 1$ compared to $p\eta_e\ll 1$ at the end of inflation), this phase is expected to oscillate strongly and would not yield any significant contribution to the integrand. Moreover, the $a^4$ factor in the denominator would also suppress the contribution for a given mode $p$ at a later time, when $p$ enters the horizon and $p\eta'\gg 1$. Therefore, the assumption that $p\eta'\ll 1$ and $p\eta_{r}\ll 1$ for all modes $p$ and at all times $\eta'$ is expected to provide an upper bound on the integral. \section{Linearized collapse operator} Let us consider a generic collapse operator which is linear in the perturbation $\hat{u}(\eta,\x)$ and hence in the creation and annihilation operators. For such a collapse operator $\hat{l}(\eta,\x)$ one can write \begin{equation}\label{LinColOp} \hat{l}(\eta,\x)=\int \frac{\text{d} \k}{(2\pi)^{3/2}} e^{i \k \cdot \x} \left(\chi_{\k}(\eta) \hat{a}_\k + \chi^*_{-\k} (\eta) \hat{a}_{-\k}^\dagger \right), \end{equation} where $\chi_\k (\eta)$ is a suitable function. We calculate the correction to the expectation value of the comoving curvature perturbation $\hat{\mathcal{R}}(\eta,\x)$ due to the linearized collapse operator $\hat{l}(\eta,\x)$. 
In terms of the creation and annihilation operators, $\hat{\mathcal{R}}^2(\eta,\x)$ is given by (after normal ordering) \begin{equation} \hat{\mathcal{R}}^2(\eta,\x)=\frac{\hat{u}^2(\eta,\x)}{z^2}=\frac{1}{2 {\epsilon_{\text{inf}}} M_{\mathrm{P}}^2 a^2(\eta)}\int \frac{\text{d} \p \text{d} \q}{(2\pi)^3} e^{i (\p+\q) \cdot \x}\left(f_{\eta}^{\p,\q}\hat{a}_\p \hat{a}_\q + g_{\eta}^{\p,\q}\hat{a}_{-\q}^\dagger \hat{a}_\p + g^{*\p,\q}_{\eta}\hat{a}_{-\p}^\dagger \hat{a}_\q + f^{*\p,\q}_{\eta}\hat{a}_{-\p}^\dagger \hat{a}_{-\q}^\dagger \right), \end{equation} where $f,g$ are defined in Eq.~\eqref{fandg}. Following the prescription described before [cf. Eq.~\eqref{PrDeltaPr2}], the correction due to the collapse dynamics is given by \begin{equation} {\delta \overline{\mathcal{R}^2}(\eta,\z)}= - \frac{\lambda}{4 m_0^2 \epsilon_{\text{inf}} M_{\mathrm{P}}^2 a^2(\eta)} \int_{\eta_0}^\eta \frac{\text{d} \eta'}{a(\eta')} \int \text{d} \x \text{d} \y e^{-\frac{a^2(\eta')}{4 \rC^2}(\x-\y)^2} \bra{0} \left[\hat{l}(\eta',\x),\left[\hat{l}(\eta',\y),\hat{u}^2(\z,\eta) \right]\right] \ket{0}. \end{equation} Therefore, we need to calculate the double commutator \begin{equation} \begin{split} &\left[\hat{l}(\eta',\x),\left[\hat{l}(\eta',\y),\hat{u}^2 (\eta,\z)\right]\right]= \int \frac{\text{d} \k_1 \text{d} \k_2 \text{d} \q \text{d} \p}{(2\pi)^6} e^{i \p \cdot \x}e^{i \q \cdot \y}e^{i (\k_1+\k_2)\cdot \mathbf{z}} \\ &\left[ \chi_\q \hat{a}_\q + \chi^*_{-\q} \hat{a}_{-\q}^\dagger, \left[\chi_\p \hat{a}_\p + \chi^*_{-\p} \hat{a}_{-\p}^\dagger, f^{\k_1, \k_2}\hat{a}_{\k_1}\hat{a}_{\k_2}+g^{\k_1, \k_2}\hat{a}_{-\k_2}^\dagger \hat{a}_{\k_1}+g^{*\k_1, \k_2}\hat{a}_{-\k_1}^\dagger \hat{a}_{\k_2}+ f^{*\k_1, \k_2}\hat{a}_{-\k_1}^\dagger \hat{a}_{-\k_2}^\dagger \right]\right].
\end{split}\end{equation} Explicitly, for the first term we obtain \begin{equation} \left[\chi_\q \hat{a}_\q + \chi^*_{-\q} \hat{a}_{-\q}^\dagger, \left[\chi_\p \hat{a}_\p + \chi^*_{-\p} \hat{a}_{-\p}^\dagger, f^{\k_1, \k_2}\hat{a}_{\k_1}\hat{a}_{\k_2}\right]\right]=\chi^*_{-\q} \chi^*_{-\p} f^{\k_1,\k_2} \left(\delta(\k_2+\p)\delta(\k_1+\q)+\delta(\k_1+\p)\delta(\k_2+\q) \right), \end{equation} for the second term \begin{equation} \begin{split} &\left[\chi_\q \hat{a}_\q + \chi^*_{-\q} \hat{a}_{-\q}^\dagger, \left[\chi_\p \hat{a}_\p + \chi^*_{-\p} \hat{a}_{-\p}^\dagger, g^{\k_1, \k_2}\hat{a}_{-\k_2}^\dagger \hat{a}_{\k_1}\right]\right]=\\ &-\chi_\q \chi^*_{-\p} g^{\k_1, \k_2} \delta(\k_1+\p)\delta(\q+\k_2)-\chi^*_{-\q} \chi_\p g^{\k_1, \k_2} \delta(\p+\k_2)\delta(\k_1+\q), \end{split} \end{equation} for the third term \begin{equation} \begin{split} &\left[\chi_\q \hat{a}_\q + \chi^*_{-\q} \hat{a}_{-\q}^\dagger, \left[\chi_\p \hat{a}_\p + \chi^*_{-\p} \hat{a}_{-\p}^\dagger, g^{*\k_1, \k_2}\hat{a}_{-\k_1}^\dagger \hat{a}_{\k_2}\right]\right]=\\ &-\chi_\q \chi^*_{-\p} g^{*\k_1, \k_2} \delta(\k_2+\p)\delta(\q+\k_1)-\chi^*_{-\q} \chi_\p g^{*\k_1, \k_2} \delta(\p+\k_1)\delta(\k_2+\q), \end{split} \end{equation} and, for the last term \begin{equation} \left[\chi_\q \hat{a}_\q + \chi^*_{-\q} \hat{a}_{-\q}^\dagger, \left[\chi_\p \hat{a}_\p + \chi^*_{-\p} \hat{a}_{-\p}^\dagger, f^{*\k_1, \k_2}\hat{a}_{-\k_1}^\dagger \hat{a}_{-\k_2}^\dagger \right]\right]= \chi_\q \chi_\p f^{*\k_1, \k_2} \left( \delta(\p+\k_2)\delta(\q+\k_1) + \delta(\p+\k_1) \delta(\q+\k_2) \right).
\end{equation} Collecting all the terms, and integrating over $\k_1, \k_2$, $\p$, $\x$ and $\y$, the correction ${\delta \overline{\mathcal{R}^2}(\eta)}$ is given by \begin{align} {\delta \overline{\mathcal{R}^2}(\eta)}=-\frac{\lambda \rC^3}{2m_0^2 \pi^{3/2} \epsilon_{\text{inf}} M_{\mathrm{P}}^2 a^2(\eta)} \int_{\eta_0}^\eta \frac{\text{d} \eta'}{a^4 (\eta')} \int \text{d} \q e^{-\frac{\rC^2}{a^2(\eta')}\q^2} &\left[\chi^*_\q (\eta') \chi^*_{-\q} (\eta') f^{-\q,\q}_{\eta}+\chi_{\q}(\eta') \chi_{-\q}(\eta')f^{*\q,-\q}_{\eta}\right.\nonumber\\&\left. -\chi_{\q}(\eta') \chi^*_{\q}(\eta') (g^{\q, -\q}_{\eta}+g^{*-\q,\q}_{\eta})\right]. \end{align} Using the properties $f^{-\q,\q}_{\eta}=f^{q,q}_{\eta}$, $g^{-\q,\q}_{\eta}=g^{q,q}_{\eta}$, and choosing the collapse operator as the one taken in Ref.~\cite{Martin2020}, with \begin{align} \chi_{-\k}(\eta')&=\chi_{\k}(\eta')=\chi_{k}(\eta')=\alpha_{k}(\eta')v_k(\eta')+\beta_{k}(\eta')\dot{v}_k(\eta')\label{Chi}\,,\\ \alpha_{k}(\eta) &=\frac{M_{\mathrm{P}}\eta H^3 \epsilon_{\text{inf}}}{\sqrt{2\epsilon_{\text{inf}}}} \left(-\frac{6 (\epsilon_{\text{inf}} (\epsilon_2/2+1))}{\eta ^2 k^2}+\epsilon_2+8\right)\label{alpha},\\ \beta_k(\eta)&=-\frac{M_{\mathrm{P}}\eta ^2 H^3 \epsilon_{\text{inf}}}{\sqrt{2 \epsilon_{\text{inf}}}}\left(\frac{6 \epsilon_{\text{inf}}}{\eta ^2 k^2}-2\right)\label{beta}, \end{align} the correction to the mean squared value of the comoving curvature perturbation becomes \begin{equation}\label{CurlyFIntLin} {\delta \overline{\mathcal{R}^2}(\eta_{e})}=-\frac{\lambda \rC^3}{2 m_0^2 \pi^{3/2} \epsilon_{\text{inf}} M_{\mathrm{P}}^2 a^2(\eta_{e})} \int_{\eta_0}^{\eta_{e}} \frac{\text{d} \eta'}{a^4 (\eta')} \int \text{d} \q e^{-\frac{\rC^2}{a^2(\eta')}\q^2}\mathcal{F}^q_{\eta'}\,, \end{equation} with \begin{equation} \mathcal{F}^q_{\eta'}= \chi^*_q (\eta') \chi^*_{q} (\eta') f^{q,q}_{\eta_{e}}+\chi_{q}(\eta') \chi_{q}(\eta')f^{*q,q}_{\eta_{e}}-\chi_{q}(\eta' ) \chi^*_{q}(\eta' ) (g^{q, 
q}_{\eta_{e}}+g^{*q,q}_{\eta_{e}}). \end{equation} During the inflationary era $a^4(\eta)=1/(H^4\eta^4)$, and to leading order in $q\eta'$, $\mathcal{F}^q_{\eta'}$ becomes \begin{equation}\label{CurlyFLinear} \mathcal{F}^q_{\eta'}\approx -18 \frac{\epsilon_{\text{inf}}^3 H^6 \eta'^2 M^2_{\mathrm{P}}}{q^{4}\eta_e^2}. \end{equation} We notice that for the GRW value of $\rC=1.24\times 10^{27}M^{-1}_{\mathrm{P}}$, the wavelengths of the CMB modes $10^{-60}M_{{\mathrm{P}}}\lesssim q\lesssim 10^{-56}M_{{\mathrm{P}}}$ are stretched to scales greater than $\rC$ at the end of inflation $\eta_e=-10^{34}M^{-1}_{\mathrm{P}}$, leading to $\rC\ll a(\eta_e)/q$. Thus, using Eq.~\eqref{CurlyFLinear} to complete the integral in Eq.~\eqref{CurlyFIntLin}, to leading order we get \begin{equation} {\delta \overline{\mathcal{R}^2}(\eta_{e})}\approx \frac{135}{4}\int \text{d} \ln q\, \frac{\epsilon_{\text{inf}}^2\lambda H^{5}}{m^2_{0}q^8 \rC^4}. \end{equation} The correction to the power spectrum for this choice of linearized collapse operator can easily be read off as \begin{equation} \delta\mathcal{P}_{\mathcal{R}}\approx \frac{\epsilon_{\text{inf}}^2\lambda H^{5}}{m^2_{0}q^8 \rC^4}. \end{equation} We see that for the CMB modes $10^{-60}M_{{\mathrm{P}}}\lesssim q\lesssim 10^{-56}M_{{\mathrm{P}}}$, the correction is hundreds of orders of magnitude larger than the observed value, and is also strongly scale dependent. Our aim in this section is simply to highlight the differences that appear when choosing different collapse operators. When one takes the collapse operator to be quadratic in the perturbations (and hence in the creation and annihilation operators), which we took to be proportional to the Hamiltonian density of the perturbations, the dynamical collapse-induced correction is negligible.
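The order-of-magnitude statement $\rC \ll a(\eta_e)/q$ above can be checked quickly with the quoted numbers (a sketch in Planck units, $M_{\mathrm{P}} = 1$, using $a(\eta) = -1/(H_{\text{inf}}\,\eta)$ during inflation):

```python
# All quantities in Planck units (M_P = 1), values quoted in the text.
rC    = 1.24e27    # GRW localization length, in 1/M_P
H_inf = 1.0e-5     # inflationary Hubble rate, in M_P
eta_e = -1.0e34    # conformal time at the end of inflation, in 1/M_P

a_end = -1.0 / (H_inf * eta_e)   # scale factor at the end of inflation

for q in (1e-60, 1e-58, 1e-56):  # comoving CMB wavenumbers, in M_P
    phys_len = a_end / q          # physical wavelength at eta_e
    print(f"q = {q:.0e}: a(eta_e)/q = {phys_len:.2e}, ratio to rC = {phys_len / rC:.2e}")
```

For the largest CMB scales the physical wavelength exceeds $\rC$ by several orders of magnitude, while at the upper end of the band the two scales become comparable.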
However, the linearized collapse operator proportional to the linearized matter-energy density operator, as studied in Ref.~\cite{Martin2020}, indeed leads to much larger corrections, inconsistent with observations. One possible explanation of the different results obtained for the two collapse operators is that the numerical value of the Hamiltonian density of the perturbations $\hat{\mathcal{H}}_{0}(\eta,\x)$ is several orders of magnitude smaller than the matter-energy density $\hat{\delta\rho}$, which can be obtained from $\hat{l}$ in Eq.~\eqref{LinColOp} by setting $\chi_{\k}(\eta)$ as defined in Eqs.~\eqref{Chi}--\eqref{beta}. To get a crude estimate of this huge difference during inflation, one can compare $\bra{0}\hat{\mathcal{H}}_{0}(\eta,\x)\ket{0}\sim \bra{0}:\hat{\mathcal{H}}_{0}(\eta,\x):^2\ket{0}^{1/2}\sim \mathcal{O}\left(k^3 |b_\eta^{\k,\k}|\right)$ with $\bra{0}\hat{\delta\rho}^2(\eta,\x)\ket{0}^{1/2}\sim \mathcal{O}\left(k^{3/2} |\chi_{\k}|\right)$ at the time of horizon crossing $|\eta| = 1/k$, for modes of cosmological interest $k\approx 10^{-60} M_{\mathrm{P}}$. This difference at the level of perturbations is not surprising, since the total energy of the system and the total matter-energy differ by several orders of magnitude already at the classical level. One can see this by noting that $\bar{\rho}\approx V(\bar{\phi})$ is approximately constant during inflation, and therefore the total matter-energy content increases by a factor of $a^3$. In contrast, the total energy of the system remains conserved, with the increase in matter-energy compensated by an equal increase in the negative gravitational potential \cite{Guth:2004tw}. \section{Localization of the wavefunction} We analyze the claim made in Ref.~\cite{Martin2021}, where it is argued that the power spectrum vanishes for our choice of the collapse operator, namely, the Hamiltonian density of scalar cosmological perturbations.
Let us consider a quantum system evolving under a stochastic dynamics controlled by a noise $\xi_t$ with probability distribution $P[\xi]$. In particular, the standard CSL model \cite{Bassi2003} and the dynamical collapse model considered in our work are two particular cases of this kind of stochastic dynamics. For an operator $\hat{u}$, the expectation value is defined through the relation \begin{equation} \bar{u}=\mathbb{E}_\xi \ave{\hat{u}}_\xi = \int \text{d} \xi P[\xi] \ave{\hat{u}}_\xi. \end{equation} In the above expression, $\ave{\hat{u}}_\xi$ denotes the quantum expectation value for a given realization of the noise, and $\mathbb{E}_\xi$ denotes the average with respect to the noise. Let us consider the particular case in which $\bar{u}=0$. In this case, the variance $\sigma_u^2$ for $\hat{u}$ reduces to \begin{equation} \sigma_u^2 = \mathbb{E}_\xi \ave{\hat{u}^2}_\xi = \int \text{d} \xi P[\xi] \ave{ \hat{u}^2 }_\xi. \end{equation} Let us assume that, for each realization of the noise, the squared amplitude of the wavefunction $\Psi[u]_\xi$ reduces to a Dirac delta centered on some value (which will be equal to the quantum expectation value for that realization), i.e., $|\Psi[u]_\xi|^2=\delta(u-\ave{u_{\xi}})$. In this case, it trivially follows that $\ave{\hat{u}^2}_\xi=\ave{\hat{u}}_\xi^2$, and moreover \begin{equation} {\mathop{\mathbb{E}}}_\xi \langle \hat{u}^2 \rangle_{\xi} \; \stackrel{\left|\Psi[u]_{\xi}\right|^2 = \delta(u - \langle u \rangle_{\xi})}{\longrightarrow} \; {\mathop{\mathbb{E}}}_\xi \langle\hat{u}\rangle_{\xi}^2 \label{locwf}. \end{equation} We calculate the power spectrum using the LHS of Eq.~\eqref{locwf}, where no localization assumption is needed. In Ref.~\cite{Martin2021}, on the contrary, the power spectrum is computed using the RHS of Eq.~\eqref{locwf}, under the assumption of a fully localized wavefunction.
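The gap between the two sides of this relation can be made concrete with a toy Monte Carlo sketch (hypothetical numbers, not the actual CSL dynamics): model each noise realization by a quantum mean $\langle u\rangle_\xi$ drawn at random plus a residual quantum variance. The two averages then differ exactly by that residual variance and coincide only in the fully localized limit:

```python
import random

random.seed(0)

# Toy model: for each noise realization xi, the state has a quantum
# mean <u>_xi (drawn from the noise, zero on average) and a residual
# per-realization quantum variance var_q.  Full localization of the
# wavefunction corresponds to var_q -> 0.
def stochastic_averages(var_q, n=200_000):
    lhs = 0.0   # accumulates E_xi <u^2>_xi
    rhs = 0.0   # accumulates E_xi <u>_xi^2
    for _ in range(n):
        mean_xi = random.gauss(0.0, 1.0)
        lhs += mean_xi ** 2 + var_q   # <u^2>_xi = <u>_xi^2 + var_q
        rhs += mean_xi ** 2
    return lhs / n, rhs / n

lhs, rhs = stochastic_averages(var_q=0.5)
print(lhs - rhs)    # equals the residual quantum variance, 0.5

lhs0, rhs0 = stochastic_averages(var_q=0.0)
print(lhs0 - rhs0)  # 0: the two sides agree only for a localized state
```

This is the content of the discussion above: using the LHS requires no assumption on the degree of localization, while the RHS implicitly sets the residual quantum variance to zero.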
It is clear that the expression used in Ref.~\cite{Martin2021} is a special case of the standard, more general formula used in our approach. Thus, we see that the assumption of Ref.~\cite{Martin2021} does not hold in general, and in fact need not be applied to calculate the variance, as we have shown in our work. Moreover, our approach is a straightforward generalization of the power spectrum definition in standard cosmology [cf. Eq.~\eqref{PowerSpectrumdef}] to include the effects of the CSL dynamics, as the only difference is the incorporation of the stochastic average over the realizations. Therefore, the claim that the power spectrum of $\hat{\mathcal{R}}$ vanishes would have been valid only under the assumption of a perfect localization of the wavefunction, which is not the case considered in our work. Our results show that by choosing a collapse operator which scales with the Hamiltonian density of the scalar perturbations, one can obtain well-defined expressions for the power spectrum, consistent with observations. It is certainly true that with our choice of the collapse operator, there is no perfect localization of the wavefunction during inflation. However, as the CSL model describes a process continuous in time, it may be argued that the localization can occur later in the evolution of the universe, and not necessarily already at the level of perturbations. Therefore, there is still room to solve the measurement problem in cosmology. \end{document}
\section{Introduction} \label{sec:org3daeb50} Today, supercomputers with 100,000~cores and more are common, and several machines beyond the 1,000,000~cores mark are already in production. These compute resources are interconnected through complex non-uniform memory hierarchies and network infrastructures. This complexity requires careful optimization of application parameters, such as granularity, process organization, or algorithm choice, as these have an enormous impact on load distribution and network usage. Scientific application developers and users often spend a substantial amount of time and effort running their applications at different scales solely to tune parameters for optimizing their performance. Whenever actual performance does not match expectations, it can be challenging to understand whether the mismatch originates from a misunderstanding of the application or from a machine misconfiguration. Similar difficulties are encountered when (co-)designing supercomputers for specific applications. A large part of this tuning work could be simplified if a generic and faithful performance prediction tool were available. This article presents a decisive step in this direction. Several techniques have been proposed to predict the performance of a given application on a supercomputer. A first approach consists in building a mathematical performance model (i.e., an analytic formula) accounting for the key characteristics of both the platform and the application. However, such models are rarely accurate, except for elementary applications on highly regular and well-provisioned platforms, and can thus only be used to predict broad trends. A more precise approach consists in capturing a trace of the application at scale and replaying it using a simulator.
This is an effective approach for capacity planning, but since the application trace is specific to a given set of parameters (and even specific to a given run for dynamic applications that exhibit non-deterministic behaviors due to, e.g., the use of asynchronous collective operations), it cannot be used to study how application parameters should be set for optimizing performance. The main difficulty resides in capturing and modeling the interplay between the application and the platform while faithfully accounting for their respective complexity. A promising approach recently pioneered in several tools~\cite{smpi,sstmacro,xsim} consists in emulating the application in a controlled way so that a platform simulator governs its execution. Although the scalability of this approach is a primary concern that has already received much attention, the accuracy of the simulation is even more challenging. It remains an open research question, since Engelmann and Naughton~\cite{xsim_network} report, for example, an error ranging from 20\% to 40\% for NPB LU when using 128 ranks. In a previous publication~\cite{cornebize:hal-02096571}, we presented how an application like HPL can be emulated at a reasonable cost on a single commodity server to study scenarios similar to qualification runs of supercomputers for the Top500 ranking~\cite{TOP500}. We also showed how to predict the performance of HPL for a specific set of parameters on a recent cluster (running a thousand MPI ranks) within a few percent of reality. The tuning of HPL is generally performed by skilled engineers, but how it is done is considered sensitive information by vendors and is thus not well documented. HPL is particularly challenging to study because it implements several custom non-trivial MPI collective communication algorithms to efficiently overlap communications with computations.
Since it is a tightly coupled application, it is also expected to be quite sensitive to platform variability (both spatial and temporal), but, although this sensitivity is well known to HPL experts, to the best of our knowledge it has never been properly quantified. We demonstrated in our previous publication~\cite{cornebize:hal-02096571} that a careful modeling of variability was a key ingredient to obtain good predictions, and we study this sensitivity in more detail hereafter (Section~\ref{sec:whatif}). In this article, we conduct an extensive validation study. We show that our approach allows us to consistently predict the real-life performance of HPL within a few percent regardless of its input parameters, thereby showing that application parameters can be tuned fully in simulation. The reason why our approach is particularly effective is two-fold: (1) the control flow of HPL is data-independent (up to some micro-variations) and (2) the bulk of communications consists of large messages. Although our work focuses on HPL, it could be applied to other similar MPI applications satisfying these conditions. Throughout this validation, which spanned two years, we also highlight key issues that may arise when modeling the platform and should be carefully addressed to obtain reliable predictions. Last, given the sensitivity of applications to computing and communication resource variability, we showcase how to conduct what-if performance analyses of HPC applications in a capacity planning context. This article is organized as follows: Section~\ref{sec:con} presents the main characteristics of the HPL application and provides information on how the runs are conducted on modern supercomputers. In Section~\ref{sec:smpi}, we briefly present the simulator we used for this work, SimGrid/SMPI, the modifications of HPL that were required to obtain a scalable simulation, and some initial validation results presented in an earlier work~\cite{cornebize:hal-02096571}.
These results highlight the importance of modeling both spatial and temporal variability. In Section~\ref{sec:validation}, we compare simulation results with real experiments through two typical HPL performance studies that cover a wider range of application parameters. Section~\ref{sec:whatif} presents how our HPL surrogate can be used to study and possibly optimize the performance of HPL in the presence of uncertainty on the platform. Section~\ref{sec:relwork} discusses related work and explains how our approach compares with other approaches. Section~\ref{sec:cl} concludes by discussing future work. \section{Background on High-Performance Linpack} \label{sec:orgb8a8bb2} \label{sec:con} \label{sec:hpl} HPL implements a matrix factorization based on a right-looking variant of the LU factorization with row partial pivoting and allows for multiple look-ahead depths. In this work, we use the freely available reference implementation of HPL~\cite{HPL}, which relies on MPI, and from which most vendor-specific implementations (e.g., from Intel or ATOS) have been derived. Figure~\ref{fig:hpl_overview} illustrates the principle of the factorization, which consists of a series of panel factorizations followed by an update of the trailing sub-matrix. HPL uses a two-dimensional block-cyclic data distribution of \(A\), which allows for a smooth load-balancing of the work across iterations.
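The two-dimensional block-cyclic distribution just mentioned can be sketched in a few lines (an illustration of the mapping rule, not HPL's actual code): block $(I, J)$ of the matrix is owned by process $(I \bmod P,\, J \bmod Q)$, so as the trailing sub-matrix shrinks, every process keeps a share of the remaining work.

```python
def owner(i, j, nb, P, Q):
    """Process-grid coordinates owning matrix entry (i, j) under a 2D
    block-cyclic distribution with blocking factor nb on a P x Q grid."""
    return ((i // nb) % P, (j // nb) % Q)

# With N=8, NB=2 on a 2x2 grid, block-rows cycle 0,1,0,1 over the
# process rows (and likewise for block-columns over process columns).
N, NB, P, Q = 8, 2, 2, 2
grid = [[owner(i, j, NB, P, Q) for j in range(N)] for i in range(N)]
for row in (0, 2):  # first rows of two consecutive block-rows
    print(grid[row])
```

The parameter names mirror the HPL notation introduced below (`NB`, `P`, `Q`); the tiny matrix size is for illustration only.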
\begin{figure}[t] \newcommand{\mykwfn}[1]{{\bf\textsf{#1}}}% \SetAlFnt{\sf}% \SetKwSty{mykwfn}% \SetKw{KwStep}{step}% \centering \begin{minipage}[m]{0.4\linewidth} \begin{tikzpicture}[scale=0.23] \draw (0, 0) -- (0, 12) -- (12, 12) -- (12, 0) -- cycle; \foreach \i in {2}{ \draw [fill=lightgray] (\i, 0) -- (\i, 12-\i) -- (12, 12-\i) -- (12, 0) -- cycle; \draw [fill=gray] (\i, 12-\i) -- (\i, 12-\i-1) -- (\i+1, 12-\i-1) -- (\i+1, 12-\i) -- cycle; \draw[very thick, -latex] (\i,12-\i) -- (\i+2,12-\i-2); \draw[<->] (\i, 12-\i+0.5) -- (\i+1, 12-\i+0.5) node [pos=0.5, yshift=+0.15cm] {\scalebox{.8}{\texttt{NB}}}; } \foreach \i in {3}{ \draw [fill=white] (\i, 0) -- (\i, 12-\i) -- (12, 12-\i) -- (12, 0) -- cycle; \draw (\i,12-\i) -- (\i,0); \draw[very thick, -latex] (\i,12-\i) -- (\i+2,12-\i-2); } \draw[dashed] (0, 12) -- (12, 0); \node(L) at (2, 2) {\ensuremath{\boldsymbol{L}}}; \node(U) at (10, 10) {\ensuremath{\boldsymbol{U}}}; \node(A) at (8, 4) {\ensuremath{\boldsymbol{A}}}; \draw[<->] (0, -0.5) -- (12, -0.5) node [pos=0.5, yshift=-0.3cm] {$N$}; \end{tikzpicture} \end{minipage}% \begin{minipage}[m]{0.6\linewidth} \let\@latex@error\@gobble \begin{algorithm}[H] allocate and initialize $A$\; \For{$k=N$ \KwTo $0$ \KwStep \texttt{NB}}{ allocate the panel\; factor the panel\; broadcast the panel\; update the sub-matrix; } \end{algorithm} \vspace{1em} \end{minipage}\vspace{-.5em} \caption{Overview of High-Performance Linpack.}\vspace{-1.5em} \label{fig:hpl_overview} \end{figure} The sequential computational complexity of this factorization is \(\mathrm{flop}(N) = \frac{2}{3}N^3 + 2N^2 + \O(N)\) where \(N\) is the order of the matrix to factorize. 
The time complexity on a \(P\times Q\) processor grid can thus be approximated by $$T(N) \approx \frac{\left(\frac{2}{3}N^3 + 2N^2\right)}{P\cdot{}Q\cdot{}w} + \Theta((P+Q)\cdot{}N^2),$$ where \(w\) is the flop rate of a single node and the second term corresponds to the communication overhead which is influenced by the network capacity and many configuration parameters of HPL\@. Indeed, HPL implements several custom MPI collective communication algorithms to efficiently overlap communications with computations. The main parameters of HPL are thus: \begin{itemize} \item \(N\) is the order of the square matrix \(A\). \item \texttt{NB} is the ``blocking factor'', i.e.,\xspace the granularity at which HPL operates when panels are distributed or worked on. This parameter influences the efficiency of the \texttt{dgemm} BLAS kernel, which is the kernel used in the sub-matrix updates, but also the efficiency of MPI communications. \item \(P\) and \(Q\) denote the number of process rows and process columns. For this algorithm, the \emph{total} amount of data transfers is proportional to \((P+Q).N^2\), which generally favors virtual topologies where \(P\) and \(Q\) are approximately equal. \item \texttt{RFACT} determines the panel factorization algorithm. Possible values are \texttt{Crout}, \texttt{left-} or \texttt{right-looking}. \item \texttt{SWAP} specifies the swapping algorithm used while pivoting. Two algorithms are available: one is based on a \emph{binary exchange} (along a virtual tree topology) and the other one is based on a \emph{spread-and-roll} (with a higher number of parallel communications). HPL also provides a panel-size threshold triggering a switch from one variant to the other. \item \texttt{BCAST} sets the algorithm used to broadcast a panel of columns over the process columns. 
Legacy versions of the MPI standard supported non-blocking point-to-point communications but no non-blocking collective operations, which is why HPL ships with a total of six self-implemented variants to overlap the time spent waiting for an incoming panel with updates to the trailing matrix: \texttt{ring}, \texttt{ring-modified}, \texttt{2-ring}, \texttt{2-ring-modified}, \texttt{long}, and \texttt{long-modified}. The \texttt{modified} versions guarantee that the process right after the root (i.e.,\xspace the process that will become the root in the next iteration) receives data first and does not further participate in the broadcast. This process can thereby start working on the panel as soon as possible. The \texttt{ring} and \texttt{2-ring} versions each broadcast along the corresponding virtual topologies, while the \texttt{long} version is a \emph{spread and roll} algorithm where messages are chopped into \(Q\) pieces. This generally leads to better bandwidth exploitation. The \texttt{ring} and \texttt{2-ring} variants rely on \texttt{MPI\_Iprobe}, meaning they return control if no message has been fully received yet, hence facilitating partial overlap of communication with computations. In HPL 2.1 and 2.2, this capability has been deactivated for the \texttt{long} and \texttt{long-modified} algorithms. A comment in the source code states that some machines apparently get stuck when there are too many ongoing messages. \item \texttt{DEPTH} controls how many iterations of the outer loop can overlap with each other. As indicated in the HPL documentation, a depth equal to 1 often gives better results than a depth equal to 0 for large problem sizes, but a look-ahead depth of 3 or larger is not expected to bring any improvement. \end{itemize} All the previously listed parameters interact with the capabilities of the interconnection network and of the MPI library to influence the overall performance of HPL, which makes it very difficult to predict precisely.
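As a sanity check, the first-order time model of Section\ref{sec:hpl} can be evaluated numerically to obtain a compute-only lower bound. The following sketch is illustrative only: the flop rate \(w\) is a hypothetical per-process value, not a measured calibration, and the communication term is deliberately ignored.

```python
# Compute-only estimate from T(N) ~ (2/3 N^3 + 2 N^2) / (P * Q * w).
# The Theta((P+Q) * N^2) communication term is ignored, so this is a
# lower bound, not a prediction.

def hpl_flops(n):
    """Dominant terms of the sequential flop count of the factorization."""
    return (2.0 / 3.0) * n**3 + 2.0 * n**2

def hpl_time_lower_bound(n, p, q, w):
    """Runtime estimate on a P x Q grid with w flop/s per process."""
    return hpl_flops(n) / (p * q * w)

# Example: N = 250000 on a 32 x 32 grid of hypothetical 50 Gflop/s processes.
t = hpl_time_lower_bound(250_000, 32, 32, 50e9)  # roughly 200 s
```

Even this crude bound makes the role of the geometry visible: the compute term is insensitive to how \(P\cdot Q\) is split, so any performance difference between geometries of equal \(P\cdot Q\) must come from the communication term.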
To illustrate the diversity of real-life configurations, we report in Table\ref{fig:typical_run} a few configurations used for the TOP500 ranking that some colleagues agreed to share with us. \begin{table}[t] \vspace{-1em} \caption{Typical HPL configurations.\label{fig:typical_run}} \scalebox{.9}{\begin{tabular}{l|lll} \multicolumn{1}{l|}{} & Stampede@TACC & Theta@ANL & \\ \multicolumn{1}{l|}{} & \#6th \qquad June 2013 & \#18th \qquad Nov. 2017 & \\ \hline \texttt{Rpeak} & \NSI{8520.1}{\tera\flops} & \NSI{9627.2}{\tera\flops} & \\ $N$ & \Num{3875000} & \Num{8360352} & \\ \texttt{NB} & \Num{1024} & \Num{336} & \\ \texttt{P}$\times$\texttt{Q} & 77$\times$78 & 32$\times$101 & \\ \texttt{RFACT} & Crout & Left & \\ \texttt{SWAP} & Binary-exch. & Binary-exch. & \\ \texttt{BCAST} & Long modified & 2 Ring modified & \\ \texttt{DEPTH} & 0 & 0 & \\ \hline \texttt{Rmax} & \NSI{5168.1}{\tera\flops} & \NSI{5884.6}{\tera\flops} & \\ Duration & 2 hours & 28 hours & \\ Memory & \NSI{120}{\tera\byte} & \NSI{559}{\tera\byte} & \\ MPI ranks & 1/node & 1/node & \\ \end{tabular}}\vspace{-1em} \end{table} \label{sec:con:diff} The performance typically achieved by supercomputers (\texttt{Rmax}) needs to be compared to the much larger peak performance (\texttt{Rpeak}). The difference can be attributed to the node usage, to the MPI library, to the network topology that may be unable to deal with the intense communication workload, to load imbalance among nodes (e.g.,\xspace due to a defect, system noise,\ldots), to the algorithmic structure of HPL, etc. All these factors make it difficult to know precisely what performance to expect without running the application at scale.
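The memory figures reported in Table\ref{fig:typical_run} follow directly from the matrix order: the dense \(N\times N\) double-precision matrix \(A\) occupies \(8N^2\) bytes. A quick check against the table:

```python
def hpl_matrix_bytes(n):
    """Memory footprint of the dense N x N double-precision matrix A."""
    return 8 * n * n

# Values of N taken from the table of typical configurations.
stampede_tb = hpl_matrix_bytes(3_875_000) / 1e12  # ~120 TB, as reported
theta_tb = hpl_matrix_bytes(8_360_352) / 1e12     # ~559 TB, as reported
```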
Due to the complexity of both HPL and the underlying hardware, simple performance models (analytic expressions based on \(N, P, Q\) and estimations of platform characteristics, as presented in Section\ref{sec:hpl}) may at best be used to determine broad trends but can by no means accurately predict the performance for each configuration (e.g.,\xspace consider the exact effect of HPL's six different broadcast algorithms on network contention). Additionally, these expressions do not allow engineers to improve the performance by actively identifying performance bottlenecks. For complex optimizations such as partially non-blocking collective communication algorithms intertwined with computations, a very faithful model of both the application and the platform is required. \section{Emulating HPL with SimGrid/SMPI} \label{sec:orgc7ebd3c} \label{sec:smpi} In this section, we present an overview of SimGrid/SMPI and of the modifications of HPL required to obtain a scalable simulation, together with a first validation of the simulations. The results of Sections\ref{sec:smpi:nutshell}--\ref{sec:validation.single} previously appeared in a conference publication\cite{cornebize:hal-02096571} and are included here for completeness. \subsection{SimGrid/SMPI in a Nutshell} \label{sec:orgdee84ec} \label{sec:smpi:nutshell} SimGrid\cite{simgrid} is a flexible and open-source simulation framework that was initially designed in 2000 to study scheduling heuristics tailored to heterogeneous grid computing environments but was later extended to study cloud and HPC infrastructures. The main development goal for SimGrid has been to provide validated performance models, particularly for scenarios making heavy use of the network. Such a validation usually consists of comparing simulation predictions with real experiments to confirm or invalidate, and thereby improve, the network and application models.
SMPI, a simulator based on SimGrid, has been developed and used to simulate unmodified MPI applications written in C/C++ or FORTRAN\cite{smpi}. To this end, SMPI maps every MPI rank of the application onto a lightweight simulation thread. These threads are then run in mutual exclusion and controlled by SMPI, which measures the time spent computing between two MPI calls. This duration is injected into the simulator as a simulated delay, scaled up or down depending on the speed difference between the simulated machine and the simulation machine. The complex optimizations done in real MPI implementations need to be considered when predicting the performance of applications. For instance, the ``eager'' and ``rendezvous'' protocols are selected based on the message size, with each protocol having its own synchronization semantics, which strongly impacts performance. Another challenge is the modeling of network topologies and contention. SMPI relies on SimGrid's communication models, where each ongoing communication is represented as a single \emph{flow} (as opposed to a collection of individual packets). Assuming steady state, contention between active communications can then be modeled as a bandwidth-sharing problem while accounting for non-trivial phenomena (e.g.,\xspace cross-traffic interference\cite{Velho_TOMACS13}). The time spent in MPI is thus derived from the SMPI network model, which accounts for MPI peculiarities (depending on the message size), the machine topology, and the contention with all other ongoing flows. For more details, we refer the interested reader to\cite{smpi}. \begin{figure}[ht] \centering \includegraphics[width=.9\linewidth]{figures/smpi_workflow.pdf} \caption{Experimental and simulation workflow with SMPI.}\vspace{-1em} \label{fig:smpi_workflow} \labspace \end{figure} Figure\ref{fig:smpi_workflow} provides an overview of how performance evaluation studies are conducted with SMPI and in this article.
First, a series of benchmarks is conducted on the target machine to calibrate (step 1) the platform model. Once this model is built, the application can be simulated over SMPI (step 2) at low cost to predict performance while varying its parameters (e.g., for application tuning) without resorting to the target machine anymore. This approach should be contrasted with the classical one which solely relies on real executions on the target machine (step 3). In this article, as we are particularly interested in evaluating how accurate the predictions are, we propose an extensive comparison of predicted performance with measured performance (step 4). More precisely, we show in Sections\ref{sec:validation.single}-\ref{sec:validation} that predictions are particularly faithful for HPL provided the platform is calibrated with care (steps 1-4) and we show in Section\ref{sec:whatif} how specific characteristics of HPL can be studied in simulation by slightly varying and extrapolating the platform model (steps 1-2). \subsection{Emulating HPL} \label{sec:org728a7df} \label{sec:em} \begin{figure}[!b] \centering \lstset{frame=bt,language=C,numbers=none,escapechar=|}\lstinputlisting{HPL_dgemm_macro_simple.c} \caption{Non-intrusive macro replacement with a very simple performance model.\label{fig:macro_simple}} \end{figure} HPL relies heavily on BLAS kernels such as \texttt{dgemm} (for matrix-matrix multiplication). Since these kernels' output does not influence the control flow, simulation time can be reduced considerably by substituting these function calls with a performance model of the respective kernel. Figure\ref{fig:macro_simple} shows an example of this macro-based mechanism that allows us to keep HPL code modifications to an absolute minimum. The \texttt{(1.029e-11)} value represents the inverse of the flop rate for this compute kernel and is obtained by benchmarking the target nodes. 
The kernel's estimated duration is calculated based on the given parameters and passed on to \texttt{smpi\_execute\_benched}, which advances the simulated clock of the executing rank by this estimate. Skipping compute kernels makes the content of output variables invalid, but in simulation, only the application's behavior, not the correctness of the computation results, is of concern. These minor modifications to the original source code (HPL comprises 16K lines of ANSI C over 149 files; our modifications only changed 14 files with 286 line insertions and 18 deletions) enabled us to simulate the configuration used for the Stampede cluster in 2013 for the TOP500 ranking (see Table\ref{fig:typical_run}) in less than 62 hours and using \NSI{19}{\giga\byte} on a single node of a commodity cluster (instead of \NSI{120}{\tera\byte} of RAM over a 6006-node supercomputer). Additional speed-up could probably be obtained by modifying HPL further, but our primary interest in this article is the prediction quality.
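The substitution mechanism boils down to maintaining a per-rank simulated clock that is advanced either by measured compute bursts (scaled for the target machine) or by modeled kernel durations. The toy sketch below illustrates the idea only; the class and method names are ours, not the SimGrid/SMPI API.

```python
import time

class ToySimulatedRank:
    """Toy illustration of SMPI-style time injection (not the real API)."""

    def __init__(self, host_speed, simulated_speed):
        # Measured durations are scaled by the speed ratio between the
        # simulation host and the simulated machine.
        self.scale = host_speed / simulated_speed
        self.simulated_clock = 0.0

    def run_compute(self, kernel, *args):
        """Really run a kernel, then inject its scaled measured duration."""
        start = time.perf_counter()
        result = kernel(*args)
        self.simulated_clock += (time.perf_counter() - start) * self.scale
        return result

    def inject_modeled_delay(self, seconds):
        """Skip a kernel and advance the clock by a model estimate,
        in the spirit of smpi_execute_benched."""
        self.simulated_clock += seconds
```

Here, `run_compute` corresponds to the emulation of unmodeled code, while `inject_modeled_delay` corresponds to the macro substitution of Figure\ref{fig:macro_simple}.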
\begin{figure*}[!t] \vspace{-1em} \begin{minipage}[t]{\linewidth}\null\vspace{-.75em} \centering \begin{subfigure}{.33\linewidth} \includegraphics[width=\linewidth]{figures/kernels/dgemm_heterogeneity_calib.png} \caption{\texttt{dgemm} heterogeneity\label{fig:dgemm_het}} \end{subfigure}% \begin{subfigure}{.33\linewidth} \includegraphics[width=\linewidth]{figures/kernels/dgemm_model_calib.png} \caption{\texttt{dgemm} model\label{fig:dgemm_poly}} \end{subfigure}% \begin{subfigure}{.33\linewidth} \includegraphics[width=\linewidth]{figures/kernels/dlatcpy_model.png} \caption{\texttt{HPL\_dlatcpy} model\label{fig:HPL_var}} \end{subfigure} \caption{Illustrating the realism of modeling for BLAS and HPL functions.}\vspace{-1em} \label{fig:blas_var} \end{minipage}\hfill% \end{figure*} \label{sec:modeling} Most BLAS kernels have several parameters from which a straightforward model can generally be identified easily (e.g.,\xspace proportional to the product of the parameters), but refinements including the individual contribution of each parameter as well as the \emph{spatial} and \emph{temporal} variability of the operation are also possible. All the simulations hereafter have been done with the following model for the \texttt{dgemm} kernel: \begin{equation} \label{eq:dgemm.complex} \begin{split} \text{For each processor $p$, } \textsf{dgemm}_{p}(M, N, K) \sim \mathcal{H}(\mu_{p}, \sigma_{p})\\ \begin{cases} \mu_p &= \alpha_pMNK + \beta_pMN + \gamma_pMK + \delta_pNK + \epsilon_p\\ \sigma_p &= \omega_pMNK + \psi_pMN + \phi_pMK + \tau_pNK + \rho_p \end{cases}, \end{split} \end{equation} where \(\mathcal{H}(\mu, \sigma)\) denotes a half-normal random variable with parameters \(\mu,\sigma\) accounting for the expectation and the standard deviation. The dependency on \(p\) allows us to account for platform heterogeneity (since \(\alpha_p,\beta_p, \dots,\rho_p\) can be specific to each node), i.e.,\xspace the aforementioned spatial variability.
Figure\ref{fig:dgemm_het} illustrates the importance of distinguishing between nodes: each color and each regression line under a simple linear model corresponds to a different CPU, whereas the black dotted line corresponds to a regression line over all the nodes. Figure\ref{fig:dgemm_poly} illustrates the gain brought by a fully polynomial model (blue) over a simple linear model (black) for a given node. Indeed, for \(M.N.K \approx 4.5 \times 10^9\), some durations are systematically higher regardless of the node. In this particular set of experiments, the corresponding combinations of \(M,N,K\) correspond to particular (e.g., tall and skinny) matrix geometries that are better handled by a full polynomial model. Last, the \(\sigma_p\) parameter allows us to account for (short-term) temporal variability, i.e.,\xspace to model the fact that the durations of two successive calls to \texttt{dgemm} with the same parameters \(M, N, K\) are never identical. Modeling this variability is important as it may propagate through the communication pattern of the application (late sends and late receives). Figure\ref{fig:dgemm_poly} illustrates the gain brought by modeling this variability (orange). The rationale for using a half-normal distribution rather than a normal distribution stems from the natural positive skewness of compute kernel durations. This model is much more complex than the simple deterministic one used in Figure\ref{fig:macro_simple} but, as we will explain, this complexity is key to obtaining good performance predictions\cite{cornebize:hal-02096571}. There are four other BLAS kernels (e.g., \texttt{daxpy}) and a few HPL kernels (often related to memory management), but their total duration represents a negligible fraction of the overall execution time; they have therefore been modeled with a simple deterministic and homogeneous model such as $\texttt{daxpy}(N) = \alpha N + \beta$ or $\texttt{HPL\_dlatcpy}(M,N) = \alpha M.N + \beta$ (see Figure\ref{fig:HPL_var}).
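Eq.\eqref{eq:dgemm.complex} can be read as a small duration generator. The sketch below uses made-up coefficients and implements the half-normal as a location-scale transform \(\mu + \sigma\lvert Z\rvert\), which is only one possible reading of the \((\mu, \sigma)\) parameterization above:

```python
import random

def dgemm_duration(m, n, k, coeffs, rng=random):
    """Sample a dgemm duration from the per-node stochastic model of
    Eq. (dgemm.complex). Coefficients below are illustrative only."""
    a, b, g, d, e = coeffs["mu"]
    w, p, f, t, r = coeffs["sigma"]
    mu = a*m*n*k + b*m*n + g*m*k + d*n*k + e
    sigma = w*m*n*k + p*m*n + f*m*k + t*n*k + r
    # Simplified half-normal: location mu, scale sigma; |Z| >= 0, so the
    # sampled duration never falls below the polynomial trend mu.
    return mu + sigma * abs(rng.gauss(0.0, 1.0))

# Made-up calibration for one hypothetical node.
node_coeffs = {
    "mu":    (1.0e-11, 1e-10, 1e-10, 1e-10, 1e-5),
    "sigma": (1.0e-13, 0.0, 0.0, 0.0, 1e-6),
}
d1 = dgemm_duration(512, 512, 128, node_coeffs)
d2 = dgemm_duration(512, 512, 128, node_coeffs)  # almost surely != d1
```

Two successive calls return different durations, which is precisely the short-term temporal variability injected into the simulation.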
\subsection{Experimental Setup} \label{sec:org124b3f5} \label{sec:methodology.setup} To evaluate the soundness of our approach, we compare several real executions of HPL with simulations using the previous models. We used the Dahu cluster from the Grid'5000 testbed. It has 32 nodes connected through a single switch by \SI{100}{\giga\bit\per\second} Omnipath links. Each node has two Intel Xeon Gold 6130 CPUs with 16 cores per CPU, and we disabled hyperthreading. We used HPL version 2.2 compiled with GCC version 6.3.0. We also used the libraries OpenMPI version 2.0.2 and OpenBLAS version 0.3.1. Unless specified otherwise, HPL executions were done using a block size of 128, a matrix of varying size (from \num{50000} to \num{500000}), one single-threaded MPI rank per core, a look-ahead \texttt{depth} of 1, and the \texttt{increasing-2-ring} broadcast with the \texttt{Crout} panel factorization algorithms as this is the combination that led to the best performance overall. Although this machine is much smaller than top supercomputers, faithfully simulating an HPL execution with such settings is quite challenging. \begin{itemize} \item We used one rank per core to obtain a higher number (1024) of MPI processes. This configuration is more difficult than simulating one rank per node, as (1) it increases the amount of data transferred through MPI, and (2) the performance is subject to memory interference and network heterogeneity (intra-node communications vs. inter-node communications). \item We used a much smaller block size than what is commonly used, which leads to a higher number of iterations and hence more complex communication patterns. \item We used relatively small input matrices, which reduces the makespan and makes good predictions harder to obtain. 
\end{itemize} \subsection{A First Validation} \label{sec:orgf96d880} \label{sec:validation.single} \begin{figure}[pt] \begin{minipage}{0.8\linewidth} \centering \includegraphics[width=\linewidth]{figures/validation_performance.pdf} \end{minipage}% \begin{minipage}{0.2\linewidth} \vspace{-1em} \scalebox{.8}{\begin{tabular}{l|l} \texttt{NB} & \Num{128} \\ \texttt{P}$\times$\texttt{Q} & 32$\times$32 \\ \texttt{PFACT} & Crout \\ \texttt{RFACT} & Right \\ \texttt{SWAP} & Binary-exch. \\ \texttt{BCAST} & 2 Ring \\ \texttt{DEPTH} & 1 \\ \end{tabular}} \end{minipage} \caption{HPL performance: predictions (dashed lines) vs. reality (solid lines).} \vspace{-1em} \label{fig:validation_performance} \labspace \end{figure} We now present a first quantitative comparison of the predictions with reality in a typical scenario. We report in Figure\ref{fig:validation_performance} the \si{\giga\flops} rate reported by HPL when varying the matrix size \(N\). Real executions are depicted in solid black, and the natural variability of the overall performance is illustrated by reporting eight runs of HPL for each matrix size. The dashed line (a), on top, is our first attempt to simulate HPL with the naive model (homogeneous and deterministic for both the kernels and the network) illustrated in Figure\ref{fig:macro_simple}. This model overestimates HPL performance by more than \SI{30}{\percent}. Modeling the heterogeneity of \texttt{dgemm} (i.e.,\xspace introducing the dependency on \(p\) for \texttt{dgemm} as done in Eq\eqref{eq:dgemm.complex}, but without the temporal variability induced by \(\sigma_p\)) significantly increases the realism of the simulation, as the performance is then overestimated by only \SI{9}{\percent} (dashed line (b)). Finally, we found that adding the temporal variability is the key ingredient to obtain the last bit of realism.
The prediction using the full-fledged model (dashed line (c)) is extremely close to reality: it slightly underestimates the performance, by less than \SI{5}{\percent} and even by as little as \SI{1}{\percent} for the larger matrices. As illustrated in Figure\ref{fig:validation_performance} and explained in our previous work\cite{cornebize:hal-02096571}, accurate predictions require careful modeling of both spatial and temporal variability, as they appear to have a very strong effect on HPL performance. To some extent, this is expected since HPL is an iterative program that synchronizes through the broadcast of factorization panels. A single slower or late process will eventually delay all the other ones. In the scenario presented in Figure\ref{fig:validation_performance}, a large fraction of the overall execution time is spent in MPI communications, but foremost in synchronizations (induced by late sends and late receives) rather than in actual data transfers. As a consequence, careful modeling of computations is essential, whereas in this scenario even a rather crude model of the network was enough to obtain good predictions. This article presents an extensive (in)validation study that demonstrates the importance of careful modeling of the whole platform. \subsection{Experiment Time Frame} \label{sec:org7d9b1a6} \label{sec:methodology.timeframe} This validation study has been carried out over several years (from 2018 to 2020). Despite our efforts to keep the experimental setup stable for the sake of reproducibility, the platform has evolved. The Linux kernel had a minor update, from version 4.9.0-6 to version 4.9.0-13, and the BIOS and firmware of the nodes were upgraded. During this time frame, the cluster has also suffered from hardware issues, such as a cooling malfunction on four of its nodes and several faulty memory modules that had to be changed.
This malfunction had an enormous impact on the performance of HPL, which significantly complicated our validation study but also made it more meaningful, as it was conducted on a particularly challenging setup. \label{sec:validation.temperature} \begin{figure}[pt] \begin{minipage}{0.8\linewidth} \centering \includegraphics[width=\linewidth]{figures/validation_temperature.pdf} \vspace{-1em} \labspace \end{minipage}% \begin{minipage}{0.2\linewidth} \vspace{-1em} \scalebox{.8}{\begin{tabular}{l|l} \texttt{NB} & \Num{128} \\ \texttt{P}$\times$\texttt{Q} & 32$\times$32 \\ \texttt{PFACT} & Crout \\ \texttt{RFACT} & Right \\ \texttt{SWAP} & Binary-exch. \\ \texttt{BCAST} & 2 Ring \\ \texttt{DEPTH} & 1 \\ \end{tabular}} \end{minipage} \caption{HPL performance: predictions vs. reality (effect of the cooling issue on the nodes dahu-\{13,14,15,16\}).} \label{fig:validation_temperature} \end{figure} Our simulation approach makes it possible to predict the performance of HPL for a new platform state by simply performing a new calibration whenever a significant change is detected. This ability to reflect a platform change in simulation is illustrated in Figure\ref{fig:validation_temperature}, which, similarly to Figure\ref{fig:validation_performance} (acquired in March 2019), showcases the influence of matrix size on the performance but at different periods. The left plot represents the \emph{normal} state of the cluster (in September 2020), whereas the right plot was obtained (in March--April 2019) when 4 of the 32 nodes had a cooling issue that lowered their performance by about 10\%. In all cases, we consistently predict performance within a few percent, and performing a new \texttt{dgemm} calibration on these four nodes was all that was needed to reflect this platform change in the simulation.
This result illustrates both the faithfulness of our simulations and a potential use case for predictive simulations: a discrepancy between the reality and the predictions can sometimes indicate a real issue on the platform (similar situations have already been reported in\cite{smpi}). \section{Comprehensive Validation Through HPL Performance Tuning} \label{sec:orgf4c412a} \label{sec:validation} This section reports a few typical performance studies involving HPL through both real experiments and simulations. The comparison of both approaches allows us (1) to cover a broader range of parameters than solely matrix size as done in our earlier work, (2) to evaluate how faithful to reality our simulations are even in suboptimal configurations, and (3) to report the main difficulties encountered when conducting such a study. \subsection{Evaluating the Influence of the Geometry} \label{sec:orgdbb9bbc} \label{sec:validation.geometry} \begin{figure}[t] \begin{subfigure}{\linewidth} \centering \includegraphics[width=\linewidth]{figures/mpi_calibration.png} \caption{Illustrating the effect of the two MPI calibration methods.} \label{fig:mpi_calibration} \end{subfigure} \begin{subfigure}{\linewidth} \begin{minipage}{0.8\linewidth} \centering \includegraphics[width=\linewidth]{figures/validation_geometry.pdf} \vspace{-1em} \labspace \end{minipage}% \begin{minipage}{0.2\linewidth} \vspace{-1em} \scalebox{.8}{\begin{tabular}{l|l} \texttt{N} & \Num{250000} \\ \texttt{NB} & \Num{128} \\ \texttt{P}$\times$\texttt{Q} & 960 \\ \texttt{PFACT} & Crout \\ \texttt{RFACT} & Right \\ \texttt{SWAP} & Binary-exch. \\ \texttt{BCAST} & 2 Ring \\ \texttt{DEPTH} & 1 \\ \end{tabular}} \end{minipage} \caption{HPL performance: predictions vs. 
reality (testing all the possible geometries for 960 MPI ranks).} \label{fig:validation_geometry} \end{subfigure} \caption{The first (optimistic) network calibration gave poor predictions for very elongated geometries, while the improved calibration provides very accurate predictions.} \label{fig:validation_geometry_global} \end{figure} Figure\ref{fig:validation_geometry} illustrates the influence of the geometry of the virtual topology (\texttt{P} and \texttt{Q}) used in HPL on the performance\@. As expected, geometries that are too distorted lead to degraded performance. All the HPL parameters were fixed (the matrix order is fixed to \Num{250000} and the other parameters are the same as in Section\ref{sec:validation.single}) except for the geometry: we evaluate all the pairs (\texttt{P}, \texttt{Q}) such that \(\texttt{P}\times\texttt{Q}=960\). We used only 30 nodes instead of 32 to cover a larger number of geometries, as \Num{960} has more divisors than \Num{1024}. As in all our previous studies, we report both the predicted performance and the one measured in reality. As in the comparisons presented in the previous section, the simulations were done with the \texttt{dgemm} model from Eq\eqref{eq:dgemm.complex} (stochastic, heterogeneous, and polynomial) and the simplest linear models for the other kernels. Our first simulation attempt relied on a relatively simple network model (deterministic yet piecewise-linear to account for the protocol switch), depicted on the leftmost plot of Figure\ref{fig:mpi_calibration}; it produced the unsatisfying orange line on top of Figure\ref{fig:validation_geometry}. The simulations with the smallest values of \texttt{P} had relatively large prediction errors, with a systematic over-estimation reaching up to +50\% for the \(1\times960\) and \(2\times480\) geometries.
A qualitative comparison of the execution traces obtained in reality and simulation showed that the duration of the broadcast phases was greatly underestimated in simulation. We found out that with such elongated geometries, the message sizes are significantly larger than those we had used in our calibration, and that the performance surprisingly and significantly drops for such sizes (compare with the rightmost plots of Figure\ref{fig:mpi_calibration} for messages larger than \SI{160}{\mega\byte}). This performance drop is explained by poor optimization of the DMA locking mechanism in the InfiniBand network layer\cite{denis:inria-00586015}. A similar performance drop also happens for intra-node communications, which make poor use of the caches above a given size. Furthermore, the communication patterns generated by HPL during the ring broadcast are significantly impacted by the busy waiting of HPL, which intensively calls \texttt{MPI\_Iprobe} and \texttt{dgemm} on small sub-matrices. Our initial procedure for calibrating the network did not capture this phenomenon since we did not inject any additional CPU load. We addressed this problem by improving our network calibration procedure: (1) we use distinct models for local (intra-node) and remote (inter-node) communications, (2) we sample the message sizes in a larger interval (up to \SI{2}{\giga\byte} instead of only \SI{1}{\mega\byte}), and (3) we add calls to \texttt{dgemm} and \texttt{MPI\_Iprobe} between each call to \texttt{MPI\_Send} and \texttt{MPI\_Recv}. The goal was to make the calibration environment more similar to what happens in HPL\@. The resulting network model is illustrated in the rightmost plots of Figure\ref{fig:mpi_calibration}. This more realistic network model solved every previous misprediction and allowed us to produce very faithful simulations (purple line on Figure\ref{fig:validation_geometry}), which are now within a few percent of reality regardless of the geometry.
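The improved network model remains a deterministic, piecewise-linear function of the message size: one affine (latency, inverse-bandwidth) pair per interval, with breakpoints at the protocol switch and at the large-message slowdown. The sketch below uses illustrative breakpoints and coefficients, not the values actually calibrated on the cluster.

```python
import bisect

def comm_time(size, breakpoints, models):
    """Piecewise-linear communication time: latency + size / bandwidth,
    with one (latency, inverse-bandwidth) pair per message-size regime."""
    i = bisect.bisect_right(breakpoints, size)
    latency, inv_bw = models[i]
    return latency + inv_bw * size

# Illustrative regimes: eager protocol, rendezvous protocol, and the
# degraded large-message regime observed past ~160 MB (values made up).
bp = [64 * 1024, 160 * 10**6]
mdl = [
    (5e-6, 1.0e-10),   # eager, small messages
    (2e-5, 0.9e-10),   # rendezvous
    (2e-5, 1.6e-10),   # degraded large-message regime
]
t_small = comm_time(1024, bp, mdl)
t_large = comm_time(200 * 10**6, bp, mdl)
```

Capturing the last regime is what fixed the over-estimation for the elongated geometries, whose broadcasts exchange messages well beyond the original \SI{1}{\mega\byte} calibration range.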
This figure also illustrates the influence of the geometry on overall performance, since there is almost a factor of ten between the worst configuration (\(960\times1\)) and the best one (\(30\times32\)). Although it is not surprising that the geometries which are as square as possible lead to better performance, as they minimize the overall amount of data movement, it is interesting to observe the asymmetric role of \texttt{P} and \texttt{Q} in the overall performance (smaller values for \texttt{P} lead to better performance), which can be explained by the structure of the collective operations but requires a close look at the code. \subsection{Optimizing the HPL Configuration Through a Factorial Experiment} \label{sec:org6b0b53e} \label{sec:validation.factorial} \begin{figure}[pt] \centering \includegraphics[width=\linewidth]{figures/validation_factorial.pdf} \caption{Influence of HPL configuration on the performance (factorial experiment).}\vspace{-1em} \label{fig:validation_factorial} \labspace \end{figure} Although geometry is among the most important parameters to tune, six other parameters control the behavior of HPL. In Figure\ref{fig:validation_factorial}, we compare the performance reported by HPL when fixing the matrix order to \Num{250000} and varying the following parameters: block size (128 or 256), depth (0 or 1), broadcast (the six available algorithms), and swap (the three available algorithms). The geometry was fixed to \(\text{P}\times\text{Q}=32\times32=1024\) as it is optimal (the simpler calibration procedure and the network model depicted on the leftmost plot of Figure\ref{fig:mpi_calibration} were thus used). The parameters \texttt{pfact} and \texttt{rfact} (panel factorization) were respectively fixed to \texttt{Crout} and \texttt{Right}, as they had nearly no influence on HPL performance in our early experiments. Figure\ref{fig:validation_factorial} depicts the 72 parameter combinations we tested.
Parameters have been reorganized based on their influence on performance to improve readability. The boxed configuration corresponds to the one boxed in Figure\ref{fig:validation_performance}. These parameters account for up to \SI{30}{\percent} of variability in the performance, which is less important than the geometry but is still quite significant. For 61 of them, the prediction error is lower than \SI{5}{\percent}. Only two combinations showed a large error of approximately \SI{15}{\percent}, obtained with a block size of 256, a depth of 1, the \texttt{2-ring} broadcast algorithm, and either the \texttt{long} or the \texttt{mix} swap algorithm. This demonstrates the soundness of our approach, as our predictions are reasonably accurate most of the time. This experiment confirms that, although the prediction of HPL performance for a given parameter combination has a systematic bias, the error remains within a few percent most of the time. Therefore, this surrogate is good enough for parameter tuning and should be considered when preparing a large-scale run. While testing all the parameter combinations is the safest method to discover the one that provides the highest performance, its cost can be prohibitive due to the high number of combinations. An alternative often used in practice is to explore only a small subset of the parameter space and to use an analysis of variance (ANOVA) to identify the parameters with the most substantial effect on performance, and then to select the appropriate combination. We applied this procedure on samples of both datasets (the one obtained from real runs and the one obtained in simulation). In both cases, the two parameters with the highest effect were the block size \texttt{NB} and the \texttt{depth}, as shown in Figure\ref{fig:validation_factorial}, followed by \texttt{bcast} and \texttt{swap}.
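The screening step can be sketched as follows, on synthetic data (a real study would use the measured performance of each run); for each parameter we compare the between-group variance of the per-level means with the residual within-group variance, the classical one-way ANOVA F-ratio:

```python
# ANOVA-style parameter screening on a synthetic factorial sample.
# The effect sizes below are invented so that NB dominates, mimicking
# the qualitative outcome reported in the text.
import numpy as np

rng = np.random.default_rng(42)
n = 200
nb = rng.choice([128, 256], n)          # block size levels
swap = rng.choice([0, 1, 2], n)         # swap algorithm levels
perf = 20.0 + 0.01 * nb + 0.1 * swap + rng.normal(0, 0.5, n)  # Gflop/s

def f_ratio(factor, y):
    """Between-group mean square divided by within-group mean square."""
    levels = np.unique(factor)
    overall = y.mean()
    between = sum(len(y[factor == l]) * (y[factor == l].mean() - overall) ** 2
                  for l in levels) / (len(levels) - 1)
    within = sum(((y[factor == l] - y[factor == l].mean()) ** 2).sum()
                 for l in levels) / (len(y) - len(levels))
    return between / within

ranking = sorted({"NB": f_ratio(nb, perf), "swap": f_ratio(swap, perf)}.items(),
                 key=lambda kv: -kv[1])
print(ranking[0][0])  # "NB" should come first on this synthetic data
```

A full analysis would of course include all six parameters and their interactions, but the ranking logic is the same.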
The best combinations selected in both cases were also identical, demonstrating once again the faithfulness of our simulation approach and how it can be used to reduce the experimental cost of parameter tuning. \subsection{Conclusion} \label{sec:orga1b1d33} Accurately predicting the performance of an application is not a trivial task. Discrepancies between reality and simulation can have multiple causes: the platform may have changed (e.g.,\xspace the cooling issue that affected four nodes in Section\ref{sec:methodology.timeframe}), the model could be inaccurate (e.g.,\xspace the homogeneous and deterministic \texttt{dgemm} model is too simple as in\cite{cornebize:hal-02096571}) or not correctly calibrated (e.g.,\xspace the calibration procedure does not cover the appropriate parameter space, or the experimental conditions are too different as in Section\ref{sec:validation.geometry}). As expected in any serious investigation of model validity, our validation study is not a mere collection of positive cases. Instead, it is the result of a thorough attempt (we extensively covered the HPL parameter space) to invalidate our model, together with explanations of how we did so. By meticulously overcoming each of these issues, we have demonstrated the ability of our approach to produce very faithful predictions of HPL performance on a given platform. \section{Sensitivity Analysis in What-if Scenarios} \label{sec:org7906a2d} \label{sec:whatif} We have shown that many typical HPL case studies could be conducted in simulation. However, their conclusions (optimal geometry and parameters) are specific to the cluster we used, and they require a precise model of several aspects of the target cluster, which may not be possible at early experimental stages. In particular, only a few cluster nodes may be available at first, and the whole cluster model should then be constructed from a limited set of observations and carefully extrapolated.
This section shows how typical \emph{what-if} simulation studies should be conducted given such uncertainty. Section\ref{sec:whatif.model} presents a generative model of node performance that can easily be fit from daily measurements and used to produce a similar platform. This model is used to quantify the impact on overall performance of the temporal variability of the \texttt{dgemm} kernel in Section\ref{sec:whatif.temporal_variability} and of the spatial variability of the nodes in Section\ref{sec:whatif.spatial_variability}. In particular, we show how to study the efficiency of a simple \emph{slow node eviction} strategy. Finally, we study in Section\ref{sec:whatif.topology} the influence of the physical network topology on overall performance. Most of these studies are particularly difficult to conduct through real experiments because of the difficulty of finely controlling the platform. \subsection{A Generative Model of Node Performance} \label{sec:org0ecb574} \label{sec:whatif.model} As we have seen in Section\ref{sec:modeling}, the performance of nodes exhibits several kinds of variability: i) a spatial variability (between nodes), ii) a ``short-term'' temporal variability (the one experienced within an HPL run), but also iii) a ``long-term'' temporal variability (from one day to another). As illustrated in Section\ref{sec:validation.single}, accounting for the first two kinds of variability is essential, but during our investigation of the simulation validity, which spanned several months, we also had to deal with the fact that the node performance could vary significantly from one day to another, thereby making our comparisons between a real experiment and a simulation driven by a model obtained from past measurements sometimes irrelevant. This section explains how all sources of variability can be accounted for in a single unified model.
From our observations, we assume that on a given node \(p\) and a given day \(d\), the duration of the \texttt{dgemm} kernel can be modeled as follows: \begin{equation} \label{eq:dgemm.basic} \forall M,N,K, \textsf{dgemm}_{p,d}(M, N, K) \sim \mathcal{H}(\alpha_{p,d}MNK + \beta_{p,d}, \,\, \gamma_{p,d}MNK) \end{equation} Compared to the model\eqref{eq:dgemm.complex}, this model includes the daily variability but drops the complexity of a full-fledged polynomial. Such complexity may be important when trying to model a particular platform. However, when performing sensitivity analysis, a simpler model is preferred, especially as not all terms of the polynomial may be statistically significant. In this model, the short-term temporal variability stems from the \(\gamma_{p,d}\) term while the average performance of the node stems from the \(\alpha_{p,d}\) and \(\beta_{p,d}\) terms, which we gather in a single 3-dimensional vector \begin{equation} \mu_{p,d}=(\alpha_{p,d},\beta_{p,d},\gamma_{p,d}). \end{equation} Now, since every machine is unique, it is natural to assume that for each machine: \begin{equation} \label{eq:dgemm.temporal} \forall d, \mu_{p,d} \sim \mathcal{N}(\mu_{p},\Sigma_T) \end{equation} In this model, \(\mu_{p}\) accounts for the average performance of the machine \(p\), while \(\Sigma_T\) accounts for its day-to-day variability. From our observations, we had no particular reason to assume that this variability differed from one machine to another; hence, \(\Sigma_T\) is not indexed by \(p\) but is global to all machines. However, the parameters \(\alpha_{p,d},\beta_{p,d},\gamma_{p,d}\) are generally correlated with each other; hence, \(\Sigma_T\) is a full covariance matrix that accounts for these interactions. The choice of a Normal distribution is natural since it is the simplest distribution that accounts for a specific mean and variance, but we will discuss its relevance later in this section.
Finally, we need to account for the spatial variability, which we propose to model as follows: \begin{equation} \label{eq:dgemm.spatial} \forall p, \mu_{p} \sim \mathcal{N}(\mu,\Sigma_S) \end{equation} Again, in such a model, \(\mu\) accounts for the machines' average performance while \(\Sigma_S\) accounts for the (weak) heterogeneity. This hierarchical model is depicted in Figure\ref{fig:generative}. The shaded node represents observed variables and the diamond node represents deterministic variables, while non-shaded nodes represent latent variables. The solid node is the variable that is estimated when conducting (in)validation studies while the dashed ones are useful when conducting sensitivity analysis and extrapolating to a hypothetical cluster. \begin{figure}[pt] \centering \includegraphics[scale=.911]{figures/generative_model.pdf} \caption{Generative model of kernel duration accounting for the spatial ($\Sigma_S$), long-term ($\Sigma_T$) and short-term variability ($\gamma_{p,d}$).}\vspace{-1em} \label{fig:generative} \labspace \end{figure} The relevance of model\eqref{eq:dgemm.basic} has already been illustrated in Sections\ref{sec:validation.single}, \ref{sec:methodology.timeframe}, and\ref{sec:validation}, but the relevance of models\eqref{eq:dgemm.temporal} and\eqref{eq:dgemm.spatial} requires some attention. Figure\ref{fig:whatif_calibration} represents the empirical distribution of \(\mu_{p,d} = (\alpha_{p,d},\beta_{p,d},\gamma_{p,d})\) (the result of the linear regression) for the 32 nodes of the Dahu cluster on 40 different days from November 2019 to February 2020. The distribution for each node appears approximately normal and passed a Shapiro-Wilk normality test. Although the distribution of the \(\beta_{p,d}\) appears slightly skewed toward larger values and one of the nodes (the one with the largest \(\alpha_{p,d}\)) stands out, there is no good reason for using a more complex distribution than a Gaussian one.
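As an illustration, the three levels of this hierarchical model (\eqref{eq:dgemm.spatial}, \eqref{eq:dgemm.temporal}, \eqref{eq:dgemm.basic}) can be sampled in a few lines of Python. The numerical values of \(\mu\), \(\Sigma_S\) and \(\Sigma_T\) below are invented for illustration (in practice they are estimated from the calibration measurements), and a Normal distribution stands in for \(\mathcal{H}\):

```python
# Sketch of the hierarchical generative model: cluster -> node -> day -> call.
# All numerical values are illustrative placeholders, not fitted estimates.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([2.1e-9, 1.5e-5, 6.0e-11])   # cluster-level (alpha, beta, gamma)
Sigma_S = np.diag([1e-20, 1e-12, 1e-23])   # spatial (node-to-node) covariance
Sigma_T = np.diag([5e-21, 5e-13, 5e-24])   # day-to-day covariance

def sample_node():
    """Draw the average parameters mu_p of a new hypothetical node."""
    return rng.multivariate_normal(mu, Sigma_S)

def sample_day(mu_p):
    """Draw the daily parameters mu_{p,d} of that node."""
    return rng.multivariate_normal(mu_p, Sigma_T)

def dgemm_duration(mu_pd, M, N, K):
    """One dgemm duration: mean alpha*MNK + beta, spread gamma*MNK.
    A Normal is used here as a stand-in for the distribution H."""
    alpha, beta, gamma = mu_pd
    return rng.normal(alpha * M * N * K + beta, gamma * M * N * K)

mu_p = sample_node()
mu_pd = sample_day(mu_p)
d = dgemm_duration(mu_pd, 512, 512, 512)
print(d)  # one synthetic kernel duration, in seconds
```

Full covariance matrices (rather than the diagonal placeholders above) would reproduce the tilted ellipses discussed below.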
Although the correlation between \(\alpha\), \(\beta\), and \(\gamma\) is very weak, it appeared to be statistically significant (most ellipses are slightly tilted toward the North-East); hence, a full covariance matrix is needed (at least for \(\Sigma_T\)). \begin{figure}[pt] \centering \begin{subfigure}{\textwidth} \centering \raisebox{2em}{\rotatebox{90}{\fbox{\vphantom{y}Real data}}}~% \includegraphics[width=.48\linewidth]{figures/whatif_calibration_1.pdf}% \includegraphics[width=.48\linewidth]{figures/whatif_calibration_2.pdf} \caption{Distribution of $\alpha$, $\beta$, and $\gamma$ (observations on $2\times32$ CPUs from November 2019 to February 2020).} \label{fig:whatif_calibration} \end{subfigure} \begin{subfigure}{\textwidth} \centering \raisebox{1em}{\rotatebox{90}{\fbox{Synthetic data}}}~% \includegraphics[width=.48\linewidth]{figures/whatif_model_1.pdf}% \includegraphics[width=.48\linewidth]{figures/whatif_model_2.pdf} \caption{Distribution of $\alpha$, $\beta$, and $\gamma$ (synthetic data for 16 CPUs).} \label{fig:whatif_model} \end{subfigure}% \caption{Distribution of the regression parameters for around 20 \texttt{dgemm} calibrations made on each of the 32 nodes.
Each color/ellipse corresponds to a different CPU.} \label{fig:whatif} \labspace \centering \begin{subfigure}{\textwidth} \centering \raisebox{2em}{\rotatebox{90}{\fbox{\vphantom{y}Real data}}}~% \includegraphics[width=.48\linewidth]{figures/whatif_calibration_slownodes_1.pdf}% \includegraphics[width=.48\linewidth]{figures/whatif_calibration_slownodes_2.pdf} \caption{Distribution of $\alpha$, $\beta$, and $\gamma$ (observations on $2\times32$ CPUs from October to November 2019).} \label{fig:whatif_slow_calibration} \end{subfigure} \begin{subfigure}{\textwidth} \centering \raisebox{1em}{\rotatebox{90}{\fbox{Synthetic data}}}~% \includegraphics[width=.48\linewidth]{figures/whatif_model_slow_1.pdf}% \includegraphics[width=.48\linewidth]{figures/whatif_model_slow_2.pdf} \caption{Distribution of $\alpha$, $\beta$, and $\gamma$ (synthetic data for 16 CPUs).} \label{fig:whatif_slow_model} \end{subfigure}% \caption{Same as Figure~\ref{fig:whatif}, except that 4 of these nodes had a cooling problem, leading to longer and more variable durations.} \label{fig:whatif_slow} \labspace \end{figure} \begin{table}[b] \caption{\(R^2\) values obtained with the linear regressions for \texttt{dgemm} durations, with both the linear and polynomial models.} \vspace{-1em} \label{tab:rsquared} \centering \begin{tabular}{l|cc} & Linear & Polynomial\\ \hline per host and day (\(\mu_{p,d}\)) & $[0.9842, 0.9994]$ & $[0.9958, 0.9998]$\\ per host (\(\mu_{p}\)) & $[0.9960, 0.9971]$ & $[0.9994, 0.9997]$\\ global (\(\mu\)) & $0.9963$ & $0.9996$\\ \end{tabular} \end{table} Although fitting a full polynomial model for each day and each processor provides the best predictions, it is important to understand that even cruder models are quite good.
Table\ref{tab:rsquared} provides an indication of the quality of the predictions (the coefficient of determination \(R^2\), which lies between 0 and 1, with 1 indicating a systematically perfect prediction) for the 32 Dahu nodes over the November 2019 to February 2020 period. Regardless of the model complexity (linear/polynomial) and of the granularity of the regression (global, per host, per host and day), the model quality is excellent and above \(0.99\), so it could appear that there is no need for a complex model. Yet, we have seen in Section\ref{sec:validation.single} that a simple and global linear model of \texttt{dgemm}, which is nonetheless an excellent microscopic model (\(R^2=0.9963\)), fails to provide good macroscopic predictions of HPL. The spatial variability is essential, just like the short-term and daily variability, which is why modeling the variability of \(\mu_{p,d}\) is key. Last, we recall that although a full polynomial model (with 10 parameters as in\eqref{eq:dgemm.complex} instead of 3 as in\eqref{eq:dgemm.basic}) is particularly well suited to predicting the performance of a specific machine and day, it is inadequate for extrapolating performance in general, as the covariance matrices \(\Sigma_T\) and \(\Sigma_S\) would then have \(10\times9/2=45\) parameters instead of \(3\). This is why, in the following, we opt for the simpler linear model. It is easy to estimate \(\mu_{p}\) and \(\Sigma_T\) by averaging over the \(\mu_{p,d}\) of each node, and then to estimate \(\mu\) and \(\Sigma_S\) by averaging over all the nodes. This moment-matching method is simple and provides very good estimates for \(\mu\), \(\Sigma_T\), and \(\Sigma_S\), because we have enough measurements at our disposal and because it is particularly suited to the Gaussian modeling assumption.
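The moment-matching estimation can be sketched as follows, assuming the per-node, per-day regression results \(\mu_{p,d}\) are stored in an array of shape (nodes, days, 3); the data below are synthetic and the variable names are ours:

```python
# Moment-matching estimation of mu, Sigma_T and Sigma_S from the
# per-node, per-day regression coefficients mu_{p,d}.
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_days = 32, 40
true_mu = np.array([2.1e-9, 1.5e-5, 6.0e-11])
# Synthetic mu_{p,d} samples standing in for the regression results.
samples = true_mu + rng.normal(0, [1e-11, 1e-7, 1e-12], (n_nodes, n_days, 3))

# Per-node average parameters mu_p: average over days.
mu_p = samples.mean(axis=1)                        # shape (n_nodes, 3)

# Day-to-day covariance Sigma_T: daily deviations pooled over all nodes
# (the day-to-day variability is assumed identical for every machine).
deviations = (samples - mu_p[:, None, :]).reshape(-1, 3)
Sigma_T = np.cov(deviations, rowvar=False)

# Cluster-level mean mu and spatial covariance Sigma_S from the mu_p.
mu_hat = mu_p.mean(axis=0)
Sigma_S = np.cov(mu_p, rowvar=False)

print(mu_hat)  # should be close to true_mu on this synthetic data
```

With real measurements, `samples` would simply hold the fitted \((\alpha_{p,d},\beta_{p,d},\gamma_{p,d})\) triplets.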
Should more complex models be needed (e.g., a mixture to account for ``outlier'' nodes or a SkewNormal distribution to account for the distribution's skewness), a general Bayesian sampling framework like STAN\cite{stan} would be better suited. Such frameworks make it easy to specify hierarchical generative models like the one presented in Figure\ref{fig:generative} and to draw samples from the posterior distribution of \(\mu\), \(\Sigma_T\), and \(\Sigma_S\), which can then be used to generate realistic \(\mu_{p,d}\) values for a new hypothetical cluster. Such a process is depicted in Figure\ref{fig:whatif_model}, where hypothetical regression parameters for 16 nodes have been generated. Comparing such synthetic data with the original samples from Figure\ref{fig:whatif_calibration} allows us to evaluate the model's potential weaknesses. Although the orders of magnitude of all parameters and the ellipses are excellent, a few subtle differences are visible. First, the variability of \(\alpha_{p}\) seems a bit overestimated (the spread along the x-axis is larger). This can be explained by the fact that one of the nodes seemed to be significantly slower (with a much larger \(\alpha_{p}\)), which artificially increased the spatial variability. Second, as expected from a Gaussian model, the distributions of the \(\beta_{p,d}\) are symmetrical whereas there was a slight skew in the original samples, but this should be of little significance for our study. The distributions of the \(\gamma_{p,d}\), however, are particularly realistic. We also illustrate the generality of this model with the data from Figure\ref{fig:whatif_slow_calibration}. These measurements were obtained from October to November 2019, when the cluster was less stable and some nodes particularly misbehaved.
Three nodes (in orange, hence a total of 6 CPUs) stand out from the 28 others (in green) and have lower performance (higher values for \(\alpha\), \(\beta\), and \(\gamma\)), and one node (in blue) is particularly unstable. Although this last node may be considered too abnormal to represent anything, it would be reasonable to assume that a larger cluster would present at least the two kinds of behaviors (green for stable nodes, and orange for slower nodes). The higher layer of the model in Figure\ref{fig:generative} should then be replaced by a mixture of normal distributions (whose weights would then be sampled from a Dirichlet distribution). Again, hypothetical regression parameters for 16 CPUs have been generated with such a process in Figure\ref{fig:whatif_slow_model}; they are very similar, although not identical, to the original measurements. Overall, this model is therefore of excellent quality and can be used to generate large configurations very easily and to evaluate the influence of different kinds of variability on the performance of HPL. \subsection{Influence of \texttt{dgemm}'s Temporal Variability} \label{sec:org2cb3bd8} \label{sec:whatif.temporal_variability} In Section\ref{sec:validation.single}, we highlighted the importance of accounting for the temporal variability of the \texttt{dgemm} kernel to obtain faithful HPL predictions. To the best of our knowledge, HPL developers and experts are often aware of this influence (or at least suspect it), but it has never been fully quantified, since designing and performing real experiments to evaluate it would be quite difficult: although increasing this variability would not be too hard, reducing it would be particularly complicated. This can, however, easily be done through simulation using the hierarchical model of the previous section.
In our experiments, the order of magnitude of the temporal variability with respect to actual performance (i.e., the ratio between \(\gamma_{p,d}\) and \(\alpha_{p,d}\) in Equation\eqref{eq:dgemm.basic}) was around 3\%. This may be a ``normal'' value or could be considered too high and possibly improved by better controlling thread mapping or operating system noise. Such a task can be quite tedious, and knowing beforehand how much performance gain can be expected is thus quite useful. In this section, we study the influence of this variability by generating 10 cluster scenarios using the previous model (as in Figure\ref{fig:whatif}), comprising 1,024 nodes each, but constraining \(\gamma_{p,d}\) to be equal to \(\gamma\cdot\alpha_{p,d}\) with \(\gamma\in[0,0.1]\), which represents the coefficient of variation of the \texttt{dgemm} kernel. We evaluate the performance of HPL with one multi-threaded MPI rank per node, a block size of 512, and a look-ahead \texttt{depth} of 1. We used the \texttt{increasing-2-ring} broadcast with the \texttt{Crout} panel factorization algorithms and \(\texttt{P}\times\texttt{Q}=8\times32\), and we tested matrix sizes ranging from 100,000 to 500,000. Let us denote by \(T(N,C_i,\gamma)\) the duration of HPL when factorizing a matrix of rank \(N\) on cluster \(C_i\) with a temporal variability of \(\gamma\). The overhead for this configuration is the ratio \begin{equation*} O(N,C_i,\gamma) = \frac{\mathbb{E}[T(N,C_i,\gamma)]}{T(N,C_i,0)}-1. \end{equation*} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figures/whatif_variability.pdf} \caption{The overhead on HPL duration appears to be linear in \texttt{dgemm} temporal variability. Although it is negligible for small matrices, it severely inflates for larger matrices.}\vspace{-1em} \label{fig:whatif_variability} \labspace \end{figure} Each bubble in Figure\ref{fig:whatif_variability} represents one such overhead.
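As an illustration, this overhead metric can be computed as follows (the durations below are synthetic placeholders, not our simulation output):

```python
# Overhead metric O(N, C_i, gamma): expected duration under temporal
# variability gamma, relative to the deterministic baseline gamma = 0.
import numpy as np

def overhead(durations_with_noise, duration_baseline):
    """O = E[T(N, C_i, gamma)] / T(N, C_i, 0) - 1."""
    return np.mean(durations_with_noise) / duration_baseline - 1.0

baseline = 100.0                       # T(N, C_i, 0), in seconds
noisy_runs = [103.0, 105.0, 104.0]     # repeated T(N, C_i, gamma) samples
print(overhead(noisy_runs, baseline))  # 0.04, i.e., a 4% overhead
```

The expectation is estimated by averaging over repeated simulations of the same scenario.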
For any \(\gamma\), this overhead appears to be negligible for small matrices and to increase and flatten as \(N\) grows large. In most TOP500 qualification runs, the matrix is made as large as possible, and the overhead would thus appear to grow roughly linearly with \(\gamma\). On a new cluster, a simple statistical evaluation of the nodes' performance using the model of Section\ref{sec:whatif.model} would thus be a good first diagnosis of whether trying to decrease temporal variability is a promising tuning target or not. \subsection{Influence of Spatial Variability} \label{sec:org427fd2e} \label{sec:whatif.spatial_variability} Although we showed in Section\ref{sec:validation.single} that temporal variability could account for about 9\% of performance, spatial variability was even more important, as it was responsible for 22\% of overhead compared to a fully homogeneous cluster. In practice, the replacement of a few nodes may be possible, but such spatial variability is expected and common\cite{rountree_15}, and a workaround would have to be found. A common approach consists in dropping a few of the slowest nodes. Indeed, since the matrix is evenly divided between the nodes, the computation inevitably progresses at the speed of the slowest node. However, removing the slowest nodes also decreases the overall processing capability and impacts the virtual topology's geometry (the \texttt{P} and \texttt{Q} parameters of HPL). Such adjustment is often done by trial and error and is all the more tricky as temporal variability and uncertainty from real experiments come into play. In this section, we show how such a subtle trade-off can be studied in simulation.
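The eviction trade-off can be sketched as follows (the node speeds and helper names are illustrative, not taken from our tool chain): drop the \(k\) slowest nodes (largest \(\alpha_p\)), then enumerate the \(\texttt{P}\times\texttt{Q}\) geometries of the remaining node count:

```python
# Sketch of the slow-node-eviction trade-off: evicting nodes raises the
# speed of the slowest kept node but shrinks the cluster and changes the
# set of admissible P x Q geometries.
import numpy as np

def geometries(n):
    """All P x Q factorizations of n, ordered by increasing P."""
    return [(p, n // p) for p in range(1, n + 1) if n % p == 0]

rng = np.random.default_rng(3)
alpha = rng.normal(2.1e-9, 1e-10, 256)        # per-node dgemm cost factor

for k in (0, 1, 4):
    kept = np.sort(alpha)[: len(alpha) - k]   # evict the k largest alpha_p
    n = len(kept)
    # The slowest kept node bounds the compute rate; fewer nodes, less power.
    print(n, geometries(n)[:4])
```

Each (node count, geometry) candidate would then be fed to a simulated HPL run, as done for Figure\ref{fig:whatif_removing_nodes}.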
\begin{figure}[pt] \centering \includegraphics[width=\linewidth]{figures/whatif_removing_nodes.pdf} \caption{Influence of the number of nodes and of the geometry of the virtual topology on the performance of HPL\@: $\mathtt{P}\times\mathtt{Q}$ configurations with a small $\mathtt{P}$ perform significantly better than those with a larger $\mathtt{P}$.}\vspace{-1em} \label{fig:whatif_removing_nodes} \labspace \end{figure} Using the model from Section\ref{sec:whatif.model}, we generate 10 mildly heterogeneous 256-node clusters (i.e., where nodes are similar to the ones of our cluster when operating in the normal state, as in Figure\ref{fig:whatif_calibration}) and we study the performance obtained when removing 1 to 16 of the slowest nodes. When removing nodes, the geometry should be adjusted depending on how the number of remaining nodes decomposes into prime factors. As observed in Figure\ref{fig:validation_geometry}, having \texttt{P} \(\approx\) \texttt{Q} is generally a good idea to reduce the total amount of communication. However, it may be counter-productive for a given broadcast or swap algorithm that serializes communications. Figure\ref{fig:whatif_removing_nodes} shows the overhead for a matrix of rank 250,000 compared to the best performance obtained using the whole cluster. We group the different \(\texttt{P}\times\texttt{Q}\) decompositions and order them by increasing \texttt{P}. Again, we use the \texttt{2-Ring} and \texttt{Binary-exch} algorithms, which are among the best configurations according to the study of Section\ref{sec:validation.factorial}. Each configuration is summarized through the average overhead over the 10 clusters, and error bars represent a 95\% confidence interval. It appears that the \(4\times64\) geometry now achieves the best trade-off between the total amount of communications and how well they overlap with each other. The optimal configuration for each number of nodes is boxed in Figure\ref{fig:whatif_removing_nodes}.
It reveals that there is not much to gain, probably because of the mild spatial heterogeneity of our cluster, but that optimizing the virtual topology is particularly important. \begin{figure}[!t] \centering \includegraphics[width=.9\linewidth]{figures/whatif_removing_nodes_2.pdf} \caption{Influence of node removal on performance while taking into account the matrix rank. Due to the mild heterogeneity of these scenarios, evicting nodes brings no benefit.}\vspace{-1em} \label{fig:whatif_removing_nodes_2} \labspace \end{figure} Figure\ref{fig:whatif_removing_nodes_2} investigates how this overhead for the best geometry and node selection also depends on the matrix rank. It appears that in this scenario, except for very small matrices, removing nodes cannot help improve performance. Note that the overhead for \(5\times\texttt{Q}\) configurations with a matrix rank of 200,000 appears to behave differently from what happens for other matrix sizes. This surprising effect probably arises from a subtle combination of matrix size and virtual topology. We could indeed observe on our cluster that such configurations had a slightly but significantly worse performance than the other configurations. Such interactions also explain why designing a faithful analytical model of HPL is so difficult and why a full simulation of the whole application is generally required. Although absolute performance should be taken with a grain of salt when studying such subtle effects, they are easily overlooked when conducting real experiments. In this particular small-scale, mildly heterogeneous scenario, there is thus no gain in removing nodes but, as illustrated in Figure\ref{fig:whatif_removing_nodes_heterogeneous}, where we used a multimodal spatial heterogeneity (as in Figure\ref{fig:whatif_slow}), this may be a relevant approach.
This sensitivity analysis shows how, for a given supercomputer, a simple statistical evaluation of the spatial heterogeneity allows evaluating whether spatial variability is a promising tuning target or not. \begin{figure}[!t] \centering \includegraphics[width=.9\linewidth]{figures/whatif_removing_nodes_2_heterogeneous.pdf} \caption{Influence of node removal on performance in a stronger heterogeneity scenario (extrapolation of our test cluster when it had a cooling problem on 4 of its nodes). Removing 6 to 12 nodes out of 256 may bring a substantial improvement, and such an optimization would therefore be worth investigating.}\vspace{-1em} \label{fig:whatif_removing_nodes_heterogeneous} \labspace \end{figure} \subsection{Influence of the Physical Topology} \label{sec:org641fd43} \label{sec:whatif.topology} \begin{figure}[pt] \centering \includegraphics[width=.9\linewidth]{figures/whatif_removing_switches.pdf} \caption{Influence of the physical topology on the overall performance. Up to 2 of the top-level switches can be removed without significantly hurting performance for large matrices. Beyond this point, communications become the main performance bottleneck.}\vspace{-1em} \label{fig:whatif_removing_switches} \labspace \end{figure} Finally, since the virtual topology and communications appear to significantly influence the overall performance, one may wonder how much the physical topology influences the performance. Indeed, several recent articles\cite{tapered_fat_tree_16,tapered_fat_tree_19} report that interconnect networks are often oversized compared to the actual needs of applications and that turning off some switches could sometimes go completely unnoticed by end-users. In this section, we consider ten 256-node clusters with variable node performance (as in Figure\ref{fig:whatif}) interconnected by a 2-level fat-tree and quantify by how much performance degrades when the top-tier switches are gradually deactivated.
More formally, we use a \texttt{(2;32,8;1,N;1,8)} fat-tree with \(\texttt{N}\in\{1,2,3,4\}\). Figure\ref{fig:whatif_removing_switches} depicts this degradation as a function of the matrix size. As one could expect, the impact is more significant for smaller matrix sizes (where the execution is more network-bound). Although removing one switch leads to absolutely no visible performance loss, removing two or three switches can have a dramatic effect. Again, such degradation depends on the broadcast and swap algorithms and may be slightly mitigated. To the best of our knowledge, this is the first time such a sensitivity analysis has been conducted faithfully. Generating random node configurations allows avoiding potential bias, in particular against perfectly homogeneous scenarios. We believe such a tool can be quite useful in the early steps of a supercomputer design, when performing capacity planning to adjust the network capacity to a given cost and power envelope. \section{Discussion} \label{sec:orgf490713} \label{sec:relwork} Analytical models are often used for estimating application performance. For instance, Aspen\cite{aspen} is a domain-specific language for formally specifying the characteristics of kernels and combining them to model a whole application. However, such an approach can only provide rough estimates; subtle phenomena like network contention cannot be captured by these models. Another approach for predicting the performance of applications like HPL consists in statistically modeling the application as a whole\cite{hpl_prediction}. By running the application several times with small and medium problem sizes (or a few iterations of large problem sizes) and using simple linear regressions, it is possible to predict its makespan for larger sizes with an error of only a few percent and at a relatively low cost.
Unfortunately, the predictions are limited to the same application configuration, and studying the influence of the number of rows and columns of the virtual grid, or of the broadcast algorithms, requires a new model and new (costly) runs using the whole target machine. Our attempts to build a black-box analytical model (involving polynomials, inverses, and logarithms of \texttt{P} and \texttt{Q}) of HPL from a limited set of observations always failed to provide a faithful model with decent prediction and extrapolation capabilities. Furthermore, this approach does not allow studying \emph{what-if} scenarios (e.g.,\xspace to evaluate what would happen if the network bandwidth was increased or if node heterogeneity was decreased) that go beyond parameter tuning. Simulation provides the details and flexibility missing from such a black-box modeling approach. Performance prediction of MPI applications through simulation has been widely studied over the last decades, but two approaches can be distinguished in the literature: offline and online simulation. With the most common approach, \emph{offline simulation}, a trace of the application is first obtained on a real platform. This trace comprises sequences of MPI operations and CPU bursts and is given as an input to a simulator that implements performance models for the CPUs and the network to derive predictions. Researchers interested in finding out how their application reacts to changes to the underlying platform can replay the trace on commodity hardware at will with different platform models. Most HPC simulators available today, notably BigSim\cite{bigsim_04}, Dimemas\cite{dimemas} and CODES\cite{CODES}, rely on this approach. The main limitation of this approach comes from the trace acquisition requirement. Not only is a large machine required, but the compressed trace of a few iterations (out of several thousands) of HPL typically reaches a few hundred MB, making this approach quickly impractical\cite{suter}.
Worse, tracing an application provides only information about its behavior in a specific run: slight modifications (e.g.,\xspace to communication patterns) may make the trace inaccurate. The behavior of simple applications (e.g.,\xspace \texttt{stencil}) can be extrapolated from small-scale traces\cite{scalaextrap,pmac_lspp13}, but this fails if the execution is non-deterministic, e.g.,\xspace whenever the application relies on non-blocking communication patterns, which is, unfortunately, the case for HPL\@. The second approach discussed in the literature is \emph{online simulation}. Here, the application is executed (emulated) on top of a platform simulator that determines when each process is run. This approach allows researchers to directly study the behavior of MPI applications, but only a few recent simulators such as SST Macro\cite{sstmacro}, SimGrid/SMPI\cite{simgrid} and the closed-source xSim\cite{xsim} support it. To the best of our knowledge, only SST Macro and SimGrid/SMPI are mature enough to faithfully emulate HPL. In this work, we decided to rely on SimGrid as its performance models and its emulation capabilities are quite solid, but the work we present would, a priori, also be possible with SST. The SST simulator has also been used for uncertainty quantification (UQ)\cite{sst_uq}, but with a different purpose than what we did in Section\ref{sec:whatif}. Note that the HPL emulation we describe in Section\ref{sec:em} should not be confused with the application skeletonization\cite{sst_skeleton} commonly used with SST and more recently introduced in CODES. Skeletons are code extractions of the most important parts of a complex application, whereas we only modify a few dozen lines of HPL before emulating it with SMPI. Some researchers from Intel, unaware of our recent work, applied the same methodology as the one we proposed in\cite{cornebize:hal-02096571} to both Intel HPL and OpenHPL in the closed-source CoFluent simulator\cite{intel_20}.
To the best of our knowledge, their work reports two faithful predictions for two large-scale supercomputers, but without investigating at all the impact of variability, heterogeneity, or communications as we do in this article. Finally, it is important to understand that the approach we propose is intended to support studies at the whole-machine and application level, not of the influence of microarchitectural details as targeted by gem5\cite{lowepower2020gem5} or MUSA\cite{musa_16}. \section{Conclusion} \label{sec:org55885b3} \label{sec:cl} HPC application developers implement many elaborate algorithmic strategies whose impact on performance often depends on both the input workload and the target platform. This structure makes it very difficult to model and accurately forecast the overall application performance, and many HPC application developers and users are often left with no other option but to study and tune their applications at scale, which can be very time- and resource-consuming. We believe that being capable of precisely predicting an application's performance on a given platform is useful for application developers and users, and will become invaluable in the future as it can, for example, help computing centers decide which one of the envisioned technologies for a new machine would work best for a given application, or whether an upgrade of the current machine should be considered. Simulation is an effective approach in this context, and SimGrid/SMPI has previously been successfully validated in several small-scale studies with simple HPC benchmarks\cite{smpi,heinrich:hal-01523608}. In an earlier work\cite{cornebize:hal-02096571}, we explained how SMPI could be used to efficiently emulate HPL. The proposed approach only requires minimal code modifications and applies to any application whose behavior does not strongly depend on data-dependent intermediate computation results.
Although HPL is not a \emph{real} application, it is quite optimized from an algorithmic point of view, and its behavior can be controlled through 6 different parameters (granularity, geometry of the virtual topology, broadcast/swapping/factorization algorithms, and the number of concurrent iterations). HPL features classical optimization techniques such as heavily relying on \texttt{MPI\_Iprobe} to overlap communication with computations, making it particularly challenging both in terms of tuning and of simulation. In this article (Sections~\ref{sec:smpi} and~\ref{sec:validation}), we present an extensive validation study which covers the whole parameter space of HPL. Our study emphasizes the importance of carefully modeling (1) the platform heterogeneity (not all nodes have exactly the same performance), (2) the short-term temporal variability (e.g., system noise) of compute kernels, as it may propagate into communication patterns, and (3) the complexity of MPI (performance often wildly differs between small and large messages and between intra-node and extra-node communications). We show that disregarding any of these aspects may lead to wildly inaccurate predictions, even for an application as regular as HPL. By building on a few well-identified micro-benchmarks of the BLAS and MPI, we show that these aspects can be well modeled, which allows us to systematically predict the overall performance of HPL within a few percent. Our experimental results span two years, and we report situations (in Sections~\ref{sec:methodology.timeframe} and~\ref{sec:validation.geometry}) where the simulation helped us identify performance regressions or anomalies incurred by the platform when the prediction did not match the real experiments. We show (in Section~\ref{sec:validation}) how this faithful surrogate can be used to evaluate the significance of application parameters and to tune them accordingly solely through simulations.
We also propose a generative model for the compute nodes' performance that can easily be fit from daily measurements and used to produce synthetic platforms similar to the ones at hand. We show (in Section~\ref{sec:whatif}) how this model, which allows us to easily control temporal and spatial variability, can feed our simulations to assess the impact of variability on the performance of the application, or of mitigation strategies (e.g., the eviction of the slower nodes). Likewise, the simulation allows one to easily assess the influence of the physical network on the overall performance. Most of these \emph{what-if} studies would be particularly difficult to conduct through real experiments because of the difficulty of finely controlling the platform. This is, to the best of our knowledge, one of the first sensitivity analyses of a real HPC code accounting for platform uncertainty. As future work, building on the effort of SimGrid developers on supporting the emulation of a wide variety of applications with SMPI\cite{smpi_proxy_apps}, we also intend to conduct similar studies with other HPC benchmarks (e.g.,\xspace HPCG\cite{HPCG} or HPGMG\cite{HPGMG}), real applications (e.g.,\xspace BigDFT\cite{bigdft}) and larger infrastructures. As explained in this article, a good model of the compute kernels and of the MPI library is essential. Therefore, the main challenge for systematic use of our simulation technique now lies in the automation of measurements through well-designed experiments and in the automatic detection of when the envisioned models miss essential characteristics of the platform (multi-modal behaviors, heteroscedasticity, discontinuities,\dots{}). We intend to provide a fully automatic calibration procedure for MPI as well as for every BLAS function, which would allow us to effortlessly predict the performance of many applications by simply linking against a BLAS-replacement library.
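To illustrate the kind of two-level generative model described above, the following sketch draws a persistent per-node mean (spatial variability, i.e., heterogeneity) and adds short-term noise to each measurement (temporal variability). The Gaussian form and all names are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def sample_platform(n_nodes, mean_gflops, spatial_sd, temporal_sd, n_samples, seed=0):
    """Two-level sketch of node performance:
    - each node draws a persistent mean (spatial variability),
    - each measurement adds short-term noise (temporal variability).
    Parameter names and the Gaussian choice are illustrative only."""
    rng = np.random.default_rng(seed)
    node_means = rng.normal(mean_gflops, spatial_sd, size=n_nodes)
    samples = rng.normal(node_means[:, None], temporal_sd,
                         size=(n_nodes, n_samples))
    return node_means, samples
```

Such a model makes what-if scenarios trivial to express: setting `spatial_sd = 0` simulates a perfectly homogeneous machine, while dropping the rows with the lowest `node_means` simulates evicting the slower nodes.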
\def\subsection#1{\osubsection*{#1}} \subsection{Acknowledgments} \label{sec:org91e6663} Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER, and several Universities as well as other organizations (see \url{https://www.grid5000.fr}). Last, we warmly thank the SimGrid developers for their help in integrating our contributions and for their early feedback on this work before submission. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. \pagebreak \bibliographystyle{elsarticle-num}
\section{Introduction}\label{sec:introduction} Egocentric spatial memory is central to our understanding of spatial reasoning in biology \citep{klatzky1998allocentric, burgess2006spatial}, where an embodied agent constantly carries with it a local map of its surrounding geometry. Such representations have particular significance for action selection and motor control \citep{hinman2019neuronal}. For robotics and embodied AI, the benefits of a persistent local spatial memory are also clear. Such a system has the potential to run for long periods, and bypass both the memory and runtime complexities of large scale world-centric mapping. \cite{peters2001sensory} propose an EgoSphere as being a particularly suitable representation for robotics, and more recent works have utilized ego-centric formulations for planar robot mapping \citep{fankhauser2014robot}, drone obstacle avoidance \citep{fragoso2018dynamically} and mono-to-depth \citep{liu2019neural}. In parallel with these ego-centric mapping systems, a new paradigm of differentiable memory architectures has arisen, where a memory bank is augmented to a neural network, which can then learn read and write operations \citep{weston2014memory, graves2014neural, sukhbaatar2015end}. When compared to Recurrent Neural Networks (RNNs), the persistent memory circumvents issues of vanishing or exploding gradients, enabling solutions to long-horizon tasks. These have also been applied to visuomotor control and navigation tasks \citep{wayne2018unsupervised}, surpassing baselines such as the ubiquitous Long Short-Term Memory (LSTM) \citep{hochreiter1997long}. We focus on the intersection of these two branches of research, and propose Egospheric Spatial Memory (ESM), a parameter-free module which encodes geometric and semantic information about the scene in an ego-sphere around the agent. 
To the best of our knowledge, ESM is the first end-to-end trainable egocentric memory with a full panoramic representation, enabling direct encoding of the surrounding scene in a 2.5D image. We also show that by propagating gradients through the ESM computation graph we can learn features to be stored in the memory. We demonstrate the superiority of learning features through the ESM module on both target shape reaching and object segmentation tasks. For other visuomotor control tasks, we show that even without learning features through the module, and instead directly projecting image color values into memory, ESM consistently outperforms other memory baselines. Through these experiments, we show that the applications of our parameter-free ESM module are widespread, where it can either be dropped into existing pipelines as a non-learned module, or end-to-end trained in a larger computation graph, depending on the task requirements. \section{Related Work}\label{sec:related_work} \subsection{Mapping} Geometric mapping is a mature field, with many solutions available for constructing high quality maps. Such systems typically maintain an allocentric map, either by projecting points into a global world co-ordinate system \citep{newcombe2011kinectfusion, whelan2015elasticfusion}, or by maintaining a certain number of keyframes in the trajectory history \citep{zhou2018deeptam,bloesch2018codeslam}. If these systems are to be applied to life-long embodied AI, then strategies are required to effectively select the parts of the map which are useful, and discard the rest from memory \citep{cadena2016past}. For robotics applications, prioritizing geometry in the immediate vicinity is a sensible prior. Rather than taking a world-view to map construction, such systems often formulate the mapping problem in a purely ego-centric manner, performing continual re-projection to the newest frame and pose with fixed-sized storage. 
Unlike allocentric formulations, the memory indexing is then fully coupled to the agent pose, resulting in an ordered representation particularly well suited for downstream egocentric tasks, such as action selection. \cite{peters2001sensory} outline an EgoSphere memory structure as being suitable for humanoid robotics, with indexing via polar and azimuthal angles. \cite{fankhauser2014robot} use ego-centric height maps, and demonstrate on a quadrupedal robot walking over obstacles. \cite{cigla2017gaussian} use per-pixel depth Gaussian Mixture Models (GMMs) to maintain an ego-cylinder of belief around a drone, with applications to collision avoidance \citep{fragoso2018dynamically}. In a different application, \cite{liu2019neural} learn to predict depth images from a sequence of RGB images, again using ego reprojections. These systems are all designed to represent only at the level of depth and RGB features. For mapping more expressive implicit features via end-to-end training, a fully differentiable long-horizon computation graph is required. Any computation graph which satisfies this requirement is generally referred to as memory in the neural network literature. \subsection{Memory} The concept of memory in neural networks is deeply coupled with recurrence. Naive recurrent networks have vanishing and exploding gradient problems \citep{hochreiter1998vanishing}, which LSTMs \citep{hochreiter1997long} and Gated Recurrent Units (GRUs) \citep{cho2014properties} mediate using additive gated structures. More recently, dedicated differentiable memory blocks have become a popular alternative. \cite{weston2014memory} applied Memory Networks (MemNN) to question answering, using hard read-writes and separate training of components. \cite{graves2014neural} and \cite{sukhbaatar2015end} instead made the read and writes `soft' with the proposal of Neural Turing Machines (NTM) and End-to-End Memory Networks (MemN2N) respectively, enabling joint training with the controller. 
Other works have since conditioned dynamic memory on images, for tasks such as visual question answering \citep{xiong2016dynamic} and object segmentation \citep{oh2019video}. Another distinct but closely related approach is self attention \citep{vaswani2017attention}. These approaches also use key-based content retrieval, but do so on a history of previous observations with adjacent connectivity. Despite the lack of geometric inductive bias, recent results demonstrate the amenability of general memory \citep{wayne2018unsupervised} and attention \citep{parisotto2019stabilizing} to visuomotor control and navigation tasks. Other authors have explored the intersection of network memory and spatial mapping for navigation, but have generally been limited to 2D aerial-view maps, focusing on planar navigation tasks. \cite{gupta2017cognitive} used an implicit ego-centric memory which was updated with warping and confidence maps for discrete action navigation problems. \cite{parisotto2017neural} proposed a similar setup, but used dedicated learned read and write operations for updates, and tested on simulated Doom environments. Without consideration for action selection, \cite{henriques2018mapnet} proposed a similar system, but instead used an allocentric formulation, and tested on free-form trajectories of real images. \cite{zhang2018egocentric} also propose a similar system, but with the inclusion of loop closure. Our memory instead focuses on local perception, with the ability to represent detailed 3D geometry in all directions around the agent. The benefits of our module are complementary to existing 2D methods, which instead focus on occlusion-aware planar understanding suitable for navigation. \section{Method}\label{sec:method} In this section, we describe our main contribution, the egospheric spatial memory (ESM) module, shown in Figure \ref{fig:projection_pipeline}. 
The module operates as an Extended Kalman Filter (EKF), with an egosphere image $\mu_t\in\mathbb{R}^{h_s \times w_s \times (2+1+n)}$ and its diagonal covariance $\Sigma_t\in\mathbb{R}^{h_s \times w_s \times (1+n)}$ representing the state. The egosphere image consists of 2 channels for the polar and azimuthal angles, 1 for radial depth, and $n$ for encoded features. The angles are not included in the covariance, as their values are implicit in the egosphere image pixel indices. The covariance only represents the uncertainty in depth and features at these fixed equidistant indices, and diagonal covariance is assumed due to the large state size of the images. Image measurements are assumed to come from projective depth cameras, which similarly store 1 channel for depth and $n$ for encoded features. We also assume incremental agent pose measurements $u_{t}\in\mathbb{R}^6$ with covariance $\Sigma_{u_t}\in\mathbb{R}^{6\times6}$ are available, in the form of a translation and rotation vector. The algorithm overview is presented in Algorithm 1. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/ESM_pipeline.png} \caption{Overview of the ESM module. 
The module consists of projection and quantization steps, used to bring the belief from the previous agent frame to the current agent frame.} \label{fig:projection_pipeline} \end{figure} \begin{wrapfigure}[12]{R}{0.38\textwidth} \vspace{-15pt} \IncMargin{1em} \begin{algorithm}[H] \caption{ESM Step \label{alg:ekf}} \SetAlgoLined Given: $f_m, F_m, f_o, F_o$ \\ $\bar{\mu}_t = f_m(u_t, \mu_{t-1})$ \\ $\bar{\Sigma}_t = F_m(u_t, \mu_{t-1}, \Sigma_{t-1}, \Sigma_{u_t})$ \\ $\hat{\mu}_t = f_o((v_{ti}, p_{ti})_{i \in I})$ \\ $\hat{\Sigma}_t = F_o((v_{ti}, p_{ti}, V_{ti}, P_{ti})_{i \in I})$ \\ $K_t = \bar{\Sigma}_t \left[\bar{\Sigma}_t + \hat{\Sigma}_t\right]^{-1}$\\ $\mu_t = \bar{\mu}_t + K_t[\hat{\mu}_t - \bar{\mu}_t]$ \\ $\Sigma_t = \left[I - K_{t}\right]\bar{\Sigma}_t$ \\ return $\mu_t, \Sigma_t$ \end{algorithm} \end{wrapfigure} First, the motion step takes the state from the previous frame $\mu_{t-1}, \Sigma_{t-1}$ and transforms this into a predicted state for the current frame $\bar{\mu}_t, \bar{\Sigma}_t$ via functions $f_m$, $F_m$ and the incremental pose measurement $u_{t}$ with covariance $\Sigma_{u_t}$. Then in the observation step, we use measured visual features $(v_{ti}\in\mathbb{R}^{h_{vi} \times w_{vi} \times (1+n)})_{i \in \{1, ..., m\}}$ with diagonal covariances $(V_{ti}\in\mathbb{R}^{h_{vi} \times w_{vi} \times (1+n)})_{i \in \{1, ..., m\}}$ originated from $m$ arbitrary vision sensors, and associated pose measurements $(p_{ti}\in\mathbb{R}^6)_{i \in \{1, ..., m\}}$ with covariances $(P_{ti}\in\mathbb{R}^{6 \times 6})_{i \in \{1, ..., m\}}$, to produce a new observation of the state $\hat{\mu}_t\in\mathbb{R}^{h_s \times w_s \times (2+1+n)}$, again with diagonal covariance $\hat{\Sigma}_t\in\mathbb{R}^{h_s \times w_s \times (1+n)}$, via functions $f_o$ and $F_o$. The measured poses also take the form of a translation and rotation vector. 
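Because the covariances are diagonal, the Kalman update at the end of Algorithm 1 reduces to elementwise operations over the state images. A minimal NumPy sketch, applied to the depth and feature channels only (the angle channels carry no covariance); array shapes and names are illustrative:

```python
import numpy as np

def esm_update(mu_bar, sigma_bar, mu_hat, sigma_hat):
    """Fusion step of Algorithm 1 (the last three lines), specialized to
    diagonal covariances: the Kalman gain becomes an elementwise ratio.
    Inputs are (h, w, 1 + n) arrays of depth/feature means and variances.
    A sketch, not the reference implementation."""
    K = sigma_bar / (sigma_bar + sigma_hat)   # K_t = Sigma_bar (Sigma_bar + Sigma_hat)^{-1}
    mu = mu_bar + K * (mu_hat - mu_bar)       # mu_t = mu_bar + K (mu_hat - mu_bar)
    sigma = (1.0 - K) * sigma_bar             # Sigma_t = (I - K) Sigma_bar
    return mu, sigma
```

The elementwise form avoids the matrix inverse that a dense EKF would require, which is what makes the large per-pixel state tractable.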
Finally, the update step takes our state prediction $\bar{\mu}_t, \bar{\Sigma}_t$ and state observation $\hat{\mu}_t, \hat{\Sigma}_t$, and fuses them to produce our new state belief $\mu_t, \Sigma_t$. We spend the remainder of this section explaining the form of the constituent functions. All functions in Algorithm 1 involve re-projections across different image frames, using forward warping. Functions $f_m$, $F_m$, $f_o$ and $F_o$ are therefore all built using the same core functions. While the re-projections could be solved using a typical rendering pipeline of mesh construction followed by rasterization, we instead choose a simpler approach and directly quantize the pixel projections, with variance-based image smoothing to fill in quantization holes. An overview of the projection and quantization operations for a single ESM update step is shown in Fig. \ref{fig:projection_pipeline}. \subsection{Forward Warping} Forward warping projects ordered equidistant homogeneous pixel co-ordinates ${pc}_{f1}$ from frame $f_1$ to non-ordered non-equidistant homogeneous pixel co-ordinates $\tilde{pc}_{f2}$ in frame $f_2$. We use $\tilde{\mu}_{f2} = \{\tilde{\phi}_{f2}, \tilde{\theta}_{f2}, \tilde{d}_{f2}, \tilde{e}_{f2}\}$ to denote the loss of ordering following projection from $\mu_{f1} = \{\phi_{f1}, \theta_{f1}, d_{f1}, e_{f1}\}$, where $\phi$, $\theta$, $d$ and $e$ represent polar angles, azimuthal angles, depth and encoded features respectively. We only consider warping from projective to omni cameras, which corresponds to functions $f_o, F_o$, but the omni-to-omni case as in $f_m, F_m$ is identical except for the inclusion of another polar co-ordinate transformation. The encoded features are assumed constant during projection, $\tilde{e}_{f2} = e_{f1}$. For depth, we must transform the values to the new frame in polar co-ordinates, which is a composition of a linear transformation and a non-linear polar conversion.
Using the camera intrinsic matrix $K_1$, the full projection is composed of a scalar multiplication with homogeneous pixel co-ordinates ${pc}_{f1}$, transformation by the inverse camera matrix $K_1^{-1}$ and frame-to-frame $T_{12}$ matrices, and polar conversion $f_p$: \vspace{-3mm} \begin{equation}\label{depth_to_coords} \{\tilde{\phi}_{f2}, \tilde{\theta}_{f2}, \tilde{d}_{f2}\} = f_p(T_{12} K_1^{-1} \lbrack pc_{f1} \odot d_{f1} \rbrack) \end{equation} Combined, this provides us with both the forward warped image $\tilde{\mu}_{f2} = \{\tilde{\phi}_{f2}, \tilde{\theta}_{f2}, \tilde{d}_{f2}, \tilde{e}_{f2} \}$, and the newly projected homogeneous pixel co-ordinates $\tilde{pc}_{f2} = \{k_{ppr}\tilde{\phi}_{f2}, k_{ppr}\tilde{\theta}_{f2}, \textbf{1}\}$, where $k_{ppr}$ denotes the pixels-per-radian resolution constant. The variances are also projected using the full analytic Jacobians, which are efficiently implemented as tensor operations, avoiding costly autograd usage. \vspace{-3mm} \begin{equation} \hat{\tilde{\Sigma}}_2 = J_V V_1 J_{V}^T + J_P P_{12} J_{P}^T \end{equation} \subsection{Quantization, Fusion and Smoothing} Following projection, we first quantize the floating-point pixel coordinates $\tilde{pc}_{f2}$ into integer pixel co-ordinates $pc_{f2}$. This in general leads to quantization holes and duplicates. The duplicates are handled with a variance-conditioned depth buffer, such that the closest projected depth is used, provided that its variance is lower than a set threshold. This in general prevents highly uncertain close depth values from overwriting highly certain far values. We then perform per-pixel fusion based on lines 6 and 7 in Algorithm 1, provided the depths fall within a set relative threshold; otherwise the minimum depth with sufficiently low variance is taken. This again acts as a depth buffer.
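A minimal NumPy sketch of the projection in eq.(\ref{depth_to_coords}): pixels are lifted to 3D with the depth scaling and $K_1^{-1}$, moved to the target frame by $T_{12}$, and converted to polar co-ordinates. The axis conventions (optical $z$-axis, polar angle measured from it) are illustrative assumptions.

```python
import numpy as np

def forward_warp(pc, d, K1_inv, T12):
    """Sketch of eq. (1): lift homogeneous pixel coords pc (N, 3) with
    depths d (N,) into 3D, transform them into frame 2, and convert to
    polar co-ordinates (phi, theta, radial depth). Conventions assumed."""
    pts = K1_inv @ (pc * d[:, None]).T              # 3 x N points in camera 1
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    x, y, z = (T12 @ pts_h)[:3]                     # coordinates in frame 2
    r = np.sqrt(x**2 + y**2 + z**2)                 # radial depth
    phi = np.arccos(z / r)                          # polar angle from z-axis
    theta = np.arctan2(y, x)                        # azimuthal angle
    return phi, theta, r
```

Scaling the returned angles by $k_{ppr}$ then yields the (floating-point) pixel co-ordinates $\tilde{pc}_{f2}$ that the quantization step operates on.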
Finally, we perform variance-based image smoothing, whereby we treat each $N\times N$ image patch $(\mu_{k,l})_{k\in\{1,..,N\},l\in\{1,..,N\}}$ as a collection of independent measurements of the central pixel, and combine their variance values based on central limit theory, resulting in smoothed values for each pixel in the image $\mu_{i,j}$. Although we use this to update the mean belief, we do not smooth the variance values, meaning projection holes remain at prior variance. This prevents the smoothing from distorting our belief during subsequent projections, and makes the smoothing inherently local to the current frame only. The smoothing formula is as follows, with variance here denoted as $\sigma^2$: \vspace{-3mm} \begin{equation}\label{mean_smoothing} \mu_{i,j} = \frac{\sum_k \sum_l \mu_{k,l} \cdot \sigma^{-2}_{k,l}}{\sum_k \sum_l \sigma^{-2}_{k,l}} \end{equation} Given that the quantization is a discrete operation, we cannot compute its analytic Jacobian for uncertainty propagation. We therefore approximate the added quantization uncertainty using the numerical pixel gradients of the newly smoothed image $G_{i,j}$, and assume additive noise proportional to the $x$ and $y$ quantization distances $\Delta pc_{i,j}$: \vspace{-3mm} \begin{equation}\label{quant_uncertainty} \Sigma_{i,j} = \tilde{\Sigma}_{i,j} + G_{i,j} \Delta pc_{i,j} \end{equation} \subsection{Neural Network Integration} The ESM module can be integrated anywhere into a wider CNN stack, forming an Egospheric Spatial Memory Network (ESMN). Throughout this paper we consider two variants, ESMN and ESMN-RGB, see Figure \ref{fig:simplified_networks}. ESMN-RGB is a special case of ESMN, where RGB features are directly projected into memory, while ESMN projects CNN encoded features into memory. The inclusion of polar angles, azimuthal angles and depth means the full relative polar coordinates are explicitly represented for each pixel in memory.
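The precision-weighted smoothing formula introduced above can be sketched as follows; the patch size and the edge-padding convention are illustrative assumptions, and, as in the paper, only the mean image is smoothed.

```python
import numpy as np

def variance_smooth(mu, var, N=3):
    """Precision-weighted smoothing: each N x N patch is treated as a set
    of independent measurements of its central pixel, combined with
    inverse-variance weights. The variance image is left untouched, so
    projection holes (high prior variance) contribute almost nothing."""
    h, w = mu.shape
    pad = N // 2
    mu_p = np.pad(mu, pad, mode="edge")
    prec_p = np.pad(1.0 / var, pad, mode="edge")    # precisions sigma^{-2}
    num = np.zeros_like(mu)
    den = np.zeros_like(mu)
    for dk in range(N):                              # accumulate shifted windows
        for dl in range(N):
            num += mu_p[dk:dk + h, dl:dl + w] * prec_p[dk:dk + h, dl:dl + w]
            den += prec_p[dk:dk + h, dl:dl + w]
    return num / den
```

High-variance pixels (e.g., quantization holes left at the prior variance) receive near-zero weight and are effectively filled in from their confident neighbours.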
Although the formulation described in Algorithm \ref{alg:ekf} and Fig \ref{fig:projection_pipeline} allows for $m$ vision sensors, the experiments in this paper all involve only a single acquiring sensor, meaning $m=1$. We also only consider cases with constant variance in the acquired images $V_t = k_{var}$, and so we omit the variance images from the ESM input in Fig \ref{fig:simplified_networks} for simplicity. For baseline approaches, we compute an image of camera-relative coordinates via $K^{-1}$, and then concatenate this to the RGB image along with the tiled incremental poses before input to the networks. All values are normalized to $0-1$ before passing to convolutions, based on the permitted range for each channel. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/simplified_networks.png} \caption{High level schematics of the ESM-integrated network architectures ESMN-RGB and ESMN, as well as other baseline architectures used in the experiments: Mono, LSTM and NTM.} \label{fig:simplified_networks} \end{figure} \section{Experiments} The goal of our experiments is to show the wide applicability of ESM to different embodied 3D learning tasks. We test two different applications: \begin{enumerate} \item Image-to-action learning for multi-DOF control (Sec \ref{sec:multi_dof_visuomotor_control}). Here we consider drone and robot manipulator target reacher tasks using either ego-centric or scene-centric cameras. We then assess the ability for ESMN policies to generalize between these different camera modalities, and assess the utility of the ESM geometry for obstacle avoidance. We train policies both using imitation learning (IL) and reinforcement learning (RL). \item Object segmentation (Sec \ref{sec:obj_seg}). Here we explore the task of constructing a semantic map, and the effect of changing the ESM module location in the computation graph on performance. 
\end{enumerate} \input{sections/experiments/4_1_multi_dof_visuomotor_control} \input{sections/experiments/4_2_object_segmentation} \section{Conclusion} Through a diverse set of demonstrations, we have shown that ESM represents a widely applicable computation graph and trainable module for tasks requiring general spatial reasoning. When compared to other memory baselines for image-to-action learning, our module outperforms these dramatically when learning both from ego-centric and scene-centric images. One weakness of our method is that it assumes the availability of both incremental pose measurements of all scene cameras, and depth measurements, which is not a constraint of the other memory baselines. However, with the ever increasing ubiquity of commercial depth sensors, and with there being plentiful streams for incremental pose measurements, including visual odometry, robot kinematics, and inertial sensors, we argue that such measurements are likely to be available in any real-world application of embodied AI. We leave it to future work to investigate the extent to which ESM performance deteriorates with highly uncertain real-world measurements, but with full uncertainty propagation and probabilistic per-pixel fusion, ESM is well suited for accumulating noisy measurements in a principled manner. \section{Appendix} \subsection{Multi-DOF Imitation Learning} \label{app:imitation_learning} \subsubsection{Offline Datasets} The image sequences for the offline datasets are captured following random motion of the agent in both the DR and MR tasks, but known expert actions for each of the three possible targets in the scene are stored at every timestep. The drone reacher is initialized in random locations at the start of each episode, whereas the manipulator reacher is always started in the same robot configuration overlooking the workspace, as in the original RLBench reacher task.
For the scene-centric acquisition, we instantiate three separate scene-centric cameras in the scene. In order to maximize variation in the dataset to encourage network generalization to arbitrary motions at test-time, we reset the pose of each of these scene-centric cameras at every step of the episode, rather than having each camera follow smooth motions. Each new random pose has a rotational bias to face toward the scene center, to ensure objects are likely to be seen frequently. By resetting the camera poses on every time-step, we encourage the networks to learn to make sense of the pose information given to the network, rather than learning policies which fully rely on smoothly and slowly varying images. Both tasks use ego-centric cameras with a wider field of view (FOV) than the scene-centric cameras. This is a common choice in robotics, where wide angle perception is especially necessary for body-mounted cameras. RLBench by default uses an ego-centric FOV of $60$ degrees and a scene-centric FOV of $40$ degrees, and we use the same values for our RLBench-derived MR task. For the drone reacher task, we use an ego-centric FOV of $90$ degrees and a scene-centric FOV of $55$ degrees, to enable all methods to more quickly explore the perceptual ego-sphere. For the manipulator reacher dataset, we also store a robot mask image, which shows the pixels corresponding to the robot for all ego-centric and scene-centric images acquired. Known robot forward kinematics are used for generating the masking image. All images in the offline dataset are $32\times 32$ resolution. \subsubsection{Training} To maximize the diversity from the offline datasets, a new target is randomly chosen from each of the three possible targets for each successive step in the unrolled time dimension in the training batch. This ensures maximum variation in sequences at train-time, despite a relatively small number of 100k 16-frame sequences stored in the offline dataset.
This also strongly encourages each of the networks to learn to remember the location of every target seen so far, because any target can effectively be requested from the training loss at any time. Similarly, for the scene-centric cameras we randomly choose one of the three scene cameras at each successive time-step in the unrolled time dimension for maximum variation. Again this forces the networks to make use of the camera pose information to make sense of the scene, and prevents overfitting on particular repeated sequences in the training set, instead encouraging generalization to fully arbitrary motions. For these experiments, the baseline methods of Mono, LSTM, LSTM-Aux and NTM also receive the full absolute camera pose at each step rather than just the incremental poses received by ESM, as we found this to improve the performance of the baselines. For training the manipulator reacher policies, we additionally use the robot mask images to set high variance pixel values before feeding to the ESM module. This prevents the motion of the robot from breaking the static-scene assumption adopted by ESM during re-projections. We also provide the mask image as input to the baselines. All networks use a batch size of 16, an unroll size of 16, and are trained for 250k steps using an ADAM optimizer with $1e-4$ learning rate. None of the memory baselines cache the internal state between training batches, and so the networks must learn to utilize the memory during the 16-frame unroll. 16 frames is on average enough steps to reach all 3 of the targets for both tasks. \subsubsection{Network Architectures} The network architectures used in the imitation learning experiments are provided in Fig \ref{fig:IL_networks}. Both LSTM baselines use dual stacked architectures with hidden and cell state sizes of $1024$. For NTM we use a similar variant to that used by \cite{wayne2018unsupervised}, namely, we use sequential writing to the memory, and retroactive updates.
Regarding the 16-frame unroll, we again emphasize that 16 steps is on average enough time to reach all targets once. In order to encourage generalization to sequences longer than 16 steps, we limit the writable memory size to 10, and track the usage of these 10 memory cells with a \textit{usage indicator} such that subsequent writes can preferentially overwrite the least used of the 10 cells. This again is the same approach used by \cite{wayne2018unsupervised}, which is one of very few works to successfully apply NTM-style architectures to image-to-action domains. It is important to note that the use of retroactive updates makes the \textit{total} memory size actually 20, as half of the cells are always reserved for the retroactive memory updates. Regarding image padding at the borders for input to the convolutions, the Mono and LSTM/NTM baselines use standard zero padding, whereas ESMN-RGB and ESMN pad the outer borders with the wrapped omni-directional image. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/IL_networks.png} \caption{Image-to-action imitation learning network architectures for Mono, LSTM/NTM, ESMN-RGB and ESMN.} \label{fig:IL_networks} \end{figure} \subsubsection{Auxiliary Losses} \begin{wrapfigure}[14]{R}{0.7\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/LSTM_aux_network.png} \caption{Network architecture for the LSTM-Aux baseline.} \label{fig:lstm_aux_network} \end{wrapfigure} Motivated by the fact that many successful applications of LSTMs to image-to-actions learning involve the use of spatial auxiliary losses \citep{jaderberg2016reinforcement, james2017transferring, sadeghi2017sim2real, mirowski2018learning}, we also compare to an LSTM which uses two such auxiliary proposals, namely the attention loss proposed in \citep{sadeghi2017sim2real} and a 3-dimensional Euler-based variant of the heading loss proposed in \citep{mirowski2018learning}, which itself only considers 1D rotations normal to
the plane of navigation. Our heading loss does not compute the 1D rotational offset from North, as this is not detectable from the image input. Instead, the networks are trained to predict the 3D Euler offset from the orientation of the first frame in the sequence. The modified LSTM network architecture is presented in Fig \ref{fig:lstm_aux_network}, and example images and classification targets for the auxiliary attention loss are presented in Fig \ref{fig:attention_losses}. We emphasize that we did not attempt to tune these auxiliary losses, and applied them unmodified to the total loss function, taking the mean of the cross entropy loss for each, and linearly scaling so that the total auxiliary loss is roughly the same magnitude as the imitation loss at the start of training. Tuning auxiliary losses on different tasks is known to be challenging, and the losses can worsen performance without time-consuming manual tuning, as evidenced in the performance of the UNREAL agent \citep{jaderberg2016reinforcement} compared to a vanilla RL-LSTM network demonstrated in \citep{wayne2018unsupervised}. We reproduce this general finding, and see that the untuned auxiliary losses do not improve performance on our reacher tasks. To further investigate the failure mechanism, we plot the two auxiliary losses on the validation set during training for each task in Fig \ref{fig:auxiliary_curves}. We find that the heading loss over-fits on the training set in all tasks, without learning any useful notion of incremental agent orientation. This is particularly evidenced in the DR tasks, which start each sequence with random agent orientations. In contrast, predicting orientation relative to the first frame on the MR task is much simpler because the starting pose is always constant in the scene, and so cues for the relative orientation are available from individual frames. This is why we observe a lower heading loss for the MR task variants in Fig \ref{fig:auxiliary_curves}. 
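The loss combination described above (mean cross-entropy per auxiliary task, linearly scaled so the total auxiliary term roughly matches the imitation loss magnitude at the start of training) can be sketched as below; computing the scale once from the initial loss values is our reading of the text:

```python
def total_loss(imitation_loss, aux_losses, init_imitation, init_aux_total):
    """Combine the imitation loss with auxiliary losses. The scale factor is
    fixed from the start-of-training magnitudes so that the summed auxiliary
    term initially matches the imitation loss, with no further tuning."""
    scale = init_imitation / max(init_aux_total, 1e-12)
    return imitation_loss + scale * sum(aux_losses)
```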
We do however still observe overfitting in the MR task. This overfitting on all tasks helps to explain why LSTM-Aux performs worse than the vanilla LSTM baseline for some of the tasks in Table \ref{table:main_results}. In contrast, the ESM module embeds strong spatial inductive bias into the computation graph itself, requires no tuning at all, and consistently leads to successful policies on the different tasks, with no sign of overfitting on any of the datasets, as we further discuss in Section \ref{sec:further_discussion}. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/attention_losses.png} \caption{Top: pixel-wise attention classification targets for the LSTM-Aux network on DR (left) and MR (right) tasks. Bottom: the corresponding monocular images from the DR (left) and MR (right) tasks.} \label{fig:attention_losses} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/auxiliary_curves.png} \caption{Auxiliary attention (left) and heading (right) losses on the validation set during training for each of the different imitation learning reacher tasks.} \label{fig:auxiliary_curves} \end{figure} \subsubsection{Further Discussion of Results} \label{sec:further_discussion} The losses for each network evaluated on the training set and validation set during the course of training are presented in Fig \ref{fig:combined_curves}. We first consider the results for the drone reacher task. Firstly, we can clearly see from the DR-ego-rgb and DR-freeform-rgb tasks that the baselines struggle to interpret the stream of incremental pose measurements and depth, in order to select optimal actions in the training set, and this is replicated in the validation set, and in the final task performance in Tab \ref{table:main_results}. 
We can also see that ESMN is able to achieve lower training and validation losses than ESMN-RGB when conditioned on shape in the DR-ego-shape and DR-freeform-shape tasks, and also expectedly achieves higher policy performance, shown in Tab \ref{table:main_results}. What we also observe is that the baselines have a higher propensity to over-fit on training data. Both the LSTM and NTM baselines achieve lower training set error than ESMN on the DR-ego-shape task, but not lower validation error. In contrast, all curves for ESM-integrated networks are very similar between the training and validation set. The ESM module by design performs principled spatial computation, and so these networks are inherently much more robust to overfitting. Looking to the manipulator reacher task, we first notice that the LSTM and NTM baselines are actually able to achieve lower losses than the ESM-integrated networks on both the training set and validation set for the MR-ego-rgb and MR-ego-shape tasks. However, this does not translate to higher policy performance in Table \ref{table:main_results}. The reason for this is that the RLBench reacher task always initializes the robot in the same configuration, and so the diversity in the offline dataset is less than that of the drone reacher offline dataset. The scope of possible robot configurations in each 16-step window in the dataset is more limited. In essence, the baselines are performing well in both training and validation sets as a result of overfitting to the limited data distributions observed. What these curves again highlight is the strong generalization power of ESM-integrated networks. Despite seeing relatively limited robot configurations in the dataset, the ESM policies do not overfit on these, and are still able to use this data to learn general policies which succeed from unseen out-of-distribution images at test-time.
We also again observe the same superiority of ESMN over ESMN-RGB when conditioned on shape input in the training and validation losses for the MR-ego-shape task. A final observation is that all methods fail to perform well on the MR-freeform-shape task. We investigated this, and the weak performance is a combined result of low-resolution $32\times32$ images acquired in the training set and the large distance between the scene-centric cameras and the targets in the scene. The shapes are often difficult to discern from the monocular images acquired, and so little information is available for the methods to successfully learn a policy. We expect that with higher resolution images, or with an average lower distance between the scene-cameras and workspace in the offline dataset, we would again observe the ESM superiority observed for all other tasks. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/combined_curves.png} \caption{Network losses on the training set (top 8) and validation set (bottom 8) during the course of training for imitation learning from the offline datasets, for the different reacher tasks.} \label{fig:combined_curves} \end{figure} \subsubsection{Implicit Feature Analysis} \label{app:implicit} Here we briefly explore the nature of the features which the end-to-end ESM module learns to store in memory for the different reacher tasks. For each task, we perform a Principal Component Analysis (PCA) on the features encoded by the pre-ESM encoders in the ESMN networks. We compute the principal components (PCs) using encoded features from all monocular images in the training dataset. We present example activations for each of the 6 principal components for a sample of monocular images taken from each of the shape conditioned task variations in Fig \ref{fig:pca_images}, with the most dominant principal components shown on the left in green, going through to the least dominant principal component on the right in purple.
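The PCA over encoder features described above can be sketched with plain NumPy; the function names are illustrative, and the $\pm 1$-standard-deviation mapping to the color-space follows the visualization described in the text:

```python
import numpy as np

def feature_pcs(features, n_components=6):
    """PCA of encoder features. `features` is (N, C), pooled from all
    monocular images. Returns the top PC directions (n_components, C) and
    the per-sample activations (N, n_components)."""
    centered = features - features.mean(axis=0)
    # Rows of vt are PC directions, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pcs = vt[:n_components]
    return pcs, centered @ pcs.T

def pc_to_unit(activations):
    """Map activations to [0, 1], with +/- one standard deviation spanning
    the full range (clipped), before projecting to a color-space."""
    s = activations.std() + 1e-12
    return np.clip(activations / (2.0 * s) + 0.5, 0.0, 1.0)
```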
Each principal component is projected to a different color-space for better visualization, with plus or minus one standard deviation of the principal component mapping to the full color-space. Lighter colors correspond to higher PC activations. We can see that the most dominant PC (shown in green) for the drone reacher tasks predominantly activates for the background, and the third PC (blue) appears to activate most strongly for edges. The principal components also behave similarly on the MR-Ego-Shape task. However, on the MR-Freeform-Shape task, which neither ESMN nor any of the baselines are able to succeed on, the first PC appears to activate strongly on both the arm and the target shapes. The main conclusion which we can draw from Fig \ref{fig:pca_images} is that the pre-ESM encoder does not directly encode shape classes as might be expected. Instead, the encoder learns to store other lower level features into ESM. However, as evidenced in the results in Table \ref{table:main_results}, the combination of these lower level features in ESM is clearly sufficient for the post-ESM convolutions to infer the shape id for selecting the correct actions in the policy, at least by using a collection of the encoded features within a small receptive field, which is not possible when using pure RGB features. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/pca_images.png} \caption{Principal Components (PCs) of the features from the pre-ESM encoder of the ESMN architecture on some example images for each of the four shape-conditioned reacher tasks, with each of the six principal components mapped to different colors to maximise clarity. PCs go from most dominant on the left (green) to least dominant on the right (purple).
Lighter values correspond to higher PC activation, with black indicating low activation.} \label{fig:pca_images} \end{figure} \subsection{Obstacle Avoidance} \label{app:obstacle_avoidance} For this experiment, we increase the drone reacher environment by $2\times$ in all directions, resulting in an $8\times$ increase in volume. We then also add 20 spherical obstacles into the scene with radius $r=0.1m$. For the avoidance, we consider a bubble around the agent with radius $R=0.2m$, and flag a collision whenever any part of an obstacle enters this bubble. Given the closest depth measurement available $d_{closest}$, the avoidance algorithm simply computes an avoidant velocity vector $v_a$ whose magnitude is inversely proportional to the squared distance from collision, clipped to the maximum velocity $|v|_{max}$. Equation \ref{eq:obstacle_avoidance_mag} shows the calculation for the avoidance vector magnitude. We run the avoidance controller at $10\times$ the rate of the ESM updates. \begin{equation}\label{eq:obstacle_avoidance_mag} |v_a| = \min \left[ \frac{10^{-3}}{\max\left[d_{closest} - R, 10^{-12}\right]^2}, |v|_{max} \right] \end{equation} In order to prevent avoidant motion away from the targets to reach, we retrain the ESMN-RGB networks on the drone reacher task, but we train the network to also predict the full relative target location as an additional auxiliary loss. When evaluating on the obstacle avoidance task, we prevent depth values within a fixed distance of this predicted target location from influencing the obstacle avoidance. This has the negative effect of causing extra collisions when the agent erroneously predicts that the target is close, but it enables the agent to approach and reach the target without being pushed away by the avoidance algorithm.
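Equation \ref{eq:obstacle_avoidance_mag} translates directly into code; `v_max` here stands for $|v|_{max}$, whose default value is an assumption:

```python
def avoidance_speed(d_closest, R=0.2, v_max=1.0):
    """Magnitude of the avoidant velocity: inversely proportional to the
    squared distance from collision, clipped to the maximum velocity."""
    return min(1e-3 / max(d_closest - R, 1e-12) ** 2, v_max)
```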
Regarding the performance against the baseline, we re-iterate that all monocular images have a large field-of-view of $90$ degrees, and yet we still observe significant reductions in collisions when using the full ESM geometry for avoidance, see Tab \ref{table:obstacle_avoidance_results}. \subsection{Multi-DOF Reinforcement Learning} \label{app:reinforcement_learning} \subsubsection{Training} For the reinforcement learning experiment, we train both ESMN and ESMN-RGB as well as all baselines on a similar sequential target reacher task as defined in Section \ref{sec:multi_dof_visuomotor_control} via DQN \citep{mnih2015human}, where the manipulator must reach red, blue and then yellow targets from egocentric observations. We use $(128\times128)$ images in this experiment rather than $(32\times32)$ as used in the imitation learning experiments. We also use an unroll length of 8 rather than 16. We use discrete delta end-effector translations, with $\pm 0.05$ meters for each axis, with no rotation (resulting in an action size of 6). We use a shaped reward of $r = -len(remaining\_targets) - \lVert e - g \rVert_2$, where $e$ and $g$ are the gripper and current target translation respectively. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/rl_results.png} \caption{Average return during RL training on sequential reacher task over 5 seeds. Shaded regions represent the minimum and maximum across trials.} \label{fig:rl_results} \end{figure} \subsubsection{Network Architectures} In order to make the RL more tractable, and enable larger batch sizes, we use smaller networks than those used in the imitation learning experiments. Both methods use a Siamese network to process the RGB and coordinate-image inputs separately, and consist of 2 convolution (conv) layers with 16 and 32 channels (for each branch). We fuse these branches with a 32 channel conv and $1\times1$ kernel.
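The shaped reward defined above can be written as a small helper; the argument names are illustrative:

```python
import numpy as np

def shaped_reward(remaining_targets, gripper_pos, target_pos):
    """r = -len(remaining_targets) - ||e - g||_2, where e and g are the
    gripper and current target translations respectively."""
    dist = float(np.linalg.norm(np.asarray(gripper_pos, dtype=float)
                                - np.asarray(target_pos, dtype=float)))
    return -len(remaining_targets) - dist
```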
The remainder of the architecture then follows the same as in the imitation learning experiments, but we instead use channel sizes of $64$ throughout. The network outputs 6 values corresponding to the Q-values for each action. All methods use a learning rate of $0.001$, a target network update rate $\tau = 0.001$, batch size of $128$, and \textit{leakyReLU} activations. We use an epsilon-greedy strategy for exploration, with epsilon decayed over 100k training steps from 1 to 0.1. We show the average shaped return during RL training on the sequential reacher task over 5 seeds in Fig \ref{fig:rl_results}. Both ESM policies succeed in reaching all 3 targets, whereas all baseline approaches generally only succeed in reaching 1 target. The Partial-Oracle-Omni (PO2) baseline also succeeds in reaching all 3 targets. \subsection{Object Segmentation} \label{app:obj_seg} \begin{wrapfigure}[26]{R}{0.7\textwidth} \centering \includegraphics[width=\linewidth]{figures/objseg_networks.png} \caption{Object segmentation network architectures for Mono, ESMN-RGB and ESMN.} \label{fig:objseg_networks} \end{wrapfigure} \subsubsection{Dataset} For the object segmentation experiment, we use down-sampled $60 \times 80$ and $120 \times 160$ images from the ScanNet dataset, which we first RGB-Depth align. We use a reduced dataset with a frame skip of 30 to maximize diversity whilst minimizing dataset memory. Many sequences contain slow camera motion, resulting in adjacent frames which vary very little. We use the Eigen-13 classification labels as training targets. \subsubsection{Network Architectures} The Mono, ESMN-RGB and ESMN networks all exhibit a U-Net architecture, and output object segmentation predictions in an ego-sphere map. ESMN-RGB and ESMN do so with a U-Net connecting the ESM output to the final predictions, and Mono does so by projecting and probabilistically fusing the monocular segmentation predictions in a non-learnt manner.
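The non-learnt probabilistic fusion used by the Mono baseline is not spelled out in the text; one standard choice, which we assume here, is multiplicative Bayesian fusion of the per-pixel class distributions followed by renormalisation:

```python
import numpy as np

def fuse_class_probs(map_probs, new_probs, eps=1e-12):
    """Fuse the stored per-pixel class distribution with a newly projected
    one by element-wise product and renormalisation over the class axis.
    Both arrays have shape (..., K) for K classes."""
    fused = map_probs * new_probs
    return fused / (fused.sum(axis=-1, keepdims=True) + eps)
```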
The network architectures are all presented in Fig \ref{fig:objseg_networks}. Regarding image padding at the borders for input to the convolutions, the Mono and LSTM/NTM baselines use standard zero padding, whereas ESMN pads the outer borders with the wrapped omni-directional image. \subsubsection{Training} For training, all losses are computed in the ego-sphere map frame of reference, either following convolutions for ESMN and ESMN-RGB, or following projection and probabilistic fusion for the Mono case. We compute ground-truth segmentation training target labels by projecting the ground truth monocular frames to form an ego-sphere target segmentation image, see the right-hand-side of Fig \ref{fig:obj_seg} for an example. We chose this approach over computing the ground truth segmentations from the complete ScanNet meshes for implementational simplicity. All experiments use a batch size of 16, an unroll size of 8 in the time dimension, and an Adam optimizer with learning rate $10^{-4}$, trained for 250k steps. \subsection{Runtime Analysis} \label{app:runtime} In this section, we perform a runtime analysis of the ESM memory module. We explore the extent to which inference speed is affected both by monocular resolution and egosphere resolution, as well as the differences between CPU and GPU devices, and the choice of machine learning framework. Our ESM module is implemented using Ivy \citep{lenton2021ivy}, which is a templated deep learning framework supporting multiple backend frameworks. The implementation of our module is therefore jointly compatible with TensorFlow 2.0, PyTorch, MXNet, Jax and Numpy. We analyse the runtimes of both the TensorFlow 2.0 and PyTorch implementations, with the results presented in Tables \ref{table:tf2_cpu_runtimes}, \ref{table:tf2_gpu_runtimes}, \ref{table:torch_cpu_runtimes}, and \ref{table:torch_gpu_runtimes}.
All analysis was performed while using ESM with RGB projections to reconstruct ScanNet scene 0002-00 shown in Fig \ref{fig:reconstruction}. The timing is averaged over the course of the 260 frames in the frame-skipped image sequence, with a frame skip of 30, for this scene. ESM steps with $960\times1280$ monocular images were unable to fit into the 11GB of GPU memory when using the PyTorch implementation, and so these results are omitted in Table \ref{table:torch_gpu_runtimes}. \begin{figure}[h!] \centering \includegraphics[scale=0.22]{figures/FullScanNetImage.png} \caption{Left: point cloud representation of the ego-centric memory around the camera after full rotation in ScanNet scene 0002-00, with RGB features. Mid: (top) Equivalent omni-directional RGB image, (bottom) equivalent omni-directional depth image, both without smoothing to better demonstrate the quantization holes. Right: (top) A single RGB frame, (bottom) a single depth frame. This is the reconstruction produced during the time-analysis. 
This particular reconstruction used a monocular resolution of $120\times60$, and a memory resolution of $180\times360$} \label{fig:reconstruction} \end{figure} \begin{table}[h] \centering \begin{tabular}{|| c | c || c | c | c | c | c ||} \cline{3-7} \multicolumn{2}{c||}{} & \multicolumn{5}{c||}{Monocular Res} \\ \cline{3-7} \multicolumn{2}{c||}{} & $ 60\times80$ & $ 120\times160$ & $ 240\times320$ & $ 480\times640$ & $ 960\times1280$ \\ \hline \hline \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{Memory Res}}} & $ 45\times90$ & 245.4 & 162.6 & 83.7 & 24.4 & 6.3 \\ \cline{2-7} & $ 90\times180$ & 140.1 & 126.5 & 70.8 & 23.3 & 6.1 \\ \cline{2-7} & $ 180\times360$ & 63.9 & 64.0 & 47.5 & 19.2 & 5.8 \\ \cline{2-7} & $ 360\times720$ & 16.3 & 14.3 & 14.5 & 11.1 & 4.7 \\ \cline{2-7} & $ 720\times1440$ & 3.9 & 3.7 & 3.6 & 3.6 & 2.7 \\ \cline{2-7} & $ 1440\times2880$ & 1.1 & 1.1 & 1.0 & 1.0 & 0.9 \\ \hline \end{tabular} \caption{Average frames-per-second (fps) runtime for the TensorFlow 2 implemented ESM module on the ScanNet scene 0002-00, with RGB projections, running on 8 CPU cores.} \label{table:tf2_cpu_runtimes} \end{table} \begin{table}[h] \centering \begin{tabular}{|| c | c || c | c | c | c | c ||} \cline{3-7} \multicolumn{2}{c||}{} & \multicolumn{5}{c||}{Monocular Res} \\ \cline{3-7} \multicolumn{2}{c||}{} & $ 60\times80$ & $ 120\times160$ & $ 240\times320$ & $ 480\times640$ & $ 960\times1280$ \\ \hline \hline \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{Memory Res}}} & $ 45\times90$ & 262.1 & 223.8 & 166.5 & 76.2 & 24.1 \\ \cline{2-7} & $ 90\times180$ & 193.5 & 214.8 & 164.4 & 73.5 & 23.6 \\ \cline{2-7} & $ 180\times360$ & 184.6 & 181.6 & 147.6 & 71.5 & 23.0 \\ \cline{2-7} & $ 360\times720$ & 91.0 & 89.1 & 83.6 & 56.8 & 21.6 \\ \cline{2-7} & $ 720\times1440$ & 19.8 & 19.5 & 18.7 & 17.6 & 11.3 \\ \cline{2-7} & $ 1440\times2880$ & 4.9 & 4.8 & 4.8 & 4.5 & 4.2 \\ \hline \end{tabular} \caption{Average frames-per-second (fps) runtime for the 
TensorFlow 2 implemented and pre-compiled ESM module on the ScanNet scene 0002-00, with RGB projections, running on Nvidia RTX 2080 GPU.} \label{table:tf2_gpu_runtimes} \end{table} \begin{table}[h] \centering \begin{tabular}{|| c | c || c | c | c | c | c ||} \cline{3-7} \multicolumn{2}{c||}{} & \multicolumn{5}{c||}{Monocular Res} \\ \cline{3-7} \multicolumn{2}{c||}{} & $ 60\times80$ & $ 120\times160$ & $ 240\times320$ & $ 480\times640$ & $ 960\times1280$ \\ \hline \hline \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{Memory Res}}} & $ 45\times90$ & 98.9 & 45.8 & 26.3 & 6.9 & 1.8 \\ \cline{2-7} & $ 90\times180$ & 57.0 & 36.8 & 22.3 & 6.6 & 1.7 \\ \cline{2-7} & $ 180\times360$ & 14.1 & 11.6 & 10.5 & 5.0 & 1.5 \\ \cline{2-7} & $ 360\times720$ & 5.5 & 5.1 & 4.8 & 3.2 & 1.4 \\ \cline{2-7} & $ 720\times1440$ & 1.5 & 1.4 & 1.4 & 1.3 & 0.8 \\ \cline{2-7} & $ 1440\times2880$ & 0.4 & 0.4 & 0.4 & 0.4 & 0.3 \\ \hline \end{tabular} \caption{Average frames-per-second (fps) runtime for the PyTorch implemented ESM module on the ScanNet scene 0002-00, with RGB projections, running on 8 CPU cores.} \label{table:torch_cpu_runtimes} \end{table} \begin{table}[h] \centering \begin{tabular}{|| c | c || c | c | c | c | c ||} \cline{3-7} \multicolumn{2}{c||}{} & \multicolumn{5}{c||}{Monocular Res} \\ \cline{3-7} \multicolumn{2}{c||}{} & $60\times80$ & $120\times160$ & $240\times320$ & $480\times640$ & $960\times1280$ \\ \hline \hline \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{Memory Res}}} & $45\times90$ & 108.2 & 105.5 & 92.5 & 86.1 & - \\ \cline{2-7} & $90\times180$ & 102.0 & 92.2 & 85.0 & 83.8 & - \\ \cline{2-7} & $180\times360$ & 81.8 & 79.9 & 76.2 & 72.0 & - \\ \cline{2-7} & $360\times720$ & 44.1 & 43.0 & 42.3 & 41.7 & - \\ \cline{2-7} & $720\times1440$ & 13.9 & 13.9 & 13.8 & 13.1 & - \\ \cline{2-7} & $1440\times2880$ & 3.7 & 3.7 & 3.7 & 3.6 & - \\ \hline \end{tabular} \caption{Average frames-per-second (fps) runtime for the PyTorch implemented ESM module on 
the ScanNet scene 0002-00, with RGB projections, running on Nvidia RTX 2080 GPU.} \label{table:torch_gpu_runtimes} \end{table} What we see from these runtime results is that the off-the-shelf ESM module is fully capable of operating as a real-time mapping system. Compared to more computationally intensive mapping and fusion pipelines, the simplicity of ESM makes it particularly suitable for applications where depth and pose measurements are available, and highly responsive, computationally cheap local mapping is a strong requirement, such as on-board mapping for drones. \subsubsection{Reinforcement Learning} \label{sec:image_to_action_RL} Assuming access to expert actions in partially observable (PO) environments is inherently limiting. It is not necessarily true that the best action always rotates the camera directly to the next target, for example. In general, for finding optimal policies in PO environments, methods such as reinforcement learning (RL) must be used. We therefore train both ESM networks and all the baselines on a simpler variant of the MR-Ego-Color task via DQN \citep{mnih2015human}. The manipulator must reach red, blue and then yellow spherical targets from egocentric observations, after which the episode terminates. We refer to this variant as MR-Seq-Ego-Color, due to the sequential nature. The only other difference to MR is that MR-Seq uses $128\times128$ images as opposed to $32\times32$. The ESM-integrated networks again outperform all baselines, learning to reach all three targets, while the baseline policies all only succeed in reaching one. Full details of the RL setup and learning curves are given in Appendix \ref{app:reinforcement_learning}.
\subsection{Multi-DOF Visuomotor Control} \label{sec:multi_dof_visuomotor_control} While ego-centric cameras are typically used when learning to navigate planar scenes from images \citep{jaderberg2016reinforcement, zhu2017target, gupta2017cognitive, parisotto2017neural}, \textit{static} scene-centric cameras are the de facto standard when learning multi-DOF controllers for robot manipulators \citep{levine2016end, james2017transferring, matas2018sim, james2019sim}. We consider the more challenging and less explored setup of learning multi-DOF visuomotor controllers from ego-centric cameras, and also from \textit{moving} scene-centric cameras. LSTMs are the de facto memory architecture in the RL literature \citep{jaderberg2016reinforcement, espeholt2018impala, kapturowski2018recurrent, mirowski2018learning, bruce2018learning}, making this a suitable baseline. NTMs represent another suitable baseline, having outperformed LSTMs on visual navigation tasks \citep{wayne2018unsupervised}. Many other works exist which outperform LSTMs for planar navigation in 2D maze-like environments \citep{gupta2017cognitive, parisotto2017neural, henriques2018mapnet}, but the top-down representation means these methods are not readily applicable to our multi-DOF control tasks. LSTM and NTM are therefore selected as competitive baselines for comparison. \subsubsection{Imitation Learning} For our imitation learning experiments, we test the utility of the ESM module on two simulated visual reacher tasks, which we refer to as Drone Reacher (\textit{DR}) and Manipulator Reacher (\textit{MR}). Both are implemented using the CoppeliaSim robot simulator~\citep{rohmer2013v}, and its Python extension PyRep~\citep{james2019pyrep}. We implement DR ourselves, while MR is a modification of the reacher task in RLBench~\citep{james2019rlbench}. Both tasks consist of 3 targets placed randomly in a simulated arena, and colors are newly randomized for each episode.
The targets consist of a cylinder, sphere, and ``star'', see Figure \ref{fig:reacher_tasks}. \begin{wrapfigure}[14]{R}{0.5\textwidth} \centering \includegraphics[scale=0.1]{figures/reacher_diagrams.png} \caption{Visualization of (a) Drone Reacher and (b) Manipulator Reacher tasks.} \label{fig:reacher_tasks} \end{wrapfigure} In both tasks, the target locations remain fixed for the duration of an episode, and the agent must continually navigate to newly specified targets, reaching as many as possible in a fixed time frame of 100 steps. The targets are specified to the agent either as RGB color values or shape class id, depending on the experiment. The agent does not know in advance which target will next be specified, meaning a memory of all targets and their locations in the scene must be maintained for the full duration of an episode. Both environments have a single body-fixed camera, as shown in Figure \ref{fig:reacher_tasks}, and also an external camera with freeform motion, which we use separately for different experiments. For training, we generate an offline dataset of 100k 16-step sequences from random motions for both environments, and train the agents using imitation learning from known expert actions. Action spaces of joint velocities $\dot{q}\in\mathbb{R}^{7}$ and Cartesian velocities $\dot{x}\in\mathbb{R}^{6}$ are used for \textit{MR} and \textit{DR} respectively. Expert translations move the end-effector or drone directly towards the target, and expert rotations rotate the egocentric camera towards the target via the shortest rotation. Expert joint velocities are calculated for linear end-effector motion via the manipulator Jacobian. For all experiments, we compare to baselines of single-frame, dual-stacked LSTM with and without spatial auxiliary losses, and NTM. We also compare against a network trained on partial oracle omni-directional images, masked at unobserved pixels, which we refer to as Partial-Oracle-Omni (PO2), as well as random and expert policies.
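The expert translational action described above (move directly towards the target) can be sketched as follows; the fixed speed `v_max` is an assumption, and the shortest-rotation and Jacobian components are omitted:

```python
import numpy as np

def expert_translation(agent_pos, target_pos, v_max=1.0):
    """Translational expert velocity: a unit vector from the agent (drone or
    end-effector) directly towards the target, scaled by an assumed fixed
    speed. Returns zero velocity when already at the target."""
    delta = np.asarray(target_pos, dtype=float) - np.asarray(agent_pos, dtype=float)
    dist = np.linalg.norm(delta)
    if dist < 1e-9:
        return np.zeros_like(delta)
    return v_max * delta / dist
```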
PO2 cannot see regions where the monocular camera has not looked, but it maintains a pixel-perfect memory of anywhere it has looked. Full details of the training setups are provided in Appendix \ref{app:imitation_learning}. The results for all experiments are presented in Table \ref{table:main_results}. \begin{table}[h!] \begin{center} \resizebox{\columnwidth}{!}{% \begin{tabular}{|| c || c | c | c | c || c | c | c | c||} \cline{2-9} \multicolumn{1}{c||}{} & \multicolumn{4}{c||}{Drone Reacher} & \multicolumn{4}{c||}{Manipulator Reacher} \\ \cline{2-9} \multicolumn{1}{c||}{} & \multicolumn{2}{c|}{Ego Acq} & \multicolumn{2}{c||}{Freeform Acq} & \multicolumn{2}{c|}{Ego Acq} & \multicolumn{2}{c||}{Freeform Acq} \\ \cline{2-9} \multicolumn{1}{c||}{} & Color & Shape & Color & Shape & Color & Shape & Color & Shape \\ \hline \hline Mono & 0.6(0.7) & 0.9(1.7) & 2.4(5.0) & 0.5(1.6) & 1.8(1.5) & 1.6(1.1) & 0.1(0.2) & 0.1(0.2) \\ \hline LSTM & 12.7(3.4) & 4.1(2.3) & 1.0(1.0) & 0.6(0.8) & 1.0(0.5) & 0.1(0.2) & 0.1(0.2) & 0.1(0.4) \\ \hline LSTM Aux & 1.3(0.8) & 0.4(0.8) & 2.4(2.2) & 1.9(1.7) & 1.0(0.7) & 0.1(0.3) & 0.3(0.6) & 0.0(0.2) \\ \hline NTM & 10.5(4.2) & 2.5(1.9) & 3.2(2.9) & 1.6(1.5) & 1.0(0.6) & 0.2(0.4) & 0.1(0.3) & 0.1(0.2) \\ \hline ESMN-RGB & \textbf{20.6}(7.3) & 4.1(3.8) & \textbf{16.1}(12.7) & 1.1(2.6) & \textbf{11.4}(5.1) & 3.1(3.2) & \textbf{10.5}(5.7) & \textbf{0.9}(1.6) \\ \hline ESMN & \textbf{20.8}(7.8) & \textbf{18.3}(6.4) & \textbf{16.6}(12.9) & \textbf{8.5}(11.2) & \textbf{11.7}(5.3) & \textbf{4.7}(4.0) & \textbf{11.0}(5.8) & \textbf{1.0}(1.2) \\ \hline\hline Random & 0.1(0.2) & 0.1(0.2) & 0.1(0.2) & 0.1(0.2) & 0.1(0.2) & 0.1(0.2) & 0.1(0.2) & 0.1(0.2) \\ \hline PO2 & 21.0(8.6) & 14.4(6.1) & 19.1(12.7) & 3.9(8.1) & 13.0(5.9) & 4.1(3.5) & 12.5(6.1) & 2.6(2.5) \\ \hline Expert & 21.3(8.4) & 21.3(8.4) & 21.3(8.4) & 21.3(8.4) & 16.5(5.2) & 16.5(5.2) & 16.5(5.2) & 16.5(5.2) \\ \hline \end{tabular}% } \caption{Final policy performances on the various drone 
reacher (DR) and manipulator reacher (MR) tasks, from egocentric acquired (Ego Acq) or freeform acquired (Freeform Acq) cameras, with the network conditioned on either target color or shape. The values indicate the mean number of targets reached in the 100 time-step episode, and the standard deviation, when averaged over 256 runs. ESMN-RGB stores RGB features in memory, while ESMN stores learnt features.} \label{table:main_results} \end{center} \end{table} \input{sections/experiments/4_1_1_IL/4_1_1a_from_ego} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/trajectories.png} \caption{Sample trajectories through the memory for (a) ESMN on DR-Ego-Shape, and ESMN-RGB on (b) DR-Ego-Color, (c) DR-Freeform-Color, (d) MR-Ego-Color, (e) MR-Freeform-Color. The images each correspond to features in the full $90\times180$ memory at that particular timestep $t$.} \label{fig:trajectories} \end{figure} \input{sections/experiments/4_1_1_IL/4_1_1b_from_scene} \input{sections/experiments/4_1_1_IL/4_1_1c_obstacle_avoidance} \input{sections/experiments/4_1_1_IL/4_1_1d_cam_generalization} \input{sections/experiments/4_1_2_RL} \subsection{Object Segmentation} \label{sec:obj_seg} We now explore the suitability of the ESM module for object segmentation. One approach is to perform image-level segmentation in individual monocular frames, and then perform probabilistic fusion when projecting into the map \citep{mccormac2017semanticfusion}. We refer to this approach as Mono. Another approach is to first construct an RGB map, and then pass this map as input to a network. This has the benefit of wider context, but lower resolution is necessary to store a large map in memory, meaning details can be lost. ESMN-RGB adopts this approach. Another approach is to combine monocular predictions with multi-view optimization to gain the benefits of wider surrounding context as in \citep{zhi2019scenecode}. 
Similarly, the ESMN architecture is able to combine monocular inference with the wider map context, but does so by constructing a network with both image-level and map-level convolutions. These ESM variants adopt the same broad architectures as shown in Fig \ref{fig:simplified_networks}, with the full networks specified in Appendix \ref{app:obj_seg}. We do not attempt to claim state-of-the-art results, but rather to further demonstrate the wide applications of the ESM module, and to explore the effect of placing the ESM module at different locations in the convolutional stack. We evaluate segmentation accuracy based on the predictions projected to the ego-centric map. With fixed network capacity between methods, we see that ESMN outperforms both baselines, see Table \ref{table:obj_seg} for the results, and Figure \ref{fig:obj_seg} for example predictions in a ScanNet reconstruction. Further details are given in Appendix \ref{app:obj_seg}. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figures/obj_seg.png} \caption{Top: Segmentation predictions in ESM memory for Mono and ESMN, and ground truth. Middle: Point cloud rendering of the predictions.
Bottom: Monocular input image sequence.} \label{fig:obj_seg} \end{figure} \begin{table}[h] \centering \begin{tabular}{|| c || c | c || c | c || c | c ||} \cline{2-7} \multicolumn{1}{c||}{} & \multicolumn{2}{c||}{Images $ 60\times80$} & \multicolumn{2}{c||}{Images $ 120\times160$} & \multicolumn{2}{c||}{Images $ 60\times80$} \\ \cline{2-7} \multicolumn{1}{c||}{} & \multicolumn{2}{c||}{Memory $ 90\times180$} & \multicolumn{2}{c||}{Memory $ 90\times180$} & \multicolumn{2}{c||}{Memory $ 180\times360$} \\ \cline{2-7} \multicolumn{1}{c||}{} & 1-frame & 16-frame & 1-frame & 16-frame & 1-frame & 16-frame \\ \hline \hline Mono & \textbf{54.5}(19.0) & 55.1(13.6) & \textbf{54.9}(19.3) & 55.6(13.9) & \textbf{54.6}(19.2) & 55.3(13.7) \\ \hline ESMN-RGB & 51.4(19.6) & 54.1(13.8) & 51.7(19.3) & 54.4(13.2) & \textbf{54.0}(19.2) & 57.1(13.5) \\ \hline ESMN & \textbf{55.0}(19.0) & \textbf{59.4}(12.7) & \textbf{55.3}(19.1) & \textbf{59.8}(13.0) & \textbf{55.2}(19.4) & \textbf{59.7}(12.9) \\ \hline \end{tabular} \caption{Object segmentation accuracies on the ScanNet test set for a monocular fusion baseline (Mono), as well as ESMN-RGB and ESMN.} \label{table:obj_seg} \end{table}
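The per-cell probabilistic fusion underlying the Mono baseline can be sketched as a recursive Bayesian update: each frame's class probabilities multiply the running posterior for a map cell, which is then renormalised. A minimal sketch with illustrative class counts and probabilities, not the SemanticFusion implementation:

```python
def fuse_cell(prior, frame_probs):
    """Recursive Bayesian fusion of per-frame class probabilities for one
    map cell: p(c) is proportional to prior(c) * product over frames of p_t(c)."""
    post = list(prior)
    for probs in frame_probs:
        post = [p * q for p, q in zip(post, probs)]
        norm = sum(post)
        post = [p / norm for p in post]  # renormalise after each frame
    return post

# Three classes, uniform prior; two frames both favouring class 1.
prior = [1.0 / 3.0] * 3
frames = [[0.2, 0.6, 0.2], [0.25, 0.7, 0.05]]
post = fuse_cell(prior, frames)
```

Because the updates are independent per cell, this fusion is order-invariant and can be applied as each new frame is projected into the map.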
\section{Introduction} \label{sec:introduction} Feedback mechanisms from active galactic nuclei (AGN) are widely considered to play a key role in galaxy formation and evolution (\citeads{1998A&A...331L...1S, 2010MNRAS.402.1516K, 2012ARA&A..50..455F, 2015ARA&A..53..115K}). AGN feedback is indeed included in all theoretical, semianalytic, and numerical studies of galaxy formation and evolution (e.g. \citeads{2004ApJ...600..580G, 2005Natur.433..604D, 2010ApJ...717..708C, 2016MNRAS.461.3457G, 2019MNRAS.490.3234N}) as it allows us to reconcile theoretical predictions with observed galaxy properties. Such AGN activity is considered the main factor responsible for the quenching of star formation in more massive galaxies (so-called ‘negative feedback’). The energy output of a supermassive black hole (SMBH) accreting close to the Eddington limit is large enough to drive massive, wide-angle outflows on large scales (e.g. \citeads{2012ApJ...745L..34Z, 2014MNRAS.439..400Z, 2014MNRAS.444.2355C, 2015ARA&A..53..115K, 2017MNRAS.465..547P}) that are capable of either sweeping the gas out of the host galaxy, thus reducing the host galaxy gas reservoir for star formation, or heating the intergalactic medium of the host galaxy through the injection of thermal energy, thus preventing the gas from cooling and collapsing to form stars (‘ejective’ versus ‘preventive’ feedback; e.g. see \citeads{2017ApJ...839..120W, 2018NatAs...2..179C}). Both processes are expected to halt the accretion onto the central black hole and, consequently, to give rise to the SMBH mass values observed to correlate with the physical properties of the host galaxy bulge (i.e. its mass, velocity dispersion and luminosity; e.g. \citeads{2000ApJ...539L...9F, 2013ARA&A..51..511K}). Additional observational evidence supporting the mutual influence of the central SMBH and the host galaxy comes from the observed similarity between the star formation (SF) and BH accretion histories across cosmic time. 
In fact, both activity histories are seen to peak at $z\sim2$ \citepads{1996MNRAS.283.1388M, 2014ARA&A..52..415M}, meaning that the bulk of both SF and BH accretion occurred within $z\sim1-3$ (e.g. \citeads{2004MNRAS.351..169M, 2010MNRAS.401.2531A, 2015MNRAS.451.1892A}). The epoch $z\sim1-3$ (also referred to as ‘cosmic noon’) is hence crucial for studying such phenomena and their effects in action, since this is the time when AGN feedback is expected to be most effective. Thanks to advanced integral-field spectroscopic (IFS) facilities, AGN-driven outflows have been extensively observed from optical to IR and mm bands, in both local (e.g. \citeads{2010A&A...518L.155F, 2015A&A...583A..99F, 2012A&A...543A..99C, 2014A&A...562A..21C}) and high-redshift galaxies (e.g. \citeads{2012A&A...537L...8C, 2012MNRAS.425L..66M, 2015A&A...580A.102C, 2015ApJ...799...82C}). It is worth mentioning that while theoretical predictions usually refer to the whole outflowing gas, observations usually trace the emission produced by a single gas phase of the outflow. Therefore, in order to properly compare model predictions with observational results, it is fundamental to obtain a complete, multi-phase description of the outflow (e.g. \citeads{2018NatAs...2..176C, 2018NatAs...2..198H}). Even though the existence of AGN-driven outflows has been widely confirmed by observations, a number of relevant open questions remain unanswered: for instance, how the energy released by the accreting BH is coupled to the interstellar medium (ISM), thus driving large-scale outflows, and how efficient the coupling between the nuclear and galaxy-scale outflows is. Theoretical models (e.g. \citeads{2003ApJ...596L..27K, 2005ApJ...635L.121K, 2015ARA&A..53..115K}) predict fast ($v\sim0.1c$), highly ionised winds, accelerated on sub-pc scales by the AGN radiative force, as the origin of the strong galaxy-scale feedback.
As the nuclear wind impacts the ISM of the host galaxy, it produces an inner reverse shock that slows down the wind, along with an outer forward shock accelerating the galactic ISM. Depending on the efficiency of cooling processes (typically radiative) in removing energy from the hot shocked inner gas, there are two main wind-driving modes (e.g. \citeads{2010MNRAS.408L..95K, 2014MNRAS.444.2355C, 2015ARA&A..53..115K}). If the cooling occurs on a timescale that is shorter than the wind flow time, most of the inner wind kinetic energy is lost (usually via inverse Compton scattering) and, therefore, the wind momentum is the only conserved physical quantity (‘momentum-driven’ regime). Vice versa, if the cooling is negligible, the postshock gas retains all the mechanical energy and expands adiabatically (‘energy-driven’ regime), sweeping up a significant amount of the host galaxy gas. According to a widely accepted picture, the observed scaling relations are the result of the effect of AGN feedback acting on the host galaxy through two distinct, subsequent phases (e.g. \citeads{2012ApJ...745L..34Z, 2015ARA&A..53..115K}): an initial momentum-driven regime lasting as long as the BH mass has not yet reached the $M_{\rm BH}-\sigma$ relation and the outflow is confined within $\sim$1 kpc from the central BH, followed by a later energy-driven phase after the BH mass has settled on the relation, during which the outflow can propagate beyond 1 kpc scales. From an observational point of view, the most promising candidates for acting as the ‘engine’ of the large-scale feedback are the so-called ultra-fast outflows (UFOs), which are highly-ionised, accretion disk winds with mildly relativistic velocities that originate at sub-pc scales.
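The difference between the two regimes can be quantified by the momentum boost of the large-scale outflow, $\dot{p}_{\rm out}/(L_{\rm AGN}/c)$: momentum conservation caps it near unity, whereas energy conservation transfers the wind kinetic power $\dot{E}_w\simeq(v_w/2c)\,L_{\rm AGN}$ to the swept-up ISM, giving $\dot{p}_{\rm out}\simeq 2\dot{E}_w/v_{\rm out}$ and hence a boost of $v_w/v_{\rm out}$. A minimal numerical sketch of these standard estimates (our own illustration; the luminosity and velocities are representative, not values from the cited papers):

```python
C = 2.998e10  # speed of light [cm/s]

def wind_kinetic_power(l_agn, v_wind):
    """Kinetic power of a nuclear wind carrying momentum flux L/c:
    E_w = (v_w / 2c) * L_AGN  [erg/s]."""
    return 0.5 * (v_wind / C) * l_agn

def momentum_boost(l_agn, v_wind, v_out):
    """Momentum boost p_out / (L/c) of an energy-conserving large-scale
    outflow: p_out = 2 E_w / v_out, so the boost reduces to v_wind / v_out."""
    p_out = 2.0 * wind_kinetic_power(l_agn, v_wind) / v_out
    return p_out / (l_agn / C)

# UFO at v_w = 0.1c driving a kpc-scale outflow at 1500 km/s:
boost = momentum_boost(1e46, 0.1 * C, 1500e5)
```

For these numbers the boost is $\sim$20, the well-known signature used observationally to argue for energy-driven propagation.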
They are usually detected in AGN X-ray spectra \citepads{2002ApJ...579..169C, 2016ApJ...824...53C, 2013MNRAS.430...60G, 2015MNRAS.451.4169G, 2015Sci...347..860N} via the presence of strongly blueshifted absorption lines of highly-ionised metals (typically iron, e.g. Fe XXV and Fe XXVI). UFOs are found in at least 40\% of the local sources \citepads{2010A&A...521A..57T, 2011ApJ...742...44T, 2012MNRAS.422L...1T, 2013MNRAS.430.1102T}, with typical mass outflow rates of $\sim0.01-1$ \(\text{M}_\odot\) yr$^{-1}$ and kinetic powers of log$\dot{E}_{\rm K}\sim42-45$ erg s$^{-1}$ \citepads{2012MNRAS.422L...1T}. Moving to high redshifts ($z>1$), UFO detection is hampered by the resolution or sensitivity limits of the current observational facilities. In fact, the number of AGN hosting UFOs discovered thus far at $z>1$ drops drastically to 14 objects (see \citeads{2018A&A...610L..13D} for the list of published objects, and \citealt{Chartas.high-z.UFOs} for the latest updates); of these, twelve are gravitationally lensed systems (\citeads{2002ApJ...573L..77H, 2003ApJ...595...85C, 2007ApJ...661..678C, 2009NewAR..53..128C, 2016ApJ...824...53C, 2018A&A...610L..13D}; \citealt{Chartas.high-z.UFOs}). Strong gravitational lensing is indeed a well-known, powerful tool to investigate the physical properties of distant quasars (QSOs). The magnified view delivered by gravitational lenses allows us to separate the active nuclei from their hosts, enabling new measurements and spatially resolved studies, which otherwise would not have been possible beyond the local Universe (e.g. \citeads{2006ApJ...649..616P, 2009ApJ...702..472R, 2017ApJ...845L..14B,2020MNRAS.495.2387S,2021MNRAS.500.3667S}). To test theoretical predictions and to shed light on the acceleration and propagation mechanisms of large-scale outflows, we need to compare the energetics of the X-ray nuclear UFO with that of the large-scale outflow.
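For orientation, the quoted kinetic powers follow from $\dot{E}_{\rm K}=\tfrac{1}{2}\dot{M}v^2$; a minimal numerical check with representative values picked from the ranges above (our own illustration, not a result from the cited surveys):

```python
import math

M_SUN = 1.989e33   # solar mass [g]
YR = 3.156e7       # year [s]
C = 2.998e10       # speed of light [cm/s]

def kinetic_power(mdot_msun_yr, v_over_c):
    """Kinetic power E_K = 0.5 * Mdot * v^2 [erg/s] of a wind with mass
    outflow rate mdot (Msun/yr) moving at v (in units of c)."""
    mdot = mdot_msun_yr * M_SUN / YR   # g/s
    v = v_over_c * C                   # cm/s
    return 0.5 * mdot * v ** 2

# Mdot = 0.1 Msun/yr at v = 0.2c falls inside the quoted log E_K ~ 42-45:
logEk = math.log10(kinetic_power(0.1, 0.2))
```

With these inputs the result lands near log$\dot{E}_{\rm K}\sim44$, in the middle of the observed range.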
In this work, we focus on the ionised phase of large-scale outflows traced by the optical emission of the [O III]$\lambda \lambda$4959,5007 line doublet. Given that it is a forbidden transition, it preferentially traces the emission originating from the kpc-scale, typical of the AGN Narrow Line Region (NLR), since it cannot be produced on the high-density ($n\sim10^{10}$ cm$^{-3}$), sub-pc scale of the AGN broad line region (BLR). In the presence of outflows, the [O III] line profile is highly asymmetric with a broad, blueshifted wing corresponding to high speeds along the line of sight ($v\gtrsim1000$ km s$^{-1}$; see e.g. \citeads{2015A&A...580A.102C, 2015ApJ...799...82C, 2015A&A...574A..82P, 2016A&A...588A..58B, 2020A&A...644A..15M}). This work is aimed at studying the connection between nuclear, X-ray UFOs, and the ionised phase of large-scale outflows, for the first time, in two QSOs close to the peak of AGN activity ($z\sim2$). We use the [O III]$\lambda$5007 emission line to trace the ionised outflow, along with results on X-ray UFOs from the literature to properly compare the wind energetics on different scales. In doing so, we aim to highlight (and take advantage of) the crucial role of gravitational lensing as a powerful tool for overcoming the current observational limits and investigating the physical properties of distant QSOs. This paper is organised as follows. In Sect. \ref{sec:description}, we present the selected targets and describe the observation and data reduction procedure. In Sect. \ref{sec:analysis}, we show our data analysis and spectral fitting. The inferred results are then presented in Sect. \ref{sec:results}. In Sect. \ref{sec:discussion}, we discuss the wind acceleration mechanism in our two QSOs and compare our results with findings from the literature. Finally, in Sect. \ref{sec:conclusion}, we outline our conclusions.
We adopt a $\Lambda$CDM flat cosmology with $\Omega_{\rm m,0} = 0.27$, $\Omega_{\rm \Lambda,0} = 0.73$ and $H_{\rm 0} =70$ km s$^{-1}$ Mpc$^{-1}$ throughout this work. \section{Description of the observed QSOs} \label{sec:description} \subsection{Selection of targets} \label{sec:sample_sel} Our sample consists of two $z\sim1.5$ multiply lensed QSOs, HS 0810+2554 and SDSS J1353+1138, observed with the Spectrograph for INtegral Field Observations in the Near Infrared (SINFONI, \citeads{2003SPIE.4841.1548E}) at the ESO Very Large Telescope (VLT) Unit Telescope 3 (UT3) within the framework of the program 0102.B-0377(A) (PI: G. Cresci). These objects were specifically selected as they are known to host UFOs (\citeads{2016ApJ...824...53C}; \citealt{Chartas.high-z.UFOs}) and to be at redshifts ($z\sim1.5$ and $z\sim1.6$ for HS 0810+2554 and SDSS J1353+1138, respectively) such that the optical rest-frame emission (tracing ionised outflows) falls in the range of wavelengths observed by SINFONI, namely, in the near-IR \textit{J}-band ($\lambda \sim 1.1-1.4$ $\mu$m). Consequently, this selection in redshift corresponds to studying objects at epochs close to the peak of AGN activity ($z\sim2$). In total, there are fourteen high redshift ($z > 1$) QSOs with UFO detection, of which seven are found in the literature (among them, HS 0810+2554; see \citeads{2018A&A...610L..13D} for an updated list), while the rest have not yet been published (including SDSS J1353+1138; \citealt{Chartas.high-z.UFOs}).
These are among the brightest QSOs at high redshift in terms of 2--10 keV luminosity ($L_{2-10~\text{keV}}>10^{45}$ erg s$^{-1}$, except for PID352 with $L_{2-10~\text{keV}}\sim10^{44}$ erg s$^{-1}$); this is either because they are intrinsically luminous (of the total 14-QSO sample, only HS 1700+6416 and PID352; \citeads{2012A&A...544A...2L, 2015A&A...583A.141V}) or because they are subject to gravitational lens magnification (including APM 08279+5255, PG1115+080, H1413+117, HS 0810+2554 and MG J0414+0534\footnote{Here, we list only the sources with published results on UFO detection, but also the remaining unpublished objects are known to be gravitationally lensed \citep{Chartas.high-z.UFOs}.}; \citeads{2002ApJ...573L..77H, 2003ApJ...595...85C, 2007ApJ...661..678C, 2009NewAR..53..128C, 2016ApJ...824...53C, 2018A&A...610L..13D}). Thus, these objects deliver high quality X-ray spectra which clearly exhibit UFO absorption features, in spite of their high redshift ($z>1$). Amongst this $z>1$ sample, APM 08279+5255 ($z\sim3.9$; \citeads{2002ApJ...573L..77H}) has been the only $z>1$ QSO known to host a large-scale (molecular) outflow \citepads{2017A&A...608A..30F}; hence, it is the only object for which a connection between the UFO and a galaxy-scale outflow has been put forward thus far. Therefore, with the detection of large-scale ionised outflows in HS 0810+2554 and in SDSS J1353+1138, this work will extend the number of $z>1$ QSOs for which the connection between nuclear and large scales has been assessed. In the following, we provide a short description of HS 0810+2554 and SDSS J1353+1138, with their main properties listed in Table \ref{tab:sample}.
\begin{table*}[htp] \centering \begin{tabular}{c|cccc} \hline \hline Target name & $\alpha$(J2000) & $\delta$(J2000) & $z^{\rm a}$ & scale \\ \hline HS 0810+2554 & $08^{\rm h}13^{\rm m}31^{\rm s}.3$ & $+25^{\circ}45'03''$ & $1.508\pm0.002$ & 8.67 kpc/$''$\\ SDSS J1353+1138 & $13^{\rm h}53^{\rm m}06^{\rm s}.34$ & $+11^{\circ}38'04''.7$ & $1.632\pm0.002$ & 8.69 kpc/$''$\\ \hline \end{tabular}% \caption{{\small Properties of our two-QSO sample. $^{\rm a}$Redshifts are measured from the [O III] systemic component in integrated spectra extracted from the nuclear region of both sources (Sects. \ref{sec:BLR_fit} and \ref{sec:double_BLR}).}} \label{tab:sample} \end{table*}% \subsubsection{HS 0810+2554} \label{sec:HS} HS 0810+2554 is a radio-quiet, narrow absorption line (NAL; FWHM$~\lesssim500$ km s$^{-1}$) QSO at $z\sim1.5$, which was discovered by \citetads{2002A&A...382L..26R}. It consists of four lensed images in a typical fold lens configuration with the two southern, brightest images in a merging pair configuration (A+B), as shown in the \textit{HST} image in Fig. \ref{fig:opt_images} (\textit{left} panel). The lens galaxy (labelled with G) is detected in the \textit{HST} image, and its redshift is estimated to be $z_{\rm l}\sim0.89$ from the separation and the redshift distribution of existing lenses \citepads{2011ApJ...738...96M}. Quadruply lensed QSOs occur in strong gravitational lensing regimes (e.g. \citeads{1996astro.ph..6001N}), when the compact and bright UV accretion disk and X-ray corona emission regions overlap the lens caustics. This leads to high magnification factors, whose values strongly depend on the image and lens positions. As a consequence, a small change in the input parameters to the lens models (image and lens positions) can lead to a significant change in the image magnifications.
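The kpc/$''$ scales of Table \ref{tab:sample} follow from the adopted flat $\Lambda$CDM cosmology through the angular diameter distance $D_{\rm A}(z)=\frac{c}{H_0(1+z)}\int_0^z dz'/E(z')$, with $E(z)=\sqrt{\Omega_{\rm m,0}(1+z)^3+\Omega_{\rm \Lambda,0}}$. A self-contained numerical sketch, agreeing with the tabulated values to within $\sim$0.01 kpc/$''$:

```python
import math

C_KMS, H0, OM, OL = 299792.458, 70.0, 0.27, 0.73
ARCSEC = math.pi / (180 * 3600)        # radians per arcsecond

def kpc_per_arcsec(z, n=2000):
    """Proper kpc subtended by 1 arcsec at redshift z (flat LCDM)."""
    E = lambda zz: math.sqrt(OM * (1 + zz) ** 3 + OL)
    h = z / n
    # trapezoidal integral of dz'/E(z') for the comoving distance [Mpc]
    I = (0.5 / E(0) + 0.5 / E(z) + sum(1 / E(i * h) for i in range(1, n))) * h
    d_a = (C_KMS / H0) * I / (1 + z)   # angular diameter distance [Mpc]
    return d_a * 1e3 * ARCSEC          # kpc per arcsec

# HS 0810+2554 (z = 1.508) and SDSS J1353+1138 (z = 1.632), both ~8.7 kpc/arcsec
scales = [kpc_per_arcsec(z) for z in (1.508, 1.632)]
```

The near-equality of the two scales reflects the flatness of $D_{\rm A}(z)$ around its maximum at these redshifts.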
For HS 0810+2554, estimates of the magnification factor $\mu$ in different spectral bands are found in the literature, in particular for the X-ray ($\mu \sim103$; \citeads{2016ApJ...824...53C}), optical ($\mu \sim120$; \citeads{2020MNRAS.492.5314N}), and radio emission ($\mu \sim25$; \citeads{2015MNRAS.454..287J}). HS 0810+2554 was singled out as an exceptionally X-ray bright lensed object during an X-ray survey of NAL-AGN with outflows of UV absorbing material \citepads{2009NewAR..53..128C}. More recent \textit{Chandra} and \textit{XMM-Newton} observations \citepads{2016ApJ...824...53C} provided definitive proof of the presence of a highly ionised, relativistic wind in the source nuclear region. The strongly blueshifted absorptions of highly-ionised metals (i.e. Fe XXV, Si XIV) indicate that the outflow velocity components are within the range of $0.1-0.4~c$. The VLT/UVES spectrum of HS 0810+2554 also shows blueshifted absorptions of the C IV and N V doublets, indicating the existence of UV absorbing material moving with an outflowing speed of $v_{\rm C IV}=19,400$ km s$^{-1}$ \citepads{2014ApJ...783...57C,2016ApJ...824...53C}. Even though it is classified as a radio-quiet object, VLA observations at 8.4 GHz \citepads{2015MNRAS.454..287J} indicate that HS 0810+2554 hosts a radio core, thus demonstrating that it is not radio-silent. HS 0810+2554 was also recently observed with ALMA in the mm-band \citepads{2020MNRAS.496..598C}. The analysis of ALMA data has shown the tentative detection of high-velocity clumps of CO(J$=$3$\rightarrow$2) emission, suggesting the presence of a massive molecular outflow on kpc-scales.
With our characterisation of the ionised outflow in HS 0810+2554, we now have, for the first time ever, a three-phase description of an AGN-driven wind at high redshift, from the nuclear to the galaxy scale: the highly-ionised (on nuclear scales), ionised, and neutral molecular phases (on galaxy scales), thanks to a broadband spectral coverage ranging from the X-ray to the optical and mm bands. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{opt_images_pdf}} \caption{{\small Lensed images of HS 0810+2554 (\textit{left}) and SDSS J1353+1138 (\textit{right}). \textit{Left:} \textit{HST} ACS F555W image of HS 0810+2554 showing the four magnified images of the background quasar in fold lens configuration: the C and D images are spatially resolved, while the pair A+B is blended together. At the centre, we can see the emission from the foreground lens galaxy. \textit{Right:} \textit{V} and \textit{H}-band images of SDSS J1353+1138 taken at the UH88 telescope (\textit{upper} panels) and corresponding images after the subtraction of A and B components (\textit{lower} panels), clearly showing the lens galaxy (component G). Image from \citetads{2006ApJ...653L..97I}.}} \label{fig:opt_images} \end{figure} \subsubsection{SDSS J1353+1138} \label{sec:J} Unlike HS 0810+2554, which has been widely observed in several spectral bands, SDSS J1353+1138 has been less intensively studied, as its discovery is more recent \citepads{2006ApJ...653L..97I}. This object was selected from the Sloan Digital Sky Survey (SDSS) as a candidate double lensed QSO at $z\sim1.6$ and was confirmed as such by \citetads{2006ApJ...653L..97I} during University of Hawai'i 88-inch Telescope (UH88) follow-up observations, which provided \textit{V}, \textit{R}, \textit{I,} and \textit{H}-band images of the source. The two lensed images are well-distinguishable (see the \textit{right} panel in Fig. \ref{fig:opt_images}), with an angular separation of $\Delta \sim 1.40''$ \citepads{2006ApJ...653L..97I}.
More recently, on 2016 January 13, SDSS J1353+1138 was observed with \textit{XMM-Newton}. The analysis of the X-ray spectrum \citep{Chartas.high-z.UFOs} revealed a significant absorption at $\sim6.8$ keV (consistent with Fe XXV), indicating the presence of a $\sim0.31c$ UFO. \subsection{SINFONI observations and data reduction} SINFONI observations of HS 0810+2554 and SDSS J1353+1138 were carried out on two different nights in February and March 2019, respectively, in the near-IR \textit{J}-band ($\lambda \sim 1.1-1.4$ $\mu$m) and with a spectral resolution $R=2000$. The observations were performed in seeing-limited mode\footnote{The SINFONI adaptive optics module (AO-mode) was not available at the time of the observations, since SINFONI had been moved from UT4 to UT3 for the last few months of its research activity.}, using the $0.250''\times0.125''$ pixel scale, which provides a total field of view (FOV) of $8''\times8''$, essential for mapping the gas dynamics on galaxy scales. The airmasses are different for each target, spanning the ranges of $\sim1.7-1.9$ and $\sim1.2-1.3$ during the observations of HS 0810+2554 and SDSS J1353+1138, respectively. The data were obtained in eight and sixteen integrations of 300s each, for a total of 40 min for HS 0810+2554, and 80 min for SDSS J1353+1138. During each observing block, an ABBA pattern was followed: the target was put alternatively in two different positions of the FOV about $4.3''$ apart, to perform the sky subtraction through a nodding technique. A dedicated star observation to measure the point-spread-function (PSF) was not available in either case, but the estimated angular resolution is $\sim$0.7$''$ ($\sim$0.8$''$) for HS 0810+2554 (SDSS J1353+1138), based on the measured extent of the spatially unresolved BLR emission (see Sect. \ref{sec:test_spatially_res}). Finally, a standard B-type star for telluric correction and flux calibration was observed shortly before or after the on-source exposures.
We used the ESO-SINFONI pipeline (v. 3.2.3) to reduce the SINFONI data. Before flux calibration and co-addition of single exposure frames, we corrected for atmospheric dispersion effects, which consist of a significant displacement of the AGN continuum emission across the FOV of both sources. This is a consequence of the differential atmospheric dispersion at different wavelengths, which means that the measured spectra are not ‘straight’ as expected. In practical terms, as the wavelength increases, an increasingly larger fraction of the emission gets deposited into adjacent pseudo-slits, producing a coherent spatial shift of the target as a function of $\lambda$. Additionally, possible flexures in the instrument may contribute to producing the observed shift. In order to limit the impact of these optical distortions, we spatially aligned the emission centroid channel-by-channel in each single-exposure, sky-subtracted cube by adopting the following procedure. For each cube, we first determined, in every spectral channel, the average position of the emission centroid on the FOV through a 2D-Gaussian fit. Then we calculated the shift of the centroid mean position with respect to the centroid position in the first spectral channel, assumed as the reference channel. In both spatial directions on the FOV we found an increasing trend of the shift with increasing wavelength, which we modelled with a second-degree polynomial in order to be insensitive to spikes due to noisier channels. The total spatial shift observed from the bluest to the reddest spectral channel spanned the range of $\sim0.5-1$ pixel among the various single-exposure cubes of both targets. As the spatial shifts were fractional in units of pixels, we adopted the Drizzle algorithm \citep{drizzle_book, drizzle_book_bis} to perform the optimised alignment of every spectral channel in each raw single-exposure data cube. Finally, we performed the flux calibration and the co-addition of the single-exposure cubes.
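The channel-by-channel realignment can be sketched in three steps: measure each channel's emission centroid, model its drift with wavelength by a second-degree polynomial, and resample each channel by the fitted fractional shift. The sketch below substitutes a flux-weighted centroid and linear interpolation for the 2D-Gaussian fit and the Drizzle resampling used in the actual reduction:

```python
import numpy as np

def centroid(img):
    """Flux-weighted centroid (y, x) of a 2D channel image."""
    ys, xs = np.indices(img.shape)
    tot = img.sum()
    return (ys * img).sum() / tot, (xs * img).sum() / tot

def shift_image(img, dy, dx):
    """Resample img so that a feature at (y+dy, x+dx) lands at (y, x),
    using separable linear interpolation (a stand-in for Drizzle)."""
    ny, nx = img.shape
    out = np.array([np.interp(np.arange(nx) + dx, np.arange(nx), row) for row in img])
    out = np.array([np.interp(np.arange(ny) + dy, np.arange(ny), col) for col in out.T]).T
    return out

def align_cube(cube):
    """Re-centre every spectral channel on the channel-0 centroid, with the
    per-channel shift modelled by a 2nd-degree polynomial in channel index."""
    cents = np.array([centroid(ch) for ch in cube])
    chans = np.arange(len(cube))
    dy = np.polyval(np.polyfit(chans, cents[:, 0] - cents[0, 0], 2), chans)
    dx = np.polyval(np.polyfit(chans, cents[:, 1] - cents[0, 1], 2), chans)
    return np.array([shift_image(ch, sy, sx) for ch, sy, sx in zip(cube, dy, dx)])
```

Fitting the polynomial to the measured centroids, rather than shifting each channel by its raw centroid offset, is what suppresses the spikes from noisy channels.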
The final sky-subtracted, flux-calibrated data consist of $100\times72\times2234$ data cubes, hence, each one including more than 7,000 spectra. Each spectrum is sampled by 2234 channels with a $\SI{1.25}{\angstrom}$ channel width and covers the spectral range $\sim1.1-1.4$ $\mu$m, corresponding to about $4400-5600$ $\SI{}{\angstrom}$ rest-frame wavelengths. \subsection{Lens models for the two quasars} \label{sec:lens_model} As both objects are gravitationally lensed QSOs, lens models are required in order to infer the intrinsic (i.e. unlensed) physical properties of the outflow, such as the intrinsic radius and unlensed flux, which are key ingredients for calculating the outflow energetics. Both QSOs are lensed by a foreground elliptical galaxy and detailed lens models for these two objects can be found in the literature. In particular, for HS 0810+2554, there are several lens models reported in literature obtained from observations performed in different spectral bands (e.g. from VLA-radio data in \citeads{2015MNRAS.454..287J}, from ALMA-mm data in \citeads{2020MNRAS.496..598C}). In this work, we adopted, for HS 0810+2554, the most recent model by \citetads{2020MNRAS.492.5314N}, inferred from \textit{HST}-WFC3 IR observations: images and lens galaxy positions have been measured from direct F140W wide imaging (central wavelength $\sim$1392 nm), while slitless dispersed spectra have been provided by the grism G141 (useful range: $1075-1700$ nm). Assuming $z_{\rm l}\sim0.89$ for the lens galaxy \citepads{2011ApJ...738...96M}, \citetads{2020MNRAS.492.5314N} modelled the deflector mass distributions with a singular isothermal ellipsoid (SIE), plus an external shear to account for tidal perturbations from nearby objects. 
Detailed lens models for SDSS J1353+1138 are presented in \citetads{2006ApJ...653L..97I} and \citetads{2016MNRAS.458....2R}, based on imaging observations in the \textit{i}-band with the Magellan Instant Camera (MagIC) on the Clay 6.5m Telescope and in the \textit{K}-band with the Subaru Telescope adaptive optics system, respectively. \citetads{2006ApJ...653L..97I} modelled the lens mass distribution using either a SIE model, or a singular isothermal sphere (SIS) model plus a shear component ($\gamma$), and estimated the lens redshift to be $z_{\rm l}\sim0.3$ based on the Faber-Jackson relation \citepads{1976ApJ...204..668F}. The resulting total magnification factors $\mu$ are 3.81 and 3.75 from the SIS+$\gamma$ and SIE model, respectively. Also assuming $z_{\rm l}\sim0.3$, \citetads{2016MNRAS.458....2R} found slightly lower values for the total magnification: $\mu \sim3.47$ (SIS+$\gamma$), $\mu \sim3.42$ (SIE) and $\mu \sim3.53$ (SIE+$\gamma$). We used all these magnification values from the literature to determine the unlensed flux carried by the ionised outflow in SDSS J1353+1138. Instead, for HS 0810+2554, we were able to provide an estimate of the ionised outflow magnification ($\mu_{\rm out}\sim2$) starting from our data. Such values will be discussed further in Sect. \ref{sec:intrinsic} and Appendix \ref{app:2D_rec}. \section{Data analysis} \label{sec:analysis} \subsection{Fitting procedure} For the spectral analysis of SINFONI data, we adopted the fitting code used in \citetads{2020A&A...644A..15M} to analyse MUSE data of two local QSOs. Here, we adapted the code to also handle SINFONI data, introducing adjustments and new functionalities as required by our data. In the following, we illustrate the basics of our spectral-fitting method, highlighting the required changes for the analysis of our SINFONI data (see \citeads{2020A&A...644A..15M} for the detailed description of the fitting code).
The entire fitting procedure aims at performing the kinematical analysis of the diffuse ionised gas, with a primary focus on the [O III]$\lambda$5007 emission line (hereafter [O III]) that is the optimal tracer of ionised outflows, as previously mentioned. Our strategy consists of the following three key steps. \textit{Phase I.} We built a template model for the bright BLR emission using an integrated, high signal-to-noise ratio (S/N) spectrum. \textit{Phase II.} The BLR template built in phase I was used to map spaxel-by-spaxel the contribution of the BLR to the emission across the entire FOV. The resulting BLR model cube was then subtracted from the data cube. \textit{Phase III.} Finally, we performed, spaxel-by-spaxel, a finer modelling of the faint emission lines produced by the diffuse ionised gas, originating on galactic scales. Hereafter, we refer to the diffuse gas emission lines as ‘narrow’ in order to distinguish them from the typical ‘broad’ emission lines (FWHM $>$ 1000 km s$^{-1}$; e.g. \citeads{1981ApJ...249..462O}) originating from within the dense and highly turbulent BLRs. A single noise value has been associated with each channel in our SINFONI data cubes, computed as the root mean square (rms) of the fluxes extracted spaxel-by-spaxel in a region with no significant emission from the target. The details of the spectral analysis of SINFONI data of both QSOs are provided below. \captionsetup[subfigure]{labelformat=empty} \begin{figure*} \centering \includegraphics[width=17cm,keepaspectratio]{spectra_pdf} \caption{{\small Representative scheme of our fitting procedure. \textit{Top panels:} SINFONI \textit{J}-band spectra of HS 0810+2554 (\textit{left}) and SDSS J1353+1138 (\textit{right}), zoomed in on the spectral region of [O III] and Balmer hydrogen emission lines. Both spectra were extracted using an aperture of $\sim$0.44$''$-radius, centred on the peak of the AGN continuum emission (located on image A in SDSS J1353+1138).
Black dashed lines show the data, while red solid lines show the best-fit models obtained in phase II of our analysis. The latter is partitioned into the contributions of various components: AGN continuum (yellow), BLR emission in Balmer hydrogen emission lines (green) and Fe II (purple), and narrow line emission from the diffuse gas (light blue), represented as a sum of multiple Gaussian components. \textit{Middle panels:} \textit{J}-band spectra extracted from the subtracted cubes, with the same aperture used in the top panels. Subtracted data (black lines) are compared to the best-fit models (red lines) resulting from the finer emission lines (EL) modelling implemented in phase III. The green lines highlight the outflow component alone. \textit{Bottom panels:} Residuals obtained by subtracting the full EL model from the subtracted spectra.}} \label{fig:int_spectra} \end{figure*} \subsubsection{I. Modelling the BLR emission} \label{sec:BLR_fit} The fitting code starts with modelling the bright BLR emission in a spectrum extracted from the nuclear region, while also fitting the other spectral components. The spectral components to be fitted are: the AGN continuum, BLR emission lines, and narrow emission lines from the diffuse gas. In principle, we should also have the stellar continuum emission, but in our data the AGN continuum is entirely dominant. The BLR model is built by the code as the sum of two independent components: the broad Balmer hydrogen emission lines (H$\beta$ in HS 0810+2554, H$\beta$ and H$\gamma$ in SDSS J1353+1138) and several Fe II broad emission lines, which are the two main BLR components in the rest-frame range observed by SINFONI ($\lambda \sim4200-5600$ $\SI{}{\angstrom}$). The diffuse emission instead consists of the [O III] emission doublet and the narrow components of the Balmer hydrogen lines.
For HS 0810+2554, we extracted a high-S/N spectrum from an aperture of 0.3$''$ radius, centred on the observed blended emission of A+B images (see Fig. \ref{fig:opt_images}), and fitted all the previously mentioned components simultaneously. We modelled the AGN continuum through a 1st-degree polynomial. The Fe II emission lines were modelled using the semi-analytic templates of \citetads{2010ApJS..189...15K}, while the BLR component of H$\beta$ was fitted by two broad Gaussian components. The narrow emission lines were fitted through two Gaussian components. Given the complexity of the BLR-H$\beta$ line profile in HS 0810+2554, we additionally associated spatially unresolved residuals from the fit to this component, following the approach detailed in \citetads{2020A&A...644A..15M}. In the case of SDSS J1353+1138, where the two lensed images (A and B in Fig. \ref{fig:opt_images}) are well-distinguishable and spatially resolved, matters were complicated by the fact that we observed a significant change between nuclear spectra extracted from the two distinct images. As a consequence, this prevented us from considering a single BLR template. The procedure followed for SDSS J1353+1138 is described separately in Sect. \ref{sec:double_BLR}. \subsubsection{II. Mapping the unresolved BLR emission across the FOV} \label{sec:tot_fit} As the BLR emission is unresolved in our data, we expect its observed spatial variation to follow the PSF of our observations. Therefore, we allowed the BLR template obtained in phase I to change only in amplitude across the FOV and we proceeded to fit the whole data cubes with the software pPXF (\citeads{2017MNRAS.466..798C}). For the modelling of the narrow emission lines, we used multiple Gaussian components and adopted a statistical approach (a Kolmogorov-Smirnov test) to select, spaxel-by-spaxel, the minimal, optimal number of Gaussian components to aptly reproduce the emission line profiles (see \citeads{2020A&A...644A..15M} for details).
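The statistical selection step can be sketched as a noise test on the fit residuals: a hand-rolled Kolmogorov-Smirnov distance between the residual distribution and a zero-mean Gaussian with the channel noise rms, accepting a model once its residuals are noise-like. This is a simplified stand-in for the exact procedure of \citetads{2020A&A...644A..15M}; the critical value $1.36/\sqrt{N}$ is the standard large-$N$ 5\% approximation:

```python
import math

def ks_normal(residuals, sigma):
    """KS distance between the residual ECDF and N(0, sigma^2)."""
    r = sorted(residuals)
    n = len(r)
    cdf = lambda x: 0.5 * (1 + math.erf(x / (sigma * math.sqrt(2))))
    return max(max(cdf(v) - i / n, (i + 1) / n - cdf(v)) for i, v in enumerate(r))

def accept_model(residuals, sigma, crit_coeff=1.36):
    """Accept the fit if the residuals are consistent with pure noise,
    i.e. KS distance below the ~5% critical value 1.36/sqrt(N)."""
    return ks_normal(residuals, sigma) < crit_coeff / math.sqrt(len(residuals))
```

In use, one would fit 1, 2, then 3 Gaussian components per spaxel and stop at the first model whose residuals pass `accept_model`.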
For both HS 0810+2554 and SDSS J1353+1138, we considered a number of Gaussian components ranging from 1 to 3, finding the latter necessary to reproduce the most complex line profiles. Then we subtracted spaxel-by-spaxel the BLR and AGN continuum emission and, for each galaxy, we created a cube containing only the residual emission lines due to the diffuse gas. In the following, we refer to this cube as the ‘subtracted cube’. \subsubsection{III. Modelling the narrow emission lines} \label{sec:line_fit} In phase III, we focused on the finer modelling of the narrow emission lines that remained after the subtraction of the BLR and AGN continuum emission components. The only significant residual emission in our data is the [O III] emission doublet, while the narrow components of the Balmer hydrogen lines are very weak and marginally resolved. Therefore, we modelled the residual narrow emission lines through multiple Gaussian components, adopting some reasonable constraints: the two emission lines of the [O III] doublet were fitted by imposing the same central velocity and velocity dispersion, with the intensity ratio $I(5007)/I(4959)$ fixed at 3 according to theoretical expectations from atomic physics; whereas, for the weak narrow components of the Balmer hydrogen lines, we assumed the same line profile shape as [O III]$\lambda$5007 and left only the flux as a free parameter. This is a reasonable assumption, as we expect the narrow Balmer hydrogen and [O III] emission lines to come from regions with the same gas kinematics. Similarly to the fit in phase II, we considered a number of Gaussian components ranging from 1 to 3, using, as before, a statistical approach to determine spaxel-by-spaxel the optimal number of components required. 
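The spaxel-by-spaxel selection of the number of Gaussian components can be sketched as follows. This is a minimal illustration on a mock spaxel spectrum with hypothetical noise parameters, not the actual pipeline of \citeads{2020A&A...644A..15M}: components are added until the normalised fit residuals are statistically consistent with Gaussian noise according to a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import kstest

rng = np.random.default_rng(0)

def multi_gauss(v, *p):
    """Sum of Gaussians; p = (amplitude, centre, sigma) repeated."""
    model = np.zeros_like(v)
    for a, c, s in zip(p[0::3], p[1::3], p[2::3]):
        model += a * np.exp(-0.5 * ((v - c) / s) ** 2)
    return model

# Mock spaxel spectrum: narrow core plus a blueshifted broad wing, plus noise
v = np.linspace(-3000.0, 3000.0, 400)               # velocity grid, km/s
noise_rms = 0.02
data = multi_gauss(v, 1.0, 0.0, 150.0, 0.35, -500.0, 600.0)
data = data + rng.normal(0.0, noise_rms, v.size)

def n_components_needed(v, data, noise_rms, n_max=3, alpha=0.05):
    """Smallest n whose normalised residuals pass a KS test against N(0,1)."""
    for n in range(1, n_max + 1):
        # crude initial guesses: components progressively blueshifted and broader
        p0 = []
        for k in range(n):
            p0 += [data.max() / n, -400.0 * k, 200.0 * (k + 1)]
        try:
            popt, _ = curve_fit(multi_gauss, v, data, p0=p0, maxfev=20000)
        except RuntimeError:
            continue                                 # fit failed; try more components
        resid = (data - multi_gauss(v, *popt)) / noise_rms
        if kstest(resid, 'norm').pvalue > alpha:
            return n
    return n_max

print(n_components_needed(v, data, noise_rms))       # components selected for this mock
```

For this mock two-component profile, a single Gaussian leaves strongly non-Gaussian residuals and is rejected, so more than one component is selected.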
In both QSOs, most of the line profiles are well reproduced by two Gaussian components: a narrow, bright component close to the systemic velocity, plus a broad blueshifted component to reproduce the [O III] blue wing observed in most of the FOV, which we identify with approaching outflow emission. In HS 0810+2554, the 3-Gaussian fit was selected in some spaxels to properly reproduce the faint but still visible red wing in the [O III] line profile, tracing the fainter outflow component receding from us. In contrast, in SDSS J1353+1138, two Gaussian components are sufficient to reproduce the most complex line profiles, as we do not detect the [O III] red wing anywhere. In order to study the physical and dynamical properties of the outflow emission, which is the main focus of this work, we had to properly identify the [O III] emission due to the high-speed outflowing gas by disentangling its contribution from that due to the bulk motion of the gas within the host galaxy. To do so, we adopted the same selection criterion used by \citetads{2020A&A...644A..15M}. For each Gaussian component used to reproduce the [O III] line profile in a given spaxel, we focused on the fraction of the total line flux contained in the line wings with a velocity shift $|v-v_{\rm peak}|$ larger than a certain threshold width $w_{\rm th}$, where $v_{\rm peak}$ is the peak velocity of the line in each spaxel. If the fraction of total flux in the line wings was higher than a given threshold $\tau$, the Gaussian component was classified as a possible ‘outflow’ component, to be confirmed by the following kinematical analysis (Sect. \ref{sec:kin_analysis}); otherwise, it was classified as a ‘narrow’ component, due to systemic gas motions in the host galaxy. We verified the decomposition in several representative spaxels to select the optimal threshold values. 
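For a Gaussian component, the wing-flux fraction entering this criterion has a closed form in terms of the normal CDF. The following sketch uses illustrative component parameters, with threshold values matching those adopted for HS 0810+2554:

```python
from scipy.stats import norm

def wing_flux_fraction(v_cen, sigma, v_peak, w_th):
    """Fraction of a Gaussian component's flux at |v - v_peak| > w_th."""
    blue = norm.cdf(v_peak - w_th, loc=v_cen, scale=sigma)  # flux blueward of the window
    red = norm.sf(v_peak + w_th, loc=v_cen, scale=sigma)    # flux redward of the window
    return blue + red

def classify(v_cen, sigma, v_peak=0.0, w_th=300.0, tau=0.5):
    """Label a component 'outflow' if more than tau of its flux lies in the wings."""
    return 'outflow' if wing_flux_fraction(v_cen, sigma, v_peak, w_th) > tau else 'narrow'

# Bright systemic core vs. broad blueshifted component (illustrative values, km/s)
print(classify(v_cen=0.0, sigma=100.0))     # prints: narrow
print(classify(v_cen=-400.0, sigma=500.0))  # prints: outflow
```

A narrow systemic component keeps essentially all its flux within $\pm w_{\rm th}$ of the peak, while a broad blueshifted component places most of its flux in the wings and is flagged as a candidate outflow.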
In HS 0810+2554, we used $\tau=0.5$ and $w_{\rm th}=300$ km s$^{-1}$: the Gaussian component reproducing the narrowest, brightest emission near the systemic velocity was typically classified as narrow, while any additional Gaussian component used to model either the blue or the red wing of the [O III] profile was identified as an outflow component. In SDSS J1353+1138, in order to obtain the expected classification, we slightly relaxed the width threshold ($w_{\rm th}=250$ km s$^{-1}$). Figure \ref{fig:int_spectra} summarises our strategy. The top panels show \textit{J}-band spectra of HS 0810+2554 and SDSS J1353+1138, extracted from the SINFONI data cubes with an aperture of $\sim$0.44$''$ radius, centred on the peak of the observed emission (located on image A in SDSS J1353+1138). The best-fit models shown were obtained in phase II. Since no distinction between possibly broad outflow and narrow systemic components was made at this stage of the procedure, the diffuse gas model is plotted as the sum of multiple components. We notice that in HS 0810+2554, there is a faint, broad emission line at $\sim4700-4750$~\si{\angstrom} (rest-frame) present in the Fe II templates, which is not present in our data. However, this does not affect the overall fitting procedure, as the observed Fe II emission is reproduced well by the templates at all the other wavelengths. The middle panels show the spectra extracted from the subtracted cubes using the same aperture as above, along with the results from the finer multi-Gaussian fit of the diffuse gas emission lines (phase III). In both QSO-subtracted spectra, the [O III] line profile exhibits a prominent, asymmetric blue wing that is already visible in the full spectra shown above. This strongly suggests the presence of high-speed outflowing material moving towards the observer, which we discuss in greater detail in Sect. \ref{sec:kin_analysis}. 
\subsection{Modelling a ‘double’ BLR in SDSS J1353+1138} \label{sec:double_BLR} \begin{figure} \begin{subfigure}{1.0\textwidth} \includegraphics[width=0.45\columnwidth, keepaspectratio]{J_full_1_new} \caption{{\small}} \label{fig:J_blr_1} \end{subfigure} \newline \begin{subfigure}{1.0\textwidth} \includegraphics[width=0.45\columnwidth, keepaspectratio]{J_full_2_new} \caption{{\small}} \label{fig:J_blr_2} \end{subfigure} \caption{{\small Best-fit models of the nuclear spectra of SDSS J1353+1138, extracted from an aperture of 0.3$''$ radius centred on image A (upper panel) and on image B (lower panel). The various spectral components, the total model, and the data are represented with different colours (see the plot legend). In spectrum A, a broken power law distribution is well suited to reproduce the BLR-H$\beta$ profile, while in spectrum B, an additional broad Gaussian component was required to adequately reproduce the broad peak in the BLR-H$\beta$ line profile, which entirely dominates over the barely detected H$\beta$ narrow component (solid light-blue line). The two spectra differ mostly in the lack of [O III] detection and in the presence of a prominent blue wing in the H$\beta$ line profile in spectrum B.}} \label{fig:J_blr} \end{figure} As noted in Sect. \ref{sec:BLR_fit}, for SDSS J1353+1138, we found a significant change in the spectral shape within the wavelength range including the H$\beta$ and [O III] lines when comparing the nuclear spectrum of the brighter image A (spectrum A) with that of the fainter image B (spectrum B), shown in the upper and lower panels of Fig. \ref{fig:J_blr}, respectively. In particular, while the [O III] emission lines are easily identified in the former, we did not detect any counterpart in the latter. Moreover, the H$\beta$ line profile in spectrum B is broader, with an evidently brighter blue wing. 
Both effects are likely due to an overall increase in the Fe II emission in image B, as the H$\beta$ line width is not expected to intrinsically vary between different lensed images. The anti-correlation between Fe II and [O III] emissions in AGN spectra reflects a well-known effect known as Eigenvector-1 \citepads{1992ApJS...80..109B} and represents one of the most frequent differences among AGN properties. Even though it has been the subject of many studies, a clear physical understanding of its origin is still lacking. \citetads{1992ApJS...80..109B} suggest that high column densities in the BLR enhance Fe II, while reducing the ionising radiation able to reach the NLR. In a spectral analysis of AGN principal components in SDSS, \citetads{2009ApJ...706..995L} argue instead that the covering factor of the NLR is the likely cause of the range in [O III] strength, while \citetads{2009ApJ...707L..82F} suggest that the higher column densities required for the infall in more luminous AGNs would additionally account for the observed correlation of Fe II strength with $L/L_{\rm Edd}$. In spite of its still unclear origin, there are two main possible explanations for the observed significant variation in the Fe II emission between the two lensed images. The first one is based on the typically short time scales (i.e. days to weeks; e.g. \citeads{2000ApJ...533..631K}) on which the BLR is seen to vary. Because of the different paths followed by the light from the background QSO, the two lensed images are produced with a time delay of about 16 days \citepads{2006ApJ...653L..97I}. This temporal shift is comparable to the typical BLR variability timescale and could therefore be sufficient to produce a significant change in the BLR emission, explaining the effect we observed. Given the short time scale probed here, this could carry interesting implications for the accretion variations in the AGN and the consequent response of the BLR gas. 
Alternatively, the observed variation could be the consequence of microlensing effects (e.g. see \citeads{2020MNRAS.492.5314N}) produced by either single stars or low-mass dark matter halos intervening along the line of sight. Microlensing typically affects only the emission originating on small scales, while the emission from the NLR, arising on much larger scales, is unaffected. Of the two possibilities, the latter seems less likely, as we do not observe any significant counterpart variation in the strength of the BLR-H$\beta$ component, in addition to that observed in the Fe II strength. However, a remarkable simultaneous variation in both H$\beta$ and Fe II strength would be expected only if the two emissions were strictly co-spatial, whereas we know that the BLR is stratified and that microlensing magnifies the emission from the most compact regions more strongly. Therefore, we do not exclude the microlensing hypothesis. A detailed analysis of the different BLR spectra from the two images is beyond the scope of this work and will be presented in a forthcoming paper. Here, we focus on how we accounted for this effect during the spectral analysis. Consequently, in SDSS J1353+1138, we extracted two distinct nuclear spectra, namely, spectra A and B, shown in Fig. \ref{fig:J_blr}, using a 0.3$''$ radius aperture centred on the emission peak of each lensed image. We proceeded to fit them separately. In both spectra, we used a first-degree polynomial to model the AGN continuum, which is still dominant over the stellar continuum. Unlike in the BLR modelling of HS 0810+2554, a multiple-Gaussian fit was not sufficient to reproduce the broader and more complex profiles of the broad Balmer emission lines, especially the H$\beta$ line in spectrum B. In fact, even though both prominent H$\beta$ wings are likely due to the Fe II emission, as discussed above, the Fe II templates employed by the code were not able to reproduce the observed emission. 
Therefore, we modelled both wings as part of the broad H$\beta$ line profile without any focus on their physical interpretation, as we were simply interested in identifying the overall BLR spectrum in order to remove it. For the modelling of the broad Balmer emission lines observed in SDSS J1353+1138 (i.e. H$\beta$ and H$\gamma$), we used a broken power law distribution convolved with a Gaussian profile \citepads{2006A&A...447..157N}: \begin{equation} \label{eq:broke_powlaw} F(\lambda) = \begin{cases} F_0 \times \left( \frac{\lambda}{\lambda_0} \right )^{+\alpha}, &\text{for }\ \lambda < \lambda_0 \\ F_0 \times \left( \frac{\lambda}{\lambda_0} \right )^{-\beta}, &\text{for }\ \lambda > \lambda_0 \end{cases} ,\end{equation} where the free parameters of the fit, for each line, are the central wavelength $\lambda_0$, the two power-law indices $\alpha$ and $\beta$, the normalisation $F_{\rm 0}$, and the width $\sigma$ of the Gaussian kernel. In spectrum A (upper panel), the H$\beta$ and H$\gamma$ lines were modelled separately through the line profile described in Eq. (\ref{eq:broke_powlaw}). Spectrum B (lower panel) required instead an additional broad Gaussian component to suitably reproduce both the more extended red wing and the broad peak ($\sigma \sim 800$ km s$^{-1}$) in the H$\beta$ profile. Moreover, we had to constrain the H$\gamma$ line profile through that of H$\beta$. Different Fe II templates were selected in the best-fit BLR models for the two nuclear spectra. To model the narrow emission line profiles, we used three Gaussian components in spectrum A, while in spectrum B we used a single Gaussian component, since we did not actually observe any narrow component. Then we proceeded to fit the whole data cube following the procedure described in Sect. 
\ref{sec:tot_fit}, with the main difference that in each spatial pixel, pPXF considered a linear combination of the two BLR models, weighting their relative contributions and providing their most suitable combination as the BLR model in that specific spaxel. In general, in spaxels close to one of the lensed images, we recovered the BLR model obtained directly from the modelling of the respective nuclear spectrum, while in spaxels located roughly between the two images we obtained a combination of the two BLR models, as expected. \subsection{Testing the spatially resolved emission of the ionised outflows} \label{sec:test_spatially_res} Before analysing the outflow kinematics across the FOV, we tested whether the emission of the detected ionised outflows was spatially resolved. This is crucial for the calculation of the outflow energetics. As our observations lack a dedicated PSF star, we compared the spatial extent of the [O III] outflow emission with that of the BLR emission (both obtained from the previous spectral modelling). In fact, given that the latter is unresolved in our data, it is suitable for reproducing the instrumental response. We fitted a 2D Gaussian profile to the BLR flux map, obtained by integrating our BLR model in wavelength, in order to estimate the angular resolution of our seeing-limited observations. The resulting best-fit Gaussian profiles are not circularly symmetric, especially in the case of HS 0810+2554, whose profile is elongated in the NW-SE direction. Such an elongation is mostly due to lens stretching effects and to the blending of the A and B images, rather than to a possible intrinsic asymmetry of the PSF. Therefore, for both sources we took the angular size of the minor axis of the best-fit Gaussian profile as representative of the true PSF extent, as this is oriented along the direction where the lens stretching effects are expected to be minimal. 
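A minimal sketch of this PSF estimate, assuming a mock BLR flux map with a hypothetical pixel scale and elongation: an elliptical 2D Gaussian is fitted to the map and the minor-axis FWHM is taken as the PSF size.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, theta):
    """Elliptical 2D Gaussian with position angle theta, flattened for curve_fit."""
    x, y = coords
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return (amp * np.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2))).ravel()

# Mock BLR flux map, elongated by lens stretching and image blending (0.05"/pixel)
pix = 0.05
y, x = np.mgrid[0:64, 0:64] * pix
truth = (1.0, 1.6, 1.6, 0.35, 0.15, np.radians(135.0))  # sigma_major > sigma_minor, arcsec
blr_map = gauss2d((x, y), *truth).reshape(64, 64)

# Initial guesses, e.g. from image moments of the map
p0 = (1.0, 1.6, 1.6, 0.3, 0.2, np.radians(120.0))
popt, _ = curve_fit(gauss2d, (x, y), blr_map.ravel(), p0=p0)

# Minor-axis FWHM, taken as representative of the true PSF extent
fwhm_minor = 2.0 * np.sqrt(2.0 * np.log(2.0)) * min(abs(popt[3]), abs(popt[4]))
print(round(fwhm_minor, 3))   # arcsec
```

Taking `min` of the two fitted widths makes the estimate insensitive to which axis the fit labels as major.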
For HS 0810+2554 and SDSS J1353+1138, we estimated a PSF FWHM ($\theta_{\rm res}$) of 0.7$''$ and 0.8$''$, respectively. These NIR values are slightly smaller than the optical seeing measurements obtained with the differential image motion monitor (DIMM) during the observations, namely, $0.9''$ and $1.0''$, respectively \citepads{cite-key}. \begin{figure*} \centering \includegraphics[width=17cm,keepaspectratio]{test_resolved_pdf_real} \caption{{\small Spatially resolved ionised outflows in HS 0810+2554 (top) and in SDSS J1353+1138 (bottom). \textit{Left panels:} Maps of the ratio between the [O III] outflow flux and the BLR flux (both from best-fit models). Coloured pixels refer to S/N $\gtrsim$ 2 (S/N $\gtrsim$ 3) on the full [O III] emission line (i.e. narrow + outflow components) for HS 0810+2554 (SDSS J1353+1138). The positions of the continuum emission peaks of the lensed images are marked with white ‘+’, while the dotted white lines indicate the contour levels of the BLR emission at 75\%, 50\%, and 25\% of its peak; for SDSS J1353+1138, the 90\% level is also shown. \textit{Right panels:} Normalised intensity profiles along the pseudo-slit (black dotted-dashed lines in the ratio map) and in circular annuli of increasing radius for HS 0810+2554 (top) and SDSS J1353+1138 (bottom), respectively: BLR model (red lines), [O III] outflow model (dotted blue lines), and [O III] blue wing from data (cyan lines). Blue points represent the ratio of the [O III] outflow flux over the BLR flux and refer to the right-hand logarithmic scale. The dashed black line in the plot of SDSS J1353+1138 corresponds to the radial distance of the centre of image B.}} \label{fig:test_resolved} \end{figure*} To test whether the detected ionised outflows are spatially resolved, we adopted the following procedure. First, we created the flux maps for the [O III] outflow and BLR components. 
Then we calculated spaxel-by-spaxel the ratio of the [O III] outflow ([O III]$_{\rm out}$) flux over the BLR flux, and produced maps reporting the flux-ratio values across the FOV. The ratio maps obtained for both QSOs are shown in the left panels of Fig. \ref{fig:test_resolved}. The increasing trend of the [O III]$_{\rm out}$-to-BLR ratio with distance from the emission peak indicates that the [O III] outflows are spatially resolved in both QSOs. Moreover, the ratio map of HS 0810+2554 highlights the existence of a preferred NE-SW direction along which the highest ratio values are found. Such a direction is perpendicular to the blending direction of images A+B. Unlike in HS 0810+2554, the two lensed images of SDSS J1353+1138 are spatially well resolved and not affected by significant lens stretching effects. As a consequence, the [O III]$_{\rm out}$-to-BLR ratio map shows an isotropic pattern of increasing ratio values moving outwards from the centre of image A (we recall that we detected the [O III] emission only from this image, as previously discussed in Sect. \ref{sec:double_BLR}). The trends discussed above on the basis of the 2D ratio maps can be better appreciated using spatial profiles. We determined the spatial profiles of the [O III] outflow and BLR emissions, as well as of their ratio values, and studied their variation with increasing distance from the peak of the overall emission. In HS 0810+2554, since the [O III] emission is preferentially located along the NE-SW direction, we defined a pseudo-slit in this direction ($\theta\sim130^{\circ}$) with a width of five spaxels, along which we calculated the emission spatial profiles. In contrast, given the isotropic pattern of the whole emission in SDSS J1353+1138, we determined the spatial profiles in circular annuli of increasing radial distance from the centre of image A (lower panel). All the spatial profiles thus obtained are shown in the right panels of Fig. \ref{fig:test_resolved}. 
Each one has been normalised to its own $0''$-value, that is, to the value at the peak position of the overall emission; in the case of the [O III] outflow and BLR profiles, the $0''$-value also corresponds to their own peak value. This further confirms our previous conclusion that the [O III] outflows are spatially resolved in both galaxies, since the [O III] outflow profiles are broader than the respective BLR profiles. The only exception occurs in SDSS J1353+1138, at about 1.5$''$ from the centre, where we can observe a clear bump in the BLR emission profile: this is due to the flux contribution from image B and, therefore, does not affect our previous conclusion. As a further test, in addition to the profile obtained from our fitting procedure, we determined the [O III] outflow profile by collapsing the spectral channels of the subtracted-data cube that include the blue wing of the [O III] line profile ($4976-5000$~\si{\angstrom} and $4970-4996$~\si{\angstrom} for HS 0810+2554 and SDSS J1353+1138, respectively). The two [O III] outflow profiles, from the fit (dotted blue line) and from the spectrally integrated subtracted data (solid cyan line), agree very well. In order to estimate the angular extent of the ionised outflows on the image plane\footnote{Still to be corrected for lens effects.}, we focused on the ratio values of the [O III] outflow flux over the BLR flux. These are plotted in logarithmic scale (on the right-hand side of the plots), after having been rescaled to 1 in the central pixel. In this way, we can easily identify values higher than 1 as regions producing a significant [O III] outflow emission and, hence, determine the spatial extent of the resolved ionised outflows. The associated error bars were computed by propagating the uncertainties on the [O III] and BLR fluxes in the spatial pixels involved. 
These were computed by propagating the error (mostly due to the noise) associated with the spectral channels over which we integrated to obtain the total flux contained in each spatial pixel. We took the maximum distance including only ratio values not consistent with 1 as both the radius within which to spatially integrate the flux of the [O III] outflow component and the observed outflow radius ($R_{\rm obs}$), still to be corrected for the SINFONI PSF and lens stretching effects. To correct for the PSF smearing, we applied the correction $R_{\rm PSF}= \sqrt{R^2_{\rm obs}-(\theta_{\rm res}/2)^2}$, where $R_{\rm PSF}$ is the PSF-corrected radius, $R_{\rm obs}$ is the radius observed in the image plane, and $\theta_{\rm res}$ is our seeing estimate (0.7$''$ and 0.8$''$ for HS 0810+2554 and SDSS J1353+1138, respectively). In HS 0810+2554, we spatially integrated the [O III] outflow flux up to $R_{\rm obs}=1.25''$, finding a total observed flux of $(3.73\pm0.05)\times10^{-15}$ erg s$^{-1}$ cm$^{-2}$ and $R_{\rm PSF}\sim1.2''$. In SDSS J1353+1138, we took $R_{\rm obs}=1.13''$ and assessed a total observed flux of $(8.6\pm0.6)\times10^{-16}$ erg s$^{-1}$ cm$^{-2}$ and $R_{\rm PSF}\sim1.06''$\footnote{The estimates for $R_{\rm PSF}$ are provided here with no uncertainty. We evaluate the error on the intrinsic outflow radius in Sect. \ref{sec:intrinsic}, after correcting for the lensing effects.} for the [O III] outflow. In Sect. \ref{sec:intrinsic}, we account for the lens effects and estimate the intrinsic extent (by correcting for stretching effects) and unlensed flux (by correcting for magnification effects) of the ionised outflows in HS 0810+2554 and in SDSS J1353+1138. \section{Results} \label{sec:results} \subsection{Distribution and kinematics of the ionised gas} \label{sec:kin_analysis} The main purpose of this work is to map the kinematics of the [O III] emission, with a primary focus on the outflow component. 
Figure \ref{fig:kin_maps} shows a global overview of the distribution and kinematics of the ionised gas resulting from the modelling of the [O III] emission line. The moment-0 (intensity field), moment-1 (velocity field), and moment-2 (dispersion field) maps for the narrow and the outflow components are shown separately in order to better trace their distinct spatial and velocity distributions. All maps have been produced reporting only spatial pixels with S/N equal to or higher than 2 for HS 0810+2554 and 3 for SDSS J1353+1138. The candidate [O III]-outflow component extends up to large distances from the galaxy centre in both QSOs, and it stands out for its strongly blueshifted velocities and high velocity dispersion values ($|v|\gtrsim500$ km s$^{-1}$ and $\sigma \gtrsim 600$ km s$^{-1}$, respectively; see the moment-1 and moment-2 maps in Fig. \ref{fig:kin_maps} relative to the outflow component). Such velocity dispersions are well above the values measured in typical star-forming systems at these redshifts ($\sigma \sim 100$ km s$^{-1}$, e.g. \citeads{2009ApJ...697..115C,2009ApJ...697.2057L}) and, along with the overall blueshifted motion, they provide clear evidence for large-scale outflows in these galaxies. Moreover, while in HS 0810+2554 the outflow and the narrow components have almost the same intensity across the FOV, we note that in SDSS J1353+1138, the [O III] outflow emission is brighter than the [O III] narrow emission produced by the bulk of the gas of the host galaxy. For the narrow component, which is expected to trace mostly systemic galactic motions, we obtained low velocity and velocity-dispersion values ($|v|\lesssim50$ km s$^{-1}$, $\sigma \lesssim200$ km s$^{-1}$ for HS 0810+2554, and $|v|\lesssim100$ km s$^{-1}$, $\sigma \lesssim300$ km s$^{-1}$ for SDSS J1353+1138; see the moment-1 and moment-2 maps in Fig. 
\ref{fig:kin_maps} relative to the narrow component) in the central region where the outflow emission is also detected, further supporting our decomposition in the spectral analysis of both QSOs. In the outer region of HS 0810+2554, we observe slightly higher velocity dispersion values ($\sigma \lesssim300$ km s$^{-1}$), as the line profile is modelled with a single Gaussian component, given that the [O III] emission line is fainter and the S/N is lower. This could indicate that the [O III] outflow component is still present but cannot be isolated from the [O III] narrow component because of its faintness and the lower S/N. \begin{figure*} \centering \includegraphics[width=17cm,keepaspectratio]{kin_maps_pdf} \caption{{\small Moment-0 (intensity field), moment-1 (velocity field), and moment-2 (dispersion field) maps of the [O III] line emission in HS 0810+2554 (\textit{left}) and in SDSS J1353+1138 (\textit{right}). The maps for the total, narrow, and outflow components are shown separately, reporting only spatial pixels with S/N equal to or higher than 2 for HS 0810+2554 and 3 for SDSS J1353+1138. The black ‘+’ indicates the emission centroid in each QSO, while the dotted lines represent the contour levels of the BLR emission at 75\%, 50\%, and 25\% of its peak.}} \label{fig:kin_maps} \end{figure*} Similarly to \citetads{2020A&A...644A..15M}, and given the complexity of the [O III] line profile across the FOV, we preferred to adopt the following definitions of velocity and width for the outflow characterisation (e.g. see also \citeads{2014MNRAS.441.3306H, 2014MNRAS.442..784Z, 2015ApJ...799...82C, 2015A&A...580A.102C, 2016A&A...588A..58B}), rather than the moment-1 and moment-2 values. The latter are indeed more affected by geometrical projection and dust absorption effects. In each spatial pixel, we determined the 10th and 90th velocity percentiles ($v_{\rm 10}$ and $v_{\rm 90}$) of the overall emission line profile (i.e. 
narrow + outflow components if present), as representative velocities of the approaching and receding outflow components, respectively. The null velocity corresponds to the systemic velocity peak of the narrow component in the central spectrum. From $v_{\rm 10}$ and $v_{\rm 90}$, we computed the line width $W_{\rm 80}$, defined as $v_{\rm90}-v_{\rm10}$. For a Gaussian profile, the $W_{\rm80}$ width is approximately equal to the full width at half maximum (FWHM). Maps of $v_{\rm10}$, $v_{\rm90}$, and $W_{\rm80}$ are shown in Fig. \ref{fig:perc_map}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{perc_map_pdf}} \caption{{\small $v_{\rm10}$, $v_{\rm90}$, and $W_{\rm80}$ maps of the [O III] emission line in HS 0810+2554 (upper panels) and in SDSS J1353+1138 (lower panels). We applied the same cut in S/N as in the moment maps of Fig. \ref{fig:kin_maps}, that is, S/N equal to or higher than 2 for HS 0810+2554 and 3 for SDSS J1353+1138.}} \label{fig:perc_map} \end{figure} The maps of $v_{\rm10}$ show highly blueshifted velocities in most of the field of HS 0810+2554 and SDSS J1353+1138. In the former, a slightly steeper velocity gradient is present in the west direction from the centre, where we observe velocities as high as about $-2170$ km s$^{-1}$; in the latter, the outflow region is preferentially elongated in the NE-SW direction, with the highest velocity values (up to $-2410$ km s$^{-1}$) at the SW end of the strongly blueshifted region. In HS 0810+2554, we also clearly detect the redshifted component of the outflow in the two reddest regions in the $v_{\rm90}$ map, where the outflow is seen receding from us at velocities up to about $1730$ km s$^{-1}$ along the line of sight. Looking at the $W_{\rm80}$ maps, we observe extremely large values ($1100~\text{km s$^{-1}$} \lesssim W_{\rm 80} \lesssim 3500$ km s$^{-1}$) in the outflow regions, which is consistent with other $z\sim2$ QSO outflows found in the literature \citepads{2015A&A...580A.102C}. 
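The percentile-based quantities defined above can be computed directly from the cumulative line flux. A minimal sketch (with an illustrative velocity grid), which also checks the relation between $W_{80}$ and the FWHM for a single Gaussian ($W_{80}\simeq1.09\,\mathrm{FWHM}$):

```python
import numpy as np

def percentile_velocities(v, flux):
    """v10, v90, and W80 = v90 - v10 from the cumulative flux of a line profile."""
    cum = np.cumsum(flux)
    cum = cum / cum[-1]                     # normalised cumulative flux distribution
    v10 = np.interp(0.10, cum, v)
    v90 = np.interp(0.90, cum, v)
    return v10, v90, v90 - v10

# Pure Gaussian test profile with sigma = 300 km/s on a fine velocity grid
sigma = 300.0
v = np.linspace(-3000.0, 3000.0, 6001)      # km/s
flux = np.exp(-0.5 * (v / sigma) ** 2)

v10, v90, w80 = percentile_velocities(v, flux)
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
print(w80 / fwhm)   # ~1.09 for a Gaussian profile
```

For an asymmetric observed profile (narrow core plus blue wing), the same function yields a strongly negative $v_{10}$ tracing the approaching outflow, while $v_{90}$ stays closer to systemic.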
Furthermore, by comparing the [O III]-outflow moment-1 map with the $v_{\rm10}$ map for each QSO, we note that the shape of the [O III]-outflow moment-1 map reflects the bluest region in the $v_{\rm10}$ map, suggesting that any additional Gaussian component added to model the wings in the [O III] profile was correctly classified as an outflow component (compare also the [O III]-outflow moment-2 maps and the respective $W_{\rm80}$ maps). We rule out alternative scenarios, such as galactic inflows or a galaxy merger event. In fact, in the few reported cases of their detection, galactic inflows have been observed mostly in absorption and with quite small bulk velocities ($\sim200$ km s$^{-1}$) and velocity dispersions (e.g. \citeads{2013Sci...341...50B}). Moreover, theoretical modelling predicts a small covering factor for the inflowing gas (e.g. \citeads{2010ApJ...717..289S}), making its direct observation rare, especially at high redshift (e.g. \citeads{2010Natur.467..811C}). We also exclude the galaxy-merger scenario, since the deep optical images of both HS 0810+2554 and SDSS J1353+1138 (see Fig. \ref{fig:opt_images}) do not show any continuum emission counterpart at the position of the outflow region that could support such a scenario. Finally, we stress that the obtained maps are relative to the lens plane and, thus, do not account for gravitational lensing effects. While these are not expected to significantly affect the observed gas kinematics (hence the outflow velocity), they strongly alter the observed gas spatial distribution and surface brightness: fluxes are magnified and spatial dimensions are stretched. Therefore, the obtained maps cannot be used to directly infer the outflow intrinsic radius and total flux, which are key ingredients, along with the outflow velocity, in the computation of the outflow energetics. 
We first need to quantify the lensing effects; then we can derive the unlensed physical properties of the outflow. This aspect is discussed in Sect. \ref{sec:intrinsic} (see also Appendix \ref{app:2D_rec}). \subsection{Inferring unlensed size and flux of the outflow} \label{sec:intrinsic} As discussed in Sect. \ref{sec:kin_analysis}, we performed both the spectral analysis and the kinematical study in the lens plane. For this reason, we had to correct for the lens effects to determine the actual extent and flux of the outflow. There are several adaptive-mesh fitting codes which, given a mass distribution for the lens and a surface brightness profile for the background source, fit the resulting forward-lensed image to the observed data and use a statistical test (e.g. the minimum $\chi^2$ method) to establish the best-fit models for both the lens and the source. These algorithms usually require knowledge of the instrumental PSF to allow a correct comparison with the observed data. The output of these fitting codes is a 2D or 3D reconstruction (depending on the code used) of the unlensed source. In order to achieve an accurate reconstruction, the lensed images must all be detected and spatially resolved\footnote{In addition or alternatively to single lensed images, fitting codes also handle lensed arcs.}, as their position depends on the first derivative of the gravitational potential of the lens, while their flux depends on the second derivative (e.g. \citeads{2015MNRAS.454..287J, 2020MNRAS.492.5314N}). In other words, knowledge of the position of the multiple lensed images and of their fluxes provides strong constraints on the lens and background source models. Unfortunately, we could not use such fitting codes to fully reconstruct the unlensed outflow in the source plane for either HS 0810+2554 or SDSS J1353+1138, since our data did not satisfy the necessary requirements. 
In fact, in the case of HS 0810+2554 the spatial resolution of the SINFONI data was too low to resolve the various lensed images and, thus, we were not able to achieve an accurate full reconstruction of the background source. However, we were able to obtain a partial reconstruction of the background outflow using the lens-fitting code of \citetads{2018MNRAS.481.5606R} and adopting an approximate procedure (see Appendix \ref{app:2D_rec}). In this way, we estimated the outflow magnification factor and intrinsic radius to be, respectively, $\mu_{\rm out}=2.0\pm0.2$ and $R_{\rm out}=(8.7\pm1.7)$ kpc, with $z=1.508\pm0.002$ being the redshift measured from the nuclear spectrum extracted for the BLR modelling (described in Sect. \ref{sec:BLR_fit}) and adopted here to convert the outflow angular size into kpc units. Our $z$ measurement is consistent with the centroids of the ALMA CO(J$=$3$\rightarrow$2) and CO(J$=$2$\rightarrow$1) emission lines of HS 0810+2554 reported in \citetads{2020MNRAS.496..598C}. By correcting the observed [O III] outflow flux (determined in Sect. \ref{sec:test_spatially_res}) for $\mu_{\rm out}$, we found the unlensed outflow flux to be $F_{\rm out}=(1.9\pm0.2)\times10^{-15}(2.0/\mu_{\rm out})$ erg s$^{-1}$cm$^{-2}$. Our estimate of $\mu_{\rm out}\sim2$ is close to what \citetads{2020MNRAS.496..598C} found for a high-velocity CO clump at a similar distance in ALMA data of HS 0810+2554. By contrast, it differs markedly (by up to two orders of magnitude) from the literature values presented in Sect. \ref{sec:HS}. This follows from the fact that the latter are estimates of the magnification of the emission from the more compact regions (UV disk and X-ray corona), while our estimate refers to a large-scale ($\sim$8 kpc) emission. 
Moreover, the reconstructed unlensed outflow does not intercept the lens caustics (see Appendix \ref{app:2D_rec}) and, hence, it misses the magnification contribution from those regions, where the lens magnification is drastically larger. In SDSS J1353+1138, the complete lack of [O III] detection in image B prevented us from attempting any background-source reconstruction starting from our data, as no constraints could be put on this image. Therefore, we had to make simplified assumptions, referring to the lens models for SDSS J1353+1138 by \citetads{2006ApJ...653L..97I} and \citetads{2016MNRAS.458....2R} (discussed in Sect. \ref{sec:lens_model}), based on AGN plus host-galaxy emission in the \textit{i} and \textit{K}-band, respectively. Our assumptions rely on the fact that in SDSS J1353+1138 gravitational lensing effects are expected to be smaller than in HS 0810+2554 (e.g. \citeads{1996astro.ph..6001N}). As a consequence, as compared to HS 0810+2554, for this object we expect: (1) a milder and almost isotropic stretching of physical dimensions, as we indeed observed; (2) smaller and less spatially variable values of differential magnification. On the basis of the first argument, we neglected the stretching lens effect and approximated the unlensed outflow angular size by $R_{\rm PSF}=1.06''\pm0.13''$ (determined in Sect. \ref{sec:test_spatially_res}). Considering instead the lens magnification, for background emissions with comparable spatial extent we expect total magnification factors of a few units that are only weakly dependent on the geometrical details of the flux distribution. Consequently, we used the average between the \textit{i}-band (i.e. $\mu=3.81$ and $\mu=3.75$; \citeads{2006ApJ...653L..97I}) and \textit{K}-band total magnification factors (i.e. $\mu=3.47$, $\mu=3.42$ and $\mu=3.53$; \citeads{2016MNRAS.458....2R}) as a proxy for the total outflow magnification $\mu_{\rm out}$, under the assumption of comparable unlensed physical sizes. 
Given the unknown real unlensed flux distribution of the \textit{J}-band outflow, we conservatively assumed an uncertainty of 10\% on our adopted $\mu_{\rm out}$ value, thus obtaining $\mu_{\rm out}=3.6\pm0.4$. Finally, correcting for the lens magnification and converting to kpc units, we found the unlensed outflow flux to be $F_{\rm out}=(2.4\pm0.3)\times10^{-16}(3.6/\mu_{\rm out})$ erg s$^{-1}$cm$^{-2}$ and its intrinsic radius to be $R_{\rm out}=(9.2\pm1.1)$ kpc, using $z=1.632\pm0.002$ as measured from the nuclear spectra extracted during the BLR modelling (described in Sect. \ref{sec:double_BLR}). In Table \ref{tab:kin_properties}, we summarise the main outflow properties for HS 0810+2554 and SDSS J1353+1138 obtained up to this point. The first three columns show the maximum velocity values observed in the $v_{\rm10}$, $v_{\rm90}$ and $W_{\rm80}$ maps (described in Sect. \ref{sec:kin_analysis}), referred to as $v^{\rm max}_{\rm10}$, $v^{\rm max}_{\rm90}$ and $W^{\rm max}_{\rm80}$, respectively. We then report our lens-corrected estimates of $R_{\rm out}$ and $F_{\rm out}$ (inferred as discussed above), the latter corrected for the outflow magnification factor, $\mu_{\rm out}$, shown in the last column. Most of these physical quantities are also employed in the computation of the outflow energetics in Sect. \ref{sec:energetics}. 
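As a quick numerical sanity check (our own sketch; the band-level averaging scheme is our reading of the procedure described above), the published \textit{i}- and \textit{K}-band total magnification factors indeed average to the adopted $\mu_{\rm out}\simeq3.6$:

```python
# Average of published total magnification factors for SDSS J1353+1138
# (i-band and K-band lens models cited in the text), as adopted for mu_out.
mu_i = [3.81, 3.75]                        # i-band models
mu_K = [3.47, 3.42, 3.53]                  # K-band models
mean = lambda xs: sum(xs) / len(xs)
mu_out = mean([mean(mu_i), mean(mu_K)])    # average of the two band averages
print(round(mu_out, 1))                    # 3.6 (a plain mean of all five values
                                           # also rounds to 3.6)
# The unlensed flux then follows as F_intrinsic = F_observed / mu_out.
```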
\begin{table*} \centering \begin{tabular}{c|cccccc} \hline \hline QSO & $v^{\rm max}_{\rm10}$ & $v^{\rm max}_{\rm90}$ & $W^{\rm max}_{\rm80}$ & $R_{\rm out}$ & $F_{\rm out}$ & $\mu_{\rm out}$\\ & km s$^{-1}$ & km s$^{-1}$ & km s$^{-1}$ & kpc & erg s$^{-1}$cm$^{-2}$ & \\ \hline HS 0810+2554 & $-2170\pm70$ & $1730\pm60$ & $3360\pm110$ & $8.7\pm1.7$ & $(1.9\pm0.2)\times10^{-15}$ & $2.0\pm0.2$\\ SDSS J1353+1138 & $-2410\pm80$ & $2270\pm130$ & $3850\pm90$ & $9.2\pm1.1$ & $(2.4\pm0.3)\times10^{-16}$ & $3.6\pm0.4$\\ \hline \end{tabular}% \caption{{\small Directly measured properties of the [O III] outflows in HS 0810+2554 and SDSS J1353+1138, obtained from our analysis, and adopted outflow magnification factors. Starting from left, columns are defined as follows: maximum velocity values observed in the $v_{\rm10}$, $v_{\rm90}$ and $W_{\rm80}$ maps ($v^{\rm max}_{\rm10}$, $v^{\rm max}_{\rm90}$ and $W^{\rm max}_{\rm80}$, respectively), intrinsic outflow radius ($R_{\rm out}$), unlensed [O III] outflow flux ($F_{\rm out}$) corrected for the outflow magnification factor ($\mu_{\rm out}$), reported in the last column.}} \label{tab:kin_properties} \end{table*}% \subsection{Outflow energetics} \label{sec:energetics} We derived the physical properties of the large-scale ionised outflows in HS 0810+2554 and SDSS J1353+1138 from the observed [O III]$\lambda$5007 emission, following the prescriptions described in \citetads{2012A&A...537L...8C} as done also in \citetads{2020A&A...644A..15M}. 
The [O III] line luminosity is given by: \begin{equation}\label{eq:oiii_line_lum} L_{\rm [O III]}=\int_V \epsilon_{\rm [O III]}f~dV, \end{equation} where $V$ is the volume occupied by the ionised outflow, $f$ is the filling factor of the [O III] emitting clouds in the outflow, and $\epsilon_{\rm [O III]}$ is the [O III] line emissivity which, at the temperatures typical of the NLR ($\sim10^4$ K), is weakly dependent on the temperature ($\propto T^{0.1}$) and can be written as: \begin{equation}\label{eq:line_emissivity} \epsilon_{\rm [O III]}=1.11\times10^{-9}E_{\rm [O III]}n_{\rm O^{2+}}n_{\rm e}~\text{erg s}^{-1}\text{cm}^{-3}, \end{equation} where $E_{\rm [O III]}$ is the energy of the [O III] photons, and $n_{\rm O^{2+}}$ and $n_{\rm e}$ are the volume densities of the $\rm O^{2+}$ ions and of the electrons, respectively. Assuming that most of the oxygen in the ionised outflow is in the form of $\rm O^{2+}$, it then follows that: \begin{equation}\label{eq:line_emissivity_approx} \epsilon_{\rm [O III]}\approx5\times10^{-13}E_{\rm [O III]}n^2_{\rm e}10^{\rm [O/H]}~\text{erg s}^{-1}\text{cm}^{-3}, \end{equation} where $\rm [O/H]$ gives the oxygen abundance in solar units. The mass of the outflowing ionised gas can be derived from the following expression: \begin{equation}\label{eq:mass_out_int} M_{\rm out}=\int_V 1.27m_{\rm H}n_{\rm e}f~dV, \end{equation} where $m_{\rm H}$ is the mass of the hydrogen atom and the factor of 1.27 follows from including the mass contribution of helium. By combining Eqs. (\ref{eq:oiii_line_lum}) and (\ref{eq:mass_out_int}), we get: \begin{equation}\label{eq:mass_out} \resizebox{0.9\columnwidth}{!}{% $M_{\rm out}=5.33\times10^7~\bigg( \frac{L_{\rm [O III]}}{10^{44}~\text{erg s}^{-1}} \bigg)~\bigg( \frac{\langle n_{\rm e} \rangle}{10^3~\text{cm}^{-3}} \bigg)^{-1}~\frac{C}{10^{\rm [O/H]}}~\text{\(M_\odot\)},$ } \end{equation} \\ where $\langle n_{\rm e} \rangle$ is the electron density averaged over the ionised outflow volume (i.e. 
$\langle n_{\rm e} \rangle=\int_V n_{\rm e}f~dV/\int_Vf~dV$) and $C=\langle n_{\rm e} \rangle^2/\langle n^2_{\rm e} \rangle$ is the so-called ‘condensation factor’. Under the simplifying hypothesis that all the ionised gas clouds have the same density, we get $C=1$ and remove the dependence of the outflow mass on the filling factor of the emitting clouds. In order to compute the energetics of the ionised outflow, we have to make further simplifying assumptions about the outflow geometry and structure: we assume that the outflow has a (bi-)conical geometry with an opening angle $\Omega$ and a radial extent $R_{\rm out}$, and that it consists of a collection of ionised gas clouds, uniformly distributed within the cone and outflowing with a speed $v_{\rm out}$. The mass outflow rate is given by: \begin{equation}\label{eq:mdot_def} \dot{M}_{\rm out}=\langle \rho \rangle~v_{\rm out}~\Omega R_{\rm out}^2, \end{equation} where $\langle \rho \rangle$ is the average mass density computed over the total volume $V$ occupied by the conical outflow\footnote{We note that unless $f=1$, in general $\langle \rho \rangle \neq 1.27m_{\rm H}\langle n_{\rm e} \rangle$, since $\langle n_{\rm e} \rangle$ is averaged over the volume occupied by the emitting clouds and not over the whole conical volume.}. By substituting $\langle \rho \rangle$ in Eq. (\ref{eq:mdot_def}) with its definition in terms of $M_{\rm out}$ (using Eq. \ref{eq:mass_out}) and $V$, we obtain: \\ \begin{equation}\label{eq:mdot} \resizebox{0.9\columnwidth}{!}{% $\dot{M}_{\rm out}=164\bigg( \frac{L_{\rm [O III]}}{10^{44}\text{erg s}^{-1}} \bigg)\bigg( \frac{\langle n_{\rm e} \rangle}{10^3\text{cm}^{-3}} \bigg)^{-1}\bigg( \frac{v_{\rm out}}{10^3\text{km s}^{-1}} \bigg)\bigg( \frac{R_{\rm out}}{\text{kpc}} \bigg)^{-1}\frac{\text{\(\text{M}_\odot\)yr}^{-1}}{10^{\rm [O/H]}}$ } \end{equation} \\ where we have assumed $C=1$. 
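As a back-of-the-envelope check (our own sketch, not part of the original analysis), the numerical prefactors of Eqs. (\ref{eq:mass_out}) and (\ref{eq:mdot}) can be reproduced from Eqs. (\ref{eq:line_emissivity_approx}) and (\ref{eq:mass_out_int}) with $C=1$ and $\rm [O/H]=0$, taking $E_{\rm [O III]}$ as the energy of a 5007 \AA\ photon and, for a cone of uniform average density, $\dot{M}_{\rm out}=3M_{\rm out}v_{\rm out}/R_{\rm out}$:

```python
import math
m_H, Msun = 1.6726e-24, 1.989e33           # hydrogen (proton) mass, solar mass [g]
h, c = 6.6261e-27, 2.9979e10               # Planck constant, speed of light (cgs)
yr, kpc = 3.156e7, 3.0857e21               # year [s], kiloparsec [cm]
E_OIII = h*c/5007e-8                       # energy of one 5007 A photon [erg]
# M_out = 1.27 m_H L / (5e-13 E_OIII n_e), at L = 1e44 erg/s, n_e = 1e3 cm^-3:
M_out = 1.27*m_H*1e44/(5e-13*E_OIII*1e3)/Msun
print(f"{M_out:.2e}")                      # ~5.4e7 Msun, matching the quoted
                                           # 5.33e7 to within ~1% (rounding)
# Uniform cone: Mdot = 3 M_out v_out / R_out, at v = 1000 km/s, R = 1 kpc:
Mdot = 3*M_out*1e8/kpc*yr                  # [Msun/yr]
print(f"{Mdot:.0f}")                       # ~165 Msun/yr, vs the quoted 164
```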
The mass outflow rate thus inferred does not depend on either the opening angle $\Omega$ of the outflow cone or the filling factor $f$ of the emitting clouds. Finally, we calculate the kinetic energy ($E_{\rm kin}$), kinetic luminosity ($L_{\rm kin}$) and momentum rate ($\dot{p}_{\rm out}$) of the outflow by means of the following expressions: \\ \begin{equation}\label{eq:kin_energy} E_{\rm kin}=9.94\times10^{42}~\bigg( \frac{M_{\rm out}}{\text{\(\text{M}_\odot\)}} \bigg)~\bigg( \frac{v_{\rm out}}{\text{km s}^{-1}} \bigg)^2~\text{erg} ,\end{equation} \begin{equation}\label{eq:kin_lum} L_{\rm kin}=3.16\times10^{35}~\bigg( \frac{\dot{M}_{\rm out}}{\text{\(\text{M}_\odot\)yr}^{-1}} \bigg)~\bigg( \frac{v_{\rm out}}{\text{km s}^{-1}} \bigg)^2~\text{erg s}^{-1} ,\end{equation} \begin{equation}\label{eq:pdot} \dot{p}_{\rm out}=6.32\times10^{30}~\bigg( \frac{\dot{M}_{\rm out}}{\text{\(\text{M}_\odot\)yr}^{-1}} \bigg)~\bigg( \frac{v_{\rm out}}{\text{km s}^{-1}} \bigg)~\text{dyne}. \end{equation} \\ Equations (\ref{eq:mass_out})-(\ref{eq:pdot}) require the knowledge of different physical properties of the outflow, some of which we were able to derive, while others had to be assumed. The assumed quantities are: the oxygen abundance, which we fixed to the solar value, and the electron density, which we assumed to be $n_{\rm e}\sim1000$ cm$^{-3}$, in agreement with the values measured in similar studies at high redshift (see e.g. \citeads{2017A&A...606A..96P, 2019ApJ...875...21F}). The latter assumption, in particular, affects the derived outflow energetics \citepads{2020MNRAS.498.4150D, 2020A&A...642A.147K}, but it is nonetheless necessary, since we cannot measure $n_{\rm e}$ directly from our data. We now focus on the physical quantities we were able to calculate. In Sect. \ref{sec:intrinsic}, we provided the values of the intrinsic radius $R_{\rm out}$ and flux $F_{\rm out}$ of the ionised outflows, traced by the [O III] line emission. 
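The prefactors in Eqs. (\ref{eq:kin_energy})-(\ref{eq:pdot}) are simply $\tfrac{1}{2}M_{\rm out}v_{\rm out}^2$, $\tfrac{1}{2}\dot{M}_{\rm out}v_{\rm out}^2$ and $\dot{M}_{\rm out}v_{\rm out}$ expressed in cgs units; a minimal check at the reference units (our own verification, not from the paper):

```python
Msun, yr, kms = 1.989e33, 3.156e7, 1e5     # solar mass [g], year [s], 1 km/s [cm/s]
E_kin = 0.5*Msun*kms**2                    # 0.5 M v^2 for M = 1 Msun, v = 1 km/s
L_kin = 0.5*(Msun/yr)*kms**2               # 0.5 Mdot v^2 for Mdot = 1 Msun/yr
p_dot = (Msun/yr)*kms                      # Mdot v
print(f"{E_kin:.3e} {L_kin:.3e} {p_dot:.3e}")
# ~9.95e42 erg, ~3.15e35 erg/s, ~6.30e30 dyne: matching the quoted prefactors
# 9.94e42, 3.16e35 and 6.32e30 to well within 1%.
```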
From $F_{\rm out}$ we calculated the intrinsic [O III] line luminosity used in the outflow mass expression (Eq. \ref{eq:mass_out}). We found ($\mu_{\rm out}$-corrected) [O III] luminosity values of $L_{\rm [O III]}=(2.8\pm0.3)\times10^{43}$ erg s$^{-1}$ and $L_{\rm [O III]}=(4.4\pm0.6)\times10^{42}$ erg s$^{-1}$ for HS 0810+2554 and SDSS J1353+1138, respectively. In order to establish the velocity of the ionised outflows, we focused on the kinematical maps of $v_{\rm 10}$ and $v_{\rm 90}$, shown in Sect. \ref{sec:kin_analysis}. The spectral analysis and the study of the gas kinematics in HS 0810+2554 and in SDSS J1353+1138 have revealed the presence of an extended central region hosting outflowing gas directed towards us along the line of sight (Sect. \ref{sec:kin_analysis}), leading to very high values of $v_{\rm 10}$. Moreover, in HS 0810+2554, we also detected a red wing in the [O III] line profile in two smaller regions apart from the peak of the overall emission, corresponding to the high-velocity receding component of the ionised outflow. In this case, following \citetads{2020A&A...644A..15M}, we defined the outflow velocity as: \begin{equation}\label{eq:v_out_HS} v_{\rm out}=\max(|v^{\rm max}_{\rm 10}-v_{\rm sys}|,~ |v^{\rm max}_{\rm 90}-v_{\rm sys}|) ,\end{equation} where $v^{\rm max}_{\rm 10}$ and $v^{\rm max}_{\rm 90}$ are the maximum values observed in the $v_{\rm 10}$ and $v_{\rm 90}$ maps (described in Sect. \ref{sec:kin_analysis}), respectively, and $v_{\rm sys}$ is the bulk (or systemic) velocity of the galaxy, inferred from the nuclear spectrum used for the BLR fitting (in Sect. \ref{sec:BLR_fit}) and set to the value of 0 km s$^{-1}$, as previously described in Sect. \ref{sec:kin_analysis}. 
This definition is required by the unknown geometry and orientation of the outflow with respect to the line of sight: since we do not know the true angle of the outflow with respect to the line of sight, and since the bulk of the outflow is unlikely to point towards the observer, we assume that the best representation of the outflow speed is provided by the velocity ‘tail’ of the line profile, that is, $v_{\rm 10}$ and $v_{\rm 90}$ in Eq. (\ref{eq:v_out_HS}). These values are thought to be better suited to represent $v_{\rm out}$ than the mean (or median) velocity of the line, which strongly depends on projection effects and dust absorption (e.g. \citeads{2015ApJ...799...82C}). The same argument holds for the determination of the outflow velocity in SDSS J1353+1138. In this case, however, possibly because of the different orientation with respect to the observer and higher dust absorption, we did not observe any asymmetric red wing in the [O III] profile produced by the receding part of the outflow, as stressed in Sect. \ref{sec:kin_analysis}. Therefore, for SDSS J1353+1138 we focused only on the blue tail of the [O III] line associated with the outflow, and hence assumed $v_{\rm out}=v^{\rm max}_{\rm 10}$ as the outflow velocity for SDSS J1353+1138. To calculate the quantities in Eqs. (\ref{eq:mass_out})-(\ref{eq:pdot}) and their uncertainties, we used standard error propagation, considering the inferred errors on $R_{\rm out}$, $L_{\rm [O III]}$ and $v_{\rm out}$, and a typical uncertainty of 50\% on $n_{\rm e}$ (e.g. \citeads{2017A&A...606A..96P, 2019ApJ...875...21F}). The physical properties of the ionised outflows detected in HS 0810+2554 and in SDSS J1353+1138 are reported in Table \ref{tab:energetics}, including also our estimates of the kinetic efficiency and of the momentum-boost. The former is defined as the ratio between the outflow kinetic luminosity, $L_{\rm kin}$, (defined in Eq. 
\ref{eq:kin_lum}) and the AGN bolometric luminosity, $L_{\rm Bol}$ (corrected for the lens magnification), while the latter is defined as the ratio between the momentum rate of the outflow ($\dot{p}_{\rm out}$) and the momentum initially provided by the AGN radiation pressure (i.e. $L_{\rm Bol}/c$), which is approximately identified also with the momentum rate of the X-ray UFO. The values of $L_{\rm Bol}$ adopted in this work are $(2.5\pm0.9)\times10^{45}$ erg s$^{-1}$ for HS 0810+2554 and $(39\pm2)\times10^{45}$ erg s$^{-1}$ for SDSS J1353+1138, and they will be discussed in Sect. \ref{sec:connecting}. We found mass outflow rates of $\sim$2 \(\text{M}_\odot\)yr$^{-1}$ and $\sim$12 \(\text{M}_\odot\)yr$^{-1}$, and kinetic efficiencies of $\sim9\times10^{-5}$ and $\sim700\times10^{-5}$, for SDSS J1353+1138 and HS 0810+2554, respectively. The values obtained for HS 0810+2554 are in good agreement with the predictions at $L_{\rm Bol}\sim2\times10^{45}$ erg s$^{-1}$ of the $\dot{M}_{\rm out}-L_{\rm Bol}$ and $L_{\rm kin}-L_{\rm Bol}$ scaling relations \citepads{2015A&A...580A.102C, 2017A&A...601A.143F} for the ionised outflow component. By contrast, the inferred $\dot{M}_{\rm out}$ value for SDSS J1353+1138 is small (by a factor of $\sim100$) compared to the predictions for the ionised outflow component at $L_{\rm Bol}\sim4\times10^{46}$ erg s$^{-1}$. Our findings, however, refer only to the ionised phase of large-scale outflows, while a significant amount of outflowing gas may be in the neutral molecular and/or atomic phase. Given the typical bolometric luminosities of our galaxies ($L_{\rm Bol}\sim10^{45}-10^{46}$ erg s$^{-1}$), the outflow mass rates predicted for the molecular component may indeed exceed our measurements for the ionised gas by a factor of $\sim100$. 
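The quoted [O III] luminosities follow from the demagnified fluxes via the luminosity distance; a minimal cross-check for HS 0810+2554 (our own sketch, assuming a flat $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{\rm m}=0.3$; the cosmology actually adopted in the paper may differ slightly):

```python
import math
H0, Om = 70.0, 0.3                   # assumed flat-LCDM parameters
c_kms, Mpc = 2.9979e5, 3.0857e24     # speed of light [km/s], megaparsec [cm]
def lum_dist_cm(z, n=20000):
    # Comoving distance via midpoint integration of dz/E(z), then D_L = (1+z) D_C.
    E = lambda zz: math.sqrt(Om*(1.0 + zz)**3 + (1.0 - Om))
    dz = z/n
    integral = sum(1.0/E((i + 0.5)*dz) for i in range(n))*dz
    return (1.0 + z)*(c_kms/H0)*integral*Mpc
F_out = 1.9e-15                      # demagnified [O III] flux [erg/s/cm^2]
L_OIII = 4.0*math.pi*lum_dist_cm(1.508)**2*F_out
print(f"{L_OIII:.1e}")               # ~2.7e43 erg/s, consistent with the quoted
                                     # 2.8e43 within the flux uncertainty
```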
\citetads{2020MNRAS.496..598C} have recently claimed the tentative detection of a massive ($M_{\rm out,~mol}\sim4\times10^{9}$ \(\text{M}_\odot\)) CO-molecular outflow in HS 0810+2554, with total mass rate and velocity of $\sim$400 \(\text{M}_\odot\)yr$^{-1}$ and 1040 km s$^{-1}$, respectively. In contrast to molecular outflows, which have been observed in $z>1$ QSOs despite the difficulties imposed by detection limits (e.g. \citeads{2017A&A...605A.105C, 2017A&A...608A..30F, 2018A&A...612A..29B}), neutral atomic outflows have mainly been detected in ultra-luminous infrared galaxies (ULIRGs), showing both intense SF and AGN activity (e.g. \citeads{2005ApJS..160..115R, 2016A&A...590A.125C, 2017A&A...606A..96P, 2019MNRAS.483.4586F, 2020arXiv200613232F}). This may suggest that neutral atomic outflows are mostly powered by SF rather than by AGN activity (e.g. \citeads{2017A&A...606A..36C, 2018ApJ...853..185B}), or that they occur in obscured AGN hosting large quantities of cold gas that can be channelled into galaxy-scale outflows \citepads{2017A&A...606A..96P}. However, the contribution of any additional gas phase may significantly increase the overall outflow mass and energetics (e.g. \citeads{2014A&A...562A..21C, 2016A&A...591A..28C, 2017A&A...601A.143F}). The implications of all these results will be discussed in detail in Sect. \ref{sec:discussion}. 
\begin{table*} \centering \begin{tabular}{c|cccccc} \hline \hline QSO & $v_{\rm out}$ & $M_{\rm out}$ & $\dot{M}_{\rm out}$ & $E_{\rm kin}$ & $L_{\rm kin}/L_{\rm Bol}$ & $\dot{p}_{\rm out}$\\ & \text{km s$^{-1}$} & $10^6$ \(\text{M}_\odot\) & \(\text{M}_\odot\)yr$^{-1}$ & $10^{56}$ erg & $10^{-5}$ & $L_{\rm Bol}/c$\\ \hline HS 0810+2554 & $2170\pm70$ & $15\pm8$ & $12\pm6$ & $7\pm4$ & $700\pm500$ & $1.9\pm1.2$\\ SDSS J1353+1138 & $2410\pm80$ & $2.4\pm1.2$ & $1.9\pm1.0$ & $1.4\pm0.7$ & $9\pm7$ & $0.022\pm0.018$\\ \hline \end{tabular}% \caption{{\small Properties of the ionised outflows in HS 0810+2554 and SDSS J1353+1138, derived from the analysis of the [O III] line emission. From the left, columns report the measured values of: outflow velocity, mass, mass rate, kinetic energy, kinetic efficiency and momentum-boost. The values of $L_{\rm Bol}$ are shown in Table \ref{tab:energetics_UFOs}.}} \label{tab:energetics} \end{table*}% \section{Discussion} \label{sec:discussion} \subsection{Connection with the nuclear X-ray UFOs} \label{sec:connecting} The main purpose of this work is to shed light on the acceleration mechanism of ionised outflows on large scales. In this regard, we compared the energetics of the galaxy-scale ionised outflows to that of the UFOs present at nuclear scales, in order to test whether they are causally connected (i.e. whether they are subsequent phases of the same AGN-accretion burst). In the case of a causal connection, we could go on to investigate the nature of the acceleration mechanism, distinguishing between momentum-driven and energy-driven winds. We show the energetics measurements of the large-scale ionised outflows in HS 0810+2554 and SDSS J1353+1138 in Sect. \ref{sec:energetics}. In this section, we present the X-ray measurements of the hosted UFOs from the literature that are used to determine the sub-pc scale wind energetics. 
For both QSOs, we refer to \citet{Chartas.high-z.UFOs}, who present the first X-ray spectral analysis of SDSS J1353+1138, as well as new UFO measurements for HS 0810+2554, obtained from a new \textit{Chandra} observation acquired in 2016 and analysed with the updated version of the photoionisation code \textit{XSTAR} \citepads{2001ApJS..134..139B}. As done in \citetads{2020A&A...644A..15M}, we followed \citetads{2018MNRAS.478.2274N} in order to re-compute the UFO energetics in a consistent way, based on the same assumptions we made in the calculation of the large-scale outflow energetics. We assumed that the UFO is launched from the escape radius $r_{\rm esc}\equiv2GM_{\rm BH}/v^2_{\rm UFO}$ of the BH and we derived the mass outflow rate for the nuclear wind as: \begin{equation}\label{eq:mdot_UFO} \dot{M}_{\rm UFO}\simeq0.3~ \bigg( \frac{\Omega}{4\pi} \bigg) \bigg( \frac{N_{\rm H}}{10^{24}\text{cm}^{-2}} \bigg)\bigg( \frac{M_{\rm BH}}{10^8\text{\(\text{M}_\odot\)}} \bigg)\bigg( \frac{v_{\rm UFO}}{c} \bigg)^{-1} \text{\(\text{M}_\odot\)yr}^{-1} ,\end{equation} where $\Omega$ is the solid angle subtended by the UFO, $M_{\rm BH}$ the black hole mass, $N_{\rm H}$ the hydrogen gas column density, and $v_{\rm UFO}$ the wind speed. We took the values of $v_{\rm UFO}$ and $N_{\rm H}$ inferred by \citet{Chartas.high-z.UFOs}. For HS 0810+2554, these values (reported in Table \ref{tab:energetics_UFOs}) slightly differ from those previously published (i.e. $v_{\rm UFO,1}=0.12^{+0.02}_{-0.01}c$, $N_{\rm H,1}=3.4^{+1.9}_{-2.0}\times10^{23}~\text{cm}^{-2}$ and $v_{\rm UFO,2}=0.41^{+0.07}_{-0.04}c$, $N_{\rm H,2}=2.9^{+2.0}_{-1.6}\times10^{23}~\text{cm}^{-2}$ for the two distinct UFO components; \citeads{2016ApJ...824...53C}), but are still in agreement. The adopted $M_{\rm BH}$ values are virial estimates based on H$\beta$ for HS 0810+2554 \citepads{2011ApJ...742...93A} and on C IV for SDSS J1353+1138 \citep{Chartas.high-z.UFOs}. 
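The $\simeq0.3$ prefactor of Eq. (\ref{eq:mdot_UFO}) can be recovered from first principles (our own sketch, assuming a steady wind whose column $N_{\rm H}$ is accumulated from $r_{\rm esc}$ outwards, so that $\dot{M}_{\rm UFO}=\Omega\,m_{\rm p}N_{\rm H}v_{\rm UFO}r_{\rm esc}$; the exact geometry adopted in \citetads{2018MNRAS.478.2274N} may differ):

```python
import math
G, m_p, c = 6.674e-8, 1.6726e-24, 2.9979e10    # cgs constants
Msun, yr = 1.989e33, 3.156e7
N_H, M_BH, v = 1e24, 1e8*Msun, c               # reference values of the equation
r_esc = 2.0*G*M_BH/v**2                        # assumed launch radius [cm]
Mdot = 4.0*math.pi*m_p*N_H*v*r_esc             # [g/s], for full covering (Omega = 4 pi)
print(f"{Mdot*yr/Msun:.2f}")                   # ~0.30 Msun/yr, matching the prefactor
```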
The C IV-based measurement of $M_{\rm BH}$ for SDSS J1353+1138 has been corrected following the prescription published in \citetads{2017MNRAS.465.2120C} for C IV-based virial black hole mass estimates, which are known to be systematically biased compared to masses derived from the Balmer hydrogen lines. Finally, we assumed for both UFOs a covering factor of $f=\frac{\Omega}{4\pi}=0.4\pm0.2$, on the basis that about 40\% of local AGNs have been observed to host UFOs \citepads{2010A&A...521A..57T, 2013MNRAS.430...60G}. With all these ingredients, we calculated the mass rate $\dot{M}_{\rm UFO}$ (with Eq. \ref{eq:mdot_UFO}), the momentum rate $\dot{p}_{\rm UFO}$ (i.e. $\dot{p}_{\rm UFO}=\dot{M}_{\rm UFO}v_{\rm UFO}$) and the momentum-boost of the nuclear winds, defined as $\dot{p}_{\rm UFO}/(L_{\rm Bol}/c)$. As uncertainty on all quantities, we adopted the minimum-maximum range of possible values, based on the values of $v_{\rm UFO}$, $N_{\rm H}$ and $M_{\rm BH}$ inferred in \citet{Chartas.high-z.UFOs}. For $L_{\rm Bol}$ we took the average between the $\mu$-corrected values obtained in \citet{Chartas.high-z.UFOs} with two different methods: the 2-10 keV bolometric correction \citepads{2012MNRAS.425..623L} and the estimate from the continuum luminosity at 1450 \AA\ \citepads{2011ApJ...742...93A, 2012MNRAS.422..478R}. Table \ref{tab:energetics_UFOs} summarises the physical properties of the UFOs in HS 0810+2554 and SDSS J1353+1138, derived following the prescriptions in \citetads{2018MNRAS.478.2274N}, along with the physical quantities taken from \citet{Chartas.high-z.UFOs}. 
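The UFO momentum boost then follows directly from $\dot{p}_{\rm UFO}=\dot{M}_{\rm UFO}v_{\rm UFO}$; for illustration (our own sketch, using the rounded tabulated values for SDSS J1353+1138, whereas the tabulated boost of 1.7 is computed from the unrounded quantities):

```python
c = 2.9979e10                           # speed of light [cm/s]
Msun, yr = 1.989e33, 3.156e7
Mdot_UFO = 4.0*Msun/yr                  # rounded tabulated mass rate [g/s]
v_UFO, L_Bol = 0.34*c, 39e45            # wind speed [cm/s], bolometric lum. [erg/s]
boost = Mdot_UFO*v_UFO/(L_Bol/c)        # pdot_UFO / (L_Bol/c)
print(round(boost, 1))                  # ~2.0, close to the tabulated 1.7; the
                                        # difference reflects the rounding of Mdot
```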
For HS 0810+2554 the values of $\dot{M}_{\rm UFO}$ and $\dot{p}_{\rm UFO}/(L_{\rm Bol}/c)$ refer to the whole hosted UFO, as the sum of the two detected UFO components at different speeds (for which we separately report $N_{\rm H}$ and $v_{\rm UFO}$ in Table \ref{tab:energetics_UFOs}), originally discovered in \citetads{2016ApJ...824...53C} and confirmed in \citet{Chartas.high-z.UFOs}. \begin{table*} \centering \setlength{\tabcolsep}{6pt} \renewcommand{\arraystretch}{1.3} \begin{tabular}{c|ccccc|cc} \hline \hline QSO & log($M_{\rm BH}$)$~^{\rm a}$ & $L_{\rm Bol}~^{\rm b}$ & $N_{\rm H}$ & $v_{\rm UFO}$ & $f$ & $\dot{M}_{\rm UFO}$ & $\dot{p}_{\rm UFO}$\\ & \(\text{M}_\odot\) & $10^{45}$ erg s$^{-1}$ & $10^{23}$ cm$^{-2}$ & $c$ & & \(\text{M}_\odot\)yr$^{-1}$ & $L_{\rm Bol}/c$\\ \hline \multirow{2}{*}{HS 0810+2554} & \multirow{2}{*}{$8.62^{+0.22}_{-0.22}$} & \multirow{2}{*}{$2.5\pm0.9$} & $2.1^{+1.0}_{-1.1}$ & $0.11^{+0.05}_{-0.03}$ & \multirow{2}{*}{$0.4\pm0.2$} & \multirow{2}{*}{$1.2^{+4.2}_{-1.1}$} & \multirow{2}{*}{$3.9^{+17.6}_{-3.5}$}\\ & & & $1.4^{+0.3}_{-0.5}$ & $0.43^{+0.04}_{-0.05}$ & & & \\ \hline SDSS J1353+1138 & $9.41^{+0.30}_{-0.30}$ & $39\pm2$ & $3.9^{+2.4}_{-2.3}$ & $0.34^{+0.02}_{-0.09}$ & $0.4\pm0.2$ & $4^{+9}_{-3}$ & $1.7^{+9.7}_{-1.6}$\\ \hline \end{tabular}% \caption{{\small Physical properties of the UFOs in HS 0810+2554 and SDSS J1353+1138, as derived from the X-ray measurements reported in \citet{Chartas.high-z.UFOs} by using the prescription of \citetads{2018MNRAS.478.2274N}. (a) The $M_{\rm BH}$ values are virial estimates based on H$\beta$ in HS 0810+2554 \citepads{2011ApJ...742...93A}, and on C IV in SDSS J1353+1138 \citep{Chartas.high-z.UFOs}, respectively. The C IV-based measurement of $M_{\rm BH}$ in SDSS J1353+1138 has been corrected following the prescription for C IV-based virial black hole mass estimates published in \citetads{2017MNRAS.465.2120C}. 
(b) The values of $L_{\rm Bol}$ are corrected for the lens magnification and computed as the average of the two independent estimates of the AGN bolometric luminosity, obtained through the 2-10 keV bolometric correction \citepads{2012MNRAS.425..623L} and from the continuum luminosity at 1450 \AA\ \citepads{2011ApJ...742...93A, 2012MNRAS.422..478R}.}} \label{tab:energetics_UFOs} \end{table*}% \begin{figure} \resizebox{\hsize}{!}{\includegraphics{pboost_pdf_real}} \caption{{\small Momentum-boost versus wind velocity diagram for both the UFOs (circles) and the galactic outflow components (squares) of HS 0810+2554 and SDSS J1353+1138. The dashed lines represent the predictions for a momentum-conserving regime (horizontal) and an energy-conserving regime (diagonal). For HS 0810+2554, the point corresponding to the ionised+molecular large-scale outflow is also shown, identified by the star symbol.}} \label{fig:pboost} \end{figure} Figure \ref{fig:pboost} shows the momentum-boost as a function of the outflow speed for HS 0810+2554 and SDSS J1353+1138. Given the two X-ray UFO points (circles), the predictions for a momentum-driven and an energy-driven scenario are represented by the dashed horizontal and diagonal lines, respectively. Error bars on the UFO points indicate the minimum-maximum range of possible values, as mentioned previously. Comparing with the ionised outflow points (squares), we observe that while the [O III] outflow of HS 0810+2554 is consistent (within the uncertainty) with the expectations for the momentum-driven mode, that of SDSS J1353+1138 falls short of the momentum-driven prediction by a factor of $\sim$100 and is therefore far from any tentative connection with the UFO on nuclear scales. 
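For reference, the two dashed lines in Fig. \ref{fig:pboost} follow from momentum conservation ($\dot{p}_{\rm out}=\dot{p}_{\rm UFO}$) and from energy conservation ($\tfrac12\dot{M}_{\rm out}v_{\rm out}^2=\tfrac12\dot{M}_{\rm UFO}v_{\rm UFO}^2$, i.e. $\dot{p}_{\rm out}=\dot{p}_{\rm UFO}\,v_{\rm UFO}/v_{\rm out}$); a minimal sketch of the two predictions with the tabulated values for SDSS J1353+1138 (our illustration, not the paper's code):

```python
def predicted_boost(boost_ufo, v_ufo, v_out, mode):
    # Large-scale momentum boost predicted from the UFO boost.
    if mode == "momentum":
        return boost_ufo                  # momentum-conserving: pdot preserved
    return boost_ufo*v_ufo/v_out          # energy-conserving: pdot amplified by v_UFO/v_out

c_kms = 2.9979e5                          # speed of light [km/s]
b_mom = predicted_boost(1.7, 0.34*c_kms, 2410.0, "momentum")
b_en = predicted_boost(1.7, 0.34*c_kms, 2410.0, "energy")
print(b_mom, round(b_en))                 # 1.7 and ~72: the observed boost of 0.022
                                          # falls far below both predictions
```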
We reiterate that our measurement is a lower limit, as we did not detect [O III] emission from the second lensed image and, moreover, we approximated the intrinsic outflow radius by the observed one (only PSF-corrected). Nevertheless, even accounting for all the issues and approximations we made, such a discrepancy between observations and theoretical predictions could hardly be explained, as it amounts to about two orders of magnitude. Even associating half of the [O III] flux observed in image A to image B, on the basis of the flux-ratio measurement of the two \textit{H}-band images \citepads{2006ApJ...653L..97I}, we would obtain a momentum-boost larger than the previous one by only a factor of $\sim 1.5$. Similarly, it is unlikely that we are underestimating the stretching effects so much as to overestimate $R_{\rm out}$ by a factor of $\sim$100. Therefore, we conclude that such a significant discrepancy must have a different origin. The most plausible hypotheses are: 1) the likely presence of a massive molecular outflow in this galaxy that our work is not accounting for; or 2) the possibility that the observed UFO is caused by an extraordinary burst episode (see e.g. \citeads{2020MNRAS.498.3633Z}) in the BH accretion activity of SDSS J1353+1138, while the large-scale outflow must be considered as the resultant effect of the AGN activity averaged over longer time-scales \citepads{2017ApJ...839..120W}. For this object, \citet{Chartas.high-z.UFOs} estimate a photon index $\Gamma \sim2.2$, which is typical of narrow-line Seyfert 1 galaxies \citepads{1999ApJS..125..317L, 1999MNRAS.309..113V} and is a typical signature of highly accreting systems \citepads{2019ApJ...876..102H}. 
The Eddington ratio $\lambda_{\rm Edd}$ (defined as $L_{\rm Bol}/L_{\rm Edd}$) estimated for SDSS J1353+1138 is $\lambda_{\rm Edd}=0.20\pm0.02$ \citep{Chartas.high-z.UFOs}, which is larger by a factor of $\sim$3 than that inferred for HS 0810+2554 ($\lambda_{\rm Edd}=0.07\pm0.03$; \citealt{Chartas.high-z.UFOs}). Such results could support the recent post-burst scenario. Certainly, the two hypotheses are not mutually exclusive, and the real situation may be a combination of the two effects. The small values of the momentum-boost ($\sim0.02-2$) and of the kinetic efficiency ($\sim9-700\times10^{-5}$), inferred for the ionised outflows in SDSS J1353+1138 and HS 0810+2554, could be explained by an overall scarcity of [O III] (not only for the outflow component) in highly accreting AGNs, as observed in local galaxies with similar properties (e.g. \citeads{2000ApJ...536L...5S}). In our two QSOs, the modest outflow energetics is indeed mainly due to small outflow masses, not to particularly low velocities. Such a scenario is related to the Eigenvector-1 effect: while the sources accreting at high rates (close to the Eddington limit) are actually the most promising candidates for hosting an active UFO (e.g. \citeads{2019MNRAS.482L.134N}), they usually present very bright Fe II emission and faint, outshined [O III] emission. As a consequence, [O III] may not be ideal to trace the ionised phase of the outflow in AGNs accreting at high rates, since the bulk of the ionised gas could be in the form of different chemical species. It is also possible that we underestimated the uncertainty on the [O III] outflow energetics in HS 0810+2554, mostly because of our approximate procedure in the unlensed reconstruction. 
Moreover, given the recent tentative detection of a more massive CO outflow on large scales claimed in \citetads{2020MNRAS.496..598C}, such a small value of the momentum-boost of the ionised outflow is not unexpected, given that it accounts only for the ionised gas traced by the [O III] emission. Hence, we also determined the momentum-boost relative to the ionised plus molecular large-scale outflow ($\dot{p}_{\rm out,~tot}/(L_{\rm Bol}/c)\sim38$), assuming its velocity to be equal to the mass-weighted average between the ionised and the CO-molecular outflow velocities ($v_{\rm out, tot}\sim1044$ km s$^{-1}$). Given the two orders of magnitude of difference between the ionised and molecular outflow masses, the mass-weighted velocity is essentially the molecular outflow velocity ($v_{\rm out,~mol}=1040$ km s$^{-1}$; \citeads{2020MNRAS.496..598C}). In Fig. \ref{fig:pboost}, we report the CO+[O III] point with its uncertainty. Once the contribution of the molecular component is included, the energetics of the overall large-scale outflow in HS 0810+2554 is compatible with an energy-driven scenario of wind propagation, within the (large) UFO uncertainty. However, deeper observations are required to confirm the CO-outflow detection and to constrain its energetics. \subsection{Comparison with other QSOs hosting UFOs} \label{sec:comparison} \begin{figure*} \centering \includegraphics[width=17cm,keepaspectratio]{momentum_boost_ratio_new_new_h} \caption{{\small Ratio between the galaxy-scale and sub-pc scale outflow momentum rates for different QSOs hosting UFOs. Measurements for individual objects are shown in blue with the respective error bars, using different markers according to the gas phase of the observed large-scale outflow. The galaxy points are ordered by increasing $L_{\rm Bol}$. 
The horizontal dashed line shows the prediction for a momentum-driven wind ($\dot{p}_{\rm out}/\dot{p}_{\rm UFO}=1$), while the orange rectangles indicate the individual predictions for energy-driven winds. Filled and empty squares represent ionised outflow measurements based on H$\alpha$ and [O III] emission, respectively. For HS 0810+2554, our [O III]-based measurement is shown both alone and combined with the CO measurement of the molecular outflow \citepads{2020MNRAS.496..598C}, with the respective symbol and combination of symbols (see the plot legend).}} \label{fig:ratio} \end{figure*} Our study has revealed that the energetics of the galaxy-scale ionised outflow in HS 0810+2554 is consistent with the expectations of the momentum-driven mechanism, whereas in SDSS J1353+1138 the ionised outflow does not seem to be related to the UFO event detected on sub-pc scales. However, the small size of our two-QSO sample prevents us from testing the predictions of theoretical models, as well as from drawing general conclusions on the nature of the mechanism powering outflows on large scales. Therefore, we considered our results along with those of a sample of well-studied QSOs hosting both UFOs and galactic outflows, recently collected by \citetads{2020A&A...644A..15M}. The QSO sample consists of the two local objects MR 2251-178 and PG 1126-041, analysed by \citetads{2020A&A...644A..15M} to trace the ionised phase of the large-scale outflows, similarly to this work but at low redshift, plus the other local QSOs with reliable UFO and molecular or atomic outflow measurements. In terms of redshift, the only exception is the lensed QSO APM 08279+5255 at $z\sim3.9$ \citepads{2002ApJ...573L..77H}, which has been found to host a molecular outflow in the energy-driven regime \citepads{2017A&A...608A..30F}.
In addition to HS 0810+2554 and SDSS J1353+1138, this is the only other case at high redshift for which it has been possible to study the connection between the nuclear and the galaxy-scale winds. This highlights once again the importance of gravitational lensing as a powerful tool to overcome the limits imposed by current observations. \citetads{2020A&A...644A..15M} re-computed the UFO mass rate (and, consequently, the wind energetics) for each QSO of their gathered sample, starting from the known estimates of $M_{\rm BH}$, $N_{\rm H}$, and $v_{\rm UFO}$. All measurements for molecular outflows have been re-scaled to the same luminosity-to-mass conversion factor $\alpha_{\rm CO}=0.8~\text{M}_\odot~(\text{K km s}^{-1}~\text{pc}^2)^{-1}$, typical of QSOs, starburst, and submillimeter galaxies \citepads{1998ApJ...507..615D, 2013ARA&A..51..207B, 2013ARA&A..51..105C}. The main AGN, X-ray-wind, and large-scale outflow properties of the known QSO sample are listed in Table B.1 of \citetads{2020A&A...644A..15M}. Figure \ref{fig:ratio} represents the updated version of Fig. 9 in \citetads{2020A&A...644A..15M}, including the measurements for HS 0810+2554 and SDSS J1353+1138: it shows the ratios between the galaxy-scale and sub-pc scale outflow momentum rates ($\dot{p}_{\rm out}$ and $\dot{p}_{\rm UFO}$, respectively), ordered according to the increasing $L_{\rm Bol}$ of the host AGN. This is an alternative way (with respect to Fig. \ref{fig:pboost}) to compare observational results with theoretical predictions for each QSO and to distinguish between the two main regimes. The energetics of the large-scale outflow in 10 out of 12 QSOs turns out to be consistent (or nearly consistent) with either a momentum-driven or an energy-driven regime: in seven (six) and three (four) objects, respectively, if we exclude (include) the contribution of the tentatively detected molecular outflow in HS 0810+2554.
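The $\alpha_{\rm CO}$ rescaling mentioned above is a simple linear conversion, since a CO-based molecular gas mass is proportional to the adopted $\alpha_{\rm CO}$. A hedged Python sketch (the input mass and the original, Milky-Way-like conversion factor are hypothetical placeholders, not values from the compiled sample):

```python
# Hedged sketch: rescaling a CO-based molecular gas mass to a common
# luminosity-to-mass conversion factor alpha_CO, as done for the sample.
# The literature mass and original alpha_CO below are hypothetical.

ALPHA_CO_QSO = 0.8   # M_sun (K km/s pc^2)^-1, value adopted for QSOs/starbursts

def rescale_molecular_mass(m_old_msun, alpha_old, alpha_new=ALPHA_CO_QSO):
    """M_mol scales linearly with alpha_CO: M_new = M_old * (alpha_new / alpha_old)."""
    return m_old_msun * (alpha_new / alpha_old)

# e.g. a mass published with a Galactic-like alpha_CO = 4.3:
m_rescaled = rescale_molecular_mass(1.0e9, alpha_old=4.3)
```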
This globally confirms the conclusion drawn by \citetads{2020A&A...644A..15M} that models of either momentum- or energy-driven outflows describe the mechanism of wind propagation on galaxy scales very well. The only two exceptions are SDSS J1353+1138 and IRAS 17020+4544, representative of the two extreme opposite situations in which the energetics on large and nuclear scales seem to be completely unrelated. As discussed in Sect. \ref{sec:connecting}, the low value of $\dot{p}_{\rm out}/\dot{p}_{\rm UFO}$ in SDSS J1353+1138 could be due to the presence of a massive molecular outflow, which our work does not account for. Alternatively, it could indicate that SDSS J1353+1138 has recently undergone a burst episode of its AGN activity. In terms of AGN variability, the high value of $\dot{p}_{\rm out}/\dot{p}_{\rm UFO}$ inferred for IRAS 17020+4544 could be explained by invoking a higher BH accretion activity in the past compared to the present day; however, observations reveal that IRAS 17020+4544 is now accreting at a substantial fraction of its Eddington rate ($\lambda_{\rm Edd} \sim 0.7$; \citeads{2015ApJ...813L..39L}). Finally, we do not observe any remarkable trend of the $\dot{p}_{\rm out}/\dot{p}_{\rm UFO}$ values with $L_{\rm Bol}$, nor any evident dependence of the wind acceleration mechanism on galaxy redshift when separately inspecting the results obtained for the high-redshift (our two objects plus APM 08279+5255) and low-redshift QSOs (the remaining ones) in the sample. \section{Conclusions} \label{sec:conclusion} Galaxy-wide outflows powered by AGN activity are thought to play a fundamental role in shaping the evolution of galaxies, as they allow us to reconcile theoretical models with observations. However, even though observations have widely confirmed their presence in both local and high-redshift galaxies, a clear understanding of the mechanism that accelerates these powerful, galaxy-scale winds is still lacking.
To test the predictions of the current theoretical models, we need to compare, in a given object, the energetics of the sub-pc wind with that of the galaxy-wide outflow. The optimal sources in attempting to make a connection between the different scales are the powerful QSOs near the peak of AGN activity ($z\sim2$), where AGN feedback is expected to be most effective. With this perspective, this work focuses on two $z\sim1.5$ multiply lensed QSOs, specifically selected to host UFOs (HS 0810+2554 and SDSS J1353+1138) and observed with the near-IR integral field spectrometer SINFONI. Thanks to the strong lens magnification and to the spatially resolved SINFONI data, which trace the dynamics of the ionised gas phase through rest-frame optical emission lines, it has become possible, for the first time, to attempt to make the connection between the sub-pc winds and the large-scale ionised outflows in two QSOs near the peak of AGN activity. The only other well-studied case at high redshift is APM 08279+5255 ($z\sim3.9$), which has been suggested to host a molecular outflow in the energy-conserving regime \citepads{2017A&A...608A..30F}. Moreover, the recent analysis of ALMA data of HS 0810+2554 has revealed the tentative detection of a CO molecular outflow \citepads{2020MNRAS.496..598C}. Therefore, the characterisation of the ionised phase of the outflow in HS 0810+2554 provides the first three-phase description of an AGN-driven wind at high redshift, from the X-ray to the optical and mm bands, corresponding to the highly ionised, ionised, and molecular gas phases, respectively.
In the following, we summarise the main results obtained in this work: \begin{enumerate} \item We studied the gas kinematics to identify the presence of outflowing gas by tracing the emission of the forbidden line doublet [O III]$\lambda \lambda$4959,5007, whose line profile is highly asymmetric in the presence of outflows, with a typical broad, blueshifted wing corresponding to high speeds along the line of sight. In both QSO spectra, we detected the presence of extended ($\sim8$ kpc) ionised outflows moving at up to $v\sim2000$ km s$^{-1}$ in the image lens plane. \item After correcting for the gravitational lensing effects, we found that the ionised outflow in HS 0810+2554 is consistent, within the uncertainty, with the predictions for a momentum-driven regime, and with an energy-driven propagation once the contribution of the molecular outflow is included. On the contrary, the ionised outflow in SDSS J1353+1138 appears to be unrelated to the nuclear-scale energetics, likely requiring either the presence of a massive molecular outflow or strong variability of the QSO activity. \item By comparing our inferred results with those of the small sample of known QSOs from the literature, each hosting both sub-pc scale UFOs and neutral or ionised winds on galaxy scales, we found that the momentum- and energy-driven frameworks describe all the observed targets very well, with the exception of SDSS J1353+1138 and IRAS 17020+4544. Therefore, these driving mechanisms appear to explain how the energy released by the AGN activity is coupled with the galactic ISM, thus driving the wind propagation on large scales. \end{enumerate} Altogether, the observations presented in this work provide important pieces of information on the long sought-after ‘engine’ of large-scale outflows and feedback, for the first time at the crucial epoch for AGN feedback in galaxies, highlighting once again the power of integral field spectroscopy in this type of study.
Follow-up CO observations of these sources will be necessary to confirm the molecular outflow detection and to constrain its energetics in HS 0810+2554 and to test whether a massive molecular outflow is responsible for the mismatch between the wind energetics at different scales observed in SDSS J1353+1138. {\footnotesize \textit{Acknowledgments.} We thank the anonymous referee for comments and suggestions, which have improved the paper. We acknowledge support from the Italian Ministry for University and Research (MUR) for the BLACKOUT project funded through grant PRIN 2017PH3WAT.003. MP is supported by the Programa Atracción de Talento de la Comunidad de Madrid via grant 2018-T2/TIC-11715. MP acknowledges support from the Spanish Ministerio de Economía y Competitividad through the grant ESP2017-83197-P, and PID2019-106280GB-I00. Some data shown in this work were obtained from the Mikulski Archive for Space Telescopes (MAST).} \bibliographystyle{aa}
\section{Introduction} Much of condensed matter research is currently focused on the search for an experimental realization of a quantum spin liquid (QSL) state. This is a magnetic state in which spins do not order despite strong interactions, but their behavior is nevertheless determined by strong non-local correlations~\cite{Broholm2020,Savary2016}. It is already understood that this state can be brought about by the presence of strong geometric frustration or competing interactions. In addition, many candidate systems show some level of structural disorder. It is still an open question whether disorder suppresses quantum phenomena and merely mimics them experimentally, or whether it can itself be a factor leading to a QSL state~\cite{Savary2016}. Another important question is how to experimentally distinguish the effects of structural disorder from the effects produced by the dynamics of the lattice. In this work on the quantum spin ice candidate Pr$_2$Zr$_2$O$_7$ \cite{Kimura2013,Wen2017} we show that Raman scattering spectroscopy is able to separate dynamic effects from the effects of structural disorder. In the case of Pr$_2$Zr$_2$O$_7$, our study finds evidence of dynamic lattice effects and shows their importance for the magnetic state of this material. Pr$_2$Zr$_2$O$_7$ is a pyrochlore material, where the crystal structure provides a three-dimensional frustrated lattice. Pyrochlores are known to host classical spin ice~\cite{Gardner2010,Den2000,Melko2001,Bramwell2001} and quantum spin ice, where quantum fluctuations are no longer negligible~\cite{Ross2011,Gingras2014}. The magnetic properties of Pr$_2$Zr$_2$O$_7$ are defined by the magnetic moment of Pr$^{3+}$. In the crystal, the $^3H_4$ level of the $4f$ orbitals of Pr$^{3+}$, which carries a $J=4$ magnetic moment, is split under the influence of the electric fields associated with the local $D_{3d}$ crystal symmetry~\cite{koohpayeh2008,koohpayeh2014} into $2A_{1g} + A_{2g}+3E_g$ multiplets. This splitting determines the magnetic properties of the system.
The ground state of the system is the $E_g$ non-Kramers doublet. A non-Kramers doublet ground state makes the magnetic system sensitive to the lattice degrees of freedom, since a small deviation from the $D_{3d}$ local symmetry can bring about a splitting of the ground state. This is the basis of suggestions that local disorder is an important factor in the formation of a quantum spin ice ground state in Pr$_2$Zr$_2$O$_7$~\cite{Wen2017,Martin2017}, as well as of the possibility of studying quantum spin ice by magnetostriction, which was applied to Pr$_2$Zr$_2$O$_7$~\cite{Tang2020,Patri2020}. The crystal electric field (CEF) description treats the lattice degrees of freedom as static and decoupled from the electronic and magnetic degrees of freedom. This is not always a good approximation. For rare earth materials in particular, the presence of vibronic states, where phonons modulate crystal field levels, is possible due to the overlapping energy ranges of these excitations. These interactions are not yet widely studied for rare earth pyrochlores, but the example of Tb$_2$Ti$_2$O$_7$, where the vibronic coupling lies so close to the ground state that it can affect its properties, leading to a spin liquid state, shows that these dynamic interactions cannot be neglected~\cite{Constable2017,Zhang2020,Rau2019}. Here we present our new findings on the crystal field levels of Pr$^{3+}$ in Pr$_2$Zr$_2$O$_7$ and their coupling to the lattice. The high spectral resolution and symmetry selectivity of Raman scattering spectroscopy allow a new look at the importance of interactions with the lattice, going beyond simple crystal field splitting and the static approximation. We observe a splitting of the doubly degenerate crystal field levels, with the nature of the splitting being different for the $E_g$ levels of different energies.
We demonstrate that the $E_g$ level at 55 meV shows a splitting of 2.3 meV due to vibronic interactions missed in previous studies~\cite{Kimura2013,Bonville2016,Martin2017}. We also probe the $E_g$ ground-state splitting and its evolution with temperature through the analysis of transitions to the lowest excited $A_{1g}$ state. Our results suggest that the splitting is most prominent at temperatures around 100~K and decreases on cooling. We discuss possible static and dynamic origins of this effect. \section{Results} The temperature dependence of the Raman spectra of Pr$_2$Zr$_2$O$_7$ in the spectral range from 3.7 to 70~meV (30 to 565~cm$^{-1}$) at temperatures between 6 and 300~K is presented in Fig.~\ref{Fig1}. Raman spectra in the range up to 125 meV can be found in the SI. In the Raman spectra of Pr$_2$Zr$_2$O$_7$ we observe two types of excitations: (i) Raman-active phonons; (ii) CEF excitations of Pr$^{3+}$. Phonons were identified by polarization-resolved Raman measurements on the (100) surface~\cite{Xu2020phonons} and a comparison to DFT phonon calculations, while CEF excitations were assigned based on neutron scattering results~\cite{Kimura2013,Princep2013,Bonville2016}. \begin{figure}[!htb] \includegraphics[width=\linewidth]{Fig1.pdf} \caption{Temperature dependence of the Raman scattering spectra of Pr$_2$Zr$_2$O$_7$ in the temperature range from 6 to 300~K in the parallel polarization configuration $(x,x)$ in the spectral region of the CEF excitations. The spectra are shifted along the y-axis for clarity. CEF excitations are marked by green triangles. The full measured spectral range up to 125 meV can be found in the SI.} \label{Fig1} \end{figure} In this work we focus our attention on the CEF excitations in the spectra of Pr$_2$Zr$_2$O$_7$.
Raman scattering can detect spectral lines of CEF excitations with much higher energy resolution (0.125 meV in our experiments) than that of neutron scattering measurements, which can typically go down only to about 1 meV in high-resolution measurements for low signals~\cite{Gaudet2018}. CEF excitations show a much stronger temperature dependence of the line width than phonons~\cite{Sanjurjo1994} (see Fig.~\ref{Fig1}), which allows most of them to be observed only at low temperatures. \begin{table}[H] \caption{Frequencies and widths of Pr$^{3+}$ CEF levels obtained from the Raman scattering spectra at $T=14$~K. For the $A_{1g}$ excitation at 9.5 meV, the second component has below 10\% of the spectral weight and is not included in the table. For the CEF excitations around 55 meV, $v_1$ and $v_2$ are magnetoelastically induced vibronic states which possess $A_{1g}$ and $E_g$ symmetry, respectively. \label{table:CEF}} \centering \begin{ruledtabular} \begin{tabular}{ >{\centering\arraybackslash}l >{\centering\arraybackslash}m{0.38\linewidth} >{\centering\arraybackslash}m{0.38\linewidth}} Level & Frequency (meV) & Line width (meV) \\ \hline $A_{2g}$ & 109.0 & 1.5 \\ $E_g$ & 94.4 & 2.8 \\ $A_{1g}$ & 82.1 & 1.5 \\ $v_2~(E_g)$ & 57.1 & 1.2 \\ $v_1~(A_{1g})$ & 54.8 & 0.6 \\ $A_{1g}$ & 9.5 & 1.0 \\ \end{tabular} \end{ruledtabular} \end{table} The energies of the Pr$^{3+}$ CEF excitations observed in the Raman spectra of Pr$_2$Zr$_2$O$_7$ (see Table~\ref{table:CEF}) correspond well to those observed by neutron scattering~\cite{Kimura2013,Bonville2016}. However, the line shapes measured with the higher spectral resolution provide a wealth of new information. We present the spectra of the CEF excitations at 14~K with phonons subtracted in Fig.~\ref{Fig2}(a). The main result is the Raman observation of a splitting of the lower-energy doublet levels, and the evidence for different physical origins of the splitting of the different excitations.
For the higher-energy CEF states, we observe a difference in line width between singlet and doublet excitations. At 14~K the spectral line corresponding to the excitation of the doublet $E_g$ at 94 meV (762 cm$^{-1}$) shows a width of 2.8 meV, which is about two times larger than the width of the spectral lines of the singlet $A_{1g}$ and $A_{2g}$ excitations (1.5~meV), see Table~\ref{table:CEF} and Fig.~\ref{Fig2}(a). The line of the doublet $E_g$ at about 55 meV (460~cm$^{-1}$) is split into two components ($v_1$ and $v_2$) separated by 2.3 meV. The $v_1$ and $v_2$ components of the excitation show different symmetries, with the low-frequency component $v_1$ following the properties of the $xx+yy$ basis functions ($A_{1g}$ scattering channel), and the higher-frequency one ($v_2$) following the $x^2-y^2$ basis functions ($E_{g}$ scattering channel), see Fig.~\ref{Fig2}(a). On increasing the temperature, the excitation lines broaden (for details see SI, Fig.~3). All the lines of the CEF excitations harden by about 1~meV on the increase of temperature. At temperatures below about 20~K, the line of the CEF excitation to the lowest excited singlet $A_{1g}$ level at 9.5~meV (Fig.~\ref{Fig3}) shows an asymmetric shape. It can be well described by two symmetric Gaussian-Lorentzian line shapes, with the higher-frequency component carrying approximately 10\% of the total spectral weight of the excitation line. On the increase of the temperature, the spectral weight of the high-energy component increases, and the line develops into a well-defined doublet with overlapping components at about 9.5~meV and 10.4~meV. The energy difference between the two components of the doublet increases, and the lines broaden, on temperature increase (see Fig.~\ref{Fig3}(b)), until the components cannot be distinguished above 110~K. The doublet is identified most distinctly in the temperature range between 50~K and 100~K. The components of the doublet do not show any polarization dependence.
A slight asymmetry is also present in the line shapes of the higher-frequency $A_{1g}$ CEF excitations; however, at all temperatures they are much broader than the energy difference between the two components of the excitation at 9.5 meV. \section{Discussion} According to the average crystal structure~\cite{koohpayeh2014}, the $4f$ level of Pr$^{3+}$ in Pr$_2$Zr$_2$O$_7$ is split by the crystal field of $D_{3d}$ symmetry into $2A_{1g} + A_{2g} +3E_g$ multiplets. In the Raman scattering spectra we observe excitations from the ground state doublet to all higher-energy components of this multiplet~\cite{Kimura2013}. However, neither the symmetry-dependent splitting of the $E_g$ level at 55 meV, nor the splitting of the $A_{1g}$ level at 9.5~meV, can be understood within a simple picture of the crystal field splitting according to $D_{3d}$ symmetry. \subsection{Vibronic state for the $E_g$ excitation at 55 meV.} First, we discuss the origin of the splitting of the $E_g$ level at 55~meV. The splitting between the resulting lines (2.3 meV) is larger than the splitting of the ground state doublet (1~meV), the latter estimated from our data and from previously published neutron scattering data~\cite{Wen2017,Martin2017}. The well-defined symmetry of the components shows that the splitting cannot be understood in terms of structural disorder lifting the double degeneracy of the level, contrary to the previous interpretation~\cite{Martin2017}. Such a symmetry-defined splitting can occur on mixing with another excitation of $E$ symmetry, with $E \otimes E = E + A_1 + A_2$, where the resulting $A_1$ and $E$ excitations will be observed in the $(x,y)$ and $(x,x)$ scattering channels, as detected in our experiment. Thus the candidate excitation should have $E$ symmetry and an energy close to the 55 meV of the crystal field level. In Pr$_2$Zr$_2$O$_7$, as in other rare-earth based crystals, a good candidate for such an excitation is a phonon.
The theory describing this vibronic process was first developed by P.~Thalmeier et~al.~\cite{Thalmeier1982}. Typically, this mixing occurs when a CEF state and a phonon have the same symmetry and are close in frequency. The system can be described by the following Hamiltonian~\cite{Thalmeier1982} \begin{equation} \mathcal{H} = \mathcal{H}_0 + \mathcal{H}_{\rm{int}}, \label{eq:1} \end{equation} with the non-interacting part: \begin{equation} \mathcal{H}_0 = \sum_{\alpha n}\epsilon_\alpha \ket{\Gamma_\alpha^n}\bra{\Gamma_\alpha^n} + \hbar\omega_0\sum_\mu (a_\mu^\dagger a_\mu + \frac{1}{2}) \label{eq:2} \end{equation} and the interacting part \begin{equation} \mathcal{H}_{\rm{int}} = -g_0\sum_\mu U_\mu O_\mu. \label{eq:3} \end{equation} The non-interacting part of the Hamiltonian (Eqn.~\ref{eq:2}) is composed of a CEF level $\ket{\Gamma_\alpha}$ of a rare earth ion with degeneracy index $n$ and a coupled phonon with energy $\hbar\omega_0$. In the interacting part (Eqn.~\ref{eq:3}), $U_\mu = a_\mu + a_\mu^\dagger$ is the phonon displacement operator and $O_\mu$ is the quadrupolar operator transforming like the symmetry of the phonon~\cite{Lovesey2000, Ruminy2017}. The magnetoelastic coupling constant is given by $g_0$. Importantly, this process is not restricted to any particular part of the Brillouin zone. In the case of Pr$_2$Zr$_2$O$_7$, we can identify the phonon which produces the vibronic state. The calculated phonon dispersion in the relevant energy range is presented in Fig.~\ref{Fig2}(d). At the $\Gamma$-point of the BZ, the calculations are in good agreement with the experimentally determined frequencies of the Raman-active phonons~\cite{Xu2020phonons}, and both show the absence of a doubly degenerate phonon in the region of 55~meV. However, mixing is possible in other parts of the BZ, which also puts fewer restrictions on the phonon symmetry.
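A rough illustration of the level repulsion encoded in Eqs.~(\ref{eq:1})-(\ref{eq:3}): truncating the problem to the two nearly degenerate states (the bare CEF excitation and a one-phonon state) gives a $2\times2$ mixing Hamiltonian whose eigenvalues repel. In the sketch below, the on-resonance energy of 56 meV and the effective coupling $g_0=1.15$ meV are assumptions chosen so that the splitting reproduces the observed 2.3 meV; this is a toy two-level model, not the full multi-mode calculation:

```python
# Hedged toy model of the vibronic mixing: a CEF excitation at energy e_cef
# hybridises with a one-phonon state at energy e_ph through an effective
# matrix element g0. The values e_cef = e_ph = 56 meV and g0 = 1.15 meV are
# assumptions chosen to reproduce the observed 2.3 meV on-resonance splitting.
import math

def vibronic_levels(e_cef, e_ph, g0):
    """Eigenvalues of the 2x2 matrix [[e_cef, g0], [g0, e_ph]]."""
    mean = 0.5 * (e_cef + e_ph)
    half_gap = math.hypot(0.5 * (e_cef - e_ph), g0)
    return mean - half_gap, mean + half_gap

# On resonance the doublet splits symmetrically by 2*g0:
lo, hi = vibronic_levels(56.0, 56.0, 1.15)
```

On resonance the model gives components at 54.85 and 57.15 meV, close to the measured $v_1$ and $v_2$ positions of 54.8 and 57.1 meV; away from resonance the gap only grows with detuning.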
A simple assumption is that the energy of the unperturbed $E_g$ crystal field excitation would be found between the two split components, at 56 meV, as marked with a dashed line in Fig.~\ref{Fig2}. There are two phonon candidates that are very close in energy to this CEF excitation in the X to W part of the BZ. The most probable candidate is a phonon observed at 64.6~meV (63.5~meV is the calculated frequency) at the $\Gamma$ point ($T_{2g}$). The calculated dispersion of this phonon is plotted as a red line in Fig.~\ref{Fig2}(d). Its eigenvector involves the movement of the O1 oxygens, which modulate the Pr$^{3+}$ oxygen environment (see Fig.~\ref{Fig2}(e)). We observe this vibronic effect in the Raman spectra at the $\Gamma$-point because the CEF excitations do not show dispersion, and the splitting that results from interactions in a certain part of the BZ leads to a splitting of the CEF levels observed over the whole BZ~\cite{Gaudet2018}. Our high-resolution measurements of the CEF excitations allow us to refine the crystal field parameters~\cite{Bonville2016} and to obtain values of the magneto-elastic coupling constants, as shown in detail in the SI. Rare earth atoms show CEF excitations in the energy range of the lattice phonons in many materials, and the vibronic effect involving the CEF excitations may be relatively common~\cite{Thalmeier1982,Thalmeier1984,Gaudet2018}, though not broadly studied. Among pyrochlore rare-earth-based compounds, vibronic states are found, for example, in Ho$_2$Ti$_2$O$_7$~\cite{Gaudet2018} and Tb$_2$Ti$_2$O$_7$~\cite{Constable2018}. The latter is an especially interesting case, because the mixing of the very low-lying first excited state with a phonon might be an origin of the spin liquid state in this material~\cite{Gingras2014}. \begin{figure}[!htb] \includegraphics[width=\linewidth]{Fig2.pdf} \caption{(a) Raman spectra of the CEF excitations of Pr$_2$Zr$_2$O$_7$ at 14~K. (b) A scheme of the CEF levels of Pr$^{3+}$ in Pr$_2$Zr$_2$O$_7$.
(c) Diagram of the vibronic coupling between the phonons and CEF states. (d) Pr$_2$Zr$_2$O$_7$ phonon dispersion obtained by DFT calculations. The energy range close to the vibronic features is shown. Dashed blue lines mark the experimentally observed CEF vibronic excitations. The vibronic coupling is expected to occur at the intersection point of the non-split CEF doublet (solid blue line) and the $T_{2g}$ phonon branch (red curve). (e) Atomic displacements of the $T_{2g}$ phonon mode. Upper panel: view of the unit cell. Lower panel: the PrO$_8$ polyhedron viewed from the $[111]$ direction.} \label{Fig2} \end{figure} \subsection{Probing the splitting of the ground state doublet.} \begin{figure}[!htb] \includegraphics[width=\linewidth]{Fig3.pdf} \caption{(a) Temperature dependence of the Raman scattering spectra in the energy range of the lowest-lying CEF excitation $E_g^0 \rightarrow A_{1g}$. (b) Upper panel: positions of the two components of the transition ($\omega_a$ and $\omega_b$); middle panel: line widths of the two components of the excitation ($\omega_a$ and $\omega_b$); lower panel: the ratio of the spectral weight of $\omega_b$ to that of $\omega_a$. (c) The scheme of the excitation $E_g^0 \rightarrow A_{1g}$ illustrates that the shape of the electronic density distribution $\rho$ of the $E^0_g$ level, such as its splitting, can define the shape of the observed spectral excitation line. Histograms demonstrate the density of the ground state splitting $\rho(\omega)$ obtained by the deconvolution of the Raman scattering spectra at 14, 40, and 80 K.} \label{Fig3} \end{figure} The splitting of the $A_{1g}$ CEF excitation at 9.5~meV has a very different character. The doublet line of this excitation does not show any polarization dependence.
This symmetry consideration, together with the absence of $A_{1g}$ $\Gamma$-point phonons close to 9.5 meV, allows us to dismiss a vibronic-state interpretation~\footnote{For the $A_{1g}$ singlet level, only a vibronic interaction with a $\Gamma$-point phonon would produce a doublet}. A splitting of the spectral line corresponding to the $E_g^0 \rightarrow A_{1g}$ excitation can instead reflect the doublet structure of the $E_g^0$ ground state level, as is schematically shown in Fig.~\ref{Fig3}. This can be understood easily by considering the relevant Raman intensity $\chi''(\omega)$ at each frequency $\omega_i$: \begin{equation} \chi''(\omega_i, T) = B_{nk}\int_{-\infty}^{\infty}\rho(\omega', T)L(\omega_i - \omega', T)d\omega'. \end{equation} Here the natural width of the $A_{1g}$ singlet level is a Lorentzian function $L(\omega,T)$ determined by the lifetime of the level $\tau(T)$~\cite{Sanjurjo1994}, $\rho(\omega,T)$ is the density of the $E^0_g$ level, and $B_{nk}$ is the probability of the $E_g \rightarrow A_{1g}$ transition. We can use a deconvolution procedure on the experimental Raman intensity $\chi''(\omega)$ to obtain $\rho(\omega,T)$. The strong temperature dependence of the shape of the Raman excitation at around 9.5~meV reflects the change of $\rho(\omega,T)$ with temperature. We show $\rho(\omega,T)$ for a number of temperatures in Fig.~\ref{Fig3}(c). While the doublet structure is pronounced at temperatures between 100 and 40~K, the relative spectral weight of the high-frequency component decreases on cooling, reaching only 10\% of the spectral weight of the lower-frequency component below 30 K. This temperature dependence is opposite to that expected from the thermal population of the levels. $\rho(\omega_i,T)$ approaches that of a single non-split $E_g^0$ level as the temperature is reduced.
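A minimal numerical sketch of the convolution above: $\rho(\omega)$ is modeled by two discrete components convolved with a Lorentzian of fixed width. The component positions follow the reported 9.5 and 10.4 meV values, while the 90/10 weights and the 0.3 meV half-width are illustrative assumptions, not fitted parameters:

```python
# Hedged sketch of the line-shape model: chi''(w) as the convolution of a
# two-component ground-state density rho with the Lorentzian natural width
# of the A_1g level. Weights and the 0.3 meV width are illustrative only.
import math

def lorentzian(w, w0, gamma):
    # Unit-area Lorentzian of half-width gamma centered at w0.
    return (gamma / math.pi) / ((w - w0) ** 2 + gamma ** 2)

def chi(w, components, gamma):
    """chi''(w) for rho given as discrete (position, weight) components."""
    return sum(weight * lorentzian(w, w0, gamma) for w0, weight in components)

rho = [(9.5, 0.9), (10.4, 0.1)]              # (meV, relative spectral weight)
grid = [9.0 + 0.01 * i for i in range(201)]  # 9.0 .. 11.0 meV
spectrum = [chi(w, rho, 0.3) for w in grid]
peak = grid[max(range(len(grid)), key=lambda i: spectrum[i])]
```

With a dominant low-energy component, the convolved line peaks near 9.5 meV and shows only a weak high-energy shoulder, mimicking the asymmetric low-temperature line shape; the deconvolution used in the text runs this mapping in reverse.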
The non-smooth temperature dependence of the parameters of the $E^0_g$ components at around 30 K reflects the decrease of the weak high-energy component down to below 10\% of the total weight. Interestingly, in this temperature range, a change of slope in the magnetic susceptibility is observed (see SI) \cite{Kimura2013, Bonville2016}. At temperatures below about 30 K the susceptibility is well described by a non-split ground state CEF level only \cite{Bonville2016}. The observation of the $E^{0}_g$ doublet in Raman scattering was possible due to the high spectral resolution, and presents a more complicated picture than the single-band Gaussian distribution with a width of 1~meV obtained from neutron scattering data~\cite{Martin2017,Wen2017}. The latter was suggested to be a result of structural disorder~\cite{Wen2017}, and in particular of random strain~\cite{Martin2017}. The presence of the two components of $E^{0}_g$ separated by about 1~meV can explain a feature in the heat capacity of Pr$_2$Zr$_2$O$_7$ observed at about 10~K~\cite{Kimura2013}. While random disorder cannot be the origin of the well-defined splitting, the split ground state doublet can be a result of a deviation of the Pr$^{3+}$ environment from $D_{3d}$ symmetry. Such a deviation may be produced, for example, by a shift of the Pr$^{3+}$ atom from its position in the average structure, as was suggested by B.~Trump et~al.~\cite{Trump2018}. The temperature dependence of the spectra corresponds to a decrease of the deviation of the Pr$^{3+}$ environment from $D_{3d}$ on cooling, which can occur due to changes of the structure on thermal contraction of the crystal. Alternatively, the splitting can originate from a dynamic process. Such a process could be a phonon-like ``jump'' of the Pr atom between a central position and an off-center potential minimum. Such a picture can explain the temperature dependence of both the intensities and the widths of the $E_g^0$ components (Fig.~\ref{Fig3}).
On cooling, this dynamic process slows down, or the phonon excitation becomes depopulated, leading to the redistribution of spectral weight and the narrowing of the levels that belong to the ground state $E_g^0$ doublet. If such a phonon mode exists, it would be a dipole-active excitation observed in the GHz regime. \section{Conclusions} In this work we perform a high-resolution, symmetry-resolved Raman scattering study of the crystal field levels of Pr$^{3+}$ in Pr$_2$Zr$_2$O$_7$, and show that dynamic interactions with the lattice are the dominant reason for the splitting of the doublet crystal field levels. We show that the 2.3~meV splitting of the $E_g$ crystal field level at 55~meV originates from a vibronic interaction with a phonon. We detect a splitting of the ground state doublet by analyzing the transition to the first excited state, $E_g^0 \rightarrow A_{1g}$. The splitting has a strong temperature dependence. We suggest possible interpretations in terms of a static or dynamic shift of Pr$^{3+}$ away from the $D_{3d}$ symmetry position. \section{Acknowledgements} The authors are thankful to J.~Gaudet, C.~Broholm, S.~Bhattacharjee, T.~McQueen, J-J.~Wen, and O.~Tchernyshyov for useful discussions. This work was supported as part of the Institute for Quantum Matter, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No.~DE-SC0019331.
This work in Japan is partially supported by CREST (Grant Number: JPMJCR18T3 and JPMJCR15Q5), by the New Energy and Industrial Technology Development Organization (NEDO), by Grants-in-Aid for Scientific Research on Innovative Areas (Grant Number: 15H05882 and 15H05883) from the Ministry of Education, Culture, Sports, Science, and Technology of Japan, and by Grants-in-Aid for Scientific Research (Grant Number: 19H00650) and the Program for Advancing Strategic International Networks to Accelerate the Circulation of Talented Researchers (Grant Number: R2604) from the Japan Society for the Promotion of Science (JSPS).
\section{Introduction} A pair $(N,M)$ of commuting von Neumann algebras is called \emph{split} if there is a Type I factor $F$ such that $N \subset F \subset M'$~\cite{DoplicherL}. In applications to physics, $N$ and $M$ are typically generated by local observables located in two disjoint (or, in relativistic theories, spacelike separated) regions $\Lambda_1$ and $\Lambda_2$. The split property can then be interpreted as a type of statistical independence between the regions. More precisely, one can locally prepare a normal state $\varphi$ such that, restricted to measurements in $\Lambda_i$, we have $\varphi(AB) = \varphi_1(A)\varphi_2(B)$ for given normal states $\varphi_i$ on the algebra generated by observables localized in $\Lambda_i$~\cite{Werner}. In particular, it means that there is no entanglement between the two parts. Alternatively, the Type I factor allows us to find a tensor product decomposition of the Hilbert space, with the algebras $N$ and $M$ acting on distinct factors. Such a decomposition is far from obvious in systems with infinitely many degrees of freedom and may not even exist for a given bipartition of the system. Early applications have been in algebraic quantum field theory~\cite{BuchholzW}, for example in the study of entanglement properties of the vacuum~\cite{SummersW}. More recently the split property has found applications in the classification of phases of 1D gapped quantum spin systems. Under quite general conditions one can show that the split property holds in ground state representations. In particular, Matsui~\cite{Matsui13} showed that if $\omega$ is a pure ground state of a gapped local Hamiltonian (on the chain), it satisfies the split property in the sense that $\omega$ is quasi-equivalent to $\omega_L \otimes \omega_R$. Here $\omega_L$ (resp. $\omega_R$) is the ground state restricted to the left (resp. right) half-chain ${\mathcal A}_L$ (${\mathcal A}_R$).
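As a toy finite-dimensional illustration of this statistical independence (a sketch only; the dimensions $d_1, d_2$ are hypothetical choices, not part of the operator-algebraic setting above), one can check numerically that a product vector state factorizes expectation values across the two tensor factors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite-dimensional stand-ins for the two subsystems.
d1, d2 = 2, 3
psi1 = rng.normal(size=d1) + 1j * rng.normal(size=d1)
psi2 = rng.normal(size=d2) + 1j * rng.normal(size=d2)
psi1 /= np.linalg.norm(psi1)
psi2 /= np.linalg.norm(psi2)
psi = np.kron(psi1, psi2)          # product state on H_1 (x) H_2

A = rng.normal(size=(d1, d1))      # observable "localized" in region 1
B = rng.normal(size=(d2, d2))      # observable "localized" in region 2
A = A + A.T                        # make both self-adjoint
B = B + B.T

AB = np.kron(A, B)                 # A (x) B acts on the full space
lhs = np.vdot(psi, AB @ psi)                              # phi(AB)
rhs = np.vdot(psi1, A @ psi1) * np.vdot(psi2, B @ psi2)   # phi_1(A) phi_2(B)
assert np.isclose(lhs, rhs)
```

In the split situation the Type I factor $F$ plays the role of the first tensor leg of such a decomposition.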
In this case this is equivalent to saying that the inclusion $\pi_\omega({\mathcal A}_L)'' \subset \pi_\omega({\mathcal A}_R)'$ is split in the sense above, where $\pi_\omega$ is a GNS representation for $\omega$~\cite{Matsui01} (see also~\cite[Remark 1.5]{Ogata19b}). The split property can then be used to define an $H^2(G, U(1))$-index for a unique gapped ground state on a quantum spin chain with finite group on-site symmetry~\cite{Ogata19a}, as well as a ${\mathbb Z}_2$-valued index for reflection symmetry, generalizing a construction by Pollmann \emph{et al.}~\cite{PollmannTBO} for matrix product states. The index was used to prove a general Lieb-Schultz-Mattis type theorem in \cite{OgataTT}. For fermionic chains, the split property for a unique gapped ground state was proven in \cite{Matsui20}. Bourne and Schulz-Baldes and, independently, Matsui introduced a $\mathbb{Z}_2$-index for \emph{fermionic} chains {\it without symmetry}~\cite{BourneSB,Matsui20}. A classification of SPT-phases with on-site symmetry in 1D fermionic chains based on the split property was carried out in~\cite{BO}. There, a ${\mathbb Z}_2\times H^1(G,{\mathbb Z}_2)\times H^2(G, U(1)_{\mathfrak p})$-valued index was found using the split property. The split property is essential in all these constructions: it allows one to factor the Hilbert space into a tensor product with the left half-chain acting on one factor, and the right half-chain on the other. The Type I factor $F$ is such that $F \simeq \alg{B}(\mathcal{H}_L) \otimes I$ with respect to this decomposition. This can then be used to extend a symmetry $\beta_L$ of the spin chain to an automorphism of $F$, which by Wigner's theorem can be implemented by an (anti-)unitary. This in turn can be used to define an index. In higher dimensions the situation is much more complicated, and the split property fails to hold in interesting models. For example, consider Kitaev's toric code model~\cite{KitaevQD}.
Then one can consider a cone-like region (extending to infinity) $\Lambda$ and its complement, as an analogue of the two half-chains in 1D. It turns out that the translation invariant ground state $\omega$ of the toric code is \emph{not} split with respect to this bipartition~\cite{Naaijkens12,FiedlerN}, in contrast with the 1D case discussed above. In fact, one of the goals of the present work is to argue that the failure of the split property is in fact necessary to get anyonic excitations. More precisely, the split property fails because the state is long-range entangled. Thus, our work confirms the folklore statement that long-range entanglement is a necessary condition for anyonic excitations. It turns out that at least for abelian quantum double models a weaker version of the split property \emph{is} true. That is, if one considers a pair of cones $\Lambda_1 \subset \Lambda_2$ whose boundaries are sufficiently far apart, there is a Type I factor $F$ such that $\pi_\omega(\alg{A}_{\Lambda_1})'' \subset F \subset \pi_\omega(\alg{A}_{\Lambda_2^c})'$~\cite{FiedlerN}. ``Sufficiently far'' depends on the model: in the abelian quantum double models, it is enough that the distance between their boundaries is greater than one. In general, and in this paper as well, we need in addition that $\Lambda_2$ has a wider opening angle than $\Lambda_1$. This should be compared with the setting in relativistic quantum field theory mentioned earlier, where the split property fails if the closures of the two regions intersect, but holds when they are spacelike separated. This property is sometimes called the \emph{distal} or \emph{approximate} split property to distinguish it from the situation in e.g. 1D systems. Despite being weaker than the split property, it still has important applications.
For example, in two dimensional systems the approximate split property is one of the assumptions used in relating the total quantum dimension (a property of the superselection sectors) to the index of a certain subfactor~\cite{NaaijkensKL}. This result can be used to show one has found all superselection sectors of a given model. A variant also plays a role in the discussion of ``approximately localized'' superselection sectors~\cite{ChaNN18}. The interest of this paper is in these split and approximate split properties in 2D quantum spin systems. Although most of our results can be straightforwardly generalised to higher dimensions, we restrict to 2D. The reason is that we are particularly interested in applying our results to study anyons, and in higher dimensions the cone-localized sectors we consider automatically have bosonic or fermionic statistics (cf.~\cite{BuchholzF}). We regard a state with the split property as having small entanglement with respect to the given cut. From this point of view, a state which cannot be transformed into a split state via quasi-local automorphisms has long-range entanglement. Or to be more precise, we consider a slightly more restrictive class of automorphisms which we call \emph{quasi-factorizable}. (See subsection \ref{qasubsec} for the definition of quasi-local automorphisms and their importance in the theory of gapped ground state phases.) Anyons, if they exist, can be identified with superselection sectors of the model (see Section~\ref{sec:select} for an introduction). We show that the existence of a non-trivial superselection sector of a state $\omega$ implies that the state $\omega$ is long-range entangled. That is, long-range entanglement is a necessary condition to have non-trivial anyons. Moreover, this is stable under applying ``quasi-factorizable'' automorphisms, defined below. 
For a class of Hamiltonians consisting of local commuting projectors, Haah~\cite{Haah} introduced an ingenious index whose non-trivial value implies that a quantum circuit of depth on the order of the system size is needed to transform the ground state into a product state. Our result is in accordance with these results. In general, the split property itself in 2D is not stable under quasi-local automorphisms. We show, however, that the approximate split property is stable. The key technical ingredient for the proof is a factorization property of quasi-local automorphisms $\alpha_s$. This result may be of independent interest. More precisely, we show that under mild assumptions, $\alpha_s$ is \emph{quasi-factorizable} in the following sense. In the definition below, $\Gamma$ is the set of all sites of the system, and for any subset $\Lambda \subseteq \Gamma$, $\mathcal{A}_\Lambda$ is the corresponding quasi-local $C^*$-algebra of observables localized in $\Lambda$ (see below). \begin{definition}\label{def:quasifactor} Let $\alpha$ be an automorphism of $\mathcal{A}_\Gamma$ and consider an inclusion of cones \[ \Gamma_1' \subset \Lambda \subset \Gamma_2' \subset \Gamma. \] We say that $\alpha$ is \emph{quasi-factorizable} with respect to this inclusion if there is a unitary $u \in \mathcal{A}_\Gamma$ and automorphisms $\alpha_\Lambda$ and $\alpha_{\Lambda^c}$ of $\mathcal{A}_{\Lambda}$ and $\mathcal{A}_{\Lambda^c}$ respectively, such that \[ \alpha = \mathop{\mathrm{Ad}}\nolimits(u) \circ \widetilde{\Xi} \circ (\alpha_{\Lambda} \otimes \alpha_{\Lambda^c}), \] where $\widetilde{\Xi} = \Xi \otimes \mathop{\mathrm{id}}\nolimits$ for an automorphism $\Xi$ of $\mathcal{A}_{\Gamma_2' \setminus \Gamma_1'}$ and $\Lambda^c := \Gamma \setminus \Lambda$. \end{definition} The key advantage is that one can replace the ``exponential tails'' of $\alpha_s$ by strict locality, up to conjugation with a unitary in $\alg{A}_\Gamma$.
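A minimal finite-dimensional caricature of Definition~\ref{def:quasifactor} (a sketch under simplifying assumptions of our own: four qubits, strictly local random gates, trivial $\Xi$; none of this is part of the infinite-volume setting above) is a depth-two circuit across a cut. The boundary gate plays the role of the unitary $u$, and the two bulk gates give $\alpha_\Lambda \otimes \alpha_{\Lambda^c}$:

```python
import numpy as np

def rand_unitary(d, rng):
    # Haar-like random unitary via QR decomposition
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(a)
    return q

rng = np.random.default_rng(0)
I2, I4 = np.eye(2), np.eye(4)

# Four qubits 0..3, cut between Lambda = {0,1} and its complement {2,3}.
g01 = rand_unitary(4, rng)   # bulk gate inside Lambda
g23 = rand_unitary(4, rng)   # bulk gate inside the complement
g12 = rand_unitary(4, rng)   # boundary gate straddling the cut

bulk = np.kron(g01, g23)            # implements alpha_Lambda (x) alpha_{Lambda^c}
u = np.kron(I2, np.kron(g12, I2))   # the unitary "u" of the definition
U = u @ bulk                        # full circuit: alpha = Ad(u) o (bulk part)

# Conjugate an operator supported on qubit 0 by the full circuit; its support
# generally spills over the cut, but after stripping off u it is strictly
# localized in Lambda = {0,1}:
A = rand_unitary(2, rng)
A_full = np.kron(A, np.eye(8))
alpha_A = U @ A_full @ U.conj().T
B = u.conj().T @ alpha_A @ u
B_red = np.trace(B.reshape(4, 4, 4, 4), axis1=1, axis2=3) / 4
assert np.allclose(B, np.kron(B_red, I4))
```

In the infinite-volume statement the content of quasi-factorizability is precisely that the quasi-local tails of $\alpha_s$ can be traded for such a strictly local picture at the cost of one unitary.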
In sector theory, for example, such strict locality is very useful, and one is only interested in representations up to unitary equivalence. This factorization property was first used in~\cite{Ogata19a}, in proving the stability of the index of 1D SPT phases. Following this idea, in~\cite{Moon} the stability of the split property in 1D was shown. Its $2$-dimensional version is essential here, but an extra complication is that in 2D or higher, the boundary between the regions we will consider is infinite. This makes locality estimates much more subtle. Coincidentally, this more complicated geometry is also a key reason why Matsui's result on the split property for 1D spin chains~\cite{Matsui13} does not generalize to higher dimensions. A special case of the 2D-version (with respect to cone-like regions with common apex) of the factorization property is also used in~\cite{Ogata21}, to define an $H^{3}(G,U(1))$-valued index and to show its stability. In Section~\ref{sec:prelim} we fix notation and recall some basic facts about Lieb-Robinson bounds and quasi-local maps, and give a brief overview of the relation between anyons and superselection sectors. Then, in Section~\ref{sec:stable}, we prove the factorization property of quasi-local automorphisms in a general setting. In Section~\ref{sec:lre} we consider states in 2D which are quasi-equivalent to a product state, and hence satisfy the strict split property. In particular, we show that the states in this gapped quantum phase have trivial superselection structure. Finally, in Section~\ref{sec:approxsplit} we show that our main technical result applies to a natural class of quasi-local automorphisms, and use this to show that the approximate split property is stable in such models. \emph{Acknowledgments.} PN was supported in part by funding from the European Union's Horizon 2020 research and innovation program under the European Research Council (ERC) Consolidator Grant GAPS (No. 648913).
YO is supported in part by JSPS KAKENHI Grant Number 16K05171 and 19K03534. She was also supported by JST CREST Grant Number JPMJCR19T2. \section{Preliminaries}\label{sec:prelim} We first fix the setting and introduce the main definitions. A key part is played by quasi-local maps and Lieb-Robinson bounds. For a state-of-the-art overview of the topic see~\cite{NSY}; for our purpose the most relevant facts will be recalled here. We largely adopt the notation of~\cite{NSY}. We assume basic familiarity with the operator algebraic formulation of quantum spin systems (see e.g.~\cite{BratteliR1,BratteliR2}). Let $(\Gamma, d)$ be a countable metric space which is $\nu$-regular, i.e., \begin{align}\label{nreg} \sup_{x\in \Gamma}\left \vert b_{x}(n)\right \vert\le \kappa n^{\nu},\quad 1\le n\in {\mathbb N}, \end{align} for some constant $\kappa>0$. Here, we used the notation \begin{align} b_{x}(n):=\left\{ y\in\Gamma \mid d(x,y)\le n \right\}. \end{align} In concrete applications we typically consider $\Gamma = \mathbb{Z}^\nu$ (or its edges) with the usual metric, but for now we keep the discussion as general as possible. Let ${\mathcal P}_{0}(\Gamma)$ be the set of all finite subsets of $\Gamma$. For $\Lambda \in {\mathcal P}_{0}(\Gamma)$ we set \begin{align}\label{ag} {\mathcal A}_{\Lambda}:=\bigotimes_{x\in\Lambda}{\mathcal B}({\mathcal H}_{x}), \end{align} where ${\mathcal H}_{x}$ are finite dimensional Hilbert spaces whose dimensions are uniformly bounded: \begin{align}\label{agg} \sup_{x\in\Gamma} \dim {\mathcal H}_{x}<\infty. \end{align} If $\Lambda_1 \subset \Lambda_2$ there is a natural inclusion of algebras, and hence we can write \begin{align} {\mathcal A}_{\Gamma}^{\rm loc} :=\bigcup_{\Lambda\in {\mathcal P}_{0}(\Gamma)}{\mathcal A}_{\Lambda} \end{align} for the algebra of local observables. To get the $C^*$-algebra ${\mathcal A}_{\Gamma}$ of quasi-local observables we take the norm closure of ${\mathcal A}_{\Gamma}^{\rm loc}$.
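The regularity condition \eqref{nreg} is easily verified in the standard examples. For instance, for $\Gamma={\mathbb Z}^{\nu}$ equipped with the $\ell^{\infty}$-metric (so that the ball $b_{x}(n)$ is a cube of side $2n+1$) one has
\begin{align}
\left \vert b_{x}(n)\right \vert=(2n+1)^{\nu}\le (3n)^{\nu}=3^{\nu}n^{\nu},\quad 1\le n\in {\mathbb N},
\end{align}
so that \eqref{nreg} holds with $\kappa=3^{\nu}$; the Euclidean or $\ell^{1}$ metric gives the same power of $n$ with a different constant.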
In general, if $\Lambda \subset \Gamma$ is any subset of $\Gamma$, ${\mathcal A}_\Lambda$ is the norm closure of $\bigcup_{\Lambda_0 \subset \Lambda, \Lambda_0 \in {\mathcal P}_{0}(\Gamma)} {\mathcal A}_{\Lambda_0} $. We denote by $ {\mathcal U}\left ( {\mathcal A}_{\Gamma}\right )$ the set of all unitaries in $ {\mathcal A}_{\Gamma}$. For any subset $X$ of $\Gamma$, we denote by $\Pi_{X}$ the conditional expectation onto ${\mathcal A}_{X}$ given by the tracial state on ${\mathcal A}_{X^{c}}$. These maps will be used to approximate quasi-local observables by local ones. For any $m\in{\mathbb N}\cup \{0\}$ and $X\subset\Gamma$, we set \begin{align} X(m):=\left\{ x\in\Gamma\mid d(x,X)\le m\right\}. \end{align} Furthermore, we define \begin{align} \Delta_{X(m)}:=\Pi_{X(m)}-\Pi_{X(m-1)},\quad m\in{\mathbb N},\; X\subset \Gamma. \end{align} Note that we have \begin{align} \left \Vert \Delta_{X(m)}\left ( A\right ) \right \Vert \le 2 \left \Vert A\right \Vert,\quad A\in{\mathcal A}_{\Gamma}, \end{align} since each $\Pi$ is a norm-one projection. \subsection{Split property} We will be interested in the split property with respect to different regions of $\Gamma$, leading to the following definition. \begin{definition} \label{defn:split} Let $\Gamma_1 \subset \Gamma_2 \subset \Gamma$ and $\omega$ a pure state of ${\mathcal A}_\Gamma$. Then we say that $\omega$ is \emph{split} with respect to the inclusion $\Gamma_1 \subset \Gamma_2$ if there is a Type I factor $F$ such that \begin{equation} \pi( {\mathcal A}_{\Gamma_1} )'' \subset F \subset \pi( {\mathcal A}_{\Gamma_2})'', \end{equation} where $\pi$ is a GNS representation for $\omega$. \end{definition} Conjugating with a unitary does not affect the split property. Furthermore, one would expect that automorphisms of ${\mathcal A}_{\Gamma_1}$ and ${\mathcal A}_{\Gamma_2^c}$ have no effect on the existence of the Type I factor $F$.
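In the simplest finite-dimensional caricature of Definition~\ref{defn:split} (toy dimensions $d_1,d_2$ of our own choosing; infinite systems are precisely where this naive picture can fail), the interpolating factor can be exhibited directly as $F=\mathcal{B}(\mathcal{H}_1)\otimes 1$, and its defining properties checked by linear algebra: the commutant of $F$ is $1\otimes\mathcal{B}(\mathcal{H}_2)$, and the center of $F$ consists of the scalars, so $F$ is a (Type I) factor.

```python
import numpy as np

d1, d2 = 2, 3                    # toy dimensions (hypothetical choice)
n = d1 * d2

def comm_constraints(gens):
    """Stack the linear maps M -> G M - M G (row-major vectorization)."""
    return np.vstack([np.kron(G, np.eye(n)) - np.kron(np.eye(n), G.T)
                      for G in gens])

def matrix_units(d):
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d)); E[i, j] = 1.0
            yield E

# Generators of F = B(H_1) (x) 1.
gens_F = [np.kron(E, np.eye(d2)) for E in matrix_units(d1)]

# The commutant F' is the null space of the constraint map: it should be
# 1 (x) B(H_2), of dimension d2^2.
sv = np.linalg.svd(comm_constraints(gens_F), compute_uv=False)
assert sum(s < 1e-10 for s in sv) == d2 ** 2

# Adding the generators of 1 (x) B(H_2) cuts the null space down to the
# scalars: the common commutant of F and F' is the center of F, so a
# one-dimensional null space means F is a factor.
gens_all = gens_F + [np.kron(np.eye(d1), E) for E in matrix_units(d2)]
sv2 = np.linalg.svd(comm_constraints(gens_all), compute_uv=False)
assert sum(s < 1e-10 for s in sv2) == 1
```

Conjugating by a unitary $u$ simply replaces $F$ by $uFu^{*}$, which is again Type I, matching the remark above.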
We can even allow for a non-trivial automorphism on a ``widening'' of the region $\Gamma_2 \setminus \Gamma_1$, at the expense of shrinking (resp. growing) the two regions in the definition of the split property. This is the idea behind the next proposition. \begin{prop}\label{prop:splitstable} Let $\Gamma_0\subset\Gamma_1\subset\Gamma_2\subset\Gamma_3$ be a sequence of subsets in $\Gamma$. Let $\omega$ be a pure state on ${\mathcal A}_{\Gamma}$ and suppose that it is split with respect to $\Gamma_1 \subset \Gamma_2$. Let $\alpha$ be an automorphism of ${\mathcal A}_{\Gamma}$. Let $\alpha_{\Gamma_2^c}$, $\alpha_{\Gamma_2\setminus \Gamma_1}$, $\alpha_{\Gamma_1}$ be automorphisms of ${\mathcal A}_{\Gamma_2^c}$, ${\mathcal A}_{\Gamma_2\setminus \Gamma_1}$, ${\mathcal A}_{\Gamma_1}$ respectively. Define an automorphism $\tilde\alpha$ of ${\mathcal A}_{\Gamma}$ by \begin{align} \tilde \alpha:=\alpha_{\Gamma_2^c}\otimes \alpha_{\Gamma_2\setminus \Gamma_1}\otimes \alpha_{\Gamma_1}. \end{align} Suppose moreover that there is an automorphism $\beta_{\Gamma_3\setminus \Gamma_0}$ of ${\mathcal A}_{\Gamma_3\setminus \Gamma_0}$ and a unitary $u\in{\mathcal A}_{\Gamma}$ such that \begin{align}\label{eq:factorize} \alpha=\mathop{\mathrm{Ad}}\nolimits(u)\circ \tilde\alpha\circ\left (\tilde \beta_{\Gamma_3\setminus \Gamma_0}\right ), \end{align} where $\tilde \beta_{\Gamma_3\setminus \Gamma_0} =\beta_{\Gamma_3\setminus \Gamma_0}\otimes{\mathop{\mathrm{id}}\nolimits_{(\Gamma_3\setminus \Gamma_0)^c}}$. Then $\omega \circ \alpha$ is split for the inclusion $\Gamma_0 \subset \Gamma_3$. In fact, if $({\mathcal H},\pi,\Omega)$ is a GNS triple of $\omega$ and $F$ the interpolating factor from Def.~\ref{defn:split}, we can choose $\tilde F=\mathop{\mathrm{Ad}}\nolimits(\pi(u))(F)$ as the interpolating Type I factor in the GNS representation $\pi \circ \alpha$ for the state $\omega \circ \alpha$. 
\end{prop} \begin{proof} We have \begin{align} \pi\circ\tilde \alpha\left ({\mathcal A}_{\Gamma_0}\right )'' =\pi\circ\alpha_{\Gamma_1}\left ({\mathcal A}_{\Gamma_0}\right )'' \subset \pi\left ({\mathcal A}_{\Gamma_1}\right )''\subset F. \end{align} We also have $ \tilde\alpha^{-1} ({\mathcal A}_{\Gamma_2}) = {\mathcal A}_{\Gamma_2} \subset {\mathcal A}_{\Gamma_3}$, and hence $\pi\left ( {\mathcal A}_{\Gamma_2}\right ) \subset \pi \circ \tilde\alpha\left ( {\mathcal A}_{\Gamma_3}\right )$. Therefore we have \begin{align} \pi\circ\tilde \alpha({\mathcal A}_{\Gamma_3})' \subset \pi({\mathcal A}_{\Gamma_2})' \subset F', \end{align} and by taking commutants $F\subset \pi\circ\tilde \alpha({\mathcal A}_{\Gamma_3})''$. Hence we obtain \begin{align} \pi\circ\tilde \alpha\left ({\mathcal A}_{\Gamma_0}\right )''\subset F\subset \pi\circ\tilde \alpha({\mathcal A}_{\Gamma_3})''. \end{align} Note that by assumption and the fact that $\tilde{\beta}_{\Gamma_3\setminus \Gamma_0}$ acts trivially on ${\mathcal A}_{\Gamma_0}$, \begin{align} \alpha\left ({\mathcal A}_{\Gamma_0}\right ) =\mathop{\mathrm{Ad}}\nolimits(u)\circ \tilde\alpha\circ\tilde\beta_{\Gamma_3\setminus \Gamma_0}\left ( {\mathcal A}_{\Gamma_0}\right ) =\mathop{\mathrm{Ad}}\nolimits(u)\circ \tilde\alpha\left ( {\mathcal A}_{\Gamma_0}\right ), \end{align} and similarly with ${\mathcal A}_{\Gamma_0}$ replaced by ${\mathcal A}_{\Gamma_3}$. Hence we have \begin{equation} \begin{split} \left ( \pi\circ\alpha\left ({\mathcal A}_{\Gamma_0}\right )\rmk'' = & \mathop{\mathrm{Ad}}\nolimits\left (\pi(u)\right )\left ( \pi\circ \tilde\alpha\left ({\mathcal A}_{\Gamma_0}\right )''\right ) \subset \mathop{\mathrm{Ad}}\nolimits\left (\pi(u)\right )\left ( F\right ) \\ &\subset \mathop{\mathrm{Ad}}\nolimits\left (\pi(u)\right )\left ( \left ( \pi\circ\tilde \alpha\left ({\mathcal A}_{\Gamma_3}\right )\rmk''\right ) =\left (\pi\circ\alpha\left ({\mathcal A}_{\Gamma_3}\right )\rmk''.
\end{split} \end{equation} This completes the proof. \end{proof} Note that the condition on $\alpha$ implies that $\alpha^{-1}$ is quasi-factorizable for the inclusion $\Gamma_0 \subset \Gamma_2 \subset \Gamma_3$ in the sense of Definition~\ref{def:quasifactor}. The main technical contribution of the paper consists in proving that the quasi-local automorphisms $\alpha_s$ admit a decomposition as in~\eqref{eq:factorize} of the proposition. \subsection{Sector theory}\label{sec:select} The present work is at least partly motivated by superselection sector theory, in the sense of Doplicher, Haag and Roberts (DHR). See~\cite{Araki99,HaagLQP} for an introduction. In two dimensional systems with long-range topological order, there is the possibility of quasi-particles with braided exchange statistics. Typical examples of such models are Kitaev's quantum double models~\cite{KitaevQD} and the Levin-Wen string-net models~\cite{LevinW}. Mathematically, the algebraic properties of the anyons are described by a braided tensor category~\cite{Wang}. Thus, the question is how one can extract this tensor category from first principles. Typical methods to extract the braided tensor category from a ground state rely quite heavily on certain properties (e.g. symmetries) of the underlying model, and are therefore less suitable for a general analysis. In fact, in finite systems it is not always clear how to even define a single anyonic excitation, in particular once one loses strict locality as a result of perturbations. We therefore take a different approach, motivated by DHR sector theory in algebraic quantum field theory~\cite{HaagLQP}, in which one in principle can recover the full anyon structure from a few general and physically motivated principles. The idea of a superselection sector stems from the observation that it appears to be impossible to make coherent superpositions between certain states, in particular when they carry a different `charge' or `anyon type'. 
Mathematically this phenomenon is related to the existence of non-equivalent representations of the algebra of observables. One way to interpret this is to think of charge conservation: with local operations it is not possible to change the total charge of the system. In particular, say we create a conjugate pair of anyons (thus preserving the total charge) from the ground state, and move one far away. Then acting \emph{locally} the total charge in that region cannot be changed. Or, to give an example, it is impossible to create a vector state describing a single charged anyon in the ground state representation of a topologically ordered model, using quasi-local observables only. Equivalently, it is not possible to create coherent superpositions of disjoint states (cf.~\cite[Thm. 6.1]{Araki99}). The $C^*$-algebra ${\mathcal A}_\Gamma$ has many inequivalent representations, but most of them are not physically relevant. Hence we need a selection criterion to select the relevant representations that correspond to charged states (that is, states describing single anyon excitations). It is perhaps helpful to illustrate how this works in the prototypical example of the toric code~\cite{KitaevQD}. We refer to~\cite{Naaijkens11,FiedlerN} for details on the following discussion. In the thermodynamic limit, one can show that there is a translation invariant ground state, uniquely characterised by the condition that $\omega_0(A_s) = \omega_0(B_p) = 1$. Here $A_s$ and $B_p$ are the `star' and `plaquette' operators appearing in the Hamiltonian for the toric code. It is well-known that one can define `string operators' $F_\xi$ that create a pair of excitations (anyons) when acting on the ground state of the toric code. Note that the excitations at the end of the path $\xi$ are conjugate to each other, so that the total charge of the anyons created by this operator is trivial. Thus $A \mapsto \omega_0(F_\xi A F_\xi^*)$ is a state describing a pair of anyons. 
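The algebra just described can be made concrete on a small system. The following sketch (a $2\times 2$ periodic lattice with 8 edge qubits; the labelling conventions are our own, not those of~\cite{KitaevQD}) checks that all star and plaquette operators commute, while a single-$Z$ ``string'' anticommutes exactly with the star operators at its endpoints, i.e.\ it creates a pair of excitations:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# Edges of a 2x2 periodic square lattice: h(i,j) leaves vertex (i,j) to the
# right, v(i,j) leaves it upward; 8 edges = 8 qubits in total.
edges = {}
for i in range(2):
    for j in range(2):
        edges[('h', i, j)] = len(edges)
        edges[('v', i, j)] = len(edges)
n = len(edges)

def op_on(pairs):
    """Tensor product acting with the given 2x2 matrices on the listed edges."""
    mats = [I2] * n
    for e, m in pairs:
        mats[edges[e]] = m
    return reduce(np.kron, mats)

def star(i, j):    # A_s: X on the four edges meeting vertex (i, j)
    return op_on([(('h', i, j), X), (('h', (i - 1) % 2, j), X),
                  (('v', i, j), X), (('v', i, (j - 1) % 2), X)])

def plaq(i, j):    # B_p: Z on the four edges bounding the plaquette at (i, j)
    return op_on([(('h', i, j), Z), (('h', i, (j + 1) % 2), Z),
                  (('v', i, j), Z), (('v', (i + 1) % 2, j), Z)])

# Every star commutes with every plaquette: they share an even number of edges.
for s in [star(i, j) for i in range(2) for j in range(2)]:
    for p in [plaq(i, j) for i in range(2) for j in range(2)]:
        assert np.allclose(s @ p, p @ s)

# A Z "string" on the single edge h(0,0) anticommutes with the stars at its
# two endpoints (0,0) and (1,0): it creates a pair of conjugate excitations.
F = op_on([(('h', 0, 0), Z)])
assert np.allclose(F @ star(0, 0), -star(0, 0) @ F)
assert np.allclose(F @ star(1, 0), -star(1, 0) @ F)
```

On the torus the products $\prod_s A_s$ and $\prod_p B_p$ equal the identity (each edge appears in exactly two stars and two plaquettes), which is one way to see that such excitations can only be created in pairs by local operators.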
To get a state describing a \emph{single} anyon, one can take the limit where one end of the path is sent off to infinity. This converges, and one can show that the resulting state is inequivalent to $\omega_0$. Moreover, by construction, this state can be interpreted as describing a single anyon, located at the endpoint that was kept fixed. The corresponding GNS representation $\pi$ has additional properties, reminiscent of the topological charges in algebraic quantum field theory~\cite{BuchholzF}. For example, suppose that the paths $\xi$ in the construction above all lie in some cone $\Lambda$. Then it is easy to show that \emph{outside} of the cone the GNS representation for $\omega$ is unitarily equivalent to the ground state representation $\pi_0$. This means that the anyon is localized in the cone $\Lambda$. What is less obvious is that if we choose a path going off to infinity in a different direction, the corresponding GNS representation is unitarily equivalent to $\pi$. The same is true if we choose a different endpoint for the path $\xi$. This property ultimately boils down to the property of the toric code that the state $\omega_0(F_\xi A F_\xi^*)$ only depends on the endpoints of the path $\xi$, and not on the path itself. To summarise, the single anyon representation $\pi$ is irreducible, and satisfies \begin{equation} \label{eq:sselect} \pi_0|\mathcal{A}_{\Lambda^c} \cong \pi|\mathcal{A}_{\Lambda^c}, \end{equation} for \emph{any} cone $\Lambda$.\footnote{The choice of cones as localization region is merely a convenient one, motivated by space-like cones in algebraic QFT~\cite{BuchholzF}. What is more important is that it extends to infinity. This allows us to send one of the ends of a ``string operator'' creating a pair of anyons in e.g. the toric code to infinity. For technical reasons, we need the region to ``widen'' towards infinity, so that any finite region can be transported into it, and that the region has no holes. 
An advantage of cones is that they are easy to parametrise, cf.~\cite{Ogata21a}. } Here $\pi_0$ is the (reference) ground state representation, and $\cong$ denotes unitary equivalence of the representation restricted to $\mathcal{A}_{\Lambda^c}$, the observables localized outside of the cone $\Lambda$. Since the criterion is required to hold for \emph{any} cone, the localization region can be moved around. This is called \emph{transportability} of the charges, and we say that the charge is \emph{transportable} (see e.g.~\cite[Section IV.2]{HaagLQP}). For the toric code, it is straightforward to construct four different inequivalent representations that satisfy this property, corresponding to the four anyon types of the model. For general topologically ordered models, one expects the charges to have the same localization properties (for example based on the string operators that are typical for such models). Thus, in general, a reasonable approach is to take a ground state representation $\pi_0$, and identify irreducible representations $\pi$ satisfying~\eqref{eq:sselect} with the charges (or, anyons) of the theory. A sector is then a (unitary) equivalence class of representations $\pi$ satisfying the selection criterion. The \emph{trivial} sector is the equivalence class containing the reference representation $\pi_0$. Later we will slightly relax the criterion~\eqref{eq:sselect} to require only quasi-equivalence. It is perhaps surprising that by just imposing this single selection criterion, we obtain a very rich structure. In fact, based on the DHR program and using a technical property called Haag duality, one can show that the set of representations satisfying this criterion has the structure of a braided tensor category~\cite{BuchholzF,Naaijkens11,Ogata21a}. In addition, in concrete models such as the toric code there are natural candidates to construct representations $\pi$ satisfying the criterion, even without resorting to Haag duality, as outlined above. 
Moreover, one can prove that these representations are the only ones satisfying the selection criterion~\eqref{eq:sselect}, and it follows that the category is equivalent to the representation category of the quantum double of the group $G = \mathbb{Z}_2$, as expected~\cite{NaaijkensKL}. This result can be generalised to abelian quantum double models~\cite{FiedlerN}. Thus, we take the viewpoint that each type of anyon gives rise to an equivalence class of representations $\pi$ satisfying~\eqref{eq:sselect}. The split property enters the analysis in various ways. We first note that the topological phenomena in our systems of interest, in particular the existence of anyons, are believed to be due to the presence of \emph{long-range entanglement}~\cite{ChenGW}. Product states exhibit no entanglement, and hence should be in the trivial phase without any anyons. A state with long-range entanglement is then roughly speaking a state that cannot be transformed into a product state by applying a finite sequence of local unitaries throughout the system. Consider the case where we have a pure state $\omega = \omega_\Lambda \otimes \omega_{\Lambda^c}$ that is a product state with respect to a cone $\Lambda$ and its complement. It is easy to see (see Section~\ref{sec:lre}) that in this case $\pi_\omega({\mathcal A}_\Lambda)''$ is a Type I factor and the inclusion $\pi_\omega({\mathcal A}_\Lambda)'' \subset \pi_\omega({\mathcal A}_{\Lambda^c})'$ therefore is split. In Section~\ref{sec:lre} we show that in this case the sector theory is trivial: any representation $\pi$ satisfying~\eqref{eq:sselect} is a direct sum of copies of the reference representation $\pi_0$. That is, we only have the trivial charge or anyon. This corroborates the notion that the sector theory is a good invariant for topological phases by proving that indeed states without long-range entanglement have a trivial sector structure.
Indeed, we will prove that this still is the case for pure states $\omega$ such that $\omega \circ \alpha$ is quasi-equivalent to a product state. Here, $\alpha$ is a quasi-factorizable automorphism, which can be seen as a generalization of finite-depth quantum circuits to infinite systems. This result also explains why in models such as the toric code, which \emph{do} have a non-trivial sector theory, we only have a weaker form of the split property, where we have to consider an inclusion $\Lambda_1 \subset \Lambda_2$ of cones whose boundaries are sufficiently far apart~\cite{Naaijkens12}. This weaker form of the split property also plays a role in the analysis in~\cite{NaaijkensKL}, where the index of a certain subfactor is shown to be related to the total quantum dimension of the sectors. This result can be used to show that a given list of sectors is complete. It is also necessary in showing that approximately localized sectors, a generalisation of the notion of a sector discussed above, are stable under applying a path of quasi-local automorphisms~\cite{ChaNN18}. In either case, the split property for an inclusion $\Lambda_1 \subset \Lambda_2$ allows us to obtain a tensor product decomposition of the ground state Hilbert space such that observables in ${\mathcal A}_{\Lambda_1}$ and those in ${\mathcal A}_{\Lambda_2^c}$ act on the distinct factors. In contrast to finite systems, such a decomposition need not exist if the split property fails to hold. This decomposition can then be used to approximately localize endomorphisms or observables~\cite{ChaNN18}. This plays a crucial role in the proof of stability of superselection sectors. Although the proof only requires a variant of the split property to hold at one point along the path of gapped Hamiltonians, it is nevertheless important to understand the stability of the split property itself.
\subsection{Quasi-local maps}\label{qasubsec} In the classification problem of gapped ground state phases, we say that two states are in the same phase if they can be realized as ground states of gapped Hamiltonians that can be connected via a continuous (or, for technical reasons, $C^1$) path, in such a way that the energy gap does not close along the path. Using the spectral flow~\cite{BachmannMNS}, an adaptation of Hastings and Wen's quasi-adiabatic continuation~\cite{HastingsW} to the thermodynamic limit, one obtains a path of automorphisms $s \mapsto \alpha_s $ relating the ground states along the path of gapped Hamiltonians. An infinite-system version, in which the uniform gap of the local Hamiltonians is replaced by the spectral gap of the bulk Hamiltonian in the GNS representation, was shown in \cite{MoonO19}. Quasi-local automorphisms are essential transformations in the theory of gapped ground state phases. A \emph{quasi-local map} on ${\mathcal A}_\Gamma$ is a map that maps strictly localized observables to observables that can still be approximately localized in a slightly larger region, with error bounds satisfying a Lieb-Robinson type of estimate. Our discussion draws heavily on~\cite{NSY}, which in turn incorporates decades of advancements in Lieb-Robinson bounds. Typically the quasi-local maps are obtained as the dynamics generated by some sufficiently local interaction. The notion of ``sufficiently local'' is made precise in the following definitions. \begin{definition} An $F$-function $F$ on $(\Gamma, d)$ is a non-increasing function $F:[0,\infty)\to (0,\infty)$ such that \begin{description} \item[(i)] $\left \Vert F\right \Vert:=\sup_{x\in\Gamma}\left ( \sum_{y\in\Gamma}F\left ( d(x,y)\right )\rmk<\infty$, and \item[(ii)] $C_{F}:=\sup_{x,y\in\Gamma}\left ( \sum_{z\in\Gamma} \frac{F\left ( d(x,z)\right ) F\left ( d(z,y)\right )}{F\left ( d(x,y)\right )}\right )<\infty$.
\end{description} These are called \emph{uniform integrability} and the \emph{convolution identity}, respectively. \end{definition} For an $F$-function $F$ on $(\Gamma, d)$, define a function $G_{F}$ on $t\ge 0$ by \begin{align}\label{gfdef} G_{F}(t):= \sup_{x\in\Gamma}\left ( \sum_{y\in\Gamma, d(x,y)\ge t} F\left ( d(x,y)\right ) \right ),\quad t\ge 0. \end{align} Note that by uniform integrability the supremum is finite for all $t$. Our goal is to interpolate continuously between two local interactions. Hence we will mainly be considering \emph{paths} of local interactions, in the following sense: \begin{definition} A norm-continuous interaction on ${\mathcal A}_{\Gamma}$ defined on an interval $[0,1]$ is a map $\Phi:{\mathcal P}_{0}(\Gamma)\times [0,1]\to {\mathcal A}_{\Gamma}^{\rm loc}$ such that \begin{description} \item[(i)] for any $t\in[0,1]$, $\Phi(\cdot; t):{\mathcal P}_{0}(\Gamma)\to {\mathcal A}_{\Gamma}^{\rm loc}$ is an interaction, and \item[(ii)] for any $Z\in{\mathcal P}_{0}(\Gamma)$, the map $\Phi(Z;\cdot ):[0,1]\to {\mathcal A}_{Z}$ is norm-continuous. \end{description} \end{definition} To ensure that the interactions induce quasi-local automorphisms we need to impose sufficient decay properties on the interaction strength. \begin{definition} Let $F$ be an $F$-function on $(\Gamma,d)$. We denote by ${\mathcal B}_{F}([0,1])$ the set of all norm continuous interactions on ${\mathcal A}_{\Gamma}$ defined on an interval $[0,1]$ such that the function $\left \Vert \Phi \right \Vert: [0,1]\to {\mathbb R}$ defined by \begin{align} \left \Vert \Phi \right \Vert(t):= \sup_{x,y\in\Gamma}\frac{1}{F\left ( d(x,y)\right )}\sum_{Z\in{\mathcal P}_{0}(\Gamma), Z\ni x,y} \left \Vert\Phi(Z;t)\right \Vert,\quad t\in[0,1], \end{align} is uniformly bounded, i.e., $\sup_{t\in[0,1]}\left \Vert \Phi \right \Vert(t)<\infty$. 
It follows that $t \mapsto \left \Vert \Phi \right \Vert(t)$ is integrable, and we set \begin{align} I(\Phi):=I_{1,0}(\Phi):= C_{F} \int_{0}^{1} dt\left \Vert \Phi \right \Vert(t) . \end{align} \end{definition} We will need some more notation. For $\Phi\in {\mathcal B}_{F}([0,1])$ and $0\le m\in{\mathbb R}$, we introduce a path of interactions $\Phi_{m}$ by \begin{align}\label{pm} \Phi_{m}\left ( X;t\right ):=|X|^{m}\Phi\left ( X;t\right ),\quad X\in\caP_{0}(\Gamma),\quad t\in[0,1]. \end{align} Next we recall that an interaction gives rise to local (and here, time-dependent) Hamiltonians, via \begin{align} H_{\Lambda,\Phi}(t):=\sum_{Z\subset\Lambda}\Phi(Z;t),\quad t\in[0,1]. \end{align} We denote by $U_{\Lambda,\Phi}(t;s)$ the solution of \begin{align} \frac{d}{dt} U_{\Lambda,\Phi}(t;s)=-iH_{\Lambda,\Phi}(t) U_{\Lambda,\Phi}(t;s),\quad t\in[0,1]\\ U_{\Lambda,\Phi}(s;s)=\mathbb I. \end{align} We define corresponding automorphisms $\tau_{t,s}^{(\Lambda),\Phi}, \hat{\tau}_{t,s}^{(\Lambda), \Phi}$ on ${\mathcal A}_{\Gamma}$ by \begin{align} \tau_{t,s}^{(\Lambda), \Phi}(A):=U_{\Lambda,\Phi}(t;s)^{*}AU_{\Lambda,\Phi}(t;s),\\ \hat{\tau}_{t,s}^{(\Lambda), \Phi}(A):=U_{\Lambda,\Phi}(t;s)AU_{\Lambda,\Phi}(t;s)^{*}, \end{align} with $A \in {\mathcal A}_\Gamma$. Note that \begin{align}\label{inv} \hat{\tau}_{t,s}^{(\Lambda), \Phi}={\tau}_{s,t}^{(\Lambda), \Phi}, \end{align} by the uniqueness of the solution of the differential equation. Using standard techniques one can prove locality estimates for time-evolved local observables in the form of Lieb-Robinson bounds, which in turn can be used to show that the local dynamics $\tau_{t,s}^{(\Lambda), \Phi}$ induce global dynamics. Since we will make use of these facts repeatedly we recall the main points here. \begin{theorem}[\cite{NSY}]\label{tni} Let $(\Gamma, d)$ be a countable metric space, and let $F$ be an $F$-function on $(\Gamma, d)$. Suppose that $\Phi\in{\mathcal B}_F([0,1])$. 
The following holds: \begin{enumerate}[label={\rm\textbf{(\roman*)}}] \item \label{it:tdlimit} The limit \begin{align} \tau_{t,s}^{\Phi}(A):=\lim_{\Lambda \nearrow\Gamma}\tau_{t,s}^{(\Lambda), \Phi}(A),\quad A\in{\mathcal A}_{\Gamma}, \quad t,s\in[0,1] \end{align} exists and defines a strongly continuous family of automorphisms on ${\mathcal A}_{\Gamma}$ such that \begin{align} \tau_{t,s}^{\Phi}\circ\tau_{s,u}^{\Phi}=\tau_{t,u}^{\Phi},\quad \tau_{t,t}^{\Phi}=\mathop{\mathrm{id}}\nolimits_{{\mathcal A}_{\Gamma}}, \quad t,s,u\in[0,1]. \end{align} \item \label{it:lr} For any $X,Y\in {\mathcal P}_{0}(\Gamma)$ with $X\cap Y=\emptyset$, and $A\in {\mathcal A}_{X}$, $B\in{\mathcal A}_{Y}$ we have \begin{align} \left \Vert \left[ \tau_{t,s}^{\Phi}(A), B \right] \right \Vert \le \frac{2\left \Vert A\right \Vert\left \Vert B\right \Vert}{C_{F}}\left ( e^{2I(\Phi)}-1\right )\left \vert X\right \vert G_{F}\left ( d(X,Y)\right ). \end{align} If $\Lambda \in {\mathcal P}_{0}(\Gamma)$ and $X \cup Y \subset \Lambda$, a similar bound holds for $\tau_{t,s}^{(\Lambda),\Phi}$. \item \label{it:approx} For any $X\in {\mathcal P}_{0}(\Gamma)$ we have \begin{align} &\left \Vert \Delta_{X(m)}\left ( \tau_{t,s}^{\Phi}(A)\right ) \right \Vert \le \frac{4\left \Vert A\right \Vert}{C_{F}}\left ( e^{2I(\Phi)}-1\right )\left \vert X\right \vert G_{F}\left ( m\right ), \end{align} for all $m\ge 0$ and $A\in {\mathcal A}_{X}$. A similar bound holds for $\tau_{t,s}^{(\Lambda),\Phi}$. \item \label{it:localdyn} For any $X,\Lambda\in \caP_{0}(\Gamma)$ with $X\subset\Lambda$, and $A \in {\mathcal A}_X$ we have \begin{align} \left \Vert \tau_{t,s}^{(\Lambda), \Phi}(A)-\tau_{t,s}^{\Phi}(A) \right \Vert \le\frac{2}{C_{F}} \left \Vert A\right \Vert e^{2I(\Phi)}I(\Phi) \left \vert X\right \vert G_{F}\left ( d\left ( X,\Gamma\setminus\Lambda\right ) \right ). 
\end{align} \end{enumerate} \end{theorem} \begin{proof} Item~\ref{it:tdlimit} is Theorem~3.5 of~\cite{NSY}, while~\ref{it:lr} and~\ref{it:localdyn} follow from Corollary~3.6 of the same paper by a straightforward bounding of $D(X,Y)$ and the summation in eq.~(3.80) of~\cite{NSY} respectively. Finally,~\ref{it:approx} can be obtained using~\ref{it:lr} and~\cite[Cor. 4.4]{NSY} (see also the proof of Lemma~5.1 in the same paper). \end{proof} Consider the same notation and assumptions as in Theorem~\ref{tni}. To continue we need to make additional assumptions on the function $F$. In particular, we assume that there is an $\alpha\in(0,1)$ such that \begin{align}\label{as:galp} \sum_{n=0}^{\infty} (1+n)^{2\nu+1}G_F(n)^{\alpha}<\infty, \end{align} where $G_F$ is as defined in~\eqref{gfdef}. Furthermore, we assume that there is an $F$-function $\tilde F$ on $(\Gamma, d)$ such that \begin{align}\label{as:gf} \max\left\{ F\left ( \frac r 3\right ), \sum_{n=[\frac r 3]}^{\infty} (1+n)^{2\nu+1}G_F(n)^{\alpha} \right\}\le \tilde F(r). \end{align} With these additional assumptions we can distill the following result. It gives us a way to apply a quasi-local automorphism to a given dynamics. The result will generally \emph{not} be an interaction, since the interaction terms will no longer be localized in finite regions. Nevertheless, the theorem shows that we can define a proper interaction that gives the correct local Hamiltonians. \begin{theorem}\label{tsan} Let $(\Gamma, d)$ be a countable $\nu$-regular metric space and $F$ be an $F$-function on $(\Gamma, d)$ such that there are $\alpha$ and $\tilde{F}$ satisfying~\eqref{as:galp} and~\eqref{as:gf}. Let $\Phi\in{\mathcal B}_{F}([0,1])$ be a path of interactions such that $\Phi_{1}\in {\mathcal B}_{F}([0,1])$, where $\Phi_1$ is defined in~\eqref{pm}. 
Finally, choose an increasing sequence $\Lambda_n \in {\mathcal P}_0(\Gamma)$ such that $\Lambda_n \nearrow \Gamma$, and let $\tau_{t,s}^\Phi$ and $\tau_{t,s}^{(\Lambda_n),\Phi}$ be as in Theorem~\ref{tni}. Then, for each $s \in [0,1]$, the sum \begin{align}\label{eq:psis} \Psi^{(s)}\left ( Z, t \right ):= \sum_{m\ge 0} \sum_{X\subset Z,\; X(m)=Z} \Delta_{X(m)}\left ( \tau_{t,s}^\Phi\left ( \Phi\left ( X; t\right ) \right ) \right ) \end{align} defines an interaction $\Psi^{(s)}\in{\mathcal B}_{\tilde F}([0,1])$. Furthermore, the formula \begin{align}\label{eq:psisn} \Psi^{(n)(s)}\left ( Z, t \right ):= \sum_{m\ge 0} \sum_{X\subset Z, X(m)\cap\Lambda_{n}=Z} \Delta_{X(m)}\left ( \tau_{t,s}^{(\Lambda_n), \Phi}\left ( \Phi\left ( X; t\right ) \right ) \right ) \end{align} defines $\Psi^{{(n)(s)}}\in {\mathcal B}_{\tilde F}([0,1])$ such that $\Psi^{(n)(s)}\left ( Z, t \right )=0$ unless $Z\subset \Lambda_{n}$, and satisfies \begin{align}\label{psio} \tau_{t,s}^{(\Lambda_n), \Phi} \left ( H_{\Lambda_n, \Phi}(t)\right ) =H_{\Lambda_n, \Psi^{(n)(s)}}(t). \end{align} For any $t,u\in[0,1]$, we have \begin{align}\label{convconv} \lim_{n\to\infty}\left \Vert \tau_{t,u}^{\Psi^{(n)(s)}}\left ( A\right ) -\tau_{t,u}^{\Psi^{(s)}}\left ( A\right ) \right \Vert=0,\quad A\in{\mathcal A}_{\Gamma}. \end{align} \end{theorem} \begin{proof} If $Z$ is a finite set, we see that the right-hand side of~\eqref{eq:psis} contains only finitely many terms and hence is well-defined. Moreover, because of the $\Delta_{X(m)}$, it follows that $\Psi^{(s)}(Z,t) \in {\mathcal A}_Z$. Since $\tau_{t,s}$ is an automorphism, we see that $\Psi^{(s)}(Z,t)$ is self-adjoint, and hence defines an interaction. That this interaction is in ${\mathcal B}_{\tilde{F}}([0,1])$ then follows from Theorem 5.17(i) of~\cite{NSY}. The conditions of this theorem can be verified using Theorem~\ref{tni}, where in the notation of~\cite{NSY} we have $p=0$ and $q=r=1$. 
Similarly, equation~\eqref{eq:psisn} defines an interaction, and~\eqref{psio} can be verified by an explicit calculation, if we note that $\tau_{t,s}^{(\Lambda_n), \Phi}(\Phi(X;t))$ is in ${\mathcal A}_{\Lambda_n}$. By part (ii) of Theorem~5.17 of~\cite{NSY} it follows that $\Psi^{(n)(s)} \in {\mathcal B}_{\tilde{F}}([0,1])$, and moreover that $\Psi^{(n)(s)}$ converges to $\Psi^{(s)}$ in $F$-norm with respect to $\tilde{F}$. Theorem 5.13 of~\cite{NSY} implies \begin{align} \sup_{n}\int_{0}^{1}\left \Vert \Psi^{(n)(s)}\right \Vert_{\tilde F}(t) dt<\infty, \end{align} see also~\cite[eq. (5.101)]{NSY}. Therefore, from Theorem 3.8 of \cite{NSY}, we obtain~\eqref{convconv}. \end{proof} \section{Factorization of quasi-local automorphisms}\label{sec:stable} In this section we give our main technical result. In particular, we study conditions under which a quasi-local automorphism $\tau_{1,0}^\Phi$ ``factorizes'' as in Proposition~\ref{prop:splitstable}, specifically equation~\eqref{eq:factorize}. In the next theorem we give a sufficient condition in terms of the regions involved and the $F$-function for $\Phi$. Before we state the full conditions and prove the result, let us briefly outline the main steps. The idea behind the proof is to compare the full dynamics generated by the interaction $\Phi$ with the ``decoupled'' dynamics generated by $\Phi^{(0)}$. The latter simply omits all interaction terms of $\Phi$ crossing the boundary of $\Gamma_2 \setminus \Gamma_1$. The first step is to show that the difference between the two dynamics, $\tau^{\Phi}_{1,0} \circ \left(\tau^{\Phi^{(0)}}_{1,0}\right)^{-1}$, is quasi-local and generated by an interaction as in Theorem~\ref{tsan}. 
In the second step we show that this interaction can be well approximated by interaction terms localized in $\Gamma_2' \setminus \Gamma_1'$, with $\Gamma_1' \subset \Gamma_1 \subset \Gamma_2 \subset \Gamma_2'$, in the sense that the contributions \emph{outside} this region sum up to a bounded operator in $\mathcal{A}_\Gamma$. In Step 3 this is then used to show that the difference of the full and decoupled dynamics can be written as an automorphism of $\mathcal{A}_{\Gamma_2'\setminus \Gamma_1'}$ followed by conjugation with a unitary. This ultimately allows us to write the interaction in a form that allows us to apply Proposition~\ref{prop:splitstable} and to provide natural examples of quasi-factorizable automorphisms. \begin{theorem} \label{thm:quasiauto} Let $(\Gamma, d)$ be a countable $\nu$-regular metric space with constant $\kappa$ as in~\eqref{nreg}. Let $F$ be an $F$-function on $(\Gamma, d)$ such that the function $G_F$ defined by (\ref{gfdef}) satisfies (\ref{as:galp}) for some $\alpha\in(0,1)$. Suppose that there is an $F$-function $\tilde F$ satisfying (\ref{as:gf}) for this $F$. Let ${\mathcal A}_\Gamma$ be a quantum spin system given by (\ref{ag}) and (\ref{agg}). Let $\Phi\in {\mathcal B}_{F}([0,1])$ be a path of interactions satisfying $\Phi_1\in {\mathcal B}_F([0,1])$. (Recall from definition (\ref{pm}) that this means that $X \mapsto |X| \Phi(X;t)$ is in ${\mathcal B}_F([0,1])$). Let \begin{align} \Gamma_1'\subset \Gamma_{1}\subset \Gamma_{2}\subset \Gamma_2'\subset \Gamma. \end{align} For $m\in{\mathbb N}\cup \{0\}$, $x,y\in\Gamma$, set \begin{align} \label{eq:defnf} f(m,x,y):=\sum_{X\ni x,y, d\left ( \left ( \Gamma_{2}'\setminus \Gamma_{1}'\right )^{c}, X\right )\le m} |X| \sup_{t\in[0,1]} \left \Vert\Phi(X,t)\right \Vert. 
\end{align} We assume that \begin{align}\label{anan} \left ( \sum_{x\in \Gamma_1} \sum_{y\in \Gamma_2^{c}}+ \sum_{x\in \Gamma_2\setminus \Gamma_1} \sum_{y\in \left ( \Gamma_2\setminus \Gamma_1\right )^c}\right ) \sum_{m=0}^\infty G_F(m)f(m,x,y)<\infty. \end{align} Define $\Phi^{(0)}\in {\mathcal B}_{F}([0,1])$ by \begin{align} \Phi^{(0)}\left ( X; t\right ):= \left\{ \begin{gathered} \Phi\left ( X; t\right ),\quad \text{if}\; X\subset \Gamma_{1}\; \text{or}\; X\subset \Gamma_{2}\setminus\Gamma_{1}\; \text{or} \; X\subset \Gamma_{2}^{c}\\ 0,\quad \text{otherwise} \end{gathered} \right., \end{align} for each $X\in{\mathcal P}_{0}(\Gamma)$, $t\in[0,1]$. Then there is an automorphism $\beta_{\Gamma_2'\setminus \Gamma_1'}$ on ${\mathcal A}_{\Gamma_2'\setminus \Gamma_1'}$ and a unitary $u\in{\mathcal A}_{\Gamma}$ such that \begin{align}\label{eq:quasifactor} \tau_{1,0}^\Phi=\mathop{\mathrm{Ad}}\nolimits(u)\circ \tau_{1,0}^{\Phi^{(0)}}\circ \beta_{\Gamma_2'\setminus \Gamma_1'}. \end{align} \end{theorem} \begin{proof} {\it Step 1.} First we would like to represent $\tau_{1,0}^{ \Phi}\circ \left ( \tau_{1,0}^{\Phi^{(0)}}\right )^{-1}$ as a quasi-local automorphism by applying Theorem \ref{tsan}. Let $\{\Lambda_{n}\}_{n=1}^{\infty}\subset{\mathcal P}_{0}\left ( \Gamma\right )$ be an increasing sequence with $\Lambda_{n}\nearrow\Gamma$. We also define $\Phi^{(1)}\in {\mathcal B}_{F}([0,1])$ by \begin{align} \Phi^{(1)}\left ( X; t\right ):=\Phi^{(0)}\left ( X; t\right )-\Phi\left ( X; t\right ), \end{align} for each $X\in{\mathcal P}_{0}(\Gamma)$, $t\in[0,1]$. Let $t,s\in[0,1]$. We apply Theorem~\ref{tsan} to $\Phi^{(1)}$. 
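The role of the decoupled interaction $\Phi^{(0)}$ can be illustrated by a finite-dimensional toy computation (this is only a cartoon of the infinite-volume statement, and the three-qubit Hamiltonian and all coefficients below are made up for the sketch): dropping the single interaction term crossing the cut between $\Gamma_1=\{0\}$ and its complement makes the Heisenberg evolution of an observable in $\Gamma_1$ commute exactly with every observable on the other side of the cut.

```python
# Toy sketch: full vs. decoupled dynamics on a 3-qubit chain with the
# (made-up) Hamiltonian H = Z0 Z1 + Z1 Z2 + X0 + X1 + X2.  The decoupled
# Hamiltonian drops Z0 Z1, the only term crossing the cut {0} | {1,2}.

def kron(A, B):  # tensor product of matrices given as lists of lists
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B, s=1):
    return [[A[i][j] + s * B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def expmi(H, t, terms=40):
    # exp(-i t H) via its Taylor series; fine for these small matrices.
    n = len(H)
    U = [[(1 + 0j) if i == j else 0j for j in range(n)] for i in range(n)]
    term = [row[:] for row in U]
    for k in range(1, terms):
        term = mul(term, [[(-1j * t / k) * x for x in row] for row in H])
        U = add(U, term)
    return U

I2 = [[1 + 0j, 0j], [0j, 1 + 0j]]
X = [[0j, 1 + 0j], [1 + 0j, 0j]]
Z = [[1 + 0j, 0j], [0j, -1 + 0j]]

def chain_op(ops):  # tensor product over the three sites
    M = ops[0]
    for O in ops[1:]:
        M = kron(M, O)
    return M

Z0Z1 = chain_op([Z, Z, I2])          # the only term crossing the cut
H_full = Z0Z1
for term in (chain_op([I2, Z, Z]), chain_op([X, I2, I2]),
             chain_op([I2, X, I2]), chain_op([I2, I2, X])):
    H_full = add(H_full, term)
H_dec = add(H_full, Z0Z1, s=-1)      # decoupled: drop the cut-crossing term

A = chain_op([X, I2, I2])            # observable localized in Gamma_1 = {0}
B = chain_op([I2, X, I2])            # observable in the complement

def comm_norm(H, t):
    U = expmi(H, t)
    At = mul(mul(dagger(U), A), U)   # Heisenberg-evolved observable
    C = add(mul(At, B), mul(B, At), s=-1)
    return max(abs(x) for row in C for x in row)

full_comm = comm_norm(H_full, 0.3)   # nonzero: A spreads across the cut
dec_comm = comm_norm(H_dec, 0.3)     # essentially zero: cut never crossed
```

Here the full evolution leaks across the cut at first order in $t$, while under the decoupled evolution the observable stays strictly in $\Gamma_1$, so the commutator vanishes up to floating-point error.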
Hence we set \begin{align} \Psi^{(s)}\left ( Z, t \right ):= \sum_{m\ge 0} \sum_{X\subset Z,\; X(m)=Z} \Delta_{X(m)}\left ( \tau_{t,s}^{\Phi}\left ( \Phi^{(1)}\left ( X; t\right ) \right ) \right ) \end{align} and \begin{align} \Psi^{(n)(s)}\left ( Z, t \right ):= \sum_{m\ge 0} \sum_{X\subset Z, X(m)\cap\Lambda_{n}=Z} \Delta_{X(m)}\left ( \tau_{t,s}^{(\Lambda_n), \Phi}\left ( \Phi^{(1)}\left ( X; t\right ) \right ) \right ). \end{align} Corresponding to (\ref{psio}), we obtain \begin{align} \tau_{t,s}^{(\Lambda_n),\Phi} \left ( H_{\Lambda_n,\Phi^{(1)}}(t) \right ) = H_{\Lambda_n,\Psi^{(n)(s)}}(t). \end{align} Applying Theorem~\ref{tsan}, we have $\Psi^{(n)(s)}, \Psi^{(s)}\in {\mathcal B}_{\tilde F}([0,1])$, and \begin{align}\label{convconv1} \lim_{n\to\infty}\left \Vert \tau_{t,u}^{\Psi^{(n)(s)}}\left ( A\right ) -\tau_{t,u}^{\Psi^{(s)}}\left ( A\right ) \right \Vert=0,\quad A\in{\mathcal A}_{\Gamma},\quad t,u\in [0,1] \end{align} holds. Note that \begin{align} \begin{split} \frac{d}{dt} \hat\tau_{t,s}^{(\Lambda_n), \Psi^{(n)(s)}}(A) &=-i\left[ H_{\Lambda_n, \Psi^{(n)(s)}}(t), \hat\tau_{t,s}^{(\Lambda_n), \Psi^{(n)(s)}}(A) \right] \\ &=-i \left[ \tau_{t,s}^{(\Lambda_n),\Phi} \left ( H_{\Lambda_n, \Phi^{(1)}}(t) \right ), \hat\tau_{t,s}^{(\Lambda_n), \Psi^{(n)(s)}}(A) \right]. \end{split} \end{align} On the other hand, we have \begin{align} \begin{split} \frac{d}{dt} \tau_{t,s}^{(\Lambda_n), \Phi}&\circ \left ( \tau_{t,s}^{(\Lambda_n),\Phi^{(0)}}\right )^{-1}(A) \\ &=\tau_{t,s}^{(\Lambda_n),\Phi} \left ( i\left[ H_{\Lambda_n,\Phi}(t)-H_{\Lambda_n,\Phi^{(0)}}(t), \left ( \tau_{t,s}^{(\Lambda_n),\Phi^{(0)}}\right )^{-1}(A) \right] \right ) \\ &=-i\left[ \tau_{t,s}^{(\Lambda_n),\Phi} \left ( H_{\Lambda_n, \Phi^{(1)}}(t) \right ), \tau_{t,s}^{(\Lambda_n), \Phi}\circ \left ( \tau_{t,s}^{(\Lambda_n),\Phi^{(0)}}\right )^{-1}(A) \right]. 
\end{split} \end{align} Hence $\hat\tau_{t,s}^{(\Lambda_n), \Psi^{(n)(s)}}(A)$ and $ \tau_{t,s}^{(\Lambda_n), \Phi}\circ \left ( \tau_{t,s}^{(\Lambda_n),\Phi^{(0)}}\right )^{-1}(A)$ satisfy the same differential equation, with the same initial condition $\hat\tau_{s,s}^{(\Lambda_n), \Psi^{(n)(s)}}(A)=\tau_{s,s}^{(\Lambda_n), \Phi}\circ \left ( \tau_{s,s}^{(\Lambda_n),\Phi^{(0)}}\right )^{-1}(A)=A$. Therefore we obtain \begin{align}\label{ata} \hat\tau_{t,s}^{(\Lambda_n), \Psi^{(n)(s)}}(A)= \tau_{t,s}^{(\Lambda_n), \Phi}\circ \left ( \tau_{t,s}^{(\Lambda_n),\Phi^{(0)}}\right )^{-1}(A),\quad t\in [0,1],\quad A\in{\mathcal A}_{\Gamma}. \end{align} From the fact that $\hat\tau_{t,u}^{\Psi^{(n)(s)}}\left ( A\right )=\hat \tau_{t,u}^{(\Lambda_n), \Psi^{(n)(s)}}\left ( A\right )=\tau_{u,t}^{(\Lambda_n), \Psi^{(n)(s)}}\left ( A\right )=\tau_{u,t}^{\Psi^{(n)(s)}}\left ( A\right )$ converges strongly to an automorphism $\tau_{u,t}^{\Psi^{(s)}}$ on ${\mathcal A}_{\Gamma}$ by (\ref{convconv1}), we have \begin{align}\label{convconvh} \lim_{n\to\infty}\left \Vert \hat\tau_{t,s}^{\Psi^{(n)(s)}}\left ( A\right ) -\tau_{s,t}^{\Psi^{(s)}}\left ( A\right ) \right \Vert=0,\quad A\in{\mathcal A}_{\Gamma}. \end{align} On the other hand, by Theorem~\ref{tni}, we have for $t \in [0,1]$ and $A \in {\mathcal A}_\Gamma$ \begin{align} \lim_{n\to\infty}\left \Vert \tau_{t,s}^{(\Lambda_n), \Phi}\circ \left ( \tau_{t,s}^{(\Lambda_n),\Phi^{(0)}}\right )^{-1}(A) -\tau_{t,s}^{ \Phi}\circ \left ( \tau_{t,s}^{\Phi^{(0)}}\right )^{-1}(A) \right \Vert=0. \end{align} Therefore, taking the $n\to\infty$ limit in (\ref{ata}), we obtain \begin{align} \tau_{s,t}^{ \Psi^{(s)}}(A)= \tau_{t,s}^{\Phi}\circ \left ( \tau_{t,s}^{\Phi^{(0)}}\right )^{-1}(A),\quad t,s\in [0,1],\quad A\in{\mathcal A}_{\Gamma}. 
\end{align} Hence we have \begin{align} \tau_{s,t}^{\Phi}=\left ( \tau_{t,s}^{\Phi}\right )^{-1} =\left ( \tau_{t,s}^{\Phi^{(0)}}\right )^{-1}\left (\tau_{s,t}^{ \Psi^{(s)}}\right )^{-1}= \tau_{s,t}^{\Phi^{(0)}}\tau_{t,s}^{ \Psi^{(s)}}. \end{align} In particular, we get \begin{align}\label{ttt} \tau_{1,0}^{\Phi}= \tau_{1,0}^{\Phi^{(0)}}\tau_{0,1}^{ \Psi^{(1)}}. \end{align} {\it Step 2.} We show that the summation \begin{align}\label{vt} V(t):=\sum_{Z\in\caP_{0}(\Gamma)} \left ( \mathop{\mathrm{id}}\nolimits- \Pi_{\Gamma_{2}'\setminus \Gamma_{1}'} \right ) \left ( \Psi^{(1)}\left ( Z,t\right ) \right )\in{\mathcal A}_\Gamma \end{align} converges absolutely in the norm topology, and uniformly in $t\in[0,1]$. Set \begin{align} V_{n}(t):=\sum_{Z\in\caP_{0}(\Gamma),\; Z\subset \Lambda_{n} } \left ( \mathop{\mathrm{id}}\nolimits- \Pi_{\Gamma_{2}'\setminus \Gamma_{1}'} \right ) \left ( \Psi^{(1)}\left ( Z,t\right ) \right )\in{\mathcal A}_{\Lambda_{n}},\quad n\in{\mathbb N}. \end{align} From the convergence of (\ref{vt}) uniform in $t$, we get \begin{align}\label{kub} \lim_{n\to\infty}\sup_{t\in[0,1]}\left \Vert V_{n}(t)-V(t)\right \Vert=0. \end{align} To prove the convergence of (\ref{vt}), it suffices to prove \begin{align}\label{smsm} \lim_{n\to\infty}\sup_{t\in[0,1]}\sum_{Z\in\caP_{0}(\Gamma),\; Z\cap \Lambda_{n}^{c}\neq \emptyset } \left \Vert \left ( \mathop{\mathrm{id}}\nolimits- \Pi_{\Gamma_{2}'\setminus \Gamma_{1}'} \right )\left ( \Psi^{(1)}\left ( Z,t\right ) \right ) \right \Vert =0. \end{align} To prove this, we introduce the following functions. For $m\in{\mathbb N}\cup \{0\}$, $n\in{\mathbb N}$, and $x,y\in\Gamma$, set \begin{align} f_{n}(m,x,y):=\sum_{X\ni x,y, d(X, \Lambda_{n}^{c})\le m,\; d\left ( \left ( \Gamma_{2}'\setminus \Gamma_{1}'\right )^{c}, X\right )\le m} |X| \sup_{t\in[0,1]} \left \Vert\Phi(X,t)\right \Vert. 
\end{align} Note that $f_n(m,x,y)$ is bounded by $f$ point-wise (by definition) and converges to zero point-wise, by~\eqref{anan}. Hence by~\eqref{anan} and Lebesgue's dominated convergence theorem, we obtain \begin{align}\label{bnbn} \lim_{n\to\infty} \left ( \left ( \sum_{x\in \Gamma_1} \sum_{y\in \Gamma_2^{c}}+ \sum_{x\in \Gamma_2\setminus \Gamma_1} \sum_{y\in \left ( \Gamma_2\setminus \Gamma_1\right )^c}\right ) \sum_{m=0}^\infty G_F(m)f_{n}(m,x,y)\right )=0. \end{align} We have \begin{align} &\sup_{t\in[0,1]}\sum_{Z\in\caP_{0}(\Gamma),\; Z\cap \Lambda_{n}^{c}\neq \emptyset } \left \Vert \left ( \mathop{\mathrm{id}}\nolimits- \Pi_{\Gamma_{2}'\setminus \Gamma_{1}'} \right ) \left ( \Psi^{(1)}\left ( Z,t\right ) \right )\right \Vert \\ \begin{split} &\le \sum_{Z\in\caP_{0}(\Gamma),\; Z\cap \Lambda_{n}^{c}\neq \emptyset } \sum_{m\ge 0} \sum_{X\subset Z,\; X(m)=Z} \\ &\quad\quad\quad \left[ \sup_{t\in[0,1]}\left \Vert \left ( \mathop{\mathrm{id}}\nolimits-\Pi_{\Gamma_{2}'\setminus \Gamma_1'} \right ) \Delta_{X(m)}\left ( \tau_{t,1}^{\Phi}\left ( \Phi^{(1)}\left ( X; t\right ) \right ) \right ) \right \Vert \right] \end{split}\\ &\le \sum_{m\ge 0} \sum_{X \in {\mathcal P}_0(\Gamma),\; X(m)\cap \Lambda_{n}^{c}\neq \emptyset } \sup_{t\in[0,1]}\left \Vert\left ( \mathop{\mathrm{id}}\nolimits- \Pi_{\Gamma_{2}'\setminus \Gamma_{1}'} \right ) \Delta_{X(m)}\left ( \tau_{t,1}^{\Phi}\left ( \Phi^{(1)}\left ( X; t\right ) \right ) \right ) \right \Vert\\ &\le2\sum_{m\ge 0} \sum_{X\in\caP_{0}(\Gamma),\; X(m)\cap \Lambda_{n}^{c}\neq \emptyset , X(m)\cap\left ( \Gamma_{2}'\setminus \Gamma_{1}'\right )^{c}\neq\emptyset} \sup_{t\in[0,1]}\left \Vert\Delta_{X(m)}\left ( \tau_{t,1}^{\Phi}\left ( \Phi^{(1)}\left ( X; t\right ) \right ) \right ) \!\right \Vert\\ \begin{split} & \le2\sum_{m\ge 0} \sum_{X\in\caP_{0}(\Gamma),\; X(m)\cap \Lambda_{n}^{c}\neq \emptyset , X(m)\cap\left ( \Gamma_{2}'\setminus \Gamma_{1}'\right )^{c}\neq\emptyset}\\ &\quad\quad\quad\quad\quad\quad\quad \left[ 
\sup_{t\in[0,1]} \frac{4\left \Vert \Phi^{(1)}\left ( X; t\right ) \right \Vert}{C_{F}}\left ( e^{2I(\Phi)}-1\right )\left \vert X\right \vert G_{F}\left ( m\right ) \right] \end{split} \\ \begin{split} &=\frac{8}{C_{F}}\left ( e^{2I(\Phi)}-1\right ) \sum_{m\ge 0} \sum_{X\in\caP_{0}(\Gamma),\; X(m)\cap \Lambda_{n}^{c}\neq \emptyset , X(m)\cap\left ( \Gamma_{2}'\setminus \Gamma_{1}'\right )^{c}\neq\emptyset } \\ &\quad\quad\quad\quad\quad\quad\quad \left[ \sup_{t\in[0,1]}\left ( \left \Vert \Phi^{(1)}\left ( X; t\right ) \right \Vert\right ) \left \vert X\right \vert G_{F}\left ( m\right ) \right] \end{split} \label{remi} \end{align} For the fourth inequality, we used Theorem~\ref{tni}~(iii). From the definition of $ \Phi^{(1)}$, we have $ \Phi^{(1)}\left ( X; t\right ) =0$ unless $X$ has a non-empty intersection with at least two of $\Gamma_{1}$, $\Gamma_{2}^c$, $\Gamma_{2}\setminus \Gamma_{1}$. In particular, we have $ \Phi^{(1)}\left ( X; t\right ) =0$ unless $X\cap \Gamma_{1}\neq\emptyset, X\cap \Gamma_{2}^{c}\neq\emptyset$ or $X\cap\left ( \Gamma_{2}\setminus \Gamma_{1}\right )\neq\emptyset, X\cap\left ( \Gamma_{2}\setminus \Gamma_{1}\right )^{c}\neq\emptyset$. Therefore, if $ \Phi^{(1)}\left ( X; t\right ) \neq 0$, there must be $x\in\Gamma_{1}$, $y\in\Gamma_{2}^{c}$ with $X\ni x, y$ or $x\in\Gamma_{2}\setminus \Gamma_{1}$, $y\in\left ( \Gamma_{2}\setminus \Gamma_{1}\right )^{c}$ with $X\ni x, y$. We also note that if $X(m)\cap \Lambda_{n}^{c}\neq \emptyset$ and $X(m)\cap\left ( \Gamma_{2}'\setminus \Gamma_{1}'\right )^{c}\neq\emptyset$, then we have $d(X, \Lambda_{n}^{c})\le m$ and $d(X,\left ( \Gamma_{2}'\setminus \Gamma_{1}'\right )^{c})\le m$. 
Therefore we have \begin{align} &(\ref{remi}) \le \frac{8}{C_{F}}\left ( e^{2I(\Phi)}-1\right ) \left ( \sum_{x\in\Gamma_{1}}\sum_{{y\in\Gamma_{2}^{c}}} +\sum_{x\in\Gamma_{2}\setminus \Gamma_{1}}\sum_{y\in\left ( \Gamma_{2}\setminus \Gamma_{1}\right )^{c}} \right )\\ &\sum_{m\ge 0} \sum_{X\in\caP_{0}(\Gamma),\; d(X, \Lambda_{n}^{c})\le m,\; d(X,\left ( \Gamma_{2}'\setminus \Gamma_{1}'\right )^{c})\le m,\; X\ni x,y} \sup_{t\in[0,1]}\left ( \left \Vert \Phi^{(1)}\left ( X; t\right ) \right \Vert\right ) \left \vert X\right \vert G_{F}\left ( m\right )\\ &=\frac{8}{C_{F}}\left ( e^{2I(\Phi)}-1\right ) \left ( \sum_{x\in\Gamma_{1}}\sum_{{y\in\Gamma_{2}^{c}}} +\sum_{x\in\Gamma_{2}\setminus \Gamma_{1}}\sum_{y\in\left ( \Gamma_{2}\setminus \Gamma_{1}\right )^{c}} \right ) \sum_{m\ge 0} f_{n}(m,x,y)G_{F}\left ( m\right ). \end{align} The last part converges to $0$ as $n\to\infty$ because of (\ref{bnbn}). This proves (\ref{smsm}), and hence that~\eqref{vt} converges. \\ {\it Step 3.} Next we decompose $\Psi^{(1)}$ into a $\Gamma_{2}'\setminus \Gamma_{1}'$-part \begin{align} \tilde\Psi(Z,t):=\Pi_{\Gamma_{2}'\setminus \Gamma_{1}'} \left ( \Psi^{(1)}(Z,t) \right ) \end{align} and the rest. Clearly, we have $\tilde \Psi\in {\mathcal B}_{\tilde F}([0,1])$. Note that \begin{align}\label{hvh} H_{\Lambda_{n}, \tilde\Psi}(t)+V_{n}(t) = H_{\Lambda_{n}, \Psi^{(1)}}(t). \end{align} As a uniform limit of $[0,1]\ni t\mapsto V_{n}(t)\in {\mathcal A}_{\Gamma}$, $[0,1]\ni t\mapsto V(t)\in {\mathcal A}_{\Gamma}$ is norm-continuous. Since $\tilde \Psi\in {\mathcal B}_{\tilde F}([0,1])$, $[0,1]\ni t\mapsto \tau_{t,s}^{\tilde\Psi}\left ( V(t)\right )\in {\mathcal A}_{\Gamma}$ is also norm-continuous, for each $s\in[0,1]$. 
Therefore, for each $s\in [0,1]$, there is a unique norm-differentiable map $W^{(s)}$ from $[0,1]$ to $ {\mathcal U}\left ( {\mathcal A}_{\Gamma}\right )$ such that \begin{align} \frac{d}{dt} W^{(s)}(t)=-i \tau_{t,s}^{\tilde\Psi}\left ( V(t)\right ) W^{(s)}(t),\quad W^{(s)}(s)=\mathbb I. \end{align} The solution is given by \begin{align}\label{wsexp} W^{(s)}(t) =\sum_{k=0}^{\infty }(-i)^{k} \int_{s}^{t}ds_{1}\int_{s}^{s_{1}}ds_{2}\cdots \int_{s}^{s_{k-1}}ds_{k} \tau_{s_{1},s}^{\tilde\Psi}\left ( V(s_{1})\right ) \cdots \tau_{s_{k},s}^{\tilde\Psi}\left ( V(s_{k})\right ). \end{align} Analogously, for each $s\in[0,1]$ and $n\in{\mathbb N}$, there is a unique norm-differentiable map $W_{n}^{(s)}$ from $[0,1]$ to $ {\mathcal U}\left ( {\mathcal A}_{\Gamma}\right )$ such that \begin{align} \frac{d}{dt} W_{n}^{(s)}(t)=-i \tau_{t,s}^{(\Lambda_{n}),\tilde\Psi}\left ( V_{n}(t)\right ) W_{n}^{(s)}(t),\quad W_{n}^{(s)}(s)=\mathbb I. \end{align} This differential equation can be solved in the same way as in equation~\eqref{wsexp}. By the uniform convergence (\ref{kub}), we then have \begin{align} \lim_{n\to\infty}\sup_{t\in[0,1]}\left \Vert W_{n}^{(s)}(t)- W^{(s)}(t) \right \Vert =0. \end{align} From this and Theorem \ref{tni} (iv) for $\Psi^{(1)}, \tilde \Psi\in {\mathcal B}_{\tilde F}([0,1])$, we have \begin{align}\label{limlim} \lim_{n\to\infty} \tau_{s,t}^{(\Lambda_{n}), \tilde\Psi}\circ \mathop{\mathrm{Ad}}\nolimits \left ( W_{n}^{(s)}(t)\right ) (A)= \tau_{s,t}^{ \tilde\Psi}\circ \mathop{\mathrm{Ad}}\nolimits \left ( W^{(s)}(t)\right ) (A),\\ \lim_{n\to\infty} \tau_{s,t}^{(\Lambda_{n}), \Psi^{(1)}}(A)= \tau_{s,t}^{ \Psi^{(1)}}(A), \end{align} for any $A\in{\mathcal A}_{\Gamma}$. 
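The expansion \eqref{wsexp} is the standard Dyson series for a time-ordered exponential. As an illustrative aside (not part of the argument: the infinite-dimensional generator $\tau_{t,s}^{\tilde\Psi}(V(t))$ is replaced by a made-up Hermitian $2\times 2$ matrix $H(t)$), one can check numerically that a truncated Dyson series reproduces a direct integration of $\frac{d}{dt}W(t)=-iH(t)W(t)$, $W(0)=\mathbb I$:

```python
# Sanity check of the Dyson series against an RK4 integration of
# dW/dt = -i H(t) W(t), W(0) = 1, for a made-up Hermitian 2x2 generator.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def H(t):  # time-dependent Hermitian generator (arbitrary choice)
    return [[t + 0j, 0.3 + 0j], [0.3 + 0j, -t + 0j]]

N, T = 400, 1.0
h = T / N
grid = [k * h for k in range(N + 1)]
I2 = [[1 + 0j, 0j], [0j, 1 + 0j]]

# Reference solution by fourth-order Runge-Kutta.
def rhs(t, M):
    return scal(-1j, mul(H(t), M))

W = [row[:] for row in I2]
for k in range(N):
    t = grid[k]
    k1 = rhs(t, W)
    k2 = rhs(t + h / 2, add(W, scal(h / 2, k1)))
    k3 = rhs(t + h / 2, add(W, scal(h / 2, k2)))
    k4 = rhs(t + h, add(W, scal(h, k3)))
    W = add(W, scal(h / 6, add(add(k1, scal(2, k2)), add(scal(2, k3), k4))))

# Truncated Dyson series: D_0 = 1 and D_k(t) = -i * int_0^t H(u) D_{k-1}(u) du,
# with the iterated integrals evaluated by the trapezoidal rule on the grid.
D = [[row[:] for row in I2] for _ in grid]   # current term D_k on the grid
S = [[row[:] for row in I2] for _ in grid]   # partial sum of the series
for order in range(1, 15):
    newD = [[[0j, 0j], [0j, 0j]]]
    acc = [[0j, 0j], [0j, 0j]]
    for k in range(1, N + 1):
        f0 = mul(H(grid[k - 1]), D[k - 1])
        f1 = mul(H(grid[k]), D[k])
        acc = add(acc, scal(h / 2, add(f0, f1)))
        newD.append(scal(-1j, acc))
    D = newD
    S = [add(S[k], D[k]) for k in range(N + 1)]

diff = max(abs(S[N][i][j] - W[i][j]) for i in range(2) for j in range(2))
```

Since the generator here is Hermitian, the reference solution is unitary, and the truncated series agrees with it up to quadrature and truncation error; the factorial decay of the iterated integrals is what makes expansions like \eqref{wsexp} norm-convergent.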
Note that for any $A\in{\mathcal A}_{\Gamma}$ \begin{align} &\frac{d}{dt} \tau_{s,t}^{(\Lambda_{n}), \tilde\Psi}\circ \mathop{\mathrm{Ad}}\nolimits \left ( W_{n}^{(s)}(t)\right ) (A)\\ \begin{split} &=-i\left[ H_{\Lambda_{n}, \tilde\Psi}(t), \tau_{s,t}^{(\Lambda_{n}), \tilde\Psi}\circ \mathop{\mathrm{Ad}}\nolimits \left ( W_{n}^{(s)}(t)\right ) (A) \right] \\ &\quad\quad\quad\quad\quad\quad\quad -i\tau_{s,t}^{(\Lambda_{n}), \tilde\Psi}\left ( \left[ \tau_{t,s}^{(\Lambda_{n}), \tilde\Psi}\left ( V_{n}(t)\right ), \mathop{\mathrm{Ad}}\nolimits \left ( W_{n}^{(s)}(t)\right ) (A) \right] \right ) \end{split}\\ &=-i\left[ H_{\Lambda_{n}, \tilde\Psi}(t)+V_{n}(t), \tau_{s,t}^{(\Lambda_{n}), \tilde\Psi}\circ \mathop{\mathrm{Ad}}\nolimits \left ( W_{n}^{(s)}(t)\right ) (A) \right]\nonumber\\ &=-i\left[ H_{\Lambda_{n}, \Psi^{(1)}}(t), \tau_{s,t}^{(\Lambda_{n}), \tilde\Psi}\circ \mathop{\mathrm{Ad}}\nolimits \left ( W_{n}^{(s)}(t)\right ) (A) \right]. \end{align} We used (\ref{inv}) for the second equality and (\ref{hvh}) for the third equality. On the other hand, for any $A\in{\mathcal A}_{\Gamma}$, we have \begin{align} \frac{d}{dt} \tau_{s,t}^{(\Lambda_{n}), \Psi^{(1)}} (A)=-i\left[ H_{\Lambda_{n}, \Psi^{(1)}}(t), \tau_{s,t}^{(\Lambda_{n}), \Psi^{(1)}} (A) \right]. \end{align} Therefore, $\tau_{s,t}^{(\Lambda_{n}), \tilde\Psi}\circ \mathop{\mathrm{Ad}}\nolimits \left ( W_{n}^{(s)}(t)\right ) (A)$ and $ \tau_{s,t}^{(\Lambda_{n}), \Psi^{(1)}} (A)$ satisfy the same differential equation. Also note that we have \[ \tau_{s,s}^{(\Lambda_{n}), \tilde\Psi}\circ \mathop{\mathrm{Ad}}\nolimits \left ( W_{n}^{(s)}(s)\right ) (A)= \tau_{s,s}^{(\Lambda_{n}), \Psi^{(1)}} (A)=A. \] Therefore, we get \begin{align} \tau_{s,t}^{(\Lambda_{n}), \tilde\Psi}\circ \mathop{\mathrm{Ad}}\nolimits \left ( W_{n}^{(s)}(t)\right ) (A) =\tau_{s,t}^{(\Lambda_{n}), \Psi^{(1)}} (A). 
\end{align} By (\ref{limlim}), we obtain \begin{align} \tau_{s,t}^{\tilde\Psi}\circ \mathop{\mathrm{Ad}}\nolimits \left ( W^{(s)}(t)\right ) (A) =\tau_{s,t}^{\Psi^{(1)}} (A),\quad A\in{\mathcal A}_{\Gamma}, \; t,s\in[0,1]. \end{align} Taking inverses, we get \begin{align}\label{www} \mathop{\mathrm{Ad}}\nolimits \left ( W^{(s)}(t)^{*}\right )\circ\tau_{t,s}^{\tilde\Psi} =\tau_{t,s}^{\Psi^{(1)}},\; t,s\in[0,1]. \end{align} {\it Step 4.} Combining (\ref{ttt}) and (\ref{www}), we have \begin{align} \tau_{1,0}^{\Phi}= \tau_{1,0}^{\Phi^{(0)}}\tau_{0,1}^{ \Psi^{(1)}} =\tau_{1,0}^{\Phi^{(0)}}\circ\mathop{\mathrm{Ad}}\nolimits \left ( \left ( W^{(1)}(0)\right )^{*}\right )\circ\tau_{0,1}^{\tilde\Psi}. \end{align} Setting \begin{align} \beta_{\Gamma_2'\setminus \Gamma_1'}:=\tau_{0,1}^{\tilde\Psi},\quad u:=\tau_{1,0}^{\Phi^{(0)}}\left ( \left ( W^{(1)}(0)\right )^{*}\right ) \end{align} completes the proof. \end{proof} \section{Long-range entanglement}\label{sec:lre} An interesting problem is to find conditions that lead to a trivial superselection structure. Topological order is associated with ``long-range entanglement'' that cannot be removed by local operations. This should be contrasted with product states, which are not entangled at all. Hence one is interested in states that cannot be transformed into product states by such local operations. The product states are said to be in the topologically trivial phase~\cite{ChenGW}. The goal of this section is to show that such a topologically trivial state indeed leads to a trivial superselection structure, at least when we restrict to strictly localized sectors as in equation~\eqref{eq:sselect}. To make this precise, we recall that the equivalence relation defined in terms of finite-depth quantum circuits is somewhat too restrictive in the thermodynamic limit, and one has to look at limits of such automorphisms as well. In addition, we will only require that a cone-like region can be ``decoupled''. 
Because the anyons are assumed to be transportable, the choice of cone is not important. We therefore adopt the following definition. \begin{definition}\label{def:lre} Let $\mathcal{A}_\Gamma$ be the quasi-local algebra of a quantum spin system with $\Gamma = \mathbb{Z}^\nu$. We say that a pure state $\omega$ has \emph{long-range entanglement} (LRE) if there is no quasi-factorizable automorphism $\alpha \in \operatorname{Aut}(\mathcal{A}_\Gamma)$ such that $\omega \circ \alpha$ is a product state with respect to some cone $\Lambda$. Here we say that a state is a product state for a cone $\Lambda$ if it is of the form $\omega = \omega_\Lambda \otimes \omega_{\Lambda^c}$, with $\omega_\Lambda$ a state on $\alg{A}_\Lambda$, and similarly for $\omega_{\Lambda^c}$. \end{definition} \begin{remark} Since the idea is to capture the trivial phase, the set of allowed automorphisms is dictated by the equivalence relation one puts on the ground states. Our proofs depend on $\alpha$ being quasi-factorizable, which is why we choose this class of automorphisms in our definition of long-range entanglement. As we show in Section~\ref{sec:approxsplit}, the notion of quasi-factorizable automorphisms includes natural examples of gapped paths of uniformly bounded finite-range interactions. As we show below, any state that is not long-range entangled has a trivial sector structure. In fact, the sector structure for states in other phases is also preserved under applying quasi-factorizable automorphisms, if one makes the additional assumption of \emph{approximate Haag duality}~\cite{Ogata21a}. \end{remark} The condition that $\Gamma = \mathbb{Z}^\nu$ is not essential. However, in the general case one should define the appropriate analogue of a cone. This depends on the localization properties of the excitations one would want to consider, but for the definition to be non-trivial a cone should at least have infinitely many sites. 
Note that for a state to be \emph{not} long-range entangled, we only require the condition to hold for a \emph{single} cone $\Lambda$. That is, a state is not long-range entangled if we can disentangle the cone $\Lambda$ from its complement. Typically the states we are interested in have a large degree of `homogeneity', for example because they will be translation invariant. Moreover, we will be interested in transportable charges, in the sense that we can move a charge localized in a specific cone to \emph{any} other cone with a unitary operator. Thus typically one expects that if it is possible to decouple a single cone in this situation, one can do it for more cones. Since we will not actually need that, we restrict to this simpler definition. In the following we first consider the situation where the pure reference state $\omega_0$ is a product state with respect to a fixed cone $\Lambda$, i.e., $\omega_0 = \omega_\Lambda \otimes \omega_{\Lambda^c}$ for some states $\omega_\Lambda$ and $\omega_{\Lambda^c}$ on $\mathcal{A}_\Lambda$ and $\mathcal{A}_{\Lambda^c}$ respectively. Below we consider general pure states without long-range entanglement. We first recall the following Lemma (compare with e.g.~\cite{Matsui01,Matsui10}). \begin{lemma}\label{lem:split} Let $\varphi$ be a pure state on $\mathcal{A}_\Gamma$ and suppose that there is a cone $\Lambda$ such that $\varphi$ is quasi-equivalent to $\varphi_\Lambda \otimes \varphi_{\Lambda^c}$, where $\varphi_\Lambda := \varphi|\mathcal{A}_\Lambda$. Then $\mathcal{R}_\Lambda := \pi_\varphi(\alg{A}_\Lambda)''$ is a factor of Type I, and so is $\mathcal{R}_{\Lambda^c}$. Moreover, Haag duality holds: $\mathcal{R}_\Lambda = \mathcal{R}_{\Lambda^c}'$. \end{lemma} \begin{proof} Write $(\pi_\varphi, \mathcal{H}_\varphi, \Omega_\varphi)$ for the GNS representation of $\varphi$. Because $\varphi$ is pure, $\pi_\varphi(\alg{A}_\Gamma)''$ is a Type I factor.
Note that $\mathcal{R}_\Lambda \vee \mathcal{R}_{\Lambda^c} = \alg{B}(\mathcal{H}_{\varphi})$. Here $\mathcal{R}_\Lambda \vee \mathcal{R}_{\Lambda^c}$ is the smallest von Neumann algebra containing both $\mathcal{R}_\Lambda$ and $\mathcal{R}_{\Lambda^c}$. Taking the commutant of this equation, and noting that by locality we have that $\mathcal{R}_\Lambda \subset \mathcal{R}_{\Lambda^c}'$, one obtains \[ \mathcal{R}_\Lambda' \cap \mathcal{R}_\Lambda \subset \mathcal{R}_\Lambda' \cap \mathcal{R}_{\Lambda^c}' = \mathbb{C} I. \] Hence $\mathcal{R}_\Lambda$ is a factor, and so is $\mathcal{R}_{\Lambda^c}$. Since $\varphi$ is quasi-equivalent to $\varphi_\Lambda \otimes \varphi_{\Lambda^c}$ it follows that there is a normal isomorphism $\tau: \pi_\varphi(\alg{A}_\Gamma)'' \to \pi_{\varphi_\Lambda}(\alg{A}_\Lambda)''\,\, \overline{\otimes}\,\, \pi_{\varphi_{\Lambda^c}}(\alg{A}_{\Lambda^c})''$. The notation $\mathcal{N} \overline{\otimes} \mathcal{M}$ denotes the von Neumann-algebraic tensor product, which by definition is the smallest von Neumann algebra containing the algebraic tensor product $\mathcal{N} \odot \mathcal{M}$. Because the tensor product of two von Neumann algebras is Type I if and only if both factors are Type I, it follows that $\pi_{\varphi_\Lambda}(\alg{A}_\Lambda)''$ must be Type I, and similarly for $\pi_{\varphi_{\Lambda^c}}(\alg{A}_{\Lambda^c})''$. Finally, since $\mathcal{R}_\Lambda$ is a factor, every subrepresentation of $\pi_\Lambda := \pi_\varphi | \alg{A}_\Lambda$ is quasi-equivalent to $\pi_{\Lambda}$ itself. This is true in particular for $\pi_{\varphi_\Lambda}$, and hence $\mathcal{R}_\Lambda$ must be of Type I as well. The same is true for $\mathcal{R}_{\Lambda^c}$.
Finally, since $\mathcal{R}_{\Lambda}$ is of Type I, there are Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$ and a unitary $U : \mathcal{H}_{\varphi} \to \mathcal{H}_1 \otimes \mathcal{H}_2$, with \[ U \mathcal{R}_\Lambda U^* = \alg{B}(\mathcal{H}_1) \> \overline{\otimes} \> I, \quad\textrm{ and }\quad U \mathcal{R}_{\Lambda^c} U^* \subset I \> \overline{\otimes} \> \alg{B}(\mathcal{H}_2). \] The inclusion follows because $\mathcal{R}_{\Lambda^c} \subset \mathcal{R}_\Lambda'$ by locality, and because $(\alg{B}(\mathcal{H}_1) \overline{\otimes} I)' = I \> \overline{\otimes}\> \alg{B}(\mathcal{H}_2)$. Because $\mathcal{R}_\Lambda$ and $\mathcal{R}_{\Lambda^c}$ generate $\alg{B}(\mathcal{H}_{\varphi})$, it follows that the inclusion must in fact be an equality. Therefore $\mathcal{R}_\Lambda = \mathcal{R}_{\Lambda^c}'$. \end{proof} \begin{remark} As is shown in the references cited above, the factors being Type~I implies that $\varphi$ is quasi-equivalent to a product state. However, Haag duality does not necessarily imply the split property. \end{remark} This allows us to prove that if the reference is a product state with respect to a cone, there are no non-trivial representations that are both strictly localizable and transportable. In other words, the superselection structure is trivial. We will in fact slightly relax the superselection criterion, and only assume that the representations $\pi$ of interest are \emph{quasi-}equivalent to $\pi_0$. More precisely, we will be interested in representations $\pi$ such that \begin{equation} \label{eq:nsselect} \pi_0|\mathcal{A}_{\Lambda^c} \sim_{q.e.} \pi|\mathcal{A}_{\Lambda^c}, \end{equation} for \emph{all} cones $\Lambda$. This is true in particular when $\pi$ is unitarily equivalent to $n \cdot \pi_0$ when restricted to observables outside a cone. Here $n \cdot \pi_0$ is the direct sum of $n$ copies of $\pi_0$, as usual.
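The commutant identity $(\alg{B}(\mathcal{H}_1) \overline{\otimes} I)' = I \> \overline{\otimes}\> \alg{B}(\mathcal{H}_2)$ used in the proof of Lemma~\ref{lem:split} can be illustrated concretely in finite dimensions. The following numerical sketch is a toy check only and is not part of the argument (the dimensions and the use of \texttt{numpy} are our own illustrative choices): it verifies that the commutant of $M_2 \otimes I$ inside $M_4$ is exactly $4$-dimensional, matching $\dim(I \otimes M_2)$.

```python
import numpy as np

# Toy check of the commutant identity (B(H1) ⊗ I)' = I ⊗ B(H2)
# for H1 = H2 = C^2: the commutant of M_2 ⊗ I inside M_4 should be
# I ⊗ M_2, which has dimension 4.
d1, d2 = 2, 2
n = d1 * d2
I1, I2 = np.eye(d1), np.eye(d2)

# Elementary matrices E_ij span B(H1) = M_2.
gens = [np.outer(I1[i], I1[j]) for i in range(d1) for j in range(d1)]

# Encode [M, A ⊗ I] = 0 for all generators A as a linear system in
# vec(M), using the column-stacking identity
# vec(XM - MX) = (I ⊗ X - X^T ⊗ I) vec(M).
rows = []
for A in gens:
    X = np.kron(A, I2)
    rows.append(np.kron(np.eye(n), X) - np.kron(X.T, np.eye(n)))
L = np.vstack(rows)

# The null space of L is exactly the commutant of M_2 ⊗ I.
commutant_dim = n * n - np.linalg.matrix_rank(L)
assert commutant_dim == d2 * d2  # = dim(I ⊗ B(H2)) = 4
```

The same computation with unequal $d_1, d_2$ confirms the general pattern $\dim = d_2^2$, consistent with the commutant being $I \otimes \alg{B}(\mathcal{H}_2)$.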
The reason to allow this relaxation is that such representations can be constructed naturally when considering non-abelian models~\cite{Szlachanyiv93,Naaijkens2015}. Note that the condition that~\eqref{eq:nsselect} should hold for \emph{every} cone $\Lambda$ is very strong, and as we argued above, captures precisely the localization properties one expects from anyons in 2D. The fact that it holds for every cone $\Lambda$ often allows us to draw conclusions about all cones from a result for a single, fixed cone (up to quasi-equivalence). The following proof is inspired by Proposition~4.2 of~\cite{Mueger99}. \begin{theorem}\label{thm:trivial} Let $\omega_0$ be a pure state such that its GNS representation $\pi_0$ is quasi-equivalent to $\pi_\Lambda \otimes \pi_{\Lambda^c}$, with $\pi_\Lambda$ and $\pi_{\Lambda^c}$ irreducible representations of $\mathcal{A}_\Lambda$ and $\mathcal{A}_{\Lambda^c}$ respectively. Consider $\omega_0$ to be the reference state in the superselection criterion. Then the corresponding sector theory is trivial, in the sense that each representation $\pi$ satisfying the selection criterion~\eqref{eq:nsselect} is quasi-equivalent to $\pi_0$. In particular, if $\pi$ is irreducible, then $\pi$ and $\pi_0$ are equivalent. \end{theorem} \begin{proof} Because $\pi\vert_{{\mathcal A}_{\Lambda^c}}$ is quasi-equivalent to $\pi_0\vert_{{\mathcal A}_{\Lambda^c}}$, which is quasi-equivalent to $\pi_{\Lambda^c}$, and $\pi_{\Lambda^c}$ is irreducible, there is a Hilbert space ${\mathcal K}$ and a unitary $W:{\mathcal H}\to {\mathcal H}_{\Lambda^c}\otimes{\mathcal K}$ such that \begin{align} W \pi(B) W^*=\pi_{\Lambda^c}(B)\otimes\mathbb I_{{\mathcal K}},\quad B\in {\mathcal A}_{\Lambda^c}. \end{align} Because $\pi_{\Lambda^c}({\mathcal A}_{\Lambda^c})''$ is a Type I factor, it follows that \[ \left(\pi_{\Lambda^c}(\alg{A}_{\Lambda^c}) \otimes \mathbb I_{{\mathcal K}}\right)' = \mathbb I_{{\mathcal H}_{\Lambda^c}} \otimes {\mathcal B}({\mathcal K}).
\] By the commutativity of ${\mathcal A}_{\Lambda}$ and ${\mathcal A}_{\Lambda^c}$, it follows that for all $A \in {\mathcal A}_{\Lambda}$, we have that $W \pi(A) W^* \in \left(\pi_{\Lambda^c}(\alg{A}_{\Lambda^c}) \otimes \mathbb I_{{\mathcal K}}\right)' $. Thus we see that there is a representation $\rho$ of ${\mathcal A}_{\Lambda}$ on ${\mathcal K}$ such that \begin{align}\label{wpw} W\pi(A)W^*=\mathbb I_{{\mathcal H}_{\Lambda^c}}\otimes \rho(A),\quad A\in {\mathcal A}_{\Lambda}. \end{align} Consider a cone $\Lambda'$ such that $\Lambda \subset (\Lambda')^c$. Then, by applying the superselection criterion and restricting to the cone $\Lambda$, it follows that the representation $\pi\vert_{{\mathcal A}_{ \Lambda}}$ is quasi-equivalent to $\pi_0\vert_{{\mathcal A}_{\Lambda}}$, which in turn is quasi-equivalent to the irreducible representation $\pi_{\Lambda}$. On the other hand, from equation~\eqref{wpw}, $\rho$ is quasi-equivalent to $\pi\vert_{{\mathcal A}_{\Lambda}}$. Hence $\rho$ is quasi-equivalent to the irreducible $\pi_{\Lambda}$. Therefore, there are a Hilbert space ${\mathcal K}_1$ and a unitary $V: {\mathcal K}\to {\mathcal H}_\Lambda\otimes {\mathcal K}_1$ such that \begin{align} V\rho(A)V^*=\pi_{\Lambda}(A)\otimes \mathbb I_{{\mathcal K}_1},\quad A\in{\mathcal A}_{\Lambda}. \end{align} Hence we get \begin{align} \left ( \mathbb I_{\Lambda^c}\otimes V\right ) W\pi(AB) W^* \left ( \mathbb I_{\Lambda^c}\otimes V\right )^* =\pi_{\Lambda^c}(B)\otimes\pi_{\Lambda}(A)\otimes\mathbb I_{{\mathcal K}_1} \end{align} for all $ A\in{\mathcal A}_{\Lambda}$ and $B\in{\mathcal A}_{\Lambda^c}$. As the right hand side is quasi-equivalent to $\pi_0$, $\pi$ is quasi-equivalent to $\pi_0$. \end{proof} \begin{remark} Note that the assumption in the theorem is a 2D analogue of the split property for 1D spin chains. It should be noted that it does \emph{not} hold for models such as the toric code, which have non-trivial excitations (or sectors) localized in cones. 
The reason is that the ground state has long-range entanglement and cannot be converted into a product state with local operations. However, as we already mentioned in the introduction, we still have the \emph{approximate} or \emph{distal} split property: a Type I factor $\mathcal{R}_{\Lambda_1} \subset F \subset \mathcal{R}_{\Lambda_2^c}'$ exists if the boundaries of the cones $\Lambda_1 \subset \Lambda_2$ are sufficiently distant \cite{FiedlerN}. What is ``sufficiently distant'' depends on the model, as mentioned in the introduction. In general, for example if we perturb with a quasi-local automorphism with a non-zero Lieb-Robinson bound, we need to have that the cone $\Lambda_2$ has a wider opening angle than $\Lambda_1$ as well. In any case, if the (strict) split property does not hold, it is no longer possible to decompose the representation as a tensor product of representations of $\alg{A}_{\Lambda}$ and $\alg{A}_{\Lambda^c}$. \end{remark} The theorem says that, as expected, the product state does not have any non-trivial superselection sectors. For a general state without long-range entanglement, we can try to use the quasi-local automorphism $\alpha$ from Definition~\ref{def:lre} to relate the sectors of $\omega \circ \alpha$ with those of $\omega$. In general there is no reason why $\omega$ should be quasi-equivalent to $\omega \circ \alpha$, so it does not follow directly that $\omega \circ \alpha$ has trivial sectors. However, if $\alpha$ comes from quasi-local dynamics satisfying Theorem~\ref{thm:quasiauto}, we can relate the sectors of $\pi_\omega$ and $\pi_\omega \circ \alpha$. The key point is that we can almost ``factorize'' the automorphism $\alpha$ into automorphisms acting on a cone $\Lambda$ and its complement, up to conjugation with a unitary in $\mathcal{A}_\Gamma$ and an automorphism acting non-trivially only near the border of $\Lambda$.
More precisely, we will consider $\alpha$ that are quasi-factorizable in the sense of Definition~\ref{def:quasifactor}. In Section~\ref{sec:approxsplit} we will show how such automorphisms can be obtained using Theorem~\ref{thm:quasiauto}. \begin{theorem}\label{thm:invariant} Let $({\mathcal H}_0,\pi_0)$ be a representation. Let $\alpha$ be a quasi-local automorphism such that for every cone $\Lambda$, we can find an inclusion of cones $\Gamma_1 \subset \Lambda \subset \Gamma_2$ such that $\alpha$ is quasi-factorizable with respect to this inclusion. Suppose that a representation $\pi$ satisfies the superselection criterion for $\pi_0$ in the sense that for all cones $\Lambda$ in $\mathbb{Z}^2$, we have \begin{align} \pi\vert_{{\mathcal A}_{\Lambda^c}}\sim_{q.e.} \pi_0\vert_{{\mathcal A}_{\Lambda^c}}. \end{align} Then $\pi \circ \alpha$ satisfies the superselection criterion~\eqref{eq:nsselect} for $\pi_0 \circ \alpha$. \end{theorem} \begin{proof} Let $\Lambda$ be a cone. We will show that \begin{align} \pi\circ\alpha\vert_{{\mathcal A}_{\Lambda^c}}\sim_{q.e.} \pi_0\circ\alpha\vert_{{\mathcal A}_{\Lambda^c}}. \end{align} By assumption we can factorize $\alpha$ as \begin{align} \alpha=\mathop{\mathrm{Ad}}\nolimits(u)\circ\widetilde{\Xi} \circ \left (\alpha_{\Lambda}\otimes \alpha_{\Lambda^c}\right ), \end{align} as in Definition~\ref{def:quasifactor}. From this, for any $A\in{\mathcal A}_{\Lambda^c}$, we have \begin{equation} \begin{split} \pi\circ\alpha(A) &=\pi\circ\mathop{\mathrm{Ad}}\nolimits(u)\circ\widetilde{\Xi}(\alpha_{\Lambda^c}(A)) \\ &=\mathop{\mathrm{Ad}}\nolimits\left (\pi(u)\right )\circ \pi \circ \widetilde{\Xi}(\alpha_{\Lambda^c}(A)) \\ &=\mathop{\mathrm{Ad}}\nolimits\left (\pi(u)\right )\circ \pi \vert_{{\mathcal A}_{\Gamma_1^c}}\circ \widetilde{\Xi}(\alpha_{\Lambda^c}(A)).
\end{split} \end{equation} This implies \begin{align}\label{aaa} \pi\circ\alpha\vert_{{\mathcal A}_{\Lambda^c}} \sim_{q.e.} \pi \vert_{{\mathcal A}_{\Gamma_1^c}}\circ \widetilde{\Xi}\circ\alpha_{\Lambda^c}\vert_{{\mathcal A}_{\Lambda^c}}. \end{align} (In fact this is even a unitary equivalence). Similarly, we have \begin{align}\label{bbb} \pi_0\circ\alpha\vert_{{\mathcal A}_{\Lambda^c}} \sim_{q.e.} \pi_0 \vert_{{\mathcal A}_{\Gamma_1^c}}\circ \widetilde{\Xi} \circ \alpha_{\Lambda^c}\vert_{{\mathcal A}_{\Lambda^c}}. \end{align} Because we have $\pi\vert_{{\mathcal A}_{\Gamma_1^c}}\sim_{q.e.} \pi_0\vert_{{\mathcal A}_{\Gamma_1^c}}$ by virtue of the superselection criterion, we get \begin{align} \pi \vert_{{\mathcal A}_{\Gamma_1^c}}\circ \widetilde{\Xi}\circ \alpha_{\Lambda^c}\vert_{{\mathcal A}_{\Lambda^c}} \sim_{q.e.} \pi_0 \vert_{{\mathcal A}_{\Gamma_1^c}}\circ \widetilde{\Xi} \circ \alpha_{\Lambda^c}\vert_{{\mathcal A}_{\Lambda^c}}. \end{align} Combining this with (\ref{aaa}) and (\ref{bbb}), we get \begin{align} \pi\circ\alpha\vert_{{\mathcal A}_{\Lambda^c}}\sim_{q.e.} \pi_0\circ\alpha\vert_{{\mathcal A}_{\Lambda^c}}. \end{align} This proves the claim. \end{proof} Combining the two theorems in this section then shows that short-range entangled states indeed have a trivial sector structure. \begin{corollary}\label{cor:triviality} Let $({\mathcal H}_0,\pi_0)$ be an irreducible representation which factorizes as $\pi_0=\pi_{\Lambda}\otimes \pi_{\Lambda^c}$ for some cone $\Lambda$, where $(\pi_{\Lambda},{\mathcal H}_{\Lambda})$, $(\pi_{\Lambda^c},{\mathcal H}_{\Lambda^c})$ are irreducible representations of ${\mathcal A}_{\Lambda}$, ${\mathcal A}_{\Lambda^c}$ respectively. Let $\alpha$ be a quasi-local automorphism which is quasi-factorizable for all cones $\Lambda$. 
Suppose that a representation $\pi$ satisfies the superselection criterion for $\pi_0 \circ \alpha$ in the sense that for all cones $\widetilde\Lambda$ in $\mathbb{Z}^2$, we have \begin{align} \pi\vert_{{\mathcal A}_{\widetilde\Lambda^c}}\sim_{q.e.} \pi_0 \circ \alpha\vert_{{\mathcal A}_{\widetilde\Lambda^c}}. \end{align} Then $\pi$ is quasi-equivalent to $\pi_0 \circ \alpha$. In particular, if $\pi$ is irreducible, then $\pi$ and $\pi_0 \circ \alpha$ are equivalent. \end{corollary} \begin{proof} If $\alpha$ is a quasi-local automorphism, the same is true for $\alpha^{-1}$, and it is quasi-factorizable as well. Because $\pi$ satisfies the superselection criterion for $\pi_0 \circ \alpha$ and $\alpha^{-1}$ is a quasi-local automorphism, by Theorem \ref{thm:invariant}, $\pi\circ\alpha^{-1}$ satisfies the superselection criterion for $\pi_0\circ\alpha\circ\alpha^{-1}=\pi_0$. Then by Theorem \ref{thm:trivial}, $\pi\circ\alpha^{-1}$ is quasi-equivalent to $\pi_0$. From this, it follows that $\pi$ is quasi-equivalent to $\pi_0 \circ \alpha$. \end{proof} Note that this applies in particular to states which are not long-range entangled according to Definition~\ref{def:lre}. Indeed, suppose that $\omega$ is a pure state, and $\alpha$ a quasi-factorizable automorphism such that $\omega \circ \alpha = \omega_\Lambda \otimes \omega_{\Lambda^c}$ for some cone $\Lambda$ and states $\omega_\Lambda$ of $\mathcal{A}_{\Lambda}$ and $\omega_{\Lambda^c}$ of $\mathcal{A}_{\Lambda^c}$. Then $\omega_0 := \omega_\Lambda \otimes \omega_{\Lambda^c}$ is a pure state, and so must be $\omega_\Lambda$ and $\omega_{\Lambda^c}$, as otherwise we could write $\omega_0$ as a non-trivial convex combination of two distinct states. But then the GNS representation $\pi_0$ of $\omega_0$ satisfies the assumptions of Corollary~\ref{cor:triviality}. Since $\pi_0 \circ \alpha^{-1}$ is a GNS representation for $\omega$, it follows that $\omega$ has no non-trivial sectors.
\begin{remark} We argued that a state that satisfies the strict split property for a given cone is trivial in the sense that there are no anyonic excitations (superselection sectors). It is however possible to further classify this trivial sector, for example if there is an on-site symmetry $G$. In that case, it is natural to demand that two states are only in the same gapped phase if they can be connected by a continuous path of gapped Hamiltonians respecting the $G$-symmetry~\cite{ChenGW}. In two dimensions, the set of states that are in the trivial phase (i.e., containing the product state with respect to each site) can then be classified by a cocycle in $H^3(G, U(1))$~\cite{Ogata21}. However, in our definition, the absence of long-range entanglement does not necessarily imply that the state is such a product of single-site states. It seems plausible that if we demand the split property to hold for \emph{any} cone, this would follow. \end{remark} We conclude this section with a brief discussion. Here, we focussed on necessary conditions for the existence of anyons. While we have shown that long-range entanglement is a necessary condition, it remains an open problem to find \emph{sufficient} conditions. In particular, there is no guarantee that a state with long-range entanglement has any non-trivial sectors at all (and in fact given the selection criterion~\eqref{eq:nsselect} that should generally not be expected if the reference state is far from homogeneous). In addition, even if non-trivial sectors do exist, they are not necessarily anyons. In fact, in three or higher spatial dimensions, cone-localized sectors have bosonic or fermionic statistics (cf.~\cite{BuchholzF}), but in 2D anyons are a possibility, as for example the abelian quantum double models show~\cite{FiedlerN}.
Although there is a technical condition that implies the corresponding category is modular (which in particular implies that all sectors are anyons), the physical interpretation of this criterion is unclear~\cite[Thm. 5.3]{NaaijkensKL}. We focussed on the trivial phase here, but one can show that if there are \emph{non-}trivial sectors, the full braided tensor category describing the sectors is invariant under quasi-factorizable automorphisms~\cite{Ogata21a}. This requires that approximate Haag duality holds, a weaker version of Haag duality that can be shown to be stable under quasi-factorizable automorphisms. There is another natural generalization of the superselection criterion~\eqref{eq:sselect}, which does not require Haag duality, but a variant of the split property instead~\cite{ChaNN18}. Given that the spectral flow is quasi-local, it is natural to look at representations that can be localized in cones up to some exponentially decaying error. This leads to the notion of approximately localizable endomorphisms, and one can develop the full sector theory (including e.g. braiding of charges) using them. These properties are stable upon applying the quasi-local spectral flow. We should add the caveat that this is a result about \emph{approximately} localized sectors, i.e.\ localized up to some exponentially decaying error, and we cannot rule out that despite the absence of strictly localized sectors, there is a non-trivial \emph{approximately} localized sector. In abelian quantum double models, this can be ruled out by imposing an ``energy criterion'', essentially excluding any possible confined charges~\cite{ChaNN18}. We presently do not know if the absence of such sectors can be proven from more fundamental assumptions. For example, in the case of strictly localized sectors no such energy criterion is needed. The results in this section and in \cite{ChaNN18} strongly suggest that in a state with short-range entanglement, there are no approximately localizable sectors either.
\section{Approximate split property for cone algebras}\label{sec:approxsplit} We apply the results of Section~\ref{sec:stable} to two-dimensional models, and give natural examples of quasi-factorizable automorphisms. In Section~\ref{sec:lre} we have already discussed the split property for a cone and its complement. As already mentioned, this strong version does not hold for, for example, abelian quantum double models, where only a weaker version is true~\cite{FiedlerN,Naaijkens12}. This in turn is a key assumption in the stability of superselection sectors analysis in~\cite{ChaNN18}. Although there we only need the approximate split property for the ``unperturbed'' model, it is interesting to know if it is in fact a property of the whole phase. Hence, in this section, we show that for suitable perturbations this is indeed the case, and the perturbed model also satisfies the approximate split property. For simplicity we restrict to 2D systems and finite range interactions, although we expect that with a more careful analysis, the results extend to a wider class of interactions and to systems in three or more spatial dimensions. Let us recall that if $F$ is an $F$-function, $F_r(r) := e^{-r} F(r)$ is also an $F$-function. This is an example of a \emph{weighted} $F$-function in the terminology of Ref.~\cite{NSY}. Such weighted $F$-functions have favorable decay properties, as can be seen in the following Lemma. \begin{lemma} Let $(\Gamma,d)$ be $\mathbb{Z}^2$ with the usual metric. Then there is a $C>0$ such that we have the following estimate for all $m > \sqrt{2}$: \label{lem:gfdecay} \begin{equation} G_{F_r}(m) \leq C F(m-\sqrt{2}) m e^{-m}, \end{equation} where $G_{F_r}$ is as defined in equation~\eqref{gfdef}. \end{lemma} \begin{proof} By translation invariance of the metric and $\Gamma$ we do not need the supremum in equation~\eqref{gfdef}. 
Hence we get \begin{align*} G_{F_r}(m) &= \sum_{|x| \geq m} e^{-|x|} F(|x|) \\ & \leq 2 \pi \int_m^\infty r e^{-r+\sqrt{2}} F(r-\sqrt{2})\,dr \\ & \leq 2 \pi e^{\sqrt{2}} F(m-\sqrt{2}) \int_m^\infty r e^{-r}\,dr \\ & \leq 4 \pi e^{\sqrt{2}} F(m-\sqrt{2}) m e^{-m}. \end{align*} The first inequality can be seen by noting that \begin{equation} \int_x^{x+1} \int_y^{y+1} e^{-|(u,v)|+\sqrt{2}} F(|(u,v)|-\sqrt{2})\, du\, dv \geq e^{-|(x,y)|} F(|(x,y)|) \end{equation} for $x,y \geq 0$, since $F$ is positive and decreasing and every point $(u,v)$ of the unit square based at $(x,y)$ satisfies $|(u,v)| \leq |(x,y)| + \sqrt{2}$, and then passing to polar coordinates. \end{proof} It is possible to generalize the lemma to other suitable weightings $F_{g}(r) := e^{-g(r)} F(r)$ (see e.g.~\cite{ChaNN18}). This could be necessary because in applications one would need to assume that interactions have finite interaction norm with respect to the \emph{weighted} $F$-function, instead of $F$ itself. Since we will consider only bounded range interactions, this is not an issue for us and we restrict to the easier case for simplicity. \begin{theorem} \label{thm:conesum} Let $\Gamma = \mathbb{Z}^2$ with the usual metric $d$ and consider the corresponding quantum spin system $\mathcal{A}_\Gamma$, where the local dimension of the spins is uniformly bounded. Let $t \mapsto \Phi(X;t)$ be a path of dynamics such that $\|\Phi(X;t)\|$ is uniformly bounded both in $X$ and $t$. Moreover assume that $\Phi$ is of bounded range, and let $F$ be an $F$-function. Then $\Phi \in {\mathcal B}_{F_r}([0,1])$, and it generates quasi-local dynamics $\tau^\Phi_{t,s}$. Assume that $\Gamma_1 \subset \Gamma_2$ is an inclusion of cones such that their borders are sufficiently far away, in the sense that the lines marking the boundaries of the cones are not parallel. Then there exist cones $\Gamma_1' \subset \Gamma_1$ and $\Gamma_2' \supset \Gamma_2$ such that the conditions of Theorem~\ref{thm:quasiauto} are satisfied.
\end{theorem} \begin{proof} \begin{figure} \includegraphics[width=\textwidth]{cones.pdf} \caption{Cones as in Theorem~\ref{thm:conesum}.} \label{fig:cones} \end{figure} Without loss of generality we may assume that the cones $\Gamma_1$ and $\Gamma_2$ have their center line in the direction of the positive $x$-axis. We write $\alpha$ for the opening angle of $\Gamma_2$ and $\beta$ for the opening angle of $\Gamma_1$ (see Figure~\ref{fig:cones}). The distance between their tips will be denoted by $d_2$. Let $0 < \epsilon < \beta$ such that $\alpha + \epsilon < \pi/2$. We can then choose cones $\Gamma_1'$ and $\Gamma_2'$ as in the figure. Later in the proof we will provide convenient values for $d_1$ and $d_2'$, but we note that with a little extra work it is possible to show that any positive value will do. We show that we can apply Theorem~\ref{thm:quasiauto}. First note that $\Gamma$ is 2-regular, since the number of points in a disk of radius $r$ scales with the area. Because the interaction range is uniformly bounded and because of 2-regularity, there are constants $C_{\#}$ and $d_\Phi$ such that $\Phi(X;t) = 0$ whenever $|X| > C_{\#}$ or $\diam(X) > d_\Phi$. It follows that $\Phi \in {\mathcal B}_{F_r}([0,1])$. With Lemma~\ref{lem:gfdecay} it is also clear that $G_{F_r}^{\alpha}$ has finite moments for $\alpha \in (0,1]$ (in the sense of equation~\eqref{as:galp}) and we can find a suitable $F$-function $\widetilde{F}$ such that equation~\eqref{as:gf} is satisfied for $F_r$. It remains to be shown that equation~\eqref{anan} is satisfied. As a first step we study the function $f(m,x,y)$ of equation~\eqref{eq:defnf}. Note that the summation in the definition is over certain sets $X$ with $x,y \in X$. Hence if $d(x,y) > d_\Phi$ we have $\Phi(X;t) = 0$ and consequently $f(m,x,y) = 0$. Similarly, the summation is only over $X$ such that $d(X, (\Gamma_2' \setminus \Gamma_1')^c) \leq m$.
Hence, it follows that $f(m, x, y) = 0$ unless $d(x, (\Gamma_2'\setminus\Gamma_1')^c) \leq m + d_\Phi$, or the same is true for $y$. Giving a rougher estimate, $f(m,x,y) = 0$ unless $d(x, (\Gamma_2'\setminus\Gamma_1')^c) \leq m + 2 d_\Phi$, regardless of $y$. Now consider the case where $d(x,y) \leq d_\Phi$ and $m$ large enough such that $d(x, (\Gamma_2' \setminus \Gamma_1')^c) \leq m + 2 d_\Phi$. In that case, we have \begin{equation} \label{eq:fmxybound} f(m, x, y) = \sum_{X \ni x, y} |X| \sup_{t} \| \Phi(X; t) \| \leq C_{\#} M 2^{|b_0(d_\Phi)|}, \end{equation} where $M := \sup_X \sup_{t \in [0,1]} \| \Phi(X;t) \|$, which is finite by assumption. We also used translation invariance of the metric (and $\Gamma$), and that by the finite range assumption any contributing subset $X$ must be contained in $b_x(d_\Phi)$. There are at most $2^{|b_0(d_\Phi)|}$ such subsets, leading to the claimed bound. Next note that Lemma~\ref{lem:gfdecay} gives us the following estimate: \begin{equation} \label{eq:gsumbd} \sum_{m = k}^\infty G_{F_r}(m) \leq C F(k-\sqrt{2}) \sum_{m = k}^\infty m e^{-m} \leq C F(0) \frac{e^{-k+1}((e-1)k+1)}{(e-1)^2} \end{equation} whenever $k \geq 2$. Note in particular the factor of $e^{-k+1}$, which will be important to guarantee convergence in our case. We now return to equation~\eqref{anan}. Note that $d(\Gamma_1, \Gamma_2^c) = d_2 \sin \alpha$. If this is greater than $d_\Phi$, by the remarks above the first summation (over $x \in \Gamma_1$ and $y \in \Gamma_2^c$) vanishes. In general, since the cone $\Gamma_2$ has a wider opening angle than $\Gamma_1$, we see that there are only finitely many pairs $x \in \Gamma_1$ and $y \in \Gamma_2^c$ with $d(x,y) \leq d_\Phi$, and hence only finitely many contributions to the summation. Together with equations~\eqref{eq:fmxybound} and~\eqref{eq:gsumbd} it can be seen that this contribution is finite.
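Both quantitative ingredients of the tail estimate, the bound of Lemma~\ref{lem:gfdecay} and the closed form of $\sum_{m \geq k} m e^{-m}$ used in equation~\eqref{eq:gsumbd}, can be checked numerically. The snippet below is purely illustrative and not part of the proof; the sample choice $F(r) = (1+r)^{-3}$ and the truncation radii are our own assumptions.

```python
import math

# Sample F-function, purely illustrative; any positive decreasing
# choice with sufficient decay would serve the same purpose.
F = lambda r: (1.0 + r) ** -3

def G(m, R=60):
    # Truncated version of G_{F_r}(m): sum over x in Z^2 with |x| >= m
    # of e^{-|x|} F(|x|); the neglected tail beyond radius R is negligible.
    total = 0.0
    for x in range(-R, R + 1):
        for y in range(-R, R + 1):
            r = math.hypot(x, y)
            if r >= m:
                total += math.exp(-r) * F(r)
    return total

# Lemma: G_{F_r}(m) <= C F(m - sqrt(2)) m e^{-m}, C = 4 pi e^{sqrt(2)}.
C = 4 * math.pi * math.exp(math.sqrt(2))
for m in (2, 4, 8, 16):
    assert G(m) <= C * F(m - math.sqrt(2)) * m * math.exp(-m)

# Tail sum: sum_{m >= k} m e^{-m} = e^{-k+1} ((e-1) k + 1) / (e-1)^2.
e = math.e
for k in range(2, 12):
    tail = sum(m * math.exp(-m) for m in range(k, 300))  # truncated
    closed = math.exp(-k + 1) * ((e - 1) * k + 1) / (e - 1) ** 2
    assert abs(tail - closed) < 1e-12
```

The tail-sum identity is exact (it is the tail of the geometric series $\sum m x^m$ at $x = e^{-1}$), so the second inequality in~\eqref{eq:gsumbd} only uses $F(k-\sqrt{2}) \leq F(0)$.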
\begin{figure} \includegraphics[width=\textwidth]{conedistances.pdf} \caption{Definition of various distances.} \label{fig:distances} \end{figure} At this point we are left with estimating the following summation: \begin{equation} \sum_{x\in \Gamma_2\setminus \Gamma_1} \left ( \sum_{y\in \Gamma_2^c} + \sum_{y \in \Gamma_1} \right ) \sum_{m=0}^\infty G_{F_r}(m)f(m,x,y), \end{equation} where we have split up the summation over $(\Gamma_2 \setminus \Gamma_1)^c$ into two parts. We consider the summation over $\Gamma_2^c$; the other one can be handled in the same manner. Note that $d(\Gamma_2, (\Gamma_2')^c) = d_2' \sin(\alpha+\epsilon)$. Similarly, $d(\Gamma_2 \cap b_0(n)^c, \Gamma_2') = d_2' \sin(\alpha+\epsilon) + n \sin(\epsilon)$. Write $d_{\Gamma_1}(n)$ for the distance between the tip of the cone $\Gamma_1$ and the circle of radius $n$ centered at the tip of $\Gamma_2$, where we set $d_{\Gamma_1}(n) = 0$ if they do not intersect (see Fig.~\ref{fig:distances} for an idea of the various distances we need to introduce). In case it is non-zero, we see that in fact \[ d_{\Gamma_1}(n) = \sqrt{n^2 - d_2^2(1-\cos^2 \beta)} - d_2 \cos \beta. \] Let $\gamma_0$ be the distance from the tip of $\Gamma_1'$ to the intersection of the line perpendicular to the boundary of $\Gamma_1'$ and the boundary of $\Gamma_1$. We write $d_{\gamma_0}$ for the distance of the tip of $\Gamma_1$ to this intersection. Then for large enough $n$ the distance between the intersection of the circle of radius $n$ with the boundary of $\Gamma_1$, and the boundary of $\Gamma_1'$, is given by \[ \gamma(n) = \gamma_0 + (d_{\Gamma_1}(n)-d_{\gamma_0}) \sin \epsilon. \] From the geometric situation we see that $d_{\Gamma_1}(n+k) - d_{\Gamma_1}(n) \geq k$, hence $\gamma(n)$ grows at least linearly in $n$. Let $n_0$ be the smallest integer such that \begin{equation} d_0 := \min \{ d_2' \sin(\alpha+\epsilon) + n_0 \sin(\epsilon), \gamma(n_0) \} > 2 d_\Phi.
\end{equation} Write $B_k := \left(b_0(d_0 + (k+1)/\sin(\epsilon)) \setminus b_0(d_0 + k/\sin(\epsilon)) \right)$. We now rewrite the summation as \[ \left( \sum_{x \in b_0(d_0) \cap (\Gamma_2 \setminus \Gamma_1)} + \sum_{k=0}^\infty \sum_{x \in B_k \cap (\Gamma_2 \setminus \Gamma_1)} \right) \sum_{y \in \Gamma_2^c} \sum_{m = 0}^\infty G_{F_r}(m) f(m,x,y). \] For the first summation over all $x \in \Gamma_2 \setminus \Gamma_1$ in the ball around the origin we note that there are only finitely many such $x$. We have already seen that for any given $x$, there are only finitely many $y$ (in fact, this number can be bounded from above independently of $x$) such that $f(m,x,y)$ is non-zero. Again by equations~\eqref{eq:fmxybound} and~\eqref{eq:gsumbd} it follows that the first summation is finite. For the second summation, note that if $x \in B_k \cap (\Gamma_2 \setminus \Gamma_1)$, then $d(x, (\Gamma_2')^c) \geq k + 2 d_\Phi$ and $d(x, \Gamma_1') \geq k + 2 d_\Phi$, and hence $d(x, (\Gamma_2' \setminus \Gamma_1')^c) \geq k + 2 d_\Phi$. By what we have seen earlier, this implies that $f(m, x, y) = 0$ if $m < k$ for such $x \in B_k \cap (\Gamma_2 \setminus \Gamma_1)$. Furthermore, because of the finite range assumption, contributing pairs $x \in B_k$ and $y \in \Gamma_2^c$ must be within a ``band'' of width $d_\Phi$ around each side of the boundary of $\Gamma_2 \setminus \Gamma_1$. It follows that we can bound the number of pairs $(x,y) \in (B_k \cap \Gamma_2 \setminus \Gamma_1) \times \Gamma_2^c$ by some constant $C_p > 0$ independent of $k$. Putting this together we can estimate the second summation as follows. \begin{equation} \begin{split} \sum_{k=0}^\infty \sum_{x \in B_k \cap (\Gamma_2 \setminus \Gamma_1)} & \sum_{y \in \Gamma_2^c} \sum_{m = 0}^\infty G_{F_r}(m) f(m,x,y) \\ &\leq \sum_{k=0}^\infty C_p C_{\#} M 2^{|b_0(d_\Phi)|} \sum_{m=k}^\infty G_{F_r}(m) \\ &\leq C' \sum_{k=0}^\infty e^{-k+1} ((e-1)k +1) < \infty \end{split} \end{equation} for some $C' > 0$.
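For completeness, the geometric claim $d_{\Gamma_1}(n+k) - d_{\Gamma_1}(n) \geq k$ used above follows from $\frac{d}{dn}\sqrt{n^2 - a^2} = n/\sqrt{n^2 - a^2} \geq 1$ with $a = d_2 \sin\beta$. A small numerical sketch, where the values of $d_2$ and $\beta$ are arbitrary illustrative choices:

```python
import math

# Check d_{Gamma_1}(n+k) - d_{Gamma_1}(n) >= k for
#   d_{Gamma_1}(n) = sqrt(n^2 - d2^2 sin^2(beta)) - d2 cos(beta),
# restricted to n > d2 sin(beta) so the square root is real.
# Sample parameters (illustrative only):
d2, beta = 5.0, 0.3
a = d2 * math.sin(beta)  # = sqrt(d2^2 (1 - cos^2 beta))

def d_gamma1(n):
    return math.sqrt(n * n - a * a) - d2 * math.cos(beta)

for n in range(2, 50):
    for k in range(1, 20):
        # derivative of sqrt(n^2 - a^2) is >= 1, so increments beat k
        assert d_gamma1(n + k) - d_gamma1(n) >= k - 1e-9
```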
Here we again used the estimates~\eqref{eq:fmxybound} and~\eqref{eq:gsumbd}. This completes the proof. \end{proof} We expect that with a more careful analysis one could allow for more general interactions, as long as they decay sufficiently fast. It does however seem necessary that $\Gamma_2'$ has a bigger opening angle than $\Gamma_2$, so that towards infinity the distance between their respective boundaries grows. This is necessary to ensure that for $x,y$ far from the origin, $f(m,x,y)$ is non-zero only for large $m$. Together with the decay properties of $G_F$ of Lemma~\ref{lem:gfdecay} this ensures that the sum converges. The following now follows immediately from the theorem, by using Proposition~\ref{prop:splitstable}. \begin{corollary} Let $\mathcal{A}_\Gamma$ and $t \mapsto \Phi(X;t)$ be as in Theorem~\ref{thm:conesum} and $\tau^\Phi_{t,s}$ the corresponding quasi-local dynamics. Assume that $\Gamma_1 \subset \Gamma_2$ is an inclusion of cones such that their borders are sufficiently far away and in the representation $\pi$ of $\mathcal{A}_\Gamma$ we have the split property with respect to these cones. Then there exist cones $\Gamma_1' \subset \Gamma_1$ and $\Gamma_2' \supset \Gamma_2$ such that $\pi \circ \tau^\Phi_{1,0}$ satisfies the split property with respect to $\Gamma_1' \subset \Gamma_2'$. \end{corollary} Finally, this allows us to construct examples of quasi-factorizable automorphisms. \begin{corollary} Let $\alpha = \tau_{0,1}^\Phi$, with $\Phi$ as in Theorem~\ref{thm:conesum}. Then, for every cone $\Lambda$, we can find cones $\Gamma_1' \subset \Lambda \subset \Gamma_2'$ such that $\alpha$ is quasi-factorizable with respect to this inclusion. \end{corollary} \begin{proof} We will apply Theorem~\ref{thm:quasiauto}; we shall see later why the conditions are satisfied. Suppose that the cone $\Lambda$ has opening angle $\theta$.
Fix some cone $\Lambda_0$ which has the same apex and central axis as $\Lambda$ but with a larger angle $\theta_0>\theta$, satisfying $\Lambda\subset \Lambda_0$. Set $\Gamma_1:=\Lambda$, $\Gamma_2:=\Lambda_0$. Then, by Theorem~\ref{thm:conesum}, there are cones $\Gamma_1' \subset \Gamma_1$ and $\Gamma_2' \supset \Gamma_2$ such that the conditions of Theorem~\ref{thm:quasiauto} are satisfied. Recall that $\alpha=\tau_{0,1}^{\Phi}$. Then $\tau_{0,1}^{\Phi^{(0)}}$, in the notation of Theorem~\ref{thm:quasiauto}, decomposes as \begin{align} \tau_{0,1}^{\Phi^{(0)}} =\alpha_{\Gamma_1}\otimes\alpha_{\Gamma_2\setminus \Gamma_1}\otimes \alpha_{\Gamma_2^c} \end{align} where $\alpha_{\Gamma_1} \in \operatorname{Aut}(\mathcal{A}_{\Gamma_1})$, and similarly for the others. Moreover, by noting that $u \in \mathcal{A}_\Gamma$ and taking inverses on both sides of equation~\eqref{eq:quasifactor}, we obtain from Theorem~\ref{thm:quasiauto} that there is $\widetilde{u} \in \mathcal{A}$ such that \begin{align*} \alpha &=\tau_{0,1}^{\Phi}=\mathop{\mathrm{Ad}}\nolimits(\tilde u)\circ \left (\widetilde \beta_{\Gamma_2'\setminus\Gamma_1'}^{-1}\circ \tau_{0,1}^{\Phi^{(0)}} \right )\\ &=\mathop{\mathrm{Ad}}\nolimits(\tilde u)\circ\left (\widetilde \beta_{\Gamma_2'\setminus\Gamma_1'}^{-1}\circ\alpha_{\Gamma_2\setminus \Gamma_1}\right )\circ \left ( \alpha_{\Gamma_1}\otimes\alpha_{\Gamma_2^c} \right ) \\ &= \mathop{\mathrm{Ad}}\nolimits(\widetilde{u}) \circ \widetilde{\Xi} \circ (\alpha_{\Lambda} \otimes \alpha_{\Lambda^c}). \end{align*} Here, $\widetilde{\Xi}:=\widetilde \beta_{\Gamma_2'\setminus\Gamma_1'}^{-1}\circ\alpha_{\Gamma_2\setminus \Gamma_1}$ is an automorphism on ${\mathcal A}_{\Gamma_2'}={\mathcal A}_{\Lambda_0}$, $\alpha_\Lambda:=\alpha_{\Gamma_1}$ is an automorphism on ${\mathcal A}_{\Lambda}={\mathcal A}_{\Gamma_1}$, and $\alpha_{\Lambda^c}:= \alpha_{\Gamma_2^c} \otimes \operatorname{id}_{\Gamma_2 \setminus \Lambda}$ is an automorphism on ${\mathcal A}_{\Lambda^c}$.
\end{proof} \bibliographystyle{alpha}
\section{Introduction}\label{introduction} Configuration \cite{felfernighotzbagleytiihonen2014,Stumptner1997} is one of the most successful applications of Artificial Intelligence technologies, applied in domains such as telecommunication switches, financial services, furniture, and software components. In many cases, configuration knowledge bases are represented in terms of variability models such as feature models that provide an intuitive way of representing variability properties of complex systems \cite{Kang1990,Czarnecki2005}. Starting with rule-based approaches, formalizations of variability models have been transformed into model-based knowledge representations which are better suited to handling large and complex knowledge bases, for example, in terms of knowledge base maintainability and expressivity of complex constraints \cite{Benavides2010,felfernighotzbagleytiihonen2014}. Examples of model-based knowledge representations are constraint-based representations \cite{Tsang1993}, description logic, and answer set programming (ASP) \cite{felfernighotzbagleytiihonen2014}. Besides variability reasoning for single users, recent research also shows how to deal with scenarios where groups of users are completing a configuration task \cite{Felfernig2018}. In this paper, we focus on single-user scenarios where variability models are represented as a constraint satisfaction problem (CSP) \cite{Benavides2005,Tsang1993}. Several approaches deal with the issue of integrating knowledge bases. First, \emph{knowledge base alignment} is the process of identifying relationships between concepts in different knowledge bases, for example, when classes describe the same concept but have different class names (and/or attribute names). Approaches supporting the alignment of knowledge bases are relevant in scenarios where numerous and large knowledge bases have to be integrated (see, for example, \cite{Galarraga2013}). Ardissono et al.
\cite{Ardissono2003} introduce an approach to \emph{distributed configuration} where individual knowledge bases are integrated into a distributed configuration process in which individual configurators are responsible for configuring individual parts of a complex product or service. The underlying assumption is that individual knowledge bases are consistent and that there are no (or only a small number of) dependencies between the given knowledge bases. The \emph{merging of knowledge bases} is related to the task of applying various merging operators to different belief sets \cite{Delgrande2007,Liberatore1998}. For example, Delgrande and Schaub \cite{Delgrande2007} introduce a consistency-based merging approach where the result of a merging process is a maximal consistent set of logical formulas representing the union of the individual knowledge bases. In line with existing consistency-based analysis approaches, the resulting knowledge bases represent a logical union of the original knowledge bases that omits minimal sets of logical sentences inducing an inconsistency \cite{84_Reiter1987}. \emph{Contextual modeling} \cite{felfernig2000} is related to the task of decentralizing variability-knowledge-related development and maintenance tasks. Approaches to merging \emph{feature models} represented on a graphical level on the basis of merging rules have been introduced, for example, in \cite{Broek2010,Segura2007}. In this context, feature models, including specific constraint types such as \emph{requires} and \emph{excludes}, are merged in a semantics-preserving fashion. Compared to our approach, the merging of variability models introduced in \cite{Broek2010,Segura2007} is restricted to specific constraint types and does not take into account redundancy. Our approach generalizes existing work by supporting arbitrary constraint types and by producing redundancy-free knowledge bases as a result of the merge operation.
We propose an approach to the \emph{merging of variability models} (represented as constraint satisfaction problems) which guarantees semantics preservation, i.e., the union of the solutions determined by individual constraint solvers (configurators) is equivalent to the solution space of the integrated variability model (knowledge base). In this context, we assume that the knowledge bases to be integrated (1) are consistent and (2) use the same variable names for representing individual item properties (knowledge base alignment issues are beyond the scope of this paper). The contributions of this paper are the following. (1) We provide a short analysis of existing approaches to knowledge base integration and point out specific properties of variability model integration scenarios that require alternative approaches. (2) We introduce a new approach to variability knowledge integration which is based on the concepts of contextualization and conflict detection. (3) We show the applicability of our approach on the basis of a performance analysis. The remainder of this paper is organized as follows. First, we introduce a working example from the automotive domain (see Section \ref{workingexample}). On the basis of this example, we introduce our approach to variability model integration (merging) in Section \ref{integratingconfigurationknowledgebases}. In Section \ref{performanceevaluation}, we present a performance evaluation. Section \ref{threatstovalidity} includes a discussion of threats to validity of the presented merging approach. The paper is concluded in Section \ref{conclusionsandfuturework} with a discussion of issues for future work. \section{Example Variability Models}\label{workingexample} In the following, we introduce a working example which will serve as a basis for the discussion of our approach to knowledge integration (Section \ref{integratingconfigurationknowledgebases}). Let us assume the existence of two different variability models.
For the purpose of our example, we introduce two car configuration knowledge bases represented as a constraint satisfaction problem. One car configuration knowledge base is assumed to be defined for the U.S. market and one for the German market. For simplicity, we assume that (1) both knowledge bases are represented as a constraint satisfaction problem (CSP) \cite{Tsang1993} and (2) that both knowledge bases operate on the same set of variables and corresponding domain definitions.\footnote{We are aware of the fact that this assumption does not hold for real-world scenarios in general. However, we consider tasks of concept matching as an upstream task we do not take into account when integrating knowledge bases on a formal level.} Our two knowledge bases \{$CKB_{us}$, $CKB_{ger}$\}, consisting of variable definitions and corresponding constraints, are the following. \begin{itemize} \item{$CKB_{us}$: \{country(US), type(combi, limo, city, suv), color(white, black), engine(1l, 1.5l, 2l), couplingdev(yes,no), fuel(electro, diesel, gas, hybrid), service(15k, 20k, 25k), $c_{1us}:fuel \neq hybrid$, $c_{2us}:fuel = electro \rightarrow couplingdev = no$, $c_{3us}:fuel = diesel \rightarrow color = black$\}} \item{$CKB_{ger}$: \{country(GER), type(combi, limo, city, suv), color(white, black), engine(1l, 1.5l, 2l), couplingdev(yes,no), fuel(electro, diesel, gas, hybrid), service(15k, 20k, 25k), $c_{1ger}:fuel \neq gas$, $c_{2ger}:fuel = electro \rightarrow couplingdev = no$, $c_{3ger}:fuel = diesel \rightarrow type \neq city$\}} \end{itemize} In these knowledge bases, we denote the variable \emph{country} as a contextual variable since it is used to specify the country a configuration belongs to but is not directly associated with a specific component of the car. Table \ref{solutionspacesindividual} shows a summary of the solution spaces (in terms of the number of potential solutions) that are associated with the country-specific knowledge bases $CKB_{us}$ and $CKB_{ger}$.
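The solution counts reported in Table \ref{solutionspacesindividual} can be verified by brute-force enumeration over the (small) variable domains. The following Python sketch (our own encoding of the constraints as predicates, not part of the original toolchain) reproduces them:

```python
from itertools import product

# Domains of the working example (country is handled separately per knowledge base)
domains = {
    "type":        ["combi", "limo", "city", "suv"],
    "color":       ["white", "black"],
    "engine":      ["1l", "1.5l", "2l"],
    "couplingdev": ["yes", "no"],
    "fuel":        ["electro", "diesel", "gas", "hybrid"],
    "service":     ["15k", "20k", "25k"],
}
names = list(domains)

# Constraints as predicates over an assignment dict; p -> q is encoded as (not p) or q
ckb_us = [
    lambda a: a["fuel"] != "hybrid",                               # c1us
    lambda a: a["fuel"] != "electro" or a["couplingdev"] == "no",  # c2us
    lambda a: a["fuel"] != "diesel" or a["color"] == "black",      # c3us
]
ckb_ger = [
    lambda a: a["fuel"] != "gas",                                  # c1ger
    lambda a: a["fuel"] != "electro" or a["couplingdev"] == "no",  # c2ger
    lambda a: a["fuel"] != "diesel" or a["type"] != "city",        # c3ger
]

def solutions(constraints):
    # Enumerate all assignments and keep those satisfying every constraint
    return [dict(zip(names, vals)) for vals in product(*domains.values())
            if all(c(dict(zip(names, vals))) for c in constraints)]

assert len(solutions(ckb_us)) == 288
assert len(solutions(ckb_ger)) == 324
assert len(solutions(ckb_us + ckb_ger)) == 126  # assignments satisfying both sets
```

The last assertion counts assignments satisfying both constraint sets simultaneously, which corresponds to the intersection row of Table \ref{solutionspacescontextualized}.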
For simplicity, we kept the number of constraints the same in both knowledge bases; however, the integration concepts introduced in Section \ref{integratingconfigurationknowledgebases} are also applicable to knowledge bases with differing numbers of constraints. \vspace{-0.15cm} \begin{table}[!ht] \caption{Solution spaces of individual knowledge bases.} \label{solutionspacesindividual} \centering{}\begin{tabular}{|c|c|c|} \hline Knowledge base & \#constraints & \#solutions \tabularnewline \hline \hline $CKB_{us}$ & 3 & 288 \tabularnewline \hline $CKB_{ger}$ & 3 & 324 \tabularnewline \hline \end{tabular} \end{table} \section{Merging Variability Models}\label{integratingconfigurationknowledgebases} In this section, we introduce our approach to merge variability models represented as constraint satisfaction problems (CSPs) \cite{Tsang1993}. Our approach is based on the assumption that the constraints of the two original knowledge bases $CKB_1$ and $CKB_2$ are contextualized, i.e., each constraint of knowledge base $CKB_1$ gets contextualized on the basis of predefined contextualization variables. For example, assuming a context variable \emph{country}(US,GER), each constraint $c_{[i]us}$ of the US knowledge base is contextualized with (transformed into) $country = US \rightarrow (c_{[i]us})$. Constraint $c_{1us}: fuel \neq hybrid$ would be translated into $c_{1us'}: country = US \rightarrow (fuel \neq hybrid)$. $CKB_{us}$ and $CKB_{ger}$ have been transformed into their contextualized variants $CKB_{us}'$ and $CKB_{ger}'$ where $CKB_{us}' \cup CKB_{ger}' = CKB'$.
\begin{itemize} \item{$CKB_{us}'$: \{country(US), type(combi, limo, city, suv), color(white, black), engine(1l, 1.5l, 2l), couplingdev(yes,no), fuel(electro, diesel, gas, hybrid), service(15k, 20k, 25k), $c_{1us}': country = US \rightarrow (fuel \neq hybrid$), $c_{2us}':country = US \rightarrow (fuel = electro \rightarrow couplingdev = no$), $c_{3us}':country = US \rightarrow (fuel = diesel \rightarrow color = black$)\}} \item{$CKB_{ger}'$: \{country(GER), type(combi, limo, city, suv), color(white, black), engine(1l, 1.5l, 2l), couplingdev(yes,no), fuel(electro, diesel, gas, hybrid), service(15k, 20k, 25k), $c_{1ger}': country = GER \rightarrow (fuel \neq gas$), $c_{2ger}': country = GER \rightarrow (fuel = electro \rightarrow couplingdev = no$), $c_{3ger}': country = GER \rightarrow (fuel = diesel \rightarrow type \neq city$)\}} \end{itemize} The solution spaces of the contextualized knowledge bases $CKB_{us}'$ and $CKB_{ger}'$ are shown in Table \ref{solutionspacescontextualized}. They have the same solution spaces as $CKB_{us}$ and $CKB_{ger}$. \begin{table}[!ht] \caption{Solution spaces when merging knowledge bases.} \label{solutionspacescontextualized} \centering{}\begin{tabular}{|c|c|} \hline Knowledge base & \#solutions \tabularnewline \hline \hline $CKB_{us}'$ & 288 \tabularnewline \hline $CKB_{ger}'$ & 324 \tabularnewline \hline $CKB' = CKB_{us}' \cup CKB_{ger}'$ & 612 \tabularnewline \hline $CKB_{us}'\cap CKB_{ger}'$ & 126 \tabularnewline \hline \end{tabular} \end{table} \vspace{0.25cm} On the basis of such a contextualization, we are able to preserve the consistency and semantics of the two original knowledge bases in the sense that (1) the solution space ($CKB_1$) is equivalent to the solution space ($CKB_1'$), (2) the solution space ($CKB_2$) is equivalent to the solution space ($CKB_2'$), and (3) the solution space ($CKB_1$) $\cup$ solution space ($CKB_2$) is equivalent to the solution space ($CKB_1' \cup CKB_2' = CKB'$).
Based on this representation, we are able to (1) get rid of contextualizations (see Line $7$ of Algorithm 1) that are not needed in the integrated version of the two original configuration knowledge bases and (2) delete redundant constraints (see Line $15$ of Algorithm 1). In Line $7$ it is checked whether a contextualization is needed for the constraint $c$ ($c$ is the decontextualized version of $c'$). If the negation of $c$ is consistent with the union of the contextualized knowledge bases, solutions exist that support $\neg c$. Consequently, $c$ must remain contextualized. Otherwise, the contextualization is not needed and $c$ is added to the resulting knowledge base -- with this, it replaces $c'$, i.e., the corresponding contextualized constraint. Each constraint in the resulting knowledge base $CKB$ (the decontextualized knowledge base) is thereafter checked with regard to redundancy (see Line $15$). A constraint $c$ is regarded as redundant if $CKB - \{c\}$ is inconsistent with $\neg c$. In this case, $c$ does not reduce the search space and thus can be deleted from $CKB$ -- it is redundant with regard to $CKB$.
\vspace{0.25cm} \algsetup{ linenosize={\small } } \begin{algorithm}[ht] \caption{\textsc{CKB-Merge}($CKB_1', CKB_2'$)$:CKB$} \label{alg:Sequential} \begin{algorithmic}[1] \STATE \COMMENT{$CKB_{1',2'}$: two contextualized and consistent configuration knowledge bases} \STATE \COMMENT{$c'$: a contextualized version of constraint $c$} \STATE \COMMENT{$CKB$: knowledge base resulting from merge operation} \STATE $CKB ~ \gets$ $\emptyset$; \STATE $CKB' ~ \gets$ $CKB_{1'} \cup CKB_{2'}$; \FORALL {$c'$ $\in$ $CKB'$} \IF {$inconsistent(\{\neg c\} \cup CKB' \cup CKB)$} \STATE $CKB ~ \gets$ $CKB ~ \cup ~ \{c\};$ \ELSE \STATE $CKB ~ \gets$ $CKB ~ \cup ~ \{c'\};$ \ENDIF \STATE $CKB' ~ \gets$ $CKB' ~ - ~ \{c'\};$ \ENDFOR \FORALL {$c$ $\in$ $CKB$} \IF {$inconsistent((CKB - \{c\}) \cup \{\neg c\})$} \STATE $CKB ~ \gets CKB ~ - ~ \{c\};$ \ENDIF \ENDFOR \STATE $return ~ CKB;$ \end{algorithmic} \end{algorithm} \vspace{0.25cm} The knowledge base $CKB$ resulting from applying Algorithm 1 to the individual knowledge bases $CKB_{us}'$ and $CKB_{ger}'$ looks as follows. In $CKB$, constraint $c_{2us}'$ is represented in a decontextualized fashion since the context information is not needed. Furthermore, constraint $c_{2ger}'$ has been deleted since it is redundant.
\begin{itemize} \item{$CKB$: \{country(US, GER), type(combi, limo, city, suv), color(white, black), engine(1l, 1.5l, 2l), couplingdev(yes,no), fuel(electro, diesel, gas, hybrid), service(15k, 20k, 25k), $c_{1us}': country = US \rightarrow (fuel \neq hybrid$), $c_{2us}: fuel = electro \rightarrow couplingdev = no$, $c_{3us}':country = US \rightarrow (fuel = diesel \rightarrow color = black$), $c_{1ger}': country = GER \rightarrow (fuel \neq gas$), $c_{3ger}': country = GER \rightarrow (fuel = diesel \rightarrow type \neq city$)\}} \end{itemize} \section{Performance Evaluation}\label{performanceevaluation} In this section, we discuss the results of an initial analysis we have conducted to evaluate \textsc{CKB-Merge} (Algorithm 1). For this analysis, we applied 10 different synthesized variability models $CKB'$ ($CKB' = CKB_1' \cup CKB_2'$) represented as constraint satisfaction problems \cite{Tsang1993} that differ individually in terms of the number of constraints (\#constraints) and the degree of contextualization (expressed as percentages in Tables \ref{runtimeckbmerge} and \ref{runtimeckbs}). In order to take into account deviations in time measurements, we repeated each experimental setting 10 times where in each repetition cycle the constraints in the individual (contextualized) knowledge bases $CKB'$ were ordered randomly. The number of consistency checks needed for decontextualization is linear in terms of the number of constraints in $CKB'$. A performance evaluation of \textsc{CKB-Merge} with different knowledge base sizes and degrees of contextualized constraints in $CKB$ is depicted in Table \ref{runtimeckbmerge}.
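Algorithm 1 can also be prototyped directly on the working example with a brute-force consistency check (the sketch below is ours and purely illustrative; the evaluation itself used \textsc{Choco}-based models). Which of the two duplicate decontextualized constraints survives the redundancy pass depends on the iteration order, but the merged knowledge base always contains five constraints and preserves the 612 solutions:

```python
from itertools import product

domains = {
    "country":     ["US", "GER"],
    "type":        ["combi", "limo", "city", "suv"],
    "color":       ["white", "black"],
    "engine":      ["1l", "1.5l", "2l"],
    "couplingdev": ["yes", "no"],
    "fuel":        ["electro", "diesel", "gas", "hybrid"],
    "service":     ["15k", "20k", "25k"],
}
names = list(domains)

def holds(a, con):
    # A contextualized constraint (name, ctx, pred) means: country = ctx -> pred
    _, ctx, pred = con
    return pred(a) if ctx is None else (a["country"] != ctx or pred(a))

def consistent(cons):
    # Brute force: is there any assignment satisfying all constraints?
    return any(all(holds(dict(zip(names, v)), c) for c in cons)
               for v in product(*domains.values()))

# Contextualized input CKB' = CKB_us' U CKB_ger'
ckb_prime = [
    ("c1us",  "US",  lambda a: a["fuel"] != "hybrid"),
    ("c2us",  "US",  lambda a: a["fuel"] != "electro" or a["couplingdev"] == "no"),
    ("c3us",  "US",  lambda a: a["fuel"] != "diesel" or a["color"] == "black"),
    ("c1ger", "GER", lambda a: a["fuel"] != "gas"),
    ("c2ger", "GER", lambda a: a["fuel"] != "electro" or a["couplingdev"] == "no"),
    ("c3ger", "GER", lambda a: a["fuel"] != "diesel" or a["type"] != "city"),
]

def ckb_merge(cons):
    pool, ckb = list(cons), []
    while pool:
        name, ctx, pred = con = pool[0]
        neg = ("neg", None, lambda a, p=pred: not p(a))
        # Line 7: if the negated plain constraint contradicts CKB' U CKB,
        # the contextualization is not needed
        if not consistent([neg] + pool + ckb):
            ckb.append((name, None, pred))
        else:
            ckb.append(con)
        pool = pool[1:]                       # Line 12
    for con in list(ckb):                     # redundancy pass (Line 15)
        rest = [d for d in ckb if d is not con]
        neg = ("neg", None, lambda a, c=con: not holds(a, c))
        if not consistent(rest + [neg]):
            ckb.remove(con)
    return ckb

merged = ckb_merge(ckb_prime)
assert len(merged) == 5  # one duplicate decontextualized constraint was removed
assert sum(all(holds(dict(zip(names, v)), c) for c in merged)
           for v in product(*domains.values())) == 612
```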
In \textsc{CKB-Merge}, the runtime (measured in terms of milliseconds needed by the constraint solver\footnote{For the purposes of our evaluation we generated variability models represented as constraint satisfaction problems formulated using the \textsc{Choco} constraint solver -- www.choco-solver.org.} to find a solution) increases with the number of constraints in $CKB'$ and decreases with the number of contextualized constraints in $CKB$. The increase in efficiency can be explained by the fact that a higher degree of contextualization includes more situations where the inconsistency check in Line 7 (Algorithm 1) terminates earlier (a solution has been found) compared to situations where no solution could be found. In addition, Table \ref{runtimeckbs} indicates that the performance of solution search does not differ depending on the degree of contextualization in the resulting knowledge base $CKB$. Consequently, integrating individual variability models can trigger the following improvements. (1) De-contextualization in $CKB$ can lead to lower cognitive effort when adapting / extending knowledge bases (due to a potentially lower number of constraints and a lower degree of contextualization). (2) Reducing the overall number of constraints in $CKB$ can also improve runtime performance of the resulting integrated knowledge base. \begin{table} \caption{Avg.
runtime (\emph{msec}) of \textsc{CKB-Merge} measured with different \emph{knowledge base sizes} ($CKB'$) and shares of contextualized constraints in $CKB$ (10-50\% contextualization).} \label{runtimeckbmerge} \centering{}\begin{tabular}{|c|c|c|c|c|c|c|} \hline $CKB'$ & \#constraints & 10\% & 20\% & 30\% & 40\% & 50\% \tabularnewline \hline \hline 1 & 10 & 749 & 219 & 195 & 118 & 97 \tabularnewline \hline 2 & 20 & 559 & 653 & 666 & 679 & 487 \tabularnewline \hline 3 & 30 & 1541 & 813 & 644 & 588 & 664 \tabularnewline \hline 4 & 40 & 1888 & 1541 & 1345 & 1177 & 1182 \tabularnewline \hline 5 & 50 & 3773 & 3324 & 3027 & 3171 & 2643 \tabularnewline \hline 6 & 60 & 5376 & 4458 & 4425 & 3304 & 3056 \tabularnewline \hline 7 & 70 & 7300 & 6912 & 7362 & 5619 & 4896 \tabularnewline \hline 8 & 80 & 10795 & 8793 & 7580 & 6821 & 5909 \tabularnewline \hline 9 & 90 & 13365 & 11770 & 10103 & 8916 & 7831 \tabularnewline \hline 10 & 100 & 15992 & 14443 & 14679 & 12417 & 11066 \tabularnewline \hline \end{tabular} \end{table} \begin{table} \caption{Avg.
runtime (\emph{msec}) of the merged configuration knowledge bases (CKB) measured with different \emph{knowledge base sizes} ($CKB'$) and shares of contextualized constraints in $CKB$ (10-50\% contextualization).} \label{runtimeckbs} \centering{}\begin{tabular}{|c|c|c|c|c|c|c|} \hline $CKB'$ & \#constraints & 10\% & 20\% & 30\% & 40\% & 50\% \tabularnewline \hline \hline 1 & 10 & 244 & 159 & 203 & 167 & 274 \tabularnewline \hline 2 & 20 & 305 & 230 & 250 & 362 & 271 \tabularnewline \hline 3 & 30 & 310 & 378 & 251 & 426 & 243 \tabularnewline \hline 4 & 40 & 425 & 453 & 522 & 502 & 563 \tabularnewline \hline 5 & 50 & 500 & 640 & 603 & 637 & 657 \tabularnewline \hline 6 & 60 & 881 & 728 & 899 & 801 & 698 \tabularnewline \hline 7 & 70 & 830 & 778 & 802 & 888 & 876 \tabularnewline \hline 8 & 80 & 917 & 1054 & 1011 & 848 & 1030 \tabularnewline \hline 9 & 90 & 1017 & 1117 & 1042 & 960 & 667 \tabularnewline \hline 10 & 100 & 1387 & 1363 & 1297 & 1297 & 1308 \tabularnewline \hline \end{tabular} \end{table} \section{Threats to Validity}\label{threatstovalidity} The main threat to (external) validity is the overall representativeness of the knowledge bases used for evaluating the performance of \textsc{CKB-Merge}. The current evaluation is based on a set of synthesized knowledge bases which do not directly reflect real-world variability models. We want to point out that the major focus of our work is to provide an algorithmic solution that allows semantics-preserving knowledge integration, which we regard as the major contribution of our work. The application of \textsc{CKB-Merge} to real-world variability models, i.e., not synthesized ones, is the focus of our future work. \section{Conclusions and Future Work}\label{conclusionsandfuturework} In this paper, we have introduced an approach to the consistency-based merging of variability models represented as constraint satisfaction problems.
The approach helps to build semantics-preserving knowledge bases in the sense that the solution space of the resulting knowledge base (result of the merging process) corresponds to the union of the solution spaces of the original knowledge bases. Besides the preservation of the original semantics, our approach also helps to make the resulting knowledge base compact in the sense of deleting redundant constraints and unneeded contextual information. The performance of our approach is demonstrated on the basis of an initial performance analysis with synthesized configuration knowledge bases. Future work will include the evaluation of our concepts with more complex knowledge bases and the development of alternative merge algorithms with the goal of further improving runtime performance. \bibliographystyle{ecai2014}
\section{Introduction} Exciting materials by light is one of the most fundamental ways to study their physical properties. With light we can prepare distinct excited electronic states and follow their evolution over time. An electromagnetic transition occurs if the superposition between the charge distribution of the initial and final state matches the structure of the exciting field.\cite{Schmiegelow2016,Rochester2001} The transitions are characterized by the multipole structure of the electromagnetic field: Dipole transitions are induced by the oscillating field, quadrupole transitions by the oscillating field gradients, and so on.\cite{Rochester2001} The rates of dipole transitions are orders of magnitude larger than those of the higher-order multipoles and generally dominate the response of materials.\cite{Rivera2016} The scope of our work is, therefore, to examine transitions that are induced by the field amplitude. The states that are accessible to dipole excitations are restricted to a set of ``optically active'', ``dipole-allowed'', or ``bright'' transitions that readily interact with unpolarized radiation of moderate intensity.\cite{InuiBook,CardonaBuch,NovotnyBook2012} The subset of dipole-allowed excitations is identified by optical selection rules. They are derived from the symmetry of the material and the dipole moment of the photon as its external perturbation.\cite{InuiBook,CardonaBuch,ReichBuch} The standard optical selection rules are based on the fundamental assumption that the electromagnetic field is constant over the characteristic length scale of the material. Since the size of molecules and crystal unit cells is $\lesssim 1\,$nm this is an excellent assumption for visible and infrared photons with a vacuum wavelength $\lambda>400\,$nm. To increase the number and the type of available excitations, we need to construct situations where the electric field amplitude varies over the characteristic length scale of the material.
One recent proposal was to shrink the wavelength of light so that the field varies more rapidly in space along its propagation direction.\cite{Rivera2016} We will explore another possibility and study selection rules when changing the in-plane spatial distribution of the electric field. Structured light describes light beams where the phase and polarization profile vary across the beam profile.\cite{Zhan2009,Yao2011} Cylindrical vector beams, for example, are laser beams where the polarization has cylindrical symmetry.\cite{Zhan2009} In radial polarization the electric field points towards the beam center; in azimuthal polarization the electric field is oriented tangentially to the beam.\cite{Zhan2009,NovotnyBook2012} Another form of structured light is beams with a helical phase structure, which means that the beams carry orbital angular momentum.\cite{Yao2011,Allen1992,OAMCollection2017} Despite its varying polarization and phase, structured light excites the same dipole transitions in traditional materials as linearly polarized light. Because the photon field is huge compared to the material system, it only samples the local linear polarization and not the entire polarization profile. Interestingly, this is different for quadrupole transitions that are induced by the more rapidly varying field gradients. Ionic quadrupole transitions were experimentally shown to depend strongly on the helical phase structure of the exciting beam.\cite{Schmiegelow2016,Afanasev2018} Nanotechnology introduced artificial systems with dimensions $1-100\,$nm into physics, materials science, and many other fields. The optical excitations of such nanoscale structures are of particular interest due to their well-defined mode character and the confinement of the electromagnetic field.\cite{NovotnyBook2012,MaierBuch} At sizes of hundreds of nanometers, the structures become comparable to the photon wavelength, and the quasi-static approximation of a constant field no longer applies.
Structured light indeed excites optically forbidden or dark modes of nanoscale systems that are inaccessible to unpolarized and linearly polarized light. \cite{Volpe2009,Parramon2012,Hentschel2013,Yanai2014,Gomez2013,Deng2018,Kerber2017,Kerber2018,Machado2018} So far, these excitations have been studied in a case-by-case manner using numerical simulations and experiments. Universal, symmetry-derived selection rules beyond the quasi-static dipole approximation remain missing. In this paper, we present the symmetry-imposed selection rules for optical absorption including retardation and spatial variation in the field. We study nanostructures that get excited by cylindrical vector beams, light with orbital angular momentum, and field retardation. To do so, we first construct the symmetry-derived eigenmodes of nanoscale oligomers. We consider modes that are induced by the dipole and the quadrupole of the monomer and discuss the general extension to higher-order electric multipoles. We then derive the selection rules for dipole-induced absorption and scattering by structured light. We calculate exemplary excitation spectra in nanoplasmonic systems using finite-difference time-domain (FDTD) techniques and discuss the properties of nominally bright and dark modes in the spectra. In addition to linear optics we present the selection rules for non-linear multi-photon processes. We predict second-harmonic generation in centrosymmetric structures when nanooligomers are excited by two photons of $\pm1$ difference in total angular momentum. Our findings apply to any system as long as the spatial extension of the excited state is a considerable fraction of the photon wavelength and beam focus. To make the paper more accessible, we focus on plasmonic excitations in nanoscale metallic oligomers. Our formalism may be extended to other excitations of interest like plasmon-enhanced optical processes and dielectric nanophotonics. 
Metal nanostructures have been studied for their intriguing optical properties as much as their potential photonic application in fields ranging from analytic chemistry and sensing to quantum information technology.\cite{MaierBuch,Halas2011,Tame2013} Light excites localized surface plasmon resonances in metal nanostructures, which are collective oscillations of the metal free electrons.\cite{MaierBuch,NovotnyBook2012} These excitations strongly absorb and scatter photons. They also induce electromagnetic near fields in close vicinity to the metal surface ($<50$\,nm for visible light). Many applications of plasmonics implicitly or explicitly exploit the near-field excitation. Among the most prominent examples is surface- (or plasmon-) enhanced Raman scattering (SERS), where the plasmonic near field enhances the Raman process by up to ten orders of magnitude.\cite{EtchegoinBuch,Langer2019,LeRu2006,MuellerFaraday2018,Zhu2014} Plasmonic oligomers are regular arrangements of plasmonic building blocks like particles, triangles, and discs.\cite{Prodan2003,MaierBuch} They are extremely helpful for understanding light-matter interaction in nanosystems, because they allow one to construct plasmon eigenstates in a rational way and are straightforward to fabricate.\cite{Prodan2003,Guerrero2012,Zohar2014} In an oligomer the electromagnetic near fields of close-by monomers interact and collective electromagnetic states emerge.\cite{Prodan2003,Hentschel2011,Guerrero2012,Forestiere2013,Zohar2014,Lamowski2018,Pascale2019} The formation of these states resembles the construction of molecular electronic orbitals from the valence wave functions of the atoms: The oligomer eigenmodes are symmetric and antisymmetric combinations of the optical excitation in the monomers.\cite{Brandl2006,Gomez2010} The bonding configurations have eigenenergies below the energy of the monomer excitation; the antibonding configurations are higher in energy.
Oligomers are fabricated through the assembly of solution-processed nanoparticles (spheres, cubes, rods, stars, etc.) or through the nanofabrication of assemblies of discs, squares, and bars using electron-beam lithography.\cite{Hentschel2011,Hentschel2013,Shafiei2013,Schietinger2009} They typically extend over several hundred nanometers and sample the distribution of phase and polarization for visible light.\cite{MuellerFaradayDiscussions2019,Yanai2014,Parramon2012,Kerber2017} Structured light excites dipole-forbidden plasmons as shown for cylindrical vector beams\cite{Gomez2013,Hentschel2013} and light with orbital angular momentum.\cite{Kerber2017,Kerber2018} Recent work on the absorption of light by self-organized nanoparticle layers considered retardation effects and the change of optical selection rules due to the finite wavelength of light.\cite{MuellerACSPhotonics2018,MuellerFaradayDiscussions2019} \section{Methods}\label{SEC:methods} We combine the symmetry analysis of plasmonic oligomers, structured beam profiles, field retardation, and multi-photon processes with simulations of plasmon eigenmodes, optical absorption, and light scattering. Our symmetry analysis requires straightforward manipulations of group theory: reducing representations, finding the representations of higher-order multipoles, finding induced representations for a symmetric arrangement of building blocks, and projecting eigenstates. These tools are described in many textbooks on group theory. We recommend Refs.~\citenum{InuiBook, WilsonBook}. For projecting symmetry-adapted eigenstates, we use graphical projection operators, as explained by Reich~\textit{et al.}\cite{ReichBuch}. 
Two online resources facilitate group theory manipulations like reducing representations, obtaining higher-order moments, and so forth: the Bilbao Crystallographic Server\cite{BilbaoCrystI, BilbaoCrystII} and the tables for point groups compiled by Gernot Katzer.\cite{KatzerOnline} For the $D_{2h}$ point group we use $z$ as the basis function for $B_{1u}$, $y$ for $B_{2u}$, and $x$ for $B_{3u}$, which is the convention most commonly found in the group-theory literature. \begin{figure} \includegraphics[width=7.5cm]{./oligomers.pdf} \caption{Plasmonic oligomers constructed from nanodiscs. (a) Dimer belonging to the $D_{2h}$ point group, (b) trimer ($D_{3h}$), (c) tetramer ($D_{4h}$), (d) pentamer ($D_{5h}$), and (e) hexamer ($D_{6h}$). The geometry of the discs ($d=100\,$nm, $h=40\,$nm) and their separation ($g=20\,$nm) is identical for all oligomers. The arrows indicate the $(x,y)$ coordinate system used throughout the paper except in the section on field retardation.} \label{FIG:oligomers} \end{figure} In addition to the molecular Sch\"onflies notation for point groups of finite systems, we use the formalism that has been developed in connection with line groups of one-dimensional systems.\cite{DamnjanovicBuch,ReichBuch,Damnjanovic1999,Bozovic1985} We briefly introduce the notation for the $D_{nh}$ point groups that are the focus of our work; see Ref.~\citenum{DamnjanovicBuch} for an extended introduction. The irreducible representations of $D_{nh}$ may be specified by combining the quantum number $m$ and the parities under the horizontal $\sigma_h$ and vertical $\sigma_v$ mirror operations. $m$ can be identified with the $z$ component of the angular momentum along the principal axis of rotation.\cite{DamnjanovicBuch,Damnjanovic1999,ReichBuch} Irreducible representations that are denoted by $A$ in the Sch\"onflies notation have $m=0$, and those denoted by $B$ have $m=n/2$; only $D_{nh}$ groups with even $n$ have $B$ representations. 
Representations that are denoted by $E_i$ have $m=i$; the subscript gets dropped for $D_{3h}$ ($D_{4h}$), where $E'$ ($E$) has $m=1$. The parities under the mirror operations are either $+1$ or $-1$ for the non-degenerate $A$ and $B$ representations. For the $E$ representations the parity may also be undefined, with a character of zero. There is a one-to-one correspondence between the set of $m$ and the two parities and the Sch\"onflies notation, which we present in Suppl. Table S1 for the relevant point groups. We give all selection rules and final results in the paper in the Sch\"onflies notation. The line group notation is particularly powerful for analysing optical excitations by beams with orbital angular momentum (OAM). The $m$ quantum number for such a beam corresponds to the combined angular momentum of orbit (i.e., OAM) and spin (polarization). The line group formalism, therefore, greatly facilitates finding the selection rules for OAM beams compared to a direct evaluation of the polarization patterns. We simulated the optical properties for a set of plasmonic oligomers that were constructed from gold nanodiscs, see Fig.~\ref{FIG:oligomers}. The gold discs had a diameter $d=100\,$nm and height $h=40\,$nm. We arranged them in highly symmetric oligomers as shown in Figs.~\ref{FIG:oligomers}(a)-(e) using $g=20\,$nm gaps between adjacent discs. The background dielectric constant $\varepsilon=1.65$ mimics a dielectric like SiO$_2$ as the oligomer substrate. The simulations use the dielectric function of gold measured by Johnson and Christy.\cite{JohnsonChristy1972} We numerically calculated absorption and light scattering by plasmonic oligomers using the finite-difference time-domain (FDTD) method as implemented in Lumerical. We used a mesh-override region with 2\,nm cells to discretize the space around the plasmonic oligomers. For excitation with linearly polarized light, we used a total-field scattered-field plane wave source. For excitation with structured light, i.e. 
cylindrical vector beams with radial and azimuthal polarization, we used a customized total-field scattered-field source based on a k-space method.\cite{Mansuripur1986} The method is suited to calculate the field distribution near the focus of high numerical-aperture objectives. We implemented cylindrical vector beams with a doughnut radius of $\sim$ 700\,nm at the position of the oligomer. The optical cross sections were recorded with power monitors. The scattering and absorption coefficients were calculated by dividing the cross sections by the area of the oligomer discs $A = n\pi d^2/4$, where $n$ is the number of discs and $d$ their diameter. Plasmon eigenmodes, including their surface charge-density distribution, were obtained with the boundary-elements method, using the eigenmode solver of the MNPBEM Matlab package. \cite{MNPBEM2012} To fit plasmon eigenenergies in the absorption spectra we subtracted a background due to the interband transitions of gold. We obtained the background functional form by calculating a slab of gold using identical parameters as for the oligomer simulation.\cite{Bruno2019} The absorption spectra were fit by one (azimuthal, radial polarization) and three (linear) Lorentzian peaks. The scattering spectra were fit with the analytic model by Pinchuk~\textit{et al.}\cite{Pinchuk2004}, which we extended to the case of several plasmon resonances, with amplitudes and frequencies as fitting parameters, see Suppl. Information. \section{Plasmonic eigenmodes}\label{SEC:gt} In this section we show how to obtain the symmetry-adapted eigenmodes of a nanooligomer from the excitations of the monomer.\cite{Prodan2003,Guerrero2012,Forestiere2013,Zohar2014} We project the dipole and quadrupole excitations of the discs onto representations of the oligomer using graphical projection operators.\cite{ReichBuch} The approach can be applied to all other multipoles as well. 
The construction of oligomer eigenmodes within the hybridization model is often restricted to combinations of dipole excitations in the monomers.\cite{Brandl2006,Zohar2014,Haran2018} This assumes that only optically active eigenmodes of the monomer will induce optically active modes of the oligomer, but this is actually not the case. Dipole-inactive monomer modes like a quadrupole combine in an oligomer into a mode with a finite dipole moment. The collective mode will interact with far-field radiation even if the monomer excitation was dark. The symmetry-adapted eigenvectors are compared to simulated modes obtained by the boundary elements method. \subsection{Irreducible representations of plasmons in nanooligomers} \begin{table*} \caption{Selected point groups of nanooligomers, plasmonic tips, and colloidal crystals; example structures are given for each point group. The irreducible representations of the dipole and quadrupole moments within each point group are necessary to construct the symmetry-adapted plasmonic or dielectric eigenvectors. 
``in-plane'' (positive parity under $\sigma_h$) and ``out-of-plane'' (negative parity under $\sigma_h$) refer to the $(x,y)$ plane.} \label{TAB:moments} \begin{tabular}{llcccccc}\hline\hline point&example structures&\multicolumn{2}{c}{dipole}&\multicolumn{2}{c}{quadrupole}\\ group&&\multicolumn{2}{c}{representations}&\multicolumn{2}{c}{representations}\\ &&in-plane&out-of-&in-plane&out-of-\\ &&&plane&&plane\\\hline $D_{2h}$&disc dimer, bowtie&$B_{2u}\oplus B_{3u}$&$B_{1u}$&$2A_{g}\oplus B_{1g}$&$B_{2g}\oplus B_{3g}$\\ &disc chain, dagger\\ $D_{3h}$&trimer&$E'$&$A''_2$&$A_1'\oplus E'$&$E''$\\ $D_{4h}$&tetramer, cross, square&$E_u$&$A_{2u}$&$A_{1g}\oplus B_{1g}\oplus B_{2g}$&$E_g$\\ $D_{5h}$&pentamer&$E_1'$&$A_2''$&$A_1'\oplus E_2'$&$E_1''$\\ $D_{6h}$&hexamer, hexagon, colloidal&$E_{1u}$&$A_{2u}$&$A_{1g}\oplus E_{2g}$&$E_{1g}$\\ &hcp layer, bilayer, crystal\\ $D_{\infty h}$&sphere dimer, tip and image&$E_{1u}/\Pi_u$&$A_{1u}/\Sigma^+_u$&$A_{1g}/\Sigma^+_g \oplus E_{2g}/\Delta_g$&$E_{1g}/\Pi_g$\\ $C_{2v}$&asymmetric disc dimer&$B_1\oplus B_2$&$A_1$&$2A_1\oplus A_2$&$B_1\oplus B_2$\\ $C_{\infty v}$&tip&$E_1/\Pi$&$A_1/\Sigma^+$&$A_1/\Sigma^+\oplus E_2/\Delta$&$E_1/\Pi$\\\hline\hline \end{tabular} \end{table*} The optical excitations of nanooligomers can be described in a basis of electric multipoles. The multipoles in the monomer give rise to a set of collective eigenmodes in the oligomer that we will find with the help of the oligomer symmetry. We consider an oligomer that is composed of $n$ monomers (nanoparticles, discs, rods) arranged in a symmetric fashion as shown in Fig.~\ref{FIG:oligomers}. Each monomer has many electric multipole excitations that, combined, yield the excitations of the oligomer. In the language of group theory, the representations of the electric multipoles of the monomer induce the symmetry-adapted eigenmodes of the oligomer. 
Table~\ref{TAB:moments} lists the point groups for the oligomers in Fig.~\ref{FIG:oligomers} and other nanoplasmonic structures. The table also gives the representation of the dipole and quadrupole moment in each group, which we need to find the representations of the oligomer eigenmodes. To find the eigenmodes, we first set up and reduce the atomic representation $\Gamma_{ar}$, see Table \ref{TAB:plasmon_irreps}.\cite{InuiBook, WilsonBook, CardonaBuch, ReichBuch} It describes the permutation of the monomers under the symmetry operations of the oligomer.\cite{CardonaBuch} The characters of the atomic representation are found by counting the monomers that are left unchanged (= they remain in their original position) by each symmetry operation of the point group. The atomic representation has to be combined with the multipole representation $\Gamma_{mult}$ of the monomer. We restrict the multipoles to the dipole and quadrupole excitations of the disc, but distinguish between the in-plane and out-of-plane components, see Table \ref{TAB:moments}. The oligomer representation induced by a multipole component $\Gamma_{mult}^{i/o}$ is then given by \begin{equation} \Gamma_{pl}(mult,i/o)=\Gamma_{mult}^{i/o}\otimes \Gamma_{ar},\label{EQ:pl} \end{equation} where $i/o$ specifies in-plane and out-of-plane, respectively. Reducing $\Gamma_{pl}$ yields the irreducible representations of the plasmonic eigenmodes. We performed this analysis for the oligomers in Fig.~\ref{FIG:oligomers}, a linear disc trimer, and a nanosphere dimer. The symmetry of the eigenstates that are induced in the oligomers by the dipole and quadrupole excitation of the disc or sphere are given in Table~\ref{TAB:plasmon_irreps}. 
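The steps above (counting the monomers left unmoved by each operation, multiplying by the multipole characters as in Eq.~(\ref{EQ:pl}), and reducing) amount to a short character computation. The following Python sketch is our illustration, not part of the paper's toolchain; it reproduces the dimer entries of Table~\ref{TAB:plasmon_irreps} for the in-plane dipole.

```python
import numpy as np

# Character table of D2h; class order: E, C2(z), C2(y), C2(x), i,
# sigma(xy), sigma(xz), sigma(yz). Basis convention as in the text:
# z -> B1u, y -> B2u, x -> B3u.
D2H = {
    "Ag":  [1,  1,  1,  1,  1,  1,  1,  1],
    "B1g": [1,  1, -1, -1,  1,  1, -1, -1],
    "B2g": [1, -1,  1, -1,  1, -1,  1, -1],
    "B3g": [1, -1, -1,  1,  1, -1, -1,  1],
    "Au":  [1,  1,  1,  1, -1, -1, -1, -1],
    "B1u": [1,  1, -1, -1, -1, -1,  1,  1],
    "B2u": [1, -1,  1, -1, -1,  1, -1,  1],
    "B3u": [1, -1, -1,  1, -1,  1,  1, -1],
}

def reduce_rep(chi):
    """Multiplicity of each irreducible representation in a character chi."""
    order = len(chi)  # all classes of D2h contain a single operation
    return {name: int(round(np.dot(chi, row) / order))
            for name, row in D2H.items()}

# Atomic representation of the disc dimer (axis along x): characters count
# the discs left in place by each operation.
gamma_ar = np.array([2, 0, 0, 2, 0, 2, 2, 0])

# In-plane dipole of the monomer: B2u (y) plus B3u (x).
gamma_dip = np.array(D2H["B2u"]) + np.array(D2H["B3u"])

# Eq. (pl): induced representation = product of characters, then reduce.
mult = {k: v for k, v in reduce_rep(gamma_ar * gamma_dip).items() if v > 0}
print(mult)  # {'Ag': 1, 'B1g': 1, 'B2u': 1, 'B3u': 1}
```

Reducing `gamma_ar` alone gives $A_g\oplus B_{3u}$, and the product with the dipole characters yields the four dipole-induced dimer modes listed in Table~\ref{TAB:plasmon_irreps}.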
Group theory predicts a set of symmetry-adapted eigenmodes that get induced by the monomer multipoles.\cite{InuiBook, WilsonBook} We project them using graphical projection operators.\cite{ReichBuch} In this method one starts from a graphical representation of the multipole in the monomer. We show the surface charge distribution $+/-$ as red/blue. When applying the symmetry operations of the point group, the starting monomer with its charge distribution is transformed into the other monomers. To project onto a given non-degenerate representation, the charge distribution pattern is multiplied by the character of the representation. A character of $+1$ leaves the pattern unchanged, whereas a character of $-1$ transforms red into blue and vice versa. Summing over all patterns yields an eigenvector of the irreducible representation. The formal treatment and the projection to degenerate representations are discussed in Ref.~\citenum{ReichBuch}. \subsection{Dipole- and quadrupole-induced eigenmodes} \begingroup \begin{table} \caption{Irreducible representation of plasmonic or dielectric eigenmodes that are induced by the dipole and quadrupole representation of the monomer. 
The table includes the atomic representation $\Gamma_{ar}$ of selected oligomers; it differentiates between in-plane and out-of-plane moments, see Table~\ref{TAB:moments}.} \label{TAB:plasmon_irreps} \begin{tabular}{lc}\hline\hline &disc dimer $D_{2h}$\\\hline $\Gamma_{ar}$&$A_g\oplus B_{3u}$\\ dipole, in-plane&$A_g\oplus B_{1g}\oplus B_{2u}\oplus B_{3u}$\\ dipole, out-of-plane&$B_{2g}\oplus B_{1u}$\\ quad., in-plane&$2A_g\oplus B_{1g}\oplus B_{2u}\oplus2 B_{3u}$\\ quad., out-of-plane&$B_{2g}\oplus B_{3g}\oplus A_u\oplus B_{1u}$\\\hline &linear trimer $D_{2h}$\\\hline $\Gamma_{ar}$&$2A_g\oplus B_{3u}$\\ dipole, in-plane&$A_g\oplus B_{1g}\oplus 2B_{2u}\oplus 2B_{3u}$\\ dipole, out-of-plane&$B_{2g}\oplus 2B_{1u}$\\ quad., in-plane&$4A_g\oplus 2B_{1g}\oplus B_{2u}\oplus2 B_{3u}$\\ quad., out-of-plane&$2B_{2g}\oplus 2B_{3g}\oplus A_u\oplus B_{1u}$\\\hline &trimer $D_{3h}$\\\hline $\Gamma_{ar}$ &$A_1'\oplus E'$\\ dipole, in-plane&$A_1'\oplus A_2'\oplus 2E'$\\ dipole, out-of-plane&$A_2''\oplus E''$\\ quad., in-plane&$2A'_1\oplus A'_2\oplus 3E'$\\ quad., out-of-plane&$A''_1\oplus A''_2\oplus 2 E''$\\\hline &tetramer $D_{4h}$\\\hline $\Gamma_{ar}$ &$A_{1g}\oplus B_{2g}\oplus E_u$\\ dipole, in-plane&$A_{1g}\oplus A_{2g}\oplus B_{1g}\oplus B_{2g}\oplus2 E_u$\\ dipole, out-of-plane&$E_g\oplus A_{2u}\oplus B_{1u}$\\ quad., in-plane&$2A_{1g}\oplus A_{2g}\oplus B_{1g}\oplus 2B_{2g}\oplus 3E_u$\\ quad., out-of-plane&$A_{1u}\oplus A_{2u}\oplus B_{1u}\oplus B_{2u}\oplus 2E_g$\\\hline &pentamer $D_{5h}$\\\hline $\Gamma_{ar}$ &$A'_1\oplus E'_1\oplus E'_2$\\ dipole, in-plane&$A'_1\oplus A'_2\oplus 2E'_1\oplus 2E'_2$\\ dipole, out-of-plane&$A''_2\oplus E''_1\oplus E''_2$\\ quad., in-plane&$2A'_1\oplus A'_2\oplus 3E'_1\oplus 3E'_2$\\ quad., out-of-plane&$A''_1\oplus A''_2\oplus 2E''_1\oplus 2 E''_2$\\\hline &hexamer $D_{6h}$\\\hline $\Gamma_{ar}$ &$A_{1g}\oplus E_{2g}\oplus B_{2u}\oplus E_{1u}$\\ dipole, in-plane&$A_{1g}\oplus A_{2g}\oplus 2E_{2g}\oplus B_{1u}\oplus B_{2u}\oplus 
2E_{1u}$\\ dipole, out-of-plane&$B_{1g}\oplus E_{1g}\oplus A_{2u}\oplus E_{2u}$\\ quad., in-plane&$2A_{1g}\oplus A_{2g}\oplus 3E_{2g}\oplus B_{1u}\oplus 2B_{2u}\oplus 3E_{1u}$\\ quad., out-of-plane&$B_{1g}\oplus B_{2g}\oplus 2 E_{1g}\oplus A_{1u}\oplus A_{2u}\oplus 2E_{2u}$\\\hline &nanosphere dimer, gap mode $D_{\infty h}$\\\hline $\Gamma_{ar}$&$A_{1u}\oplus A_{1g}$\\ dipole, in-plane&$E_{1g}\oplus E_{1u}$\\ dipole, out-of-plane&$A_{1g}\oplus A_{1u}$\\ quad., in-plane&$A_{1g}\oplus E_{2g}\oplus A_{1u}\oplus E_{2u}$\\ quad., out-of-plane&$E_{1g}\oplus E_{1u}$\\\hline\hline \end{tabular} \end{table} \endgroup \begin{figure} \includegraphics[width=8.5cm]{./eigenmodes_dimer_area_scale.pdf} \caption{Symmetry-adapted eigenmodes of a dimer that are induced by the in-plane monomer dipole (top) and quadrupole (bottom) excitation. The colors represent the sign of the surface charge distribution: red for positive and blue for negative charges. The real charge distribution will differ, because eigenmodes of identical symmetry are allowed to mix.} \label{FIG:dimer_em} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{./eigenmodes_hexamer_all.pdf} \caption{Symmetry-adapted eigenvectors of a hexamer that are induced by the in-plane component of the (a) dipole and (b) quadrupole moment in a nanodisc. The multipoles are represented through positive (red) and negative (blue) surface charges. The area of the pattern indicates the relative amplitude of the multipole in each monomer. The yellow circles represent the disc monomer; a full yellow circle means that the eigenmode has zero amplitude at this point and no visible yellow represents maximum amplitude.} \label{FIG:hexamer_em} \end{figure} To demonstrate the eigenmode analysis and the use of projection operators for nanooligomers we consider a disc dimer and hexamer, Fig.~\ref{FIG:oligomers}. 
The dimer belongs to the $D_{2h}$ point group and has an atomic representation, see Table~\ref{TAB:plasmon_irreps}, \begin{equation*} \Gamma_{ar}=A_g\oplus B_{3u}. \end{equation*} The in-plane dipole transforms according to $B_{2u}\oplus B_{3u}$ within $D_{2h}$, Table~\ref{TAB:moments}. We obtain a total of four in-plane dipole-induced oligomer representations $\Gamma_{pl}(dip, i)=A_g\oplus B_{1g}\oplus B_{2u}\oplus B_{3u}$. When projecting the monomer dipole onto these irreducible representations, we find the well-known set of four non-degenerate dipolar eigenmodes, see Fig.~\ref{FIG:dimer_em}: $x$-polarized anti-bonding $A_{g}(1)$, $y$-polarized bonding $B_{1g}(1)$, $y$-polarized anti-bonding $B_{2u}(1)$, and $x$-polarized bonding $B_{3u}(1)$. Modes with index $g$ have even (gerade) parity under inversion; modes with index $u$ have odd (ungerade) parity. The representation of the in-plane quadrupole moment is $A_g\oplus B_{1g}$. Although it differs from the dipole representation, the quadrupole induces the same set of irreducible representations in the dimer, see Table~\ref{TAB:plasmon_irreps}. The projected eigenmodes are shown in Fig.~\ref{FIG:dimer_em}. Modes within one column belong to the same representation and have identical selection rules in response to any perturbation. We will discuss the signatures of the dipole- and quadrupole-derived modes in the optical spectra further below. The hexamer belongs to the $D_{6h}$ point group. This point group is also found in hexagonally packed colloidal layers and crystals.\cite{MuellerFaradayDiscussions2019} The atomic representation of the disc hexamer is $\Gamma_{ar}=A_{1g}\oplus E_{2g}\oplus B_{2u}\oplus E_{1u}$. The in-plane disc dipole belongs to the $E_{1u}$ representation of $D_{6h}$, see Table~\ref{TAB:moments}. 
The in-plane dipole moment of the disc induces the following representations in the hexamer \begin{equation} \begin{split} \Gamma_{pl}(dip, i)&=E_{1u}\otimes(A_{1g}\oplus E_{2g}\oplus B_{2u}\oplus E_{1u}) \\&=A_{1g}\oplus A_{2g}\oplus 2E_{2g}\oplus B_{1u}\oplus B_{2u}\oplus 2E_{1u}. \end{split} \end{equation} We project the dipole-induced, in-plane eigenmodes as shown in Fig.~\ref{FIG:hexamer_em}(a). The non-degenerate $A$ and $B$ eigenmodes have constant amplitude around the hexagon. In the degenerate $E$ modes the amplitude varies around the circumference: $E_1$ eigenmodes have two and $E_2$ eigenmodes have four nodes around the hexagon.\cite{ReichBuch, Bozovic1985} The quadrupole mode of the disc induces a second set of hexamer eigenmodes with identical symmetry to the disc dipole. We show the $A$, $B$, and $E_1$ modes that were induced by the quadrupole in Fig.~\ref{FIG:hexamer_em}(b). The symmetry-adapted eigenmodes allow comparative predictions on the strength of optical absorption. For example, the $E_{1u}$ eigenmodes will contribute to the absorption of linearly polarized light as we will discuss in detail below. The absorption intensity depends on the number and amplitude of the electromagnetic hotspots. Hotspots are places of very high electric field amplitude that form through near-field coupling from two adjacent discs.\cite{Prodan2003,Forestiere2013} A strong hotspot requires that two adjacent discs face each other with areas of opposite accumulated charge. An inspection of the $E_{1u}$ eigenmodes in Fig.~\ref{FIG:hexamer_em} shows that the number and strength of the hotspots differs from one mode to the other. The $E_{1u}(1)$ mode has four hotspots close to the amplitude maximum in the left eigenvector and two strong hotspots in the right eigenvector. 
Such a mode will efficiently absorb and emit far-field radiation; it will also contribute strongly to plasmon-enhanced optical processes such as SERS.\cite{EtchegoinBuch, Langer2019, LeRu2006} The $E_{1u}(2)$ eigenmode, in contrast, has only two hotspots close to the point of vanishing amplitude in one eigenvector and no hotspot between the discs in the other eigenvector. We expect less radiative interaction with far-field photons. The $E_{1u}(3)$ mode forms interparticle hotspots comparable to $E_{1u}(1)$, and we expect strong light absorption by this mode although it was derived from an optically forbidden excitation of the monomer. Eigenmodes that belong to the same irreducible representation of the oligomer are allowed to mix. The real eigenvectors will be a superposition of various monomer excitations.\cite{Pascale2019} The mixing will increase with decreasing gap between the particles, because Coulomb interaction between the monomers alters the charge distribution. Formally, this is described as a contribution by higher-order multipoles of the monomer.\cite{Pascale2019} Although the mixing affects the eigenvectors, the selection rules remain strictly applicable, since the mode symmetry has to be identical. For typical nanooligomers, the calculated eigenmodes remain predominantly dipole-like, quadrupole-like and so forth. To demonstrate this we show the calculated $A_{2g}$ modes and the $E_{1u}(1)$ mode of the hexamer in Fig.~\ref{FIG:hexamer_bem}. The $A_{2g}(1)$ eigenstate is energetically well separated from the other $A_{2g}$ modes of the hexamer. The calculated eigenvector is essentially identical to the symmetry-adapted mode. The $E_{1u }(1)$ eigenmode, in contrast, has a contribution from a quadrupole-induced excitation as is most visible in the left eigenvector. 
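The graphical projection can also be carried out numerically. In the sketch below (our illustration, not the authors' code) each $D_{2h}$ operation is encoded as a $2\times2$ matrix acting on the in-plane dipole vector together with a permutation of the two monomers; projecting a single $x$-dipole on one disc then recovers the bonding $B_{3u}(1)$ and anti-bonding $A_g(1)$ combinations of Fig.~\ref{FIG:dimer_em}.

```python
import numpy as np

# D2h operations for the disc dimer (axis along x). Position i of the
# transformed pattern receives monomer perm[i]. Operation order:
# E, C2(z), C2(y), C2(x), i, sigma(xy), sigma(xz), sigma(yz).
OPS = [
    (np.diag([ 1.0,  1.0]), (0, 1)),  # E
    (np.diag([-1.0, -1.0]), (1, 0)),  # C2(z): swaps the discs
    (np.diag([-1.0,  1.0]), (1, 0)),  # C2(y): swaps the discs
    (np.diag([ 1.0, -1.0]), (0, 1)),  # C2(x): along the dimer axis
    (np.diag([-1.0, -1.0]), (1, 0)),  # inversion
    (np.diag([ 1.0,  1.0]), (0, 1)),  # sigma(xy): disc plane
    (np.diag([ 1.0, -1.0]), (0, 1)),  # sigma(xz)
    (np.diag([-1.0,  1.0]), (1, 0)),  # sigma(yz)
]
CHARS = {"Ag":  [1, 1, 1, 1, 1, 1, 1, 1],
         "B3u": [1, -1, -1, 1, -1, 1, 1, -1]}

def project(irrep, start):
    """Sum chi(g) * g(start) over the group (unnormalized projection)."""
    out = np.zeros_like(start)
    for chi, (mat, perm) in zip(CHARS[irrep], OPS):
        out = out + chi * np.array([mat @ start[src] for src in perm])
    return out

# Start pattern: an x-oriented dipole on monomer 0 only.
start = np.zeros((2, 2))
start[0] = [1.0, 0.0]

b3u = project("B3u", start) / 4  # bonding: both dipoles along +x
ag = project("Ag", start) / 4    # anti-bonding: dipoles point oppositely
```

Both projected patterns are $x$-polarized, in line with the eigenmode assignment of the dimer discussed above.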
\begin{figure} \includegraphics[width=8.5cm]{./hexamer_bem_eigenmodes_line.pdf} \caption{$A_{2g}(1), A_{2g}(2)$, and $E_{1u}(1)$ eigenmodes calculated for the hexamer within the boundary elements method. } \label{FIG:hexamer_bem} \end{figure} We also projected the eigenmodes for a regular trimer, a tetramer, and a pentamer; the symmetry-adapted eigenmodes are shown in Supplementary Figs. S1-S3. The modes show similar features as discussed for the dimer and hexamer above. In particular, there are always dipole- and quadrupole-induced eigenmodes that belong to the same irreducible representation of the oligomer. \section{Optical selection rules: Linear polarization and cylindrical vector beams} \label{sec:AllResults} Optical selection rules predict whether a given transition is allowed by considering the symmetry of the system and the incoming photon. If the system size is much smaller than the wavelength of the light, transitions that transform like the vector representation are allowed by symmetry to interact with the dipole moment of the electromagnetic field.\cite{InuiBook,WilsonBook, CardonaBuch} Plasmonic and dielectric oligomers, however, are comparable in size to the focus of an incoming light beam. This activates a new set of optical transitions if structured light is used for excitation. In this section we will consider linear polarization and cylindrical vector beams. Optical absorption from the ground state excites eigenmodes that transform like the incoming photon.\cite{InuiBook, WilsonBook} The incoming light acts as a perturbation with symmetry $\Gamma_\mathcal{H}$ on an initial plasmonic state $\Psi_{pl}^i$ with symmetry $\Gamma_i$. The final state is denoted by $\Psi_{pl}^f$ and symmetry $\Gamma_f$. 
This transition will be allowed if the direct product\cite{InuiBook} \begin{equation} \Gamma_f\otimes \Gamma_\mathcal{H}\otimes\Gamma_i\supset\Gamma_1, \label{EQ:GT} \end{equation} where $\Gamma_1$ is the totally symmetric representation of the point group. We assume that the initial state is the oligomer ground state belonging to $\Gamma_1$. Then Eq.~\eqref{EQ:GT} is equivalent to requiring $\Gamma_f\subset\Gamma_\mathcal{H}$. That means that the representation of the light needs to contain the irreducible representation of the oligomer eigenstate. The representation of the optical dipole interaction Hamiltonian is given by the vector representation in case of linearly polarized light. The polarization patterns of radial and azimuthal polarization are shown in Fig.~\ref{FIG:dimer_spectra}(a) further below. We find their representations by inspecting the transformation of the polarization patterns under the symmetry operations of the oligomer point groups.\cite{InuiBook, CardonaBuch} Table~\ref{TAB:sel} lists the selection rules we obtained. Linearly polarized light excites dipole-type eigenmodes of the oligomer.\cite{Chuntonov2011,Kerber2017,Jorio2017} Cylindrical vector beams excite modes with vanishing dipole moment that are normally considered dark.\cite{Hentschel2013,Yanai2014,Parramon2012} Radially polarized beams interact with excitations that belong to the totally symmetric representation. Light with azimuthal polarization will be absorbed by states that transform like the rotation around the $z$ axis within the point group of the oligomer. We expect the optical absorption spectra to change drastically when varying polarization. \begin{table} \caption{Optical selection rules for linear, radial, and azimuthal polarization. The light propagation direction is along $z$. 
$\lambda\rightarrow\infty$ implies the quasi-static approximation where the electric field is considered to be translationally invariant along the propagation direction.} \label{TAB:sel} \begin{tabular}{lccc}\hline\hline &linear&radial&azimuthal\\\hline \multicolumn{4}{l}{$\lambda\rightarrow\infty$}\\\hline $D_{2h}$&$B_{3u}(x), B_{2u}(y)$&$A_{1g}$&$B_{1g}$\\ $D_{3h}$&$E'(x,y)$&$A'_{1}$&$A'_{2}$\\ $D_{4h}$&$E_u(x,y)$&$A_{1g}$&$A_{2g}$\\ $D_{5h}$&$E_1'(x,y)$&$A'_{1}$&$A'_{2}$\\ $D_{6h}$&$E_{1u}(x,y)$&$A_{1g}$&$A_{2g}$\\ $C_{2v}$&$B_1(x), B_2(y)$&$A_{1}$&$A_{2}$\\ $D_{\infty h}$&$E_{1u}(x,y)$&$A_{1g}$&$A_{2u}$\\ $C_{\infty v}$&$E_1(x,y)$&$A_{1}$&$A_{2}$\\\hline\hline \end{tabular} \end{table} \subsection{Light scattering and Fano resonances} Elastic or Rayleigh scattering of light is a prime characterization tool for nanoplasmonic and nanophotonic oligomers. Resonant Rayleigh scattering also known as dark field spectroscopy detects excitations with very high sensitivity.\cite{Knight2010, Crut2014, Wang2019, Kuznetsovaag2016} The symmetry-imposed selection rules of Rayleigh scattering allow any eigenstate as intermediate scattering state, but resonances occur only if the energy and the symmetry of the excited state match the incoming photon. In Rayleigh scattering, an incoming photon with symmetry $\Gamma_\mathcal{H}$ excites the system into the intermediate state $\Gamma_n$. The light is immediately re-emitted into the scattered photon with $\Gamma_\mathcal{H}$ symmetry. Since the square of any representation contains the totally symmetric representation, Rayleigh scattering is allowed for any intermediate state, i.e., \begin{equation} \Gamma_f\otimes\Gamma_\mathcal{H}\otimes\Gamma_\mathcal{H}\otimes\Gamma_i\supset\Gamma_1 \label{EQ:Rayleigh} \end{equation} is true irrespective of the intermediate state ($\Gamma_f=\Gamma_i$ is the ground state). 
Resonant Rayleigh scattering, in addition, requires the intermediate excited state to coincide with an eigenstate of the plasmonic system. This will occur if the symmetry of the intermediate state $\Gamma_n$ is contained in $\Gamma_\mathcal{H}$ and the photon energy matches the eigenenergy of $\Gamma_n$. Resonances increase the cross section for light scattering by several orders of magnitude, making resonant scattering dominant in the Rayleigh spectra.\cite{Knight2010, Crut2014, Wang2019} Dark field spectra of plasmonic oligomers often show Fano resonances that arise from the superposition of scattering channels.\cite{Lukyanchuk2010,Miroshnichenko2010,Francescato2012,Forestiere2013,Hopkins2013} Fano resonances may result in anti-resonances in the spectra, i.e., a broad scattering peak with a strong dip at the energy of a second excitation.\cite{Gallinet2013} Fano resonances occur if identical initial and final states are connected by more than one scattering pathway. For resonant Rayleigh scattering this means that two plasmonic excitations of symmetry $\Gamma_n$ contribute to the resonance, because they overlap in excitation energy. A typical case is one superradiant plasmonic mode with a large full width at half maximum (FWHM) that overlaps with a narrow mode with smaller oscillator strength and line width.\cite{Hao2009,Hentschel2011,Forestiere2013,Hopkins2013} We will show that Fano resonances occur for linear polarization in all plasmonic oligomers with a threefold or higher principal axis of rotation.\cite{Forestiere2013,Hopkins2013} An understanding of the symmetry properties of plasmonic eigenmodes allows one to tailor Fano resonances by manipulating the oligomer geometry. \subsection{Spectra of plasmonic oligomers}\label{SEC:results} In this section we present exemplary optical absorption and elastic light-scattering spectra of plasmonic oligomers. We simulated spectra for linear polarization and cylindrical vector beams, see Fig.~\ref{FIG:dimer_spectra}(a). 
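The allowedness criterion of Eq.~(\ref{EQ:GT}) reduces to a character computation: the triple product must contain the totally symmetric representation. A minimal sketch for the $C_{2v}$ group (our illustration; the polarization labels follow Table~\ref{TAB:sel}):

```python
import numpy as np

# Character table of C2v; operation order: E, C2, sigma_v(xz), sigma_v'(yz).
C2V = {"A1": [1, 1, 1, 1],
       "A2": [1, 1, -1, -1],
       "B1": [1, -1, 1, -1],
       "B2": [1, -1, -1, 1]}

def allowed(final, light, initial="A1"):
    """Transition allowed iff Gamma_f x Gamma_H x Gamma_i contains A1."""
    chi = (np.array(C2V[final]) * np.array(C2V[light]) * np.array(C2V[initial]))
    return chi.sum() / len(chi) > 0  # multiplicity of A1

print(allowed("B1", "B1"))  # True:  x-polarized light excites B1 modes
print(allowed("A2", "A2"))  # True:  azimuthal light excites A2 modes
print(allowed("B1", "A2"))  # False: B1 modes stay dark under azimuthal light
```

The same function with $\Gamma_\mathcal{H}$ squared verifies that Rayleigh scattering, Eq.~(\ref{EQ:Rayleigh}), is allowed for any intermediate state.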
We will relate the peaks to the projected eigenmodes of the oligomers. Cylindrical vector beams excite dark, non-degenerate plasmon modes that have narrower line width than bright eigenmodes. For the higher-order oligomers the azimuthal mode is the plasmon of lowest energy. Since all irreducible representations contain more than one plasmon eigenstate, Fano resonances are predicted in the scattering spectra. They are particularly pronounced in linear excitation, because of the large FWHM of the bright plasmon modes. \begin{figure} \includegraphics[width=8.5cm]{./dimer_absorption_scattering_polpattern.pdf} \caption{(a) Direction of the electric field across the beam focus for linear, radial, and azimuthal polarization. Radial and azimuthal polarization contain a vortex at the beam center. (b)-(e) Optical spectra of a disc dimer ($d=100\,$nm, $h=40\,$nm, $g=20\,$nm, and $\varepsilon=1.65$). (b) Absorption and (c) scattering cross section for linearly polarized light. Black line: $x$ polarization, red line: $y$ polarization. (d) Absorption and (e) scattering cross section for excitation by cylindrical vector beams. Cyan line: radial polarization, magenta: azimuthal polarization. The labels indicate the eigenmode assignment, see Fig.~\ref{FIG:dimer_em}.} \label{FIG:dimer_spectra} \end{figure} We first consider a nanodimer with $D_{2h}$ symmetry. The $x$ and $y$ polarized absorption spectra, Fig.~\ref{FIG:dimer_spectra}(b), each show a dominant peak that arises from the dipole-induced $B_{2u}(1)$ and $B_{3u}(1)$ modes. The spectra contain additional weaker peaks; most pronounced is the quadrupole-induced $B_{3u}(2)$ mode in $x$ polarization. The quadrupole is optically forbidden in the monomer, but gets activated in the dimer by combining two quadrupoles out of phase, which leads to a hotspot in the dimer void. Effectively, the hotspot provides a way to interact with far-field radiation. 
With decreasing gap size (stronger hotspot) the $B_{3u}(2)$ peak becomes more and more pronounced in the absorption spectrum, see Suppl.\ Fig.\ S5. The $B_{3u}(1)$ mode continues to have the highest integrated intensity, but the peak height is small because of the large FWHM due to the radiative decay of this superradiant mode. The two $B_{3u}$ modes overlap in excitation energy. They will interfere in light scattering, creating the characteristic Fano dip close to 2\,eV in Suppl.\ Fig.~S4. When exciting the dimer with radially and azimuthally polarized light, the optical spectra change drastically in the energies of the peaks, their FWHM, and their intensity. Cylindrical vector beams excite gerade representations that are optically inactive in the quasi-static dipole approximation. The activation occurs because the dimer is quite large (220\,nm) compared to the wavelength of light (900-450\,nm in Fig.~\ref{FIG:dimer_spectra}). The right and left disc interact with electromagnetic fields of antiparallel polarization. Interestingly, the absorption cross section of the $A_{1g}(1)$ and $B_{1g}(1)$ plasmons in Fig.~\ref{FIG:dimer_spectra}(d) is a factor of two to five higher than for linearly polarized light in Fig.~\ref{FIG:dimer_spectra}(b). Although cylindrical vector beams get absorbed by the $A_{1g}$ and $B_{1g}$ plasmons, these states do not radiate efficiently into the far field. Light scattering represents a combined excitation and radiation event. The scattering cross section for the vector beams, Fig.~\ref{FIG:dimer_spectra}(e), is much weaker than the absorption cross section, Fig.~\ref{FIG:dimer_spectra}(d), and than scattering by linearly polarized light, Fig.~\ref{FIG:dimer_spectra}(c). Strong absorption combined with weak scattering (or radiation) is interesting for several reasons. The small probability for radiation into the far field increases the radiative lifetime of the plasmon eigenmode and reduces its broadening, as we discuss in the next section. 
Also, low scattering and strong absorption effectively cloak strong scatterers like plasmonic nanostructures.\cite{AndreaAlu2009} \begin{figure} \includegraphics[width=8.5cm]{./trimer_absorption_scattering.pdf} \caption{Optical spectra of a disc trimer ($d=100\,$nm, $h=40\,$nm, $g=20\,$nm, and $\varepsilon=1.65$). (a) Absorption and (b) scattering for linearly polarized light. The arrow indicates a dip in the scattering cross section that comes from the interference of the $E'(1)$ and $E'(2)$ modes. (c) Absorption and (d) scattering for excitation by cylindrical vector beams. Cyan line: radial polarization, magenta: azimuthal polarization. The labels indicate the eigenmode assignment, see Suppl. Fig.~S1. } \label{FIG:trimer_spectra} \end{figure} A trimer belongs to the $D_{3h}$ point group, Fig.~\ref{FIG:oligomers}. Figure~\ref{FIG:trimer_spectra} shows the calculated absorption and scattering spectra under linear, radial, and azimuthal polarization. In $D_{3h}$ and all groups with a higher-order principal axis of rotation, the $x$ and $y$ directions are degenerate. Therefore, in-plane linearly polarized light ($E'$ representation) will yield the absorption spectrum in Fig.~\ref{FIG:trimer_spectra}(a) and the scattering spectrum in Fig.~\ref{FIG:trimer_spectra}(b) irrespective of the polarization direction within the plane. Table~\ref{TAB:plasmon_irreps} lists two dipole-induced eigenmodes belonging to the $E'$ representation. Indeed, the absorption spectrum shows two peaks $E'(1)$ and $E'(2)$ that result in a Fano feature in light scattering [arrow in Fig.~\ref{FIG:trimer_spectra}(b)]. In contrast to the dimer, where the Fano feature arose from interference between a dipole- and a quadrupole-derived eigenmode, the two peaks in the trimer are induced by monomer dipoles. Absorption and scattering by the quadrupole-induced $E'(3)$ mode, Suppl. Fig.~S1, are too weak to be identified in the spectra. 
A similar mode will appear more prominently in the higher-order oligomers. Cylindrical vector beams yield much narrower plasmon resonances than linearly polarized light. In particular, the azimuthal spectrum is remarkable for its small FWHM $\gamma = 107\,$meV, Fig.~\ref{FIG:trimer_spectra}(c). Bulk damping at the $A_2'(1)$ plasmon energy (1.75\,eV) is $\approx70\,$meV,\cite{JohnsonChristy1972,Gallinet2013} so that the contribution from radiative damping appears to be very small. For comparison, the $E'(1)$ mode at almost the same energy (1.74\,eV) has $\gamma=380\,$meV. The simulation highlights the advantage of the dipole-forbidden plasmon modes that cannot decay easily by coupling to the photonic far field. \begin{figure} \includegraphics[width=8.5cm]{./higher_order_oligomers.pdf} \caption{Absorption and scattering coefficient for higher-order oligomers: tetramer (red, square), pentamer (blue, dots), and hexamer (black, line). (a) Absorption and (b) scattering coefficient for linear in-plane polarization. The labels stand for the following modes: tetramer -- $L(1)=E_u(1)$, $L(2)=E_u(2),$ and $L(3)=E_u(3)$, pentamer -- $L(1)=E_1'(1)$, $L(2)=E_1'(2)$, and $L(3)=E_1'(3)$, and hexamer -- $L(1)=E_{1u}(1)$, $L(2)=E_{1u}(2)$, and $L(3)=E_{1u}(3)$. (c) Absorption and (d) scattering coefficient for radial in-plane polarization. Labels: tetramer and hexamer -- $R(1)=A_{1g}(1)$ and pentamer $R(1)=A_1'(1)$. (e) Absorption and (f) scattering coefficient for azimuthal in-plane polarization. Labels: tetramer and hexamer -- $A(1)=A_{2g}(1)$ and $A(2)=A_{2g}(2)$, pentamer -- $A(1)=A_2'(1)$ and $A(2)=A_2'(2)$. Except for panel (e), the spectra were shifted vertically for clarity.} \label{FIG:oligomer_linear} \end{figure} The optical properties of higher-order oligomers -- tetramer, pentamer, and hexamer -- evolve incrementally, see Fig.~\ref{FIG:oligomer_linear}. All have degenerate in-plane polarized $(x,y)$ representations. 
The disc dipole and quadrupole induce a total of four linearly polarized in-plane oligomer eigenstates. The linearly polarized absorption spectra contain one pronounced peak $L(2)$ and two weaker features at lower $L(1)$ and higher $L(3)$ energy as shown in Fig.~\ref{FIG:oligomer_linear}. They arise from the two dipole-induced and the lowest-energy quadrupole-induced eigenmodes. Quite remarkably, the broadening of some of the peaks is so strong that the most prominent dipole modes are hardly visible in the absorption spectra. For example, $L(1)=E_{1u}(1)$ in the hexamer, Fig.~\ref{FIG:hexamer_em}. This mode has the strongest hotspots of the $E_{1u}$ states, resulting in strong far-field coupling and a smeared-out peak with $\gamma=570\,$meV. The $E_{1u}(1)$ mode, however, dominates the scattering spectrum in Fig.~\ref{FIG:oligomer_linear}(b), where the other two $E_{1u}$ modes appear as kinks and dips. The scattering spectra of the higher-order oligomers are remarkably asymmetric, which is a result of interference between the resonantly scattering modes. The energy of the maximum intensity is higher than the eigenenergy of the $L(1)$ state, as shown for the hexamer by the vertical line in Fig.~\ref{FIG:oligomer_linear}(b). This shift needs to be kept in mind when extracting plasmon energies from dark-field spectra. The higher-order oligomers absorb radially and azimuthally polarized light, Fig.~\ref{FIG:oligomer_linear}(c)-(f). The absorption cross section is very high; it exceeds the geometrical cross section by up to a factor of four. Azimuthally polarized light is also strongly scattered. The peak scattering intensity of $A(1)$ is higher than that of the linearly polarized $L(1)$ peak. The $A(1)$ peak position shifts to smaller energies with increasing oligomer order, making it the state with the smallest energy for $n \geq 3$. At the same time, its scattering intensity and FWHM increase. 
This behavior reflects the increase in the number of hotspots that simultaneously reduces the plasmon energy in the bonding configuration and increases the coupling to far-field radiation. The radially polarized $R(1)$ mode has almost constant eigenenergy and a much smaller increase in the ratio between light scattering and absorption. The totally symmetric $R(1)$ eigenmodes produce no strong hotspots, and the number of monomers in the oligomer is less important. \subsection{Radiative and non-radiative decay} \label{sec:PlasmonDecay} \begin{figure} \includegraphics[width=8.5cm]{./rad_nonrad.pdf} \caption{(a) FWHM $\gamma$, (b) non-radiative $\gamma_{nr}$, and (c) radiative $\gamma_r$ damping for oligomers of order $n=2-6$. Blue dots are for the lowest-energy dipole-allowed transition [dimer $B_{3u}(1)$, trimer $E'(1)$, tetramer $E_u(1)$, pentamer $E_1'(1)$, and hexamer $E_{1u}(1)$] and red squares for the lowest-energy transition for azimuthally polarized light [dimer $B_{1g}(1)$, trimer $A_2'(1)$, tetramer $A_{2g}(1)$, pentamer $A_2'(1)$, and hexamer $A_{2g}(1)$]. The gray area in (b) marks the range of non-radiative damping in bulk gold for all simulated plasmon energies. The lines in panel (c) are a guide to the eye. The error is within the size of the symbols except for panel (b), where error bars are shown.} \label{FIG:rad_nonrad} \end{figure} Many applications of plasmonic and nanophotonic systems require engineering the radiative and non-radiative decay. Plasmon-enhanced spectroscopy, for example, relies on increasing the radiative damping of a nearby dipole via radiating plasmons.\cite{LeRu2006, Li2017, Langer2019} For hot-electron generation, on the other hand, the non-radiative relaxation should be maximized at the expense of radiative decay.\cite{Hartland2017, MuellerFaradayDiscussions2019, Hoeing2020} We will now analyse the radiative $\gamma_r$ and non-radiative $\gamma_{nr}$ decay in the plasmonic oligomers using the simulated spectra. 
The ratio of the scattering $\sigma_{sca}$ and absorption $\sigma_{abs}$ cross section is related to the two relaxation channels, as $\gamma_r/\gamma_{nr} = \sigma_{sca}/\sigma_{abs}$.\cite{Liu2009} Using the fact that the total decay (FWHM) is $\gamma=\gamma_r+\gamma_{nr}$, we obtain \begin{equation} \gamma_{nr}=\gamma(1+\sigma_{sca}/\sigma_{abs})^{-1}. \end{equation} As examples, we analysed the linearly $L(1)$ and azimuthally $A(1)$ polarized modes with the lowest energy, see caption of Fig.~\ref{FIG:rad_nonrad}. For both plasmons the FWHM increases strongly with increasing order of the oligomer, Fig.~\ref{FIG:rad_nonrad}. This increase in $\gamma$ is entirely caused by the rising radiative decay. The non-radiative decay $\gamma_{nr}$ drops from $80\,$meV in the dimer, close to the bulk value [see gray area in Fig.~\ref{FIG:rad_nonrad}(b)], to $20-30\,$meV in the higher-order oligomers. This corresponds to $30-40\%$ of the bulk damping rate at the energies of the plasmon modes.\cite{Wang2006} For the hexamer only $5-10\%$ of the FWHM is caused by non-radiative decay channels. The reason is that a large fraction of the plasmon mode energy is stored in the oligomer hotspots.\cite{Gallinet2013} This reduces the overlap with the metal electrons and thus non-radiative decay, but increases the radiative damping. We found that $\sim90\%$ of the $A(1)$ mode volume is outside the metal in the hexamer, see Suppl.\ Sect.\ S3, in excellent agreement with its contribution to $\gamma$. The small contribution of $\gamma_{nr}$ is quite remarkable; it implies that a higher quality of the plasmonic material -- e.g., single crystals of a metal -- will hardly affect losses in oligomers with $n>3$. The radiative decay $\gamma_r$ increases linearly with oligomer order, see lines in Fig.~\ref{FIG:rad_nonrad}(c). 
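The decomposition used above amounts to two lines of algebra; a minimal Python sketch (the FWHM and cross-section values below are hypothetical and serve only to illustrate $\gamma_{nr}=\gamma(1+\sigma_{sca}/\sigma_{abs})^{-1}$ together with $\gamma=\gamma_r+\gamma_{nr}$):

```python
# Split the total plasmon linewidth (FWHM) into its radiative and
# non-radiative parts, using gamma_r / gamma_nr = sigma_sca / sigma_abs
# and gamma = gamma_r + gamma_nr.

def decay_rates(gamma, sigma_sca, sigma_abs):
    """Return (gamma_r, gamma_nr) in the same units as gamma."""
    gamma_nr = gamma / (1.0 + sigma_sca / sigma_abs)
    gamma_r = gamma - gamma_nr
    return gamma_r, gamma_nr

# Hypothetical example: a 400 meV FWHM with scattering three times
# stronger than absorption is dominated by radiative decay.
gamma_r, gamma_nr = decay_rates(400.0, sigma_sca=3.0, sigma_abs=1.0)
print(gamma_r, gamma_nr)  # 300.0 100.0
```

For a superradiant mode, where scattering strongly dominates absorption, the same two lines immediately show that $\gamma_r$ approaches the full FWHM.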
This is equally true for the linearly polarized (bright) and the azimuthal (dark) mode, although radiative damping of the azimuthal modes remains smaller than for the linearly polarized excitations. Nevertheless, the notion of a ``dark'' or ``forbidden'' mode is clearly no longer justified for reasonably large plasmonic oligomers. \section{Light with orbital angular momentum}\label{SEC:OAM} \begin{figure} \includegraphics[width=8.5cm]{./ring_eigenmodes.pdf} \caption{(a) Nanoscale ring or torus. (b) Plasmon eigenmodes of the ring. Blue (red) areas stand for positive (negative) surface charge density.} \label{FIG:ring} \end{figure} Another class of structured light is beams that carry orbital angular momentum (OAM).\cite{Allen1992,Yao2011,OAMCollection2017} Since angular momentum is a conserved quantity, we expect novel selection rules for OAM beams. This was elegantly confirmed in recent experiments that examined quadrupole excitations in ions, i.e., transitions that get induced by the quadrupole moment of the electromagnetic field.\cite{Schmiegelow2016,Afanasev2018} Different transitions were excited by OAM beams when varying the magnitude and sign of the orbital angular momentum. For the orbital momentum to have an effect on dipole excitations, however, the size of the absorbing structure needs to be comparable to the focused beam. Then, the angular momentum is conserved for the entire structure during light absorption and scattering,\cite{Kerber2017,Kerber2018,Machado2018,Konzelmann2019} which excites plasmon eigenmodes with an angular momentum that matches the momentum of the incoming beam. 
\begin{table*} \begin{tabular}{cccccc}\hline\hline $l_z$&radial&azimu.&left circ.&right circ.&linear\\\hline \multicolumn{6}{l}{ring, $D_{\infty h}$}\\\hline 0&$A_{1g}$&$A_{2g}$&$E_{1u}$&$E_{1u}$&$E_{1u}$\\ 1&$E_{1u}$&$E_{1u}$&$E_{2g}$&$A_{1g}\oplus A_{2g}$&$A_{1g}\oplus A_{2g}\oplus E_{2g}$\\ 2&$E_{2g}$&$E_{2g}$&$E_{3u}$&$E_{1u}$&$E_{1u}\oplus E_{3u}$\\ 3&$E_{3u}$&$E_{3u}$&$E_{4g}$&$E_{2g}$&$E_{2g}\oplus E_{4g}$\\ even, $\ge 2$&$E_{l_zg}$&$E_{l_zg}$&$E_{(l_z+1)u}$&$E_{(l_z-1)u}$&$E_{(l_z+1)u}\oplus E_{(l_z-1)u}$\\ odd, $\ge 3$&$E_{l_zu}$&$E_{l_zu}$&$E_{(l_z+1)g}$&$E_{(l_z-1)g}$&$E_{(l_z+1)g}\oplus E_{(l_z-1)g}$\\\hline \multicolumn{6}{l}{hexamer, $D_{6h}$}\\\hline 0&$A_{1g}$&$A_{2g}$&$E_{1u}$&$E_{1u}$&$E_{1u}$\\ 1&$E_{1u}$&$E_{1u}$&$E_{2g}$&$A_{1g}\oplus A_{2g}$&$A_{1g}\oplus A_{2g}\oplus E_{2g}$\\ 2&$E_{2g}$&$E_{2g}$&$B_{1u}\oplus B_{2u}$&$E_{1u}$&$B_{1u}\oplus B_{2u}\oplus E_{1u}$\\ 3&$B_{1u}$&$B_{2u}$&$E_{2g}$&$E_{2g}$&$E_{2g}$\\ 4&$E_{2g}$&$E_{2g}$&$B_{1u}\oplus B_{2u}$&$E_{1u}$&$B_{1u}\oplus B_{2u}\oplus E_{1u}$\\\hline\hline \end{tabular} \caption{Selection rules for OAM beams within the $D_{\infty h}$ and $D_{6h}$ point groups. The selection rules for $l_z<0$ are obtained by flipping the sign of $l_z$ and $m_p$ simultaneously.} \label{TAB:OAM} \end{table*} To study the optical selection rules for OAMs within group theory, we first consider a ring with nanoscale dimensions, Fig.~\ref{FIG:ring}a. The structure belongs to the $D_{\infty h}$ point group. The eigenstates of the ring are standing waves around the circumference, Fig.~\ref{FIG:ring}.\cite{Aizpurua2003} The wavelength of these excitations $\lambda$ inside the material is given by $\pi d_r = m \lambda$, where $d_r$ is the diameter of the ring. 
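The standing-wave condition $\pi d_r = m \lambda$ directly fixes the wavelength of the $m$-th ring mode; a minimal sketch (the ring diameter below is a hypothetical value chosen for illustration):

```python
import math

# Standing-wave condition for ring plasmons: pi * d_r = m * lambda,
# i.e. the m-th mode fits m wavelengths into the circumference pi * d_r.

def plasmon_wavelength(d_r, m):
    """Wavelength of the m-th ring mode, in the same units as d_r."""
    if m < 1:
        raise ValueError("mode index m must be a positive integer")
    return math.pi * d_r / m

# Hypothetical 100 nm diameter ring: fundamental (m = 1) mode wavelength
print(round(plasmon_wavelength(100.0, 1), 1))  # 314.2
```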
The integer $m$ can be identified with the $z$ component of the angular momentum with respect to the principal axis; it is a conserved quantity.\cite{InuiBook,Damnjanovic1999,ReichBuch,Bozovic1985} To specify an eigenstate of the ring we need a set of three quantum numbers, see Methods:\cite{DamnjanovicBuch,ReichBuch,Bozovic1985} The $z$ component of the angular momentum $m$ as introduced above, the parity of the horizontal mirror plane $\sigma_h$, and the parity of the mirror planes $\sigma_v$ that contain the $z$ axis. To obtain the $m$ quantum number for the state excited by an OAM beam, we have to add the orbital angular momentum $l_z$ of the beam and the spin angular momentum $m_p$ related to polarization, $m=l_z + m_p$. Radial polarization has $m_p=0$, $\sigma_h=+1$, and $\sigma_v=+1$; azimuthal polarization $m_p=0, \sigma_h=+1$, and $\sigma_v=-1$. For left- (right-)handed circular polarization $m_p=+1$ ($-1$) and $\sigma_h=+1$, while $\sigma_v$ is not defined. This means that both the representations for $\sigma_v=+1$ and $\sigma_v=-1$ will contribute for circular polarization. Finally, linear polarization is the superposition of left- and right-handed circularly polarized light. Taken together, we find the selection rules listed in Table~\ref{TAB:OAM} for the ring. They apply to nanostructures with full rotational symmetry around the propagation direction of the OAM ($z$ axis). Ordinary, linearly polarized light ($l_z=0$) excites the $E_{1u}$ modes in Fig.~\ref{FIG:ring}b; a beam with $l_z=+3$ will excite the $E_{3u}$ mode if it is radially or azimuthally polarized, the $E_{4g}$ mode for left-handed and the $E_{2g}$ mode for right-handed circular polarization, and the $E_{2g}$ and $E_{4g}$ modes for linear polarization. We now proceed from the ring structure to nanoscale oligomers. The rotational symmetry of the oligomer is described by the principal axis of rotation $C_n$ with order $n$. 
Because only rotations by certain angles preserve the symmetry of the oligomer, $m$ can only take on integer values with $|m|\le n/2$.\cite{DamnjanovicBuch,Damnjanovic1999,ReichBuch,Bozovic1985} Higher absolute values of $m$ are brought back into the allowed range with an Umklapp rule, where $m$ is replaced by $m'=n-m$.\cite{DamnjanovicBuch,Bozovic1985} We apply the Umklapp rule and combine it with the parity selection rules of the polarization patterns into the OAM selection rules of a hexamer, Table~\ref{TAB:OAM}. We find that through the right combination of OAM and polarization, all eigenmodes of the hexamer become accessible to optical spectroscopy. The selection rules explain the simulated scattering of an OAM beam by a hexamer (and other oligomers) remarkably well.\cite{Kerber2017} For example, the modes excited for $l_z=2$ and 4 in Ref.~\citenum{Kerber2017} are identical, as expected from the selection rules. The breathing-like $A_{1g}$ eigenmode appears for $m=l_z+m_p=-1+1=0$. The parallel and antiparallel spectra for $|l_z|=3$ are identical, and so forth. The calculated eigenmodes in Ref.~\citenum{Kerber2017} likewise agree with the projected eigenstates in Fig.~\ref{FIG:hexamer_em} when replacing the discs by rod monomers. The conservation of the $m$ quantum number during excitation also explains the orbital angular momentum dichroism proposed in Ref.~\citenum{Kerber2018}. Since $m$ and not $l_z$ is the conserved quantity, the excited states change when changing the sign of $l_z$, if the light also carries spin momentum. Beams with angular momentum will allow addressing a wide range of optical excitations in nanophotonic systems, see Suppl.~Tables~S2 and S3. The angular momentum provides an additional degree of freedom to tailor the properties and light-matter interaction for plasmonic and dielectric modes. Such excitations will produce near-fields with well-defined angular momentum. 
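The momentum bookkeeping of this section -- adding $m=l_z+m_p$ and folding the result back into the allowed range with the Umklapp rule $m'=n-m$ -- can be condensed into a short helper. The sketch below is ours (the function name is an assumption, not taken from the references); the representation label then follows from $|m|$ together with the mirror parities:

```python
def excited_m(l_z, m_p, n=None):
    """|m| of the plasmon excited by a beam with OAM l_z and spin m_p.

    m = l_z + m_p is the conserved z angular momentum. For an oligomer
    with an n-fold principal axis only |m| <= n/2 is allowed; larger
    values are folded back with the Umklapp rule m -> n - m. For a ring
    (n=None, full rotational symmetry) no folding is needed.
    """
    m = abs(l_z + m_p)
    if n is None:
        return m
    m %= n
    if m > n / 2:           # Umklapp: fold back into the allowed range
        m = n - m
    return m

# Breathing-like A_1g mode of the hexamer: l_z = -1 with left circular
# polarization (m_p = +1) gives m = 0, as discussed in the text.
print(excited_m(-1, +1, n=6))  # 0
# l_z = 2 with left circular polarization excites m = 3 (B_1u/B_2u in D_6h)
print(excited_m(2, +1, n=6))   # 3
# l_z = 4 with radial polarization folds back to m = 2 (E_2g)
print(excited_m(4, 0, n=6))    # 2
```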
In this way, angular momentum may be transferred to much smaller nanostructures via plasmon-mediated excitations. \section{Retardation of the incoming light}\label{SEC:retardation} In discussing novel selection rules from variations in the electric field, we have focused so far on the spatial extension of the oligomers compared to the focus of the light beam. Since light is an electromagnetic wave, the electric field also varies along its propagation direction at a given time. The field retardation will excite dipole-forbidden modes of the oligomer, if the extension along the propagation direction becomes a sizable fraction of the light wavelength.\cite{Rivera2016} For example, in oligomers with discs consisting of a vertical metal-insulator-metal stack, antiparallel plasmonic dipoles are excited in the upper and lower metal disc.\cite{Pakizeh2006, Verre2015, Chang2012} Similarly, light propagating normal to a gold nanoparticle bilayer excites plasmons with antiparallel dipole moments in the two layers.\cite{MuellerFaradayDiscussions2019,MuellerACSPhotonics2018} To understand the selection rules introduced by field retardation, we consider a dimer of two spherical nanoparticles (point group $D_{\infty h}$) and light propagating along its $z$ axis (that is, the $C_\infty$ axis of the dimer). The in-plane dipole moment induces an $E_{1u}$ and an $E_{1g}$ dimer eigenmode, Table~\ref{TAB:plasmon_irreps}. The $E_{1u}$ mode is optically active within the quasi-static approximation.\cite{InuiBook, WilsonBook} To introduce field retardation we assume that half the wavelength matches the center-to-center distance of the two spheres, $\lambda/2 = d+g$. In this situation the electric field points in opposite directions at the two spheres. Field retardation will affect selection rules if the point group contains the inversion and/or horizontal mirror plane. 
The parity for these operations changes to $-1$.\cite{MuellerFaradayDiscussions2019} This replaces a given gerade representation by its ungerade counterpart and vice versa. Instead of the $E_{1u}$ mode, the $E_{1g}$ eigenstate is allowed for the retarded field and linearly polarized light. This mode has the two dipoles pointing in opposite directions.\cite{MuellerFaradayDiscussions2019,Bruno2019} In a real experiment, the wavelength will neither be infinite nor match the dimer size, and both modes will contribute to the optical spectra. Indeed, optical experiments on hexagonal layers of nanoparticles observed absorption by a plasmon mode in the bilayer that was absent from the spectrum of a monolayer.\cite{MuellerFaradayDiscussions2019,MuellerACSPhotonics2018} The mode had parallel dipoles within a layer, but antiparallel dipoles from one layer to the next, which corresponds to the $E_{1g}$ mode of the nanosphere dimer.\cite{MuellerFaradayDiscussions2019} The excitation of the plasmon under normal incidence was due to field retardation and the comparatively large nanoparticle diameters (30-50\,nm) used in the experiment. The selection rules for structured light within the quasi-static limit were given by Tables~\ref{TAB:sel} and \ref{TAB:OAM}, which we now extend to the retarded cases. For cylindrical vector beams with $\lambda/2=d+g$, radially polarized light excites the $A_{1u}$ mode of the dimer and azimuthally polarized light the $A_{2u}$ eigenmodes. Observing modified selection rules due to field retardation for structured light requires nanostructures that are a sizable fraction of the focus in the $(x,y)$ plane as well as of the wavelength along $z$. Such systems need a careful design to realize the predicted non-standard excitations. 
\section{Multi-photon processes}\label{SEC:multi} Multi-photon processes occur in nonlinear optics: A material gets excited by absorbing two photons, three incoming photons convert into one photon with three times the frequency, or light gets scattered inelastically in Raman or hyper-Raman processes.\cite{ButcherBook,BoydBook,CardonaBuch} In this section we discuss selection rules of exemplary multi-photon processes when using structured light for excitation.\cite{Bautista2018,Kroychuk2019} We will show how OAMs may be used to induce second-harmonic generation in centrosymmetric oligomers. Such an experiment will verify the transfer of angular momentum between the photons and the oligomer. \begin{table} \begin{tabular}{lccc}\hline\hline point group&linear&radial&azimuthal\\\hline \multicolumn{4}{l}{2-photon absorption}\\\hline $D_{2h}$&$A_{g}$&$A_g$&$A_g$\\ $D_{3h}$&$A_1'\oplus A'_2\oplus E'$&$A_1'$&$A_1'$\\ $D_{4h}$&$A_{1g}\oplus A_{2g}\oplus B_{1g}\oplus B_{2g}$&$A_{1g}$&$A_{1g}$\\ $D_{5h}$&$A_1'\oplus A'_2\oplus E_2'$&$A_1'$&$A_1'$\\ $D_{6h}$&$A_{1g}\oplus A_{2g}\oplus E_{2g}$&$A_{1g}$&$A_{1g}$\\ $D_{\infty h}$&$A_{1g}\oplus A_{2g}\oplus E_{2g}$&$A_{1g}$&$A_{1g}$\\\hline \multicolumn{4}{l}{3-photon absorption}\\\hline $D_{2h}$&$B_{2u}\oplus B_{3u}$&$A_g$&$B_{1g}$\\ $D_{3h}$&$A_1'\oplus A'_2\oplus E'$&$A_1'$&$A_2'$\\ $D_{4h}$&$E_u$&$A_{1g}$&$A_{2g}$\\ $D_{5h}$&$E_1'\oplus E_2'$&$A_1'$&$A_2'$\\ $D_{6h}$&$B_{1u}\oplus B_{2u}\oplus E_{1u}$&$A_{1g}$&$A_{2g}$\\ $D_{\infty h}$&$E_{1u}\oplus E_{3u}$&$A_{1g}$&$A_{1g}$\\\hline \end{tabular} \caption{Selection rules for two- and three-photon absorption for linear polarization and cylindrical vector beams.} \label{TAB:multi_photon} \end{table} In two-photon absorption two incoming photons excite an eigenstate of the system. The group theory treatment is identical to linear absorption except that we consider two perturbations (=photons) with symmetry $\Gamma_\mathcal{H}$. 
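The two-photon entries of Table~\ref{TAB:multi_photon} follow from reducing the product representation with the standard character orthogonality relation $n_i = g^{-1}\sum_k N_k\,\chi_i(k)\,\chi_{red}(k)$. As a cross-check, a minimal sketch for the trimer point group $D_{3h}$ (character table hard-coded; the helper name is ours):

```python
# Reduce the two-photon product representation Gamma_H x Gamma_H into
# irreducible components via n_i = (1/g) * sum_k N_k * chi_i(k) * chi(k).
# Class order in D_3h: E, 2C3, 3C2, sigma_h, 2S3, 3sigma_v (order g = 12).

CLASS_SIZES = [1, 2, 3, 1, 2, 3]
D3H = {
    "A1'":  [1,  1,  1,  1,  1,  1],
    "A2'":  [1,  1, -1,  1,  1, -1],
    "E'":   [2, -1,  0,  2, -1,  0],
    "A1''": [1,  1,  1, -1, -1, -1],
    "A2''": [1,  1, -1, -1, -1,  1],
    "E''":  [2, -1,  0, -2,  1,  0],
}

def reduce_product(rep_a, rep_b):
    """Multiplicity of every D_3h irrep in the product rep_a x rep_b."""
    g = sum(CLASS_SIZES)
    chi = [a * b for a, b in zip(D3H[rep_a], D3H[rep_b])]
    return {name: sum(N * ci * c for N, ci, c in zip(CLASS_SIZES, irrep, chi)) // g
            for name, irrep in D3H.items()}

# Two identical in-plane photons (E') absorbed by the trimer:
print({k: v for k, v in reduce_product("E'", "E'").items() if v})
# {"A1'": 1, "A2'": 1, "E'": 1}
```

The non-zero multiplicities reproduce the $D_{3h}$ two-photon row of Table~\ref{TAB:multi_photon}, $E'\otimes E' = A_1'\oplus A_2'\oplus E'$.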
For simplicity, we assume the two photons to have identical symmetry. Equation~(\ref{EQ:GT}) is rewritten as \begin{equation} \Gamma_f\otimes \Gamma_\mathcal{H}\otimes \Gamma_\mathcal{H}\otimes\Gamma_i\supset\Gamma_1,\label{EQ:2pt} \end{equation} which is equivalent to $\Gamma_f\subset \Gamma_\mathcal{H}\otimes \Gamma_\mathcal{H}$. Equation~\eqref{EQ:2pt} appears at first to be identical to the conditions for Rayleigh scattering in Eq.~\eqref{EQ:Rayleigh}, but in two-photon absorption the initial state $\Gamma_i=\Gamma_1$ and the final state $\Gamma_f$ differ. Table~\ref{TAB:multi_photon} lists the selection rules of the $D_{n h}$ point groups considered here. Cylindrical vector beams always excite the totally symmetric representation of the oligomers in two-photon absorption. Two linearly polarized photons allow exciting states that are forbidden for a single photon, as is standard in this technique. Particularly interesting is the trimer ($D_{3h}$), where the $E'$ representation is active in one- and two-photon excitation. This means that the trimer will produce a second-harmonic generation (SHG) signal. In none of the other oligomers do the dipole-active states contribute to two-photon absorption, which is the standard requirement for SHG activity. However, the absorption of two photons from a cylindrical vector beam should lead to the emission of radially polarized light, which would be extremely interesting to observe. When OAM is added as an additional degree of freedom, SHG under emission of linearly polarized photons may be activated in all oligomers, as we show now. For this we have to allow for photons with different OAM. First, we consider two photons $p1$ and $p2$ with $l_z=0$, linear polarization, and an oligomer belonging to $D_{6h}$. The two photons may combine into a state with $m=\pm2$ ($E_{2g}$ eigenstates) or $m=0$ ($A_{1g}, A_{2g}$). We now change the OAM of $p2$ to $l_z^{p2}=+1$. 
The total angular momentum may add up to $m=3$ ($B_{1u}\oplus B_{2u}$) or $m=\pm1$ ($E_{1u}$). The $E_{1u}$ excitation may decay by emitting a single linearly polarized photon. We find that exciting a hexamer with two photons that have $l_z^{p1}=0$ and $l_z^{p2}=+1$ will give rise to a second-harmonic signal. SHG will be at a maximum if the sum of the two photon energies matches the one-photon transition of the oligomers. This experiment would be particularly interesting to perform, because it would prove the transfer of angular momentum from the photon to the oligomer. Selection rules for other higher-order processes may be derived in a similar manner by reducing the product of the one-photon representations. In Table~\ref{TAB:multi_photon} we list the selection rules for three-photon absorption in oligomers. The representations that are allowed for the emission/absorption by one photon are also reached through three-photon absorption. This means that all oligomers will produce a third-harmonic signal. The selection rules for Raman scattering are identical to those for two-photon absorption; hyper-Raman scattering obeys the selection rules of three-photon absorption, and so forth. A comprehensive group theoretical treatment of surface-enhanced Raman scattering was published by some of us recently.\cite{Jorio2017} \section{Conclusion}\label{SEC:conclusion} We derived the optical selection rules in nanoscale systems excited by linearly polarized and structured light. The nanosystems have extensions that are comparable to the wavelength of light and the focus of a light beam. When excited by structured light, the oligomers experience the varying phase and polarization patterns. We derived the selection rules for absorption and scattering of cylindrical vector beams and light with orbital angular momentum, considering the dipole moment of the electromagnetic field. Structured light allows the excitation of oligomer eigenmodes that are dark/optically forbidden under linear polarization. 
We discussed the changes in the optical spectra for exemplary nanostructures using FDTD simulations of highly symmetric disc oligomers with $n=2-6$ monomers. The radiative and non-radiative decay rates depend systematically on the mode under study as well as on the number of monomers. The non-radiative damping rate falls below the lower bound predicted from the quasi-static approximation,\cite{Wang2006} which needs to be considered when engineering plasmonic structures for plasmon-enhanced spectroscopy and hot-electron applications. Using structured light modifies the selection rules in multi-photon processes. Specifically, we showed that SHG gets activated when using two linearly polarized photons that differ in their OAM by one. Structured light will unlock a rich world of optical excitations in nanoscale oligomers. Such structures may excite molecules and nanomaterials via their optical near fields. We envision near-field absorption as a way to channel structured light to materials excitations. This would unlock novel excitations and spectroscopic techniques in a wide range of physical systems. \begin{acknowledgement} This work was supported by the European Research Council (ERC) within the project DarkSERS (772108) and the Focus Area NanoScale of Freie Universit\"at Berlin. The simulated absorption and scattering spectra of the oligomers are available at the repository REFUBIUM, under identifier http://dx.doi.org/10.17169/refubium-26906. \end{acknowledgement} \begin{suppinfo} Supporting information is provided as a pdf file: Images of the symmetry-adapted eigenvectors in the $D_{3h}, D_{4h}$, and $D_{5h}$ point groups, simulated extinction spectra, table for the correspondence between quantum numbers and irreducible representations in $D_{nh}$ up to $n=6$, tables for the selection rules for OAM beams for the oligomers considered in this work and the general $D_{qh}$ point groups, and details of methods. 
This material is available free of charge via the internet at http://pubs.acs.org. \end{suppinfo}
\section{Introduction} The sixth generation (6G) of wireless cellular systems must cater to radically new services, such as immersive remote presence, holographic teleportation, \ac{CRAS}, \ac{XR}, and digital twins. These bandwidth-intensive applications require a $1000\times$ capacity increase \cite{saad2019vision} compared to what is expected from today's 5G cellular systems. These applications also require multi-purpose wireless functions that could encompass communications, sensing, localization, and control. These requirements can only be attained by augmenting the existing wireless spectrum bands at sub-6 GHz and \ac{mmWave} with the abundant bandwidth offered by a migration towards the higher-frequency \ac{THz} bands \cite{sarieddeen2020overview}. In particular, the \ac{THz} band, namely $0.1-\SI{10}{THz}$\footnote{The terahertz gap consists of the band $0.1-\SI{10}{THz}$, whereas the frequencies in the range $0.1-\SI{0.3}{THz}$ are typically considered to be the sub-THz band. According to the ITU \cite{ITUreport}, the band above 275 GHz exhibits the unique THz properties and is the main part of the THz band. Nonetheless, this article targets the continuum of the sub-THz and THz bands, as a result of the prevailing feasibility of transceiver design at the sub-THz band. For simplicity of nomenclature, we will use the term \ac{THz} to refer to this entire range.}, was the last gap in the spectrum to be bridged and provides a golden mean between radio and optical signals. As such, the late discovery of the \ac{THz} gap and the unknown peculiarities of its behavior delayed the deployment of the \ac{THz} frequency band in real-world wireless networks; however, this status is rapidly changing today \cite{elayan2019terahertz}.\\ \begin{figure*} [t!] 
\begin{centering} \includegraphics[width=.8\textwidth]{Freq_Spec_Edited_Illustrator.pdf} \caption{\small{Illustrative figure showcasing the paradigm shift on the frequency spectrum whereby the sub-$\SI{6}{GHz}$ $-$ \ac{THz} region is being jointly populated by communication and sensing functionalities.}} \label{fig:frequency_spec]} \end{centering} \end{figure*} \indent The \emph{THz gap} results from the inability to produce efficient transceivers and antennas at \ac{THz} frequencies. In particular, at \ac{THz} bands, semiconductor devices fail to effectively convert electrical energy into electromagnetic energy. For example, at these frequencies, electrons cannot travel the distance needed to enable a semiconductor device to work before the polarity of the applied voltage, and with it the direction of electron motion, changes. Thus, on the one hand, these challenges at the transceiver level delayed real-world access to the \ac{THz} frequencies. On the other hand, prior to the emergence of the aforementioned 6G services, \ac{THz} frequencies exhibited more challenges than opportunities given their transceiver design difficulties and their cumbersome channel behavior.\\ \indent Owing to the recent advances in plasmonic devices and graphene-based designs, the aforementioned challenges at the transceiver level are gradually being mitigated \cite{leuthold2015plasmonic}. In contrast to conventional electronic and photonic designs, plasmonic devices rely neither on electrons nor on photons but, instead, on electromagnetic excitation at the metal surface at optical frequencies \cite{elayan2019terahertz}. Similarly, graphene-based designs provide interesting electrical and optical properties that allow transceivers to be tunable, compact, high-speed, and energy-efficient. As these innovations began to materialize, \ac{THz} communication first gained popularity in the realm of nano-networks \cite{lemic2019assessing}. 
Transceiver design in this domain was less challenging given that the generation of hundred-femtosecond pulses at low power demonstrates the feasibility of short-range communication and nano-networks \cite{jornet2014femtosecond}. Connecting nano-machines, nano-sensors, and nano-actuators allows these devices to interact at the same scale as living systems and chipsets, ultimately resulting in the \ac{IoNT}. Thus, the rise of the \ac{IoNT} would potentially yield wireless innovations ranging from interconnecting chip networks, all the way to biosensors monitoring human and organ activity at the \ac{THz} band. Next, we discuss the specific technical enhancements needed to guarantee a successful \ac{THz} deployment for the \ac{IoE}. \subsection{Towards \ac{THz} for 6G and the \ac{IoE}} \indent Recent evolutions at the transceiver design level are ushering in a transition of \ac{THz} communication from its limited \ac{IoNT} application domain to the realms of the \ac{IoE} and the forthcoming wireless 6G systems. Nonetheless, such a transition towards the real-world deployment of \ac{THz} as a means to provide communication and sensing services for \ac{IoE} applications faces many modeling, network analysis, design, and optimization challenges. \emph{First}, the characteristics of \ac{THz} frequencies require an evolution in the wireless system architecture so as to account for the highly varying channel, the short-range nature of the \ac{THz} links, and the dependence on narrow-beam \ac{LoS} links. \emph{Second}, well-designed \ac{THz} architectures, consisting of distributed and heterogeneous \acp{SBS}, necessitate novel approaches to estimate the \ac{THz} channel, maximize the \ac{THz} network coverage, and synchronize the \ac{THz} system resources with other frequency bands. \emph{Third}, emerging 6G services have considerably stringent reliability, latency, and rate requirements.
In turn, meeting these requirements mandates novel real-time sensing, optimization, and scheduling approaches. As a result, a successful transition towards \ac{IoE} \ac{THz} deployment requires a significant rethinking of conventional physical (PHY) layer and networking procedures.\\ \indent In addition to delivering extremely high communication data rates, \emph{the quasi-optical nature of \ac{THz} frequency bands holds promising capabilities for sensing, imaging, and localization functions} that are not yet very well understood. On these grounds, it is worth noting that, as shown in Fig.~\ref{fig:frequency_spec]}, the frequency spectrum is being jointly populated by communication and sensing functionalities in the sub-$\SI{6}{GHz} -$ \ac{THz} region. Effectively, this particular region has become attractive to wireless sensing and communication services as a result of several factors. \emph{First}, in contrast to the customary frequency bands adopted for imaging (e.g., X-rays), this region has non-ionizing radiation. \emph{Second}, in contrast to optical links, the sensing \ac{EM} links have a less intermittent behavior. \emph{Third}, from a communication standpoint, this region offers data rates ranging from moderate to extremely high. As such, based on the properties of this populated region, the lower band (the key players here being the sub-$\SI{6}{GHz}$ and sub-\ac{mmWave} frequency bands) offers moderate data rates and sensing capabilities with longer ranges, higher reliability, and lower resolution. Meanwhile, and of rising interest, the higher band of this region (sub-\ac{THz} and \ac{THz} band) provides high data rates and high-resolution sensing, but at the expense of shorter distances and more intermittent links, as reflected in Fig.~\ref{fig:frequency_spec]}.
Furthermore, deploying the majority of services in the higher band of this region can help alleviate the challenge of spectrum scarcity for wireless communications that is experienced in the sub-6 GHz bands. Thus, \emph{given the particular benefits that can be procured in communications, sensing, and imaging at the \ac{THz} band, jointly deploying such systems allows sharing resources and provides \ac{THz} communications with high-resolution localization and situational awareness.} Such configurations not only provide a higher spectrum efficiency, but also pave the way for novel opportunities stemming from the possibility of performing coordinated sensing and communications. \begin{figure*} [t!] \begin{centering} \includegraphics[width=.85\textwidth]{Comprehensive.pdf} \caption{\small{Illustrative figure showcasing the seven unique defining features of wireless THz systems and their key use cases.}} \label{fig:comprehensive} \end{centering} \end{figure*} \subsection{Prior Works} \indent Recently, a number of surveys and tutorials related to the deployment of \ac{THz} over wireless networks appeared in \cite{sarieddeen2020overview, elayan2019terahertz}, and \cite{petrov2018last, zhang2020beyond, peng2020channel, polese2020toward, sarieddeen2020next, ghafoor2020mac}. The works in \cite{petrov2018last, zhang2020beyond, peng2020channel} primarily focused on the PHY layer or propagation challenges. In fact, while such works discuss key channel models along with the caveats of \ac{THz} propagation, they do not scrutinize the quasi-optical properties of the \ac{THz} frequency band and their significance for the PHY layer challenges and opportunities at \ac{THz}. Meanwhile, the works in \cite{sarieddeen2020overview, elayan2019terahertz}, and \cite{polese2020toward, sarieddeen2020next, ghafoor2020mac} summarize the latest literature findings associated with the deployment of \ac{THz} wireless networks.
Although this important prior art exposes key concepts about \ac{THz} networks, it does not propose the transformative solutions that are needed to deploy realistic \ac{THz} networks effectively. Furthermore, these works do not articulate the challenges of adopting known channel estimation techniques and network optimization methods when dealing with the high uncertainty of the \ac{THz} channel. As such, they do not outline the breakthroughs required to defy the non-stationary and time-varying \ac{THz} channel so as to provide initial access, maximize the \ac{THz} coverage, and enable a seamless network optimization. Moreover, although some of these works (e.g., \cite{sarieddeen2020overview} and \cite{sarieddeen2020next}) recognize the importance of the \ac{THz} sensing capability, they fail to highlight the challenges, opportunities, and the prospective techniques needed to deploy fully-fledged joint sensing and communication systems at \ac{THz} frequencies. \subsection{Contributions} The main contribution of this article is a novel and holistic vision that articulates the unique role of \ac{THz} in next-generation wireless systems. We first examine the fundamentals of \ac{THz} frequency bands and their propagation characteristics. By leveraging this fundamental examination of the \ac{THz} properties, we identify and provide a comprehensive treatment of the \emph{seven unique features} that will define future \ac{THz} wireless systems. Subsequently, we examine the behavior and needs of each feature, and we propose opportunistic techniques that harvest the peculiar benefits of \ac{THz}. These benefits maximize the overall system performance and could potentially elevate \ac{THz} wireless systems to a new level.
The seven unique characteristics that we envision to be the defining features of future THz-based wireless systems are shown on the left-hand side of Fig.~\ref{fig:comprehensive} and discussed next: \subsubsection{Quasi-opticality of the \ac{THz} band} The \ac{EM} properties of the \ac{THz} band, by virtue of its quasi-opticality, lead to distinct communication challenges. Most importantly, the molecular absorption effect is seen as a limiting factor to the propagation of \ac{THz} waves. Nonetheless, we will underscore how this quasi-opticality is a double-edged sword for the foreseen joint communication and sensing paradigm. For instance, the molecular absorption opens up various sensing opportunities that are not found at other frequency bands. However, this comes at the expense of shorter communication ranges. We will also highlight the other sensing functionalities offered by this quasi-opticality. \subsubsection{\ac{THz}-tailored architectures} Deploying wireless \ac{THz} systems requires accounting for a higher density of \acp{SBS}, a shorter communication range, the ability to deliver multiple functions (sensing, communication, imaging), and a set of unique channel conditions. These factors call for adopting more opportunistic \ac{THz}-tailored network architectures that can exploit the advantages of \ac{THz} systems. As such, we particularly emphasize the importance of adopting cell-less architectures, as well as their accompanying challenges and opportunities. Furthermore, we highlight the pivotal role of \acp{RIS} in \ac{THz} networks \cite{chaccour2020risk, huang2020holographic, jung2020performance}, their holographic capability due to the small \ac{THz} footprint, their massive sensing elements, and the near-field communication opportunities and challenges.
\subsubsection{Synergy of \ac{THz} with lower frequency bands} Communication systems at \ac{THz} frequencies will be deployed in a radio spectrum that is already highly populated with sub-$\SI{6}{GHz}$ and \ac{mmWave} technologies. In that sense, \ac{THz} systems are expected to have a certain level of synergy (cooperation and seamless coexistence) with the lower frequency band wireless technologies. For instance, certain use cases, such as immersive remote presence, could opportunistically use all available wireless frequencies to deliver the target end-to-end experience. Thus, we underline the strategies that enable the coexistence of \ac{THz} frequencies with \ac{mmWave} and sub-$\SI{6}{GHz}$ bands, services, and infrastructure. We further point out how this synergy across the different frequency bands opens the door for exciting opportunities for both communication and sensing functionalities. \subsubsection{Joint sensing and communication systems} Owing to the quasi-optical nature of \ac{THz} bands, a harmonious fellowship of \emph{high-rate communications and high-resolution sensing} can be formed. Consequently, we accentuate the role of joint communication and sensing systems for future \ac{THz} wireless networks. Particularly, we emphasize the effectiveness of mutual feedback between the sensing and communication functionalities that can improve the overall system performance. Naturally, adopting such configurations can help transform wireless networks into a new generation of \emph{versatile} systems that can offer multiple functions to their users, thus opening the door for novel services and use cases at the \ac{THz} band. \subsubsection{PHY-layer procedures} The spatially sparse and low-rank \ac{THz} channel imposes distinct challenges on PHY-layer procedures such as wireless channel estimation and initial access.
To overcome these challenges, we propose novel channel estimation techniques that bring to light the role of generative learning networks in predicting the full \ac{THz} \ac{CSI}. Furthermore, we highlight the role of sensing in ensuring an enhanced initial network access for \ac{THz} devices. \subsubsection{Spectrum access techniques} Conventional access schemes adopted in previous wireless generations cannot be directly applied to \ac{THz} frequency bands due to hardware constraints and the unique nature of the \ac{THz} propagation environment. Consequently, we examine possible spectrum access techniques that are suitable for \ac{THz} systems. Particularly, we discuss the benefits that can be reaped from the concept of \ac{OAM} given the quasi-optical nature of \ac{THz} systems. We also explore the role of \ac{NOMA} in \ac{THz} systems. In fact, we will see how the synergy between \ac{NOMA} and \ac{THz} bands is strengthened by the natural adoption of \ac{RIS} architectures at those frequencies. \subsubsection{Real-time network optimization} 6G services such as \ac{XR}, holography, and digital twins necessitate an \ac{E2E} co-design of communication, control, sensing, and computing functionalities which to date have been attempted with limited success. Nevertheless, the inherent coupling of communication and sensing in \ac{THz} makes it plausible to hypothesize that a joint co-design of the aforementioned aspects will be feasible and necessary. In that sense, we scrutinize the networking challenges particular to \ac{THz} systems. Subsequently, we examine novel algorithmic approaches and techniques that can be used to optimize \ac{THz} networks thus allowing them to meet the stringent requirements of beyond 5G applications. 
In particular, we explore the potential of \ac{AI}, particularly the emerging concepts of generalizable and specialized learning, as well as meta-learning, in optimizing the resources of the highly varying and non-stationary \ac{THz} channel.\\ \indent After providing a panoramic exposition of the seven defining features of \ac{THz} systems, we conclude with an extensive overview of the prospective use cases. In particular, we examine the challenges and open problems of promising \ac{THz}-enabled use cases. We further underscore different ways to exploit the aforementioned defining features in each application. The four major 6G use cases for \ac{THz} systems are shown on the right-hand side of Fig.~\ref{fig:comprehensive}.\\ \indent The rest of this paper is organized as follows. The fundamentals of \ac{THz} frequency bands are discussed in Section II. Then, the seven defining features of \ac{THz} wireless systems are developed in the subsequent sections. In Section III, the quasi-optical nature of the band is discussed. Section IV introduces our vision for \ac{THz}-tailored network architectures. Section V exposes the synergy of the \ac{THz} band with lower frequency bands. Section VI examines the possibility of performing joint sensing and communication at the \ac{THz} band. Section VII introduces the channel estimation and initial access processes at the \ac{THz} band. Spectrum access techniques for \ac{THz} bands are proposed in Section VIII. Real-time network optimization approaches are presented in Section IX, and key \ac{THz} use cases are discussed in Section X. Finally, conclusions are drawn in Section XI. \renewcommand{\arraystretch}{0.92} \begin{table*}[t!]
\caption{List of our main acronyms.} \centering \begin{tabular}{||c | c|| c | c ||} \hline \bf{Acronym} & \bf{Description} & \bf{Acronym} & \bf{Description}\\ \hline THz & Terahertz & mmWave & Millimeter wave\\ \hline SBS & Small base station & LoS & Line-of-sight\\ \hline IoE & Internet of Everything & XR & Extended reality \\ \hline CRAS & Connected robotics and autonomous systems & NTN & Non-terrestrial network \\ \hline QoPE & Quality of physical experience & QoS & Quality of service\\ \hline E2E & End-to-end & EM & Electromagnetic\\ \hline GAN & Generative adversarial network & RL & Reinforcement learning \\ \hline UE & User equipment& RIS & Reconfigurable intelligent surface\\ \hline ML & Machine learning & VR & Virtual reality\\ \hline AR & Augmented reality & MR & Mixed reality\\ \hline AoI & Age of information & HRLLC & High-rate and high-reliability low latency communications \\ \hline IoNT & Internet of nano-things & AoSA & Array of subarrays \\ \hline OFDM& Orthogonal frequency division multiplexing & OFDMA & Orthogonal frequency division multiple access\\ \hline LEO & Low earth orbit & AI& Artificial intelligence\\ \hline SLAM & Simultaneous localization and mapping & IS & Information shower\\ \hline OAM & Orbital angular momentum & NOMA & Non-orthogonal multiple access\\ \hline UAV & Unmanned aerial vehicle& MIMO & Multiple-input and multiple-output\\ \hline MAC & Medium-access-control& BS & Base station\\ \hline CSI & Channel state information & TDD & Time division duplex\\ \hline OMA & Orthogonal multiple access & RF & Radio frequency \\ \hline \end{tabular} \end{table*} \section{Fundamentals of THz frequency bands} The key advantage of communications at \ac{THz} frequencies compared to other frequency bands is the availability of an abundant bandwidth. Nevertheless, migrating towards higher frequencies tends to be naturally limited by a shorter communication range and an intermittent (on/off) link behavior.
At \ac{THz} frequencies, these phenomena are a result of three main characteristics: a) high path and reflection losses, b) sporadic availability of \ac{LoS} links, and c) molecular absorption. The losses in a) are naturally observed when migrating towards higher carrier frequencies. In fact, these losses are similar to the ones occurring at \ac{mmWave} frequencies, \emph{but they are more pronounced at \ac{THz} frequencies}. Nonetheless, the molecular absorption and limited narrow \ac{LoS} links have key properties that are peculiar to the \ac{THz} band. These unique \ac{THz} characteristics will be discussed next. \subsection{Pencil Beam LoS Links and their Repercussions} \ac{THz} communication links are \ac{LoS} dominant. In fact, moving towards higher frequencies widens the power gap between the \ac{LoS} and the \ac{NLoS} components. Particularly, compared to the \ac{LoS} link, the powers of the first-order and second-order reflected paths are attenuated by $5$--$10$~dB and by more than $\SI{15}{dB}$, respectively, as shown in \cite{lin2016terahertz} for $\SI{300}{GHz} - \SI{1}{THz}$. Furthermore, on the one hand, high attenuation losses require focusing the power of the link within a very narrow beam. On the other hand, the small footprint of \ac{THz} antennas enables transceivers to sharpen their beams and achieve beamforming gains. Thus, extremely narrow beam \ac{LoS} links, namely \emph{pencil beam \ac{LoS} links} \cite{guerboukha2020efficient}, could alleviate the attenuation losses, equip \ac{THz} with natural interference mitigation capabilities, and pave the way for overcoming the short communication range. Nonetheless, the use of narrow beams leads to new challenges related to beam tracking, beam alignment, and mobility management.\\ \indent In fact, pencil beams can be easily disrupted by blockages, a sudden deep fade, or a slight beam misalignment following any change in a user's direction.
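As a rough back-of-the-envelope sketch of why such pencil beams become feasible at \ac{THz} frequencies (all numbers below are illustrative assumptions, not measurements from this article), the following snippet counts how many half-wavelength-spaced elements fit in a fixed aperture and estimates the resulting broadside half-power beamwidth and array gain of an idealized uniform linear array:

```python
import math

C = 3e8  # speed of light (m/s)

def pencil_beam_sketch(f_hz: float, aperture_m: float) -> dict:
    """Idealized ULA numbers at carrier f_hz with half-wavelength spacing."""
    lam = C / f_hz                      # wavelength (m)
    d = lam / 2                         # element spacing (m)
    n = round(aperture_m / d)           # elements fitting in the aperture
    hpbw_rad = 0.886 * lam / (n * d)    # broadside half-power beamwidth
    gain_db = 10 * math.log10(n)        # ideal array gain
    return {"elements": n,
            "hpbw_deg": math.degrees(hpbw_rad),
            "gain_db": gain_db}

# The same 1 cm linear aperture at 300 GHz vs. 30 GHz (mmWave):
thz = pencil_beam_sketch(300e9, 0.01)   # ~20 elements, ~5 degree beam
mmw = pencil_beam_sketch(30e9, 0.01)    # ~2 elements, much wider beam
```

Under these assumptions the same centimeter-scale aperture hosts an order of magnitude more elements at \ac{THz}, which is exactly what yields the pencil-beam behavior and its accompanying alignment sensitivity.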
Three types of blockages arise in \ac{THz} systems: static (buildings, trees, etc.), dynamic (neighboring users), and self-blockages. Static blockages are deterministic and can, in general, be reasonably modeled and neglected for indoor THz networks. However, predicting dynamic and self-blockage depends on human behavior, which varies based on the type of the surrounding environment. For instance, pedestrians on the road behave differently from \ac{VR} users in an indoor area. As such, a blockage model that correctly represents an intended scenario needs to faithfully capture the unique features of the considered environment, but it must not be limited to a single environmental model. Subsequently, one major challenge in modeling blockage is the \emph{generalizability} to a broad range of circumstances.\\ \indent One could naturally wonder why the use of existing solutions that work well for blockage mitigation at \ac{mmWave} frequencies \cite{zarifneshat2017protocol} is not possible at \ac{THz}. The shortcoming of such existing solutions can be attributed to three key factors, as explained next. The communication range at \ac{mmWave} is $10\times$ larger than the \ac{THz} range, \ac{THz} \ac{LoS} beams are significantly narrower, and \ac{THz} \ac{LoS} links are less penetrative. Hence, novel and more accurate blockage mitigation methods need to account for the unique characteristics of \ac{THz} bands. For instance, blockages must be predicted within a margin of error smaller than the radii of pencil beams. Thus, there is a need for highly accurate prediction models that can characterize micro-mobility and minute changes in users' orientation. Therefore, guaranteeing a sustainable \ac{LoS} \ac{THz} link is strongly intertwined with the available information on the mobility and range of motion of users. This is particularly important for future applications like holography and XR that require continuously stable THz links.
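A minimal stochastic sketch of such blockage statistics (the exponential \ac{LoS} model and the value of $\beta$ are standard textbook assumptions, not results from this article) is the Boolean blockage model, in which the probability that a pencil-beam link of length $d$ remains unblocked decays exponentially with distance:

```python
import math

def los_probability(d_m: float, beta: float) -> float:
    """Boolean blockage model: P_LoS(d) = exp(-beta * d).
    beta (1/m) lumps together blocker density and mean blocker size."""
    return math.exp(-beta * d_m)

def expected_los_links(distances_m, beta: float) -> float:
    """Expected number of unblocked pencil-beam links over a set of UEs."""
    return sum(los_probability(d, beta) for d in distances_m)

# Illustrative indoor sketch: even modest blocker density (beta = 0.1/m)
# quickly erodes LoS availability with range.
p_near = los_probability(5.0, beta=0.1)    # short link, mostly unblocked
p_far = los_probability(20.0, beta=0.1)    # longer link, mostly blocked
```

The point of the sketch is the generalizability concern raised above: a single $\beta$ captures only one environment, whereas pedestrians, \ac{VR} users, and self-blockage would each call for different, possibly learned, parameterizations.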
Indeed, in \cite{chaccour2020can}, we showed that guaranteeing an \ac{LoS} link is critical for ensuring immersive \ac{XR} services at \ac{THz} bands.\\ \indent To alleviate the short-range nature of \ac{THz} communication links that is caused by the inevitable molecular absorption effect and high path loss, \emph{\ac{THz} networks could be densely deployed}. Nonetheless, given the sensitivity of beam alignment, a dense deployment could eventually create increased \ac{LoS} interference as well as frequent handovers. As a matter of fact, dense networks allow users to be in close proximity to their serving \ac{SBS} and tend to have better \emph{best-case scenarios}. Nonetheless, small cells could lead to an intermittent association between mobile users and their respective \acp{SBS}, while also jeopardizing cell-edge users' performance, thus leading to more pronounced \emph{worst-case scenarios}. \subsection{Molecular Absorption Effect} At \ac{THz} frequencies, path and reflection losses are accompanied by yet another physical phenomenon detrimental to communications, the so-called \emph{molecular absorption}. This phenomenon will not only degrade the received power, but it can also intensify the noise. Thus, it introduces \emph{molecular absorption noise} in addition to the thermal noise observed at lower frequency bands. In fact, the molecular absorption effect is observed at all frequencies; nonetheless, it only has a pronounced effect at \ac{THz} frequencies, which is why it was often neglected at lower frequencies (\ac{mmWave} and sub-6 GHz)~\cite{rappaport20175g}. In fact, compared to \ac{THz}, the \ac{mmWave} frequency band exhibits richer multi-paths, has \ac{NLoS} links with a higher power, and requires wider \ac{LoS} beams. Thus, performing beam alignment and mobility management at \ac{mmWave} bands is naturally less complex than at \ac{THz} frequencies.
Furthermore, molecular absorption results from the difference in energy between the higher and lower energy states of the molecules of the physical medium through which the signal propagates. At the \ac{THz} band, molecular absorption is mainly a byproduct of water vapor and oxygen in the air. Thus, changes in meteorological conditions will lead to drastic effects on the air composition and the molecular absorption. This, in turn, makes \ac{THz} communications more suited to indoor scenarios due to a lower water vapor percentage.\\ \indent Although molecular absorption increases with the frequency, this increase is neither smooth nor monotonic. In fact, there exist spectral windows at certain carrier frequencies where a dip in the molecular absorption baseline is observed. Hence, some works such as \cite{rajatheva2020scoring} and \cite{rappaport2019wireless} argue that these particular windows could be targeted to benefit from their lower absorption coefficient. Nevertheless, the air composition changes based on meteorological conditions. In fact, given that the \ac{THz} molecular absorption is highly correlated with the air's water vapor content, different air humidity levels will contribute to different molecular absorption levels. Hence, these windows can be potentially unreliable given their variability with the air composition. Therefore, targeting these frequency windows makes more sense in controlled and indoor environments. Moreover, the molecular absorption baseline increases with the carrier frequency. Subsequently, at higher frequencies, more abundant bandwidth can be exploited. In turn, \emph{the molecular absorption and the large \ac{THz} bandwidth are two opposing forces.} The former jeopardizes performance by incurring more losses and noise, while the latter improves performance by boosting the capacity of the system.
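These two opposing forces can be illustrated with a toy link-budget sketch (free-space spreading combined with Beer--Lambert molecular absorption $e^{-kd}$; the transmit power, antenna gains, and absorption coefficients below are assumed purely for illustration):

```python
import math

def rx_power_w(tx_power_w, gain_lin, f_hz, d_m, k_abs):
    """Friis free-space spreading combined with Beer-Lambert molecular
    absorption exp(-k_abs * d); k_abs (1/m) is an assumed coefficient."""
    lam = 3e8 / f_hz
    spreading = (lam / (4 * math.pi * d_m)) ** 2
    return tx_power_w * gain_lin * spreading * math.exp(-k_abs * d_m)

def capacity_bps(bw_hz, p_rx_w, noise_psd_w_per_hz=4e-21):
    """Shannon capacity with thermal noise only; molecular absorption
    noise is ignored in this simple sketch."""
    snr = p_rx_w / (noise_psd_w_per_hz * bw_hz)
    return bw_hz * math.log2(1 + snr)

# Wider bandwidth boosts capacity, while a larger absorption coefficient
# (e.g., humid air) erodes the received power at range:
p_dry = rx_power_w(1.0, 1e4, 300e9, 10, k_abs=0.01)
p_humid = rx_power_w(1.0, 1e4, 300e9, 10, k_abs=0.5)
c_dry = capacity_bps(10e9, p_dry)
c_humid = capacity_bps(10e9, p_humid)
```

Varying `k_abs` here plays the role of the humidity-dependent absorption discussed above: the capacity term grows with bandwidth, while the exponential loss term eats into it with distance and air composition.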
For example, in an indoor \ac{AR} application, we showed in \cite{chaccour2020ruin} that an increase in the bandwidth could lead to fresher \ac{AR} content and more reliable communication on average. Nonetheless, operating at higher frequencies led to a more exacerbated worst-case user scenario (in terms of content freshness) due to the molecular absorption effect.\\ \indent From a communication perspective, molecular absorption has a non-monotonic detrimental effect on the \ac{THz} links. Effectively, it adds a margin of noise, limits the communication range, and constrains outdoor opportunities. Nevertheless, the molecular absorption effect is a double-edged sword that provides several gains to the \ac{THz} sensing mechanism. In fact, the quasi-optical nature of the \ac{THz} band and its abundant bandwidth lead to additional sensing functionalities that will be discussed next. The quasi-optical nature of the \ac{THz} band is in fact the first identified THz characteristic among the seven unique defining features that we envisioned in Fig.~\ref{fig:comprehensive}. \begin{figure*} [t!] \begin{centering} \includegraphics[width=.75\textwidth]{Sensing_Fig.pdf} \caption{\small{Illustrative figure showcasing the different contexts of \ac{THz} sensing.}} \label{fig:sensing} \end{centering} \end{figure*} \section{Quasi-Opticality of the THz Band} \ac{THz}'s distinct \ac{EM} nature allows it to be uniquely suited for use in \emph{wireless environmental sensing}. The first factor that unravels sensing capabilities at \ac{THz} frequencies is the molecular absorption effect, as shown on the left-hand side of Fig.~\ref{fig:sensing}. In fact, although molecular absorption could hinder communication at \ac{THz} bands, this particular characteristic significantly improves \ac{THz}'s sensing capability. Effectively, molecular absorption allows \ac{THz} to have \emph{electronic smelling} capabilities, i.e.,
the interactions of the \ac{THz} wave with a gaseous medium result in a specific molecular absorption loss. Such a technique is known as \emph{rotational spectroscopy} \cite{sharma2016200}, and it has a high level of specificity, sensitivity, and determination of concentration. This is because this technique relies on measuring the energies of the transitions between the quantized rotational states of molecules in gases. Moreover, if more complex substances or fluid dynamics need to be identified, the sensing measurements must be further processed. Therefore, after some manipulation, the measurement data can be fed to a particular neural network (based on the desired identification or classification task) to learn more entangled patterns. The implementation of such learning mechanisms for sensing is an open research area. This sensing potential has already materialized in environmental nano-sensors for pollution monitoring. \ac{THz} frequency bands are capable of sensing the chemical compounds present in the environment at densities as low as one part per billion, as shown in \cite{tekbiyik2019terahertz}. Hence, examining the interactions underlying the molecular absorption paves the way for many revolutionary sensing applications that require an extremely high accuracy and precision. As a matter of fact, beyond environmental sensors, this functionality can be beneficial to the medical field in procedures such as cancer diagnosis. For instance, \ac{THz} radiation has the potential to non-invasively detect stem cells \cite{yu2019medical} and lead to a major leap in diagnostics. \\ \indent Meanwhile, for high-resolution environmental sensing\footnote{It is important to note that the terms imaging and sensing have been used interchangeably in the literature to denote personal radars and the ability to illuminate objects and scenes with \ac{THz} waves to capture their reflection and angular precision.
For consistency, given that \emph{sensing measurements} are being procured, throughout this work, this functionality will be referred to as sensing.} at a larger scale, the usage of large arrays coupled with \ac{THz}'s abundant bandwidth brings forth additional key sensing capabilities. These capabilities include high-resolution \ac{THz} radar, accurate \ac{UE} localization \cite{sarieddeen2020next}, precise 3D mapping (e.g., acting as a personal radar in indoor areas), and minute device/user orientation sensing. \ac{THz}'s different sensing contexts and capabilities are illustrated on the right-hand side of Fig.~\ref{fig:sensing}. In all of these processes, sensing parameters from the transmitted or the reflected \ac{THz} \ac{EM} wave are measured and examined. These parameters include, but are not limited to: time delay, angle-of-arrival, angle-of-departure, Doppler frequency, and physical patterns of detected objects \cite{rahman2020enabling}. Subsequently, the sensing mechanism at \ac{THz} frequencies can provide active \acp{BS} with \emph{situational awareness}, i.e., instantaneous 3D location and orientation information of nearby \acp{UE} at the centimeter level. This situational awareness provides a new breed of data that can be exploited in data-driven \ac{THz} networks for effective beam management, mobility, and blockage avoidance. Similar to the \ac{THz} band's communication functionality, which exhibits a high rate but is limited by a short range, \ac{THz}'s sensing capability has a high resolution but its detection ranges are short. Thus, sensing at \ac{THz} frequencies is well suited for capturing subtle user movements by characterizing the range of motion of \acp{UE} within errors in the centimeter range, i.e., micro-mobility and micro-orientation changes.
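The centimeter-level sensing accuracy quoted above ties directly to the available bandwidth: the classic radar range-resolution formula $\Delta r = c/(2B)$ already yields centimeter-scale resolution for multi-GHz \ac{THz} bandwidths (the bandwidth values below are assumptions chosen for illustration):

```python
C = 3e8  # speed of light (m/s)

def range_resolution_m(bandwidth_hz: float) -> float:
    """Classic radar range resolution: delta_r = c / (2 * B)."""
    return C / (2 * bandwidth_hz)

# An assumed 400 MHz mmWave allocation vs. 10 GHz of THz bandwidth:
res_mmwave = range_resolution_m(400e6)   # ~0.375 m
res_thz = range_resolution_m(10e9)       # 0.015 m, i.e., 1.5 cm
```

This back-of-the-envelope comparison shows why the abundant \ac{THz} bandwidth, independently of the large arrays, is itself a key enabler of centimeter-level situational awareness.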
This is clearly useful in the context of many 6G services, including \ac{XR} and holography (discussed in Section X-A), that necessitate a continuous \ac{SLAM} input to guarantee an immersive experience. \\ \indent Thus, on the one hand, wireless \ac{THz} systems can provide communication services with unprecedented high rates that are challenged by the \ac{THz} band's unique channel impediments. On the other hand, \ac{THz} systems can be used as a high-resolution sensing facility limited by a short range. Effectively, the quasi-optical nature of the \ac{THz} band brings to light a fellowship between \emph{high-rate communication and high-resolution sensing} capabilities. These capabilities are the core assets of \ac{THz} systems, and they can eventually make them superior to the lower \ac{RF} bands as well as the higher optical bands. In fact, \emph{each blockage of a communication link is a sensing opportunity}. Hence, to ensure a strategic deployment of such integrated communication and sensing functionalities, we next discuss effective architectures for wireless \ac{THz} systems. \section{How to Strategically Deploy THz Networks?} Migrating towards higher frequencies such as \ac{THz} overcomes the bandwidth scarcity challenge, but it leads to highly varying and uncertain channels. For instance, the distinct physical characteristics of \ac{THz} outlined in Sections II and III mandate a novel network architecture capable of sustaining the traffic capacity and connection density requirements, as well as the multi-function nature of \ac{THz} systems. In particular, it is important to design the underlying architecture so as to take advantage of the spectrum supremacy of \ac{THz} frequency bands and enable joint communication and sensing while mitigating their uncertain channel and intermittent nature. \begin{figure*} [t!]
\begin{centering} \includegraphics[width=.8\textwidth]{Architecture_Comparison_edited.pdf} \caption{\small{ Illustrative figure showcasing \ac{THz} enabling architectures and the synergy with lower frequency bands. The deployment of \ac{THz} wireless networks should bring about novel cell-less architectures, rely on cooperative \acp{RIS}, and synergistically coexist with lower frequency bands.}} \label{fig:architecture_thz} \end{centering} \end{figure*} \subsection{From Ultra Massive MIMO to Cell-Free Massive MIMO} \subsubsection{Motivation} \indent Leveraging the small \ac{THz} antenna footprint allows us to embed ultra massive \ac{MIMO} systems within a few square millimeters, thus providing pencil beamforming in a scalable and efficient way \cite{sarieddeen2020next}. Furthermore, adopting an \ac{AoSA} architecture allows dividing large antenna arrays into multiple sub-arrays, which helps in improving not only the beamforming gain but also the energy efficiency \cite{faisal2019ultra}. Indeed, massive \ac{MIMO} was a staple of 5G systems, and, similarly, in beyond 5G systems, ultra-massive \ac{MIMO} can potentially provide \ac{THz} communications with improved beamforming, energy efficiency, and multiplexing benefits. Nonetheless, the true success of massive \ac{MIMO} at sub-6 GHz was a result of channel hardening and favorable propagation. These phenomena occur with Rayleigh fading channels having \ac{i.i.d.} channel coefficients. However, \ac{THz} frequency bands are dominated by \ac{LoS} links and characterized by very sparse and low-rank channels. In fact, the number of \ac{NLoS} links decreases as we increase the carrier frequency of operation. Also, while a few \ac{NLoS} links exist in the channel matrix, capitalizing on them becomes ineffective due to their weak received power at the \ac{UE}. Hence, as a result of the peculiarity of the \ac{THz} channel, our ability to leverage these attractive phenomena of massive \ac{MIMO} is questionable.
For instance, when deploying massive \ac{MIMO} at lower frequency bands, the \emph{energy efficiency-capacity} tradeoff is an important metric in the system design. Primarily, this tradeoff is determined by the \ac{MIMO} technique used: On the one hand, spatial multiplexing is capable of increasing the spectral efficiency linearly with the number of transmit antennas; however, this comes at the expense of a higher energy consumption and complexity. On the other hand, spatial modulation increases the spectral efficiency only by the base-two logarithm of the number of transmit antennas. However, spatial modulation enables a system with a higher energy efficiency and a lower complexity and cost, given that only one transmit antenna is active at a time. Meanwhile, at \ac{THz} frequency bands, the dynamics of this tradeoff are slightly different. First, \ac{THz} frequency bands naturally provide a high data rate, and thus the need to expand the capacity and spectral efficiency further is less pressing than at lower frequency bands. Second, given that \ac{THz} bands have a small antenna footprint, the number of antennas can be made very large at low cost \cite{sarieddeen2019terahertz}. As such, spatial modulation's system efficiency is higher at \ac{THz} frequency bands. Nonetheless, it is worth mentioning that hardware impairments need to be considered in the analysis of spatial modulation at \ac{THz} frequency bands, as done in \cite{mao2020spatial}. Indeed, the works in \cite{sarieddeen2019terahertz} and \cite{mao2020spatial} should be followed by research that investigates ultra-massive spatial modulation \ac{MIMO} at \ac{THz} frequency bands under non-idealized channel conditions and assumptions.\\ \indent Furthermore, at \ac{THz} frequencies, network densification with massive \ac{MIMO} small cells is a necessary technique to ensure seamless coverage despite the short communication range.
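The multiplexing-versus-modulation contrast above reduces to a simple scaling law, sketched below (illustrative only; actual gains depend on SNR, channel rank, and hardware impairments):

```python
import math

def spatial_multiplexing_streams(n_tx: int, n_rx: int) -> int:
    """Spatial multiplexing: spectral efficiency grows linearly with
    the number of parallel streams, bounded by min(n_tx, n_rx)."""
    return min(n_tx, n_rx)

def spatial_modulation_index_bits(n_tx: int) -> float:
    """Spatial modulation: the active-antenna index conveys only
    log2(n_tx) extra bits per channel use (one antenna active at a time)."""
    return math.log2(n_tx)

# With 1024 antennas the gap is stark: 1024 streams vs. 10 index bits.
# At THz, where raw rate is abundant, the energy and complexity savings
# of the logarithmic scheme can outweigh the lost linear scaling.
```

This is why the tradeoff "flips" at \ac{THz}: the linear term is no longer the binding constraint, so the energy-efficient logarithmic scheme becomes attractive.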
Subsequently, this densification raises multiple challenges such as intermittent connectivity of cell-edge users, increased inter-cell interference, and significant overhead with global \ac{CSI}. Here, we note that, although massive \ac{MIMO} solutions have been proposed for \ac{mmWave} networks \cite{mumtaz2016mmwave} and \cite{rappaport2017overview}, these solutions are not directly applicable to the \ac{THz} frequency bands due to the poorer multi-path propagation and the sparser, lower-rank \ac{THz} channels. \subsubsection{Opportunities} \indent Given the challenges of using dense \ac{THz} massive \ac{MIMO} networks and the fact that \ac{THz} cannot reap all the benefits of ultra massive \ac{MIMO}, we naturally pose the following question: \emph{Can we engineer an opportunistic, \ac{THz} tailored architecture that reaps more benefits from THz's disposition?}\\ \indent An evident answer to this question would be to untie the architecture from cellular boundaries. In that sense, we can think of leveraging the concept of cell-free massive \ac{MIMO} \cite{ngo2017cell}, which is viewed as an effective approach to address the challenges brought by dense massive \ac{MIMO} small cells. Cell-free massive \ac{MIMO} could, in principle, provide a uniform experience across all users and improve personalized \ac{QoE} without jeopardizing the experience of cell-edge users. As such, it allows us to close the gap between best-case and worst-case performance. This is particularly beneficial for \ac{THz} systems given the considerable gap between the best-case and worst-case user performance.\\ \indent We now recall that cell-free massive \ac{MIMO} only requires local \ac{CSI} at each \ac{BS}, in contrast to the global \ac{CSI} needed in massive \ac{MIMO}. This, in turn, could significantly reduce the overhead and improve the reliability and latency at the \ac{THz} band. As such, this allows mitigating one of the challenges pertaining to the \ac{THz} channel estimation process.
However, as explained in Section VII, this process suffers from multiple other challenges that are independent of the architecture adopted.\\ \indent Furthermore, in cell-free massive \ac{MIMO}, one can exploit the concept of user-centric clustering to constrain the number of \acp{UE} that can be served per \ac{BS} without creating cell boundaries. Such a clustering approach consists of deploying dynamic and possibly overlapping clusters of \acp{BS} based on the needs of the user, as shown in Fig.~\ref{fig:architecture_thz}. In dense \ac{THz} deployments, clustering can potentially suppress inter-cell interference and connect multiple cooperative \acp{BS} to a given user. This is uniquely beneficial for \ac{THz} systems given their short range of communication, their intermittent links, and their high likelihood of handover failures. As such, having multiple active links enhances the \ac{THz} link reliability and continuity. Clearly, cell-free massive \ac{MIMO} can potentially improve the availability of \ac{LoS} \ac{THz} links, mitigate interference, as well as reduce frequent handovers, handover failures, and \ac{CSI} overhead. \subsubsection{Challenges} \indent \\$\bullet$ \emph{Scalability:} Despite the aforementioned benefits of cell-free massive \ac{MIMO} for \ac{THz} systems, the scalability of cell-free architectures remains a partially solved problem. In \cite{interdonato2019scalability}, the authors addressed some system scalability challenges of cell-free massive \ac{MIMO} by proposing a scheme in which a user is served by all cell-centric clusters related to a given user-centric cluster. However, such solutions are still at their nascent stage, and more realistic channel models that include channel correlations and consider multi-antenna \acp{UE} are needed. Furthermore, user-centric cell-free massive \ac{MIMO} requires a cooperative, dynamic, and time-variant clustering mechanism, which is another key challenge.
When the size of the network grows, one could investigate the open problem of providing a scalable network cooperation to these clusters.\\ $\bullet$ \emph{Coherent Processing:} The joint coherent processing among \acp{BS} will be highly demanding in terms of rate and computation; yet, the demands for high capacity and high data rates can be met given the operation at \ac{THz} frequencies. Nevertheless, this process could still lead to a significant amount of processing and signaling overhead (e.g., power control, synchronization, pilot assignment) that will be needed to reliably exchange \ac{CSI}. Overcoming such challenges requires novel channel estimation techniques that are designed to cope with both the cell-free architecture as well as the highly varying \ac{THz} channel simultaneously. These methods will be explained in Section VII. \subsection{Cooperative Reconfigurable Intelligent Surfaces} \subsubsection{Motivation} Since the \ac{THz} performance is contingent on the availability of \ac{LoS} connections \cite{chaccour2020risk} and \cite{chaccour2020can}, it would be beneficial if one could customize the wireless propagation environment in a way to guarantee a continuous \ac{LoS} link. Here, the concept of \acp{RIS} can be exploited to carefully re-engineer the propagation environment. \acp{RIS} are surfaces of electromagnetic material that can be electronically controlled by integrated circuits \cite{basar2019wireless}. In fact, \acp{RIS} can potentially transform wireless environments into software-defined platforms, thus providing enhanced beam and mobility management over the uncertain \ac{THz} channel. To increase the level of control exerted, \emph{multiple \acp{RIS} could be used to cooperatively sense the environment and connect with narrow \ac{THz} beams to provide continuous and non-intermittent communication links to users} as shown in Fig.~\ref{fig:architecture_thz}.
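As a minimal sketch of how an RIS re-engineers the propagation environment, the snippet below computes per-element phase shifts so that all reflected paths add coherently at a user. It assumes an idealized free-space geometric model with hypothetical coordinates and carrier; mutual coupling, phase quantization, and path loss are ignored.

```python
import cmath
import math

CARRIER_HZ = 300e9                 # illustrative sub-THz carrier
WAVELENGTH = 3e8 / CARRIER_HZ      # roughly 1 mm

def ris_phase_profile(elements, tx, rx, lam=WAVELENGTH):
    """Phase shift per RIS element that cancels the propagation phase of
    the tx -> element -> rx path, so all reflections co-phase at rx."""
    phases = []
    for e in elements:
        path = math.dist(tx, e) + math.dist(e, rx)
        phases.append((-2.0 * math.pi * path / lam) % (2.0 * math.pi))
    return phases

# 8-element toy surface along the x-axis (positions in meters):
elements = [(i * 5e-4, 0.0, 0.0) for i in range(8)]
tx, rx = (0.0, 2.0, 0.0), (1.0, 0.0, 1.5)
phases = ris_phase_profile(elements, tx, rx)

# Coherent sum check: with the profile applied, |sum| equals the element count.
total = sum(
    cmath.exp(1j * (2.0 * math.pi * (math.dist(tx, e) + math.dist(e, rx))
                    / WAVELENGTH + p))
    for e, p in zip(elements, phases)
)
```

The same co-phasing computation, repeated per user and per surface, is what a controller coordinating multiple cooperative \acp{RIS} would have to keep updated as users move.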
\subsubsection{Opportunities in using Holographic \acp{RIS} for THz frequency bands} Owing to the small footprint of \ac{HF} band transmissions and \ac{THz} communications, a new breed of intelligent surfaces, dubbed \emph{holographic} \acp{RIS} \cite{wan2020terahertz}, has come to light. This new concept integrates a very large number of small antenna elements to realize a holographic array having a spatially \emph{continuous} aperture. Such structures can be viewed as a theoretically infinite number of antennas, i.e., an asymptotic limit of ultra massive \ac{MIMO}. As such, they can potentially outperform non-holographic \acp{RIS}, given the increase in the number of metasurfaces, by achieving a higher spatial resolution and enabling the transmission and detection of \ac{EM} waves with arbitrary spatial frequency components \cite{huang2020holographic}. Particular to \ac{THz} networks, \emph{the detection capability of a holographic \ac{RIS} can provide a better sensing system}\footnote{Enhancing sensing using \acp{RIS} entails improving the localization, radar, and situational awareness capability of \ac{THz}.}, which is an essential process for the \emph{instantaneous} assessment of a wireless user's orientation and position. Additionally, given the small number of propagation paths between a \ac{THz} \ac{SBS} and a \ac{UE}, having multiple \acp{RIS} mimics the behavior of a multi-path propagation environment and converts limited \ac{LoS} channel models into richer ones, thus improving spatial multiplexing capabilities. \acp{RIS} are also particularly suitable for sensing because they can optimize and program their metasurface configurations. This allows generating a massive number of independent paths to enhance the object detection and localization process. Indeed, holographic \acp{RIS} can enable precise 3D environmental mapping through the inherent wireless sensing capabilities of the \ac{THz} bands that we discussed in Sections III and VI.
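A quick back-of-the-envelope check shows why such large-aperture surfaces naturally interact with nearby users: at \ac{THz} wavelengths, the classical far-field (Fraunhofer) boundary $2D^2/\lambda$ of even a modest aperture exceeds typical indoor distances. The numbers below are illustrative, not drawn from a specific deployment.

```python
def fraunhofer_distance_m(aperture_m: float, carrier_hz: float) -> float:
    """Classical far-field boundary 2*D^2/lambda for an aperture of size D."""
    wavelength = 3e8 / carrier_hz
    return 2.0 * aperture_m ** 2 / wavelength

# A 10 cm surface at 300 GHz: the far field only starts beyond ~20 m,
# so indoor users sit well inside the (radiating) near field.
boundary = fraunhofer_distance_m(0.10, 300e9)
```

This is the geometric reason the near-field regime, discussed next, is the rule rather than the exception for large \ac{THz} surfaces at short range.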
\subsubsection{Opportunities in Near-field \ac{RIS} communication at \ac{THz}} \indent Given the dense \ac{THz} architecture, the high number of metasurfaces, and the operation at a short range, \acp{RIS} can potentially satisfy the \emph{co-phase condition} and operate in the near field \cite{liu2020reconfigurable}. Near-field communications allow focusing the reflected beams towards a new focal point, in contrast to far-field communications that only provide anomalous reflections to the incident wave. Furthermore, near-field communication allows the network to exploit the wavefront curvature to reduce the need for infrastructure and synchronization. Thus, due to the shorter range of communication and the network conditions at \ac{THz}, \acp{RIS} can, first, use their beamforming capability to boost links suffering from low power. Second, they can create a virtual \ac{LoS} path to overcome blockages, which constitute a big challenge in the \ac{THz} operation. Third, the phase-of-arrival measurement of the transmitted signal obtained from near-field propagation opens the door for better sensing capabilities using \acp{RIS}, as it allows us to obtain information about the angle and the distance to an \ac{RIS}. This information, along with the time of arrival of the signal, can be further exploited to determine unknown clock biases and perform spherical-wave localization \cite{wymeersch2020radio}. Henceforth, novel channel models and signal processing methods are needed in order to conform to the \ac{THz}-\ac{RIS} near-field propagation environment and to characterize its behavior accurately. \subsubsection{Challenges} \indent \\$\bullet$ \emph{Complex Cooperation: }Simultaneously employing multiple holographic \acp{RIS} imposes novel challenges on the network due to the complexity of enabling their cooperation.
For instance, having a cooperative \ac{RIS} system makes it infeasible to use channel acquisition schemes that rely on penalizing the achievable rates to account for the pilot overhead from channel estimation \cite{wymeersch2020radio}. Furthermore, an \ac{RIS} acting as an intelligent reflective surface cannot process and estimate the channel without dedicated receive chains. Thus, such a compound channel will exacerbate the overhead stemming from the channel estimation and processing delays of the network. As some of the beyond-5G services have \emph{very strict latency requirements}, i.e., a near-zero \ac{E2E} latency, cooperation and scheduling among multiple \acp{RIS} must operate in a stringent real-time fashion. Nonetheless, relying on partial \ac{THz} \ac{CSI}, i.e., statistical information of the channel such as spatial, frequency, and path correlation or properties of the channel matrix, hinders the high reliability of links. \\ \indent $\bullet$ \emph{Accurate \ac{CSI} acquisition:} Obtaining accurate \ac{CSI} for multiple \acp{RIS} is a key challenge. First, it involves the estimation of multiple channels simultaneously. Second, \ac{THz} \acp{RIS} will have a very large number of metasurfaces\footnote{This phenomenon is more pronounced with holographic \acp{RIS}.}, each of which might have particular non-linear hardware characteristics. Third, the highly varying \ac{THz} channel limits the \ac{CSI}'s validity to a very short period of time.
One potential solution to the aforementioned challenges is to exploit the sparse nature of the channel and extract features from the geometric configuration of the network to learn more accurate \ac{CSI}.\\ \indent $\bullet$ \emph{Data-driven methods:} Exploiting the aforementioned channel features calls for data-driven and proactive predictive methods, capable of bypassing the highly varying \ac{THz} channel and inferring accurate \ac{CSI} within the coherence time of \ac{THz} and the stringent \ac{E2E} latency requirements of 6G applications. These methods will be further developed in Section VII. We note that, for all considered architectures, one can also envision several enhancements to further overcome the unique \ac{THz} challenges. For example, in order to further improve the coverage range, one can consider the use of multi-hop directional transmissions as discussed in \cite{ahmadi2021reinforcement}.\\ \indent After comprehensively discussing the potential \ac{THz} architectures, their specific challenges, the opportunities that they present, and the open research problems that need to be examined, we next discuss the synergy of the \ac{THz} band with lower frequency bands and its role in the deployment of future wireless systems.
\renewcommand{\arraystretch}{1.30} \begin{table*}[t] \footnotesize \caption{ Wireless cellular frequency bands rivalry and complementarity.} \centering \begin{tabular}{|p{2.5cm} | p{4cm} | p{4.5cm} | p{4.5cm}|} \hline & \textbf{Sub-6 GHz} & \textbf{mmWave} ($30- \SI{100}{GHz}$)& \textbf{ sub-THz and THz} ($0.1- \SI{10}{THz}$) \\ \hline \textbf{Distance} & High range & Medium to short range ($\leq \SI{200}{m}$) & Short range ($\leq \SI{20}{m}$)\\ \hline \textbf{Bandwidth} & Limited & Medium to Large &Large\\ \hline \textbf{Data rates} & Limited & Medium to High (up to $\SI{10}{Gbps}$) & High (up to $\SI{100}{Gbps})$\\ \hline \textbf{Interference} & Mitigated by techniques like OFDM and OFDMA & Mitigated by beamforming & Mitigated by sharp pencil beamforming \\ \hline \textbf{Noise source} & Thermal noise & Thermal noise & Molecular absorption noise and Thermal noise\\ \hline \textbf{Blockage} & Not susceptible & Susceptible & Highly susceptible \\ \hline \textbf{Beamforming} & Medium to narrow beams & Narrow beams & Very narrow beams \\ \hline \textbf{Horizons to explore} & Expanding the midband coverage & NLoS communications and long-range sensing functions & Reliable and low latency communications, integrated sensing and communication systems\\ \hline \textbf{Viable architectures} & Massive MIMO & Ultra massive MIMO, RIS, and UAV-RIS & Cell-free massive MIMO and holographic intelligent surfaces\\ \hline \textbf{Significant caveats} & Low rates and spectrum inefficient & Susceptibility to mobility and blockages & Susceptibility to micro-mobility, orientation, air composition and blockages \\ \hline \textbf{Applications} & Low-rate and latency tolerant services& Vehicular networks, radar, UAVs, and IoT & XR, holography, IoE, NTNs, sensing, and nanosensors\\ \hline \end{tabular} \label{table:Rivalry} \end{table*} \section{Synergy with Lower Frequency Bands} \subsection{Motivation} Independent of the deployed architecture, integrating \ac{THz} communications with 
\ac{mmWave} and sub-6 GHz bands provides many opportunities. Effectively, investing in this synergy between the \ac{THz} band and lower frequency bands allows future wireless systems to achieve a realistic universal coverage and to deliver more scalable network solutions. Particularly, such an integration allows us to exploit the benefits of \ac{THz} communications and sensing capabilities in outdoor scenarios as well as to serve highly mobile \acp{UE}, despite the short communication distance of \ac{THz} (due to severe power limitations and propagation losses). For example, highly mobile \ac{CRAS} applications like autonomous vehicles or even drones require downloading high-quality 3D maps instantaneously. Providing enhanced data rates to this process can be done by exploring the use of \ac{THz} \acp{SBS} as enablers of an \emph{\ac{IS}} as illustrated in Fig.~\ref{fig:architecture_thz}. Conversely, blockage-prone short-range \ac{THz} links can benefit from the exchange of control information at the lower frequency bands, thus facilitating a reliable real-time reconfiguration of \ac{THz} networks.\\ \indent Similarly, this integration can be done by deploying dual-band \ac{THz} and \ac{mmWave} \acp{SBS}, or even triple-band \acp{SBS}. Meanwhile, we expect \ac{THz} \acp{IS} to be the first stage in the evolution of this foreseen frequency band coexistence (since this will only require overlaying new \ac{THz}-operated \acp{SBS} on existing dual-band networks). This \ac{IS} deployment provides accessibility to \ac{THz} in current \ac{mmWave} and sub-6 GHz networks and allows serving network slices that require extremely high rates that can only be satisfied by \ac{THz}. In the next stage, we envision more networks relying on dual-band \ac{mmWave} and \ac{THz} frequencies, in a way to provide high rates and extremely high rates, respectively.
Such a deployment will see the light of day after the \ac{IS} stage as a result of: a) the major infrastructure changes needed to achieve a ubiquitous coverage of dual-band \ac{mmWave}-\ac{THz} \acp{SBS}, b) the maturity of a higher number of applications that require extremely high data rates, and c) the compliance of \ac{mmWave}-\ac{THz} \acp{SBS} with existing wireless standards and network elements. \subsection{Opportunities} \indent Integrating the \ac{THz} frequencies with lower frequency bands (\ac{mmWave} and sub-$\SI{6}{GHz}$) allows next-generation wireless systems to seize the following opportunities: \begin{itemize} \item Improve the reliability and continuity of \ac{THz} communication links by enhancing blockage prediction and exchanging control information using the longer-range and more reliable links over the lower frequency bands. \item Incorporate sensing mechanisms that are capable of providing high-resolution information at short range using \ac{THz} bands and radar-like information at long ranges using \ac{mmWave} bands. The concept of joint sensing and communication will be elaborated in Section VI. \item Benefit from the spatial correlation between the three frequency bands and develop knowledge for traffic scheduling and training overhead reduction \cite{alrabeiah2019deep}. \item Provide a versatile range of network slices for 6G systems with different rate, latency, and synchronization requirements by selecting unique communication band support and smoothly handing off users from one band to another. \item Prefetch high-rate demanding data such as \ac{AR} content using \acp{IS} while supporting conventional lower-rate services using the existing \acp{SBS}. \end{itemize} Clearly, integrating \ac{THz} frequency bands with \ac{mmWave} and sub-$\SI{6}{GHz}$ facilitates their introduction to current outdoor wireless networks, improves their link reliability, and extends their coverage.
However, differences in the \ac{EM} behavior, signal propagation properties, and available bandwidths lead to significant differences in the achievable rate, reliability, and latency. Such differences and heterogeneity lead to many challenges and open problems, as explained next. \subsection{Challenges and Open Problems} \indent Integrating the \ac{THz} band with lower frequency band (\ac{mmWave} and sub-6 GHz) \acp{BS} \cite{semiari2017joint, semiari2017caching, coll2019sub} hybridizes the network and enhances the reliability of its links. While such hybridization improves coverage and reliability and provides more versatile services, it also leads to multiple novel challenges. \\ \indent $\bullet$ \emph{Hybridization Techniques:} When using heterogeneous frequency bands, a key challenge is to implement new schemes that allocate interfaces between the three (or more) different frequency bands. One way to implement this hybridization efficiently is to deploy \emph{time-critical and vitally-robust} interfaces at lower frequency bands to support beamforming, initial access, and channel estimation. This, in turn, reduces the training overhead associated with beamforming and channel estimation at \ac{HF} bands, particularly \ac{THz}. Similarly, heavy data transmission interfaces must rely on bandwidth-abundant \ac{THz} \acp{IS} to enhance the data rates. Nonetheless, orchestrating control, time-critical, and data-hungry information over multiple frequency bands while maintaining a high \ac{QoS} and \ac{QoE} necessitates novel network management techniques. In fact, to propose efficient resource and network management schemes that reinforce the synergy between \ac{THz} and lower frequency bands, one can leverage the data collected from \acp{BS} and \acp{RIS}. As a result, this allows us to learn the complexities entailed by the alliance of the versatile frequency bands used.
Indeed, achieving a successful hybridization is an open research area that opens the door for novel \ac{ML} and \ac{AI} techniques capable of learning the most effective interface allocation, while optimizing the cooperation between the heterogeneous frequency bands.\\ \indent $\bullet$ \emph{Effective Control:} Another key challenge of integrating multiple frequency bands is to devise effective control and signaling protocols that can handle the different characteristics of the different bands (e.g., see \cite{semiari2019integrated} and \cite{shokri2015millimeter} for the case of \ac{mmWave} $-$ sub-$\SI{6}{GHz}$ band integration).\\ \indent $\bullet$ \emph{THz \acp{IS}:} There are several challenges that are peculiar to the use of THz \acp{IS}. For instance, the fixed location of \acp{IS} might limit the ability of \acp{UE} to prefetch high-rate content or experience very high data rates. To address this challenge, one can deploy denser dual-band \ac{mmWave}-\ac{THz} \acp{SBS}, which increases the likelihood of delivering a high-rate \ac{THz} connection to a user. Nevertheless, this deployment leads to higher infrastructure costs and an increased complexity.\\ \indent $\bullet$ \emph{Open Problems:} The coexistence of multiple frequency bands brings forward several additional open problems such as: a) Mapping control information and payload to different frequency bands, while taking into account their causal relationship, i.e., the fact that data can only be interpreted if the appropriate control information is in place, b) Joint scheduling and spectrum management to make use of multi-connectivity across bands and maintain high reliability, c) Associating users to cells while maintaining a load balance, and d) Learning the \ac{THz} network conditions by leveraging data and information from lower frequency bands.
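As a toy illustration of the multi-connectivity problem in item b) above, the following greedy fallback picks the highest viable band per user, using the indicative ranges of Table~\ref{table:Rivalry}. The thresholds are illustrative placeholders, not measured link budgets.

```python
# (band name, indicative max range in meters, requires LoS?)
# Ranges follow the orders of magnitude in the band-comparison table.
BANDS = [
    ("THz",     20.0,   True),
    ("mmWave",  200.0,  True),
    ("sub-6",   1000.0, False),
]

def select_band(distance_m: float, has_los: bool) -> str:
    """Prefer the highest-rate band whose range and LoS constraints hold;
    fall back toward sub-6 GHz as conditions degrade."""
    for name, max_range, needs_los in BANDS:
        if distance_m <= max_range and (has_los or not needs_los):
            return name
    return "out-of-coverage"
```

A real scheduler would, of course, fold in rate demands, load balancing, and blockage prediction rather than hard thresholds, but the fallback structure captures why lower bands act as a reliability anchor for \ac{THz}.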
\\ \indent It is worth noting that handling and solving such problems efficiently requires capitalizing on the distinct features and complementarities of these frequency bands as shown in Table~\ref{table:Rivalry}. Clearly, a successful deployment of wireless \ac{THz} networks relies on adopting the aforementioned architectures and investing in the coexistence with lower frequency bands. Effectively, to take advantage of all the benefits of the quasi-optical behavior, such networks need to migrate towards \emph{versatile wireless systems} that can perform multiple functions such as sensing, communications, and localization, among others. Thus, we next discuss the prominent role of joint sensing and communication systems. \section{Joint Sensing and Communication Systems} Wireless systems are rapidly evolving from solely relying on communication services to versatile systems with joint sensing and communication capabilities \cite{rahman2020enabling}. This transformation is a byproduct of two main factors. \emph{First}, future wireless services necessitate some form of sensing input as part of the functionality of the application. For example, autonomous vehicles necessitate radar capabilities that enable them to avoid collisions. \emph{Second}, equipping systems with a joint sensing and communication capability leads to more effective: \begin{enumerate}[label=(\roman*)] \item Sensing served by communication feedback. \item Communication served by sensing feedback. \item Mutual sensing and communication feedback to fulfill joint goals.\end{enumerate} These abilities bring an \emph{interesting fellowship of communication and sensing, as each communication blockage represents a sensing opportunity and vice versa}. In fact, \ac{THz}'s sensing outcomes are versatile (e.g. object detection, \ac{UE} tracking, 3D mapping) as previously shown in Fig.~\ref{fig:sensing}. 
Meanwhile, the common denominator of these outcomes is the ability to analyze the transmitted or reflected \ac{THz} \ac{EM} wave in a way to infer measurements. Such measurements include the angle-of-arrival, angle-of-departure, time-of-arrival, Doppler frequency, and physical patterns of the detected objects. Collecting and processing these high-resolution measurements enables \ac{THz} systems to localize \acp{UE} and detect objects with centimeter-level and degree-level accuracy, for example. \subsection{Opportunities and Advantages} \indent Incorporating integrated sensing and communication functions in future wireless \ac{THz} systems brings many opportunities and advantages: \begin{itemize} \item \emph{Improved pencil beamforming:} Maintaining reliable pencil \ac{THz} beams has always been a difficult process in highly mobile environments. Having readily available and continuous sensing feedback to build situational awareness leads to a better initial access, improved beam tracking, and enhanced user association. \item \emph{Predictive resource usage:} Sensing paves the way for situational awareness and the means to characterize environmental changes and user gestures. Such actions are usually followed by a certain communication demand in services like \ac{XR}. Subsequently, the sensing input can be used to make predictive allocations of communication resources. \item \emph{Spectrum efficiency and coexistence:} Allowing \ac{THz} sensing and communication functionalities to dynamically share the spectrum inherently increases the spectrum efficiency. For instance, if sensing and communication functionalities do not need to operate simultaneously, a significantly improved spectral efficiency would be observed compared to disjoint systems. In essence, spectrum sharing is an option for the coexistence of sensing and communication functionalities given \ac{THz}'s ample bandwidth.
Alternatively, one can benefit from \ac{THz}'s quasi-optical nature and exploit techniques like \ac{OAM} to multiplex sensing and communication functionalities. \ac{OAM} will be further elaborated in Section VIII. \item \emph{New use cases:} Benefiting from the sensing feedback makes \ac{THz} systems an ideal candidate for many applications requiring a dual sensing and communication feedback. For example, \ac{XR} services require a network that can provide an instantaneous high-resolution localization of the user as well as high data rates. Herein, \ac{THz} systems can jointly serve such a use case, especially in indoor areas. \item \emph{Cost efficiency:} Integrating the communication and sensing functionality on a single platform leads to reduced costs and size \cite{rahman2020enabling}. \end{itemize} Henceforth, joint sensing and communication systems provide ample opportunities for innovative wireless systems. Nonetheless, the short communication range and the intermittent links, on the one hand, limit the reliability and coverage of communication links. On the other hand, they limit the sensing scope and the situational awareness coverage. Next, we will discuss the means to improve joint sensing and communication systems. \subsection{Effective Strategies for Joint Sensing and Communications} The overall performance of joint sensing and communication systems is primarily contingent upon improving the link reliability, continuity, and coverage. Incorporating systems that have multiple independent paths extends the communication reliability and improves the sensing richness, i.e., the measurements become capable of capturing longer-range objects and a larger number of details in terms of situational awareness. Attaining these goals can be done by capitalizing on the role of \acp{RIS} and on the synergies between the lower frequency bands and \ac{THz}.
The roles and challenges accompanying such effective strategies include: \begin{itemize} \item \emph{Role of lower frequency bands:} Extending the sensing capability to longer ranges can be achieved by integrating \ac{THz} and \ac{mmWave}. This combination allows establishing sensing services with more versatile functionalities. For instance, this allows us to extend the joint sensing and communication capability to outdoor applications for services that require radar and environment sensing, like autonomous vehicles. Nonetheless, this gives rise to a tradeoff between the resolution achieved and the range of sensing. Hence, having continuous feedback between communication and sensing signals diminishes the uncertainty surrounding \ac{HF} bands like \ac{THz} and \ac{mmWave}. For instance, here, the \ac{THz} sensing input can detect closer objects with higher precision, while \ac{mmWave}'s sensing input can better sense farther objects with a lower resolution. Combining this dual sensing input could provide a better situational awareness and an improved blockage mitigation for both \ac{THz} and \ac{mmWave}. While such a coexistence improves the joint system's capability, it adds deployment complexity. The challenges surrounding the coexistence with lower frequency bands are explained in Section V-C. \item \emph{Role of \acp{RIS}:} The use of \ac{RIS}-enhanced architectures could further improve the \ac{THz} sensing performance by enabling the creation of multiple independent paths carrying richer information about the dynamics of the environment \cite{hu2019reconfigurable}. Particularly, \acp{RIS} allow us to perform the sensing process by reflecting the intended signals in precise directions, i.e., by adjusting their metasurfaces via a controller, without consuming any additional radio resources.
Nonetheless, \acp{RIS} acting as intelligent reflectors are limited by their inability to emit sensing or pilot signals to initiate an active sensing process.\footnote{Active sensing refers to the process of emitting radiation in the direction of a desired target to be investigated. An active sensor detects and measures the radiation that is reflected or backscattered from the target. Passive sensing, on the other hand, detects radiation that is emitted or reflected by the object or scene being observed. In this case, \acp{RIS} will be detecting the \ac{THz} radiation emitted by \acp{SBS}.} In contrast, \acp{RIS} acting as transceivers \cite{jung2020performance, jung2019reliability, jung2019performance} can send pilot signals, have an improved processing capability, and could perform both active and passive sensing. Nonetheless, their deployment comes with an increased infrastructure and energy cost. Thus, selecting the mode of operation of \acp{RIS}, i.e., as intelligent reflectors vs. transceivers, will lead to an \emph{energy efficiency $-$ processing capability} tradeoff. Here, intelligent passive reflectors can provide sensing and communication control to the network operator with a high energy efficiency, but at a low processing and transmission capability. Meanwhile, transceivers can better transmit and process sensing and communication \ac{EM} waves, at the cost of an increased complexity and a reduced energy efficiency. \end{itemize} After discussing the deployment strategies and integrated frequency band techniques that can reap the benefits of joint sensing and communication systems, we next discuss the challenges in employing joint sensing and communication systems at \ac{THz} bands. \subsection{Challenges} While joint sensing and communication systems present many opportunities, they also give rise to several new challenges that must be addressed:\begin{itemize} \item \emph{Resource sharing and allocation:} The resources (e.g.,
time, space, and frequency) used for sensing and communication signals can be shared statically or dynamically. Dynamic sharing of spatial resources (e.g., the number of antennas or metasurfaces allocated) is more efficient; however, \ac{THz} communication beams must be stable and sharply pointing, whereas \ac{THz} sensing requires time-varying directional scanning beams. The design of scheduling and dynamic resource allocation schemes that balance the \emph{high data rate and high-resolution sensing tradeoff} is therefore an open research area. \item \emph{Coexistence schemes:} Sensing and communication can coexist through different approaches such as coexistence in spectral overlap, coexistence via cognition, or functional coexistence \cite{zheng2019radar}. Each of these coexistence schemes has its own advantages and drawbacks based on the application served. For instance, the goal of the first scheme is the mitigation of mutual interference while guaranteeing satisfactory performance for both functions. Meanwhile, the second category avoids spectral overlap by enabling radar to sense the communication channel at very low rates. The last category joins the functionalities through hardware, and no resource negotiation takes place. Hence, it is necessary to scrutinize these schemes and engineer \ac{THz}-tailored schemes that can meet the particular needs of a given application. \item \emph{Waveform design:} The designated waveforms for radar sensing are typically unmodulated single-carrier signals or short pulses and chirps. As such, this results in high power radiation and simple receiver processing \cite{rahman2020enabling}. In contrast, communication signals consist of a mix of unmodulated (pilots) and modulated signals, which results in a higher transceiver complexity. Hence, joint systems need to take these waveform differences into account and characterize the associated tradeoffs so as to jointly optimize the performance.
\item \emph{Design complexity:} Extending joint sensing and communication to long-range use cases and ubiquitous coverage requires integrating \ac{THz} with \ac{mmWave}. Subsequently, implementing a dual-frequency band system complicates the feedback mechanism between sensing and communications. Also, a dual-frequency band system leads to problems in sensing scheduling schemes due to differences in the interference, accuracy levels, and ranges of these frequency bands. \end{itemize} \indent Indeed, joint sensing and communication systems will have a central role in wireless \ac{THz} networks and future wireless generations. This results from the symbiotic integration of a high carrier frequency, immense bandwidth, quasi-optical characteristics, and the natural use of \acp{RIS}. In fact, a significant use case of the sensing feedback for communication takes place in the initial access stage. As such, sensing paves the way for the initiation of a successful channel estimation process. Hence, we next elaborate on the challenges and the transformative solutions needed at the PHY-layer for a successful channel estimation and initial access process in \ac{THz} wireless systems. \section{PHY-layer Procedures} \renewcommand{\arraystretch}{1.5} \begin{table*}[t] \footnotesize \caption{Summary of the impact and challenges associated with each of the THz-enabling solutions.} \centering \begin{tabular}{m{5.5cm}m{5.5cm}m{5.5cm}} \hline \textbf{THz Enabling Solution} & \textbf{\hspace{0.4cm}Impact} & \textbf{\hspace{0.4cm}Challenges} \\ \hline \textbf{Cell-free massive MIMO}&\begin{itemize} \item Reduction of handovers and handover failures. \item Suppression of interference. \item Local CSI with lower overhead. \end{itemize}& \begin{itemize} \item Highly varying dynamic user-centric clustering. \item Connectivity of all \acp{SBS} to a cloud. \item Complex distributed network operation.
\end{itemize}\\ \hline \textbf{RIS-enhanced architecture}& \begin{itemize} \item Increased \ac{LoS} probability. \item Improved multi-path sensing. \item Cost efficiency. \item Generation of \ac{OAM} through metasurfaces. \end{itemize} & \begin{itemize} \item Complexity in aggregating CSI. \item Strategic localization of RISs for enhanced cooperation. \end{itemize} \\ \hline \textbf{Integration with mmWave/sub-6 GHz} & \begin{itemize} \item Extension of \ac{THz} service to outdoor and highly mobile applications. \item Enhanced reliability. \end{itemize} & \begin{itemize} \item Complexity in resource allocation between the three frequency bands. \item Abundance of THz ISs in contrast to the density of mmWave/sub-$\SI{6}{GHz}$ \acp{SBS}. \end{itemize}\\ \hline \textbf{OAM} & \begin{itemize} \item Higher spectral efficiency. \item Novel multiplexing and multiple access dimension through OAM modes. \end{itemize}& \begin{itemize} \item Lack of wireless models to characterize OAM modes. \item Successful decoding of data from OAM-carrying modes. \item Compliance with existing wireless infrastructure. \end{itemize} \\ \hline \textbf{NOMA} & \begin{itemize} \item Higher spectral efficiency. \item Efficient resource allocation to worst-case scenarios. \end{itemize} & \begin{itemize} \item Low received power. \item Substantial interference. \end{itemize} \\ \hline \textbf{Joint sensing and communication systems} & \begin{itemize} \item Improved network learning by augmenting channel data with sensing data. \item \acp{UE} granted with 3D mapping and situational awareness capabilities. \item Accessibility to a broader range of applications that require radar/sensing capabilities. \end{itemize} & \begin{itemize} \item Dynamic spatial multiplexing amid contrasting beam requirements in communications and sensing. \item Complexity in coexistence approach and resource management.
\end{itemize} \\ \hline \textbf{Distributed multi-agent network optimization} & \begin{itemize} \item Various network patterns and service trends broken down into dynamic clusters. \item Hierarchical structure leveraged to generalize and specialize into network behavior/specific service requirements. \end{itemize} & \begin{itemize} \item Seamless cooperation between multi-agents. \item Scarcity of big datasets. \item Intra-cluster variations due to time-varying data. \end{itemize} \\ \hline \textbf{Meta-learning driven network optimization} & \begin{itemize} \item Mapping heterogeneity in behavior to multiple tasks. \item High generalizability. \end{itemize} & \begin{itemize} \item Difficulty in defining tasks vis-à-vis different requirements. \item Violation of HRLLC requirements and near real-time operation. \end{itemize} \\ \hline \end{tabular} \label{table:Vision} \end{table*} PHY-layer procedures, such as channel estimation and initial access, face new challenges at \ac{THz} frequencies for multiple reasons. First, \ac{THz} channels are high-dimensional and often very sparse in their beam representation. Indeed, the high number of antenna elements results in a high number of pilots, leading to significant overhead. Second, the very narrow-beamed \ac{THz} links of the \acp{SBS} and their corresponding \acp{UE} cannot meet in space at initial access, i.e., prior to channel estimation and any information exchange. This results in the so-called \emph{deafness problem}. While this problem was encountered at \ac{mmWave} frequencies, the key difference at \ac{THz} bands is that quasi-omnidirectional antennas cannot be used due to the higher propagation losses \cite{xia2019expedited}. Third, the highly varying \ac{THz} channel has a very small coherence time that must accommodate the combined duration of uplink training, downlink payload data transmission, and uplink payload data transmission.
In fact, the network will not only experience a coherence time significantly shorter than the one at \ac{mmWave}, but the \ac{THz} \ac{EM} wave is also highly susceptible to multiple factors such as molecular absorption, blockage, and minute beam misalignment. Hence, this further hinders keeping the aforementioned combined duration below the \ac{THz} coherence time. These intertwined challenges imply that the conventional low frequency protocols and schemes used for channel estimation and initial access cannot capture the distinct features of the \ac{THz} channel behavior. \subsection{Channel Estimation} \indent \ac{THz} networks are likely to be deployed in a fairly dense distributed architecture, empowered with \acp{RIS}. Having multiple \acp{RIS} cooperating in cell-less architectures makes it very difficult to capture the instantaneous \ac{CSI}. On the one hand, the dynamic user-centric clusters are highly time-varying. On the other hand, multiple \acp{RIS} operate in different modes, i.e., as an \ac{SBS} or an intelligent reflector. These factors lead to non-stationary channel behaviors and compound channels, respectively. Predicting partial \ac{CSI} might be feasible under such conditions; nonetheless, partial \ac{CSI} amid the highly varying \ac{THz} channel leads to poor beam alignment, user association, and network optimization, ultimately yielding a highly unreliable communication link. Capturing and characterizing the instantaneous \ac{CSI} is an essential building block. Therefore, it is necessary to exploit the data used to build partial \ac{CSI} to accurately predict the instantaneous \ac{CSI} of the network.\\ \indent Every user-centric cluster exhibits unique channel conditions that are highly varying with time. \emph{First}, the limited number of \acp{UE} within a user-centric cluster and its small geographical area lead to a scarcity of channel data.
\emph{Second}, the user-centric clusters exhibit high correlations with each other that need to be capitalized on. Hence, these factors require, on the one hand, supplementing the channel data to improve the generalizability of channel estimation. On the other hand, multiple agents need to be deployed to cooperatively exchange \emph{generalizable} channel behaviors across user-centric clusters, while specializing in the use cases exhibited within their own cluster. Thus, this calls for novel multi-agent generative \ac{ML} mechanisms. In such mechanisms, each agent will generate synthetic environments from the available small data to train and estimate the channel. In fact, the concept of generative networks has started to see the light in channel estimation. For instance, in \cite{balevi2020high} the authors performed channel reconstruction by eliminating the need for a priori knowledge of the sparsifying basis and using a deep generative model as a prior. The work in \cite{kasgari2020experienced} proposed a \ac{GAN} approach that pre-trains a deep-\ac{RL} framework using a mix of real and synthetic data to assimilate a broad range of network conditions. Clearly, the approaches adopted in \cite{balevi2020high} and \cite{kasgari2020experienced} are still in their infancy given their dependence on a single learning agent and their inability to learn compound channel models for \ac{RIS}-enhanced networks. Thus, such techniques should be extended to handle the highly varying \ac{THz} channel as well as the need for cooperation among different learning agents and their corresponding user-centric clusters. Here, one can build on our recent results in \cite{qqzhang2020} and \cite{zhang2021distributed}, which showed that the use of fully-distributed \acp{GAN}~\cite{ferdowsi2020brainstorming} (i.e., generative models that go beyond classical federated learning) can be effective for \ac{mmWave} channel estimation in multi-drone networks.
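To make the cooperative idea above concrete, the following minimal Python sketch (not the GAN frameworks cited above; the Gaussian path-gain model and all numbers are illustrative assumptions) shows agents fitting channel statistics from scarce local data, averaging the learned parameters across agents, and drawing synthetic samples to augment training:

```python
import random
import statistics

# Toy sketch: each learning agent observes only a few local channel-gain
# samples, fits a simple Gaussian model, and the agents then average their
# model parameters -- a federated-style exchange of *generalizable*
# channel statistics across user-centric clusters.

random.seed(0)
TRUE_MEAN, TRUE_STD = -80.0, 6.0   # hypothetical path-gain statistics (dB)

def local_fit(n_samples):
    """One agent's model fit from its scarce local data."""
    data = [random.gauss(TRUE_MEAN, TRUE_STD) for _ in range(n_samples)]
    return statistics.mean(data), statistics.stdev(data)

agents = [local_fit(n_samples=8) for _ in range(20)]  # 20 agents, 8 samples each

# Cooperative model: average the per-agent parameters.
coop_mean = statistics.mean(m for m, _ in agents)
coop_std = statistics.mean(s for _, s in agents)

# Each agent can now draw synthetic samples from the shared model to
# augment its scarce local data before training an estimator.
synthetic = [random.gauss(coop_mean, coop_std) for _ in range(100)]
print(round(coop_mean, 1), round(coop_std, 1))
```

Averaging raw parameters is of course far cruder than exchanging generative models, but it illustrates why pooling across agents stabilizes estimates that no single cluster's data could support.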
\subsection{Initial Access} \indent Having a successful initial access at \ac{THz} frequencies requires finding a solution that allows the beams of the \acp{SBS} or \acp{RIS} and their corresponding \acp{UE} to meet in space (prior to any information exchange). Hence, to address this so-called \emph{deafness problem}, one can envision two prospective solutions: \begin{itemize} \item Leverage the integration of \ac{THz} networks with lower frequency bands to perform the link configuration and beam association, and to gather all the control information prior to information exchange. \item Explore \ac{THz} sensing and localization capabilities. In other words, instead of solely relying on communication data exchange, one can rely on sweeping sensing beams to acquire \emph{situational awareness} prior to any information exchange. As such, this situational awareness continuously updates the \acp{SBS} with the location and orientation of the potential \acp{UE} they will associate with. \end{itemize} For the sensing solution, relying on a single narrow-beam \ac{LoS} link to obtain sensing data is highly time consuming and subject to errors due to frequent blockages. As such, to obtain richer information about the environment, \acp{RIS} with a high number of metasurfaces can create multiple independent paths carrying richer information about human positions and orientations. In turn, this will successfully set the stage for a \emph{spatial rendez-vous} between the communicating beams prior to the channel estimation period. In fact, beam management for dynamic scenarios at \ac{THz} frequencies must be maintained \emph{instantaneously} given the high susceptibility of narrow beams to fluctuations in user orientation. Thus, situational awareness can provide the needed instantaneous feedback, owing to \ac{THz}'s highly accurate sensing capability as a byproduct of its large bandwidth.
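As a rough illustration of why sensing-aided initial access matters, the back-of-the-envelope Python sketch below (all codebook sizes and slot durations are hypothetical, not standardized values) compares an exhaustive beam-pair sweep against a sweep pruned by situational awareness:

```python
# Back-of-the-envelope sketch (hypothetical numbers) of why blind beam
# sweeping is prohibitive at THz and how situational awareness shrinks it.

SLOT_US = 10.0          # assumed duration of one beam-pair probe (microseconds)

def sweep_time_us(n_sbs_beams, n_ue_beams):
    """Exhaustive sweep probes every SBS/UE beam-pair combination."""
    return n_sbs_beams * n_ue_beams * SLOT_US

# Narrow THz pencil beams imply large codebooks on both sides (deafness problem).
blind = sweep_time_us(n_sbs_beams=256, n_ue_beams=64)

# Sensing feedback (user location/orientation) prunes each codebook to a
# handful of candidate directions around the estimated position.
aided = sweep_time_us(n_sbs_beams=4, n_ue_beams=4)

print(blind, aided)   # 163840.0 vs 160.0 microseconds
```

Even with these toy numbers, pruning both codebooks by sensing cuts the sweep by three orders of magnitude, which is what makes a spatial rendez-vous feasible within the short \ac{THz} coherence time.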
Next, after laying out the foundation needed in terms of architectures and systems of operation for versatile wireless \ac{THz} systems with joint sensing and communication functions, we discuss the measures that allow us to improve the spectrum efficiency and support more users. \section{Spectrum Access Techniques} Conventional spectrum access schemes like \ac{OFDMA} and \ac{CDMA}, used at lower frequency bands and in previous wireless generations, cannot be directly applied to \ac{THz} wireless systems. On the one hand, the peculiar quasi-optical propagation features and their corresponding hardware constraints make it difficult to directly employ traditional spectrum access techniques \cite{wei2018multi}. On the other hand, the stringent requirements of emerging 6G services in terms of delay and resources require more efficient spectrum access techniques. For instance, while \ac{TDD} is a widely adopted multiplexing technique for current massive \ac{MIMO} systems, it still leads to an overhead latency that is added to the \ac{E2E} delay. This calls for alternative techniques that do not rely on time resources and that can eliminate such latency from the equation. Effectively, \ac{THz}'s quasi-opticality paves the way for many such techniques that do not rely on traditional time, frequency, and space resources. Hence, we next delve into these \emph{spectrum access \ac{THz}-tailored techniques} that lead to an improved multiplexing and multiple access efficiency. \subsection{\Ac{OAM}} The concept of \ac{OAM} \cite{ni2020electromagnetic} refers to a physical property of \ac{EM} waves that has recently drawn attention as a means to dramatically improve the channel capacity and the spectral efficiency of communication systems. Effectively, the roots of this physical property pertain to the rotation of optical beams.
This rotation is characterized through an angular momentum that has two components: the first component is the rotation of the polarization vector, called \emph{spin}, whereas the second component, of substantially higher magnitude, is the rotation of the phase structure, called \emph{orbital angular momentum}. Moreover, the internal \ac{OAM} manifests as a helical wavefront that can carry information, boosting the optical and quantum information capacities through a spatial modal basis set \cite{yao2011orbital}. As such, the literature uses the term \ac{OAM} in short to denote the internal \ac{OAM} when considering it as an independent information carrier.\\ \indent Unlike the lower frequency bands that showcase a minimal improvement with the adoption of \ac{OAM} \cite{edfors2011orbital}, \ac{THz}'s quasi-opticality endows it with a robust \ac{OAM} capability. Such a capability can be leveraged to provide a novel dimension for multiple access, multiplexing, and increased spectral efficiency. Particularly, \ac{OAM} has a great number of \emph{modes}, i.e., topological charges, that are orthogonal to each other. On the one hand, \ac{OAM} modes can enable \ac{THz} \ac{LoS} links to bypass conventional spatial multiplexing and deploy \ac{OAM} multiplexing. This could be a promising way to improve system capacity and provide an alternative to orthogonal frequency division multiplexing\footnote{OFDM is very complex to implement at the \ac{THz} band due to the high peak-to-average power ratio.} (OFDM) by mitigating the interference of narrow-beam \ac{LoS} links. On the other hand, these \ac{OAM} modes provide a new dimension for user multiple access without exhausting the time or frequency resources. This, in turn, preserves the high bandwidth available at \ac{THz} in spite of traffic intensive services, and without incurring additional system delays.
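The mode orthogonality that underpins this new multiplexing dimension can be checked numerically. The short Python sketch below (an illustrative calculation, not a channel model) integrates the product of two helical phase fronts $e^{j\ell\varphi}$ over one azimuthal turn:

```python
import cmath

# Numerical check of the property that makes OAM modes a multiplexing
# dimension: helical phase fronts exp(j*l*phi) with different topological
# charges l are orthogonal over one azimuthal turn.

N = 4096                      # quadrature points around the azimuth

def mode_inner_product(l1, l2):
    """Approximate (1/(2*pi)) * integral of exp(j*l1*phi) * conj(exp(j*l2*phi))."""
    total = 0j
    for k in range(N):
        phi = 2 * cmath.pi * k / N
        total += cmath.exp(1j * l1 * phi) * cmath.exp(-1j * l2 * phi)
    return total / N

# Distinct charges are (numerically) orthogonal; equal charges give unity.
print(abs(mode_inner_product(3, 5)))   # ~0
print(abs(mode_inner_product(2, 2)))   # 1.0
```

This is precisely why data carried on charge $\ell_1$ can, in principle, be separated from data carried on charge $\ell_2$ without spending extra time or frequency resources.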
Additionally, this allows us to multiplex resources among the \ac{OAM} modes to further increase the spectral efficiency of \ac{THz} communications. Owing to \ac{OAM}'s multiplexing power, an open research area is to investigate whether sensing and communication functionalities can be multiplexed using \ac{OAM} modes.\\ \indent Moreover, \ac{OAM} can be generated using metasurfaces by regulating their phases. Interestingly, as we discussed in Section IV-B, \ac{THz} networks are likely to heavily rely on \acp{RIS} that are made of a discrete or continuous number of metasurfaces. As such, capitalizing on the existing network architecture and exploiting metasurfaces allows us to achieve a milestone in spectrum efficiency, resource allocation, multiplexing, and multiple access. Thus, it is crucial to adjust the different \ac{MAC} protocols so as to conform with \acp{SBS} and \acp{UE} adopting \ac{OAM} multiplexing and multiple access schemes, i.e., achieving orthogonality in a domain independent of space, time, and frequency. Moreover, new signal design and processing methods are needed to optimize the generation and transmission of \ac{THz} \ac{EM} waves that adopt \ac{OAM} to carry information over the \ac{OAM} modes. \begin{figure*} [t!] \begin{centering} \includegraphics[width=.8\textwidth]{Distributed_Hierarchy.pdf} \caption{\small{Illustrative figure showing the need for generalized and specialized learning for 6G systems operating at THz bands.}} \label{fig:hierarchy]} \end{centering} \end{figure*} \subsection{\Ac{NOMA}} While \ac{OAM} is a novel dimension that provides multiple degrees of freedom for \ac{THz} transmissions, in terms of multiple access and multiplexing, one can alternatively rely on \ac{NOMA} techniques. \ac{NOMA} is a multiple access technique that uses the power domain to introduce quasi-orthogonality.
As such, it pairs up users experiencing higher channel gains with those facing lower ones, thus reducing the disparity in the network by providing the weak user with a greater power allocation \cite{naqvi2016combining}. Particularly for \ac{THz}, where the gap between the best-case user scenario and the worst-case user scenario is considerably large, \emph{\ac{NOMA} allows shrinking this gap and providing a fairer experience across users}. Hence, \ac{NOMA} can improve the service to users suffering from dynamic and extreme network conditions such as deep fades, blockage, and user mobility. Such conditions traditionally lead to intermittent and unreliable \ac{THz} links, and, thus, \ac{NOMA} can provide better guarantees for the \ac{QoE} of critical services like holography and next-generation \ac{XR}.\\ \indent Furthermore, it has been shown that \ac{NOMA} performs better in systems that have a high \ac{SINR} \cite{ding2014performance}. Given that \ac{THz} systems have a considerably large \ac{SINR} due to the short communication range, adopting \ac{NOMA} schemes at \ac{THz} is highly beneficial and provides a higher spectral efficiency compared to conventional orthogonal schemes. Nonetheless, due to the high directionality of \ac{THz} links and their pencil beams, adopting single-beam \ac{NOMA} remains limited by the number of users that can be served. Furthermore, difficulties arise in providing suitable user pairing schemes with beamforming, as well as optimal power and bandwidth allocation. This calls for novel schemes that allow an efficient user pairing despite the high directionality of the beams \cite{zhang2019joint}. Moreover, in comparison to \ac{OMA}, \ac{NOMA} exhibits a lower received power and more severe interference that must be properly managed for effective multiple access.\\ \indent To overcome the aforementioned challenges, one potential solution is to benefit from the \ac{THz} \ac{RIS}-enhanced architectures.
For instance, we can leverage the \ac{RIS}-\ac{NOMA} synergy through a network with \ac{THz} \acp{SBS} and \acp{RIS} performing passive beamforming. In that sense, relying on an \ac{RIS}-aided network improves the energy efficiency of the system. In turn, \ac{NOMA} provides \ac{RIS} architectures with a multiple access technique capable of enhancing massive connectivity and spectral efficiency. Hence, the \ac{RIS}-\ac{NOMA} synergy can be reaped whether \acp{RIS} are active or passive. Moreover, \emph{on the one hand}, \ac{RIS}-\ac{NOMA} based architectures, in contrast to massive \ac{MIMO}-\ac{NOMA} systems, can potentially overcome the fluctuations resulting from deep fades, blockages, and mobility by exploiting reflected links. \acp{RIS} overcome these events by providing extended communication ranges and virtual \ac{LoS} links to \ac{THz} \ac{NOMA} systems. The ease of beam control through \acp{RIS} allows exploiting techniques like multi-beam \ac{NOMA} \cite{wei2018multi}. This concept is a beam splitting technique that relies on generating multiple beams to serve multiple \ac{NOMA} users over each radio frequency. Hence, it can mitigate the limitations resulting from narrow beams. \emph{On the other hand}, instead of conventionally determining the \ac{NOMA} decoding order of the users using their channel power gains, reconfiguring the \ac{RIS} phase shifts allows us to flexibly design the users' decoding order \cite{xiu20201reconfigurable}. Moreover, \acp{RIS} can improve the data rates of weak \acp{UE} without the need for additional transmit power \cite{de2020role}. Hence, they can provide an extra benefit without any additional energy consumption.
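The power-domain pairing and SIC decoding discussed in this subsection can be sketched numerically. The Python example below (illustrative normalized link values, not measured \ac{THz} parameters) compares a two-user NOMA pair against an OMA time-sharing baseline:

```python
import math

# Illustrative two-user power-domain NOMA sketch (hypothetical link values):
# the strong (near) user first decodes and cancels the weak user's signal
# (SIC); the weak (far) user treats the strong user's signal as noise.

P, NOISE = 1.0, 1.0            # total transmit power and noise power (normalized)
G_STRONG, G_WEAK = 100.0, 1.0  # channel power gains: large THz near/far disparity
ALPHA = 0.8                    # power fraction given to the weak user (fairness)

def rate(signal, interference):
    """Shannon rate in bits/s/Hz with interference treated as noise."""
    return math.log2(1 + signal / (interference + NOISE))

# NOMA with SIC at the strong user.
r_weak = rate(ALPHA * P * G_WEAK, (1 - ALPHA) * P * G_WEAK)
r_strong = rate((1 - ALPHA) * P * G_STRONG, 0.0)   # weak user's signal cancelled

# OMA baseline: each user gets half the time slot at full power.
r_weak_oma = 0.5 * rate(P * G_WEAK, 0.0)
r_strong_oma = 0.5 * rate(P * G_STRONG, 0.0)

print(r_weak + r_strong, r_weak_oma + r_strong_oma)
```

With these toy numbers the NOMA pair achieves a higher sum rate than OMA while also raising the weak user's individual rate, which is exactly the disparity-shrinking effect exploited above.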
Thus, \ac{RIS} networks introduce multiple degrees of freedom for improving the \ac{THz}-\ac{NOMA} performance.\\ \indent After equipping wireless \ac{THz} systems with effective multiple access and multiplexing schemes, fulfilling the diversified requirements and functions (sensing, communication, localization) of 6G services requires departing from conventional network optimization. Such methods are simply not adequate for \ac{THz} systems because of the real-time nature of the optimization needed on the one hand, and because of the highly varying \ac{THz} channel on the other hand. Thus, novel algorithmic approaches are needed so that \ac{THz} systems can ensure a real-time network optimization; these will be discussed next. \section{Real-Time \ac{THz} Network Optimization} \ac{THz} will mainly be the driver of high data rates and high-resolution sensing for 6G services. Effectively, \ac{THz} offers 6G applications many benefits in terms of abundant bandwidth, improved spectral efficiency, and enhanced localization. Nevertheless, its uncertain channel jeopardizes its ability to provide real-time communication, control, and computing functionalities for these services. To enhance \ac{THz}'s robustness, the following caveats need to be taken into account: \begin{itemize} \item \emph{Highly varying channel nature:} \ac{THz} has a highly varying channel, i.e., its coherence time is extremely short. Meanwhile, emerging 6G services like \ac{XR} rely heavily on a real-time response. In other words, the network needs to serve such applications \emph{continuously}, without millisecond disruptions in the service. Hence, if the \ac{THz} beam tracking, resource allocation, and user association are performed based on an \emph{outdated} coherence time, the real-time communication will be disrupted.
\item \emph{Susceptibility to extreme events:} \ac{THz}'s quasi-optical nature increases its susceptibility to molecular absorption, blockage, and minute beam misalignment. Such events inherently jeopardize the \emph{instantaneous} reliability of \ac{THz} links serving 6G services. \item \emph{Heterogeneous frequency bands:} The coexistence of \ac{THz} with \ac{mmWave} and sub-6 GHz links (as outlined in Section V) calls for mechanisms capable of estimating the channel parameters, managing resources, and tracking beams despite the heterogeneous characteristics of communication and sensing over different frequency bands. \item \emph{Joint resolution and rate optimization:} Deploying joint sensing and communication \ac{THz} systems requires the joint optimization of different objectives (e.g., high data rates and high resolution). Meanwhile, such objectives potentially dictate conflicting modes of operation. For example, beams used for communication must be narrow and sharply pointing, whereas sensing beams must be time-varying directional scanning beams. Hence, the real-time network control needs to be assessed in terms of sensing as well. \item \emph{Compound channels:} \ac{RIS}-enabled architectures mitigate multiple challenges pertaining to the \ac{THz} channel. Nonetheless, such an architecture leads to compound channels and an increased complexity whereby more than one link needs to be continuously synchronized. \end{itemize} Given the aforementioned caveats and the lack of explicit models that allow us to clearly draw performance tradeoffs in terms of rate, reliability, latency, and synchronization, one could exploit the concept of data-driven networks and let the data collected in terms of channel measurements, sensing measurements, and \ac{QoS} measurements be the decisive factor in examining and improving system performance.
Nonetheless, the following key challenges need to be examined: \begin{itemize} \item \emph{Non-stationary data:} The \ac{THz} channel and the key performance indicators of the \ac{THz} network, such as handover, beam-tracking, and molecular absorption, have time-varying and non-stationary distributions that are jointly correlated. Thus, predicting and generalizing these distribution patterns is inherently complex. Hence, this calls for mechanisms capable of breaking the correlation between the events in order to simplify the prediction process. \item \emph{Failure of centralized methods:} On the one hand, the low latency requirements of 6G cannot be met by wasting communication resources to access a centralized server. On the other hand, the distribution patterns predicted at a specific location in the \ac{THz} network (e.g., a scheduling policy or cell-association) can be invalid and outdated at another location. Consequently, a single decision maker cannot generalize the \ac{THz} performance at different instances. Thus, this calls for learning frameworks that deploy multiple edge agents which can collect and locally learn from the data. \item \emph{Scarcity of data:} Solely relying on location-specific data to learn distributions characterizing the \ac{THz} network performance will be challenging due to the insufficient training periods. Consequently, \ac{ML} algorithms would learn corner cases instead of \emph{generalizing} the distributions learned. To address this challenge, one should consider complementing existing channel data with other synthetic or real data to achieve improved training processes. Another alternative is to leverage the concept of theory-guided data science \cite{karpatne2017theory}, which integrates theoretical channel models with data science models to improve their scientific consistency.
\item \emph{Real-time response:} Current \ac{ML} methods still incur long training periods that do not allow learning agents to operate in real time. Performing the training offline might not be a valid solution due to the non-stationarity of the data. In other words, the distributions learned offline are no longer valid when used to perform decision-making in an online fashion. Thus, this calls for \emph{real-time \ac{ML} methods} that incur shorter training periods. \end{itemize} Based on these key challenges, we next underline the need for a multi-agent learning framework capable of predicting the generalized \ac{THz} network performance, while capturing peculiar specialties based on the specific use cases and events foreseen. \subsection{Towards Generalizable and Specialized Learning} Addressing the intertwined and non-stationary traits of data in a \ac{THz} wireless system can be done by recognizing the latent generalizable traits common among the resources and services being optimized. Subsequently, leveraging multi-agent learning, in contrast to centralized \ac{ML} techniques, allows us to extract the specific and specialized characteristics within a type of service, a mobility pattern, or a subset of \acp{UE} and resources. After the learning agents (e.g., an active \ac{SBS} or \ac{RIS}) have collected the data, dynamic clustering of the data can be performed by using unsupervised learning schemes. This clustering breaks the complex and joint correlations among different resources and data points; it also assigns each learning agent particular specialties based on the frequency of events seen in the data. For instance, multiple learning agents attempting to predict the mobility distributions of users exhibiting homogeneous mobility patterns will share the models learned. Effectively, this allows aggregating more data to achieve more robust training vis-à-vis the non-stationary data.
Similarly, the same learning agents might potentially have different specialties pertaining to the scheduling policy adopted. These models are learned separately by each agent to reduce the overhead and incremental delays. For example, one agent can be specialized in \ac{AR} use cases, low and medium mobility patterns, and a molecular absorption level that corresponds to indoor environments. Meanwhile, a second agent that collaborates with this agent can also be specialized in \ac{AR} use cases; however, the skillsets acquired for mobility and molecular absorption correspond to highly mobile \acp{UE} and outdoor environments. Subsequently, the collaboration between such agents serves to strengthen the common skillsets, and to minimize the communication resources spent on the exclusive skillsets learned. \\ \indent Hence, one agent can be specialized in one or more skillsets, depending on the level of heterogeneity in its channel data. Effectively, if a skillset is common among a high number of agents, the prediction capability of all these agents is improved vis-à-vis this skill. Furthermore, the learning capability of an isolated skillset (attributed to a single agent only) depends on the complexity of this skill and the amount of data gathered. As such, our suggested \ac{ML} framework allows learning agents to benefit from the common denominator and shared specialties they exhibit, while reducing the overhead for heterogeneous patterns learned separately. The overall approach that we propose here is captured in Fig.~\ref{fig:hierarchy]}. In this figure, we show an example in which dynamic clusters are formed based on three hierarchical levels. Moving toward the top of the figure, the clusters become more specialized.
Hence, this allows us to divide and conquer the intertwined trends and endow agents with a high generalizability and specialization.\\ \indent A recent distributed learning framework, dubbed \emph{democratized learning}, was proposed in \cite{nguyen2020self} and \cite{nguyen2020distributed} in order to capture specialization and generalization in a network of learning agents. This framework is a potentially promising solution for the considered \ac{THz} wireless system problems because it provides means to mimic human cognitive capabilities by collaboratively performing multiple complex learning tasks. In this framework, the agents form appropriate groups, according to their different characteristics, that are tailored for a specific specialization. Such groups are self-organized in a hierarchical structure where the biggest group shares the most common knowledge across all agents, while the groups shrink in size as their skills become more specialized. Nonetheless, in those prior works, such algorithms have only been applied to the MNIST and Fashion-MNIST datasets. As such, this framework has not been tested on the time-sensitive, scarce, and heterogeneous data of \ac{THz} networks. Also, these existing learning frameworks were primarily designed for simple classification tasks, in contrast to the real-time reinforcement learning needed to control a wireless network. Thus, it is necessary to empower each learning agent with an engine capable of discerning the complex patterns in the data in a real-time fashion.
In other words, the algorithms trying to predict and optimize the network process need to be empowered with more \emph{expressive power}.\footnote{Expressive power is the ability to represent a large number of learning algorithms, i.e., more expressive power means that the technique allows us to represent more sophisticated learning procedures \cite{finn2018learning}.} As such, to build intelligent wireless systems that can learn with the same versatility and flexibility as the human brain, one could exploit the concepts of multi-task and meta-learning, which are discussed next. \subsection{Towards Multi-Task Learning and Meta-Learning} While multi-agent learning allows breaking trends into clusters, the highly varying data structure of \ac{THz} systems leads to intra-cluster inconsistency. For example, even after clustering a group of \acp{UE} onto a common service type and a single mobility pattern, standard \ac{ML} methods like deep Q-learning might not be able to find reasonable solutions when faced with data that is slightly outside the learned distribution. In fact, this aspect is highly relevant for \ac{THz} systems because their susceptibility to extreme events like blockages or deep fades gives their data a heavy tail. To mitigate these challenges, one could divide every single learning problem into multiple tasks.\\ \indent For example, one could partition a typical \ac{THz} beam alignment problem into multiple learning tasks. It is important to note here that the term \emph{learning task} carries a technical meaning distinct from its everyday English sense. For instance, the presence of a high density of blockages and a low likelihood of \ac{LoS} might constitute one beam alignment task. Meanwhile, an average density of blockages, leading to a \emph{nominal \ac{THz} beam alignment process}, is another beam-alignment task, i.e., one that would only take place under average environmental conditions. 
\emph{In the first case}, the learning agent will typically perform a risk-averse action, whereby the reward and the learning setting are tuned to account for a high number of extreme and catastrophic events. In that sense, for this first task, the learning agent's goal would be to guarantee a higher number of active links connected to a given user (this can be achieved, for example, by generating more independent links from active \acp{RIS} towards a \ac{UE}, or by optimizing more reflected links from passive \acp{RIS} towards a \ac{UE}). Hence, under the first task, for a given \ac{UE}, the learning agent would associate an exhaustive number of active links in order to guarantee at least a single perfectly aligned \ac{LoS} link. Also, because of the riskiness associated with the environment, the learning agent may continuously devote a high number of frequency, energy, and space resources to sensing functionalities. This is because of the intrinsic need to continuously process the user's instantaneous location and orientation in such an extreme environment. Hence, a significant network overhead would potentially be incurred to account for \acp{UE} under the first task, as a result of the higher number of resources devoted to sensing functionalities.\\ \indent \emph{In contrast, the second learning task}, which focuses on nominal operation, requires a lower level of caution by the agent. Here, the learning agent can allocate more resources to multiple \acp{UE} communication links (rather than possibly needing to exhaust wireless resources on sensing a single \ac{UE} under a risky environment), which naturally reduces the overhead incurred from the continuous feedback needed from \ac{SLAM} in the first task. 
In other words, instead of expending a significant amount of radio and energy resources to establish a high number of active communication links as well as continuous sensing feedback for a small subset of \acp{UE} experiencing dire conditions, the decisions made by the agent in this second learning task can achieve a higher level of spectral and energy efficiency. As such, assimilating both learning tasks allows the learning agent to have a full view of the beam-alignment distribution in a \ac{THz} network. Thus, this contributes to a decision making process that can \emph{cautiously solve problems without being too restrictive}. Instead, if one were to approach this problem as a single-task problem, then each learning agent would tend to learn an \emph{average} behavior vis-à-vis its experienced environment. In essence, this will not allow the agents to tune their cautiousness level according to the type of environment encountered. Also, note that, while the \emph{riskiness of the environment} and the blockage density are the varying parameters across tasks in this illustrative beam alignment problem, other wireless problems will have different sets of parameters that distinguish one task from another. \\ \indent It is important to note that every learning problem exhibits a different number or type of learning tasks. In fact, oftentimes, such tasks cannot be known a priori. Effectively, a single \emph{task} distribution is characterized by a collection of homogeneous data that typifies it. Learning multiple tasks simultaneously allows us to capture an umbrella distribution that covers all the possible events. On the one hand, this enhances the expressive power and scalability of the learning agent vis-à-vis beam alignment, in contrast to learning corner cases. 
On the other hand, despite the scarcity of data and its inability to cover the full distribution of beam alignment, the learning agent can generalize to out-of-distribution events.\\ \indent Henceforth, the concepts of multi-task and meta-learning can be leveraged \cite{finn2017model} as an effective approach to address data scarcity and improve the generalizability of a learning agent that operates in a complex environment. Nonetheless, applying such approaches to wireless \ac{THz} networks faces many challenges. The number and type of tasks that surround a particular networking problem are not always known a priori and, thus, precisely defining a task is a key challenge. Furthermore, existing meta-learning techniques have mostly been developed for supervised learning problems \cite{nichol2018first, finn2017model, rajeswaran2019meta}, where data is inherently labeled. In contrast, wireless networking problems are effectively stochastic \ac{RL} problems in which a learning agent is fed with \emph{data environments}. Dealing with such data environments is cumbersome, which is why \ac{RL} algorithms tend to be traditionally slow when applied to stochastic problems that have large state and action spaces. In essence, such algorithms have very long training periods, whereas 6G services need to operate in a low latency realm. Thus, such long training periods could disrupt the real-time operation of these services. Reducing the \ac{RL} training and exploration periods can be done through many measures. First, meta-\ac{RL} solutions potentially reduce training periods because they transform datasets into task-specific datasets and, thus, they achieve fast adaptation to dynamic and potentially non-stationary environments. However, meta-\ac{RL} algorithms may still be incapable of continuously building upon previous experiences in a way that reaches near real-time training periods through generalizability. 
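To make the fast-adaptation idea concrete, the following is a minimal, first-order MAML-style sketch in the spirit of \cite{finn2017model}, using toy quadratic task losses rather than wireless data; all numbers are illustrative:

```python
# Minimal first-order MAML-style sketch on toy quadratic losses
# L_t(theta) = (theta - target_t)^2. The meta-parameter theta is trained so
# that one inner gradient step lands close to each task's optimum.
def grad(target, theta):
    # d/dtheta of the quadratic loss (theta - target)^2
    return 2.0 * (theta - target)

def maml_step(theta, task_targets, alpha=0.1, beta=0.05):
    meta_grad = 0.0
    for target in task_targets:
        # inner loop: task-specific adaptation starting from shared theta
        theta_task = theta - alpha * grad(target, theta)
        # outer loop: first-order meta-gradient at the adapted parameters
        meta_grad += grad(target, theta_task)
    return theta - beta * meta_grad / len(task_targets)

theta = 0.0
for _ in range(200):
    theta = maml_step(theta, task_targets=[1.0, 3.0])
# theta converges toward a point between the task optima (2.0 here), from
# which each task can be reached in a single adaptation step.
```

In a wireless setting, each "target" would instead be a task-specific loss (e.g., one per blockage regime), but the inner/outer structure of the update is the same.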
Second, to further reduce exploration and training periods, the \ac{RL} structure needs to be re-architected whereby the data environments are processed or complemented with synthetic data before being directly used. Third, given the lack of labels in an \ac{RL} setting, the design of the reward function is crucial for the convergence of the algorithm. Additionally, the uncertain \ac{THz} channel makes it difficult to explicitly define an oracle reward function for every single problem encountered. One method to overcome this challenge is to infer reward functions using inverse meta-\ac{RL} \cite{xu2019learning}.\\ \indent Here, we note that some recent works such as \cite{maggi2020bayesian} proposed the use of Bayesian optimization with Gaussian processes as a potentially superior alternative to \ac{RL} (in terms of convergence and interpretability) for solving radio resource management problems. However, using this technique for real-world wireless problems is prohibitive because it requires satisfying multiple conditions such as a low number of control parameters, a smooth performance function, and a low update frequency to cope with the environmental dynamics. In fact, many of the functions dealt with in wireless problems, particularly in \ac{THz} systems, are not smooth (e.g., because of factors such as molecular absorption). Also, 6G services mandate near real-time network control, thus necessitating a high update frequency. For these reasons, it is natural to posit that \ac{RL}'s universal and flexible setup still constitutes the fundamental building block necessary for network optimization and control and, as such, novel \ac{ML} methods will have to build on its basis. In that sense, adopting meta-\ac{RL} can be one approach that scales up \ac{RL} for wireless and non-wireless settings. Meanwhile, meta-\ac{RL} is still in its nascent stage and is an open research area which is ripe for exploration in the realm of 6G. 
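As a simple illustration of how a hand-crafted reward can encode the per-task cautiousness discussed for the beam alignment example, consider the following sketch; the task labels, rates, and penalty weights are hypothetical choices, not values from the literature:

```python
# Hypothetical sketch: a per-task reward for the beam alignment example.
# Under the high-blockage task, a link outage is penalized far more heavily,
# pushing the learned policy towards risk-averse behavior. All numbers are
# illustrative.
def beam_alignment_reward(rate_gbps, outage, task):
    """outage = 1 if the aligned LoS link was lost this step, else 0."""
    outage_penalty = {"high_blockage": 50.0, "nominal": 5.0}[task]
    return rate_gbps - outage_penalty * outage

# The same experience (100 Gbps achieved, followed by an outage) is scored
# very differently under the two tasks:
r_risky = beam_alignment_reward(100.0, 1, "high_blockage")
r_nominal = beam_alignment_reward(100.0, 1, "nominal")
```

A single-task agent would effectively average such penalties across all regimes, which is precisely the averaged, tuned-for-nothing behavior that the multi-task split avoids.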
\\ \indent Now that we have provided a detailed panorama of the seven defining features of \ac{THz} wireless systems, an overview of the \ac{THz} enabling approaches is summarized in Table~\ref{table:Vision}. Next, we discuss the most prominent \ac{THz} use cases that can potentially leverage those seven unique features. \begin{figure*} [t!] \vspace{-0.3cm} \begin{centering} \includegraphics[width=.8\textwidth]{Apps.pdf} \caption{\small{Illustrative figure showcasing the four main use cases for \ac{THz} systems.}} \label{fig:apps} \end{centering} \end{figure*} \section{Use Cases for \ac{THz} Wireless Systems} Fig.~\ref{fig:apps} presents the major 6G use cases that can potentially adopt the \ac{THz} frequency band and exploit its seven defining features. Based on their distinct needs and modes of operation, these use cases can be deployed on different potential \ac{THz} architectures. \subsection{XR and Holographic Teleportation} \ac{XR} encompasses \ac{AR}, \ac{MR}, and \ac{VR}. \ac{VR} services will immerse the user in a seamless experience, while \ac{AR} services will overlay virtual components onto the user's real-time experience. Such applications have stringent \ac{HRLLC} requirements \cite{chaccour2020can}, because of the need to maintain the joint quality of visual and haptic components, and the need to sustain a high \ac{QoE} for the user. Furthermore, the definition of \ac{MR} is still not very concrete in the literature. Nonetheless, \ac{MR}'s main objective is to combine the capabilities of both technologies, \ac{AR} and \ac{VR}, in the same device \cite{speicher2019mixed}. With the evolution of 3D imaging, \ac{MR} is morphing to create the highly coveted \emph{holographic teleportation} application domain. 
In addition to their more stringent \ac{HRLLC} requirements, i.e., a rate of $\SI{5}{Tbps}$ \cite{li2018towards}, holographic flows require very tight synchronization in terms of the feedback of the five senses.\\ \indent Clearly, only \ac{THz} communications can cater to the potential of \ac{XR} and holographic teleportation by delivering the extremely high rates needed. Nonetheless, satisfying the rate requirement does not necessarily lead to a good \ac{QoE}. In fact, \ac{XR} and holographic services have diversified requirements that include, along with the high-rate needs, continuous low latency and jitter requirements, a need for fresh information (particularly for \ac{AR}), and synchronization among all five senses in holography. Furthermore, these requirements are not only diverse, but they also need to be satisfied with high precision and accuracy. For instance, a momentary disruption in the \ac{THz} \ac{LoS} link will lead to a disrupted experience. Thus, it is necessary \emph{first} to deploy \ac{THz} nodes on an architecture, such as active and passive \acp{RIS}, that increases the likelihood of \ac{LoS} links in indoor areas while increasing the cost-efficiency of the network \cite{chaccour2020risk}. \emph{Second}, to guarantee a seamless \ac{XR} and holographic experience, different types of sensing functionalities need to be leveraged in a way to: a) provide high precision and high resolution information about subtle changes in the user orientation and movement, which can be performed by exploiting the high-resolution \ac{THz} sensing feedback, b) surround the user with situational awareness of short range blockages, which can be performed by enabling multiple independent \ac{THz} sensing paths with the use of \acp{RIS}, and c) take the situational awareness one step further to account for farther surrounding objects, which can be performed by \ac{mmWave} sensing. 
\emph{Third}, the aforementioned sensing data and network communication data should be intelligently processed and then fed to the real-time \ac{ML} algorithms described in Section IX. This leads to a cross-layer intelligent system capable of overcoming \ac{THz}'s uncertainty to satisfy a plethora of requirements, thus guaranteeing a high user \ac{QoE}.\\ \indent Extending such services to larger scale outdoor scenarios faces multiple challenges. On the one hand, the molecular absorption and the longer communication ranges significantly jeopardize the rates achievable by \ac{THz}. On the other hand, relying on \ac{THz} \acp{IS} in a predominantly \ac{mmWave}/sub-6 GHz \ac{SBS} network cannot guarantee the instantaneous rates needed by \ac{XR} and holographic services. This leads to a tradeoff between the versatility of operation and the user \ac{QoE}. One prospective opportunity is to explore foveated rendering and compression techniques \cite{kaplanyan2019deepfovea} capable of providing savings in \ac{XR} content size. Furthermore, outdoor services tend to exhibit higher mobility use cases (e.g., \ac{AR} for assisted driving), and such use cases have their own particular challenges and open problems: a) outdated information in such use cases leads to hazardous outcomes and, thus, the freshness of the uplink here is highly significant, b) the increased mobility increases the overhead needed for beam tracking and mobility management, and c) a tradeoff between a \emph{safe and a seamless user experience} takes place, and optimizing this tradeoff heavily relies on the techniques adopted to hybridize the network using lower frequency bands (as outlined in Section V). 
\renewcommand{\arraystretch}{1.5} \begin{table*}[t] \footnotesize \caption{Characteristics contrasting the needs of different 6G services.} \centering \begin{tabular}{ m{3cm} m{3.5cm} m{3.55cm} m{2.9cm} m{3.2cm}} \hline \textbf{Key metric} & \textbf{XR and Holography} & \textbf{Industry 4.0 and Digital Twins} &\textbf{CRAS} &\textbf{\acp{NTN}}\\ \hline \textbf{Potential mode of operation} & \begin{itemize} \item Indoor or confined spaces \item Standalone \ac{THz} architecture (mmWave cannot satisfy the rate requirements) \end{itemize}& \begin{itemize} \item Controlled setting \item Standalone \ac{THz} architecture \end{itemize} & \begin{itemize} \item Outdoor \item THz \acp{IS} \end{itemize} & \begin{itemize} \item Integrated fronthaul and backhaul \item mmWave/THz dual-band architecture \end{itemize} \\ \hline \textbf{Mobility support} & Low/Medium & Low & High & Medium \\ \hline \textbf{Rate} & Order of $\SI{}{Tbps}$ & Depends on update between cyber and physical twins & Order of $\SI{}{Tbps}$ & $\geqq \SI{1}{Tbps}$ \\ \hline \textbf{Latency} & \begin{itemize} \item $\SI{5}{ms}$ to achieve motion to photon latency \item Sub-millisecond for haptic capabilities \end{itemize}& $ 0.1-\SI{1}{ms}$ round trip time& $\SI{1}{ms}$ round trip for reaction time & Application dependent \\ \hline \textbf{Reliability paradigm} & High downlink reliability and five senses synchronization & High bidirectional reliability & High bidirectional reliability & High connection density \\ \hline \textbf{Service characteristic} & Transmitting real-time multi-sensory experiences & Cyber twin mimics the physical twin (especially in mission critical scenarios) & Real-time high definition content, in optimal time-space boundaries & Manage a plethora of aerial platforms such as \acp{UAV} and satellites \\ \hline \textbf{Major challenge} & Tracking the micro-orientation and micro-mobility of users to maintain reliability & Synchronizing different building blocks of the complete digital twin & 
\begin{itemize} \item Strategically deploy \ac{THz} \acp{IS} \item Risk-aware autonomy \end{itemize} & 3D coverage\\ \hline \textbf{Use of sensing} & Guiding every communication link with 3D \ac{SLAM} input and situational awareness of the surrounding environment & Sensing environmental changes in the physical-space with risky outcomes (e.g. disoriented controller) & Equipping \ac{CRAS} with multi-range and multi-resolution radar capabilities & Providing information about the Doppler effect and increased speeds in air and space. \\ \hline \textbf{Opportunity that favors THz} & Indoor settings limited by shorter range of communication & Setting is controlled and not subject to high mobility & \acp{IS} are assisted by mmWave/sub-6GHz & Molecular absorption and losses are lower at heights above $\SI{16}{km}$ \end{tabular} \label{table:services} \end{table*} \subsection{Industry 4.0 and Digital Twins} The evolution towards Industry 4.0 is leading towards highly autonomous operations among machines and robots, requiring only occasional human intervention. In light of this, high precision and high accuracy manufacturing processes require novel instantaneous control mechanisms. Particularly, the rapid development of such systems and their automated processes requires data rates in the order of $\SI{}{Tbps}$, a latency in the order of hundreds of microseconds, and a connection density of $10^7/\text{km}^2$ \cite{giordani2020toward}. Meeting such high data rates can be naturally performed by deploying \ac{THz} networks. However, the use of \ac{THz} networks for Industry 4.0 applications brings forth many unique challenges that must be overcome in order to achieve near-zero latency, dense coverage, and precision-driven control mechanisms.\\ \indent Moreover, present industrial systems require the collection of large volumes of data from different sensors. 
Nonetheless, such data is only locally available, which limits the flexibility of designing, developing, and preventing unwanted situations in large-scale autonomous systems. To provide an \ac{E2E} digitization, the concept of \emph{digital twins} \cite{rasheed2020digital} and \cite{farsi2020digital} has recently emerged as a means for creating a model of complex physical assets, thus scaling up the digitization of complex industrial structures and empowering them with full autonomy. Such digital twins should be characterized by trustworthiness so that engineers can rely on the remote control of physical systems by manipulating their cyber-space counterpart models.\\ \indent Providing such a high-fidelity representation of the operational dynamics puts a burden on the underlying wireless network. In particular, the physical counterpart can only be fully mimicked by enabling a real-time synchronization between the cyber and physical spaces \cite{lu2020digital}. Given the large number of sensors continuously aggregating data to update the cyber-space model, the connection needs to be delivered at extremely high data rates at \ac{THz} frequencies. In turn, this synchronization imposes stricter bi-directional reliability requirements across thousands of devices with near-zero response times. There are several open problems in this area. First, one must investigate whether \ac{THz} systems can deliver bi-directional reliability for such a large number of devices and real-time data transfer, reflected in a very low delay jitter (in the order of $\SI{1}{\mu s}$). Subsequently, in case they fail, \ac{mmWave} alone cannot satisfy the rate requirements, and thus, it is necessary to examine whether dual-band \ac{mmWave}/\ac{THz} systems can cooperate to update the cyber-space model in an \ac{HRLLC} fashion. Furthermore, extending the \ac{THz} coverage further calls for exploring some of the multiplexing and multiple access schemes discussed in Section VII. 
Such schemes open the door for multiple opportunities like improving the scalability of digital twins, accounting for a higher number of replicated devices, and improving the spectral efficiency.\\ \indent Digital twins not only allow real-time control, but they are also used to make predictions on the evolutionary dynamics and future states of large scale systems. This in turn enables anticipating failures, optimizing the system, designing novel features, and guiding the decision making process. Thus, offloading data from physical models is highly error-sensitive, especially in the initialization of new processes. Faultily initialized cyber-space models will have biases that propagate throughout the whole cycle of this process. Henceforth, communication links need to be driven by novel \ac{ML} models that combine real-time \emph{small data} and control theoretic models \cite{karpatne2017theory} to build cyber-space models with higher accuracy. However, this process adds a large overhead to the transmitted content. Here, novel \ac{THz} control scheduling schemes need to be investigated. As a matter of fact, providing digital twins with real-time control amid their uncertain industrial setting can be performed by using the \ac{ML} algorithms adopted in real-time \ac{THz} network optimization (the extreme events of the \ac{THz} channel and the extreme events of industrial settings share a common denominator). In other words, the \ac{ML} techniques adopted for the real-time optimization of \ac{THz} networks (elaborated in Section IX) are not limited to wireless networks, but can later be carried over to different real-time control environments. Subsequently, the prediction processing latency becomes the main delaying factor for digital twins, given that the cyber and physical spaces are always mutually communicating at high \ac{THz} data rates. 
\subsection{CRAS} \vspace{-0.1cm} \ac{CRAS} services include autonomous driving, autonomous drone swarms, and vehicle platoons, among others. To be driven by full autonomy, such systems need to exchange large amounts of data, such as high-resolution real-time maps, with their environment, e.g., other vehicles or \acp{BS}. Additionally, such cyber-physical systems are often characterized by high mobility and need to accurately sense and track their environment so as to determine their route optimization, and traffic and safety information \cite{zeng2019joint}. Thus, such systems require simultaneous sensing and communication for short range and medium range communications. Furthermore, such systems not only consume a huge amount of data to maintain their autonomy, but they also generate large volumes of data that can be of different types (e.g., 3D video of road conditions, radar data from nearby vehicles and objects). Hence, the wireless system should provide bidirectional high rates and reliable communication on the uplink and the downlink. As such, \ac{THz} frequencies can play an important role in enhancing the bidirectional rate for \ac{CRAS}. \\ \indent Although \ac{THz} systems can provide the rate requirements needed for \ac{CRAS}, given the high mobility of \ac{CRAS} devices, the system reliability will be disrupted due to intermittent links and the unavailability of continuous \ac{LoS} links. Hence, to mitigate this challenge, \ac{THz} \acp{IS} can provide the high rates needed for high-resolution real-time maps, while being complemented by \ac{mmWave} and sub-6 GHz links to exchange less data intensive content, as indicated in Section V. To deploy \ac{CRAS} over \ac{THz} networks, several key challenges must be addressed. For instance, characterizing the \ac{THz} propagation in different outdoor environments (e.g., highway, urban, etc.) is an important challenge because of the high variability of \ac{THz} propagation. 
Moreover, one must develop new approaches for optimizing the location and density of \ac{THz} \acp{IS} versus \ac{mmWave} and sub-6 GHz links. Another key challenge is to provide energy-efficient coverage despite the growing \ac{SBS} density. Furthermore, \ac{CRAS} can benefit from joint sensing and communication configurations. Particularly, given its longer range of communication, \ac{CRAS} can utilize \ac{mmWave}'s radar capability to detect objects and major environmental changes. Meanwhile, it can also exploit \emph{\ac{THz}'s high resolution sensing} to track subtly moving targets and micro-mobility changes. The collected sensing data can augment the communication measurements to provide predictive control driven by \ac{HRLLC}. Hence, such systems will witness a synergy of integrated \ac{THz} and \ac{mmWave} sensing and communication. This synergy further calls for novel network modeling schemes, increased coverage to account for more devices, and novel predictive resource management schemes.\\ \indent Furthermore, the \ac{ML} mechanisms controlling the autonomy of these systems need to act and learn reliably in the presence of out-of-distribution events \cite{filos2020can} (for example, collision avoidance systems in autonomous vehicles need to account for sudden extreme events like a pedestrian suddenly crossing the street). Adopting centralized black-box machine learning models here fails to provide strategic decision learning mechanisms capable of acquiring and accounting for uncertainty in a methodical, human-like manner. For instance, such \ac{ML} methods might learn spurious relationships that lead to misleading interpretations. The paucity of datasets exacerbates this issue further and can potentially lead to hazardous outcomes in \emph{high-risk settings} like vehicular environments. Thus, such systems need to be driven by novel trustworthy and real-time \ac{ML} mechanisms. 
To act instantaneously, in contrast to centralized \ac{ML}, multi-agent \ac{RL} mechanisms can be leveraged whereby agents perform local decision making without consuming extra computing and communication resources. Furthermore, to improve the generalizability of the learning agents, such agents can use \ac{HF} bands such as \ac{THz} and \ac{mmWave} frequencies to share their local models and/or data. Adopting such distributed schemes can reduce the bidirectional overhead of centralized \ac{ML} mechanisms and allow for better cooperation, as pointed out in Section IX. Clearly, the success of \ac{CRAS} is contingent upon developing a framework for providing autonomous control for such systems through wireless \ac{THz} systems. This framework will likely be characterized by explainable and low latency intelligence, high resolution sensing, and \ac{HRLLC} bidirectional communications. \subsection{\acp{NTN}} \vspace{-0.1cm} 6G systems are expected to be characterized by ubiquitous 3D coverage, which can be provided by integrated space-air-ground communications. In fact, with the emergence of 5G systems, 3GPP started initiating plans for supporting \ac{NTN} to provide wide coverage and improve scalability \cite{anttonen20193gpp}. Effectively, by 2020-2025, more than 100 geostationary earth orbit satellites and mega-constellations of \ac{LEO}-based high throughput satellite systems with Tbps capacity will be launched \cite{giambene2018satellite}. On the one hand, \emph{compared to lower frequency bands}, \ac{THz} frequencies can provide air-to-air communications at extremely high data rates owing to their ultra high bandwidth. Furthermore, the attenuation of \ac{THz} links and their inability to penetrate the troposphere eliminate terrestrial spectral noise, interference, and jamming \cite{mehdi2018thz}. 
On the other hand, \emph{compared to optical communications}, \ac{THz} links have considerably larger beamwidths, thus making beam positioning and alignment more practical. In fact, fast and precise electronic beam alignment is possible when adopting phased-array antenna architectures, which are a unique feature of \ac{RF} frequencies \cite{nagatsuma2018terahertz}. Additionally, \ac{THz} links do not suffer from molecular absorption at higher altitudes and in free space, which opens the door for longer communication ranges. These advantages render the deployment of \ac{THz} frequencies for air-to-air links a natural choice.\\ \indent The main goals of non-terrestrial networks in 6G will be: a) ensuring service continuity for highly mobile platforms (e.g., airplanes, trains), b) providing ubiquitous access by reaching out to under-served areas, and c) enhancing the network scalability and providing more efficient backhaul. In light of these goals, space aerial communications are envisioned to migrate towards a new class of integrated \acp{UAV} and miniaturized satellites consisting of \ac{LEO} satellites and CubeSats. This migration is a result of the low costs associated with their production and deployment. Subsequently, such satellites have lower power transmission capacities compared to conventional satellite specifications. Furthermore, as shown in Fig.~\ref{fig:apps}, the use of \ac{THz} air-to-air links allows providing higher capacity backhaul links than fiber-optics. Subsequently, the larger communication ranges of air-to-ground links necessitate \ac{mmWave} links. Thus, the successful synergy of \ac{THz} and \ac{mmWave} in air-to-air and air-to-ground links will allow providing continuous links, ubiquitous communication, and high capacity backhauls.\\ \indent While the use of \ac{THz} and \ac{mmWave} links in \acp{NTN} paves the way for multiple opportunities, several challenges and open problems need to be considered. 
\emph{First}, integrated satellite-terrestrial backhaul networks utilizing the \ac{THz} and \ac{mmWave} bands must co-exist with current satellite and terrestrial systems. For instance, such a deployment will lead to interference from terrestrial backhauling transmitters to the satellite backhauling terminals. Thus, the development of flexible spectrum sharing techniques is needed to maintain suitable isolation between different network operations. \emph{Second}, satellites are fast moving and experience larger propagation delays due to their greater physical distances. Here, novel channel models for \ac{THz} and \ac{mmWave} need to take into account these unique propagation environments as well as the considerable Doppler effects compared to terrestrial channel models. \emph{Third}, \ac{THz} links have pencil-beam directionality that necessitates the fine alignment of beams amid the increased Doppler effects and higher speeds in air and space. One approach that could facilitate the beam alignment process in a highly energy efficient fashion is the deployment of \ac{RIS}-enabled \ac{THz} architectures, as discussed in \cite{zhang2020millimeter} and \cite{tekbiyik2020reconfigurable}, and as further elaborated in Section IV-B. In particular, \acp{UAV} or satellites can carry a passive \ac{RIS} to improve air-to-air and air-to-ground links. Subsequently, based on the environmental changes and the minute beam misalignments, the metasurfaces of these carried \acp{RIS} are continuously controlled to maintain reliable \ac{LoS} links. \emph{Fourth}, ubiquitous access through these systems necessitates the use of edge computing and storage. Nonetheless, computing problems will arise due to the on-board limitations of small satellites \cite{tataria20206g}. Thus, it is necessary to develop novel networking frameworks that overcome the computing limitations and provide low latency \ac{THz} communications for \acp{NTN}. 
Other challenges that arise in \acp{NTN} include initial access, spectrum resource management, and cross-layer power management \cite{hu2020joint}. The design of such integrated space-air-ground systems is an open research area. \section{Conclusion and Recommendations} In this paper, we have laid out a comprehensive roadmap outlining the seven defining features of \ac{THz} wireless systems that can guarantee a successful deployment in next-generation wireless systems. In particular, we have first presented a comprehensive overview of the fundamentals of the \ac{THz} frequency bands. Based on these, we have examined the opportunities offered by the quasi-opticality of the \ac{THz} band. Subsequently, we have investigated prospective architectures and essential network breakthroughs that can improve the connectivity of highly directional and \ac{LoS}-dependent \ac{THz} links. Then, to guarantee universal coverage and improve the network's scalability, we have scrutinized the synergy between the \ac{THz} band and lower frequency bands. We have then articulated the advantages, challenges, and opportunities surrounding joint sensing and communication systems, and investigated the techniques needed to guarantee a successful channel estimation and initial access process. Furthermore, we have proposed novel \ac{ML} approaches to mitigate challenges surrounding network design and optimization. Finally, we have shed light on specific auspicious \ac{THz} services that are expected to be among the most anticipated technologies of the next decade. 
Clearly, \ac{THz} bands are still in their nascent stage and hold promise for more revolutionary changes in next-generation wireless networks.\\ \indent Given the insights gathered from this comprehensive tutorial, we conclude with several \emph{recommendations} to achieve a successful \ac{THz} deployment and operation in next-generation wireless systems: \begin{itemize} \item \textbf{Slow-Start:} Because of the short-range links and the low accessibility of \ac{THz} \acp{SBS}, a first step towards the deployment of \ac{THz} wireless systems needs to occur in indoor areas or through the use of \acp{IS} overlaid on existing networks. \item \textbf{Towards Versatile Wireless Systems:} We envision that the success of future wireless \ac{THz} systems can be achieved by migrating towards versatile systems that have joint sensing and communication functions (and possibly other functions such as control and localization), as opposed to pure communication networks. \item \textbf{Towards Holographic Surfaces:} Investing in a massive number of metasurfaces and avoiding cellular boundaries enables richer sensing and more reliable communication links for \ac{THz} systems. \item \textbf{Prominent Role of Integrated Frequency Bands:} The inherently short communication range and intermittent links of \ac{THz} require a high level of coexistence with sub-$\SI{6}{GHz}$ and \ac{mmWave} bands to deliver high-rate communication and high-resolution sensing in outdoor and mobile environments. \item \textbf{Pronounced Role of \ac{ML}:} Enabling successful channel estimation and real-time network optimization in \ac{THz} systems requires novel, out-of-the-box \ac{ML} techniques that can address the peculiar properties of the \ac{THz} channel. \end{itemize} \bibliographystyle{IEEEtran}
\section{Introduction} The intrinsic properties of coherent states can enable efficient and practical classical \cite{giovannetti04,li09,ip08,kikuchi08} and quantum \cite{arrazola14,ghorai19,pirandola17,gisin02,gisin07} communications. When utilizing the phase of coherent states combined with their intensity to encode and transmit information, higher rates of information transfer may be achieved compared to communication schemes using intensity-only encodings \cite{kikuchi08,kikuchi16}. However, channel noise can severely limit the advantage of communications with coherent encodings. In conventional coherent communications, the optical receiver performs a heterodyne measurement with shot-noise-limited sensitivity, corresponding to the quantum-noise limit (QNL). This measurement allows for post-processing of the collected data to estimate channel noise and correct the data to recover the transmitted information \cite{jouguet03,armada98,marie17,kikuchi16,qi15,soh15,wang19,ip07,lygagnon06,bina16}. While current coherent optical communications rely on these conventional approaches, a heterodyne measurement cannot reach the ultimate limits of sensitivity \cite{helstrom76} and information transfer \cite{giovannetti04,giovannetti14,banaszek20}. In contrast to conventional strategies, non-Gaussian receivers can surpass the QNL, providing higher measurement sensitivities for decoding information \cite{mueller12,mueller15,becerra13,becerra15,lee16,ferdinand17,izumi12,izumi20b}. However, in the presence of channel noise, the benefit of non-Gaussian receivers over conventional strategies critically depends on the ability to perform efficient channel-noise tracking. Recent work demonstrated an efficient method of phase tracking for non-Gaussian receivers \cite{dimario20}. 
This phase tracking method estimates and corrects for the phase noise in real time, as required by the strategies used in non-Gaussian receivers, in contrast to the post-processing of collected data possible with heterodyne detection. This method enabled sub-QNL sensitivity in the presence of phase noise, which is particularly damaging for coherent encodings \cite{bina17,dimario20}. In more realistic situations there may be multiple sources of noise present in the communication channel, such as thermal noise \cite{habif19,yuan20}, phase diffusion \cite{dimario19,genoni11,genoni12}, phase noise, and amplitude noise. In such situations the non-Gaussian receiver must perform efficient high-dimensional parameter estimation and tracking in order to maintain the expected sub-QNL performance. However, current methods for single-parameter noise tracking cannot be efficiently scaled to higher dimensions for tracking and correction of multiple sources of noise. Thus, enabling noise tracking for non-Gaussian receivers in channels with complex and dynamic noise requires novel and efficient methods for multi-parameter estimation that scale favorably to higher dimensions. Practical parameter tracking also requires estimation on a timescale that is very short ($\ll1\%$) compared to the correlation time of the channel noise. For example, realistic kilohertz-scale phase noise \cite{lygagnon06,ip07,kikuchi08} would require estimation on at least megahertz time scales, and a Bayesian estimator may not be compatible with this requirement. Machine learning has been shown to be a powerful tool for solving many problems in coherent communications where conventional methods may be inefficient or computationally difficult \cite{carleo19,wallnoefer19,khan19,mata18,chen19}. 
In particular, artificial neural networks \cite{schmidhuber15} have seen broad applications in quantum information \cite{dunjko18, melnikov18, hentschel10, lumino18, fiderer20, cimini19, giordani20, lohani18, steinbrecher19, beer20} and optical communications \cite{thrane17, wu09, zibar16, lohani19, karanov18}, and for channel noise estimation and monitoring \cite{zibar15, khan17, wang17}. While machine learning techniques benefit current communication technologies, their application to parameter tracking for non-Gaussian receivers with sub-shot-noise-limited performance has yet to be investigated. In this work, we numerically investigate a method for multi-parameter channel noise tracking based on a neural network (NN) estimator for a non-Gaussian receiver with sub-QNL sensitivity for state discrimination of quaternary phase-shift-keyed (QPSK) coherent states. We construct a NN as a precise and computationally efficient multi-parameter estimator for tracking the time-varying phase and intensity of the input coherent states, and benchmark its performance against a Bayesian estimator, which is expected to be accurate but is computationally expensive to calculate. We find that, across a broad range of channel noise strengths and input powers, the NN-based method for noise tracking shows similar performance to a Bayesian-based noise tracking approach, and allows the non-Gaussian receiver to maintain sub-QNL sensitivity. This shows that a NN estimator is a viable method for real-time, multi-parameter channel noise tracking in non-Gaussian receivers due to its efficiency and potential scalability to higher dimensions. In Sec. II we describe the non-Gaussian receiver strategy and the NN estimator used for the noise tracking method. In Sec. III we investigate the performance of the channel noise tracking. We discuss the results of the work in Sec. IV. 
\section{Receiver and estimation strategy} We numerically study the use of a NN-based method for noise parameter tracking for non-Gaussian receivers based on adaptive measurements and photon counting. As a proof of concept, we investigate a NN-based method for tracking phase and amplitude channel noise that uses only the data collected during the state discrimination measurement. This NN-based method can be easily extended to perform higher-dimensional parameter estimation for tracking additional sources of noise in the channel, such as thermal noise \cite{habif19,yuan20} or phase diffusion \cite{dimario19,genoni11,genoni12}. In this section, we describe (A) the measurement strategy of the non-Gaussian receiver for coherent state discrimination; (B) how the data from the measurement is used by the NN; and (C) the NN estimator, which can be used for estimation of channel noise from multiple sources. \subsection{State Discrimination Measurement} Figure \ref{strategy}(a) shows a scenario where a receiver attempts to perform coherent state discrimination with an adaptive photon counting measurement with sensitivity below the QNL \cite{becerra13,becerra15}. Dynamic phase and amplitude noise induced by the communication channel degrades the attainable sensitivity of the receiver. Tracking the channel-induced phase and amplitude noise of the input states using the data collected during the discrimination measurement can in principle allow the receiver to correct its strategy and maintain sub-QNL sensitivity. Here, we study a method for channel noise tracking for a receiver based on an adaptive non-Gaussian strategy \cite{becerra15} for phase coherent states $|\alpha_{k}\rangle\in\{|\alpha e^{i2\pi k/M} \rangle \}$, where $k=0,1,\ldots,M-1$. For $M=4$, this corresponds to QPSK coherent states. The state discrimination strategy consists of $L$ adaptive measurement steps. 
Each step performs a hypothesis test of the input state using a local oscillator (LO) to implement a displacement operation $\hat{D}(\beta)$ through interference and single photon counting. In each adaptive step $j=1, 2, \ldots, L$, the receiver attempts to displace the most likely state to the vacuum state by adjusting the LO phase $\mathrm{arg}(\beta) = \theta_{j}\in \{0, \pi/2, \pi, 3\pi/2\}$ with $|\beta|=|\alpha_{k}|$, followed by single photon detection. The detector has a finite photon number resolution (PNR), denoted PNR($m$), where up to $m$ photons can be resolved before it becomes a threshold detector \cite{becerra15}. At the end of the $L$ adaptive steps, the best guess of the receiver $\theta_{\mathrm{disc}}$ for the true input phase is the state with maximum \textit{a posteriori} probability given the entire detection history. As described in Sections II(B) and II(C), the photon counting data from the adaptive measurement steps together with $\theta_{\mathrm{disc}}$ allows the receiver to perform phase and amplitude tracking, where estimates of the channel noise are fed forward to the LO in order to maintain the sub-QNL performance of the receiver. Figure \ref{strategy}(b) shows an example of the error probability for the adaptive non-Gaussian receiver for QPSK states for an average input mean photon number of $\langle \hat{n} \rangle_{0} = |\alpha|^{2}=5.0$, which is proportional to the intensity, averaged over 5000 noise realizations and obtained through Monte-Carlo simulations. For all Monte-Carlo simulations in this study, we assume ideal detection efficiency, zero detector dark counts, a photon number resolution of PNR(10), and $L=10$ adaptive steps. To represent a realistic experiment, we use an interference visibility of the displacement operation of 99.7$\%$ \cite{becerra15}. The blue (orange) points show the error probability for the non-Gaussian receiver with (without) perfect noise tracking. 
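As an illustration, the adaptive displacement-and-counting loop described above can be sketched in a few lines of Python. This is a minimal sketch, not the exact strategy of the receiver: it assumes ideal detection, an equal temporal split of the pulse into $L$ segments, and a simple Poisson photon-counting likelihood for the posterior update.

```python
import numpy as np

def adaptive_discriminate(alpha, k_true, M=4, L=10, pnr=10, rng=None):
    """One adaptive non-Gaussian measurement of a QPSK state (sketch).

    Each of the L steps displaces the currently most likely hypothesis
    toward vacuum and counts photons with a PNR(pnr) detector, then
    updates the posterior with Poisson likelihoods.
    """
    if rng is None:
        rng = np.random.default_rng()
    phases = 2 * np.pi * np.arange(M) / M
    alpha_in = alpha * np.exp(1j * phases[k_true])   # true input state
    log_post = np.zeros(M)                           # flat prior
    for _ in range(L):
        j = int(np.argmax(log_post))                 # current best hypothesis
        beta = alpha * np.exp(1j * phases[j])        # null it: |beta| = |alpha|
        # mean photon number per time segment for each candidate state
        n_bar = np.abs(alpha * np.exp(1j * phases) - beta) ** 2 / L
        d = min(rng.poisson(np.abs(alpha_in - beta) ** 2 / L), pnr)
        log_post += d * np.log(n_bar + 1e-12) - n_bar
        log_post -= log_post.max()                   # keep numbers bounded
    return int(np.argmax(log_post))                  # MAP guess theta_disc
```

With $|\alpha|^{2}=5$, even this toy version of the loop discriminates the four phases with low error, illustrating why the posterior converges quickly over the $L$ detection segments.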
Perfect tracking refers to a situation where the receiver has complete knowledge of the time-dependent input intensity and phase noise induced by the channel. The black points show the error of an ideal heterodyne measurement, performing at the QNL, with perfect tracking \footnote{We note that in conventional optical communications, amplification prior to detection can be used to reduce the probability of error. In those situations, the use of non-Gaussian receivers can further reduce the error rates far beyond what could be achieved by heterodyne measurements. For example, given an input pulse power corresponding to $\langle \hat{n} \rangle_0 = 5.0$, the error reduction for a non-Gaussian receiver ($P_{\mathrm{E,NG}}$) compared to the heterodyne receiver at the QNL equals a factor of QNL$/P_{\mathrm{E,NG}}=17$. At this initial power, the heterodyne receiver would require $\sim$3~dB of noiseless gain to reach the non-amplified non-Gaussian receiver. On the other hand, if the same 3~dB amplifier is used with the non-Gaussian receiver, and compared to the amplified heterodyne receiver, then this ratio grows to QNL$/P_{\mathrm{E,NG}}=210$.}. The dashed lines show the expected error in the absence of noise for a heterodyne (gray) and non-Gaussian (black) receiver. The fact that the error for the non-Gaussian measurement remains below the heterodyne limit (QNL) shows that if the receiver can implement accurate parameter tracking, then its benefit over the QNL can be maintained. Furthermore, any tracking method for the non-Gaussian receiver requires correcting for dynamical noise in real time to ensure sub-QNL performance \cite{dimario20}, in contrast to methods for heterodyne receivers, where estimation and correction can be done in post-processing of the data. \subsection{Detection Matrix} The measurement data collected by the non-Gaussian receiver from the discrimination of $N$ input states is used for parameter estimation. 
For the discrimination of one input state, this data consists of the $L$ photon detections $\{ d_{j} \}_{L}$ and relative phases $\{ \Delta_{j} \}_{L}$ between the LO and input state for each adaptive step $j$. Due to the low error rate achieved by the non-Gaussian measurement, the guess $\theta_{\mathrm{disc}}$ of the phase of the input state corresponds to the true input phase with high probability. Thus, $\theta_{\mathrm{disc}}$ can be used to infer the relative phase $\Delta_{j}$ between the LO and actual input state at every adaptive measurement step $j$ such that $\Delta_{j} = \theta_{j} - \theta_{\mathrm{disc}}$, as in \cite{dimario20}. This state discrimination data $\{ \Delta_{j}, d_{j} \}$ is binned into what we refer to as the detection matrix $\textbf{D}$, which is an $M\times(m+1)$ matrix, where $m$ is the PNR of the receiver. After each measurement, the matrix elements $\mathrm{D}_{k,l}$ are incremented by the total number of times that the number of detected photons in an adaptive step $j$ was $d_{j}=l$ and the relative phase was $\Delta_{j}=2\pi k/M$ for $k \in \{0, 1, ..., M-1\}$. Thus, the rows of the matrix $\textbf{D}$ represent the photon number distributions for the different relative phases $k\pi/2$ between the LO $(\theta_{j})$ and final hypothesis $(\theta_{\mathrm{disc}})$ for QPSK states \cite{dimario20}. After completing $N$ experiments, the matrix $\textbf{D}$ contains $N\times L$ pairs $\{d_{j}, \Delta_{j}\}$ and is used for parameter estimation. Once estimation has been performed, the matrix is reset such that $\mathrm{D}_{k,l}=0$ for all $l$ and $k$. \begin{figure}[!t] \includegraphics[width = 8.5cm]{figure2} \caption{\textbf{Neural network}. The neural network (NN) for noise estimation has 10 layers (8 hidden) with sizes described in Table \ref{nn_param} in Appendix A. 
The NN inputs are a flattened version of the detection matrix \textbf{D} normalized across each row, and the LO intensity $\mathcal{B}$ for the measurements whose data is contained in the detection matrix. The outputs of the NN are estimates for the phase offset $\hat{\phi}_{NN}$ and input intensity $\hat{\mathcal{A}}_{NN}$.} \label{nn_fig} \end{figure} In order to extract information from $\textbf{D}$ to correct for channel noise affecting the measurement, the receiver must utilize a particular estimator. A Bayesian estimator, which uses the full likelihood functions, will yield estimates for the channel noise with small uncertainty \cite{lehmann98}. However, this estimator is computationally demanding to calculate. Since the estimation and correction of the channel noise for non-Gaussian receivers must be performed in real time, a Bayesian method may be incompatible with applications requiring high-bandwidth sub-QNL receivers. Therefore, enabling practical implementations of non-Gaussian receivers requires an estimator that is both precise and computationally efficient, while being easily scalable to higher dimensions to track multiple sources of channel noise. For example, the simple case of single-parameter estimation for phase tracking for non-Gaussian measurements has been experimentally demonstrated \cite{dimario20} using a simple estimator, which is calculated in real time with minimal computational resources. \subsection{Neural Network Estimator} We construct a NN as a multi-parameter estimator which maps the data collected from the state discrimination measurement to estimates for the input intensity and phase offset. We compare the performance of the NN estimator to a Bayesian estimator. The Bayesian-based method for noise tracking serves as a benchmark and is calculated from the same state discrimination measurement data, i.e., the detection matrix $\textbf{D}$. 
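To make the binning of Sec. II(B) concrete, a minimal sketch of how one measurement's record $\{\Delta_{j}, d_{j}\}$ updates $\textbf{D}$ could look as follows; the function name and the row/column index convention are illustrative, not the paper's implementation.

```python
import numpy as np

def update_detection_matrix(D, detections, lo_phases, theta_disc, M=4):
    """Accumulate one measurement's L photon counts into D (sketch).

    Rows of the M x (m+1) matrix D index the relative phase
    Delta_j = theta_j - theta_disc in units of 2*pi/M; columns index
    the detected photon number, clipped at the PNR limit m.
    """
    m = D.shape[1] - 1
    for theta_j, d_j in zip(lo_phases, detections):
        k = int(round((theta_j - theta_disc) * M / (2 * np.pi))) % M
        D[k, min(d_j, m)] += 1
    return D
```

After $N$ such updates, normalizing each row of $\textbf{D}$ approximates the photon number distributions that form the estimator input; resetting after estimation is simply `D[:] = 0`.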
Although we study phase and amplitude tracking, a properly trained NN can in principle be used as an efficient high-dimensional estimator for tracking many sources of communication channel noise. Figure \ref{nn_fig} shows a diagram of the NN architecture for the proposed noise tracking method, which has 10 layers (8 hidden), each with a Leaky ReLU activation function \cite{maas13}. To obtain the input for the NN, the detection matrix $\textbf{D}$ is first normalized across each row, and then arranged into a one-dimensional vector $(D_{k,l} \rightarrow D_{k(m+1) + l})$. This vector, together with the LO intensity for the previous $N$ measurements, forms the input to the NN. For ease of notation, we denote the time-dependent input intensity of the QPSK coherent states as $\mathcal{A}(\tau) = |\alpha|^{2}(\tau)$, where $\tau$ represents time discretized into steps of $\Delta T$, and $1/\Delta T$ is the experimental repetition rate. For a single state discrimination measurement at time $\tau$, the intensity of the LO is denoted as $\mathcal{B}(\tau)=|\beta|^{2}(\tau)$. The NN outputs, denoted as $\hat{\mathcal{A}}_{NN}$ and $\hat{\phi}_{NN}$, are raw estimates of the input intensity $\mathcal{A}(\tau)$ and relative phase offset $\phi(\tau)$ during the previous $N$ state discrimination measurements. The NN is trained on $5\times10^{5}$ samples of the state discrimination measurement generated from Monte-Carlo simulations of the experiment in Python. For training the NN, we use the Tensorflow library \cite{abadi16} with a weighted mean squared error cost function (see Appendix A for details) \cite{sze17, jain96, goodfellow16, geron17}. The trained NN is then included in the Monte-Carlo simulations to perform parameter tracking on the state discrimination data, such that the estimates from the NN are fed forward to the LO to correct the measurement. 
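Structurally, the estimator is a plain fully connected network. The following NumPy sketch of its forward pass uses illustrative layer sizes and random weights, not the trained sizes of Table \ref{nn_param}; it only shows the data flow from the flattened detection matrix to the two raw estimates.

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    # Leaky ReLU activation used in every hidden layer
    return np.where(x > 0.0, x, slope * x)

def nn_forward(x, weights, biases):
    """Forward pass of the fully connected estimator (sketch).

    x concatenates the row-normalized, flattened detection matrix with
    the LO intensity; the final linear layer emits the two raw
    estimates (A_hat, phi_hat).
    """
    for W, b in zip(weights[:-1], biases[:-1]):
        x = leaky_relu(W @ x + b)
    return weights[-1] @ x + biases[-1]

# illustrative sizes: input 4*11 + 1 = 45 features, two hidden layers, 2 outputs
rng = np.random.default_rng(0)
sizes = [45, 64, 64, 2]
weights = [rng.normal(scale=0.1, size=(n_out, n_in))
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
estimates = nn_forward(rng.normal(size=45), weights, biases)
```

In the actual method the weights come from training on the $5\times10^{5}$ Monte-Carlo samples; only the output pair $(\hat{\mathcal{A}}_{NN}, \hat{\phi}_{NN})$ is fed forward to the LO.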
\section{Results} We simulate the performance of the noise tracking method based on the NN estimator for a variety of scenarios with amplitude and phase noise for \textit{average} input intensities $\langle \hat{n} \rangle_{0} = \mathcal{A}(0) = \langle \mathcal{A}(\tau) \rangle$ equal to 2, 5, and 10. Here $\langle \cdot \rangle$ denotes the average across all noise realizations at the time step $\tau$. We benchmark the NN against a Bayesian estimator where the prior probability distribution for both parameters is uniform \cite{dimario20}. For all simulations, we use a single NN to perform multi-parameter estimation and noise tracking across a range of input powers and noise parameter regimes. As a model for phase noise $\phi(\tau)$, we simulate a discrete Gaussian random walk in phase \cite{dimario20}. A single step of this walk has a variance of $\sigma_{1}^{2} = 2\pi \Delta\nu \Delta T$, where $\Delta\nu$ is the phase noise bandwidth due to finite laser linewidth \cite{lygagnon06,ip07,kikuchi08} or other phase noise sources \cite{kikuchi08,khanzadi15}. The experimental repetition rate is set to $1/\Delta T=100~\mathrm{MHz}$, such that $\Delta T = 10~\mathrm{ns}$, to represent a feasible, near-term communication bandwidth for non-Gaussian receivers \cite{holzman19}. To model amplitude noise of the input states, we simulate noise in the input intensity $\mathcal{A}(\tau)$. As a noise model, we use an Ornstein--Uhlenbeck (OU) process \cite{uhlenbeck30,gillespie96} whose discretized stochastic differential equation is given by \begin{equation} \Delta \mathcal{A}(\tau) = \gamma [\langle \hat{n} \rangle_{0} - \mathcal{A}(\tau)] \Delta T + \Sigma \sqrt{\Delta T} dW \label{ou_eq} \end{equation} where $\gamma$ is the amplitude noise bandwidth, $\Sigma$ controls the deviation of the walks, and $dW$ denotes a Wiener process. 
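The discretized OU update of Eq. (\ref{ou_eq}) can be simulated directly with an Euler-Maruyama loop. In this sketch, $\Sigma$ is recovered from the target long-time variance via $\Sigma_{\infty}^{2}=\Sigma^{2}/(2\gamma)$, and a standard normal draw plays the role of the Wiener increment; the function name and argument layout are our own.

```python
import numpy as np

def simulate_intensity_noise(n0, gamma, sigma_inf2, dT, steps, rng):
    """Euler-Maruyama walk for the input intensity A(tau) per Eq. (1).

    n0         : average mean photon number <n>_0
    gamma      : amplitude noise bandwidth (Hz)
    sigma_inf2 : target long-time variance Sigma_inf^2
    dT         : experimental repetition period (s)
    """
    Sigma = np.sqrt(2.0 * gamma * sigma_inf2)  # from Sigma_inf^2 = Sigma^2/(2*gamma)
    A = np.empty(steps)
    A[0] = n0
    for t in range(1, steps):
        dW = rng.normal()                      # standard Wiener increment
        A[t] = A[t - 1] + gamma * (n0 - A[t - 1]) * dT + Sigma * np.sqrt(dT) * dW
    return A
```

For example, $n_0=5$, $\gamma=25$ kHz, $\Sigma_{\infty}^{2}=1.5$, and $\Delta T=10$ ns reproduce the drifting intensity traces of the kind shown in the figures, mean-reverting toward $\langle \hat{n} \rangle_{0}$ on a $1/\gamma$ timescale.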
The long-time variance of $\mathcal{A}(\tau)$ is given by $\Sigma^{2}_{\infty} = \Sigma^{2}/(2\gamma)$, and the maximum long-time variance we implement is $\Sigma_{\infty}^{2}=\{0.25, 1.5, 6.0\}$ for $\langle \hat{n} \rangle_{0}=\{2, 5, 10\}$, respectively, corresponding to a relative noise level of $\Sigma_{\infty}/\langle \hat{n} \rangle_{0} \approx 0.25$. \begin{figure}[!t] \includegraphics[width = 8.25cm]{figure3} \caption{\textbf{Probability of error as a function of time}. (a) Error probability as a function of time for $\langle \hat{n} \rangle_{0}=5.0$ when the phase noise in (b) and the intensity noise in (c) are applied and tracked. Here the noise parameters are: $\gamma=25$~kHz, $\Sigma_{\infty}^{2}=1.5$, and $\Delta\nu=2$~kHz. Blue (orange) points show the error for the NN-based (Bayesian-based) estimator. Green and black points show the error with no correction and perfect correction, respectively. Gray points show the effective QNL of a perfectly corrected heterodyne measurement. Black and gray dashed lines show the error for a non-Gaussian and heterodyne receiver, respectively, with no noise.} \label{mpn5_ex} \end{figure} \begin{figure*}[t] \includegraphics[width = 0.90\textwidth]{figure4} \caption{\textbf{Phase noise tracking.} Error probability as a function of phase noise bandwidth (BW) $\Delta\nu$ without (a)--(c) and with (d)--(f) amplitude noise of $\gamma=25$~kHz and $\Sigma_{\infty}^{2}$ = 0.25, 1.5, and 6.0 for average intensities $\langle \hat{n} \rangle_{0}$ = 2, 5, and 10, respectively. Blue and orange lines show the performance of the noise tracking methods based on NN and Bayesian estimators, respectively. The orange dashed line shows the error probability for a non-Gaussian receiver with no correction. 
The purple and gray dashed lines show the error probability for a non-Gaussian and heterodyne measurement with perfect correction, respectively.} \label{lwscan} \end{figure*} After $N$ state discrimination measurements, estimates are calculated from the detection matrix \textbf{D}. To implement correction of the receiver, we set the LO intensity $\mathcal{B}(\tau)$ to the current estimated value $\hat{\mathcal{A}}(\tau)$ of the input intensity $\mathcal{A}(\tau)$. For phase tracking, we add a correction $\delta(\tau)$ to the LO phase such that $\mathrm{arg}\{\beta\} = \theta_{j} + \delta(\tau)$. The correction $\delta(\tau)$ is equal to the cumulative sum of the individual estimates $\hat{\phi}$ up to the current time step $\tau$, because the receiver always estimates only the phase shift accumulated during the previous $N$ experiments. The phase and intensity corrections remain fixed at these values for $N$ experiments, until new estimates are made and applied to the LO. To reduce the uncertainty in the phase and intensity estimates for noise tracking, we implement a Kalman filter \cite{kalman60} for both estimates (see Appendix B for details). The inputs to the filter are the current raw estimates for the input intensity and phase offset $(\hat{\mathcal{A}}_{NN}, \hat{\phi}_{NN})$, and the filter outputs are updated, filtered estimates for the intensity $\hat{\mathcal{A}}(\tau)$ and phase $\hat{\phi}$. The same procedure is used to obtain filtered Bayesian estimates from the raw estimates $(\hat{\mathcal{A}}_{B}, \hat{\phi}_{B})$. To implement the Kalman filter, we assume that the raw NN estimates are Gaussian distributed, and use Monte-Carlo simulations with fixed phase offset and input intensity to empirically obtain the variance of the NN estimator. 
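For reference, one predict-update cycle of a scalar Kalman filter of the kind used here can be written in a few lines. This is a hedged sketch assuming a random-walk process model for the drifting parameter; the exact filter settings used in this work are those of Appendix B.

```python
def kalman_step(x, P, z, Q, R):
    """One scalar Kalman filter step (sketch).

    x, P : current filtered estimate and its variance
    z    : new raw estimate from the NN (or Bayesian) estimator
    Q    : assumed process noise of the drifting channel parameter
    R    : empirically obtained variance of the raw estimator
    """
    P = P + Q                 # predict: parameter drifts as a random walk
    K = P / (P + R)           # Kalman gain
    x = x + K * (z - x)       # update: blend prediction with raw estimate
    P = (1.0 - K) * P
    return x, P
```

Feeding the raw estimates $(\hat{\mathcal{A}}_{NN}, \hat{\phi}_{NN})$ through such a filter, with $R$ obtained from the Monte-Carlo calibration described above, yields the smoothed values that are applied to the LO.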
We note that although we study two particular models for phase and amplitude noise, we believe this NN-based tracking method can be applied to a variety of noise forms, such as power-law amplitude noise or damping noise. To study different noise models, the NN would need to be re-trained using data generated from the new model, and the noise dynamics would need to be incorporated into the Kalman filter accordingly. \begin{figure*}[t] \includegraphics[width = 0.90\textwidth]{figure5} \caption{\textbf{Amplitude noise tracking.} Error probability as a function of amplitude noise bandwidth (BW) $\gamma$ for $\langle \hat{n} \rangle_{0}=\{2, 5, 10\}$ without and with phase noise with bandwidth $\Delta\nu=5~\mathrm{kHz}$. Purple and gray dashed lines show the error for a non-Gaussian and heterodyne measurement with perfect correction, respectively. Beyond $\gamma\approx 10^{7}$~Hz, the amplitude noise is effectively random across the $N$ experiments.} \label{thscan} \end{figure*} Figure \ref{mpn5_ex} shows (a) the error probability of the non-Gaussian receiver with noise tracking for 1000 different realizations with both phase $(\Delta\nu=2~\mathrm{kHz})$ and intensity $(\gamma=25~\mathrm{kHz}, \Sigma_{\infty}^{2}=1.5)$ noise, shown in (b) and (c), respectively, for an input intensity $\langle \hat{n} \rangle_{0}=5.0$ and $N=10$ experiments per estimation period. The blue (orange) points show the results of the noise tracking method based on the NN (Bayesian) estimator. The black points show the error probability with perfect correction, which corresponds to the case where the receiver has complete knowledge of the phase and intensity noise, so that $\mathcal{B}(\tau)=\mathcal{A}(\tau)$ and $\delta(\tau)=\phi(\tau)$. The green points show the error probability of an uncorrected non-Gaussian measurement, and the gray points show that of an ideal heterodyne measurement with perfect phase tracking (equivalent to $\phi(\tau)=0$). 
We note that even though the receiver may have perfect knowledge of the noise, the overall effect of the amplitude noise is to increase the error probability. This is because input powers smaller than the average power $(\mathcal{A}(\tau)<\langle \hat{n} \rangle_{0})$ increase the error more than larger powers $(\mathcal{A}(\tau)>\langle \hat{n} \rangle_{0})$ reduce it. The dashed black and gray lines show the error for an adaptive non-Gaussian measurement and an ideal heterodyne measurement with no noise, respectively. By comparing the error of the non-Gaussian measurement with perfect correction (black points) to the black dashed reference line, we observe that non-Gaussian measurements are more sensitive to amplitude noise than a heterodyne measurement (gray points vs. gray dashed line), even when they are perfectly corrected. We observe that the NN-based tracking method performs equivalently to the Bayesian method, and both can allow the receiver to maintain an error probability significantly below the QNL. This result demonstrates the capabilities of a NN for efficient and reliable noise tracking for non-Gaussian receivers for state discrimination. We study the robustness of the NN-based method in scenarios with different noise strengths and bandwidths in the phase and amplitude. For these studies, we use the heterodyne measurement with perfect phase tracking as the limit for conventional strategies, serving as the effective QNL when the same noise is applied to both receivers. In this section, we study (A) the error probability as a function of phase noise with fixed amplitude noise, and (B) the error probability when the amplitude noise levels are varied in the presence of phase noise with a fixed bandwidth. 
\subsection{Phase noise with different bandwidths} We study the performance of the noise tracking method based on the NN estimator as a function of the phase noise bandwidth $\Delta\nu$ for fixed values of amplitude noise $\Sigma_{\infty}^{2}$ and $\gamma$. We compare these results to the tracking method based on a Bayesian estimator, as well as to a perfectly corrected non-Gaussian measurement. We use different amplitude noise parameters for different values of $\langle \hat{n} \rangle_{0}$ such that the relative amplitude noise strength $\Sigma_{\infty}/\langle \hat{n} \rangle_{0}$ is constant. For average intensities of $\langle \hat{n} \rangle_{0}=2$, 5, and 10, we simulate 250, 250, and 500 different realizations of the noise, respectively. The simulations are run for $2\times10^{3}$ time bins of $N=10$ experiments each, giving a total of $2\times10^{4}$ individual experiments per noise realization. We calculate the average error across all realizations for all time bins. Figure \ref{lwscan} shows the average error probability as a function of the bandwidth $\Delta\nu$ for intensities $\langle \hat{n} \rangle_{0}=$ 2, 5, and 10. Figures \ref{lwscan}(a)--\ref{lwscan}(c) have no amplitude noise ($\gamma=0$, $\Sigma^{2}_{\infty}=0$), and \ref{lwscan}(d)--\ref{lwscan}(f) have $\gamma=25$~kHz and $\Sigma_{\infty}^{2}=\{0.25, 1.5, 6.0\}$, respectively, corresponding to a relative strength of $\Sigma_{\infty}/\langle \hat{n} \rangle_{0} \approx 0.25$. The performance of the noise tracking method based on the NN estimator (blue) is equivalent to that of the Bayesian-based method (orange), while being computationally efficient to implement. The purple and gray dashed lines show the average error of the non-Gaussian and ideal heterodyne receivers with perfect parameter tracking, respectively. 
We observe that for all the investigated average input intensities $\langle \hat{n} \rangle_{0}$, the NN-based method performs as well as the Bayesian method, both with and without amplitude noise. The NN-based method can enable the non-Gaussian receiver to surpass the QNL up to a phase noise bandwidth of $\Delta\nu\approx15$~kHz, even in the presence of significant amplitude noise. We note that the situation in Figs. \ref{lwscan}(a)--\ref{lwscan}(c) with no amplitude noise is equivalent to the single-parameter problem of phase tracking for non-Gaussian receivers, as demonstrated in \cite{dimario20}. For large phase noise bandwidths at $\langle \hat{n} \rangle_{0}=5$ and 10, we observe that a NN estimator can perform slightly better than a Bayesian estimator. We believe this is due to the relatively small number of samples ($N=10$) from which estimates are made. In this regime with few samples for estimation, there may be estimators that perform better than the Bayesian estimator, which is only asymptotically optimal in the limit of many samples. Another potential cause of this effect is that in the training process of the NN, the relative weight between the error in phase estimates and the error in mean photon number estimates can be adjusted. This freedom may allow for fine tuning of the overall training error to achieve a slightly better overall performance, in terms of error probability, for specific channel models. \subsection{Amplitude noise with different bandwidths} To investigate the effect of the amplitude noise bandwidth $\gamma$, we fix the long-time variance $\Sigma_{\infty}^{2}$ and the phase noise bandwidth $\Delta\nu$. This allows for studying the performance of the NN-based method when the amplitude noise bandwidth $\gamma$ ranges from much smaller to much larger than the bandwidth for parameter estimation $1/(N\Delta T)$. 
Figure \ref{thscan} shows the average probability of error for different amplitude noise bandwidths $\gamma$ without and with phase noise of bandwidth $\Delta\nu=5$~kHz, for $\langle \hat{n} \rangle_{0}=\{2, 5, 10\}$ with $\Sigma_{\infty}^{2}=\{0.25, 1.5, 6.0\}$, respectively. Blue (orange) lines show the error rates for the NN (Bayesian) based tracking method. Purple and gray dashed lines show the error probability for a non-Gaussian and a heterodyne measurement with perfect correction, respectively. We find that the NN-based method performs closely to the Bayesian-based method, and enables the receiver to achieve sub-QNL error rates across a broad range of amplitude noise bandwidths, even in the presence of phase noise. We note that for intensity $\langle \hat{n} \rangle_{0}=2$ in Fig. \ref{thscan}(a), the errors for both noise tracking methods are below the perfectly corrected non-Gaussian measurement when $\Delta\nu=0$. At low input powers, strategies that optimize the LO intensity $(|\beta|^{2}>|\alpha|^{2})$ yield lower error probabilities than when $|\beta|^{2} = |\alpha|^{2}$ \cite{ferdinand17}. Due to the small number of samples $(N\times L)$ used for estimation, the NN and Bayesian estimators have a bias in $\hat{\mathcal{A}}_{B, NN}$, such that $\mathcal{B}(\tau)>\mathcal{A}(\tau)$. The effect of this bias in the intensity estimates $\hat{\mathcal{A}}_{B, NN}$ is that the corrected measurement unintentionally approximates an optimized strategy \cite{ferdinand17}. As a result, the corrected receiver with either the NN- or Bayesian-based method can reach error probabilities below those of a perfectly corrected nulling receiver, for which $\mathcal{B}(t)=\mathcal{A}(t)$. Further investigation is needed to determine the capabilities of NN-based noise tracking for optimized non-Gaussian receivers \cite{ferdinand17}.
The performance of the non-Gaussian receiver also depends on the long-time variance $\Sigma_{\infty}^{2}$ of the amplitude noise. In our main results, $\Sigma_{\infty}^{2}$ was set to represent a ``worst-case'' scenario of $\approx 25\%$ relative amplitude noise (see Fig. \ref{mpn5_ex}). Appendix C describes our study of noise tracking for amplitude noise with different long-time variances $\Sigma_{\infty}^{2}$. We observe that in the absence of phase noise, both the NN- and Bayesian-based tracking methods enable the receiver to perform below the QNL, and close to the performance of perfect noise correction. In the presence of phase noise with bandwidth $\Delta\nu=5$~kHz, the sub-QNL performance of the receiver is maintained, and the effect of increasing $\Sigma_{\infty}^{2}$ is small compared to the effects of increasing the phase or amplitude bandwidths. \section{Discussion} The numerical studies in this work show that methods for channel noise tracking based on NN estimators are able to accurately track dynamic phase and amplitude noise, allowing an adaptive non-Gaussian measurement to maintain performance below the QNL. We note that in the asymptotic limit of many samples available for parameter estimation, a Bayesian estimator achieves minimal mean-square error \cite{lehmann98}. However, when noise tracking and correction need to be realized in real time to reduce errors in the state discrimination measurement and generate reliable data for parameter estimation, there is always a limited number of samples from which estimates are made. In these situations, there is a trade-off between estimation precision and noise tracking bandwidth. Other estimators, such as a NN, may balance these two parameters better than a Bayesian estimator for increased precision with finite samples. This property can enable efficient methods for high-dimensional parameter tracking of complex dynamic channel noise.
The computational efficiency of the NN estimator is rooted in the small number of multiplications required to calculate a single estimate, the limited memory requirements, and the fact that the NN method does not explicitly depend on the value of $N$ or the number of adaptive steps $L$, as opposed to the Bayesian approach. For example, the NN-based estimator in this work requires $\approx5500$ multiplications. On the other hand, a Bayesian estimator using likelihood functions discretized into a $100 \times 100$ grid would require $100^{2}\times N \times L = 10^{6}$ multiplications, which may not be compatible with devices such as FPGAs \cite{becerra15,dimario20}. While there may be methods to reduce this computational cost, the Bayesian estimator would also require storage of the full photon counting likelihood functions, putting stringent requirements on the device memory. For example, the $100\times100$ grid for the Bayesian estimator with 16-bit precision would require 800 kB of memory simply to store the likelihood functions. Moreover, to extend the noise tracking method to the estimation of three noise parameters, a NN would simply require a single added output and proper retraining, while a Bayesian estimator could require $10^{8}$ multiplications and 80 MB of memory. The robustness and versatility of the NN-based noise tracking method described here show that NN-based methods can be practical and very useful tools for non-Gaussian receivers. In addition, other machine learning techniques, such as reinforcement learning \cite{bilkis20, foesel18}, could provide further benefits to these non-conventional measurements when the best detection strategy is unknown or infeasible to calculate.
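The multiplication counts above follow from simple arithmetic. In the sketch below, the NN layer widths and the value $L=10$ are illustrative assumptions chosen to land near the quoted figures; they are not the exact architecture or protocol of this work.

```python
# Bayesian estimator: evaluate a 100x100 discretized likelihood grid for each
# of N photon-counting outcomes in each of L adaptive steps.
grid, N, L = 100, 10, 10        # L = 10 is an assumption consistent with 10**6
bayes_mults = grid**2 * N * L   # multiplications per estimate

# NN estimator: multiplications in a small fully connected network equal the
# total number of weight-matrix entries; the widths below are illustrative
# choices that land near the ~5500 multiplications quoted in the text.
widths = [10, 50, 50, 50, 2]
nn_mults = sum(a * b for a, b in zip(widths[:-1], widths[1:]))

print(bayes_mults, nn_mults)
```

The roughly two-orders-of-magnitude gap between the two counts is insensitive to the exact layer widths assumed.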
We anticipate that neural networks and machine learning will greatly benefit non-Gaussian measurements, just as these techniques have proven worthwhile for conventional measurement strategies \cite{thrane17, wu09, zibar15, zibar16, khan17, wang17, lohani19, karanov18}. \section{Conclusion} We investigate the use of a neural network (NN) as a computationally efficient multi-parameter estimator of dynamic channel noise, enabling robust noise tracking for adaptive non-Gaussian measurements for coherent state discrimination. We study the NN-based tracking method for simultaneous amplitude and phase noise and find that the NN estimator can perform as well as a more complex Bayesian estimator. This performance is observed across a broad range of noise strengths and bandwidths for different average powers of the input coherent states. The non-Gaussian receiver used in this study can have broad applications in classical \cite{lee16,becerra15} and quantum communication \cite{liao18,guan16} due to its ability to attain sensitivities beyond the quantum noise limit (QNL). Moreover, the proposed method for noise tracking uses only the data collected during the state discrimination measurement, without requiring extra resources such as strong reference pulses. This makes the receiver and the proposed method for noise tracking well suited for energy-efficient, low-power communications. Thus, NN-based methods are ideal candidates for real-time tracking of multiple sources of channel noise for non-Gaussian receivers, allowing them to maintain their sub-QNL sensitivity in the presence of complex dynamic channel noise. \begin{acknowledgments} This work was supported by the National Science Foundation (NSF) (PHY-1653670, PHY-1521016, PHY-1630114). The source code is available at: github.com/UNM-QOlab/phase\_amp\_tracking\_nn \end{acknowledgments}
\section{Supplementary Materials for Online hyper-parameter optimization by real-time recurrent learning} \section{Background} \subsection{Iterative optimization} A standard technique for training large-scale machine learning models, such as deep neural networks (DNN), is stochastic optimization.
In general, this takes the form of parameters changing as a function of the data and the previous parameters, iteratively updating until convergence: \begin{align} \boldsymbol{\theta}_{\tau+1} = g(\boldsymbol{\theta}_{\tau}, \mathbf{B}_{\tau}; \boldsymbol{\varphi}), \label{eq:update_rule} \end{align} where $\boldsymbol{\theta}$ denotes the parameters of the model, $g$ is the update rule reflecting the learning objectives, ${\boldsymbol \varphi}$ denotes the hyperparameters, and $\mathbf{B}_{\tau}$ is the random subset of training data (the minibatch) at iteration $\tau$. As a representative example, when training a DNN $f$ using stochastic gradient descent (SGD), learning follows the loss gradient approximated using a random subset of the training data at each time step \begin{align} \label{eq:sgd} \boldsymbol{\theta}_{\tau+1} \leftarrow \boldsymbol{\theta}_{\tau} - \frac{\alpha}{|\mathbf{B}_\tau|} \sum_{(x,y) \in \mathbf{B}_\tau} \nabla_{\theta} l(y, f(x; \boldsymbol{\theta}_{\tau})), \end{align} where $l(\cdot)$ is the per-example loss function and $\alpha$ is the learning rate. In this case, $g(\cdot)$ is a simple linear update based on the last parameters and the scaled gradient. Other well-known optimizers, such as RMSprop~\citep{Tieleman2012} or Adam~\citep{Kingma2015}, can also be expressed in this general form. \subsection{Recurrent neural networks} \label{sec:rtrl} An RNN is a time series model. At each time step $t = 1,\dots,T$, it updates the memory state $\mathbf{h}_t$ and generates the output $\mathbf{o}_{t+1}$ given the input $\mathbf{x}_t$: \begin{align} \label{eq:rnn} \left( \mathbf{h}_{t+1}, \mathbf{o}_{t+1} \right) = r(\mathbf{h}_{t}, \mathbf{x}_t; \boldsymbol{\phi}), \end{align} where the recurrent function $r$ is parametrized by $\boldsymbol{\phi}$. Learning this model involves optimizing the total loss over $T$ steps with respect to $\boldsymbol{\phi}$ by gradient descent.
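As a minimal sketch of the SGD case of Eq.~\ref{eq:sgd} viewed as an instance of the generic rule in Eq.~\ref{eq:update_rule} (the helper names and the quadratic loss in the usage note are purely illustrative):

```python
import numpy as np

def sgd_update(theta, batch, grad_fn, alpha):
    """One step of the generic rule theta_{tau+1} = g(theta_tau, B_tau; phi),
    instantiated as SGD with hyperparameters phi = (alpha,): average the
    per-example gradients over the minibatch and step against them."""
    g = np.mean([grad_fn(theta, x, y) for x, y in batch], axis=0)
    return theta - alpha * g
```

For a per-example squared loss $l(y, \theta^{\top} x) = \tfrac{1}{2}(\theta^{\top} x - y)^2$, `grad_fn` would return $(\theta^{\top} x - y)\,x$.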
BPTT is typically used to calculate the gradient by unrolling the network dynamics and backward differentiating through the chain rule: \begin{align*} \nabla_\phi \mathcal{L} &= \sum^T_{t=1} \Big(\sum^T_{s\geq t+1}\frac{\partial \mathcal{L}_{s}}{\partial \mathbf{h}_{t+1}} \frac{\partial \mathbf{h}_{t+1}}{\partial \mathbf{h}_{t}} + \frac{\partial \mathcal{L}_t}{\partial \mathbf{h}_{t}} \Big) \frac{\partial \mathbf{h}_t}{\partial \boldsymbol{\phi}}, \end{align*} where $\mathcal{L}_t$ is the instantaneous loss at time $t$, and the double summation indexes all losses and all the applications of the parameters. Because the time complexity of BPTT grows with $T$, it can be challenging to compute the gradient using BPTT when the temporal horizon is long. In practice, this is mitigated by truncating the computational graph (`truncated BPTT'). \subsection{Real-time recurrent learning} RTRL \citep{Williams1989} is an online alternative to BPTT which, instead of rolling out the network dynamics, stores a set of summary statistics of the network dynamics that are themselves updated online as new inputs come in. To keep updates causal, it uses a forward view of the derivatives, instead of backward differentiation, with the gradient computed as \begin{align} \nabla_{\boldsymbol{\phi}} \mathcal{L} & = \sum^T_{t=1} \frac{\partial \mathcal{L}_{t+1}}{\partial \mathbf{h}_{t+1}} \Big(\frac{\partial \mathbf{h}_{t+1}}{\partial \mathbf{h}_{t}}\frac{\partial \mathbf{h}_{t}}{\partial \boldsymbol{\phi}} +\frac{\partial \mathbf{h}_{t+1}}{\partial \boldsymbol{\phi}_t} \Big).
\label{eq:rtrl} \end{align} The Jacobian $\Gamma_{\tau}=\frac{\partial \mathbf{h}_{\tau}}{\partial \boldsymbol{\phi}}$ (also referred to as the {\em influence matrix}) dynamically updates as \begin{align*} \Gamma_{\tau} = D_\tau \Gamma_{\tau-1} + G_{\tau-1}, \end{align*} where $D_\tau = \frac{\partial \mathbf{h}_{\tau}}{\partial \mathbf{h}_{\tau-1}}$ and $G_\tau = \frac{\partial \mathbf{h}_{\tau}}{\partial \boldsymbol{\phi}_{\tau-1}}$. In this way, the complexity of learning no longer depends on the temporal horizon. However, this comes at the cost of memory requirements $\mathcal{O}(|\boldsymbol{\phi}||\mathbf{h}_t|)$, which makes RTRL rarely used in ML practice. \section{Experiments} We conduct empirical studies to assess the proposed algorithm's performance and computational cost. We compare it to widely used hyperparameter optimization methods on standard learning problems such as MNIST and CIFAR10 classification. To better understand the properties of OHO, we examine hyperparameter dynamics during training and how they are affected by various meta-hyperparameter settings. We explore layerwise hyperparameter sharing as a potential way to trade off between computational cost and the flexibility of the learning dynamics. Lastly, we analyze the stability of the optimization procedure with respect to meta-hyperparameters. For the MNIST~\citep{LeCun1998MNIST} experiments, we use 10,000 out of the 60,000 training examples as the validation set for evaluating OHO outer-loop gradients. We assess two architectures: a 4-layer neural network and a 4-layer convolutional neural network, with ReLU activations and $5\times5$ kernels. We use 128 units and 128 kernels at each layer for the two networks, respectively. For the CIFAR10 dataset~\citep{Krizhevsky2009CIFAR}, we split the images into 45,000 training, 5,000 validation, and 10,000 test images and normalize them such that each pixel value ranges between [0, 1].
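As a side illustration of the RTRL recursion $\Gamma_{\tau} = D_\tau \Gamma_{\tau-1} + G_{\tau-1}$ introduced above, the following toy sketch carries the influence of a scalar recurrent weight forward in time and verifies it against finite differences; it is illustrative code, not part of the experimental setup.

```python
import numpy as np

def rtrl_scalar(w, xs, h0=0.0):
    """Forward-mode (RTRL) influence for a scalar RNN h_{t+1} = tanh(w*h_t + x_t).

    Carries Gamma_t = dh_t/dw alongside the state, using the recursion
    Gamma_{t+1} = D_{t+1} * Gamma_t + G_t, with D = (1 - h_new**2) * w and
    G = (1 - h_new**2) * h_prev for this particular transition function.
    """
    h, gamma = h0, 0.0
    for x in xs:
        h_new = np.tanh(w * h + x)
        d = 1.0 - h_new**2          # tanh'(pre-activation)
        gamma = d * w * gamma + d * h
        h = h_new
    return h, gamma
```

Because the influence is updated forward in time, the memory cost is independent of the sequence length, mirroring the scaling argument above.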
We use the ResNet18 architecture~\citep{He2016ResNet} and apply random cropping and random horizontal flipping for data augmentation \citep{Shorten2019}. For both MNIST and CIFAR10, the meta optimizer is SGD, with meta learning rate $0.000005$ and initial weight decay coefficient $0$. We set the initial learning rates to $0.001$ and $0.01$, and the validation batch size to $100$ and $1000$, for MNIST and CIFAR10, respectively. \subsection{Performance} We compare OHO against hyperparameter optimization by uniform random search and Bayesian optimization, in terms of the number of training runs and the computation time required to achieve a predefined level of generalization performance (test loss $\leq0.3$; see Fig.~\ref{fig:performance}). We vary the optimizer $g$ (Eq.~\ref{eq:update_rule}) by considering five commonly used learning rate schedulers: 1) SGD with a fixed learning rate (`Fixed'), 2) Adam, 3) SGD with step-wise learning rate annealing (`Step'), 4) exponential decay (`Exp'), and 5) cosine (`Cosine') schedulers \citep{Loshchilov2017}. We optimize the learning rate and weight decay coefficient, and all models are trained for 100 and 300 epochs for MNIST and CIFAR10, respectively. Additionally, we tune the step size, decay rate, and momentum coefficients for Step, Exp, and Adam, respectively. In our experiments, we find that OHO usually takes a single run to find a good solution. This single training run takes approximately $12\times$ longer than training the network once without OHO; yet the other hyperparameter optimization algorithms require multiple training runs of the neural network to reach the same level of performance. Overall, OHO is significantly faster at finding a good solution in terms of total wall-clock time (Fig.~\ref{fig:performance}, bottom row). For each method, we measure the final test loss distribution obtained for various initial learning rates and weight decay coefficients.
The initial learning rates and weight decay coefficients were randomly chosen from the range of $[0.0001,0.2]$ and $[0,0.0001]$, respectively. For both MNIST and CIFAR10, our procedure, which dynamically tunes the learning rate and degree of regularization during model learning, results in better generalization performance and much smaller variance compared to other optimization methods (Fig.~\ref{fig:perm_stability}, `global OHO'). This demonstrates the robustness of OHO to the initialization of hyperparameters and makes it unnecessary to train multiple models in order to find good hyperparameters. These results also suggest that there may be a systematic advantage in jointly optimizing parameters and hyperparameters. \begin{figure}[t] \centering \includegraphics[width=\linewidth,page=3]{figs/figsOHO.pdf} \vspace{-0.5cm} \caption{(A) The learning rate dynamics for each layer during the early stage of the training for different initializations. (B) Test loss comparison for layerwise vs.\ global OHO, for several initial learning rates.} \label{fig:layewise_lr} \vspace{-0.25cm} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth,page=4]{figs/figsOHO.pdf} \vspace{-0.25cm} \caption{Test loss and wallclock time statistics when using OHO to optimize different numbers of hyper-parameter sets, where each set contains three hyper-parameters: two learning rates for weights and bias, and one L2 weight decay coefficient. The hyper-parameter sets are allocated to neighboring layers to evenly partition the full network. Layerwise OHO has 6 sets for MNIST, 62 for CIFAR10. } \label{fig:tradeoff_perm_speed} \vspace{-0.25cm} \end{figure} \begin{figure}[t] \centering \includegraphics [width=0.9\linewidth,page=7]{figs/figsOHO.pdf} \vspace{-0.25cm} \caption{ (A) Learning rate dynamics, after being manually re-initialized to 0.2 at epoch 30, 50, 90, 180, and 270 (vertical dashed lines). 
(B) Test loss for the dynamic learning rate (solid line) and the non-corrected learning rate (dashed line) after the reset. (C) Weight decay dynamics, after being manually re-initialized, details as in (A). (D) Corresponding test loss on CIFAR10.} \label{fig:perm_corruption} \vspace{-0.5cm} \end{figure} \subsection{Layerwise OHO} In the earlier experiment (Fig.~\ref{fig:perm_stability}, `layerwise OHO'), we saw that {\em layer-specific} learning rates and weight decay coefficients can lead to even better test loss on CIFAR10. Here we further study richer hyperparametrizations. To do so, we examine the effects of layerwise OHO on final test performance and quantify the trade-offs between performance and computational time as we vary the number of hyperparameters. First, we experiment with having a separate learning rate and weight decay coefficient per layer. We train both global OHO and layerwise OHO on CIFAR10 with several initial learning rates, $\alpha_0 = [0.1,0.15,0.2]$, and no initial regularization, $\lambda_0 = 0$, and compare their test losses (Fig.~\ref{fig:layewise_lr}). We find that the layerwise OHO test loss (green curves) is generally lower than that of global OHO (purple curves). Moreover, when analyzing the early stage of training across runs with different initial learning rates, we observe that learning rates tend to cluster according to their layers regardless of the starting points, and that their values are layer-specific (Fig.~\ref{fig:layewise_lr}). This suggests that layerwise grouping of the hyperparameters may be appropriate in such models. We then analyze the performance and computational speed trade-off of layerwise OHO with coarser hyperparameter grouping schemes. If we define one hyperparameter set as the learning rates for the weights and bias, together with the L2 regularization weights, then global OHO has one set, while layerwise OHO has as many sets as there are layers in the network (6 for MNIST and 62 for CIFAR10).
We can then interpolate between these extremes. Specifically, for $k$ hyperparameter sets, we partition the network layers into $k$ neighboring groups, each with its own hyperparameter set (Fig.~\ref{fig:tradeoff_perm_speed}). For example, for $k=2$, the 6-layer network trained on MNIST would be partitioned into the top 3 and bottom 3 layers, with 2 separate hyperparameter sets. For each configuration, we ran OHO optimization 10 times with different random initializations. As shown in Fig.~\ref{fig:tradeoff_perm_speed}, the average performance improves as the number of hyperparameter sets increases, while the variance shrinks. In contrast, the wallclock time increases with the number of hyperparameter sets. This demonstrates a clear trade-off between the flexibility of the learning dynamics (and subsequently generalization performance) and computational cost, which argues for selecting the richest hyperparameter grouping scheme possible given a certain computational budget. \subsection{Response to hyperparameter perturbations} Due to its online nature, the OHO algorithm lends itself to non-stationary problems, for instance when input statistics or task objectives change over time. Here we emulate such a scenario by perturbing the hyperparameters directly in a static learning scenario, and track the evolution of (meta-)learning in response to these perturbations. In particular, we initialize the learning rates to a fixed value (0.1) and then reset them to a higher value (0.2) at various points during CIFAR10 training (Fig.~\ref{fig:perm_corruption}A). We observe that the learning rates decrease rapidly early on and that, after being reset, they drop almost immediately back to their value before the perturbation. This illustrates the resilience and rapid response of OHO to learning rate corruption, which emulates a sudden change in the environment.
We further compare the performance of OHO to the setting where the learning rate is fixed after re-initialization. Figure~\ref{fig:perm_corruption}B shows the test losses for fixed (dashed line) and OHO (solid line) learning rates. We find that the dynamic optimization of hyperparameters leads to systematically better performance. In the next set of experiments we reset the weight decay coefficient to zero at various epochs: 30, 50, 90, 180, and 270. The results are similar, with the weight decay coefficients quickly adapting back to their previous values (Fig.~\ref{fig:perm_corruption}C and D). The test loss fluctuates less for continually optimized OHO, relative to the fixed hyperparameter setting. Altogether, these results suggest that joint optimization of parameters and hyperparameters is more robust to fluctuations in a learning environment. \begin{figure} \centering \includegraphics[width=\linewidth,page=6]{figs/figsOHO.pdf} \vspace{-0.5cm} \caption{Sensitivity analysis on CIFAR10. A) Comparison of learning when the outer gradient $\frac{\partial L_{\mathcal{D}}}{\partial \boldsymbol \varphi}$ is computed on the training vs.\ the validation dataset. B) Performance when using validation batch sizes of 100, 500, and 1000. } \label{fig:sensitivity_analysis} \vspace{-0.25cm} \end{figure} \begin{SCfigure} \includegraphics[width=0.5\linewidth,page=10]{figs/figsOHO.pdf} \vspace{-0.5cm} \caption{Test loss as a function of learning time, when the influence matrix gets reset to zero every 1, 100, and 1000 steps; `no reset' corresponds to standard infinite-horizon global OHO. } \label{fig:sensitivity_analysisC} \end{SCfigure} \subsection{Long-term dependencies in learning and their effects on hyperparameters} Past work has suggested that the temporal horizon of the outer-loop learning influences both the performance and the stability of hyperparameter optimization \citep{Metz2019}.
In our case, the influence matrix is computed recursively over the entire learning episode and there is no strict notion of a horizon. We can, however, control the extent of temporal dependencies by resetting the influence matrix at a predefined interval (effectively forgetting all previous experience). We compare such resetting against our regular infinite-horizon procedure (global OHO; see Fig.~\ref{fig:sensitivity_analysisC}). We find that both the average test loss and its variance decrease as we reset less frequently; ultimately, global OHO performs the best. Thus, we conclude that it is important to take past learning dynamics into account for improving generalization performance. \subsection{Sensitivity analysis for meta-hyperparameters.} In order to better understand how OHO works in practice, we explore its sensitivity to meta-hyperparameter choices: the validation dataset, the initial learning rates, and the meta learning rates. \paragraph{Validation dataset.} We explore the gradient of the meta-optimization objective $\nabla_{\boldsymbol \varphi} \mathcal{L}_{\mathcal{D}_{\text{val}}}$ in Eq.~\ref{eq:oho_gradient}, where the outer gradient $\frac{\partial \mathcal{L}_{\mathcal{D}_{\text{val}}}}{\partial {\boldsymbol \theta}_{\tau+1}}$ is computed with respect to the validation dataset $\mathcal{D}_{\text{val}}$. To show the importance of having a separate validation loss for the outer-loop optimization, we replace the validation dataset with the training dataset, and find that this yields systematically worse generalization performance (Fig.~\ref{fig:sensitivity_analysis}A). As expected, the hyperparameters overfit to the training loss when using the training dataset, but not when using the validation dataset. When using the validation dataset, OHO prevents overfitting by shrinking the learning rates and increasing the weight regularization over time. In other words, OHO early-stops on its own.
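The mechanics discussed in this section can be illustrated on a toy problem: a single learning rate $\alpha$ adapted online by pushing the validation gradient through the influence $\Gamma = d\theta/d\alpha$, in the spirit of the outer-loop update of Eq.~\ref{eq:oho_gradient}. This is a simplified sketch on a 1-D quadratic, ignoring minibatching and weight decay; it is not the implementation used in the experiments.

```python
def oho_quadratic(theta0=0.0, alpha0=0.05, meta_lr=0.01, steps=200):
    """Jointly optimize theta (inner loop) and the learning rate alpha (outer
    loop) on a 1-D quadratic: L_train = (theta - 1)^2, L_val = 0.5*(theta - 1)^2.

    The influence Gamma = dtheta/dalpha obeys the forward recursion
        Gamma <- (1 - alpha * H) * Gamma - g_train,  H = d(g_train)/d(theta),
    and the outer step descends the validation loss through Gamma.
    """
    theta, alpha, gamma = theta0, alpha0, 0.0
    for _ in range(steps):
        g_train = 2.0 * (theta - 1.0)                  # d L_train / d theta
        gamma = (1.0 - alpha * 2.0) * gamma - g_train  # H = 2 for this loss
        theta = theta - alpha * g_train                # inner (parameter) step
        g_val = theta - 1.0                            # d L_val / d theta
        alpha = alpha - meta_lr * g_val * gamma        # outer (hypergradient) step
    return theta, alpha
```

On this toy problem the learning rate grows while the validation gradient and influence have opposite signs, then settles once the parameters reach the shared optimum, mirroring the self-regulating behavior described above.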
In some practical applications it may prove computationally cumbersome to use the full validation dataset. We experiment with using a random subset of the validation set to compute the outer gradient, varying the validation minibatch size: 100, 500, and 1,000. Unsurprisingly, we find that bigger is better (Fig.~\ref{fig:sensitivity_analysis}B): the test loss is lowest for the largest size considered, while the test loss for the smallest (100) minibatch size is on par with the step-wise learning rate scheduler. The performance changes gradually as we vary the size, which allows us to trade off between computational cost and final performance. \begin{figure}[t] \centering \includegraphics[width=\linewidth,page=9]{figs/figsOHO.pdf} \vspace{-0.5cm} \caption{Norm of the influence matrix for different meta learning rates and initial learning rates. (A) Large meta learning rates lead to a discontinuous landscape (red curve). (B) The initial learning rates $1e^{-3}$ and $1e^{-2}$ lead to instability when combined with large meta learning rates (green and blue curves).} \label{fig:normJ_fix_mlr} \vspace{-0.5cm} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=0.75\linewidth,page=8]{figs/figsOHO.pdf} \vspace{-0.5cm} \caption{(A,B) The influence matrix norm w.r.t.\ the learning rate and the weight decay coefficient plateaus, rather than exploding. (C) The gradients within a single epoch (100 updates) do not correlate with each other.
The average and the standard deviation of the correlation decrease over time.} \label{fig:stability} \vspace{-0.25cm} \end{figure*} \paragraph{Meta learning rates and initial hyper-parameters.} \label{exp:gradient_stability} To deploy OHO in practice, one needs to choose a meta learning rate and the initial hyperparameters. We investigate the sensitivity of OHO to these choices in order to inform users on what may be a sensible range for each meta-hyperparameter.
We define the sensible range to be the region where training is stable. We consider learning to be stable when the gradients are well-defined along the entire learning trajectory; computing these gradients hinges on the products of Hessians induced by the recursion in the influence matrix. In sum, sensible meta-hyperparameters should keep the norm of the influence matrix bounded throughout training. We visualize the evolution of the norms of the influence matrices for the learning rates, $\|\frac{d{\boldsymbol \theta}^{(T)}}{d\alpha}\|^2_F$, and weight decay coefficients, $\|\frac{d{\boldsymbol \theta}^{(T)}}{d\lambda}\|^2_F$, for different meta learning rates and initial learning rates (Fig.~\ref{fig:normJ_fix_mlr}A, B). The norm is numerically ill-conditioned (blows up) with large meta learning rates ($>10^{-4}$) at the end stage of training, but is smooth with smaller meta learning rates. Recall that the OHO-optimized learning rates decrease during training to avoid overfitting. Later in training this eventually leads to a regime where the meta learning rate is larger than the learning rate. Some elements of the scaled gradient can then become greater than the corresponding learning rate, $\alpha_{t-1}^i < \epsilon \left[\nabla_\alpha L({\boldsymbol \theta})\right]^i$, causing instability. The ideal meta learning rate should thus be small. We further investigate the norm of the influence matrix during training. We train multiple models with different initializations of the hyperparameters while fixing the meta learning rate to $5 \times 10^{-5}$. We find that the norm rarely explodes in practice (Fig.~\ref{fig:stability}A, B). The norms plateau as training converges, but they do not explode. We observe similar results for the norm of the Hessian matrix $\|\frac{\partial {\boldsymbol \theta}^{(T)}}{\partial {\boldsymbol \theta}^{(T-1)}}\|^2_2$ as well.
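As a concrete illustration of this diagnostic, the squared Frobenius norms tracked above can be computed in a few lines (a minimal sketch; the function name and the list-of-matrices interface are our own, not from the paper's code):

```python
import numpy as np

def influence_norm_trace(Gammas):
    # Squared Frobenius norms ||d theta / d phi||_F^2 along a training
    # trajectory: a bounded, plateauing trace indicates stable training,
    # while a blow-up signals an ill-conditioned meta learning rate.
    return [float(np.sum(G ** 2)) for G in Gammas]
```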
In order to understand why the norm of the influence matrix is stable along the trajectory, we look into the exploding gradient phenomenon. As we discussed in Section~\ref{sec:gradient_computation}, exploding gradients are likely to occur when a series of gradients are correlated. We compute the moving average and standard deviation of the gradient correlations with a window size of 100 updates (a single epoch). The mean correlation very rapidly approaches zero, with the standard deviation also decreasing as training progresses (Fig.~\ref{fig:stability}C). The gradients only correlate at the beginning of training but quickly decorrelate as learning continues. This causes the norm of the influence matrix to plateau and prevents the gradients from exploding. \section{Discussion} Truly online hyperparameter optimization remains an open problem in machine learning. In this paper, we presented a novel hyperparameter optimization algorithm, OHO, which takes advantage of online temporal credit assignment in RNNs to jointly optimize parameters and hyperparameters based on minibatches of training and validation data. This procedure leads to robust learning, with better generalization performance than competing offline hyperparameter optimization procedures. It is also competitive in terms of total wallclock time. The dynamic interaction between parameter and hyperparameter optimization was found not only to improve test performance but also to reduce variability across runs.
Beyond the automatic shrinking of learning rates that avoids overfitting, OHO quickly adapts the hyperparameters to compensate for sudden changes, such as perturbations of the hyperparameters, and allows the learning process to reliably find better models. The online nature of OHO updates makes it widely applicable to both stationary and nonstationary learning problems. We expect the same set of benefits to apply to problems such as life-long learning \citep{German2019,Kurle2020} or policy learning in deep reinforcement learning \citep{Padakandla2019,Xie2020,Igl2020}, in which the statistics of the data and the learning objectives change over time. Up to now, it has been virtually impossible to do automated hyperparameter tuning in such challenging learning settings. OHO, on the other hand, may prove to be a key stepping stone for achieving robust automated hyperparameter optimization in these domains. In machine learning, an enormous amount of time and energy is spent on hyperparameter tuning. Out of the box, RTRL-based OHO is already quite efficient, with a memory requirement that is linear in the number of parameters and an outer-loop gradient computation whose cost is similar to that of computing the inner gradients. This covers many practical use cases of meta-optimization. We believe that OHO and related ideas can dramatically reduce the burden of tuning hyperparameters and become a core component of future off-the-shelf deep learning packages. \section{Online hyperparameter optimization} \label{sec:oho} With our analogies in place, we are ready to construct our online hyperparameter optimization (OHO) algorithm. We adapt the hyperparameters at each time step, such that both the parameters and hyperparameters are jointly optimized in a single training process (Fig.~\ref{fig:my_label}). To achieve this, we update the parameters using training data and update the hyperparameters using validation data.
Then, the update rules for $\boldsymbol{\theta}$ and $\boldsymbol{\varphi}$ are: \begin{align} \boldsymbol{\theta}_{\tau+1} &= \boldsymbol{\theta}_{\tau} - \alpha_{\tau} \Delta_{\mathcal{D}_{\text{tr}}} (\boldsymbol{\theta}_{\tau}) + \alpha_{\tau} w(\boldsymbol{\theta}_{\tau},\lambda_{\tau}) \label{eq:param_updates} \\ \boldsymbol{\varphi}_{\tau+1} &= \boldsymbol{\varphi}_{\tau} - \eta \Delta_{ \mathcal{D}_{\text{val}}} (\boldsymbol{\varphi}_{\tau}), \label{eq:meta_update} \end{align} where $\Delta_{ \mathcal{D}_{\text{tr/val}}}$ is a descent step with respect to training or validation data, respectively, and $w(\boldsymbol{\theta}_{\tau},\lambda_\tau)$ is a regularization function. We can use any differentiable stochastic optimizer, such as RMSProp or ADAM, to compute the descent direction $\Delta$. Without loss of generality, for the rest of the paper we use SGD to compute $\Delta$ and a weight decay penalty as the regularizer. Expanding $\Delta_{\mathcal{D}_{\text{val}}}(\boldsymbol{\varphi}_{\tau})$ in Eq.~\ref{eq:meta_update}, we have \begin{align} &\Delta_{\mathcal{D}_{\text{val}}} (\boldsymbol{\varphi}_{\tau}) = \nabla_{\boldsymbol{\varphi}} \mathcal{L}_{\mathcal{D}_{\text{val}}}(\boldsymbol{\theta}_{\tau+1}, \boldsymbol{\varphi}_{\tau}) \nonumber \\ &= \nabla_{\boldsymbol{\varphi}} \mathcal{L}_{\mathcal{D}_{\text{val}}} \big(\boldsymbol{\theta}_{\tau} -\alpha_{\tau} \big( \Delta_{\mathcal{D}_{\text{tr}}}(\boldsymbol{\theta}_{\tau})- w(\boldsymbol{\theta}_{\tau},\lambda_{\tau})\big), \boldsymbol{\varphi}_{\tau}\big) \label{eq:delta_metaopt}. \end{align} The gradient of the hyperparameters at iteration $\tau$ depends on $\boldsymbol{\varphi}_0, \dots, \boldsymbol{\varphi}_\tau$. We apply RTRL to compute this gradient in an online fashion.
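For concreteness, one SGD-based joint iteration can be sketched as follows (illustrative variable names; `grad_tr` and `grad_val_phi` stand in for the descent directions on training and validation data, and the weight-decay regularizer is taken as $w(\boldsymbol{\theta},\lambda)=-2\lambda\boldsymbol{\theta}$, consistent with the temporal derivative that appears below):

```python
import numpy as np

def oho_step(theta, phi, grad_tr, grad_val_phi, eta):
    """One joint OHO iteration: parameter step on training data,
    hyperparameter step on validation data."""
    alpha, lam = phi
    # theta_{tau+1} = theta - alpha * grad_tr + alpha * w(theta, lam),
    # with w(theta, lam) = -2 * lam * theta (weight decay, an assumption)
    theta_new = theta - alpha * grad_tr - 2 * alpha * lam * theta
    # phi_{tau+1} = phi - eta * grad of validation loss w.r.t. phi
    phi_new = phi - eta * grad_val_phi
    return theta_new, phi_new
```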
Let us rewrite the RTRL gradient expression in Eq.~\ref{eq:rtrl}, taking our mappings into account, \begin{align} \nabla_{\boldsymbol{\varphi}} \mathcal{L}_{\mathcal{D}_{\text{val}}} &= \sum^T_{\tau=1} \frac{\partial \mathcal{L}_{\mathcal{D}_{\text{val}}}}{\partial \boldsymbol{\theta}_{\tau+1}} \Bigg( \underbrace{ \frac{\partial \boldsymbol{\theta}_{\tau+1}}{\partial \boldsymbol{\theta}_{\tau}}\frac{\partial \boldsymbol{\theta}_{\tau}}{\partial \boldsymbol{\varphi}} +\frac{\partial \boldsymbol{\theta}_{\tau+1}}{\partial \boldsymbol{\varphi}_{\tau}} }_{\Gamma_{\tau+1}} \Bigg). \label{eq:oho_gradient} \end{align} Generally, meta-optimization involves computing the gradient through a gradient, as shown in Eq.~\ref{eq:delta_metaopt}. This causes the temporal derivative to contain the Hessian matrix, \begin{align*} \frac{\partial \boldsymbol{\theta}_{\tau+1}}{\partial \boldsymbol{\theta}_{\tau}} = I - \alpha_\tau H_\tau - 2 \alpha_\tau\lambda_\tau, \end{align*} where $H_\tau=\mathbb{E}_{\mathcal{B}}[\nabla^2_{\boldsymbol{\theta}} \mathcal{L}]$ is the Hessian of the minibatch loss with respect to the parameters. Then, we plug $\frac{\partial \boldsymbol{\theta}_{\tau+1}}{\partial \boldsymbol{\theta}_{\tau}}$ into the influence matrix's recursive formula, \begin{align} \Gamma_{\tau+1} = \big(I-\alpha_\tau H_{\tau} -2\alpha_\tau\lambda_\tau \big) \Gamma_\tau + G_{\tau}. \label{eq:influence_formula} \end{align} By approximating the Hessian-vector product using the finite difference method,\footnote{ See \url{https://justindomke.wordpress.com/2009/01/17/hessian-vector-products/} for details. } we compute the gradient $\frac{d l(\boldsymbol{\theta}_{\tau+1})}{d\alpha_\tau}$ in linear time. The formulation shows that the influence matrix is composed of all the relevant parts of the learning history and is updated online as new data come in.
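A minimal sketch of this recursion (our own function names, not from the paper's code): the Hessian-vector products entering Eq.~\ref{eq:influence_formula} are approximated by central finite differences, so the Hessian is never formed explicitly.

```python
import numpy as np

def hvp_fd(grad_fn, theta, v, eps=1e-5):
    # Finite-difference Hessian-vector product:
    # H v ~ (grad(theta + eps*v) - grad(theta - eps*v)) / (2*eps)
    return (grad_fn(theta + eps * v) - grad_fn(theta - eps * v)) / (2 * eps)

def influence_update(Gamma, grad_fn, theta, alpha, lam, G):
    # Gamma_{tau+1} = (I - alpha*H - 2*alpha*lam) Gamma_tau + G_tau,
    # applied column by column via Hessian-vector products.
    cols = []
    for k in range(Gamma.shape[1]):
        v = Gamma[:, k]
        cols.append(v - alpha * hvp_fd(grad_fn, theta, v) - 2 * alpha * lam * v)
    return np.stack(cols, axis=1) + G
```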
In other words, it accumulates the long-term influence of all the previous applications of the hyperparameters $\boldsymbol{\varphi}$ on the parameters $\boldsymbol{\theta}$. Overall, RTRL is more efficient than truncated BPTT for hyperparameter optimization, in both computation and memory complexity. First, RTRL greatly reduces the computational cost, since the influence matrix formulation circumvents the explicit unrolling of temporal dependencies. Our approach is intrinsically online, while BPTT is strictly an offline algorithm (although it can sometimes be loosely approximated to mimic online methods, see~\citep{Lorraine2018}). Moreover, the standard memory challenge of training RNNs with RTRL does not apply in the case of hyperparameter optimization, as the number of hyperparameters is in general much smaller than the number of parameters. For example, one popular choice of $\boldsymbol{\varphi}$ is to include just a global learning rate and a few regularization coefficients. \subsection{Gradient computation} \label{sec:gradient_computation} Since we frame hyperparameter optimization as RNN training, one might reasonably wonder about the fundamental issues of vanishing and exploding gradients~\citep{hochreiter2001gradient} in this setup. Gradients do not vanish, because the SGD update in Eq.~\ref{eq:sgd} is additive. This resembles RNN architectures such as long short-term memory~\citep[LSTM;][]{hochreiter1997long} and gated recurrent units~\citep[GRU;][]{chung2014empirical}, both of which were explicitly designed to have additive gradients as a way of avoiding the vanishing gradient issue. Nonetheless, exploding gradients may still be an issue and are worth further discussion. \begin{figure} \centering \includegraphics[page=5,width=0.75\linewidth]{figs/figsOHO.pdf} \vspace{-0.25cm} \caption{Overview of OHO joint optimization procedure.
The parameters and hyperparameters are updated in parallel based on training/validation loss gradients computed using the current minibatch.} \label{fig:my_label} \vspace{-0.5cm} \end{figure} Because the influence matrix $\Gamma_{\tau+1}$ is defined recursively, the gradient of the validation loss with respect to ${\boldsymbol \varphi}$ contains a product of Hessians: \begin{align*} {\small \nabla_{\boldsymbol{\varphi}} \mathcal{L}_{\mathcal{D}_{\text{val}}} = \Bigg\langle G_{\tau}, -\sum^{\tau}_{i=0} \Bigg(\prod^{\tau}_{j=i+1} (I-\alpha_j H_j-2\alpha_j\lambda_j)\Bigg) G_i\Bigg\rangle , } \end{align*} where $\langle \cdot, \cdot \rangle$ denotes an inner product, and $G_i$ and $H_j$ are the gradient and Hessian of the loss $\mathcal{L}(\boldsymbol{\theta}_\tau)$ at iterations $i$ and $j$, respectively. The product of Hessians can lead to gradient explosion, especially when consecutive gradients are correlated~\citep{Metz2019}. It remains unclear whether gradient explosion is an issue for RTRL (or, more generally, for forward-differentiation-based optimization). We study this in our experiments. \section{Introduction} The success of training complex machine learning models critically depends on good choices for the hyperparameters that control the learning process. These hyperparameters can specify the speed of learning as well as model complexity, for instance, by setting learning rates, momentum, and weight decay coefficients. Oftentimes, finding the best hyperparameters requires not only extensive resources, but also human supervision. Well-known procedures, such as random search and Bayesian hyperparameter optimization, require fully training many models to identify the setting that leads to the best validation loss~\citep{Bergstra2012, Snoek2012}. While popular when the number of hyperparameters is small, these methods easily break down as the number of hyperparameters increases. Alternative gradient-based hyperparameter optimization methods take a two-level approach (often referred to as the `inner' and the `outer' loop). The outer loop finds potential candidates for the hyperparameters, while the inner loop is used for model training. These methods also require unrolling the full learning dynamics to compute the long-term consequences that perturbations of the hyperparameters have on the parameters~\citep{luketina2016scalable, Lorraine2018, Metz2019}.
This inner loop of fully training the parameters of the model for each outer-loop hyperparameter update makes existing methods computationally expensive and inherently offline. It is unclear how to adapt these ideas to nonstationary environments, as required by advanced ML applications, e.g.\ lifelong learning~\citep{German2019,Kurle2020}. Moreover, while these types of approaches can be scaled to high-dimensional hyperparameter spaces~\citep{Lorraine2018}, they can also suffer from stability issues~\citep{Metz2019}, making them nontrivial to apply in practice. In short, we are still missing robust and scalable procedures for online hyperparameter optimization. We propose an alternative class of online hyperparameter optimization (OHO) algorithms that update hyperparameters in parallel with training the parameters of the model. At the core of our framework is the observation that hyperparameter optimization entails a form of \emph{temporal credit assignment}, akin to the computational challenge of training recurrent neural networks (RNNs). Drawing on this similarity allows us to adapt real-time recurrent learning (RTRL)~\citep{Williams1989}, an online alternative to backprop through time (BPTT), for the purpose of online hyperparameter optimization. We empirically show that our joint optimization of parameters and hyperparameters yields systematically better generalization performance compared to standard methods. These improvements are sometimes accompanied by a substantial reduction in overall computational cost. We characterize the behavior of our algorithm in response to various meta-parameter choices and to dynamic changes in hyperparameters. OHO can rapidly recover from adversarial perturbations to the hyperparameters. We also find that OHO with layerwise hyperparameters provides a natural trade-off between a flexible hyperparameter configuration and computational cost.
In general, our framework opens the door to a new line of research, namely real-time hyperparameter learning. Since we posted the initial version of this preprint, it has been brought to our attention that early work by \citet{Franceschi2017} presented a method very closely related to ours, from a different perspective. \citet{Franceschi2017} studied both forward and reverse gradient-based hyperparameter optimization. They derived RTRL \citep{Williams1989} by computing the gradient of a response function from a Lagrangian perspective. In contrast, here we directly map an optimizer to a recurrent network, by taking the process of fully training a model as propagating a sequence of minibatches through an RNN, which allows us to use RTRL for the purpose of online hyperparameter optimization. Moreover, while \citet{Franceschi2017} included only a single small experiment comparing their approach to random search on phonetic recognition, here we empirically evaluate our algorithm's strengths and properties extensively, and compare them to several state-of-the-art methods. In doing so, we demonstrate OHO's i) effectiveness: performance w.r.t.\ wall clock time improves on existing approaches, ii) robustness: low variance of generalization performance across parameter and hyperparameter initializations, iii) scalability: sensible trade-off between performance and computational cost w.r.t.\ the number of hyperparameters, and iv) stability of hyperparameter dynamics during training for different choices of meta-hyperparameters. \input{background} \input{setup} \input{hyperopt} \input{relatedwork} \input{experiment} \section{Related work} Optimizing hyperparameters based on a validation loss has been tackled from various stances. Here, we provide a brief overview of state-of-the-art approaches to hyperparameter optimization. \paragraph{Random search.} For small hyperparameter spaces, grid or manual search methods are often used to tune hyperparameters.
For a moderate number of hyperparameters, random search can find hyperparameters that are as good as or better than those obtained via grid search, at a fraction of its computation time \citep{Bengio2000, Bergstra2012, Bergstra2011, Jamieson2015, Li2016}. \paragraph{Bayesian optimization approaches.} Bayesian optimization (BO) is a smarter way to search for the next hyperparameter candidate \citep{Snoek2012, Swersky2014, Snoek2015, Eriksson2019, Kandasamy2020} by explicitly keeping track of uncertainty. BO iteratively updates the posterior distribution over the hyperparameters and assigns a score to each hyperparameter candidate based on it. Because the evaluation of hyperparameter candidates must largely be done sequentially, the overall computational benefit from BO's smarter candidate proposals is only moderate. \paragraph{Gradient-based approaches.} Hyperparameter optimization with approximate gradient (HOAG) is an alternative technique that iteratively updates the hyperparameters following the gradient of the validation loss. HOAG approximates the gradient using an implicit equation with respect to the hyperparameters \citep{Domke2012, Pedregosa2016}. This work was extended to DNNs in stochastic optimization settings by \citet{Maclaurin2015, Lorraine2020}. In contrast to our approach, which uses the chain rule to compute the exact gradient online, this approach exploits the implicit function theorem and inverse Hessian approximations. While the hyperparameter updates appear online in form, the method requires the network parameters to be nearly at a stable point. In other words, the network needs to be fully trained at each step. \citet{Metz2019} attempt to overcome the difficulties of backpropagation through an unrolled optimization process by using a surrogate loss, based on variational and evolutionary strategies.
Although their method addresses the exploding gradient problem, it still truncates backpropagation \citep{Shaban2019}, which introduces a bias into the gradient estimate of the per-step loss. In contrast, our method naturally addresses both issues: it is unbiased by algorithm design, and the gradients are empirically stable, as will be shown in Experiment~\ref{exp:gradient_stability}. \paragraph{Forward differentiation approaches.} Our work is closely related to \citep{Franceschi2017, Baydin2018, Donini2020}, which share the goal of optimizing hyperparameters in real-time. \citet{Franceschi2017} first introduced a forward gradient-based hyperparameter optimization scheme (RTHO), in which they compute the gradient of the response function from a Lagrangian perspective. Their solution ends up being equivalent to computing the hypergradient using RTRL \citep{Williams1989}, as in OHO. Our conceptual framework is however different, as we derive the hyperparameter gradient by directly mapping an optimizer to a recurrent network and then applying RTRL. Unlike \citet{Franceschi2017}, whose only quantitative evaluation was performed on TIMIT phonetic recognition, we evaluate OHO extensively in a large experimental setup and rigorously analyze its properties. \citet{Donini2020} recently proposed adding a discount factor $\gamma \in [0,1]$ to the influence function, as a heuristic designed to compensate for inner-loop non-stationarities by controlling the range of dependencies of the past influence function. One potential disadvantage of our approach is that the memory complexity grows with the number of hyperparameters. There are, however, several approximate variants of RTRL that can be used to address this issue. For example, unbiased online recurrent optimization (UORO) reduces the memory complexity of RTRL from quadratic to linear \citep{Tallec2017}. UORO uses a rank-1 approximation of the influence matrix and can be extended to a rank-$k$ approximation~\citep{Mujika2018}.
Although UORO gives an unbiased estimate of the gradient, it is known to have high variance, which may slow down or impair training \cite{Cooijmans2019, Marschall2020}. It remains an open question whether these more scalable alternatives to RTRL are also applicable for online hyperparameter optimization. \section{Learning as a recurrent neural network} Both RNN parameter learning and hyperparameter optimization need to take into account long-term consequences of (hyper-)parameter changes on a loss. This justifies considering an analogy between the recurrent form of parameter learning (Eq.~\ref{eq:update_rule}) and RNN dynamics (Eq.~\ref{eq:rnn}). \paragraph{Mapping SGD to a recurrent network.} To establish the analogy, consider the following mapping: \begin{align*} \text{parameters } \boldsymbol{\theta}_\tau &\rightarrow \text{state } \mathbf{h}_t \\ \text{batch } \mathbf{B}_\tau &\rightarrow \text{input } \mathbf{x}_t\\ \text{update rule } g(\cdot) &\rightarrow \text{recurrent function } r(\cdot)\\ \text{hyper-parameters } \boldsymbol{\varphi} &\rightarrow \text{parameters } \boldsymbol{\phi}. \end{align*} Parameters ${\boldsymbol \theta}_\tau$ and recurrent network state $\mathbf{h}_t$ both evolve via recurrent dynamics, $g(\cdot)$ and $r(\cdot)$, respectively. These dynamics are each parametrized by hyper-parameters ${\boldsymbol \varphi}$ and parameters ${\boldsymbol \phi}$. The inputs $\mathbf{x}_t$ in the RNN correspond to the minibatches of data $\mathbf{B}_\tau$ in the optimizer, and the outputs $\mathbf{o}_t$ to $\left\{ f(x; {\boldsymbol \theta}_\tau) \Big| x \in B_\tau \right\}$, respectively. In short, we interpret a full model training process as a single forward propagation of a sequence of minibatches in an RNN. \paragraph{Mapping the validation loss to the RNN training loss.} The same analogy can be made for the training objectives. 
We associate the per-step loss of the recurrent network with the hyperparameter optimization objective: \begin{align*} \mathcal{L}(\boldsymbol{\theta}_\tau, \boldsymbol{\varphi}_\tau; \mathcal{D}_\text{val}) = \frac{1}{|\mathcal{D}_{\text{val}}|} \sum_{(\mathbf{x},\mathbf{y}) \in \mathcal{D}_{\text{val}}} l\big(\mathbf{y}, f(\mathbf{x}; \boldsymbol{\theta}_\tau(\boldsymbol{\varphi}_\tau))\big), \end{align*} where $l$ is the per-example loss function and $\boldsymbol{\varphi}_\tau = \lbrace \alpha, \lambda \rbrace$ is the set of tunable hyperparameters, which in our concrete example corresponds to the learning rate $\alpha$ and regularizer $\lambda$. The hyperparameter optimization objective is calculated over the validation dataset $\mathcal{D}_{\text{val}}$. The total loss over a full training run is $L_{\text{Total}} = \sum_{\tau=1}^{\infty} \mathcal{L}(\boldsymbol{\theta}_\tau, \boldsymbol{\varphi}_\tau; \mathcal{D}_{\text{val}}),$ where there are multiple parameter updates within a run. We use $\infty$ to emphasize that the number of epochs is often not determined {\it a priori}. Both the hyperparameter optimization objective and the update rule, Eq.~\ref{eq:sgd}, contain the per-example loss $l$. The former updates the parameters of the RNN using $\mathcal{D}_{\text{val}}$, while the latter updates the state of the RNN using the training batch $\mathbf{B}_{\tau}$. This meta-optimization loss function is commonly used for hyperparameter optimization in practice~\citep{Bengio2000, Pedregosa2016}. Nevertheless, the mapping from the optimizer to an RNN and the mapping from the validation loss to the RNN training objective are novel and enable us to treat the whole hyperparameter optimization process as an instance of RNN training.
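The two mappings can be made concrete in a few lines (a hedged sketch with illustrative names; `model` and `per_example_loss` stand in for $f$ and $l$, and a weight-decay term is assumed in the cell): a full training run is one forward pass of a "recurrent cell" whose state is $\boldsymbol{\theta}$, whose inputs are minibatches, and whose per-step loss is evaluated on validation data.

```python
def optimizer_cell(theta, batch, phi, grad_fn):
    # state h_t <-> theta_tau, input x_t <-> B_tau, recurrent r(.) <-> g(.)
    alpha, lam = phi
    return theta - alpha * (grad_fn(theta, batch) + 2 * lam * theta)

def val_loss(theta, val_set, per_example_loss, model):
    # L(theta, phi; D_val) = (1/|D_val|) sum over (x, y) of l(y, f(x; theta))
    return sum(per_example_loss(y, model(x, theta)) for x, y in val_set) / len(val_set)

def unroll(theta0, batches, phi, grad_fn, val_set, per_example_loss, model):
    # A full training run as one forward pass over the minibatch sequence,
    # accumulating the per-step validation losses (L_Total).
    theta, total = theta0, 0.0
    for batch in batches:
        theta = optimizer_cell(theta, batch, phi, grad_fn)
        total += val_loss(theta, val_set, per_example_loss, model)
    return theta, total
```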
\section{Introduction} \label{Intro} Complex behavioral systems are ubiquitous in many areas of science, including biology, neuroscience, and engineering, as well as in the social sciences and operations research in general. The development of more realistic complex behavioral models poses many challenging questions to science and technology. Hence, in the development of such mathematical models, uncertainty quantification plays an important role in the mathematical formulation of the problems \cite{Bellomo2017}. Moreover, in the mathematical formulation of the corresponding problems, the overall dynamics of such systems should, in most cases, be considered in confined domains, possibly with obstacles. For this particular class of stochastic dynamic models, boundary conditions must be specified, and frequent choices are absorbing, reflecting, or mixed boundary conditions. Therefore, we turn our attention to the recent developments provided in \cite{Thieu2020-3,Zhang2016,Battiston2020, Thieu2021}, which bring us to the study of a system of reflecting stochastic dynamics, namely Skorokhod-type stochastic differential equations (SDEs), modelling the dynamics of active-passive populations. Furthermore, operations research, which includes queueing theory models, is one of the important application areas of such reflecting stochastic dynamics. The class of queueing theory models is important in many applied fields, including emergency medical aid systems, passenger services in air terminals, bio-social systems and neuroscience, as well as various other types of systems and processes. The models of queueing theory and those based on systems of SDEs are frequently studied separately, although the links between them have been explored by a number of authors.
In \cite{Ward2003}, the authors considered a single-server queue with a Poisson arrival process and exponential processing times, in which each customer independently reneges after an exponentially distributed amount of time. The authors in \cite{Ward2005} discussed a single-server queue with a renewal arrival process and generally distributed processing times, in which each customer independently reneges if service has not begun within a generally distributed amount of time. The topic of heavy traffic limit approximations in queueing network models has been investigated in \cite{Williams1998, Ramanan2003, Kruk2011}. In \cite{Ramanan2006}, the author considered the extended Skorokhod problem (ESP) and the associated extended Skorokhod map (ESM), which enabled a pathwise construction of reflected diffusions. In \cite{Thieu2020-3}, the authors proved the well-posedness of a coupled system of Skorohod-like SDEs modeling the dynamics of active-passive pedestrians. A discretization scheme for reflected SPDEs driven by space-time white noise, through systems of reflecting SDEs, has been discussed in \cite{Zhang2016}. We note also \cite{Pilipenko2014}, where a discussion of, and references on, the relationship between reflecting SDEs and some models of queueing theory were provided. On the other hand, a better understanding of the activities of neuronal cells in nervous systems is one of the major current challenges. Neural activity is intrinsically noisy, and the specific neural activity in living organisms can be treated via the paradigm of complex systems, so that stochastic dynamics with various types of uncertainties have to be incorporated into the models (see e.g. \cite{Laing2010}). Furthermore, since the biophysical dynamics of neurons (e.g. membrane potential and synaptic processes) are complicated and hard to measure, stochastic tools can be useful in the study of such dynamics.
To date, a number of relevant results are available on the topic of the stochastic dynamics of neurons. One of the most prominent examples of this class of models is the Hodgkin-Huxley model \cite{Hodgkin1952}, introduced in 1952 to describe the membrane potential in the squid giant axon. A two-compartment neuronal model has been reported in \cite{Lansky1999}. In \cite{Reutimann2003}, the authors proposed an event-based strategy for efficiently simulating large networks of simple model neurons. In \cite{Tusbo2012}, the authors proposed an alternative hypothesis for the neural code, based on the maximization of mutual information, derived from neuronal activities recorded juxtacellularly in the sensorimotor cortex of behaving rats. An inverse first passage time method for a two-dimensional Ornstein-Uhlenbeck process and its applications in neuroscience have been discussed in \cite{Civallero2019}. Finally, the authors in \cite{Hsu2021} have provided a review of the current challenges and further progress in building stochastic models for single-cell data. Motivated by the study of a system of Skorokhod-type SDEs, together with applications in queueing theory models and neuroscience, our representative examples here pertain to (i) the modelling of customer services in a queueing theory setting in general and (ii) neural dynamics in a cell's membrane potential model. In this paper, we introduce two reflecting stochastic dynamic models for active-passive populations. In particular, in the first model, we study the stochastic dynamics of active-passive populations that can be represented as customer services in a queueing theory setting. Specifically, the system consists of a crowd of customers drawn from two distinct populations: (a) active customers, who have priority and valid access to the servers and leave the domain immediately after reaching the servers, and (b) passive customers, who do not have priority or valid access to the servers.
In what follows we show that the queueing theory model converges to a system of reflected SDEs via a limit theorem. In the second model, we study a cell's membrane potential model with two distinct populations of ions: $\text{Na}+$ and $\text{Cl}-$. The main question we ask while applying this model is \textquotedblleft How can one capture the behaviour of many ions when the action potential reaches its peak and then goes down to the hyperpolarization state?\textquotedblright. To address this question, we consider a system of SDEs of Skorokhod type with reflecting boundary conditions, based on ideas similar to a particular case of our queueing theory model, to capture the behaviour of two different ions in a cell's membrane potential model. The analysis of this model is carried out by combining low- and high-fidelity results obtained from the solution of the underlying coupled system of SDEs and from simulations with a statistical-mechanics-based lattice gas model, where we employ a kinetic Monte Carlo procedure. The rest of this paper is organized as follows. In Section \ref{model}, we provide details of the model and consider an application of reflecting stochastic dynamics in queueing theory. A system analysis based on a multi-fidelity approach is also presented in this section. In Section \ref{neuro-model}, we focus on an application of reflecting stochastic dynamics in neuroscience. The hyperpolarization phenomenon and its influence on the dynamics of neuronal cells are discussed in detail. Finally, concluding remarks are provided in Section \ref{concluding-remarks}. \section{An application in queueing theory}\label{model} In this section, we consider a system of reflecting stochastic dynamics for a service with active and passive customers. In particular, we define the model, develop its implementation, and provide numerical examples. \subsection{Model description}\label{model-1} We consider a service system with arriving active and passive customers.
The geometry is a square lattice $\Lambda:=\{1,\dots,L\}\times\{1,\dots,L\}\subset \mathbb{Z}^2$ of side length $L$, where $L$ is an odd positive integer. In this context $\Lambda$ will be referred to as the {\em room}. An element $x=(x_1,x_2)$ of the room $\Lambda$ is called a \emph{site} or \emph{cell}. Two sites $x,y\in\Lambda$ are said to be \emph{nearest neighbors} if and only if $|x-y|=1$. In addition, $N_A$ is the total number of active customers and $N_P$ is the total number of passive customers, with $N:=N_A+N_P$ and $N_A, N_P, N \in \mathbb{N}$. We assume that an exit door is located on the top row of the geometry, and that there are $\omega$ servers at the exit door. Every active customer's target is to reach these servers and to leave the geometry immediately afterwards, while the passive customers cannot leave the domain because they do not have valid access to the servers. \begin{figure}[h!] \centering \includegraphics[width=0.8\textwidth]{ccmt-figure01.pdf} \vspace*{-40mm} \caption{(Color online) Schematic representation of our lattice model. Blue and red squares denote passive and active customers, respectively, while the white squares within the geometry represent empty spots. The thick dashed line surrounding a large fraction of the grid denotes the presence of reflecting boundary conditions. The exit door, of width $\omega$, is located where indicated by the arrow.} \label{fig:fig0} \end{figure} This behavioral system can be viewed in the context of queueing theory as an $M/M/\omega/N$\footnote[1]{The notation $M/M/\omega/N$ means that the queueing model contains $\omega$ servers and $N$ places for waiting, while $M$ represents a Poisson arrival process (i.e., exponential inter-arrival times); see, e.g.,
\cite{Kalashnikov1994}.} queueing system with the following assumptions: \begin{itemize} \item [a)] there are $N$ customers that request services, \item [b)] the maximum number of requests in the system equals $\omega$, \item [c)] the arrival times of active and passive customers are i.i.d.\ exponential random variables with intensities $\alpha$ and $\beta$, respectively. \end{itemize} Next, we highlight the key steps in our model construction. \begin{itemize} \item[i.] The active population of the system is described by the following process: \begin{align}\label{active-eqn} X_{A_i}^\alpha(t) = X_{A_i}^\alpha(0) + Y_{A_i}^{\alpha}(t) + \Phi_{A_i}^{\alpha}(t), \end{align} where $X_{A_i}^\alpha(t)$ denotes the position of customer $i \in \{1,\ldots, N_A\}$ at time $t\geq 0$. Here, $Y_{A_i}^{\alpha}$ can be interpreted as the arrival process of active customers, and $\Phi_{A_i}^{\alpha}(t)$ is the cumulative lost service capacity over $[0,t]$. \item[ii.] The passive population of the system is described by the following process: \begin{align}\label{passive-eqn} X_{P_j}^\beta(t) = X_{P_j}^\beta(0) + Y_{P_j}^{\beta}(t) + \Phi_{P_j}^{\beta}(t), \end{align} where $X_{P_j}^\beta(t)$ denotes the position of customer $j \in \{1,\ldots, N_P\}$ at time $t \geq 0$. Similarly, $Y_{P_j}^{\beta}$ can be interpreted as the arrival process of passive customers, and $\Phi_{P_j}^{\beta}(t)$ is the cumulative lost service capacity over $[0,t]$. \end{itemize} Motivated by \cite{Pilipenko2014} (cf. Section 3.3), we consider a limit theorem for a model of queueing theory that leads us to a system of reflecting SDEs. Using the central limit theorem, we have \begin{align}\label{limit-1} \frac{Y_{A_i}^{\alpha}(t) - \alpha t}{\sqrt{\alpha}} \to w_{A_i}(t) \text{ as } \alpha \to \infty, \end{align} and \begin{align}\label{limit-2} \frac{Y_{P_j}^{\beta}(t) - \beta t}{\sqrt{\beta}} \to w_{P_j}(t) \text{ as } \beta \to \infty \end{align} in $\Lambda$.
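The diffusive scaling behind these limits can be checked numerically. The following minimal Python sketch (an illustration under our own assumptions, not part of the model implementation; the function name is ours) samples the centered, rescaled count of a Poisson arrival process of intensity $\alpha$ at a fixed time $t$ and shows that, for large $\alpha$, its mean and variance approach those of a Brownian motion at time $t$:

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_poisson_increment(alpha, t, n_samples):
    """Sample (Y(t) - alpha*t)/sqrt(alpha), where Y is a Poisson process
    of intensity alpha; for large alpha this is approximately N(0, t)."""
    counts = rng.poisson(alpha * t, size=n_samples)
    return (counts - alpha * t) / np.sqrt(alpha)

samples = scaled_poisson_increment(alpha=10_000.0, t=2.0, n_samples=50_000)
mean, var = float(np.mean(samples)), float(np.var(samples))
# for large alpha, mean is close to 0 and var is close to t
```

The same computation with the passive intensity $\beta$ in place of $\alpha$ illustrates the second limit.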
Similarly, we also have \begin{align}\label{Skorohod-1} \left(\frac{X_{A_i}^\alpha(t)}{\sqrt{\alpha}},\frac{\Phi_{A_i}^\alpha(t)}{\sqrt{\alpha}} \right) \to (\xi_{A_i}(t), \phi_{A_i}(t)) \text{ as } \alpha \to \infty, \end{align} and \begin{align}\label{Skorohod-2} \left(\frac{X_{P_j}^\beta(t)}{\sqrt{\beta}},\frac{\Phi_{P_j}^\beta(t)}{\sqrt{\beta}} \right) \to (\xi_{P_j}(t), \phi_{P_j}(t)) \text{ as } \beta \to \infty \end{align} in $\Lambda$. As a result, the pairs $(\xi_{A_i}(t), \phi_{A_i}(t))$ and $(\xi_{P_j}(t), \phi_{P_j}(t))$ are the solutions of the following Skorokhod-type equations: \begin{align} \xi_{A_i}(t) &= w_{A_i}(t) + \phi_{A_i}(t), \\ \xi_{P_j}(t) &= w_{P_j}(t) + \phi_{P_j}(t). \end{align} In the lattice-based numerical implementation of this model, each customer is represented as a particle. The resulting coupled particle system interacts via the site exclusion principle, i.e., each site of the lattice can be occupied by at most one particle. \subsection{Numerical results}\label{KMC} Our representative examples are reported here for the two species of \emph{active} and \emph{passive} customers moving inside $\Lambda$ (in the notation we use the symbols A and P, respectively, to refer to them). Note that the sites of the external boundary of the room, i.e., the sites $x\in\mathbb{Z}^2\setminus\Lambda$ having a nearest neighbor $y\in\Lambda$, cannot be accessed by the customers. We call the state of the system a \emph{configuration} $\eta\in\Omega=\{-1,0,1\}^\Lambda$, and we shall say that the site $x$ is \emph{empty} if $\eta_x=0$, \emph{occupied by an active customer} if $\eta_x=1$, and \emph{occupied by a passive customer} if $\eta_x=-1$. The number of active (respectively, passive) customers in the configuration $\eta$ is given by $n_{\text{A}}(\eta)=\sum_{x\in\Lambda}\delta_{1,\eta_x}$ (resp.\ $n_{\text{P}}(\eta)=\sum_{x\in\Lambda}\delta_{-1,\eta_x}$), where $\delta_{\cdot,\cdot}$ is Kronecker's symbol.
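For a reflecting barrier at the origin, the pair $(\xi,\phi)$ solving $\xi = w + \phi$ admits the explicit Skorokhod representation $\phi(t) = \max\big(0, -\min_{s\le t} w(s)\big)$. A minimal Python sketch of this construction (our illustration, assuming reflection at $0$ and a driving path starting at a non-negative value; not part of the lattice implementation):

```python
import numpy as np

def skorokhod_map(w):
    """One-dimensional Skorokhod map at 0: given a path w with w[0] >= 0,
    return (xi, phi) with xi = w + phi >= 0, phi non-decreasing, phi[0] = 0,
    and phi increasing only when xi touches 0."""
    running_min = np.minimum.accumulate(w)
    phi = np.maximum(0.0, -running_min)
    xi = w + phi
    return xi, phi

# driving path: a discretized Brownian motion playing the role of w_{A_i}
rng = np.random.default_rng(1)
dt = 1e-3
w = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(5000))])
xi, phi = skorokhod_map(w)
assert xi.min() >= 0.0 and np.all(np.diff(phi) >= 0.0)
```

The regulator $\phi$ plays the role of the rescaled lost service capacity $\Phi/\sqrt{\alpha}$ in the limit above.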
The sum $n_{\text{A}}(\eta)+n_{\text{P}}(\eta)$ is the total number of particles in the configuration $\eta$. The overall dynamics of our system is encoded in the continuous-time Markov chain $\eta(t)$ on $\Omega$ with rates $c(\eta,\eta')$; for the detailed definitions of the rates $c(\eta(t),\eta)$, see, e.g., \cite{Cirillo2019, Cirillo2020}. We take inspiration from the population dynamics in \cite{Cirillo2019, Colangeli2019} to model the dynamics of our active and passive customers. Using descriptions based on a simple exclusion process, the dynamics in the room is modeled as follows: the passive customers perform a symmetric simple exclusion dynamics on the whole lattice, whereas the active customers are subject to a drift guiding the particles towards the exit door. The numerical results reported in this section are obtained by using the kinetic Monte Carlo (KMC) method. In particular, we simulate the presented model by using the following scheme: at time $t$ we extract an exponential random time $\tau$ with parameter equal to the total rate $\sum_{\zeta\in\Omega}c(\eta(t),\zeta)$, and set the time $t$ equal to $t+\tau$. Next, we select a configuration using the probability distribution $c(\eta(t),\eta)/\sum_{\zeta\in\Omega}c(\eta(t),\zeta)$ and set $\eta(t+\tau)=\eta$ (for the detailed definitions of the rates $c(\eta(t),\eta)$, see, e.g., \cite{Cirillo2019,Cirillo2020}). This numerical scheme was initially studied in \cite{Cirillo2019}, where the authors implemented a version of KMC to analyze the pedestrian escape from an obscure room by using a lattice gas model with two species of particles. Further validation of the numerical methodology used here was reported in \cite{Cirillo2020}, based on a counterflow pedestrian model. Note that the overall dynamics of our system is based on a continuous-time Markov chain, i.e.
the process remains in a state for an exponentially distributed random time and then moves to a different state as specified by the probabilities of a stochastic matrix, together with a simple exclusion process. On the other hand, we use a statistical-mechanics-based lattice gas framework, where we employ a kinetic Monte Carlo procedure, to simulate the queueing theory model described earlier in this section. In general, Monte Carlo statistical methods, particularly those based on Markov chains, provide a possible way to sample the distribution of the input random variable. Moreover, the reflecting stochastic queueing theory model we discussed here is a low-fidelity modelling methodology for general behavioral systems. In order to approximate this reflecting stochastic dynamics problem, we combine the coupled system of SDEs with a high-fidelity modelling method by using a statistical-mechanics-based lattice gas approach. Therefore, the methodology used in our model can be considered as a multi-fidelity approach to statistical inference (see, e.g., \cite{Peherstorfer2018-2,Robert2004}). Here, we consider the system defined in Section \ref{model-1} for $L=60$, $\omega=20$. All of the simulations are obtained starting from the same initial configuration, chosen once and for all by distributing the customers at random with the uniform probability distribution. All other necessary details are provided in the figure captions. The main representative numerical results of our analysis are shown in Fig. \ref{fig:fig1}. We have plotted the total number of customers and the customer currents (outgoing fluxes) as functions of time (see, e.g., \cite{Colangeli2019}), for different values of $\varepsilon$ ($\varepsilon \geq 0$ is the drift quantity). Note that the current of active customers is defined in the infinite-time limit as the ratio between the total number of active customers that, in the interval $(0, t)$, passed through the servers to leave the geometry, and the time $t$.
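The KMC update used above is a standard Gillespie-type step. The following Python fragment is a schematic illustration with a hypothetical rate table, not the actual implementation of \cite{Cirillo2019,Cirillo2020}: it draws the exponential waiting time with parameter equal to the total rate, and selects the next configuration with probability proportional to its rate.

```python
import numpy as np

def kmc_step(rates, t, rng):
    """One kinetic Monte Carlo (Gillespie) update: given the rates
    c(eta, zeta) out of the current configuration, draw the exponential
    waiting time tau with parameter equal to the total rate, and pick the
    index of the next configuration with probability rate/total."""
    total = rates.sum()
    tau = rng.exponential(1.0 / total)           # waiting time before the jump
    k = rng.choice(len(rates), p=rates / total)  # index of the next configuration
    return t + tau, k

rng = np.random.default_rng(2)
rates = np.array([0.5, 1.5, 2.0])  # toy rate table (hypothetical values)
t, k = kmc_step(rates, t=0.0, rng=rng)
```

Iterating this step, and counting the jumps through the servers in $(0,t)$ divided by $t$, gives the current as defined above.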
Among other observations, our numerical results have demonstrated that the presence of passive customers increases the residence time of active customers. Furthermore, we have observed that \textquotedblleft too smart\textquotedblright \ active customers increase their exit times. By the notion of \textquotedblleft too smart\textquotedblright, applied to active customers, we mean that when the values of $\varepsilon$ are large enough, active customers move too quickly towards the exit door. As seen in the left panel of Fig. \ref{fig:fig1}, the dynamics in the presence of passive customers increases the residence time of active customers. The current of active customers in the dynamics with only active customers is larger than in the case with both active and passive customers in the geometry, for $\varepsilon=0.1$, $\varepsilon=0.3$ and $\varepsilon=0.5$. To explain the observed effect, we examine the corresponding active customer exit times in the right panel of Fig. \ref{fig:fig1}. Due to the simple exclusion principle, the presence of passive customers can be seen as an obstacle in the geometry, explaining the increase in the waiting time of active customers. However, when increasing the value of $\varepsilon$ from $0.1$ to $0.3$ and $0.5$, the current of active customers in the dynamics with both active and passive customers is smaller for $\varepsilon=0.5$ than for $\varepsilon=0.3$; similarly, it is smaller for $\varepsilon=0.3$ than for $\varepsilon=0.1$. This effect is interesting: in general, when we increase the value of the drift quantity, the current of active customers should increase. Looking at the corresponding active customer exit times in the right panel of Fig.
\ref{fig:fig1}, we see the same effect: the residence time of active customers in the case $\varepsilon=0.3$ is larger than in the case $\varepsilon=0.1$. This is due to the fact that an increase in the value of $\varepsilon$ lets the active customers become \textquotedblleft too smart\textquotedblright. Hence, the active customers go quickly to the exit door and accumulate there; together with the presence of passive customers in the geometry, this makes the queue longer and increases the exit time of the active customers. This phenomenon can also be observed in the dynamics with only active customers, where the current of active customers with $\varepsilon=0.5$ is larger than the current with $\varepsilon=0.3$ up to a certain point, after which it becomes smaller than in the case $\varepsilon=0.3$. A similar effect can be observed in the corresponding active customer exit times in the right panel of Fig. \ref{fig:fig1}. Finally, it is worth noting that \textquotedblleft too smart\textquotedblright \ active customers increase their residence times. As a closing note for this discussion, the presence of passive customers increases the residence time of active customers in the system. As further research, this numerical investigation would be interesting not only for queueing theory models but also for applications in neural networks, social networks, and so on. As we mentioned above, our reflecting stochastic dynamics for active-passive populations can also be applied in a neuronal model, which we consider next. \begin{figure}[h!] 
\centering \begin{tabular}{ll} \includegraphics[width=0.45\textwidth]{current-queue.pdf}& \includegraphics[width=0.45\textwidth]{par-queue.pdf} \end{tabular} \caption{Left panel: Evolution of the current as a function of time for active-passive customers and active customer exit times for $L=60$, $N_A=N_P=1200$ (empty symbols) and $N_A=1200, N_P=0$ (solid symbols) with $\varepsilon=0.1$ (circles), $\varepsilon=0.3$ (triangles), $\varepsilon=0.5$ (squares). Right panel: Behavioral pattern of the crowd for $L=60$, $N_A=N_P=1200$ (empty symbols) and $N_A=1200, N_P=0$ (solid symbols) with $\varepsilon=0.1$ (circles), $\varepsilon=0.3$ (triangles), $\varepsilon=0.5$ (squares).} \label{fig:fig1} \end{figure} \section{An application in neuroscience}\label{neuro-model} In this section, we study a system of reflecting stochastic dynamics of a cell's membrane potential model for two distinct ions, $\text{Na}+$ and $\text{Cl}-$. Specifically, we define the model and provide numerical examples. \subsection{Model description} For the functionality of a nervous system, neurons must be able to send and receive signals (see, e.g., \cite{Rye2016}). In particular, each neuron has a charged cellular membrane that causes a voltage difference between the inside and the outside. This charge of the membrane can change in response to neurotransmitter molecules released from other neurons and environmental stimuli. Therefore, to better understand the communication among neurons, we must first understand the \textquotedblleft resting\textquotedblright\ membrane charge. In general, each neuron is surrounded by a lipid bilayer membrane that is impermeable to charged molecules and ions. Hence, ions must pass through special proteins called ion channels that span the membrane to enter or exit the neuron (see, e.g., \cite{Rye2016, Laing2010}). Ion channels can be in different configurations: open, closed, and inactive; see, for instance, Fig. \ref{fig:membrane}.
These ion channels are sensitive to the environment and can change their shape accordingly. Ion channels that change their structure in response to voltage changes are called voltage-gated ion channels. Voltage-gated ion channels regulate the relative concentrations of different ions inside and outside the cell membrane. The difference in total charge between the inside and outside of the cell is called the membrane potential. Moreover, a neuron can receive input from other neurons, and the transmission of a signal between neurons is generally carried by a chemical called a neurotransmitter. Transmission of a signal within a neuron, from dendrite to axon terminal, is carried by a brief reversal of the resting membrane potential called an action potential; see, e.g., Fig. \ref{fig:action-potential}. In general, when neurotransmitter molecules bind to receptors located on a neuron's dendrites, ion channels open. At excitatory synapses, this opening of ion channels allows positive ions to enter the neuron and results in depolarization of the membrane, i.e., a decrease in the difference in voltage between the inside and outside of the neuron (see, e.g., \cite{Laing2010}). Moreover, these ion channels open only for a short time, which causes an excess of positive ions inside the membrane, and the positively charged ions then diffuse out. As these positive ions go out, the inside of the membrane once again becomes negative with respect to the outside, a phenomenon called hyperpolarization. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{membrane.png} \caption{Schematic representation of the cell's membrane potential model. At the peak of the action potential, $\text{Cl}-$ channels close while $\text{Na}+$ channels open.
$\text{Na}+$ ions leave the cell, and the membrane eventually becomes hyperpolarized (Created with BioRender.com).} \label{fig:membrane} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{action-potential.png} \caption{Schematic representation of an action potential, illustrating its various phases as the action potential passes a point on a cell membrane (Created with BioRender.com). } \label{fig:action-potential} \end{figure} There are many different types of stochastic processes relevant to the quantitative modelling of stochastic neural activity. An example is the behaviour of ion channels described above, which plays an important role in the neural dynamics of action potential generation. Specifically, the membrane potential can vary continuously and is driven by synaptic noise and channel noise. Hence, one of the simplest models describing the membrane potential evolution is provided by one-dimensional processes. Furthermore, the membrane fluctuations obey a simple SDE which is formally equivalent to the Ornstein-Uhlenbeck process from statistical physics; see the example in Fig. \ref{fig:OU}, where we plot $100$ randomly generated sample paths of the process using the Euler-Maruyama method for SDEs (see, e.g., \cite{Higham2001}). In the Ornstein-Uhlenbeck process, the drift term represents the momentum lost to the drag force, while the diffusion term corresponds to a rapidly fluctuating random force. Therefore, the sample paths of this process help to explain the movement of particles in, e.g., biological systems, social dynamic models, etc. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{OU-samplepaths.png} \caption{(Color online) Ornstein-Uhlenbeck process for $100$ sample paths with $\mu=1$ and $\sigma=1$.
This figure is obtained by using the Euler-Maruyama method for SDEs (see, e.g., \cite{Higham2001}).} \label{fig:OU} \end{figure} In what follows, we discuss the application of the reflected Ornstein-Uhlenbeck process in neuroscience, as well as another approach, based on a lattice gas setting, to analyze the behavior of neurons in a membrane potential model. Let us first recall the Ornstein-Uhlenbeck process modeling the action potential of neurons (see, e.g., \cite{Ha2009, Xing2012, Bo2013}). Let $\{V_t : t \geq 0\}$ be a reflected Ornstein-Uhlenbeck process defined on $[r, \infty)$ with drift $(\mu - V_t/\gamma)$ and constant diffusion parameter $\sigma$. Then, the process $\{V_t : t \geq 0\}$ satisfies the following SDE: \begin{align}\label{OU} \begin{cases} dV_t = (\mu - \frac{1}{\gamma}V_t)dt + \sigma dW_t + dL_t, \\ V_0 \in [r, \infty), \end{cases} \end{align} where $W_t$ is a standard Brownian motion, while $\{L_t: t \geq 0\}$ is a continuous non-decreasing process with $L_0 = 0$. In particular, the process $\{L_t: t \geq 0\}$ increases only when $V_t = r$, and it keeps $V_t \geq r$ for all $t$. In \eqref{OU}, the quantity $1/\gamma$ is the speed of reversion to the long-term mean $\mu \gamma$, and $\sigma$ represents the (instantaneous) diffusion coefficient. The Ornstein-Uhlenbeck process can be interpreted as the scaling limit of mean-reverting discrete Markov chains, analogous to Brownian motion being the scaling limit of the simple random walk. A particular example is the Ehrenfest urn model discussed in \cite{Casas2015}. After the membrane potential threshold is reached and the neuron fires, the potential frequently goes below the resting baseline level, a behavior known as hyperpolarization. Hence, a reflecting boundary setting for the Ornstein-Uhlenbeck model can capture the dynamics of the membrane potential with a hyperpolarization level. In particular, the reflecting boundary in the Ornstein-Uhlenbeck model is the maximum hyperpolarization level.
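A simple discretization of \eqref{OU} can be obtained with the Euler-Maruyama method, projecting the state back onto $[r,\infty)$ after each step and accumulating the pushed amount in the regulator $L_t$. The following Python sketch is our minimal illustration of one common projection scheme for reflected SDEs (using the parameter values $\mu=1.2$, $\gamma=10$, $\sigma=0.3$, $r=0.5$ of Fig. \ref{fig:fig2}); it is not the paper's exact implementation:

```python
import numpy as np

def reflected_ou_path(v0, mu, gamma, sigma, r, dt, n_steps, rng):
    """Euler-Maruyama scheme for dV = (mu - V/gamma) dt + sigma dW + dL,
    reflected at the lower barrier r: after each unconstrained step the
    state is projected back onto [r, inf) and the pushed amount is
    accumulated in the regulator L."""
    v = np.empty(n_steps + 1)
    ell = np.empty(n_steps + 1)  # discrete approximation of L_t
    v[0], ell[0] = v0, 0.0
    for n in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()
        v_free = v[n] + (mu - v[n] / gamma) * dt + sigma * dw
        push = max(0.0, r - v_free)  # positive only if the step left [r, inf)
        v[n + 1] = v_free + push
        ell[n + 1] = ell[n] + push
    return v, ell

rng = np.random.default_rng(3)
v, ell = reflected_ou_path(v0=0.6, mu=1.2, gamma=10.0, sigma=0.3,
                           r=0.5, dt=1e-3, n_steps=10_000, rng=rng)
assert v.min() >= 0.5 and np.all(np.diff(ell) >= 0.0)
```

Setting $r$ to the maximum hyperpolarization level keeps the simulated potential above that level, as in the red curve of Fig. \ref{fig:fig2}.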
By using this reflecting stochastic dynamics, we can control the action potential so that it goes down immediately to the resting state after reaching its peak. Using the Euler-Maruyama method for SDEs (see, e.g., \cite{Higham2001}), we provide a numerical example of the membrane potential evolution following the Ornstein-Uhlenbeck process in Fig. \ref{fig:fig2}. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{reflectedOU2.png} \caption{Membrane potential following the Ornstein-Uhlenbeck process with $\mu = 1.2$, $\gamma=10$ and $\sigma=0.3$ for two cases: the red curve represents the SDE with reflecting boundary condition at $r=0.5$, while the blue curve represents the SDE without reflecting boundary condition.} \label{fig:fig2} \end{figure} We have shown a simple neuronal model based on one-dimensional processes to describe the membrane potential evolution, where we are interested in the movement of only one single neuron. In general, since the human nervous system consists of billions of neural cells (or neurons), this brings a lot of challenging questions to the modeling of such biological systems. For example, one of the questions we will try to answer in the remainder of this paper is how to capture the behaviour of many ions when the action potential reaches its peak and then goes down to the hyperpolarization state. To address this question, we take inspiration from the queueing theory model presented in Section \ref{model}. We assume that there are two different populations of ions, $\text{Na}+$ and $\text{Cl}-$, moving in a cell's membrane potential (a $2\text{D}$ lattice). We use the lattice gas dynamics of Section \ref{model} for the case $\varepsilon=0$, where now the model describes the dynamics of two populations of ions in a membrane potential. Here, $\text{Na}+$ ions can be seen as active customers, while $\text{Cl}-$ ions can be considered as passive customers.
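The $\varepsilon=0$ lattice dynamics just described can be sketched in a few lines. The following Python fragment is an illustrative toy version under our own simplifying choices (random-order sweeps over the initially occupied sites instead of exponential clocks, a hypothetical $20\times 20$ grid and channel placement); it is not the actual KMC implementation of Section \ref{model}:

```python
import numpy as np

def sep_sweep(grid, channel_cols, rng):
    """One sweep of symmetric simple-exclusion dynamics (the eps = 0 case):
    visit the sites occupied at the start of the sweep in random order and
    attempt a move to a uniformly chosen nearest neighbor, which succeeds
    only if that site is empty.  A +1 particle (Na+) sitting on the top row
    at a channel column leaves the system; -1 particles (Cl-) never leave."""
    L = grid.shape[0]
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    exited = 0
    occupied = np.argwhere(grid != 0)
    for idx in rng.permutation(len(occupied)):
        x, y = occupied[idx]
        if grid[x, y] == 0:  # the particle already moved this sweep
            continue
        if grid[x, y] == 1 and x == 0 and int(y) in channel_cols:
            grid[x, y] = 0   # Na+ exits through an open channel
            exited += 1
            continue
        dx, dy = moves[rng.integers(4)]
        nx, ny = x + dx, y + dy
        if 0 <= nx < L and 0 <= ny < L and grid[nx, ny] == 0:
            grid[nx, ny] = grid[x, y]  # site exclusion: move into empty site
            grid[x, y] = 0
    return exited

rng = np.random.default_rng(4)
L = 20
grid = np.zeros((L, L), dtype=int)
sites = rng.choice(L * L, size=120, replace=False)
grid.flat[sites[:60]] = 1    # Na+ ions
grid.flat[sites[60:]] = -1   # Cl- ions
channel_cols = set(range(8, 12))
total_exited = sum(sep_sweep(grid, channel_cols, rng) for _ in range(500))
```

By construction, the number of $\text{Cl}-$ particles is conserved, while the number of $\text{Na}+$ particles remaining on the grid plus the exited count stays constant.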
For all of the detailed analysis of the reflecting stochastic dynamics of such ions ($\text{Na}+$ and $\text{Cl}-$), we use a description analogous to that of Section \ref{model}, replacing $A$ by $\text{Na}+$ and $P$ by $\text{Cl}-$. For the lattice gas approximation in this application, we provide the detailed setting in the next section. \subsection{Numerical results} In this section, we consider a cell's membrane potential with two different populations of ions: $\text{Na}+$ and $\text{Cl}-$. The geometry is a square lattice $\Lambda:=\{1,\dots,L\}\times\{1,\dots,L\}\subset \mathbb{Z}^2$ of side length $L$, where $L$ is an odd positive integer. In this context $\Lambda$ will be referred to as the {\em nerve net}. An element $x=(x_1,x_2)$ of the nerve net $\Lambda$ is called a \emph{site} or \emph{cell}. Two sites $x,y\in\Lambda$ are said to be \emph{nearest neighbors} if and only if $|x-y|=1$. In addition, $N_{\text{Na}+}$ is the total number of $\text{Na}+$ ions and $N_{\text{Cl}-}$ is the total number of $\text{Cl}-$ ions, with $N:=N_{\text{Na+}}+N_{\text{Cl-}}$ and $N_{\text{Na+}}, N_{\text{Cl-}}, N \in \mathbb{N}$. Furthermore, we assume that ion channels are located on the top row of the geometry, and that there are $\omega$ such channels. The ion channels open only for $\text{Na}+$ ions to leave the cell membrane, while the $\text{Cl}-$ ions cannot leave the domain due to the inaccessibility of the channels. This model is illustrated in Fig. \ref{fig:fig3}, where we show the configuration of the system at different times. The opening of channels that let only positive ions flow out of the cell can cause hyperpolarization. This phenomenon is observed when the membrane potential becomes more negative at a particular spot on the neuron's membrane. \begin{figure}[h!] 
\centering \begin{tabular}{lll} \includegraphics[width=0.33\textwidth]{movie1.pdf}& \includegraphics[width=0.33\textwidth]{movie2.pdf}& \includegraphics[width=0.33\textwidth]{movie3.pdf}\\[0.1cm] \includegraphics[width=0.33\textwidth]{movie4.pdf}& \includegraphics[width=0.33\textwidth]{movie5.pdf}& \includegraphics[width=0.33\textwidth]{movie6.pdf} \\[0.1cm] \includegraphics[width=0.33\textwidth]{movie7.pdf}& \includegraphics[width=0.33\textwidth]{movie8.pdf}& \includegraphics[width=0.33\textwidth]{movie9.pdf} \end{tabular} \caption{\small (Color online) Configurations of the model at different times (increasing in lexicographic order). Parameters: $L=60$, $w_{\mathrm{ex}}=20$ and $\varepsilon=0.2$. Red pixels represent active particles, blue pixels denote passive particles, and gray sites are empty. In the initial configuration (top left panel) there are $1200$ active and $1200$ passive particles.} \label{fig:fig3} \end{figure} To simulate this process, as mentioned above, we exploit the idea of a lattice gas model to describe the dynamics of neurons in a cell's membrane potential model. We use the same numerical scheme as in Section \ref{KMC}, the only difference being that active customers are now considered as $\text{Na}+$ ions and passive customers are represented by $\text{Cl}-$ ions. The neuroscience model that we study here has been developed following the idea presented in Section \ref{model} for the case $\varepsilon=0$. This means that both $\text{Na}+$ and $\text{Cl}-$ ions perform a symmetric simple exclusion dynamics on the whole lattice, where only $\text{Na}+$ ions can flow out of the channels. At this moment, we consider the case without chemical reaction terms in our neuronal dynamics. To get closer to a more realistic picture, the dynamics of neurons in a cell's membrane potential model would have to include chemical reaction terms.
However, in such a case, the situation becomes a lot more challenging due to a feedback mechanism of the chemical reaction quantities together with interactions between the dynamics of ions and the surrounding environment. Such a generalized model is not a subject of our investigation here, and it will be reported elsewhere. \begin{figure}[h!] \centering \begin{tabular}{ll} \includegraphics[width=0.45\textwidth]{current-neuron.pdf}& \includegraphics[width=0.45\textwidth]{par-neuron.pdf} \end{tabular} \caption{Left panel: Evolution of the current as a function of time for $L=60$, $N_{\text{Na+}}=N_{\text{Cl}-}=1200$ (empty symbols), $\omega = 15$ (circles), $\omega = 20$ (triangles), $\omega = 30$ (squares), $\omega = 40$ (diamonds), $\omega = 60$ (pentagons). Right panel: Behavioral pattern of the crowd for $L=60$, $N_{\text{Na+}}=N_{\text{Cl}-}=1200$ (empty symbols), $\omega = 15$ (circles), $\omega = 20$ (triangles), $\omega = 30$ (squares), $\omega = 40$ (diamonds), $\omega = 60$ (pentagons).} \label{fig:fig4} \end{figure} \begin{figure}[h!] \centering \begin{tabular}{ll} \includegraphics[width=0.45\textwidth]{current-neuro-density.pdf}& \includegraphics[width=0.45\textwidth]{par-neuro-density.pdf} \end{tabular} \caption{Left panel: Evolution of the current as a function of time for $L=60$, $N_{\text{Na+}}=800, N_{\text{Cl}-}=1600$ (empty circles), $N_{\text{Na+}}=N_{\text{Cl}-}=1200$ (empty triangles), $N_{\text{Na+}}=1600, N_{\text{Cl}-}=800$ (empty squares). Right panel: Behavioral pattern of the crowd for $L=60$, $N_{\text{Na+}}=800, N_{\text{Cl}-}=1600$ (empty circles), $N_{\text{Na+}}=N_{\text{Cl}-}=1200$ (empty triangles), $N_{\text{Na+}}=1600, N_{\text{Cl}-}=800$ (empty squares).} \label{fig:fig5} \end{figure} The main numerical results of this part of our investigation are shown in Figs. \ref{fig:fig4} and \ref{fig:fig5}.
We have plotted the ion currents and the behavioral pattern of the ions as functions of time, for different numbers of ion channels in Fig. \ref{fig:fig4} and for different densities of $\text{Na}+$ and $\text{Cl}-$ ions in the cell in Fig. \ref{fig:fig5}. In the left panel of Fig. \ref{fig:fig4}, we observe that the current of $\text{Na}+$ ions increases when we increase the number of open ion channels. This is also visible in the corresponding $\text{Na}+$ ion exit times in the right panel of Fig. \ref{fig:fig4}. Furthermore, we have also examined the numerical results for different densities of $\text{Na}+$ ions in the given geometry. In Fig. \ref{fig:fig5}, we obtained similar results: when we increase the density of $\text{Na}+$ ions, the currents of ions also increase. Moreover, we have shown the behavioral pattern of $\text{Na}+$ ions at the corresponding $\text{Na}+$ ion exit times in the right panel of Fig. \ref{fig:fig5}. The activity of neurons in a cell's membrane potential problem can be predicted via this behavioral system, so that we can control the currents of ions in the cell for specific purposes. Our lattice model opens an interesting research direction in which the behavior of many ions can be investigated in the study of neuroscience by further developing its intrinsic links with reflecting stochastic processes of mixed population dynamics. The next step would be the investigation of the dynamics of ions in the presence of a feedback mechanism of chemical reaction terms in the associated neuronal models. \section{Concluding remarks}\label{concluding-remarks} We have analyzed numerically the reflecting stochastic dynamics of active-passive populations and provided two representative examples, in a queueing theory model and in neuroscience. We have used a statistical-mechanics-based lattice gas framework, employing a kinetic Monte Carlo procedure, for the implementation of our corresponding models.
In the first model, based on our numerical experiments, we have observed the impact of passive customers on the residence times of the active population, which allowed us to conclude that the presence of passive customers in the system increases the waiting time of the active customers. In reality, setting a limit on the presence of passive customers would allow for an optimization approach on the waiting time of the queues. In the current consideration, we have limited our representative examples to a situation where the interaction between active and passive customers is subject to simple exclusion rules. It would be instructive to analyze situations where nonlocal interactions among the crowd participants are also allowed. In the second model, based on our numerical experiments, we have examined the behavior of many ions when the action potential reaches its peak and then goes down to the hyperpolarization state. Via this behavioral system, the activity of neurons in a cell's membrane potential problem can be predicted. In the current model, we investigated the behavior of many ions in the absence of chemical reaction terms. It would be interesting to extend the model to a more complex scenario where chemical reaction terms are added to the system. A comparison of the theoretical predictions with real data would benefit further progress in the development of the presented framework. The class of reflecting stochastic dynamics in operations research and neuroscience models provides a bridge between reflecting stochastic dynamics in a confined domain and a lattice gas approximation via a limit theorem. The ideas presented in this contribution may be extended to queueing theory models in healthcare systems subjected to an epidemic, e.g. 
by taking symptomatic-asymptomatic populations into account \cite{Meares2020,Tadic2020,Tadic2021}, as well as to other application areas where we have to deal with uncertainties intrinsic to two groups of populations with distinct behavioral patterns. \section*{Acknowledgment} The authors are grateful to NSERC and the CRC Program for their support. RM also acknowledges the support of the BERC 2018-2021 program and Spanish Ministry of Science, Innovation and Universities through the Agencia Estatal de Investigacion (AEI) BCAM Severo Ochoa excellence accreditation SEV-2017-0718 and the Basque Government fund AI in BCAM EXP. 2019/00432.
\section{Introduction} \label{sec:intro} \vspace{-1mm} In recent years, speech synthesis technology has rapidly improved with the introduction of deep neural networks. In particular, WaveNet~\cite{oord-2016-wavenet}, which has an autoregressive (AR) structure, directly models the distributions of waveform samples and has demonstrated remarkable performance. WaveNet can be used as a speech vocoder by conditioning on auxiliary features such as a Mel-spectrogram or acoustic features extracted by a conventional signal processing-based vocoder~\cite{tamamori-2017-speaker}. WaveNet is also used in state-of-the-art speech synthesis systems, where it greatly contributes to improving the quality of synthesized speech~\cite{shen-2018-natural,ping-2017-deep}. However, WaveNet suffers from slow inference speed because of the AR mechanism and huge network architectures. Although compact AR models~\cite{kalchbrenner-2018-efficient,valin-2019-lpcnet} have been proposed to accelerate inference, the speedup is limited because audio samples must still be generated sequentially. Thus, such models are not suited for real-time TTS applications. Recently, significant efforts have been devoted to building non-AR models to resolve this problem. Parallel WaveNet~\cite{oord-2018-parallel} and ClariNet~\cite{ping-2018-clarinet} introduce teacher-student knowledge distillation. This framework transfers the knowledge from an AR teacher WaveNet to an inverse autoregressive flow (IAF)-based non-AR student model~\cite{kingma-2016-improved}. The IAF student model is highly parallelizable and can synthesize high-quality waveforms. However, the training procedure is complicated because it requires a well-trained teacher model as well as a mix of distilling and other perceptual training criteria. WaveGlow~\cite{prenger-2019-waveglow} and FloWaveNet~\cite{kim-2018-flowavenet} with flow-based generative models have been proposed as well. 
Although these models can be directly learned by minimizing the negative log-likelihood of the training data, they need a huge number of parameters and require many GPU resources to obtain optimal results for a single speaker model. Another approach for parallel waveform generation is to use generative adversarial networks (GANs)~\cite{goodfellow-2014-generative}. A GAN is a powerful generative model that has been successfully used in various research fields such as image generation~\cite{reed-2016-generative}, speech synthesis~\cite{saito-2018-statistical}, and singing voice synthesis~\cite{hono-2019-singing}. GAN-based models have also been proposed for waveform generation~\cite{yamamoto-2020-parallel,kumar-2019-melgan}. Since these training frameworks enable models to effectively capture the time-frequency distribution of the speech waveform and improve training stability, these GAN-based models are much easier to train than the conventional non-AR methods described above. A neural vocoder can generate high-fidelity waveforms since it can restore missing information from the acoustic feature in a data-driven fashion and is less limited by the knowledge and assumptions of the conventional vocoder~\cite{kawahara-1999-restructuring,morise-2016-world}. However, this also results in a lack of acoustic controllability and robustness. In fact, it is difficult for a neural vocoder to generate a speech waveform with accurate pitches outside the range of the training data. Some methods with explicit periodic signals~\cite{wang-2019-neural,oura-2019-deep} and methods with a pitch-dependent convolution mechanism~\cite{wu-2020-quasi-pwg} address this problem. It is known that both periodic and aperiodic components are mixed in speech waveforms. Although neural vocoders often model speech waveforms as single signals without considering these mixed components, it is important to take them into account to model speech waveforms more effectively. 
In particular, when the neural vocoder is used in a singing voice synthesis system~\cite{nishimura-2016-singing,hono-2018-recent}, the accuracy of pitch and breath sound reproduction has a significant effect on quality and naturalness. Several methods for decomposing the periodic and aperiodic components contained in the speech waveform have been proposed~\cite{serra-1990-spectral,zubrycki-2007-accurate}. However, it is still difficult to decompose them, and it is not optimal to use decomposed waveforms, including decomposition errors, as the training data for the neural vocoders. In this paper, we consider speech waveform modeling in terms of the model structures and propose PeriodNet, a non-autoregressive neural vocoder for better speech waveform modeling. We introduce two versions with different model structures, a parallel model and a series model, assuming that the periodic and aperiodic waveforms can be generated from explicit periodic and aperiodic signals, such as a sine wave and a noise sequence, respectively. Our proposed methods can also generate a waveform with a pitch outside the range of the training data. Moreover, our models are robust to the input pitch since they generate the periodic and aperiodic waveforms with two separate neural networks. \section{Waveform modeling} \vspace{-1mm} \subsection{Autoregressive neural vocoder} \vspace{-1mm} In neural vocoders with an AR structure~\cite{oord-2016-wavenet,kalchbrenner-2018-efficient,valin-2019-lpcnet}, a speech waveform at each timestep is modeled as a probability distribution conditioned on past speech samples and auxiliary features such as Mel-spectrograms and acoustic features. An overview of the AR neural vocoder is shown in Fig.~\ref{fig:ar}. In this paper, we use WaveNet~\cite{oord-2016-wavenet} as a neural network to generate waveforms (referred to as a generator in this paper). 
WaveNet has a stack of dilated causal convolutions with a gated activation function, and it is capable of modeling speech waveforms with complex periodicity. However, it cannot perform parallel inference and is slow to generate waveforms because of the AR structure. \subsection{Non-autoregressive neural vocoder} \vspace{-1mm} In non-AR neural vocoders~\cite{oord-2018-parallel,ping-2018-clarinet,prenger-2019-waveglow,kim-2018-flowavenet,yamamoto-2020-parallel,kumar-2019-melgan}, the neural network represents the mapping function from a pre-generated input signal, such as Gaussian noise, to the speech waveform. Hence, all waveform samples can be generated in parallel without incurring the expense of having to make predictions autoregressively. However, it is difficult to properly predict a speech waveform with autocorrelation from a noise sequence that lacks it. Prior studies~\cite{wang-2019-neural,oura-2019-deep} have proposed methods that use explicit periodic signals such as sine waves. These methods provide high pitch accuracy and can synthesize waveforms with a pitch not included in the training data. In this paper, following these attempts, we use the sine wave, noise, and voiced/unvoiced (V/UV) sequence as input signals, as shown in Fig.~\ref{fig:baseline}. Note that this V/UV sequence is smoothed in advance. Various architectures can be used for the generator in Fig.~\ref{fig:baseline}; we use a Parallel WaveGAN~\cite{yamamoto-2020-parallel}-based architecture. Details of model architectures will be described in Sec.~\ref{sec:details}. \section{Proposed model structures separating periodic and aperiodic components} \label{sec:methods} \vspace{-1mm} \subsection{Model structures} \label{sec:structures} \vspace{-1mm} A speech waveform contains periodic and aperiodic waveforms. In the structure shown in Fig.~\ref{fig:baseline}, the generation process of the periodic and aperiodic waveforms is represented by a single model. 
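The explicit input signals described above (a sine wave following frame-wise $F_0$, a noise sequence, and a smoothed V/UV sequence) can be sketched as follows. The function name, defaults, and the moving-average smoothing are illustrative assumptions rather than the paper's exact preprocessing.

```python
import math, random

def make_excitation(f0, sr=48000, hop=240, seed=0):
    """Build three sample-rate sequences from frame-wise F0 values
    (0.0 marks unvoiced frames): a sine wave with accumulated phase,
    Gaussian noise, and a crudely smoothed V/UV flag."""
    rng = random.Random(seed)
    n = len(f0) * hop
    sine, noise, vuv = [], [], []
    phase = 0.0
    for t in range(n):
        f = f0[t // hop]
        voiced = 1.0 if f > 0 else 0.0
        phase += 2.0 * math.pi * f / sr      # phase accumulation
        sine.append(math.sin(phase) * voiced)
        noise.append(rng.gauss(0.0, 1.0))
        vuv.append(voiced)
    # moving-average smoothing of the V/UV sequence (one stand-in for
    # the "smoothed in advance" step mentioned in the text)
    w = hop
    smoothed = [sum(vuv[max(0, t - w):t + w]) / len(vuv[max(0, t - w):t + w])
                for t in range(n)]
    return sine, noise, smoothed
```

Accumulating phase (rather than evaluating `sin(2*pi*f*t)` directly) keeps the sine continuous across frames with changing $F_0$.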
However, this structure is not always optimal for waveform modeling, especially when the accuracy of pitch and breath sound reproduction significantly affects quality and naturalness, such as in singing voice synthesis. We assume that the speech waveform is the sum of periodic and aperiodic components. The periodic and aperiodic components are expected to be easily created from the periodic and aperiodic signals (such as sine waves and noise sequences), respectively. Thus, in this paper, we propose a parallel model structure and a series model structure based on these assumptions. The parallel model structure is shown in Fig.~\ref{fig:parallel}. This structure assumes that the periodic and aperiodic waveforms are independent of each other. An explicit periodic signal consisting of a sine wave and a V/UV sequence is used to predict the periodic waveform, and an explicit aperiodic signal consisting of noise and a V/UV sequence is used to predict the aperiodic waveform. The series model structure is shown in Fig.~\ref{fig:series}. In this structure, we assume that the aperiodic waveform depends on the periodic waveform, considering the possibility that there is an aperiodic waveform corresponding to the phase of the periodic waveform. Specifically, we introduce a residual connection between the two generators so that the latter generator can predict the aperiodic component taking into account the dependence on the periodic component. In the parallel model and the series model, different acoustic features can be selected as the auxiliary features of the periodic and aperiodic generators, making it possible to obtain more robust neural vocoders with proper conditioning. \subsection{Model details} \label{sec:details} \vspace{-1mm} In this paper, we incorporate a Parallel WaveGAN~\cite{yamamoto-2020-parallel}-based framework into our non-AR baseline and proposed models, as shown in Fig.~\ref{fig:baseline}, Fig.~\ref{fig:parallel}, and Fig.~\ref{fig:series}. 
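The two structures described above reduce to a few lines of pseudocode-like Python. Here `periodic_gen` and `aperiodic_gen` are placeholders standing in for the neural generators: any callables mapping a sequence to an equal-length sequence.

```python
def synthesize(periodic_gen, aperiodic_gen, sine, noise, mode="parallel"):
    """Sketch of the proposed structures.

    parallel: y = Gp(sine) + Ga(noise), the two branches independent;
    series:   y = p + Ga(noise + p) with p = Gp(sine), i.e. the periodic
              output reaches the aperiodic generator through a residual
              connection.
    """
    p = periodic_gen(sine)
    if mode == "parallel":
        a = aperiodic_gen(noise)
    else:  # series: aperiodic generator also sees the periodic waveform
        a = aperiodic_gen([x + y for x, y in zip(noise, p)])
    return [x + y for x, y in zip(p, a)]
```

Only the final sum is evaluated during training, which is what lets each branch specialize without explicit decomposition targets.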
Each generator has the same architecture as the generator of \cite{yamamoto-2020-parallel}, which is a modified WaveNet-based model with non-causal convolution. On the other hand, for the discriminators, we utilize a multi-scale architecture with three discriminators that have identical network structures but operate on different audio scales, following \cite{kumar-2019-melgan}. Each discriminator has the same architecture as the discriminator of \cite{yamamoto-2020-parallel}. These models are trained by optimizing the combination of multi-resolution short-time Fourier transform loss and adversarial loss in the same fashion as \cite{yamamoto-2020-parallel}. When training the vocoder with the parallel and series model structures, only the final output sequence, which is the sum of the two generators' outputs, is evaluated. This is the same as the baseline model with the single model structure. From the assumptions presented in Sec.~\ref{sec:structures}, by inputting the sine wave and noise sequence separately, each generator should be trained to predict periodic and aperiodic waveforms, respectively. 
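A simplified stand-in for the multi-resolution STFT criterion might look as follows. This is only a spectral-magnitude L1 distance summed over a couple of (frame, hop) settings; the actual Parallel WaveGAN loss combines spectral-convergence and log-magnitude terms with an adversarial term, which this sketch omits, and the resolutions used here are illustrative.

```python
import cmath, math

def stft_mag(x, frame, hop):
    """Naive magnitude STFT: Hann window, direct DFT per frame."""
    frames = []
    for start in range(0, len(x) - frame + 1, hop):
        seg = [x[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / frame))
               for n in range(frame)]
        spec = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / frame)
                        for n in range(frame)))
                for k in range(frame // 2 + 1)]
        frames.append(spec)
    return frames

def multires_stft_loss(y, y_hat, resolutions=((32, 8), (64, 16))):
    """Sum of frame-wise spectral magnitude L1 distances over several
    STFT resolutions (a simplified stand-in, not the published loss)."""
    loss = 0.0
    for frame, hop in resolutions:
        a_frames = stft_mag(y, frame, hop)
        b_frames = stft_mag(y_hat, frame, hop)
        loss += sum(abs(a - b)
                    for fa, fb in zip(a_frames, b_frames)
                    for a, b in zip(fa, fb))
    return loss
```

Using several resolutions penalizes mismatches at different time-frequency trade-offs, which is the point of the multi-resolution formulation.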
\begin{figure}[t] \begin{minipage}{0.49\hsize} \centering \includegraphics[width=0.95\hsize]{fig/ar.pdf} \subcaption{AR model} \label{fig:ar} \vspace{1mm} \end{minipage} \begin{minipage}{0.49\hsize} \centering \includegraphics[width=0.95\hsize]{fig/single.pdf} \subcaption{Non-AR baseline model} \label{fig:baseline} \end{minipage} \begin{minipage}{0.49\hsize} \centering \includegraphics[width=0.95\hsize]{fig/parallel.pdf} \subcaption{Non-AR parallel model} \label{fig:parallel} \end{minipage} \begin{minipage}{0.49\hsize} \centering \includegraphics[width=0.95\hsize]{fig/series.pdf} \subcaption{Non-AR series model} \label{fig:series} \end{minipage} \vspace{-2mm} \caption{Structures for speech waveform modeling} \vspace{-2mm} \label{fig:model} \end{figure} \section{Experiments} \vspace{-1mm} \subsection{Experimental conditions} \vspace{-1mm} \begin{figure*}[t] \begin{minipage}{0.33\hsize} \centering \includegraphics[width=0.95\hsize]{fig/stft/pm1_p.pdf} \vspace{-1mm} \subcaption{Waveform of the periodic generator's output} \label{fig:pm1_p} \end{minipage} \begin{minipage}{0.33\hsize} \centering \includegraphics[width=0.95\hsize]{fig/stft/pm1_ap.pdf} \vspace{-1mm} \subcaption{Waveform of the aperiodic generator's output} \label{fig:pm1_ap} \end{minipage} \begin{minipage}{0.33\hsize} \centering \includegraphics[width=0.95\hsize]{fig/stft/pm1_final.pdf} \vspace{-1mm} \subcaption{Waveform after the sum of two signals} \label{fig:pm1_final} \end{minipage} \vspace{-2mm} \caption{Spectrograms of generated waveform by non-AR parallel model} \label{fig:pm_spec} \end{figure*} \begin{figure*}[t] \begin{minipage}{0.33\hsize} \centering \includegraphics[width=0.95\hsize]{fig/stft/sm_p.pdf} \vspace{-1mm} \subcaption{Waveform of the periodic generator's output} \label{fig:sm_p} \end{minipage} \begin{minipage}{0.33\hsize} \centering \includegraphics[width=0.95\hsize]{fig/stft/sm_ap.pdf} \vspace{-1mm} \subcaption{Waveform of the aperiodic generator's output} \label{fig:sm_ap} 
\end{minipage} \begin{minipage}{0.33\hsize} \centering \includegraphics[width=0.95\hsize]{fig/stft/sm_final.pdf} \vspace{-1mm} \subcaption{Waveform after the sum of two signals} \label{fig:sm_final} \end{minipage} \vspace{-2mm} \caption{Spectrograms of generated waveform by non-AR series model} \label{fig:sm_spec} \end{figure*} Seventy Japanese children's songs (total: 70 min) performed by one female singer were used for the experiments. Sixty songs were used for training, and the rest were used for testing. Singing voice signals were sampled at 48 kHz, and each sample was quantized by 16 bits. The auxiliary features consisted of 50-dimensional WORLD mel-cepstral coefficients~\cite{morise-2016-world}, 25-dimensional mel-cepstral analysis aperiodicity measures, one-dimensional continuous log fundamental frequency ($F_0$) value, and one-dimensional voiced/unvoiced binary code. Feature vectors were extracted with a 5-ms shift, and the features were normalized to have zero mean and unit variance before training. In the training stage, the sine waves for the input of the non-AR neural vocoder were generated based on the glottal closure point extracted from a natural speech using REAPER~\cite{Web-REAPER}. The purpose of this is to input a sine wave that is close in phase to the target's natural speech during training. Meanwhile, the sine waves were generated based on the $F_0$ values in the synthesis stage. The following seven systems were compared. \begin{itemize} \item \textbf{WN}:\;The AR WaveNet~\cite{oord-2016-wavenet}. \item \textbf{BM1}:\;The non-AR baseline model, as shown in Fig.~\ref{fig:baseline} that used noise and a V/UV signal as the generator input and is conditioned on all auxiliary features. \item \textbf{BM2}:\;The non-AR baseline model, as shown in Fig.~\ref{fig:baseline} that used a sine wave and a V/UV signal as the generator input and is conditioned on all auxiliary features. 
\item \textbf{BM3}:\;The non-AR baseline model, as shown in Fig.~\ref{fig:baseline} that used noise, a sine wave, and a V/UV signal as the generator input and is conditioned on all auxiliary features. \item \textbf{PM1}:\;The non-AR parallel model, as shown in Fig.~\ref{fig:parallel}. The periodic generator takes a sine wave and a V/UV signal as input, and the aperiodic generator takes noise and a V/UV signal as input. Both generators are conditioned on all auxiliary features. \item \textbf{PM2}:\;The non-AR parallel model, as shown in Fig.~\ref{fig:parallel}. Unlike \textbf{PM1}, the aperiodic generator is conditioned by auxiliary features other than $F_0$. \item \textbf{SM}:\;The non-AR series model, as shown in Fig.~\ref{fig:series}. The periodic generator takes a sine wave and a V/UV signal as input, and the aperiodic generator takes noise, a V/UV signal, and the output signal of the periodic generator as input. Both generators are conditioned on all auxiliary features. \end{itemize} \textbf{WN} consisted of 30 layers of dilated residual convolution blocks with causal convolution. The dilations of \textbf{WN} were set to $1, 2, 4, \ldots, 512$, and the 10 dilation layers were stacked three times. The channel size for dilation, residual block, and skip-connection in \textbf{WN} was set to 256, and the filter size in \textbf{WN} was set to two. The singing voice waveforms to train \textbf{WN} were quantized from 16 bits to 8 bits by using the $\mu$-law algorithm~\cite{recommendation-1988-pulse}. The generators of \textbf{BM1}, \textbf{BM2}, and \textbf{BM3}, and periodic generator of \textbf{PM1}, \textbf{PM2}, and \textbf{SM} consisted of 30 layers of dilated residual convolution blocks with three dilation cycles, the same as \textbf{WN}. The aperiodic generators of \textbf{PM1}, \textbf{PM2}, and \textbf{SM} consisted of 10 layers of dilated residual convolution blocks without dilation cycles. 
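The $\mu$-law companding used to quantize the \textbf{WN} training targets from 16 to 8 bits is standard; a minimal encoder/decoder pair for samples in $[-1, 1]$ looks like this (the function names are ours):

```python
import math

def mulaw_encode(x, mu=255):
    """Compand a sample x in [-1, 1] to an integer code in [0, mu]."""
    y = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    return int(round((y + 1.0) / 2.0 * mu))

def mulaw_decode(code, mu=255):
    """Invert the companding back to a sample in [-1, 1]."""
    y = 2.0 * code / mu - 1.0
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)
```

The logarithmic companding allocates more of the 256 levels to small amplitudes, which is why the quantization noise of the 8-bit targets remains tolerable for speech.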
The channel size for dilation, residual block, and skip-connection was set to 64, and the filter size was set to three. The discriminators of \textbf{BM1}, \textbf{BM2}, \textbf{BM3}, \textbf{PM1}, \textbf{PM2}, and \textbf{SM} had the multi-scale architecture with three discriminators. The discriminators took 48 kHz full-resolution waveforms, and 24 kHz and 16 kHz downsampled waveforms. The downsampling was performed using average pooling. Each discriminator consisted of 10 non-causal dilated convolutions with leaky ReLU activation function. We applied weight normalization~\cite{salimans-2016-weight} to all convolutional layers. All models were trained using the RAdam optimizer~\cite{liu-2019-radam} with 1000K iterations. Specifically, in \textbf{BM1}, \textbf{BM2}, \textbf{BM3}, \textbf{PM1}, \textbf{PM2}, and \textbf{SM}, the discriminators were fixed for the first 100K iterations, and then both the generator and discriminator were jointly trained afterward. \subsection{Comparison of spectrograms} \label{sec:exp_pap} \vspace{-1mm} Fig.~\ref{fig:pm_spec} and Fig.~\ref{fig:sm_spec} show the spectrograms in \textbf{PM1} and \textbf{SM}, respectively. Each figure has three spectrograms of the waveform of the periodic generator's output, the aperiodic generator's output, and the sum of two predicted signals. Fig.~\ref{fig:pm1_p} and Fig.~\ref{fig:pm1_ap} show that the waveform of the periodic generator contains many harmonic components, and that of the aperiodic generator contains the other frequency components. As seen in the highlighted boxes on the left and in the center, which represent parts of the breath and unvoiced plosives ``/t/'', respectively, it can be seen that the spectra of these unvoiced sounds only appear in the output of the aperiodic generator. These tendencies can also be seen in Fig.~\ref{fig:sm_p} and Fig.~\ref{fig:sm_ap}. 
These results indicate that the two generators in the parallel model and the series model learn the transformation from the sine wave and the noise sequence to the periodic and aperiodic waveforms, respectively. Comparing the highlighted boxes in the lower right of Fig.~\ref{fig:pm1_ap} and Fig.~\ref{fig:sm_ap}, the output waveform of the aperiodic generator in \textbf{SM} contains more harmonic components than in \textbf{PM1}. This suggests that the periodic waveform may have leaked into the output of the aperiodic generator in \textbf{SM}, because the output waveform of the periodic generator is fed into the aperiodic generator. It should be noted that some harmonic components are also included in Fig.~\ref{fig:pm1_ap} since the periodic and aperiodic waveforms were not explicitly decomposed in the training stage. \subsection{Subjective evaluations} \vspace{-1mm} \begin{figure}[t] \centering \includegraphics[height=3.8cm]{fig/ex1.pdf} \vspace{-2mm} \caption{Subjective evaluation results of experiment 1} \label{fig:mos1} \end{figure} \begin{figure}[t] \centering \includegraphics[height=3.8cm]{fig/ex2.pdf} \vspace{-2mm} \caption{Subjective evaluation results of experiment 2} \label{fig:mos2} \end{figure} \begin{figure}[t] \centering \includegraphics[height=3.8cm]{fig/ex3.pdf} \vspace{-2mm} \caption{Subjective evaluation results of experiment 3} \label{fig:mos3} \end{figure} \subsubsection{Comparison of AR/non-AR neural vocoders and the input signals} \vspace{-1mm} We conducted a listening test using \textbf{WN}, \textbf{BM1}, \textbf{BM2}, \textbf{BM3}, and \textbf{NAT} to compare neural vocoders with and without the AR structure and to compare the input signals for the non-AR neural vocoder. Note that \textbf{NAT} indicates a recorded natural waveform. The naturalness of the synthesized singing voice was assessed using the mean opinion score (MOS) test method. 
The participants were sixteen native Japanese speakers, and each participant evaluated ten phrases randomly selected from the test data. After listening to each test sample in the MOS test, the participants were asked to score the naturalness of the sample out of five (1 = Bad; 2 = Poor; 3 = Fair; 4 = Good; and 5 = Excellent). The results of the subjective evaluation are shown in Fig.~\ref{fig:mos1}. \textbf{BM1} yielded a lower MOS value than \textbf{WN}, indicating that it is difficult to generate high-quality singing voices from noise. On the other hand, \textbf{BM2} showed the same score as \textbf{WN}. By inputting a periodic signal, the neural vocoder can appropriately synthesize waveforms with periodicity despite the lack of the AR structure. However, since the waveform of \textbf{WN} contains quantization noise, matching its score implies that the quality of \textbf{BM2} was still insufficient. \textbf{BM3}, which inputs both explicit periodic and aperiodic signals, reached an MOS value close to \textbf{NAT}. This indicates the effectiveness of using both explicit periodic and aperiodic signals as inputs for non-AR neural vocoders. \subsubsection{Comparison of model structures of non-AR neural vocoders} \vspace{-1mm} To compare the model structures of non-AR neural vocoders, we conducted two subjective evaluation experiments using \textbf{BM3}, \textbf{PM1}, \textbf{PM2}, and \textbf{SM}. In these experiments, the samples were generated by the four vocoders conditioned on two different $F_0$ scales: the original scale and a doubled scale. In the experiment with the original $F_0$ scale, we also used the natural waveform \textbf{NAT} for comparison. The results are presented in Fig.~\ref{fig:mos2} and Fig.~\ref{fig:mos3}. These figures show that \textbf{PM1}, \textbf{PM2}, and \textbf{SM} attained higher naturalness than \textbf{BM3}. This indicates that it is effective for non-AR neural vocoders using the explicit periodic signal to introduce a parallel or series structure. 
Although the difference between \textbf{PM1}, \textbf{PM2}, and \textbf{SM} was negligible when conditioning on the original $F_0$, as shown in Fig.~\ref{fig:mos2}, \textbf{PM2} showed the best performance when conditioning on the doubled $F_0$, as shown in Fig.~\ref{fig:mos3}. The waveform samples generated by \textbf{BM3}, \textbf{PM1}, and \textbf{SM} tended to contain fewer aperiodic components than those generated by \textbf{PM2}. In \textbf{BM3}, the periodic and aperiodic components were not modeled separately, and speech waveforms were generated from a single generator conditioned by auxiliary features including $F_0$. In \textbf{PM1} and \textbf{SM}, although the networks for modeling these components were separate, both aperiodic generators were conditioned on auxiliary features including $F_0$. In particular, the aperiodic generator in \textbf{SM} also depended on periodic waveforms predicted by the periodic generator. Therefore, it was assumed that \textbf{BM3}, \textbf{PM1}, and \textbf{SM} could not generate aperiodic waveforms when these vocoders took out-of-range $F_0$ as the acoustic features in the synthesis stage. \textbf{PM2} is more robust to an unseen $F_0$ outside the $F_0$ range of the training data because the aperiodic generator in \textbf{PM2} does not depend on the periodic signal or $F_0$. \section{Conclusions} \vspace{-1mm} We introduced PeriodNet, a non-AR neural vocoder with new model structures, to appropriately model the periodic and aperiodic components in the speech waveform. Each generator in the parallel or series model structure can model the periodic and aperiodic waveforms without the use of decomposition techniques. The experimental results showed that the proposed methods were able to generate high-fidelity speech waveforms and improve the ability to generate waveforms with a pitch outside the range of the training data. 
Future work includes investigating the effect of the proposed methods on different datasets, such as multi-speaker and multi-singer datasets. \section{Acknowledgements} \vspace{-1mm} This work was supported by JSPS KAKENHI Grant Numbers JP19H04136 and JP18K11163. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Black hole X-ray binaries (BHXRBs) are ideal laboratories for studying in detail the evolution of the accretion flow as a function of the accretion state during an outburst \citep[e.g.][]{Done2007}. During the soft state, thermal emission with a peak temperature of $\sim$ 1 keV is observed. The low or undetected rapid X-ray variability \citep[on timescales between milliseconds and hundreds of seconds; e.g.][]{Gierlinski2005,Belloni2005} suggests the accretion flow is stable against fast changes in the mass accretion rate in this state \citep[][]{Lyubarskii1997,Churazov2001}. A geometrically thin, optically thick accretion disc \citep[][]{Shakura1973,Novikov1973} extending close to the innermost stable circular orbit (ISCO) of the black hole (BH) is the accepted paradigm to explain the thermally dominated soft state \citep[e.g.][]{Gilfanov2010}. However, this standard model is unable to explain the hard spectral states, which are instead dominated by rapidly variable \citep[][]{Gierlinski2005,MunozDarias2011,Heil2012} emission from a hot plasma with peak temperatures of a few tens to hundreds of keV \citep[e.g.][]{Joinet2008,Motta2009,Buisson2019,Zdziarski2021b}. In the hard state the primary broadband X-ray continuum more likely results from Compton up-scattering processes of less energetic photons (thermal and/or synchrotron) by the electrons in the hot plasma \citep[][]{Poutanen1998,Gierlinski1999,Poutanen2009}. On the other hand, the standard disc is observed to be cooler ($kT\lower.5ex\hbox{\ltsima}$0.3 keV; e.g. \citealt{Kolehmainen2014,ZdziarskiBDM2020}) and more variable \citep[][]{Wilkinson2009, BDM2015a} than in the soft state. Transitions between hard and soft states are characterised by abrupt changes in the X-ray spectral and timing properties of the source \citep[e.g.][]{Belloni2010,Bogensberger2020}. 
The intermediate states during which the transition occurs have been further classified into hard-intermediate and soft-intermediate states, based on their significantly different X-ray timing properties \citep{Homan2005,Belloni2005}. The observed changes characterising the intermediate states are likely a consequence of major, but still not well-understood, variations in the physical properties of the accretion flow. All the models proposed to explain the hard state postulate that the hot phase of the accretion flow is located at relatively small distances from the BH; however, its exact geometry is unknown, and a number of viable possibilities currently exist. For example, the hot flow might fill the inner regions of a truncated optically thick and geometrically thin accretion disc, with the transition region characterised by an overlap between the two phases \citep[i.e. a corona sandwiching the inner disc and/or disrupted disc clumps embedded in the hot flow; e.g.][]{Done1999,Yuan2014,Poutanen2018}. Alternatively, the standard disc might reach close to the BH and the hard X-ray source may be a compact region, possibly located on the rotation axis of the BH (the so-called lamppost; e.g. \citealt{Garcia2014,Niedzwiecki2016}) and linked to the base of a jet \citep[][]{Miller2006}. As a matter of fact, radio observations indicate the presence of a bipolar relativistically expanding and outflowing plasma associated with this state (\citealt{Fender2004}; \citeyear{Fender2006}; \citealt{Corbel2001}; \citeyear{Corbel2004}; \citealt{Brocksopp2002}; \citealt{Miller-Jones2011}). However, the exact relation (if any) between the X-ray source and the jet is still unclear \citep[e.g.][]{Yuan2019}. 
The main challenge for these models is explaining the high level of X-ray spectral and timing complexity that characterises the hard state, including: complex broadband spectral curvature \citep[][]{Makishima1986,Nowak2011}; evolution of characteristic variability timescales \citep[][]{Done2007}; detection of hard X-ray lags (hard band flux variations lagging behind soft band variations) in Comptonised emission-dominated regions of the spectrum \citep[][]{Nowak1999,Grinberg2014}; and the recurrent appearance of low-frequency quasi-periodic oscillations (QPOs) during high luminosity hard and hard-intermediate states \citep[e.g.][]{Rodriguez2002,Casella2004,Sobolewska2006,Ingram2019a}. Truncated disc models can (at least qualitatively) explain most of the observed X-ray spectral and timing properties of BHXRBs and their evolution during an outburst, provided some spectral stratification of the hot flow is postulated for the hard state \citep[][]{Done2007,Axelsson2018,Mahmoud2018a}. Within this scenario the key ingredient is an evolving inner disc radius, moving closer to the BH as the source evolves from the hard to the soft state, and receding again as the source goes back to quiescence \citep[][]{Esin1997,Yuan2014}. On the other hand, a compact X-ray source with a disc inner radius at $\sim R_{{\rm ISCO}}$ already at moderate Eddington ratios is usually invoked to explain the broad profile of Fe K emission lines in the disc reflection component, which are observed particularly in bright hard states \citep[][]{Miller2006,Garcia2015,Furst2015}. Within this model, variations in the vertical extent of the primary hard X-ray source have been proposed to explain the evolution of some of the X-ray spectral and timing properties throughout the hard state \citep[][]{Kara2019,Buisson2019}. 
Theoretical arguments and supporting observational evidence exist for the optically thick and geometrically thin disc being highly truncated during quiescence and in the early stages of an outburst ($R_{\rm{in}}\sim 10^{3-4}\ R_{\rm{g}}$ for $M_{\rm{BH}}\sim 10 M_{\odot}$, where $R_{\rm{g}}=GM_{\rm{BH}}/c^{2}$ is the gravitational radius; e.g. \citealt{Lasota1996,Narayan1997,Dubus2001,McClintock2001,Esin2001,Bernardini2016}). Nonetheless, the question of when the disc reaches the ISCO (whether early in the hard state, in the bright hard state, or during the intermediate states characterising the hard-to-soft state transition) remains unanswered. Attempts to settle this issue have so far yielded conflicting results (see \citealt{Bambi2021} for a review). Among the main problems is the difficulty in obtaining a long, almost uninterrupted monitoring of an outburst with high-throughput X-ray detectors covering energy ranges with a significant contribution from all the main spectral components. In particular, while the rise in luminosity throughout the hard state can be quite slow (of the order of a few weeks to months), the hard-intermediate states preceding the transition to the soft-intermediate states are traversed in a very short amount of time (a few days), and thus they are very difficult to catch \citep[][]{Belloni2005,Dunn2010}. Yet, these phases are of the utmost importance for understanding how the accretion flow evolves. The \emph{Neutron star Interior Composition Explorer} ({\rm NICER}; \citealt{Gendrau2016}) is making significant inroads in this context by performing high-cadence, high-quality X-ray observations of Galactic transient sources. Among the sources observed by {\rm NICER}\ is the very bright BHXRB MAXI J1820+070 (ASASSN-18ey), which was discovered in March 2018 \citep[][]{Tucker2018}. During the 2018 outburst the source rapidly brightened, reaching extremely high X-ray luminosities, with a peak X-ray flux of about 4 Crab. 
Such a high brightness and low absorbing column density to the source ($\rm{N_{H}\sim 10^{21} cm^{-2}}$, \citealt{Uttley2018}) make MAXI J1820+070 an excellent target for detailed studies of BHXRB outbursts and state transitions. Intensive monitoring at radio wavelengths, followed by high spatial resolution \emph{Chandra} observations, revealed the ejection of long-lived discrete relativistic jets \citep[][]{Bright2019,Espinasse2020}. Major changes in the X-ray variability properties of the source preceded these ejections by a few hours \citep[][]{Homan2020}, suggesting that these events marked the transition from the hard to the soft state. Since its discovery, {\rm NICER}\ has been observing the source regularly, providing an incredibly rich archive of data covering all the phases of the outburst. Here we focus on the first part of the 2018 outburst, covering the hard and hard-intermediate states before transition to the soft-intermediate states. We employ two independent methods to constrain the inner accretion flow geometry and its evolution. Both methods rely on the application of X-ray spectral-timing analysis techniques \citep[][]{Uttley2014} to study the quasi-thermal soft component produced by the hard X-ray irradiation of the inner disc \citep[e.g.][]{ZdziarskiBDM2020}. Since the hard X-ray flux impinging on the cold disc medium is variable, it will induce a temporal response in the disc quasi-thermal emission, which is the focus of this work. The first method (Sect. \ref{sec:softlags}) is based on studying the evolution of high-frequency ($\nu\lower.5ex\hbox{\gtsima}$1 Hz) soft X-ray lags, namely soft band flux responding to rapid hard band flux variations with a time delay. These lags are thought to be the signature of X-ray thermal reverberation, namely the time-delayed response of the re-emitted quasi-thermal disc emission to variations in the hard X-ray irradiating flux \citep[e.g.][]{Uttley2011}. 
Being strongly dependent on the light path between the X-ray source and the reprocessing region, X-ray reverberation lags can be used to map the geometry of the inner accretion flow and its evolution. To date, such lags have been detected in some of the best monitored BHXRBs \citep[][]{Uttley2011,BDM2015b,BDM2016,BDM2017,WangJ2020}, including MAXI J1820+070 \citep[][]{Kara2019}. The second method (Sect. \ref{sec:covar}) is based on studying the evolution of covariance spectra \citep[][]{Wilkinson2009,Uttley2014}, which are used to single out the spectral components responsible for the observed thermal reverberation lags. The peak temperature of the quasi-thermal component responding to the hard X-ray irradiation of the inner disc is linked to the disc inner radius, thus providing an alternative way of constraining the geometry of the inner accretion flow \citep[][]{ZdziarskiBDM2020,Zdziarski2021a}. For our computations we make use of the latest results from the optical spectroscopy of MAXI J1820+070, which yield a BH mass of $M_{\rm{BH}} = (5.95 \pm 0.22)\, {\rm sin}^{-3}i\ M_{\odot}$ \citep[][]{Torres2020}. Hereafter we assume a fiducial value of $M_{\rm{BH}} =7.6\pm1.7\ M_{\odot}$. This encompasses the range of values inferred from current constraints on the orbital inclination ($66^\circ<i<81^\circ$; \citealt{Torres2020}) and on the jet inclination ($63\pm 3^\circ$, \citealt{Atri2020}; $64\pm 5^\circ$, \citealt{Wood2021}). We use a distance to the source of $d = 2.96\pm0.33\ \rm{kpc}$ \citep[where the uncertainty is at 68 per cent confidence;][]{Atri2020}. \section{Observations and data reduction} \label{sec:data} We analysed data from {\rm NICER}\ X-ray Timing Instrument (XTI) observations of MAXI J1820+070 between MJD 58189 and MJD 58306. 
These observations cover the first part of the 2018 outburst, namely the hard and hard-intermediate accretion states, up to the transition to the soft-intermediate states (the observations analysed are listed in Table \ref{tab1}; the observation identification number, ObsID, increases progressively with the time of the observation; we note that our sample also includes the observations previously analysed in \citealt{Kara2019}). For brevity, the different observations will be hereafter referred to as O\emph{xxx}, with \emph{xxx} being the last three digits of their ObsID. The data were reprocessed with the NICERDAS tools in HEASOFT v6.28 and {\rm NICER}\ calibration files as of July 27, 2020. Calibrated, unfiltered, all Measurement/Power Unit (MPU) merged files ({\tt ufa}) were created using the {\tt NICERL2} task. We applied standard filtering criteria \citep[e.g.][]{Stevens2018}, using the {\tt NIMAKETIME} and {\tt NICERCLEAN} routines. We extracted light curves in the energy range $13-15\ \rm{keV}$ (corresponding to the range where the {\rm NICER}\ XTI effective area quickly drops) to check for periods of high particle background. Time intervals showing background events with a rate $>2 \ \rm{counts\ s^{-1}}$ \citep[e.g.][]{Ludlam2018} were discarded. Of the 56 focal plane modules (FPMs) of {\rm NICER}\ XTI, FPMs 11, 20, 22, and 60 are not operational. Following the `{\rm NICER}\ Analysis Tips \& Caveats\footnote{\url{https://heasarc.gsfc.nasa.gov/docs/nicer/data_analysis/nicer_analysis_tips.html}},' FPMs 14 and 34 were removed from the analysis as they can exhibit increased detector noise. Individual observations were screened to identify additional FPMs showing anomalous behaviour, which, for example, can manifest as the total number of registered counts significantly exceeding the average counts from all detectors. However, none was found during the analysed observations. 
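The background screening step can be sketched in a few lines; this is an illustrative stand-in for the actual {\tt NIMAKETIME} selection (the array names and toy count rates below are our own, not pipeline outputs):

```python
import numpy as np

def screen_background(time, rate_13_15, threshold=2.0):
    """Return a good-time mask that keeps only the time bins whose
    13-15 keV background rate (counts/s) is below the threshold."""
    return rate_13_15 <= threshold

# Toy light curve: ten one-second bins, two of them background-flared.
time = np.arange(10.0)
rate = np.array([0.5, 0.8, 0.4, 3.1, 5.0, 0.6, 0.7, 0.5, 0.9, 0.4])
mask = screen_background(time, rate)
print(time[mask])  # the bins at t = 3, 4 s are discarded
```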
Occasionally (particularly during the brightest phases of the outburst before the transition to the soft-intermediate states), the number of active FPMs was reduced to avoid telemetry saturation and an increase in dead time. After accounting for this reduction and the outlined filtering criteria, the net number of FPMs used for this analysis ranges between $50$ and $27$, as reported in Table \ref{tab1}. The resulting event lists were used to extract light curves in different energy bands with a time bin of $0.4\ \rm{ms}$. Custom ancillary response files (ARFs) and redistribution matrix files (RMFs) were computed for the specific subset of FPMs of each observation. This was done by combining the corresponding per-module, publicly distributed ARFs and RMFs. Fits were performed using Xspec v.12.11.1 and errors are hereafter reported at the 90 per cent confidence level. In order to speed up the reduction process, short good time intervals (GTIs) with length $<10$ s were removed. This led to the exclusion of observations O128, O129, O154, O181, O191, O192, and O193 because of their very short resulting net exposure ($\lsim200$ s). The effective, on-source exposure times of the analysed observations after screening and filtering of short GTIs are listed in Table \ref{tab1}. \section{Hardness-intensity diagram} \label{sec:HID} We built the hardness-intensity diagram (HID; Fig. \ref{fig:hid}) of the source in order to select observations corresponding to the accretion states of interest. To this aim we extracted {\rm NICER}\ count rates in the energy ranges 2--4 keV and 4--12 keV. The total 2--12 keV count rate was normalised for the number of FPMs used for the analysis ({\tt N}$_{\rm{FPM}}$, Table \ref{tab1}). In some cases {\tt N}$_{\rm{FPM}}$ changed during a single observation. 
In these cases the observation was split so as to have datasets characterised by a constant number of FPMs (each point in the HID corresponds to either a single {\rm NICER}\ observation or the part of an observation characterised by constant {\tt N}$_{\rm{FPM}}$). We note that the HID of Fig. \ref{fig:hid} also reports observations excluded from the analysis because they are too short (Sect. \ref{sec:data}). \begin{figure} \includegraphics[width=\columnwidth]{hid.pdf} \caption{HID of MAXI J1820+070 during its 2018 outburst (from MJD 58189 to MJD 58305). The hardness is computed as the ratio between the 4--12 keV and 2--4 keV {\rm NICER}\ count rates. On the y axis we report the total 2--12 keV {\rm NICER}\ count rate re-normalised for the number of used FPMs. Highlighted in different colours and symbols are the phases of the outburst that are the focus of our work. The grey empty circles show the subsequent spectral evolution of the source up until MJD 58404. The epochs of the appearance of a type-B QPO (MJD$\sim$58305.7; \citealt{Homan2020}) and the launch of relativistic discrete radio ejecta (MJD 58305.6$\pm0.04$; \citealt{Wood2021}, and $\sim$MJD 58306, \citealt{Bright2019,Espinasse2020}) are marked, respectively, by the bigger star and the orange arrows.} \label{fig:hid} \end{figure} During the first part of the outburst, we identified four main phases: an initial brightening at almost constant hardness (which lasted for about 12 days; `rise' in Fig. \ref{fig:hid}), a persistent (lasting about 2 months) bright hard state (`plateau' in Fig. \ref{fig:hid}), a sharp dimming phase (lasting about one month) at slightly lower and almost constant hardness (`bright decline' in Fig. \ref{fig:hid}), and the quick (lasting about two weeks) transition to the soft-intermediate state (`hard-soft transition' in Fig. \ref{fig:hid}). These phases are the focus of our analysis. 
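The HID quantities defined above reduce to two simple ratios per dataset; a minimal sketch (the function name and toy count rates are ours, chosen only for illustration):

```python
def hid_point(rate_2_4, rate_4_12, n_fpm):
    """Return (hardness, normalised intensity) for one dataset:
    hardness  = 4-12 keV / 2-4 keV count-rate ratio;
    intensity = total 2-12 keV rate divided by the number of FPMs."""
    hardness = rate_4_12 / rate_2_4
    intensity = (rate_2_4 + rate_4_12) / n_fpm
    return hardness, intensity

# Toy example: 12000 and 9000 counts/s in the two bands, 50 FPMs.
h, i = hid_point(12000.0, 9000.0, 50)
print(h, i)  # 0.75 420.0
```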
We note that the last analysed dataset includes only the first part ($\sim$64 ks) of O197, corresponding to an on-source effective exposure of $\sim 12$ ks. This spans the phases immediately before the fast transition from the hard-intermediate to the soft-intermediate state (i.e. the $\sim47$ ks preceding the transition, of which $\sim$8 ks of effective time) and the transition itself. The excluded part of O197 corresponds to the soft-intermediate state. In this phase X-ray spectral-timing measurements are noisier due to a drop in the intrinsic variability of the source. As shown by \citet[][]{Homan2020}, apart from the drop in broadband X-ray variability strength, the transition from the hard-intermediate to the soft-intermediate state was also marked by an intense X-ray flare and the switch from a type-C to a type-B QPO at MJD$\sim$58305.7. This epoch is highlighted in the HID of Fig. \ref{fig:hid}. Also shown in Fig. \ref{fig:hid} is the estimated epoch of the launch of discrete relativistic ejecta, first detected at radio wavelengths \citep[][]{Bright2019} and later on, at larger scales, in the X-rays \citep[][]{Espinasse2020}. An earlier ejection has recently been reported \citep[][]{Wood2021}, coinciding with O197 and preceding the switch to a type-B QPO. The empty circles in Fig. \ref{fig:hid} show the spectral changes of the source during the remaining part of the 2018 outburst, until MJD 58404. \section{X-ray spectral-timing analysis} \label{sec:spec-tim} We measured the Fourier cross-spectrum to extract frequency-dependent time lags and covariance spectra \citep[e.g.][]{Nowak1999,Uttley2014}. Given that the on-source exposure time of each {\rm NICER}\ observation is usually quite short (a few hundred to a few thousand seconds; see Table \ref{tab1}), we adopted the approach of combining consecutive observations in order to increase the signal-to-noise ratio (S/N) of spectral-timing products. 
Before combining consecutive observations we verified that they do not show deviations from stationarity \citep[][]{Vaughan2003}. The main reason for this is that non-stationary behaviour is likely associated with substantial changes in the physical properties of the accretion flow, possibly including geometry, whose evolution we aim to test here. Details on this selection are reported in Appendix \ref{sec:PSD}. In addition, we combined only datasets with the same number of FPMs. This is relevant for the fit of covariance spectra (Sect. \ref{sec:covar}), since the response of the instrument depends on the specific FPMs used for the analysis (Sect. \ref{sec:data}). The final groups of combined observations can be inferred from Table \ref{tab1}. At the beginning of the outburst, the power spectral density (PSD) changed relatively slowly. In this phase of the outburst we split large groups of observations having compatible PSDs into smaller groups, each characterised by a sufficiently long net exposure time ($\sim$10--15 ks). On the other hand, before the hard-soft transition the PSD changed quickly, thus preventing us from combining consecutive observations and obtaining longer exposures (e.g. O191, O192, and O193 were discarded because of their short exposures and the inability to combine them; see Sect. \ref{sec:data}). \begin{figure} \includegraphics[width=\columnwidth]{soft_hard_lag.pdf}\\ \caption{ 0.5--1 keV vs. 2--5 keV lag-frequency spectrum of some of the analysed observations of MAXI J1820+070 during the rise, plateau, bright decline, and hard-soft transition phases (with colour codes and symbols as in Fig. \ref{fig:hid}). The positive lags dominating at low frequencies ($\lower.5ex\hbox{\ltsima}$ 2 Hz) are hard X-ray lags. The plot illustrates the hump-like structures characterising these low-frequency hard lags and their complex evolution throughout the first part of the outburst. 
At high frequencies ($\lower.5ex\hbox{\gtsima}$ 2 Hz) the (negative) soft X-ray lag becomes dominant (see inset, with the y axis in linear scale).} \label{fig:hardsoftlags} \end{figure} \begin{figure*} \includegraphics[width=0.68\columnwidth]{O103lag_plot_with_integr_interval.pdf} \includegraphics[width=0.68\columnwidth]{O104_105lag_plot_with_integr_interval.pdf} \includegraphics[width=0.68\columnwidth]{O109lag_plot_with_integr_interval.pdf}\\ \includegraphics[width=0.68\columnwidth]{O111lag_plot_with_integr_interval.pdf} \includegraphics[width=0.68\columnwidth]{O118_119lag_plot_with_integr_interval.pdf} \includegraphics[width=0.68\columnwidth]{O147lag_plot_with_integr_interval.pdf}\\ \includegraphics[width=0.68\columnwidth]{O157lag_plot_with_integr_interval.pdf} \includegraphics[width=0.68\columnwidth]{O171lag_plot_with_integr_interval.pdf} \includegraphics[width=0.68\columnwidth]{O177to180lag_plot_with_integr_interval.pdf}\\ \includegraphics[width=0.68\columnwidth]{O189lag_plot_with_integr_interval.pdf} \includegraphics[width=0.68\columnwidth]{O196lag_plot_with_integr_interval.pdf} \includegraphics[width=0.68\columnwidth]{O197lag_plot_with_integr_interval.pdf} \caption{ 0.5--1 keV vs. 2--5 keV lag-frequency spectra of selected observations of MAXI J1820+070. The observations illustrate the diverse behaviour of soft (negative) lags during different phases of the outburst (as also reported in the insets). The solid red curve is the best-fit {\tt RELTRANS} model. The blue arrow marks the frequency range used for the extraction of covariance spectra. } \label{fig:softlags} \end{figure*} \subsection{The evolution of high-frequency soft reverberation lags} \label{sec:softlags} Soft X-ray lags (soft variations lagging behind hard variations) in BHXRBs are interpreted as a signature of thermal reprocessing in the disc. 
Light travel time delays due to the extra path between the X-ray source and the disc are thought to contribute significantly to the observed soft lags (thermal reverberation; e.g. \citealt{Uttley2011,BDM2015b}; \citeyear{BDM2017}). As a consequence, these lags can be used as a diagnostic of the geometry of the innermost accretion flow. Soft X-ray lags ascribable to thermal reverberation were consistently observed in the plateau phase of MAXI J1820+070, with clear indications of an evolution of lag amplitude and frequency \citep{Kara2019}. We measured soft X-ray lags throughout the first part of the outburst of MAXI J1820+070. Following the results reported in \cite{Zdziarski2021b}, we selected a soft (0.5--1 keV) and a hard (2--5 keV) energy band. The selected soft band shows a soft excess that includes some contribution from disc emission \citep[figure 6 in][]{Zdziarski2021b}, while the hard band is dominated by a hard X-ray Comptonisation component. Although the analysis of \cite{Zdziarski2021b} does not cover all the phases considered here, we verified that these choices remain appropriate even in the latest phases. We did not include photons with energies $>5$ keV to avoid a strong contribution from Fe K line photons. We computed frequency-dependent time lags between the selected bands as $\tau(\nu)=\phi(\nu)/2\pi\nu$, where $\phi(\nu)$ is the phase of the cross spectrum as a function of frequency \citep[e.g.][]{Nowak1999}. We corrected the lags for dead-time-induced cross-channel talk \citep[][]{vanderKlis1987,Vaughan1999}, which introduces instrumental $\pm\pi$-phase lags. However, we verified that up to very high frequencies ($\sim100$ Hz) the contribution from the source dominates, thus making cross-channel talk negligible. A number of examples among the obtained lag-frequency spectra are shown in Fig. \ref{fig:hardsoftlags} (which reports the broad-frequency band $\sim0.03-100$ Hz lag spectrum) and Fig. 
\ref{fig:softlags} (which focuses on the high-frequency $\sim 2-100$ Hz range behaviour). The lags are re-binned using a multiplicative re-binning factor of 1.2, with each frequency bin containing at least 500 points \citep[][]{Ingram2019b}. At low frequencies ($\lower.5ex\hbox{\ltsima}$ 1--2 Hz) the lag-frequency spectra are dominated by (positive) hard X-ray lags (variations in hard X-ray photons follow variations in soft X-ray photons; see Fig. \ref{fig:hardsoftlags}). These lags are commonly observed in BHXRBs at frequencies $\lower.5ex\hbox{\ltsima} 1$ Hz and are thought to be intrinsic to the primary hard X-ray continuum \citep[e.g.][]{Nowak1999,Pottschmidt2000,Grinberg2014,Rapisarda2016}.\\ \begin{figure*} \includegraphics[width=\columnwidth]{nu_tau0_vs_HR_withtransition_fit_zoom_withphases.pdf} \hspace{0.2cm} \includegraphics[width=\columnwidth]{nu_tau0_vs_HR_withtransition_nofit_withphases.pdf} \caption{Evolution of $\nu_{0}$ in lag-frequency spectra as a function of spectral hardness. The \emph{left panel} reports results from the rise, plateau, and bright decline phases, which are observed to follow the same trend. The dotted black line is the best-fit model. The \emph{right panel} reports results from the hard-soft transition phase as compared to the other phases (grey symbols); during this phase the source is observed to break from the trend followed at higher spectral hardness. The arrows are 90 per cent lower and upper limits.} \label{fig:nu0_vs_HR} \end{figure*} At higher frequencies a (negative) soft X-ray lag is clearly observed in almost all the analysed observations. Only during observations O101 and O102 is the detection marginal\footnote{For O101 and O102 we measured an average soft lag of $\tau=1.0\pm0.5$ ms and $\tau=1.5\pm0.4$ ms over the frequency intervals $\sim$2.5--10 Hz and $\sim$5--20 Hz, respectively.}. The soft X-ray lag typically starts to be observed at frequencies $\lower.5ex\hbox{\gtsima} 2$ Hz (inset in Fig. \ref{fig:hardsoftlags} and Fig. 
\ref{fig:softlags}); after reaching its maximum amplitude (in absolute value), it starts decreasing, crossing zero at a given very high frequency (hereafter $\nu_{0}$). As is clear from Fig. \ref{fig:softlags}, soft X-ray lags appear to vary both in amplitude and in characteristic frequencies throughout the analysed observations. The maximum measured amplitudes range between $\sim 3$ ms and $\sim 0.2$ ms\footnote{We note that the measured values of lag amplitude are a factor of $\sim3$ longer than those reported in \cite{Kara2019}. This discrepancy has been verified by considering the same selected observations reported in \cite{Kara2019} and using the same energy bands. Details are reported in Appendix \ref{sec:lagampl}.}. However, the most compelling observed phenomenon is the net increase in lag amplitude in the last analysed dataset (O197), as compared to previous ones (Fig. \ref{fig:softlags}, bottom-right panel). Here the soft X-ray lag amplitude suddenly increases to $\sim$15--20 ms and dominates the entire sampled frequency range, with no clear evidence of hard X-ray lags (down to $\sim$0.05 Hz). This dataset corresponds to the phases immediately before and during the transition as identified by the appearance of a type-B QPO \citep[][]{Homan2020}. However, the disappearance of the hard lags and the appearance of a long $\sim$15--20 ms lag precede the appearance of the type-B QPO. Indeed, we verified that the long, dominant soft X-ray lag is already present when considering only the $\sim50$ ks preceding the transition itself (i.e. the first $\sim 8$ ks of effective on-source time during O197, Sect. \ref{sec:HID}). Such a long reverberation lag is also observed right after the transition from the soft to the hard state (from O262 to O265). This will be studied in more detail in a follow-up paper. 
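The core of the lag measurement ($\tau(\nu)=\phi(\nu)/2\pi\nu$ from the segment-averaged cross-spectrum, followed by multiplicative frequency rebinning) can be sketched as follows. This is a minimal illustration on synthetic white-noise light curves, not the analysis pipeline itself: the segment length, the sign convention (positive lag means the hard band lags the soft band), and the toy 5 ms delay are our assumptions.

```python
import numpy as np

def lag_frequency(soft, hard, dt, nseg):
    """Average the cross-spectrum over nseg segments and return
    (freq, lag); lag > 0 means the hard band lags the soft band."""
    seglen = len(soft) // nseg
    freq = np.fft.rfftfreq(seglen, d=dt)[1:]
    cross = np.zeros(len(freq), dtype=complex)
    for k in range(nseg):
        s = np.fft.rfft(soft[k*seglen:(k+1)*seglen])[1:]
        h = np.fft.rfft(hard[k*seglen:(k+1)*seglen])[1:]
        cross += s * np.conj(h)
    phase = np.angle(cross / nseg)
    return freq, phase / (2.0 * np.pi * freq)

def rebin_log(freq, lag, f=1.2):
    """Geometric rebinning with multiplicative factor f."""
    fb, lb, lo = [], [], freq[0]
    while lo < freq[-1]:
        hi = lo * f
        m = (freq >= lo) & (freq < hi)
        if m.any():
            fb.append(freq[m].mean())
            lb.append(lag[m].mean())
        lo = hi
    return np.array(fb), np.array(lb)

# Toy signal: the hard band is the soft band delayed by 5 ms.
rng = np.random.default_rng(0)
dt, delay = 1e-3, 5e-3
soft = rng.normal(size=2**16)
hard = np.roll(soft, int(delay / dt))
freq, lag = lag_frequency(soft, hard, dt, nseg=16)
fb, lb = rebin_log(freq, lag)
```

Below the phase-wrapping frequency ($\nu < 1/2\tau$) the recovered lag tracks the injected 5 ms delay.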
It is worth noting that a much longer lag ($\sim$ 20 s) has been reported by \cite{Buisson2021} in a \emph{NuSTAR} observation performed on MJD 58306.35 (quasi-simultaneous with {\rm NICER}\ observation O198), that is, right after the transition to the soft-intermediate state. The evolution of soft X-ray reverberation lags informs us about the evolution of the geometry of the system. From a visual inspection of the lag-frequency spectra, the observed maximum amplitude of the reverberation lag clearly shows a complex evolutionary pattern (the trend of observed maximum reverberation lag amplitude as a function of spectral hardness is shown and discussed in more detail in Appendix \ref{sec:lagampl}). However, it is well known that the presence of both primary continuum and reprocessed photons in the energy bands used for the computation of cross-spectra has the effect of diluting the amplitude of soft lags \citep[e.g.][]{Uttley2014}. In particular, it can be easily shown that the amount of dilution also depends on the amplitude of the lags intrinsic to the primary X-ray continuum, being minimum when the continuum has no intrinsic interband lags. Nonetheless, the primary continuum in BHXRBs is characterised by (positive) hard X-ray lags, which show a complex evolution during the outburst \citep[e.g.][]{Pottschmidt2000,Grinberg2014,Altamirano2015,Reig2018}. As is clear from Fig. \ref{fig:hardsoftlags}, this complexity also characterises MAXI J1820+070 (see also \citealt{WangY2020}). As a consequence, the hard X-ray lags in the continuum can bias the maximum observed amplitude of the reverberation lag, and their behaviour throughout the outburst can potentially distort the observed evolution of soft reverberation lags. In contrast, the frequency at which the soft lag goes to zero, $\nu_{0}$, has been shown to be unaffected by dilution by continuum photons \citep[e.g. Fig. 8 of][]{Wilkins2013,Uttley2014,Mizumoto2018}. 
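The dilution effect can be illustrated with a two-component toy model in the small-phase limit (see \citealt{Uttley2014} for the full treatment; the flux fractions below are arbitrary illustrative numbers): if fractions $x_{\rm s}$ and $x_{\rm h}$ of the soft-band and hard-band fluxes reverberate with intrinsic delay $\tau_{\rm rev}$, and the continuum itself carries no interband lag, the observed lag is approximately $\tau_{\rm obs}\approx(x_{\rm s}-x_{\rm h})\,\tau_{\rm rev}$.

```python
def diluted_lag(tau_rev, x_soft, x_hard):
    """Small-phase approximation for the observed reverberation lag
    when only fractions x_soft, x_hard of each band reverberate."""
    return (x_soft - x_hard) * tau_rev

# Toy numbers: 30% reverberating flux in the soft band, 5% in the
# hard band, 10 ms intrinsic delay -> an observed lag of only 2.5 ms.
print(diluted_lag(10e-3, 0.30, 0.05))  # 0.0025
```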
This frequency is also unaffected by the presence of hard X-ray lags associated with the primary continuum, because these lags are commonly observed to steadily decrease with frequency, eventually becoming negligible \citep[e.g.][]{Nowak1999,BDM2015b}. This behaviour is also seen in MAXI J1820+070, as shown in Fig. \ref{fig:hardsoftlags}. Thanks to the suppression of hard X-ray lags towards high frequencies, $\nu_{0}$ directly depends on the intrinsic (unbiased) amplitude of the reverberation lag as $\nu_{0}\propto \tau_{\rm rev}^{-1}$. We note that $\nu_{0}$ is unbiased by the presence of hard lags even when these lags are not totally suppressed at high frequencies (see Appendix \ref{sec:nuzero}). Therefore, $\nu_{0}$ can be used to study, in a model-independent way, the evolution of soft reverberation lags. In order to constrain $\nu_{0}$ in an objective way we modelled the frequency-dependent lag profile. Because of the marginal detection of soft X-ray lags during O101 and O102, and given that no significant differences in the lag are observed, these two observations were combined for this part of the analysis only. For the fits we used the {\tt RELTRANS} spectral-timing model (\citealt{Ingram2019c}) in the version presented in \citeauthor{Mastroserio2019} (\citeyear{Mastroserio2019,Mastroserio2020}). This model assumes a lamppost geometry for the source of Comptonised hard X-ray photons and describes the hard X-ray lags as due to spectral pivoting of the inverse-Compton spectrum \citep[][]{Mastroserio2018,Mastroserio2019}. Hard X-ray lags generally show hump-like features at specific frequencies, which might be associated with a more structured Comptonising region (e.g. \citealt{Mahmoud2018a}; \citeyear{Mahmoud2018b}). These complexities are clearly observed in the lag-frequency spectra of MAXI J1820+070 (see Fig. \ref{fig:hardsoftlags}) and did not allow us to always obtain good fits of both the hard and soft lags over a broad range of frequencies. 
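As an order-of-magnitude illustration of the $\nu_{0}\propto \tau_{\rm rev}^{-1}$ scaling: adopting the rule of thumb $\tau_{\rm rev}\sim 1/(2\nu_{0})$ (the exact constant depends on the shape of the impulse-response function, so this is only indicative) and converting the lag to a light-travel distance in units of $R_{\rm g}$ via the light-crossing time $t_{\rm g}=GM/c^{3}$:

```python
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m s^-1
M_sun = 1.989e30    # kg

def lag_to_distance_rg(nu0, m_bh=7.6):
    """Convert a zero-crossing frequency nu0 (Hz) into an indicative
    light-travel distance in units of R_g, for a BH of m_bh solar
    masses, using the rule of thumb tau_rev ~ 1/(2 nu0)."""
    tau = 1.0 / (2.0 * nu0)            # indicative lag amplitude (s)
    t_g = G * m_bh * M_sun / c**3      # light-crossing time of one R_g
    return tau / t_g

# nu_0 = 71 Hz (as measured for O197) with M_BH = 7.6 M_sun gives a
# scale of roughly two hundred R_g.
print(lag_to_distance_rg(71.0))
```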
Therefore, we excluded most of the hard lags complex profile from the fits, restricting the modelling of the lag-frequency spectra to frequencies ranging between $\sim1-2$ Hz and $100$ Hz. For O197 we used the frequency range $6-100$ Hz because of the anomalous behaviour observed at lower frequencies (lack of visible hard lags) that did not allow us to obtain good fits over a broader frequency range. Since this observation catches the source right before and during the transition, it is likely to present additional spectral-timing complexities not included in current models. We note that, for a given value of the BH mass, the parameters of the model influencing the position of $\nu_{0}$ are the X-ray source height, $h$, and the disc inner radius, $R_{\rm in}$. However, these parameters are degenerate when fitting lag-frequency spectra alone. For this reason, and because the geometry of the Comptonising region might be more complex than assumed in the model, we do not discuss the inferred best-fit values for these parameters, and use the best-fit models only to obtain a phenomenological description of the soft X-ray lag profile. Additional technical details of these fits are reported in Appendix \ref{sec:lagfits}. The derived best-fit models provide a satisfactory description of the high-frequency lag spectrum at each epoch (see the examples in Fig. \ref{fig:softlags}, where the red solid curve represents the best-fit model). We estimated $\nu_{0}$ as the frequency at which the best-fit model approaches zero at high frequencies\footnote{We noted that, given the low S/N of some observations at high frequencies, the errors on $\nu_0$ were not well constrained because the model that defines the 90\% confidence upper limit tends asymptotically to zero. Therefore, we set an arbitrarily small threshold of $\lsim0.05$ ms for the absolute amplitude of the negative lag, below which we consider the model to be consistent with zero lag. 
We verified that assuming even smaller values for this threshold returns consistent results.}. The errors on $\nu_{0}$ are obtained by accounting for the uncertainty in the parameters of the model (as described in Appendix \ref{sec:lagfits}). The derived values of $\nu_{0}$ are listed in Table \ref{tab2}. These values are plotted as a function of spectral hardness in Fig. \ref{fig:nu0_vs_HR}. For most of the analysed observations (more specifically those corresponding to the rise, plateau, and bright decline phases), the data show a steady trend of increasing $\nu_{0}$ with decreasing spectral hardness (Fig. \ref{fig:nu0_vs_HR} left panel). A linear model in log-log space to describe this trend in the rise, plateau, and bright decline is preferred to a constant model at $>99.99$ per cent confidence level (i.e. corresponding to a $\Delta\chi^2\sim154$ for a difference of 1 degree of freedom; Fig. \ref{fig:nu0_vs_HR} left panel). Notably this trend appears to break during the hard-intermediate states preceding the transition (hard-soft transition phase in Fig. \ref{fig:nu0_vs_HR}, right panel). Indeed, $\nu_{0}$ is observed to decrease by a factor of $\sim$2--3 at spectral hardness $\lower.5ex\hbox{\ltsima} 0.28$, remaining systematically lower than observed in most of the bright hard state. During observation O197 (right before and during the transition) we estimate $\nu_{0}=71^{+105}_{-19}$ Hz (Table \ref{tab2}). An increase in $\nu_{0}$ (as characteristic of most of the analysed part of the outburst, Fig. \ref{fig:nu0_vs_HR}, left panel) indicates a decrease in the intrinsic lag amplitude, and thus a decrease in the relative distances between the Comptonising plasma and the reprocessing region in the disc. This might be due to the inner disc truncation radius moving inwards or the Comptonising region decreasing in its height or vertical extent. The behaviour close to transition (Fig. 
\ref{fig:nu0_vs_HR}, right panel) is instead more puzzling and might be associated with the presence of a jet \citep[][see Sect. \ref{sec:discussion1}]{Bright2019,Homan2020,Espinasse2020}. The intrinsic amplitudes that can be inferred from $\nu_{0}$ correspond to Euclidean distances of the order of several tens to hundreds of $R_{\rm{g}}$ (see also the discussion in Sect. \ref{sec:discussion2}). However, these distances represent the weighted mean over all the light travel paths of the Comptonised photons to the disc \citep[e.g.][]{Wilkins2013}, and can be significantly larger than the minimum travelled distance set by the X-ray source height and the inner disc radius (\citealt{Mahmoud2019}; see also Sect. \ref{sec:discussion2}). The distribution of light travel paths depends on the geometrical parameters of the disc and the Comptonising plasma, and is encoded in the impulse-response function \citep[e.g.][]{Gilfanov2000,Poutanen2002,Wilkins2013}. However, simultaneous fits of cross-spectra and time-averaged spectra, possibly with models that account for more complicated geometries for the X-ray source, are needed in order to constrain the impulse-response function. In the following section we employ an alternative technique to obtain tighter constraints on the geometry of the inner accretion flow responsible for the observed soft reverberation lags in MAXI J1820+070. \subsection{Constraining the inner radius of the disc from covariance spectra} \label{sec:covar} The hard X-ray flux that irradiates the inner parts of the accretion disc is partly reflected (i.e. producing absorption features and emission lines plus a Compton scattering continuum) and partly reprocessed as a quasi-thermal continuum in the soft X-ray band. The latter adds to the disc black body component due to internal dissipation \citep[e.g.][]{Gierlinski2008,ZdziarskiBDM2020} and is expected to give a significant contribution in accretion states energetically dominated by the hot Comptonising plasma. 
Therefore, the observed disc black body temperature at the inner radius of the disc is $T_{\rm in}=f_{\rm col} T_{{\rm eff}}$, where $T_{{\rm eff}}$ is the effective temperature due to the internal dissipation and irradiation processes (i.e. $T^4_{\rm{eff}}= T^4_{\rm{eff, dis}}+T^4_{\rm{eff, irr}}$), and $f_{\rm col}>1$ is the colour-correction factor accounting for spectral hardening \citep[e.g.][]{Ebisawa1993,Shimura1995,Davis2005,Salvesen2021}. As discussed in \cite{ZdziarskiBDM2020} and \cite{Zdziarski2021a}, $T_{\rm in}$ can be estimated from the Stefan-Boltzmann law, as: \begin{eqnarray}\label{eq:S-B_law} T_{\rm in} \lower.5ex\hbox{\gtsima} f_{\rm col} T_{\rm{eff, irr}} = f_{\rm col} \left[\frac{(1-a)F_{\rm irr}(R_{\rm in})}{\sigma}\right]^{1/4} ,\end{eqnarray} where $a$ is the disc albedo, $(1-a)$ is the fraction of irradiating flux reprocessed as a quasi-thermal component, $F_{\rm{irr}}$ is the total irradiating hard X-ray flux impinging on the inner radius of the disc $R_{\rm{in}}$, and $\sigma$ is the Stefan-Boltzmann constant. Assuming a standard emissivity profile for the X-ray source (i.e. $\propto R^{-q}$, with the emissivity index $q=3$) and a small scale height compared to the disc radius (such as in a coronal geometry or a lamppost relatively close to the BH horizon; this assumption is discussed in Sect. \ref{sec:discussion1}), the flux irradiating the inner edge of the disc is $F_{\rm irr}(R_{\rm in})\propto R_{\rm in}^{-2}$ \citep[][]{ZdziarskiBDM2020}. Therefore, following \cite{ZdziarskiBDM2020}, Eq. \ref{eq:S-B_law} translates into a lower limit on $R_{\rm in}$: \begin{eqnarray}\label{eq:S-B_law2} \frac{R_{\rm in}}{R_{\rm g}} \lower.5ex\hbox{\gtsima} 10 \frac{\mathcal{R}^{1/2}(1-a)^{1/2} l_{\rm irr}^{1/2}}{(kT_{\rm eff}/1\ {\rm keV})^2 (M/10M_{\odot})^{1/2}} ,\end{eqnarray} where $l_{\rm irr}=4\pi d^2 F_{\rm irr}/L_{\rm Edd}$ (assuming isotropy) is the total (i.e.
including contribution from both variable and constant components) Eddington-scaled (with $L_{\rm Edd}$ for pure hydrogen) luminosity of the irradiating hard X-ray source, and $k$ is the Boltzmann constant. The factor $\mathcal{R}$ is the reflection fraction and was introduced by \cite{ZdziarskiBDM2020} to account for the possible reduction of hard X-ray flux impinging on the disc (e.g. as a consequence of anisotropic emission; see discussion in Sect. \ref{sec:discussion2}). The approximate equality in Eq. \ref{eq:S-B_law2} holds for the case of null disc internal dissipation ($T^4_{\rm{eff}}\sim T^4_{\rm{eff, irr}}$). Under this condition an estimate of the disc truncation radius can be obtained. We note that this additional quasi-thermal component is incorporated in state-of-the-art reflection spectra \citep[e.g.][]{Tomsick2018}, but for the remainder of the paper we fit it as a separate component in order to constrain its temperature. The quasi-thermal emission due to irradiation responds to the fast variability of the hard X-ray flux (thermal reverberation), as inferred from the detection of high-frequency soft X-ray lags (Sect. \ref{sec:softlags}). Previous analyses showed that the thermal response of the disc to variable irradiation also produces a soft, disc black body-like component on top of the hard X-ray Comptonisation continuum in high-frequency covariance spectra \citep[][]{Wilkinson2009,Uttley2011}. These spectra single out components varying in a linearly correlated way (coherently) and produced in the innermost regions of the accretion flow, and can thus be used to study the quasi-thermal emission produced via irradiation of the inner disc. We estimated the temperature of the quasi-thermal component responding to hard X-ray irradiation of the inner disc from the fits of high-frequency covariance spectra ($kT_{\rm in,covar}$).
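As a sanity check, Eq. \ref{eq:S-B_law2} is straightforward to evaluate numerically. The sketch below is a minimal implementation assuming the fiducial $\mathcal{R}=1$ and $a=0.5$; the example input values are hypothetical and do not correspond to any specific observation.

```python
def r_in_lower_limit(kT_eff_keV, l_irr, M_bh=10.0, refl=1.0, albedo=0.5):
    """Lower limit on R_in / R_g from Eq. (S-B_law2):
    R_in/R_g >~ 10 * [refl * (1 - a) * l_irr]^(1/2)
               / [(kT_eff / 1 keV)^2 * (M / 10 Msun)^(1/2)].
    kT_eff_keV is in keV, M_bh in solar masses; the other inputs are
    dimensionless."""
    return (10.0 * (refl * (1.0 - albedo) * l_irr) ** 0.5
            / (kT_eff_keV ** 2 * (M_bh / 10.0) ** 0.5))

# Hypothetical example: kT_eff = 0.2 keV, l_irr = 0.01, 10 Msun BH
print(r_in_lower_limit(0.2, 0.01))  # ~18 R_g
```

Note the steep $T_{\rm eff}^{-2}$ dependence: doubling the measured effective temperature lowers the inferred limit on $R_{\rm in}$ by a factor of four, which is why the evolution of $kT_{\rm in,covar}$ dominates the trend in the derived radii.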
We emphasise that these are the frequencies where the soft X-ray lags ascribed to thermal reverberation are observed, thus allowing us to constrain the geometry of the region of the disc producing these delays. Indeed, the temperature inferred for this region was used together with Eq. \ref{eq:S-B_law2} in order to obtain constraints on the inner radius of the irradiated disc. In order to extract the covariance spectra we chose a broad energy band as reference (0.5--10 keV) and computed cross-spectra between the reference band and a series of adjacent energy bins. We followed the standard approach of removing the contribution of each energy bin from the reference band before computing each cross spectrum (see \citealt{Uttley2014} for details, and \citealt{Ingram2019b} for a different approach). The cross-spectra were averaged over a frequency interval of interest. Since we are interested in the thermal reprocessing component, we considered the frequency intervals where soft X-ray reverberation lags are detected (Sect. \ref{sec:softlags}). In order to exclude as much contribution as possible from the process responsible for the hard X-ray lags (which produces correlated disc and hard X-ray continuum variability at low frequencies; e.g. \citealt{Wilkinson2009,Uttley2011,BDM2015a}), we focused only on the highest sampled frequencies. Specifically, we averaged the cross-spectra over the frequencies ranging between that of the maximum observed soft lag amplitude and $\nu_{0}$ (the frequency intervals for the lags displayed in Fig. \ref{fig:softlags} are indicated by the blue arrows). In general, the tested frequencies range between $\sim$6 Hz and $\sim$ 90 Hz. Since $kT_{\rm in,covar}$ is expected to increase with frequency (as higher frequencies test regions of the disc closer to the BH), we verified that even limiting our analysis to the highest-frequency end of the chosen frequency intervals does not significantly influence the results reported here.
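The extraction procedure described above can be sketched as follows. This is a minimal, unnormalised illustration on synthetic light curves; a real analysis would additionally apply rms normalisation, Poisson-noise subtraction, and averaging over light-curve segments (see \citealt{Uttley2014}).

```python
import numpy as np

def covariance_spectrum(band_counts, dt, fmin, fmax):
    """For each energy bin, average the cross-spectrum between the bin and the
    reference band (with the bin's own contribution removed) over [fmin, fmax].
    band_counts: array of shape (n_bins, n_times); dt: time resolution in s."""
    n_bins, n = band_counts.shape
    freqs = np.fft.rfftfreq(n, dt)
    sel = (freqs >= fmin) & (freqs <= fmax)
    total = band_counts.sum(axis=0)
    cov = np.empty(n_bins)
    for i in range(n_bins):
        ref = total - band_counts[i]  # remove the bin from the reference band
        f_ref = np.fft.rfft(ref - ref.mean())
        f_bin = np.fft.rfft(band_counts[i] - band_counts[i].mean())
        # real part of the frequency-averaged cross-spectrum: the variability
        # in this bin that is linearly correlated with the reference band
        cov[i] = (np.conj(f_ref) * f_bin)[sel].real.mean()
    return cov
```

Removing each bin from the reference before computing its cross-spectrum avoids correlated Poisson noise between the two time series.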
We first fit the covariance spectra of each observation at energies $E\geq3$ keV with a Comptonisation model, absorbed by a column of cold gas (\texttt{TBabs*nthComp} in Xspec, \citealt{Zdziarski1996,Zycki1999,Wilms2000}). The seed photon temperature was fixed at 0.2 keV, while the photon index and normalisation were left free to vary. Since {\rm NICER}\ data are not sensitive to the Comptonisation high-energy cut-off, the temperature of the Comptonising plasma was fixed at $kT_{\rm e}=30$ keV \citep[][]{Zdziarski2021b}. The column density was fixed at $N_{\rm H}=1.4\times10^{21}\ \rm{cm^{-2}}$ (\citealt{Kajava2019}; Dzie{\l}ak et al. 2021). The best-fit models were then extrapolated down to $E=0.3$ keV, revealing significant residuals at soft energies. In order to fit such residuals, we added a quasi-thermal contribution from a disc component. However, since we are fitting covariance spectra (which are essentially obtained from the difference between the time-dependent spectrum and the time-averaged spectrum of the source, \citealt{Wilkinson2009}) we need to account for the fact that the observed thermal emission in such spectra deviates from that of a disc blackbody \citep[][]{vanParadijs1986}. In other words, fitting covariance spectra with a disc blackbody model such as \texttt{diskbb} in Xspec \citep[][]{Makishima1986} would lead to an overestimate of the intrinsic disc temperature. Therefore, following \cite{vanParadijs1986}, within Xspec we defined the model \texttt{deltadisk}\footnote{\texttt{deltadisk}$=$\texttt{diskbb}$(kT_{\rm in,covar})-$\texttt{diskbb}$(kT_{\rm in})$\\$\approx (\partial {\tt diskbb}/\partial T_{\rm in})\vert_{T_{\rm in,covar}}\Delta T_{\rm in}$.} that represents the derivative of the \texttt{diskbb} model over $T_{\rm in}$. This corresponds to computing the variable spectrum as a small perturbation of the time-averaged one.
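The logic of \texttt{deltadisk} can be illustrated with a toy multicolour disc blackbody with $T(r)\propto r^{-3/4}$ and arbitrary normalisation; only the spectral shapes matter for this sketch, which is not the Xspec implementation.

```python
import numpy as np

def diskbb_toy(E_keV, T_in_keV, r_out=50.0, n_r=200):
    """Toy multicolour disc blackbody: sum of Planck-like spectra from annuli
    with T(r) = T_in * (r / r_in)^(-3/4), weighted by annulus area (~ r dr).
    Arbitrary normalisation; only the spectral shape is meaningful."""
    r = np.linspace(1.0, r_out, n_r)
    T = T_in_keV * r ** -0.75
    x = np.clip(E_keV[:, None] / T[None, :], 1e-6, 500.0)
    planck = E_keV[:, None] ** 3 / np.expm1(x)
    return (planck * r[None, :]).sum(axis=1)

def deltadisk_toy(E_keV, T_covar, T_mean):
    """Difference spectrum fitted to covariance spectra:
    diskbb(T_covar) - diskbb(T_mean), i.e. the variable spectrum treated as a
    small perturbation of the time-averaged one (cf. van Paradijs 1986)."""
    return diskbb_toy(E_keV, T_covar) - diskbb_toy(E_keV, T_mean)
```

Since the Planck function increases monotonically with temperature at every energy, the difference spectrum is positive but peaks at higher energies than either disc spectrum, which is roughly why fitting a plain \texttt{diskbb} to covariance spectra would overestimate $T_{\rm in}$.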
In this model, the temperature of the disc in covariance spectra $T_{\rm in,covar}$ was left free to vary, and the seed photon temperature of \texttt{nthComp} was tied to $T_{\rm in,covar}$. The fits were performed in the 0.3--10 keV band, simultaneously for groups of consecutive observations belonging to the same phase of the outburst (as reported in Table \ref{tab1} and Fig. \ref{fig:hid}), thus resulting in a total of four groups. Due to its short exposure, O168 did not allow us to constrain the parameters of the model well, so it was discarded. Results of the fits are reported in Table \ref{tab2} for each fitting group. As an example, the best-fit model to high-frequency ($7-30$ Hz) covariance spectra of combined observations O104-O105 is shown in Fig. \ref{fig:covarO104105}. \begin{figure} \includegraphics[width=\columnwidth]{eeuf_rat_cov104_105_deltadiskbb.pdf} \caption{Unfolded best-fit \texttt{TBabs[deltadisk + nthComp]} model for the high-frequency ($7-30$ Hz) covariance spectrum of combined observations O104-O105. The residuals observed at $\sim$1--2 keV are likely due to {\rm NICER}'s calibration systematics ({\rm NICER}'s effective area shows features due to O, Si, and Au at 0.5, 1.8, and 2.2 keV).} \label{fig:covarO104105} \end{figure} The \texttt{TBabs[deltadisk + nthComp]} model is statistically preferred to the simple Comptonisation model in the joint fits for each phase of the outburst (for the four fitting groups the $\chi^2$ improves by 128, 167, 74, and 71, respectively, for a difference of 7, 18, 12, and 8 in the number of degrees of freedom). These results point to the presence of linearly correlated variability between the primary source of hard X-ray photons and the disc on the short timescales corresponding to the sampled frequencies. This is expected if the excess soft emission is produced via hard X-ray irradiation of the inner disc, and the process is linear at least to first order (Sect. \ref{sec:discussion}).
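The quoted $\Delta\chi^2$ improvements can be translated into rough chance probabilities by treating $\Delta\chi^2$ as $\chi^2$-distributed with the number of extra free parameters as degrees of freedom (a common approximation; strictly, an F-test or simulations would be preferable). The sketch below uses Fisher's $\sqrt{2\chi^2}$ normal approximation to avoid special functions:

```python
import math

def delta_chi2_pvalue(dchi2, ddof):
    """Approximate chance probability of a chi^2 improvement dchi2 for ddof
    extra free parameters, via Fisher's approximation
    sqrt(2 * chi2) ~ N(sqrt(2k - 1), 1)."""
    z = math.sqrt(2.0 * dchi2) - math.sqrt(2.0 * ddof - 1.0)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Delta-chi^2 and extra degrees of freedom quoted for the four fitting groups
for dc, dd in zip([128, 167, 74, 71], [7, 18, 12, 8]):
    print(f"dchi2={dc:3d}, ddof={dd:2d}: p ~ {delta_chi2_pvalue(dc, dd):.1e}")
```

All four probabilities come out vanishingly small, consistent with the added \texttt{deltadisk} component being required by the data in every fitting group.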
This is also in agreement with the interpretation of the soft X-ray lags detected in MAXI J1820+070 as due to thermal reprocessing (Sect. \ref{sec:softlags}). \begin{figure*} \includegraphics[width=\columnwidth]{Gamma_vs_hr_deltadiskbb.pdf} \hspace{0.2cm} \includegraphics[width=\columnwidth]{Tin_vs_hr_deltadiskbb.pdf} \caption{Best-fit values of spectral parameters from fits of high-frequency covariance spectra with the \texttt{TBabs[deltadisk + nthComp]} model. The parameters are plotted as a function of spectral hardness (i.e. the ratio between the 4--12 keV and the 2--4 keV total count rates). \emph{Left panel:} Spectral slope of the hard Comptonisation component, $\Gamma_{\rm covar}$. \emph{Right panel:} Inner disc temperature, $kT_{\rm in,covar}$.} \label{fig:Tin_Gamma_hr} \end{figure*} In Fig. \ref{fig:Tin_Gamma_hr} we report the best-fit values of the $\Gamma_{\rm covar}$ and $kT_{\rm in,covar}$ parameters as a function of spectral hardness. The slope of the hard X-ray Comptonisation component does not show significant changes throughout this part of the outburst (left panel of Fig. \ref{fig:Tin_Gamma_hr}). This indicates that the spectral structure of the innermost parts of the hot plasma does not vary significantly throughout the hard and hard-intermediate states: it remains quite hard, although the errors become large towards the transition and a spectral softening cannot be excluded during the last observation. On the other hand, the temperature of the disc black body component in high-frequency covariance spectra is observed to steadily vary with hardness (right panel of Fig. \ref{fig:Tin_Gamma_hr}). We checked that leaving $N_{\rm H}$ free to vary in covariance spectra results in a slight, systematic shift of $kT_{\rm in,covar}$ towards lower values, which does not significantly influence the obtained results.
The measured disc black body component in covariance spectra becomes hotter, by a factor of $\sim$5, as the time-averaged spectrum softens. Notably, during the rise, plateau, and bright decline, the characteristic frequency $\nu_{0}$ of the soft X-ray lag (and thus its intrinsic amplitude) is consistent with an increase (decrease in intrinsic lag amplitude) by a similar or slightly smaller factor (of $\sim3-5$; Fig. \ref{fig:nu0_vs_HR}, left panel). This strongly hints at a connection between the two in these phases of the outburst. Indeed, the steady increase in the characteristic frequency of soft X-ray lags (Fig. \ref{fig:nu0_vs_HR}) in these phases of the outburst suggests a change in the geometry of the innermost accretion flow. This might ultimately lead to a change in the irradiation of the disc, causing (according to Eq. \ref{eq:S-B_law}) the observed increase in the inner disc temperature $kT_{\rm in,covar}$ (Fig. \ref{fig:Tin_Gamma_hr}, right panel). Therefore, we assumed that the scale height of the X-ray source is relatively small, and used the best-fit $kT_{\rm in, covar}$ values from high-frequency covariance spectra to put constraints on the inner radius of the irradiated disc via Eq. \ref{eq:S-B_law2}. The effective temperature of the disc is influenced by the variable hard X-ray flux, but intrinsic dissipation as well as irradiation from constant (or variable on longer timescales) hard X-ray emission may also contribute to it. Thus, in general Eq. \ref{eq:S-B_law2} yields a lower limit on the inner radius, unless heating from irradiation dominates in these regions ($kT_{\rm in}/f_{\rm col}=kT_{\rm eff}\sim kT_{\rm eff,irr}$). We assumed $f_{\rm col}=1.7$ \citep[][]{Shimura1995,Kubota1998,Gierlinski1999,Davis2005}. In order to estimate the total irradiating luminosity $L_{\rm irr}$ we performed systematic fits of time-averaged spectra in the energy band 3--10 keV. Details of these fits are reported in Appendix \ref{sec:timeave}.
The Eddington-scaled irradiating luminosity $l_{\rm irr}$ was estimated from the 0.1--1000 keV flux ($F_{\rm irr,0.1-1000 keV}$ reported in Table \ref{tab2}) of the extrapolated best-fit Comptonised incident spectrum and assuming the constraints on the distance from \cite{Atri2020}. For the albedo we assumed a value of $a=0.5$, which is an average of the values observed in the hard state of other BHXRBs ($\sim0.3-0.7$; \citealt{ZdziarskiBDM2020}). We accounted for the uncertainty on this parameter in the computations of the errors (see below). We kept the reflection fraction fixed at $\mathcal{R}=1$ (as expected for an isotropic, static X-ray source, covering the inner disc), since time-averaged fits yield estimates very close to this value. In order to compute the 90 per cent confidence errors on the inferred values of $R_{\rm in}$ we randomised the measured irradiating fluxes from time-averaged spectra (according to a log-normal distribution, \citealt{Uttley2005}) and the best-fit values of $kT_{\rm in,covar}$ (according to a normal distribution) within their range of uncertainties. The error was then inferred from the resulting distribution of $R_{\rm in}$ values. In the computation of the errors we also included the uncertainty on the assumed values of $f_{\rm col}$ and $a$ by considering the extreme values of these parameters within the ranges $f_{\rm col}\sim 1.3-2.2$ \citep[e.g.][]{Salvesen2021} and $a\sim0.3-0.7$ \citep[][]{ZdziarskiBDM2020}. Given that this computation yields a lower limit on $R_{\rm in}$, we conservatively accounted for the uncertainties on the BH mass and source distance (Sect. \ref{sec:intro}) by considering their $1\sigma$ confidence values that minimise the inner radius. Figure \ref{fig:Rin_vs_hr} reports the results as a function of spectral hardness.
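The Monte Carlo error propagation described above can be sketched as follows; all numerical inputs are hypothetical placeholders rather than the measured values for any observation, and the BH mass is fixed at an illustrative value for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical inputs for a single observation (placeholders, not measurements)
kT_covar, kT_err = 0.30, 0.03        # keV; best-fit value and error
log_l_irr, log_l_err = -1.3, 0.05    # log10 of Eddington-scaled irradiating flux
M_bh = 8.5                           # solar masses (illustrative)

kT = rng.normal(kT_covar, kT_err, n)                  # normal scatter
l_irr = 10.0 ** rng.normal(log_l_irr, log_l_err, n)   # log-normal scatter
f_col = rng.uniform(1.3, 2.2, n)                      # colour-correction range
albedo = rng.uniform(0.3, 0.7, n)                     # albedo range

kT_eff = kT / f_col
# Eq. (S-B_law2) with R = 1, evaluated over the randomised inputs
r_in = 10.0 * ((1.0 - albedo) * l_irr) ** 0.5 / (kT_eff ** 2 * (M_bh / 10.0) ** 0.5)

lo, med, hi = np.percentile(r_in, [5, 50, 95])  # 90 per cent interval
```

Percentiles of the resulting $R_{\rm in}$ distribution give the confidence range; sampling $f_{\rm col}$ and $a$ uniformly over their allowed ranges folds the systematic uncertainties into the same distribution.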
These are consistent with significant disc truncation throughout most of the analysed observations, with a net decrease in the disc inner radius (by a factor of $\sim$5--6 at most) between the first and the last observation. \begin{figure} \includegraphics[width=\columnwidth]{Rinmin_vs_hr_deltadiskbb.pdf} \caption{Lower limits on the inner disc radius as a function of spectral hardness, as obtained from constraints on the quasi-thermal component in high-frequency covariance spectra. The horizontal dotted lines mark the ISCO for a Schwarzschild and Kerr BH. Results are obtained assuming an isotropic static X-ray source irradiating the inner disc (reflection fraction $\mathcal{R}=1$) and with a small scale height compared to the disc radius.} \label{fig:Rin_vs_hr} \end{figure} \section{Discussion} \label{sec:discussion} Our analysis shows that the geometry of the innermost accretion flow of MAXI J1820+070 changes significantly and continuously throughout the hard and hard-intermediate states of its 2018 outburst. This result was obtained via two independent X-ray spectral-timing methods applied to {\rm NICER}\ data, namely the analysis of soft X-ray reverberation lags, and of the soft quasi-thermal component in high-frequency covariance spectra. \subsection{The evolution of the inner flow geometry} \label{sec:discussion1} During the rise, the plateau, and the bright decline (Fig. \ref{fig:hid}) the soft X-ray lags in MAXI J1820+070 show a steady increase (decrease) of characteristic frequency $\nu_{0}$ (intrinsic amplitude) as the source softens (Fig. \ref{fig:nu0_vs_HR}, left panel). This trend is in qualitative agreement with the evolution of the frequency of the type-C QPO in the PSDs of the source \citep[][]{Buisson2019}. Soft X-ray lags in BHXRBs are ascribed to disc thermal reverberation of incident hard X-ray photons \citep[e.g.][]{Uttley2011,BDM2015b,Kara2019}.
Therefore, the behaviour observed in MAXI J1820+070 can be interpreted in terms of a reduction of relative distance between the Comptonising region and the disc as the source softens (such behaviour is schematically illustrated in Fig. \ref{fig:cartoon} in the framework of the evolving inner radius of a truncated disc; see discussion below). However, during the hard-soft transition this trend breaks (Fig. \ref{fig:nu0_vs_HR}, right panel). In particular, during observations at spectral hardness $\lower.5ex\hbox{\ltsima} 0.28$, $\nu_{0}$ suddenly decreases (by a factor of $\sim$2--3), implying an increase in the reverberation lag intrinsic amplitude. We notice that this occurs simultaneously with an intense hard (15--50 keV) X-ray flare, as detected in \emph{Swift}/BAT light curves \citep{WangY2020}, suggesting a major change in the innermost emitting regions. Moreover, during O197, when the transition took place (as identified via X-ray and radio analysis; \citealt{Shidatsu2019,Homan2020}) there is no evidence of hard lags (the lack of a hard lag is observed at least down to $\sim 0.05$ Hz), and a long ($\sim$ 15--20 ms) reverberation lag is observed to dominate the entire frequency band (Fig. \ref{fig:softlags} bottom right panel). We suggest that such behaviour might be the consequence of a different component dominating the X-ray emission at this time, possibly associated with the observed \citep[][]{Bright2019,Espinasse2020,Wood2021} ballistic jet ejections from the source (see Fig. \ref{fig:cartoon} and discussion below). State-of-the-art X-ray spectral-timing models may not be able to reproduce the soft X-ray lags, as well as the (apparently total) suppression of hard X-ray lags in this phase of the outburst. At the frequencies of the soft X-ray reverberation lag, covariance spectra show excess soft X-ray emission, well-modelled by a quasi-thermal component due to the disc (Fig. \ref{fig:covarO104105}, Sect. \ref{sec:covar}).
This indicates that this component is produced via X-ray heating, and that the thermal response of the disc to variable hard X-ray irradiation preserves a certain level of coherence. We found that the inner temperature of the disc black body component in the high-frequency covariance spectra of MAXI J1820+070 steadily increases as the source softens (Fig. \ref{fig:Tin_Gamma_hr}, right panel). Following the method outlined in \cite{ZdziarskiBDM2020} and assuming a constant and relatively small scale height for the X-ray source, we used these measurements to obtain constraints on the inner radius of the irradiated disc region responsible for the observed reverberation lags. Our estimates are consistent with an evolution of $R_{\rm in}$ (Fig. \ref{fig:Rin_vs_hr}), causing changes in the irradiation of the disc, and consequently in its temperature (Fig. \ref{fig:Tin_Gamma_hr}, right panel). In most of the hard state (i.e. during the rise, plateau, and bright decline) the observed increase (decrease) in the irradiated inner disc temperature (radius) as the source softens is consistent with the scaling characterising the frequency of soft X-ray reverberation lags (a change by a factor of $\sim3-4$ is observed in all these cases, Fig. \ref{fig:nu0_vs_HR}, Fig. \ref{fig:Tin_Gamma_hr} right panel, and Fig. \ref{fig:Rin_vs_hr}). This behaviour is qualitatively in agreement with predictions of truncated-disc models \citep[e.g.][]{Esin1997,Done2007}. However, the observed trend characterising the irradiated inner disc temperature and radius persists until the transition, without showing any clear break, unlike the soft lags. In other words, these two independent diagnostics of geometry appear to stop probing the same regions when the source is close to the transition.
\begin{figure} \includegraphics[width=\columnwidth]{cartoon.pdf} \caption{Sketch of the physical scenario proposed to explain the observed evolution of soft X-ray reverberation lags and of the quasi-thermal component from irradiation in high-frequency covariance spectra observed in MAXI J1820+070.} \label{fig:cartoon} \end{figure} Therefore, we interpret the behaviour near the transition to soft-intermediate states as due to a drastically different configuration of the innermost accretion flow than that characterising the preceding phases of the outburst. In particular, by tracing back the motion of the discrete, relativistic ejecta from the central X-ray source detected in radio and X-ray observations of MAXI J1820+070, \cite{Bright2019} and \cite{Espinasse2020} estimate a launch date around MJD 58306, while an additional earlier ejection (MJD 58305.6) has been recently reported \citep[][]{Wood2021}. The latter coincides with the last of our observations (O197). This suggests a link between these discrete ejections, the long reverberation lag and the suppression of hard X-ray lags. The scenario we propose to explain the observed phenomenology is illustrated in the simplified sketch of Fig. \ref{fig:cartoon}. The inner radius of the disc decreases steadily throughout most of the hard state, as implied by the increase (decrease) of lag frequency (amplitude) and as traced by the inner temperature of the irradiated inner disc in high-frequency covariance spectra. Close to transition, while the inner radius is consistent with settling near (or at) the ISCO (as traced by the steady evolution of the quasi-thermal component in high-frequency covariance spectra), the structure of the X-ray source undergoes major changes (as traced by the break in the scaling of soft X-ray lags and the suppression of hard X-ray lags). Indeed, the intrinsic reverberation lag depends on the location of the region(s) of hard X-ray energy dissipation and the reprocessing region in the disc. 
If the emission due to internal shocks in a ballistic jet becomes significant close to the transition, then the dissipation of hard X-rays would predominantly occur in a larger or more distant region, which would cause the irradiation of a larger area of the disc. This would qualitatively explain the generally lower observed characteristic lag frequencies (Fig. \ref{fig:nu0_vs_HR}), and in particular, the very long reverberation lag (Fig. \ref{fig:softlags} bottom rightmost panel) at state transition. Moreover, if the hard X-ray lags commonly observed during the outburst are produced in the inner accretion flow (e.g. as proposed in models of propagating accretion rate fluctuations; e.g. \citealt{Lyubarskii1997}), a change in the dominant process of hard X-ray emission would also explain the observed suppression of these hard lags. In particular, this is expected if the dominant hard X-ray component is largely decoupled from fluctuations generated in the accretion flow (e.g. if this component originates in shocks along the jet, \citealt{Vincentelli2018}). We observe that the soft X-ray lags start deviating from the trend observed at higher spectral hardness about 4 days before the transition (Fig. \ref{fig:nu0_vs_HR}, right panel), namely on MJD 58302. This might indicate that the major changes in the inner accretion flow leading to the launch of relativistic ejecta start occurring a few days before the ejection. Interestingly, \cite{Buisson2021} reported the detection of a much longer ($\sim$20 s) soft X-ray lag $\sim$ 0.4 day after the transition (i.e. in soft-intermediate states). The authors interpreted it as due to the disruption and refilling of the inner disc following the jet ejection. We again emphasise that the estimates of the disc inner radius through Eq. \ref{eq:S-B_law2} (Fig. 
\ref{fig:Rin_vs_hr}) are strictly valid for an X-ray source with a height above the disc small compared to the disc radius (for example a corona above the inner disc or a lamppost located at a few $R_g$ from the BH, Sect. \ref{sec:covar}). Moreover, our computations assume that the height of the source remains approximately constant. If this condition does not apply, then variations in the height of the X-ray source are also expected to play a significant role in the observed changes of disc illumination. Recently, \cite{Kara2019} and \cite{Buisson2019} proposed a scenario whereby the vertical extent of the X-ray source, rather than the inner edge of the accretion disc, decreases throughout the hard state, driving the observed changes in the thermal reverberation lags. In particular, this was proposed in order to justify the remarkable stability of the broad component of the Fe K line in the hard and hard-intermediate states of MAXI J1820+070. However, this scenario entails constant illumination of the inner parts of the disc, which ultimately implies that the geometry of the regions of the X-ray source closer to the disc (and responsible for the irradiation of the inner disc) does not change significantly. Our results appear to disfavour such a scenario because in this case we would expect to measure a constant inner disc temperature, while we find the temperature to increase steadily (Fig. \ref{fig:Tin_Gamma_hr}, right panel). On the other hand, given the above-mentioned assumptions of our analysis, we cannot completely rule out the possibility that the X-ray source varies in its height (rather than in its vertical extent) during the outburst, thus contributing to the observed trend of inner disc temperature. However, this scenario seems hard to reconcile with the observed evolution of type-C QPOs in this source \citep[][]{Buisson2019}.
Therefore, we favour a solution whereby the hard state is dominated by a decrease in the inner radius. This is also supported by a recent re-analysis of four \emph{NuSTAR} observations of MAXI J1820+070 at different hard state epochs \citep[][]{Zdziarski2021b}, according to which the lack of a spectral evolution of the broad component of the Fe K line is indicative of modest relativistic effects, as expected when the inner radius does not reach the ISCO. The apparent lack of a clear relativistic component in the high-frequency covariance spectra of the source (Fig. \ref{fig:covarO104105}) further corroborates this interpretation. \subsection{The truncation radius of the disc} \label{sec:discussion2} Our analysis of high-frequency covariance spectra (Sect. \ref{sec:covar}) yields a lower limit on the disc inner radius for each of the analysed observations (Fig. \ref{fig:Rin_vs_hr}). These estimates depend on the value of the reflection fraction parameter $\mathcal{R}$. In our treatment the parameter $\mathcal{R}$ is used to account for a possible reduction of hard X-ray flux reaching the disc for causes other than changes of disc radius or X-ray source height \citep[][]{ZdziarskiBDM2020}. Assuming isotropic illumination of the disc ($\mathcal{R}=1$), we found truncation at $\gsim30\ R_g$ at the beginning of the monitoring, corresponding to bolometric luminosities of a few per cent of $L_{\rm Edd}$ \citep[][]{Zdziarski2021b}. The disc turns out to be significantly truncated ($\gsim10\ R_g$) throughout most of the bright hard and hard-intermediate states, and appears to reach close to the ISCO only at the transition. Assuming a lower value of $\mathcal{R}$ (e.g. as found in \citealt{Zdziarski2021b}) would reduce the inferred value for $R_{\rm in}$ by a factor $\mathcal{R}^{1/2}$, thus not affecting our inference of a disc significantly truncated throughout most of the analysed phases of the outburst.
However, even accounting for the possible effects of a smaller $\mathcal{R}$, the data remain consistent with a disc at the ISCO of a slowly spinning BH during the transition (O197). This is in agreement with other independent results suggesting the BH in MAXI J1820+070 has spin $a<1$ \citep[][]{Buisson2019,Atri2020}. As discussed in Sect. \ref{sec:softlags}, the soft X-ray reverberation lags can also be used to infer an estimate of the disc truncation radius. However, this requires a more complicated modelling approach that relies on the simultaneous fit of variability, lag and time-averaged spectra \citep[e.g.][]{Mahmoud2019,Mastroserio2019}, and depends on the assumptions about the geometry of the Comptonising region. This problem has been widely discussed in the literature \citep[][]{Uttley2011,BDM2017,WangJ2020}, and can be understood by considering the intrinsic soft lag amplitude implied by its characteristic frequency $\nu_0$ (Fig. \ref{fig:nu0_vs_HR}), and the distance that this amplitude maps. The corresponding Euclidean distance is $d_{\rm rev}=c\tau_{\rm rev}(1+\cos[\theta-i])^{-1}\sim c(2\nu_{0})^{-1}(1+\cos[\theta-i])^{-1}$, where $\theta$ is the angle between the line of sight to the X-ray source and the straight path linking the X-ray source to the disc reprocessing region. The case $\theta=\pi/2$ approximates a truncated disc illuminated by a central source (in this case the derived distance coincides with the inner disc radius). Assuming $\theta=\pi/2$ and $i=70^\circ$, our estimates of $\nu_0$ (Sect. \ref{sec:softlags} and Table \ref{tab2}) yield Euclidean distances between the X-ray source and the reprocessing region in the disc ranging between a few tens and a few hundreds of $R_{\rm g}$. These distances are a factor of $\sim3-10$ times larger than the disc inner radius inferred from covariance spectra.
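The conversion from $\nu_0$ to a Euclidean distance can be written out explicitly. The sketch below adopts $\theta=\pi/2$, $i=70^\circ$, and an illustrative BH mass (any value consistent with the constraints of Sect. \ref{sec:intro} could be substituted):

```python
import math

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30  # SI units

def rev_distance_rg(nu0_hz, M_bh=8.5, theta_deg=90.0, incl_deg=70.0):
    """Euclidean distance mapped by a reverberation lag, in gravitational
    radii: d_rev = c * tau / (1 + cos(theta - i)), with tau ~ 1 / (2 * nu0)."""
    tau = 1.0 / (2.0 * nu0_hz)                            # intrinsic lag, s
    geom = 1.0 + math.cos(math.radians(theta_deg - incl_deg))
    r_g = G * M_bh * M_SUN / c ** 2                       # gravitational radius, m
    return c * tau / (geom * r_g)

# e.g. nu0 = 30 Hz maps to roughly 200 R_g for these assumptions
print(rev_distance_rg(30.0))
```

Over the sampled range $\nu_0\sim6$--$90$ Hz this conversion indeed spans a few tens to several hundreds of $R_{\rm g}$, consistent with the values quoted above.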
In fact, the distance inferred from reverberation lags is a weighted average among all possible light paths between the X-ray source and the disc (i.e. corresponding to the mean lag of the impulse-response function; e.g. \citealt{Wilkins2013}). Therefore, in the case of complicated geometries (i.e. a more intricate impulse-response function), such as for an extended X-ray source, the link between the measured lag and the geometrical parameters of the disc and of the X-ray source may be quite complex. This was also shown in \cite{Mahmoud2019}, where the inner truncation radius was found to be significantly smaller (by a factor of $\sim4$) than the distance mapped by the observed reverberation lag. This would explain why the soft X-ray lags measured in MAXI J1820+070 apparently imply distances larger than the disc inner radius inferred from covariance spectra. It is interesting to note that such large discrepancies between the distances implied by X-ray reverberation lags and the disc inner radius inferred by other independent methods are generally not observed in active galactic nuclei (AGN; e.g. \citealt{Fabian2009,BDM2013,Marinucci2014,Kara2016,Alston2020}). This might be due to a substantial difference between the geometry of the X-ray source in hard state BHXRBs as compared to that of those AGN (all type-1 non-jetted Seyferts or narrow line Seyferts) for which X-ray reverberation has been measured. In particular, dissipation of hard X-rays in these AGN likely occurs in a more compact region \citep[e.g.][]{Fabian2015}, thus reducing the number of possible light paths and hence the width of the impulse-response function. Such an inference is also in agreement with the fact that the characteristic timescales of variability appear systematically offset towards lower frequencies in hard state BHXRBs as compared to AGN \citep[e.g.][]{Koerding2007}.
Finally, it is important to point out that in the previous discussion we made the implicit assumption that the detected reverberation lags are dominated by the light travel time between the X-ray source and the disc, while other timescales are negligible. In particular, the thermalisation timescale could in principle also contribute to the observed delays. This quantity scales inversely with the density and directly with the temperature of the disc surface layers irradiated by the X-ray source \citep[e.g.][]{Collin2003}. For certain values of these two parameters (e.g. density $\lsim10^{18}\rm cm^{-3}$ for temperatures of $\sim$1 keV) the thermalisation timescale may be of the order of the observed lags (i.e. $\sim0.1-1$ ms). However, while the density of the disc is currently a poorly constrained parameter, recent studies suggest it to be quite high in BHXRBs ($\gsim10^{19-20}\rm cm^{-3}$, \citealt{Tomsick2018,Jiang2019,ZdziarskiBDM2020}). If so, the corresponding reprocessing time is negligible compared to the measured lag. Moreover, contrary to what is observed, if the lags were dominated by the thermalisation timescale, the observed process would be highly non-linear (incoherent; e.g. \citealt{Vaughan1997}). \section{Conclusions} We performed a systematic X-ray spectral-timing study of the first part of the 2018 outburst of the BHXRB system MAXI J1820+070. Our analysis covers the hard and hard-intermediate states as well as the transition to the soft-intermediate states. Our main results are as follows: \begin{itemize} \item The soft X-ray thermal reverberation lags show a steady evolution throughout most of the hard state (specifically during the rise, plateau, and bright decline).
This evolution is indicative of a decrease in the relative distance between the X-ray source and the disc as the source softens.\\ \item The quasi-thermal component associated with the soft X-ray reverberation lags and observed in high-frequency covariance spectra shows a steady increase in the inner temperature. This is a consequence of changes in the hard X-ray irradiation of the inner disc and is consistent with a decrease in the disc truncation radius, in agreement with the observed evolution of the soft X-ray lags. The disc is consistent with being highly truncated at the beginning of the outburst ($R_{\rm in}\lower.5ex\hbox{\gtsima} 30 R_{\rm g}$) and with remaining significantly truncated throughout most of the hard and hard-intermediate states ($R_{\rm in}\lower.5ex\hbox{\gtsima} 10 R_{\rm g}$).\\ \item Major changes occur before the transition to the soft-intermediate states. The thermal reverberation lags suddenly increase (decrease) in amplitude (frequency), breaking the trend observed during the preceding phases of the outburst. This suggests an increase in the relative distance between the X-ray source and the disc. At the same time, the hard X-ray lags intrinsic to the primary continuum become suppressed. On the other hand, the trend characterising the quasi-thermal component in high-frequency covariance spectra persists until the transition, without showing any clear break. This trend is consistent with the inner edge of the disc progressively approaching the ISCO, and possibly settling there at (or close to) the transition.\\ \end{itemize} In order to explain the observed X-ray spectral-timing phenomenology, we propose a scenario whereby the inner radius of the disc decreases steadily throughout the hard and hard-intermediate states. This scenario is an alternative to the vertically contracting corona proposed in \cite{Kara2019} and appears more in line with the detected steady increase in the irradiated disc temperature.
While the inner disc maintains this trend close to the transition to soft-intermediate states, reaching (or nearing) the ISCO, the X-ray source undergoes major structural changes. In particular, we propose that in this phase the dissipation of hard X-rays predominantly occurs in the ballistic jet (possibly through internal shocks). This would qualitatively explain the behaviour of soft X-ray reverberation lags and the suppression of hard X-ray lags (provided the latter are intrinsically produced in the accretion flow). Interestingly, the soft X-ray lags start deviating from the trend observed during most of the hard state about 4 days before the appearance of a type-B QPO (which marks the hard-to-soft transition; \citealt{Homan2020}) and the ejection of relativistic plasma (as later detected at radio and X-ray wavelengths; \citealt{Bright2019,Espinasse2020}), possibly representing the precursor to such events. Future intensive monitoring campaigns of BHXRB systems with {\rm NICER}\ will allow us to further test this scenario. 
\begin{landscape} \begin{table} \centering \caption{Log of analysed {\rm NICER}\ observations of MAXI J1820+070.} \label{tab1} \resizebox{1.3\textwidth}{!}{\begin{tabular}{ccccccccccccccc} \hline \noalign{\smallskip} Obs ID & Exp &{\tt N}$_{\rm{FPM}}$ & Phase & Date &Obs ID & Exp &{\tt N}$_{\rm{FPM}}$ & Phase & Date & Obs ID & Exp &{\tt N}$_{\rm{FPM}}$ & Phase & Date \\ \noalign{\smallskip} & [s] & & & & & [s] & & & & & [s] & & & \\ \noalign{\smallskip} \hline \noalign{\smallskip} 1200120101 & 6274 & 50 & Rise & 2018-03-12 & 1200120132 & 3773 & 50 & Plateau & 2018-04-20 & 1200120165 & 1303 & 50 & Bright decline & 2018-05-31\\ & & & & &1200120133 & 4727 & 50 & Plateau & 2018-04-21 & 1200120166 & 1349 & 50 & Bright decline & 2018-06-01 \\ 1200120102 & 4513 & 50 & Rise & 2018-03-13 & 1200120134 & 6345 & 50 & Plateau & 2018-04-21 & 1200120167 & 1529 & 50 & Bright decline & 2018-06-02 \\ & & & & & & & & & & & & & & \\ 1200120103 & 9474 & 50 & Rise & 2018-03-13 &1200120135 & 4077 & 50 & Plateau & 2018-04-23 & 1200120168 & 417 & 50 & Bright decline & 2018-06-03 \\ & & & & &1200120136 & 1570 & 50 & Plateau & 2018-04-24 & & & & & \\ 1200120104 & 6231 & 50 & Rise & 2018-03-15 &1200120137 & 6870 & 50 & Plateau & 2018-04-25 & 1200120169 & 1761 & 50 & Bright decline & 2018-06-04 \\ 1200120105 & 3060 & 50 & Rise & 2018-03-16 &1200120138 & 3131 & 50 & Plateau & 2018-04-25 & 1200120170 & 3592 & 50 & Bright decline & 2018-06-05 \\ & & & & &1200120139 & 560 & 50 & Plateau & 2018-04-27 & & & & & \\ 1200120106 & 4446 & 50 & Rise & 2018-03-21 & & & & & & 1200120171 & 2373 &50 & Bright decline & 2018-06-06\\ 1200120107$^{a}$& 9293 & 50 & Rise & 2018-03-22 & 1200120140 & 1844 & 50 & Plateau & 2018-04-29 & & & & & \\ & & & & &1200120141 & 904 & 50 & Plateau & 2018-05-01 & 1200120172 & 2249 & 50 & Bright decline & 2018-06-07 \\ 1200120107$^{b}$& 3703 & 42 & Rise & 2018-03-22 & & & & & & & & & & \\ 1200120108 & 5296 & 42 & Rise & 2018-03-22 &1200120142 & 3195 & 50 & Plateau & 2018-05-02 & 
1200120173 & 3284 & 50 & Bright decline & 2018-06-08 \\ & & & & & & & & & & 1200120174 & 2698 & 50 & Bright decline & 2018-06-09 \\ 1200120109 & 11472 & 50 & Rise & 2018-03-24 & 1200120143 & 2075 & 50 & Plateau & 2018-05-03 & & & & & \\ & & & & &1200120144 & 2992 & 50 & Plateau & 2018-05-04 & 1200120175 & 2156 & 50 & Bright decline & 2018-06-10 \\ 1200120110 & 19360 & 50 & Plateau & 2018-03-24 & & & & & & 1200120176 & 5311 & 50 & Bright decline & 2018-06-11 \\ & & & & &1200120145 & 4863 & 50 & Plateau & 2018-05-05 & & & & & \\ 1200120111 & 14893 & 50 & Plateau & 2018-03-26 &1200120146 & 4756 & 50 & Plateau & 2018-05-06 & 1200120177 & 435 & 50 & Bright decline & 2018-06-12\\ & & & & & & & & & & 1200120178 & 2625 & 50 & Bright decline & 2018-06-13 \\ 1200120112 & 12033 & 50 & Plateau & 2018-03-26 &1200120147 & 4609 & 50 & Plateau & 2018-05-07 & 1200120179 & 1237 & 50 & Bright decline & 2018-06-14 \\ 1200120113 & 1980 & 50 & Plateau & 2018-03-28 & & & & & & 1200120180 & 2246 & 50 & Bright decline & 2018-06-15 \\ 1200120114 & 2701 & 50 & Plateau & 2018-03-29 &1200120148 & 2650 & 50 & Plateau & 2018-05-08 & & & & & \\ 1200120115 & 3137 & 50 & Plateau & 2018-03-30 & & & & & & 1200120182 & 1120 & 50 & Bright decline & 2018-06-17 \\ & & & & &1200120149 & 2650 & 50 & Plateau & 2018-05-09 & 1200120183 & 438 & 50 & Bright decline & 2018-06-18 \\ 1200120116 & 9209 & 50 & Plateau & 2018-03-31 & 1200120150 & 1652 & 50 & Plateau & 2018-05-10 & 1200120184 & 448 & 50 & Bright decline & 2018-06-19 \\ 1200120117 & 2752 & 50 & Plateau & 2018-04-01 &1200120151 & 1037 & 50 & Plateau & 2018-05-11 & 1200120185 & 578 & 50 & Bright decline & 2018-06-20 \\ & & & & & & & & & & & & & & \\ 1200120118 & 8105 & 50 & Plateau & 2018-04-02 & 1200120152 & 2102 & 50 & Plateau & 2018-05-12 & 1200120186 & 916 & 50 & Hard-soft Transition & 2018-06-23 \\ 1200120119 & 2935 & 50 & Plateau & 2018-04-03 & 1200120153 & 356 & 50 & Plateau & 2018-05-13 & 1200120187 & 1395 & 50 & Hard-soft Transition & 2018-06-24 
\\ & & & & & & & & & & & & & & \\ 1200120120 & 6049 & 50 & Plateau & 2018-04-04 & 1200120155 & 326 & 50 & Bright decline &2018-05-21 & 1200120188 & 1244 & 50 & Hard-soft Transition & 2018-06-27 \\ 1200120121 & 2955 & 50 & Plateau & 2018-04-05 & 1200120156 & 2886 &50 & Bright decline & 2018-05-22 & & & & & \\ 1200120122 & 3247 & 50 & Plateau & 2018-04-06 & & & & & & 1200120189 & 11441 & 50 & Hard-soft Transition & 2018-06-28 \\ 1200120123 & 752 & 50 & Plateau & 2018-04-07 & 1200120157 & 3584 & 50 & Bright decline & 2018-05-23 & & & & & \\ & & & & & & & & & & 1200120190 & 2150 & 50 & Hard-soft Transition & 2018-06-29 \\ 1200120124 & 177 & 50 & Plateau & 2018-04-09 & 1200120158 & 1730 & 50 & Bright decline & 2018-05-24 & & & & & \\ 1200120125 & 307 & 50 & Plateau & 2018-04-10 & 1200120159 & 1960 & 50 & Bright decline & 2018-05-25 & 1200120194 & 4738 & 50 & Hard-soft Transition &2018-07-03 \\ 1200120126 & 896 & 50 & Plateau & 2018-04-11 &1200120160 & 1688 & 50 & Bright decline & 2018-05-26 & & & & & \\ 1200120127 & 354 & 50 & Plateau & 2018-04-12 & 1200120161 & 2540 & 50 & Bright decline & 2018-05-27 & 1200120195 & 691 & 50 & Hard-soft Transition & 2018-07-04 \\ & & & & & 1200120162 & 574 & 50 & Bright decline & 2018-05-28 & & & & & \\ 1200120130 & 7493 & 50 & Plateau & 2018-04-16 & & & & & & 1200120196 & 541 & 50 & Hard-soft Transition & 2018-07-05 \\ 1200120131 & 6571 & 50 & Plateau & 2018-04-17 & 1200120163 & 1040 & 50 & Bright decline & 2018-05-29 & & & & & \\ & & & & & 1200120164 & 375 & 50 & Bright decline & 2018-05-30 & 1200120197$^{c}$ & 12429 & 27 & Hard-soft Transition & 2018-07-06 \\ \noalign{\smallskip} \hline \end{tabular}} \tablefoot{The table reports: {\rm NICER}\ observations IDs; net on-source exposure time after screening and filtering of $<10$ s GTIs; number of FPMs used for the analysis (\texttt{N}$_{\rm FPM}$); position of the source in the HID as sketched in Fig. \ref{fig:hid}; date of the observation (yyyy:mm:dd). 
The adopted grouping highlights observations combined for the X-ray spectral-timing analysis.\\ \tablefoottext{a,b}{First and second part of observation 1200120107, characterised by a change in the number of active FPMs; \\$^c$ First part of observation 1200120197, including the time before and during the transition.}} \end{table} \end{landscape} \begin{acknowledgements} This work is part of the EU-funded ``BHmapping'' project. While this work was being reviewed, \citealt{WangJ2021} was published, also showing the behaviour of reverberation lags at transition. The authors thank G. Mastroserio for providing the version of {\tt RELTRANS} used in this paper, M. Mendez for helpful discussions on fits of variability spectra, G.K. Jaisawal for help with {\rm NICER}\ data reduction, and the anonymous referee for their useful comments. BDM acknowledges support from the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement No. 798726 and Ram\'on y Cajal Fellowship RYC2018-025950-I. AAZ and MD acknowledge support from the Polish National Science Centre under the grants 2015/18/A/ST9/00746 and 2019/35/B/ST9/03944. GP acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 865637). TMB acknowledges financial contribution from the agreement ASI-INAF n.2017-14-H.0 and PRIN-INAF 2019 n.15. We acknowledge support from the International Space Science Institute (Bern). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec:intro} One of the main objectives of Run 3 of the Large Hadron Collider (LHC) will be the further investigation of the Higgs sector. Most studies directly targeting the Higgs boson will focus on its onshell production and subsequent decay. Indeed, one might naively expect that the cross section to produce an offshell Higgs boson is negligible, due to the extremely narrow width of the Higgs boson of about 4 MeV in the Standard Model (SM). However, contrary to this expectation, it is known that approximately 10\% of $gg \to H^* \to VV$ events are produced with an invariant mass $m_{VV}$ above the $2 m_V$ production threshold~\cite{Kauer:2012hd}. The importance of the offshell region for Higgs phenomenology was further highlighted in Ref.~\cite{Caola:2013yja}, which showed that a comparison of onshell and offshell data can provide stringent constraints on the width of the Higgs boson (see also Refs.~\cite{Campbell:2013una,Campbell:2013wga}). While later work indicated that such constraints are not model-independent, they also revealed the potential of using offshell data to probe the couplings of the Higgs boson~\cite{Anderson:2013afp,Gainer:2013rxa,Englert:2014ffa,Logan:2014ppa,Gainer:2014hha,Englert:2014aca,Azatov:2014jga}. Offshell analyses have been performed by both ATLAS~\cite{Aad:2015xua,Aaboud:2018puo} and CMS~\cite{Khachatryan:2014iha,Khachatryan:2015mma,Khachatryan:2016ctc,Sirunyan:2019twz}, and have succeeded in constraining the Higgs boson width to $\mathcal{O}(10\,\mathrm{MeV})$. This is several orders of magnitude smaller than a direct constraint, which is limited by the detector resolution. Nevertheless, offshell analyses are currently still limited by the available statistics. Further studies of offshell Higgs boson production will therefore be a key component of the investigations of the Higgs sector during both Run 3 and the high-luminosity phase of the LHC.
In this paper, we will focus on the production of an offshell Higgs boson through gluon fusion and its subsequent decay into a pair of electroweak gauge bosons. To this end we consider the signal Higgs production process $gg \to H^* \to VV$ together with the corresponding continuum background process $gg \to VV$ and their interference. We study the two diboson modes $VV=\{ZZ,W^+W^-\}$ and we assume leptonic decays of the diboson pair. In the following, for brevity, we often denote the processes according to the intermediate diboson resonances ($ZZ$, $W^+W^-$). However, by this we always refer to the full four-lepton offshell processes, including the interference between $Z$ and offshell photon production. The signal process proceeds predominantly through a top-quark loop. For onshell Higgs production, the top-quark mass is the largest scale in the process and can be approximated as infinitely heavy, allowing this loop-induced process to be reduced to a tree-level one. Using this approximation, the next-to-next-to-next-to-leading order (N3LO) corrections to Higgs production have been computed~\cite{Anastasiou:2015vya,Anastasiou:2016cez,Mistlberger:2018etf}. However, this approximation is not valid for offshell Higgs production, since the virtuality of the Higgs boson may be comparable to (or even larger than) the top-quark mass. This means that a leading-order (LO) prediction for offshell Higgs production requires the computation of a one-loop amplitude with the full top-mass dependence, while the next-to-leading order (NLO) correction requires a two-loop amplitude. By itself, this would not be so onerous, but there is a second reason why predictions for offshell Higgs production are more demanding than for onshell Higgs production. It is well-known that the interference effects between the signal $gg \to H^* \to VV$ and the background process $gg \to VV$ can be sizeable and thus must be taken into account~\cite{Kauer:2012hd}.
Moreover, as we discussed above, the impact of top quarks in the loops cannot be neglected, and this means that in the computation of the background amplitudes $gg \to VV$ the contribution from both massless and massive quarks circulating in the loops should be considered. Results for offshell Higgs production including the mass dependence of quarks in the loop and interference effects are known at LO ~\cite{Binoth:2008pr,Kauer:2012hd,Campbell:2013una,Campbell:2013wga}. Results in the presence of an additional radiated jet have also been presented~\cite{Cascioli:2013gfa,Campbell:2014gua}. At NLO, the two-loop $gg \to VV$ amplitudes for massless quarks circulating in the loop have been known for several years~\cite{Caola:2015ila,vonManteuffel:2015msa}. However, the corresponding amplitudes for massive quark loops have only recently become available~\cite{Agarwal:2020dye,Bronnum-Hansen:2020mzk, Bronnum-Hansen:2021olh}. This means that a fully consistent NLO prediction with the exact dependence on the top-quark mass for offshell Higgs production is in sight but still not available. However, NLO calculations including interference effects have been presented based on an expansion in $1/m_t$~\cite{Melnikov:2015laa,Campbell:2016ivq,Caola:2016trd}. This expansion is not valid for high energies, but has been shown to work well below the top-pair production threshold $2m_t$. In fact, Ref.~\cite{Campbell:2016ivq} uses a conformal mapping and Pad\'e approximants to extend the results beyond the top-pair threshold. More recently, it has been demonstrated that using an expansion in $1/m_t$ together with a threshold expansion as inputs for Pad\'e approximants can lead to improved estimates for both $gg \to HH$ and $gg \to VV$ amplitudes~\cite{Grober:2017uho,Grober:2019kuf}. 
In Ref.~\cite{Davies:2020lpf} the massive two-loop amplitude for $gg \to ZZ$ has been computed in the high-energy expansion $s,|t| \gg m_t^2$, which opens the door for an NLO description of this process in the phase space \mbox{$m_{VV} > 2 m_t$}. However, even disregarding these methods, there is a significant region of the offshell phase space with \mbox{$ m_{VV} < 2\,m_t$} in which the $1/m_t$ expansion is expected to be reliable, and hence where a good approximation to the NLO corrections can be obtained. We base the Monte Carlo generator presented here on such an approximation, following the calculation of Ref.~\cite{Caola:2016trd}. In the future, the generator can easily be extended to also cover the region $m_{VV} > 2\,m_t$ by replacing the massive two-loop amplitudes. Reliable NLO corrections to the continuum background $gg \to VV$ alone can be obtained by ignoring heavy-quark contributions (or these can be incorporated via a reweighting of the massless two-loop amplitude with the LO mass dependence). They are available in the literature both for $gg \to ZZ$~\cite{Caola:2015psa,Grazzini:2018owa} and $gg \to W^+W^-$~\cite{Caola:2015rqy,Grazzini:2020stb}.\footnote{The results of Refs.~\cite{Grazzini:2018owa,Grazzini:2020stb} also include the offshell Higgs contribution, however without investigating it explicitly.} Formally these are of $\mathcal{O}(\alpha_S^3)$ with respect to the LO $pp\to VV$ process, i.e. they contribute beyond the order of the known NNLO corrections to the quark-induced channels~\cite{Cascioli:2014yka,Gehrmann:2014fva,Grazzini:2015hta,Grazzini:2016ctr,Heinrich:2017bvg}; yet they yield phenomenologically relevant contributions. The NLO results of Refs.~\cite{Caola:2015psa,Caola:2015rqy,Campbell:2016ivq,Caola:2016trd,Grazzini:2018owa,Grazzini:2020stb} are at fixed-order parton level, meaning that they do not account for radiation beyond one additional jet.
This, together with the fact that unweighted events are not available, hinders the use of these calculations in experimental analyses. In this paper, we report on NLO calculations for offshell Higgs production, including interference effects, matched to parton showers using the \POWHEG{} method~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd,Jezo:2015aia}. The implementation extends earlier work by two of us~\cite{Alioli:2016xab} that considered the background process $gg \to ZZ \to 4\ell$ only. Furthermore, in contrast to Ref.~\cite{Alioli:2016xab}, here we also include the contribution from $qg$- and $q\bar{q}$-initiated channels. This implementation allows the generation of unweighted events with additional radiation included through the parton shower, and should facilitate the use of the NLO calculations in experimental analyses. The corresponding \RES generator \ggfl will be made publicly available in due time. The paper is organized as follows. In Sec.~\ref{sec:computational_setup}, we briefly discuss the technical details involved in the \linebreak parton-level calculation as well as in the matching procedure. In Sec.~\ref{sec:setup}, we summarize the numerical inputs that we use. In Sec.~\ref{sec:fixedorder}, we present fixed-order results validating our calculation and investigate the applied approximations. Finally in Sec.~\ref{sec:nlopsresults} we present numerical results for $ZZ$ and $WW$ production matched to parton showers. We conclude in Sec.~\ref{sec:conclusions_outlook}. \section{Computational setup} \label{sec:computational_setup} In this section, we describe the matching of the NLO calculation of gluon-induced four-lepton production to parton showers through the \POWHEG{} method implemented in \RES. We first describe the structure of the fixed-order NLO computation and then discuss several details relevant for the matching to \PYTHIAn{}. 
\subsection{Structure of the NLO computation} \label{sec:computation} We begin by summarizing the salient features of the NLO calculation, and refer the reader to Ref.~\cite{Caola:2016trd} for additional discussion. As mentioned in the previous section, we need to consider both Higgs-mediated amplitudes $gg \to H^* \to VV$ as well as continuum production $gg \to VV$ amplitudes. We therefore write the full amplitude for gluon-induced $VV$ production as \begin{equation} A = A_{\rm signal} + A_{\rm bkgd} \end{equation} where $A_{\rm signal}$ refers to Higgs-mediated amplitudes, while $A_{\rm bkgd}$ refers to amplitudes without any Higgs propagators. Squaring this equation gives \begin{equation} |A|^2 = |A_{\rm signal}|^2 + |A_{\rm bkgd}|^2 + 2{\rm Re}\left(A_{\rm signal} A_{\rm bkgd}^*\right)\,. \end{equation} Upon integrating over the phase space for the final state particles, the first two terms on the right-hand side give the signal and background results, respectively, while the third term gives the interference contribution \begin{equation} \mathrm{d}\sigma_{\rm full} = \mathrm{d}\sigma_{\rm signal} + \mathrm{d}\sigma_{\rm bkgd} + \mathrm{d}\sigma_{\rm intf}. \end{equation} In Secs.~\ref{sec:fixedorder} and ~\ref{sec:nlopsresults} we will present results for these contributions separately, as well as for their sum $\mathrm{d} \sigma_{\rm full}.$ As mentioned in the previous section, the LO amplitudes for both $A_{\rm signal}$ and $A_{\rm bkgd}$ are well known~\cite{Glover:1988rg,Matsuura:1991pj,Zecher:1994kb,Binoth:2008pr,Kauer:2012hd,Campbell:2013una,Campbell:2013wga}. At NLO, we have to compute the real and virtual corrections to $A_{\rm signal}$ and $A_{\rm bkgd}$. The corrections to $A_{\rm signal}$ have been known for some time~\cite{Ellis:1987xu,Spira:1995rr,Harlander:2005rq,Aglietti:2006tp}. On the other hand, the NLO corrections to the background amplitude $A_{\rm bkgd}$ are more involved, and deserve a separate discussion. 
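As a minimal numerical illustration of the decomposition above (with arbitrary complex numbers standing in for the actual signal and background amplitudes, which are of course functions of the full kinematics), one can verify that the squared total splits into signal, background, and an interference term that carries no definite sign:

```python
# Toy check of |A|^2 = |A_sig|^2 + |A_bkg|^2 + 2 Re(A_sig A_bkg*).
# The values below are hypothetical placeholders, not physical amplitudes.
a_sig = 0.3 + 0.4j     # stand-in for the Higgs-mediated amplitude
a_bkg = -0.5 + 0.2j    # stand-in for the continuum background amplitude

full = abs(a_sig + a_bkg) ** 2
sig = abs(a_sig) ** 2
bkg = abs(a_bkg) ** 2
intf = 2.0 * (a_sig * a_bkg.conjugate()).real

assert abs(full - (sig + bkg + intf)) < 1e-12
print(sig, bkg, intf)  # here the interference piece happens to be negative
```

This is why the interference contribution $\mathrm{d}\sigma_{\rm intf}$, unlike the signal and background pieces, need not be positive, a point that becomes relevant for the event generation discussed below.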
We begin by examining the virtual corrections to the $gg \to ZZ$ process. In this case, one can clearly separate massless loops of the first five flavours, and massive top-quark loops. The virtual (two-loop) amplitudes for the former are known~\cite{vonManteuffel:2015msa,Caola:2015ila}, and we construct these using the {\tt ggVVamp} library~\cite{vonManteuffel:2015msa}. Results for two-loop amplitudes with massive quarks were presented very recently~\cite{Agarwal:2020dye, Bronnum-Hansen:2021olh}. However, here we follow the approach of Refs.~\cite{Melnikov:2015laa,Caola:2016trd} and use an expansion in $1/m_t$ for the massive amplitudes. This implies that our NLO results for the $ZZ$ production process are only valid below the top pair production threshold $m_{ZZ} < 2m_t$. Finally, we need to include double-triangle amplitudes, where each triangle can have either massless or massive quarks in the loop. We employ analytic results for these amplitudes taken from Refs.~\cite{Hagiwara:1990dx,Campbell:2007ev}. \begin{figure}[t!] \begin{center} \begin{tabular}{cccc} \includegraphics[scale=0.21]{figures/ggVVg.pdf} & & \includegraphics[scale=0.21]{figures/qgVVq.pdf}\\[0ex] \includegraphics[scale=0.21]{figures/qqbVVg.pdf} & & \includegraphics[scale=0.21]{figures/qqbVVg2.pdf}\\[0ex] \end{tabular} \end{center} \caption{\label{fig:diagrams}% Real corrections to gluon-induced $VV$ production, with different partonic channels.} \end{figure} We now discuss the case of $WW$ production. Since top and bottom quarks mix in the loop, there is no longer a clear division into massive and massless loops. For this reason, Ref.~\cite{Caola:2016trd} only considered four massless quark flavours in the loop for $WW$, neglecting the third generation entirely. Here we take a slightly different approach. We compute the two-loop amplitudes $A_{\rm bkgd}^{\rm 2loop}$ using {\tt ggVVamp} assuming massless quarks for four active flavours. 
We then reweight the amplitudes as follows \begin{align} \label{eq:rwgt} A_{\rm bkgd}^{\rm 2loop, rwgt} = A_{\rm bkgd}^{\rm 2loop} (u,d,s,c)\frac{|A_{\rm bkgd}^{\rm 1loop} (u,d,s,c,t,b)|^2}{|A_{\rm bkgd}^{\rm 1loop} (u,d,s,c)|^2}\, \end{align} where $A_{\rm bkgd}^{\rm 1loop} (u,d,s,c,t,b)$ is the one-loop amplitude with full mass dependence for the third-generation quarks and $A_{\rm bkgd}^{\rm 1loop} (u,d,s,c)$ is the one-loop amplitude with four active flavours. \footnote{We note that results for the two-loop $gg \to WW$ amplitudes with massive top quarks in the loop were recently presented in Ref.~\cite{Bronnum-Hansen:2020mzk}.} We will comment on the accuracy of this approach in Sec.~\ref{sec:masseffects}. The real corrections to $gg \to VV$ include both the purely gluonic channel $gg \to VV + g$ as well as channels with initial state quarks $qg \to VV+q$ and $q\bar{q} \to VV+g $ (see Fig.~\ref{fig:diagrams}) and their crossings. At $\mathcal{O}(\alpha_s^3)$, the former can be unambiguously identified as corrections to the loop-induced process $gg \to VV$. The $qg$ and $q\bar{q}$ channels are more intricate. These channels also appear in the $\mathcal{O}(\alpha_s^3)$ corrections to the $q\bar{q} \to VV$ process, and it is not possible to parametrically distinguish these corrections from the corrections to the loop-induced process that we are interested in. For this reason, these channels were not included in Ref.~\cite{Caola:2016trd}. On the other hand, there is no obstacle to computing these corrections, as they form a gauge invariant subset and their only infrared singularities are removed through the collinear renormalization of the parton distribution functions. Indeed, results for these channels were included in Refs.~\cite{Grazzini:2018owa,Grazzini:2020stb}. In this paper, we choose to include these channels in our nominal predictions at NLO and also in the results after matching to the parton shower. 
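The reweighting of Eq.~(\ref{eq:rwgt}) can be sketched as follows; the complex values are hypothetical placeholders, since the actual amplitudes depend on the full phase-space point:

```python
# Sketch of the mass-improved reweighting: the four-flavour two-loop
# amplitude is rescaled by the ratio of one-loop squared amplitudes with and
# without the third generation. Numerical values are illustrative only.
def reweight_two_loop(a2l_4f, a1l_6f, a1l_4f):
    """Return the reweighted two-loop amplitude of Eq. (rwgt)."""
    return a2l_4f * abs(a1l_6f) ** 2 / abs(a1l_4f) ** 2

a2l = 1.0 + 0.5j       # hypothetical two-loop value (u,d,s,c)
a1l_full = 2.2 + 0.1j  # hypothetical one-loop value including (t,b)
a1l_4f = 2.0 + 0.0j    # hypothetical one-loop value, four flavours

print(reweight_two_loop(a2l, a1l_full, a1l_4f))
```

Note that the ratio is real and positive, so the reweighting rescales the magnitude of the two-loop amplitude while leaving its phase untouched.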
At the same time we will also investigate the impact of these channels in Sec.~\ref{sec:qgchannel}, so that both their magnitude and their impact on the scale variation can be properly assessed. Note that we define these contributions to include any amplitudes with \textit{at least one} vector boson attached to a closed fermion loop. In particular, amplitudes such as those represented in Fig.~\ref{fig:diagrams}(d) (contributing to $gg\to ZZ$) are included. \footnote{In our code, the user can choose to switch off amplitudes with one vector boson attached to an external quark line using the {\tt ol\_noexternalvqq} flag. The user can also turn off the $qg$ and $q\bar{q}$ channels altogether with the {\tt select\_real} flag.} We compute all the real correction loop-squared amplitudes using {\tt OpenLoops 2}~\cite{Cascioli:2011va,Buccioni:2019sur} including massless and massive quark contributions in the loop, allowing us to retain the full dependence on the top quark mass (and bottom quark mass where applicable). This is in contrast to the approach of Ref.~\cite{Caola:2016trd} where the real amplitudes for $gg \to ZZ+g$ involving a top-quark loop were computed using an expansion in $1/m_t$. Finally, we note that our calculation includes single-resonant amplitudes in all partonic channels. In order to ensure numerical stability across the whole phase-space, including in the IR regions close to the soft or collinear limits of the real radiation, we rely on the {\tt OpenLoops} stability system, which automatically reevaluates all phase-space points with the two reduction methods implemented in {\tt Collier}~\cite{Denner:2016kdg}. For unstable points matrix elements are set to zero. We verified that varying the corresponding threshold \texttt{stability\_kill2} by a factor of 10 around a central value of $10^{-2}$ leaves all results unchanged (see Ref.~\cite{Buccioni:2019sur} for documentation of this threshold parameter). 
In order to optimize the treatment of all the colorless resonances, we take advantage of the \RES framework~\cite{Jezo:2015aia}, which, despite being specifically designed to handle the subtractions when intermediate colored resonances are present, can at the same time improve the phase-space sampling of any resonance. This is achieved by first manually specifying the resonance histories.\footnote{The automatic resonance-finding algorithm in \RES is not yet able to handle resonances in loop-induced contributions.} \RES then decomposes the cross section into contributions associated with a well-defined resonance structure, each enhanced along its particular cascade chain. Each contribution is then integrated separately with a dedicated resonance-aware phase-space sampling, combined with a resonance-aware subtraction procedure. The latter makes use of a mapping from the real to the underlying Born configuration that preserves the virtuality of the intermediate resonances. Since the resonances develop no QCD divergences, the resonance-aware subtraction is, strictly speaking, not necessary for the processes considered here. However, we adopt it because it improves the statistical errors for observables directly probing the resonance structure. The last essential feature of the \RES implementation is the ability to generate remnant and regular events even when the corresponding cross section is negative, which was not possible in previous versions of the \POWHEGBOX, which instead assumed them to be positive. Although remnant and regular contributions are usually squares of matrix elements, in this process they may indeed become negative in the calculation of the interference terms. Technical details about the necessary modifications in \RES to deal with the processes at hand are given in~\ref{sec:mod}.
\subsection{Matching to \PYTHIAn{}} \label{sec:matching} We next discuss the matching of the NLO calculation of $gg \to VV$ to the \PYTHIAn{} parton shower in the framework of \RES. The resonance structure constructed at the partonic level is preserved by the parton shower by specifying the input resonance cascade chain at the Les Houches event (LHE) level and by making sure that the shower does not distort it through recoil effects. This is achieved by using the {\tt PowhegHooks} class in \PYTHIAn{}. However, the {\tt PowhegHooks} class needs to know the number of final-state particles involved in the LO process once the resonance decays are stripped. Since in \RES{} this number is not fixed, we modified the {\tt PowhegHooks} class accordingly, following the recipe adopted in Ref.~\cite{FerrarioRavasio:2019vmq}. The \PYTHIAn{} parton shower implements two recoil schemes for initial-state radiation~(ISR): in both cases the recoil is always applied to all final-state particles to absorb the transverse-momentum imbalance due to ISR off an initial-initial dipole. In the default scheme~\cite{Sjostrand:2004ef} the same is also done for initial-final dipoles. There is, however, also the option to use a fully local scheme~\cite{Cabouat:2017rzi}, in which, when an initial-state emission takes place from an initial-final dipole, the final-state spectator absorbs the transverse-momentum recoil and the other particles in the event are left unchanged. The default recoil scheme is the recommended option to handle the $s$-channel production of colour singlets, while the alternative one was originally designed to handle deep inelastic scattering and vector boson fusion events. Since at LO the process considered in this paper describes the production of a colour singlet, we maintain as our baseline the default recoil scheme of \PYTHIAn{}.
However, since in principle we can also generate events with a hard final-state jet, in our numerical results we compare the two recoil prescriptions for exclusive observables that might be sensitive to this choice. In all our showered predictions we include underlying-event simulation and hadronization effects. However, in order to simplify the identification of the leptons, we turn off QED radiation and the decay of unstable hadrons. \section{Numerical setup} \label{sec:setup} In this section we present the numerical inputs for the results presented in the following sections. Coupling and mass input parameters are fixed to the following values: \begin{align*} m_{\mathrm{Z}}&=91.1876~\GeV\;, & \Gamma_{\mathrm{Z}}&=2.4952~\GeV\;,\\ m_{\mathrm{W}}&=80.3980~\GeV\;, & \Gamma_{\mathrm{W}}&=2.1054~\GeV\;,\\ m_{\mathrm{H}}&=125.1~\GeV\;, & \Gamma_{\mathrm{H}}&=4.03\cdot10^{-3}~\GeV\;,\\ m_{\mathrm{t}}&=173.2~\GeV\;, & G_{F}&=1.16639 \cdot 10^{-5}\,{\rm GeV}^{-2}\,, \end{align*} where $G_{F}$ denotes the Fermi constant and the electromagnetic coupling is fixed in the $G_\mu$ scheme, \begin{align*} \alpha=\frac{\sqrt{2}}{\pi}\,G_{F}\, m_W^2 \sin^2\theta_W\,, \end{align*} with the real-valued weak mixing angle \begin{align*} \sin^2\theta_W=1-\frac{m_W^2}{m_Z^2}\,. \end{align*} In general, the \ggfl generator allows for finite bottom-quark masses. In this work, we mostly use the $N_F=5$ flavour scheme and treat the bottom quark as massless ($m_{\mathrm{b}}=0~\GeV$). Only in Sec.~\ref{sec:validation}, where we validate against the results of Ref.~\cite{Caola:2016trd}, do we use a non-zero bottom-quark mass, and there we choose $m_{\mathrm{b}}=4.5~\GeV$. We use the partonic luminosities and strong coupling from the \texttt{NNPDF30\_lo\_as\_0130} and the \linebreak \texttt{NNPDF30\_nlo\_as\_0118} sets~\cite{Ball:2014uwa} for the validation against the results of Ref.~\cite{Caola:2016trd} that we present in Sec.~\ref{sec:validation}. For all other results, we use the \texttt{NNPDF31\_nlo\_as\_0118} set~\cite{Ball:2017nwa}. 
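As a cross-check of the electroweak inputs, the derived couplings can be reproduced in a few lines (our own sketch, assuming the usual $G_\mu$-scheme relation $\alpha = \sqrt{2}\, G_F\, m_W^2 \sin^2\theta_W/\pi$):

```python
import math

# Electroweak inputs from the text (GeV, GeV^-2).
m_Z, m_W = 91.1876, 80.3980
G_F = 1.16639e-5

# Real-valued weak mixing angle and G_mu-scheme electromagnetic coupling
# (the scheme relation is our assumption for this illustration).
sin2_thetaW = 1.0 - m_W ** 2 / m_Z ** 2
alpha = math.sqrt(2.0) / math.pi * G_F * m_W ** 2 * sin2_thetaW
```

With these inputs one finds $\sin^2\theta_W \approx 0.2226$ and $\alpha \approx 1/132$.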
We consider a center-of-mass energy of 13~\TeV, and set as renormalization and factorization scales for all modes \begin{align} \mu=\mu_{R}=\mu_{F}&=\frac{\mfl}{2}\,, \end{align} where \begin{align} \mfl^{2}=\left(\sum_{i \in \{\ell, \nu\}} p_{i}\right)^{2}\,. \end{align} We obtain scale-uncertainty bands by independently varying the renormalization and factorization scales by a factor of two and omitting the two antipodal variations, i.e.\ a $7$-point variation. At the generator level the following kinematic cuts are applied in the $ZZ$ channel, \begin{align} 5~\GeV&<\mll<180~\GeV,\label{eq:cut_mll}\\ 70~\GeV&<\mfl<340~\GeV\label{eq:cut_m4l}\,. \end{align} We need to impose such an upper cut on $\mfl$ because, as discussed in the previous sections, the virtual corrections are computed using a $1/m_t$ expansion which is no longer valid for large values of $\mfl$~\cite{Caola:2016trd}. For $WW$ production we only require \begin{equation} \mtltn >1~\GeV, \end{equation} to ensure that the renormalization and factorization scales remain inside the perturbative domain. We do not impose any transverse-momentum or rapidity requirements on the final-state leptons. In order to avoid numerical instabilities of the loop-induced amplitudes we need to impose additional mild technical cuts at the generation level. For the Born kinematics, we discard configurations where the transverse momentum of the vector boson is smaller than 100~MeV. For the real corrections, we neglect configurations where the transverse momentum of the radiated parton is smaller than 100~MeV, as also done in Ref.~\cite{Alioli:2016xab}. This region only gives rise to power-suppressed contributions that do not significantly change the total cross section. We verified that our results are independent of these technical cuts by varying them by a factor of five, from 0.1~\GeV{} to 0.5~\GeV. 
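The envelope construction can be sketched as follows (our own illustration; the cross-section values are placeholders supplied by the user):

```python
# The seven (muR, muF) variation factors: central, up/down in each scale,
# and the correlated pair -- the antipodal pairs (2, 1/2), (1/2, 2) are omitted.
SEVEN_POINT = [(1.0, 1.0), (2.0, 1.0), (0.5, 1.0),
               (1.0, 2.0), (1.0, 0.5), (2.0, 2.0), (0.5, 0.5)]


def scale_band(xsec):
    """xsec maps (kR, kF) -> cross section; returns the (lower, upper) envelope."""
    values = [xsec[k] for k in SEVEN_POINT]
    return min(values), max(values)
```

The band quoted on a distribution bin is then simply the minimum and maximum over these seven evaluations.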
Finally, we reconstruct jets with the anti-$\kt$ algorithm~\cite{Cacciari:2008gp} as implemented in the \FASTJET{} package~\cite{Cacciari:2005hq, Cacciari:2011ma}, with jet radius $R=0.4$ and $p_{T,j} > 20$\,GeV. \section{Fixed-order NLO results} \label{sec:fixedorder} In the following we present selected fixed-order results that we used to validate our implementation and to investigate the accuracy of the applied approximation for the treatment of mass effects in the virtual corrections. \subsection{Validation} \label{sec:validation} \begin{table}[t!] \centering \small \begin{tabular}{c| c | c || c | c } \multicolumn{5}{c}{ $ZZ$: $gg \to e^{+} e^{-} \mu^{+} \mu^{-}$}\\ \hline &\multicolumn{2}{|c||}{ \RES} &\multicolumn{2}{|c}{ Ref.~\cite{Caola:2016trd} } \\ \hline contrib &LO [fb] &NLO [fb] &LO [fb] &NLO [fb]\phantom{\Big|} \\ \hline \back & 2.898(1) & 4.482(6) & 2.90(1) & 4.49(1) \\ \signal & 0.0431(1) & 0.0745(2) & 0.043(1) & 0.074(1) \\ \interf &-0.1542(3) &-0.2870(4) &-0.154(1) &-0.287(1) \\ \end{tabular} \qquad \begin{tabular}{c| c | c || c | c } \hline \multicolumn{5}{c}{ $W^+W^-$: $gg \to e^{+} \nu_e \mu^{-} \bar\nu_{\mu}$}\\ \hline &\multicolumn{2}{|c||}{ \RES} &\multicolumn{2}{|c}{ Ref.~\cite{Caola:2016trd} } \\ \hline contrib &LO [fb] &NLO [fb] &LO [fb] &NLO [fb]\phantom{\Big|} \\ \hline \back & 48.92(6) &74.62(7) & 49.0(1) &74.7(1)\\ \signal & 48.24(8) &83.31(5) &48.3(1) &83.35(2)\\ \interf & -2.24(1) &-4.20(2) &-2.24(1) &-4.15(1)\\ \end{tabular} \caption{Comparison of LO and NLO cross sections for the signal, background, and interference contributions to $gg \to ZZ\to e^{+} e^{-} \mu^{+} \mu^{-}$ (top) and $gg \to W^+W^-\to e^{+} \nu_e \mu^{-} \bar\nu_{\mu}$ (bottom) with those of Ref.~\cite{Caola:2016trd}. 
The $qg$- or $q\bar q$-induced channels are not considered.} \label{table:total_XS} \end{table} As a validation of our implementation, we compare the fixed-order LO and NLO cross sections for $gg \to ZZ\to e^{+} e^{-} \mu^{+} \mu^{-}$ and $gg \to W^+W^-\to e^{+} \nu_e \mu^{-} \bar\nu_{\mu}$ against the results of Ref.~\cite{Caola:2016trd} in Tab.~\ref{table:total_XS}, with selection cuts as specified in Ref.~\cite{Caola:2016trd}. The signal, background and interference contributions are shown separately. Following the approach of Ref.~\cite{Caola:2016trd}, a finite bottom-quark mass $m_b$ is used everywhere in the signal ($|A_{\rm signal}|^2$). For the $ZZ$ channel, the background ($|A_{\rm bkgd}|^2$) is computed with $m_b=0$ and the interference ($2\,{\rm Re}(A_{\rm signal}A_{\rm bkgd}^*)$) is computed with a finite $m_b$, except for the virtual contribution, which is computed analytically in a mixed-mass scheme, where the bottom mass is neglected in the background amplitude $A_{\rm bkgd}$.\footnote{In contrast to the calculation in Ref.~\cite{Caola:2016trd}, we cannot easily use a different bottom mass value for $A_{\rm bkgd}$ and $A_{\rm signal}$ when evaluating the interference using the Born and real matrix elements provided by \OL.} For the $WW$ channel, $A_{\rm bkgd}$ is evaluated with $n_f=4$, for both the background and the interference contributions. For this validation we do not consider quark-initiated channels in the real contribution, consistent with Ref.~\cite{Caola:2016trd}. Within numerical accuracy we find convincing agreement between the two calculations at LO and NLO. \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/WW-NLO-rwgt-crop.pdf} \caption{Differential distribution in the transverse mass $m_{T}$ of the four-lepton system in $gg \to e^{+} \nu_e \mu^{-} \bar\nu_{\mu} $ at fixed-order NLO. 
We show results with and without reweighting of the heavy-quark mass effects in the virtual amplitude using dashed and dashed-dotted curves, respectively. The comparison is shown for the full (red), the background (blue) and the interference (orange) contributions. The lower panels show the bin-by-bin ratios of the results without reweighting to those with reweighting. For the nominal prediction we use a symlog scale with a linear threshold of $10^{-7}$. } \label{fig:WW-NLO-rwgt} \end{figure} \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/ZZ-ZZmass-crop.pdf} \caption{Differential distribution in the invariant mass $m_{4\ell}$ of the four-lepton system in $gg \to e^{+} e^{-} \mu^{+} \mu^{-}$ at NLO matched to \PYTHIAn{}. The upper panel shows nominal predictions at fixed-order NLO (dashed) for the background (blue), the signal (green) and the interference (orange) separately and their sum (red), together with NLO+PS predictions (solid). For the nominal prediction we use a symlog scale with a linear threshold of $10^{-6}$. The first ratio plot shows the relative yield of the different contributions with respect to the full result, both at fixed-order NLO and after parton shower. The lower four ratio plots show the LHE-level (dotted) and fully showered corrections with respect to fixed-order NLO for the sum of all contributions (second ratio plot), the background only (third ratio plot), the signal only (fourth ratio plot), and the interference only (fifth ratio plot). The band associated with the NLO+PS predictions indicates the $7$-point scale-variation uncertainty.} \label{fig:ZZ-ZZmass} \end{figure} \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/ZZ-HTTOT-crop.pdf} \caption{Differential distribution in $H_{\rm T}$ in $gg \to e^{+} e^{-} \mu^{+} \mu^{-}$ at NLO matched to \PYTHIAn{}. 
Predictions, colour coding and bands as in Fig.~\ref{fig:ZZ-ZZmass}.} \label{fig:ZZ-HTtot} \end{figure} \subsection{Mass effects} \label{sec:masseffects} As discussed in Sec.~\ref{sec:computation}, the massive contributions to the two-loop virtual amplitudes are incorporated via approximations in our calculation. All other ingredients, including the real amplitudes, retain full mass dependence. For the $ZZ$ process the approximation for the massive two-loop amplitudes is based on an expansion in $1/m_t$. As discussed in detail in Ref.~\cite{Caola:2016trd}, the resulting accuracy is estimated to be at the percent level for $m_{ZZ} < 2m_t$ and quickly deteriorates beyond this. For the $WW$ process we reweight the massless two-loop amplitudes as detailed in Eq.~\ref{eq:rwgt}. In order to gauge the impact of this reweighting and thus the accuracy of the applied approximation, we compare against results obtained using only two massless generations for the two-loop $A_{\rm bkgd}$ amplitudes, while all other contributions are computed using full mass dependence as usual. We plot these results for the transverse mass $m_{T}$ of the four-lepton system in $W^+W^-$ production in Fig.~\ref{fig:WW-NLO-rwgt}. This observable is defined as \begin{align} \label{eq:mt} m_{T} = \sqrt{2\, E_{\rm T,miss}\, p_{\rm T,\ell^+\ell^-} \,\left(1-\cos \phi_{\rm miss, \ell^+\ell^-}\right)} \,, \end{align} where the missing transverse energy $E_{\rm T,miss}$ is given by the neutrino momenta at truth level and $\phi_{\rm miss, \ell^+\ell^-}$ is the angle between the sum of the neutrino momenta and the sum of the lepton momenta. For the background contribution the effect of the reweighting is at the few percent level for the bulk of the cross section and increases up to about $15\%$-$20\%$ at large transverse masses. For the interference the impact is at the $15\%$ level inclusively and mildly increases in the tail of the transverse mass distribution. 
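For concreteness, the observable can be evaluated from transverse-momentum vectors as follows (our own sketch; inputs are $(p_x,p_y)$ pairs for the charged leptons and, at truth level, the neutrinos):

```python
import math


def transverse_mass(leptons, neutrinos):
    """m_T of the 2l2nu system; the vector sum of the neutrino transverse
    momenta defines E_T,miss, that of the charged leptons defines p_T,l+l-."""
    lx = sum(p[0] for p in leptons)
    ly = sum(p[1] for p in leptons)
    nx = sum(p[0] for p in neutrinos)
    ny = sum(p[1] for p in neutrinos)
    pt_ll = math.hypot(lx, ly)
    et_miss = math.hypot(nx, ny)
    # Angle between the missing-momentum and dilepton transverse vectors.
    cos_dphi = (lx * nx + ly * ny) / (pt_ll * et_miss)
    return math.sqrt(2.0 * et_miss * pt_ll * (1.0 - cos_dphi))
```

For a back-to-back configuration ($\cos\phi = -1$) this reduces to $m_T = 2\sqrt{E_{\rm T,miss}\, p_{\rm T,\ell^+\ell^-}}$.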
The sum of all contributions -- which also includes the signal where no approximations are needed -- receives inclusive variations due to the reweighting procedure of $7\%$. In the tail this increases to $10\%$-$15\%$. \section{NLO results matched to parton showers} \label{sec:nlopsresults} In this section we present our numerical results matched to the \PYTHIAn{} parton shower. We consider the different-flavour decay modes $gg \to e^{+} e^{-} \mu^{+} \mu^{-}$ and \linebreak $gg \to e^+ \nu_e \mu^{-} \bar\nu_{\mu}$ and for simplicity denote them $ZZ$ and $W^+W^-$ production respectively. The same-flavour leptonic decay modes will also be made available in the \ggfl generator, but are not the focus of this study. \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/ZZ-pt_ZZ-crop.pdf} \caption{Differential distribution in the transverse momentum of the four lepton system $p_{\rm T,4\ell}$ in $gg \to e^{+} e^{-} \mu^{+} \mu^{-}$ matched to \PYTHIAn{}. Predictions, colour coding and bands as in Fig.~\ref{fig:ZZ-ZZmass}. } \label{fig:ZZ-pTZZ} \end{figure} \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/ZZ-dpt_jnocut-crop.pdf} \caption{Differential distribution in the transverse momentum of the hardest jet $p_{\rm T,j_1}$ in $gg \to e^{+} e^{-} \mu^{+} \mu^{-}$ at NLO matched to \PYTHIAn{}. Predictions, colour coding and bands as in Fig.~\ref{fig:ZZ-ZZmass}.} \label{fig:ZZ-pTj} \end{figure} \subsection{$ZZ$ production} \label{sec:nlopsresultsZZ} In Figs.~\ref{fig:ZZ-ZZmass}-\ref{fig:ZZ-pTj} we present numerical results at NLO, LHE level and NLO matched to \PYTHIAn{} (NLO+PS) for gluon-induced $ZZ$ production, showing the full result as well as the signal, background and interference contributions separately. In Fig.~\ref{fig:ZZ-ZZmass} the invariant mass of the four-lepton system is shown. The Higgs-mediated signal shows the resonance peak at the Higgs boson mass together with the well-known significant offshell tail. 
The background clearly exhibits a single-resonant peak at $m_{4\ell}=m_Z$\footnote{Interestingly, this single-resonant peak is missing at LO due to the vanishing of the corresponding amplitudes, induced by the triangle anomaly.} and increases significantly for $m_{4\ell}> 2m_Z$, where both intermediate $Z$ bosons can become onshell. In this region the interference also starts to become relevant. As a consequence of the very inclusive phase-space cuts employed in our numerical analysis, both the signal and the interference reach about 10\% of the full result at large $m_{4\ell} \approx 2m_t$, with the interference being destructive. It is well known that the interference provides an even larger destructive contribution at higher values of $m_{4\ell}$, which are however beyond the validity of the $1/m_t$ expansion used in our calculation. The $m_{4\ell}$ observable is inclusive in QCD radiation and consequently parton-shower corrections are marginal for all contributions (individually and in their sum). In fact, for all production modes the fixed-order NLO prediction agrees at the percent level with both the LHE-level prediction and the fully showered prediction. Scale uncertainties at the fully showered level are approximately 20\%. At small invariant masses ($m_{4\ell}<150~\GeV$) the interference becomes very small and consequently the Monte Carlo statistics deteriorate quickly in this regime. Fig.~\ref{fig:ZZ-HTtot} shows the distribution in \begin{align} H_T=\sum\limits_{i \in \{ \ell,\nu,j\}} p_{\rm T,i} \,, \end{align} where the sum over the transverse momenta runs over all leptons, neutrinos and reconstructed jets. In this distribution the signal peaks at $H_T=m_H$, while the background peaks at $H_T=2 m_Z$. For small $H_T$ parton-shower corrections are mostly driven by the first radiation already present at the LHE level. For the background contribution, these corrections are small, but for the signal contribution they lead to a negative correction of about 50\%. 
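The observable is straightforward to evaluate from the object momenta; a minimal sketch (ours), with each object given as a $(p_x,p_y)$ pair:

```python
import math


def h_t(leptons, neutrinos, jets):
    """Scalar sum of transverse momenta over leptons, neutrinos and
    reconstructed jets, H_T = sum_i p_T,i."""
    return sum(math.hypot(px, py) for px, py in leptons + neutrinos + jets)
```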
A possible explanation is that the signal distribution is strongly peaked around $m_H$ and therefore very sensitive to additional radiation that moves events away from the peak. For large $H_T$, the parton shower provides substantial positive corrections of up to a factor of 2, while the scale uncertainties can be as large as 50\%. This effect can be understood as follows. The upper cut on the invariant mass of the four leptons in Eq.~\ref{eq:cut_m4l} also restricts $H_T < 340~\GeV$ at LO. However, the phase space for $H_T > 340~\GeV$ can be filled via additional QCD radiation. This leads to significant NLO corrections (not shown here), as well as to sizable parton-shower corrections and LO-like scale uncertainties. \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/WW-mt_WW-crop.pdf} \caption{Differential distribution in the transverse mass $m_{T}$ of the four-lepton system in $gg \to e^{+} \nu_e \mu^{-} \bar\nu_{\mu} $ at NLO matched to \PYTHIAn{}. Predictions, colour coding and bands as in Fig.~\ref{fig:ZZ-ZZmass}.} \label{fig:WW-mT} \end{figure} \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/WW-HTTOT-crop.pdf} \caption{Differential distribution in $H_{\rm T}$ in $gg \to e^{+} \nu_e \mu^{-} \bar\nu_{\mu} $ at NLO matched to \PYTHIAn{}. Predictions, colour coding and bands as in Fig.~\ref{fig:ZZ-ZZmass}.} \label{fig:WW-HTtot} \end{figure} Figs.~\ref{fig:ZZ-pTZZ} and \ref{fig:ZZ-pTj} display the transverse momentum of the four-lepton system and of the hardest jet, respectively. For the latter no lower cut on the jet transverse momentum is applied. The two distributions are identical at fixed order (they differ only in the first bin, which for $p_{\rm T,4\ell}$ includes the Born and virtual contributions proportional to $\delta(p_{\rm T,4\ell})$). The fully showered predictions include a Sudakov suppression which can clearly be seen at the lower end of both the $p_{\rm T,4\ell}$ and the $p_{\rm T,j_1}$ distributions. 
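The $p_{\rm T,j_1}$ observable relies on the anti-$k_t$ clustering specified in Sec.~\ref{sec:setup}; for reference, here is a minimal, unoptimized sketch of that algorithm with our $R=0.4$ and $p_{\rm T,j}>20$~GeV choices (our own illustration; a real analysis would of course use \FASTJET{} itself):

```python
import math


def antikt(particles, R=0.4, pt_min=20.0):
    """Anti-kt clustering. particles: list of (px, py, pz, E) tuples
    (massless or massive, but with E > |pz|). Returns jet 4-momenta
    passing the pt_min cut."""
    ps = [list(p) for p in particles]
    jets = []

    def pt2(p):
        return p[0] ** 2 + p[1] ** 2

    def rap(p):
        return 0.5 * math.log((p[3] + p[2]) / (p[3] - p[2]))

    def phi(p):
        return math.atan2(p[1], p[0])

    while ps:
        best, merge = None, None
        for i, pi in enumerate(ps):
            diB = 1.0 / pt2(pi)  # beam distance d_iB = pt_i^-2
            if best is None or diB < best:
                best, merge = diB, (i, None)
            for j in range(i + 1, len(ps)):
                pj = ps[j]
                dphi = abs(phi(pi) - phi(pj))
                if dphi > math.pi:
                    dphi = 2.0 * math.pi - dphi
                dR2 = (rap(pi) - rap(pj)) ** 2 + dphi ** 2
                # anti-kt distance d_ij = min(pt_i^-2, pt_j^-2) dR^2 / R^2
                dij = min(1.0 / pt2(pi), 1.0 / pt2(pj)) * dR2 / R ** 2
                if dij < best:
                    best, merge = dij, (i, j)
        i, j = merge
        if j is None:
            jets.append(ps.pop(i))  # smallest distance is d_iB: promote to jet
        else:
            ps[i] = [a + b for a, b in zip(ps[i], ps[j])]  # recombine i and j
            del ps[j]
    return [jet for jet in jets if math.sqrt(pt2(jet)) > pt_min]
```

Hard particles accrete soft neighbours within $\Delta R < R$, which is what makes the leading jet insensitive to additional soft shower emissions.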
We also observe that the parton shower changes the sign of the lowest bin in the $p_{\rm T,4\ell}$ spectrum. This can be understood as follows: the virtual contribution, proportional to $\delta(p_{\rm T,4\ell})$, always comes with the opposite sign of the corresponding real contribution. After the shower (and even after the first \POWHEG{} emission) the virtual contribution is spread out to finite values of $p_{\rm T,4\ell}$. This results in a change of sign in the first bin. Turning now to the opposite end of the spectrum, the $p_{\rm T,4\ell}$ distribution corresponds to the entire QCD recoil of the four-lepton system and for all contributions receives large parton-shower corrections in the tail, while LHE-level corrections are largest at $p_{\rm T,4\ell}\approx 40-50\,\GeV$ and become small in the tail, where the Sudakov suppression fades away. As already discussed in Ref.~\cite{Alioli:2016xab}, the large parton-shower corrections can be explained by the fact that, by adding further radiation, the shower increases the transverse momentum of the colour-neutral four-lepton system, which has to recoil against the sum of all emitted particles. By contrast, in the tail of $p_{\rm T,j_1}$ no such enhancement of the corrections due to the parton shower is observed. In fact, by construction the shower emissions are subdominant with respect to the leading jet and on average are separated enough not to be clustered with it. With respect to the LHE level we observe small and negative parton-shower corrections, which are compatible with it within scale uncertainties. \subsection{$W^+W^-$ production} In Figs.~\ref{fig:WW-mT}-\ref{fig:WW-pTj} we present numerical results at NLO, LHE level and NLO matched to \PYTHIAn{} for gluon-induced $W^+W^-$ production, showing again the signal, background, interference, and full results. 
In contrast to the corresponding results for $ZZ$ production, here we consider the distribution in the transverse mass $m_{T}$ of the four-lepton system, as defined in Eq.~\ref{eq:mt}, instead of the invariant mass of the colour-singlet system. This is shown in Fig.~\ref{fig:WW-mT}. As for the invariant mass in $ZZ$ production, the impact of the parton-shower corrections on the transverse mass in $W^+W^-$ production is marginal, as expected from its inclusive (with respect to QCD radiation) nature. It is noteworthy that the interference becomes very large at high $m_T$ and eventually contributes beyond $-50\%$ for $m_{T}>300\,\GeV$. However, also for the interference alone, parton-shower corrections are marginal over the entire $m_T$ range considered. A similarly strong enhancement of the interference can also be observed at large $H_{\rm T}$, as shown in Fig.~\ref{fig:WW-HTtot}. In the tail of this observable, parton-shower corrections are again sizable. However, in contrast to $ZZ$ production, here no upper cut on the four-lepton invariant mass is applied, and the parton-shower corrections level off for large $H_{\rm T}$ at around $50\%$ for the background and $70\%$ for the full result. \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/WW-pt_WW-crop.pdf} \caption{Differential distribution in the transverse momentum of the four-lepton system $p_{\rm T,2\ell 2\nu}$ in $gg \to e^{+} \nu_e \mu^{-} \bar\nu_{\mu} $ at NLO matched to \PYTHIAn{}. Predictions, colour coding and bands as in Fig.~\ref{fig:ZZ-ZZmass}.} \label{fig:WW-pTWW} \end{figure} \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/WW-dpt_jnocut-crop.pdf} \caption{Differential distribution in the transverse momentum of the hardest jet $p_{\rm T,j_1}$ in $gg \to e^{+} \nu_e \mu^{-} \bar\nu_{\mu} $ at NLO matched to \PYTHIAn{}. 
Predictions, colour coding and bands as in Fig.~\ref{fig:ZZ-ZZmass}.} \label{fig:WW-pTj} \end{figure} We finally consider the QCD recoil for $W^+W^-$ production in Fig.~\ref{fig:WW-pTWW} and the transverse-momentum distribution of the hardest jet in Fig.~\ref{fig:WW-pTj}. We observe similar behaviour to that in $ZZ$ production: the anticipated Sudakov suppression at the low end of both spectra, very large parton-shower corrections in the tail of $p_{\rm T,2\ell 2\nu}$, and mild corrections in the entire $p_{\rm T,j_1}$ spectrum. \subsection{Shower recoil scheme} \label{sec:recoil} As discussed in Sec.~\ref{sec:matching}, \PYTHIAn{} implements two alternative shower recoil schemes: the default scheme, in which the transverse-momentum imbalance after an initial-final dipole emission is democratically distributed among all final-state particles, including the four-lepton system, and a fully local scheme, in which the recoil is entirely absorbed by the coloured spectator.\footnote{This is activated by the \PYTHIAn{} setting \texttt{SpaceShower:dipoleRecoil = on}.} In Fig.~\ref{fig:py8recoil} we compare these two schemes considering the transverse momentum of the four-lepton system in the background contribution to $ZZ$ production. As already anticipated in Sec.~\ref{sec:nlopsresultsZZ}, the default recoil scheme leads to a very hard spectrum in the tail (with a 50\% increase with respect to the LHE distribution around 100~GeV). Conversely, the dipole scheme remains close to the LHE level at large $p_{T,4\ell}$. For small values of $p_{T,4\ell}$, the dominant contribution should arise from several (soft) emissions whose total transverse momentum sums up to zero. However, in the dipole scheme, the transverse-momentum recoil for ISR is not always absorbed by the final-state colour singlet. This explains why for very small values of $p_{T,4\ell}$ the local recoil leads to a significantly smaller cross section compared to the default scheme. 
Thus, the default scheme yields a better description of the logarithmically enhanced region, while it also overpopulates the hard region of the spectrum. A detailed discussion of the logarithmic accuracy of the parton shower is beyond the scope of this article; we note, however, that the choice of the recoil scheme has important implications at higher logarithmic orders~\cite{Nagy:2009vg,Dasgupta:2018nvj,Bewick:2019rbu,Dasgupta:2020fwr,Forshaw:2020wrq,Hamilton:2020rcu}. Since the choice of the recoil scheme only affects our predictions beyond the claimed accuracy, a comparison of the two options available in \PYTHIAn{} should help assess the size of the total theoretical uncertainty. \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/ZZ-RecoilPythia-crop.pdf} \caption{Differential distribution in the transverse momentum of the four charged leptons in $gg \to e^+ e^- \mu^+ \mu^- $ for the background contribution at the LHE level~(black), and at the particle level, using the default \PYTHIAn{} recoil~(blue) and the fully local dipole recoil~(violet). In the lower panel the ratio with respect to the LHE level is shown. } \label{fig:py8recoil} \end{figure} \subsection{Effect of $qg$ and $q\bar q$ channels} \label{sec:qgchannel} In contrast to the calculations in Refs.~\cite{Caola:2016trd,Alioli:2016xab}, in this study we do include the $qg$- and $q\bar q$-induced channels contributing to the real radiation at NLO.\footnote{These channels were also considered in the fixed-order NLO studies of Refs.~\cite{Grazzini:2018owa,Grazzini:2020stb}.} Here we would like to explicitly highlight the impact of these production channels. To this end, in Figs.~\ref{fig:ZZ-ZZmass-split}-\ref{fig:ZZ-HTtot-split} we illustrate at the LHE level the impact of the $qg$ and $q\bar q$ channels with respect to only the $gg$ channels for the different production modes, considering the $m_{4\ell}$ and $H_{\rm T}$ distributions in $ZZ$ production. 
We find very similar results also for the $W^+W^-$ production mode. In the $m_{4\ell}$ distribution the impact of the $qg/q\bar q$ channels is rather flat and about $25\%$ for all production modes. For $H_{\rm T}$ it increases with increasing $H_{\rm T}$ and reaches up to $50\%$ in the considered range. Clearly, any precision analysis of $gg$-induced four-lepton production should include these additional partonic channels opening up at NLO. \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/ZZ-ZZmass-split-crop.pdf} \caption{Differential distribution in the invariant mass $m_{4\ell}$ of the four-lepton system in $gg \to e^{+} e^{-} \mu^{+} \mu^{-}$ at NLO and LHE level. Colour coding of the different production modes as in Fig.~\ref{fig:ZZ-ZZmass}. The lower ratio plots show the full LHE contribution including all partonic channels ($gg+qg+qq$) over only the $gg$ channel contribution for the different production modes.} \label{fig:ZZ-ZZmass-split} \end{figure} \begin{figure}[tb] \includegraphics[width=0.48\textwidth]{figures/ZZ-HTTOT-split-crop.pdf} \caption{Differential distribution in $H_{\rm T}$ in $gg \to e^+ e^- \mu^+ \mu^- $ at NLO and LHE level. Predictions, colour coding and bands as in Fig.~\ref{fig:ZZ-ZZmass-split}.} \label{fig:ZZ-HTtot-split} \end{figure} \section{Conclusions and Outlook} \label{sec:conclusions_outlook} Gluon-induced four-lepton production offers a unique laboratory for measurements of the offshell Higgs boson. At the same time, precision studies of diboson processes and corresponding background estimates in new physics searches are becoming sensitive to the accuracy of the modelling of the gluon-induced production modes. Having this in mind, in this paper we presented an implementation of the loop-induced processes $gg \to ZZ$ and $gg \to W^+W^-$ including offshell leptonic decays and non-resonant contributions at NLO matched to the \PYTHIAn{} parton shower event generator. 
We consistently include the continuum background contribution, the Higgs-mediated signal, and their interference. All of these are loop-induced processes and therefore their implementation in a fully exclusive NLO event generator matched to parton showers poses a significant technical challenge. In inclusive observables, such as the four-lepton\linebreak invariant-mass distribution in $ZZ$ production, the \linebreak parton-shower corrections are found to be marginal, while in more exclusive observables like the recoil of the four-lepton system they can become substantial. For the latter we highlighted the importance of the parton-shower recoil scheme. Furthermore, we investigated the relevance of the $qg/q\bar q$-induced production channels, which partly overlap with the higher-order corrections to quark-induced diboson production. In our calculation all ingredients have been treated exactly at NLO, apart from the massive amplitudes contributing to the two-loop virtuals, which are incorporated via approximations. Exact results for the latter have become available very recently and could be incorporated in an updated version of the \ggfl generator presented here. Moreover, the generator will be made publicly available in the \RES framework. \section*{Acknowledgments} We are grateful to Fabrizio Caola for useful discussions during the preliminary stages of this work. We also thank Paolo Nason for discussing the modifications to \RES{} necessary for implementing the processes described in this paper. J.L. is supported by the Science and Technology Facilities Council (STFC) under the Consolidated Grant ST/T00102X/1 and the STFC Ernest Rutherford Fellowship ST/S005048/1. The work of S.A. is supported by the ERC Starting Grant REINVENT-714788. He also acknowledges funding from Fondazione Cariplo and Regione Lombardia, grant 2017-2070, and by the Italian MUR through the FARE grant R18ZRBEAFC. 
S.F.R.'s work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 788223, PanScales) and by the UK Science and Technology Facilities Council (grant number ST/P001246/1).
\section{Introduction} \label{sec:Intro} \subsection{The problem} \label{subsec:equation} In this paper, we study a linear collisional kinetic equation in a bounded domain with a general Maxwell boundary condition. More precisely, we consider a smooth enough bounded domain $\Omega \subseteq \mathbf R^d$, $d \geqslant 2$, and we denote by ${\mathcal O} := \Omega \times \mathbf R^d$ the interior set of phase space and by $\Sigma := \partial\Omega \times \mathbf R^d$ the boundary set of phase space. For a (variation of a) density function $f=f(t,x,v)$, $t \geqslant 0$, $x \in \Omega$, $v \in \mathbf R^d$, we then look at the following equation \begin{eqnarray}\label{eq:dtf=Lf} \partial_t f &=& {\mathscr L} f := - v \cdot \nabla_x f + {\mathscr C} f \quad\hbox{in}\quad (0,\infty) \times {\mathcal O}, \\ \label{eq:BdyCond} \gamma_{\!-} f &=& {\mathscr R} \gamma_{\!+} f \quad\hbox{on}\quad (0,\infty) \times \Sigma, \end{eqnarray} where $\gamma_{\!\pm} f$ denote the traces of $f$ at the boundary set and where ${\mathscr C}$ and ${\mathscr R}$ stand for two linear collisional operators that we describe below. Our goal is to investigate the long-time behavior of solutions to this linear equation. In order to do so, we will prove a hypocoercivity result using a general and robust approach inspired by previous works on~$L^2$-hypocoercivity. \smallskip \noindent{\it Motivation.} We first briefly explain the motivation to study this problem. We consider a system of particles confined in~$\Omega$ whose state is described by the variations of the density of particles $F = F(t,x,{v}) \geqslant 0$ which, at time $t \geqslant 0$ and at position~$x \in \Omega$, move with velocity ${v} \in \mathbf R^d$. We suppose that collisions between particles are for instance described by the Boltzmann or the Landau bilinear collision operator. 
It leads us to consider the following equation: \begin{eqnarray} \label{eq:nonlinear} \partial_t F &=& - v \cdot \nabla_x F + Q(F,F) \quad\hbox{in}\quad (0,\infty) \times {\mathcal O} \\ \label{eq:Bdynonlinear} \gamma_{\!-} F &=& {\mathscr R} \gamma_{\!+} F \quad\hbox{on}\quad (0,\infty) \times \Sigma, \end{eqnarray} where $Q$ is for instance the Boltzmann or the Landau collision operator. The standard (normalized and centered) Maxwellian \begin{equation}\label{eq:standardMaxw} \mu=\mu(v) := (2\pi)^{-d/2}e^{-|v|^2/2} \end{equation} is a global equilibrium of this equation. In order to study this type of problem in a close-to-equilibrium regime, we write the distribution $F$ as the following perturbation of the global equilibrium $\mu$: $F=\mu+ f$. If $F$ solves~\eqref{eq:nonlinear}-\eqref{eq:Bdynonlinear}, then the linearized equation (throwing away the quadratic term) satisfied by $f$ is nothing but \eqref{eq:dtf=Lf}-\eqref{eq:BdyCond} with $$ {\mathscr C} f := Q(\mu,f) + Q(f,\mu). $$ The assumptions (A1)-(A2)-(A3) made below on the collisional operator~${\mathscr C}$ are met by the linearized Boltzmann and Landau equations for the so-called hard potentials (and thus including the Boltzmann hard spheres case). It is worth noting that by a straightforward adaptation of our method, we can also treat linear operators preserving only mass, such as the Fokker-Planck operator or the relaxation operator. We believe that our analysis is also new in this setting. In Section~\ref{sec:weak}, we present more general assumptions that allow us to deal with linearized Boltzmann and Landau operators corresponding to softer potentials. \smallskip \noindent{\it The boundary condition.} Let us now describe the boundary condition \eqref{eq:BdyCond}. For that purpose, we need to introduce regularity hypotheses on $\partial\Omega$ and some notations. 
We assume that the boundary~$\partial\Omega$ is smooth enough so that the outward unit normal vector $n(x)$ at $x \in \partial\Omega$ is well-defined, as well as $\d\sigma_{\! x}$, the Lebesgue surface measure on $\partial\Omega$. The precise regularity on $\partial\Omega$ that we will need is that the signed distance $\delta$ defined by $\delta(x) := - d(x,\partial\Omega)$ if $x \in \Omega$, $\delta(x) := d(x,\partial\Omega)$ if $x \in \Omega^c$, so that $\Omega = \{x \in \mathbf R^d, \delta(x) < 0\}$, satisfies $\delta \in W^{3,\infty}(\Omega)$ and $\nabla \delta(x) \ne 0$ for $x \in \partial\Omega$, so that $\nabla \delta / |\nabla \delta|$ coincides with the outward unit normal vector $n$ on $\partial\Omega$. We then define $\Sigma_\pm^x := \{ {v} \in \mathbf R^d; \pm \, {v} \cdot n(x) > 0 \}$ the sets of outgoing ($\Sigma_+^x$) and incoming ($\Sigma_-^x$) velocities at the point $x \in \partial\Omega$ as well as $$ \Sigma_\pm := \Big\{ (x,{v}) \in \Sigma; \pm n(x) \cdot {v} > 0 \Big\} = \Big\{(x,{v}); \, x \in \partial\Omega, \, {v} \in \Sigma^x_\pm \Big \}. $$ We denote by $\gamma f$ the trace of $f$ on $\Sigma$, and by $\gamma_{\pm} f = \mathbf 1_{\Sigma_{\pm}} \gamma f$ the traces on $\Sigma_{\pm}$. The boundary condition \eqref{eq:BdyCond} thus encodes how particles are reflected by the wall and takes the form of a balance between the values of the trace $\gamma f$ on the outgoing and incoming velocity subsets of the boundary. We assume that the reflection operator acts locally in time and position, namely $$ ({\mathscr R} \gamma_{\!+} f)(t,x,v) = {\mathscr R}_x (\gamma_{\!+} f (t,x,\cdot))(v) $$ and more specifically that it is a possibly position-dependent Maxwell boundary condition operator \begin{equation}\label{eq:boundary} {\mathscr R}_x (g (x,\cdot))(v) = (1-\alpha(x)) g (x , R_x v) + \alpha(x) D g (x,v), \end{equation} for any $ (x,v) \in \Sigma_-$ and for any function $g : \Sigma_+ \to \mathbf R$.
Here $\alpha : \partial\Omega \to [0,1]$ is a Lipschitz function, called the accommodation coefficient, $R_x$ is the specular reflection operator $$ R_x v = v - 2 n(x) (n(x) \cdot v), $$ and $D$ is the diffusive operator \begin{align} \label{eq:def_D} D g(x,v) = c_\mu \mu(v) \widetilde g (x), \quad \widetilde g (x) = \int_{\Sigma^x_+} g(x,w) \, n(x) \cdot w \, \d w , \end{align} where the constant $c_\mu := (2\pi)^{1/2}$ is such that $c_\mu \widetilde{\mu} = 1$ and we recall that $\mu$ stands for the standard Maxwellian \eqref{eq:standardMaxw}. The boundary condition \eqref{eq:boundary} corresponds to the \emph{pure specular reflection} boundary condition when $\alpha \equiv 0$ and to the \emph{pure diffusive} boundary condition when $\alpha \equiv 1$. It is worth emphasizing that when $\gamma f$ satisfies the boundary condition \eqref{eq:BdyCond}--\eqref{eq:boundary}, for any test function $\varphi = \varphi(v)$ and any $x \in \partial \Omega$, \begin{equation}\label{eq:invariantsBoundary} \int_{\mathbf R^d} \gamma f \varphi \, n(x) \cdot v \, \mathrm{d} v = \int_{\Sigma^x_+} \gamma_{\!+} f \, n(x) \cdot v \, \bigl[ \varphi - (1-\alpha(x)) \varphi \circ R_x - \alpha(x) c_\mu \widetilde{(\varphi \circ R_x) \mu}\bigr] \, \mathrm{d} v. \end{equation} As a consequence, whatever the accommodation coefficient $\alpha$, making the choice $\varphi = 1$ so that $\varphi \circ R_x = c_\mu \widetilde{(\varphi \circ R_x) \mu} =1$, we get \begin{equation}\label{eq:invariantsBoundary1} \int_{\mathbf R^d} \gamma f \, n(x) \cdot v \, \mathrm{d} v = 0, \end{equation} which means that there is no flux of mass at the boundary (no particles leave or enter the domain).
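The normalization $c_\mu \widetilde\mu = 1$ and the zero mass flux identity \eqref{eq:invariantsBoundary1} can be checked numerically. The following sketch (an illustration, not part of the analysis) takes $d=2$, a flat wall with normal $n = e_1$, an arbitrary outgoing trace $g$, and builds the incoming trace from the Maxwell reflection \eqref{eq:boundary}; the flux $\int \gamma f \, n\cdot v \, \mathrm{d} v$ vanishes for every accommodation coefficient $\alpha$:

```python
import numpy as np

# Midpoint quadrature on [-8, 8]^2 (Gaussian tails beyond are negligible), d = 2
n_pts = 800
h = 16.0 / n_pts
v = -8.0 + h * (np.arange(n_pts) + 0.5)
V1, V2 = np.meshgrid(v, v, indexing="ij")
dv = h * h

mu = (2 * np.pi) ** (-1) * np.exp(-(V1**2 + V2**2) / 2)  # standard Maxwellian, d = 2
c_mu = np.sqrt(2 * np.pi)

out = V1 > 0                                     # outgoing velocities, wall normal n = (1, 0)
tilde = lambda g: np.sum(g[out] * V1[out]) * dv  # g~ = int_{v.n>0} g (n.v) dv

# normalization: c_mu * tilde(mu) = 1
assert abs(c_mu * tilde(mu) - 1.0) < 1e-3

# arbitrary outgoing trace gamma_+ f (a hypothetical test profile)
g = np.exp(-((V1 - 1.0) ** 2 + (V2 - 0.5) ** 2))
g_spec = np.exp(-((-V1 - 1.0) ** 2 + (V2 - 0.5) ** 2))  # g(R_x v) = g(-v1, v2)
for alpha in (0.0, 0.37, 1.0):
    # incoming trace from the Maxwell reflection: specular part + diffuse part
    gamma_f = np.where(out, g, (1 - alpha) * g_spec + alpha * c_mu * mu * tilde(g))
    flux = np.sum(gamma_f * V1) * dv             # int gamma_f (n.v) dv
    assert abs(flux) < 1e-3                      # zero mass flux, whatever alpha
```

The pure specular case ($\alpha = 0$) cancels by the symmetry $v_1 \mapsto -v_1$, while the diffuse part cancels precisely because of the normalization of $c_\mu$.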
Assuming now $\alpha \equiv 0$, making the choice~$\varphi (v) = |v|^2$ and observing that $|R_x v|^2 = |v|^2$, we get \begin{equation}\label{eq:invariantsBoundary2} \int_{\mathbf R^d} \gamma f \, |v|^2 \, n(x) \cdot v \, \mathrm{d} v = 0, \end{equation} which means that there is no flux of energy at the boundary in the case of the pure specular reflection boundary condition. \smallskip \noindent{\it The collisional operator.} Let us now describe the hypotheses made on the collisional linear operator ${\mathscr C}$ involved in the linear evolution equation \eqref{eq:dtf=Lf}. We assume that the operator acts locally in time and position, namely $$ ({\mathscr C} f)(t,x,v) = {\mathscr C} ( f (t,x,\cdot))(v), $$ that the operator has mass, momentum and energy conservation laws, namely \begin{equation}\label{eq:local-conservations} \int_{\mathbf R^d} ({\mathscr C} g)(v) \, \varphi (v) \, \mathrm{d} v = 0, \end{equation} for $\varphi := 1, v_i, |v|^2$, $i \in \{1,\dots,d\}$, and for any nice enough function $g$, and that the operator has a spectral gap in the classical Hilbert space associated to the standard Maxwellian $\mu$. In order to be more precise, we introduce the Hilbert space $$ L^2_v (\mu^{-1}) := \left\{ f : \mathbf R^d \to \mathbf R \; \Big| \; \int_{\mathbf R^d} f^2 \mu^{-1} \, \mathrm{d} v < +\infty \right\} $$ endowed with the scalar product $$ (f,g)_{L^2_v(\mu^{-1})} := \int_{\mathbf R^d} f g \mu^{-1} \, \mathrm{d} v $$ and the associated norm $\| \cdot \|_{L^2_v(\mu^{-1})}$.
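The projection $\pi$ defined in \eqref{def:pi} below is the orthogonal projection onto the space of collision invariants because the functions $\mu$, $v_i \mu$ and $(|v|^2-d)\mu/\sqrt{2d}$ form an orthonormal family of $L^2_v(\mu^{-1})$: indeed $(\phi\mu,\psi\mu)_{L^2_v(\mu^{-1})} = \int \phi\psi\,\mu\,\mathrm{d} v$, and $\operatorname{Var}(|v|^2) = 2d$ under $\mu$. A quick numerical sanity check of this orthonormality (an illustration only; here $d=3$, tensorized Gauss--Hermite quadrature):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

d = 3
x, w = hermgauss(12)          # exact for polynomials of degree <= 23, enough for degree 4
nodes = np.sqrt(2.0) * x      # map the weight e^{-x^2} to the standard Gaussian
weights = w / np.sqrt(np.pi)

# tensorized standard-Gaussian quadrature on R^3
V = np.array(np.meshgrid(nodes, nodes, nodes, indexing="ij")).reshape(3, -1)
W = (weights[:, None, None] * weights[None, :, None] * weights[None, None, :]).ravel()

v2 = np.sum(V**2, axis=0)
# polynomial parts of the kernel basis: mu, v_i mu, (|v|^2 - d)/sqrt(2d) mu
phi = np.vstack([np.ones_like(v2), V, (v2 - d) / np.sqrt(2 * d)])

gram = (phi * W) @ phi.T      # Gram matrix in L^2_v(mu^{-1})
assert np.allclose(gram, np.eye(d + 2), atol=1e-10)
```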
We assume that the operator ${\mathscr C}$ is a closed operator with dense domain $\mathrm{Dom} ({\mathscr C})$ in $L^2_v (\mu^{-1})$ which satisfies: \begin{itemize} \item[(A1)] Its kernel is given by $$ \mathrm{ker} ({\mathscr C}) = \mathrm{span}\{ \mu , v_1 \mu , \ldots , v_d \mu , |v|^2 \mu \}, $$ and we denote by $\pi f$ the projection onto $\mathrm{ker} ({\mathscr C})$ given by \begin{equation}\label{def:pi} \pi f = \left( \int_{\mathbf R^d} f \, \d w \right) \mu + \left( \int_{\mathbf R^d} w f \, \d w \right) \cdot v \mu + \left( \int_{\mathbf R^d} \frac{|w|^2-d}{\sqrt{2d}} \, f \, \d w \right) \frac{|v|^2-d}{\sqrt{2d}} \, \mu. \end{equation} \item[(A2)] The operator is self-adjoint on $L^2_v(\mu^{-1})$ and negative $({\mathscr C} f , f )_{L^2_v(\mu^{-1})} \leqslant 0$, so that its spectrum is included in $\mathbf R_{-}$, and \eqref{eq:local-conservations} holds true for any { $g \in \mathrm{Dom} ({\mathscr C})$}. We assume furthermore that ${\mathscr C}$ satisfies a coercivity estimate, more precisely that there is a positive constant $\lambda >0$ such that for any $f \in \mathrm{Dom} ({\mathscr C})$ one has \begin{equation}\label{eq:coercivity} (-{\mathscr C} f , f)_{L^2_v(\mu^{-1})} \geqslant \lambda \| f^\perp \|_{L^2_{v}(\mu^{-1})}^2, \end{equation} where $f^\perp := f - \pi f$. \item[(A3)] For any polynomial function $\phi=\phi(v) : \mathbf R^d \to \mathbf R$ of degree $\leqslant 4$, there holds $\mu \phi \in \hbox{Dom}({\mathscr C})$, so that there exists a constant $C_\phi \in (0,\infty)$ such that $$ \bigl\| {\mathscr C} (\phi \mu) \bigr\|_{L^2_v(\mu^{-1})} \leqslant C_\phi. $$ \end{itemize} \subsection{Conservation laws} Without loss of generality, we shall assume hereafter that the domain $\Omega$ verifies \begin{equation}\label{eq:Omega-centre} |\Omega| = \int_{\Omega} \mathrm{d} x = 1 \quad\text{and}\quad \int_{\Omega} x \, \mathrm{d} x = 0. 
\end{equation} One easily obtains from \eqref{eq:local-conservations}, the Stokes theorem and \eqref{eq:invariantsBoundary1} that any solution $f$ to equation \eqref{eq:dtf=Lf}--\eqref{eq:BdyCond} satisfies the conservation of mass $ \frac{\d}{\mathrm{d} t} \int_{\mathcal O} f \, \mathrm{d} v \, \mathrm{d} x = \int_{\mathcal O} ({\mathscr C} f - v \cdot \nabla_x f) \, \mathrm{d} v \, \mathrm{d} x =0. $ In the case of the specular reflection boundary condition, that is \eqref{eq:BdyCond} with $\alpha \equiv 0$, some additional conservation laws appear. On the one hand, one also has the conservation of energy $ \frac{\d}{\mathrm{d} t} \int_{\mathcal O} |v|^2 f \, \mathrm{d} v \, \mathrm{d} x = \int_{\mathcal O} |v|^2({\mathscr C} f - v \cdot \nabla_x f) \, \mathrm{d} v \, \mathrm{d} x =0, $ because of \eqref{eq:local-conservations}, the Stokes theorem again and \eqref{eq:invariantsBoundary2}. On the other hand, if the domain~$\Omega$ possesses rotational symmetry, we also have the conservation of the corresponding angular momentum. More precisely, we define the set of all infinitesimal rigid displacement fields \begin{equation}\label{eq:RR} {\mathcal R} := \{ x \in \Omega \mapsto Ax + b \in \mathbf R^d \,; A \in {\mathcal M}^a_{d} (\mathbf R), \; b \in \mathbf R^d \}, \end{equation} where ${\mathcal M}^a_{d} (\mathbf R)$ denotes the set of skew-symmetric $d \times d$-matrices with real coefficients, as well as the linear manifold of \emph{centered} infinitesimal rigid displacement fields preserving $\Omega$ \begin{equation}\label{eq:RROmega} {\mathcal R}_\Omega = \{ R \in {\mathcal R} \mid b=0 , \; R(x) \cdot n(x) = 0, \; \forall \, x \in \partial\Omega \}. \end{equation} We observe here that, thanks to the assumption \eqref{eq:Omega-centre}, we can work only with \emph{centered} infinitesimal rigid displacement fields preserving $\Omega$. 
Indeed, if $R$ is an infinitesimal rigid displacement field preserving $\Omega$, that is, $R(x) = Ax + b \in {\mathcal R}$ is such that $R(x) \cdot n(x) = 0$ on $\partial \Omega$, then $$ \begin{aligned} |b|^2 &= \int_{\Omega} \nabla (b \cdot x) \cdot (Ax + b) \, \mathrm{d} x \\ &= -\int_{\Omega} (b \cdot x) \Div (Ax + b) \, \mathrm{d} x + \int_{\partial \Omega} (b \cdot x) (Ax+b) \cdot n(x) \, \d\sigma_{\!x} = 0, \end{aligned} $$ and thus $b=0$. When the set ${\mathcal R}_\Omega$ is not reduced to $\{ 0 \}$, that is when $\Omega$ has rotational symmetries, then one deduces the conservation of angular momentum $ \frac{\d}{\mathrm{d} t} \int_{\mathcal O} R(x) \cdot v f \, \mathrm{d} v \, \mathrm{d} x = 0, \quad \forall \, R \in {\mathcal R}_\Omega. $ Indeed if $R \in {\mathcal R}_\Omega$, there exists $A \in {\mathcal M}^a_{d} (\mathbf R)$ such that $R(x)=Ax$ for any { $x \in \Omega$}. We then compute, using integration by parts, \begin{align*} \frac{\d}{\mathrm{d} t} \int_{\mathcal O} R(x) \cdot v f \mathrm{d} v \, \mathrm{d} x &= \int_{\mathcal O} Ax \cdot v (- v \cdot \nabla_{x} f + {\mathscr C} f) \, \mathrm{d} v \, \mathrm{d} x \\ &= \int_{\mathcal O} \partial_{x_k} (Ax \cdot v) v_k f \, \mathrm{d} v \, \mathrm{d} x - \int_\Sigma Ax \cdot v \, { \gamma f } \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &= - \int_\Sigma Ax \cdot v \, { \gamma f } \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x}, \end{align*} thanks to the velocity conservation law \eqref{eq:local-conservations} and the fact that $A$ is skew-symmetric. For the boundary term, using \eqref{eq:invariantsBoundary} with $\varphi(x,v) := Ax \cdot v$ and $\alpha \equiv 0$, we get \begin{align*} \int_\Sigma Ax \cdot v \,\gamma f \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} & = \int_{\Sigma_+} Ax \cdot (v - R_x v) \gamma_{+}f \, |n(x) \cdot v| \, \mathrm{d} v \, \d\sigma_{\! x} \\ & = 2\int_{\Sigma_+} (Ax \cdot n(x)) \gamma_{+}f \, |n(x) \cdot v|^2 \, \mathrm{d} v \, \d\sigma_{\! 
x} =0, \end{align*} because $v - R_x v= 2(n(x) \cdot v) n(x)$ and $R \in {\mathcal R}_{\Omega}$. \subsection{Main results}\label{ssec:main} Define the position and velocity dependent Hilbert space $$ {\mathcal H} = L^2_{x,v} (\mu^{-1}) := \left\{ f : {\mathcal O} \to \mathbf R \; \Big| \; \int_{\mathcal O} f^2 \mu^{-1} \, \mathrm{d} v \, \mathrm{d} x < +\infty \right\} $$ endowed with the scalar product $$ \left\langle f,g \right\rangle_{{\mathcal H}} := \int_{\mathcal O} f g \mu^{-1} \, \mathrm{d} v \, \mathrm{d} x $$ and the associated norm $\| \cdot \|_{{\mathcal H}}$. For $f \in {\mathcal H}$, we also introduce the following conditions: \begin{align} &\int_{\mathcal O} f \, \mathrm{d} x \, \mathrm{d} v =0 , \tag{C1}\label{eq:C1}\\ &\int_{\mathcal O} |v|^2 f \, \mathrm{d} x \, \mathrm{d} v = 0 , \tag{C2}\label{eq:C2}\\ &\int_{\mathcal O} R(x) \cdot v f \, \mathrm{d} x \, \mathrm{d} v =0, \quad \forall \, R \in {\mathcal R}_\Omega. \tag{C3}\label{eq:C3} \end{align} We are now able to state our main hypocoercivity result: \begin{theo}\label{theo:hypo} There exists a scalar product $\left\langle\!\left\langle \cdot , \cdot \right\rangle \! \right\rangle$ on the space ${\mathcal H}$ so that the associated norm $|\hskip-0.04cm|\hskip-0.04cm| \cdot |\hskip-0.04cm|\hskip-0.04cm|$ is equivalent to the usual norm $\| \cdot \|_{{\mathcal H}}$, and for which the linear operator ${\mathscr L}$ satisfies the following coercivity estimate: there is a positive constant $\kappa >0$ such that $ \left\langle \! \left\langle - {\mathscr L} f , f \right\rangle\!\right\rangle \geqslant \kappa |\hskip-0.04cm|\hskip-0.04cm| f |\hskip-0.04cm|\hskip-0.04cm|^2 $ for any $f \in \mathrm{Dom}({\mathscr L})$ satisfying the boundary condition \eqref{eq:BdyCond}, assumption~\eqref{eq:C1} and furthermore assumptions \eqref{eq:C2}-\eqref{eq:C3} in the specular reflection case ($\alpha\equiv 0$ in \eqref{eq:BdyCond}). 
\end{theo} \noindent This result improves existing results regarding hypocoercivity in a bounded domain for the {linearized} Boltzmann and Landau equations (and consequently for their long-time stability, see Theorem~\ref{theo:main}) in three respects: \smallskip -- We consider a general, smooth enough, convex or non-convex domain. \smallskip -- The $L^2$ estimates that we establish are constructive, which means that they depend constructively on some collisional constants (that appear in the estimates (A2)-(A3) satisfied by the collisional operator ${\mathscr C}$) and some geometrical constants depending on the domain $\Omega$ (that appear in some Poincar\'e and Korn inequalities which can be made explicit, at least for a domain with simple geometry). \smallskip -- Our method encompasses the three boundary conditions (pure diffusive, specular reflection and Maxwell) in a single treatment. In particular, we can handle the Maxwell boundary condition in the case where the accommodation coefficient $\alpha$ vanishes everywhere or on some subset of the boundary. \medskip Our proof is based on an $L^2$-hypocoercivity approach. The challenge of hypocoercivity is to understand the interplay between the collision operator, which provides dissipativity in the velocity variable, and the transport operator, which is conservative, in order to obtain global dissipativity for the whole problem. There are two main hypocoercivity methods, the $H^1$ and the $L^2$ ones. The $H^1$-hypocoercivity approach was first introduced for hypoelliptic operators by H\'erau, Nier~\cite{Herau-Nier} and Eckmann, Hairer~\cite{MR1969727}, further developed by Nier, Helffer~\cite{HN05} and Villani~\cite{Mem-villani} and extended to more general kinetic operators in Villani~\cite{Mem-villani} and Mouhot, Neumann~\cite{Mouhot-Neumann}.
It is also reminiscent of the work by Desvillettes and Villani on the trend to global equilibrium for spatially inhomogeneous kinetic systems in \cite{Desvillettes-Villani-2001}, \cite{Desvillettes-Villani-2005}, and of the high order Sobolev energy method developed by Guo in \cite{Guo-2002-I} and subsequently. In summary, the idea consists in endowing the~$H^1$~space with a new scalar product which makes the considered operator coercive and whose associated norm is equivalent to the usual $H^1$ norm. In order to be adapted to more general operators and geometries, the $L^2$-hypocoercivity technique for a one-dimensional space of collisional invariants was next introduced by H\'erau~\cite{MR2215889} and developed by Dolbeault-Mouhot-Schmeiser~\cite{Dolbeault2009511,MR3324910}. The $L^2$-hypocoercivity technique for a space of collisional invariants of dimension larger than one (including the Boltzmann and Landau cases) was introduced by Guo in \cite{MR2679358}, and developed further mainly by Guo and his collaborators and students. Again the idea consists in endowing the $L^2$ space with a new scalar product which makes the considered operator coercive and whose associated norm is equivalent to the usual $L^2$ norm. \medskip We present hereafter the line of reasoning of this last approach, which will be ours. It heavily relies on the micro-macro decomposition of the solution of the equation: $f = f^\perp + \pi f$, where~$f^\perp$ denotes the microscopic part and $\pi f$ the macroscopic part defined in~\eqref{def:pi}. The coercivity estimate~\eqref{eq:coercivity} on the collision operator~${\mathscr C}$ already gives a control on~$f^\perp$ but not on the macroscopic term $\pi f$. Then, in order to control the macroscopic part, we construct a new scalar product on~${\mathcal H}$ by adding, step by step, new terms in order to control the missing terms appearing in the macroscopic part~$\pi f$.
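The mechanism behind such a twisted scalar product can be illustrated on a deliberately simplified finite-dimensional toy model (a sketch, unrelated to the operator ${\mathscr L}$ itself): for $\dot x = v$, $\dot v = -x - v$, the damping acts on $v$ only, so the usual Euclidean norm is only non-increasing; a small cross term $\varepsilon xv$ yields an equivalent norm in which the generator becomes genuinely coercive:

```python
import numpy as np

# Toy model: d/dt (x, v) = A (x, v); transport couples x and v, damping acts on v only
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])

# With the usual scalar product, dissipation is only semi-definite:
# d/dt |u|^2 = u^T (A^T + A) u and A^T + A = diag(0, -2) gives no decay in x
S0 = A.T + A
assert abs(np.linalg.eigvalsh(S0)[-1]) < 1e-12   # largest eigenvalue is 0

# Twisted scalar product <u, u>_Q = u^T Q u with a small cross term
eps = 0.5
Q = np.array([[1.0, eps],
              [eps, 1.0]])
S = A.T @ Q + Q @ A                              # d/dt <u, u>_Q = u^T S u
assert np.linalg.eigvalsh(Q)[0] > 0              # Q defines an equivalent norm
assert np.linalg.eigvalsh(S)[-1] < 0             # coercivity in the twisted norm
```

In the paper the role of the cross term is played by the $\eta$-terms involving $\widetilde\pi$ and $\nabla\Delta^{-1}\pi$, which transfer the dissipation from the microscopic part to the macroscopic one.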
Roughly speaking, the scalar product that we cook up takes the following form: \begin{eqnarray*} \left\langle \! \left\langle f, g \right\rangle \! \right\rangle &:=& \left\langle f,g \right\rangle_{{\mathcal H}} - \eta \left\langle \widetilde\pi f , \nabla \Delta^{-1}\pi g \right\rangle_{L^2_x(\Omega)} - \eta \left\langle \nabla \Delta^{-1} \pi f , \widetilde\pi g \right\rangle_{L^2_x(\Omega)} , \end{eqnarray*} choosing $\eta > 0$ small enough, and where the moments operator { $ \widetilde\pi : {\mathcal H} \to (L^2_x(\Omega))^{d}$} and the inverse Laplacian type operator $\Delta^{-1}$ have to be suitably defined (see Sections~\ref{sec:elliptic}~\&~\ref{sec:proof}). \smallskip Our proof is a variant of previous proofs of the same type but differs from them in several aspects: \smallskip (i) The order between the $\nabla$ operator and the $\Delta^{-1}$ operator is the one from Guo's approach~\cite{MR2679358,MR3562318} rather than the one from Dolbeault-Mouhot-Schmeiser's approach~\cite{Dolbeault2009511,MR3324910}. That is important in order to handle the rather singular operator involved in the boundary condition. \smallskip (ii) The choice of the mean operator $\widetilde \pi f$ differs from the one used in \cite{MR2679358,MR3562318,MR3579575,MR3840911} but looks very much like the one in \cite{MR2813582,MR2966364,CDHMMS}. It allows us to deal with the general Maxwell boundary condition (and the possibility that $\alpha$ vanishes somewhere or everywhere) but leads to a first natural control of the symmetric gradient of the momentum component of the macroscopic part~$\nabla^s m$ instead of the full derivative $\nabla m$ as in Guo's approach. \smallskip (iii) The definition of the $\Delta^{-1}$ operator has to be chosen wisely in order to handle the general Maxwell boundary condition and the mean operator $\widetilde \pi f$.
We thus need to establish natural $H^{-1} \to H^1$ and $L^2 \to H^2$ regularity estimates for some classical elliptic problems but associated with somewhat unusual boundary conditions. \smallskip Let us give a few more details about (iii). First, we shall introduce an auxiliary Poisson equation with Robin or Neumann boundary conditions, which are devised in order to control the \emph{mass} and \emph{energy} terms of $\pi f$. This result is stated in Theorem~\ref{theo:Poisson} and is based on Poincar\'e type inequalities. Next, we shall introduce a tailored Lam\'e-type system with mixed Robin-type boundary conditions in order to deal with the \emph{momentum} component of the macroscopic part $\pi f$. The corresponding result is presented in Theorem~\ref{theo:regH2-korn} and is based on Korn-type inequalities, which are discussed in Section~\ref{sec:Korn}. For more information on Korn inequalities we refer to the fundamental result of Duvaut-Lions \cite[Theorem~3.2 Chap.~3]{MR0521262}, and to the variant introduced by Desvillettes and Villani \cite{DV02}. For further references and a recent treatment of Korn's inequality, we refer to Ciarlet and Ciarlet \cite{MR2119999}. For more details concerning the regularity issue for similar elliptic equations and systems we refer to~\cite{MR775683,CostabelDN,MR1452171} and the references therein. \medskip Let us now point out that our hypocoercivity result obtained in Theorem~\ref{theo:hypo} enables us to deduce an exponential stability result for our equation~\eqref{eq:dtf=Lf} supplemented with the boundary condition~\eqref{eq:BdyCond}. \begin{theo}\label{theo:main} Let $f_{\mathrm{in}} \in {\mathcal H}$ satisfy assumption~\eqref{eq:C1} and furthermore assumptions~\eqref{eq:C2} and \eqref{eq:C3} in the specular reflection case ($\alpha\equiv 0$ in \eqref{eq:BdyCond}).
There exist positive constants $\kappa , C >0$ such that for any solution $f$ to \eqref{eq:dtf=Lf}--\eqref{eq:BdyCond} associated to the initial data~$f_{\mathrm{in}}$, there holds $$ \| f(t) \|_{{\mathcal H}} \leqslant C e^{-\kappa t} \| f_{\mathrm{in}} \|_{{\mathcal H}}, \quad \forall \, t \geqslant 0. $$ \end{theo} \noindent This result is a first step towards the global existence and the study of the long-time behavior of solutions to the nonlinear problem~\eqref{eq:nonlinear}-\eqref{eq:Bdynonlinear} in a close-to-equilibrium regime, which will be the object of a forthcoming work. \smallskip We briefly mention here some similar coercivity estimates and exponential stability results established in the last decade for linear kinetic equations (mainly for the linearized Boltzmann equation) in a bounded domain. These estimates have then been used for proving global existence of solutions to the nonlinear equation in a close-to-equilibrium regime and convergence to equilibrium in the long-time asymptotics. As already mentioned, Guo~\cite{MR2679358} first proved an $L^2_{x,v}$ coercivity estimate for the cutoff Boltzmann equation with hard potentials or hard spheres by using a non-constructive technique in two cases: the specular reflection boundary condition with strictly convex and analytic domains~$\Omega$, and the pure diffusive boundary condition assuming the domain $\Omega$ is smooth and convex. These results have been generalized by Briant and Guo \cite{MR3562318}, who derived constructive exponential stability estimates in $L^2_{x,v}$ for any positive and constant accommodation coefficient $\alpha \in (0,1)$, without any convexity assumption on $\Omega$. For the same equation endowed with the specular reflection boundary condition, a still non-constructive $L^2$ estimate was derived in the convex setting, without analyticity assumptions on the domain, by Kim and Lee \cite{MR3762275}.
The authors then extended their results to a periodic cylindrical domain with non-convex analytic cross-section~\cite{MR3840911}. Furthermore, the only results we are aware of in the case of long-range interactions, that is, for non-cutoff Boltzmann and Landau collision operators in a bounded domain, are the very recent works of Guo-Hwang-Jang-Ouyang~\cite{MR4076068} (see also \cite{Guoetal-specular2}) for the Landau equation with specular reflection boundary condition, and Duan-Liu-Sakamoto-Strain~\cite{DuanLiuSakamotoStrain} for non-cutoff Boltzmann and Landau equations in a finite channel with inflow or specular reflection boundary conditions. However, as far as we understand, the arguments presented in \cite{MR4076068} seem to be constructive only when $\partial\Omega$ is flat, while the arguments presented in~\cite{Guoetal-specular2} are again non-constructive. It is also worth mentioning that an alternative framework to the above close-to-equilibrium regime has been introduced by DiPerna and Lions, who proved in \cite{DiPernaLionsAnnals,MR1088276,MR1295942} the existence of global weak (renormalized) solutions of arbitrary amplitude to the Boltzmann equation in the case of the whole space for initial data satisfying only the physically natural condition that the total mass, energy and entropy are finite. The extension to the case of a bounded domain with reflection conditions (including specular reflection, pure diffusive reflection and Maxwell reflection) has then been obtained in \cite{HamdacheARMA1992,ArkerydMaslova,MischlerCMP2000,MischlerENS2010}. We must emphasize that our treatment of boundary terms bears some similarity with the analysis made in \cite{MischlerENS2010} in order to take advantage of the information provided by the Darroz\`es-Guiraud inequality~\cite{DGineq}.
\medskip To end this introduction, we point out that in Section~\ref{sec:weak} we broaden our study to the case where the linearized operator only enjoys a weak coercivity estimate, and we obtain weak hypocoercivity and sub-exponential stability results in Theorems~\ref{theo:weak-hypo} and~\ref{theo:weak-main}. \smallskip Also, in Section~\ref{sec:hydro}, we extend our study to a rescaled version of~\eqref{eq:dtf=Lf} which naturally arises in the analysis of hydrodynamical limit problems, and we obtain hypocoercivity and stability results uniformly with respect to the rescaling parameter in Theorems~\ref{theo:hydro-hypo} and~\ref{theo:hydro-main}. \medskip \noindent {\bf Acknowledgements.} The authors thank O.~Kavian and F.~Murat for enlightening discussions and for having pointed out several relevant references. This work has been partially supported by the Projects EFI: ANR-17-CE40-0030 (K.C.\ and I.T.) and SALVE: ANR-19-CE40-0004 (I.T.) of the French National Research Agency (ANR). A.B. acknowledges financial support from R\'egion \^Ile de France. \section{Elliptic equations} \label{sec:elliptic} We present some functional estimates associated with elliptic problems related to the macroscopic quantities. In this section, we denote the classical norm on $L^2_x(\Omega)$ by $\|\cdot\|$ and the associated scalar product by~$(\cdot,\cdot)$. We also write $$ \langle f \rangle := \int_\Omega f \, \mathrm{d} x $$ for the mean of $f$ (recall our normalization assumption \eqref{eq:Omega-centre}). The operators that we consider only act on the position variable $x$, so that, in order to lighten the notations, we will not mention it in our proofs. For the same reason, we often write $\partial_i$ for $\partial_{x_i}$, $i \in \{1,\dots, d\}$.
\subsection{Poincar\'e inequalities and Poisson equation} \label{sec:poincare} We consider the following Poisson equation \begin{equation}\label{eq:elliptic} \left\{ \begin{aligned} - \Delta u &= \xi \quad\text{in}\quad \Omega , \\ (2-\alpha(x))\nabla u \cdot n(x) + \alpha(x) u &= 0 \quad\text{on}\quad \partial \Omega, \end{aligned} \right. \end{equation} for a scalar source term $\xi : \Omega \to \mathbf R$. Remark that when $\alpha \equiv 0$, \eqref{eq:elliptic} corresponds to the Poisson equation with homogeneous Neumann boundary condition. Otherwise,~\eqref{eq:elliptic} corresponds to the Poisson equation with homogeneous Robin (or mixed) boundary condition. We define the Hilbert spaces $$ V_1 := H^1(\Omega) \quad \hbox{and} \quad V_0 := \left\{ u \in H^1(\Omega); \ \int_\Omega u \, \mathrm{d} x = 0 \right\} $$ endowed with the $H^1(\Omega)$-norm, and next $$ V_\alpha := \begin{cases} V_1 \quad\text{if}\quad \alpha \not\equiv 0 \\ V_0 \quad\text{if}\quad \alpha \equiv 0. \end{cases} $$ On $V_\alpha$, we define the bilinear form $$ a_\alpha(u,v) := \int_\Omega \nabla u \cdot \nabla v \, \mathrm{d} x + \int_{\partial\Omega} \frac{\alpha }{2-\alpha} \, u v \, \d\sigma_{\! x}. $$ We start with a result on Poincar\'e-type inequalities: \begin{prop} There hold \begin{equation}\label{eq:PWineq} \forall \, u \in V_0, \quad \| u \| \lesssim \| \nabla u \|, \end{equation} and \begin{equation}\label{eq:PTypeineq} \forall \, u \in V_1, \quad \| u \|^2 \lesssim a_\alpha(u,u). \end{equation} \end{prop} The first inequality is nothing but the classical Poincar\'e-Wirtinger inequality. For the second inequality (which is probably also classical), we have no precise reference for a constructive proof. For the sake of completeness, and because we will need to repeat that kind of argument in the next section, we give a sketch of a non-constructive proof by contradiction based on a compactness argument.
\medskip \noindent {\it Proof of~\eqref{eq:PTypeineq}.} Assuming that \eqref{eq:PTypeineq} is not true, there exists a sequence $(u_n)_{n \in \mathbf N}$ in $H^1(\Omega)$ such that $$ 1 = \| u_n \|^2 \geqslant n \left( \| \nabla u_n \|^2 + \left\| \sqrt{\frac{\alpha}{2-\alpha}} u_n \right\|_{L^2(\partial\Omega)}^2 \right). $$ As a consequence, up to the extraction of a subsequence, there exists $u \in H^1(\Omega)$ such that $u_n {\,\rightharpoonup\,} u$ weakly in $H^1(\Omega)$ and $u_n \to u$ strongly in $L^2(\Omega)$. From the above estimate we deduce that $\| \nabla u \| \leqslant \liminf_{n \to \infty} \| \nabla u_n \| = 0$, so that $u = C$ is a constant. On the one hand, we have $\| \sqrt{\alpha/(2-\alpha)} u \|_{L^2(\partial\Omega)} = \lim_{n \to \infty} \| \sqrt{\alpha/(2-\alpha)} u_n \|_{L^2(\partial\Omega)} = 0$ so that~$C = 0$. On the other hand, we get $\| u \| = \lim_{n\to \infty} \| u_n \| = 1$, which implies that $C \not = 0$ and thus a contradiction. \qed \smallskip We now state a result on the existence, uniqueness and regularity of solutions to \eqref{eq:elliptic}. \begin{theo}\label{theo:Poisson} For any given $\xi \in L^2(\Omega)$, there exists a unique $u \in V_\alpha$ solution to the variational problem \begin{equation}\label{eq:varValpha} a_\alpha(u,w) = ( \xi , w ) ,\quad\forall \, w \in V_\alpha. \end{equation} Assuming furthermore that $\left\langle \xi \right\rangle = 0$ when $\alpha \equiv 0$, there holds $u \in H^2(\Omega)$, $u$ verifies the elliptic equation \eqref{eq:elliptic} a.e.\ and \begin{equation}\label{eq:varValphaH2} \| u \|_{H^2(\Omega)} \lesssim \| \xi \|. \end{equation} \end{theo} We give a sketch of the proof of Theorem~\ref{theo:Poisson} which is very classical, except maybe the way we handle the $H^2$ regularity estimate. The proof will be taken up again in the next section where we deal with an elliptic system of equations associated to the symmetric gradient. 
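As a quick sanity check of the well-posedness statement in Theorem~\ref{theo:Poisson} (an illustration, not part of the proof), one can solve a one-dimensional analogue of \eqref{eq:elliptic} with $\alpha \equiv 1$, namely $-u'' = \xi$ on $(0,1)$ with $-u'(0)+u(0) = 0$ and $u'(1)+u(1) = 0$, by second-order finite differences and compare with an exact cubic solution chosen to satisfy the Robin conditions:

```python
import numpy as np

# 1d analogue of the Robin problem (alpha = 1, outward normals n = -1 and +1):
# -u'' = xi on (0,1), -u'(0) + u(0) = 0, u'(1) + u(1) = 0.
# Hypothetical exact solution, built to satisfy both boundary conditions:
u_exact = lambda x: 1 + x + x**2 - 1.5 * x**3
xi = lambda x: 9 * x - 2                       # = -u_exact''

N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)

for i in range(1, N):                          # interior rows: -u'' = xi
    A[i, i - 1], A[i, i], A[i, i + 1] = -1 / h**2, 2 / h**2, -1 / h**2
    b[i] = xi(x[i])

# Robin rows with second-order one-sided derivatives
A[0, 0], A[0, 1], A[0, 2] = 1 + 3 / (2 * h), -4 / (2 * h), 1 / (2 * h)
A[N, N], A[N, N - 1], A[N, N - 2] = 1 + 3 / (2 * h), -4 / (2 * h), 1 / (2 * h)

u = np.linalg.solve(A, b)
assert np.max(np.abs(u - u_exact(x))) < 1e-3   # O(h^2) accuracy
```

Since $\alpha \not\equiv 0$, no mean-zero constraint on $\xi$ is needed, in accordance with the theorem.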
\begin{proof}[Proof of Theorem~\ref{theo:Poisson}] We split the proof into 4 steps. The first one is dedicated to the application of the Lax-Milgram theorem. The last three are devoted to the proof of the $H^2$ regularity estimate: in Step 2, we develop a formal argument which leads to a directional regularity estimate supposing that the variational solution~$u$ is {\em a~priori} smooth; we then make it rigorous in Step 3 without any smoothness assumption on $u$, and in Step~4 we end the proof of~\eqref{eq:varValphaH2}. \smallskip\noindent{\sl Step 1. } We first observe that there exists $\lambda > 0$ such that $$ a_\alpha(u,u) \geqslant \lambda \| u \|_{H^1(\Omega)}^2, \quad \forall \, u \in V_\alpha, $$ and thus $a_\alpha$ is coercive. The above estimate is a direct consequence of the Poincar\'e-Wirtinger inequality~\eqref{eq:PWineq} in the case when $\alpha \equiv 0$ and of the variant of the classical Poincar\'e inequality given in~\eqref{eq:PTypeineq} when $\alpha \not\equiv 0$. Because $\xi \in L^2(\Omega) \subset V_\alpha'$, we may use the Lax-Milgram theorem and we get the existence and uniqueness of $u \in V_\alpha$ satisfying \eqref{eq:varValpha} as well as \begin{equation} \label{eq:u_H1} \|u\|_{H^1(\Omega)} \lesssim \|\xi\|. \end{equation} For the remainder of the proof, we furthermore assume $\left\langle \xi \right\rangle = 0$ when $\alpha \equiv 0$. We claim that \eqref{eq:varValpha} can be improved into the following new formulation: there exists a unique $u \in V_\alpha$ satisfying \begin{equation}\label{eq:varValpha-bis} a_\alpha(u,w) = ( \xi , w ) , \quad \forall \, w \in H^1(\Omega). \end{equation} When $\alpha \not\equiv 0$, the formulation \eqref{eq:varValpha-bis} is nothing but \eqref{eq:varValpha}.
In the case $\alpha \equiv 0$ so that $V_\alpha \not= H^1(\Omega)$, we remark that for any $w \in H^1(\Omega)$, we have $w - \left\langle w \right\rangle \in V_0$ and therefore $$ \begin{aligned} a_\alpha (u,w) &= a_\alpha (u , w - \left\langle w \right\rangle) \\ &= \int_{\Omega} \xi w \, \mathrm{d} x - \int_{\Omega} \xi \left\langle w \right\rangle \mathrm{d} x = \int_{\Omega} \xi w \, \mathrm{d} x , \end{aligned} $$ where we have used the formulation \eqref{eq:varValpha} and the condition $\left\langle \xi \right\rangle =0$ so that $\int_{\Omega} \xi \left\langle w \right\rangle \mathrm{d} x = 0$ in the second line. \smallskip\noindent{\sl Step 2. A priori directional estimate.} For any small enough open set $\omega \subset \Omega$, we fix a vector field $a \in C^2(\bar\Omega)$ such that $|a| = 1$ on $\omega$ and $a \cdot n = 0$ on $\partial \Omega$, and we set $X := a \cdot \nabla$ the associated differential operator. For a smooth function $u$, we compute \begin{eqnarray*} \| \nabla X u\|^2 &=& (\nabla u, X^* \nabla X u ) + ([\nabla,X] u, \nabla X u ) \\ &=& (\nabla u, \nabla X^*X u ) + (\nabla u, [X^*, \nabla ] X u ) + ([\nabla,X] u, \nabla X u ), \end{eqnarray*} where we have used that \begin{equation} \label{eq:X*} (Xf,g) = (f,X^* g), \quad X^* g := - \hbox{div} (a g), \end{equation} because $a \cdot n = 0$ on $\partial \Omega$. On the other hand, we compute formally \begin{equation} \label{eq:IPPbord} \int_{\partial\Omega} (Xu)^2 \, \frac{\alpha}{2-\alpha} \d\sigma_{\! x} = \int_{\partial\Omega} \frac{\alpha}{2-\alpha} \, u ( X^* Xu) \, \d\sigma_{\! x} - \int_{\partial\Omega} \biggl( X \frac{\alpha}{2-\alpha} \biggr) \, u (Xu) \, \d\sigma_{\! x} . \end{equation} In the next step of the proof, we will work with a discrete version of the operator~$X$ which will allow us to make rigorous computations. 
Assuming furthermore now that $u \in V_\alpha$ satisfies \eqref{eq:varValpha-bis} and that $X^*X u \in H^1(\Omega)$, we may use \eqref{eq:varValpha-bis} with { $w := X^*X u $} and we deduce \begin{eqnarray*} &&\| \nabla X u\|^2 + \int_{\partial\Omega} \frac{\alpha}{2-\alpha} (Xu)^2 \, \d\sigma_{\! x} \\ &&\quad= (\xi, X^*X u ) + (\nabla u, [X^*, \nabla ] X u ) + ([\nabla,X] u, \nabla X u ) - \int_{\partial\Omega} \biggl( X \frac{\alpha}{2-\alpha} \biggr) \, u (Xu) \, \d\sigma_{\! x}. \end{eqnarray*} We easily compute for $i=1,\dots,d$ \begin{eqnarray*} [\partial_i,X] = (\partial_i a) \cdot \nabla, \quad {[{X^*}, {\partial_i}]} = \partial_i (\hbox{div} a) + (\partial_i a) \cdot \nabla, \end{eqnarray*} so that for some constant $C = C(\| a \|_{W^{2,\infty}(\Omega)})$ and any function { $w \in H^1(\Omega)$ }, we have $$ \| [\nabla,X] w \| \leqslant C \| \nabla w \|, \quad \| [X^*, \nabla ] w \| \leqslant C \| w \|_{H^1(\Omega)}. $$ We then deduce that for some constant $C = C( \| a \|_{W^{2,\infty}(\Omega)},\|\alpha\|_{W^{1,\infty}(\Omega)})$, we have \begin{eqnarray*} &\| \nabla X u\|^2 \leqslant \| \xi \| \| X^*X u \| + C \| \nabla u \| \| X u \|_{H^1(\Omega)} \\ &\hspace{5cm} +C \| \nabla u \| \| \nabla X u \| + C \| u \|_{L^2(\partial\Omega)} \| X u \|_{L^2(\partial\Omega)}. \end{eqnarray*} Recalling \eqref{eq:u_H1} and observing that $ \| X^* w \| + \| X w \| + \| w \|_{L^2(\partial\Omega)} \lesssim \| w \|_{H^1(\Omega)}$, we obtain \begin{eqnarray*} \| \nabla X u\|^2 \lesssim \| \xi \| \| \nabla X u\| + \|\xi\|^2, \end{eqnarray*} and we conclude that \begin{equation}\label{eq:RegDirXu} \| \nabla X u\| \lesssim \| \xi \| . \end{equation} \smallskip\noindent{\sl Step 3. Rigorous directional estimate.} When we do not deal with an {\em a priori} smooth solution, but just with a variational solution $ u \in V_\alpha$ satisfying \eqref{eq:varValpha-bis}, we have to modify the argument in the following way. 
We define $\Phi_t : \bar\Omega \to \bar\Omega$ the flow associated to the differential equation \begin{equation} \label{eq:flow} \dot y = a(y), \quad y(0) = x, \end{equation} so that $\Phi_t(x) := y(t)$, $(t,x) \mapsto \Phi_t(x)$ is $C^1$ and $\Phi_t$ is a diffeomorphism on both $\Omega$ and~$\partial\Omega$ for any $t \in \mathbf R$. We next define $$ X^hu(x) := \frac{1}{h} \bigl( u(\Phi_h(x)) - u(x) \bigr), $$ so that $X^h u \in H^1(\Omega)$ if $u \in V_\alpha$. Repeating the argument of Step 2, we get the identity \begin{equation} \label{eq:nablaXh} \begin{aligned} &\| \nabla X^h u\|^2 + \int_{\partial\Omega} \frac{\alpha}{2-\alpha} (X^h u)^2 \, \d\sigma_{\! x} = (\xi, X^{h*}X^h u ) + (\nabla u, [X^{h*}, \nabla ] X^h u ) \\ &\qquad \qquad \qquad + \, \, ([\nabla,X^h] u, \nabla X^h u ) - \int_{\partial\Omega} u (\Phi_h(x)) \biggl((X^h u) \, X^h \bigg(\frac{\alpha}{2-\alpha}\bigg) \biggr) (x) \, \d\sigma_{\! x} , \end{aligned} \end{equation} where we denote \begin{eqnarray*} X^{h*} w (x) := \frac{1}{h} \Big[ w(\Phi_{-h}(x)) \left|\hbox{\rm det} D\Phi_{-h}(x)\right|- w(x) \Big]. \end{eqnarray*} Notice here that we used a discrete version of the integration by parts leading to~\eqref{eq:IPPbord}, which only relies on a change of variables on $\partial \Omega$ and thus makes our computation fully rigorous. As in the second step of the proof, we are now going to bound each term of the right-hand side of~\eqref{eq:nablaXh}. First, notice that for $|h| \leqslant 1$, the mean value theorem provides some $h_0$ between $0$ and $h$ such that \[ X^h u(x) = \sum_j \partial_j u(\Phi_{h_0}(x)) a_j(\Phi_{h_0}(x)), \] so that there exists $C=C(\|a\|_{W^{1,\infty}(\Omega)})$ such that for any $|h| \leqslant 1$, we have $\|X^h u\| \leqslant C \|\nabla u\|$.
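Let us also record why $X^{h*}$ is the exact adjoint of $X^h$ in $L^2(\Omega)$: for any $f, g \in L^2(\Omega)$, the change of variables $y = \Phi_h(x)$, which maps $\Omega$ onto itself, gives $$ (X^h f, g) = \frac1h \int_\Omega f(y) \, g(\Phi_{-h}(y)) \left|\hbox{\rm det} D\Phi_{-h}(y)\right| \mathrm{d} y - \frac1h \int_\Omega f g \, \mathrm{d} x = (f, X^{h*} g), $$ which is nothing but a discrete counterpart of the identity \eqref{eq:X*}.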
We can estimate $\|X^{h*}w\|$ in a similar way using that $$ X^{h*} w (x) = {\frac1{h}} \big[ w(\Phi_{-h}(x)) - w(x) \big] \left|\hbox{\rm det} D\Phi_{-h}(x)\right|+ {\frac1{h}} w(x) \big[\left|\hbox{\rm det} D\Phi_{-h}(x)\right| -\left|\hbox{\rm det} D\Phi_{0}(x)\right| \big]. $$ Consequently, we deduce that there exists $C = C(\|a\|_{W^{2,\infty}(\Omega)})$ such that for $|h| \leqslant 1$, \begin{equation} \label{eq:control_Xh} \|X^{h*} w\| + \|X^h w\| + \|w\|_{L^2(\partial \Omega)} \leqslant C \|w \|_{H^1(\Omega)}. \end{equation} For $i=1,\dots,d$, { for $|h| \leqslant 1$, $x \in \bar{\Omega}$, writing $\Phi_h(x) = (\Phi_{h,1}(x),\dots,\Phi_{h,d}(x))$}, we compute \begin{align*} [\partial_i, X^h] w (x) &= {\frac1{h}} \sum_{j \neq i} \partial_j w (\Phi_h(x)) \partial_i \Phi_{h,j}(x) + {\frac1{h}} \partial_i w (\Phi_h(x)) \left( \partial_i \Phi_{h,i}(x) - 1\right) \\ &= {\frac1{h}} \sum_{j} \partial_j w (\Phi_h(x)) \left(\partial_i \Phi_{h,j}(x) - \partial_i \Phi_{0,j}(x)\right) \end{align*} and similarly \begin{align*} [X^{h*}, \partial_i ] w (x) &= {1 \over h} \sum_{j} \partial_j w (\Phi_{-h}(x)) \left(\partial_i \Phi_{0,j}(x)-\partial_i \Phi_{-h,j}(x) \right) \left|\hbox{\rm det} D\Phi_{-h}(x)\right| \\ &\quad - {1 \over h} w(\Phi_{-h}(x)) \partial_i \left|\hbox{\rm det} D\Phi_{-h}(x)\right|. \end{align*} As previously, we can easily bound $[\partial_i, X^h] w$ and the first term in $[X^{h*}, \partial_i ] w$ by $C \|\nabla w\|$ with $C=C(\|a\|_{W^{1,\infty}(\Omega)})$ for any $|h| \leqslant 1$. The second term of $[X^{h*}, \partial_i ] w$ can be bounded by $C \|w\|$ with $C=C(\|a\|_{W^{2,\infty}(\Omega)})$ for any $|h| \leqslant 1$ since for any $j$, we have $\partial_{ij} \Phi_0(x) = 0$. This implies that there exists $C = C(\| a \|_{W^{2,\infty}(\Omega)})$ such that for $|h| \leqslant 1$ and any function $w$ in $H^1(\Omega)$, we have $$ \| [\partial_i,X^h] w \| \leqslant C \| \nabla w \|, \quad \| [X^{h*}, \partial_i ] w \| \leqslant C \| w \|_{H^1(\Omega)}. 
$$ We deduce that for some $C = C(\|a\|_{W^{2,\infty}}, \|\alpha\|_{W^{1,\infty}})$, we have for any $|h| \leqslant 1$: \begin{align*} \| \nabla X^h u\|^2 &\leqslant \| \xi \| \| X^{h*} X^h u \| + C \| \nabla u \| \| X^h u \|_{H^1(\Omega)} \\ &\qquad +C \| \nabla u \| \| \nabla X^h u \| + C \| u \|_{L^2(\partial\Omega)} \| X^h u \|_{L^2(\partial\Omega)} \end{align*} and then, { using $\|X^h u \| \lesssim \| \nabla u\|$, \eqref{eq:control_Xh} and \eqref{eq:u_H1},} $ \| \nabla X^h u\| \lesssim \| \xi \| . $ Passing to the limit $h\to0$, we recover \eqref{eq:RegDirXu}. \smallskip\noindent {\sl Step 4. Proof of~\eqref{eq:varValphaH2}.} Consider a small enough open set $\omega \subset \Omega$, so that we may fix $a^1, \dots , a^d$ a family of smooth vector fields which forms an orthonormal basis of $\mathbf R^d$ at any point $x \in \omega$ and satisfies~$a^1(x) = n(x)$ for any $x \in \partial\Omega \cap \partial\omega$. Such a family can indeed be constructed as follows. If $ \partial\Omega \cap \partial\omega = \emptyset$, we may take $a^j := e_j$ the canonical basis of $\mathbf R^d$. Otherwise, we fix $x_0 \in \partial\Omega \cap \partial\omega$. Because $\nabla \delta(x_0) \ne 0$, we may fix first $i \in \{1, \dots, d \}$ such that $\partial_{x_i} \delta(x_0) \ne 0$ and thus $\partial_{x_i} \delta(x) \ne 0$ for any $x \in \omega$, for $\omega$ small enough. We then define $b^1:= \nabla \delta$, $b^j := e_{j-1}$ for any $j \in \{2, \dots, i \}$ and $b^j := e_{j}$ for any $j \in \{i+1, \dots, d \}$. Finally, we apply the Gram-Schmidt process to $(b^1(x),\dots,b^d(x))$ to obtain $(a^1(x),\dots,a^d(x))$; since this process does not change the direction of the first vector, $a^1$ remains parallel to $\nabla \delta$, which is normal to $\partial\Omega$. We now set $X_i := a^i \cdot \nabla$. From the third step, we have \begin{equation}\label{eq:RegDirXiu} \| \nabla X_i u\| \lesssim \| \xi \|, \quad \forall \, i = 2, \dots, d. \end{equation} As a consequence of our previous construction, the matrix $A:=(a^1,\dots, a^d)$ is orthogonal at any point $x \in \omega$.
We thus have $\delta_{k\ell} = a^k \cdot a^\ell = a_k \cdot a_\ell$, where we denoted by $a_m$ the $m$-th row vector of the matrix~$A$. As a consequence, we have \begin{equation} \label{eq:Deltau} \sum_i X^*_i X_i u = - \sum_{i,k,\ell} \partial_k (a^i_k a^i_\ell \partial_\ell u) = - \sum_{k,\ell} \partial_k(a_k \cdot a_\ell \, \partial_\ell u) = - \Delta u, \end{equation} from which, since $-\Delta u = \xi$ in $\Omega$ by \eqref{eq:varValpha}, we deduce \begin{eqnarray*} { X^*_1 X_1 u = \xi - \sum_{i \neq 1} X^*_i X_i u. } \end{eqnarray*} Because of \eqref{eq:RegDirXiu}, the above identity and $[X_1^*,X_1] u = {(a^1 \cdot \nabla \hbox{div} (a^1))u }$, we get \begin{align*} \|X_1^2 u\|^2 &= (X_1^* X_1 u, X_1^* X_1 u) + (X_1 u, [X_1^*,X_1] X_1 u) \\ &\lesssim \|\xi\|^2 + \sum_{i \ne 1} \Big( \| \nabla X_i u\| \|\xi\| + \| \nabla X_i u\|^2 \Big) + \|u\|_{H^1(\Omega)}^2 \lesssim \|\xi\|^2. \end{align*} Together with \eqref{eq:RegDirXiu} again, we have then established \begin{equation}\label{eq:RegDirXiju} \| X_i X_j u\| \lesssim \| \xi \|, \quad \forall \, i ,j = 1, \dots, d. \end{equation} Recalling that $A = (a^1, \dots, a^d)$, we have $\partial_i = (A X)_i$. As a consequence, we may write \begin{eqnarray*} \partial_i\partial_j u &=& \sum_{m,\ell} A_{im} X_m A_{j\ell} X_\ell u \\ &=& \sum_{m,\ell} \left(A_{im} A_{j\ell} X_m X_\ell u + A_{im} [ X_m , A_{j\ell} ] X_\ell u\right), \end{eqnarray*} where the last operator is of order $1$. Together with the starting point estimate~\eqref{eq:u_H1} and~\eqref{eq:RegDirXiju}, we conclude that $ \| \partial_i \partial_j u\| \lesssim \| \xi \|, \quad \forall \, i ,j = 1, \dots, d, $ which ends the proof of~\eqref{eq:varValphaH2}. We can now conclude the proof of Theorem~\ref{theo:Poisson}. Indeed, because~$u \in H^2(\Omega)$, we may compute from~\eqref{eq:varValpha} and the Stokes formula: \begin{eqnarray*} \int_{\partial\Omega} \left\{ \frac{\partial u}{\partial n} + \frac{\alpha u}{2-\alpha} \right\} w \, \d\sigma_{\!
x} &=& \int_\Omega \Delta u \, w \, \mathrm{d} x + \int_\Omega \nabla u \cdot \nabla w \, \mathrm{d} x + \int_{\partial\Omega} \frac{\alpha }{2-\alpha} \, u w \, \d\sigma_{\! x} \\ &=& \int_\Omega (\Delta u + \xi ) w \, \mathrm{d} x, \end{eqnarray*} for any $w \in V_\alpha$. Considering first $w \in C^1_c(\Omega)$ and next $w \in C^1(\bar\Omega)$, we get that $u$ satisfies both equations in \eqref{eq:elliptic}. \end{proof} \subsection{Korn inequalities and the associated elliptic equation} \label{sec:Korn} For a vector field $M=(m_i)_{1 \leqslant i \leqslant d} : \Omega \to \mathbf R^d$, we define its symmetric gradient through $$ \nabla^{s}_x M := {1 \over 2} \left( \partial_j{m_i} + \partial_i m_j \right)_{1 \leqslant i,j \leqslant d}, $$ as well as its skew-symmetric gradient by $$ \nabla^{a}_x M := {1 \over 2} \left( \partial_j{m_i} - \partial_i m_j \right)_{1 \leqslant i,j \leqslant d}. $$ Throughout this section, in order to lighten the notations, we will write $\nabla^s$ for $\nabla^s_x$, and~$\nabla^a$ for $\nabla^a_x$. We consider the system of equations \begin{equation}\label{eq:elliptic-korn} \left\{ \begin{aligned} - \Div (\nabla^s U) = \Xi \quad&\text{in}\quad \Omega , \\ U \cdot n = 0 \quad&\text{on}\quad \partial \Omega, \\ (2-\alpha) \left[\nabla^s U n - (\nabla^s U : n \otimes n) n \right] + \alpha U = 0 \quad&\text{on}\quad \partial \Omega, \end{aligned} \right.\qquad \end{equation} for a vector-field source term $\Xi : \Omega \to \mathbf R^d$. Because $$ \Div (\nabla^s U) = \Delta U + \nabla \Div U, $$ we see that \eqref{eq:elliptic-korn} is nothing but a Lamé-type system with a homogeneous Robin-type (or mixed) boundary condition.
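Let us also record the elementary fact underlying all the Korn inequalities discussed below: the kernel of $\nabla^s$ consists exactly of the infinitesimal rigid displacement fields. On the one hand, if $U(x) = Ax + b$ with $A$ a skew-symmetric matrix and $b \in \mathbf R^d$, then $\nabla U = A$ so that $$ \nabla^s U = {1 \over 2} \left( A + {}^T\! A \right) = 0. $$ On the other hand, if $\nabla^s U = 0$, then for any $i,j,k$, we have $$ \partial_k (\nabla^a U)_{ij} = \partial_j (\nabla^s U)_{ik} - \partial_i (\nabla^s U)_{jk} = 0, $$ so that $\nabla U = \nabla^a U$ is a constant skew-symmetric matrix and $U$ is of the previous form.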
We define the Hilbert spaces $$ {\mathcal V}_1 := \left\{ W : \Omega \to \mathbf R^d \mid W \in H^1(\Omega), \; W \cdot n(x) = 0 \text{ on } \partial \Omega \right\} $$ and $$ {\mathcal V}_0 := \left\{ W : \Omega \to \mathbf R^d \mid W \in H^1(\Omega), \; W \cdot n(x) = 0 \text{ on } \partial \Omega , \; P_\Omega \left\langle \nabla^a W \right\rangle = 0 \right\}, $$ where $P_\Omega$ denotes the orthogonal projection onto the set ${\mathcal A}_\Omega = \{ A \in {\mathcal M}^a_d(\mathbf R); \; Ax \in {\mathcal R}_\Omega \}$ of all skew-symmetric matrices giving rise to a centered infinitesimal rigid displacement field preserving $\Omega$ (see \eqref{eq:RROmega} for the definition of ${\mathcal R}_\Omega$). Both spaces are endowed with the~$H^1(\Omega)$ norm. We then denote $$ {\mathcal V}_\alpha := \begin{cases} {\mathcal V}_1 \quad\text{if}\quad \alpha \not\equiv 0 \\ {\mathcal V}_0 \quad\text{if}\quad \alpha \equiv 0. \end{cases} $$ We also define on ${\mathcal V}_\alpha$ the bilinear form $$ A_\alpha(U,W) := \int_{\Omega} \nabla^s U : \nabla^s W \, \mathrm{d} x + \int_{\partial \Omega} \frac{\alpha(x)}{2-\alpha(x)} U \cdot W \, \d\sigma_{\! x}, $$ where $M : N := \sum_{ij} m_{ij} n_{ij}$ for two matrices $M = (m_{ij})$, $N = (n_{ij})$. \smallskip The coercivity of the bilinear form $A_\alpha$ is related to Korn-type inequalities that we present below. We start by stating a first classical version of Korn's inequality: \begin{lem}\label{lem:Korn1} For any vector-field $U \in H^1(\Omega)$, we have \begin{equation}\label{Korn1} \inf_{R \in {\mathcal R}} \| \nabla (U - R) \|^2 \lesssim \| \nabla^s U \|^2, \end{equation} where we recall that ${\mathcal R}$ is the space of all infinitesimal rigid displacement fields defined in~\eqref{eq:RR}, or equivalently, we have \begin{equation}\label{Korn1bis} \| \nabla U \|^2 \lesssim \| \nabla^s U \|^2 + |\langle \nabla^a U \rangle |^2.
\end{equation} \end{lem} For the statement of \eqref{Korn1} and its proof, we refer to \cite[Eq.~(1)]{DV02} where Friedrichs~\cite[Eq.~(13), Second case]{MR22750} and Duvaut-Lions~\cite[Eq.~(3.49)]{MR0521262} are quoted, as well as \cite[Theorem~2.2]{MR2119999} and the references therein. In the following lemma, we prove an estimate on $|\langle \nabla^a U \rangle |$ in the case $\alpha\not\equiv 0$. \begin{lem}\label{lem:Korn2} Supposing $\alpha\not\equiv 0$, we have \begin{equation}\label{Korn1ter} |\langle \nabla^a U \rangle |^2 \lesssim \| \nabla^s U \|^2 + \left\| \sqrt{\alpha \over {2-\alpha}} U \right\|_{L^2(\partial\Omega)}^2, \end{equation} for any vector-field $U \in H^1(\Omega)$. \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:Korn2}] In order to establish \eqref{Korn1ter}, we argue by contradiction. We thus assume that \eqref{Korn1ter} is not true, so that there exists a sequence $(U_n)_{n \in \mathbf N}$ in $H^1(\Omega)$ satisfying $$ 1 = |\langle \nabla^a U_n \rangle |^2 \geqslant n \left( \| \nabla^s U_n \|^2 + \left\| \sqrt{\alpha \over {2-\alpha}} U_n \right\|^2_{L^2(\partial\Omega)} \right). $$ Together with \eqref{Korn1bis} and \eqref{eq:PTypeineq} applied to each component of $U_n$, we obtain that $(U_n)_{n \in \mathbf N}$ is bounded in $H^1(\Omega)$. As a consequence, up to the extraction of a subsequence, there exists~$U \in H^1(\Omega)$ such that $U_n {\,\rightharpoonup\,} U$ weakly in $H^1(\Omega)$ { and $U_n \to U$ strongly in $L^2(\Omega)$}. Passing to the limit in the above estimates satisfied by $(U_n)_{n \in \mathbf N}$, we get $|\langle \nabla^a U \rangle|^2 = 1$, $ \| \sqrt{\alpha/(2-\alpha)} U \|^2_{L^2(\partial\Omega)} = 0$ and~$ \| \nabla^s U \| = 0$.
From $\nabla^s U = 0$, we first deduce that there exist an antisymmetric matrix $A$ and a constant vector $b \in \mathbf R^d$ such that $U(x) = A x+b$ on $\Omega$, and, thanks to the estimate $ \| \sqrt{\alpha/(2-\alpha)} U \|^2_{L^2(\partial\Omega)} = 0$, we deduce that $$ A x + b = 0 \quad \text{on} \quad \Gamma := \{ x \in \partial\Omega, \, \alpha(x) > 0 \}, $$ which has positive measure $|\Gamma|>0$ { using that $\alpha$ is a Lipschitz function}. We fix $\bar x$ an interior point of $\Gamma$. As in the fourth step of the proof of Theorem~\ref{theo:Poisson}, we consider a family of smooth vector fields $a^1, \dots, a^d$ which forms an orthonormal basis of $\mathbf R^d$ and satisfies $a^1(x)=n(x)$ for any $x \in \partial\Omega$. We then introduce the flow $(\Phi^i_t)_{t \geqslant 0}$ associated to $a^i$ for $i= 2, \dots, d$. For $t$ small enough, $\Phi^i_t(\bar x)$ is still in the interior of $\Gamma$ so that $$ A a^i (\bar x) = {\d \over \d t} \Big|_{t=0} (A \Phi^i_t(\bar x) + b) = 0. $$ Therefore, for any $i \geqslant 2$, one has, using that $A \bar x + b = 0$ so that $b = - A \bar x$, $$ a^i (\bar x) \cdot U(x) = a^i (\bar x) \cdot (Ax+b) = - A a^i (\bar x) \cdot x + A a^i (\bar x) \cdot \bar x =0, $$ for any $x \in \Omega$, or, in other words, $U(x) \in \mathbf R \bar n$ for any $x \in \Omega$, with $\bar n := n(\bar x)$. We may thus write $U(x) = \phi(x) \bar n$, with $\phi : \Omega \to \mathbf R$ an affine function, so that $\phi(x) = k \cdot x + k_0$, $k \in \mathbf R^d$, $k_0 \in \mathbf R$. Next, there exists at least one index $i_0 \in \{ 1, \dots, d \}$ such that $\bar n_{i_0} \not = 0$ because $| \bar n | = 1$. Using again the fact that $\nabla^s U =0$ on~$\Omega$ and observing that $(\nabla U)_{ij} = k_j \bar n_i$, we deduce first $k_{i_0} = 0$ because $k_{i_0} \bar n_{i_0} = (\nabla^s U)_{i_0i_0} = 0$ and next $k_i = 0$ for any $i \not=i_0$ because $k_{i} \bar n_{i_0} = 2 (\nabla^s U)_{i_0 i} = 0$.
We have thus established that $U = n_0 := k_0 \, \bar n$ on $\Omega$, for some constant $n_0 \in \mathbf R^d$. Alternatively, we may prove that $\nabla U = 0$, and thus that $U$ is constant, by directly using the claim \cite[Eq.~(3)]{DV02}. In any case, both arguments lead to the fact that $U=0$ because of the boundary condition on $\Gamma$, which is in contradiction with $|\langle \nabla^a U \rangle|^2 = 1$. That ends the proof of \eqref{Korn1ter}. \end{proof} Gathering \eqref{Korn1bis} and \eqref{Korn1ter}, we have then established the following (probably classical) Korn-type inequality: \begin{lem}\label{lem:Korn3} Suppose that $\alpha \not\equiv 0$. For any vector-field $U \in H^1(\Omega)$, there holds \begin{equation}\label{Korn1quar} \| \nabla U \|^2 \lesssim \| \nabla^s U \|^2 + \left\| \sqrt{\alpha \over {2-\alpha}} U \right\|_{L^2(\partial\Omega)}^2. \end{equation} \end{lem} For later reference, we also mention that a similar (and even a bit simpler) argument (see also \cite[Eq.~(2)]{DV02} and \cite[Theorem~2.1]{MR2119999}) leads to the following variant of Korn's inequality: \begin{lem}\label{lem:Korn4} For any vector-field $U \in H^1(\Omega)$, there holds \begin{equation}\label{Korn1six} \| \nabla U \|^2 \lesssim \| \nabla^s U \|^2 + \| U \|^2. \end{equation} \end{lem} It is worth emphasizing that we also have the following Poincar\'e inequality: \begin{lem}\label{lem:Korn5} For any $U \in H^1(\Omega)$ such that $U(x) \cdot n(x) = 0$ on $\partial \Omega$, there holds \begin{equation}\label{PoincareVectV0} \| U \|^2 \lesssim \| \nabla U \|^2. \end{equation} \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:Korn5}] As before, we may argue by contradiction, assuming that \eqref{PoincareVectV0} is not true, so that there exists a sequence $(U_n)_{n \in \mathbf N}$ in $H^1(\Omega)$ satisfying $U_n \cdot n(x) = 0$ on $\partial \Omega$ and such that $$ 1 = \| U_n \|^2 \geqslant n \| \nabla U_n \|^2 .
$$ We immediately deduce that, up to the extraction of a subsequence, $(U_n)_{n \in \mathbf N}$ converges weakly in $H^1(\Omega)$ and strongly in $L^2(\Omega)$ to some $U$ satisfying $\nabla U = 0$, $ \| U \|^2 = 1$ and $U \cdot n(x) = 0$ on $\partial\Omega$. The first condition implies that $U$ is a constant vector, which must then be orthogonal to the outward normal at every point of $\partial\Omega$; picking a maximum point $x^* \in \partial\Omega$ of $x \mapsto x \cdot U$ on $\bar\Omega$, for which $n(x^*)$ is positively proportional to $U$ whenever $U \neq 0$, we deduce $U = 0$, which gives our contradiction. \end{proof} Gathering \eqref{Korn1quar} and \eqref{PoincareVectV0}, we may state a last version of our first Korn inequality: \begin{prop} Suppose that $\alpha \not\equiv 0$. For any $U \in H^1(\Omega)$ such that $U(x) \cdot n(x) = 0$ on~$\partial \Omega$, there holds \begin{equation}\label{PoincareKorn1} \| U \|_{H^1(\Omega)}^2 \lesssim \| \nabla^s U \|^2 + \left\| \sqrt{\alpha \over {2-\alpha}} U \right\|_{L^2(\partial\Omega)}^2. \end{equation} \end{prop} \smallskip On the other hand, a less classical Korn's inequality has been established by Desvillettes and Villani~\cite{DV02}: \begin{lem}\label{lem:Korn6} For any vector-field $U \in H^1(\Omega)$ verifying $U \cdot n(x) = 0$ on $\partial \Omega$, one has \begin{equation}\label{Korn2} \inf_{R \in {\mathcal R}_\Omega} \| \nabla (U - R) \|^2 \lesssim \| \nabla^s U \|^2, \end{equation} where we remind that ${\mathcal R}_\Omega$ stands for the space of centered infinitesimal rigid displacement fields defined in \eqref{eq:RROmega}, or equivalently one has \begin{equation}\label{Korn2bis} \| \nabla U \|^2 \lesssim \| \nabla^s U \|^2 + |P_\Omega \langle \nabla^a U \rangle |^2, \end{equation} where we recall that $P_\Omega$ stands for the orthogonal projection onto the space ${\mathcal A}_\Omega$ as defined before. \end{lem} In the case when ${\mathcal R}_\Omega = \{ 0 \}$, that is, when $\Omega$ has no axisymmetry, \eqref{Korn2} is nothing but the inequality stated in \cite[Theorem~3]{DV02}, for which a detailed constructive proof is provided therein. The proof of \eqref{Korn2} in the three dimensional case is also alluded to in \cite[Section~5]{DV02}.
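A typical example where ${\mathcal R}_\Omega \neq \{ 0 \}$ is the unit ball $\Omega = B(0,1)$: for any skew-symmetric matrix $A$, the centered rigid displacement field $R(x) := Ax$ satisfies $$ R(x) \cdot n(x) = Ax \cdot x = 0 \quad \text{on} \quad \partial\Omega, $$ because $n(x) = x$ on the unit sphere and $A$ is skew-symmetric, so that $R \in {\mathcal R}_\Omega$. In such an axisymmetric situation, inequality \eqref{Korn2bis} controls $\nabla U$ by $\nabla^s U$ only up to the finite dimensional contribution measured by $P_\Omega \langle \nabla^a U \rangle$.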
We do not explain how the analysis developed in~\cite{DV02} makes it possible to obtain a constructive proof of \eqref{Korn2} in the general case (whatever the dimension $d$), but rather briefly explain how \eqref{Korn2bis} may be established thanks to a compactness argument. \medskip \noindent {\it Proof of~\eqref{Korn2bis}.} We first claim that for any vector-field $U \in H^1(\Omega)$ such that $U \cdot n(x) = 0$ on $\partial \Omega$, one has \begin{equation}\label{Korn2ter} \| U \|^2 \lesssim \| \nabla^s U \|^2 + |P_\Omega \langle \nabla^a U \rangle |^2. \end{equation} Assume indeed by contradiction that \eqref{Korn2ter} is not true, so that there exists a sequence $(U_n)_{n \in \mathbf N}$ satisfying $U_n \cdot n(x) = 0$ on $\partial \Omega$ such that $$ 1 = \| U_n \|^2 \geqslant n \left( \| \nabla^s U_n \|^2 + |P_\Omega \langle \nabla^a U_n \rangle |^2 \right). $$ Together with the Korn inequality \eqref{Korn1six}, we deduce that there exists $U \in H^1(\Omega)$ satisfying $U \cdot n(x) = 0$ on $\partial \Omega$ such that (up to the extraction of a subsequence) $U_n {\,\rightharpoonup\,} U$ weakly in~$H^1(\Omega)$ and $U_n \to U$ strongly in $L^2(\Omega)$. Passing to the limit in the estimates satisfied by~$(U_n)_{n \in \mathbf N}$, we first get $\nabla^s U = 0$, which implies that $U = Ax +b \in {\mathcal R}$. Moreover, we obtain~$U \cdot n(x) = (Ax + b) \cdot n(x) = 0$ on $\partial \Omega$ and thus, thanks to the remark after \eqref{eq:RROmega} using the assumption \eqref{eq:Omega-centre}, we obtain that $b=0$ and hence $A \in {\mathcal A}_\Omega$, or equivalently $Ax \in {\mathcal R}_\Omega$. Finally, we also have $P_\Omega \left\langle \nabla^a U \right\rangle = P_\Omega A = 0$, which implies $A \in {\mathcal A}_\Omega^\perp$ and thus~$A = 0$. We therefore obtain $U=0$, which is in contradiction with the fact that $\| U \|^2 = 1$. That ends the proof of \eqref{Korn2ter}. The proof of \eqref{Korn2bis} follows by gathering \eqref{Korn1six} and \eqref{Korn2ter}.
\qed \medskip Gathering~\eqref{Korn2bis} with \eqref{Korn2ter}, we finally obtain the following Korn-type inequality: \begin{prop} For any vector-field $U \in H^1(\Omega)$ such that $U \cdot n(x) = 0$ on $\partial \Omega$, there holds \begin{equation}\label{PoincareKorn2} \| U \|^2_{H^1(\Omega)} \lesssim \| \nabla^s U \|^2 + |P_\Omega \langle \nabla^a U \rangle |^2. \end{equation} \end{prop} \smallskip We can now state our result concerning the existence, uniqueness and regularity of solutions to the elliptic system \eqref{eq:elliptic-korn}. \begin{theo}\label{theo:regH2-korn} For any given $\Xi \in L^2(\Omega)$, there exists a unique solution $U \in {\mathcal V}_\alpha$ to the variational problem associated to \eqref{eq:elliptic-korn}, namely \begin{equation}\label{eq:elliptic-korn-var} A_\alpha(U,W) = ( \Xi , W ) \quad\forall \, W \in {\mathcal V}_\alpha. \end{equation} If furthermore $\Xi$ satisfies the condition $\left\langle \Xi , Ax \right\rangle = 0$ for any $Ax \in {\mathcal R}_\Omega$ when $\alpha \equiv 0$, then the variational solution $U$ to \eqref{eq:elliptic-korn} satisfies $U \in H^2(\Omega)$ with $ \| U \|_{H^2(\Omega)} \lesssim \| \Xi \|, $ and moreover $U$ verifies \eqref{eq:elliptic-korn} a.e. \end{theo} The proof of Theorem~\ref{theo:regH2-korn} follows the same steps as the proof of Theorem~\ref{theo:Poisson}. We briefly present it below. \begin{proof}[Proof of Theorem~\ref{theo:regH2-korn}] We split the proof into four steps, the last three being devoted to the proof of the $H^2$ regularity estimate. \smallskip\noindent{\sl Step 1.
} Thanks to the above Korn-type inequalities, more precisely \eqref{PoincareKorn1} for the case $\alpha \not\equiv 0$ and \eqref{PoincareKorn2} for the case $\alpha \equiv 0$, we deduce that the bilinear form $A_\alpha$ is coercive in ${\mathcal V}_\alpha$, that is, there is a constant $\lambda >0$ such that $ \forall \, U \in {\mathcal V}_\alpha, \quad \quad \lambda \left(\| U \|^2 + \| \nabla U \|^2 \right) \leqslant A_\alpha(U,U). $ One can therefore apply the Lax-Milgram theorem, which gives us the existence and uniqueness of $U \in {\mathcal V}_\alpha$ satisfying \eqref{eq:elliptic-korn-var}. \smallskip For the remainder of the proof, we additionally assume that $\left\langle \Xi , Ax \right\rangle = 0$ for any $Ax \in {\mathcal R}_\Omega$ when $\alpha \equiv 0$. We then claim that \eqref{eq:elliptic-korn-var} can be improved into the following new variational formulation: there exists a unique $U \in {\mathcal V}_\alpha$ verifying \begin{equation}\label{eq:elliptic-korn-var-bis} A_\alpha(U,W) = ( \Xi , W ), \quad\forall \, W \in {\mathcal V}_1. \end{equation} In the case $\alpha \not \equiv 0$ or $\alpha \equiv 0$ with a non-axisymmetric domain $\Omega$, that is, ${\mathcal R}_\Omega = \{ 0 \}$, equation~\eqref{eq:elliptic-korn-var-bis} is nothing but \eqref{eq:elliptic-korn-var} since in these cases ${\mathcal V}_\alpha = {\mathcal V}_1$.
When $\alpha \equiv 0$ and $\Omega$ has rotational symmetry, that is ${\mathcal R}_\Omega \neq \{ 0 \}$, for any $W \in {\mathcal V}_1$ we have $W - P_\Omega \left\langle \nabla^a W \right\rangle x \in {\mathcal V}_0$ and therefore $$ \begin{aligned} A_\alpha (U,W) &= A_\alpha (U , W - P_\Omega \left\langle \nabla^a W \right\rangle x) \\ &= \int_{\Omega} \Xi \cdot W \, \mathrm{d} x - \int_{\Omega} \Xi \cdot (P_\Omega \left\langle \nabla^a W \right\rangle x) \, \mathrm{d} x \\ &= \int_{\Omega} \Xi \cdot W \, \mathrm{d} x \end{aligned} $$ where we have used that $\nabla^s (P_\Omega \left\langle \nabla^a W \right\rangle x) = 0$ in the first line, formulation \eqref{eq:elliptic-korn-var} in the second line, and the condition $\left\langle \Xi , Ax \right\rangle = 0$ for any $Ax \in {\mathcal R}_\Omega$ in the third line, since~$P_\Omega \left\langle \nabla^a W \right\rangle x \in {\mathcal R}_\Omega$ by definition. \smallskip\noindent{\sl Step 2.} For any small enough open set $\omega \subset \Omega$, we fix a vector field $a \in C^2(\bar\Omega)$ such that $|a| = 1$ on $\omega$ and $a \cdot n = 0$ on $\partial \Omega$, and we set $X := a \cdot \nabla$ the associated differential operator. For a smooth solution $U$ to \eqref{eq:elliptic-korn-var-bis}, we compute \begin{eqnarray*} \| \nabla^s X U\|^2 &=& (\nabla^s U, X^* \nabla^s X U ) + ([\nabla^s,X] U, \nabla^s X U ) \\ &=& (\nabla^s U, \nabla^s X^*X U ) + (\nabla^s U, [X^*, \nabla^s ] X U ) + ([\nabla^s,X] U, \nabla^s X U ) \end{eqnarray*} where we have used~\eqref{eq:X*}. On the other hand, we have the following formal equality $$ \int_{\partial\Omega} (XU)\cdot (XU) \, \frac{\alpha}{2-\alpha} \d\sigma_{\! x} = \int_{\partial\Omega} \frac{\alpha}{2-\alpha} \, U \cdot ( X^* XU) \, \d\sigma_{\! x} - \int_{\partial\Omega} \biggl( X \frac{\alpha}{2-\alpha} \biggr) \, U \cdot (XU) \, \d\sigma_{\! x} . 
$$ We define \begin{eqnarray*} ({\mathcal A} W)_{ij} &:=& \frac12 \bigl( {[\partial_i,X]} W_j + {[\partial_j,X]} W_i \bigr) \\ ({\mathcal B} W)_{ij} &:=& \frac12 \bigl( {[{X^*}, {\partial_i}]} W_j +{[{X^*}, {\partial_j}]} W_i \bigr). \end{eqnarray*} Supposing the additional regularity assumption $X^*XU \in {\mathcal V}_1$, using { $(\nabla^s)^* \nabla^s = - \operatorname{div}(\nabla^s \cdot)$} and making the choice $W := X^*XU$ in the variational equation~\eqref{eq:elliptic-korn-var-bis}, we obtain \begin{align*} &\| \nabla^s X U\|^2 + \int_{\partial\Omega} (XU)\cdot (XU) \, \frac{\alpha}{2-\alpha} \d\sigma_{\! x} \\ &\quad=(\Xi, X^*X U ) + (\nabla^s U, {\mathcal B} X U ) + ({\mathcal A} U, \nabla^s X U ) - \int_{\partial\Omega} \biggl( X \frac{\alpha}{2-\alpha} \biggr) \, U \cdot (XU) \, \d\sigma_{\! x} . \end{align*} From the Korn inequalities \eqref{Korn1quar} (when $\alpha\not\equiv0$) and \eqref{Korn1six} (when $\alpha\equiv0$), we first deduce \begin{eqnarray*} \| \nabla X U\|^2 &\lesssim \| \Xi \| \| X^*X U \| + \| \nabla U \| \| {\mathcal B} X U \| + \| {\mathcal A} U \| \| \nabla X U \| \\ &\qquad \qquad + \|U\|_{L^2(\partial \Omega)} \|XU\|_{L^2(\partial\Omega)} + \|XU\|^2. \end{eqnarray*} Then, since \begin{eqnarray*} {[\partial_i,X]} = (\partial_i a) \cdot \nabla, \quad {[{X^*}, {\partial_i}]} = \partial_i (\hbox{div} a) + (\partial_i a) \cdot \nabla, \end{eqnarray*} we deduce that $$ \| {\mathcal A} W \| + \| {\mathcal B} W \| \lesssim \| W \|_{H^1(\Omega)}, \quad \forall \, W \in {\mathcal V}_1. $$ We also have the elementary estimates $$ \| X^*W \| + \| X W \| \lesssim \| W \|_{H^1(\Omega)}, \quad \forall \, W \in {\mathcal V}_1. $$ Thanks to the already established estimate $\| U \|_{H^1(\Omega)} \lesssim \| \Xi \|$, we are then able to deduce that \begin{eqnarray*} \| \nabla X U\|^2 \lesssim \| \Xi \| \| \nabla X U\| + \| \Xi\|^2, \end{eqnarray*} and finally \begin{equation}\label{eq:RegDirX} \| \nabla X U\| \lesssim \| \Xi \| .
\end{equation} Note that as in the proof of Theorem~\ref{theo:Poisson}, the multiplicative constants involved in our estimates depend on $\|a\|_{W^{2,\infty}(\Omega)}$ and $\|\alpha\|_{W^{1,\infty}(\Omega)}$. \smallskip\noindent{\sl Step 3.} When we do not deal with an {\em a priori} smooth solution, but just with a solution $U \in {\mathcal V}_\alpha$ to \eqref{eq:elliptic-korn-var-bis}, we modify the argument in the following way. We consider a small enough open set $\omega \subset \Omega$, so that we may fix $a^1,\dots,a^d$ a family of smooth vector fields such that $(a^1,\dots,a^d)$ is an orthonormal basis of $\mathbf R^d$ at any point $x \in \omega$ and $a^1(x) = n(x)$ for any $x \in \partial \Omega \cap \partial \omega$. The construction of such a family is given in Step 4 of the proof of Theorem \ref{theo:Poisson}. We set $A = (a^1,\dots,a^d)$. Let $k \in \{2, \dots, d\}$. Then $a = a^k$ is as in Step 2 and we define $\Phi_t$ the associated flow introduced in~\eqref{eq:flow}. We define $J^h (x) := A(\Phi_h(x)) A(x)^{-1}$, so that in particular $J^h (x) n (x) = n(\Phi_h(x))$ for any $h$. We next define $$ X^hU(x) := \frac{1}{h} \left( {^T\!}J^h(x) U(\Phi_h(x)) - U(x)\right), $$ so that $X^h U \in {\mathcal V}_1$ if $U \in {\mathcal V}_\alpha$. Repeating the argument of Step 2, we get \begin{eqnarray*} \| \nabla^s X^h U\|^2 = (\nabla^s U, \nabla^s X^{h*}X^h U ) + (\nabla^s U, {\mathcal B}^h X^h U ) + ({\mathcal A} ^h U, \nabla^s X^h U ), \end{eqnarray*} where we denote \begin{eqnarray*} X^{h*} M (x) &:=& {1 \over h} [ |\operatorname{det} D\Phi_{-h}(x)| J^h(\Phi_{-h}(x)) M(\Phi_{-h}(x)) - M(x)] \\ ({\mathcal A}^h W)_{ij} &:=& \frac12 \bigl( {[\partial_i,X^h]} W_j + {[\partial_j,X^h]} W_i \bigr) \\ ({\mathcal B}^h W)_{ij} &:=& \frac12 \bigl( {[{X^{h*}}, {\partial_i}]} W_j +{[{X^{h*}}, {\partial_j}]} W_i \bigr). \end{eqnarray*} On the other hand, we have \begin{align*} &\int_{\partial \Omega} \frac{\alpha}{2-\alpha}(x) U(x) \cdot X^{h*} X^h U(x) \d\sigma_{\!
x} \\ &\quad = \int_{\partial \Omega} \frac{\alpha}{2 - \alpha}(\Phi_h(x)) (X^hU)(x) \cdot (X^hU)(x) \d\sigma_{\! x} + \int_{\partial \Omega} U(x) \cdot X^hU(x) Y^h\bigg(\frac{\alpha}{2-\alpha}\bigg)(x) \d\sigma_{\! x}, \end{align*} where \[ Y^h M(x) := \frac{1}{h} \Big(M(\Phi_h(x)) - M(x)\Big). \] We also have that if $U \in {\mathcal V}_\alpha$ then $X^{h*}X^h U\in {\mathcal V}_1$ too. Indeed, we compute \begin{align*} X^{h*}X^hU (x)&= {1 \over h^2} |\operatorname{det} D\Phi_{-h}(x)| J^h(\Phi_{-h}(x)) \left(\left({^T\!J^h}(\Phi_{-h}(x))\right) U(x) - U(\Phi_{-h}(x))\right) \\ &\quad - {1 \over h^2} \left( {^T\!J^h}(x) U(\Phi_h(x)) - U(x)\right) =: T_1(x) + T_2(x), \end{align*} the last equality defining $T_1$ and $T_2$. As already noticed, if $U \in {\mathcal V}_\alpha$, then $X^hU(x) \cdot n(x)=0$ so that $T_2(x) \cdot n(x) =0$. Concerning~$T_1$, we first have $$ J^h(\Phi_{-h}(x)) \left({^T\!J^h}(\Phi_{-h}(x))\right) U(x) \cdot n(x) = U(x) \cdot n(x) = 0. $$ Then, we remark that $J^h(\Phi_{-h}(x)) = {^T\!}J^{-h}(x)$, so that $$ J^h(\Phi_{-h}(x)) U(\Phi_{-h}(x)) \cdot n(x) = U(\Phi_{-h}(x)) \cdot J^{-h}(x) n(x) = U(\Phi_{-h}(x)) \cdot n(\Phi_{-h}(x))=0. $$ Using this and the fact that $U$ is a solution of \eqref{eq:elliptic-korn-var-bis}, we deduce that \begin{align*} &\| \nabla^s X^h U\|^2 + \int_{\partial \Omega} \frac{\alpha}{2 - \alpha}(\Phi_h(x)) (X^hU)(x) \cdot (X^hU)(x) \d\sigma_{\! x} \\ &\quad = (\Xi, X^{h*}X^h U ) + (\nabla^s U, {\mathcal B}^h X^h U ) + ({\mathcal A} ^h U, \nabla^s X^h U ) - \int_{\partial \Omega} U \cdot (X^h U) \, \bigg( Y^h \frac{\alpha}{2 - \alpha}\bigg) \d\sigma_{\! x}. \end{align*} As in the proof of Theorem~\ref{theo:Poisson}, one can prove the following elementary estimate $$ \|X^h W\| + \| X^{h*}W \| + \| {\mathcal A}^h W \| + \| {\mathcal B}^h W \| \lesssim \|W \|_{H^1(\Omega)}, \quad \forall \, W \in {\mathcal V}_1.
$$ Using these bounds combined with the already established estimate $ \| U \|_{H^1(\Omega)} \lesssim \| \Xi \|$ and the Korn inequality, we deduce, as in the Poisson case, that $$ \| \nabla X^h U\| \lesssim \| \Xi \|, \quad \forall \, |h| \leqslant 1. $$ Passing to the limit $h \to 0$, we then get $$ \| \nabla X^0 U\| \lesssim \| \Xi \|, $$ with $X^0 U_j = a \cdot \nabla U_j + A \, (a \cdot \nabla A^{-1}) U_j$ for $j=1,\dots,d$. Note that as in the Poisson case, the multiplicative constants are uniform in $|h| \leqslant 1$ and depend on $\|a\|_{W^{2,\infty}}$ and~$\|\alpha\|_{W^{1,\infty}}$. We then recover~\eqref{eq:RegDirX} by observing that we have $ \| A \, (a \cdot \nabla A^{-1}) U \|_{H^1} \lesssim \| \Xi \|$. \medskip\noindent {\sl Step 4. } We set now $X_i := a^i \cdot \nabla$. From the second step, we have \begin{equation}\label{eq:RegDirXi} \| \nabla X_i U\| \lesssim \| \Xi \|, \quad \forall \, i = 2, \dots, d. \end{equation} We first notice that $$ \partial_j = \sum_i a^i_jX_i = -\sum_i X_i^*(a^i_j \cdot). $$ Combining this with~\eqref{eq:Deltau}, we deduce that \begin{align*} \Xi_j &= -\Delta U_j - \partial_j(\operatorname{div} U) = \sum_i X_i^*X_i U_j + \sum_{i,\ell,m} X_i^*(a^i_ja^m_\ell X_m U_\ell) \\ &= X_1^* X_1 U_j + \sum_\ell X_1^*(a^1_ja^1_\ell X_1 U_\ell) + \sum_{i \neq 1} X_i^*X_i U_j+ \sum_{(i,m) \neq (1,1)} \sum_\ell X_i^*(a^i_ja^m_\ell X_m U_\ell). \end{align*} We notice that $X_i^*(fg) = (X_i^*f) g - f(X_i g)$. Using then~\eqref{eq:RegDirXi} combined with the fact that for $i=1,\dots,d$, we have $a^i \in W^{2,\infty}(\Omega)$, we deduce \begin{equation} \label{eq:X1*X1_1} X_1^*X_1 U_j + \sum_\ell a^1_j a^1_\ell X_1^*X_1 U_\ell = R_j(U,\Xi) \quad \text{with} \quad \|R_j(U,\Xi)\| \lesssim \|\Xi\|. 
\end{equation} Multiplying the equality in~\eqref{eq:X1*X1_1} by $a_j^1$ and then summing over $j$, we get $$ 2 \sum_\ell a^1_\ell X_1^*X_1 U_\ell = \sum_j a^1_j R_j(U,\Xi), $$ and thus \begin{equation} \label{eq:a1X1*X1} \big\|a^1 \cdot X_1^*X_1 U\big\| \lesssim \|\Xi\|. \end{equation} Coming back to~\eqref{eq:X1*X1_1} and using once more that $\delta_{j\ell} = \sum_m a^m_j a^m_\ell$, so that \begin{equation} \label{eq:X1*X1} X_1^* X_1 U_j = \sum_{\ell,m} a^m_j a^m_\ell X_1^* X_1 U_\ell, \end{equation} we obtain that $$ \sum_{ m \neq 1, \ell \in \{1,\dots,d\}} a^m_ja^m_\ell X_1^*X_1U_\ell = R_j(U,\Xi) - 2 \sum_\ell a^1_j a^1_\ell X_1^*X_1 U_\ell . $$ Together with~\eqref{eq:a1X1*X1} and the fact that $\|R_j(U,\Xi)\|\lesssim \|\Xi\|$, this yields \begin{equation} \label{eq:amX1*X1} \bigg\|\sum_{\ell, m \neq 1} a^m_ja^m_\ell X_1^*X_1U_\ell \bigg\| \lesssim \|\Xi\|. \end{equation} Finally, combining~\eqref{eq:X1*X1} with~\eqref{eq:a1X1*X1} and~\eqref{eq:amX1*X1}, we deduce $$ \|X_1^*X_1 U_j\| \lesssim \|\Xi\|. $$ Recalling that $[X_1,X_1^*] u = (a^1 \cdot \nabla \operatorname{div} (a^1))u $ and that $\|U\|_{H^1(\Omega)} \lesssim \|\Xi\|$, the above inequality implies $$ \| X^2_1 U\| \lesssim \| \Xi \|, $$ and then together with \eqref{eq:RegDirXi}, we have established $ \| X_i X_j U\| \lesssim \| \Xi \|, \quad \forall \, i ,j = 1, \dots, d. $ We can then conclude the proof of Theorem~\ref{theo:regH2-korn} as in that of Theorem~\ref{theo:Poisson}. \end{proof} \section{Proof of Theorem~\ref{theo:hypo}}\label{sec:proof} \label{sec:proofHypo} Consider the operator ${\mathscr L}$ defined in~\eqref{eq:dtf=Lf}.
For any $f \in {\mathcal H} $ we decompose $f = \pi f + f^\perp$ with the macroscopic part $\pi f$ given by $$ \pi f (x,v) = \varrho(x) \mu(v) + m(x) \cdot v \mu(v) + \theta(x) \, \frac{(|v|^2 - d)}{\sqrt{2d}} \, \mu(v), $$ where the mass, momentum and energy are defined respectively by $$ \varrho(x) = \int_{\mathbf R^d} f (x,v) \, \mathrm{d} v , \quad m(x) = \int_{\mathbf R^d} v f (x,v) \, \mathrm{d} v \quad \text{and} \quad \theta(x) = \int_{\mathbf R^d} \frac{(|v|^2 - d)}{\sqrt{2d}} \, f (x,v) \, \mathrm{d} v . $$ Remark that $$ \| f \|_{{\mathcal H}}^2 = \| f^\perp \|_{{\mathcal H}}^2 + \| \pi f \|_{{\mathcal H}}^2 $$ and $$ \| \pi f \|_{{\mathcal H}}^2 = \| \varrho \|_{L^2_x(\Omega)}^2 + \| m \|_{L^2_x(\Omega)}^2 + \| \theta \|_{L^2_x(\Omega)}^2. $$ \smallskip The focus of the remainder of this section will be the proof of Theorem~\ref{theo:hypo} (note that Theorem~\ref{theo:main} is a direct consequence of Theorem~\ref{theo:hypo}). As explained in Subsection~\ref{ssec:main}, in Theorem~\ref{theo:hypo}, the construction of the scalar product $\left\langle\!\left\langle \cdot , \cdot \right\rangle \! \right\rangle$ on the space ${\mathcal H}$ begins with the usual scalar product, which gives us a control of the microscopic part $f^\perp$, and after that, step by step, new terms are added to it in order to control all components of the macroscopic part $\pi f$. The construction of each of those terms is performed from Section~\ref{ssec:micro} through Section~\ref{ssec:mass}, and then in Section~\ref{ssec:conclusion} we shall complete the proof of Theorem~\ref{theo:hypo}. 
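The last two identities rest on the fact that $\mu$, $(v_i \mu)_{1 \leqslant i \leqslant d}$ and $\frac{|v|^2-d}{\sqrt{2d}}\,\mu$ form an orthonormal family in $L^2_v(\mu^{-1})$. As a numerical sanity check (a sketch only, assuming, as the normalizations above indicate, that $\mu$ is the standard Gaussian $\mu(v) = (2\pi)^{-d/2} e^{-|v|^2/2}$), the required one-dimensional Gaussian moments can be verified by Gauss-Hermite quadrature:

```python
import numpy as np

d = 3  # any dimension works; the moments below are dimension-explicit

# 1D Gauss-HermiteE quadrature: integrates f(x) exp(-x^2/2) dx exactly
# for polynomials f of degree <= 2*40 - 1
x, w = np.polynomial.hermite_e.hermegauss(40)
w = w / np.sqrt(2 * np.pi)  # normalize: sum(w * f(x)) = E[f(g)], g ~ N(0,1)

def m(k):
    """k-th moment E[g^k] of a standard Gaussian."""
    return float(np.sum(w * x**k))

# Orthonormality of the macroscopic basis e_0 = 1, e_i = v_i,
# e_T = (|v|^2 - d)/sqrt(2d) in L^2_v(mu^{-1}) reduces to 1D moments:
assert abs(m(0) - 1.0) < 1e-12                   # ||e_0||^2 = 1
assert abs(m(2) - 1.0) < 1e-12                   # ||e_i||^2 = E[v_i^2] = 1
assert abs(m(1)) < 1e-12 and abs(m(3)) < 1e-12   # cross terms vanish (oddness)

# ||e_T||^2 = E[(|v|^2 - d)^2]/(2d) = Var(|v|^2)/(2d) = 1, by independence
var_v2 = d * (m(4) - m(2) ** 2)
assert abs(var_v2 / (2 * d) - 1.0) < 1e-10
```

so that indeed $\|\pi f\|_{{\mathcal H}}^2 = \|\varrho\|_{L^2_x(\Omega)}^2 + \|m\|_{L^2_x(\Omega)}^2 + \|\theta\|_{L^2_x(\Omega)}^2$.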
\smallskip We consider hereafter $f$ satisfying the conditions of Theorem~\ref{theo:hypo}, namely $f \in \mathrm{Dom}({\mathscr L})$ satisfying the boundary condition \eqref{eq:BdyCond}, so that in particular \eqref{eq:invariantsBoundary1} holds, which translates into \begin{equation}\label{eq:mcdotn=0} m (x) \cdot n(x) = 0 \quad \text{for} \quad x \in \partial \Omega, \end{equation} and satisfying assumption~\eqref{eq:C1} which means $$ \langle \varrho \rangle = \int_{\Omega} \varrho \, \mathrm{d} x = 0. $$ In the specular reflection case ($\alpha\equiv 0$ in \eqref{eq:BdyCond}), the additional assumptions \eqref{eq:C2}-\eqref{eq:C3} hold, which corresponds to \begin{equation}\label{eq:C2C3pif} \langle \theta \rangle = \int_{\Omega} \theta \, \mathrm{d} x = 0 \quad\text{and}\quad \langle R \cdot m \rangle = \int_{\Omega} R \cdot m \, \mathrm{d} x = 0 \quad \forall \, R \in {\mathcal R}_\Omega. \end{equation} For simplicity we introduce the notations $f_{\pm} := \gamma_{\pm} f$, $D^\perp := \mathrm{Id} - D$, where $D$ is given by~\eqref{eq:def_D} and $\partial {\mathcal H}_+ := L^2(\Sigma_+ ; \mu^{-1}(v) n(x) \cdot v)$. It is worth emphasizing that because $f \in \mathrm{Dom}({\mathscr L})$, the trace functions $f_{\pm} $ are well defined. We refer the interested reader to \cite{MR274925,MR777741} for the classical definition of the trace of a solution to a transport equation as well as to \cite{MR1765137,MischlerCMP2000,MR2150445} for a more modern approach. \subsection{Microscopic part}\label{ssec:micro} We start with the following result, giving a control of the microscopic part $f^\perp$ and a boundary term. \begin{lem}\label{lem:micro} There exists $\lambda >0$ such that $$ \left\langle - {\mathscr L} f , f \right\rangle_{{\mathcal H}} \geqslant \lambda \| f^\perp \|_{{\mathcal H}}^2 + \frac12 \| \sqrt{\alpha(2-\alpha)} D^\perp f_+ \|_{\partial {\mathcal H}_+}^2. 
$$ \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:micro}] We write $$ \left\langle - {\mathscr L} f , f \right\rangle_{{\mathcal H}} = \left\langle - {\mathscr C} f , f \right\rangle_{{\mathcal H}} + \left\langle v \cdot \nabla_x f , f \right\rangle_{{\mathcal H}}. $$ Thanks to \eqref{eq:coercivity} one has $$ \left\langle - {\mathscr C} f , f \right\rangle_{{\mathcal H}} \geqslant \lambda \| f^\perp \|_{{\mathcal H}}^2. $$ For the second term, we first get thanks to an integration by parts $$ \left\langle v \cdot \nabla_x f , f \right\rangle_{{\mathcal H}} = \int_{\mathcal O} (v \cdot \nabla_x f) f \mu^{-1} \, \mathrm{d} x \, \mathrm{d} v = \frac12 \int_{\Sigma} \gamma f^2 \mu^{-1} n(x) \cdot v \, \d\sigma_{\! x} \, \mathrm{d} v. $$ Writing $\gamma f^2 = f_{+}^2 \mathbf 1_{\Sigma_{+}} + f_{-}^2 \mathbf 1_{\Sigma_{-}}$ and using the boundary condition \eqref{eq:BdyCond}, we thus obtain $$ \begin{aligned} \left\langle v \cdot \nabla_x f , f \right\rangle_{{\mathcal H}} &= \frac12 \int_{\Sigma_{+}} f_{+}^2 \mu^{-1} |n(x) \cdot v| \, \d\sigma_{\! x} \, \mathrm{d} v - \frac12 \int_{\Sigma_{-}} f_{-}^2 \mu^{-1} |n(x) \cdot v| \, \d\sigma_{\! x} \, \mathrm{d} v \\ &= \frac12 \int_{\Sigma_{+}} f_{+}^2 \mu^{-1} |n(x) \cdot v |\, \d\sigma_{\! x} \, \mathrm{d} v \\ &\quad - \frac12 \int_{\Sigma_{-}} \big \{ (1-\alpha(x))f_{+}(x,R_x v) + \alpha(x) D f_{+}(x,v) \big\}^2 \mu^{-1} |n(x) \cdot v| \, \d\sigma_{\! x} \, \mathrm{d} v . \end{aligned} $$ We apply the change of variables $v \mapsto R_x v$, so that $\Sigma_{-}$ transforms into $\Sigma_{+}$, which yields $$ \begin{aligned} \left\langle v \cdot \nabla_x f , f \right\rangle_{{\mathcal H}} &= \frac12 \int_{\Sigma_{+}} f_{+}^2 \mu^{-1} |n(x) \cdot v| \, \d\sigma_{\! x} \, \mathrm{d} v \\ &\quad - \frac12 \int_{\Sigma_{+}} \big \{ (1-\alpha(x))f_{+} + \alpha(x) D f_{+} \big \}^2 \mu^{-1} |n(x) \cdot v |\, \d\sigma_{\! 
x} \, \mathrm{d} v, \end{aligned} $$ since $Df_{+}(x,R_x v) = Df_{+}(x,v)$ and $|n(x) \cdot R_x v| = |n(x) \cdot v|$. Writing $f_{+} = D^\perp f_{+} + D f_{+}$, one has $$ \int_{\Sigma_+} f_+^2 \mu^{-1} n(x) \cdot v \, \d\sigma_{\! x} \, \mathrm{d} v = \int_{\Sigma_+} (Df_+)^2 \mu^{-1} n(x) \cdot v \, \d\sigma_{\! x} \, \mathrm{d} v +\int_{\Sigma_+} (D^\perp f_+)^2 \mu^{-1} n(x) \cdot v \, \d\sigma_{\! x} \, \mathrm{d} v, $$ since $Df_+ \perp D^\perp f_+ $ in $\partial{\mathcal H}_+$. Altogether, we conclude that $$ \begin{aligned} &\left\langle v \cdot \nabla_x f , f \right\rangle_{{\mathcal H}} \\ &\qquad = \frac12 \int_{\Sigma_{+}} \left\{ (Df_{+})^2 + (D^\perp f_{+})^2 - [ (1-\alpha(x)) D^\perp f_{+} + D f_{+} ]^2\right\} \mu^{-1} n(x) \cdot v \, \d\sigma_{\! x} \, \mathrm{d} v \\ &\qquad= \frac12 \int_{\Sigma_{+}} \left\{ [1- (1-\alpha(x))^2] (D^\perp f_{+})^2 - 2(1-\alpha(x)) Df_+ D^\perp f_+ \right\} \mu^{-1} n(x) \cdot v \, \d\sigma_{\! x} \, \mathrm{d} v \\ &\qquad= \frac12 \int_{\Sigma_{+}} \alpha(x)(2-\alpha(x)) (D^\perp f_{+})^2\mu^{-1} n(x) \cdot v \, \d\sigma_{\! x} \, \mathrm{d} v, \end{aligned} $$ where the cross term vanishes in the last equality thanks to the orthogonality $Df_+ \perp D^\perp f_+$ in $\partial{\mathcal H}_+$. We finish the proof by gathering the previous estimates. \end{proof} \subsection{Boundary terms}\label{ssec:bdry} We start by stating a technical lemma which will be useful to treat the boundary terms in what follows. \begin{lem}\label{lem:boundary} Let $\phi: \mathbf R^d \to \mathbf R$. For any $x \in \partial\Omega$, there holds $$ \begin{aligned} \int_{\mathbf R^d} \phi(v) \gamma f(x,v) \, n(x) \cdot v \, \mathrm{d} v & = \int_{\Sigma^x_{+}} \phi(v) \alpha(x) D^\perp f_{+} \, n(x) \cdot v \, \mathrm{d} v \\ &\quad + \int_{\Sigma^x_{+}} \left\{ \phi(v) - \phi(R_x v) \right\} (1-\alpha(x)) D^\perp f_{+} \, n(x) \cdot v \, \mathrm{d} v \\ &\quad + \int_{\Sigma^x_{+}} \left\{ \phi(v) - \phi(R_x v) \right\} D f_{+} \, n(x) \cdot v \, \mathrm{d} v .
\end{aligned} $$ \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:boundary}] We first write, thanks to the decomposition $\gamma f = f_{+} \mathbf 1_{\Sigma_{+}} + f_{-} \mathbf 1_{\Sigma_{-}}$, $$ \begin{aligned} \int_{\mathbf R^d} \phi(v) \gamma f(x,v) \, n(x) \cdot v \, \mathrm{d} v = \int_{\Sigma^x_{+}} \phi(v) f_{+} \, n(x) \cdot v \, \mathrm{d} v - \int_{\Sigma^x_{-}} \phi(v) f_{-} \, |n(x) \cdot v| \, \mathrm{d} v . \end{aligned} $$ Applying the boundary condition \eqref{eq:BdyCond} and then the change of variables $v \mapsto R_x v$, we hence obtain $$ \begin{aligned} &\int_{\Sigma^x_{-}} \phi(v) f_{-} \, |n(x) \cdot v| \, \mathrm{d} v\\ &\qquad = \int_{\Sigma^x_{-}} \phi(v) \left\{ (1-\alpha(x)) f_{+}(x,R_x v) + \alpha(x) D f_{+} (x,v) \right \} \, |n(x) \cdot v| \, \mathrm{d} v \\ &\qquad = \int_{\Sigma^x_{+}} \phi(R_x v) \left\{ (1-\alpha(x)) f_{+}(x,v) + \alpha(x) D f_{+} (x,v) \right\} \, |n(x) \cdot v| \, \mathrm{d} v , \end{aligned} $$ since $Df_{+}(x,R_x v) = Df_{+}(x,v)$ and $|n(x) \cdot R_x v| = |n(x) \cdot v|$. We write $f_{+} = D^\perp f_{+} + D f_{+} $ and thus $$ \begin{aligned} &\int_{\mathbf R^d} \phi(v) \gamma f(x,v) \, n(x) \cdot v \, \mathrm{d} v \\ &\qquad = \int_{\Sigma^x_{+}} \left\{ \phi(v) f_{+} - \phi(R_x v)(1-\alpha(x)) f_{+} - \phi(R_x v) \alpha(x) D f_{+} \right\} n(x) \cdot v \, \mathrm{d} v \\ &\qquad = \int_{\Sigma^x_{+}} \phi(v) \alpha(x) D^\perp f_{+} \, n(x) \cdot v \, \mathrm{d} v \\ &\qquad\quad + \int_{\Sigma^x_{+}} \left\{ \phi(v) - \phi(R_x v) \right\} (1-\alpha(x)) D^\perp f_{+} \, n(x) \cdot v \, \mathrm{d} v \\ &\qquad\quad + \int_{\Sigma^x_{+}} \left\{ \phi(v) - \phi(R_x v) \right\} D f_{+} \, n(x) \cdot v \, \mathrm{d} v, \end{aligned} $$ which concludes the proof. \end{proof} \subsection{Energy}\label{ssec:energy} In this subsection we construct a functional in order to control the energy component of the macroscopic part $\pi f$. 
We denote $$ \theta[g] := \int_{\mathbf R^d} \frac{(|v|^2 - d)}{\sqrt{2d}} \, g\, \mathrm{d} v, $$ so that $\theta = \theta[f]$. We define $u[\theta]$ as the solution to the elliptic equation \eqref{eq:elliptic} associated to $\xi=\theta \in L^2_x(\Omega)$ given by Theorem~\ref{theo:Poisson}, in particular \begin{equation} \label{eq:uthetafH2} \| u[\theta] \|_{H^2_x(\Omega)} \lesssim \| \theta \|_{L^2_x(\Omega)}. \end{equation} It is worth noticing that in the specular reflection case, that is when $\alpha \equiv 0$ in \eqref{eq:BdyCond}, we have $\left\langle \theta \right\rangle = 0$ from \eqref{eq:C2C3pif}, so that the solution $u[\theta]$ to the Poisson equation with Neumann boundary condition is well-defined. \smallskip We also introduce the vector $p = (p_i)_{1 \leqslant i \leqslant d}$ defined by $$ p_i(v): = v_i \, \frac{(|v|^2 - d-2)}{\sqrt{2d}}, $$ and the associated moment functional $M_p[g] = ( M_{p_i} [g])_{1 \leqslant i \leqslant d}$ given by \begin{equation}\label{eq:def-Mp} M_{p_i}[g] = \int_{\mathbf R^d} v_i \, \frac{(|v|^2 - d-2)}{\sqrt{2d}} \, g \, \mathrm{d} v. \end{equation} \begin{lem}\label{lem:uthetaL} One has \begin{equation}\label{thetaLf} \theta [{\mathscr L} f] = -\sqrt{\frac{2}{d}} \, \nabla_x \cdot m - \nabla_x \cdot M_p[f] \end{equation} and \begin{equation}\label{Mpf} M_p[f] = M_p[f^\perp]. \end{equation} As a consequence, from Theorem~\ref{theo:Poisson}, the unique variational solution $u[\theta [{\mathscr L} f]]$ to \eqref{eq:elliptic} associated to~$\xi=\theta[ {\mathscr L} f]$ satisfies \begin{equation} \label{eq:uthetaLfH1} \| u[\theta [{\mathscr L} f]] \|_{H^1_x(\Omega)} \lesssim \| m \|_{L^2_x(\Omega)} + \| f^\perp \|_{{\mathcal H}} + \| \sqrt{\alpha(2-\alpha)} \, D^\perp f_+ \|_{\partial {\mathcal H}_+}. \end{equation} \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:uthetaL}] We start by proving \eqref{thetaLf}. 
By writing ${\mathscr L} f = - v \cdot \nabla_x f + {\mathscr C} f^\perp$ we have $\theta[{\mathscr L} f] = \theta [ - v \cdot \nabla_x f]$. We then compute $$ \begin{aligned} \theta[ - v \cdot \nabla_x f] &= - \nabla_x \cdot \int_{\mathbf R^d} \frac{(|v|^2 - d)}{\sqrt{2d}} v f \, \mathrm{d} v \\ &= - \sqrt{\frac{2}{d}} \nabla_x \cdot \int_{\mathbf R^d} v f \, \mathrm{d} v - \nabla_x \cdot \int_{\mathbf R^d} \frac{(|v|^2 - d-2)}{\sqrt{2d}} v f \, \mathrm{d} v , \end{aligned} $$ and this concludes the proof of \eqref{thetaLf}. Moreover, using the decomposition \begin{equation}\label{eq:fdecomposition} f = \varrho \mu + m\cdot v \mu + \theta \frac{|v|^2-d}{\sqrt{2d}} \mu + f^\perp, \end{equation} a straightforward computation gives $$ \begin{aligned} M_p[f] &= \varrho \int_{\mathbf R^d} p(v) \mu \, \mathrm{d} v + m_i \int_{\mathbf R^d} v_i p(v) \mu \, \mathrm{d} v + \theta \int_{\mathbf R^d} p(v) \left(\frac{|v|^2 - d}{\sqrt{2d}} \right) \mu \, \mathrm{d} v + M_p[f^\perp] . \end{aligned} $$ This proves \eqref{Mpf}, since $\int_{\mathbf R^d} p(v) \mu \, \mathrm{d} v = \int_{\mathbf R^d} v_i p(v) \mu \, \mathrm{d} v = \int_{\mathbf R^d} (|v|^2-d)p(v) \mu \, \mathrm{d} v = 0$. \smallskip From Theorem~\ref{theo:Poisson}, there exists a unique variational solution $u := u[\theta [{\mathscr L} f]]$ to \eqref{eq:elliptic} associated to~$\xi=\theta[ {\mathscr L} f]$. Thanks to Step~1 in the proof of Theorem~\ref{theo:Poisson}, this solution satisfies \begin{equation}\label{eq:uthetaLf1} \lambda \| u \|_{H^1_x (\Omega)}^2 \leqslant \| \nabla_x u \|_{L^2_x(\Omega)}^2 + \| \sqrt{\tfrac{\alpha}{2-\alpha}} \, u \|_{L^2_x(\partial\Omega)}^2 , \end{equation} for some constant $\lambda >0$.
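The three vanishing moments invoked for \eqref{Mpf}, as well as the coefficient $\delta_{ij}(1+2/d)$ that appears below in the proof of Lemma~\ref{lem:energy}, can be checked numerically. This is only a sanity-check sketch (assuming $\mu$ is the standard Gaussian; the reduction to one-dimensional moments uses the independence of the coordinates of $v$):

```python
import numpy as np

d = 3
x, w = np.polynomial.hermite_e.hermegauss(40)
w = w / np.sqrt(2 * np.pi)
m = lambda k: float(np.sum(w * x**k))  # E[g^k] for g ~ N(0,1)

# int v_i p_i(v) mu dv (no sum) is proportional to E[v_1^2 (|v|^2 - d - 2)]:
E_v1sq_vsq = m(4) + (d - 1) * m(2) ** 2        # E[v_1^2 |v|^2] = d + 2
assert abs(E_v1sq_vsq - (d + 2)) < 1e-10       # hence that moment vanishes

# int p mu dv = 0 and int (|v|^2 - d) p mu dv = 0 by oddness in each v_i
assert abs(m(1)) < 1e-12 and abs(m(3)) < 1e-12 and abs(m(5)) < 1e-10

# Coefficient delta_ij (1 + 2/d) of theta in int p_i v_j f dv:
# E[v_1^2 (|v|^2 - d - 2)(|v|^2 - d)] / (2d) = 1 + 2/d, using
# (|v|^2 - d - 2)(|v|^2 - d) = |v|^4 - (2d+2)|v|^2 + d(d+2)
ES = (d - 1) * m(2)                                    # E[S], S = sum_{k>=2} v_k^2
ES2 = (d - 1) * m(4) + (d - 1) * (d - 2) * m(2) ** 2   # E[S^2]
E_v1sq_v4 = m(6) + 2 * m(4) * ES + m(2) * ES2          # E[v_1^2 |v|^4]
coef = (E_v1sq_v4 - (2 * d + 2) * E_v1sq_vsq + d * (d + 2) * m(2)) / (2 * d)
assert abs(coef - (1 + 2 / d)) < 1e-10
```

The first assertion is also the reason why the $v$-integral in the term $B_3$ of Lemma~\ref{lem:energy} vanishes.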
Moreover, thanks to the variational formulation \eqref{eq:varValpha}, one has \begin{eqnarray*} && \| \nabla_x u \|_{L^2_x(\Omega)}^2 + \| \sqrt{\tfrac{\alpha}{2-\alpha}} \, u \|_{L^2_x(\partial\Omega)}^2 \\ &&\qquad = -\int_{\Omega} \left(\sqrt{\frac{2}{d}} \, \nabla_x \cdot m + \nabla_x \cdot M_p[f] \right) u \, \mathrm{d} x \\ &&\qquad = \int_{\Omega} \left(\sqrt{\frac{2}{d}} \, m + M_p[f] \right) \cdot \nabla_x u \, \mathrm{d} x - \int_{\partial\Omega} \left(\sqrt{\frac{2}{d}} \, m + M_p[f] \right) \cdot n(x) \, u \, \d\sigma_{\! x}, \end{eqnarray*} where we have performed one integration by parts in the second equality. As a consequence, we have \begin{equation}\label{eq:uthetaLf2} \begin{aligned} & \| \nabla_x u \|_{L^2_x(\Omega)}^2 + \| \sqrt{\tfrac{\alpha}{2-\alpha}} \, u \|_{L^2_x(\partial\Omega)}^2 \\ &\qquad = \int_{\Omega} \left(\sqrt{\frac{2}{d}} \, m + M_p[f^\perp] \right) \cdot \nabla_x u \, \mathrm{d} x - \int_{\partial\Omega} M_p[f] \cdot n(x) \, u \, \d\sigma_{\! x}, \end{aligned} \end{equation} where we have used \eqref{Mpf} and that $m \cdot n = 0$ as noticed in \eqref{eq:mcdotn=0}. For the boundary term appearing in the last equation, we observe that thanks to Lemma~\ref{lem:boundary} and because $|v|^2 = |R_x v|^2$, for any $x \in \partial \Omega$, we have $$ \begin{aligned} M_p[f] \cdot n(x) &= \int_{\mathbf R^d} \frac{|v|^2-d-2}{\sqrt{2d}} f \, n(x) \cdot v \, \mathrm{d} v \\ &= \alpha(x) \int_{\Sigma_+^x} \frac{|v|^2-d-2}{\sqrt{2d}}\, D^\perp f_+ \, n(x) \cdot v \, \mathrm{d} v, \end{aligned} $$ and therefore $$ \left| \int_{\partial\Omega} M_p[f] \cdot n(x) \, u \, \d\sigma_{\! x} \right| \lesssim \Big\| \sqrt{\alpha(2-\alpha)} \, D^\perp f_+ \Big\|_{\partial {\mathcal H}_+} \Big\| \sqrt{\tfrac{\alpha}{2-\alpha}} \, u \Big\|_{L^2_x(\partial\Omega)}.
$$ Remarking that $$ \| M_p[f^\perp] \|_{L^2_x(\Omega)} \lesssim \| f^\perp \|_{{\mathcal H}}, $$ we finally obtain \eqref{eq:uthetaLfH1} by gathering the above estimate on the boundary term together with \eqref{eq:uthetaLf1} and \eqref{eq:uthetaLf2}, and using the Cauchy-Schwarz inequality. \end{proof} \smallskip We next establish the following result, which gives us a control of the energy $\theta$. \begin{lem}\label{lem:energy} There are constants $\kappa_1, C >0$ such that $$ \begin{aligned} &\left\langle -\nabla_x u[\theta] , M_p [{\mathscr L} f] \right\rangle_{L^2_x(\Omega)} + \left\langle -\nabla_x u[\theta [{\mathscr L} f]], M_p [f] \right\rangle_{L^2_x(\Omega)}\\ &\qquad\qquad \geqslant \kappa_1 \| \theta \|_{L^2_x(\Omega)}^2 - C \| m \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} - C \| f^\perp \|_{{\mathcal H}}^2 -C \| \sqrt{\alpha(2-\alpha)} D^\perp f_+ \|_{\partial {\mathcal H}_+}^2. \end{aligned} $$ \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:energy}] Using~\eqref{eq:uthetaLfH1} and~\eqref{Mpf}, one has $$ \begin{aligned} \left| \left\langle -\nabla_x u[\theta [{\mathscr L} f]], M_p [f^\perp] \right\rangle_{L^2_x(\Omega)} \right| &\lesssim \| \nabla_x u[\theta [{\mathscr L} f]]\|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} \\ &\lesssim \| m \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} + \| f^\perp \|_{{\mathcal H}}^2 + \| \sqrt{\alpha(2-\alpha)} D^\perp f_+ \|_{\partial {\mathcal H}_+}^2, \end{aligned} $$ which allows us to bound the second term in the LHS of the estimate of the statement.
For the first term, writing $M_p[{\mathscr L} f] = M_p[- v \cdot \nabla_x f] + M_p[ {\mathscr C} f^\perp]$ one obtains $$ \left\langle -\nabla_x u[\theta] , M_p [{\mathscr L} f] \right\rangle_{L^2_x(\Omega)} = T_1 + T_2 $$ with $$ T_1 := \left\langle \partial_{x_i} u[\theta] , \partial_{x_j} \int_{\mathbf R^d} p_i(v) v_j f \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)} $$ and $$ T_2 := \left\langle - \nabla_x u[\theta] , \int_{\mathbf R^d} p(v) {\mathscr C} f^\perp \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)}. $$ For the term $T_2$, we remark that $$ \int_{\mathbf R^d} p(v) {\mathscr C} f^\perp \, \mathrm{d} v = \left( f^\perp , {\mathscr C} (p \mu) \right)_{L^2_v(\mu^{-1})}, $$ so that from the property (A3) on ${\mathscr C}$ and~\eqref{eq:uthetafH2}, we get $$ |T_2| \lesssim \| \nabla_x u[\theta] \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} \lesssim \| \theta \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}}. $$ For the term $T_1$, we write $$ \begin{aligned} T_1 &= - \left\langle \partial_{x_j} \partial_{x_i} u[\theta] , \int_{\mathbf R^d} p_i(v) v_j f \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)} + \int_{\partial\Omega} \partial_{x_i} u[\theta] n_j(x) \left( \int_{\mathbf R^d} p_i(v) v_j \, \gamma f \, \mathrm{d} v \right) \d\sigma_{\! x} \\ &=: A+B. \end{aligned} $$ Using the decomposition \eqref{eq:fdecomposition}, we get $$ \int_{\mathbf R^d} p_i(v) v_j f \, \mathrm{d} v = \delta_{ij} \left(1+\frac{2}{d}\right) \theta + \int_{\mathbf R^d} p_i(v) v_j f^\perp \, \mathrm{d} v. 
$$ As a consequence, we obtain $$ \begin{aligned} A &= \left(1+\frac{2}{d}\right) \left\langle -\Delta_x u[\theta] , \theta \right\rangle_{L^2_x(\Omega)} - \left\langle \partial_{x_j} \partial_{x_i} u[\theta] , \int_{\mathbf R^d} p_i(v) v_j f^\perp \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)}\\ &= \left(1+\frac{2}{d}\right) \|\theta\|^2_{L^2_x(\Omega)} - \left\langle \partial_{x_j} \partial_{x_i} u[\theta] , \int_{\mathbf R^d} p_i(v) v_j f^\perp \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)}, \end{aligned} $$ since by definition of $u[\theta]$ we have $-\Delta_x u[\theta] = \theta$. Because of~\eqref{eq:uthetafH2}, we obtain $$ \begin{aligned} \left| \left\langle \partial_{x_j} \partial_{x_i} u[\theta] , \int_{\mathbf R^d} p_i(v) v_j f^\perp \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)} \right| &\lesssim \| \nabla_x^2 u[\theta] \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} \\ &\lesssim \| \theta \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}}. \end{aligned} $$ Thanks to Young's inequality, we thus get $$ A \geqslant \frac12\left(1+\frac{2}{d}\right) \| \theta \|_{L^2_x(\Omega)}^2 - C \| f^\perp \|_{{\mathcal H}}^2. $$ We now investigate the boundary term $B$. Thanks to Lemma~\ref{lem:boundary}, we have $$ \begin{aligned} B &= \int_{\Sigma} \nabla_x u[\theta] \cdot p(v) (\gamma f) \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &= \int_{\Sigma_{+}} \nabla_x u[\theta] \cdot p(v) \alpha(x) D^\perp f_{+} \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &\quad + \int_{\Sigma_{+}} \nabla_x u[\theta] \cdot [p(v) - p(R_x v)] (1-\alpha(x)) D^\perp f_{+} \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &\quad + \int_{\Sigma_{+}} \nabla_x u[\theta] \cdot [p(v) - p(R_x v)] D f_{+} \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &=: B_1 + B_2 + B_3. 
\end{aligned} $$ We remark that $$ p(v) - p(R_x v) = 2 n(x) (n(x) \cdot v) \frac{(|v|^2-d-2)}{\sqrt{2d}} $$ and thus $$ \nabla_x u[\theta] \cdot [p(v) - p(R_x v)] = 2 \nabla_x u[\theta] \cdot n(x) \, (n(x) \cdot v) \, \frac{(|v|^2-d-2)}{\sqrt{2d}} . $$ Thanks to the boundary condition satisfied by $u[\theta]$, in the case $\alpha \equiv 0$, we already obtain that $B = 0$. Otherwise, when $\alpha \not \equiv 0$, recalling \eqref{eq:def_D}, we first obtain for the term $B_3$, that $$ \begin{aligned} B_3 &= \frac{2c_\mu}{\sqrt{2d}} \int_{\Sigma_+} \nabla_x u[\theta] \cdot n(x) \mu(v) (|v|^2-d-2)\widetilde f(x) \, (n(x) \cdot v)^2 \, \mathrm{d} v \, \d\sigma_{\!x} \\ &= \frac{2c_\mu}{\sqrt{2d}} \int_{\partial\Omega} \nabla_x u[\theta] \cdot n(x) \widetilde f(x) \left( \int_{\Sigma_+^x } (|v|^2-d-2) \mu(v) \, (n(x) \cdot v)^2 \mathrm{d} v \right) \d\sigma_{\! x}, \end{aligned} $$ and the integral in $v$ vanishes, thus $B_3 = 0$. For the term $B_1$, the Cauchy-Schwarz inequality and~\eqref{eq:uthetafH2} give $$ \begin{aligned} |B_1| &\lesssim \| \nabla_x u[\theta] \|_{L^2_x(\partial\Omega)} \| \alpha D^\perp f_+ \|_{\partial {\mathcal H}_+} \\ &\lesssim \| \nabla_x u[\theta] \|_{H^1(\Omega)} \| \alpha D^\perp f_+ \|_{\partial {\mathcal H}_+} \\ &\lesssim \| \theta \|_{L^2_x(\Omega)} \| \alpha D^\perp f_+ \|_{\partial {\mathcal H}_+} . \end{aligned} $$ For the term $B_2$, the boundary condition satisfied by $u[\theta]$ implies $$ \nabla_x u[\theta] \cdot [p(v) - p(R_x v)] (1-\alpha(x)) = - \frac{1-\alpha(x)}{2 - \alpha(x)} \alpha(x) u[\theta] 2 (n(x) \cdot v) \frac{(|v|^2-d-2)}{\sqrt{2d}} , $$ hence we obtain $$ \begin{aligned} |B_2| &= 2\left| \int_{\Sigma_{+}} u[\theta] \frac{(|v|^2-d-2)}{\sqrt{2d}} \alpha(x) \frac{1-\alpha(x)}{2 - \alpha(x)} D^\perp f_+ \, (n(x) \cdot v)^2 \, \mathrm{d} v \, \d\sigma_{\! 
x} \right| \\ &\lesssim \| u[\theta] \|_{L^2_x(\partial\Omega)} \| \alpha D^\perp f_+ \|_{\partial {\mathcal H}_+} \\ &\lesssim \| \theta \|_{L^2_x(\Omega)} \| \alpha D^\perp f_+ \|_{\partial {\mathcal H}_+} . \end{aligned} $$ We complete the proof by gathering the previous estimates, using Young's inequality and remarking that $\sqrt{\alpha(2-\alpha)} \geqslant \alpha$. \end{proof} \subsection{Momentum}\label{ssec:momentum} In this subsection we construct a functional devised to control the momentum component of the macroscopic part $\pi f$. We denote $$ m[g] := \int_{\mathbf R^d} v g \, \mathrm{d} v , $$ so that $m = m[f]$. We define $U[m]$ as the solution to the elliptic equation \eqref{eq:elliptic-korn} associated to $\Xi = m \in L^2_x(\Omega)$ given by Theorem~\ref{theo:regH2-korn}, whence \begin{equation}\label{eq:UmfH2} \| U[m] \|_{H^2_x(\Omega)} \lesssim \| m \|_{L^2_x(\Omega)}. \end{equation} It is worth noting that in the specular reflection case ($\alpha \equiv 0$ in \eqref{eq:BdyCond}), the condition \eqref{eq:C2C3pif} holds, and therefore the solution~$U[m]$ is indeed well-defined. \smallskip Considering the matrix $q = (q_{ij})_{1 \leqslant i , j \leqslant d}$ given by $$ q_{ij}(v) = v_i v_j - \delta_{ij}, $$ we define the associated moment functional $M_q[g] = (M_{q_{ij}} [g])_{1 \leqslant i , j \leqslant d}$ as \begin{equation}\label{eq:def-Mq} M_{q_{ij}}[g] = \int_{\mathbf R^d} (v_i v_j - \delta_{ij}) g \, \mathrm{d} v. \end{equation} \begin{lem}\label{lem:mLLf} There holds \begin{equation}\label{mLf} m [{\mathscr L} f] = - \nabla_x \varrho - \nabla_x \cdot M_q[f] \end{equation} and \begin{equation}\label{Mqf} M_q[f] = \sqrt{\frac{2}{d}} \theta I_d + M_q[f^\perp].
\end{equation} As a consequence of Theorem~\ref{theo:regH2-korn}, the unique variational solution $U[m[{\mathscr L} f]]$ to \eqref{eq:elliptic-korn-var} associated to $\Xi = m[{\mathscr L} f]$ satisfies \begin{equation}\label{eq:UmLfH1} \| U[m[{\mathscr L} f]] \|_{H^1_x(\Omega)} \lesssim \| \varrho \|_{L^2_x(\Omega)} + \| \theta \|_{L^2_x(\Omega)} + \| f^\perp \|_{{\mathcal H}} + \| \sqrt{\alpha(2-\alpha)} D^\perp f_+ \|_{\partial {\mathcal H}_+}. \end{equation} \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:mLLf}] Writing ${\mathscr L} f = - v \cdot \nabla_x f + {\mathscr C} f^\perp$ we already obtain that $m[{\mathscr L} f] = m [- v \cdot \nabla_x f]$. We hence compute, for $i \in \{1 , \ldots, d \}$, \begin{equation*} \begin{aligned} m_i [- v \cdot \nabla_x f] &= - \partial_{x_{j}} \int_{\mathbf R^d} v_i v_j f \, \mathrm{d} v \\ &= - \partial_{x_{i}} \int_{\mathbf R^d} f \, \mathrm{d} v - \partial_{x_{j}} \int_{\mathbf R^d} (v_i v_j -\delta_{ij}) f \, \mathrm{d} v \\ &= - \partial_{x_i} \varrho - \partial_{x_{j}} M_{q_{ij}}[f], \end{aligned} \end{equation*} which gives \eqref{mLf}. Thanks to the decomposition \eqref{eq:fdecomposition} we also obtain, for $i,j \in \{1 , \ldots, d \}$, \begin{equation*} \begin{aligned} M_{q_{ij}}[f] &= \varrho \int_{\mathbf R^d} (v_i v_j - \delta_{ij}) \mu \, \mathrm{d} v +m_k \int_{\mathbf R^d} (v_i v_j - \delta_{ij}) v_k \mu \, \mathrm{d} v \\ &\quad +\theta \int_{\mathbf R^d} (v_i v_j - \delta_{ij}) \left(\frac{|v|^2 - d}{\sqrt{2d}} \right) \mu \, \mathrm{d} v + M_{q_{ij}}[f^\perp] , \end{aligned} \end{equation*} which gives \eqref{Mqf} since $\int_{\mathbf R^d} (v_i v_j - \delta_{ij}) \mu \, \mathrm{d} v = \int_{\mathbf R^d} (v_i v_j - \delta_{ij}) v_k \mu \, \mathrm{d} v= 0$ and $\int_{\mathbf R^d} (v_i v_j - \delta_{ij}) \left(\frac{|v|^2 - d}{\sqrt{2d}} \right) \mu \, \mathrm{d} v = \sqrt{\frac{2}{d}} \delta_{ij}$. 
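The Gaussian moments behind \eqref{Mqf}, together with the third- and fourth-order moments used below in the proof of Lemma~\ref{lem:momentum}, reduce to one-dimensional moments of the standard Gaussian and admit the same kind of numerical sanity check (again a sketch, under the same assumption on $\mu$ as before):

```python
import numpy as np

d = 3
x, w = np.polynomial.hermite_e.hermegauss(40)
w = w / np.sqrt(2 * np.pi)
m = lambda k: float(np.sum(w * x**k))  # E[g^k], g ~ N(0,1)

# Coefficient sqrt(2/d) of theta in M_q[f] (diagonal case i = j):
# E[(v_1^2 - 1)(|v|^2 - d)] = Cov(v_1^2, |v|^2) = Var(v_1^2) = 2
cov = m(4) - m(2) ** 2
assert abs(cov / np.sqrt(2 * d) - np.sqrt(2 / d)) < 1e-10

# Centering: int q_ij mu dv = 0, and int q_ij v_k mu dv = 0 (odd integrand)
assert abs(m(2) - 1.0) < 1e-12
assert abs(m(3)) < 1e-12 and abs(m(1)) < 1e-12

# Fourth-moment (Isserlis) identity behind int q_ij v_k v_l mu dv:
# E[v_i v_j v_k v_l] = d_ij d_kl + d_ik d_jl + d_il d_jk, checked for
# (i,j,k,l) = (1,1,1,1) and (1,2,1,2):
assert abs(m(4) - 3.0) < 1e-10        # 1 + 1 + 1
assert abs(m(2) ** 2 - 1.0) < 1e-12   # E[v_1^2 v_2^2] = 0 + 1 + 0
```

The Isserlis identity is exactly what produces the terms $\delta_{jk} m_i + \delta_{ik} m_j$ in the proof of Lemma~\ref{lem:momentum}.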
\smallskip Now let $U := U[m[{\mathscr L} f]]$ be the unique variational solution to \eqref{eq:elliptic-korn-var} associated to $\Xi = m[{\mathscr L} f]$ from Theorem~\ref{theo:regH2-korn}. From Step~1 of the proof of Theorem~\ref{theo:regH2-korn}, one has \begin{equation}\label{eq:UmLf1} \lambda \| U \|_{H^1_x(\Omega)}^2 \leqslant \| \nabla^s U \|_{L^2_x(\Omega)}^2 + \| \sqrt{\tfrac{\alpha}{2-\alpha}} \, U \|_{L^2_x(\partial\Omega)}^2, \end{equation} for some $\lambda >0$. Moreover from \eqref{eq:elliptic-korn-var}, we obtain \begin{equation}\label{eq:UmLf2} \begin{aligned} & \| \nabla^s U \|_{L^2_x(\Omega)}^2 + \| \sqrt{\tfrac{\alpha}{2-\alpha}} \, U \|_{L^2_x(\partial\Omega)}^2 \\ &\qquad = - \int_{\Omega} (\nabla_x \varrho + \nabla_x \cdot M_q[f] ) \cdot U \, \mathrm{d} x\\ &\qquad = \int_{\Omega} \varrho I_d : \nabla U \, \mathrm{d} x + \int_{\Omega} M_q[f] : \nabla U \, \mathrm{d} x \\ &\qquad \quad - \int_{\partial \Omega} \varrho n(x) \cdot U \, \d\sigma_{\! x} - \int_{\partial \Omega} M_q[f] n(x) \cdot U \, \d\sigma_{\! x} \\ &\qquad = \int_{\Omega} \varrho I_d : \nabla^s U \, \mathrm{d} x + \int_{\Omega} \left( \sqrt{\frac{2}{d}} \theta I_d + M_q[f^\perp] \right) : \nabla^s U \, \mathrm{d} x \\ &\qquad \quad - \int_{\partial \Omega} M_q[f] n(x) \cdot U \, \d\sigma_{\! x} , \end{aligned} \end{equation} where we have performed an integration by parts in the second equality, used that $U \cdot n(x) = 0$ since $U \in {\mathcal V}_\alpha$ and \eqref{Mqf} in the last one. We now deal with the boundary term in the last equation. 
We have, for any $x \in \partial\Omega$, $$ \begin{aligned} M_q[f] n(x) \cdot U &= \int_{\mathbf R^d} v_i v_j f n_j(x) U_i \, \mathrm{d} v - \int_{\mathbf R^d} f n_i(x) U_i \, \mathrm{d} v \\ &= \int_{\mathbf R^d} f (v \cdot U) (n(x) \cdot v) \, \mathrm{d} v \\ &= \alpha(x) \int_{\Sigma_+^x} D^\perp f_+ (v \cdot U) (n(x) \cdot v) \, \mathrm{d} v \\ &\quad + \int_{\Sigma_+^x} ( v - R_x v ) \cdot U (1-\alpha(x)) D^\perp f_+ (n(x) \cdot v) \, \mathrm{d} v \\ &\quad +\int_{\Sigma_+^x} ( v - R_x v ) \cdot U Df_+ (n(x) \cdot v) \, \mathrm{d} v , \end{aligned} $$ using that $U \cdot n(x) = 0$ and Lemma~\ref{lem:boundary} in the last line. Observe now that, for any $x \in \partial \Omega$, we have $$ ( v - R_x v ) \cdot U = 2 \left(n(x) \cdot U \right) (n(x)\cdot v) = 0 , $$ using again that the solution verifies $U \cdot n(x) = 0$. We hence finally get $$ \begin{aligned} \left| \int_{\partial\Omega} M_q[f] n(x) \cdot U \, \d\sigma_{\! x} \right| &\lesssim \| \sqrt{\alpha(2-\alpha)} \, D^\perp f_+ \|_{\partial {\mathcal H}_+} \| \sqrt{\tfrac{\alpha}{2-\alpha}} U[m[{\mathscr L} f]] \|_{L^2_x(\partial\Omega)}. \end{aligned} $$ We obtain \eqref{eq:UmLfH1} by gathering this last estimate together with \eqref{eq:UmLf1} and \eqref{eq:UmLf2}, applying the Cauchy-Schwarz inequality and remarking that $$ \| M_q[f^\perp] \|_{L^2_x(\Omega)} \lesssim \| f^\perp \|_{{\mathcal H}} . $$ \end{proof} We now deduce the following result, which gives a control of the momentum $m$.
\begin{lem}\label{lem:momentum} There are constants $\kappa_2,C >0$ such that $$ \begin{aligned} &\left\langle -\nabla_x^s U[m], M_q [{\mathscr L} f] \right\rangle_{L^2_x(\Omega)} + \left\langle -\nabla_x^s U[m[{\mathscr L} f]] , M_q [f] \right\rangle_{L^2_x(\Omega)}\\ &\qquad\qquad \geqslant \kappa_2 \| m \|_{L^2_x(\Omega)}^2 - C \| f^\perp \|_{{\mathcal H}} \| \varrho \|_{L^2_x(\Omega)} - C \| \theta \|_{L^2_x(\Omega)} \| \varrho \|_{L^2_x(\Omega)} \\ &\qquad\qquad\quad -C \| \theta \|_{L^2_x(\Omega)}^2- C \| f^\perp \|_{{\mathcal H}}^2 -C \| \sqrt{\alpha(2-\alpha)} D^\perp f_+ \|_{\partial {\mathcal H}_+}^2. \end{aligned} $$ \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:momentum}] Thanks to \eqref{Mqf} and \eqref{eq:UmLfH1}, we have $$ \begin{aligned} &\left| \left\langle -\nabla_x^s U[m[{\mathscr L} f]] , \sqrt{\frac{2}{d}} \theta I_d + M_q[f^\perp] \right\rangle_{L^2_x(\Omega)} \right| \\ &\lesssim \| \nabla_x^s U[m[{\mathscr L} f]] \|_{L^2_x(\Omega)} \left( \| \theta \|_{L^2_x(\Omega)} + \| f^\perp \|_{{\mathcal H}} \right)\\ &\lesssim \left( \| \varrho \|_{L^2_x(\Omega)} + \| \theta \|_{L^2_x(\Omega)} + \| f^\perp \|_{{\mathcal H}} + \| \sqrt{\alpha(2-\alpha)} D^\perp f_+ \|_{\partial {\mathcal H}_+}\right) \left( \| \theta \|_{L^2_x(\Omega)} + \| f^\perp \|_{{\mathcal H}} \right) , \end{aligned} $$ which allows us to bound the second term in the LHS of the estimate of the statement. For the first term, we write $M_q[{\mathscr L} f] = M_q[ - v \cdot \nabla_x f] + M_q[{\mathscr C} f^\perp]$ to obtain $$ \left\langle -\nabla_x^s U[m], M_q [{\mathscr L} f] \right\rangle_{L^2_x(\Omega)} = T_1 + T_2, $$ with $$ T_1 := \left\langle (\nabla_x^s U[m])_{ij} , \partial_{x_k} \int_{\mathbf R^d} q_{ij}(v) v_k f \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)} $$ and $$ T_2 := \left\langle -\nabla_x^s U[m] , \int_{\mathbf R^d} q(v) {\mathscr C} f^\perp \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)}. 
$$ Observing that $$ \int_{\mathbf R^d} q(v) {\mathscr C} f^\perp \, \mathrm{d} v = \left( f^\perp , {\mathscr C} (q \mu) \right)_{L^2_v(\mu^{-1})}, $$ we get from~\eqref{eq:UmfH2} that $$ |T_2| \lesssim \| \nabla_x^s U[m] \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} \lesssim \| m \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}}. $$ For the term $T_1$, thanks to an integration by parts, we may write $$ \begin{aligned} T_1 & = - \left\langle \partial_{x_k} (\nabla_x^s U[m])_{ij} , \int_{\mathbf R^d} q_{ij}(v) v_k f \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)} \\ &\quad + \int_{\partial\Omega} (\nabla_x^s U[m])_{ij} n_k(x) \left( \int_{\mathbf R^d} q_{ij}(v) v_k \, \gamma f \, \mathrm{d} v \right) \d\sigma_{\! x} \\ &=: A+B. \end{aligned} $$ Thanks to the decomposition \eqref{eq:fdecomposition}, we get $$ \int_{\mathbf R^d} q_{ij}(v) v_k f \, \mathrm{d} v = \delta_{jk} m_i + \delta_{ik} m_j + \int_{\mathbf R^d} q_{ij}(v) v_k f^\perp \, \mathrm{d} v, $$ and hence $$ \begin{aligned} A &= 2 \left\langle - \Div_x ( \nabla_x^s U[m] ) , m \right\rangle_{L^2_x(\Omega)} - \left\langle \partial_{x_k} (\nabla_x^s U[m])_{ij} , \int_{\mathbf R^d} q_{ij}(v) v_k f^\perp \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)} \\ &= 2 \|m\|^2_{L^2_x(\Omega)} - \left\langle \partial_{x_k} (\nabla_x^s U[m])_{ij} , \int_{\mathbf R^d} q_{ij}(v) v_k f^\perp \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)}, \end{aligned} $$ since $-\Div_x ( \nabla_x^s U[m] ) = m$ by definition of $U[m]$. Using~\eqref{eq:UmfH2}, we have $$ \begin{aligned} \left| \left\langle \partial_{x_k} (\nabla_x^s U[m])_{ij} , \int_{\mathbf R^d} q_{ij}(v) v_k f^\perp \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)} \right| &\lesssim \| \nabla_x^2 U[m] \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} \\ &\lesssim \| m \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}}. \end{aligned} $$ We thus obtain, thanks to Young's inequality, $$ A \geqslant \| m \|_{L^2_x(\Omega)}^2 - C \| f^\perp \|_{{\mathcal H}}^2. 
$$ We now investigate the boundary term $B$. Thanks to Lemma~\ref{lem:boundary}, we have $$ \begin{aligned} B &= \int_{\Sigma} \nabla_x^s U[m] : q(v) \, \gamma f \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &= \int_{\Sigma_+} \nabla_x^s U[m] : q(v) \alpha(x) D^\perp f_{+} \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &\quad + \int_{\Sigma_+} \nabla_x^s U[m] : [q(v)-q(R_x v)] (1-\alpha(x)) D^\perp f_{+} \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &\quad + \int_{\Sigma_+} \nabla_x^s U[m] : [q(v)-q(R_x v)] D f_{+} \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &=: B_1 + B_2 + B_3, \end{aligned} $$ and we remark that $$ q(v) - q(R_x v) = 4 \left[ (n(x) \otimes v)^{\operatorname{sym}} - n(x) \otimes n(x) (n(x) \cdot v) \right] (n(x) \cdot v), $$ where, for any matrix $M \in \mathcal{M}_d(\mathbf R)$, we set $(M^{\operatorname{sym}})_{ij} = \frac12(M_{ij} + M_{ji})$, so that $$ \begin{aligned} &\nabla_x^s U[m] : [q(v)-q(R_x v)] \\ &\quad = 4 \Big\{ \nabla_x^s U[m] :(n(x) \otimes v)^{\operatorname{sym}} - \nabla_x^s U[m] : n(x) \otimes n(x) (n(x) \cdot v) \Big\} (n(x) \cdot v). \end{aligned} $$ Taking the scalar product with $v$ in the boundary condition satisfied by $U[m]$, we see that we already have $B = 0$ in the case $\alpha \equiv 0$. Otherwise, when $\alpha \not\equiv 0$, we first treat the term $B_3$: making a change of variables $v \mapsto R_x v$, using also that $(R_x v \cdot n) = - (v \cdot n)$, and recalling that $D f(x,v) = c_\mu \mu(v) \widetilde f(x)$, we obtain $$ \begin{aligned} B_3 &= 2c_\mu \int_{\Sigma} \nabla_x^s U[m] : q(v) \mu(v) \widetilde f(x) \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\!x} \\ &= 2c_\mu \int_{\partial\Omega} (\nabla_x^s U[m])_{ij} n_k(x) \widetilde f(x) \left( \int_{\mathbf R^d} q_{ij}(v) v_k \mu(v) \, \mathrm{d} v \right) \d\sigma_{\! x} = 0, \end{aligned} $$ since the integral in $v$ vanishes.
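Both algebraic facts used above can be checked numerically. The following sketch is a sanity check, not part of the proof; it assumes the normalization $q_{ij}(v) = v_i v_j - \delta_{ij}$ and the specular reflection $R_x v = v - 2\,(n(x)\cdot v)\, n(x)$ at a unit normal $n(x)$, verifies the identity for $q(v)-q(R_x v)$ on random vectors, and estimates one representative odd Gaussian moment $\int q_{ij}(v) v_k \mu(v)\,\mathrm{d} v$, which vanishes by parity.

```python
import random

# Sanity check (not part of the proof) of two facts used above, assuming the
# normalization q_ij(v) = v_i v_j - delta_ij and the specular reflection
# R_x v = v - 2 (n.v) n at a unit normal n:
#   (i)  q(v) - q(R_x v) = 4 [ (n (x) v)^sym - (n (x) n)(n.v) ] (n.v);
#   (ii) the odd Gaussian moments  int q_ij(v) v_k mu(v) dv  vanish.
random.seed(0)
d = 3

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

for _ in range(50):
    n = [random.gauss(0, 1) for _ in range(d)]
    s = dot(n, n) ** 0.5
    n = [x / s for x in n]                           # unit normal
    v = [random.gauss(0, 1) for _ in range(d)]
    Rv = [v[i] - 2 * dot(n, v) * n[i] for i in range(d)]
    for i in range(d):
        for j in range(d):
            lhs = v[i] * v[j] - Rv[i] * Rv[j]        # q_ij(v) - q_ij(R_x v)
            sym = 0.5 * (n[i] * v[j] + n[j] * v[i])  # (n (x) v)^sym entry
            rhs = 4 * (sym - n[i] * n[j] * dot(n, v)) * dot(n, v)
            assert abs(lhs - rhs) < 1e-10

# (ii) Monte-Carlo estimate of one representative third-order moment,
# which should be close to 0 (the integrand is odd in v).
N = 200000
moment = sum(random.gauss(0, 1) * random.gauss(0, 1) * random.gauss(0, 1)
             for _ in range(N)) / N
assert abs(moment) < 0.05
```

The identity in (i) is exactly what makes the reflected part of the boundary term reduce to normal components, which the boundary condition on $U[m]$ then kills.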
For the term $B_1$, the Cauchy-Schwarz inequality and~\eqref{eq:UmfH2} give $$ \begin{aligned} |B_1| &\lesssim \| \nabla_x^s U[m] \|_{L^2_x(\partial\Omega)} \| \alpha D^\perp f_+ \|_{\partial {\mathcal H}_+} \\ &\lesssim \| m \|_{L^2_x(\Omega)} \| \alpha D^\perp f_+ \|_{\partial {\mathcal H}_+} . \end{aligned} $$ For the term $B_2$, the boundary condition satisfied by $U[m]$ implies $$ \begin{aligned} \nabla_x^s U[m] : [q(v)-q(R_x v)] (1-\alpha(x)) &= - \frac{1-\alpha(x)}{2 - \alpha(x)} 4 \alpha(x) (U[m] \cdot v) (n(x) \cdot v), \end{aligned} $$ hence we obtain $$ \begin{aligned} |B_2| &= 4\left| \int_{\Sigma_{+}} (U[m] \cdot v) \, \frac{1-\alpha(x)}{2 - \alpha(x)} \alpha(x) D^\perp f_{+} \, (n(x) \cdot v)^2 \, \mathrm{d} v \, \d\sigma_{\! x} \right| \\ &\lesssim \| U[m] \|_{L^2_x(\partial\Omega)} \| \alpha D^\perp f_+ \|_{\partial {\mathcal H}_+} \\ &\lesssim \| m \|_{L^2_x(\Omega)} \| \alpha D^\perp f_+ \|_{\partial {\mathcal H}_+} . \end{aligned} $$ The proof is then complete by gathering the previous estimates, using Young's inequality and observing that $\sqrt{\alpha(2-\alpha)} \geqslant \alpha$. \end{proof} \subsection{Mass}\label{ssec:mass} In this subsection we introduce the last functional, which is built in order to control the mass component of the macroscopic part $\pi f$. We denote $$ \varrho[g] := \int_{\mathbf R^d} g \, \mathrm{d} v , $$ so that $\varrho = \varrho[f]$. We consider $u_{\mathrm{N}}[\varrho]$, the solution to the Poisson equation \eqref{eq:elliptic} with Neumann boundary condition associated to $\xi = \varrho \in L^2_x(\Omega)$ constructed in Theorem~\ref{theo:Poisson}, namely $u_{\mathrm{N}} [\varrho]$ satisfies a.e. \begin{equation} \label{eq:systemuNrho} \left\{ \begin{aligned} - \Delta_x u_{\mathrm{N}}[\varrho] &= \varrho \quad \text{in} \quad \Omega , \\ \nabla_x u_{\mathrm{N}}[\varrho] \cdot n(x) &= 0 \quad \text{on} \quad \partial\Omega, \end{aligned} \right.
\end{equation} which is indeed well-defined since $\left\langle \varrho \right\rangle = 0$. In particular, we have \begin{equation} \label{eq:uNrhoH2} \| u_{\mathrm{N}}[\varrho] \|_{H^2_x(\Omega)} \lesssim \| \varrho \|_{L^2_x(\Omega)}. \end{equation} \begin{lem}\label{lem:rLLf} There holds \begin{equation}\label{rhoLf} \varrho [{\mathscr L} f] = - \nabla_x \cdot m . \end{equation} As a consequence of Theorem~\ref{theo:Poisson}, the unique variational solution $u_{\mathrm{N}}[\varrho[{\mathscr L} f]]$ to \eqref{eq:varValpha} with Neumann boundary condition associated to $\xi = \varrho[{\mathscr L} f]$ satisfies \begin{equation}\label{eq:uNrhoLfH1} \| u_{\mathrm{N}}[\varrho[{\mathscr L} f]] \|_{H^1_x(\Omega)} \lesssim \| m \|_{L^2_x(\Omega)}. \end{equation} \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:rLLf}] Since ${\mathscr L} f = - v\cdot \nabla_x f + {\mathscr C} f^\perp$, one has $$ \begin{aligned} \varrho[{\mathscr L} f] = \varrho[- v \cdot \nabla_x f] = - \nabla_x \cdot \int_{\mathbf R^d} v f \, \mathrm{d} v \end{aligned} $$ which gives \eqref{rhoLf}. Now let $u := u_{\mathrm{N}}[\varrho[{\mathscr L} f]]$ be the unique variational solution to \eqref{eq:elliptic} with Neumann boundary condition associated to $\xi = \varrho[{\mathscr L} f]$ given by Theorem~\ref{theo:Poisson}. From the variational formulation \eqref{eq:varValpha} we have, thanks to an integration by parts, $$ \begin{aligned} \| \nabla_x u \|_{L^2_x(\Omega)}^2 & = - \int_{\Omega} (\nabla_x \cdot m) u \, \mathrm{d} x \\ &= \int_{\Omega} m \cdot \nabla_x u \, \mathrm{d} x - \int_{\partial\Omega} m \cdot n(x) \, u \, \d\sigma_{\! x} = \int_{\Omega} m \cdot \nabla_x u \, \mathrm{d} x \end{aligned} $$ where we have used that $m \cdot n(x) = 0$ in the last equality. We therefore obtain \eqref{eq:uNrhoLfH1} thanks to the Cauchy-Schwarz inequality. \end{proof} We now establish the following result, which gives a control of the mass $\varrho$.
\begin{lem}\label{lem:mass} There are constants $\kappa_3 , C >0$ such that $$ \begin{aligned} &\left\langle -\nabla_x u_{\mathrm{N}}[\varrho], m [{\mathscr L} f] \right\rangle_{L^2_x(\Omega)} + \left\langle -\nabla_x u_{\mathrm{N}}[\varrho[{\mathscr L} f]] , m[f] \right\rangle_{L^2_x(\Omega)}\\ &\qquad\qquad \geqslant \kappa_3 \| \varrho \|_{L^2_x(\Omega)}^2 - C \left(\| m \|_{L^2_x(\Omega)}^2 + \| \theta \|_{L^2_x(\Omega)}^2 + \| f^\perp \|_{{\mathcal H}}^2 \right) \\ &\qquad\qquad\quad - C \| \sqrt{\alpha(2-\alpha)} D^\perp f_+ \|_{\partial {\mathcal H}_+}^2 . \end{aligned} $$ \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:mass}] From~\eqref{eq:uNrhoLfH1}, we have $$ \begin{aligned} \left| \left\langle -\nabla_x u_{\mathrm{N}}[\varrho[{\mathscr L} f]] , m[f] \right\rangle_{L^2_x(\Omega)} \right| &\lesssim \| \nabla_x u_{\mathrm{N}}[\varrho[{\mathscr L} f]] \|_{L^2_x(\Omega)} \| m \|_{L^2_x(\Omega)} \lesssim \| m \|_{L^2_x(\Omega)}^2, \end{aligned} $$ which bounds the second term on the left-hand side of the stated estimate. For the first term, writing $m[{\mathscr L} f] = m[-v\cdot \nabla_x f] + m[{\mathscr C} f^\perp] $ and observing that $m[{\mathscr C} f^\perp] = 0$, we obtain $$ \left\langle -\nabla_x u_{\mathrm{N}}[\varrho], m [{\mathscr L} f] \right\rangle_{L^2_x(\Omega)} = \left\langle \partial_{x_i} u_{\mathrm{N}}[\varrho] , \partial_{x_j} \int_{\mathbf R^d} v_i v_j f \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)}. $$ We then write $$ \begin{aligned} &\left\langle \partial_{x_i} u_{\mathrm{N}}[\varrho] , \partial_{x_j} \int_{\mathbf R^d} v_i v_j f \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)} \\ &\qquad = -\left\langle \partial_{x_j} \partial_{x_i} u_{\mathrm{N}}[\varrho] , \int_{\mathbf R^d} v_i v_j f \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)} + \int_{\partial\Omega} \partial_{x_i} u_{\mathrm{N}}[\varrho] n_j(x) \left( \int_{\mathbf R^d} v_i v_j \, \gamma f \, \mathrm{d} v \right) \d\sigma_{\! x} \\ &\qquad =: A+B.
\end{aligned} $$ Thanks to the decomposition \eqref{eq:fdecomposition}, we get $$ \int_{\mathbf R^d} v_i v_j f \, \mathrm{d} v = \delta_{ij} \varrho + \delta_{ij} \sqrt{\frac{2}{d}} \theta + \int_{\mathbf R^d} v_i v_j f^\perp \, \mathrm{d} v, $$ and hence $$ \begin{aligned} A &= \left\langle -\Delta_x u_{\mathrm{N}}[\varrho] , \varrho \right\rangle_{L^2_x(\Omega)} + \sqrt{\frac{2}{d}} \left\langle -\Delta_x u_{\mathrm{N}}[\varrho] , \theta \right\rangle_{L^2_x(\Omega)} - \left\langle \partial_{x_j} \partial_{x_i} u_{\mathrm{N}}[\varrho] , \int_{\mathbf R^d} v_i v_j f^\perp \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)}\\ &= \|\varrho\|^2_{L^2_x(\Omega)} + \sqrt{\frac{2}{d}} \left\langle -\Delta_x u_{\mathrm{N}}[\varrho] , \theta \right\rangle_{L^2_x(\Omega)} - \left\langle \partial_{x_j} \partial_{x_i} u_{\mathrm{N}}[\varrho] , \int_{\mathbf R^d} v_i v_j f^\perp \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)}, \end{aligned} $$ since $-\Delta_x u_{\mathrm{N}}[\varrho] = \varrho$ by definition of $u_{\mathrm{N}}[\varrho]$. Using~\eqref{eq:uNrhoH2}, we have $$ \begin{aligned} \left| \left\langle \partial_{x_j} \partial_{x_i} u_{\mathrm{N}}[\varrho] , \int_{\mathbf R^d} v_i v_j f^\perp \, \mathrm{d} v \right\rangle_{L^2_x(\Omega)} \right| &\lesssim \| \nabla_x^2 u_{\mathrm{N}}[\varrho] \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} \\ &\lesssim \| \varrho \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} , \end{aligned} $$ from which it follows, thanks to Young's inequality, $$ A \geqslant \frac12 \| \varrho \|_{L^2_x(\Omega)}^2 - C \| \theta \|_{L^2_x(\Omega)}^2 - C \| f^\perp \|_{{\mathcal H}}^2. $$ We now investigate the boundary term $B$. Thanks to Lemma~\ref{lem:boundary} we have $$ \begin{aligned} B &= \int_{\Sigma} \nabla_x u_{\mathrm{N}}[\varrho] \cdot v \, \gamma f \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &= \int_{\Sigma_{+}} \nabla_x u_{\mathrm{N}}[\varrho]\cdot v \alpha(x) D^\perp f_{+} \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\!
x} \\ &\quad + \int_{\Sigma_{+}} \nabla_x u_{\mathrm{N}}[\varrho] \cdot [v - R_x v] (1-\alpha(x)) D^\perp f_{+} \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &\quad + \int_{\Sigma_{+}} \nabla_x u_{\mathrm{N}}[\varrho] \cdot [v - R_x v] D f_{+} \, n(x) \cdot v \, \mathrm{d} v \, \d\sigma_{\! x} \\ &=: B_1 + B_2 + B_3, \end{aligned} $$ and we remark that $$ v - R_x v = 2 n(x) (n(x) \cdot v), $$ so that $$ \nabla_x u_{\mathrm{N}}[\varrho] \cdot [v - R_x v] = 2 \nabla_x u_{\mathrm{N}}[\varrho] \cdot n(x) \, (n(x) \cdot v). $$ Therefore, thanks to the boundary condition satisfied by $u_{\mathrm{N}}[\varrho]$ in~\eqref{eq:systemuNrho}, we already obtain~$B_2 = B_3 = 0$. In the case $\alpha \equiv 0$, we also have $B_1=0$. Otherwise, when $\alpha \not \equiv 0$, the Cauchy-Schwarz inequality and~\eqref{eq:uNrhoH2} yield $$ \begin{aligned} |B_1| &\lesssim \| \nabla_x u_{\mathrm{N}}[\varrho] \|_{L^2_x(\partial\Omega)} \| \alpha D^\perp f_+ \|_{\partial {\mathcal H}_+} \\ &\lesssim \| \varrho \|_{L^2_x(\Omega)} \| \alpha D^\perp f_+ \|_{\partial {\mathcal H}_+} . \end{aligned} $$ The proof is then complete by gathering all the previous estimates, using Young's inequality and observing again that $\sqrt{\alpha(2-\alpha)} \geqslant \alpha$. \end{proof} \subsection{Proof of Theorem \ref{theo:hypo}}\label{ssec:conclusion} We define the scalar product $\left\langle\!\left\langle \cdot, \cdot \right\rangle\!\right\rangle$ on~${\mathcal H}$ by $ \begin{aligned} \left\langle \! \left\langle f , g \right\rangle \!
\right\rangle &:= \left\langle f , g \right\rangle_{{\mathcal H}} \\ &\quad + \eta_1 \left\langle -\nabla_x u[\theta[f]] , M_p [g] \right\rangle_{L^2_x(\Omega)} + \eta_1\left\langle -\nabla_x u[\theta [g]], M_p [f] \right\rangle_{L^2_x(\Omega)} \\ &\quad + \eta_2 \left\langle -\nabla_x^s U[m[f]] , M_q [g] \right\rangle_{L^2_x(\Omega)} + \eta_2 \left\langle -\nabla_x^s U[m[g]] , M_q [f] \right\rangle_{L^2_x(\Omega)} \\ &\quad + \eta_3 \left\langle -\nabla_x u_{\mathrm{N}}[\varrho[f]] , m [g] \right\rangle_{L^2_x(\Omega)} + \eta_3 \left\langle -\nabla_x u_{\mathrm{N}}[\varrho[g]] , m[f] \right\rangle_{L^2_x(\Omega)} \end{aligned} $ with $0 \ll \eta_3 \ll \eta_2 \ll \eta_1 \ll 1$, and where we recall that the moments $M_p$ and $M_q$ are defined respectively in \eqref{eq:def-Mp} and \eqref{eq:def-Mq}; $u[\theta[f]]$ is the solution of the Poisson equation~\eqref{eq:elliptic} with data $\theta[f]$; $U[m[f]]$ is the solution to the elliptic system \eqref{eq:elliptic-korn} with data $m[f]$; $u_{\mathrm{N}}[\varrho[f]]$ is the solution to the Poisson equation with homogeneous Neumann boundary condition~\eqref{eq:systemuNrho} with data $\varrho[f]$, and similarly for the terms depending on $g$. We denote by $|\hskip-0.04cm|\hskip-0.04cm| \cdot |\hskip-0.04cm|\hskip-0.04cm|$ the norm associated to the scalar product $\left\langle \! \left\langle \cdot , \cdot \right\rangle \! \right\rangle$, and we observe that $$ \| f \|_{{\mathcal H}} \lesssim |\hskip-0.04cm|\hskip-0.04cm| f |\hskip-0.04cm|\hskip-0.04cm| \lesssim \| f \|_{{\mathcal H}}. $$ Let $f$ satisfy the assumptions of Theorem~\ref{theo:hypo}. Recalling that we denote $\varrho=\varrho[f]$, $m=m[f]$ and $\theta=\theta[f]$, noting that $\sqrt{\alpha(2-\alpha)} \geqslant \alpha$ since $\alpha$ takes values in $[0,1]$, and gathering Lemmas~\ref{lem:micro},~\ref{lem:energy},~\ref{lem:momentum} and~\ref{lem:mass}, one has $$ \begin{aligned} \left\langle \! \left\langle - {\mathscr L} f , f \right\rangle \! 
\right\rangle &\geqslant \lambda \| f^\perp \|_{{\mathcal H}}^2 + \frac12 \| \sqrt{\alpha(2-\alpha)} D^\perp f_{+} \|_{\partial {\mathcal H}_+}^2 \\ &\quad +\eta_1 \Big( \kappa_1 \| \theta \|_{L^2_x(\Omega)}^2 - C \| m \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} \\&\qquad \qquad - C \| f^\perp \|_{{\mathcal H}} ^2 - C \| \sqrt{\alpha(2-\alpha)} D^\perp f_{+} \|_{\partial {\mathcal H}_+}^2 \Big)\\ &\quad +\eta_2 \Big( \kappa_2 \| m \|_{L^2_x(\Omega)}^2 - C \| \varrho \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} - C \| \varrho \|_{L^2_x(\Omega)} \| \theta \|_{L^2_x(\Omega)}\\ &\qquad \qquad- C \| \theta \|_{L^2_x(\Omega)}^2 - C \| f^\perp \|_{{\mathcal H}}^2 - C \| \sqrt{\alpha(2-\alpha)} D^\perp f_{+} \|_{\partial {\mathcal H}_+}^2 \Big) \\ &\quad +\eta_3 \Big( \kappa_3 \| \varrho \|_{L^2_x(\Omega)}^2 - C \| m \|_{L^2_x(\Omega)}^2 - C \| \theta \|_{L^2_x(\Omega)}^2 \\ &\qquad \qquad- C \| f^\perp \|_{{\mathcal H}}^2 - C \| \sqrt{\alpha(2-\alpha)} D^\perp f_{+} \|_{\partial {\mathcal H}_+}^2 \Big). \end{aligned} $$ Thanks to Young's inequality, we have $$ \begin{aligned} \eta_1 C \| m \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} &\leqslant \frac{\lambda}{4} \| f^\perp \|_{{\mathcal H}}^2 + C \eta_1^2 \| m \|_{L^2_x(\Omega)}^2, \\ \eta_2 C \| \varrho \|_{L^2_x(\Omega)} \| f^\perp \|_{{\mathcal H}} &\leqslant \frac{\lambda}{4} \| f^\perp \|_{{\mathcal H}}^2 + C \eta_2^2 \| \varrho \|_{L^2_x(\Omega)}^2 , \\ \eta_2 C \| \varrho \|_{L^2_x(\Omega)} \| \theta \|_{L^2_x(\Omega)} &\leqslant \frac{\eta_1 \kappa_1}{2} \| \theta \|_{L^2_x(\Omega)}^2 + C \frac{\eta_2^2}{\eta_1} \| \varrho \|_{L^2_x(\Omega)}^2 . \end{aligned} $$ We thus obtain $$ \begin{aligned} \left\langle \! \left\langle - {\mathscr L} f , f \right\rangle \! 
\right\rangle &\geqslant \left(\frac{\lambda}{2} - \eta_1 C - \eta_2 C - \eta_3 C\right) \| f^\perp \|_{{\mathcal H}}^2 \\ &\quad + \left(\frac12 - \eta_1 C - \eta_2 C - \eta_3 C \right) \| \sqrt{\alpha(2-\alpha)} D^\perp f_{+} \|_{\partial {\mathcal H}_+}^2 \\ &\quad +\left(\frac{\eta_1 \kappa_1}{2} - \eta_2 C - \eta_3 C \right)\| \theta \|_{L^2_x(\Omega)}^2 \\ &\quad + \left( \eta_2 \kappa_2 - \eta_1^2 C - \eta_3 C \right)\| m \|_{L^2_x(\Omega)}^2 \\ &\quad + \left( \eta_3 \kappa_3 - \eta_2^2 C - \frac{\eta_2^2}{\eta_1}C\right)\| \varrho \|_{L^2_x(\Omega)}^2 . \end{aligned} $$ We now choose $\eta_1 := \eta$, $\eta_2 := \eta^{\frac{3}{2}}$, $\eta_3 := \eta^{\frac{7}{4}}$, and we deduce $$ \begin{aligned} \left\langle \! \left\langle - {\mathscr L} f , f \right\rangle \! \right\rangle &\geqslant \left(\frac{\lambda}{2} - \eta C \right) \| f^\perp \|_{{\mathcal H}}^2 + \left(\frac12 - \eta C \right) \| \sqrt{\alpha(2-\alpha)} D^\perp f_{+} \|_{\partial {\mathcal H}_+}^2 \\ &\quad +\eta \left( \frac{\kappa_1}{2} - \eta^{\frac{1}{2}} C \right)\| \theta \|_{L^2_x(\Omega)}^2 + \eta^{\frac{3}{2}} \left( \kappa_2 - \eta^{\frac{1}{4}} C \right)\| m \|_{L^2_x(\Omega)}^2 \\ &\quad + \eta^{\frac{7}{4}} \left( \kappa_3 - \eta^{\frac{1}{4}} C\right)\| \varrho \|_{L^2_x(\Omega)}^2 . \end{aligned} $$ Choosing $0 < \eta < 1$ small enough, we get $$ \begin{aligned} \left\langle \! \left\langle - {\mathscr L} f , f \right\rangle \! \right\rangle &\geqslant \kappa \left( \| f^\perp \|_{{\mathcal H}}^2 +\| \varrho \|_{L^2_x(\Omega)}^2 +\| m \|_{L^2_x(\Omega)}^2 +\| \theta \|_{L^2_x(\Omega)}^2 \right) \\ &\quad + \kappa' \| \sqrt{\alpha(2-\alpha)} D^\perp f_{+} \|_{\partial {\mathcal H}_+}^2 \end{aligned} $$ for some constants $\kappa,\kappa' >0$. 
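The hierarchy $\eta_1 = \eta$, $\eta_2 = \eta^{3/2}$, $\eta_3 = \eta^{7/4}$ can also be checked numerically. In the following sketch the constants $\lambda$, $\kappa_i$, $C$ are illustrative placeholders (not values coming from the proof); the point is only that, for $\eta$ small enough, all five coefficients in the final lower bound are simultaneously positive.

```python
# Numeric sketch of the choice eta_1 = eta, eta_2 = eta^(3/2), eta_3 = eta^(7/4).
# lam, k1, k2, k3, C are placeholder constants, not values from the proof.
lam, k1, k2, k3, C = 1.0, 1.0, 1.0, 1.0, 10.0

def coefficients(eta):
    e1, e2, e3 = eta, eta ** 1.5, eta ** 1.75
    return [
        lam / 2 - (e1 + e2 + e3) * C,            # coefficient of ||f_perp||^2
        0.5 - (e1 + e2 + e3) * C,                # boundary term
        e1 * k1 / 2 - (e2 + e3) * C,             # ||theta||^2
        e2 * k2 - (e1 ** 2 + e3) * C,            # ||m||^2
        e3 * k3 - (e2 ** 2 + e2 ** 2 / e1) * C,  # ||rho||^2
    ]

assert all(c > 0 for c in coefficients(1e-6))    # small eta works
assert min(coefficients(0.5)) < 0                # eta too large fails
```

The exponents are chosen so that each error term $\eta_i^2$ or $\eta_{i+1}$ is dominated by the preceding good term, which is exactly what the factorizations $\eta^{3/2}(\kappa_2 - \eta^{1/4}C)$ and $\eta^{7/4}(\kappa_3 - \eta^{1/4}C)$ above express.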
We conclude the proof of Theorem~\ref{theo:hypo} since $$ \| f^\perp \|_{{\mathcal H}}^2 +\| \varrho \|_{L^2_x(\Omega)}^2 +\| m \|_{L^2_x(\Omega)}^2 +\| \theta \|_{L^2_x(\Omega)}^2 = \| f \|_{{\mathcal H}}^2 $$ and $\| \cdot \|_{{\mathcal H}}$ is equivalent to $|\hskip-0.04cm|\hskip-0.04cm| \cdot |\hskip-0.04cm|\hskip-0.04cm|$. \qed \section{Weakly coercive operators} \label{sec:weak} In this section we extend our method to the case in which the collision operator ${\mathscr C}$ is \emph{weakly coercive}, that is, it satisfies assumption (A2') below which is weaker than the coercive estimate of assumption (A2) in Subsection~\ref{subsec:equation}. In this situation we do not expect to obtain an exponential decay but only a sub-exponential one, assuming further integrability/regularity properties of the initial data; in other words, the semigroup associated to the full linear operator ${\mathscr L}$ is not uniformly exponentially stable but only strongly stable. These weakly coercive operators arise naturally in several classes of evolution PDEs. In the setting of control theory and wave-type equations we refer to the works \cite{Lebeau,LebeauRobb,Burq,MR3219503,MR3626005} and the references therein, in which the energy of the equation is shown to decay with non-exponential rate. These results have then inspired an abstract theory for strongly stable semigroups. We refer to \cite{BD,BEPS,BCT} and the references therein, where such a line of research is developed. In the framework of kinetic equations, the works \cite{Caflisch1,Caflisch2} have established the sub-exponential decay of the semigroup associated to the linearized cutoff Boltzmann equation with soft potentials. We also refer to the works \cite{GS1,GS2} that establish decay estimates for the non-cutoff Boltzmann and Landau equations with very soft potentials, as well as \cite{MR3625186} for the Landau equation.
All these results are established in the torus or the whole space, and, to the best of our knowledge, the only works concerning domains with boundary conditions are the recent results of \cite{MR4076068} for the Landau equation with specular reflection boundary condition, and \cite{DuanLiuSakamotoStrain} for non-cutoff Boltzmann and Landau equations in a finite channel with specular reflection or inflow boundary conditions. Concerning Fokker-Planck equations and kinetic Fokker-Planck equations we refer to \cite{RockWang,KM} and \cite{MR4069622}, as well as the references therein. We also mention the results concerning degenerate linear transport equations \cite{MR2569870,MR3048598,MR4063917}, as well as degenerate linear Boltzmann equations \cite{MR3479064}. Finally, the free transport equation with diffusive or Maxwell boundary condition has been tackled in \cite{MR2765738,MR3199988,MR4179249} for instance. \smallskip We assume in this section that the operator ${\mathscr C}$ satisfies (A1) on $L^2_v (\mu^{-1})$, as well as: \begin{itemize} \item[(A2')] The operator is self-adjoint on $L^2_v(\mu^{-1})$ and nonpositive, that is, $({\mathscr C} f , f )_{L^2_v(\mu^{-1})} \leqslant 0$, so that its spectrum is included in $\mathbf R_{-}$, and \eqref{eq:local-conservations} holds true for any $g \in \mathrm{Dom}({\mathscr C})$. We assume further that ${\mathscr C}$ satisfies a weak coercivity estimate: there is a positive constant $\lambda >0$ and a radially symmetric function $\omega_0 : \mathbf R^d \to [1,\infty)$ with $\lim_{|v| \to \infty} \omega_0(v) = \infty$ such that for any $f \in \mathrm{Dom} ({\mathscr C})$ one has $ (-{\mathscr C} f , f)_{L^2_v(\mu^{-1})} \geqslant \lambda \| f^\perp \|_{L^2_{v}( \omega_0^{-1} \mu^{-1})}^2, $ where $f^\perp := f - \pi f$.
\item[(A3')] For any polynomial function $\phi=\phi(v) : \mathbf R^d \to \mathbf R$ of degree $\leqslant 4$, one has $ \mu \phi \in \mathrm{Dom} ({\mathscr C})$ with $$ \| {\mathscr C} (\phi \mu) \|_{L^2_v(\omega_0 \mu^{-1})} < \infty, $$ and, for some positive constant $C > 0$, for all $f \in \mathrm{Dom}({\mathscr C})$, $$ \left| \int_{\mathbf R^d} \phi(v) f^\perp \, \mathrm{d} v \right| \leqslant C \| f^\perp \|_{L^2_{v}(\omega_0^{-1}\mu^{-1})}. $$ \item[(A4)] There exists a radially symmetric function $\omega_1 : \mathbf R^d \to [1,\infty)$ with $\lim_{|v| \to \infty} \omega_1(v) = \infty$ and a positive constant $C>0$ such that for any $f \in \mathrm{Dom} ({\mathscr L})$, one has $ \left\langle {\mathscr L} f , f \right\rangle_{L^2_{x,v} ( \omega_1 \mu^{-1})} \leqslant C \| f \|_{L^2_{x,v}(\omega_0^{-1} \mu^{-1})}^2. $ \end{itemize} \medskip We recall that ${\mathcal H} = L^2_{x,v}(\mu^{-1})$ and in this Section, we will also use the following notations: ${\mathcal H}_0 := L^2_{x,v} (\omega_0^{-1} \mu^{-1})$ and ${\mathcal H}_1 := L^2_{x,v} (\omega_1 \mu^{-1})$. Remark now that we have $$ \|f ^{\perp}\|^2_{{\mathcal H}_0} + \| \pi f \|^2_{{\mathcal H}_0} \lesssim \| f \|_{{\mathcal H}_0}^2 \lesssim \|f ^{\perp}\|^2_{{\mathcal H}_0} + \| \pi f \|^2_{{\mathcal H}_0} $$ and $$ \|\pi f\|^2_{{\mathcal H}_0} \lesssim \| \varrho \|^2_{L^2_x(\Omega)} + \| m \|^2_{L^2_x(\Omega)} + \| \theta \|_{L^2_x(\Omega)}^2 \lesssim \|\pi f\|^2_{{\mathcal H}_0}. $$ Repeating the proof of Theorem~\ref{theo:hypo} with the above assumptions we obtain: \begin{theo}\label{theo:weak-hypo} There exists a scalar product $\left\langle\!\left\langle \cdot , \cdot \right\rangle \! 
\right\rangle_{{\mathcal H}}$ on the space ${\mathcal H}$ so that the associated norm $|\hskip-0.04cm|\hskip-0.04cm| \cdot |\hskip-0.04cm|\hskip-0.04cm|_{{\mathcal H}}$ is equivalent to the usual norm $\| \cdot \|_{{\mathcal H}}$, and for which the linear operator ${\mathscr L}$ satisfies the following weak coercivity estimate: there is a positive constant $\kappa >0$ such that one has $ \left\langle \! \left\langle - {\mathscr L} f , f \right\rangle\!\right\rangle_{{\mathcal H}} \geqslant \kappa \| f \|_{{\mathcal H}_0}^2 $ for any $f \in \mathrm{Dom}({\mathscr L})$ satisfying the boundary condition \eqref{eq:BdyCond}, assumption~\eqref{eq:C1} and furthermore assumptions \eqref{eq:C2}-\eqref{eq:C3} in the specular reflection case ($\alpha\equiv 0$ in \eqref{eq:BdyCond}). \end{theo} As a consequence of the weak coercivity estimate for ${\mathscr L}$, we obtain the following result of sub-exponential decay to equilibrium. \begin{theo}\label{theo:weak-main} Let $f_{\mathrm{in}} \in {\mathcal H}_1$ satisfy condition~\eqref{eq:C1} and furthermore~\eqref{eq:C2}-\eqref{eq:C3} in the specular reflection case ($\alpha\equiv 0$ in \eqref{eq:BdyCond}). There exist a positive constant~$C>0$ and a decreasing function $\vartheta : \mathbf R_+ \to \mathbf R_+$ with $\lim_{t \to \infty} \vartheta(t) = 0$ such that for any solution $f$ to \eqref{eq:dtf=Lf}--\eqref{eq:BdyCond} (with ${\mathscr C}$ satisfying (A1)--(A2')--(A3')--(A4) above) associated to the initial data~$f_{\mathrm{in}}$, there holds $$ \| f(t) \|_{{\mathcal H}} \leqslant C \vartheta(t) \| f_{\mathrm{in}} \|_{{\mathcal H}_1}, \quad \forall \, t \geqslant 0. $$ \end{theo} \begin{proof}[Proof of Theorem~\ref{theo:weak-main}] Let $f$ be a solution to \eqref{eq:dtf=Lf}--\eqref{eq:BdyCond} associated to $f_{\mathrm{in}} \in \mathrm{Dom}({\mathscr L})$; the general case $f_{\mathrm{in}} \in {\mathcal H}_1$ then follows by a standard density argument.
Thanks to Theorem~\ref{theo:weak-hypo}, we have \begin{equation}\label{eq:weak1} \frac{\d}{\mathrm{d} t} |\hskip-0.04cm|\hskip-0.04cm| f(t) |\hskip-0.04cm|\hskip-0.04cm|_{{\mathcal H}}^2 = 2 \left\langle \! \left\langle {\mathscr L} f(t) , f(t) \right\rangle\!\right\rangle_{{\mathcal H}} \leqslant -\kappa \| f(t) \|_{{\mathcal H}_0}^2. \end{equation} Remark that for any $R>0$ we have the following interpolation inequality \begin{equation}\label{eq:weak2} \| g \|^2_{{\mathcal H}} \leqslant \omega_0(R) \| g \|^2_{{\mathcal H}_0} + \frac{1}{\omega_1(R)} \, \| g \|^2_{{\mathcal H}_1}. \end{equation} Moreover we claim that there is a constant $C>0$ such that \begin{equation}\label{eq:weak3} \| f(t) \|_{{\mathcal H}_1} \leqslant C \| f_{\mathrm{in}} \|_{{\mathcal H}_1}. \end{equation} Indeed, for $\delta >0$ small enough, we define the following scalar product on ${\mathcal H}_1$ $$ \left\langle \! \left\langle f , g \right\rangle \! \right\rangle_{{\mathcal H}_1} := \delta \left\langle f , g \right\rangle_{{\mathcal H}_1} + \left\langle\!\left\langle f , g \right\rangle\!\right\rangle_{{\mathcal H}}. $$ Gathering (A4) and Theorem~\ref{theo:weak-hypo}, we obtain $$ \begin{aligned} \left\langle \! \left\langle {\mathscr L} f , f \right\rangle \! \right\rangle_{{\mathcal H}_1} &\leqslant (\delta C - \kappa) \| f \|_{{\mathcal H}_0}^2 \leqslant 0, \end{aligned} $$ which implies the claim by observing that the norm associated to $ \left\langle \! \left\langle \cdot , \cdot \right\rangle \! \right\rangle_{{\mathcal H}_1} $ is equivalent to the standard norm on ${\mathcal H}_1$.
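The interpolation inequality \eqref{eq:weak2} follows from splitting the $v$-integral at $|v| = R$ and using the monotonicity of the weights. Here is a hedged numeric check, with the assumed polynomial weights $\omega_0(v) = 1+|v|^2$ and $\omega_1(v) = 1+|v|^4$ (any increasing radial weights with values in $[1,\infty)$ would do, and the $\mu^{-1}$ factor is absorbed into the sampled values of $g$):

```python
import random

# Hedged numeric check of the interpolation inequality (eq:weak2) with the
# assumed weights w0(v) = 1 + |v|^2, w1(v) = 1 + |v|^4:
#   ||g||_H^2 <= w0(R) ||g||_{H0}^2 + ||g||_{H1}^2 / w1(R)   for every R > 0,
# since 1 <= w0(R)/w0(|v|) on {|v| <= R} and 1 <= w1(|v|)/w1(R) on {|v| > R}.
random.seed(0)
w0 = lambda r: 1 + r ** 2
w1 = lambda r: 1 + r ** 4

vs = [random.uniform(-5, 5) for _ in range(1000)]   # sample velocities
gs = [random.gauss(0, 1) for _ in range(1000)]      # values of g

H = sum(g * g for g in gs)                           # ||g||_H^2 (discretized)
H0 = sum(g * g / w0(abs(v)) for g, v in zip(gs, vs))  # ||g||_{H0}^2
H1 = sum(g * g * w1(abs(v)) for g, v in zip(gs, vs))  # ||g||_{H1}^2

for R in (0.5, 1.0, 2.0, 4.0):
    assert H <= w0(R) * H0 + H1 / w1(R) + 1e-9
```

The same pointwise splitting argument gives the inequality for the genuine integrals, uniformly in $R$.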
From \eqref{eq:weak1}, \eqref{eq:weak2} and \eqref{eq:weak3}, we therefore deduce $$ \begin{aligned} \frac{\d}{\mathrm{d} t} |\hskip-0.04cm|\hskip-0.04cm| f(t) |\hskip-0.04cm|\hskip-0.04cm|_{{\mathcal H}}^2 &\leqslant -\frac{\kappa}{\omega_0(R)} \, \| f(t) \|_{{\mathcal H}}^2 + \frac{\kappa}{\omega_0(R) \omega_1(R)} \, \| f(t) \|_{{\mathcal H}_1}^2 \\ &\leqslant -\frac{c \kappa}{\omega_0(R)} \, |\hskip-0.04cm|\hskip-0.04cm| f(t) |\hskip-0.04cm|\hskip-0.04cm|_{{\mathcal H}}^2 + \frac{\kappa C}{\omega_0(R) \omega_1(R)} \,\| f_{\mathrm{in}} \|_{{\mathcal H}_1}^2, \end{aligned} $$ for some constant $c>0$, where we have used in the last line that $|\hskip-0.04cm|\hskip-0.04cm| \cdot |\hskip-0.04cm|\hskip-0.04cm|_{{\mathcal H}}$ and $\| \cdot \|_{{\mathcal H}}$ are equivalent, and the above claim. From the above inequality it follows that $$ \begin{aligned} |\hskip-0.04cm|\hskip-0.04cm| f(t) |\hskip-0.04cm|\hskip-0.04cm|_{{\mathcal H}}^2 &\leqslant \exp\left( -\frac{c \kappa}{\omega_0(R)} \, t \right) |\hskip-0.04cm|\hskip-0.04cm| f_{\mathrm{in}} |\hskip-0.04cm|\hskip-0.04cm|_{{\mathcal H}}^2 +\frac{C}{c \omega_1(R)} \, \| f_{\mathrm{in}} \|_{{\mathcal H}_1}^2 \\ &\leqslant \left\{ \exp\left( -\frac{c \kappa}{\omega_0(R)} \, t \right) + \frac{C}{c \omega_1(R)} \right\} \| f_{\mathrm{in}} \|_{{\mathcal H}_1}^2, \end{aligned} $$ for any $R>0$. Defining $$ \vartheta(t) := \left( \inf_{R>0} \left\{ \exp\left( -\frac{c \kappa}{\omega_0(R)} \, t \right) + \frac{C}{c \omega_1(R)} \right\} \right)^{\frac12}, $$ we hence obtain $$ |\hskip-0.04cm|\hskip-0.04cm| f(t) |\hskip-0.04cm|\hskip-0.04cm|_{{\mathcal H}} \leqslant \vartheta(t) \| f_{\mathrm{in}} \|_{{\mathcal H}_1}, $$ which concludes the proof using again that $|\hskip-0.04cm|\hskip-0.04cm| \cdot |\hskip-0.04cm|\hskip-0.04cm|_{{\mathcal H}}$ and $\| \cdot \|_{{\mathcal H}}$ are equivalent.
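To illustrate the decay rate encoded in $\vartheta$, here is a numeric sketch with assumed polynomial weights $\omega_0(R) = 1+R^2$, $\omega_1(R) = 1+R^4$ and placeholder constants $c = \kappa = C = 1$ (none of which come from the proof); the infimum over $R$ is evaluated on a grid, showing that $\vartheta$ decreases to $0$, at an algebraic rate up to logarithmic corrections for such weights.

```python
import math

# Illustration of the sub-exponential rate vartheta(t), with the assumed
# weights w0(R) = 1 + R^2, w1(R) = 1 + R^4 and placeholder constants
# c = kappa = C = 1; the infimum over R is approximated on a grid.
def vartheta(t, c=1.0, kappa=1.0, C=1.0):
    w0 = lambda R: 1 + R ** 2
    w1 = lambda R: 1 + R ** 4
    best = min(math.exp(-c * kappa * t / w0(R)) + C / (c * w1(R))
               for R in (0.1 * k for k in range(1, 2000)))
    return best ** 0.5

values = [vartheta(t) for t in (1, 10, 100, 1000)]
assert all(a > b for a, b in zip(values, values[1:]))   # decreasing in t
assert values[-1] < 0.1                                 # tends to 0
```

The optimal $R$ grows with $t$: the exponential term forces $\omega_0(R) \lesssim t$, and the remainder $C/(c\,\omega_1(R))$ then sets the algebraic rate.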
\end{proof} \section{Hydrodynamic limits} \label{sec:hydro} In this part, we study the following rescaled problem: \begin{eqnarray}\label{eq:dtf=Lepsf} \partial_t f &=& {\mathscr L}_{\varepsilon} f := - {1 \over {\varepsilon}} v \cdot \nabla_x f + {1 \over {\varepsilon}^2} {\mathscr C} f \quad\hbox{in}\quad (0,\infty) \times {\mathcal O}, \\ \label{eq:BdyCondeps} \gamma_{\!-} f &=& {\mathscr R} \gamma_{\!+} f \quad\hbox{on}\quad (0,\infty) \times \Sigma, \end{eqnarray} with ${\varepsilon} \in (0,1]$, ${\mathscr C}$ satisfying assumptions~(A1),~(A2) and~(A3) introduced in Subsection~\ref{subsec:equation} and the boundary condition~\eqref{eq:BdyCondeps} being the same as~\eqref{eq:BdyCond} described in Subsection~\ref{subsec:equation}. The motivation to study this problem comes from the issue of deriving the incompressible Navier-Stokes-Fourier system from kinetic equations. Indeed, it is well-known (see~\cite{BGL1}) that, in order to reach this goal, one introduces the dimensionless Knudsen number ${\varepsilon}$, and the problem reduces to the analysis of the following equation \begin{equation} \label{eq:nonlineareps} \partial_t F^{\varepsilon} = {\mathscr L}_{\varepsilon} F^{\varepsilon} + {1 \over {\varepsilon}} Q(F^{\varepsilon},F^{\varepsilon}) \end{equation} with $$ {\mathscr L}_{\varepsilon} f = - {1 \over {\varepsilon}} v \cdot \nabla_x f + {1 \over {\varepsilon}^2} {\mathscr C} f, \quad {\mathscr C} f := Q(\mu,f) + Q(f,\mu). $$ Then, in order to derive the incompressible Navier-Stokes-Fourier limit from kinetic equations, the aim is to prove that, as ${\varepsilon}$ goes to $0$, a solution $F^{\varepsilon}$ to~\eqref{eq:nonlineareps} converges towards some limit that depends on time and space variables only through macroscopic quantities that are solutions to the incompressible Navier-Stokes-Fourier system.
The starting point of this study is the analysis of the linearized problem~\eqref{eq:dtf=Lepsf}, and our method is robust enough to treat this rescaled problem. More precisely, we are able to provide a large-time stability result for the linear problem~\eqref{eq:dtf=Lepsf}-\eqref{eq:BdyCondeps}, uniformly with respect to the parameter~${\varepsilon} >0$. \medskip The problem of deriving the incompressible Navier-Stokes equation from the Boltzmann equation has been extensively studied in the framework of weak solutions (renormalized for the Boltzmann equation and of Leray type for the Navier-Stokes one), in the torus, the whole space or bounded domains. We do not attempt an exhaustive presentation of this type of result here, but only mention the papers~\cite{BGL1,BGL2} in which this program was initiated and~\cite{GSR} in which the first complete proof of convergence was obtained in the whole space. We also mention the works~\cite{MSR,SR-book,JM} in which the problem is treated in bounded domains starting from the renormalized solutions constructed in~\cite{MischlerENS2010}. Concerning the case of strong solutions, we mention the works~\cite{BardosUkai,DMEL} and the more recent ones~\cite{ALT,Briant,BMM,GT,JXZ,Rachid}, all of which are set in the torus and/or the whole space. To our knowledge, no derivation result is available for strong solutions in a bounded domain. The study of this derivation will be the object of a forthcoming work. We focus here on the study of the linearized rescaled problem~\eqref{eq:dtf=Lepsf}-\eqref{eq:BdyCondeps}. \medskip We now give a version of Theorem~\ref{theo:hypo} adapted to our rescaled framework: \begin{theo}\label{theo:hydro-hypo} There exists a scalar product $\left\langle\!\left\langle \cdot , \cdot \right\rangle \!
\right\rangle_{\!{\varepsilon}}$ on the space ${\mathcal H}$ so that the associated norm $|\hskip-0.04cm|\hskip-0.04cm| \cdot |\hskip-0.04cm|\hskip-0.04cm|_{\varepsilon}$ is equivalent to the usual norm $\| \cdot \|_{{\mathcal H}}$ uniformly in ${\varepsilon} \in (0,1]$, and for which the linear operator ${\mathscr L}_{\varepsilon}$ satisfies the following coercivity estimate: there is a constant $\kappa >0$ such that for any ${\varepsilon} \in (0,1]$, one has $ \left\langle \! \left\langle - {\mathscr L}_{\varepsilon} f , f \right\rangle\!\right\rangle_{\!{\varepsilon}} \geqslant \kappa |\hskip-0.04cm|\hskip-0.04cm| f |\hskip-0.04cm|\hskip-0.04cm|^2_{\varepsilon} + {\kappa \over {\varepsilon}^2} \|f^\perp\|_{{\mathcal H}}^2, $ for any $f \in \mathrm{Dom}({\mathscr L})$ satisfying the boundary condition \eqref{eq:BdyCondeps}, assumption~\eqref{eq:C1} and furthermore assumptions \eqref{eq:C2}-\eqref{eq:C3} in the specular reflection case ($\alpha\equiv 0$ in \eqref{eq:BdyCond}). \end{theo} \begin{proof}[Sketch of the proof of Theorem~\ref{theo:hydro-hypo}] Using the same notation as in Subsection~\ref{ssec:conclusion}, we introduce the following scalar product on ${\mathcal H}$: $ \begin{aligned} \left\langle \! \left\langle f , g \right\rangle \!
\right\rangle_{\!{\varepsilon}} &:= \left\langle f , g \right\rangle_{{\mathcal H}} \\ &\quad + \eta_1 {\varepsilon} \left\langle -\nabla_x u[\theta[f]] , M_p [g] \right\rangle_{L^2_x(\Omega)} + \eta_1 {\varepsilon}\left\langle -\nabla_x u[\theta [g]], M_p [f] \right\rangle_{L^2_x(\Omega)} \\ &\quad + \eta_2 {\varepsilon} \left\langle -\nabla_x^s U[m[f]] , M_q [g] \right\rangle_{L^2_x(\Omega)} +\eta_2 {\varepsilon}\left\langle -\nabla_x^s U[m[g]] , M_q [f] \right\rangle_{L^2_x(\Omega)} \\ &\quad + \eta_3 {\varepsilon} \left\langle -\nabla_x u_{\mathrm{N}}[\varrho[f]] , m [g] \right\rangle_{L^2_x(\Omega)} + \eta_3 {\varepsilon} \left\langle -\nabla_x u_{\mathrm{N}}[\varrho[g]] , m[f] \right\rangle_{L^2_x(\Omega)} \end{aligned} $ with $0 \ll \eta_3 \ll \eta_2 \ll \eta_1 \ll 1$ chosen as in the proof of Theorem~\ref{theo:hypo}. We denote by $|\hskip-0.04cm|\hskip-0.04cm| \cdot |\hskip-0.04cm|\hskip-0.04cm|_{\varepsilon}$ the norm associated to the scalar product~$\left\langle \! \left\langle \cdot , \cdot \right\rangle \! \right\rangle_{\!{\varepsilon}}$, and we observe that $$ \| f \|_{{\mathcal H}} \lesssim |\hskip-0.04cm|\hskip-0.04cm| f |\hskip-0.04cm|\hskip-0.04cm|_{\varepsilon} \lesssim \| f \|_{{\mathcal H}} $$ where the multiplicative constants are uniform in ${\varepsilon} \in (0,1]$. The norms $\| \cdot \|_{{\mathcal H}}$ and $|\hskip-0.04cm|\hskip-0.04cm| \cdot |\hskip-0.04cm|\hskip-0.04cm|_{\varepsilon}$ are thus equivalent independently of ${\varepsilon} \in (0,1]$. Repeating the proof of Theorem~\ref{theo:hypo}, we obtain the desired result.
\end{proof} \medskip Using once more this equivalence of norms, we are able to prove the following stability result for our equation~\eqref{eq:dtf=Lepsf}-\eqref{eq:BdyCondeps} uniformly in~${\varepsilon} \in (0,1]$: \begin{theo}\label{theo:hydro-main} Let $f_{\mathrm{in}}^{\varepsilon} \in {\mathcal H}$ satisfy condition~\eqref{eq:C1} and furthermore \eqref{eq:C2}-\eqref{eq:C3} in the specular reflection case ($\alpha\equiv 0$ in \eqref{eq:BdyCond}). There exist constants $\kappa , C >0$ independent of ${\varepsilon} \in (0,1]$ such that for any solution~$f^{\varepsilon}$ to \eqref{eq:dtf=Lepsf}--\eqref{eq:BdyCondeps} associated to the initial data $f_{\mathrm{in}}^{\varepsilon}$, for any ${\varepsilon} \in (0,1]$ and for any $t \geqslant 0$, there holds $$ \| f^{\varepsilon}(t) \|_{{\mathcal H}} \leqslant C e^{-\kappa t} \| f_{\mathrm{in}}^{\varepsilon} \|_{{\mathcal H}} \quad \text{and} \quad {1 \over {\varepsilon}^2} \int_0^\infty \|(f^{\varepsilon})^\perp(s)\|^2_{{\mathcal H}} \, e^{2\kappa s} \, \d s \leqslant C \| f_{\mathrm{in}}^{\varepsilon} \|^2_{{\mathcal H}}. $$ \end{theo} \begin{rem} Notice that the same analysis can be performed to extend the result of this section to operators that satisfy a weak coercivity estimate as in Section~\ref{sec:weak}. Namely, one can obtain sub-exponential decay of the solution $f^{\varepsilon}$ to~\eqref{eq:dtf=Lepsf}-\eqref{eq:BdyCondeps}, uniformly in ${\varepsilon} \in (0,1]$, when the collision operator ${\mathscr C}$ involved in~\eqref{eq:dtf=Lepsf} satisfies assumptions (A1), (A2'), (A3') and (A4) of Section~\ref{sec:weak}. \end{rem} \phantomsection \bibliographystyle{acm} \addcontentsline{toc}{section}{References}
\section{Introduction} It is one of the fundamental properties of quantum mechanics that the evolution of quantum states is described by linear maps which are completely positive and trace preserving (CPTP), stemming from the unitary dynamics enforced on a larger Hilbert space~\cite{nielsen_2011}. However, in several different settings of practical importance, one encounters linear maps describing quantum dynamics which are not CPTP. This motivates the study of such transformations, and in particular a precise understanding of how they can be compared with and approximated by physical quantum channels. One important application of non-CPTP maps is in entanglement detection, where positive but not completely positive maps can serve as entanglement witnesses~\cite{horodecki_1996-1}. A bipartite state $\rho$ is entangled if and only if there exists a positive map $\Phi$ such that $\mathrm{id} \otimes \Phi(\rho)$ is no longer a positive operator, and therefore such a map can reveal the correlations of $\rho$. This approach has constituted one of the most important ways of detecting entanglement~\cite{guhne_2009,horodecki_2009}, but its experimental implementation encounters an obstacle: how to realise the action of an unphysical linear map in practice? This question prompted the introduction of structural physical approximations (SPA) of non-CPTP maps~\cite{horodecki_2003-4}, which aim to enable the physical evaluation of general maps by designing suitable approximations in terms of quantum channels and using them to infer properties of the original map~\cite{horodecki_2002-1,korbicz_2008,bae_2017}. Another setting in which non-CPTP maps are encountered is that of non-Markovian quantum dynamics or, more generally, the reduced dynamics of correlated systems.
Specifically, when an open quantum system shares some initial correlations with its environment, the evolution of the composite system-environment state can correspond to a non-CPTP map when looking only at the dynamics of the reduced state of the system~\cite{pechukas_1994,shaji_2005,rodriguez-rosario_2008,carteret_2008}. Although the physical interpretation of this is a matter of debate and alternative ways to understand such dynamics have been proposed~\cite{alicki_1995,modi_2012-1,schmid_2019}, it can nevertheless be useful to study such non-CPTP evolutions directly to gain an understanding of reduced dynamics of open quantum systems. Even broader types of unphysical quantum dynamics can be found in the areas of quantum error correction and error mitigation~\cite{shor_1996,temme_2017,li_2017}. This is because, in a broad sense, both of these settings are concerned with the following problem: if an unknown system has undergone a noisy evolution as $\rho \mapsto \Theta(\rho)$, how can we reconstruct the original state as closely as possible, that is, how to implement a map $\Phi$ such that $\Phi \circ \Theta (\rho) \approx \rho$? Such inverse operations typically cease to be valid quantum channels, and so it becomes necessary to devise approaches to implement them in practice with the use of physical operations. In this work, we introduce a general quantitative framework for the characterisation of such unphysical maps by approximating them with quantum channels. We then explicitly give the considered measures operational meaning by connecting them with the performance of practical tasks, including the cost of simulating a given map with quantum channels. 
Notably, we show that all of the considered measures reduce to the same quantity when the given linear map is trace preserving: they all equal the diamond norm~\cite{kitaev_1997,watrous_2004}, a fundamental computational tool that serves as a measure of quantum channel distance and finds many uses in the practical characterisation of quantum processes~\cite{watrous_2018}. This endows the diamond norm with new meanings in the operational tasks that we consider, and furthermore allows a number of new connections to be established. On the one hand, many known results in the quantification of the diamond norm can be carried over to the setting of our work, and on the other hand, we can use our characterisation to provide new insight into the computation and applications of the diamond norm. Our approach is based on the notion of robustness measures~\cite{vidal_1999} --- inspired by recent applications of such quantities in the study of general resource theories of channels~\cite{diaz_2018-2,takagi_2019,liu_2019-1,gour_2019-1,uola_2020,yuan_2020,takagi_2020,takagi_2020-2,regula_2020-2,regula_2020-3,jiang_2020}, we use them to quantify the amount of noise needed to turn a given map into a quantum channel. Such measures allow for several different generalisations to the setting of linear maps, motivating us to study and compare these definitions. The robustness-based approaches can be understood as different ways of designing optimal decompositions of linear maps in terms of quantum channels, and so they generalise the standard structural physical approximations~\cite{horodecki_2003-4}. We express the measures as semidefinite programs and establish various relations and bounds between them. We apply our first measure in the task of simulating the action of an unphysical map with valid channels, accomplished by allowing the use of ancillary systems which can consist of linear combinations of quantum states.
\br{Such an approach allows us to reduce the problem of simulating the dynamics of quantum systems to the much simpler case of simulating the use of a non-positive Hermitian operator. Assessing the difficulty of this procedure then reduces to quantifying how much the given operator deviates from being a valid quantum state, and --- employing the trace norm as a natural quantifier of such `non-quantumness' --- } we show that the optimal cost of simulating a non-CPTP map in this way is given exactly by the value of the robustness measure. Furthermore, answering the question of whether any unphysical map can provide measurable operational advantages over quantum channels, we show this to be the case in the setting of discrimination-based quantum games, establishing our second robustness measure as the exact quantifier of this advantage. Our results also generalise and shed light on the very recent findings of Ref.~\cite{jiang_2020}, which considered a similar framework for approximating trace-preserving maps using a robustness- and quasiprobability-based approach. In particular, we show that the measure considered in \cite{jiang_2020} is actually an alternative expression for the diamond norm of a map, rather than a new quantity. The paper is structured as follows. In Sec.~\ref{sec:robustness_intro}, we introduce the notions of robustness measures and show how they can be applied to non-CPTP linear maps. We establish precise connections with the diamond norm in Sec.~\ref{sec:diamond}. We then proceed to show that the robustness measures --- and hence the diamond norm --- play a crucial role in quantifying the cost of simulating linear maps (Sec.~\ref{sec:simulation}) as well as in understanding the advantages a non-CPTP map could provide in input-output quantum games (Sec.~\ref{sec:games}). We proceed to establish a number of bounds for the measures in Sec.~\ref{sec:bounds}. 
Finally, we discuss the applications of our approach, comparisons with other methods, and explicitly show how the measures can be evaluated for some representative examples in Sec.~\ref{sec:applications}. \section{Robustness of non-CP maps}\label{sec:robustness_intro} Let $A$ and $B$ denote two finite-dimensional quantum systems of dimension $d_A$ and $d_B$, respectively. We will use $\mathbb{L}(A)$ to denote the set of all linear operators, $\mathbb{H}(A)$ to denote the set of all Hermitian operators, and $\mathbb{D}(A)$ to denote all density operators acting on the Hilbert space of system $A$. We use $\< X, Y \> = \Tr(X^\dagger Y)$ for the Hilbert-Schmidt inner product. Among all linear maps from $\mathbb{L}(A)$ to $\mathbb{L}(B)$, we will be primarily concerned with Hermiticity-preserving maps $\H(A,B)$, which are defined as maps such that $\Phi(X) \in \mathbb{H}(B) \; \forall X \in \mathbb{H}(A)$. A map is called positive if $\Phi(X) \geq 0 \; \forall X \geq 0$ (w.r.t.\ the positive semidefinite cone), completely positive (CP) if $\mathrm{id}_{A} \otimes \Phi$ is positive, trace preserving if $\Tr \Phi(X) = \Tr X \; \forall X$, and trace non-increasing if $\Tr \Phi(X) \leq \Tr X \; \forall X$. To each map $\Phi \in \H(A,B)$ we will associate the Choi operator $J_\Phi = (\mathrm{id}_A \otimes \Phi)[\proj{\Omega}] \in \mathbb{H}(A \otimes B)$ where $\ket{\Omega} = \sum_{i} \ket{ii}$. Importantly, a map is Hermiticity-preserving iff $J_\Phi = J_\Phi^\dagger$, CP iff $J_{\Phi} \geq 0$, and trace preserving iff $\Tr_B J_\Phi = \mathbbm{1}_A$ (see e.g.\ \cite{watrous_2018}). Let ${\mathrm{CPTNI}}(A,B)$ denote the set of completely positive and trace--non-increasing maps in $\H(A,B)$, and analogously ${\mathrm{CPTP}}(A,B)$ the set of completely positive and trace-preserving maps. For simplicity of notation, we will often simply write ${\mathrm{CPTP}}$ for ${\mathrm{CPTP}}(A,B)$ (and analogously for other sets) when the spaces in consideration are not relevant. 
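As a quick numerical illustration of these Choi-operator characterisations, the following self-contained sketch (the function names and tolerances are our own illustrative choices, not from the text) builds $J_\Phi$ for a map given as a matrix function and tests the conditions above. The transpose map is Hermiticity-preserving and trace preserving but not CP: its Choi operator is the SWAP operator, which has eigenvalue $-1$.

```python
import numpy as np

def choi(phi, d_a):
    """Choi operator J_Phi = sum_{ij} |i><j| (x) Phi(|i><j|)."""
    eye = np.eye(d_a)
    return np.block([[phi(np.outer(eye[i], eye[j]))
                      for j in range(d_a)] for i in range(d_a)])

def partial_trace_B(j_mat, d_a, d_b):
    # Tr_B: trace out the second tensor factor of an operator on A (x) B.
    return np.trace(j_mat.reshape(d_a, d_b, d_a, d_b), axis1=1, axis2=3)

def is_cp(j_mat, tol=1e-9):
    # CP iff the Choi operator is positive semidefinite.
    return np.min(np.linalg.eigvalsh(j_mat)) >= -tol

def is_tp(j_mat, d_a, d_b, tol=1e-9):
    # TP iff Tr_B J_Phi equals the identity on A.
    return np.allclose(partial_trace_B(j_mat, d_a, d_b), np.eye(d_a), atol=tol)

# The qubit transpose map: Hermiticity-preserving and TP, but not CP.
J_T = choi(lambda X: X.T, 2)
assert np.allclose(J_T, J_T.conj().T)        # Hermiticity-preserving
assert is_tp(J_T, 2, 2) and not is_cp(J_T)
```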
In order to quantify how much a given map deviates from the set of CPTP maps, we will employ the concept of \textit{robustness measures}~\cite{vidal_1999}. It will be insightful to first review how such measures are defined for quantum states. Given a \br{convex} set of interest $\mathcal{F} \subseteq \mathbb{D}$, commonly chosen to be the set of free states in a given resource theory, one asks: how much noise from a set $\mathcal{N} \subseteq \mathbb{D}$ has to be added to a state $\rho$ in order to make it a free state? This has the intuitive interpretation of measuring how robust the resources contained in the state $\rho$ are with respect to noise from the set $\mathcal{N}$. Specifically, we write \begin{equation}\begin{aligned} r_\mathcal{N}(\rho) \coloneqq \min \lsetr \lambda \barr \frac{\rho + \lambda \omega}{1+\lambda} \br{\eqqcolon \sigma} \in \mathcal{F},\; \omega \in \mathcal{N} \rsetr. \end{aligned}\end{equation} The most common choices of the noise set $\mathcal{N}$ are: $\mathcal{N} = \mathbb{D}$, in which case we obtain the so-called \textit{generalised robustness} equivalently given by \begin{equation}\begin{aligned} r_\mathbb{D}(\rho) = \min \left\{\left. \lambda \bar \rho \leq (1+\lambda) \sigma,\; \sigma \in \mathcal{F} \right\}, \end{aligned}\end{equation} and the choice $\mathcal{N} = \mathcal{F}$, which corresponds to the \textit{standard robustness} $r_\mathcal{F}$. The latter quantity is directly related to the so-called base norm $\norm{\rho}{\mathcal{F}}$ of the set $\mathcal{F}$, which can be alternatively understood as an optimisation of quasiprobability distributions over the set $\mathcal{F}$: \begin{equation}\begin{aligned} 2 r_\mathcal{F}(\rho) + 1 =& \norm{\rho}{\mathcal{F}}\\ \coloneqq& \min \left\{\left. \lambda_+ + \lambda_- \bar \rho + \lambda_- \sigma_- = \lambda_+ \sigma_+,\; \sigma_{\pm} \in \mathcal{F} \right\}\\ =& \min \left\{\left. 
\sum_i |\lambda_i| \bar \rho = \sum_i \lambda_i \sigma_i,\; \sigma_{i} \in \mathcal{F} \right\}\\ \end{aligned}\end{equation} where the third line is a simple consequence of the convexity of $\mathcal{F}$. The definitions straightforwardly extend to unnormalised operators $X$: \br{defining \begin{equation}\begin{aligned} r_\mathcal{N}(X) \coloneqq \min \lsetr \lambda \barr \frac{X + \lambda \omega}{\Tr X +\lambda} \eqqcolon \sigma \in \mathcal{F},\; \omega \in \mathcal{N} \rsetr, \end{aligned}\end{equation} }% it is important to notice that the trace of $X$ will come into play, and the base norm will equal $\norm{X}{\mathcal{F}} = 2 r_\mathcal{F}(X) + \Tr X$. The case of interest to us will be where the set of free states $\mathcal{F}$ contains \textit{all} physical quantum states, $\mathcal{F} = \mathbb{D}$, in which case the different notions of the robustness are equal and one has \begin{equation}\begin{aligned} 2 r_\mathbb{D}(X) + \Tr X = \norm{X}{1}, \end{aligned}\end{equation} that is, the base norm is precisely the trace norm (Schatten 1-norm) $\norm{\cdot}{1}$. \paragraph{Robustness of linear maps.} A generalisation of these concepts to the case of linear maps can be done in several different ways. Firstly, one has to note that it does not suffice to consider trace-preserving maps \br{in the definitions of these measures. This follows since any linear combination of CPTP maps necessarily satisfies that $\Tr_B J_\Phi \propto \mathbbm{1}$, which means that $\Tr \Phi(\rho)$ takes the same value for any input state $\rho$. Therefore, any Hermiticity-preserving map whose reduced Choi matrix is not proportional to the identity operator cannot be represented as $\lambda_+ \Lambda_+ - \lambda_- \Lambda_-$ for CPTP $\Lambda_\pm$.} To circumvent this, we will employ the set of completely positive and trace--non-increasing maps, which can be understood as probabilistic implementations of quantum channels.
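Before turning to maps, it is worth checking the state-level identity $2 r_\mathbb{D}(X) + \Tr X = \|X\|_1$ recalled above numerically: for $\mathcal{F} = \mathbb{D}$ the optimal noise can be taken to be the normalised negative part of $X$, so $r_\mathbb{D}(X)$ is the total weight of the negative eigenvalues of $X$. A minimal spectral sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian (generally non-positive) operator X.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
X = A + A.conj().T

eigs = np.linalg.eigvalsh(X)
r = float(np.sum(np.clip(-eigs, 0.0, None)))   # r_D(X): weight of the negative part
trace_norm = float(np.sum(np.abs(eigs)))       # ||X||_1 for Hermitian X

# The identity 2 r_D(X) + Tr X = ||X||_1 from the text:
assert np.isclose(2 * r + np.trace(X).real, trace_norm)
```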
Importantly, robustness-based definitions which were all equal in the case of states might not be equal any more. We therefore need to explicitly consider three different types of the robustness w.r.t.\ the sets ${\mathrm{CPTP}}$ or ${\mathrm{CPTNI}}$: \begin{align} R(\Phi) \coloneqq& \min \lsetr \lambda \barr \frac{\Phi + \lambda \Lambda}{1+\lambda} \in {\mathrm{CPTNI}},\; \Lambda \in {\mathrm{CPTNI}} \rsetr,\label{eq:rob1}\\ R'(\Phi) \coloneqq& \min \lsetr \lambda \barr J_\Phi \leq (1+\lambda) J_\Lambda,\; \Lambda \in {\mathrm{CPTNI}} \rsetr\label{eq:rob2}\\ =& \min \lsetr \lambda \barr J_\Phi \leq (1+\lambda) J_\Lambda,\; \Lambda \in {\mathrm{CPTP}} \rsetr,\nonumber\\ R''(\Phi) \coloneqq& \min \left\{\left. \lambda \bar \Phi + \lambda \Lambda \in {\mathrm{CP}},\; \Lambda \in {\mathrm{CPTNI}} \right\}\label{eq:rob3}\\ =& \min \left\{\left. \lambda \bar \Phi + \lambda \Lambda \in {\mathrm{CP}},\; \Lambda \in {\mathrm{CPTP}} \right\},\nonumber \end{align} as well as a generalised notion of a base norm with respect to the set of completely positive and trace--non-increasing maps: \begin{equation}\begin{aligned} \norm{\Phi}{\bnorm} \coloneqq \min \left\{\left. \lambda_+ + \lambda_- \bar \Phi = \lambda_+ \Lambda_+ - \lambda_- \Lambda_-,\; \Lambda_{\pm} \in {\mathrm{CPTNI}} \right\}. \label{eq:CPTNI norm def} \end{aligned}\end{equation} In the expressions for $R'$ and $R''$, we made use of the fact that one can, without loss of generality, restrict the optimisation to CPTP maps; this follows since for any $\Lambda \in {\mathrm{CPTNI}}$ such that $\Tr_B J_\Lambda \leq \mathbbm{1}_A$ we can define the map $\Lambda'$ by $J_{\Lambda'} = J_\Lambda + \frac{C}{d_B} \otimes \mathbbm{1}_{B}$ where $C = \mathbbm{1}_A - \Tr_B J_\Lambda \geq 0$ which satisfies $\Lambda' \in {\mathrm{CPTP}}$ and achieves the same value of the objective function. We note that closely related definitions were recently also considered in Ref.~\cite{jiang_2020} for the case of trace-preserving maps. 
All of the quantities above are well-defined and take a finite value for any map $\Phi \in \H(A,B)$, as we shall see explicitly by establishing general upper bounds in Sec.~\ref{sec:bounds}. \br{The robustness $R(\Phi)$ can be seen to be an upper bound for all other quantities: any feasible decomposition of $\Phi$ in Eq.~\eqref{eq:rob1} gives feasible solutions for Eq.~\eqref{eq:rob2}, \eqref{eq:rob3}, and for the base norm in Eq.~\eqref{eq:CPTNI norm def}.} It is a priori unclear whether one can find general conditions under which the inequalities between the different measures are tight. We shall shortly see that equality indeed holds for all trace-preserving linear maps. All of the introduced quantities can be computed as semidefinite programs, which follows since the constraints for a map to be CPTNI (or CPTP) are linear matrix inequalities. This means that the measures can be evaluated efficiently (in the dimensions of the map) using numerical software. The equivalent dual forms of the problems, which can also provide some insight into the differences between the different definitions of the robustness measures, will be reported shortly in Sec.~\ref{sec:bounds}. \section{Relation with the diamond norm}\label{sec:diamond} For any Hermiticity-preserving map $\Phi$, the diamond norm (completely bounded trace norm) is defined as~\cite{kitaev_1997,watrous_2018} \begin{equation}\begin{aligned} \norm{\Phi}{\dia} = \max_{\rho \in \mathbb{D}(A \otimes A)} \norm{ \mathrm{id}_A \otimes \Phi \, (\rho)}{1}, \end{aligned}\end{equation} where, in a slight abuse of notation, we use $\mathbb{D}(A \otimes A)$ to denote the states acting on a bipartite Hilbert space composed of the space $A$ and another space isomorphic thereto. The diamond norm finds use as a fundamental measure of distance between quantum channels, mirroring the operational role of the trace distance in measuring distances between quantum states~\cite{kitaev_1997,Sacchi2005,gilchrist_2005,watrous_2018}. 
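A simple consequence of this definition: taking the normalised maximally entangled state as the input gives $(\mathrm{id}_A \otimes \Phi)(\phi^+) = J_\Phi / d_A$, so $\|J_\Phi\|_1 / d_A$ is an easily computable lower bound on $\norm{\Phi}{\dia}$. A short numerical sketch (illustrative; the bound happens to be tight for the qubit transpose map, whose diamond norm equals $d = 2$):

```python
import numpy as np

def diamond_lower_bound(J_phi, d_a):
    # With the normalised maximally entangled input, (id (x) Phi)(phi+) = J_Phi/d_a,
    # so ||J_Phi||_1 / d_a lower-bounds the maximum in the definition.
    return np.sum(np.abs(np.linalg.eigvalsh(J_phi))) / d_a

SWAP = np.eye(4)[[0, 2, 1, 3]]       # Choi operator of the qubit transpose map
print(diamond_lower_bound(SWAP, 2))  # 2.0 — tight here, since ||T||_diamond = 2
```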
It is one of the most widely employed figures of merit in comparing quantum channels and benchmarking channel manipulation protocols. Its quantification and characterisation are therefore crucial to an effective understanding of the properties of quantum processes. \br{Close connections between the diamond norm and the base norm in the space of quantum channels can be inferred already from the operational similarity that the diamond norm bears to the trace norm, the latter being the natural base norm in the space of quantum states. Here we aim to clarify the details of such connections and to explicitly relate the diamond norm with the robustness measures. } We will first introduce the following lemma, which establishes a useful formulation of the diamond norm for Hermiticity-preserving maps. The result is closely related to a more general approach for generalised quantum channels considered previously by Jenčová~\cite{jencova_2014}, \br{and can be alternatively deduced from Lem.~4 and Thm.~2 of \cite{jencova_2014}}. \begin{boxed}{white} \begin{lemma}\label{lemma:diamond_herm} For any Hermiticity-preserving map $\Phi$, it holds that \begin{align}\label{eq:diamond_tighter_ineq} \norm{\Phi}{\dia} &= \min \left\{\left. \mu \bar J_\Phi = M_+ - M_-,\; M_{\pm} \geq 0,\; \Tr_B (M_+ + M_-) \leq \mu \mathbbm{1}_{A} \right\}\\ &= \min \left\{\left. \mu \bar J_\Phi = M_+ - M_-,\; M_{\pm} \geq 0,\; \Tr_B (M_+ + M_-) = \mu \mathbbm{1}_{A} \right\}.\label{eq:diamond_tighter} \end{align} \end{lemma} \end{boxed} \begin{proof} Let $\norm{\Phi}{\dia}'$ denote the quantity in \eqref{eq:diamond_tighter}. We first notice that the constraint $\Tr_B (M_+ + M_-) = \mu \mathbbm{1}_A$ can be relaxed to $\Tr_B (M_+ + M_-) \leq \mu \mathbbm{1}_A$ without loss of generality. This follows since for any feasible $M_\pm$ s.t.
$\Tr_{B} (M_+ + M_-) + C = \mu \mathbbm{1}$ with $C\geq 0$, one can define feasible solutions $M'_{\pm} = M_\pm + \frac{C}{2d_B} \otimes \mathbbm{1}_{B}$ which satisfy $\Tr_B (M'_+ + M'_-) = \mu \mathbbm{1}_{A}$ and thus achieve the same optimal value. We thus have \begin{equation}\begin{aligned}\label{eq:diamond_ch_sdp} \norm{\Phi}{\dia}' &= \min \left\{\left. \mu \bar J_\Phi = M_+ - M_-,\; M_{\pm} \geq 0,\; \Tr_B (M_+ + M_-) \leq \mu \mathbbm{1}_{A} \right\}\\ &= \min \left\{\left. \norm{\Tr_B (M_+ + M_-)}{\infty} \bar J_\Phi = M_+ - M_-,\; M_{\pm} \geq 0 \right\}. \end{aligned}\end{equation} Taking the Lagrange dual of the above (see Appendix~\ref{app:duals}) gives \begin{equation}\begin{aligned}\label{eq:norm2} \norm{\Phi}{\dia}' &= \max \left\{\left. \< J_\Phi, W \> \bar - \rho \otimes \mathbbm{1}_B \leq W \leq \rho \otimes \mathbbm{1}_B,\; \rho \in \mathbb{D}(A) \right\}\\ &= \sup \left\{\left. \< J_\Phi, W \> \bar - \rho \otimes \mathbbm{1}_B \leq W \leq \rho \otimes \mathbbm{1}_B,\; \rho \in \mathbb{D}_{>0}(A) \right\}\\ &= \sup \left\{\left. \< W, \sqrt{\rho \otimes \mathbbm{1}_B} \, J_\Phi\, \sqrt{\rho \otimes \mathbbm{1}_B} \> \bar \rho \in \mathbb{D}_{>0}(A),\, -\mathbbm{1}_{A \otimes B} \leq W \leq \mathbbm{1}_{A \otimes B} \right\}\\ &= \sup_{\rho \in \mathbb{D}_{>0}(A)} \norm{\sqrt{\rho \otimes \mathbbm{1}_B} \, J_\Phi\, \sqrt{\rho \otimes \mathbbm{1}_B}}{1}\\ &= \max_{\rho \in \mathbb{D}(A)} \norm{\sqrt{\rho \otimes \mathbbm{1}_B} \, J_\Phi\, \sqrt{\rho \otimes \mathbbm{1}_B}}{1}, \end{aligned}\end{equation} where in the second line, \br{by continuity, we restricted our attention to the set of full-rank states $\mathbb{D}_{>0}(A)$ without loss of generality}, and in the third line we made the change of variables $W \mapsto \sqrt{\rho^{-1}\otimes \mathbbm{1}_B} W \sqrt{\rho^{-1}\otimes \mathbbm{1}_B}$. 
The fact that this equals the diamond norm of $\Phi$ can be deduced from the results of Ref.~\cite{watrous_2009} already; for completeness, we will show this explicitly. Recalling that $J_\Phi = (\mathrm{id}_A \otimes \Phi)\proj{\Omega}$ with $\ket{\Omega}$ being the unnormalised maximally entangled state, and using the fact that $\Phi$ is only acting on one of the subsystems, we can write \begin{equation}\begin{aligned} \norm{\Phi}{\dia}' &= \max_{\rho \in \mathbb{D}(A)} \norm{ \sqrt{\rho \otimes \mathbbm{1}_B} \, (\mathrm{id}_A \otimes \Phi) \left[ \proj{\Omega} \right]\, \sqrt{\rho \otimes \mathbbm{1}_B}}{1}\\ &= \max_{\rho \in \mathbb{D}(A)} \norm{ (\mathrm{id}_A \otimes \Phi) \left[ \sqrt{\rho} \otimes \mathbbm{1}_A \, \proj{\Omega}\, \sqrt{\rho} \otimes \mathbbm{1}_A \right]}{1}\\ &= \max_{\psi \in \mathbb{D}(A \otimes A)} \norm{ \mathrm{id}_A \otimes \Phi \,(\psi)}{1}\\ &= \max_{\rho \in \mathbb{D}(A \otimes A)} \norm{ \mathrm{id}_A \otimes \Phi \,(\rho)}{1}\\ &= \norm{\Phi}{\dia} \end{aligned}\end{equation} where we used that any pure state $\psi \in \mathbb{D}(A \otimes A)$ can be written as $\left(\sqrt{\rho} \otimes \mathbbm{1}_A\right) \proj{\Omega} \left(\sqrt{\rho} \otimes \mathbbm{1}_A\right)$ for a suitable choice of $\rho \in \mathbb{D}(A)$, with $\ket{\psi}$ constituting the canonical purification of $\rho$. 
\end{proof} \br{Compared with the semidefinite programs for the diamond norm of general linear maps originally derived in Refs.~\cite{watrous_2009,watrous_2013}, the form of the diamond norm presented in Lemma~\ref{lemma:diamond_herm} already constitutes a major simplification --- both at a conceptual level, allowing for a restatement of the problem in terms of optimising over decompositions of the form $J_\Phi = M_+ - M_-$, and computationally, as the number of optimisation variables is reduced.} As an immediate consequence of the above result, we can use the characterisation of the diamond norm in Eq.~\eqref{eq:diamond_tighter_ineq} to construct valid feasible solutions for the base norm and robustness measures in Eqs.~\eqref{eq:rob1}--\eqref{eq:CPTNI norm def}, and vice versa. \begin{boxed}{white} \begin{corollary}\mbox{}\label{cor:dia_cptni_ineq} For any Hermiticity-preserving map $\Phi$, it holds that \begin{equation}\begin{aligned}\label{eq:dia_cptni_ineq} 2 \norm{\Phi}{\dia} &\geq \norm{\Phi}{\bnorm} \geq \norm{\Phi}{\dia},\\ \norm{\Phi}{\dia} + 1 &\geq R'(\Phi) \geq \frac{1}{2}\left(\vphantom{\Big[}\norm{\Phi}{\dia} - 2 + \lambda_{\min}(\Tr_B J_\Phi)\right),\\ \norm{\Phi}{\dia} &\geq R''(\Phi) \geq \frac{1}{2}\left(\vphantom{\Big[}\norm{\Phi}{\dia} - \lambda_{\max}(\Tr_B J_\Phi)\right), \end{aligned}\end{equation} where $\lambda_{\min}$ and $\lambda_{\max}$ denote, respectively, the smallest and the largest eigenvalues. \end{corollary} \end{boxed} \br{% \begin{proof} Any decomposition for the diamond norm of the form $J_\Phi = M_+ - M_-$ with $\Tr_B (M_+ + M_-) \leq \mu \mathbbm{1}_{A}$ satisfies $\Tr_B M_{\pm} \leq \mu \mathbbm{1}_A$, which provides valid feasible solutions to the norm $\norm{\cdot}{\bnorm}$ and the robustness measures.
On the other hand, any decomposition for $\norm{\cdot}{\bnorm}$ of the form $ \Phi = \lambda_+ \Lambda_+ - \lambda_- \Lambda_-$ with $\Lambda_{\pm} \in {\mathrm{CPTNI}}$ gives a feasible decomposition for the diamond norm with $\Tr_B (\lambda_+ J_{\Lambda_+} + \lambda_- J_{\Lambda_-}) \leq (\lambda_+ + \lambda_-)\mathbbm{1}_A$. Similarly, any decomposition for $R'$ satisfying $J_\Phi \leq (1+\lambda) J_\Lambda$ gives a feasible decomposition for $\norm{\cdot}{\dia}$ of the form $J_\Phi = (1+\lambda) J_\Lambda - M_-$ where $M_- \coloneqq (1+\lambda) J_\Lambda - J_\Phi$. Using that \begin{equation}\begin{aligned} \Tr_B\left[(1+\lambda) J_\Lambda + M_-\right] &= \Tr_B \left[ 2(1+\lambda) J_\Lambda - J_\Phi\right]\\ &\leq \left[2(1+\lambda) - \lambda_{\min}(\Tr_B J_\Phi)\right] \mathbbm{1}_A, \end{aligned}\end{equation} we get the stated bound. The case of $R''$ follows analogously. \end{proof} }% Equality between the different quantities can be shown for all trace-preserving maps, directly relating the diamond norm with our considered measures. \begin{boxed}{white} \begin{theorem}\label{thm:rob_diamond} For any map $\Phi \in \H(A,B)$ which is trace preserving or, more generally, proportional to a trace-preserving map in the sense that $\Tr_B J_\Phi \propto \mathbbm{1}$, it holds that \begin{equation}\begin{aligned}\label{eq:dia_tp} \norm{\Phi}{\dia} = \norm{\Phi}{\bnorm} = \min \left\{\left. \mu_+ + \mu_- \bar J_\Phi = \mu_+ J_{\Lambda_+} - \mu_- J_{\Lambda_-},\; \Lambda_{\pm} \in {\mathrm{CPTP}} ,\; \mu_{\pm} \in \mathbb{R}_+ \right\}. \end{aligned}\end{equation} \br{For trace-preserving maps $\Phi$, it additionally holds that} \begin{equation}\begin{aligned} R(\Phi) = R'(\Phi) = R''(\Phi) = \frac{\norm{\Phi}{\bnorm}-1}{2} = \frac{\norm{\Phi}{\dia}-1}{2}. 
\end{aligned}\end{equation} \end{theorem} \end{boxed} \begin{proof} From the fact that $\Tr_B J_\Phi = t \mathbbm{1}_A$ for some $t \in \mathbb{R}$, it is easy to see that every decomposition of the form $J_\Phi = M_+ - M_-,\; M_{\pm} \geq 0,\; \Tr_B (M_+ + M_-) = \mu \mathbbm{1}_{A}$ as in Lemma~\ref{lemma:diamond_herm} has to satisfy \begin{equation}\begin{aligned} \Tr_B M_+ = \frac{\mu+t}{2} \mathbbm{1}_A,\quad \Tr_B M_- = \frac{\mu-t}{2} \mathbbm{1}_A. \end{aligned}\end{equation} This implies that we can equivalently write \begin{equation}\begin{aligned} \norm{\Phi}{\dia} &= \min \left\{\left. \mu_+ + \mu_- \bar J_\Phi = \mu_+ M_+ - \mu_- M_-,\; M_{\pm} \geq 0,\; \Tr_B M_+ = \Tr_B M_- = \mathbbm{1}_{A} \right\} \end{aligned}\end{equation} which is precisely Eq.~\eqref{eq:dia_tp}. Notice then that any such decomposition gives a valid feasible solution for $\norm{\Phi}{\bnorm}$, together with Cor.~\ref{cor:dia_cptni_ineq} yielding equality between the two norms. When $\Phi$ is trace preserving ($t=1$), we can write \begin{equation}\begin{aligned}\label{eq:dia_tp_eq} \norm{\Phi}{\dia} &= \min \left\{\left. \mu_+ + \mu_- \bar J_\Phi = \mu_+ J_{\Lambda_+} - \mu_- J_{\Lambda_-},\; \Lambda_{\pm} \in {\mathrm{CPTP}} \right\}\\ &= \min \left\{\left. 2 \mu_+ - 1 \bar J_\Phi = \mu_+ J_{\Lambda_+} - \mu_- J_{\Lambda_-},\; \Lambda_{\pm} \in {\mathrm{CPTP}} \right\}\\ &= \min \left\{\left. 2 \mu_- + 1 \bar J_\Phi = \mu_+ J_{\Lambda_+} - \mu_- J_{\Lambda_-},\; \Lambda_{\pm} \in {\mathrm{CPTP}} \right\}. \end{aligned}\end{equation} The equality $\norm{\Phi}{\dia} = 2 R'(\Phi) + 1 = 2 R''(\Phi) + 1$ then follows: on the one hand, any decomposition of the form in Eq.~\eqref{eq:dia_tp_eq} gives a feasible decomposition for $R'$ and $R''$ in Eqs.~\eqref{eq:rob2}--\eqref{eq:rob3}, and on the other hand, any decomposition for the robustness measures is necessarily of the form in Eq.~\eqref{eq:dia_tp_eq}. 
Equality with the robustness $R(\Phi)$ follows by noting again that any feasible decomposition in Eq.~\eqref{eq:dia_tp_eq} gives a feasible decomposition for $R(\Phi)$, and on the other hand using the relation $R(\Phi) \geq R'(\Phi)$ which holds by definition. \end{proof} \begin{remark}The expression in Eq.~\eqref{eq:dia_tp} is valid also in the case of trace-annihilating maps ($\Tr_B J_\Phi = 0$), and thus the case of computing the distance $\norm{\Lambda - \Lambda'}{\dia}$ between two quantum channels. A simplified expression for this problem appeared previously in~\cite{watrous_2009} and was explicitly expressed as a robustness-type measure in~\cite{gour_2019-1}. \end{remark} We note that the quantity $\norm{\cdot}{\bnorm}$, applied to trace-preserving maps, was recently considered in Refs.~\cite{jiang_2020} and~\cite{piveteau_2021}. It was not noticed in these works that this is simply the diamond norm, and hence many results shown in \cite{jiang_2020} (e.g.\ the multiplicativity with respect to tensor product, unitary invariance, bounds with trace norm $\norm{J_\Phi}{1}$, monotonicity under the action of superchannels, and some explicit expressions) follow directly from known properties of the diamond norm~\cite{watrous_2004,watrous_2009,michel_2018,nechita_2018}. We will later see that this equivalence does not extend to maps which are not trace preserving (or proportional thereto), and indeed we can have $\norm{\Phi}{\bnorm} = 2 \norm{\Phi}{\dia}$ in the extreme case. \section{Quantifying simulation cost}\label{sec:simulation} Since the quantum dynamics which can be realised in practice are restricted to completely positive maps, a relevant question then becomes: how can one simulate the action of a non-CPTP map on a quantum state when only CPTP maps are available to us? 
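A canonical instance of this problem arises in error mitigation, where one wishes to undo a noise channel whose linear inverse is typically not completely positive. The following minimal NumPy sketch is our own illustration (not taken from the paper); it assumes the qubit depolarising channel and the Choi convention $J_\Phi = \sum_{ij} \ketbra{i}{j}\otimes\Phi(\ketbra{i}{j})$, and checks that the inverse map is trace preserving yet not CP:

```python
import numpy as np

# Hermiticity-preserving, trace-preserving but non-CP map:
# the inverse of the qubit depolarising channel D_p(rho) = (1-p) rho + p Tr(rho) I/2.
p = 0.5
def D(rho):
    return (1 - p) * rho + p * np.trace(rho) * np.eye(2) / 2

def D_inv(rho):  # linear inverse of D_p; trace preserving but not completely positive
    return (rho - p * np.trace(rho) * np.eye(2) / 2) / (1 - p)

# Choi operator J_Phi = sum_{ij} |i><j| (x) Phi(|i><j|)
def choi(Phi):
    J = np.zeros((4, 4), dtype=complex)
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2)); E[i, j] = 1
            J += np.kron(E, Phi(E))
    return J

rho = np.array([[0.6, 0.3], [0.3, 0.4]])
assert np.allclose(D_inv(D(rho)), rho)            # it inverts the noise...
assert np.isclose(np.trace(D_inv(rho)), 1.0)      # ...and preserves the trace,
assert np.linalg.eigvalsh(choi(D_inv)).min() < 0  # but its Choi operator is not PSD
```

Maps of this kind are discussed further in Sec.~\ref{sec:examples_inverses}.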
A similar question was recently asked in Ref.~\cite{jiang_2020}, where the authors applied quasiprobability sampling methods~\cite{pashayan_2015,temme_2017,takagi_2020-2} to the desired operation $\Phi$. We take a different approach here and instead allow for the use of an ancillary system $X$, which can be an affine combination of quantum states, in order to simulate the action of the map $\Phi$ as a CPTP map $\Lambda$ acting jointly on the input quantum state and the ancilla $X$. The ``non-physicality'' of the given map $\Phi$ is then pushed into the system $X$, allowing for the overall transformation $\Lambda$ to be a valid quantum channel. The motivation for this approach is that the task of simulating the action of the non-CPTP map $\Phi$ is effectively replaced with the simulation of a unit-trace Hermitian operator $X$, which could be significantly easier to realise in practice, especially since we will see that the dimension of the ancilla can be taken to be arbitrarily small. Standard quasiprobability-based approaches such as the ones employed in \cite{temme_2017,takagi_2020-2,jiang_2020} aim to estimate the expectation value $\Tr[\Phi(\rho)A]$, where $\Phi$ is a non-CPTP map and $A$ an observable, by decomposing the given map as $\Phi = \sum_i \lambda_i \Lambda_i$ with $\lambda_i \in \mathbb{R}$ and $\Lambda_i \in {\mathrm{CPTP}}$ (or CPTNI). The expectation value $\Tr[\Phi(\rho)A]$ is then estimated by evaluating $\Tr[\Lambda_i(\rho)A]$ and appropriately sampling from the output distributions with probabilities determined by the coefficients $\lambda_i$~\cite{pashayan_2015,temme_2017}. In practice, this means that we have to repeatedly realise each operation $\Lambda_i$, which requires the implementation of a different quantum circuit for each operation. Consider, on the other hand, a situation in which the dynamics is fixed as some map $\Lambda \in {\mathrm{CPTNI}}$, and we only need to vary the input states.
This can be achieved by writing $\Phi(\cdot) = \Lambda(\cdot \otimes X)$, where we can write any Hermitian operator in a quasiprobability representation as $X = \sum_i \mu_i \rho_i$. The task of sampling from the output distribution is then reduced to feeding in the different states $\rho_i$ into the circuit which realises the fixed operation $\Lambda$, thus greatly simplifying the implementation. As mentioned in Sec.~\ref{sec:robustness_intro}, a natural quantifier of how much a given operator $X \in \mathbb{H}(A)$ with $\Tr(X) = 1$ deviates from the set of all quantum states is the trace norm $\norm{X}{1}$. Indeed, this quantity can be given an explicit interpretation in terms of the optimal cost of a quasiprobability-based \br{estimation of the expectation value of $X$}~\cite{pashayan_2015}. We then define the simulation cost of a map as the minimal amount of such ``non-physicality'' of $X$ needed to simulate the action of the map: \begin{equation}\begin{aligned} S(\Phi) \coloneqq \min \left\{\left. \norm{X}{1} \bar \Lambda (\cdot \otimes X ) = \Phi(\cdot), \;\Tr(X) = 1,\; \Lambda \in {\mathrm{CPTNI}} \right\}. \end{aligned}\end{equation} We then have the following. \begin{boxed}{white} \begin{theorem}\label{thm:cost} For any map $\Phi \in \H(A,B)$, it holds that \begin{equation}\begin{aligned} S(\Phi) = 2 R(\Phi) + 1. \end{aligned}\end{equation} In the case of a trace-preserving $\Phi$, we have in particular that \begin{equation}\begin{aligned} S(\Phi) = \norm{\Phi}{\dia} \end{aligned}\end{equation} and an optimal $\Lambda$ for the simulation can be chosen to satisfy $\Lambda \in {\mathrm{CPTP}}$. \end{theorem} \end{boxed} \begin{proof} Let $\Lambda_{\pm} \in {\mathrm{CPTNI}}(A,B)$ be maps that achieve an optimal decomposition for $\Phi$ such that $\Phi=(1+R(\Phi))\Lambda_+ - R(\Phi)\Lambda_-$. 
Now, consider a Hermitian operator $X= \mu_+ \omega_+ - \mu_- \omega_- \in \mathbb{H}(A')$, not necessarily positive semidefinite, where $\omega_\pm$ are orthogonal quantum states and $\Tr[X]= \mu_+ - \mu_- =1$. We do not impose any additional conditions on the size of the ancillary system $A'$, meaning that its Hilbert space can be chosen to be an arbitrary space of dimension at least 2. Defining the projector onto the positive part of $X$ as $P_+$, we then consider the map defined by the action on a basis $\ketbra{i_1}{j_1}\otimes \ketbra{i_2}{j_2}\in \mathbb{L}(A \otimes A')$ as follows: \begin{equation}\begin{aligned} \Lambda(\ketbra{i_1}{j_1}\otimes\ketbra{i_2}{j_2})&:=\Tr\left[\frac{P_+}{\mu_+}\ketbra{i_2}{j_2}\right]\Phi(\ketbra{i_1}{j_1})+\Tr\left[\left(\mathbbm{1}-\frac{P_+}{\mu_+}\right)\ketbra{i_2}{j_2}\right]\Lambda_-(\ketbra{i_1}{j_1})\\ &= (1+R(\Phi))\Tr\left[\frac{P_+}{\mu_+}\ketbra{i_2}{j_2}\right]\Lambda_+(\ketbra{i_1}{j_1})\\ &\quad +\Tr\left[\left(\mathbbm{1}-(1+R(\Phi))\frac{P_+}{\mu_+}\right)\ketbra{i_2}{j_2}\right]\Lambda_-(\ketbra{i_1}{j_1}). \label{eq:direct construction} \end{aligned}\end{equation} It is easy to check that $\Lambda(\rho\otimes X)=\Phi(\rho)$. We will now show that if the condition \begin{eqnarray} (1+R(\Phi))\Tr\left[\frac{P_+}{\mu_+}\rho\right]\leq 1\; \forall \rho \label{eq:direct positive} \end{eqnarray} is satisfied, then $\Lambda$ is also CPTNI. This can be seen by observing first that \eqref{eq:direct positive} gives \begin{eqnarray} 0\leq \tilde{P} \coloneqq (1+R(\Phi))\frac{P_+}{\mu_+}\leq \mathbbm{1}, \end{eqnarray} which implies that $\tilde{P}$ is a valid POVM element.
Note that we can rewrite \eqref{eq:direct construction} as \begin{eqnarray} \Lambda&=& \Lambda_+\otimes T_{\tilde{P}} +\Lambda_-\otimes T_{\mathbbm{1}-\tilde{P}} \\ &=& (\mathbbm{1}\otimes T_{\tilde{P}})(\Lambda_+\otimes\mathbbm{1}) +(\mathbbm{1}\otimes T_{\mathbbm{1}-\tilde{P}})(\Lambda_-\otimes\mathbbm{1}) \end{eqnarray} where $T_{P}(\cdot):=\Tr\left[P\cdot \right]$. Since $\Lambda_+$, $\Lambda_-$, and $T_{\tilde{P}}$, $T_{\mathbbm{1}-\tilde{P}}$ are all completely positive, $\Lambda$ is also completely positive; it is moreover trace non-increasing, since $\Lambda^\dagger(\mathbbm{1}) = \Lambda_+^\dagger(\mathbbm{1})\otimes\tilde{P} + \Lambda_-^\dagger(\mathbbm{1})\otimes(\mathbbm{1}-\tilde{P}) \leq \mathbbm{1}$. Since \eqref{eq:direct positive} is always satisfied when \begin{eqnarray} \mu_+ \geq 1+R(\Phi), \end{eqnarray} an operator $X$ with $\norm{X}{1} = \mu_+ + \mu_- = 1+2 R(\Phi)$ achieves the desired implementation. The converse part can be proven by extending an argument in Ref.~\cite{diaz_2018-2} to our setting. Suppose a non-quantum resource $X=\mu_+ \omega_+ - \mu_- \omega_-$ and a CPTNI map $\Lambda$ realise the simulation of $\Phi$, i.e. $\Phi(\cdot) = \Lambda\left(\cdot\otimes X\right)$. Also, define $\Lambda_+(\cdot) \coloneqq\Lambda(\cdot\otimes \omega_+)$. Then, by linearity of $\Lambda$, we get \begin{eqnarray} \Lambda_+(\cdot)&=&\frac{1}{\mu_+}\Lambda(\cdot\otimes X)+\frac{1}{\mu_+}\Lambda(\cdot\otimes \mu_- \omega_-)\\ &=& \frac{1}{\mu_+}\Phi(\cdot)+\frac{\mu_-}{\mu_+}\Lambda(\cdot\otimes \omega_-). \end{eqnarray} Since $\Lambda_+$ and $\Lambda_- \coloneqq \Lambda(\cdot\otimes \omega_-)$ are CPTNI maps, this is a valid linear decomposition of $\Phi$ into two CPTNI maps, providing an upper bound for its robustness as $R(\Phi)\leq \mu_- = \mu_+-1$. This gives the desired lower bound for the simulation cost as $\mu_+ + \mu_- \geq 1 + 2 R(\Phi)$. \end{proof} An interesting quantitative equivalence emerges between our approach and the method of Ref.~\cite{jiang_2020}.
In that work, the authors showed that the minimal overhead required to employ quasiprobability-based simulation techniques~\cite{pashayan_2015,temme_2017} to estimate $\Tr[\Phi(\rho)A]$ for a trace-preserving map $\Phi$ scales with the norm $\norm{\Phi}{\bnorm}$ (see also the discussion in Sec.~\ref{sec:examples_inverses}). Since we know from Thm.~\ref{thm:rob_diamond} that \begin{equation}\begin{aligned} 2R(\Phi) + 1 = \norm{\Phi}{\bnorm} = \norm{\Phi}{\dia} \end{aligned}\end{equation} holds for any trace-preserving map, the quantitative cost of that simulation scheme is actually the same as that of our method, despite the seemingly different approaches employed. In fact, our Thm.~\ref{thm:cost} shows that it is sufficient to consider decompositions of $\Phi$ as \begin{equation}\begin{aligned}\label{eq:quasiprob_ancilla} \Phi(\cdot) = \mu_+ \Lambda(\cdot \otimes \omega_+) - \mu_- \Lambda(\cdot \otimes \omega_-) \end{aligned}\end{equation} where $\Lambda$ and $X = \mu_+ \omega_+ - \mu_- \omega_-$ are as constructed in our protocol. This means that, despite the significant practical simplification obtained by fixing the dynamics of the simulator as $\Lambda$ and optimising over the quasiprobability representations of $X$ instead, our simulation method does not sacrifice any performance, and the optimal sampling overhead cost of the more direct approach of \cite{jiang_2020} cannot be any better. We note that Theorem~\ref{thm:cost} gives a general way of reducing the task of simulating the action of a linear map $\Phi$ to simulating an affine combination of states in the form of the operator $X$. This could provide methods for the simulation of dynamics even beyond quasiprobability-based approaches like the one discussed above, although the specifics of this will depend on the given simulation method.
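To make the ancilla-based construction concrete, the following NumPy sketch is our own illustration (not taken from the paper); it assumes the Choi convention $J_\Phi = \sum_{ij}\ketbra{i}{j}\otimes\Phi(\ketbra{i}{j})$ and uses the qubit transposition map $T$, for which $R(T) = \tfrac12$, with ancilla operator $X = \tfrac32\proj{0} - \tfrac12\proj{1}$:

```python
import numpy as np

# Choi convention: J_Phi = sum_{ij} |i><j| (x) Phi(|i><j|),
# so that Phi(rho) = Tr_A[(rho^T (x) I_B) J_Phi].
def apply_choi(J, rho):
    dA = rho.shape[0]; dB = J.shape[0] // dA
    M = (np.kron(rho.T, np.eye(dB)) @ J).reshape(dA, dB, dA, dB)
    return np.einsum('abad->bd', M)   # partial trace over A

SWAP = np.eye(4)[[0, 2, 1, 3]]        # Choi operator of the qubit transpose T
J_p = (np.eye(4) + SWAP) / 3          # Choi operator of a CPTP map Lambda_+
J_m = np.eye(4) - SWAP                # Choi operator of a CPTP map Lambda_-
R = 0.5                               # robustness of the qubit transpose, (d_A - 1)/2

rho = np.array([[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]])
# optimal decomposition T = (1+R) Lambda_+ - R Lambda_-
assert np.allclose((1+R)*apply_choi(J_p, rho) - R*apply_choi(J_m, rho), rho.T)

# ancilla simulation with X = (1+R)|0><0| - R|1><1|, so ||X||_1 = 1 + 2R
X = np.diag([1 + R, -R])
P = np.diag([1.0, 0.0])               # the tilde-P of the proof reduces to |0><0| here
def Lam(rho, sigma):                  # the CPTP simulator, evaluated on rho (x) sigma
    return (np.trace(P @ sigma) * apply_choi(J_p, rho)
            + np.trace((np.eye(2) - P) @ sigma) * apply_choi(J_m, rho))

assert np.allclose(Lam(rho, X), rho.T)   # Lambda(rho (x) X) = T(rho)
```

Here $\norm{X}{1} = 2R(T) + 1 = 2 = \norm{T}{\dia}$, as guaranteed by Thm.~\ref{thm:cost}.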
\paragraph{State injection and resource simulation.} The setting considered here is closely related to state injection methods which generalise quantum teleportation~\cite{bennett_1993} and find use e.g.\ in the resource theories of entanglement~\cite{berta_2013,pirandola_2017,wilde_2018,gour_2019,bauml_2019}, stabiliser-state quantum computation~\cite{gottesman_1999,seddon_2019}, and coherence~\cite{bendana_2017,diaz_2018-2}. In such tasks, a resourceful state $\phi$ (such as a maximally entangled singlet) is used to simulate the action of an arbitrary quantum channel $\Theta$ as $\Theta(\cdot) = \Gamma(\cdot \otimes \phi)$, where now $\Gamma$ is a free operation (such as a protocol consisting of local operations and classical communication only). In this sense, our result can be thought of as the cost of channel simulation in the resource theory of ``non-physicality'' beyond quantum mechanics, with the operator $X$ acting as a resource. There are many potential ways to interpret such a result: for instance, unit-trace Hermitian operators which are not necessarily positive semidefinite have found use as so-called pseudo-states in~\cite{geller_2014}, where they were used to study correlations beyond quantum mechanics, and as so-called pseudo-density matrices in~\cite{fitzsimons_2015}, where they were used to put spatial and temporal correlations on equal footing. Being able to use a Hermitian system $X$ could then be interpreted as having access to such extended sets of correlations. We leave a precise investigation of the connections between the operational setting employed here and resource theories of correlations for future work. \paragraph{Amortised simulation.} A related setting that we can consider is that of \textit{amortised} simulation \cite{kaur_2017,diaz_2018-2}, in which the non-quantum resource $X$ is not consumed completely, but instead we can recover some of it in the form of another resource $Y$ which can be reused. 
Precisely, we define \begin{equation}\begin{aligned} S_A(\Phi) = \min \left\{\left. \frac{\norm{X}{1}}{\norm{Y}{1}} \bar \Lambda (\cdot \otimes X ) = \Phi(\cdot) \otimes Y, \;\Tr(X) = \Tr(Y) = 1,\; \Lambda \in {\mathrm{CPTNI}} \right\}. \end{aligned}\end{equation} \br{Clearly, $S_A(\Phi) \leq S(\Phi)$ as we can just take $X$ to be optimal for $S$ and $Y$ to be the trivial system $1$. One could expect amortisation to lead to a strictly smaller cost of simulating a given map. However, we can show that this is not the case --- amortisation cannot improve the simulation cost of any trace-preserving map.} \begin{boxed}{white} \begin{corollary}\label{corr:cost_amo} For any trace-preserving map $\Phi \in \H(A,B)$, it holds that \begin{equation}\begin{aligned} S_A(\Phi) = S(\Phi) = \norm{\Phi}{\dia}. \end{aligned}\end{equation} \end{corollary} \end{boxed} \begin{proof} Let $\Lambda$ be the optimal map such that $\Lambda (\cdot \otimes X ) = \Phi(\cdot) \otimes Y$ with $S_A(\Phi) = \norm{X}{1}/\norm{Y}{1}$. \br{Noting that this can be alternatively understood as a simulation protocol for the trace-preserving map $\Phi(\cdot) \otimes Y$, Thm.~\ref{thm:cost} tells us that any such protocol satisfies} \begin{equation}\begin{aligned} \norm{X}{1} &\geq S\left( \Phi(\cdot) \otimes Y \right)\\ &= \norm{ \Phi(\cdot) \otimes Y }{\dia}\\ &= \norm{ \Phi}{\dia} \norm{Y}{1}\\ &= S (\Phi) \norm{Y}{1} \end{aligned}\end{equation} where we used the multiplicativity of the diamond norm and the fact that $\norm{Y}{\dia}=\norm{Y}{1}$ where we treat $Y$ as a preparation channel with a trivial input space. From this we have that $S_A(\Phi) \geq S(\Phi)$, which concludes the proof. \end{proof} \section{Quantifying advantages in quantum games}\label{sec:games} The study of general linear maps in a resource-theoretic setting motivates the question: is there a well-defined operational task in which having access to \emph{any} non-CPTP map could provide practical advantages over all quantum channels? 
In order to give an instance of such a task, we consider the setting of input-output games, inspired by the work of Ref.~\cite{rosset_2018} and studied in the context of dynamical quantum resources in~\cite{uola_2020,yuan_2020}. The setting is as follows: Alice prepares a state chosen randomly from the ensemble $\{p_i, \sigma_i\}_i$ and sends the state through the map $\Phi \in \H(A,B)$ to Bob, who then measures with a POVM $\{M_j\}_j$. The players are then awarded a score based on a reward function characterised by the coefficients $\{w_{ij}\}_{i,j} \in \mathbb{R}$, and their goal is to maximise the average payoff given by \begin{equation}\begin{aligned} P(\Phi, \{p_i, \sigma_i\}, \{M_j\}, \{w_{ij}\}) = \sum_{i,j} w_{ij} p_i \< M_j, \Phi(\sigma_i) \> \end{aligned}\end{equation} by a suitable choice of the states and measurements. The tuple $\mathcal{G} = (\{p_i, \sigma_i\}, \{M_j\}, \{w_{ij}\})$ then defines the \textit{input-output game} $\mathcal{G}$. We stress that, although the payoff $P(\Phi, \mathcal{G})$ might lose its physical meaning as a discrimination task when $\Phi$ is an arbitrary linear map, already for a \textit{positive} trace-preserving map $\Phi$ we have that every output $\Phi(\sigma_i)$ is indeed a valid density matrix and thus the measurement at the output constitutes a well-defined state discrimination task. We are then interested in quantifying the best possible advantage that a given map $\Phi$ could provide over CPTP maps. Such an optimisation is unbounded without any further constraints, so we will consider games for which any completely positive map $\Gamma$ achieves a non-negative payoff value --- this can always be ensured by suitably shifting the payoff function for a given game. We then have the following. 
\begin{boxed}{white} \begin{theorem}\label{thm:operational} For any map $\Phi \in \H(A,B)$, it holds that \begin{equation}\begin{aligned}\label{eq:operational_diam} \sup_{\mathcal{G}} \frac{P(\Phi, \mathcal{G})}{\max_{\Lambda \in {\mathrm{CPTP}}} P(\Lambda, \mathcal{G})} = R'(\Phi) + 1 \end{aligned}\end{equation} where the maximisation is over all input-output games $\mathcal{G}$ such that $P(\Gamma,\mathcal{G}) \geq 0 \; \forall \Gamma \in {\mathrm{CP}}$. In the case of a trace-preserving $\Phi$, we have in particular that \begin{equation}\begin{aligned} \sup_{\mathcal{G}} \frac{P(\Phi, \mathcal{G})}{\max_{\Lambda \in {\mathrm{CPTP}}} P(\Lambda, \mathcal{G})} = \frac{\norm{\Phi}{\dia}+1}{2}, \end{aligned}\end{equation} and it suffices to optimise over games such that $P(\Lambda,\mathcal{G}) \geq 0 \; \forall \Lambda \in {\mathrm{CPTP}}$. \end{theorem} \end{boxed} \begin{proof} Any $\Phi$ can be written as $\Phi = (1+R'(\Phi)) \Lambda - \Gamma$ where $\Lambda \in {\mathrm{CPTP}}$, $\Gamma \in {\mathrm{CP}}$. On the one hand, we then have for any $\mathcal{G}$ that \begin{equation}\begin{aligned} P(\Phi, \mathcal{G}) &= (1+R'(\Phi)) \,P(\Lambda, \mathcal{G}) -P(\Gamma, \mathcal{G})\\ &\leq (1+R'(\Phi))\, P(\Lambda, \mathcal{G})\\ &\leq (1+R'(\Phi)) \max_{\Lambda \in {\mathrm{CPTP}}} P(\Lambda, \mathcal{G}) \end{aligned}\end{equation} where the first inequality follows since $P(\Gamma,\mathcal{G}) \geq 0 \; \forall \Gamma \in {\mathrm{CP}}$, which shows that the left-hand side of Eq.~\eqref{eq:operational_diam} is upper-bounded by the right-hand side. By Thm.~\ref{thm:rob_diamond}, in the case of a trace preserving map $\Phi$ we can equivalently write $\Phi = (1+R'(\Phi)) \Lambda_+ - R'(\Phi) \Lambda_-$ where $\Lambda_\pm \in {\mathrm{CPTP}}$, so one only needs to consider games such that $P(\Lambda,\mathcal{G}) \geq 0 \; \forall \Lambda \in {\mathrm{CPTP}}$. 
On the other hand, by strong Lagrange duality (see App.~\ref{app:duals}) we can write \begin{equation}\begin{aligned}\label{eq:rob_dual} R'(\Phi) + 1 = \max \left\{\left. \< W, J_\Phi \> \bar 0 \leq W \leq \rho \otimes \mathbbm{1}_B,\; \rho \in \mathbb{D}(A) \right\}. \end{aligned}\end{equation} We can then make the following observations. Firstly, since the set of separable states in $\mathbb{D}(A\otimes B)$ has a non-empty interior~\cite{zyczkowski_1998}, any Hermitian operator $X$ can be written as $X = \sum_{i=1}^{n} x_i\, \sigma_i \otimes \eta_i$ for some $\sigma_i \in \mathbb{D}(A)$, $\eta_i \in \mathbb{D}(B)$, $x_i \in \mathbb{R}$, and $n \in \NN$. Then, choose the optimal $W$ in Eq.~\eqref{eq:rob_dual} and write $W^{T_A} = \sum_{i=1}^{n} x_i\, \sigma_i \otimes \eta_i$, where $T_A$ denotes the partial transpose. Defining the set $\{M_i\}_{i=1}^{n+1}$ by $M_i \coloneqq \eta_i / \norm{\sum_j \eta_j}{\infty}$ for $i\leq n$ and $M_{n+1} = \mathbbm{1} - \sum_{i=1}^{n} M_i$, we have that \begin{equation}\begin{aligned} W = \sum_i p_i w_i \, \sigma^T_i \otimes M_i \end{aligned}\end{equation} where $p_i = 1/n$ for $i \leq n$ and $p_{n+1} = 0$, and the coefficients $w_i$ are defined by $w_i = x_i n \norm{\sum_j \eta_j}{\infty}$. By the Choi-Jamiołkowski isomorphism and the linearity of $\Phi$, we then have for $\Phi$ that \begin{equation}\begin{aligned} \< W, J_\Phi \> = \sum_i p_i w_i \< M_i, \Phi(\sigma_i) \> = P(\Phi, \mathcal{G}') \end{aligned}\end{equation} with $\mathcal{G}'$ defined by the above choices of $\{p_i, \sigma_i\}$, $\{M_i\}$, and $\{w_i\}$. 
Noticing that $W \geq 0 \Rightarrow P(\Gamma, \mathcal{G}') \geq 0 \; \forall \Gamma \in {\mathrm{CP}}$, this finally gives \begin{equation}\begin{aligned} \sup_{\mathcal{G}} \frac{P(\Phi, \mathcal{G})}{\max_{\Lambda \in {\mathrm{CPTP}}} P(\Lambda, \mathcal{G})} &\geq \frac{P(\Phi, \mathcal{G}')}{\max_{\Lambda \in {\mathrm{CPTP}}} P(\Lambda, \mathcal{G}')}\\ &\geq \frac{R'(\Phi) + 1}{{\max_{\Lambda \in {\mathrm{CPTP}}} R'(\Lambda) + 1}} \\ &= R'(\Phi) + 1 \end{aligned}\end{equation} where the second inequality follows since $P(\Phi, \mathcal{G}') = \<W, J_\Phi \> = R'(\Phi)+1$ holds by assumption while $P(\Lambda, \mathcal{G}') = \<W, J_\Lambda \> \leq R'(\Lambda)+1$ holds for any map $\Lambda$ by definition, and the last equality follows since $R'(\Lambda) = 0$ for any $\Lambda \in {\mathrm{CPTP}}$. \end{proof} \section{General bounds}\label{sec:bounds} Useful bounds for the measures can be obtained by relating them with norms or quantities computed at the level of the Choi operator $J_\Phi$, avoiding an optimisation over all CPTNI or CPTP maps. For instance, the following relation with the trace norm generalises known bounds for the diamond norm~\cite{kliesch_2017,nechita_2018} (see also~\cite{jiang_2020}). \begin{boxed}{white} \begin{proposition}\label{prop:bounds_trace_norm} For any Hermiticity-preserving map $\Phi \in \H(A,B)$, decompose $J_\Phi$ into its positive and negative parts as $J_\Phi = {J_\Phi}_+ - {J_\Phi}_-$ with ${J_\Phi}_\pm \geq 0$. Then \begin{equation}\begin{aligned} \norm{J_\Phi}{1} &\geq \norm{\Phi}{\bnorm} \geq \frac{1}{d_A} \norm{J_\Phi}{1},\\ \max \big\{ \Tr {J_\Phi}_+ - 1,\, \Tr {J_\Phi}_- \big\} &\geq R(\Phi) \geq \max \left\{ \frac{1}{d_A} \Tr {J_\Phi}_+ - 1,\, \frac{1}{d_A} \Tr {J_\Phi}_- \right\},\\ \Tr {J_\Phi}_+ - 1 &\geq R'(\Phi) \geq \frac{1}{d_A} \Tr {J_\Phi}_+ - 1,\\ \Tr {J_\Phi}_- &\geq R''(\Phi) \geq \frac{1}{d_A} \Tr {J_\Phi}_-. 
\end{aligned}\end{equation} \end{proposition} \end{boxed} \begin{proof} Consider $\norm{\cdot}{\bnorm}$ first. Using the expression \begin{equation}\begin{aligned} \norm{J_\Phi}{1} &= \min \left\{\left. \mu_+ + \mu_- \bar J_\Phi = \mu_+ \omega_+ - \mu_- \omega_-,\; \omega_{\pm} \in \mathbb{D}(A \otimes B) \right\} \end{aligned}\end{equation} we see that any such decomposition provides a feasible solution for $\norm{\Phi}{\bnorm}$, since $\omega_\pm$ constitute valid Choi operators of maps $\Omega_\pm \in {\mathrm{CPTNI}}(A,B)$. The first inequality thus follows. The second inequality is a consequence of the bound $\norm{\Phi}{\bnorm} \geq \norm{\Phi}{\dia}$ from Cor.~\ref{cor:dia_cptni_ineq} and the fact that $\frac{1}{d_A} \norm{J_\Phi}{1}$ is known to lower bound the diamond norm (see e.g.~\cite{kliesch_2017,nechita_2018}). It can also be explicitly seen by noting that any decomposition of the form $J_\Phi = \lambda_+ J_{\Lambda_+} - \lambda_- J_{\Lambda_-}$ with $\Lambda_\pm \in \CPTNI$ can provide a decomposition for the trace norm by rescaling each $J_{\Lambda_\pm}$ by its trace; specifically, \begin{equation}\begin{aligned} \norm{J_{\Phi}}{1} \leq \lambda_+ \Tr J_{\Lambda_+} + \lambda_- \Tr J_{\Lambda_-}, \end{aligned}\end{equation} and using the fact that $\Tr J_{\Lambda_\pm} \leq d_A \norm{J_{\Lambda_\pm}}{\infty} \leq d_A$ gives the desired bound. The case of the robustness measures $R', R''$ follows analogously, where we now use the fact that $\Tr X _+ = \min \left\{\left. \mu \bar X \leq \mu \rho, \; \rho \in \mathbb{D} \right\}$ and $\Tr X_- = \min \left\{\left. \mu \bar X + \mu \rho \geq 0, \; \rho \in \mathbb{D} \right\}$ for any Hermitian $X$. For the robustness $R$, take $\lambda$ to be the greater of $\Tr J_{\Phi_+}-1$ and $\Tr J_{\Phi_-}$, and write \begin{equation}\begin{aligned} J_{\Phi} + \lambda \frac{{J_\Phi}_-}{\lambda} = (1 + \lambda) \frac{{J_\Phi}_+}{1+\lambda}. 
\end{aligned}\end{equation} Since ${J_\Phi}_-/\lambda$ and ${J_\Phi}_+/(1+\lambda)$ are Choi operators of CPTNI maps, this provides a valid feasible solution for $R$. On the other hand, $R(\Phi) \geq \max\{ R'(\Phi), R''(\Phi)\}$ by definition, from which the lower bound follows. \end{proof} Both the upper and the lower bounds in Prop.~\ref{prop:bounds_trace_norm} can be tight, as was shown already for the diamond norm~\cite{michel_2018}. However, better upper bounds can be obtained as follows. \begin{boxed}{white} \begin{proposition}\mbox{}\label{prop:bounds_upper} For any Hermiticity-preserving map $\Phi \in \H(A,B)$, it holds that \begin{equation}\begin{aligned} \norm{\Phi}{\dia} & \leq \lambda_{\max}(\Tr_B [{J_\Phi}_+ + {J_\Phi}_-]),\\ \norm{\Phi}{\bnorm} &\leq \lambda_{\max}(\Tr_B {J_\Phi}_+) + \lambda_{\max}(\Tr_B {J_\Phi}_-),\\ R(\Phi) &\leq \max \big\{ \lambda_{\max}(\Tr_B {J_\Phi}_+) - 1,\; \lambda_{\max}(\Tr_B {J_\Phi}_-) \big\},\\ R'(\Phi) &\leq \lambda_{\max}(\Tr_B {J_\Phi}_+) -1,\\ R''(\Phi) &\leq \lambda_{\max}(\Tr_B {J_\Phi}_-).\\ \end{aligned}\end{equation} \end{proposition} \end{boxed} We note that the bound for the diamond norm, which we stated above for completeness, appeared previously in~\cite{nechita_2018}. \begin{proof} \br{The bounds for $\norm{\cdot}{\dia}$, $\norm{\cdot}{\bnorm}$, $R'$ and $R''$} follow simply by using $J_\Phi = {J_\Phi}_+ - {J_\Phi}_-$ as feasible solutions \br{in the definitions.} For the robustness $R$, take $\lambda$ to be the greater of $\lambda_{\max}(\Tr_B {J_\Phi}_+)-1$ and $\lambda_{\max}(\Tr_B {J_\Phi}_-)$, and write \begin{equation}\begin{aligned} J_{\Phi} + \lambda \frac{{J_\Phi}_-}{\lambda} = (1 + \lambda) \frac{{J_\Phi}_+}{1+\lambda}. \end{aligned}\end{equation} \br{Since this is a feasible solution for $R$, we get $R(\Phi) \leq \lambda$.} \end{proof} As for lower bounds, we will first need to establish dual expressions for the considered measures.
The following Proposition is an application of standard convex duality arguments, and we include details in Appendix~\ref{app:duals} for completeness. \begin{boxed}{white} \begin{proposition}\label{prop:duals} For any $\Phi \in \H(A,B)$, the following dual expressions hold. \begin{equation}\begin{aligned}\label{eq:duals} \norm{\Phi}{\dia} &= \max \left\{\left. \< J_\Phi, W \> \bar - \rho \otimes \mathbbm{1}_B \leq W \leq \rho \otimes \mathbbm{1}_B,\; \rho \in \mathbb{D}(A) \right\}\\ % \norm{\Phi}{\bnorm} &= \max \left\{\left. \< J_\Phi, W \> \bar - \rho \otimes \mathbbm{1}_B \leq W \leq \sigma \otimes \mathbbm{1}_B,\; \rho, \sigma \in \mathbb{D}(A) \right\}\\ % R(\Phi) &= \max \left\{\left. \< J_\Phi, W \> - \Tr Y \bar - X \otimes \mathbbm{1}_B \leq W \leq Y \otimes \mathbbm{1}_B,\; X, Y \geq 0,\; \Tr (X + Y) = 1 \right\}\\ % R'(\Phi) &= \max \left\{\left. \< J_\Phi, W \> - 1 \bar 0 \leq W \leq \rho \otimes \mathbbm{1}_B,\; \rho \in \mathbb{D}(A) \right\}\\ % R''(\Phi) &= \max \left\{\left. \< J_\Phi, W - \rho \otimes \mathbbm{1}_B \> \bar 0 \leq W \leq \rho \otimes \mathbbm{1}_B,\; \rho \in \mathbb{D}(A) \right\}\\ &= \max \left\{\left. \< J_\Phi, W \> - \Tr \Phi(\rho) \bar 0 \leq W \leq \rho \otimes \mathbbm{1}_B,\; \rho \in \mathbb{D}(A) \right\}.\\ \end{aligned}\end{equation} \end{proposition} \end{boxed} We can then obtain lower bounds by employing the dual optimisation problems. The bound for the diamond norm is well known~\cite{watrous_2004}, but we find it is insightful to rederive it using this approach\footnote{We remark the curious fact that, despite the apparent similarity, the bound for the diamond norm in Prop.~\ref{prop:bounds_lower} is not the induced Schatten norm $\norm{\cdot}{1\to 1}$, as the latter requires an optimisation over non-Hermitian input operators even when the map $\Phi$ is Hermiticity-preserving~\cite{watrous_2004}.}. 
\begin{boxed}{white} \begin{proposition}\label{prop:bounds_lower} For any $\Phi \in \H(A,B)$ and any input state $\rho \in \mathbb{D}(A)$, let $\Phi(\rho)_{\pm}$ denote the positive/negative part of the output operator $\Phi(\rho)$. Then \begin{equation}\begin{aligned} \norm{\Phi}{\bnorm} \geq \norm{\Phi}{\dia} &\geq \max \left\{\left. \norm{\Phi(\rho)}{1} \bar \rho \in \mathbb{D}(A) \right\}\\ &\geq \norm{\Tr_B J_\Phi}{\infty}\\ % \norm{\Phi}{\bnorm} &\geq \max \left\{\left. \Tr \Phi(\rho)_- + \Tr \Phi(\sigma)_+ \bar \rho, \sigma \in \mathbb{D}(A) \right\}\\ &\geq \lambda_{\max} [(\Tr_B{J_\Phi})_+] + \lambda_{\max} [(\Tr_B{J_\Phi})_-]\\ % R(\Phi) &\geq \max \left\{\left. \Tr \Phi(X)_- + \Tr \Phi(Y)_+ - \Tr Y \bar X, Y \geq 0,\; \Tr (X + Y) = 1 \right\}\\ &\geq \max \Big\{ \lambda_{\max} [(\Tr_B{J_\Phi})_+] - 1,\, \lambda_{\max} [(\Tr_B{J_\Phi})_-] \Big\}\\ % R'(\Phi) &\geq \max \left\{\left. \Tr \Phi(\rho)_+ - 1 \bar \rho \in \mathbb{D}(A) \right\}\\ &\geq \lambda_{\max} [(\Tr_B{J_\Phi})_+] - 1\\ % R''(\Phi) &\geq \max \left\{\left. \Tr \Phi(\rho)_- \bar \rho \in \mathbb{D}(A) \right\}\\ &\geq \lambda_{\max} [(\Tr_B{J_\Phi})_-].\\ \end{aligned}\end{equation} \end{proposition} \end{boxed} \begin{proof} Consider the diamond norm first. The main idea is to restrict the optimisation in the dual expression of $\norm{\cdot}{\dia}$ in \eqref{eq:duals} to operators of the form $W = \rho \otimes Z$ for some operator $Z$. Then we have \begin{equation}\begin{aligned} \norm{\Phi}{\dia} &\geq \max \left\{\left. \< J_\Phi, \rho \otimes Z \> \bar - \mathbbm{1}_B \leq Z \leq \mathbbm{1}_B,\; \rho \in \mathbb{D}(A) \right\}\\ &= \max \left\{\left. \< \Phi(\rho^T), Z \> \bar - \mathbbm{1}_B \leq Z \leq \mathbbm{1}_B,\; \rho \in \mathbb{D}(A) \right\}\\ &= \max \left\{\left. \norm{\Phi(\rho^T)}{1} \bar \; \rho \in \mathbb{D}(A) \right\}, \end{aligned}\end{equation} where the second line follows by the Choi-Jamiołkowski isomorphism.
Taking $Z \in \{ \mathbbm{1}, -\mathbbm{1} \}$, we get the lower bound \begin{equation}\begin{aligned} \norm{\Phi}{\dia} &\geq \max \left\{\left. \pm \< \Tr_B J_\Phi, \rho \> \bar \rho \in \mathbb{D}(A) \right\}\\ &= \norm{\Tr_B J_\Phi}{\infty}. \end{aligned}\end{equation} In the case of $\norm{\cdot}{\bnorm}$, we use feasible solutions of the form $W = \rho \otimes Z + \sigma \otimes V$ with $-\mathbbm{1} \leq Z \leq 0$ and $0 \leq V \leq \mathbbm{1}$ to obtain the stated bound analogously --- the crucial observation being that $\max \left\{\left. \< A, B \> \bar 0 \leq B \leq \mathbbm{1} \right\} = \Tr A_+$ for any Hermitian $A$. The other measures follow in the same way. \end{proof} Note the similarity between the eigenvalue-based lower bounds of Prop.~\ref{prop:bounds_lower} and the upper bounds of Prop.~\ref{prop:bounds_upper}: the upper bounds consider the eigenvalues after decomposing $J_\Phi$ as ${J_\Phi}_+ - {J_\Phi}_-$, while the lower bounds use the positive and negative parts of $\Tr_B J_\Phi$. An immediate consequence is that for any completely positive map $\Phi$, it holds that \begin{equation}\begin{aligned}\label{eq:equality_CP} \norm{\Phi}{\bnorm} = \norm{\Phi}{\dia} = \norm{\Tr_B J_\Phi}{\infty} \end{aligned}\end{equation} since the operators $J_\Phi$ and $\Tr_B J_\Phi$ are both positive semidefinite. However, the lower bounds allow us to show explicitly that the equality $\norm{\Phi}{\bnorm} = \norm{\Phi}{\dia}$ is no longer true for maps which are neither CP nor trace preserving, and in fact the extreme disparity of $\norm{\Phi}{\bnorm} = 2 \norm{\Phi}{\dia}$ (cf.\ Cor.~\ref{cor:dia_cptni_ineq}) can be achieved. Consider for instance the case when \begin{equation}\begin{aligned} \Phi(\cdot) = \braket{0|\cdot|0} \proj{0} - \braket{1|\cdot|1} \proj{1}. 
\end{aligned}\end{equation} Decomposing $J_{\Phi} = \proj{0} \otimes \proj{0} - \proj{1} \otimes \proj{1}$ into its positive and negative parts, the bound of Prop.~\ref{prop:bounds_upper} gives $\norm{\Phi}{\dia} \leq 1$. However, the best upper bound we get for $\norm{\Phi}{\bnorm}$ is 2, and it is indeed tight: we have $\Phi(\proj{0}) = \proj{0}$ and $\Phi(\proj{1}) = - \proj{1}$, and so Prop.~\ref{prop:bounds_lower} gives $\norm{\Phi}{\bnorm}\geq 2$. A similar argument can be used to show that $R(\Phi) = 1$, which in particular implies that $2 R(\Phi) + 1 > \norm{\Phi}{\bnorm} > \norm{\Phi}{\dia}$. All of the bounds that we established in this section can be tight, as we shall demonstrate in what follows. \section{Applications and examples}\label{sec:applications} \subsection{Positive maps and structural physical approximation} Positive maps constitute a fundamental way to detect and characterise quantum entanglement~\cite{horodecki_1996-1,guhne_2009,horodecki_2009}. One of the most studied approaches to implementing such maps in practice is the structural physical approximation (SPA)~\cite{horodecki_2003-4,horodecki_2002-1}, which aims to approximate a given positive map $\Phi$ with a physical quantum channel by considering decompositions of the form $\Phi + \varsigma \mathcal{D}$, where $\mathcal{D}$ is the completely depolarising channel, $J_\mathcal{D} = \mathbbm{1} / d_B$. Such approximations have found use in both understanding the properties of positive maps~\cite{korbicz_2008,shultz_2015}, as well as in realising them in experiments~\cite{horodecki_2002-1,lim_2011,bae_2017}. Intuitively, the robustness measures can then be understood as different approaches to defining an optimised SPA to the map $\Phi$, by allowing channels other than the depolarising map to be used in the decomposition (cf.~\cite{jiang_2020}). We will now discuss the similarities and differences between the approaches by studying two representative examples of positive maps. 
\paragraph{Transposition map.} Consider first the transposition map $T \in \H(A,A)$. Letting ${\mathrm{SPA}}(T)$ denote the minimal amount $\varsigma$ needed for $(T + \varsigma \mathcal{D})/(1+\varsigma)$ to be a quantum channel, it can be easily verified that ${\mathrm{SPA}}(T) = d_A$. However, by making a more suitable choice of a channel in the optimisation, our robustness measures construct an approximation as $(T + \lambda \Lambda)/(1+\lambda)$ where $\lambda = \frac{1}{2}(d_A-1)$ already suffices to ensure that this is a valid physical channel. From this we see that $R(T) =\frac{1}{2}(d_A-1)$ and hence $\norm{T}{\bnorm} = d_A$. Quantitatively, the advantage gained by allowing arbitrary channels in such decompositions can therefore be significant. To understand why a better approximation can be obtained, let us take a closer look at the optimal decomposition for this map. Our generalised approach can take into consideration the fact that the Choi operator of the transposition map, $J_T$ (the swap operator), already has a non-trivial positive part, which means that there is no need to act on that part of the space. More specifically, a better approximation is obtained simply by defining the map $\displaystyle J_\Lambda = \frac{\mathbbm{1} - J_T}{d_A - 1} \in {\mathrm{CPTP}}$ and mixing as \begin{equation}\begin{aligned} J_T + \frac12 (d_A - 1) J_\Lambda \geq 0 \;\Rightarrow\; \frac{2}{d_A+1}\, T + \frac{d_A-1}{d_A+1} \,\Lambda \in {\mathrm{CPTP}}. \end{aligned}\end{equation} Structurally, this is not too different from the SPA --- the only maps involved in the combination are the depolarising channel and the transposition map itself, even if the optimal approximation is not simply a convex mixture of the two. 
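These claims are straightforward to verify numerically. The sketch below (assuming \texttt{numpy}, with $J_T$ the swap operator in the unnormalised Choi convention $J_\Phi = \sum_{ij} \ketbra{i}{j} \otimes \Phi(\ketbra{i}{j})$) checks that $\Lambda$ is trace preserving and that $J_T + \frac12 (d_A - 1) J_\Lambda = \frac12(\mathbbm{1} + J_T) \geq 0$:

```python
import numpy as np

d = 3  # illustration; the same check works for any d_A

# Choi matrix of transposition: J_T = sum_ij |i><j| (x) |j><i| (the swap)
J_T = sum(np.kron(np.outer(e_i, e_j), np.outer(e_j, e_i))
          for e_i in np.eye(d) for e_j in np.eye(d))

# the mixing channel J_Lambda = (1 - J_T)/(d - 1): completely positive,
# and Tr_B J_Lambda = 1 confirms trace preservation
J_L = (np.eye(d * d) - J_T) / (d - 1)
print(np.allclose(np.einsum('ikjk->ij', J_L.reshape(d, d, d, d)),
                  np.eye(d)))        # True

# lambda = (d-1)/2 suffices: J_T + (1/2)(d-1) J_Lambda = (1 + J_T)/2 >= 0
M = J_T + 0.5 * (d - 1) * J_L
print(np.linalg.eigvalsh(M).min())   # ~0 (PSD up to numerical precision)
```

Since the swap operator has eigenvalues $\pm 1$, the smallest eigenvalue of $\frac12(\mathbbm{1}+J_T)$ is exactly zero, so no smaller mixing weight can work with this $\Lambda$.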
Indeed, we could define an optimised structural physical approximation which allows for such decompositions to be used: \begin{equation}\begin{aligned}\label{eq:spa_modified} {\mathrm{SPA}}'(\Phi) \coloneqq& \min \lsetr \varsigma' \barr J_{\Phi} + \varsigma' \left[ \frac{\lambda_{\max}(J_\Phi) \mathbbm{1} - J_\Phi}{\lambda_{\max}(J_\Phi) d_B - 1}\right] \geq 0 \rsetr\\ =& - \lambda_{\min}(J_\Phi) \frac{d_B \lambda_{\max}(J_\Phi) - 1}{\lambda_{\max}(J_\Phi) - \lambda_{\min}(J_\Phi)}, \end{aligned}\end{equation} with the expression valid for any map such that $\lambda_{\min}(J_{\Phi}) < \lambda_{\max}(J_\Phi) \neq d_B^{-1}$. This can be used to give a general bound to the robustness measures. \begin{boxed}{white} \begin{proposition} For any trace-preserving map $\Phi \in \H(A,B)$ such that $\Phi \neq \mathcal{D}$, it holds that \begin{equation}\begin{aligned} R(\Phi) \leq {\mathrm{SPA}}'(\Phi) \leq {\mathrm{SPA}}(\Phi). \end{aligned}\end{equation} \end{proposition} \end{boxed} In the case of the transpose, it holds that ${\mathrm{SPA}}'(T)=R(T)=\frac{1}{2}(d_A-1)$, so we know that an optimal approximation of the transposition map can be realised with only the depolarising channel, as long as one considers the optimised approach of Eq.~\eqref{eq:spa_modified}. However, this is not the case for general maps, and the advantages offered by the generalised robustness approach can provide new insight into optimal approximations of maps, as we shall see in the following. \paragraph{Choi map.} The Choi map $\mathcal{C} \in \H(A,A)$ with $d_A = 3$ is an example of an indecomposable positive map, and is defined by~\cite{choi_1980} \begin{equation}\begin{aligned} \mathcal{C}(X) \coloneqq \begin{pmatrix} X_{11}+X_{22} & -X_{12} & -X_{13} \\ -X_{21} & X_{22}+X_{33} & -X_{23} \\ -X_{31} & -X_{32} & X_{33} + X_{11} \end{pmatrix} \end{aligned}\end{equation} where $X_{ij}$ denote the matrix elements of $X$ in a chosen basis. 
\br{A numerical evaluation shows} that the optimal decompositions for $\mathcal{C}$ give ${\mathrm{SPA}}(\mathcal{C}) = \frac{3}{2}$ and ${\mathrm{SPA}}'(\mathcal{C}) = \frac{2}{3}$. With the robustness, an improved choice can be obtained by choosing $\Lambda = \mathrm{id}$ and mixing as $J_\mathcal{C} + \frac{1}{6} J_{\mathrm{id}} \geq 0$, yielding $R(\mathcal{C}) = \frac{1}{6}$. Consequently, mixing with more general maps can not only provide quantitative improvements, but also identify ways of implementing non-CPTP maps which are impossible to find with the standard structural physical approximations. An interesting difference between the SPA- and robustness-based approaches is that the optimal SPA of the Choi map is a measure-and-prepare (entanglement-breaking) channel~\cite{korbicz_2008}, while the map obtained in the robustness-based approach is not (as can be verified with the PPT criterion). Since measure-and-prepare channels enjoy an easy implementation in practical settings, it would be an interesting extension of our approach to consider the extent of a quantitative advantage that can be maintained while requiring that the optimal CPTP approximation be entanglement breaking. We also note that another approach to realising positive maps was studied in Ref.~\cite{dong_2019} by using multiple copies of the input state, where a related SPA-based approximation was also considered. An extension of the methods of our work to this framework could provide additional insight into the implementability of positive maps. 
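As a closing numerical check of the values quoted above: note that $\mathcal{C}$ as written satisfies $\Tr \mathcal{C}(X) = 2 \Tr X$, so in the sketch below we assume the trace-preserving normalisation $\mathcal{C}/2$ (this rescaling is our assumption), under which the mixing condition $J_\mathcal{C} + \frac16 J_{\mathrm{id}} \geq 0$ holds exactly while any smaller coefficient fails:

```python
import numpy as np

def choi_map(X):
    # the Choi map C(X), divided by 2 so that it is trace preserving
    # (the stated definition gives Tr C(X) = 2 Tr X; the factor 1/2
    #  is our normalisation assumption when checking R(C) = 1/6)
    return 0.5 * np.array([
        [X[0, 0] + X[1, 1], -X[0, 1],          -X[0, 2]],
        [-X[1, 0],           X[1, 1] + X[2, 2], -X[1, 2]],
        [-X[2, 0],          -X[2, 1],            X[2, 2] + X[0, 0]]])

d = 3
E = [[np.outer(np.eye(d)[i], np.eye(d)[j]) for j in range(d)]
     for i in range(d)]
J_C  = sum(np.kron(E[i][j], choi_map(E[i][j]))
           for i in range(d) for j in range(d))
J_id = sum(np.kron(E[i][j], E[i][j]) for i in range(d) for j in range(d))

# R(C) = 1/6: the coefficient 1/6 makes the mixture PSD, 1/7 does not
print(np.linalg.eigvalsh(J_C + J_id / 6).min())   # ~0
print(np.linalg.eigvalsh(J_C + J_id / 7).min())   # negative
```
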
\subsection{Inverse quantum channels}\label{sec:examples_inverses} A fundamentally important case of a non-CPTP map encountered in many settings is the inverse linear map of a bijective quantum channel, that is, a map such that $\Lambda^{-1} \circ \Lambda = \Lambda \circ \Lambda^{-1} = \mathrm{id}$.\footnote{We note that in many cases it suffices to consider only left or right inverses, but we assume two-sided invertibility for simplicity.} Note that such an inverse is not guaranteed to exist for a general channel, and even when it does, it will not form a valid quantum channel unless $\Lambda$ is a unitary map. However, many important cases of quantum dynamics are indeed invertible, allowing us to study their inverses in the formalism of our work. \paragraph{Non-Markovianity.} One setting in which channel inverses play a role is the study of non-Markovianity. Among the different ways to define Markovian evolution, a common way is to say that a time-dependent evolution governed by the channel $\Lambda_{t,0}$ is Markovian if it behaves as a physical map over any time interval $[t,t+\delta t]$. Mathematically, any $\Lambda_{t,0}$ satisfying this condition is said to be CP-divisible~\cite{rivas_2010,chruscinski_2014,rivas_2014}, which can be formalised by the statement that for all times $t$ and $s \leq t$ we can write $$\Lambda_{t,0} = \Xi_{t,s} \circ \Lambda_{s,0}$$ where the propagator $\Xi_{t,s}$ is a CPTP map. For more general channels, the decomposition $\Lambda_{t,0} = \Xi_{t,s} \circ \Lambda_{s,0}$ results in some $\Xi_{t,s}$ that is non-CPTP, indicating that Markovian dynamics break down after some time point $s$. Observe that, provided $\Lambda_{t,0}$ is invertible for all $t$, we can take $\Xi_{t,s} = \Lambda_{t,0} \circ \Lambda^{-1}_{s,0}$. 
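To make this concrete, consider a toy (hypothetical) qubit dephasing evolution whose coherences decay to a factor $c(s)$ and then partially revive to $c(t) > c(s)$ at a later time. The propagator $\Xi_{t,s} = \Lambda_{t,0} \circ \Lambda^{-1}_{s,0}$ then amplifies coherences by $c(t)/c(s) > 1$ and cannot be completely positive, which shows up as a negative eigenvalue of its Choi matrix. A sketch assuming \texttt{numpy}:

```python
import numpy as np

def choi_of_schur(S):
    # Choi matrix of the Schur-multiplier map X -> X * S (elementwise):
    # J = sum_ij S_ij |i><j| (x) |i><j|
    d = S.shape[0]
    J = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            J[i * d + i, j * d + j] = S[i, j]
    return J

# hypothetical recoherence: c(s) = 0.5, c(t) = 0.8 with t > s
c_s, c_t = 0.5, 0.8
S_xi = np.array([[1.0,        c_t / c_s],
                 [c_t / c_s,  1.0]])   # propagator multiplies coherences by 1.6
J_xi = choi_of_schur(S_xi)

print(np.linalg.eigvalsh(J_xi).min())  # -0.6 < 0: Xi_{t,s} is not CP
```

The negative eigenvalue is precisely the kind of non-physicality that the diamond norm of $\Xi_{t,s}$ quantifies.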
Therefore, \br{the non-physicality of $\Lambda_{t,0} \circ \Lambda^{-1}_{s,0}$ serves as an indicator of non-Markovianity, and --- since this map is trace preserving for any trace-preserving $\Lambda$ --- the diamond norm $\norm{\Lambda_{t,0} \circ \Lambda^{-1}_{s,0}}{\dia}$ can be used as a quantitative measure of non-Markovianity over the time-interval $[s,t]$}. This is similar to the original approach of Ref.~\cite{rivas_2010} where a quantifier based on the trace norm of the Choi operator was employed --- the advantage of our definition is the ability to interpret this quantity operationally. Specifically, we observe that quantum mechanics is ultimately a Markovian theory: if we had knowledge of all relevant objects, then all quantum dynamics could be described by Markovian unitary dynamics. That is, any information from the past that is relevant to the future must pass through the present, and hence the optimal prediction of future observational statistics ultimately depends only on the present state of reality. Non-Markovianity is an artefact of not tracking all relevant information in the present. In our context, this arises as our mathematical characterisation of the candidate channel, $\Lambda_{s,0}$, does not track the state of the environment. The operational relevance of $\norm{\Lambda_{t,0} \circ \Lambda^{-1}_{s,0}}{\dia}$ then becomes more evident. Notably, in Sec.~\ref{sec:simulation} we presented a systematic means of simulating any unphysical map $\Xi_{t,s}$ by introducing an ancillary system $X$. Here, we may think of this as building a Markovian model for $\Xi_{t,s}$ by introducing $X = \sum_i \mu_i \rho_i$ as an ``artificial environment''. The feeding in of different states $\rho_i$ depending on $X$ then represents a means by which non-Markovian behaviour on the system is realised.
While this construction does not immediately look physical (as it allows affine mixtures of quantum states), it can be simulated by a classical computer with sufficient resource overhead. The resource cost of doing so --- $\norm{\Xi_{t,s}}{\dia}$ --- thus represents a bound on the information processing capabilities of the environment that enable said non-Markovian behaviour to emerge. There are multiple approaches for extending this to a time-independent measure of non-Markovianity of $\Lambda$. One could, for example, take the supremum of the measure $\norm{\Lambda_{t,0} \circ \Lambda^{-1}_{s,0}}{\dia}$ over all $t$ and $s$. This would then characterise how much extra information processing we need beyond tracking the state of the system at time $s$ to simulate dynamics over the time-interval $[s,t]$. We may also follow an approach based on Ref.~\cite{rivas_2010} and define $\displaystyle \mathcal{I}_{\,\dia} (\Lambda) \coloneqq \int_{0}^\infty g_{\,\dia,t}(\Lambda) \,\mathrm{d}t$, where $g_{\,\dia,t}$ can be understood as the right-hand derivative of the diamond norm of the dynamics at time $t$: \begin{equation}\begin{aligned} g_{\,\dia,t}(\Lambda) \coloneqq \lim_{\varepsilon \to 0^+} \frac{\norm{\Lambda_{t+\varepsilon,0} \circ \Lambda_{t,0}^{-1}}{\dia}-\norm{\Lambda_{t,0} \circ \Lambda_{t,0}^{-1}}{\dia}}{\varepsilon} = \lim_{\varepsilon \to 0^+} \frac{\norm{\Lambda_{t+\varepsilon,0} \circ \Lambda_{t,0}^{-1}}{\dia}-1}{\varepsilon}. \end{aligned}\end{equation} $\mathcal{I}_{\,\dia} (\Lambda)$ therefore represents the total amount of non-Markovianity in this evolution. A suitable normalisation of this quantity can allow for the comparison of the strength of non-Markovianity in different settings~\cite{rivas_2010,rivas_2014}. We leave a careful consideration of these possibilities to future work. \paragraph{Error mitigation.} Another application for the study of channel inverses is error mitigation.
This setting considers the scenario where one is tasked with computing expectation values of the type $\Tr[ \mathcal{U}(\rho) A ]$ for an input state $\rho$, ideal gate $\mathcal{U}$, and observable $A$, while operations are followed by a noise channel $\Theta$. A leading approach to this problem, called probabilistic error cancellation~\cite{temme_2017,endo_2018}, is to counteract the noise with the inverse map $\Theta^{-1}$, so that $\Tr[\mathcal{U}(\rho) A]=\Tr[ \Theta \circ \Theta^{-1}\circ\mathcal{U}(\rho) A ]$. By decomposing $\Theta^{-1}$ into a quasiprobability distribution over a convex subset of channels $\P=\{\Lambda_i\}$ such that $\Lambda_i\circ\mathcal{U}$ would be implementable on a (fictitious) noiseless device, standard quasiprobability sampling arguments allow us to construct an unbiased estimator for $\Tr[\mathcal{U}(\rho) A]$ using only operations implementable on a noisy device. The optimal overhead cost of such a procedure scales as $\gamma_\P(\Theta)^2$, where~\cite{temme_2017,takagi_2020-2} \begin{equation}\begin{aligned} \gamma_\P(\Theta) &= \min \left\{\left. \sum_i |\lambda_i| \bar \Theta^{-1} = \sum_i \lambda_i \Lambda_i,\; \Lambda_i \in \P \right\}\\ &= \min \left\{\left. \lambda_+ + \lambda_- \bar \Theta^{-1} = \lambda_+ \Lambda_+ - \lambda_- \Lambda_-,\; \Lambda_\pm \in \P \right\}. \label{eq:sampling cost} \end{aligned}\end{equation} The specific choice of $\P$ can be made depending not only on the physical setting under consideration, but also on one's precise motivations. On the one hand, a set with a finite number of operations (e.g., Clifford gates) turns Eq.~\eqref{eq:sampling cost} into a linear program~\cite{temme_2017,endo_2018}, making the overhead cost easily computable while sacrificing the expressibility of devices.
On the other hand, choosing a larger set with an infinite number of implementable operations takes into account a larger expressibility~\cite{takagi_2020-2}, but makes the computation of Eq.~\eqref{eq:sampling cost} hard in general. Here, to accommodate computability and expressibility at the same time, we take another approach considered in Ref.~\cite{jiang_2020,Xiong2020sampling}: we choose $\P$ to be \emph{all} physical quantum channels. We notice that the norm $\norm{\cdot}{\bnorm}$ provides the cost of error mitigation in this setting as $\gamma_{{\mathrm{CPTP}}} (\Theta) = \norm{\Theta^{-1}}{\bnorm} = \norm{\Theta^{-1}}{\dia}$, which can be efficiently computed by semidefinite programming. Although this choice of $\P$ might seem too permissive, the lower bound obtained through this approach can actually match known achievability results (upper bounds)~\cite{jiang_2020}, showing new optimality results and even improving on the specialised characterisation of Ref.~\cite{takagi_2020-2} in some cases. Of note is the fact that, since any inverse map $\Theta^{-1}$ of a quantum channel $\Theta$ is trace preserving, our Thm.~\ref{thm:rob_diamond} shows a new application of the diamond norm in bounding the cost of error mitigation: it always holds that $\gamma_\P(\Theta) \geq \norm{\Theta^{-1}}{\dia}$, regardless of the choice of $\P$. In some cases --- such as when experiencing the leakage or loss of some qubits during computation --- the noisy evolution can actually correspond to a map which is not trace preserving. Although many previous approaches did not take this into consideration, our methods explicitly extend to such maps, allowing one to understand the simulation of non-trace-preserving linear maps through Thm.~\ref{thm:cost}. 
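As a concrete illustration, consider probabilistic error cancellation for qubit dephasing noise $\Delta_p(X) = (1-p) X + p Z X Z$. The sketch below (assuming \texttt{numpy}; the coefficients follow from writing $\Delta_p^{-1}(X) = \frac{1+q}{2} X + \frac{1-q}{2} Z X Z$ with $q = (1-2p)^{-1}$, an elementary identity that the code verifies directly) shows the quasiprobability decomposition and its sampling cost:

```python
import numpy as np

# qubit dephasing Delta_p and its inverse as a quasiprobability
# combination of the identity and Z-conjugation
p = 0.1
Z = np.diag([1.0, -1.0])
q = 1.0 / (1.0 - 2.0 * p)
q_id, q_Z = (1 + q) / 2, (1 - q) / 2   # q_Z < 0: not a convex mixture

def dephase(X):
    return (1 - p) * X + p * Z @ X @ Z

def inverse(X):
    return q_id * X + q_Z * Z @ X @ Z

# sanity check: inverse(dephase(X)) == X for a random Hermitian X
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
X = A + A.T
print(np.allclose(inverse(dephase(X)), X))   # True

# sampling cost gamma = |q_id| + |q_Z| = 1/(1-2p)
print(abs(q_id) + abs(q_Z))                  # 1.25 for p = 0.1
```

The cost $|q_{\mathrm{id}}| + |q_Z| = (1-2p)^{-1}$ matches the diamond norm of the dephasing inverse computed in the examples below, consistent with $\gamma_{{\mathrm{CPTP}}}(\Theta) = \norm{\Theta^{-1}}{\dia}$.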
Related settings which our methods can characterise include the so-called linear quantum error correction~\cite{shabani_2009}, which aims to correct errors of systems undergoing general, non-CPTP dynamics $\Theta$, as well as error mitigation for non-Markovian noise~\cite{Hakoshima2021nonMarkovian}, where the mitigation cost can be related to a measure of non-Markovianity. In such cases, our approach can thus help understand the implementation of not only the inverse maps, but also the dynamics themselves. \subsubsection{Computing the measures} To showcase the application of our methods and evaluate the measures for some representative examples, we will consider the inverse maps of several fundamental types of noisy quantum evolutions: depolarising, amplitude damping, dephasing, and qubit leakage channels. The expressions for the first two appeared in Ref.~\cite{jiang_2020}, which we rederive using the methods and results of this work. We also find for the first three that the optimal decomposition into $\Lambda_\pm$ for the norm $\|\Theta^{-1}\|_{\bnorm}$ (Eq.~\eqref{eq:CPTNI norm def}) can be taken as convex mixtures of unitaries and state preparations. Thus, $\|\Theta^{-1}\|_{\bnorm}$ also serves as the optimal cost $\gamma_\P(\Theta^{-1})$ with a smaller set $\P$ as considered in Ref.~\cite{takagi_2020-2}, indicating that the capability to implement all CPTNI maps does not provide any advantage over that of implementing only unitaries and state preparations. Note that the inverses of trace-preserving maps are trace preserving, and so in such cases the equality $\norm{\Phi}{\bnorm} = \norm{\Phi}{\dia} = 2R(\Phi) + 1$ holds by Thm.~\ref{thm:rob_diamond}, which means that it will suffice to evaluate any one of the measures.
\paragraph{Depolarising noise.} The depolarising channel, given by $\mathcal{D}_p(X) \coloneqq (1-p) X + p \Tr X \frac{\mathbbm{1}}{d_A}$ for some noise parameter $p\in[0,1)$, has the inverse $\mathcal{D}^{-1}_p(X) = \frac{1}{1-p} X - \frac{p}{1-p} \Tr X \frac{\mathbbm{1}}{d_A}$. This gives \begin{equation}\begin{aligned} J_{\mathcal{D}^{-1}_p} = \frac{1}{1-p} \proj{\Omega} - \frac{p}{(1-p)d_A} \mathbbm{1}_{A \otimes A}. \end{aligned}\end{equation} Importantly, one can notice that $\Tr_B {J_{\mathcal{D}^{-1}_p}}_+$ and $\Tr_B {J_{\mathcal{D}^{-1}_p}}_-$ are proportional to identity. As first noticed in~\cite{nechita_2018,michel_2018}, this means that the lower bound $\frac{1}{d_A} \norm{J_{\mathcal{D}^{-1}_p}}{1}$ of Prop.~\ref{prop:bounds_trace_norm} matches the upper bound $\lambda_{\max} \left(\Tr_B \left[ {J_{\mathcal{D}^{-1}_p}}_+ + {J_{\mathcal{D}^{-1}_p}}_- \right]\right)$ of Prop.~\ref{prop:bounds_upper}\footnote{In fact, $\norm{\Phi}{\dia} = \frac{1}{d_A} \norm{J_\Phi}{1}$ if and only if $\Tr_B \left({J_{\Phi}}_+ + {J_{\Phi}}_-\right) \propto \mathbbm{1}$~\cite{nechita_2018,michel_2018}.}. We thus get \begin{equation}\begin{aligned} \norm{\mathcal{D}^{-1}_p}{\bnorm} = \norm{\mathcal{D}^{-1}_p}{\dia} &= \frac{1}{d_A} \norm{J_{\mathcal{D}^{-1}_p}}{1} = \frac{1 + \left(1-2d_A^{-2}\right) p}{1-p}. \end{aligned}\end{equation} \paragraph{Dephasing noise.} The generalised dephasing channel~\cite{devetak_2005-2} is defined by $\Delta_{\textbf{p}}(X) \coloneqq \sum_{i=0}^{d_A-1} p_i Z_i X Z_i^\dagger$, where $\textbf{p} = (p_0, \ldots, p_{d_A-1})$ is a chosen set of noise parameters $p_i \geq 0$, and $Z_i$ refers to the qudit clock operators \begin{equation}\begin{aligned} Z_i = \sum_{j=0}^{d_A-1} \omega^{ij} \proj{j} \end{aligned}\end{equation} in some basis $\{\ket{i}\}$, with $\omega$ being a primitive $d_A$th root of unity. In the case of $d_A = 2$, this recovers the usual qubit dephasing channel $\Delta_{p}(X) = (1-p) X + p Z X Z^\dagger$. 
One can notice that the action of this channel can be represented by $\Delta_{\textbf{p}} (X) = X \odot S$ where $\odot$ denotes the element-wise matrix product (Schur/Hadamard product), and \begin{equation}\begin{aligned} (S)_{jk} = \sum_{i=0}^{d_A-1} p_i \omega^{ij} (\omega^{ik})^* = \sum_{i=0}^{d_A-1} p_i \omega^{i(j-k)} \qquad j,k = 0, \ldots, d_A-1 \end{aligned}\end{equation} in the same basis $\{\ket{i}\}$. Provided that the coefficients of $S$ are non-zero (that is, $\Delta_{\textbf{p}}$ does not act as a completely dephasing channel on any subspace), the map is invertible as $\Delta^{-1}_{\textbf{p}}(X) = X \odot \overline{S}$ with $\overline{S}$ defined by \begin{equation}\begin{aligned} (\overline{S})_{jk} = \frac{1}{(S)_{jk}} \qquad j,k = 0, \ldots, d_A-1. \end{aligned}\end{equation} We will now show that $\norm{\Delta^{-1}_{\textbf{p}}}{\bnorm} = \norm{\Delta^{-1}_{\textbf{p}}}{\dia} = \frac{1}{d_A} \norm{J_{\Delta^{-1}_{\textbf{p}}}}{1} = \frac{1}{d_A}\norm{\overline{S}}{1}$. The equality $\norm{\Delta^{-1}_{\textbf{p}}}{\bnorm} = \norm{\Delta^{-1}_{\textbf{p}}}{\dia}$ is a consequence of Thm.~\ref{thm:rob_diamond}; note here that we do not actually need to impose that $\Delta_{\textbf{p}}$ be trace preserving (i.e., that $\sum_i p_i = 1$), since both $\Delta_{\textbf{p}}$ and $\Delta^{-1}_{\textbf{p}}$ are always proportional to a trace-preserving map by construction. To show the equality $\norm{\Delta^{-1}_{\textbf{p}}}{\bnorm} = \frac{1}{d_A}\norm{\overline{S}}{1}$, consider the decomposition of $\overline{S}$ as $\overline{S} = \overline{S}_+ - \overline{S}_-$. Crucially, since $S$ is a circulant matrix, so is $\overline{S}$, and hence it can be diagonalised by the Fourier transform matrix $(F)_{jk} = \frac{1}{\sqrt{d_A}} \omega^{jk}$~\cite[2.2.P10]{horn_2012}.
Each eigenvector of $\overline{S}$ is therefore of the form \begin{equation}\begin{aligned} \ket{s_m} = \frac{1}{\sqrt{d_A}} \sum_{i=0}^{d_A - 1} \omega^{im} \ket{i}, \label{eq:dephasing eigenstate} \end{aligned}\end{equation} ensuring in particular that all diagonal elements of each density matrix $\proj{s_m}$ are equal. This entails that $\overline{S}_+$ and $\overline{S}_-$ both have constant diagonals. Define now the maps \begin{equation}\begin{aligned} \Lambda_\pm (X) \coloneqq X \odot \overline{S}_\pm. \end{aligned}\end{equation} Since $\overline{S}_\pm \geq 0$, each such map is completely positive~\cite[Thm. 3.7]{paulsen_2002}, and clearly $\Lambda'_\pm \coloneqq \Lambda_\pm d_A / \Tr(\overline{S}_\pm)$ is trace preserving as we have just seen that $(\overline{S}_\pm)_{ii} = (\overline{S}_\pm)_{jj} \; \forall i,j$. Thus we have a decomposition as \begin{equation}\begin{aligned} \Delta^{-1}_{\textbf{p}} = \frac{\Tr \overline{S}_+}{d_A} \Lambda'_+ - \frac{\Tr \overline{S}_-}{d_A} \Lambda'_-, \quad \Lambda'_\pm \in {\mathrm{CPTP}}, \label{eq:dephasing inverse decomposition} \end{aligned}\end{equation} from which we get the bound $\norm{\Delta^{-1}_{\textbf{p}}}{\bnorm} \leq \frac{1}{d_A}\norm{\overline{S}}{1}$. On the other hand, let $\ket{\psi} = \frac{1}{\sqrt{d_A}} \sum_{i=0}^{d_A-1} \ket{i}$ and use Prop.~\ref{prop:bounds_lower} to get \begin{equation}\begin{aligned} \norm{\Delta^{-1}_{\textbf{p}}}{\bnorm} \geq \norm{\Delta^{-1}_{\textbf{p}}(\proj{\psi})}{1} = \frac{1}{d_A}\norm{\overline{S}}{1}. \end{aligned}\end{equation} Finally, the equality $\norm{J_{\Delta^{-1}_{\textbf{p}}}}{1} = \norm{\overline{S}}{1}$ is obtained by noticing that $J_{\Delta^{-1}_{\textbf{p}}} = \sum_{i,j} (\overline{S})_{ij} \ketbra{ii}{jj}$ which has the same eigenvalues as $\overline{S}$. 
The eigenvalues of $\overline{S}$ can be readily obtained due to the fact that it is a circulant matrix~\cite[2.2.P10]{horn_2012}, allowing for a straightforward computation of the trace norm $\norm{\overline{S}}{1}$ and altogether giving \begin{equation}\begin{aligned} \norm{\Delta^{-1}_{\textbf{p}}}{\dia} = \frac{1}{d_A} \sum_{m=0}^{d_A-1} \left| \sum_{j=0}^{d_A-1} \left( \sum_{i=0}^{d_A-1} p_i \omega^{j(i-m)} \right)^{-1} \right|. \end{aligned}\end{equation} For the qubit dephasing channel with $p\in[0,\frac{1}{2})$, we recover \begin{equation}\begin{aligned} \norm{\Delta^{-1}_p}{\dia} = \frac{1}{2} \norm{\begin{pmatrix}1 & \frac{1}{1-2p} \\\frac{1}{1-2p} & 1\end{pmatrix}}{1} = \frac{1}{1-2p}. \end{aligned}\end{equation} Since each eigenvector $\ket{s_m}$ for $\overline{S}$ in \eqref{eq:dephasing eigenstate} corresponds to the application of $Z_m$, $\Lambda'_\pm$ in \eqref{eq:dephasing inverse decomposition} are realised as probabilistic applications of the generalised phase unitaries. \paragraph{Amplitude damping noise.} The qubit amplitude damping channel $\mathcal{A}_\gamma(\cdot) = A_0 \cdot A_0^\dagger + A_1 \cdot A_1^\dagger$ is defined by the Kraus operators $A_0 \coloneqq \proj{0} + \sqrt{1-\gamma} \proj{1}$ and $A_1 \coloneqq \sqrt{\gamma} \ketbra{0}{1}$. Using the fact that \begin{equation}\begin{aligned} \proj{1} &= \frac{1}{1-\gamma} \mathcal{A}_\gamma(\proj{1}) - \frac{\gamma}{1-\gamma} \proj{0}\\ &= \frac{1}{1-\gamma} \mathcal{A}_\gamma(\proj{1}) - \frac{\gamma}{1-\gamma} \mathcal{A}_\gamma(\proj{0}), \end{aligned}\end{equation} we have \begin{equation}\begin{aligned} \mathcal{A}_\gamma^{-1}(\proj{1}) = \frac{1}{1-\gamma} \proj{1} - \frac{\gamma}{1-\gamma} \proj{0}. \end{aligned}\end{equation} Proposition \ref{prop:bounds_lower} thus gives \begin{equation}\begin{aligned} \norm{\mathcal{A}^{-1}_\gamma}{\bnorm} = \norm{\mathcal{A}^{-1}_\gamma}{\dia} &\geq \norm{\mathcal{A}^{-1}_\gamma(\proj{1})}{1}\\ &= \frac{1+\gamma}{1-\gamma}. 
\end{aligned}\end{equation} A matching upper bound can be obtained by explicitly computing $J_{\mathcal{A}_\gamma^{-1}}$ (see e.g.~\cite{temme_2017,takagi_2020-2}) and using the upper bound in Prop.~\ref{prop:bounds_upper}. The above shows a rather general method of obtaining lower bounds for linear maps which are inverses of other linear maps, without having to explicitly compute the full inverse map. Indeed, this can be extended to maps which only approximately invert a given channel --- useful, for instance, when dealing with non-invertible maps, or when aiming to reduce the cost of implementing a given map by only requiring that it approximately mitigates the error. \begin{boxed}{white} \begin{proposition} Let $\Phi \in \H(A,B)$ and $\wt\Phi \in \H(B,A)$ be such that $\snorm{ \wt\Phi \circ \Phi(\rho) - \rho }{1} \leq \varepsilon$ for all $\rho \in \mathbb{D}(A)$. Then \begin{equation}\begin{aligned} \snorm{\wt\Phi}{\dia} &\geq \max \left\{\left. \norm{Z}{1}(1-\varepsilon) \bar \Phi(Z) \in \mathbb{D}(B) \right\} \\ % \snorm{\wt\Phi}{\bnorm} &\geq \max \left\{\left. \Tr Z_- + \Tr Q_+ - \varepsilon (\norm{Z}{1}+\norm{Q}{1}) \bar \Phi(Z), \Phi(Q) \in \mathbb{D}(B) \right\} \\ % R(\wt\Phi) &\geq \max \left\{\left. \Tr Z_- + \Tr Q_+ - \Tr \Phi(Q) - \varepsilon (\norm{Z}{1}+\norm{Q}{1}) \bar \Phi(Z), \Phi(Q) \geq 0, \right.\\ &\hphantom{\geq \max \left\{\left. \Tr Z_- + \Tr Q_+ - \Tr \Phi(Q) - \varepsilon (\norm{Z}{1}+\norm{Q}{1}) \bar \right.} \Tr \Phi(Z + Q) = 1 \big\}\\ % R'(\wt\Phi) &\geq \max \left\{\left. \Tr Z_+ - 1 - \varepsilon \norm{Z}{1} \bar \Phi(Z) \in \mathbb{D}(B) \right\}\\ % R''(\wt\Phi) &\geq \max \left\{\left. \Tr Z_- - \varepsilon \norm{Z}{1} \bar \Phi(Z) \in \mathbb{D}(B) \right\}.\\ \end{aligned}\end{equation} \end{proposition} \end{boxed} \begin{proof} We use Prop.~\ref{prop:bounds_lower} to get that \begin{equation}\begin{aligned} \snorm{\wt\Phi}{\dia} &\geq \max \left\{\left. 
\snorm{\wt\Phi(\sigma)}{1} \bar \sigma \in \mathbb{D}(B) \cap \mathrm{ran}(\Phi) \right\}\\ &= \max \left\{\left. \snorm{\wt\Phi \circ \Phi (Z)}{1} \bar \Phi(Z) \in \mathbb{D}(B) \right\}\\ &\geq \max \left\{\left. \norm{Z}{1} - \snorm{ Z - \wt\Phi \circ \Phi (Z) }{1} \bar \Phi(Z) \in \mathbb{D}(B) \right\}\\ &\geq \max \left\{\left. \norm{Z}{1} (1-\varepsilon) \bar \Phi(Z) \in \mathbb{D}(B) \right\}. \end{aligned}\end{equation} The third line follows by the triangle inequality, and the last line is a consequence of the assumption that $\snorm{ \wt\Phi \circ \Phi(\rho) - \rho }{1} \leq \varepsilon$ for all $\rho \in \mathbb{D}(A)$, since we can write any $Z$ in its Jordan decomposition $Z = \mu_+ \rho_+ - \mu_- \rho_-$ with $\rho_\pm \in \mathbb{D}(A)$ and $\mu_+ + \mu_- = \norm{Z}{1}$ to get $\norm{\wt\Phi \circ \Phi(Z) - Z}{1} \leq \varepsilon (\mu_+ + \mu_-) = \varepsilon \norm{Z}{1}$. The case of the other measures is analogous: using the variational form of the function $\Tr Z_+$ (and similarly $\Tr Z_-$) we can obtain \begin{equation}\begin{aligned} \Tr \wt\Phi(\sigma)_+ &= \max \left\{\left. \< \wt\Phi \circ \Phi (Z), W \> \bar 0 \leq W \leq \mathbbm{1} \right\}\\ &= \max \left\{\left. \< Z , W \> - \< Z - \wt\Phi \circ \Phi(Z), W \> \bar 0 \leq W \leq \mathbbm{1} \right\}\\ &\geq \max \left\{\left. \< Z , W \> - \snorm{Z - \wt\Phi\circ \Phi(Z)}{1} \norm{W}{\infty} \bar 0 \leq W \leq \mathbbm{1} \right\}\\ &\geq \Tr Z_+ - \varepsilon \norm{Z}{1} \end{aligned}\end{equation} where we used H\"older's inequality. Using these bounds in Prop.~\ref{prop:bounds_lower} yields the stated result. \end{proof} \paragraph{Leakage error.} Consider the qubit leakage error $\L_p(\cdot)=L_p\cdot L_p^\dagger$ where $L_p\coloneqq \dm{0}+\sqrt{1-p}\dm{1}$. This represents a situation where the excited state is lost with probability $p$, and this stochastic nature is reflected in the fact that $\L_p$ is not trace preserving. The inverse of the leakage error is given by $\L_p^{-1}(\cdot) = L_p^{-1} \cdot L_p^{-1}$.
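As a quick sanity check that this map indeed inverts the leakage error, one can verify $\L_p^{-1} \circ \L_p = \mathrm{id}$ numerically (a sketch assuming \texttt{numpy}):

```python
import numpy as np

p = 0.3
L  = np.diag([1.0, np.sqrt(1 - p)])   # Kraus operator of the leakage error
Li = np.linalg.inv(L)                 # L_p^{-1} = |0><0| + (1-p)^{-1/2} |1><1|

# random Hermitian input
rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))
X = A + A.T

# the inverse map undoes the leakage error on any input
print(np.allclose(Li @ (L @ X @ L.conj().T) @ Li.conj().T, X))   # True
```
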
Since this is a completely positive map, Eq.~\eqref{eq:equality_CP} gives \begin{equation}\begin{aligned} \norm{\L_p^{-1}}{\bnorm} = \norm{\L_p^{-1}}{\dia} = \norm{\Tr_B J_{\L_p^{-1}}}{\infty} = \frac{1}{1-p}. \end{aligned}\end{equation} Note that the inverse can be realised as \begin{equation}\begin{aligned} \L_p^{-1}=\frac{1}{2}\left(1+\frac{1}{\sqrt{1-p}}\right)\mathrm{id}-\frac{1}{2}\left(\frac{1}{\sqrt{1-p}}-1\right)\mathcal{Z}+\frac{p}{1-p}\Pi_{\dm{1}} \end{aligned}\end{equation} where $\mathcal{Z}(\cdot)\coloneqq Z\cdot Z$ with $Z=\dm{0}-\dm{1}$ being the Pauli $Z$ matrix, and $\Pi_{\dm{1}}(\cdot)\coloneqq \dm{1}\cdot\dm{1}$ being the projection onto the state $\ket{1}$. \section{Discussion} We introduced a comprehensive quantitative approach to the study of non-completely-positive linear maps, focusing in particular on the task of approximating and simulating them with valid quantum channels. To this end, we considered several quantifiers which generalise measures employed in the study of quantum resources --- namely, variants of the robustness and base norm measures. We showed that they satisfy very close relations with the diamond norm, and in particular are exactly equal to it for any trace-preserving linear map. Since such trace-preserving maps are the most commonly encountered examples of dynamics beyond physical quantum channels, this allowed us to establish fruitful interrelations between the quantities, and discover new applications of the fundamentally important quantity that is the diamond norm. We developed in particular two operational connections. Firstly, we introduced a method of simulating general linear maps with quantum channels, shifting the difficulty of realising non-quantum dynamics onto the structurally simpler task of implementing linear combinations of quantum states. We showed that our robustness measure exactly quantifies the cost of realising such schemes in terms of the required state-based resources. 
Secondly, we showed that another variant of the robustness finds use as an exact quantifier of the performance advantage that a general linear map can enable over quantum channels in a class of state discrimination games. We introduced a number of useful bounds and explicitly employed them to demonstrate the computability of the measures for some representative examples. Finally, we showed how our measures can find use in the quantitative characterisation of several practically relevant settings, namely, structural approximations of positive maps, non-Markovianity quantification, and tightly bounding the cost of probabilistic error mitigation. Although we focused on the application of our framework to Hermiticity-preserving maps, we note that more general linear maps can be treated in a similar way. The simplest way to approach this is to decompose any linear map $\Phi$ into its Hermiticity-preserving and skew-Hermiticity-preserving parts, that is, write $\Phi = \Phi_{\rm H} + i \Phi_{\rm SH}$ where the constituent maps are defined through $J_{\Phi_{\rm H}} \coloneqq \frac12 (J_\Phi + J_\Phi^\dagger)$ and $J_{\Phi_{\rm SH}} \coloneqq \frac{1}{2i} (J_\Phi - J_\Phi^\dagger)$. The maps $\Phi_{\rm H}$ and $\Phi_{\rm SH}$ are then explicitly Hermiticity-preserving, and our arguments can be applied to them directly. A similar approach was employed in~\cite{buscemi_2013-2} to decompose the two-point quantum correlator $\mathcal{T} : \mathbb{L}(A) \to \mathbb{L}(A \otimes A)$, defined as the map satisfying $\Tr [ \mathcal{T}(\rho) (A \otimes B) ] = \Tr [ A \rho B ]$ for all $A,B$. Indeed, one can show that the decompositions constructed in~\cite{buscemi_2013-2} are also optimal for the robustness-based quantities. We also note that the diamond norm has been applied as a measure of specific properties of quantum channels, such as their ability to detect coherence~\cite{theurer_2019}. Connections between our methods and such approaches could be fruitful to explore. 
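At the level of Choi matrices, this decomposition is elementary; the following brief sketch (assuming \texttt{numpy}) splits an arbitrary $J_\Phi$ into the two Hermitian parts and confirms the reconstruction $J_\Phi = J_{\Phi_{\rm H}} + i J_{\Phi_{\rm SH}}$:

```python
import numpy as np

# an arbitrary (non-Hermitian) Choi matrix, standing in for a general map
rng = np.random.default_rng(1)
J = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Hermiticity-preserving and skew-Hermiticity-preserving parts
J_H  = (J + J.conj().T) / 2
J_SH = (J - J.conj().T) / (2j)

# both parts are Hermitian, and J = J_H + i J_SH
print(np.allclose(J_H, J_H.conj().T),
      np.allclose(J_SH, J_SH.conj().T),
      np.allclose(J, J_H + 1j * J_SH))   # True True True
```
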
A major outstanding issue is to understand how the framework of this work can be extended to non-linear maps, which could allow for the characterisation and more efficient approximation of important unphysical dynamics such as quantum cloners. This question was already asked in the earliest works concerned with approximating non-CPTP maps with quantum channels~\cite{horodecki_2003-4}, but it still remains a considerable challenge to devise approaches which could apply to general non-linear transformations. \begin{acknowledgments} We acknowledge fruitful discussions with Joonwoo Bae, Francesco Buscemi, Ludovico Lami, Varun Narasimhachar, Jayne Thompson, and Xiao Yuan. This research is supported by the National Research Foundation (NRF), Singapore, under its NRFF Fellow program (Award No. NRF-NRFF2016-02), the National Research Foundation and Agence Nationale de la Recherche joint Project No. NRF2017-NRFANR004 VanQuTe, the Singapore Ministry of Education Tier 1 Grant RG162/19 (S) and grant No. FQXi-RFP-IPW-1903 from the Foundational Questions Institute and Fetzer Franklin Fund (a donor advised fund of Silicon Valley Community Foundation). B.R.\ is supported by the Presidential Postdoctoral Fellowship from Nanyang Technological University, Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore. \end{acknowledgments} \let\L\LL \let\l\ll \bibliographystyle{unsrtnat}
\section{Introduction} \label{sec:intro} Deep learning methods, where a computational model learns an intricate representation of a large-scale dataset, yielded dramatic performance improvements in speech recognition and computer vision~\citep{lecun2015deep}. A common factor behind these improvements is the availability of large-scale training data~\citep{deng2009imagenet, bellemare2013arcade}. For ad hoc ranking in information retrieval, which is a core problem in the field, we did not initially see dramatic improvements in performance from deep learning methods. This led to questions about whether deep learning methods were helping at all~\citep{Yang2019critically}. If large training sets are a prerequisite, one explanation could be that the available training sets were too small. The TREC Deep Learning Track, and associated MS MARCO leaderboards \citep{bajaj2016ms}, have introduced human-labeled training sets that were previously unavailable. The main goal is to study information retrieval in the \emph{large training data} regime, to see which retrieval methods work best. The two tasks, document retrieval and passage retrieval, each have hundreds of thousands of human-labeled training queries. The training labels are sparse, with often only one positive example per query. Unlike the MS MARCO leaderboards, which evaluate using the same kind of sparse labels, the evaluation at TREC uses much more comprehensive relevance labeling. Each year of TREC evaluation uses a new set of test queries, and participants submit before the test labels have even been generated, so the TREC results are the gold standard for avoiding multiple testing and overfitting. The comprehensive relevance labeling also yields a reusable test collection, allowing the dataset to be reused in future studies, although researchers should be careful to avoid overfitting and over-iteration. 
The main goals of the Deep Learning Track in 2020 have been: \begin{enumerate*}[label=\arabic*)] \item To provide large reusable training datasets, with an associated large-scale click dataset, for training deep learning and traditional ranking methods in a large training data regime, \item To construct reusable test collections for evaluating the quality of deep learning and traditional ranking methods, \item To perform a rigorous blind single-shot evaluation, where test labels don't even exist until after all runs are submitted, to compare different ranking methods, and \item To study this in both a traditional TREC setup with end-to-end retrieval and in a re-ranking setup that matches how some models may be deployed in practice. \end{enumerate*} \section{Task description} \label{sec:task} The track has two tasks: Document retrieval and passage retrieval. Participants were allowed to submit up to three runs per task, although this was not strictly enforced. Submissions to both tasks used the same set of $200$ test queries. In the pooling and judging process, NIST chose a subset of the queries for judging, based on budget constraints and with the goal of finding a sufficiently comprehensive set of relevance judgments to make the test collection reusable. This led to a judged test set of $45$ queries for document retrieval and $54$ queries for passage retrieval. The document queries are not a subset of the passage queries. When submitting each run, participants indicated what external data, pretrained models and other resources were used, as well as information on what style of model was used. Below we provide more detailed information about the document retrieval and passage retrieval tasks, as well as the datasets provided as part of these tasks. \subsection{Document retrieval task} The first task focuses on document retrieval, with two subtasks: \begin{enumerate*}[label=(\roman*)] \item Full retrieval and \item top-$100$ reranking. 
\end{enumerate*} In the full retrieval subtask, the runs are expected to rank documents based on their relevance to the query, where documents can be retrieved from the full document collection provided. This subtask models the end-to-end retrieval scenario. In the reranking subtask, participants were provided with an initial ranking of $100$ documents, giving all participants the same starting point. This is a common scenario in many real-world retrieval systems that employ a telescoping architecture \citep{matveeva2006high, wang2011cascade}. The reranking subtask allows participants to focus on learning an effective relevance estimator, without the need for implementing an end-to-end retrieval system. It also makes the reranking runs more comparable, because they all rerank the same set of 100 candidates. The initial top-$100$ rankings were retrieved using Indri \citep{strohman2005indri} on the full corpus with Krovetz stemming and stopword removal. Judgments are on a four-point scale: \begin{etaremune}[start=3] \item \textbf{Perfectly relevant:} Document is dedicated to the query and is worthy of being a top result in a search engine. \item \textbf{Highly relevant:} The content of this document provides substantial information on the query. \item \textbf{Relevant:} Document provides some information relevant to the query, which may be minimal. \item \textbf{Irrelevant:} Document does not provide any useful information about the query. \end{etaremune} For metrics that binarize the judgment scale, we map document judgment levels 3,2,1 to relevant and map document judgment level 0 to irrelevant. \subsection{Passage retrieval task} Similar to the document retrieval task, the passage retrieval task includes \begin{enumerate*}[label=(\roman*)] \item a full retrieval and \item a top-$1000$ reranking subtask. 
\end{enumerate*} In the full retrieval subtask, given a query, the participants were expected to retrieve a ranked list of passages from the full collection based on their estimated likelihood of containing an answer to the question. Participants could submit up to $1000$ passages per query for this end-to-end retrieval task. In the top-$1000$ reranking subtask, $1000$ passages per query were provided to participants, giving all participants the same starting point. The sets of 1000 were generated based on BM25 retrieval with no stemming, applied to the full collection. Participants were expected to rerank the 1000 passages based on their estimated likelihood of containing an answer to the query. In this subtask, we can compare different reranking methods based on the same initial set of $1000$ candidates, with the same rationale as described for the document reranking subtask. Judgments are on a four-point scale: \begin{etaremune}[start=3] \item \textbf{Perfectly relevant:} The passage is dedicated to the query and contains the exact answer. \item \textbf{Highly relevant:} The passage has some answer for the query, but the answer may be a bit unclear, or hidden amongst extraneous information. \item \textbf{Related:} The passage seems related to the query but does not answer it. \item \textbf{Irrelevant:} The passage has nothing to do with the query. \end{etaremune} For metrics that binarize the judgment scale, we map passage judgment levels 3,2 to relevant and map passage judgment levels 1,0 to irrelevant. \section{Datasets} \label{sec:data} Both tasks have large training sets based on human relevance assessments, derived from MS MARCO. These are sparse, with no negative labels and often only one positive label per query, analogous to some real-world training data such as click logs. In the case of passage retrieval, the positive label indicates that the passage contains an answer to a query. 
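The binarization rules for the two tasks can be collected into a small lookup table; the following sketch is ours (names and function signature are illustrative, not part of the track's evaluation tooling):

```python
# Judgment levels that count as "relevant" after binarization:
# documents: 3, 2, 1 -> relevant; passages: 3, 2 -> relevant.
RELEVANT_LEVELS = {
    "document": {3, 2, 1},
    "passage": {3, 2},
}

def binarize(level, task):
    """Map a graded judgment level to a binary relevance label."""
    return 1 if level in RELEVANT_LEVELS[task] else 0

assert binarize(1, "document") == 1  # "Relevant" counts for documents
assert binarize(1, "passage") == 0   # "Related" does not count for passages
```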
In the case of document retrieval, we transferred the passage-level label to the corresponding source document that contained the passage. We do this under the assumption that a document with a relevant passage is a relevant document, although we note that our document snapshot was generated at a different time from the passage dataset, so there can be some mismatch. Despite this, machine learning models trained with these labels seem to benefit from using the labels, when evaluated using NIST's non-sparse, non-transferred labels. This suggests the transferred document labels are meaningful for our TREC task. This year, for the document retrieval task, we also released a large-scale click dataset, the ORCAS data, constructed from the logs of a major search engine~\citep{craswell2020orcas}. The data could be used in a variety of ways, for example as additional training data (almost 50 times larger than the main training set) or as a document field in addition to the title, URL and body text fields available in the original training data. For each task there is a corresponding MS MARCO leaderboard, using the same corpus and sparse training data, but using sparse data for evaluation as well, instead of the NIST test sets. We analyze the agreement between the two types of test in Section~\ref{sec:result}. 
\begin{table}[] \centering \caption{Summary of statistics on TREC 2020 Deep Learning Track datasets.} \begin{tabular}{@{}lrr@{}} \toprule ~ & \multicolumn{1}{c}{Document task} & \multicolumn{1}{c}{Passage task} \\ Data & \multicolumn{1}{c}{Number of records} & \multicolumn{1}{c}{Number of records} \\ \midrule Corpus & $3,213,835$ & $8,841,823$ \\ \addlinespace Train queries & $367,013$ & $502,939$ \\ Train qrels & $384,597$ & $532,761$ \\ \addlinespace Dev queries & $5,193$ & $6,980$ \\ Dev qrels & $5,478$ & $7,437$ \\ \addlinespace 2019 TREC queries & $200 \rightarrow 43$ & $200 \rightarrow 43$ \\ 2019 TREC qrels & $16,258$ & $9,260$ \\ \addlinespace 2020 TREC queries & $200 \rightarrow 45$ & $200 \rightarrow 54$ \\ 2020 TREC qrels & $9,098$ & $11,386$ \\ \bottomrule \end{tabular} \label{tbl:data} \end{table} \begin{table*} \centering \caption{Summary of ORCAS data. Each record in the main file (\texttt{orcas.tsv}) indicates a click between a query (Q) and a URL (U), also listing a query ID (QID) and the corresponding TREC document ID (DID). The run file is the top-100 using Indri query likelihood, for use as negative samples during training.} \begin{tabular}{lrl} \toprule Filename & Number of records & Data in each record \\ \midrule \texttt{orcas.tsv} & 18.8M & \texttt{QID Q DID U}\\ \texttt{orcas-doctrain-qrels.tsv} & 18.8M & \texttt{QID DID} \\ \texttt{orcas-doctrain-queries.tsv} & 10.4M & \texttt{QID Q}\\ \texttt{orcas-doctrain-top100} & 983M & \texttt{QID DID score} \\ \bottomrule \end{tabular} \label{tab:orcas} \end{table*} Table \ref{tbl:data} and Table \ref{tab:orcas} provide descriptive statistics for the dataset derived from MS MARCO and the ORCAS dataset, respectively. More details about the datasets---including directions for download---are available on the TREC 2020 Deep Learning Track website\footnote{\url{https://microsoft.github.io/TREC-2020-Deep-Learning}}. 
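Each ORCAS file is plain tab-separated text; a minimal sketch of reading the main click file follows (the sample record is invented for illustration; field order as in Table~\ref{tab:orcas}):

```python
import csv
import io

# Invented sample line in the orcas.tsv layout: QID <tab> Q <tab> DID <tab> U
sample = "123456\texample query text\tD59221\thttps://example.com/page\n"

def read_orcas(stream):
    """Yield (qid, query, did, url) tuples from an orcas.tsv-style stream."""
    for qid, query, did, url in csv.reader(stream, delimiter="\t"):
        yield qid, query, did, url

records = list(read_orcas(io.StringIO(sample)))
assert records == [("123456", "example query text", "D59221",
                    "https://example.com/page")]
```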
Interested readers are also encouraged to refer to \citep{bajaj2016ms} for details on the original MS MARCO dataset. \section{Results and analysis} \label{sec:result} \paragraph{Submitted runs} The TREC 2020 Deep Learning Track had 25 participating groups, with a total of 123 runs submitted across both tasks. Based on run submission surveys, we manually classify each run into one of three categories: \begin{itemize} \item \textbf{nnlm:} if the run employs large scale pre-trained neural language models, such as BERT \citep{devlin2018bert} or XLNet \citep{yang2019xlnet} \item \textbf{nn:} if the run employs some form of neural network based approach---\emph{e.g.}, Duet \citep{mitra2017learning, mitra2019updated} or using word embeddings \citep{joulin2016bag}---but does not fall into the ``nnlm'' category \item \textbf{trad:} if the run exclusively uses traditional IR methods like BM25 \citep{robertson2009probabilistic} and RM3 \citep{abdul2004umass}. \end{itemize} We placed 70 ($57\%$) runs in the ``nnlm'' category, 13 ($10\%$) in the ``nn'' category, and the remaining 40 ($33\%$) in the ``trad'' category. In 2019, 33 ($44\%$) runs were in the ``nnlm'' category, 20 ($27\%$) in the ``nn'' category, and the remaining 22 ($29\%$) in the ``trad'' category. While there was a significant increase in the total number of runs submitted compared to last year, we observed a significant reduction in the fraction of runs in the ``nn'' category. We further categorize runs based on subtask: \begin{itemize} \item \textbf{rerank:} if the run reranks the provided top-$k$ candidates, or \item \textbf{fullrank:} if the run employs its own phase 1 retrieval system. \end{itemize} We find that only 37 ($30\%$) submissions fall under the ``rerank'' category---while the remaining 86 ($70\%$) are ``fullrank''. Table~\ref{tbl:runs-by-type} breaks down the submissions by category and task. 
\paragraph{Overall results} Our main metric in both tasks is Normalized Discounted Cumulative Gain (NDCG)---specifically, NDCG@10, since it makes use of our 4-level judgments and focuses on the first results that users will see. To get a picture of the ranking quality outside the top-10 we also report Average Precision (AP), although this binarizes the judgments. For comparison to the MS MARCO leaderboard, which often only has one relevant judgment per query, we report the Reciprocal Rank (RR) of the first relevant document using the NIST judgments, and also using the sparse leaderboard judgments. Some of our evaluation is concerned with the quality of the top-$k$ results, where $k=100$ for the document task and $k=1000$ for the passage task. We want to consider the quality of the top-$k$ set without considering how they are ranked, so we can see whether improving the set-based quality is correlated with an improvement in NDCG@10. Although we could use Recall@$k$ as a metric here, it binarizes the judgments, so we instead use Normalized Cumulative Gain (NCG@$k$)~\citep{rosset2018optimizing}. NCG is not supported in trec\_eval; Recall@$k$ and NDCG@$k$ are correlated alternatives that trec\_eval does support. The overall results are presented in Table~\ref{tbl:results-docs} for document retrieval and Table~\ref{tbl:results-passages} for passage retrieval. These tables include multiple metrics and run categories, which we now use in our analysis. \paragraph{Neural \emph{vs. } traditional methods.} The first question we investigated as part of the track is which ranking methods work best in the large-data regime. We summarize NDCG@10 results by run type in Figure~\ref{fig:model-stem-by-model-type}. For document retrieval runs (Figure~\ref{fig:model-task-docs-stem-by-model-type}) the best ``trad'' run is outperformed by ``nn'' and ``nnlm'' runs by several percentage points, with ``nnlm'' also having an advantage over ``nn''. We saw a similar pattern in our 2019 results. 
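For reference, the two metrics can be sketched in a few lines; we assume linear gains (gain equals the judgment level) and the standard $\log_2$ rank discount, which may differ from trec\_eval's exact conventions:

```python
import math

def dcg(gains, k):
    """Discounted cumulative gain over the top-k of a ranked gain list."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(ranked, pool, k=10):
    """NDCG@k: DCG of the ranking divided by the ideal DCG over the pool."""
    ideal = dcg(sorted(pool, reverse=True), k)
    return dcg(ranked, k) / ideal if ideal > 0 else 0.0

def ncg_at_k(ranked, pool, k):
    """NCG@k: cumulative gain with no rank discount, normalized by ideal."""
    ideal = sum(sorted(pool, reverse=True)[:k])
    return sum(ranked[:k]) / ideal if ideal > 0 else 0.0

# Toy query: system returns judgment levels [3, 0, 2]; pool is [3, 2, 0, 0].
assert abs(ndcg_at_k([3, 0, 2], [3, 2, 0, 0], k=10) - 0.9386) < 1e-3
assert ncg_at_k([3, 0, 2], [3, 2, 0, 0], k=3) == 1.0
```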
This year we encouraged submission of a variety of ``trad'' runs from different participating groups, to give ``trad'' more chances to outperform other run types. The best performing run of each category is indicated, with the best ``nnlm'' and ``nn'' models outperforming the best ``trad'' model by $23\%$ and $11\%$ respectively. For passage retrieval runs (Figure~\ref{fig:model-task-passages-stem-by-model-type}) the gap between the best ``nnlm'' and ``nn'' runs and the best ``trad'' run is larger, at $42\%$ and $17\%$ respectively. One explanation for this could be that vocabulary mismatch between queries and relevant results is greater in short text, so neural methods that can overcome such mismatch have a relatively greater advantage in passage retrieval. Another explanation could be that there is already a public leaderboard, albeit without test labels from NIST, for the passage task. (We did not launch the document ranking leaderboard until after our 2020 TREC submission deadline.) In passage ranking, some TREC participants may have submitted neural models multiple times to the public leaderboard, and so are relatively more experienced working with the passage dataset than the document dataset. In query-level win-loss analysis for the document retrieval task (Figure~\ref{fig:model-task-docs-bar-per-query}) the best ``nnlm'' model outperforms the best ``trad'' run on 38 out of the 45 test queries (\emph{i.e.}, $84\%$). Passage retrieval shows a similar pattern in Figure~\ref{fig:model-task-passages-bar-per-query}. Similar to last year's data, neither task has a large class of queries where the ``nnlm'' model performs worse. \begin{table} \fontsize{8}{9}\selectfont \centering \caption{Document retrieval runs. RR (MS) is based on MS MARCO labels. All other metrics are based on NIST labels. 
Rows are sorted by NDCG@10.} \begin{tabular}{llllrrrrr} \toprule run & group & subtask & neural & RR (MS) & RR & NDCG@10 & NCG@100 & AP \\ \midrule d\_d2q\_duo & h2oloo & fullrank & nnlm & 0.4451 & 0.9476 & 0.6934 & 0.7718 & 0.5422 \\ d\_d2q\_rm3\_duo & h2oloo & fullrank & nnlm & 0.4541 & 0.9476 & 0.6900 & 0.7769 & 0.5427 \\ d\_rm3\_duo & h2oloo & fullrank & nnlm & 0.4547 & 0.9476 & 0.6794 & 0.7498 & 0.5270 \\ ICIP\_run1 & ICIP & rerank & nnlm & 0.3898 & 0.9630 & 0.6623 & 0.6283 & 0.4333 \\ ICIP\_run3 & ICIP & rerank & nnlm & 0.4479 & 0.9667 & 0.6528 & 0.6283 & 0.4360 \\ fr\_doc\_roberta & BITEM & fullrank & nnlm & 0.3943 & 0.9365 & 0.6404 & 0.6806 & 0.4423 \\ ICIP\_run2 & ICIP & rerank & nnlm & 0.4081 & 0.9407 & 0.6322 & 0.6283 & 0.4206 \\ roberta-large & BITEM & rerank & nnlm & 0.3782 & 0.9185 & 0.6295 & 0.6283 & 0.4199 \\ bcai\_bertb\_docv & bcai & fullrank & nnlm & 0.4102 & 0.9259 & 0.6278 & 0.6604 & 0.4308 \\ ndrm3-orc-full & MSAI & fullrank & nn & 0.4369 & 0.9444 & 0.6249 & 0.6764 & 0.4280 \\ ndrm3-orc-re & MSAI & rerank & nn & 0.4451 & 0.9241 & 0.6217 & 0.6283 & 0.4194 \\ ndrm3-full & MSAI & fullrank & nn & 0.4213 & 0.9333 & 0.6162 & 0.6626 & 0.4069 \\ ndrm3-re & MSAI & rerank & nn & 0.4258 & 0.9333 & 0.6162 & 0.6283 & 0.4122 \\ ndrm1-re & MSAI & rerank & nn & 0.4427 & 0.9333 & 0.6161 & 0.6283 & 0.4150 \\ mpii\_run2 & mpii & rerank & nnlm & 0.3228 & 0.8833 & 0.6135 & 0.6283 & 0.4205 \\ bigIR-DTH-T5-R & QU & rerank & nnlm & 0.3235 & 0.9119 & 0.6031 & 0.6283 & 0.3936 \\ mpii\_run1 & mpii & rerank & nnlm & 0.3503 & 0.9000 & 0.6017 & 0.6283 & 0.4030 \\ ndrm1-full & MSAI & fullrank & nn & 0.4350 & 0.9333 & 0.5991 & 0.6280 & 0.3858 \\ uob\_runid3 & UoB & rerank & nnlm & 0.3294 & 0.9259 & 0.5949 & 0.6283 & 0.3948 \\ bigIR-DTH-T5-F & QU & fullrank & nnlm & 0.3184 & 0.8916 & 0.5907 & 0.6669 & 0.4259 \\ d\_d2q\_bm25 & anserini & fullrank & nnlm & 0.3338 & 0.9369 & 0.5885 & 0.6752 & 0.4230 \\ TUW-TKL-2k & TU\_Vienna & rerank & nn & 0.3683 & 0.9296 & 0.5852 & 0.6283 & 
0.3810 \\ bigIR-DH-T5-R & QU & rerank & nnlm & 0.2877 & 0.8889 & 0.5846 & 0.6283 & 0.3842 \\ uob\_runid2 & UoB & rerank & nnlm & 0.3534 & 0.9100 & 0.5830 & 0.6283 & 0.3976 \\ uogTrQCBMP & UoGTr & fullrank & nnlm & 0.3521 & 0.8722 & 0.5791 & 0.6034 & 0.3752 \\ uob\_runid1 & UoB & rerank & nnlm & 0.3124 & 0.8852 & 0.5781 & 0.6283 & 0.3786 \\ TUW-TKL-4k & TU\_Vienna & rerank & nn & 0.4097 & 0.9185 & 0.5749 & 0.6283 & 0.3749 \\ bigIR-DH-T5-F & QU & fullrank & nnlm & 0.2704 & 0.8902 & 0.5734 & 0.6669 & 0.4177 \\ bl\_bcai\_multfld & bl\_bcai & fullrank & trad & 0.2622 & 0.9195 & 0.5629 & 0.6299 & 0.3829 \\ indri-sdmf & RMIT & fullrank & trad & 0.3431 & 0.8796 & 0.5597 & 0.6908 & 0.3974 \\ bcai\_classic & bcai & fullrank & trad & 0.3082 & 0.8648 & 0.5557 & 0.6420 & 0.3906 \\ longformer\_1 & USI & rerank & nnlm & 0.3614 & 0.8889 & 0.5520 & 0.6283 & 0.3503 \\ uogTr31oR & UoGTr & fullrank & nnlm & 0.3257 & 0.8926 & 0.5476 & 0.5496 & 0.3468 \\ rterrier-expC2 & bl\_rmit & fullrank & trad & 0.3122 & 0.8259 & 0.5475 & 0.6442 & 0.3805 \\ bigIR-DT-T5-R & QU & rerank & nnlm & 0.2293 & 0.9407 & 0.5455 & 0.6283 & 0.3373 \\ uogTrT20 & UoGTr & fullrank & nnlm & 0.3787 & 0.8711 & 0.5453 & 0.5354 & 0.3692 \\ RMIT\_DFRee & RMIT & fullrank & trad & 0.2984 & 0.8756 & 0.5431 & 0.6979 & 0.4087 \\ rmit\_indri-fdm & bl\_rmit & fullrank & trad & 0.2779 & 0.8481 & 0.5416 & 0.6812 & 0.3859 \\ d\_d2q\_bm25rm3 & anserini & fullrank & nnlm & 0.2314 & 0.8147 & 0.5407 & 0.6831 & 0.4228 \\ rindri-bm25 & bl\_rmit & fullrank & trad & 0.3302 & 0.8572 & 0.5394 & 0.6503 & 0.3773 \\ bigIR-DT-T5-F & QU & fullrank & nnlm & 0.2349 & 0.9060 & 0.5390 & 0.6669 & 0.3619 \\ bl\_bcai\_model1 & bl\_bcai & fullrank & trad & 0.2901 & 0.8358 & 0.5378 & 0.6390 & 0.3774 \\ bl\_bcai\_prox & bl\_bcai & fullrank & trad & 0.2763 & 0.8164 & 0.5364 & 0.6405 & 0.3766 \\ terrier-jskls & bl\_rmit & fullrank & trad & 0.3190 & 0.8204 & 0.5342 & 0.6761 & 0.4008 \\ rmit\_indri-sdm & bl\_rmit & fullrank & trad & 0.2702 & 0.8470 & 0.5328 
& 0.6733 & 0.3780 \\ rterrier-tfidf & bl\_rmit & fullrank & trad & 0.2869 & 0.8241 & 0.5317 & 0.6410 & 0.3734 \\ BIT-run2 & BIT.UA & fullrank & nn & 0.2687 & 0.8611 & 0.5283 & 0.6061 & 0.3466 \\ RMIT\_DPH & RMIT & fullrank & trad & 0.3117 & 0.8278 & 0.5280 & 0.6531 & 0.3879 \\ d\_bm25 & anserini & fullrank & trad & 0.2814 & 0.8521 & 0.5271 & 0.6453 & 0.3791 \\ d\_bm25rm3 & anserini & fullrank & trad & 0.2645 & 0.8541 & 0.5248 & 0.6632 & 0.4006 \\ BIT-run1 & BIT.UA & fullrank & nn & 0.3045 & 0.8389 & 0.5239 & 0.6061 & 0.3466 \\ rterrier-dph & bl\_rmit & fullrank & trad & 0.3033 & 0.8267 & 0.5226 & 0.6634 & 0.3884 \\ rterrier-tfidf2 & bl\_rmit & fullrank & trad & 0.3010 & 0.8407 & 0.5219 & 0.6287 & 0.3607 \\ uogTrBaseQL17o & bl\_uogTr & fullrank & trad & 0.4233 & 0.8276 & 0.5203 & 0.6028 & 0.3529 \\ uogTrBaseL17o & bl\_uogTr & fullrank & trad & 0.3870 & 0.7980 & 0.5120 & 0.5501 & 0.3248 \\ rterrier-dph\_sd & bl\_rmit & fullrank & trad & 0.3243 & 0.8296 & 0.5110 & 0.6650 & 0.3784 \\ BIT-run3 & BIT.UA & fullrank & nn & 0.2696 & 0.8296 & 0.5063 & 0.6072 & 0.3267 \\ uogTrBaseDPHQ & bl\_uogTr & fullrank & trad & 0.3459 & 0.8052 & 0.5052 & 0.6041 & 0.3461 \\ uogTrBaseQL16 & bl\_uogTr & fullrank & trad & 0.3321 & 0.7930 & 0.4998 & 0.6030 & 0.3436 \\ uogTrBaseL16 & bl\_uogTr & fullrank & trad & 0.3062 & 0.8219 & 0.4964 & 0.5495 & 0.3248 \\ uogTrBaseDPH & bl\_uogTr & fullrank & trad & 0.3179 & 0.8415 & 0.4871 & 0.5490 & 0.3070 \\ nlm-bm25-prf-2 & NLM & fullrank & trad & 0.2732 & 0.8099 & 0.4705 & 0.5218 & 0.2912 \\ nlm-bm25-prf-1 & NLM & fullrank & trad & 0.2390 & 0.8086 & 0.4675 & 0.4958 & 0.2720 \\ mpii\_run3 & mpii & rerank & nnlm & 0.1499 & 0.6388 & 0.3286 & 0.6283 & 0.2587 \\ \bottomrule \end{tabular} \label{tbl:results-docs} \end{table} \begin{table} \small \centering \caption{Passage retrieval runs. RR (MS) is based on MS MARCO labels. 
All other metrics are based on NIST labels.} \begin{tabular}{llllrrrrr} \toprule run & group & subtask & neural & RR (MS) & RR & NDCG@10 & NCG@1000 & AP \\ \midrule pash\_r3 & PASH & rerank & nnlm & 0.3678 & 0.9147 & 0.8031 & 0.7056 & 0.5445 \\ pash\_r2 & PASH & rerank & nnlm & 0.3677 & 0.9023 & 0.8011 & 0.7056 & 0.5420 \\ pash\_f3 & PASH & fullrank & nnlm & 0.3506 & 0.8885 & 0.8005 & 0.7255 & 0.5504 \\ pash\_f1 & PASH & fullrank & nnlm & 0.3598 & 0.8699 & 0.7956 & 0.7209 & 0.5455 \\ pash\_f2 & PASH & fullrank & nnlm & 0.3603 & 0.8931 & 0.7941 & 0.7132 & 0.5389 \\ p\_d2q\_bm25\_duo & h2oloo & fullrank & nnlm & 0.3838 & 0.8798 & 0.7837 & 0.8035 & 0.5609 \\ p\_d2q\_rm3\_duo & h2oloo & fullrank & nnlm & 0.3795 & 0.8798 & 0.7821 & 0.8446 & 0.5643 \\ p\_bm25rm3\_duo & h2oloo & fullrank & nnlm & 0.3814 & 0.8759 & 0.7583 & 0.7939 & 0.5355 \\ CoRT-electra & HSRM-LAVIS & fullrank & nnlm & 0.4039 & 0.8703 & 0.7566 & 0.8072 & 0.5399 \\ RMIT-Bart & RMIT & fullrank & nnlm & 0.3990 & 0.8447 & 0.7536 & 0.7682 & 0.5121 \\ pash\_r1 & PASH & rerank & nnlm & 0.3622 & 0.8675 & 0.7463 & 0.7056 & 0.4969 \\ NLE\_pr3 & NLE & fullrank & nnlm & 0.3691 & 0.8440 & 0.7458 & 0.8211 & 0.5245 \\ pinganNLP2 & pinganNLP & rerank & nnlm & 0.3579 & 0.8602 & 0.7368 & 0.7056 & 0.4881 \\ pinganNLP3 & pinganNLP & rerank & nnlm & 0.3653 & 0.8586 & 0.7352 & 0.7056 & 0.4918 \\ pinganNLP1 & pinganNLP & rerank & nnlm & 0.3553 & 0.8593 & 0.7343 & 0.7056 & 0.4896 \\ NLE\_pr2 & NLE & fullrank & nnlm & 0.3658 & 0.8454 & 0.7341 & 0.6938 & 0.5117 \\ NLE\_pr1 & NLE & fullrank & nnlm & 0.3634 & 0.8551 & 0.7325 & 0.6938 & 0.5050 \\ 1 & nvidia\_ai\_apps & rerank & nnlm & 0.3709 & 0.8691 & 0.7271 & 0.7056 & 0.4899 \\ bigIR-BERT-R & QU & rerank & nnlm & 0.4040 & 0.8562 & 0.7201 & 0.7056 & 0.4845 \\ fr\_pass\_roberta & BITEM & fullrank & nnlm & 0.3580 & 0.8769 & 0.7192 & 0.7982 & 0.4990 \\ bigIR-DCT-T5-F & QU & fullrank & nnlm & 0.3540 & 0.8638 & 0.7173 & 0.8093 & 0.5004 \\ rr-pass-roberta & BITEM & rerank & nnlm & 0.3701 
& 0.8635 & 0.7169 & 0.7056 & 0.4823 \\ bcai\_bertl\_pass & bcai & fullrank & nnlm & 0.3715 & 0.8453 & 0.7151 & 0.7990 & 0.4641 \\ bigIR-T5-R & QU & rerank & nnlm & 0.3574 & 0.8668 & 0.7138 & 0.7056 & 0.4784 \\ 2 & nvidia\_ai\_apps & fullrank & nnlm & 0.3560 & 0.8507 & 0.7113 & 0.7447 & 0.4866 \\ bigIR-T5-BERT-F & QU & fullrank & nnlm & 0.3916 & 0.8478 & 0.7073 & 0.8393 & 0.5101 \\ bigIR-T5xp-T5-F & QU & fullrank & nnlm & 0.3420 & 0.8579 & 0.7034 & 0.8393 & 0.5001 \\ nlm-ens-bst-2 & NLM & fullrank & nnlm & 0.3542 & 0.8203 & 0.6934 & 0.7190 & 0.4598 \\ nlm-ens-bst-3 & NLM & fullrank & nnlm & 0.3195 & 0.8491 & 0.6803 & 0.7594 & 0.4526 \\ nlm-bert-rr & NLM & rerank & nnlm & 0.3699 & 0.7785 & 0.6721 & 0.7056 & 0.4341 \\ relemb\_mlm\_0\_2 & UAmsterdam & rerank & nnlm & 0.2856 & 0.7677 & 0.6662 & 0.7056 & 0.4350 \\ nlm-prfun-bert & NLM & fullrank & nnlm & 0.3445 & 0.8603 & 0.6648 & 0.6927 & 0.4265 \\ TUW-TK-Sparse & TU\_Vienna & rerank & nn & 0.3188 & 0.7970 & 0.6610 & 0.7056 & 0.4164 \\ TUW-TK-2Layer & TU\_Vienna & rerank & nn & 0.3075 & 0.7654 & 0.6539 & 0.7056 & 0.4179 \\ p\_d2q\_bm25 & anserini & fullrank & nnlm & 0.2757 & 0.7326 & 0.6187 & 0.8035 & 0.4074 \\ p\_d2q\_bm25rm3 & anserini & fullrank & nnlm & 0.2848 & 0.7424 & 0.6172 & 0.8391 & 0.4295 \\ bert\_6 & UAmsterdam & rerank & nnlm & 0.3240 & 0.7386 & 0.6149 & 0.7056 & 0.3760 \\ CoRT-bm25 & HSRM-LAVIS & fullrank & nnlm & 0.2201 & 0.8372 & 0.5992 & 0.8072 & 0.3611 \\ CoRT-standalone & HSRM-LAVIS & fullrank & nnlm & 0.2412 & 0.8112 & 0.5926 & 0.6002 & 0.3308 \\ bl\_bcai\_mdl1\_vt & bl\_bcai & fullrank & trad & 0.1854 & 0.7037 & 0.5667 & 0.7430 & 0.3380 \\ bcai\_class\_pass & bcai & fullrank & trad & 0.1999 & 0.7115 & 0.5600 & 0.7430 & 0.3374 \\ bl\_bcai\_mdl1\_vs & bl\_bcai & fullrank & trad & 0.1563 & 0.6277 & 0.5092 & 0.7430 & 0.3094 \\ indri-fdm & bl\_rmit & fullrank & trad & 0.1798 & 0.6498 & 0.5003 & 0.7778 & 0.2989 \\ terrier-InL2 & bl\_rmit & fullrank & trad & 0.1864 & 0.6436 & 0.4985 & 0.7649 & 0.3135 \\ 
terrier-BM25 & bl\_rmit & fullrank & trad & 0.1631 & 0.6186 & 0.4980 & 0.7572 & 0.3021 \\ DLH\_d\_5\_t\_25 & RMIT & fullrank & trad & 0.1454 & 0.5094 & 0.4935 & 0.8175 & 0.3199 \\ indri-lmds & bl\_rmit & fullrank & trad & 0.1250 & 0.5866 & 0.4912 & 0.7741 & 0.2961 \\ indri-sdm & bl\_rmit & fullrank & trad & 0.1600 & 0.6239 & 0.4822 & 0.7726 & 0.2870 \\ p\_bm25rm3 & anserini & fullrank & trad & 0.1495 & 0.6360 & 0.4821 & 0.7939 & 0.3019 \\ p\_bm25 & anserini & fullrank & trad & 0.1786 & 0.6585 & 0.4796 & 0.7428 & 0.2856 \\ bm25\_bert\_token & UAmsterdam & fullrank & trad & 0.1576 & 0.6409 & 0.4686 & 0.7169 & 0.2606 \\ terrier-DPH & bl\_rmit & fullrank & trad & 0.1420 & 0.5667 & 0.4671 & 0.7353 & 0.2758 \\ TF\_IDF\_d\_2\_t\_50 & RMIT & fullrank & trad & 0.1391 & 0.5317 & 0.4580 & 0.7722 & 0.2923 \\ small\_1k & reSearch2vec & rerank & nnlm & 0.0232 & 0.2785 & 0.2767 & 0.7056 & 0.2112 \\ med\_1k & reSearch2vec & rerank & nnlm & 0.0222 & 0.2720 & 0.2708 & 0.7056 & 0.2081 \\ DoRA\_Large\_1k & reSearch2vec & rerank & nnlm & 0.0208 & 0.2740 & 0.2661 & 0.7056 & 0.2072 \\ DoRA\_Small & reSearch2vec & fullrank & nnlm & 0.0000 & 0.1287 & 0.0484 & 0.0147 & 0.0088 \\ DoRA\_Med & reSearch2vec & fullrank & nnlm & 0.0000 & 0.1075 & 0.0431 & 0.0147 & 0.0087 \\ DoRA\_Large & reSearch2vec & fullrank & nnlm & 0.0000 & 0.1111 & 0.0414 & 0.0146 & 0.0079 \\ \bottomrule \end{tabular} \label{tbl:results-passages} \end{table} \begin{figure} \includegraphics[width=\textwidth]{img/model-task-docs-lollipop-per-query.pdf} \caption{Comparison of the best ``nnlm'' and ``trad'' runs on individual test queries for the document retrieval task. Queries are sorted by difference in mean performance between ``nnlm'' and ``trad'' runs. 
Queries on which ``nnlm'' wins by a large margin are at the top.} \label{fig:model-task-docs-bar-per-query} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{img/model-task-passages-lollipop-per-query.pdf} \caption{Comparison of the best ``nnlm'' and ``trad'' runs on individual test queries for the passage retrieval task. Queries are sorted by difference in mean performance between ``nnlm'' and ``trad'' runs. Queries on which ``nnlm'' wins by a large margin are at the top.} \label{fig:model-task-passages-bar-per-query} \end{figure} \paragraph{End-to-end retrieval \emph{vs. } reranking.} Our datasets include top-$k$ candidate result lists, with 100 candidates per query for document retrieval and 1000 candidates per query for passage retrieval. Runs that simply rerank the provided candidates are ``rerank'' runs, whereas runs that perform end-to-end retrieval against the corpus, with millions of potential results, are ``fullrank'' runs. We would expect that a ``fullrank'' run should be able to find a greater number of relevant candidates than we provided, achieving higher NCG@$k$. A multi-stage ``fullrank'' run should also be able to optimize the stages jointly, such that early stages produce candidates that later stages are good at handling. According to Figure~\ref{fig:recall-stem}, ``fullrank'' did not achieve much better NDCG@10 performance than ``rerank'' runs. In fact, for the passage retrieval task, the top two runs are of type ``rerank''. While it was possible for ``fullrank'' to achieve better NCG@$k$, it was also possible to make NCG@$k$ worse, and achieving significantly higher NCG@$k$ does not seem necessary to achieve good NDCG@10. Specifically, for the document retrieval task, the best ``fullrank'' run achieves $5\%$ higher NDCG@10 than the best ``rerank'' run; whereas for the passage retrieval task, the best ``fullrank'' run performs slightly worse ($0.3\%$ lower NDCG@10) compared to the best ``rerank'' run. 
Similar to our observations from Deep Learning Track 2019, we are not yet seeing a strong advantage of ``fullrank'' over ``rerank''. However, we hope that as the body of literature on neural methods for phase 1 retrieval (\emph{e.g.}, \citep{boytsov2016off, zamani2018neural, mitra2019incorporating, nogueira2019document}) grows, we would see a larger number of runs with deep learning as an ingredient for phase 1 in future editions of this TREC track. \begin{figure} \center \begin{subfigure}{.49\textwidth} \includegraphics[width=\textwidth]{img/model-task-docs-stem-by-subtask.pdf} \caption{NDCG@10 for runs on the document retrieval task} \label{fig:model-task-docs-stem-by-subtask} \end{subfigure} \hfill \begin{subfigure}{.49\textwidth} \includegraphics[width=\textwidth]{img/model-task-passages-stem-by-subtask.pdf} \caption{NDCG@10 for runs on the passage retrieval task} \label{fig:model-task-passages-stem-by-subtask} \end{subfigure} \begin{subfigure}{.49\textwidth} \includegraphics[width=\textwidth]{img/ncg-task-docs-stem.pdf} \caption{NCG@100 for runs on the document retrieval task} \label{fig:recall-task-docs-stem} \end{subfigure} \hfill \begin{subfigure}{.49\textwidth} \includegraphics[width=\textwidth]{img/ncg-task-passages-stem.pdf} \caption{NCG@1000 for runs on the passage retrieval task} \label{fig:recall-task-passages-stem} \end{subfigure} \caption{Analyzing the impact of ``fullrank'' \emph{vs. } ``rerank'' settings on retrieval performance. Figure~(a) and (b) show the performance of different runs on the document and passage retrieval tasks, respectively. Figure~(c) and (d) plot the NCG@100 and NCG@1000 metrics for the same runs for the two tasks, respectively. The runs are ordered by their NDCG@10 performance along the $x$-axis in all four plots. 
We observe that the best run under the ``fullrank'' setting outperforms the same under the ``rerank'' setting for both document and passage retrieval tasks---although the gaps are relatively smaller compared to those in Figure~\ref{fig:model-stem-by-model-type}. If we compare Figure~(a) with (c) and Figure~(b) with (d), we do not observe any evidence that the NCG metric is a good predictor of NDCG@10 performance.} \label{fig:recall-stem} \end{figure} \paragraph{Effect of ORCAS data.} Based on the descriptions provided, ORCAS data seems to have been used by six of the runs (ndrm3-orc-full, ndrm3-orc-re, uogTrBaseL17, uogTrBaseQL17o, uogTr31oR, relemb\_mlm\_0\_2). Most runs seem to make use of the ORCAS data as a field, with some runs using the data as an additional training dataset as well. Most runs used the ORCAS data for the document retrieval task, with relemb\_mlm\_0\_2 being the only run using the ORCAS data for the passage retrieval task. This year it was not necessary to use ORCAS data to achieve the highest NDCG@10. However, when we compare the performance of the runs that use the ORCAS dataset with those that do not use the dataset within the same group, we observe that usage of the ORCAS dataset always led to improved performance in terms of NDCG@10, with the maximum increase being around $0.0513$. This suggests that the ORCAS dataset is providing additional information that is not available in the training data. This could also imply that even though the training dataset provided as part of the track is very large, deep models are still in need of more training data. \paragraph{NIST labels \emph{vs. } Sparse MS MARCO labels.} Our baseline human labels from MS MARCO often have one known positive result per query. We use these labels for training, but they are also available for test queries. Although our official evaluation uses NDCG@10 with NIST labels, we now compare this with reciprocal rank (RR) using MS MARCO labels.
Our goal is to understand how changing the labeling scheme and metric affects the overall results of the track. Where the two disagree, we believe the NDCG results are more valid: they evaluate the ranking more comprehensively, and a ranker that can only perform well on labels with exactly the same distribution as the training set is not robust enough for use in real-world applications, where real users will have opinions that are not necessarily identical to the preferences encoded in sparse training labels. Figure~\ref{fig:rrms_vs_ndcg} shows the agreement between the results using MS MARCO and NIST labels for the document retrieval and passage retrieval tasks. While the agreement between the evaluation setup based on MS MARCO and TREC seems reasonable for both tasks, agreement for the document ranking task seems to be lower (Kendall correlation of $0.46$) than agreement for the passage task (Kendall correlation of $0.69$). This value is also lower than the correlation we observed for the document retrieval task last year. In Table~\ref{tab:kendall_by_tasktype} we show how the agreement between the two evaluation setups varies across task and run type. Agreement on which are the best neural network runs is high, but correlation for document trad runs is close to zero. \begin{table}[] \centering \caption{Leaderboard metrics breakdown. The Kendall agreement ($\tau$) of NDCG@10 and RR (MS) varies across task and run type. Agreement on the best neural network runs is high, but agreement on the best document trad runs is very low. We do not list the agreement for passage nn runs since there are only two runs.} \begin{tabular}{lrr} \toprule run type & docs & passages \\ \midrule nnlm & 0.83 & 0.76 \\ nn & 0.96 & --- \\ trad & 0.03 & 0.67 \\ \midrule all & 0.46 & 0.69 \\ \bottomrule \end{tabular} \label{tab:kendall_by_tasktype} \end{table} One explanation for this low correlation could be use of the ORCAS dataset.
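The Kendall $\tau$ values above compare two orderings of the same set of runs by counting concordant versus discordant run pairs. A minimal sketch of the computation (the score lists below are made up for illustration, not actual track scores):

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation between two score lists over the same runs:
    (concordant pairs - discordant pairs) / (total non-tied pairs)."""
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# hypothetical NDCG@10 and RR (MS) scores for five runs
ndcg = [0.62, 0.60, 0.55, 0.50, 0.48]
rr = [0.90, 0.85, 0.88, 0.70, 0.72]
print(kendall_tau(ndcg, rr))  # -> 0.6 (8 concordant, 2 discordant pairs)
```

A $\tau$ of $1$ means the two metrics rank the runs identically; values near $0$, as for the document ``trad'' runs, mean the two label sets pick essentially unrelated winners.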
ORCAS was mainly used in the document retrieval task, and could bring search results more in line with Bing's results, since Bing's results are what may be clicked. Since MS MARCO sparse labels were also generated based on top results from Bing, we would expect to see some correlation between ORCAS runs and MS MARCO labels (and Bing results). By contrast, NIST judges had no information about what results were retrieved or clicked in Bing, so may have somewhat less correlation with Bing's results and users. In Figure~\ref{fig:orcas_scatter} we compare the results from the two evaluation setups when the runs are split based on the usage of the ORCAS dataset. Our results suggest that runs that use the ORCAS dataset did perform somewhat better based on the MS MARCO evaluation setup. While the similarities between the ORCAS dataset and the MS MARCO labels seem to be one reason for the mismatch between the two evaluation results, it is not enough to fully explain the $0.03$ correlation in Table~\ref{tab:kendall_by_tasktype}. Removing the ORCAS ``trad'' runs only increases the correlation to $0.13$. In the future we plan to further analyze the possible reasons for this poor correlation, which could also be related to 1) the different metrics used in the two evaluation setups (RR vs. NDCG@10), 2) the different sensitivity of the datasets due to the different numbers of queries and documents labelled per query, or 3) differences in relevance labels provided by NIST assessors vs. labels derived from clicks. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{img/docs_rrms_vs_ndcg.pdf} \includegraphics[width=0.49\textwidth]{img/passages_rrms_vs_ndcg.pdf} \caption{Leaderboard metrics agreement analysis. For document runs, the agreement between the leaderboard metric RR (MS) and the main TREC metric NDCG@10 is lower this year. The Kendall correlation is $\tau=0.46$, compared to $\tau=0.69$ in 2019.
For the passage task, we see $\tau=0.69$ in 2020, compared to $\tau=0.68$ in 2019.} \label{fig:rrms_vs_ndcg} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{img/orcas_scatter.pdf} \caption{This year it was not necessary to use ORCAS data to achieve the highest NDCG@10. ORCAS runs did somewhat better on the leaderboard metric RR (MS), which uses different labels from the other metrics. This may indicate an alignment between the Bing user clicks in ORCAS with the labeled MS MARCO results, which were also generated by Bing.} \label{fig:orcas_scatter} \end{figure} \section{Reusability of test collections} \section{Conclusion} \label{sec:conclusion} The TREC 2020 Deep Learning Track has provided two large training datasets, for a document retrieval task and a passage retrieval task, generating two ad hoc test collections with good reusability. The main document and passage training datasets in 2020 were the same as those in 2019. In addition, as part of the 2020 track, we have also released a large click dataset, the ORCAS dataset, which was generated using the logs of the Bing search engine. For both tasks, in the presence of large training data, this year's non-neural network runs were outperformed by neural network runs. While usage of the ORCAS dataset seems to help improve the performance of the systems, it was not necessary to use ORCAS data to achieve the highest NDCG@10. We compared reranking approaches to end-to-end retrieval approaches, and in this year's track there was not a huge difference, with some runs performing well in both regimes. This is another result that would be interesting to track in future years, since we would expect that end-to-end retrieval should perform better if it can recall documents that are unavailable in a reranking subtask. This year the number of runs submitted for both tasks has increased compared to last year. In particular, the number of non-neural runs has increased.
Hence, test collections generated as part of this year's track may be more reusable compared to last year, since these test collections may be fairer in evaluating the quality of unseen non-neural runs. We note that the number of ``nn'' runs also seems to be smaller this year. We will continue to encourage a variety of approaches in submission, to avoid converging too quickly on one type of run, and to diversify the judging pools. Similar to last year, in this year's track we have two types of evaluation labels for each task. Our official labels are more comprehensive, covering a large number of results per query, and labeled on a four-point scale at NIST. We compare this to the MS MARCO labels, which usually only have one positive result per query. While there was a strong correlation between the evaluation results obtained using the two datasets for the passage retrieval task, the correlation for the document retrieval task was lower. Part of this low correlation seems to be related to the usage of the ORCAS dataset (which was generated using a dataset similar to the one used to generate the MS MARCO labels) by some runs, and evaluation results based on MS MARCO data favoring these runs. However, our results suggest that while the ORCAS dataset could be one reason for the low correlation, there might be other reasons causing this reduced correlation, which we plan to explore as future work.
\section{Introduction} Magnetic resonance imaging (MRI) is a non-invasive, radiation-free medical imaging modality that provides excellent soft tissue contrast for diagnostic purposes. However, lengthy acquisition times in MRI remain a limitation. Accelerated MRI techniques acquire fewer measurements at a sub-Nyquist rate, and use redundancies in the acquisition system or the images to remove the resulting aliasing artifacts during reconstruction. In clinical MRI systems, multi-coil receivers are used during data acquisition. Parallel imaging (PI) is the most clinically used method for accelerated MRI, and exploits the redundancies between these coils for reconstruction \cite{Sense,Grappa}. Compressed sensing (CS) is another conventional accelerated MRI technique that exploits the compressibility of images in sparsifying transform domains \cite{lustig}, and is commonly used in combination with PI. However, PI and CS may suffer from noise and residual artifacts at high acceleration rates \cite{Robson,sandino2020compressed}. \begin{figure}[t] \begin{center}\label{fig:Robustness} \includegraphics[width=1\linewidth]{Figure1.png} \end{center} \vspace{-.3cm} \caption {Test datasets may differ from the training datasets in terms of sampling pattern, SNR, contrast and anatomy. Such differences lead to suboptimal reconstructions in the test datasets, raising robustness and generalizability concerns for translation of trained models to clinical practice. } \vspace{-.3cm} \end{figure} Recently, deep learning (DL) methods have emerged as an alternative accelerated MRI technique due to their improved reconstruction quality compared to conventional approaches \cite{Hammernik,Knoll_SPM}. Particularly, physics-guided deep learning reconstruction (PG-DLR) approaches have gained interest due to their robustness and improved reconstruction quality \cite{Hammernik,Knoll_SPM,Hemant,Schlemper,Hosseini_JSTSP}.
PG-DLR methods explicitly incorporate the physics of the data acquisition system into the neural network via a procedure known as algorithm unrolling \cite{monga2019algorithm}. This is done by unrolling conventional iterative algorithms that alternate between data consistency and regularization steps for a fixed number of iterations. Subsequently, PG-DLR approaches are trained in a supervised manner using large databases of fully-sampled measurements \cite{Hammernik, Hemant}. More recently, self-supervised learning via data undersampling (SSDU) has shown that similar reconstruction quality to supervised PG-DLR can be achieved while training on a database of only undersampled measurements \cite{yaman_SSDU_MRM}. While such database learning strategies offer improved reconstruction quality, acquisition of large datasets may not often be feasible. In some MRI applications involving time-varying physiological processes, dynamic information such as time courses of signal changes, contrast-related uptake or breathing patterns may differ substantially between subjects, making it difficult to generate high-quality databases of sufficient size for the aforementioned strategies. Furthermore, database training, in general, brings along concerns about generalization \cite{eldar2017challenges, KnollfastMRIChallenge}. In particular, measurement system parameters such as undersampling pattern or acceleration rates used in training may not match the unseen test dataset \cite{Knoll_TL}. Moreover, the test data may have different imaging contrast, SNR, or anatomical features compared to the training database, which may lead to sub-optimal reconstruction \cite{Knoll_TL, fastmri_2ndChallenge}. Finally, training datasets may lack examples of rare and/or subtle pathologies, increasing the risk of generalization failure \cite{Knoll_TL, KnollfastMRIChallenge}. 
In this work, we tackle these challenges associated with database training, and propose zero-shot SSDU (ZS-SSDU), which performs scan-specific training of PG-DLR without any external training database. Succinctly, ZS-SSDU splits acquired measurements into three disjoint sets, which are respectively used only in the PG-DLR neural network, in defining the training loss, and in establishing a stopping strategy to avoid overfitting. In cases where a database-pretrained network is available, ZS-SSDU leverages transfer learning for improved reconstruction quality and reduced computational complexity. Our contributions can be summarized as follows: \begin{itemize} \item We propose a zero-shot self-supervised method for learning MRI reconstruction from a single undersampled dataset without any external training database. \item We provide a well-defined methodology for determining a stopping criterion to avoid over-fitting, in contrast to other single-image training approaches \cite{Ulyanov_2018_CVPR}. \item We synergistically combine the proposed zero-shot self-supervised learning approach with transfer learning (TL) to further reduce computational costs and achieve improved reconstruction quality. \item We apply the proposed zero-shot learning approach to knee and brain MRI datasets, and show its efficacy in removing residual aliasing and banding artifacts compared to supervised database learning. \item We show that the proposed zero-shot learning in combination with TL may address robustness and generalizability issues of trained supervised models in terms of changes in sampling pattern, acceleration rate, contrast, SNR, and anatomy at inference time. \end{itemize} \section{Background and Related Work} \subsection{Accelerated MRI Acquisition Model} In MRI, raw measurement data is acquired in the frequency domain, also known as k-space. In current clinical MRI systems, multiple receiver coils are used, where each is sensitive to different parts of the volume.
In practice, MRI is accelerated by taking fewer measurements, which are characterized by an undersampling mask denoting the acquired locations in k-space. For a multi-coil MRI acquisition, the forward model is given as \begin{equation}\label{Eq:Forward_Model_MultiCoil_Percoil} {\bf y}_i = {\bf P}_{\Omega}\mathcal{F}{\bf C}_{i} \mathbf{x} +{\bf n}_i, \:\: i \in \{1, \dots, n_c\}, \end{equation} where ${\bf x}$ is the underlying image, ${\bf y}_i$ are the acquired measurements for the $i^\textrm{th}$ coil, ${\bf P}_{\Omega}$ is the masking operator for undersampling pattern $\Omega$, $\mathcal{F}$ is the Fourier transform, ${\bf C}_{i}$ is a diagonal matrix characterizing the sensitivity of the $i^\textrm{th}$ coil, ${\bf n}_i$ is measurement noise for $i^\textrm{th}$ coil, and $n_c$ is the number of coils \cite{Sense}. This system is typically concatenated across the coil dimension to yield a compact representation \begin{equation}\label{Eq:Forward_Model_MultiCoil} {\bf y}_{\Omega} = {\bf E}_{\Omega} \mathbf{x} +{\bf n}, \end{equation} where ${\bf y}_{\Omega}$ is the acquired undersampled measurements across all coils, ${\bf E}_\Omega$ is the forward encoding operator that concatenates ${\bf P}_{\Omega}\mathcal{F}{\bf C}_{i}$ across $i \in \{1, \dots, n_c\}.$ The general inverse problem for accelerated MRI is given as \begin{equation}\label{Eq:recons1} \arg \min_{\bf x} \|\mathbf{y}_{\Omega}-\mathbf{E}_{\Omega}\mathbf{x}\|^2_2 + \cal{R}(\mathbf{x}), \end{equation} where the first term $\|\mathbf{y}_{\Omega}-\mathbf{E}_{\Omega}\mathbf{x}\|^2_2 $ enforces data consistency (DC) with acquired measurements and $\cal{R}(\mathbf{\cdot})$ is a regularizer. \subsection{Parallel Imaging} The most clinically used acceleration strategy, PI uses linear methods to recover the image from the undersampled measurements ${\bf y}_{\Omega}$. PI performs reconstruction by solving Eq. \ref{Eq:recons1} either without a regularizer term or with a Tikhonov regularizer \cite{cgsense}. 
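The encoding operator ${\bf E}_\Omega$ of Eq. (2) and its adjoint can be sketched numerically as below. This is a minimal NumPy illustration under our own naming and a centered-FFT convention, not the authors' code; the adjoint property $\langle {\bf E}{\bf x}, {\bf y}\rangle = \langle {\bf x}, {\bf E}^H{\bf y}\rangle$ holds because the masked, coil-weighted orthonormal FFT is composed of unitary and self-adjoint pieces:

```python
import numpy as np

def forward_op(x, coil_sens, mask):
    """E_Omega: coil-weight the image, 2D centered FFT, then undersample."""
    # x: (H, W) image; coil_sens: (nc, H, W); mask: (H, W) binary pattern
    coil_imgs = coil_sens * x[None, ...]
    k = np.fft.fftshift(
        np.fft.fft2(np.fft.ifftshift(coil_imgs, axes=(-2, -1)), norm="ortho"),
        axes=(-2, -1))
    return mask[None, ...] * k

def adjoint_op(y, coil_sens, mask):
    """E_Omega^H: mask, centered inverse FFT, then coil-combine with conj(C_i)."""
    imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(mask[None, ...] * y, axes=(-2, -1)),
                     norm="ortho"),
        axes=(-2, -1))
    return np.sum(np.conj(coil_sens) * imgs, axis=0)
```

Applying `adjoint_op` to the acquired data gives the zero-filled, coil-combined image that serves as the standard starting point for iterative reconstruction.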
The clinical implementation of PI relies on a so-called uniform undersampling pattern, where acquired k-space lines are equispaced. This leads to coherent foldover artifacts in the aliased images, but also allows a non-iterative unaliasing solution in the image domain \cite{Sense}. We note that the problem can also be solved in k-space via interpolation \cite{Grappa}. PI typically suffers from noise amplification at high acceleration rates \cite{Sense,Grappa}. \subsection{Compressed Sensing} CS accelerates MRI acquisition by exploiting the compressibility of images in some transform domain, while requiring incoherent aliasing artifacts \cite{lustig}. In CS reconstruction, a sparsity-inducing regularizer is used in Eq. \ref{Eq:recons1}. A popular choice is the $\ell_1$ norm of transform-domain coefficients, in a fixed linear sparsifying domain, such as wavelets \cite{lustig}. CS-based approaches utilize random undersampling patterns to generate incoherent aliasing artifacts. One of the main limitations of CS approaches is blurring and residual artifacts seen at high acceleration rates \cite{sandino2020compressed}. \subsection{PG-DLR with Algorithm Unrolling} Several optimization methods are available for solving the inverse problem in (\ref{Eq:recons1}) \cite{fessler_SPM}. Variable-splitting via quadratic penalty is one such approach that decouples the DC and regularizer units. It introduces an auxiliary variable ${\bf z}$ that is constrained to be equal to ${\bf x}$, and (\ref{Eq:recons1}) is reformulated as an unconstrained problem with a quadratic penalty \begin{equation}\label{Eq:recons2} \arg \min_{\bf x, z} \|\mathbf{y}_{\Omega}-\mathbf{E}_{\Omega}\mathbf{x}\|^2_2 + \mu \|\mathbf{x}-\mathbf{z}\|^2_2 + \cal{R}(\mathbf{z}), \end{equation} where $\mu$ is the penalty parameter. The optimization problem in Eq. 
(\ref{Eq:recons2}) is then solved via alternating minimization as \begin{subequations} \begin{align} & \mathbf{z}^{(i)} = \arg \min_{\bf z}\mu \lVert\mathbf{x}^{(i-1)}-\mathbf{z}\rVert_{2}^2 +\cal{R}(\mathbf{z}),\label{Eq:recons3a} \\ & \mathbf{x}^{(i)} = \arg \min_{\bf x}\|\mathbf{y}_{\Omega}-\mathbf{E}_{\Omega}\mathbf{x}\|^2_2 +\mu\lVert\mathbf{x}-\mathbf{z}^{(i)}\rVert_{2}^2,\label{Eq:recons3b} \end{align} \end{subequations} where $\mathbf{z}^{(i)}$ is an intermediate variable and $\mathbf{x}^{(i)}$ is the desired image at iteration $i$. In PG-DLR, an iterative algorithm, as in (\ref{Eq:recons3a}) and (\ref{Eq:recons3b}) is unrolled for a fixed number of iterations \cite{Hammernik, Hemant,Hosseini_JSTSP,LeslieYing_SPM}. Each unrolled iteration contains a DC and a regularizer unit, in which the regularizer sub-problem in Eq. (\ref{Eq:recons3a}) is implicitly solved with neural networks and the DC sub-problem in Eq. (\ref{Eq:recons3b}) is solved via linear methods such as gradient descent \cite{Hammernik} or conjugate gradient (CG) \cite{Hemant}. There have been numerous works on PG-DLR for accelerated MRI \cite{Schlemper, Hammernik,Hemant, LeslieYing_SPM,yaman_SSDU_MRM}. Most of these works vary from each other on the algorithms used for DC and neural networks employed in the regularizer units. However, all these works require a large database of training samples. \section{Methods} In this section, we review supervised and self-supervised PG-DLR training strategies. Subsequently, we introduce the proposed zero-shot self-supervised learning approach. \subsection{Supervised Learning for PG-DLR} In supervised PG-DLR, training is performed using a database of fully-sampled reference data. Let ${\bf y}_{\textrm{ref}}^i$ be fully-sampled k-space for subject $i$ and $f({\bf y}_{\Omega}^i, {\bf E}_{\Omega}^i; {\bm \theta})$ be the output of the unrolled network for under-sampled k-space ${\bf y}_{\Omega}^i$, where the network is parameterized by ${\bm \theta}$. 
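The unrolled reconstruction $f(\cdot)$ just introduced alternates the two sub-problems in Eqs. (5a) and (5b). A schematic NumPy sketch follows, with a generic `denoiser` callable standing in for the trained regularizer network and the DC sub-problem solved by CG on the normal equations $({\bf E}^H{\bf E} + \mu {\bf I}){\bf x} = {\bf E}^H{\bf y} + \mu{\bf z}$; the function names are our own assumptions, not the authors' implementation:

```python
import numpy as np

def dc_step(z, y, E, EH, mu, n_iters=10, tol=1e-12):
    """Solve (E^H E + mu I) x = E^H y + mu z by conjugate gradient (Eq. 5b)."""
    A = lambda v: EH(E(v)) + mu * v
    b = EH(y) + mu * z
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iters):
        if rs < tol:          # residual already negligible
            break
        Ap = A(p)
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def unrolled_recon(y, E, EH, denoiser, mu=0.1, n_unroll=10):
    """Alternate the regularizer (Eq. 5a, here a stand-in denoiser) and DC."""
    x = EH(y)                 # zero-filled starting point
    for _ in range(n_unroll):
        z = denoiser(x)       # in PG-DLR this is the trained network
        x = dc_step(z, y, E, EH, mu)
    return x
```

In PG-DLR the loop body is unrolled for a fixed number of iterations and the denoiser's weights are learned end-to-end, with the CG-based DC step kept as a fixed, differentiable linear solver.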
End-to-end training minimizes \cite{Knoll_SPM,yaman_SSDU_MRM} \vspace{-0.02cm} \begin{equation} \min_{\bm \theta} \frac1N \sum_{i=1}^{N} \mathcal{L}( {\bf y}_{\textrm{ref}}^i, \:{\bf E}_\textrm{full}^i f({\bf y}_{\Omega}^i, {\bf E}_{\Omega}^i; {\bm \theta})), \vspace{-0.02cm} \end{equation} where $N$ is the number of samples in the training database, ${\bf E}_\textrm{full}^i$ is the fully-sampled encoding operator that transforms the network output to k-space and $\mathcal{L}(\cdot, \cdot)$ is a loss function. \begin{figure*}[!t] \begin{center} \includegraphics[width=1\textwidth]{Figure_Architecture.png} \end{center} \vspace{-.3cm} \caption {\label{fig:Architecture}An overview of the proposed zero-shot self-supervised learning approach. a) Acquired measurements for the scan of interest are split into three sets: a training mask ($\Theta$), a loss mask ($\Lambda$), and a validation mask for automated early stopping ($\Gamma$). b) The parameters, ${\bm \theta}$, of the unrolled MRI reconstruction network are updated using $\Theta$ and $\Lambda$ in data consistency (DC) units of the unrolled network and for defining loss, respectively. c) Concurrently, a k-space validation procedure is used to establish the stopping criterion by using $\Omega \backslash \Gamma$ in the DC units and $\Gamma$ to measure a validation loss. d) Once the network training has been stopped due to an increasing trend in the validation loss, the final reconstruction is performed using the relevant learned network parameters and all the acquired measurements in the DC unit.} \vspace{-.25cm} \end{figure*} \subsection{Self-Supervised Learning for PG-DLR} \label{sec:ssdu} Unlike supervised learning, SSDU performs training without fully-sampled data by only utilizing acquired undersampled measurements \cite{yaman_SSDU_MRM}. In SSDU, the undersampled data indices $\Omega$ are split into two disjoint sets $\Theta$ and $\Lambda$ as $\Omega = \Theta \cup \Lambda$.
$\Theta$ is the set of k-space locations used in the DC units of the PG-DLR network during training, while $\Lambda$ is a set of k-space locations used in the loss function. End-to-end training is performed using the loss function \vspace{-0.02cm} \begin{equation} \min_{\bm \theta} \frac1N \sum_{i=1}^{N} \mathcal{L}\Big({\bf y}_{\Lambda}^i, \: {\bf E}_{\Lambda}^i \big(f({\bf y}_{\Theta}^i, {\bf E}_{\Theta}^i; {\bm \theta}) \big) \Big). \vspace{-0.02cm} \end{equation} SSDU was also extended to a multi-mask setting \cite{Yaman_MultiMask_SSDU}, where $\Omega$ is retrospectively split into disjoint sets $\Theta_k$ and $\Lambda_k$ multiple times. Hence, available measurements for each slice in the database are partitioned $K$ times such that \begin{equation}\label{eq:mmssdu1} \Omega = \Theta_k \cup \Lambda_k, \ \quad k \in \{1, \dots, K\}, \end{equation} with $\Lambda_k = \Omega \backslash \Theta_k$, leading to the following training loss \begin{equation} \label{eq:mmssdu2} \min_{\bm \theta} \frac{1}{N\cdot K} \sum_{i=1}^{N}\sum_{k=1}^{K} \mathcal{L}\Big({\bf y}_{\Lambda_k}^i, \: {\bf E}_{\Lambda_k}^i \big(f({\bf y}_{\Theta_k}^i, {\bf E}_{\Theta_k}^i; {\bm \theta}) \big) \Big). \end{equation} The multi-mask SSDU was shown to outperform the single-mask SSDU in terms of reconstruction quality \cite{Yaman_MultiMask_SSDU}. \subsection{Proposed Zero-Shot Learning for PG-DLR} \label{sec:zsssdu} The 2-way partitioning of data in Section \ref{sec:ssdu} is reminiscent of cross-validation, where the available database is split into two sets, one of which is used to train the model and the other to test whether the trained model generalizes to unseen data. Similarly in SSDU, one set, $\Theta$, is used within the network, while the other set, $\Lambda$, is never seen by the network and is used to test the generalization performance to other k-space points by defining the loss.
Note the main difference to cross-validation is that in SSDU, partitioning is performed for each slice in the training database, whereas cross-validation partitions the whole database only once. While this 2-way partitioning can be applied for subject-specific learning, it leads to overfitting unless the training is stopped early \cite{Amir_EMBC}. This is similar to other single-image learning strategies, such as deep image prior (DIP) or zero-shot super-resolution \cite{Ulyanov_2018_CVPR, ZSSR}. DIP-type approaches show that an untrained neural network can successfully perform instance-specific image restoration tasks such as denoising, super-resolution, and inpainting without any training data. However, such DIP-type techniques require early stopping to avoid over-fitting, which is typically done with a manual heuristic selection \cite{Ulyanov_2018_CVPR, Amir_EMBC}. Thus, in this work, we propose a zero-shot self-supervised learning via data undersampling (ZS-SSDU) approach for scan-specific PG-DLR with a well-defined automated early stopping criterion. \vspace{0.2cm} \noindent\textbf{ZS-SSDU Formulation and Training: } In this work, we extend the partitioning to a 3-way split, which is reminiscent of using a validation set in addition to testing and training sets for hyperparameter tuning and/or for regularization by early stopping. We define the following partition for $\Omega$: \begin{equation} \Omega = \Theta \sqcup \Lambda \sqcup \Gamma, \end{equation} where $\sqcup$ denotes a disjoint union, i.e. $\Theta, \Lambda$ and $\Gamma$ are pairwise disjoint (Figure \ref{fig:Architecture}). Similar to Section \ref{sec:ssdu}, $\Theta$ is used in the DC units of the unrolled network, and $\Lambda$ is used to define the loss in k-space. The third partition $\Gamma$ is a set of acquired k-space indices that are set aside for defining a k-space validation loss.
ZS-SSDU can be formulated in the multi-mask setting \cite{Yaman_MultiMask_SSDU}, by fixing the k-space validation partition $\Gamma \subset \Omega$, and performing the multi-masking on $\Omega \backslash \Gamma$. Formally, $\Omega \backslash \Gamma$ is partitioned $K$ times such that \vspace{-0.02cm} \begin{equation}\label{eq:zsmmssdu1} \Omega \backslash \Gamma = \Theta_k \sqcup \Lambda_k, \ \quad k \in \{1, \dots, K\}, \vspace{-0.02cm} \end{equation} where $\Lambda_k$, $\Theta_k$ and $\Gamma$ are pairwise disjoint, i.e. $\Omega =\Gamma \sqcup \Theta_k \sqcup \Lambda_k, \forall \ k$. Thus, ZS-SSDU performs training by minimizing \\ \vspace{-.02cm} \begin{equation} \label{eq:zsmmssdu2} \nonumber \min_{\bm \theta} \frac{1}{K} \sum_{k=1}^{K} \mathcal{L}\Big({\bf y}_{\Lambda_k}, \: {\bf E}_{\Lambda_k} \big(f({\bf y}_{\Theta_k}, {\bf E}_{\Theta_k}; {\bm \theta}) \big) \Big) \vspace{-0.02cm} \end{equation} This is now supplemented by a new k-space validation loss, which tests the generalization performance of the trained network on the k-space validation partition $\Gamma$. For the $l^\textrm{th}$ epoch, where the learned network weights are specified by ${\bm \theta}^{(l)}$, this validation loss is given by: \vspace{-0.03cm} \begin{equation} \label{Eq:recons7b} \mathcal{L}\Big({\bf y}_{\Gamma}, \: {\bf E}_{\Gamma} \big(f({\bf y}_{\Omega \backslash \Gamma}, {\bf E}_{\Omega \backslash \Gamma}; {\bm \theta}^{(l)}) \big) \Big). \vspace{-0.03cm} \end{equation} Note that in (\ref{Eq:recons7b}), the network output is calculated by applying the DC units on $\Omega \backslash \Gamma = \Theta \sqcup \Lambda$, i.e. all acquired points outside of $\Gamma$, to better assess its generalizability performance. The key idea is that while the training loss will decrease over epochs, the k-space validation loss will start increasing once overfitting is observed. Thus, we monitor the loss in (\ref{Eq:recons7b}) during training to define an early stopping criterion to avoid overfitting. 
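The 3-way partitioning of Eqs. (11)-(12) can be sketched as follows: fix a validation set $\Gamma$ once, then re-split the remainder $\Omega \backslash \Gamma$ into $K$ pairs $(\Theta_k, \Lambda_k)$. This is illustrative NumPy code with our own function name, using the uniformly random selection described for the ablation experiments; it is not the authors' implementation:

```python
import numpy as np

def zs_ssdu_split(omega, K=10, val_frac=0.2, rho=0.4, seed=0):
    """3-way split of acquired k-space indices: a fixed validation set Gamma
    plus K pairs (Theta_k, Lambda_k) with Theta_k | Lambda_k = Omega \\ Gamma."""
    rng = np.random.default_rng(seed)
    omega = np.asarray(omega)
    perm = rng.permutation(omega.size)
    n_val = int(val_frac * omega.size)
    gamma = omega[perm[:n_val]]      # k-space validation set, fixed once
    rest = omega[perm[n_val:]]       # Omega \ Gamma, re-partitioned K times
    pairs = []
    for _ in range(K):
        in_lambda = rng.random(rest.size) < rho
        pairs.append((rest[~in_lambda], rest[in_lambda]))  # (Theta_k, Lambda_k)
    return gamma, pairs
```

By construction $\Gamma$, $\Theta_k$, and $\Lambda_k$ are pairwise disjoint for every $k$, so the validation loss in Eq. (13) is always measured on k-space points the trained network never used, either in the DC units or in the training loss.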
Let $L$ be the epoch in which training needs to be stopped. Then at inference time, the network output is calculated as $f({\bf y}_{\Omega}, {\bf E}_{\Omega}; {\bm \theta}^{(L)}),$ i.e. all acquired points are used to calculate the network output \cite{yaman_SSDU_MRM}. \vspace{0.2cm} \noindent\textbf{ZS-SSDU with Transfer Learning (TL):} TL has been used for re-training DL models pre-trained on large databases to reconstruct MRI data with different characteristics \cite{Knoll_TL}. However, such transfer still requires another, often smaller, database for re-training. In contrast, in the presence of pre-trained models, ZS-SSDU can be combined with TL, referred to as ZS-SSDU-TL, to reconstruct a single dataset with different characteristics by using weights of the pre-trained model for initialization. This facilitates faster convergence, reducing the reconstruction time. \vspace{0.1cm} \begin{figure}[!b] \begin{center} \includegraphics[width=1\linewidth]{Figure_DataType.png} \end{center} \vspace{-.3cm} \caption {\label{fig:DataType} Different contrast weightings and anatomies: a) Cor-PD, b) Cor-PDFS, c) Ax-FLAIR, d) Ax-T2, as well as undersampling patterns: e) Uniform, f) Random mask, used in this study. Zero-filled images generated by uniform and random undersampling masks have coherent and incoherent aliasing artifacts, respectively. Coherent aliasing artifacts are generally harder to remove compared to incoherent artifacts.} \end{figure} \section{Experiments} \subsection{Datasets} We performed experiments on publicly available fully-sampled multi-coil knee and brain MRI from the fastMRI database \cite{fastmri}. The knee and brain MRI datasets contained data from 15 and 16 receiver coils, respectively. Fully-sampled datasets were retrospectively undersampled by keeping 24 lines of autocalibrated signal (ACS) from the center of k-space. The fastMRI database contains different contrast weightings.
For knee MRI, we used coronal proton density (Cor-PD) and coronal proton density with fat suppression (Cor-PDFS), and for brain MRI, axial FLAIR (Ax-FLAIR) and axial T2 (Ax-T2). Figure \ref{fig:DataType} shows the different datasets used in this study, as well as different types of undersampling masks. \subsection{Implementation Details}\label{sec:implementation_details} All PG-DLR approaches were trained end-to-end using 10 unrolled iterations. The CG method and a ResNet structure were employed in the DC and regularizer units of the unrolled network, respectively \cite{yaman_SSDU_MRM}. The ResNet comprises input and output convolution layers and 15 residual blocks (RB), each containing two convolutional layers, where the first layer is followed by a ReLU and the second layer is followed by a constant multiplication \cite{Timofte}. All layers had a kernel size of $3\times 3$ and 64 channels. The real and imaginary parts of the complex MR images were concatenated prior to being input to the ResNet as 2-channel images. The unrolled network, which shares parameters across the unrolled iterations, had a total of 592,129 trainable parameters. Coil sensitivity maps were generated from the central 24$\times$24 ACS using ESPIRiT \cite{ESPIRIT}. End-to-end training was performed with a normalized $\ell_1$-$\ell_2$ loss (Adam optimizer, LR=$5\cdot 10^{-4}$, batch size=1) \cite{yaman_SSDU_MRM}. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) were used as quantitative evaluation criteria. \subsection{Reconstruction Method Comparisons} In this work, we focus on comparing training strategies for accelerated MRI reconstruction. Thus, we used the same network architecture from Section \ref{sec:implementation_details} for all training methods in all experiments. We note that the proposed ZS-SSDU training strategy is agnostic to the specifics of the neural network architecture, and the neural network for the regularizer unit \cite{Timofte} was not tuned in any way.
In fact, the number of tunable network parameters is higher than the number of undersampled measurements available on a single dataset, i.e.\ the dimension of ${\bf y}_{\Omega}$. As such, different neural network architectures may be used for the regularizer unit in the unrolled network, but this is not the focus of our study. \vspace{0.2 cm} \noindent\textbf{Supervised PG-DLR:} Supervised PG-DLR models for knee and brain MRI were trained on 300 slices from 15 and 30 different subjects, respectively. For each knee and brain contrast weighting, two networks were trained separately using random and uniform masks \cite{Hammernik} at an acceleration rate (R) of 4 \cite{KnollfastMRIChallenge}. Trained networks were used for comparison and TL purposes. We note that random undersampling results in incoherent artifacts, whereas uniform undersampling leads to coherent artifacts that are harder to remove (Figure \ref{fig:DataType}) \cite{Knoll_TL}. Hence, we focus on the more difficult problem of uniform undersampling, while presenting random undersampling results in Supplementary Materials. \vspace{0.2 cm} \noindent\textbf{DIP-Recon:} We employ a DIP-type scan-specific MRI reconstruction that uses all acquired measurements both in the DC units and for defining the loss \cite{yaman_SSDU_MRM}: \begin{equation} \label{Eq:dip_ssdu} \vspace{-0.02cm} \mathcal{L}\Big({\bf y}_{\Omega}, \: {\bf E}_{\Omega} \big(f({\bf y}_{\Omega}, {\bf E}_{\Omega}; {\bm \theta}) \big) \Big). \vspace{-0.02cm} \end{equation} We refer to the reconstruction from this training mechanism as DIP-Recon. DIP-Recon-TL refers to combining (\ref{Eq:dip_ssdu}) with TL. As mentioned, DIP-Recon does not have a stopping criterion, hence early stopping was heuristically determined (Supplementary Figure S1). \vspace{0.2 cm} \noindent\textbf{Parallel Imaging:} We include CG-SENSE, which is a commonly used scan-specific conventional PI method \cite{Sense,cgsense}, as the clinical baseline for comparison purposes.
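To make the role of such scan-specific losses concrete, here is a minimal numerical sketch, assuming the normalized $\ell_1$-$\ell_2$ loss form of \cite{yaman_SSDU_MRM} and a toy single-channel FFT encoding operator (our simplification for illustration, not the actual multi-coil ${\bf E}_{\Omega}$):

```python
import numpy as np

def E_omega(image, mask):
    """Toy encoding operator: 2D FFT followed by k-space undersampling.
    The real multi-coil operator also applies coil sensitivities."""
    return mask * np.fft.fft2(image, norm="ortho")

def normalized_l1_l2_loss(y_ref, y_pred):
    """Normalized l1-l2 loss, as in SSDU-style training:
    ||e||_2 / ||y||_2 + ||e||_1 / ||y||_1 with e = y_ref - y_pred."""
    e = y_ref - y_pred
    l2 = np.linalg.norm(e) / np.linalg.norm(y_ref)
    l1 = np.sum(np.abs(e)) / np.sum(np.abs(y_ref))
    return l2 + l1

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
mask = (rng.uniform(size=(32, 32)) < 0.25).astype(float)  # roughly R = 4
y = E_omega(x, mask)
# the loss of a perfect reconstruction is zero; any perturbation increases it
assert abs(normalized_l1_l2_loss(y, y)) < 1e-12
```

The loss is evaluated only on measured k-space locations (the mask zeroes everything else), which is what makes the training scan-specific.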
\begin{figure}[!b] \begin{center} \includegraphics[width=1\linewidth]{loss_curves.png} \end{center} \vspace{-.3cm} \caption {\label{fig:loss_curves} a) Representative training and k-space validation loss curves for ZS-SSDU with multiple $K \in \{1, 10, 25, 50\}$ masks on Cor-PD knee MRI using uniform undersampling at R = 4. For $K > 1$, the validation loss forms an L-curve, whose breaking point (red arrows) dictates the automated early stopping criterion for training. b) Corresponding ZS-SSDU reconstruction results. At $K = 1$, without a clear stopping criterion, visible artifacts remain, highlighting overfitting. For $K > 1$, ZS-SSDU shows good reconstruction quality without residual artifacts. } \vspace{-.3cm} \end{figure} \subsection{Automated Stopping and Ablation Study}\label{sec:ablation_study} The stopping criterion for the proposed ZS-SSDU was investigated on slices from the knee dataset. The validation set $\Gamma$ was selected from the acquired measurements $\Omega$ using a uniformly random selection with $|\Gamma|/|\Omega|=0.2$. The remaining acquired measurements $\Omega \backslash \Gamma$ were retrospectively split into disjoint 2-tuples multiple times based on uniformly random selection with the ratio $\rho= |\Lambda_k|/|\Omega \backslash \Gamma| = 0.4$ $\forall k \in \{1, \dots, K\}$ \cite{yaman_SSDU_MRM}. \vspace{-0.14cm} Figure \ref{fig:loss_curves}a shows representative scan-specific training and validation loss curves of ZS-SSDU at R = 4 for $K \in \{1,10,25,50\}$. As expected, the training loss decreases with increasing epochs for all $K$. The validation loss for $K = 1$ decreases without showing a clear breaking point for stopping. For $K > 1$, the validation loss forms an L-curve, and the breaking point of the L-curve is used as the stopping criterion. Figure \ref{fig:loss_curves}b shows reconstructions corresponding to the $K$ values using the proposed stopping criterion. For $K > 1$, no visible residual aliasing artifacts are observed.
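The breaking point of the validation L-curve can be located automatically; below is a minimal sketch using a maximum-distance-to-chord heuristic (the text does not specify the exact knee detector, so this is one plausible choice, not necessarily the implementation used here):

```python
import numpy as np

def l_curve_breaking_point(val_loss):
    """Return the epoch index of the knee of a decreasing validation-loss
    curve, taken as the point farthest from the chord joining the first and
    last points (both axes normalized to [0, 1] for scale invariance)."""
    y = np.asarray(val_loss, dtype=float)
    x = np.linspace(0.0, 1.0, len(y))
    y = (y - y.min()) / (y.max() - y.min())
    p0, p1 = np.array([x[0], y[0]]), np.array([x[-1], y[-1]])
    d = p1 - p0
    # perpendicular distance of every point to the chord p0 -> p1
    dist = np.abs(d[0] * (p0[1] - y) - d[1] * (p0[0] - x)) / np.linalg.norm(d)
    return int(np.argmax(dist))

# synthetic L-curve: a steep drop for 10 epochs, then a slow plateau
loss = [1.0 - 0.09 * i for i in range(10)] + [0.1 - 0.001 * i for i in range(20)]
knee = l_curve_breaking_point(loss)  # lands at the transition between regimes
```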
For $K = 1$, without a clear breaking point for the stopping criterion, reconstructions show lower visual and quantitative quality. $K = 10$ is used for the rest of the study, while noting that $K = 25$ and $50$ also show similar performance. \begin{figure}[!b] \begin{center} \includegraphics[width=1\linewidth]{Loss_Curves_TL.png} \end{center} \vspace{-.3cm} \caption {\label{fig:loss_curves_tl} a) Loss curves for ZS-SSDU with/without TL for $K = 10$ on a representative Cor-PD knee MRI slice. ZS-SSDU with TL converges faster compared to ZS-SSDU (red arrows). b) Reconstruction results corresponding to the loss curves. Both ZS-SSDU and ZS-SSDU-TL remove residual artifacts.} \vspace{-.3cm} \end{figure} \begin{figure*}[t] \begin{center} \includegraphics[width=1\textwidth]{Figure_Artifact_Removal.png} \end{center} \vspace{-.3cm} \caption {\label{fig:Artifact_Removal} Reconstruction results on a representative test slice from a) Cor-PD knee MRI and b) Ax-FLAIR brain MRI at R = 4 with uniform undersampling. CG-SENSE, DIP-Recon, and DIP-Recon-TL suffer from noise amplification and residual artifacts (red arrows), especially in knee MRI due to the unfavorable coil geometry. Scan-specific ZS-SSDU and ZS-SSDU-TL achieve artifact-free and improved reconstruction quality, similar to the database-trained supervised PG-DLR.
} \vspace{-.3cm} \end{figure*} \begin{table*}[b] \begin{adjustbox}{width=0.85\textwidth,center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & Metrics & CG-SENSE & Supervised PG-DLR & DIP-Recon & DIP-Recon-TL & ZS-SSDU & ZS-SSDU-TL \\ \hline \multirow{2}{*}{Cor-PD} & SSIM & 0.862 & \textbf{0.952} & 0.793 & 0.819 & 0.948 & 0.951 \\ \cline{2-8} & PSNR & 34.521 & 39.966 & 32.668 & 33.583 & 39.550 & \textbf{40.102} \\ \hline \multirow{2}{*}{Ax-FLAIR} & SSIM & 0.836 & 0.934 & 0.799 & 0.818 & 0.935 & \textbf{0.937} \\ \cline{2-8} & PSNR & 31.969 & \textbf{37.375} & 30.637 & 31.249 & 36.861 & 37.250 \\ \hline \end{tabular} \end{adjustbox} \caption{Average PSNR and SSIM values on 30 test slices. } \label{tbl:Average_PSNR_SSIM} \end{table*} Figure \ref{fig:loss_curves_tl}a and b show loss curves and reconstruction results on a Cor-PD slice with and without transfer learning. ZS-SSDU-TL, which uses pre-trained supervised PG-DLR parameters as its initial parameters, converges faster than ZS-SSDU, substantially reducing the total training time. Both ZS-SSDU and ZS-SSDU-TL remove residual artifacts, while the latter shows visually and quantitatively improved reconstruction performance. \subsection{Reconstruction Results} In the first set of experiments, we compare all the methods when the testing and training data belong to the same knee or brain MRI contrast weighting with the same acceleration rate and undersampling mask. These experiments aim to show the efficacy of the proposed approach in performing scan-specific MRI reconstruction, while removing residual aliasing artifacts. We also note that this is the most favorable setup for database-trained supervised PG-DLR. In the subsequent experiments, we focus on the reported generalization and robustness issues with database-trained PG-DLR methods \cite{Knoll_TL,KnollfastMRIChallenge,BandingArtifacts,fastmri_2ndChallenge}. We investigate banding artifacts, as well as in-domain and cross-domain transfer cases.
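For reference, a minimal sketch of the PSNR metric reported in Table \ref{tbl:Average_PSNR_SSIM} (taking the peak as the maximum magnitude of the reference image is our convention for this sketch; implementations differ in the choice of peak):

```python
import numpy as np

def psnr(reference, reconstruction):
    """Peak signal-to-noise ratio in dB, with the peak taken as the
    maximum magnitude of the reference image (one common convention)."""
    mse = np.mean(np.abs(reference - reconstruction) ** 2)
    peak = np.max(np.abs(reference))
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.uniform(size=(64, 64))
noisy = ref + 0.01 * rng.standard_normal((64, 64))
# a cleaner reconstruction scores a higher PSNR
assert psnr(ref, noisy) > psnr(ref, ref + 0.1 * rng.standard_normal((64, 64)))
```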
For these experiments, we concentrate on ZS-SSDU-TL, since ZS-SSDU has no prior domain information, and is inherently not susceptible to such generalizability issues. \vspace{0.2 cm} \noindent\textbf{Comparison of Reconstruction Methods: } In these experiments, supervised PG-DLR was trained and tested using uniform undersampling at R = 4, representing a perfect match for training and testing conditions. Figure \ref{fig:Artifact_Removal}a and b show reconstruction results for Cor-PD knee and Ax-FLAIR brain MRI datasets in this setting. CG-SENSE reconstruction suffers from significant residual artifacts and noise amplification in Cor-PD knee and Ax-FLAIR brain MRIs, respectively. Similarly, both DIP-Recon and DIP-Recon-TL suffer from residual artifacts and noise amplification. Supervised PG-DLR achieves artifact-free reconstruction. Both ZS-SSDU and ZS-SSDU-TL also perform artifact-free reconstruction with similar image quality. Table \ref{tbl:Average_PSNR_SSIM} shows the average SSIM and PSNR values on 30 test slices. Similar observations apply when random undersampling is employed (Supplementary Figure S2). While supervised PG-DLR generally works well when the training and test data characteristics are matched, it may still suffer from residual artifacts in some cases, which are successfully suppressed by ZS-SSDU methods (Supplementary Figure S3). \vspace{0.2 cm} \noindent\textbf{Banding Artifacts:} Banding artifacts appear in the form of streaking horizontal lines and occur due to high acceleration rates and anisotropic sampling \cite{BandingArtifacts}. These hinder radiological evaluation and are regarded as a barrier to the translation of DL reconstruction methods into clinical practice \cite{BandingArtifacts}. This set of experiments explored training and testing on Cor-PDFS data, where database-trained PG-DLR reconstruction has been reported to show such artifacts \cite{BandingArtifacts, fastmri_2ndChallenge}.
Figure \ref{fig:Banding Artifacts} shows reconstructions for a Cor-PDFS test slice. While DIP-Recon-TL suffers from clearly visible noise amplification, supervised PG-DLR suffers from banding artifacts (yellow arrows). ZS-SSDU-TL significantly alleviates these banding artifacts in the reconstruction. While supervised PG-DLR achieves slightly better SSIM and PSNR (Supplementary Table S1), we note that banding artifacts do not necessarily correlate with such metrics, and are usually picked up in expert readings \cite{BandingArtifacts, KnollfastMRIChallenge}. \vspace{0.2 cm} \noindent\textbf{In-Domain Transfer:} In these experiments, we compared the in-domain generalizability of database-trained PG-DLR and scan-specific PG-DLR. For in-domain transfer, training and test datasets consist of the same type of data, but may differ from each other in terms of acceleration rate and undersampling pattern (Figure \ref{fig:Robustness}). In Figure \ref{fig:CoronalPD_Robustness}a, supervised PG-DLR was trained with random undersampling and tested on uniform undersampling. Supervised PG-DLR fails to generalize and suffers from residual aliasing artifacts (red arrows), consistent with previous reports \cite{Knoll_TL,fastmri_2ndChallenge}. Similarly, DIP-Recon-TL suffers from artifacts and noise amplification. The proposed ZS-SSDU-TL achieves artifact-free and improved reconstruction quality. In Figure \ref{fig:CoronalPD_Robustness}b, supervised PG-DLR was trained with uniform undersampling at R = 4 and tested on uniform undersampling at R = 6. While both supervised PG-DLR and DIP-Recon-TL suffer from aliasing artifacts, ZS-SSDU-TL successfully removes these artifacts. Average PSNR and SSIM values align with the observations (Supplementary Table S1). \vspace{0.2 cm} \noindent\textbf{Cross-Domain Transfer:} In the last set of experiments, we investigated the cross-domain generalizability of database-trained PG-DLR compared to scan-specific PG-DLR.
For cross-domain transfer, training and test datasets have different data characteristics and generally differ in terms of contrast, SNR, and anatomy (Figure \ref{fig:Robustness}). In Figure \ref{fig:DifferentContrast}, supervised PG-DLR was trained on Cor-PDFS and Ax-T2 at R = 4 with uniform undersampling, but tested on Cor-PD and Ax-FLAIR at R = 4 with uniform undersampling, respectively. In both cases, supervised PG-DLR fails to generalize and has residual artifacts (red arrows). Similarly, DIP-Recon-TL suffers from artifacts and noise. ZS-SSDU-TL achieves artifact-free and improved reconstruction. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{BandingArtifacts.png} \end{center} \vspace{-.3cm} \caption {\label{fig:Banding Artifacts} Supervised PG-DLR suffers from banding artifacts (yellow arrows), while ZS-SSDU-TL significantly alleviates these artifacts. DIP-Recon-TL suffers from clear noise amplification.} \vspace{-.3cm} \end{figure} \begin{figure}[!b] \begin{center} \includegraphics[width=1\linewidth]{InDomainTransfer.png} \end{center} \vspace{-.3cm} \caption {\label{fig:CoronalPD_Robustness} Supervised PG-DLR was trained with a) random mask and tested on uniform mask, both at R = 4; b) uniform mask at R = 4 and tested on uniform mask at R = 6. In both cases, supervised PG-DLR and DIP-Recon-TL suffer from visible artifacts (red arrows). ZS-SSDU-TL achieves artifact-free reconstruction.} \vspace{-.3cm} \end{figure} Figure \ref{fig:AnatomyChange} shows the performance when the testing anatomy differs from the training anatomy. Supervised PG-DLR was trained on Cor-PD and Ax-FLAIR at R = 4 with uniform undersampling, and tested on Ax-FLAIR and Cor-PD at R = 4 with uniform undersampling, respectively. In both cases, supervised PG-DLR fails to generalize across different anatomies with visible residual artifacts. DIP-Recon-TL also suffers from artifacts and noise amplification in Cor-PD and Ax-FLAIR, respectively.
ZS-SSDU-TL successfully removes noise and residual artifacts. For both cross-domain transfer experiments, average PSNR and SSIM values match our observations (Supplementary Table S1). \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{CrossDomainTransfer_Contrast.png} \end{center} \vspace{-.3cm} \caption {\label{fig:DifferentContrast} Using pre-trained a) Cor-PDFS and b) Ax-FLAIR models for Cor-PD and Ax-T2 reconstructions, respectively. Supervised PG-DLR fails to generalize when contrast and SNR change, with residual artifacts (red arrows). DIP-Recon-TL also shows artifacts. ZS-SSDU-TL successfully removes noise and artifacts.} \vspace{-.3cm} \end{figure} \begin{figure}[b] \begin{center} \includegraphics[width=1\linewidth]{AnatomyChange.png} \end{center} \vspace{-.3cm} \caption {\label{fig:AnatomyChange} Using pre-trained a) Ax-FLAIR and b) Cor-PD models for Cor-PD and Ax-FLAIR reconstructions, respectively. When used across different anatomies, supervised PG-DLR exhibits artifacts (red arrows). While DIP-Recon-TL also has artifacts, ZS-SSDU-TL achieves artifact-free reconstruction in both cases.} \vspace{-.3cm} \end{figure} \section{Conclusions} We proposed a self-supervised zero-shot deep learning method, ZS-SSDU, for scan-specific accelerated MRI reconstruction from a single undersampled dataset. The main ideas in ZS-SSDU were to divide the acquired measurement data into three types of disjoint sets, and to use these, respectively, in the PG-DLR network, for defining the training loss, and for an automated early stopping criterion to avoid overfitting.
In particular, we showed that with our training methodology and automated stopping criterion, scan-specific zero-shot learning of PG-DLR can be achieved \textit{even when the number of tunable network parameters is higher than the number of available measurements.} Our training method does not depend on the particulars of the PG-DLR network architectures, and may be applied to different networks \cite{Schlemper, Hammernik,Hemant, LeslieYing_SPM,yaman_SSDU_MRM}. Finally, we also combined ZS-SSDU with transfer learning, in cases where a pre-trained model may be available, to further reduce reconstruction time and improve performance. Our results showed that ZS-SSDU methods perform similarly to database-trained supervised PG-DLR when training and testing data are matched, and that they significantly outperform database-trained methods in terms of artifact reduction and generalizability when the training and testing data differ in image characteristics and acquisition parameters. {\small \bibliographystyle{ieee_fullname}
\subsection*{Table of contents entry} Two-dimensional electron systems (2DESs) in functional oxides are promising for applications, but their fabrication and use, essentially limited to SrTiO$_3$-based heterostructures, are hampered by the need to grow complex oxide over-layers thicker than 2~nm using evolved techniques. This work shows that thermal deposition of a monolayer of an elementary reducing agent suffices to create 2DESs in numerous oxides. \textbf{Keyword}: 2DES in oxide surfaces and interfaces.\\ \begin{figure}[h] \begin{center} \includegraphics[clip, width=15cm]{Figure-TOC-Al-STO.pdf} \end{center} \end{figure} \pagebreak A critical challenge of modern materials science is to tailor novel states of matter suitable for future applications beyond semiconductor technology. Two-dimensional electron systems (2DESs) in multi-functional oxides~\cite{Ohtomo2004} can show metal-to-insulator transitions~\cite{Thiel2006}, superconductivity~\cite{Reyren2007,Caviglia2008}, magnetism~\cite{Brinkman2007,Li2011,Bert2011}, or spin-polarized states~\cite{Caviglia2010,BenShalom2010,Santander-Syro2014}, and are thus an active field of current research~\cite{Takagi2010,Mannhart2010,Hwang2012}. However, the fabrication of 2DESs in oxide heterostructures, like LaAlO$_3$/SrTiO$_3$, requires growing a layer of binary (\emph{e.g.} Al$_2$O$_3$) or ternary (\emph{e.g.} LaAlO$_3$) oxides with a ``critical thickness'' of at least 20~\AA~using evolved deposition techniques, such as pulsed laser deposition~\cite{Ohtomo2004,Nakagawa2006,Takagi2010,Mannhart2010, Hwang2012,Chen2011,Lee2012,Delahaye2012}. Thus, the reproducibility of their properties depends crucially on the growth parameters, while their fabrication is complex, expensive, and unsuitable for mass production. Moreover, the existence of a critical thickness of 20~\AA~for the onset of conductivity severely limits the control of the 2DES's properties, hampering tunneling spectroscopy studies or applications that rely on charge or spin injection~\cite{Lesne2014}.
Similarly, the realization of 2DESs at the surface of SrTiO$_3$ or other oxides requires the use of intense UV or X-ray synchrotron radiation, to desorb oxygen from the surface~\cite{Santander-Syro2011,Meevasana2011, Santander-Syro2012,King2012,Bareille2014,Roedel2014,Walker2014,Roedel2015}. Thus, these 2DESs can only be manipulated and studied in ultra-high vacuum (UHV), to preserve the vacancies from re-oxidation, and are obviously not suited for experiments or applications at ambient conditions. Here we demonstrate a new, wholly general, extremely simple and cost-effective method to generate 2DESs in functional oxides. We use thermal evaporation from a Knudsen cell to deposit, at room temperature in UHV, an atomically-thin layer of an elementary reducing agent, such as pure aluminum, on the oxide surface. Due to an efficient redox reaction, the Al film pumps oxygen from the substrate and oxidizes into insulating AlO$_x$, while a pristine, homogeneous 2DES forms in the first atomic planes of the underlying oxide. The principle of redox reactions induced by metals at the surface of oxides is well documented~\cite{Fu2005,Fu2007}. However, the simple idea of using a pure, elementary reducing agent to create a 2DES at a metal-oxide interface had not been explored so far. This overcomes the complexity of growing an oxide thin film, the requirement of a critical thickness of an insulating capping layer to create the 2DES in UHV, and the necessity, in the case of surfaces, of strong synchrotron radiation to desorb oxygen. As a novel application, we extend this method to generate a 2D metallic state at the surface of the room-temperature ferroelectric BaTiO$_3$. Such hitherto unobserved coexistence of ferroelectricity and 2D conductivity in the same material is promising for functional devices using ferroelectric resistive switching~\cite{Kim2013,Tra2013}.
This new, simpler, and cheaper fabrication route for 2DESs is thus adaptable to numerous oxides, given that oxygen vacancies are shallow donors in these materials, resulting in itinerant electrons. Moreover, this technique is scalable to industrial production, and ideally suited for applications that rely on charge or spin injection and for the realization of mesoscopic devices. \begin{figure*} \begin{center} \includegraphics[clip, width=16cm]{STO-Al-Capping-arXiv-Fig1.pdf} \end{center} \caption{\label{fig:STO001} \footnotesize (a,~b) ARPES energy-momentum intensity maps measured at the Al(2\AA)/SrTiO$_3$(001) interface prepared \emph{in-situ}, using respectively $47$~eV linear vertical (LV) and $90$~eV linear horizontal (LH) photons. (c,~d) Corresponding Fermi surface maps. Data at $h\nu = 47$~eV were measured around the $\Gamma_{102}$ point, while data at $h\nu = 90$~eV were measured around $\Gamma_{103}$. (e,~f) Fermi surface maps measured at the Al/SrTiO$_3$(111) and Al/TiO$_2$(001) anatase interfaces prepared \emph{in-situ}. Unless specified otherwise, all spectra in this and the remaining figures were measured at $T = 8$~K. } } \end{figure*} The existence of a 2DES at the interface between the oxidized Al layer and SrTiO$_3$(001), SrTiO$_3$(111) and anatase-TiO$_2$(001) is evidenced by our angle-resolved photoemission spectroscopy (ARPES) data presented in \textbf{Figure}~\ref{fig:STO001} --see the \textcolor{blue}{Supporting Information} for details about the surface preparation, Al deposition, and ARPES measurements. For simplicity, and to recall that we are simply depositing pure Al (\emph{not aluminum oxide}) on top of the oxide surfaces, throughout this paper we note the resulting oxidized Al capping layer simply as ``Al'', specifying in parentheses the evaporated thickness.
The energy-momentum and Fermi surface maps formed by the $t_{2g}$ orbitals, shown in Figure~\ref{fig:STO001}, agree with previous ARPES studies at the reduced surface of these materials~\cite{Santander-Syro2011,Meevasana2011,Plumb2014, Roedel2014,Walker2014,Roedel2015}, demonstrating that in both cases the same 2DESs are observed. Note that, instead of the local creation of oxygen vacancies using an intense UV beam, the evaporated Al reduces the whole surface homogeneously. As a consequence, the data quality, evidenced by the line widths, is also much better than in previous studies. Thus, as shown in Figure~\ref{fig:STO001}(a), a kink and change in intensity in the dispersion of the light bands at $E \approx -30$~meV, attributed to electron-phonon coupling~\cite{King2014}, can be very clearly distinguished. The Fermi-surface areas and, hence, the charge carrier densities of the 2DESs at the Al/SrTiO$_3$(111) and Al/TiO$_2$(001) interfaces are about $1.3$ and $2$ times larger than their counterparts at the surfaces reduced by photons, probably due to a higher and more homogeneous concentration of oxygen vacancies. \begin{figure} \begin{center} \includegraphics[clip, width=8.5cm]{STO-Al-Capping-arXiv-Fig2.pdf} \end{center} \caption{\label{fig:Al2p} \footnotesize (a) Angle-integrated spectra of the Al 2p peak of the Al/SrTiO$_3$(001), Al/TiO$_2$ anatase, and Al/BaTiO$_3$(001) interfaces measured at a photon energy of $h\nu=100$~eV. The curves in different colors correspond to different thicknesses of Al (red 2\AA, black 4\AA, blue 6\AA). The shape of the Al 2p peak indicates whether the Al layer is oxidized or metallic. The dashed black curve corresponds to a fully oxidized Al layer, obtained after annealing the sample with $4$~\AA~Al capping at 250$^{\circ}$C in UHV. (b) Momentum distribution curves (MDCs) at $E_F$, along the Fermi-surface cut schematized in the inset, measured at the Al/SrTiO$_3$(001) interface at $h\nu=47$~eV for different Al thicknesses.
Peaks in the MDCs correspond to the Fermi momenta, where the MDC cuts the Fermi surface. The decrease in intensity of the MDCs for increasing Al thickness is merely due to increasing damping of the photoemission signal. (c) MDCs integrated over $E_F \pm 5$~meV for increasing UV exposure times on the Al/SrTiO$_3$(001) interface and the bare STO surface measured under identical conditions. The similarity of the two MDCs at the interface is in strong contrast to the evolution under light irradiation of the MDCs at the bare surface. } } To understand the redox reaction at the Al/oxide interface, we probed the oxidation state of Al by measuring the Al-$2p$ core levels, whose binding energies are very different for metallic and oxidized Al. As shown in \textbf{Figure}~\ref{fig:Al2p}(a), the two contributions can be distinguished in the Al(6\AA)/STO spectrum (blue curve), with the metallic component around $72.5$~eV binding energy and the oxidized part around $75$~eV binding energy. In contrast, the metallic Al component decreases for a thinner 4\AA~film (black curve), and the deposition of only 2\AA~of pure Al results in a nearly fully oxidized film of Al (red curves). In other words, \emph{an ultra-thin layer of 2\AA~of pure Al is sufficient to pump the oxygen from the surface region of all the oxides studied in this work}. The spatial distribution of the oxygen vacancies close to the interface is discussed in the Supporting Information. Note that the oxidation of the metallic Al results in an increased layer thickness: as the mass density of Al is $2.7$~g/cm$^3$ and that of amorphous Al$_2$O$_3$ is about $4$~g/cm$^3$, the deposition of 2\AA~of Al yields an oxidized Al film of 2.5\AA. To determine whether the thickness of the Al layer has an influence on the electron density of the 2DES, we turn to the momentum distribution curves (MDCs) at the Fermi level as shown in Figure~\ref{fig:Al2p}(b).
As can be seen in Figure~\ref{fig:Al2p}(b), the Fermi momenta are essentially the same, within $0.01$~\AA$^{-1}$, for the 2\AA~(red MDC), 4\AA~(black) and 6\AA~(blue) thick Al films. As the 2D density of electrons depends solely on the Fermi momenta, it is clear that this electron density already saturates at an Al film thickness of 2\AA. Thus, our method overcomes the necessity of a ``critical thickness'' of capping layer to generate a 2DES in UHV. Previous studies on the bare surface of SrTiO$_3$ prepared \textit{in-situ} showed that synchrotron UV-irradiation was necessary to create the oxygen vacancies responsible for the 2DES~\cite{Meevasana2011,Plumb2014,Walker2014,Roedel2015}. This is again demonstrated in the lower panel of Figure~\ref{fig:Al2p}(c), which shows the evolution with time of the MDC at $E_{F}$ upon UV-irradiation on a bare SrTiO$_3$(001) surface. While the 2DES is absent at $t=0$~h (black MDC), its carrier density increases up to saturation upon UV irradiation (blue and red MDCs), as denoted by the increase of $k_F$ for increasing exposure times. Contrary to the bare surface, there is \emph{no measurable influence} of the UV irradiation on the electronic structure of the Al/SrTiO$_3$ system, neither on the charge carrier density nor on the line-shapes or spectral weight of the 2DES, as demonstrated in the top panel of Figure~\ref{fig:Al2p}(c): the MDCs at $E_F$ show a stable subband structure and a maximum electron density from the very beginning of the measurements. This indicates that the oxygen vacancy concentration and distribution, due to the redox reaction at the interface between Al and SrTiO$_3$, are already saturated and stable upon irradiation. As demonstrated in the \textcolor{blue}{Supporting Information}, the 2DES at the interface of oxidized Al/SrTiO$_3$ is also stable at room temperature, while the deposition of an Al film of 10\AA~or more on SrTiO$_3$ minimizes the re-oxidation of vacancies in air.
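The thickness increase upon oxidation quoted above (2~\AA~of Al yielding an oxidized film of about 2.5~\AA) can be checked from the mass densities; a back-of-the-envelope sketch, assuming mass conservation of the deposited Al and standard molar masses:

```python
# Oxidation of a t_Al-thick Al film into amorphous Al2O3 (2 Al -> 1 Al2O3);
# the Al mass is conserved, the oxygen being pumped from the substrate.
M_AL, M_O = 26.98, 16.00           # molar masses, g/mol
rho_al, rho_al2o3 = 2.7, 4.0       # bulk densities quoted in the text, g/cm^3

t_al = 2.0                         # deposited Al thickness, Angstrom
m_al = rho_al * t_al               # areal mass of Al (arbitrary area units)
m_al2o3 = m_al * (2 * M_AL + 3 * M_O) / (2 * M_AL)
t_al2o3 = m_al2o3 / rho_al2o3      # resulting oxide thickness, Angstrom
print(f"{t_al2o3:.2f}")            # prints 2.55, i.e. ~2.5 Angstrom as stated
```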
\begin{figure*} \begin{center} \includegraphics[clip, width=16cm]{STO-Al-Capping-arXiv-Fig3.pdf} \end{center} \caption{\label{fig:BTO001} \footnotesize (a)~AFM topography and corresponding PFM phase signal measured on a $30$~nm-thick BaTiO$_3/$Nb:SrTiO$_3$ film. No ferroelectric domains could be detected in the as-grown film, while such domains can be written, as shown in (b), with $+6$~V on the AFM tip in the outer square ($4 \times 4$~$\mu$m$^2$) and $-6$~V in the inner square ($2 \times 2$~$\mu$m$^2$). (c,~d) ARPES energy-momentum intensity maps at the Al(2\AA)/BaTiO$_3$ interface prepared \emph{in-situ}, using respectively $47$~eV LV and $80$~eV LH photons, the latter being close to $\Gamma_{103}$. (e)~Second energy-derivative (negative values) of the ARPES map in (d). Vertical dashed red lines in (c)-(e) are the Brillouin-zone edges. The red curve in (e) is a cosine fit to the heavy band. (f,~g)~Spectral weight integrated over $E_F \pm 30$~meV at the Al(2\AA)/BaTiO$_3$ interface using $80$~eV LV and $47$~eV LV photons, respectively. (h)~Capacitance-voltage curve on the Al(2\AA)/BaTiO$_3$ interface measured previously by ARPES, showing the butterfly shape characteristic of a ferroelectric hysteresis. A Pd circular pad and the Nb:STO substrate were used as top and bottom electrodes, respectively. Note that, due to the voltage drop through the thin alumina layer, the voltages $V_{c}^{+}$ and $V_{c}^{-}$ required to reverse the polarization are rather high. Thus, it was not possible to perform a polarization reversal in PFM mode. } } \end{figure*} We now show that the deposition of an ultra-thin Al film can also be used to create a 2DES at the surface of the room-temperature ferroelectric BaTiO$_3$ (BTO), thus constituting a new type of confined metallic state on a truly room-temperature functional oxide. 
Our BTO samples are (001)-oriented thin films (thickness $30$~nm) epitaxially grown on SrTiO$_3$(001) --see \textcolor{blue}{Supporting Information} for details about thin-film growth, piezo-response force microscopy (PFM), and capacitance-voltage (C-V) measurements. In contrast to the bulk crystals, which usually exhibit ferroelectric-domain stripes of period $\sim 50 - 200$~nm, even down to $100$~nm scale thicknesses~\cite{Schilling2006}, the $30$~nm-thick BTO films deposited on Nb:STO show a single-domain state, with the ferroelectric polarization aligned along the $[001]$ axis, due to the in-plane compressive strain induced by the epitaxial growth~\cite{Pertsev1998,Choi2004,Chen2013}. The absence of ferroelectric domains and the local reversibility of the polarization are demonstrated in \textbf{Figures}~\ref{fig:BTO001}(a,~b) by the simultaneous atomic force microscopy (AFM) and PFM images of a BaTiO$_3/$Nb:SrTiO$_3$ thin film. The energy-momentum ARPES intensity maps of Figures~\ref{fig:BTO001}(c-e) prove the formation of metallic itinerant states at the surface of the BTO(001) thin-film after deposition of 2~\AA~of Al. The resulting 2DES consists of a light ($d_{xy}$-like) and a heavy ($d_{xz/yz}$-like) electron pocket around $\Gamma$, best observed in Figures~\ref{fig:BTO001}(c,~d) with LV and LH polarizations, respectively. Such light-polarization-dependent selection rules are typical for $t_{2g}$-like states observed at the surface of other titanates, such as SrTiO$_3$ and anatase~\cite{Santander-Syro2011,Roedel2015}. In the case of BaTiO$_3$, the light band forms a strong peak of spectral weight whose intensity is cut off at $E_F$ --see Figure~\ref{fig:BTO001}(c). Although we cannot observe a dispersive feature within this peak of intensity, its binding energy indicates that the conduction band bottom is filled up with itinerant electrons. The heavy band, on the other hand, presents a clear dispersion --see Figures~\ref{fig:BTO001}(d,~e).
A tight-binding fit of this band, red curve in Figure~\ref{fig:BTO001}(e), yields a band bottom of $-115$~meV at $\Gamma$, a band top of $-35$~meV at the zone edge, and an effective mass near $\Gamma$ of approximately $12 m_e$. Figures~\ref{fig:BTO001}(f,~g) show that the spectral weight at $E_F$ is composed of a central disc formed by the light electron pocket, best seen in Figure~\ref{fig:BTO001}(g), and two orthogonal Fermi-surface strips spanning the entire Brillouin zone, formed by the heavy bands. The latter correspond to the elliptical Fermi sheets observed at the surface of SrTiO$_3$(001), as in Figure~\ref{fig:STO001}(d), but in the case of BaTiO$_3$(001) they extend beyond the zone boundary, thus forming open Fermi sheets. From Figure~\ref{fig:BTO001}(g), the distribution of spectral weight at $E_F$ for the circular Fermi surface spans a Fermi momentum $k_F \approx 0.15 \pm 0.02$~\AA$^{-1}$. The Fermi strips can be approximated as rectangles of long and short sides $k_{l} = 2\pi / a$ (with $a = 4$~\AA~the size of the square unit cell at the BTO surface) and $k_{s} = 0.15 \pm 0.02$~\AA$^{-1}$. From the total area $A_F$ enclosed by all the Fermi surfaces, the density of carriers of the 2DES at the BaTiO$_3$(001) surface is $n_{2D}^{\text{BTO(001)}} = A_F/(2\pi^2) \approx (2.8 \pm 0.4) \times 10^{14}$~cm$^{-2}$, which is comparable to the carrier densities at the SrTiO$_3$ or anatase-TiO$_2$ surfaces. The \textcolor{blue}{Supporting Information} presents additional data showing the Fermi momenta extracted from fits to the spectra, and the photon-energy dependence of the electronic structure. Finally, Figure~\ref{fig:BTO001}(h) shows a measurement of the capacitance-voltage curve on the \emph{same} Al(2\AA)/BaTiO$_3$ interface that was measured by ARPES.
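The carrier-density estimate above can be reproduced from the quoted Fermi-surface geometry; a sketch in which overlaps between the Fermi sheets are neglected (our simplification):

```python
import math

k_f = 0.15                  # A^-1, Fermi momentum of the circular pocket
a = 4.0                     # A, in-plane lattice constant at the BTO surface
k_l = 2 * math.pi / a       # long side of each Fermi strip (full zone)
k_s = 0.15                  # A^-1, short side of each Fermi strip

# total Fermi-surface area: one disc plus two orthogonal strips
# (overlaps between the sheets are neglected in this estimate)
A_F = math.pi * k_f**2 + 2 * k_l * k_s        # A^-2

n_2d = A_F / (2 * math.pi**2)                 # spin-degenerate Luttinger count
n_2d_cm2 = n_2d * 1e16                        # 1 A^-2 = 1e16 cm^-2
print(f"{n_2d_cm2:.2e}")  # ~2.7e14 cm^-2, consistent with (2.8 +/- 0.4)e14
```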
The ``butterfly'' shape, with a difference of about $0.5$~V between the two coercive voltages, demonstrates that the BTO film is still ferroelectric after deposition of the Al layer and ARPES measurements, thus keeping its functional behavior. A 2DES at the surface of BaTiO$_3$ is in essence an intrinsic metal/ferroelectric interface. Polarization switching of the bulk material, for instance by strain, could allow a direct gating of the 2DES, while a sufficiently thick capping alumina layer protects it against re-oxidation at ambient conditions, and can even be used to draw metallic nano-circuits of intrinsic ferroelectric tunnel junctions. Thus, this system provides a realistic platform for the realization of non-volatile memories using ferroelectric resistive switching~\cite{Kim2013,Tra2013} or for ultra-sensitive strain or pressure detectors. In conclusion, the method we present here for realizing 2DESs in oxides has the advantages of simplicity and versatility --for instance, it can be readily implemented in many UHV setups, allowing future investigations of 2DESs in complex oxides using non-synchrotron-based spectroscopic techniques, like tunneling or Raman spectroscopies. This method is also pertinent for the study of transport phenomena in mesoscopic oxide devices. Indeed, STO has emerged as an exciting nano-electronics device platform~\cite{Goswami2015}, owing to the existence of superconductivity, spin-orbit interaction and magnetism, which are controllable with a gate voltage. Our work opens up new possibilities to explore these questions by making a broad range of transition-metal oxide 2DESs suitable for transport, including the surfaces which are candidates for hosting topological electronic states~\cite{Bareille2014,Roedel2014}. Furthermore, the stability of the 2DES in ambient conditions can be achieved through a sufficiently thick layer of oxidized Al. 
This opens the possibility to integrate TMO 2DESs into functional devices without the need of evolved deposition techniques. \subsection*{Experimental Section} The ARPES measurements were conducted at the CASSIOPEE beamline of Synchrotron SOLEIL (France). We used linearly polarized photons in the energy range $30-110$~eV and a hemispherical electron analyzer with vertical slits. The angular and energy resolutions were $0.25^{\circ}$ and 15~meV. The mean diameter of the incident photon beam was smaller than 100~$\upmu$m. The samples were cooled down to $T=8$~K before measuring. Unless specified otherwise, all data were taken at that temperature. The results have been reproduced on more than 10 samples for SrTiO$_3$(001), on at least two samples for other surface orientations and for TiO$_2$ anatase, and on 3 thin films of BaTiO$_3$/SrTiO$_3$(001). Throughout this paper, reciprocal-space directions $\langle hkl \rangle$ and planes $(hkl)$ are defined in the conventional cell of each material (cubic for SrTiO$_3$, simple tetragonal for anatase and BaTiO$_3$). The indices $h$, $k$, and $l$ of $\Gamma_{hkl}$ correspond to the reciprocal lattice vectors of the cubic unit cell (SrTiO$_3$ and BaTiO$_3$) or body-centered tetragonal unit cell (anatase). Additional details on the sample and surface preparation, the Al deposition conditions, and the piezo-response force microscopy and capacitance-voltage measurements can be found in the \textcolor{blue}{Supporting Information}. \subsection*{Acknowledgements} We thank M.~Gabay and M.~J.~Rozenberg for illuminating discussions, and V.~Pillard for help with the sample preparation. This work is supported by public grants from the French National Research Agency (ANR), project LACUNES No ANR-13-BS04-0006-01, and the ``Laboratoire d'Excellence Physique Atomes Lumi\`ere Mati\`ere'' (LabEx PALM project ELECTROX) overseen by the ANR as part of the ``Investissements d'Avenir'' program (reference: ANR-10-LABX-0039). T.~C.~R. 
acknowledges funding from the RTRA--Triangle de la Physique (project PEGASOS). A.F.S.-S. acknowledges support from the Institut Universitaire de France. \section*{Supporting Information} \subsection*{Sample and surface preparation and Al deposition} The non-doped, polished crystals of SrTiO$_3$ were supplied by CrysTec~GmbH, the anatase crystals by SurfaceNet GmbH. To prepare the SrTiO$_3$ surfaces, the samples were ultrasonically agitated in deionized water, subsequently etched in buffered HF and annealed at 950$^{\circ}$C for three hours in oxygen flow. This results in Ti-terminated (001) or (111) surfaces with terraces of width $50$~to~$200$~nm separated by steps, as verified by atomic-force microscopy (AFM) in contact mode (data not shown here). The $300$~\AA-thick BTO films were prepared by Laser-MBE using a sintered BTO target. A KrF excimer laser was used for the deposition. The substrate, which was etched prior to the deposition to obtain a TiO$_2$ terminating layer, was glued to the heater with silver paste. The growth of the film, monitored by reflection high-energy electron diffraction (RHEED), was carried out at 650~$^{\circ}$C in $5\times10^{-4}$~mbar oxygen pressure with 0.1\% ozone. RHEED oscillations were used to measure the deposited thickness. At the end of the deposition, the films were cooled down in $6\times10^{-3}$~mbar oxygen pressure, always with 0.1\% ozone. To clean the surfaces in UHV, the SrTiO$_3$ samples were annealed at a temperature $T=550-650$~$^\circ$C for $t=10-90$~min at pressures $p < 2\times10^{-8}$~mbar. The anatase crystals were prepared by Ar$^{+}$ sputtering ($U=1$~kV, $t=10$~min) and annealing cycles ($T=550-600$~$^\circ$C, $t=30$~min), similar to the procedure described by Setvin~\textit{et al.}~\cite{Setvin2014a}. The surface of the BaTiO$_3$ thin films was cleaned by annealing the samples at temperatures $T = 500-550$~$^{\circ}$C for $5 - 30$~min. 
One of the samples was Ar$^+$ sputtered ($U = 500$~V, $t = 10$~min) prior to the UHV annealing, without noticeable changes in the ARPES data. The surface quality and the possible existence of surface reconstructions were probed by low-energy electron diffraction (data not shown here). The SrTiO$_3$(001) and BaTiO$_3$(001) surfaces are unreconstructed, whereas the (111) surface shows a $3 \times 3$ reconstruction, and the anatase (001) surface shows a two-domain $1\times4$ reconstruction. To create a local, high concentration of oxygen vacancies in the surface region of SrTiO$_3$, TiO$_2$ anatase, or BaTiO$_3$, amorphous Al-films with thicknesses between $d=2-10$~\AA~were deposited on the prepared surface of the crystals. Aluminium was evaporated from a Knudsen cell using an alumina crucible. The growth rate was approximately $0.3$~\AA/min, corresponding to a temperature of about 925$^{\circ}$C of the crucible. The Al-flux was calibrated prior to the evaporation using a quartz microbalance. The cleanliness of the deposit was checked by evaporating a thin Al-film on a Cu substrate, where no oxidation could be detected by Auger spectroscopy. The temperature of the crystals ranged between $T=25-100$~$^\circ$C during the Al deposition. \subsection*{Piezo-response force microscopy and capacitance-voltage measurements} For the PFM measurements, a probing signal of $2$~V$_{\textrm{pp}}$ at a frequency of 25~kHz was applied to a Co/Cr coated cantilever with $\sim 5$~N/m force constant. A lower probing signal of $0.5$~V$_{\textrm{pp}}$ was also used, with no change in the observed phase images such as the ones shown in Figure~3(a) of the main text. In order to assess the ferroelectric character of the BTO film measured by ARPES, $300$~$\mu$m-diameter Pd electrodes ($200$~nm thickness) were deposited through a shadow mask on top of the Al oxide (AlO$_x$) layer. 
The C-V measurements were performed using an LCR meter with a $30$~mV$_{\textrm{pp}}$ AC amplitude at $10$~kHz, while a source-meter allowed for the DC biasing with $0.1$~V steps of $500$~ms duration. The C-V curve, Figure~3(h) of the main text, shows the characteristic butterfly shape of a ferroelectric material. Note that due to the ultra-thin AlO$_x$ layer, the required voltages to reverse the polarization ($V_c$) are rather high and shifted towards the positive voltage side, indicating an internal upward electric field in the BTO layer. For these reasons it was not possible to achieve a polarization reversal in PFM configuration on this sample. \subsection*{Homogeneity of the 2DES} \begin{figure} \begin{center} \includegraphics[clip, width=5cm]{STO001_homo.pdf} \end{center} \caption{\label{fig:STO001_homo} \footnotesize (a) Momentum distribution curves integrated over $E_F \pm 5$~meV for the Al/SrTiO$_3$(001) interface prepared \emph{in-situ}, measured at different positions (see inset) separated by at least 2~mm from each other --or over 100 times the size of the UV spot. Each spectrum was obtained within minutes on a part of the sample that had not been illuminated before. The Fermi momenta, given by the MDC peak positions, are independent of the position in the sample, demonstrating the homogeneity of the 2DES at the Al/SrTiO$_3$(001) interface. } } \end{figure} Previous studies of the 2DES at the surface of SrTiO$_3$ were conducted on fractured~\cite{Santander-Syro2011, Meevasana2011} or \emph{in-situ} prepared surfaces~\cite{Plumb2014, Roedel2014}. The fracturing process results in locally ordered surfaces~\cite{Guisinger2009}, while the \emph{in-situ} preparation results in an ordered surface. The fracturing process or intense UV light irradiation at low temperature (spot size $\approx 100\times100~\upmu$m) creates a local, high concentration of oxygen vacancies in the surface region of SrTiO$_3$ whose electrons (partly) dope the 2DES. 
In contrast to such spatial inhomogeneity of the 2DES in fractured or UV irradiated bare SrTiO$_3$, the fact that we cannot observe any changes induced by the synchrotron light at the Al/SrTiO$_3$ interface (Figure~2(c) of the main text) suggests that the underlying SrTiO$_3$ surface is reduced homogeneously. This is explicitly shown in \textbf{Figure~}\ref{fig:STO001_homo}, which presents the momentum distribution curves at $E_F$ measured at five different positions on the Al(2\AA)/SrTiO$_3$ interface. We observe that the Fermi momenta, given by the MDC peaks, are independent of the measurement point, demonstrating the homogeneity of the interfacial 2DES over distances of several millimeters. \subsection*{Temperature dependence of the electronic structure of the 2DES at the Al/SrTiO$_3$(001) interface} \begin{figure*} \begin{center} \includegraphics[clip, width=16cm]{STO001_temp.pdf} \end{center} \caption{\label{fig:STO001_temp} \footnotesize (a) Energy-momentum map measured at the Al(2~\AA)/SrTiO$_3$(001) interface prepared \emph{in-situ} at different temperatures $T=21$~K, 172~K, and 303~K. Data were collected around the $\Gamma_{102}$ point using LV photons at $h\nu = 47$~eV. (b) Energy and momentum distribution curves along $k_{<010>}=0$ and the Fermi level of the $E-k$ maps in (a). } } \end{figure*} \textbf{Figure~}\ref{fig:STO001_temp}(a) shows the energy-momentum maps at the Al/SrTiO$_3$ interface measured at 21~K, 172~K, and 303~K, respectively, under the same conditions as the $E-k$ map in Figure~1(a) of the main text. The dispersions of the two light bands of $d_{xy}$-character are still clearly visible at $T=172$~K although the line widths are increased due to thermal broadening. At $T=303$~K, the line widths are too large to identify the individual bands, although the left branch of the outer band is still visible close to the Fermi level $E_F$. 
Nevertheless, the spectral weight at the Fermi level demonstrates the existence and stability of the 2DES at room temperature. To compare the energy-momentum maps more directly, Figure~\ref{fig:STO001_temp}(b) shows the energy distribution curves at $k_{<010>}=0$ and the momentum distribution curves at the Fermi level $E_F$. As can be seen from the value of the Fermi momenta $k_F$ of the peaks in the MDCs as well as the binding energies of the peaks in the energy distribution curves (EDCs), the charge carrier density decreases slightly when temperature increases. We checked (not shown) that these results are reproducible upon thermal cycling. \subsection*{Effect of exposure to ambient conditions} \begin{figure*} \begin{center} \includegraphics[clip, width=16cm]{STO001_high_energy_V2.pdf} \end{center} \caption{\label{fig:high_hv} \footnotesize (a) Fermi surface map measured in three neighboring Brillouin zones at $h\nu=459.5$~eV on the Al(10\AA)/SrTiO$_3$(001) interface prepared \emph{in-situ} and subsequently exposed to air for $t=20$~min. The red dashed circles and ellipses illustrate the Fermi surface shown in Figures~1(c,~d) of the main text. The thick red lines correspond to the borders of the bulk Brillouin zones. % (b) Angle-integrated spectrum of the Al 2p peak of the Al(10\AA)/SrTiO$_3$(001) interface measured at $h\nu=458.4$~eV. Due to the exposure to air, the Al film is completely oxidized --compare to Figure~2 of the main text. % (c) Energy distribution curves integrated around $\Gamma$ measured at the Al(10\AA)/SrTiO$_3$(001) and Al(2\AA)/SrTiO$_3$(001) interfaces. To facilitate the comparison, a momentum-independent background, due to spectral weight from the in-gap state, was removed from the EDC of the Al(2\AA)/SrTiO$_3$(001) data. } } \end{figure*} In principle, the thicker the oxidized Al film the better the passivation of the surface against re-oxidation in ambient air pressure. 
For amorphous Al$_2$O$_3$ films grown by atomic layer deposition on the surface of SrTiO$_3$(001), a film thickness of $\geq 1.2$~nm is sufficient to create a stable 2DES at the Al$_2$O$_3$/SrTiO$_3$ interface~\cite{Lee2012}. Note that this value is identical to the thickness of the natural oxidized layer at the surface of aluminum ($1.24$~nm)~\cite{Cai2011}. Hence, this thickness is sufficient to prevent oxygen diffusion through a homogeneous Al$_2$O$_3$ capping layer.\\ In our case, the probing depth of the high-resolution ARPES measurements, such as the ones shown in the main text, is limited by the mean free path of electrons which is $\sim 5$~\AA~at kinetic energies of $E_{kin}=20-100$~eV. To increase the probing depth and probe the 2DES at buried interfaces, \emph{e.g.} LaAlO$_3$/SrTiO$_3$, soft x-ray angle-resolved resonant photoelectron spectroscopy was applied previously~\cite{Berner2013}. Thus, to test the stability of the 2DES, we exposed an Al(10~\AA)/SrTiO$_3$ sample to ambient conditions for about 30 minutes, and conducted soft x-ray resonant ARPES ($h\nu=459.5$~eV) at low temperatures. Note that $1$~nm of Al corresponds to about $1.25$~nm of Al$_2$O$_3$ which is close to the ``critical'' passivation thickness mentioned above. As can be seen from the Fermi surface in \textbf{Figure~}\ref{fig:high_hv}(a), the 2DES at the Al(10~\AA)/SrTiO$_3$ interface still exists after the exposure to air. For comparison, the red dashed circles and ellipses represent the Fermi surfaces measured at the ultra-thin Al(2~\AA)/SrTiO$_3$ interface --see Figures~1(c,~d) of the main text. Note that the 10~\AA~Al layer was completely oxidized after exposure to air, as demonstrated by the peak shape of the Al-2p peak in Figure~\ref{fig:high_hv}(b). 
The data quality in these soft-X-ray ARPES measurements is lower compared to UV-measurements as the surface is not pristine anymore after exposure to air, the 2DES is buried below a thick oxidized Al film, the photoemission cross section of the valence states is much smaller at higher photon energies, and the total energy resolution at $h\nu = 459.5$~eV is about 80~meV, compared to 15~meV at $h\nu=47$~eV. However, it is clear that the Fermi surface, and hence the charge-carrier density, are comparable between the Al(10\AA)/SrTiO$_3$ sample exposed to air and the pristine Al(2\AA)/SrTiO$_3$. Figure~\ref{fig:high_hv}(c) compares the momentum-integrated band structure around $\Gamma$ for the two different interfaces, confirming that their electronic structures are comparable. Thus, these results demonstrate that the oxidized Al layer effectively passivates the 2DES at the SrTiO$_3$ surface. In order to adapt the method of creating 2DESs at the Al/oxide interface to transport measurements, and to be certain that the oxidized Al layer completely blocks oxygen diffusion, a capping layer thickness above the ``critical'' passivation value of $\sim 1.2$ nm is necessary. At the same time, the capping layer suitable for transport should be insulating without contributions of metallic Al. Several possibilities should be explored in future studies: optimization of growth parameters (\emph{e.g.} applying an oxygen partial pressure~\cite{Cai2011} during deposition after the first 2~\AA~of Al and/or a slight increase of the temperature to oxidize Al thicknesses greater than 2~\AA) or deposition of another type of insulating capping layer after the deposition of 2~\AA~of Al. 
\subsection*{Subband dispersions at the Al(2\AA)/anatase interface} \begin{figure} \begin{center} \includegraphics[clip, width=8cm]{Al_TiO2_001_EK.pdf} \end{center} \caption{\label{fig:Al_TiO2_001_EK} \footnotesize Energy-momentum intensity map measured at the Al(2~\AA)/a-TiO$_2$(001) interface prepared \emph{in-situ}. The data were recorded at $T=8$~K using LV photons at $h\nu = 47$~eV around the $\Gamma_{102}$ point. } } \end{figure} \textbf{Figure~}\ref{fig:Al_TiO2_001_EK} presents the ARPES energy-momentum intensity map at the Al(2\AA)/a-TiO$_2$(001) interface. The two subbands form the circular Fermi surfaces shown in Figure~1(f) of the main text. As mentioned there, these Fermi surfaces are almost twice as large as their counterparts at the surface of anatase reduced by photons~\cite{Roedel2015}. In agreement with this observation, the two subbands at the Al(2\AA)/a-TiO$_2$(001) interface disperse down to larger binding energies: approximately $-100$~meV and $-230$~meV for the upper and lower subbands, respectively, while at the bare, reduced anatase surface they disperse only down to about $-60$~meV and $-170$~meV~\cite{Roedel2015}. \subsection*{Oxygen vacancy distribution at the Al(2\AA)/anatase interface} \begin{figure*} \begin{center} \includegraphics[clip, width=16cm]{TiO2-XPS-Ovacs.pdf} \end{center} \caption{\label{fig:TiO2_XPS_Ovacs} \footnotesize{ (a) XPS at the Ti-$2p$ core level of Al(2\AA)/a-TiO$_2$~$(001)$ as a function of emission angle using $h\nu = 1150$~eV photons. At this photon energy, the universal inelastic mean free path of electrons emitted from the Ti-$2p$ peak is $\lambda[E_{kin}(\textrm{Ti-}2p)] = 1.4$~nm~\cite{Seah1979}. The red markers and corresponding error bars indicate the peak positions and uncertainties for the different Ti oxidation states ($4+$, $3+$, and $2+$). (b) XPS at the Ti-$3p$ core level of anatase $(001)$ at normal emission as a function of photon energy. 
The inelastic mean free path of electrons emitted from the Ti-$3p$ peak at different photon energies is specified in the inset table. The XPS intensity in panels (a) and (b) is normalized to the Ti$^{4+}$ peak. (c) Ratio of intensities, from panel (a), between the Ti$^{2+}$~$+$~Ti$^{3+}$ shoulder and the Ti$^{4+}$ peak as a function of the electron emission angle. The dashed curve is the best fit to the data assuming a step-like distribution of vacancies over $9 \pm 3$~\AA~below the surface, as schematized in panel~(d). (d) Model used for the distribution of the different Ti oxidation states due to oxygen vacancies beneath the AlO$_x$/anatase interface: blue line for Ti$^{2+}$~$+$~Ti$^{3+}$, red line for Ti$^{4+}$. The double arrow indicates the error bar in the determination of the vacancy depth distribution. } } \end{figure*} The redox reaction creates oxygen vacancies at the Al(2\AA)/oxide interface. The spatial distribution of these electron donors results in the creation of a potential well confining the electrons and forming the 2DES. To determine the distribution of vacancies, we measured the Ti-$2p$ and Ti-$3p$ core levels of anatase $(001)$ using X-ray photoemission at $h\nu = 1150$~eV as a function of the electron emission angle, and at normal emission as a function of the X-ray photon energy, and fitted the peaks using either Voigt or Lorentzian line shapes together with a Shirley background. As can be seen in \textbf{Figures~S}\ref{fig:TiO2_XPS_Ovacs}(a,~b), the core levels are composed of several peaks (red markers) corresponding to Ti ions of different oxidation states ($4+$, $3+$, and $2+$) due to the presence of oxygen vacancies. We observe that the fraction of Ti$^{4+}$, corresponding to stoichiometric, insulating TiO$_2$, increases for larger electron escape depths, as evidenced by the angle and photon energy dependencies in Figures~S\ref{fig:TiO2_XPS_Ovacs}(a,~b). 
By contrast, the Ti$^{3+}$ and Ti$^{2+}$ components, associated to free carriers and oxygen vacancies, become increasingly important for smaller escape depths and thus, closer to the interface.\\ To obtain the concentration profile $c(z, Ti^{x+})$ of the $Ti^{x+}$ species along the confinement direction $z$ perpendicular to the surface, we calculate the total area of the corresponding core-level peak as: \begin{align*} a(Ti^{x+}) \propto \int dz \, c(z, Ti^{x+}) \exp\left(-\frac{d(z)}{\lambda(E_{kin})}\right), \qquad x=2,3,4, \end{align*} where $d(z)$ is the distance travelled by a photo-emitted electron inside matter (\emph{i.e.}, anatase~$+$~AlO$_x$ layer), which depends on the emission angle, and $\lambda(E_{kin})$ is the inelastic mean free path for electrons photo-emitted with kinetic energy $E_{kin}$. \\ Figure~\ref{fig:TiO2_XPS_Ovacs}(c) shows the ratio $[a(Ti^{2+})+a(Ti^{3+})]/a(Ti^{4+})$ as a function of the electron emission angle. The error bars indicate the variation of this ratio using different line shapes and backgrounds to fit the various Ti peaks. We then fit the observed changes in such peak area ratio using a Heaviside function for the concentration profile of oxygen vacancies, as shown in Figure~\ref{fig:TiO2_XPS_Ovacs}(d). The result of the fit, shown by the dashed curve in Figure~\ref{fig:TiO2_XPS_Ovacs}(c), yields a depth of $9 \pm 3$~\AA~for the vacancy-rich layer below the surface, and a fraction of $(62 \pm 15)$\% of Ti ions with oxidation states $3+$ or $2+$. \subsection*{In-plane and out-of-plane Fermi surfaces of the 2DES in BaTiO$_3$} \begin{figure*} \begin{center} \includegraphics[clip, width=16cm]{BTO-Al-capping-KFs-HeavyBands.pdf} \end{center} \caption{\label{fig:BTO_KFs_Heavy} \footnotesize (a,~b)~Fermi surface maps (spectral weight integrated over $E_F \pm 5$~meV) at the Al(2\AA)/BaTiO$_3$ interface using $80$~eV LV and LH photons, respectively. Data were collected around the $\Gamma_{103}$ Brillouin zone. 
The open red circles show the Fermi momenta obtained from Lorentzian fits to the MDCs at $E_F$. The red squares show the Brillouin-zone edges. (c)~MDC integrated over $E_F \pm 10$~meV along the left edge of the $\Gamma_{103}$ Brillouin zone, corresponding to a cut along the light blue arrows in panel (a). The blue dashed curves are Lorentzian peaks, and the black curve is the resulting total fit. } } \end{figure*} \textbf{Figures~S}\ref{fig:BTO_KFs_Heavy}(a,~b) show the Fermi-surface strips formed by the heavy bands of the 2DES at the surface of BaTiO$_3$ (Al(2\AA)/BTO/Nb:STO interface). The data were taken on the same Brillouin zone using mutually orthogonal photon polarizations, which due to photoemission selection rules enhance either the Fermi strip parallel to $k_{<010>}$ or the Fermi strip parallel to $k_{<100>}$. The open red circles show the Fermi momenta obtained from Lorentzian fits to the MDCs at $E_F$. Figure~\ref{fig:BTO_KFs_Heavy}(c) shows one such MDC, corresponding to a cut at the left edge of the $\Gamma_{103}$ Brillouin zone (light blue arrows). This MDC clearly shows a double-peak structure, corresponding to the two Fermi sheets of the Fermi strip, which is thus open at the Brillouin-zone edge. This is in agreement with the fact that the heavy bands running along the $k_{<010>}$ and $k_{<100>}$ directions do not cross $E_F$, as shown in Figures~3(d,~e) of the main text. From these MDC fits, the average distance between opposite Fermi momenta along the short side of the Fermi strips is $k_{s} \approx 0.15 \pm 0.02$~\AA$^{-1}$. \begin{figure*} \begin{center} \includegraphics[clip, width=16cm]{BTO-Al-capping-hv-dependence.pdf} \end{center} \caption{\label{fig:BTO_hv_dep_d2} \footnotesize (a)~Raw Fermi surface map at the Al(2\AA)/BaTiO$_3$ interface in the $k_{\langle 001 \rangle}$~--~$k_{\langle 010 \rangle}$ plane, acquired by varying the photon energy in 1~eV steps between $h\nu_1 = 30$~eV and $h\nu_2 = 100$~eV using LH photons. 
To calculate the momentum perpendicular to the surface, we use a free-electron final-state approximation, and set the inner potential to $V_0=12$~eV. The spectral weight was integrated over $[E_F - 30, E_F + 5]$~meV. The red square shows the edges of the $\Gamma_{003}$ bulk Brillouin zone. The blue and green arcs show the spherical-cap cuts in 3D reciprocal space obtained with $h\nu = 80$~eV and $h\nu = 47$~eV, respectively, corresponding to the data presented in Figure~3 of the main text. (b)~Second derivative (negative values only) of the Fermi surface map in (a). The yellow and purple dashed curves are guides to the eye showing, respectively, the non-dispersive Fermi surface of the light $d_{xy}$-like states, and the dispersive Fermi surface of the heavy $d_{xz/yz}$-like states. } } \end{figure*} Finally, \textbf{Figures~S}\ref{fig:BTO_hv_dep_d2}(a,~b) show the out-of-plane Fermi-surface map of the 2DES at the surface of BaTiO$_3$, obtained from the photon-energy dependence of the electronic structure measured over more than an entire bulk Brillouin zone. The inner cylinder, yellow dashed lines in Figure~\ref{fig:BTO_hv_dep_d2}(b), is associated to the light $d_{xy}$-like band forming the Fermi circle in the plane. As its Fermi surface does not disperse along the confinement direction, it corresponds to a 2D-like state. The data also show a large ellipse dispersing along $k_{<001>}$, hence presenting a 3D-like character, best seen in the lower part of the $\Gamma_{003}$ Brillouin zone, purple dashed lines in Figure~\ref{fig:BTO_hv_dep_d2}(b). This Fermi sheet is associated to the heavy bands forming the Fermi strips in the plane. Of course, such 3D-like behavior cannot correspond to a true bulk state, as the redox reaction occurs only at the interface region. Note also that the 3D carrier density resulting from such a state would be huge, comparable to that of good metals, while the bulk BTO film is still insulating. 
Instead, such 2D-3D dichotomy between different states forming the 2DES in BaTiO$_3$, also observed for the 2DES at the surface of SrTiO$_3$~\cite{Plumb2014}, can be qualitatively understood as arising from confinement itself. In the bulk, by cubic symmetry, the $t_{2g}$ bands are expected to form 3 identical mutually orthogonal Fermi surfaces similar to prolate ellipsoids along the main crystallographic axes --or open quasi-cylinders, when the band filling is such that the ellipsoids' long axis extends beyond the zone boundary. Confinement along $z$ will result, by Heisenberg's principle, in ``de-confinement'', or elongation, of the ellipsoids along $k_z$. When the confinement length becomes $\lesssim a$ (one unit cell), the ellipsoid stretches over $k_z \gtrsim 2\pi/a$ (one Brillouin zone), and the out-of-plane Fermi surface becomes a cylinder. Thus, in the case of the 2DES at the surface of BaTiO$_3$, we see from Figure~\ref{fig:BTO_KFs_Heavy} that the Fermi-surface strip formed by $d_{xz/yz}$-like states has an in-plane Fermi momentum $k_F = k_s/2 = 0.075$~\AA$^{-1}$, while from Figure~\ref{fig:BTO_hv_dep_d2} its out-of-plane Fermi momentum is approximately 0.4~\AA$^{-1}$. Hence, there is an elongation along $k_z$ of the $d_{xz/yz}$ ellipsoids due to confinement. Similarly, as noted before, the in-plane circular Fermi surface, formed by $d_{xy}$-like states, forms a cylinder along the out-of-plane direction. We then conclude that the planar $d_{xy}$-like states are more tightly confined to the surface, while the non-planar $d_{xz/yz}$-like states extend over multiple unit cells towards the bulk. 
This situation is wholly similar to the case of the 2DES at the surface of SrTiO$_3$~\cite{Plumb2014}, and simply reflects the fact that the confinement potential is wedge-shaped, such that electrons with a large effective mass along $k_z$ ($d_{xy}$ states) are more confined than electrons with a small effective mass along $k_z$ ($d_{xz/yz}$ states)~\cite{Santander-Syro2011,Plumb2014}.
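The free-electron final-state approximation used above for the out-of-plane momentum maps (inner potential $V_0 = 12$~eV, quoted in the caption of Figure~S\ref{fig:BTO_hv_dep_d2}) can be sketched numerically. The work-function value below is an assumed typical number, not quoted in the text, and the prefactor $\sqrt{2m_e}/\hbar \approx 0.5123$~\AA$^{-1}$eV$^{-1/2}$ is standard.

```python
import math

SQRT_2ME_OVER_HBAR = 0.5123  # sqrt(2 m_e)/hbar in Angstrom^-1 / sqrt(eV)

def k_perp(h_nu_eV, binding_eV=0.0, work_function_eV=4.5, V0_eV=12.0,
           theta_deg=0.0):
    """Out-of-plane momentum in the free-electron final-state approximation:
    k_perp = sqrt(2 m_e (E_kin cos^2(theta) + V0)) / hbar.
    work_function_eV = 4.5 is an assumption for illustration."""
    e_kin = h_nu_eV - work_function_eV - binding_eV
    theta = math.radians(theta_deg)
    return SQRT_2ME_OVER_HBAR * math.sqrt(e_kin * math.cos(theta)**2 + V0_eV)

# k_perp at the Fermi level for the two photon energies used in the text.
print(k_perp(80.0))  # ~4.8 1/Angstrom
print(k_perp(47.0))  # ~3.8 1/Angstrom
```

The two photon energies thus probe spherical caps roughly one Brillouin zone apart along $k_{\langle 001 \rangle}$, consistent with the arcs drawn in Figure~S\ref{fig:BTO_hv_dep_d2}(a).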
\section{Introduction} All graphs considered in this article are simple. Let $Oct$ denote the Octahedron. Let $G$ and $H$ be two graphs. $H$ is called a minor of $G$, denoted by $H \leq_{m} G$, if it can be obtained from $G$ by deleting vertices, deleting edges, or contracting edges. If $G$ has no minor isomorphic to $H$, then $G$ is said to be $H$-free. Assume $v$ is a vertex of a 3-connected graph $G$ such that $d(v) \geq 4$, and let $A,B\subseteq N_{G}(v)$, where $N_{G}(v)$ is the set of vertices adjacent to $v$ in $G$, with $A \cap B=\emptyset$ and $\min\{|A|,|B|\}\geq 2$. A $3$-$split$ of $v$ is the operation of first deleting $v$ from $G$ and adding two new adjacent vertices $v'$, $v''$, then joining $v'$ to the vertices in $A$ and $v''$ to the vertices in $B$. It is clear that a graph obtained by 3-splitting a vertex of a 3-connected graph is also 3-connected. For a given graph $H$, characterizing $H$-free graphs is a difficult topic in graph theory. We focus on 3-connected graphs $H$ in this paper. Tutte's Wheel Theorem states that every 3-connected graph can be obtained from a wheel by repeatedly adding edges and 3-splitting vertices~\cite{W.T.Tutte}. By this theorem, we can generate all 3-connected graphs. For $k \in \{0,1,2,3\}$, let $G_{1}$, $G_{2}$ be two disjoint graphs, each with more than $k$ vertices. The 0-sum of $G_{1}$ and $G_{2}$ is the disjoint union of $G_{1}$ and $G_{2}$. The 1-sum of $G_{1}$, $G_{2}$ is obtained by identifying one vertex of $G_{1}$ with one vertex of $G_{2}$. The 2-sum of $G_{1}$, $G_{2}$ is obtained by identifying one edge of $G_{1}$ with one edge of $G_{2}$; the common edge may be deleted after identification. The 3-sum of $G_{1}$, $G_{2}$ is obtained by identifying one triangle of $G_{1}$ with one triangle of $G_{2}$; some of the three common edges may be deleted after identification. Next, we introduce some known results for $H$-free graphs where $H$ is 3-connected, ordered according to the number of edges of $H$. 
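The 3-split operation just defined can be illustrated concretely. The following is a minimal sketch using an adjacency-dictionary representation (an illustrative choice); it additionally assumes $A \cup B$ covers $N_G(v)$, so that no neighbour of $v$ loses its adjacency.

```python
def three_split(adj, v, A, B):
    """3-split vertex v of a graph given as {vertex: set of neighbours}.
    As in the text: A, B are disjoint subsets of N(v) with |A|, |B| >= 2;
    v is deleted, two new adjacent vertices v', v'' are added, and v' is
    joined to the vertices in A, v'' to the vertices in B."""
    A, B = set(A), set(B)
    assert A <= adj[v] and B <= adj[v] and not (A & B)
    assert min(len(A), len(B)) >= 2
    g = {u: set(nb) for u, nb in adj.items() if u != v}
    for nb in g.values():
        nb.discard(v)
    v1, v2 = f"{v}'", f"{v}''"
    g[v1], g[v2] = A | {v2}, B | {v1}
    for u in A:
        g[u].add(v1)
    for u in B:
        g[u].add(v2)
    return g

# 3-splitting the hub of the 4-wheel yields the Prism: two triangles
# {1, 2, h'} and {3, 4, h''} joined by the matching 2-3, 4-1, h'-h''.
wheel4 = {"h": {1, 2, 3, 4}, 1: {"h", 2, 4}, 2: {"h", 1, 3},
          3: {"h", 2, 4}, 4: {"h", 1, 3}}
prism = three_split(wheel4, "h", {1, 2}, {3, 4})
print(sorted(len(nb) for nb in prism.values()))  # [3, 3, 3, 3, 3, 3]
```

The output confirms the resulting graph is cubic on six vertices, consistent with the claim that 3-splitting preserves 3-connectivity.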
Ding~\cite{G.Ding} characterized all $H$-free graphs for 3-connected $H$ with at most 11 edges, including $Oct\backslash e$. Let $\aleph$ denote the set of graphs obtained by 3-summing wheels and Prisms, and let $K_{5}^{\triangle}$ denote the graph obtained by 3-summing Prism and $K_{5}$. \begin{thm}[\cite{G.Ding}]\label{thm1.1} $Oct\backslash e$-free graphs consist of graphs in $\aleph$ and 3-connected minors of $V_{8}$, $Cube$, and $K_{5}^{\triangle}$. \end{thm} For 3-connected graphs with 12 edges, $V_{8}$-free graphs, $Cube$-free graphs and $Oct$-free graphs are characterized in~\cite{G.Ding Oct,J.Maharry and N.Robertson,J.Maharry Cube}. \begin{thm}[\cite{G.Ding Oct}]\label{thm1.2} A graph is Oct-free if and only if it is constructed by 0-, 1-, 2- and 3-sums starting from graphs in $\{K_{1}, K_{2}, K_{3}, K_{4}\} \cup \{C_{2n-1}^{2} : n\geq 3\} \cup \{L_{4}^{'}, L_{5}, L_{5}^{'}, L_{5}^{''}, P_{10}\}$ (see Figure 1). \end{thm} \input{Figure1.Tpx} There are 51 3-connected graphs with 13 edges, but only two related results. One is for 4-connected $Oct^{+}$-free graphs, where $Oct^{+}$ denotes the graph $Oct+e$~\cite{J.Maharry}. It can be seen that $Oct^{+}$ is isomorphic to $K_{6}$ with two parallel edges removed. \begin{thm}[\cite{J.Maharry}]\label{thm1.3} Every 4-connected graph that does not contain a minor isomorphic to $Oct^{+}$ is either planar or the square of an odd cycle. \end{thm} In this paper, we consider the two graphs that are obtained by 3-splitting a vertex of the Octahedron. We denote the planar one by $Oct_{1}^{+}$ and the non-planar one by $Oct_{2}^{+}$ (as shown in Figure 2). It can be seen that $Oct_{1}^{+}$ and $Oct_{2}^{+}$ are 3-connected and they both have 13 edges. Our purpose is to characterize 4-connected $Oct_{1}^{+}$-free graphs and 4-connected $Oct_{2}^{+}$-free graphs. For $Oct_{1}^{+}$, we also characterize all planar $Oct_{1}^{+}$-free graphs by characterizing 3-connected planar $Oct_{1}^{+}$-free graphs. 
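The squares of cycles $C_{n}^{2}$ appearing in these characterizations can be generated directly. The following is a minimal sketch (the adjacency-dictionary representation is an illustrative choice): vertex $i$ of $C_{n}^{2}$ is joined to the vertices at circular distance 1 or 2.

```python
def cycle_square(n):
    """The square C_n^2 of the n-cycle: vertices 0..n-1, with i ~ j
    iff their circular distance is 1 or 2 (meaningful for n >= 5)."""
    return {i: {(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n}
            for i in range(n)}

# Every C_n^2 with n >= 5 is 4-regular, and C_5^2 is the complete graph K_5.
c5 = cycle_square(5)
print(all(len(nb) == 4 for nb in c5.values()))                # True
print(all(j in c5[i] for i in c5 for j in c5 if i != j))      # True: C_5^2 = K_5
```

This makes explicit why $C_{5}^{2}$ can serve as the non-planar starting graph for the 4-splitting constructions in the theorems below: it is simply $K_{5}$.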
\begin{figure}[htbp] \centering \includegraphics[height=3cm,width=6cm,angle=0]{figure2.eps} \caption{$Oct_{1}^{+}$ , $Oct_{2}^{+}$} \label{fig2} \end{figure} Let $v$ be a vertex of a 4-connected graph $G$. A 4-$split$ of $v$ produces a new graph $G'$ as follows. Given two sets $A,B\subseteq N_{G}(v)$ with $\min\{|A|,|B|\}\geq 3$, where $N_{G}(v)$ is the set of vertices adjacent to $v$ in $G$, remove $v$ from $G$ and add two new adjacent vertices $v'$, $v''$ such that $N_{G'}(v')=A\cup \{v''\}$ and $N_{G'}(v'')=B\cup \{v'\}$. Clearly, $G'$ is also 4-connected. The following are the main results of this paper. \begin{thm}\label{thm1.4} A 4-connected graph is $Oct_{1}^{+}$-free if and only if it is $C_{6}^{2}$, $C_{2k+1}^{2}$ ($k\geq2$), or it is obtained from $C_{5}^{2}$ by repeatedly 4-splitting vertices. Moreover, $C_{6}^{2}$ is the only 4-connected planar $Oct_{1}^{+}$-free graph. \end{thm} \begin{thm}\label{thm1.5} A planar graph is $Oct_{1}^{+}$-free if and only if it is constructed by repeatedly taking 0-, 1-, 2-sums starting from $\{K_{1}, K_{2}, K_{3}\} \cup \mathscr{K} \cup \{Oct, L_{5}\}$, where $\mathscr{K}$ is the set of graphs obtained by repeatedly taking the special 3-sums of $K_{4}$. \end{thm} \input{Figure3.Tpx} \begin{thm}\label{thm1.6} A 4-connected graph is $Oct_{2}^{+}$-free if and only if it is planar, $C_{2k+1}^{2}$ ($k\geq2$), $L(K_{3,3})$, or it is obtained from $C_{5}^{2}$ by repeatedly 4-splitting vertices. \end{thm} \section{Preliminaries} In this section, we introduce some definitions and known results that we will use in Section 3. A $separation$ of $G$ is an ordered pair of subgraphs $(H,K)$ such that $E(H) \cap E(K) =\emptyset$, $H \cup K =G$, and $|E(H)|\geq |V(H) \cap V(K)|\leq |E(K)|$. A separation $(H,K)$ is called a $k$-$separation$ if $|V(H) \cap V(K)|=k$. A $cyclic$ $separation$ is a separation $(H,K)$ in which both $H$ and $K$ contain circuits. Suppose $k$ is an integer greater than two.
A graph $G$ is $cyclically$ $k$-$connected$ if it is 2-connected, $|E(G)|-|V(G)|+1\geq k$, and there does not exist a cyclic $k'$-separation of $G$ for $k' \leq k$. A graph is $cubic$ if it is 3-regular. The graph $L(G)$ is called the $line$ $graph$ of $G$ if $V(L(G)) = E(G)$, and two vertices $e$, $f$ in $V(L(G))$ are adjacent if and only if they are adjacent edges in $G$. Let $\mathcal{C}=\{C_{n}^{2}:n\geq5\}$ and $\mathcal{L}=\{L(H) : H$ is a cubic cyclically 4-connected graph$\}$. A $(G_{0}, G_{n})$-$chain$ is a sequence of 4-connected graphs $G_{0}, G_{1},...,G_{n}$ such that each $G_{i}$ $(i < n)$ has an edge $e_{i}$ with $G_{i}/e_{i}=G_{i+1}$. There is a classical result of Martinov for 4-connected graphs, known as the chain theorem. \begin{thm}[\cite{N.Martinov}]\label{thm2.1} For every 4-connected graph $G$, there exists a sequence of 4-connected graphs $G_{0}, G_{1},...,G_{n}$ such that $G_{0}=G$, $G_{n} \in \mathcal{C} \cup \mathcal{L}$, and every $G_{i}$ $(i < n)$ has an edge $e_{i}$ for which $G_{i}/e_{i}=G_{i+1}$. \end{thm} This result has been strengthened by Qin and Ding as follows. \begin{thm}[\cite{C.Qin}]\label{thm2.2} Let $G$ be a 4-connected graph not in $\mathcal{C} \cup \mathcal{L}$. If $G$ is planar, then there exists a $(G, C_{6}^{2})$-chain; if $G$ is non-planar, then there exists a $(G, K_{5})$-chain. \end{thm} Thus, any 4-connected graph that is not in $\mathcal{C} \cup \mathcal{L}$ can be generated by repeatedly 4-splitting vertices starting from $C_{6}^{2}$ or $C_{5}^{2}$. Cubic cyclically 4-connected graphs and their line graphs have some useful properties. Adding a $handle$ to $G$ is the operation of first subdividing two nonadjacent edges $e_{1}$ and $e_{2}$ of $G$ and then adding a new edge connecting the two new internal vertices. A graph $H$ is $topologically$ contained in a graph $G$ if some subgraph of $G$ is isomorphic to a subdivision of $H$.
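As a concrete illustration of the line-graph construction just defined (illustrative pure Python, not code from this work; edges of $G$ are stored as frozensets so they can serve as vertices of $L(G)$):

```python
# Illustrative sketch (not from the paper): building L(G) from an adjacency-set
# representation of G. Two edges of G are adjacent in L(G) iff they share an end.
from itertools import combinations

def line_graph(adj):
    edges = {frozenset((u, w)) for u, nbrs in adj.items() for w in nbrs}
    L = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if e & f:                     # the two edges share an endpoint in G
            L[e].add(f)
            L[f].add(e)
    return L

# A classical check: L(K_4) is the Octahedron, a 4-regular graph on 6 vertices.
K4 = {u: {w for w in range(4) if w != u} for u in range(4)}
L = line_graph(K4)
```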
\begin{lem}[\cite{N.C.Wormald}]\label{lem2.3} The class of all cubic cyclically 4-connected graphs can be generated by repeatedly adding handles starting from $K_{3,3}$ or the $Cube$. \end{lem} \begin{lem}[\cite{G.Ding P}]\label{lem2.4} If $G$ is a cyclically 4-connected non-planar cubic graph, then either $G=K_{3,3}$ or $G$ contains a subdivision of $V_{8}$. \end{lem} \begin{lem}[\cite{J.Maharry four}]\label{lem2.5} If $H$ is topologically contained in $G$, then $L(H) \leq_{m} L(G)$. \end{lem} The following are some results for 3-connected graphs. \begin{lem}[\cite{G.Ding}]\label{lem2.6} Let $H$ be a 3-connected graph. Then a graph is $H$-free if and only if it is constructed by repeatedly taking 0-, 1-, and 2-sums, starting from $\{K_{1}, K_{2}, K_{3}\}$ $\cup$ $\{$3-connected $H$-free graphs$\}$. \end{lem} \begin{lem}[\cite{P.Seymour}]\label{lem2.7} Suppose a 3-connected graph $H \neq W_{3}$ is a proper minor of a 3-connected graph $G \neq W_{n}$. Then $G$ has a minor $J$, which is obtained from $H$ by either adding an edge or 3-splitting a vertex. \end{lem} \section{Proof of main results} \subsection{4-connected $Oct_{1}^{+}$-free graphs} In this section, we characterize 4-connected $Oct_{1}^{+}$-free graphs and prove Theorem~\ref{thm1.4}. \begin{lem}\label{lem3.1} If a 4-connected graph $G \in \mathcal{C} \cup \mathcal{L}$ is $Oct_{1}^{+}$-free, then $G$ is $C_{6}^{2}$ or $C_{2k+1}^{2}$ $(k\geq 2)$. \end{lem} \begin{proof} Suppose $G=L(H)$, where $H$ is a cubic cyclically 4-connected graph. By Lemma~\ref{lem2.3}, $H$ can be generated by repeatedly adding handles starting from $K_{3,3}$ or the $Cube$. Thus, $L(H) \geq_{m} L(K_{3,3})$ or $L(H) \geq_{m} L(Cube)$ by Lemma~\ref{lem2.5}. Since $L(K_{3,3})$ and $L(Cube)$ both contain $Oct_{1}^{+}$ as a minor (as shown in Figure 4 and Figure 5), $G$ contains an $Oct_{1}^{+}$-minor too. \input{Figure4.Tpx} \input{Figure5.Tpx} Thus we may assume that $G \in \mathcal{C}$.
If $G=C_{2k+1}^{2}$ $(k \geq 2)$, then $G$ is $Oct_{1}^{+}$-free since $C_{2k+1}^{2}$ $(k\geq 2)$ is $Oct$-free. For $C_{2k}^{2}$ $(k \geq 3)$, note that $C_{2k+2}^{2}$ contains $C_{2k}^{2}$ as a minor. Clearly, $C_{6}^{2}$ is $Oct_{1}^{+}$-free since it has only six vertices. It is easy to verify that $C_{8}^{2}$ contains an $Oct_{1}^{+}$-minor, so $C_{2k}^{2}$ contains an $Oct_{1}^{+}$-minor for every $k \geq 4$. \end{proof} \begin{thm}\label{thm3.2} A 4-connected planar graph is $Oct_{1}^{+}$-free if and only if it is $C_{6}^{2}$. \end{thm} \begin{proof} The sufficiency clearly holds. To prove the necessity, assume that $G$ is a 4-connected planar $Oct_{1}^{+}$-free graph. If $G \in \mathcal{C} \cup \mathcal{L}$, then $G=C_{6}^{2}$ by Lemma~\ref{lem3.1}. If $G$ is not in $\mathcal{C} \cup \mathcal{L}$, then by Theorem~\ref{thm2.2} there exists a $(G, C_{6}^{2})$-chain. Let $\{v_{1}, v_{2},..., v_{6}\}$ be the vertices of $C_{6}^{2}$ (shown in Figure 6). By symmetry, we 4-split $v_{1}$ and first consider the minimal case $|A|=|B|=3$, where $A, B \subseteq N(v_{1})$ and $A \cup B =N(v_{1})$. Suppose $v_{1}'$, $v_{1}''$ are the two new vertices obtained by 4-splitting $v_{1}$, with $N(v_{1}')=A \cup \{v_{1}''\}$ and $N(v_{1}'')=B \cup \{v_{1}'\}$. Since the four neighbors of $v_{1}$ in $C_{6}^{2}$ form a 4-cycle, any three of them induce a path; by symmetry, we may assume that $A = \{v_{2}, v_{5}, v_{6}\}$, and then $v_{3}$ must be adjacent to $v_{1}''$. Therefore, $B$ must be one of the following sets: $\{v_{3}, v_{5}, v_{6}\}$, $\{v_{2}, v_{3}, v_{5}\}$, $\{v_{2}, v_{3}, v_{6}\}$. In all cases, the new graph $G'$ generated by 4-splitting $v_{1}$ of $C_{6}^{2}$ with $|A|=|B|=3$ contains $Oct_{1}^{+}$. Clearly, every other 4-split of $C_{6}^{2}$ contains such a $G'$ as a minor, and thus contains $Oct_{1}^{+}$. Hence, $G$ is $C_{6}^{2}$.
\end{proof} \input{Figure6.Tpx} \noindent\emph{Proof of Theorem~\ref{thm1.4}.} The result follows from Theorem~\ref{thm2.2}, Lemma~\ref{lem3.1} and Theorem~\ref{thm3.2}. $\hfill\qedsymbol$ \subsection{Planar $Oct_{1}^{+}$-free graphs} In this section, we characterize all planar $Oct_{1}^{+}$-free graphs and prove Theorem~\ref{thm1.5}. We first establish some lemmas. \begin{lem}\label{lem3.3} If $G_{1}$, $G_{2}$ are $k$-connected for $k=0,1,2,3$ and at least one of them is non-planar, then the $k$-sum of $G_{1}$ and $G_{2}$ is non-planar. \end{lem} \begin{proof} The claim clearly holds for $k=0,1$. Without loss of generality, we suppose $G_{1}$ is non-planar. Then $G_{1}$ contains a subdivision of $K_{3,3}$ or $K_{5}$, which we denote by $\Gamma$. When $k=2$, let $G$ be the 2-sum of $G_{1}$ and $G_{2}$, and let $e=uv$ be the common edge. Suppose $G$ is planar; then $e$ is contained in the subdivision $\Gamma$ and $e$ is deleted after the identification. Since $G_{2}$ is 2-connected, there exists a $(u,v)$-path $P$ in $G_{2}$ different from $e$. Then $G_{1} \cup P$ contains a subdivision of $K_{3,3}$ or $K_{5}$, a contradiction. When $k=3$, let $G$ be the graph obtained by 3-summing $G_{1}$ and $G_{2}$ over a common triangle $v_{1}v_{2}v_{3}v_{1}$. Suppose $G$ is planar and $v_{1}v_{2},v_{2}v_{3},v_{1}v_{3}$ are all deleted after the identification. Since $G_{1}$ is non-planar, some edges in $\{v_{1}v_{2},v_{2}v_{3},v_{1}v_{3}\}$ are contained in the subdivision $\Gamma$. If $\Gamma$ is a subdivision of $K_{3,3}$, then, since $\Gamma$ contains no triangle, at most two of these edges are contained in $\Gamma$, say $v_{1}v_{2}$ and $v_{2}v_{3}$. Since $G_{2}$ is 3-connected, there exists a vertex $v$ different from $v_{1},v_{2},v_{3}$ and three internally-disjoint $(v,v_{1})$-, $(v,v_{2})$- and $(v,v_{3})$-paths in $G_{2}$. By contracting the $(v,v_{2})$-path to $v_{2}$, we obtain a $(v_{1},v_{2})$-path $P_{1}$ and a $(v_{2},v_{3})$-path $P_{2}$.
Then $\Gamma \setminus v_{1}v_{2} \setminus v_{2}v_{3} \cup P_{1} \cup P_{2}$ forms a subdivision of $K_{3,3}$ again, which contradicts the planarity of $G$. Thus we may assume that $\Gamma$ is a subdivision of $K_{5}$. As shown in Figure 7, the 3-sum $G'$ of $K_{5}$ and $K_{4}$ is non-planar. Since every 3-connected graph contains $W_{3} = K_{4}$ as a minor, $G$ must contain $G'$ as a minor, a contradiction. \end{proof} \input{Figure7.Tpx} Hence, to characterize all planar $Oct_{1}^{+}$-free graphs it suffices to consider $k$-sums $(k=0,1,2,3)$ of planar graphs. \begin{lem}\label{lem3.4} If $G_{1}$, $G_{2}$ are both planar, then the $k$-sum $G$ $(k=0,1,2)$ of $G_{1}$ and $G_{2}$ is planar. \end{lem} \begin{proof} The claim clearly holds for $k=0,1$. When $k=2$, let $e$ be the common edge. Since $G_{i}$ $(i=1,2)$ is planar, there exists a planar embedding $H_{i}$ of $G_{i}$ such that the outer face $\widetilde{f_{i}}$ of $H_{i}$ is incident with $e$. Thus a planar embedding of $G$ can be obtained by embedding $H_{2}$ in the face $\widetilde{f_{1}}$ of $H_{1}$. \end{proof} Recall that the 3-sum of $G_{1}$, $G_{2}$ is obtained by identifying one triangle of $G_{1}$ with one triangle of $G_{2}$, where some of the three common edges may be deleted after the identification. Next, we define the $special$ $3$-$sum$ of two graphs. A triangle $abca$ of $G$ is called a $separating$ $triangle$ if the graph obtained from $G$ by deleting the vertices $a$, $b$, $c$ is disconnected. Otherwise, we call $abca$ a $non$-$separating$ $triangle$. The $special$ $3$-$sum$ of $G_{1}$ and $G_{2}$ is obtained by taking the 3-sum of them over a triangle that is non-separating in both $G_{1}$ and $G_{2}$. \begin{lem}\label{lem3.5} Let $G_{1}$, $G_{2}$ be two 3-connected planar graphs with triangles and let $G$ be a 3-sum of them. If $G$ is obtained by taking a special 3-sum of them, then $G$ is planar. Otherwise, $G$ is non-planar.
\end{lem} \begin{proof} Suppose $G$ is obtained by 3-summing $G_{1}$, $G_{2}$ over a triangle $C_{1}$ that is non-separating in both $G_{1}$ and $G_{2}$. Let $f$ be the face of $G_{1}$ that is bounded by $C_{1}$. Then there exists a planar embedding $H$ of $G_{1}$ such that the outer face $\tilde{f}$ of $H$ has the same boundary as $f$. Thus a planar embedding of $G$ can be obtained by embedding $G_{2}$ in $\tilde{f}$. Next, we suppose that $G$ is obtained by 3-summing $G_{1}$ and $G_{2}$ over a triangle $C_{2}=abca$ that is separating in $G_{1}$ or $G_{2}$, say $G_{1}$. Since $C_{2}$ is a separating triangle and $G_{1}$ is planar, there exist two vertices $u_{1}$, $u_{2}$ such that $u_{1}$ is in int$C_{2}$ and $u_{2}$ is in ext$C_{2}$. Let $u_{3}$ be a vertex of $G_{2}$ different from $a,b,c$. Since both $G_{1}$ and $G_{2}$ are 3-connected, for each $i$ there exist a $(u_{i},a)$-path $P_{i1}$, a $(u_{i},b)$-path $P_{i2}$ and a $(u_{i},c)$-path $P_{i3}$ in $G$ such that $P_{i1}$, $P_{i2}$ and $P_{i3}$ are internally-disjoint (shown in Figure 8). It can be seen that $P_{11} \cup P_{12} \cup P_{13} \cup P_{21} \cup P_{22} \cup P_{23} \cup P_{31} \cup P_{32} \cup P_{33}$ forms a subdivision of $K_{3,3}$. Thus, $G$ is non-planar. \end{proof} \input{Figure8.Tpx} We next prove Theorem~\ref{thm1.5}. \noindent\emph{Proof of Theorem~\ref{thm1.5}.} We first characterize all 3-connected planar $Oct_{1}^{+}$-free graphs. Suppose $G$ is a 3-connected planar $Oct_{1}^{+}$-free graph. Two cases arise, depending on whether $G$ has an $Oct$-minor. {\bf Case 1.} $G$ contains an $Oct$-minor. Clearly, $Oct$ itself is a 3-connected planar $Oct_{1}^{+}$-free graph, so we assume that $G \neq Oct$. By Lemma~\ref{lem2.7}, $G$ has a minor $J$, which is obtained from $Oct$ by either adding an edge or 3-splitting a vertex. Adding any edge to $Oct$ results in a non-planar graph. And there are two graphs obtained by 3-splitting a vertex of $Oct$: one is $Oct_{1}^{+}$, and the other is non-planar.
Hence, in this case, $Oct$ is the only 3-connected planar $Oct_{1}^{+}$-free graph. {\bf Case 2.} $G$ is $Oct$-free. Since $G$ is 3-connected, $G$ is constructed by taking 3-sums starting from graphs in $\{K_{4}\} \cup \{C_{2n-1}^{2} : n\geq 3\} \cup \{L_{4}^{'}, L_{5}, L_{5}^{'}, L_{5}^{''}, P_{10}\}$ by Theorem~\ref{thm1.2}. By Lemma~\ref{lem3.3}, Lemma~\ref{lem3.5} and the planarity of $G$, either $G$ is $L_{5}$ or $G$ is obtained by taking special 3-sums of copies of $K_{4}$. Thus $G$ belongs to $\{L_{5}\} \cup \mathscr{K}$ in this case. It follows from Cases 1 and 2 that $G$ belongs to $\{Oct,L_{5}\} \cup \mathscr{K}$. Then Theorem~\ref{thm1.5} follows from Lemma~\ref{lem2.6}, Lemma~\ref{lem3.3} and Lemma~\ref{lem3.4}. $\hfill\qedsymbol$ \subsection{4-connected $Oct_{2}^{+}$-free graphs} In this section, we characterize 4-connected $Oct_{2}^{+}$-free graphs and prove Theorem~\ref{thm1.6}. \begin{lem}\label{lem3.6} Every graph in $\mathcal{C}$ is a 4-connected $Oct_{2}^{+}$-free graph. \end{lem} \begin{proof} This clearly holds since $C_{2k}^{2}$ $(k \geq 3)$ is planar and $C_{2k+1}^{2}$ $(k \geq 2)$ is $Oct$-free. \end{proof} \begin{lem}\label{lem3.7} If a 4-connected graph $G \in \mathcal{L}$ is $Oct_{2}^{+}$-free, then $G$ is planar or $G = L(K_{3,3})$. \end{lem} \begin{proof} Suppose $G = L(H)$, where $H$ is a cubic cyclically 4-connected graph. If $G$ is planar, then $G$ is $Oct_{2}^{+}$-free since $Oct_{2}^{+}$ is non-planar. When $G$ is non-planar, $H$ is non-planar too. By Lemma~\ref{lem2.4}, $H = K_{3,3}$ or $H$ contains a subdivision of $V_{8}$. {\bf Case 1.} $H = K_{3,3}$. As shown in Figure 9, $G = L(K_{3,3})$ has 9 vertices $\{v_{1}, v_{2},..., v_{9}\}$ and 18 edges. If $Oct_{2}^{+}$ is a minor of $G$, the minor can be obtained from $G$ by contracting two edges and then deleting some edges. Without loss of generality, we first contract $v_{4}v_{5}$ to $v_{5}$ and denote the resulting graph by $G_{816}$, meaning that it has 8 vertices and 16 edges.
By symmetry, we next contract one of the edges in $\{v_{1}v_{5}, v_{2}v_{5}, v_{3}v_{6}, v_{1}v_{2}, v_{2}v_{3}, v_{5}v_{6}, v_{1}v_{3}, v_{1}v_{7}, v_{3}v_{9}, v_{2}v_{8}\}$. We verify every case in order; up to isomorphism there are six resulting graphs, which we denote by $G_{713}^{a}$, $G_{714}^{a}$, $G_{713}^{b}$, $G_{714}^{b}$, $G_{715}$, $G_{714}^{c}$ respectively. Since they are all $Oct_{2}^{+}$-free graphs, $G = L(K_{3,3})$ is $Oct_{2}^{+}$-free too. \input{Figure9.Tpx} \input{Figure10.Tpx} {\bf Case 2.} $H$ contains a subdivision of $V_{8}$. By Lemma~\ref{lem2.5}, $G$ contains $L(V_{8})$ as a minor. Since $L(V_{8})$ contains an $Oct_{2}^{+}$-minor, as shown in Figure 10, $G$ contains $Oct_{2}^{+}$ as a minor. \end{proof} \noindent\emph{Proof of Theorem~\ref{thm1.6}.} Suppose $G$ is a 4-connected graph that is not in $\mathcal{C} \cup \mathcal{L}$. If $G$ is planar, then $G$ is clearly $Oct_{2}^{+}$-free. If $G$ is non-planar, then by Theorem~\ref{thm2.2} there is a $(G, K_{5})$-chain; that is, $G$ can be generated by repeatedly 4-splitting vertices of $C_{5}^{2}$. Then Theorem~\ref{thm1.6} follows from Lemma~\ref{lem3.6} and Lemma~\ref{lem3.7}. $\hfill\qedsymbol$
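The edge contractions used in the case analysis of Lemma 3.7 (contract an edge, merge the two endpoints, drop any resulting parallel edges) can be mechanized. An illustrative pure-Python sketch, not the authors' code; the helper name `contract` is hypothetical:

```python
# Illustrative sketch (not from the paper): contracting the edge uv of a simple
# graph stored as adjacency sets; u is merged into v, parallel edges collapse.
def contract(adj, u, v):
    assert v in adj[u]
    new = {x: (nbrs - {u}) for x, nbrs in adj.items() if x != u}
    new[v] |= (adj[u] - {u, v})          # v inherits u's other neighbours
    for x in adj[u] - {v}:
        new[x].add(v)
    return new

# Contracting any edge of K_4 yields K_3 (a 2-regular graph on 3 vertices):
K4 = {u: {w for w in range(4) if w != u} for u in range(4)}
K3 = contract(K4, 0, 1)
```

Repeated calls of such a helper over all edge choices, followed by an isomorphism check, is one way to reproduce verifications like the six-graph case analysis above.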
\section{A. Proof for Theorem $1$} In addition to the ones in the main text, we first give further notation and definitions to formulate the problem. \textit{Definition A$1$.} For $\mathcal{H}'\subseteq\mathcal{H}$, we define the \textit{concentration function} as $\alpha(\epsilon)=1-\inf\{\mu(\mathcal{H}'_\epsilon)\,|\,\mu(\mathcal{H}')\geq\frac12\}$ with distance measure $D(\cdot)$ and probability measure $\mu(\cdot)$ in a $d$-dimensional vector space. If \begin{equation}\label{eq:def4} \alpha(\epsilon)\leq \alpha e^{-\beta\epsilon^2d}, \end{equation} then the vector space is said to be in the $(\alpha,\beta)$-normal Levy group. We also introduce the following Lemma A1, which has already been obtained in Ref. \cite{liu2019vulnerability}. Here, we recap the statement and sketch the proof for completeness. \textit{Lemma A$1$.} Consider a quantum classifier $\mathcal{C}_i$ that takes $\rho\in SU(d)$, drawn according to the Haar measure $\mu(\cdot)$, as input and has a misclassified set $\mathcal{E}_i$. Suppose the adversarial input state $\rho'$ is restricted to lie within $d_{HS}(\rho,\rho')\leq\epsilon$ of the clean data $\rho$. Then, to guarantee an adversarial risk $R_i$, $\epsilon$ is bounded below by \begin{equation}\label{eq:thm3} \epsilon^2\geq\frac {4}{d}\ln{[\frac{2}{\mu(\mathcal{E}_i)(1-R_i)}]}. \end{equation} To prove Lemma A$1$, we further introduce the following two lemmas together with their brief proofs. \textit{Lemma A$2$. } (Theorem 3.7 in \cite{mahloujifar2019curse}) For each classifier $\mathcal{C}_i$ with risk $\mu(\mathcal{E}_i)$, consider an adversarial perturbation $\rho\rightarrow\rho'$ with $\rho,\rho'\in \mathcal{H}$ and $D(\rho,\rho')\leq\epsilon$. If the adversarial risk $\mu(\mathcal{E}_{i,\epsilon})$ is guaranteed to be at least $R_i$, then $\epsilon^2$ must be bounded below by \begin{equation}\label{eq:lem1} \epsilon^2\geq\frac {1}{\beta d}\ln{[\frac{\alpha^2}{\mu(\mathcal{E}_i)(1-R_i)}]}. \end{equation} \textit{Proof.
}We decompose the perturbation as $\epsilon=\epsilon_1+\epsilon_2$. First construct an $\epsilon_1$ such that $\mu(\mathcal{E}_i)>\alpha e^{-\beta\epsilon_1^2d}$, and consider two cases according to whether $\mu(\mathcal{E}_i)\leq\frac12$. (i) If $\mu(\mathcal{E}_i)>\frac 12$, then we have $\mu(\mathcal{E}_{i,\epsilon_1})>\mu(\mathcal{E}_i)>\frac 12$. (ii) If $\mu(\mathcal{E}_i)\leq\frac12$, suppose $\mu(\mathcal{E}_{i,\epsilon_1})\leq\frac12$. Then the complement has probability $\mu(\mathcal{H}\backslash\mathcal{E}_{i,\epsilon_1})\geq\frac 12$. Denote $\mathcal{H}_i'=\mathcal{H}\backslash\mathcal{E}_{i,\epsilon_1}$; then $\mu(\mathcal{H}_i')\geq\frac12$ and $\mathcal{E}_i\subseteq\mathcal{H}\backslash\mathcal{H}_{i,\epsilon_1}'$. Hence, we can deduce a contradiction using \eqref{eq:def4} as $\alpha(\epsilon_1)\geq 1-\mu(\mathcal{H}'_{i,\epsilon_1})\geq\mu(\mathcal{E}_i)>\alpha e^{-\beta\epsilon_1^2d}\geq\alpha(\epsilon_1)$. Therefore, the perturbation $\epsilon_1$ ensures $\mu(\mathcal{E}_{i,\epsilon_1})>\frac 12$. Then we attach $\epsilon_2$ to $\mathcal{E}_{i,\epsilon_1}$, which amounts to an $\epsilon=\epsilon_1+\epsilon_2$ perturbation of $\mathcal{E}_i$. Applying \eqref{eq:def4} we can prove the lemma, as $R_i=\mu(\mathcal{E}_{i,\epsilon})=\mu(\mathcal{E}_{i,\epsilon_1+\epsilon_2})>1-\alpha(\epsilon_2)$ and $\epsilon^2\geq\epsilon_1^2+\epsilon_2^2=\frac{1}{\beta d}\{\ln[\frac{\alpha}{\mu(\mathcal{E}_i)}]+\ln[\frac{\alpha}{1-R_i}]\}$. \textit{Lemma A$3$. } The group $SU(d)$ with the Haar probability measure and the normalized Hilbert-Schmidt metric is in the $(\sqrt{2},\frac 14)$-normal Levy group \cite{gromov1983topological,giordano2007some}.
\textit{Proof.} First apply the isoperimetric inequality \cite{gromov1983topological,milman2009asymptotic}, which states that for $\mathcal{H}'\subseteq\mathcal{H}$ with $\dim(\mathcal{H})=d$ and $\mu(\mathcal{H}')\geq\frac 12$, \begin{equation}\label{eq:lem2-1} \mu(\mathcal{H}'_\epsilon)\geq 1-\sqrt{2}e^{-\epsilon^2dR(\mathcal{H})/[2(d-1)]}, \end{equation} where $R(\mathcal{H})=\inf_v{\text{Ric}(v,v)}$ for the Ricci curvature $\text{Ric}(v,v')$ of $\mathcal{H}$, the infimum running over all unit tangent vectors $v$ of $\mathcal{H}$. Combining \eqref{eq:lem2-1} and \eqref{eq:def4} we can deduce that \begin{equation}\label{eq:lem2-2} \alpha(\epsilon)\leq\sqrt{2}e^{-\epsilon^2dR(\mathcal{H})/[2(d-1)]}. \end{equation} According to \cite{meckes2014concentration}, for $SU(d)$ equipped with the Hilbert-Schmidt metric, ${\rm Ric}(v,v)=\frac d2 G(v,v)$, where $G$ is the Hilbert-Schmidt metric and $v$ any unit tangent vector of $SU(d)$. From \cite{oszmaniec2016random}, $G(v,v)=1$, and therefore $R(\mathcal{H})=\frac d2$. This indicates that we can rewrite \eqref{eq:lem2-2} as \begin{equation}\label{eq:lem2-3} \alpha(\epsilon)\leq\sqrt{2}e^{-\epsilon^2d^2/[4(d-1)]}<\sqrt{2}e^{-\epsilon^2d/4}. \end{equation} Combining \eqref{eq:lem1} and \eqref{eq:lem2-3} shows that, for a classifier $\mathcal{C}_i$ with misclassified set $\mathcal{E}_i$ that takes $\rho\in SU(d)$ as input under the Hilbert-Schmidt metric, to guarantee an adversarial risk of at least $R_i$ the adversarial perturbation is bounded below by $\epsilon^2\geq\frac {4}{d}\ln{[\frac{2}{\mu(\mathcal{E}_i)(1-R_i)}]}$. Hence, we have completed the proof of Lemma A1. Now, we continue to prove Theorem $1$ in the main text using Ineq. \eqref{eq:thm3}. We consider a set of quantum classifiers $\mathcal{C}_i,i=1,...,k$ with misclassified sets $\mathcal{E}_i,i=1,...,k$. Our goal is to calculate $\mu(\mathcal{E}_\epsilon)$ for a given $\epsilon$ perturbation.
Consider the set $\mathcal{E}_{\text{set}}=\cap_{i=1}^k\mathcal{E}_i$ of original data that is misclassified by all classifiers in the set. If we assume the additional condition $\mathcal{E}_{\text{set}} \neq \emptyset$, then we can construct a quantum classifier $\mathcal{C}^*$ that misclassifies all $\rho\in\mathcal{E}_{\text{set}}$ and correctly classifies all other states in $\mathcal{H}$. Applying \eqref{eq:thm3} to this classifier $\mathcal{C}^*$, we deduce that to guarantee a risk larger than $R_0$, the perturbation is bounded below by \begin{eqnarray} \epsilon^2\geq\frac {4}{d}\ln{[\frac{2}{\mu(\mathcal{E}_\text{set})(1-R_0)}]}. \label{SingleClass} \end{eqnarray} If the additional constraint is not satisfied, i.e. $\cap_{i=1}^k\mathcal{E}_i=\emptyset$, then we cannot directly construct such a quantum classifier $\mathcal{C}^*$. In this case, we notice that $\mathcal{E}_\epsilon=\cap_{i=1}^k\mathcal{E}_{i,\epsilon}=\mathcal{H}-\cup_{i=1}^k(\mathcal{H}-\mathcal{E}_{i,\epsilon})$. Therefore $\mu(\mathcal{E}_\epsilon)$ can be bounded below by \begin{equation}\label{eq:IEsize} \mu(\mathcal{E}_\epsilon)\geq 1-\sum_{i=1}^k\frac{|\mathcal{H}\backslash\mathcal{E}_{i,\epsilon}|}{|\mathcal{H}|}=\sum_{i=1}^k\mu(\mathcal{E}_{i,\epsilon})-(k-1). \end{equation} Hence, if we attach a perturbation that ensures $\mu(\mathcal{E}_{i,\epsilon})\geq R_{0,i}=\frac{k-1+R_0}{k}$ for each classifier $\mathcal{C}_i$, then the universal adversarial risk will be bounded below by $R_0$. Replacing $R_i$ and $\mu(\mathcal{E}_i)$ in \eqref{eq:thm3} with $R_{0,i}$ and $\mu(\mathcal{E})_{\text{min}}$, we finish the proof by arriving at the inequality \begin{equation} \epsilon^2\geq\frac {4}{d}\ln{[\frac{2k}{\mu(\mathcal{E})_{\text{min}}(1-R_0)}]}. \label{Theorem1Supp} \end{equation} It is worthwhile to mention that Ineq. (\ref{Theorem1Supp}) holds regardless of whether the additional assumption $\mathcal{E}_{\text{set}} \neq \emptyset$ is satisfied.
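For orientation only, the final bound is straightforward to evaluate numerically; the parameter values in this sketch are hypothetical, not taken from the paper:

```python
# Illustrative evaluation of eps^2 >= (4/d) * ln[ 2k / (mu_min * (1 - R0)) ];
# the sample values of d, k, mu_min and R0 below are hypothetical.
from math import log, sqrt

def eps_lower_bound(d, k, mu_min, R0):
    return sqrt(4.0 / d * log(2.0 * k / (mu_min * (1.0 - R0))))

# e.g. a 10-qubit system (d = 2**10), k = 8 classifiers,
# worst single-classifier risk mu_min = 0.01, target universal risk R0 = 0.5:
eps = eps_lower_bound(d=2**10, k=8, mu_min=0.01, R0=0.5)
```

Note how weakly the bound depends on $k$ (logarithmically) and how it shrinks like $1/\sqrt{d}$ with the dimension, reflecting the concentration of measure used in the proof.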
When $\mathcal{E}_{\text{set}} \neq \emptyset$ is satisfied, the problem reduces to the case of the single classifier $\mathcal{C}^*$. Yet, we cannot tell which inequality, Ineq. (\ref{SingleClass}) or (\ref{Theorem1Supp}), gives a tighter bound, as we have no information about the values of $\mu(\mathcal{E}_{\text{set}})$ and $\mu(\mathcal{E})_{\text{min}}$. In our numerical simulations, among the test set containing 100 ground states of the Ising model, we find five samples that are misclassified by all eight quantum classifiers without adding any perturbation. This indicates that the additional condition might be satisfied in practice. \section{B. Proof for Theorem $2$} In this section, we provide the details of the proof of Theorem $2$ with some further discussions. Following the definitions in the main text, the adversarial operator $\hat{\epsilon}$ is unitary, and hence $\hat{\epsilon}^{-1}$ is also unitary. Then, by the unitary invariance of the Haar measure, we have \begin{equation}\label{eq:eqsize} \mu(\mathcal{E}_{\hat{\epsilon}})=\frac{|\hat{\epsilon}^{-1}(\mathcal{E})|}{|\mathcal{H}|}=\frac{|\mathcal{E}|}{|\mathcal{H}|}=\mu(\mathcal{E}). \end{equation} This indicates that the adversarial risk remains the same after we perform the same unitary perturbation operation $\hat{\epsilon}$ on every input quantum state $\rho\in\mathcal{H}$. We randomly pick $\rho\in\mathcal{H}$ according to the Haar measure. For each selection, the probability of a misclassification occurring is $\mu({\mathcal{E}_{\hat{\epsilon}}})=\mu(\mathcal{E})$. Therefore, we can regard each selection as a random variable that is $1$ when a misclassification occurs and $0$ otherwise. Then, we apply Hoeffding's inequality for independent Bernoulli random variables and obtain, with probability at least $1-\delta$ ($\delta>0$), \begin{equation} |R_E-\mu(\mathcal{E})|\leq\sqrt{\frac{1}{2n}\ln{(\frac2\delta)}}.
\end{equation} This proves the first part of Theorem $2$ in the main text. To obtain a lower bound for $\mu(\mathcal{E})$, we further resort to the no free lunch theorem \cite{shalev2014understanding} and its reformulation in the context of quantum machine learning \cite{poland2020no,sharma2020reformulation}. Unlike in Ref. \cite{poland2020no}, where quantum input and output are considered, our discussion is restricted to classification problems in which the output is a classical label. To this end, here we give a loose estimate of the lower bound of $\mu(\mathcal{E})$ under some additional constraints motivated by our numerical simulations. In our setting, a quantum classifier takes two steps to classify input samples. In the first step, the classifier takes a quantum state $\rho\in\mathcal{H}$ as input and applies a variational circuit to arrive at an output state $\rho_{\text{out}}$ belonging to a $d'$-dimensional Hilbert space. In the second step, the classifier outputs a label $s\in\{0,1,...,d'-1\}$ according to the largest probability among $\langle 0|\rho_{\text{out}}|0\rangle,\langle 1|\rho_{\text{out}}|1\rangle,...,\langle d'-1|\rho_{\text{out}}|d'-1\rangle$. Based on this, our analysis of $\mu(\mathcal{E})$ will lead to an average performance bound for the classifier \cite{poland2020no}. In the first step from $\rho$ to $\rho_{\text{out}}$, the quantum ground truth is defined as a unitary process $t$. Without loss of generality, we may restrict our discussion to the case of quantum pure states. The training set is rewritten as $\mathcal{S}_N=\{(|\psi_1\rangle,|\phi_1\rangle),...,(|\psi_N\rangle,|\phi_N\rangle)\}$ and the classifier learns a hypothesis operator $V$, which is a unitary process such that $t|\psi_i\rangle=V|\psi_i\rangle=|\phi_i\rangle$ on the training set. The quantum risk function is defined as \cite{monras2017inductive}
\begin{equation}\label{eq:qnfl-risk} R_t(V)\equiv\int d|\psi\rangle||t|\psi\rangle\langle\psi|t^{\dagger}-V|\psi\rangle\langle\psi|V^{\dagger}||_1^2, \end{equation} where $||A||_1$ is the trace norm for matrices \cite{Nielsen2010Quantum}. Now the quantum no free lunch theorem is stated as follows. \textit{Lemma B$1$. (Quantum No Free Lunch)} The quantum risk function in a classification task, averaged over the selection of the quantum ground truth $t$ and the training set $\mathcal{S}_N$ with respect to the Haar measure, is bounded below by \begin{equation}\label{eq:thm4} \mathbb{E}_t[\mathbb{E}_{\mathcal{S}_N}[R_t(V)]]\geq1-\frac{1}{d(d+1)}(N^2+d+1). \end{equation} The proof of this lemma and more discussions of its implications are provided in Refs. \cite{poland2020no,sharma2020reformulation}. Here, we use this lemma to obtain Ineq. (4) in the main text. Note that $||A||_1\leq 1$; hence, for all $\rho=|\psi\rangle\langle\psi|\in \mathcal{E}$, $D(t|\psi\rangle,V|\psi\rangle)=||t|\psi\rangle\langle\psi|t^{\dagger}-V|\psi\rangle\langle\psi|V^{\dagger}||_1\leq 1$. This means that the integrand is at most $1$, regardless of whether the quantum data is correctly classified or not. Next, we come to the case when a quantum input is classified correctly. Without loss of generality, we can assume that the ground truth gives the true label and output state $t|\psi\rangle=|i\rangle$; then, since the quantum data is correctly predicted, $\langle i|V|\psi\rangle\langle\psi|V^{\dagger}|i\rangle\geq\frac{1}{d'}$. From this inequality, we obtain that the fidelity $F(V|\psi\rangle,t|\psi\rangle=|i\rangle)\geq\sqrt{\frac{1}{d'}}$. We can utilize the relation between the fidelity and the trace norm \begin{equation}\label{eq:two-distance} D(\rho,\sigma)^2\leq 1-F(\rho,\sigma)^2, \end{equation} where $\rho,\sigma$ denote arbitrary quantum states.
Hence, for correctly classified quantum data we have $D(t|\psi\rangle,V|\psi\rangle)^2=||t|\psi\rangle\langle\psi|t^{\dagger}-V|\psi\rangle\langle\psi|V^{\dagger}||_1^2\leq 1-F(t|\psi\rangle,V|\psi\rangle)^2\leq 1-\frac {1}{d'}$. As a result, the integral in Eq. \eqref{eq:qnfl-risk} is bounded by \begin{equation}\label{eq:risk-rela} R_t(V)\leq\mu(\mathcal{E})+\frac{d'-1}{d'}(1-\mu(\mathcal{E}))=\frac{1}{d'}(d'-1+\mu(\mathcal{E})). \end{equation} Combining \eqref{eq:thm4} and \eqref{eq:risk-rela}, we obtain a lower bound on $\mu({\mathcal{E}})$ averaged over the ground truth $t$ and the training set $\mathcal{S}_N$, \begin{equation} \mathbb{E}_t[\mathbb{E}_{\mathcal{S}_N}[\mu(\mathcal{E})]]\geq1-\frac{d'}{d(d+1)}(N^2+d+1). \end{equation} This gives Ineq. \eqref{eq:thm2-2} and completes the proof of Theorem $2$. \begin{figure*} \hspace*{-0.48\textwidth} \includegraphics[width=.48\textwidth]{VC.pdf} \includegraphics[width=.48\textwidth]{QCNN.pdf} \caption{The structure of the quantum classifiers used in the numerical simulations. (a) The illustrative structure of a general multi-layer quantum variational classifier that takes an $n$-qubit state $|\psi_{\text{in}}\rangle$ as input and outputs an $m$-qubit state $|\phi_{\text{out}}\rangle$. The classifier consists of $p$ layers, and each layer consists of two rotation units and an entangler unit. Each rotation unit contains a Euler rotation $Z(\theta_{i,u}^{k})X(\theta_{i,v}^{k})$ [$(u,v)=(d,c)$ or $(b,a)$], where $i=1,...,p$ indexes the layers and $k=1,...,m+n$ indexes the qubits. After obtaining the output state $|\phi_{\text{out}}\rangle$, we compute the probabilities of projection measurements and assign the label that corresponds to the largest probability. (b) The illustrative structure of the QCNN classifier. This circuit contains six convolutional layers labeled by $C_1$ to $C_6$, two pooling layers labeled by $P_1$ and $P_2$ respectively, and a fully connected layer labeled by $FC$.
The initial parameters are set to random values at the beginning of the training process. } \label{qcircuit} \end{figure*} \section{C. The structures of quantum classifiers and Encoding Methods} \subsection{I. The structures of quantum classifiers} In recent years, a number of different quantum classifiers have been proposed \cite{schuld2020circuit,farhi2018classification,schuld2017implementing,mitarai2018quantum,li2017hybrid,Schuld2019Quantum,havlivcek2019supervised,zhu2019training,cong2019quantum,wan2017quantum,grant2018hierarchical,du2018implementable,uvarov2020machine,Rebentrost2014Quantum,blank2020quantum,tacchino2019artificial}. Here, we choose some of these classifiers to form the classifier set considered in this paper. As mentioned in the main text, our classifier set contains two QCNNs \cite{cong2019quantum} and six general multi-layer variational classifiers \cite{schuld2020circuit,farhi2018classification,li2017hybrid,mitarai2018quantum}. The sketch of a quantum variational circuit is shown in Fig. \ref{qcircuit}(a). In such a variational circuit model, we first prepare the $(m+n)$-qubit input state $|\psi_{\text{in}}\rangle\otimes|1\rangle^{\otimes m}$, where $|\psi_{\text{in}}\rangle$ is an $n$-qubit state that encodes the complete information of the input sample to be classified. Then we apply a unitary transformation, composed of $p$ layers of interleaved operations, to the state. Each of the $p$ layers contains two rotation units, each performing arbitrary Euler rotations on the Bloch sphere, and an entangler unit consisting of CNOT gates between each pair of neighboring qubits. The adjustable parameters are the rotation angles, collectively denoted by $\Theta$.
This generates a variational state: \begin{equation} |\Phi(\Theta)\rangle=\prod_{i=1}^pU_i(|\psi_{\text{in}}\rangle\otimes|1\rangle^{\otimes m}), \end{equation} where $U_i=\prod_k Z(\theta^k_{i,d})X(\theta^k_{i,c})U_{\text{ent}}Z(\theta^k_{i,b})X(\theta^k_{i,a})$ denotes the unitary operation for the $i$-th layer, with $U_{\text{ent}}$ representing the unitary operation generated by the entangler unit. The structure of the QCNN and the hyperparameters used in this paper are shown in Fig. \ref{qcircuit}(b). The structure of the QCNN is the same as in Ref. \cite{cong2019quantum}. In our numerical simulations, we only focus on two-category classification problems. Thus, we only need one qubit to encode the labels $y=0,1$. After the variational circuits, the state of the output qubits becomes $\rho_{\text{out}}$. We compute $\mathbb{P}(y=m)=\text{Tr}(\rho_{\text{out}}|m\rangle\langle m|)$ and then assign $y=1$ if $\mathbb{P}(y=1)\geq \mathbb{P}(y=0)$ and $y=0$ otherwise. \subsection{II. Quantum encoding for classical data} In the main text, one of the numerical simulations we did is based on the images of handwritten digits. In this dataset, the data is encoded classically, i.e. each image is encoded into an $m$-dimensional vector $\mathbf{v}$ in $\mathbb{R}^m$. To make such classical data processable by quantum classifiers, we need to convert the classical vector into an $n$-qubit quantum (pure) state in a $d=2^n$ dimensional Hilbert space. This conversion is performed by a quantum encoder. In this paper, we use the amplitude encoder to transfer classical data into quantum states \cite{schuld2020circuit,schuld2017implementing,Harrow2009Quantum,cong2016quantum,Rebentrost2014Quantum,kerenidis2017quantum,giovannetti2008architectures,lloyd2013quantum,wiebe2014quantum,giovannetti2008quantum,Aaronson2015Read}.
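A minimal sketch of the amplitude-encoding step, assuming zero-padding up to the next power of two followed by overall normalization (the handling of the zero vector is our own convention):

```python
import numpy as np

def amplitude_encode(v):
    """Zero-pad a feature vector up to the next power of two, normalize it,
    and return the resulting n-qubit amplitudes together with n."""
    v = np.asarray(v, dtype=float)
    n = max(1, int(np.ceil(np.log2(len(v)))))
    padded = np.zeros(2 ** n)
    padded[:len(v)] = v
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm, n

state, n = amplitude_encode([3.0, 4.0, 0.0, 0.0, 1.0])  # 5 features -> 3 qubits
assert n == 3 and len(state) == 8
assert np.isclose(np.linalg.norm(state), 1.0)
```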
For an amplitude encoder, each component of $\mathbf{v}$ is represented by an amplitude of the $n$-qubit state $|\psi_{\text{in}}\rangle$ in the computational basis. Without loss of generality, we assume that $m=2^n$ is a power of $2$; otherwise we can attach $2^n-m$ zeros to the end of the vector $\mathbf{v}$ so that it can be transformed into an $n$-qubit pure state. The encoder can be realized by a circuit whose depth is linear in the number of features \cite{mottonen2004quantum,knill1995approximation,plesch2011quantum}. Under certain conditions, a gate complexity polynomial in $m$ might be needed \cite{grover2002creating,soklakov2006efficient}. Such an encoding procedure can be improved using a more complex approach such as tensorial feature maps \cite{schuld2020circuit}. \subsection{III. The training process of quantum classifiers} In classical machine learning, different loss functions are introduced when training the networks and estimating the performance. In our numerical simulations, we employ a quantum version of the cross-entropy, \begin{equation}\label{eq:lossfunction} \mathcal{L}(h(|\psi\rangle;\Theta),{\bf p})=-\sum_{k=1}^2 p_k\log q_k, \end{equation} where ${\bf q}=(q_1,q_2)$ is the diagonal of the output state, $\text{diag}(\rho_{\text{out}})$, and ${\bf p}=(1,0)$ for $y=0$ and ${\bf p}=(0,1)$ for $y=1$. In the training procedure of a quantum classifier, an optimizer is used to adjust the parameters $\Theta$ to minimize the empirical loss function $\mathcal{L}_N(\Theta)=\frac{1}{N}\sum_{i=1}^N\mathcal{L}(h(|\psi_i\rangle;\Theta),{\bf p}_i)$. In recent years, a large family of gradient-based algorithms have been broadly used in training classical and quantum neural networks \cite{wilde2020stochastic,yamamoto2019natural,stokes2019quantum,kingma2014adam,sashank2018convergence}.
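The cross-entropy loss of Eq.~\eqref{eq:lossfunction} and a symmetric finite-difference gradient can be sketched as follows; the one-qubit probability model used for the check is a hypothetical stand-in for an actual classifier:

```python
import numpy as np

def cross_entropy(q, p):
    """Loss of Eq. (eq:lossfunction): q = diag(rho_out), p = one-hot label."""
    return -np.sum(p * np.log(q))

def central_diff_grad(loss, theta, eps=1e-6):
    """Symmetric difference quotient for each component of Theta."""
    theta = np.asarray(theta, dtype=float)
    grad = np.zeros_like(theta)
    for j in range(len(theta)):
        e = np.zeros_like(theta); e[j] = eps
        grad[j] = (loss(theta + e) - loss(theta - e)) / (2 * eps)
    return grad

# hypothetical one-qubit model: q(theta) = (cos^2(theta/2), sin^2(theta/2)), label y = 0
q_of = lambda th: np.array([np.cos(th[0] / 2) ** 2, np.sin(th[0] / 2) ** 2])
L = lambda th: cross_entropy(q_of(th), np.array([1.0, 0.0]))
g = central_diff_grad(L, [0.8])
# analytic gradient: d/dtheta [-log cos^2(theta/2)] = tan(theta/2)
assert np.allclose(g, [np.tan(0.4)], atol=1e-5)
```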
In the numerical simulations in this research, we use the Adam optimization algorithm \cite{kingma2014adam,sashank2018convergence}, which is a gradient-based learning algorithm with an adaptive learning rate. \begin{figure} \hspace*{-0.24\textwidth} \includegraphics[width=.24\textwidth]{train1.pdf} \includegraphics[width=.24\textwidth]{train2.pdf}\\ \hspace*{-0.24\textwidth} \includegraphics[width=.24\textwidth]{MNISTtrain1.pdf} \includegraphics[width=.24\textwidth]{MNISTtrain2.pdf}\\ \caption{The average loss and accuracy for quantum classifiers $2$ and $8$ during the training process, in classifying handwritten digit images and the ground states of the 1D transverse field Ising model. (a) The training procedure of classifier $2$ (a QCNN classifier) for classifying the ground states. Each epoch contains $30$ iterations. (b) The training procedure of classifier $8$ with depth $p=10$ for classifying the ground states. Each epoch contains $5$ iterations. (c) The training procedure of classifier $2$ for classifying handwritten digit images. Each epoch contains $50$ iterations. (d) The training procedure of classifier $8$ for classifying handwritten digit images. Each epoch represents $10$ iterations.} \label{training} \end{figure} To minimize the loss function using iterative gradient-based methods, we need to calculate the gradient of $\mathcal{L}_N(\Theta)$ with respect to the parameters $\Theta$. Each component of the gradient can be written as $\frac{\partial\mathcal{L}_N(\Theta)}{\partial\theta}=\lim_{\epsilon\rightarrow 0}\frac{1}{2\epsilon}[\mathcal{L}_N(\Theta)|_{\theta+\epsilon}-\mathcal{L}_N(\Theta)|_{\theta-\epsilon}]$, where $\theta$ is one of the parameters in $\Theta$. Owing to the special structures of the quantum classifiers, we use the ``parameter shift rule'' \cite{liu2018differentiable,harrow2019low, lu2020quantum} in our numerical simulations to obtain the required gradients. In Fig.
\ref{training}, we plot the average loss and accuracy of some of the quantum classifiers in our classifier set during the training procedure. The numerical simulations, including the training procedure and the adversarial attacks, were performed on a classical cluster using Yao.jl \cite{Yao} and its extension packages in the Julia language \cite{bezanson2017julia}. To run the simulations on GPUs and to fit the mini-batch gradient descent algorithm, we use CuYao.jl \cite{CuYao}, an efficient GPU extension of Yao.jl that provides a considerable speedup. The Flux.jl \cite{innes2018flux} and Zygote.jl \cite{zygote} packages are used to compute derivatives. We note that the overfitting risk is low, as the losses on the training and validation data are close \cite{srivastava2014dropout}. In Table \ref{tab}, we list the number of parameters for each quantum classifier used in this paper, and their final accuracy in classifying the ground states of the 1D transverse field Ising model. \begin{table}[H] \centering \begin{tabular}{|r|r|r|r|} \hline \hline Classifier & Structure & Number of parameters & Accuracy\\ \hline 1 & QCNN & 44 & 0.917 \\ \hline 2 & QCNN & 92 & 0.950 \\ \hline 3 & Variational Circuit & 144 & 0.923 \\ \hline 4 & Variational Circuit & 171 & 0.930 \\ \hline 5 & Variational Circuit & 198 & 0.940 \\ \hline 6 & Variational Circuit & 225 & 0.930 \\ \hline 7 & Variational Circuit & 252 & 0.947 \\ \hline 8 & Variational Circuit & 279 & 0.955 \\ \hline \hline \end{tabular} \caption{The number of parameters and the final accuracy after the training process for each quantum classifier in classifying the ground states of the 1D Ising model. The accuracy is calculated over a training set that contains $300$ samples. }\label{tab} \end{table} \section{D. Adversarial algorithms} In this section, we provide more details on the algorithms for obtaining adversarial examples and perturbations.
When constructing an adversarial attack on a quantum classifier that takes quantum data as input, we maximize the adversarial risk $\mu(\mathcal{E})$ mentioned in the main text. However, in practice $\mu(\mathcal{E})$ is typically inaccessible. Hence, we consider maximizing the loss function instead. It is worthwhile to mention that a maximal loss function value does not always indicate a maximal risk. In the quantum scenario, we denote the adversarial perturbation attached to the quantum sample as an operator $U_\delta$ that acts on the input state. The maximization problem of adding the perturbation can be described as: \begin{equation}\label{perturbation} U_\delta\equiv\mathop{\arg\max}_{U_\delta\in\Delta}\mathcal{L}(h(U_\delta|\psi\rangle;\Theta^*),{\bf p}), \end{equation} where $\Theta^*$ denotes the optimized parameters after the training process, $\Delta$ is the set of allowed perturbations, $|\psi\rangle$ is the original input state and ${\bf p}$ is the correct label. In the case of studying universal adversarial examples, we have a test set $\mathcal{T}_M=\{(|\psi_1\rangle,y_1),...,(|\psi_M\rangle,y_M)\}$ and a set of quantum classifiers which learn hypothesis functions $h_1,h_2,...,h_k$. In order to obtain universal adversarial examples that can deceive all the quantum classifiers in the set, we solve the following optimization problem: \begin{equation}\label{perturbation:exp1} U_{\delta}^j\equiv\mathop{\arg\max}_{U_\delta^j\in\Delta}\sum_{i=1}^{k}\mathcal{L}(h_i(U_\delta^j|\psi_j\rangle;\Theta^*),{\bf p}_j), \end{equation} where $U_\delta^j$ is the perturbation for the $j$-th sample. For the case of universal adversarial perturbations, we use an identical perturbation to attack all samples in the test set $\mathcal{T}_M$. In this case, we have one quantum classifier and its hypothesis function $h$.
The maximization problem can be expressed in a similar form \begin{equation}\label{perturbation:exp2} U_{\delta}\equiv\mathop{\arg\max}_{U_\delta\in\Delta}\frac{1}{M}\sum_{i=1}^{M}\mathcal{L}(h(U_\delta|\psi_i\rangle;\Theta^*),{\bf p}_i). \end{equation} In general, the set $\Delta$ can be the set of unitary operators that are close to the identity matrix. We use automatic differentiation \cite{rall1996introduction} to improve precision when applying the perturbation. In practice, we restrict the set $\Delta$ to be the product of local unitary operators near the identity matrix. In the white-box attack scenario, the attacker has full information about the classifiers, including their inner structures and loss functions. The attacker can then calculate the gradient of the loss functions, $\nabla\mathcal{L}(h(|\psi\rangle;\Theta^*),{\bf p})$. In this scenario, we use the quantum-adapted Basic Iterative Method (qBIM) introduced in Ref. \cite{lu2020quantum} to solve the optimization problems in Eqs. \eqref{perturbation:exp1} and \eqref{perturbation:exp2}. Compared with the white-box scenario, the adversary in a black-box setting does not have complete information about the quantum classifier. In classical adversarial learning, black-box attacks have been divided into several categories. In a non-adaptive black-box attack \cite{papernot2016transferability,papernot2017practical,tramer2016stealing}, the adversary knows nothing about the classifier's inner structure but can get access to the training data and analyze its distribution. In the adaptive black-box scenario \cite{fredrikson2015model,papernot2017practical,rosenberg2017generic}, the attacker can use the classifier as an oracle without extra information provided. Another category is the strict black-box scenario \cite{hitaj2017deep}, where the data distribution is unknown but the adversary can collect input-output pairs from the target classifier.
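The constrained maximization of Eq.~\eqref{perturbation} can be illustrated with a toy single-qubit example, in which the prediction is read off in the computational basis and $\Delta$ is restricted to $X$-rotations of bounded angle; the model, the label, and the bound $\delta$ are hypothetical simplifications, and a grid search stands in for the gradient-based qBIM updates:

```python
import numpy as np

def RX(a):
    """Single-qubit X-rotation by angle a."""
    c, s = np.cos(a / 2), -1j * np.sin(a / 2)
    return np.array([[c, s], [s, c]])

def loss(psi, p):
    """Cross-entropy against measured probabilities q = (|<0|psi>|^2, |<1|psi>|^2)."""
    q = np.abs(psi) ** 2
    return -np.sum(p * np.log(q + 1e-12))  # small offset avoids log(0)

psi = np.array([1.0, 0.0], dtype=complex)  # toy sample, correct label y = 0
p = np.array([1.0, 0.0])                   # one-hot label

# restrict Delta to X-rotations with |alpha| <= delta
delta = 0.5
alphas = np.linspace(-delta, delta, 201)
best = max(alphas, key=lambda a: loss(RX(a) @ psi, p))

assert loss(RX(best) @ psi, p) >= loss(psi, p)  # the attack never lowers the loss
assert np.isclose(abs(best), delta)             # the maximizer sits on the boundary
```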
In our simulations, we implement the non-adaptive black-box adversarial attack, in which we use the knowledge of one quantum classifier to attack all quantum classifiers in the set, which share the same training set and test set. The result shown in Fig. 2(c) in the main text indicates the effectiveness of such a black-box attack. \end{document}
\section{Introduction} Topological string theory bridges research in mathematics and physics and has been a rich source of insights for both areas. It provides a quantitative handle on physical dualities as well as exact computations, see e.~g.~\cite{Neitzke:2004ni}. Within the context of mirror symmetry, topological string theory provides the tools to study higher genus mirror symmetry \cite{Marinobook,coxkatz,Alimlectures}. Physically, the appeal of studying topological string theory originates from the fact that it shares features of physical strings while at the same time possessing clearer mathematical structures which allow one to seek answers to difficult physical questions. One such aspect, which is the focus of this work, is the fact that the free energy of topological string theory is only defined perturbatively as an asymptotic series in the topological string coupling. This is also known to be true for physical string theories as well as for many quantum field theories, see \cite{Shenker:1990uf,Marinolecture} and references therein. The topological string partition function of a given family of Calabi-Yau (CY) threefolds is defined perturbatively in the topological string coupling $\lambda$. The free energies at genus $g$ are related recursively to the free energies at lower genera by the holomorphic anomaly equations \cite{Bershadsky:1993ta,Bershadsky:1993cx}. The anomaly equations were identified in \cite{Witten:1993ed} as the projective flatness equations of a connection identifying Hilbert spaces obtained from polarization choices of a geometric quantization problem associated to the moduli space of the family. The topological string partition function becomes a flat section of this bundle of Hilbert spaces over the moduli space. 
Further details about this quantum mechanical interpretation, including a possible Hamiltonian as well as a description of other states in this Hilbert space, remain puzzling; see \cite{Neitzke:2007yw} for a description of other potential states in this Hilbert space. The OSV conjecture \cite{Ooguri:2004zv} relates the topological string wave-function to the partition function of black holes. It provides an expectation that the non-perturbative content of topological string theory should be captured by the partition function of objects which are defined non-perturbatively on the same Calabi-Yau family, namely the BPS states forming the black holes. A number of challenges, most sharply collected in \cite{Denef:2007vg}, however limit the applicability of this connection for the study of the non-perturbative structure of topological string theory. Mathematically, the perturbative definition of topological string theory on one side of mirror symmetry corresponds to the study of Gromov-Witten (GW) theory, while the enumerative content of the BPS states forming the black holes is captured by Donaldson-Thomas (DT) invariants; the equivalence of the enumerative geometry content of the two is the MNOP conjecture \cite{MNOP1,MNOP2}. The study of the non-perturbative structure of topological string theory has been much more promising for non-compact CY manifolds which exhibit dualities with Chern-Simons theory and matrix models, see e.~g.~\cite{Marinobook}. This is also the context for which early all genus results were obtained, including the all genus topological string theory on the deformed conifold, whose partition function corresponds to the $c=1$ string as well as to the Gaussian matrix model \cite{Ghoshal:1995wm}. The relevance of this result is that it also provides the expected universal behavior of topological string theory near singularities in the moduli space where finitely many states of the corresponding effective theory are becoming massless.
Another all-genus result for topological string theory comes from a large $N$ duality with Chern-Simons theory \cite{GV} and provides the perturbative expansion of topological string theory on the resolved conifold. The study of the non-perturbative structure of topological string theory in these two cases as well as for other non-compact CY manifolds has benefited greatly from their relation to matrix models, see e.~g.~\cite{Pasquetti:2009jg,Marinolecture} and references therein. For non-compact CY geometries, another path gives further insights into the non-perturbative structure of topological string theory. Considering mirror non-compact CY geometries whose relevant data is captured by the mirror curve, a quantum mechanical problem was put forward in \cite{ADKMV}. The curve equation in these cases is characterized by an algebraic equation in two complex variables which take values in $\mathbb{C}^*$ or $\mathbb{C}$. These variables are identified with conjugate phase space variables and the defining equation of the curve is interpreted as a Hamiltonian whose eigenstates are wave-functions. This quantum curve approach is useful for the study of the relation of topological string theory to integrable systems. The approach of \cite{ADKMV} was revisited in the context of refined topological string theory in \cite{Aganagic:2011mi}, shedding light on the relation of the quantization of integrable systems of supersymmetric theories and topological strings \cite{Nekrasov:2009rc}. Building on the quantum curve developments as well as on a series of insights from the study of ABJM theory \cite{Aharony:2008ug} (see \cite{Marinoloc} and references therein), a proposal for the non-perturbative definition of topological string theory was put forward in \cite{Hatsuda:2013oxa} and further scrutinized in \cite{Grassi:2014zfa}, using spectral properties of the quantum mechanical problem defined by the quantum curve.
Interestingly, the wave functions obtained in this way were related to the Nekrasov-Shatashvili (NS) limit \cite{Nekrasov:2009rc} of the refined topological string, with the quantization parameter $\hbar\sim \frac{1}{\lambda}$ being related to the inverse of the topological string coupling rather than $\hbar \sim \lambda^2$ as is expected from the wave-function interpretation of the anomaly equation \cite{Witten:1993ed}, which suggests perhaps two dual quantum mechanical pictures for the topological string. The spectral properties of the quantum mechanical system however allowed the authors of \cite{Hatsuda:2013oxa,Grassi:2014zfa} to extract both expansions in $\lambda$ and $\frac{1}{\lambda}$. The quantum curve setting was also recently used in \cite{Coman:2018uwk,Coman:2020qgf} to propose non-perturbative partition functions for topological strings on local CY manifolds related to the class $\mathcal{S}$ theories of \cite{GMN}. In the case of the all-genus free energy of the deformed conifold, the Borel resummation gives the Barnes G-function and can be used to access the non-perturbative content of the partition function as well as the corresponding matrix model, see for instance \cite{Pasquetti:2009jg,Marinolecture}. A proposal for the non-perturbative structure of topological string theory on the resolved conifold was given in \cite{Lockhart:2012vp}, making use of a correspondence with supersymmetric indices. In \cite{Hatsuda:2015owa} a modified Borel resummation was applied to the resolved conifold and the expected non-perturbative structure of \cite{Hatsuda:2013oxa,Hatsuda:2015oaa} was obtained. The expected non-perturbative structure of \cite{Hatsuda:2015owa} was further obtained in \cite{Krefl:2015vna} from the exact duality with Chern-Simons theory. Remarkable progress in defining non-perturbative topological string theory was achieved recently in mathematics \cite{BridgelandDT,BridgelandCon}, inspired by \cite{Gaiotto:2014bza}.
In \cite{BridgelandDT} a Riemann-Hilbert (RH) problem was put forward which describes the wall-crossing phenomena in Donaldson-Thomas theory. This corresponds physically to the wall-crossing phenomena of BPS states, whose recent study has been advanced by \cite{GMN} and many others. In \cite{BridgelandDT}, the solution of the RH problem for the Argyres-Douglas $A_1$ theory was given; this corresponds on the topological string side to the deformed conifold free energy.\footnote{The observation of the link to the deformed conifold has not been made in \cite{BridgelandDT}, but is perhaps obvious to the experts.} The subsequent work \cite{BridgelandCon} solves the RH problem for the resolved conifold and suggests the resulting Tau function as a non-perturbative definition of the topological string theory on the resolved conifold, given its analytic properties and the fact that it contains as an asymptotic expansion the Gromov-Witten theory of the resolved conifold. In a sense, the work of Bridgeland provides several missing links in the expectation that the BPS content of a given geometry provides the non-perturbative definition of topological string theory on that geometry. The details of this program however become quickly very challenging, since it requires as an input the complete relevant BPS spectra and their wall-crossing behavior. This seems currently, especially for compact geometries, very challenging if not intractable. It is natural to wonder whether there is a more intrinsic path towards non-perturbative topological string theory which does not rely on dualities to other physical or mathematical problems and is as such not limited in its scope of applicability.
Given the asymptotic nature of the expansion of the free energies, the Borel resummation of the free energy as well as the application of resurgence techniques are such paths, see e.~g.~\cite{Aniceto:2011nu,Santamaria:2013rua,Couso-Santamaria:2014iia} as well as \cite{Couso-Santamaria:2016vwq} for a matching of the resurgence results with \cite{Grassi:2014zfa}. In the case of asymptotic series stemming from differential equations with irregular singular points, the knowledge of the differential equation itself is often more powerful than the knowledge of the asymptotic expansion around singular points. Especially in problems of mathematical physics, the ODEs in question are often ones which have been well studied for a long time. Such a differential equation in the topological string coupling is however not part of the defining data of topological strings. The quest for such a differential equation in the string coupling was the motivation for \cite{Alim:2015qma}. In that paper, the holomorphic anomaly equations \cite{Bershadsky:1993cx} as well as the polynomial structure of the higher genus topological string amplitudes \cite{Yamaguchi:2004bt,Alim:2007qj} were used to obtain a differential equation in the string coupling in a certain scaling limit. The relevant differential equation turned out to be the Airy equation. Apart from the expected asymptotic expansion in this limit, the equation has a solution which is non-perturbative in the string coupling. The latter was subsequently related to non-perturbative resurgence effects of NS-branes \cite{Couso-Santamaria:2015hva}. The aim of this work is to extend the study of the intrinsic characterization of the non-perturbative structure of topological string theory. We are in a fortunate situation where many pieces of the puzzle are already available in the recent physics and especially mathematics literature and can be readily used and put together.
The starting point is a difference equation which was first proved in \cite{Iwaki} for the free energies of the WKB analysis of the Weber curve. This curve is related to the deformed conifold. A similar difference equation was proved in \cite{alim2020difference} for the free energies of the resolved conifold. Both derivations only have the asymptotic expansion as their input. A first aim of this work is to use the expected universal behavior of topological string theory on arbitrary families of CY threefolds near singular loci in the moduli space, where finitely many states of the effective theory become massless, and to derive a difference equation for the topological string free energies in a limit around these loci. We next identify the Barnes G-function as a solution for the difference equations of the deformed conifold as well as for the universal behavior near the singularities. A solution of the difference equation for the resolved conifold \cite{alim2020difference} was identified in \cite{alim2021integrable} using building blocks of Bridgeland's Tau function for the resolved conifold \cite{BridgelandCon}. The explicit analytic solutions can be used to obtain the strong coupling expansions of topological string theory as well as to express their non-perturbative content. The characteristic traits of non-perturbative effects due to D-branes and NS-branes are obtained. Moreover, for the resolved conifold an expression involving both the Gopakumar-Vafa resummation as well as the refined topological string in the Nekrasov-Shatashvili limit is obtained. The latter was put forward in \cite{Hatsuda:2013oxa,Hatsuda:2015owa}; we obtain a matching with their results up to some factors, which are discussed. The organization of this work is as follows. In sec.~\ref{sec:freeenergies}, the topological string free energies are recalled as well as their Gromov-Witten and Gopakumar-Vafa expansions.
We proceed with a discussion of the expected universal behavior of topological string theory near singularities where finitely many states of the underlying effective theory in $4d$ become massless. The explicit expressions of the topological string free energies for the deformed and resolved conifold are given. In sec.~\ref{sec:diffeq}, the difference equations for the deformed and resolved conifold geometries are introduced and a similar equation for the universal behavior of the free energies in a limit around singular points is derived. We proceed with discussing the analytic solutions in the string coupling of the difference equations and extract their strong coupling expansion as well as their non-perturbative content in sec.~\ref{sec:nonpertcontent}. We furthermore give an exact non-perturbative relation between the topological string partition function and the generating function of non-commutative DT invariants. Finally, we study in detail the expansion of the free energies of the resolved conifold near the locus where the $\mathbb{P}^1$ of the resolution shrinks to zero size and the corresponding coordinate $t\rightarrow 0$. We prove an exact expression for the leading singular behavior as well as the sub-leading terms. In particular, the constant terms in this expansion turn out to be the contributions of constant maps in Gromov-Witten theory; the higher-order terms are polynomials in the coordinate. Moreover, this provides a mathematical proof in this case of the \emph{gap condition} which is expected on physical grounds and was used in \cite{Huang:2006si,Huang:2006hq} in the study of higher genus mirror symmetry. We finish in sec.~\ref{sec:conclusions} with the conclusions.
\section{Topological string free energies}\label{sec:freeenergies} To a mirror family of CY threefolds, topological string theory associates the topological string partition function, which is defined as an asymptotic series in the topological string coupling $\lambda$, summing over the free energies associated to the world-sheets of genus $g$: \begin{equation} Z_{top} (\lambda,t)= \exp \left(\sum_{g=0}^{\infty} \lambda^{2g-2} \mathcal{F}^{g}(t)\right)\,. \end{equation} Here $t=(t^1,\dots,t^n)$ is a set of distinguished local coordinates on the underlying moduli space $\mathcal{M}$, which is of dimension $n=h^{1,1}(X_t)=h^{2,1}(\check{X}_{t(z)})$. $X_t$ and $\check{X}_{t(z)}$ are a mirror pair of CY threefolds which correspond to the A-model and B-model sides of mirror symmetry. The map $t(z)$ on the B-model side, expressing the distinguished coordinates in terms of the more natural complex structure coordinates $z$, is the mirror map. It is useful to consider the total space of a line bundle $\mathcal{L}\rightarrow \mathcal{M}$ whose sections correspond to a distinguished vacuum state in the underlying SCFT and which has a different geometric interpretation on the two sides of mirror symmetry. $\mathcal{M}$ is a projective special K\"ahler manifold. The special geometry as well as the holomorphic anomaly equations of BCOV, together with the boundary conditions of sec.~\ref{sec:boundary}, can be used to geometrically characterize the topological string free energies at each genus. The latter are in particular non-holomorphic sections of $\mathcal{L}^{2-2g}$ \cite{Bershadsky:1993cx}. A holomorphic limit can be considered by taking the base point on $\mathcal{M}$ to $i\infty$ and expanding in canonical coordinates.
\subsection{GW and GV expansions} In the holomorphic limit, together with an expansion around a distinguished large volume point in the moduli space, the topological string free energies become the generating functions of higher genus Gromov-Witten invariants on the A-model side of mirror symmetry. The GW potential of $X$ is the following formal power series: \begin{equation} F(\lambda,t) = \sum_{g\ge 0} \lambda^{2g-2} F^g(t)= \sum_{g\ge 0} \lambda^{2g-2} \sum_{\beta\in H_2(X,\mathbb{Z})} N^g_{\beta} \,q^{\beta}\, , \end{equation} where $q^{\beta} := \exp (2\pi i \langle t,\beta \rangle)$ is a formal variable living in a suitable completion of the effective cone in the group ring of $H_2(X,\mathbb{Z})$. The GW potential can furthermore be written as: \begin{equation} F=F_{\beta=0} + \tilde{F}\,, \end{equation} where $F_{\beta=0}$ denotes the contribution from constant maps and $ \tilde{F}$ the contribution from non-constant maps. The constant map contributions at genus $0$ and $1$ are $t$-dependent, and the higher genus constant map contributions take the universal form \cite{Faber}: \begin{equation} F_{\beta=0}^g = \frac{\chi(X)(-1)^{g-1}\, B_{2g}\, B_{2g-2}}{4g (2g-2)\, (2g-2)!}\,, \quad g\ge2\,, \end{equation} where $\chi(X)$ is the Euler characteristic of $X$ and the Bernoulli numbers $B_n$ are generated by: \begin{equation} \frac{w}{e^w-1} = \sum_{n=0}^{\infty} B_n \frac{w^n}{n!}\,. \end{equation} The Gopakumar-Vafa (GV) resummation of the GW potential \cite{Gopakumar:1998ii,Gopakumar:1998jq} reformulates the non-constant part of the GW potential in terms of the Gopakumar-Vafa invariants $n^g_{\beta} \in \mathbb{Z}$, which are given by a count of electrically charged $M_2$ branes in an M-theory setup. The non-constant part of the GW potential can thus be written as: \begin{equation}\label{GVresum} \tilde{F}(\lambda,t)= \sum_{\beta>0}\sum_{g\ge 0} n^g_{\beta}\, \sum_{k\ge 1} \frac{1}{k} \left( 2 \sin \left( \frac{k\lambda}{2}\right)\right)^{2g-2} q^{k\beta}\,.
\end{equation} In particular, $$ \tilde{F}^0(t)=\sum_{\beta>0} n_{\beta}^0 \textrm{Li}_3(q^{\beta})\,, \quad q^{\beta}= \exp(2\pi i t^{\beta})\,.$$ \subsection{Behavior near singularities}\label{sec:boundary} Fixing a frame for $\mathcal{L}$, we will denote the functions which are obtained from $\mathcal{F}^g \in \mathcal{L}^{2g-2}$ by $F^g$. The leading singular behavior of the free energy $F^g$ at a conifold locus has been determined in \cite{Bershadsky:1993ta,Bershadsky:1993cx,Ghoshal:1995wm,Antoniadis:1995zn,Gopakumar:1998ii,Gopakumar:1998jq}: \begin{equation} \label{Gap} F^g(t_c)= b \frac{B_{2g}}{2g (2g-2) t_c^{2g-2}} + O(t^0_c), \qquad g>1\,. \end{equation} Here $t_c\sim \Delta^{\frac{1}{m}}$ is the special coordinate at the discriminant locus $\Delta=0$.\footnote{$\Delta$ is usually defined using the algebraic moduli of the problem, see e.~g.~appendix \ref{sec:quintic}.} For a conifold singularity $b=1$ and $m=1$. In particular, the leading singularity in \eqref{Gap} as well as the absence of subleading singular terms follows from the Schwinger loop computation of \cite{Gopakumar:1998ii,Gopakumar:1998jq}, which computes the effect of the extra massless hypermultiplet in the space-time theory \cite{Vafa:1995ta}. The singular structure and the ``gap'' of subleading singular terms have also been observed in the dual matrix model \cite{Aganagic:2002wv} and were first used in \cite{Huang:2006si,Huang:2006hq} to fix the holomorphic ambiguity at higher genus. The space-time derivation of \cite{Gopakumar:1998ii,Gopakumar:1998jq} is not restricted to the conifold case and applies also to the case of $m>1$ singularities which give rise to a different spectrum of extra massless vector and hypermultiplets in space-time. The coefficient of the Schwinger loop integral is a weighted trace over the spin of the particles~\cite{Vafa:1995ta, Antoniadis:1995zn}, leading to the prediction $b=n_H-n_V$ for the coefficient of the leading singular term.
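Both the constant-map contributions quoted above and the coefficient of the leading singularity in \eqref{Gap} are built from Bernoulli numbers; they can be evaluated in exact arithmetic from the generating-function recursion. A short sketch:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(nmax):
    """B_0..B_nmax from w/(e^w - 1) = sum_n B_n w^n/n!, via the equivalent
    recursion sum_{k=0}^{n} C(n+1, k) B_k = 0 for n >= 1."""
    B = [Fraction(1)]
    for n in range(1, nmax + 1):
        B.append(-sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n)) / (n + 1))
    return B

B = bernoulli(8)
assert B[2] == Fraction(1, 6) and B[4] == Fraction(-1, 30)

# leading conifold coefficient B_{2g}/(2g(2g-2)) for b = 1, g = 2: B_4/8 = -1/240
assert B[4] / (2 * 2 * (2 * 2 - 2)) == Fraction(-1, 240)

def F_const(g, chi):
    """Constant-map contribution for g >= 2 in the form quoted above."""
    return (chi * (-1) ** (g - 1) * B[2 * g] * B[2 * g - 2]
            / (4 * g * (2 * g - 2) * factorial(2 * g - 2)))

# for g = 2 this evaluates to chi/5760, the familiar constant-map number
assert F_const(2, 1) == Fraction(1, 5760)
```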
A higher genus result with a singularity leading to $b=1-1=0$ was studied in \cite{Alim:2008kp}, following the effective field theory analysis of \cite{Klemm:1996kv}. \subsection{Deformed and resolved conifolds} The conifold singularity refers to a singular point in a threefold that locally looks like \begin{equation} (x_1\, x_2 - x_3 \,x_4=0) \subset \mathbb{C}^4\,. \end{equation} The singularity can be deformed by introducing a parameter $a\in\mathbb{C}^*$; after a change of the local coordinates in $\mathbb{C}^4$ this can be brought to the form: \begin{equation} X_{a}= \left\{ x_1\, x_2 + y^2= x^2 - 4 a \right\}\,, \end{equation} which defines the family of non-compact CY threefolds known as the deformed conifold. Note that the addition of a complex parameter in the defining equation amounts to a change of the complex structure, so it is natural to study this geometry using the B-model topological string theory. The result of \cite{Ghoshal:1995wm} is that the topological string free energy on this geometry, obtained from the relation to the $c=1$ string, has the form\footnote{Compared to \cite{Ghoshal:1995wm} we have added the $\lambda$ dependence as well as the $-\frac{3}{4}a^2$ in order to match more recent results, such as \cite{Iwaki}.}: \begin{equation}\label{defconfree} F(\lambda,a)= \lambda^{-2 } \, \left( \frac{a^2}{2} \log a -\frac{3}{4} a^2\right) -\frac{1}{12} \log a + \sum_{g=2}^{\infty} \frac{B_{2g}}{2g(2g-2) a^{2g-2}} \lambda^{2g-2}\,. \end{equation} We note that in the quantum curve setting, the curve which is quantized is: \begin{equation} \Sigma_a:= \left\{ y^2=x^2-4a\subset \mathbb{C}^2\right\}\,, \end{equation} and corresponds to the harmonic oscillator. In the exact WKB setting, this curve is known as the Weber curve and is discussed, e.~g.~in \cite{Iwaki}. Furthermore the WKB analysis encodes the BPS content of the corresponding effective $4d,\mathcal{N}=2$ theory obtained from compactifying type IIB string theory on this non-compact CY.
In this case this gives the Argyres-Douglas $A_1$ theory. The CY threefold given by the total space of the rank two bundle over the projective line: \begin{equation} X_t := \mathcal{O}(-1) \oplus \mathcal{O}(-1) \rightarrow \mathbb{P}^1\,, \end{equation} corresponds to the resolution of the conifold singularity in $\mathbb{C}^4$ and is known as the resolved conifold. This geometry is defined on the A-model side of mirror symmetry and $t$ corresponds to: \begin{equation} t= \int_{C} B+ i \omega\,, \end{equation} where $B\in H^{2}(X,\mathbb{R})/H^{2}(X,\mathbb{Z})$ is the B-field, $\omega$ is the K\"ahler form and $C$ corresponds to the $\mathbb{P}^1$ class in this geometry. The GW potential for this geometry was determined in physics \cite{Gopakumar:1998ii,GV}, and in mathematics \cite{Faber}, with the following outcome for the non-constant maps:\footnote{See also \cite{MM} for the determination of $F^g$ from a string theory duality and the explicit appearance of the polylogarithm expressions.} \begin{equation}\label{resconfree} \tilde{F}(\lambda,t)= \sum_{g=0}^{\infty} \lambda^{2g-2} \tilde{F}^g(t)= \frac{1}{\lambda^2} \textrm{Li}_{3}(q)+ \sum_{g=1}^{\infty} \lambda^{2g-2} \frac{(-1)^{g-1}B_{2g}}{2g (2g-2)!} \textrm{Li}_{3-2g} (q) \, , \end{equation} where $q:=\exp(2\pi i \,t)$ and the polylogarithm is defined by: \begin{equation} \textrm{Li}_s(z) = \sum_{n=1}^{\infty} \frac{z^n}{n^s}\, ,\quad s\in \mathbb{C}\,. \end{equation} \section{Difference equations and their solutions}\label{sec:diffeq} \subsection{Difference equations} In the following we will review the derivation of a difference equation which was obtained in \cite{Iwaki} for the free energies of the Weber curve, which correspond to the free energies of the deformed conifold geometry, and which was adapted in \cite{alim2020difference} for the free energies of the resolved conifold.
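As an aside, the polylogarithm form \eqref{resconfree} is the genus expansion of the GV resummation \eqref{GVresum} specialized to the single non-vanishing invariant $n^0_1=1$ of the resolved conifold; this can be checked numerically at a sample point (a sketch, truncating both sums; the helper names are ours):

```python
import math

def Li(s, q, nmax=400):
    # polylogarithm Li_s(q) = sum_{n >= 1} q^n / n^s, for |q| < 1
    return sum(q ** n / n ** s for n in range(1, nmax + 1))

def bern(n):
    # Bernoulli numbers from w/(e^w - 1) via the standard recurrence
    B = [1.0]
    for m in range(1, n + 1):
        B.append(-sum(math.comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B[n]

lam, q = 0.1, 0.2   # sample point: |q| < 1, small lambda

# GV resummation with the single invariant n^0_1 = 1 of the resolved conifold
F_gv = sum(q ** k / (k * (2 * math.sin(k * lam / 2)) ** 2) for k in range(1, 300))

# genus expansion, truncated at genus 5
F_genus = Li(3, q) / lam ** 2 + sum(
    lam ** (2 * g - 2) * (-1) ** (g - 1) * bern(2 * g)
    / (2 * g * math.factorial(2 * g - 2)) * Li(3 - 2 * g, q)
    for g in range(1, 6))

print(abs(F_gv - F_genus))  # tiny: set by the truncation of the genus sum
```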
\begin{thm} \cite{Iwaki,alim2020difference} \label{diffeq} The free energy of the deformed conifold \cite{Iwaki} satisfies the following difference equation: \begin{equation} F(\lambda,a+\lambda) - 2 F(\lambda,a) + F(\lambda,a-\lambda)= \frac{\partial^2}{\partial a^2}\, F^0(a) \,,\end{equation} with: \begin{equation} F^0(a)= \frac{1}{2} a^2 \log a -\frac{3}{4} a^2\,, \end{equation} and the free energy of the resolved conifold satisfies \cite{alim2020difference}: \begin{equation} \tilde{F}(\lambda,t+\check{\lambda}) - 2 \tilde{F}(\lambda,t) + \tilde{F}(\lambda,t-\check{\lambda})= \left(\frac{1}{2\pi }\frac{\partial}{\partial t}\right)^2\, \tilde{F}^0(t) \,,\quad \check{\lambda}=\frac{\lambda}{2\pi}\,, \end{equation} with \begin{equation} \quad \tilde{F}^{0}(t)= \textrm{Li}_{3}(q)\,. \end{equation} \end{thm} The two versions of the theorem were proved in \cite{Iwaki,Iwaki2,alim2020difference}. The proof is included in the appendix \ref{sec:proof}; a notable feature is that it only requires the asymptotic expansion. For the discussion of the universal structure of topological strings near finite distance singularities, the following corollary of the above theorem is obtained: \begin{cor}\label{diffarb} Let $X_t$ and $\check{X}_{t(z)}$ be a mirror pair of CY threefolds corresponding to the $A$- and $B$-sides of mirror symmetry, which can be thought of as the fibers of corresponding families over a base manifold $\mathcal{M}$ with dim $\mathcal{M}=n$, where $n=h^{1,1}(X_t)=h^{2,1}(\check{X}_{t(z)})$. We consider local coordinates $t=\left\{t^1,\dots,t^n\right\}$.
Let $t_c$ be a coordinate near a singularity at finite distance in the special K\"ahler metric. We assume the following behavior of the topological string free energies, motivated by physical expectations:\footnote{In the following, w.~l.~o.~g.~we only highlight the dependence on the coordinate which corresponds to the singularity; for a study of the behavior of topological strings on higher dimensional moduli spaces near singularities in a compact setting see, e.~g.~\cite{Haghighat:2009nr,Alim:2012ss}.} \begin{equation} F^0(t_c)= b\left( \frac{1}{2} t_c^2 \log t_c -\frac{3}{4} t_c^2 \right)+ \mathcal{O}(t_c^3) \,,\quad F^1(t_c)= -\frac{b}{12} \log t_c + \mathcal{O}(t_c)\,, \end{equation} and \begin{equation} F^{g}(t_c)= b \frac{B_{2g}}{2g (2g-2) t_c^{2g-2}} + O(t_c^0), \quad g\ge 2\,. \end{equation} Consider now $\Lambda \in \mathbb{C}^*$ and the rescaling: $$ \lambda' = \lambda \cdot \Lambda\,, \quad t_c' = t_c \cdot \Lambda\,,$$ and define: $$ F'(\lambda',t_c') := \lim_{\Lambda \rightarrow \infty} \left( F(\lambda',t_c') + b\frac{(t_c')^2}{2(\lambda')^2}\log \Lambda -\frac{b}{12}\log \Lambda\right)\,,$$ then the following difference equation is satisfied by $F'$: \begin{equation}\label{diffeqarb} F'(\lambda',t^{\circ},t_c'+\lambda') - 2 F'(\lambda',t^{\circ},t'_c) + F'(\lambda',t^{\circ},t_c'-\lambda') = b\, \log t'_c \,. \end{equation} \end{cor} \begin{proof} First consider the all-genus topological string free energy near $t_c\rightarrow 0$, whose behavior is given by the assumptions: \begin{equation} \begin{split} F(\lambda,t_c)&= \sum_{g=0}^{\infty} \lambda^{2g-2} F^g(t_c)= \frac{1}{\lambda^2}\left( b\left( \frac{1}{2} t_c^2 \log t_c -\frac{3}{4} t_c^2 \right)+ \mathcal{O}(t_c^3)\right) \\ &-\frac{b}{12} \log t_c + \mathcal{O}(t_c) + b \sum_{g=2}^{\infty} \lambda^{2g-2} \left(\frac{B_{2g}}{2g (2g-2) t_c^{2g-2}} + \mathcal{O}(t_c^0)\right)\,.
\end{split} \end{equation} Inserting the rescaled $t_c'$ and $\lambda'$ we obtain: \begin{equation} \begin{split} F(\lambda',t'_c)&= \sum_{g=0}^{\infty} (\lambda')^{2g-2} F^g(t'_c)= \frac{1}{(\lambda')^2} b\left( \frac{1}{2} (t'_c)^2 \log t'_c -\frac{3}{4} (t'_c)^2 \right)- b\frac{(t'_c)^2}{2(\lambda')^2}\log \Lambda + \mathcal{O}(1/\Lambda)\\ & -\frac{b}{12} \log t'_c +\frac{b}{12}\log \Lambda + \mathcal{O}(1/\Lambda) + b \sum_{g=2}^{\infty} (\lambda')^{2g-2} \left(\frac{B_{2g}}{2g (2g-2) (t'_c)^{2g-2}} + \mathcal{O}(1/\Lambda^{2g-2})\right)\,. \end{split} \end{equation} Hence we obtain: \begin{equation} F'(\lambda',t_c') := \lim_{\Lambda \rightarrow \infty} \left( F(\lambda',t_c') + b\frac{(t_c')^2}{2(\lambda')^2}\log \Lambda -\frac{b}{12}\log \Lambda\right) = b\cdot F_{\textrm{def}}(\lambda',t'_c)\,, \end{equation} where $F_{\textrm{def}}$ is the free energy of the deformed conifold \eqref{defconfree}. The proof of the corollary therefore proceeds as in the latter case, which is given in the appendix. \end{proof} \subsection{Solution for the deformed conifold} We proceed with the discussion of the solutions of the difference equations. We begin by introducing the Barnes G-function, which is defined by: \begin{equation}\label{BarnesG} \begin{split} G(z+1)&= \Gamma(z) \, G(z)\,, \quad z\in \mathbb{C}\,, \\ G(1)&=G(2)=G(3)=1\, ,\quad \frac{d^3}{dz^3} \log G(z) \ge 0\,, \quad z>0. \end{split} \end{equation} One of the equivalent forms to express the G-function is the Weierstrass canonical product, see \cite{Adamchik}: \begin{equation} G(z+1)= (2\pi)^{\frac{z}{2}} \exp \left( - \frac{z+z^2(1+\gamma)}{2}\right) \, \prod_{k=1}^{\infty} \left( 1+ \frac{z}{k} \right)^k \exp \left(\frac{z^2}{2k} -z \right)\,, \end{equation} where $\gamma$ is the Euler-Mascheroni constant.
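The Weierstrass product can be checked numerically against the defining functional equation $G(z+1)=\Gamma(z)\,G(z)$; a sketch with a truncated product (the truncation at $K$ factors leaves an error of order $1/K$; the function name is ours):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def log_barnes_G1p(z, K=200000):
    # log G(z+1) from the Weierstrass product above, truncated at K factors
    s = 0.5 * z * math.log(2 * math.pi) - (z + z * z * (1 + GAMMA)) / 2
    for k in range(1, K + 1):
        s += k * math.log1p(z / k) + z * z / (2 * k) - z
    return s

# functional equation: log G(z+1) - log G(z) = log Gamma(z)
z = 0.7
resid = (log_barnes_G1p(z) - log_barnes_G1p(z - 1.0)) - math.lgamma(z)
print(abs(resid))            # small, set by the O(1/K) truncation error
print(log_barnes_G1p(0.0))   # log G(1) = 0, exactly term by term
```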
The logarithm of the Barnes G-function moreover has the following Taylor expansion around $z=0$: \begin{equation} \log G(z+1)= \frac{1}{2} \left( \log 2\pi -1\right) z - (1+\gamma) \frac{z^2}{2} +\sum_{n=3}^{\infty} (-1)^{n-1} \zeta(n-1) \frac{z^n}{n}\,, \end{equation} as well as the following asymptotic expansion for $z\rightarrow \infty$: \begin{equation} \log G(z+1) = \frac{z^2}{2} \left( \log z - \frac{3}{2}\right) -\frac{1}{12} \log z - z \zeta'(0) + \zeta'(-1)+ \sum_{g=2}^{\infty} \frac{B_{2g}}{2g(2g-2) z^{2g-2}}\,, \end{equation} where $\zeta$ is the Riemann $\zeta$-function. We define: \begin{equation}\label{defconnonp} F_{np}(\lambda,a):= \log G\left(1+\frac{a}{\lambda}\right) +\frac{a^2}{2\lambda^2} \log \lambda+ \frac{a}{\lambda} \zeta'(0) + \frac{1}{12}\log(\lambda)+\zeta'(-1)\,, \end{equation} and obtain the following: \begin{prop} $F_{np}(\lambda,a) $ is the unique solution to the difference equation for the free energies of the deformed conifold with asymptotic behavior fixed by \eqref{defconfree}. \end{prop} \begin{proof} Using the functional equation of the Barnes $G$-function we obtain: \begin{equation} \begin{split} &\log G\left(1+\frac{a+\lambda}{\lambda}\right)+ \log G\left(1+\frac{a-\lambda}{\lambda}\right) - 2 \log G\left(1+\frac{a}{\lambda}\right) \\ &= \log \Gamma\left(\frac{a}{\lambda}+1\right) - \log \Gamma \left( \frac{a}{\lambda}\right) = \log \frac{a}{\lambda}\,. \end{split} \end{equation} From the additional terms in \eqref{defconnonp}, only the quadratic term in $a$ contributes to the r.h.s. of the difference equation. We obtain: \begin{equation} F_{np}(\lambda,a+\lambda) - 2 F_{np}(\lambda,a) + F_{np}(\lambda,a-\lambda)= \log a = \frac{\partial^2}{\partial a^2}\, F^0(a)\,. \end{equation} For a proof of the uniqueness of the result one may follow exactly the same reasoning as in \cite{alim2021integrable}.
\end{proof} \subsection{Solution for the resolved conifold} The solution of the difference equation for the resolved conifold was identified in \cite{alim2021integrable}, by adapting building blocks of Bridgeland's Tau function for the resolved conifold \cite{BridgelandCon}. The special functions in \cite{BridgelandCon} involve the multiple sine functions, which are defined using the Barnes multiple Gamma functions \cite{Barnes}. For a variable $z\in \mathbb{C}$ and parameters $\omega_1,\ldots,\omega_r \in \mathbb{C}^{*}$ these are defined by: \begin{equation} \sin_r(z\,|\, \omega_1,\dots,\omega_r):= \Gamma_{r}(z\, |\, \omega_1,\dots,\omega_r) \cdot \Gamma_{r}\left(\sum_{i=1}^r \omega_i - z\, |\, \omega_1,\dots,\omega_r\right)^{(-1)^r} \,. \end{equation} For further definitions, see e.~g.~\cite{BridgelandCon,Ruijsenaars1} and references therein. We introduce furthermore the generalized Bernoulli polynomials, defined by the generating function: \begin{equation} \frac{x^r\, e^{zx}}{ \prod_{i=1}^r (e^{\omega_i x}-1)} = \sum_{n=0}^{\infty} \frac{x^n}{n!} \, B_{r,n}(z\,|\, \omega_1,\,\dots,\omega_r)\,. \end{equation} Consider now the function $G_3(z\,|\,\omega_1,\omega_2)$ of \cite[Sec. 4.2]{BridgelandCon}, defined by:\footnote{A subscript $3$ is added here to $G_3$ compared to \cite{BridgelandCon} to avoid confusion with the Barnes G-function.} \begin{equation}\label{g3def} G_3(z\, | \, \omega_1,\omega_2) := \exp\left(\frac{\pi i}{6} \cdot B_{3,3}(z+\omega_1\,|\,\omega_1,\omega_1,\omega_2)\right) \cdot \sin_3(z+\omega_1\, |\, \omega_1,\omega_1,\omega_2), \end{equation} and define a function \begin{equation}\label{resconfreedef} F_{np}(\lambda,t):= \log G_3(t\,|\,\check{\lambda},1)\,.
\end{equation} It was shown in \cite{alim2021integrable} that $F_{np}$ is the unique solution of the difference equation with the boundary condition: $$ \lim_{\lambda \rightarrow 0} \lambda^2 F_{np}(\lambda,t) = \tilde F^0(t)=\mathrm{Li}_3(e^{2\pi i t})\,,$$ and that moreover it has the following asymptotic expansion \cite{BridgelandCon,alim2021integrable}: \begin{equation} F_{np} (\lambda,t) \sim \sum_{g=0}^{\infty} \lambda^{2g-2} \tilde{F}^{g}(t)\,, \end{equation} where $\tilde{F}^g(t)$ are the non-constant parts of the conifold free energies defined in \eqref{resconfree}. \section{Non-perturbative content of the solutions}\label{sec:nonpertcontent} \subsection{Deformed conifold} We start by discussing the non-perturbative content of topological string theory on the deformed conifold, whose structure is universal for topological strings near finite distance singularities as discussed in sec.~\ref{sec:boundary}. We therefore consider the expansion of the solution of the difference equation as $\lambda\rightarrow\infty$. Using the Taylor series expansion of the Barnes G-function we obtain:\footnote{We have used $\zeta'(0)=-\frac{1}{2}\log 2\pi$.} \begin{equation} \begin{split} Z_{np}(\lambda,a)&=\exp(F_{np}(\lambda,a))\\ &= \exp\left(\zeta'(-1)+\frac{1}{12}\log(\lambda)- \frac{1}{2} \frac{a}{\lambda} - (1+\gamma-\log \lambda) \frac{a^2}{2 \lambda^2} \right) \\ & \times \exp \left( \sum_{n=3}^{\infty} (-1)^{n-1} \zeta(n-1) \frac{a^n}{\lambda^n n} \right) \,. \end{split} \end{equation} It is expected that non-perturbative effects due to D-branes are signaled by a factor of $e^{-1/g_s}$ and effects due to NS5-branes come with factors of $e^{-1/g_s^2}$ \cite{Strominger:1990et,Callan:1991dj,Becker:1995kb}, see also \cite{Alexandrov:2011va} and references therein, as well as \cite{Couso-Santamaria:2015hva} for a discussion in the resurgence and topological string context, where $g_s$ refers to the physical string theory coupling under consideration.
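Both the difference equation and the expansion above can be verified numerically from \eqref{defconnonp}, using a truncated Weierstrass product for $\log G$ and numerical values for $\zeta'(0)$ and $\zeta'(-1)$ (a sketch; note that the Taylor expansion fixes the relative sign of the $\log\lambda$ piece in the quadratic term):

```python
import math

GAMMA = 0.5772156649015329           # Euler-Mascheroni constant gamma
ZP0 = -0.5 * math.log(2 * math.pi)   # zeta'(0) = -log(2 pi)/2
ZPM1 = -0.1654211437004509           # zeta'(-1), numerical value

def logG1p(z, K=200000):
    # log G(z+1) via the (truncated) Weierstrass product
    s = 0.5 * math.log(2 * math.pi) * z - (z + z * z * (1 + GAMMA)) / 2
    for k in range(1, K + 1):
        s += k * math.log1p(z / k) + z * z / (2 * k) - z
    return s

def F_np(lam, a):
    # the solution of the deformed conifold difference equation
    return (logG1p(a / lam) + a * a / (2 * lam * lam) * math.log(lam)
            + (a / lam) * ZP0 + math.log(lam) / 12 + ZPM1)

# second difference in a with step lam should equal log(a)
lam0, a0 = 2.0, 1.5
d2 = F_np(lam0, a0 + lam0) - 2 * F_np(lam0, a0) + F_np(lam0, a0 - lam0)
print(abs(d2 - math.log(a0)))   # small, up to product truncation

# strong coupling: expansion in a/lam, with the relative sign of log(lam)
lam1, a1 = 10.0, 1.0
zeta = lambda s: sum(m ** (-s) for m in range(1, 100000))
series = (ZPM1 + math.log(lam1) / 12 - a1 / (2 * lam1)
          - (1 + GAMMA - math.log(lam1)) * a1 * a1 / (2 * lam1 * lam1)
          + sum((-1) ** (n - 1) * zeta(n - 1) * (a1 / lam1) ** n / n
                for n in range(3, 25)))
print(abs(F_np(lam1, a1) - series))  # small
```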
It is perhaps reasonable to expect a similar structure in topological string theory using the topological string coupling $\lambda$. The discussion of which solitonic branes correspond to these factors requires a distinction between the A- and B-model, which would see non-perturbative objects of type IIA or IIB string theory respectively. Recall that the result for the free energies of the deformed conifold at hand is a B-model result. Although its all-genus structure is very clear, the corresponding A-model geometry is perhaps less clear. It would be interesting however to understand the precise connection between the non-perturbative topological string theory content in this case and the metric obtained in \cite{Ooguri:1996me} by smoothening the conifold singularity through instanton effects. Such a smoothening was expected in \cite{Strominger:1995cz}. We proceed with a discussion of the non-perturbative content of the strong coupling expansion in the explicit example of the quintic and its mirror in the next subsection. \subsection{Topological string universality and the quintic example} By corollary \ref{diffarb}, the same difference equation as for the deformed conifold also holds for topological string theory on an arbitrary family of Calabi-Yau manifolds near a locus where finitely many states of the effective field theory become massless. To exemplify this we consider the quintic Calabi-Yau threefold, whose definition and mirror construction are reviewed in the appendix \ref{sec:quintic}. The singular conifold locus of the quintic and its mirror has been studied in many works, starting with \cite{Candelas:1990rm}. The expectation of the physical behavior of topological string theory near this singularity was in particular used in \cite{Huang:2006hq} to supplement the polynomial solution \cite{Yamaguchi:2004bt} of the holomorphic anomaly equations \cite{Bershadsky:1993cx} with boundary conditions.
In this case the good special coordinate on the B-model side of mirror symmetry is given by a ratio of two solutions of the Picard-Fuchs equation \eqref{PF}, when the latter is considered in the coordinate: $$ \delta = \frac{1-3125z}{3125 z}\,.$$ The outcome of \cite{Huang:2006hq} is that: \begin{equation} t_c(\delta)= \delta -\frac{3}{10} \delta^2 + \mathcal{O}(\delta^3)\,. \end{equation} This special coordinate is interpreted as the mirror map and thus gives the mass of the object on the A-model side which becomes massless in this limit. The object becoming massless in this case has D6 brane charge, which was obtained by the analytic continuation studied in \cite{Candelas:1990rm} and revisited in \cite{Huang:2006hq} and more recently in \cite{Knapp:2016rec}. By the physical reasoning reviewed in sec.~\ref{sec:boundary}, it was thus expected in \cite{Huang:2006hq} that: $$ F^g(t_c) =\frac{B_{2g}}{2g (2g-2) t_c^{2g-2}} + O(t^0_c)\,, \quad g>1\,.$$ The behavior for genus $0,1$ was further determined in \cite{Huang:2006hq}: $$ F^0(t_c)= \frac{1}{2}t_c^2 \log t_c - \frac{3}{4} t_c^2 + \mathcal{O}(t_c^3) \,, \quad F^{1}(t_c)=-\frac{1}{12} \log t_c + \mathcal{O}(t_c).$$ Consider now $\Lambda \in \mathbb{C}^*$ and the rescaling: $$ \lambda' = \lambda \cdot \Lambda\,, \quad t_c' = t_c \cdot \Lambda\,,$$ and define: $$ F'(\lambda',t_c') := \lim_{\Lambda \rightarrow \infty} \left( F(\lambda',t_c') + \frac{(t_c')^2}{2(\lambda')^2}\log \Lambda -\frac{1}{12}\log \Lambda\right)\,.$$ Then all the ingredients of corollary \ref{diffarb} are met, and by the proof of that corollary $F'$ satisfies the difference equation \eqref{diffeqarb}. We can thus define the non-perturbative completion for topological string theory on the quintic in the limit $\Lambda\rightarrow \infty$ to be given by $F_{np}(\lambda',t_c')$, where $F_{np}$ is defined in \eqref{defconnonp}.
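The role of the $\log\Lambda$ counterterms in this limit can be illustrated numerically on the purely singular parts of the free energies, for which the subtraction is exact; note that the quadratic counterterm carries a $1/(\lambda')^2$ normalization (a sketch with hypothetical helper names, genus sum truncated):

```python
import math

def bern(n):
    # Bernoulli numbers via the standard recurrence
    B = [1.0]
    for m in range(1, n + 1):
        B.append(-sum(math.comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B[n]

def F_sing(lam, tc, gmax=6):
    # purely singular parts of the assumed free energies near the locus (b = 1)
    val = (0.5 * tc * tc * math.log(tc) - 0.75 * tc * tc) / (lam * lam)
    val -= math.log(tc) / 12
    val += sum(lam ** (2 * g - 2) * bern(2 * g)
               / (2 * g * (2 * g - 2) * tc ** (2 * g - 2))
               for g in range(2, gmax))
    return val

lamp, tcp = 0.3, 0.7          # fixed rescaled variables lambda', t_c'
vals = [F_sing(lamp / Lam, tcp / Lam)
        + tcp ** 2 / (2 * lamp ** 2) * math.log(Lam)   # quadratic counterterm
        - math.log(Lam) / 12                           # genus-one counterterm
        for Lam in (10.0, 100.0, 1000.0)]
print(max(vals) - min(vals))  # Lambda-independent up to rounding
```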
For the strong coupling expansion we thus obtain: \begin{equation} \begin{split} Z_{np}(\lambda',t_c') &= \exp\left(\zeta'(-1)+\frac{1}{12}\log(\lambda')- \frac{1}{2} \frac{t_c'}{\lambda'} - (1+\gamma-\log \lambda') \frac{(t_c')^2}{2 (\lambda')^2} \right) \\ & \times \exp \left( \sum_{n=3}^{\infty} (-1)^{n-1} \zeta(n-1) \frac{(t'_c)^n}{(\lambda')^n n} \right) \,. \end{split} \end{equation} One may interpret the coefficient $t_c'/2$ of $-1/\lambda'$ as a D-brane instanton action. The expected coefficient of the NS5-brane is the volume of the CY; to understand why the $(t_c')^2$ is still plausible as the volume in this limit, we recall that the volume of the CY as determined by the special geometry at large radius is typically given by a period of the mirror, which has the form\footnote{See e.~g.~\cite{Alimlectures} and references therein.}: $$ \mathcal{V}\sim 2F_0 - tF_t\,,$$ in the limit $t_c\rightarrow 0$ we have indeed, to leading order: $$ 2F_0(t_c) - t_c \,F_{t_c} = - \frac{t_c^2}{2}\,,$$ although geometric interpretations such as the volume become less clear once the large volume regime of the moduli space is left. It would be interesting to interpret the higher order terms in this expansion. \subsection{The resolved conifold} \subsubsection{Strong coupling expansion} The strong coupling expansion for the non-perturbative free energy of the resolved conifold is obtained from the asymptotic expansion of $\log G_3$, which is given in \cite[Prop. 4.8]{BridgelandCon}: \begin{equation} F_{np}(\lambda,t)= -\frac{\zeta(3)}{2\pi^2} \check{\lambda} - \frac{\pi i}{12} \left( t-\frac{1}{2}\right) + \frac{\pi i}{\check{\lambda}^2} \frac{1}{3!} \left( t^3-\frac{3}{2}t^2 + \frac{t}{2}\right) + \sum_{k=1}^{\infty} \frac{B_{2k}(t)\cdot B_{2k-2}}{2k!
\cdot (2\pi i)} \, \left(\frac{2\pi i}{\check{\lambda}}\right)^{2k-1}\,, \end{equation} valid for $\lambda\rightarrow \infty$ and $\textrm{Im}\, t>0$, where the Bernoulli polynomials $B_n(t)$ are defined by the generating function: \begin{equation} \frac{x e^{tx}}{e^x-1} = \sum_{n=0}^{\infty} B_n(t) \frac{x^n}{n!}\,. \end{equation} This expansion leads to the following: \begin{rem} \begin{itemize} \item It is interesting to note that the asymptotic expansion for $\lambda\rightarrow 0$ of the non-perturbative free energy is naturally expressed in terms of a $q=\exp(2\pi i t)$ expansion, hence corresponds to a large volume expansion. The asymptotic expansion for $\lambda \rightarrow \infty$ however is expressed in terms of $t$ and is moreover manifestly \emph{polynomial} in $t$ at every order in $\frac{1}{\check{\lambda}}$. This suggests that $\lambda$ and $t$ are not entirely independent expansion variables but should perhaps be thought of as corresponding to certain phases of the combined problem in $\lambda$ and $t$, in analogy to the phases of \cite{Witten:1993yc}. \item Although the volume of the resolved conifold is infinite, since it is a non-compact CY, it is interesting to observe that the $\frac{1}{\lambda^2}$ term in the expansion comes with a factor $\frac{1}{3!}t^3$, which is the classical CY volume expressed by the special geometry. It is then natural to speculate that this term corresponds to the NS5 brane contribution; to obtain a negative sign, a factor of $1/(2 \pi i)^3$ should be included in the volume identification. \end{itemize} \end{rem} \subsubsection{Strong and weak coupling resummations}\label{sec:resummation} For the resolved conifold we analyze the non-perturbative content of the solution of the difference equation.
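The Bernoulli polynomials entering the strong coupling expansion are fixed by the stated generating function; a small numerical sanity check (a sketch; the helper name is ours):

```python
import math

def bern_poly(n, t):
    # B_n(t) = sum_k C(n,k) B_k t^{n-k}, with B_k from the usual recurrence
    B = [1.0]
    for m in range(1, n + 1):
        B.append(-sum(math.comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return sum(math.comb(n, k) * B[k] * t ** (n - k) for k in range(n + 1))

x, t = 0.3, 0.7
gen = x * math.exp(t * x) / (math.exp(x) - 1)   # x e^{tx}/(e^x - 1)
ser = sum(bern_poly(n, t) * x ** n / math.factorial(n) for n in range(25))
print(abs(gen - ser))       # agreement of the truncated series
print(bern_poly(2, 0.0))    # B_2(0) = B_2 = 1/6
```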
We obtain the following: \begin{prop}\label{conresumprop} $F_{np}(\lambda,t)$ can be expressed as: \begin{equation} F_{np}(\lambda,t) = \sum_{k=1}^{\infty} \frac{q^k}{k(2\sin k\lambda/2)^2} - \frac{\partial}{\partial \check{\lambda}} \left( \frac{\check{\lambda}}{4\pi} \sum_{k=1}^{\infty} \frac{e^{2\pi i k (t-1/2)/\check{\lambda}}}{k^2 \sin (\pi k /\check{\lambda})} \right)\,, \quad \check{\lambda} =\frac{\lambda}{2\pi}\,. \end{equation} Moreover, this expression can be written as: \begin{equation}\label{GVNS} F_{np}(\lambda,t)= F_{GV}(\lambda,t) - \frac{\partial}{\partial \check{\lambda}} \left(\check{\lambda} \, F_{NS} (1/\check{\lambda},(t-1/2)/\check{\lambda})\right)\,, \end{equation} where \begin{equation} F_{GV}(\lambda,t)= \sum_{k=1}^{\infty} \frac{q^k}{k(2\sin k\lambda/2)^2}\,, \end{equation} is the Gopakumar-Vafa resummation of \eqref{GVresum} for the resolved conifold and \begin{equation} F_{NS}(\hbar,t) = \frac{1}{4\pi} \sum_{k=1}^{\infty} \frac{e^{2\pi i k t}}{k^2 \sin (\pi k \hbar)}\,, \end{equation} is the refined topological string free energy for the resolved conifold in the Nekrasov-Shatashvili limit as determined in \cite{Hatsuda:2015oaa,Hatsuda:2015owa} following \cite{Hatsuda:2013oxa}. \end{prop} \begin{proof} To prove this, we use the integral representation of the function $G_3$, discussed in \cite[Prop. 4.2]{BridgelandCon}, based on \cite[Prop. 2]{Narukawa}, obtaining: \begin{equation} F_{np}(\lambda,t) = - \int_{\mathcal{C}} \frac{e^{(t+\check{\lambda})s}}{(e^s-1)(e^{\check{\lambda}s}-1)^2 } \frac{ds}{s}\,, \end{equation} which is valid for $\textrm{Re}\check{\lambda} >0$ and $-\textrm{Re}\check{\lambda} < \textrm{Re} t < \textrm{Re} (\check{\lambda}+1)$, where the contour $\mathcal{C}$ follows the real axis from $-\infty$ to $\infty$, avoiding $0$ by a small detour in the upper half plane.
To find the series expression in $\lambda$ including the perturbative and non-perturbative pieces from the integral representation, we close the contour in the upper half plane and analyze the residues. In the upper half plane, excluding zero, the integrand has two infinite sets of poles given by: \begin{equation} s= 2\pi i k \,, \quad \textrm{and} \quad s=\frac{2\pi i k }{\check{\lambda}}\,, \quad k \in \mathbb{N}\setminus\{0\}\,. \end{equation} The first set corresponds to simple poles and the second set corresponds to double poles. To avoid higher order poles we may assume that either $\textrm{Im} \lambda \ne 0$ or that $\check{\lambda} \notin \mathbb{Q}$.\footnote{We note that there is nothing wrong with $\check{\lambda}\in \mathbb{Q}$; it just requires a separate analysis of the poles of order three.} \usetikzlibrary{calc,decorations.markings} \begin{figure} \begin{centering} \begin{tikzpicture} \draw (0,-0.5) -- (0,5.5); \draw (-5.5,0) -- (5.5,0); \foreach \y in {0,...,4} { \node at (0,\y) {$\times$}; }; \foreach \y in {1,...,4} { \node at (-0.6,\y) {\y $ \cdot \, 2\pi i$ }; }; \foreach \y in {1,...,5} { \node at (2*\y/5,4*\y/5) {$\times$}; }; \foreach \y in {1,...,4} { \node at (2*\y/5+1,4*\y/5) {\y $ \cdot \, 2\pi i/\check{\lambda}$ }; }; \draw[thick,red,xshift=0pt, decoration={ markings, mark=at position 0.2 with {\arrow{latex}}, mark=at position 0.6 with {\arrow{latex}}, mark=at position 0.8 with {\arrow{latex}}, mark=at position 0.98 with {\arrow{latex}}}, postaction={decorate}] (-5,0) -- (-0.5,0) arc (180:0:.5) -- (5,0); \draw[thick,red,xshift=0pt, decoration={ markings, mark=at position 0.2 with {\arrow{latex}}, mark=at position 0.4 with {\arrow{latex}}, mark=at position 0.6 with {\arrow{latex}}, mark=at position 0.8 with {\arrow{latex}}}, postaction={decorate}] (5,0) arc (0:180:5) -- (-5,0); \end{tikzpicture} \end{centering} \caption{Illustration of the simple and double poles as well as the contour $\mathcal{C}$.} \end{figure} The contribution of the simple
poles gives the first term on the r.h.s. of the proposition. We denote the contribution of the double poles by $I_{db}$ and obtain from Cauchy's generalized integral formula, introducing $s'=\check{\lambda}\, s$: \begin{equation} I_{db}= 2\pi i \cdot \sum_{k=1}^{\infty} \frac{d}{ds'} f(s')|_{s'=2\pi i k}\,, \end{equation} where $f(s')= -\frac{e^{ts'/\check{\lambda}}}{s'(e^{s'/\check{\lambda}}-1)}$, which gives: \begin{equation} I_{db}= -\frac{1}{4\pi} \sum_{k=1}^{\infty} \frac{(1-2\pi i k t/\check{\lambda}) e^{2\pi i k (t-1/2)/\check{\lambda}}}{k^2 \sin (\pi k/\check{\lambda})} -\frac{1}{\check{\lambda}} \sum_{k=1}^{\infty} \frac{ e^{2\pi i k t/\check{\lambda}}}{k (2\sin (\pi k/\check{\lambda}))^2}\,. \end{equation} The reformulation of the result follows after some substitution algebra. \end{proof} \begin{rem} \begin{itemize} \item The result \eqref{GVNS} matches the result of eq.~(5.6) of \cite{Hatsuda:2015owa}, which was obtained by a generalized Borel resummation. There are differences of various relative factors of $2\pi i$ compared to \cite{Hatsuda:2015owa}. These originate from our definition of $q=e^{2\pi i t}$ (as opposed to $q=e^{-t}$, which is often used). Including the $2\pi i$ factors corresponds to the natural choice of variable giving convergence of $q$ series at large volume $\textrm{Im} t \rightarrow \infty$ as well as having the periodicity in shifts of the $B-$field $t\rightarrow t+1$. Indeed, this makes the $1/2$ unit of $B$-field shift in the argument of $F_{NS}$, which was discussed in \cite{Hatsuda:2013oxa,Hatsuda:2015oaa,Hatsuda:2015owa}, very clear. \item Since this computation reproduces results which agree with \cite{Hatsuda:2013oxa}, one may consult \cite[Sec. 4.2]{Hatsuda:2013oxa} for comments and comparison to the proposal of \cite{Lockhart:2012vp}. \item A similar computation using integral representations occurring in Chern-Simons theory was done in \cite{Krefl:2015vna}.
\end{itemize} \end{rem} We conclude this subsection by noting that the special function determined by the difference equation does indeed contain the non-perturbative structure of the resolved conifold which was obtained by various different methods in \cite{Hatsuda:2015oaa,Hatsuda:2015owa,Krefl:2015vna}. This in particular confirms the expectation of \cite{BridgelandCon} that the Tau function of the Riemann-Hilbert problem associated to wall-crossing of DT invariants of the resolved conifold defines a non-perturbative topological string partition function. \subsection{Non-perturbative conifold and non-commutative DT} With the non-perturbative topological string partition function at hand one can revisit the relation between topological strings and the generating function of non-commutative Donaldson-Thomas invariants studied in \cite{Szendroi}. We therefore introduce:\footnote{Compared to \cite{Szendroi}, $q_\lambda$ has a different sign and $2\pi i$ factors are added to $t$.} \begin{equation}\label{ncdt} Z_{\textrm{NCDT}}(\lambda,t):= M(q_{\lambda})^2 \, \prod_{k=1}^{\infty} \left(1- e^{-2\pi i t} q_{\lambda}^k\right)^k \prod_{k=1}^{\infty} \left(1- e^{2\pi i t} q_{\lambda}^k\right)^k\,, \end{equation} where the MacMahon function is given by: \begin{equation} M(q)= \prod_{k=1}^{\infty} (1-q^k)^{-k}\,, \end{equation} and $q_{\lambda}=e^{i\lambda}\,.$ Defining the non-perturbative topological string partition function: \begin{equation}\label{nonppart} Z_{np}(\lambda,t):= \exp (F_{np}(\lambda,t))\,, \end{equation} as the exponential of \eqref{resconfreedef}, we obtain the following: \begin{prop} The relation between the generating function of non-commutative DT invariants \eqref{ncdt} and the non-perturbative topological string partition function \eqref{nonppart} is given by: \begin{equation} Z_{\textrm{NCDT}}(\lambda,t) = M(q_{\lambda})^2\,\cdot Z_{np}(\lambda,t+1)\, \cdot Z_{np}(-\lambda,-t)\,. \end{equation} \end{prop} \begin{proof} We consider the
following property, proved in \cite[Prop. 4.3]{BridgelandCon} for the function $G_3$ defined in \eqref{g3def}: \begin{equation} G_{3}(z+\omega_2\, |\, \omega_1,\omega_2) \cdot G_3(z\, | \, \omega_1,-\omega_2)= \prod_{k=1}^{\infty} (1-x_2\, q_2^k)^k\cdot \prod_{k=1}^{\infty} (1-x_2^{-1}\, q_2^k)^k\,, \end{equation} valid for $\textrm{Im}(\omega_1/\omega_2)>0$ and $z\in \mathbb{C}$, where: $$x_2=\exp (2\pi i z/\omega_2)\,, \quad q_2=\exp (2\pi i \omega_1/\omega_2)\,. $$ The claim of the proposition follows by considering the property for $G_{3}(t\, | \check{\lambda},1)$, which defines $F_{np}$, and further noting that the function $G_3$ is invariant under simultaneous rescaling of all three arguments, see e.~g.~\cite[Prop. 4.2]{BridgelandCon}. \end{proof} \begin{rem} We remark that relations between $Z_{\textrm{NCDT}}$ and the partition function for the resolved conifold obtained by exponentiating the GV resummation were already observed in \cite{Szendroi} and interpreted physically in e.~g.~\cite{Jafferis:2008uf,Aganagic:2009kf}. The result obtained here is however exact and non-perturbative, and therefore perhaps a little more surprising, since proposition \ref{conresumprop} shows that the non-perturbative free energy contains more information than the GV resummation. \end{rem} \subsection{Conifold locus of the resolved conifold} A natural question concerns the precise relation between the free energies of the resolved and deformed conifold geometries. Relatedly, the free energy of the resolved conifold should be the easiest example on which to explicitly test the universal behavior of topological string theory near conifold type singularities. Since the resolved conifold corresponds to the small resolution of the conifold singularity, the singular locus corresponds to the locus where $t$, the K\"ahler parameter of the $\mathbb{P}^1$, shrinks to zero, $t\rightarrow 0$.
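The expected conifold behavior at this locus can already be illustrated numerically at genus two, where \eqref{resconfree} gives $\tilde{F}^2(t)=\frac{1}{240}\,\textrm{Li}_{-1}(q)$ with the closed form $\textrm{Li}_{-1}(q)=q/(1-q)^2$; comparing against the gap form of sec.~\ref{sec:boundary} as $t\rightarrow 0$ (a sketch; the subtracted $t$-independent piece is, as it happens, of constant-map type):

```python
import cmath

def F2(t):
    # genus-two free energy of the resolved conifold: (1/240) Li_{-1}(q)
    q = cmath.exp(2j * cmath.pi * t)
    return (1.0 / 240.0) * q / (1 - q) ** 2

B4, B2 = -1.0 / 30.0, 1.0 / 6.0
t = 0.01j                                  # approach t -> 0 in the upper half plane
lead = B4 / (8 * (2 * cmath.pi * t) ** 2)  # gap term B_{2g}/(2g(2g-2)(2 pi t)^{2g-2}), g = 2
const = B4 * B2 / 16                       # t-independent piece, (-1)^g B_4 B_2/(2g(2g-2)(2g-2)!)
print(abs(F2(t) - lead - const))           # residual is O(t^2)
```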
Since the free energy of the resolved conifold \eqref{resconfree} is given as a series expansion in $q=e^{2\pi i t}$, which does not converge as $t\rightarrow 0$, one must analytically continue this result. We will prove the following: \begin{thm}\label{conanalytic} For $g >1$ the analytic continuation of the genus $g$ free energy of the resolved conifold to the regime $t\rightarrow 0$ is given by: \begin{equation} \begin{split} \tilde{F}^g(t)&= \frac{B_{2g}}{2g(2g-2)} \frac{1}{(2\pi t)^{2g-2}} + (-1)^g \frac{B_{2g} B_{2g-2}}{2g (2g-2)(2g-2)!} \\ &+ \frac{B_{2g}}{2g(2g-2)!} \sum_{d=g}^{\infty} (-1)^{d+1} \frac{B_{2d}}{2d(2d-2g+2)!} (2\pi t)^{2d-2g+2}\,. \end{split} \end{equation} \end{thm} \begin{proof} We begin by representing $\tilde{F}^g(t)$ for the resolved conifold as in \eqref{derrep}: \begin{equation} \tilde{F}^g(t)=\frac{(-1)^{g-1}B_{2g}}{2g (2g-2)!} \,\theta_q^{2g-3} \textrm{Li}_0(q)\,, \quad g\ge2\,, \end{equation} starting from $\textrm{Li}_0(q)=\frac{q}{(1-q)}$ and using the property: \begin{equation} \label{polylogder} \theta_q \textrm{Li}_s(q) =\textrm{Li}_{s-1} (q)\,, \quad \theta_q:= q \,\frac{d}{dq}\,. \end{equation} Introducing $a=2\pi i t$, the expression becomes: \begin{equation} \begin{split} \tilde{F}^g(a)&=\frac{(-1)^{g}B_{2g}}{2g (2g-2)!} \,\partial_a^{2g-3} \left(\frac{1}{a} \left( \frac{a}{e^a-1}\right)\right) \,\\&=\frac{(-1)^{g}B_{2g}}{2g (2g-2)!} \,\partial_a^{2g-3} \left( \sum_{n=0}^{\infty} B_n \frac{a^{n-1}}{n!}\right)\\ &= \frac{(-1)^{g-1} B_{2g}}{2g(2g-2) a^{2g-2}} + (-1)^g \frac{B_{2g} B_{2g-2}}{2g (2g-2) (2g-2)!} \\&+ (-1)^{g} \frac{B_{2g}}{2g(2g-2)!} \sum_{d=g}^{\infty} \frac{B_{2d}}{2d(2d-2g+2)!} a^{2d-2g+2}\,, \end{split} \end{equation} where in the first and second lines the expression $\textrm{Li}_0(q)$ was brought into the form of the generating function of Bernoulli numbers.
The result on the third line follows from considering separately the action of the differential operator on the singular piece $1/a$ of the r.h.s.; the constant piece comes from the operator acting on the term with power $a^{2g-3}$, and the remaining sum collects all the higher powers. We have furthermore used the vanishing of $B_{2n+1}$, $n>0$. The statement of the theorem follows by substituting back $a=2\pi i t$. \end{proof} \begin{rem} \begin{itemize} \item This result is a mathematical proof of the gap condition used in \cite{Huang:2006si,Huang:2006hq}, based on the physical expectation outlined in sec.~\ref{sec:boundary}, as a condition imposed on the local behavior of topological string theory on arbitrary families of CY manifolds. In the case at hand the result does not serve any computational purpose, since the full exact and non-perturbative expression for topological string theory was already given; it is nevertheless gratifying to see that the conjectured behavior holds, mathematically rigorously, in this example. \item We think the result should have an interesting interpretation in enumerative geometry. The leading singular term gives the Euler characteristic of genus $g$ Riemann surfaces determined in \cite{HZ}; in the constant term, the contributions from constant maps determined in \cite{Faber} make a somewhat surprising reappearance, although we explicitly did not include these from the start. It is therefore natural to expect the higher degree terms to have an interpretation as well. This may open up new paths to study Gromov-Witten theory around the conifold point of arbitrary families of threefolds. See also \cite{Szendroi} for speculation about the enumerative geometry content of the locus $t\rightarrow 0$ at strong coupling $\lambda\rightarrow i\infty$. \item An analytic continuation of the free energies for the resolved conifold for $g>1$ was obtained in \cite[Sec.
4.3]{Pasquetti:2009jg} using the following series expansion of the polylogarithm: \begin{equation} \textrm{Li}_{s}(e^{2\pi i t})= \Gamma(1-s) \, \sum_{m\in \mathbb{Z}} \left(2\pi i(m-t)\right)^{s-1}\,, \end{equation} valid for $\textrm{Re}(s)<0$ and $t\notin \mathbb{Z}$. This gives for the free energies \cite{Pasquetti:2009jg}: \begin{equation}\label{masslessd0} \tilde{F}^g(t)= \frac{B_{2g}}{2g (2g-2)} \sum_{k\in \mathbb{Z}} \frac{1}{(2\pi(k-t))^{2g-2}}\,, \quad g>1\,. \end{equation} \item From a physical perspective, the singular behavior signals a hypermultiplet becoming massless in the effective four dimensional theory in the limit $t\rightarrow 0$. It is known that the resolved conifold supports only one BPS state from the M-theory perspective; this corresponds to the only non-vanishing GV invariant $n^0_1=1$. From the $4d$ perspective, however, there is an infinite set of BPS states supported on this geometry, corresponding to the Kaluza-Klein modes of the $M2$ brane; these correspond to $D_0$ brane charge bound to the $D_2$ brane. While the statement of Theorem~\ref{conanalytic} only shows the singular behavior as the pure $D_2$ becomes massless at $t\rightarrow 0$, the analytic continuation \eqref{masslessd0} obtained in \cite{Pasquetti:2009jg} shows the whole tower of $D_2$ bound states with an arbitrary number of $D_0$ branes becoming massless at the loci $t\in\mathbb{Z}$. \end{itemize} \end{rem} To fully test the universal behavior of topological string theory near a conifold singularity in this example, we furthermore need to analyze the behavior of the genus $0$ and genus $1$ free energies. We obtain the following: \begin{prop} The analytic continuation of the genus 0 free energy (the prepotential) $\tilde{F}^0(t)=\textrm{Li}_3(q)$ to the region $t\rightarrow 0$ is given by: \begin{equation} \tilde{F}^0(t)= (2\pi)^2 \left( \frac{t^2}{2} \log (2\pi i t) -\frac{3}{4}t^2\right) -(2\pi i)^3 \frac{t^3}{12} - \sum_{n=1}^{\infty} \frac{B_{2n}}{2n(2n+2)!} (-1)^{n+1} (2\pi t)^{2n+2}\,.
\end{equation} \end{prop} \begin{proof} Since the prepotential $\tilde{F}^0(t)$ is a locally defined function, we need to analytically continue it starting from functions which are geometrically globally defined. At genus zero such an object is the Yukawa coupling\footnote{This definition using only the $q$ expansion part of $F^0(t)$ misses some classical pieces which would correspond to the classical triple intersection numbers.} $$ C_{ttt}:=\frac{1}{(2\pi i)^3}\frac{\partial^3}{\partial t^3} \tilde{F}^0(t)=\theta^3_q \textrm{Li}_3(q)=\frac{q}{1-q}\,. $$ Introducing again $a=2\pi i t$, we thus have: \begin{equation} C_{ttt}= \partial_a^3 \tilde{F}^0(a)= -1 -\frac{1}{a}\left(\frac{a}{q-1}\right) = -1 - \sum_{n=0}^{\infty}B_n \frac{a^{n-1}}{n!}\,, \end{equation} which can be integrated to give: \begin{equation} \tilde{F}^0(a)= -\left( \frac{a^2}{2} \log a -\frac{3}{4}a^2\right) -\frac{a^3}{12} - \sum_{n=1}^{\infty} \frac{B_{2n}}{2n(2n+2)!} a^{2n+2}\,. \end{equation} The statement of the proposition follows by substituting $a=2\pi i t$. \end{proof} For the genus $1$ free energy we have: \begin{equation} \tilde{F}^1(t)= -\frac{1}{12} \log (1-q)= -\frac{1}{12} \log (2\pi i t) -\frac{1}{12} \log\left(1+\sum_{n=2}^{\infty} \frac{(2\pi i t)^{n-1}}{n!}\right)\,, \end{equation} which shows that the assumptions of Corollary~\ref{diffarb} are met. We can now consider $\Lambda \in \mathbb{C}^*$ and the rescaling: $$ \lambda' = \lambda \cdot \Lambda\,, \quad t' = t \cdot \Lambda\,,$$ and define: $$ F'(\lambda',t') := \lim_{\Lambda \rightarrow \infty} \left( F(\lambda',t) + \frac{t^2}{2}\log \left(\frac{\Lambda}{2\pi i}\right) -\frac{1}{12}\log \left(\frac{\Lambda}{2\pi i}\right)\right)\,, $$ one finds $$ F'(\lambda',t')=F_{\textrm{def}}(\check{\lambda}',t')\,, \quad \check{\lambda}'=\frac{\lambda'}{2\pi}\,,$$ where $F_{\textrm{def}}$ is the free energy of the deformed conifold \eqref{defconfree}.
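As an independent numerical sanity check of the proposition (an illustration, not part of the proof), the truncated closed form of $\tilde{F}^0$ in the variable $a=2\pi i t$ can be differentiated three times and compared with the Yukawa coupling $q/(1-q)$; the sketch below assumes the \texttt{mpmath} library:

```python
# Sanity check (illustration only): the analytically continued prepotential
# F0(a) = -(a^2/2 log a - 3a^2/4) - a^3/12 - sum_{n>=1} B_{2n}/(2n (2n+2)!) a^{2n+2},
# truncated at finite order, should give d^3/da^3 F0 = q/(1-q) with q = e^a.
from mpmath import mp, mpf, exp, log, bernoulli, factorial, diff

mp.dps = 30   # working precision
N = 20        # truncation order of the Bernoulli tail

def F0(a):
    val = -(a**2 / 2 * log(a) - mpf(3) / 4 * a**2) - a**3 / 12
    val -= sum(bernoulli(2 * n) / (2 * n * factorial(2 * n + 2)) * a**(2 * n + 2)
               for n in range(1, N + 1))
    return val

a = mpf("0.1")
q = exp(a)
assert abs(diff(F0, a, 3) - q / (1 - q)) < mpf("1e-10")
```

The check is performed at a real value of $a$ for convenience; the closed form is real-analytic there and the truncation error of the Bernoulli tail is far below the tolerance.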
The different normalization of $\lambda$ is also the reason for the $2\pi i$ factors on the cutoff $\Lambda$. \section{Conclusions}\label{sec:conclusions} In this work, we addressed an intrinsic characterization of non-perturbative topological string theory. In this characterization, the topological string free energy is a solution to a difference equation mixing the moduli and the string coupling. The difference equations are derived using the knowledge of the asymptotic expansion and bypass the need for any further physical or mathematical dualities. The difference equations as well as their solutions were obtained for the deformed and resolved conifold geometries. The strong coupling expansion as well as the non-perturbative content were obtained in these cases and matched to known and expected results in the literature. Fortunately, the deformed conifold also gives the expected universal behavior of topological string theory on any family of Calabi-Yau threefolds near a singularity where finitely many states of the effective field theory in $4d$ become massless. Both the difference equation as well as its analytic solution in the string coupling can thus be obtained universally in a limit where the coordinate giving the mass of the particles becoming massless at the singularity, as well as the topological string coupling, are rescaled and the rescaling is sent to $\infty$. Although the full exact solution is lost in the rescaled limit, the remaining qualitative results are universal and should shed new light on contexts where topological string theory connects to problems of quantum gravity and thus requires the study of topological string theory on families of compact CY manifolds. One such problem is the connection to black hole partition functions \cite{Ooguri:2004zv}.
It would for example be interesting to match the universal non-perturbative structure, which also holds for compact CY, to computations of the black hole partition function, perhaps along the lines of \cite{Sen:2011ba}.\footnote{I would like to thank Gabriel Lopes Cardoso for pointing this out, commenting on the potential use of \cite{Alim:2015qma} in the study of black holes.} Another quantum gravity context, where the precise knowledge of the geometry of CY moduli spaces and their non-perturbative content is important, is the swampland program, which has attracted a lot of attention recently. We expect the quantitative handle on the universal non-perturbative structure of topological strings to be relevant for example in the context of the swampland distance conjecture \cite{Ooguri:2006in}. Relatedly, one may also expect the non-perturbative structure of topological string theory to be relevant in the study of non-perturbative corrections to quaternionic K\"ahler geometries \cite{Ooguri:1996me,Alexandrov:2011va,Alexandrov:2013yva}. We note that difference equations provide the natural arena to study the connections between topological strings and integrable hierarchies. This was indeed the context for \cite{ADKMV}. The difference equations there were obtained from the quantization of the mirror curves, which are expressed in terms of $\mathbb{C}^*$ variables. The difference equations and their solutions in this context, however, connect naturally to open topological string theory \cite{ADKMV,Aganagic:2011mi}. The difference equations studied in this work are equations in the closed string moduli; they are closer in spirit to the difference equation conjectured for instance in \cite{Pandharipande} for the Gromov-Witten potential of $\mathbb{P}^1$, which provides the link to the Toda integrable hierarchy.
Indeed, using the difference equation for the resolved conifold, a conjecture \cite{Brini} relating Gromov-Witten theory of the resolved conifold to the Ablowitz-Ladik integrable hierarchy was proved in \cite{alim2021integrable}. It is tempting to speculate that there should be an underlying quantum mechanical problem that gives rise to the difference equations studied here. The latter should be a quantization problem related to the moduli space of closed strings of the geometry, which may close the circle of ideas by relating it to the quantum mechanical interpretation of the closed string as a wave-function \cite{Witten:1993ed}. Recently, quantum K-theory was studied in e.~g.~\cite{Jockers:2019wjh} as a quantum deformation problem which can be applied to the Picard-Fuchs operators of any Calabi-Yau geometry; it would be interesting to understand potential links to the approach in this paper. A further context where the difference equations studied in this work appear is as the equations characterizing the perturbative part of the Nekrasov-Okounkov partition functions of $\mathcal{N}=2$ gauge theories; see \cite[Appendix A]{NO}. The current work suggests that in addition to the perturbative piece, the difference equations also fix the non-perturbative content. It would be interesting to study the implications of this further, both in the gauge theory and in more general contexts. \subsection*{Acknowledgements} I would like to thank Arpan Saha for collaboration on related projects as well as Florian Beck and Peter Mayr for comments on the draft. I have benefited from many discussions with members of the Emmy Noether research group on String Mathematics as well as from discussions with Vicente Cort\'es, J\"org Teschner, Iv\'an Tulli, Timo Weigand and Alexander Westphal on related projects within the quantum universe cluster of excellence. This work is supported through the DFG Emmy Noether grant AL 1407/2-1.
\begin{appendix} \section{Proof of the difference equation}\label{sec:proof} We provide here the proof of \cite{Iwaki,alim2020difference} of the difference equations for the free energies of the deformed and the resolved conifold. \begin{proof} The proof of the theorems relies on the following. Consider the generating function of Bernoulli numbers: \begin{equation} \frac{w}{e^w-1} = \sum_{n=0}^{\infty} B_n \frac{w^n}{n!}\,. \end{equation} Applying $w \frac{d}{dw}$ to both sides and rearranging gives: \begin{equation} \frac{w^2 e^w}{(e^w-1)^2} = B_0 - \sum_{n=2}^{\infty} \frac{B_n}{n (n-2)!} w^n = 1 -\sum_{g=1}^{\infty} \frac{B_{2g}}{2g (2g-2)!} w^{2g}\,, \end{equation} where the last equality is obtained by noting that all $B_{2n+1}$, $n\in \mathbb{N}\setminus\{0\}$, vanish. This yields the following: \begin{equation} (e^w-2+e^{-w}) \left(\frac{1}{w^2} -\sum_{g=1}^{\infty} \frac{B_{2g}}{2g (2g-2)!} w^{2g-2} \right) = 1\,. \end{equation} In the next step, we replace on both sides of this equation the variable $w$ by an operator acting on functions of $a$, namely: $$ w \rightarrow \lambda \frac{\partial}{\partial a}\,.$$ For the deformed conifold, we act with both sides on $\log a$, obtaining: \begin{equation} (e^{\lambda \partial_a}-2\cdot \textrm{id}+e^{-\lambda\partial_a}) \left( \lambda^{-2} \partial_a^{-2} -\sum_{g=1}^{\infty} \frac{B_{2g}}{2g (2g-2)!} \lambda^{2g-2}\,\partial_a^{2g-2} \right) \log a = \textrm{id}\cdot \log a\,, \end{equation} we use $$ \partial_a^{2g-2} \log a= - (2g-3)! \,a^{2-2g},$$ and interpret $\partial_a^{-1}$ as an anti-derivative to obtain: \begin{equation} (e^{\lambda \partial_a}-2\cdot \textrm{id}+e^{-\lambda\partial_a}) \left( F^0(a) -\frac{1}{12}\log a +\sum_{g=2}^{\infty} \frac{B_{2g}}{2g (2g-2)} \frac{\lambda^{2g-2}}{a^{2g-2}} \right) = \log a= \frac{\partial^2}{\partial a^2} F^0(a)\,.
\end{equation} For the resolved conifold, we start from $\textrm{Li}_1(q)=-\log(1-q)$ and use the property \begin{equation} \theta_q \textrm{Li}_s(q) =\textrm{Li}_{s-1} (q)\,, \quad \theta_q:= q \,\frac{d}{dq}\,, \end{equation} to write \begin{equation}\label{derrep} \tilde{F}^g=\frac{(-1)^{g-1}B_{2g}}{2g (2g-2)!} \,\theta_q^{2g-2} \textrm{Li}_1(q)\,, \quad g\ge1\,. \end{equation} We make the replacement: $$ w \rightarrow \check{\lambda} \frac{\partial}{\partial t}= i \lambda \theta_q\,.$$ Acting with both sides on $\textrm{Li}_1(q)$, we obtain: \begin{equation} (e^{\check{\lambda} \partial_t}-2\cdot \textrm{id}+e^{-\check{\lambda}\partial_t}) \left(- \lambda^{-2} \theta_q^{-2} -\sum_{g=1}^{\infty} \frac{(-1)^{g-1}B_{2g}}{2g (2g-2)!} \lambda^{2g-2}\,\theta_q^{2g-2} \right) \textrm{Li}_1(q) = \textrm{id}\cdot \textrm{Li}_1(q) \,. \end{equation} By using $\theta_q^{2} \tilde{F}^0= \theta_q^{2} \textrm{Li}_3(q)= \textrm{Li}_1(q)$ and interpreting $\theta_q^{-1}$ as an anti-derivative, we obtain: \begin{equation} (e^{\check{\lambda} \partial_t}-2\cdot \textrm{id}+e^{-\check{\lambda}\partial_t}) \tilde{F}(\lambda,t) = -\theta_q^2 \tilde{F}^0(q) \,, \end{equation} which proves the theorem. \end{proof} The following corollary was proved in \cite{alim2020difference} but also holds for the deformed conifold case, so we include it here as well: \begin{cor} For every $g\ge 1$ the difference equation gives a recursive differential equation which determines $\frac{\partial^2}{\partial t^2}F^g(t)$, $g \ge 1$, by: \begin{equation} \sum_{k=0}^{g} \frac{1}{(2g-2k+2)!} \left(\frac{1}{2\pi} \frac{\partial}{\partial t}\right)^{2g-2k+2} F^k(t) =0 \,, \quad g\ge1\,. \end{equation} \end{cor} \begin{proof} This follows from expanding the L.H.S. of the theorem in $\lambda$ and then matching the coefficients of $\lambda^{2g}$ on both sides. \end{proof} \section{The quintic and its mirror}\label{sec:quintic} In the following, the quintic and its mirror are reviewed.
The exposition follows \cite{Alimlectures}; more details can be found in \cite{Candelas:1990rm,Cox:1999ms}. The quintic $X$ denotes the CY manifold defined by \begin{equation} X:=\{P(x)=0\} \subset \mathbb{P}^4\,, \end{equation} where $P$ is a homogeneous polynomial of degree 5 in 5 variables $x_1, \dots ,x_5$. The mirror quintic $\check{X}$ can be constructed using the Greene-Plesser construction \cite{Greene:1990ud}. Equivalently, it may be constructed using Batyrev's dual polyhedra \cite{Batyrev:1993dm} in the toric geometry language\footnote{For a review of toric geometry see Refs.~\cite{Greene:1996cy,Hori:2003ms}.}. In the Greene-Plesser construction the family of mirror quintics is the one parameter family of quintics defined by \begin{equation}\label{GP} \{ p(\check{X})=\sum_{i=1}^5 x_i^5-\psi \prod_{i=1}^5 x_i=0 \} \subset \mathbbm{P}^4\,, \end{equation} after a $(\mathbbm{Z}_5)^3$ quotient and resolving the singularities. In the following, the mirror construction of Batyrev will be outlined. The mirror pair of CY 3-folds $(X,\check{X})$ is given as hypersurfaces in toric ambient spaces $(W,\check{W})$. The mirror symmetry construction of Ref.~\cite{Batyrev:1993dm} associates the pair $(X,\check{X})$ to a pair of integral reflexive polyhedra $(\Delta,\check{\Delta})$. \subsection{\it The A-model geometry} The polyhedron $\Delta$ is characterized by $k$ relevant integral points $\nu_i$ lying in a hyperplane of distance one from the origin in $\mathbbm{Z}^5$; $\nu_0$ will denote the origin following the conventions of Refs.~\cite{Batyrev:1993dm,Hosono:1993qy}. The $k$ integral points $\nu_i(\Delta)$ of the polyhedron $\Delta$ correspond to homogeneous coordinates $u_i$ on the toric ambient space $W$ and satisfy $n=h^{1,1}(X)$ linear relations: \begin{equation}\label{toricrel} \sum_{i=0}^{k-1} l_i^a \, \nu_i=0\, , \quad a=1,\dots,n\,.
\end{equation} The integral entries of the vectors $l^a$ for fixed $a$ define the weights $l_i^a$ of the coordinates $u_i$ under the $\mathbbm{C}^*$ actions $$ u_i \rightarrow (\lambda_a)^{l_i^a} u_i\,, \quad \lambda_a \in \mathbbm{C}^*\,.$$ The $l_i^a$ can also be understood as the $U(1)_a$ charges of the fields of the gauged linear sigma model (GLSM) construction associated with the toric variety \cite{Witten:1993yc}. The toric variety $W$ is defined as $W\simeq (\mathbbm{C}^{k}-\Xi)/(\mathbbm{C}^*)^n$, where $\Xi$ corresponds to an exceptional subset of degenerate orbits. To construct compact hypersurfaces, $W$ is taken to be the total space of the anti-canonical bundle over a compact toric variety. The compact manifold $X \subset W$ is defined by introducing a superpotential $\mathcal{W}_X=u_0 p(u_i)$ in the GLSM, where $u_0$ is the coordinate on the fiber and $p(u_i)$ a polynomial in the $u_{i>0}$ of degree $-l_0^a$. At large K\"ahler volumes, the critical locus is at $u_0=p(u_i)=0$ \cite{Witten:1993yc}. The quintic is the compact geometry given by a section of the anti-canonical bundle over $\mathbbm{P}^4$. The charge vectors for this geometry are given by: \begin{equation}\label{chargevec} \begin{array}{ccccccc} &u_0&u_1&u_2&u_3&u_4&u_5\\ l=&(-5&1&1&1&1&1\, )\,. \end{array} \end{equation} The vertices of the polyhedron $\Delta$ are given by: \begin{eqnarray} &&\nu_0=(0,0,0,0,0)\,,\quad \nu_1=(1,0,0,0,0)\,,\quad \nu_2=(0,1,0,0,0)\,,\nonumber\\ &&\nu_3=(0,0,1,0,0)\,,\quad \nu_4=(0,0,0,1,0)\,,\quad \nu_5=(-1,-1,-1,-1,0)\,.
\end{eqnarray} \subsection{\it The B-model geometry} The B-model geometry $\check{X}\subset \check{W}$ is determined by the mirror symmetry construction of Refs.~\cite{Hori:2000kt,Batyrev:1993dm} as the vanishing locus of the equation \begin{equation} p(\check{X})=\sum_{i=0}^{k-1} a_i y_i =\sum_{\nu_i\in \Delta} a_i X^{\nu_i}\, , \end{equation} where $a_i$ parameterize the complex structure of $\check{X}$, $y_i$ are homogeneous coordinates \cite{Hori:2000kt} on $\check{W}$, and $X_m$, $m=1,\dots,4$, are inhomogeneous coordinates on an open torus $(\mathbbm{C}^*)^4 \subset \check{W}$, with $X^{\nu_i}:=\prod_m X_m^{\nu_{i,m}}$ \cite{Batyrev:1993wa}. The relations (\ref{toricrel}) impose the following relations on the homogeneous coordinates \begin{equation} \prod_{i=0}^{k-1} y_i^{l_i^a}=1\, ,\quad a=1,\dots,n=h^{2,1}(\check{X})=h^{1,1}(X)\, . \end{equation} The important quantity in the B-model is the holomorphic $(3,0)$ form, which is given by: \begin{equation}\label{defomega0} \Omega(a_i)= \textrm{Res}_{p=0} \frac{1}{p(\check{X})} \prod_{i=1}^4 \frac{dX_i}{X_i} \, . \end{equation} Its periods \begin{equation} \pi_{\alpha}(a_i)=\int_{\gamma^\alpha} \Omega(a_i)\, , \quad \gamma^{\alpha} \in H_3(\check{X})\,,\quad\alpha=0,\dots, 2h^{2,1}+1\, , \end{equation} are annihilated by an extended system of GKZ \cite{Gelfand:1989} differential operators \begin{eqnarray} &&\mathcal{L}(l)= \prod_{l_i >0} \left( \frac{\partial}{\partial a_i}\right)^{l_i} -\prod_{l_i<0} \left( \frac{\partial}{\partial a_i}\right)^{-l_i}\, ,\\ &&\mathcal{Z}_j =\sum_{i=0}^{k-1} \nu_{i,j} \theta_i\, , \quad j=1,\dots,4\, , \qquad \mathcal{Z}_0 = \sum_{i=0}^{k-1} \theta_i +1\,,\quad \theta_i=a_i \frac{\partial}{\partial a_i}\,, \end{eqnarray} where $l$ can be a positive integral linear combination of the charge vectors $l^a$. The equation $\mathcal{L}(l)\, \pi_0(a_i)=0$ follows from the definition (\ref{defomega0}).
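As a concrete instance of the statement $\mathcal{L}(l)\,\pi_0(a_i)=0$: for the quintic charge vector \eqref{chargevec}, the operator translates, in the one-parameter coordinate on the moduli space introduced below in \eqref{lcs}, into the two-term recursion $(n+1)^5\,c_{n+1}=\prod_{j=1}^5(5n+j)\,c_n$ for the coefficients of the fundamental period, whose classical closed form $c_n=(5n)!/(n!)^5$ (cf.~\cite{Candelas:1990rm}) is assumed here rather than derived. A short exact check:

```python
# Exact check: the coefficients c_n = (5n)!/(n!)^5 of the fundamental period
# of the mirror quintic satisfy (n+1)^5 c_{n+1} = (5n+1)(5n+2)(5n+3)(5n+4)(5n+5) c_n,
# the recursion encoded by the GKZ operator for l = (-5,1,1,1,1,1).
# The closed form of c_n is a standard fact assumed here, not derived.
from math import factorial, prod

def c(n):
    return factorial(5 * n) // factorial(n) ** 5

for n in range(20):
    assert (n + 1) ** 5 * c(n + 1) == prod(5 * n + j for j in range(1, 6)) * c(n)
```

The same recursion is what the Picard-Fuchs operator for the quintic imposes order by order on a power-series solution.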
The equations $\mathcal{Z}_j\,\pi_\alpha(a_i)=0$ express the invariance of the period integral under the torus action and imply that the period integrals only depend on special combinations of the parameters $a_i$: \begin{equation}\label{lcs} \pi_\alpha(a_i) \sim \pi_\alpha(z_a)\, ,\quad z_a=(-)^{l_0^a} \prod_i a_i^{l_i^a}\, , \end{equation} where the $z_a$, $a=1,\dots,n$, define local coordinates on the moduli space $\mathcal{M}$ of complex structures of $\check{X}$. The charge vector defining the A-model geometry in Eq.~(\ref{chargevec}) gives the mirror geometry defined by: \begin{equation}\label{Batyrev} p(\check{X})=\sum_{i=0}^5 a_i y_i =0\, , \end{equation} where the coordinates $y_i$ are subject to the relation \begin{equation} y_1 y_2 y_3 y_4 y_5 = y_0^5\,. \end{equation} Changing the coordinates $y_i=x_i^5,\, i=1,\dots,5$ shows the equivalence of (\ref{GP}) and (\ref{Batyrev}) with \begin{equation} \psi^{-5}=-\frac{a_1 a_2 a_3 a_4 a_5}{a_0^5}=: z\,. \end{equation} Furthermore, the following Picard-Fuchs (PF) operator annihilating $\tilde{\pi}_{\alpha}(z_i)=a_0\,\pi_{\alpha}(a_i)$ is found: \begin{equation}\label{PF} \mathcal{L}=\theta^4- 5z \prod_{i=1}^4 (5\theta+i)\,, \quad \theta=z \frac{d}{dz}\, . \end{equation} The discriminant of this operator is \begin{equation} \label{eq:Discriminant} \Delta=1-3125\,z\,, \end{equation} and the Yukawa coupling can be computed: \begin{equation} C_{zzz}=\frac{5}{z^3\, \Delta}\,. \end{equation} The PF operator gives a differential equation which has three regular singular points, corresponding to points in the moduli space of the family of quintics where the defining equation becomes singular or acquires additional symmetries; these are the points: \begin{itemize} \item $z=0$: the quintic at this value corresponds to the quotient of $\prod_{i=1}^5 x_i=0$, which is the most degenerate Calabi-Yau, and corresponds to large radius when translated to the A-model side.
\item $z=5^{-5}$: this corresponds to a discriminant locus of the differential equation (\ref{PF}) and also to the locus where the Jacobian of the defining equation vanishes. This type of singularity is called a \emph{conifold} singularity. \item $z=\infty$: this is known as the Gepner point in the moduli space of the quintic and it corresponds to a non-singular CY threefold with a large automorphism group. This is reflected by a monodromy of order 5. \end{itemize} \end{appendix}
\section{Introduction} Cantor expansions of real numbers were originally introduced by Cantor in 1869~\cite{Cantor:1869}. A real number $x\in[0,1)$ is represented via a base sequence $(b_n)_{n\in\mathbb N}$ of integers greater than or equal to $2$ as follows: \begin{equation} \label{eq:CantorSeries} x=\sum_{n=0}^{+\infty}\frac{a_n}{\prod_{i=0}^n b_i} \end{equation} where for each $n\in\mathbb N$, the digit $a_n$ belongs to the integer interval $[\![0,b_n-1]\!]$. If infinitely many digits $a_n$ are nonzero, then the series~\eqref{eq:CantorSeries} is called the Cantor series of $x$. Many studies are devoted to Cantor series, a large number of which are concerned with the digit frequencies; see~\cite{Erdos&Renyi:1959,Galambos:1976,Kirschenhofer&Tichy:1984,Renyi:1956} to cite just a few. Representations of real numbers using a real base were first defined by Rényi in 1957~\cite{Renyi:1957}. In this context, a real number $x\in[0,1)$ is represented via a real base $\beta$ greater than~$1$ as follows: \begin{equation} \label{eq:RealBaseRep} x=\sum_{n=0}^{+\infty}\frac{a_{n}}{\beta^{n+1}} \end{equation} where the digits $a_n$ can be chosen by using several appropriate algorithms. The most commonly used algorithm is the greedy algorithm, according to which for each $n\in\mathbb N$, $a_n=\lfloor \beta \,{T_\beta}^n(x)\rfloor$, where $T_\beta\colon [0,1)\to [0,1),\ x\mapsto \beta x-\lfloor\beta x\rfloor$. Expansions in a real base are extensively studied and we can only cite a few of the many possible references~\cite{Bertrand-Mathis:1986,Lothaire:2002,Parry:1960,Schmidt:1980}. This paper investigates series expansions of real numbers that are based on a sequence $\boldsymbol{\beta}=(\beta_n)_{n\in\mathbb N}$ of real numbers greater than~$1$. We call such a base sequence $\boldsymbol{\beta}$ a Cantor real base, and we talk about $\boldsymbol{\beta}$-representations.
In doing so, we generalize both representations of real numbers through Cantor series and real base representations of real numbers. This paper has the following organization. In Section~\ref{sec:CantorRealBases}, we introduce the basic definitions and we give a characterization of those infinite words $a$ over the alphabet $\mathbb R_{\ge 0}$ for which there exists a Cantor real base $\boldsymbol{\beta}$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$. In Section~\ref{sec:GreedyAlgorithm}, we define the greedy $\boldsymbol{\beta}$-representations of real numbers, which we call the $\boldsymbol{\beta}$-expansions. Then we prove several fundamental properties of $\boldsymbol{\beta}$-representations, each of which extends existing results on real base representations. In Section~\ref{sec:QuasiGreedyExpansions}, we introduce the quasi-greedy $\boldsymbol{\beta}$-expansion $d_{\boldsymbol{\beta}}^{*}(1)$ of $1$ and show that $d_{\boldsymbol{\beta}}^{*}(1)$ is the lexicographically greatest $\boldsymbol{\beta}$-representation not ending in $0^\omega$ of all real numbers in $[0,1]$. In Section~\ref{sec:AdmissibleSequences}, we prove a generalization of Parry's theorem~\cite{Parry:1960} characterizing those infinite words over $\mathbb N$ that are the greedy $\boldsymbol{\beta}$-representations of some real number in the interval $[0,1)$. In Section~\ref{sec:BShift}, we introduce the notion of the $\boldsymbol{\beta}$-shift, and we are able to give a description of the $\boldsymbol{\beta}$-shift in full generality. In Section~\ref{sec:AlternateBases}, which is the last and longest section, we focus on the periodic Cantor real bases, which we call alternate bases. We first give a characterization of those infinite words $a$ over the alphabet $\mathbb R_{\ge 0}$ for which there exists an alternate base $\boldsymbol{\beta}$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$.
Then we obtain a characterization of the $\boldsymbol{\beta}$-expansion of 1 among all $\boldsymbol{\beta}$-representations of $1$, which generalizes a result of Parry~\cite{Parry:1960}. Finally, generalizing Bertrand-Mathis' theorem~\cite{Bertrand-Mathis:1986}, we show that for any alternate base $\boldsymbol{\beta}$, the $\boldsymbol{\beta}$-shift is sofic if and only if all quasi-greedy $\boldsymbol{\beta}^{(i)}$-expansions of $1$ are ultimately periodic, where $\boldsymbol{\beta}^{(i)}$ is the $i$-th shift of the Cantor real base $\boldsymbol{\beta}$. \section{Cantor real bases} \label{sec:CantorRealBases} Let $\boldsymbol{\beta}=(\beta_n)_{n\in\mathbb N}$ be a sequence of real numbers greater than $1$ such that $\prod_{n\in\mathbb N}\beta_n=+\infty$. We call such a sequence $\boldsymbol{\beta}$ a \emph{Cantor real base}, or simply a \emph{Cantor base}. We define the \emph{$\boldsymbol{\beta}$-value (partial) map} $\mathrm{val}_{\boldsymbol{\beta}}\colon (\mathbb R_{\ge 0})^\mathbb N\to \mathbb R_{\ge 0}$ by \begin{equation} \label{eq:representationCantor} \mathrm{val}_{\boldsymbol{\beta}}(a)=\sum_{n\in\mathbb N}\frac{a_n}{\prod_{i=0}^n\beta_i} \end{equation} for any infinite word $a=a_0a_1a_2\cdots$ over $\mathbb R_{\ge 0}$, provided that the series converges. A \emph{$\boldsymbol{\beta}$-representation} of a nonnegative real number $x$ is an infinite word $a\in\mathbb N^\mathbb N$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=x$. In particular, if $\boldsymbol{\beta}=(\beta,\beta,\ldots)$, then for all $x\in [0,1]$, a $\boldsymbol{\beta}$-representation of $x$ is a $\beta$-representation of $x$ as defined by Rényi~\cite{Renyi:1957}. In this case, we do not distinguish the notation $\boldsymbol{\beta}$ and $\beta$: we write $\mathrm{val}_{\beta}$ and we talk about $\beta$-representations, as usual. 
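As a small numerical illustration of the definition \eqref{eq:representationCantor} (a sketch, not taken from the text): for an integer Cantor base, the word with digits $a_n=\beta_n-1$ has partial sums telescoping to $1-1/\prod_{i=0}^{n}\beta_i$, hence $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$:

```python
# Illustration: partial sums of val_beta for the 2-periodic Cantor base
# (2,3,2,3,...) with digits a_n = beta_n - 1 telescope to 1 - 1/(beta_0...beta_n).
def val_partial(base, digits):
    total, prod = 0.0, 1.0
    for b, a in zip(base, digits):
        prod *= b
        total += a / prod
    return total, prod

base = [2, 3] * 30                 # the alternate base (2,3,2,3,...)
digits = [b - 1 for b in base]     # the word 1 2 1 2 ...
total, prod = val_partial(base, digits)
assert abs(total - (1.0 - 1.0 / prod)) < 1e-12
assert abs(total - 1.0) < 1e-12
```

The same telescoping is what makes the condition $\prod_{n\in\mathbb N}\beta_n=+\infty$ natural: it is exactly what forces the tail $1/\prod_{i=0}^{n}\beta_i$ to vanish.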
Also, any sequence $\boldsymbol{\beta}=(\beta_n)_{n\in\mathbb N}$ of real numbers greater than $1$ that takes only finitely many values is a Cantor base since, in this case, the condition $\prod_{n\in\mathbb N}\beta_n=+\infty$ is trivially satisfied. We will need to represent real numbers not only in a fixed Cantor base $\boldsymbol{\beta}$ but also in all Cantor bases obtained by shifting $\boldsymbol{\beta}$. We define \[ \boldsymbol{\beta}^{(n)}=(\beta_n,\beta_{n+1},\ldots)\quad \text{for all }n\in\mathbb N. \] In particular, $\boldsymbol{\beta}^{(0)}=\boldsymbol{\beta}$. We will also need to consider shifted infinite words. Let us denote by $\sigma_A$ the \emph{shift operator} \[ \sigma_A\colon A^\mathbb N\to A^\mathbb N,\ a_0a_1a_2\cdots\mapsto a_1a_2a_3\cdots \] over the alphabet $A$. Whenever there is no ambiguity on the alphabet, we simply denote the shift operator by $\sigma$. Throughout this text, if $a$ is an infinite word, then for all $n\in\mathbb N$, $a_n$ designates its letter indexed by $n$, so that $a=a_0a_1a_2\cdots$. The $\boldsymbol{\beta}$-representations of $1$ will be of interest in what follows, in particular the greedy and the quasi-greedy expansions (see Sections~\ref{sec:GreedyAlgorithm} and~\ref{sec:QuasiGreedyExpansions}). We start our study by providing a characterization of those infinite words $a$ over the alphabet $\mathbb R_{\ge 0}$ for which there exists a Cantor real base $\boldsymbol{\beta}$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$. When $\boldsymbol{\beta}=(\beta,\beta,\ldots)$, for any infinite word $a$ over $\mathbb N$ satisfying some suitable conditions, the equation $\mathrm{val}_{\beta}(a)=1$ admits a unique solution $\beta>1$ (see~\cite[Corollary 7.2.10]{Lothaire:2002}). This classical result remains true for nonnegative real digits and weaker conditions on the infinite word $a$.
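For instance, for the word $a=110^\omega$, the unique real base $\beta$ with $\mathrm{val}_{\beta}(a)=1$ satisfies $1/\beta+1/\beta^2=1$, i.e.\ $\beta$ is the golden ratio, and it indeed lies in the interval $[a_0,a_0+1]=[1,2]$. A quick bisection sketch (illustration only, with hypothetical helper names):

```python
# Illustration: locate the unique beta > 1 with val_beta(1 1 0^omega) = 1
# by bisection; val(beta) = 1/beta + 1/beta^2 is decreasing in beta.
def val(beta, digits):
    return sum(a / beta ** (n + 1) for n, a in enumerate(digits))

digits = [1, 1]          # the word 1 1 0^omega (zero tail contributes nothing)
lo, hi = 1.0, 2.0        # a_0 <= beta <= a_0 + 1
for _ in range(60):
    mid = (lo + hi) / 2
    if val(mid, digits) > 1:   # beta too small
        lo = mid
    else:
        hi = mid
beta = (lo + hi) / 2
assert abs(beta - (1 + 5 ** 0.5) / 2) < 1e-12   # the golden ratio
assert abs(val(beta, digits) - 1.0) < 1e-12
```

The bisection converges because $\beta\mapsto\mathrm{val}_\beta(a)$ is continuous and strictly decreasing on $(1,\infty)$ for a fixed nonzero word with nonnegative digits.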
\begin{lemma} \label{lem:ExistRepresentation1} Let $a$ be an infinite word over $\mathbb R_{\ge 0}$ such that $a_n\in O(n^d)$ for some $d\in\mathbb N$. There exists a real base $\beta$ such that $\mathrm{val}_{\beta}(a)=1$ if and only if $\sum_{n\in\mathbb N}a_n> 1$, in which case $\beta$ is unique and $\beta\ge a_0$, and if moreover for all $n\in\mathbb N$, $a_n \le a_0$, then $\beta\le a_0+1$. \end{lemma} \begin{proof} If $\sum_{n\in\mathbb N}a_n\le 1$ then for all real bases $\beta$, $\mathrm{val}_{\beta}(a)< 1$. Indeed, this is obvious if $a= 0^\omega$, and else $\mathrm{val}_{\beta}(a)< \sum_{n\in\mathbb N}a_n\le 1$. Now, suppose that $\sum_{n\in\mathbb N}a_n> 1$. Let $N\in \mathbb N$ be such that $\sum_{n=0}^{N}a_n> 1$. The function $f\colon[0,1)\to \mathbb R,\ x\mapsto \sum_{n\in\mathbb N}a_nx^{n+1}$ is well-defined, continuous, increasing and such that $f(0)=0$ and that for all $x\in[0,1)$, $f(x)\ge \sum_{n=0}^{N}a_nx^{n+1}$. The function $g\colon\mathbb R\to \mathbb R,\ x\mapsto \sum_{n=0}^{N}a_nx^{n+1}$ is continuous, increasing and such that $g(0)=0$ and $g(1)>1$. Therefore, there exists a unique $x_0\in(0,1)$ such that $g(x_0)=1$, and hence such that $f(x_0)\ge 1$. Now, there exists a unique $\gamma\in(0,x_0]$ such that $f(\gamma)=1$. By setting $\beta=\frac{1}{\gamma}$, we get that $\beta\ge \frac{1}{x_0}>1$ and $\mathrm{val}_{\beta}(a)=f\left(\frac{1}{\beta}\right)=1$. Moreover, $\beta\ge a_0$ for otherwise $f\left(\frac{1}{\beta}\right)>f\left(\frac{1}{a_0}\right)\ge 1$. If moreover for all $n\in\mathbb N$, $a_n \le a_0$, then $\beta\le a_0+1$ for otherwise we would have \[ \mathrm{val}_{\beta}(a) =\sum_{n\in\mathbb N} \frac{a_n}{\beta^{n+1}} < a_0 \sum_{n\in\mathbb N}\frac{1}{(a_0+1)^{n+1}} =1. \] \end{proof} No upper bound on the growth order of the digits $a_n$ is needed in order to find a Cantor base $\boldsymbol{\beta}$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$. 
\begin{lemma} \label{lem:ExistRepresentation2} Let $a$ be an infinite word over $\mathbb R_{\ge 0}$ such that $\sum_{n\in\mathbb N}a_n=+\infty$. Then there exists a Cantor base $\boldsymbol{\beta}$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$. \end{lemma} \begin{proof} First of all, observe that the hypothesis implies that $a$ does not end in $0^\omega$ and that $\prod_{n\in\mathbb N}(a_n+1)=+\infty$. We define two sequences of nonnegative integers $(n_k)_{1\le k\le K}$ and $(\ell_k)_{1\le k\le K}$ where $K\in\mathbb N\cup\{+\infty\}$. The length $K$ of these two sequences is the number of zero blocks in $a$, i.e.\ the factors of the form $0^\ell$ which are neither preceded nor followed by $0$ in $a$. Two cases stand out: either $K\in\mathbb N$ or $K=+\infty$. We describe the two cases at once. In order to do so, it should be understood that the parts of the definition where $k> K$ should just be ignored when $K\in\mathbb N$. Let $n_1$ denote the least $n\in\mathbb N$ such that $a_n=0$ and let $\ell_1$ denote the least $\ell\in\mathbb N$ such that $a_{n_1+\ell}>0$. Then for $k\ge 2$, let $n_k$ denote the least integer $n> n_{k-1}+\ell_{k-1}$ such that $a_n=0$ and let $\ell_k$ denote the least $\ell\in\mathbb N$ such that $a_{n_k+\ell}>0$. Thus, $(n_k)_{1\le k\le K}$ is the sequence of positions of appearance of the successive zero blocks in $a$ and $(\ell_k)_{1\le k\le K}$ is the sequence of lengths of these blocks. Next, for all $k\in[\![1,K]\!]$, we pick any $\alpha_k$ in the interval $(1,\sqrt[\ell_k]{a_{n_k+\ell_k}+1})$. For all $n\in\mathbb N$, we define \[ \beta_n= \begin{cases} a_n+1 & \text{if }n\in [\![0,n_1-1]\!]\text{ or }n\in \bigcup_{k=1}^K[\![n_k+\ell_k+1,n_{k+1}-1]\!]\\ \alpha_k & \text{if }n\in[\![n_k,n_k+\ell_k-1]\!] \text{ for some }k\in[\![1,K]\!]\\ \frac{a_n+1}{\alpha_k^{\ell_k}} &\text{if }n=n_k+\ell_k \text{ for some }k\in[\![1,K]\!] \end{cases} \] where we set $n_{K+1}=+\infty$ if $K\in\mathbb N$.
In particular if $K=0$, i.e.\ if for all $n\in\mathbb N$, $a_n>0$, then for all $n\in\mathbb N$, $\beta_n=a_n+1$. Let us show that in any case, the obtained sequence $\boldsymbol{\beta}=(\beta_n)_{n\in\mathbb N}$ is such that $\prod_{n\in \mathbb N}\beta_n=+\infty$ and $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$. By construction, \[ \prod_{n\in\mathbb N}\beta_n =\prod_{n=0}^{n_1-1}(a_n+1)\cdot \prod_{k=1}^K \left(\alpha_k^{\ell_k}\cdot \frac{a_{n_k+\ell_k}+1}{\alpha_k^{\ell_k}} \cdot \prod_{n=n_k+\ell_k+1}^{n_{k+1}-1}(a_n+1) \right) =\prod_{n\in\mathbb N}(a_n+1). \] By induction we can show that \[ \sum_{n=0}^{n_k+\ell_k}\frac{a_n}{\prod_{i=0}^n\beta_i} = 1-\frac{1}{\prod_{i=0}^{n_k+\ell_k}\beta_i} \quad\text{for all }k\in[\![1,K]\!]. \] If $K=+\infty$ then we obtain that $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$ by letting $k$ tend to infinity. Otherwise, $K\in\mathbb N$. Set $n_0=-1$ and $\ell_0=0$. By induction again, we can show that \[ \sum_{n=n_K+\ell_K+1}^m \frac{a_n}{\prod_{i=n_K+\ell_K+1}^n\beta_i} =1-\frac{1}{\prod_{i=n_K+\ell_K+1}^m\beta_i} \quad\text{for all }m\in\mathbb N. \] By letting $m$ tend to infinity, we get \[ \mathrm{val}_{\boldsymbol{\beta}^{(n_K+\ell_K+1)}}(\sigma^{n_K+\ell_K+1}(a))=1. \] Finally, we obtain \begin{align*} \mathrm{val}_{\boldsymbol{\beta}}(a) &=\sum_{n=0}^{n_K+\ell_K} \frac{a_n}{\prod_{i=0}^n\beta_i}+ \sum_{n=n_K+\ell_K+1}^{+\infty} \frac{a_n}{\prod_{i=0}^n\beta_i}\\ &=1-\frac{1}{\prod_{i=0}^{n_K+\ell_K}\beta_i} +\frac{\mathrm{val}_{\boldsymbol{\beta}^{(n_K+\ell_K+1)}}(\sigma^{n_K+\ell_K+1}(a))}{\prod_{i=0}^{n_K+\ell_K}\beta_i}\\ &= 1. \end{align*} \end{proof} \begin{proposition} \label{prop:ExistRepresentation} Let $a$ be an infinite word over $\mathbb R_{\ge 0}$. There exists a Cantor base $\boldsymbol{\beta}$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$ if and only if $\sum_{n\in\mathbb N}a_n> 1$. 
\end{proposition} \begin{proof} As in the proof of Lemma~\ref{lem:ExistRepresentation1}, the condition $\sum_{n\in\mathbb N}a_n> 1$ is necessary. Now, suppose that $\sum_{n\in\mathbb N}a_n> 1$. If $\sum_{n\in\mathbb N}a_n=+\infty$ then we use Lemma~\ref{lem:ExistRepresentation2}. Otherwise, we have $1<\sum_{n\in\mathbb N}a_n<+\infty$ and we apply Lemma~\ref{lem:ExistRepresentation1}. \end{proof} \section{The greedy algorithm} \label{sec:GreedyAlgorithm} For $x\in[0,1]$, a distinguished $\boldsymbol{\beta}$-representation $\varepsilon_0(x)\varepsilon_1(x)\varepsilon_2(x)\cdots$ is given by the \emph{greedy algorithm}: \begin{itemize} \item $\varepsilon_{0}(x)=\floor{\beta_0 x}$ and $r_0(x)=\beta_0 x-\varepsilon_0(x)$; \item $\varepsilon_{n}(x)=\floor{\beta_n r_{n-1}(x)}$ and $r_n(x)=\beta_n r_{n-1}(x)-\varepsilon_n(x)$ for $n\in\mathbb N_{\ge 1}$. \end{itemize} The obtained $\boldsymbol{\beta}$-representation of $x$ is denoted by $d_{\boldsymbol{\beta}}(x)$ and is called the \emph{$\boldsymbol{\beta}$-expansion} of $x$. Note that the $n$-th digit $\varepsilon_{n}(x)$ belongs to $\{0,\ldots,\floor{\beta_n}\}$. We let $A_{\boldsymbol{\beta}}$ denote the (possibly infinite) alphabet $\{0,\ldots,\sup_{n\in\mathbb N}\floor{\beta_n}\}$. The algorithm is called greedy since at each step it chooses the largest possible digit. Indeed, consider $x\in [0,1]$ and $\ell\in\mathbb N$, and suppose that the digits $\varepsilon_0(x),\ldots ,\varepsilon_{\ell-1}(x)$ are already known. Then the digit $\varepsilon_\ell(x)$ is the largest element of $ [\![0,\floor{\beta_\ell}]\!]$ such that $\sum_{n=0}^\ell \frac{\varepsilon_n(x)}{\prod_{i=0}^n \beta_i}\le x$. Thus \[ x=\sum_{n=0}^\ell\frac{\varepsilon_n(x)}{\prod_{i=0}^n\beta_i} +\frac{r_\ell(x)}{\prod_{i=0}^\ell\beta_i} \] where $r_\ell(x)\in[0,1)$.
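The greedy algorithm translates directly into code. The following Python sketch is illustrative only: the names are ours, and floating-point rounding can corrupt a digit when $\beta_n r_{n-1}(x)$ is extremely close to an integer. The base is passed as a function $k\mapsto\beta_k$, so constant, alternate, and general Cantor bases are all covered.

```python
from math import floor

def greedy_digits(x, beta, n):
    """First n digits of the greedy expansion of x in [0, 1] with respect to
    the base sequence given by the function beta: k -> beta_k > 1.
    At each step eps_k = floor(beta_k * r) and r is replaced by the rest."""
    digits, r = [], x
    for k in range(n):
        b = beta(k)
        eps = floor(b * r)
        digits.append(eps)
        r = b * r - eps
    return digits

# Alternate base (3, phi, phi): the greedy expansion of 1 is 3 0^omega.
phi = (1 + 5 ** 0.5) / 2
print(greedy_digits(1.0, lambda k: (3.0, phi, phi)[k % 3], 6))  # -> [3, 0, 0, 0, 0, 0]
```

Running the same function with the shifted base $k\mapsto\beta_{k+n}$ produces the digits of $d_{\boldsymbol{\beta}^{(n)}}(x)$.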
Note that since a Cantor base satisfies $\prod_{n\in\mathbb N}\beta_n=+\infty$, the latter equality implies the convergence of the greedy algorithm and that $x=\mathrm{val}_{\boldsymbol{\beta}}(d_{\boldsymbol{\beta}}(x))$. We let $D_{\boldsymbol{\beta}}$ denote the subset of $A_{\boldsymbol{\beta}}^\mathbb N$ of all $\boldsymbol{\beta}$-expansions of real numbers in the interval $[0,1)$: \[ D_{\boldsymbol{\beta}}=\{d_{\boldsymbol{\beta}}(x)\colon x\in[0,1)\}. \] In what follows, the $\boldsymbol{\beta}$-expansion of $1$ will play a special role. For the sake of clarity, we denote its digits by $\varepsilon_{n}$ instead of $\varepsilon_{n}(1)$. We sometimes write $\varepsilon_{\boldsymbol{\beta},n}(x)$ and $\varepsilon_{\boldsymbol{\beta},n}$ instead of $\varepsilon_{n}(x)$ and $\varepsilon_{n}$ when the Cantor base $\boldsymbol{\beta}$ needs to be emphasized. As previously mentioned, if $\boldsymbol{\beta}=(\beta,\beta,\ldots)$, then for all $x\in [0,1]$, the $\boldsymbol{\beta}$-expansion of $x$ is equal to the usual $\beta$-expansion of $x$ as defined by Rényi~\cite{Renyi:1957} and we write indistinctly $\boldsymbol{\beta}$ or $\beta$. We can also express the obtained digits $\varepsilon_{n}(x)$ and remainders $r_n(x)$ thanks to the $\beta_n$-transformations. For $\beta>1$, the $\beta$-transformation is the map \[ T_{\beta}\colon [0,1)\to [0,1),\ x \mapsto \beta x -\floor{\beta x}. \] Then for all $x\in[0,1)$ and $n\in\mathbb N$, we have \[ \varepsilon_{n}(x) =\floor{\beta_n\big(T_{\beta_{n-1}}\circ \cdots \circ T_{\beta_0}(x)\big)} \quad \text{and}\quad r_{n}(x) =T_{\beta_n}\circ \cdots \circ T_{\beta_0}(x). \] \begin{example} If there exists $n\in \mathbb N$ such that $\beta_n$ is an integer (without any restriction on the other $\beta_m$), then $d_{\boldsymbol{\beta}^{(n)}}(1)=\beta_n 0^\omega$. \end{example} \begin{example} For $n\in \mathbb N$, let $\alpha_n=1+\frac{1}{2^{n+1}}$ and $\beta_n=2+\frac{1}{2^{n+1}}$. 
The sequence $\boldsymbol{\alpha}=(\alpha_n)_{n\in\mathbb N}$ is not a Cantor base since $\prod_{n\in\mathbb N}\alpha_n<+\infty$. If we perform the greedy algorithm on $x=1$ for the sequence $\boldsymbol{\alpha}$, we obtain the sequence of digits $10^\omega$, which is clearly not an $\boldsymbol{\alpha}$-representation of $1$. However, the sequence $\boldsymbol{\beta}=(\beta_n)_{n\in\mathbb N}$ is indeed a Cantor base since $\prod_{n\in\mathbb N}\beta_n=+\infty$. \end{example} \begin{example} Let $\alpha=\frac{1+ \sqrt{13}}{2}$ and $\beta=\frac{5+ \sqrt{13}}{6}$. \begin{enumerate} \item Consider the Cantor base $\boldsymbol{\beta}=(\beta_n)_{n\in\mathbb N}$ defined by \[ \beta_n=\begin{cases} \alpha & \text{if } | \mathrm{rep}_2(n) |_1 \equiv 0 \pmod 2\\ \beta & \text{otherwise} \end{cases} \] for all $n\in \mathbb N$, where $\mathrm{rep}_2$ is the function mapping any nonnegative integer to its $2$-expansion. We get $\boldsymbol{\beta}=(\alpha,\beta,\beta,\alpha,\beta,\alpha,\alpha,\beta, \ldots)$ where the infinite word $\beta_0\beta_1\beta_2\cdots$ is the Thue-Morse word over the alphabet $\{\alpha,\beta\}$. We compute $d_{\boldsymbol{\beta}}(1)=20010110^{\omega}$, $\DBi{1}(1)=1010110^\omega$ and $\DBi{2}(1)=110^{\omega}$. \item Consider $\boldsymbol{\beta}=(\sqrt{13},\alpha,\beta,\alpha,\beta,\alpha,\beta,\ldots)$. It is easily checked that $d_{\boldsymbol{\beta}}(1)=3(10)^\omega$ and that for all $m\in\mathbb N$, $\DBi{2m+1}(1)=2010^\omega$ and $\DBi{2m+2}(1)=110^\omega$. \end{enumerate} \end{example} We call an \emph{alternate base} a periodic Cantor base, i.e.\ a Cantor base for which there exists $p\in\mathbb N_{\ge 1}$ such that for all $n\in\mathbb N$, $\beta_n=\beta_{n+p}$. In this case we simply write $\boldsymbol{\beta}=(\overline{\beta_0,\ldots,\beta_{p-1}})$ and the integer $p$ is called the \emph{length} of the alternate base $\boldsymbol{\beta}$.
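As an aside, the Thue--Morse base from item (1) of the example above is straightforward to generate: the $n$-th entry depends only on the parity of the number of $1$s in the binary expansion of $n$. A small illustrative Python sketch (the function name is ours):

```python
def thue_morse_base(alpha, beta, n):
    """First n entries of the Cantor base whose k-th term is alpha when
    |rep_2(k)|_1 is even and beta otherwise, i.e. the Thue-Morse word over
    {alpha, beta}.  Such a base takes only two values greater than 1, so
    the condition prod beta_n = +infinity is automatic."""
    return [alpha if bin(k).count("1") % 2 == 0 else beta for k in range(n)]

print(thue_morse_base("a", "b", 8))  # -> ['a', 'b', 'b', 'a', 'b', 'a', 'a', 'b']
```

The printed prefix matches the pattern $(\alpha,\beta,\beta,\alpha,\beta,\alpha,\alpha,\beta,\ldots)$ of the example.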
In what follows, most examples will be alternate bases and Section~\ref{sec:AlternateBases} will be specifically devoted to their study. \begin{example} \label{ex:3phiphi} Let $\varphi=\frac{1+\sqrt{5}}{2}$ be the Golden Ratio and let $\boldsymbol{\beta}=(\overline{3,\varphi,\varphi})$. For all $m\in\mathbb N$, we have $\DBi{3m}(1)=30^\omega$, $\DBi{3m+1}(1)=110^\omega$ and $\DBi{3m+2}(1)=1(110)^\omega$. \end{example} Let us show that the classical properties of the $\beta$-expansion theory are still valid for Cantor bases. Some are mere adaptations of the corresponding proofs in \cite{Lothaire:2002}, but we provide the details for the sake of completeness. From now on, unless otherwise stated, we consider a fixed Cantor base $\boldsymbol{\beta}=(\beta_n)_{n\in\mathbb N}$. \begin{proposition} For all $x\in [0,1)$ and all $n\in \mathbb N$, we have \[ \sigma^n\circ d_{\boldsymbol{\beta}} (x) = d_{\boldsymbol{\beta}^{(n)}}\circ T_{\beta_{n-1}}\circ \cdots\circ T_{\beta_0} (x). \] \end{proposition} \begin{proof} This is a straightforward verification. \end{proof} \begin{lemma} \label{lem:Greedy} For all infinite words $a$ over $\mathbb N$ and all $x\in[0,1]$, $a=d_{\boldsymbol{\beta}}(x)$ if and only if $\mathrm{val}_{\boldsymbol{\beta}}(a)=x$ and for all $\ell\in\mathbb N$, \begin{equation} \label{eq:GreedyCondition} \sum_{n=\ell+1}^{+\infty}\frac{a_n}{\prod_{i=0}^n{\beta_i}} < \frac{1}{\prod_{i=0}^{\ell}{\beta_i}}. \end{equation} \end{lemma} \begin{proof} From the greedy algorithm, for all $x\in[0,1]$, $\mathrm{val}_{\boldsymbol{\beta}}(d_{\boldsymbol{\beta}}(x))=x$ and for all $\ell\in\mathbb N$, \[ \left(\sum_{n=\ell+1}^{+\infty} \frac{\varepsilon_n(x)}{\prod_{i=0}^n{\beta_i}}\right) \prod_{i=0}^{\ell}{\beta_i} =\left(x-\sum_{n=0}^{\ell} \frac{\varepsilon_n(x)}{\prod_{i=0}^n{\beta_i}}\right) \prod_{i=0}^{\ell}{\beta_i} =r_{\ell}(x)<1.
\] Conversely, suppose that $a$ is an infinite word over $\mathbb N$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=x$ and such that for all $\ell\in\mathbb N$, \eqref{eq:GreedyCondition} holds. Let us show by induction that for all $m\in\mathbb N$, $a_m=\varepsilon_m(x)$. From~\eqref{eq:GreedyCondition} for $\ell=0$, we get that $x-\frac{a_0}{\beta_0}<\frac{1}{\beta_0}$. Thus, $\beta_0 x-1<a_0$. Since $\frac{a_0}{\beta_0}\le x$, we get that $a_0\le \beta_0 x$. Therefore, $a_0=\floor{\beta_0 x}=\varepsilon_0(x)$. Now, suppose that $m\in\mathbb N_{\ge 1}$ and that for $n\in[\![0,m-1]\!]$, $a_n=\varepsilon_n(x)$. Then \[ a_m+ \left(\sum_{n=m+1}^{+\infty} \frac{a_n}{\prod_{i=0}^n{\beta_i}}\right) \prod_{i=0}^m{\beta_i} =\varepsilon_m(x)+r_m(x). \] By using~\eqref{eq:GreedyCondition} for $\ell=m$ and since $r_m(x)<1$, we obtain that $a_m=\varepsilon_m(x)$. \end{proof} \begin{proposition} \label{prop:ShiftedGreedy} Let $a$ be a $\boldsymbol{\beta}$-representation of some real number $x$ in $[0,1]$. Then the following four assertions are equivalent. \begin{enumerate} \item The infinite word $a$ is the $\boldsymbol{\beta}$-expansion of $x$. \item For all $n\in \mathbb N_{\ge 1}$, $\mathrm{val}_{\boldsymbol{\beta}^{(n)}}(\sigma^{n}(a))<1$. \item The infinite word $\sigma(a)$ belongs to $D_{\boldsymbol{\beta}^{(1)}}$. \item For all $n\in \mathbb N_{\ge 1}$, $\sigma^{n}(a)$ belongs to $D_{\boldsymbol{\beta}^{(n)}}$. \end{enumerate} \end{proposition} \begin{proof} Since $\mathrm{val}_{\boldsymbol{\beta}}(a)=x\in[0,1]$, it follows from Lemma~\ref{lem:Greedy} that $a=d_{\boldsymbol{\beta}}(x)$ if and only if for all $\ell\in\mathbb N$, \eqref{eq:GreedyCondition} holds. In order to obtain the equivalences between the first three items, it suffices to note that the greedy condition~\eqref{eq:GreedyCondition} can be rewritten as $\mathrm{val}_{\boldsymbol{\beta}^{(\ell+1)}}(\sigma^{\ell+1}(a))<1$. Clearly $(4)$ implies $(3)$. 
Finally we obtain that $(3)$ implies $(4)$ by iterating the implication $(1) \implies (3)$. \end{proof} \begin{corollary} \label{cor:ShiftedGreedy} An infinite word $a$ over $\mathbb N$ belongs to $D_{\boldsymbol{\beta}}$ if and only if for all $n\in\mathbb N$, $\mathrm{val}_{\boldsymbol{\beta}^{(n)}}(\sigma^n(a))<1$. \end{corollary} \begin{proposition} \label{pro:GreedyLexGreatest} The $\boldsymbol{\beta}$-expansion of a real number $x\in [0,1]$ is lexicographically maximal among all $\boldsymbol{\beta}$-representations of $x$. \end{proposition} \begin{proof} Let $x\in [0,1]$ and $a\in\mathbb N^\mathbb N$ be a $\boldsymbol{\beta}$-representation of $x$. Proceed by contradiction and suppose that $a >_{\mathrm{lex}} d_{\boldsymbol{\beta}}(x)$. There exists $\ell\in\mathbb N$ such that $\varepsilon_0(x)\cdots \varepsilon_{\ell-1}(x)=a_0\cdots a_{\ell-1}$ and $a_\ell>\varepsilon_\ell(x)$. Then \[ \sum_{n=\ell}^{+\infty}\frac{\varepsilon_n(x)}{\prod_{i=0}^n{\beta_i}} = \sum_{n=\ell}^{+\infty}\frac{a_n}{\prod_{i=0}^n{\beta_i}} \ge \frac{\varepsilon_\ell(x)+1}{\prod_{i=0}^\ell{\beta_i}} + \sum_{n=\ell+1}^{+\infty}\frac{a_n}{\prod_{i=0}^n{\beta_i}} \] and hence \[ \sum_{n=\ell+1}^{+\infty}\frac{\varepsilon_n(x)}{\prod_{i=0}^n{\beta_i}} \ge \frac{1}{\prod_{i=0}^{\ell}{\beta_i}} \] which is impossible by Lemma~\ref{lem:Greedy}. \end{proof} \begin{proposition} \label{pro:Increasing} The function $d_{\boldsymbol{\beta}}\colon [0,1]\to {A_{\boldsymbol{\beta}}}^\mathbb N$ is increasing: \[ \forall x,y \in [0,1],\quad x<y \iff d_{\boldsymbol{\beta}}(x) <_{\mathrm{lex}} d_{\boldsymbol{\beta}}(y). \] \end{proposition} \begin{proof} Suppose that $d_{\boldsymbol{\beta}}(x)<_{\mathrm{lex}} d_{\boldsymbol{\beta}}(y)$. There exists $\ell\in\mathbb N$ such that $\varepsilon_0(x)\cdots \varepsilon_{\ell-1}(x)=\varepsilon_0(y)\cdots \varepsilon_{\ell-1}(y)$ and $\varepsilon_\ell(x)<\varepsilon_\ell(y)$. 
By Lemma~\ref{lem:Greedy}, we get \[ x =\sum_{n\in\mathbb N}\frac{\varepsilon_n(x)}{\prod_{i=0}^n{\beta_i}} < \sum_{n=0}^{\ell-1}\frac{\varepsilon_n(y)}{\prod_{i=0}^n{\beta_i}} + \frac{\varepsilon_\ell(y)-1}{\prod_{i=0}^\ell{\beta_i}} + \frac{1}{\prod_{i=0}^\ell{\beta_i}} = \sum_{n=0}^{\ell}\frac{\varepsilon_n(y)}{\prod_{i=0}^n{\beta_i}} \le y. \] Conversely, since $d_{\boldsymbol{\beta}}$ is injective and the lexicographic order is total, $x<y$ implies $d_{\boldsymbol{\beta}}(x) <_{\mathrm{lex}} d_{\boldsymbol{\beta}}(y)$. \end{proof} \begin{corollary} \label{cor:DB1LexGreatest} If $a$ is an infinite word over $\mathbb N$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)\le 1$, then $a\le_{\mathrm{lex}} d_{\boldsymbol{\beta}}(1)$. In particular, $d_{\boldsymbol{\beta}}(1)$ is lexicographically maximal among all $\boldsymbol{\beta}$-representations of all real numbers in $[0,1]$. \end{corollary} \begin{proof} Let $a$ be an infinite word over $\mathbb N$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)\le 1$. By Propositions~\ref{pro:GreedyLexGreatest} and~\ref{pro:Increasing}, $a\le_{\mathrm{lex}}d_{\boldsymbol{\beta}}(\mathrm{val}_{\boldsymbol{\beta}}(a))\le_{\mathrm{lex}}d_{\boldsymbol{\beta}}(1)$. \end{proof} Recall the following property of $\beta$-expansions: for two real bases $\alpha$ and $\beta$, we have $\alpha < \beta$ if and only if $d_\alpha(1)<_{\mathrm{lex}} d_\beta(1)$ \cite{Parry:1960}. The following proposition generalizes a weaker version of this property to Cantor bases. \begin{proposition} Let $\boldsymbol{\alpha}=(\alpha_n)_{n\in\mathbb N}$ and $\boldsymbol{\beta}=(\beta_n)_{n\in\mathbb N}$ be two Cantor bases such that for all $n\in\mathbb N$, $\prod_{i=0}^n\alpha_i \le \prod_{i=0}^n\beta_i$. Then for all $x \in [0,1]$, we have $d_{\boldsymbol{\alpha}}(x) \le_{\mathrm{lex}} d_{\boldsymbol{\beta}}(x)$. \end{proposition} \begin{proof} Let $x \in [0,1]$ and suppose to the contrary that $d_{\boldsymbol{\alpha}}(x)>_{\mathrm{lex}}d_{\boldsymbol{\beta}}(x)$.
Thus, there exists $\ell\in\mathbb N$ such that $\varepsilon_{\boldsymbol{\alpha},0}(x)\cdots \varepsilon_{\boldsymbol{\alpha},\ell-1}(x)=\varepsilon_{\boldsymbol{\beta},0}(x)\cdots \varepsilon_{\boldsymbol{\beta},\ell-1}(x)$ and $\varepsilon_{\boldsymbol{\alpha},\ell}(x)>\varepsilon_{\boldsymbol{\beta},\ell}(x)$. From Lemma~\ref{lem:Greedy} and from the hypothesis, we obtain that \[ x \le \sum_{n=0}^{\ell-1} \frac{\varepsilon_{\boldsymbol{\alpha},n}(x)}{\prod_{i=0}^n{\beta_i}} +\frac{\varepsilon_{\boldsymbol{\alpha},\ell}(x)-1}{\prod_{i=0}^\ell{\beta_i}} +\sum_{n=\ell+1}^{+\infty} \frac{\varepsilon_{\boldsymbol{\beta},n}(x)}{\prod_{i=0}^n{\beta_i}} <\sum_{n=0}^{\ell} \frac{\varepsilon_{\boldsymbol{\alpha},n}(x)}{\prod_{i=0}^n{\beta_i}} \le \sum_{n=0}^{\ell} \frac{\varepsilon_{\boldsymbol{\alpha},n}(x)}{\prod_{i=0}^n{\alpha_i}} \le x, \] a contradiction. \end{proof} \begin{corollary} Let $\boldsymbol{\alpha}=(\alpha_n)_{n\in\mathbb N}$ and $\boldsymbol{\beta}=(\beta_n)_{n\in\mathbb N}$ be two Cantor bases such that for all $n\in\mathbb N$, $\alpha_n \le\beta_n$. Then for all $x \in [0,1]$, we have $d_{\boldsymbol{\alpha}}(x)\le_{\mathrm{lex}} d_{\boldsymbol{\beta}}(x)$. \end{corollary} It is not true that $d_{\boldsymbol{\alpha}}(1)<_{\mathrm{lex}} d_{\boldsymbol{\beta}}(1)$ implies that for all $n\in\mathbb N$, $\prod_{i=0}^{n}\alpha_i \le \prod_{i=0}^{n}\beta_i$ as the following example shows. The same example shows that the lexicographic order on the Cantor bases is not sufficient either. Here, the term lexicographic order refers to the following order: $\boldsymbol{\alpha}<\boldsymbol{\beta}$ whenever there exists $\ell\in\mathbb N$ such that $\alpha_n=\beta_n$ for $n\in[\![0,\ell-1]\!]$ and $\alpha_\ell<\beta_\ell$. \begin{example} Let $\boldsymbol{\alpha}=(\overline{2+\sqrt{3},2})$ and $\boldsymbol{\beta}=(\overline{2+\sqrt{2},5})$. 
Then $d_{\boldsymbol{\alpha}}(1)=31^\omega$ and $d_{\boldsymbol{\beta}}(1)$ starts with the prefix $32$, hence $d_{\boldsymbol{\alpha}}(1)<_{\mathrm{lex}} d_{\boldsymbol{\beta}}(1)$. \end{example} \section{Quasi-greedy expansions} \label{sec:QuasiGreedyExpansions} A $\boldsymbol{\beta}$-representation is said to be \emph{finite} if it ends with infinitely many zeros, and \emph{infinite} otherwise. The \emph{length} of a finite $\boldsymbol{\beta}$-representation is the length of the longest prefix ending in a non-zero digit. When a $\boldsymbol{\beta}$-representation is finite, we usually omit the tail of zeros. When the $\boldsymbol{\beta}$-expansion of $1$ is finite, we show how to modify it in order to obtain an infinite $\boldsymbol{\beta}$-representation of $1$ that is lexicographically maximal among all infinite $\boldsymbol{\beta}$-representations of $1$. The obtained $\boldsymbol{\beta}$-representation is denoted by $d_{\boldsymbol{\beta}}^{*}(1)$ and is called the \emph{quasi-greedy $\boldsymbol{\beta}$-expansion} of $1$. It is defined recursively as follows: \begin{align} \label{def:quasigreedy} d_{\boldsymbol{\beta}}^*(1) =\begin{cases} d_{\boldsymbol{\beta}}(1) &\text{if } d_{\boldsymbol{\beta}}(1) \text{ is infinite} \\ \varepsilon_0\cdots \varepsilon_{\ell-2}(\varepsilon_{\ell-1} -1)d_{\boldsymbol{\beta}^{(\ell)}}^{*}(1) &\text{if } d_{\boldsymbol{\beta}}(1)=\varepsilon_0\cdots \varepsilon_{\ell-1} \text{ with } \ell \in \mathbb N_{\ge 1},\ \varepsilon_{\ell-1} >0. \end{cases} \end{align} \begin{example}\label{ex2:3phiphi} Let $\boldsymbol{\beta}=(\overline{3,\varphi,\varphi})$ be the alternate base already considered in Example~\ref{ex:3phiphi}. Then we directly have that for all $m\in\mathbb N$, $\qDBi{3m+2}(1)=\DBi{3m+2}(1)=1(110)^\omega$. In order to compute $\qDBi{3m}(1)$ and $\qDBi{3m+1}(1)$, we need to go through the definition several times.
For all $m\in\mathbb N$, we compute $\qDBi{3m}(1)=2\qDBi{3m+1}(1)=210\qDBi{3m+3}(1)=210\qDBi{3m}(1)=(210)^\omega$ and $\qDBi{3m+1}(1) =10\qDBi{3m+3}(1)=10(210)^\omega=(102)^\omega$. \end{example} \begin{example} Let $\boldsymbol{\beta}=(\overline{\beta_0,\ldots,\beta_{p-1}})$ be an alternate base such that for all $i\in[\![0,p-1]\!]$, $\beta_i\in\mathbb N_{\ge 2}$. Then for all $i\in[\![0,p-1]\!]$, $\DBi{i}(1)=\beta_i0^\omega$ and \[ \qDBi{i}(1)=((\beta_i-1)\cdots(\beta_{p-1}-1)(\beta_0-1)\cdots(\beta_{i-1}-1))^\omega. \] \end{example} When $\boldsymbol{\beta}=(\beta,\beta,\ldots)$, we recover the usual definition of the quasi-greedy $\beta$-expansion~\cite{Daroczy&Katai:1995,Komornik&Loreti:2007}. In particular, it is easy to check that in this case, if $d_{\boldsymbol{\beta}}(1)=\varepsilon_0\cdots \varepsilon_{\ell-1}$ with $\ell\in\mathbb N_{\ge 1}$ and $\varepsilon_{\ell-1}> 0$, then the quasi-greedy expansion is purely periodic and $d_{\boldsymbol{\beta}}^{*}(1)=(\varepsilon_0\cdots \varepsilon_{\ell-2}(\varepsilon_{\ell-1}-1))^\omega$. For arbitrary Cantor bases, the situation is more complicated and the quasi-greedy expansion need not be ultimately periodic. \begin{example} \label{ex:1+sqrt{13}} Consider the alternate base $\boldsymbol{\beta}=\big(\overline{\frac{1+\sqrt{13}}{2},\frac{5+\sqrt{13}}{6}}\big)$. We compute $d_{\boldsymbol{\beta}}(1)=201$ and $\DBi{1}(1)=11$. Then $\qDBi{1}(1)=(10)^\omega$ and $d_{\boldsymbol{\beta}}^{*}(1)=200\qDBi{1}(1)=200(10)^\omega$. \end{example} Moreover, even if the $\boldsymbol{\beta}$-expansion of $1$ is finite, the quasi-greedy $\boldsymbol{\beta}$-representation can be infinite and not ultimately periodic. Suppose that $d_{\boldsymbol{\beta}}(1)$ is finite and that an infinite quasi-greedy expansion is involved during the computation of $d_{\boldsymbol{\beta}}^{*}(1)$. Let $n\in \mathbb N_{\ge 1}$ be the positive integer such that $\qDBi{n}(1)$ is the involved infinite expansion.
Then $d_{\boldsymbol{\beta}}^{*}(1)$ is ultimately periodic if and only if so is $\qDBi{n}(1)$. \begin{example} \label{ex:PisotQuadratic} Consider the Cantor base $\boldsymbol{\beta}=(3,\beta,\beta,\beta,\beta,\ldots)$ where $\beta=\sqrt{6}(2+\sqrt{6})$. We get $d_{\boldsymbol{\beta}}(1)=3$ and $\DBi{1}(1)=d_{\beta}(1)$ is infinite and not ultimately periodic since $\beta$ is a non-Pisot quadratic number~\cite{Bassino:2002}. Therefore, the quasi-greedy expansion $d_{\boldsymbol{\beta}}^{*}(1)=2\qDBi{1}(1)$ is not ultimately periodic. \end{example} \begin{proposition} \label{prop:QuasiGreedyRep1} The quasi-greedy expansion $d_{\boldsymbol{\beta}}^{*}(1)$ is a $\boldsymbol{\beta}$-representation of $1$. \end{proposition} \begin{proof} This is a straightforward verification. \end{proof} \begin{proposition} \label{prop:qDB1LexGreatest} If $a$ is an infinite word over $\mathbb N$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)< 1$, then $a<_{\mathrm{lex}} d_{\boldsymbol{\beta}}^{*}(1)$. Furthermore, $d_{\boldsymbol{\beta}}^{*}(1)$ is lexicographically maximal among all infinite $\boldsymbol{\beta}$-representations of all real numbers in $[0,1]$. \end{proposition} \begin{proof} If $d_{\boldsymbol{\beta}}(1)$ is infinite then the result follows from Corollary~\ref{cor:DB1LexGreatest}. Thus, we suppose that there exists $\ell\in\mathbb N_{\ge 1}$ such that $d_{\boldsymbol{\beta}}(1)=\varepsilon_0\cdots \varepsilon_{\ell-1}$ and $\varepsilon_{\ell-1}> 0$. First, let $a\in\mathbb N^{\mathbb N}$ be such that $\mathrm{val}_{\boldsymbol{\beta}}(a)< 1$ and suppose to the contrary that $a\ge_{\mathrm{lex}} d_{\boldsymbol{\beta}}^{*}(1)$. By Corollary~\ref{cor:DB1LexGreatest}, $a<_{\mathrm{lex}} d_{\boldsymbol{\beta}}(1)$. Then $a_0\cdots a_{\ell-2}=\varepsilon_0\cdots \varepsilon_{\ell-2}$, $a_{\ell-1}=\varepsilon_{\ell-1}-1$ and $\sigma^{\ell}(a)\ge_{\mathrm{lex}}\qDBi{\ell}(1)$.
Since \begin{align*} \mathrm{val}_{\boldsymbol{\beta}}(a) &=\sum_{n=0}^{\ell-2}\frac{\varepsilon_n}{\prod_{i=0}^n{\beta_i}} +\frac{\varepsilon_{\ell-1}-1}{\prod_{i=0}^{\ell-1}{\beta_i}} +\frac{\mathrm{val}_{\boldsymbol{\beta}^{(\ell)}}\big(\sigma^{\ell}(a)\big)}{\prod_{i=0}^{\ell-1}{\beta_i}}\\ &=1-\frac{1}{\prod_{i=0}^{\ell-1}{\beta_i}} \left( 1-\mathrm{val}_{\boldsymbol{\beta}^{(\ell)}}\big(\sigma^{\ell}(a)\big) \right), \end{align*} we get that $\mathrm{val}_{\boldsymbol{\beta}^{(\ell)}}\big(\sigma^{\ell}(a)\big)<1$. By Corollary~\ref{cor:DB1LexGreatest} again, $\sigma^{\ell}(a)<_{\mathrm{lex}} \DBi{\ell}(1)$. Therefore $\DBi{\ell}(1)$ must be finite and we obtain that $a=d_{\boldsymbol{\beta}}^{*}(1)$ by iterating the reasoning. But then $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$, a contradiction. We now turn to the second part. Suppose that $a\in\mathbb N^{\mathbb N}$ does not end in $0^\omega$ and is such that $\mathrm{val}_{\boldsymbol{\beta}}(a)\le 1$. Our aim is to show that $a\le_{\mathrm{lex}}d_{\boldsymbol{\beta}}^{*}(1)$. We know from Corollary~\ref{cor:DB1LexGreatest} that $a\le_{\mathrm{lex}}d_{\boldsymbol{\beta}}(1)$. Now, suppose to the contrary that $a>_{\mathrm{lex}}d_{\boldsymbol{\beta}}^{*}(1)$. Then $a_0\cdots a_{\ell-2}=\varepsilon_0\cdots \varepsilon_{\ell-2}$, $a_{\ell-1}=\varepsilon_{\ell-1}-1$, and $\sigma^{\ell}(a) >_{\mathrm{lex}} \qDBi{\ell}(1)$. As in the first part of the proof, we obtain that $\mathrm{val}_{\boldsymbol{\beta}^{(\ell)}}(\sigma^{\ell}(a))\le 1$ and that $\DBi{\ell}(1)$ must be finite. By iterating the reasoning, we obtain that $a=d_{\boldsymbol{\beta}}^{*}(1)$, a contradiction. \end{proof} \section{Admissible sequences} \label{sec:AdmissibleSequences} In \cite{Parry:1960}, Parry characterized those infinite words over $\mathbb N$ that belong to $D_\beta$. Such infinite words are sometimes called \emph{$\beta$-admissible sequences}. 
Analogously, the infinite words in $D_{\boldsymbol{\beta}}$ are called \emph{$\boldsymbol{\beta}$-admissible sequences}. In this section, we generalize Parry's theorem to Cantor bases. \begin{lemma} \label{lem:Parry1} Let $a$ be an infinite word over $\mathbb N$ and for each $n\in\mathbb N$, let $b^{(n)}$ be a $\boldsymbol{\beta}^{(n)}$-representation of $1$. Suppose that for all $n\in\mathbb N$, $\sigma^n(a)\le_{\mathrm{lex}} b^{(n)}$. Then for all $k,\ell,m,n\in\mathbb N$ with $\ell\ge 1$, the following implication holds: \begin{equation} \label{eq:lemParry1} a_{k}\cdots a_{k+\ell-1} <_{\mathrm{lex}}b^{(n)}_{m}\cdots b^{(n)}_{m+\ell-1} \implies \mathrm{val}_{\boldsymbol{\beta}^{(k)}}(a_{k}\cdots a_{k+\ell-1}) \le \mathrm{val}_{\boldsymbol{\beta}^{(k)}}(b^{(n)}_{m}\cdots b^{(n)}_{m+\ell-1}). \end{equation} Consequently, for all $k,m,n\in\mathbb N$, the following implication holds: \begin{equation} \label{eq:lemParry2} \sigma^k(a)<_{\mathrm{lex}}\sigma^m(b^{(n)}) \implies \mathrm{val}_{\boldsymbol{\beta}^{(k)}}(\sigma^k(a)) \le \mathrm{val}_{\boldsymbol{\beta}^{(k)}}(\sigma^m(b^{(n)})). \end{equation} \end{lemma} \begin{proof} Proceed by induction on $\ell$. The base case $\ell=1$ is clear. Let $\ell\ge 2$ and suppose that for all $\ell'<\ell$ and all $k,m,n\in\mathbb N$, the implication~\eqref{eq:lemParry1} is true. Now let $k,m,n\in\mathbb N$ and suppose that $a_{k}\cdots a_{k+\ell-1}<_{\mathrm{lex}}b^{(n)}_{m}\cdots b^{(n)}_{m+\ell-1}$. Two cases are possible. Case 1: $a_{k}=b^{(n)}_{m}$. Then $a_{k+1}\cdots a_{k+\ell-1}<_{\mathrm{lex}} b^{(n)}_{m+1}\cdots b^{(n)}_{m+\ell-1}$ and by induction hypothesis, we obtain that $\mathrm{val}_{\boldsymbol{\beta}^{(k+1)}}(a_{k+1}\cdots a_{k+\ell-1})\le \mathrm{val}_{\boldsymbol{\beta}^{(k+1)}}(b^{(n)}_{m+1}\cdots b^{(n)}_{m+\ell-1})$.
Therefore \begin{align*} \mathrm{val}_{\boldsymbol{\beta}^{(k)}}(a_{k}\cdots a_{k+\ell-1}) & = \frac{a_{k}}{\beta_{k}} + \frac{\mathrm{val}_{\boldsymbol{\beta}^{(k+1)}}(a_{k+1}\cdots a_{k+\ell-1})}{\beta_{k}} \\ & \le \frac{b^{(n)}_{m}}{\beta_{k}} + \frac{\mathrm{val}_{\boldsymbol{\beta}^{(k+1)}}(b^{(n)}_{m+1}\cdots b^{(n)}_{m+\ell-1})}{\beta_{k}} \\ & = \mathrm{val}_{\boldsymbol{\beta}^{(k)}}(b^{(n)}_{m}\cdots b^{(n)}_{m+\ell-1}). \end{align*} Case 2: $a_{k}<b^{(n)}_{m}$. Since $\sigma^{k+1}(a) \le_{\mathrm{lex}} b^{(k+1)}$ by hypothesis, we have \[ a_{k+1}\cdots a_{k+\ell-1} \le_{\mathrm{lex}} b_0^{(k+1)}\cdots b_{\ell-2}^{(k+1)}. \] By induction hypothesis, \[ \mathrm{val}_{\boldsymbol{\beta}^{(k+1)}}(a_{k+1}\cdots a_{k+\ell-1}) \le \mathrm{val}_{\boldsymbol{\beta}^{(k+1)}}(b_0^{(k+1)}\cdots b_{\ell-2}^{(k+1)}) \le 1. \] Then \begin{align*} \mathrm{val}_{\boldsymbol{\beta}^{(k)}}(a_{k}\cdots a_{k+\ell-1}) & = \frac{a_{k}}{\beta_{k}} + \frac{\mathrm{val}_{\boldsymbol{\beta}^{(k+1)}}(a_{k+1}\cdots a_{k+\ell-1})}{\beta_{k}} \\ & \le \frac{b^{(n)}_{m}-1}{\beta_{k}} + \frac{\mathrm{val}_{\boldsymbol{\beta}^{(k+1)}}(b_0^{(k+1)}\cdots b_{\ell-2}^{(k+1)})}{\beta_{k}} \\ & \le \mathrm{val}_{\boldsymbol{\beta}^{(k)}}(b^{(n)}_{m}\cdots b^{(n)}_{m+\ell-1}). \end{align*} Thus, the implication~\eqref{eq:lemParry1} is proved. The implication~\eqref{eq:lemParry2} immediately follows. \end{proof} \begin{lemma} \label{lem:Parry2} Let $a$ be an infinite word over $\mathbb N$ and for each $n\in\mathbb N$, let $b^{(n)}$ be a $\boldsymbol{\beta}^{(n)}$-representation of $1$. Suppose that for all $n\in\mathbb N$, $\sigma^n(a) <_{\mathrm{lex}} b^{(n)}$. 
Then for all $n\in\mathbb N$, $\mathrm{val}_{\boldsymbol{\beta}^{(n)}}(\sigma^n(a))<1$ unless there exists $\ell \in\mathbb N_{\ge 1}$ such that \begin{itemize} \item $b^{(n)}=b^{(n)}_0\cdots b^{(n)}_{\ell-1}$ with $b^{(n)}_{\ell-1}>0$ \item $a_na_{n+1}\cdots a_{n+\ell-1}=b^{(n)}_0\cdots b^{(n)}_{\ell-2}(b^{(n)}_{\ell-1}-1)$ \item $\mathrm{val}_{\boldsymbol{\beta}^{(n+\ell)}}(\sigma^{n+\ell}(a))=1$ \end{itemize} in which case $\mathrm{val}_{\boldsymbol{\beta}^{(n)}}(\sigma^n(a))=1$. \end{lemma} \begin{proof} Let $n\in\mathbb N$. By hypothesis, $\sigma^n(a) <_{\mathrm{lex}} b^{(n)}$. So there exists $\ell\in\mathbb N_{\ge 1}$ such that $a_n \cdots a_{n+\ell-2} = b^{(n)}_0\cdots b^{(n)}_{\ell-2}$ and $a_{n+\ell-1}<b^{(n)}_{\ell-1}$. By hypothesis, we also have $\sigma^{n+\ell}(a)<_{\mathrm{lex}}b^{(n+\ell)}$. We get from Lemma~\ref{lem:Parry1} that \[ \mathrm{val}_{\boldsymbol{\beta}^{(n+\ell)}}(\sigma^{n+\ell}(a)) \le \mathrm{val}_{\boldsymbol{\beta}^{(n+\ell)}}(b^{(n+\ell)})=1. \] Then \begin{align*} \mathrm{val}_{\boldsymbol{\beta}^{(n)}}(\sigma^n(a)) & = \mathrm{val}_{\boldsymbol{\beta}^{(n)}}(a_{n}\cdots a_{n+\ell-2}) +\frac{a_{n+\ell-1}}{\prod_{i=n}^{n+\ell-1}\beta_i} +\frac{\mathrm{val}_{\boldsymbol{\beta}^{(n+\ell)}}(\sigma^{n+\ell}(a))}{\prod_{i=n}^{n+\ell-1}\beta_i}\\ & \le \mathrm{val}_{\boldsymbol{\beta}^{(n)}}(b^{(n)}_0\cdots b^{(n)}_{\ell-2}) +\frac{b^{(n)}_{\ell-1}-1}{\prod_{i=n}^{n+\ell-1}\beta_i} +\frac{1}{\prod_{i=n}^{n+\ell-1}\beta_i}\\ & = \mathrm{val}_{\boldsymbol{\beta}^{(n)}}(b^{(n)}_0\cdots b^{(n)}_{\ell-1}) \\ & \le 1. \end{align*} Moreover, the equality holds throughout if and only if $b^{(n)}=b^{(n)}_0\cdots b^{(n)}_{\ell-1}$, $a_{n+\ell-1}=b^{(n)}_{\ell-1}-1$ and $\mathrm{val}_{\boldsymbol{\beta}^{(n+\ell)}}(\sigma^{n+\ell}(a))=1$. The conclusion follows. \end{proof} The following theorem generalizes Parry's theorem for real bases \cite{Parry:1960}. 
\begin{theorem} \label{thm:Parry} An infinite word $a$ over $\mathbb N$ belongs to $D_{\boldsymbol{\beta}}$ if and only if for all $n\in\mathbb N$, $\sigma^n(a)<_{\mathrm{lex}} \DBi{n}^*(1)$. \end{theorem} \begin{proof} In view of Corollary~\ref{cor:ShiftedGreedy}, it suffices to show that the following two assertions are equivalent. \begin{enumerate} \item For all $n\in\mathbb N$, $\mathrm{val}_{\boldsymbol{\beta}^{(n)}}(\sigma^n(a))<1$. \item For all $n\in\mathbb N$, $\sigma^n(a)<_{\mathrm{lex}} \qDBi{n}(1)$. \end{enumerate} The fact that (1) implies (2) follows from Proposition~\ref{prop:qDB1LexGreatest}. Since any quasi-greedy expansion of $1$ is infinite, we obtain that (2) implies (1) by Proposition~\ref{prop:QuasiGreedyRep1} and Lemma~\ref{lem:Parry2}. \end{proof} \begin{example} Let $\boldsymbol{\beta}=(\overline{3,\varphi,\varphi})$ be the alternate base already studied in Examples~\ref{ex:3phiphi} and~\ref{ex2:3phiphi}. Then $a=210(110)^\omega$ is the $\boldsymbol{\beta}$-expansion of some $x\in(0,1)$. In fact, since $\qDBi{0}(1)=(210)^\omega$, $\qDBi{1}(1)=(102)^\omega$ and $\qDBi{2}(1)=1(110)^\omega$, by Theorem~\ref{thm:Parry}, there exists $x\in[0,1)$ such that $a=d_{\boldsymbol{\beta}}(x)$. We can compute that $a=d_{\boldsymbol{\beta}}(\mathrm{val}_{\boldsymbol{\beta}}(a))= d_{\boldsymbol{\beta}}\big(\frac{19 + 9 \sqrt{5}}{3 (7 + 3 \sqrt{5})}\big)$. \end{example} We obtain a corollary characterizing the $\boldsymbol{\beta}$-expansions of a real number $x$ in the interval $[0,1]$ among all its $\boldsymbol{\beta}$-representations. \begin{corollary} \label{cor:Parry1} A $\boldsymbol{\beta}$-representation $a$ of some real number $x\in[0,1]$ is its $\boldsymbol{\beta}$-expansion if and only if for all $n\in\mathbb N_{\ge 1}$, $\sigma^n(a)<_{\mathrm{lex}} \qDBi{n}(1)$. \end{corollary} \begin{proof} Let $a\in\mathbb N^\mathbb N$ be such that $\mathrm{val}_{\boldsymbol{\beta}}(a)\in[0,1]$. 
From Theorem~\ref{thm:Parry}, $\sigma(a)$ belongs to $D_{\boldsymbol{\beta}^{(1)}}$ if and only if for all $n\in\mathbb N_{\ge 1}$, $\sigma^n(a)<_{\mathrm{lex}} \qDBi{n}(1)$. The conclusion then follows from Proposition~\ref{prop:ShiftedGreedy}. \end{proof} \begin{example} Consider $\boldsymbol{\beta}=\big(\overline{\frac{16+5\sqrt{10}}{9},9}\big)$. Then $d_{\boldsymbol{\beta}}(1)=d_{\boldsymbol{\beta}}^{*}(1)=34(27)^\omega$, $\DBi{1}(1)=90^\omega$ and $\qDBi{1}(1)=834(27)^\omega$. For all $m\in \mathbb N_{\ge 1}$, we have $\sigma^{2m}(34(27)^\omega)<_{\mathrm{lex}}d_{\boldsymbol{\beta}}^{*}(1)$ and $\sigma^{2m-1}(34(27)^\omega)<_{\mathrm{lex}} \qDBi{1}(1)$ as prescribed by Corollary~\ref{cor:Parry1}. \end{example} In contrast with real-base expansion theory, given a Cantor base $\boldsymbol{\beta}$ and an infinite word $a$ over $\mathbb N$, Corollary~\ref{cor:Parry1} does not provide a purely combinatorial condition for checking whether $a$ is the $\boldsymbol{\beta}$-expansion of $1$. We will see in Section~\ref{sec:AlternateBases} that even though an improvement of this result in the context of alternate bases can be proved, a purely combinatorial condition cannot exist. \section{The $\boldsymbol{\beta}$-shift} \label{sec:BShift} Let $S_{\boldsymbol{\beta}}$ denote the topological closure of $D_{\boldsymbol{\beta}}$ with respect to the prefix distance of infinite words: $S_{\boldsymbol{\beta}}=\overline{D_{\boldsymbol{\beta}}}$. \begin{proposition} \label{prop:ParryDS} An infinite word $a$ over $\mathbb N$ belongs to $S_{\boldsymbol{\beta}}$ if and only if for all $n\in\mathbb N$, $\sigma^n(a)\le_{\mathrm{lex}} \DBi{n}^*(1)$. \end{proposition} \begin{proof} Suppose that $a\in S_{\boldsymbol{\beta}}$. Then there exists a sequence $(a^{(k)})_{k\in\mathbb N}$ of $D_{\boldsymbol{\beta}}$ converging to $a$. By Theorem~\ref{thm:Parry}, for all $k,n\in\mathbb N$, we have $\sigma^n(a^{(k)})<_{\mathrm{lex}} \qDBi{n}(1)$.
By letting $k$ tend to infinity, we get that for all $n\in\mathbb N$, $\sigma^n(a)\le_{\mathrm{lex}} \qDBi{n}(1)$. Conversely, suppose that for all $n\in\mathbb N$, $\sigma^n(a)\le_{\mathrm{lex}} \qDBi{n}(1)$. For each $k\in \mathbb N$, let $a^{(k)}=a_0\cdots a_k0^\omega$. Then $\lim\limits_{k\to+\infty}a^{(k)}=a$ and for all $k,n\in\mathbb N$, $\sigma^n(a^{(k)})\le_{\mathrm{lex}} \sigma^n(a)\le_{\mathrm{lex}} \qDBi{n}(1)$. Since $\qDBi{n}(1)$ is infinite, for all $k,n\in\mathbb N$, $\sigma^n(a^{(k)})<_{\mathrm{lex}} \qDBi{n}(1)$. By Theorem~\ref{thm:Parry}, we deduce that for all $k\in\mathbb N$, $a^{(k)}\in D_{\boldsymbol{\beta}}$. Therefore $a\in S_{\boldsymbol{\beta}}$. \end{proof} \begin{proposition} Let $a,b \in S_{\boldsymbol{\beta}}$. \begin{enumerate} \item If $a<_{\mathrm{lex}} b$ then $\mathrm{val}_{\boldsymbol{\beta}}(a)\leq\mathrm{val}_{\boldsymbol{\beta}}(b)$. \item If $\mathrm{val}_{\boldsymbol{\beta}}(a)<\mathrm{val}_{\boldsymbol{\beta}}(b)$ then $a<_{\mathrm{lex}} b$. \end{enumerate} \end{proposition} \begin{proof} Consider two sequences $(a^{(k)})_{k\in\mathbb N}$ and $(b^{(k)})_{k\in\mathbb N}$ of $D_{\boldsymbol{\beta}}$ such that $\lim_{k\to \infty} a^{(k)}=a$ and $\lim_{k\to \infty} b^{(k)}=b$. Suppose that $a<_{\mathrm{lex}} b$. Then there exists $\ell\in\mathbb N_{\ge 1}$ such that $a_0\cdots a_{\ell-1}= b_0 \cdots b_{\ell-1}$ and $a_\ell<b_\ell$. By definition of the prefix distance, there exists $K\in\mathbb N$ such that for all $k\ge K$, $a_0^{(k)}\cdots a_{\ell}^{(k)}=a_0\cdots a_{\ell}$ and $b_0^{(k)}\cdots b_{\ell}^{(k)}=b_0\cdots b_{\ell}$. Therefore, for all $k\ge K$, we have $a^{(k)}<_{\mathrm{lex}} b^{(k)}$, and then by Proposition~\ref{pro:Increasing}, $\mathrm{val}_{\boldsymbol{\beta}}(a^{(k)})<\mathrm{val}_{\boldsymbol{\beta}}(b^{(k)})$. Since the function $\mathrm{val}_{\boldsymbol{\beta}}$ is continuous, by letting $k$ tend to infinity, we obtain $\mathrm{val}_{\boldsymbol{\beta}}(a)\leq \mathrm{val}_{\boldsymbol{\beta}}(b)$. 
This proves the first item. The second item follows immediately. \end{proof} Further, we define \[ \Delta_{\boldsymbol{\beta}}=\bigcup_{n\in\mathbb N}D_{\boldsymbol{\beta}^{(n)}} \quad\text{and}\quad \Sigma_{\boldsymbol{\beta}}=\overline{\Delta_{\boldsymbol{\beta}}}. \] \begin{proposition} \label{prop:subshift} The sets $\Delta_{\boldsymbol{\beta}}$ and $\Sigma_{\boldsymbol{\beta}}$ are both shift-invariant. \end{proposition} \begin{proof} Let $a$ be an infinite word over $\mathbb N$ and $n\in\mathbb N$. It follows from Corollary~\ref{cor:ShiftedGreedy} that if $a\in D_{\boldsymbol{\beta}^{(n)}}$ then $\sigma(a)\in D_{\boldsymbol{\beta}^{(n+1)}}$. Then, it is easily seen that if $a\in S_{\boldsymbol{\beta}^{(n)}}$ then $\sigma(a)\in S_{\boldsymbol{\beta}^{(n+1)}}$. \end{proof} Recall some definitions of symbolic dynamics. Let $A$ be a finite alphabet. A subset of $A^\mathbb N$ is a \emph{subshift} of $A^{\mathbb N}$ if it is shift-invariant and closed with respect to the topology induced by the prefix distance. In view of Proposition~\ref{prop:subshift}, the subset $\Sigma_{\boldsymbol{\beta}}$ of $A_{\boldsymbol{\beta}}^{\mathbb N}$ is a subshift, which we call the ${\boldsymbol{\beta}}$-\emph{shift}. For a subset $L$ of $A^\mathbb N$, we let $\mathrm{Fac}(L)$ (resp.\ $\mathrm{Pref}(L)$) denote the set of all finite factors (resp.\ prefixes) of all elements in $L$. \begin{proposition} \label{prop:Fac} We have $\mathrm{Fac}(D_{\boldsymbol{\beta}})=\mathrm{Fac}(\Delta_{\boldsymbol{\beta}})=\mathrm{Fac}(\Sigma_{\boldsymbol{\beta}})$. \end{proposition} \begin{proof} By definition, $\mathrm{Fac}(D_{\boldsymbol{\beta}})\subseteq\mathrm{Fac}(\Delta_{\boldsymbol{\beta}})=\mathrm{Fac}(\Sigma_{\boldsymbol{\beta}})$. Let us show that $\mathrm{Fac}(D_{\boldsymbol{\beta}})\supseteq\mathrm{Fac}(\Delta_{\boldsymbol{\beta}})$. Let $f\in \mathrm{Fac}(\Delta_{\boldsymbol{\beta}})$. There exist $n\in\mathbb N$ and $a\in D_{\boldsymbol{\beta}^{(n)}}$ such that $f\in\mathrm{Fac}(a)$. 
It follows from Corollary~\ref{cor:ShiftedGreedy} that $0^na$ belongs to $D_{\boldsymbol{\beta}}$. Therefore, $f\in\mathrm{Fac}(D_{\boldsymbol{\beta}})$. \end{proof} We define sets of finite words $X_{\boldsymbol{\beta},\ell}$ for $\ell\in\mathbb N_{\ge 1}$ as follows. If $d_{\boldsymbol{\beta}}^{*}(1)=t_0t_1\cdots$ then we let \[ X_{\boldsymbol{\beta},\ell}=\{t_0\cdots t_{\ell-2}s \colon s\in[\![0,t_{\ell-1}-1]\!]\}. \] Note that $X_{\boldsymbol{\beta},\ell}$ is empty if and only if $t_{\ell-1}=0$. \begin{proposition} \label{prop:DX} We have \[ D_{\boldsymbol{\beta}}=\bigcup_{\ell_0\in\mathbb N_{\ge 1}} X_{\boldsymbol{\beta},\ell_0} \Bigg(\bigcup_{\ell_1\in\mathbb N_{\ge 1}} X_{\boldsymbol{\beta}^{(\ell_0)},\ell_1} \Bigg(\bigcup_{\ell_2\in\mathbb N_{\ge 1}} X_{\boldsymbol{\beta}^{(\ell_0+\ell_1)},\ell_2} \Bigg( \quad\cdots\quad \Bigg) \Bigg) \Bigg). \] \end{proposition} \begin{proof} For the sake of conciseness, we let $X_{\boldsymbol{\beta}}$ denote the right-hand set of the equality. For $n\in\mathbb N$, write $\qDBi{n}(1)=t_0^{(n)}t_{1}^{(n)}\cdots$. Let $a\in D_{\boldsymbol{\beta}}$. By Theorem~\ref{thm:Parry}, for all $n\in\mathbb N$, $\sigma^n(a)<_{\mathrm{lex}}\qDBi{n}(1)$. In particular, $a<_{\mathrm{lex}}d_{\boldsymbol{\beta}}^{*}(1)$. Thus, there exist $\ell_0\in\mathbb N_{\ge 1}$ such that $t_{\ell_0-1}^{(0)}>0$ and $s_0\in[\![0,t^{(0)}_{\ell_0-1}-1]\!]$ such that $a=t_0^{(0)}\cdots t_{\ell_0-2}^{(0)}s_0 \sigma^{\ell_0}(a)$. Next, we also have $\sigma^{\ell_0}(a)<_{\mathrm{lex}}\qDBi{\ell_0}(1)$. Then there exist $\ell_1\in\mathbb N_{\ge 1}$ such that $t_{\ell_1-1}^{(\ell_0)}>0$ and $s_1\in[\![0,t_{\ell_1-1}^{(\ell_0)}-1]\!]$ such that $\sigma^{\ell_0}(a)=t_0^{(\ell_0)}\cdots t_{\ell_1-2}^{(\ell_0)}s_1 \sigma^{\ell_0+\ell_1}(a)$. We get that $a\in X_{\boldsymbol{\beta}}$ by iterating the process. Now, let $a\in X_{\boldsymbol{\beta}}$.
Then there exists a sequence $(\ell_k)_{k\in\mathbb N}$ of $\mathbb N_{\ge 1}$ such that $a=u_0u_1u_2\cdots$ where for all $k\in\mathbb N$, $u_k \in X_{\boldsymbol{\beta}^{(\ell_0+\cdots+\ell_{k-1})},\ell_k}$. By Theorem~\ref{thm:Parry}, in order to prove that $a\in D_{\boldsymbol{\beta}}$, it suffices to show that for all $n\in\mathbb N$, $\sigma^n(a)<_{\mathrm{lex}}\qDBi{n}(1)$. Let thus $n\in\mathbb N$. There exist $k\in\mathbb N$ and finite words $x$ and $y$ such that $u_k=xy$, $y\ne\varepsilon$ and $\sigma^n(a)=yu_{k+1}u_{k+2}\cdots$. Then $n=\ell_0+\cdots+\ell_{k-1}+|x|$ and $\sigma^n(a)<_{\mathrm{lex}} \sigma^{|x|}\big(\qDBi{\ell_0+\cdots+\ell_{k-1}}(1)\big)$. If $x=\varepsilon$ then we obtain $\sigma^n(a)<_{\mathrm{lex}} \qDBi{\ell_0+\cdots+\ell_{k-1}}(1)=\qDBi{n}(1)$. Otherwise it follows from Corollary~\ref{cor:Parry1} that $\sigma^{|x|}\big(\DBi{\ell_0+\cdots+\ell_{k-1}}(1)\big) <_{\mathrm{lex}} \qDBi{\ell_0+\cdots+\ell_{k-1}+|x|}(1) =\qDBi{n}(1)$, hence we get $\sigma^n(a) <_{\mathrm{lex}}\qDBi{n}(1)$ as well. \end{proof} \begin{corollary} We have $D_{\boldsymbol{\beta}}=\displaystyle{\bigcup_{\ell\in\mathbb N_{\ge 1}} X_{\boldsymbol{\beta},\ell} D_{\boldsymbol{\beta}^{(\ell)}}}$. \end{corollary} \begin{corollary} \label{cor:DX} Any prefix of $d_{\boldsymbol{\beta}}^{*}(1)$ belongs to $\mathrm{Pref}(D_{\boldsymbol{\beta}})$. \end{corollary} \begin{proof} Write $d_{\boldsymbol{\beta}}^{*}(1)=t_0t_1t_2\cdots$ and let $\ell\in\mathbb N_{\ge 1}$. Since $d_{\boldsymbol{\beta}}^{*}(1)$ is infinite, there exists $k> \ell$ such that $t_{k-1}> 0$. Choose the least such $k$ and let $s\in[\![0,t_{k-1}-1]\!]$. Then $t_0\cdots t_{\ell-1}0^{k-\ell-1}s$ belongs to $X_{\boldsymbol{\beta},k}$. The conclusion follows from Proposition~\ref{prop:DX}. \end{proof} \section{Alternate bases} \label{sec:AlternateBases} Recall that an alternate base is a periodic Cantor base. The aim of this section is to discuss some results that are specific to these particular Cantor bases.
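The greedy algorithm and the lexicographic criterion of Theorem~\ref{thm:Parry} are easy to experiment with numerically. The following Python sketch (ours, not part of the development; it uses floating-point arithmetic, so it is only a numerical illustration) recomputes the expansion $210(110)^\omega$ of $x=\frac{19+9\sqrt{5}}{3(7+3\sqrt{5})}$ in the alternate base $(\overline{3,\varphi,\varphi})$ from the example following Theorem~\ref{thm:Parry}.

```python
from math import floor, sqrt

def greedy_digits(base, x, n):
    # Greedy algorithm for a Cantor base: at step k, take the largest
    # admissible digit a_k = floor(beta_k * r_k) and continue with the
    # remainder r_{k+1} = beta_k * r_k - a_k.
    digits, r = [], x
    for k in range(n):
        a = floor(base(k) * r)
        digits.append(a)
        r = base(k) * r - a
    return digits

def value(base, digits):
    # val_beta of a finite word: sum_k a_k / (beta_0 * ... * beta_k).
    total, denom = 0.0, 1.0
    for k, a in enumerate(digits):
        denom *= base(k)
        total += a / denom
    return total

phi = (1 + sqrt(5)) / 2
beta = lambda n: (3.0, phi, phi)[n % 3]   # alternate base (3, phi, phi)

x = (19 + 9 * sqrt(5)) / (3 * (7 + 3 * sqrt(5)))
d = greedy_digits(beta, x, 9)
print(d)  # first digits of 210(110)^omega
print(abs(value(beta, d + [1, 1, 0] * 20) - x) < 1e-9)
```

Prepending more period blocks `[1, 1, 0]` makes the truncated value converge to $x$, since the tail of the series is bounded by the inverse of the product of the bases read so far.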
We start with a few elementary observations. First, the condition $\prod_{n\in\mathbb N}\beta_n=+\infty$ is trivially satisfied in the context of alternate bases since the sequence $(\beta_n)_{n\in\mathbb N}$ takes only finitely many values. Then, for an alternate base $\boldsymbol{\beta}$ of length $p$, the $\boldsymbol{\beta}$-value~\eqref{eq:representationCantor} of an infinite word $a$ over $\mathbb R_{\ge 0}$ can be rewritten as \[ \mathrm{val}_{\boldsymbol{\beta}}(a) =\sum_{n\in\mathbb N} \frac{a_n}{\big(\prod_{i=0}^{p-1}\beta_i\big)^{\lfloor\frac{n}{p}\rfloor} \prod_{i=0}^{n\bmod p}\beta_i} \] or as \begin{equation} \label{eq:representationAlternate} \mathrm{val}_{\boldsymbol{\beta}}(a) =\sum_{m=0}^{+\infty} \frac{1}{\big(\prod_{i=0}^{p-1}\beta_i\big)^m} \sum_{j=0}^{p-1} \frac{a_{pm+j}}{\prod_{i=0}^{j}\beta_i}. \end{equation} Further, the alphabet $A_{\boldsymbol{\beta}}$ is finite since $A_{\boldsymbol{\beta}}=\{0,\ldots,\max_{i\in[\![0,p-1]\!]}{\floor{\beta_i}}\}$. Finally, note that a Cantor base of the form $(\beta,\beta,\ldots)$ is an alternate base of length $1$, in which case, as already mentioned in Section~\ref{sec:CantorRealBases}, all definitions introduced so far coincide with those of Rényi~\cite{Renyi:1957} for real bases $\beta$. In Proposition~\ref{prop:ExistRepresentation}, we gave a characterization of those infinite words $a \in(\mathbb R_{\ge 0})^\mathbb N$ for which there exists a Cantor base $\boldsymbol{\beta}$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$. Here, we are interested in the stronger condition of the existence of an alternate base $\boldsymbol{\beta}$ satisfying $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$. \begin{proposition} \label{pro:ExistAlternateBase} Let $a$ be an infinite word over $\mathbb R_{\ge 0}$ such that $a_n\in O(n^d)$ for some $d\in\mathbb N$ and let $p\in\mathbb N_{\ge 1}$. 
There exists an alternate base $\boldsymbol{\beta}$ of length $p$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$ if and only if $\sum_{n\in\mathbb N}a_n> 1$. If moreover $p\ge 2$, then there exist uncountably many such alternate bases. \end{proposition} \begin{proof} From Proposition~\ref{prop:ExistRepresentation}, we already know that the condition $\sum_{n\in\mathbb N}a_n> 1$ is necessary. Now, suppose that $\sum_{n\in\mathbb N}a_n> 1$. If $p=1$ then the result follows from Lemma~\ref{lem:ExistRepresentation1}. Suppose that $p\ge 2$. Consider any $(p-1)$-tuple $(\beta_1,\ldots,\beta_{p-1})\in (\mathbb R_{>1})^{p-1}$. For all $\beta_0>1$, we can write $\mathrm{val}_{\boldsymbol{\beta}}(a)=\mathrm{val}_{\beta_0}(c)$ with $\boldsymbol{\beta}=(\overline{\beta_0,\beta_1,\ldots,\beta_{p-1}})$ and \[ c_m=\frac{1}{\big(\prod_{i=1}^{p-1}\beta_i\big)^m} \sum_{j=0}^{p-1} \frac{a_{pm+j}}{\prod_{i=1}^j \beta_i} \quad \text{for all } m\in\mathbb N. \] Note that $c\in(\mathbb R_{\ge 0})^\mathbb N$ and that $c_m\in O(m^d)$. By hypothesis, there exists $N\in\mathbb N$ such that $\sum_{n=0}^{N}a_n>1$. Then \[ \sum_{m=0}^{\floor{\frac{N}{p}}}c_m > \frac{\sum_{m=0}^{\floor{\frac{N}{p}}} \sum_{j=0}^{p-1} a_{pm+j}}{\big(\prod_{i=1}^{p-1}\beta_i\big)^{\floor{\frac{N}{p}}+1 }} \ge \frac{\sum_{n=0}^N a_n}{\big(\prod_{i=1}^{p-1}\beta_i\big)^{\floor{\frac{N}{p}}+1 }}. \] Therefore, any $(p-1)$-tuple $(\beta_1,\ldots,\beta_{p-1})\in (\mathbb R_{>1})^{p-1}$ satisfying \[ \Bigg(\prod_{i=1}^{p-1}\beta_i\Bigg)^{\floor{\frac{N}{p}}+1} \le \;\sum_{n=0}^N a_n \] is such that $\sum_{m=0}^{\floor{\frac{N}{p}}}c_m>1$, and hence there exist uncountably many of them. For such a $(p-1)$-tuple, the infinite word $c$ satisfies the hypothesis of Lemma~\ref{lem:ExistRepresentation1}, so there exists $\beta_0>1$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=\mathrm{val}_{\beta_0}(c)=1$. 
\end{proof} \subsection{The greedy algorithm} The greedy and the quasi-greedy $\boldsymbol{\beta}$-expansions of $1$ enjoy specific properties whenever $\boldsymbol{\beta}$ is an alternate base. From now on, we let $\boldsymbol{\beta}$ be a fixed alternate base and we let $p$ be its length. \begin{proposition} \label{pro:PurelyPeriodic} The $\boldsymbol{\beta}$-expansion of $1$ is not purely periodic. \end{proposition} \begin{proof} Suppose to the contrary that there exists $q\in\mathbb N_{\ge 1}$ such that for all $n\in\mathbb N$, $\varepsilon_n=\varepsilon_{n+q}$. By considering $\ell=\lcm(p,q)$, we get that $\boldsymbol{\beta}^{(\ell)}=\boldsymbol{\beta}$ and for all $n\in\mathbb N$, $\varepsilon_n=\varepsilon_{n+\ell}$. Therefore \[ 1 =\mathrm{val}_{\boldsymbol{\beta}}\big(\varepsilon_0\cdots \varepsilon_{\ell-1}\big) + \frac{1}{\prod_{i=0}^{\ell-1}\beta_i} =\mathrm{val}_{\boldsymbol{\beta}}\big(\varepsilon_0\cdots \varepsilon_{\ell-2} (\varepsilon_{\ell-1}+1)\big). \] Thus $\varepsilon_0\cdots \varepsilon_{\ell-2}(\varepsilon_{\ell-1}+1)$ is a $\boldsymbol{\beta}$-representation of $1$ lexicographically greater than $d_{\boldsymbol{\beta}}(1)$, which is impossible by Proposition~\ref{pro:GreedyLexGreatest}. \end{proof} One might think at first that if for each $i\in[\![0,p-1]\!]$, $\qDBi{i}(1)$ is ultimately periodic, then for $\beta=\prod_{i=0}^{p-1}\beta_i$, $d_\beta^{*}(1)$ must be ultimately periodic as well. This is not the case, as the following example shows. Moreover, the same example shows that the $\boldsymbol{\beta}$-expansion of $1$ can be ultimately periodic with a period which is coprime with the length $p$ of $\boldsymbol{\beta}$. \begin{example} Let $\boldsymbol{\beta}=(\overline{\sqrt{6},3,\frac{2+\sqrt{6}}{3}})$. It is easily checked that $\DBi{0}(1)=2(10)^\omega$, $\DBi{1}(1)=3$ and $\DBi{2}(1)=11002$.
But the product $\beta=\prod_{i=0}^{p-1}\beta_i=\sqrt{6}(2+\sqrt{6})$ is such that $d_\beta^*(1)$ is not ultimately periodic as explained in Example~\ref{ex:PisotQuadratic}. \end{example} \begin{proposition} \label{pro:NeverReach} The quasi-greedy expansion $d_{\boldsymbol{\beta}}^{*}(1)$ is ultimately periodic if and only if, within the first $p$ recursive calls to the definition of $d_{\boldsymbol{\beta}}^{*}(1)$, either an ultimately periodic expansion is reached or only finite expansions are involved. \end{proposition} \begin{proof} If there exists $n\in\mathbb N$ such that the infinite expansion $\qDBi{n}(1)$ is involved in the computation of $d_{\boldsymbol{\beta}}^{*}(1)$, then clearly $d_{\boldsymbol{\beta}}^{*}(1)$ is ultimately periodic if and only if so is $\qDBi{n}(1)$. Now, suppose that only finite expansions are involved within the first $p$ recursive calls to the definition of $d_{\boldsymbol{\beta}}^{*}(1)$. Then $d_{\boldsymbol{\beta}}(1)$ is finite. Thus, $d_{\boldsymbol{\beta}}(1)=\varepsilon_{\boldsymbol{\beta},0}\cdots \varepsilon_{\boldsymbol{\beta},\ell_0-1}$ with $\ell_0\in\mathbb N_{\geq1}$ and $\varepsilon_{\boldsymbol{\beta},\ell_0-1}> 0$. Then \[ d_{\boldsymbol{\beta}}^{*}(1)= \varepsilon_{\boldsymbol{\beta},0}\cdots \varepsilon_{\boldsymbol{\beta},\ell_0-2} (\varepsilon_{\boldsymbol{\beta},\ell_0-1}-1) \qDBi{i_1}(1) \] where $i_1= \ell_0\bmod p$. By hypothesis, $\DBi{i_1}(1)$ is finite as well. Thus we have $\DBi{i_1}(1)=\varepsilon_{\boldsymbol{\beta}^{(i_1)},0}\cdots \varepsilon_{\boldsymbol{\beta}^{(i_1)},\ell_1-1}$ with $\ell_1\in\mathbb N_{\ge 1}$ and $\varepsilon_{\boldsymbol{\beta}^{(i_1)},\ell_1-1}> 0$. Repeating the same argument, we obtain \[ \qDBi{i_1}(1)= \varepsilon_{\boldsymbol{\beta}^{(i_1)},0}\cdots \varepsilon_{\boldsymbol{\beta}^{(i_1)},\ell_1-2} (\varepsilon_{\boldsymbol{\beta}^{(i_1)},\ell_1-1}-1) \qDBi{i_2}(1) \] where $i_2= (\ell_0+\ell_1) \bmod p$.
By continuing in the same fashion and by setting $i_0=0$, we obtain two sequences $(\ell_j)_{j\in[\![0,p-1]\!]}$ and $(i_j)_{j\in[\![0,p]\!]}$. Because for all $j\in[\![0,p]\!]$, we have $i_j\in[\![0,p-1]\!]$, there exist $j,k\in[\![0,p]\!]$ such that $j<k$ and $i_j=i_k$. Then $d_{\boldsymbol{\beta}}^{*}(1)=xy^\omega$ where \[ x= \varepsilon_{\boldsymbol{\beta}^{(i_0)},0}\cdots \varepsilon_{\boldsymbol{\beta}^{(i_0)},\ell_0-2} (\varepsilon_{\boldsymbol{\beta}^{(i_0)},\ell_0-1}-1) \ \cdots \ \varepsilon_{\boldsymbol{\beta}^{(i_{j-1})},0}\cdots \varepsilon_{\boldsymbol{\beta}^{(i_{j-1})},\ell_{j-1}-2} (\varepsilon_{\boldsymbol{\beta}^{(i_{j-1})},\ell_{j-1}-1}-1) \] and \[ y= \varepsilon_{\boldsymbol{\beta}^{(i_j)},0}\cdots \varepsilon_{\boldsymbol{\beta}^{(i_j)},\ell_j-2} (\varepsilon_{\boldsymbol{\beta}^{(i_j)},\ell_j-1}-1) \ \cdots \ \varepsilon_{\boldsymbol{\beta}^{(i_{k-1})},0}\cdots \varepsilon_{\boldsymbol{\beta}^{(i_{k-1})},\ell_{k-1}-2} (\varepsilon_{\boldsymbol{\beta}^{(i_{k-1})},\ell_{k-1}-1}-1). \] \end{proof} \subsection{Admissible sequences} The condition given in Corollary~\ref{cor:Parry1} does not allow us to check whether a given $\boldsymbol{\beta}$-representation of $1$ is the $\boldsymbol{\beta}$-expansion of $1$ without effectively computing the quasi-greedy $\boldsymbol{\beta}$-expansion of $1$, and hence the $\boldsymbol{\beta}$-expansion of $1$ itself. The following proposition provides us with such a condition in the case of alternate bases, provided that we are given the quasi-greedy $\boldsymbol{\beta}^{(i)}$-expansions of $1$ for $i\in[\![1,p-1]\!]$. \begin{proposition} \label{prop:Parry2} A $\boldsymbol{\beta}$-representation $a$ of $1$ is the $\boldsymbol{\beta}$-expansion of $1$ if and only if for all $m\in\mathbb N_{\ge 1}$, $\sigma^{pm}(a)<_{\mathrm{lex}} a$ and for all $m\in\mathbb N$ and $i\in[\![1,p-1]\!]$, $\sigma^{pm+i}(a)<_{\mathrm{lex}} \qDBi{i}(1)$.
\end{proposition} \begin{proof} The condition is necessary by Corollary~\ref{cor:Parry1} and since $d_{\boldsymbol{\beta}}^{*}(1)\le_{\mathrm{lex}}d_{\boldsymbol{\beta}}(1)$. Let us show that the condition is sufficient. Let $a$ be a $\boldsymbol{\beta}$-representation of $1$ such that for all $m\in\mathbb N_{\ge 1}$, $\sigma^{pm}(a)<_{\mathrm{lex}} a$ and for all $m\in\mathbb N$ and $i\in[\![1,p-1]\!]$, $\sigma^{pm+i}(a)<_{\mathrm{lex}} \qDBi{i}(1)$. By Proposition~\ref{pro:GreedyLexGreatest}, $a\le_{\mathrm{lex}}d_{\boldsymbol{\beta}}(1)$. By Theorem~\ref{thm:Parry}, if $a<_{\mathrm{lex}}d_{\boldsymbol{\beta}}^{*}(1)$ then $\mathrm{val}_{\boldsymbol{\beta}}(a)<1$, which contradicts that $a$ is a $\boldsymbol{\beta}$-representation of $1$. Thus, $d_{\boldsymbol{\beta}}^{*}(1)\le_{\mathrm{lex}} a\le_{\mathrm{lex}} d_{\boldsymbol{\beta}}(1)$. If $d_{\boldsymbol{\beta}}(1)$ is infinite, then $a=d_{\boldsymbol{\beta}}(1)$ as desired. Now, suppose that $d_{\boldsymbol{\beta}}(1)=\varepsilon_0\cdots\varepsilon_{\ell-1}$ with $\ell\in\mathbb N_{\ge 1}$ and $\varepsilon_{\ell-1}> 0$. Then $a_0\cdots a_{\ell-2}=\varepsilon_0\cdots\varepsilon_{\ell-2}$ and $a_{\ell-1}\in\{\varepsilon_{\ell-1}-1,\varepsilon_{\ell-1}\}$. Since $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$, if $a_{\ell-1}=\varepsilon_{\ell-1}$ then $a=d_{\boldsymbol{\beta}}(1)$. Therefore, in order to conclude, it suffices to show that $a_{\ell-1}\ne\varepsilon_{\ell-1}-1$. Suppose to the contrary that $a_{\ell-1}=\varepsilon_{\ell-1}-1$. Then $\qDBi{\ell}(1)\le_{\mathrm{lex}} \sigma^{\ell}(a)$. By hypothesis, $\ell \equiv 0\pmod p$. Therefore $ d_{\boldsymbol{\beta}}^{*}(1) \le_{\mathrm{lex}} \sigma^{\ell}(a)\le_{\mathrm{lex}} d_{\boldsymbol{\beta}}(1)$. By repeating the same argument, we obtain that $a_{\ell}\cdots a_{2\ell-2}=\varepsilon_0\cdots\varepsilon_{\ell-2}$ and $a_{2\ell-1}\in\{\varepsilon_{\ell-1}-1,\varepsilon_{\ell-1}\}$. 
Since $\sigma^{\ell}(a)<_{\mathrm{lex}} a$ by hypothesis, we must have $a_{2\ell-1}=\varepsilon_{\ell-1}-1$. By iterating the argument, we obtain that $a=(\varepsilon_0\cdots\varepsilon_{\ell-2}(\varepsilon_{\ell-1}-1))^\omega$, contradicting that $\sigma^{\ell}(a)<_{\mathrm{lex}}a$. \end{proof} When $p=1$, Proposition~\ref{prop:Parry2} provides us with the purely combinatorial condition proved by Parry~\cite{Parry:1960} in order to determine whether a given $\boldsymbol{\beta}$-representation of $1$ is the $\boldsymbol{\beta}$-expansion of $1$. However, when $p\ge 2$, we need to compute the quasi-greedy $\boldsymbol{\beta}^{(i)}$-expansions of $1$ for every $i\in[\![1,p-1]\!]$ first. This might lead us to a circular computation, in which case the condition may seem of limited practical use. Indeed, suppose that $p=2$ and that we are provided with a $\boldsymbol{\beta}$-representation $a$ of $1$ and a $\boldsymbol{\beta}^{(1)}$-representation $b$ of $1$. Then in order to check if $a=d_{\boldsymbol{\beta}}(1)$, we need to compute $\qDBi{1}(1)$, and hence $\DBi{1}(1)$ first. But then, in order to check if $b=\DBi{1}(1)$, we need to compute $d_{\boldsymbol{\beta}}^{*}(1)$, and hence $d_{\boldsymbol{\beta}}(1)$, which brings us back to the initial problem. Nevertheless, this condition can be useful to check if a specific $\boldsymbol{\beta}$-representation of $1$ is the $\boldsymbol{\beta}$-expansion of $1$. For example, if a $\boldsymbol{\beta}$-representation $a$ of $1$ is such that for all $m\in \mathbb N_{\ge 1}$, $\sigma^{pm}(a)<_{\mathrm{lex}} a$ and for all $m\in \mathbb N$ and $i\in[\![1,p-1]\!]$, $a_{pm+i}<\floor{\beta_i}-1$, then the infinite word $a$ satisfies the hypotheses of Proposition~\ref{prop:Parry2}, and hence $a$ is the $\boldsymbol{\beta}$-expansion of $1$.
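For $p=1$, the criterion of Proposition~\ref{prop:Parry2} is Parry's classical condition and can be tested mechanically on truncations. The following Python sketch (ours; the finite comparison horizon makes it a heuristic check rather than a proof) distinguishes the two representations $110^\omega$ and $(10)^\omega$ of $1$ in base $\varphi$, using the identities $\frac1\varphi+\frac1{\varphi^2}=1$ and $\sum_{k\ge 0}\varphi^{-(2k+1)}=1$.

```python
def parry_check_p1(word, horizon=60, shifts=30):
    # Finite-horizon test of Parry's condition (case p = 1): a
    # representation a of 1 in a real base is the expansion of 1 iff
    # sigma^m(a) <_lex a for all m >= 1.  `word` maps n to the digit a_n.
    a = [word(n) for n in range(horizon + shifts)]
    ref = tuple(a[:horizon])
    return all(tuple(a[m:m + horizon]) < ref for m in range(1, shifts + 1))

phi = (1 + 5 ** 0.5) / 2
val = lambda word, N=200: sum(word(n) / phi ** (n + 1) for n in range(N))

w1 = lambda n: 1 if n < 2 else 0   # 110^omega
w2 = lambda n: 1 - n % 2           # (10)^omega

# Both words represent 1 in base phi ...
print(abs(val(w1) - 1) < 1e-9, abs(val(w2) - 1) < 1e-9)
# ... but only 110^omega passes Parry's condition: the shift
# sigma^2 fixes (10)^omega, so sigma^2(a) <_lex a fails there.
print(parry_check_p1(w1), parry_check_p1(w2))
```

This is consistent with the discussion above: $(10)^\omega$ is purely periodic, hence never a greedy expansion of $1$.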
We have seen that, given an infinite word $a$ over $\mathbb N$ and a positive integer $p$, there may exist more than one alternate base $\boldsymbol{\beta}$ of length $p$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$. Moreover, among these alternate bases, some may be such that $a$ is greedy while others are not. Thus, a purely combinatorial condition for checking whether a $\boldsymbol{\beta}$-representation is greedy cannot exist. \begin{example} Consider $a=2(10)^\omega$. Then $\mathrm{val}_{\boldsymbol{\alpha}}(a)=\mathrm{val}_{\boldsymbol{\beta}}(a)=1$ for both $\boldsymbol{\alpha}=(\overline{1+\varphi,2})$ and $\boldsymbol{\beta}=(\overline{\frac{31}{10},\frac{420}{341}})$. It can be checked that $d_{\boldsymbol{\alpha}}(1)=a$ and $d_{\boldsymbol{\beta}}(1)\ne a$. \end{example} Furthermore, an infinite word $a$ over $\mathbb N$ can be greedy for more than one alternate base. \begin{example} \label{ex:110} The infinite word $110^\omega$ is the expansion of $1$ with respect to the three alternate bases $(\overline{\varphi,\varphi})$, $(\overline{\frac{5+\sqrt{13}}{6},\frac{1+\sqrt{13}}{2}})$ and $(\overline{1.7,\frac{1}{0.7}})$. \end{example} At the other extreme, it may happen that an infinite word $a$ is a $\boldsymbol{\beta}$-representation of $1$ for different alternate bases $\boldsymbol{\beta}$ but that none of these bases are such that $a$ is greedy. As an illustration, by Proposition~\ref{pro:PurelyPeriodic}, if $a$ is a purely periodic infinite word over $\mathbb N$, then $a$ is not the $\boldsymbol{\beta}$-expansion of $1$ for any alternate base $\boldsymbol{\beta}$ such that $\mathrm{val}_{\boldsymbol{\beta}}(a)=1$. \begin{example} The infinite word $(10)^{\omega}$ is a representation of $1$ with respect to the three alternate bases considered in Example~\ref{ex:110}.
However, the infinite word $(10)^{\omega}$ is purely periodic; therefore, by Proposition~\ref{pro:PurelyPeriodic}, it is not the expansion of $1$ in any alternate base. \end{example} \subsection{The $\boldsymbol{\beta}$-shift} We define sets of finite words $Y_{\boldsymbol{\beta},h}$ for $h\in[\![0,p-1]\!]$ as follows. If $d_{\boldsymbol{\beta}}^{*}(1)=t_0t_1\cdots$ then we let \[ Y_{\boldsymbol{\beta},h}=\{t_0\cdots t_{\ell-2}s \colon \ell\in\mathbb N_{\ge 1},\ \ell\bmod p= h,\ t_{\ell-1}>0,\ s\in[\![0,t_{\ell-1}-1]\!] \}. \] Note that $Y_{\boldsymbol{\beta},h}$ is empty if and only if for all $\ell\in\mathbb N_{\ge 1}$ such that $\ell\bmod p= h$, $t_{\ell-1}=0$. So, unlike the sets $X_{\boldsymbol{\beta},\ell}$ defined in Section~\ref{sec:BShift}, the sets $Y_{\boldsymbol{\beta},h}$ can be infinite. More precisely, $Y_{\boldsymbol{\beta},h}$ is infinite if and only if there exist infinitely many $\ell\in\mathbb N_{\ge 1}$ such that $\ell\bmod p= h$ and $t_{\ell-1}>0$. \begin{proposition} \label{prop:DY} We have \[ D_{\boldsymbol{\beta}}=\bigcup_{h_0=0}^{p-1} Y_{\boldsymbol{\beta},h_0} \Bigg(\bigcup_{h_1=0}^{p-1} Y_{\boldsymbol{\beta}^{(h_0)},h_1} \Bigg(\bigcup_{h_2=0}^{p-1} Y_{\boldsymbol{\beta}^{(h_0+h_1)},h_2} \Bigg( \quad\cdots\quad \Bigg) \Bigg) \Bigg). \] \end{proposition} \begin{proof} It is easily seen that \[ \bigcup_{h=0}^{p-1} Y_{\boldsymbol{\beta},h} =\bigcup_{\ell\in\mathbb N_{\ge 1}} X_{\boldsymbol{\beta},\ell}. \] The conclusion follows from Proposition~\ref{prop:DX}. \end{proof} \begin{corollary} We have $D_{\boldsymbol{\beta}}=\displaystyle{\bigcup_{h=0}^{p-1} Y_{\boldsymbol{\beta},h} D_{\boldsymbol{\beta}^{(h)}}}$. \end{corollary} In the case where all $\qDBi{i}(1)$ are ultimately periodic, we define an automaton $\mathcal{A}_{\boldsymbol{\beta}}$ over the finite alphabet $A_{\boldsymbol{\beta}}$. Let $\qDBi{i}(1)=t_0^{(i)}\cdots t_{m_i-1}^{(i)} \big(t_{m_i}^{(i)}\cdots t_{m_i+n_i-1}^{(i)}\big)^\omega$.
The set of states is \[ Q=\big\{q_{i,j,k}\colon i,j\in[\![0,p-1]\!],\ k\in[\![0,m_i+n_i-1]\!]\big\}. \] The set $I$ of initial states and the set $F$ of final states are defined as \[ I=\big\{q_{i,i,0}\colon i\in[\![0,p-1]\!]\big\} \quad\text{and}\quad F=Q. \] The (partial) transition function $\delta\colon Q\times A_{\boldsymbol{\beta}}\to Q$ of the automaton $\mathcal{A}_{\boldsymbol{\beta}}$ is defined as follows. For each $i,j\in[\![0,p-1]\!]$ and each $k\in[\![0,m_i+n_i-1]\!]$, we have \[ \delta(q_{i,j,k},t_k^{(i)})= \begin{cases} q_{i,(j+1)\bmod p,k+1} & \text{ if }k\ne m_i+n_i-1\\ q_{i,(j+1)\bmod p,m_i} & \text{ else} \end{cases} \] and for all $s\in[\![0,t_k^{(i)}-1]\!]$, we have \[ \delta(q_{i,j,k},s)=q_{(j+1)\bmod p,(j+1)\bmod p,0}. \] \begin{example} Let $\boldsymbol{\beta}=(\overline{\varphi^2, 3+\sqrt{5}})$. Then $\DBi{0}(1)=2(30)^\omega$ and $\DBi{1}(1)=5(03)^\omega$. The corresponding automaton $\mathcal{A}_{\boldsymbol{\beta}}$ is depicted in Figure~\ref{fig:Automaton-230-503}. 
\begin{figure}[htb] \centering \VCDraw{\begin{VCPicture}{(0,-4)(10,17)} \LargeState \StateVar[q_{0,0,0}]{(0,13)}{1} \StateVar[q_{0,0,1}]{(5,13)}{2} \StateVar[q_{0,0,2}]{(10,13)}{3} \StateVar[q_{0,1,0}]{(0,9)}{4} \StateVar[q_{0,1,1}]{(5,9)}{5} \StateVar[q_{0,1,2}]{(10,9)}{6} \StateVar[q_{1,0,0}]{(0,4)}{7} \StateVar[q_{1,0,1}]{(5,4)}{8} \StateVar[q_{1,0,2}]{(10,4)}{9} \StateVar[q_{1,1,0}]{(0,0)}{10} \StateVar[q_{1,1,1}]{(5,0)}{11} \StateVar[q_{1,1,2}]{(10,0)}{12} \Initial{1} \Initial{10} \VArcL[.2]{arcangle=15}{1}{5}{2} \ChgLArcCurvature{1} \VArcR[0.25]{arcangle=-73}{1}{10}{0,1} \ChgLArcCurvature{0.8} \VArcL[.8]{arcangle=15}{2}{6}{3} \EdgeR[.5]{2}{10}{0,1,2} \VArcL[.8]{arcangle=15}{3}{5}{0} \EdgeL{4}{1}{0,1} \EdgeL[.75]{4}{2}{2} \VArcL[.3]{arcangle=15}{5}{1}{0,1,2} \VArcL[.2]{arcangle=15}{5}{3}{3} \VArcL[.2]{arcangle=15}{6}{2}{0} \EdgeR[.3]{7}{10}{0,1,2,3,4} \EdgeR[.7]{7}{11}{5} \VArcL[.8]{arcangle=15}{8}{12}{0} \VArcL[.8]{arcangle=20}{9}{11}{3} \ChgLArcCurvature{1.4} \VArcL{arcangle=130}{9}{10}{0,1,2} \EdgeL[.7]{10}{8}{5} \VArcL[.15]{arcangle=120}{10}{1}{0,1,2,3,4} \ChgLArcCurvature{0.8} \VArcL[.2]{arcangle=15}{11}{9}{0} \VArcL[.2]{arcangle=15}{12}{8}{3} \ChgLArcCurvature{1.4} \VArcR[.85]{arcangle=-100}{12}{1}{0,1,2} \end{VCPicture}} \caption{The automaton $\mathcal{A}_{(\overline{\varphi^2, 3+\sqrt{5}})}$.} \label{fig:Automaton-230-503} \end{figure} By removing the non-accessible states, we obtain the automaton of Figure~\ref{fig:Automaton-230-503-accessible}. 
\begin{figure}[htb] \centering \VCDraw{\begin{VCPicture}{(0,-1)(10,8)} \LargeState \StateVar[q_{0,0,0}]{(0,6)}{1} \StateVar[q_{0,0,2}]{(10,6)}{3} \StateVar[q_{0,1,1}]{(5,6)}{5} \StateVar[q_{1,0,1}]{(5,0)}{8} \StateVar[q_{1,1,0}]{(0,0)}{10} \StateVar[q_{1,1,2}]{(10,0)}{12} \Initial{1} \Initial{10} \VArcL[.5]{arcangle=15}{1}{5}{2} \VArcL[.5]{arcangle=15}{1}{10}{0,1} \VArcL[.5]{arcangle=15}{3}{5}{0} \VArcL[.4]{arcangle=15}{5}{1}{0,1,2} \VArcL[.5]{arcangle=15}{5}{3}{3} \VArcL[.4]{arcangle=15}{8}{12}{0} \EdgeL{10}{8}{5} \VArcL[.5]{arcangle=15}{10}{1}{0,1,2,3,4} \VArcL[.5]{arcangle=15}{12}{8}{3} \ChgLArcCurvature{1.4} \EdgeR{12}{1}{0,1,2} \end{VCPicture}} \caption{An accessible automaton accepting $\mathrm{Fac}(\Sigma_{(\overline{\varphi^2, 3+\sqrt{5}})})$.} \label{fig:Automaton-230-503-accessible} \end{figure} \end{example} The following result extends a result of Bertrand-Mathis for real bases~\cite{Bertrand-Mathis:1986}. Recall that a subshift $S$ of $A^{\mathbb N}$ is called \emph{sofic} if the language $\mathrm{Fac}(S)\subseteq A^*$ is accepted by a finite automaton. \begin{theorem} The $\boldsymbol{\beta}$-shift $\Sigma_{\boldsymbol{\beta}}$ is sofic if and only if for all $i\in [\![0,p-1]\!]$, $\qDBi{i}(1)$ is ultimately periodic. \end{theorem} \begin{proof} Suppose that for all $i\in [\![0,p-1]\!]$, $\qDBi{i}(1)$ is ultimately periodic. We show that the automaton $\mathcal{A}_{\boldsymbol{\beta}}$ accepts the language $\mathrm{Fac}(\Sigma_{\boldsymbol{\beta}})$. From Propositions~\ref{prop:subshift} and~\ref{prop:Fac}, we obtain that \begin{equation} \label{eq:UnionPref} \mathrm{Fac}(\Sigma_{\boldsymbol{\beta}})=\mathrm{Pref}(\Delta_{\boldsymbol{\beta}})=\bigcup_{i=0}^{p-1}\mathrm{Pref}(D_{\boldsymbol{\beta}^{(i)}}). \end{equation} Therefore, it suffices to show that for each $i\in[\![0,p-1]\!]$, the language accepted from the initial state $q_{i,i,0}$ is precisely $\mathrm{Pref}(D_{\boldsymbol{\beta}^{(i)}})$. Let thus $i\in[\![0,p-1]\!]$. 
First, consider a word $w$ accepted from $q_{i,i,0}$. By Corollary~\ref{cor:DX}, if $w$ is a prefix of $\qDBi{i}(1)$ then $w\in\mathrm{Pref}(D_{\boldsymbol{\beta}^{(i)}})$. Otherwise, by construction of $\mathcal{A}_{\boldsymbol{\beta}}$, $w$ starts with some $u\in Y_{\boldsymbol{\beta}^{(i)},h_0}$ where $h_0=|u|\bmod p$. Moreover, the state reached after reading $u$ from $q_{i,i,0}$ is $q_{j,j,0}$ where $j=(i+h_0)\bmod p$. We obtain that $w\in \mathrm{Pref}(D_{\boldsymbol{\beta}^{(i)}})$ by iterating the reasoning and by using Proposition~\ref{prop:DY}. Conversely, let $w\in \mathrm{Pref}(D_{\boldsymbol{\beta}^{(i)}})$. By Proposition~\ref{prop:DY}, we know that there exist $\ell\in\mathbb N$ and $h_0,\ldots,h_\ell\in[\![0,p-1]\!]$ such that $w=u_0\cdots u_{\ell-1}x$ with $u_k\in Y_{\boldsymbol{\beta}^{(i+h_0+\cdots+h_{k-1})},h_k}$ for all $k\in[\![0,\ell-1]\!]$ and $x$ is a (possibly empty) prefix of $\qDBi{i_\ell}(1)$ where $i_\ell = (i+h_0+\cdots+h_{\ell-1}) \bmod p$. By construction of $\mathcal{A}_{\boldsymbol{\beta}}$, by reading $u_0$ from the state $q_{i,i,0}$, we reach the state $q_{i_1,i_1,0}$ where $i_1= (i+h_0) \bmod p$. Then, by reading $u_1$ from the latter state, we reach the state $q_{i_2,i_2,0}$ where $i_2= (i+h_0+h_1) \bmod p$. By iterating the argument, after reading $u_0\cdots u_{\ell-1}$, we end up in the state $q_{i_\ell,i_\ell,0}$. Since $x$ is a prefix of $\qDBi{i_\ell}(1)$, it is possible to read $x$ from the state $q_{i_\ell,i_\ell,0}$ in $\mathcal{A}_{\boldsymbol{\beta}}$. Since all states of $\mathcal{A}_{\boldsymbol{\beta}}$ are final, we obtain that $w$ is accepted from $q_{i,i,0}$. We turn to the necessary condition. Let \[ \qDBi{i}(1)=t_0^{(i)}t_1^{(i)}\cdots\quad \text{ for every } i\in[\![0,p-1]\!]. \] Suppose that $j\in[\![0,p-1]\!]$ is such that $\qDBi{j}(1)$ is not ultimately periodic.
Our aim is to find an infinite sequence $(w^{(m)})_{m\in\mathbb N}$ of finite words over $A_{\boldsymbol{\beta}}$ such that for all distinct $m,n\in\mathbb N$, the words $w^{(m)}$ and $w^{(n)}$ are not right-congruent with respect to $\mathrm{Fac}(\Sigma_{\boldsymbol{\beta}})$. Recall that words $x$ and $y$ are not right-congruent with respect to a language $L$ if $x^{-1}L\ne y^{-1}L$, i.e.\ if there exists some word $z$ such that either $xz\in L$ and $yz\notin L$, or $xz\notin L$ and $yz\in L$. If we succeed then we will know that the number of right-congruence classes is infinite and we will be able to conclude that $\mathrm{Fac}(\Sigma_{\boldsymbol{\beta}})$ is not accepted by a finite automaton. We define a partition $(G_1,\ldots,G_q)$ of $[\![0,p-1]\!]$ as follows. Let $r=\Card\{\qDBi{i}(1)\colon i\in[\![0,p-1]\!]\}$ and let $i_1,\ldots,i_r\in[\![0,p-1]\!]$ be such that $\qDBi{i_1}(1),\ldots,\qDBi{i_r}(1)$ are pairwise distinct. Without loss of generality, we can suppose that $\qDBi{i_1}(1)>_{\mathrm{lex}}\cdots>_{\mathrm{lex}}\qDBi{i_r}(1)$. Let $q\in[\![1,r]\!]$ be the unique index such that $\qDBi{i_q}(1)=\qDBi{j}(1)$. We set \[ G_s=\{i\in[\![0,p-1]\!]\colon \qDBi{i}(1)=\qDBi{i_s}(1)\}\quad \text{for }s\in[\![1,q-1]\!] \] and \[ G_q=\{i\in[\![0,p-1]\!]\colon \qDBi{i}(1)\le \qDBi{j}(1)\}. \] For each $s\in[\![1,q-1]\!]$, we write $G_s=\{i_{s,1},\ldots,i_{s,\alpha_s}\}$ where $i_{s,1}<\ldots<i_{s,\alpha_s}$ and we use the convention that $i_{s,\alpha_s+1}=i_{s+1,1}$ for $s\le q-2$ and $i_{q-1,\alpha_{q-1}+1}=j$. Moreover, we let $g\in\mathbb N_{\ge 1}$ be such that for all $i,i'\in [\![0,p-1]\!]$ such that $\qDBi{i}(1)\ne \qDBi{i'}(1)$, the length-$g$ prefixes of $\qDBi{i}(1)$ and $\qDBi{i'}(1)$ are distinct. Then, for $s\in[\![1,q-1]\!] $, we define $C_s$ to be the least $c\in\mathbb N_{\ge 1}$ such that $t^{(i_s)}_{g-1+c}> 0$. Finally, let $N\in\mathbb N_{\ge 1}$ be such that $pN\ge \max\{g,C_1,\ldots,C_{q-1}\}$. 
For all $m\in\mathbb N$, consider \[ w^{(m)} =\left( \prod_{s=1}^{q-1} \prod_{k=1}^{\alpha_s} t_0^{(i_s)}\cdots t_{g-1}^{(i_s)} 0^{p(2N+1)-g+i_{s,k+1}-i_{s,k}} \right) t_0^{(j)}\cdots t_{m-1}^{(j)}. \] For all $m\in\mathbb N$, $s\in [\![1,q-1]\!]$ and $k\in [\![1,\alpha_s]\!]$, the factor $t_0^{(i_s)}\cdots t_{g-1}^{(i_s)}0^{p(2N+1)-g+i_{s,k+1}-i_{s,k}}$ has length $p(2N+1)+i_{s,k+1}-i_{s,k}$, and hence occurs at a position congruent to $i_{s,k}-i_{1,1}$ modulo $p$ in $w^{(m)}$. Similarly, for all $m\in\mathbb N$, the factor $t_0^{(j)}\cdots t_{m-1}^{(j)}$ occurs at a position congruent to $j-i_{1,1}$ modulo $p$ in $w^{(m)}$. These observations will be crucial in what follows. The situation is illustrated in Figure~\ref{fig:wm}.\begin{figure}[htb] \begin{tikzpicture} \draw (-1,0.1) node[above]{$w^{(m)}=$}; \draw (0,0) rectangle (2,0.6); \draw (1,0) node[above]{$w_{1,1}$}; \draw (0,0) node[]{$\bullet$}; \draw (0,0) node[below]{${\scriptstyle 0}$}; \draw (0.2,-0.9) node[]{$\cdots$}; \draw (2.2,0) rectangle (4,0.6); \draw (3.1,0) node[above]{$w_{1,2}$}; \draw (2.2,0) node[]{$\bullet$}; \draw (2.2,0) node[below]{${\scriptstyle i_{1,2}-i_{1,1}}$}; \draw (5.5,0) node[above]{$\cdots$}; \draw (7,0) rectangle (8.7,0.6); \draw (7.85,0) node[above]{$w_{1,\alpha_1}$}; \draw (7,0) node[]{$\bullet$}; \draw (7,0) node[below]{${\scriptstyle i_{1,\alpha_1}-i_{1,1}}$}; \draw (0.2,-2.9) node[]{$\cdots$}; \draw (0,-2) rectangle (2.3,-1.4); \draw (1.15,-2) node[above]{$w_{s,1}$}; \draw (0,-2) node[]{$\bullet$}; \draw (0,-2) node[below]{${\scriptstyle i_{s,1}-i_{1,1}}$}; \draw (2.5,-2) rectangle (4.5,-1.4); \draw (3.5,-2) node[above]{$w_{s,2}$}; \draw (2.5,-2) node[]{$\bullet$}; \draw (2.5,-2) node[below]{${\scriptstyle i_{s,2}-i_{1,1}}$}; \draw (5.95,-2) node[above]{$\cdots$}; \draw (7.4,-2) rectangle (9.5,-1.4); \draw (8.45,-2) node[above]{$w_{s,\alpha_s}$}; \draw (7.4,-2) node[]{$\bullet$}; \draw (7.4,-2) node[below]{${\scriptstyle i_{s,\alpha_s}-i_{1,1}}$}; \draw (0,-4) rectangle 
(2.2,-3.4); \draw (1.1,-4) node[above]{$w_{q-1,1}$}; \draw (0,-4) node[]{$\bullet$}; \draw (0,-4) node[below]{${\scriptstyle i_{q-1,1}-i_{1,1}}$}; \draw (2.4,-4) rectangle (4.2,-3.4); \draw (3.3,-4) node[above]{$w_{q-1,2}$}; \draw (2.4,-4) node[]{$\bullet$}; \draw (2.4,-4) node[below]{${\scriptstyle i_{q-1,2}-i_{1,1}}$}; \draw (5.35,-4) node[above]{$\cdots$}; \draw (6.5,-4) rectangle (8.8,-3.4); \draw (7.65,-4) node[above]{$w_{q-1,\alpha_{q-1}}$}; \draw (6.5,-4) node[]{$\bullet$}; \draw (6.5,-4) node[below]{${\scriptstyle i_{q-1,\alpha_{q-1}}-i_{1,1}}$}; \draw (0,-5.25) rectangle (2.5,-4.65); \draw (1.25,-5.4) node[above]{$t_0^{(j)}\cdots t_{m-1}^{(j)}$}; \draw (0,-5.25) node[]{$\bullet$}; \draw (0,-5.25) node[below]{${\scriptstyle j-i_{1,1}}$}; \end{tikzpicture} \caption{Positions modulo $p$ of the occurrences of the factors $w_{s,k}$ and $t_0^{(j)}\cdots t_{m-1}^{(j)}$ in $w^{(m)}$, where $w_{s,k}=t_0^{(i_s)}\cdots t_{g-1}^{(i_s)} 0^{p(2N+1)-g+i_{s,k+1}-i_{s,k}}$.} \label{fig:wm} \end{figure} Now, let $m,n\in\mathbb N$ be distinct. Since $\qDBi{j}(1)$ is not ultimately periodic, $\sigma^m \big(\qDBi{j}(1)\big)\ne \sigma^n\big(\qDBi{j}(1)\big)$. Thus, there exists $\ell\in\mathbb N_{\ge 1}$ such that $t_m^{(j)}\cdots t_{m+\ell-2}^{(j)}=t_n^{(j)}\cdots t_{n+\ell-2}^{(j)}$ and $t_{m+\ell-1}^{(j)}\ne t_{n+\ell-1}^{(j)}$. Without loss of generality, we suppose that $t_{m+\ell-1}^{(j)}>t_{n+\ell-1}^{(j)}$. Let $z=t_m^{(j)}\cdots t_{m+\ell-1}^{(j)}$. Our aim is to show that $w^{(m)}z \in \mathrm{Fac}(\Sigma_{\boldsymbol{\beta}})$ and $w^{(n)}z\notin \mathrm{Fac}(\Sigma_{\boldsymbol{\beta}})$. In order to obtain that $w^{(m)}z \in \mathrm{Fac}(\Sigma_{\boldsymbol{\beta}})$, we show that $w^{(m)}z \in \mathrm{Pref}(D_{\boldsymbol{\beta}^{(i_{1,1})}})$. First, for all $s\in[\![1,q-1]\!]$ and $k\in[\![1,\alpha_s]\!]$, $t_0^{(i_s)}\cdots t_{g-1}^{(i_s)}0^{C_s}\in Y_{\boldsymbol{\beta}^{(i_{s,k})},(g+C_s) \bmod p}$.
Second, for all $i\in[\![0,p-1]\!]$, $0\in Y_{\boldsymbol{\beta}^{(i)},1}$. Third, by Corollary~\ref{cor:DX}, for all $h\in[\![0,p-1]\!]$, $t_0^{(j)}\cdots t_{m-1}^{(j)}z\in\mathrm{Pref}(Y_{\boldsymbol{\beta}^{(j)},h})$. The conclusion follows from Proposition~\ref{prop:DY}. In view of \eqref{eq:UnionPref}, in order to prove that $w^{(n)}z\notin \mathrm{Fac}(\Sigma_{\boldsymbol{\beta}})$, it suffices to show that for all $i\in[\![0,p-1]\!]$, $w^{(n)}z\notin \mathrm{Pref}(D_{\boldsymbol{\beta}^{(i)}})$. Proceed by contradiction and let $i\in [\![0,p-1]\!]$ and $w\in D_{\boldsymbol{\beta}^{(i)}}$ such that $w^{(n)}z$ is a prefix of $w$. By Theorem~\ref{thm:Parry}, for all $s\in [\![1,q]\!]$, the factor $t_0^{(i_s)}\cdots t_{g-1}^{(i_s)} 0^{C_s}$ occurs at a position $e$ in $w$ such that $(i+e)\bmod p$ belongs to $G_1\cup\cdots\cup G_s$. For $s=1$, we obtain that for all $k\in[\![1,\alpha_1]\!]$, $(i+i_{1,k}-i_{1,1})\bmod p\in G_1$, and hence that \[ G_1=\{(i+i_{1,1}-i_{1,1})\bmod p,\ldots,(i+i_{1,\alpha_1}-i_{1,1})\bmod p\}. \] For $s=2$, we get that for all $k\in[\![1,\alpha_2]\!]$, $(i+i_{2,k}-i_{1,1})\bmod p\in G_1\cup G_2$. If $(i+i_{2,k}-i_{1,1})\bmod p\in G_1$ for some $k\in[\![1,\alpha_2]\!]$, then there exists $k'\in[\![1,\alpha_1]\!]$ such that $(i+i_{2,k}-i_{1,1})\bmod p=(i+i_{1,k'}-i_{1,1})\bmod p$, hence such that $i_{2,k}=i_{1,k'}$, which is impossible since $G_1$ and $G_2$ are pairwise disjoint. It follows that \[ G_2=\{(i+i_{2,1}-i_{1,1})\bmod p,\ldots,(i+i_{2,\alpha_2}-i_{1,1})\bmod p\}. \] By iterating the reasoning, we obtain that \[ G_s=\{(i+i_{s,1}-i_{1,1})\bmod p,\ldots,(i+i_{s,\alpha_s}-i_{1,1})\bmod p\} \quad\text{for all } s\in[\![1,q-1]\!]. \] We finally get that $(i+j-i_{1,1})\bmod p$ belongs to $G_q$. Then $\qDBi{(i+j-i_{1,1})\bmod p}(1)\le_{\mathrm{lex}}\qDBi{j}(1)$. Let $r$ be the position where the factor $t_0^{(j)}\cdots t_{n-1}^{(j)}$ occurs in $w^{(n)}$, and hence also in $w$ since $w^{(n)}z$ is a prefix of $w$. 
We have seen that $r\equiv j-i_{1,1}\pmod p$. Since $w\in D_{\boldsymbol{\beta}^{(i)}}$, it follows from Theorem~\ref{thm:Parry} that \[ \sigma^r(w) <_{\mathrm{lex}}\qDBi{i+r}(1) =\qDBi{(i+j-i_{1,1})\bmod p}(1) \le_{\mathrm{lex}}\qDBi{j}(1). \] We have thus reached a contradiction since the factor $t_0^{(j)}\cdots t_{n-1}^{(j)}z$ is lexicographically greater than the length-($n+\ell$) prefix of $\qDBi{j}(1)$. \end{proof} Note that, in the classical case $p=1$, the previous proof is much shorter since $\mathrm{Fac}(\Sigma_{\beta})=\mathrm{Pref}(D_{\beta})$, and hence we can directly deduce that the words $t_0^{(j)}\cdots t_{m-1}^{(j)}$ and $t_0^{(j)}\cdots t_{n-1}^{(j)}$ (where in fact, $j=0$) are not right-congruent with respect to $\mathrm{Fac}(\Sigma_{\beta})$. A subshift $S$ of $A^{\mathbb N}$ is said to be of \emph{finite type} if its minimal set of forbidden factors is finite. For $p=1$, it is well known that the $\beta$-shift is of finite type if and only if $d_\beta(1)$ is finite \cite{Bertrand-Mathis:1986}. However, this result does not generalize to $p\ge 2$ as is illustrated by the following example. \begin{example} Consider the alternate base $\boldsymbol{\beta}=(\overline{\frac{1+\sqrt{13}}{2},\frac{5+\sqrt{13}}{6}})$ of Example~\ref{ex:1+sqrt{13}}. Then $\qDBi{0}(1)=200(10)^\omega$ and $\qDBi{1}(1)=(10)^\omega$. We see that all words in $2(00)^{*}2$ are factors avoided by $\Sigma_{\boldsymbol{\beta}}$, so the $\boldsymbol{\beta}$-shift $\Sigma_{\boldsymbol{\beta}}$ is not of finite type. \end{example} \section{Acknowledgment} Célia Cisternino is supported by the FNRS Research Fellow grant 1.A.564.19F. \bibliographystyle{abbrv}
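The ultimate periodicity condition of the theorem can be tested algorithmically when the base is a quadratic number, since the orbit of $1$ under the greedy $\beta$-transformation then lives in a discrete subset of a quadratic field. The sketch below treats the classical case $p=1$ with the illustrative base $\beta=(3+\sqrt 5)/2$ (our own choice, not taken from the text), using exact arithmetic in $\mathbb Q(\sqrt 5)$; the alternate-base case $p\ge 2$ would iterate the $p$ maps in turn.

```python
from fractions import Fraction
from math import floor

SQRT5 = 5 ** 0.5  # float value, used only to take integer parts

class Q5:
    """Exact element a + b*sqrt(5) of the quadratic field Q(sqrt(5))."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __mul__(self, other):
        return Q5(self.a * other.a + 5 * self.b * other.b,
                  self.a * other.b + self.b * other.a)
    def minus_int(self, n):
        return Q5(self.a - n, self.b)
    def __float__(self):
        return float(self.a) + float(self.b) * SQRT5
    def key(self):
        return (self.a, self.b)

def beta_expansion_of_one(beta, max_steps=100):
    """Greedy beta-expansion digits of 1; returns (preperiod, period),
    detecting eventual periodicity through a repeated exact remainder."""
    digits, seen, x = [], {}, Q5(1)
    for n in range(max_steps):
        if x.key() in seen:
            m = seen[x.key()]
            return digits[:m], digits[m:]
        seen[x.key()] = n
        y = beta * x
        d = floor(float(y))
        digits.append(d)
        x = y.minus_int(d)
    return digits, None  # no period detected within max_steps

beta = Q5(Fraction(3, 2), Fraction(1, 2))  # beta = (3 + sqrt(5))/2
print(beta_expansion_of_one(beta))  # ([2], [1]), i.e. d_beta(1) = 21^omega
```

For a Pisot base such as this one the remainders are guaranteed to repeat, so the loop terminates with a genuine preperiod/period decomposition; for a generic base no repetition need ever occur.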
{\it Introduction. } The quantum-classical correspondence has been a fundamental problem since the foundation of quantum mechanics \cite{qc_correspondence}. The problem of how to deduce classical mechanics from the quantum theory has been discussed in various ways, for instance via Ehrenfest's theorem \cite{Ehrenfest} and the WKB analysis \cite{WKB}. Physical quantities in classical mechanics appear as expectation values in quantum mechanics. It follows that the essential problem in the quantum-classical correspondence is to find a quantum state in which the expectation value of the canonical variable of the system coincides with its classical counterpart. Coherent states are known to be quantum states which behave classically \cite{coherent_original}. In particular, in the case of the harmonic oscillator, a coherent state is a localized wave packet which oscillates at exactly the same frequency as the classical particle without changing its form \cite{coherent_ho}. Concerning integrable systems, coherent states play a prominent role with respect to the quantum-classical correspondence. In \cite{Ichikawa}, the KdV soliton was constructed as a coherent state of an anharmonic oscillator. Moreover, a field theoretical coherent state was constructed in the sine-Gordon model \cite{Aoyama}. In this letter, we consider the quantum-classical correspondence in integrable field theories. The simplest and most fundamental example is the one-dimensional Bose gas with contact interactions, described by the Hamiltonian \begin{align} \hll=\sum_{j=1}^{N}\(-\partial_j^2\)+\sum_{1\leq j<k \leq N}2c\delta(x_j-x_k), \label{hll} \end{align} where $N$ is the number of particles, $c$ is the coupling constant and $\del_j:=\dfrac{\del}{\del x_j}$. This is a quantum integrable system whose exact eigenstates and eigenenergies are obtained via the Bethe ansatz method \cite{Lieb-Liniger}.
In the quantum field description, the time evolution of the Bose field operator obeys the quantum nonlinear Schr{\"o}dinger (NLS) equation. In the classical limit where the quantum field operator is replaced by a commutative complex scalar field, the classical NLS equation is known to be classically integrable and has soliton solutions \cite{ZS}. Identifying the quantum state corresponding to the classical soliton has been a long-standing problem. In the attractive case $c<0$, the classical NLS equation has the bright soliton solution and the corresponding quantum state is constructed in terms of the bound states associated with the complex Bethe roots called strings \cite{Nohl, WKK}. In the repulsive case $c>0$, the classical solution is the dark soliton. It has been argued that the quantum wave packet constructed from a superposition of the hole-type excitations via the Bethe ansatz corresponds to the classical dark soliton \cite{SKKD, SKKD2}. In the attractive case, the bright soliton is obtained from the $N$-particle bound states in the limit $N\to\infty$ and $c\to0$ while keeping the product $Nc$ finite. This can be regarded as a large quantum-number limit. Moreover, the time evolution of the quantum state is not obtained from the Schr{\"o}dinger dynamics; the time-dependent quantum state is instead obtained from a Galilean transformation. In the repulsive case, the quantum wave packet corresponding to the classical dark soliton collapses due to the interference of the different energy eigenstates. In this letter, we construct a quantum soliton using a coherent state. Then we define the quantum and classical time evolutions of this state. By comparing their difference, we examine the stability of the quantum soliton state. It is known that nonlinearity spoils the stability of coherent states \cite{coherent_stable_conditions}. {\it Quantum field theory. 
} The 1D Bose gas \eqref{hll} can be described by the Bose field operators satisfying the canonical equal-time commutation relations \begin{align} &[\hat{ \psi}(x,t), \hat{ \psi}^{\dagger}(y,t)]=\delta(x-y), \\ &[\hat{ \psi}(x,t), \hat{ \psi}(y,t)]=[\hat{ \psi}^{\dagger}(x,t), \hat{ \psi}^{\dagger}(y,t)]=0. \end{align} The vacuum $\vac$ satisfies \begin{align} \hat{ \psi}(x,t)\vac=0, \quad \vvac\hat{ \psi}^{\dagger}(x,t)=0, \qquad \bra 0 | 0 \ket=1. \end{align} The state space is generated by the successive actions of the creation operator $\hpd(x)$ on the vacuum as \begin{align} |\vp_N\ket=\int\d x_1\cdots\d x_N & \vp_N(x_1,\cdots,x_N) \nn\\&\times \hpd(x_1)\cdots\hpd(x_N)\vac, \end{align} where $\vp_N(x_1,\cdots,x_N)$ is the corresponding $N$-body wave function. The Hamiltonian $\hhll$ is written in terms of the field operators as \begin{align} \hhll= \int\d x \[ -\hat{\psi}^\dagger \del_x^2\hat{\psi} + c \hat{\psi}^\dagger \hat{\psi}^\dagger \hat{\psi} \hat{\psi} \]. \label{hhll} \end{align} In the Heisenberg picture, the time evolution of the field operator $\hat{\psi}(x,t)$ is given by \begin{align} \i\del_t\hat{\psi} =[\hat{\psi}, \hhll] = -\del_x^2 \hat{\psi} + 2c \hat{ \psi}^{\dagger} \hat{ \psi} \hat{ \psi}, \end{align} which we call the quantum nonlinear Schr{\"o}dinger (NLS) equation. The formal solution is explicitly written as \begin{align} \hat{\psi}(x,t) = e^{\i\hhll t} \hat{\psi}(x) e^{-\i\hhll t}. \end{align} {\it Classical field theory. } Replacing the quantum field operators $\hp(x,t)$ and $\hpd(x,t)$ by the commutative complex scalar fields $\fcl(x,t)$ and $\fast(x,t)$, which we call ``{\it classicalization}'', we obtain the classical NLS equation \begin{align} \i\del_t\fcl = -\del_x^2 \fcl + 2c \fast \fcl \fcl. \label{cNLS} \end{align} This can be solved via the inverse scattering method and has soliton solutions \cite{ZS}.
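As a quick numerical sanity check (not part of the letter), one can verify that in the attractive case $c=-1$ the standard bright one-soliton profile $\fcl(x,t)=\mathrm{sech}(x)\,e^{\i t}$ satisfies the classical NLS equation \eqref{cNLS}; the finite-difference step $h$ below is an arbitrary choice.

```python
import cmath

C = -1.0  # attractive coupling

def f(x, t):
    """Bright one-soliton of the focusing classical NLS (c = -1)."""
    return (1.0 / cmath.cosh(x)) * cmath.exp(1j * t)

def nls_residual(x, t, h=1e-4):
    """i f_t + f_xx - 2c |f|^2 f, with central finite differences;
    this vanishes exactly for a solution of the classical NLS equation."""
    ft = (f(x, t + h) - f(x, t - h)) / (2 * h)
    fxx = (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h**2
    return 1j * ft + fxx - 2 * C * abs(f(x, t))**2 * f(x, t)

# The residual vanishes up to discretization error at any sample point.
print(abs(nls_residual(0.3, 0.7)) < 1e-6)  # True
```

The same check with a generic profile (e.g. a Gaussian) gives an $O(1)$ residual, which makes it a useful guard against sign conventions in the equation.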
The energy functional is obtained through the classicalization in the Hamiltonian \eqref{hhll} as \begin{align} E[\fcl,\fast]=\int\d x\[-\fast \del_x^2 \fcl+c\fast\fast\fcl\fcl\], \end{align} in terms of which the classical NLS equation \eqref{cNLS} can be recast into the form \begin{align} \del_t\fcl=\{\fcl, E\}, \quad \del_t\fast=\{\fast, E\}, \label{cNLS2} \end{align} where the Poisson bracket for two functionals is defined by \begin{align} \{F, G\}:=\frac1\i\int\d x \( \frac{\delta F}{\delta \fcl}\frac{\delta G}{\delta \fast} - \frac{\delta G}{\delta \fcl}\frac{\delta F}{\delta \fast} \). \end{align} We then obtain the equal-time canonical relation \begin{align} \{\fcl(x,t), \fast(y,t)\}=\frac1\i\delta(x-y) \end{align} and the time evolution of a physical quantity $F$, \begin{align} \del_t F=\{F, E\}. \end{align} {\it Coherent state. } Let $\fcl(x, t)$ be the exact soliton solution of the classical NLS equation \eqref{cNLS} or \eqref{cNLS2}. We construct a quantum state corresponding to the classical soliton at the initial time $t=0$. The main object of this letter is the coherent state in the quantum field theory defined as \cite{Aoyama} \begin{align} |\fcl \ket:=e^{\ah}\vac, \quad \ah:=\int\d x \[\fcl(x) \hpd(x)-\fast(x) \hp(x)\], \label{fket} \end{align} where $f(x):=f(x,t=0)$. The normalization $\bra\fcl|\fcl\ket=1$ follows from $\ah^\dagger=-\ah$. One can easily see that \begin{align} & [\hp(x), \ah]=\fcl(x), \quad [\hpd(x), \ah]=\fast(x), \\& \{\fcl(x), \ah\}=\i\hp(x), \quad \{\fast(x), \ah\}=\i\hpd(x). \end{align} It follows that the coherent state is an eigenvector of $\hat{\psi}(x)$ with the eigenvalue $\fcl(x)$: \begin{align} \hat{\psi}(x)|\fcl \ket=\fcl(x)|\fcl \ket.
\end{align} Moreover, as for the expectation values, we have the following relations \begin{align} \bra\fcl|\hp(x)|\fcl\ket=\fcl(x), \quad \bra\fcl|\hpd(x)|\fcl\ket=\fast(x), \end{align} which means that the coherent state classicalizes the field operators $\hp, \hpd$ to the scalar fields $\fcl, \fast$, respectively. {\it Time evolution. } Let us proceed to the time evolution of the coherent state. According to the principles of quantum mechanics, the coherent state $|\fcl\ket$ is time-evolved as \begin{align} |\fcl, t\ket:=e^{-\i\hhll t}|\fcl\ket=e^{\ah(-t)}\vac, \end{align} where \begin{align} \ah(t):=\int\d x \[\fcl(x) \hpd(x,t)-\fast(x) \hp(x,t)\]. \end{align} The expectation value of the field operator at time $t$ is given by \begin{align} \bra\fcl, t|\hp(x)|\fcl, t\ket =\bra\fcl|\hp(x, t)|\fcl\ket, \end{align} which is {\it not} equal to the classical soliton $\fcl(x, t)$. Here let us introduce the ``{\it classically}'' time-evolved coherent state $|\widetilde{f,t}\ket$ as \begin{align} & |\widetilde{f,t}\ket:=e^{\at(t)}\vac, \\& \at(t):=\int\d x \[\fcl(x,t) \hpd(x)-\fast(x,t) \hp(x)\], \end{align} whose expectation value describes the exact time evolution of the classical soliton \begin{align} & \bra\widetilde{f,t}|\hp(x)|\widetilde{f,t}\ket=\fcl(x,t). \end{align} At the initial time $t=0$, they are equal to each other \begin{align} \ah(0)=\at(0)=\ah, \quad |\fcl, t=0\ket=|\widetilde{f,t=0}\ket=|\fcl\ket. \end{align} They are evolved in time according to \begin{align} \del_t\ah(t)=\frac1\i[\ah,\hhll], \quad \del_t\at(t)=\{\ah,E\}. \end{align} As a quantity representing the difference between quantum and classical states, we introduce a function $r(t)$ as \begin{align} r(t):=\bra\fcl, t|\widetilde{f,t}\ket =\vvac e^{-\ah(-t)} e^{\at(t)} \vac, \end{align} which starts from 1 and decays to 0.
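The defining properties of the coherent state have a familiar single-mode analogue that can be checked numerically: in a truncated Fock space, $e^{\alpha a^\dagger-\alpha^* a}|0\rangle$ is, up to truncation error, an eigenvector of $a$ with eigenvalue $\alpha$, so $\langle a\rangle=\alpha$. The truncation size and the (real) value $\alpha=0.5$ below are arbitrary illustrative choices, not quantities from the letter.

```python
import math

N = 40  # Fock-space truncation: levels |0>, ..., |N-1>

def apply_a(v):
    """Annihilation operator: (a v)_n = sqrt(n+1) v_{n+1}."""
    return [math.sqrt(n + 1) * v[n + 1] for n in range(N - 1)] + [0.0]

def apply_adag(v):
    """Creation operator (truncated): (a^dag v)_n = sqrt(n) v_{n-1}."""
    return [0.0] + [math.sqrt(n) * v[n - 1] for n in range(1, N)]

def coherent_state(alpha, terms=80):
    """exp(alpha a^dag - alpha a)|0> via the Taylor series of exp
    applied to the vacuum vector (alpha is taken real here)."""
    v = [1.0] + [0.0] * (N - 1)   # vacuum |0>
    total = v[:]
    for k in range(1, terms):
        Av = [alpha * x - alpha * y
              for x, y in zip(apply_adag(v), apply_a(v))]
        v = [x / k for x in Av]                      # A^k |0> / k!
        total = [s + x for s, x in zip(total, v)]
    return total

alpha = 0.5
ket = coherent_state(alpha)
a_ket = apply_a(ket)
# <alpha| a |alpha>; components are real, so no conjugation is needed.
expect = sum(c * x for c, x in zip(ket, a_ket))
print(round(expect, 6))  # 0.5
```

Since $\alpha(a^\dagger-a)$ is antisymmetric, its exponential is orthogonal and the state stays normalized, mirroring $\ah^\dagger=-\ah$ in the field-theoretic construction.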
At an infinitesimal time $\Delta t$, we have \begin{align} & \ah(-\Delta t)=\ah-\Delta t \frac1\i[\ah,\hhll]=:\ah-\Delta t F, \\& \at(\Delta t)=\ah+\Delta t \{\ah,E\}=:\ah+\Delta t G. \end{align} Using the Baker--Campbell--Hausdorff formula, we can explicitly evaluate $r(\Delta t)$ in the form \begin{align} r(\Delta t)=1-\i c\Delta t\int|f|^4\d x+c\,\mathcal{O}(\Delta t^2). \end{align} In the case of $c=0$, the overlap $r(t)$ remains 1 for all $t$ and the time evolution of the coherent state exactly coincides with the classical soliton. The cancellation of the linear terms suggests that the instability of the quantum soliton originates from the nonlinear term in the Hamiltonian \eqref{hhll}. {\it Conclusion. } In this letter, we constructed the field theoretical coherent state \eqref{fket} and discussed the quantum-classical correspondence in integrable field theory. We considered here the case of the one-dimensional Bose gas as an example. However, similar arguments are possible in the case of other integrable field theories. The expectation values of the field operator with respect to the coherent state are identical to the classical soliton at the initial time. However, the unitary time evolution of the coherent states breaks the coherence property due to the nonlinearity. Consequently, the expectation values of the field operators with respect to this state do not coincide with the classical solutions anymore. We discussed the difference between quantum and classical time evolutions of coherent states by evaluating the overlaps between these states. Elucidating the relationship between the coherent states and the previously constructed quantum bright and dark solitons is the next problem to be studied.
\section*{Acknowledgments} \vspace{-0.1in} \noindent We especially thank E. Bellini and S. Melville for numerous discussions and shared insights. We also acknowledge several useful discussions with T. Brinckmann, C. de Rham, P. Ferreira, A. Refregier, A. Tolley and E. Trincherini. JN acknowledges support from Dr. Max R\"ossler, the Walter Haefner Foundation and the ETH Zurich Foundation. AN acknowledges support from SNF grant 200021\_169130. In deriving the results of this paper, we have used: CLASS \cite{Blas:2011rf}, corner \cite{corner}, hi\_class \cite{Zumalacarregui:2016pph}, MontePython \cite{Audren:2012wb,Brinckmann:2018cvx} and xAct \cite{xAct}. \bibliographystyle{apsrev4-1}
\section{Introduction} In the zero-shot learning task, a classifier is trained with datapoints from seen classes and applied to recognize previously unseen datapoints belonging to unseen classes. The main objective is to leverage knowledge from label embeddings, \emph{e.g.}~attributes, word embeddings or class hierarchy information, to build a universal mapping that can classify unseen datapoints without retraining the system on new unseen classes. Firstly, let $\mathbf{X}_{tr}$ denote training datapoints from seen classes $C_s$ and $\mathbf{X}_{ts}$ denote testing datapoints from unseen classes $C_u$ such that $C_s \cap C_u = \emptyset$. The model is trained on $\mathbf{X}_{tr}$ but needs to assign a label $l \in C_u$ to each datapoint from $\mathbf{X}_{ts}$. Recently, researchers have argued that standard zero-shot learning protocols are biased towards good results on unseen classes while neglecting performance on seen classes. To address this issue, a generalized zero-shot learning task was proposed for which testing datapoints come from seen and unseen classes, and the classifier needs to cope well with all classes $C = C_s \cup C_u$. It has emerged that most zero-shot learning methods achieve low accuracy in such a protocol because training datapoints come only from the seen classes. In most cases, the strong imbalance of the data distribution will make the classifier assign datapoints from unseen classes to seen classes. The use of a Generative Adversarial Network (GAN) to generate auxiliary datapoints for unseen classes \cite{xian2018feature} enables the classifier to be trained on datapoints from both seen and unseen categories.
Inspired by such an extension, we found that a classifier, \emph{e.g.}~a Support Vector Machine (SVM), learned on the auxiliary and original training data can be further improved by treating the classification of original datapoints separately, that is, by decomposing the generalized zero-shot learning into two disjoint classification tasks: one classifier dealing with datapoints from seen classes and another classifier dealing with datapoints of unseen classes. In this paper, we propose to use the auxiliary data of unseen classes generated by a GAN together with the original training data to build a model selection approach for generalized zero-shot learning. We refer to our approach as ModelSel and propose its three variants in Section \ref{sec:approach}. We evaluate ModelSel on four standard datasets and demonstrate state-of-the-art results. \section{Related Work} Zero-shot learning is a form of transfer learning. Specifically, it utilizes the knowledge learned on datapoints of seen classes and attribute vectors to generalize and recognize testing datapoints from new classes. The majority of previous zero-shot learning methods use some linear mapping to capture the relation between the feature and attribute vectors. Attribute Label Embedding (ALE) \cite{akata2013label} uses the attributes as label embedding and presents an objective inspired by a structured WSABIE ranking method that assigns more importance to the top of the ranking list. Embarrassingly Simple Zero-Shot Learning (ESZSL) \cite{romera2015embarrassingly} uses a linear mapping and a simple empirical objective with several regularization terms that impose a penalty on the projection of features from the Euclidean into the attribute space and the projection of attribute vectors back to the Euclidean space.
Structured Joint Embedding (SJE) \cite{akata2015evaluation} proposes an objective inspired by the structured SVM, applied as a linear mapping, while \cite{XianCVPR2017} proposes new data splits and evaluation protocols to eliminate the overlap between classes of ImageNet \cite{ILSVRC15} and zero-shot learning datasets. Zero-shot Kernel Learning (ZSKL) \cite{zhang2018zero} proposes a non-linear kernel method with weak incoherence constraints to make the columns of the projection matrix weakly incoherent. Feature Generating Networks \cite{xian2018feature} leverages a conditional Wasserstein Generative Adversarial Network (WGAN) to generate auxiliary datapoints for unseen classes from attribute vectors, followed by training a simple Softmax classifier. SoSN \cite{zhang2018second} and So-HoT \cite{koniusz2018museum} use second-order statistics \cite{koniusz2018deeper} for similarity learning and domain adaptation. \section{Approach} \label{sec:approach} \subsection{Notations} Let us denote seen classes as $C_s$, unseen classes as $C_u$. $\mathbf{X}_{tr}$ denotes original training datapoints, $\mathbf{X}_{ge}$ are the generated datapoints for unseen classes. Each datapoint is a column vector in one of the above matrices. $M_{sel}$ is the selector between seen/unseen classes, $M_s$ is the model for $C_s$, $M_u$ is the model for $C_u$, $M_t$ is a model for $C_s\cup C_u$. Moreover, $\boldsymbol{w}_{sel}$, $b_{sel}$, $\boldsymbol{W}_s$, $\mathbf{b}_s$, $\boldsymbol{W}_u$, $\mathbf{b}_u$, $\boldsymbol{W}_t$ and $\mathbf{b}_t$ are the projection vector/matrices and biases used by our models as detailed below. \subsection{Model Selection Mechanism} In this paper, we propose a mechanism that leverages several classifiers to perform generalized zero-shot learning. Firstly, we label the original datapoints as $1$ and auxiliary datapoints as $-1$ to train $M_{sel}$, which is a linear SVM classifier.
Model $M_s$ is a classifier trained with datapoints from seen classes $C_s$, model $M_u$ is trained with auxiliary datapoints from the GAN corresponding to unseen classes $C_u$. Model $M_t$ is trained for $C_s\cup C_u$ simultaneously. $M_s$, $M_u$ and $M_t$ are trained separately via the SoftmaxLog classifier. While we use a single training process, we distinguish three selection models applied at the testing stage. The output of each classifier can be defined as: \begin{align} & \mathbf{g}_s(\mathbf{x}) = \boldsymbol{W}_s^T\mathbf{x} + \mathbf{b}_s,\\ & \mathbf{g}_u(\mathbf{x}) = \boldsymbol{W}_u^T\mathbf{x} + \mathbf{b}_u,\\ & \mathbf{g}_t(\mathbf{x}) = \boldsymbol{W}_t^T\mathbf{x} + \mathbf{b}_t. \end{align} \noindent{\textbf{ModelSel-2Way.}} The testing mechanism of ModelSel-2Way can be illustrated as follows. For each testing datapoint $\mathbf{x} \in \mathbf{X}_{ts}$, we first feed it into $M_{sel}$. The role of $M_{sel}$ is to decide if $\mathbf{x}$ belongs to a seen or unseen class, based on which we select either the $M_s$ or $M_u$ model for the final classification: \begin{equation} s(\mathbf{x}) = \boldsymbol{w}_{sel}^T\mathbf{x} + b_{sel}. \end{equation} Then, the final prediction for $\mathbf{x}$ becomes: \begin{equation} \mathbf{f}(\mathbf{x}, s(\mathbf{x})) = \begin{cases} {\mathbf{g}_s(\mathbf{x})}, &\text{if } s(\mathbf{x})\geq0, \\ {\mathbf{g}_u(\mathbf{x})}, &\text{otherwise.} \end{cases} \end{equation} \begin{figure}[t] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[height=3cm]{images/2way} \caption{Our ModelSel-2Way approach.} \label{fig:test1} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[height=3cm]{images/2way-sa} \caption{Our ModelSel-2Way-SA approach.} \label{fig:test2} \end{minipage} \end{figure} \noindent{\textbf{ModelSel-2Way-SA}}. We also propose to use the Sigmoid function to generate soft assignment scores from the output of $M_{sel}$ as the weights assigned to the outputs of $M_s$ and $M_u$.
We call this method ModelSel-2Way-SA. The intuition behind this model is that $M_{sel}$ suffers from quantization errors close to the classification boundary, thus we model the assignment uncertainty in $M_{sel}$ to reduce quantization errors. The probabilities that $\mathbf{x}$ belongs to the seen classes $C_s$ or the unseen classes $C_u$ are denoted $p_s(\mathbf{x})$ and $p_u(\mathbf{x}) = 1 - p_s(\mathbf{x})$, respectively, and $p_s(\mathbf{x})$ is given as: \begin{equation} p_s(\mathbf{x}) = \frac{1}{1 + e^{-\sigma s(\mathbf{x})}}, \end{equation} where $\sigma$ is the parameter to control the slope of the Sigmoid function. Then, the output of ModelSel-2Way-SA is given as: \begin{equation} \mathbf{f}(\mathbf{x}) = p_s(\mathbf{x})\cdot\mathbf{g}_s(\mathbf{x}) + p_u(\mathbf{x})\cdot \mathbf{g}_u(\mathbf{x}). \end{equation} \noindent{\textbf{ModelSel-3Way.}} For the ModelSel-3Way, we additionally use the classifier $M_t$ trained with both original and auxiliary datapoints so it can classify data from both seen and unseen classes. While its performance is worse than that of $M_s$ and $M_u$ in their respective domains, we leverage the output of $M_t$ as a mask to correct some incorrect predictions from $M_u$ and $M_s$. \begin{figure}[b] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[height=3cm]{images/3way} \caption{Our ModelSel-3Way approach.} \label{fig:3-way-model} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[height=3cm]{images/3way-dist} \caption{The selection of classifiers in our ModelSel-3Way.} \label{fig:3-way-dist} \end{minipage} \end{figure} The output of our ModelSel-3Way model, shown in Figure \ref{fig:3-way-model}, is defined as follows: \begin{equation} \mathbf{f}(\mathbf{x}, s(\mathbf{x})) = \max \left( \begin{array}{c} \!\! 
\begin{cases} c\cdot \mathbf{g}_t(\mathbf{x}) + \mathbf{g}_s(\mathbf{x}) - o_s \text{ if } s\!\geq\!0\\ c\cdot\mathbf{g}_t(\mathbf{x}) + \mathbf{g}_u(\mathbf{x}) - o_u \text{ if } s\!<\!0 \end{cases}\!\!\!\!\!\!\!,\\ \mathbf{g}_t(\mathbf{x}) \end{array} \right)\!, \begin{array}{c} \leftarrow\text{gray regions in Fig. \ref{fig:3-way-dist}} \\ \leftarrow\text{black regions in Fig. \ref{fig:3-way-dist}} \\ \leftarrow\text{white region in Fig. \ref{fig:3-way-dist}} \end{array} \end{equation} where $c$ adjusts the importance of $M_t$, while $o_s$ and $o_u$ are offsets for $M_s$ and $M_u$, respectively. Intuitively, close to the classification boundaries, predictions of $\mathbf{g}_s(\mathbf{x})$ and $\mathbf{g}_u(\mathbf{x})$ are replaced by $\mathbf{g}_t(\mathbf{x})$ in this model. Figure \ref{fig:3-way-dist} illustrates the selection of classifiers in our ModelSel-3Way approach. We define $N$ as the total number of testing datapoints, and $N_s$ and $N_u$ as the numbers of testing datapoints assigned to the seen and unseen classes $C_s$ and $C_u$, respectively. The distribution map has the same size as $\mathbf{g}_t(\mathbf{X})\in\mbr{C\times N}$; the light gray color highlights successful predictions from $\mathbf{g}_s(\mathbf{X}_{tr}) \in \mbr{C_s \times N_s}$ while the dark black color highlights successful predictions from $\mathbf{g}_u(\mathbf{X}_{te}) \in \mbr{C_u \times N_u}$. \section{Experiments} Below we detail the datasets used in our experiments, describe the evaluation protocols and present experimental results that demonstrate the usefulness of our approach. \subsection{Setup} \vspace{0.05cm} \noindent{\textbf{Datasets.}} We evaluate the proposed models on four datasets. Attribute Pascal and Yahoo (\textit{APY}) contains 15339 images, 64 attributes and 32 classes. The 20 classes from Pascal VOC are used for training and the 12 classes collected from Yahoo! are used for testing. Animals with Attributes (\textit{AWA1}) contains 30475 images from 50 classes. Each class is annotated with 85 attributes.
The zero-shot learning split of AWA1 is 40 classes for training and 10 classes for testing. Animals with Attributes 2 (\textit{AWA2}), proposed by \cite{XianCVPR2017}, is the updated, open-source version of AWA1. It has the same number of classes and attributes and the same train/test split as AWA1. Flower102 (\textit{FLO}) \cite{nilsback_flower102} contains 8189 images from 102 classes. The evaluation paper \cite{XianCVPR2017} proposes novel zero-shot learning splits that eliminate the overlap between the classes in zero-shot datasets and ImageNet, and evaluates the most popular zero-shot learning methods. In this paper, we follow the new splits to make a fair comparison with other state-of-the-art methods. \vspace{0.05cm} \noindent{\textbf{Parameters.}} We perform mean subtraction and standard-deviation normalization on both original and auxiliary datapoints to train $M_{sel}$, which alleviates the imbalance between the two distributions. For $M_s$ and $M_u$, we simply use the original data provided in \cite{XianCVPR2017} without any preprocessing. Our models use classifiers with the SoftmaxLog objective. We use the Adam solver with mini-batches of size 60; the parameters of Adam are set to $\beta_1 = 0.9$ and $\beta_2 = 0.99$. We run the solver for 50 epochs. The learning rate is set to $10^{-4}$. The parameters used by ModelSel-2Way and ModelSel-3Way are chosen via cross-validation. \vspace{0.05cm} \noindent{\textbf{Protocols.}} For training, all models are trained at once, as the training process is the same for each model. For testing, we follow the generalized zero-shot learning protocols of \cite{XianCVPR2017}. There are two testing splits, for seen and unseen classes, respectively. We evaluate on both splits and collect two per-class mean top-1 accuracies, $Acc_S$ and $Acc_U$, as suggested by \cite{XianCVPR2017}.
We report the harmonic mean over the two results as the final score: \begin{equation} H = 2\frac{Acc_S\cdot Acc_U}{Acc_S + Acc_U}. \end{equation} \begin{figure}[t] \centering \includegraphics[height=3.5cm]{images/sigmoid.pdf} \vspace{-0.3cm} \caption{The influence of $\sigma$ on the classification accuracy.} \label{fig:sigma} \vspace{0.2cm} \end{figure} \begin{table}[htbp] \centering \makebox[\textwidth]{\begin{tabular}{ll|ccc|ccc|ccc|ccc|} & & \multicolumn{3}{c}{AWA1} & \multicolumn{3}{c}{AWA2} & \multicolumn{3}{c}{FLO} & \multicolumn{3}{c|}{APY} \\ Method & & ts & tr & H & ts & tr & H & ts & tr & H & ts & tr & H \\ \hline DAP &\kern-0.6em\cite{lampert2014attribute} & 0.0&88.7&0.0 & 0.0&84.7&0.0 & -&-&- & 4.8&78.3&8.0 \\ SSE & \kern-0.6em\cite{zhang2015zero} &7.0&80.5&12.9 & 8.1&82.5&14.8 & -&-&- & 0.2&78.9&0.4 \\ LATEM & \kern-0.6em\cite{latem_cvpr16a} & 7.3&71.7&13.3 & 11.5&77.3&20.0 & 14.7&28.8&19.5 & 0.1&73.0&0.2\\ ALE & \kern-0.6em\cite{akata2013label}& 16.8&76.1&27.5 & 14.0&81.8&23.9 & 21.8&33.1&26.3 & 4.6&73.7&8.7 \\ DEVISE & \kern-0.6em\cite{frome2013devise} & 13.4&68.7&22.4 & 17.1&74.7&27.8 & 9.9&44.2&16.2 & 4.9&76.9&9.2\\ SJE & \kern-0.6em\cite{akata2015evaluation}& 11.3&74.6&19.6 & 8.0&73.9&14.4 & 13.9&47.6&21.5 & 3.7&55.7&6.9\\ ESZSL & \kern-0.6em\cite{romera2015embarrassingly} & 6.6&75.6&12.1 & 5.9&77.8&11.0 & 11.4&56.8&19.0 & 2.4&70.1&4.6\\ SYNC & \kern-0.6em\cite{changpinyo2016synthesized} & 8.9&87.3&16.2 & 10.0&90.5&18.0 & -&-&- & 7.4&66.3&13.3\\ SAE & \kern-0.6em\cite{sae} & 1.8&77.1&3.5 & 1.1&82.2& 2.2& -&-&- & 0.4&80.9&0.9 \\ ZSKL & \kern-0.6em\cite{zhang2018zero} & 18.3&79.3&29.8 & 18.9&82.7&30.8 & -&-&- & 11.9&76.3&20.5 \\ f-CLSWGAN & \kern-0.6em\cite{xian2018feature}& 57.9&61.4&59.6 & 53.7&68.2&60.1 & 59.0&73.8&65.6 & 8.7&75.4&15.5 \\ \hline ModelSel-2Way & & 50.1&77.7&61.0 & 41.7&84.2&55.8 & 46.9&60.9&53.0 & 27.5&76.9&40.5 \\ ModelSel-2Way-SA & & 55.8&69.6&62.0 & 55.2&70.8&62.0 & 52.6&54.7&53.6 & 30.3&70.3&\textbf{42.3} \\ ModelSel-3Way & &
52.6&76.7&\textbf{62.4} & 52.3&81.3&\textbf{63.7} & 56.1&81.2&\textbf{66.4} & 28.4&75.5&41.2 \\ \hline \end{tabular}} \vspace{0.1cm} \caption{Evaluations on generalized zero-shot learning.} \label{tabel:results} \end{table} \subsection{Evaluations} Figure \ref{fig:sigma} shows how the classification accuracy of ModelSel-2Way-SA varies with $\sigma$. It can be seen that the soft assignment score, obtained by passing the selector scores through the Sigmoid function, helps improve the performance of our model. Table \ref{tabel:results} shows that our models obtain state-of-the-art results on the AWA1, AWA2, FLO and APY datasets. Compared to f-CLSWGAN, our ModelSel-3Way achieves a $2.8\%$ higher accuracy on AWA1, $3.6\%$ on AWA2 and $0.8\%$ on FLO. The biggest improvement is observed for ModelSel-2Way-SA on APY, where the accuracy increases from $20.5\%$ for ZSKL \cite{zhang2018zero} to $42.3\%$. These evaluations illustrate that our models combine predictions on seen and auxiliary datapoints better than current state-of-the-art approaches. \section{Conclusions} In this paper, we have presented three model-selection approaches, which introduce a novel way of leveraging generated datapoints in the generalized zero-shot learning task. Different from \cite{xian2018feature}, our models use original and generated datapoints to train a selector function which chooses between classifiers for seen and unseen datapoints. Our ModelSel variants achieve state-of-the-art results on four publicly available datasets. \bibliographystyle{splncs}
\section{Introduction and background} Damage and failure are central to many fields, from civil to aerospace engineering, from nano- to Earth-scales. Yet, they remain difficult to anticipate: Stress enhancement at defects makes the behavior observed at the macroscopic scale extremely dependent on the presence of material inhomogeneities down to very small scales. As a consequence, in heterogeneous brittle solids under slowly varying external loading, the failure processes are sometimes observed to be erratic, with random cascades of microfracturing events spanning a variety of scales. Such dynamics are revealed, {\em e.g.}, by the acoustic noise emitted during the failure of various solids \cite{petri1994_prl,garcimartin97_prl,davidsen05_prl,baro13_prl} and, at much larger scale, by the seismic activity going along with earthquakes \cite{bak02_prl,corral04_prl}. Generic features in the field are the existence of scale-free statistics for individual microfracturing/acoustic/seismic events and the non-trivial organization of the event sequences into characteristic aftershock sequences obeying specific laws initially derived in seismology (see \cite{bonamy2009_jpd} for a review). For brittle solids under tension, the difficulty is tackled by reducing the problem to that of the destabilization and subsequent growth of a single pre-existing crack \cite{Bonamy17_crp}. Linear Elastic Fracture Mechanics (LEFM) provides a powerful framework to address this so-called situation of nominally brittle fracture, and deterministically links crack dynamics to the applied loading \cite{lawn93_book}. Still, such a continuum approach fails in some situations. In particular, the crack growth is sometimes observed \cite{maloy06_prl,Marchenko06_apl,Astrom06_pl,Kovoisto07_prl,Stojanova14_prl} to be erratic, made of random and local front jumps -- avalanches -- whose statistics share some of the scale-free features mentioned above.
This so-called crackling dynamics can be interpreted by mapping the in-plane motion of the crack front onto the problem of a long-range (LR) elastic interface propagating within a two-dimensional random potential \cite{Schmittbuhl95_prl,ramanathan97_prl}, so that the driving force self-adjusts around the depinning threshold \cite{bonamy2008_prl}. This approach quantitatively reproduces many of the statistical features observed in the simplified 2D experimental configuration of an interfacial crack driven along a weak heterogeneous plate \cite{bonamy2008_prl,Laurson13_natcom,Ponson17_pre}. Still, whether or not this approach allows describing the bulk fracture of real three-dimensional solids remains an open question (see \cite{bares14_prl} for preliminary work in this context). Beyond the scale-free features of individual events, whether or not the events get organized into the characteristic aftershock sequences of seismology in this more tractable single-crack problem is an important question (see \cite{grob09_pag,bares2018_natcom} for preliminary works). The work gathered here aims at filling this gap. We designed a fracture experiment which consists in driving a tensile crack throughout an artificial rock of tunable microstructure. At slow enough driving speed, the crack displays an irregular, burst-like dynamics. The fluctuations of the instantaneous crack speed and of the mechanical energy release are both monitored and used to characterize the crackling dynamics at the continuum-level (global) scale. The induced acoustic events are recorded and provide information at the local scale. The so-obtained experimental data are contrasted with the crackling features predicted by the depinning approach at both global and local scales. Beyond their individual statistics, the time-energy organization of the events is analyzed in a way similar to that developed in statistical seismology.
\section{Material \& Methods} \begin{figure}[!h] \centering\includegraphics[width=\columnwidth]{fig_method.png} \caption{(a) This sketch depicts a nominally brittle crack propagating in a heterogeneous solid in opening mode due to a prying forcing quantified by the elastic energy release rate $G(\overline{f},t)$. (b) The time evolution of the crack front (red solid line) projected onto the mean crack plane $(x,z)$ is described by the function $f(z,t)$. The sample width is $\mathcal{L}$ and the characteristic heterogeneity size is $\ell$. (c) Sketch of the experimental fracture set-up. A model rock made of sintered polymer beads is fractured by means of a wedge-splitting geometry, by pushing at constant speed a triangular wedge into a rectangular notch cut out on the sample. This allows driving a slow stable crack in tension (red arrows). During the crack growth, the propagation is monitored by eight acoustic transducers (four on the front face, four on the back face) and a global force sensor.} \label{fig_method} \end{figure} \subsection{Theoretical \& numerical aspects} The continuum framework of LEFM addresses the problem of a straight slit crack embedded in a homogeneous solid. Crack motion is governed by the balance between the amount of mechanical energy released by the solid as the crack propagates over a unit length, $G$, and the fracture energy, $\Gamma$, which is the energy dissipated in the fracture process zone to create two new fracture surfaces of unit area \cite{lawn93_book}. In the standard LEFM framework, $G$ depends on the imposed loading and specimen geometry, and $\Gamma$ is a material constant. For a slow enough motion, the crack speed $v$ is given by: \begin{equation} \dfrac{1}{\mu} v=G - \Gamma, \end{equation} \noindent where $\mu$ is the crack front mobility.
In a perfectly linear elastic material (and in the absence of any environmental effect such as stress corrosion, for instance), $\mu$ can be related to the Rayleigh wave speed $c_R$ via $\mu= c_R / \Gamma$. For a viscoelastic material like the polystyrene used here, viscoelastic effects are not negligible and $\mu$ is expected to be much smaller. The depinning approach explicitly introduces the microstructure disorder (see Fig. \ref{fig_method}) by adding a stochastic term in the local fracture energy: $\Gamma(x,y,z)=\overline{\Gamma}+\eta(x,y,z)$. Here and thereafter, the $x$, $y$ and $z$ axes are respectively oriented along the growth direction, the tensile loading direction, and the front direction, as shown in Fig. \ref{fig_method}. This disorder induces in-plane and out-of-plane distortions of the front which, in turn, generate local variations in $G$. As such, the problem is {\em a priori} 3D; however, to first order, it can be decomposed into two independent effective 2D problems: an equation of motion which describes the dynamics of the in-plane projection of the crack line, and an equation of trajectory which describes the $x$ evolution of the out-of-plane roughness -- $x$ being the analog of time (see \cite{bares2014_ftp} for details). The underlying reasons are: (i) the out-of-plane corrugations are logarithmically rough \cite{larralde1995_epl,ramanathan97_prl,bares2014_ftp}, so that $\vec{v}$ and $\eta(x,y,z)$ reduce to their in-plane projections at large scales; (ii) to first order, the variations of $G$ depend on the in-plane front distortion only \cite{movchan1998_ijss}. One can then use Rice's analysis \cite{rice1985_jam,gao1989_jam} to relate the local value $G(z,t)$ of the energy release rate to the in-plane projection of the front shape, $f(z,t)$ (Fig.
\ref{fig_method}(b)): \begin{align} & G(z,t) =\overline{G}(1+J(z,\lbrace f \rbrace)), \label{eq_mod0}\\ & \mathrm{with} \, J(z,\lbrace f \rbrace)=\dfrac{1}{\pi} \times pp \int_{\text{front}}{\dfrac{f(\zeta,t)-f(z,t)}{(\zeta-z)^2}d\zeta}, \nonumber \end{align} \noindent where $pp$ denotes the \textit{principal part} of the integral. Note that the long-range kernel $J$ is more conveniently defined by its $z$-Fourier transform, $\hat{J}(q)=-\vert q \vert \hat{f}(q)$. $\overline{G}$ denotes the energy release rate that would have been used in the standard LEFM description, after having coarse-grained the microstructure disorder and replaced the distorted front by a straight one at the effective position $\overline{f}(t)=\langle f(z,t) \rangle_z$ obtained by averaging over the specimen thickness. Once injected into the equation of motion, this yields: \begin{equation} \dfrac{1}{\mu} \dfrac{\partial f}{\partial t}=F(\overline{f},t)+\overline{\Gamma}J(z,\lbrace f \rbrace)+\eta(z,x=f(z,t)), \label{eq_mod1} \end{equation} \noindent where $F(\overline{f},t)=G(\overline{f},t)-\overline{\Gamma}$ is the loading. The random term $\eta(z,x)$ is characterized by two main quantities: the noise amplitude, defined as $\tilde{\Gamma}=\langle \eta^2(z,x)\rangle^{1/2}_{x,z}$, and the spatial correlation length $\ell$ over which the correlation function $C(x,z)=\langle \eta(x_0+x,z_0+z)\eta(x_0,z_0)\rangle_{x_0,z_0}$ decreases. We now consider situations of stable growth -- both in terms of dynamics and trajectory. These are encountered in systems whose geometry makes $G$ decrease with crack length and keeps the T-stress negative, and which are loaded externally by imposing time-increasing displacements \cite{bonamy2008_prl,bares2014_ftp}.
Then, $F(\overline{f},t)$ reads \cite{bares13_prl}: \begin{equation} F(\overline{f},t)=\dot{G}t-G'\overline{f} \label{eq_mod2} \end{equation} \noindent where $\dot{G}=\partial G/\partial t$ (driving rate) and $G'=-\partial G/\partial\overline{f}$ (unloading factor) are positive constants. Equations \ref{eq_mod1} and \ref{eq_mod2} provide the equation of motion of the crack line. It is convenient to introduce dimensionless time, $t\rightarrow t/(\ell/\mu\overline{\Gamma})$, and space, $\{x,z,f\}\rightarrow \{x/\ell,z/\ell,f/\ell\}$, to reduce the number of parameters from seven to four: \begin{equation} \dfrac{\partial f}{\partial t}=ct - k \overline{f}+\overline{\Gamma}J(z,\lbrace f \rbrace)+\eta(z,f(z,t)), \label{eqLine} \end{equation} \noindent where $c=\dot{G}\ell/\mu\overline{\Gamma}^2$ is the dimensionless loading speed and $k=G'\ell/\overline{\Gamma}$ is the dimensionless unloading factor. The two other parameters are the system size $N$ (in units of $\ell$) and the dimensionless noise amplitude $\tilde{\Gamma} \rightarrow \tilde{\Gamma}/ \overline{\Gamma}$. In the following, all these parameters were fixed to values ensuring a clear crackling dynamics \cite{bares13_prl}, with scale-free statistics spanning several decades: $c=2 \times 10^{-6}$, $k=10^{-4}$, $\tilde{\Gamma}=1$ and $N=1024$. The front line is discretized along $z$, $f(z,t)=f_z(t)$ with $z \in \{1,..., N\}$. The time evolution of $f_z(t)$ is obtained by solving Eq. \ref{eqLine} via a fourth-order Runge-Kutta scheme (discretization time step: $dt=0.1$), as in \cite{bares13_prl,bares2014_ftp}. The space-time dynamics $f(z,t)$ is obtained.
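As a minimal sketch (not the authors' code), the dimensionless line equation can be integrated with an explicit Euler step in place of the fourth-order Runge-Kutta scheme used in the paper, the kernel $J$ being evaluated in Fourier space via $\hat{J}(q)=-\vert q\vert\hat f(q)$. The run length and parameters below are toy values, much smaller than those quoted above, and the forward-only constraint on the velocity is an assumed simplification:

```python
import numpy as np

# Toy integration of the depinning line equation (explicit Euler instead of
# the paper's RK4; short run and illustrative parameters, not c=2e-6, etc.).
rng = np.random.default_rng(1)
N, dt, n_steps = 256, 0.1, 2000
c, k, Gamma_bar = 1e-3, 1e-2, 1.0

q = 2 * np.pi * np.fft.fftfreq(N)                 # Fourier modes along z
eta = rng.standard_normal((N, 4096))              # quenched disorder eta(z, x)

def J(f):
    # Long-range elastic kernel, defined spectrally by J_hat(q) = -|q| f_hat(q).
    return np.real(np.fft.ifft(-np.abs(q) * np.fft.fft(f)))

f = np.zeros(N)                                   # front position f(z, t=0)
vbar = np.empty(n_steps)
for i in range(n_steps):
    x_idx = np.clip(f.astype(int), 0, eta.shape[1] - 1)
    v = c * i * dt - k * f.mean() + Gamma_bar * J(f) + eta[np.arange(N), x_idx]
    v = np.maximum(v, 0.0)      # forward-only motion (an assumed simplification)
    f += v * dt
    vbar[i] = v.mean()
```

After the loop, `vbar` holds the spatially averaged speed history and the successive states of the array `f` play the role of the space-time dynamics $f(z,t)$.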
Its time derivative gives the local front speed, $v(z,t)=df_z(t)/ dt$, from which the spatially-averaged crack speed is deduced (Figure \ref{fig_display}(a)): \begin{equation} \overline{v}(t)=\frac{1}{N}\sum_{z=1}^{N} v(z,t) \end{equation} \noindent As we will see in section \ref{Sec:definition}, the global events will be identified with the bursts in $\overline{v}(t)$, while the local ones will be dug out from the space-time maps $v(z,t)$. The movie provided as electronic supplementary material shows the evolution of both $v(z,t)$ and $\overline{v}(t)$. \subsection{Experimental aspects} The fracture experiments presented here were carried out on a home-made artificial rock obtained by sintering polystyrene beads. The sintering procedure is detailed in \cite{cambonie2015_pre,bares2018_natcom} and summarized herein. First, a mold filled with monodisperse polystyrene beads (Dynoseeds from Microbeads SA, diameter $d$) is heated up to $90\%$ of the glass transition temperature, $T=105^\circ$C. Then, the mold is gradually compressed up to a prescribed pressure, $P$, by means of an electromechanical loading machine, while keeping $T=105^\circ$C. Both $P$ and $T$ are then kept constant for one hour to achieve the sintering. Then, the system is unloaded and cooled down to ambient temperature at a rate slow enough to avoid residual stresses ($\sim 8$ hours to cool down from $T$ to room temperature). This procedure provides a so-called artificial rock of homogeneous microstructure, the porosity and length-scale of which are set by the prescribed values of $P$ and $d$ \cite{cambonie2015_pre}. Note that the formation of natural rocks is much more complex and cannot be captured by a process such as the one used here. However, our model materials share two important features of the simplest rocks (sandstone for instance): they are composed of small cemented grains and the cracks propagate in a brittle manner between these grains.
In the experiments reported here, $d=583~\mu\mathrm{m}$ and $P$ is large enough (larger than $1~\mathrm{MPa}$) so that a dense rock is obtained, with no porosity. It breaks in a nominally brittle manner, by the propagation of a single inter-granular crack in between the sintered grains. The disordered nature of the grain joint network yields small out-of-plane deviations -- roughness --, the statistics of which have been analyzed in \cite{cambonie2015_pre}. These out-of-plane deviations, in turn, result in small variations in the landscape of effective toughness (term $\eta(x,z)$ in Eq. \ref{eqLine}). The typical length-scale to be associated with this quenched disordered toughness, hence, is set by $d$ \cite{bares2018_natcom,cambonie2015_pre}. In the so-obtained materials, stable cracks were driven by means of the wedge-splitting fracture test depicted in Fig. \ref{fig_method}(c). Parallelepipedic samples of length $140~\mathrm{mm}$ (along $x$), width $125~\mathrm{mm}$ (along $y$), and thickness $15~\mathrm{mm}$ (along $z$) are first machined. A rectangular notch is then cut out on one of the two lateral $(y-z)$ edges and an initial seed crack ($10$~mm long) is introduced in the middle of the cut with a razor blade. A triangular wedge (semi-angle $15^{\circ}$) is then pushed into this rectangular notch at a constant speed $V_{\text{wedge}}=16$~nm/s (Fig. \ref{fig_method}(c)). When the applied loading is large enough, the seed crack destabilizes and starts growing. During the experiment, the force $F(t)$ applied by the wedge is monitored via an S-type Vishay force cell (acquisition rate of $50$~kHz, accuracy of $1$~N), and the instantaneous specimen stiffness $\kappa(t) = F(t)/(V_{\text{wedge}}\, t)$ is deduced. Such a wedge-splitting arrangement also ensures stable crack paths: The compression along $x$ induced by the wedge (vertical axis on Fig.
1c) produces a negative T-stress \cite{seitl2011_cs} and, hence, encourages the crack to stay in the vicinity of the symmetry plane of the specimen ($y=0$) at large scales \cite{cotterell1980_ijf}. Two go-between steel blocks placed between the wedge and the specimen limit parasitic mechanical dissipation and ensure that the damage and failure processes are the dominating dissipation source for mechanical energy in the system (see \cite{bares14_prl,bares2018_natcom} for more details). As a result, both the instantaneous elastic energy stored in the specimen, $\mathcal{E}(t)$, and the instantaneous crack length (spatially averaged over the specimen thickness), $\overline{f}(t)$, can be determined with very high resolution (see \cite{bares14_prl} for details). Indeed, in a linear elastic material, $\mathcal{E}(t)=\frac{1}{2}F^2(t)/\kappa(t)$ and, for a prescribed geometry, $\kappa$ is a function of $\overline{f}$ only. The reference $\kappa$ \textit{vs.} $\overline{f}$ curve was computed for our geometry by finite element calculations (Cast3M software), and used to infer the spatially-averaged crack position at each time step: $\overline{f}(t)=\kappa^{-1}\!\left(F(t)/(V_{\text{wedge}}\, t)\right)$. Time differentiation of $\overline{f}(t)$ and $-\mathcal{E}(t)$ provides the instantaneous crack speed, $\overline{v}(t)$, and the mechanical power released, $\mathcal{P}(t)$ (Fig. \ref{fig_display}(f)). Both quantities were found to be proportional \cite{bares14_prl}. This actually results from the nominally brittle character of the specimen fracture, so that the mechanical energy release rate per unit length, $G=-\mathrm{d} \mathcal{E}/\mathrm{d} \overline{f}=\mathcal{P}(t)/\overline{v}(t)$, is equal at each time step to the fracture energy, $\Gamma$, which is a material constant. For the artificial rocks considered here: $\Gamma =100~\mathrm{J/m}^2$ \cite{bares14_prl}.
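The compliance chain just described -- stiffness from the force record, inversion of the reference $\kappa(\overline{f})$ curve, then differentiation -- can be sketched as follows. The reference curve and the force history below are synthetic stand-ins (the actual reference curve comes from the Cast3M finite-element computation, and the force record from the load cell):

```python
import numpy as np

# Sketch of the compliance method: kappa(t) = F(t) / (V_wedge * t), inverted
# against a reference kappa-vs-crack-length curve. The reference curve and
# force record below are synthetic placeholders, not the paper's data.
V_wedge = 16e-9                                  # wedge speed, m/s (from the text)
t = np.linspace(100.0, 5000.0, 500)              # s; start away from t = 0

fbar_ref = np.linspace(0.01, 0.12, 200)          # crack lengths, m
kappa_ref = 5e6 / (1.0 + 50.0 * fbar_ref)        # stiffness, N/m (decreasing)

# Synthetic force record consistent with a slowly growing crack.
fbar_true = 0.01 + 1e-5 * (t - t[0])
F = np.interp(fbar_true, fbar_ref, kappa_ref) * V_wedge * t

kappa = F / (V_wedge * t)                        # measured stiffness
# np.interp needs increasing abscissae, hence the flipped reference arrays.
fbar = np.interp(kappa, kappa_ref[::-1], fbar_ref[::-1])

E = 0.5 * F**2 / kappa                           # stored elastic energy
vbar = np.gradient(fbar, t)                      # crack speed
P = -np.gradient(E, t)                           # released power (paper's sign convention)
```

Because the stiffness decreases monotonically with crack length, the interpolation-based inversion recovers the crack position uniquely.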
Note finally that, in addition to $\overline{f}(t)$ and $\mathcal{P}(t)$, the acoustic emission was collected at eight different locations via eight broadband piezoacoustic transducers (see \cite{bares2018_natcom} for details). The signals were preamplified, band-filtered, and recorded via a PCI-2 acquisition system (Europhysical Acoustics) at $40$~MSamples/s. An acoustic event (AE), $i$, is defined to start at the time $t^{start}_i$ when the preamplified signal $\mathcal{V}(t)$ goes above a prescribed threshold ($40$~dB), and to stop when $\mathcal{V}(t)$ decreases below this threshold at time $t^{end}_i$. The minimal time interval between two successive events is $402$~$\mu$s. This interval breaks down into two parts: the hit definition time (HDT) of $400\,\mu$s and the hit lockout time (HLT) of $2\,\mu$s. The former sets the minimal interval during which the signal should not exceed the threshold after the event initiation to end it, and the latter is the interval during which the system remains deaf after the HDT, to avoid multiple detections of the same event due to reflections. The wave speed in our model rocks was measured to be $c_W=2048~\mathrm{m/s}$ and the emerging waveform frequency, $\nu$, ranges from $40$ to $130~\mathrm{kHz}$ depending on the considered event. This yields typical wavelengths $\lambda=c_W/\nu = 1.5-5~\mathrm{mm}$. Such wavelengths are of the order of the specimen thickness and, as such, are conjectured to coincide with the resonant modes of the plate. We hence propose the following scenario: As a depinning event occurs and the front line jumps over an increment, an acoustic event is produced. The frequencies of the so-emitted pulse span {\em a priori} from $\sim 40~\mathrm{kHz}$ (resonant modes) to a few MHz (selected by the characteristic jump size, of the order of $d$).
Due to the absorption properties of the material (a polymer, i.e. a viscoelastic material), the high-frequency portion of the signal attenuates rapidly and only the lowest-frequency part survives when the pulse reaches the transducers. Each so-detected AE is characterized by three quantities: occurrence time, energy and spatial location. The occurrence time is identified with the starting time $t^{start}_i$. The energy is defined as the squared maximum value of $\mathcal{V}(t)$ between $t^{start}_i$ and $t^{end}_i$. Indeed, in the scenario depicted above, the pulse duration is not correlated to the underlying depinning event and the initial value is more relevant than the integral over the whole duration; we checked, however, that the results do not change if the event energy is defined as this integral \cite{bares2018_natcom}. The spatial location is obtained from the arrival times at each of the eight transducers. The spatial accuracy, here, is set by the typical pulse width, $\lambda \simeq 5~\mathrm{mm}$. The movie provided as electronic supplementary material shows the synchronized evolution of both the continuum-level scale quantities and the acoustic events as the crack is driven in our artificial rock. As in the numerical simulations, the global events will be identified with the bursts in the signal $\overline{v}(t)$ (see next section). A priori, acoustic events are more connected to the local avalanches but, as will be seen later in this manuscript, there is no direct mapping between the two. \section{On the different types of avalanches and their production rate}\label{Sec:definition} The dynamics emerging in the above experiments and simulations are analyzed both at the global and local scales. Figures \ref{fig_display}(a) and \ref{fig_display}(f) display the time evolution of $\overline{v}$ for the simulation and the experiment, respectively. Erratic dynamics are observed, with sharp bursts corresponding to sudden jumps of the crack front.
These jumps are thereafter referred to as global avalanches or events. To dig them out, we adopt the standard procedure used for crackling signals \cite{sethna01_nature}: a threshold $v_{th}$ is prescribed and the avalanches are identified with the parts of the signal where $\overline{v}(t) \geq v_{th}$ (Fig. \ref{fig_display}(a) and (f)). Avalanche $i$ starts at time $t^{start}_i$ when the signal $\overline{v}(t)$ first rises above $v_{th}$, and ends at time $t_i^{end}$ when $\overline{v}(t)$ returns below this value. The position $x_i$ of this avalanche is defined as $x_i=\overline{f}(t^{start}_i)$. The avalanche size $S_i$, in the numerical case, is defined as the area swept by the crack front during the burst: $S_i=N \int_{t_i^{start}}^{t_i^{end}} (\overline{v}(t) - v_{th}) dt$. In the experimental case, $S_i$ is defined as the energy released during avalanche $i$: $S_i=\mathcal{E}(t_i^{start})-\mathcal{E}(t_i^{end})$. Let us recall here that this released energy is proportional to the area swept by the crack front during the event, the proportionality constant being $\Gamma$ \cite{bares14_prl}. Examples of avalanches detected with this method are displayed in Fig. \ref{fig_display}(a) and (f). In the numerical simulations, the jumps of the crack line can also be analyzed at the local scale, from the space-time evolution of $v(z,t)$. Two distinct methods are used to identify the avalanches. In both cases, special attention has been paid to properly take into account the periodic boundary conditions in the clustering methods. The first method, pioneered by \cite{tanguy1998_pre}, is a generalization of the procedure used to dig out the global avalanches. We consider the spatio-temporal map $v(z,t)$ and apply the same threshold $v_{th}$ as the one considered for global avalanches. The avalanches are then defined as the connected clusters, in the $(z,t)$ space, where $v(z,t)>v_{th}$.
Avalanche $i$ starts at time $t^{start}_i$, defined as the first time at which $v(z,t)>v_{th}$ in the considered cluster. It ends at $t^{end}_i$, the last time at which $v(z,t)>v_{th}$ in the same cluster. The avalanche size $S_i$ is given by the local area swept by $f(z,t)$ between $t^{start}_i$ and $t^{end}_i$. The 2D avalanche position, $(x_i,z_i)$, is defined such that $x_i=f(z_i,t^{start}_i)$, where $z_i$ is the first location (in $z$) where $f$ enters the considered cluster at $t^{start}_i$ (see \cite{bares13_phd} for details). An example of the location of these local avalanches is shown in Fig. \ref{fig_display}(b) and (c). The second method used here to identify the local avalanches was initially proposed by \cite{maloy06_prl}. It consists in building a space-space activity map, $W(x,z)$, from the time spent by the crack line at each location $(x,z)$. The inverse of this map provides a space-space cartography of local speeds, $V(x,z)=1/W(x,z)$. A threshold value, $V_{th}$, is then defined and the avalanches are identified with the clusters of connected points where $V(x,z) \geq V_{th}$. Such an activity map is shown in Fig. \ref{fig_display}(d). The avalanche size $S_i$ is given by the cluster area, its position $(x_i,z_i)$ is defined by that of its center of mass, and its duration $D_i$ is the sum of the waiting times $W(x,z)$ over the considered cluster (\textit{cf}. \cite{bares13_phd} for details). Note that an accurate occurrence time cannot be attributed to the avalanches identified with this method. The procedure described above to dig out avalanches at the local scale from the space-time dynamics of $v(z,t)$, unfortunately, cannot be applied to our experiments. Conversely, these local avalanches may be at the origin of the acoustic events recorded during our experiments. As such, the latter have been analyzed accordingly (Fig. \ref{fig_display}(h)).
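As a minimal illustration of the thresholding procedure used for the global avalanches (pure NumPy, on a synthetic signal rather than the paper's data), events, sizes, durations and waiting times can be extracted as follows:

```python
import numpy as np

# Threshold-based extraction of global avalanches from a mean-speed signal:
# each excursion of vbar(t) above v_th is one event, with size
# S = N * integral (vbar - v_th) dt (the simulation convention in the text).
def extract_avalanches(t, vbar, v_th, N=1):
    above = vbar >= v_th
    edges = np.diff(above.astype(int))          # +1: rising edge, -1: falling
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(vbar)]
    dt = t[1] - t[0]                            # uniform sampling assumed
    events = []
    for i0, i1 in zip(starts, ends):
        events.append({"t_start": t[i0],
                       "t_end": t[i1 - 1],
                       "D": t[i1 - 1] - t[i0],
                       "S": N * np.sum(vbar[i0:i1] - v_th) * dt})
    return events

# Synthetic bursty signal: quiet background plus two bursts.
t = np.linspace(0.0, 10.0, 1001)
vbar = 0.1 * np.ones_like(t)
vbar[(t > 2) & (t < 3)] += 1.0
vbar[(t > 6) & (t < 6.5)] += 2.0
events = extract_avalanches(t, vbar, v_th=0.5)
waiting = np.diff([e["t_start"] for e in events])   # inter-event times Delta t_i
```

The same thresholding idea generalizes to the 2D maps by replacing the 1D excursions with connected clusters in the $(z,t)$ or $(x,z)$ plane.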
The different methods presented above allow obtaining catalogs, for both local and global avalanches in the simulations and in the experiments, which gather different quantities: first, the avalanche size $S_i$ and the position $x_i$ along the crack propagation direction, for all types of events. For local avalanches, the position $z_i$ along the crack front is also measured. For all methods but the one based on the activity map, the starting and ending times, $t^{start}_i$ and $t^{end}_i$, are also determined; the occurrence time $t_i$ is then identified with $t^{start}_i$. The duration $D_i$ of each avalanche is deduced: $D_i=t^{end}_i-t^{start}_i$. The waiting time $\Delta t_i$ between two consecutive avalanches is computed as $\Delta t_i=t^{start}_{i+1}-t^{start}_i$. When the spatial location of the avalanche is available, as in the case of the local avalanches measured from the $v(z,t)$ map, we also define the jump $\Delta r_i$ between two consecutive avalanches as $\Delta r_i=\sqrt{(x_{i+1}-x_{i})^2+(z_{i+1}-z_{i})^2}$. Table \ref{tab_method} synthesizes the five types of avalanches considered here (two for the experiment, three for the simulation) and the quantities collected in their respective catalogs.
\begin{table}[!h] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $S$ & $t$ & $D$ & $x$ & $z$ & $\Delta t$ & $\Delta r$ \\ \Xhline{3\arrayrulewidth} numerical $\overline{v}(t)$ signal & $\times$ & $\times$ & $\times$ & $\times$ & & $\times$ & \\ \hline experimental $\overline{v}(t)$ signal & $\times$ & $\times$ & $\times$ & $\times$ & & $\times$ & \\ \Xhline{3\arrayrulewidth} numerical spatio-temporal $v(z,t)$ map & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ \hline numerical activity map $W(x,z)$ & $\times$ & & $\times$ & $\times$ & $\times$ & & \\ \hline experimental acoustic signal & $\times$ & $\times$ & & $\sim$ & $\sim$ & $\times$ & $\sim$ \\ \hline \end{tabular} \caption{Synthesis of the different types of avalanches defined here and associated catalogs. The first two rows correspond to the global avalanches while the last three correspond to the local avalanches. $S$, $t$, $D$, $x$, $z$, $\Delta t$, $\Delta r$ denote size, occurrence time, duration, position along the growth direction, position along the crack front, inter-event time and inter-event distance, respectively. $\times$ denotes accurate measurements while $\sim$ denotes coarse ones.} \label{tab_method} \end{center} \end{table} \begin{figure}[!h] \centering\includegraphics[width=1.05\columnwidth]{fig_display_simul_expe.png} \caption{(a): Evolution of the mean crack speed $\bar{v}$ in the numerical simulations. The blue horizontal line shows the avalanche detection threshold $v_{th}$ and the colored discs display the size of the detected avalanches according to the colorbar provided on the right. (b),(c): Position of the avalanches detected at the local scale on the $v(z,t)$ signal, in the spatio-temporal and spatial maps respectively. The disc color indicates the avalanche size. (d),(e): Avalanches detected on the activity map. Different colors stand for different avalanches in (d) while in (e) the color code indicates the avalanche size.
(f): Evolution of the mean crack speed $\bar{v}$ in the experiment. The blue horizontal line shows the avalanche detection threshold $v_{th}$ and the colored discs display the size of the detected avalanches. The magenta curve shows the evolution of the elastic energy $E$ stored in the system. (g),(h): Position of the avalanches detected via the acoustic transducers, in the spatio-temporal and spatial maps respectively. The disc color indicates the avalanche size. The figures (a)-(e) on the left were obtained from the numerical simulations while the figures (f)-(h) on the right were obtained from experiments. In all figures, the disc radius is proportional to the logarithm of the avalanche size.} \label{fig_display} \end{figure} Figure \ref{fig_space_density} displays the cumulative number of avalanches as a function of the length traveled by the crack, for all types of events. In all cases, the number of events increases linearly with the crack length. For acoustic avalanches (Fig. \ref{fig_space_density}(a)), this has been interpreted by stating that the production rate of acoustic events is simply given by the number of heterogeneities met by the crack front as it propagates over a unit length \cite{bares2018_natcom}; this suggests a density of events $s_{ea}\sim \mathcal{L}/d^2$, which is of the order of the measured value\footnote{Here and thereafter, the subscript $ea$ stands for 'experiment acoustic'.} ($s_{ea}=18.76$~avl/$d$). Still, the different ways to define avalanches in the same sample yield rates that differ by up to an order of magnitude: they range from $4.51$~avl/$d$ for the global speed signal to $18.76$~avl/$d$ for the acoustic signal in the experiment (Fig. \ref{fig_space_density}(a)), and from $0.71$~avl/s.u. (space unit) for the global speed signal to $8.67$~avl/s.u. for the avalanches detected on the spatio-temporal map in the simulation (Fig. \ref{fig_space_density}(b)).
This suggests that avalanches detected on different local or global signals cannot easily be mapped onto each other. However, the very close avalanche rates obtained with the two local detection methods, on $v(z,t)$ and on $W(x,z)$ ($s_{na}=8.01$~avl/s.u.), suggest that the corresponding avalanches are similar\footnote{Here and thereafter, the subscript $na$ stands for 'numerics activity'.}. \begin{figure}[!h] \centering\includegraphics[width=0.8\columnwidth]{fig_space_density.png} \caption{Cumulative number of events as a function of crack length for experiments (left) and simulations (right). The different curves stand for the different types of avalanches: acoustic events (green, panel a), events detected on the experimental $\overline{v}(t)$ signal (black, panel a), events detected on the numerical $\overline{v}(t)$ signal (blue, panel b), events detected on the numerical spatio-temporal map $v(z,t)$ (red, panel b), events detected on the activity map $W(x,z)$ (purple, panel b). All curves have been fitted linearly (dashed lines); the obtained densities of events are: $s_{ea}=18.76\pm0.02$~avl./$d$, $s_{eg}=4.51\pm0.02$~avl./$d$, $s_{na}=8.01$~avl/s.u. (space unit), $s_{nl}=8.67$~avl/s.u. and $s_{ng}=0.71$~avl/s.u..} \label{fig_space_density} \end{figure} \section{Statistical features of individual events} We first look at the statistics of individual events. In this context, a generic feature common to crackling systems is the observation of scale-free statistics and scaling laws, characterized by well-defined exponents \cite{sethna01_nature}. We first compare the statistics of the avalanche size $S$ as obtained for the different definitions of avalanches. As presented in Fig. \ref{fig_richter}, in all cases and both in experiments and simulations, the statistics are scale-free; the probability density function (PDF) $P(S)$ follows a power law spanning several decades.
More particularly, $P(S)$ is well fitted by: \begin{equation} P(S) \sim \frac{e^{-S/S_{max}}}{(1+S/S_{min})^{\beta}} , \label{eq_PS} \end{equation} \noindent where $S_{min}$ and $S_{max}$ are the lower and upper cut-offs, respectively, and $\beta$ is the exponent of this gamma law. Equation \ref{eq_PS} is reminiscent of the Gutenberg-Richter law for earthquake energy\footnote{Note however that, contrary to what is presented in Fig. \ref{fig_richter}, the energy distribution observed in seismology often takes the form of a pure power law. As such, the earthquake energy $E$ -- the analog of the size here -- is more commonly quantified by its magnitude, which is linearly related to the logarithm of the energy \cite{kanamori1977_jgr}: $\log_{10}(E) = 1.5M + 11.8$. The energy distribution is then presented via the classical Gutenberg-Richter frequency-magnitude relation: $\log_{10}(N(M)) = a - bM$, where $N(M)$ is the number of earthquakes per year with magnitude larger than $M$ and $a$ and $b$ are constants. With these definitions, the $b$-value relates to the exponent $\beta$ involved in Eq. \ref{eq_PS} via $\beta = b/1.5 + 1$.} \cite{gutenberg44_bssa,gutenberg56_bssa}. Figure \ref{fig_richter}(a) does not reveal any smooth lower cut-off $S_{min}$ for the acoustic events (at least above the value $10^{-4}$ corresponding to the sensitivity of the acquisition system). The acoustic exponent is $\beta_{ea}=0.96\pm0.03$ \cite{bares2018_natcom}. This exponent is significantly lower than the one associated with the size distribution of global avalanches, displayed in Fig. \ref{fig_richter}(b): $\beta_{eg}=1.35\pm 0.1$\footnote{Here and thereafter, the subscript $eg$ stands for 'experiment global'.}. This value was found to decrease as $\overline{v}$ increases \cite{bares14_prl}, but always remains significantly larger than $\beta_{ea}$.
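Away from the cut-offs, the exponent $\beta$ of such a size distribution can be estimated with the standard continuous maximum-likelihood estimator for a pure power law $P(S)\sim S^{-\beta}$, $S \geq S_{min}$. The sketch below is an illustration only: it neglects the upper cut-off $S_{max}$ of Eq. \ref{eq_PS}, and also encodes the $b$-value conversion quoted in the footnote:

```python
import math

def mle_powerlaw_exponent(sizes, s_min):
    """Continuous maximum-likelihood estimate of beta for P(S) ~ S^-beta,
    S >= s_min; the upper cut-off S_max is deliberately neglected here."""
    tail = [s for s in sizes if s >= s_min]
    return 1.0 + len(tail) / sum(math.log(s / s_min) for s in tail)

def beta_from_b(b):
    """Gutenberg-Richter b-value -> size exponent, beta = b/1.5 + 1."""
    return b / 1.5 + 1.0
```

For data following Eq. \ref{eq_PS}, this estimator is only reliable on the window $S_{min} \ll S \ll S_{max}$; the fits quoted in Fig. \ref{fig_richter} instead adjust the full gamma-law form.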
As emphasized in \cite{bares2018_natcom}, there is no one-to-one correspondence between acoustic and global events; in particular, the number of the former is much larger than that of the latter (see the end of section \ref{Sec:definition} and Fig. \ref{fig_space_density}). Concerning global avalanches, the size distributions are similar in the experiments and simulations: within the error bars, the exponents are the same, $\beta_{eg}=1.35\pm0.1$ and $\beta_{ng}=1.30\pm0.03$ (Fig. \ref{fig_richter}(c))\footnote{Here and thereafter, the subscript $ng$ stands for 'numerics global'.}. These exponents are also in agreement with the one predicted for the long-range depinning transition, $\beta_g= 1.28$ \cite{bonamy2009_jpd,ledoussal09_pre}. At the local scale, the observed exponents are significantly higher. Avalanches dug out from the spatio-temporal map reveal an exponent\footnote{Here and thereafter, the subscript $nl$ stands for 'numerics local'.} $\beta_{nl}=1.62\pm0.03$ while those identified in the activity map are characterized by $\beta_{na}=1.66\pm 0.05$ (Fig. \ref{fig_richter}(c)). The similarity between the two, again, suggests that these two procedures to identify avalanches at the local scale are equivalent. Note that these two exponents are compatible with the values observed in earlier simulations \cite{bonamy2008_prl,laurson10_pre} and in experiments within a 2D interfacial configuration \cite{grob09_pag,maloy06_prl}: $\beta_{na}=1.7$. Moreover, it is worth noting that this last exponent is clearly different from the one obtained from the acoustic emission in experiments: acoustic emissions are not directly related to the local depinning jumps of the fracture front. The inset of Fig. \ref{fig_richter}(c) shows that the threshold $v_{th}$, heuristically chosen to measure avalanches, does not change the value of $\beta$. However, it significantly affects the upper cut-off $S_{max}$. This is shown here on the global $\overline{v}(t)$ signal of the numerical simulation.
This has been found to be true for the other measurement methods, on the different observables. It even holds for the other statistical laws presented in this paper: the signal thresholding used to define the avalanches only modifies the power-law cut-offs. Similarly, it has been shown numerically on the global avalanches that $S_{min}$ increases with $c/k$ and $S_{max}$ decreases with $c/k$, leading to the disappearance of the power law at high $c$ and low $k$ \cite{bares18_prb}. \begin{figure}[!h] \centering\includegraphics[width=\columnwidth]{fig_richter.png} \caption{Distribution of individual event size $P(S)$ for experiments (panels a and b) and simulations (panel c). The different curves stand for the different types of avalanches: acoustic events (green $\pentagon$, panel a), global events detected on the experimental $\overline{v}(t)$ signal (black $\Square$, panel b), global events detected on the numerical $\overline{v}(t)$ signal (blue $\Circle$, panel c), local events detected on the numerical spatio-temporal map $v(z,t)$ (red $\rhd$, panel c), and local events detected on the activity map $W(x,z)$ (purple $\Diamond$, panel c). All curves have been fitted using Eq. \ref{eq_PS} (dashed lines). The obtained fitting parameters are: $S_{min}^{ea}\ll10^{-3}$, $S_{max}^{ea}=4.93\times10^{2}\pm0.11\times10^{2}$ and $\beta_{ea}=0.96\pm0.03$; $S_{min}^{eg}=0.20\pm 0.09$, $S_{max}^{eg}=1.90\times10^{2}\pm0.72\times10^{2}$ and $\beta_{eg}=1.35\pm0.1$; $S_{min}^{ng}=9.5\pm 4.4$, $S_{max}^{ng}=3.9\times10^{4}\pm0.7\times10^{4}$ and $\beta_{ng}=1.30\pm0.03$; $S_{min}^{nl}=2.04\pm0.5$, $S_{max}^{nl}=1.8\times10^{4}\pm0.2\times10^{4}$ and $\beta_{nl}=1.62\pm0.03$; $S_{min}^{na}=1.17\pm0.80$, $S_{max}^{na}=1.10\times10^{4}\pm0.18\times10^{4}$ and $\beta_{na}=1.66\pm0.05$. The inset in panel c shows $P(S)$ obtained from the numerical $\overline{v}(t)$ signal, for different $\overline{v}_{th}$. This parameter only has an effect on the upper cut-off.
In panel b, the points are obtained by superimposing data from different avalanche detection thresholds $\overline{v}_{th}$. The size is then scaled by the bead size $d$.} \label{fig_richter} \end{figure} The avalanche duration $D$ also obeys a power-law distribution, both in the experiment and in the simulation (Fig. \ref{fig_duree}). In the numerical case, the data are well fitted by the following PDF: \begin{equation} P(D) \sim \frac{e^{-D/D_{max}}}{(1+D/D_{min})^{\delta}}, \label{eq_PD} \end{equation} \noindent where $D_{min}$ and $D_{max}$ are the lower and upper cut-offs, respectively, and $\delta$ is the exponent of this gamma law. On the experimental side, $P(D)$ is a pure power law without any cut-off when global avalanches are considered (Fig. \ref{fig_duree}(a)). The associated exponent is $\delta_{eg}=1.85\pm0.06$. This value is significantly higher than the one measured in its numerical counterpart: $\delta_{ng}=1.40\pm0.05$. It has been shown in \cite{bares18_prb} that this exponent varies with $c$ (loading speed) and $k$ (unloading factor). Most likely, the $c$ and $k$ values prescribed in the numerical simulation do not correspond to those of the experiment, so we do not expect $\delta_{eg}$ and $\delta_{ng}$ to be equal. Still, $\delta_{ng}$ is close to the value expected for the long-range depinning transition in the quasistatic limit, $\delta_g=1.5$ \cite{bonamy2009_jpd}. Regarding the local avalanches in the simulation (dug out from the $v(z,t)$ spatio-temporal map), the measured exponent is $\delta_{nl}=2.29\pm0.25$. A significantly lower value is obtained when the local avalanches are detected from the $W(x,z)$ activity map: $\delta_{na}=1.80\pm0.03$ (Fig. \ref{fig_duree}(b)). We also note that the avalanche duration measured acoustically in the experiment is meaningless since, due to wave reverberation, it depends on the sample geometry.
\begin{figure}[!h] \centering\includegraphics[width=0.8\columnwidth]{fig_duree.png} \caption{Distribution of individual event duration $P(D)$ for experiments (panel a) and simulations (panel b). The different curves stand for different types of avalanches: events detected on the experimental $\overline{v}(t)$ signal (black $\Square$, panel a), events detected on the numerical $\overline{v}(t)$ signal (blue $\Circle$, panel b), events detected on the numerical spatio-temporal map $v(z,t)$ (red $\rhd$, panel b), events detected on the activity map $W(x,z)$ (purple $\Diamond$, panel b). All curves have been fitted using Eq. \ref{eq_PD} (dashed lines). The obtained fitting parameters are: $D_{min}^{eg}\ll0.2$, $D_{max}^{eg}\gg30$ and $\delta_{eg}=1.85\pm0.06$; $D_{min}^{ng}=0.56\pm0.16$, $D_{max}^{ng}=6.6\times10^{2}\pm0.9\times10^{2}$ and $\delta_{ng}=1.40\pm0.05$; $D_{min}^{nl}=9.7\pm4.6$, $D_{max}^{nl}=3.4\times10^{2}\pm0.8\times10^{2}$ and $\delta_{nl}=2.29\pm0.25$; $D_{min}^{na}=183\pm33$, $D_{max}^{na}=1.4\times10^{6}\pm0.3\times10^{3}$ and $\delta_{na}=1.80\pm0.03$. In panel a, the points are obtained by superimposing data from different avalanche detection thresholds $\overline{v}_{th}$.} \label{fig_duree} \end{figure} Figure \ref{fig_SvsD} presents the scaling of avalanche size, $S$, with duration, $D$. Regardless of the type of avalanche considered, one gets: \begin{equation} D \sim S^{\gamma} \label{eq_SD} \end{equation} \noindent Experimentally, and with regard to global avalanches, the exponent is $\gamma_{eg}=0.91\pm0.01$ (Fig. \ref{fig_SvsD}(a)). Avalanches were obtained using different detection thresholds $v_{th}$ and, as such, $S$ and $D$ have been rescaled by their respective mean values so that all curves collapse onto a single master curve. This experimental exponent is found to be very close to the one observed in the simulation (Fig. \ref{fig_SvsD}(b)): $\gamma_{ng}=0.880\pm0.006$.
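The exponent $\gamma$ of Eq. \ref{eq_SD} amounts to the slope of $\log D$ versus $\log S$. A minimal least-squares sketch, assuming clean, cut-off-free power-law data (the published fits work on the binned data of Fig. \ref{fig_SvsD}):

```python
import math

def gamma_from_scaling(sizes, durations):
    """Least-squares slope of log D vs log S, i.e. the exponent gamma
    in the scaling D ~ S^gamma."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(d) for d in durations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
```

In practice the points at the extremes of the $S$ range, where the cut-offs of Eqs. \ref{eq_PS} and \ref{eq_PD} bend the data, should be excluded from the regression window.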
These two exponents are however significantly higher than that at the critical point of the long-range depinning transition in the quasi-static limit (that is, $c \rightarrow 0$, $k \rightarrow 0$): $\gamma=0.55$ \cite{bonamy2009_jpd}. They are also higher than the values $0.55-0.7$ reported in 2D interfacial crack experiments \cite{Laurson13_natcom,janicevic2016_prl}. For local avalanches detected from the $W(x,z)$ activity maps and from the $v(z,t)$ spatio-temporal maps, the exponents are different: $\gamma_{na}=0.996\pm0.003$ in the case of activity maps and $\gamma_{nl}=0.470\pm0.003$ in the case of spatio-temporal maps, that is, about half the exponent measured for global avalanches. \begin{figure}[!h] \centering\includegraphics[width=0.8\columnwidth]{fig_SvsD.png} \caption{Scaling between avalanche size $S$ and duration $D$ for the experiment (panel a) and the simulation (panel b). The different curves stand for the different types of avalanches: global events detected on the experimental $\overline{v}(t)$ signal (black $\Square$, panel a), global events detected on the numerical $\overline{v}(t)$ signal (blue $\Circle$, panel b), local events detected on the numerical $v(z,t)$ spatio-temporal map (red $\rhd$, panel b), local events detected on the $W(x,z)$ activity map (purple $\Diamond$, panel b). All points have been fitted by power laws (straight lines). The obtained exponents are: $\gamma_{eg}=0.91\pm0.01$, $\gamma_{ng}=0.880\pm0.006$, $\gamma_{nl}=0.470\pm0.003$ and $\gamma_{na}=0.996\pm0.003$.} \label{fig_SvsD} \end{figure} Finally, we have characterized the temporal shape of the global avalanches and its evolution with $D$ (Fig. \ref{fig_forme}). This observable, indeed, provides an accurate characterization of the considered crackling signal and, as such, has been measured experimentally and numerically in a variety of systems \cite{zapperi05_nat,mehta06_pre,laurson06_pre,Papanikolao11_nat,Danku13_prl,Laurson13_natcom,bares14_prl}.
The standard procedure was adopted here: first, we identified all avalanches with durations $D_i$ falling within a prescribed interval $\left[ D-\varepsilon , D+\varepsilon \right]$; second, we averaged the normalized shape $\overline{v}(t|D)/\max_{t \in \left[ t_{i}^{start} , t_{i}^{end} \right]} \overline{v}(t|D)$ over all the collected avalanches. Figures \ref{fig_forme}(a) and \ref{fig_forme}(b) show the resulting shape, for the experiment and the simulation respectively. We observe in both cases that the shape is nearly parabolic at small $D$, with a very small asymmetry. The shapes were fitted using the scaling form proposed in \cite{Laurson13_natcom}: \begin{equation} \left\langle\frac{\overline{v}(t|D)}{\max(\overline{v}(t|D))} \right\rangle=\left[4 \frac{t}{D}\left(1-\frac{t}{D}\right)\right]^{\sigma-1}\left[1-a\left(\frac{t}{D}-\frac{1}{2}\right)\right], \label{eq_Shp} \end{equation} \noindent where $\sigma_{eg}$ (resp. $\sigma_{ng}$) is the shape exponent and $a_{eg}$ (resp. $a_{ng}$) quantifies the shape asymmetry in the experiment (resp. in the simulation). At small $D$, $\sigma_{eg} \approx \sigma_{ng} \approx 2$, which is consistent with a parabolic shape. We note that the prediction \cite{Laurson13_natcom} $\sigma=1/\gamma$ is not fulfilled in our case, neither in the experiment nor in the simulation. This may be due to the combined effects of a finite driving rate and a finite threshold value, yielding both overlaps between the depinning avalanches \cite{bares13_prl} and the splitting of depinning avalanches into separate sub-avalanches \cite{janicevic2016_prl}; neither of these effects is taken into account in the analysis proposed in \cite{Laurson13_natcom}. We also note that $\sigma$ evolves with $D$: it increases with increasing $D$ in the experiment and decreases with increasing $D$ in the simulation (Fig. \ref{fig_forme}(c)). We finally note that the visual flattening observed in Figs.
\ref{fig_forme}(a) and \ref{fig_forme}(b) is captured less and less by the scaling form \ref{eq_Shp} as $D$ gets large. Similar features were observed in Barkhausen pulses \cite{Papanikolao11_nat} and were shown to result from the finite value of the demagnetization factor. The same is to be expected here, since the unloading factor $k$ in Eq. \ref{eqLine} plays the same role as the demagnetization factor in the Barkhausen problem \cite{bares14_prl}. Finally, a small but clear leftward asymmetry is detected (positive $a$ in Fig. \ref{fig_forme}(c)): the bursts start faster than they stop. We note that this is the opposite of what is observed for plasticity avalanches in amorphous materials \cite{liu16_prl} and consistent with what is observed in \cite{Laurson13_natcom}. The asymmetry is much more pronounced in the experiments than in the simulations. We conjectured \cite{bares14_prl} that it results from the viscoelastic nature of the polymer rock fractured here, which provides a negative inertia to the crack front, i.e. it adds a retardation term to the dynamics equation \ref{eqLine}; such a term was demonstrated \cite{zapperi05_nat}, in the Barkhausen context, to yield a significant leftward asymmetry in the pulse shape. \begin{figure}[!h] \centering\includegraphics[width=0.85\columnwidth]{fig_forme.png} \caption{Avalanche shapes extracted from the averaged crack speed $\overline{v}(t)$ for experiments (panel a) and simulations (panel b). The durations $D$ of the avalanches collected to measure the shape vary from $0.2$~s to $0.9$~s in the experimental case and from $1$ to $7$ in the numerical case. In both panels (a) and (b), the markers are the measured shapes and the lines are fits using Eq. \ref{eq_Shp}. The fitted exponent $\sigma$ and asymmetry parameter $a$ are plotted as a function of $D$ in panels (c) and (d). In panels (c) and (d), blue symbols $\triangleright$ correspond to the simulation while black symbols $o$ correspond to the experiment.
Errorbars show a $95\%$ confidence interval.} \label{fig_forme} \end{figure} \section{Time-size organization of the event sequences} We now turn to the statistical organization of the successive events, beyond their individual scale-free statistics. Regarding global avalanches, the recurrence time, $\Delta t$, is power-law distributed in both the experiments (Fig. \ref{fig_attente}(b)) and the simulations (Fig. \ref{fig_attente}(c)). In both cases, the associated exponents, $p_{eg}$ (experiments) and $p_{ng}$ (numerics), are not universal; they significantly evolve with the mean crack speed \cite{bares13_phd,bares18_prb}. Since there is no one-to-one relation between the experimental and numerical control parameters, we cannot comment further on the difference between $p_{eg}$ and $p_{ng}$. Experimentally, the waiting time separating two successive acoustic events is also power-law distributed (Fig. \ref{fig_attente}(a)). The associated exponent, $p_{ea}$, is significantly smaller than $p_{eg}$: $p_{ea} \simeq 1.16$ for $\overline{v}=2.7~\mu\mathrm{m/s}$, to be compared to $p_{eg} \simeq 1.76$ in the same experiment. Note also that $p_{ea}$, like $p_{eg}$, significantly depends on $\overline{v}$ \cite{bares2018_natcom}. Experiments performed in artificial rocks made from beads of smaller sizes ($d=24~\mu\mathrm{m}$ or $d=233~\mu\mathrm{m}$) have also revealed that $p_{ea}$ depends on the microstructural length-scale \cite{bares2018_natcom}. Back to the numerical simulations, the analysis of the local avalanches identified from the spatio-temporal maps does not reveal any special time correlation; the waiting time is not scale-free (Fig. \ref{fig_attente}(c)). This suggests that the time correlation evidenced in the global avalanches emerges from the time overlapping of the local avalanches.
Note that the time clustering evidenced here in the acoustic emission (as well as its absence for the local avalanches in the simulation) is visually reflected in the spatio-temporal map shown in Fig. \ref{fig_display}(g) (resp. in that shown in Fig. \ref{fig_display}(b)), with acoustic events gathered in time bands (resp. numerical avalanches distributed randomly). \begin{figure}[!h] \centering\includegraphics[width=\columnwidth]{fig_attente.png} \caption{Distribution of the waiting time, $\Delta t$, between two consecutive events for experiments (panels a and b) and simulations (panel c). The different curves stand for different types of avalanches: acoustic events (green $\pentagon$, panel a), events detected on the experimental $\overline{v}(t)$ signal (black $\Square$, panel b), events detected on the numerical $\overline{v}(t)$ signal (blue $\Circle$, panel c), events detected on the numerical spatio-temporal map $v(z,t)$ (red $\rhd$, panel c). In panel a, the dashed line is a gamma-law fit with exponent $p_{ea}=1.29\pm0.02$ and upper cut-off $\Delta t_{max}=4.35\times10^{2}\pm0.49\times10^{2}$. In panels b and c, the curves corresponding to the avalanches detected on the $\overline{v}(t)$ signal have been fitted by a power law (straight dashed lines). The fitted exponents are: $p_{eg}=1.43\pm0.03$ and $p_{ng}=1.75\pm0.03$.} \label{fig_attente} \end{figure} In this context, it is of interest to look at the distribution of inter-event distances, $\Delta r$, for the local avalanches identified in the space-time maps (Fig. \ref{fig_space_clustering}). This distance is found to be power-law distributed: \begin{equation} P(\Delta r) \sim \Delta r^{-\lambda}, \label{eq_davidsen} \end{equation} \noindent with an associated exponent $\lambda \simeq 0.23$. Similar scale-free statistics are observed in seismicity catalogs \cite{davidsen2013_prl}, or in lab-scale experiments driving a tensile crack front along a heterogeneous interface \cite{grob09_pag}.
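Because $\Delta t$ and $\Delta r$ span several decades, their PDFs are best estimated on logarithmically spaced bins before fitting a power law. A minimal sketch of such a log-binned histogram (our hypothetical helper, not the authors' code):

```python
import math

def log_binned_pdf(values, n_bins):
    """Estimate a PDF on logarithmic bins: per-bin counts divided by the
    linear bin width and by the total number of samples."""
    lo, hi = min(values), max(values)
    edges = [lo * (hi / lo) ** (k / n_bins) for k in range(n_bins + 1)]
    edges[-1] = hi * (1 + 1e-12)            # include the largest sample
    counts = [0] * n_bins
    for v in values:
        k = min(int(n_bins * math.log(v / lo) / math.log(hi / lo)),
                n_bins - 1)
        counts[k] += 1
    centers = [math.sqrt(edges[k] * edges[k + 1]) for k in range(n_bins)]
    pdf = [c / (edges[k + 1] - edges[k]) / len(values)
           for k, c in enumerate(counts)]
    return centers, pdf
```

A power law $P(\Delta r)\sim \Delta r^{-\lambda}$ then appears as a straight line of slope $-\lambda$ when `centers` and `pdf` are plotted in log-log coordinates.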
In both these cases, however, the reported value of $\lambda$, around $0.6$, is significantly larger than the one measured here. \begin{figure}[!h] \centering\includegraphics[width=0.45\columnwidth]{fig_space_clustering.png} \caption{Distribution of the distances, $\Delta r$, between two consecutive local events detected on the spatio-temporal map $v(z,t)$. The red shaded area shows the $95$\% errorbar. The straight dashed line is a power-law fit with exponent $\lambda = 0.23\pm0.01$.} \label{fig_space_clustering} \end{figure} The time correlations evidenced above, for global and acoustic events, are reminiscent of what is observed in earthquakes \cite{bak02_prl}, or during the gradual damaging of heterogeneous solids under compressive loading conditions \cite{baro13_prl,makinen2015_prl}. In both these situations, the events are known to form characteristic aftershock (AS) sequences obeying specific scaling laws: the productivity law \cite{helmstetter2003_prl,utsu1971_jfs}, stating that the number of produced AS grows as a power law of the mainshock (MS) size; B\r{a}th's law, stating that the ratio between the MS size and that of its largest AS is independent of the MS magnitude; and the Omori-Utsu law, stipulating that the production rate of AS decays algebraically with the time elapsed since the MS. Hence, for each type of events, we have decomposed the series into aftershock sequences and analyzed them in the light of these laws. In the seismology context, many different clustering methods \cite{stiphout2012_com} have been set up to separate the AS sequences. Most of them are based on the proximity between events, in both time and space. Unfortunately, spatial proximity is not relevant here, because of the lack of information on the event position for global and acoustic events (Tab. \ref{tab_method}). Hence, we have chosen the method developed in \cite{baro13_prl,bares2018_natcom}, which makes use of the occurrence time $t_i$ only.
The procedure is the following: first, a size $S_{MS}$ is prescribed and all events of size falling within the interval $S_{MS} \pm \delta S_{MS}$ are labeled as MS; second, for each MS, all subsequent events are considered as AS, until an event of size larger than that of the MS is encountered. On the numerical side, the analysis has been performed on both the global avalanches (dug up from $\overline{v}(t)$) and the local ones (dug up from the space-time map $v(z,t)$). On the experimental side, the analysis has been performed on the acoustic events. Conversely, it could not be achieved on the global experimental events, due to a lack of statistics (a few hundred events only). Figure \ref{fig_productivite} shows the mean number of AS, $N_{AS}$, triggered by a MS of size $S_{MS}$, for acoustic events (panel a) and global/local avalanches in the simulation (panel b). In the three cases, the productivity law is fulfilled and there is a range of decades over which $N_{AS}$ scales as a power law with $S_{MS}$. Actually, such a behavior has been demonstrated \cite{bares2018_natcom} to emerge naturally from the scale-free statistics of size: calling $F(S)=\int_{S_{min}}^S P(S) \rm d S$ the cumulative distribution of size, the fraction of events in the series to be labeled AS is $F(S_{MS})$ and the fraction of MS -- hence of AS sequences -- is $1-F(S_{MS})$. Hence, the mean number of AS per sequence is the ratio between the two: \begin{equation} N_{AS}(S_{MS})=\frac{F(S_{MS})}{1-F(S_{MS})}, \label{eq_prod} \end{equation} \noindent which fits the data perfectly, without any adjustable parameter. Note that, for a pure scale-free statistics $P(S) \sim S^{-\beta}$, Eq. \ref{eq_prod} would have yielded $N_{AS} \sim S_{MS}^{\beta-1}$. In other words, it is the presence of finite lower and upper cut-offs, $S_{min}$ and $S_{max}$, which is responsible for the departure from this pure power-law scaling.
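The MS/AS decomposition described above, together with the parameter-free prediction of Eq. \ref{eq_prod}, can be sketched as follows. This is a simplified serial implementation (one active sequence at a time), assuming the series is given as an ordered list of event sizes:

```python
def aftershock_sequences(sizes, s_ms, tol):
    """Label as MS every event with size in [s_ms*(1-tol), s_ms*(1+tol)];
    subsequent events are its AS until an event larger than the MS occurs
    (which may itself open a new sequence)."""
    lo, hi = s_ms * (1 - tol), s_ms * (1 + tol)
    seqs, current = [], None
    for s in sizes:
        if current is not None:
            if s > current["ms"]:
                seqs.append(current)    # a larger event closes the sequence
                current = None
            else:
                current["as"].append(s)
        if current is None and lo <= s <= hi:
            current = {"ms": s, "as": []}
    if current is not None:
        seqs.append(current)
    return seqs

def predicted_n_as(F_at_sms):
    """Mean AS number per sequence from Eq. (eq_prod): N_AS = F/(1-F),
    F being the cumulative size distribution evaluated at S_MS."""
    return F_at_sms / (1.0 - F_at_sms)
```

Comparing the mean of `len(seq["as"])` over the sequences with `predicted_n_as` then reproduces the parameter-free test of the productivity law.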
\begin{figure}[!h] \centering\includegraphics[width=0.8\columnwidth]{fig_productivite.png} \caption{Mean AS number, $N_{AS}$, as a function of the triggering MS size, $S_{MS}$, for experiments (panel a) and simulations (panel b). The different curves stand for the different types of avalanches: acoustic events (green $\pentagon$, panel a), global events detected on the numerical $\overline{v}(t)$ signal (blue $\Circle$, panel b), local events detected on the numerical spatio-temporal map $v(z,t)$ (red $\rhd$, panel b). In each case, the dashed line is given by Eq. \ref{eq_prod}.} \label{fig_productivite} \end{figure} B\r{a}th's law relates the largest AS size in the sequence to that of the triggering MS; it states that the ratio between the two is independent of the MS size. This ratio, $S_{MS}/\max\{S_{AS}\}$, is plotted as a function of $S_{MS}$ in Fig. \ref{fig_bath} for the experiments (acoustic events) and the simulations (global and local avalanches). As for the productivity law, a simple prediction can be obtained by considering independent events whose size distribution is $P(S)$. One can then use extreme value theory to derive the statistical distribution of the largest event in a sequence of $N_{AS}$ AS \cite{bares2018_natcom}. The mean value of this maximum follows \cite{bares2018_natcom}: \begin{equation} \frac{\max(S_{AS})}{S_{MS}}=N_{AS}(S_{MS})\times \int_{S_{min}}^{S_{MS}} S F(S)^{N_{AS}(S_{MS})-1}P(S)\rm d S, \label{eq_bath} \end{equation} \noindent where $N_{AS}(S_{MS})$ is given by Eq. \ref{eq_prod}, $P(S)$ is given by Eq. \ref{eq_PS}, and $F(S)=\int_{S_{min}}^S P(S)\rm d S$. \begin{figure}[!h] \centering\includegraphics[width=0.8\columnwidth]{fig_bath.png} \caption{Mean size ratio, $S_{MS}/\max\{S_{AS}\}$, between a MS and its largest AS for experiments (panel a) and simulations (panel b).
The different curves stand for the different types of avalanches: acoustic events (green $\pentagon$, panel a), global events detected on the numerical $\overline{v}(t)$ signal (blue $\Circle$, panel b), local events detected on the numerical spatio-temporal map $v(z,t)$ (red $\rhd$, panel b). In each case, the dashed line is given by Eq. \ref{eq_bath}.} \label{fig_bath} \end{figure} Finally, the Omori-Utsu law was addressed. For each type of events, the number of AS per unit time, $r_{AS}(t|S_{MS})$, is computed by binning the AS events over $t-t_{MS}$ and subsequently averaging the so-obtained curves over all MS with size falling into the prescribed interval $(1\pm\epsilon)S_{MS}$. In all cases, the algebraic decay expected from the Omori-Utsu law is observed. The prefactor increases with $S_{MS}$, which is expected since $N_{AS}$ increases with $S_{MS}$ (Eq. \ref{eq_prod}). It has been reported in \cite{bares2018_natcom} that, for acoustic events, all curves can be collapsed by dividing time by $N_{AS}(S_{MS})$, so that the overall production rate reads: \begin{equation} r_{AS}(t|S_{MS})=f\left(\frac{t-t_{MS}}{N_{AS}(S_{MS})}\right)\quad \mathrm{with} \quad f(u)\sim \frac{e^{-u/\tau_{max}}}{(1+u/\tau_{min})^p} \label{eq_omori} \end{equation} \noindent This collapse is verified here, not only for acoustic events (Fig. \ref{fig_omori}(a)), but also for the global avalanches in the simulations (Fig. \ref{fig_omori}(b)). It has also been demonstrated on the acoustic emission \cite{bares2018_natcom} that the Omori-Utsu exponent, $p$, is the same as that of $P(\Delta t)$. This is found to be true for the global avalanches as well. Let us finally mention that Eq. \ref{eq_omori} is not fulfilled for the local avalanches detected on the space-time numerical maps (Fig. \ref{fig_omori}(b)); this is consistent with the fact that the inter-event times were not scale-free for this type of avalanches either.
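The binning step entering the Omori-Utsu analysis can be sketched as follows: the AS occurrence times of one sequence are histogrammed over $t-t_{MS}$ and divided by the bin width; the collapse of Eq. \ref{eq_omori} then amounts to rescaling the resulting time axis by $N_{AS}(S_{MS})$. A minimal illustration (not the authors' code):

```python
def omori_rate(as_times, t_ms, bins):
    """AS production rate r(t): histogram of the elapsed times t - t_ms
    in the given bin edges, divided by each bin width (counts per unit
    time). Averaging over MS of similar size is left to the caller."""
    counts = [0] * (len(bins) - 1)
    for t in as_times:
        dt = t - t_ms
        for k in range(len(bins) - 1):
            if bins[k] <= dt < bins[k + 1]:
                counts[k] += 1
                break
    return [c / (bins[k + 1] - bins[k]) for k, c in enumerate(counts)]
```

For an algebraic decay spanning decades, logarithmically spaced `bins` are the natural choice before fitting the gamma-law form $f(u)$.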
\begin{figure}[!h] \centering\includegraphics[width=0.8\columnwidth]{fig_omori.png} \caption{AS rate, $r_{AS}$, as a function of the time elapsed since the MS, $t_{AS}-t_{MS}$, for experiments (panel a) and simulations (panel b). The curves are scaled by the productivity $N_{AS}$ as proposed by Eq. \ref{eq_omori}. The different curves stand for the different types of avalanches: acoustic events (green $\pentagon$, panel a), events detected on the numerical $\overline{v}(t)$ signal (blue $\Circle$, panel b), events detected on the numerical spatio-temporal map $v(z,t)$ (red $\rhd$, panel b). All curves have been fitted using Eq. \ref{eq_omori} (dashed lines). The obtained fitting parameters are: $\tau_{min}^{ea}=2.83\times10^{-6}\pm1.36\times10^{-6}$, $\tau_{max}^{ea}=9.3\pm5.2$ and $p_{ea}=1.17\pm0.02$; $\tau_{min}^{ng}=8.45\times10^{-3}\pm4.86\times10^{-3}$, $\tau_{max}^{ng}\gg10$ and $p_{ng}=1.75\pm0.11$. In the experimental case (panel a) the points are obtained by superimposing data with different $S_{MS}$. In all cases, the avalanche size threshold is fixed to $S_{th}=0$.} \label{fig_omori} \end{figure} \section{Concluding discussion} We examined here the crackling dynamics in a nominally brittle crack problem. Experimentally, a single crack was slowly pushed into an artificial rock made of sintered polymer beads. An irregular burst-like dynamics is evidenced at the global scale, made of successive depinning jumps spanning a variety of sizes. The area swept by each of these jumps, their duration, and the overall energy released during the event are power-law distributed over several orders of magnitude. Despite these giant individual fluctuations, the ratio between the instantaneous, spatially-averaged, crack speed and the power release remains fairly constant and defines a continuum-scale material constant, the fracture energy.
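The power-law exponents quoted throughout (and gathered in Tab. \ref{tab_exponent}) are estimated from the measured size distributions; a standard route is the continuous maximum-likelihood estimator. A minimal sketch on synthetic data (the exponent $1.35$ and the threshold are illustrative values, not the measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic avalanche sizes drawn from P(S) ~ S^-beta above a threshold S_th
# (beta_true = 1.35 is only an illustrative value in the range reported).
beta_true, S_th, n = 1.35, 1.0, 50000
S = S_th * (1.0 - rng.random(n)) ** (-1.0 / (beta_true - 1.0))  # inverse-CDF sampling

# Continuous maximum-likelihood (Hill) estimator of the exponent:
#   beta_hat = 1 + n / sum(log(S_i / S_th))
beta_hat = 1.0 + n / np.sum(np.log(S / S_th))
err = (beta_hat - 1.0) / np.sqrt(n)   # asymptotic standard error

print(beta_hat, err)
```

Unlike a least-squares fit to a log-binned histogram, this estimator is unbiased for large samples and comes with an explicit error bar.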
The features depicted above can be understood in a model which explicitly takes into account the microstructure disorder by introducing a stochastic term into the continuum fracture theory. Then, the problem of crack propagation maps to that of a long-range elastic interface driven by a force self-adjusting around the depinning threshold. This approach reproduces the crackling dynamics observed at the global scale. The agreement is quantitative regarding the size distribution; the exponents measured experimentally and numerically are very close. They are also very close to the value $\beta_{g}=1.28$ predicted theoretically via the Functional Renormalization Group (FRG) method \cite{bonamy2009_jpd,ledoussal09_pre}. Conversely, the exponent characterizing the scale-free statistics of the event duration, $\delta_g$, is different in the experiment and in the simulation. The latter is rather close to the predicted FRG value, $\delta_g=1.50$. Note that the FRG analysis presupposes a quasi-static process, with a vanishing driving rate (parameter $c$ in Eq. \ref{eqLine} and the simulation, $V_{wedge}$ in the experiment). By yielding some overlap between the global avalanches, a finite driving rate may change the value of $\delta$ \cite{white03_prl,bares13_prl}. Different driving rates in the experiment and the simulation may also be at the origin of the difference between $\delta_{eg}$ and $\delta_{ng}$. Note also that the long-range elastic kernel in Eq. \ref{eq_mod0} is actually derived assuming an infinite thickness. This may not be relevant in our experiment, where the specimen thickness is only 30 times larger than the microstructure length-scale. In this respect, it is worth noting that values $\sim 1.5$ were experimentally measured in interfacial growth experiments with much larger thickness-to-microstructure-scale ratios \cite{janicevic2016_prl}, i.e. more in line with the long-range elastic kernel of Eq. \ref{eq_mod0}.
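The depinning picture invoked above can be illustrated with a toy quasi-static automaton (a deliberately crude stand-in, not the actual model of Eqs. \ref{eqLine} and \ref{eq_mod0}): sites of a discrete line carry a stress and random pinning thresholds, and a yielding site sheds its stress to the others through a long-range $1/r^2$ kernel. All parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy quasi-static depinning automaton with a long-range 1/r^2 kernel
# (illustration only; all parameters are arbitrary).
L, n_events = 256, 300
idx = np.arange(L)
r = np.minimum(idx, L - idx).astype(float)   # periodic distance to site 0
kernel = np.where(r > 0, 1.0 / r**2, 0.0)
kernel /= kernel.sum()                        # normalise the shed stress

f = np.zeros(L)                               # stress on each site
thresh = 0.1 + rng.random(L)                  # random pinning thresholds
sizes = []
for _ in range(n_events):
    f += (thresh - f).min()                   # drive uniformly to the next yield
    size = 0
    while (unstable := np.flatnonzero(f >= thresh)).size:
        for i in unstable:
            size += 1
            shed = 0.85 * f[i]                # 15% dissipated: guarantees arrest
            f[i] = 0.0
            f += shed * np.roll(kernel, i)    # long-range redistribution
            thresh[i] = 0.1 + rng.random()    # new pinning threshold
    sizes.append(size)

sizes = np.array(sizes)
print(sizes.min(), sizes.mean(), sizes.max())
```

Each drive increment triggers an avalanche whose size is the number of topplings; with dissipation turned down toward zero the model approaches criticality and the sizes become broadly distributed, which is the regime the self-adjusting drive of the actual model maintains automatically.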
The analysis of the simulations has made it possible to define avalanches at the local scale, as depinning events localized in both space and time (in contrast with the global avalanches identified with $\overline{v}(t)$ bursts, localized in time only). Two definitions were proposed: digging out these local avalanches either from the activity map $W(x,z)$ or from the space-time velocity map $v(z,t)$. Both cases lead to similar, scale-free, statistics for the avalanche size; the two procedures are conjectured to be equivalent. Conversely, the obtained exponent, $\beta_l \simeq 1.65$, is significantly higher than those associated with global avalanches. This illustrates that local and global avalanches are distinct entities; each global avalanche is actually made of numerous local avalanches \cite{laurson10_pre}. Unfortunately, the statistics of these local avalanches could not be determined in our experiments. Nevertheless, the value observed here is very close to that reported in interfacial crack experiments \cite{maloy06_prl,grob09_pag}. This global crackling dynamics goes along, in the experiment, with the emission of numerous acoustic events which are also power-law distributed in energy. The associated exponent, $\beta_{ea}\simeq 1$, is significantly smaller than those associated with the global or local avalanche sizes. Actually, AE are elastodynamic quantities distinct from the (elastostatic) depinning avalanches: they are the signature of the elastic waves triggered by the local accelerations/decelerations within the depinning events, but their energy is not proportional to the depinning area (or to the total elastostatic energy released during the depinning). In particular, the acoustic waveform depends not only on the depinning event, but also on the complete geometry of the specimen at the time of the event, its eigenmodes at that time, etc.
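The extraction of local avalanches from the space-time map $v(z,t)$ amounts to thresholding the map and labelling connected clusters. A sketch on a synthetic map (the threshold and burst parameters are arbitrary, chosen only to make the clusters unambiguous):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

# Synthetic space-time map v(z, t): a weak background plus localized
# high-velocity bursts standing in for depinning events (illustrative only).
nz, nt = 200, 500
v = 0.05 * rng.random((nz, nt))
for _ in range(30):
    z0 = rng.integers(0, nz - 12)
    t0 = rng.integers(0, nt - 22)
    v[z0:z0 + rng.integers(2, 10), t0:t0 + rng.integers(3, 20)] += 1.0

# Local avalanches = connected clusters of the thresholded map; the
# cluster area plays the role of the local avalanche size.
labels, n_clusters = ndimage.label(v > 0.5)
sizes = np.bincount(labels.ravel())[1:]   # area of each labelled cluster

print(n_clusters, sizes.min(), sizes.max())
```

Overlapping bursts merge into a single cluster, which is precisely the mechanism by which many local events compose one global avalanche.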
Quite surprisingly, the size of the global avalanches (that is, the length of the crack jump caused by a depinning event) has been observed \cite{bares2018_natcom} to be proportional to the number of acoustic events produced during the event rather than to the acoustic energy cumulated over the event, as was initially proposed in \cite{Stojanova14_prl}. Deriving the rationalization tools to infer the relevant information on the underlying depinning event from the analysis of the acoustic waveform provides a tremendous challenge for future investigation. Beyond their individual scale-free features, the acoustic events get organized in time and form characteristic AS sequences obeying the fundamental laws of seismicity: the productivity law relating the number of produced AS to the triggering MS size; B\r{a}th's law relating the size of the largest AS to that of the triggering MS; and the Omori-Utsu law relating the AS production rate to the time elapsed since the MS. These laws were recently demonstrated \cite{bares2018_natcom} to be a direct consequence of the individual scale-free statistics of size (for the productivity and B\r{a}th's laws) and of the scale-free statistics of inter-event time (for the Omori-Utsu law). The sequences of global avalanches also obey a similar time and size organization. In this context, the observation of the Omori-Utsu law and of scale-free statistics of inter-event times may appear surprising. Depinning models usually predict that, at vanishing driving rate, depinning events are randomly distributed, with an exponential distribution of inter-event times \cite{sanchez2002_prl}. However, it has been recently shown \cite{janicevic2016_prl} how the application of a finite threshold to identify the pulses in $\overline{v}(t)$ splits each true depinning avalanche into disconnected sub-avalanches with power-law distributed inter-event times.
Note that, in this scenario, the characteristic exponent of the inter-event time is equal to that of the individual event duration, which is not observed here (Tab. \ref{tab_exponent}). This may result from a difference in the definition of the inter-event time, given by the difference in starting time between two successive events in our case, and by the difference between the starting time of an event and the ending time of its predecessor in \cite{janicevic2016_prl}. It is also interesting to note that local avalanches, in the simulation, do not display scale-free statistics for the inter-event times. Work in progress aims at understanding how such scale-free statistics emerge at the global scale from the coalescence of the local avalanches at finite driving rate \cite{bares18_prb}. \begin{table}[] \centering \begin{tabular}{|c|c|c|c|c|} \hline statistics & observable & exponent & value & variability \\ \hline \multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Richter-Gutenberg\\ $P(S)$\end{tabular}} & from simulated $\overline{v}(t)$ & $\beta_{ng}$ & $1.36 \pm 0.05$ & \begin{tabular}{c} $\sim$ const. \cite{bares2014_ftp} \\ slightly $\nearrow$ with $c$ \cite{bares18_prb} \end{tabular} \\ \cline{2-5} & from simulated $v(z,t)$ & $\beta_{nl}$ & $1.62 \pm 0.03$ & $\sim$ const. \cite{bares13_phd} \\ \cline{2-5} & from simulated activity $W(x,z)$ & $\beta_{na}$ & $1.66 \pm 0.05$ & $\sim$ const. \cite{bares13_phd,bonamy2008_prl} \\ \cline{2-5} & from experimental $\overline{v}(t)$ & $\beta_{eg}$ & $1.35 \pm 0.10$ & slightly $\searrow$ with $\overline{v}$ \cite{bares13_prl} \\ \cline{2-5} & from experimental acoustic & $\beta_{ea}$ & $0.96 \pm 0.03$ & slightly $\searrow$ with $\overline{v}$ \cite{bares2018_natcom}\\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Duration\\ $P(D)$\end{tabular}} & from simulated $\overline{v}(t)$ & $\delta_{ng}$ & $1.40 \pm 0.05$ & $\sim$ const.
\cite{bares13_phd,bares2014_ftp} \\ \cline{2-5} & from simulated $v(z,t)$ & $\delta_{nl}$ & $2.29 \pm 0.25$ & \\ \cline{2-5} & from simulated activity $W(x,z)$ & $\delta_{na}$ & $1.80 \pm 0.03$ & \\ \cline{2-5} & from experimental $\overline{v}(t)$ & $\delta_{eg}$ & $1.85 \pm 0.06$ & \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Waiting time\\ $P(\Delta t)$\end{tabular}} & from simulated $\overline{v}(t)$ & $p_{ng}$ & $1.75 \pm 0.03$ & $\nearrow$ with $\overline{v}$ \cite{bares18_prb} \\ \cline{2-5} & from experimental $\overline{v}(t)$ & $p_{eg}$ & $1.43 \pm 0.03$ & \\ \cline{2-5} & from experimental acoustic & $p_{ea}$ & $1.29 \pm 0.02$ & $\nearrow$ with $\overline{v}$ \cite{bares2018_natcom}\\ \hline \begin{tabular}[c]{@{}c@{}}Jump\\ $P(\Delta r)$\end{tabular} & from simulated $v(z,t)$ & $\lambda_{nl}$ & $0.23 \pm 0.01$ & \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Omori\\ $r_{AS}(t_{AS}-t_{MS})$\end{tabular}} & from simulated $\overline{v}(t)$ & $p_{ng}$ & $1.75 \pm 0.11$ & \\ \cline{2-5} & from experimental acoustic & $p_{ea}$ & $1.17 \pm 0.02$ & \\ \hline \multirow{4}{*}{S vs. D} & from simulated $\overline{v}(t)$ & $\gamma_{ng}$ & $0.880 \pm 0.006$ & slightly $\searrow$ with $\overline{v}$ \cite{bares13_prl} \\ \cline{2-5} & from simulated $v(z,t)$ & $\gamma_{nl}$ & $0.470 \pm 0.003$ & \\ \cline{2-5} & from simulated activity $W(x,z)$ & $\gamma_{na}$ & $0.992 \pm 0.003$ & \\ \cline{2-5} & from experimental $\overline{v}(t)$ & $\gamma_{eg}$ & $0.91 \pm 0.01$ & \\ \hline \end{tabular} \caption{Table of the exponents measured for the different statistical laws for avalanches detected on different numerical and experimental observables. 
In the subscripts of the exponent names, `ng' stands for \textit{numerics global}, `nl' stands for \textit{numerics local}, `na' stands for \textit{numerics activity}, `eg' stands for \textit{experiment global} and `ea' stands for \textit{experiment acoustic}.} \label{tab_exponent} \end{table} \noindent \textbf{Acknowledgements}: This work was supported by the ANR project MEPHYSTAR (ANR-09-SYSC-006-01) and by the ``Investissements d'Avenir'' LabEx PALM (ANR-10-LABX-0039-PALM). We thank Thierry Bernard for technical support, and Luc Barbier, Davy Dalmas and Alberto Rosso for fruitful discussions.
\section{Introduction} \label{sect_intro} The ionised gas in the solar atmosphere can be highly structured by the magnetic field, forming a variety of loop features therein. These loop features, namely magnetic loops, are one of the fundamental building blocks of the solar atmosphere. These loops are normally brighter than the background and can be easily traced in remote-sensing data. Therefore, they are popular objects used to investigate the transport of magnetic flux and energy from the solar interior to the outer solar atmosphere. \par The physical parameters of magnetic loops, such as velocity, density and temperature, are directly related to the pressure and emissivity, whose distributions along the loop length depend on the heating therein\,\citep[e.g.][]{1998Natur.393..545P,2006SoPh..234...41K,2007ApJ...664.1214P,2008ApJ...677.1395W,2008ApJ...682.1351K,2013ApJ...771..115V,2014LRSP...11....4R,2018ApJ...856..178P}. Therefore, these parameters are crucial to understanding the mechanism and spatial distribution of heating in the loops. Measurements of these parameters have been achieved, but mostly for magnetic loops in the corona, thanks to their relatively stable geometries\,\citep[an overview of these observations of coronal loops can be found in][]{2017ApJ...842...38X}. However, it is challenging to obtain such measurements for cool transition region loops, because this class of loops is highly dynamic on time scales of minutes\,\citep{1998SoPh..182...73K,2015ApJ...810...46H}. In the \textit{SOHO} era, magnetic loops with line-of-sight velocities from a few tens to a hundred kilometres per second were reported in \textit{CDS} O\,{\sc V} ($2.5\times10^5$\,K) observations\,\citep{1997SoPh..175..511B,1998SoPh..182...73K, 2003A&A...406..323D} and \textit{SUMER} O\,{\sc VI} ($3.2\times10^5$\,K) data\,\citep{2000ApJ...533..535C,2004A&A...427.1065T,2006A&A...452.1075D}.
Since the Interface Region Imaging Spectrograph\,\citep[IRIS,][]{2014SoPh..tmp...25D} achieved first light in 2013, the transition region has been observed with unprecedented spatial and temporal resolutions. With IRIS observations, \citet{2015ApJ...810...46H} confirmed that this class of loops is diverse and dynamic, and they observed siphon flows of 10--20\,{\rm km\;s$^{-1}$}\ in a group of loops. Regarding density and temperature, accurate measurements for cool transition region loops are rare due to the lack of suitable spectroscopic data. \par Dynamic phenomena in cool transition region loops can also provide insight into the energy and mass transport therein. It has been reported that interactions between cool transition region loops can produce explosive events\,\citep{2015ApJ...810...46H,2017MNRAS.464.1753H,2018ApJ...854...80H}, which are signatures of energy release in the solar transition region via magnetic reconnection\,\citep[e.g.][etc.]{1991JGR....96.9399D,1997Natur.386..811I,2014ApJ...797...88H,2018arXiv180610205L}. It has also been observed that interactions between cool transition region loops can result in heating and subsequently in the formation of hotter coronal loops\,\citep[e.g.][]{2017A&A...603A..95A,2018A&A...611A..49Y,2018arXiv180611045C}. \par Here, I report on multi-wavelength observations of a set of magnetic loops above a region with flux emergence at its late phase. Rather than focusing on an individual loop, I will concentrate on the global observational characteristics of these loops. I will investigate the dynamics of the loops and their thermal structures, which couple chromospheric to coronal temperatures. By analysing the spectroscopic data, I will also determine the physical parameters of the loops, such as velocities, densities and temperatures. Based on these observations, I will discuss the behaviour of flux emergence at its late phase and the possible heating processes of these emerging loops.
The study is organised as follows: the data are described in Section\,\ref{sect_data}; the analysis results and discussion are given in Section\,\ref{sect_results}; the conclusion is given in Section\,\ref{sect_conclusion}. \section{Observations and data reductions} \label{sect_data} \subsection{Data description} The data were acquired during an observing campaign in coordination with the Goode Solar Telescope\,\citep[GST,][]{2003JKAS...36S.125G,2010AN....331..620G,2010AN....331..636C} operated at the Big Bear Solar Observatory\,\citep[BBSO,][]{1970S&T....39..215Z}, the Interface Region Imaging Spectrograph\,\citep[IRIS,][]{2014SoPh..tmp...25D}, Hinode\,\citep{2007SoPh..243....3K} and the Solar Dynamics Observatory\,\citep[SDO,][]{2012SoPh..275....3P}. On 2017 August 26, the region of interest was observed by a raster scan of the IRIS spectrograph from 18:00\,UT to 19:27\,UT, and also by the Hinode instruments. During this period of time, the GST did not target this region due to other arrangements. \par Here, I analysed the spectroscopic data obtained by the EUV Imaging Spectrometer\,\citep[EIS,][]{2007SoPh..243...19C} aboard Hinode and by IRIS, UV and EUV images taken with the Atmospheric Imaging Assembly\,\citep[AIA,][]{2012SoPh..275...17L} aboard SDO and with IRIS, soft X-ray images taken with the X-Ray Telescope\,\citep[XRT,][]{2007SoPh..243...63G} aboard Hinode, and line-of-sight magnetograms measured by the Helioseismic and Magnetic Imager\,\citep[HMI,][]{2012SoPh..275..207S} aboard SDO. \begin{table}[!ht] \centering \caption{Summary of the imaging data analysed presently.} \label{tab_dimg} \begin{tabular}{c c c c c} \hline Inst.
& passband & cadence & pixel size&Peak Tem- \\ &(\AA)& (s) & & perature (K)\\ \hline \hline IRIS$^\dag$ &1330&65&0.17$^{''}$$\times$0.17$^{''}$&$\sim2.5\times10^4$\\ &1400&---&---&$\sim8\times10^4$\\ &2796&---&---&$\sim1\times10^4$\\ \hline AIA &1700&24&0.6$^{''}$$\times$0.6$^{''}$&$5\times10^3$\\ &304&12&---&$5\times10^4$\\ &171&---&---&$6.3\times10^5$\\ &193&---&---&$1.3\times10^6$\\ &94&---&---&$6.3\times10^6$\\ \hline XRT &Al\_poly&$\sim40$&1$^{''}$$\times$1$^{''}$&$8\times10^6$\\ \hline HMI &magnetogram&45$^{*}$&0.5$^{''}$$\times$0.5$^{''}$&N/A\\ \hline \end{tabular} \tablenotetext{$\dag$}{The peak temperatures of the IRIS SJ passbands are estimated as the formation temperatures of the major emission lines in the passbands.} \tablenotetext{*}{The cadence of the HMI magnetograms earlier than 18:00\,UT is 90\,s, in order to reduce the amount of data to download.} \end{table} \begin{table}[!ht] \centering \caption{Spectral lines analysed in the present study as taken by IRIS and Hinode/EIS.} \label{tab_dsp} \begin{tabular}{c c c c c c} \hline Inst.
& ion & wavelength &$T_{max}$ (K)& step size & slit pixel\\ \hline \hline IRIS &Mg\,{\sc ii}&2796.4\,\AA&$1.4\times10^4$&0.35$^{''}$&0.17$^{''}$\\ &Mg\,{\sc ii}&2803.5\,\AA&---&---&---\\ &C\,{\sc ii}&1334.5\,\AA&$2.5\times10^4$&---&---\\ &C\,{\sc ii}&1335.7\,\AA&---&---&---\\ &Si\,{\sc iv}&1393.8\,\AA&$7.9\times10^4$&---&---\\ &Si\,{\sc iv}&1402.8\,\AA&---&---&---\\ &O\,{\sc iv}&1399.8\,\AA&$1.4\times10^5$&---&---\\ &O\,{\sc iv}&1401.2\,\AA&---&---&---\\ \hline EIS &He\,{\sc ii}&256.3\,\AA$^b$&$5\times10^4$&2$^{''}$&1$^{''}$\\ &Fe\,{\sc viii}$^{*}$&186.6\,\AA&$4.5\times10^5$&---&---\\ &Fe\,{\sc ix}$^{*}$&197.8\,\AA&$7.1\times10^5$&---&---\\ &Fe\,{\sc x}$^{*}$&184.5\,\AA&$1.1\times10^6$&---&---\\ &Fe\,{\sc xi}$^{*}$&180.4\,\AA&$1.4\times10^6$&---&---\\ &Fe\,{\sc xii}&186.9\,\AA&$1.6\times10^6$&---&---\\ &Fe\,{\sc xii}$^{*}$&195.1\,\AA&---&---&---\\ &Fe\,{\sc xiii}$^{*}$&202.0\,\AA&$1.8\times10^6$&---&---\\ &Fe\,{\sc xiii}&203.8\,\AA$^b$&---&---&---\\ &Fe\,{\sc xiv}$^{*}$&270.5\,\AA$^b$&$2.0\times10^6$&---&---\\ &Fe\,{\sc xv}$^{*}$&284.2\,\AA$^b$&$2.2\times10^6$&---&---\\ &Fe\,{\sc xvi}$^{*}$&263.0\,\AA&$2.8\times10^6$&---&---\\ \hline \end{tabular} \tablenotetext{}{The IRIS spectrometer observed with an exposure time of 15\,s, and EIS with an exposure time of 45\,s. The width of the IRIS slit is 0.35$^{''}$\ and that of the EIS slit is 2$^{''}$. The spectral lines blended with lines from different species of ions are denoted by $^b$. The EIS spectral lines denoted by asterisks are used in the DEM analysis.} \end{table} \par IRIS was running in a 320-step dense raster mode, and the spectrograph obtained spectral data of a region with a size of 112$^{''}$$\times$129$^{''}$\ and a center at (x=--3.3$^{''}$, y=33.2$^{''}$). Hinode/EIS was scanning a region centred at (x=--21.2$^{''}$, y=--26.6$^{''}$) in the short wavelength (SW) band with a size of 120$^{''}$$\times$512$^{''}$.
The Hinode/XRT field-of-view has a size of 384$^{''}$$\times$384$^{''}$\ and centers at (x=--3.0$^{''}$, y=--0.5$^{''}$). A summary of the imaging data analysed in this study is given in Table\,\ref{tab_dimg}, and the information on the spectral data is shown in Table\,\ref{tab_dsp}. \par All the data were first prepared with the standard procedures provided by the instrument teams. The coalignment of the data was obtained by cross-correlating observations at wavelengths with the closest representative temperatures. In the present case, the reference coordinates are given by the HMI magnetograms. The AIA 1700\,\AA\ images were used to align with the HMI magnetograms, and the other AIA passbands were aligned to each other via cross-correlations. The IRIS 1330\,\AA\ data were then aligned to the AIA 1700\,\AA\ images. The XRT Al-poly images were aligned to AIA 94\,\AA, and the Hinode EIS Fe\,{\sc xii}\,195\,\AA\ raster images were then aligned to the soft X-ray images. \par The region of interest, where a group of magnetic loops was observed, and its context in the AIA 304\,\AA\ passband are shown in Figure\,\ref{fig:fov}. The region is part of the active region NOAA AR12672. By the beginning of the observations, the active region itself had fully developed and no large-scale magnetic flux emergence was observed in this period of time. In the region of interest, small-scale flux emergence at its late phase was observed in the current dataset (see details in Section\,\ref{subsect_mag}). This small flux emergence formed a small loop system that is the main object of the present work. This is different from many previous studies on flux emergence, which focused on the emerging stage of active regions themselves.
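The cross-correlation coalignment described above can be sketched as follows: the relative shift between two co-temporal images is read off from the peak of their FFT-based cross-correlation. This is a minimal integer-pixel version on synthetic data; the shift magnitudes, image sizes and noise level are arbitrary, and sub-pixel refinement, rotation and plate-scale differences between instruments are ignored.

```python
import numpy as np

rng = np.random.default_rng(4)

def xcorr_shift(ref, img):
    """Integer (dy, dx) shift that maps `img` onto `ref` (periodic images)."""
    ref = ref - ref.mean()
    img = img - img.mean()
    # cross-correlation via the FFT; its peak sits at minus the image shift
    cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # map to signed shifts
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return int(dy), int(dx)

# Synthetic test: shift a random "image" by (7, -5) pixels and add noise.
ref = rng.random((128, 128))
img = np.roll(ref, (7, -5), axis=(0, 1)) + 0.01 * rng.standard_normal((128, 128))
print(xcorr_shift(ref, img))
```

Applying `np.roll` with the returned offsets registers the second image onto the first; in practice one would apply the (sub-pixel refined) offsets to the image world-coordinate headers rather than rolling the arrays.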
\begin{figure*} \includegraphics[clip,trim=1cm 0cm 0cm 0cm,width=\textwidth]{fov_contex.eps} \caption{(a): An area near the center of the solar disc viewed in the AIA 304\,\AA\ passband at 18:09\,UT on 2017 August 26, in which the region of interest with the cool transition region loops studied in the present work is enclosed by dotted lines. (b)--(d): The IRIS SJ 1400\,\AA\ image (b), the HMI magnetogram (c) and the AIA 171\,\AA\ image (d) of the region of interest, on which the white and blue contour curves represent the magnetic flux density measured at 18:09\,UT with levels of $-800$\,Mx\,cm$^{-2}$ and $800$\,Mx\,cm$^{-2}$, respectively. The HMI magnetogram is artificially saturated at $-1000$\,Mx\,cm$^{-2}$ (black) and $1000$\,Mx\,cm$^{-2}$ (white). The positive and negative polarities that are connected by the magnetic loops are marked in the magnetogram as ``P'' and ``N'', respectively.} \label{fig:fov} \end{figure*} \begin{figure} \includegraphics[clip,trim=0.5cm 0cm 0cm 0cm,width=\linewidth]{mag_lc.eps} \caption{Evolution of the HMI magnetograms. Left: HMI magnetogram of the region, scaled from $-1000$\,Mx\,cm$^{-2}$ (black) to $1000$\,Mx\,cm$^{-2}$ (white). Right: the variation of the total magnetic flux of the negative polarity in the region shown on the left. The dotted line in the right panel indicates the observing time of the magnetogram shown in the left panel. (An animation is provided online.)} \label{fig:maglc} \end{figure} \subsection{Analysis of spectroscopic data} The IRIS data used in the present study are level 2 data, which have been fully prepared by the instrument team, and no further calibration is applied. The EIS data were downloaded as level 0. They were first reduced by the standard procedures packaged in {\it eis\_prep.pro}, which also include radiometric calibration. The details of the procedures can be found in the document {\it eis\_swnote\_01} in the {\it solarsoft} directories.
\par To derive the line parameters of the observed spectral lines, including peak intensity, line centre and line width, Gaussian fits are used. For those spectral lines used to derive Doppler velocities, nonthermal velocities and electron densities, blending lines should be carefully handled. Most of these lines can be well fitted by a single Gaussian function even though some are blended. In those cases, the blended lines cannot be removed with multi-Gaussian fits, and their effects will be taken into account when interpreting the observational results. The only case in which the blending line can be removed by a double Gaussian fit is Fe\,{\sc xiii}\,203.8\,\AA, which is blended by Fe\,{\sc xii}\,203.7\,\AA. \par To obtain Doppler velocities, wavelength calibration is required. The wavelength calibration has been performed on the IRIS level 2 data by the instrument team using neutral lines as references (see details in IRIS Technical Note 20). However, absolute wavelength calibrations for the EIS data are difficult. Alternatively, a profile averaged over a quiet-Sun region can be used as a reference\,\citep{2012ApJ...744...14Y}. However, this quiet-Sun method cannot be used here because very little quiet-Sun region is covered in the field-of-view. In the present study, a spectral profile averaged over the entire field-of-view was first obtained and its line centre was taken as the rest wavelength of that spectral line. In this case, the Doppler velocities obtained from the EIS data should have an accuracy of about 4.4\,{\rm km\;s$^{-1}$}\,\citep{2010SoPh..266..209K}. Compared to a calibration with a quiet-Sun region, this wavelength calibration method could lead to an offset of 5.4\,{\rm km\;s$^{-1}$}\ toward red-shifts in the corona\,\citep[see discussion in][]{2012ApJ...744...14Y}, and this offset has been taken into account in the present study. \par While constructing a map of Doppler velocities, thermal drifts and the slit tilt should also be corrected.
For the EIS data, the thermal drifts can be obtained with a method proposed by \citet{2010SoPh..266..209K} and the slit tilt can be obtained from specially-designed observations. These corrections can be applied with software provided in {\it solarsoft} by the instrument team, and they have been included in producing the Doppler velocity maps. For the IRIS data, the basic corrections for thermal drifts and slit tilts have been applied by the data processing pipeline and no further corrections are required in the present case. \par To derive nonthermal velocities, the calculation steps given in \citet{1998ApJ...505..957C} are followed. The instrumental broadenings given by the instrument team are used, which are 0.054\,\AA\ (FWHM) for EIS Fe\,{\sc xii} 195.1\,\AA\ and Fe\,{\sc xiii} 202.0\,\AA\ and 0.032\,\AA\ (FWHM) for IRIS Si\,{\sc iv}\ 1393.8\,\AA. The thermal temperature of a spectral line is assumed to be its formation temperature as listed in Table\,\ref{tab_dsp}. \section{Results and discussion} \label{sect_results} In this section, I show the observational analysis of the magnetic loops using multi-wavelength imaging and spectroscopic data. \subsection{Evolution of the magnetic features} \label{subsect_mag} In Figure\,\ref{fig:fov}, one can see that the magnetic loops are rooted in a large patch of positive polarity in the south (marked as ``P'' in Figure\,\ref{fig:fov}c) and a patch of negative polarity in the north (marked as ``N'' in Figure\,\ref{fig:fov}c). The positive polarity corresponds to a set of small sunspots that were part of the following sunspot group of the active region. The evolution of the magnetic features in the region is shown in Figure\,\ref{fig:maglc} and the associated animation. We can see that the negative polarity clearly shows an emerging process from 10:00\,UT to 20:00\,UT.
The total negative flux in the region increases about 25 times from about $10^{19}$\,Mx, which gives an average flux-emergence rate of $\sim7\times10^{15}$\,Mx\,s$^{-1}$. During the emerging process, small magnetic features with negative polarity kept appearing at the edge of the major positive polarity (``P'' in Figure\,\ref{fig:fov}c) and then moved northward and merged with each other to form the large patch seen in Figure\,\ref{fig:fov}c (denoted as ``N''). Since the positive polarity is predominant in the region, its emergence was not seen as clearly as that of the negative one in the observations. \par As seen in the variation of the negative flux (right panel of Figure\,\ref{fig:maglc}), the IRIS and Hinode observations were taken at the late phase of the emergence, when many magnetic loops had already formed between the two major polarities. During this period of observing time (i.e. 18:00\,UT -- 19:27\,UT), many small negative polarities were still appearing at the edge of the major positive polarity and moving toward the major negative one. Meanwhile, we can also see many small positive polarities moving into the region between the positive and negative polarities. These small positive polarities might have appeared at the edge of the major negative polarity or split from the major positive polarity. Most of these small polarities (both positive and negative) could not reach the major ones. They might simply disappear on the way, or meet and cancel each other. The emergence, splitting and cancellation of small polarities can be well traced by the small dynamic brightenings in the AIA 1700\,\AA\ images (see the animation associated with Figure\,\ref{fig:mulimgs}). \begin{figure*} \includegraphics[clip,trim=1.5cm 3cm 0cm 0cm,width=\linewidth]{mul_imgs.eps} \caption{The loop system viewed in multiple passbands of IRIS, AIA and XRT.
The blue and white solid lines in panel (a) are the contours of magnetic flux densities measured at 18:43\,UT with levels of $800$\,Mx\,cm$^{-2}$ and $-800$\,Mx\,cm$^{-2}$, respectively. The cyan dotted lines denote a loop thread that is clearly seen in the XRT image, and the yellow dotted lines denote two loop threads that are clearly seen in the AIA 304\,\AA\ image. The white solid line in panel (c) indicates the cut along which the variations of radiation in the observing passbands are shown in Figure\,\ref{fig:clplc}. (An animation is provided online.)} \label{fig:mulimgs} \end{figure*} \begin{figure} \includegraphics[clip,trim=0.5cm 4cm 0cm 0cm,width=\linewidth]{crs_lps_lcs.eps} \caption{The variations of radiation in multiple passbands along the cut marked in Figure\,\ref{fig:mulimgs}c. The yellow dashed lines indicate the locations of the loop threads that are marked by yellow dotted lines in Figure\,\ref{fig:mulimgs}. The cyan dashed lines indicate the location of the loop thread that is marked by cyan dotted lines in Figure\,\ref{fig:mulimgs}. The black dashed lines denote the locations of another two loop threads that can be identified in all the AIA EUV channels and the XRT channel.} \label{fig:clplc} \end{figure} \subsection{Imaging of the magnetic loop system} \label{subsect:imaging} In Figure\,\ref{fig:mulimgs} and the associated animation, I show the evolution of the magnetic loop system viewed in multiple passbands of the imagers. The details of the loop system can be well seen in the IRIS SJ 1330\,\AA\ and 1400\,\AA\ images. While many loops were connecting the two major polarities, we can also see that some others were connecting the major polarities with the weaker ones. The loop system was very dynamic and evolved (appearing and disappearing) very fast, on time scales comparable to the cadences of the observations. Brightenings frequently occurred in the footpoint regions and led to flaring of many loop threads.
However, there are also some flaring loop threads that do not show bright footpoints in the IRIS SJ 1330\,\AA\ and 1400\,\AA\ images. In general, the loop system shows a clear response in the images of the IRIS SJ passbands, the AIA EUV channels and the XRT Al\_poly filter. However, the structures viewed in AIA and XRT are much fuzzier than those in the IRIS SJ passbands. One of the possible reasons is the lower spatial resolutions of the AIA and XRT images; another reason could be that the counterparts of nearby loop threads at higher temperatures cannot be separated from each other. \par The higher resolution images from IRIS SJ 1330\,\AA\ and 1400\,\AA\ (Figures\,\ref{fig:mulimgs}c\&d) show many fine threads of loops with a cross-section $\lesssim0.5$$^{''}$\ (full width at half maximum) and $\sim$12$^{''}$\ separation between the two footpoints. Besides these loops connecting the two major polarities, we also see smaller ones with lengths of about 5$^{''}$\ that connect a major polarity with minor ones in between (see the animation associated with Figure\,\ref{fig:mulimgs}). For comparison, the cross-sections of the loop threads determined from the AIA 171\,\AA\ and XRT images are around 2$^{''}$ (Figures\,\ref{fig:mulimgs}\&\ref{fig:clplc}). The widths of the hot threads are consistent with those measured in TRACE filter images\,\citep{2005ApJ...633..499A}. Such hot threads seen in the AIA data might contain multiple finer threads as reported in Hi-C data\,\citep{2013ApJ...772L..19B}, and this could be the case in the present loop system, although we do not have higher-resolution images at hot temperatures. \par We also notice that not all loop threads at lower temperatures have visible counterparts at higher temperatures. In Figure\,\ref{fig:clplc}, I show light curves taken along a slit across the loop system seen at around 18:43:30\,UT (shown as the white solid line in Figure\,\ref{fig:mulimgs}c).
A peak in these curves is representative of a loop thread seen in the corresponding channel. Many loop threads seen in the IRIS SJ 1330\,\AA\ and 1400\,\AA\ passbands do not have distinguishable counterparts in the images taken in passbands with hotter representative temperatures (see Figures\,\ref{fig:mulimgs}\&\ref{fig:clplc}), which is consistent with previous studies\,\citep[e.g.][]{1997SoPh..175..487F,2000A&A...359..716S}. In Figure\,\ref{fig:mulimgs}, I highlight a loop thread (outlined by the cyan dotted lines) that is clearly seen in the X-ray and AIA 94\,\AA\ images but almost invisible in the other passbands (this can also be seen in Figure\,\ref{fig:clplc}, in the peaks of the light curves denoted by the cyan dashed line). In contrast, two loop threads (outlined by yellow dotted lines in Figure\,\ref{fig:mulimgs}) are clearly visible in AIA 304\,\AA\ and 171\,\AA\ but almost invisible in X-ray (this can also be seen in Figure\,\ref{fig:clplc}, in the peaks of the light curves denoted by the yellow dashed lines). This suggests that these loop threads have counterparts only in a narrow temperature range, consistent with that reported by \citet{2008ApJ...686L.131W}. \par There are also some loop threads with clear counterparts in all the passbands. Two peaks indicated in Figure\,\ref{fig:clplc} by black dashed lines correspond to two loop threads identified in the images of all AIA EUV channels and the XRT Al-Poly filter. Taking into account the spatial resolution of the instruments, we can also associate these two loop threads with peaks in the IRIS SJ 1330\,\AA\ and 1400\,\AA\ light curves (see Figure\,\ref{fig:clplc}). \begin{figure*} \includegraphics[clip,trim=1cm 1cm 0.5cm 0cm,width=\linewidth]{iris_eis_raster_imgs2.eps} \caption{The radiance maps of the magnetic loop system taken with IRIS and EIS spectral lines.
IRIS scanned this region from 18:25\,UT to 18:56\,UT, and EIS scanned the region in the time period of 19:00--19:17\,UT. The blue and purple lines are the contours of magnetic flux densities measured at 18:48\,UT with levels of $800$\,Mx\,cm$^{-2}$ and $-800$\,Mx\,cm$^{-2}$, respectively. The black dotted lines are tracers of a few magnetic loop threads identified in the Si\,{\sc iv}\ image (panel b), which can be used as references to compare images taken at different wavelengths. The radiance from the regions enclosed by the rectangles (solid lines) and denoted by the letters ``A--E'' in panel (h) is used for DEM analysis. The background/foreground subtraction is obtained from the region enclosed by dashed lines and denoted by ``BK'' (see Section\,\ref{sect_dem} for details).} \label{fig:sprad} \end{figure*} \subsection{IRIS and EIS spectroscopic observations} This region was scanned by the IRIS slit from 18:25\,UT to 18:56\,UT, and repeatedly by the EIS slit from 18:00\,UT to 23:59\,UT. Because IRIS and EIS were scanning the region of interest at different times, the analysis here focuses on the global characteristics of the loop system rather than any individual loop thread. In Figure\,\ref{fig:sprad}, I show the IRIS and EIS raster images of the region in a few spectral lines. With the high-resolution data, IRIS can distinguish many loop threads in the raster images (Figure\,\ref{fig:sprad}a\&b). Even though the loop system in the EIS data is fuzzy and no individual loop thread can be distinguished, we can also see that the region occupied by the loops is brighter than the background (especially in He\,{\sc ii}, Fe\,{\sc xi}--Fe\,{\sc xvi}). Using the imaging data as guidance, I believe that some of the loop threads (for example, see the fuzzy bright structure denoted by the arrow in Figure\,\ref{fig:sprad}k) have been heated to more than $2\times10^6$\,K (i.e. the formation temperatures of Fe\,{\sc xv} and Fe\,{\sc xvi}).
These hot loops are seemingly not visible in the raster images of Fe\,{\sc viii}--Fe\,{\sc x} (Figure\,\ref{fig:sprad}d--f). This again confirms that most of the loop threads have a narrow range of temperatures. \par In the C\,{\sc ii}\ and Si\,{\sc iv}\ radiance maps (Figure\,\ref{fig:sprad}a\&b), the footpoints of the loop system appear as dark regions, suggesting lower emission than the background (see the blue and purple contours in the figure). The north footpoint (purple contour) is most clearly seen in the He\,{\sc ii}, Fe\,{\sc x} and Fe\,{\sc xi} images (Figure\,\ref{fig:sprad}c,f,g), though its southwest portion is clearly seen in the other EIS spectral lines. While He\,{\sc ii} 256.3\,\AA\ is blended with Si\,{\sc x} and Fe\,{\sc x}, this suggests that the temperature of the north footpoint is in the range of $1.1\times10^6$--$1.4\times10^6$\,K (i.e. between the formation temperatures of Fe\,{\sc x} and Fe\,{\sc xi}). The south footpoint (blue contour) is most clearly seen in Fe\,{\sc xi} (Figure\,\ref{fig:sprad}g), suggesting a temperature of about $1.4\times10^6$\,K. This suggests that heating was concentrated in the footpoints of the loop system. To further understand these loops, I exploit the spectroscopic data to deduce their physical parameters, as shown in the following. \begin{figure*} \includegraphics[clip,trim=0cm 1cm 0cm 0cm,width=\linewidth]{iris_eis_raster_dopp2new.eps} \caption{Dopplergrams of the magnetic loop system measured with IRIS Si\,{\sc iv}\,1393.8\,\AA\ and EIS Fe\,{\sc viii}\,185.2\,\AA, Fe\,{\sc x}\,184.5\,\AA, Fe\,{\sc xii}\,195.1\,\AA, Fe\,{\sc xiii}\,202.0\,\AA\ and Fe\,{\sc xv}\,284.1\,\AA. The observing times are the same as those given in Figure\,\ref{fig:sprad}. The purple contour lines indicate the magnetic flux density at $-800$\,Mx\,cm$^{-2}$ and the black ones represent that at $800$\,Mx\,cm$^{-2}$.
The black dotted lines trace a few loop threads identified in the Si\,{\sc iv}\ radiance image as shown in Figure\,\ref{fig:sprad}, which can be used as references to compare different images.} \label{fig:spdopp} \end{figure*} \begin{figure*} \includegraphics[clip,trim=0cm 0cm 0cm 0cm,width=\textwidth]{fe12_195_profi_samp.eps} \caption{A few samples of Fe\,{\sc xii}\,195.12\,\AA\ profiles from the region of interest observed by EIS. The observed profiles are shown as black solid lines, and the single Gaussian fits to the observed profiles are shown as red dashed lines.} \label{fig:fe12samp} \end{figure*} \subsubsection{Doppler velocities} To derive the Doppler velocities, a spectral profile averaged over the entire field of view was first obtained, and its line center was taken as the rest wavelength of that spectral line. In Figure\,\ref{fig:spdopp}, I display the Doppler maps of the region measured with different spectral lines by IRIS and EIS. In IRIS Si\,{\sc iv}\ 1393.8\,\AA\ (transition region temperature), the loop system shows systematic blue-shifts ($\sim$10\,{\rm km\;s$^{-1}$}) in the middle of the loops, while the footpoints are red-shifted (Figure\,\ref{fig:spdopp}a). The blue-shifts in the middle of the loops are in line with the picture of flux emergence. Because the blue-shifted pattern is systematic in the loop region, it indicates that the loop system kept emerging during the period of the IRIS raster ($\sim$25\,minutes). This cannot be the case for any particular loop thread, which would otherwise have moved upward by about 20$^{''}$\ and this should have been observed in the imaging data (unless the threads are all moving along the line-of-sight). The systematic blue-shifts can be understood if many loops are continuously moving upward through the transition region. Because no clear upward motion was observed in the imaging data, these loops should not move very far while they have transition region temperatures.
The absence of downflows (red-shifts) could be due to heating in the loops that leads to disappearance of the loops at transition region temperatures. This understanding is supported by the extremely dynamic nature (frequent appearing and disappearing) of the loops seen in the IRIS SJ 1330\,\AA\ and 1400\,\AA\ images (see Section\,\ref{subsect:imaging}). Because these loops have a relatively small range of temperatures, the continuing emergence could be observed as frequent appearance of the loop threads in these passbands, and heating of the loop threads could lead to their disappearance. Therefore, the downflows could only be seen at higher temperatures, and these are observed in EIS Fe\,{\sc viii}--Fe\,{\sc xv} (Figure\,\ref{fig:spdopp}b--f). \par With the EIS data, Doppler maps are derived from Fe\,{\sc viii} 185.2\,\AA\ (Figure\,\ref{fig:spdopp}b), Fe\,{\sc x} 184.5\,\AA\ (Figure\,\ref{fig:spdopp}c), Fe\,{\sc xii} 195.1\,\AA\ (Figure\,\ref{fig:spdopp}d), Fe\,{\sc xiii} 202.0\,\AA\ (Figure\,\ref{fig:spdopp}e) and Fe\,{\sc xv} 284.1\,\AA\ (Figure\,\ref{fig:spdopp}f), with temperatures ranging from $4.5\times10^5$\,K to $2.2\times10^6$\,K. In the Doppler maps of Fe\,{\sc viii} to Fe\,{\sc xiii}, it is clear that red-shifts are predominant in the middle of the loops. The Doppler map of Fe\,{\sc xv} (Figure\,\ref{fig:spdopp}f) also shows a clear red-shifted pattern around (X=$-$3$^{''}$, Y=55$^{''}$), where loop structure appears (see Figure\,\ref{fig:sprad}k). From Fe\,{\sc x} to Fe\,{\sc xv}, the Doppler velocities in the middle of the loops show a trend from large ($\gtrapprox$10\,{\rm km\;s$^{-1}$}) to small ($<$5\,{\rm km\;s$^{-1}$}, close to the accuracy of the measurement). This might imply that the loop plasmas become more stationary as they are heated to higher temperatures.
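The rest-wavelength calibration described above (taking the line centre of the profile averaged over the entire field of view as the rest wavelength) can be sketched as follows. This is an illustrative implementation, not the actual reduction pipeline; the centroid is used as a simple stand-in for the Gaussian line-centre fit.

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def doppler_map(wavelength, spectra):
    """Doppler velocities [km/s] for a raster of line profiles.

    wavelength : 1D array of wavelength samples [Angstrom]
    spectra    : array of shape (ny, nx, nlambda), one profile per pixel

    The rest wavelength is taken as the centroid of the profile
    averaged over the entire field of view, as described in the text.
    Positive values are red-shifts, negative values blue-shifts.
    """
    mean_profile = spectra.mean(axis=(0, 1))
    rest = np.sum(wavelength * mean_profile) / np.sum(mean_profile)
    # centroid of each pixel's profile
    centroid = (spectra * wavelength).sum(axis=-1) / spectra.sum(axis=-1)
    return C_KMS * (centroid - rest) / rest
```

With this convention the field-of-view-averaged velocity is zero by construction, so the maps show motions relative to the mean, which is sufficient for the systematic blue-/red-shift patterns discussed here.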
\par Note that the Doppler shifts measured with Fe\,{\sc xii}\,195.1\,\AA\ are significantly larger than those with Fe\,{\sc xii} 202.0\,\AA, and this bias might be introduced by the blend of Fe\,{\sc xii} 195.2\,\AA\ in the red wing of the Fe\,{\sc xii}\,195.1\,\AA\ line. While randomly checking tens of locations selected from the region, in agreement with \citet{2009A&A...495..587Y}, I found that the contribution of Fe\,{\sc xii} 195.2\,\AA\ is more significant in the brighter regions than on average (see a few examples in Figure\,\ref{fig:fe12samp}). However, the blend cannot be easily removed at most positions in the region because the profiles are well fitted by single Gaussian functions (see examples in Figures\,\ref{fig:fe12samp}c\&d). Although the blend might be removed by double Gaussian fits with assumptions on a few parameters of the blending component\,\citep[see e.g.][]{2016ApJ...827...99T}, this could easily introduce additional artificial effects, and the fits do not converge in many cases. Nevertheless, the Fe\,{\sc xii}\,195.1\,\AA\ Doppler map still gives a good indication of the velocity at the corresponding temperature, provided the Fe\,{\sc xii} 202.0\,\AA\ map is kept for reference. \begin{figure} \includegraphics[clip,trim=0cm 0cm 0cm 0cm,width=\linewidth]{iris_si4_raster_width_fe13.eps} \caption{Nonthermal velocities of the region measured with IRIS Si\,{\sc iv}\ 1393.8\,\AA\ (panel a), EIS Fe\,{\sc xii} (panel b) and Fe\,{\sc xiii} (panel c). The purple and black contour lines outline the magnetic polarities, and the black dotted lines trace a few loop threads identified in the Si\,{\sc iv}\ radiance image as shown in Figure\,\ref{fig:sprad}.} \label{fig:spwid} \end{figure} \subsubsection{Nonthermal velocities} We measured the nonthermal velocities of the region by assuming the thermal temperatures to be the formation temperatures of the lines (listed in Table\,\ref{tab_dsp}).
In Figure\,\ref{fig:spwid}, I display the nonthermal velocities of the region measured with IRIS Si\,{\sc iv}\,1393.8\,\AA\ and EIS Fe\,{\sc xii}\,195.1\,\AA\ and Fe\,{\sc xiii}\,202.0\,\AA. \par In IRIS Si\,{\sc iv}\,1393.8\,\AA, a few locations where flaring loops exist have nonthermal velocities of $\sim$30\,{\rm km\;s$^{-1}$}, significantly larger than those of the surrounding background. Apart from those locations, the nonthermal velocities of the loop region are less than 10\,{\rm km\;s$^{-1}$}, much smaller than those of the surrounding region. This small nonthermal velocity, dominant in the loops, suggests that most of the emerging loop plasma was little disturbed or heated by nonthermal processes in the transition region before flaring up. \par Using the EIS data, nonthermal velocities from Fe\,{\sc xii}\,195.1\,\AA\ and Fe\,{\sc xiii} 202.0\,\AA\ are derived (Figure\,\ref{fig:spwid}b\&c). In the loop region, the nonthermal velocities derived from Fe\,{\sc xii}\,195.1\,\AA\ are in the range of 40--50\,{\rm km\;s$^{-1}$}, overall larger than those from Fe\,{\sc xiii} 202.0\,\AA\ (30--40\,{\rm km\;s$^{-1}$}). This discrepancy is again due to the blend of Fe\,{\sc xii} 195.2\,\AA\ in the red wing of Fe\,{\sc xii}\,195.1\,\AA. Please note that the radiometric calibration applied to the present data might overestimate the values by 10--20\%\,\citep{2016ApJ...820...63B,2016ApJ...827...99T}. Taking this effect into account, the nonthermal velocities measured here are still larger than those in normal active regions\,\citep[$\sim20$\,{\rm km\;s$^{-1}$}, see e.g.][]{2016ApJ...820...63B,2016ApJ...827...99T}. Since these loops are active and have a size similar to coronal bright points, their coronal nonthermal velocities are consistent with those of active region bright points\,\citep{2009ApJ...705L.208I}.
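The width decomposition behind these measurements can be sketched as below, assuming Gaussian profiles, an instrumental width removed in quadrature, and a thermal width fixed by the line formation temperature, as stated above. The numerical values in the usage test are illustrative, not the measured ones.

```python
import numpy as np

C = 299792.458     # speed of light [km/s]
KB = 1.380649e-23  # Boltzmann constant [J/K]
AMU = 1.660539e-27 # atomic mass unit [kg]

def nonthermal_velocity(fwhm_obs, fwhm_inst, lam0, t_form, mass_amu):
    """Nonthermal velocity xi [km/s] from an observed line FWHM [Angstrom].

    Uses the standard Gaussian decomposition
        FWHM_obs^2 = FWHM_inst^2 + 4 ln2 (lam0/c)^2 (2 k T / m + xi^2),
    with the thermal temperature T fixed at the line's formation
    temperature, as done in the text.
    """
    v_th2 = 2.0 * KB * t_form / (mass_amu * AMU) / 1e6   # (km/s)^2
    w2 = (fwhm_obs**2 - fwhm_inst**2) / (4.0 * np.log(2.0)) * (C / lam0)**2
    return np.sqrt(max(w2 - v_th2, 0.0))
```

Because the thermal term for iron lines is small (about 20\,km\,s$^{-1}$ at coronal temperatures), residual errors in the instrumental width dominate the uncertainty of small nonthermal velocities.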
\par Nevertheless, both nonthermal velocity maps from the EIS data indicate that the entire loop region has nonthermal velocities similar to those of the surrounding quiet corona. From the point of view of nonthermal velocity, this indicates that the emerged loop plasma is not much different from the normal coronal plasma. If the emerging loops have been heated by any nonthermal processes, these should have occurred before the loops reached these temperatures. \subsubsection{Electron density measured with coronal lines} \begin{figure*} \includegraphics[clip,trim=0cm 0cm 0cm 0cm,width=\linewidth]{eis_fe12_13_density.eps} \caption{Electron-density maps of the loop region measured with the EIS line pairs Fe\,{\sc xii} $\lambda\lambda$186.9/195.1 and Fe\,{\sc xiii} $\lambda\lambda$202.0/203.8. The contours and the dotted lines are the same as those shown in Figures\,\ref{fig:sprad}--\ref{fig:spwid}.} \label{fig:eisne} \end{figure*} Using the EIS spectroscopic data, I produce electron-density maps (Figure\,\ref{fig:eisne}) of the loop region using the intensity ratios of the line pairs Fe\,{\sc xii} $\lambda\lambda$186.9/195.1 and Fe\,{\sc xiii} $\lambda\lambda$202.0/203.8\,\citep[both are recommended for density diagnostics, see][]{2007PASJ...59S.857Y,2009A&A...495..587Y} and version 8.0.7 of the CHIANTI atomic database\,\citep{1997A&AS..125..149D,2015A&A...582A..56D}. The electron density derived from Fe\,{\sc xii} $\lambda\lambda$186.9/195.1 is larger than that from Fe\,{\sc xiii} $\lambda\lambda$202.0/203.8 by a factor of $\sim$2. This discrepancy between the two line pairs has been discussed in detail by \citet{2009A&A...495..587Y}. The discrepancy was expected to be much reduced with the new atomic model used in this version of the CHIANTI database\,\citep{2015A&A...582A..56D}. However, the discrepancy has been found to be worse in the EIS data obtained in recent years, and cannot be corrected by the new atomic database alone (Giulio Del Zanna, private communication).
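The ratio-to-density inversion behind such maps can be sketched as follows. The ratio curve here is a toy monotonic function for illustration only; the actual analysis uses theoretical ratio-versus-density curves computed with the CHIANTI atomic database.

```python
import numpy as np

# Hypothetical, monotonic theoretical ratio-vs-density curve for a
# density-sensitive line pair (e.g. Fe XII 186.9/195.1). Real values
# would be tabulated from the CHIANTI atomic database.
LOG_NE_GRID = np.linspace(8.0, 11.0, 31)                      # log10(n_e / cm^-3)
RATIO_GRID = 0.05 + 0.45 / (1.0 + 10 ** (9.5 - LOG_NE_GRID))  # toy sigmoid curve

def density_from_ratio(observed_ratio):
    """Invert an observed intensity ratio to log10 electron density
    by linear interpolation on the theoretical curve (which must be
    monotonic over the grid for the inversion to be unique)."""
    return np.interp(observed_ratio, RATIO_GRID, LOG_NE_GRID)
```

Outside the steep part of the curve the ratio saturates, which is why a line pair is only recommended for diagnostics over the density range where its ratio remains sensitive.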
The discrepancy appears to result from a combination of many factors that have yet to be fully understood. The electron density in the loop region is in the range of 2--$8\times10^9$\,cm$^{-3}$ measured with Fe\,{\sc xii} $\lambda\lambda$186.9/195.1, and in the range of 1--$4\times10^9$\,cm$^{-3}$ with Fe\,{\sc xiii} $\lambda\lambda$202.0/203.8. These values are significantly larger than that in the background corona (where the density is about $0.6\times10^9$\,cm$^{-3}$). This indicates that these loops have not only been heated to coronal temperatures, but also been filled with denser plasma. Furthermore, the electron density of the north footpoint is larger than that of the south by a factor of $\sim$2. \subsubsection{Electron density along a cool loop} \begin{figure*} \includegraphics[clip,trim=0cm 0cm 0cm 0cm,width=\linewidth]{loop_o4_ne.eps} \caption{The electron density derived in a few regions along a bright transition region loop using IRIS spectroscopic data. For each region, the (normalised) O\,{\sc iv} 1399.8\,\AA\ and 1401.2\,\AA\ and Si\,{\sc iv}\ 1402.8\,\AA\ profiles are displayed. The labels following ``U'' give the normalisation scale for each profile. In panels (b--e), the orange lines show the single Gaussian fits to the corresponding profiles. In panels (b--e), the labels following ``R1'' give the line ratio of O\,{\sc iv} 1399.8\,\AA\ to O\,{\sc iv} 1401.2\,\AA, and the electron density (in log$_{10}$) derived from this ratio is given by the numbers following the labels ``Ne1''. The labels following ``R2'' show the line ratio of Si\,{\sc iv}\ 1402.8\,\AA\ to O\,{\sc iv} 1401.2\,\AA, and the electron density (in log$_{10}$) derived from this ratio is given by the numbers following the labels ``Ne2''.
These values of electron density are derived with the theoretical model based on the quiet-Sun DEM, as shown in Table 2 of \citet{2018ApJ...857....5Y}.} \label{fig:irisne} \end{figure*} In some cases (normally in flaring events), the density-sensitive line pair O\,{\sc iv}\,$\lambda\lambda$1399.8\,\AA/1401.2\,\AA\ is strong enough in IRIS observations and thus can provide an accurate diagnostic of the electron density in the solar transition region with unprecedentedly high resolution\,\citep[see][]{2015arXiv150905011Y,2018ApJ...857....5Y}. Here, I found that the O\,{\sc iv} line pair has a good signal-to-noise ratio at some locations of a flaring transition region loop (see Figure\,\ref{fig:irisne}), which gives an opportunity to measure the electron density along this transition region loop. In Figure\,\ref{fig:irisne}, I show O\,{\sc iv} line profiles at a few locations along the flaring loop. The length of the loop is about 12$^{''}$. It is apparently flaring in its north portion, and the length of the flaring section is about 8$^{''}$. The flaring portion of the loop shows about 10\,{\rm km\;s$^{-1}$}\ blueshift in the Doppler map and about 25\,{\rm km\;s$^{-1}$}\ nonthermal velocity in Si\,{\sc iv}\,1393.8\,\AA. In Figure\,\ref{fig:irisne}, we can see that the O\,{\sc iv} 1399.8\,\AA\ line at a few locations in the middle of the loop has a relatively good signal-to-noise ratio (Figure\,\ref{fig:irisne}b--d), allowing density diagnostics using the O\,{\sc iv} line pair. The electron densities measured at these three locations are $5.0\times10^{10}$\,cm$^{-3}$ (b), $7.9\times10^{10}$\,cm$^{-3}$ (c) and $1.6\times10^{10}$\,cm$^{-3}$ (d). These values are consistent with those measured in active region loops by \citet{2016A&A...594A..64P}. \par Because the O\,{\sc iv} lines are often very weak in IRIS data, the line ratio of Si\,{\sc iv}\ 1402.8\,\AA\ to O\,{\sc iv} 1401.2\,\AA\ is also frequently used for density diagnostics\,\citep[see details in][]{2018ApJ...857....5Y}.
Using the theoretical model based on the quiet-Sun DEM\,\citep[see Table 2 of][]{2018ApJ...857....5Y}, the electron densities derived from the line ratios of Si\,{\sc iv}\ 1402.8\,\AA\ to O\,{\sc iv} 1401.2\,\AA\ at the three locations are $1.0\times10^{11}$\,cm$^{-3}$ (b), $7.9\times10^{10}$\,cm$^{-3}$ (c) and $6.3\times10^{10}$\,cm$^{-3}$ (d). Even though the line ratios here fall in the range where they are not very sensitive to the electron density\,\citep[see Figure 2 of][]{2018ApJ...857....5Y}, I am confident that the electron density measured with the line ratio of Si\,{\sc iv}\ 1402.8\,\AA\ to O\,{\sc iv} 1401.2\,\AA\ is above $5\times10^{10}$\,cm$^{-3}$, which is of the same magnitude as that from the O\,{\sc iv} line pair. This suggests that using the line ratio of Si\,{\sc iv}\ 1402.8\,\AA\ to O\,{\sc iv} 1401.2\,\AA\ is an acceptable approach to estimate the electron density. Because the dependence of the Si\,{\sc iv}\ 1402.8\,\AA\ to O\,{\sc iv} 1401.2\,\AA\ line ratio on the electron density can vary significantly among theoretical models, it is worth noting that the loops here are better fitted by the quiet-Sun DEM model than by the log-linear DEM model\,\citep[see][for details]{2018ApJ...857....5Y}. \par The electron density measured with transition region lines is an order of magnitude larger than that with coronal lines. This suggests that the loop threads with coronal temperatures are not spatially identical to those with transition region temperatures. This is in agreement with the imaging data, in which most of the loop threads appear to have a narrow range of temperatures. \subsubsection{Estimation of temperatures} \label{sect_dem} \begin{figure} \centering \includegraphics[clip,trim=0.2cm 2cm 0.2cm 0.5cm,height=0.9\textheight]{dem_5regs_subbk.eps} \caption{The DEM curves for the five regions (``A--E'') marked in Figure\,\ref{fig:sprad}h.
The grey curves are the solutions given by the Monte Carlo simulations, and the best solution is given by the black line.} \label{fig:eisdem} \end{figure} The spectroscopic data provide observations in multiple spectral lines at various temperatures. This gives an opportunity to investigate the temperature profiles of the region using the differential emission measure (DEM) technique\,\citep[see e.g.][and references therein]{2009ApJ...705.1522B,2011ApJ...730...85B,2012A&A...539A.146H,2015ApJ...807..143C,2018ApJ...856L..17S}. Here, I used the DEM procedure packaged with CHIANTI ({\it chianti\_dem.pro}), in which the {\it XRT\_DEM} package is called to calculate the DEM profiles. In order to reduce the errors introduced by the radiometric calibrations of different instruments, I used spectral lines from EIS only. Additionally, I used only spectral lines of iron to avoid the uncertainty introduced by the elemental abundances. The spectral lines used for the DEM analysis are denoted in Table\,\ref{tab_dsp}, and most of them have been used in previous studies\,\citep[e.g.][]{2009ApJ...705.1522B,2011ApJ...730...85B}. The spectral lines used in the DEM analysis are formed at temperatures between $4.5\times10^5$ and $2.8\times10^6$\,K (i.e. logT/K=5.65--6.45); therefore, the derived DEM profiles are constrained by the observations only in this temperature range. \par Background and foreground subtraction is another important issue that has to be considered in the DEM analysis\,\citep[see e.g.][]{2003A&A...406.1089D,2005ApJ...633..499A}. A region that is representative of the background and foreground is normally selected as close to the analyzed region as possible, and it should not contain any active features (e.g. other loop threads) in any of the spectral lines used for the DEM analysis. The background and foreground subtraction is difficult here because the region of interest contains many loop threads and they cannot be distinguished in the EIS data.
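The background/foreground estimate adopted here for the ``BK'' region (averaging the 10\% of pixels with the lowest intensities) can be sketched as a simple percentile-based routine; this is an illustrative helper, not the actual analysis code:

```python
import numpy as np

def background_level(intensity_map, fraction=0.10):
    """Background/foreground estimate from a reference region.

    Sort all pixels of the region by intensity and average the
    lowest `fraction` of them, as done for the `BK' region in
    the text. Returns a single intensity level to subtract.
    """
    pixels = np.sort(np.asarray(intensity_map, dtype=float).ravel())
    n = max(1, int(fraction * pixels.size))
    return pixels[:n].mean()
```

The same level is then subtracted from each analyzed subregion before the line intensities enter the DEM inversion, so that only the excess emission of the loop system constrains the DEM.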
Here, I performed the DEM analysis on five subregions of the loop system (see Figure\,\ref{fig:sprad}h), including three from the footpoints (``A'', ``B'' and ``E'') and two taken from the middle of the loops (``C'' and ``D''). I first selected a large region that includes all the analyzed regions (denoted as ``BK'' in Figure\,\ref{fig:sprad}h). After sorting all the pixels in the ``BK'' region in order of increasing intensity, the background and foreground emission is taken as the average of the 10\% of pixels with the lowest intensities. \par The DEM profiles of the analyzed regions are shown in Figure\,\ref{fig:eisdem}. The Monte Carlo simulations appear to prefer peak temperatures at around logT/K=6.1--6.2 (i.e. $1.3$--$1.6\times10^6$\,K). However, these DEM profiles are not Gaussian, with either no clear peak or flat peaks; therefore, they do not allow a quantitative evaluation of the temperature distributions\,\citep[see e.g.][]{2008ApJ...686L.131W,2009ApJ...700..762W}. Such flat peaks in the DEM profiles could be indications of multi-thermal components in the analyzed regions, where loop threads with different temperatures are included. \subsection{Discussions} \subsubsection{Flux emergence at its late phase} While a magnetic flux tube that hosts loop plasma is emerging from the solar convective zone, it experiences a dramatic change of plasma environment in the solar lower atmosphere, where the plasma turns from partially to fully ionised and the plasma beta drops from greater than one to less than one.
This might lead to the serpentine geometry of flux tubes observed in the solar lower atmosphere\,\citep{2001ApJ...554L.111F,2007A&A...467..703C,2008ApJ...687.1373C,2009ApJ...691.1276A,2014LRSP...11....3C,2018ApJ...853L..26H}, and this could lead to a variety of energetic events such as Ellerman bombs and UV bursts\,\citep[see e.g.][]{2004ApJ...614.1099P,2009ApJ...701.1911P,2014Sci...346C.315P,2015ApJ...813...86I,2015ApJ...798...19N,2015ApJ...812...11V,2016MNRAS.463.2190N,2016ApJ...824...96T,2017ApJ...836...52Z,2017ApJ...838..101H,2018ApJ...852...95N,2018arXiv180505850Y}. \par By analysing observations of a flux emerging region at its early stage, \citet{2017ApJ...836...63T} reported two types of local heating events that could result from magnetic reconnection in the bald patches (magnetic dips) of the emerging flux and from shocks or strong compression caused by fast downflows along the overlying magnetic system. Similarly, \citet{2018ApJ...854..174T} observed IRIS bombs with spatially-resolved bi-directional jets at the earliest stage of a flux emerging region, and found that they are associated with bald patches and located in regions with a large squashing factor at a height of about 1\,Mm, which strongly suggests magnetic reconnection in the solar lower atmosphere. Moreover, \citet{2018ApJ...856..127G} reported observations of long-lasting UV bursts occurring at the late phase of flux emergence, while emerging flux tubes are interacting with ambient fields. Therefore, activities in magnetic loops above a flux emerging region could provide observational hints about how magnetic flux emerges through and affects the ambient field in the solar atmosphere. \par At the late phase of this flux emergence, very few energetic UV bursts were observed. This is different from the behavior of the flux emergence in its early phase.
Besides the many loops with multiple temperatures that had formed between the two major polarities, we can also see smaller loops connecting the major polarities and the smaller ones. These smaller loops could be formed by (1) newly emerging flux that is observed as small magnetic features with opposite polarity appearing at the side of and moving away from the major ones, and (2) pre-existing loops that are dragged downward and observed as small magnetic features with the same polarity splitting from and moving away from the major ones. Because the total magnetic flux is still increasing and the loops show systematic blue-shifts in the transition region Doppler map, flux emergence should still be dominant at this stage. These smaller loops might move toward and interact with each other, which leads to magnetic cancellation between small magnetic features. A question is whether such interactions among the smaller loops could result in energy release that heats the loops. In the present observations, only brightenings in AIA 1700\,\AA\ are seen, and no energetic event (such as a UV burst) has been detected in the IRIS Si\,{\sc iv}\ spectral data. This question remains open, and we will investigate it further in a follow-up work using IRIS sit-and-stare data that were taken after the raster data reported here. \subsubsection{Heating of the loops} We found that the nonthermal velocities of the loop system are generally smaller than the background in the transition region, but comparable in the corona. This suggests that most of the emerging loops are heated only after they reach the transition region. Because the footpoints of the loop system have higher temperatures and the flaring transition region loops show bright footpoints beforehand, the heating processes should take place in the footpoints.
\par The parameters (such as loop width, electron density and temperature profiles) of these loops measured with the EIS data are similar to those of larger loops\,\citep[see e.g.][]{2005ApJ...633..499A,2008ApJ...686L.131W,2009ApJ...694.1256T,2009ApJ...700..762W}. Indeed, some larger loop systems studied previously also contain loop threads at temperatures from the transition region to the corona\,\citep{2008ApJ...686L.131W,2009ApJ...694.1256T}. Although those loop systems might experience similar flux emergence processes as the current one, the loop system observed here is much smaller (in length) and the heating therein could be much different. \par Unlike many hot loops in the corona, the cool loops observed here show very dynamic evolution, suggesting that they are heated in an impulsive way. We also observed that some flaring cool loops did not have any bright footpoints in the spectroscopic data. This is consistent with recent one-dimensional simulations of loops with apex densities above $10^9$\,cm$^{-3}$ that were assumed to be heated by nano-flares occurring at the apex, with heating either in an electron beam model with an energy cutoff up to 15\,keV or in a thermal conduction model\,\citep{2018ApJ...856..178P}. However, the loops observed here are systematically emerging and evolving, and they are also much shorter than those modelled by \citet{2018ApJ...856..178P}. Because the heating mechanism also depends on the loop length, whether the loops observed here were heated in the same way needs further modelling constrained by the present observations. The follow-up work using sit-and-stare data will also shed more light on this problem. \par The size of these loops is comparable to coronal bright points\,\citep[see][for a review of this topic]{madjarska_lrsp}, which also have a multi-thermal nature\,\citep[see e.g.][]{2012A&A...545A..67M}.
One of the heating mechanisms of coronal bright points is believed to be magnetic reconnection among converging magnetic loops\,\citep[e.g.][]{1994ApJ...427..459P,2016ApJ...818....9M}. Could such loop systems as observed here be formed and heated in a similar way? This question is worth investigating further using higher-cadence IRIS data, and doing so will also help understand the possible heating mechanisms of coronal bright points. \section{Conclusions} \label{sect_conclusion} In the present study, I report on IRIS, Hinode/EIS, Hinode/XRT, SDO/AIA and SDO/HMI observations of a magnetic loop system above a flux-emerging region. The flux emergence is at its late phase, in which the two major polarities had already formed. The separation between the two major polarities is about 12$^{''}$. At the side of a major polarity, small magnetic features with opposite polarity were emerging and moving toward the other major polarity. Small magnetic features with the same polarity might also split from a major polarity and move toward the other major polarity. The small magnetic features with opposite polarities might meet and cancel each other while moving toward each other. \par A set of loops had formed between the two major polarities at the beginning of our observations. Some of the loop threads connect the two major polarities and some others connect one major and one smaller polarity. The cross-section of the loop threads is about 0.5$^{''}$\ as measured with the IRIS SJ images. We found that they consist of loop threads with temperatures from $2.5\times10^4$\,K (low transition region) to $2.8\times10^6$\,K (corona). Most of the loop threads with different temperatures are not co-spatial, suggesting that each loop thread spans only a small range of temperatures.
\par In the middle of the loop system, the Doppler maps show an upward velocity of $\sim$10\,{\rm km\;s$^{-1}$}\ in the transition region (Si\,{\sc iv}) and a downward velocity of $\sim$10\,{\rm km\;s$^{-1}$}\ in the corona (Fe\,{\sc xii}). In the transition region, the nonthermal velocities of most of the loops are found to be less than 10\,{\rm km\;s$^{-1}$}, much smaller than in the surrounding region. In the corona, by contrast, they do not differ from the surrounding region. The electron densities of the loop system measured at coronal temperatures are found to be in the range of 1--$4\times10^9$\,cm$^{-3}$, with larger values in the footpoints. Using the IRIS O\,{\sc iv} line pair, we are also able to measure the electron density of a flaring loop thread in the transition region and find it to be in the range of $2$--$8\times10^{10}$\,cm$^{-3}$, with an average of $\sim5\times10^{10}$\,cm$^{-3}$. The DEM profiles derived with EIS iron lines at a few locations of the loop system imply that these locations should contain loop threads with different temperatures. \par Our observations indicate that flux emergence in its late phase is much different from that at the early stage. It might involve magnetic reconnection between newly emerging flux and pre-existing flux, but it does not produce any UV bursts, which are signatures of magnetic reconnection in the lower solar atmosphere. Most of the emerging loops at this stage are likely to be heated after they reach the transition region or above, because most of them have small nonthermal velocities in Si\,{\sc iv}. The dynamics of these loops suggests that they are heated impulsively, and how this actually works requires further investigation. In a follow-up study, we will exploit an IRIS sit-and-stare dataset to investigate the evolution of this loop system, which will shed more light on their physics.
\acknowledgments {\it Acknowledgments:} I would like to thank the anonymous referee for the constructive comments and helpful suggestions, and Dr. Giulio Del Zanna for helpful discussions. This research is supported by the National Natural Science Foundation of China (41474150, 41627806, U1831112, 41404135). Z.H. thanks the China Postdoctoral Science Foundation and the Young Scholar Program of Shandong University, Weihai (2017WHWLJH07). The observation program at BBSO is supported by the Strategic Priority Research Program ``The Emergence of Cosmological Structures'' of the Chinese Academy of Sciences, Grant No. XDB09000000. Z.H. is grateful to the BBSO, IRIS and Hinode operating teams for their help and to the BBSO staff for their hospitality while carrying out the observing campaign. Z.H. acknowledges comments from Prof. Lidong Xia and useful discussions with Dr. Hui Tian. IRIS is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at the NASA Ames Research Center and major contributions to downlink communications funded by ESA and the Norwegian Space Centre. Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, and NASA and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (U.K.), NASA, ESA, and NSC (Norway). Courtesy of NASA/SDO, the AIA and HMI teams and JSOC. CHIANTI is a collaborative project involving George Mason University, the University of Michigan (USA) and the University of Cambridge (UK). \bibliographystyle{aasjournal}
\section{Introduction} Generative models such as Generative Adversarial Networks (GANs) \citep{goodfellow2014generative, li2015generative, arjovsky2017wasserstein, dziugaite2015training} have recently stood out as an important unsupervised method for learning, and efficiently sampling from, a complex, multi-modal target data distribution. Despite the celebrated empirical success, many questions on the theory \citep{liu2017approximation, liang2017well, singh2018nonparametric, liu2018inductive} and mechanism of GANs \citep{arora2017gans,arora2017generalization,daskalakis2017training, mescheder2017numerics} remain to be resolved. At the population level, one general formulation of the adversarial framework \citep{arjovsky2017wasserstein,li2015generative, dziugaite2015training, liu2017approximation, mroueh2017sobolev} considers the following minimax problem, \begin{align*} \min_{\mu \sim \mathcal{D}_{G}} \max_{f \in \mathcal{F}_{D}} \E_{Y \sim \mu} f(Y) - \E_{X \sim \nu} f(X). \end{align*} In plain language, given a target distribution $\nu$, one seeks a probability distribution $\mu$ from a \textit{generator class} $\mathcal{D}_G$ that minimizes the loss incurred by the best test function inside the \textit{discriminator class} $\mathcal{F}_D$. In practice, both \textit{the generator and the discriminator classes} are represented by deep neural networks. To be concrete, $\mathcal{D}_G$ denotes the implicit distributions obtained by pushing a random input (for example, a uniform distribution) through a neural network, and $\mathcal{F}_D$ represents functions realizable by a certain neural network architecture. In practice, one only has access to finite samples of the target distribution $\nu$. Let us denote by $\widehat{\nu}^n$ the empirical measure based on $n$ i.i.d. samples from $\nu$.
Given finite data samples, the adversarial framework solves the following problem \begin{align} \label{eq:gan} \widehat{\mu}_n = \argmin_{\mu \sim \mathcal{D}_{G}} \max_{f \in \mathcal{F}_{D}} \E_{Y \sim \mu} f(Y) - \E_{X \sim \widehat{\nu}^n} f(X). \end{align} The adversarial loss is also referred to as the integral probability metric (IPM). Define the IPM for a symmetric function class $\mathcal{F}$ as \begin{align*} d_{\mathcal{F}}(\mu, \nu) := \sup_{f \in \mathcal{F}} \E_{Y \sim \mu} f(Y) - \E_{X \sim \nu} f(X) = \sup_{f \in \mathcal{F}} \int\limits_{\Omega} f (d\mu - d\nu). \end{align*} By choosing different $\mathcal{F}$'s, the adversarial framework can express commonly used metrics. To name a few: (1) Wasserstein GAN \citep{arjovsky2017wasserstein}: $\mathcal{F}$ consists of Lipschitz-$1$ functions, and the IPM is the Wasserstein-$1$ metric $d_W(\cdot, \cdot)$. (2) Maximum Mean Discrepancy (MMD) GAN \citep{dziugaite2015training,li2015generative, arbel2018gradient}: let $\mathcal{H}$ be a reproducing kernel Hilbert space (RKHS), and $\mathcal{F}$ consists of functions with bounded RKHS norm, $\mathcal{F} = \{ f \in \mathcal{H} ~|~ \| f \|_{\mathcal{H}} \leq 1 \}$. (3) Sobolev GAN \citep{mroueh2017sobolev}: $\mathcal{F}$ is a Sobolev space with certain smoothness. (4) Total Variation metric $d_{TV}(\cdot, \cdot)$: $\mathcal{F}$ consists of all functions bounded by $1$. Due to space constraints, we refer the readers to \cite{liu2017approximation} for other formulations of GANs. In the statistical literature, density estimation has been a classical topic in nonparametric statistics \citep*{nemirovski2000topics, tsybakov2009introduction, wassermann2006all}, as well as in parametric statistics \citep*{brown1986fundamentals}. In the parametric case, learning the density simply reduces to parameter estimation.
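To make the IPM concrete, the MMD case (2) is the one instance where the supremum over the unit ball of the RKHS has a closed form, $\mathrm{MMD}^2(\mu,\nu) = \E\, k(Y,Y') + \E\, k(X,X') - 2\,\E\, k(Y,X)$, and hence admits a direct plug-in estimate from samples. The following numpy sketch (purely illustrative; the function names and the Gaussian-kernel choice are ours, not from any of the cited works) computes this estimate:

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """Gram matrix K[i, j] = exp(-||a_i - b_j||^2 / (2 * bandwidth^2))."""
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-sq / (2 * bandwidth**2))

def mmd_squared(y, x, bandwidth=1.0):
    """Biased (V-statistic) estimate of MMD^2 between samples y ~ mu and x ~ nu."""
    kyy = gaussian_kernel(y, y, bandwidth)
    kxx = gaussian_kernel(x, x, bandwidth)
    kyx = gaussian_kernel(y, x, bandwidth)
    return kyy.mean() + kxx.mean() - 2 * kyx.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 2))        # samples from the target nu
y_close = rng.normal(0.0, 1.0, size=(500, 2))  # generator matching nu
y_far = rng.normal(3.0, 1.0, size=(500, 2))    # generator with a mean shift
# Matching distributions yield a value near zero; shifted ones a clearly larger value.
```

With identical samples the estimate is exactly zero, and it grows with the discrepancy between the two distributions, which is the sense in which the IPM serves as an adversarial loss.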
In the nonparametric case, the minimax optimal rate of convergence has been understood fairly well for a wide range of density function classes quantified by their smoothness \citep*{stone1982optimal}. We would like to point out a simple yet important connection between the two fields: in nonparametric statistics, the model grows in size to accommodate the complexity of the data, which is reminiscent of the sample-dependent complexity (such as depth, width, or norms of weights) of deep neural networks. Therefore, characterizing rates with explicit dependence on the complexity of both the generator and the discriminator (for neural network classes and more) will shed light on how well GANs learn distributions. The current paper studies both the \textit{Adversarial Framework} and \textit{Generative Adversarial Networks} for learning densities from a statistical vantage point. The focus of the current paper is \textit{not} on the optimization side of how to solve for $\widehat{\mu}_n$ efficiently, but rather on the statistical front. We intend to answer: \begin{enumerate} \item How well do GANs learn a wide range of target distributions (both nonparametric and parametric), under a collection of objective evaluation metrics? \item How can the adversarial framework be utilized to achieve better theoretical guarantees through the lens of regularization? \end{enumerate} We discover and isolate a new notion of regularization, which we call \textit{generator/discriminator pair regularization}, that provides rigorous guidance on balancing the complexities of the generator and discriminator. We emphasize that several curious features of this pair regularization appear to be new to the literature. As a unified theme in the theory, we develop powerful oracle inequalities for analyzing the generative adversarial framework, which could be of independent interest for further theoretical research on GANs.
\subsection{Contribution and Organization} \label{sec:contribution} The paper is organized into two main parts: the \textit{adversarial framework} and the \textit{generative adversarial networks}. \paragraph{\bf Roadmap of Results and Overall Goal} Our overall goal is to provide a complete statistical treatment of the adversarial framework and GANs' mechanism under two important settings: first, the generator/discriminator being a nonparametric class, for the adversarial framework; second, the generator/discriminator being a class parametrized by neural networks, as in GANs. We summarize in Table~\ref{table: summary} a roadmap of results for readers to navigate. In Table~\ref{table: summary}, we reserve the following symbols for characteristics of the theorems. \begin{align} \label{eq:special-symbols} & ({\color{red} \mathcal{G} \dagger}): \quad \text{generator $\mathcal{G}$ mis-specified for $\nu$, $\nu \notin \mathcal{G}$} \\ & ({\color{blue} \mathcal{F} \ddagger}): \quad \text{discriminator $\mathcal{F}$ mis-specified for the metric, $d_{\mathcal{F}} \neq d_{eval}$} \nonumber\\ & ({\color{brown} m \ast}): \quad \text{the result accounts for finite $m$ samples of the generator} \nonumber \end{align} The main \textit{technical contributions} are the development of the oracle inequalities for GANs, and the formulation of the novel generator/discriminator pair regularization. \begin{table}[ht!] \caption{Roadmap of results.
The symbols are defined in \eqref{eq:special-symbols}: ($\color{red} \mathcal{G} \dagger$) and ($\color{blue} \mathcal{F} \ddagger$) to denote the mis-specification for the generator class and the discriminator class respectively, and ($\color{brown} m \ast$) to indicate the dependence on the number of generator samples.} \newcolumntype{Y}{>{\centering\arraybackslash}X} \begin{tabularx}{\columnwidth}{@{} m{2cm} Y Y Y| Y Y c @{}} \toprule Goal & Evaluation Metric & \multicolumn{2}{c|}{Results} & Generator Class $\mathcal{G}$ & Discriminator Class $\mathcal{F}$ & ~~~Property \\ \midrule \multirow[m]{3}{=}{Adversarial Framework (nonparametric)} & \multirow{3}{*}{$d_{\mathcal{F}}$} & Sobolev GAN & minimax optimal (Thm. \ref{thm:optimal-rates-sobolev}) & Sobolev $W^\alpha$ & Sobolev $W^{\beta}$ & \\ \cmidrule{3-7} & & MMD GAN & upper bound (Thm. \ref{thm:optimal-rates-rkhs}) & smooth subspace in RKHS & RKHS $\mathcal{H}$ & \\ \cmidrule{3-7} & & & oracle results (Thm. \ref{thm:nonparam-gans}) & any & Sobolev $W^{\beta}$ & $\color{red} \mathcal{G} \dagger$ \\ \midrule \multirow[m]{3}{=}{Generative Adversarial Networks (parametric)} & $d_{TV}$ & leaky-ReLU GANs & upper bound (Thm.~\ref{thm:leaky-ReLU}) & leaky-ReLU & leaky-ReLU & $\color{blue} \mathcal{F} \ddagger$, $\color{brown} m \ast$\\ \cmidrule{2-7} & $d_{TV}, d_{JS}, d_{H}$ & any GANs & oracle results (Thm.~\ref{thm:param-gans} \& \ref{thm:param-gans-hellinger}) & neural networks & neural networks & $\color{red} \mathcal{G} \dagger$, $\color{blue} \mathcal{F} \ddagger$, $\color{brown} m \ast$\\ \cmidrule{2-7} & $d_{W}$ & Lipschitz GANs & oracle results (Cor.~\ref{cor:wass}) & Lipschitz neural networks & Lipschitz neural networks & $\color{red} \mathcal{G} \dagger$, $\color{blue} \mathcal{F} \ddagger$, $\color{brown} m \ast$\\ \bottomrule \end{tabularx} \label{table: summary} \end{table} \paragraph{\bf Adversarial Framework} One key component of GANs is the adversarial framework: evaluating the performance of the 
learned density by the adversarial loss. Under the adversarial loss $d_{\mathcal{F}_D}(\cdot, \cdot)$ (the IPM induced by the specified discriminator class $\mathcal{F}_D$), we study the minimax optimal rates for learning the target density $\nu$ based on $n$ i.i.d. samples. We formulate this adversarial framework following the classic nonparametric literature by considering a wide range of nonparametric target densities $\nu$ and discriminator classes $\mathcal{F}_D$ quantified by their smoothness. Using a simple oracle inequality, we extend to the case where the generator class $\mathcal{D}_{G}$ mis-specifies the target density $\nu$, for the procedure \begin{align} \label{eq:adv-frame} \widehat{\mu}_n = \argmin_{\mu \sim \mathcal{D}_{G}} \max_{f \in \mathcal{F}_{D}} \E_{Y \sim \mu} f(Y) - \E_{X \sim \widehat{\nu}^n} f(X). \end{align} This procedure is in a general form, not specific to neural networks. \textit{Our contributions} are: (1) we characterize the minimax optimal rates of the adversarial framework for learning densities in classic nonparametric distribution families, and show how to achieve them; (2) we show explicitly how the structure of the target $\nu$ and that of the class $\mathcal{F}$ affect the minimax rate, and in which cases fast rates are possible. \paragraph{\bf Generative Adversarial Networks} In practice, GANs are parametrized by neural networks. Built on top of the adversarial framework, we directly analyze the rates for the following parametrized GAN estimator with the generator network $\mathcal{G}$ (parametrized by $\theta$) and discriminator network $\mathcal{F}$ (parametrized by $\omega$) \begin{align} \label{eq:gan-network} \widehat{\theta}_{m,n} \in \argmin_{\theta: g_\theta \in \mathcal{G}} \max_{\omega: f_\omega \in \mathcal{F}} ~~ \left\{ \widehat{\E}_m f_\omega (g_\theta(Z)) - \widehat{\E}_n f_\omega(X) \right\}. \end{align} Here $m$ and $n$ denote the numbers of generator samples and target distribution samples, respectively.
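To see the estimator \eqref{eq:gan-network} in action, consider a deliberately simple setting of our own choosing (entirely illustrative, not from the results above): a location-family generator $g_\theta(z) = z + \theta$ and linear discriminators $f_\omega(x) = \omega x$ with $|\omega| \leq 1$. The inner supremum then reduces to the absolute difference of sample means, so the minimax solution should recover $\widehat{\theta} \approx \widehat{\E}_n X - \widehat{\E}_m Z$. A brute-force grid-search sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(0.0, 1.0, 2000)        # m generator input samples Z ~ pi
x = rng.normal(2.0, 1.0, 2000)        # n target samples X ~ nu (mean 2)

thetas = np.linspace(-1.0, 5.0, 601)  # generator parameter grid (spacing 0.01)
omegas = np.array([-1.0, 1.0])        # extreme points of {f(x) = w * x, |w| <= 1}

def adversarial_loss(theta):
    # sup over omega of  E_m[f(g_theta(Z))] - E_n[f(X)]; attained at w = +/-1,
    # so it equals |mean(z + theta) - mean(x)|.
    gz_mean, x_mean = (z + theta).mean(), x.mean()
    return max(w * gz_mean - w * x_mean for w in omegas)

theta_hat = thetas[np.argmin([adversarial_loss(t) for t in thetas])]
# theta_hat lands (up to grid spacing) on mean(x) - mean(z), close to the true shift 2.
```

The sketch makes the roles of $m$ and $n$ concrete: both sample means fluctuate, and the minimax estimate inherits both sources of error, which is exactly the dependence tracked in the results below.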
We remark on two key facts about this procedure. First, the density estimator is implicit: it is the distribution of the random variable $g_{\widehat{\theta}_{m,n}}(Z)$. Theory for implicit density estimators (such as GANs) is less developed in the literature. Second, the evaluation metrics we investigate include the Jensen-Shannon divergence $d_{JS}$, Total Variation $d_{TV}$, Wasserstein $d_W$ and Hellinger $d_{H}$ distances, which differ from the metric induced by the discriminator class $\mathcal{F}$. \textit{Our contributions} are: (1) we study the parametric rates for the implicit density estimator (the density of $g_{\widehat{\theta}_{m,n}}(Z)$) for the target $\nu$, when both $\mathcal{G}$ and $\mathcal{F}$ are parametrized by general neural networks; (2) we rigorously formulate the complex trade-offs in the choices of the generator $\mathcal{G}$ and the discriminator $\mathcal{F}$ as a \textit{pair regularization}, and evaluate how this new notion of regularization affects the rates for GANs; (3) as a direct application of the general theoretical framework, we showcase how to identify good $\mathcal{G}$ and $\mathcal{F}$ pairs that attain fast parametric rates using two examples: learning densities realizable by deep leaky ReLU networks, and learning multivariate Gaussian densities with GANs. In both cases, the upper rates do provide optimal sample complexity (up to logarithmic factors). Finally, the paper is organized as follows. Section~\ref{sec:nonparam} consists of the main nonparametric results and the adversarial framework. Section~\ref{sec:param} contains the main parametric results for GANs with neural network generator and discriminator classes, where we introduce the new notion of pair regularization. Further discussions on the generator/discriminator pair regularization are deferred to Section~\ref{sec:discussion}. The main proofs are collected in Section~\ref{sec:proof}, with the remaining proofs and supporting lemmas in Appendix~\ref{sec:append}.
\subsection{Preliminaries} We now introduce the preliminaries and notations. In the discussion, unless otherwise specified, we restrict the input space to be $\Omega = [0,1]^d$ and the base measure to be the Lebesgue measure. We use $\mu, \nu, \pi$ to denote the distributions, and also reserve $\mu(x), \nu(x), \pi(x)$ for the corresponding density functions w.r.t. the Lebesgue measure on $\Omega$ (the Radon-Nikodym derivative). In other words, for ease of notation we use $\int_{\Omega} f(x) \mu(x) dx$ and $\int_{\Omega} f d\mu$ to denote the same integration. $\| f\|_q := \left(\int_{\Omega} |f(x)|^q dx \right)^{1/q}$ denotes the $L_q$-norm of a function, for $1\leq q \leq \infty$. For a vector $w$, $\|w \|_q$ denotes the vector $\ell_q$-norm. We use the asymptotic notation $A(n) \precsim n^{\alpha}$ if $\varlimsup\limits_{n\rightarrow \infty} \frac{\log A(n)}{\log n} \leq \alpha$, holding other parameters fixed; similarly, $A(n) \succsim n^{\alpha}$ if $\varliminf\limits_{n\rightarrow \infty} \frac{\log A(n)}{\log n} \geq \alpha$. Call $A(n) \asymp n^{\alpha}$ if $A(n) \precsim n^{\alpha}$ and $A(n) \succsim n^{\alpha}$. $[K]:=\{ 0, 1, \ldots, K\}$ refers to the index set, for any $K \in \mathbb{N}_{>0}$. For a vector or a multi-index (possibly infinite dimensional), the subscript $i$ denotes the $i$-th component. Next, we introduce the function spaces. Let $d$ denote the dimension. For a multi-index $\gamma \in \mathbb{N}_0^d$, we use $D^{(\gamma)}$ to denote the $\gamma$-weak derivative of a function. For example, for infinitely differentiable $f \in C^{\infty}(\Omega)$, $D^{(\gamma)} f$ takes the form $D^{(\gamma)} f = \partial^{|\gamma|} f/\partial x_1^{\gamma_1} \ldots \partial x_d^{\gamma_d}$.
\begin{defn}[Sobolev space: $\alpha \in \mathbb{N}_{>0}$] \label{def:sobolev} For an integer $\alpha$, define the Sobolev space $W^{\alpha, q}(r)$ for $1\leq q\leq \infty$ with radius $r \in \mathbb{R}_{\geq 0}$ to be \begin{align*} W^{\alpha, q}(r) &:= \left\{ f: \Omega \rightarrow \mathbb{R} : ( \sum_{|\gamma| \leq \alpha} \| D^{(\gamma)} f \|_{q}^q )^{1/q} \leq r \right\}, \end{align*} where $\gamma$ is a multi-index and $D^{(\gamma)}$ denotes the $\gamma$-weak derivative. \end{defn} We further consider a general Reproducing Kernel Hilbert Space (RKHS) $\mathcal{H} \subset L^2_{\pi}$ (with $\pi$ as the base measure) endowed with the RKHS norm $\| \cdot \|_{\mathcal{H}}$, and the corresponding positive semidefinite kernel $K(\cdot, \cdot): \Omega \times \Omega \rightarrow \mathbb{R}$. By Mercer's theorem, one can characterize this RKHS via the following integral operator $\mathcal{T}_\pi: L^2_\pi \rightarrow \mathcal{H}$. \begin{defn}[Integral operator of RKHS] \label{def:integral-operator} Define the integral operator $\mathcal{T}_\pi: L^2_\pi \rightarrow \mathcal{H}$, \begin{align*} \mathcal{T}_{\pi} f(z) = \int_{\Omega} K(z, \cdot) f(\cdot) d \pi(\cdot), \end{align*} and denote the eigenfunctions of this operator by $\psi_i$, and the eigenvalues by $t_i, i \in \mathbb{N}$, with \begin{align*} \mathcal{T}_{\pi} \psi_i = t_i \psi_i, ~\text{and}~ \int_{\Omega} \psi_i \psi_j d \pi = \delta_{ij}. \end{align*} \end{defn} The following notion of combinatorial dimension for real-valued functions is credited to \citet*{pollard1990empirical}, and we will employ it as a complexity measure in deriving rates for GANs. \begin{defn}[Pseudo-dimension] \label{def:comb-dim} Let $\mathcal{F}:\Omega \rightarrow \mathbb{R}$ be a class of functions.
The pseudo-dimension of $\mathcal{F}$, denoted by ${\rm Pdim}(\mathcal{F})$, is the largest integer $m$ such that there exist $(X_i, y_i) \in \Omega \times \mathbb{R}, 1\leq i \leq m$, such that for any $(b_1, \ldots, b_m) \in \{ -1, 1 \}^m$ there exists $f \in \mathcal{F}$ with $\sign(f(X_i) - y_i) = b_i, \forall 1\leq i \leq m$. For a class $\mathcal{F}$ of real-valued functions, we can also define its Vapnik-Chervonenkis dimension ${\rm VCdim}(\mathcal{F}) := {\rm VCdim}(\sign(\mathcal{F}))$. \end{defn} Finally, for two functions $f: \mathbb{R}^d \rightarrow \mathbb{R}$ and $g: \mathbb{R}^p \rightarrow \mathbb{R}^d$, we denote by $f\circ g$ the composition $f(g(x))$. We use the following notion for the composition of function classes \begin{align} \label{eq:composition} \mathcal{F} \circ \mathcal{G} := \{ f\circ g ~|~ \forall f \in \mathcal{F}, g \in \mathcal{G}\}. \end{align} \section{The Adversarial Framework} \label{sec:nonparam} We begin by investigating the adversarial framework, including the Wasserstein, Sobolev, and MMD GANs. Recall that the adversarial framework employed by GANs proposes to evaluate the accuracy of learned densities via the adversarial loss specified by the discriminator class. The goal of this section is to study the fundamental difficulty and minimax optimal rates of learning a wide range of densities under different evaluation metrics defined by the adversarial framework. Through the lens of nonparametric statistics, we answer how the structure of the density and the choice of the evaluation metric affect the minimax rates, and when fast rates are possible. \subsection{Minimax Optimal Rates} \begin{thm}[Minimax optimal rates, Sobolev] \label{thm:optimal-rates-sobolev} Consider $\Omega = [0,1]^d$. Consider the target density $\nu(x) \in \mathcal{G} = W^{\alpha}(r)$ (w.r.t.
the Lebesgue measure) in the Sobolev space with smoothness $\alpha \in \mathbb{N}_{\geq 0}$ for some constant $r>0$, and the evaluation metric induced by $\mathcal{F} = W^{\beta}(1)$, the Sobolev space with smoothness $\beta \in \mathbb{N}_{\geq 0}$. Then the minimax optimal rate is \begin{align*} \inf_{\widetilde{\nu}_n} \sup_{\nu \in \mathcal{G}} \E d_{\mathcal{F}}\left( \nu, \widetilde{\nu}_n \right) \asymp n^{-\frac{\alpha+\beta}{2\alpha+d}} \vee n^{-\frac{1}{2}}, \end{align*} where $\widetilde{\nu}_n$ is any estimator for $\nu$ based on $n$ i.i.d. drawn samples $X_1, X_2,\ldots X_n \sim \nu$. \end{thm} \begin{remark} \rm The above establishes the minimax optimal rate for Sobolev GAN ($\beta=1$ for Wasserstein GAN as a special case), with explicit dependence on the smoothness $\alpha$ of the density and the smoothness $\beta$ of the evaluation metric. First, note there is an interesting transition at $\beta = d/2$ (independent of $\alpha$): above it the rate is the parametric $n^{-1/2}$, and below it the rate is nonparametric. Second, to avoid the curse of dimensionality in the rates, one needs the sum of smoothness to be proportional to the dimension, i.e. $\alpha+\beta = \Theta(d)$. Note that when $\beta$ is large, the rate is indeed faster, albeit under a weaker evaluation metric. How to choose a good discriminator $\mathcal{F}$ with provable guarantees under a strong evaluation metric such as $d_{TV}$ for GANs will be answered in Theorems \ref{thm:param-gans}-\ref{thm:leaky-ReLU}. \end{remark} \begin{remark}[Relations to the literature] \rm The above theorem is an improvement over an earlier draft \citep*{liang2017well} of this paper, which was the first to formalize nonparametric estimation under the adversarial framework. Admittedly, the improvement in the upper bound amounts to one line of the original argument, specifically Eqn.~\eqref{eq:improvement}.
The minimax lower bound of $n^{-\frac{\alpha+\beta}{2\alpha+d}}$ was first established in this paper (in the earlier draft, \cite*{liang2017well}, p.18-19, for density estimation). In this version we also provide a formal construction for the lower bound of $n^{-\frac{1}{2}}$. We acknowledge that an improvement of the upper bound in \cite*{liang2017well} was also carried out in a concurrent work \citep*{singh2018nonparametric}, in a general form (see also the references therein). We remark that the optimal upper bound was also obtained by \cite*{mair1996statistical}, in a slightly different setting. \end{remark} One can generalize the above theorem to more general RKHSs. The motivation is to accommodate target distributions supported on image manifolds, whose similarity is better measured by non-linear kernels. It is useful to derive the explicit dependence on the intrinsic dimension of the manifold and the kernel, rather than on the ambient dimension $d$. The Sobolev space considered in Thm.~\ref{thm:optimal-rates-sobolev} is a special RKHS. In addition, the generalization will enable us to provide theoretical rates for MMD GAN \citep{dziugaite2015training,li2015generative,arbel2018gradient}. In the next theorem, we assume that for all target densities $\nu \in \mathcal{G}$ of interest and all $i\in \mathbb{N}_{>0}$, there exists a universal constant bounding the variance of the eigenfunctions in Def.~\ref{def:integral-operator}, \begin{align} \label{eq:assumption-rkhs} \E_{X \sim \nu} \psi_i(X)^2 \leq C. \end{align} \begin{thm}[MMD rates, RKHS] \label{thm:optimal-rates-rkhs} Consider an RKHS $\mathcal{H} \subset L^2_\pi$ with base measure $\pi$. Assume that the eigenvalues of the integral operator $\mathcal{T}_{\pi}$ decay as $t_i \asymp i^{-\kappa}$ for all $i$, with parameter $\kappa \in \mathbb{R}_{>0}$.
Consider the evaluation metric $\mathcal{F} = \{ f ~|~ \| f \|_{\mathcal{H}} \leq 1\}$, and the target distribution $\nu$, whose Radon-Nikodym derivative $\frac{d\nu}{d\pi}$ w.r.t. the base measure $\pi$ lies in a smooth subspace $\mathcal{G} = \{ \nu ~|~ \| \mathcal{T}_{\pi}^{-(\alpha-1)/2} \frac{d\nu}{d\pi} \|_{\mathcal{H}} \leq r \}$ with smoothness parameter $\alpha \in \mathbb{R}_{>0}$ (for some fixed radius $r>0$). Under the assumption~\eqref{eq:assumption-rkhs}, we have \begin{align*} \sup_{\nu\in \mathcal{G}} \E d_{\mathcal{F}}\left( \nu, \widetilde{\nu}_n \right) \precsim n^{-\frac{(\alpha+1)\kappa}{2\alpha \kappa+2}} \vee n^{-\frac{1}{2}} . \end{align*} \end{thm} \begin{remark} \rm Remark that the above theorem works with a general base measure $\pi$ and domain $\Omega$. Here the target (the Radon-Nikodym derivative $\frac{d\nu}{d\pi}$) lies in a subset of the RKHS, with $\alpha$ quantifying its smoothness: the high frequency components decay sufficiently fast. This is a standard formulation studied in the RKHS literature, see \cite*{caponnetto2007optimal}. The parameter $\kappa$ describes the intrinsic dimension of the integral operator. When $\kappa > 1$, the intrinsic dimension (trace of $\mathcal{T}_\pi$) is bounded as ${\rm Tr}(\mathcal{T}_\pi) \asymp \sum_{i \geq 1} i^{-\kappa} \leq C$, and therefore the upper bound reads the parametric rate $n^{-\frac{(\alpha+1)\kappa}{2\alpha \kappa+2}} \vee n^{-\frac{1}{2}} = n^{-1/2}$. When $\kappa < 1$, to obtain $ \E d_{\mathcal{F}}\left( \nu, \widetilde{\nu}_n \right) \leq \epsilon, $ the sample complexity scales as $$n = \epsilon^{-\left( 2 + \frac{2}{\alpha+1}\left( \frac{1}{\kappa} - 1 \right) \right) }.$$ Therefore the curse of dimensionality only appears through the ``effective dimension'', described by $1/\kappa - 1$. The Sobolev space $W^{\beta}$ can be regarded as a special RKHS with $\kappa = \frac{2\beta}{d}$.
Therefore the lower bound in Thm.~\ref{thm:optimal-rates-sobolev} suggests that the rate for MMD GAN is also sharp, for a particular subclass. \end{remark} \subsection{Oracle Inequality and Regularization} \label{sec:nonparam-oracle} In this section, we use a simple oracle inequality to show that even when the generator class $\mathcal{D}_G$ --- typically represented by neural networks --- is mis-specified for the target distribution $\nu$, one can still derive oracle results based on the adversarial framework. Let us recall the notations. Denote by $\mathcal{D}_{G}$ the class of distributions represented by the generator, and by $\mathcal{F}_{D}$ the class of functions realized by the discriminator, \begin{align} \label{eq:gan-nonparam} \mu_n = \argmin_{\mu \sim \mathcal{D}_{G}} \max_{f \in \mathcal{F}_{D}} \left\{ \E_{Y \sim \mu} f(Y) - \E_{X \sim \nu_n} f(X) \right\}, \end{align} where $\nu_n$ is some estimate of the density based on $n$ i.i.d. drawn samples from the target distribution $\nu$. The goal of this section is to extend our adversarial framework to obtain upper rates for \eqref{eq:gan-nonparam}. In addition, the oracle inequalities developed here (Lemmas \ref{lem:oracle-ineq} and \ref{lem:oracle-ineq-general}) will be crucial for handling model mis-specification, which makes the results of practical relevance. \begin{thm}[Mis-specification: nonparametric] \label{thm:nonparam-gans} Let $\mathcal{D}_G$ be any generator class. Consider the discriminator metric (and the evaluation metric) induced by $\mathcal{F}_{D} = W^{\beta}(1)$. Consider the target density $\nu(x) \in W^{\alpha}(r)$.
With the empirical measure $\widehat{\nu}^{n} := \frac{1}{n} \sum_{i=1}^n \delta_{X_i}$ as the plug-in, the GAN estimator \begin{align*} \widehat{\mu}_{n} \in \argmin_{\mu \in \mathcal{D}_G} \max_{f \in \mathcal{F}_D} \left\{ \int_{\Omega} f d\mu - \int_{\Omega} f d\widehat{\nu}^{n} \right\}, \end{align*} learns the target density with rate \begin{align*} \E d_{\mathcal{F}_D}(\widehat{\mu}_n, \nu) \leq \min_{\mu \in \mathcal{D}_G} d_{\mathcal{F}_D}(\mu, \nu) + n^{-\frac{\beta}{d}} \vee \frac{\log n}{\sqrt{n}}. \end{align*} In contrast, there exists a smoothed/regularized empirical measure $\widetilde{\nu}^{n}$ as the plug-in, \begin{align*} \widetilde{\mu}_{n} \in \argmin_{\mu \in \mathcal{D}_G} \max_{f \in \mathcal{F}_D} \left\{ \int_{\Omega} f d \mu - \int_{\Omega} f d\widetilde{\nu}^{n} \right\}, \end{align*} for which a faster rate is attainable, \begin{align*} \E d_{\mathcal{F}_D}(\widetilde{\mu}_n, \nu) \leq \min_{\mu \in \mathcal{D}_G} d_{\mathcal{F}_D}(\mu, \nu) + n^{-\frac{\alpha+\beta}{2\alpha+d}} \vee \frac{1}{\sqrt{n}}. \end{align*} \end{thm} The proof of the above theorem is based on the simple oracle inequality in Lemma~\ref{lem:oracle-ineq} below. Later, we will generalize the oracle inequality (see Lemma~\ref{lem:oracle-ineq-general}) to establish rates when both the generator and discriminator are neural networks, and when one only has finite $m$ samples from the generator. Interestingly, generalizing the oracle inequality gives rise to a new notion of pair regularization, which we will study in Section~\ref{sec:param}.
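A concrete instantiation of the smoothed plug-in $\widetilde{\nu}^{n}$ in Thm.~\ref{thm:nonparam-gans} is a kernel density estimate. The sketch below is our own illustration (with a Gaussian kernel and a rule-of-thumb bandwidth $h_n \asymp n^{-1/(4+d)}$ chosen for definiteness): sampling from the smoothed measure amounts to resampling a data point and perturbing it with kernel noise, which is all the GAN training loop needs.

```python
import numpy as np

def sample_smoothed_empirical(data, num_samples, bandwidth, rng):
    """Draw samples from the Gaussian-kernel-smoothed empirical measure:
    pick X_i uniformly at random, then add N(0, h^2 I) noise."""
    n, d = data.shape
    idx = rng.integers(0, n, size=num_samples)
    return data[idx] + bandwidth * rng.normal(size=(num_samples, d))

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, size=(400, 1))   # X_1, ..., X_n ~ nu, here d = 1
h = len(data) ** (-1.0 / 5.0)                # rule-of-thumb bandwidth n^(-1/(4+d))
smoothed = sample_smoothed_empirical(data, 10_000, h, rng)
# The smoothed samples have mean ~ mean(data) and variance ~ var(data) + h^2.
```

The kernel variance $h^2$ inflates the sample variance slightly, which is the price of smoothing; the bandwidth choice trades this bias against the variance reduction that yields the faster rate in the theorem.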
\begin{lem}[Simple oracle inequality] \label{lem:oracle-ineq} Under the condition that $\mathcal{F}_D$ is a symmetric class, i.e., $\mathcal{F}_D = -\mathcal{F}_D$, the GAN estimator in \eqref{eq:gan-nonparam} satisfies \begin{align*} d_{\mathcal{F}_D}(\nu, \mu_n) \leq \min_{\mu \in \mathcal{D}_G} d_{\mathcal{F}_D}(\mu, \nu) + 2d_{\mathcal{F}_D}(\nu, \nu_n), \end{align*} where we refer to the first term as the approximation error, and to the second as the stochastic error. \end{lem} \begin{remark}[Regularization] \rm Observe that the rates satisfy $n^{-\frac{\alpha+\beta}{2\alpha+d}} \vee n^{-1/2} \precsim n^{-\frac{\beta}{d}} \vee n^{-1/2}\log n$. Namely, the regularized empirical density as the plug-in for GANs attains a better upper bound. We mention that to obtain an implementable algorithm for the smoothed/regularized empirical density $\widetilde{\nu}^n(x)$ in Thm.~\ref{thm:nonparam-gans}, one may use the kernel density estimate \begin{align*} \widetilde{\nu}^n(x) = \frac{1}{n h_n^d} \sum_{i=1}^n K\left(\frac{x-X_i}{h_n}\right), \end{align*} with specific choices of the kernel $K$ and bandwidth $h_n$ as in the nonparametric literature. When using the Gaussian kernel, this so-called ``instance noise'' technique \citep{sonderby2016amortised,arjovsky2017towards,mescheder2018training} is used in GAN training: each time when evaluating the stochastic gradients for the generator/discriminator, sample a mini-batch of data and then perturb them by a Gaussian. Statistically, one may view this data augmentation (or stability to data perturbation) as a form of regularization \citep*{yu2013stability}, to prevent the generator from memorizing the empirical data and learning a too complex model. We will show in Section~\ref{sec:param} that a specific choice of the generator and discriminator pair can also serve the goal of regularization in the parametric regime, in a curious way.
\end{remark} \section{Generative Adversarial Networks} \label{sec:param} In this section, we consider when both the generator and discriminator are neural networks, and derive rates applicable to GANs used in practice. To be specific, let $\mathcal{F} = \{ f_\omega(x): \mathbb{R}^d \rightarrow \mathbb{R} \}$ be the discriminator functions realized by a neural network with parameter $\omega$ describing the weights of the network. Let $\mathcal{G} = \{ g_{\theta}(z): \mathbb{R}^{d} \rightarrow \mathbb{R}^{d} \}$ be the generator neural network transformation with weights parameter $\theta$. Consider the random input $Z \sim \pi$ with distribution $\pi$, and the target distribution $X \sim \nu$. Denote by $\mu_\theta$ the density of $g_{\theta}(Z)$. Consider the parametrized GAN estimator used in practice \begin{align} \label{eq:gan-nn} \widehat{\theta}_{m,n} \in \argmin_{\theta: g_\theta \in \mathcal{G}} \max_{\omega: f_\omega \in \mathcal{F}} ~~ \left\{ \widehat{\E}_m f_\omega (g_\theta(Z)) - \widehat{\E}_n f_\omega(X) \right\}, \end{align} where $m$ and $n$ denote the number of generator samples and target distribution samples, respectively. Let us state the goal of the current section and its connections to the adversarial framework established above. So far, we have derived the optimal rates for nonparametric densities under strong evaluation metrics such as the Wasserstein ($\beta=1$) or total variation distance ($\beta=0$). The curse of dimensionality in sample complexity is inevitable unless the density class of interest is sufficiently structured (smooth). Two questions arise naturally. First, for structured parametric densities such as the ones parametrized by the generator networks in GANs, are fast parametric rates attainable? Second, can one obtain fast rates under the strong evaluation metric via the discriminator networks in GANs, which are mis-specified and differ from the evaluation metric?
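To make the min-max criterion \eqref{eq:gan-nn} concrete, the following toy sketch computes it for a scalar location-family generator $g_\theta(z) = z + \theta$ and a small finite symmetric discriminator class, with the min over $\theta$ replaced by a grid search. All of these modeling choices are illustrative assumptions; practical GANs use neural network classes and gradient-based optimization:

```python
import numpy as np

def gan_objective(theta, f_list, Z, X):
    # empirical GAN criterion: max over discriminators f of
    # E_m f(g_theta(Z)) - E_n f(X), with toy generator g_theta(z) = z + theta
    gz = Z + theta
    return max(np.mean(f(gz)) - np.mean(f(X)) for f in f_list)

rng = np.random.default_rng(1)
X = rng.normal(loc=2.0, size=1000)     # target nu = N(2, 1)
Z = rng.normal(size=1000)              # input  pi = N(0, 1)

# a small bounded symmetric discriminator class (F = -F)
base = [np.tanh, np.sin, lambda t: np.clip(t, -1.0, 1.0)]
f_list = base + [(lambda t, f=f: -f(t)) for f in base]

# grid search as a crude surrogate for the argmin over theta
grid = np.linspace(0.0, 4.0, 81)
theta_hat = grid[np.argmin([gan_objective(t, f_list, Z, X) for t in grid])]
```

Since the class is symmetric, the inner maximum is always nonnegative, and minimizing it drives $\theta$ toward the true location shift of the target.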
We will answer both questions, directly for the GANs estimator \eqref{eq:gan-nn}. \subsection{Generalized Oracle Inequality and Parametric Rate} First, we will generalize the oracle results to the GANs estimator $\widehat{\theta}_{m,n}$ \eqref{eq:gan-nn}. Then we will show that the oracle approach, when applied to neural networks as in Thm.~\ref{thm:param-gans}, sheds light on the choice of the \textit{generator/discriminator pair} as regularization. \begin{lem}[Generalized oracle inequality] \label{lem:oracle-ineq-general} Consider the GAN estimator $\widehat{\theta}_{m,n}$ defined in \eqref{eq:gan-nn}. Recall the composition in Def. \eqref{eq:composition}. Under the condition that $\mathcal{F}$ and $\mathcal{F} \circ \mathcal{G}$ are symmetric, the following oracle inequality holds for any $\mu_\theta$ with $g_\theta \in \mathcal{G}$, \begin{align*} d_{\mathcal{F}}\left( \mu_{\widehat{\theta}_{m,n}}, \nu \right) \leq d_{\mathcal{F}}(\mu_\theta, \nu) + 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_\theta, \mu_\theta \right) + d_{\mathcal{F} \circ \mathcal{G}}(\widehat{\pi}^m, \pi). \end{align*} Here for any measure $\mu$, we use $\widehat{\mu}^n$ to denote the empirical measure with $n$ i.i.d. samples from $\mu$. \end{lem} The innovative aspects of the above lemma are two-fold. First, the lemma provides an upper bound on the \textit{implicit} density estimator $\mu_{\widehat{\theta}_{m,n}}$ (the distribution of the random variable $g_{\widehat{\theta}_{m,n}}(Z)$), without knowing the explicit form of the density in general. We do have direct sampling mechanisms by transforming the random variable $Z$, which is a computational advantage. Second, we show the dependence on the number of generator samples $m$, in addition to the number of target samples $n$. The role and complexity of the generator network is made explicit in the bound.
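Since the lemma is a deterministic inequality, it can be checked numerically on a fully discrete toy model: a finite symmetric discriminator class of value vectors on $\{0,\dots,k-1\}$, generators given by permutations (so $\mu_\theta$ is a pushforward of $\pi$), and multinomial empirical measures. Everything below (class sizes, $k$, $n$, $m$) is an illustrative assumption, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 6

# finite symmetric discriminator class: each f is a value vector on {0,...,k-1}
F0 = [rng.uniform(-1, 1, k) for _ in range(4)]
F = F0 + [-f for f in F0]                         # symmetric: F = -F

def d_F(p, q, F):
    return max(f @ (p - q) for f in F)            # IPM over the finite class

# generators: permutations of {0,...,k-1}; mu_theta = pushforward of pi
G = [rng.permutation(k) for _ in range(5)]
def push(p, g):
    out = np.zeros(k)
    np.add.at(out, g, p)                          # out[g[i]] += p[i]
    return out

pi = rng.dirichlet(np.ones(k))                    # input distribution
nu = rng.dirichlet(np.ones(k))                    # target distribution
n, m = 40, 30
nu_hat = rng.multinomial(n, nu) / n               # empirical target measure
pi_hat = rng.multinomial(m, pi) / m               # empirical input measure

# GAN estimator: minimize the empirical IPM objective over generators
g_hat = min(G, key=lambda g: d_F(push(pi_hat, g), nu_hat, F))

# verify the generalized oracle inequality for every theta (every g in G)
lhs = d_F(push(pi, g_hat), nu, F)
for g in G:
    rhs = (d_F(push(pi, g), nu, F)                # approximation error
           + 2 * d_F(nu, nu_hat, F)               # target sampling error
           + d_F(push(pi_hat, g), push(pi, g), F) # generator sampling error
           + max(f[g2] @ (pi - pi_hat) for f in F for g2 in G))  # d_{F o G}
    assert lhs <= rhs + 1e-12
```

The last term uses the composed class $\mathcal{F} \circ \mathcal{G}$: the vector `f[g2]` is precisely $f \circ g$ evaluated on $\{0,\dots,k-1\}$.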
It is clear that when $m \rightarrow \infty$, the current lemma reduces to Lemma~\ref{lem:oracle-ineq}. Next, we apply Lemma~\ref{lem:oracle-ineq-general} to establish parametric rates for densities realized by neural networks, in the following Thm.~\ref{thm:param-gans} and \ref{thm:leaky-ReLU} (with their corollaries). We emphasize again here that GANs only use a mis-specified discriminator $\mathcal{F}$ parametrized by neural networks with limited capacity, and $d_{\mathcal{F}}$ is \textit{different} from the objective evaluation metrics such as $d_{TV}, d_{H}$. \begin{thm}[GANs upper rate on KL: parametric] \label{thm:param-gans} Consider the GANs estimator \begin{align} \label{eq:gan-nn-B} \widehat{\theta}_{m,n} \in \argmin_{\theta: g_\theta \in \mathcal{G}} \max_{\omega: f_\omega \in \mathcal{F}, \| f_\omega \|_\infty \leq B} ~~ \left\{ \widehat{\E}_m f_\omega (g_\theta(Z)) - \widehat{\E}_n f_\omega(X) \right\}, \end{align} where $B>0$ is some absolute constant, and $m$ and $n$ denote the number of generator samples and target distribution samples, respectively. Recall the pseudo-dimension defined in Def.~\ref{def:comb-dim}.
Then for the total variation distance and the Kullback-Leibler divergence, we have \begin{align} &\E d_{TV}^2\left(\nu, \mu_{\widehat{\theta}_{m,n}} \right) \leq \frac{1}{4}\left[ \E d_{KL}\left(\nu || \mu_{\widehat{\theta}_{m,n}} \right) + \E d_{KL}\left(\mu_{\widehat{\theta}_{m,n}} || \nu \right) \right] \nonumber \\ & \leq \frac{1}{2} \sup_{\theta} \inf_{\omega} \left\| \log \frac{\nu}{\mu_{\theta}} - f_\omega \right\|_{\infty} + \frac{B}{4\sqrt{2}} \inf_\theta \left\| \log \frac{\mu_\theta}{\nu} \right\|_{\infty}^{1/2} \\ & \quad \quad + C \cdot \sqrt{\text{Pdim}(\mathcal{F}) \left( \frac{\log m}{m} \vee \frac{\log n}{n} \right)} \vee \sqrt{ \text{Pdim}(\mathcal{F} \circ \mathcal{G}) \frac{\log m}{m} } , \nonumber \end{align} where $C>0$ is some universal constant independent of $\text{Pdim}(\mathcal{F})$, $\text{Pdim}(\mathcal{F} \circ \mathcal{G})$ and $m, n$. \end{thm} The upper bound in the above theorem on the Jensen-Shannon/Kullback-Leibler divergence (and TV distance) consists of three parts: the approximation errors $A_1(\mathcal{F},\mathcal{G},\nu)$, $A_2(\mathcal{G}, \nu)$ and the stochastic error $S_{n,m}(\mathcal{F}, \mathcal{G})$, \begin{align} \label{eq:approx-stat-error} A_1(\mathcal{F},\mathcal{G},\nu) &:= \frac{1}{2} \sup_{\theta} \inf_{\omega} \left\| \log \frac{\nu}{\mu_{\theta}} - f_\omega \right\|_{\infty} \\ A_2(\mathcal{G}, \nu) & := \frac{B}{4\sqrt{2}} \inf_\theta \left\| \log \frac{\mu_\theta}{\nu} \right\|_{\infty}^{1/2} \nonumber \\ S_{n,m}(\mathcal{F}, \mathcal{G}) &:= \sqrt{\text{Pdim}(\mathcal{F}) \left( \frac{\log m}{m} \vee \frac{\log n}{n} \right)} \vee \sqrt{ \text{Pdim}(\mathcal{F} \circ \mathcal{G}) \frac{\log m}{m} } \nonumber. \end{align} We emphasize that the term $A_1(\mathcal{F},\mathcal{G},\nu)$ is in the $\sup_{\theta} \inf_{\omega}$ form, which is crucial and differs from the adversarial idea with the form $\inf_{\theta} \sup_{\omega}$.
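As a numerical aside, the information-theoretic inequalities entering the first line of the bound (the symmetrized Pinsker inequality $d_{TV}^2 \leq \frac{1}{4}[d_{KL}(\nu\|\mu)+d_{KL}(\mu\|\nu)]$, and the Le Cam-type relation $d_{TV}^2 \leq d_H^2$ used for the Hellinger variant below) can be sanity-checked on random discrete distributions:

```python
import numpy as np

def tv(p, q):      # total variation, d_TV = (1/2) * L1
    return 0.5 * np.abs(p - q).sum()

def kl(p, q):      # Kullback-Leibler divergence (strictly positive supports)
    return np.sum(p * np.log(p / q))

def hell(p, q):    # Hellinger distance, with the paper's unnormalised convention
    return np.sqrt(((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

rng = np.random.default_rng(0)
for _ in range(200):
    p = rng.dirichlet(np.ones(8))
    q = rng.dirichlet(np.ones(8))
    # d_TV^2 <= (1/4)[KL(p||q) + KL(q||p)]   (symmetrized Pinsker)
    assert tv(p, q) ** 2 <= 0.25 * (kl(p, q) + kl(q, p)) + 1e-12
    # d_TV^2 <= d_H^2                         (Le Cam-type)
    assert tv(p, q) ** 2 <= hell(p, q) ** 2 + 1e-12
```

Both inequalities are classical; the check above only illustrates that the metric comparisons used to convert the IPM bound into TV/KL/Hellinger statements hold pointwise, not merely in expectation.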
In words, $A_1(\mathcal{F},\mathcal{G},\nu)$ describes how well the best discriminator function $f_\omega$ can express the class of log density ratios $\log (\nu/\mu_{\theta})$, $A_2(\mathcal{G}, \nu)$ reflects the expressiveness of the generator class, and $S_{n,m}(\mathcal{F}, \mathcal{G})$ describes the statistical complexity of both the generator and discriminator. In the next section, we will elaborate on the interplay among the two approximation error terms $A_1(\mathcal{F},\mathcal{G},\nu), A_2(\mathcal{G}, \nu)$, and the stochastic error term $S_{n,m}(\mathcal{F}, \mathcal{G})$. \begin{remark} \rm To obtain non-trivial rates, the above theorem requires $\mu_\theta$ and $\nu$ to be absolutely continuous, for all $\theta$ of interest. However, this is not essential, as qualitatively similar results hold for the non-absolutely-continuous case, based on the Hellinger distance. As shown in the next theorem, $-1\leq \frac{\sqrt{\nu} - \sqrt{\mu_{\theta}}}{\sqrt{\nu} + \sqrt{\mu_{\theta}}} \leq 1$ is well-defined even for non-absolutely continuous distributions $\mu_\theta$ and $\nu$. \end{remark} \begin{thm}[GANs upper rate on Hellinger: parametric] \label{thm:param-gans-hellinger} Consider the same GANs estimator $\widehat{\theta}_{m,n}$ as in Thm.~\ref{thm:param-gans}.
Then for the Hellinger distance, \begin{align} d_{H}(\mu, \nu) := \left( \int (\sqrt{\mu(x)} - \sqrt{\nu(x)})^2 dx \right)^{1/2}, \end{align} we have \begin{align} &\E d_{TV}^2\left(\nu, \mu_{\widehat{\theta}_{m,n}} \right) \leq \E d_{H}^2 \left(\nu, \mu_{\widehat{\theta}_{m,n}} \right) \nonumber \\ & \leq 2 \sup_{\theta} \inf_{\omega} \left\| \frac{\sqrt{\nu} - \sqrt{\mu_{\theta}}}{\sqrt{\nu} + \sqrt{\mu_{\theta}}} - f_\omega \right\|_{\infty} + 2B \inf_\theta \left\| \frac{\sqrt{\nu} - \sqrt{\mu_{\theta}}}{\sqrt{\nu} + \sqrt{\mu_{\theta}}} \right\|_{\infty} \\ & \quad \quad + C \cdot \sqrt{\text{Pdim}(\mathcal{F}) \left( \frac{\log m}{m} \vee \frac{\log n}{n} \right)} \vee \sqrt{ \text{Pdim}(\mathcal{F} \circ \mathcal{G}) \frac{\log m}{m} } , \nonumber \end{align} where $C>0$ is some universal constant. \end{thm} Finally, as a corollary of Thm.~\ref{thm:param-gans}, one can establish similar results for the Wasserstein distance. \begin{coro} \label{cor:wass} Recall the definitions in \eqref{eq:approx-stat-error}. Assume that $\mathcal{F}$ has Lipschitz constant $L_\mathcal{F}$ and $\mathcal{G}$ has Lipschitz constant $L_\mathcal{G}$. Then for either (1) $Z \sim N(0, I_d)$, or (2) $Z, X$ lie in $[0, 1]^d$, we have \begin{align*} \E d_{W}^2\left(\nu, \mu_{\widehat{\theta}_{m,n}} \right) &\leq C_1 \cdot A_1(\mathcal{F},\mathcal{G},\nu)+ C_2 \cdot A_2(\mathcal{G}, \nu) + C_3 \cdot S_{n,m}(\mathcal{F}, \mathcal{G}) \end{align*} where $C_1, C_2, C_3>0$ are some constants independent of $\text{Pdim}(\mathcal{F})$, $\text{Pdim}(\mathcal{F} \circ \mathcal{G})$ and $m, n$, but depend on $L_\mathcal{F}, L_\mathcal{G}$. \end{coro} \subsection{Generator/Discriminator Pair Regularization} \label{sec:pair-regularization} In this section, we investigate the new notion of pair regularization and its trade-off presented in Thm.~\ref{thm:param-gans}. One key fact about regularization in GANs is that both the generator and discriminator are choices of ``tuning parameters'', for users to specify.
Therefore, the trade-off is more complex. For a target density of interest, we use the following two thought experiments to explain the intricacies of the interplay between the generator and the discriminator. \begin{enumerate} \item For a fixed generator class $\mathcal{G}$, when the discriminator class $\mathcal{F}$ becomes more complex, it will be easier for the discriminator to tell apart good and bad generators in the TV sense (w.r.t. the target distribution). However, the stochastic error becomes larger as one is learning from a larger discriminator model in GANs. This is reflected in the upper bounds obtained in Thm.~\ref{thm:param-gans} and \ref{thm:param-gans-hellinger}, shown along the blue dashed arrow direction in Fig.~\ref{fig:pair-dis-gen}. \item For a fixed discriminator class $\mathcal{F}$, as the generator $\mathcal{G}$ becomes richer, it is capable of expressing densities that are closer to the target distribution. However, at the same time it introduces difficulty for two reasons. First, the generator may create densities that are far away from the target in the TV sense, yet indistinguishable to the discriminator. Second, the stochastic error becomes worse as one is learning from a larger generator model. This is shown by the red dashed arrow direction in Fig.~\ref{fig:pair-dis-gen}. \end{enumerate} In general, regularization using the generator/discriminator pair is more subtle than the conventional bias-variance (or approximation-stochastic error) trade-offs. We visualize such trade-offs in Fig.~\ref{fig:pair-dis-gen}, with $A_1(\mathcal{F},\mathcal{G},\nu), A_2(\mathcal{G}, \nu)$ and $S_{n,m}(\mathcal{F}, \mathcal{G})$ defined in \eqref{eq:approx-stat-error}. Here, the tuning parameters lie in a two-dimensional domain, rather than along a one-dimensional index.
For a fixed target $\nu$, as $(\mathcal{G}, \mathcal{F})$ both become richer, $A_2(\mathcal{G}, \nu)$ decreases, $S_{n,m}(\mathcal{F}, \mathcal{G})$ increases, but $A_1(\mathcal{F},\mathcal{G},\nu)$ may increase, decrease or stay unchanged. On one hand, one can eliminate some $(\mathcal{G}, \mathcal{F})$ pairs due to notions of dominance on the two-dimensional domain. The simple U-shaped picture for the bias-variance trade-off no longer exists. On the other hand, by stepping into the two-dimensional tuning domain, there are more choices for tuning pairs that potentially give rise to better rates, which we will showcase in Thm.~\ref{thm:leaky-ReLU}. \begin{figure}[ht!] \centering \includegraphics[width=0.6\textwidth]{pair-dis-gen.pdf} \caption{Pair regularization diagram on how well GANs learn densities in TV distance, when tuning with generator $\mathcal{G}$ and discriminator $\mathcal{F}$ pair. The diagram is illustrated based on upper bounds on TV distance, namely $A_1(\mathcal{F},\mathcal{G},\nu)+ A_2(\mathcal{G}, \nu) +S_{n,m}(\mathcal{F}, \mathcal{G})$ in Thm.~\ref{thm:param-gans}. The red shaded region corresponds to $A_2(\mathcal{G}, \nu) = 0$ and the blue shaded region is $A_1(\mathcal{F},\mathcal{G},\nu) = 0$. The grey dashed line corresponds to the indifference curve for the statistical error $S_{n,m}(\mathcal{F}, \mathcal{G})$. One can see that the choice $(\mathcal{G}_\star, \mathcal{F}_\star)$ dominates the other choices in the grey shaded area, and the other choices on the same grey dashed line.} \label{fig:pair-dis-gen} \end{figure} The following corollary concerns $A_1(\mathcal{F},\mathcal{G},\nu)$ and $A_2(\mathcal{G}, \nu)$ through choosing the generator/discriminator pair, as a step towards understanding the new notion of pair regularization for GANs. \begin{coro}[Choice of generator/discriminator] \label{cor:design} Consider the target density class $\log \nu \in \mathcal{D}_{R}$, and the generator class $\log \mu_{\theta} \in \mathcal{D}_G$.
With the discriminator chosen as \begin{align*} \mathcal{F}_D = \mathcal{D}_R - \mathcal{D}_G := \{ \log \nu - \log \mu_{\theta} ~|~ \text{for all}~ \log \nu \in \mathcal{D}_R,~ \log \mu_{\theta} \in \mathcal{D}_G\}, \end{align*} then \begin{align} \label{eq:choice-dis} A_1(\mathcal{F},\mathcal{G},\nu) = 0. \end{align} In addition, if the generator is well-specified in the sense $\mathcal{D}_G \supseteq \mathcal{D}_R$, then \begin{align} \label{eq:choice-gen} A_2(\mathcal{G},\nu) = 0. \end{align} And \eqref{eq:choice-dis} and \eqref{eq:choice-gen} altogether imply $\E d_{TV}^2\left(\nu, \mu_{\widehat{\theta}_{m,n}} \right) \precsim S_{n,m}(\mathcal{F}, \mathcal{G}).$ \end{coro} \begin{remark}[Pair regularization and diagram] \rm Let us illustrate the above corollary using Fig.~\ref{fig:pair-dis-gen}. Eqn.~\eqref{eq:choice-dis} corresponds to the blue shaded region in the diagram, Eqn.~\eqref{eq:choice-gen} represents the red shaded region, and the intersection is highlighted by the grey shaded region. At the intersection, the approximation error $A_1(\mathcal{F},\mathcal{G},\nu) + A_2(\mathcal{G},\nu)$ is zero, so all pairs there are dominated by the choice $(\mathcal{G}_\star, \mathcal{F}_\star)$ (as other pairs have a larger variance $S_{n,m}(\mathcal{F}, \mathcal{G})$). In addition, we argue that $(\mathcal{G}_\star, \mathcal{F}_\star)$ is also the best solution along the indifference curve for $S_{n,m}(\mathcal{F}, \mathcal{G})$, denoted by the grey dashed line. To see this, moving along the indifference curve away from $(\mathcal{G}_\star, \mathcal{F}_\star)$ in the northwest direction to some pair $(\mathcal{G}', \mathcal{F}')$, $A_1$ and $S_{m,n}$ stay unchanged, but $A_2(\mathcal{G}_\star, \nu) \leq A_2(\mathcal{G}', \nu)$. Moving instead in the southeast direction to some $(\mathcal{G}', \mathcal{F}')$, $A_2$ and $S_{m,n}$ stay the same, but $A_1(\mathcal{G}_\star, \mathcal{F}_\star, \nu) \leq A_1(\mathcal{G}', \mathcal{F}', \nu)$.
Similarly, one can argue that all pairs above the indifference curve are dominated by $(\mathcal{G}_\star, \mathcal{F}_\star)$. We acknowledge that the diagram is illustrated using an upper bound on the TV distance; however, qualitatively, a similar phenomenon extends to $\E d_{TV}^2\left(\nu, \mu_{\widehat{\theta}_{m,n}} \right)$ and $\E d_{H}^2\left(\nu, \mu_{\widehat{\theta}_{m,n}} \right)$ (see the first paragraph in Section~\ref{sec:pair-regularization}). We defer further discussion of pair regularization versus classic regularization to Section~\ref{sec:discussion}. \end{remark} \subsection{Applications: Leaky ReLU Networks} In this section, we showcase how to apply our theory and regularization insights to GANs used in practice. We consider two special cases of leaky ReLU generator and discriminator, to make explicit the rates for estimating parametric densities. The main tools are Thm.~\ref{thm:param-gans} and the pair regularization. The goal of this section is to show that for a good choice of $(\mathcal{G}, \mathcal{F})$, near-optimal sample complexity is attainable. Admittedly, we do not aim to identify the optimal pair of $(\mathcal{G}_\star, \mathcal{F}_\star)$ over the entire two-dimensional tuning domain. In fact, such optimization can be hard: characterizing the implicit density of $g_{\widehat{\theta}_{m,n}}(Z)$ given by neural network transformations, and how it approximates a general nonparametric target density $\nu$, is challenging future work outside the statistical goal of the current paper. Let us introduce the neural network parameter space. The \textit{generator} $x = g_{\theta}(z) : \mathbb{R}^d \rightarrow \mathbb{R}^d$ is parametrized by a multi-layer perceptron (MLP): \begin{align*} h_0 &= z, \\ h_l &= \sigma_a(W_l h_{l-1} + b_l), ~ 0< l <L \\ x &= W_L h_{L-1} + b_L, \end{align*} where $h_l$ denotes the output of hidden units, and $x$ is the transformed final output of the MLP.
Here the activation is leaky ReLU \begin{align} \label{eq:leaky-ReLU} \sigma_a(t) = \max\{ t, a t \},~ \text{for some fixed $0<a\leq 1$}. \end{align} Denote the parameter space for the generator weights as \begin{align*} \theta \in \Theta(d, L) := \{ \theta= (W_l \in \mathbb{R}^{d \times d}, b_l \in \mathbb{R}^d, 1\leq l \leq L) ~|~ {\rm rank}(W_l) = d, \forall 1\leq l\leq L \}. \end{align*} We require the $W_l$ to be full rank so that the generator transformation $g_{\theta}$ is invertible. One can verify that, when the input distribution $Z \sim U([0,1]^d)$ is uniform, the class of densities realizable by $g_{\theta}(Z)$ for $\theta \in \Theta(d, L)$ has the following closed form: \begin{align} \label{eq:analytic-form} \log \mu_\theta(x) = c_1 \sum_{l=1}^{L-1} \sum_{i=1}^d \mathbbm{1}_{m_{li}(x) \geq 0} + c_0(\theta), \end{align} with some proper choice of $c_1, c_0(\theta)$. Here $m_{li}(x)$ is the function computed by the $i$-th hidden unit in the $l$-th layer of a certain MLP \footnote{The architecture and weights depend on the generator network $g_\theta$, with depth $L$ and $d$ hidden units in each layer.}, with dual leaky ReLU activation (defined in the next paragraph) and weights properly chosen as a function of $\theta$. For details, see derivation \eqref{eq:m} and \eqref{eq:realizable-density-MLP}. Remark that, from the closed-form expression, as the depth grows (as a function of $n$), the generator is capable of expressing increasingly complex distributions. Clearly from the expression, one can see that for any $\theta, \theta' \in \Theta(d, L)$, $\mu_\theta$ and $\mu_{\theta'}$ are \textit{absolutely continuous} with respect to each other.
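Two facts above can be illustrated numerically in one dimension: the dual leaky ReLU $\sigma^\star_{1/a}$ inverts $\sigma_a$, and a one-layer leaky-ReLU pushforward of a uniform input has a piecewise-constant log-density jumping by $\log(1/a)$, consistent with the indicator form of \eqref{eq:analytic-form}. The specific slope $a = 1/4$ and the one-layer generator $x = \sigma_a(z - 1/2)$ are illustrative choices:

```python
import numpy as np

a = 0.25
sigma = lambda t: np.maximum(t, a * t)        # leaky ReLU sigma_a
sigma_dual = lambda t: np.minimum(t, t / a)   # dual leaky ReLU sigma*_{1/a}

# the dual activation inverts the leaky ReLU (and vice versa)
t = np.linspace(-3, 3, 601)
assert np.allclose(sigma_dual(sigma(t)), t)

# one-layer generator x = sigma_a(z - 1/2) with Z ~ U([0,1]):
# the density of X is 1 on [0, 1/2) and 1/a on [-a/2, 0),
# so log mu(x) is piecewise constant with a single jump of log(1/a)
rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1.0, 200_000)
x = sigma(z - 0.5)
dens_pos = np.mean((x >= 0) & (x < 0.5)) / 0.5   # approx. 1 on [0, 1/2)
dens_neg = np.mean(x < 0) / (0.5 * a)            # approx. 1/a on [-a/2, 0)
```

The two empirical density levels differ by the factor $1/a$, i.e. the log-density takes two values, matching the single-indicator special case of the closed form.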
The \textit{discriminator} $f_\omega(x): \mathbb{R}^d \rightarrow \mathbb{R}$ is parametrized by a feedforward neural network whose activation functions include the dual leaky ReLU activation \begin{align} \label{eq:leaky-ReLU-dual} \sigma^\star_a(t) := \min\{ t, a t \}, ~\text{for $a \geq 1$}, \end{align} and the threshold activation $\sigma^\star_\infty(t) := \mathbbm{1}_{t\leq 0}$. The structure of a feedforward network is as follows: hidden units are grouped into a sequence of $L$ layers (the depth of the network), where a node is in layer $1\leq l \leq L$, if it has a predecessor in layer $l-1$ and no predecessor in any layer $l' \geq l$. Computation of the final output unit proceeds layer-by-layer: at any layer $l < L$, each hidden unit $u$ receives an input in the form of a linear combination $\widetilde{x}_u' w_u + b_u$, and then outputs $\sigma_a(\widetilde{x}_u' w_u + b_u)$, where the vector $\widetilde{x}_u$ collects the output of all the units with a directed edge into $u$ (i.e., from prior layers). $\omega$ denotes all the weights in such a feedforward network. \begin{thm}[Leaky-ReLU generator and discriminator, uniform as input] \label{thm:leaky-ReLU} Consider a multi-layer perceptron generator $g_{\theta}:\mathbb{R}^d \rightarrow \mathbb{R}^d$, $\theta \in \Theta(d, L)$ with depth $L$ and width $d$, using leaky ReLU $\sigma_a(\cdot)$ activation \eqref{eq:leaky-ReLU} with any $0<a \leq 1$. Consider the class of realizable densities, i.e., $X \sim \nu$ enjoys the same distribution as $g_{\theta_*}(Z)$ with some $\theta_* \in \Theta(d, L)$ and $Z \sim U([0,1]^d)$. Choose the discriminator $f_\omega: \mathbb{R}^d \rightarrow \mathbb{R}$ to be a feedforward neural network (architecture shown in Fig.~\ref{fig:relu}) with depth $L+2$, using dual leaky ReLU $\sigma^\star_{1/a}(\cdot)$ \eqref{eq:leaky-ReLU-dual} and threshold activations (only at the final layer), with parameter $\omega \in \Omega(d, L)$ defined in \eqref{eq:param-space-omega}.
Then, the GAN estimator $\mu_{\widehat{\theta}_{m,n}}$ defined in \eqref{eq:gan-nn-B} satisfies the following parametric rates for the total variation distance, \begin{align*} \E d_{TV}^2\left(\nu, \mu_{\widehat{\theta}_{m,n}} \right) \precsim \sqrt{d^2 L^2 \log (dL) \left( \frac{\log m}{m} \vee \frac{\log n}{n} \right)}. \end{align*} \end{thm} \begin{figure}[ht!] \centering \includegraphics[width=0.6\textwidth]{relu.pdf} \caption{Illustration of discriminator $\mathcal{F}$ (feed-forward network) and generator $\mathcal{G}$ (multi-layer perceptron) in Theorem~\ref{thm:leaky-ReLU}, for $L=3$. } \label{fig:relu} \end{figure} \begin{remark}[Relations to literature] \rm The above theorem is built on top of Thm.~\ref{thm:param-gans} and Cor.~\ref{cor:design}. Remark that here we use the neural networks' architecture as the pair regularization. Remark also that in our setting, we can allow for \textit{very deep} ReLU neural networks with $L \precsim \sqrt{(n \wedge m)/\log (n \wedge m)}$, with the generator's width being as small as the dimension $d$. Investigations of the parametric rates for GANs have been considered in \cite*{bai2018approximability}, based on spectral norm-based capacity controls as regularization of networks, i.e. $\forall l \in [L], \| W_l \|_{\rm op}, \| W_l^{-1} \|_{\rm op} \leq C$. Their approach is to establish the multiplicative equivalence $d_{\mathcal{F}}(\mu, \nu) \asymp d_{W}(\mu, \nu)$ for $\mu, \nu \in \mathcal{G}$ restricted to the generator class. In contrast, we make use of the oracle inequality approach developed in an early version of the current paper \citep{liang2017well}, and the notion of pair regularization. We study the problem through the angle of pseudo-dimensions, without requiring that the spectral radius of each $W_l, W_l^{-1}$ is bounded. This has two advantages. First, the generator class can express a wider range of densities, as we only require that $W_l$ has full rank.
Second, we make explicit the dependence on the depth $L$ of the neural networks in the rate. In addition, we obtain a better polynomial dependence on both the dimension $d$ and the depth $L$ in the error. \end{remark} Finally, as a sanity check, we show that GANs can also achieve the correct dimension dependence in sample complexity ($n = O(d^2\log d)$) when estimating a multivariate Gaussian with unknown mean and covariance (where the information-theoretic lower bound requires at least $n = \Omega(d^2)$ samples). This is to showcase that with the power of pair regularization, GANs can obtain provable guarantees in classic statistical realms. \begin{coro}[Multivariate Gaussian estimation, isotropic Gaussian as input] \label{cor:multivariate-gaussian} Let $\nu = N(b_*, \Sigma_*)$ be a multivariate Gaussian in $\mathbb{R}^d$. Consider a linear generator (neural network with no hidden layer) with input distribution $N(0, I_p)$ ($p\geq d$), and the discriminator to be a one-hidden-layer neural network with quadratic activation $\sigma(t) = t^2$. Then the GAN estimator $\mu_{\widehat{\theta}_{m,n}}$ defined in \eqref{eq:gan-nn-B} satisfies the following rates, \begin{align*} \E d_{TV}^2\left(\nu, \mu_{\widehat{\theta}_{m,n}} \right) \precsim \sqrt{ \frac{d^2 \log d}{n} + \frac{(pd+d^2) \log (p + d)}{m} }. \end{align*} \end{coro} \section{Conclusion and Discussion} \label{sec:discussion} We further discuss the following question: even overlooking computation, what is the advantage of GANs compared to classic nonparametric density estimation, and to classic parametric models? We use the diagram as in Fig.~\ref{fig:pair-dis-gen} to point out some conclusions (based on Thm. \ref{thm:optimal-rates-sobolev}-\ref{thm:param-gans-hellinger} obtained in this paper) and some conjectures. \begin{figure}[ht!]
\centering \includegraphics[width=0.6\textwidth]{gan.pdf} \caption{Diagram for generator/discriminator pair regularization.} \label{fig:gan} \end{figure} \begin{enumerate} \item Classic parametric models: can be viewed as the left interval (along the y-axis) in Fig.~\ref{fig:gan}, where the generator class $\mathcal{G}$ is simple and limited. The discriminator can be viewed as assessing how well we are estimating the finite parameters, which relates to how well we are learning densities in the parametric class. A more advanced discriminator will not help. The pair regularization effectively reduces to one-dimensional tuning on the discriminator: what is a good loss function on the parameter set. \item Classic nonparametric density estimation: can be viewed as the top interval (along the x-axis) in Fig.~\ref{fig:gan}. Here $d_{\mathcal{F}} = d_{TV}$, and by tuning the generator class $\mathcal{G}$ (using sieves, kernels, etc.), one can achieve the optimal rates when the target density lies in a certain nonparametric class. The minimax theory for the adversarial framework (Thm.~\ref{thm:optimal-rates-sobolev}) informs us that, when the target is truly nonparametric, tuning with the generator class is optimal: there is no theoretical gain in utilizing the generator/discriminator pair to tune. Still, with simpler evaluation metrics, one can obtain faster rates, as shown in Thm.~\ref{thm:optimal-rates-sobolev}. \item Empirical density, or data memorization: can be viewed as the right interval (along the y-axis) in Fig.~\ref{fig:gan}. Here the generator class is flexible enough to memorize the training data, and one should try to avoid this by means of regularization (Thm.~\ref{thm:nonparam-gans}).
\item For a certain target density $\nu$ (in between parametric and nonparametric for many realistic cases), tuning with the generator and discriminator pair $(\mathcal{G}, \mathcal{F})$ as illustrated in Fig.~\ref{fig:gan} could potentially do better than tuning in either the parametric or the nonparametric domain alone. We \textit{conjecture} that \textit{tuning with the generator/discriminator pair} $(\mathcal{G}_\star, \mathcal{F}_\star)$ could potentially explain the empirical success of GANs on the statistical side, as one has the choice of flexibly tuning the generator and discriminator pair with deep neural networks, in the two-dimensional domain balancing $A_1(\mathcal{F}, \mathcal{G}, \nu), A_2(\mathcal{G}, \nu), S_{n, m}(\mathcal{F}, \mathcal{G})$ simultaneously. \end{enumerate} Admittedly, to fully understand such a phenomenon in pair regularization, one may need to re-think the class of distributions of interest, and what constitutes a ``low complexity/structured'' class rather than the ``smoothness'' used in the nonparametric literature. In this paper, we only consider the statistical problem of how well GANs learn densities, assuming the optimization, say \eqref{eq:gan-nn-B}, can be done to sufficient accuracy. However, computation of GANs is a considerably harder question \citep*{mescheder2017numerics, daskalakis2017training,liang2018interaction,arbel2018gradient,lucic2017gans}, which we leave as future work. \section{Proof of Main Results} \label{sec:proof} \subsection{Oracle Inequalities} We now develop the oracle inequalities, which are the main innovative tool for analyzing the rates for GANs. We remark that these are deterministic inequalities that hold generally, which could be of independent interest for further research on GANs.
\begin{proof}[Proof of Lemma~\ref{lem:oracle-ineq}] For any $\mu \in \mathcal{D}_G$, we know that, due to the optimality of the GAN estimator in \eqref{eq:gan-nonparam}, \begin{align*} d_{\mathcal{F}_D}(\mu, \nu_n) - d_{\mathcal{F}_D}(\mu_n, \nu_n) \geq 0. \end{align*} By the triangle inequality for IPMs, we have \begin{align*} d_{\mathcal{F}_D}(\mu_n, \nu) & \leq d_{\mathcal{F}_D}(\mu_n, \nu_n) + d_{\mathcal{F}_D}(\nu_n, \nu) \\ & \leq d_{\mathcal{F}_D}(\mu, \nu_n) + d_{\mathcal{F}_D}(\nu_n, \nu) \quad \text{(optimality of $\mu_n$)} \\ & \leq d_{\mathcal{F}_D}(\mu, \nu) + d_{\mathcal{F}_D}(\nu, \nu_n) + d_{\mathcal{F}_D}(\nu_n, \nu). \end{align*} Now taking $\mu = \argmin_{\mu \in \mathcal{D}_G} d_{\mathcal{F}_D}(\mu, \nu)$, and recalling that $\mathcal{F}_D$ is symmetric around 0, we have \begin{align*} d_{\mathcal{F}_D}(\mu_n, \nu) \leq \min_{\mu \in \mathcal{D}_G} d_{\mathcal{F}_D}(\mu, \nu) + 2 d_{\mathcal{F}_D}(\nu, \nu_n). \end{align*} \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:oracle-ineq-general}] For ease of notation, we abbreviate $\widehat{\theta}_{m,n}$ as $\widehat{\theta}$ in this proof when there is no confusion.
Recalling the GAN estimator \eqref{eq:gan-nn} and the definition of $d_{\mathcal{F}}\left( \mu_{\widehat{\theta}_{m,n}}, \nu \right)$, we have \begin{align*} & d_{\mathcal{F}}\left( \mu_{\widehat{\theta}_{m,n}}, \nu \right) = \sup_{f_\omega \in \mathcal{F}} \left\{ \E f_\omega \circ g_{\widehat{\theta}}(Z) - \E f_\omega (X) \right\} \\ &\leq \sup_{f_\omega \in \mathcal{F}} \left\{ \E f_\omega \circ g_{\widehat{\theta}}(Z) - \widehat{\E}_n f_\omega (X) \right\} + \sup_{f_\omega \in \mathcal{F}} \left\{ \widehat{\E}_n f_\omega (X)- \E f_\omega (X) \right\} \\ &\leq \sup_{f_\omega \in \mathcal{F}} \left\{ \widehat{\E}_m f_\omega \circ g_{\widehat{\theta}}(Z) - \widehat{\E}_n f_\omega (X) \right\} + \sup_{f_\omega \in \mathcal{F}} \left\{ \E f_\omega \circ g_{\widehat{\theta}}(Z) -\widehat{\E}_m f_\omega \circ g_{\widehat{\theta}}(Z) \right\} \\ & \hspace{3cm} + \sup_{f_\omega \in \mathcal{F}} \left\{ \widehat{\E}_n f_\omega (X)- \E f_\omega (X) \right\}. \end{align*} Here, in the first inequality we insert the quantity $\widehat{\E}_n f_\omega (X)$, and in the second we insert the quantity $\widehat{\E}_m f_\omega \circ g_{\widehat{\theta}}(Z)$ into the first term.
For any $\theta$ such that $g_\theta \in \mathcal{G}$, we recall the optimality condition of the GAN estimator \begin{align*} \sup_{f_\omega \in \mathcal{F}} \left\{ \widehat{\E}_m f_\omega \circ g_{\widehat{\theta}_{m,n}}(Z) - \widehat{\E}_n f_\omega (X) \right\} \leq \sup_{f_\omega \in \mathcal{F}} \left\{ \widehat{\E}_m f_\omega \circ g_{\theta}(Z) - \widehat{\E}_n f_\omega (X) \right\}, \end{align*} then one can proceed with (for any $\theta$ with $g_\theta \in \mathcal{G}$) \begin{align*} &d_{\mathcal{F}}\left( \mu_{\widehat{\theta}_{m,n}}, \nu \right) \\ & \leq \sup_{f_\omega \in \mathcal{F}} \left\{ \widehat{\E}_m f_\omega \circ g_{\theta}(Z) - \widehat{\E}_n f_\omega (X) \right\} \quad \text{(optimality of $\widehat{\theta}_{m,n}$)}\\ & \quad \quad + \sup_{f_\omega \in \mathcal{F}} \left\{ \E f_\omega \circ g_{\widehat{\theta}}(Z) -\widehat{\E}_m f_\omega \circ g_{\widehat{\theta}}(Z) \right\} + \sup_{f_\omega \in \mathcal{F}} \left\{ \widehat{\E}_n f_\omega (X)- \E f_\omega (X) \right\} \\ & \leq \sup_{f_\omega \in \mathcal{F}} \left\{ \widehat{\E}_m f_\omega \circ g_{\theta}(Z) - \E f_\omega \circ g_{\theta}(Z) \right\} + \sup_{f_\omega \in \mathcal{F}} \left\{\E f_\omega \circ g_{\theta}(Z) - \E f_\omega (X) \right\} \\ & \quad \quad + \sup_{f_\omega \in \mathcal{F}} \left\{ \E f_\omega (X) - \widehat{\E}_n f_\omega (X) \right\} \quad \text{(insert $\E f_\omega \circ g_{\theta}(Z)$ and $ \E f_\omega (X)$)} \\ & \quad \quad + \sup_{f_\omega \in \mathcal{F}} \left\{ \E f_\omega \circ g_{\widehat{\theta}}(Z) -\widehat{\E}_m f_\omega \circ g_{\widehat{\theta}}(Z) \right\} + \sup_{f_\omega \in \mathcal{F}} \left\{ \widehat{\E}_n f_\omega (X)- \E f_\omega (X) \right\} \\ &\leq 2 \sup_{f_\omega \in \mathcal{F}} \left\{ \widehat{\E}_n f_\omega (X)- \E f_\omega (X) \right\} + \sup_{f_\omega \in \mathcal{F}} \left\{ \widehat{\E}_m f_\omega \circ g_{\theta}(Z) - \E f_\omega \circ g_{\theta}(Z) \right\} \\ &\quad \quad + \sup_{f_\omega \in \mathcal{F}} \left\{ \E
f_\omega \circ g_{\widehat{\theta}}(Z) -\widehat{\E}_m f_\omega \circ g_{\widehat{\theta}}(Z) \right\} + \sup_{f_\omega \in \mathcal{F}} \left\{\E f_\omega \circ g_{\theta}(Z) - \E f_\omega (X) \right\} \end{align*} where the last step uses the fact that if $f_\omega \in \mathcal{F}$ then $-f_\omega \in \mathcal{F}$. As the above holds for any $\theta$ such that $g_\theta \in \mathcal{G}$, we then know (by moving the last term to the LHS) \begin{align*} &d_{\mathcal{F}}\left( \mu_{\widehat{\theta}_{m,n}}, \nu \right) - d_{\mathcal{F}}(\mu_\theta, \nu)\\ &\leq 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_\theta, \mu_\theta \right) +\sup_{f_\omega \in \mathcal{F}} \left\{ \E f_\omega \circ g_{\widehat{\theta}}(Z) -\widehat{\E}_m f_\omega \circ g_{\widehat{\theta}}(Z) \right\} \\ & \leq 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_\theta, \mu_\theta \right) +\sup_{f_\omega \in \mathcal{F}, g_\theta \in \mathcal{G}} \left\{ \E f_\omega \circ g_{\theta}(Z) -\widehat{\E}_m f_\omega \circ g_{\theta}(Z) \right\} \\ & \leq 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_\theta, \mu_\theta \right) + d_{\mathcal{F} \circ \mathcal{G}}(\widehat{\pi}^m, \pi). \end{align*} Here the second-to-last step uses the fact that $g_{\widehat{\theta}} \in \mathcal{G}$. \end{proof} \subsection{Minimax Optimal Rates} We start with an equivalent definition of the Sobolev space $W^{\alpha, q}(r)$ for $q=2$, through the coefficients of the Fourier series; the resulting coefficient set is also called the Sobolev ellipsoid. The definition (for $q=2$) naturally extends to non-integer $\alpha \in \mathbb{R}_{>0}$ through the Bessel potential. Let $\mathbf{F}[f](\xi)$ denote the Fourier transform of $f(x)$, and $\mathbf{F}^{-1}$ its inverse.
\begin{defn} \label{def:sobolev-frac} For $\alpha \in \mathbb{R}_{>0}$, the definition of the Sobolev space $W^{\alpha, 2}(r)$ extends to non-integer $\alpha$ as \begin{align*} W^{\alpha}(r) := \left\{ f : \Omega \rightarrow \mathbb{R}: \left\| \mathbf{F}^{-1}\left[ (1+|\xi|^2)^{\frac{\alpha}{2}} \mathbf{F} [f](\xi) \right] \right\|_2 \leq r \right\}. \end{align*} \end{defn} \begin{defn}[Sobolev ellipsoid] \label{eq:sobolev-ellipsoid} Let $\theta = \{\theta_\xi, \xi = (\xi_1, \ldots, \xi_d) \in \mathbb{N}^d \}$ collect the coefficients of the Fourier series, and define \begin{align*} \Theta^{\alpha}(r) := \left\{ \theta : \mathbb{N}^d \rightarrow \mathbb{R} : \sum_{\xi \in \mathbb{N}^d} (1+\sum_{i=1}^d \xi_i^2 )^{\alpha} \theta^2_{\xi} \leq r^2 \right\}. \end{align*} \end{defn} It is clear that $\Theta^\alpha(r)$ (frequency domain) is an equivalent representation of $W^{\alpha}(r)$ (spatial domain, Def.~\ref{def:sobolev-frac}) in $\ell^2(\mathbb{N}^d)$ via the trigonometric Fourier series. For more details on Sobolev spaces, we refer the readers to \cite*{nemirovski2000topics, tsybakov2009introduction, nickl2007bracketing}. \begin{proof}[Proof of Theorem~\ref{thm:optimal-rates-sobolev}] The proof consists of three main parts: the upper bound, the nonparametric minimax lower bound, and the parametric lower bound. In the proof, for simplicity, we only consider $\alpha, \beta \in \mathbb{N}_{\geq 0}$; the extension to $\mathbb{R}_{\geq 0}$ follows the same proof idea. \paragraph{\bf Step 1: upper bound} Recall that the base measure $\pi(x)$ is the uniform measure on $[0,1]^d$ (Lebesgue measure). For the density $\nu(x)$ w.r.t. the Lebesgue measure, we can represent it in the Fourier trigonometric series form \begin{align*} \nu(x) = \sum_{\xi \in \mathbb{N}^d} \theta_{\xi}(\nu) \psi_{\xi}(x),\quad \text{$\theta(\nu)$ denotes the coefficients of $\nu$} \end{align*} with the tensorized basis $\psi_{\xi}(x) = \prod_{i=1}^d \psi_{\xi_i}(x_i)$.
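As a small numerical illustration of Def.~\ref{eq:sobolev-ellipsoid} (a sketch of ours, not part of the proof; $d = 1$), the ellipsoid norm is straightforward to evaluate for a finite coefficient vector:

```python
import numpy as np

# Illustrative sketch (ours, d = 1): the Sobolev-ellipsoid norm
# ( sum_xi (1 + xi^2)^alpha theta_xi^2 )^{1/2};
# theta belongs to Theta^alpha(r) iff this norm is at most r.
def ellipsoid_norm(theta, alpha):
    xi = np.arange(len(theta))
    return float(np.sqrt(np.sum((1 + xi**2) ** alpha * theta**2)))

# A single unit coefficient at frequency xi = 1 gives norm (1 + 1)^{alpha/2}.
print(ellipsoid_norm(np.array([0.0, 1.0]), alpha=1.0))  # sqrt(2)
```

Larger $\alpha$ penalizes high-frequency coefficients more heavily, which is exactly how smoothness is encoded in the frequency domain.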
We construct the following estimator $\widetilde{\nu}_n$, with a cut-off parameter $M$ to be determined later, \begin{align*} \widetilde{\nu}_n(x):= \sum_{\xi \in \mathbb{N}^d} \widetilde{\theta}_{\xi}(\nu) \psi_{\xi}(x), \end{align*} where, based on i.i.d. samples $X^{(1)}, X^{(2)}, \ldots, X^{(n)} \sim \nu$, \begin{equation*} \widetilde{\theta}_{\xi}(\nu) := \begin{cases} \frac{1}{n} \sum_{j=1}^n \prod_{i=1}^d \psi_{\xi_i}(X^{(j)}_i), & \text{for $\xi$ satisfying}~ \|\xi\|_{\infty} \leq M \\ 0, &\text{otherwise} \end{cases}. \end{equation*} Note $\widetilde{\nu}_n$ filters out all the high-frequency (less smooth) components, i.e. those whose multi-index $\xi$ has a coordinate larger than $M$. Similarly, expand the discriminator function $f \in \mathcal{F}$ in the same Fourier basis, $$ f(x) = \sum_{\xi \in \mathbb{N}^d} \theta_{\xi}(f) \psi_{\xi}(x). $$ Recalling the Sobolev ball of Def.~\ref{eq:sobolev-ellipsoid}, for any $\nu(x) \in W^{\alpha}(r)$ we have for the estimator $\widetilde{\nu}_n$ \begin{align*} \E d_{\mathcal{F}}(\nu, \widetilde{\nu}_n) &= \E \sup_{f \in \mathcal{F}} \int f(x) \left(\nu(x) - \widetilde{\nu}_n(x) \right) dx \\ &= \E \sup_{f \in \mathcal{F}} \sum_{\xi \in \mathbb{N}^d} \theta_{\xi}(f) \left( \widetilde{\theta}_{\xi}(\nu) - \theta_{\xi}(\nu) \right) \\ &= \E \sup_{f \in \mathcal{F}} \left\{ \sum_{\xi \in [M]^d} \theta_{\xi}(f) \left( \widetilde{\theta}_{\xi}(\nu) - \theta_{\xi}(\nu) \right) + \sum_{\xi \in \mathbb{N}^d \backslash [M]^d} \theta_{\xi}(f) \theta_{\xi}(\nu) \right\} \\ &\leq \E \sup_{f \in \mathcal{F}} \sum_{\xi \in [M]^d} \theta_{\xi}(f) \left( \widetilde{\theta}_{\xi}(\nu) - \theta_{\xi}(\nu) \right) + \E \sup_{f \in \mathcal{F}} \sum_{\xi \in \mathbb{N}^d \backslash [M]^d} \theta_{\xi}(f) \theta_{\xi}(\nu).
\end{align*} For the truncated first term, we know \begin{align} & \E \sup_{f \in \mathcal{F}} \sum_{\xi \in [M]^d} \theta_{\xi}(f) \left( \widetilde{\theta}_{\xi}(\nu) - \theta_{\xi}(\nu) \right) \nonumber\\ & \leq \E \sup_{f \in \mathcal{F}} \left\{\sum_{\xi \in [M]^d} ( 1+\|\xi\|_2^2 )^{\beta} \theta^2_{\xi}(f) \right\}^{1/2} \left\{\sum_{\xi \in [M]^d} ( 1+\|\xi\|_2^2 )^{-\beta} \left( \widetilde{\theta}_{\xi}(\nu) - \theta_{\xi}(\nu) \right)^2 \right\}^{1/2} \nonumber\\ & \leq \E \left\{\sum_{\xi \in [M]^d} ( 1+\|\xi\|_2^2 )^{-\beta} \left( \widetilde{\theta}_{\xi}(\nu) - \theta_{\xi}(\nu) \right)^2 \right\}^{1/2} \quad\text{(as $\sup_{f \in \mathcal{F}} \sum_{\xi \in [M]^d} ( 1+\|\xi\|_2^2 )^{\beta} \theta^2_{\xi}(f) \leq 1$)} \label{eq:improvement} \\ & \leq \left\{ \sum_{\xi \in [M]^d} ( 1+\|\xi\|_2^2 )^{-\beta} \E \left( \widetilde{\theta}_{\xi}(\nu) - \theta_{\xi}(\nu) \right)^2 \right\}^{1/2} \quad \text{(Jensen's inequality)} \nonumber \\ & \leq \sqrt{C_{d, \beta} \frac{M^{d - 2\beta} \vee 1}{n} } \nonumber \end{align} where the last line uses $\E \left( \widetilde{\theta}_{\xi}(\nu) - \theta_{\xi}(\nu) \right)^2 \leq \frac{1}{n} \E_{X \sim \nu} \psi^2_{\xi}(X) \leq \frac{1}{n}$, which holds for the trigonometric series and any multi-index $\xi$.
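For concreteness, the coefficient estimator $\widetilde{\theta}_{\xi}$ and the $O(1/n)$ variance bound above can be sketched numerically (a toy of ours with $d = 1$; the particular ordering of the real trigonometric basis below is our own convention):

```python
import numpy as np

# Minimal sketch (ours, d = 1) of the truncated coefficient estimator:
# empirical Fourier coefficients from n i.i.d. samples, cut off at level M.
rng = np.random.default_rng(0)
n, M = 5000, 10
X = rng.random(n)  # samples from nu = Uniform[0,1]

def psi(j, x):
    # real trigonometric basis: psi_0 = 1, then sqrt(2) sin / cos pairs
    if j == 0:
        return np.ones_like(x)
    k = (j + 1) // 2
    trig = np.sin if j % 2 == 1 else np.cos
    return np.sqrt(2.0) * trig(2 * np.pi * k * x)

theta_hat = np.array([psi(j, X).mean() for j in range(2 * M + 1)])
# For the uniform density, theta_0 = 1 exactly, and every higher coefficient
# has mean zero with variance at most of order 1/n.
print(theta_hat[0], float(np.abs(theta_hat[1:]).max()))
```

The printed maximum over the nonconstant coefficients is on the order of $n^{-1/2} \approx 0.014$ here, consistent with the variance bound used in the display above.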
In addition, simple calculus shows that $$\sum_{\xi \in [M]^d} ( 1+\|\xi\|_2^2 )^{-\beta} \leq C'_{d, \beta} \int_{0}^{\sqrt{d}M} \frac{r^{d-1}}{(1+r^2)^\beta} dr \leq C_{d, \beta} \left(M^{d - 2\beta} \vee 1 \right).$$ For the second term, the following inequality holds \begin{align*} &\E \sup_{f \in \mathcal{F}} \sum_{\xi \in \mathbb{N}^d \backslash [M]^d} \theta_{\xi}(f) \theta_{\xi}(\nu) \leq \E \sup_{f \in \mathcal{F}} \left\{\sum_{\xi \in \mathbb{N}^d \backslash [M]^d} \theta^2_{\xi}(f) \right\}^{1/2} \cdot \left\{\sum_{\xi \in \mathbb{N}^d \backslash [M]^d} \theta^2_{\xi}(\nu) \right\}^{1/2} \\ &\leq \sup_{f \in \mathcal{F}} \left\{ (1+M^2)^{-\beta} \sum_{\xi \in \mathbb{N}^d \backslash [M]^d} (1+ \| \xi\|_2^2)^{\beta}\theta^2_{\xi}(f) \right\}^{1/2} \left\{ (1+M^2)^{-\alpha} \sum_{\xi \in \mathbb{N}^d \backslash [M]^d} (1+ \| \xi\|_2^2)^{\alpha} \theta^2_{\xi}(\nu) \right\}^{1/2} \\ &\leq r \sqrt{\frac{1}{M^{2(\alpha+\beta)}}}, \end{align*} where the second line uses that any $\xi \notin [M]^d$ has $\|\xi\|_\infty > M$, hence $(1+\|\xi\|_2^2)^{-1} \leq (1+M^2)^{-1}$. Combining the two terms, we have for any $\nu \in \mathcal{G}$, with the optimal choice of $M \asymp n^{\frac{1}{2\alpha+d}}$, \begin{align} \sup_{\nu\in \mathcal{G}} \E d_{\mathcal{F}}(\nu, \widetilde{\nu}_n) &\leq \inf_{M \in \mathbb{N}} \left\{ \sqrt{C \frac{M^{d-2\beta} \vee 1}{n} } + r \sqrt{\frac{1}{M^{2(\alpha+\beta)}}} \right\} \label{eq:l_a_l_b}\\ &\precsim n^{-\frac{\alpha+\beta}{2\alpha+d}} \vee n^{-\frac{1}{2}}. \nonumber \end{align} Let us now establish the lower bound. Again we take $\Omega = [0,1]^d$ as the domain, the same as in the upper bound. \paragraph{\bf Step 2: nonparametric lower bound} The main idea behind the proof is to reduce the estimation problem to a multiple hypothesis testing problem that is at least as hard. In this proof, it turns out the H\"{o}lder space $W^{\alpha,\infty}$ --- which is a subspace of the Sobolev space $W^{\alpha}$ --- suffices for the minimax lower bound. First, we need to construct multiple hypotheses $\nu$ that are valid densities in $W^{\alpha, \infty}(1)$.
Specify a kernel function $K(u) = (a_1 \exp(-\frac{1}{1 - 4u^2}) - a_2) I(|u|<1/2), u \in \mathbb{R}$ for some small fixed $a_1, a_2>0$ to ensure that $K(x) \in W^{\alpha \vee \beta, \infty}(1)$, and $\int K(u) du = 0$. Let $m$ be a parameter (that depends on the sample size $n$) to be determined later, and denote $h_m = 1/m$. Define the hypothesis class to be (of cardinality $2^{m^d}$) \begin{align*} \Omega_\alpha = \left\{ g_w(x) = 1 + \sum_{\xi \in [m]^d} w_\xi h_m^{\alpha} \varphi_\xi(x) , w \in \{0,1\}^{m^d} \right\} , \\ \Lambda_\beta = \left\{ f_v(x) = \sum_{\xi \in [m]^d} v_\xi h_m^{\beta} \varphi_\xi(x), v \in \{-1,1\}^{m^d} \right\}, \end{align*} where \begin{align*} \varphi_\xi(x) &= \prod_{i=1}^d K\left(\frac{x_i - \frac{\xi_i - 1/2}{m}}{h_m} \right). \end{align*} Let us verify (1) $\Omega_\alpha \subset W^{\alpha, \infty}(r)$ for some $r$, and that each element in the hypothesis set is a valid density; (2) $\Lambda_\beta \subset W^{\beta, \infty}(1)$. To start, for any multi-index $\gamma$ such that $|\gamma| \leq \alpha$, \begin{align*} \| D^{(\gamma)} g_w \|_{\infty} \leq \sup_{\xi \in [m]^d} h_m^{\alpha} \| D^{(\gamma)} \varphi_\xi \|_{\infty} = h_m^{\alpha - |\gamma|} \| D^{(\gamma)} K(u) \|_{\infty} \leq h_m^{\alpha - |\gamma|} \leq 1. \end{align*} Similarly, for any $\gamma$ with $|\gamma| \leq \beta$, we know $$\| D^{(\gamma)} f_v(x) \|_{\infty} \leq h_m^{\beta - |\gamma|} \leq 1.$$ We also need to bound $\| g_w \|_{\infty}$: for any $w$, \begin{align} \label{eq:infty-bound} \| g_w \|_{\infty} \leq 1 + h_m^{\alpha} \sup_{\xi \in [m]^d} \| \varphi_\xi(x) \|_\infty \leq 1 + h_m^{\alpha} \leq 1 + 1/100, \end{align} as long as $m$ is large enough. So far we have shown $\Omega_\alpha \subset W^{\alpha, \infty}(r)$ and $\Lambda_\beta \subset W^{\beta, \infty}(1)$.
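As a numerical aside (ours, not needed for the argument), the requirement $\int K(u)\,du = 0$ pins down the ratio $a_2/a_1$: since the indicator's support has length one, $a_2$ must equal $a_1 \int e^{-1/(1-4u^2)}\,du$. A quick check:

```python
import numpy as np

# Illustrative sketch (ours): pick a_2 so that
# K(u) = (a_1 e^{-1/(1-4u^2)} - a_2) 1(|u| < 1/2) integrates to zero.
u = np.linspace(-0.5 + 1e-9, 0.5 - 1e-9, 400_001)
bump = np.exp(-1.0 / (1.0 - 4.0 * u**2))  # underflows smoothly to 0 at the edges

def trap(y):  # trapezoid rule on the grid u
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u)))

a1 = 0.01                   # small, as in the construction above
a2 = a1 * trap(bump)        # support has length 1, so int K = a1 * I - a2
K = a1 * bump - a2
print(abs(trap(K)) < 1e-8)  # True: K integrates to (numerically) zero
```

Small $a_1$ (hence small $a_2$) also keeps the derivative bounds $\|D^{(\gamma)} K\|_\infty \leq 1$ required above.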
Last, we can check that $g_w$ is a proper density: we know $g_w(x) \geq 0$, and \begin{align*} \int \varphi_\xi(x) dx = \prod_{i=1}^d \int K\left(\frac{x_i - \frac{\xi_i - 1/2}{m}}{h_m} \right) dx_i = 0, \\ \int g_{w}(x) dx = 1+ \sum_{\xi \in [m]^d} w_\xi h_m^{\alpha} \int \varphi_\xi(x) dx = 1. \end{align*} To select hypotheses within $\Omega_\alpha$ that are hard to distinguish based on finite samples, we use the Varshamov-Gilbert construction in conjunction with Fano's inequality (we use the version in Lemma \ref{lem:fano}). The technical step is to construct multiple hypotheses that are separated w.r.t. the adversarial loss, and then to show that these hypotheses are close in a statistical sense. By the Varshamov-Gilbert construction (Lemma 2.9 in \cite{tsybakov2009introduction}), we know that there exists a subset $\{ w^{(0)}, \ldots, w^{(H)} \} \subset \{0,1\}^{h}$ such that $w^{(0)} = (0, \ldots, 0)$, \begin{align*} &\rho(w^{(j)}, w^{(k)}) \geq \frac{h}{8}, ~\forall~j, k\in [H],~ j\neq k, \\ &\log H \geq \frac{h}{8} \log 2, \end{align*} where $\rho(w, w')$ denotes the Hamming distance between $w$ and $w'$ on the hypercube. In our case $h = m^d$. For the loss function, for any $w, w' \in \{ w^{(0)}, \ldots, w^{(H)} \}$, \begin{align*} d_\mathcal{F}(g_w, g_{w'}) &:= \sup_{f \in W^{\beta}(1)} \int f(x) g_w(x) dx - \int f(x) g_{w'}(x) dx \\ & \geq \sup_{f \in W^{\beta, \infty}(1)} \int f(x) g_w(x) dx - \int f(x) g_{w'}(x) dx \\ & \geq \sup_{f \in \Lambda_{\beta}} \int f(x) \left(g_w(x) - g_{w'}(x) \right) dx \\ & = \sup_{v \in \{-1, +1\}^{m^d}} h_m^{\alpha +\beta} \sum_{\xi\in [m]^d} v_{\xi} (w_{\xi} - w_{\xi}') \int \varphi_\xi^2(x) dx \\ & = h_m^{\alpha +\beta + d} \sum_{\xi \in [m]^d} I(w_\xi \neq w_{\xi}') \int \prod_{i=1}^d K^2\left(u_i \right) du \\ &\geq c_{a_1, a_2} h_m^{\alpha +\beta + d} \rho(w, w') \geq c_{a_1, a_2} \frac{m^d}{8} h_m^{\alpha +\beta + d} \asymp h_m^{\alpha +\beta}. \end{align*} Now let us show that, based on $n$ i.i.d.
data generated from the density $g_w(x)$, it is hard to distinguish the hypotheses. Note that for $|t| < 1/50$, $\log(1+t) \geq t - t^2$. Recalling \eqref{eq:infty-bound}, we know $\| (g_w(x) - g_{0}(x))/g_w(x)\|_{\infty} \leq \frac{1/100}{1- 1/100} \leq 1/50$. Therefore \begin{align*} d_{KL}\left(P_{w^{(j)}}^{\otimes n}, P_{w^{(0)}}^{\otimes n}\right) & = n \cdot d_{KL}\left(P_{w^{(j)}}, P_{w^{(0)}}\right) \\ & = n \int - \log \left( 1 + \frac{g_{0} - g_{w^{(j)}}}{g_{w^{(j)}}} \right) g_{w^{(j)}} dx \\ & \leq n \int \frac{(g_{0} - g_{w^{(j)}})^2}{g_{w^{(j)}}} dx \leq 1.01 n \sum_{\xi \in [m]^d} \int h_m^{2\alpha} \varphi_\xi^2(x) dx \\ & \leq 1.01 n \sum_{\xi \in [m]^d} \int h_m^{2\alpha+d} \prod_{i=1}^d K^2\left(u_i \right) du \precsim n h_m^{2\alpha +d} m^d. \end{align*} Therefore, if we choose $m \asymp n^{\frac{1}{2\alpha + d}}$, we know $$\frac{1}{H} \sum_{j=1}^H d_{KL}(P_{w^{(j)}}^{\otimes n}, P_{w^{(0)}}^{\otimes n}) \leq c \log H,$$ since $\log H \asymp m^d$. Using Fano's inequality, the lower bound for density estimation is of the order $h_m^{\alpha+\beta} \asymp n^{-\frac{\alpha+\beta}{2\alpha+d}}$, as \begin{align*} \inf_{\widetilde{\nu}_n} \sup_{\nu \in W^{\alpha}(r)}\E d_\mathcal{F}\left(\widetilde{\nu}_n, \nu \right) &\geq \inf_{\hat{g}} \sup_{g \in W^{\alpha,\infty}(r)} \E \sup_{f \in W^{\beta,\infty}(1)} \int f(x)\left( \hat{g}(x) - g(x) \right) dx \\ &\geq \inf_{\hat{w}} \sup_{w \in \{w^{(0)},\ldots, w^{(H)}\}} \E d_\mathcal{F}\left(g_{\hat{w}}, g_{w}\right)\\ &\geq c h_m^{\alpha +\beta} \cdot \inf_{\hat{w}} \sup_{w \in \{w^{(0)},\ldots, w^{(H)}\}} P_w\left( d_\mathcal{F}\left(g_{\hat{w}}, g_{w}\right) \geq c h_m^{\alpha +\beta} \right) \\ & \geq c h_m^{\alpha +\beta} \frac{\sqrt{H}}{1+\sqrt{H}} \left( 1 - 2c' - \sqrt{\frac{2c'}{\log H}} \right) \quad \text{(Lemma~\ref{lem:fano})} \\ &\geq c n^{-\frac{\alpha+\beta}{2\alpha + d}}.
\end{align*} \paragraph{\bf Step 3: parametric lower bound} The parametric rate lower bound $n^{-1/2}$ can be obtained by the following reduction to a two-point hypothesis testing problem. Consider the uniform measure $\nu_0(x) = 1$ for $x \in [0, 1]^d$, and \begin{align*} \nu_1(x) = \left\{ \begin{matrix} 3/2, & 0\leq x(1) <2/n \\ 1/2, & 2/n \leq x(1)<4/n \\ 1, & \text{o.w.} \end{matrix}\right. . \end{align*} One can verify that both $\nu_0, \nu_1$ are valid densities on $[0, 1]^d$ with \begin{align*} d_{\chi^2}(\nu_1^{\otimes n}, \nu_0^{\otimes n}) = (1 + d_{\chi^2}(\nu_1, \nu_0))^n - 1 = (1+1/n)^n - 1 \leq e-1. \end{align*} Therefore, by Pinsker's inequality together with $d_{KL} \leq d_{\chi^2}$, \begin{align*} d_{TV}(\nu_1^{\otimes n}, \nu_0^{\otimes n}) \leq \sqrt{d_{\chi^2}(\nu_1^{\otimes n}, \nu_0^{\otimes n})/2} \leq \sqrt{(e-1)/2}. \end{align*} It is clear that both $\nu_0, \nu_1 \in W^{\infty}(r) \subset W^{\alpha}(r)$ for any $\alpha$, with some proper constant $r$. In addition, we know $\nu_0(x) - \nu_1(x) \in W^{\infty}(1/\sqrt{n})$. Hence, by Le Cam's method (Lemma 4 in \cite{cai2015law}), for any $\widetilde{\nu}_n$ \begin{align*} \sup_{\nu \in W^{\alpha}(r)} \E d_\mathcal{F}\left(\widetilde{\nu}_n, \nu \right) &\geq \sup_{\nu \in \{ \nu_0, \nu_1 \}} \E d_\mathcal{F}\left(\widetilde{\nu}_n, \nu \right) \\ & \geq c \cdot d_{\mathcal{F}}(\nu_0, \nu_1) (1 - d_{TV}(\nu_1^{\otimes n}, \nu_0^{\otimes n})) \\ & \geq c' \cdot d_{W^{\infty}(1)}(\nu_0, \nu_1) \geq c' \frac{1}{\sqrt{n}} \end{align*} where the last step is by choosing $f(x) = \sqrt{n} [\nu_0(x) - \nu_1(x)] \in W^{\infty}(1) \subseteq \mathcal{F} = W^{\beta}(1)$. The proof is complete. \end{proof} \subsection{Rates for Neural Networks} \begin{proof}[Proof of Theorem~\ref{thm:param-gans}] The proof consists of three steps. Throughout this proof we write $\int$ for $\int_{\Omega}$, as there is no risk of confusion.
\paragraph{\bf Step 1: $A_1(\mathcal{F}, \mathcal{G}, \nu)$ approximation term} For any distribution $g_{\widehat{\theta}_{m,n}}(Z)$ (we abbreviate $\widehat{\theta}_{m,n}$ as $\widehat{\theta}$ in this proof), by Pinsker's inequality (Lemma~\ref{lem:pinsker-bobkov}), \begin{align*} d_{TV}^2\left(X, g_{\widehat{\theta}}(Z) \right) \leq \frac{1}{2} d_{KL}\left(X||g_{\widehat{\theta}}(Z) \right). \end{align*} The above implies that for any $X \sim \nu$ \begin{align*} 4 d_{TV}^2\left(X, g_{\widehat{\theta}}(Z) \right) &\leq d_{KL}\left(X||g_{\widehat{\theta}}(Z) \right) + d_{KL}\left(g_{\widehat{\theta}}(Z)||X \right) \\ &= \int \log \frac{\nu(x)}{\mu_{\widehat{\theta}}(x)} \left(\nu(x) - \mu_{\widehat{\theta}}(x) \right) dx \quad \text{(for any $f_{\omega} \in \mathcal{F}$)} \\ &= \int \left(\log \frac{\nu(x)}{\mu_{\widehat{\theta}}(x)} - f_\omega(x) \right) \left( \nu(x) - \mu_{\widehat{\theta}}(x) \right) dx + \int f_\omega(x) \left( \nu(x) - \mu_{\widehat{\theta}}(x) \right) dx \\ &\leq \int \left(\log \frac{\nu(x)}{\mu_{\widehat{\theta}}(x)} - f_\omega(x) \right) \left( \nu(x) - \mu_{\widehat{\theta}}(x) \right) dx + d_{\mathcal{F}}\left( \mu_{\widehat{\theta}}, \nu \right) \\ &\leq \left\| \log \frac{\nu(x)}{\mu_{\widehat{\theta}}(x)} - f_\omega(x) \right\|_{\infty} \left\| \nu(x) - \mu_{\widehat{\theta}}(x) \right\|_1 + d_{\mathcal{F}}\left( \mu_{\widehat{\theta}}, \nu \right) \\ &\leq 2 \left\| \log \frac{\nu}{\mu_{\widehat{\theta}}} - f_\omega \right\|_{\infty} + d_{\mathcal{F}}\left( \mu_{\widehat{\theta}}, \nu \right) \end{align*} where the last line is due to the fact that $\mu_{\widehat{\theta}}, \nu(x)$ are both proper densities, so $\left\| \nu(x) - \mu_{\widehat{\theta}}(x) \right\|_1 \leq 2$. 
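The combined two-sided Pinsker step used above, $4\, d_{TV}^2 \leq d_{KL}(\nu\|\mu) + d_{KL}(\mu\|\nu)$, is easy to verify numerically on discrete distributions (an illustration of ours):

```python
import numpy as np

# Numerical check (ours) of 4 d_TV^2 <= d_KL(nu||mu) + d_KL(mu||nu),
# i.e. Pinsker's inequality applied in both directions and summed.
rng = np.random.default_rng(1)
nu = rng.random(10); nu /= nu.sum()   # two random probability vectors
mu = rng.random(10); mu /= mu.sum()

tv = 0.5 * float(np.abs(nu - mu).sum())
# KL(nu||mu) + KL(mu||nu) collapses to a single symmetric sum:
sym_kl = float(np.sum((nu - mu) * np.log(nu / mu)))
print(4 * tv**2 <= sym_kl)  # True
```

The symmetrized form is what makes the integrand $\log(\nu/\mu_{\widehat{\theta}})\,(\nu - \mu_{\widehat{\theta}})$ appear in the display above.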
Taking $f_{\omega}$ to minimize the first term on the RHS, we have \begin{align*} 4 d_{TV}^2\left(X, g_{\widehat{\theta}}(Z) \right) \leq 2 \inf_{f_\omega\in \mathcal{F}} \left\| \log \frac{\nu}{\mu_{\widehat{\theta}}} - f_\omega \right\|_{\infty} + d_{\mathcal{F}}\left( \mu_{\widehat{\theta}}, \nu \right). \end{align*} \paragraph{\bf Step 2: oracle inequality and $A_2(\mathcal{G}, \nu)$ approximation term} Now, let us apply the oracle approach developed in Lemma~\ref{lem:oracle-ineq-general} to $d_{\mathcal{F}}\left( \mu_{\widehat{\theta}}, \nu \right)$. For any $\theta$ such that $g_\theta \in \mathcal{G}$, we know \begin{align*} d_{\mathcal{F}}\left( \mu_{\widehat{\theta}}, \nu \right) &\leq d_{\mathcal{F}}(\mu_\theta, \nu) + 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_\theta, \mu_\theta \right) + d_{\mathcal{F} \circ \mathcal{G}}(\widehat{\pi}^m, \pi) \\ &\leq B d_{TV}(\mu_\theta, \nu) + 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_\theta, \mu_\theta \right) + d_{\mathcal{F} \circ \mathcal{G}}(\widehat{\pi}^m, \pi)\\ & \leq B \sqrt{ \frac{1}{4} \left[ d_{KL}\left(\mu_\theta || \nu \right) + d_{KL}\left(\nu||\mu_\theta \right) \right] } \\ &\quad\quad + 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_\theta, \mu_\theta \right) + d_{\mathcal{F} \circ \mathcal{G}}(\widehat{\pi}^m, \pi) \\ &\leq B \sqrt{ \frac{1}{4} \left\| \log \frac{\mu_\theta}{\nu} \right\|_{\infty} \|\mu_\theta - \nu\|_1 } + 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_\theta, \mu_\theta \right) + d_{\mathcal{F} \circ \mathcal{G}}(\widehat{\pi}^m, \pi) \end{align*} where the second line uses the fact that for any $f \in \mathcal{F}$, $\| f \|_\infty \leq B$.
\paragraph{\bf Step 3: the stochastic term $S_{m,n}(\mathcal{F}, \mathcal{G})$ by empirical processes} Assembling the bounds, we have for any $\theta$ \begin{align*} 4 d_{TV}^2\left(\nu, \mu_{\widehat{\theta}_{m,n}} \right) & \leq 2 \inf_{\omega} \left\| \log \frac{\nu}{\mu_{\widehat{\theta}}} - f_\omega \right\|_{\infty} + B \sqrt{ \frac{1}{2} \left\| \log \frac{\mu_\theta}{\nu} \right\|_{\infty} } \\ & \quad \quad + 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_\theta, \mu_\theta \right) + d_{\mathcal{F} \circ \mathcal{G}}(\widehat{\pi}^m, \pi). \end{align*} Therefore, choosing $\theta_\star$ to minimize $\left\| \log \frac{\mu_\theta}{\nu} \right\|_{\infty}$ over the generator class, \begin{align*} \E d_{TV}^2\left(\nu, \mu_{\widehat{\theta}_{m,n}} \right) & \leq \frac{1}{2} \E\left\{ \inf_{\omega} \left\| \log \frac{\nu}{\mu_{\widehat{\theta}}} - f_\omega \right\|_{\infty} \right\} + \frac{B}{4\sqrt{2}} \sqrt{ \inf_\theta \left\| \log \frac{\mu_\theta}{\nu} \right\|_{\infty} } \\ & \quad \quad + \E \left\{ 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_{\theta_\star}, \mu_{\theta_\star} \right) + d_{\mathcal{F} \circ \mathcal{G}}(\widehat{\pi}^m, \pi) \right\} \\ &\leq \frac{1}{2} \sup_{\theta} \inf_{\omega} \left\| \log \frac{\nu}{\mu_{\theta}} - f_\omega \right\|_{\infty} + \frac{B}{4\sqrt{2}} \inf_\theta \left\| \log \frac{\mu_\theta}{\nu} \right\|_{\infty}^{1/2} \\ & \quad \quad + \E \left\{ 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_{\theta_\star}, \mu_{\theta_\star} \right) + d_{\mathcal{F} \circ \mathcal{G}}(\widehat{\pi}^m, \pi) \right\}.
\end{align*} Recall the symmetrization in Lemma~\ref{lem:symmetrization}, \begin{align*} &\E \left\{ 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_{\theta_\star}, \mu_{\theta_\star} \right) + d_{\mathcal{F} \circ \mathcal{G}}(\widehat{\pi}^m, \pi) \right\} \\ & \leq 4 \E \mathcal{R}_n \left(\mathcal{F}\right) + 2\E \mathcal{R}_m \left( \mathcal{F} \right) + 2 \E \mathcal{R}_{m} \left( \mathcal{F} \circ \mathcal{G} \right) \\ & \leq C \sqrt{\text{Pdim}(\mathcal{F}) \left( \frac{\log m}{m} \vee \frac{\log n}{n} \right)} + C \sqrt{ \text{Pdim}(\mathcal{F} \circ \mathcal{G}) \frac{\log m}{m} }, \end{align*} where the last step uses the relationship between Rademacher complexity and pseudo-dimension, shown in Lemma~\ref{lem:rad-vc}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:param-gans-hellinger}] Due to Le Cam's inequality (Lemma 2.3 in \cite{tsybakov2009introduction}), we know \begin{align*} d_{TV}^2\left(X, g_{\widehat{\theta}}(Z) \right) & \leq d_{H}^2\left(X, g_{\widehat{\theta}}(Z) \right) = \int \left( \sqrt{\nu(x)} - \sqrt{\mu_{\widehat{\theta}}(x)} \right)^2 dx \\ &= \int \frac{\sqrt{\nu} - \sqrt{\mu_{\widehat{\theta}}}}{\sqrt{\nu}+ \sqrt{\mu_{\widehat{\theta}}}} \left(\nu - \mu_{\widehat{\theta}} \right) dx \quad \text{for any $f_{\omega} \in \mathcal{F}$} \\ & \leq 2 \left\| \frac{\sqrt{\nu} - \sqrt{\mu_{\widehat{\theta}}}}{\sqrt{\nu}+ \sqrt{\mu_{\widehat{\theta}}}} - f_{\omega} \right\|_{\infty} + d_{\mathcal{F}}\left( \mu_{\widehat{\theta}}, \nu \right). \end{align*} Due to the oracle inequality Lemma~\ref{lem:oracle-ineq-general}, one has for any $\theta$ \begin{align*} d_{\mathcal{F}}\left( \mu_{\widehat{\theta}}, \nu \right) &\leq d_{\mathcal{F}}(\mu_\theta, \nu) + 2 d_{\mathcal{F}}\left( \widehat{\nu}^n, \nu \right) + d_{\mathcal{F}}\left( \widehat{\mu}^m_\theta, \mu_\theta \right) + d_{\mathcal{F} \circ \mathcal{G}}(\widehat{\pi}^m, \pi). 
\end{align*} For the first term, we can further upper bound: \begin{align*} d_{\mathcal{F}}(\mu_\theta, \nu) &\leq B d_{TV}(\mu_\theta, \nu) \leq B\, d_{H} \left(\mu_\theta, \nu \right) \\ &\leq 2B \left\| \frac{\sqrt{\nu} - \sqrt{\mu_{\theta}}}{\sqrt{\nu} + \sqrt{\mu_{\theta}}} \right\|_{\infty} \end{align*} where the last line follows because \begin{align*} d_{H} \left(\mu_\theta, \nu \right) &= \sqrt{ \int \left( \frac{\sqrt{\nu} - \sqrt{\mu_{\theta}}}{\sqrt{\nu} + \sqrt{\mu_{\theta}}} \right)^2 (\sqrt{\nu} + \sqrt{\mu_{\theta}})^2 dx } \\ & \leq \left\| \frac{\sqrt{\nu} - \sqrt{\mu_{\theta}}}{\sqrt{\nu} + \sqrt{\mu_{\theta}}} \right\|_{\infty} \sqrt{ \int 2 (\nu+\mu_\theta) dx}. \end{align*} The rest of the proof is exactly the same as in Thm.~\ref{thm:param-gans}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:leaky-ReLU}] The proof proceeds in three steps. \paragraph{\bf Step 1: recursive formula of generator density} Consider the generator network realized by a multi-layer perceptron: \begin{align*} h_1 &= \sigma(W_1 z + b_1) \\ & \ldots \\ h_l &= \sigma(W_l h_{l-1} + b_l) \\ & \ldots \\ x &= W_L h_{L-1} + b_L \enspace. \end{align*} Denote the parameter space of interest \begin{align} \label{eq:param-space-theta} \theta \in \Theta(d, L) := \{ (W_l \in \mathbb{R}^{d \times d}, b_l \in \mathbb{R}^d, 1\leq l \leq L) ~|~ {\rm rank}(W_l) = d, \forall 1\leq l\leq L \} . \end{align} Consider the density evolution from layer $l-1$ to layer $l$ (basic change of variables with Jacobian $\partial h_{l}/ \partial h_{l-1}$) \begin{align*} \log \mu_l(h_l) &= \log \mu_{l-1} (h_{l-1}) - \log |\det \left( \frac{\partial h_{l}}{\partial h_{l-1}} \right)| \\ &= \log \mu_{l-1} (h_{l-1}) - \log |\det W_l| - \sum_{i=1}^d \log |\sigma'( \sigma^{-1} (h_l(i)))|.
\end{align*} Recursively applying the above equality to track the density of $X$, we have \begin{align*} \log \mu_\theta(x) &= \log \mu_{L-1} (h_{L-1}) - \log |\det W_{L}|, \quad \text{where $h_{L-1} = W_{L}^{-1}(x - b_L)$} \\ &= \log \mu_{L-2} (h_{L-2}) - \sum_{j=L-1}^{L} \log |\det W_{j}| - \sum_{i=1}^d \log |\sigma'(\sigma^{-1}(h_{L-1}(i)))|, \\ & \ldots \quad \quad \text{where $h_{L-2} = W_{L-1}^{-1}(\sigma^{-1}(h_{L-1}) - b_{L-1})$}\\ & = \log \mu(z) - \sum_{j=1}^L \log |\det W_{j}| - \sum_{j=1}^{L-1} \sum_{i=1}^d \log |\sigma'(\sigma^{-1}(h_{j}(i)))|, \\ & \quad \quad \quad \text{where $z = W_1^{-1} (\sigma^{-1}(h_1) - b_1)$}. \end{align*} Now take $\mu(z) = 1$, the uniform measure on $z \in [0, 1]^d$. Consider the leaky ReLU activation $\sigma(t) = \max(t, at)$ for $0< a \leq 1$; then $\sigma^{-1}(t) = \min(t, t/a)$, and $\log |\sigma'(t)| = \log(a) \cdot 1_{t\leq 0} $. Let us consider the realizable case when $\log \nu(x) = \log \mu_{\theta_*}(x)$ for some $\theta_* \in \Theta(d, L)$. Denote $m_{l} := \sigma^{-1}(h_{L-l})$, for any $1\leq l \leq L-1$. Then it follows that \begin{align} \label{eq:m} m_1 &= \sigma^{-1}( W_{L}^{-1} x - W_{L}^{-1} b_L) \\ m_{l} &= \sigma^{-1}( W^{-1}_{L-l+1} m_{l-1} - W^{-1}_{L-l+1} b_{L-l+1} ), \quad 2\leq l \leq L-1. \end{align} Therefore, the density can be written out explicitly, \begin{align} \label{eq:realizable-density-MLP} \log \mu_\theta(x) &= - \sum_{j=1}^L \log |\det W_{j}| - \sum_{j=1}^{L-1} \sum_{i=1}^d \log \sigma'(m_{L-j}(i)) \\ &= - \sum_{j=1}^L \log |\det W_{j}| - \sum_{j=1}^{L-1} \sum_{i=1}^d \log \sigma'(m_{j}(i)). \end{align} In addition, we know that for any $\theta$ and $\theta_*$, $\mu_\theta$ and $\mu_{\theta_*}$ (namely $\nu$) are absolutely continuous with respect to each other, as $\mu_{\theta}(x)>0$ for any $x \in [0, 1]^d$.
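To illustrate the explicit density formula above, here is a toy sketch of ours ($d = 1$, $L = 2$; the scalar parameter values are hypothetical): the leaky-ReLU change of variables indeed produces a proper density, i.e. it integrates to one over the image of $[0,1]$.

```python
import numpy as np

# Toy sketch (ours, d = 1, L = 2): density of x = W2*sigma(W1*z + b1) + b2,
# z ~ Uniform[0,1], via log mu_theta(x) = -sum_j log|W_j| - log sigma'(pre-act).
a = 0.5                                  # leaky-ReLU slope, sigma(t) = max(t, a*t)
W1, b1, W2, b2 = 2.0, -1.0, 1.5, 0.3     # hypothetical invertible parameters

z = np.linspace(1e-6, 1 - 1e-6, 200_001)
pre = W1 * z + b1                        # pre-activation; sigma'(pre) = a if pre <= 0 else 1
x = W2 * np.maximum(pre, a * pre) + b2   # generator output (monotone in z)
log_mu = -(np.log(abs(W1)) + np.log(abs(W2)) + np.where(pre <= 0, np.log(a), 0.0))
mu = np.exp(log_mu)

mass = float(np.sum(0.5 * (mu[1:] + mu[:-1]) * np.diff(x)))  # trapezoid over x
print(round(mass, 3))  # 1.0: mu_theta is a proper density
```

The same bookkeeping extends to $d > 1$ and deeper networks, with $\log|\det W_j|$ replacing $\log|W_j|$ and the pre-activation sign pattern applied coordinate-wise.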
\paragraph{\bf Step 2: construction of discriminator networks} Now consider a discriminator network of the following form: \begin{align*} m_1 &= \sigma^{-1} ( V_1 x + c_1 )\\ & \ldots \\ m_{L-1} &= \sigma^{-1} (V_{L-1} m_{L-2} + c_{L-1}) \\ h_{\omega}(x) &= \sum_{j=1}^{L-1} \sum_{i=1}^d \log (1/a) 1_{m_j(i) \leq 0} +c_L \enspace. \end{align*} Here the parameter set is \begin{align} \label{eq:param-space-omega} \omega \in \Omega(d, L) := \{(V_l \in \mathbb{R}^{d \times d}, c_l \in \mathbb{R}^d, c_L \in \mathbb{R}, 1\leq l \leq L-1 )~|~ {\rm rank}(V_l) = d, \forall 1\leq l\leq L-1 \}. \end{align} Choose the discriminator function $f_\omega$ with $\omega = (\omega_1, \omega_2)$, where $\omega_1, \omega_2 \in \Omega(d, L)$, \begin{align*} f_\omega(x) = h_{\omega_1}(x) - h_{\omega_2}(x). \end{align*} Then we can verify that Cor.~\ref{cor:design} follows. Recalling the upper bound in Theorem~\ref{thm:leaky-ReLU}, we can see that for this choice of generator and discriminator \begin{align*} \frac{1}{2} \sup_{\theta} \inf_{\omega} \left\| \log \frac{\nu}{\mu_{\theta}} - f_\omega \right\|_{\infty} = 0 \\ \frac{B}{4\sqrt{2}} \inf_\theta \left\| \log \frac{\mu_\theta}{\nu} \right\|_{\infty}^{1/2} = 0 \end{align*} as $\log \nu(x)$ can be realized by $\log \mu_{\theta_*}(x)$, and for any $\theta \in \Theta(d, L)$ there exists an $\omega \in \Omega(d, L)$ such that \begin{align*} f_\omega(x) = \log \nu(x) - \log \mu_{\theta}(x).
\end{align*} \paragraph{\bf Step 3: complexity bound} Recalling the result in \cite*{Bartlett-etal2017_COLT} on the Vapnik-Chervonenkis dimension of feed-forward neural networks (see Lemma~\ref{lem:VC-dim-bound}, with degree at most $1$ and number of pieces $p+1=2$), we have for the leaky-ReLU neural networks $\mathcal{F}$ and $\mathcal{F} \circ \mathcal{G}$, respectively, by simple counting \begin{align*} \text{for network $\mathcal{F}$}: & \quad \text{number of weights}~W_{\mathcal{F}} \leq 2(d^2 L + 2dL)+2, \\ & \quad ~\text{number of units}~U_{\mathcal{F}} \leq 4 dL, \\ & \quad ~\text{depth}~L_{\mathcal{F}} \leq L+2 ~; \\ \text{for network $\mathcal{F} \circ \mathcal{G}$}:& \quad \text{number of weights}~W_{\mathcal{F}\circ \mathcal{G}} \leq W_{\mathcal{F}} + d^2 L, \\ & \quad ~\text{number of units}~U_{\mathcal{F}\circ \mathcal{G}} \leq U_{\mathcal{F}} + dL, \\ & \quad ~\text{depth}~L_{\mathcal{F}\circ \mathcal{G}} \leq L_{\mathcal{F}} + L ~. \end{align*} Therefore, we have the following upper bounds via the VC-dimension: \begin{align*} {\rm Pdim}(\mathcal{F}) \asymp \text{VCdim}(\mathcal{F}) \leq C \cdot L_\mathcal{F} W_\mathcal{F} \log U_\mathcal{F} \leq C' d^2 L^2 \log(dL),\\ {\rm Pdim}(\mathcal{F}\circ \mathcal{G}) \asymp \text{VCdim}(\mathcal{F} \circ \mathcal{G}) \leq C \cdot L_{\mathcal{F}\circ \mathcal{G}} W_{\mathcal{F}\circ \mathcal{G}} \log U_{\mathcal{F}\circ \mathcal{G}} \leq C' d^2 L^2 \log(dL). \end{align*} Finally, by Cor.~\ref{cor:design}, the result follows. \end{proof} \section*{Acknowledgement} We thank three anonymous referees for constructive comments. In particular, we credit the creation of Table~\ref{table: summary} to one of the referees. \bibliographystyle{plainnat}
\section{Introduction}\label{Sec:Introduction} Argumentation is a form of reasoning that makes explicit the reasons for the conclusions that are drawn, and how conflicts between reasons are resolved. Recent years have witnessed intensive study of both logic-based and human-orientated models of argumentation and their use in formalising agent reasoning, decision making, and inter-agent dialogue \cite{ArgInAI, RahSim}. Much of this work builds on Dung's seminal theory of abstract argumentation \cite{dung95acceptability}. A Dung argumentation framework ($AF$) \cite{dung95acceptability} is essentially a directed graph relating arguments by binary, directed forms of conflict, called attacks. The sceptically or credulously justified arguments are those in the intersection, respectively union, of sets---called extensions---of `acceptable' arguments evaluated under various semantics (see \cite{baroni11introduction} for an overview). Extensions are evaluated based on two core principles. Firstly, a set of arguments should not contain internal attacks, that is, it should be {\em conflict-free}. Secondly, it should defend itself in the sense that any argument $a$ in the set is either un-attacked, or if attacked by some argument $b$, there is then an argument in the set that defends $a$ by attacking $b$ (in which case $a$ is said to be defended by, or {\em acceptable with respect to}, the set of arguments). Arguments and attacks may be seen as primitive or assumed to be defined by an instantiating set of sentences in natural language or in a formal logical language. The former case thus provides for characterisations of more human-orientated uses of argument in reasoning, while in the latter case, the claims of the justified arguments identify the non-monotonic inferences from the instantiating set of logical formulae, thus providing for dialectical characterisations of non-monotonic logics \cite{dung95acceptability}. Dung's theory has been extended in a number of directions.
In particular, `exogenously' given information about the relative strength of arguments has been used to determine which attacks succeed as defeats, so that the acceptable arguments are evaluated with respect to the arguments related by defeats rather than attacks. In this way one can effectively arbitrate amongst credulously justified conflicting arguments. Examples include \cite{AmCay,ModgilAIJ,Modgil2013361} that make use of a preference relation over arguments (where in the case of \cite{ModgilAIJ}, preferences are expressed by arguments that attack attacks amongst arguments) and \cite{BC03} that makes use of an ordering over the values promoted by arguments. Other notable developments include approaches that associate probabilities with arguments \cite{Hunter2017, Li2012}, and weights on attacks so that extensions do not necessarily comply with the conflict-freeness requirement on Dung extensions; arguments in an extension may attack each other provided that the summative weight of attacks does not exceed a given `inconsistency budget' \cite{dunne11weighted}.\footnote{See also \cite{Janssen} which accounts for the relative strength of attacks amongst arguments.} \paragraph{Context: graduality in argumentation} It has long been recognized---since at least \cite{BesnardHunter01} and \cite{CLS04}---that one of the drawbacks of Dung's theory of abstract argumentation is the limited level of granularity the theory offers in differentiating the strength or status of arguments (essentially three, but cf. \cite{wu10labelling}). Consider the following informal example: \begin{example}\label{ExampleGuilty} Suppose an argument $I$ concluding the presumed innocence of a suspect. $I$ is attacked by an argument $G$ which consists of two sub-arguments $G_1$ and $G_2$ that respectively conclude that a suspect had \emph{opportunity} and \emph{motive}, where $G$ defeasibly extends these sub-arguments to conclude that the suspect is guilty.
Suppose an argument $I_1$ (in support of an alibi) that attacks $G_1$ on the assumption that the suspect does not have an alibi. Suppose then an additional argument $I_2$ that attacks an assumption in $G_2$. The level to which $I$ is defended, and so said to be justified, is intuitively increased in the case that we have counter-arguments $I_1$ and $I_2$ that argue against the suspect having motive \emph{and} opportunity, as compared with when one only has the counter-argument $I_1$. \end{example} The above example highlights one amongst a number of intuitions that are formalised by the above-mentioned \cite{BesnardHunter01} and \cite{CLS04}, and more recently \cite{AmgoudNaim,Amgoud:2016,conf/jelia/MattT08}, in which the numbers of attackers and defenders are used to give a more fine grained assignment of status to (and hence ranking of) arguments. While these approaches are defined with respect to $AF$s, the status of arguments is not determined using the standard `Dungian' concepts of sets of arguments (extensions) and arguments defended by these sets; rather, measures of the strengths of arguments based on the numbers of attackers and/or defenders are propagated through the $AF$ graph\footnote{The exception being \cite{conf/jelia/MattT08} in which the strength of arguments is evaluated by reference to two person games in which a proponent defends an argument against counter-attacks by an opponent.}. These approaches, which we refer to here as `propagation based approaches' (see \cite{bonzon16comparative} for a recent comparative overview), have also been developed to account for exogenously given information about the strength of arguments, which provides the initial measures that are then adjusted based on how they are propagated through the graph. Notable examples of the latter include \cite{gabbay-rodrigues:13,Gabbay2015} and \cite{conf/ijcai/LeiteM11}.
\paragraph{Paper Contribution} The central aim of this paper is to show that a more fine grained assignment of status to arguments that does not rely on exogenous information, and thus relies only on the numbers of attackers and defenders of arguments, can be formalised as a natural generalisation of the Dungian notions of conflict-free sets of arguments and the defense of arguments by sets of arguments. This aim is achieved in the following way. {\em First}, our starting point is the classical definition of an admissible set of arguments as one that is conflict-free and that defends all its contained arguments \cite{dung95acceptability}. We show that the logical structure of this definition naturally generalises so as to yield more fine grained graded notions of conflict freeness and defense that account for the number of attackers and defenders, thereby obtaining a graded variant of the concept of admissibility. {\em Second}, this graded form of admissibility serves as a basis for defining graded variants of all the classic semantics of abstract argumentation studied in \cite{dung95acceptability}: complete, grounded, stable and preferred. Dung's definitions of the standard semantics can all be retrieved as special cases of our graded variants, showing that the form of graduality this paper studies is rooted in a principled way in the classical theory of abstract argumentation. These graded semantics are, intuitively, ways of interpreting the standard Dung semantics in `stricter' or `looser' ways. For instance, the grounded semantics can be interpreted more `strictly' by requiring that all attackers be counter-attacked by at least two arguments, instead of just one as in the classic case. So each Dung semantics now comes equipped with a family of strengthenings and weakenings, which we call {\em graded semantics}.
{\em Third}, these strengthenings and weakenings define a natural (partial) ordering dictated by set-inclusion: a stricter semantics will define sets of arguments which are subsets of the sets defined by a weaker one. This natural ordering then induces an ordering on the arguments themselves, thereby defining a ranking over the arguments in the framework. Such a ranking enables arbitration amongst arguments without recourse to exogenous preference information. As this ranking is induced from a generalization of Dung's standard semantics, our approach is at the outset methodologically distinct from propagation-based approaches, where argument rankings are defined directly by reference to the argument graph. {\em Fourth}, we study the application of graded semantics and their induced rankings to two key instantiations of Dung $AF$s: classical logic instantiations that, under the standard Dung semantics, yield a dialectical characterisation of non-monotonic inference in Preferred Subtheories \cite{bre89}, and instantiations that accommodate more human-orientated uses of argumentation through the use of schemes and critical questions \cite{Wal96}. In so doing, we seek to substantiate some of the intuitions captured by our generalisation of Dung semantics, and show how the graded semantics capture a simple form of counting-based accrual of arguments, which has traditionally been regarded as being incompatible with Dung's theory \cite{PrakAccrual}. \paragraph{Outline of the paper} The paper is structured in three parts. \fbox{Part 1} concerns the development of the abstract theory of graded argumentation. It starts with Section \ref{Sec:Background}, where we review Dung's theory, giving prominence to its fixpoint-theoretic underpinnings, which have remained relatively under-investigated in the literature, and that we thus consider to be of some independent interest.
Section \ref{Sec:Graded Acceptability} then generalises Dung's notions of conflict freeness and defense, yielding graded variants of these notions in terms of the number of arguments attacking and defending any given argument $a$ whose acceptability with respect to a given set of arguments is at issue. This yields a ranking among types of conflict freeness and defense. Section \ref{Sec:GradedSemantics} then generalises Dung's standard semantics, so that extensions are graded with respect to the attacks and counter-attacks on their contained arguments. These semantics---which we call {\em graded}---are shown to generalise Dung's theory and are studied, providing constructive existence results. \fbox{Part 2} first shows, in Section \ref{Sec:Ranking}, how the new graded semantics yield a natural way of ranking arguments according to how strongly they are justified under different graded semantics, thereby enabling endogenous arbitration among credulously justified arguments. Then, Section \ref{Sec:Applications} illustrates the application of these types of rankings to \emph{ASPIC+} instantiations of $AF$s \cite{Modgil2013361} that formalise stereotypical patterns of argumentation encoded in schemes and critical questions \cite{Wal96}, thus accounting for more human-orientated uses of argument, and \emph{ASPIC+} instantiations of $AF$s that provide dialectical characterisations of non-monotonic inference in Preferred Subtheories \cite{bre89}. Both types of instantiation are then shown to capture a simple form of counting-based accrual, whereby multiple arguments in support of the same conclusion mutually strengthen each other. In \fbox{Part 3}, Section \ref{Sec:RelatedWork} develops a thorough comparison of our approach to the existing approaches to graduality and rankings in argumentation, leveraging the systematization recently introduced in \cite{bonzon16comparative}. This allows us to place graded argumentation more precisely in the growing landscape of ranking-based semantics.
We conclude in Section \ref{Sec:Future} outlining some avenues for future research in graded argumentation. \section{Preliminaries: Abstract Argumentation} \label{Sec:Background} This preliminary section reviews key concepts and results from Dung's abstract argumentation theory \cite{dung95acceptability}. The presentation we provide gives prominence to the fixpoint theory underpinning Dung's theoretical framework. After \cite{dung95acceptability} the fixpoint theory of abstract argumentation has remained relatively under-investigated in the literature. It is, however, the most natural angle from which to pursue the objectives of this paper. We are unaware of any comprehensive exposition to date of the fixpoint theory of abstract argumentation, and so hope this preliminary section is of independent interest. Since this paper will provide a generalization of Dung's original theory, all results presented in this section can actually be obtained as direct corollaries of the results we will establish later in Section \ref{Sec:GradedSemantics}. This, we argue, should be a desirable feature for any theory of graduality in argumentation which bases itself on Dung's original proposal. For completeness of the exposition, direct proofs of the results dealt with in this section can be found in Section \ref{appendix:proofs}. 
\subsection{Basic Definitions} \begin{figure} \begin{center} \scalebox{0.75}{ \begin{tikzpicture}[node distance=2.9cm,shorten >=1pt,>=stealth,auto] \node[state] (a) {$a$}; \node[state] (b) [below of= a] {$b$}; \path[<->] (a) edge node {} (b); \end{tikzpicture} \hspace{1.5cm} \begin{tikzpicture}[node distance=2cm,shorten >=1pt,>=stealth,auto] \node[state] (c) {$c$}; \node[state] (a) [above left of= c] {$a$}; \node[state] (b) [below left of= c] {$b$}; \path[->] (a) edge node {} (b); \path[->] (c) edge node {} (a); \path[->] (b) edge node {} (c); \end{tikzpicture} \hspace{1cm} \begin{tikzpicture}[node distance=2cm,shorten >=1pt,>=stealth,auto] \node[state] (c) {$c$}; \node[state] (a) [above left of= c] {$a$}; \node[state] (b) [below left of= c] {$b$}; \node[state] (d) [right of= c] {$d$}; \node[state] (e) [right of= d] {$e$}; \path[<->] (a) edge node {} (b); \path[->] (a) edge node {} (c); \path[->] (b) edge node {} (c); \path[->] (c) edge node {} (d); \path[->] (d) edge node {} (e); \end{tikzpicture} } \end{center} \caption{Argumentation frameworks} \label{figure:graphs} \end{figure} \begin{definition}[Frameworks]\label{definition:attack_graph} An {\em argumentation framework} (AF) $\Delta$ is a tuple $\tuple{\mathcal{A}, \ar}$ where $\mathcal{A} \neq \emptyset$, and $\ar$ $\subseteq$ $\mathcal{A}^2$ is a binary attack relation on $\mathcal{A}$. Notation $x \ar y$ denotes that $x$ attacks $y$, and $X \ar x$ denotes that $\exists y \in X$ s.t. $y \ar x$. Similarly, $x \ar X$ denotes that $\exists y \in X$ s.t. $x \ar y$. For a given $\Delta$ we write $\overline{x}$ to denote $\set{y \in \Delta \mid y \rightarrow x}$ (the direct attackers of $x$), and $\overline{\overline{x}}$ to denote $\set{z \in \Delta \mid z \rightarrow y, y \in \overline{x}}$ (the direct defenders of $x$). Also, as is customary, $+$ denotes the transitive closure of a given relation, so that $b \ar^+ a$ stands for `there exists a path of attacks from $b$ to $a$'. 
Finally, an AF such that for each $x$, $\overline{x}$ is finite, is called {\em finitary}, whereas an AF with a finite number of arguments $\mathcal{A}$ is called {\em finite}, else {\em infinite}. \end{definition} We will sometimes refer to $AF$s as {\em attack graphs}.\footnote{Although note that all definitions in this paper equally apply to `defeat' graphs which assume a binary defeat relation on arguments, obtained through use of preferences in deciding which attacks succeed as defeats.} Figure \ref{figure:graphs} depicts three $AF$s. An argument $a \in \mathcal{A}$ is said to be acceptable w.r.t. $X \subseteq \mathcal{A}$, if any argument attacking $a$ is attacked by some argument in $X$, in which case $X$ is said to defend $a$. An $AF$'s characteristic (also called `defense') function, applied to some $X \subseteq \mathcal{A}$, returns the arguments defended by $X$ \cite{dung95acceptability} (henceforth, `$\pw$' denotes powerset): \begin{definition}[Defense Function]\label{definition:defense} The defense function $\cf{\Delta}: \pw{\mathcal{A}} \map \pw{\mathcal{A}}$ for $\Delta = \tuple{\mathcal{A}, \ar}$ is defined as follows. For any $X \subseteq \mathcal{A}$:\\[-15pt] \begin{eqnarray*} \cf{\Delta}(X) & = & \set{x \in \mathcal{A} \mid \forall y \in \mathcal{A}: \IF y \ar x \THEN X \ar y}\\[-18pt] \end{eqnarray*} Where no confusion arises we may drop the subscript $\Delta$ in $\cf{\Delta}$. \end{definition} An argument $a \in \mathcal{A}$ is not attacked by a set $X \subseteq \mathcal{A}$ if no argument in $X$ attacks $a$. One can define a function which, applied to some $X \subseteq \mathcal{A}$ in an $AF$, returns the arguments that are not attacked by $X$. This function was introduced by Pollock in \cite{pollock87defeasible} for his theory of defeasible reasoning, and we refer to it here as the `neutrality function'. 
\begin{definition}[Neutrality Function]\label{definition:neutrality} The neutrality function $\cff{\Delta} : \pw{\mathcal{A}} \map \pw{\mathcal{A}}$ for $\Delta = \tuple{\mathcal{A}, \ar}$ is defined as follows. For any $X \subseteq \mathcal{A}$: \[ \cff{\Delta}(X) = \set{x \in \mathcal{A} \mid \NOT X \ar x}. \] Again, where no confusion arises we may drop the subscript $\Delta$ in $\cff{\Delta}$. \end{definition} One final bit of terminology. In what follows we will often use the notion of function iteration for $\cf{}$ and $\cff{}$ which we define in the standard inductive way, for $F \in \set{\cf{}, \cff{}}$: $F^0(X) = X$; $F^{k+1}(X) = F(F^k(X))$. \begin{example}[Defense and neutrality in Figure \ref{figure:graphs}] The functions applied to the symmetric graph of Figure \ref{figure:graphs} (left) yield the following equations: \[ \begin{array}{ccccccc} \cf{}(\emptyset) & = & \emptyset & & \cff{}(\emptyset) & = & \set{a, b} \\ \cf{}(\set{a}) & = & \set{a} & & \cff{}(\set{a}) & = & \set{a} \\ \cf{}(\set{b}) & = & \set{b} & & \cff{}(\set{b}) & = & \set{b} \\ \cf{}(\set{a,b}) & = & \set{a,b} & & \cff{}(\set{a,b}) & = & \emptyset \end{array} \] Notice that the output of $\cff{}$ on $\emptyset$ corresponds to the whole set of arguments, as no arguments can be attacked by $\emptyset$. Notice also that while $\set{a}$ and $\set{b}$ are included in $\cff{}(\set{a})$ and $\cff{}(\set{b})$, $\set{a,b}$ is not. \end{example} One can define the extensions of an $AF$ $\Delta$ under Dung's semantics, in terms of the fixpoints ($X= \cf{\Delta}(X)$ and $X= \cff{\Delta}(X)$) or post-fixpoints ($X \subseteq \cf{\Delta}(X)$ and $X \subseteq \cff{\Delta}(X)$) of the defense and neutrality functions, as recapitulated in Table \ref{table:basics}. 
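As a concrete illustration (ours, not part of the formal development), the defense and neutrality functions can be implemented directly as set operations; the following Python sketch reproduces the values computed in the example above for the symmetric graph of Figure \ref{figure:graphs} (left).

```python
# Minimal sketch of the defense function F (Definition 2) and the
# neutrality function N (Definition 3). An AF is given by a set of
# arguments and a set of (attacker, attacked) pairs; names are ours.

def neutrality(args, attacks, X):
    """N(X): the arguments that no member of X attacks."""
    return {a for a in args if not any((x, a) in attacks for x in X)}

def defense(args, attacks, X):
    """F(X): the arguments each of whose attackers is attacked by X."""
    return {a for a in args
            if all(any((x, b) in attacks for x in X)
                   for b in args if (b, a) in attacks)}

# The symmetric two-cycle of Figure 1 (left): a <-> b.
ARGS = {"a", "b"}
ATT = {("a", "b"), ("b", "a")}

assert defense(ARGS, ATT, set()) == set()          # F(emptyset) = emptyset
assert neutrality(ARGS, ATT, set()) == {"a", "b"}  # N(emptyset) = {a, b}
assert defense(ARGS, ATT, {"a"}) == {"a"}          # F({a}) = {a}
assert neutrality(ARGS, ATT, {"a"}) == {"a"}       # N({a}) = {a}
assert neutrality(ARGS, ATT, {"a", "b"}) == set()  # N({a, b}) = emptyset
```

As in the example, each singleton is a fixpoint of both functions, whereas $\set{a,b}$ is self-defended but not conflict-free, since $\set{a,b} \not\subseteq \cff{}(\set{a,b}) = \emptyset$.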
The justified arguments are then defined under various semantics: \begin{definition}[Justification under Semantics] \label{def:DungSemantics} Let $\Delta$ = $\tuple{\mathcal{A},\ar}$. Then for semantics $S \in \{$grounded, stable, preferred$\}$\footnote{Typically, the justified arguments are not defined w.r.t. the complete semantics, which subsume each of grounded, stable and preferred.}, $a \in \mathcal{A}$ is {\em credulously}, respectively {\em sceptically, justified} under $S$, if $a$ is in at least one, respectively all, $S$ extensions of $\Delta$. \end{definition} \begin{table}[t] \begin{tabular}{lcl} \hline $X$ is conflict-free in $\Frame$ & iff & $X \subseteq \cff{\Frame}(X)$ \\ $X$ is self-defended in $\Frame$ & iff & $X \subseteq \cf{\Frame}(X)$ \\ $X$ is admissible in $\Frame$ & iff & $X \subseteq \cff{\Frame}(X)$ and $X \subseteq \cf{\Frame}(X)$ \\ $X$ is a complete extension in $\Frame$ & iff & $X \subseteq \cff{\Frame}(X)$ and $X = \cf{\Frame}(X)$ \\ $X$ is a stable extension of $\Frame$ & iff & $X = \cff{\Frame}(X)$ \\ $X$ is the grounded set in $\Frame$ & iff & $X$ is the smallest complete ext. of $\Frame$ ($X = \lfp.\cf{\Frame}$)\\ $X$ is a preferred extension of $\Frame$ & iff & $X$ is a largest complete extension of $\Frame$ \\ \hline \end{tabular} \caption{Classical notions of abstract argumentation theory from \cite{dung95acceptability}.} \label{table:basics} \end{table} Finally, we recapitulate some well-known properties, first established in \cite{dung95acceptability}, of the defense and neutrality functions that will be referred to later: \begin{fact} \label{fact:simple} Let $\Delta = \tuple{\mathcal{A}, \ar}$ be an $AF$ and $X, Y \subseteq \mathcal{A}$.
The following holds: \begin{eqnarray*} X \subseteq Y & \IMPLIES & \cf{}(X) \subseteq \cf{}(Y) \\ X \subseteq Y & \IMPLIES & \cff{}(Y) \subseteq \cff{}(X) \\ \cf{}(X) & = & \cff{}(\cff{}(X)) \end{eqnarray*} \end{fact} That is, function $\cf{}$ is monotonic, function $\cff{}$ is antitonic, and the composition of $\cff{}$ with itself, which we will also denote $\cff{} \circ \cff{}$, is function $\cf{}$. For example, in Figure \ref{figure:graphs} (right), we have that $\cff{}(\set{a})$ = $\set{a,d,e}$, and $\cff{}(\set{a,d,e}) = \set{a,d} = \cf{}(\set{a})$. \begin{fact}[$\omega$-continuity\footnote{Cf. \cite[Lemma 28]{dung95acceptability}.}] \label{fact:continuous} If $\Frame$ is finitary, then $\cf{\Frame}$ is (upward-)continuous for any $X \subseteq \mathcal{A}$, i.e., for any upward directed set $D \in \pw{\pw{\mathcal{A}}}$ of finite subsets of $\mathcal{A}$:\footnote{We recall that an upward directed set $D \in \pw{\pw{\mathcal{A}}}$ is a set of sets such that any two elements $X$ and $Y$ in $D$ have an upper bound in $D$, that is, there also exists a superset $Z \supseteq X \cup Y$ in $D$. A downward directed set is defined dually in the obvious way.} \begin{align} \cf{\Frame}\left(\bigcup_{X \in D} X \right) & = \bigcup_{X \in D} \cf{\Frame}(X). \label{eq:upward} \end{align} Similarly, $\cf{\Frame}$ is (downward-)continuous for any $X \subseteq \mathcal{A}$, i.e., for any downward directed set $D \in \pw{\pw{\mathcal{A}}}$: \begin{align} \cf{\Frame}\left(\bigcap_{X \in D} X\right) & = \bigcap_{X \in D} \cf{\Frame}(X). \label{eq:downward} \end{align} \end{fact} The above fact establishes important properties of the behaviour of the defense function with respect to sequences of sets of arguments and their limits. As we will also later see in the graded generalization of Dung's theory, these properties are key in the construction of extensions through the iteration of the defense function. 
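The identity $\cf{} = \cff{} \circ \cff{}$ of Fact \ref{fact:simple} can be checked mechanically on small frameworks. The following Python sketch (our illustration; all names are ours) does so exhaustively for the rightmost $AF$ of Figure \ref{figure:graphs}.

```python
# Sketch checking F = N o N (Fact 1) on the rightmost AF of Figure 1:
# a <-> b, a -> c, b -> c, c -> d, d -> e.
from itertools import chain, combinations

ARGS = {"a", "b", "c", "d", "e"}
ATT = {("a", "b"), ("b", "a"), ("a", "c"),
       ("b", "c"), ("c", "d"), ("d", "e")}

def N(X):
    """Neutrality: arguments not attacked by any member of X."""
    return {a for a in ARGS if not any((x, a) in ATT for x in X)}

def F(X):
    """Defense: arguments each of whose attackers is attacked by X."""
    return {a for a in ARGS
            if all(any((x, b) in ATT for x in X)
                   for b in ARGS if (b, a) in ATT)}

# The worked example from the text:
assert N({"a"}) == {"a", "d", "e"}
assert N(N({"a"})) == {"a", "d"} == F({"a"})

# The identity holds for every subset of this finite AF:
subsets = chain.from_iterable(combinations(sorted(ARGS), r)
                              for r in range(len(ARGS) + 1))
assert all(F(set(X)) == N(N(set(X))) for X in subsets)
```

Since $\cff{}$ is antitonic, composing it with itself yields a monotonic function, which is exactly the monotonicity of $\cf{}$ stated in the fact.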
\putaway{\subsection{Rudiments of fixpoint theory of Dung's extensions} In light of Facts \ref{fact:simple} and \ref{fact:continuous} we can rely on general results from order theory to establish the existence of the least fixpoint ($\lfp$) of the characteristic function, that is, the existence of the grounded extension. The monotonicity of $\cf{\Frame}$ guarantees the existence of the least fixpoint of $\cf{\Frame}$ as the intersection of all pre-fixpoints of $\cf{\Frame}$: \begin{align} \lfp.\cf{\Frame} & = \bigcap\set{X \subseteq \mathcal{A} \mid \cf{\Frame}(X) \subseteq X}. & \mbox{ (Knaster-Tarski Theorem)} \label{formula:knaster} \end{align} Given a set of arguments $X$, the $n$-fold iteration of $\cf{\Frame}$ is denoted $\cf{\Frame}^n$ for $0 \leq n < \omega$ and its (countably) infinite iteration is denoted $\cf{\Frame}^\omega$. For a given $X$, an infinite iteration generates an infinite sequence, or stream, $\cf{\Frame}^0(X), \cf{\Frame}^1(X), \cf{\Frame}^2(X), \ldots$. A stream is said to stabilize if and only if there exists $0 \leq n < \omega$ such that $\cf{\Frame}^{n}(X) = \cf{\Frame}^{n +1}(X)$. Such a set $\cf{\Frame}^{n}(X)$ is then called the limit of the stream. The monotonicity and continuity of the characteristic function, in finitary frameworks, together guarantee that this least fixpoint can be computed `from below' through a stream $\cf{\Frame}^0(X), \cf{\Frame}^1(X), \cf{\Frame}^2(X), \ldots$: \begin{align} \lfp.\cf{\Frame} & = \bigcup_{0 \leq n < \omega}\cf{\Frame}^n(\emptyset) \label{formula:computation} \end{align} The reader is referred to \cite{davey90introduction} for a detailed presentation of these results. We will come back later in some more detail to equation \eqref{formula:computation}, which is the stepping stone of some of the results the paper presents.
} \subsection{The Fixpoint Theory of Acceptability and Conflict-freeness} We show how any admissible set of arguments can be saturated to a complete extension through a process of fixpoint approximation. This establishes a general result concerning the computation of complete extensions in (finitary) attack graphs which, to the best of our knowledge, has never been reported in the literature. It is, however, a generalization of well-known existing results such as \cite[Lemma 46, Theorem 47]{dung95acceptability} (cf. also \cite{lifschitz96foundations}). \subsubsection{Construction of Fixpoints from Admissible Sets} Fix a framework $\Frame$ and take a set $X$ such that $X \subseteq \cf{}(X)$ and $X \subseteq \cff{}(X)$ (i.e., an admissible set). By iterating $\cf{}$, consider the streams of sets $ \cf{}^0(X), \cf{}^1(X), \ldots, $ and $ \cf{}^0(\cff{}(X)), \cf{}^1(\cff{}(X)), \ldots. $ Since $X$ is admissible, and since $\cf{}$ is monotonic and $\cff{}$ is antitonic (Fact \ref{fact:simple}), the first stream (see the lower stream in Figure \ref{figure:decomposition1}) is non-decreasing and the second stream (see the upper stream in Figure \ref{figure:decomposition1}) is non-increasing, with respect to set inclusion. In finite attack graphs, these streams must therefore stabilize, reaching a limit by stage $|\mathcal{A}| + 1$. In infinite but finitary attack graphs, we will see that the limit can be reached at $\omega$. We will see (Lemma \ref{theorem:approxim8}) that the limits of these streams correspond to the {\em smallest} fixpoint of $\cf{}$ {\em containing} the admissible set $X$ and, respectively, the {\em largest} fixpoint of $\cf{}$ which is {\em contained} in $\cff{}(X)$. We denote the first one by $\lfp_X.\cf{}$ and the second one by $\gfp_X.\cf{}$.
Intuitively, the two sets denote the smallest superset of $X$ which is equal to the set of arguments it defends\footnote{Theorem \ref{lemma:smallest} will show this set is also conflict-free and it is therefore the smallest complete extension containing $X$.} and, respectively, the largest set which is not attacked by $X$ and which is equal to the set of arguments it defends.\footnote{Note that such a set is not necessarily conflict-free. E.g., consider $\Frame = \langle \set{a,b,c}$, $\set{(b,c),(c,b)} \rangle$, that is $b \ar c$ and $c \ar b$. Then $\cff{}(\set{a}) = \set{a,b,c}$ and $\cf{}(\set{a,b,c}) = \set{a,b,c}$. But clearly it is not the case that $\set{a,b,c} \subseteq \cff{}(\set{a,b,c})$, that is, $\set{a,b,c}$ is not conflict-free.} The construction is illustrated in Figure \ref{figure:decomposition1} below. \begin{example} Consider the cycle of length three in Figure \ref{figure:graphs} (center), and take the admissible set $\emptyset$. By applying the above construction we obtain immediately $\cf{}(\emptyset) = \emptyset$ as limit of the lower stream, and $\cff{}(\emptyset) = \set{a,b,c} = \cf{}(\set{a,b,c})$ as limit of the upper stream. $\emptyset$ is the smallest self-defended set containing $\emptyset$, $\set{a, b, c}$ is the largest self-defended set contained in $\cff{}(\emptyset)$. \end{example} We now prove the correctness of the above construction by showing that the limits of the above streams correspond indeed to the desired fixpoints. First of all the following important lemma shows how conflict-freeness is preserved by the above process of iteration of the defense function: \begin{lemma} \label{lemma:preserve_cf} Let $\Frame$ be a finitary attack graph and $X \subseteq \mathcal{A}$ be admissible. Then for any $n$ s.t. $0 \leq n < \omega$, $$ X \subseteq \cf{\Frame}^n(X) \subseteq \cff{\Frame}(\cf{\Frame}^n(X)) \subseteq \cff{\Frame}(X).
$$ \end{lemma} That is, each $\cf{\Frame}^n(X)$ in the stream of iteration of the defense function from an admissible set $X$ is a conflict-free set. The lemma can be seen as a reformulation of \cite[Lemma 10]{dung95acceptability}, known as Dung's {\em fundamental lemma}.\footnote{It also generalises \cite[Lemma 46]{dung95acceptability} to the case of $X$ admissible, instead of $X =\emptyset$.} \begin{figure}[t] \centering \begin{tikzpicture} \draw (0,0) node (0) {X} (1,4) node (1) {$\cff{}(X)$} (2,1) node (2) {$\cff{}^2(X)$} (3,3.2) node (3) {$\cff{}^3(X)$} (4,1.4) node (4) {$\cff{}^4(X)$} (5,2.9) node (5) {$\cff{}^5(X)$} (6,1.6) node (6) {$\cff{}^6(X)$} (7,2.8) node (7) {$\ldots$} (10,2.8) node (9) {$\gfp_X.\cf{\Frame}$} (8,1.7) node (8) {$\ldots$} (10,1.7) node (10) {$\lfp_X.\cf{\Frame}$} ; \draw[densely dotted] (0) -- (1); \draw[densely dotted] (1) -- (2); \draw[densely dotted] (2) -- (3); \draw[densely dotted] (3) -- (4); \draw[densely dotted] (4) -- (5); \draw[densely dotted] (5) -- (6); \draw[densely dotted] (6) -- (7); \draw[densely dotted] (7) -- (8); \draw[densely dotted] (10) -- (9); \draw[thin,loosely dotted] plot[smooth] coordinates {(0,0) (2,1) (4,1.4) (6,1.6) (8,1.7) (10,1.7)}; \draw[thin,loosely dotted] plot[smooth] coordinates {(1,4) (3,3.2) (5,2.9) (7,2.8) (10,2.8)}; \end{tikzpicture} \caption{Streams generated by indefinite iteration of $\cff{}$ applied to an admissible set $X$ (in symbols, $\cff{}^\omega(X)$). Position with respect to the horizontal axis indicates the number of iterations (growing from left to right), while position with respect to the vertical axis indicates set theoretic inclusion. The lower stream $X, \cff{}^2(X), \cff{}^4(X), \ldots$ stabilizes to $\lfp_X.\cf{\Frame}$. 
The upper stream $\cff{}(X), \cff{}^3(X), \cff{}^5(X), \ldots$ stabilizes to $\gfp_X.\cf{\Frame}$.} \label{figure:decomposition1} \end{figure} We can show that the above streams obtained through the process of iteration of the defense function construct the desired fixpoints: \begin{lemma} \label{theorem:approxim8} Let $\Frame$ be a finitary attack graph and $X \subseteq \mathcal{A}$ be admissible: \begin{align} \lfp_X.\cf{\Frame} & = \bigcup_{0 \leq n < \omega} \cf{\Frame}^n(X) \label{eq:below} \\ \gfp_X.\cf{\Frame} & = \bigcap_{0 \leq n < \omega} \cf{\Frame}^n(\cff{\Frame}(X)) \label{eq:above} \end{align} \end{lemma} Notice that since $\cff{}^2 = \cf{}$ (Fact \ref{fact:simple}), a stream generated by the indefinite iteration of the defense function can actually be viewed as a stream generated by the indefinite iteration of the neutrality function. So equations \eqref{eq:below} and \eqref{eq:above} of Lemma \ref{theorem:approxim8} can be rewritten as follows: \begin{align} \lfp_X.\cf{\Frame} & = \bigcup_{0 \leq n < \omega} (\cff{\Frame}^2)^n(X) \\ \gfp_X.\cf{\Frame} & = \bigcap_{0 \leq n < \omega} (\cff{\Frame}^2)^n(\cff{\Frame}(X)) \end{align} In this light, Theorem \ref{lemma:smallest} and Lemma \ref{theorem:approxim8} capture several of the key features of the stream generated by the indefinite iteration of the neutrality function on an admissible set $X$. First, the stream can be split into two parts, the part consisting of even and, respectively, odd iterations of $\cff{\Frame}$. Second, the stream of even iterations converges to a limit which is the smallest complete set including $X$, and the stream of odd iterations converges to a limit which is the largest self-defended set contained in $\cff{\Frame}(X)$ (that is, not attacked by $X$).\footnote{Again the finitariness assumption in the lemma could be lifted by making use of transfinite induction. Cf.
Remark \ref{remark:ordinal} below.} Notice that such a set is just free of conflict with respect to $X$, but it is not necessarily conflict-free, and hence it is not necessarily a complete extension. Third, the two streams can actually be viewed as streams of the defense function $\cf{\Frame}$ applied to $X$ and, respectively, to $\cff{\Frame}(X)$. Fourth, the two parts grow towards each other as the stream of even iterations is increasing, while the one of odd iterations is decreasing. See Figure \ref{figure:decomposition1} for an illustration. \begin{remark} The proof of Lemma \ref{theorem:approxim8} relies in an essential manner on the finitariness assumption on the underlying framework. The assumption simplifies the proof but, it should be stressed, could be lifted. For infinite graphs which are not finitary, the lemma could be proved by resorting to transfinite induction: \begin{eqnarray*} \cf{\Frame}^0(X) & = & X \\ \cf{\Frame}^{\alpha + 1}(X) & = & \cf{\Frame}(\cf{\Frame}^{\alpha}(X)) \\ \cf{\Frame}^{\lambda}(X) & = & \bigcup_{\alpha < \lambda} \cf{\Frame}^{\alpha}(X) \ \ \mbox{ (for $\lambda$ an arbitrary limit ordinal)}. \end{eqnarray*} By the monotonicity of $\cf{\Frame}$ it can then be shown that there exists an ordinal $\alpha$ of cardinality at most $|\mathcal{A}|$ such that: $\lfp_X.\cf{\Frame} = \cf{\Frame}^{\alpha}(X)$. A proof of this statement in the general setting of complete partial orders can be found in \cite[Ch. 3]{venema08lectures}. Transfinite induction relies on the Axiom of Choice (or equivalent formulations such as Zorn's Lemma or the Well-Ordering Principle), which is known to be required for the existence results of the standard Dung semantics (cf. \cite{baumann15infinite}). \label{remark:ordinal} \end{remark} \subsubsection{Construction of Complete Extensions} With the above results in place, one can then show how complete extensions can be constructed through a process of fixpoint approximation.
\begin{theorem} \label{lemma:smallest} Let $\Frame$ be a finitary $AF$ and $X \subseteq \mathcal{A}$ be admissible. Then the limit $\bigcup_{0 \leq n < \omega} \cf{\Frame}^n(X)$ is the smallest complete extension of $\Frame$ that includes $X$. \end{theorem} The theorem establishes that, in finitary frameworks, any complete set can be computed via a process of iteration at $\omega$ of the defense function, starting with some admissible set. It is a simple yet novel generalization of the earlier result in \cite{dung95acceptability} for the case $X = \emptyset$. This process starts by including the arguments that have no attackers or that belong to an initial admissible set, then including those arguments that are defended by the first set of arguments included, and so on.\footnote{Cf. Remark \ref{remark:ordinal}.} At an intuitive level, the theorem states that the indefinite iteration of the defense function from an admissible $X$ can be considered as a formalization of the process whereby an agent constructs a rational argumentative position---a complete extension---starting from $X$. \subsubsection{Construction of Other Dung Extensions} \label{Sec:pref} Theorem \ref{lemma:smallest} also yields a constructive proof of existence (in finitary graphs) for the grounded extension. By setting $X = \emptyset$ (the trivially admissible set), the theorem returns the known result for the construction of the grounded extension $\lfp.\cf{\Frame} = \bigcup_{0 \leq n < \omega}\cf{\Frame}^n(\emptyset) $ \cite[Th. 47]{dung95acceptability}. In this section we show how the theorem relates to the other classical Dung extensions. \smallskip Specific conditions can be identified which guarantee that the indefinite iteration of the defense function constructs preferred and stable extensions from a given admissible set $X$.
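As an operational illustration of Theorem \ref{lemma:smallest} (ours, not part of the formal development), the construction can be run directly: starting from an admissible set $X$, repeatedly apply the defense function until the stream stabilizes. The Python sketch below uses the rightmost $AF$ of Figure \ref{figure:graphs}; all helper names are ours.

```python
# Sketch of the iterative construction: from an admissible X, iterate the
# defense function F until the stream stabilizes; the limit is the
# smallest complete extension containing X. The AF is the rightmost one
# of Figure 1: a <-> b, a -> c, b -> c, c -> d, d -> e.

ARGS = {"a", "b", "c", "d", "e"}
ATT = {("a", "b"), ("b", "a"), ("a", "c"),
       ("b", "c"), ("c", "d"), ("d", "e")}

def defense(X):
    """F(X): arguments each of whose attackers is attacked by X."""
    return {a for a in ARGS
            if all(any((x, b) in ATT for x in X)
                   for b in ARGS if (b, a) in ATT)}

def smallest_complete_containing(X):
    """Iterate F from X; for admissible X the stream is non-decreasing."""
    while True:
        Y = defense(X) | X   # the union is redundant when X is admissible
        if Y == X:
            return X
        X = Y

assert smallest_complete_containing({"a"}) == {"a", "d"}
assert smallest_complete_containing(set()) == set()  # grounded extension
```

Starting from $\set{a}$ the stream is $\set{a}, \set{a,d}, \set{a,d}, \ldots$, and starting from $\emptyset$ the construction yields the grounded extension, matching the example at the end of this section.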
It is fairly easy to see what happens if the chosen admissible set $X$ of $\Frame$ is `big enough', in the precise sense that any argument in the graph can be reached from some argument in $X$ via the attack relation, i.e., if $\mathcal{A} \subseteq \set{a \mid \exists b \in X: b \ar^+ a}$. In this case the stream of iterations of $\cf{\Frame}$ from $X$ converges to a complete extension containing $X$ (by Theorem \ref{lemma:smallest}), and this extension is moreover maximal, as all arguments in $\Frame$ can be reached from $X$. The condition under which Theorem \ref{lemma:smallest} constructs stable extensions is particularly interesting. If the streams of even and odd iterations of the neutrality function (recall Figure \ref{figure:decomposition1}) converge to the same limit, then the process of fixpoint approximation defines a stable extension: \begin{fact} Let $\Frame$ be a finitary $AF$ and $X \subseteq \mathcal{A}$ be admissible. If $\bigcup_{0 \leq n < \omega} \cf{\Frame}^n(X) = \bigcap_{0 \leq n < \omega} \cf{\Frame}^n(\cff{\Frame}(X))$, then both sets coincide with the unique stable extension of $\Frame$ that includes $X$. \end{fact} \begin{proof} By Theorem \ref{lemma:smallest}, $\bigcup_{0 \leq n < \omega} \cf{\Frame}^n(X)$ is the smallest complete extension containing $X$. As $\bigcup_{0 \leq n < \omega} \cf{\Frame}^n(X) = \bigcap_{0 \leq n < \omega} \cf{\Frame}^n(\cff{\Frame}(X))$ by assumption, the set $\bigcup_{0 \leq n < \omega} \cf{\Frame}^n(X)$ is also a fixpoint of the neutrality function, and therefore a stable extension. \end{proof} This observation is, to the best of our knowledge, novel and provides a characterization of the existence of a stable extension that includes a given admissible set. \begin{example}[Construction of complete extensions in Figure \ref{figure:graphs}] Consider the rightmost $AF$.
Starting with the admissible set $\set{a}$, the non-decreasing stream $$ \set{a}, \set{a,d}, \set{a,d}, \ldots $$ converges after one step to the smallest complete extension $\lfp_{\set{a}}.\cf{} = \set{a,d}$ containing $\set{a}$. The non-increasing stream $ \set{a,d,e}, \set{a,d}, \set{a,d}, \ldots $ converges to $\gfp_{\set{a}}.\cf{} = \set{a,d}$, i.e., to the same set which is also the largest fixpoint of $\cf{}$ included in $\cff{}(\set{a}) = \set{a,d,e}$. As $\set{a,d}$ is also conflict-free, it is a preferred extension. Notice also that if we were to start with $\emptyset$, the resulting streams would be $\emptyset, \emptyset, \ldots$ and $\set{a,b,c,d,e},$ $ \set{a,b,c,d,e}, \ldots$. The first one constructs the grounded extension $\emptyset$ of the $AF$, and the second the largest fixpoint of $\cf{}$ in the $AF$, that is, $\set{a,b,c,d,e}$. \end{example} \begin{figure}[t] \begin{center} \includegraphics[scale=0.25]{Pics/motivation2.pdf} \hspace{1cm} \includegraphics[scale=0.25]{Pics/motivation1.pdf} \end{center} \caption{Variants of neutrality (left) and defense (right). The lower the integer $\ell$, the more neutral the set of arguments $X$ is with respect to $a$ (left). The lower the integer $m$ and the higher the integer $n = \min(n_1, \ldots, n_k)$, the better $a$ is defended by $X$ (right). Standard neutrality and defense correspond to $\ell = m = n = 1$.} \label{figure:motivation2} \end{figure} \section{Graded Acceptability}\label{Sec:Graded Acceptability} We now turn to the main contribution of this paper: a graded generalisation of Dung's acceptability semantics. We first introduce the intuitions behind our generalisation, and then define and study graded variants of Dung's defense and neutrality functions, which capture the proposed intuitions. These functions will then be used later (Section \ref{Sec:GradedSemantics}) to define and study a family of graded variants of Dung's semantics, and the rankings they enable (Section \ref{Sec:Ranking}).
\subsection{Introducing Graded Acceptability: Intuitions}\label{Sec:GradedAcceptabilityIntuitions} The central tenet of argumentation theory is that any individual argument cannot, in and of itself, constitute definitive grounds for believing that a claim is true. Rather, the status of an epistemic claim as true justified belief is not established by an individual argument, but through the dialectical consideration of counter-arguments and defenders of these counter-arguments \cite{SMReasoner}. Similarly, in practical reasoning, the status of (a claim representing) a decision option supported by an individual practical argument is not that of the option that simply maximises a given objective, but rather that of the best option contingent on having dis-preferred the alternative options and refuted the challenges made to the assumptions supporting the argument. Pragmatically, however, we operate under the assumption that the claim is true/the decision option is the best available, to the extent that, as yet, we know of no good reason to suppose otherwise. Argumentatively, a claim can be considered established only in as much as it is the claim of a justified argument included in a network of interrelated arguments and counter-arguments. Dung's abstract argumentation theory captures these principles by assuming {\em sets of} arguments, rather than individual arguments, as the units of analysis, and studying formal criteria (semantics) for sets of arguments to be acceptable. Apart from trivial cases (unattacked arguments), arguments are acceptable only as members of a set of acceptable arguments. The graded theory of acceptability that we aim at captures a notion of graduality while at the same time retaining the notion of a set of arguments as the central unit of analysis.
\subsubsection{Graded neutrality} According to the standard definition of neutrality (Definition \ref{definition:neutrality}) a set $X$ is neutral with respect to $a$ if $a$ is not attacked by {\em any} argument in $X$. A less demanding criterion of neutrality of $X$ with respect to $a$ would require that there exist {\em at most one} attacker of $a$ in $X$; a yet less demanding one (or at least one not `as demanding') would require that there exist {\em at most two} attackers of $a$ in $X$; and so on. Intuitively, these weakened neutrality criteria capture the idea that one (two, three, \ldots) attackers are not enough to rule out the co-acceptability of $a$ and its attacking arguments.\footnote{Using terminology from logic, this may be viewed as an argumentative form of paraconsistency. In more `human orientated' argumentation formalisms (e.g., \cite{dunne11weighted} and developments thereof), this may be viewed as accounting for an attacking argument not establishing definitive grounds for its claim (as discussed at the beginning of Section \ref{Sec:GradedAcceptabilityIntuitions}), and hence not definitively ruling out the claim of the attacked argument.} One then obtains a natural way to generalise the neutrality function (Definition \ref{definition:sensitive-neutrality} below) by making explicit a numerical level $\ell$ of neutrality of a set of arguments with respect to a given argument, as depicted in Figure \ref{figure:motivation2} (left).\footnote{Weighted Argument Systems \cite{dunne11weighted} propose a somewhat similar idea, whereby an inconsistency budget sets a threshold on the number of attacks that can be tolerated within a given set. However, notice that our notion of a threshold set, yielded by graded neutrality, is local in the sense that it pertains to the incoming attacks \emph{on each individual argument}.
We will later compare this and other related approaches in more detail (Section \ref{Sec:RelatedWork}).} So we say that $a$ is $\ell$-neutral with respect to $X$ whenever there are at most $\ell-1$ attackers of $a$ in $X$. \subsubsection{Graded defense} According to the standard notion of defense, an argument $a$ is defended by a set of arguments $X$ whenever {\em every} attacker of $a$ is attacked by {\em some} argument in $X$. The quantification pattern (`for all', `some') involved in this definition again offers a natural handle to generalise the notion of defense. If {\em all but at most one} of the attackers of $a$ are attacked by at least one argument in $X$, the quality of such defense (and hence the extent to which $a$ is acceptable w.r.t. $X$) can reasonably be considered `lower' than in the case in which \emph{all} attackers are counter-attacked by at least one argument in $X$. But the former quality of defense is still `higher' than (or at least not `as low as') the case in which {\em all but at most two} attackers are counter-attacked by at least one argument in $X$, and so on. Similarly, if all attackers of $a$ are counterattacked by {\em at least two} arguments in $X$, then the quality of this defense can reasonably be considered `higher' than in standard acceptability, but `lower' than (or at least not `as high as') the case in which all attackers of $a$ are counterattacked by {\em at least three} arguments in $X$. Combining these intuitions---depicted in Figure \ref{figure:motivation2} (right)---one obtains a way to generalise the defense function (Definition \ref{definition:sensitive} below) by making explicit, through numeric grades ($m$ and $n$) of the above type, how well a set $X$ defends an argument $a$. So we say that $X$ $mn$-defends $a$ whenever there are at most $m-1$ attackers of $a$ that are not counterattacked by at least $n$ arguments in $X$.
This notion of graded defense is related in a natural way to the above notion of graded neutrality: the set of arguments that are $mn$-defended by $X$ is the set of arguments that are not attacked by at least $m$ arguments that are not in turn attacked by at least $n$ arguments in $X$ (i.e., arguments that are $m$-neutral with respect to the set of arguments that are $n$-neutral with respect to $X$). In other words, the notion of tolerance towards attack (graded neutrality) can be iterated to obtain a notion of graded defense. Fact \ref{fact:properties_dn} will establish this claim formally. We illustrate the above intuitions with a few examples. \begin{example}\label{Ex:intuitions} In Figures \ref{Motivating1}i) -- \ref{Motivating1}iv), the encircled set $Xi$ defends $ai$ ($i = 1 \ldots 4$) under Dung's Definition \ref{definition:defense}. However we can differentiate these cases based on the number of attackers and defenders of $ai$. For instance, $X2$ more strongly defends $a2$ than $X1$ defends $a1$, as $a2$ is defended by two arguments whereas $a1$ is defended by one argument (i.e., the standard of defense that allows at most 0 attackers to not be defended by 2 arguments is met by $X2$'s defense of $a2$ but not by $X1$'s defense of $a1$). We will later, in Example \ref{Ex:DEJ}, reference the defense of $a3$ by $X3$ and of $a4$ by $X4$ to illustrate that neither can be said to be a stronger defense than the other. While neither $X5$ nor $X6$ defends $a5$, respectively $a6$, under Dung's Definition \ref{definition:defense}, observe that $X5$'s defense of $a5$ is stronger than $X6$'s defense of $a6$. The former meets a standard of defense that allows at most one attacker ($d5$) not to be defended by at least one defender (which goes hand in hand with accommodating the co-acceptability of $a5$ with at most one undefended attacker; i.e., $d5$ is $2$-neutral with respect to $X5$).
This standard is not met by $X6$'s defense of $a6$, since $a6$ is attacked by two undefended attackers ($d6$ and $e6$). In the latter case, a weaker standard of defense is met, which again goes hand in hand with accommodating the co-acceptability of $a6$ with its two undefended attackers. These notions then naturally generalise so that one can discriminate standards of defense based only on the number of attackers. Allowing at most one attacker not to be defended by two arguments is a standard of defense met by $X1$'s defense of $a1$, but not $X3$'s defense of $a3$ (the former defense thus being stronger than the latter). \end{example} \begin{figure}[h!!] \centering \includegraphics[scale=0.5]{Pics/Motivating1.pdf} \caption{i)--vi) $X_{i\in\{1\ldots6\}}$ defending $a_1,\ldots a_6$. vii) is the Hasse diagram of a preorder on the set of all arguments from i)--v) relevant for Example \ref{Ex:DEJ}.} \label{Motivating1} \end{figure} \subsection{Graded Defense and Neutrality Functions}\label{Sec:DefiningDefense} Let us move now to the formal definitions of graded defense and neutrality. Take an argument $a$ and a set of arguments $X$. Let $k$ be the number of $a$'s attackers ($b_1, \ldots, b_k$) and, for each $b_i$ ($1 \leq i \leq k$) let $n_i$ be the (non-zero) number of attackers of $b_i$ (i.e., defenders of $a$) in $X$. Finally, let $n$ be the minimum among the $n_i$s, i.e., $n = \min(\set{n_i}_{1 \leq i \leq k})$. We can then count the number ($\leq k$) of attackers of $a$ that are {\em not} counter-attacked by at least $n$ arguments in $X$, and let $m - 1$ be an upper bound on this number. Integers $m$ and $n$ therefore encode information about how strongly $a$ is defended by $X$, in the sense that they express a maximum number (i.e., $m - 1$) of attackers of $a$ which are not counterattacked by a minimum given number (i.e., $n$) of arguments in $X$.
We can now generalise Definition \ref{definition:defense} as follows: \begin{definition}[Graded defense] \label{definition:sensitive} Let $\Delta = \tuple{\mathcal{A}, \ar}$ be an $AF$ and let $m$ and $n$ be two positive integers ($m,n > 0$). The graded defense function $\cf{\substack{m \\ n}} : \pw{\mathcal{A}} \map \pw{\mathcal{A}}$ for $\Delta$ is defined as follows. For any $X \subseteq \mathcal{A}$: \begin{align*} \cfmn{m}{n}(X) & = \set{x \in \mathcal{A} \mid \nexistsn{y}{m}: [~y \ar x \AND \nexistsn{z}{n}: [~z \ar y \AND z \in X~]~]} \end{align*} where $\exists_{\geq n} x$, for integers $n$ (`there exist at least $n$ arguments $x$') are the standard first-order logic counting quantifiers.\footnote{Cf. \cite{dalen80logic}. The definition can be reformulated without counting quantifiers as follows: $$ \cf{\substack{m \\ n}}(X) = \set{ x \in \mathcal{A} \ST |\set{y \in \overline{x} \ST |\set{z \in \overline{y} \cap X }| < n }| < m } $$ where we write $\overline{x}$ to denote $\set{y \in \mathcal{A} \mid y \rightarrow x}$. } In the rare cases in which we need to make $\Delta$ explicit we write $\cfmn{m}{n}^\Delta$. \end{definition} So, $\cfmn{m}{n}(X)$ is the set of arguments (in the given framework) which have at most $m-1$ attackers that are not counter-attacked by at least $n$ arguments in $X$. \begin{example} In Figure \ref{Motivating1}, $a1 \in \cf{\substack{1 \\ 1}}(X1)$ and $a2 \in \cf{\substack{1 \\ 1}}(X2)$ since in both cases the following holds: at most $0$ arguments attacking $a1$, respectively $a2$, are not attacked by at least one argument in $X1$, respectively $X2$. However if we increment $n$ by $1$ we have that: $a2 \in \cf{\substack{1 \\ 2}}(X2)$ but $a1 \notin \cf{\substack{1 \\ 2}}(X1)$ since \emph{it is} the case that at least one argument attacking $a1$ is not attacked by at least two arguments in $X1$. 
Intuitively, this standard of defense allows for up to $0$ attackers to not be counter-attacked by two defenders, a standard met by $X2$'s defense of $a2$, but not by $X1$'s defense of $a1$. Continuing with Figure \ref{Motivating1}, $a5 \in \cf{\substack{3 \\ 1}}(X5)$ and $a6 \in \cf{\substack{3 \\ 1}}(X6)$, since in both cases the standard of defense that allows for no more than 2 unattacked attackers is met. However, $a5 \in \cf{\substack{2 \\ 1}}(X5)$ and $a6 \notin \cf{\substack{2 \\ 1}}(X6)$ since this standard of defense accommodates up to a maximum of 1 unattacked attacker, and in the latter case there is more than one unattacked attacker of $a6$. Finally, $a1 \notin \cf{\substack{1 \\ 2}}(X1)$ and $a3 \notin \cf{\substack{1 \\ 2}}(X3)$ since the $\cf{\substack{1 \\ 2}}$ standard of defense requires that all attackers of $a1$ ($a3$) are attacked by at least two arguments. However, $a1 \in \cf{\substack{2 \\ 2}}(X1)$ and $a3 \notin \cf{\substack{2 \\ 2}}(X3)$, since the standard of defense allowing at most one attacker not to be defended by two counter-attackers is met by $a1$ but not by $a3$. Notice that in this last case the two arguments $a1$ and $a3$ are discriminated based on the number of their attackers. \end{example} By the same logic, Definition \ref{definition:neutrality} can be generalised as follows: \begin{definition}[Graded neutrality function] \label{definition:sensitive-neutrality} Let $\Delta = \tuple{\mathcal{A}, \ar}$ be an $AF$ and let $\ell$ be any positive integer. The graded neutrality function $\cff{\ell} : \pw{\mathcal{A}} \map \pw{\mathcal{A}}$ for $\Delta$ is defined as follows. For any $X \subseteq \mathcal{A}$: \begin{align*} \cff{\ell}(X) & = \set{x \in \mathcal{A} \mid \nexistsn{y}{\ell}: y \ar x \AND y \in X}.
\end{align*} \end{definition} So, given a set of arguments $X$, $\cff{\ell}(X)$ denotes the set of arguments which have at most $\ell - 1$ attackers in $X$.\footnote{Equivalently, graded neutrality can be defined as follows, without the use of counting quantifiers: $$ \cff{\ell}(X) = \set{x \in \mathcal{A} \ST |\overline{x} \cap X|<\ell}. $$ } \begin{example}\label{IllustratingNeutrality} In Figure \ref{Motivating1}, $\cff{2}(X5) = \{a5,b5,c5,d5\}$. Notice that $\cff{2}(\{a5,b5,c5,d5\}) = \{b5,c5,d5\}$. Also, $\cff{2}(X6) = \{b6,c6,d6,e6\}$. \end{example} \subsection{Properties of Graded Defense and Neutrality} The following two facts show that the graded defense and neutrality functions are generalisations of the standard functions defined in Definitions \ref{definition:defense} and \ref{definition:neutrality}, and that such generalisations remain well-behaved in the sense that they retain many of the key features of their standard variants. \begin{fact} \label{fact:properties_dn} For any $AF$ $\Delta = \tuple{\mathcal{A}, \ar}$, $X, Y \subseteq \mathcal{A}$, and $m$, $n$ and $\ell$ positive integers: \begin{eqnarray} \cf{\substack{1 \\ 1}}(X) & = & \cf{}(X) \label{eq:defense} \\ \cff{1}(X) & = & \cff{}(X) \label{eq:neutrality} \\ X \subseteq Y & \IMPLIES & \cff{\ell}(Y) \subseteq \cff{\ell}(X) \label{formula:antitonicm} \\ X \subseteq Y & \IMPLIES & \cf{\substack{m \\ n}}(X) \subseteq \cf{\substack{m \\ n}}(Y) \label{formula:monotonicmn} \\ \cff{m}(\cff{n}(X)) & = & \cf{\substack{m \\ n}}(X) \label{eq:twofold} \end{eqnarray} \end{fact} \begin{proof} Equation \eqref{eq:defense} follows from the fact that Definition \ref{definition:defense} can be retrieved from Definition \ref{definition:sensitive} by setting $n = m = 1$. Similarly \eqref{eq:neutrality} follows from the fact that Definition \ref{definition:neutrality} can be retrieved from Definition \ref{definition:sensitive-neutrality} by setting $\ell = 1$.
Equation \eqref{eq:twofold} follows from Definitions \ref{definition:sensitive} and \ref{definition:sensitive-neutrality} by the following series of equations: \begin{eqnarray*} \cff{m}(\cff{n}(X)) & = & \cff{m}(\set{y \in A \mid \nexistsn{z}{n}: [~z \ar y \AND z \in X] }) \\ & = & \set{x \in A \mid \nexistsn{y}{m}: [~y \ar x \AND \nexistsn{z}{n}: [~z \ar y \AND z \in X]]} \\ & = & \cfmn{m}{n}(X) \end{eqnarray*} Formulae \eqref{formula:antitonicm} and \eqref{formula:monotonicmn} are direct consequences of Definitions \ref{definition:sensitive} and \ref{definition:sensitive-neutrality}. \end{proof} Equation \eqref{eq:defense} reformulates $\cf{}(X)$ as the set of arguments for which it is not the case that there are one or more attackers, which are not counter-attacked by one or more arguments in $X$; that is, no attacker fails to be attacked by some argument in $X$. Equation \eqref{eq:neutrality} provides an analogous reformulation of $\cff{}(X)$. The remaining formulae generalise Fact \ref{fact:simple} to the graded setting. In particular, graded defense is monotonic \eqref{formula:monotonicmn}, graded neutrality is antitonic \eqref{formula:antitonicm}, and equation \eqref{eq:twofold} shows that, as in the standard case, the defense function is the two-fold iteration of the neutrality function (as in the standard case we may use the notation $\cff{m} \circ \cff{n}$ to denote this composition). Importantly, the continuity of the defense function is also preserved in the graded setting: \begin{fact}[$\omega$-continuity of graded defense] \label{fact:graded_continuous} If $\Frame$ is finitary, then function $\cfmn{m}{n}$ is \mbox{(upward-)} continuous for any $m$ and $n$ positive integers.
I.e., for any upward directed set $\mathcal{D}$ of finite subsets of $\mathcal{A}$: \begin{align} \cfmn{m}{n}(\bigcup_{X \in \mathcal{D}} X) & = \bigcup_{X \in \mathcal{D}} \cfmn{m}{n}(X) \label{eq:scott} \end{align} Similarly, $\cfmn{m}{n}$ is (downward-)continuous for any $m$ and $n$ positive integers. I.e., for any downward directed set $\mathcal{D}$ of finite subsets of $\mathcal{A}$: \begin{align} \cfmn{m}{n}(\bigcap_{X \in \mathcal{D}} X) & = \bigcap_{X \in \mathcal{D}} \cfmn{m}{n}(X) \label{eq:scottt} \end{align} \end{fact} \begin{proof}[Sketch of proof] The argument used to prove Fact \ref{fact:continuous} carries through in exactly the same manner, exploiting the monotonicity of $\cfmn{m}{n}$ \eqref{formula:monotonicmn} and the finitariness assumption over $\Frame$. \end{proof} Finally, we establish some properties showing how the values of the defense and neutrality functions are affected by varying the parameters $m$ and $n$. \begin{fact} \label{fact:accrual_relations} For any $AF$ $\Delta = \tuple{\mathcal{A}, \ar}$, $X \subseteq \mathcal{A}$, and $\ell$, $m$ and $n$ positive integers: \begin{eqnarray} \cff{\ell}(X) & \subseteq &\cff{\ell+1}(X) \label{eq:increase1} \\ \cfmn{m}{n}(X) & \subseteq & \cfmn{m+1}{n}(X) \label{eq:increase2} \\ \cfmn{m}{n}(X) & \supseteq & \cfmn{m}{n+1}(X) \label{eq:increase3} \end{eqnarray} \end{fact} \begin{proof} Recall the definition of the neutrality function (Definition \ref{definition:sensitive-neutrality}). To establish \eqref{eq:increase1} it suffices to notice that the property expresses the contrapositive of the following statement: if there exist at least $\ell+1$ attackers in $X$ then there exist at least $\ell$ attackers in $X$.
Property \eqref{eq:increase2} then follows directly by \eqref{eq:increase1} above and \eqref{eq:twofold} (Fact \ref{fact:properties_dn}), through the following series of relations: \begin{align*} \cfmn{m}{n}(X) & = \cff{m}(\cff{n}(X)) \\ & \subseteq \cff{m+1}(\cff{n}(X)) = \cfmn{m+1}{n}(X). \end{align*} A similar argument applies to establish \eqref{eq:increase3}, which follows by \eqref{eq:increase1} above, \eqref{eq:twofold}, and the antitonicity of $\cff{}$ (Fact \ref{fact:properties_dn}): \begin{align*} \cfmn{m}{n}(X) & = \cff{m}(\cff{n}(X)) \\ & \supseteq \cff{m}(\cff{n+1}(X)) = \cfmn{m}{n+1}(X). \end{align*} This completes the proof. \end{proof} Intuitively, \eqref{eq:increase1} states that the set of arguments attacked by at most $\ell-1$ arguments in $X$ is included in the set of arguments attacked by at most $\ell$ arguments in $X$. This establishes an ordering, in terms of logical strength, among the values of different neutrality functions: the lower $\ell$ is, the stricter the value of $\cff{\ell}$ applied to the same set of arguments $X$. Properties \eqref{eq:increase2} and \eqref{eq:increase3} then follow by combining this simple fact with the fact that $\cfmn{m}{n}$ is the composition of $\cff{m}$ with $\cff{n}$ \eqref{eq:twofold}.
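For finite frameworks, the graded functions and the properties in Facts \ref{fact:properties_dn} and \ref{fact:accrual_relations} are easy to check computationally. The following Python sketch implements $\cff{\ell}$ and $\cfmn{m}{n}$ via the counting reformulations given in the footnotes; the framework and all names are our own illustrative choices, not from the paper.

```python
# Illustrative sketch of graded neutrality n_l and graded defense d_mn,
# via the counting reformulations:
#   n_l(X)  = { x : |attackers(x) & X| < l }
#   d_mn(X) = { x : |{ y in attackers(x) : |attackers(y) & X| < n }| < m }
# The AF below and all names are hypothetical examples.

def attackers(attacks, x):
    return {y for (y, z) in attacks if z == x}

def n_l(attacks, args, X, l):
    """Arguments with at most l-1 attackers in X."""
    return {x for x in args if len(attackers(attacks, x) & X) < l}

def d_mn(attacks, args, X, m, n):
    """Arguments with at most m-1 attackers that are not
    counter-attacked by at least n arguments in X."""
    return {x for x in args
            if len({y for y in attackers(attacks, x)
                    if len(attackers(attacks, y) & X) < n}) < m}

# a is attacked by b and c; b has two defenders (d, e) in X, c one (f).
args = {"a", "b", "c", "d", "e", "f"}
attacks = {("b", "a"), ("c", "a"), ("d", "b"), ("e", "b"), ("f", "c")}
X = {"d", "e", "f"}

assert "a" in d_mn(attacks, args, X, 1, 1)      # Dung defense
assert "a" not in d_mn(attacks, args, X, 1, 2)  # c has only one defender
assert "a" in d_mn(attacks, args, X, 2, 2)      # one shortfall tolerated

for m in (1, 2, 3):
    for n in (1, 2, 3):
        D = d_mn(attacks, args, X, m, n)
        # eq:twofold -- graded defense is the composition n_m . n_n:
        assert D == n_l(attacks, args, n_l(attacks, args, X, n), m)
        # parameter monotonicity (Fact on accrual relations):
        assert D <= d_mn(attacks, args, X, m + 1, n)
        assert D >= d_mn(attacks, args, X, m, n + 1)
```

The loop at the end checks equation \eqref{eq:twofold} and properties \eqref{eq:increase2}--\eqref{eq:increase3} empirically on this one example; the Facts above establish them for all frameworks.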
\subsection{Comparing Graded Defense and Neutrality Functions} \label{Sec:RankingGradedDF} \begin{figure}[t] \begin{center} \scalebox{.9}{ \begin{tikzpicture}[domain=0:4] \draw[very thin,color=gray] (-0.1,-0.1) grid (4.1,4.1); \draw[->] (-0.2,0) -- (5,0) node[right] {$m$}; \draw[->] (0,-0.2) -- (0,5) node[above] {$n$}; \draw (1,0) node [below] {$1$}; \draw (2,0) node [below] {$2$}; \draw (3,0) node [below] {$3$}; \draw (4,0) node [below] {$4$}; \draw (1,1) node [blue] {\textbullet}; \draw (1,0.6) node {Dung's}; \draw (0,1) node [left] {$1$}; \draw (0,2) node [left] {$2$}; \draw (0,3) node [left] {$3$}; \draw (0,4) node [left] {$4$}; \draw[blue,thick, -latex] (1,1) -- (1,2); \draw[blue,thick, -latex] (1,2) -- (1,3); \draw[blue,thick, -latex] (1,3) -- (1,4); \draw[blue,thick,dotted, -latex] (1,4) -- (1,5); \draw[blue,thick, -latex] (2,1) -- (2,2); \draw[blue,thick, -latex] (2,2) -- (2,3); \draw[blue,thick, -latex] (2,3) -- (2,4); \draw[blue,thick,dotted, -latex] (2,4) -- (2,5); \draw[blue,thick, -latex] (3,1) -- (3,2); \draw[blue,thick, -latex] (3,2) -- (3,3); \draw[blue,thick, -latex] (3,3) -- (3,4); \draw[blue,thick,dotted, -latex] (3,4) -- (3,5); \draw[blue,thick, -latex] (4,1) -- (4,2); \draw[blue,thick, -latex] (4,2) -- (4,3); \draw[blue,thick, -latex] (4,3) -- (4,4); \draw[blue,thick,dotted, -latex] (4,4) -- (4,5); \draw[blue,thick, -latex] (2,1) -- (1,1); \draw[blue,thick, -latex] (3,1) -- (2,1); \draw[blue,thick, -latex] (4,1) -- (3,1); \draw[blue,thick,dotted, -latex] (5,1) -- (4,1); \draw[blue,thick, -latex] (2,2) -- (1,2); \draw[blue,thick, -latex] (3,2) -- (2,2); \draw[blue,thick, -latex] (4,2) -- (3,2); \draw[blue,thick,dotted, -latex] (5,2) -- (4,2); \draw[blue,thick, -latex] (2,3) -- (1,3); \draw[blue,thick, -latex] (3,3) -- (2,3); \draw[blue,thick, -latex] (4,3) -- (3,3); \draw[blue, thick,dotted, -latex] (5,3) -- (4,3); \draw[blue, thick, -latex] (2,4) -- (1,4); \draw[blue, thick, -latex] (3,4) -- (2,4); \draw[blue, thick, -latex] (4,4) -- (3,4); 
\draw[blue, thick,dotted, -latex] (5,4) -- (4,4); \draw[blue, dashed, -] (1,1) -- (4,4); \draw[blue, dashed, -] (1,2) -- (3,4); \draw[blue, dashed, -] (1,3) -- (2,4); \draw[blue, dashed, -] (2,1) -- (4,3); \draw[blue, dashed, -] (3,1) -- (4,2); \end{tikzpicture} } \end{center} \caption{Depiction of the partial order $\rhd$ over the set of all graded defense functions (with $0 < m,n \in \mathbb{N}$). The horizontal and vertical axes consist of the values of $m$ and, respectively, $n$. Arrows go from `weaker' to `stronger' defense functions. Dashed lines (diagonals) denote incomparability. The point $m = n = 1$ denotes the position that Dung's characteristic function occupies in the ordering.} \label{figure:order} \end{figure} Fact \ref{fact:accrual_relations} provides ground for a natural way in which different graded defense and neutrality functions can be ordered as their parameters $m$ and $n$ vary. The choice of these parameters determines the logical strength of different `types' or `standards' of conflict-freeness, which is based on neutrality, and acceptability, which is based on defense. In light of Fact \ref{fact:accrual_relations}, comparing different neutrality functions is straightforward. Any relaxation on the requirement that no argument in a set be attacked by other arguments in that set leads to weaker forms of conflict-freeness. For any $X \subseteq \mathcal{A}$, $\cff{\ell}(X) \subseteq \cff{k}(X)$ for $k$ and $\ell$ positive integers whenever $\ell \leq k$. So neutrality functions can simply be ordered linearly like natural numbers, with lower numbers denoting `stronger' forms of neutrality and hence conflict-freeness. 
The ordering of defense functions is more interesting, as these functions are parameterized by two integers: \begin{definition}\label{DefGradedDefFunction} $\cfmn{m}{n} \rhd \cfmn{s}{t}$ (to be read ``is at least as strong as'') iff for any $X \subseteq \mathcal{A}$, $\cfmn{m}{n}(X) \subseteq \cfmn{s}{t}(X)$, with $m,n,s,t$ positive integers. \end{definition} Relation $\rhd$ orders the set of all graded defense functions in a well-behaved manner: \begin{fact}\label{Fact:Ordering} Let $\Delta = \tuple{\mathcal{A}, \ar}$ be an $AF$, and let $\rhd$ be defined as above. Then: \begin{enumerate}[(i)] \item $\cfmn{m}{n} \rhd \cfmn{s}{t}$ iff $m \leq s$ and $t \leq n$; \item Relation $\rhd$ is a {\em partial order}, i.e., reflexive, antisymmetric and transitive. \end{enumerate} \end{fact} \begin{proof} \fbox{{\em (i)}} is a direct consequence of Fact \ref{fact:accrual_relations}. \fbox{{\em (ii)}} follows directly from how relation $\rhd$ is defined and the properties of set inclusion. \end{proof} The relation is depicted in its generality in Figure \ref{figure:order}. Expressions $\cfmn{m}{n} \rhd \cfmn{s}{t}$ may be read as follows: `being $mn$-defended is {\em weakly preferable over} being $st$-defended' or `the $mn$-defense function is {\em at least as strong as} the $st$-defense function'. Intuitively, the partial order $\rhd$ uses logical strength as a way to order graded defense functions. This equates with the intuition that if an argument meets a demanding standard of defense it also meets a less demanding one. 
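The characterization in Fact \ref{Fact:Ordering}(i) makes the order $\rhd$ directly computable. A minimal sketch (the function names are our own, purely illustrative):

```python
# Illustrative sketch of the partial order over graded defense
# functions, via Fact (i): d_mn is at least as strong as d_st
# iff m <= s and t <= n.

def at_least_as_strong(mn, st):
    """d_mn 'is at least as strong as' d_st."""
    (m, n), (s, t) = mn, st
    return m <= s and t <= n

def incomparable(mn, st):
    return not at_least_as_strong(mn, st) and not at_least_as_strong(st, mn)

# Dung's d_11 is strengthened by d_12, weakened by d_21,
# and incomparable with d_22:
assert at_least_as_strong((1, 2), (1, 1))
assert at_least_as_strong((1, 1), (2, 1))
assert incomparable((1, 1), (2, 2))
```

The last assertion reflects the diagonals of Figure \ref{figure:order}: raising both parameters at once trades tolerance for demanded defenders, and logical strength alone cannot rank the two functions.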
\begin{figure}[t] \centering \includegraphics[width=2in]{Pics/FigEx1and3version3.pdf} \caption{Framework of Example \ref{example:defense_neutrality}} \label{MotivatingDEJ} \end{figure} \begin{example} \label{example:defense_neutrality} Referring to the framework in Figure \ref{MotivatingDEJ}, we illustrate Formula \eqref{eq:increase2} and Fact \ref{Fact:Ordering}: $\cfmn{1}{1}(\{c1,c2\})$ = $\{c1,c2,b2\}$ $\subseteq$ $\cfmn{2}{1}(\{c1,c2\})$ = $\{c1,c2,a,b2\}$ $\subseteq$ $\cfmn{3}{1}(\{c1,c2\})$ = $\{c1,c2,a,b1,b2\}$. We also illustrate Formula \eqref{eq:increase3} and Fact \ref{Fact:Ordering} with reference to Figure \ref{Motivating1}ii): $\cfmn{1}{1}(\{d2,c2,a2\})$ = $\{d2,c2,a2\}$ $\supseteq$ $\cfmn{1}{2}(\{d2,c2,a2\})$ = $\{d2,c2,a2\}$ $\supseteq$ $\cfmn{1}{3}(\{d2,c2,a2\})$ = $\{d2,c2\}$. \end{example} \subsubsection{On the Partiality of $\rhd$}\label{Sec:ExtendingPartialOrder} As the relation $\rhd$ is a partial order, some defense functions may be incomparable (see Figure \ref{figure:order}) and this, we claim, is intuitive. By way of example, consider Dung's defense $\cfmn{1}{1}$. This standard of defense is strengthened by $\cfmn{1}{2}$ (higher $n$ parameter) and weakened by $\cfmn{2}{1}$ (higher $m$ parameter). But under the definition of $\rhd$, $\cfmn{2}{2}$ defines a standard of defense which is incomparable with respect to $\cfmn{1}{1}$: it demands more defenders per attacker, but tolerates more attackers that are not counter-attacked to the desired level. In general, incomparability arises every time the parameters of the functions do not meet the condition $m \leq s$ and $t \leq n$ of Fact \ref{Fact:Ordering}. It should be clear, however, that the partial order $\rhd$ over graded defense functions could be further refined to a total order by resolving incomparability. This can be done in two ways: by either giving priority to parameter $m$ or to parameter $n$. 
For example, if a set of arguments is $mn$-defended and another one is $st$-defended, where $m < s$ and $n < t$ (i.e., they are incomparable w.r.t. $\rhd$) then the first one can be stipulated to be more strongly defended because it is less tolerant with respect to the failure of defense. Therefore, for $m < s$ and $n < t$, belonging to $\cfmn{m}{n}(X)$ is `better' than belonging to $\cfmn{s}{t}(X)$. One could then redefine $\rhd$ as follows: $\cfmn{m}{n} \rhd \cfmn{s}{t}$ iff either $m < s$, or $m = s$ and $n \geq t$. This yields a lexicographic order over graded defense functions giving priority to the $m$ parameter over the $n$ parameter. We do not investigate such refinements further in this paper. \section{Graded Semantics for Abstract Argumentation} \label{Sec:GradedSemantics} By means of the graded defense and neutrality functions, Dung's notions of acceptability and conflict-freeness can be generalised to graded variants in a natural way. A set of arguments $X$ is said to be conflict-free at grade $\ell$ (or, $\ell$-conflict-free) whenever none of its arguments is attacked by at least $\ell$ arguments in $X$. A set of arguments $X$ is said to be acceptable at grade $mn$ (or, $mn$-acceptable) whenever all of its arguments are such that at most $m-1$ of their attackers are not counter-attacked by at least $n$ arguments in $X$. A graded notion of admissibility follows ($\ell$-conflict-freeness plus $mn$-acceptability) and we thereby obtain graded variants of all the main admissibility-based semantics, which are simply Dung's standard semantics based on graded admissibility instead of standard admissibility. The first part of this section formally defines graded semantics. The rest of the section then develops a core theory of graded semantics. 
In the tradition of abstract argumentation, our results focus on the central questions of the existence and construction of graded extensions, and provide positive results under certain constraints on the parameters $n$, $m$ and $\ell$. \subsection{Graded Generalisation of Dung's Semantics} \begin{table}[t] \hspace*{-1cm} \begin{tabular}{lcl} \hline $X$ is $\ell$-conflict-free in $\Frame$ & iff & $X \subseteq \cff{\ell}(X)$ \\ $X$ is $mn$-self-defended & iff & $X \subseteq \cfmn{m}{n}(X)$ \\ $X$ is $\ell mn$-admissible in $\Delta$ & iff & $X \subseteq \cff{\ell}(X)$ and $X \subseteq \cfmn{m}{n}(X)$ \\ $X$ is an $\ell mn$-complete extension of $\Delta$ & iff & $X \subseteq \cff{\ell}(X)$ and $X = \cfmn{m}{n}(X)$ \\ $X$ is an $\ell mn$-stable extension of $\Delta$ & iff & $X = \cff{n}(X) = \cff{m}(X) \subseteq \cff{\ell}(X)$ \\ $X$ is the $\ell mn$-grounded extension of $\Delta$ & iff & $X$ is the smallest $\ell mn$-complete ext. of $\Delta$ \\ $X$ is an $\ell mn$-preferred extension of $\Delta$ & iff & $X$ is a largest $\ell mn$-complete ext. of $\Delta$ \\ \hline \end{tabular} \caption{Graded generalizations of standard argumentation theory notions from \cite{dung95acceptability}.} \label{table:graded} \end{table} We are now in the position to generalise Definition \ref{def:DungSemantics} as follows: \begin{definition}[Graded Extensions]\label{table:accrual-sensitive} Let $\Delta = \tuple{\mathcal{A}, \ar}$ be an $AF$, $X \subseteq \mathcal{A}$, and $\ell$, $m$ and $n$ be positive integers. Graded extensions are defined as in Table \ref{table:graded}. We may write $\adm_{\ell mn}(\Delta)$, $\prf_{\ell mn}(\Delta)$ and $\stb_{\ell mn}(\Delta)$ to denote, respectively, the set of $\ell mn$-admissible, $\ell mn$-preferred and $\ell mn$-stable extensions of $\Delta$, and $\grn_{\ell mn}(\Delta)$ to denote the $\ell mn$-grounded extension of $\Delta$.
Finally, for an extension type $S \in \set{\mathit{grounded}, \mathit{preferred}, \mathit{stable}}$, we say that $a \in \mathcal{A}$ is {\em credulously} justified w.r.t. $\ell mn$-S if $a \in \bigcup S_{\ell mn}(\Delta)$; and {\em sceptically} justified w.r.t. $\ell mn$-S if $a \in \bigcap S_{\ell mn}(\Delta)$. Henceforth we assume the sceptical definition when referring to an argument simply as being {\em justified}. \end{definition} The definition deserves some comment. Note first of all that when $\ell = m = n = 1$, we recover the standard definition of conflict-freeness, admissibility and extensions (Definition \ref{def:DungSemantics}), which we henceforth refer to as `Dung conflict-freeness', `Dung admissibility' and `Dung extensions'. The key notion is graded admissibility, which is obtained by parameterizing the conflict-freeness requirement by $\ell$ --- i.e., $X \subseteq \cff{\ell}(X)$ ---, and parameterizing the self-defense requirement by $m$ and $n$ --- i.e., $X \subseteq \cfmn{m}{n}(X)$. The remaining graded semantics are defined by extending graded admissibility in exactly the same way in which Dung admissibility is extended to define the standard Dung semantics. So, a graded complete extension, with parameters $\ell, m$ and $n$, is a fixpoint of $\cfmn{m}{n}$, which is also $\ell$-conflict-free, the graded grounded extension is the smallest $\ell mn$-complete extension, and the graded preferred extensions are the largest $\ell mn$-complete extensions. Finally, a graded stable extension, with parameters $\ell, m$ and $n$, is a fixpoint of $\cff{n}$ and $\cff{m}$ (and therefore of $\cfmn{m}{n}$), which is also $\ell$-conflict-free. Constructive existence results for these semantics are provided in the next section. Each graded extension type should then be interpreted as a class of weakenings and strengthenings of its standard Dung counterpart.
For example: Dung complete extensions are strengthened by $11n$-complete extensions, with $n > 1$, which require a higher number of defenders for each attacked argument (that is, the requirements for acceptability are strengthened); and are weakened by $\ell 11$-complete extensions, with $\ell > 1$, which tolerate a higher level of internal conflict (that is, weakening the conflict-freeness requirement), or by $1 m1$-complete extensions, with $m > 1$, which tolerate a higher level of undefended arguments (that is, weakening the acceptability requirement). So for each Dung extension type, we now have an ordered family of extensions incorporating a form of graduality. \subsection{Fixpoint Construction for Graded Extensions} We proceed as in the standard case (cf. Section \ref{Sec:Background}). The basic idea is as follows: given a graded admissible set, we show how, and under what assumptions on the parameters $\ell$, $m$ and $n$, it can be expanded into a graded complete set through a process of fixpoint approximation. Fix a framework $\Frame$ and take a set $X$ such that $X \subseteq \cfmn{m}{n}(X)$ and $X \subseteq \cff{\ell}(X)$ (that is, an $\ell mn$-admissible set). By iterating $\cfmn{m}{n}$, consider the stream of sets \begin{align} \cfmn{m}{n}^0(X), \cfmn{m}{n}^1(X), \ldots \label{stream1} \end{align} and the stream of sets \begin{align} \cff{n}(\cfmn{m}{n}^0(X)), \cff{n}(\cfmn{m}{n}^1(X)), \ldots \label{stream1.5} \end{align} which, by Fact \ref{fact:properties_dn}, is equivalent to the stream \begin{align} \cfmn{n}{m}^0(\cff{n}(X)), \cfmn{n}{m}^1(\cff{n}(X)), \ldots. \label{stream2} \end{align} By the above assumptions on $X$, and since $\cfmn{m}{n}$ is monotonic and $\cff{n}$ is antitonic (cf. Fact \ref{fact:properties_dn}), the stream in \eqref{stream1} is non-decreasing and the stream in \eqref{stream2} is non-increasing, with respect to set inclusion.
In finite attack graphs, these streams must therefore stabilize, reaching a limit by iteration $|\mathcal{A}| + 1$. In infinite but finitary attack graphs, we will see that the limit can be reached at $\omega$. We will also see (Lemma \ref{theorem:approxim81}) that the limits of these streams correspond to the {\em smallest} fixpoint of $\cfmn{m}{n}$ {\em containing} $X$ and, respectively, the {\em largest} fixpoint of $\cfmn{n}{m}$ which is {\em contained} in $\cff{n}(X)$.\footnote{Notice the reversal in the parameters $m$ and $n$ due to the fact that $\cff{n}(\cfmn{m}{n}(X)) = \cfmn{n}{m}(\cff{n}(X))$, a direct consequence of Fact \ref{fact:properties_dn}.} We denote the first one by $\lfp_X.\cfmn{m}{n}$ and the second one by $\gfp_X.\cfmn{n}{m}$. Intuitively, the two sets denote the smallest superset of $X$ which is equal to the set of arguments it $mn$-defends and, respectively, the largest set whose arguments are not attacked by at least $n$ arguments in $X$ and which is equal to the set of arguments it $nm$-defends. The above construction is illustrated in Figure \ref{figure:decomposition} below. We now prove its correctness, showing that the limits of the streams in \eqref{stream1} and \eqref{stream2} are indeed the desired fixpoints.
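The two graded functions and the lower stream can be prototyped in a few lines. The following Python sketch is illustrative only: the four-argument chain $AF$ ($d$ attacks $c$, $c$ attacks $b$, $b$ attacks $a$) is an assumed toy example rather than one of the paper's figures, the helper names are ours, and $\cfmn{m}{n}$ is computed via the interdefinability of the two functions ($\cfmn{m}{n} = \cff{m} \circ \cff{n}$):

```python
def neutrality(A, att, X, l):
    """cff_l(X): arguments attacked by fewer than l members of X."""
    return {a for a in A if sum((b, a) in att for b in X) < l}

def defense(A, att, X, m, n):
    """cfmn_mn(X): arguments with at most m-1 attackers not counter-attacked
    by at least n members of X; computed as cff_m(cff_n(X))."""
    return neutrality(A, att, neutrality(A, att, X, n), m)

def lfp_from(A, att, X, m, n):
    """Limit of the stream d^0(X), d^1(X), ...  Assumes X is mn-self-defended,
    so on a finite graph the non-decreasing stream stabilises."""
    cur = set(X)
    while True:
        nxt = defense(A, att, cur, m, n)
        if nxt == cur:
            return cur
        cur = nxt

# Assumed toy chain AF: d -> c -> b -> a  (x -> y reads "x attacks y").
A = {"a", "b", "c", "d"}
att = {("d", "c"), ("c", "b"), ("b", "a")}

grounded_11 = lfp_from(A, att, set(), 1, 1)  # standard grounded extension
grounded_12 = lfp_from(A, att, set(), 1, 2)  # n = 2: two defenders required
```

With the standard parameters the stream from $\emptyset$ stabilises on $\{b, d\}$; raising $n$ to $2$ expels $b$, since its attacker $c$ is counter-attacked by only one argument, and the limit shrinks to $\{d\}$.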
\begin{figure}[t] \centering \begin{tikzpicture} \draw (0,0) node (0) {X} (0,2.2) node (12) {$\cff{m}(X)$} (1,4) node (1) {$\cff{n}(X)$} (2,1) node (2) {$\cfmn{m}{n}(X)$} (3,3.2) node (3) {$\cfmn{n}{m}(\cff{n}(X))$} (3,4) node (3a) {$ \begin{array}{c} \cff{n}(\cfmn{m}{n}(X)) \\ \rotatebox{90}{=} \end{array} $} (4,1.4) node (4) {$\cfmn{m}{n}^2(X)$} (5,2.9) node (5) {$\cfmn{n}{m}^2(\cff{n}(X))$} (5.2,3.8) node (5a) {$ \begin{array}{c} \cff{n}(\cfmn{m}{n}^2(X)) \\ \rotatebox{90}{=} \end{array} $} (6,1.6) node (6) {\ldots} (7,2.8) node (7) {\ldots} (10,2.8) node (10) {$\gfp_X.\cfmn{n}{m}$} (10,3.6) node (10a){$ \begin{array}{c} \cff{n}(\lfp_X.\cfmn{m}{n}) \\ \rotatebox{90}{=} \end{array} $} (8,1.7) node (8) {\ldots} (10,1.7) node (11) {$\lfp_X.\cfmn{m}{n}$}; \draw[densely dotted] (0) -- (1); \draw[densely dotted] (0) -- (12); \draw[densely dotted] (12) -- (1); \draw[densely dotted] (1) -- (2); \draw[densely dotted] (2) -- (3); \draw[densely dotted] (3) -- (4); \draw[densely dotted] (4) -- (5); \draw[densely dotted] (5) -- (6); \draw[densely dotted] (6) -- (7); \draw[densely dotted] (7) -- (8); \draw[densely dotted] (11) -- (10); \draw[thin,loosely dotted] plot[smooth] coordinates {(0,0) (2,1) (4,1.4) (6,1.6) (8,1.7) (10,1.7)}; \draw[thin,loosely dotted] plot[smooth] coordinates {(1,4) (3,3.2) (5,2.9) (7,2.8) (10,2.8)}; \end{tikzpicture} \caption{ The two streams of \eqref{stream1} and \eqref{stream2} under the assumption that $X$ is $mmn$-admissible and the two integers $n$ and $m$ are such that $n \geq m$. Position with respect to the horizontal axis indicates the number of iterations, and position with respect to the vertical axis indicates set-theoretic inclusion. Cf.
Figure \ref{figure:decomposition1}, which depicts the same behaviour for the standard, non-graded, case.} \label{figure:decomposition} \end{figure} \smallskip First of all, the following important lemma shows under what conditions on $m$ and $n$ graded conflict-freeness can be preserved by the above process of iteration of the defense function.\footnote{In the following two lemmas we handle only parameters $m$ and $n$ directly. This, we will see, is sufficient to establish results also concerning parameter $\ell$ later (Theorem \ref{fact:smallest_complete}).} \begin{lemma} \label{lemma:preserve_cf1} Let $\Frame$ be a finitary attack graph, $X \subseteq \mathcal{A}$ be such that $X \subseteq \cfmn{m}{n}(X)$ and $X \subseteq \cff{m}(X)$, and $m,n$ be two positive integers such that $n \geq m$. Then for any $k$ s.t. $0 \leq k < \omega$, $$ X \subseteq \cfmn{m}{n}^k(X) \subseteq \cff{m}(\cfmn{m}{n}^k(X)) \subseteq \cff{m}(X). $$ \end{lemma} \begin{proof} \fbox{$X \subseteq \cfmn{m}{n}^k(X)$} holds by the assumption that $X \subseteq \cfmn{m}{n}(X)$ and by the monotonicity of $\cfmn{m}{n}$ (Fact \ref{fact:properties_dn}). \fbox{$\cfmn{m}{n}^k(X) \subseteq \cff{m}(\cfmn{m}{n}^k(X))$} is proven by induction over $k$. The \fbox{base case} holds by assumption as $\cfmn{m}{n}^0(X) = X$ is $mmn$-admissible and therefore $m$-conflict-free. For the \fbox{induction step} assume (IH) that $\cfmn{m}{n}^k(X) \subseteq \cff{m}(\cfmn{m}{n}^k(X))$ (that is, the $k^{\mathit{th}}$ step is $m$-conflict-free). We show that $\cfmn{m}{n}^{k+1}(X) \subseteq \cff{m}(\cfmn{m}{n}^{k+1}(X))$. Suppose towards a contradiction that this is not the case. Then there exist $x$ and $y_1, \ldots, y_m$ in the set $\cfmn{m}{n}^{k+1}(X) = \cfmn{m}{n}(\cfmn{m}{n}^{k}(X))$ such that $x \al y_i$ for $1 \leq i \leq m$. Since $n \geq m$ by assumption, by the definition of $\cfmn{m}{n}$ there exist $z_1, \ldots, z_n \in \cfmn{m}{n}^k(X)$ such that $y_m \al z_i$ for $1 \leq i \leq n$. That is, at least one among all $y_i$'s (w.l.o.g.
assumed to be $y_m$) is attacked by at least $n$ arguments in $\cfmn{m}{n}^k(X)$. But from this and the assumption that $n \geq m$, there exist $w_1, \ldots, w_n \in \cfmn{m}{n}^k(X)$ such that $z_n \al w_i$ for $1 \leq i \leq n$. That is, at least one among all $z_i$'s (w.l.o.g. assumed to be $z_n$) is also attacked by at least $n$ arguments in $\cfmn{m}{n}^k(X)$. Now since $n \geq m$ we conclude that $\cfmn{m}{n}^k(X)$ is not $m$-conflict-free, against IH. \fbox{$\cff{m}(\cfmn{m}{n}^k(X)) \subseteq \cff{m}(X)$} follows from the first claim by the antitonicity of $\cff{m}$ (Fact \ref{fact:properties_dn}). \end{proof} That is, each stage $\cfmn{m}{n}^k(X)$ in the stream of iteration of the graded defense function from set $X$ is an $m$-conflict-free set. \begin{lemma} \label{theorem:approxim81} Let $\Frame$ be a finitary attack graph and $X \subseteq \mathcal{A}$ be such that $X \subseteq \cfmn{m}{n}(X)$ and $X \subseteq \cff{m}(X)$, and $m,n$ be two positive integers such that $n \geq m$: \begin{align} \lfp_X.\cfmn{m}{n} & = \bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) \label{eq:below1} \\ \gfp_X.\cfmn{n}{m}& = \bigcap_{0 \leq k < \omega} \cfmn{n}{m}^k(\cff{n}(X)) \label{eq:above1} \end{align} \end{lemma} \begin{proof} \fbox{\eqref{eq:below1}} \fbox{First}, we prove that $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$ is a fixpoint by the following series of equations: \begin{align*} \cfmn{m}{n} \left( \bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) \right) & = \bigcup_{0 \leq k < \omega}\cfmn{m}{n}(\cfmn{m}{n}^k(X)) \\ & = \bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) \end{align*} where the first equation holds by the $\omega$-continuity of $\cfmn{m}{n}$ (Fact \ref{fact:graded_continuous}) and the second by the fact that, since $X$ is $mmn$-admissible by assumption and $\cfmn{m}{n}$ is monotonic (Fact \ref{fact:properties_dn}) we have that: $X = \cfmn{m}{n}^0(X) \subseteq \cfmn{m}{n}^k(X) \subseteq \cfmn{m}{n}^{k+1}(X)$ for any $k$ s.t. $0 \leq k < \omega$.
\fbox{Second}, we prove that $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$ is indeed the least fixpoint containing $X$. Suppose, towards a contradiction, that there exists a fixpoint $Y$ s.t. $X \subseteq Y = \cfmn{m}{n}(Y) \subset \bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$. Since $Y$ is a fixpoint, $Y = \cfmn{m}{n}^k(Y)$ for any $k$ s.t. $0 \leq k < \omega$. But, by the monotonicity of $\cfmn{m}{n}$ (Fact \ref{fact:properties_dn}), $X \subseteq Y$ implies $\cfmn{m}{n}^k(X) \subseteq \cfmn{m}{n}^k(Y) = Y$ for any such $k$, and hence $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) \subseteq Y$. Contradiction. \noindent \fbox{\eqref{eq:above1}} The proof is similar to the previous case. \fbox{First}, we prove that $\bigcap_{0 \leq k < \omega} \cfmn{n}{m}^k(\cff{n}(X))$ is a fixpoint, through the series of equations \begin{align*} \cfmn{n}{m} \left( \bigcap_{0 \leq k < \omega} \cfmn{n}{m}^k(\cff{n}(X)) \right) & = \bigcap_{0 \leq k < \omega}\cfmn{n}{m}(\cfmn{n}{m}^k(\cff{n}(X))) \\ & = \bigcap_{0 \leq k < \omega} \cfmn{n}{m}^k(\cff{n}(X)) \end{align*} which hold by the $\omega$-continuity of $\cfmn{n}{m}$ (Fact \ref{fact:graded_continuous}) and by the fact that, since $X$ is $mmn$-admissible by assumption, and $\cfmn{n}{m}$ is monotonic (Fact \ref{fact:properties_dn}), $\cff{n}(X) = \cfmn{n}{m}^0(\cff{n}(X))$ $\supseteq$ $\cfmn{n}{m}^k(\cff{n}(X))$ for any $k$ s.t. $0 \leq k < \omega$. The latter property holds because $X$ is assumed to be such that $X \subseteq \cfmn{m}{n}(X)$. By the antitonicity of $\cff{n}$ and the interdefinability of $\cfmn{m}{n}$ as $\cff{m} \circ \cff{n}$ (Fact \ref{fact:properties_dn}) we therefore have that $\cff{n}(\cfmn{m}{n}(X)) = \cfmn{n}{m}(\cff{n}(X)) \subseteq \cff{n}(X)$ from which it follows that $\cfmn{n}{m}(\cff{n}(X)) \subseteq \cff{n}(X)$. A simple induction on $k$ then establishes the claim. \fbox{Second}, it remains to be proven that $\bigcap_{0 \leq k < \omega} \cfmn{n}{m}^k(\cff{n}(X))$ is indeed the largest fixpoint of $ \cfmn{n}{m}$ contained in $\cff{n}(X)$.
As in the previous case, we proceed towards a contradiction. Suppose there exists $Y$ s.t.: $\bigcap_{0 \leq k < \omega} \cfmn{n}{m}^k(\cff{n}(X)) = \bigcap_{0 \leq k < \omega} \cff{n}(\cfmn{m}{n}^k(X)) \subset Y = \cfmn{n}{m}(Y) \subseteq \cff{n}(X)$. There must therefore exist an integer $k$ such that, as a consequence of Lemma \ref{lemma:preserve_cf1}, $X \subseteq \cfmn{m}{n}^k(X) \subseteq \cff{n}(\cfmn{m}{n}^k(X)) \subset Y$. By the antitonicity of $\cff{n}$ and again by the interdefinability of $\cf{}$ and $\cff{}$ (Fact \ref{fact:properties_dn}), and since $Y$ is taken to be a fixpoint of $\cfmn{n}{m}$, it follows that $\cff{n} (Y) = \cff{n}(\cfmn{n}{m}(Y)) = \cfmn{m}{n}(\cff{n}(Y))$. Then, from the fact that $ \cff{n}(\cfmn{m}{n}^k(X)) \subset Y$, and the fact that $\cff{m}$ is antitonic (Fact \ref{fact:properties_dn}) we have that $\cff{n} (Y) = \cff{m}(\cff{n}(\cfmn{m}{n}^k(X))) = \cfmn{m}{n}(\cfmn{m}{n}^k(X))$ which is a subset of $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$. So we have that $\cff{n} (Y)$ is also a fixpoint of $\cfmn{m}{n}$, it contains $X$ and it is included in $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$, which, by the previous claim, we know to be the smallest fixpoint of $\cfmn{m}{n}$ containing $X$. Contradiction. \end{proof} The content of Lemmas \ref{lemma:preserve_cf1} and \ref{theorem:approxim81} underpinning the construction of the two fixpoints is depicted in Figure \ref{figure:decomposition}. Notice that, as in the standard case of $\cfmn{1}{1}$ and $\cff{1}$, the two streams can be generated by the indefinite iteration of the application of $\cff{n}$ followed by $\cff{m}$. The assumption that $n \geq m$ guarantees that each set of arguments $\cfmn{m}{n}^k(X)$ in the lower stream remains included in the set of arguments towards which it is $m$-neutral; that is, it remains $m$-conflict-free (and a fortiori $n$-conflict-free).
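Both streams, and the identity $\cff{n}(\cfmn{m}{n}(X)) = \cfmn{n}{m}(\cff{n}(X))$ behind the reversal of $m$ and $n$, can be checked mechanically. The following self-contained sketch uses an assumed toy chain $AF$ ($d \to c \to b \to a$, where $x \to y$ reads "$x$ attacks $y$"); the helper names are ours:

```python
def neutrality(A, att, X, l):
    """cff_l(X): arguments attacked by fewer than l members of X."""
    return {a for a in A if sum((b, a) in att for b in X) < l}

def defense(A, att, X, m, n):
    """cfmn_mn(X) = cff_m(cff_n(X))."""
    return neutrality(A, att, neutrality(A, att, X, n), m)

# Assumed toy chain AF: d -> c -> b -> a.
A = {"a", "b", "c", "d"}
att = {("d", "c"), ("c", "b"), ("b", "a")}

# Check cff_n(cfmn_mn(X)) = cfmn_nm(cff_n(X)) on a sample of sets and parameters.
for m in (1, 2):
    for n in (1, 2):
        for X in (set(), {"d"}, {"b", "d"}, set(A)):
            lhs = neutrality(A, att, defense(A, att, X, m, n), n)
            rhs = defense(A, att, neutrality(A, att, X, n), n, m)
            assert lhs == rhs

# Lower and upper streams from the (trivially admissible) empty set; with
# m = n = 1 the operators d_mn and d_nm coincide.
lo, hi = set(), neutrality(A, att, set(), 1)
for _ in range(len(A) + 1):       # enough iterations for a finite graph
    lo = defense(A, att, lo, 1, 1)  # non-decreasing stream, tends to lfp
    hi = defense(A, att, hi, 1, 1)  # non-increasing stream, tends to gfp
```

On this graph the two limits coincide on $\{b, d\}$, and applying $\cff{n}$ to the lower limit returns the upper one, matching the identity $\cff{n}(\lfp_X.\cfmn{m}{n}) = \gfp_X.\cfmn{n}{m}$ displayed in Figure \ref{figure:decomposition}.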
\begin{remark} Like for the proof of Lemma \ref{theorem:approxim8}, finitariness plays an essential role in the proof of Lemma \ref{theorem:approxim81}. However, also in this case the assumption can be lifted and the proof could proceed using transfinite induction (cf. Remark \ref{remark:ordinal}). \end{remark} \subsubsection{Constructing Graded Complete and Grounded Extensions} We now move on to show how the above results provide constructive proofs of existence of graded semantics. We will focus on finitary graphs, but it should be clear that the non-finitary case can be handled by ordinal induction.\footnote{Cf. Remark \ref{remark:ordinal}.} \begin{theorem} \label{fact:smallest_complete} Let $\Delta$ be a finitary $AF$, $X \subseteq \mathcal{A}$ be such that $X \subseteq \cff{\ell}(X)$ and $X \subseteq \cfmn{m}{n}(X)$, with $\ell, m$ and $n$ positive integers such that $n \geq m$ and $\ell \geq m$. Then $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$ is the smallest $\ell mn$-complete extension containing $X$. \end{theorem} \begin{proof} By Lemma \ref{theorem:approxim81}, we know that $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) = \lfp_X.\cfmn{m}{n}$, that is, the smallest fixpoint of $\cfmn{m}{n}$ that contains $X$. By Lemma \ref{lemma:preserve_cf1} we know that this set is $m$-conflict-free, that is, $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) \subseteq \cff{m}(\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X))$. Since $\ell \geq m$ by assumption, by Fact \ref{fact:accrual_relations} and the transitivity of $\subseteq$ we obtain that $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) \subseteq \cff{\ell}(\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X))$. It therefore follows that $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$ is a fixpoint of $\cfmn{m}{n}$, it is $\ell$-conflict-free and it is the smallest such set containing $X$. As claimed, it is therefore the smallest $\ell mn$-complete extension (Definition \ref{table:accrual-sensitive}) containing $X$.
\end{proof} \begin{corollary} \label{fact:grounded} Let $\Delta$ be a finitary $AF$, $n \geq m$ and $\ell \geq m$. Then $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(\emptyset)$ is the $\ell mn$-grounded extension of $\Delta$. \end{corollary} So, given an $\ell mn$-admissible set $X$ (with $n \geq m$ and $\ell \geq m$), the smallest $\ell mn$-complete extension containing $X$ is the least fixpoint of $\cfmn{m}{n}$ which contains $X$ ($\lfp_X.\cfmn{m}{n}$), and the $\ell mn$-grounded extension is simply the smallest fixpoint of $\cfmn{m}{n}$ ($\lfp.\cfmn{m}{n}$). \subsubsection{Constructing Graded Preferred Extensions} Theorem \ref{fact:smallest_complete} implies that if we choose a `large enough' $\ell mn$-admissible set $X$, in the sense that such a set can reach any other argument through a chain of attacks, then the indefinite iteration of $\cfmn{m}{n}$ will yield an $\ell mn$-preferred extension (when $n \geq m$ and $\ell \geq m$). \begin{fact} \label{fact:prf} Let $\Delta$ be a finitary $AF$, $X \subseteq \mathcal{A}$ be such that $X \subseteq \cff{\ell}(X)$ and $X \subseteq \cfmn{m}{n}(X)$, and $n \geq m$ and $\ell \geq m$. Assume furthermore that $\mathcal{A} \subseteq \set{a \mid \exists b \in X: b \ar^+ a}$. Then $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$ is the $\ell mn$-preferred extension of $\Delta$ that contains $X$. \end{fact} \begin{proof} By Theorem \ref{fact:smallest_complete} we know that $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$ is the smallest $\ell mn$-complete extension containing $X$. It needs to be shown that there exists no $\ell mn$-admissible set $Y$ such that $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) \subset Y$. Suppose, towards a contradiction, that this is the case. Then $X \subseteq \bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) \subset Y \subseteq \cfmn{m}{n}(Y) \subseteq \set{a \mid \exists b \in X: a \al^+ b}$. It follows that there exists $x \in Y \subseteq \cfmn{m}{n}(Y)$ such that $x \not\in \cfmn{m}{n}^k(X)$ for every positive integer $k$.
By assumption, there exists a finite path of attacks from $X$ to $x$. So let $X_1$ be the smallest set of arguments that $mn$-defends $x$, $X_2$ the smallest set of arguments that $mn$-defends the arguments in $X_1$ and so on. It follows that there exists a $j$ such that $X_j \subseteq X$, otherwise $x$ would not be $mn$-defended. It therefore follows that $x \in \bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$. Contradiction. \end{proof} \subsubsection{Constructing Graded Stable Extensions} We finally arrive at a characterization of graded stable extensions as limits of streams generated by the graded neutrality function: \begin{fact} \label{theorem:approximate_stable} Let $\Delta$ be a finitary $AF$, $X \subseteq \mathcal{A}$ be such that $X \subseteq \cff{\ell}(X)$ and $X \subseteq \cfmn{m}{n}(X)$, and $n \geq m$ and $\ell \geq m$. Then, $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$ is the smallest $\ell mn$-stable extension containing $X$ if and only if $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) = \bigcap_{0 \leq k < \omega} \cff{n}(\cfmn{m}{n}^k(X))$. \end{fact} \begin{proof} \rightleft Assume $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) = \bigcap_{0 \leq k < \omega} \cff{n}(\cfmn{m}{n}^k(X))$. It therefore follows that $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) = \cff{n}(\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X))$. We conclude that $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X)$ is a fixpoint of $\cff{n}$, that is, by Definition \ref{table:accrual-sensitive}, an $\ell mn$-stable extension. \leftright Straightforward. \end{proof} So, as in the standard case, graded stable extensions are the result of the convergence of the upper and lower streams of iteration of the graded defense function (cf. Figure \ref{figure:decomposition}). As in the standard case, such convergence is not guaranteed in general.
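The convergence criterion of the last fact can be illustrated on two small assumed graphs: on a chain the two streams meet and yield a stable extension, while on a 3-cycle (which, as is well known, admits no stable extension) they do not. Both graphs are hypothetical toy examples, and the helper names are ours; parameters are the standard $\ell = m = n = 1$:

```python
def neutrality(A, att, X, l):
    """cff_l(X): arguments attacked by fewer than l members of X."""
    return {a for a in A if sum((b, a) in att for b in X) < l}

def defense(A, att, X, m, n):
    """cfmn_mn(X) = cff_m(cff_n(X))."""
    return neutrality(A, att, neutrality(A, att, X, n), m)

def stream_limits(A, att, X, m, n):
    """Limits of the lower stream d^k(X) and the upper stream cff_n(d^k(X));
    on a finite graph both stabilise within |A| iterations."""
    lo = set(X)
    for _ in range(len(A) + 1):
        lo = defense(A, att, lo, m, n)
    return lo, neutrality(A, att, lo, n)

# Assumed chain AF d -> c -> b -> a: the streams converge.
A1 = {"a", "b", "c", "d"}
att1 = {("d", "c"), ("c", "b"), ("b", "a")}
lo1, up1 = stream_limits(A1, att1, set(), 1, 1)
# lo1 and up1 coincide: the limit is a fixpoint of cff_1, i.e. stable.

# Assumed 3-cycle AF a -> b -> c -> a: the streams do not converge.
A2 = {"a", "b", "c"}
att2 = {("a", "b"), ("b", "c"), ("c", "a")}
lo2, up2 = stream_limits(A2, att2, set(), 1, 1)
# lo2 is empty while up2 is the whole graph: no stable extension is reached.
```

The chain's limit $\{b, d\}$ satisfies $X = \cff{1}(X)$ and is indeed its stable extension; on the 3-cycle the lower stream stalls at $\emptyset$ below the upper stream, witnessing the failure of convergence noted above.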
\subsection{On the constraints $n \geq m$ and $\ell \geq m$} Theorem \ref{fact:smallest_complete} assumed that the three parameters of graded admissibility $\ell, m$ and $n$ are in the relation $n \geq m$ and $\ell \geq m$. Notice that Dung's standard semantics trivially meets this constraint with $\ell = m = n = 1$. This assumption plays a crucial role in the proofs of the above results and one can in fact show that graded semantics for a choice of parameters failing the constraint may not exist in some frameworks.\footnote{This is a situation fully analogous to the case of stable extensions in the standard Dung framework, where they are not always guaranteed to exist.} \begin{example} Let $\Delta$ be the $AF$ consisting of the arguments and attacks in Figure \ref{Motivating1}iii). Then $\Delta$ has no $221$-grounded extension. To establish this let us try to construct such an extension with the construction of Theorem \ref{fact:smallest_complete}: \[ \cfmn{2}{1}(\emptyset) = \set{b_3, c_3, d_3, e_3}, \cfmn{2}{1}(\set{b_3, c_3, d_3, e_3}) = \set{a_3, b_3, c_3, d_3, e_3} \] So the whole set of arguments $\mathcal{A}$ in that framework constitutes the smallest fixpoint of $\cfmn{2}{1}$. Such a set is, however, not $2$-conflict-free, that is $\mathcal{A} \not\subseteq \cff{2}(\mathcal{A})$. In fact no $221$-complete extensions exist in this framework, as these would have to include the set of unattacked arguments $\set{d_3, e_3}$ which, it is easy to see, $21$-defends all arguments.\footnote{See Example \ref{example:inexistence2} later for another such example.} \end{example} Intuitively, the constraint imposes two properties on graded admissibility: first, that the level $m$ of failure of defense which we are willing to tolerate, should not exceed the number $n$ of counter-attackers we want for such a defense to hold; second, that such a level $m$ of failure of defense, should not exceed the level $\ell$ of conflict-freeness that we are willing to tolerate. 
So to guarantee existence of a graded semantics tolerating arguments for which $m$ counter-attacks fail, one has to set the number $n$ of required counter-attackers higher than or equal to $m$; and similarly, one has to set the level of tolerance $\ell$ to internal attacks at least as high as $m$.\footnote{Consider a $232$-admissible extension $X$ (where, in violation of the constraints, $\ell = 2$, $m = 3$, $n = 2$), and some $x \in X$ such that $x$ is attacked by two undefended attackers, in keeping with the $m$ parameter indicating that $x$ can be considered acceptable up to a cumulative threshold of two fully acceptable (i.e., two undefended) attackers of $x$. This level of tolerance with respect to the acceptability of $x$ is then at odds with the more restrictive toleration of the co-acceptability of $x$ with a maximum of one attacker $y \in X$ on $x$ (as indicated by $\ell = 2$). Moreover, the fact that a third attacker $z$ of $x$ need only be counter-attacked by two arguments (as indicated by $n = 2$), means that one can still (given $m = 3$) consider $z$ to be an acceptable argument by the above reasoning applied to $x$, which in turn implies that the tolerated cumulative threshold on attackers of $x$ has been exceeded. Precisely when this sort of mismatch between levels of tolerance of acceptability and co-acceptability occurs, graded semantics may fail to exist.
} \subsection{Basic Properties of Graded Semantics} A direct consequence of Definition \ref{table:accrual-sensitive} is that graded extensions are in the same logical relations as Dung's standard extensions: \begin{fact} For any AF $\Delta = \tuple{\mathcal{A}, \rightarrow}$, and integers $\ell, m$ and $n$, graded semantics are related according to the following diagram: \[ \begin{diagram} \set{X \subseteq \mathcal{A} \mid X \subseteq \cfmn{m}{n}(X)} \ \ \ \ & & & & \ \ \ \ \set{X \subseteq \mathcal{A} \mid X \subseteq \cff{\ell}(X)} \\ & \luLine & & \ruLine & \\ \set{X \subseteq \mathcal{A} \mid \cfmn{m}{n}(X) \subseteq X} \ \ \ \ & & \adm_{\ell mn}(\Delta) & & \\ & \luLine & \uLine & & \\ & & \cmp_{\ell mn}(\Delta) & & \\ & \ruLine & \uLine & & \\ \set{\grn_{\ell mn}(\Delta)} & & \prf_{\ell mn}(\Delta) & & \ \ \ \ \set{X \subseteq \mathcal{A} \mid \cff{n}(X) \subseteq X} \\ & & \uLine & \ruLine & \\ & & \stb_{\ell mn}(\Delta) & & \\ \end{diagram} \] where two nodes are connected when the lower node in the pair is a subset ($\subseteq$) of the upper node in the pair. \end{fact} So all $\ell mn$-extensions are $mn$-admissible and $\ell$-conflict-free; $\ell mn$-grounded, -stable and -preferred are all $\ell mn$-complete extensions; and $\ell mn$-stable extensions are $\ell mn$-preferred. We will further illustrate Definition \ref{table:accrual-sensitive}, and the associated constructions, with a series of examples later in Section \ref{section:examples}.
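For a small framework the inclusions in the diagram can be verified exhaustively. The sketch below enumerates all subsets of an assumed two-argument mutual-attack $AF$ ($a$ and $b$ attack each other, a hypothetical toy example) and computes the graded sets directly from Table \ref{table:graded}; with $\ell = m = n = 1$ this reproduces the standard Dung picture:

```python
from itertools import combinations

def neutrality(A, att, X, l):
    """cff_l(X): arguments attacked by fewer than l members of X."""
    return {a for a in A if sum((b, a) in att for b in X) < l}

def defense(A, att, X, m, n):
    """cfmn_mn(X) = cff_m(cff_n(X))."""
    return neutrality(A, att, neutrality(A, att, X, n), m)

def subsets(A):
    xs = list(A)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# Assumed mutual-attack AF: a <-> b.
A = {"a", "b"}
att = {("a", "b"), ("b", "a")}
l = m = n = 1

adm = [X for X in subsets(A)
       if X <= neutrality(A, att, X, l) and X <= defense(A, att, X, m, n)]
cmp_ = [X for X in subsets(A)
        if X <= neutrality(A, att, X, l) and X == defense(A, att, X, m, n)]
# For l = m = n = 1 the stable condition of the Table reduces to X = cff_1(X).
stb = [X for X in cmp_ if X == neutrality(A, att, X, n)]
grn = min(cmp_, key=len)                                # least complete set
prf = [X for X in cmp_ if not any(X < Y for Y in cmp_)]  # maximal complete sets
```

The run confirms the diagram on this instance: the grounded extension is $\emptyset$, the preferred (and stable) extensions are $\{a\}$ and $\{b\}$, and $\stb \subseteq \prf \subseteq \cmp \subseteq \adm$ holds as sets of extensions.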
\begin{fact} For any $AF$ $\Delta$, and integers $\ell, m$ and $n$ such that $n \geq m$ and $\ell \geq m$: \begin{align} \set{x \mid \overline{x} = \emptyset} & \subseteq \grn_{\ell mn}(\Delta) \label{eq:grn} \\ \bigcup S_{\ell mn}(\Delta) & \subseteq \bigcup S_{\ell' m'n'}(\Delta) \label{eq:lmn0} \\ \bigcap S_{\ell mn}(\Delta) & \subseteq \bigcap S_{\ell' m'n'}(\Delta) \label{eq:lmn} \end{align} for $S \in \set{\mathit{grounded}, \mathit{preferred}, \mathit{stable}}$ and any $\ell', m', n'$ such that $n' \geq m'$ and $\ell' \geq m'$ and $\ell' \geq \ell$, $m' \geq m$ and $n' \leq n$. \end{fact} \begin{proof} \fbox{\eqref{eq:grn}} It is easy to see that for any $\ell, m$ and $n$, if $\overline{x} = \emptyset$ then $x \in \cfmn{m}{n}(\emptyset)$ and therefore, by Definition \ref{table:accrual-sensitive}, $x \in \grn_{\ell mn}(\Delta)$. \fbox{\eqref{eq:lmn0} \& \eqref{eq:lmn}} Both claims follow from Definition \ref{table:accrual-sensitive} by Fact \ref{fact:accrual_relations}. The constraint on $\ell', m'$ and $n'$ is necessary to guarantee existence under such parameters. \end{proof} \subsection{Some Examples} \label{section:examples} \begin{example} \label{example:inexistence2} Referring to Figure \ref{MotivatingSem}'s $AF$ below: \[ \cfmn{1}{2}(\emptyset) = \set{d,f,g}; \cfmn{1}{2}(\set{d,f,g}) = \set{d,f,g}. \] So $\set{d,f,g}$ is the smallest fixpoint of $\cfmn{1}{2}$ and it is $1$-conflict-free (as well as $\ell$-conflict-free for every $\ell \geq 1$). It is therefore also the $112$-grounded extension (as well as the $\ell 12$-grounded extension for every $\ell \geq 1$) of the given framework.
Consider now: \[ \begin{array}{c} \cfmn{2}{1}(\emptyset) \\ \rotatebox{90}{=} \\ \set{c,d,f,g} \end{array} \ \begin{array}{c} \cfmn{2}{1}(\set{c,d,f,g}) \\ \rotatebox{90}{=} \\ \set{a,b,c, d, f, g} \end{array} \ \begin{array}{c} \cfmn{2}{1}(\set{a, b, c,d,f,g}) \\ \rotatebox{90}{=} \\ \set{a,b,c, d, f, g} \end{array} \] So $\set{a,b,c, d, f, g}$ is the smallest fixpoint of $\cfmn{2}{1}$. However, it is not $2$-conflict-free (argument $a$ in the set is attacked by two other arguments in the set) and therefore it is not a $221$-grounded extension. To make it a graded grounded extension one has to tolerate more internal attacks, setting for instance $\ell = 3$. The set is indeed a $321$-grounded extension. \end{example} \begin{example} Consider now the central graph (3-cycle) in Figure \ref{figure:graphs}. We know that such a graph has no grounded extension and the only complete extension is $\emptyset$. But one can slightly relax the defense and conflict-freeness requirements to obtain a non-empty graded complete extension. Set $\ell = 2$, $m = 2$ and $n = 1$. We can construct the $221$-grounded extension of this framework as follows: \[ \cfmn{2}{1}(\emptyset) = \set{a,b,c}, \cfmn{2}{1}(\set{a,b,c}) = \set{a,b,c} \] So, $\set{a,b,c}$ is the smallest fixpoint of the graded defense function $\cfmn{2}{1}$ and such a set is clearly $2$-conflict-free (that is, $\set{a,b,c} \subseteq \cff{2}(\set{a,b,c})$). It is therefore also a $221$-preferred and -stable extension. \end{example} \section{Ranking Arguments by Graded Semantics} \label{Sec:Ranking} The theory developed in the previous sections offers a novel perspective on how arguments can be compared from an (abstract) argumentation theoretic point of view.
In this section we describe two natural ways in which the theory of graded acceptability can be applied to define orderings on arguments: a `contextual' way, whereby, given a fixed set of arguments, one can rank arguments by how well they are iteratively defended by the given set; and an `absolute' way, whereby arguments are compared based on their acceptability under given graded semantics. The definitions introduced are illustrated by means of several examples in this section and later in Section \ref{Sec:Applications}. \subsection{Contextual Approach: Ranking by Quality of Defense} The same recursive principles underpinning the standard Dung semantics can be used to characterise how strongly the set of arguments defended by a given set defends another set, and how the latter set defends yet another set, and so forth. That is to say, given a set $X$---the context---one can iteratively apply $\cfmn{m}{n}$ to $X$. Iterated graded defense can thus rank arguments with respect to how well a given set $X$ defends them: \begin{definition} \label{definition:degree} Let $\Delta = \tuple{\mathcal{A}, \ar}$ be a finitary $AF$ and $X \subseteq \mathcal{A}$. For $a,b \in \mathcal{A}$, we define that $a$ is `{\em at least as justified as}' $b$ w.r.t.\ $X$ as follows: \begin{align*} a \succeq^X b & \IFF \forall m,n > 0 \IF b \in \bigcup_{0 \leq k < \omega}\cfmn{m}{n}^k(X) \THEN a \in \bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(X) \end{align*} The strict part $\succ^X$ of the above relation is defined in the obvious way. \end{definition} As usual, $a \approx^X b$ denotes that $a \succeq^X b$ and $b \succeq^X a$. When we want to make the underlying $AF$ explicit we use the heavier notation $a \succeq_\Delta^X b$. The key intuition behind this definition is the following. Take two arguments $a$ and $b$, and some fixed set $X$. Is it the case that every time $b$ is defended through the iteration of some graded defense function, $a$ also is? If that is the case, it means that (w.r.t.
$X$) every standard of defense met by $b$ is also met by $a$, but $a$ may satisfy yet stronger ones (recall Fact \ref{Fact:Ordering}). It is easy to see that $\succeq^X$ is a partial order, for any $X \subseteq \mathcal{A}$. \begin{example}\label{Ex:DEJ} Let us rank, by iterated defense w.r.t. $\emptyset$, the arguments in Figure \ref{Motivating1}i-iv). Applying Definition \ref{definition:degree} we obtain the partial order shown in the Hasse diagram in Figure \ref{Motivating1}vii). Note that we assume one single $AF$ $\Delta$ consisting only of the arguments and attacks shown in Figures \ref{Motivating1}i-iv). Under the standard Dung semantics, all arguments in $\bigcup_{i=1}^4 X_i$ are in the iterated application of $\cf{\Delta}$ to $\emptyset$ (i.e., in the grounded extension of $\Delta$). However, we can now differentiate amongst these arguments. As expected the best arguments are those with no attackers. Second-best is $a2$ whose attackers are all counter-attacked by two un-attacked arguments. Third-best is $a1$, since $a2 \in \bigcup_{0 \leq k < \omega} \cfmn{1}{2}^k(\emptyset)$ but $a1 \notin \bigcup_{0 \leq k < \omega} \cfmn{1}{2}^k(\emptyset)$ (and so $a1 \nsucceq^{\emptyset} a2$), but $\forall m,n$: if $a1 \in \bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(\emptyset)$ then $a2 \in \bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(\emptyset)$ (and so $a2 \succeq^{\emptyset} a1$). We then have that $a3$ and $a4$ are incomparable (recall Section \ref{Sec:ExtendingPartialOrder}). Formally, $a3 \in \bigcup_{0 \leq k < \omega} \cfmn{3}{3}^k(\emptyset)$ and $a3 \notin \bigcup_{0 \leq k < \omega} \cfmn{2}{2}^k(\emptyset)$, but $a4 \notin \bigcup_{0 \leq k < \omega} \cfmn{3}{3}^k(\emptyset)$ and $a4 \in \bigcup_{0 \leq k < \omega} \cfmn{2}{2}^k(\emptyset)$. Critically, we can also differentiate amongst the rejected arguments (those not in the Dung grounded extension).
Thus $b1, b3, c3, c4$ (each of which is attacked by one argument) are ranked above $b2, b4, d4$ (each of which is attacked by two arguments). \end{example} \subsection{Absolute Approach: Ranking by Quality of Justification} More generally, for a given semantics, we can rank the justification status of an argument with respect to a given framework, exactly as we did in Definition \ref{definition:degree} for iterated graded defense: \begin{definition}[Ranking arguments by graded semantics]\label{definition:degree-semantics} Let $\Delta = \tuple{\mathcal{A}, \ar}$ be an $AF$. For $a,b \in \mathcal{A}$, and for $S \in \{$$\ell mn$-grounded, $\ell mn$-stable, $\ell mn$-preferred$\}$:\footnote{Recall we are working with the sceptical notion of justifiability throughout the paper.} \begin{align*} a \succeq^S b & \IFF \forall \ell, m,n > 0, \IF b \mbox{ is justified w.r.t. } S \THEN a \mbox{ is justified w.r.t. } S. \end{align*} The strict part $\succ^S$ of the above relation is defined in the obvious way. \end{definition} As usual, $a \approx^S b$ denotes that $a \succeq^S b$ and $b \succeq^S a$. Again, it is easy to see that $\succeq^S$ is a partial order. When we want to make the underlying $AF$ explicit we use the heavier notation $a \succeq_\Delta^S b$. \smallskip We illustrate how the ranking of arguments by graded semantics can be applied to arbitrate between arguments that are credulously but not sceptically justified under Dung's standard semantics. \begin{figure}[t] \centering \includegraphics[width=3in]{Pics/motivatingParamSem.pdf} \caption{An $AF$ with two preferred extensions \label{MotivatingSem}} \end{figure} Consider the $AF$ $\Delta$ in Figure \ref{MotivatingSem} that has two preferred extensions---$X1$ and $X2$---under the standard Dung semantics. This equates with $X1$ and $X2$ both being $111$-preferred extensions in the graded terminology. Hence $a$ and $b$ are credulously justified, while only $f$, $g$ and $d$ are sceptically justified.
Typically, one would then rely on exogenously given preferences \cite{AmCay,Modgil2013361} to arbitrate between such arguments. However, we can make use of the endogenously derived ranking of arguments yielded by our graded semantics to arbitrate amongst $a$ and $b$. Intuitively, $b$ is more strongly defended in $X2$ than $a$ is defended in $X1$, and the graded semantics can make use of this information in the standard framework to arbitrate in favour of $b$ over $a$. Specifically, only $X2$ is a $222$-admissible (and hence a subset of a $222$-preferred\footnote{The $222$-preferred extension is $X2 \cup \{a\}$.}) extension since only one attacker ($a$) of $b$ is defended by strictly less than two arguments, whereas $X1$ is not a $222$-admissible ($X1$ is not a subset of a $222$-preferred) extension since both attackers of $a$ ($b$, $c$) are defended by strictly less than two arguments. Indeed, $\forall \ell, m,n > 0$: $\IF a$ is justified w.r.t. $\ell mn$-preferred $\THEN b$ is justified w.r.t. $\ell mn$-preferred, whereas \emph{it is not the case} (as illustrated above) that $\forall \ell, m,n > 0$: $\IF b$ is justified w.r.t. $\ell mn$-preferred $\THEN a$ is justified w.r.t. $\ell mn$-preferred. Hence $b \succ_\Delta^{\ell mn\text{-preferred}} a$, so arbitrating in favour of $b$ over $a$. It is worth noting that the contextual and absolute approaches to argument ranking are related as follows. \begin{fact} Let $\Delta$ be a finitary $AF$, $n \geq m$ and $\ell \geq m$. Then: \begin{align*} a \succeq^\emptyset b & \IFF a \succeq^{\ell mn\text{-grounded}} b \end{align*} \end{fact} \begin{proof} The claim is a direct consequence of Corollary \ref{fact:grounded} and Definitions \ref{definition:degree} and \ref{definition:degree-semantics}. \end{proof} Further properties of rankings based on graded semantics will be discussed in detail later (Section \ref{Sec:RelatedWork}), in the context of other existing approaches to argument rankings in argumentation.
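For finite $AF$s the contextual ranking of Definition \ref{definition:degree} can be approximated computationally by bounding the grades that are checked. The sketch below reuses the graded defense function described earlier (repeated so the snippet is self-contained); the bound \texttt{max\_grade} and the cap on iteration depth are assumptions of this illustration, since the definition quantifies over all $m, n$.

```python
# Hedged sketch of the contextual ranking a >=^X b: every (m, n)
# standard of iterated defense met by b is also met by a, checked
# only up to max_grade (an assumption of this sketch).

def graded_defense(args, attacks, X, m, n):
    out = set()
    for a in args:
        attackers = {b for (b, c) in attacks if c == a}
        undefended = {b for b in attackers
                      if len({x for x in X if (x, b) in attacks}) < n}
        if len(undefended) < m:
            out.add(a)
    return out

def iterate_union(args, attacks, X, m, n, steps=50):
    """Approximate the union of the iterates d_{mn}^k(X) for k < steps."""
    acc, cur = set(X), set(X)
    for _ in range(steps):
        cur = graded_defense(args, attacks, cur, m, n)
        acc |= cur
    return acc

def at_least_as_justified(args, attacks, a, b, X=frozenset(), max_grade=4):
    """a >=^X b, with the (m, n) quantification bounded by max_grade."""
    for m in range(1, max_grade + 1):
        for n in range(1, max_grade + 1):
            reach = iterate_union(args, attacks, X, m, n)
            if b in reach and a not in reach:
                return False
    return True

# Toy AF: c unattacked, b -> a. The unattacked b ranks strictly
# above the attacked a w.r.t. the empty context.
args, attacks = {'a', 'b', 'c'}, {('b', 'a')}
print(at_least_as_justified(args, attacks, 'b', 'a'))  # True
print(at_least_as_justified(args, attacks, 'a', 'b'))  # False
```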
\section{Instantiating Graded Semantics}\label{Sec:Applications} We have thus far focussed on graded defense, acceptability and semantics for abstract argumentation frameworks. We now illustrate these notions by reference to instantiated frameworks. In particular, \textit{ASPIC+} \cite{Modgil2013361, hp10aspicJAC} provides a general framework for specifying logical instantiations of Dung frameworks. It has been shown to formalise human-orientated accounts of argumentation-based reasoning that make use of Schemes and Critical Questions \cite{Wal96}, and provide a dialectical characterisation of non-monotonic inference in Brewka's Preferred Subtheories \cite{bre89} and Prioritised Default Logic \cite{PDL} (in \cite{Modgil2013361} and \cite{Young2016}, respectively). In what follows we show how our graded semantics can be applied to both these instantiations. \subsection{Graded Semantics for Instantiations based on Schemes and Critical Questions}\label{SecGradedSchCQ} We use a well-established instantiated argumentation setting to illustrate the usefulness of the theory of graded acceptability for evaluating argument strength. \subsubsection{Schemes and Critical Questions} Schemes and Critical Questions (\emph{SchCQ}) have been developed by the informal logic community, most notably by Walton \cite{Wal96}, and capture stereotypical patterns of argument as deployed in epistemic and practical reasoning. For example, the \emph{argument from expert opinion} scheme states that if $E$ is an expert in domain $D$, and $E$ states that $S$ is true (false), and $S$ is within domain $D$, then (presumably) $S$ is true (false). Echoing our motivation for graded defense and acceptability in Section \ref{Sec:GradedAcceptabilityIntuitions}, Walton emphasises that it is not feasible for reasoning agents to establish beyond doubt the validity of a scheme's \emph{presumptions} if one is to effectively engage in epistemic or practical reasoning.
The presumptive nature of the grounds used to establish a claim means that any given argument instantiating a scheme, does not in and of itself provide grounds for acceptance of the claim as having the status of true justified belief, or being the decision option that indisputably maximises a given objective. Rather, one establishes confidence in the claim sufficient to reason or act on the basis of the claim \cite{Walton2013}. Moreover, each scheme is associated with critical questions that render explicit the presumptive nature of an argument's grounds. For example, `Is $E$ an expert in domain $D$?' and `Is $E$ reliable as a source?'. A natural way to formalise reasoning with argument schemes is to regard them as defeasible inference rules and to regard critical questions as pointers to counter-arguments that may themselves instantiate schemes \cite{m+p14, Verheij2003}. \textit{ASPIC+} arguments are built from such defeasible rules defined over a first order language, and premises that are first order formulae. Arguments can be \emph{rebut} attacked on the conclusions of an argument's defeasible rules, \emph{undermine} attacked on the argument's premises, or \emph{undercut} attacked by challenging the applicability of a defeasible rule (via a naming mechanism for rules such that the attacking argument concludes the negation of the name of the rule in the attacked argument). For simplicity of presentation we will illustrate using rule based formulations of schemes defined over a propositional language. Contravening the notational conventions used thus far, but in line with the standard notation in the instantiations literature, we will in this section use upper case Roman letters $A,B,C,\ldots$ to denote arguments, lower case Roman letters $a, b, c, \ldots$ for propositions, Greek letters $\alpha, \beta, \gamma, \ldots$ for variables in schemes, and notation $[a], [b], [c], \ldots$ to denote subarguments consisting of a single proposition.
We will use the following (abbreviated) schemes and (selected) critical questions, and instantiations of these schemes represented as propositional defeasible inference rules (whose names are shown as subscripts on the defeasible inference symbol $\Rightarrow$): \smallskip \paragraph{Argument from position to know} ($APK$): \begin{itemize} \item Source $\alpha$ is in a position to know about proposition $\gamma$; \item Source $\alpha$ asserts that $\gamma$ is true (false); \item Therefore presumably $\gamma$ is true (false). \end{itemize} \noindent $APK$ critical questions: (APK1) Is $\alpha$ in a position to know about proposition $\gamma$?; (APK2) Is $\alpha$ an honest/trustworthy/reliable source?; (APK3) Did $\alpha$ assert that $\gamma$ is true (false)? \begin{figure}[t] \centering \includegraphics[width=3.6in]{Pics/Arguments.pdf} \caption{\textit{ASPIC+} arguments are upside down trees whose roots are the arguments' claims and whose leaves are the arguments' premises. Notice that $C_1$ asymmetrically undermine attacks $B_1$ on its sub-argument $B_1'$. $C_1$ and $B_1'$ also symmetrically attack each other ($C_1$ undermine attacks $B_1'$ and $B_1'$ rebut attacks $C_1$). \label{Arguments}} \end{figure} \paragraph{Argument from expert opinion} ($AEO$): \begin{itemize} \item $\alpha$ is an expert in domain $\beta$; \item The domain $\beta$ contains proposition $\gamma$; \item $\alpha$ asserts that $\gamma$ is true (false); \item Therefore presumably $\gamma$ is true (false). \end{itemize} \noindent $AEO$ critical questions: (AEO1) How credible an expert is $\alpha$?; (AEO2) Is $\alpha$ an expert in domain $\beta$?; (AEO3) Is $\alpha$ reliable as a source?; (AEO4) Did $\alpha$ assert that $\gamma$ is true (false)?; (AEO5) Is $\gamma$ consistent with what other experts assert?
\subsubsection{A Detailed Example} \begin{example} Assume the premises $b$ = `Blair is in a position to know about whether removing Assad will achieve democracy in Syria'\footnote{Ex UK prime minister Tony Blair was appointed middle east envoy for peace in 2007.} and $c$ = `Blair asserts that removing Assad will achieve democracy in Syria is true', and the rule $b, c \Rightarrow_{r_1} g$ where $g$ denotes `removing Assad will achieve democracy in Syria'. We then have the argument $A_1$ in Figure \ref{Arguments}. Assume furthermore the premises $h$ = `Chilcot is an expert in the domain of Blair's conduct in the Iraq war', $j$ = `the domain Blair's conduct in the Iraq war contains proposition Blair lied about weapons of mass destruction (WMD)', $k$ = `Chilcot asserts that Blair lied about WMD', and the rules $h,j, k \Rightarrow_{r_2} m$ (where $m$ = `Blair lied about WMD'), $m \Rightarrow_{r_3} \neg q$ (where $q$ = `Blair is an honest source'), and $\neg q \Rightarrow_{r_4} \neg r_1$. We then have the argument $B_1$ in Figure \ref{Arguments}, where $B_1$ addresses critical question APK2 and undercuts $A_1$ on the inference $r_1$. \end{example} Intuitively, $B_1$'s claim does not have the status of an incontrovertible belief, since its grounds do not have such status.\footnote{Commentators varyingly interpreted the Chilcot report's investigation into the Iraq war (\url{www.iraqinquiry.org.uk/the-report}) as asserting that Blair did/did not lie.} Hence, one may retain some residual confidence in $A_1$'s claim. Graded semantics allow for this level of granularity.
So, in the $AF$ containing $A_1$ and $B_1$, $A_1$ and $B_1$ are in the $222$-grounded extension.\footnote{All sub-arguments of $A_1$ and $B_1$, i.e., $[b]$, $[c]$, $[h]$, $[j]$, $[k]$, and the sub-arguments of $B_1$ that conclude $m$ and $\neg q$, are also in the $222$-grounded extension (none of these additional arguments attack or are attacked by any other arguments).} Clearly however, both our contextual (Definition \ref{definition:degree}, the natural context being $\emptyset$) and absolute (Definition \ref{definition:degree-semantics}) approaches for ranking arguments would rank $B_1$ above $A_1$. \begin{example}\label{ExTwoVOneAttack} Suppose an additional instance of the $AEO$ scheme constructed from the premises $t$ = `The Arab League is an expert in the domain of Blair's knowledge of middle east affairs'; $u$ = `the domain Blair's knowledge of middle east affairs contains the proposition Blair is an unreliable source'; $v$ = `The Arab League assert that Blair is an unreliable source', and the rules $t, u, v \Rightarrow_{r_5} l$ (where $l$ = `Blair is an unreliable source'), $l \Rightarrow_{r_6} \neg r_1$. We then have argument $B_2$ in Figure \ref{Arguments}, where $B_2$ addresses critical question APK2 (by claiming unreliability rather than dishonesty) and undercuts $A_1$ on the inference $r_1$. We now have two unattacked arguments attacking $A_1$. $A_1$ is now further weakened, and neither $\{A_1,B_1\}$ nor $\{A_1,B_2\}$ is included in $222$-preferred extensions (the threshold of allowing one unattacked attacker on $A_1$ and thus maintaining co-acceptability of $A_1$ with one such attacker, has been exceeded). Abstractly, this equates with the ranking $a1^* \succ^S a1$ (according to Definition \ref{definition:degree-semantics}) in the AF shown in Figure \ref{SchCQAFs}i. \end{example} In the remainder of this section we will focus on $\ell mn$-preferred extensions and assume $S$ to be of such type.
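The membership claims made in these examples all reduce to checks of graded admissibility at the abstract level. As a hedged illustration, the following sketch combines $\ell$-conflict-freeness (each member of $X$ is attacked by fewer than $\ell$ members of $X$) with the graded defense function recalled earlier (repeated here so the snippet is self-contained); the pair encoding of attacks is an assumption of this sketch.

```python
# Sketch of an lmn-admissibility check: X is lmn-admissible iff
# it is l-conflict-free and X is a subset of d_{mn}(X).

def graded_defense(args, attacks, X, m, n):
    out = set()
    for a in args:
        attackers = {b for (b, c) in attacks if c == a}
        undefended = {b for b in attackers
                      if len({x for x in X if (x, b) in attacks}) < n}
        if len(undefended) < m:
            out.add(a)
    return out

def ell_conflict_free(attacks, X, ell):
    """Each member of X has fewer than ell attackers inside X."""
    return all(len({y for y in X if (y, x) in attacks}) < ell for x in X)

def lmn_admissible(args, attacks, X, ell, m, n):
    """l-conflict-free, and every member graded-defended by X itself."""
    return (ell_conflict_free(attacks, X, ell)
            and set(X) <= graded_defense(args, attacks, X, m, n))

# 3-cycle a -> b -> c -> a: {a,b,c} is 221-admissible (one internal
# attack per member is tolerated) but not 121-admissible.
cyc = {('a', 'b'), ('b', 'c'), ('c', 'a')}
print(lmn_admissible({'a', 'b', 'c'}, cyc, {'a', 'b', 'c'}, 2, 2, 1))  # True
print(lmn_admissible({'a', 'b', 'c'}, cyc, {'a', 'b', 'c'}, 1, 2, 1))  # False
```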
\begin{figure}[t] \centering \includegraphics[width=4.2in]{Pics/SchCQAFs.pdf} \caption{$AF$s in which $a1^* \succ^{\emptyset} a1$, illustrating the analysis of arguments in Example \ref{ExTwoVOneAttack} (i), Example \ref{ExReinstatement} (ii), Example \ref{ExReinstatement2Same} (iii), and Example \ref{ExampleTwoVOneReinstatment} (iv). In v) we show the two Dung extensions for Example \ref{ExAccrual}. \label{SchCQAFs}} \end{figure} \begin{example}\label{ExReinstatement} Suppose we now have only $A_1$, $B_1$ (and their sub-arguments) and the argument $C_1$ that cites an interpretation of the report $i$ = `Chilcot did not use the word lie'\footnote{Chilcot did not use the word ``lie''. His report specified that it ``is not questioning'' Mr Blair's fixed belief, but the former Prime Minister deliberately blurred the distinction between what he believed and what he actually knew.} to conclude that Chilcot did not assert that Blair lied about WMD ($\neg k$), so that $C_1$ undermine attacks $B_1$ (on its sub-argument $B_1' = [k]$), and $C_1$ undermine attacks $B_1'$ and $B_1'$ rebut attacks $C_1$ (Figure \ref{Arguments}). In an $AF$ consisting only of $A_1$ and its sub-arguments $A_1' = [b]$ and $A_1'' = [c]$, then for all $\ell,m,n$ the set $\{A_1, A_1', A_1'' \}$ would be the single $\ell mn$-complete extension. However, in the $AF$ that includes $C_1 \rightarrow B_1 \rightarrow A_1$, $C_1 \leftrightarrow B_1'$ (the additional subarguments of $C_1, B_1$ and $A_1$ are not listed here as none of these are involved in attacks) $A_1$ is not in every $\ell mn$-complete extension (for instance, $\{A_1,C_1\}$ is not $112$-admissible). Abstractly, this equates with $a1^* \succ^S a1$ in the $AF$ shown in Figure \ref{SchCQAFs}ii (absolute ranking), and $a1^* \succ^\emptyset a1$ (contextual ranking).
\end{example} We note with interest that the higher ranking for unattacked versus reinstated arguments (illustrated by the above example) is supported by experimental findings reporting that human subjects appear to have higher confidence in claims of arguments that are unattacked, than when those arguments are subsequently attacked and then defended \cite{rahwan:behavioral}. This suggests that our theory of graded acceptability incorporates features of human argumentation.\footnote{We do see the theory of graded acceptability also as a contribution to the long term goal of providing formal frameworks of argumentation accommodating both computational and human argumentation \cite{AddValue}.} \begin{example}\label{ExReinstatement2Same} Suppose the $AF$ $\Delta$ that includes the arguments $A_1, B_1$ and $C_1$ in Figure \ref{Arguments}, and the $AF$ $\Delta'$ that includes the additional argument $D_1$ that cites another interpretation of the report that concludes $\neg k$. Then $D_1$ also undermine attacks $B_1$ on its sub-argument $B_1'$ and we also now have $D_1 \leftrightarrow B_1'$. The defense of $A_1$ against $B_1$ by $\set{C_1, D_1}$ is now stronger than that offered by $\set{C_1}$ alone. Abstractly, this equates with $a1^* \succ^\emptyset a1$ in the $AF$ shown in Figure \ref{SchCQAFs}iii (contextual ranking). Furthermore, $\{D_1,C_1,A_1\}$ is a $112$-preferred extension of $\Delta'$ but $\{C_1,A_1\}$ is not a $112$-preferred extension of $\Delta$. Abstractly, this equates with $a1^* \succ^S a1$ in the $AF$ shown in Figure \ref{SchCQAFs}iii (absolute ranking). \end{example} \begin{example}\label{ExampleTwoVOneReinstatment} Suppose the $AF$ $\Delta$ that includes the arguments $A_1, B_1$ and $C_1$ in Figure \ref{Arguments}, and the $AF$ $\Delta'$ that now includes the additional argument $B_2$ in Figure \ref{Arguments}, as well as the argument $C_2$ claiming that the Arab League are not credible experts, so that $C_2$ undercut attacks $B_2$ (on $r_5$).
That is, we have that $\Delta'$ includes: \begin{quote} $C_1 \rightarrow B_1 \rightarrow A_1$, $C_1 \leftrightarrow B_1'$, $C_2 \rightarrow B_2 \rightarrow A_1$. \end{quote} Intuitively, $A_1$ is more strongly justified in $\Delta$ than in $\Delta'$ since in the latter case we have two arguments ($B_1$ and $B_2$) that continue to exert a weakening effect on $A_1$ as opposed to the one ($B_1$) in $\Delta$. $A_1$ is in a $222$-preferred extension of $\Delta$ but $A_1$ is not in a $222$-preferred extension of $\Delta'$. This equates with $a1^* \succ^S_{\Delta} a1$ in the $AF$ shown in Figure \ref{SchCQAFs}iv. \end{example} \subsubsection{Graded Acceptability and Accrual} We now briefly illustrate, by means of a simple example, the relationship between graded acceptability and the notion of accrual. In the following section we then show how graded acceptability captures a simple form of accrual when applying graded semantics to a dialectical characterisation of non-monotonic inference. \begin{example}\label{ExAccrual} Suppose an argument in support of invading Syria, instantiating the scheme for practical reasoning \cite{atk05}: \begin{quote} Assad is suppressing Syrians, and invading Syria will remove Assad from power, and \textbf{removing Assad will achieve democracy in Syria}, so promoting the value of peace. \end{quote} Now, as well as pointing to counter-arguments, critical questions can also be posed as challenges to presumptions, shifting the burden of proof so as to provide an argument in support of the presumption. So the critical question `Do the consequences of the action achieve the goal?' can be posed to the presumption emphasised in bold in the above argument (with the consequences being `the removal of Assad' and the goal being `achieve democracy in Syria').
Suppose that in response to this challenge, the above argument is then extended to an argument $I$ that now includes as a sub-argument, the argument $A_1$ in Figure \ref{Arguments} whose conclusion is the bold text in the above. Now suppose two additional arguments $A_2$ and $A_3$, each of which are instances of the AEO scheme. $A_2$ cites the Institute of Middle Eastern Studies at King's College London who are experts in the domain of middle east politics and who assert that removing Assad will \emph{not} achieve democracy in Syria (i.e., $A_2$ concludes $\neg g$). $A_3$ cites the United Nations working group on middle east affairs who are also experts that assert that removing Assad will \emph{not} achieve democracy in Syria. Both $A_2$ and $A_3$ symmetrically rebut attack $A_1$. Moreover, both $A_2$ and $A_3$ asymmetrically rebut attack $I$ on its sub-argument $A_1$. We thus have two Dung admissible extensions $E1$ and $E2$ in Figure \ref{SchCQAFs}v. However, $A_2$ and $A_3$ accrue in support of each other's claim ($\neg g$) and so strengthen each other at the expense of $A_1$. Hence, although $E1$ and $E2$ are both subsets of Dung preferred extensions, we have that $A_2, A_3 \succ^{S}_{\Delta} A_1, I$ (intuitively each attack on $I$ and $A_1$ is defended by one counter-attacker, whereas each attack on $A_2$ and $A_3$ is defended by two counter-attackers). \end{example} Observe also that $C_1$ and $D_1$ accrue in support of $\neg k$ to strengthen $A_1$ in Example \ref{ExReinstatement2Same}. The incompatibility of accrual with Dung's standard theory has been discussed and argued for in \cite{PrakAccrual}. However, as illustrated in the above examples, our graded semantics partially challenges this view by showing how a simple counting-based form of accrual can be coherently accommodated within Dung's theory.
\subsection{Graded Semantics and Characterisations of Non-monotonic Inference}\label{Sec:GradedPrefSubtheories} A number of works \cite{Modgil2013361, Amgoud2010,Thang01082014} provide argumentation-based characterisations of non-monotonic inference defined by Brewka's Preferred Subtheories \cite{bre89}. The latter starts with a \emph{totally} ordered ($\leq$) set $\mathcal{B}$ of classical wff\footnote{Where as usual, $\alpha \approx \beta$ iff $\beta \leq \alpha$ and $\alpha \leq \beta$, and $\beta < \alpha$ iff $\beta \leq \alpha$ and $\alpha \nleq \beta$.} partitioned into equivalence classes $(\mathcal{B}_1,\ldots,\mathcal{B}_n)$ (for $i = 1 \dots n$, $\alpha, \beta \in \mathcal{B}_i$ iff $\alpha \approx \beta$) and such that: \begin{quote}$\forall \alpha \in \mathcal{B}_i, \forall \beta \in \mathcal{B}_j$, $i < j$ iff $\beta < \alpha$. \end{quote} A `preferred subtheory' (`\emph{ps}' for short) is obtained by taking a maximal (under set inclusion) consistent subset of $\mathcal{B}_1$, maximally extended with a subset of $\mathcal{B}_2$, and so on. In this way, multiple \emph{ps}s may be constructed; for example $(\mathcal{B}_1 = \{\neg a \vee \neg b\},\mathcal{B}_2 = \{a , b \})$ yields the \emph{ps}s $\{\neg a \vee \neg b, a\}$ and $\{\neg a \vee \neg b, b\}$. \begin{figure}[t] \centering \includegraphics[width=4in]{Pics/FigPS.pdf} \caption{Arguments in $AF1$ that defeat each other are connected by double headed arrows. $E1$, $E2$ and $E3$ are Dung admissible (i.e., $111$-admissible extensions). Only $E3$ is $112$-admissible.\label{FigPS}} \end{figure} \begin{figure}[t] \centering \includegraphics[width=4in]{Pics/FigPS2.pdf} \caption{$AF2$ defined by the base in which $\neg a$ is ordered below $a, b$ and $\neg a \vee \neg b$. $E1'$, $E2'$ and $E3'$ are $111$-admissible extensions. None are $112$-admissible.
\label{FigPS2}} \end{figure} Classical logic arguments in \textit{ASPIC+} consist of consistent subsets of premises $\Delta \subseteq \mathcal{B}$ in support of a claim $\alpha$ classically entailed by $\Delta$ (and such that no proper subset of $\Delta$ entails $\alpha$) via a strict inference rule $\alpha_1,\ldots,\alpha_n \rightarrow \alpha$ ($\Delta = \{\alpha_1,\ldots,\alpha_n\}$) that encodes the classical entailment, and which we represent here as $(\Delta,\alpha)$. The arguments constructed from $\mathcal{B}$ are then related by a binary defeat relation, obtained on the basis of the attacks and a strict preference relation over arguments defined by reference to the ordering on $\mathcal{B}$: \begin{quote} -- $(\Delta,\alpha) \prec (\Delta',\alpha')$ iff $\exists \gamma \in \Delta$, $\forall \beta \in \Delta'$: $\gamma < \beta$; \\[3pt] -- $(\Delta_1,\alpha_1)$ \emph{attacks} $(\Delta_2,\alpha_2)$ on $(\{\beta\},\beta)$ iff $\beta \in \Delta_2$ and either $\beta = \neg \alpha_1$ or $\alpha_1 = \neg \beta$; \\[3pt] -- $(\Delta_1,\alpha_1)$ \emph{defeats} $(\Delta_2,\alpha_2)$ iff it attacks $(\Delta_2,\alpha_2)$ on some $(\{\beta\},\beta)$ and $(\Delta_1,\alpha_1) \nprec (\{\beta\},\beta)$. \end{quote} \cite{Modgil2013361} then shows a correspondence such that each \emph{ps} of $\mathcal{B}$ is the set of premises in a stable extension of the $AF$ consisting of the arguments and defeats defined by the totally ordered $\mathcal{B}$. Then $\alpha$ is a sceptical (credulous) \emph{ps}-inference iff $\alpha$ is entailed by all (respectively at least one) \emph{ps}, iff $\alpha$ is a sceptical (credulous) argumentation defined inference (i.e., $\alpha$ is the conclusion of an argument in all, respectively at least one, stable extension).
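For small propositional bases the preferred-subtheory construction can be carried out by brute force: extend each candidate with a maximal consistent subset of the next stratum, checking consistency by truth tables. The following sketch reproduces the two \emph{ps}s of the example base above; the pair encoding of formulas as (name, evaluation function) and the helper names are assumptions of this illustration.

```python
# Hedged sketch of Brewka's preferred subtheories for small
# propositional bases; formulas are (name, eval-fn) pairs.
from itertools import combinations, product

def consistent(formulas, atoms):
    """Brute-force satisfiability check over the given atoms."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(f(v) for (_, f) in formulas):
            return True
    return False

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s), -1, -1)
            for c in combinations(s, r)]

def preferred_subtheories(strata, atoms):
    """Extend maximal consistent subsets stratum by stratum."""
    pss = [frozenset()]
    for stratum in strata:
        nxt = []
        for T in pss:
            cands = [T | S for S in subsets(stratum)
                     if consistent(T | S, atoms)]
            # keep only the set-inclusion maximal candidates
            for C in cands:
                if not any(C < D for D in cands) and C not in nxt:
                    nxt.append(C)
        pss = nxt
    return pss

# The example base: B1 = {~a | ~b}, B2 = {a, b}
f_nab = ('~a|~b', lambda v: not v['a'] or not v['b'])
f_a = ('a', lambda v: v['a'])
f_b = ('b', lambda v: v['b'])
strata = [{f_nab}, {f_a, f_b}]
for T in preferred_subtheories(strata, ['a', 'b']):
    print(sorted(name for (name, _) in T))
```

As expected, the two preferred subtheories each contain `~a|~b` together with exactly one of `a` and `b`.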
Moreover, it has subsequently been shown in \cite{modgil17StableEqualsPreferred} that the stable extensions of any $AF$ of classical logic arguments and defeats, as defined above by a totally ordered set $\mathcal{B}$ of classical logic formulae, coincide with the preferred extensions.\footnote{It is well known that any stable extension of an $AF$ is preferred. Therefore \cite{modgil17StableEqualsPreferred} focuses on showing that any preferred extension is also stable.} The correspondence in \cite{Modgil2013361} can therefore also be stated for preferred subtheories and preferred extensions, and so in what follows, we focus on graded preferred semantics. We can apply graded semantics to $AF$s that relate arguments by \emph{defeats} rather than attacks. Firstly, assume $\mathcal{B}$ is the single equivalence class $\mathcal{B}_1$ = $\{a,b,\neg a \vee \neg b, \neg a\}$, generating the defeat graph $AF1$ in Figure \ref{FigPS} in which all attacks succeed as defeats (represented as double arrows). Note that for every argument $(\Delta,\alpha)$ shown, the $AF$ also includes the argument $(\Delta,\beta)$ where $\beta$ is any classical consequence of $\Delta$. However, no attacks originate from these additional arguments. It therefore suffices to consider only the arguments shown in Figure \ref{FigPS}, since these are the only arguments from which attacks originate. Hence if an extension of $AF1$ contains a $(\Delta,\alpha)$ shown in Figure \ref{FigPS}, then it will also contain any $(\Delta,\beta)$ such that $\Delta \vdash \beta$. Now observe that $E3$ is a subset of the single $112$-preferred extension that includes all arguments with consistent subsets of $\{b,\neg a \vee \neg b, \neg a\}$ (every defeat on $X \in E3$ is defended by two arguments in $E3$). However $E1$ and $E2$ are not $112$-admissible, and are not subsets of $112$-preferred extensions.
Indeed, we obtain that $G, D, B, C \succ^{\ell mn\text{-preferred}}_{AF1} A, E, F$ according to Definition \ref{definition:degree-semantics}. Once again we witness how graded semantics can effectively account for a form of accrual. We have two distinct arguments claiming $\neg a$ ($G$ and $D$) which accrue so as to strengthen each other, and so privilege the inference $\neg a$ over $a$. Moreover, these arguments accrue in their defense of $B, C$ and $D$. Notice that explicit preferences over arguments take precedence over the implicit preference yielded by accruing arguments and the arguments they defend. Suppose that $\mathcal{B}$ consists of the two equivalence classes $\mathcal{B}_1$ = $\{a,b,\neg a \vee \neg b\}$ and $\mathcal{B}_2$ = $\{\neg a\}$ (i.e., $\neg a < a,b,\neg a \vee \neg b$) generating the defeat graph $AF2$ in Figure \ref{FigPS2}. The fact that $G \prec A$ now means that $G$ and $D$ no longer accrue in support of $\neg a$ and an implicit preference for $\neg a$, and no longer accrue in defense of $B, C$ and $D$. Now none of $E1'$, $E2'$ and $E3'$ are subsets of $112$-preferred extensions, and $\forall X,Y \in \{G, D, B, C, A, E, F\}$, $X \approx^{\ell mn\text{-preferred}}_{AF2} Y$. \section{Related Work} \label{Sec:RelatedWork} We discuss related work and provide a detailed comparison of our semantics based on graded acceptability with some of the approaches proposed in the literature. \subsection{Approaches to Graduality in Abstract Argumentation} The existing literature aiming at introducing some form of `graduality' or `ranking' in abstract argumentation can roughly be classified in two strands: those introducing a more fine-grained notion of argument status, and which are closely based on Dung's theory; those that depart from Dung's notions of defense, acceptability and extensions under various semantics, so as to provide a unique ranking on arguments, typically through the use of techniques for value propagation on graphs.
Within the first category, \cite{wu10labelling} pushes the envelope of Dung's theory by showing how standard notions (e.g., belonging to or being attacked by the grounded extension, belonging to some or no admissible sets, and being attacked by some or no admissible sets) suffice to isolate six different statuses of arguments, without introducing an explicit notion of graduality. In \emph{Weighted Argumentation Systems} (\emph{WAS}) \cite{dunne11weighted}, weights are associated with attacks, and Dung extensions are generalised to relax the requirement for conflict-freeness, and allow for extensions whose contained attacks' summative weight does not exceed some predefined `inconsistency budget'. The case where the weight of each attack is 1 (which we refer to as \emph{WAS}$_1$), and in which the inconsistency budget therefore equates with the number of attacks tolerated in an extension, invites comparison with our $\ell 11$ graded semantics accommodating extensions that are not necessarily Dung conflict free (i.e., when $\ell > 1$).\footnote{We will in Section \ref{Sec:Future} suggest future research in which we generalise graded neutrality, defense and semantics to account for weights on attacks.} The first thing to note is that \emph{WAS}$_1$ considers the neutrality of an extension w.r.t. the number of contained attacks, rather than w.r.t. the number of attacks on any given contained argument. Moreover, we have argued that toleration w.r.t. the co-acceptability of attacking arguments should go hand in hand with toleration w.r.t. weaker forms of defense. However, the implications of relaxing conflict-freeness on the existence and construction of Dung's semantics in \emph{WAS} have yet to be studied in depth (\cite{dunne11weighted} only notes that the existence of a \emph{unique} grounded extension is not guaranteed).
Amongst the graph propagation approaches, we mention the equational approaches that assign a more fine grained ranking to arguments by evaluating fixed points of functions that assign a numerical value to any given argument based on the values of its attackers. In particular, the equational approach of Gabbay and Rodrigues \cite{GabbayR16} who conjecture that their approach yields a unique solution for cyclic graphs, the compensation based semantics of Amgoud et al.\ \cite{Amgoud:2016}, which assigns the same ranking to all arguments in cycles and yields a unique solution for cyclic graphs, and the social argumentation approach of Leite and Martin (\textbf{LM}) \cite{conf/ijcai/LeiteM11} (who are concerned with propagating user votes on arguments, but also yield fine grained rankings without recourse to exogenous information). Other graph propagation approaches that do not use an equational fixed point approach, typically account for the attack and defense paths terminating in the argument being ranked, where these paths are sequences of, respectively even and odd, numbers of attacking arguments. Besnard and Hunter (\textbf{BH}) \cite{BesnardHunter01} rank classical logic arguments in acyclic graphs, through a \emph{categoriser} function -- $v: \mathcal{A} \rightarrow [0,1]$ defined as $v(a) = \frac{1}{1 + \sum_{b \rightarrow a} v(b)}$ -- that assigns high values to arguments with low-valued attackers (and the maximum value to un-attacked arguments) and low values to arguments with high-valued attackers. Cayrol and Lagasquie-Schiex (\textbf{CL}) \cite{CLS04} then generalise use of this function to Dung $AF$s to develop (in their terms) a `local approach' to valuation of arguments, and then formalise a `global approach' that they argue gives more intuitive outcomes. Their approach requires a highly involved transformation of cyclic graphs to infinite acyclic graphs.
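On acyclic graphs the categoriser valuation admits a compact rendering by memoised recursion. A minimal sketch (the pair encoding of attacks is an assumption of this illustration, not part of the original formulation):

```python
# Sketch of the Besnard-Hunter categoriser v(a) = 1/(1 + sum of
# attacker values), for acyclic attack graphs only.

def categoriser(attacks):
    """attacks: set of (attacker, attacked) pairs; returns a valuation."""
    memo = {}
    def v(a):
        if a not in memo:
            memo[a] = 1.0 / (1.0 + sum(v(b) for (b, c) in attacks
                                       if c == a))
        return memo[a]
    return v

# Reinstatement chain c -> b -> a: the reinstated a scores below
# the unattacked c but above its attacker b.
cat = categoriser({('c', 'b'), ('b', 'a')})
print(cat('c'), cat('b'), cat('a'))  # 1.0 0.5 0.6666666666666666
```

Note how the valuation echoes the reinstatement discussion of Example \ref{ExReinstatement}: a reinstated argument is ranked below an unattacked one.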
More recently, Amgoud and Ben-Naim \cite{AmgoudNaim} propose two ranking-based semantics, which they call discussion-based (\textbf{AB-d}) and burden-based (\textbf{AB-b}). These semantics are also based on the processing of attack paths, and their general applicability relies on conjectures concerning the processing of cyclic paths. Finally, Matt and Toni (\textbf{MT}) \cite{conf/jelia/MattT08} provide a highly original paradigm for ranking arguments, defining argument strength in terms of the value of a repeated two-person zero-sum strategic game with imperfect information. \subsection{Graded Semantics in the Landscape of Ranking-Based Semantics} Properties of rankings have been proposed by the above works, and a very informative comparison of these approaches---in particular \textbf{LM}, \textbf{BH}, \textbf{CL} (for acyclic $AF$s), \textbf{AB-d}, \textbf{AB-b} and \textbf{MT}---in terms of whether or not these properties are satisfied, has been provided in \cite{bonzon16comparative}. It is instructive to study these properties as they apply to our graded rankings, in part because such a study reveals fundamental distinctions between propagation based and Dung semantics based approaches to the evaluation of arguments. In what follows, we refer to absolute graded rankings $x \succeq^{S} y$ (Definition \ref{definition:degree-semantics}), which are more naturally comparable with the other existing approaches to graduality. Unless stated otherwise, we assume $S$ stands for any of the semantics in $\set{\mathit{grounded}, \mathit{preferred}, \mathit{stable}}$. Moreover, we assume the constraints $n \geq m$ and $\ell \geq m$. Albeit not essential for the comparison, this assumption streamlines some of the proofs in this section. We now turn to the properties studied in \cite{bonzon16comparative}, and discuss whether our approach satisfies each of them.
In doing so we will recall which approaches satisfy each property (writing \textbf{All} if satisfied by all, and \textbf{None} if satisfied by none). \subsubsection{Abstraction and Independence} \textbf{Abstraction} (\textbf{All}) states that rankings of arguments are preserved by isomorphisms. It is straightforward to see that graded rankings satisfy this property. \smallskip \textbf{Independence} (\textbf{All}) states that the ranking of $a$ and $b$ should be independent of any argument that is connected to neither $a$ nor $b$. Formally, the connected components of an $AF$ $\Delta$ are the set of largest subgraphs of $\Delta$, denoted $cc(\Delta)$, where two arguments are in the same component iff there is some path of attacks (ignoring the direction of attack) between them. Graded rankings satisfy independence: \begin{proposition}\label{PropInd} $\forall \Delta' = (\mathcal{A}',\rightarrow') \in cc(\Delta = (\mathcal{A},\rightarrow))$, $x, y \in \mathcal{A}'$, and positive integers $\ell$, $m$ and $n$ such that $\ell \geq m$ and $n \geq m$: \begin{align} x \succeq^{S}_{\Delta'} y \IMPLIES x \succeq^{S}_{\Delta} y. \label{eq:independence} \end{align} \end{proposition} \begin{proof} To prove \eqref{eq:independence} we establish the following diagram of implications. For any integers $\ell$, $m$ and $n$ such that $\ell \geq m$ and $n \geq m$, and for any extension $S \in \{\mathit{grounded}, \mathit{preferred}$, $\mathit{stable}\}$ such that $S_{\ell mn}(\Delta) \neq \emptyset$:\footnote{This condition is obviously relevant only for $S = \mathit{stable}$.} \begin{center} \begin{tabular}{cccr} $y \in \bigcap S_{\ell mn}(\Delta')$ & $\IMPLIES$ & $x \in \bigcap S_{\ell mn}(\Delta')$ & Assumption \\ $\Updownarrow$ & & $\Updownarrow$ & \fbox{Claim} \\ $y \in \bigcap S_{\ell mn}(\Delta)$ & $\IMPLIES$ & $x \in \bigcap S_{\ell mn}(\Delta)$ & Conclusion \end{tabular} \end{center} for $x, y \in \Delta'$.
We need to establish the equivalences denoted \fbox{Claim} in the above diagram, for each of the semantics. Firstly note that, clearly, since $\Delta' \in cc(\Delta)$:\footnote{Recall the notation introduced in Definition \ref{definition:attack_graph}.} \begin{equation}\label{PropIndEq} \forall x \in \Delta' : \overline{x}_{\Delta'} = \overline{x}_\Delta \AND \overline{\overline{x}}_{\Delta'} = \overline{\overline{x}}_\Delta \end{equation} \fbox{$S = \mathit{grounded}$} It suffices to show that $\forall \ell mn$ satisfying the constraints we have that $\grn_{\ell mn}(\Delta') = \grn_{\ell mn}(\Delta) \cap \mathcal{A}'$. We proceed by induction.\footnote{To illustrate the argument we use here standard induction, which, to be precise, works only for the case in which $\Delta$ is finitary. For the general case one needs to generalize the argument in the obvious way to transfinite induction. Cf. Remark \ref{remark:ordinal}.} By Corollary \ref{fact:grounded}, $\grn_{\ell mn}(\Delta) = \bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(\emptyset)$. For the base case, clearly $\cfmn{m}{n}^{\Delta'}(\emptyset) = \cfmn{m}{n}^{\Delta}(\emptyset) \cap \mathcal{A}'$. Assume then that, for some $i \geq 1$, $(\cfmn{m}{n}^{\Delta'})^i(\emptyset) = (\cfmn{m}{n}^{\Delta})^i(\emptyset) \cap \mathcal{A}'$ (IH). By \eqref{PropIndEq} and IH it immediately follows that the same claim holds for $i +1$ and therefore, in the limit, $\grn_{\ell mn}(\Delta') = \grn_{\ell mn}(\Delta) \cap \mathcal{A}'$ as desired. \fbox{$S = \mathit{preferred}$} First of all, one can straightforwardly show that $\forall E$, if $E \in \prf_{\ell mn}(\Delta)$ then $E \cap \mathcal{A}' \in \prf_{\ell mn}(\Delta')$. So for any $\ell mn$-preferred extension in $\Delta$ there is a corresponding $\ell mn$-preferred extension in $\Delta'$. To establish the claim it then suffices to prove the converse, that is: \begin{equation} \label{PropIndEq2} \forall E' \in \prf_{\ell mn}(\Delta'), \exists E \in \prf_{\ell mn}(\Delta) \ST E' = E \cap \mathcal{A}'.
\end{equation} To prove \eqref{PropIndEq2}, an inductive argument similar to the one above for $S = \mathit{grounded}$ can be used by exploiting Fact \ref{fact:prf}, for each $\ell mn$-preferred extension. The claim then follows again from \eqref{PropIndEq}. \fbox{$S = \mathit{stable}$} Assume $\stb_{\ell mn}(\Delta) \neq \emptyset$. It follows that $\stb_{\ell mn}(\Delta') \neq \emptyset$. Then the same argument used for the case of preferred extensions applies to prove \fbox{Claim}. If instead $\stb_{\ell mn}(\Delta) = \emptyset$, then it trivially holds that $x \in \bigcap \stb_{\ell mn}(\Delta) \Leftrightarrow y \in \bigcap \stb_{\ell mn}(\Delta)$, no matter whether $y \in \bigcap \stb_{\ell mn}(\Delta') \Rightarrow x \in \bigcap \stb_{\ell mn}(\Delta')$. This suffices to establish \eqref{eq:independence} for $S = \mathit{stable}$. \end{proof} An inspection of the proof of Proposition \ref{PropInd} shows, however, that the claim can be strengthened to $\forall \Delta' \in cc(\Delta)$: $x \succ^{S}_{\Delta'} y$ $\Rightarrow$ $x \succ^{S}_{\Delta} y$, only for $S \in \{\mathit{grounded},$ $\mathit{preferred}\}$. This stronger formulation fails for $S = \mathit{stable}$ because of the well-known non-existence of Dung stable extensions, due to arguments in an odd cycle `contaminating' unrelated arguments. For example, in \begin{quote} \hspace{-5mm}$\Delta = \langle \{a,b,c\}, \{a \rightarrow a, b \rightarrow c \}\rangle$ and $\Delta' \in cc(\Delta) = \langle \{b,c\}, \{b \rightarrow c \}\rangle$ \end{quote} we have that $b \succ^{stable}_{\Delta'} c$, since $b$, but not $c$, is in the single $111$-stable---i.e., Dung stable---extension. However, for $\Delta$ there are no $\ell$, $m$ and $n$ such that $b$, but not $c$, is justified under $\ell mn$-stable semantics.
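As a side illustration (ours, not part of the formal development), the iteration $\bigcup_{0 \leq k < \omega} \cfmn{m}{n}^k(\emptyset)$ used in the proof above can be sketched for finitary frameworks. Following the text, we assume graded defense collects the arguments with fewer than $m$ attackers having fewer than $n$ counter-attackers in the input set; the encoding of attacks as pairs is our assumption.

```python
def graded_defense(m, n, args, attacks, X):
    # x is graded-defended by X when fewer than m of its attackers
    # have fewer than n counter-attackers inside X.
    defended = set()
    for x in args:
        insufficiently_countered = [
            y for (y, t) in attacks if t == x
            and len({z for (z, u) in attacks if u == y and z in X}) < n
        ]
        if len(insufficiently_countered) < m:
            defended.add(x)
    return defended

def graded_grounded(m, n, args, attacks):
    # Least fixpoint reached by iterating graded defense from the empty set;
    # the iteration is increasing, so it terminates on finite frameworks.
    X = set()
    while True:
        nxt = graded_defense(m, n, args, attacks, X)
        if nxt == X:
            return X
        X = nxt

# With m = n = 1 this collapses to the standard Dung grounded extension:
# for the chain c -> b -> a it yields {a, c}.
```

Raising $m$ or $n$ relaxes or strengthens the standard of defense, which is the behaviour exploited throughout this section.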
\smallskip So with respect to the properties of abstraction and independence, graded rankings behave in line with most existing approaches to graduality, with some important differences when the underlying semantics is assumed to be the stable one. \subsubsection{Void Precedence and Self Contradiction} The above example also illustrates why \textbf{Void Precedence} (\textbf{All})---any non-attacked argument is ranked strictly higher than any attacked argument---is also not satisfied by graded rankings under stable semantics, as $b \approx^{\mathit{stable}}_{\Delta} c$. However, the situation is more positive under grounded and preferred semantics, for which an---arguably---natural weakening of the property holds. We need some auxiliary notation. Define the variant of the graded defense function (Definition \ref{definition:sensitive}) that requires an infinity of defenders for any attack: $\cfmn{1}{\infty}(X) = \{x \in \mathcal{A} \mid \forall y \ST y \ar x, \exists^\infty z \in X \ST z \ar y\}$. So this function outputs, for any set, the set of arguments whose attackers are all counter-attacked by an infinity of arguments in that set. Consider now the least fixpoint $\lfp. \cfmn{1}{\infty}$ of this function. We can refer to it as the $11\infty$-grounded extension. Observe right away that if the underlying framework is finitary, then $\lfp. \cfmn{1}{\infty}$ is equal to the set of unattacked arguments in that framework. We have the following result: \begin{proposition} Let $\Delta = \langle \mathcal{A}, \rightarrow \rangle$ be an $AF$ and let $x \in \mathcal{A}$ be s.t. $x \in \lfp. \cfmn{1}{\infty}^\Delta$. Then $\forall y \in \mathcal{A} \backslash \lfp. \cfmn{1}{\infty}^\Delta$ s.t. $\overline{y} \neq \emptyset$: $x \succ^{S}_{\Delta} y$, for $S \in \set{\mathit{grounded},\mathit{preferred}}$. \end{proposition} \begin{proof} Let $y \in \mathcal{A} \backslash \lfp. \cfmn{1}{\infty}^\Delta$.
Then $\overline{y} \neq \emptyset$ and $\overline{\overline{y}}$ is finite. Then let $k = \max \set{| \overline{z} | \mid z \in \overline{y}}$. It follows that there exists no $11(k+1)$-admissible (and hence grounded or preferred) extension which contains $y$. Hence, if $x \in \lfp. \cfmn{1}{\infty}^\Delta$ then $x \succ^{S}_{\Delta} y$. \end{proof} A direct consequence of the proposition is that, if $\Delta$ is finitary, the claim simplifies to the standard formulation of void precedence, that is: $\forall x \in \mathcal{A}$ s.t. $\overline{x} = \emptyset$ and $\forall y \in \mathcal{A}$ s.t. $\overline{y} \neq \emptyset$: $x \succ^{S}_{\Delta} y$, $S \in \set{\mathit{grounded},\mathit{preferred}}$. \smallskip Finally, and related to the above discussion, the property of \textbf{Self Contradiction} (\textbf{MT}), which states that a self-attacking argument is ranked strictly lower than any non self-attacking argument, is also not satisfied by our approach. Consider $\Delta$ = $\langle \{a,a1,a2,b\}$, $\{a \rightarrow a, a1 \rightarrow b, a2 \rightarrow b\} \rangle$, where $a$ is in all $222$-admissible (and hence $222$-preferred) extensions, whereas $b$ is not in any $222$-admissible extension, and so $b \nsucceq^{\mathit{preferred}}_{\Delta} a$. \medskip So with respect to the properties of void precedence and self contradiction, graded rankings behave essentially in line with most existing approaches to graduality. Some differences arise with respect to void precedence, when the underlying semantics is assumed to be the stable one and when frameworks are not finitary, in which case graded rankings cannot distinguish between unattacked arguments and arguments whose attackers are recursively counter-attacked by an infinity of defenders ($\lfp. \cfmn{1}{\infty}$).
\subsubsection{Further Postulates} \textbf{Cardinality Precedence} (\textbf{AB-d}, \textbf{AB-b}) states that if the number of attacks on $b$ is strictly greater than the number of attacks on $a$, then $a$ is ranked strictly higher than $b$. Like \textbf{LM}, \textbf{BH}, \textbf{CL} and \textbf{MT}, our approach does not satisfy this postulate. This is because our ranking also accounts for the number of defenders, as witnessed by the incomparability of $a3$ and $a4$ in Figure \ref{Motivating1}, that is, the fact that $a3 \nsucceq^{grounded}_{AF} a4$ and $a4 \nsucceq^{grounded}_{AF} a3$ (recall Example \ref{Ex:DEJ}). It is worth remarking, however, that, as discussed in Section \ref{Sec:ExtendingPartialOrder}, our ranking could be further refined to a lexicographic ordering giving precedence to the minimization of attackers, which would then satisfy the cardinality precedence postulate. \smallskip \textbf{Quality Precedence} (\textbf{None}) states that if there is an attacker of $b$ that is ranked strictly higher than all attackers of $a$, then $a$ is ranked strictly higher than $b$. The principle is not satisfied by our approach. We leave it to the reader to verify that given $e \rightarrow c \rightarrow b$, $e1 \rightarrow d1 \rightarrow a$, $e2 \rightarrow d1$, $e3 \rightarrow d2 \rightarrow a$, $e4 \rightarrow d2$, then although each of $d1$ and $d2$ is ranked lower than $c$, $a$ and $b$ are incomparable.
\begin{figure}[t] \begin{center} \includegraphics[scale=0.6]{Pics/SCTCounterExampleV2.pdf} \end{center} \caption{Frameworks and graded rankings referenced in the discussion of postulates for ranking based semantics reviewed in \cite{bonzon16comparative}.} \label{SCTCounterExample} \end{figure} \smallskip \textbf{Strict Counter-transitivity} (\textbf{LM}, \textbf{BH}, \textbf{AB-d}, \textbf{AB-b}) states that if the attackers of $b$ strictly outnumber those of $a$ (i.e., cardinality precedence), or the number of attackers of each is the same, but at least one attacker of $b$ is ranked strictly higher than at least one attacker of $a$, and not vice versa, then $a$ is ranked strictly higher than $b$. Like \textbf{CL} and \textbf{MT}, graded rankings do not satisfy this property. Consider the $AF$ and ranking of arguments in Figure \ref{SCTCounterExample}i), where $a$ and $b$ both have one attacker (respectively $d$ and $c$). Then $c \succ^{S}_{AF} d$, but $a \nsucc^{S}_{AF} b$. In fact, $a \approx^{S}_{AF} b$. With respect to this, it is worth observing the following. By virtue of our generalisation of Dung semantics, the analysis of any individual argument under graded semantics is inherently bound up with the analysis of sets of (co-acceptable) arguments (recall the discussion in Section \ref{Sec:GradedAcceptabilityIntuitions}). Now, observe that the strongest defense needed to accept the defender $e1$ of $a$, and that then suffices to defend $a$, is when $m$ = $2$ (since $e1$ is not defended). But then any such standard of defense also accepts $b$ (since the attack by $c$ need not then be defended). \smallskip \textbf{Defense Precedence} (\textbf{LM}, \textbf{BH}, \textbf{AB-d}, \textbf{AB-b}) states that for two arguments with the same number of attackers, a defended argument is ranked strictly higher than a non-defended argument.
Again, like \textbf{CL} and \textbf{MT}, this property is not satisfied by graded rankings, as illustrated in Figure \ref{SCTCounterExample}ii), where the defended $a$ and undefended $b$ have the same ranking. We again see that the analysis of any individual argument is inherently bound up with the analysis of sets of (co-acceptable) arguments. In this example, the strongest defense needed to accept $d$, and that suffices to accept $a$, is when $m = 2$ (since $d$ is undefended). But then this suffices to accept $b$. \smallskip Cayrol and Lagasquie-Schiex \cite{CLS04} propose a number of properties that relate to the addition and extension of attack/defense paths.\footnote{These properties, referencing the addition of and extension (increase in) paths, assume that the arguments in the additional (extension to the) path are disjoint from the existing arguments.} Improving the ranking of an argument by \textbf{Strict Addition of a Defense Path}\footnote{We use the term `path' rather than the term `branch' used in \cite{bonzon16comparative}.} (\textbf{None}) is clearly not satisfied by graded rankings. Indeed, the reverse is the case: $a$ is ranked higher when un-attacked than when attacked and then defended (as discussed in Example \ref{ExReinstatement}). Restriction of this property to the case where a defense path is added only to an argument that is already attacked is satisfied only by \textbf{CL}, and is not satisfied by graded rankings. Satisfaction of \textbf{Addition of an Attack Path} (\textbf{All}) means that the ranking of an argument $a$ is degraded when adding an attack path terminating in $a$, and equates with $a$ being ranked strictly higher than $a1$ in the $AF$ $\langle c \rightarrow b \rightarrow a, c1 \rightarrow b1 \rightarrow a1, x_n \rightarrow \ldots \rightarrow x_1 \rangle$, where $x_n \rightarrow \ldots \rightarrow x_1$ is a path of attacks s.t. $x_1 = a1$ and $n \in 2 \mathbb{N}$. This property is in general not satisfied by graded rankings.
Consider the example $\Delta$ in Figure \ref{SCTCounterExample}iii). The strongest iterated defense from $\emptyset$ (and hence the `strongest' $\ell mn$-grounded extension) that includes $a$ is when $\ell = m = n = 3$. This defense also includes $a1$. Observe that: $\cfmn{3}{3}(\emptyset)$ = $E_0 = \{k1,k2,h1,h2,h3,a,v,m1,m2,j1,j2,j3,x,w\}$; $\cfmn{3}{3}(E_0)$ = $E_1 = E_0 \cup \{a1\}$; $\cfmn{3}{3}(E_1)$ = $E_1$ is the $333$-grounded extension. Indeed, there is no $\ell mn$-grounded extension that contains $a$ and not $a1$. Intuitively, if $m = 2$ ($n \geq 2$), then the defense of $a$ requires at least two of $h1$, $h2$ and $h3$ (since $v$ is unattacked and so $f$ must be defended against by $n \geq 2$ arguments), but then a defense of any $h_i$ requires that $m = 3$ (since each $h_i$ is attacked by two unattacked arguments). Hence the inclusion of $a$ in an $\ell mn$-grounded extension requires that $m \geq 3$. But then any such defense allows an additional attack path terminating in $a1$, while accommodating $a1$ under any such standard of defense. Put briefly, and recalling our discussion of strict counter-transitivity and defense precedence, it is the standard of defense met by $h1$, $h2$ or $h3$ that determines the strongest defense accommodating $a$, and this standard of defense allows for an additional attack path on $a1$. \smallskip The two properties \textbf{Increase of an Attack Path} (\textbf{All} except \textbf{MT}) and \textbf{Increase of a Defense Path} (\textbf{All} except \textbf{MT}) are distinctive of the propagation based approaches, whereby the ranking of arguments is propagated down chains of attacks. Neither is satisfied by graded rankings. The former equates with $b1$ being ranked strictly higher than $b$, and the latter equates with $a1$ being ranked strictly lower than $a$, in the $AF$ in Figure \ref{SCTCounterExample}iv). We have that $b \approx^{S}_{AF} b1$ and $a \approx^{S}_{AF} a1$.
Once again, the strongest standard of defense needed to accept $a1$ is that needed to defend $c1$, which in turn has the same ranking as $a$. \smallskip In addition to the properties discussed above, \cite{bonzon16comparative} also proposes that all arguments can be compared according to their ranking, which is clearly not satisfied by graded rankings (cf. the discussion in Section \ref{Sec:ExtendingPartialOrder}), and that all non-attacked arguments have the same ranking, which is obviously satisfied by our approach. \subsubsection{Discussion} We briefly comment on the above analysis and its positioning of our proposal within the growing field of ranking-based semantics. Like all existing approaches, our rankings satisfy key properties such as abstraction and independence, as well as void precedence, albeit with some interesting caveats concerning graded stable extensions and the finitariness condition. With respect to the other postulates, rankings based on graded acceptability tend to behave differently from approaches based on the propagation idea (and appear to behave most closely to \textbf{MT}, which is also underpinned by intuitions different from propagation), for reasons inherent to the way graded semantics generalise standard Dung acceptability. To recap, we recall the discussion in Section \ref{Sec:GradedAcceptabilityIntuitions}, where we emphasise that our graded theory of acceptability aims at capturing a notion of graduality while at the same time retaining the Dungian focus on {\em sets of} arguments, rather than individual arguments, as the units of analysis. Hence, as our above analysis repeatedly shows, it is the defense of the co-acceptable defenders of an argument $x$ that determines the strongest defense needed to accept (and hence determine the ranking of) $x$. We further illustrate this point with a variation of Section \ref{SecGradedSchCQ}'s examples of graded semantics applied to frameworks instantiated by schemes and critical questions.
\begin{figure}[t] \centering \includegraphics[width=3.7in]{Pics/SchCQAFs2.pdf} \caption{$AF$ in which $a1^* \approx^{\emptyset} a1$. \label{SchCQAFs2}} \end{figure} \begin{example}\label{ExReinstatement2Diff} Consider the $AF$ $\Delta$ that includes the arguments $A_1, B_1$ and $C_1$ in Figure \ref{Arguments}, and the $AF$ $\Delta'$ that includes the additional argument $D_1$ that concludes $\neg h$ (i.e., an argument addressing \emph{AEO}'s critical question AEO2, by claiming that Chilcott is not an expert in the domain of Blair's conduct in the Iraq war\footnote{For example, because Chilcott was not given access to the relevant documents reporting on Blair's deliberations.}). Notice that, unlike Example \ref{ExReinstatement2Same}, $D_1$ undermine-attacks $B_1$ on a different sub-argument $B_2' = [h]$ to that (i.e., $B_1'$) undermine-attacked by $C_1$. We now also have $D_1 \leftrightarrow B_2'$. Now, $A_1$ is then not strengthened upon inclusion of $D_1$. Abstractly, this equates with the ranking $a1^* \approx^S a1$ in the AF in Figure \ref{SchCQAFs2}. Intuitively, the strongest defense needed to accept a defender (either $c1^*$ or $d1^*$) of $a1^*$, and that then suffices to defend $a1^*$, is when $m$ = $1$ and $n$ = $1$. But then this is also the strongest defense needed to accept $a1$. Contrast this with Example \ref{ExReinstatement2Same}, in which $C_1$ and $D_1$ accrue in support of the same conclusion $\neg k$, so that both undermine-attack $B_1$ on $B_1'$. Recall that this equates with $a1^* \succ^S a1$ in the AF in Figure \ref{SchCQAFs}iii), in which the strongest defense needed to accept a defender (either $c1^*$ or $d1^*$) of $a1^*$ that then suffices to defend $a1^*$, is when $m$ = $1$ and $n$ = $2$. This standard of defense does not suffice to accept $a1$ in this example. Finally, notice that in the contextual approach to rankings, we do have that $a1^* \succ^{\set{c_1, c_1^*, d_1^*}} a1$ in the AF in Figure \ref{SchCQAFs2}.
That is, if we commit to $\set{c_1, c_1^*, d_1^*}$ (the context), $a1^*$ is then ranked higher than $a1$. \end{example} All in all, our analysis shows that rankings based on graded acceptability yield an original combination of postulates with respect to existing approaches, and shows how a graded generalisation of Dung semantics can contribute a rich perspective to the ranking-based semantics programme. \smallskip Finally, we observe that, in contrast to the above described graph propagation approaches, we have (in Section \ref{Sec:Applications} and in the above example) studied our notions of graduality and rankings as they apply to instantiated argumentation. In particular, we have studied well-known instantiations of Dung's theory yielded by the use of schemes and critical questions, and classical logic instantiations that yield dialectical characterisations of non-monotonic inference. We believe that such a study is an important complement to the use of postulates in evaluating the intuitions captured by proposals for graduality and rankings. It is with this aim in mind that Section \ref{Sec:Applications} seeks to substantiate the intuitions captured by our graded generalisation of Dung's theory. \section{Conclusions and Future Work}\label{Sec:Future} \paragraph{Summary} This paper has presented three contributions. First, we have generalised Dung's standard semantics by providing graded variants of the classic extensions studied in abstract argumentation, and studied their fixpoint-theoretic behavior. In doing so, the paper also provided a comprehensive exposition of the fixpoint theory of standard Dung semantics (Section \ref{Sec:Background}) which, to the best of our knowledge, was not yet available in the literature. The resulting graded semantics for abstract argumentation has then been shown to enable a simple way of ranking arguments according to how strongly they are justified under different graded semantics.
We showed how this enables an arguably natural way of arbitrating, in abstract argumentation, among arguments that are credulously justified under the standard Dung semantics, without recourse to exogenous information. Such rankings have then also been applied to instantiated Dung frameworks, specifically via \emph{ASPIC+} formalisations of `human-oriented' argumentation encoded through the use of argument schemes and critical questions, and dialectical characterisations of non-monotonic inference defined by Brewka's Preferred Subtheories. In so doing, we have also shown how the graded semantics accounts for the mutual strengthening of arguments with the same conclusion (i.e., the accrual of arguments) within the Dung paradigm. Finally, the novel rankings have been thoroughly compared with existing approaches to ranking-based semantics, allowing us to highlight the similarities and core differences between rankings based on graded semantics and other influential approaches. \paragraph{Future work} The paper has aimed at establishing foundations for a graded generalisation of Dung's argumentation theory. Based on our results, many natural avenues for future research present themselves. We confine ourselves to mentioning those that are, in our opinion, the most promising. \smallskip We observed in Section \ref{Sec:ExtendingPartialOrder} that the rankings enabled by our graded semantics are, without further assumptions, partial. The ensuing theory of graded rankings is therefore less committal than existing proposals in the ranking semantics literature, and could therefore be extended to accommodate further pre-theoretical intuitions. For example, some application domains may warrant distinguishing amongst arguments justified under the standard Dung semantics, by assigning a higher ranking to those that have a higher number of attacks.
For example, consider that scientific theories establish their credibility to the extent that they successfully defend themselves against arguments attempting to refute (attack) them. Exploring directions in which graded rankings could be extended to capture this alternative intuition in a principled manner is clearly a natural direction of research. \smallskip Dung's original semantics were born out of an attempt to systematize patterns underpinning the semantics of all the main non-monotonic logics. Section \ref{Sec:GradedPrefSubtheories}'s application of graded semantics to dialectical characterisations of Preferred Subtheories strongly suggests that graded semantics could offer a similar tool for systematizing paraconsistent non-monotonic logics. By relaxing the conflict-freeness and self-defence requirements, paraconsistent non-monotonic inference relations could be defined akin to the way in which non-monotonic inferences are defined by Dung's argumentation theory. As graded semantics enable a `controlled' relaxation of conflict-freeness, these inference relations should be able to keep the explosivity of the ensuing logic at bay. The development of these ideas can lead, we argue, to novel and interesting insights at the interface of paraconsistent logic and argumentation theory. This in turn may motivate the study of rationality postulates for argumentation \cite{Caminada2007286} that are reformulated to accommodate paraconsistent inference relations. Notice also that in Section \ref{Sec:GradedPrefSubtheories} we applied graded semantics to defeat graphs that employ exogenous preferences to decide which attacks succeed as defeats, and highlighted how these explicit preferences take precedence over the implicit preferences yielded by our rankings. The interaction between exogenous and endogenously defined preferences is also an obvious direction for future research.
\smallskip The theory of graded argumentation as developed in this paper has focused on extracting graduality from information endogenous to standard argumentation frameworks, essentially by counting the attackers of arguments. This intuition can however be extended along the lines explored by weighted argumentation frameworks \cite{dunne11weighted}. We briefly sketch how this could be done, assuming now that each attack is assigned a value in $\mathbb{R}^+$ by a weighting function $w$. Recall Definition \ref{definition:sensitive} and consider the following alternative definition of graded defense: \begin{align} \cfmn{m}{n}(X) & = \set{x \in \mathcal{A} \mid \rho\left( \set{(y,x) \in \ar \mid \rho\left(\set{(z,y) \in \ar \mid z \in X}\right) < n}\right) < m} \label{eq:weighted} \end{align} where, for $R \subseteq \ar$, $\rho(R) = \sum_{(x,y) \in R} w(x,y)$. Clearly the definition of graded defense given in Definition \ref{definition:sensitive} is the special case of \eqref{eq:weighted} where $w$ assigns weight $1$ to each attack. This generalization suggests that the logic behind graded semantics can be leveraged beyond the standard case explored here, and therefore also applied to existing proposals based on weighted argumentation (e.g., \cite{dunne11weighted}). \smallskip In Section \ref{Sec:RelatedWork} we discussed differences between rankings defined by graded semantics and propagation-based approaches. Although, as we have shown, the two approaches build on different underpinning intuitions, it should be stressed that they remain compatible. In fact, an interesting direction for future research would be to explore how features of the latter might be integrated with our graded approach. For example, recalling the example illustrating that strict counter-transitivity is not satisfied, one might note that since $b$'s attacker $c$ is ranked above $a$'s attacker $d$ by the graded ranking in Figure \ref{SCTCounterExample}i), the ranking could be refined to favour $a$ over $b$.
Similarly, defense precedence could be enforced by noting that, since $b$'s attacker $f$ is ranked strictly higher than $a$'s attacker $c$ (according to the graded ranking in Figure \ref{SCTCounterExample}ii), the ranking can be refined to rank $b$ strictly below $a$. By the same reasoning, the ranking in Figure \ref{SCTCounterExample}iv) could be refined to rank $b1$ above $b$ and so $a$ above $a1$ (so enforcing satisfaction of `increase of an attack path' and `increase of a defense path'), and the ranking $a1^* \approx^S a1$ in Example \ref{ExReinstatement2Diff} could be refined to obtain $a1^* \succ^S_\Delta a1$ by noting that $b1 \succ^S_{\Delta} b1^*$. \smallskip Finally, Dung's standard semantics lend themselves to natural dialectical interpretations via argument games (cf. \cite{modgil09proof} for an overview). Given that graded semantics retain many of the key fixpoint-theoretic properties of standard Dung semantics, we believe that standard argument games---specifically the games for the grounded extension and for the credulous preferred semantics---could be appropriately modified to obtain game-theoretic characterizations of (at least some of) our semantics.\footnote{Games for all semantics could be obtained via a detour through suitable logic games. Cf. \cite{grossi13abstract}.} These types of games would shed novel light on the under-investigated link between dialectical approaches to argumentation and the younger literature on ranking-based semantics.
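Returning to the weighted generalisation of graded defense in \eqref{eq:weighted}, the following is a minimal executable sketch of ours (the encoding of attacks as pairs and of $w$ as a dictionary is our assumption):

```python
def rho(attack_set, w):
    # Total weight of a set of attacks.
    return sum(w[e] for e in attack_set)

def weighted_graded_defense(m, n, args, attacks, w, X):
    # x is defended when the weight of its insufficiently countered attacks
    # stays below m; an attack (y, x) is sufficiently countered when the
    # attacks from X on y carry weight at least n.
    defended = set()
    for x in args:
        under_countered = {
            (y, t) for (y, t) in attacks if t == x
            and rho({(z, u) for (z, u) in attacks if u == y and z in X}, w) < n
        }
        if rho(under_countered, w) < m:
            defended.add(x)
    return defended

# With unit weights this reduces to counting attackers and defenders,
# i.e., to the un-weighted graded defense function.
```

Varying $w$ then lets a single heavy attack exhaust the threshold $m$ on its own, which is the behaviour the weighted generalisation is meant to capture.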
\section{Introduction} In this paper we study submodular maximization under matroid constraints in the adaptive complexity model. The adaptive complexity model was recently introduced in the context of submodular optimization in~\cite{BS18a} to quantify the information theoretic complexity of black-box optimization in a parallel computation model. Informally, the \emph{adaptivity} of an algorithm is the number of sequential rounds it makes when each round can execute polynomially-many function evaluations in parallel. The concept of adaptivity is heavily studied in computer science and optimization as it provides a measure of efficiency of parallel computation. Since submodular optimization is regularly applied on very large datasets, we seek algorithms with low adaptivity to enable speedups via parallelization. For the basic problem of maximizing a monotone submodular function under a cardinality constraint $k$, the celebrated greedy algorithm, which iteratively adds to the solution the element with the largest marginal contribution, is $\Omega(k)$ adaptive. Until very recently, even for this basic problem, there was no known constant-factor approximation algorithm whose adaptivity is sublinear in $k$. In the worst case $k \in \Omega(n)$, and hence greedy and all other algorithms had adaptivity that is \emph{linear} in the size of the ground set. The main result in~\cite{BS18a} is an \emph{adaptive sampling} algorithm for maximizing a monotone submodular function under a cardinality constraint that achieves a constant factor approximation arbitrarily close to $1/3$ in $\mathcal{O}(\log n)$ adaptive rounds, as well as a lower bound showing that no algorithm can achieve a constant factor approximation in $\tilde{o}(\log n)$ rounds. Consequently, this algorithm provided a constant factor approximation with an exponential speedup in parallel runtime for monotone submodular maximization under a cardinality constraint.
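For concreteness, the $\Omega(k)$-adaptive greedy algorithm can be sketched on a coverage function, a canonical monotone submodular objective. This is a toy illustration only; the instance below is ours.

```python
def greedy(ground, f, k):
    # k sequential rounds; each round queries all remaining elements and
    # adds the one with the largest marginal contribution. Rounds depend on
    # each other, hence the adaptivity is linear in k.
    S = []
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S + [e]) - f(S))
        S.append(best)
    return S

# Coverage function: f(S) = number of points covered by the sets indexed by S.
subsets = {1: {1, 2, 3}, 2: {3, 4}, 3: {4, 5, 6, 7}, 4: {1}}
def coverage(S):
    return len(set().union(*(subsets[e] for e in S))) if S else 0
```

On this instance with $k = 2$, greedy first picks set 3 (covering 4 points) and then set 1 (marginal gain 3), for a value of 7; for monotone submodular objectives under a cardinality constraint this strategy attains the classical $1-1/e$ guarantee.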
In~\cite{BRS19, EN19}, the adaptive sampling technique was extended to achieve an approximation guarantee arbitrarily close to the optimal $1-1/e$ in $\O(\log n)$ adaptive rounds. This result was then obtained with a linear number of queries \cite{FMZ19}, which is optimal. Functions with bounded curvature have also been studied using adaptive sampling under a cardinality constraint~\cite{BS18b}. The more general family of packing constraints, which includes partition and laminar matroids, has been considered in~\cite{chekuri2018submodular}. In particular, under $m$ packing constraints, a $1-1/e-\epsilon$ approximation was obtained in $\O(\log^2m \log n)$ rounds using a combination of continuous optimization and multiplicative weight update techniques. \subsection{Submodular maximization under a matroid constraint} For the fundamental problem of maximizing a monotone submodular function under a general matroid constraint it is well known since the late 70s that the greedy algorithm achieves a $1/2$ approximation~\cite{NWF78} and that even for the special case of cardinality constraint no algorithm can obtain an approximation guarantee better than $1-1/e$ using polynomially-many value queries~\cite{nemhauser1978best}. Thirty years later, in seminal work, Vondr{\'{a}}k introduced the continuous greedy algorithm which approximately maximizes the multilinear extension of the submodular function~\cite{CCPV07} and showed it obtains the optimal $1-1/e$ approximation guarantee~\cite{vondrak08}. Despite the surge of interest in adaptivity of submodular maximization, the problem of maximizing a monotone submodular function under a matroid constraint in the adaptive complexity model has remained elusive. As we discuss in Section~\ref{related_work}, when it comes to matroid constraints there are fundamental limitations of the techniques developed in this line of work. 
The best known adaptivity for obtaining a constant factor approximation guarantee for maximizing a monotone submodular function under a matroid constraint is achieved by the greedy algorithm and is linear in the rank of the matroid. The best known adaptivity for obtaining the optimal $1-1/e$ guarantee is achieved by the continuous greedy algorithm and is linear in the size of the ground set. \begin{center} \emph{Is there an algorithm whose adaptivity is sublinear in the rank of the matroid that obtains a constant factor approximation guarantee?} \end{center} \subsection{Main result} Our main result is an algorithm for the problem of maximizing a monotone submodular function under a matroid constraint whose approximation guarantee is arbitrarily close to the optimal $1-1/e$ and which has near-optimal adaptivity of $\O(\log(n)\log(k))$. \begin{theorem*} For any $\epsilon>0$ there is an $\O\left(\log(n) \log\left(\frac{k}{\epsilon^3}\right) \frac{1}{\epsilon^3}\right)$ adaptive algorithm that, with probability $1 - o(1)$, obtains a $1-1/e - \O(\epsilon)$ approximation for maximizing a monotone submodular function under a matroid constraint. \end{theorem*} Our result provides an exponential improvement in the adaptivity for maximizing a monotone submodular function under a matroid constraint with an arbitrarily small loss in the approximation guarantee. As we later discuss, beyond the information-theoretic consequences, this implies that a very broad class of combinatorial optimization problems can be solved exponentially faster in standard parallel computation models given appropriate representations of the matroid constraints. Our main result is largely powered by a new technique developed in this paper which we call \emph{adaptive sequencing}. This technique proves to be extremely powerful and is a departure from all previous techniques for submodular maximization in the adaptive complexity model.
In addition to our main result we show that this technique gives us a set of other strong results that include: \begin{itemize} \item An $\O(\log(n)\log(k))$ adaptive combinatorial algorithm that obtains a $\frac{1}{2}-\epsilon$ approximation for monotone submodular maximization under a matroid constraint (Theorem~\ref{thm:comb}); \item An $\O(\log(n)\log(k))$ adaptive combinatorial algorithm that obtains a $\frac{1}{P+1} - \epsilon$ approximation for monotone submodular maximization under the intersection of $P$ matroids (Theorem~\ref{thm:intersection}); \item An $\O(\log(n)\log(k))$ adaptive algorithm that obtains an approximation of $1-1/e-\epsilon$ for monotone submodular maximization under a partition matroid constraint that can be implemented in the PRAM model with polylogarithmic depth (Appendix~\ref{sec:explicit}). \end{itemize} In addition to these results the adaptive sequencing technique can be used to design algorithms that achieve the same results as those for a cardinality constraint in~\cite{BRS19,EN19,FMZ19} and for non-monotone submodular maximization under a cardinality constraint as in~\cite{BBS18} (Appendix~\ref{sec:explicit}). \subsection{Technical overview} \label{sec:technical_overview} The standard approach to obtain an approximation guarantee arbitrarily close to $1-1/e$ for maximizing a submodular function under a matroid constraint $\M$ is by the continuous greedy algorithm due to Vondr{\'{a}}k~\cite{vondrak08}. This algorithm approximately maximizes the multilinear extension $F$ of the submodular function~\cite{CCPV07} in $\O(n)$ adaptive steps. In each step the algorithm updates a continuous solution $\mathbf{x} \in [0,1]^n$ in the direction of ${\mathbf 1}_S$, where $S$ is chosen by maximizing an additive function under a matroid constraint. In this paper we introduce the \emph{accelerated continuous greedy} algorithm whose approximation is arbitrarily close to the optimal $1-1/e$.
Similarly to continuous greedy, this algorithm approximately maximizes the multilinear extension by carefully choosing $S\in \M$ and updating the solution in the direction of ${\mathbf 1}_S$. In sharp contrast to continuous greedy, however, the choice of $S$ is done in a manner that allows making a \emph{constant} number of updates to the solution, each requiring $\O(\log(n)\log(k))$ adaptive rounds. We do this by constructing a feasible set $S$ using $\O(\log(n)\log(k))$ adaptive rounds, at each one of the $1/\lambda$ iterations of accelerated continuous greedy, s.t. $S$ approximately maximizes the contribution of taking a step of constant size $\lambda$ in the direction of ${\mathbf 1}_{S}$. We construct $S$ via a novel combinatorial algorithm introduced in Section~\ref{sec:comb}. The new combinatorial algorithm achieves by itself a $1/2$ approximation for submodular maximization under a matroid constraint in $\O(\log(n)\log(k))$ adaptive rounds. This algorithm is developed using a fundamentally different approach from all previous low adaptivity algorithms for submodular maximization (see discussion in Section~\ref{related_work}). This new framework uses a single random \emph{sequence} $(a_1, \ldots, a_k)$ of elements. In particular, for each $i \in [k]$, element $a_i$ is chosen uniformly at random among all elements such that $S \cup \{a_1, \ldots, a_i\} \in \M$. This random feasibility of each element is central to the analysis. Informally, this ordering allows the sequence to navigate randomly through the matroid constraint. For each position $i$ in this sequence, we analyze the number of elements $a$ such that $S \cup \{a_1, \ldots, a_i\} \cup a \in \M$ and $f_{S \cup \{a_1, \ldots, a_i\}}(a)$ is large. The key observation is that if this number is large at a position $i$, by the randomness of the sequence, $f_{S \cup \{a_1, \ldots, a_i\}}(a_{i+1})$ is large w.h.p., which is important for the approximation. 
Otherwise, if this number is low we discard a large number of elements, which is important for the adaptivity. In Section~\ref{sec:continuous} we analyze the approximation of the accelerated continuous greedy algorithm, which is the main result of the paper. We use the algorithm from Section~\ref{sec:comb} to select $S$ as the direction and show $F(\mathbf{x} + \lambda {\mathbf 1}_S) - F(\mathbf{x}) \geq (1 - \epsilon)\lambda(\texttt{OPT} - F(\mathbf{x}))$, which implies a $1-1/e-\epsilon$ approximation. Finally, in Section~\ref{sec:matroid} we parallelize the matroid oracle queries. The random sequence generated in each iteration of the combinatorial algorithm in Section~\ref{sec:comb} is independent of function evaluations and requires zero adaptive rounds, though it sequentially queries the matroid oracle. For practical implementation it is important to parallelize the matroid queries to achieve fast parallel runtime. When given explicit matroid constraints such as for uniform or partition matroids, this parallelization is relatively simple (Section~\ref{sec:explicit}). For general matroid constraints given via rank or independence oracles we show how to parallelize the matroid queries in Section~\ref{sec:matroid}. We give upper and lower bounds by building on the seminal work of Karp, Upfal, and Wigderson on the parallel complexity of finding a base of a matroid~\cite{karp1988complexity}. For rank oracles we show how to execute the algorithm with $\O(\log(n)\log(k))$ parallel steps, which matches the $\O(\log(n)\log(k))$ adaptivity. For independence oracles we show how to execute the algorithm using $\tilde{\O}(n^{1/2})$ steps of parallel matroid queries and give an $\tilde{\Omega}(n^{1/3})$ lower bound even for additive functions and partition matroids.
\subsection{Previous optimization techniques in the adaptive complexity model}\label{related_work} The random sequencing approach developed in this paper is a fundamental departure from the adaptive sampling approach introduced in~\cite{BS18a} and employed in previous combinatorial algorithms that achieve low adaptivity for submodular maximization~\cite{BS18b,BBS18,BRS19,EN19,FMZ19,FMZ18}. In adaptive sampling an algorithm samples multiple large feasible sets at every iteration to determine elements which should be added to the solution or discarded. The issue with these uniformly random feasible sets is that, although they have a simple structure for uniform matroids, they are complex objects to generate and analyze for general matroid constraints. Chekuri and Quanrud recently obtained a $1 - 1/e - \epsilon$ approximation in $\O(\log^2m \log n)$ adaptive rounds for the family of $m$ packing constraints, which includes partition and laminar matroids~\cite{chekuri2018submodular}. This setting was then also considered for non-monotone functions in~\cite{ene2018submodular}. Their approach also uses the continuous greedy algorithm, combined with a multiplicative weight update technique to handle the constraints. Since general matroids consist of exponentially many constraints, a multiplicative weight update approach over these constraints is not feasible. More generally packing constraints assume an explicit representation of the matroid. For general matroid constraints, the algorithm is not given such a representation but an oracle. Access to an independence oracle for a matroid breaks these results as shown in Section~\ref{sec:matroid}: any constant factor approximation algorithm with an independence oracle must have $\tilde{\Omega}(n^{1/3})$ sequential steps. 
\subsection{Preliminaries} \paragraph{Submodularity.} A function $f : 2^N \rightarrow \R_+$ over ground set $N = [n]$ is \emph{submodular} if the marginal contributions $f_S(a) := f(S \cup a) - f(S)$ of an element $a \in N\setminus S$ to a set $S \subseteq N$ are diminishing, meaning $f_S(a) \geq f_T(a)$ for all $S \subseteq T \subseteq N$ and $a \in N \setminus T$. Throughout the paper, we abuse notation by writing $S \cup a$ instead of $S \cup \{a\}$ and assume $f$ is monotone, so $f(S) \leq f(T)$ for all $S \subseteq T$. The value of the optimal solution $O$ for the problem of maximizing the submodular function under some constraint $\M$ is denoted by $\texttt{OPT}$, i.e. $O := \argmax_{S \in \M}f(S)$ and $\texttt{OPT} := f(O)$. \paragraph{Adaptivity.} Given a value oracle for $f$, an algorithm is \textbf{$r$-adaptive} if every query $f(S)$ for the value of a set $S$ occurs at a round $i \in [r]$ s.t. $S$ is independent of the values $f(S')$ of all other queries at round $i$, with at most $\poly(n)$ queries at every round. \paragraph{Matroids.} A set system $\M \subseteq 2^N$ is a \emph{matroid} if it satisfies the \emph{downward closed} and \emph{augmentation} properties. A set system $\M$ is downward closed if for all $S \subseteq T$ such that $T \in \M$, then $S \in \M$. The augmentation property is that if $S, T \in \M$ and $|S| < |T|$, then there exists $a \in T$ such that $S \cup a \in \M$. We call a set $S \in \M$ \emph{feasible} or \emph{independent}. The rank $k = {\textsc{rank}}(\M)$ of a matroid is the maximum size of an independent set $S$. The rank ${\textsc{rank}}(S)$ of a set $S$ is the maximum size of an independent subset $T \subseteq S$. A set $B \in \M$ is called a base of $\M$ if $|B| = {\textsc{rank}}(\M)$. 
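As a minimal illustration of these preliminaries (our own toy example, not part of the paper's formalism), the following Python sketch builds a coverage function, a canonical monotone submodular function, and checks the diminishing-returns property of its marginal contributions.

```python
# Toy coverage function: ground set elements are indices of subsets, and
# f(S) is the number of points covered by the union of the chosen subsets.
def make_coverage(subsets):
    def f(S):
        covered = set()
        for a in S:
            covered |= subsets[a]
        return len(covered)
    return f

def marginal(f, S, a):
    # f_S(a) := f(S ∪ {a}) - f(S)
    return f(S | {a}) - f(S)

subsets = {0: {1, 2}, 1: {2, 3}, 2: {3, 4, 5}}
f = make_coverage(subsets)

# Diminishing returns: the marginal of element 2 only shrinks as S grows.
assert marginal(f, set(), 2) >= marginal(f, {0}, 2) >= marginal(f, {0, 1}, 2)
```

Monotonicity holds as well, since adding a subset can only cover more points.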
The matroid polytope $P(\M)$ is the collection of points $\mathbf{x} \in [0,1]^n$ in the convex hull of the independent sets of $\M$, or equivalently the points $\mathbf{x}$ such that $\sum_{i \in S} x_i \leq {\textsc{rank}}(S)$ for all $S \subseteq [n]$. \paragraph{The multilinear extension.} The multilinear extension $F : [0,1]^n \rightarrow \R_+$ of a function $f$ maps a point $\mathbf{x} \in [0,1]^n$ to the expected value of a random set $R \sim \mathbf{x}$ containing each element $i \in [n]$ with probability $x_i$ independently, i.e. $F(\mathbf{x}) = \E_{R \sim \mathbf{x}}[f(R)]$. We note that given an oracle for $f$, one can estimate $F(\mathbf{x})$ arbitrarily well in one round by querying in parallel a sufficiently large number of samples $R_1, \ldots, R_m \iid \mathbf{x}$ and taking the average value of $f(R_i)$ over $i \in [m]$~\cite{chekuri2015multiplicative,chekuri2018submodular}. For ease of presentation, we assume throughout the paper that we are given access to an exact value oracle for $F$ in addition to $f$. The results which rely on $F$ then extend to the case where the algorithm is only given an oracle for $f$ with an arbitrarily small loss in the approximation, no loss in the adaptivity, and an additional $\O(n \log n)$ factor in the query complexity.\footnote{With $\O(\epsilon^{-2} n \log n)$ samples, $F(\mathbf{x})$ is estimated within a $(1 \pm \epsilon)$ multiplicative factor with high probability~\cite{chekuri2018submodular}.} \section{The Combinatorial Algorithm}\label{sec:comb} \label{sec:mainresult} In this section we describe a combinatorial algorithm used at every iteration of the accelerated continuous greedy algorithm to find a direction ${\mathbf 1}_S$ for an update of a continuous solution. In the next section we will show how to use this algorithm as a subprocedure in the accelerated continuous greedy algorithm to achieve an approximation arbitrarily close to $1-1/e$ with $\O(\log(n)\log(k))$ adaptivity.
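The one-round sampling estimator for $F$ described in the preliminaries above can be sketched as follows (the function name and default sample count are our own; in the adaptive model the $m$ evaluations of $f$ would be issued in a single parallel round).

```python
import random

def multilinear_estimate(f, x, m=4000, rng=random):
    # F(x) = E_{R ~ x}[f(R)]: draw m independent random sets R, where each
    # coordinate i is included with probability x[i], and average f(R).
    n = len(x)
    total = 0.0
    for _ in range(m):
        R = {i for i in range(n) if rng.random() < x[i]}
        total += f(R)
    return total / m
```

For an additive $f$ such as $f(R) = |R|$ the estimate concentrates around $\sum_i x_i$, consistent with $F$ being linear in each coordinate.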
The optimization of this direction $S$ is itself an instance of maximizing a monotone submodular function under a matroid constraint. The main result of this section is an $\O(\log(n)\log(k))$ adaptive algorithm, which we call {\textsc{Adaptive Sequencing}}, that returns a solution $\{a_i\}_i$ s.t., for all $i$, the marginal contribution of $a_i$ to $\{a_1, \ldots, a_{i-1}\}$ is near-optimal with respect to all elements $a$ s.t. $\{a_1, \ldots, a_{i-1}, a\} \in \M$. We note that this guarantee also implies that {\textsc{Adaptive Sequencing}} \ itself achieves an approximation that is arbitrarily close to $1/2$ with high probability. As discussed in Section~\ref{sec:technical_overview}, unlike all previous low-adaptivity combinatorial algorithms for submodular maximization, the {\textsc{Adaptive Sequencing}} \ algorithm developed here does not iteratively sample large sets of elements in parallel at every iteration. Instead, it samples a \emph{single} random \emph{sequence} of elements in every iteration. Importantly, this sequence is generated without any function evaluations, and therefore can be executed in zero adaptive rounds. The goal is then to identify a high-valued prefix of the sequence that can be added to the solution and discard a large number of low-valued elements at every iteration. Identifying a high-valued prefix enables the approximation guarantee and discarding a large number of elements in every iteration ensures low adaptivity. \subsection{Generating random feasible sequences} The algorithm crucially requires generating a random sequence of elements in zero adaptive rounds. \begin{definition} Given a matroid $\M$ we say that $(a_1, \ldots, a_{{\textsc{rank}}(\M)})$ is a \textbf{random feasible sequence} if for all $i \in [{\textsc{rank}}(\M)]$, $a_i$ is an element chosen u.a.r. from $\{a : \{a_1, \ldots, a_{i-1}, a\} \in \M\}$. \end{definition} A simple way to obtain a random feasible sequence is by sampling feasible elements sequentially.
\begin{algorithm}[H] \caption{ {\textsc{Random Sequence}} \ } \begin{algorithmic} \item[{\bf Input:}] matroid $\M$ \STATE \textbf{for} $i = 1$ to ${\textsc{rank}}(\M)$ \textbf{do} \STATE \ \ \ $X \leftarrow \{a : \{a_1, \ldots, a_{i-1}, a \} \in \M\}$ \STATE \ \ \ $a_i \sim$ a uniformly random element from $X$ \RETURN $a_1, \ldots, a_{{\textsc{rank}}(\M)}$ \end{algorithmic} \label{alg:bbs} \end{algorithm} It is immediate that Algorithm~\ref{alg:bbs} outputs a random feasible sequence. Since Algorithm~\ref{alg:bbs} is independent of $f$, its adaptivity is zero. For ease of presentation, we describe the algorithm using {\textsc{Random Sequence}} \ as a subroutine, despite its sequential calls to the matroid oracle. In Section~\ref{sec:matroid} we show how to efficiently parallelize this procedure using standard matroid oracles. \subsection{The algorithm} The main idea behind the algorithm is to generate a random feasible sequence in each adaptive round, and use that sequence to determine which elements should be added to the solution and which should be discarded from consideration. Given a position $i \in\{1,\ldots, l\}$ in a sequence $(a_1,a_2,\ldots,a_{l})$, a subset $S$, and threshold $t$, we say that an element $a$ is \emph{good} if adding it to $S \cup \{a_1, \ldots, a_{i}\}$ satisfies the matroid constraint and its marginal contribution to $S \cup \{a_1, \ldots, a_{i}\}$ is at least threshold $t$. In each adaptive round the algorithm generates a random feasible sequence and finds the index $i^\star$ which is the minimal index $i$ such that at most a $1 - \epsilon$ fraction of the surviving elements $X$ are good. The algorithm then adds the set $\{a_1,\ldots,a_{i^\star}\}$ to $S$. A formal description of the algorithm is included below. We use $\M(S, X) := \{T \subseteq X : S \cup T \in \M\}$ to denote the matroid over elements $X$ where a subset is feasible in $\M(S,X)$ if its union with the current solution $S$ is feasible according to $\M$.
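In Python, Algorithm~\ref{alg:bbs} amounts to a short loop; the partition-matroid oracle below is an assumed toy instance used only for illustration.

```python
import random
from collections import Counter

def random_feasible_sequence(is_feasible, ground, rng=random):
    # Repeatedly draw a uniformly random element whose addition keeps the
    # prefix independent. No calls to f are made, so this costs zero
    # adaptive rounds (though the matroid oracle is queried sequentially).
    seq = []
    while True:
        X = [a for a in ground if a not in seq and is_feasible(seq + [a])]
        if not X:
            return seq  # seq is now a base of the matroid
        seq.append(rng.choice(X))

# Assumed toy matroid: at most one element from each of two parts.
parts = {0: 0, 1: 0, 2: 1, 3: 1}
def one_per_part(S):
    return all(c <= 1 for c in Counter(parts[a] for a in S).values())
```

Running it on the toy instance always returns a base: one element per part, in random order.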
\begin{algorithm}[H] \caption{{\textsc{Adaptive Sequencing}}} \begin{algorithmic} \item[{\bf Input:}] function $f$, feasibility constraint $\M$ \STATE $S \leftarrow \emptyset, t \leftarrow \max_{a \in N}f(a)$ \STATE \textbf{for} $\Delta$ iterations \textbf{do} \STATE \ \ \ $X \leftarrow N$ \STATE \ \ \ \textbf{while} $X \neq \emptyset$ \textbf{do} \STATE \ \ \ \ \ \ $a_1, \ldots, a_{{\textsc{rank}}(\M(S,X))} \leftarrow {\textsc{Random Sequence}}(\M(S,X))$ \STATE \ \ \ \ \ \ $X_i \leftarrow \{a \in X : S \cup \{a_1, \ldots, a_{i}, a\} \in \M \text{ and } f_{S \cup \{a_1, \ldots, a_{i}\}}(a) \geq t\}$ \STATE \ \ \ \ \ \ $i^{\star} \leftarrow \min\left\{i : |X_i| \leq (1 - \epsilon)|X|\right\}$ \STATE \ \ \ \ \ \ $S \leftarrow S \cup \{a_1, \ldots, a_{i^{\star}}\}$ \STATE \ \ \ \ \ \ $X \leftarrow X_{i^{\star}}$ \STATE \ \ \ $t \leftarrow (1 - \epsilon) t$ \RETURN $S$ \end{algorithmic} \label{alg:1} \end{algorithm} Intuitively, adding $\{a_1,\ldots,a_{i^\star}\}$ to the current solution $S$ is desirable for two important reasons. First, for a random feasible sequence we have that $S \cup \{a_1,\ldots,a_{i^\star}\} \in \M$ and for each element $a_i$ at a position $i \leq i^{\star}$, there is a high likelihood that the marginal contribution of $a_i$ to the previous elements in the sequence is at least $t$. Second, by definition of $i^\star$ a constant fraction $\epsilon$ of elements are not good at that position, and we discard these elements from $X$. This discarding guarantees that there are at most logarithmically many iterations until $X$ is empty. The threshold $t$ maintains the invariant that it is approximately an upper bound on the optimal marginal contribution to the current solution. By submodularity, the optimal marginal contribution to $S$ decreases as $S$ grows. Thus, to maintain the invariant, the algorithm iterates over decreasing values of $t$. 
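A compact Python rendering of {\textsc{Adaptive Sequencing}} may help fix ideas. It is deliberately sequential and lazy (the algorithm computes all sets $X_i$ in one parallel round, and each pass of the while-loop below corresponds to one adaptive round), and the oracle interface is our own simplification.

```python
import math
import random

def adaptive_sequencing(f, feasible, N, k, eps=0.25, rng=random):
    """f: set -> value; feasible: set -> bool (matroid oracle); k: rank."""
    def marg(S, a):
        return f(S | {a}) - f(S)
    S = set()
    t = max(f({a}) for a in N)
    delta = math.ceil(math.log(k / eps) / eps)  # number of threshold drops
    for _ in range(delta):
        X = set(N) - S
        while X:
            # Random feasible sequence inside M(S, X): zero adaptive rounds.
            seq = []
            while True:
                cand = [a for a in X if a not in seq
                        and feasible(S | set(seq) | {a})]
                if not cand:
                    break
                seq.append(rng.choice(cand))
            # X_i: elements still "good" once the length-i prefix is added.
            def good(i):
                P = S | set(seq[:i])
                return {a for a in X if feasible(P | {a}) and marg(P, a) >= t}
            i_star = next(i for i in range(len(seq) + 1)
                          if len(good(i)) <= (1 - eps) * len(X))
            S |= set(seq[:i_star])
            X = good(i_star)
        t *= 1 - eps
    return S
```

On a small toy coverage instance with a cardinality matroid this recovers the greedy-quality solution.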
In particular, at each of $\Delta = \O\left(\frac{1}{\epsilon}\log\left(\frac{k}{\epsilon}\right)\right)$ iterations, where $k := {\textsc{rank}}(\M)$, the algorithm decreases $t$ by a $1-\epsilon$ factor when there are no more elements which can be added to $S$ with marginal contribution at least $t$, so when $X$ is empty. \subsection{Adaptivity} In each inner-iteration the algorithm makes polynomially-many queries that are independent of each other. Indeed, in each iteration, we generate $X_1,\ldots,X_{k - |S|}$ non-adaptively and make at most $n$ function evaluations for each $X_i$. The adaptivity immediately follows from the definition of $i^\star$ that ensures an $\epsilon$ fraction of surviving elements in $X$ are discarded at every iteration. \begin{lemma} \label{lem:adaptivity} With $\Delta = \O\left(\frac{1}{\epsilon}\log\left(\frac{k}{\epsilon}\right)\right)$, {\textsc{Adaptive Sequencing}} \ has adaptivity $\O\left(\log(n)\log\left(\frac{k}{\epsilon}\right)\frac{1}{\epsilon^2}\right)$. \end{lemma} \begin{proof} The for loop has $\Delta$ iterations. The while loop has at most $\O(\epsilon^{-1} \log n)$ iterations since, by definition of $i^{\star}$, an $\epsilon$ fraction of the surviving elements are discarded from $X$ at every iteration. We can find $i^{\star}$ by computing $X_i$ for each $i \in [k]$ in parallel in one round. \end{proof} We note that the query complexity of the algorithm is $\O\left(nk\log(n)\log\left(\frac{k}{\epsilon}\right)\frac{1}{\epsilon^2}\right)$ and can be improved to $\O\left(n\log(n)\log(k)\log\left(\frac{k}{\epsilon}\right)\frac{1}{\epsilon^2}\right)$ if we allow $\O\left(\log(n)\log(k)\log\left(\frac{k}{\epsilon}\right)\frac{1}{\epsilon^2}\right)$ adaptivity by doing a binary search over at most $k$ sets $X_i$ to find $i^{\star}$. The details can be found in Appendix~\ref{sec:appcomb}. 
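The binary search mentioned above exploits the fact that the sets $X_i$ are nested (by submodularity and the downward-closed property), so their sizes are non-increasing in $i$. A sketch, with the sizes passed in for simplicity, whereas the algorithm would evaluate one $X_i$ per probe, trading a $\log k$ factor in adaptivity for fewer queries:

```python
def find_i_star(sizes, n_x, eps):
    # sizes[i] = |X_i| is non-increasing in i, so the minimal i with
    # |X_i| <= (1 - eps)|X| can be located with O(log k) probes.
    lo, hi = 0, len(sizes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if sizes[mid] <= (1 - eps) * n_x:
            hi = mid
        else:
            lo = mid + 1
    return lo
```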
\subsection{Approximation guarantee} The main result for the approximation guarantee is that the algorithm returns a solution $S = \{a_1, \ldots, a_l\}$ s.t. for all $i \leq l$, the marginal contribution obtained by $a_i$ to $\{a_1, \ldots, a_{i-1}\}$ is near-optimal with respect to all elements $a$ such that $\{a_1, \ldots, a_{i-1}, a\} \in \M$. To prove this we show that the threshold $t$ is an approximate upper bound on the maximum marginal contribution. \begin{lemma} \label{lem:invariants} Assume that $f$ is submodular and that $\M$ is downward closed. Then, at any iteration, $t \geq (1 - \epsilon) \max_{a : S \cup a \in \M} f_S(a).$ \end{lemma} \begin{proof} The claim holds initially by the definitions of $t = \max_{a \in N}f(a)$, $S = \emptyset$ and $X = N$. We show that this invariant is maintained through the algorithm when either $S$ or $t$ is updated. First, assume that at some iteration of the algorithm we have $t \geq (1 - \epsilon) \max_{a : S \cup a \in \M} f_S(a)$ and that $S$ is updated to $S \cup \{a_1, \ldots, a_{i^{\star}}\}$. Then, for all $a$ such that $S \cup a \in \M$, $$f_{S \cup \{a_1, \ldots, a_{i^{\star}}\}}(a) \leq f_{S}(a) \leq t/(1- \epsilon)$$ where the first inequality is by submodularity and the second by the inductive hypothesis. Since $\{a : S \cup \{a_1, \ldots, a_{i^{\star}}\} \cup a \in \M\} \subseteq \{a : S \cup a \in \M\}$ by the downward closed property of $\M$, $$\max_{a : S \cup \{a_1, \ldots, a_{i^{\star}}\} \cup a \in \M} f_{S \cup \{a_1, \ldots, a_{i^{\star}}\}}(a) \leq \max_{a : S \cup a \in \M} f_{S \cup \{a_1, \ldots, a_{i^{\star}}\}}(a).$$ Thus, when $S$ is updated to $S \cup \{a_1, \ldots, a_{i^{\star}}\}$, we have $t \geq (1 - \epsilon) \max_{a : S \cup \{a_1, \ldots, a_{i^{\star}}\} \cup a \in \M} f_{S \cup \{a_1, \ldots, a_{i^{\star}}\}}(a)$. Next, consider an iteration where $t$ is updated to $t' = (1 - \epsilon)t$. By the algorithm, $X = \emptyset$ at that iteration with current solution $S$.
Thus, by the algorithm, for all $a \in N$, $a$ was discarded from $X$ at some previous iteration with current solution $S'$ s.t. $S' \cup \{a_1, \ldots, a_{i^{\star}}\} \subseteq S$. Since $a$ was discarded, it is either the case that $S' \cup \{a_1, \ldots, a_{i^{\star}}\} \cup a \not \in \M$ or $f_{S' \cup \{a_1, \ldots, a_{i^{\star}}\}}(a) < t$. If $S' \cup \{a_1, \ldots, a_{i^{\star}}\} \cup a \not \in \M$ then $S \cup a \not \in \M$ by the downward closed property of $\M$ and since $S' \cup \{a_1, \ldots, a_{i^{\star}}\} \subseteq S$. Otherwise, $f_{S' \cup \{a_1, \ldots, a_{i^{\star}}\}}(a) < t$ and by submodularity, $f_S(a) \leq f_{S' \cup \{a_1, \ldots, a_{i^{\star}}\}}(a) < t = t' / (1-\epsilon)$. Thus, $\forall a \in N$ s.t. $S \cup a \in \M$, $t' \geq (1 - \epsilon) f_S(a)$ and the invariant is maintained. \end{proof} By exploiting the definition of $i^{\star}$ and the random feasible sequence property we show that Lemma~\ref{lem:invariants} implies that every element added to $S$ at some iteration has near-optimal expected marginal contribution to $S$. We define $X_i^{\M} := \{a \in X: S \cup \{a_1, \ldots, a_i\}\cup a \in \M\}$. \begin{lemma} \label{lem:marg} Assume that $a_1, \ldots, a_{{\textsc{rank}}(\M(S,X))}$ is a random feasible sequence, then for all $i \leq i^{\star}$, $$\E_{a_i}\left[f_{S \cup \{a_1, \ldots, a_{i-1}\}}(a_i)\right] \geq (1 - \epsilon)^2 \max_{a : S \cup \{a_1, \ldots, a_{i-1}\} \cup a \in \M} f_{S \cup \{a_1, \ldots, a_{i-1}\}}(a).$$ \end{lemma} \begin{proof} By the random feasibility condition, we have $a_i \sim \U(X_{i-1}^{\M})$.
We get $$\Pr_{a_i}\left[f_{S \cup \{a_1, \ldots, a_{i-1}\}}(a_i) \geq t\right] \cdot t = \frac{|X_{i-1}|}{|X_{i-1}^{\M}|} \cdot t \geq \frac{|X_{i-1}|}{|X|} \cdot t \geq (1-\epsilon)^2\max_{a : S \cup \{a_1, \ldots, a_{i-1}\} \cup a \in \M} f_{S \cup \{a_1, \ldots, a_{i-1}\}}(a)$$ where the equality is by definition of $X_{i-1}$, the first inequality since $X_{i-1}^{\M} \subseteq X$, and the second since $i \leq i^{\star}$ and by Lemma~\ref{lem:invariants}. Finally, note that $\E\left[f_{S \cup \{a_1, \ldots, a_{i-1}\}}(a_i)\right] \geq \Pr\left[f_{S \cup \{a_1, \ldots, a_{i-1}\}}(a_i) \geq t\right] \cdot t$. \end{proof} Next, we show that if every element $a_i$ in a solution $S = \{a_1, \ldots, a_k\}$ of size $k = {\textsc{rank}}(\M)$ has near-optimal expected marginal contribution to $S_{i-1} := \{a_1, \ldots, a_{i-1}\}$, then we obtain an approximation arbitrarily close to $1/2$ in expectation. \begin{lemma} Assume that $S = \{a_1, \ldots, a_k\}$ such that $\E_{a_i}[f_{S_{i-1}}(a_i)] \geq (1 - \epsilon) \max_{a: S_{i-1} \cup a \in \M}f_{S_{i-1}}(a)$ where $S_i = \{a_1, \ldots, a_i\}$. Then, for a matroid constraint $\M$, we have $\E\left[f(S)\right] \geq (1/2 - \O(\epsilon))\texttt{OPT}$. \end{lemma} \begin{proof} Let $O = \{o_1, \ldots, o_k\}$ such that $\{a_1, \ldots, a_{i-1}, o_i\}$ is feasible for all $i$, which exists by the augmentation property of matroids. We get, $$ \E[f(S)] = \sum_{i \in [k]}\E[f_{S_{i-1}}(a_i)] \geq (1-\epsilon)\sum_{i \in [k]}\E[f_{S_{i-1}}(o_i)] \geq (1-\epsilon)f_S(O) \geq (1-\epsilon)(\texttt{OPT} - f(S)). \qedhere$$ \end{proof} A corollary of the lemmas above is that {\textsc{Adaptive Sequencing}} \ has $\O(\log(n)\log(k))$ adaptive rounds and provides an approximation that is arbitrarily close to $1/2$, in expectation. To obtain this guarantee with high probability we can simply run parallel instances of the while-loop in the algorithm and include the elements obtained from the best instance.
We also note that the solution $S$ returned by {\textsc{Adaptive Sequencing}} \ might have size smaller than ${\textsc{rank}}(\M)$, which causes an arbitrarily small loss for sufficiently large $\Delta$. We give the full details in Appendix~\ref{sec:appcomb}. \begin{restatable}{rThm}{thmcomb} \label{thm:comb} For any $\epsilon>0$, there is an $\O\left(\log(n)\log\left(\frac{k}{\epsilon}\right)\frac{1}{\epsilon^2}\right)$ adaptive algorithm that obtains a $1/2- \O(\epsilon)$ approximation with probability $1 - o(1)$ for maximizing a monotone submodular function under a matroid constraint. \end{restatable} In Appendix~\ref{sec:appcomb}, we generalize this result and obtain a $1/(P+1)- \O(\epsilon)$ approximation with high probability for the intersection of $P$ matroids. \section{The Accelerated Continuous Greedy Algorithm} \label{sec:continuous} In this section we describe the accelerated continuous greedy algorithm that achieves the main result of the paper. This algorithm employs the combinatorial algorithm from the previous section to construct a continuous solution which approximately maximizes the multilinear relaxation $F$ of the function $f$. This algorithm requires $\O(\log(n)\log(k))$ adaptive rounds and it produces a continuous solution whose approximation to the optimal solution is with high probability arbitrarily close to $1-1/e$. Finally, since the solution is continuous and we seek a feasible \emph{discrete} solution, it requires rounding. Fortunately, by using either dependent rounding~\cite{chekuri2009dependent} or contention resolution schemes~\cite{vondrak2011submodular} this can be done with an arbitrarily small loss in the approximation guarantee without any function evaluations, and hence without any additional adaptive rounds. 
\subsection{The algorithm} The accelerated continuous greedy algorithm follows the same principle as the (standard) continuous greedy algorithm~\cite{vondrak08}: at every iteration, the solution $\mathbf{x} \in[0,1]^n$ moves in the direction of a feasible set $S \in \M$. The crucial difference between the accelerated continuous greedy and the standard continuous greedy is in the choice of this set $S$ guiding the direction in which $\mathbf{x}$ moves. This difference allows the accelerated continuous greedy to terminate after a \emph{constant} number of iterations, each of which has $\O(\log(n)\log(k))$ adaptive rounds, in contrast to the continuous greedy which requires a linear number of iterations. To determine the direction in every iteration, the accelerated continuous greedy applies {\textsc{Adaptive Sequencing}} \ on the surrogate function $g$ that measures the marginal contribution to $\mathbf{x}$ when taking a step of size $\lambda$ in the direction of $S$. That is, $g(S) := F_\mathbf{x}(\lambda S) = F(\mathbf{x} + \lambda S) - F(\mathbf{x})$ where we abuse notation and write $\lambda S$ instead of $\lambda {\mathbf 1}_S$ for $\lambda \in[0,1]$ and $S\subseteq N$. Since $f$ is a monotone submodular function it is immediate that $g$ is monotone and submodular as well. 
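In Python pseudocode the outer loop just described looks roughly as follows; the `choose_direction` argument stands in for the call to {\textsc{Adaptive Sequencing}} on the surrogate $g$, and the interface is our own simplification.

```python
def accelerated_continuous_greedy(F, choose_direction, N, lam=0.05):
    # F: multilinear-extension oracle on fractional points (dict a -> x_a);
    # choose_direction(g) returns a feasible S approximately maximizing
    # g(S) = F_x(lam * 1_S). Only 1/lam update steps are taken in total.
    x = {a: 0.0 for a in N}
    for _ in range(round(1 / lam)):
        base = F(x)
        def g(T):
            # Surrogate: marginal value of a lam-step in the direction 1_T.
            y = {a: min(1.0, x[a] + (lam if a in T else 0.0)) for a in N}
            return F(y) - base
        S = choose_direction(g)
        for a in S:
            x[a] = min(1.0, x[a] + lam)
    return x
```

With step size $\lambda = \O(\epsilon)$ this makes only $1/\lambda$ updates, each powered by one low-adaptivity direction choice.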
\begin{algorithm}[H] \caption{{\textsc{Accelerated Continuous Greedy}}} \begin{algorithmic} \item[{\bf Input:}] matroid $\M$, step size $\lambda$ \STATE $\mathbf{x} \leftarrow {\mathbf 0}$ \STATE \textbf{for} $1/\lambda$ \text{iterations} \textbf{do} \STATE \ \ \ define $g:2^N \to \mathbb{R}$ to be $g(T) = F_{\mathbf{x}}(\lambda T)$ \STATE \ \ \ $S \leftarrow {\textsc{Adaptive Sequencing}}(g, \M)$ \STATE \ \ \ $\mathbf{x} \leftarrow \mathbf{x} + \lambda S$ \RETURN $\mathbf{x}$ \end{algorithmic} \label{alg:continuous} \end{algorithm} The analysis shows that in every one of the $1/\lambda$ iterations, {\textsc{Adaptive Sequencing}} \ finds $S$ such that the contribution of taking a step of size $\lambda$ in the direction of $S$ is approximately a $\lambda$ fraction of $\texttt{OPT} - F(\mathbf{x})$. For any $\lambda$ this is a sufficient condition for obtaining the $1 - 1/e - \epsilon$ guarantee. The reason why the standard continuous greedy cannot be implemented with a constant number of rounds $1/\lambda$ is that in every round it moves in the direction of ${\mathbf 1}_{S}$ for $S := \argmax_{T \in \M} \sum_{a \in T} g(a)$. When $\lambda$ is constant, $F_{\mathbf{x}}(\lambda S)$ can be arbitrarily low due to the potential overlap between high-valued singletons (see Appendix~\ref{sec:applayerone}). Selecting $S$ using {\textsc{Adaptive Sequencing}} \ is the crucial part of the accelerated continuous greedy which allows implementing it in a constant number of iterations. \subsection{Analysis} We start by giving a sufficient condition on {\textsc{Adaptive Sequencing}} \ to obtain the ${1-1/e-\O(\epsilon)}$ approximation guarantee. The analysis is standard and the proof is deferred to Appendix~\ref{sec:applayerone}. \begin{restatable}{rLem}{lemmetaoneExp} \label{lem:metaoneExp} For a given matroid $\M$ assume that ${\textsc{Adaptive Sequencing}}$ outputs $S \in \M$ s.t.
$\E_S\left[F_{\mathbf{x}}(\lambda S) \right] \geq (1 - \epsilon) \lambda (\texttt{OPT} - F(\mathbf{x}))$ at every iteration of {\textsc{Accelerated Continuous Greedy}} . Then {\textsc{Accelerated Continuous Greedy}} \ outputs $\mathbf{x} \in P(\M)$ s.t. $\E[F(\mathbf{x})] \geq \left(1 - 1/e - \epsilon\right) \texttt{OPT}$. \end{restatable} For a set $S=\{a_1,\ldots,a_k\}$ we define $S_i := \{a_1,\ldots,a_i\}$ and $S_{j:k} := \{a_{j}, \ldots, a_k\}$. We use this notation in the lemma below. The lemma is folklore and proved in Appendix~\ref{sec:applayerone} for completeness. \begin{restatable}{rLem}{lemmatroidO} \label{lem:matroidO} Let $\M$ be a matroid, then for any feasible sets $S=\{a_1,\ldots,a_k\}$ and $O$ of size $k$, there exists an ordering of $O = \{o_1, \ldots, o_k\}$ where for all $i \in [k]$, $S_i \cup O_{i+1:k} \in \M$ and $S_i \cap O_{i+1:k} = \emptyset$. \end{restatable} The following lemma is key in our analysis. We argue that unless the algorithm already constructed $S$ of sufficiently large value, the sum of the contributions of the optimal elements to $S$ is arbitrarily close to the desired $\lambda(\texttt{OPT}- F(\mathbf{x}))$. 
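As an aside, the ordering in Lemma~\ref{lem:matroidO} can be constructed explicitly by repeated matroid augmentation, choosing $o_k, o_{k-1}, \ldots, o_1$ in reverse. A sketch, with a hypothetical independence-oracle callable of our own naming:

```python
def order_optimal(S, O, is_independent):
    # Pick o_k, o_{k-1}, ..., o_1 so that S_{i-1} + {o_i, ..., o_k} is
    # independent for every i (and hence disjoint from S_{i-1}): at each
    # step the independent set S[:i-1] + tail has size k - 1 < |O|, so the
    # matroid augmentation property guarantees some o in O extends it.
    k = len(S)
    remaining, tail = list(O), []
    for i in range(k, 0, -1):
        base = S[:i - 1] + tail
        o_i = next(o for o in remaining
                   if o not in base and is_independent(base + [o]))
        remaining.remove(o_i)
        tail = [o_i] + tail
    return tail
```

On a partition matroid with unit capacities, the returned ordering pairs each $o_i$ against the $a_i$ occupying the same part, which is exactly the pairing the analysis uses.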
\begin{lemma} \label{lem:Oik} Assume that $g(S) \leq \lambda(\texttt{OPT} - F(\mathbf{x}))$, then $\sum_i g_{S \setminus O_{i:k}}(o_i) \geq \lambda (1 - \lambda) (\texttt{OPT} - F(\mathbf{x})).$ \end{lemma} \begin{proof} We first lower bound this sum of marginal contributions of optimal elements with the contribution of the optimal solution to the current solution $\mathbf{x} + \lambda S$ at the end of the iteration: \begin{align*} \sum_{i \in [k]} g_{S \setminus O_{i:k}}(o_i) = \sum_{i \in [k]} F_{\mathbf{x} + \lambda (S \setminus O_{i:k})}(\lambda o_i) \geq \sum_{i \in [k]} F_{\mathbf{x} + O_{i-1}+ \lambda S}(\lambda o_i) \geq \lambda \sum_{i \in [k]} F_{\mathbf{x} + O_{i-1} + \lambda S}(o_i) = \lambda F_{\mathbf{x} + \lambda S}(O) \end{align*} where the first inequality is by submodularity and the second by the multilinearity of $F$. In the standard analysis of greedy algorithms the optimal solution $O$ may overlap with the current solution. In the continuous algorithm, since the algorithm takes steps of size $\lambda$, we can bound the overlap between the solution at this iteration $\lambda S$ and the optimal solution: \begin{align*} F_{\mathbf{x} + \lambda S}(O) = F_{\mathbf{x}}(O + \lambda S) - F_{\mathbf{x}}(\lambda S) \geq F_{\mathbf{x}}(O) - \lambda(\texttt{OPT} - F(\mathbf{x})) \geq (1- \lambda) \left(\texttt{OPT} - F(\mathbf{x})\right) \end{align*} where the first inequality is by monotonicity and the lemma's assumption, and the second by monotonicity. \end{proof} As shown in Lemma~\ref{lem:matroidO}, {\textsc{Adaptive Sequencing}} \ picks elements $a_i$ with near-optimal marginal contributions. Together with Lemma~\ref{lem:Oik} we get the desired bound on the contribution of $\lambda S$ to $\mathbf{x}$. \begin{lemma} \label{lem:maincontinuous} Let $\Delta = \O\left(\frac{1}{\epsilon} \log\left(\frac{k}{\epsilon \lambda}\right) \right)$ and $\lambda = \O(\epsilon)$.
For any $\mathbf{x}$ such that $F(\mathbf{x}) < (1-1/e) \texttt{OPT}$, the set $S$ returned by ${\textsc{Adaptive Sequencing}}(g, \M)$ satisfies $\E\left[F_{\mathbf{x}}(\lambda S)\right] \geq (1- \O(\epsilon)) \lambda(\texttt{OPT} - F(\mathbf{x})).$ \end{lemma} \begin{proof} Initially, we have $t_i < \texttt{OPT}$. After $\Delta = \O\left(\frac{1}{\epsilon} \log\left(\frac{k}{\epsilon \lambda}\right)\right)$ iterations of the outer loop of {\textsc{Adaptive Sequencing}}, we get $t_f = (1-\epsilon)^{\Delta} \texttt{OPT} = \O\left(\frac{\epsilon \lambda\texttt{OPT}}{k}\right)$. We begin by adding dummy elements to $S$ so that $|S| = k$, which enables pairwise comparisons between $S$ and $O$. In particular, we consider $S'$, which is $S$ together with ${\textsc{rank}}(\M) - |S|$ dummy elements $a_{|S| + 1}, \ldots, a_k$ such that, for any $\mathbf{y}$ and $\lambda$, $F_{\mathbf{y}}(\lambda a_i) = t_f$, which is the value of $t$ when {\textsc{Adaptive Sequencing}} \ terminates. Thus, by Lemma~\ref{lem:invariants}, for dummy elements $a_i$, $g_{S_{i-1}}(a_i) = t_f \geq (1 - \epsilon) \max_{a : S_{i-1} \cup a \in \M} g_{S_{i-1}}(a)$. We will conclude the proof by showing that $S$ is a good approximation to $S'$. By Lemma~\ref{lem:marg}, the contribution of $a_i$ to $S_{i-1}$ approximates the optimal contribution to $S_{i-1}$: $$\E\left[F_{\mathbf{x}}(\lambda S')\right] = \sum_{i=1}^k \E\left[g_{S_{i-1}}(a_i)\right] \geq \sum_{i=1}^k (1 - \epsilon)^2 \max_{a : S_{i-1} \cup a \in \M} g_{S_{i-1}}(a). $$ By Lemma~\ref{lem:matroidO} and submodularity, we have $ \max_{a : S_{i-1} \cup a \in \M} g_{S_{i-1}}(a) \geq g_{S \setminus O_{i:k}}(o_i).$ By Lemma~\ref{lem:Oik}, we also have $\sum_{i=1}^k g_{S \setminus O_{i:k}}(o_i) \geq \lambda (1 - \lambda) (\texttt{OPT} - F(\mathbf{x}))$.
Combining the previous pieces, we obtain $$\E\left[F_{\mathbf{x}}(\lambda S')\right] \geq (1 - \epsilon)^2\lambda (1 - \lambda) (\texttt{OPT} - F(\mathbf{x})).$$ We conclude by removing the value of dummy elements, $$ \E\left[F_{\mathbf{x}}(\lambda S)\right] = \E\left[F_{\mathbf{x}}(\lambda S') - F_{\mathbf{x} + \lambda S}(\lambda (S'\setminus S)) \right] \geq \E\left[F_{\mathbf{x}}(\lambda S')\right] - k t_f \geq \E\left[F_{\mathbf{x}}(\lambda S')\right] - \epsilon \lambda\texttt{OPT}.$$ The lemma assumes that $F(\mathbf{x}) < (1-1/e)\texttt{OPT}$ and $\lambda = \O(\epsilon)$, so $\texttt{OPT} \leq e(\texttt{OPT} - F(\mathbf{x}))$ and $ \epsilon \lambda\texttt{OPT} = \O(\epsilon) \lambda(\texttt{OPT} - F(\mathbf{x}))$. We conclude that $\E\left[F_{\mathbf{x}}(\lambda S)\right] \geq \left(1 - \O(\epsilon)\right)\lambda(\texttt{OPT} - F(\mathbf{x}))$. \end{proof} The approximation guarantee of the {\textsc{Accelerated Continuous Greedy}} \ follows from lemmas~\ref{lem:maincontinuous} and~\ref{lem:metaoneExp}, and the adaptivity from Lemma~\ref{lem:adaptivity}. We defer the proof to Appendix~\ref{sec:applayerone}. \begin{restatable}{rThm}{thmexpectation} \label{thm:expectation} For any $\epsilon>0$ {\textsc{Accelerated Continuous Greedy}} \ makes $ \O\left(\log(n) \log\left(\frac{k}{\epsilon^2}\right) \frac{1}{\epsilon^2}\right)$ adaptive rounds and obtains a $1-1/e - \O(\epsilon)$ approximation in expectation for maximizing a monotone submodular function under a matroid constraint. \end{restatable} The final step in our analysis shows that the guarantee of {\textsc{Accelerated Continuous Greedy}} \ holds not only in expectation but also with high probability. To do so we argue in the lemma below that if over all iterations $i$, $F_{\mathbf{x}}(\lambda S)$ is close \emph{on average over the rounds} to $\lambda(\texttt{OPT} - F(\mathbf{x}))$, we obtain an approximation arbitrarily close to $1 - 1/e$ with high probability. 
The proof is in Appendix~\ref{sec:applayerone}. \begin{restatable}{rLem}{lemmetaone} \label{lem:metaone} Assume that ${\textsc{Adaptive Sequencing}}$ outputs $S \in \M$ s.t. $F_{\mathbf{x}}(\lambda S) \geq \alpha_i \lambda (\texttt{OPT} - F(\mathbf{x}))$ at every iteration $i$ of {\textsc{Accelerated Continuous Greedy}} \ and that $\lambda \sum_{i=1}^{\lambda^{-1}} \alpha_i \geq 1 -\epsilon$. Then {\textsc{Accelerated Continuous Greedy}} \ outputs $\mathbf{x} \in P(\M)$ s.t. $F(\mathbf{x}) \geq \left(1 -1/e - \epsilon\right) \texttt{OPT}$. \end{restatable} The approximation $\alpha_i$ obtained at iteration $i$ is $1 - \O(\epsilon)$ in expectation by Lemma~\ref{lem:maincontinuous}. Thus, by a simple concentration bound, w.h.p. it is close to $1 - \O(\epsilon)$ on average over all iterations. Together with Lemma~\ref{lem:metaone}, this implies the $1 -1/e - \epsilon$ approximation w.h.p. The details are in Appendix~\ref{sec:applayerone}. \begin{restatable}{rThm}{thmhp} \label{thm:hp} {\textsc{Accelerated Continuous Greedy}} \ is an $\O\left(\log(n) \log\left(\frac{k}{\epsilon \lambda}\right) \frac{1}{\epsilon \lambda}\right)$ adaptive algorithm that, with probability $1 - \delta$, obtains a $1-1/e - \O(\epsilon)$ approximation for maximizing a monotone submodular function under a matroid constraint, with step size $\lambda = \O\left(\epsilon^2 \log^{-1}\left(\frac{1}{\delta}\right)\right)$. \end{restatable} \section{Parallelization of Matroid Oracle Queries} \label{sec:matroid} Throughout the paper we relied on {\textsc{Random Sequence}} \ as a simple procedure to generate a random feasible sequence to achieve our $\O(\log(n) \log(k))$ adaptive algorithm with an approximation arbitrarily close to $1-1/e$. Although {\textsc{Random Sequence}} \ has zero adaptivity, it makes ${\textsc{rank}}(\M)$ sequential steps depending on membership in the matroid to generate the sets $X_1,\ldots,X_{{\textsc{rank}}(\M)}$.
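For intuition, the sequential baseline might look as follows (a sketch with a hypothetical independence-oracle callable, not the paper's code):

```python
import random

def random_sequence_sequential(N, is_independent):
    # rank(M) sequential steps: at step i, X_i is the set of elements that
    # still extend the current prefix, and a uniformly random one of them
    # is appended; the loop ends once no element extends the prefix.
    seq = []
    X = [a for a in N if is_independent(seq + [a])]
    while X:
        seq.append(random.choice(X))
        X = [a for a in X if is_independent(seq + [a])]
    return seq
```

Within one step the $|X|$ oracle queries can all be issued in parallel, but the ${\textsc{rank}}(\M)$ steps themselves are inherently sequential, which is precisely what the parallel variants of this section remove.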
From a practical perspective, we may wish to accelerate this process via parallelization. In this section we show how to do so in the standard \emph{rank} and \emph{independence} oracle models for matroids. \subsection{Matroid rank oracles} Given a rank oracle for the matroid, we get an algorithm that only makes $\O\left (\log(n)\log(k) \right )$ steps of matroid oracle queries and has polylogarithmic depth on a PRAM machine. Recall that a rank oracle for $\M$ is given a set $S$ and returns its rank, i.e. the maximum size of an independent subset $T \subseteq S$. The number of steps of matroid queries of an algorithm is the number of sequential steps it makes when polynomially-many queries to a matroid oracle for $\M$ can be executed in parallel in each step~\cite{karp1988complexity}.\footnote{More precisely, it allows $p$ queries per step and the results depend on $p$; we consider the case of $p = \poly(n)$.} We use a parallel algorithm from~\cite{karp1988complexity} designed for constructing a base of a matroid with a rank oracle, and show that it satisfies the random feasibility property. \begin{algorithm}[H] \caption{Parallel {\textsc{Random Sequence}} \ for matroid constraint with rank oracle} \begin{algorithmic} \item[{\bf Input:}] matroid $\M$, ground set $N$ \STATE $b_1, \ldots, b_{|N|} \leftarrow $ random permutation of $N$ \STATE $r_i \leftarrow {\textsc{rank}}(\{b_1, \ldots, b_{i}\})$, for all $i \in \{1, \ldots, |N|\}$ \STATE $a_i \leftarrow i$th $b_j$ s.t. $r_j - r_{j-1} = 1$ \RETURN $a_1, \ldots, a_{r_{|N|}}$ \end{algorithmic} \label{alg:rank} \end{algorithm} With Algorithm~\ref{alg:rank} as the {\textsc{Random Sequence}} \ subroutine for {\textsc{Adaptive Sequencing}}, we obtain the following result for matroid rank oracles (proof in Appendix~\ref{sec:appmatroid}).
\begin{restatable}{rThm}{thmrank} For any $\epsilon>0$, there is an algorithm that obtains, with probability $1 - o(1)$, a $1/2 - \O(\epsilon)$ approximation with $\O\left(\log(n)\log\left(\frac{k}{\epsilon}\right)\frac{1}{\epsilon^2}\right)$ adaptivity and steps of matroid rank queries. \end{restatable} This gives $\O(\log(n) \log(k))$ adaptivity and steps of rank queries with $1-1/e-\epsilon$ approximation for maximizing the multilinear relaxation and $1/2-\epsilon$ approximation for maximizing a monotone submodular function under a matroid constraint. In particular, we get polylogarithmic depth on a PRAM machine with a rank oracle. \subsection{Matroid independence oracles} Recall that an independence oracle for $\M$ is an oracle which given $S\subseteq N$ answers whether $S \in \M$ or $S \not\in \M$. We give a subroutine that requires $\tilde{\O}(n^{1/2})$ steps of independence matroid oracle queries and show that $\tilde{\Omega}(n^{1/3})$ steps are necessary. Similar to the case of rank oracles we use a parallel algorithm from~\cite{karp1988complexity} for constructing a base of a matroid that can be used as the {\textsc{Random Sequence}} \ subroutine while satisfying the random feasibility condition. \paragraph{$\tilde{O}(\sqrt{n})$ upper bound.} We use the algorithm from \cite{karp1988complexity} for constructing a base of a matroid.
\begin{algorithm}[H] \caption{Parallel {\textsc{Random Sequence}} \ for matroid constraint with independence oracle} \begin{algorithmic} \item[{\bf Input:}] matroid $\M$, ground set $N$ \STATE $c \leftarrow 0, X \leftarrow N$ \STATE \textbf{while} $|X| > 0$ \textbf{do} \STATE \ \ \ $b_1, \ldots, b_{|X|} \leftarrow $ random permutation of $X$ \STATE \ \ \ $i^{\star} \leftarrow \max\{i : \{a_1, \ldots, a_c\} \cup \{b_1, \ldots, b_i\} \in \M\}$ \STATE \ \ \ $a_{c + 1}, \ldots, a_{c + i^{\star}} \leftarrow b_1, \ldots, b_{i^{\star}}$ \STATE \ \ \ $c \leftarrow c + i^{\star}$ \STATE \ \ \ $X \leftarrow \{a \in X : \{a_1, \ldots, a_c, a\} \in \M\}$ \RETURN $a_1, \ldots, a_c$ \end{algorithmic} \label{alg:3} \end{algorithm} With Algorithm~\ref{alg:3} as the {\textsc{Random Sequence}} \ subroutine for {\textsc{Adaptive Sequencing}}, we obtain the following result with independence oracles. We defer the proof to Appendix~\ref{sec:appmatroid}. \begin{restatable}{rThm}{thmindependence} \label{thm:independence} There is an algorithm that obtains, w.p. $1 - o(1)$, a $1/2 - \O(\epsilon)$ approximation with $\O\left(\log(n)\log\left(\frac{k}{\epsilon}\right)\frac{1}{\epsilon^2}\right)$ adaptivity and $O\left(\sqrt{n}\log(n)\log\left(\frac{k}{\epsilon}\right)\frac{1}{\epsilon^2}\right)$ steps of independence queries. \end{restatable} This gives $\O(\log(n) \log(k))$ adaptivity and $\sqrt{n}\log(n)\log(k)$ steps of independence queries with $1-1/e-\epsilon$ approximation for maximizing the multilinear relaxation and $1/2-\epsilon$ approximation for maximizing a monotone submodular function under a matroid constraint. In particular, even with independence oracles we get a sublinear algorithm in the PRAM model. \paragraph{$\tilde{\Omega}(n^{1/3})$ lower bound.} We show that there is no algorithm which obtains a constant approximation with less than $\tilde{\Omega}(n^{1/3})$ steps of independence queries, even for a cardinality function $f(S) = |S|$.
We do so by using the same construction for a hard matroid instance as in \cite{karp1988complexity} used to show an $\tilde{\Omega}(n^{1/3})$ lower bound on the number of steps of independence queries for constructing a base of a matroid. Although the matroid instance is the same, we use a different approach since the proof technique of \cite{karp1988complexity} does not hold in our case (see proof and discussion in Appendix~\ref{sec:appmatroid}). \begin{restatable}{rThm}{thmlowerbound} \label{thm:lowerbound} For any constant $\alpha$, there is no algorithm with $\frac{n^{1/3}}{4 \alpha \log^2 n} - 1$ steps of $\poly(n)$ matroid queries which, w.p. strictly greater than $n^{-\Omega(\log n)}$, obtains an $\alpha$ approximation for maximizing a cardinality function under a partition matroid constraint when given an independence oracle. \end{restatable} To the best of our knowledge, the gap between the lower and upper bounds of $\tilde{\Omega}(n^{1/3})$ and $\tilde{\O}(n^{1/2})$ parallel steps for constructing a matroid basis given an independence oracle remains open since~\cite{karp1988complexity}. Closing this gap for submodular maximization under a matroid constraint given an independence oracle is an interesting open problem that would also close the gap of~\cite{karp1988complexity}.
\section{Introduction} \label{sec:intro} \begin{figure*}[!ht] \centering \begin{tabular}{ccc} \includegraphics[width = 0.27\linewidth]{img/ill1} & \includegraphics[width = 0.27\linewidth]{img/ill2} & \includegraphics[width = 0.27\linewidth]{img/ill3} \\ {\small (a)} & {\small (b)} & {\small (c)} \end{tabular} \caption{Illustration of the guidance process. (a) Calibration Wizard proposes the next best pose based on the previous calibration results. (b) The camera should be moved towards the proposed target pose. (c) When the camera is close enough to the suggested pose, the system acquires an image and then proposes a next pose. Demo code:~\url{https://github.com/pengsongyou/CalibrationWizard}.} \label{fig:wizard-process} \end{figure*} Camera calibration is a prerequisite to many methods and applications in computer vision and photogrammetry, in particular for most problems where 3D reconstruction or motion estimation is required. In this paper we adopt the popular usage of planar calibration objects, as introduced\footnote{Planar targets were used before, but essentially in combination with motion stages, in order to effectively generate 3D targets \cite{tsai1987versatile,lenz1988techniques,wei1993complete,li1994camera}} by \cite{sturm1999plane,zhang2000flexible} and made available through OpenCV~\cite{opencv} and Matlab~\cite{bouguet2000matlab}, even though our approach can be directly applied to 3D calibration objects too. It is well known that when using planar calibration targets, the accuracy of the resulting calibration depends strongly on the poses used to acquire images. From the theoretical study of degenerate sets of camera poses in \cite{sturm1999plane,zhang2000flexible}, it follows for example intuitively that it is important to vary the orientation (rotation) of the camera as much as possible during the acquisition process. 
It is also widely known to practitioners that a satisfactory calibration requires images such that the target successively covers the entire image area, otherwise the estimation of radial distortion and other parameters usually remains suboptimal. We have observed that inexperienced users usually do not take calibration images that lead to a sufficiently accurate calibration. Several efforts have been made in the past to guide users in placing the camera. In photogrammetry for instance, the so-called network design problem was addressed, through an off-line process: how to place a given number of cameras such as to obtain an as accurate as possible 3D reconstruction of an object of assumed proportions \cite{Mason1995a,Olague2002a}. Optimal camera poses for camera calibration have been computed in~\cite{ricolfe2011camera}, however only for constrained camera motions and, especially, only for the linear approach of \cite{zhang2000flexible}, whereas we consider the non-linear optimization for calibration. Also, these poses are difficult to realize, even for expert users. The ROS~\cite{quigley2009ros} monocular camera calibration toolbox provides text instructions so that users can move the target accordingly. More recently, some cameras, such as the ZED stereo system from StereoLabs, come with software that interactively guides the user to good poses during the calibration process; these poses are however pre-computed for the particular stereo system and this software cannot be used to calibrate other systems, especially monocular cameras. In this paper we propose a system that guides the user through a simple graphical user interface (GUI) in order to move the camera to poses that are optimal for calibrating a camera. Optimality is considered for the bundle adjustment type non-linear optimization formulation for calibration.
For each new image to be acquired, the system computes the optimal pose, i.e.\ which adds most new information on intrinsic parameters, in addition to that provided by the already acquired images. The most closely related works we are aware of are \cite{richardson2013aprilcal,rojtberg2018efficient}. They also suggest next best poses to the user. However, unlike ours, they are both strategy-based methods, where suggestions are selected from a fixed dataset of pre-defined poses, which may not be enough for various camera models or calibration targets. In our approach, each new suggested pose results from a global optimization step. Furthermore, we propose a novel method for incorporating the uncertainty of corner point positions rigorously throughout the entire pipeline, for calibration but also next best pose computation. Our approach is not specific to any camera model; in principle, any monocular camera model can be plugged into it, although tests with very wide field of view cameras need to be done. The paper is organized as follows. Section~\ref{sec:method} describes the theory and mathematical details of Calibration Wizard. Section~\ref{sec:autocorrelation} shows how to incorporate corner point uncertainty in the process. Experiments are reported in Section~\ref{sec:results}, followed by conclusions in Section~\ref{sec:conclusions}. \section{Methodology} \label{sec:method} Our goal is to provide an interactive guidance for the acquisition of good images for camera calibration. An initial calibration is carried out from a few (typically 3) freely taken images of a calibration target. The system then guides the user to successive next best poses, through a simple GUI, cf.\ Fig~\ref{fig:wizard-process}. The underlying computations are explained in the following subsections. 
\subsection{Calibration formulation} \label{method-formulation} Let the camera be modeled by two \textbf{local projection functions} $x=q_x(\Theta, S), y=q_y(\Theta, S)$ that map a 3D point $S$ given in the local camera coordinate system, to the image coordinates $x$ and $y$. These functions depend on intrinsic parameters $\Theta$ (assumed constant across all images). For example, the standard 3-parameter pinhole model consisting of a focal length $f$ and principal point $(u,v)$, i.e.\ with $\Theta=(f, u, v)^\top$, has the following local projection functions (a full model with radial distortion is handled in supplementary material, section 3): \begin{equation} q_x(\Theta,S) = u + f \frac{S_1}{S_3} \enspace\enspace\enspace q_y(\Theta,S) = v + f \frac{S_2}{S_3} \label{eq:pinhole} \end{equation} Let camera pose be given by a 6-vector $\Pi$ of extrinsic parameters. We use the representation $\Pi=(t_1,t_2,t_3,\alpha,\beta,\gamma)^\top$, where $t=(t_1,t_2,t_3)^\top$ is a translation vector and the 3 angles define a rotation matrix as the product of rotations about the 3 coordinate axes: $R = R_z(\gamma) R_y(\beta) R_x(\alpha)$. $R$ and $t$ map 3D points $Q$ from the world coordinate system to the local camera coordinate system according to: \begin{equation} S = RQ + t \end{equation} Other parameterizations may be used too, e.g.\ quaternions. A camera with pose $\Pi$ is thus described by two \textbf{global projection functions} $p_x$ and $p_y$: \begin{align} p_x(\Theta, \Pi, Q) &= q_x(\Theta, RQ+t) = q_x(\Theta, S)\\ p_y(\Theta, \Pi, Q) &= q_y(\Theta, RQ+t) = q_y(\Theta, S) \end{align} Since a planar calibration target is used, the 3D points $Q$ are pre-defined and their corresponding $Z$ coordinates $Q_3$ are set to $0$. We now consider $m$ images of a target consisting of $n$ calibration points.
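The pinhole model and pose parameterization above can be sketched directly in code; this is a minimal pure-Python rendering with helper names of our own choosing, not the system's implementation:

```python
import math

def rotation(alpha, beta, gamma):
    # R = Rz(gamma) Ry(beta) Rx(alpha): product of rotations about the
    # three coordinate axes, as in the pose parameterization above.
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    Rx = [[1, 0, 0], [0, ca, -sa], [0, sa, ca]]
    Ry = [[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]]
    Rz = [[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]]
    mul = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(3))
                         for j in range(3)] for i in range(3)]
    return mul(Rz, mul(Ry, Rx))

def project(theta, pi, Q):
    # Global projection p(theta, pi, Q) for the 3-parameter pinhole model
    # theta = (f, u, v), pose pi = (t1, t2, t3, alpha, beta, gamma):
    # S = R Q + t, then x = u + f S1/S3 and y = v + f S2/S3.
    f, u, v = theta
    R = rotation(*pi[3:])
    S = [sum(R[i][j] * Q[j] for j in range(3)) + pi[i] for i in range(3)]
    return (u + f * S[0] / S[2], v + f * S[1] / S[2])
```

For instance, with identity rotation, $t=(0,0,2)^\top$ and $Q=(1,0,0)^\top$, the point projects to $(u + f/2,\; v)$.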
Inputs to the calibration are thus the image points $(x_{ij}, y_{ij})$ for $i=1\cdots m$ and $j=1\cdots n$, which are detected by any corner detector, e.g.\ the OpenCV \verb+findChessboardCorners+ function~\cite{opencv}. For ease of explanation, we assume here that all points are visible in all images, although this is not required in the implementation. Optimal calibration requires a non-linear simultaneous optimization of all intrinsic and extrinsic parameters (bundle adjustment). This comes down to the minimization of the geometric reprojection error \cite{hartley2003multiple}: \begin{equation} \min_{\Theta, \{\Pi_i\}} \sum_{i,j} \left( x_{ij} - p_x(\Theta, \Pi_i, Q_j) \right)^2 + \left( y_{ij} - p_y(\Theta, \Pi_i, Q_j) \right)^2 \label{eq:cost} \end{equation} Usually, local non-linear least square optimizers are used, such as Levenberg-Marquardt. Our system is independent of the optimizer used; all it requires is the computation, at the found solution, of the partial derivatives of \eqref{eq:cost}, see next. \subsection{Computation of next best pose} \label{ss:next_pose} We suppose that we have already acquired $m$ images and estimated intrinsic parameters and poses from these, by solving \eqref{eq:cost}. The goal now is to compute the next best pose; the objective is to reduce, as much as possible, the expected uncertainty on the estimated intrinsic parameters. Let us consider the Jacobian matrix $J$ of the cost function~\eqref{eq:cost}, evaluated at the estimated parameters. $J$ contains the partial derivatives of the cost function's residuals, i.e.\ of terms $\hat{x}_{ij} = x_{ij} - p_x(\Theta, \Pi_i, Q_j)$ and $\hat{y}_{ij} = y_{ij} - p_y(\Theta, \Pi_i, Q_j)$. $J$ contains one row per residual. Its columns are usually arranged in groups, such that the first group contains the partial derivatives with respect to the intrinsic parameters $\Theta$, and subsequent groups of columns, the derivatives relative to extrinsic parameters of the successive images. 
The highly sparse form of $J$ is thus as follows (we assume here that there are $k$ intrinsic parameters): \begin{equation} \hspace{-0.7em} J = \begin{pmatrix} A_1 & B_1 & 0 & \cdots & 0 \\ A_2 & 0 & B_2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ A_m & 0 & 0 & \cdots & B_m \end{pmatrix} \label{eq:jacob2} \end{equation} \begin{equation*} A_i = \begin{pmatrix} \frac{\partial \hat{x}_{i1}}{\partial \Theta_1} & \cdots & \frac{\partial \hat{x}_{i1}}{\partial \Theta_k} \\ \frac{\partial \hat{y}_{i1}}{\partial \Theta_1} & \cdots & \frac{\partial \hat{y}_{i1}}{\partial \Theta_k} \\ \vdots & \ddots & \vdots \\ \frac{\partial \hat{x}_{in}}{\partial \Theta_1} & \cdots & \frac{\partial \hat{x}_{in}}{\partial \Theta_k} \\ \frac{\partial \hat{y}_{in}}{\partial \Theta_1} & \cdots & \frac{\partial \hat{y}_{in}}{\partial \Theta_k} \end{pmatrix} \enspace B_i = \begin{pmatrix} \frac{\partial \hat{x}_{i1}}{\partial \Pi_{i,1}} & \cdots & \frac{\partial \hat{x}_{i1}}{\partial \Pi_{i,6}} \\ \frac{\partial \hat{y}_{i1}}{\partial \Pi_{i,1}} & \cdots & \frac{\partial \hat{y}_{i1}}{\partial \Pi_{i,6}} \\ \vdots & \ddots & \vdots \\ \frac{\partial \hat{x}_{in}}{\partial \Pi_{i,1}} & \cdots & \frac{\partial \hat{x}_{in}}{\partial \Pi_{i,6}} \\ \frac{\partial \hat{y}_{in}}{\partial \Pi_{i,1}} & \cdots & \frac{\partial \hat{y}_{in}}{\partial \Pi_{i,6}} \end{pmatrix} \end{equation*} where $A_i$ are matrices of size $2n \times k$, containing the partial derivatives of residuals with respect to $k$ intrinsic parameters, whereas $B_i$ are matrices of size $2n \times 6$, containing the partial derivatives with respect to extrinsic parameters. Now, the so-called information matrix is computed as $J^\top J$: \begin{equation} J^\top J = \begin{pmatrix} \sum\limits_i A_i^\top A_i & A_1^\top B_1 & A_2^\top B_2 & \cdots & A_m^\top B_m \\ B_1^\top A_1 & B_1^\top B_1 & 0 & \cdots & 0 \\ B_2^\top A_2 & 0 & B_2^\top B_2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ B_m^\top A_m & 0 & 0 & \cdots & B_m^\top B_m \end{pmatrix} \label{eq:infoMat} \end{equation} Like $J$, $J^\top J$ is highly sparse.
Importantly, its inverse $(J^\top J)^{-1}$ provides an estimation of the covariance matrix of the estimated intrinsic and extrinsic parameters. For camera calibration, we are only interested in the covariance matrix of the intrinsic parameters, i.e.\ the upper-left $k\times k$ sub-matrix of $(J^\top J)^{-1}$. Due to the special structure of $(J^\top J)^{-1}$, it can be computed efficiently. Let \begin{align*} U &= \sum_i A_i^\top A_i \\ V &= \text{diag}\left(B_1^\top B_1, \cdots, B_m^\top B_m\right) \\ W &= \begin{pmatrix} A_1^\top B_1 & \cdots & A_m^\top B_m \end{pmatrix} \end{align*} Then, \begin{equation} J^\top J = \begin{pmatrix} U & W\\ W^\top & V \end{pmatrix} \end{equation} and as described in~\cite{hartley2003multiple}, the upper-left sub-matrix of $(J^\top J)^{-1}$ is given by $\Sigma = (U-W V^{-1} W^\top)^{-1}$, i.e.\ the inverse of a $k\times k$ symmetric matrix. Let us now return to our goal, determining the next best pose $\Pi_{m+1}$. The outline of how to achieve this is as follows. We extend the Jacobian matrix in Eq.~\eqref{eq:jacob2} with a part corresponding to an additional image, whose pose is parameterized by $\Pi_{m+1}$. The coefficients in $A_{m+1}$ and $B_{m+1}$, are thus functions of $\Pi_{m+1}$. Naturally, $\Pi_{m+1}$ is also implicitly embedded in the intrinsic parameter's covariance matrix associated with this extended system. To reduce the uncertainty of the calibration, we wish to determine $\Pi_{m+1}$ that makes $\Sigma$ as ``small'' as possible. Inspired by~\cite{haner2015phd}, we choose to minimize the trace of this $k \times k$ matrix. Since we wish to compute the next best pose within the entire 3D working space, we use a global optimization method. 
Our experiments suggest that simulated annealing~\cite{van1987simulated} or ISRES~\cite{runarsson2005search} work well for this small optimization problem~\footnote{We did not consider the stopping criterion in the current version, but one could simply stop our method when the relative residual of the trace of the covariance matrix mentioned above is smaller than a threshold.}. Especially the latter works in interactive time. Note that the computation of the partial derivatives used to build the $A_i$ and $B_i$ matrices can be done very efficiently using the chain rule. Further, computation of $\Sigma$ for different trials of $\Pi_{m+1}$ can also be done highly efficiently by appropriate pre-computations of the parts of matrices $U$ and $W V^{-1} W^\top$ that do not depend on $\Pi_{m+1}$. See more details in the supplementary material, section 2. \section{Taking into Account the Uncertainty of Corner Points} \label{sec:autocorrelation} \begin{figure*}[t]\centering {\renewcommand{\arraystretch}{0.4} \begin{tabular}{cccccc} \multirow{3}{*}[1.2cm]{\includegraphics[width = 0.33\linewidth]{img/graph_blur_255.png}} & \includegraphics[width = 0.07\linewidth, frame]{img/30_0.png}& \includegraphics[width = 0.07\linewidth, frame]{img/50_0.png}& \includegraphics[width = 0.07\linewidth, frame]{img/70_0.png}& \includegraphics[width = 0.07\linewidth, frame]{img/90_0.png}& \includegraphics[width = 0.07\linewidth, frame]{img/110_0.png} \\ & \includegraphics[width = 0.07\linewidth, frame]{img/30_1.png}& \includegraphics[width = 0.07\linewidth, frame]{img/50_1.png}& \includegraphics[width = 0.07\linewidth, frame]{img/70_1.png}& \includegraphics[width = 0.07\linewidth, frame]{img/90_1.png}& \includegraphics[width = 0.07\linewidth, frame]{img/110_1.png} \\ & \includegraphics[width = 0.07\linewidth, frame]{img/30_2.png}& \includegraphics[width = 0.07\linewidth, frame]{img/50_2.png}& \includegraphics[width = 0.07\linewidth, frame]{img/70_2.png}& \includegraphics[width = 0.07\linewidth, 
frame]{img/90_2.png}& \includegraphics[width = 0.07\linewidth, frame]{img/110_2.png}\\ & $30^\circ$& $50^\circ$& $70^\circ$& $90^\circ$& $110^\circ$ \end{tabular}} \caption{Uncertainty of corner position as a function of opening angle and blur. Left: plot of the first eigenvalue of the autocorrelation matrix, over opening angle and for different blur levels (Gaussian blur for $\sigma=0,1,2,3$). Right: corners for different opening angles and blur levels ($\sigma=0,1,2$) and computed $95\%$ confidence level uncertainty ellipses (enlarged 10$\times$ for display).} \label{fig:corner_uncertainty} \end{figure*} So far, we have not used information on the uncertainty of corner point positions: in Eq.~\eqref{eq:cost}, all residuals have the same weight (equal to $1$). Ideally, in any geometric computer vision formulation, one should incorporate estimates of uncertainty when available. In the following, we explain this for our problem, from two aspects: first for the actual calibration, i.e.\ the parameter estimation in Eq.~\eqref{eq:cost}. Second, more originally, for computing the next best pose. \subsection{Corner Uncertainty in Calibration} Consider a corner point extracted in an image; the uncertainty of its position can be estimated by computing the autocorrelation matrix $C$ for a window of a given size around the point (see for instance \cite{harris1988a}). Concretely, $C$ is an estimate of the inverse of the covariance matrix of the corner position. Now, let $C_{ij}$ be the autocorrelation matrix for the $j$th corner in the $i$th image. 
The $C_{ij}$ can be incorporated in the calibration process by inserting the block-diagonal matrix composed of them in the computation of the information matrix of Eq.~\eqref{eq:infoMat}: \begin{equation} H = J^\top \text{diag}(C_{11}, C_{12}, \cdots, C_{1n}, C_{21}, \cdots, C_{mn}) J \label{eq.corrected_information_matrix} \end{equation} This uncertainty-corrected information matrix can then be used by Gauss-Newton or Levenberg-Marquardt type optimizers for estimating the calibration (for other optimizers, the autocorrelation matrix may have to be used differently) as well as for the quantification of the uncertainty of the computed intrinsic parameters. \subsection{Next Best Pose Computation} The second usage of corner uncertainty concerns the computation of the next best pose. In particular, for each hypothetical next pose that we examine, we can project the points of the calibration target to the image plane, using this hypothetical pose and the current estimates of intrinsic parameters. This gives the expected positions of corner points, if an actual image were to be taken from that pose. Importantly, what we would like to compute in addition is the expected uncertainty of corner extraction, i.e.\ the uncertainty of the corner positions extracted in the expected image. Or, equivalently, the autocorrelation matrices computed from the expected pixel neighborhoods around these corner points. If we are able to do so, we can plug these into the estimation of the next best pose, by inserting the expected autocorrelation matrices in Eq.~\eqref{eq:infoMat} the same way as done in Eq.~\eqref{eq.corrected_information_matrix}. Before explaining how to estimate expected autocorrelation matrices, we describe the benefits of this approach. Indeed, we have found that without doing so, the next best pose is sometimes rather extreme, with a strong grazing viewing angle relative to the calibration target.
This makes sense in terms of pure geometric information contributed by such a pose, but is not appropriate in practice, since with extreme viewing angles image corners are highly elongated: they may be difficult to extract in the actual image and also their uncertainty is very large in one direction. While this may be compensated by using images acquired from poses whose viewing directions are approximately perpendicular to one another, it is desirable and indeed more principled to fully integrate corner uncertainty right from the start in the computation of the next best pose. Let us now explain how to compute expected autocorrelation matrices of corner points, for a hypothetical next pose. This is based on simple reasoning. The overall shape of an image corner (in our case, a crossing in a checkerboard pattern) is entirely represented by the ``opening angle'' of the corner. What we do is to precompute autocorrelation matrices for synthetic corners for the entire range of opening angles, cf.\ Fig.~\ref{fig:corner_uncertainty}: the top row on the right shows ideal corners generated by discretizing continuous black-and-white corners, for a few different opening angles. For each of them, the autocorrelation matrix (cf.\ \cite{harris1988a}) was computed; as mentioned, its inverse is an estimate of the corner position's covariance matrix. The figure shows the plots of $95\%$ uncertainty ellipses derived from these covariance matrices (enlarged 10 times, for better visibility). Such ideal corners are of course not realistic, so we repeat the same process for images blurred by Gaussian kernels of different $\sigma$ (2nd and 3rd rows of the figure). Naturally, blurrier corner images lead to smaller autocorrelation matrices and larger uncertainty ellipses. One may note several things. First, between $30^\circ$ and $90^\circ$ opening angles, the largest uncertainty differs by a factor of about $2$; see the uncertainty ellipses in Fig.~\ref{fig:corner_uncertainty}.
Second, intuitively, the uncertainty ellipse of a corner with opening angle $\alpha$ is the same as with opening angle $180^\circ - \alpha$, but turned by $90^\circ$ (cf.\ the 3rd and 5th columns in Fig.~\ref{fig:corner_uncertainty}, for $70^\circ$ and $110^\circ$). Hence, the eigenvalues of the autocorrelation matrix of a corner with opening angle $\alpha$ are the same as those for $180^\circ - \alpha$, but they are ``swapped'' (associated with the respective opposite eigenvector). The left part of Fig.~\ref{fig:corner_uncertainty} shows plots of the first eigenvalue (associated with eigenvector $(0,1)$) of the autocorrelation matrix $C$ as a function of opening angle. Due to the above observation, the second eigenvalue (associated with eigenvector $(1,0)$) for opening angle $\alpha$ is simply given by the first eigenvalue for $180^\circ - \alpha$. The graphs on the left of the figure confirm that increasing blur decreases the eigenvalues of the autocorrelation matrices. Let us note that we also simulated Gaussian pixel noise on the corner images; even for larger than realistic noise levels, the impact on the results shown in Fig.~\ref{fig:corner_uncertainty} was negligible. Let us finally explain how to use these results. First, we determine the average blur level in the already acquired images, from the strength of image gradients across edges, and then select the graph in Fig.~\ref{fig:corner_uncertainty} associated with the closest simulated blur (for more precision, one could also compute the graph for the actual detected blur level). Let the function represented by the graph be $f(\alpha)$ -- one can represent it as a lookup table or fit a polynomial to the data of the graph (we did the latter). This allows us to compute the diagonal coefficients of the autocorrelation matrix from the opening angle, as $f(\alpha)$ and $f(180^\circ - \alpha)$. Second, so far we have only considered axis-aligned corners.
If we now consider a corner with opening angle $\alpha$, but that is rotated by an angle $\beta$, then its autocorrelation matrix is nothing else than: \begin{equation} \footnotesize C = \begin{pmatrix} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{pmatrix} \begin{pmatrix} f(\alpha) & 0 \\ 0 & f(180^\circ - \alpha) \end{pmatrix} \begin{pmatrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{pmatrix} \end{equation} We now have all that is needed to incorporate corner uncertainty in next best pose computation. For each hypothetical pose we project, as shown above, all calibration points. For each point (corner), using its neighbors, we can compute the opening angle $\alpha$ and rotation angle $\beta$ and thus, the expected autocorrelation matrix $C$. It can then be inserted in the computation of the information matrix, like in Eq.~\eqref{eq.corrected_information_matrix}. The effect of this approach on proposing the next best pose is to strike a balance between maximizing pure geometric ``strength'' of a pose (often achieved by strong tilting of the calibration pattern) and maximizing corner extraction accuracy (in fronto-parallel poses, corners are overall closest to exhibiting right angles, i.e.\ where their auto-correlation matrices have maximal trace). \subsection{Possible Extensions} \label{sec:autocorrelation.next} So far we have described the basic idea for incorporating corner uncertainty. The following extensions may be applied; we plan this for future work. The values plotted in Fig.~\ref{fig:corner_uncertainty} are obtained for corners exhibiting the full range of 256 greylevels (black to white). In real images, the range is of course smaller. If the difference between largest and smallest greylevels is $x$, then the plotted values (left of Fig.~\ref{fig:corner_uncertainty}) are divided by $255^2/x^2$ (easy to prove but not shown due to lack of space). In turn, the uncertainty ellipses are scaled up by a factor of $255/x$. 
This should be taken into account when predicting auto-correlation matrices for the next pose. The range of greylevels depends on various factors, such as distance to the camera and lighting conditions. One can learn the relationship between pose and greylevel range for a given calibration setup as part of the calibration process and use it to predict the next best pose. Similarly, one might predict expected blur per corner, based on a learned relationship between blur and distance to camera, e.g.\ by inferring the camera's depth of field during calibration. We observed that within the depth of field, image blur is linearly related to the distance to the camera. Using these observations should allow us to further improve the next best pose proposal, by achieving an even better compromise between geometric information of the pose and accuracy of image processing (here, corner extraction), both of which influence calibration accuracy. \section{Results and Evaluation} \label{sec:results} Synthetic and real-world experiments are performed here to evaluate the effectiveness of our Calibration Wizard. Note that in the optimization process, we require that all corner points lie within the image plane; otherwise, the optimization loss is set to an extremely large value. \subsection{Synthetic evaluations} To assess the proposed system, we simulate the process of camera calibration with pre-defined intrinsic parameters, using Matlab. Here we first briefly introduce the procedure of producing random checkerboard poses. \textbf{Data preparation.} First, $9\times 6$ target points are defined, with $Z$ components set to $0$. Next, the 3D position of the virtual camera is created, with $X$ and $Y$ coordinates proportional to $Z$ within a plausible range. Then the camera is first oriented such that its optical axis goes through the center of the target and finally, rotations about the three local coordinate axes by random angles between $-15^{\circ}$ and $15^{\circ}$ are applied.
Now, from the position, rotation matrix and the given intrinsic parameters, we can project the 3D target points to the image, and finally add zero-mean Gaussian noise of a fixed level to them. Moreover, we ensure that all 54 points are located within the field of view of a $640\times 480$ image plane. \textbf{Evaluation of accuracy and precision.} We primarily compare the calibration accuracy obtained from random images, with that from images acquired as proposed by our system, with and without taking into account the autocorrelation matrix explained in the previous section. To this end, the experimental process is as follows: create 3 initial random images, from which we follow three ways of acquiring the calibration results: \begin{itemize} \setlength\itemsep{0.1em} \item Produce many other random images \item Obtain 17 images proposed by the wizard, so $3 + 17 = 20$ images in total \item Obtain 17 wizard images taking the autocorrelation matrix into account \end{itemize} $100$ trials are performed for each experiment, hence we acquire $100$ samples of intrinsic parameters for each. In the first test, we set $f = 800, (u,v) = (320,240)$ and radial distortion coefficients $ k_1 = 0.01, k_2 = 0.1$. Fig.~\ref{fig:syn-test1} illustrates the statistical results of the estimated focal length from $100$ trials. It can easily be noticed that the focal length acquired using our wizard is not only much closer to the ground truth, but also more concentrated (precise) than the estimation from pure random images. For example, the estimated focal length acquired from only $3$ random + $4$ wizard images already outperforms the one from 20 random images. Moreover, not shown in the graph: $3$ random + $17$ wizard images still give higher accuracy than $60$ random images, which directly demonstrates the usefulness of our approach.
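The data preparation steps above can be sketched as follows (a simplified numpy version using a frontal pose for brevity; the actual experiments use the random pose sampling described in the text):

```python
import numpy as np

def project(Q, R, t, f, u0, v0, k1, k2):
    """Pinhole projection with two radial distortion coefficients."""
    S = Q @ R.T + t                              # camera coordinates
    x, y = S[:, 0] / S[:, 2], S[:, 1] / S[:, 2]  # normalized coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2             # radial distortion factor
    return np.stack([f * d * x + u0, f * d * y + v0], axis=1)

# 9x6 planar target (Z = 0), viewed frontally from a distance of 30 units.
gx, gy = np.meshgrid(np.arange(9) - 4.0, np.arange(6) - 2.5)
Q = np.stack([gx.ravel(), gy.ravel(), np.zeros(54)], axis=1)
R, t = np.eye(3), np.array([0.0, 0.0, 30.0])
pts = project(Q, R, t, 800.0, 320.0, 240.0, 0.01, 0.1)
noisy = pts + np.random.normal(0.0, 0.5, pts.shape)  # zero-mean Gaussian noise
```

For this frontal pose, all 54 projected points fall inside the $640\times 480$ image plane, as required in the experiments.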
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width = 0.48\linewidth]{img/synFequal800_mean_smallK} & \includegraphics[width = 0.48\linewidth]{img/synFequal800_std_smallK} \end{tabular} \caption{Comparisons of the focal length estimated from three schemes on synthetic data: randomly taken images, calibration wizard and wizard using autocorrelation matrix. $f = 800, (u,v) = (320,240), k_1 = 0.01, k_2 = 0.1$. Initial calibration was done with $3$ random images. Left: Mean values of the estimated focal length, where the {\color{red} red} dashed line represents the ground truth $f=800$. Right: Standard deviations of the estimated focal length. Wizard images provide significantly more accurate and precise calibration results than random ones.} \label{fig:syn-test1} \end{figure} \begin{figure}[!t] \centering \begin{tabular}{cc} \includegraphics[width = 0.48\linewidth]{img/synFequal800_mean} & \includegraphics[width = 0.48\linewidth]{img/synFequal800_std} \\ \includegraphics[width = 0.48\linewidth]{img/synK1equal05_mean} & \includegraphics[width = 0.48\linewidth]{img/synK1equal05_std} \\ \includegraphics[width = 0.48\linewidth]{img/synK2equal1_mean} & \includegraphics[width = 0.48\linewidth]{img/synK2equal1_std} \\ \end{tabular} \caption{Comparisons of the intrinsic parameters estimated from three schemes on synthetic data: randomly taken images, calibration wizard without and with autocorrelation matrix. $f = 800, (u,v) = (320,240), k_1 = \mathbf{0.5}, k_2 = \mathbf{1}$. Wizard images achieve superior performance over random images on all intrinsic parameters. Considering the autocorrelation matrices can further provide the most accurate and precise estimation outcomes.} \label{fig:syn-test2} \end{figure} However, in this experiment, we notice that our system does not show much advantage over randomly-taken images in the estimation of the distortion coefficients $k_1$ and $k_2$.
Thus, a second experiment is performed with larger radial distortion coefficients $k_1 = 0.5$ and $k_2 = 1$, while the focal length and principal point stay the same. Fig.~\ref{fig:syn-test2} shows the effectiveness of the proposed system, especially when taking the autocorrelation matrices of the target points into account. When the radial distortion is large, we notice that not only both distortion coefficients, but also the focal length and principal point (not shown here) estimated from purely random images deviate considerably from the ground truth, as was also reported in~\cite{weng1992camera}. In contrast, our system still yields estimates centered around the ground truth, with a far lower standard deviation. Furthermore, compared to the basic version of the proposed system, both mean and standard deviation are best when the autocorrelation matrices of the feature points are taken into account. \textbf{Robustness to noise.} We are also interested in the performance of our approach with respect to the level of noise added to 2D corner points. In this experiment, we compare 4 different configurations: $20$ random images, $40$ random images, $3$ random + 17 wizard images and $3$ random + 17 wizard-Auto images. Zero-mean Gaussian noise with standard deviation of $0.1$, $0.2$, $0.5$, $1$, and $2$ pixels has been added to the image points respectively, and the comparisons are shown in Fig.~\ref{fig:syn-test3}. Specifically, it can be distinctly seen from the figure that, even when unrealistically strong noise is added ($\sigma = 2$), both versions of our approach (3 random + 17 wizard images) still provide better accuracy than even 40 random images. More synthetic experiments can be found in the supplementary material.
\begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[width = 0.48\linewidth]{img/synNoise-mean} & \includegraphics[width = 0.48\linewidth]{img/synNoise-std} \end{tabular} \caption{Comparisons among various calibration schemes of the robustness to noise. Zero-mean Gaussian noise with standard deviation of $0.1$, $0.2$, $0.5$, $1$, and $2$ pixels is added to 2D target points. The focal length estimated from both of our methods with only 20 images is more accurate (left) and precise (right) than that from 40 random images, especially when the noise level is high.} \label{fig:syn-test3} \end{figure} \subsection{Real-world evaluations} \label{sec:result-real} Although the performance of the Calibration Wizard has been demonstrated on synthetic data, we ultimately want to evaluate the effectiveness of its proposed next best pose on real-world examples. We designed two experiments for this purpose where we also compare with calibrations obtained with freely taken images. Evaluating calibration results is difficult since ground truth is not readily available; we thus devised two experiments where calibration quality is assessed through evaluating results of applications -- pose estimation and SfM. We used the commonly-used Logitech C270H HD Webcam in our experiments. It has an image size of $640\times 480$ and a field of view of around $60^\circ$. Fig.~\ref{fig:img-example} provides some sample calibration images. One may notice that wizard-suggested images indeed correspond to poses often chosen by experts for stable calibration: large inclination of the target along the viewing direction, targets covering well the field of view and/or reaching the image border. In the following, we denote by ``$x$-free'' the calibration results from $x$ images acquired freely by an experienced user using OpenCV, compared to ``$x$-wizard'' where guidance was used.
\noindent\textbf{Pose estimation.} Similar to the experiment performed in~\cite{ha2017deltille}, in order to quantitatively evaluate the quality of camera calibration, we design a first real-world experiment where, apart from the images used for calibration, we also acquire a number of extra checkerboard images which are only used for evaluation, cf.\ Fig.~\ref{fig:pnp}. First, $4$ corner points are utilized to calculate the pose with EP$n$P~\cite{lepetit2009epnp}, given the intrinsic parameters provided by the calibration. Then, since we have assumed the $Z$ components of the target points to be $0$ in the world coordinate system, it is straightforward to back-project the remaining $50$ points to 3D, onto the target plane, using the calibrated intrinsic parameters and the computed pose (cf.\ Fig.~\ref{fig:pnp} right). The smaller the Euclidean distance between the back-projected and theoretical 3D points, the better the calibration. There are 80 images in total for testing, so we have $50\times 80 = 4,000$ points for assessment. The mean and standard deviation of the 4,000 distance errors are used as metrics. Table~\ref{tab:3d-test} demonstrates that our system, when using only 15 images for calibration, still exceeds the performance of using 50 freely acquired images and exceeds that of using 20 such images by about 5\%. This seemingly small improvement may be considered significant since it may be expected that differences are not large in this experiment. Even with a moderately incorrect calibration, pose estimation from the 4 outermost target points will somewhat balance the reconstruction errors for the inner corner points. \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width = 0.44\linewidth]{img/pnp-min} & \includegraphics[width = 0.47\linewidth,frame]{img/pnp_comp_new} \end{tabular} \caption{Pose estimation test.
Left: checkerboard image where $4$ {\color{green}green} corner points are used for pose estimation, and the remaining $50$ {\color{red}red} points for reconstruction. Right: $50$ ground-truth points in black and residuals between them and the reconstructed corner points in {\color{red}red} (enlarged 50 times for visualization).} \label{fig:pnp} \end{figure} \begin{table}[!htb]\centering \caption{Pose estimation test with Logitech C270H HD Webcam.} \footnotesize \begin{tabular}{c|c|c||c|c|c} \hline & mean & std & & mean & std \\ \hline \hline \; 3-free & 0.856 & 1.130 & 3-free + 4-wizard & 0.862 & 1.155 \\ 10-free & 0.815 & 1.115 & 3-free + 6-wizard & 0.783 & 1.092 \\ 20-free & 0.802 & 1.115 & 3-free + 9-wizard & 0.788 & 1.104 \\ 50-free & 0.789 & 1.108 & \textbf{3-free + 12-wizard} & \textbf{0.763} & \textbf{1.082} \\ \hline \end{tabular} \label{tab:3d-test} \end{table} We also tested our approach on the FaceTime HD camera of a MacBook Pro. This camera has higher resolution and a different field of view compared to other commonly used webcams, so it is a suitable alternative to show the robustness of our method. As shown in Table~\ref{tab:3d-test-mac}, adding only one or two wizard images can greatly reduce the Euclidean distance and outperform the results obtained from freely taking many more images.
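The back-projection step of this test can be sketched as follows (a minimal numpy version that ignores lens distortion for brevity; in the actual experiment the calibrated distortion is of course taken into account):

```python
import numpy as np

def backproject_to_plane(uv, K, R, t):
    """Back-project pixels onto the target plane Z=0 using the
    plane-to-image homography H = K [r1 r2 t]."""
    H = K @ np.column_stack([R[:, 0], R[:, 1], t])
    uvh = np.column_stack([uv, np.ones(len(uv))])
    X = np.linalg.solve(H, uvh.T)        # homogeneous plane coordinates
    return (X[:2] / X[2]).T              # (X, Y) on the plane, Z = 0

# Round trip: project known plane points, then back-project the pixels.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 20.0])
X_true = np.array([[0.0, 0.0], [1.0, 2.0], [-3.0, 1.5]])      # on Z = 0
P = np.column_stack([X_true, np.zeros(3)]) @ R.T + t          # camera coords
uvh = (K @ P.T).T
uv = uvh[:, :2] / uvh[:, 2:3]
X_rec = backproject_to_plane(uv, K, R, t)
```

With a perfect calibration the round trip is exact; with a slightly wrong $K$ or pose, the residuals between back-projected and theoretical points are exactly the quantity measured in Table~\ref{tab:3d-test}.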
\begin{table}[!htb]\centering \caption{Pose estimation test with FaceTime HD camera.} \footnotesize \begin{tabular}{c|c|c||c|c|c} \hline & mean & std & & mean & std \\ \hline \hline \; 3-free & 2.503 & 2.557 & 3-free + 1-wizard & 1.455 & 1.630 \\ 10-free & 1.664 & 1.839 & \textbf{3-free + 2-wizard} & \textbf{1.165} & \textbf{1.491} \\ 20-free & 1.255 & 1.606 & & & \\\hline \end{tabular} \label{tab:3d-test-mac} \end{table} \begin{figure*}[!ht] \centering \includegraphics[width = 0.18\linewidth]{img/rand1.jpg} \includegraphics[width = 0.18\linewidth]{img/rand2.jpg} \includegraphics[width = 0.18\linewidth]{img/rand3.jpg} \includegraphics[width = 0.18\linewidth]{img/rand5.jpg} \includegraphics[width = 0.18\linewidth]{img/rand4.jpg}\\ \includegraphics[width = 0.18\linewidth]{img/wizard1.jpg} \includegraphics[width = 0.18\linewidth]{img/wizard2.jpg} \includegraphics[width = 0.18\linewidth]{img/wizard3.jpg} \includegraphics[width = 0.18\linewidth]{img/wizard5.jpg} \includegraphics[width = 0.18\linewidth]{img/wizard4.jpg} \caption{Sample images used for the calibration in real-world tests. Top row: freely-taken images. Bottom row: wizard guided images.} \label{fig:img-example} \end{figure*} \noindent\textbf{Structure from Motion test.} In this last experiment, we assess our Calibration Wizard by investigating the quality of 3D reconstruction in a structure-from-motion (SfM) setting. The object to be reconstructed is the backrest of a carved wooden bed, as shown in Fig.~\ref{fig:sfm}. We devised a simple but meaningful experiment to evaluate the quality of the calibration, as follows. We captured images from the far left of the object, gradually moved to the right side, then proceeded backwards and returned to the left, approximately to the starting point.
The acquired images are then provided as input to VisualSfM~\cite{wu2013towards}; we added an identical copy of the first image of the sequence, to the end of the sequence, but without ``telling'' this to the SfM tool and without using a loop detection method during SfM. The purpose of doing so is the following: if the calibration is accurate, the incremental SfM should return poses for the first image and for the added identical last image that are close to one another. Measuring the difference in pose is not sufficient since the global scale of the reconstruction can be arbitrarily chosen by the SfM tool for each trial. So instead, we project all 3D points that were reconstructed on the basis of interest points extracted in the first image, using the pose computed by SfM for the identical last image, and measure the distance between the two sets of 2D points thus constructed. This distance is independent of the scene scale and is thus a good indicator of the quality of the SfM result, which in turn is a good indicator of the quality of the calibration used for SfM. Note that we only match two consecutive frames instead of performing full pairwise matching within the given sequence. In this case, 2D errors accumulate, so the reconstruction results highlight the calibration accuracy more strongly. The experiment is described as follows. We first obtain the 5-parameter calibration result (including two radial distortion coefficients), from 3 freely acquired images (``3-free''). Then, on the one hand, another 17 images are taken, from which the intrinsic parameters of ``7-free'' and ``20-free'' are obtained. On the other hand, we take another 4 sequential images proposed by the Calibration Wizard, from which we get the intrinsic parameters of ``3-free + 2-wizard'' and ``3-free + 4-wizard''. We then load VisualSfM with the intrinsic parameters of these five configurations respectively, along with the backrest sequence taken by the same camera.
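The scale-independent 2D error described above can be sketched as follows (a minimal numpy version assuming an undistorted pinhole model, with poses given as $(R, t)$ pairs):

```python
import numpy as np

def loop_closure_error(points_3d, K, pose_first, pose_last):
    """Mean 2D distance (in pixels) between projections of the same 3D
    points under the first pose and under the pose of the duplicated
    last image; being measured in the image, it is independent of the
    global scene scale."""
    def proj(pose):
        R, t = pose
        S = points_3d @ R.T + t          # camera coordinates
        uv = (K @ S.T).T
        return uv[:, :2] / uv[:, 2:3]    # pixel coordinates
    diffs = proj(pose_first) - proj(pose_last)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

# Toy example: identical poses give zero error, a shifted pose does not.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 10.0], [1.0, 2.0, 12.0], [-2.0, 1.0, 15.0]])
pose_a = (np.eye(3), np.zeros(3))
pose_b = (np.eye(3), np.array([0.05, 0.0, 0.0]))
e_same = loop_closure_error(pts, K, pose_a, pose_a)
e_diff = loop_closure_error(pts, K, pose_a, pose_b)
```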
It is worth mentioning that we conduct five trials of VisualSfM for each configuration in order to lessen the influence of the stochastic nature of the SfM algorithm. \begin{figure}[!htb] \centering \includegraphics[width = 0.9\linewidth]{img/pano} \includegraphics[width = 0.9\linewidth]{img/capture1_edit} \caption{Structure from motion test. Top: Panorama stitched by Hugin (\url{http://hugin.sourceforge.net}), showing the test scene. Bottom: Result of applying VisualSfM~\cite{wu2013towards} to build a 3D model. We started capturing images from the left and moved clockwise, and finally came back approximately to the starting point.} \label{fig:sfm} \end{figure} \begin{table}[t] \centering \caption{2D errors of SfM tests under various calibration schemes.} \begin{tabular}{c||r@{}l|r@{}l|r@{}l} \hline Calibration scheme & \multicolumn{2}{c|}{mean} & \multicolumn{2}{c|}{std} & \multicolumn{2}{c}{median} \\ \hline \hline 3-free & 43 & .6 & 11 & .5 & 44 & .3 \\ 7-free & 30 & .5 & 11 & .7 & 31 & .7 \\ 20-free & 15 & .7 & 10 & .5 & 16 & .1 \\ 3-free $+$ 2-wizard \quad & 17 & .4 & 10 & .8 & 13 & .2 \\ 3-free $+$ 4-wizard \quad & \textbf{14} & \textbf{.4} & \textbf{9} & \textbf{.1} & \textbf{10} & \textbf{.6} \\ \hline \end{tabular} \label{tab:sfm} \end{table} Results are listed in Table~\ref{tab:sfm}, where we evaluate the 2D errors across all 5 trials. Some observations can be made: with only 5 images in total, ``3-free + 2-wizard'' has already provided an accuracy competitive with 20 freely-taken images. Both ``7-free'' and ``3-free + 4-wizard'' use 7 images for calibration, but the latter clearly has far lower errors in all respects. It is reasonable to conclude that our method notably improves the quality of calibration and 3D reconstruction with a remarkably small number of calibration images. Finally, it is worth mentioning that all real-world experiments were performed with a 2.7 GHz Intel i5 CPU (no GPU used).
To compute the next best pose with a $9\times6$ target, our un-optimized C++ code took about $0.4s$ for 3 target images and $1.5s$ for 15 images (increasing roughly linearly per image), but we found that 10 images are usually sufficient for a good calibration. \section{Conclusions} \label{sec:conclusions} Calibration Wizard is a novel approach which can guide any user through the calibration process. We have shown that accurate intrinsic parameters can be obtained from only a small number of images suggested by this out-of-the-box system. Some ideas for future work were already mentioned in section \ref{sec:autocorrelation.next}. We also plan to apply the approach to very wide field of view cameras such as fisheyes. {\small \bibliographystyle{ieee_fullname} \section{Another Synthetic Experiment} \label{s.experiments} When users try to calibrate a camera, they tend to randomly move the camera around and get as many images as possible for calibration. With our calibration wizard, we have already shown the superiority over such a random capture scheme in the main paper. Now, considering that calibration is an interactive procedure, i.e., a user can move the camera around, we could actually keep all the intermediate frames when moving from one position to the next; all these frames can then be used for calibration. Here we perform another synthetic test to validate that the calibration can still be improved by moving to optimal poses under this scenario. Assuming the frame rate is $25$ fps and it takes 1 second to move from one pose to the next, 25 images can then be acquired within \textit{one path}. To this end, the experimental process is as follows. First, we create 4 random poses in exactly the same way as described in the main paper, from which we acquire 3 paths by pose interpolation. Thus, there are $3\times 25 = 75$ frames in total for the case ``Random''. This process mimics moving the camera from one pose to another continuously, three times.
Then, we randomly take 5 images within the first path to get the initial calibration for our wizard. The wizard then proposes a next best pose and while the simulated camera moves there, 25 additional images for calibration are acquired. This is done 3 times, for 75 additional calibration images in total. This input leads to the results labeled as ``Wizard'' in the following. The same process is also applied when taking the autocorrelation matrix into account in the next best pose computation (``Wizard-Auto''). Finally, we simply compare the calibration results acquired from randomly generated paths and wizard-generated ones. As illustrated in Fig.~\ref{fig:syn-test21_supp}, we can clearly notice that the calibration results from our system have higher precision and accuracy than the results from random paths. We also perform the comparison with respect to the level of noise added to 2D corner points. In this experiment, zero-mean Gaussian noise with standard deviation of $0.1$, $0.2$, $0.5$, $1$, and $2$ pixels has been added to corner points, and the comparisons are shown in Fig.~\ref{fig:syn-test3_supp}. Again, even when unrealistically strong noise is added ($\sigma = 2$), our system provides a much better estimate of the focal length. Moreover, when considering the autocorrelation matrices, our system appears even more robust to noise. Note that we only show results for some intrinsic parameters in these two experiments; similar results can be expected for the other parameters, such as the principal point.
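The pose interpolation used to generate the continuous paths can be sketched as follows (a minimal numpy version; translations are interpolated linearly and orientations by quaternion slerp, which is one of several reasonable choices):

```python
import numpy as np

def slerp(q0, q1, s):
    """Spherical linear interpolation between unit quaternions."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                        # take the short arc
        q1, dot = -q1, -dot
    theta = np.arccos(min(dot, 1.0))
    if theta < 1e-8:
        return q0
    return (np.sin((1 - s) * theta) * q0 + np.sin(s * theta) * q1) / np.sin(theta)

def interpolate_path(q0, t0, q1, t1, n=25):
    """n intermediate camera poses (quaternion, translation) between two
    key poses, mimicking a 1 s move at 25 fps."""
    return [(slerp(q0, q1, s), (1 - s) * t0 + s * t1)
            for s in np.linspace(0.0, 1.0, n)]

# Example: identity orientation to a 90-degree rotation about z.
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
path = interpolate_path(q0, np.zeros(3), q1, np.array([1.0, 0.0, 0.0]))
```

The midpoint of this path is a 45-degree rotation halfway along the translation, as expected.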
\begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[width = 0.48\linewidth]{img/synF_mean} & \includegraphics[width = 0.48\linewidth]{img/synF_std} \\ \includegraphics[width = 0.48\linewidth]{img/synK1_mean} & \includegraphics[width = 0.48\linewidth]{img/synK1_std} \\ \includegraphics[width = 0.48\linewidth]{img/synK2_mean} & \includegraphics[width = 0.48\linewidth]{img/synK2_std} \\ \end{tabular} \caption{Comparisons of the intrinsic parameters estimated from three schemes on synthetic data: paths generated from random poses, and paths generated from the poses proposed by the calibration wizard without and with the autocorrelation matrix. $f = 800, (u,v) = (320,240), k_1 = \mathbf{0.5}, k_2 = \mathbf{1}$. {\color{red} Red} dashed lines represent the ground truth values. Wizard images achieve superior performance over random images on all intrinsic parameters.} \label{fig:syn-test21_supp} \end{figure} \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[width = 0.48\linewidth]{img/noise_75_mean.eps} & \includegraphics[width = 0.48\linewidth]{img/noise_75_std.eps} \end{tabular} \caption{Comparisons of the calibration schemes concerning the robustness to added noise. Zero-mean Gaussian noise with standard deviation of $0.1$, $0.2$, $0.5$, $1$, and $2$ pixels is added to the 2D target points. The intrinsic parameters estimated from the wizard-generated paths (shown are results for the focal length) are more accurate (left) and precise (right) than those from randomly generated paths, especially when the noise level is high.
The total number of images is 75.} \label{fig:syn-test3_supp} \end{figure} \section{An efficient way of computing the covariance matrix of intrinsic parameters} \label{s.computations} As already explained in the paper, the Jacobian matrix $J$ can be denoted as: \begin{equation} J = \begin{pmatrix} A_1 & B_1 & 0 & \cdots & 0 \\ A_2 & 0 & B_2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ A_m & 0 & 0 & \cdots & B_m \end{pmatrix} \end{equation} Let us examine in detail the incorporation of the autocorrelation matrices of image corner points in the computation of the information matrix, as shown in Eq.~(9) in the paper: \begin{equation} H = J^\top \text{diag}(C_{11}, C_{12}, \cdots, C_{1n}, C_{21}, \cdots, C_{mn}) J \label{eq.information_matrix} \end{equation} Let us decompose the matrices $A_i$ and $B_i$ appearing in the definition of $J$, into matrices $A_{ij}$ and $B_{ij}$ containing the residuals associated with a single image point each (point $j$ in image $i$): \[ A_{ij} = \begin{pmatrix} \frac{\partial \hat{x}_{ij}}{\partial \Theta_1} & \cdots & \frac{\partial \hat{x}_{ij}}{\partial \Theta_k} \\ \frac{\partial \hat{y}_{ij}}{\partial \Theta_1} & \cdots & \frac{\partial \hat{y}_{ij}}{\partial \Theta_k} \end{pmatrix} \enspace B_{ij} = \begin{pmatrix} \frac{\partial \hat{x}_{ij}}{\partial \Pi_{i,1}} & \cdots & \frac{\partial \hat{x}_{ij}}{\partial \Pi_{i,6}} \\ \frac{\partial \hat{y}_{ij}}{\partial \Pi_{i,1}} & \cdots & \frac{\partial \hat{y}_{ij}}{\partial \Pi_{i,6}} \end{pmatrix} \] The $A_{ij}$ are of size $2\times k$ ($k$ is the number of intrinsic parameters) and the $B_{ij}$ of size $2\times 6$.
We then have: \[ A_i = \begin{pmatrix} A_{i1} \\ \vdots \\ A_{in} \end{pmatrix} \enspace B_i = \begin{pmatrix} B_{i1} \\ \vdots \\ B_{in} \end{pmatrix} \] Now, the information matrix $H$ of Eq.~\eqref{eq.information_matrix} can be structured as: \begin{equation} H = \begin{pmatrix} U & W\\ W^\top & V \end{pmatrix} \label{eq:infoMat_new} \end{equation} where \begin{eqnarray} U & = & \sum_i U_i \\ \text{with}\ U_i & = & \sum_j A_{ij}^\top C_{ij} A_{ij} \\ W & = & \begin{pmatrix} W_1 & \cdots & W_m \end{pmatrix} \\ \text{with}\ W_i & = & \sum_j A_{ij}^\top C_{ij} B_{ij} \\ V & = & \begin{pmatrix} V_1 & & & \\ & V_2 & & \\ & & \ddots & \\ & & & V_m \end{pmatrix} \\ \text{with}\ V_i & = & \sum_j B_{ij}^\top C_{ij} B_{ij} \end{eqnarray} $U$ is a symmetric matrix of size $k\times k$, $W$ is of size $k \times (6m)$ (remember that $m$ is the number of images) and $V$ is a block-diagonal matrix consisting of $m$ symmetric $6\times6$ matrices $V_i$. As described in~\cite{hartley2003multiple}, the upper-left sub-matrix of $H^{-1}$ is given by \begin{equation} \Sigma = (U-W V^{-1} W^\top)^{+} \label{eq:sigma_big} \end{equation} which is a pseudo-inverse of a $k\times k$ symmetric matrix. Using the above definitions of $U$, $V$ and $W$, we can rewrite $\Sigma$ as: \begin{equation} \Sigma = \left\{ \sum_i \left( U_i - W_i V_i^{-1} W_i^\top \right) \right\}^{+} \label{eq.covariance} \end{equation} The computation is efficient. The $V_i$ are symmetric and of size $6\times6$, hence their inversion is cheap. Other than that, the computation only involves products and sums of small matrices and one final inversion of a $k \times k$ symmetric matrix. Let us now consider the computation of the next best pose. As explained in the paper, we use global optimization algorithms for this purpose. They will need to compute $\Sigma$ and its trace multiple times, for multiple hypothetical next poses.
It is thus interesting to study how to cut computation times by performing adequate pre-computations. This is very simple in our case. Let image $m+1$ be the next image, for which to compute the optimal pose. Images $1$ to $m$ are thus already acquired and we have computed intrinsic and pose parameters for them. All we need to do is to compute once the following part of Eq.~(\ref{eq.covariance}), for images $1$ to $m$: \begin{equation} A = \sum_{i=1}^m \left( U_i - W_i V_i^{-1} W_i^\top \right) \end{equation} Then, for each hypothetical pose for image $m+1$, we only need to compute \begin{equation} \Sigma = \left\{ A + U_{m+1} - W_{m+1} V_{m+1}^{-1} W_{m+1}^\top \right\}^{+} \end{equation} This computation is efficient: besides the computation of partial derivatives of the projection function for the hypothetical new pose (see section \ref{s.derivatives} for details) and the sums and products of small matrices (of size at most $\max(k,6) \times \max(k,6)$), it only involves the inversion of a symmetric matrix of size $6\times6$ and of a symmetric matrix of size $k\times k$. Note that in this section we handled the incorporation of autocorrelation matrices. If one wishes to work without them, it suffices to replace the $C_{ij}$ by identity matrices. \section{Details on the computation of partial derivatives in $J$} \label{s.derivatives} In the following, we use shorthand notations for partial derivatives, which make for easier reading.
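The pre-computation just described can be sketched as follows (again with synthetic data; `schur_term`, `sigma_for` and the other names are ours, not the paper's code): the contribution of the already-acquired images is accumulated once, and each candidate pose only adds one further Schur-complement term before the final pseudo-inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4   # number of intrinsic parameters (illustrative)

def schur_term(A, B, C):
    """Per-image term U_i - W_i V_i^{-1} W_i^T; A: (n,2,k), B: (n,2,6), C: (n,2,2)."""
    U = np.einsum('jab,jac,jcd->bd', A, C, A)
    W = np.einsum('jab,jac,jcd->bd', A, C, B)
    V = np.einsum('jab,jac,jcd->bd', B, C, B)
    return U - W @ np.linalg.solve(V, W.T)

def random_image(n=15):
    """Synthetic stand-in for one image's Jacobian blocks and SPD point weights."""
    G = rng.normal(size=(n, 2, 2))
    return (rng.normal(size=(n, 2, k)), rng.normal(size=(n, 2, 6)),
            np.einsum('jab,jcb->jac', G, G) + np.eye(2))

acquired = [random_image() for _ in range(3)]
precomputed = sum(schur_term(*img) for img in acquired)   # the matrix A, computed once

def sigma_for(candidate):
    """Intrinsics covariance if `candidate` were the next acquired image."""
    return np.linalg.pinv(precomputed + schur_term(*candidate))

candidates = [random_image() for _ in range(5)]
best = min(range(len(candidates)), key=lambda i: np.trace(sigma_for(candidates[i])))
```

A global optimizer would evaluate `sigma_for` for many hypothetical poses; only the candidate's own small-matrix products are recomputed each time.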
If $A$ is a vector of size $a$ and $B$ a vector of size $b$ (with possibly, $a=1$ or $b=1$), then we write: \begin{equation} \frac{\partial A}{\partial B} = \begin{pmatrix} \frac{\partial A_1}{\partial B_1} & \cdots & \frac{\partial A_1}{\partial B_b} \\ \vdots & \ddots & \vdots \\ \frac{\partial A_a}{\partial B_1} & \cdots & \frac{\partial A_a}{\partial B_b} \end{pmatrix} \end{equation} Also, if $A$ is a matrix and $B$ a scalar, then $\frac{\partial A}{\partial B}$ is the matrix gathering the partial derivatives of the coefficients of $A$ relative to the scalar $B$, in the same order as they appear in $A$. As we have seen in the paper, $3D$ to $2D$ projections can be written as: \begin{align} p_x(\Theta, \Pi, Q) &= q_x(\Theta, RQ+t) = q_x(\Theta, S)\\ p_y(\Theta, \Pi, Q) &= q_y(\Theta, RQ+t) = q_y(\Theta, S) \end{align} Thanks to the chain rule, the derivatives of residuals $\hat{x}$ (for their definition, see section 2 of the main paper) with respect to extrinsic parameters can be decomposed as: \begin{eqnarray} \frac{\partial \hat{x}}{\partial R} & = & \frac{\partial p_x}{\partial R} = \frac{\partial q_x}{\partial S}\frac{\partial S}{\partial R} \label{eq.pd1} \\ \frac{\partial \hat{x}}{\partial t} & = & \frac{\partial p_x}{\partial t} = \frac{\partial q_x}{\partial S}\frac{\partial S}{\partial t} \label{eq.pd2} \end{eqnarray} and likewise for partial derivatives of $\hat{y}$. 
Here, we use the informal notation \begin{equation} \partial R = \partial \begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix} \end{equation} where the three angles $\alpha, \beta, \gamma$ parameterize the rotation matrix $R$ as shown on page 2 of the main paper and as reproduced here: \begin{equation} R = R_z(\gamma) R_y(\beta) R_x(\alpha) \end{equation} $\partial q_x/\partial S$ and $\partial q_y/\partial S$ depend on the projection functions for the camera model used for calibration (see below for an example, the pinhole model with one radial distortion coefficient), while $\partial S/\partial R$ and $\partial S/\partial t$ are generic for all camera models. Remember that \begin{equation} S = RQ+t \label{eq.S} \end{equation} Thus: \makeatletter \def\tagform@#1{\maketag@@@{\normalsize(#1)\@@italiccorr}} \makeatother {\small \begin{align} \label{eq:dSdR} \frac{\partial S}{\partial R} &= \begin{pmatrix} \frac{\partial S_1}{\partial \alpha} & \frac{\partial S_1}{\partial \beta} & \frac{\partial S_1}{\partial \gamma} \\ \frac{\partial S_2}{\partial \alpha} & \frac{\partial S_2}{\partial \beta} & \frac{\partial S_2}{\partial \gamma} \\ \frac{\partial S_3}{\partial \alpha} & \frac{\partial S_3}{\partial \beta} & \frac{\partial S_3}{\partial \gamma} \end{pmatrix} = R \arraycolsep=1.4pt \begin{pmatrix} Q_1 & Q_3 & -Q_2 \\ -Q_3 & Q_2 & Q_1 \\ Q_2 & -Q_1 & Q_3 \end{pmatrix}\\ \label{eq:dSdt} \frac{\partial S}{\partial t} &= \begin{pmatrix} \frac{\partial S_1}{\partial t_1} & \frac{\partial S_1}{\partial t_2} & \frac{\partial S_1}{\partial t_3} \\ \frac{\partial S_2}{\partial t_1} & \frac{\partial S_2}{\partial t_2} & \frac{\partial S_2}{\partial t_3} \\ \frac{\partial S_3}{\partial t_1} & \frac{\partial S_3}{\partial t_2} & \frac{\partial S_3}{\partial t_3} \end{pmatrix} = I_{3\times3} \end{align}} The derivation is straightforward. 
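Hand-derived rotation Jacobians are error-prone, so a central finite-difference cross-check is a cheap safeguard for any implementation of these derivatives. The sketch below (our own illustration, with arbitrary test values) builds $S = R_z(\gamma)R_y(\beta)R_x(\alpha)Q + t$ and differentiates it numerically:

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def S_of(p, Q):
    """S = R Q + t with p = (alpha, beta, gamma, t1, t2, t3)."""
    return Rz(p[2]) @ Ry(p[1]) @ Rx(p[0]) @ Q + p[3:]

def numeric_jacobian(f, x, eps=1e-6):
    """Central finite differences; one column per parameter."""
    cols = []
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        cols.append((f(x + d) - f(x - d)) / (2 * eps))
    return np.stack(cols, axis=1)

Q = np.array([0.3, -1.2, 2.5])
p = np.array([0.4, -0.2, 0.9, 0.1, 0.2, 0.3])
J = numeric_jacobian(lambda q: S_of(q, Q), p)   # 3x6: [dS/d(angles) | dS/dt]
```

The last three columns recover the identity matrix of Eq.~\eqref{eq:dSdt}, and, e.g., the $\alpha$-column matches the analytic $R_z R_y \,(\partial R_x/\partial\alpha)\, Q$ obtained from the chain rule.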
\subsection{Example: pinhole model with one radial distortion coefficient} Here we use the pinhole model with one radial distortion coefficient as an example. It can be easily generalized to other, more complicated models. The model can be represented as: \begin{eqnarray} q_x(S) & = & u + (1 + k_1 r^2) f \frac{S_1}{S_3} \\ q_y(S) & = & v + (1 + k_1 r^2) f \frac{S_2}{S_3} \end{eqnarray} where $r = \sqrt{(\frac{S_1}{S_3})^2 + (\frac{S_2}{S_3})^2} = \frac{1}{S_3}\sqrt{S_1^2 + S_2^2}$. We define $\Theta = (f,u,v, k_1)$. First, the partial derivatives of the local projection functions with respect to $S$ can be written as: \begin{equation} \begin{pmatrix} \frac{\partial q_x}{\partial S} \\ \frac{\partial q_y}{\partial S} \end{pmatrix} = \begin{pmatrix} \frac{\partial q_x}{\partial S_1} & \frac{\partial q_x}{\partial S_2} & \frac{\partial q_x}{\partial S_3}\\ \frac{\partial q_y}{\partial S_1} & \frac{\partial q_y}{\partial S_2} & \frac{\partial q_y}{\partial S_3}\\ \end{pmatrix} \end{equation} where \begin{align*} \frac{\partial q_x}{\partial S_1} &= \frac{f}{S_3} + \frac{f k_1}{S_3^3}( 3S_1^2 +S_2^2 ) \\ \frac{\partial q_x}{\partial S_2} &= \frac{2 f k_1 S_1 S_2}{S_3^3} \\ \frac{\partial q_x}{\partial S_3} &= -\frac{f S_1}{S_3^2} - \frac{3 f k_1 S_1 (S_1^2 + S_2^2)}{S_3^4}\\ \frac{\partial q_y}{\partial S_1} &= \frac{2 f k_1 S_1 S_2}{S_3^3} \\ \frac{\partial q_y}{\partial S_2} &= \frac{f}{S_3} + \frac{f k_1}{S_3^3}(S_1^2 + 3S_2^2 ) \\ \frac{\partial q_y}{\partial S_3} &= -\frac{f S_2}{S_3^2} - \frac{3 f k_1 S_2 (S_1^2 + S_2^2)}{S_3^4} \end{align*} This, together with Eqs.~\eqref{eq:dSdR} and \eqref{eq:dSdt}, allows us to compute the partial derivatives of residuals relative to extrinsic parameters in the Jacobian matrix $J$, by inserting into Eqs.~\eqref{eq.pd1} and \eqref{eq.pd2} (and likewise for partial derivatives of $\hat{y}$).
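Such closed-form derivatives are easy to mistype, so it is worth validating them against central finite differences. Below is a self-contained sketch with made-up intrinsic parameter values (an illustration, not the paper's implementation):

```python
import numpy as np

f, u, v, k1 = 800.0, 320.0, 240.0, -0.15   # illustrative intrinsic parameters

def project(S):
    """Pinhole projection with one radial distortion coefficient."""
    S1, S2, S3 = S
    r2 = (S1**2 + S2**2) / S3**2
    return np.array([u + (1 + k1 * r2) * f * S1 / S3,
                     v + (1 + k1 * r2) * f * S2 / S3])

def dq_dS(S):
    """Analytic 2x3 Jacobian of the projection w.r.t. the camera-frame point S."""
    S1, S2, S3 = S
    return np.array([
        [f / S3 + f * k1 * (3 * S1**2 + S2**2) / S3**3,
         2 * f * k1 * S1 * S2 / S3**3,
         -f * S1 / S3**2 - 3 * f * k1 * S1 * (S1**2 + S2**2) / S3**4],
        [2 * f * k1 * S1 * S2 / S3**3,
         f / S3 + f * k1 * (S1**2 + 3 * S2**2) / S3**3,
         -f * S2 / S3**2 - 3 * f * k1 * S2 * (S1**2 + S2**2) / S3**4]])

S = np.array([0.2, -0.1, 2.0])   # an arbitrary camera-frame test point
```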
As for the partial derivatives relative to intrinsic parameters, they can be computed as: \begin{align} \frac{\partial \hat{x}}{\partial \Theta} & = \frac{\partial q_x}{\partial \Theta} = \begin{pmatrix} (1 + k_1 r^2)\frac{S_1}{S_3} & 1 & 0 & r^2 f \frac{S_1}{S_3} \end{pmatrix} \\ \frac{\partial \hat{y}}{\partial \Theta} & = \frac{\partial q_y}{\partial \Theta} = \begin{pmatrix} (1 + k_1r^2)\frac{S_2}{S_3} & 0 & 1 & r^2 f\frac{S_2}{S_3} \end{pmatrix} \end{align} Now, we have everything needed for building $J$. \section{Uncertainty map} \label{sec:uncertainty-map} Here we introduce the concept of an uncertainty map, which can be an effective tool for visualizing the quality of the current calibration. The uncertainty map can also be used to track how the quality evolves along the calibration process as more and more images are added. \begin{figure*}[!ht] \centering \begin{tabular}{cccc} \includegraphics[width = 0.28\linewidth, trim = 10em 10em 10em 2.5em, clip]{img/umap_3.png} & \includegraphics[width = 0.28\linewidth, trim = 10em 10em 10em 2.5em, clip]{img/umap_3+2.png} & \includegraphics[width = 0.28\linewidth, trim = 10em 10em 10em 2.5em, clip]{img/umap_3+2w.png} & \includegraphics[width = 0.041\linewidth]{img/colorbar.png} \\ {\small (a)} & {\small (b)} & {\small (c)} & \end{tabular} \caption{Illustrations of the uncertainty maps and of the effectiveness of our camera wizard. The size of the map is the same as the acquired image size, $640\times 480$. (a) Initial calibration result with 3 freely taken images. (b) After adding two more freely taken images. (c) After instead adding two images proposed by the wizard. We can clearly see that: (1) the more calibration images, the lower the uncertainty; (2) the uncertainty obtained with wizard images is much lower than with freely taken ones (cf.
the scale on the right hand side).} \label{fig:umap-illu} \end{figure*} Given the $k \times k$ covariance matrix $\Sigma$ of intrinsic parameters, an uncertainty map is defined as the expected uncertainties of the local projection model across the image plane. That is to say, we propagate the uncertainties of the intrinsic parameters to each pixel of the image area. The quality of calibration at any stage of the calibration process can then be visualized as shown in Fig.~\ref{fig:umap-illu}. Mathematically speaking, for each pixel $(x,y)$ of the image, the uncertainty propagation process can be formulated as: \begin{equation} \begin{split} \Gamma&(x,y) =\\& \begin{pmatrix}[1.5] \frac{\partial q_x(\Theta,S)}{\partial \Theta} |_{S=S(x,y,\Theta)} \\ \frac{\partial q_y(\Theta,S)}{\partial \Theta} |_{S=S(x,y,\Theta)} \end{pmatrix} \Sigma \begin{pmatrix}[1.5] \frac{\partial q_x(\Theta,S)}{\partial \Theta} |_{S=S(x,y,\Theta)} \\ \frac{\partial q_y(\Theta,S)}{\partial \Theta} |_{S=S(x,y,\Theta)} \end{pmatrix}^\top \label{eq:uncertainty-propergate} \end{split} \end{equation} where $\Gamma$ is a $2\times 2$ covariance matrix expressing the uncertainty per image point $(x,y)$. To compute this, we first need to back-project the image point to a 3D point $S$ (any 3D point along the line of sight associated with the image point will do). We denote this as $S(x,y,\Theta)$ in the above equation: the back-projection depends on the image point coordinates $x$ and $y$ and of course on the intrinsic parameters $\Theta$. Then, we propagate the uncertainty of the intrinsic parameters (covariance matrix $\Sigma$) in a standard way through the forward projection, as shown in the above equation, to obtain the covariance matrix $\Gamma$. Note that minimizing this per-pixel reprojection uncertainty would be preferable to minimizing the trace of the covariance matrix $\Sigma$. The main problem with minimizing reprojection uncertainty is computation time, which is incompatible with real-time use.
In this paper we stick to the trace, a simple and commonly used cost function. In the future, we will consider the reprojection uncertainty for a subset of pixels. In the following, we provide an example for the process of acquiring the uncertainty map and some analysis. \subsection{Example: 3-parameter pinhole model} For this simple camera model, the uncertainty map can be analyzed theoretically, as shown in the following. We recall the definition of the 3-parameter pinhole model, where the vector of intrinsic parameters is $\Theta=(f,u,v)^\top$. \begin{eqnarray*} q_x(\Theta,S) & = & u + f \frac{S_1}{S_3} \\ q_y(\Theta,S) & = & v + f \frac{S_2}{S_3} \end{eqnarray*} These are the ``forward'' projection equations. As explained above, to construct the uncertainty map, we should back-project each pixel to 3D, i.e.\ to some 3D point $S$ which, if forward-projected, gives rise to the original pixel. One possibility for the back-projection is as follows: \[ S(x,y,\Theta) = \begin{pmatrix} x-u \\ y-v \\ f \end{pmatrix} \] We now compute the partial derivatives appearing in Eq.~\eqref{eq:uncertainty-propergate}: \begin{eqnarray*} \begin{pmatrix} \frac{\partial q_x(\Theta,S)}{\partial \Theta} |_{S=S(x,y,\Theta)} \\ \frac{\partial q_y(\Theta,S)}{\partial \Theta} |_{S=S(x,y,\Theta)} \end{pmatrix} & = & \begin{pmatrix} \frac{S_1}{S_3} & 1 & 0 \\ \frac{S_2}{S_3} & 0 & 1 \end{pmatrix} |_{S=S(x,y,\Theta)} \\ & = & \begin{pmatrix} \frac{x-u}{f} & 1 & 0 \\ \frac{y-v}{f} & 0 & 1 \end{pmatrix} \end{eqnarray*} Inserting this in Eq.~\eqref{eq:uncertainty-propergate}, we get the desired covariance matrices per image point: \begin{equation*} \Gamma(x,y) = \begin{pmatrix} \frac{x-u}{f} & 1 & 0 \\ \frac{y-v}{f} & 0 & 1 \end{pmatrix} \Sigma \begin{pmatrix} \frac{x-u}{f} & \frac{y-v}{f} \\ 1 & 0 \\ 0 & 1 \end{pmatrix} \end{equation*} To analyze the nature of the uncertainty map, we can imagine it as the coefficients of an ``uncertainty ellipse'' centered at every image pixel.
Once we acquire $\Gamma$ for each pixel, the trace, larger eigenvalue or determinant of $\Gamma$ can be used to measure the impact of the uncertainty of the intrinsic parameters on this pixel. In Fig.~\ref{fig:umap-illu}, we generate the map by creating an image where the value of every pixel is given by the trace of $\Gamma$. We may further analyze the nature of these uncertainty maps as follows. When computing the trace of $\Gamma$ in detail, we get: \begin{align*} tr(\Gamma(x,y)) = \Sigma_{22} + &\Sigma_{33} + \frac{\Sigma_{11}}{f^2} \left[ (x-u)^2 + (y-v)^2 \right]\\ &+ \frac{2}{f} \left[ \Sigma_{12} (x-u) + \Sigma_{13}(y-v) \right] \end{align*} This is quadratic with respect to $x$ and $y$. The uncertainty map, if visualized in 3D (as a height map above the $x$-$y$ plane), can be shown to be a circular paraboloid. The global minimum of the trace is attained at \[ x^* = u - f \frac{\Sigma_{12}}{\Sigma_{11}} \enspace\enspace\enspace y^* = v - f \frac{\Sigma_{13}}{\Sigma_{11}} \] and has the following value: \begin{equation*} \Sigma_{22} + \Sigma_{33} - \frac{\Sigma_{12}^2 + \Sigma_{13}^2}{\Sigma_{11}} \end{equation*} A few observations are as follows. If the covariance between the focal length and the principal point coordinates ($\Sigma_{12}$ and $\Sigma_{13}$) approaches zero, then the minimum of the uncertainty map coincides with the principal point and its value there is the sum of the variances of the principal point coordinates. And if the variance of the focal length tends to zero, the uncertainty map tends to be uniform. Therefore, as the calibration result gets more accurate (low uncertainty), the range of the uncertainty map becomes smaller, and its minimum approaches the principal point.
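These closed-form statements can be checked numerically. The sketch below (with a made-up, positive definite covariance $\Sigma$ and illustrative intrinsics) evaluates $\Gamma(x,y)$ for the 3-parameter model and the quadratic trace expression, including the predicted location and value of the minimum:

```python
import numpy as np

f, u, v = 800.0, 320.0, 240.0                # illustrative 3-parameter pinhole
Sigma = np.array([[4.0, 0.8, -0.5],          # made-up covariance of (f, u, v)
                  [0.8, 1.0,  0.1],
                  [-0.5, 0.1, 1.5]])

def Gamma(x, y):
    """Per-pixel 2x2 covariance, propagated through the forward projection."""
    Jp = np.array([[(x - u) / f, 1.0, 0.0],
                   [(y - v) / f, 0.0, 1.0]])
    return Jp @ Sigma @ Jp.T

def trace_formula(x, y):
    """Closed-form trace of Gamma(x, y): a circular paraboloid in (x, y)."""
    return (Sigma[1, 1] + Sigma[2, 2]
            + Sigma[0, 0] * ((x - u)**2 + (y - v)**2) / f**2
            + 2 / f * (Sigma[0, 1] * (x - u) + Sigma[0, 2] * (y - v)))

x_star = u - f * Sigma[0, 1] / Sigma[0, 0]   # predicted minimum location
y_star = v - f * Sigma[0, 2] / Sigma[0, 0]
min_val = (Sigma[1, 1] + Sigma[2, 2]
           - (Sigma[0, 1]**2 + Sigma[0, 2]**2) / Sigma[0, 0])
```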
Finally, an empirical observation is that the covariance matrices $\Gamma$, when visualized as uncertainty ellipses, tend to be oriented towards the minimum of the uncertainty map (i.e.\ at each image point, the associated uncertainty ellipse has an axis that ``points'' towards the minimum of the uncertainty map). More analysis, also for more complex camera models, is possible. In summary, we believe that this concept is a principled way of displaying and interpreting the uncertainty of a calibration estimate.
\section{Introduction} \label{sec:introduction} The Drell-Yan (D-Y) process~\cite{drell} is one of the important experimental approaches to explore the partonic structure of hadrons~\cite{peng14}. It is a unique tool for accessing the structures of unstable hadrons such as pions and kaons~\cite{falciano86,conway,dutta2013}. The D-Y process plays an essential role in probing the sea quarks of protons~\cite{NA51,e866,chang14} as well. The transverse momentum ($q_T$) distributions of the D-Y cross sections yield important information on the intrinsic transverse momentum ($k_T$) distribution of partons~\cite{Bacchetta:2017gcc} in the small-$q_T$ region. Furthermore, the polar and azimuthal angular distributions of leptons produced in the unpolarized D-Y process are sensitive to the underlying reaction mechanisms and to novel parton distributions such as Boer-Mulders functions~\cite{boer99}. For measurements with a transversely polarized target, a recent experiment extracted information on Sivers functions for the first time via the D-Y process~\cite{compass}. In the rest frame of the virtual photon in the D-Y process, a commonly used expression for the lepton angular distributions is given as~\cite{lam78} \begin{equation} \frac{d\sigma}{d\Omega} \propto 1+ \lambda \cos^2\theta + \mu \sin 2 \theta\cos\phi + \frac{\nu}{2} \sin^2\theta \cos 2 \phi, \label{eq:eq1} \end{equation} where $\theta$ and $\phi$ refer to the polar and azimuthal angles of $l^-$ ($e^-$ or $\mu^-$). At leading order (LO), $q\bar{q}\rightarrow \gamma^*$ with collinear partons leads to a transversely polarized virtual photon with the prediction $\lambda = 1$ and $\mu = \nu =0$~\cite{drell}. To describe the D-Y process at finite $q_T$, higher-order QCD processes, such as $q\bar{q}\rightarrow \gamma^* G$ and $qG \rightarrow \gamma^* q$ at $\mathcal{O}(\alpha_S)$, should be included, and these processes could in principle alter the angular coefficients $\lambda$, $\mu$ and $\nu$.
While $\lambda$ can now deviate from 1, and $\mu$ and $\nu$ can be nonzero, a well-known result is that the Lam-Tung (L-T) relation~\cite{lam80}, \begin{equation} 1-\lambda-2\nu=0, \label{eq:eq2} \end{equation} holds for both NLO processes. Deviation from the L-T relation appears in the NNLO processes of $\mathcal{O}(\alpha_S^2)$ and beyond, e.g. $q\bar{q}\rightarrow \gamma^* GG$, $qG \rightarrow \gamma^* qG$ and $GG \rightarrow \gamma^* G$, according to pQCD~\cite{Brandenburg:1993cj}. Violation of the L-T relation was observed in the fixed-target experiments with pion beams by NA10~\cite{falciano86} and E615~\cite{conway}, while the L-T relation was found to be satisfied in D-Y production with proton beams by E866~\cite{zhu}. The $q_T$ range of these fixed-target experiments is between 0 and 5 GeV. As for the measurements of $Z$ boson production in the collider experiments, CDF data of $p-\bar{p}$ collisions~\cite{cdf} are consistent with the L-T relation, while CMS and ATLAS data of $p-p$ collisions~\cite{cms,atlas} show a clear violation. The violation of the L-T relation at $q_T>5$ GeV could be well described by taking into account the NNLO pQCD effect~\cite{Gauld:2017tww}. Lambertsen and Vogelsang~\cite{Lambertsen:2016wgj} compared the NLO and NNLO pQCD calculations of $\lambda$ and $\nu$ with the data of the fixed-target experiments NA10, E615 and E866. Overall the agreement is not as good as that seen in the collider data at large $q_T$. Recently we interpreted the violation of the L-T relation as a consequence of the acoplanarity of the partonic subprocess~\cite{peng16,chang17}. This acoplanarity can arise from intrinsic transverse momenta of partons inside the hadrons, or from perturbative gluon radiation beyond $\mathcal{O}(\alpha_s)$, such that the axis of the annihilating quark-antiquark pair (natural axis) no longer necessarily resides in the colliding hadron plane.
In addition to the violation of the L-T relation, other salient features of the $q_T$ dependence of the $\lambda$, $\mu$ and $\nu$ parameters of the $Z$ production data from the collider experiments~\cite{peng16,chang17}, as well as the rotational invariance properties of these parameters~\cite{peng18}, could be well explained by this intuitive geometric approach. In this work we compare the $\lambda$, $\mu$, $\nu$ data measured by NA10~\cite{falciano86}, E615~\cite{conway} and E866~\cite{zhu} with the fixed-order pQCD calculations. The approach is similar to what was done in Ref.~\cite{Lambertsen:2016wgj}, but we extend the study to include the L-T violation quantity $1-\lambda-2\nu$, the $\mu$ parameter, as well as the scaling behavior of these angular parameters. Furthermore, we present the NLO pQCD predictions for the ongoing COMPASS~\cite{COMPASSII} and SeaQuest~\cite{E906} experiments on the dimuon mass $Q$ and Feynman-$x$ ($x_F$) dependence of the angular parameters. The common features between the pQCD and the geometric approach~\cite{peng16,chang17} are also discussed. This paper is organized as follows. In Sec.~\ref{sec:method}, we describe how the fixed-order pQCD calculation is performed to extract the angular distribution parameters. The results from the pQCD calculations for the existing and forthcoming fixed-target experiments are then presented in Secs.~\ref{sec:results_oldexp} and~\ref{sec:results_newexp}, respectively. We further interpret some notable features of the pQCD results using the geometric model in Sec.~\ref{sec:discussion}, followed by the conclusion in Sec.~\ref{sec:conclusion}. \section{Calculations of angular parameters in DYNNLO} \label{sec:method} The formalism of the NLO ($\mathcal{O}(\alpha_S)$)~\cite{DY_nlo} and the NNLO ($\mathcal{O}(\alpha_S^2)$)~\cite{DY_nnlo} QCD of the D-Y process has been known for a while.
Only recently have packages for evaluating the differential D-Y cross sections up to $\mathcal{O}(\alpha_s^2)$ in $p-p$ and $p-\bar{p}$ collisions become available for public use: DYNNLO~\cite{DYNNLO} and FEWZ~\cite{FEWZ}. Both packages are parton-level Monte Carlo programs and they provide the differential cross sections for the D-Y process and $W$/$Z$ vector boson production. The resummation of soft-gluon emission at small $q_T$ is not included in these two packages. As discussed in Ref.~\cite{Lambertsen:2016wgj}, even though resummation is important for the cross sections, it is expected not to affect the angular parameters~\cite{boer06,berger}. In this work we utilize the DYNNLO (version 1.5) package~\cite{DYNNLO_Web}. With some minor modifications, the code can evaluate the D-Y cross sections induced by pion or proton beams on proton or neutron targets. Via the LHAPDF6 framework~\cite{LHAPDF6}, the parton distribution functions (PDFs)~\cite{PDFsets} used for the protons and neutrons are ``CT14nlo'' and ``CT14nnlo'' in the NLO and NNLO calculations, respectively, and ``GRVPI1'' for the pion PDFs in both NLO and NNLO calculations. The factorization scale ($\mu_F$) and renormalization scale ($\mu_R$) are set as $\mu_F = \mu_R = Q$. In order to calculate the $\lambda$, $\mu$, and $\nu$ parameters, we first calculate the $A_i$ parameters in an alternative expression of the lepton angular distributions of the D-Y process as follows~\cite{cs}: \begin{eqnarray} \frac{d\sigma}{d\Omega} & \propto & (1+\cos^2\theta)+\frac{A_0}{2} (1-3\cos^2\theta) \nonumber \\ & & +A_1 \sin 2 \theta\cos\phi + \frac{A_2}{2} \sin^2\theta \cos 2 \phi \label{eq:eq3} \end{eqnarray} where $\theta$ and $\phi$, same as in Eq.~(\ref{eq:eq1}), are the polar and azimuthal angles of $l^-$ ($e^-$ or $\mu^-$) in the rest frame of $\gamma^*$.
The angular coefficients $A_i$ can be evaluated from the moments of harmonic polynomials, expressed as~\cite{atlas,Gauld:2017tww} \begin{eqnarray} A_0 & = & 4 - 10 \langle \cos^2\theta \rangle, \nonumber \\ A_1 & = & 5 \langle \sin2\theta \cos \phi \rangle, \nonumber \\ A_2 & = & 10 \langle \sin^2\theta \cos2\phi \rangle, \label{eq:eq4} \end{eqnarray} where $\langle f(\theta, \phi) \rangle$ denotes the moment of $f(\theta, \phi)$, i.e. the average of $f(\theta, \phi)$ weighted by the cross section in Eq.~(\ref{eq:eq3}). It is straightforward to show that $\lambda, \mu, \nu$ in Eq.~(\ref{eq:eq1}) are related to $A_0, A_1, A_2$ via \begin{eqnarray} \lambda = \frac{2-3A_0}{2+A_0};~~~ \mu = \frac{2A_1}{2+A_0};~~~ \nu = \frac{2A_2}{2+A_0}. \label{eq:eq5} \end{eqnarray} Equation~(\ref{eq:eq5}) shows that the L-T relation, $1-\lambda - 2 \nu=0$, is equivalent to $A_0 = A_2$. \section{Comparison with existing data from NA10, E615 and E866} \label{sec:results_oldexp} Now we compare the results for $\lambda$, $\mu$, $\nu$, and the L-T violation, $1-\lambda-2\nu$, from the fixed-order pQCD calculations with existing data from fixed-target experiments. The angular parameters are evaluated as a function of the dimuon's $q_T$ in the Collins-Soper frame~\cite{cs}. We first consider the data from NA10~\cite{falciano86} and E615~\cite{conway} for a $\pi^-$ beam interacting with tungsten ($W$) targets. The NA10 experiment used three different beam energies: 140, 194 and 286 GeV, while E615 utilized a single beam energy of 252 GeV. Since the experiments were done with tungsten targets, the cross sections per nucleon were calculated as the weighted average of the $\pi^- p$ and $\pi^- n$ cross sections with 74 protons and 110 neutrons. Following the experimental acceptance specified in Ref.~\cite{Lambertsen:2016wgj}, we apply the kinematic cuts listed in Table~\ref{tab:accpt}.
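The moment relations of Eqs.~(\ref{eq:eq4}) and (\ref{eq:eq5}) can be illustrated with a short numerical sketch (our own illustration, independent of DYNNLO): generate the distribution of Eq.~(\ref{eq:eq3}) for chosen coefficients, verify that the harmonic moments return them, and check that $1-\lambda-2\nu=0$ exactly when $A_0 = A_2$:

```python
import numpy as np

def extract_A(A0, A1, A2, n=200):
    """Recover (A0, A1, A2) from the moments of Eq. (4), integrating the
    distribution of Eq. (3) over the full solid angle
    (Gauss-Legendre in cos(theta), uniform grid in phi)."""
    ct, w = np.polynomial.legendre.leggauss(n)
    phi = np.linspace(0.0, 2 * np.pi, 2 * n, endpoint=False)
    CT, PHI = np.meshgrid(ct, phi, indexing='ij')
    ST = np.sqrt(1.0 - CT**2)
    W = ((1 + CT**2) + 0.5 * A0 * (1 - 3 * CT**2)
         + A1 * 2 * ST * CT * np.cos(PHI) + 0.5 * A2 * ST**2 * np.cos(2 * PHI))
    wgt = W * w[:, None]                     # phi weight is uniform and cancels
    avg = lambda F: np.sum(F * wgt) / np.sum(wgt)
    return (4 - 10 * avg(CT**2),
            5 * avg(2 * ST * CT * np.cos(PHI)),
            10 * avg(ST**2 * np.cos(2 * PHI)))

def lam_mu_nu(A0, A1, A2):
    """Eq. (5): translate (A0, A1, A2) into (lambda, mu, nu)."""
    return (2 - 3 * A0) / (2 + A0), 2 * A1 / (2 + A0), 2 * A2 / (2 + A0)
```

The collinear LO limit $A_i = 0$ gives $(\lambda, \mu, \nu) = (1, 0, 0)$, and any choice with $A_0 = A_2$ satisfies the L-T relation identically.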
The results of NLO (red points) and NNLO (blue points) calculations together with the measurements (black points) are shown in Figs.~\ref{fig1_na10_140},~\ref{fig1_na10_194},~\ref{fig1_na10_286}, and ~\ref{fig1_e615}. \begin{table}[tbp] \centering \begin{tabular}{|c|c|c|c|} \hline \hline Experiment & Q (GeV) & $x_1$ & $x_F$ \\ \hline \hline NA10 & $4.05 \le Q \le 8.55$ & $0 \le x_1 \le 0.7$ & $0 \le x_F$ \\ \hline E615 & $4.05 \le Q \le 8.55$ & $0.2 \le x_1 \le 1$ & $0 \le x_F$ \\ \hline E866 & $4.5 \le Q \le 15$\textsuperscript{*} & $0 \le x_1 \le 0.7$ & $0 \le x_F$ \\ \hline \hline \multicolumn{4}{l}{\textsuperscript{*} \footnotesize{Excluding the $\Upsilon$ region $9 \le Q \le 10.7$ GeV.}} \end{tabular} \caption {Kinematic cuts applied for the experimental acceptance in the fixed-order pQCD calculation.} \label{tab:accpt} \end{table} \begin{figure}[htbp] \includegraphics[width=1.0\columnwidth]{fig1_na10_140.eps} \caption{Comparison of NLO (red points) and NNLO (blue points) fixed-order pQCD calculations with the NA10 $\pi^-+W$ D-Y data at 140 GeV~\cite{falciano86} (black points) for $\lambda$, $\mu$, $\nu$ and $1-\lambda-2\nu$.} \label{fig1_na10_140} \end{figure} \begin{figure}[htbp] \includegraphics[width=1.0\columnwidth]{fig1_na10_194.eps} \caption{Same as Fig.~\ref{fig1_na10_140}, but for NA10 data~\cite{falciano86} with 194-GeV $\pi^-$ beam.} \label{fig1_na10_194} \end{figure} \begin{figure}[htbp] \includegraphics[width=1.0\columnwidth]{fig1_na10_286.eps} \caption{Same as Fig.~\ref{fig1_na10_140}, but for NA10 data~\cite{falciano86} with 286-GeV $\pi^-$ beam.} \label{fig1_na10_286} \end{figure} \begin{figure}[htbp] \includegraphics[width=1.0\columnwidth]{fig1_e615.eps} \caption{Comparison of NLO (red points) and NNLO (blue points) fixed-order pQCD calculations with the E615 $\pi^-+W$ D-Y data at 252 GeV~\cite{conway} (black points) for $\lambda$, $\mu$, $\nu$ and $1-\lambda-2\nu$.} \label{fig1_e615} \end{figure} Overall, the calculated $\lambda$, $\mu$ and 
$\nu$ exhibit distinct $q_T$ dependencies. At $q_T \rightarrow 0$, $\lambda$, $\mu$ and $\nu$ approach the values predicted by the collinear parton model~\cite{drell}: $\lambda = 1$ and $\mu = \nu =0$. As $q_T$ increases, Figs.~\ref{fig1_na10_140}-\ref{fig1_e615} show that $\lambda$ decreases toward its large-$q_T$ limit of $-1/3$ while $\nu$ increases toward $2/3$, for both the $q\bar{q}$ and $qG$ processes, as shown in Ref.~\cite{peng16}. The $q_T$ dependence of $\mu$ is relatively mild compared to those of $\lambda$ and $\nu$. This is understood as the result of a cancellation effect, to be discussed in Sec.~\ref{sec:discussion}. Comparing the results of the NLO with the NNLO calculation, $\lambda {\rm (NNLO)}$ is smaller than $\lambda \rm{(NLO)}$ while $\mu$ and $\nu$ are very similar at NLO and NNLO. The L-T violation, $1-\lambda-2\nu$, is zero in the NLO calculation, and becomes nonzero and positive in the NNLO calculation. As shown in Figs.~\ref{fig1_na10_140}-\ref{fig1_e615}, while some general features of the NA10 and E615 data are described by the pQCD calculations, there are notable differences between the data and the calculations. From the comparison between them, we find: 1) Perturbative QCD predicts that $\lambda$ drops as $q_T$ increases, but the data do not show this trend. The expected upper bound of $\lambda$, $\left| \lambda \right| \leq 1$, is sometimes exceeded by the data~\cite{Lambertsen:2016wgj}. This could reflect the presence of some systematic uncertainties in the data. 2) The agreement between the data and the pQCD calculation for the $\mu$ parameter is quite reasonable for NA10, but less so for E615. 3) The increase of $\nu$ with $q_T$ observed in the NA10 data is in good agreement with the pQCD calculation. However, the E615 data are significantly higher than the calculation. 4) The amount of the L-T violation, $1-\lambda-2\nu$, for the data is much larger than the prediction from NNLO pQCD.
Moreover, the sign of this violation is negative for the data, but positive for the pQCD. This apparent discrepancy could be partly caused by the unphysical values of $\lambda$ from the data, as $\lambda$ should not exceed 1. Regarding these findings two remarks are in order. First, pQCD predicts a sizable magnitude for $\nu$, comparable to the data. Therefore, in order to extract the value of the nonperturbative Boer-Mulders function from the measured data of $\nu$~\cite{boer99,Zhang:2008nu,Barone:2009hw}, contributions from the pQCD effect must be taken into account. Second, the pQCD calculation for $\mu$ tends to overestimate the NA10 data but underestimate the E615 data. As we will see in Sec.~\ref{sec:results_newexp}, $\mu$ has a strong dependence on $x_F$. The incomplete information on the $x_F$ acceptance of the experiments needed for the calculation could contribute to the discrepancy. \begin{figure}[htbp] \includegraphics[width=1.0\columnwidth]{fig1_e866p.eps} \caption{Comparison of NLO (red points) and NNLO (blue points) fixed-order pQCD calculations with the E866 $p+p$ D-Y data at 800 GeV~\cite{zhu} (black points) for $\lambda$, $\mu$, $\nu$ and $1-\lambda-2\nu$.} \label{fig1_e866p} \end{figure} \begin{figure}[htbp] \includegraphics[width=1.0\columnwidth]{fig1_e866d.eps} \caption{Same as Fig.~\ref{fig1_e866p}, but for E866 data~\cite{zhu} with a liquid deuterium target.} \label{fig1_e866d} \end{figure} The $q_T$ dependencies of the angular distribution parameters of 800-GeV $p+p$ and $p+d$ D-Y are calculated and compared with the E866 measurements~\cite{zhu} in Figs.~\ref{fig1_e866p} and ~\ref{fig1_e866d}. Given the large experimental uncertainty, the $p+p$ data in Fig.~\ref{fig1_e866p} are not in disagreement with the calculation. For Fig.~\ref{fig1_e866d}, where the $p+d$ data have smaller uncertainties, the agreement between data and the calculation is rather poor. 
In particular, the data on $\lambda$ are in general larger than 1, violating the expected upper bound for $\lambda$~\cite{Lambertsen:2016wgj}. That the $\nu$ data are lower than the pQCD prediction in Figs.~\ref{fig1_e866p} and ~\ref{fig1_e866d} suggests a negative contribution from the Boer-Mulders effect in the proton-induced D-Y. This is opposite to the situation in Fig.~\ref{fig1_e615}, where the $\nu$ data are more positive than the pQCD calculation, suggesting a positive contribution from the Boer-Mulders function in the pion-induced D-Y. Since the contribution of the Boer-Mulders effect to $\nu$ is proportional to the product of the individual Boer-Mulders functions of quarks and antiquarks in the colliding hadrons, the proton D-Y data would imply that the sea-quark Boer-Mulders function has a sign opposite to that of the valence Boer-Mulders function in the proton~\cite{transversity2014}. The pion data from Fig.~\ref{fig1_e615} suggest that the pion valence Boer-Mulders function has the same sign as the proton valence Boer-Mulders function~\cite{transversity2014}. The NNLO calculations predict a positive $1-\lambda-2\nu$, while the data are consistent with zero for the proton target and slightly negative for the deuteron one. The negative values of $1-\lambda-2\nu$ for the $p+d$ data are similar to the case of the pion D-Y data shown in Figs.~\ref{fig1_na10_140}-\ref{fig1_e615}. In Sec.~\ref{sec:discussion}, we will discuss why $1-\lambda-2\nu$ must be positive from the perspective of a geometric approach. \section{pQCD calculations for the COMPASS and SeaQuest experiments} \label{sec:results_newexp} There are two ongoing fixed-target D-Y experiments which have collected new data on the lepton angular distributions. The first one is the COMPASS experiment at CERN~\cite{COMPASSII}, running with a 190-GeV $\pi^-$ beam and a transversely-polarized $\rm{NH}_3$ target as well as unpolarized aluminum ($Al$) and tungsten ($W$) nuclear targets.
The transverse-momentum-dependent Sivers asymmetry in the polarized D-Y process was reported recently~\cite{compass}, and high-statistics unpolarized D-Y data on the $W$ target have also been collected. The second one is the SeaQuest experiment at Fermilab~\cite{E906}, aiming at the measurement of the $\bar{d}(x)/\bar{u}(x)$ ratio in the intermediate-$x$ region via the D-Y process. It has taken data with the 120-GeV proton beam on unpolarized hydrogen, deuterium and various nuclear targets. Both the COMPASS and SeaQuest experiments have collected data on the lepton angular distributions of the D-Y process. The final results are expected to be available in the near future. In addition, the extension of the SeaQuest experiment, the E1039 experiment~\cite{e1039}, expects to take more data relevant to the angular distributions in the near future. Here we present the results for the angular coefficients $\lambda$, $\mu$ and $\nu$ as a function of $q_T$ in various bins of $Q$ and $x_F$. There are three bins of $Q$ in the range of 4.0--7.0 GeV, as well as three bins of $x_F$ in the range of 0--0.6. These results could later be convolved with the COMPASS and SeaQuest spectrometer acceptances for a direct comparison with experimental data. Since there is no significant difference between the NLO and NNLO results, we present only the results from the NLO calculation to illustrate the major features. The mean values of $Q$ and $x_F$ in each bin are listed in Tables~\ref{tab:mean_Q} and~\ref{tab:mean_xF}. The pQCD calculations show that the $q \bar{q}$ process dominates over the whole $q_T$ region for the $\pi^-$-induced COMPASS experiment, while the $qG$ process becomes more important for $q_T>1$ GeV in the proton-induced SeaQuest experiment. Through this study, the $Q$- and $x_F$-dependencies of $\lambda$, $\mu$ and $\nu$ are also investigated.
\begin{table}[tbp] \caption {Mean values of $Q$ and $x_F$ in each $Q$ bin calculated for COMPASS and SeaQuest.} \label{tab:mean_Q} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \hline Bin & \multicolumn{2}{c|} {COMPASS} & \multicolumn{2}{c|} {SeaQuest} \\ \hline \hline & $\langle Q \rangle$ (GeV) & $\langle x_F \rangle $ & $\langle Q \rangle$ (GeV) & $\langle x_F \rangle $ \\ \hline $Q=4-5$ GeV & 4.42 & 0.32 & 4.36 & 0.24 \\ \hline $Q=5-6$ GeV & 5.43 & 0.32 & 5.36 & 0.23 \\ \hline $Q=6-7$ GeV & 6.43 & 0.32 & 6.36 & 0.22 \\ \hline \hline \end{tabular} \end{center} \end{table} \begin{table}[tbp] \caption {Mean values of $Q$ and $x_F$ in each $x_F$ bin calculated for COMPASS and SeaQuest.} \label{tab:mean_xF} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \hline Bin & \multicolumn{2}{c|} {COMPASS} & \multicolumn{2}{c|} {SeaQuest} \\ \hline \hline & $\langle Q \rangle$ (GeV) & $\langle x_F \rangle $ & $\langle Q \rangle$ (GeV) & $\langle x_F \rangle $ \\ \hline $x_F=0.0-0.2$ & 5.01 & 0.10 & 4.56 & 0.10 \\ \hline $x_F=0.2-0.4$ & 5.06 & 0.30 & 4.55 & 0.29 \\ \hline $x_F=0.4-0.6$ & 5.10 & 0.49 & 4.54 & 0.48 \\ \hline \hline \end{tabular} \end{center} \end{table} Figures~\ref{fig2_compass} and~\ref{fig2_e906} show $\lambda$, $\mu$ and $\nu$ as a function of $q_T$ for various bins of $Q$ and $x_F$. The $q_T$ distributions of the $\lambda$ and $\nu$ parameters depend sensitively on $Q$, but only weakly on $x_F$. As for $\mu$, its $q_T$ distribution has strong dependencies on both $x_F$ and $Q$. In particular, the magnitude of $\mu$ is small when $x_F$ is close to 0, and its sign can even turn negative in some $q_T$ regions. As $x_F$ increases, the magnitude of $\mu$ increases markedly.
\begin{figure}[htbp] \centering \subfloat[] {\includegraphics[width=1.0\columnwidth]{fig2_compass_Q}}\\ \subfloat[] {\includegraphics[width=1.0\columnwidth]{fig2_compass_xF}} \caption [\protect{}] {(a) NLO pQCD results of $\lambda$, $\mu$, and $\nu$ as a function of $q_T$ at several $Q$ bins and $x_F>0$ for D-Y production off the tungsten target with 190-GeV $\pi^-$ beam in the COMPASS experiment. (b) Same as (a) but at several $x_F$ bins and $4<Q<9$ GeV.} \label{fig2_compass} \end{figure} \begin{figure}[htbp] \centering \subfloat[] {\includegraphics[width=1.0\columnwidth]{fig2_e906_Q}}\\ \subfloat[] {\includegraphics[width=1.0\columnwidth]{fig2_e906_xF}} \caption [\protect{}] {(a) NLO pQCD results of $\lambda$, $\mu$, and $\nu$ as a function of $q_T$ at several $Q$ bins and $x_F>0$ for D-Y production off the proton target with 120-GeV proton beam in the SeaQuest experiment. (b) Same as (a) but at several $x_F$ bins and $4<Q<9$ GeV.} \label{fig2_e906} \end{figure} In perturbative QCD at $\mathcal{O}(\alpha_S)$, ignoring the intrinsic transverse momenta of the colliding partons, the $\lambda$ and $\nu$ coefficients in the Collins-Soper frame for the $q \bar q \to \gamma^* G$ annihilation process~\cite{collins,boer06,berger} and the $qG \to \gamma^* q$ Compton process~\cite{falciano86,thews,lindfors} are given as \begin{align} \lambda &= \frac{2 Q^2-q_T^2}{2Q^2+ 3q_T^2} & \nu &= \frac{2 q_T^2}{2Q^2+ 3q_T^2} & &(q\bar q) \nonumber \\ \lambda &= \frac{2Q^2-5q_T^2}{2Q^2+15q_T^2} & \nu &= \frac{10q_T^2}{2Q^2+15q_T^2} & &(qG), \label{eq:eq11} \end{align} where $q_T$ and $Q$ are the transverse momentum and mass, respectively, of the dilepton. While the expression for $q \bar q \to \gamma^* G$ is exact, that for $qG \to \gamma^* q$ is obtained with some approximation. Equation (\ref{eq:eq11}) shows that $\lambda$ and $\nu$ scale with the dimensionless $q_T/Q$ in these pQCD NLO expressions. Nevertheless there is no $q_T/Q$ scaling for the $\mu$ parameter in NLO pQCD. 
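As a quick consistency check (an illustrative sketch, not part of the published analysis), the NLO expressions in Eq.~(\ref{eq:eq11}) satisfy the Lam-Tung relation $1-\lambda-2\nu=0$ identically for both subprocesses:

```python
# Check that the O(alpha_S) expressions for lambda and nu in Eq. (11)
# satisfy the Lam-Tung relation 1 - lambda - 2*nu = 0 for both subprocesses.

def lam_nu_qqbar(qT, Q):
    """lambda and nu for q qbar -> gamma* G (Collins-Soper frame)."""
    d = 2*Q**2 + 3*qT**2
    return (2*Q**2 - qT**2) / d, 2*qT**2 / d

def lam_nu_qG(qT, Q):
    """lambda and nu for q G -> gamma* q (approximate expression)."""
    d = 2*Q**2 + 15*qT**2
    return (2*Q**2 - 5*qT**2) / d, 10*qT**2 / d

Q = 5.0  # GeV, a typical fixed-target dilepton mass (illustrative value)
for qT in [0.5, 1.0, 2.0, 4.0]:
    for f in (lam_nu_qqbar, lam_nu_qG):
        lam, nu = f(qT, Q)
        assert abs(1 - lam - 2*nu) < 1e-12  # L-T relation holds at NLO
```

At $q_T=0$ both subprocesses give $\lambda=1$, $\nu=0$, the transversely polarized Drell-Yan limit.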
\begin{figure}[htbp] \centering \subfloat[] {\includegraphics[width=1.0\columnwidth]{fig3_compass_Q}}\\ \subfloat[] {\includegraphics[width=1.0\columnwidth]{fig3_compass_xF}} \caption [\protect{}] {(a) NLO pQCD results of $\lambda$, $\nu$ and the fractions of the $q \bar{q}$-process contribution in the total cross sections as a function of the scaled transverse momentum $q_T/Q$ for D-Y production off the nuclear tungsten target with the 190-GeV $\pi^-$ beam in the COMPASS experiment. The NLO pQCD expressions of the $q \bar{q}$ and $qG$ processes are denoted by the solid and dashed lines respectively. (b) Same as (a) but at several $x_F$ bins and $4<Q<9$ GeV.} \label{fig3_compass} \end{figure} \begin{figure}[htbp] \centering \subfloat[] {\includegraphics[width=1.0\columnwidth]{fig3_e906_Q}}\\ \subfloat[] {\includegraphics[width=1.0\columnwidth]{fig3_e906_xF}} \caption [\protect{}]{(a) NLO pQCD results of $\lambda$, $\nu$ and the fractions of the $q \bar{q}$-process contribution in the total cross sections as a function of the scaled transverse momentum $q_T/Q$ for D-Y production off the proton target with the 120-GeV proton beam in the SeaQuest experiment. The NLO pQCD expressions of the $q \bar{q}$ and $qG$ processes are denoted by the solid and dashed lines respectively. (b) Same as (a) but at several $x_F$ bins and $4<Q<9$ GeV. It is noted that the rough structure in the large-$q_T/Q$ region of the results for $x_F=$0.4 -- 0.6 (blue points) is likely due to fluctuations of the calculations with $Q>7$ GeV near the edge of the phase space. The structure is expected to disappear if one requires $Q<7$ GeV as in the top figure.} \label{fig3_e906} \end{figure} Figures~\ref{fig3_compass} and~\ref{fig3_e906} show the NLO calculations of $\lambda$ and $\nu$ for COMPASS and SeaQuest as a function of the variable $q_T/Q$ in the various $Q$ and $x_F$ bins. The corresponding expressions for the $q \bar{q}$ and $q G$ processes in Eq.~(\ref{eq:eq11}) are denoted by the solid and dashed lines.
Comparing Figs.~\ref{fig3_compass} and~\ref{fig3_e906} with Figs.~\ref{fig2_compass} and~\ref{fig2_e906}, the $\lambda$ and $\nu$ values for different $Q$ bins now converge into a common curve when they are plotted as a function of $q_T/Q$. This is consistent with the $q_T/Q$ scaling behavior of Eq.~(\ref{eq:eq11}). Figures~\ref{fig3_compass} and~\ref{fig3_e906} also display the fractions of the NLO cross sections due to the $q \bar{q}$ process for COMPASS and SeaQuest. The dominance of the $q \bar{q}$ process in the $\pi^-$-induced D-Y at COMPASS explains why the pQCD results for $\lambda$ and $\nu$ are very close to the solid $q \bar{q}$ lines. In contrast, the proton-induced D-Y in SeaQuest has large contributions from the $q G$ process, resulting in $\lambda$ and $\nu$ values closer to the dashed $q G$ lines. For comparison, we plot the $q_T$ distributions of $\lambda$, $\mu$ and $\nu$ in the negative-$x_F$ region ($-0.6$ -- $0$) for COMPASS and SeaQuest in Fig.~\ref{fig2b}. The $\lambda$ and $\nu$ remain the same as in $x_F>0$, while $\mu$ turns mostly negative. \begin{figure}[htbp] \centering \subfloat[] {\includegraphics[width=1.0\columnwidth]{fig2_compass_xFb}}\\ \subfloat[] {\includegraphics[width=1.0\columnwidth]{fig2_e906_xFb}} \caption [\protect{}] {(a) NLO pQCD results of $\lambda$, $\mu$ and $\nu$ as a function of transverse momentum $q_T$ at several negative $x_F$ bins and $4<Q<9$ GeV for D-Y production off the nuclear tungsten target with the 190-GeV $\pi^-$ beam in the COMPASS experiment. (b) Same as (a) but for D-Y production off the proton target with the 120-GeV proton beam in the SeaQuest experiment.} \label{fig2b} \end{figure} \section{Geometric model} \label{sec:discussion} As seen above, the existing D-Y data on the lepton angular distributions can be reasonably well described by the NLO and NNLO pQCD calculations.
Various salient features of $Q$ and $x_F$ dependencies as well as $q_T/Q$ scaling are observed in the predicted results of $\lambda$, $\mu$ and $\nu$ parameters for COMPASS and SeaQuest experiments based on NLO pQCD. It is of interest to check if these features could be understood using the geometric approach developed in Refs.~\cite{peng16,chang17}. \begin{figure}[htb] \includegraphics[width=0.8\columnwidth]{fig5_three_plane_newest.eps} \caption [\protect{}]{Definition of the Collins-Soper frame and various angles and planes in the rest frame of $\gamma^*$. The hadron plane is formed by $\vec P_B$ and $\vec P_T$, the momentum vectors of the beam (B) and target (T) hadrons. The $\hat x$ and $\hat z$ axes of the Collins-Soper frame both lie in the hadron plane with the $\hat z$ axis bisecting the $\vec P_B$ and $- \vec P_T$ vectors. The quark ($q$) and antiquark ($\bar q$) annihilate collinearly with equal momenta to form $\gamma^*$, while the quark momentum vector $\hat z^\prime$ and the $\hat z$ axis form the quark plane. The polar and azimuthal angles of $\hat z^\prime$ in the Collins-Soper frame are $\theta_1$ and $\phi_1$. The $l^-$ and $l^+$ are emitted back-to-back with $\theta$ and $\phi$ as the polar and azimuthal angles for $l^-$.} \label{fig_geom} \end{figure} Here we sketch the geometric approach of Refs.~\cite{peng16,chang17}. As illustrated in Fig.~\ref{fig_geom}, we define three different planes, the hadron plane, the quark plane, and the lepton plane, in the Collins-Soper frame. In the $\gamma^*$ rest frame, the beam and target hadron momenta, $\vec P_B$ and $\vec P_T$ form the ``hadron plane'' on which the $\hat z$ axis, bisecting the $\vec P_B$ and $- \vec P_T$ vectors, lies. A pair of collinear $q$ and $\bar q$ with equal momenta annihilate into a $\gamma^*$. The momentum unit vector of $q$ is defined as $\hat z^\prime$, and the ``quark plane" is formed by the $\hat z^\prime$ and $\hat z$ axes. 
Finally, the ``lepton plane'' is formed by the momentum vector of $l^-$ and the $\hat z$ axis. The polar and azimuthal angles of the $\hat z^\prime$ axis in the Collins-Soper frame are denoted as $\theta_1$ and $\phi_1$. As shown in Refs.~\cite{peng16,chang17}, the angular coefficients $A_i$ in Eq.~(\ref{eq:eq3}) can be expressed in terms of $\theta_1$ and $\phi_1$ as follows: \begin{eqnarray} A_0 &=& \langle\sin^2\theta_1\rangle \nonumber \\ A_1 &=& \frac{1}{2} \langle\sin 2\theta_1\cos \phi_1\rangle \nonumber \\ A_2 &=& \langle\sin^2\theta_1 \cos 2\phi_1\rangle. \label{eq:eq8} \end{eqnarray} The $\langle \cdot \cdot \cdot \rangle$ in Eq.~(\ref{eq:eq8}) is a reminder that the measured values of $A_i$ in a given kinematic bin are averaged over events having particular values of $\theta_1$ and $\phi_1$. As discussed in Refs.~\cite{peng16,chang17}, up to NLO ($\mathcal{O}(\alpha_S)$) in pQCD, the quark plane coincides with the hadron plane and $\phi_1=0$. Therefore $A_0=A_2$ or $1-\lambda-2\nu=0$, i.e., the L-T relation is satisfied. Higher order pQCD processes allow the quark plane to deviate from the hadron plane, i.e., $\phi_1 \neq 0$, leading to a violation of the L-T relation. For a nonzero $\phi_1$, Eq.~(\ref{eq:eq8}) shows that $A_2 < A_0$. Therefore, when the L-T relation is violated, $A_0$ must be greater than $A_2$ or, equivalently, $1 - \lambda - 2\nu >0$. This expectation of $1 - \lambda - 2\nu >0$ in the geometric approach is in agreement with the results of the NNLO pQCD calculations shown in Figs.~\ref{fig1_na10_140}-\ref{fig1_e866d}. The geometric approach offers a simple interpretation for this result.
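The inequality $A_2 \le A_0$ follows directly from Eq.~(\ref{eq:eq8}), since $A_0 - A_2 = \langle \sin^2\theta_1\,(1-\cos 2\phi_1)\rangle \ge 0$ for any event ensemble. A small numerical sketch (illustrative only; the event distributions below are assumed toy distributions, not physical ones):

```python
import math
import random

# Eq. (8): A0, A1, A2 as event averages over quark-plane angles (theta1, phi1).
# With phi1 = 0 for every event (quark plane on the hadron plane) A0 = A2;
# any spread of phi1 away from zero gives A2 < A0, i.e. 1 - lambda - 2*nu > 0.

def event_averages(events):
    """A0, A1, A2 from Eq. (8), averaged over (theta1, phi1) pairs."""
    n = len(events)
    A0 = sum(math.sin(t)**2 for t, p in events) / n
    A1 = sum(0.5*math.sin(2*t)*math.cos(p) for t, p in events) / n
    A2 = sum(math.sin(t)**2*math.cos(2*p) for t, p in events) / n
    return A0, A1, A2

random.seed(1)
# coplanar events (phi1 = 0): the L-T relation A0 = A2 is exact
coplanar = [(random.uniform(0, math.pi), 0.0) for _ in range(10000)]
A0, A1, A2 = event_averages(coplanar)
assert abs(A0 - A2) < 1e-12

# acoplanar events (assumed Gaussian spread of phi1): A2 < A0
acoplanar = [(random.uniform(0, math.pi), random.gauss(0.0, 0.5))
             for _ in range(10000)]
A0, A1, A2 = event_averages(acoplanar)
assert A2 < A0
```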
\begin{figure}[tb] \centering \subfloat[] {\includegraphics[width=0.23\textwidth]{fig6a}} \subfloat[] {\includegraphics[width=0.23\textwidth]{fig6b}} \qquad \subfloat[] {\includegraphics[width=0.23\textwidth]{fig6c}} \subfloat[] {\includegraphics[width=0.23\textwidth]{fig6d}} \caption [\protect{}]{(a) Feynman diagram for $q - \bar q$ annihilation where a gluon is emitted from a quark in the beam hadron. (b) Momentum vectors for $q$ and $\bar q$ in the C-S frame before and after gluon emission. The momentum direction of $q$ is now collinear with that of $\bar q$. (c) Feynman diagram for the case where a gluon is emitted from an antiquark in the target hadron. (d) Momentum vectors for $q$ and $\bar q$ in the C-S frame before and after gluon emission for diagram (c).} \label{fig6} \end{figure} Figures~\ref{fig2_compass}(b) and~\ref{fig2_e906}(b) show that the $q_T$ dependencies of $\lambda$ and $\nu$ are insensitive to the value of $x_F$. In contrast, the $\mu$ parameter depends sensitively on $x_F$. This striking difference between the $\lambda$, $\mu$ and $\nu$ parameters can be understood in the geometric approach. At the next-to-leading order (NLO), $\mathcal{O}(\alpha_S)$, a hard gluon or a quark (antiquark) is emitted so that $\gamma^*$ acquires nonzero $q_T$. Figure~\ref{fig6}(a) shows a diagram for the $q - \bar q$ annihilation process in which a gluon is emitted from the quark in the beam hadron. In this case, the momentum vector of the quark is modified such that it becomes opposite to the antiquark's momentum vector in the rest frame of $\gamma^*$ [Fig.~\ref{fig6}(b)]. Since the antiquark's momentum is the same as the target hadron's, the $\hat z^\prime$ axis is along the direction of $-\vec P_T$. From Fig.~\ref{fig_geom}, it is evident that $\theta_1 = \beta$ and $\phi_1 = 0$ in this case. An analogous diagram in which the gluon is emitted from the antiquark in the target hadron is shown in Fig.~\ref{fig6}(c).
In this case, $\theta_1 = \beta$ while $\phi_1 = \pi$. Table~\ref{tab:angles} lists the values of $\theta_1$ and $\phi_1$ for four cases of different combinations of hadron and quark types from which the gluon is emitted~\cite{chang17}. \begin{table}[tbp] \caption {Angles $\theta_1$ and $\phi_1$ for four cases of gluon emission in the $q - \bar q$ annihilation process at order $\alpha_s$. The signs of $A_0$, $A_1 (\mu)$, $A_2 (\nu)$ for the four cases are also listed.} \label{tab:angles} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \hline case & gluon emitted from & $\theta_1$ & $\phi_1$ & $A_0$ & $A_1 (\mu)$ & $A_2 (\nu)$ \\ \hline \hline 1 & beam quark & $\beta$ & 0 & + & + & + \\ \hline 2 & target antiquark & $\beta$ & $\pi$ & + & $-$ & + \\ \hline 3 & beam antiquark & $\pi - \beta$ & 0 & + & $-$ & + \\ \hline 4 & target quark & $\pi - \beta$ & $\pi$ & + & + & + \\ \hline \hline \end{tabular} \end{center} \end{table} Table~\ref{tab:angles} shows that the sign of $\mu$ can be either positive or negative, depending on which parton and hadron the gluon is emitted from. Hence, one expects some cancellation effects for $\mu$ among the contributions from various processes. Each process is weighted by the corresponding density distributions of the interacting partons. At $x_F \sim 0$, the momentum fraction carried by the beam parton ($x_B$) is comparable to that of the target parton ($x_T$). Therefore, the weighting factors for the various processes are of similar magnitude and the cancellation effect can be very significant, resulting in a small value of $\mu$. On the other hand, as $x_F$ increases toward 1, $x_B$ becomes much larger than $x_T$. In this case the weighting factors are dominated by fewer processes, resulting in less cancellation and a larger value of $\mu$. This explains why the $\mu$ parameter exhibits a strong $x_F$ dependence in Figs.~\ref{fig2_compass}(b),~\ref{fig2_e906}(b) and~\ref{fig2b}.
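The sign pattern in Table~\ref{tab:angles} can be verified directly from Eq.~(\ref{eq:eq8}) by evaluating $A_0$, $A_1$ and $A_2$ at the four $(\theta_1,\phi_1)$ combinations (a sketch; $\beta$ is set to an arbitrary representative value between $0$ and $\pi/2$):

```python
import math

# Sign check of the four gluon-emission cases: evaluate the single-event
# integrands of Eq. (8) at the (theta1, phi1) values listed in the table.
beta = 0.6  # any 0 < beta < pi/2 gives the same sign pattern
cases = {1: (beta, 0.0),            # gluon off the beam quark
         2: (beta, math.pi),        # gluon off the target antiquark
         3: (math.pi - beta, 0.0),  # gluon off the beam antiquark
         4: (math.pi - beta, math.pi)}  # gluon off the target quark

signs = {}
for case, (t1, p1) in cases.items():
    A0 = math.sin(t1)**2
    A1 = 0.5*math.sin(2*t1)*math.cos(p1)
    A2 = math.sin(t1)**2*math.cos(2*p1)
    signs[case] = tuple('+' if x > 0 else '-' for x in (A0, A1, A2))

# A0 and A2 are positive in all four cases; A1 (hence mu) flips sign.
assert signs[1] == ('+', '+', '+')
assert signs[2] == ('+', '-', '+')
assert signs[3] == ('+', '-', '+')
assert signs[4] == ('+', '+', '+')
```

Only $A_1$ changes sign between the cases, which is the origin of the strong cancellation (and hence the strong $x_F$ dependence) of $\mu$ discussed in the text.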
Table~\ref{tab:angles} also shows that $A_0$ and $A_2$ have the same sign (positive) for all four cases. This implies the absence of $x_F$-dependent cancellation effects for them. Hence $\lambda$ and $\nu$ have very weak $x_F$ dependencies, as shown in Figs.~\ref{fig2_compass}(b),~\ref{fig2_e906}(b) and~\ref{fig2b}. Therefore, the observed strong rapidity dependence of $\mu$ and the weak rapidity dependence of $\lambda$ and $\nu$ in the pQCD calculations can be nicely described by the geometric picture. In addition, considering the strong $x_F$ dependence of the $q_T$ distribution of the $\mu$ parameter, it will be instructive for the experiments to measure the $q_T$ dependence of $\mu$ in several $x_F$ regions, instead of integrating over the entire $x_F$ range. The NLO pQCD expressions of $\lambda$ and $\nu$ as a function of $q_T$ in Eq.~(\ref{eq:eq11}) have been derived based on a geometric picture of the collision geometry at the parton level~\cite{peng16,chang17}. Within the geometric picture, $A_0$ and $A_2$ at NLO are equal to $\langle\sin^2\theta_1\rangle$ (Eq.~(\ref{eq:eq8})) with $\phi_1=0$. Given $q_T/Q=\tan \theta_1$ or $-\tan\theta_1$, the scaling of $A_0$ and $A_2$ (equivalently $\lambda$ and $\nu$) with $q_T/Q$ can also be understood. \begin{figure}[htbp] \includegraphics[width=1.0\columnwidth]{fig4_compass.eps} \caption{NLO (red points) and NNLO (blue points) pQCD results of $\lambda$, $\mu$, $\nu$ and $1-\lambda-2\nu$ as a function of $q_T$ at the kinematic bin of $5<Q<6$ GeV and $0.2<x_F<0.4$ for D-Y production off the tungsten target with the 190-GeV $\pi^-$ beam in the COMPASS experiment. The NLO pQCD expressions of the $q \bar{q}$ and $qG$ processes are denoted by the dotted and dash-dotted lines respectively.
The solid curves correspond to the fit results described in the text.} \label{fig4_compass} \end{figure} Figure~\ref{fig4_compass} shows both the NLO (red points) and NNLO (blue points) pQCD results of $\lambda$, $\nu$ and $1-\lambda-2\nu$ as a function of $q_T$ at the kinematic bin of $5<Q<6$ GeV and $0.2<x_F<0.4$ for the COMPASS experiment. The corresponding NLO pQCD expressions of the $q_T$ dependence for the $q\bar{q}$ and $qG$ subprocesses in Eq.~(\ref{eq:eq11}) are drawn as the dotted and dash-dotted curves. Assuming the fractions of these two processes are $q_T$ independent, a best fit to the NNLO results of $\lambda$ yields a $q\bar{q}$-process fraction of 83\% for the COMPASS experiment. This value is consistent with the pQCD results shown in Fig.~\ref{fig3_compass}. Applying this relative fraction of the two pQCD processes, the NNLO result of $\nu$ can be reasonably well described, as shown in Fig.~\ref{fig4_compass}, with the acoplanarity parameter $\langle \cos 2\phi_1 \rangle$ set to 0.94. The predicted $q_T$ distribution of the L-T violation $1-\lambda-2\nu$ from the NNLO pQCD can then be nicely described as well. Overall, our studies show that the salient features of $q_T/Q$ scaling and $x_F$ dependence for the $\lambda$, $\nu$ and $\mu$ parameters of fixed-target D-Y experiments evaluated by NLO pQCD, as well as the L-T violation $1-\lambda-2\nu$ from the NNLO pQCD, can be nicely understood using the geometric picture. \section{Summary and Conclusion} \label{sec:conclusion} We have presented a comparison of the measurements of the angular parameters $\lambda$, $\mu$, $\nu$ and $1-\lambda-2\nu$ of the D-Y process from the fixed-target experiments with the corresponding results from the NLO and NNLO pQCD calculations. Qualitatively, the transverse momentum ($q_T$) dependence of $\lambda$, $\mu$ and $\nu$ in the data can be described by pQCD. The difference between the NLO and NNLO results becomes visible at large $q_T$.
The L-T violating part $1-\lambda-2\nu$ remains zero in the NLO pQCD calculation and turns positive in NNLO pQCD. This is contrary to the measured negative values in the pion-induced D-Y experiments NA10 and E615. Data quality, nonperturbative effects such as the Boer-Mulders functions at low $q_T$, and higher-order perturbative QCD at large $q_T$ might account for the discrepancy. From the NLO pQCD calculation, we then present predictions of the angular parameters as a function of $q_T$ in several $Q$ and $x_F$ bins for the ongoing COMPASS and SeaQuest experiments. The $\lambda$ and $\nu$ show some mild dependence on $Q$ and a weak $x_F$ dependence, while $\mu$ exhibits a pronounced dependence on $x_F$. For different $x_F$ values, $\lambda$ and $\nu$ are predicted to approximately scale with $q_T/Q$. The $x_F$ dependence of the angular parameters is well described by the geometric picture. In particular, the weak rapidity dependencies of $\lambda$ and $\nu$ and the pronounced rapidity dependency of $\mu$ can be explained by the absence or presence of rapidity-dependent cancellation effects. The occurrence of acoplanarity between the quark plane and the hadron plane ($\phi_1 \neq 0$) for the pQCD processes beyond NLO leads to a violation of the L-T relation. The predicted positive value of $1-\lambda-2\nu$, or $A_0>A_2$ when $\phi_1$ is nonzero, is consistent with the NNLO pQCD results. The resummation effect of soft-gluon emission is not taken into account in this work. In the geometric approach, summing over multiple gluon emissions by a single quark line is equivalent to the emission of a single gluon. Therefore, as long as the resummation is only performed for a single quark, the L-T relation will still be satisfied, as shown in Ref.~\cite{berger}. For a comprehensive pQCD calculation, the resummation effect should be included, especially in the small-$q_T$ region. We leave it for future investigation.
The NLO and NNLO pQCD calculations should provide a good benchmark for understanding the experimental data on the lepton angular distributions of fixed-target D-Y experiments. It is interesting that many salient features present in the pQCD results can be readily understood by the geometric picture. This intuitive approach could offer useful insights into the origins of many interesting characteristics of the lepton angular distributions in the forthcoming new precision data from the COMPASS and SeaQuest experiments. Any deviation from the pQCD results for the L-T violation as well as the $\nu$ parameter would indicate the presence of nonperturbative effects such as the Boer-Mulders functions. Finally, we emphasize the importance of measuring the angular parameters in the D-Y process, which provides a powerful tool to explore the reaction mechanism and parton distributions, potentially with greater sensitivity than the D-Y cross sections alone. The measurement of the $q_T$ distributions of the $\mu$ parameter in separate $x_F$ bins is suggested, and the pQCD effect should be included in the extraction of the nonperturbative Boer-Mulders effect from the data on $\nu$. \section*{Acknowledgments} \label{sec:acknowledgments} This work was supported in part by the U.S. National Science Foundation and the Ministry of Science and Technology of Taiwan. It was also supported in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Contract No. DE-AC05-060R23177.
\section{Introduction} Black hole physics has been studied extensively. A black hole is ``black'' because it captures all the light that passes through its event horizon; nevertheless, it behaves like a black body in thermodynamics \cite{01} and emits blackbody radiation, which can be described by semi-classical methods such as Hawking radiation as a tunneling process \cite{PLB1}.\\ According to Hawking's theory, the entropy of a black hole is proportional to the area of its event horizon, i.e., to a power of the horizon radius. Black hole thermodynamics is therefore one of the important fields of research in theoretical physics \cite{CQG1, CQG2, CQG3, CQG4, CQG5}. Small statistical fluctuations around the equilibrium point produce correction terms in the black hole entropy \cite{03}. The leading corrections to the entropy are logarithmic \cite{04}, and higher order corrections are proportional to powers of the inverse of the black hole area \cite{05}.\\ Investigating the thermodynamics of black holes in the presence of these correction terms gives us important information about the black hole. Indeed, studying the modified entropy deepens our understanding of quantum effects and may help in formulating a theory of quantum gravity. These correction terms in the black hole entropy are negligible for large black holes, while they are important for small black holes, where they lead to significant results \cite{06}.\\ Several types of black objects have been investigated in the presence of these correction terms to the black hole entropy \cite{07, 08, 09, 010, 011}. Leading-order thermal fluctuations in a charged AdS black hole were considered in Ref. \cite{12}, where a general coefficient for the logarithmic correction of the form $\ln{(S_{0}T^{2})}$ was introduced for the first time, with $S_{0}$ the original entropy of the black hole and $T$ the Hawking temperature.
The leading-order quantum correction to the semi-classical black hole entropy, of the form $\ln S_{0}$, was first calculated in Ref. \cite{04}. It has been suggested that the coefficient of this correction may be universal \cite{7}. Logarithmic corrections of the mentioned form have already been studied for rotating extremal black holes in four and five dimensions \cite{8}, as well as for Schwarzschild and other non-extremal black holes in various dimensions \cite{9}. This kind of logarithmic correction has also been used to study a simple regular black hole solution satisfying the weak energy condition \cite{10}. It has been calculated for three-dimensional black holes with soft hairy boundary conditions \cite{new2}, and the same logarithmic corrections to the black hole entropy have been obtained from the Kerr/CFT correspondence \cite{11}.\\ The P-V criticality of first-order entropy-corrected AdS black holes in massive gravity was recently studied in Ref. \cite{PRD22}. Such a leading-order correction \cite{EPJC22} has also been considered in a hyperscaling-violating background \cite{EPJC33, EPJC44, EPJC55}.\\ The effect of the logarithmic term in the entropy functional formalism was investigated in Ref. \cite{15}, where it was found that the leading correction to the micro-canonical entropy may be used to recover modified theories of gravity such as the $f(R)$ theory of gravity \cite{16, 17}. The effects of leading-order thermal fluctuations were recently studied for the Reissner-Nordstr\"{o}m-AdS black hole \cite{21}, showing that the critical exponents are the same as those without thermal fluctuations.\\ The logarithmic-corrected entropy is also important from the AdS/CFT point of view \cite{23}. In Ref. \cite{23} it has been found that the lower bound on the shear viscosity to entropy ratio \cite{24,25,26,27,28,29} may be violated due to the logarithmic correction.\\ One of the most interesting kinds of black holes is the G\"{o}del black hole \cite{012, PLB2}.
In this paper, we consider the G\"{o}del black hole with electric charge and rotation \cite{013} and examine the thermodynamic and statistical parameters of the black hole with higher order terms in the black hole entropy. The thermodynamics and statistics of the ordinary (non-rotating, uncharged) G\"{o}del black hole were studied in Ref. \cite{IJTP1}. Also, Ref. \cite{IJTP2} studied the effect of another form of the logarithmic correction to the entropy on the thermodynamics and statistics of the Kerr-G\"{o}del black hole. Here, we would like to obtain the corrected thermodynamic and statistical quantities of the Kerr-Newman-G\"{o}del black hole in the presence of higher order corrections.\\ This paper is organized as follows. In the next section, we review some important properties of the Kerr-Newman-G\"{o}del black hole. In section 3 we write the corrected entropy and study the corrected thermodynamics of the Kerr-Newman-G\"{o}del black hole. We obtain the behavior of some important thermodynamic quantities such as the internal and Helmholtz free energies, the enthalpy and the Gibbs free energy. In section 4 we obtain the relation between pressure and volume, which is useful for studying critical points. In section 5 we analyze the specific heat to investigate the thermodynamic stability of the black hole. In section 6 we present a statistical study of the Kerr-Newman-G\"{o}del black hole. Finally, in section 7 we give the conclusion. \section{Kerr-Newman-G\"{o}del black hole} The Kerr-Newman-G\"{o}del black hole in 5-dimensional space-time is described by the following metric \cite{014}, \begin{eqnarray}\label{1} {d}{s}^2=&-&{f}{(r)}[{d}{t}+{\frac{{h}{(r)}}{{f}{(r)}}}(d{\phi}+{\cos}{\theta}{d\psi})]^2\nonumber\\ &+&{1 \over 4}{r^2}({d\theta^2}+{\sin^2}{\theta}{d\psi^2})+{{d}{r^2} \over {v}{(r)}}+{{r^2}{v}{(r)} \over {4}{f}{(r)}}{({d\phi}+{\cos}{\theta}{d}{\psi})^2}, \end{eqnarray} where the Euler angles $\theta$, $\psi$ and $\phi$ run over the ranges $0$ to $\pi$, $0$ to $2\pi$, and $0$ to $4\pi$, respectively.
Also \cite{1101}, \begin{equation}\label{2} {f}{(r)}={1}-{\frac{2\mu}{r^2}}+{\frac{q^2}{r^4}}, \end{equation} \begin{equation}\label{3} {h}{(r)}={j}{r^2}+{3}{j}{q}+{{(2{\mu}-{q})}{a} \over {2}{r^2}}-{{q^2}{a} \over {r^4}} \end{equation} and \begin{eqnarray}\label{4} {v}{(r)}&=&{1}-{{2}{\mu} \over {r^2}}+{{8}{j}({\mu}+{q})[{a}+{2}{j}({\mu}+{2}{q})] \over {r^2}}\nonumber\\ &+&{{2}({\mu}-{q}){a^2} \over {r^4}}+{{q^2}[{1}-{16}{j}{a}-{8}{j^2}({\mu}+{3}{q})] \over {r^4}}, \end{eqnarray} where $\mu$ denotes the black hole mass, $q$ is the electric charge and $j$ represents the G\"{o}del parameter, which is responsible for the rotation of the G\"{o}del universe \cite{G}. The angular velocity of the G\"{o}del black hole is given by \cite{1101}, \begin{equation}\label{5} {\Omega}{(r)}={\Omega}_{\phi}={{h}{(r)} \over {U}{(r)}}, \end{equation} where, \begin{eqnarray}\label{6} {U}{(r)}&=&{{{r^2}{v}{(r)}-{4}{h^2}{(r)}} \over {4}{f}{(r)}}\nonumber\\ &=&-{j^2}{r^2}({r^2}+{2}{\mu}+{6}{q})+{3}{j}{q}{a}+{({\mu}-{q}){a^2} \over {2}{r^2}}-{{q^2}{a^2} \over {4}{r^2}}+{{r^2} \over {4}}. \end{eqnarray} The outer event horizon ${r}_{+}$ is determined by the equation ${v}({r}_{+})=0$, which yields the following relation, \begin{eqnarray}\label{7} {r}_{+}^{2}={\mu}-{4}{j}({\mu}+{q}){a}-{8}{j^2}({\mu}+{q})({\mu}+{2}{q})+\sqrt{\delta} \end{eqnarray} where \begin{eqnarray}\label{7-1} {\delta}=\left({\mu}-{q}-{8}{j^2}{({\mu}+{q})^2}\right)\left({\mu}+{q}-{2}{a^2}-{8}{j}({\mu}+{2}{q})a-{8}{j^2}({\mu}+{2}{q})^2\right). \end{eqnarray} We will use the above solutions to investigate the modified thermodynamics due to the higher order corrections to the black hole entropy. \section{Black hole corrected thermodynamics} We consider the black hole as a thermodynamic system. It has already been shown that the thermodynamic quantities of the G\"{o}del black hole satisfy the first law of thermodynamics \cite{014}.
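As a numerical sanity check of Eqs.~(\ref{7}) and (\ref{7-1}) (a sketch with illustrative parameter values, chosen only so that $\delta>0$ and a horizon exists), the closed-form root indeed satisfies $v(r_+)=0$ with $v(r)$ from Eq.~(\ref{4}):

```python
import math

# Illustrative parameter values (assumed, not fits to any physical system)
mu, q, a, j = 1.0, 0.1, 0.2, 0.0649

def v(r):
    """Metric function v(r) of Eq. (4)."""
    return (1 - 2*mu/r**2
            + 8*j*(mu + q)*(a + 2*j*(mu + 2*q))/r**2
            + 2*(mu - q)*a**2/r**4
            + q**2*(1 - 16*j*a - 8*j**2*(mu + 3*q))/r**4)

# delta of Eq. (7-1) and the horizon of Eq. (7)
delta = ((mu - q - 8*j**2*(mu + q)**2)
         * (mu + q - 2*a**2 - 8*j*(mu + 2*q)*a - 8*j**2*(mu + 2*q)**2))
rp2 = mu - 4*j*(mu + q)*a - 8*j**2*(mu + q)*(mu + 2*q) + math.sqrt(delta)
rp = math.sqrt(rp2)

assert delta > 0          # a real horizon exists for these parameters
assert abs(v(rp)) < 1e-10  # the closed-form root solves v(r_+) = 0
```

The check works because $v(r)=0$ is a quadratic equation in $r^2$, and Eq.~(\ref{7}) is its larger root.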
The Hawking temperature of the Kerr-Newman-G\"{o}del black hole is given by the following formula \cite{1101}, \begin{equation}\label{8} {T}={r}_{+}{v^{\prime}}({r}_{+}) \over {8}{\pi}{\sqrt{U({r}_{+})}}, \end{equation} where $v$ is given by equation (\ref{4}) and the prime denotes the derivative with respect to the black hole radius. Its original (un-corrected) entropy is given by, \begin{equation}\label{9} {S}_{0}={\pi}^2{r}_{+}^2{\sqrt{U({r}_{+})}}. \end{equation} Writing the entropy in terms of the number of microstates $\Omega(E)$ as ${S}={\ln\Omega}(E)$, the single-particle energy spectrum for each quantum number is given by, \begin{equation}\label{10} {E}_{n}={f}(n) \end{equation} while the quantum density of states is given by, \begin{equation}\label{11} {\rho}(E)={\Sigma_{n}}\Omega(E_{n})\delta(E-E_{n}), \end{equation} where the delta function can be written as, \begin{equation}\label{12} \delta(E-E_{n})=\delta(E-f(n))=\delta(n-F(E))|F^{\prime}(E)|, \end{equation} so that the quantum density of states becomes, \begin{equation}\label{13} {\rho}(E)={\Omega}({E})|F^{\prime}({E})|{\sum}_{n=0}^{n=\infty}{\delta}({n}-{F}({E})). \end{equation} For any thermodynamic system, one can then deduce that the corrected entropy $S(E)$ in the presence of thermal fluctuations is calculated as follows, \begin{eqnarray}\label{14} {S}({E})&=&{S}_{0}({E})-{\ln}({F}({E}))-{1 \over 2}{\ln}({\alpha}_{2}^{2})+{\sum}_{n=2}^{\infty}{{{\alpha}_{2n}}({-1})^{n} \over ({2n}){!}{!}\,{\alpha}_{2}^{2n}}\nonumber\\ &+&{{1} \over {2}{!}}{\sum}_{n=3}^{\infty}{\sum}_{m=3}^{\infty}{{{{\alpha}_{n}}{{\alpha}_{m}}{({-1})^{k}}({2k}-{1}){!}{!} \over {n}{!}\,{m}{!}\,{\alpha}_{2}^{2k}}}+\mathcal{O}{\left(\frac{\alpha_{4}^{3}}{{\alpha}_{2}^{12}}\right)}. \end{eqnarray} As the subleading order terms depend on $m$ and $n$, i.e.
order unity, one can write the corrected entropy as follows, \begin{equation}\label{15} S=S_{0}-\frac{\alpha}{2}\ln(S_{0}T^2)+\frac{\gamma}{S_{0}}+\cdots, \end{equation} which is in good agreement with what was predicted in \cite{03}. Here, the higher powers of $1/S_{0}$ have been discarded. $\alpha$ and $\gamma$ are constants that take different values for different systems and are determined by the conditions of the system. We now examine the thermodynamics of the rotating charged G\"{o}del black hole with this entropy and the Hawking temperature.\\ \begin{figure}[th] \begin{center} \includegraphics[scale=.5]{1.eps} \caption{Entropy according to the radius of the event horizon for values $\mu=1$, $q=0.1$, $a=1$, $j=0.0649$.} \label{fig1} \end{center} \end{figure} We give a graphical analysis of the entropy for several values of the parameters (see Fig. \ref{fig1}). For small horizon radius, in the absence of the correction terms, the graph is almost linear. Including the logarithmic correction, the entropy increases, and in the presence of the second-order correction the entropy shows a different behavior. Finally, in the presence of both correction terms the entropy of the system is completely different and is greatly increased.
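The corrected entropy (\ref{15}) is straightforward to evaluate numerically. Below is a minimal sketch (the function name and the sample arguments are our own; in practice $S_0$ and $T$ would come from eqs. (\ref{9}) and (\ref{8})):

```python
import math

def corrected_entropy(S0, T, alpha=1.0, gamma=1.0):
    """Eq. (15): S = S0 - (alpha/2) ln(S0 T^2) + gamma/S0 + ...

    Setting alpha = gamma = 0 recovers the uncorrected entropy S0.
    """
    return S0 - 0.5 * alpha * math.log(S0 * T**2) + gamma / S0
```

With $S_0 T^2 > 1$ the logarithmic term lowers the entropy, while the $\gamma$ term raises it, which is the competition visible in Fig. \ref{fig1}.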
\subsection{Internal energy} In the canonical ensemble, the relation between the entropy and the partition function is as follows, \begin{equation}\label{16} S(T)=k_{B}\ln Z+k_{B}T\frac{\partial\ln Z}{\partial T}, \end{equation} which yields the following relation, \begin{equation}\label{17} \ln Z=\frac{1}{T}\int\frac{S(T)}{k_{B}}\,dT. \end{equation} The internal energy of the system is obtained using the partition function as follows, \begin{equation}\label{18} E=k_{B}T^{2}\frac{d\ln Z}{dT}. \end{equation} \begin{figure}[th] \begin{center} \includegraphics[scale=.3]{2.eps}\includegraphics[scale=.3]{3.eps}\\ \includegraphics[scale=.3]{4.eps}\includegraphics[scale=.3]{5.eps}\\ \caption{Internal energy according to the radius of the event horizon for values $\mu=1$, $q=0.1$, $a=1$ and $j=0.0649$.} \label{fig2} \end{center} \end{figure} We draw the internal energy in the presence and absence of the correction terms. When the first (logarithmic) correction term is included, the presence or absence of the second term has almost no effect on the internal energy of the system; but when the first correction is dropped, the presence or absence of the second term produces large changes in the energy. This effect can be seen in the plots. In the third plot of Fig. \ref{fig2}, both terms are discarded and the entropy equals the original one. In the last plot, with the second term included, the energy ranges from $-100{,}000$ to $-15{,}000$. The maximum of these values, $-15{,}000$, occurs at $r_{+}=0.05$. As we approach $r_{+}=0.08$ the energy decreases steeply, so that $r_{+}=0.08$ can be considered an asymptote of the graph.
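The chain (\ref{17})--(\ref{18}) can be carried out numerically for any entropy profile. A minimal sketch (names, the lower cutoff $T_0$, and the toy profile are our own choices; the trapezoidal rule and a central difference stand in for the integral and derivative):

```python
def log_Z(S, T, k_B=1.0, T0=1e-6, n=20000):
    """Eq. (17): ln Z = (1/T) * integral_{T0}^{T} S(T')/k_B dT'  (trapezoidal rule).

    T0 is a small lower cutoff standing in for the integration constant.
    """
    h = (T - T0) / n
    total = 0.5 * (S(T0) + S(T))
    for i in range(1, n):
        total += S(T0 + i * h)
    return (total * h / k_B) / T

def internal_energy(S, T, k_B=1.0, dT=1e-4):
    """Eq. (18): E = k_B T^2 d(ln Z)/dT, by central difference."""
    dlnZ = (log_Z(S, T + dT, k_B) - log_Z(S, T - dT, k_B)) / (2 * dT)
    return k_B * T**2 * dlnZ
```

As a sanity check, the toy entropy $S(T)=2T$ gives $\ln Z\simeq T$ and hence $E\simeq T^{2}$, matching the closed form.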
\subsection{Helmholtz free energy} The Helmholtz free energy can also be obtained using the entropy and the partition function as follows, \begin{equation}\label{19} F=-k_{B}T\ln Z, \end{equation} so we can find its changes in the presence and absence of the correction terms, as illustrated by Fig. \ref{fig6}. \begin{figure}[th] \begin{center} \includegraphics[scale=.5]{6.eps} \caption{Helmholtz free energy according to the radius of the event horizon for values $\mu=1$, $q=0.1$, $a=1$, and $j=0.0649$.} \label{fig6} \end{center} \end{figure} Fig. \ref{fig6} shows the free energy of the black hole in terms of the radius of the event horizon. As $r_{+}$ goes from $0$ to $3$, the Helmholtz free energy goes from $F=0$ to $F\simeq-1.5$ along a smooth curve.\\ When we add the second correction term, the free energy starts at very low values and its effect disappears around $r_{+}\simeq1$. Therefore, it can be concluded that the second term affects the Helmholtz free energy only for $0\leq r_{+} \leq 1.5$.\\ Now, if we only consider the effect of the first correction term of the entropy, the free energy starts from $F\simeq3$, and its effect extends up to $r_{+}\leq2.5$. The point is that this effect increases the Helmholtz free energy, which is contrary to the effect of the second term.\\ Now, if we consider both correction terms, we see significant changes in the graph. First, there is no starting point in the plot; the energy is infinite at the starting point. Perhaps it can be argued that at $r_{+}=0$ the Helmholtz free energy is meaningless, just like the zero point energy of a harmonic oscillator. Secondly, at $r_{+}\simeq1.6$ the Helmholtz free energy vanishes. Thirdly, for $1.6\leq r_{+} \leq 3$ the graph approaches the unmodified Helmholtz free energy.
However, the obvious difference between this graph and the uncorrected one is that in the first three cases all the curves meet at $r_{+}=3$; when we consider both correction terms, the Helmholtz free energy is about one unit higher than in the unmodified state, a positive change indicating that the system is reversible. \section{The first law of the black hole thermodynamics} In Refs. \cite{014, 1101} it is argued that the thermodynamic quantities of the charged rotating G\"{o}del black hole satisfy the first law of thermodynamics; hence the following relations hold for $\alpha=\gamma=0$, \begin{equation}\label{4-1} dM=TdS+\Omega dJ+\Phi dQ+Wdj \end{equation} and \begin{equation}\label{4-2} \frac{2}{3}M=TS+\Omega J+\frac{2}{3}\Phi Q-\frac{1}{3}Wj, \end{equation} where the mass is calculated as \cite{014}, \begin{equation}\label{4-3} M=\pi\left(\frac{3}{4}\mu-j(\mu+q)a-2j^{2}(\mu+q)(4\mu+5q)\right), \end{equation} while the angular momentum is given by \cite{014}, \begin{equation}\label{4-4} J=\frac{\pi}{2}\left[a\left(\mu-\frac{q}{2}-2j(\mu-q)a-8j^{2}(\mu^{2}+\mu q-2q^{2})\right)-3jq^2+8j^{2}(3\mu+5q)q^2\right], \end{equation} with its conjugate variable given by equation (\ref{5}) evaluated at $r=r_{+}$. Moreover, the conserved charge is obtained as \cite{014}, \begin{equation}\label{4-5} Q=\frac{\sqrt{3}\pi}{2}\left[q-4j(\mu+q)a-8j^2(\mu+q)q\right], \end{equation} which has the following electrostatic potential as its conjugate, \begin{equation}\label{4-6} \Phi=\frac{\sqrt{3}}{2}\left(\frac{q}{r_{+}^{2}}+\left(jr_{+}^{2}+2jq-\frac{qa}{2r_{+}^{2}}\right)\Omega(r_{+})\right). \end{equation} Finally, the generalized force \begin{equation}\label{4-7} W=2\pi(\mu+q)\left(a+2j(\mu+2q)\right) \end{equation} is the conjugate variable of the G\"{o}del parameter.\\ Now we would like to investigate relation (\ref{4-2}) for the above quantities, together with the temperature (\ref{8}) and the corrected entropy (\ref{15}).
In order to give a numerical analysis of equation (\ref{4-2}) we rewrite it as, \begin{equation}\label{4-8} L=TS+\Omega J+\frac{2}{3}\Phi Q-\frac{1}{3}Wj-\frac{2}{3}M=0, \end{equation} and plot $L$ in terms of some of the parameters in Fig. \ref{figL} for three different cases: the G\"{o}del, charged G\"{o}del, and charged rotating G\"{o}del black holes. \begin{figure}[th] \begin{center} \includegraphics[scale=.25]{L1.eps}\includegraphics[scale=.25]{L2.eps}\includegraphics[scale=.25]{L3.eps} \caption{Plots of $L$ in terms of $j$ for values $\mu=1$, and (a) $a=q=0$; (b) $q=0.1$ and $a=0$; (c) $a=q=0.1$.} \label{figL} \end{center} \end{figure} We can see that the first law of black hole thermodynamics holds ($L=0$) for the case $\alpha=\gamma=0$ (dash-dotted (green) line of the plots). Plot (a) shows the ordinary G\"{o}del black hole (uncharged, static), plot (b) the charged G\"{o}del black hole without rotation, and plot (c) the charged rotating G\"{o}del black hole. In all cases we can see that the first law is violated in the presence of the logarithmically corrected entropy (first order correction with $\gamma=0$; see the dashed blue lines). In the case of the higher order corrected entropy there is a special choice of the parameter $j$ which yields $L=0$ and hence satisfies the first law of thermodynamics (see, for example, the solid red lines of the plots). \section{Heat capacity} One of the best ways to investigate the stability of a black hole is to study the sign of the specific heat; it is also possible to study the stability of black holes based on horizon thermodynamics \cite{PLB3}. The heat capacity is written using the black hole entropy and temperature as, \begin{equation}\label{25} C=T\frac{dS}{dT}, \end{equation} which we examine for the black hole stability in the presence of the logarithmic and higher order entropy corrections.
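Inserting the corrected entropy (\ref{15}) into the definition of the heat capacity makes the effect of the corrections explicit (a short derivation we add for clarity, writing $C_{0}$ for the uncorrected heat capacity):
\begin{equation}
C=T\frac{dS}{dT}=C_{0}\left(1-\frac{\alpha}{2S_{0}}-\frac{\gamma}{S_{0}^{2}}\right)-\alpha,\qquad C_{0}\equiv T\frac{dS_{0}}{dT},
\end{equation}
so the logarithmic correction both rescales $C_{0}$ and shifts it by the constant $-\alpha$, while the $\gamma$ term only rescales it.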
\begin{figure}[th] \begin{center} \includegraphics[scale=.5]{10.eps} \caption{Heat capacity according to the radius of the event horizon for values $\mu=1$, $q=0.1$, $a=1$, and $j=0.0649$.} \label{fig10} \end{center} \end{figure} Fig. \ref{fig10} shows the heat capacity diagram in terms of the radius of the event horizon. In the absence of the correction terms the black hole is unstable for $r_{+}<0.65$, but with the first term the graph shifts slightly upward, so that the heat capacity is positive for $r_{+}<0.4$. Including the second term, the rise for $r_{+}<0.2$ is very steep. So, in general, it can be concluded that the corrections take the black hole from an unstable to a stable state. In this diagram the magnitude of the heat capacity is not the issue; only its sign matters, as illustrated by Fig. \ref{fig10}. For $r_{+}>1.65$, the heat capacity is positive, which means stability of the Kerr-Newman-G\"{o}del black hole. We can obtain a similar result by taking $j$ as the variable parameter and using equation (\ref{7}) for the event horizon radius. \section{Statistical mechanics} By using the thermodynamic quantities, we can study the black hole statistics. \subsection{Micro-states} Thermodynamics and statistical mechanics are linked by the equation \begin{equation}\label{26} S=k\ln\Omega, \end{equation} where $\Omega$ is the number of micro-states of the system. \begin{figure}[th] \begin{center} \includegraphics[scale=.3]{11.eps}\includegraphics[scale=.3]{12.eps}\\ \includegraphics[scale=.3]{13.eps}\includegraphics[scale=.3]{14.eps} \caption{Micro-states according to the radius of the event horizon for values $M=1$, $q=0.1$, $a=1$, and $j=0.0649$.} \label{fig11} \end{center} \end{figure} The plots of Fig. \ref{fig11} show the micro-states of the system in terms of the radius of the event horizon.
As seen in the figure, in the absence of any correction terms the peak of the diagram, at $r_{+}\simeq6.53$, equals $5\times{10}^{292}$ (third plot).\\ The system has micro-states in the range $6.3<r_{+}<6.7$. In the presence of only the second correction term, ignoring the first, the micro-states diagram does not change; that is, the second correction term does not affect the micro-states of the system (second plot).\\ In the presence of only the first term, the peak of the diagram is reduced by a factor of $27.7$ (first plot). Therefore, with both corrections included (last plot), and since only the first term affects the diagram, the curve flows toward the one obtained with the first term alone. It can be concluded that the entropy corrections reduce the micro-states of the system by a factor of $27.7$. \subsection{Partition function} By using relation (\ref{17}) one can obtain the black hole partition function and study the effects of the higher order corrections to the entropy. \begin{figure}[th] \begin{center} \includegraphics[scale=.3]{15.eps}\includegraphics[scale=.3]{16.eps}\\ \includegraphics[scale=.3]{17.eps}\includegraphics[scale=.3]{18.eps} \caption{Partition function according to the radius of the event horizon for values $\mu=1$, $q=0.1$, $a=1$, and $j=0.0649$.} \label{fig15} \end{center} \end{figure} The plots of Fig. \ref{fig15} show the partition function of the system in terms of the radius of the event horizon. The partition function gives the statistical characteristics of a system in thermodynamic equilibrium.
This function depends on the temperature and volume, and given that every point of a black hole has a certain temperature and volume, a specific partition function can be attributed to each point of the black hole.\\ Without the correction terms, the peak of the diagram is located at $5\times{10}^{292}$; this point coincides with the location of the micro-states peak.\\ When we only consider the second term, the partition function diagram does not change. In other words, the second correction term has no effect on the partition function of the system.\\ In the presence of the first correction term, the peak of the diagram is reduced.\\ So, with both correction terms included, and given that only the first term affects the diagram of the partition function, the graph in the presence of both terms goes to the graph affected by the first term alone. \subsection{Probability function} Using the following equation, one can obtain the probability that the system is in an energy state $E_{r_{+}}$, \begin{equation}\label{27} P=\frac{\exp(-\beta E_{r})}{\sum_{r}\exp(-\beta E_{r})}=\frac{\exp(-\beta E_{r})}{Z}. \end{equation} \begin{figure}[th] \begin{center} \includegraphics[scale=.5]{19.eps} \caption{Probability function according to the radius of the event horizon for values $\mu=1$, $q=0.1$, $a=1$, and $j=0.0649$.} \label{fig19} \end{center} \end{figure} Fig. \ref{fig19} shows the probability function in terms of the radius of the event horizon.
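Eq. (\ref{27}) is the standard Boltzmann weight; a minimal, numerically stable sketch (the function name and the stabilizing shift are our own):

```python
import math

def boltzmann_probs(energies, beta=1.0):
    """Eq. (27): P_r = exp(-beta E_r) / Z  with  Z = sum_r exp(-beta E_r).

    The minimum energy is subtracted before exponentiating for numerical
    stability; the shift cancels between numerator and denominator.
    """
    e0 = min(energies)
    weights = [math.exp(-beta * (e - e0)) for e in energies]
    Z = sum(weights)
    return [w / Z for w in weights]
```

By construction the probabilities sum to one and decrease with increasing energy, which is the qualitative behavior traced in Fig. \ref{fig19}.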
In the absence of the correction terms, for $r_{+}<0.25$ the probability function takes large values; it approaches $100$ but never reaches it, because we never have a black hole of zero radius.\\ As $r_{+}$ goes from $0.25$ to $0.75$, the probability function drops from $P\simeq0.2$ to $P\simeq0$, and for radii larger than one the probability function is zero. This means that the system is no longer in the energy state $E_{r_{+}}$.\\ Applying the second order correction, the probability function decreases slightly, with its maximum at $r_{+}\simeq0.2$. If we only consider the first term, the probability function falls sharply and does not exceed $0.01$ at any radius.\\ Finally, taking into account all the corrections, the maximum of the probability function is less than $0.01$, and for radii roughly $r_{+}>1$ the probability function tends to zero; this means that the probability that the system is in a state with energy $E_{r_{+}}$ is almost zero, that is, the system energy would have to be infinite. \section{Conclusions} In this paper, we examined the effect of entropy corrections on the thermodynamics and statistical mechanics of the Kerr-Newman-G\"{o}del black hole, and observed that in the presence of both correction terms the entropy of the system is completely different and greatly increased. The leading order correction is logarithmic \cite{3}, while the higher order corrections are proportional to the inverse of the entropy. The internal energy of the system is affected mainly by the first correction term, which plays the dominant role. Considering both correction terms, we found the Helmholtz free energy is about one unit higher than in the unmodified state, a positive change indicating that the system is reversible.
We found that the first law of black hole thermodynamics may hold with the higher order corrections, while it is violated with the logarithmic (first order) correction alone. Over the radial range considered, the second term does not affect the micro-states and partition function of the system, but the first (logarithmic) term has a significant effect: the micro-states and partition function of the system are greatly reduced. In the presence of both correction terms, the probability function is much smaller than without them. At small radius the heat capacity without the correction terms is completely negative, but when we include both terms the heat capacity becomes positive and the black hole goes to the stable phase. This shows that the correction terms are important for the stability of the Kerr-Newman-G\"{o}del black hole. It would be interesting to investigate the Smarr formula \cite{CQG} for the Kerr-Newman-G\"{o}del black hole in the presence of the higher order corrections to the entropy.
\section{Introduction} A big problem with supervised machine learning is the need for huge amounts of labeled data. At least it's a big problem if you don't have the labeled data, and even now, in a world awash with big data, most of us don't. While a few companies have access to enormous quantities of certain kinds of labeled data, for most organizations and many applications, the creation of sufficient quantities of exactly the kind of labeled data desired is cost prohibitive or impossible. Sometimes the domain is one where there just isn't much data. It might be the diagnosis of a rare disease or determining whether a signature matches a few known exemplars. Other times the volume of data needed and the cost of human labeling by Amazon Turkers or summer interns is just too high. Paying to label every frame of a movie-length video adds up fast, even at a penny a frame. We confront the problem of deep learning's big labeled data requirements, offer a rule based strategy for extreme augmentation of small data sets, and apply that strategy, with an image-to-image translation model, to automated cartoon coloring with very limited training data, with industry applications in art, design and animation. \section{A Small Data Problem - Automating Cartoon Coloring} We consider the problem of automating the consistent coloring of a cartoon character type, in a flat color style like old Disney cel animation or The Simpsons. Is animated character coloring a small data problem? In many cases it's not, but sometimes, it is. If we were actually trying to color Bart Simpson or Mickey Mouse and had access to the Fox or Disney film archives, we might well have sufficient data for standard machine learning methods or at least enough footage to make it. There are 625 Simpsons episodes, nearly 20 million frames \citep{wikipedia:17}.
Even for relatively rare characters in film or video, we might be able to leverage frame by frame continuity to color subsequent or intermediate frames, and when choosing production methods for new projects we could model 3D CG characters and color each only once using a 3D shader that imitates old style 2D cel coloring. Here, we explicitly want to study a case of scarcity, when a large body of work doesn't exist. When we have just a few dozen drawings, unconnected across time, only one of them colored, what can we do then? If we are to get from such a small data set to a machine learning model that can consistently paint the rest of the training drawings and the same artist's future drawings of the same type(s), we'll need extreme augmentation without overfitting. \section{The Geometry of Dragons: a Rule Based Solution for 80 Percent} Faced with a shortage of training data, we should be asking if there is a good non machine learning based approach to our problem. If there's not a complete solution, is there a partial solution and would a partial solution do us any good? Do we even need machine learning to color flowers and dragons or can we specify geometric rules for coloring? \subsection{Tell a Kid How to Color} \begin{figure} \centering \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower3_C_annotated.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Blank.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon2_C_annotated.png} \end{subfigure} \caption{Kid's rules: ``Color by body part.''} \label{fig:kidrulesflowerpart} \end{figure} When you don't know what rules to tell a computer, one way to start is to ask what rules you'd tell a person. If you were telling a kid how to color a flower or dragon, what would you tell them? 
You could probably give them one that is colored and just say, ``color the others like this.'' But to be more explicit about what ``like this'' means, you could state rules in terms of body or flower parts: Make the center orange. Make the petals yellow. Make the body green. Make the spikes yellow. Make the eyes white (Fig. 1). For flowers or simple dragons, we'd be done with just two or three rules. For fancy dragons, the rule list would be longer but still doable. We could write down a dozen ``color by body part'' rules to color wings, ears, belly scales, arms, legs, claws, etc. We'd tell the kid the extra rules or provide a couple of examples with the extra body parts and we'd expect our flowers and dragons would get colored the way we wanted them. But for a computer that doesn't know what a ``body'' or a ``petal'' is, it seems like we're as far away as ever. We could train a model to recognize body and flower parts but we'd need a huge amount of labeled training data which we don't have. \subsection{From Body Parts to Geometric Rules} What we'd like to do is directly translate our body part rules into geometric rules that are easy to program. The original coloring task was inspired by examples colored with a ``paint bucket'' tool which colors all the pixels in the same connected component as the clicked pixel. If we find the connected components in a drawing (which is done for us in existing libraries in e.g. Python), we can write down geometric rules using area and distance to label the parts of flowers (Fig. 2) and simple dragons and then use our original kids' color by body part rules. These geometric labeling rules don't work for all of our initial drawings but they give us a solid partial solution that works for about 80 percent. 
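The area-based labeling just described can be sketched in a few lines of pure Python (a minimal sketch: a BFS flood fill stands in for the library connected-component routines mentioned above, and the function names are our own):

```python
from collections import deque

def connected_components(img):
    """4-connected components of white (1) pixels in a 0/1 bitmap;
    drawn lines are 0. Returns a list of sets of (row, col) pixels."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if img[y][x] == 1 and not seen[y][x]:
                comp, q = set(), deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.add((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def label_flower(img):
    """Geometric rules: biggest white component = background,
    second biggest = center, everything else = petals."""
    comps = sorted(connected_components(img), key=len, reverse=True)
    return {"background": comps[0], "center": comps[1], "petals": comps[2:]}
```

On a toy bitmap with an outer background region, a multi-pixel center, and a one-pixel petal, the size ordering picks out the parts exactly as the rules below state.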
\paragraph{Rules for flowers (or simple dragons):} \begin{enumerate} \item ``background''= biggest white component \item ``center'' (or ``body'') = second biggest white component \item ``petals'' (or ``spikes+'') = the remaining white components \item (For dragons, add ``eye'' vs. ``spikes'' rule using distance to background) \end{enumerate} For at least some of the fancy dragon features, we can keep going, finding geometric rules that distinguish between claws and spikes or between eyes and eyelids. But even if we had geometric rules for every feature, our basic geometric rules for labeling body parts are fundamentally flawed. They only work for most of the drawings. For every rule I've written, I have a drawing it doesn't work for. Where do the rules break down? Why don't they work for all drawings? And if they aren't reliable, what good are they? \begin{figure} \centering \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower3_BW_background.png} \caption{Not flower} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower3_BW_center.png} \caption{Center} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower3_BW_onePetal.png} \caption{One petal} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower3_BW_petals.png} \caption{All petals} \end{subfigure} \caption{Color flower by component} \label{fig:colorbycomp} \end{figure} \section{From 80 Percent to a Full Training Set With Rule Breaking Transformations and Extreme Augmentation} Our geometric rules work to color about 80 percent of the original drawings. We'll call those drawings ``rule conforming'', write a program to color them and make those AB pairs the beginning of our training set. Then what? 
\subsection{Where the Rules Break Down - the Other 20 Percent} Let's examine the question of where the rules break down (Fig. 3). Writing geometric rules, we are formalizing assumptions that aren't true for all of our drawings: The background is connected. The background is the biggest component. Flower centers are bigger than all the petals. There are no gaps in drawn lines. Body and limbs are connected. \begin{figure} \centering \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower26E24.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower3E21.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon12_1.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon1_1.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon2_2.png} \end{subfigure} \caption{Where rules break down: line gaps, sizes, back limbs} \label{fig:breakrules} \end{figure} It's these cases that we hope to train our model to take care of because our geometric rules don't. But because they couldn't be rule colored, they aren't in our training set. We need to find a way to add them. \subsection{Rule Breaking Transformations} For each of the assumptions/rules we made, we look for rule breaking transformations that we can apply to a rule conforming drawing A or to a drawing/colored AB pair (by applying the same transformation to A and to B) so that we can augment our AB training set with pairs that break all the rules. We use domain knowledge where appropriate to choose the best functions and parameters. \textit{Background is connected and is the biggest component.} We cropped, scaled or translated to disconnect and/or shrink the background.
\textit{Center of flower smaller than petals.} We assumed our flowers all looked like a sunflower with a big center and smaller petals but what about the ones that look more like a daisy with a small center and big petals or petals of various sizes? We'd like a transformation that can turn our sunflowers into daisies. Since the flowers have an approximate radial symmetry, we switched to polar coordinates, scaled to embed our image square in a unit disk and looked at radially symmetric homeomorphisms of the disk to find a function which would disproportionately shrink the center of our image and make a synthetic daisy. $f(r,\theta)=(r^3,\theta)$ worked perfectly to create daisies (Fig. 4) but distorted dragons well beyond natural variation and almost beyond recognition. \begin{figure} \centering \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower22.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower22P.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower25.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower25P.png} \end{subfigure} \caption{Synthetic daisy creation with $f(r,\theta)=(r^3,\theta)$} \label{fig:rtorcubed} \end{figure} \textit{Gaps in lines.} We expect some test drawings will have poorly connected lines where there's a gap; where, for example, the petal doesn't quite connect to the center in a hastily drawn line. To create gaps in lines in training data, we made erasures on a drawing A from an AB pair, either by hand or in an automated way using a squiggle as a mask, and then paired the new A$'$ with its old colored version B to get a new A$'$B pair with line gaps for the training set. Other functions occasionally created gaps by stretching/thinning lines. 
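The synthetic-daisy map $f(r,\theta)=(r^3,\theta)$ can be sketched as an inverse-mapped image warp (a minimal nearest-neighbour sketch; the function name, the normalization by a disk radius $R$, and the sampling choices are our own assumptions):

```python
import math

def radial_warp(img, cx, cy, R, exponent=3.0):
    """Warp a bitmap with the radially symmetric map f(r, theta) = (r**exponent, theta),
    r normalized to [0, 1] by the disk radius R.

    Inverse-mapped nearest-neighbour sampling: the output pixel at normalized
    radius r takes the input pixel at radius r**(1/exponent), so central
    features shrink while the disk boundary stays fixed."""
    h, w = len(img), len(img[0])
    out = [[img[y][x] for x in range(w)] for y in range(h)]  # copy; pixels outside the disk keep their value
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            r = math.hypot(dx, dy) / R
            if 0 < r <= 1.0:
                s = r ** (1.0 / exponent) / r  # scale factor toward the source radius
                sx, sy = int(round(cx + dx * s)), int(round(cy + dy * s))
                if 0 <= sx < w and 0 <= sy < h:
                    out[y][x] = img[sy][sx]
    return out
```

Because the source is sampled at radius $r^{1/3}$, a central feature of normalized radius $\rho$ lands at radius $\rho^{3}$ in the output: a big sunflower center becomes a small daisy center.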
\subsection{Additional Augmentation} Having filled in the training set with many of the kinds of data we knew we were missing, what else can we do to add useful variation to the data while avoiding overfitting? \textit{Affine transformations.} We included the usual affine transformation augmentations: translation, rotation, scale, skew and mirror flip. \citep[See][]{Bloice:17}. \textit{Elastic distortions/Gaussian blur.} We extended an idea from \citet{simard:03} to use Gaussian blur to change the drawn line. Simard et al. used it to augment the MNIST OCR training set, comparing it to the natural oscillations of the hand. \citep[For an implementation see ][]{ernie:17}. A very useful transformation type, elastic distortions preserved pose and major features but changed characters in interesting and believable ways. For example, two distortions gave the same dragon either a fat, short snout or a long, pointy one and changed the bend in its tail (Fig. 5). \textit{Composition of functions.} For each original AB pair, we composed multiple sequences of transformations to get new AB pairs. Two composition dangers to watch out for are half dragons in space (cropping an edge and then moving that cut edge back into the middle) and unintended duplicates (beware of commutativity.) \section{Identify Target ML Model and Training Needs} As a general problem solving strategy, when the outline of our proposed solution has multiple parts, we should have some level of confidence that we'll be able to solve the other parts before we invest too much effort into one. To the extent that our approach to one part affects another, we should go beyond likelihood estimates and have an idea what the subpart solutions might look like. Before beginning our quest for rule breaking transformations and extreme augmentation, we should have had an idea of the kind of machine learning model that should work if we could build a training set and what kind and how much training data we'd likely need. 
Our target number for a deep learning model might have been in the tens of thousands or hundreds of thousands. It was only the identification of a suitable existing model with comparatively modest data needs that made sufficient augmentation seem possible. \begin{figure} \centering \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon10.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon10E20.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon10E22.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon10E23.png} \end{subfigure} \begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon10E24.png} \end{subfigure} \caption{Elastic distortion/Gaussian blur} \label{fig:elastic} \end{figure} We have a natural pairing of images in two styles (uncolored drawing and its colored version) and we want to go from one of the first kind to one of the second kind, which made our cartoon coloring application look like an excellent candidate for applying ``Image-to-Image Translations with Conditional Adversarial Nets'' \citep{pix2pix:16}. Their facades and city maps applications needed only 400-1100 AB training pairs. So we made that our target range. That's still an ambitious order of magnitude more than our original drawing data and two orders of magnitude more than our hand colored data, but far less than we might have expected. \section{Experiments, Results and Future Research} We had excellent results with uncropped flowers and promising results with dragons from only a few dozen original drawings colored and augmented as described above (all trained with orange/yellow scheme). 
The models handled several issues our geometric rules could not (flower line gaps, small centers and coloring dragon back limbs) but fell short of our aspirations on croppings, background areas near close parts, all-yellow spikes and fancy dragon parts.
\begin{figure} \centering
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon1.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon2.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon3.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon4.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon5.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon6.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon7.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon8.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon9.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon10.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon11.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon12.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon13.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth}
\includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon14.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon15.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon16.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon17.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon18.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon19.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon20.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon21.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon22.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon23.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon24.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon25.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon26.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon27.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon28.png} \end{subfigure} \begin{subfigure}[b]{0.05\linewidth} 
\includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon29.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon30.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon31.png} \end{subfigure}
\begin{subfigure}[b]{0.05\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/dragons_A/Dragon32.png} \end{subfigure}
\caption{Every hand-drawn picture that was used in making the 32Dragons training set} \label{fig:32DragonsOriginalData} \end{figure}
We ran three experiments on a single AWS GPU instance using Hesse's TensorFlow \citep{hesse:17} implementation of Isola et al.'s Pix2Pix image translation model. We trained on all flowers, all dragons, and both dragons and flowers for 200 epochs. We then tested each trained model on a mixed test set of new flowers, dragons and a few others. Our original 40 uncolored ``flower'' and 32 uncolored ``dragon'' (Fig. 6) $400\times400$ px drawings were made in a standard computer Paint program by the author. We trained for an orange/yellow coloring scheme for both flowers and dragons to see if the similarity between flower centers/petals and dragon bodies/spikes would make a mixed training set useful for dragons. \subsection{Experiment 1: 40Flowers} \textit{Training data: }40 original flower drawings, rule-colored and augmented (including erasures) to 506 AB training pairs. \textit{Results: }40Flowers was very good at coloring uncropped flowers. It could handle all the flowers our geometric rules could and several types they could not: line gaps, small centers and various petal sizes. With dragons, strongly cropped flowers and ``other'', it didn't color according to our intended scheme, but it often (though not always) recognized lines as color boundaries (Fig. 7).
\textit{Next steps: }The relative simplicity of the character type and the promising family of homeomorphisms of the disk suggest we could successfully shrink the original flower training set even further and/or improve cropping performance. \begin{figure} \centering \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower1erase-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower3crop-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower5erase-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower2crop-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower6erase-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower6crop-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon6-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/DragonFlowerScene-outputs40F.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower7-outputs.png} \end{subfigure} \caption{Colored by model 40Flowers (trained only on flowers)} \label{fig:40FlowersTest} \end{figure} \subsection{Experiment 2: 32Dragons} \textit{Training data:} 32 original dragon drawings, rule colored and augmented to 469 AB training pairs. We added hand erasures or ``paint bucket'' fills to a half dozen colored drawings to add properly colored back limbs to the training set. 
\textit{Results: }32Dragons had quite good results on simple dragons, but often colored a few spikes orange and muddled orange and yellow at the tail tip or tail spike, possibly confused by the orange ears and by the inconsistent presence of a yellow tail spike in the training set. It did well with the simpler eye style and struggled with the other. It colored the background where the gap between body parts was small. It did a lot right with wings, back limbs and ``other'' pictures but didn't color them quite according to our scheme. It struggled somewhat with gaps, coloring a little outside the line gap and introducing unwanted black in other places. 32Dragons colored flowers in an unexpected way: mostly white centers, and petals mostly orange with white and yellow mixed in. There was often some color in the background. Croppings were worse. Dragon eyes may explain the coloring of flowers, since the round eye is white surrounded by an orange body (Fig. 8). \textit{Next steps: }Several research directions seem promising. We could use transfer learning to build from simpler to more complex characters (both across character types and within a single type, adding body part features) and to incorporate problematic transformations (e.g., erasures or croppings). We could augment a purpose-built data set by saving in-progress drawings at multiple stages. We could look for additional distortion transformations and investigate methods for drawing automation. Relevant works include \citet{Hauberg:16} for learned diffeomorphisms and \citet{Ha:17} for sketching with neural nets.
\begin{figure} \centering \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower1erase-outputs_32D.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower3crop-outputs_32D.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon7-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon7wing-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon7wingtwobacklimbsholeserase-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon4erase-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon4crop-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon5mirror-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon5bellybacklimbs-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon3crop-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon3-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon3earsbacklimbsbkgrdholes-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon2-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon1crop-outputs.png} \end{subfigure} \begin{subfigure}[b]{0.1\linewidth} 
\includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon1mirror-outputs.png} \end{subfigure}
\begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon1erase-outputs.png} \end{subfigure}
\begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/DragonFlowerScene-outputs32D.png} \end{subfigure}
\begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Flower7-outputs_32D.png} \end{subfigure}
\caption{Colored by model 32Dragons (trained only on dragons)} \label{fig:32DragonsTest} \end{figure}
\subsection{Experiment 3: MixedFD} \textit{Training data: }Combined 40Flowers and 32Dragons training sets, 975 AB training pairs. \textit{Results: }MixedFD had quite good results on simple dragons, but not as good as 32Dragons. MixedFD miscolored the eyes, probably confused by orange flower centers. MixedFD colored spikes yellow, but at the expense of miscolored ears. MixedFD was quite good on dragon line gaps, and it was a little better than 32Dragons on crops, but it introduced even more unwanted black. MixedFD was good on uncropped flowers, including line gaps and various center/petal sizes, but had unwanted black. \section{Industry Applications and Further Research} This research is the first step in a broader inquiry into how to generate new works in the style of an individual artist from limited original data, with industry applications in art, design and animation. Automating the coloring of cartoon characters with a consistent color scheme has potential applications in cel-style animation for film and game production and in comics for periodical and book publication, but our target application is more ambitious.
In fashion, interior design and architecture, wherever a manufacturing process makes it cheap to switch patterns (from high-end photography printers to modern looms to CNC steel fabrication equipment), there is the possibility of offering products with designs unique to each customer at mass market prices, provided there is an inexpensive way to create the new designs. The challenge is to maintain design style and quality across a large number of items while introducing automation and variation. For a first use case in wearables, we are currently using the work in this paper, together with further work on drawing creation, to produce a family of designs for T-shirts where each shirt will have a unique but closely related design. Shirt mock-ups shown here (Fig. 9) use test image results from the 32Dragons experiment.
\begin{figure} \centering
\begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon1mirror-outputs.png} \end{subfigure}
\begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/D1onT.png} \end{subfigure}
\begin{subfigure}[b]{0.1\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/Dragon2-outputs.png} \end{subfigure}
\begin{subfigure}[b]{0.15\linewidth} \includegraphics[width=\linewidth]{PicsForPAPIsPaper/D2onT.png} \end{subfigure}
\caption{AI T-shirt designs: unique design for every customer} \label{fig:DragonShirts} \end{figure}
\section{Conclusion} We are exploring ways to use machine learning for automated design generation, with applications across multiple industries. To automate cartoon coloring, we suggested a generalizable strategy for going from a very small data set to a deep learning training set, bootstrapping with geometric rules to get to a partial solution and using strategic rule-breaking transformations and other augmentations to go the rest of the way.
The study of a specific application has allowed for a useful discussion of implementation details and specific outcomes, but the underlying ideas generalize beyond the problem of coloring dragons. Up a few levels of abstraction, the strategy's formulation is equally applicable to image, language or numerical data across a wide range of domains. Given any unlabeled data set, if there is (1) a substantial subset with a method for automated labeling, (2) a method for transforming that labeled subset into labeled data resembling the data we could not label automatically, and (3) other application-appropriate, label-preserving augmentation techniques (if the original data set was small), then we can combine these methods to create a large labeled training set representing the variation in our original small unlabeled training set (and maybe well beyond). \vskip 0.2in
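The strategy in the preceding paragraph can be sketched with placeholder functions; all names here are hypothetical illustrations of the recipe, not code from our implementation:

```python
# Abstract sketch of the data-set-building strategy: auto-label what we can,
# apply rule-breaking transformations to cover the rest, then augment.
def build_training_set(unlabeled, can_auto_label, auto_label,
                       rule_breakers, augmenters):
    # (1) Auto-label the subset we have rules for.
    labeled = [(x, auto_label(x)) for x in unlabeled if can_auto_label(x)]
    # (2) Transform the labeled subset into data like what we couldn't label.
    extended = [t(x, y) for (x, y) in labeled for t in rule_breakers]
    pairs = labeled + extended
    # (3) Apply label-preserving augmentation to everything.
    return pairs + [a(x, y) for (x, y) in pairs for a in augmenters]
```

In our application, step (1) was geometric rule coloring, step (2) was erasures and croppings applied after coloring, and step (3) was the affine and elastic augmentation described earlier.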
\section{Introduction} \vspace{-2mm} As methodology researchers, we often find it challenging to explain intuitively where and how our research advancements in speaker recognition can be used. To demonstrate speaker recognition technology in an appealing way to the public, many challenges need to be resolved. Besides the standard challenges of speaker recognition technology such as background noise \cite{zhao2014robust}, channel mismatch~\cite{Solomonoff2005}, and the requirement of fast response times in large-scale recognition tasks \cite{Schmidt2014}, there are challenges related to the demo design itself. First, the traditional speaker recognition setting requires at least two separate speech inputs from the user, one for enrollment and the other for testing. The requirement of two separate recordings can be inconvenient for a user who wants to quickly test the system. The second challenge in showcasing is how to give attractive feedback to the user. This could be implemented as a real-life application, for example, by using the user's voice to open a physical lock, or, in a less involved way, by displaying recognition scores on a screen \cite{lee2011joint}. \begin{figure}[t!] \centerline{\includegraphics[width=0.8\linewidth]{celeb_screenshot_clipped.png}} \vspace{-2mm} \caption{A screenshot from our voice search web application displaying the basic elements of the UI: Recording button, audio visualization, playback option for the recorded speech, and the results.} \label{fig:screenshot} \end{figure} In this work, we present a concept for creating publicly appealing demos to showcase speaker recognition technology by leveraging public-domain target speaker data collected from YouTube. The core idea is to compare users' speech to that of celebrities on YouTube, who have been enrolled prior to the real-time demonstration.
The results of the comparison are then displayed as a selection of YouTube videos from the best matching celebrities, which allows users to see and listen to the celebrity speakers whom they most resemble (Figure \ref{fig:screenshot}). Even though we focus on speaker recognition research, the same concept could also be applied to other attributes that can be inferred or estimated from speech, such as age, emotion, or language. For example, if the user records an angry voice, the results could show YouTube videos of angry people. When the results include famous public figures, the user's interest in and satisfaction with the demo tend to rise naturally. We saw this positive effect while presenting the demo in a locally organized sub-event of the Europe-wide ``European Researchers' Night 2018''\footnote{\url{https://ec.europa.eu/research/mariecurieactions/actions/european-researchers-night}}, which aims to bring scientific research to the public. We run our demo on a web platform that can be used on PCs and mobile phones with an internet connection, which ensures good accessibility. The web platform communicates with a computation server that runs the speaker recognition back end based on our recent work on computationally efficient i-vector extraction \cite{Vestman2018}. The back end provides the results to the web platform, which displays them using embedded YouTube video players. Our extensive use of YouTube data has been made possible by the recent automated speech data collection efforts in \cite{Nagrani2017} and \cite{Chung2018}, resulting in the VoxCeleb1 and VoxCeleb2 corpora, respectively. These corpora provide a large set of annotated YouTube speech data, including metadata for obtaining web links to the original YouTube videos. To the best of our knowledge, prior speaker recognition demos have not utilized VoxCeleb data in the proposed way.
We are aware of a website\footnote{\url{https://celebsoundalike.com}} with a similar idea, but unfortunately we have not been able to successfully run that demo to see how it functions. Judging by the celebrity speaker names, that demo does not appear to utilize VoxCeleb data and likely does not display YouTube videos in the results. In summary, the current work describes a novel concept that allows speech technology research teams to demonstrate their research without requiring a large amount of additional work. To help other researchers apply the concept to their own research, we share the source code of our web platform, allowing a quick start for prototyping possible demo applications. We tested the concept among the public using our voice comparison demo, which utilizes standard speaker recognition techniques, and the feedback we received was mainly positive. \label{sec:intro} \vspace{-2mm} \section{Web platform for speech demos} \vspace{-2mm} \label{sec:web-platform} We designed the web platform with the sole purpose of demonstrating speech processing systems to the public, and in this work we used it to demonstrate speaker recognition using YouTube data. The platform is implemented as a web service in PHP and JavaScript, supporting different browsers and devices. Users can select one of the speech processing ``methods'' defined by the host of the platform. The methods could, for example, perform speaker recognition or age estimation from the recorded speech. The back end of the platform is implemented in PHP and thus only needs a web server capable of running PHP (\textit{e}.\textit{g}. Apache or Nginx). For simplicity and reliability, the server-side code only receives WAV audio files from the clients, runs a specified method as a \texttt{system()} call, and finally returns the results to the client. For privacy reasons, the audio file is not stored on the server and is immediately removed once the user's request has been handled.
The platform supports including additional user inputs required for the analysis, \textit{e}.\textit{g}. the claimed identity for a speaker verification demo. The front end of the platform is implemented in JavaScript, which also handles recording of the audio in raw format at 16kHz. The features required by this code are supported by most PCs and Android phones, making it easier to share the demo with others. The user interface records a sample of the user's speech, queries which speech processing method should be applied to the recorded sample, sends the sample to the server to be processed, and displays the results. We share the source code of the platform in the hope that it will help other researchers in speech analysis to demonstrate their work to the public. The code includes instructions on how to set up the server in a couple of steps. New speech analysis systems for demonstration can be added by modifying a single JSON file. \vspace{-2mm} \section{Speaker recognition back end} \vspace{-2mm} \label{sec:speaker-recognition} The system comparing the user's voice to voices in YouTube videos can be regarded as a \emph{closed set speaker identification} system. As we only utilize a closed set of YouTube target speakers, we can include the data from the target speakers in the system development. In this section, we describe the data sets and the speaker identification system used for providing functionality to the web front end. \vspace{-2mm} \subsection{YouTube data: VoxCeleb1 \& VoxCeleb2} \vspace{-1mm} The audio-visual VoxCeleb corpora \cite{Nagrani2017, Chung2018} have been adopted in many application areas including speaker recognition \cite{Nagrani2017, Chung2018}, speech separation \cite{Afouras2018}, and emotion recognition \cite{Albanie18a}, to name a few. The VoxCeleb data has been automatically collected from YouTube by exploiting face verification and active speaker detection systems.
An automated pipeline enabled collecting very large-scale speaker recognition data sets: When combined, the VoxCeleb corpora consist of almost 1.3 million speech clips from over 170,000 YouTube videos from more than 7000 speakers and, in total, nearly 3000 hours of speech material. The average length of speech clips in VoxCeleb is about eight seconds. The metadata provided with the VoxCeleb corpora includes, for example, speakers' names, IDs of the original YouTube videos, and the starting and ending times of the clips within the videos expressed as frames. This metadata is enough for setting up a demo where users can find the best matching voices to theirs from YouTube. Although the metadata is automatically obtained, it is, in our experience, fairly accurate. Regarding the correctness of the labels, the authors of VoxCeleb mention that the VoxCeleb2 corpus is mainly intended to be used as a training data set and that during the data collection the thresholds for discarding false positives were not set as strictly as in the VoxCeleb1 data collection \cite{Chung2018}. We have witnessed a few labeling errors in VoxCeleb2, such as Finnish president Tarja Halonen being confused with talk-show host Conan O'Brien. However, the errors are not frequent enough to be a considerable problem for our application. \vspace{-2mm} \subsection{Speaker identification system description} \vspace{-2mm} The acoustic feature vectors of the speaker identification system consist of 20 MFCCs plus their delta and double-delta coefficients. The system discards non-speech frames using an energy-based speech activity detector and normalizes the obtained features to have zero mean and unit variance. For training the system components and enrolling the celebrities, we used those speakers from the VoxCeleb corpora who had more than five utterances of five seconds or longer. There are 903,498 such utterances and 7,363 such speakers.
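The per-utterance front-end processing described above can be sketched as follows. The specific thresholding rule is an assumption for illustration; the paper does not specify the exact energy criterion used.

```python
# Sketch of energy-based speech activity detection and per-utterance
# mean/variance normalization; the threshold rule is a hypothetical choice.
import numpy as np

def select_speech_frames(features, log_energy, threshold=3.0):
    """Keep frames whose log-energy is within `threshold` (in log units)
    of the utterance maximum, i.e. discard low-energy non-speech frames."""
    return features[log_energy > log_energy.max() - threshold]

def cmvn(features):
    """Normalize each feature dimension to zero mean and unit variance
    over the utterance (cepstral mean and variance normalization)."""
    return (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)
```

With 20 MFCCs plus delta and double-delta coefficients, `features` here would be a frames-by-60 matrix.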
In the training of some system components, only a fraction of this data was needed to reach close to optimal recognition accuracy. We trained a universal background model (UBM) using one-thirtieth of the selected 903,498 utterances. The UBM is a 1024-component Gaussian mixture model (GMM) \cite{reynolds2000speaker}, which is used to compute sufficient statistics for i-vector extraction. We compute 800-dimensional i-vectors by compressing mean supervectors of maximum a posteriori (MAP) adapted GMMs using probabilistic principal component analysis (PPCA) as described in \cite{Vestman2018}. This is a (speed-wise) high-performing alternative to the standard i-vector extraction that is traditionally done via front-end factor analysis \cite{dehak2011front, kenny2012small}. We trained the PPCA model using one-fifteenth of the selected data. Prior to scoring, i-vectors are centered using the mean computed from the whole training data of 903,498 utterances and then normalized to unit length. Scoring is performed with a simplified Gaussian probabilistic linear discriminant analysis (G-PLDA) model \cite{garcia2011analysis}, which has a 350-dimensional speaker subspace. The G-PLDA model was trained using the whole training data. At the online stage, the i-vector extracted from the user's recording is scored against all of the 903,498 i-vectors used in PLDA training. The speakers are sorted according to the scores of their highest scoring utterances, from highest score to lowest. Finally, the system sends the names of the \mbox{top-5} speakers together with the links to the YouTube videos that correspond to the highest scoring utterances to the client. \renewcommand{\arraystretch}{1.3} \begin{table*}[t!] \caption{Speaker rank testing for six public figures using 10 audio clips from each speaker. The speaker ranks range from 1 to 5 and 'x' is shown if the result list of top-5 speakers did not contain the correct speaker at all.
The tests are performed with and without a replay channel. The replay experiment does not require direct access to the back end system, but can be done by using the web demo only.} \vspace{2mm} \label{table:rankings} \small \begin{tabular}{p{0.20\linewidth-2\tabcolsep} p{0.20\linewidth-2\tabcolsep}>{\raggedleft\arraybackslash} p{0.05\linewidth-2\tabcolsep}>{\raggedleft\arraybackslash} p{0.05\linewidth-2\tabcolsep}>{\raggedleft\arraybackslash} p{0.05\linewidth-2\tabcolsep} p{0.05\linewidth-2\tabcolsep} p{0.20\linewidth-2\tabcolsep}>{\raggedleft\arraybackslash}p{0.05\linewidth-2\tabcolsep}>{\raggedleft\arraybackslash} p{0.05\linewidth-2\tabcolsep}>{\raggedleft\arraybackslash}p{0.05\linewidth-2\tabcolsep}} & \multicolumn{4}{l}{Without replay channel} & & \multicolumn{4}{l}{With replay channel} \\ \cline{2-5} \cline{7-10} & & \multicolumn{3}{l}{Occurrences in} & & & \multicolumn{3}{l}{Occurrences in} \\ \cline{3-5} \cline{8-10} Speaker's name & list positions for 10 clips & top1 & top3 & top5 & & list positions for 10 clips & top1 & top3 & top5 \\ \hline Hillary Clinton & \texttt{1111111111} & 10 & 10 & 10 & & \texttt{1111111111} & 10 & 10 & 10\\ Ariana Grande & \texttt{1111111111} & 10 & 10 & 10 & & \texttt{3111111111} & 9 & 10 & 10\\ Oprah Winfrey & \texttt{1111111111} & 10 & 10 & 10 & & \texttt{1311111112} & 8 & 10 & 10\\ Johnny Depp & \texttt{1111112112} & 8 & 10 & 10 & & \texttt{1111x11121} & 8 & 9 & 9\\ Bruno Mars & \texttt{1411111112} & 8 & 9 & 10 & & \texttt{1x21111211} & 7 & 9 & 9\\ Conan O'Brien & \texttt{1111111111} & 10 & 10 & 10 & & \texttt{1111111111} & 10 & 10 & 10\\ \hline Total (in \% of max.)& & 93 & 98 & 100 & & & 87 & 97 & 97 \\ \hline \end{tabular} \end{table*} \renewcommand{\arraystretch}{1.0} \vspace{-1mm} \subsection{System runtime considerations at online stage} \vspace{-1mm} To ensure fast response times, we implemented the speaker recognition back end as a server that has all the necessary models preloaded in the memory. 
The server is implemented with Python using its scientific computing libraries (\textit{e}.\textit{g}. NumPy and SciPy). We pay special attention to the PLDA scoring and i-vector extraction as they are the most time-consuming steps during the computation. In \cite{garcia2011analysis}, it is shown that the score for a trial using G-PLDA can be computed as \[ \textrm{score} = \vec{\tilde\eta}_1 ^\intercal \widetilde Q \vec{\tilde\eta}_1 + \vec{\tilde\eta}_2 ^\intercal \widetilde Q \vec{\tilde\eta}_2 + 2 \vec{\tilde\eta}_1 ^\intercal \Lambda \vec{\tilde\eta}_2 + \textrm{const}, \] where $\vec{\tilde\eta}_1$ and $\vec{\tilde\eta}_2$ are lower dimensional projections of the enrollment and test i-vectors, respectively, and where $\vec{\tilde\eta}_1 ^\intercal \widetilde Q \vec{\tilde\eta}_1$ and $\vec{\tilde\eta}_1 ^\intercal \Lambda$ can be precomputed. As we work with an identification system (one test segment vs. all enrollment segments), the second term $ \vec{\tilde\eta}_2 ^\intercal \widetilde Q \vec{\tilde\eta}_2$ is a constant and thus can be neglected. Therefore, to get all the $n=903{,}498$ scores at the online stage, we only need to compute \[ \textrm{scores} = \vec{\nu} + 2 D P \vec{\eta}_2, \] where $\vec{\nu}$ is an $n$-dimensional vector containing the precomputed values $\vec{\tilde\eta}_1 ^\intercal \widetilde Q \vec{\tilde\eta}_1$, matrix $D \in \mathbb{R}^{n \times 350}$ contains the precomputed vectors $\vec{\tilde\eta}_1 ^\intercal \Lambda$, and $P$ is a $350 \times 800$ projection matrix that projects the test i-vector $\vec{\eta}_2$ to a lower dimensional space so that $\vec{\tilde\eta}_2 = P \vec{\eta}_2$. The product $D\vec{\tilde\eta}_2$ can be efficiently parallelized. The i-vector extraction using PPCA is simply a matter of compressing a 61440-dimensional GMM supervector to an 800-dimensional space using a precomputed projection matrix.
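The scoring shortcut and the top-5 ranking can be illustrated with the following NumPy sketch. The matrices here are random stand-ins for the precomputed model terms, and the sizes are toy values rather than the real $n=903{,}498$:

```python
# Sketch of the precomputed G-PLDA scoring shortcut and top-5 ranking.
# nu, D and P are random stand-ins for the precomputed model quantities.
import numpy as np

rng = np.random.default_rng(0)
n, d_sub, d_iv = 1000, 350, 800         # toy sizes; the real n is 903,498
nu = rng.standard_normal(n)             # precomputed eta1^T Q~ eta1 values
D = rng.standard_normal((n, d_sub))     # precomputed rows eta1^T Lambda
P = rng.standard_normal((d_sub, d_iv))  # projection to the speaker subspace

def score_all(eta2):
    """scores = nu + 2 D P eta2; the eta2^T Q~ eta2 term is identical for
    every enrollment utterance and so cannot change the ranking."""
    return nu + 2.0 * (D @ (P @ eta2))

def top5(scores, speaker_ids):
    """Rank speakers by their single highest-scoring utterance, keep five."""
    best = {}
    for s, spk in zip(scores, speaker_ids):
        if spk not in best or s > best[spk]:
            best[spk] = s
    return sorted(best, key=best.get, reverse=True)[:5]
```

The matrix-vector product `D @ (P @ eta2)` is the dominant cost and is the product that benefits from parallelization.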
Note that the traditional approach for i-vector extraction would, in addition, require inverting an $800 \times 800$ posterior covariance matrix~\cite{madikeri2014fast, Vestman2018}. \vspace{-1mm} \section{System evaluation} \vspace{-2mm} We tested our voice search demo and the underlying speaker recognition back end in multiple ways, using both objective and subjective measures in the evaluation. On the objective side, we computed an equal error rate (EER) using the VoxCeleb speaker verification protocol, and we further tested the rankings that the system displays for newly downloaded and replayed YouTube data. On the subjective side, we gathered feedback from the users of the system, including their opinions on how close the displayed top five celebrities sound to the user. \vspace{-1mm} \subsection{Evaluation using VoxCeleb speaker verification protocol} \vspace{-1mm} The VoxCeleb1 speaker verification test protocol includes 37720 trials with a balanced number of same-speaker trials and impostor trials. The trial list has been formed using 4715 utterances from 40 speakers. Using this protocol, we obtained an EER of 6.69\%. This result is better than the baseline result for i-vectors in \cite{Chung2018}, but the two should not be directly compared as our system utilizes the testing utterances also in system training. \vspace{-1mm} \subsection{Speaker rank testing on non-VoxCeleb YouTube data} \vspace{-1mm} To test the final deployed demo, we studied the speaker rankings the system outputs. For this purpose, we collected a small set of new YouTube data. This set contains 10 new speech clips for each of six public figures in the VoxCeleb corpora. The clips are about 15 seconds long each and they are extracted from videos that are not already present in the VoxCeleb corpora. When the new clips are fed to the speaker recognition back end, the output lists of top-5 speakers should contain the correct speaker, as these speakers are present in VoxCeleb and hence already enrolled in the system.
The new test data was used with the system in two ways: First, we downloaded the speech clips from YouTube and fed the data directly to the speaker recognition back end. Second, we played the files directly from YouTube and simultaneously recorded them with the web demo. Unlike the first approach, the second one includes the channel effects caused by replaying the data. In the replay experiment, the playback device was a Sony SRS-XB10 portable Bluetooth speaker, while the web demo was run in the Chrome browser on a Nokia 8 smartphone running Android $8{.}1{.}0$. The distance between the two devices was kept at 5 cm, as the recording device was held by hand above the upward-facing speaker. The room in which the experiment took place was quiet, and the only background noise present was the fan noise of the laptop connected to the speaker. For both settings, with and without replay, the speaker rankings for all the test utterances are shown in Table \ref{table:rankings}. In addition, the table contains statistics of the number of occurrences in the top-1, top-3, and top-5 rankings. Without the replay, the system was always able to include the correct speaker in the top-5 list, and 93\% of the time the speaker was identified correctly (\textit{i}.\textit{e}., ranked top-1). Replaying the audio clips decreased the system performance only slightly, as the correct speaker was left outside the top-5 list only twice out of the 60 trials. To gain insight into how long an utterance is required to obtain good results in our celebrity matching demo, we studied the effect of test utterance length on system accuracy. We ran the previous experiment without the replay effect using utterances clipped to lengths ranging from 1 second to 15 seconds. We found that the test segment needs to be at least 9 seconds long to obtain close-to-optimal performance and at least 5 seconds long to obtain identification accuracies greater than 70\% (Figure \ref{fig:length}). \begin{figure}[t!]
\centerline{\includegraphics[width=0.9\linewidth]{length_analysis.pdf}} \vspace{-5mm} \caption{The effect of utterance length on speaker ranking performance. Specifically, the graph shows how often the target speakers are displayed in the top-lists when tested with different lengths of test utterances from the target. An utterance of length 9~s is required to reach close-to-optimal performance.} \label{fig:length} \end{figure} \renewcommand{\arraystretch}{1.3} \begin{table}[t!] \caption{Computation times for the different steps in the voice comparison pipeline. The steps marked with \textbf{*} are parallelized to 16 CPU cores, while the other steps utilize only one core. The total response time is the time it takes to upload the speech and compute and display the results. The data was collected from 402 requests, except for the total response time, which was collected together with the feedback questionnaire (n=27).} \vspace{-2mm} \label{table:times} \small \begin{tabular}{p{0.55\linewidth-2\tabcolsep}>{\raggedleft\arraybackslash}p{0.15\linewidth-2\tabcolsep}>{\raggedleft\arraybackslash}p{0.15\linewidth-2\tabcolsep}>{\raggedleft\arraybackslash}p{0.15\linewidth-2\tabcolsep}} & \multicolumn{3}{l}{Times in milliseconds (ms)} \\ \cline{2-4} & median & mean & SD \\ \hline Audio loading, MFCC extraction & 47.1 & 86.2 & 142.1 \\ Sufficient statistics computation & 20.9 & 46.3 & 98.1 \\ MAP adaptation & 0.8 & 0.9 & 0.6 \\ \mbox{Supervector compression (PPCA)\textbf{*}} & 42.6 & 56.5 & 28.3 \\ I-vector centering \& length norm.
& 0.1 & 0.1 & $<0.1$ \\ I-vector compression (PLDA) & 0.4 & 0.5 & 0.3 \\ PLDA scoring\textbf{*} & 336.4 & 423.8 & 195.8 \\ Sorting speakers & 39.6 & 44.6 & 13.8 \\ \hline Total time in computing server & 521.5 & 661.0 & 331.6 \\ \hline Total response time & 1791.1 & 2503.5 & 1975.1 \\ \hline \end{tabular} \end{table} \renewcommand{\arraystretch}{1.0} \vspace{-2mm} \subsection{Feedback and impressions from public testing} \vspace{-2mm} The first public test for our voice search demo took place in the event ``The European Researchers' Night 2018'' (September 28, 2018), where researchers from many fields presented their research to the public. The event was funded by the EU and organized in many countries across Europe. In our local event, we were showcasing our demo for five hours, and for most of the time there was a long queue of people waiting for their turn to test our demo. In total, approximately 150 people tried the demo. The feedback was mostly positive, although not everyone was satisfied with their results. As the event was targeted at families, many of the testers were children. This was a slight problem, as only a small minority of the speakers in the VoxCeleb corpora are children, making it difficult to find a good voice match for everyone. In the event, we were using our own high-quality microphone (Zoom H6 Handy Recorder, XY mic) and a laptop that was well tested with the demo. To see how the demo works ``in the wild'', we shared a web link to our demo on multiple social media platforms. The shared demo application was equipped with a short feedback questionnaire for subjective evaluation. We also collected error reports containing system information of the devices on which the demo did not work. The public testing revealed that the device and browser support is still quite limited due to some issues with the audio recording and playback support.
Based on the feedback, we estimate that the demo ran on 50 to 70 percent of the device-browser configurations that our test users were using. We also received some good suggestions on how to improve the user interface, and we believe that, together with improved browser support, the user experience can be very good, as the received answers (n=27) to the questionnaire were already fairly positive, as can be seen from Figure \ref{fig:questionnaire}. \begin{figure}[tb!] \centerline{\includegraphics[width=\linewidth]{questionnaire_answers.pdf}} \vspace{-3mm} \caption{Results from the feedback questionnaire, gathered from users of a platform that finds matches for their speech from a set of over 7000 celebrities. Based on these subjective assessments, the system is able to find good matches for users' speech in most cases.} \label{fig:questionnaire} \end{figure} \vspace{-2mm} \subsection{Response and computation times} \vspace{-1mm} During the test in the wild, we collected computation times of the different steps in the voice comparison. The statistics are summarized in Table \ref{table:times}. The average time to compute one voice comparison request was 661 milliseconds, which means that our computation server could, theoretically, respond to about 5000 requests in an hour without processing multiple requests in parallel. The total response time, on average, was about 2.5 seconds. As seen from Figure \ref{fig:questionnaire}, this response speed was considered fast. \vspace{-2mm} \section{CONCLUSIONS} \vspace{-2mm} We successfully capitalized on the public appeal of celebrities with our YouTube voice search demo application. The objective and subjective evaluations of the demo showed that the platform was mostly successful in providing good results while also being convenient to use. The feedback received from the users allows us to further develop our demo platform, which we have shared for open source development at \url{https://github.com/bilalsoomro/speech-demo-platform}.
We would be happy to see the proposed concept applied in the future to other speech-related recognition systems as well. \vfill\pagebreak \bibliographystyle{IEEEbib}
\section{INTRODUCTION\label{intro}} Gamma-ray bursts (GRBs) with unusually low gamma-ray luminosities are classified as low-luminosity GRBs (llGRBs) and are thought to comprise a distinct population from their cosmological counterparts. Although only a handful of nearby events are known, their volumetric rate ($10^2$--$10^3$ Gpc$^{-3}$ yr$^{-1}$) appears to be higher than the beaming-corrected rate of standard GRBs \citep{2006Natur.442.1011P,2006Natur.442.1014S,2006ApJ...645L.113C,2007ApJ...657L..73G,2007ApJ...662.1111L}. They are all associated with broad-lined Ic supernovae (SNe Ic-BL; see \citealt{2006ARA&A..44..507W,2012grbu.book..169H,2017AdAst2017E...5C} for reviews). Well-observed examples of llGRBs and the associated SNe include GRB 980425/SN 1998bw \citep{1998Natur.395..663K,1998Natur.395..670G}, GRB 060218/SN 2006aj \citep{2006Natur.442.1008C,2006Natur.442.1018M,2006ApJ...645L..21M,2006Natur.442.1011P,2006Natur.442.1014S}, and GRB 100316D/SN 2010bh \citep{2011MNRAS.411.2792S,2011ApJ...740...41C,2012ApJ...753...67B,2012A&A...539A..76O}. All the SNe associated with GRBs and llGRBs are suggested to be highly energetic compared with ordinary core-collapse SNe (CCSNe). Their light curves and spectra indicate explosion energies $\sim10$ times larger than the canonical value of $10^{51}$ erg. The association with GRBs and the inferred large kinetic energies imply hidden central-engine activity distinguishing these extraordinary events from ordinary explosions of massive stars powered by neutrinos. From an observational point of view, scrutinizing the gamma-ray and X-ray emission from these highly energetic CCSNe offers an important tool for investigating their origin. The launch of the Neil Gehrels {\it Swift} Observatory \citep{2004ApJ...611.1005G} enabled detailed gamma-ray and X-ray observations of GRBs and rapid follow-ups.
GRBs 060218 and 100316D are well-observed llGRBs detected after the launch of the {\it Swift} satellite. They emitted gamma-rays for unusually long periods compared with cosmological GRBs. The early X-ray spectra of 060218 and 100316D are well fitted by a power-law spectrum combined with a thermal component with a temperature of the order of $kT\sim 0.1$ keV \citep{2006Natur.442.1008C,2011MNRAS.411.2792S}. The long-lasting gamma-ray emission and the large blackbody radii inferred from the thermal component indicate that hot ejecta with a photospheric radius larger than typical radii of compact stars play a vital role in producing the high-energy emission. Radio observations of SNe are another probe of highly energetic explosions through the presence of fast shock waves propagating in the circumstellar medium (CSM) of the exploding star. Follow-up radio observations of GRB-SNe have also been conducted. GRB-SNe, such as SN 1998bw \citep{1998Natur.395..663K,2002ARA&A..40..387W}, are indeed known to be bright radio emitters, with emission likely caused by synchrotron radiation produced by the forward shock sweeping up the ambient gas. Furthermore, the discovery of radio-loud SNe Ic-BL without any gamma-ray signature, e.g., SN 2009bb and SN 2012ap, has revealed a population of relativistic SNe, whose radio emission strongly indicates the presence of ejecta traveling at (mildly) relativistic speeds \citep{2010Natur.463..513S,2015ApJ...799...51M,2015ApJ...805..187C}. Light curve modeling of the radio emission from GRB-SNe and relativistic SNe has also been attempted by several authors \citep[e.g.,][]{2015MNRAS.448..417B,2015ApJ...805..164N}. Despite these multi-wavelength observations and intensive discussion, the progenitor system of llGRBs and the origin of their gamma-ray emission are still poorly understood \citep[see, e.g.,][]{2007MNRAS.375..240L,2007ApJ...659.1420T,2007ApJ...664.1026W,2007ApJ...667..351W,2011ApJ...739L..55B}.
One of the plausible scenarios for the gamma-ray emission is the emergence of a mildly relativistic shock from a CSM in which the progenitor star is embedded (e.g., \citealt{2006Natur.442.1008C,2007ApJ...667..351W,2012ApJ...747...88N}; see also \citealt{1999ApJ...510..379M,1999ApJ...516..788W,2001ApJ...551..946T}). In this scenario, the CSM should be sufficiently dense that the photosphere lies well above the surface of the star, so as to prolong the prompt gamma-ray emission. The mildly relativistic shock can be driven by either a weak jet \citep{2016MNRAS.460.1680I}, the cocoon associated with a jet \citep{2002MNRAS.337.1349R,2005ApJ...629..903L,2013ApJ...764L..12S}, or a jet choked in a star \citep{2011ApJ...739L..55B,2012ApJ...750...68L} or in an extended stellar envelope \citep{2015ApJ...807..172N}. In the relativistic shock breakout scenario, the forward shock, which propagates in the outermost layers of a star or an extended envelope, deposits a fraction of the shock kinetic energy into the internal energy of the shocked gas, which is finally released as bright emission. Thus, the shock propagation plays a critical role in determining how much energy is available for the luminous emission. Most theoretical studies on the shock breakout scenario \citep[e.g.,][]{2012ApJ...747...88N} consider an accelerating shock in a medium with a steep density slope \citep{1999ApJ...510..379M,2001ApJ...551..946T}. However, for a CSM with a shallow density slope, e.g., a steady wind with $\rho\propto r^{-2}$, shocks usually decelerate as they sweep up the surrounding medium \citep{1982ApJ...258..790C,1982ApJ...259..302C}.
The energy loaded into the shocked gas while the shock is propagating in the interior of the CSM can also contribute to emission in the early phase of llGRBs. Recently, we developed a semi-analytic model for relativistic ejecta-CSM interaction \citep{2017ApJ...834...32S}, which can be used for estimating the amount of energy produced by the hydrodynamic interaction. More recently, the new burst GRB 171205A was detected by the {\it Swift} satellite \citep{2017GCN.22177....1D}. The reported $T_{90}$ duration and the $15$--$50$ keV fluence of the burst were $T_{90}=189.4$ s and $(3.6\pm0.3)\times10^{-6}$ erg cm$^{-2}$, respectively. The optical afterglow was immediately identified and found to be associated with a spiral galaxy at $z=0.0368$ \citep{2017GCN.22180....1I}. Assuming a distance of $167$ Mpc, the isotropic equivalent energy of the prompt gamma-ray emission is $1.2\times 10^{49}$ erg, which is much smaller than those of standard GRBs\footnote{The recent paper by \cite{delia2018}, which was published while this manuscript was being peer-reviewed, performed a combined analysis of the {\it Swift} and Konus-Wind datasets and reported a slightly higher isotropic gamma-ray energy, $E_\mathrm{iso}=2.18^{+0.63}_{-0.50}\times 10^{49}$ erg, and a longer $T_{90}=190.5\pm33.9$ s. However, the dataset used in this paper is not significantly different from theirs, leaving the conclusions unchanged. }. From these gamma-ray properties, GRB 171205A is unambiguously classified as an llGRB. Observations in other wavelengths have also been carried out by several groups. Follow-up optical photometric and spectroscopic observations identified an SN component in the optical afterglow only $2.4$ days after the discovery \citep{2017GCN.22204....1D}. The early spectra of the SN component showed SN 1998bw-like spectral features\footnote{See also the recent paper by \cite{2018arXiv181003250W}}.
Radio observations were initiated about $20$ hours after the trigger, revealing a bright radio source \citep{2017GCN.22187....1D}. The prompt {\it Swift} detection and the dedicated multi-wavelength follow-up campaigns have made GRB 171205A one of the most densely observed nearby GRB-SNe. Development of theoretical light curve models self-consistently explaining the multi-wavelength light curves of GRB 171205A would greatly help us constrain the progenitor scenario for llGRBs and the mechanism responsible for the high-energy emission. So far, \cite{2017arXiv171209319D} have attempted a theoretical interpretation of the prompt and afterglow light curves of GRB 171205A in the framework of an off-axis SN-GRB. In this work, we perform light curve modeling of GRB 171205A based on the relativistic ejecta-CSM interaction model developed in our previous work \citep[two separate papers: ][hereafter \hyperlink{sms17}{SMS17} and \hyperlink{sm18}{SM18}]{2017ApJ...834...32S,2018MNRAS.478..110S}. We find that the relativistic ejecta-CSM interaction model can successfully explain the multi-wavelength light curves of GRB 171205A. From the light curve modeling, we obtain constraints on the dynamical properties of the ejecta produced by the stellar explosion associated with GRB 171205A and on the density structure of the CSM surrounding the progenitor star. We then further discuss the population of llGRBs and other X-ray transients in the framework of the CSM interaction scenario. This paper is organized as follows. In Section \ref{sec:emission_model}, we describe our emission model. Results of the light curve modeling are presented in Section \ref{sec:results}. We discuss the implications of the results and the origin of llGRBs in Section \ref{sec:discussion}. Finally, Section \ref{sec:conclusion} concludes this paper.
\section{Emission model}\label{sec:emission_model} \begin{figure} \begin{center} \includegraphics[scale=0.41,bb=0 0 591 481]{./ejecta_CSM.pdf} \caption{Schematic view of the emission model. The upper and lower panels correspond to the stages where the shocked gas is optically thick and optically thin, respectively. } \label{fig:schematic} \end{center} \end{figure} The emission model is based on our previous work (\hyperlink{sms17}{SMS17} and \hyperlink{sm18}{SM18}) with some updates. Figure \ref{fig:schematic} schematically represents the situation considered here. A massive star explodes in the surrounding CSM and creates expanding spherical ejecta. As a result of the collision between the ejecta and the CSM, the gas swept up by the forward and reverse shock fronts forms a geometrically thin shell. At early stages of the dynamical evolution, the shell is optically thick (upper panel). Thus, the radiation produced by the shock dissipation is initially trapped in the shell and gradually escapes via radiative diffusion as the optical depth of the shell drops to unity, which we observe as the prompt gamma-ray and X-ray emission. The shell becomes transparent at later epochs (lower panel). Then, thermal photons leaking through the photosphere, which is now receding into the un-shocked ejecta, are observed as optical and UV emission. In addition, non-thermal electrons produced at the shock fronts give rise to radio and X-ray emission by synchrotron and inverse Compton processes. In the following, we describe our emission model used for the light curve modeling. \subsection{Hydrodynamics}\label{sec:hydrodynamics} In \hyperlink{sms17}{SMS17}, we presented semi-analytic formulae for the dynamical evolution of mildly relativistic ejecta interacting with a CSM under spherical symmetry. The density structure of the ejecta is of fundamental importance for the subsequent dynamical evolution. The following parameters characterize the density structure.
Since the ejecta are freely expanding, the radial velocity $c\beta$, where $c$ is the speed of light, is given by the radial coordinate $r$ divided by the elapsed time $t$, \begin{equation} \beta=\frac{r}{ct}. \end{equation} We assume that the radial density profile of the ejecta is given by a broken power-law function of the 4-velocity, $\Gamma\beta$, where $\Gamma=(1-\beta^2)^{-1/2}$ is the Lorentz factor corresponding to the radial velocity. The power-law exponent of the outer density profile is denoted by $n$, while the inner density structure is assumed to be flat. Thus, the density profile is expressed as follows, \begin{equation} \rho_\mathrm{ej}(r,t)= \left\{\begin{array}{l} \rho_0\left(\frac{t}{t_0}\right)^{-3}(\Gamma_\mathrm{br}\beta_\mathrm{br})^{-n}\\ \hspace{2em}\mathrm{for}\ \ \ \Gamma\beta\leq \Gamma_\mathrm{br}\beta_\mathrm{br}\\ \rho_0\left(\frac{t}{t_0}\right)^{-3}(\Gamma\beta)^{-n}\\ \hspace{2em}\mathrm{for}\ \ \ \Gamma_\mathrm{br}\beta_\mathrm{br}<\Gamma\beta\leq\Gamma_\mathrm{max}\beta_\mathrm{max}, \end{array}\right. \label{eq:density_profile} \end{equation} where $t_0$ is the initial time of the interaction and we set $t_0=1$ s and $\Gamma_\mathrm{br}\beta_\mathrm{br}=0.1$. The maximum Lorentz factor is set to be $\Gamma_\mathrm{max}=5$ throughout this work. Note that the results are not sensitive to the value of the maximum Lorentz factor, because the layers around the maximum velocity are immediately swept up by the reverse shock. The normalization constant $\rho_0$, the exponent $n$, and the break velocity $\beta_\mathrm{br}$ are important parameters specifying how much energy is loaded in the outermost ejecta. These parameters are deeply related to how the relativistic ejecta are produced and how they connect to the non-relativistic supernova ejecta at the bottom, and are therefore highly uncertain.
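The broken power-law profile of Equation (\ref{eq:density_profile}) can be evaluated numerically as in the following minimal NumPy sketch; the function name and argument layout are illustrative, and the fiducial break $\Gamma_\mathrm{br}\beta_\mathrm{br}=0.1$ and $t_0=1$ s are taken from the text.

```python
import numpy as np

def rho_ejecta(beta, t, rho0, n=5, gb_br=0.1, t0=1.0):
    """Broken power-law ejecta density as a function of the 4-velocity.

    Below the break (Gamma*beta <= gb_br) the profile is flat; above it,
    it falls off as (Gamma*beta)^(-n). The t^-3 factor encodes free expansion.
    """
    gb = beta / np.sqrt(1.0 - beta**2)   # 4-velocity Gamma * beta
    gb_eff = np.maximum(gb, gb_br)       # flat core below the break
    return rho0 * (t / t0)**-3 * gb_eff**(-n)
```

Taking the maximum with the break value reproduces both branches of the piecewise definition in a single expression.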
In the following, we assume a relatively flat density profile ($n=5$ for the fiducial model), which yields a large energy dissipation rate at early epochs, as required to explain the bright early emission. The normalization constant $\rho_0$ is determined by specifying the kinetic energy of the relativistic part $(\Gamma\beta\geq1)$ of the ejecta, \begin{equation} E_\mathrm{rel}=4\pi c^5t^3\int_{1/\sqrt{2}}^{\beta_\mathrm{max}}\rho_\mathrm{ej}(r,t)\Gamma(\Gamma-1)\beta^2d\beta, \end{equation} where the lower bound, $\beta=1/\sqrt{2}$, is the velocity corresponding to $\Gamma\beta=1$. We use the normalized kinetic energy $E_\mathrm{rel,51}=E_\mathrm{rel}/(10^{51}\ \mathrm{erg})$ as a free parameter to specify the normalization constant $\rho_0$. In a similar way, the mass of the relativistic ejecta is defined as follows, \begin{equation} M_\mathrm{rel}=4\pi (ct)^3\int_{1/\sqrt{2}}^{\beta_\mathrm{max}}\rho_\mathrm{ej}(r,t)\Gamma\beta^2d\beta. \end{equation} At the initial time of the interaction $t=t_0$, the outermost layer of the ejecta is adjacent to the CSM at $r=c\beta_\mathrm{max}t_0$. We assume a wind-like CSM with a constant mass-loss rate and wind velocity, \begin{equation} \rho_\mathrm{csm}(r)=Ar^{-2}. \end{equation} We adopt the following normalization for the CSM density parameter $A$, $A_\star=A/(5\times 10^{11}\ \mathrm{g}\ \mathrm{cm})$, and treat the non-dimensional quantity $A_\star$ as a free parameter characterizing the CSM density. For a typical wind velocity of compact stars ($v_\mathrm{w}\simeq 10^3$ km s$^{-1}$), $A_\star=1$ corresponds to a mass-loss rate of $\dot{M}\simeq 10^{-5}\ M_\odot$ yr$^{-1}$, which is a typical value for a Galactic Wolf-Rayet star. For late-time light curve modeling, we also explore the possibility that a dense CSM is present only in the immediate vicinity of the progenitor star, while the outer CSM density is similar to a wind with $\dot{M}=10^{-5}\ M_\odot$ yr$^{-1}$.
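The normalization $\rho_0$ follows from $E_\mathrm{rel}$ by numerical quadrature, since for $\beta\geq1/\sqrt{2}$ the ejecta lie on the $(\Gamma\beta)^{-n}$ branch and the $t^3$ factors cancel. A sketch under the fiducial assumptions $n=5$, $\Gamma_\mathrm{max}=5$, and $E_\mathrm{rel,51}=0.5$ (the normalization value is illustrative, not a result quoted in the text):

```python
import numpy as np
from scipy.integrate import quad

c = 2.99792458e10   # speed of light, cm/s
t0 = 1.0            # initial time of the interaction, s
n = 5               # outer power-law exponent
gamma_max = 5.0
beta_max = np.sqrt(1.0 - gamma_max**-2)

def integrand_E(beta):
    """(Gamma*beta)^-n * Gamma*(Gamma-1) * beta^2, the E_rel integrand."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return (gamma * beta)**(-n) * gamma * (gamma - 1.0) * beta**2

# rho0 chosen so that the relativistic part carries E_rel = 0.5e51 erg
I_E, _ = quad(integrand_E, 1.0 / np.sqrt(2.0), beta_max)
E_rel = 0.5e51
rho0 = E_rel / (4.0 * np.pi * c**5 * t0**3 * I_E)
```

The mass $M_\mathrm{rel}$ follows from the same quadrature with $\Gamma\beta^2$ in place of $\Gamma(\Gamma-1)\beta^2$.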
In such cases, we assume that the dense CSM only extends up to $r=r_\mathrm{out}$, \begin{equation} \rho_\mathrm{csm}(r)=\left\{ \begin{array}{ll} Ar^{-2} &r\leq r_\mathrm{out},\\ A_\mathrm{out}r^{-2} &r_\mathrm{out}<r, \end{array}\right. \label{eq:csm_double_power} \end{equation} with $A_\mathrm{out}<A$. The outer CSM density is also normalized so that $A_{\mathrm{out},\star}=A_\mathrm{out}/(5\times 10^{11}\ \mathrm{g}\ \mathrm{cm})$. When the ejecta expand into the CSM, the hydrodynamic interaction between the two media creates forward and reverse shocks propagating in the CSM and the ejecta, converting the kinetic energy of the outer layers of the ejecta into the internal energy of the shocked gas. The basic idea adopted in this model is the so-called ``thin shell approximation'', in which we treat the region between the two shock fronts as a geometrically thin shell, as in the non-relativistic treatment \citep{1982ApJ...258..790C,1982ApJ...259..302C}. We obtain the shell radius $R_\mathrm{s}$ by solving the equation of motion for the thin shell. Accordingly, the evolutions of other hydrodynamic quantities, such as the forward and reverse shock radii $R_\mathrm{fs}$ and $R_\mathrm{rs}$, the corresponding shock velocities $c\beta_\mathrm{fs}$ and $c\beta_\mathrm{rs}$, the shell mass $M_\mathrm{s}$, the shell velocity and Lorentz factor $c\beta_\mathrm{s}$ and $\Gamma_\mathrm{s}$, and the post-shock pressures $P_\mathrm{fs}$ and $P_\mathrm{rs}$ at the forward and reverse shock fronts, are obtained by using the shock jump conditions. We have confirmed that the temporal evolution of these hydrodynamic quantities is in good agreement with hydrodynamic simulations for various sets of the free parameters (\hyperlink{sms17}{SMS17}). \begin{figure*} \begin{center} \includegraphics[scale=0.6,bb=0 0 680 509]{./dynamics.pdf} \caption{Dynamical evolution of the shell caused by the ejecta-CSM interaction.
In the left panels, we plot the temporal evolutions of the shell and the shock radii ($R_\mathrm{s}$, $R_\mathrm{fs}$, and $R_\mathrm{rs}$), the 4-velocity of the shell ($\Gamma_\mathrm{s}\beta_\mathrm{s}$), and the post-shock pressures ($P_\mathrm{fs}$ and $P_\mathrm{rs}$) from top to bottom. The right panels show those of the optical depth of the shell ($\tau_\mathrm{s}$), the energy dissipation rates at the shocks ($\dot{E}_\mathrm{fs}$ and $\dot{E}_\mathrm{rs}$), and the radiation energy of the shell ($E_\mathrm{s,rad}$). The parameters of the ejecta and the CSM are set to $E_\mathrm{rel,51}=0.5$, $n=5$, and $A_\star=25$. } \label{fig:dynamics} \end{center} \end{figure*} Figure \ref{fig:dynamics} shows an example of the semi-analytic calculations. The free parameters specifying the hydrodynamic model are set to $E_\mathrm{rel,51}=0.5$, $n=5$, and $A_\star=25$. The shell and shock radii monotonically increase with time. As seen in the top left panel of Figure \ref{fig:dynamics}, their temporal evolutions are almost identical to each other. This means that the width between the forward and reverse shock fronts is much smaller than the shell radius, justifying the thin shell approximation. As the shell sweeps up the CSM, the mass loading and the pressure gradient force within the shell decelerate the shell, resulting in a decreasing shell velocity. The post-shock pressures at the shock fronts also decrease with time as the velocity and the pre-shock density decrease. \subsection{Emission from optically thick shell} The shocked gas in the thin shell between the forward and reverse shock fronts is optically thick in the early stage of the dynamical evolution. We define the optical depth of the shell as follows, \begin{equation} \tau_\mathrm{s}=\frac{\kappa M_\mathrm{s}}{4\pi R_\mathrm{s}^2}, \label{eq:tau_sh} \end{equation} where $\kappa$ is the opacity and set to $\kappa=0.2$ cm$^2$ g$^{-1}$.
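Equation (\ref{eq:tau_sh}) is a one-liner in code; the helper below (illustrative naming, with the opacity value from the text) is the kind of building block used when evaluating the shell evolution numerically.

```python
import numpy as np

KAPPA = 0.2  # cm^2 g^-1, the opacity adopted in the text

def tau_shell(M_s, R_s, kappa=KAPPA):
    """Optical depth of the thin shell: tau_s = kappa * M_s / (4 pi R_s^2)."""
    return kappa * M_s / (4.0 * np.pi * R_s**2)
```

Since $M_\mathrm{s}$ grows more slowly than $R_\mathrm{s}^2$ at late times, this optical depth eventually drops below unity, marking the transition to the optically thin stage.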
During the optically thick stage ($\tau_\mathrm{s}>1$), photons in the shell gradually escape via radiative diffusion. In our emission model, photons diffusing out from the shell are regarded as the dominant source of the prompt gamma-ray emission. We calculate the bolometric light curve of the emission by adopting the diffusion approximation. The forward and reverse shocks convert the kinetic energy of the ejecta into the radiation energy in the shell. We assume that the internal energy density in the shell is dominated by radiation. The temporal evolution of the radiation energy $E_\mathrm{s,rad}$ of the shell is governed by the balance between energy production by shock dissipation and cooling processes, \begin{equation} \frac{dE_\mathrm{s,rad}}{dt}=\dot{E}_\mathrm{fs}+\dot{E}_\mathrm{rs}-\frac{E_\mathrm{s,rad}}{3V}\frac{dV}{dt}-\dot{E}_\mathrm{diff}, \end{equation} as long as the shell is optically thick, $\tau_\mathrm{s}>1$. The 1st and 2nd terms on the R.H.S. of this equation represent the energy production rates through the forward and reverse shock fronts. The energy production rates are given by \begin{equation} \dot{E}_\mathrm{fs}=4\pi cR_\mathrm{fs}^2\frac{\gamma_\mathrm{ad}P_\mathrm{fs}}{\gamma_\mathrm{ad}-1}\Gamma_\mathrm{s}^2(\beta_\mathrm{fs}-\beta_\mathrm{s}), \end{equation} for the forward shock and \begin{equation} \dot{E}_\mathrm{rs}=4\pi cR_\mathrm{rs}^2\frac{\gamma_\mathrm{ad}P_\mathrm{rs}}{\gamma_\mathrm{ad}-1}\Gamma_\mathrm{s}^2(\beta_\mathrm{s}-\beta_\mathrm{rs}), \end{equation} for the reverse shock (\hyperlink{sms17}{SMS17}), where $\gamma_\mathrm{ad}=4/3$ is the adiabatic exponent. These terms are set to $\dot{E}_\mathrm{fs}=\dot{E}_\mathrm{rs}=0$ after the shell becomes optically thin, $\tau_\mathrm{s}<1$ (instead, the dissipated energy partly contributes to non-thermal emission). The 3rd term represents adiabatic cooling, which is proportional to the fractional change in the shell volume $V$ per unit time.
Finally, the 4th term represents the energy loss due to radiative diffusion and is given by the following formula, \begin{equation} \dot{E}_\mathrm{diff} =4\pi R_\mathrm{s}^2u_\mathrm{s,rad}v_\mathrm{diff}, \label{eq:dEdt_rad} \end{equation} where $u_\mathrm{s,rad}=E_\mathrm{s,rad}/V$ is the radiation energy density of the shell. The diffusion velocity $v_\mathrm{diff}$ is obtained from the radiative transfer equation in the diffusion limit, \begin{equation} v_\mathrm{diff}= \frac{c(1-\beta_\mathrm{s}^2)}{(3+\beta_\mathrm{s}^2)\tau_\mathrm{s}+2\beta_\mathrm{s}}, \label{eq:v_diff} \end{equation} (see the Appendix of \hyperlink{sms17}{SMS17} for details). The diffusion velocity should not exceed the following maximum value, \begin{equation} v_\mathrm{diff,max}=c(1-\beta_\mathrm{s}), \end{equation} above which the radiation energy would travel through the shell superluminally. Therefore, when the velocity calculated by Equation (\ref{eq:v_diff}) is larger than this threshold, we set $v_\mathrm{diff}=v_\mathrm{diff,max}$. This prescription corresponds to flux-limited diffusion in radiation hydrodynamics, although we deal with only a single zone. The temporal evolutions of the optical depth $\tau_\mathrm{s}$, the energy production rates at the forward and reverse shock fronts, and the radiation energy of the shell are presented in the right panels of Figure \ref{fig:dynamics}. Initially, the optical depth steeply rises owing to the rapid accumulation of mass. After reaching the maximum value, it steadily decreases down to $\tau_\mathrm{s}<1$ as the shell radius increases. The shell becomes optically thin at $t\simeq 10^3$ s for this parameter set. The energy production rate due to the shock dissipation continuously decreases both for the forward and reverse shocks.
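The capped diffusion velocity of Equations (\ref{eq:v_diff}) and the limiter can be sketched compactly; the function below is an illustrative rendering of this prescription, not the authors' code.

```python
import numpy as np

def v_diff(beta_s, tau_s, c=2.99792458e10):
    """Diffusion velocity through the shell, flux-limited at c*(1 - beta_s).

    Implements v = c*(1 - beta_s^2) / ((3 + beta_s^2)*tau_s + 2*beta_s),
    capped so radiation never crosses the shell superluminally.
    """
    v = c * (1.0 - beta_s**2) / ((3.0 + beta_s**2) * tau_s + 2.0 * beta_s)
    return np.minimum(v, c * (1.0 - beta_s))
```

For large $\tau_\mathrm{s}$ the uncapped branch dominates (slow diffusion); as $\tau_\mathrm{s}\to0$ the cap takes over, which is the single-zone analogue of flux-limited diffusion.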
The energy dissipated at the forward shock front is larger than that at the reverse shock front by more than one order of magnitude, indicating that the non-thermal emission from the forward shock dominates over its reverse shock counterpart. Because of the less significant contribution from the reverse shock, we only calculate the non-thermal emission from the forward shock in the following light curve modeling. The radiation energy of the shell continues to increase with time as long as the shell is optically thick. After the shell becomes optically thin, the shock dissipation no longer contributes to the increase in the radiation energy. Then, it decreases owing to radiative diffusion and adiabatic cooling. We use the radiative loss rate $\dot{E}_\mathrm{diff}$ to calculate the bolometric luminosity of the emission seen by a distant observer. We assume that the intensity of the emission is isotropic in the rest frame of the shell and use the method described in Appendix \ref{sec:light_curve}, which takes into account relativistic effects, to obtain the bolometric luminosity seen in the observer frame. The observed bolometric luminosity of the diffusive emission is obtained as follows: \begin{equation} L_\mathrm{diff}(t_\mathrm{obs})= c\int\frac{\dot{E}_\mathrm{diff}(t)}{R_\mathrm{s}(t)\Gamma_\mathrm{s}^3[1-\mu\beta_\mathrm{s}(t)]^3}dt, \end{equation} where $\mu$ is given by Equation (\ref{eq:mu}). \subsection{Photospheric emission}\label{sec:photospheric_emission} We also consider the photospheric emission from the pre-shocked SN ejecta. After the shell becomes optically thin, the photosphere recedes into deeper layers of the SN ejecta (Figure \ref{fig:schematic}). Inner layers in the pre-shocked ejecta successively become transparent and release the remaining internal energy as thermal photons. We adopt the following simplified model to calculate the luminosity and the temperature of the photospheric emission.
First, the optical depth between the forward shock front and a layer with a velocity $r/t$ in the pre-shocked SN ejecta at $t$ is calculated as follows, \begin{equation} \tau(t,r)=\tau_\mathrm{s}+\int_{r}^{R_\mathrm{rs}(t)}\kappa\rho_\mathrm{ej}(t,r)dr. \end{equation} The 1st term on the R.H.S. is the contribution from the shell, Equation (\ref{eq:tau_sh}), which is now less than unity. The 2nd term is the contribution from the pre-shocked SN ejecta, which are now truncated by the reverse shock at $r=R_\mathrm{rs}$. The photospheric radius $R_\mathrm{ph}(t)$ at $t$ is determined so that $\tau(t,R_\mathrm{ph})=1$ is satisfied. The ejecta are supposed to originate from a stellar atmosphere swept by a blast wave generated as a result of the core collapse. The shock kinetic energy is converted to the kinetic and internal energy of the downstream gas, which leads to comparable kinetic and thermal energy contents within the ejecta before significant adiabatic loss. We assume that the initial internal energy distribution, $u_\mathrm{ej}(t_0,r)$, of the SN ejecta is proportional to the kinetic energy distribution, $\Gamma(\Gamma-1)\rho_\mathrm{ej}(t_0,r)c^2$. We introduce a constant $f_\mathrm{th}$ specifying the ratio of the internal energy density to the kinetic energy density at $t=t_0(=1\ \mathrm{s})$, i.e., $u_\mathrm{ej}(t_0,r)=f_\mathrm{th}\Gamma(\Gamma-1)\rho_\mathrm{ej}(t_0,r)c^2$. Each layer of the ejecta loses its internal energy through adiabatic expansion. Specifically, the internal energy density should decrease as $u_\mathrm{ej}(t,r)\propto t^{-3\gamma_\mathrm{ad}}$. Thus, the internal energy density profile $u_\mathrm{ej}(t,r)$ is described as follows, \begin{equation} u_\mathrm{ej}(t,r)=\left(\frac{t}{t_0}\right)^{-3\gamma_\mathrm{ad}}f_\mathrm{th}\Gamma(\Gamma-1)\rho_\mathrm{ej}(t_0,r)c^2. \end{equation} In the following, we assume a fixed value of $f_\mathrm{th}=0.3$ so that the observed optical and UV fluxes can be explained. 
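The root-find for the photospheric radius can be sketched as a bisection on $\tau(t,r)=1$; this is a sketch only, with a hypothetical callable `rho_ej` for the ejecta density profile and an assumed electron-scattering opacity `KAPPA`:

```python
import numpy as np

KAPPA = 0.2  # assumed electron-scattering opacity [cm^2 g^-1]

def _trapz(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def tau(r, t, tau_shell, rho_ej, R_rs, n_grid=2000):
    """Optical depth from radius r out through the ejecta, plus the shell term."""
    rr = np.linspace(r, R_rs, n_grid)
    return tau_shell + _trapz(KAPPA * rho_ej(t, rr), rr)

def photospheric_radius(t, tau_shell, rho_ej, r_in, R_rs, tol=1e-8):
    """Bisect for R_ph with tau(t, R_ph) = 1; assumes tau decreases outward."""
    lo, hi = r_in, R_rs
    while (hi - lo) > tol * hi:
        mid = 0.5 * (lo + hi)
        if tau(mid, t, tau_shell, rho_ej, R_rs) > 1.0:
            lo = mid  # still optically thick: move outward
        else:
            hi = mid
    return 0.5 * (lo + hi)
```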
The amount of the radiation energy released within a small time interval from $t$ to $t+\Delta t$ is calculated as follows. The photospheric radius evolves from $R_\mathrm{ph}(t)$ to $R_\mathrm{ph}(t+\Delta t)\simeq R_\mathrm{ph}(t)+dR_\mathrm{ph}/dt\Delta t$ within the time interval. The layer corresponding to the photosphere at $t$ travels at the velocity $R_\mathrm{ph}(t)/t$ and then reaches $r=R_\mathrm{ph}(t)(t+\Delta t)/t$ at $t+\Delta t$. Therefore, the volume $\Delta V$ that newly becomes transparent is given by \begin{eqnarray} \Delta V &=&\frac{4\pi}{3}\left[R_\mathrm{ph}(t)^3\left(\frac{t+\Delta t}{t}\right)^3-R_\mathrm{ph}(t+\Delta t)^3\right] \nonumber\\ &\simeq& 4\pi R_\mathrm{ph}(t)^2\left[\frac{R_\mathrm{ph}(t)}{t}-\frac{dR_\mathrm{ph}}{dt}\right]\Delta t. \end{eqnarray} The internal energy lost through the photosphere per unit time is then \begin{equation} \dot{E}_\mathrm{ph}(t) =4\pi R_\mathrm{ph}(t)^2 \left[\frac{R_\mathrm{ph}(t)}{t}-\frac{dR_\mathrm{ph}}{dt}\right] u_\mathrm{ej}(t,R_\mathrm{ph}). \end{equation} The observed bolometric luminosity is calculated in the same way as for the optically thick shell: \begin{equation} L_\mathrm{ph}(t_\mathrm{obs})= c\int\frac{\dot{E}_\mathrm{ph}(t)}{R_\mathrm{s}(t)\Gamma_\mathrm{s}^3[1-\mu\beta_\mathrm{s}(t)]^3}dt. \end{equation} In addition, we assume that the radiation is well represented by blackbody emission. The radiation temperature at the photosphere is obtained from the Stefan-Boltzmann law, \begin{equation} T_\mathrm{ph}=\left[\frac{\dot{E}_\mathrm{ph}(t)}{4\pi R_\mathrm{ph}(t)^2\sigma_\mathrm{SB}}\right]^{1/4}, \end{equation} where $\sigma_\mathrm{SB}$ is the Stefan-Boltzmann constant. 
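The recession-rate formula for $\dot{E}_\mathrm{ph}$ and the Stefan-Boltzmann temperature can be sketched numerically; `R_ph` is a hypothetical callable for the photospheric radius and `u_ph` stands for $u_\mathrm{ej}(t,R_\mathrm{ph})$:

```python
import numpy as np

SIGMA_SB = 5.6704e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def edot_ph(t, R_ph, u_ph, dt=1.0):
    """Internal energy released per unit time as the photosphere recedes."""
    dRdt = (R_ph(t + dt) - R_ph(t - dt)) / (2.0 * dt)  # central difference
    return 4.0 * np.pi * R_ph(t)**2 * (R_ph(t) / t - dRdt) * u_ph

def T_ph(t, R_ph, u_ph):
    """Photospheric temperature from the Stefan-Boltzmann law."""
    return (edot_ph(t, R_ph, u_ph) / (4.0 * np.pi * R_ph(t)**2 * SIGMA_SB))**0.25
```

For a homologous $R_\mathrm{ph}\propto t$ the bracket vanishes and no energy is released; any recession relative to the homologous flow ($dR_\mathrm{ph}/dt<R_\mathrm{ph}/t$) gives a positive $\dot{E}_\mathrm{ph}$.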
Using the radiation temperature as the color temperature of the blackbody emission, we obtain the luminosity per unit frequency, \begin{equation} \left(\frac{d\dot{E}}{d\nu}\right)_{\mathrm{ph}}=\frac{2\pi\dot{E}_\mathrm{ph}(t)}{c^2\sigma_\mathrm{SB} T_\mathrm{ph}^4}\frac{h\nu^3}{\exp(h\nu/k_\mathrm{B}T_\mathrm{ph})-1}, \end{equation} which is used to calculate the observed luminosity per unit frequency, \begin{equation} L_{\nu,\mathrm{ph}}(t_\mathrm{obs})= c\int\frac{(d\dot{E}/d\bar{\nu})_\mathrm{ph}(\bar{\nu})}{R_\mathrm{s}(t)\Gamma_\mathrm{s}^2[1-\mu\beta_\mathrm{s}(t)]^2}dt, \end{equation} where $\bar{\nu}$ is the comoving frequency given by Equation (\ref{eq:nu_bar}). \subsection{Non-thermal emission} After the shell becomes optically thin and most of the photons trapped in the shell have been released, the shocked gas starts producing non-thermal photons via synchrotron and inverse Compton processes. Photospheric photons considered in the previous subsection serve as seed photons for the inverse Compton emission. We calculate the non-thermal emission by using the method developed by \hyperlink{sm18}{SM18} for the early high-energy emission. We focus on the non-thermal emission from the forward shock, because the energy dissipation rate at the forward shock front dominates over that of the reverse shock (Section \ref{sec:hydrodynamics}). \subsubsection{Electron momentum distribution} We treat non-thermal electrons produced at the shock front in a one-zone approximation. In other words, we assume that these electrons are uniformly distributed in a narrow region close to their production site and do not treat their spatial advection and diffusion. Furthermore, we assume that their angular distribution in the momentum space is isotropic. Thus, their momentum distribution is expressed as a function of time $t$ and the norm of the momentum $p_\mathrm{e}$, $dN/dp_\mathrm{e}(t,p_\mathrm{e})$. 
The temporal evolution of the electron momentum distribution is obtained by solving the following advection equation in the momentum space for the range from $p_\mathrm{min}=10^{-3}m_\mathrm{e}c$ to $p_\mathrm{max}=10^6m_\mathrm{e}c$, \begin{equation} \frac{\partial }{\partial t}\left(\frac{dN}{dp_\mathrm{e}}\right)= \frac{\partial}{\partial p_\mathrm{e}}\left[(\dot{p}_\mathrm{syn}+\dot{p}_\mathrm{ic}+\dot{p}_\mathrm{ad})\frac{dN}{dp_\mathrm{e}}\right] +\left(\frac{d\dot{N}}{dp_\mathrm{e}}\right)_\mathrm{in}. \label{eq:govern} \end{equation} We assume that all the electrons swept by the shock front undergo the non-thermal acceleration process and then obey a power-law momentum distribution with an exponent $-p$, \begin{equation} \left(\frac{d\dot{N}}{dp_\mathrm{e}}\right)_\mathrm{in}\propto \left\{ \begin{array}{cl} p_\mathrm{e}^{-p}&\mathrm{for}\ p_\mathrm{in}\leq p_\mathrm{e}\leq p_\mathrm{max},\\ 0&\mathrm{otherwise}. \end{array} \right. \end{equation} The normalization and the minimum injection momentum $p_\mathrm{in}$ are determined by the energy dissipation rate at the forward shock front and the average electron energy. As is usually assumed in many non-thermal emission models for GRBs and SNe \citep[e.g.,][]{1998ApJ...497L..17S,2001ApJ...548..787S,2002ApJ...568..820G}, we introduce a parameter $\epsilon_\mathrm{e}$ and assume that a fraction $\epsilon_\mathrm{e}$ of the internal energy of the gas in the downstream of the shock is converted to the energy of non-thermal electrons, $u_\mathrm{ele}=\epsilon_\mathrm{e}u_\mathrm{int}$, where $u_\mathrm{int}$ is the internal energy density at the shock front. The average energy of a single non-thermal electron is given by the electron internal energy $u_\mathrm{ele}$ divided by the electron number density $n_\mathrm{ele}$ in the downstream, $u_\mathrm{ele}/n_\mathrm{ele}$. 
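One simple way to integrate Equation (\ref{eq:govern}) is a backward-Euler (implicit) upwind update; the sketch below uses illustrative names and is not the production scheme. Since cooling advects electrons toward lower momenta, the upwind neighbor is the higher-momentum cell, and the resulting bidiagonal system is solved by a single downward sweep:

```python
import numpy as np

def implicit_upwind_step(f, p, pdot, src, dt):
    """One backward-Euler step of df/dt = d(pdot * f)/dp + src, with f = dN/dp_e.

    `p` is an ascending momentum grid; `pdot` >= 0 are momentum-loss rates, so
    the flux pdot * f moves electrons downward in momentum and the scheme
    upwinds from the cell at higher p.
    """
    n = len(p)
    dp = np.diff(p)
    f_new = np.empty(n)
    # highest-momentum cell: no inflow from above p_max
    f_new[-1] = (f[-1] + dt * src[-1]) / (1.0 + dt * pdot[-1] / dp[-1])
    for i in range(n - 2, -1, -1):
        inflow = dt * pdot[i + 1] * f_new[i + 1] / dp[i]
        f_new[i] = (f[i] + dt * src[i] + inflow) / (1.0 + dt * pdot[i] / dp[i])
    return f_new
```

The implicit form keeps the update positivity-preserving and stable for arbitrarily strong cooling, at the price of first-order numerical diffusion.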
The momentum loss rates, $\dot{p}_\mathrm{syn}$ and $\dot{p}_\mathrm{ic}$, due to synchrotron emission and inverse Compton cooling can be calculated from the corresponding energy loss rates. They are given by \begin{equation} \dot{p}_\mathrm{syn}=\frac{4\sigma_\mathrm{T}u_\mathrm{B}}{3m_\mathrm{e}^2c^2}p_\mathrm{e}\sqrt{m_\mathrm{e}^2c^2+p_\mathrm{e}^2}, \label{eq:pdot_syn} \end{equation} and \begin{equation} \dot{p}_\mathrm{ic}=\frac{4\sigma_\mathrm{T}u_\mathrm{rad}}{3m_\mathrm{e}^2c^2}p_\mathrm{e}\sqrt{m_\mathrm{e}^2c^2+p_\mathrm{e}^2}, \label{eq:pdot_ic} \end{equation} where $\sigma_\mathrm{T}$ is the Thomson cross section, $u_\mathrm{B}$ the magnetic energy density, and $u_\mathrm{rad}$ the energy density of seed photons. We further adopt the widely used prescription that the magnetic energy density is given by $u_\mathrm{B}=\epsilon_\mathrm{B}u_\mathrm{int}$, where $\epsilon_\mathrm{B}$ is another microphysics parameter specifying the ratio of the magnetic energy density to the internal energy density. As we will see below, the photospheric and synchrotron emission provide the seed photons for the inverse Compton emission. Thus, we use the radiation energy densities of photospheric and synchrotron photons for $u_\mathrm{rad}$. The adiabatic momentum loss rate is \begin{equation} \dot{p}_\mathrm{ad}=\frac{p_\mathrm{e}}{3V}\frac{dV}{dt}. \label{eq:pdot_ad} \end{equation} The advection equation, Equation (\ref{eq:govern}), is numerically solved by a 1st-order implicit upwind scheme. \subsubsection{Synchrotron emission} For a given electron momentum distribution, calculations of synchrotron emissivity $j_{\nu,\mathrm{syn}}$ and the self-absorption coefficient $\alpha_{\nu,\mathrm{syn}}$ are straightforward. 
We use the standard formulae in the literature \citep[e.g.][]{1979rpa..book.....R}: \begin{equation} j_{\nu,\mathrm{syn}}=\frac{1}{4\pi V}\int P_{\nu,\mathrm{syn}}(\gamma_\mathrm{e})\frac{dN}{dp_\mathrm{e}}dp_\mathrm{e}, \end{equation} for the synchrotron emissivity, and \begin{equation} \alpha_{\nu,\mathrm{syn}}=\frac{c^2}{8\pi V\nu^2}\int \frac{\partial}{\partial p_\mathrm{e}}\left[p_\mathrm{e}\gamma_\mathrm{e}P_{\nu,\mathrm{syn}}(\gamma_\mathrm{e})\right] \frac{1}{p_\mathrm{e}^2}\frac{dN}{dp_\mathrm{e}}dp_\mathrm{e}, \end{equation} for the absorption coefficient, where $P_{\nu,\mathrm{syn}}(\gamma_\mathrm{e})$ is the synchrotron power per unit frequency for a single electron with a Lorentz factor of $\gamma_\mathrm{e}=[1+p_\mathrm{e}^2/(m_\mathrm{e}^2c^2)]^{1/2}$. The corresponding synchrotron self-absorption optical depth is the product of the absorption coefficient and the shell width $V/(4\pi R_\mathrm{s}^2)$, \begin{equation} \tau_{\nu,\mathrm{ssa}}=\frac{c^2}{32\pi^2 R_\mathrm{s}^2\nu^2}\int \frac{\partial}{\partial p_\mathrm{e}}\left[p_\mathrm{e}\gamma_\mathrm{e}P_{\nu,\mathrm{syn}}(\gamma_\mathrm{e})\right] \frac{1}{p_\mathrm{e}^2}\frac{dN}{dp_\mathrm{e}}dp_\mathrm{e}. \end{equation} Since the intensity of the synchrotron emission is expressed in the following way, \begin{equation} I_\mathrm{syn}(\nu)=\frac{j_{\nu,\mathrm{syn}}}{\alpha_{\nu,\mathrm{syn}}}(1-e^{-\tau_{\nu,\mathrm{ssa}}}), \end{equation} the corresponding synchrotron energy loss rate per unit frequency is \begin{equation} \left(\frac{d\dot{E}}{d\nu}\right)_\mathrm{syn}=16\pi^2 R_\mathrm{fs}^2I_\mathrm{syn}(\nu). \end{equation} The observed luminosity per unit frequency is given by \begin{equation} L_{\nu,\mathrm{syn}}(t_\mathrm{obs})= 2c\int\frac{(d\dot{E}(t,\bar{\nu})/d\bar{\nu})_\mathrm{syn}}{R_\mathrm{s}(t)\Gamma_\mathrm{s}^2[1-\mu\beta_\mathrm{s}(t)]^2}dt. 
\end{equation} \subsubsection{Inverse Compton emission} The inverse Compton emission is calculated by the following formula, \begin{equation} I_\mathrm{ic}(\nu)=\int G(\gamma_\mathrm{e},\nu_\mathrm{i},\nu)\frac{dN}{dp_\mathrm{e}}I_\mathrm{seed}(\nu_\mathrm{i})dp_\mathrm{e}d\nu_\mathrm{i}, \end{equation} for a given electron momentum distribution $dN/dp_\mathrm{e}$ and a seed photon intensity $I_\mathrm{seed}(\nu_\mathrm{i})$. The redistribution function $G(\gamma_\mathrm{e},\nu_\mathrm{i},\nu)$ gives the energy spectrum of scattered photons for incoming mono-energetic electrons with the Lorentz factor $\gamma_\mathrm{e}$ and monochromatic photons with the frequency $\nu_\mathrm{i}$ (see the Appendix of \hyperlink{sm18}{SM18}). We also note that the adopted redistribution function correctly takes into account the Klein-Nishina suppression. We consider the photospheric emission and the synchrotron emission as the sources of seed photons: \begin{equation} I_\mathrm{seed}(\nu)=\frac{1}{16\pi^2 R_\mathrm{fs}^2}\frac{1-\beta_\mathrm{s}(t)}{1-\beta_\mathrm{ph}(t')}\left(\frac{d\dot{E}}{d\nu}\right)_\mathrm{ph} +I_\mathrm{syn}(\nu). \end{equation} We note that the contribution from the photospheric emission needs some corrections because of the difference between the photospheric and forward shock radii. The time $t'$ appearing in the 1st term on the R.H.S. is defined so that a photospheric photon emitted at this time into the radial direction reaches the forward shock at $t$, $R_\mathrm{fs}(t)=R_\mathrm{ph}(t')+c(t-t')$. The energy loss rate per unit frequency $(d\dot{E}/d\nu)_\mathrm{ph}$ should be evaluated at this time. The term $(1-\beta_\mathrm{s})/(1-\beta_\mathrm{ph})$ is the correction factor for the energy density, where $\beta_\mathrm{ph}(t')=R_\mathrm{ph}(t')/(ct')$ is the velocity of the layer at the photospheric radius at the time $t'$. 
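The retarded time $t'$ defined by $R_\mathrm{fs}(t)=R_\mathrm{ph}(t')+c(t-t')$ has a unique solution because $dR_\mathrm{ph}/dt'<c$, so it can be found by bisection; a sketch with a hypothetical callable `R_ph`:

```python
C_LIGHT = 2.998e10  # speed of light [cm s^-1]

def retarded_time(t, R_fs_t, R_ph, tol=1e-10):
    """Bisect for t' with R_ph(t') + c (t - t') = R_fs(t).

    The residual R_ph(t') + c (t - t') - R_fs(t) decreases monotonically
    in t' since dR_ph/dt' < c, so bisection on (0, t] is sufficient.
    """
    lo, hi = 1e-6 * t, t
    while (hi - lo) > tol * t:
        mid = 0.5 * (lo + hi)
        if R_ph(mid) + C_LIGHT * (t - mid) > R_fs_t:
            lo = mid  # photon emitted here would overshoot: emit later
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the toy case $R_\mathrm{ph}(t')=0.5\,c\,t'$ and $R_\mathrm{fs}(t)=0.9\,c\,t$, solving gives $t'=0.2\,t$.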
In a similar way to the synchrotron emission, the observed luminosity per unit frequency is given by \begin{equation} L_{\nu,\mathrm{ic}}(t_\mathrm{obs})= 2c\int\frac{(d\dot{E}(t,\bar{\nu})/d\bar{\nu})_\mathrm{ic}}{R_\mathrm{s}(t)\Gamma_\mathrm{s}^2[1-\mu\beta_\mathrm{s}(t)]^2}dt, \end{equation} with \begin{equation} \left(\frac{d\dot{E}}{d\nu}\right)_\mathrm{ic}=16\pi^2 R_\mathrm{fs}^2I_\mathrm{ic}(\nu). \end{equation} \section{Results}\label{sec:results} \subsection{Prompt gamma-ray emission} \begin{figure} \begin{center} \includegraphics[scale=0.4,bb=0 0 576 720]{./bat.pdf} \caption{Theoretical light curves compared with early gamma-ray and X-ray observations by {\it Swift}. The luminosity from $t=-100$ s to $t=+600$ s is shown ($t=0$ corresponds to the BAT trigger). The black and blue crosses represent the {\it Swift}/BAT ($15$--$50$ keV) and XRT ($0.3$--$10$ keV) observations. The theoretical models with different sets of parameters are plotted. In all the panels, the fiducial model with $A_\star=25$, $E_\mathrm{rel,51}=0.5$, and $n=5$ is plotted as a solid line. The dashed and dash-dotted lines in each panel represent models with $A_\star=30$ and $20$ (top), $E_\mathrm{rel,51}=1.0$ and $0.2$ (middle), and $n=4$ and $6$ (bottom). } \label{fig:bat} \end{center} \end{figure} First, we focus on the prompt gamma-ray emission and the subsequent X-ray emission. In Figure \ref{fig:bat}, we plot the gamma-ray and X-ray luminosities of GRB 171205A observed by the {\it Swift}/BAT and XRT. We make use of the data processed and provided by the UK Swift Science Data Centre\footnote{http://www.swift.ac.uk/} \citep[see,][]{2007A&A...469..379E,2009MNRAS.397.1177E}. The gamma-ray luminosity first reaches $\sim 10^{47}$ erg s$^{-1}$ and then declines down to $\sim 10^{46}$ erg s$^{-1}$ in $\sim 200$ s. The XRT observations started at $t_\mathrm{obs}\simeq 135$ s. 
The BAT spectrum (from $t_\mathrm{obs}\simeq -40$ s to $200$ s) is well fitted by a single power-law distribution with a photon index of $1.41\pm0.14$ \citep{2017GCN.22184....1B}. A slightly softer photon index ($1.717^{+0.035}_{-0.024}$) is reported for the XRT spectrum at later epochs (from $t_\mathrm{obs}=135$ s to $400$ s; \citealt{2017GCN.22183....1K}). Theoretical bolometric light curves are compared with the observed light curves in Figure \ref{fig:bat}. Our fiducial model assumes $A_\star=25$, $E_\mathrm{rel,51}=0.5$, and $n=5$, which is shown as a solid line in each panel of Figure \ref{fig:bat}. Models with different values of $A_\star$, $E_\mathrm{rel,51}$, and $n$ are also plotted in each panel. The theoretical light curves show a remarkable agreement with the BAT and XRT light curves. We note that the theoretical light curve in specific energy ranges can be different from those shown in Figure \ref{fig:bat}, since the theoretical model cannot produce frequency-dependent light curves. In particular, the $0.3$--$10$ keV flux from $t_\mathrm{obs}=100$ s to $200$ s, during which both BAT and XRT observations are available, is smaller than the $15$--$50$ keV flux by a factor of a few. Thus, the bolometric flux must be larger than the $0.3$--$10$ keV flux owing to the contribution from photons with higher energies, although the flux of the high-energy photons at later times appears to be below the BAT detection threshold. This is also supported by the XRT photon index being harder than $2$. It is therefore natural that our fiducial bolometric light curve, fitted to the BAT data, is slightly more luminous than the XRT light curve at $t_\mathrm{obs}>100$ s while showing a similar decay rate. \subsection{$E_\mathrm{rad}$--$T_\mathrm{burst}$ diagram} \begin{figure} \begin{center} \includegraphics[scale=0.55,bb=0 0 453 509]{./diagram.pdf} \caption{Scatter plot showing (isotropically equivalent) radiated energy vs burst duration. 
Stars represent nearby GRBs unambiguously classified as llGRBs, while gray circles represent nearby GRBs with SNe whose gamma-ray properties cannot be explained by the CSM interaction model. GRB 171205A is shown by the red star. Gray dots correspond to GRBs with spectroscopically confirmed redshifts. The data are taken from the 3rd {\it Swift}/BAT GRB catalog (https://swift.gsfc.nasa.gov/results/batgrbcat/) compiled by \protect\cite{2016ApJ...829....7L}. The solid curves show the relation between the radiated energy and the duration predicted by the CSM interaction model. In each curve, the assumed value of the CSM density $A_\star$ increases from $A_\star=1$ to $A_\star=1000$, with $E_\mathrm{rel}$ and $n$ fixed. The red, blue, green, and black solid curves (from top to bottom) correspond to models with $E_\mathrm{rel,51}=10$, $1.0$, $0.1$, and $0.01$. The adopted values of $A_\star$ are shown as dashed black curves with labels $A_\star=1$, $10$, $100$, and $1000$. } \label{fig:diagram} \end{center} \end{figure} In Figure \ref{fig:diagram}, we present an $E_\mathrm{rad}$--$T_\mathrm{burst}$ diagram, in which GRBs with known $E_\mathrm{iso}$ and $T_{90}$ are compared with theoretical predictions. We plot {\it Swift} GRBs with known redshifts from the 3rd {\it Swift}/BAT GRB catalog compiled by \cite{2016ApJ...829....7L}. They occupy the upper left region of the diagram, reflecting their large $E_\mathrm{iso}$ and short $T_{90}$. On the other hand, llGRBs, GRB 980425, 060218, 100316D, and 171205A, are located in the lower right region. Some nearby GRBs associated with SNe, GRB 030329, 031203, 120422A, and 161219B, are also plotted. In Figure \ref{fig:diagram}, we plot theoretical predictions as done by \hyperlink{sms17}{SMS17}. The radiated energy $E_\mathrm{rad}$ is calculated by integrating the theoretical bolometric light curve with respect to time. The burst duration $T_\mathrm{burst}$ is defined as the observer time at which $90\%$ of the total radiated energy has been received. 
The solid curves show the relations between the radiated energy $E_\mathrm{rad}$ and the duration $T_\mathrm{burst}$ predicted by theoretical models with different kinetic energies of the ejecta. Each curve is obtained by increasing the CSM density parameter $A_\star$ from $A_\star=1$ to $A_\star=1000$. As discussed by \hyperlink{sms17}{SMS17}, locations of llGRBs are successfully explained by the ejecta-CSM interaction scenario, while some GRBs with SNe (GRB 030329, 031203, 120422A, and 161219B) are not (this will be discussed in Section \ref{sec:llGRBs}). With the new llGRB 171205A included, llGRBs seem to follow the trend that those with longer durations produce larger amounts of isotropic gamma-ray energy. This trend is also in line with the theoretical expectation that larger amounts of CSM lead to more prolonged emission with larger radiated energies. {\it Swift} GRBs lie well above the theoretical curves, clearly indicating that a highly collimated emission region, i.e., an ultra-relativistic jet, is required to explain their large isotropic equivalent gamma-ray energies released over short durations. \subsection{Photospheric emission} \begin{figure*} \begin{center} \includegraphics[scale=0.6,bb=0 0 648 360]{./photospheric.pdf} \caption{Properties of the photospheric emission. In the left column, the photospheric radius $R_\mathrm{ph}$ (top) and temperature $T_\mathrm{ph}$ (bottom) are plotted as a function of the delay time $t-R_\mathrm{ph}(t)/c$. In the right column, we plot the bolometric luminosity as a function of the observer time. } \label{fig:thermal} \end{center} \end{figure*} The photospheric emission expected after the optically thick stage predominantly contributes to optical-UV emission observed by the UVOT telescope on board the {\it Swift} satellite. Figure \ref{fig:thermal} presents the temporal behaviors of the photospheric emission. 
The left panels of Figure \ref{fig:thermal} show the photospheric radius and the temperature as a function of the delay time $t-R_\mathrm{ph}(t)/c$, which roughly corresponds to the observer time. On the other hand, the right panel of Figure \ref{fig:thermal} represents the bolometric light curve, which is given as a function of the observer time $t_\mathrm{obs}$. The photospheric radius steadily increases as the ejecta expand. Optically thick layers stratified below the photosphere cool via adiabatic expansion, which results in the monotonic decline of the bolometric luminosity and the photospheric temperature. \begin{figure} \begin{center} \includegraphics[scale=0.6,bb=0 0 360 576]{./uvot.pdf} \caption{Multi-band light curve of the photospheric emission. In each panel, we plot the luminosity per unit frequency $L_{\nu,\mathrm{ph}}$ for the frequencies corresponding to the central wavelengths of the UVOT filters ({\it uvm2}, {\it uvw2}, {\it uvw1}, {\it u}, {\it b}, and {\it v}-bands from top to bottom). The theoretical multi-band light curves (solid lines) are compared with the UVOT observations (filled circles; \citealt{2017GCN.22202....1S}). } \label{fig:uvot} \end{center} \end{figure} In Figure \ref{fig:uvot}, we plot the multi-band light curves of the photospheric emission. The theoretical luminosity per unit frequency is calculated for the wavelengths of $\lambda=190$, $220$, $260$, $350$, $440$, and $550$ nm, which roughly correspond to the central wavelengths of the UVOT filters. The multi-color light curve is compared with the UVOT observations up to $10^5$ s. The AB magnitudes in the UVOT bands are obtained from the reported magnitudes \citep{2017GCN.22202....1S} and the UVOT zero-points \citep{2008MNRAS.383..627P,2011AIPC.1358..373B}. We correct the magnitudes by assuming $R_V=3.1$ and the Galactic extinction of $E(B-V)_\mathrm{MW}=0.05$ \citep{2017GCN.22202....1S} and using the formula provided by \cite{1989ApJ...345..245C}. 
Although the theoretical multi-band light curve reproduces similar magnitudes to those of the UVOT observations at $t_\mathrm{obs}\simeq10^4$ s, their temporal evolutions do not fully explain the UVOT observations. The observed UV fluxes (uvw1, uvw2, and uvm2) increase from $t_\mathrm{obs}\simeq 5\times 10^3$ s to $10^4$ s, while the theoretical light curves decline. This discrepancy is probably due to the simplified treatment of the photospheric emission. UV emission from SNe is known to suffer from various transfer effects, such as line-blanketing \citep[e.g.,][]{2010ApJ...721.1608B}. Moreover, we note that the assumption of the electron scattering opacity for a fully ionized gas is no longer valid at photospheric temperatures of several thousand K or less. In addition, the theoretical light curve predicts less luminous UV and optical fluxes at $t_\mathrm{obs}\simeq 10^5$ s than the observations. This is probably caused by the contribution of other power source(s). Several ground-based observations have found the re-brightening of this event mainly in optical and IR bands within 2 days after the trigger \citep{2017GCN.22204....1D}, which is interpreted as the emergence of the associated SN. Since we do not include contributions from any other power source, especially radioactive nuclei, the theoretical multi-color light curves continue to decline even after $10^5$ s. \subsection{Non-thermal emission} In the following, we present results of the non-thermal emission modeling. The microphysics parameters used in the calculations are set to $\epsilon_\mathrm{e}=0.1$, $\epsilon_\mathrm{B}=0.01$, and $p=3.0$ \citep[e.g.,][]{1999ApJ...526..716L,2015MNRAS.448..417B,2015ApJ...805..164N}. \subsubsection{Broad-band light curve} \begin{figure} \begin{center} \includegraphics[scale=0.45,bb=0 0 566 680]{./broadband_lc.pdf} \caption{X-ray, optical, and radio light curves calculated by our emission model. 
The light curves in X-ray (top panel), optical-UV (middle panel), and radio (bottom panel) bands are compared with observations of GRB 171205A. In the top panel, we plot the BAT and XRT observations (the same symbols as Figure \ref{fig:bat}). The red solid line is the fiducial model shown in Figure \ref{fig:bat}. The late X-ray emission is dominated by the inverse Compton emission. While the green solid line shows the $0.3$--$10$ keV light curve of the inverse Compton emission, the $\nu L_\nu$ light curves at $1$ and $10$ keV are plotted as dashed and dash-dotted lines. In the middle panel, we plot the same multi-band light curves as in Figure \ref{fig:thermal} in $\nu L_\nu$ as well as the bolometric light curve $L_\mathrm{ph,bol}$ (dashed line). In the bottom panel, radio light curves at $5$ (solid), $10$ (dashed), $100$ (dash-dotted), and $300$ (dotted) GHz are compared with early radio observations by NOEMA \protect\citep[blue circle;][]{2017GCN.22187....1D}, ALMA \protect\citep[blue and magenta squares;][]{2017GCN.22252....1P}, and VLA \protect\citep[red star;][]{2017GCN.22216....1L}. } \label{fig:broadband_lc} \end{center} \end{figure} In Figure \ref{fig:broadband_lc}, the theoretical light curves in X-ray, optical-UV, and radio bands are shown and compared with multi-wavelength observations of GRB 171205A. First, we assume the same free parameters for the ejecta and the CSM as the fiducial model in the previous section, $A_\star=25$, $E_\mathrm{rel,51}=0.5$, and $n=5$. The early emission in the optically thick stage is also plotted in the top panel showing the BAT and XRT light curves. The optical and UV light curves shown in the middle panel are the same model as in Figure \ref{fig:thermal}, but in $\nu L_\nu$. \begin{figure} \begin{center} \includegraphics[scale=0.45,bb=0 0 566 680]{./broadband_lc2.pdf} \caption{Same as Figure \ref{fig:broadband_lc}, but with the reduced outer CSM density of $A_{\mathrm{out},\star}=0.5$. 
} \label{fig:broadband_lc2} \end{center} \end{figure} As is clearly seen in the top panel of Figure \ref{fig:broadband_lc}, the theoretical light curve significantly overestimates the X-ray luminosity. The X-ray emission is dominated by the inverse Compton emission of non-thermal electrons, whose flux is proportional to the product of the energy density of seed photons and the number density of non-thermal electrons. Since the seed photons are predominantly produced by the photospheric emission and its flux is constrained by the UVOT observations, the number density of non-thermal electrons or equivalently the electron energy injection rate at the shock front should be modified to alleviate this disagreement. One adjustable parameter is the fraction $\epsilon_\mathrm{e}$. However, reducing $\epsilon_\mathrm{e}$ leads to smaller minimum injection momenta $p_\mathrm{in}$. For a significantly small $p_\mathrm{in}$, the X-ray break frequency, above which the spectral energy distribution softens, can be in the observed energy range of $0.3$--$10$ keV or even lower, contradicting the observed X-ray spectrum with the hard photon index. One possible solution to this discrepancy is relaxing the constraint on the CSM density parameter $A_\star$ obtained by the gamma-ray light curve fitting. In this ejecta-CSM interaction model, the prompt gamma-ray emission probes the CSM extending up to $r\sim3\times 10^{13}$ cm from the center. While a relatively dense CSM is required in the immediate vicinity of the star, the CSM density beyond this region can be lower than a simple extrapolation of the inverse-square law, $\rho_\mathrm{csm}\propto r^{-2}$, would suggest. The implications of this structure will be discussed further in Section \ref{sec:progenitor}. Therefore, we explore the possibility that the CSM density drops beyond $r_\mathrm{out}=3\times 10^{13}$ cm, while keeping the inner CSM density fixed. 
In other words, we employ Equation (\ref{eq:csm_double_power}) with reduced outer CSM densities $A_\mathrm{out}<A$. In Figure \ref{fig:broadband_lc2}, we plot the multi-wavelength light curves with the outer CSM density of $A_{\mathrm{out},\star}=0.5$, which is smaller by a factor of $50$ than the model shown in Figure \ref{fig:broadband_lc}. The early emission from the optically thick shell and the photospheric emission are almost identical with the fiducial model because the parameters of the SN ejecta remain unchanged. The late-time X-ray and radio light curves are better reproduced by this model with modified CSM structure. The theoretical $0.3$--$10$ keV light curve in Figure \ref{fig:broadband_lc2} exhibits a plateau from $t_\mathrm{obs}\simeq 10^3$ s to $2\times 10^{4}$ s with a luminosity of $\simeq 10^{43}$ erg s$^{-1}$. Although the plateau X-ray luminosity is still larger than the observed X-ray luminosity by a factor of a few, the plateau and the subsequent decay broadly reproduce the observed features. The theoretical X-ray light curve appears to decline faster than the XRT light curve after $t_\mathrm{obs}=10^5$ s. This may be improved by including radioactively powered thermal emission, which additionally provides seed photons for the inverse Compton emission. The radio light curves in several bands are also plotted in the bottom panel of Figure \ref{fig:broadband_lc2}. The flux density in each band initially rises and then declines in a power-law fashion, and its peak appears earlier for higher frequencies. These trends are common for young radio-emitting SNe, where the rising and declining parts correspond to optically thick and thin synchrotron emission, respectively \citep[e.g.,][]{2016arXiv161207459C}. The $100$ and $300$ GHz radio flux densities reach the peak values of $\sim 100$ mJy at $t_\mathrm{obs}=3\times 10^4$ s and $10^5$ s. The fluxes continue to decline as $\sim t^{-1.5}$ after the peaks. 
At $t_\mathrm{obs}\simeq 5\times 10^5$ s, ALMA observations were carried out at $92$ and $340$ GHz. The reported flux densities of a few tens of mJy \citep{2017GCN.22252....1P} are roughly consistent with the theoretical fluxes at similar frequencies of $100$ and $300$ GHz. Since the flux density at $92$ GHz is smaller than that at $340$ GHz, the synchrotron spectrum in this frequency range is likely to have been in the optically thin regime. In this regime, the spectral slope depends on the assumed exponent $p$ of the electron momentum distribution, $\propto \nu^{-p/2}$ or $\nu^{-(p-1)/2}$. On the other hand, radio fluxes at lower frequencies, $5$ and $10$ GHz, are still rising even at $t_\mathrm{obs}=10^{6}$ s, in agreement with Karl G. Jansky Very Large Array (VLA) observations by \cite{2017GCN.22216....1L}, who report a spectral slope consistent with a synchrotron self-absorbed spectrum. \subsubsection{Electron momentum distribution}\label{sec:electron_distribution} \begin{figure} \begin{center} \includegraphics[scale=0.55,bb=0 0 453 339]{./espec.pdf} \caption{Electron momentum distributions at several epochs. The solid, dashed, and dash-dotted curves represent the distributions when the elapsed time and the shell radius satisfy $t-R_\mathrm{s}/c=10^{3}$, $10^4$, and $10^5$ s. Power-law distributions expected in different regimes are shown in thin dotted lines.} \label{fig:espec} \end{center} \end{figure} Figure \ref{fig:espec} shows the electron momentum distributions at several epochs. The plotted electron distributions are those at epochs satisfying $t-R_\mathrm{s}(t)/c=10^{3}$, $10^4$, and $10^5$ s. At these epochs, non-thermal electrons with the plotted momentum distributions predominantly contribute to the non-thermal emission at the observer times of $t_\mathrm{obs}=10^3$, $10^4$, and $10^5$ s. 
The distributions at early epochs are generally broken power-law functions with three segments: a high-energy part with a spectral slope of $d\ln N/d\ln p_\mathrm{e}=-4$, a low-energy part with a flat slope, and an intermediate part between them. The high-energy part is composed of electrons with momenta higher than the minimum injection momentum at several $10m_\mathrm{e}c$. These electrons suffer efficient Compton cooling, which makes the slope steeper by $-1$ than that of the injected electrons, $dN/dp_\mathrm{e}\propto p_\mathrm{e}^{-p-1}$, i.e., the fast cooling regime. Electrons having cooled further down to energies below the minimum injection momentum constitute the intermediate part. At the earliest epoch of $t-R_\mathrm{s}(t)/c=10^{3}$ s, the spectral slope in this part is $-2$, $dN/dp_\mathrm{e}\propto p_\mathrm{e}^{-2}$, which is also realized in the fast cooling regime for electrons with energies lower than the minimum injection energy. As we will see below, this spectral slope is important in determining the photon index of the X-ray spectra. In the low-energy part ($p_\mathrm{e}<m_\mathrm{e}c$), a flatter spectral slope is realized. For $p_\mathrm{e}\ll m_\mathrm{e}c$, all the cooling terms expressed by Equations (\ref{eq:pdot_syn}), (\ref{eq:pdot_ic}), and (\ref{eq:pdot_ad}) are proportional to $p_\mathrm{e}$. Therefore, in the steady state and for $p_\mathrm{e}<p_\mathrm{in}$, where the time-dependence and the injection term of Equation (\ref{eq:govern}) vanish, the resulting governing equation, \begin{equation} \frac{\partial}{\partial p_\mathrm{e}}\left[\left( \dot{p}_\mathrm{syn}+\dot{p}_\mathrm{ic}+\dot{p}_\mathrm{ad}\right)\frac{dN}{dp_\mathrm{e}}\right]=0, \end{equation} requires $dN/dp_\mathrm{e}\propto p_\mathrm{e}^{-1}$. These power-law functions expected in the different regimes of the momentum distribution are also shown in Figure \ref{fig:espec}. 
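The slopes quoted above follow from the steady-state relation $\dot{p}\,dN/dp_\mathrm{e}=\mathrm{const}$ below the injection momentum, i.e., $dN/dp_\mathrm{e}\propto 1/\dot{p}$. A toy illustration (not the actual transport solver) with $\dot{p}\propto p^2$ for relativistic Compton cooling and $\dot{p}\propto p$ for $p_\mathrm{e}\ll m_\mathrm{e}c$:

```python
import numpy as np

# Toy check of the cooled-electron slopes: below the injection momentum,
# d/dp [ pdot(p) N(p) ] = 0 gives pdot * N = const, i.e. N ∝ 1/pdot.
p = np.logspace(-2.0, 1.0, 200)      # momentum in units of m_e c (toy grid)
pdot_compton = p**2                  # relativistic IC cooling, pdot ∝ p^2
pdot_linear = p                      # all cooling terms ∝ p for p << m_e c
N_intermediate = 1.0 / pdot_compton  # -> p^-2 intermediate segment
N_low = 1.0 / pdot_linear            # -> p^-1 flat low-energy segment
slope_int = np.polyfit(np.log(p), np.log(N_intermediate), 1)[0]  # -2
slope_low = np.polyfit(np.log(p), np.log(N_low), 1)[0]           # -1
```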
\subsubsection{Spectral energy distribution}\label{sec:sed} \begin{figure} \begin{center} \includegraphics[scale=0.50,bb=0 0 504 576]{./Fnu.pdf} \caption{Spectral energy distributions at $t_\mathrm{obs}=3\times 10^{3}$, $10^4$, and $10^5$ s from top to bottom. The luminosity per unit frequency (or equivalently the flux density) is plotted for the synchrotron (green), photospheric (red), and inverse Compton (blue) components. Some observational data are plotted for comparison. The black circle in the bottom panel represents the NOEMA observation. The UVOT multi-color photometric data are plotted as blue circles in the middle and bottom panels. The time-sliced XRT data after absorption correction (see Appendix \ref{sec:xrt}) are plotted as magenta crosses in all the panels. } \label{fig:sed} \end{center} \end{figure} Figure \ref{fig:sed} shows the broad-band spectral energy distributions ($\nu L_\nu$) at $t_\mathrm{obs}=3\times 10^{3}$, $10^4$, and $10^5$ s. The contributions of the synchrotron, photospheric, and inverse Compton emission, which dominate the total emission in the radio, optical, and X-ray (or gamma-ray) ranges, respectively, are shown separately in each panel. First, the synchrotron component dominates the radio flux and is well represented by a broken power-law function, whose spectral slope is $L_\nu\propto \nu^{5/2}$ and $\nu^{-p/2}$ in the low- and high-frequency parts, respectively. Second, the peak of the photospheric emission appears in the optical-UV range and its spectral energy distribution is given by a Planck function, as we have assumed in Section \ref{sec:photospheric_emission}. Finally, the inverse Compton emission is the convolution of the electron momentum distribution with the synchrotron and photospheric photon spectra, resulting in relatively complex spectral energy distributions. 
The inverse Compton emission in the X-ray energy range is created by photospheric photons scattered by non-thermal electrons with energies close to the minimum injection energy. The spectra in the X-ray range of $0.3$--$10$ keV show power-law distributions with hard photon indices, $\Gamma_\mathrm{ph}\simeq 1.5$. The photon index reflects the slope of the electron momentum distribution below the minimum injection energy, i.e., the intermediate part (Section \ref{sec:electron_distribution}). Since electrons in this part are predominantly produced by inverse Compton cooling, the electron energy spectrum follows $\gamma_\mathrm{e}^{-2}$, which leads to an inverse Compton spectrum with a photon index of $\Gamma_\mathrm{ph}=1.5$. The X-ray spectrum gradually softens with time and the photon index around $0.3$ keV becomes nearly $\Gamma_\mathrm{ph}\simeq2$ at $t_\mathrm{obs}=10^5$ s. Some observational data at similar epochs, NOEMA \citep{2017GCN.22187....1D} and UVOT \citep{2017GCN.22202....1S}, are plotted in Figure \ref{fig:sed} and compared with the theoretical spectral energy distributions. For the X-ray emission, we have obtained time-sliced X-ray spectra in the $0.3$--$10.0$ keV energy range by analyzing the XRT data (see Appendix \ref{sec:xrt} for details). Spectral fitting with an absorbed single power-law function has been performed and the results are summarized in Table \ref{table:fitting}. The absorption-corrected X-ray spectra for the three time intervals, $t_\mathrm{obs}=10^3$ s to $10^4$ s, $10^4$ s to $3\times 10^4$ s, and $10^5$ s to $2\times 10^{5}$ s, are plotted in Figure \ref{fig:sed}. As seen in Figure \ref{fig:sed}, the theoretical spectral energy distributions show overall agreement with the current observational constraints in the radio, optical, and X-ray bands. The slopes of the absorption-corrected X-ray spectra are well explained by the theoretical model. 
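The quoted photon index follows from the standard single-scattering relation: for a power-law electron spectrum $N\propto\gamma_\mathrm{e}^{-s}$, the upscattered flux is $F_\nu\propto\nu^{-(s-1)/2}$, hence $\Gamma_\mathrm{ph}=(s+1)/2$. A one-line check:

```python
# Photon index of inverse Compton emission from a power-law electron
# spectrum N ∝ gamma^-s (standard single-scattering relation).
def photon_index(s):
    return (s + 1.0) / 2.0

gamma_ph = photon_index(2.0)  # gamma^-2 cooled electrons -> Gamma_ph = 1.5
```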
The best-fit photon indices of the XRT spectra in the two earlier time intervals are $\Gamma_\mathrm{ph}\sim 1.7$ or $1.8$, while the index softens to $\Gamma_\mathrm{ph}\simeq 2.3$ at $t_\mathrm{obs}>10^5$ s. The spectral softening predicted by the theoretical model appears to agree with the temporal evolution of the X-ray spectra, although observations with better photon statistics are needed. \section{Discussion}\label{sec:discussion} \subsection{Broad-band emission from llGRBs} The broad-band light curves calculated by our theoretical model successfully explain several key properties of GRB 171205A. The early gamma-ray and X-ray light curves of GRB 171205A are reproduced by the emission diffusing out from the optically thick shell, which is a natural consequence of the relativistic ejecta-CSM interaction. The light curve fitting suggests an ejecta kinetic energy of $5\times 10^{50}$ erg and a CSM density parameter of $A_\star=25$. After the shell becomes transparent, photospheric and non-thermal emission contribute to the subsequent multi-wavelength emission. The photospheric emission from the ejecta predicts optical and UV fluxes comparable to the observed values. We found that a wind-like CSM based on a simple extrapolation of the density profile of $A_\star r^{-2}$ with $A_\star=25$ to outer radii leads to an X-ray afterglow that is too bright. To ease the discrepancy, we introduced a sudden change in the CSM density at $r_\mathrm{out}=3\times 10^{13}$ cm and obtained an X-ray light curve marginally consistent with observations. We note that the discrepancy might also be resolved by a more sophisticated treatment of radiative transfer in the shell. For example, the theoretical model does not treat photons re-processed by the CSM after diffusing out from the shell. Such photons are expected to be scattered by the CSM and reach the observer with delays \citep[e.g.,][]{2015ApJ...805..159M}. In addition, the hydrodynamic model assumes spherical symmetry. 
Although the spherical model succeeds in explaining the properties of the gamma-ray emission, both the ejecta and the CSM can be asymmetric, which could further modify the early gamma-ray and afterglow light curves. As expected for shock emergence in massive stars, asymmetric shock fronts affect the shock breakout light curve in various ways \citep{2010ApJ...717L.154S,2013ApJ...779...60M,2016ApJ...825...92S,2018ApJ...853...52O,2018ApJ...856..146A}. Although llGRBs do not require highly collimated ultra-relativistic jets owing to their low gamma-ray luminosities, deviations from spherical symmetry are naturally expected to some extent. \subsection{Origin of relativistic ejecta and progenitor system}\label{sec:progenitor} The successful light curve fitting indicates that the stellar explosion responsible for GRB 171205A was associated with the creation of an ejecta component traveling at mildly relativistic speeds. The required kinetic energy of the ejecta with four-velocities $\Gamma\beta\geq 1$, $E_\mathrm{rel}=5\times 10^{50}$ erg, is already comparable to the canonical explosion energy of CCSNe. Ordinary CCSNe powered by the neutrino mechanism are unlikely to produce such relativistic ejecta components, clearly suggesting that some additional mechanism should operate in depositing energy into a small amount of stellar material. For the density profile adopted in our model, Equation (\ref{eq:density_profile}), with $E_\mathrm{rel}=5\times 10^{50}$ erg, the mass of the relativistic component is only $M_\mathrm{rel}=4.5\times 10^{-5}\ M_\odot$. 
The total mass including the non-relativistic part is $0.63\ M_\odot$, while the total kinetic energy reaches $E_\mathrm{tot}=1.4\times 10^{52}$ erg, leading to a much higher energy-to-mass ratio, $E_\mathrm{tot}/M_\mathrm{tot}=2.2\times 10^{52}$ erg\ $M_\odot^{-1}$, and a much shorter diffusion time scale for photons in the non-relativistic ejecta than those inferred from the associated SN component. It is, therefore, highly likely that the relativistic ejecta are a component distinct from the non-relativistic SN ejecta, rather than part of a continuously connected density and velocity structure. There are several proposed scenarios for creating ejecta moving at mildly relativistic speeds. Since ultra-relativistic jets are considered to be associated with the surrounding cocoon produced by the jet-star interaction, the cocoon component overwhelming the star after the jet penetration is a plausible candidate \citep{2002MNRAS.337.1349R,2005ApJ...629..903L,2013ApJ...764L..12S,2017arXiv170105198D}. Mildly relativistic ejecta can be produced even without successful jet penetration, e.g., by a jet choked in a massive star \citep{2011ApJ...739L..55B,2012ApJ...750...68L} or in an extended envelope attached to the star \citep{2015ApJ...807..172N}. In the choked jet scenarios, the energy injected by the central engine is transported by the jet through the deep interior of the star and most of the energy is dissipated in the outer layers of the star, realizing relativistic ejecta with a high energy-to-mass ratio. In the framework of the relativistic ejecta-CSM interaction, any putative scenario for llGRBs should be able to create relativistic ejecta with the properties constrained by the light curve fitting. Whether the proposed scenarios can reproduce such relativistic SN ejecta should be examined thoroughly to unveil the origin of llGRBs. The origin of the moderately dense CSM in the immediate vicinity of the progenitor star also remains unclear. 
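The quoted energy-to-mass ratios can be verified directly from the numbers in the text:

```python
# Quick check of the energy-to-mass ratios quoted above (values from the text).
E_rel, M_rel = 5e50, 4.5e-5    # erg, M_sun: relativistic component
E_tot, M_tot = 1.4e52, 0.63    # erg, M_sun: total ejecta
ratio_tot = E_tot / M_tot      # ~2.2e52 erg / M_sun, as quoted
ratio_rel = E_rel / M_rel      # ~1.1e55 erg / M_sun for the fast component
```

The relativistic component carries roughly $500$ times more energy per unit mass than the ejecta as a whole, consistent with it being a distinct component.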
The inferred value of $A_\star=25$ corresponds to a mass-loss rate of a few $10^{-4}\ M_\odot$ yr$^{-1}$ for a wind velocity of $10^3$ km s$^{-1}$, which is larger than the typical mass-loss rates of galactic WN and WC stars by an order of magnitude \citep[e.g.,][]{2006A&A...457.1015H,2012A&A...540A.144S}. Unlike the long-lasting GRBs 060218 and 100316D, extremely large amounts of CSM ($A_\star>100$ or $\dot{M}>10^{-3}\ M_\odot$ yr$^{-1}$) are not required in this particular event. However, the moderately dense CSM compared to galactic Wolf-Rayet stars cannot be explained by the current standard theory of massive star evolution and stellar winds. On the other hand, the light curve modeling of the late-time X-ray and radio observations suggests a CSM density ($A_{\mathrm{out},\star}=0.5$) comparable to galactic Wolf-Rayet stars. Such centrally concentrated CSM is suggested by spectroscopic observations of type II SNe in very early stages \citep[known as ``flash spectroscopy'';][]{2014Natur.509..471G,2017NatPh..13..510Y}. Recent early photometric observations of type II SNe also indicate the presence of enhanced mass-loss prior to the gravitational collapse \citep{2018NatAs.tmp..122F}. However, what drives the intense mass-loss and whether such dense CSM can be ubiquitously present for all types of CCSNe are still unclear. In addition, such intense mass-loss at the final evolutionary stage of massive stars may be eruptive \citep[e.g.,][]{2007ApJ...657L.105F,2013MNRAS.430.1801M}. If so, the resultant CSM may not be a simple spherical wind following the inverse square law, but highly aspherical and/or clumpy, and it may also have a non-negligible impact on the preceding steady wind, e.g., through shock formation, further modifying the CSM structure. However, exploring a large number of possible CSM models is impractical, and thus we have only modified the CSM density beyond $r=r_\mathrm{out}$ to see its influence on the theoretical light curve. 
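For reference, the conversion from $A_\star$ to a mass-loss rate can be sketched as follows, assuming the usual wind normalization $A=\dot{M}/(4\pi v_\mathrm{w})=5\times 10^{11}A_\star$ g cm$^{-1}$ (the convention is an assumption of this sketch):

```python
import math

M_SUN, YR = 1.989e33, 3.156e7  # solar mass [g], year [s]

def mdot_msun_yr(A_star, v_w_kms):
    """Mass-loss rate [M_sun/yr] for a steady wind rho = A r^-2,
    with A = 5e11 * A_star g/cm and wind velocity v_w in km/s."""
    mdot_cgs = 4.0 * math.pi * 5e11 * A_star * (v_w_kms * 1e5)  # g s^-1
    return mdot_cgs * YR / M_SUN

mdot = mdot_msun_yr(25.0, 1e3)  # ~2.5e-4 M_sun/yr, "a few 1e-4" as quoted
```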
One possible theoretical explanation is enhanced mass-loss in the late burning stages of the progenitor star by still-uncertain mass-loss processes, such as wave-driven mass-loss \citep{2012MNRAS.423L..92Q,2014ApJ...780...96S}. Another possibility for the variation in the CSM density is an accelerating stellar wind \citep[e.g.,][]{1999isw..book.....L}, which is also suggested for the early emission from type IIP SNe \citep{2017MNRAS.469L.108M,2018arXiv180207752M}. Stellar winds are supposed to blow slowly at the base and gradually accelerate up to a terminal velocity beyond the sonic point. Therefore, even for a constant mass-loss rate, the CSM density, which is proportional to $\dot{M}/v_\mathrm{w}$, could be enhanced in the vicinity of the stellar surface compared to a simple inverse square law. Although an accelerating stellar wind is a possible solution, it introduces several uncertain factors, such as the wind velocity at the base and the velocity gradient. Therefore, we leave it to future work to explore appropriate wind density and velocity profiles. We also mention that such progenitor systems may be a consequence of massive star evolution in unusual environments. In fact, GRBs with unusually long durations and soft late-time X-ray spectral indices, including GRBs 060218 and 100316D, appear to show large absorption column densities \citep{2015ApJ...805..159M}. This indicates that the progenitor systems of llGRBs may be closely linked with the unusual environments in which the progenitor stars are embedded. In any case, identifying the mechanism of the enhanced mass-loss in the final evolutionary stage of massive stars is indispensable for the ultimate understanding of llGRBs and other transients powered by ejecta-CSM interaction. \subsection{A population of X-ray transients powered by SN ejecta-CSM interaction} In the following, we summarize transients possibly powered by SN ejecta-CSM interaction. 
\subsubsection{Low-luminosity GRBs}\label{sec:llGRBs} As shown in the $E_\mathrm{rad}$--$T_\mathrm{burst}$ diagram (Figure \ref{fig:diagram}), llGRBs are located in the lower right region because of their low gamma-ray luminosities. This clearly distinguishes llGRBs from the population of {\it Swift} GRBs with larger $E_\mathrm{iso}$ and shorter $T_{90}$. The prediction of the ejecta-CSM interaction model agrees with the region occupied by llGRBs. As \hyperlink{sms17}{SMS17} have pointed out, the locations of the previously discovered llGRBs, 980425, 060218, and 100316D, on the diagram are consistent with the region with $E_\mathrm{rel}=10^{50}$--$10^{51}$ erg and $A_\star$ ranging from a few up to several hundred. In addition, the newly discovered llGRB 171205A fills the gap between the fast and less energetic event (GRB 980425) and the two events with long-lasting gamma-ray emission (GRBs 060218 and 100316D). This discovery further supports the idea that these llGRBs constitute a distinct population of transients arising from mildly relativistic SN ejecta interacting with the CSM. If this scenario is correct, more events with similar properties will be detected in current and future survey missions and will fill the region predicted by the model on the $E_\mathrm{rad}$--$T_\mathrm{burst}$ diagram. In Figure \ref{fig:diagram}, GRBs 031203, 120422A, and 161219B are located above the theoretical curves. In fact, GRB 031203 is often classified as an llGRB \citep[e.g.,][]{2007ApJ...657L..73G}. If they are actually powered by ejecta-CSM interaction, this discrepancy indicates that some modifications, e.g., aspherical ejecta, are required to account for these relatively energetic events. On the other hand, there are suggestions that they are intrinsically different from llGRBs. \cite{2012A&A...547A..82M} suggest that GRB 120422A and the associated SN 2012bz may be an intermediate case between cosmological GRBs and X-ray flashes (including events like GRB 060218). 
\cite{2012ApJ...756..190Z} found that the variable prompt gamma-ray emission of GRB 120422A is better explained by emission from a jet, rather than by quasi-spherical shock emergence. They also suggest that a gamma-ray luminosity of $\sim 10^{48}$ erg s$^{-1}$ distinguishes GRBs driven by jets from llGRBs. This threshold gamma-ray luminosity is roughly consistent with the highest gamma-ray luminosity realized in our theoretical model with $E_\mathrm{rel,51}=10$. \cite{2017A&A...605A.107C} also classify GRB 161219B as an intermediate-luminosity GRB. The presence of an intermediate class between cosmological GRBs and llGRBs and the extent to which the two populations overlap are still debated and thus require a larger set of examples. \subsubsection{XRF 080109/SN 2008D} XRF 080109 is an X-ray flash serendipitously discovered by {\it Swift} \citep{2008Natur.453..469S}. The X-ray emission was later found to be associated with the birth of the type Ib/c SN 2008D \citep{2008Sci...321.1185M,2009ApJ...692L..84M,2009ApJ...702..226M,2009ApJ...692.1131T}, which is thought to have been an exploding helium star with a relatively large explosion energy. The X-ray luminosity reaches a peak value of several $10^{43}$ erg s$^{-1}$ in the first $\sim 50$ s and then declines over the next few hundred seconds. The total radiated energy and the burst duration are $E_\mathrm{iso}\simeq 6\times 10^{45}$ erg and $T_\mathrm{90}\simeq 470$ s \citep{2009ApJ...702..226M}, which are also plotted in Figure \ref{fig:diagram}. The potential similarities of XRF 080109/SN 2008D to the llGRB 060218/SN 2006aj led several authors to put forward the supernova ejecta-CSM interaction scenario for the origin of the X-ray emission. The location of XRF 080109 on the $E_\mathrm{rad}$--$T_\mathrm{burst}$ diagram suggests that it is much less energetic than llGRBs. 
Therefore, unlike llGRBs, XRF 080109 does not appear to arise from highly energetic ejecta moving at relativistic speeds, even if it is indeed powered by ejecta-CSM interaction. Nevertheless, the existence of XRF 080109 probably indicates that there is an even larger population of X-ray transients associated with the birth of stripped-envelope SNe. Unfortunately, the detection limits of current unbiased X-ray surveys have not reached such low X-ray fluxes. \subsubsection{CDF-S XT1} Recently, \cite{2017MNRAS.467.4841B} reported the detection of an X-ray transient in the Chandra Deep Field South, dubbed CDF-S XT1. Although no transient optical counterpart has been found, the most likely host galaxy was identified at a photometric redshift of $z\sim 2.23$. Adopting this redshift, the peak $0.3$--$10$ keV X-ray luminosity reaches $2\times 10^{47}$ erg s$^{-1}$. The integrated X-ray energy of $9\times 10^{49}$ erg released over $\sim 10^3$ s implies a potential similarity of CDF-S XT1 to llGRBs. In addition, the reported photon index, $\Gamma_\mathrm{ph}=1.43^{+0.26}_{-0.15}$, of the X-ray spectrum of CDF-S XT1 is also consistent with those of llGRBs. Since there is no gamma-ray detection, a direct comparison of the radiated energy and the duration of CDF-S XT1 with other GRBs is not straightforward. However, taking the radiated energy and the duration of the X-ray emission at face value, CDF-S XT1 is located in a region of the $E_\mathrm{rad}$--$T_\mathrm{burst}$ diagram similar to that of llGRBs 060218 and 100316D. Although the connection between CDF-S XT1 and llGRBs is not confirmed and warrants further investigation, this may imply that X-ray transients arising from the ejecta-CSM interaction are ubiquitously found in both nearby and high-redshift galaxies. 
\subsubsection{Soft GRBs detected by MAXI} The Monitor of All-sky X-ray Image (MAXI) on board the International Space Station (ISS) has detected 22 GRBs without any simultaneous detection by other gamma-ray satellites during the first $44$ months of its operation (only-MAXI GRBs; \citealt{2014PASJ...66...87S})\footnote{A more complete list including recent events is available at http://maxi.riken.jp/grbs/.}. The non-detection by any other gamma-ray satellite indicates that only-MAXI GRBs are dominated by relatively soft X-ray photons. Due to the lack of successful follow-up observations, the distances to only-MAXI GRBs remain unknown. However, as we show below, their event rate is roughly consistent with that of llGRBs. As shown by \cite{2014PASJ...66...87S}, GRBs with an average $2$--$20$ keV X-ray flux down to $\sim2\times10^{-9}$ erg s$^{-1}$ cm$^{-2}$ have been detected by MAXI. Assuming the lowest flux to be the detection threshold, a soft GRB with an average luminosity of $\sim 10^{46}$ erg s$^{-1}$ could be detected up to a distance of $D_\mathrm{maxi}\simeq 200$ Mpc. For the volumetric event rate $R_\mathrm{llgrb}$ of llGRBs estimated by several studies ($10^2$--$10^3$ Gpc$^{-3}$ yr$^{-1}$; e.g., \citealt{2006Natur.442.1014S,2006ApJ...645L.113C}), the number of events per year within this distance is \begin{eqnarray} \frac{4\pi D_\mathrm{maxi}^3R_\mathrm{llgrb}}{3}&=& 10\ \mathrm{events\ yr}^{-1} \left(\frac{D_\mathrm{maxi}}{200\ \mathrm{Mpc}}\right)^{3} \nonumber\\&&\ \ \ \ \times \left(\frac{R_\mathrm{llgrb}}{300\ \mathrm{Gpc}^{-3}\ \mathrm{yr}^{-1}}\right). \end{eqnarray} MAXI scans nearly the entire sky every $t_\mathrm{orbit}=5500$ s, which corresponds to a single orbit of the ISS around the Earth. Therefore, the probability of detecting a GRB with a duration $t_\mathrm{burst}$ is approximately given by $t_\mathrm{burst}/t_\mathrm{orbit}$. 
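This rate estimate, including the duty-cycle correction that follows, can be reproduced numerically:

```python
import math

# Order-of-magnitude MAXI detection-rate estimate (values from the text).
D_maxi = 0.2      # Gpc (200 Mpc detection horizon)
R_llgrb = 300.0   # Gpc^-3 yr^-1, fiducial llGRB volumetric rate
t_burst = 1e3     # s, GRB 060218-like burst duration
t_orbit = 5500.0  # s, ISS orbital (sky-scan) period

n_within = 4.0 * math.pi / 3.0 * D_maxi**3 * R_llgrb  # ~10 events / yr
rate = n_within * t_burst / t_orbit                   # ~2 events / yr
```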
Thus, for GRB 060218-like events with burst durations of $t_\mathrm{burst}\simeq10^3$ s, the detection rate is roughly estimated to be \begin{equation} \frac{4\pi D_\mathrm{maxi}^3R_\mathrm{llgrb}}{3}\frac{t_\mathrm{burst}}{t_\mathrm{orbit}}=2\ \mathrm{events\ yr}^{-1}. \end{equation} The detection rate indeed suffers from various uncertain factors, such as the intrinsic volumetric rate and the typical burst duration. Nevertheless, the estimated value roughly explains the detection rate of only-MAXI GRBs, $6$ events yr$^{-1}$, and therefore may suggest that several llGRBs have already been detected by MAXI. \section{Conclusion}\label{sec:conclusion} In this paper, we have carried out multi-wavelength light curve modeling of the new llGRB 171205A. The theoretical model is based on the relativistic SN ejecta-CSM interaction scenario. We adopt the hydrodynamic model developed in our previous study (\hyperlink{sms17}{SMS17}), which solves the dynamical evolution of the geometrically thin shell produced by the collision between the ejecta and the CSM. The light curve model for the early gamma-ray emission assumes that the radiation energy produced in the shell is gradually released by radiative diffusion. The photospheric emission from the un-shocked ejecta and the non-thermal emission from the forward shock are also treated by using emission models (\hyperlink{sm18}{SM18}) combined with the hydrodynamic model. The broad-band emission of llGRB 171205A is successfully explained by the emission powered by the CSM interaction. The durations and the isotropic equivalent energies of llGRBs are also well explained by a population of relativistic SNe exploding in a relatively dense CSM, and such a population occupies a region on the $E_\mathrm{rad}$--$T_\mathrm{burst}$ diagram distinct from cosmological GRBs. However, we had to introduce a centrally concentrated CSM with a sudden density drop to explain the late-time X-ray and radio emission. 
Although recent observations of CCSNe provide some observational support for such a CSM structure \citep[e.g.,][]{2014Natur.509..471G,2017NatPh..13..510Y}, the mechanism responsible for the enhanced mass-loss is still unclear. We also point out that the potential similarities of XRF 080109, CDF-S XT1, and possibly the MAXI GRBs to llGRBs may suggest that there are still plenty of hidden or unidentified X-ray bright transients beyond the reach of current unbiased X-ray surveys. Future deep and/or wide-field X-ray surveys combined with intensive multi-wavelength follow-up observations will uncover such hidden populations and ultimately help us elucidate the still mysterious origin of highly energetic stellar explosions. \acknowledgments This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. AS would like to thank Kunihito Ioka for fruitful discussions. Numerical computations were in part carried out on Cray XC30 and XC50 at the Center for Computational Astrophysics, National Astronomical Observatory of Japan. KM acknowledges support by the Japan Society for the Promotion of Science (JSPS) KAKENHI grants 17H02864, 18H05223, and 18H04585. \software{XSPEC (v12.8; \citealt{1996ASPC..101...17A}), HEAsoft (v6.24; \citealt{1996ASPC..101...17A})}
\section{Introduction} The elastic distortion energy, $f$, of a uniaxial nematic in terms of the nematic director field, $\mathbf{n}(\mathbf{r})$, was developed by Oseen, Zocher, and Frank (OZF) \cite{oseen,zocher,frank}. Frank critically reformulated the elastic theory as a theory of curvature elasticity that deals with small deviations of the nematic director from a uniformly and perfectly oriented nematic. This continuum theory is a first order elastic theory in what concerns bulk elasticity. OZF elasticity describes conventional nematics satisfactorily, and its drawbacks and failures have been widely discussed \cite{saupe,luiz}. Recently, a new class of achiral nematics with a periodic structure in the nanoscale range has been identified \cite{cestari,panov,imrie}, exciting great interest in the nematic liquid crystal community. A few nematic modulated phases have been observed, but their exact structure remains under investigation even for the most familiar among them, the twist-bend nematic $\mathrm{N_{TB}}$ phase \cite{Borshch,Zimmer,vana_exp,chen}, also termed the $\mathrm{N_{x}}$ nematic \cite{vana_exp}. Several models have been proposed for the $\mathrm{N_{TB}}$ \cite{dozov,shamid,virga,greco,ferrarini,pre,longa,kats,matsu,vana_th,lelidis,barbero}. In particular, models implying elasticity can be grouped into two categories: those requiring a negative Frank elastic constant \cite{dozov,shamid,lelidis,barbero} and those that do not \cite{virga,pre,matsu,kats}. A softening of the bend elastic constant arises from its renormalization due to flexoelectricity \cite{shamid} and/or polar effects stemming from the molecular shape \cite{vana_th,osipov}. That is, one has either to introduce a new element of symmetry, such as the unit vector along the helical axis, remaining within the frame of linear elasticity, or to expand the elastic energy to higher order. 
Nevertheless, elasticity is not the only way to obtain modulated nematics; for instance, biaxiality \cite{longa} and/or polar order \cite{vana_th}, or entropy \cite{ferrarini}, are other options. It therefore seems that, for the time being, the understanding of modulated nematics remains poor, and decisive experiments to qualify or disqualify some of the models are still lacking. According to Dozov's paper \cite{dozov}, the modulation arises because the bend elastic constant becomes negative in the $\mathrm{N_{TB}}$. This hypothesis implies that, in order to describe a spontaneously deformed nematic phase of achiral molecules, OZF elastic theory has to be extended to include gradients of the deformation tensor. Another approach, widely applied in the classical elasticity of solids \cite{brugger,barsch,chang,russi,landau}, consists in expanding $f=f(\nabla{\bf n})$ to powers of the deformation tensor higher than the second; it has recently been applied to nematics without further justification \cite{lelidis,barbero}. Hereafter, we refer to this approach as extended first order elasticity (E1OE). In the present paper, we extend the OZF continuum elastic theory by expanding the elastic free energy density, $f$, up to fourth order (4OE) in the derivatives of $\mathbf{n}(\mathbf{r})$, so as to describe situations where one or more of Frank's elastic constants become soft or negative, that is, $f=f(\nabla{\bf n},\dots,\nabla\nabla\nabla\nabla{\bf n})$. The tensor fields of the elastic constants are decomposed into their invariants. Moreover, the E1OE theory is derived in a systematic way from the invariants of the elastic constants. We apply both theories to the case of the $\mathrm{N_{TB}}$ and compare their respective results. Finally, the splay-bend nematic $N_{SB}$ phase is investigated. \section{Non-linear nematic elasticity} OZF elastic theory was obtained under the assumption that $f$ is an analytic function of the elastic tensor $\nabla{\bf n}$. 
Elastic deformations have to be mild at the molecular scale. When the length-scale of the deformation becomes comparable to the molecular length, linear elasticity fails. A possible generalization involves spatial derivatives of ${\bf n}({\bf r})$ of order higher than the first. In order to perform such an expansion of $f$, one needs a criterion to quantify the relative importance of the derivatives and of their powers that enter each term of the expansion. Such a criterion can be provided by molecular models if the intermolecular interaction energy is known. Once one relates the intermolecular interaction to the elastic constants \cite{supmat,saupe,luiz}, it follows that the effective order of a term in the expansion of $f$ is the sum of the orders of all derivatives composing that term; for instance, the terms $(\mathrm{d}n/\mathrm{d}x)\,(\mathrm{d^{k-1}}n/\mathrm{d}x^{k-1})$ and $(\mathrm{d^{2}}n/\mathrm{d}x^{2})\,(\mathrm{d^{k-2}}n/\mathrm{d}x^{k-2})$ are both of order $k$. Applying this rule, the bulk elastic energy density of a uniaxial nematic composed of achiral molecules, up to fourth order terms, is given by \begin{eqnarray}\label{elen} f &=& f_0+K_{ijkl}n_{i,j} n_{k,l} + N_{ijk}\, n_{i,jk}\\ &+&H_{ijklmnpq}\, n_{i,j} n_{k,l} n_{m,n} n_{p,q}+ G_{ijklmn}\, n_{i,jk} n_{l,mn}\nonumber\\ & +& M_{ijklpqr}\,n_{i,j}n_{k,l}n_{p,qr}+ P_{ijklrs}\,n_{i,j}n_{k,lrs}+Q_{ijklr}\,n_{i,jklr}\nonumber \end{eqnarray} where $f_0$ is the energy density of the state with uniform alignment (undeformed), and $i,j,k,\ell,r,s,p,q=x_1,x_2,x_3$. $K_{ijkl}$ and $N_{ijk}$ are second order terms, while the remaining terms in (\ref{elen}) are fourth order terms. Linear and third order terms vanish identically due to the non-polar character of nematic phases, $f({\bf n})=f(-{\bf n})$, and are not presented here. Of course, these latter terms are present in chiral nematics \cite{saupe,berreman}. 
Using standard techniques for the calculation of the invariants of a tensor field, we calculated the invariants of the elastic tensors appearing in (\ref{elen}). Hereafter, we use this decomposition in order to investigate the $N_{TB}$ and the $N_{SB}$, for which there is some experimental evidence. These two phases represent one-dimensional problems, that is, ${\bf n}$ depends on just one spatial coordinate, say, ${\bf n}={\bf n}(x_3)$. Therefore only a few of the more than $50$ invariants entering $f$ survive, and the subsequent analysis is simplified. Using the condition ${\bf n}={\bf n}(x_3)$, we find three second order invariants \begin{eqnarray} \label{1}U_1=n_3^2(n_{1,3}^2+n_{2,3}^2+n_{3,3}^2),\quad U_2=n_{3,3}^2,\quad U_3=n_{1,3}^2+n_{2,3}^2+n_{3,3}^2 \end{eqnarray} which can be rewritten as $U_1=|{\bf b}|^2$, $U_2=s^2$, and $U_3=s^2+t^2+|{\bf b}|^2$ in terms of the splay, twist, and bend deformations defined \cite{prost} by $s=\nabla \cdot {\bf n}$, $t={\bf n}\cdot \nabla \times {\bf n}$, and ${\bf b}={\bf n}\times \nabla \times {\bf n}$. In the same representation, the six fourth order invariants can be written as \begin{eqnarray} \label{18}V_1=|{\bf b}|^4,\quad V_2=s^2 |{\bf b}|^2,\quad V_3= |{\bf b}|^2(s^2+t^2+ |{\bf b}|^2),\\ \label{21}V_4=s^4,\quad V_5=s^2(s^2+t^2+ |{\bf b}|^2),\quad V_6=(s^2+t^2+ |{\bf b}|^2)^2 \end{eqnarray} Finally, the total elastic energy density can be written as \begin{eqnarray}\label{1Delen} f=f_0+f_2+f_4=f_0+\frac{1}{2}\,\sum_{i=1}^3 K_i\,U_i+\frac{1}{4}\,\sum_{i=1}^6 H_i\,V_i \end{eqnarray} where $f_2$ is related to the invariants of second order, and $f_4$ to those of fourth order. As can easily be verified, the fourth order contribution, $f_4$, is a homogeneous expression in $s^2$, $t^2$, and $|{\bf b}|^2$, as was supposed in \cite{barbero}. \section{Extended first order elasticity} \subsection{Twist-bend case} We first describe the approach of extended first order elasticity in the cases of the twist-bend and splay-bend nematics. 
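The reductions $U_1=|{\bf b}|^2$ and $U_3=s^2+t^2+|{\bf b}|^2$ are easy to spot-check numerically. The sketch below uses an arbitrary toy director field ${\bf n}(x_3)$ and an evaluation point that are illustrative choices, not from the text, with derivatives taken by central finite differences:

```python
import numpy as np

# Spot-check that, for n = n(x3), the second order invariants reduce to
# U1 = |b|^2, U2 = s^2, U3 = s^2 + t^2 + |b|^2.
def director(x3):
    theta, phi = 0.4 + 0.2 * np.sin(x3), 0.9 * x3   # toy 1D director field
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

x, h = 1.3, 1e-6                                     # arbitrary point, step
n = director(x)
dn = (director(x + h) - director(x - h)) / (2 * h)   # n_{i,3}
s = dn[2]                                            # splay: div n = n_{3,3}
curl = np.array([-dn[1], dn[0], 0.0])                # curl n when n = n(x3)
t = n @ curl                                         # twist: n . curl n
b = np.cross(n, curl)                                # bend: n x curl n
U1 = n[2]**2 * (dn @ dn)                             # equals |b|^2
U2 = s**2
U3 = dn @ dn                                         # equals s^2 + t^2 + |b|^2
```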
To begin, let us apply the results of the above analysis to the heliconical twist-bend deformation characterized by the nematic director \begin{equation} \label{tw1}{\bf n}=(\cos \phi \,{\bf u_1}+\sin \phi\,{\bf u_2})\sin \theta_0+ {\bf u_3}\,\cos \theta_0 \end{equation} where the conical angle $\theta_0$ is position independent and $\phi=\phi(x_3)$. Using (\ref{1Delen}), the elastic energy density of the $N_{TB}$ phase is given by \begin{equation} \label{tw13}f=f_0+\frac{1}{2}R(\theta_0)\,\phi'^2+\frac{1}{4}S(\theta_0)\,\phi'^4 \end{equation} where $\phi '=\mathrm{d}\phi/\mathrm{d} x_3$, and \begin{eqnarray} \label{tw11}R(\theta_0)&=&(K_1\,\cos^2 \theta_0+K_3)\,\sin^2 \theta_0,\\ \label{tw12}S(\theta_0)&=&(H_1\,\cos^4\theta_0+H_3\,\cos^2 \theta_0+H_6)\sin^4\theta_0 \end{eqnarray} are effective elastic constants. The Euler-Lagrange equation of (\ref{tw13}) is \begin{equation} \label{tw14} \left[R(\theta_0)+S(\theta_0)\phi'^2\right]\phi'=\alpha \end{equation} where $\alpha$ is an integration constant. For $R=S=0$ or $R\ge 0$ and $S>0$ only the uniform nematic solution exists. Modulated solutions may appear for $R<0$ and $S>0$. In the absence of surface anchoring energy, $\alpha=0$, from (\ref{tw14}) we obtain \begin{equation} \label{tw15}\phi'=\phi_u'=0,\quad \phi'=\phi'_d=\pm\sqrt{\frac{|R(\theta_0)|}{S(\theta_0)}} \end{equation} corresponding to the uniform and the spontaneously deformed states, respectively. Comparison of their energies shows that the deformed state, whenever it exists, is energetically favorable, since \begin{eqnarray} \label{tw17}f(\phi_d')=f_0-\frac{(K_1 \cos^2 \theta_0+K_3)^2}{4(H_1\cos^4\theta_0+H_3\cos^2\theta_0+H_6)}<f_0=f(\phi_u') \end{eqnarray} The value of $\theta_0$ has to be determined by minimizing (\ref{tw17}). Let us consider, as an example, the case where $K_1<0$, $K_3>0$ and $H_6>0$.
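The stationary states (\ref{tw15}) and the energy comparison (\ref{tw17}) are easy to verify numerically; the elastic constants below are purely illustrative values, chosen so that $R<0$ and $S>0$:

```python
import numpy as np

# Hypothetical elastic constants (illustration only): K1<0, K3>0, H6>0
K1, K3 = -1.2, 1.0
H1, H3, H6 = 0.3, -0.4, 1.0
theta0 = 0.3   # trial conical angle, chosen so that R(theta0) < 0

c2, s2 = np.cos(theta0)**2, np.sin(theta0)**2
R = (K1*c2 + K3) * s2
S = (H1*c2**2 + H3*c2 + H6) * s2**2

def f(phi_p, f0=0.0):
    # elastic energy density (tw13) as a function of phi' = phi_p
    return f0 + 0.5*R*phi_p**2 + 0.25*S*phi_p**4

# Spontaneously deformed state exists only for R<0 and S>0
assert R < 0 and S > 0
phi_d = np.sqrt(-R/S)
# Deformed-state energy equals f0 - R^2/(4S) and beats the uniform state
assert np.isclose(f(phi_d), -R**2/(4*S))
assert f(phi_d) < f(0.0)
```

Since $R=(K_1\cos^2\theta_0+K_3)\sin^2\theta_0$ and $S=(H_1\cos^4\theta_0+H_3\cos^2\theta_0+H_6)\sin^4\theta_0$, the value $-R^2/(4S)$ reproduces the $\theta_0$-dependent part of (\ref{tw17}) exactly.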
The energy of the deformed state is rewritten as \begin{equation} \label{tw19}f(\theta_0)=f_0-\eta \frac{(1-\xi \cos^2 \theta_0)^2}{1+h_3\, \cos^2\theta_0+h_1\,\cos^4\theta_0} \end{equation} where we introduced \begin{equation} \label{tw18} \xi=|K_1|/K_3,\quad h_1=H_1/H_6>0,\quad h_3=H_3/H_6<0,\quad{\rm and}\quad \eta=K_3^2/4H_6 \end{equation} Note that $K_3=K_{22}$ and $K_1=K_{33}-K_{22}$. Minimization gives the modulated solutions $\cos^2\theta_0=-(h_3+2\xi)/(2 h_1+h_3\xi)$ which correspond to the twist-bend phase. If one considers the case $K_{33}<0$, then $\xi>1$, and a standard analysis shows that the twist-bend configuration is the ground state when $1<h_1<\xi^2$ and $-2\sqrt{h_1}<h_3<-2(h_1+\xi)/(1+\xi)$. Figure 1 shows $f(\theta_0)/\eta$ together with the effective elastic constants of second, $R(\theta)$, and fourth, $S(\theta)$, order; $f_0$ is normalized to $0$. The conical angle that minimizes the energy is $\theta_0=23.7^\circ$. \begin{figure}[h] \centering \includegraphics[width=7cm]{Figure_1.eps} \caption[]{Energy and effective elastic constants vs $\cos^2\theta$ in reduced units $f/\eta$ (black solid line), $R/K_3$ (blue dashed line) and $S/H_6>0$ (red thick line). The curves are not in scale in the vertical direction in order to be visible. $f_0=0$.} \label{Figure_1} \end{figure} \subsection{Splay-bend case} Let us consider now the splay-bend nematic phase, which is also a one-dimensional deformation, always in the frame of the E1OE, and in the absence of anchoring energy. In this framework, indicating by $\theta$ the angle formed by the nematic director ${\bf n}$ with the $x_3$ axis, and assuming that ${\bf n}$ is contained in the $(x_1,x_3)$-plane, the nematic director components are $ n_1=\sin \theta$, $n_2=0$, $n_3=\cos \theta$.
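Before moving on, the twist-bend minimizer $\cos^2\theta_0=-(h_3+2\xi)/(2h_1+h_3\xi)$ can be cross-checked by a direct scan of (\ref{tw19}); the reduced parameters below are illustrative values inside the stated ground-state window:

```python
import numpy as np

# Illustrative reduced parameters in the window 1 < h1 < xi^2 and
# -2*sqrt(h1) < h3 < -2*(h1+xi)/(1+xi)
xi, h1, h3 = 2.0, 2.0, -2.7

def f_over_eta(c):            # c = cos^2(theta0); f0 set to 0
    return -(1 - xi*c)**2 / (1 + h3*c + h1*c**2)

c = np.linspace(0.0, 1.0, 100001)
c_num = c[np.argmin(f_over_eta(c))]            # numerical minimizer
c_formula = -(h3 + 2*xi) / (2*h1 + h3*xi)      # closed-form minimizer

assert 0.0 < c_formula < 1.0
assert np.isclose(c_num, c_formula, atol=1e-3)
```

The closed form follows from setting the derivative of (\ref{tw19}) with respect to $\cos^2\theta_0$ to zero; the grid scan confirms that, for these parameters, the interior stationary point is the global minimum on $[0,1]$.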
The invariants of second order are \begin{eqnarray} \label{s2}U_1=\cos^2 \theta\,\theta'^2,\quad U_2=\sin^2\theta\,\theta'^2,\quad U_3=\theta'^2 \end{eqnarray} and those of fourth order are \begin{eqnarray} \label{s6}V_1=\cos^4\theta\,\,\theta'^4,\quad V_2=\sin^2\theta\,\cos^2\theta\,\,\theta'^4,\quad V_3=\cos^2\theta\,\,\theta'^4\\ \label{s9}V_4=\sin^4\theta\,\,\theta'^4,\quad V_5=\sin^2\theta\,\,\theta'^4,\quad V_6=\theta'^4 \end{eqnarray} The elastic energy density is cast in the form \begin{equation} \label{s12}f=f_0+\frac{1}{2}{\cal R}(\theta)\theta'^2+\frac{1}{4}{\cal S}(\theta)\theta'^4 \end{equation} with the effective elastic constants \begin{eqnarray} \label{s13}{\cal R}(\theta)&=&K_1\,\cos^2\theta+K_2\,\sin^2\theta+K_3,\\ \label{s14}{\cal S}(\theta)&=&H_1\,\cos^4\theta+H_2\,\sin^2\theta\,\,\cos^2\theta+H_3\,\cos^2\theta+ H_4\,\sin^4\theta+H_5\,\sin^2\theta+H_6 \end{eqnarray} The Euler-Lagrange equation of the problem is \begin{equation} \label{5-s}4[{\cal R}(\theta)+3{\cal S}(\theta)\theta'^2]\theta''+\left[2\,\frac{\mathrm{d}{\cal R}}{\mathrm{d}\theta}+3\,\frac{\mathrm{d}{\cal S}}{\mathrm{d}\theta}\,\theta'^2\right]\theta'^2=0 \end{equation} The absence of interaction between the substrate and the liquid crystal is mathematically responsible for the transversality conditions \begin{equation} \label{6-s}[{\cal R}(\theta) + {\cal S}(\theta) \theta'^2]\theta'=0,\quad{\rm at}\quad x_3=\pm d/2 \end{equation} Let us consider first the simple case where ${\cal R}(\theta)=-\kappa$ and ${\cal S}(\theta)=H$, with $\kappa$ and $H$ independent of $\theta$, and $\kappa>0$. In this situation the total elastic energy density is \begin{equation} \label{7-s}f=f_0-\frac{1}{2}\kappa \theta'^2+\frac{1}{4}H \theta'^4 \end{equation} with solutions $\theta'= 0$ and $\theta'=\sqrt{\kappa/ H}$.
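For the simple case of constant effective constants, the roots of the transversality condition (\ref{6-s}) and the corresponding energy densities can be checked directly (the values of $\kappa$ and $H$ are hypothetical):

```python
import numpy as np

# Constant effective constants (hypothetical values): R = -kappa, S = H
kappa, H, f0 = 1.0, 2.0, 0.0

def f(tp):                      # energy density (7-s), tp = theta'
    return f0 - 0.5*kappa*tp**2 + 0.25*H*tp**4

# Roots of the transversality condition [R + S*tp^2]*tp = 0
tp_d = np.sqrt(kappa/H)
for tp in (0.0, tp_d, -tp_d):
    assert np.isclose((-kappa + H*tp**2)*tp, 0.0)

# The deformed root gives f = f0 - kappa^2/(4H), below the uniform state
assert np.isclose(f(tp_d), f0 - kappa**2/(4*H))
assert f(tp_d) < f(0.0)
```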
The stable solution is the deformed one since \begin{equation} \label{11-s}f(\theta'=0)=f_0,\quad{\rm and}\quad f\left(\theta'=\sqrt{\frac{\kappa}{H}}\right)=f_0-\frac{1}{4}\frac{\kappa^2}{H} \end{equation} In this particular case, the tilt angle is a monotonic function of $x_3$. This conclusion can be generalized. Suppose that at a given point $x_{30}$ along the $x_3$ axis, $\theta'(x_{30})=0$. Equation (\ref{5-s}) then implies \begin{equation} \label{12-s}{\cal R}[\theta(x_{30})]\theta''(x_{30})=0. \end{equation} Since ${\cal R}\neq 0$, it follows that $\theta''(x_{30})=0$. Similarly, one can show that $\theta'''(x_{30})=0$ and so on. Hence, either $\theta'=0$, that is $\theta$ is position independent, or $\theta'$ cannot change sign, that is $\theta(x_3)$ is a monotonic function. Nevertheless, a deformed state that minimizes the energy implies ${\cal R}(\theta) <0$, and hence $\theta$ cannot be position independent. Therefore, in the framework of the E1OE, we infer that the tilt angle of the nematic director for the $\mathrm{N_{SB}}$ phase is a monotonic function of the position, $\theta=\theta(x_3)$. \section{Fourth order elasticity} In the approximation of fourth order elasticity $f$ depends on derivatives of $\mathbf{n}(\mathbf{r})$ up to fourth order. For the twist-bend deformation, substitution of $\mathbf{n}(\mathbf{r})$ from (\ref{tw1}) into the expressions of the invariants results in the elastic energy \begin{equation} \label{h1}f=\frac{1}{2}R\phi'^2+\frac{1}{4}S \phi'^4+\frac{1}{2}G \phi''^2+H \phi' \phi''' \end{equation} where $R$, $S$, $G$, and $H$ depend on $\theta_0$. The last term can be decomposed into a bulk term that renormalizes the elastic constant $G$, and a surface-like term, since $\phi' \phi'''=(\phi' \phi'')'-\phi''^2$. Therefore, disregarding the third derivative term amounts to neglecting surface-like terms in the energy $F$.
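The decomposition $\phi'\phi'''=(\phi'\phi'')'-\phi''^2$ holds for any smooth profile; the sketch below checks it by numerical quadrature on the arbitrary test profile $\phi=\sin x$:

```python
import numpy as np

# Check phi'*phi''' = (phi'*phi'')' - phi''^2 in integrated form
x = np.linspace(0.0, 1.0, 200001)
phi_p   = np.cos(x)     # phi = sin(x): phi'
phi_pp  = -np.sin(x)    # phi''
phi_ppp = -np.cos(x)    # phi'''

def trapz(y, x):
    # simple trapezoidal rule
    return np.sum(0.5*(y[1:] + y[:-1])*(x[1:] - x[:-1]))

lhs = trapz(phi_p*phi_ppp, x)
boundary = phi_p[-1]*phi_pp[-1] - phi_p[0]*phi_pp[0]
rhs = boundary - trapz(phi_pp**2, x)

assert np.isclose(lhs, rhs, atol=1e-8)
```

The bulk piece $-\int\phi''^2$ renormalizes the $G$ term, while the total-derivative piece contributes only at the boundaries, which is why dropping the $\phi'\phi'''$ term discards only surface-like energy.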
\subsection{In absence of surface-like terms} In the absence of surface-like terms, $f$ reduces to \begin{equation} \label{g1}f=f_0+\frac{1}{2}R \phi'^2+\frac{1}{4} S \phi'^4 +\frac{1}{2} G \phi''^2 \end{equation} For $R(\theta_0)\ge 0$, the undeformed solution is the stable one. Hereafter, we suppose that $R(\theta_0)<0$. For a sample in the form of a slab of thickness $d$, with the $x_3-$axis perpendicular to the bounding surfaces at $x_3=\pm d/2$, the minimization of the total energy \begin{equation} \label{h0}F=\int_{-d/2}^{d/2} f(\phi',\phi'')\,\mathrm{d}x_3 \end{equation} in the absence of surface anchoring energy gives the first integral \begin{equation} \label{g6}R \phi'+S \phi'^3-G \phi'''=0 \end{equation} with the boundary conditions $ \label{g7} \phi''=0$ at $x_3=\pm d/2$. Apart from the trivial uniform nematic solution $\phi_u'=0$, Eq.(\ref{g6}) has a second solution \begin{equation} \label{g11}\phi_d'=q_d=\sqrt{-\frac{R}{S}}=const., \end{equation} to which corresponds the position independent elastic energy density \begin{equation} \label{g12}f_d=f_0-\frac{R^2}{4 S}<f_u=f_0 \end{equation} Note that if the quartic term in $\phi'$ is neglected, no deformed solution exists. Finally, a third solution, $\phi'=q(x_3)$, with variable wave-vector exists. However, $f(q(x_3))>f(q_d)$, as can be shown by substituting $q(x_3)=q_d+\delta q(x_3)$ in the energy \begin{equation} f(q_d+\delta q(x_3))-f(q_d)=\frac{1}{4}S\,\delta q^2 (2 q_d+\delta q)^2 +\frac{1}{2} G\,\delta q'^2>0, \end{equation} and therefore $q_d$ corresponds to the absolute minimum of (\ref{g1}). \subsection{In presence of surface-like terms} In the following, we investigate the full energy density expression (\ref{h1}), that is, keeping surface-like terms.
Minimizing \begin{equation} F=\int_{-d/2}^{d/2} f(\phi',\phi'',\phi''')\,dx_3, \end{equation} we get the first integral \begin{equation} \label{h7}R q+S q^3+(2 H-G)q''=0, \end{equation} where $q=\phi'$, and the boundary conditions \begin{eqnarray} \label{h8}(G-H)q'= 0\quad\&\quad H q = 0\quad \text{at}\quad x_3=\pm d/2 \end{eqnarray} Since the ordinary differential equation (\ref{h7}) is of second order while the boundary conditions to be satisfied are four, the function minimizing the total energy will in general be discontinuous. One encounters a problem similar to the $K_{13}$ question, which remained puzzling for some time \cite{oldano,durand,vertogen}. \begin{figure}[h] \centering \includegraphics[width=8cm]{Figure_2.eps} \caption[]{Energy with surface like terms $F$ (black solid line) and position independent deformation energy $F_d$ (blue dashed line) vs $A$ for a nematic slab of thickness $d$. $F$ has a minimum, $F_{min}<F_d$, for $A\ne 0$. $f_0=0$.} \label{Figure_2} \end{figure} A trivial solution of (\ref{h7}) is $q=0$, and the corresponding energy density is $f=f_0$. However, simple inspection shows that functions varying rapidly enough near the limiting surfaces yield a lower total energy. As an example let us consider the trial function \begin{equation} \label{h14}q(x_3)=q_d+A\,\frac{\sinh(x_3/L)}{\sinh(d/2L)}, \end{equation} defined in $0\leq x_3 \leq d/2$, and continued as an even function in $-d/2\leq x_3\leq 0$. In (\ref{h14}), $A$ is a constant, $q_d$ is given by (\ref{g11}), and $L\ll d$ is the thickness of a surface layer. This trial function coincides with $q_d$ in the bulk, and differs from it just in the surface layers. The total energy of the sample is a function of the amplitude $A$. A plot of $F=F(A)$ shows that $F$ reaches a minimum $F_{min}<F_d$ for $A\neq 0$ (see Figure 2).
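A minimal numerical sketch of this construction: the density in (\ref{h1}) is integrated over the trial profile (\ref{h14}) for a grid of amplitudes $A$. All constants are illustrative values (chosen with $R<0$, $S>0$ so that $q_d=1$); for them, the minimum indeed falls at $A\neq 0$ and lies below $F(A=0)=F_d$:

```python
import numpy as np

# Illustrative constants: R<0, S>0, small G and H, thin surface layer L
R, S, G, H = -1.0, 1.0, 0.01, 0.05
d, L = 1.0, 0.05
q_d = np.sqrt(-R/S)

x = np.linspace(-d/2, d/2, 20001)
sh = np.sinh(d/(2*L))

def F(A):
    # even continuation of (h14): q, q', q'' written analytically
    u   = np.sinh(np.abs(x)/L)/sh
    q   = q_d + A*u
    qp  = np.sign(x)*A*np.cosh(np.abs(x)/L)/(L*sh)
    qpp = A*u/L**2
    f = 0.5*R*q**2 + 0.25*S*q**4 + 0.5*G*qp**2 + H*q*qpp
    return np.sum(0.5*(f[1:] + f[:-1])*np.diff(x))   # trapezoid rule

A_grid = np.linspace(-2.0, 2.0, 401)
F_vals = np.array([F(A) for A in A_grid])

# rapid variation in the surface layers lowers F below F(A=0) = F_d
assert F_vals.min() < F(0.0)
assert A_grid[np.argmin(F_vals)] != 0.0
```

The surface-like term $H\,q\,q''$ supplies a contribution linear in $A$ at the boundaries, which is what pushes the optimal amplitude away from zero for either sign of $H$.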
A direct calculation of the profile can be performed by minimization of the total energy if $q(x_3)$ is expanded as a power series of $x_3-x_3^*$, with $x_3^*=d/2-L$, in the surface layers. In this framework, it is assumed that \begin{eqnarray} \label{h16} q &=& q_d,\quad {\rm for}\quad0\leq x_3\leq x_3^*\\ \label{h17} q &=& q_d+b (x_3-x_3^*)+(c/2)(x_3-x_3^*)^2, \quad{\rm for}\quad x_3^*\leq x_3\leq d/2 \end{eqnarray} where $b$ and $c$ are free parameters. Evaluating $F$ (see Figure 3) and minimizing it with respect to $b$ and $c$ shows that the minimizing function is discontinuous, with discontinuity points at the border. For both signs of $H$, $F$ presents a minimum. Therefore, the stable solution in the bulk is $q_d$, and hence for the twist-bend phase $q=\sqrt{-R/S}$, as was determined in the case of the E1OE model. Note that, in the present case, it seems that the $(\phi')^4$ term controls the bulk solution over the $(\phi'')^2$ term. \begin{figure}[h] \centering \includegraphics[width=8cm]{Figure_3.eps} \caption[]{Contour plot of $F$ vs $b$ and $c$. $F$ has a minimum for $b/q_d^2 =-1.22777$, and $c/q_d^3 = 3.36218$. $f_0=0$.} \label{Figure_3} \end{figure} \section{Conclusions} In conclusion, we extended in a systematic way the linear elasticity of nematics using two approaches: first by taking into account gradients of the nematic strain tensor, and second by considering higher powers of the deformation tensor. Applying both models to the case of a twist-bend nematic we found that the bulk solution is the same. In the case of the strain tensor gradient model, surface effects arise in the same way as in the $K_{13}$ problem of undeformed nematics. In the case of a splay-bend nematic, in the framework of the extended first order elasticity, we demonstrated that small oscillations of the nematic director $\mathbf{n}(x_3)$ around the $x_3-$axis are forbidden, and the tilt angle is a monotonic function of $x_3$.
Finally, we note that the tight pitch ($\approx 10\,\mathrm{nm}$) of the helix in the nematic twist-bend phase casts some doubt on the applicability of a continuum theory. For a detailed discussion of this issue, see for instance \cite{vana_th}. Nevertheless, this phase was first predicted by continuum models, which seem, at least qualitatively, to describe the physics of the phase known to date. Further, a pitch of $\sim10\,\mathrm{nm}$ was predicted by the elastic model \cite{pre}, as shown in \cite{rosseto}. Certainly the problem of modulated nematic phases is far from being elucidated. \section*{Appendix: Elasticity from molecular interactions} The molecular approach is based on molecular interactions, which are supposed to be additive and to decrease rapidly with separation, so that they can be neglected at separations much larger than a molecular dimension. In a nematic liquid crystal, the anisotropy of the intermolecular interaction gives rise to anisotropic elastic constants. We assume a uniaxial nematic composed of rod-like molecules and with perfect nematic order $S=1$, that is, the molecular long axes coincide with the nematic director. Let ${\bf n}={\bf n}({\bf R})$ and ${\bf n'}={\bf n}({\bf R'})$ be the directors of two interacting molecules at the points ${\bf R}$ and ${\bf R'}={\bf R}+{\bf r}$. The two body interaction energy between two molecules is a function of their relative orientation and their separation \cite{saupe} \begin{equation} \label{f5}v=v({\bf n},{\bf n'},{\bf r}) \end{equation} The interaction energy between two elements of volume $d \tau$ and $d\tau'$ at ${\bf R}$ and ${\bf R'}$, containing $dN=\rho({\bf R}) d\tau$ and $dN'=\rho({\bf R'}) d\tau'$ particles, where $\rho$ is the particle density, is \begin{equation} \label{f5-1}d^2 {\cal V}=v\, dN\, dN'.
\end{equation} Supposing a constant density of particles, an assumption valid only in the bulk, (\ref{f5-1}) can be rewritten as \cite{saupe} \begin{equation} \label{f52}d^2 {\cal V}=g({\bf n},{\bf n'},{\bf r}) d\tau d\tau' \end{equation} where $g({\bf n},{\bf n'},{\bf r})=\rho^2 v ({\bf n},{\bf n'},{\bf r})$. In the elastic approximation $v\neq 0$ only for $r_m\leq r\leq r_M$, where $r_m$ is a lower cut-off, and $r_M$ is of the order of the range of the molecular forces responsible for the condensed phase. If ${\bf n}$ varies slowly over $r_M$ we have \begin{equation} \label{f6}{\bf n'}={\bf n}({\bf R'})={\bf n}({\bf R})+\delta {\bf n}({\bf R},{\bf r}), \end{equation} with $|\delta {\bf n}({\bf R},{\bf r})|\ll 1$. Hereafter, we limit our analysis to second order. However, the results can be generalized to all orders. Substituting (\ref{f6}) into the expression for $g$ we get \begin{equation} \label{f7}g=g({\bf n},{\bf n}+\delta {\bf n},{\bf r}). \end{equation} Since $|\delta {\bf n}({\bf R},{\bf r})|\ll 1$ for $r_m\leq r\leq r_M$, we can expand $g$ in powers of $\delta {\bf n}$ \begin{equation} \label{f8}g=g({\bf n},{\bf n},{\bf r})+q_i \delta n_i+\frac{1}{2} q_{ij} \delta n_i \delta n_j+{\cal O}(3), \end{equation} where the Einstein summation convention for repeated indices is assumed, and \begin{equation} \label{f9}q_i({\bf n},{\bf r})=\left(\frac{\partial g}{\partial n_i'}\right)_{{\bf n'}={\bf n}},\quad{\rm and}\quad q_{ij}({\bf n},{\bf r})=\left(\frac{\partial^2 g}{\partial n_i' \partial n_j'}\right)_{{\bf n'}={\bf n}} \end{equation} To derive the elastic energy density we expand $\delta n_i$ in a power series of the Cartesian components $r_i$ of ${\bf r}$ \begin{equation} \label{f10}\delta n_i=n_{i,j} r_j+\frac{1}{2}n_{i,jk} r_j r_k+\ldots \end{equation} where $n_{i,j}=(\partial n_i/\partial X_j)$ are evaluated at ${\bf R}$.
Substituting (\ref{f10}) into (\ref{f8}) results in \begin{equation} \label{f11}g=g({\bf n},{\bf n},{\bf r})+q_i n_{i,j} r_j+\frac{1}{2}\left(q_i n_{i,kl}+q_{ij} n_{i,k} n_{j,l}\right) r_k r_l+... \end{equation} The elastic energy density, in the mean field approximation, at ${\bf R}$, is given by \begin{equation} \label{f12}f=\frac{1}{2}\int\int\int_{\tau} g({\bf n},{\bf n'},{\bf r}) d\tau', \end{equation} where $\tau$ is the volume of the sample. Due to the short range character of the interaction the integral has to be performed over a volume of linear dimension of the order of $r_M$. Substituting (\ref{f11}) into (\ref{f12}) we get \begin{equation} \label{f13}f=f_0+{\cal L}(\nabla{\bf n})+{\cal N}\nabla(\nabla{\bf n})+\frac{1}{2}{\cal K}(\nabla{\bf n})(\nabla{\bf n}) \end{equation} where the elements of the tensors ${\cal L}$, ${\cal N}$ and ${\cal K}$ are \begin{eqnarray} \label{f14}L_{ik}&=&\frac{1}{2}\int\int\int_{\tau} q_i r_k \,d\tau',\\ \label{f15}N_{ikn}&=&\frac{1}{4}\int\int\int_{\tau}q_i r_k r_n \,d\tau',\\ \label{f16}K_{ijkn}&=&\frac{1}{2}\int\int\int_{\tau} q_{ij}r_k r_n \,d\tau'. \end{eqnarray} Expansion (\ref{f11}) gives a rule for expanding the elastic energy density in spatial derivatives higher than the first. For instance, $n_{i,j} n_{k,l}$ is of second order, as is $n_{j,kl}$; $n_{i,jkl}$ is of third order, as are $n_{i,j} n_{k,l} n_{p,q}$ and $n_{i,jk} n_{r,s}$; and so on.
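The moment-integral structure of (\ref{f14})--(\ref{f16}) can be illustrated on a toy kernel. For a hypothetical isotropic $q_{ij}=b\,\delta_{ij}e^{-r^2}$ (not derived from any specific interaction), (\ref{f16}) reduces to $K_{ijkn}=(b\,\pi^{3/2}/4)\,\delta_{ij}\delta_{kn}$, which a brute-force grid integration reproduces:

```python
import numpy as np

# Toy kernel q_ij = b*delta_ij*exp(-r^2) (purely illustrative):
# K_ijkn = (b/2)*delta_ij * Int r_k r_n exp(-r^2) d^3r
#        = (b*pi^(3/2)/4) * delta_ij * delta_kn
b = 1.3
r = np.linspace(-5.0, 5.0, 81)
X, Y, Z = np.meshgrid(r, r, r, indexing='ij')
w = np.exp(-(X**2 + Y**2 + Z**2))
dV = (r[1] - r[0])**3

K_1111 = 0.5*b*np.sum(X*X*w)*dV   # i=j=1, k=n=1
K_1112 = 0.5*b*np.sum(X*Y*w)*dV   # i=j=1, k=1, n=2: vanishes by symmetry

assert np.isclose(K_1111, b*np.pi**1.5/4, rtol=1e-3)
assert np.isclose(K_1112, 0.0, atol=1e-6)
```

For a realistic anisotropic kernel the same moments acquire a dependence on ${\bf n}$, which is what generates distinct splay, twist, and bend constants.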
\section*{Acknowledgment} This research was funded by the TCS Research Scholarship Program and a Young Faculty Research Fellowship (Visvesvaraya Ph.D. scheme) from MeitY, Govt. of India. \section{The ACP Control Algorithm} \label{sec:algorithm} \begin{figure}[!t] \begin{center} \includegraphics[width=0.45\textwidth]{MATLAB_plots_camera/demo_acp} \caption {A snippet from the functioning of ACP. The y-axis of the plot showing actions denotes the action and the line number in Algorithm~\ref{alg:acp}. Note the action marked by the dotted red line. At that time instant ACP observes an increase in both backlog and age and chooses ($9$,DEC) initially. However, there is still a significant jump in age. This results in the choice of multiplicative decrease ($7$,MDEC).} \label{fig:acpDemo} \end{center} \vspace{-0.2in} \end{figure} Let the control epochs of ACP (Section~\ref{sec:problem}) be indexed $1,2,\ldots$. Epoch $k$ starts at time $t_k$. At $t_1$ the update rate $\lambda_{1}$ is set to the inverse of the average packet round-trip-time (RTT) obtained at the end of the initialization phase. At time $t_k$, $k>1$, the update rate is set to $\lambda_k$. The source transmits updates at a fixed period of $1/\lambda_{k}$ in the interval $(t_{k}, t_{k+1})$. Let $\overline{\Delta}_k$ be the source ACP's estimate, at time $t_k$, of the time average update age at the monitor. This average is calculated over $(t_{k-1}, t_k)$. To calculate it, the source ACP must construct its estimate of the age sample function (see Figure~\ref{fig:ageSampleFunction}), over the interval, at the monitor. It knows the time $a_i$ at which the source sent a certain update $i$. However, it needs the time $d_i$ at which update $i$ was received by the monitor, which it approximates by the time the ACK for packet $i$ was received. On receiving the ACK, it resets its estimate of age to the resulting round-trip-time (RTT) of packet $i$.
Note that this value is an overestimate of the age of the update packet when it was received at the monitor, since it includes the time taken to send the ACK over the network. The time average $\overline{\Delta}_k$ is obtained simply by calculating the area under the resulting age curve over $(t_{k-1}, t_k)$ and dividing it by the length $t_k - t_{k-1}$ of the interval. Let $\overline{B}_k$ be the time average of backlog calculated over the interval $(t_{k-1}, t_k)$. This is the time average of the instantaneous backlog $B(t)$ over the interval. The instantaneous backlog increases by $1$ when the source sends a new update. When an ACK corresponding to an update $i$ is received, update $i$ and any unacknowledged updates older than $i$ are removed from the instantaneous backlog. In addition to using RTT(s) of updates for age estimation, we also use them to maintain an exponentially weighted moving average (EWMA) $\overline{\text{RTT}}$ of RTT. We update $\overline{\text{RTT}} = (1 - \alpha) \overline{\text{RTT}} + \alpha \text{RTT}$ on reception of an ACK that corresponds to a round-trip-time of RTT. 
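The source-side bookkeeping described above (resetting the age estimate to the RTT at each ACK, growing it linearly in between, averaging the area under the resulting sawtooth, and maintaining the EWMA) can be sketched as follows; the EWMA weight $\alpha$ is not specified here, so $0.125$ below is an illustrative choice:

```python
def average_age(acks, t_start, t_end):
    """Time-average age over (t_start, t_end) from (ack_time, rtt) pairs.

    Sketch: age resets to rtt at each ack_time and grows linearly in
    between; for simplicity it assumes an ACK arrives exactly at t_start."""
    area, t, age = 0.0, t_start, None
    for ack_time, rtt in acks:
        if age is not None:
            dt = ack_time - t
            area += age*dt + 0.5*dt*dt   # trapezoid under the linear ramp
        t, age = ack_time, rtt
    dt = t_end - t
    area += age*dt + 0.5*dt*dt           # tail segment up to t_end
    return area / (t_end - t_start)

def ewma(avg, sample, alpha=0.125):      # alpha is an illustrative choice
    return (1.0 - alpha)*avg + alpha*sample

# Age is 0.5 just after each ACK and ramps to 1.5 before the next reset
assert abs(average_age([(0.0, 0.5), (1.0, 0.5)], 0.0, 2.0) - 1.0) < 1e-12
assert abs(ewma(0.2, 0.3) - 0.2125) < 1e-12
```

The same `ewma` update serves for both $\overline{\text{RTT}}$ and the inter-update time $\overline{Z}$.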
\begin{algorithm}[t] \caption{Control Algorithm of ACP} \label{alg:acp} \footnotesize \begin{algorithmic}[1] \State \textbf{INPUT:} $b_k, \delta_k, \overline{T}$ \State \textbf{INIT:} $flag \gets 0$, $\gamma \gets 0$ \While{true} \If {$b_k>0$ \&\& $\delta_k >0$}\label{alg:one} \If {$flag==1$} \State $\gamma=\gamma+1$\label{alg:oneincr} \State MDEC($\gamma$) \Else \State DEC\label{alg:oneDEC} \EndIf \State $flag\gets 1$ \ElsIf { $b_k>0$ \&\& $\delta_k <0$}\label{alg:two} \If {$flag==1$ \&\& $|b_k|< 0.5*|b_k^{*}|$}\label{alg:two1} \State $\gamma=\gamma+1$ \State MDEC($\gamma$) \Else \State INC, $flag\gets 0$, $\gamma\gets0$ \label{alg:two2} \EndIf \ElsIf { $b_k<0$ \&\& $\delta_k >0$} \State INC, $flag\gets 0$, $\gamma\gets0$ \label{alg:three} \Else \Comment{$b_k<0$ \&\& $\delta_k <0$}\label{alg:four} \If {$flag==1$ \&\& $\gamma>0$} \State MDEC($\gamma$) \label{alg:fourMDEC} \Else \State DEC, $flag\gets 0$, $\gamma\gets0$ \EndIf \EndIf \State update $\lambda_k$ \State wait $\overline{T}$ \EndWhile \end{algorithmic} \end{algorithm} The source ACP also estimates the inter-update arrival times at the monitor and the corresponding EWMA $\overline{Z}$. The inter-update arrival times are approximated by the corresponding inter-ACK arrival times. The length $\overline{T}$ of a control epoch is set as an integral multiple of $\mathcal{T} = \min(\overline{\text{RTT}}, \overline{Z})$. This ensures that the length of a control epoch is never too large and allows for fast enough adaptation. Note that at a sufficiently low rate $\lambda_{k}$ of sending updates $\overline{Z}$ is large, and at a sufficiently high update rate $\overline{\text{RTT}}$ is large. At time $t_k$ we set $t_{k+1} = t_{k} + \overline{T}$. In all our evaluations we have used $\overline{T} = 10 \mathcal{T}$. The resulting length of $\overline{T}$ was observed to be long enough to see the desired changes in average backlog and age in response to a choice of source update rate at the beginning of an epoch.
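A straightforward Python transcription of the decision logic in Algorithm~\ref{alg:acp} reads as follows; the state-passing convention and variable names are ours, not part of the protocol specification:

```python
def acp_decide(b_k, delta_k, state):
    """One pass of the ACP decision logic: pick INC/DEC/MDEC(gamma) from
    the signs of the backlog change b_k and the age change delta_k.
    `state` carries flag, gamma and the previous backlog target b_star."""
    flag, gamma, b_star = state['flag'], state['gamma'], state['b_star']
    if b_k > 0 and delta_k > 0:            # backlog and age both grew
        if flag == 1:
            gamma += 1
            action = ('MDEC', gamma)       # consecutive: more aggressive
        else:
            action = ('DEC', None)
        flag = 1
    elif b_k > 0 and delta_k < 0:          # backlog grew, age fell
        if flag == 1 and abs(b_k) < 0.5*abs(b_star):
            gamma += 1
            action = ('MDEC', gamma)       # decrease fell short of target
        else:
            action, flag, gamma = ('INC', None), 0, 0
    elif b_k < 0 and delta_k > 0:          # backlog fell, age grew
        action, flag, gamma = ('INC', None), 0, 0
    else:                                  # backlog and age both fell
        if flag == 1 and gamma > 0:
            action = ('MDEC', gamma)
        else:
            action, flag, gamma = ('DEC', None), 0, 0
    state.update(flag=flag, gamma=gamma)
    return action

state = {'flag': 0, 'gamma': 0, 'b_star': 1.0}
assert acp_decide(1.0, 1.0, state)[0] == 'DEC'     # first congestion sign
assert acp_decide(1.0, 1.0, state) == ('MDEC', 1)  # consecutive occurrence
```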
The source updates $\overline{\text{RTT}}$, $\overline{Z}$, and $\overline{T}$ every time an ACK is received. At the beginning of control epoch $k > 1$, at time $t_k$, the source ACP calculates the difference $\delta_k = \overline{\Delta}_k - \overline{\Delta}_{k-1}$ in average age measured over the intervals $(t_{k-1}, t_k)$ and $(t_{k-2}, t_{k-1})$ respectively. Similarly, it calculates $b_k = \overline{B}_k - \overline{B}_{k-1}$. ACP at the source chooses an action $u_k$ at the $k$\textsuperscript{th} epoch that targets a change $b^{*}_{k+1}$ in average backlog over an interval of length $\mathcal{T}$ with respect to the $k$\textsuperscript{th} interval. The actions may be broadly classified into additive increase (INC), additive decrease (DEC), and multiplicative decrease (MDEC). MDEC corresponds to a set of actions $\text{MDEC}(\gamma)$, where $\gamma = 1,2,\ldots$. We have% {\small \begin{align} &\text{INC: } b^{*}_{k+1} = \kappa,\, \text{DEC: } b^{*}_{k+1} = -\kappa,\nonumber\\ &\text{MDEC(}\gamma{\text{): } } b^{*}_{k+1} = -(1 - 2^{-\gamma}) B_k, \end{align} } where $\kappa > 0$ is a step size parameter. ACP attempts to achieve $b^{*}_{k+1}$ by setting $\lambda_k$ appropriately. The estimate $\overline{Z}$ at the source ACP of the average inter-update arrival time at the monitor gives us the rate $1/\overline{Z}$ at which updates sent by the source arrive at the monitor. This and $\lambda_k$ allow us to estimate the average change in backlog over $\mathcal{T}$ as $(\lambda_k - (1/\overline{Z})) \mathcal{T}$. Therefore, achieving a change of $b^{*}_{k+1}$ requires choosing $\lambda_k = \frac{1}{\overline{Z}} + \frac{b^{*}_{k+1}}{\mathcal{T}}$. Algorithm~\ref{alg:acp} summarizes how ACP chooses its action $u_k$ as a function of $b_k$ and $\delta_k$. Figure~\ref{fig:acpDemo} shows an example of ACP in action.
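The mapping from a chosen action to the target backlog change $b^{*}_{k+1}$ and then to the rate $\lambda_k = 1/\overline{Z} + b^{*}_{k+1}/\mathcal{T}$ can be sketched directly (the step size $\kappa$ and the numbers below are illustrative):

```python
def update_rate(action, z_bar, T, B_k, kappa=1.0):
    """Map an action to the target backlog change b* and the rate
    lambda_k = 1/z_bar + b*/T. kappa is an illustrative step size."""
    kind, gamma = action
    if kind == 'INC':
        b_star = kappa
    elif kind == 'DEC':
        b_star = -kappa
    else:                                    # MDEC(gamma)
        b_star = -(1 - 2.0**(-gamma)) * B_k
    return 1.0/z_bar + b_star/T, b_star

lam, b_star = update_rate(('MDEC', 2), z_bar=0.1, T=1.0, B_k=4.0)
assert b_star == -3.0                 # -(1 - 1/4) * 4
assert abs(lam - 7.0) < 1e-12         # 1/0.1 + (-3)/1
```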
\begin{figure}[!t] \begin{center} \includegraphics[width=0.3\textwidth]{Figs/flowchart} \caption{Update of $\overline{\text{RTT}}$, $\overline{Z}$, and $\overline{T}$, which takes place every time an ACK is received.} \label{fig:flow} \end{center} \end{figure} The source ACP targets a reduction in average backlog over the next control interval in case either $b_k>0, \delta_k>0$ or $b_k<0, \delta_k<0$. The first condition (line~\ref{alg:one}) indicates that the update rate is such that updates are experiencing larger than optimal delays. ACP attempts to reduce the backlog, first using DEC (line~\ref{alg:oneDEC}), followed by multiplicative reduction MDEC to reduce congestion delays and in the process reduce age quickly. Consecutive occurrences ($flag == 1$) of this case (tracked by increasing $\gamma$ by $1$ in line~\ref{alg:oneincr}) attempt to decrease backlog even more aggressively, by a larger power of $2$. The condition $b_k<0, \delta_k<0$ occurs on a reduction in both age and backlog. ACP greedily aims at reducing backlog further hoping that age will reduce too. It attempts MDEC (line~\ref{alg:fourMDEC}) if previously the condition $b_k>0, \delta_k>0$ was satisfied. Else, it attempts an additive decrease DEC. The source ACP targets an increase in average backlog over the next control interval in case either $b_k>0, \delta_k<0$ or $b_k<0, \delta_k>0$. On the occurrence of the first condition (line~\ref{alg:three}) ACP greedily attempts to increase backlog. When the condition $b_k<0, \delta_k>0$ occurs, we check if the previous action attempted to reduce the backlog. If not, it hints at too low an update rate causing an increase in age. So, ACP attempts an additive increase (line~\ref{alg:two2}) of backlog. If yes, and if the actual change in backlog was much smaller than the desired (line~\ref{alg:two1}), ACP attempts to reduce backlog multiplicatively. This helps counter situations where the increase in age is in fact because of increasing congestion. 
Specifically, increasing congestion in the network may cause the inter-update arrival rate $1/\overline{Z}$ at the monitor to reduce during the epoch. As a result, despite the attempted multiplicative decrease in backlog, it may change very little. Clearly, in such a situation, even if the backlog reduced a little, the increase in age was not caused by a low backlog. The above check ensures ACP attempts reducing backlog to desired levels. In the above case, if instead ACP ignores the much smaller than desired change, it will end up increasing the rate of updates, further increasing backlog and age. \begin{figure}[!t] \begin{center} \includegraphics[width=0.48\textwidth]{MATLAB_plots_WoWMoM/Simulation_topology_WoWMoM.pdf} \caption {Sources are connected to the monitor via multiple routers and access points. Each source update travels over six hops. The first hop is between the source and access point AP-1. This could be either P2P or WiFi. The other hops that involve the ISP(s) and the Gateway are an abstraction of the Internet. These hops are P2P links and we vary their rates to simulate different end-to-end RTT.} \label{fig:simulationNetwork} \end{center} \vspace{-0.2in} \end{figure} \section{Good Age Control Behavior and Challenges} \label{sec:acpIntuit} ACP must suggest a rate $\lambda$ updates/second at which a source must send fresh updates to its monitor. ACP must adapt this rate to network conditions. To build intuition, let's suppose that the end-to-end connection is well described by an idealized setting that consists of a single first-come-first-served (FCFS) queue that serves each update in constant time. An update generated by the source enters the queue, waits for previously queued updates, and then enters service. The monitor receives an update once it completes service. Note that every update must age at least by the (constant) time it spends in service before it is received by the monitor.
It may age more if it ends up waiting for one or more updates to complete service. In this idealized setting, one would want a new update to arrive as soon as the last generated update finishes service. To ensure that the age of each update received at the monitor is the minimum, one must choose a rate $\lambda$ such that new updates are generated in a periodic manner with the period set to the time an update spends in service. Also, update generation must be synchronized with service completion instants so that a new update enters the queue as soon as the last update finishes service. In fact, such a rate $\lambda$ is age minimizing even when updates pass through a sequence of $Q>1$ such queues in tandem~\cite{shortle2018fundamentals}. The update is received by the monitor when it leaves the last queue in the sequence. The rate $\lambda$ will ensure that a generated packet ages exactly $Q$ times the time it spends in the server of any given queue. At any given time, there will be exactly $Q$ update packets in the network, one in each server. Of course, the assumed network is a gross idealization. We assumed a series of identical constant-time service facilities and that the time spent in service and the instant of service completion were known exactly. We also assumed the absence of other traffic. However, the resulting intuition is significant. Specifically, \emph{a good age control algorithm must strive to have as many update packets of a source in transit as possible while simultaneously ensuring that these updates avoid waiting for other previously queued updates of the source}\footnote{Simulations that further build on this intuition may be found in~\cite{shreedhar_wowmom_long}. We skip them here due to lack of space.}. As described next, ACP tracks changes in the number of backlogged packets, which are updates for which the source awaits an ACK from the monitor, and the average age over short intervals. In case backlog and age increase, ACP acts to rapidly reduce the backlog.
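The tandem-queue intuition can be checked with the standard departure recursion $d_{i,j}=\max(d_{i,j-1},\,d_{i-1,j})+s$: with deterministic service time $s$ and one fresh update generated every $s$ seconds, every update ages exactly $Q\,s$ on delivery and none waits (the parameter values are arbitrary):

```python
# Deterministic tandem of Q queues, service time s, one update sent per s
Q, s, N = 4, 0.25, 20
send = [i*s for i in range(N)]                  # generation instants
d = [[0.0]*Q for _ in range(N)]                 # d[i][j]: departure of
for i in range(N):                              # update i from queue j
    for j in range(Q):
        start = send[i] if j == 0 else d[i][j-1]
        if i > 0:
            start = max(start, d[i-1][j])       # wait for server j to free
        d[i][j] = start + s

ages = [d[i][Q-1] - send[i] for i in range(N)]  # age of update i on arrival
assert all(abs(a - Q*s) < 1e-12 for a in ages)  # no waiting: each ages Q*s
```

Sending any faster makes updates queue behind their predecessors; sending any slower leaves servers idle and lets the delivered information grow stale between arrivals.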
\section{Introduction} \label{sec:introduction} Inexpensive IoT devices have led to the proliferation of a relatively new class of real-time monitoring systems for applications such as health care, smart homes, transportation, and natural environment monitoring. Devices repeatedly sense various physical attributes of a region of interest, for example, traffic flow at an intersection. This results in a device (the \emph{source}) generating a sequence of packets (\emph{updates}) containing measurements of the attributes. A more recently generated update contains a more current measurement. The updates are communicated over the Internet to a \emph{monitor} that processes them and decides on any actuation that may be required. For such applications, it is desirable that freshly sensed information is available at monitors. However, simply generating and sending updates at a high rate over the Internet is detrimental to this goal. In fact, freshness at a monitor is optimized by the source smartly choosing an update rate, as a function of the end-to-end network conditions. Freshness at the monitor suffers when a too small or a too large rate of updates is chosen by the source. See, for example,~\cite[Figure 3]{KaulYatesGruteser-Infocom2012} for how age at the monitor varies as a function of source rate for simple first-come-first-served queues with memoryless arrival and service processes. The requirement of freshness is not akin to requirements of other pervasive real-time applications like voice and video. While resilient to packet drops to a certain degree, they require end-to-end packet delays to lie within known limits and would like small end-to-end jitter. Monitoring applications may achieve a low update packet delay by simply choosing a low rate at which the source sends updates. 
This, however, may be detrimental to freshness, as a low rate of updates can lead to a large \emph{age} of sensed information at the monitor, simply because updates from the source are infrequent. More so than voice/video, monitoring applications are exceptionally loss resilient and they don't benefit from the source retransmitting lost updates. Instead, the source should continue sending new updates at its configured rate. At the other end of the spectrum are applications like file transfer that require reliable transport and high throughputs but are delay tolerant and use the transmission control protocol (TCP). The congestion control algorithm of TCP, which optimizes the use of the network pipe for throughput, is detrimental to keeping age low. We detail the impact of TCP on age in \cite[Section 3]{shreedhar_wowmom_long}\footnote{We don't include an evaluation of TCP in this paper because of limited space.}. Its features of packet retransmissions and in-order delivery can keep fresh packets waiting at the TCP receiver for older packets to be successfully received. This causes large increases in age on transmission errors. Also, small updates may age more than larger ones, as the congestion window size doesn't increase till a sender maximum segment size (SMSS) worth of bytes is acknowledged. This delays the delivery of updates with fewer bytes. Unlike TCP, UDP ignores dropped packets and delivers packets to applications as soon as they are received. This makes it desirable for age sensitive applications. In fact, while ACP chooses the best rate of sending updates, it uses UDP to transport them over the Internet. Our specific contributions are listed next. \noindent \textbf{(a)} We propose the Age Control Protocol (detailed in Sections~\ref{sec:ACPProtocol},~\ref{sec:problem},~\ref{sec:acpIntuit}, and~\ref{sec:algorithm}), a novel transport layer protocol for real-time monitoring applications that aims to deliver fresh updates over the Internet.
ACP regulates the rate at which a source sends its updates to a monitor over its end-to-end connection in a manner that is application independent and makes the network transparent to the source. The goal is to keep the average age of sensed information at the monitor to a minimum, where the age of an update is the time elapsed since its generation by the source. Based on feedback from the monitor, ACP adapts the rate to the perceived congestion in the Internet. Consequently, ACP also limits congestion that would otherwise be introduced by sources sending to their monitors at unnecessarily fast update rates. We argue that ACP, unlike other transport protocols like TCP and RTP, must maintain just the right number of update packets in transit at any given time. \noindent \textbf{(b)} We provide an extensive evaluation of ACP (Sections~\ref{sec:evaluation},~\ref{sec:simulationResults}, and~\ref{sec:realWorldResults}) using network simulations and real-world experiments in which one or more sources send packets to monitors. To exemplify, over an inter-continental end-to-end IP connection with a median round-trip time of about $185$ msec, ACP achieves a significant reduction in the median age of about $100$ msec ($\approx 33\%$ improvement) over the age achieved by a protocol that sends one update every RTT. We end this section with a brief on the related art. Prior works~\cite{KaulYatesGruteser-Infocom2012,KamKompellaEphremides2013ISIT,KamKNE2016IT} have analyzed the metric of age of sensed information (AoI) for queue-theoretic abstractions of networks. Works have considered optimizing age for multiple sources sharing a communication link, for example, \cite{Kadota-UBSM-Allerton2016,JiangKrishnmachariZZN-ISIT2018}. AoI has been analyzed under a variety of link scheduling methods \cite{LuJiLi-Mobicom2018,Talak-KKM-arxiv2018}. Multihop networks have also received attention \cite{TalakKaramanModiano-Allerton2017}.
Notably, optimality properties of a Last Generated First Served service discipline when updates arrive out of order are found in \cite{Bedewy-SS-arxiv2017}. Such AoI literature has focused on analytically tractable simple models. Moreover, a model for the system is typically assumed to be known. In this work, our objective has been to develop end-to-end updating schemes that perform reasonably well without assuming a particular network configuration or model. Closer to our goal, more recently, in~\cite{sert2018DeepQ}, the authors proposed a deep Q-learning based approach to optimize age over a given but unknown network topology. A preliminary version of our work on ACP appeared as a poster~\cite{shreedhar2018acp}. \section{Age Sensitive Update Traffic over TCP} \label{sec:TCP} Before we delve into the problem of end-to-end age control, we demonstrate why TCP as a choice of transport protocol is unsuitable for age sensitive traffic. Specifically, we show that the congestion control mechanism of TCP, together with its goal of guaranteed and ordered delivery of packets, can lead to a very high age at the monitor, in comparison to when UDP is used, for a wide range of utilization of the network by the traffic generated by the source, and not just when the utilization is high. We simulated a simple network consisting of a source that sends measurement updates to a monitor via a single Internet Protocol (IP) router. The source node has a bidirectional point-to-point (P2P) link of rate $1$ Mbps to the router. A similar link connects the router to the monitor. The source uses a TCP client to connect to a TCP server at the monitor and sends its update packets over the resulting TCP connection. We also compare the resulting age with that obtained when UDP is used instead. \emph{Retransmissions and In-order Delivery:} Figure~\ref{fig:UDPvsTCPPacketError} illustrates the impact of packet error on TCP. A packet was dropped independently of other packets with probability $0.1$.
The figure compares the average age at the monitor and the average update packet delay, which is the time elapsed between generation of a packet at the source and its delivery at the monitor, when using TCP and UDP. On using TCP, the time average age achieves a minimum value of $0.18$ seconds when the source utilizes a fraction $0.2$ of the available $1$ Mbps to send update packets. This is clearly much larger than the minimum age of $\approx 0.01$ seconds at a utilization of $\approx 0.8$ when UDP is used. The large minimum age when using TCP is explained by the way TCP guarantees in-order packet delivery to the receiving application (monitor). It causes fresher updates that have arrived out-of-order at the TCP receiver to wait for older updates that have not yet been received, for example, because of packet losses in the network. This can be seen in Figure~\ref{fig:UDPvsTCPPacketErrorRXBuffer}, which shows how large measured packet delays coincide with a spike in the number of bytes received by the monitor application. The large delay is that of a received packet that had to undergo a TCP retransmission. The corresponding spike in received bytes, which is preceded by a pause, is because bytes with fresher information received earlier but out of order are held by the TCP receiver until the older packet is received post-retransmission. Unlike TCP, UDP ignores dropped packets and delivers packets to applications as soon as they are received. This makes it desirable for age sensitive applications. As we will see later, ACP uses UDP to provide update packets with end-to-end transport over IP. \emph{TCP Congestion Control and Small Packets:} Next, we describe the effect of small packets on the TCP congestion control algorithm and its impact on age. This is especially relevant to a source sending measurement updates, as the resulting packets may have small application payloads. Note that no packet errors were introduced in the simulations used to make the following observations.
Observe in the upper plot of Figure~\ref{fig:TCPSmallPacketSizes} that the $500$ byte packet payloads experience higher age at the monitor than the larger $536$ byte packets. The reason is explained by the impact of packet size on how quickly the size of the TCP congestion window (CWND) increases. The congestion window size doesn't increase till a sender maximum segment size (SMSS) worth of bytes is acknowledged. TCP does this to optimize the overheads associated with sending payload. Packets with fewer bytes may thus require multiple TCP ACK(s) to be received for the congestion window to increase. This explains the slower increase in the size of the congestion window for $500$ byte payloads seen in Figure~\ref{fig:TCPSmallPacketSizes}. This causes smaller packets to wait longer in the TCP send buffer before they are sent out by the TCP sender, which explains the larger age in Figure~\ref{fig:TCPSmallPacketSizes}. \section{Conclusions} \label{sec:conclusions} We proposed the Age Control Protocol, a novel transport layer protocol for real-time monitoring applications that desire freshness of the information communicated over the Internet. ACP works in an application-independent manner. It regulates the update rate of a source in a network-transparent manner. We detailed ACP's control algorithm, which adapts the rate so that the age of the updates at the monitor is minimized. Via network simulations and real-world experiments, we showed that ACP adapts the source update rate well to make effective use of the network resources available to the end-to-end connection between a source and its monitor. \balance \section{Evaluation Methodology} \label{sec:evaluation} We used a mix of simulations and real-world experiments to evaluate ACP. While simulations allowed us to test with large numbers of sources contending with each other over wireless access under varied wireless channel conditions and densities of source placements, real-world experiments allowed us to test ACP over a real intercontinental end-to-end connection. Figure~\ref{fig:simulationNetwork} shows the end-to-end network used for simulations. We start by describing the wireless access over which sources connect to AP-1. We performed simulations for $1-50$ sources accessing AP-1 using the WiFi ($802.11$g) medium access. We simulated for sources spread uniformly and randomly over areas of $10\times 10$ m$^2$, $20\times 20$ m$^2$ and $50\times 50$ m$^2$. The channel between a source and AP-1 was chosen to be log-normally distributed, with the standard deviation chosen from $4$, $8$, and $12$. The pathloss exponent was $3$. WiFi physical (PHY) layer rates were set to one of $12$ Mbps and $54$ Mbps. We simulated for no WiFi retries and a max retry limit of $7$. For the network beyond AP-1, all links were configured to be P2P. We set the P2P link rates from the set $\{0.3, 0.6, 1.2, 6.0\}$ Mbps.
This was done to simulate a wide range of network RTTs. We used the network simulator ns3\footnote{\url{https://www.nsnam.org/}} together with the YansWiFiPhyHelper\footnote{\url{https://www.nsnam.org/doxygen/classns3_1_1_yans_wifi_phy.html}}. Our simulated network is, however, limited to six hops. We also evaluated ACP in the real world by having $2-10$ sources, connected to an enterprise WiFi access point that is part of a university network, send their updates over the Internet to monitors that were running on a server with a global IP on another continent. This setup allows us to test ACP over a path with large RTT(s) and tens of hops. While the WiFi access point doesn't see much other traffic, we don't control the interference that may be created by adjoining access points or WiFi clients. Lastly, we had no control over the traffic on the university intranet when the experiments were performed. To compare the age control performance of ACP, we use \emph{Lazy}. \emph{Lazy}, like ACP, also adapts the update rate to network conditions. However, it is very conservative and keeps the average number of update packets in transit small. Specifically, it updates the $\overline{\text{RTT}}$ every time an ACK is received and sets the current update rate to the inverse of $\overline{\text{RTT}}$. Thus, it aims at maintaining an average backlog of $1$. We end by stating that an appropriate selection of the step size $\kappa$ is crucial to the proper functioning of ACP. We chose it by trial and error. For simulations, we found a step size of $\kappa = 0.25$ to be the best. However, this turned out to be too small for experiments over the Internet. For these, we tried $\kappa \in \{1, 2\}$. Next, we will discuss the simulation results followed by the real-world results.
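The \emph{Lazy} baseline can be captured in a few lines. The sketch below is our own illustration, not the evaluated implementation; in particular, the exponential weight \texttt{alpha} used for the running $\overline{\text{RTT}}$ is an assumption, since the text only specifies that $\overline{\text{RTT}}$ is refreshed on every ACK and that the rate is set to its inverse.

```python
class Lazy:
    """Sketch of the Lazy baseline: update rate = 1 / running-average RTT.

    The EWMA weight alpha is our assumption; the paper only states that
    the RTT average is refreshed on every received ACK.
    """

    def __init__(self, alpha=0.125):
        self.alpha = alpha
        self.rtt_bar = None  # running RTT average, in seconds

    def on_ack(self, rtt_sample):
        # Refresh the running average on every ACK ...
        if self.rtt_bar is None:
            self.rtt_bar = rtt_sample
        else:
            self.rtt_bar = (1 - self.alpha) * self.rtt_bar \
                + self.alpha * rtt_sample
        # ... and send at the inverse of the average, i.e. aim for an
        # average backlog of one update in transit.
        return 1.0 / self.rtt_bar
```

With a steady RTT of $185$ msec this settles at roughly $5.4$ updates/second, i.e. one update per RTT, which is exactly the sending behavior ACP is compared against.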
\section{Introduction} \label{sec:introduction} The availability of inexpensive embedded devices with the ability to sense and communicate has led to the proliferation of a relatively new class of real-time monitoring systems for applications such as health care, smart homes, transportation, and natural environment monitoring. Devices repeatedly sense various physical attributes of a region of interest, for example, traffic flow at an intersection. This results in a device (the \emph{source}) generating a sequence of packets (\emph{updates}) containing measurements of the attributes. A more recently generated update contains a more current measurement. The updates are communicated over the Internet to a \emph{monitor} that processes them and decides on any actuation that may be required. For such applications, it is desirable that freshly sensed information is available at monitors. However, as we will see, simply generating and sending updates at a high rate over the Internet is detrimental to this goal. In fact, freshness at a monitor is optimized by the source smartly choosing an update rate, as a function of the end-to-end network conditions. Freshness at the monitor suffers when a too small or a too large rate of updates is chosen by the source. In this work, we propose the Age Control Protocol (ACP), which in a network-transparent manner regulates the rate at which updates from a source are sent over its end-to-end connection to the monitor. This rate is chosen such that the average age of sensed information at the monitor, where the age of an update is the time elapsed since its generation by the source, is kept to a minimum, given the network conditions. Based on feedback from the monitor, ACP adapts its suggested rate to the perceived congestion in the Internet. Consequently, ACP also limits congestion that would otherwise be introduced by sources sending to their monitors at unnecessarily fast update rates.
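For concreteness, the age metric just described can be written as follows (this is the standard age-of-information definition; the notation $u(t)$ is ours):
\begin{equation*}
\Delta(t) = t - u(t), \qquad \overline{\Delta} = \frac{1}{T}\int_{0}^{T}\Delta(t)\, dt,
\end{equation*}
where $u(t)$ is the generation timestamp of the freshest update received by the monitor by time $t$, and $\overline{\Delta}$ is the time-average age over an observation window of length $T$ that ACP seeks to minimize.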
The requirement of freshness is not akin to requirements of other pervasive real-time applications like voice and video. For these applications, the rate at which packets are sent is determined by the codec being used. Often, such applications adapt to network conditions by choosing an appropriate code rate. These applications, while resilient to packet drops to a certain degree, require end-to-end packet delays to lie within known limits and would like small end-to-end jitter. Monitoring applications may achieve a low update packet delay by simply choosing a low rate at which the source sends updates. This, however, may be detrimental to freshness, as a low rate of updates can lead to a large \emph{age} of sensed information at the monitor, simply because updates from the source are infrequent. More so than voice/video, monitoring applications are exceptionally loss resilient and they don't benefit from the source retransmitting lost updates. Instead, the source should continue sending new updates at its configured rate. \begin{figure}[!t] \begin{center} \includegraphics[width=0.4\textwidth]{Figs/ageThrDelay.pdf} \caption{Interplay of the networking metrics of delay (solid line), throughput (normalized by service rate) and age. Shown for an M/M/1 queue~\cite{shortle2018fundamentals} with service rate of $1$. The age curve was generated using the analysis for an M/M/1 queue in~\cite{KaulYatesGruteser-Infocom2012}.} \label{fig:networkingMetrics} \end{center} \vspace{-0.2in} \end{figure} At the other end of the spectrum are applications like file transfer that require reliable transport and high throughputs but are delay tolerant. Such applications use the transmission control protocol (TCP) for end-to-end delivery of application packets. As we show in Section~\ref{sec:TCP}, the congestion control algorithm of TCP, which optimizes the use of the network pipe for throughput, and TCP's emphasis on guaranteed and ordered delivery are detrimental to keeping age low.
Figure~\ref{fig:networkingMetrics} broadly captures the behavior of the metrics of delay and age as a function of throughput. Under light and moderate loads, when packet dropping is negligible, throughput (average network utilization) increases linearly in the rate of updates. This leads to an increase in the average packet delay. Large packet delays coincide with large average age. Large age is also seen for small throughputs (and corresponding small rates of updates). \emph{At a low update rate, the monitor receives updates infrequently, and this increases the average age (staleness) of its most fresh update}. Finally, observe that there exists a sending rate (and corresponding throughput) at which age is minimized. Many works (see Section~\ref{sec:related}) have analyzed age as a Quality-of-Service metric for monitoring applications. Often such works have employed queue-theoretic abstractions of networks. More recently, in~\cite{sert2018DeepQ}, the authors proposed a deep Q-learning based approach to optimize age over a given but unknown network topology. We believe our work is the first to investigate age control at the transport layer of the networking stack, that is, over an end-to-end connection in an IP network and in a manner that is transparent to the application. A preliminary version of this work appeared as a poster~\cite{shreedhar2018acp}. Our specific contributions are listed next.\\ \noindent \textbf{(a)} We propose the Age Control Protocol, a novel transport layer protocol for real-time monitoring applications that wish to deliver fresh updates over IP networks. ACP regulates the rate at which a status updating source sends its updates to a monitor over its end-to-end connection in a manner that is application independent and makes the network transparent to the source. We argue that such a protocol, unlike other transport protocols like TCP and RTP, must have just the right number of update packets in transit at any given time.
\noindent \textbf{(b)} We demonstrate that TCP, the most commonly used transport protocol in the Internet, is not suitable for the transport of update packets. \noindent \textbf{(c)} We provide an extensive evaluation of the protocol using network simulations and real-world experiments in which one or more sources send packets to monitors. While our simulations are constrained to six hop paths between the sources and the monitors, in our real experiments we have sources send their updates over an inter-continental end-to-end IP connection. Over such a connection with a median round-trip time of about $185$ msec, ACP achieves a significant reduction in median age of about $100$ msec ($\approx 33\%$ improvement) over the age achieved by a protocol that sends one update every RTT. The rest of the paper is organized as follows. In the next section, we describe related works. In Section~\ref{sec:TCP} we demonstrate why the mechanisms of TCP are detrimental to minimizing age. In Section~\ref{sec:ACPProtocol}, we detail the Age Control Protocol, how it interfaces with a source and a monitor, and the protocol's timeline. In Section~\ref{sec:problem} we define the age control problem. In Section~\ref{sec:acpIntuit}, we use simple queueing models to intuit a good age control protocol and discuss a few challenges. Section~\ref{sec:algorithm} details the control algorithm that is a part of ACP. This is followed by details on the evaluation methodology in Section~\ref{sec:evaluation}. We discuss simulation results in Section~\ref{sec:simulationResults} and results from real-world experiments in Section~\ref{sec:realWorldResults}. We conclude in Section~\ref{sec:conclusions}. \section{Good Age Control Behavior and Challenges} \label{sec:acpIntuit} ACP must suggest a rate $\lambda$ updates/second at which a source must send fresh updates to its monitor. ACP must adapt this rate to network conditions.
To build intuition, let's suppose that the end-to-end connection is well described by an idealized setting that consists of a single FCFS queue that serves each update in constant time. An update generated by the source enters the queue, waits for previously queued updates, and then enters service. The monitor receives an update once it completes service. Note that every update must age at least by the (constant) time it spends in service, before it is received by the monitor. It may age more if it ends up waiting for one or more other updates to complete service. In this idealized setting, one would want a new update to arrive as soon as the last generated update finishes service. To ensure that the age of each update received at the monitor is the minimum, one must choose a rate $\lambda$ such that new updates are generated in a periodic manner with the period set to the time an update spends in service. Also, update generation must be synchronized with service completion instants so that a new update enters the queue as soon as the last update finishes service. In fact, such a rate $\lambda$ is age minimizing even when updates pass through a sequence of $Q>1$ such queues in tandem~\cite{shortle2018fundamentals}. The update is received by the monitor when it leaves the last queue in the sequence. The rate $\lambda$ will ensure that a generated packet ages exactly $Q$ times the time it spends in the server of any given queue. At any given time, there will be exactly $Q$ update packets in the network, one in each server. Of course, the assumed network is a gross idealization. We assumed a series of similar constant service facilities and that the time spent in service and instant of service completion were known exactly. We also assumed lack of any other traffic. However, as we will see further, the resulting intuition is significant. 
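The idealized tandem setting above is easy to check numerically. The sketch below is our illustration (the service time \texttt{S} and the number of queues \texttt{Q} are arbitrary): it generates updates periodically with period equal to the deterministic service time and pushes them through \texttt{Q} identical FCFS queues; every update then ages exactly $Q \times S$ and never waits behind another update.

```python
def tandem_delays(S, Q, n):
    """Push n periodically generated updates (period S) through Q
    deterministic FCFS queues in tandem, each with service time S.
    Returns the end-to-end delay of every update."""
    arrivals = [i * S for i in range(n)]      # synchronized generation
    for _ in range(Q):
        done = []
        prev_finish = float("-inf")
        for a in arrivals:
            start = max(a, prev_finish)       # wait only if server busy
            prev_finish = start + S
            done.append(prev_finish)
        arrivals = done                       # departures feed the next queue
    return [arrivals[i] - i * S for i in range(n)]

# Every update ages exactly Q * S; no update ever waits in a queue.
delays = tandem_delays(S=0.1, Q=3, n=10)
```

Any faster generation rate would make updates queue behind their predecessors; any slower rate would leave servers idle and let the freshest delivered update grow stale.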
Specifically, \emph{a good age control algorithm must strive to have as many update packets in transit as possible while simultaneously ensuring that these updates avoid waiting for other previously queued updates}. Before we detail our proposed control method, we will make a few salient observations using analytical results for simple queueing models and simulation results that capture stochastic service and generation of updates. These will help build on our intuition and also elucidate the challenges of age control over a priori unknown and likely non-stationary end-to-end network conditions. \subsection{Analytical Queueing Model for Two Queues} We will consider two queueing models. One is the $M/M/1$ FCFS queue with an infinite buffer in which a source sends update packets at a rate $\lambda$ to a monitor via a single queue, which services packets at a rate $\mu$ updates per second. The updates are generated as a Poisson process of rate $\lambda$ and packet service times are exponentially distributed with $1/\mu$ as the average time it takes to service a packet. In the other model, updates travel through two queues in tandem. Specifically, they enter the first queue that is serviced at the rate $\mu_1$. On finishing service in the first queue, they enter the second queue that services packets at a rate of $\mu_2$. As before, updates arrive to the first queue as a Poisson process and packet service times are exponentially distributed. The average age for the case of a single $M/M/1$ queue was analyzed in~\cite{KaulYatesGruteser-Infocom2012}. 
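That analysis yields a closed-form average age for the single $M/M/1$ FCFS queue, $\Delta = \frac{1}{\mu}\left(1 + \frac{1}{\rho} + \frac{\rho^2}{1-\rho}\right)$ with $\rho = \lambda/\mu$. The short sketch below is an illustrative check (not part of our evaluation) that scans $\lambda$ to locate the bottom of the bowl-shaped age curve:

```python
def mm1_age(lam, mu):
    """Average age of an M/M/1 FCFS queue, using the closed form from
    the analysis in [KaulYatesGruteser-Infocom2012]."""
    rho = lam / mu
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho ** 2 / (1.0 - rho))

# Scan lambda on a fine grid to locate the age-minimizing rate (mu = 1).
mu = 1.0
grid = [i / 1000.0 for i in range(1, 1000)]
best_lam = min(grid, key=lambda lam: mm1_age(lam, mu))
# The bowl bottoms out near a utilization of rho ~ 0.53, at age ~ 3.48/mu.
```

Note that the minimizing utilization is well below $1$: the server is deliberately left idle part of the time so that updates rarely wait.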
We extend their analysis to obtain analytical expressions for the average age as a function of $\lambda$, $\mu_1$, and $\mu_2$ for the two queue case, using the well-known result that updates also enter the second queue as a Poisson process of rate $\lambda$~\cite{shortle2018fundamentals}. \emph{On the impact of non-stationarity and transient network conditions:} Figure~\ref{fig:motivatingACP_age_lambda} shows the expected value (average) of age as a function of $\lambda$ when the queueing systems are in \emph{steady state}. It is shown for three single $M/M/1$ queues, each with a different service rate, and for two queues in tandem with both servers having the same unit service rate. Observe that all the age curves have a bowl-like shape that captures the fact that a too small or a too large $\lambda$ leads to large age. Such behavior has been observed in non-preemptive queueing disciplines in which updates can't preempt other older updates. A reasonable strategy to find the optimal rate thus seems to be one that starts at a certain initial $\lambda$ and changes $\lambda$ in a direction such that a smaller expected age is achieved. In practice, the absence of a network model (unknown service distributions and expectations) would require Monte-Carlo estimates of the expected value of age for every choice of $\lambda$. Getting these estimates, however, would require averaging over a large number of instantaneous age samples and would slow down adaptation. This could lead to updates experiencing excessive waiting times when $\lambda$ is too large. Worse, transient network conditions (a run of bad luck) and non-stationarities, for example, because of the introduction of other traffic flows, could push these delays to even higher values, leading to an even larger backlog of packets in transit. Figure~\ref{fig:motivatingACP_age_lambda} illustrates how changes in network conditions (service rate $\mu$ and number of hops (queues)) can lead to large changes in the expected age.
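The cost of such Monte-Carlo estimation is easy to appreciate with a toy simulation. The sketch below is our illustration (the seed and sample count are arbitrary): it simulates an $M/M/1$ FCFS queue and computes the time-average age from the sawtooth area under the age curve; many thousands of delivered updates are needed before the estimate settles near the steady-state value.

```python
import random

def simulate_mm1_age(lam, mu, n, seed=1):
    """Estimate the time-average age of an M/M/1 FCFS queue from n updates.

    FCFS delivers updates in generation order, so the age curve is a
    sawtooth: it grows linearly and drops to the system time of an update
    at that update's departure. We sum the trapezoids between departures."""
    rng = random.Random(seed)
    t, free_at = 0.0, 0.0
    gen, dep = [], []
    for _ in range(n):
        t += rng.expovariate(lam)               # Poisson generation
        start = max(t, free_at)                 # FCFS: wait if server busy
        free_at = start + rng.expovariate(mu)   # exponential service
        gen.append(t)
        dep.append(free_at)
    area = 0.0
    for i in range(1, n):
        dt = dep[i] - dep[i - 1]
        # Age rises from dep[i-1] - gen[i-1] for a duration dt, then resets.
        area += dt * (dep[i - 1] - gen[i - 1]) + 0.5 * dt * dt
    return area / (dep[-1] - dep[0])

est = simulate_mm1_age(lam=0.53, mu=1.0, n=200_000)
```

Even with hundreds of thousands of samples the estimate carries noticeable noise, which is exactly why a rate controller cannot afford to wait for converged age estimates at every candidate $\lambda$.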
It is desirable for a good age control algorithm to not allow the end-to-end connection to drift into a high backlog state. As we describe in the next section, ACP tracks changes in the average number of backlogged packets and average age over short intervals, and in case backlog and age increase, ACP acts to rapidly reduce the backlog. \emph{On Optimal Average Backlogs:} Figure~\ref{fig:motivatingACPArrivalAndSystem} plots the average packet system times, where the system time of a packet is the time that elapses between its arrival and the completion of its service, as a function of the inter-arrival time ($1/\lambda$) for three single queue $M/M/1$ networks and two networks that have two queues in tandem. As expected, an increase in the inter-arrival time reduces the system time. As inter-arrival times become large, packets wait less often for others to complete service. As a result, as the inter-arrival time increases, the system times converge to the average service time of a packet. For each queueing system, we also mark on its plot the inter-arrival time that minimizes age. It is instructive to note that for the three single queue systems this inter-arrival time is only slightly smaller than the system time. However, for the two queues in tandem with service rates of $1$ each, the inter-arrival time is a lot smaller than the system time. The implication is that, on average, it is optimal to send slightly more than one ($\approx 1.2$) packet every system time for the single queue system. However, for the two queue network with similar servers, we want to send a larger number ($\approx 1.6$) of packets every system time. For the two queue network where the second queue is served by a faster server, this number is smaller ($\approx 1.43$). As we observe next, as one of the servers becomes faster, the two queue network becomes more akin to a single queue network with the slower server.
\begin{table} \begin{center} \begin{tabular}[!b]{|L|L|L|L|L|L|L|} \hline \text{Network} & R_1 & R_2 & R_3 & R_4 & R_5 & R_6\\ \hline \text{Net A} & 1 & 1 & 1 & 1 & 1 & 1\\ \hline \text{Net B} & 1 & 1 & \mathbf{5} & \mathbf{5} & 1 & 1\\ \hline \text{Net C} & 1 & \mathbf{5} & 5 & 5 & \mathbf{5} & 1\\ \hline \text{Net D} & \mathbf{5} & 5 & 5 & 5 & 5 & 1\\ \hline \text{Net E} & 5 & 5 & 5 & 5 & 5 & \mathbf{5}\\ \hline \end{tabular} \caption{Various P2P link configurations applied to the network diagram in Figure~\ref{fig:simulationNetwork}. The rates $R_i$ are in Mbps. $R_1$ is the rate of the link between the source and AP-1 and $R_6$ is that of the link between AP-2 and the monitor.} \label{tab:goodACPSimulationTable} \end{center} \vspace{-0.2in} \end{table} Note that these numbers are in fact the optimal (age minimizing) average numbers of packets in the system. Figure~\ref{fig:motivatingACPBacklog} shows how this optimal average backlog varies as a function of $\mu_2$ for a given $\mu_1$. The observations stay the same on swapping $\mu_1$ and $\mu_2$. As $\mu_2$ increases, that is, as the second server becomes faster than the first ($\mu_1 = 1$), we see that the average backlog increases in queue $1$ and reduces in queue $2$, while the sum backlog gets closer to the optimal backlog for the single queue case. Specifically, as queue $1$ becomes a larger bottleneck relative to queue $2$, the optimal $\lambda$ must adapt to the bottleneck queue. The backlog in the faster queue is governed by the resulting choice of $\lambda$. When the rates $\mu_1$ and $\mu_2$ are similar, the queues see similar backlogs. However, as seen when $\mu_1 = \mu_2 = 1$, the backlog per queue is smaller than in a network with only a single such queue. However, the sum backlog ($\approx 1.6$) is larger.
\begin{figure}[!t] \begin{center} \includegraphics[width=.5\textwidth]{MATLAB_plots_WoWMoM/optimal_backlog_skk} \caption{Average backlogs at different nodes in the network, shown in Fig~\ref{fig:simulationNetwork}, at the optimal update rate. Net E is similar to Net A and not shown.} \label{fig:motivatingACPBacklog} \end{center} \vspace{-0.1in} \end{figure} \subsection{Simulating Larger Number of Hops} To see if this intuition generalizes to a larger number of hops, we simulated an end-to-end connection in which the source sends its packets to the monitor over $6$ hops, where each hop is serviced by a bidirectional P2P link. The hops are shown in Figure~\ref{fig:simulationNetwork}. We vary the rates at which the P2P links transmit packets to gain insight into how queues in a network must be populated with update packets at an age optimal rate. We also introduce other traffic in the network that occupies, on average, $0.2$ Mbps of each P2P link from the source to the monitor. The different configurations are summarized in Table~\ref{tab:goodACPSimulationTable}. For each network configuration we have the source send updates over UDP to the monitor using an a priori chosen rate $\lambda$. We vary $\lambda$ over a wide range of values and for each $\lambda$ we calculate the obtained time average age. These simulations allow us to empirically pick the age minimizing $\lambda$ for the given network. Figure~\ref{fig:motivatingACPBacklog} shows the time average \emph{backlog} (queue occupancy) at the different nodes in the network at the optimal $\lambda$. The backlog at a node includes the update packet being transmitted on a node's outgoing P2P link and any update that is awaiting transmission at the node. Observe that all P2P links in each of Net A and Net E have the same rate, $1$ and $5$ Mbps respectively. Though Net E has links much faster than those of Net A, for both these networks the average backlog at all nodes is close to $1$.
That it is smaller than $1$ is explained by the presence of the other flow. The other flow, which also originates at the source, is also the reason why the source sees a slightly larger average queue occupancy of update packets. Net B has faster P2P links connecting the ISP(s) and the Gateway when compared to Net A. However, its other links are slower than those in Net E. We see that the nodes that have fast outgoing links have low backlogs and those that have slow links have an average backlog close to $0.8$. The source has a slow outgoing link and, as a result of the other flow, sees a slightly larger occupancy of update packets. In summary, at the $\lambda$ that minimizes average age, as is also shown in Figure~\ref{fig:motivatingACPBacklog} for Net C and Net D, \emph{nodes with outgoing links that are bottlenecks relative to the others' links see a backlog such that no more than one update packet is queued at them}. Naturally, nodes with faster links see smaller backlogs in proportion to how fast their links are with respect to the bottleneck. A corollary to the above observations, which we do not demonstrate for lack of space, is that a good age control algorithm should on average have a larger number of packets simultaneously in transit in a network with a larger number of hops (nodes/queues). A good age control algorithm must not allow the end-to-end connection to drift into a high backlog state. As described next, ACP tracks changes in the number of backlogged packets and average age over short intervals, and in case backlog and age increase, ACP acts to rapidly reduce the backlog.
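The empirical sweep over $\lambda$ described above can be mimicked in a few lines. The sketch below is a toy stand-in for the packet-level simulation (a Poisson source feeding exponential FCFS servers in tandem); the function names, rates, and horizon are illustrative, not the simulator actually used:

```python
import random

def simulate_age(lam, service_rates, horizon=5000.0, seed=0):
    """Time-average age for a Poisson(lam) source whose updates
    traverse FCFS tandem queues with exponential service rates."""
    rng = random.Random(seed)
    area = 0.0            # integral of the age curve
    z, t_prev = 0.0, 0.0  # freshest delivered timestamp, last delivery time
    free = [0.0] * len(service_rates)  # time each server next becomes free
    t = 0.0
    while t < horizon:
        t += rng.expovariate(lam)       # generation time of the next update
        done = t
        for i, mu in enumerate(service_rates):
            start = max(done, free[i])  # FCFS: wait for the server
            done = start + rng.expovariate(mu)
            free[i] = done
        if done > horizon:
            break
        # FCFS tandem delivers in order, so every delivery resets the age.
        area += (done - t_prev) * (t_prev - z) + 0.5 * (done - t_prev) ** 2
        z, t_prev = t, done
    area += (horizon - t_prev) * (t_prev - z) + 0.5 * (horizon - t_prev) ** 2
    return area / horizon

# Empirically pick the age-minimizing update rate, in the spirit of the
# sweep over lambda described above.
rates = [0.1 * k for k in range(1, 10)]
best = min(rates, key=lambda lam: simulate_age(lam, [1.0, 1.0]))
```

Sweeping $\lambda$ and taking the minimizer captures the trade-off seen above: a very low $\lambda$ loses on staleness between updates, while a very high $\lambda$ loses on queueing delay at the bottleneck.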
\section{The Age Control Protocol} \label{sec:ACPProtocol} The Age Control Protocol resides in the transport layer of the TCP/IP networking stack and operates only on the end hosts. Figure~\ref{fig:networkStack} shows an \emph{end-to-end connection} between two hosts, an IoT device, and a server, over the Internet. A source opens an ACP connection to its monitor. Multiple sources may connect to the same monitor. ACP uses the unreliable transport provided by the user datagram protocol (UDP) for sending updates generated by the sources. This is in line with the requirement of fresh delivery of updates. Retransmissions make an update stale and also compete with fresh updates for network resources. The source ACP appends a header to an update from a source. The header contains a \emph{timestamp} field that stores the time the update was generated. The source ACP suggests to the source the rate at which it must generate updates. To be able to calculate the rate, the source ACP must estimate network conditions over the end-to-end path to the monitor ACP. This is achieved by having the monitor ACP acknowledge each update packet received from the source ACP by sending an ACK packet in return. The ACK contains the timestamp of the update being acknowledged. The ACK(s) allow the source ACP to keep an estimate of the age of sensed information at the monitor.
An \emph{out-of-sequence} ACK, which is an ACK received after an ACK corresponding to a more recent update packet, is discarded by the source ACP. Similarly, an update that is received \emph{out-of-sequence} is discarded by the monitor. This is because the monitor has already received a more recent measurement from the source. Figure~\ref{fig:acpConnectionTimeline} shows a timeline of a typical ACP connection. For an ACP connection to take place, the monitor ACP must be listening on a previously advertised UDP port. The ACP source first establishes a UDP connection with the monitor. This is followed by an \emph{initialization} phase during which the source sends an update and waits for an ACK or for a suitable timeout to occur, and repeats this process a few times, with the goal of probing the network to set an initial update rate. Following this phase, the ACP connection may be described by a sequence of \emph{control epochs}. The end of the \emph{initialization} phase marks the start of the first control epoch. At the beginning of each control epoch, ACP sets the rate at which updates generated from the source are sent until the beginning of the next epoch. \begin{figure}[!t] \begin{center} \includegraphics[width=0.4\textwidth]{Figs/NetworkStack} \caption{The ACP end-to-end connection.} \label{fig:networkStack} \end{center} \vspace{-0.2in} \end{figure} \section{The Age Control Problem} \label{sec:problem} \begin{figure}[!b] \begin{center} \includegraphics[width=0.47\textwidth]{Figs/age-abridged-2.pdf} \caption{A sample function of the age $\Delta(t)$. Updates are indexed $1,2,\ldots$. The timestamp of update $i$ is $a_i$. The time at which update $i$ is received by the monitor is $d_i$. Since update $2$ is received out-of-sequence, it doesn't reset the age process.} \label{fig:ageSampleFunction} \end{center} \vspace{-0.2in} \end{figure} We will formally define the age of sensed information at a monitor.
To simplify presentation, in this section we will assume that the source and monitor are time synchronized, although the functioning of ACP doesn't require this. Let $z(t)$ be the timestamp of the freshest update received by the monitor up to time $t$. Recall that this is the time the update was generated by the source. The age at the monitor at time $t$ is $\Delta(t) = t - z(t)$, the age of the freshest update available at the monitor. An example sample function of the age stochastic process is shown in Figure~\ref{fig:ageSampleFunction}. The figure shows the timestamps $a_1, a_2,\ldots,a_6$ of $6$ packets generated by the source. Packet $i$ is received by the monitor at time $d_i$. At time $d_i$, packet $i$ has age $d_i - a_i$. The age $\Delta(t)$ at the monitor increases linearly in between receptions of updates received in the correct sequence. Specifically, it is reset to the age $d_i - a_i$ of packet $i$ if packet $i$ is the freshest packet (the one with the most recent timestamp) at the monitor at time $d_i$. For example, when update $3$ is received at the monitor, the only other update received by the monitor until then was update $1$. Since update $1$ was generated at time $a_1 < a_3$, the reception of $3$ resets the age to $d_3 - a_3$ at time $d_3$. On the other hand, while update $2$ was sent at a time $a_2 < a_3$, it is delivered out-of-order at a time $d_2 > d_3$. So packet $2$ is discarded by the monitor ACP and the age stays unchanged at time $d_2$. We want to choose the rate $\lambda$ (updates/second) that minimizes the expected value $\lim_{t \to \infty}E[\Delta(t)]$ of age at the monitor, where the expectation is over any randomness introduced by the network. Note that in the absence of a priori knowledge of a network model, as is the case with the end-to-end connection over which ACP runs, this expectation is unknown to both source and monitor and must be estimated using measurements.
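The time-average age over a sample path like that of Figure~\ref{fig:ageSampleFunction} can be computed directly from the generation and delivery times. A minimal sketch (illustrative only; it assumes the age starts at $0$ and applies the monitor's out-of-sequence discard rule):

```python
def average_age(events, horizon):
    """Time-average age given (a_i, d_i) generation/delivery pairs.

    Deliveries are processed in the order the monitor sees them; an
    update older than the freshest one already delivered is discarded
    and leaves the age curve unchanged.
    """
    z = 0.0       # timestamp of the freshest update at the monitor
    t_prev = 0.0  # time of the last age reset
    area = 0.0    # integral of the age curve Delta(t)
    for a, d in sorted(events, key=lambda e: e[1]):  # order by delivery time
        if a <= z:
            continue  # out-of-sequence update: discarded, no reset
        # Age grows linearly from t_prev - z to d - z, then resets to d - a.
        area += (d - t_prev) * (t_prev - z) + 0.5 * (d - t_prev) ** 2
        z, t_prev = a, d
    # Tail of the curve from the last reset up to the horizon.
    area += (horizon - t_prev) * (t_prev - z) + 0.5 * (horizon - t_prev) ** 2
    return area / horizon
```

For a single update generated at $a_1=1$ and delivered at $d_1=2$, the age curve is $t$ on $[0,2)$ and $t-1$ on $[2,3]$, giving an average of $3.5/3$ over a horizon of $3$.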
Lastly, we would like to dynamically adapt the rate $\lambda$ to nonstationarities in the network. \begin{figure}[!t] \begin{center} \includegraphics[width=0.47\textwidth]{Figs_cameraReady/timing_horizontal} \caption{Timeline of an ACP connection. The boxed \textbf{I} marks the beginning of the initialization phase of ACP. The boxed \textbf{C} denotes the ACP algorithm (Algorithm~\ref{alg:acp}) executed when a new control epoch begins. The boxed \textbf{U} is executed when an ACK is received and updates $\overline{Z},\overline{\text{RTT}}, \text{and } \mathcal{T}$.} \label{fig:acpConnectionTimeline} \end{center} \vspace{-0.2in} \end{figure} \begin{figure}[!b] \begin{center} \includegraphics[width=0.45\textwidth]{Figs/Backlog.pdf} \caption{A sample function of the backlog process $B(t)$. Updates are indexed $1,2,\ldots$. The timestamp of update $i$ is $a_i$. The time at which update $i$ is received by the monitor is $d_i$. Since update $3$ is received before $2$, the backlog is reduced by $2$ packets at $d_3$. Also, there is no change in $B(t)$ at $d_2 > d_3$.} \label{fig:backlogSampleFunction} \end{center} \end{figure} \section{Inter-Continental Updates} \label{sec:realWorldResults} We will show results for an experiment in which $10$ sources sent their updates to monitors on a server in another continent. The sources, as described earlier, gained access to the Internet via an enterprise access point. The results were obtained by running ACP and \emph{Lazy} alternately for $10$ runs. Each run was restricted to $1000$ update packets so that on average ACP and \emph{Lazy} experienced similar network conditions. We ran ACP for $\kappa=1$ and $\kappa=2$. Using traceroute, we observed that the number of hops was large, about $30$, during these experiments. Figure~\ref{fig:CDFFromRealWorld} summarizes the comparison of ACP and \emph{Lazy}.
Figure~\ref{fig:cdfAgeRealWorld} shows the cumulative distribution functions (CDF) of the average age obtained by each source when using ACP (using $\kappa = 1$) and the corresponding CDF(s) when using \emph{Lazy}. As is seen in the figure, ACP outperforms \emph{Lazy} and obtains a median improvement of about $100$ msec in age ($\approx 33\%$ over the average age obtained using \emph{Lazy}). This is over an end-to-end connection with a median RTT of about $185$ msec. Further, observe that the age CDF(s) for all the sources when using either ACP or \emph{Lazy} are similar. This hints at sources sharing the end-to-end connection in a fair manner. Also, observe from Figure~\ref{fig:cdfRTTRealWorld} that the median RTT(s) for both ACP and \emph{Lazy} are almost the same. This signifies that ACP maintains a backlog of update packets in a manner such that the packets don't suffer additional delays because multiple packets of the source are traversing the network at the same time. Lastly, consider a comparison of the CDF of average backlogs shown in Figure~\ref{fig:cdfBacklogRealWorld}. ACP exploits very well the fast end-to-end connection with multiple hops and achieves a very high median average backlog of about $30$ when using a step size of $1$ and a much higher backlog when using a step size of $2$. We observe that the step size $\kappa=1$ worked best age-wise. \emph{Lazy}, however, achieves a backlog of about $1$ (not shown). We end by showing snippets of ACP in action over the end-to-end path. Figures~\ref{fig:acpBacklogEvol} and~\ref{fig:acpAgeEvol} show the time evolution of average backlog and average age, as calculated at control epochs. ACP increases backlog in small steps (see Figure~\ref{fig:acpBacklogEvol}, $14$ seconds onward) over a large range followed by a rapid decrease in backlog. The increase coincides with a reduction in average age, and the rapid decrease is initiated once age increases.
Also, observe that age decreases very slowly (dense regions of points low on the age curve around the $15$ second mark) with an increase in backlog just before it increases rapidly. The region of slow decrease is around where, ideally, the backlog must be set to keep age to a minimum. \section{Real World Setup and Results} We implemented ACP below the application layer and used UDP datagrams for communication. We used an Ubuntu 16.04 LTS virtual machine on an IIIT-Delhi server as the receiver and an Ubuntu 16.04 LTS machine located outside the IIIT-Delhi campus as the client. For communication between these two machines, we established an SSL-VPN connection between the hosts. \section{Related Work} \label{sec:related} The need for timely updates arises in many fields, including, for example, vehicular updating \cite{kaul_minimizing_2011}, real time databases \cite{xiong_deriving_1999}, data warehousing \cite{karakasidis_etl_2005}, and web caching \cite{yu_scalable_1999,Cho-GM-TODS2003effective}. For sources sending updates to monitors, there has been growing interest in the age of information (AoI) metric that was first analyzed for elementary queues in \cite{KaulYatesGruteser-Infocom2012}. To evaluate AoI for a single source sending updates through a network cloud \cite{KamKompellaEphremides2013ISIT} or through an M/M/k server \cite{KamKompellaEphremides2014ISIT,KamKNE2016IT}, out-of-order packet delivery was the key analytical challenge. Packet deadlines are found to improve AoI in \cite{KamKompellaNWE-ISIT2016}. AoI in the presence of errors is evaluated in \cite{ChenHuang-ISIT2016}. Distributional properties of the age process have also been analyzed for the D/G/1 queue under first-come-first-served (FCFS) \cite{Champati-AG-AOI2018}, as well as single server FCFS and LCFS queues \cite{Inoue-MTT-arxiv2017}. There have also been studies of energy-constrained updating \cite{Elif2015ITA,Yates2015ISIT,UpdateorWait-IT2017,Nath-WY-spawc2017,FaraziKleinBrown-AOI2018,Arafa-YU-ICC2018}.
There have also been substantial efforts to evaluate and optimize age for multiple sources sharing a communication link \cite{Kadota-UBSM-Allerton2016,KaulYates-ISIT2017,Sang-LJ-globecom2017, NajmTelatar-AOI2018,JiangKrishnmachariZZN-ISIT2018}. In particular, near-optimal scheduling based on the Whittle index has been explored in \cite{Kadota-UBSM-Allerton2016,Jiang-KZN-itc2018,Hsu-ISIT2018}. When multiple sources employ wireless networks subject to interference constraints, AoI has been analyzed under a variety of link scheduling methods \cite{LuJiLi-Mobicom2018,Talak-KKM-arxiv2018}. AoI analysis for multihop networks has also received attention \cite{TalakKaramanModiano-Allerton2017}. Notably, optimality properties of a Last Generated First Served (LGFS) service when updates arrive out of order are found in \cite{Bedewy-SS-arxiv2017}. While the early work \cite{kaul_minimizing_2011} explored practical issues such as contention window sizes, the subsequent AoI literature has primarily been focused on analytically tractable simple models. Moreover, a model for the system is typically assumed to be known. In this work, our objective has been to develop end-to-end updating schemes that perform reasonably well without assuming a particular network configuration or model. This approach attempts to learn (and adapt to time variations in) the condition of the network links from source to monitor. This is similar in spirit to hybrid ARQ based updating schemes \cite{Ceran-GG-WCNC2018,NajmYatesSoljanin-ISIT2017} that learn the wireless channel. The chief difference is that hybrid ARQ occurs on the short timescale of a single update delivery while ACP learns what the network supports over many delivered updates.
\begin{figure*}[!th] \begin{center} \subfloat[]{\includegraphics[width=.33\textwidth]{MATLAB_plots_WoWMoM/Fig1a_ageUDP_TCP_retrans_skk} \label{fig:UDPvsTCPPacketError}} \subfloat[]{\includegraphics[width=.33\textwidth]{MATLAB_plots_WoWMoM/rxBytes_delay_skk} \label{fig:UDPvsTCPPacketErrorRXBuffer}} \subfloat[]{\includegraphics[width=.33\textwidth]{MATLAB_plots_WoWMoM/Fig1c_tcp_500_536B_skk} \label{fig:TCPSmallPacketSizes}} \caption{(a) Impact of packet errors on age and packet delay when using TCP and UDP (b) A time snapshot of delays suffered by update packets transmitted using TCP and the corresponding received bytes by the monitor. Packet error rate was set to $0.1$ (c) Age as a function of packet size and how packet size impacts increase of the TCP congestion window.} \label{fig:TCPEval} \end{center} \vspace{-0.2in} \end{figure*} \section{Simulation Results} \label{sec:simulationResults} \begin{figure}[!t] \begin{center} \subfloat[]{\includegraphics[width=.25\textwidth]{MATLAB_plots_WoWMoM/age_p2p6} \label{fig:age_topology_1_fast}} \subfloat[]{\includegraphics[width=.25\textwidth]{MATLAB_plots_WoWMoM/backlog_p2p} \label{fig:backlog_topology_1_fast}}\\ \vspace{-0.02\textwidth} \subfloat[]{\includegraphics[width=.25\textwidth]{MATLAB_plots_WoWMoM/rtt_p2p6} \label{fig:rtt_topology_1_fast}} \subfloat[]{\includegraphics[width=.25\textwidth]{MATLAB_plots_WoWMoM/lambda_p2p6} \label{fig:lambda_topology_1_fast}} \caption{(a) Average source age (b) Average source backlog (c) Average source RTT and (d) Update rate $\lambda$ for Lazy and ACP when all links other than wireless access are $6$ Mbps. All sources used a WiFi PHY rate of $54$ Mbps. The sources are spread over an area of $100$ m$^2$. 
The standard deviation of shadowing was set to $4$ dB.} \label{fig:ageAndBacklogFastAndSlowNets} \end{center} \vspace{-0.2in} \end{figure} Figure~\ref{fig:ageAndBacklogFastAndSlowNets} compares the average age, the source update rate $\lambda$, the RTT, and the average backlog obtained when using ACP and \emph{Lazy}. We vary the number of sources in the network from $1$ to $20$. For smaller numbers of sources, the backlog (see Figure~\ref{fig:backlog_topology_1_fast}) per source maintained by ACP is high. This is because, given the similar-rate P2P links and the higher-rate WiFi link, when using ACP, the sources attempt to have their update packets in the queues of the access points and routers in the network. On the other hand, a source using \emph{Lazy} sticks to sending just one packet every RTT on average. Thus, the average backlog per source stays similar for different numbers of sources. As the number of sources becomes large in comparison to the number of hops (six) in the network, even at an average backlog of about $1$ update per source, there is little value in a source sending more than one update per RTT. Note that there are only $6$ hops (queues) in the network. When there are five or more sources, a source sending at a rate faster than $1$ every RTT will have its updates waiting for each other to finish service. This results in ACP maintaining a backlog close to that of \emph{Lazy} when there are $5$ or more sources. Figure~\ref{fig:lambda_topology_1_fast} shows the average source rate of sending update packets. Observe that the average source rate drops in proportion to the number of sources. While the source rate is about $800$ updates/second when there is only a single source, it is about $70$ when the wireless access is shared by $20$ sources. This scaling down is further evidence of ACP adapting to the introduction of larger numbers of sources.
While a source using ACP ramps down its update rate from $800$ to $70$, \emph{Lazy} more or less sticks to the same update rate throughout. \begin{figure}[!t] \begin{center} \subfloat[]{\includegraphics[width=0.25\textwidth]{MATLAB_plots_camera/age_allTopology} \label{fig:acpOverWiFi_age400}} \hspace{-.25in} \subfloat[]{\includegraphics[width=0.25\textwidth]{MATLAB_plots_camera/retry_percentage_multi} \label{fig:acpOverWiFi_age_retry}} \caption{(a) Age and (b) retry rate as a function of number and density of sources and maximum retry limit. The vertical bars denote a region of $\pm 1$ standard deviation around the mean (marked).} \label{fig:acpOverWiFi_age} \end{center} \vspace{-0.1in} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=0.48\textwidth]{MATLAB_plots_camera/acp_adaption} \caption{ACP adapts to network changes. Blue circles show the achieved age by an ACP client over time. A UDP client of rate $0.2$ Mbps is connected to AP-1 at $200-400$ secs and $1000-1200$ secs. Another UDP client of rate $0.3$ Mbps is connected to AP-2 at $600-800$ secs and $1000-1200$ secs. A darker shade of pink signifies a larger aggregate UDP load on the network.} \label{fig:acpAdapts} \end{center} \vspace{-0.2in} \end{figure} \begin{figure*}[!t] \begin{center} \subfloat[]{\includegraphics[width=.33\textwidth]{MATLAB_plots_WoWMoM/cdf_real_skk} \label{fig:cdfAgeRealWorld}} \subfloat[]{\includegraphics[width=.33\textwidth]{MATLAB_plots_WoWMoM/rtt_real_skk} \label{fig:cdfRTTRealWorld}} \subfloat[]{\includegraphics[width=.33\textwidth]{MATLAB_plots_WoWMoM/Backlog_skk} \label{fig:cdfBacklogRealWorld}} \caption{We compare the CDF(s) of average (a) Age (b) RTT and (c) Backlog obtained over $10$ runs each of \emph{Lazy} and ACP with step size choices of $\kappa=1,2$. 
The Age CDF(s) of all the $10$ sources are shown.} \label{fig:CDFFromRealWorld} \end{center} \vspace{-0.14in} \end{figure*} \begin{figure}[!t] \begin{center} \subfloat[]{\includegraphics[width=.24\textwidth]{MATLAB_plots_WoWMoM/avgBacklog_real} \label{fig:acpBacklogEvol}} \subfloat[]{\includegraphics[width=.24\textwidth]{MATLAB_plots_WoWMoM/Age_real1} \label{fig:acpAgeEvol}} \caption{The time evolution of average backlog and age that resulted from one of the ACP sources sending updates over the Internet.} \label{fig:acpEvolution} \end{center} \vspace{-0.2in} \end{figure} This artificial constraint of very few hops combined with a single end-to-end path is removed in the real-world experiments that we present in the next section. As we will see, sources accessing a common access point will maintain high backlogs over their end-to-end connections to their monitors. The absolute improvements in average age achieved by ACP, see Figure~\ref{fig:age_topology_1_fast}, for smaller numbers of sources seem nominal but must be seen in light of the fact that the end-to-end RTT of the simulated network under light load conditions is very small (about $5$ msec as seen in Figure~\ref{fig:rtt_topology_1_fast}). ACP achieves a $21\%$ and $13\%$ reduction in age with respect to Lazy, respectively, for a single source and two sources. The only impact that changing the link rates of the P2P links had was a corresponding change in RTT and age. For example, while the average age achieved by a source using ACP in a $20$ source network with P2P link rates of $0.3$ Mbps was $\approx 6$ seconds, it was $\approx 0.25$ seconds when the P2P link rates were set to $6.0$ Mbps. The larger RTT for the former meant a smaller $\lambda$ of about $5$ updates/second/source. The backlogs, as one would expect, were similar, however.
Next consider Figure~\ref{fig:acpOverWiFi_age400} that shows the impact of the maximum allowed retries, the number of sources (varied from $5$ to $50$), and the source density (areas of $50\times 50$ m$^2$ and $20\times 20$ m$^2$), on average age. The standard deviation of shadowing was set to $12$ dB. Note that age is similar for the two simulated areas for a given setting of maximum retries. However, it is significantly larger when the max retry limit is set to $7$ in comparison to when no retries are allowed. This is especially true when the network has larger numbers of sources. Larger numbers of sources witness higher rates of retries (Figure~\ref{fig:acpOverWiFi_age_retry}, retry limit is $7$) due to a higher rate of packet decoding errors that result from collisions over the WiFi medium access shared by all sources. Retries create a two-fold problem. First, a retry may keep a fresher update from being transmitted. Second, ACP, like TCP, mistakes packet drops due to channel errors for network congestion. This causes it to unnecessarily reduce $\lambda$ in response to packet errors, which increases age. In summary, retries at the wireless access are detrimental to keeping age low. Finally, observe in Figure~\ref{fig:acpOverWiFi_age400} that the spread of ages achieved by sources is very small. In fact, we see that sources in a network achieve similar ages, and in all our simulations Jain's fairness index~\cite{jain1999throughput} was found to be close to the maximum of $1$. ACP adapts rather quickly to the introduction of other flows that congest the network. This is exemplified by Figure~\ref{fig:acpAdapts}. We introduced one to two UDP flows at different points in the network used for simulation (Figure~\ref{fig:simulationNetwork}), where all links are $1$ Mbps. ACP reduces $\lambda$ appropriately and adapts backlog to desired levels.
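The fairness observation above can be checked with Jain's index~\cite{jain1999throughput}, which for per-source averages $x_1,\ldots,x_n$ is $(\sum_i x_i)^2 / (n \sum_i x_i^2)$ and equals $1$ exactly when all sources see the same value. A one-line sketch:

```python
def jain_index(xs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2), in (0, 1]."""
    s = sum(xs)
    ss = sum(x * x for x in xs)
    return s * s / (len(xs) * ss)
```

For example, four equal per-source ages give an index of $1.0$, while one source monopolizing (a single nonzero entry among four) gives $1/4$.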
\section{Age Sensitive Update Traffic over TCP} \label{sec:TCP} Before we delve into the problem of end-to-end age control, we demonstrate why TCP as a choice of transport protocol is unsuitable for age sensitive traffic. Specifically, we show that the congestion control mechanism of TCP, together with its goal of guaranteed and ordered delivery of packets, can lead to a very high age at the monitor, in comparison to when UDP is used, for a wide range of utilization of the network by the traffic generated by the source, and not just when the utilization is high. We simulated a simple network consisting of a source that sends measurement updates to a monitor via a single Internet Protocol (IP) router. The source node has a bidirectional point-to-point (P2P) link of rate $1$ Mbps to the router. A similar link connects the router to the monitor. The source uses a TCP client to connect to a TCP server at the monitor and sends its update packets over the resulting TCP connection. We will also compare the obtained age with that obtained when UDP is used instead. \emph{Retransmissions and In-order Delivery:} Figure~\ref{fig:UDPvsTCPPacketError} illustrates the impact of packet errors on TCP. A packet was dropped independently of other packets with probability $0.1$. The figure compares the average age at the monitor and the average update packet delay, which is the time elapsed between the generation of a packet at the source and its delivery at the monitor, when using TCP and UDP. On using TCP, the time average age achieves a minimum value of $0.18$ seconds when the source utilizes a fraction $0.2$ of the available $1$ Mbps to send update packets. This is clearly much larger than the minimum age of $\approx 0.01$ seconds at a utilization of $\approx 0.8$ when UDP is used. The large minimum age when using TCP is explained by the way TCP guarantees in-order packet delivery to the receiving application (monitor).
It causes fresher updates that have arrived out-of-order at the TCP receiver to wait for older updates that have not yet been received, for example, because of packet losses in the network. This can be seen in Figure~\ref{fig:UDPvsTCPPacketErrorRXBuffer}, which shows how large measured packet delays coincide with a spike in the number of bytes received by the monitor application. The large delay is that of a received packet that had to undergo a TCP retransmission. The corresponding spike in received bytes, which is preceded by a pause, is because bytes with fresher information received earlier but out of order are held by the TCP receiver till the older packet is received post retransmission. Unlike TCP, UDP ignores dropped packets and delivers packets to applications as soon as they are received. This makes it desirable for age sensitive applications. As we will see later, ACP uses UDP to provide update packets with end-to-end transport over IP. \emph{TCP Congestion Control and Small Packets:} Next, we describe the impact of small packets on the TCP congestion control algorithm and the consequent impact on age. This is especially relevant to a source sending measurement updates as the resulting packets may have small application payloads. Note that no packet errors were introduced in the simulations used to make the following observations. Observe in the upper plot of Figure~\ref{fig:TCPSmallPacketSizes} that the $500$ byte packet payloads experience higher age at the monitor than the larger $536$ byte packets. The reason is explained by the impact of packet size on how quickly the size of the TCP congestion window (CWND) increases. The congestion window size doesn't increase till a full sender maximum segment size (SMSS) worth of bytes is acknowledged. TCP does this to optimize the overheads associated with sending payload. Packets with fewer bytes may thus require multiple TCP ACK(s) to be received for the congestion window to increase.
This explains the slower increase in the size of the congestion window for $500$ byte payloads seen in Figure~\ref{fig:TCPSmallPacketSizes}. This slower growth causes smaller packets to wait longer in the TCP send buffer before they are sent out by the TCP sender, which explains their larger age in Figure~\ref{fig:TCPSmallPacketSizes}.
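The byte-counting behavior described above can be sketched as a toy model (in the spirit of RFC 3465's appropriate byte counting; the SMSS value, ACK counts, and growth rule here are illustrative, not taken from the simulated TCP stack):

```python
def cwnd_after_acks(payload, n_acks, smss=536, cwnd=536):
    """Grow the congestion window by one SMSS only once a full SMSS
    worth of payload bytes has been ACKed, so sub-SMSS packets need
    several ACKs per window increase."""
    acked = 0  # bytes acknowledged since the last window increase
    for _ in range(n_acks):
        acked += payload
        while acked >= smss:
            cwnd += smss
            acked -= smss
    return cwnd

# 536-byte payloads grow the window on every one of 10 ACKs, while
# 500-byte payloads trigger only 9 increases over the same 10 ACKs.
small = cwnd_after_acks(500, 10)
full = cwnd_after_acks(536, 10)
```

Under this model the $500$-byte sender's window trails the $536$-byte sender's window after any run of ACKs, mirroring the slower window growth seen in the figure.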
\section{Introduction} \renewcommand{\theequation}{1.\arabic{equation}}\setcounter{equation}{0} Cosmic inflation has achieved remarkable successes not only in solving several fundamental and conceptual problems (such as the flatness and horizon problems, and exotic relics) of the standard big bang cosmology, but more importantly, it provides a causal mechanism for generating the large-scale structure of the Universe and the cosmic microwave background (CMB) \cite{guth_inflationary_1981, starobinsky_new_1980, sato_firstorder_1981} (see Ref.~\cite{baumann_tasi_2009} for an updated review). All these predictions match cosmological observations with spectacular precision \cite{komatsu_sevenyear_2011, planckcollaboration_planck_2018, planck_collaboration_planck_2015-4, planck_collaboration_planck_2014-1}. These observations have provided strong evidence for a nearly scale-invariant power spectrum of adiabatic perturbations and support the inflationary paradigm with a single scalar field, which gives rise to a slow-roll inflationary phase. During this slow-roll phase, the energy density of the matter field remains nearly constant and the spacetime behaves like a quasi-de Sitter spacetime. While there are many approaches to realizing inflation with a single scalar field, the EFT of inflation provides a general framework for describing the most generic single scalar field theory and the associated fluctuations in a quasi-de Sitter background \cite{cheung_effective_2008, weinberg_effective_2008}. In this framework, the scalar field provides a clock that breaks time diffeomorphism invariance but preserves the spatial one. This allows one to construct the action of the theory around the quasi-de Sitter background in terms of spatial diffeomorphism invariants and study the effects of different terms.
This is very similar to the case of the Ho\v{r}ava-Lifshitz (HL) theory of quantum gravity \cite{horava_quantum_2009, horava_general_2010, zhu_symmetry_2011, zhu_general_2012, lin_postnewtonian_2014}, in which the symmetry of the theory is broken from general covariance down to the foliation-preserving diffeomorphisms. With this property, the action of the HL theory of quantum gravity has to be constructed in terms of 3-dimensional spatial diffeomorphism invariants. By including high-order spatial derivative operators (up to sixth order), but excluding high-order temporal derivative operators, the HL theory of gravity becomes power-counting renormalizable. Such remarkable features have attracted a lot of attention, and the observational effects of high-order operators on inflationary perturbation spectra have been extensively studied \cite{huang_primordial_2013, zhu_inflation_2013, zhu_effects_2013, wang_polarizing_2012} (see \cite{wang_horava_2017} for an updated review). Considering the similarity between the two theories based on the same symmetry, it is natural to study these high-order derivative terms in the framework of the EFT of inflation. These new terms provide an efficient way of parametrizing unknown high energy physics effects on the low energy scale, and produce various inflationary models \cite{tong_effective_2017, Gwyn:2012mw, Castillo:2013sfa, Hetz:2016ics, Gwyn:2014doa, alishahiha_dbi_2004, shandera_observing_2006, gong_higher_2015, arkani-hamed_ghost_2004}, such as the inflation models in HL theory mentioned above, DBI inflation \cite{alishahiha_dbi_2004, shandera_observing_2006}, and ghost inflation \cite{arkani-hamed_ghost_2004} (for the EFT of the bouncing universe, see \cite{cai_effective_2017, cai_higher_2017}). Recently, the EFT of inflation has been extended by adding higher spatial derivative terms (up to fourth order) \cite{ashoorioon_extended_2018}.
With this extension, the usual linear dispersion relation associated with the propagation of the inflationary scalar perturbations is changed to a nonlinear one. The presence of the high-order operators also sets a new characteristic energy scale in the extended theory: above this scale the high-order operators dominate, while below it the usual linear term does. An important question now is whether the new high-order derivative operators in the extended EFT of inflation can leave any observational effects. An essential step in addressing this issue is to investigate the cosmological perturbations in the extended EFT of inflation and calculate the corresponding inflationary observables by evolving the perturbation modes from the high energy regime, where the higher-order terms dominate, until the end of the slow-roll inflation. Such considerations have attracted a lot of attention recently \cite{ashoorioon_extended_2018, ashoorioon_getting_2017, ashoorioon_nonunitary_2018}. In particular, the impact of the resulting nonlinear dispersion relation on the primordial perturbation spectrum has been studied extensively (see \cite{ashoorioon_effects_2011, ashoorioon_note_2011, zhu_inflationary_2014, zhu_gravitational_2014-1, wu_CTP, zhu_quantum_2014, zhu_highorder_2016} and references therein). One of the effects is that the cosmological perturbations can experience a period of non-adiabatic evolution in the high energy regime, and as a result, the perturbations are no longer in the adiabatic Bunch-Davies state. These excited states in turn lead to particle production during inflation and modify the primordial perturbation spectrum. According to quantum field theory in curved spacetime, particle production can arise from non-adiabatic evolution of the associated field modes \cite{winitzki_cosmological_2005}.
Indeed, this is exactly what occurs for the cosmological perturbations in the extended EFT of inflation \cite{ashoorioon_extended_2018, ashoorioon_getting_2017, ashoorioon_nonunitary_2018}. Such non-adiabatic evolution of the primordial perturbations also occurs during the super-inflationary phase right after the quantum bounce in loop quantum cosmology; see, for example, \cite{wu_nonadiabatic_2018, zhu_primordial_2018a, zhu_preinflationary_2017, zhu_universal_2017b}. However, when the adiabatic condition is violated, it is in general impossible to find exact solutions and the corresponding power spectra for the cosmological perturbations, and thus one has to resort to approximate methods. Recently, we developed such a method, {\em the uniform asymptotic approximation method} \cite{zhu_constructing_2014, zhu_inflationary_2014, zhu_quantum_2014}, to calculate precisely the quantum gravitational effects on the primordial perturbation spectra. The robustness of this method has been verified in calculations of the primordial spectra in k-inflation \cite{zhu_power_2014, Wu:2017joj, martin_kinflationary_2013, ringeval_diracborninfeld_2010}, inflation with nonlinear dispersion relations \cite{zhu_inflationary_2014, zhu_quantum_2014, zhu_highorder_2016}, quantum gravitational effects in loop quantum cosmology \cite{zhu_scalar_2015, zhu_detecting_2015, zhu_inflationary_2016}, the parametric resonance during inflation and reheating \cite{Zhu:2018smk}, and applications to quantum mechanics \cite{Zhu:2019bwj}. We note here that this method was first applied to inflationary cosmology in the framework of GR in \cite{habib_inflationary_2002, habib_inflationary_2005, habib_characterizing_2004}, and we have since made various extensions so that it can be applied to other theories of gravity, including ones with nonlinear dispersion relations \cite{zhu_inflationary_2014, zhu_highorder_2016}.
The main purpose of the present paper is to use this method to derive the inflationary observables of slow-roll inflation in the framework of the extended EFT. By analytically solving the equation of motion for the scalar perturbations, we concentrate on how the high-order operators affect the evolution of the perturbation modes, and we provide general expressions for the perturbation spectrum at the end of the slow-roll inflation. The main properties of the high-order operators are discussed in detail. These expressions represent a significant improvement over previous results in the literature. We organize the rest of the paper as follows. In Sec.~II, we provide a brief introduction to the extended EFT of inflation and the equations of motion for both the cosmological scalar and tensor perturbations. In Sec.~III, we first discuss how the adiabatic condition is violated due to the introduction of the high-order operators in the extended EFT of inflation, and then construct analytical solutions for the scalar perturbation modes by using the uniform asymptotic approximation method. With the analytical solution we derive the general expression of the corresponding perturbation spectrum. In Sec.~IV, we study the main features of both the particle production rate during inflation and the primordial perturbation spectrum. Our main conclusions and discussions are presented in Sec.~V. The strong coupling issue in the extended EFT is discussed in Appendix A. \section{Extended effective field theory of inflation} \renewcommand{\theequation}{2.\arabic{equation}}\setcounter{equation}{0} In this section, we present a brief review of the extended EFT of inflation with high-order derivative terms \cite{ashoorioon_extended_2018}. In general, the EFT of inflation provides a framework for describing the most general single scalar field on a quasi-de Sitter background \cite{cheung_effective_2008, weinberg_effective_2008}.
Since the Friedmann-Robertson-Walker (FRW) background metric provides a preferred time foliation, we can in general write the action of a theory around the FRW background in terms of only the 3-dimensional spatial diffeomorphism invariants. Then, it can be shown that the basic building blocks of this construction include scalars like $g^{00}$, a pure function of time $c(t)$, and the extrinsic curvature tensor $K^{\mu\nu}$ of the constant-time hypersurfaces. Using these blocks, one can show that the action of the EFT of inflation around a flat FRW background reads \begin{widetext} \begin{eqnarray}\label{Seff} S_{\rm eft}&=&M_{\rm Pl}^2 \int d^4 x \sqrt{-g} \Bigg\{\frac{R}{2}+\dot H g^{00} - (3H^2+\dot H) + \frac{M_2^4}{2 M_{\rm Pl}^2}(g^{00}+1)^2 + \frac{\bar M_1^3}{2 M_{\rm Pl}^2} (g^{00}+1)\delta K^\mu_\mu \nonumber\\ && ~~~~~~~~~~~~~~~~~~~~~~ - \frac{\bar M_2^2}{2 M_{\rm Pl}^2} (\delta K_\mu^\mu)^2- \frac{\bar M_3^2}{2M_{\rm Pl}^2} \delta K^\mu_\nu \delta K^\nu_\mu\Bigg\}, \end{eqnarray} \end{widetext} where $\delta K_{\mu\nu}$ denotes the perturbation of $K_{\mu\nu}$ about the flat FRW background. The above action can be extended by including high-order derivative terms. In this paper, we follow the extension proposed in \cite{ashoorioon_extended_2018}, which includes only operators with spatial derivatives up to the fourth order.
The action for these additional terms is given by \begin{widetext} \begin{eqnarray}\label{DeltaS} \Delta S &=& \int d^4 x \sqrt{-g}\Bigg\{ \frac{\bar M_4}{2}\nabla^\mu g^{00} \nabla^\nu \delta K_{\mu\nu} -\frac{\delta_1}{2} (\nabla_{\mu} \delta K^{\nu\gamma})(\nabla^{\mu} \delta K_{\nu\gamma}) - \frac{\delta_2}{2}(\nabla_{\mu} \delta K^{\nu}_{\nu})^2 \nonumber\\ &&~~~~~~~~ ~~~~~~~~ - \frac{\delta_3}{2}(\nabla_{\mu} \delta K^{\mu}_{\nu})(\nabla_{\gamma} \delta K^{\gamma \nu}) - \frac{\delta_4}{2} \nabla^\mu \delta K_{\nu\mu} \nabla^\nu \delta K^{\sigma}_{\sigma}\Bigg\}, \end{eqnarray} \end{widetext} where $\bar M_4$ and $\delta_i \; (i=1,2,3,4)$ are constants. The first term (the $\bar M_4$ term) contains three derivatives and thus breaks the time-reversal symmetry. The other terms (the $\delta_i$ terms) contain fourth-order derivatives and lead to sixth-order corrections to the standard linear dispersion relation of the scalar perturbations. We would like to mention that, in writing the above action, one requires the perturbation of the inflaton field $\phi$ to vanish, i.e., $\delta \phi=0$. This is achieved by considering the linear transformation, \begin{eqnarray} \tilde t = t+\xi^0(t,x^i), \;\; \delta \tilde \phi = \delta \phi + \xi^0(t, x^i) \dot \phi_0(t), \end{eqnarray} which leads to a particular gauge (the unitary gauge) with $\xi^0(t,x^i)= - \delta\phi/\dot \phi_0$, in which there are no inflaton perturbations. Obviously, the actions of equations (\ref{Seff}) and (\ref{DeltaS}), which are written in the unitary gauge, are not invariant under time diffeomorphisms. In order to restore this symmetry, one can introduce a Goldstone mode $\pi(t,x^i)$ and require it to transform as $\pi(t,x^i) \to \pi(t,x^i)- \xi^0(t,x^i)$. Consequently, the perturbation of the inflaton field is no longer required to vanish, and it is related to $\pi(t,x^i)$ via $\delta \phi = \dot \phi_0 \pi$.
On the other hand, one must be careful when dealing with higher derivative operators in the theory. The main concern is that they in general produce time derivatives higher than second order in the equations of motion, and according to Ostrogradski's theorem, a system with higher order time derivatives is usually not free of ghosts. For this reason, as analyzed in \cite{ashoorioon_extended_2018}, one has to impose the condition \begin{eqnarray} \delta_1=0=\delta_2, \end{eqnarray} in order to avoid the presence of high-order time derivatives. In this paper, we adopt this condition and disregard the $\delta_1$ and $\delta_2$ terms in the above action. \subsection{Tensor perturbations} For the tensor perturbations, the perturbed spacetime is set to \begin{eqnarray} g_{ij}=a^2 (\delta_{ij}+h_{ij}), \end{eqnarray} where $h_{ij}$ represents the transverse and traceless tensor perturbations, satisfying the conditions \begin{eqnarray} h^{i}_{i}=0=\partial^i h_{ij}. \end{eqnarray} Then, expanding the total action $S_{\rm tot}=S_{\rm eft}+ \Delta S$ up to the second order in $h_{ij}$, we find \begin{eqnarray} S^{2}_{h} = \frac{M_{\rm Pl}^2}{8} \int dt d^3x a^3 \left(c_t^{-2}\partial_t h_{ij} \partial_t h^{ij}-a^{-2} \partial_k h_{ij} \partial^k h^{ij}\right),\nonumber\\ \end{eqnarray} where the effective sound speed $c_t$ of the tensor perturbations is given by \begin{eqnarray} c_t^2 = \left(1-\frac{\bar M_3^2}{M_{\rm Pl}^2}\right)^{-1}. \end{eqnarray} In order to avoid superluminal propagation of the tensor modes, we require \begin{eqnarray} \bar M_3^2 <0. \end{eqnarray} Then, the variation of the total action with respect to $h_{ij}$ leads to the equation of motion \begin{eqnarray} \frac{d^2\mu_k^{(t)}(\eta)}{d\eta^2} + \left(c_t^2 k^2- \frac{a''}{a}\right) \mu_k^{(t)}(\eta)=0, \end{eqnarray} where $d\eta =dt/a$ is the conformal time and $\mu_k^{(t)}(\eta)\equiv a M_{\rm Pl} h_k/\sqrt{2}$, with $h_k$ denoting the Fourier modes of the tensor perturbations.
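As a quick numerical check of this equation in the exact de Sitter limit (where $a=-1/(H\eta)$, so $a''/a=2/\eta^2$ and the positive-frequency solution is a Hankel function of order $3/2$), the following sketch verifies that the mode approaches the Bunch-Davies amplitude $1/\sqrt{2c_t k}$ deep inside the horizon. The helper names and the values of $k$ and $c_t$ are ours and purely illustrative:

```python
import cmath
import math

def hankel1_3half(x):
    """Closed form of H^(1)_{3/2}(x) for real x > 0 (from the standard recurrence)."""
    return -math.sqrt(2.0 / (math.pi * x)) * cmath.exp(1j * x) * (1.0 + 1j / x)

def mu_t(eta, k, c_t):
    """Positive-frequency tensor mode in exact de Sitter (nu_t = 3/2, alpha_k = 1, beta_k = 0)."""
    x = -c_t * k * eta                      # argument of the Hankel function; eta < 0
    return 0.5 * math.sqrt(-math.pi * eta) * hankel1_3half(x)

# Deep inside the horizon (-c_t k eta >> 1) the amplitude approaches 1/sqrt(2 c_t k)
k, c_t = 1.0, 0.9
bd_amplitude = 1.0 / math.sqrt(2.0 * c_t * k)
```

On super-horizon scales $|\mu_k^{(t)}|$ grows like $1/|\eta|$, so $h_k=\sqrt{2}\,\mu_k^{(t)}/(aM_{\rm Pl})$ freezes out, as expected.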
In the slow-roll inflation, if we treat all the slow-roll quantities approximately as constants, then the equation of motion for the tensor perturbations can be solved analytically, the solution being a linear combination of Hankel functions, \begin{eqnarray} \mu_k^{(t)}(\eta) \simeq \frac{\sqrt{-\pi \eta}}{2} \left[\alpha_k H^{(1)}_{\nu_t} (-c_t k\eta) + \beta_k H^{(2)}_{\nu_t} (-c_t k\eta)\right],\nonumber\\ \end{eqnarray} where $H_{\nu_t}^{(1)}(-c_t k \eta)$ and $H_{\nu_t}^{(2)}(-c_t k \eta)$ denote the Hankel functions of the first and second kind, respectively, and $\nu_t$ is defined as \begin{eqnarray} \nu_t^2 \equiv \eta^2 \frac{a''}{a}+\frac{1}{4}. \end{eqnarray} The coefficients $\alpha_k$ and $\beta_k$ are two integration constants. In order to fix them, we impose the Bunch-Davies (BD) vacuum state at the initial time, \begin{eqnarray} \mu_k^{(t)}(\eta) = \frac{1}{\sqrt{2 kc_{t}}} e^{- ic_t k \eta}, \end{eqnarray} which leads to \begin{eqnarray} \alpha_k =1,\;\; \beta_k=0. \end{eqnarray} Then, the tensor perturbation spectrum is calculated at the end of inflation, i.e., $\eta \to 0^-$. By using the asymptotic form of the Hankel functions as $\eta\to 0^-$, we obtain \begin{eqnarray} \mathcal{P}_h &\equiv& \frac{k^3}{\pi^2 M_{\rm Pl}^2}\left|\frac{\mu_k^{(t)}(\eta)}{a(\eta)}\right|^2 \nonumber\\ &=&\frac{k^2}{4c_t\pi^3}\frac{1}{a^2(\eta)}\Gamma^2(\nu_t) \left(-\frac{c_tk\eta}{2}\right)^{1-2\nu_t} .\nonumber\\ \end{eqnarray} \subsection{Scalar perturbations} As mentioned above, the Goldstone mode $\pi(t,x^i)$ can be introduced to restore the time diffeomorphism invariance of the action; it also describes the scalar perturbations around the flat FRW background. Thus, in order to study the scalar perturbations, one can transform the action in (\ref{Seff}) and (\ref{DeltaS}) from the unitary gauge to the $\pi$-gauge by evaluating the action explicitly for $\pi$.
Following this procedure, we find, \begin{eqnarray}\label{quadra_eff} S^\pi &=&S_{\rm eff}^\pi+\Delta S^\pi = \int d^4x \sqrt{-g} (\mathcal{L}_{\rm eff}+\mathcal{L}_{\Delta S}), \end{eqnarray} where \begin{eqnarray} \mathcal{L}_{\rm eff} &=& M^{2}_{\rm Pl}\dot{H}(\partial_{\mu}\pi)^{2}+2M^{4}_{2}\dot{\pi}^{2}-\bar{M}^{3}_{1}H \left(3\dot{\pi}^{2}-\frac{(\partial_{i}\pi)^{2}}{2a^{2}}\right) \nonumber\\ &&-\frac{\bar{M}^{2}_{2}}{2}\left(9H^{2}\dot{\pi}^{2}-3H^{2}\frac{(\partial_{i}\pi)^{2}}{a^{2}}+\frac{(\partial^{2}_{i}\pi)^{2}}{a^{4}}\right) \nonumber\\ &&-\frac{\bar{M}^{2}_{3}}{2} \left(3H^{2}\dot{\pi}^{2}-H^{2}\frac{(\partial_{i}\pi)^{2}}{a^{2}}+\frac{(\partial^{2}_{j}\pi)^{2}}{a^{4}}\right), \end{eqnarray} and \begin{eqnarray} \mathcal{L}_{\Delta S} &=& \frac{\bar{M}_{4}}{2}\left(\frac{k^{4}H\pi^{2}}{a^{4}}+\frac{k^{2}H^{3}\pi^{2}}{a^{2}}-9H^{3}\dot{\pi}^{2}\right) \nonumber\\ &&-\frac{1}{2}\delta_{3}\left(\frac{k^{6}\pi^{2}}{a^{6}}+\frac{3H^{2}k^{4}\pi^{2}}{a^{4}}+\frac{H^{2}k^{2}\dot{\pi}^{2}}{a^{2}}-9H^{4}\dot{\pi}^{2}\right) \nonumber\\ &&-\frac{1}{2}\delta_{4} \Bigg(\frac{k^{6}\pi^{2}}{a^{6}}+\frac{H^{2}k^{4}\pi^{2}}{2a^{4}}+\frac{9H^{4}k^{2}\pi^{2}}{2a^{2}}+\frac{3H^{2}k^{2}\dot{\pi}^{2}}{a^{2}} \nonumber\\ &&~~~~~~~~~ +\frac{27}{2}H^{4}\dot{\pi}^{2}\Bigg). \end{eqnarray} Note that in the above equations only the quadratic terms in the action are included, since these are the most relevant parts for our discussions about the primordial power spectrum. 
Variation of the above action $S^\pi$ with respect to $\pi$ yields the equation of motion for the Goldstone mode $\pi$, \begin{eqnarray} A_1 \ddot \pi_k + B_1 \dot \pi_k +\left(F_1 \frac{k^2}{a^2}+D_1 \frac{k^4}{a^4}+C_1 \frac{k^6}{a^6}\right)\pi_k=0,\nonumber\\ \end{eqnarray} where \begin{eqnarray} A_1 &=& -2M^{2}_{\rm Pl}\dot{H}+4M^{4}_{2}-6\bar{M}^{3}_{1}H-9H^{2}\bar{M}^{2}_{2} \nonumber\\ &&-3H^{2}\bar{M}^{2}_{3}+2H^{4}F_{0}(k,\tau), \label{A1}\\ B_1 &=&-6\bar{M}^{3}_{1}\dot{H}-18\dot{H}H\bar{M}^{2}_{2}-6\dot{H}H\bar{M}^{2}_{3} +3HC_{0}\nonumber\\ &&+H^{5}\left[6F_{0}(k,\tau)+\frac{2k^{2}}{a^{2}H^{2}}(\delta_{3}+3\delta_{4})\right], \\ C_1 &=&\delta_{3}+\delta_{4}, \\ D_1 &=&\bar{M}^{2}_{2}+\bar{M}^{2}_{3}+H^{2}\frac{\delta_{4}}{2}+3H^{2}\delta_{3}-\bar{M}_{4}H, \\ F_1 &=& -2M^{2}_{\rm Pl}\dot{H}-\bar{M}^{3}_{1}H-3H^{2}\bar{M}^{2}_{2}-\bar{M}^{2}_{3}H^{2} \nonumber\\ && +3H^{4}\left(\delta_{3}+\frac{3}{2}\delta_{4} \right)-\bar{M}_{4}H^{3}, \end{eqnarray} and \begin{eqnarray} F_{0}(k,\tau)\equiv\frac{9}{2}\delta_{3}-\frac{27}{4}\delta_{4}-\frac{k^{2}}{2a^{2}H^{2}}(\delta_{3}+3\delta_{4})-\frac{9}{2H}\bar{M}_{4}. \nonumber\\ \end{eqnarray} In order to simplify the above equation, we assume $\delta_3=-3\delta_4$ \footnote{The case without this condition has been studied recently in \cite{ashoorioon_nonunitary_2018}.} and define $u_k=a \pi_k$ \cite{ashoorioon_extended_2018a}; we then obtain \begin{eqnarray} \label{eom} u''_k + \left( \omega_k^2(\eta)-\frac{z''}{z}\right)u_k=0,\nonumber\\ \end{eqnarray} where \begin{eqnarray}\label{nonlinear} \omega_k^2(\eta) = c_s^2 k^2 \left(1+\frac{D_1}{G_1 c_s^2} \frac{k^2}{a^2} + \frac{C_1}{G_1 c_s^2} \frac{k^4}{a^4}\right), \end{eqnarray} and \begin{eqnarray} c_s^2 &\equiv & \frac{F_{1}}{G_{1}}, \label{cs}\\ G_{1}&\equiv & -2M^{2}_{\rm Pl}\dot{H}+4M^{4}_{2}-6\bar{M}^{3}_{1}H-9H^{2}\bar{M}^{2}_{2} \nonumber\\ &&-3H^{2}\bar{M}^{2}_{3}-\frac{81}{2}H^{4}\delta_{4}-9\bar{M}_{4}H^{3}.
\end{eqnarray} In the quasi-de Sitter limit, we also have \begin{eqnarray} \frac{z''}{z} \simeq \frac{2}{\eta^2}, \;\; a H \simeq - \frac{1}{\eta}. \end{eqnarray} It is worth noting that, once the high-order derivative terms in the action are taken into account, the conventional linear dispersion relation becomes nonlinear. To study their effects it is convenient to introduce a characteristic energy scale $M_*$, above which the nonlinear terms become dominant. For this purpose, we set \begin{eqnarray} \hat \alpha_0 \left(\frac{H}{M_*}\right)^2 = \frac{D_1 H^2}{G_1 c_s^4}, \label{alpha_0}\\ \hat \beta_0 \left(\frac{H}{M_*}\right)^4 = \frac{C_1}{G_1 c_s^6}, \label{beta_0} \end{eqnarray} where $\hat \alpha_0$ and $\hat \beta_0$ are two dimensionless constants in the de Sitter limit. Since the physical observables are evaluated at the time when the perturbation modes exit the Hubble radius, with an energy scale of the order of the Hubble scale $H$, in order for the high-order derivative terms to be under control one requires $H \ll M_*$, so that horizon crossing occurs in the regime where the dispersion relation takes the standard linear form. With this in mind, it is convenient to write the equation of motion for the scalar perturbations in the form \begin{eqnarray} \label{eom_1} u''_k(\eta) + c_s^2 k^2 \left(1+\hat \alpha_0 \epsilon_*^2 y^2+\hat \beta_0 \epsilon_* ^4 y^4- \frac{2}{y^2}\right)u_k(\eta)=0,\nonumber\\ \end{eqnarray} where we define $y \equiv - c_s k \eta$ and \begin{eqnarray}\label{epsilon} \epsilon_* \equiv \frac{H}{M_*} \ll 1. \end{eqnarray} On the other hand, we also require all the perturbation modes to be stable in the ultraviolet regime, which leads to the condition for healthy ultraviolet behavior \begin{eqnarray}\label{uv} \hat \beta_0 >0. \end{eqnarray} A natural question then arises: are the scalar perturbations still compatible with observations after the inclusion of the high-order operators?
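To make the role of $M_*$ concrete, the following sketch evaluates the three contributions to $\omega_k^2/(c_s^2k^2)=1+\hat\alpha_0\epsilon_*^2y^2+\hat\beta_0\epsilon_*^4y^4$; the parameter values ($\hat\alpha_0=\hat\beta_0=1$, $\epsilon_*=10^{-2}$) are illustrative only:

```python
def dispersion_terms(y, alpha0, beta0, eps):
    """The three contributions to omega_k^2/(c_s^2 k^2): linear, k^4 and k^6 pieces."""
    return (1.0, alpha0 * eps**2 * y**2, beta0 * eps**4 * y**4)

# With eps = H/M_* = 0.01 and alpha0 = beta0 = 1 (illustrative values):
near_horizon = dispersion_terms(1.0, 1.0, 1.0, 0.01)    # y ~ 1: linear term dominates
deep_uv = dispersion_terms(1.0e3, 1.0, 1.0, 0.01)       # y = 10/eps: k^6 term dominates
```

Near horizon crossing ($y\sim 1$) the linear term dominates, while for $y\gtrsim 1/\epsilon_*$ the $k^6$ term takes over, in line with the discussion above.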
It has been claimed that a theory with such high-order operators has an IR strong coupling cut-off $\Lambda^{\rm IR}_c$, which in general would make it unviable as an EFT. If this were true, it would imply that the high energy regime dominated by the $k^6$ term in (\ref{nonlinear}) is not accessible, and that all the high-order operators are suppressed by the strong coupling cut-off, contributing only negligible corrections. However, as argued in \cite{ashoorioon_extended_2018a, Baumann:2011su}, this is not always the case. One way to resolve this issue is to require that the strong coupling energy scale be greater than the energy scale of the new physics. In this way, the high-order operators dominate in the high energy regime before the scalar perturbation modes become strongly coupled, which changes dramatically the scaling behavior of the theory, so that it can be healthy in both the IR and UV limits. A similar treatment has been employed in studying the strong coupling problem in the HL theory of quantum gravity, where it plays an essential role in making the theory power-counting renormalizable \cite{horava_general_2010, zhu_symmetry_2011, zhu_general_2012}. With this treatment, the extended EFT of inflation with high-order operators can provide a controlled description of the perturbations around a quasi-de Sitter background from the low energy regime to the UV regime \cite{ashoorioon_extended_2018a}. In Appendix A, by considering cubic and quartic terms in the action as examples, we show in detail how the presence of the high-order operators can cure the strong coupling problem. In the next two sections, we shall show in detail that it is the behavior of the high-order operators in the UV regime that leads to significant effects on the evolution of the scalar perturbations and produces modifications of the primordial power spectrum.
\section{Approximate solution in the uniform asymptotic approximation} \renewcommand{\theequation}{3.\arabic{equation}}\setcounter{equation}{0} \subsection{WKB Approximation} In this section, we study the evolution of the scalar perturbations during inflation with the nonlinear dispersion relation given in the last section. An important feature of a nonlinear dispersion relation is that it can produce additional excited states for the primordial perturbations on sub-horizon scales during inflation. Before studying the generation of these excited states and their effects on the primordial perturbation spectrum in detail, we first provide a qualitative analysis by using the WKB approximation. In general, the solution of the mode function $\mu_k^{(s,t)}(\eta)$ of the equation \begin{eqnarray} \mu_k''^{(s,t)}(\eta)+ \Omega^2(\eta) \mu_k^{(s,t)}(\eta)=0, \end{eqnarray} can be approximated by the WKB solutions \begin{eqnarray}\label{eom_wkb} \mu_k^{(s, t)}(\eta) \simeq \frac{\alpha_k}{\sqrt{2 \Omega(\eta)}}e^{- i \int \Omega(\eta) d\eta} + \frac{\beta_k}{\sqrt{2 \Omega(\eta)}}e^{ i \int \Omega(\eta) d\eta},\nonumber\\ \end{eqnarray} if the WKB condition \begin{eqnarray} \label{wkb} \left|\frac{3 \Omega' {^2}}{4\Omega^4}-\frac{\Omega''}{2\Omega^3}\right|\ll 1, \end{eqnarray} is satisfied. Here the function \begin{eqnarray}\label{Omega} \Omega^2(\eta) &\equiv& \omega_k^2(\eta) - \frac{z''}{z}\nonumber\\ &=& c_s^2 k^2 \left(1+\hat \alpha_0 \epsilon_*^2 y^2+\hat \beta_0 \epsilon_* ^4 y^4- \frac{2}{y^2}\right), \end{eqnarray} and $\alpha_k$ and $\beta_k$ are the two Bogoliubov coefficients, which are determined by the initial conditions. Generally an adiabatic state is assumed \cite{baumann_tasi_2009}, \begin{eqnarray} \alpha_k=1, \; \beta_k =0. \end{eqnarray} However, in some cases the WKB condition may be violated at some stage of the evolution.
Then, the non-adiabatic evolution of the mode $\mu_k(\eta)$ will produce excited states (i.e., particle production) and eventually lead to a state with \begin{eqnarray} \alpha_k \neq 1,\; \beta_k \neq 0. \end{eqnarray} According to (\ref{wkb}), there are several situations in which the WKB condition can be violated. One case is when $\Omega^2(\eta)$ has zeros (the real turning points of Eq.~(\ref{eom_wkb}), cases (a) and (b) in Fig.~\ref{gofy_tu}) or comes extremely close to zero (complex conjugate turning points of Eq.~(\ref{eom_wkb}), case (c) in Fig.~\ref{gofy_tu}) in the interval of interest. It is simple to check that when $\Omega^2(\eta)$ vanishes, the left-hand side of the WKB condition (\ref{wkb}) diverges. While for the linear dispersion relation $\Omega^2(\eta)$ in general has only one zero (which can be identified as the time when the mode exits the Hubble radius), $\Omega^2(\eta)$ in Eq.~(\ref{Omega}) may have three zeros because of the presence of the higher order operators in the nonlinear dispersion relation. The WKB condition can then be violated at several points. In particular, when the two extra zeros are real, the WKB condition is strongly violated, while it is only weakly violated if these two zeros are complex conjugate. Another possible case that violates the WKB condition is the neighborhood of the second-order pole at $y \to 0^+$ (i.e., $\eta \to 0^-$). For the latter, it can be shown that \begin{eqnarray} \left|\frac{3 \Omega' {^2}}{4\Omega^4}-\frac{\Omega''}{2\Omega^3}\right|\to \frac{3}{8} \sim \mathcal{O}(1), \end{eqnarray} so the WKB condition is not satisfied. Recently, in order to deal with such cases, we have developed the uniform asymptotic approximation method. In the following subsections, we apply this method to the perturbation modes with the high-order operators and study their effects in detail.
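The divergence of the WKB parameter at a real turning point can be illustrated numerically. A minimal sketch (our own helper; linear dispersion in units $c_s k=1$, so that $\Omega^2=1-2/y^2$ has a single real turning point at $y=\sqrt{2}$; derivatives taken by central differences):

```python
import cmath

def wkb_parameter(Omega2, y, h=1.0e-5):
    """|3 Omega'^2/(4 Omega^4) - Omega''/(2 Omega^3)|, derivatives by central differences."""
    Om = cmath.sqrt(Omega2(y))
    Om_p = (cmath.sqrt(Omega2(y + h)) - cmath.sqrt(Omega2(y - h))) / (2.0 * h)
    Om_pp = (cmath.sqrt(Omega2(y + h)) - 2.0 * Om + cmath.sqrt(Omega2(y - h))) / h**2
    return abs(3.0 * Om_p**2 / (4.0 * Om**4) - Om_pp / (2.0 * Om**3))

# Linear dispersion in units c_s k = 1: a single real turning point at y = sqrt(2)
Omega2_linear = lambda y: 1.0 - 2.0 / y**2
```

Far from the turning point the parameter is small and the WKB solutions are reliable, while arbitrarily close to $y=\sqrt{2}$ it grows without bound.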
\subsection{Classification of turning points} To this end, let us first write the equation of motion (\ref{eom}) for $u_k$ in the standard form of the uniform asymptotic approximation \cite{olver_asymptotics_1997, olver_secondorder_1975, zhu_inflationary_2014}, \begin{eqnarray} \frac{d^2\mu_k}{dy^2} = \{g(y)+q(y)\} \mu_k, \end{eqnarray} where \begin{eqnarray}\label{GQ} g(y)+q(y)=\frac{2}{y^2}-\hat \beta_0 \epsilon_*^4 y^4- \hat \alpha_0 \epsilon_*^2 y^2-1. \end{eqnarray} According to the theory of second-order ordinary differential equations, the solution of the above equation depends on the poles and turning points of the functions $g(y)$ and $q(y)$. Analyzing the error control function associated with the uniform asymptotic approximation around the poles and turning points provides guidance on how to choose the functions $g(y)$ and $q(y)$. For the equation of motion given above, we can see that $g(y)$ and $q(y)$ in general have two poles: one located at $y=0^+$ and the other at $y=+\infty$. Using the analysis of the uniform asymptotic approximation \cite{olver_asymptotics_1997, olver_secondorder_1975, zhu_inflationary_2014}, the functions $g(y)$ and $q(y)$ can be chosen as \begin{eqnarray}\label{G} q(y)&=&-\frac{1}{4y^2},\nonumber\\ g(y)&=&\frac{9}{4y^2}-1-\hat{\alpha}_0 \epsilon^2_\ast y^2-\hat{\beta}_0\epsilon^4_\ast y^4. \end{eqnarray} Apart from the two poles at $y=0^+$ and $y=+\infty$, $g(y)$ may also have zeros in the range $y\in(0,+\infty)$, which are called turning points.
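These zeros can also be located numerically: multiplying $g(y)=0$ by $y^2$ gives the cubic equation $\hat\beta_0\epsilon_*^4x^3+\hat\alpha_0\epsilon_*^2x^2+x-9/4=0$ in $x=y^2$. A sketch using Cardano's formula in complex arithmetic (the helper names and parameter values are ours and purely illustrative):

```python
import cmath

def cubic_roots(a, b, c, d):
    """The three roots of a x^3 + b x^2 + c x + d = 0 via Cardano's formula."""
    b, c, d = b / a, c / a, d / a
    p = c - b * b / 3.0
    q = 2.0 * b**3 / 27.0 - b * c / 3.0 + d
    s = cmath.sqrt((0.5 * q) ** 2 + (p / 3.0) ** 3)
    u = (-0.5 * q + s) ** (1.0 / 3.0)
    if abs(u) < 1e-30:                      # pick the other branch if u degenerates
        u = (-0.5 * q - s) ** (1.0 / 3.0)
    w = cmath.exp(2.0j * cmath.pi / 3.0)
    return [u * w**n - p / (3.0 * u * w**n) - b / 3.0 for n in range(3)]

def g_of_y(y, alpha0, beta0, eps):
    """g(y) = 9/(4 y^2) - 1 - alpha0 eps^2 y^2 - beta0 eps^4 y^4."""
    return 9.0 / (4.0 * y**2) - 1.0 - alpha0 * eps**2 * y**2 - beta0 * eps**4 * y**4

def turning_points(alpha0, beta0, eps):
    """Zeros of g(y), obtained from the cubic in x = y^2 (principal square roots)."""
    xs = cubic_roots(beta0 * eps**4, alpha0 * eps**2, 1.0, -9.0 / 4.0)
    return [cmath.sqrt(x) for x in xs]
```

For $\hat\alpha_0=\hat\beta_0=1$ this yields one real turning point and a complex conjugate pair, consistent with the case $\Delta>0$ of the classification below.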
By solving the equation $g(y)=0$, we obtain three turning points, \begin{eqnarray} y_0&=&\left\{\frac{-\hat{\alpha}_0}{3\hat{\beta}_0\epsilon_*^2}\left[1-2\sqrt{1-Y}\cos\left(\frac{\theta}{3}\right)\right]\right\}^{1/2}, \nonumber\\ y_1&=&\left\{\frac{-\hat{\alpha}_0}{3\hat{\beta}_0\epsilon_*^2}\left[1-2\sqrt{1-Y}\cos\left(\frac{\theta+2\pi}{3}\right)\right]\right\}^{1/2},\nonumber\\ y_2&=&\left\{\frac{-\hat{\alpha}_0}{3\hat{\beta}_0\epsilon_*^2}\left[1-2\sqrt{1-Y}\cos\left(\frac{\theta+4\pi}{3}\right)\right]\right\}^{1/2},\nonumber\\ \end{eqnarray} with $Y\equiv3\hat{\beta}_0/\hat{\alpha}_0^2$ and \begin{eqnarray} \cos\theta\equiv-\left(1-\frac{3}{2}Y-\frac{27}{8}\hat{\alpha}_0Y^2\epsilon_*^2\right)^2. \end{eqnarray} Without loss of generality we assume that $0<y_0 < {\rm Re}(y_1) \leq {\rm Re}(y_2)$, where $y_0$ is assumed to be a single real turning point, while $y_1$ and $y_2$ can be real and single, real and double, or complex conjugate. In general, the nature of $y_1$ and $y_2$ is determined by \begin{eqnarray} \Delta \equiv(Y-1)^3+\left(1-\frac{3}{2}Y-\frac{27}{8}\hat{\alpha}_0Y^2\epsilon_*^2\right)^2. \end{eqnarray} When $\Delta <0$, the three turning points ($y_0$, $y_1$, and $y_2$) are all real and different. When $\Delta =0$, there is one single real turning point ($y_0$) and one double real turning point ($y_1=y_2$). When $\Delta >0$, there is a single real turning point ($y_0$) and a pair of complex conjugate turning points ($y_1^\ast=y_2$). \begin{figure} \includegraphics[totalheight=2.4in,width=3.4in,angle=0]{fg.pdf} \caption{The function $g(y)$ defined by Eq.(\ref{G}). The cases (a, b, c) correspond, respectively, to (a) three different real roots; (b) one single and one double real root; and (c) one single real root and two complex conjugate roots. The case (d) has only a single real root, with $\hat{\alpha}_0<0$. In all cases, $\hat{\beta}_0>0$ is assumed.
}\label{gofy_tu} \end{figure} \subsection{Approximate solution in the uniform asymptotic approximation} According to the discussion given above, there are two poles and three turning points. In the uniform asymptotic approximation, the approximate solution depends on the types of the turning points. Thus, in the following we discuss the solution around each turning point in detail. We first consider the single turning point $y_0$, which lies in the range $(0, {\rm Re}(y_1))$. The approximate solution around this single turning point can be expressed in terms of the Airy functions, \begin{eqnarray}\label{Ar} \mu_k(y)=a_0\left(\frac{\xi(y)}{g(y)}\right)^{1/4}{\rm Ai}(\xi)+b_0\left(\frac{\xi(y)}{g(y)}\right)^{1/4}{\rm Bi}(\xi),\nonumber\\ \end{eqnarray} where ${\rm Ai}(\xi)$ and ${\rm Bi}(\xi)$ are the Airy functions of the first and second kind, respectively, $a_0$ and $b_0$ are two integration constants, and $\xi$ is a monotonic function of $y$ given by \begin{eqnarray} \xi(y) = \begin{cases} \left(-\frac{3}{2}\int^y_{y_0}\sqrt{g(y')}dy'\right)^{2/3} ,\; & 0<y\leq y_0,\\ -\left(\frac{3}{2}\int^y_{y_0}\sqrt{-g(y')}dy'\right)^{2/3} ,\; & y_0<y\leq {\rm Re}(y_1).\\ \end{cases}\nonumber\\ \end{eqnarray} Around the turning points $y_1$ and $y_2$, the approximate solution of $\mu_k(y)$ can be expressed as \begin{eqnarray}\label{parabolic} \mu_k(y)&=&a_1\left(\frac{\zeta^2-\zeta^2_0}{-g(y)}\right)^{\frac{1}{4}}W\left(\frac{1}{2}\zeta^2_0,\sqrt{2}\zeta\right)\nonumber\\ &&+b_1\left(\frac{\zeta^2-\zeta^2_0}{-g(y)}\right)^{\frac{1}{4}}W\left(\frac{1}{2}\zeta^2_0,-\sqrt{2}\zeta\right), \end{eqnarray} where $W(\frac{1}{2}\zeta^2_0,\sqrt{2}\zeta)$ and $W(\frac{1}{2}\zeta^2_0,-\sqrt{2}\zeta)$ are the parabolic cylinder functions, $a_1$ and $b_1$ are two integration constants, and $\zeta_0^2$ is defined as \begin{eqnarray} \zeta_0^2 = \pm \frac{2}{\pi} \left|\int_{y_1}^{y_2}\sqrt{g(y)}dy\right|.
\end{eqnarray} Here $+$ and $-$ correspond to $y_{1}$ and $y_{2}$ being both real or complex conjugate, respectively; that is, $\zeta_0^2$ is positive when $y_1$ and $y_2$ are real and negative when they are complex conjugate. The variable $\zeta$ is a monotonically increasing function of $y$. When $y_1$ and $y_2$ are both real, $\zeta$ is related to $y$ via \begin{widetext} \begin{eqnarray} \begin{cases} \int_{y_1}^y\sqrt{-g(y^*)}dy^*=\frac{1}{2}\zeta\sqrt{\zeta^2-\zeta^2_0}+\frac{\zeta^2_0}{2}{\rm arcosh}\left(-\frac{\zeta}{\zeta_0}\right),\;y_0<y<y_1,\\ \int_{y_1}^y\sqrt{g(y^*)}dy^*=\frac{1}{2}\zeta\sqrt{\zeta^2_0-\zeta^2}+\frac{\zeta^2_0}{2}\arcsin\left(\frac{\zeta}{\zeta_0}\right)+\frac{\pi\zeta^2_0}{4},\;y_1<y<y_2,\\ \int_{y_2}^y\sqrt{-g(y^*)}dy^*=\frac{1}{2}\zeta\sqrt{\zeta^2-\zeta^2_0}-\frac{\zeta^2_0}{2}{\rm arcosh}\left(\frac{\zeta}{\zeta_0}\right),\;y_2<y. \end{cases} \end{eqnarray} When $y_1$ and $y_2$ are complex conjugate, we have \begin{eqnarray} \int_{Re(y_1)}^y\sqrt{-g(y^*)}dy^*=\frac{1}{2}\zeta\sqrt{\zeta^2-\zeta^2_0}-\frac{\zeta^2_0}{2}\ln\left(\frac{\zeta+\sqrt{\zeta^2-\zeta^2_0}}{|\zeta_0|}\right). \end{eqnarray} \end{widetext} With the approximate solutions around each of the turning points given above, we now need to match them together. Before doing so, we first specify the initial conditions of the perturbation modes. As we mentioned at the end of Sec. II (see Eq.~(\ref{uv})), in order to obtain a healthy ultraviolet limit, we have $\hat \beta_0 >0$.
This allows us to impose the usual adiabatic Bunch-Davies vacuum state as $y \to +\infty$, \begin{eqnarray} \lim_{y \to +\infty}\mu_k(y)&=&\frac{1}{\sqrt{2\omega_k}}e^{-i\int\omega_k d\eta}\nonumber\\ &=&\sqrt{\frac{1}{2k}}\left(\frac{1}{-g}\right)^{1/4}\exp\left(-i\int^y_{y_i}\sqrt{-g}dy\right).\nonumber\\ \end{eqnarray} Once the initial conditions are specified, we require the approximate solution (\ref{parabolic}) to satisfy them at $y \gg {\rm Re} (y_2)$, which yields \begin{eqnarray} a_1&=&2^{-3/4}k^{-1/2}\kappa^{-1/2}\left(\frac{1}{2}\zeta_0^2\right),\\ b_1&=&-i2^{-3/4}k^{-1/2}\kappa^{1/2}\left(\frac{1}{2}\zeta_0^2\right), \end{eqnarray} where $\kappa\left(\frac{1}{2}\zeta_0^2\right)$ is given by \begin{eqnarray} \kappa\left(\frac{1}{2}\zeta_0^2\right)\equiv\sqrt{1+e^{\pi\zeta^2_0}}-e^{\pi\zeta^2_0/2}. \end{eqnarray} Then we match the approximate solution (\ref{Ar}) around the single turning point $y_0$ with the approximate solution (\ref{parabolic}) around the turning point $y_1$ in their overlapping region between $y_0$ and $y_1$. This leads to \begin{eqnarray}\label{ab} a_0&=&\sqrt{\frac{\pi}{2k}} \left[\kappa^{-1}\left(\frac{1}{2}\zeta_0^2\right)\sin\mathfrak{B}-i \kappa\left(\frac{1}{2}\zeta_0^2\right)\cos\mathfrak{B} \right],\nonumber\\ b_0&=&\sqrt{\frac{\pi}{2k}} \left[\kappa^{-1}\left(\frac{1}{2}\zeta_0^2\right)\cos\mathfrak{B}+i \kappa\left(\frac{1}{2}\zeta_0^2\right)\sin\mathfrak{B}\right],\nonumber\\ \end{eqnarray} where \begin{eqnarray} \mathfrak{B}\equiv\int^{y_1}_{y_0}\sqrt{-g}dy+\phi(\zeta_0^2/2), \end{eqnarray} with $\phi(x)=\frac{x}{2} - \frac{x}{4}\ln x^2 + \frac{1}{2} {\rm ph} \Gamma(\frac{1}{2}+i x)$, where the phase of $\Gamma(\frac{1}{2}+i x)$ is zero when $x=0$ and is determined by continuity otherwise. Once the approximate solutions are matched, all the integration constants appearing in them are uniquely determined by the initial conditions.
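The function $\kappa$ obeys simple algebraic identities that control the Bogoliubov coefficients extracted from this matching: writing $E=e^{\pi\zeta_0^2/2}$, one has $\kappa^{-1}-\kappa=2E$ and hence $\frac{1}{4}(\kappa^2+\kappa^{-2}-2)=e^{\pi\zeta_0^2}$ exactly. A quick numerical sketch (standard library only; the sample values of $\zeta_0^2$ are illustrative):

```python
import math

def kappa(x):
    """kappa evaluated at x = zeta_0^2 / 2, as defined in the matching above."""
    return math.sqrt(1.0 + math.exp(2.0 * math.pi * x)) - math.exp(math.pi * x)

def bogoliubov_combo(zeta0_sq):
    """The combination (kappa^2 + kappa^{-2} - 2)/4 built from kappa(zeta_0^2/2)."""
    k = kappa(0.5 * zeta0_sq)
    return 0.25 * (k**2 + 1.0 / k**2 - 2.0)

# For any zeta_0^2 (positive for real y_1, y_2; negative for a complex
# conjugate pair) the combination equals exp(pi * zeta_0^2).
```

Since $0<\kappa<1$ always, the combination above is positive, growing exponentially with $\zeta_0^2$ when the extra turning points are real.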
Therefore, with these solutions, we are able to study the perturbation modes starting from the high energy regime until the end of the slow-roll inflation. Let us now consider some representative cases. The cases with (i) three different single real turning points and (ii) one real turning point and two complex conjugate roots are plotted in the left and right panels of Fig.~\ref{num_ana}, respectively. From these figures, we can see clearly that the exact solutions are well approximated by the analytical ones. We have also considered many other cases, and found that in all the cases the analytical approximate solution traces the exact (numerical) one very well. \begin{figure*} \centering \includegraphics[totalheight=2.4in,width=3.4in,angle=0]{ct1.pdf} \includegraphics[totalheight=2.4in,width=3.4in,angle=0]{dt1.pdf} \caption{Comparison of the analytical approximate solutions in the uniform asymptotic approximation to the numerical (exact) solutions in a de Sitter background.}\label{num_ana} \end{figure*} \section{Non-adiabatic effects on power spectrum of the scalar perturbations} \renewcommand{\theequation}{4.\arabic{equation}}\setcounter{equation}{0} As we have mentioned in the above section, the presence of the {extra turning points} leads to the violation of the adiabatic evolution of the perturbation modes. This fact has also been indicated and discussed in detail in Refs.~\cite{ashoorioon_extended_2018, ashoorioon_getting_2017, zhu_inflationary_2014, zhu_highorder_2016}. {As pointed out} in these works, the non-adiabatic evolution of the perturbation modes leads to highly populated excited states and amplifies the standard {perturbation spectrum}. Since the presence of the extra turning points is a direct consequence of the high-order operators included in the extended EFT of inflation, these non-adiabatic effects are caused directly by these operators. 
In this section, by using the analytical solution we have derived above, we discuss in detail how these effects affect both the generation of excited states and the primordial perturbation spectrum. \subsection{Generation of excited states and particle production rate} Let us first consider the non-adiabatic effects on the generation of the excited states. For this purpose, when the perturbation modes are inside the Hubble radius (between $y_0$ and $y_1$), {we find that $g(y)\simeq-\omega^2_k/k^2$ and} \begin{eqnarray} \int^y_{y_0}\sqrt{-g(y')}dy'\simeq-\int^{\eta}_{\eta_0}\omega_k(\eta')d\eta'. \end{eqnarray} {Then, using the asymptotic form of the Airy functions, we find that the approximate solution (\ref{Ar}) can be cast in the form} \begin{eqnarray} \mu_k(\eta) &\simeq& \frac{1}{\sqrt{2\omega_k}}\sqrt{\frac{k}{2\pi}}\frac{1}{i} \Big\{(ia_0-b_0)e^{ i\int^{\eta}_{\eta_0}\omega_k(\eta')d\eta'- i \frac{\pi}{4}} \nonumber\\ &&+(ia_0+b_0)e^{-i \int^{\eta}_{\eta_0}\omega_k(\eta')d\eta'- i \frac{\pi}{4}} \Big\}, \end{eqnarray} {from which} we can immediately identify the Bogoliubov coefficients of the excited modes at the subhorizon scale as \begin{eqnarray} |\beta_k|^2&=&\frac{k}{2\pi} |i a_0-b_0|^2\nonumber\\ &=&\frac{1}{4}(\kappa^2+\kappa^{-2}-2)\nonumber\\ &=&e^{\pi\zeta^2_0}. \end{eqnarray} Here $\beta_k$ is the Bogoliubov coefficient that measures the particle production rate. From the above expression, we can see that $|\beta_k|^2$ is determined by $\zeta_0^2$, for which we have the following remarks. First, the sign of $\zeta_0^2$ is sensitive to the nature of the turning points $y_1$ and $y_2$, which can be classified into several classes: \begin{itemize} \item When $y_1$ and $y_2$ are both single and real, $\zeta_0^2>0$, which implies that the particle production during the process is exponentially enhanced. As we have shown in Sec. III. B, for this case to happen, one must require the discriminant $\Delta<0$. 
This corresponds to a requirement on the parameters of the high-order operators. \item When $y_1$ and $y_2$ are real and equal, i.e., $y_1=y_2$, we have $\zeta_0^2=0$. Then, we have $|\beta_k|^2=1$. \item When $y_1$ and $y_2$ are complex conjugate, i.e., $y_1=y_2^*$, $\zeta_0^2$ is negative. This implies that the particle production is exponentially suppressed. \end{itemize} Now an important question arises for the case of the exponentially enhanced particle production rate, namely, whether or not the backreaction of the excited modes is small enough to allow inflation to last long enough. According to the analysis in \cite{lemoine_stressenergy_2001, brandenberger_backreaction_2005}, in order to avoid large backreaction, one has to impose the condition \begin{eqnarray} |\beta_k|^2\lesssim 8\pi \frac{H^2_{inf}M^2_{Pl}}{M^4_\ast}, \end{eqnarray} where $H_{inf}$ is the energy scale of inflation, and the Planck 2015 data yield the constraint $H_{\rm inf}/M_{\rm Pl} \leq 3.5 \times 10^{-5}$ \cite{planck_collaboration_planck_2015-4}. Thus, if we take $H_{\rm inf}/M_\ast \sim 2 \times 10^{-3} $, we can infer that \begin{eqnarray} |\beta_k|^2\lesssim\mathcal{O}(1). \end{eqnarray} Then{, we can obtain} \begin{eqnarray} \sqrt{1+|\beta_k|^2}-|\beta_k|\lesssim |\alpha_k+\beta_k| \lesssim|\beta_k|+\sqrt{1+|\beta_k|^2},\nonumber\\ \end{eqnarray} which leads to the constraint on $|\alpha_k+\beta_k|^2 $, that is, \begin{eqnarray} 3-2\sqrt{2}\lesssim |\alpha_k+\beta_k|^2 \lesssim3+2\sqrt{2}. \end{eqnarray} \subsection{Scalar perturbation spectrum} With the solutions obtained in the last subsection, we are now able to calculate the perturbation spectrum for the scalar perturbations at the end of the slow-roll inflation in the limit $y \to 0^+$. In this limit, the scalar perturbation is described by the approximate solution (\ref{Ar}) around $y_0$. 
This solution can also be represented as a linear combination of one growing mode and one decaying mode, in which ${\rm Bi}(\xi)$ is the growing mode and ${\rm Ai}(\xi)$ is the decaying mode. When $y \to 0^+$, only the growing mode is relevant, and then using the asymptotic form of ${\rm Bi}(\xi)$ we find \begin{eqnarray} u_k\simeq b_0\left(\frac{1}{\pi^2g(y)}\right)^{1/4}\exp\left(\int^{y_0}_y\sqrt{g(y')}dy'\right), \end{eqnarray} where $b_0$ is given by Eq.~(\ref{ab}). Then, the scalar power spectrum is given by \begin{eqnarray}\label{pw} \mathcal{P}_s&\equiv&\frac{k^3}{2\pi^2}\left|\frac{u_k(y)}{z}\right|^2\nonumber\\ &=& \mathcal{A} \left(\frac{k^2y}{9\pi^2z^2}\right)\exp{\left(2\int^{y_0}_y\sqrt{g(y')}dy'\right)}, \end{eqnarray} where \begin{eqnarray}\label{A} \mathcal{A}&\equiv&\frac{2k|b_0|^2}{\pi}\nonumber\\ &=&1+2e^{\pi\zeta_0^2}+2e^{\pi\zeta_0^2/2}\sqrt{1+e^{\pi\zeta_0^2}} \cos(2\mathfrak{B}). \end{eqnarray} Obviously, the perturbation spectrum can be modified due to the high-order operators included in the extended EFT of inflation through two quantities: the modified factor $\mathcal{A}(k)$ and the exponential integral of $\sqrt{g(y)}$ from $y_0$ to $0$. In order to see this fact clearly, let us first consider the integral with the linear dispersion relation, \begin{eqnarray} \mathcal{M}_0 = \exp{\left(2 \int^{y_0}_y \sqrt{\frac{9}{4 y'^2} -1} dy'\right)}. \end{eqnarray} Here {we have $y_0 = \frac{3}{2}$} in the de Sitter background. For the nonlinear dispersion relation, this integral {changes to} \begin{eqnarray}\label{integral_M} \mathcal{M} = \exp{\left(2 \int^{y_0}_y \sqrt{g(y')} dy'\right)}. \end{eqnarray} With the above two definitions, we can cast the perturbation {spectrum (\ref{pw}) into} the form \begin{eqnarray} \mathcal{P}_s = \mathcal{A}\times \frac{\mathcal{M}}{\mathcal{M}_0} \times \mathcal{P}^{\rm GR}_s, \end{eqnarray} where $\mathcal{P}^{\rm GR}_s $ denotes the standard nearly {scale-invariant} power-law spectrum in the framework {of} GR. 
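Two quick numerical illustrations of these definitions (sketches, not taken from the paper). First, the linear-dispersion integrand in $\mathcal{M}_0$ has an elementary antiderivative, $\int_y^{3/2}\sqrt{9/(4u^2)-1}\,du=\tfrac{3}{2}\ln\!\big(\tfrac{3/2+s}{y}\big)-s$ with $s=\sqrt{9/4-y^2}$, whose logarithmic growth as $y\to0^+$ is compensated by the prefactor $k^2y/(9\pi^2z^2)$ in (\ref{pw}). Second, for a toy nonlinear profile $g(y)=9/(4y^2)-(1+\hat\alpha_0\epsilon_*^2y^2+\hat\beta_0\epsilon_*^4y^4)$ — an assumed form chosen only to mimic $\mathcal{O}(\epsilon_*)$ corrections, not the paper's exact $g(y)$ — the ratio $\mathcal{M}/\mathcal{M}_0$ tends to unity as $\epsilon_*\to0$:

```python
import math

def closed_form(y, a=1.5):
    """int_y^{3/2} sqrt(9/(4u^2)-1) du = a*ln((a+s)/y) - s, s = sqrt(a^2-y^2)."""
    s = math.sqrt(a * a - y * y)
    return a * math.log((a + s) / y) - s

# check the antiderivative: d/dy closed_form(y) = -sqrt(9/(4y^2)-1)
for y in (0.1, 0.5, 1.0, 1.4):
    h = 1e-6
    dF = (closed_form(y + h) - closed_form(y - h)) / (2.0 * h)
    assert abs(dF + math.sqrt(9.0 / (4.0 * y * y) - 1.0)) < 1e-5
assert closed_form(1.5) == 0.0   # vanishes at the turning point y_0 = 3/2

def g(y, a, b):
    # toy nonlinear profile (an assumption, for illustration only)
    return 9.0 / (4.0 * y * y) - (1.0 + a * y * y + b * y**4)

def integral(a, b, ymin=1e-3, n=100000):
    # locate the turning point by bisection, then midpoint-rule integrate sqrt(g)
    lo, hi = 0.5, 3.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid, a, b) > 0.0 else (lo, mid)
    y0, s = 0.5 * (lo + hi), 0.0
    h = (y0 - ymin) / n
    for i in range(n):
        s += math.sqrt(max(g(ymin + (i + 0.5) * h, a, b), 0.0)) * h
    return s

I0 = integral(0.0, 0.0)
for eps in (0.05, 0.01):
    ratio = math.exp(2.0 * (integral(2.0 * eps**2, eps**4) - I0))
    assert abs(ratio - 1.0) < 5.0 * eps   # M/M_0 = 1 + O(eps_*)
print("closed form and M/M_0 -> 1 verified")
```

The shared infrared cutoff `ymin` drops out of the ratio, mirroring how the divergent parts of $\mathcal{M}$ and $\mathcal{M}_0$ cancel in $\mathcal{M}/\mathcal{M}_0$.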
One essential question related to the enhanced perturbation spectrum is whether the effects of $\mathcal{A}$ and $\mathcal{M}/\mathcal{M}_0$ can violate the near scale-invariance of the scalar spectrum. In fact, as shown in \cite{zhu_highorder_2016}, if we assume that both parameters $\hat \alpha_0$ and $\hat \beta_0$ are varying slowly, then the resulting primordial perturbation spectrum is still nearly scale-invariant. This assumption is valid if one only considers the first-order slow-roll approximation by treating all the slow-roll quantities as constants. In this case, the quantities $\mathcal{A}$ and $\mathcal{M}/\mathcal{M}_0$ do not depend on $k$ and, as a result, do not contribute significantly to any scale dependence of the spectrum. In this way, as shown in \cite{zhu_inflationary_2014}, the corresponding scalar spectral index can be calculated directly from the power spectrum (\ref{pw}), which is given by \begin{eqnarray} n_s &\equiv& 1+ \frac{d \ln \mathcal{P}_s}{d\ln k}\nonumber\\ &=& 4 - 2 \lim_{y \to 0}\int_{y}^{y_0} \frac{1-2 \hat \alpha_0 \epsilon_*^2 y'^2+ 3 \hat \beta_0 \epsilon_*^4 y'^4}{\sqrt{g(y')}}dy'\nonumber\\ &=& n_{s}^{\rm GR}. \end{eqnarray} That is, to the first-order approximation of the slow-roll parameters, the spectral index of the scalar perturbations is the same as that given in GR. This indicates that the presence of the high-order operators in the nonlinear dispersion relation can only affect the overall amplitude of the scalar spectrum at this order. It is still worth noting that the presence of the high-order operators in the nonlinear dispersion relation can contribute to the scale dependence of the power spectrum, if one goes beyond the first-order slow-roll approximation in the {extended} EFT of inflation. 
Physically this is because the presence of the high-order operators can slightly affect the time when the scalar perturbation modes exit the Hubble radius, which contributes to {the} spectral index at the order of \cite{zhu_quantum_2014, zhu_highorder_2016}, \begin{eqnarray} \frac{d \ln\mathcal{A} }{d \ln k} \sim \frac{d \ln \mathcal{M}/\mathcal{M}_0}{ d \ln k} \sim \epsilon_*^2 \times \mathcal{O}(\epsilon), \end{eqnarray} where $\epsilon$ denotes the slow-roll parameters. Therefore, in the presence of the high-order operators, the resulting power spectrum {of the} scalar perturbations is still nearly scale-invariant, and the significant effects due to $\mathcal{A}$ and $\mathcal{M}/\mathcal{M}_0$ can only modify the overall amplitude of the power spectrum. In the following we {consider} the effects of $\mathcal{A}$ and $\mathcal{M}/\mathcal{M}_0$ and their properties in detail. \subsubsection{Non-adiabatic effects on the perturbation spectrum} The modified factor $\mathcal{A}$ measures the contribution due to {the} presence of the two turning points $y_1$ and $y_2$. For the perturbation modes with a linear dispersion relation in the EFT of inflation, the equation of motion can in general have only one single turning point. Therefore, the modified factor $\mathcal{A}$ represents a direct effect of the presence of high-order operators included in the extended EFT of inflation, namely, the terms with $\bar M_4$, $\delta_3$, and $\delta_4$ in the action (\ref{DeltaS}). We observe that $\mathcal{A}$ depends on the quantity $\zeta_0^2$, which {is} related to the strength of the violation of {the} adiabatic evolution because of the {two extra turning points}, as we mentioned above. When $\zeta_0^2$ is positive and large, which corresponds to the case in which both $y_1$ and $y_2$ are real (i.e. case (a) in Fig.~\ref{gofy_tu}), we have \begin{eqnarray} e^{\pi \zeta_0^2} \gg 1. 
\end{eqnarray} Then the modified factor {$\mathcal{A}$ reads} \begin{eqnarray} \mathcal{A} \simeq 2 e^{\pi \zeta_0^2} \left(1+ \cos{2 \mathfrak{B}}\right), \end{eqnarray} which indicates that {the} power spectrum is exponentially enhanced in most of the parameter space. When $\zeta_0^2=0$, which corresponds to the double turning point case $y_1=y_2$ (i.e. case (b) in Fig.~\ref{gofy_tu}), we have \begin{eqnarray} \mathcal{A} = 3 + 2 \sqrt{2} \cos{2 \mathfrak{B}}. \end{eqnarray} {In} the case in which $\zeta_0^2$ is negative and large, which corresponds to the case in which the turning points $y_1$ and $y_2$ are complex conjugate, we have \begin{eqnarray} e^{\pi \zeta_0^2} \ll 1. \end{eqnarray} Thus, the modified factor $\mathcal{A}$ is \begin{eqnarray} \mathcal{A} = 1 + 2 e^{\pi \zeta_0^2/2} \cos{2 \mathfrak{B}}+ \mathcal{O}(e^{\pi \zeta_0^2}), \end{eqnarray} which reduces to the {standard one} with only one single turning point $y_0$. In Figs.~\ref{modified_factor} and \ref{modified_factor2}, we plot the behavior of the modified factor $\mathcal{A}$ with respect to different parameters. All these figures show clearly that the modified factor $\mathcal{A}$ gets enhanced when $y_1$ and $y_2$ are both real and single, and reduces to one {when} $y_1$ and $y_2$ are complex conjugate. Our analytical results presented here are also in agreement with the numerical results obtained in \cite{ashoorioon_extended_2018}. \subsubsection{Impact of the exponential integration of $\sqrt{g(y)}$} Another effect of the high-order operators is encoded in the ratio $\mathcal{M}/\mathcal{M}_0$, which measures the change of the exponential integral of $\sqrt{g(y)}$ over the range from $y_0$ to $0$ in comparison with a linear dispersion relation. From Eq.~(\ref{integral_M}), we can see that $\mathcal{M}/\mathcal{M}_0$ is more sensitive to the magnitude of the parameter $\epsilon_* = \frac{H}{M_*}$ than to the parameters $\hat \alpha_0$ and $\hat \beta_0$. 
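Before turning to $\mathcal{M}/\mathcal{M}_0$, the three limiting forms of $\mathcal{A}$ quoted above can be checked directly against Eq.~(\ref{A}); a short numerical sketch (the values of $\zeta_0^2$ and $\mathfrak{B}$ below are arbitrary choices for illustration):

```python
import math

def A(z0sq, B):
    # modified factor A = 1 + 2 e^{pi z} + 2 e^{pi z/2} sqrt(1 + e^{pi z}) cos(2B)
    x = math.exp(math.pi * z0sq)
    return 1.0 + 2.0 * x \
        + 2.0 * math.exp(0.5 * math.pi * z0sq) * math.sqrt(1.0 + x) * math.cos(2.0 * B)

B = 0.7
# (a) two single real turning points: exponential enhancement
approx = 2.0 * math.exp(3.0 * math.pi) * (1.0 + math.cos(2.0 * B))
assert abs(A(3.0, B) - approx) / A(3.0, B) < 1e-3
# (b) degenerate turning point (zeta_0^2 = 0): A = 3 + 2 sqrt(2) cos(2B)
assert abs(A(0.0, B) - (3.0 + 2.0 * math.sqrt(2.0) * math.cos(2.0 * B))) < 1e-12
# (c) complex conjugate turning points: A -> 1 up to O(e^{pi zeta_0^2 / 2})
assert abs(A(-3.0, B) - (1.0 + 2.0 * math.exp(-1.5 * math.pi) * math.cos(2.0 * B))) < 1e-3
print("three regimes of the modified factor A verified")
```

The enhancement in case (a) and the reduction to the standard single-turning-point result in case (c) are exactly the behaviors visible in Figs.~\ref{modified_factor} and \ref{modified_factor2}.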
When $\epsilon_* \ll 1$, as we have assumed in Eq.~(\ref{epsilon}), we find that the turning point $y_0$ can be approximated by \begin{eqnarray} y_0 = \frac{3}{2}+\mathcal{O}(\epsilon_*). \end{eqnarray} In the interval between $0$ and $y_0$, we also have \begin{eqnarray} \sqrt{g(y)} = \sqrt{\frac{9}{4 y^2} -1} + \mathcal{O}(\epsilon_*). \end{eqnarray} {With the above expressions,} we have \begin{eqnarray} \frac{\mathcal{M}}{\mathcal{M}_0} = 1+ \mathcal{O}(\epsilon_*). \end{eqnarray} This indicates that if $\epsilon_* \ll 1$, the exponential integral in (\ref{integral_M}) does not lead to any significant effects on the primordial perturbation spectrum. The only effects come from the modified factor $\mathcal{A}$. If we relax the condition $\epsilon_* \ll 1$ to $\epsilon_* \lesssim \mathcal{O}(1)$, namely, if we consider $\hat \alpha_0 \epsilon_*^2 \lesssim \mathcal{O}(1)$ and $\hat \beta_0 \epsilon_*^4 \lesssim \mathcal{O}(1)$, the above conclusion will be {significantly} changed. To see this clearly, in Fig.~\ref{M} we plot $g(y)$ for different values of {the} parameters and compare it with the one of {a linear dispersion relation}. When both $\hat{\alpha}_0\epsilon_\ast^2$ and $\hat{\beta}_0\epsilon_\ast^4$ are positive, which corresponds to case (d) in Fig.~\ref{gofy_tu}, $g(y)$ has only one single turning point and the corresponding modified factor is $\mathcal{A} \simeq 1$. However, as shown in the top panel of Fig.~\ref{M}, this case leads to a shift of $y_0$ from $\frac{3}{2}$ in the linear case to a smaller value. As a result, the curve of $g(y)$ with {a} nonlinear dispersion relation is always beneath the one with {a} linear dispersion relation between $y_0$ and $0$. In this case, the deviation $\mathcal{M}/\mathcal{M}_0 <1$ is significant, which implies that the perturbation spectrum is suppressed in comparison with the standard one {given in GR.} When $\hat \alpha_0 \epsilon_*^2$ is negative, things become more complicated. 
In this case, the shift of the turning point $y_0$ from $\frac{3}{2}$ is the result of a competition between $\hat \alpha_0 \epsilon_*^2$ and $\hat \beta_0 \epsilon_*^4$. The former can make $y_0>\frac{3}{2}$, while the latter can make it smaller. When the effect of $\hat \alpha_0 \epsilon_*^2$ is larger than that of $\hat \beta_0 \epsilon_*^4$, $y_0$ becomes larger than $3/2$; otherwise it will be smaller. However, from the shift of $y_0$ itself we still cannot conclude whether the ratio $\mathcal{M}/\mathcal{M}_0$ is larger than unity or not. As shown in the middle and bottom panels of Fig.~\ref{M}, even when $y_0<3/2$, the curve of $g(y)$ is not always beneath the one with a linear dispersion relation in $(0, y_0)$. This makes the analysis of the ratio $\mathcal{M}/\mathcal{M}_0$ very difficult. However, it can be shown that the ratio $\mathcal{M}/\mathcal{M}_0$ can be either larger or smaller than one, depending on the values of the parameters. In fact, the impact of the high-order operators on the perturbation spectrum with larger values of $\hat \alpha_0 \epsilon_*^2$ and $\hat \beta_0 \epsilon_*^4$ has already been studied numerically in \cite{ashoorioon_extended_2018}. The results we presented in this subsubsection are in agreement with these numerical analyses. Here we would like to emphasize that our analytical results show clearly that the impact of the high-order operators of the extended EFT of inflation on the perturbation spectrum originates from two different effects: one is measured by the modified factor $\mathcal{A}$, and the other by the ratio $\mathcal{M}/\mathcal{M}_0$. In addition, we note that the suppression or enhancement of the perturbation spectrum due to the ratio $\mathcal{M}/\mathcal{M}_0$ is not an effect of the excited state generated by the non-adiabatic evolution of modes around $y_1$ and $y_2$. 
These effects mainly originate from the shift of the turning point $y_0$ in comparison with the linear dispersion relation. \begin{figure*} \centering \includegraphics[width=8cm]{A1.pdf} \includegraphics[width=8cm]{A2.pdf} \caption{ The modified factor $\mathcal{A}$ as a function of $\hat \beta_0$ with $\hat \alpha_0$ and $\epsilon_*$ fixed. In both figures, the modified factor changes from the two real turning points region (corresponding to $\mathcal{A} \gg 1$) to the two complex conjugate turning points region (corresponding to $\mathcal{A} \simeq 1$). Left panel: $\hat \alpha_0 =2$ and $\epsilon_* = 0.01$. Right panel: $\hat \alpha_0 =2.5$ and $\epsilon_* = 0.01$.} \label{modified_factor} \end{figure*} \begin{figure} \centering \includegraphics[width=8cm]{A4.pdf} \caption{The modified factor $\mathcal{A}$ as a function of $\hat \beta_0$ with $\hat \alpha_0=2$ for several values of $\epsilon_*$. } \label{modified_factor2} \end{figure} \begin{figure} \centering \includegraphics[width=8cm]{eg1.pdf} \includegraphics[width=8cm]{eg2.pdf} \includegraphics[width=8cm]{eg3.pdf} \caption{Comparison of $g(y)$ with nonlinear and linear dispersion relations in the range $(0, y_0)$ for different values of the parameters.}\label{M} \end{figure} \section{Conclusion and Outlook} \renewcommand{\theequation}{5.\arabic{equation}}\setcounter{equation}{0} In this paper, we {provide} a detailed and analytical study of the effects of high-order operators on the primordial curvature perturbations. These high-order operators are naturally included in the framework of the extended EFT of inflation. We show clearly that the effects of these high-order operators can naturally generate an excited state for the primordial curvature perturbations, rather than the usual BD vacuum state, at subhorizon scales before the perturbation modes cross the Hubble radius. As we showed in Sec. 
IV, this excited state can produce enhanced effects if the high-order operators lead to a strong violation of the adiabatic condition of the perturbation modes (corresponding to two extra real and single turning points $y_1$ and $y_2$), but it reduces to the BD state when the violation is weak (corresponding to two complex conjugate turning points). With this excited state, we calculate explicitly these non-adiabatic effects on the primordial curvature perturbation spectrum and {show} that the {modifications} of the spectrum come from two effects: one from the modified factor $\mathcal{A}$ and the other from the exponential integral of $\sqrt{g(y)}$ over the range $(0, y_0)$. {Despite all these effects, the} resulting modified power spectrum is still nearly scale-invariant, and the presence of the high-order operators can only affect the overall amplitude of the spectrum. In particular, we show clearly that the modified factor $\mathcal{A}$ measures the contribution due to the presence of the two turning points $y_1$ and $y_2$, which represents a direct effect of the inclusion of the high-order operators in the theory. The effects from the exponential integral, which lead either to suppression or to enhancement of the perturbation spectrum, mainly arise from the deviation of the turning point $y_0$ from its value for the linear dispersion relation. \section*{Acknowledgements} This work is supported by the National Natural Science Foundation of China with the Grants Nos. 11675143, 11975203, 11675145, and the Fundamental Research Funds for the Provincial Universities of Zhejiang in China with Grants No. RF-A2019015. \section*{Appendix A: Analysis of the strong coupling problem} \renewcommand{\theequation}{A.\arabic{equation}}\setcounter{equation}{0} In this appendix, we present a detailed analysis of the strong coupling problem in the extended EFT of inflation. 
We consider several cubic and quartic terms in the action which can run into the strong coupling regime when the dispersion relation is linear. Then we show in detail that this strong coupling problem can be cured by the presence of the high-order operators introduced in the extended EFT of inflation \cite{ashoorioon_extended_2018}. For this purpose, we adopt the mechanism used in refs. \cite{zhu_symmetry_2011, zhu_general_2012, blas_comment_2010, lin_strong_2011} in Horava-Lifshitz gravity and follow the calculations given there. It is worth noting that a similar mechanism has also been used to cure the strong coupling problem in the EFT of inflation in the presence of the fourth-order derivative operators \cite{Baumann:2011su}. To proceed, let us first write the quadratic action (\ref{quadra_eff}) of the extended EFT of inflation in the form \begin{eqnarray}\label{quadratic_action} S_{\pi}^{(2)} &=& \frac{1}{2}\int d^4x a^3 A_1 \Bigg[ \dot \pi^2 -c_s^2 \Bigg( \frac{(\partial_i \pi )^2}{a^2}-\frac{\hat \alpha_0}{M_*^2} c_s^2 \frac{(\partial^2\pi)^2}{a^4} \nonumber\\ &&~~~~~~~~~~~~~~~~ + \frac{\hat \beta_0}{M_*^4} c_s^4 \frac{(\partial^3 \pi)^2}{a^6}\Bigg) \Bigg], \end{eqnarray} where $A_1$, $c_s$, $\hat \alpha_0$, $\hat \beta_0$, and $M_*$ are given by eqs.~(\ref{A1}, \ref{cs}, \ref{alpha_0}, \ref{beta_0}), respectively. In order to study the strong coupling problem, we consider the following cubic and quartic terms in the action {as examples,} \begin{eqnarray} S_\pi^{(3)} \sim \int d^4 x a^3 \lambda_1 \frac{\dot \pi (\partial_i \pi)^2}{a^2},\\ S_{\pi}^{(4)} \sim \int d^4x a^3 \lambda_2 \frac{(\partial_i \pi \partial^i \pi)^2}{a^4}, \end{eqnarray} where $\lambda_1$ and $\lambda_2$ are {two coupling constants}. The coupling constant $\lambda_1$ has been derived in \cite{Baumann:2011su} without introducing the sixth-order derivative operators. 
The effects of the higher derivative terms can only contribute small corrections, so that we have $\lambda_1 \simeq 2 M_{\rm Pl}^2 \dot H (1-c_s^2) c_s^{-2}(1 + \mathcal{O}(\epsilon_*^2))\simeq - 2 M_{\rm Pl}^2 H^2 \epsilon_1 c_s^{-2}(1-c_s^2)$ \cite{Baumann:2011su}. For the coupling constant $\lambda_2$, normally one has $\lambda_2 \sim M_{\rm Pl}^2 H^2 \epsilon_1 (1-c_s^2)^{p}c_s^{-q} (1+\mathcal{O}(\epsilon_*^2))$ with $p \geq 0 , q >0$ \cite{Baumann:2011su}. Depending on the energy scale, the above two terms have different scalings, so in the following we consider them separately. \subsection{$|\partial_i| \ll M_*$} In this case, we find that the high-order derivative terms in the quadratic action can be neglected, and \begin{eqnarray}\label{low_action} S^{(2)}_\pi \simeq \frac{1}{2} \int d^4x a^3 A_1 \left[\dot \pi^2 - c_s^2 \frac{ (\partial_i \pi)^2}{a^2}\right]. \end{eqnarray} Considering the transformation \begin{eqnarray} \label{transA} t \to b_1 \hat t,\;\; x^i \to b_2 \hat x^i,\;\; \pi \to b_3 \hat \pi, \end{eqnarray} the action (\ref{low_action}) can be written in the canonical form, \begin{eqnarray}\label{low_actionb} S^{(2)}_\pi \simeq \int d^4 \hat x a^3 \left[(\hat \partial_t \hat \pi)^2 - \frac{ (\hat \partial_i \hat \pi)^2}{a^2}\right]. \end{eqnarray} Note that we treat $A_1$ and $c_s^2$ as constants since both are slowly varying during slow-roll inflation. In writing the above form one sees that the coefficient of each term in the action is of the order of $\mathcal{O}(1)$ and \begin{eqnarray} b_2 = b_1 c_s , \;\;\;\; b_3= \frac{\sqrt{2}}{b_1 \sqrt{A_1} c_s^{3/2}}. \end{eqnarray} Then, under the same transformation (\ref{transA}) the cubic and quartic terms transform into \begin{eqnarray} S^{(3)}_\pi \simeq \frac{\lambda_1}{b_1^2 c_s^{7/2}}\left(\frac{2}{A_1}\right)^{3/2} \hat S^{(3)}_\pi,\\ S^{(4)}_\pi \simeq \frac{4\lambda_2}{b_1^4 A_1^2 c_s^7} \hat S_\pi^{(4)}. 
\end{eqnarray} On the other hand, we observe that the action (\ref{low_actionb}) is invariant under the rescaling, \begin{eqnarray} \hat t \to b^{-1} \hat t, \;\; \hat x^i \to b^{-1} \hat x^i,\;\; \hat \pi \to b \hat \pi. \end{eqnarray} Then the $\lambda_1$ and $\lambda_2$ terms scale as $b^2$ and $b^4$, respectively. Therefore, these terms are irrelevant and nonrenormalizable \cite{Polchinski}. To see the problem clearly, let us consider a physical process with energy $E$; then we have \begin{eqnarray} \int d^4 \hat x a^3 \frac{\hat \partial_{\hat t}\hat\pi (\hat \partial_i \hat\pi)^2}{a^2} \sim E^2,\\ \int d^4 \hat x a^3 \frac{ (\hat \partial_i \hat\pi \hat \partial^i \hat \pi)^2}{a^4} \sim E^4. \end{eqnarray} Since the cubic and quartic actions are dimensionless, we must have \begin{eqnarray} \frac{\lambda_1}{b_1^2 c_s^{7/2}}\left(\frac{2}{A_1}\right)^{3/2} \hat S^{(3)}_\pi \simeq \left(\frac{E}{\Lambda_{\rm sc}^{(3)}}\right)^2,\\ \frac{4\lambda_2}{b_1^4 A_1^2 c_s^7} \hat S_\pi^{(4)} \simeq \left(\frac{E}{\Lambda_{\rm sc}^{(4)}}\right)^4, \end{eqnarray} where the strong coupling energy scales $\Lambda_{\rm sc}^{(3,4)}$ are \begin{eqnarray} \Lambda_{\rm sc}^{(3)} = \frac{b_1 c_s^{7/4}}{\lambda_1^{1/2}}\left(\frac{A_1}{2}\right)^{3/4}, \\ \Lambda_{\rm sc}^{(4)} = \frac{b_1 \sqrt{A_1}c_s^{7/4}}{\sqrt{2} \lambda_2^{1/4}}. \end{eqnarray} Although we have only considered two terms, by {following} the above procedure one can in principle obtain the strong coupling scales $\Lambda_{\rm sc}$ for all the nonrenormalizable terms in the cubic and quartic actions \cite{Polchinski}. It is obvious that when the energy $E$ is above the strong coupling scale $\Lambda_{\rm sc}^{(3)}$ ($\Lambda_{\rm sc}^{(4)}$), the theory runs into the strong coupling regime. 
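The rescaling exponents and prefactors quoted above can be verified numerically. In the sketch below (the numerical values of $b_1$, $A_1$, $c_s$, $\lambda_{1,2}$ are arbitrary), the couplings $\lambda_{1,2}$ are kept explicit in the prefactors, with $\hat S^{(3,4)}_\pi$ understood as the corresponding operators stripped of their couplings:

```python
import math

# Check of the canonical rescaling (\ref{transA}) with b2 = b1*cs and
# b3 = sqrt(2)/(b1*sqrt(A1)*cs^(3/2)); all numerical values are arbitrary.
b1, A1, cs, lam1, lam2 = 0.7, 3.0, 0.4, 1.3, 2.1
b2 = b1 * cs
b3 = math.sqrt(2.0) / (b1 * math.sqrt(A1) * cs**1.5)

# measure d^4x gives b1*b2^3; a time derivative gives b3/b1, a spatial one b3/b2
time_coeff = 0.5 * A1 * b1 * b2**3 * (b3 / b1)**2
space_coeff = 0.5 * A1 * cs**2 * b1 * b2**3 * (b3 / b2)**2
assert abs(time_coeff - 1.0) < 1e-9 and abs(space_coeff - 1.0) < 1e-9

# cubic operator pidot (d_i pi)^2 / a^2 picks up b1*b2^3*(b3/b1)*(b3/b2)^2
cubic = lam1 * b1 * b2**3 * (b3 / b1) * (b3 / b2)**2
target3 = lam1 / (b1**2 * cs**3.5) * (2.0 / A1)**1.5
assert abs(cubic - target3) < 1e-9 * target3
# quartic operator (d_i pi d^i pi)^2 / a^4 picks up b1*b2^3*(b3/b2)^4
quartic = lam2 * b1 * b2**3 * (b3 / b2)**4
target4 = 4.0 * lam2 / (b1**4 * A1**2 * cs**7)
assert abs(quartic - target4) < 1e-9 * target4
print("canonical normalization and strong-coupling prefactors verified")
```

With $\hat S^{(3)}_\pi \sim E^2$ and $\hat S^{(4)}_\pi \sim E^4$, these prefactors reproduce exactly the strong coupling scales $\Lambda_{\rm sc}^{(3,4)}$ quoted above.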
The strong coupling energy and momentum scales in the physical coordinates are given respectively by \begin{eqnarray} (\Lambda_{\omega}^{(3)},\Lambda_{k}^{(3)}) = \left(\frac{c_s^{7/4}}{\lambda_1^{1/2}}\left(\frac{A_1}{2}\right)^{3/4}, \frac{c_s^{3/4}}{\lambda_1^{1/2}}\left(\frac{A_1}{2}\right)^{3/4}\right),\nonumber\\ (\Lambda_{\omega}^{(4)},\Lambda_{k}^{(4)}) = \left(\frac{ \sqrt{A_1}c_s^{7/4}}{\sqrt{2} \lambda_2^{1/4}}, \frac{ \sqrt{A_1}c_s^{3/4}}{\sqrt{2} \lambda_2^{1/4}}\right).\nonumber\\ \end{eqnarray} We would like to mention that the above analysis only holds for $M_* \gg E > \Lambda_{\omega}^{(3, 4)}$, which implies \begin{eqnarray} \frac{c_s^{7/4}}{\lambda_1^{1/2}}\left(\frac{A_1}{2}\right)^{3/4}, \; \frac{ \sqrt{A_1}c_s^{7/4}}{\sqrt{2} \lambda_2^{1/4}} < M_*. \end{eqnarray} However, when $M_* < \Lambda_\omega^{(3, 4)}$ and $E > M_*$, before reaching the strong coupling energy scale, one has to take into account the effects of the high-order derivative terms in (\ref{quadratic_action}). In ref.~\cite{Baumann:2011su}, it has been shown in detail that, precisely due to the presence of the fourth-order derivative terms in the EFT of inflation, the strong coupling problem can be cured. This is very similar to the way the strong coupling problem is eliminated by the presence of the high-order derivative terms up to sixth order \cite{zhu_symmetry_2011, zhu_general_2012}. In the following, by applying the same mechanism we shall show that the strong coupling problem can also be cured in the presence of the high-order spatial derivative terms. Since ref.~\cite{Baumann:2011su} has considered the fourth-order derivative terms in detail, here we only focus on the effects of the sixth-order operators. 
\subsection{$M_* < \Lambda_{\omega}^{(3,4)}$} In this case, for $E > M_*$, the action (\ref{quadratic_action}) reduces to \begin{eqnarray} S_{\pi}^{(2)} \simeq \frac{1}{2}\int d^4x a^3 A_1 \Bigg[ \dot \pi^2 - \frac{\hat \beta_0}{M_*^4} c_s^6 \frac{(\partial^3 \pi)^2}{a^6}\Bigg]. \end{eqnarray} Using the transformation (\ref{transA}) with \begin{eqnarray} b_2= \frac{2^{1/6}b_1^{1/3} c_s^{1/2} \hat \beta_0^{1/12}}{A_1^{1/6}M_*^{1/3}}, \\ b_3=\frac{2^{1/4}M_*^{1/2}}{A_1^{1/4} c_s^{3/4} \hat \beta_0^{1/8}}, \end{eqnarray} one has \begin{eqnarray} S_\pi^{(2)} = \int d^4\hat x [(\hat \partial_{\hat t} \hat \pi)^2 - (\hat \partial^3 \hat \pi)^2]. \end{eqnarray} The cubic and quartic actions transform as \begin{eqnarray} S_{\pi}^{(3)} = \frac{2 \sqrt{2} b_1^{1/3} M_*^{7/3}}{A_1^{3/2} c_s^{7/2} \hat \beta_0^{7/12} }\hat S^{(3)}_\pi,\\ S_{\pi}^{(4)} = \frac{ 4 b_1^{2/3} M_*^{14/3}}{A_1^2 c_s^7 \hat \beta_0^{7/6}}\hat S^{(4)}_\pi. \end{eqnarray} Similar to \cite{zhu_symmetry_2011, zhu_general_2012}, we also observe that the quadratic action is invariant under the rescaling \begin{eqnarray} \hat t \to b^{-3} \hat t,\;\; \hat x^i \to b^{-1} \hat x^i,\;\; \hat \pi \to \hat \pi. \end{eqnarray} Then, one can immediately see that the $\lambda_1$ term in the cubic action and the $\lambda_2$ term in the quartic action scale as $b^{-1}$ and $b^{-2}$, respectively. Therefore, these two terms become superrenormalizable \cite{Polchinski}. Thus, the conditions for curing the strong coupling problem which arises in the cubic and quartic terms require $M_* \lesssim \Lambda_{\omega}^{(3,4)}$. This leads to \begin{eqnarray}\label{CA} M_* \lesssim \frac{c_s^{7/4}}{\lambda_1^{1/2}}\left(\frac{A_1}{2}\right)^{3/4}, \; \frac{ \sqrt{A_1}c_s^{7/4}}{\sqrt{2} \lambda_2^{1/4}}. 
\end{eqnarray} Considering $A_1 \simeq - 2 M_{\rm Pl}^2 \dot H (1+\mathcal{O}(\epsilon_*^2)) \simeq 2 M_{\rm Pl}^2 H^2 \epsilon_1$ and $\lambda_1 \simeq 2 M_{\rm Pl}^2 \dot H (1-c_s^2) c_s^{-2}(1 + \mathcal{O}(\epsilon_*^2))\simeq - 2 M_{\rm Pl}^2 H^2 \epsilon_1 c_s^{-2}(1-c_s^2)$ \cite{Baumann:2011su}, the first condition in (\ref{CA}) reduces to \begin{eqnarray} M_* \lesssim \frac{c_s^{11/4} M_{\rm Pl} ^{1/2}H^{1/2} \epsilon_1^{1/4}}{\sqrt{1-c_s^2}} \simeq \frac{\mathcal{O}(1)}{\sqrt{1-c_s^2}} \times 10^{-3} M_{\rm Pl}.\nonumber\\ \end{eqnarray} Note that in the above we have assumed $c_s \sim \mathcal{O}(1)$ and used $H \sim 2.7 \times 10^{-5} M_{\rm Pl}$, $\epsilon_1 \sim 0.0068$ from the Planck 2018 data \cite{planckcollaboration_planck_2018}. Notice that we have also assumed the condition $H \lesssim M_*$ (i.e. $\epsilon_* < 1$) throughout the paper, which implies \begin{eqnarray} 2.7 \times 10^{-5} \lesssim \frac{M_*}{M_{\rm Pl}} \lesssim \frac{ \mathcal{O}(1)}{\sqrt{1-c_s^2}} \times 10^{-3}. \end{eqnarray} Similarly, for the second condition in (\ref{CA}) with $\lambda_2 \sim M_{\rm Pl}^2 H^2 \epsilon_1 c_s^{-q} (1-c_s^2)^p$, we find \begin{eqnarray} 2.7 \times 10^{-5}\lesssim \frac{M_* }{M_{\rm Pl}} \lesssim \frac{\mathcal{O}(1)}{(1-c_s^2)^{p/4}} \times 10^{-3}. \end{eqnarray} Therefore, we conclude that, provided the above conditions hold, the cubic and quartic terms considered in this appendix do not lead to any strong coupling problem. Although we have only considered the strong coupling problem associated with these two terms, the analysis can be extended to other high-order terms in the cubic and quartic actions. { In this appendix (as well as in \cite{lin_strong_2011, blas_comment_2010}), we show clearly that all these terms can be made either superrenormalizable or strictly renormalizable. }
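As a closing cross-check (a sketch using the numbers quoted above), the first bound in (\ref{CA}) indeed evaluates to $\sim 10^{-3}\,M_{\rm Pl}$ for $c_s\sim\mathcal{O}(1)$, and the power counting behind the two scalings used in this appendix can be tabulated in a few lines:

```python
import math

# First condition in (\ref{CA}) with the Planck 2018 numbers quoted above;
# for c_s ~ O(1) the (1-c_s^2)^(-1/2) factor is absorbed into the O(1) coefficient.
H_over_Mpl, eps1, cs = 2.7e-5, 0.0068, 1.0
bound = cs**2.75 * math.sqrt(H_over_Mpl) * eps1**0.25   # upper bound on M_*/M_Pl
assert 1e-3 < bound < 2e-3      # of order 10^-3 M_Pl, as stated
assert 2.7e-5 < bound           # compatible with the lower bound H <~ M_*

# Scaling dimension of an operator int dt d^3x (d_t)^nt (d_i)^nx pi^npi
# under t -> b^-zt t, x^i -> b^-zx x^i, pi -> b^zpi pi.
def dim(nt, nx, npi, zt, zx, zpi):
    return -(zt + 3 * zx) + nt * zt + nx * zx + npi * zpi

# relativistic scaling (zt = zx = zpi = 1): both operators are irrelevant
assert dim(1, 2, 3, 1, 1, 1) == 2    # lambda_1 term scales as b^2
assert dim(0, 4, 4, 1, 1, 1) == 4    # lambda_2 term scales as b^4
# z = 3 anisotropic scaling (zt = 3, zx = 1, zpi = 0): superrenormalizable
assert dim(1, 2, 3, 3, 1, 0) == -1   # lambda_1 term scales as b^-1
assert dim(0, 4, 4, 3, 1, 0) == -2   # lambda_2 term scales as b^-2
print("strong-coupling bound and scaling dimensions verified")
```

The sign flip of the scaling dimensions between the two regimes is the whole mechanism: operators that are irrelevant under the relativistic scaling become superrenormalizable once the sixth-order term dominates.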
\section{Introduction\label{sec:introduction}} The multi--wavelength emission of galaxies from $\gamma$--rays to the radio domain is the outcome of the complex physical interplay between their main baryonic components: stars of all ages and their remnants; molecular, atomic, and ionised gas; dust; and supermassive black holes. This means that the spectral energy distribution (SED) of a galaxy contains the imprint of the baryonic processes that drove its formation and evolution across cosmic time. In other words, to understand galaxy formation and evolution we need to extract the information tightly woven into the SED of galaxies across a broad range of redshifts. Over the past decade, major efforts have been undertaken to develop and strengthen two of the main pillars upon which rest our studies of galaxy formation and evolution: panchromatic observations and panchromatic models. On the observational side, large multi--wavelength surveys of galaxies have been carried out to measure the SED of galaxies across space and time, yielding a treasure trove of data that provide us with outstanding insight across the different baryonic components of galaxies. In turn, to interpret these observations and measure the fundamental physical properties of galaxies (e.g. star formation rate (SFR) and history (SFH), stellar mass, attenuation, dust mass and properties, presence and characteristics of an active nucleus, etc.), important investments have been made towards creating ever more precise and accurate models of the emission of galaxies over multiple orders of magnitude in wavelength. Modelling the SED of galaxies is a highly intricate problem. Galaxies with very different properties can have broadly similar SEDs. This is particularly the case when considering restricted wavelength ranges rather than the full SED, which is seldom available. Therefore, estimating the physical properties of galaxies precisely and accurately with only limited data is a considerable challenge. 
In practice, different avenues can be taken to build physically motivated SED models and attempt to determine their intrinsic physical properties. A popular approach consists in modelling galaxies using simple dust--attenuated templates representative of the diversity of galaxies at different redshifts. Such an approach is generally adopted by photometric redshift codes that fit only the FUV (far--ultraviolet) to NIR (near--infrared) part of the SED. Although this method is fast and works remarkably well for determining redshifts, as long as spectral breaks are sampled, it shows important limits when it comes to estimating the physical properties of galaxies beyond the stellar mass. In particular it can suffer heavily from degeneracies between the age and the metallicity \citep[e.g.][]{worthey1994a} or between the age and the attenuation \citep[e.g.][]{papovich2001a}: a galaxy can appear red either because it is strongly attenuated, because it does not form stars anymore, or because it has a high metallicity. A more accurate but much more demanding approach in terms of computational resources is to solve the radiative transfer equation of the emission of stellar populations through a dusty gaseous medium with an arbitrary geometry. While this allows for an exquisitely detailed modelling, the required computing time can be extremely large and the effort required to construct large grids of models rapidly becomes prohibitively expensive even on relatively small samples of galaxies. So far this constraint has largely confined radiative transfer models to theoretical studies \citep[e.g.][]{gordon2001a,tuffs2004a,trayford2017a} and to only a handful of in--depth observational case studies, generally on edge-on galaxies \citep[e.g.][]{xilouris1999a,popescu2000a,bianchi2008a,delooze2012b}, with the modelling of face-on galaxies being a fairly recent development \citep[e.g.][]{delooze2014a, viaene2017a}. 
An increasingly popular compromise in terms of speed, precision, and accuracy is to rely on an energy balance principle: the energy emitted by dust in the mid-- and far--IR exactly corresponds to the energy absorbed by dust in the UV--optical range. Such a method has been adopted by modern SED modelling codes such as \texttt{CIGALE} \citep{burgarella2005a,noll2009a}, \texttt{MAGPHYS} \citep{dacunha2008a}, and \texttt{FSPS} \citep{conroy2009a,conroy2010b} for instance. Such codes are very versatile and have been applied to study a wide variety of issues: why quiescent galaxies do not follow the starburst IRX--$\beta$ relation \citep{boquien2012a}, the attenuation properties of galaxies \citep[][Buat et al. (in press), Decleir et al. (submitted)]{buat2011b,buat2012a,boquien2013a,lofaro2017a,salim2018a}, SFR estimators \citep{buat2014a,boquien2014a,boquien2016a}, the separation of the emission of active galactic nuclei (AGNs) from their host galaxy \citep{ciesla2015a}, the imprint of the environment on the SED of galaxies \citep{bitsakis2016a,ciesla2016a}, or more generally the properties of nearby and distant galaxies \citep[e.g.][Małek et al. (in press), Burgarella et al. 2018 (submitted)]{burgarella2011a,giovannoli2011a,johnston2015a,malek2014a,alvarez2016a,pappalardo2016a,hirashita2017a,vika2017a}, to cite but a few of the studies carried out with \texttt{CIGALE}. If this approach is so successful, it is largely owing to the efficiency of the method, which allows one to obtain good results, breaking the aforementioned degeneracies with the help of dust emission, while doing so rapidly with relatively modest computing requirements. A key aspect of many modern models is their use of a Bayesian--like approach. The physical properties are then not evaluated from the best--fit model but by weighting all the models depending on their goodness--of--fit, with the best--fit models having the heaviest weight. 
This naturally takes into account the uncertainties on the observations while also including the effect of intrinsic degeneracies between physical parameters (different models, sometimes with widely different physical parameters, can yield very similar SEDs over some wavelength ranges, making it difficult to favour one model in particular). By doing so we are able to not only convincingly reproduce the observations but also to obtain more reliable estimates of the physical properties and their related uncertainties. With ever larger and deeper surveys spanning ever broader wavelength ranges, it is especially important that we have correspondingly efficient, reliable, and versatile tools to model galaxies and estimate their physical properties. We present in this paper the new \texttt{python} version of \texttt{CIGALE}. While it shares the name, the ``energy balance'' principle, and the Bayesian--like strategy of the original \texttt{FORTRAN} implementation presented in \cite{noll2009a}, it is a completely new code that benefits from years of experience developing, maintaining, and using the original \texttt{FORTRAN} version of \texttt{CIGALE}, while addressing some of the new challenges and usages that have surfaced over the last few years. The aim of this article is to present the new architecture of \texttt{CIGALE}, its different modules, and various examples of its application. For conciseness, we do not dwell on the more theoretical aspects of the Bayesian strategy that have been presented in \cite{noll2009a} and many other articles. Similarly, we do not give excessive details on the precision and accuracy of the results as the topic has already been covered extensively in several papers from the same authors \citep[e.g.][]{boquien2012a,boquien2016a,buat2014a,ciesla2017a}. Finally, \texttt{CIGALE} being in constant evolution and development, this article describes its status as of version 2018.1. Further developments will be presented in separate publications. 
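The essence of this likelihood weighting can be illustrated with a minimal sketch: each model is weighted by $\exp(-\chi^2/2)$, and a physical property is estimated as the weighted mean over all models, with the weighted standard deviation as its uncertainty. The function and array names are illustrative, not \texttt{CIGALE} internals, and the sketch omits the scaling of each model to the observations that the actual analysis also performs.

```python
import numpy as np

def bayes_like_estimate(model_fluxes, model_props, obs_flux, obs_err):
    """Likelihood-weighted estimate of one physical property.

    model_fluxes: (N_models, N_bands) model fluxes in the observed bands.
    model_props:  (N_models,) value of the property for each model.
    obs_flux, obs_err: (N_bands,) observed fluxes and uncertainties.
    """
    # chi^2 of every model against the observations
    chi2 = np.sum(((model_fluxes - obs_flux) / obs_err) ** 2, axis=1)
    # weight each model by its goodness-of-fit; best fits weigh most
    weights = np.exp(-chi2 / 2.0)
    weights /= weights.sum()
    mean = np.sum(weights * model_props)
    std = np.sqrt(np.sum(weights * (model_props - mean) ** 2))
    return mean, std
```

Degenerate models with similar $\chi^2$ thus contribute comparable weights, which naturally broadens the estimated uncertainty on the property.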
The article is structured as follows. We present the guiding principles and the architecture of this new code in Sect.~\ref{sec:principles}. The modules used to construct the SED and carry out the analysis are presented in Sects.~\ref{sec:SED-creation} and \ref{sec:analysis-modules}. The versatility of \texttt{CIGALE} is shown through various examples of its application in Sect.~\ref{sec:versatility}. We conclude in Sect.~\ref{sec:summary}. For reference we provide additional technical details, performance benchmarks, and various examples in the appendices. \section{Architecture\label{sec:principles}} To interpret the results of the modelling of galaxies and avoid a detrimental black--box effect, it is important to understand the whys and wherefores of the model. We present here the broad design principles that we have followed, the reasons for which we have chosen the \texttt{python} language, a high--level overview of the architecture to compute the models and estimate the physical properties of galaxies, and finally some important implementation choices. \subsection{Design principles} While the intrinsic scientific quality of a model is certainly one of the most important criteria determining its impact, other factors also play a role. Ideally, a model should provide clear and meaningful results to a wide population of astronomers from Masters students learning galaxy modelling to highly experienced modellers without requiring a detailed knowledge of the internal mechanics and of the implementation. At the same time a model should remain clear in what it is doing, and how it does it, so that it is flexible and easily adaptable even by inexperienced users, allowing it to easily evolve. Last but not least, the model should not require extraordinary resources. Desktop computers or small departmental servers should be sufficient to analyse large samples of galaxies. 
To reach these overarching goals, we have designed the \texttt{python} version of \texttt{CIGALE} following three major guiding principles: modularity, clarity, and efficiency both for the users and the developers. \begin{itemize} \item Modularity: the code must be split into different blocks that are as independent as can be from one another. The four main stages, namely input handling (e.g. reading and processing the input files), computation of the models (e.g. the fluxes and the physical properties of each model), analysis (e.g. fit of observations and estimation of physical properties), and output handling (e.g. saving the physical properties, the best--fit spectrum, the $\chi^2$ of each model, the probability distribution function, etc.), must be entirely independent. Each physical component (stellar populations, nebular emission, attenuation by dust, dust emission, active nucleus, etc.) must be dealt with separately in individual modules, and each module must be able to be substituted as transparently as possible from the point--of--view of upstream and downstream modules. For instance it must be possible to change the attenuation law without affecting the rest of the code in any way. Finally, relying on this modularity, it must be possible to use \texttt{CIGALE} not only as initially intended but also as a library to build new tools. \item Clarity: the code must be as easy to understand as can be not only for the developers but also for the users in order to avoid a black--box effect and facilitate the development of community--driven extensions. This is very important to keep the evolution of \texttt{CIGALE} in phase with the evolution of knowledge and the creation of new emission models for any physical component, existing or newly developed. \item Efficiency: large surveys yield increasingly larger multi--wavelength catalogues. We must use computer resources as efficiently as possible in terms of power and memory usage. 
We aim at being able to model the SED of thousands of galaxies across the universe using millions of models in a matter of a few hours on a typical multi--core computer readily available off--the--shelf. \end{itemize} \subsection{Choice of programming language} With these guiding principles in mind, we have chosen to develop the new version of \texttt{CIGALE} using the \texttt{python} language. We have made this choice based on four main arguments. \begin{enumerate} \item With its clear syntax and its low barrier to entry, \texttt{python} has become an increasingly popular language in Astronomy. It is often the language of choice for teaching programming and has even become the \textit{de facto} standard for many new developments. For \texttt{CIGALE}, this means a large fraction of the community is readily able to develop and adapt it to their needs, increasing its potential beyond its original design. \item A direct cause and consequence of this popularity is that unlike languages more closely tailored for numerical computations such as \texttt{FORTRAN} or \texttt{idl}, \texttt{python} is versatile and has a broad and rich set of specialised and general--purpose libraries. We have relied as much as possible on such libraries, and in particular on \texttt{sqlalchemy}\footnote{\url{http://www.sqlalchemy.org/}} for storing models and filters in a database, \texttt{numpy} and \texttt{scipy} \citep{numpy,scipy} for numerical computations, \texttt{matplotlib} \citep{matplotlib} for plotting, and \texttt{astropy} \citep{astropy2013a,astropy2018a} for astronomically related tasks such as computing cosmology--dependent quantities like the luminosity distances or more generally to handle data input and output in a variety of formats, including \texttt{FITS} and \texttt{VO--table}. This has allowed us to focus our efforts on the scientific challenges rather than on the low--level strata of the software. 
\item Even though it is a scripting language, the aforementioned scientific modules allow fast numerical computations in \texttt{python}, minimising the impact on the performance compared to a compiled language such as \texttt{C++} or \texttt{FORTRAN}. In addition to this, the language comes with a built--in module for parallel programming, allowing for efficient use of multiple cores and processors. \item Last but not least, \texttt{python} is published under a Free license. This means that users do not have to acquire a license to run \texttt{CIGALE}, unlike with \texttt{idl} for instance. Numerous \texttt{python} distributions are available at zero cost with all the required libraries to install and run \texttt{CIGALE} easily. \end{enumerate} \subsection{High--level overview of \texttt{CIGALE}} The primary purposes of \texttt{CIGALE} are to generate theoretical models and, optionally, to use them to estimate the physical properties of galaxies. As the latter case is in fact probably the most common situation, we provide here a high--level overview of how this is achieved, only noting the handful of major differences when \texttt{CIGALE} is simply used to generate theoretical models. \subsubsection{The division of labour\label{sssec:labour}} The \texttt{CIGALE} package provides three executable files, each dedicated to a specific task: \begin{enumerate} \item \texttt{pcigale} carries out the computation of the models and, if needed, the estimation of the physical properties of galaxies. \item \texttt{pcigale-plots} generates plots from the output of \texttt{pcigale}: best SED, $\chi^2$ distribution, probability distribution function, and physical property estimations from mock catalogues. \item \texttt{pcigale-filters} allows one to list, delete, add, or plot a filter in the database. \end{enumerate} In practice, only \texttt{pcigale} is required to create the models, fit them to observations, and estimate the physical properties. 
The \texttt{pcigale-plots} and \texttt{pcigale-filters} executables are only provided for convenience and to facilitate the interpretation of the results provided by \texttt{pcigale}. In more detail, \texttt{pcigale} handles the optional guided construction of the configuration file (\texttt{pcigale init}, which produces a configuration file template where the user then indicates the list of the physical modules to be used, and \texttt{pcigale genconf}, which fills the file with the configuration section for each of the user--requested modules), as well as the computation itself (\texttt{pcigale run}). The computation is internally divided into four main stages: \begin{enumerate} \item Input handling. First the configuration file is read and interpreted: name of the input data files, number of cores, fluxes and properties to fit, parameters for each module, and so on. Then the input data for each object to be analysed are also read: names, redshifts, distances (optional)\footnote{If the luminosity distance is explicitly provided, it overrides the distance computed from the redshift. The difference can be particularly important for nearby galaxies where peculiar motions dominate over the Hubble flow.}, fluxes, and physical properties (optional). These data are then processed and normalised (e.g. eliminating invalid data, adding missing uncertainties, etc.). If \texttt{CIGALE} is only used to generate models, the data file is only used to extract the list of bands. \item Model generation. For every combination of the input parameters, compute the physical properties of the model (SFR, stellar mass, attenuation, etc.) and its fluxes in a given set of bands. \item Analysis. For each object: (a) fit all the models to the data, (b) estimate the likelihoods for all the models, and (c) estimate the physical properties from the likelihoods. If only the generation of models is requested, then this step is skipped altogether. \item Output handling. 
For each object, save (a) the physical properties estimated from the likelihood, (b) the fluxes and the physical properties of the best-fitting model, and (c), optionally, additional information such as the spectrum of the best model, the $\chi^2$ of all the models, the probability distribution functions, and so on. If only the generation of models is requested, then only the computed fluxes are saved, along with the individual spectra. \end{enumerate} After \texttt{pcigale run} has completed, it is possible to generate a plot of the best model along with the observations as well as a range of plots related to the evaluation of the parameters with \texttt{pcigale-plots}. \section{Model creation modules\label{sec:SED-creation}} The physical processes at play in galaxies provide us with a natural path to build models and compute their physical properties. In \texttt{CIGALE}, the models are progressively computed by a series of independent modules called successively, each corresponding to a unique physical component or process. The typical sequence to build each model is the following: \begin{enumerate} \item Computation of the SFH of the galaxy. \item Computation of the stellar spectrum from the SFH and single stellar population models. \item Computation of the nebular emission (lines and continuum) from the Lyman continuum photons production. \item Computation of the attenuation of the stellar and nebular emission assuming an attenuation law; computation of the luminosity absorbed by the dust. \item Computation of dust emission in the mid-infrared (mid-IR) and far-IR based on an energy balance principle: the energy absorbed by the dust at short wavelengths, which has been computed in the previous step, is re--emitted at longer wavelengths. \item Computation of the emission of an active nucleus. \item Redshifting of the model and computation of the absorption by the intergalactic medium (IGM). 
\end{enumerate} In practice, the models are progressively computed by successively applying these different modules, each adding a different physical component (spectrum and associated physical parameters). For each model these individual spectral components and the combined spectrum are stored individually to ease the subsequent computation (e.g. to account for the differential reddening between younger and older stellar populations, we need to store these populations separately) and allow the user to easily retrieve the contribution from each physical component. For quantities that are more conveniently computed from the full rest-frame spectrum, in particular those that are directly measured observationally from the spectrum (e.g. line equivalent widths, UV slope $\beta$, colours, etc.), a special module can be added prior to redshifting to calculate them on the rest-frame spectrum. We describe here how we have modelled and parametrised each of these different physical components. As new modelling needs appear in the future, we will keep on improving these modules as well as adding new modules whenever necessary, a unique feature derived from the architecture and modularity of \texttt{CIGALE}. We should note that in addition to the modules we present here, \texttt{CIGALE} also provides unofficial modules that expand its capabilities even further, and we encourage users to develop new modules and to make them available to the \texttt{CIGALE} community. \subsection{Star formation history\label{ssec:modules-sfh}} As galaxies evolve secularly, accrete and expel gas, or interact with one another over cosmic times, their SFR is expected to vary considerably in non--trivial ways, from episodes of intense star formation to very quiescent phases. Constraining the SFH of galaxies is a tremendously difficult task, not only because these variations are so complex, but also because dramatically different SFHs can sometimes yield remarkably similar SEDs. 
This difficulty in constraining the SFH of galaxies from broadband data has led most studies to adopt relatively simple SFH prescriptions aimed at reproducing the broad variations of the SFR with time: decaying or rising exponentials, delayed, or \`a la \cite{sandage1986a} for instance. However, with increasingly detailed numerical simulations it is now also possible to adopt more realistic SFHs directly derived from such simulations or semi--analytic models \citep[e.g.][]{pacifici2012a,boquien2014a}. To encompass these two approaches, \texttt{CIGALE} handles both analytic SFHs depending on several parameters and arbitrary SFHs. In Fig.~\ref{fig:sfh} we present some SFHs obtained from these modules, using a set of parameters representative of their versatility. We note that only one type of SFH (e.g. exponential, delayed, etc.) can be used in a single run. This means that different runs are needed to compare different parametrisations. \begin{figure}[!htbp] \includegraphics[width=\columnwidth]{sfh.pdf} \caption{SFHs generated with the \texttt{sfh2exp}, \texttt{sfhdelayed}, and \texttt{sfhperiodic} modules. These represent the cases of two decreasing exponentials (blue), a single decreasing exponential (orange), one increasing exponential (green), a delayed SFH with different timescales (red and purple), a periodic rectangular SFH (brown), a periodic exponential SFH (pink), and the rotation velocity--dependent SFH of \cite{buat2008a} (grey). We point out the transitory phase for the periodic exponential as the successive decaying exponentials combine. The exact parameters are indicated in the box. All SFHs have been normalised to have formed 1~M$_\odot$ over 13 Gyr. 
The diversity of generated SFHs allows for an important flexibility in the modelling.\label{fig:sfh}} \end{figure} \subsubsection{Basic assumptions on the SFH} Even though star formation is often modelled as mathematically continuous, it is a fundamentally discrete process, with stars being stochastically born one at a time. Building spectra from the ages of individual stars would rapidly become computationally overwhelming (notwithstanding the fact that we do not know this sort of information beyond the Local Group); it is therefore reasonable to assume some level of discretisation of the SFH. In \texttt{CIGALE} we introduce two levels of discretisation. First, we adopt a sampling period of 1~Myr for the SFH. Considering the time $t$ and assuming the age of the galaxy $t_0$, the sampling grid starts at $t=0$~Myr and the last sample is at $t=t_0-1$~Myr. It is important to note that the SFR is computed at the beginning of an age bin, but the contribution of stars to the spectrum is computed at the end of that age bin. A sampling of 1~Myr is however too coarse to capture some brief but important stellar evolutionary phases. We therefore assume that in any given bin, star formation occurs in ten instantaneous episodes separated by 0.1 Myr. For instance if the SFH sampling indicates an SFR of 1 M$_\sun$ yr$^{-1}$ for a given bin, we distribute equally in time ten bursts of $10^5$~M$_\sun$. To limit the computational cost of this approach, the single stellar populations presented in Sect.~\ref{ssec:stellar-pops} are stored with a sampling over an age grid of 1~Myr but already precomputed assuming ten smaller bursts. Finally, each SFH is automatically normalised so that the total mass of stars formed from the onset of star formation to the last time step is always 1~M$_\odot$. This definition does not correspond to the stellar mass because it does not take into account the return fraction, which depends on the specifics of the stellar populations. 
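The 1~Myr sampling and unit--mass normalisation described above can be sketched as follows (a minimal illustration of the convention; the function and variable names are hypothetical, not \texttt{CIGALE} internals, and the ten sub--bursts per bin are omitted):

```python
import numpy as np

def build_normalised_sfh(sfr_func, age_myr):
    """Sample an SFH on a 1 Myr grid and normalise it to 1 Msun formed.

    sfr_func: callable returning the SFR (Msun/yr) at a time t in Myr.
    age_myr:  age of the galaxy t0 in Myr; the grid runs from 0 to t0-1.
    """
    t = np.arange(age_myr)                       # 0, 1, ..., t0-1 Myr
    sfr = np.array([sfr_func(ti) for ti in t], dtype=float)
    # mass formed in each 1 Myr bin is SFR [Msun/yr] * 1e6 yr;
    # divide by the total so that the SFH forms exactly 1 Msun
    mass_formed = np.sum(sfr) * 1e6
    return t, sfr / mass_formed
```

The actual scaling of each model to the observed fluxes (Sect.~\ref{ssec:analysis}) then amounts to multiplying this unit--mass SFH by a single factor.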
We see in Sect.~\ref{ssec:analysis} how the models are scaled to the observations, which in effect is equivalent to scaling the SFH to the proper level for each observation. \subsubsection{\texttt{sfh2exp}, \texttt{sfhdelayed}, and \texttt{sfhperiodic} modules} We present here three modules defining analytic SFHs covering three different general cases: SFHs defined by single or double exponentials (\texttt{sfh2exp}), delayed SFHs with an optional exponential burst (\texttt{sfhdelayed}), and periodic SFHs (\texttt{sfhperiodic}). \paragraph{\texttt{sfh2exp}} One of the simplest ways to model the SFH of a galaxy is to describe it with one or two decaying exponentials. Conceptually, the first exponential models the long-term star formation that has formed the bulk of the stellar mass, whereas the second one models the most recent burst of star formation. The combination can be expressed in the following way: \begin{equation} \text{SFR}\left(t\right)\propto \begin{cases} \exp\left(-t/\tau_0\right)&\text{if } t < t_0-t_1\\ \exp\left(-t/\tau_0\right)+k\times\exp\left(-t/\tau_1\right)&\text{if } t \ge t_0-t_1, \end{cases} \end{equation} with $t_1$ being the age of the onset of the second episode of star formation relative to $t_0$ (i.e. if the galaxy started forming stars 13~Gyr ago and had a burst of star formation 100~Myr ago, $t_0=13$~Gyr and $t_1=100$~Myr), $\tau_0$ and $\tau_1$ the e--folding times of the populations modelling the older stellar populations and the most recent episode of star formation, and $k$ the relative amplitude of the second exponential, which is computed from the burst strength $f$ defined as the fraction of stars formed in the second burst relative to the total mass of stars ever formed. 
As (a) the SFH is sampled with a period of 1~Myr, (b) we assume a constant SFR between two samples, and (c) by convention we assign the first timestep a time of 0~Myr, $f$ can be expressed in the form of discrete integrals: \begin{equation} f = \frac{k\sum_{t=t_0-t_1-1}^{t_0-1}\exp\left(-t/\tau_1\right)}{\sum_{t=0}^{t_0-1}\exp\left(-t/\tau_0\right)+k\sum_{t=t_0-t_1-1}^{t_0-1}\exp\left(-t/\tau_1\right)},\label{eq:sf2exp-f} \end{equation} which means that $k$ can be easily computed from the following relation: \begin{equation} k = \frac{f}{1-f}\times\frac{\sum_{t=0}^{t_0-1}\exp\left(-t/\tau_0\right)}{\sum_{t=t_0-t_1-1}^{t_0-1}\exp\left(-t/\tau_1\right)}.\label{eq:sf2exp-k} \end{equation} Such a formulation, despite its apparent simplicity, is very versatile: \begin{itemize} \item Very large values of $\tau$ compared to $t_0$ can be used to model a nearly constant SFR. \item Rising exponentials are obtained setting $\tau$ to a negative value. \item The classical case of a single exponential can be obtained setting $f=0$. \end{itemize} This allows for an efficient modelling of elliptical galaxies (case of a single exponential) or of galaxies having had a recent episode of star formation for instance. However, a clear weakness of this module is that it is not adapted for galaxies which have had a recent drop in their SFR, such as galaxies being quenched by infall onto a cluster, or galaxies over a large range in redshift for which the amplitude of the SFH successively increases and decreases. \paragraph{\texttt{sfhdelayed}} The sudden onset of star formation and burst episodes in a double--exponential parametrisation may be too extreme in many practical cases where we expect the variation of the SFH to be smoother. 
An increasingly popular way to model the SFH of galaxies is the so--called ``delayed'' SFH: \begin{equation} \text{SFR}\left(t\right)\propto \frac{t}{\tau^2}\times\exp\left(-t/\tau\right) \text{for } 0\le t \le t_o,\label{eq:sfhdelayed} \end{equation} with $t_o$ the age of the onset of star formation, and $\tau$ the time at which the SFR peaks. Such a functional form has the advantage of providing a nearly linear increase of the SFR from the onset of star formation rather than an abrupt one in the case of \texttt{sfh2exp}. After peaking at $t=\tau$, it smoothly decreases. To allow for more flexibility, the module also allows for an exponential burst representing the latest episode of star formation (Ma\l{}ek et al., in press). The burst strength is defined following the same concept as for \texttt{sfh2exp}, replacing the exponential of the older stellar populations (indices 0) in Eqs.~\ref{eq:sf2exp-f} and \ref{eq:sf2exp-k} with the delayed SFH of Eq.~\ref{eq:sfhdelayed}. While a delayed SFH allows us to efficiently model early--type (for small $\tau$) and late--type (for large $\tau$) galaxies, one obvious limitation of this functional form is that it does not allow for a recent quenching of the SFR. To address this issue, \cite{ciesla2017a} expanded \texttt{sfhdelayed}, allowing for an instantaneous recent variation of the SFR, upward or downward, and setting it to a constant until the last time step. This module is provided as \texttt{sfhdelayedbq}. This approach was used to successfully model the broad range of KINGFISH galaxies in Hunt et al. (submitted). \paragraph{\texttt{sfh\_buat08}} Rather than relying a priori on pure analytical functions, an alternative approach has been to tie the SFH to an observed physical quantity of the galaxy. This was done for example by \cite{boissier2003a,buat2008a} who related the SFH to the rotational velocity of the galaxy. 
Their SFH is parametrised as: \begin{equation} \text{SFR}\left(t\right)\propto 10^{a+b\times\log(t)+c\times t^{1/2}}, \end{equation} with $t$ ranging from 1 to $t_0$ in units of gigayears, and $a$, $b$, and $c$ being constants that depend on the rotational velocity of the galaxy. We adopt an extended version of the constants presented in Table 2 of \cite{buat2008a}\footnote{Private communication from Samuel Boissier.}. \paragraph{\texttt{sfhperiodic}} This module provides periodic SFHs. The star formation episodes can be of three forms: exponential, delayed, or rectangular. There are four input parameters: 1. the shape of the star formation episodes, 2. $\delta$, the elapsed time between the beginning of each episode of star formation, 3. $\tau$, the duration of each star formation episode, and 4. $t_o$, the age of the onset of the first star formation episode (i.e. the age of the oldest stars). \subsubsection{\texttt{sfhfromfile} module} To build models with arbitrarily complex SFHs, and to combine \texttt{CIGALE} with hydrodynamical simulations or semi--analytic models, the \texttt{sfhfromfile} module allows one to read and process SFHs from files. The first column of the file contains the age, starting from 0 and with a step of 1~Myr, and each subsequent column contains an SFH, with the SFR in M$_\odot$~yr$^{-1}$ at each 1~Myr step. The file can be provided indifferently in \texttt{ASCII}, \texttt{FITS}, or \texttt{VO--table} formats. To use these SFHs, besides the file name, the module requires the indices of the columns to consider, and the ages in millions of years at which the model should be computed. This allows one to compute the SED of a given SFH at different time steps, for instance to investigate its variation with respect to time \citep{boquien2014a}. Optionally, it is also possible to normalise the SFH so that the total stellar mass formed is 1~M$_\odot$ in a similar way as for the \texttt{sfh2exp}, \texttt{sfhdelayed}, and \texttt{sfhperiodic} modules. 
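The analytic forms above can be sketched in a few lines: the first function implements the discrete sums of Eq.~\ref{eq:sf2exp-k} for the \texttt{sfh2exp} burst amplitude $k$, and the second the delayed SFH of Eq.~\ref{eq:sfhdelayed} (unnormalised; the helper names are hypothetical, not \texttt{CIGALE} internals):

```python
import numpy as np

def burst_amplitude(f, t0, t1, tau0, tau1):
    """Amplitude k of the second exponential in sfh2exp (Eq. sf2exp-k).

    f:         burst strength (fraction of stars formed in the burst).
    t0, t1:    ages in Myr (galaxy age and burst onset lookback time).
    tau0, tau1: e-folding times in Myr of the two exponentials.
    """
    t_old = np.arange(t0)                    # 0 ... t0-1 Myr, 1 Myr steps
    sum_old = np.sum(np.exp(-t_old / tau0))
    t_burst = np.arange(t0 - t1 - 1, t0)     # t0-t1-1 ... t0-1 Myr
    sum_burst = np.sum(np.exp(-t_burst / tau1))
    return f / (1.0 - f) * sum_old / sum_burst

def sfr_delayed(t, tau):
    """Delayed SFH: SFR(t) proportional to (t/tau^2) exp(-t/tau), peaking at t = tau."""
    return t / tau**2 * np.exp(-t / tau)
```

Plugging $k$ back into Eq.~\ref{eq:sf2exp-f} recovers the requested burst strength $f$ by construction.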
\subsection{Stellar populations\label{ssec:stellar-pops}} With the SFH having been computed with one of the modules described in Sect.~\ref{ssec:modules-sfh}, the next step is to compute the intrinsic stellar spectrum. To do so, in addition to the SFH, we need to adopt a library of single stellar populations (SSPs). We rely on two popular libraries of SSPs, that of \cite{bruzual2003a} (module \texttt{bc03}) and that of \cite{maraston2005a} (module \texttt{m2005}). Each SSP library is available for a broad range of metallicities (0.0001, 0.0004, 0.004, 0.008, 0.02, and 0.05 for \cite{bruzual2003a}, and 0.001, 0.01, 0.02, and 0.04 for \cite{maraston2005a}) and for two initial mass functions (IMFs) (\cite{salpeter1955a} and \cite{chabrier2003b} for \cite{bruzual2003a}, and \cite{salpeter1955a} and \cite{kroupa2001a} for \cite{maraston2005a}). The \cite{bruzual2003a} SSPs come in low and high resolution versions, both of which are provided with \texttt{CIGALE}. By default the low-resolution models are used as they are generally sufficient for use with broadband data. An option is provided to build the database with high-resolution models, which are useful for instance when dealing with narrow features such as absorption or emission lines. To compute the spectrum of the composite stellar populations, we calculate the dot product of the SFH with the grid containing the evolution of the spectrum of an SSP with steps of 1 Myr\footnote{Taking $\mathrm{SFH(t)}$ to be the SFR at age t, and $\mathrm{SSP(\lambda, t)}$ to be the spectrum of a single population at wavelength $\lambda$ and of age t, as the age grid is regular and identical, the object spectrum $\mathrm{S(\lambda)}$ is $\mathrm{\sum_tSSP(\lambda, t)\times SFH(t)}$.}. In Fig.~\ref{fig:comp-bc03-m05} we show the results of this computation using the \texttt{bc03} and \texttt{m2005} modules. 
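The dot product described in the footnote amounts to a matrix-vector multiplication of the SSP grid with the SFH vector; a minimal sketch with stand-in random arrays (a real computation would use tabulated SSP spectra on a regular 1~Myr age grid matching the SFH):

```python
import numpy as np

# Stand-in grids; real runs would load tabulated SSP spectra.
rng = np.random.default_rng(0)
n_wl, n_ages = 500, 13000
ssp = rng.random((n_wl, n_ages))   # SSP(lambda, t) spectra
sfh = rng.random(n_ages)           # SFR(t) on the same 1 Myr age grid

# S(lambda) = sum_t SSP(lambda, t) * SFH(t)
spectrum = ssp @ sfh
```

Because both grids share the same regular age sampling, the whole composite spectrum is obtained in a single matrix-vector product.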
\begin{figure}[!htbp] \includegraphics[width=\columnwidth]{comp-bc03-m05.pdf} \caption{Spectra of the composite stellar populations for the \cite{bruzual2003a} (blue) and \cite{maraston2005a} models (orange). Both models have an identical SFH generated with the \texttt{sfh2exp} module ($t_0=13000$~Myr, $\tau_0=7000$~Myr, $t_1=250$~Myr, $\tau_1=50$~Myr, $f=0.01$), a \cite{salpeter1955a} IMF, and a metallicity $Z=0.02$. While the \cite{maraston2005a} models have been developed to handle the contribution of thermally pulsating asymptotic giant branch stars, there are also clear differences at all wavelengths. Such differences are strongly dependent on the actual SFH and the differences seen in this plot are not necessarily representative of what would be obtained with another SFH.\label{fig:comp-bc03-m05}} \end{figure} At this stage the stellar populations are dust--free. We need, however, to anticipate that there can be a differential reddening between young stellar populations embedded in their dust clouds and older populations that have escaped and are therefore less reddened \citep[e.g.][]{charlot2000a}. To account for this, we compute and store separately the spectra of old and young stars so they can be attenuated independently in a downstream module (Sect.~\ref{ssec:modules-attenuation}). Following the prescription of \cite{charlot2000a}, the default age of separation between these populations is 10~Myr, but it can be configured freely. \subsection{Nebular emission} The most massive stars emit a significant fraction of their light in the Lyman continuum. These high-energy photons ionise the surrounding gas, which re--emits the energy in the form of a series of emission lines and a continuum (free--free, free--bound, and two--photon) that extends far into the radio regime. This emission is important as it provides us with excellent probes into the most recent star formation through hydrogen lines and radio continuum as well as the gas metallicity from metal lines.
While the nebular emission generally contributes little to the broadband fluxes of quiescent star--forming galaxies, and is therefore often ignored, this is not the case at the local scale when considering starbursting dwarf galaxies and very young star--forming regions \citep[e.g.][]{anders2003a,boquien2010c} as well as some high--redshift galaxies \citep[e.g.][]{stark2013a,debarros2014a}. This has a direct impact on SED modelling and the nebular emission needs to be carefully taken into account. To model the nebular emission in \texttt{CIGALE} we have adopted nebular templates based on \cite{inoue2011a}, which have been generated using \texttt{CLOUDY 13.01} \citep{ferland1998a,ferland2013a}. They predict the relative intensities of 124 lines from H\,\textsc{ii} regions from He\,\textsc{ii} at 30.38~nm to [N\,\textsc{ii}] at 205.4~$\mu$m. These templates are parametrised according to the ionisation parameter $U$ and the metallicity $Z$, which is assumed to be the same as the stellar one. The electron density is assumed to be constant and is set to 100~cm$^{-3}$. Important improvements compared to the original templates of \cite{inoue2011a} include a refinement of the sampling in $\log U$ to steps of 0.1~dex, the extension down to $\log U=-4$, and changes in the abundances. The abundance set is based on the Orion nebula. The helium and nitrogen abundances are scaled by metallicity following \cite{nagao2011a}. This is motivated by the fact that helium has a primordial abundance floor and the nitrogen production is dominated by the secondary nucleosynthesis process through the CNO cycle at high metallicity. In practice the computation of the nebular emission in \texttt{CIGALE} follows several steps. First of all, after having selected a given template (based on $U$ and $Z$), which gives line luminosities normalised to the ionising photon luminosity, the spectrum of emission lines is computed.
Each line has a Gaussian shape with a user--defined line width to take gas motion into account, which can be especially important for narrow--band filters and high--redshift objects due to the apparent line broadening with redshift in the observed frame. This gives the normalised nebular emission line spectrum, which could be rescaled to the appropriate level by multiplying by the ionising photon luminosity computed along with the composite stellar population. However, this would ignore the fact that not all Lyman-continuum photons ionise the surrounding gas. Two main processes can affect the ionisation rate of the surrounding gas. First of all, a fraction of the Lyman continuum can simply escape from the galaxy. Even though the escape fraction is generally low in the nearby universe, it may reach much higher values at high redshift to reionise the universe \citep[e.g.][]{inoue2006a,hayes2011a}. The other process that can prevent Lyman continuum photons from ionising the surrounding gas is absorption by dust \citep{inoue2001a,inoue2001b}. In this case it contributes to the general dust heating and is accounted for in the dust emission models presented in Sect.~\ref{ssec:dust-emission}. To take these two processes into account, we downscale the nebular line spectrum by the following factor from \cite{inoue2011a}: \begin{equation} k=\frac{1-f_\text{esc}-f_\text{dust}}{1+\alpha_1\left(T_e\right)/\alpha_B\left(T_e\right)\times\left(f_\text{esc}+f_\text{dust}\right)},\label{eqn:correct-fesc-fdust} \end{equation} with $\alpha_B$ the case B recombination rate in m$^3$~s$^{-1}$, $\alpha_1=\alpha_A-\alpha_B$ the recombination rate to the ground level, $T_e$ the electron temperature in K, $f_\text{esc}$ the Lyman continuum escape fraction, and $f_\text{dust}$ the fraction of the Lyman continuum absorbed by dust before ionisation. Numerically, for $T_e=10^4$~K, we take $\alpha_B=2.58\times10^{-19}$~m$^3$~s$^{-1}$ and $\alpha_1=1.54\times10^{-19}$~m$^3$~s$^{-1}$ \citep{ferland1980a}.
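Eq.~\ref{eqn:correct-fesc-fdust} is straightforward to evaluate; a sketch using the $T_e=10^4$~K rates quoted above:

```python
def nebular_scale(f_esc, f_dust,
                  alpha_b=2.58e-19,   # case B recombination rate [m^3/s]
                  alpha_1=1.54e-19):  # alpha_A - alpha_B [m^3/s]
    """Downscaling factor k applied to the nebular line spectrum."""
    f = f_esc + f_dust
    return (1.0 - f) / (1.0 + alpha_1 / alpha_b * f)
```

With $f_\text{esc}=f_\text{dust}=0$ the nebular spectrum is left unscaled ($k=1$), and $k$ decreases as photons escape the galaxy or are absorbed by dust.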
The nebular continuum is computed following the prescription of \cite{inoue2010a} with the same parameters as the emission line templates and in a similar fashion, including normalisation to the Lyman continuum photon luminosity and correction for the escape fraction and absorption by dust. Only the hydrogen continuum is taken into account as the helium and metal continua are weak and negligible. It is important to note that the nebular emission module does not consider the emission of metal and CO lines in photo--dissociation regions and molecular clouds. As a consequence, [C\,\textsc{ii}] at 158~$\mu$m or [O\,\textsc{i}] 63~$\mu$m/145~$\mu$m are seriously underestimated for galaxies with photo--dissociation regions. This will be considered for a future version of the code. We present a model of an SED including nebular emission in Fig.~\ref{fig:nebular}. \begin{figure*}[!htbp] \includegraphics[width=\columnwidth]{nebular.pdf} \includegraphics[width=\columnwidth]{nebular_zoom.pdf} \caption{\textit{Left}: Nebular (blue) and stellar (black) FUV to NIR spectra. In total 124 lines from H\,\textsc{ii} regions are taken into account. The nebular continuum takes into account free--free, free--bound, and two--photon processes. The lines are modelled from an improved version of the \texttt{CLOUDY} templates computed in \cite{inoue2011a}. Here the density is set to 100~cm$^{-3}$, the metallicity $Z=0.02$, and the ionisation parameter $\log U=-2$. Both $f_\text{esc}$ and $f_\text{dust}$ are set to 0. The stellar emission is computed with the \texttt{sfh2exp} ($t_1=13000$~Myr, $\tau_1=7000$~Myr, $t_2=25$~Myr, $\tau_2=50$~Myr, and $f=0.1$) and \texttt{bc03} (\cite{salpeter1955a} IMF and $Z=0.02$) modules. \textit{Right}: Zoom in the 460~nm to 740~nm range. At this scale we can see the line width due to gas motions. Here the line width is set to 300~km~s$^{-1}$.
\label{fig:nebular}} \end{figure*} \subsection{Attenuation laws\label{ssec:modules-attenuation}} Galaxies contain dust, and this dust is very efficient at absorbing short-wavelength radiation. The energy absorbed from the UV to the NIR is then re--emitted in the mid-- and far--IR. This energy balance principle lies at the core of \texttt{CIGALE}. It is therefore important that the attenuation is properly modelled. First, we have to distinguish between an extinction curve, which is only dependent on the dust grain mix (composition, size distribution, etc.), and an attenuation curve, which also depends on the geometry. Except for a handful of very nearby objects such as the Magellanic Clouds, observers only see the effect of attenuation in galaxies. Because attenuation laws change with redshift and from galaxy to galaxy, \texttt{CIGALE} needs to be able to cover a broad range of such laws both in terms of shape and in terms of normalisation. The most direct approach would be to consider a mix of dust grains and a geometry. However, as we see below, we use different sets of templates to model dust emission. Each template would therefore need to have a specific extinction curve corresponding to the assumed mix. This would be difficult for empirical templates that do not assume specific grain properties. The assumption of a geometry is also a problem as \texttt{CIGALE} can be used on vastly different objects, from small regions in galaxies (down to sub--kpc scale) to large galaxies at all redshifts. In any case, observations of the Milky Way show that the relative distribution of dust and stars can be much more complex than the simple geometries that are often assumed (slab, sandwich, shell, etc.) and would therefore require a full radiative transfer with a realistic geometry \citep[e.g.][]{delooze2014a}. An indirect but much faster approach is to assume attenuation laws.
Numerous studies have focussed on determining attenuation laws in galaxies, finding a remarkable diversity \citep[e.g.][Buat et al. (in press), Decleir et al. (submitted), and many others]{wild2011a,reddy2015a,reddy2016a,lofaro2017a,salim2018a}. This means that these attenuation laws must be flexible so that they can adapt to the broad diversity of observed curves. In this endeavour, we have pursued two ways of modelling attenuation curves in galaxies: the implementation of the \cite{charlot2000a} model, and flexible laws inspired by the starburst curve \citep{calzetti2000a}. \subsubsection{The \texttt{dustatt\_modified\_CF00} module} As a first approach to addressing this problem, we implemented the \cite{charlot2000a} model through the \texttt{dustatt\_modified\_CF00} module. The key idea behind this model is the realisation that not only do young stars still embedded in their birth cloud suffer additional attenuation compared to stars that have broken out and escaped into the ISM, but also that the attenuation curves associated with the birth cloud and the ISM must be different. In practice, this is modelled by assuming two different power--law attenuation curves of the form $\text{A}\left(\lambda\right)\propto\lambda^\delta$: one for the birth cloud with a default slope of $\delta_{BC}=-1.3$, and one for the ISM with a default slope of $\delta_{ISM}=-0.7$. Because radiation from young stars has to travel through both the birth cloud and the ISM to escape the galaxy, the spectra of stars younger than 10~Myr are attenuated by both the birth cloud and ISM curves. Stars older than 10~Myr are only attenuated by the ISM curve. Following Sect.~\ref{ssec:stellar-pops}, this age can be configured freely through the stellar populations module. In each case the nebular emission is attenuated following the same law as the stars giving rise to it.
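A minimal numerical sketch of this double power-law scheme; the $A_V$ values below are arbitrary illustrations, while the slopes are the default $\delta_{BC}=-1.3$ and $\delta_{ISM}=-0.7$:

```python
import numpy as np

def attenuate(f_lambda, wl_nm, a_v, delta):
    """Dim a spectrum by A(lambda) = A_V * (lambda / 550 nm)**delta mag."""
    a_lambda = a_v * (np.asarray(wl_nm, float) / 550.0) ** delta
    return f_lambda * 10.0 ** (-0.4 * a_lambda)

wl = np.array([150.0, 550.0, 2200.0])  # nm
# Young (< 10 Myr) stars cross both the birth cloud and the ISM;
# older stars only cross the ISM.
young = attenuate(np.ones(3), wl, a_v=1.0, delta=-1.3)  # birth cloud
young = attenuate(young, wl, a_v=0.3, delta=-0.7)       # then the ISM
old = attenuate(np.ones(3), wl, a_v=0.3, delta=-0.7)    # ISM only
```

The negative slopes make the attenuation strongest in the UV, so the young population is dimmed much more at 150~nm than at 2.2~$\mu$m.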
The V--band attenuations of the two curves are linked through the relation: \begin{equation} \mu=\frac{\mathrm{A_V^{ISM}}}{\mathrm{A_V^{BC}+A_V^{ISM}}}, \end{equation} or, in other words, the ratio of the total attenuation undergone by stars older than 10~Myr to that undergone by stars younger than 10~Myr. This module is flexible beyond a strict implementation of the \cite{charlot2000a} model in the sense that $\mathrm{A_V^{ISM}}$, $\mu$, $\delta_{BC}$, and $\delta_{ISM}$ are all input parameters. It should be noted that \texttt{CIGALE} also provides the module \texttt{dustatt\_powerlaw}, which should not be confused with this module as it departs in several ways from the \cite{charlot2000a} model: it uses a single power law for both young and old stars, only with different absolute attenuations, and the attenuation is set as the total attenuation for each component. \subsubsection{The \texttt{dustatt\_modified\_starburst} module} A more empirical approach is to use a well-known curve as a baseline. Subsequently, this curve can be parametrised to make it more generic and allow for some flexibility, for example in terms of slope or to account for the presence of a bump around 220~nm. We have also adopted this solution with the \texttt{dustatt\_modified\_starburst} module. It is based on the \cite{calzetti2000a} starburst attenuation curve, extended with the \cite{leitherer2002a} curve between the Lyman break and 150~nm. Its slope can be modified by multiplying it by a power law function of slope $\delta$ similar to the one described above and a UV bump can be added. This bump is modelled as a Lorentzian--like Drude profile which is described by three parameters: its central wavelength, its full width at half maximum (FWHM), and its amplitude \citep[see Eq.~3 from][]{noll2009a}.
The overall attenuation can be expressed as: \begin{equation} k_{\lambda} = \left(k_\lambda^{starburst}\times\left(\lambda/550~\textrm{nm}\right)^\delta+D_\lambda\right)\times\frac{\mathrm{E(B-V)_{\delta=0}}}{\mathrm{E(B-V)_{\delta}}}, \end{equation} with $D_\lambda$ the Drude profile, and the last term renormalising the curve so that E(B-V) remains equal to the input E(B-V) when $\delta\neq0$. We show a set of stellar attenuation curves representative of the flexibility of our approach in Fig.~\ref{fig:attenuation-curves}. \begin{figure}[!htbp] \includegraphics[width=\columnwidth]{attenuation.pdf} \caption{Three stellar attenuation curves generated by the \texttt{dustatt\_modified\_starburst} module of \texttt{CIGALE}. Based on the \cite{calzetti2000a} attenuation law, the red, green, and blue correspond to a power--law modification with indices $\delta$ of $0.00$, $-0.25$, and $-0.50$. In addition, a 220 nm bump has been added with three different amplitudes: 0 (dotted), 1.5 (solid), and 3 (dashed). The difference in the normalisation comes from the fact that $E(B-V)$ is kept constant after multiplying the starburst law (case $\delta=0$) by a power law.\label{fig:attenuation-curves}} \end{figure} Formally, the starburst law is defined for the continuum only, the emission lines being dimmed with a Milky Way extinction law. Here we have adopted a slightly more flexible approach, adopting the Milky Way curve of \cite{cardelli1989a} with the \cite{odonnell1994a} update, as well as the Small and Large Magellanic Cloud extinction curves of \cite{pei1992a}. The value of $R_V$ can be modified. The overall normalisation of the curves affecting the stars and the lines is determined according to the reddening of the emission lines $\mathrm{E(B-V)_{lines}}$, with a simple reduction factor $f$ between the two curves: \begin{equation} f=\frac{\mathrm{E(B-V)_{continuum}}}{\mathrm{E(B-V)_{lines}}}, \end{equation} with lines being more dimmed by dust than stars. 
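The power-law steepening and Drude bump described above can be sketched as follows; the bump central wavelength (217.5~nm) and FWHM (35~nm) are typical literature values used for illustration, and the $E(B-V)$ renormalisation factor of the equation is omitted for brevity:

```python
import numpy as np

def drude(wl_nm, wl0=217.5, gamma=35.0, e_b=1.5):
    """Lorentzian-like Drude profile with central wavelength wl0,
    FWHM gamma, and amplitude e_b; it peaks at wl0 with value e_b."""
    wl = np.asarray(wl_nm, float)
    return e_b * wl**2 * gamma**2 / ((wl**2 - wl0**2) ** 2 + wl**2 * gamma**2)

def k_modified(wl_nm, k_starburst, delta, e_b):
    """Starburst curve times (wl / 550 nm)**delta, plus a UV bump."""
    wl = np.asarray(wl_nm, float)
    return k_starburst * (wl / 550.0) ** delta + drude(wl, e_b=e_b)
```

A negative $\delta$ steepens the curve in the UV relative to the starburst baseline, while $e_b$ controls the strength of the 220~nm feature.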
\subsection{Dust emission\label{ssec:dust-emission}} As dust absorbs stellar photons from the UV to the NIR, this energy is re--emitted at longer wavelengths, essentially in the mid-- and far--IR domains. In general, dust emission can be split into three broad components. In the mid--IR, around 8~$\mu$m, the emission is dominated by polycyclic aromatic hydrocarbon (PAH) bands. At longer wavelengths, the emission is progressively taken over by very small, warm grains, which tend to be stochastically heated at weak and moderate radiation field intensities, with equilibrium emission progressively dominating at higher intensities. Beyond $\sim100~\mu$m, the emission is increasingly due to big, relatively cold grains. The different heating mechanisms of these different dust species, their composition, the metallicity, and the intensity and shape of the incident radiation field, all have an impact on the dust SED. The modelling of dust emission is a very active domain of research, building on several generations of increasingly powerful IR instruments, from IRAS to \textit{Herschel}. For \texttt{CIGALE}, we consider three different sets of models: the \cite{dale2014a} empirical templates, the \cite{draine2007a} models \cite[including the updates of][]{draine2014a}, and the \cite{casey2012a} analytic model. The modules are described hereafter and some examples of dust SED are shown in Fig.~\ref{fig:dust}. \begin{figure*}[!htbp] \includegraphics[width=\columnwidth]{dh2014.pdf} \includegraphics[width=\columnwidth]{dl2007.pdf}\\ \includegraphics[width=\columnwidth]{dl2014.pdf} \includegraphics[width=\columnwidth]{casey2012.pdf} \caption{Examples of the SED produced by the four dust modules of \texttt{CIGALE}: \texttt{dale2014} (top--left), \texttt{dl2007} (top--right), \texttt{dl2014} (bottom--left), and \texttt{casey2012} (bottom--right). Each colour indicates a different set of parameters shown in the bottom--right corner.
The solid lines represent the total SED, summing up all components. For the \texttt{dl2007} and \texttt{dl2014} modules, the dashed line corresponds to the star--forming regions and the dotted line to the diffuse emission. Finally, for the \texttt{casey2012} module, the dashed line corresponds to the modified black body whereas the dotted line corresponds to the power law.\label{fig:dust}} \end{figure*} \subsubsection{\texttt{dale2014} module} The dust templates of \cite{dale2014a} are based on a sample of nearby star--forming galaxies originally presented in \cite{dale2002a}. The latest update refines the PAH emission and also adds an optional AGN component as seen in Sect.~\ref{ssec:agn}. Aiming at simplicity, the star--forming component is parametrised by a single parameter $\alpha$ defined as: $dM_d\left(U\right)\propto U^{-\alpha}dU$, with $M_d$ being the dust mass, and $U$ the radiation field intensity. The $\alpha$ parameter is itself tightly linked to the 60--to--100 $\mu$m colour. The main strength of this model is its simplicity, with only one easy-to-interpret parameter. However, the PAH emission relative to the total infrared shows a limited variation with respect to $\alpha$. This can be an issue in particular in metal--poor galaxies which are known to have only little PAH emission \citep[e.g.][]{engelbracht2005a}. \subsubsection{\texttt{dl2007} and \texttt{dl2014} modules} Presented in \cite{draine2007a}, these models are based on a dust mixture of amorphous silicate and graphite grains, and PAH. One of the key features of the \cite{draine2007a} templates is the separation of dust emission into two components. The first one models the diffuse dust emission heated by the general stellar population. In this context, the dust is illuminated with a single radiation field $U_{min}$. The second one models dust tightly linked to star-forming regions. 
In that case the dust is illuminated with a variable radiation field ranging from $U_{min}$ to $U_{max}$ following a power law of index $\alpha$ \citep[see Eq. 23 of][]{draine2007a}. By default it is set to a fixed value of $\alpha=-2$. The fraction of the dust mass linked to the star--forming regions (respectively the diffuse emission) is $\gamma$ (respectively $1-\gamma$). The last parameter of these models is $q_{PAH}$, the mass fraction of the PAH, which is common for the two components. These components are kept separate in \texttt{CIGALE} to allow for their individual inspection. In recent years, this model has been refined further. Because the parameters are not identical, a different module is available to use the updated models: \texttt{dl2014}. The main differences brought by these new models are the following: 1) an expansion of the range of radiation field intensities and PAH mass fractions; 2) the power--law index $\alpha$ is now a free parameter; 3) $U_{max}$ has been set to $10^7$; 4) a change in the treatment of graphite; 5) the dust masses have been renormalised \citep{draine2014a}. One of the main strengths of these models is that they are very flexible. They can account for very different physical conditions with a variety of radiation fields and a variable PAH emission. However, this flexibility comes at the cost of a much larger parameter space to explore compared to the \cite{dale2014a} templates and is therefore more expensive in terms of processing power and memory. \subsubsection{\texttt{casey2012} module} The \texttt{casey2012} module implements the analytic model of \cite{casey2012a}. Dust emission is modelled with two components: a single temperature modified black body in the FIR ``representing the reprocessed starburst emission in the whole galaxy'' and a power law in the mid--IR ``which approximates hot--dust emission from AGN heating or clumpy, hot starbursting regions'' \citep{casey2012a}.
In practice, the module depends on three parameters: the temperature of the dust, the emissivity index of the dust, and the mid--IR power--law index. To distinguish the two components and easily assess their relative contributions, \texttt{CIGALE} stores them separately in the SED. While less physically motivated than the \cite{draine2007a} models and not based on observations as are the \cite{dale2014a} templates, the \cite{casey2012a} models are very flexible and can be easily used for local and high-redshift galaxies. The main limitations of this module, however, are that it includes no PAH emission and that the IR is computed from an energy balance and thus AGN heating is in effect not included. \subsection{Synchrotron radio emission} With the advent of the Square Kilometre Array, an avalanche of data in the centimetre regime is upon us. At such wavelengths, the emission is split between thermal processes related to the ionisation of the gas by massive stars and non--thermal processes related to the interaction of relativistic electrons from supernovae with the local magnetic field. While the \texttt{nebular} module naturally models the thermal radio continuum, it lacks synchrotron emission. The exact shape and intensity of the synchrotron spectrum depends on various parameters such as the strength of the magnetic field, the energy of the relativistic electrons propagating through it, and so on. Rather than attempting to model the synchrotron emission in such detail, the \texttt{synchrotron} module relies on the radio-IR correlation $q_{IR}$ of \cite{helou1985a}, a free power--law spectral slope $\alpha$, and on the assumption that at 21~cm the spectrum is largely dominated by non--thermal emission. In effect, knowing the IR emission, $q_{IR}$ directly provides the luminosity density at 21~cm. It is then a simple matter of computing a spectrum with the requested $\alpha$ and scaling it so that it matches the estimated luminosity.
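The scaling just described can be sketched as follows, using one common definition of the radio-IR correlation, $q_{IR}=\log_{10}[(L_{dust}/3.75\times10^{12}\,\mathrm{Hz})/L_\nu(1.4\,\mathrm{GHz})]$; the numerical values and sign convention for $\alpha$ are illustrative assumptions, not \texttt{CIGALE} defaults.

```python
def synchrotron_sed(nu_hz, l_dust_w, q_ir=2.58, alpha=0.8):
    """Power-law synchrotron spectrum anchored at 21 cm (1.4 GHz).

    q_ir fixes the 1.4 GHz luminosity density from the dust
    luminosity l_dust_w (in W); the slope alpha then sets the
    spectral shape at other frequencies.
    """
    l_nu_21cm = l_dust_w / 3.75e12 / 10.0 ** q_ir  # W/Hz at 1.4 GHz
    return l_nu_21cm * (nu_hz / 1.4e9) ** (-alpha)
```

Once the 21~cm anchor is known, the whole radio spectrum follows from the single slope parameter.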
On the other hand, radio data can help to estimate the IR emission if no other data are available in this range. \subsection{Active galactic nuclei\label{ssec:agn}} Along with star formation, AGNs are thought to have a dramatic impact on galaxy evolution. Yet, properly disentangling the emission of AGNs from star formation is not necessarily an easy task as they can both strongly emit in the UV, and a large fraction of this emission can be reprocessed by dust and re--emitted at longer wavelengths. Several options are available in \texttt{CIGALE} to model the presence of an AGN, from coarse but rapid methods to more detailed but slower methods. If only the IR is fitted, the \texttt{casey2012} module can be used, with the AGN being simply parametrised by the slope of the power law $\alpha$. If the AGN is a quasar, the \texttt{dale2014} module provides a simple template from the UV to the IR. The AGN is simply parametrised with the AGN fraction defined as the ratio of the AGN luminosity to the sum of the AGN and dust luminosities. While these methods are rapid and easy to use, they do not necessarily offer all the flexibility one may want to take into account the variety of AGN SEDs. \texttt{CIGALE} also provides the detailed AGN models of \cite{fritz2006a}. They explicitly take into account three components through a radiative transfer model: the primary source located in the torus, the scattered emission by dust, and the thermal dust emission. These models are described by a set of seven parameters: $r$ the ratio of the maximum to minimum radii of the dust torus, $\tau$ the optical depth at 9.7~$\mu$m, $\beta$ and $\gamma$ describing the dust density distribution ($\propto r^\beta e^{-\gamma\left|\cos\theta\right|}$) with $r$ the radius and $\theta$ the opening angle of the dust torus, $\psi$ the angle between the AGN axis and the line of sight, and the AGN fraction.
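As an illustration of the torus geometry, the dust density distribution quoted above, $\propto r^\beta e^{-\gamma|\cos\theta|}$, can be evaluated directly; the $\beta$ and $\gamma$ values below are arbitrary choices for the sketch.

```python
import numpy as np

def torus_density(r, theta, beta=-0.5, gamma=4.0):
    """Dust density proportional to r**beta * exp(-gamma * |cos(theta)|).

    r is the radius and theta the polar angle (theta = pi/2 lies in
    the equatorial plane of the torus); beta and gamma are
    illustrative values, not module defaults.
    """
    return r ** beta * np.exp(-gamma * np.abs(np.cos(theta)))
```

With $\gamma>0$ the density is highest in the equatorial plane and drops towards the poles, producing the torus-like geometry, while $\beta<0$ concentrates the dust towards small radii.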
We show some examples of the SED generated by the \texttt{fritz2006} module in Fig.~\ref{fig:agn}. \begin{figure}[!htbp] \includegraphics[width=\columnwidth]{AGN.pdf} \caption{Examples of the SED produced with the \texttt{fritz2006} module. Each colour indicates a different set of parameters shown in the bottom--left corner. The solid lines represent the total emission. The dotted lines represent the AGN accretion disk, the dashed lines the scattered component, and the dash--dot line the thermal emission.\label{fig:agn}} \end{figure} \subsection{Measuring physical properties from the rest--frame spectrum} The previous modules have allowed us to build a full FUV--to--radio rest--frame spectrum. While we have the physical properties (SFR, attenuation, stellar and dust mass, etc.) associated with each of the components, some observed quantities can only be accurately computed from the full spectrum. The aim of this module is to compute such quantities before redshifting the spectrum. The following quantities are measured: \begin{itemize} \item The UV slope $\beta$ is computed by fitting a straight line to the $F_\lambda$ spectrum in log--log space over the wavelength ranges defined in Table 2 of \cite{calzetti1994a}. \item The $D_n4000$ break index is computed from the ratio of the mean fluxes of the $F_\lambda$ spectrum from the rest--frame 400 to 410~nm on the red side and from 385 to 395~nm on the blue side \citep{balogh1999a}. \item IRX is computed as the log of the ratio of the dust to rest--frame far--UV (GALEX band) luminosities. \item Rest--frame equivalent widths are computed from the ratio of the integrals of the spectrum over a user--defined wavelength range, computed including and excluding nebular lines. \item Rest--frame luminosity densities and colours are computed by integrating the spectrum over any filter or pair of filters that are present in the filters database.
\end{itemize} \subsection{Redshifting} Finally, the last module called to build the SED is the \texttt{redshifting} module. It has two effects. First, it redshifts the spectrum and dims it, multiplying the wavelengths by $1+z$ and dividing the spectrum by $1+z$. The second effect of this module is to take into account the absorption of short wavelength radiation by the IGM. To do so, \texttt{CIGALE} applies the prescription of \cite{meiksin2006a}, which is shown in Fig.~\ref{fig:igm}. \begin{figure}[!htbp] \includegraphics[width=\columnwidth]{IGM.pdf} \caption{IGM transmission prescription of \cite{meiksin2006a} in the observed frame. The three redshifts are indicated in the top--left corner.\label{fig:igm}} \end{figure} We note that this module takes only one parameter, the redshift. However, if the redshifts to apply are not indicated in the configuration file, \texttt{CIGALE} then automatically builds a list of redshifts from the input flux file, rounding them to two decimal places by default in order to avoid computing models with many close redshifts. Nevertheless, the physical quantities are computed for the exact input redshift at full precision. \section{Analysis modules\label{sec:analysis-modules}} The modules presented in Sect.~\ref{sec:SED-creation} are the building blocks to compute individual models. But these building blocks alone do not provide us with the desired set of SEDs, nor do they provide us with estimates of the physical properties of the objects under consideration. Such tasks fall to the so--called analysis modules. Two analysis modules are available with \texttt{CIGALE}: \texttt{savefluxes} to generate a grid of models and save the outputs (fluxes, SFH, physical properties, etc.) and \texttt{pdf\_analysis} that not only generates a grid of models but also fits these models to observations to estimate various physical properties and save the outputs.
\subsection{Computing physical quantities} The computation of the physical quantities in \texttt{CIGALE} depends on their nature. Intrinsic intensive and extensive physical properties are computed in the different physical modules presented in Sect.~\ref{sec:SED-creation} where it makes most sense. Fluxes, however, depend on the observer and are measured after the computation of a given model. The technique differs depending on whether fluxes are computed in bandpasses or extracted from low-- or high--resolution spectra. \subsubsection{Bandpasses} The fluxes in bands are computed by integrating the model spectrum through the corresponding filter. The basic method is standard: \begin{equation} f_\lambda = \frac{\int_{\lambda_{low}}^{\lambda_{high}}F_\lambda\left(\lambda\right) \times T\left(\lambda\right) d\lambda}{\int_{\lambda_{low}}^{\lambda_{high}}T\left(\lambda\right) d\lambda},\label{eqn:flux-bandpass} \end{equation} with $f_\lambda$ being the flux per wavelength through a filter of transmission $T$ in units of energy\footnote{If the filter is provided in units of photons, it is converted and stored in units of energy in the database of \texttt{CIGALE}.} defined between wavelengths $\lambda_{low}$ and $\lambda_{high}$, and $F_\lambda$ the flux per wavelength of the model spectrum. To preserve the best sampling and not miss features, both the spectrum and the transmission filter are interpolated on one another's wavelengths. Note that the denominator does not depend on the model spectrum and is a constant. To avoid its computation, the transmission curves are normalised in such a way that $\int_{\lambda_{low}}^{\lambda_{high}}T\left(\lambda\right) d\lambda=1$. The observed fluxes are however provided in units of frequency.
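The integration of Eq.~\ref{eqn:flux-bandpass} can be sketched with a simple trapezoidal rule; the top-hat filter and flat spectrum below are purely illustrative.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (kept explicit for clarity)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def flux_in_band(wl, f_lambda, transmission):
    """Mean flux per wavelength through a filter of transmission T."""
    return trapz(f_lambda * transmission, wl) / trapz(transmission, wl)

wl = np.linspace(400e-9, 700e-9, 301)              # wavelengths [m]
# Top-hat filter covering roughly 500-600 nm (edges between samples).
tophat = ((wl >= 499.5e-9) & (wl <= 600.5e-9)).astype(float)
f = flux_in_band(wl, np.ones_like(wl), tophat)     # flat spectrum -> 1.0
```

For a flat spectrum the numerator equals the denominator, so the band flux is exactly the spectrum level.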
The flux per wavelength from the integral can easily be converted to flux per frequency through the use of the pivot wavelength $\lambda_{pivot}$, which is independent of the source \citep{koornneef1986a}: \begin{equation} f_\nu = \frac{\lambda_{pivot}^2}{c}f_\lambda, \end{equation} with $c$ the speed of light in vacuum and $\lambda_{pivot}$ defined as: \begin{equation} \lambda_{pivot} = \sqrt{\frac{\int_{\lambda_{low}}^{\lambda_{high}}Td\lambda}{\int_{\lambda_{low}}^{\lambda_{high}}T/\lambda^2d\lambda}}. \end{equation} In effect, as $\lambda_{pivot}^2/c$ is a constant, we also fold it into the normalisation of $T$, and we finally rescale the filter so that the integration of a spectrum in flux per wavelength provides a flux density in mJy. \subsubsection{Emission lines} Photometric observations in bandpasses are fairly straightforward to deal with, as can be seen above. Measuring emission lines is however more difficult. This is due in no small part to the fact that, while bandpasses encompass all of the emission within their range, the continuum has to be subtracted from emission lines. The presence of sometimes strong absorption lines under emission lines makes this a difficult problem, and the spectral resolution of the data leads to different strategies. \paragraph{Low resolution data} Low resolution spectra and narrow--band observations do not allow one to measure the underlying absorption lines. A commonly used technique is to measure the level of the continuum around the emission line and therefore only take into account the flux above the inferred continuum under the line. The natural downside is that this does not take into account the loss of flux due to the underlying absorption line. To model this we have implemented special filters that naturally subtract the continuum. Each filter is made of a positive part with the transmission set to 1 over the line.
Off the line it is set to a negative value such that $\int_{\lambda_{low}}^{\lambda_{high}}T\left(\lambda\right) d\lambda=0$. The flux $f$ is then directly obtained through $f=\int_{\lambda_{low}}^{\lambda_{high}}F_\lambda\left(\lambda\right)T\left(\lambda\right) d\lambda$ without further normalisation of the filter, as we compute a flux rather than a flux density. In effect, the integration of the spectrum on the negative part of the filter evaluates the flux provided by the continuum and allows us to subtract it from the flux computed by integrating over the positive part. Assuming that the continuum is evaluated well enough, the remainder is the flux from the line. For more flexibility, \texttt{CIGALE} provides these filters at different spectral resolutions for the main rest--frame optical lines. To compute the line fluxes from the spectra at any redshift, the filters are stretched in wavelength by a factor $1+z$. This stretching is necessary to ensure that each filter remains centred on the line with the same resolution as in the rest frame. \paragraph{High-resolution data} If the data are at high resolution or if the lines have been corrected for absorption, the previous technique would not provide reliable results. In such a case \texttt{CIGALE} computes the emission line fluxes directly from their theoretical emission, based on the nebular emission templates presented above.
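The zero--integral filters described for the low--resolution case can be sketched as follows (the filter shape, a unit top hat over the line flanked by two negative wings, is an illustrative choice; the actual filters in \texttt{CIGALE} are provided at several resolutions):

```python
import numpy as np

def line_filter(wl, wl_line, half_width, wing):
    """Zero-integral filter: T = 1 over the line, negative off the line
    so that the filter integrates to zero and the continuum cancels."""
    dist = np.abs(wl - wl_line)
    on = dist <= half_width
    off = (dist > half_width) & (dist <= half_width + wing)
    t = np.where(on, 1.0, 0.0)
    # Negative value chosen so that int T dlambda = 0
    t[off] = -np.trapz(on.astype(float), wl) / np.trapz(off.astype(float), wl)
    return t

wl = np.linspace(640.0, 680.0, 4001)           # nm, around H-alpha
continuum = 1.0 + 0.002 * (wl - 660.0)         # slowly varying continuum
line = 5.0 / (0.5 * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((wl - 656.3) / 0.5)**2)
t = line_filter(wl, 656.3, 2.0, 2.0)           # at z > 0, stretch wl by (1 + z)
f_line = np.trapz((continuum + line) * t, wl)  # recovers ~5, continuum removed
```

The negative wings measure the continuum level and subtract it, so the recovered flux is close to the injected line flux of 5 even though the continuum is not flat.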
After normalisation to the number of Lyman continuum photons and extinction by dust, they provide the luminosity in any of the following lines that can be listed in the \texttt{pcigale.ini} file: \texttt{Ly-alpha}, \texttt{CII-133.5}, \texttt{SiIV-139.7}, \texttt{CIV-154.9}, \texttt{HeII-164.0}, \texttt{OIII-166.5}, \texttt{CIII-190.9}, \texttt{CII-232.6}, \texttt{MgII-279.8}, \texttt{OII-372.7}, \texttt{H-10}, \texttt{H-9}, \texttt{NeIII-386.9}, \texttt{HeI-388.9}, \texttt{H-epsilon}, \texttt{SII-407.0}, \texttt{H-delta}, \texttt{H-gamma}, \texttt{H-beta}, \texttt{OIII-495.9}, \texttt{OIII-500.7}, \texttt{OI-630.0}, \texttt{NII-654.8}, \texttt{H-alpha}, \texttt{NII-658.4}, \texttt{SII-671.6}, and \texttt{SII-673.1}. \subsection{\texttt{savefluxes} module\label{ssec:savefluxes}} Our ability to compare observations to theoretical models is key to constraining models of galaxy evolution. This can take various forms, from simple colour--colour plots to the computation of the SED of galaxies in numerical simulations and semi--analytic models. The \texttt{savefluxes} module has been designed for this kind of application: it aims at computing and saving the spectra and the properties of arbitrary theoretical galaxies. In practice the steps taken are the following. \begin{enumerate} \item From the list of parameters of each SED creation module given in the configuration file (see e.g. Appendix \ref{sec:pcigale.ini} ), \texttt{savefluxes} determines the complete list of parameters for each model to be computed. This essentially consists in finding all the possible combinations of parameters, creating the equivalent of an $n$--dimensional grid, with each dimension corresponding to an individual parameter. Alternatively, the parameters for each model can be explicitly provided in a file (one line per SED and one column per parameter). 
The former approach is useful to compute a systematic grid of theoretical models, whereas the latter is better suited to galaxies from simulations whose properties do not follow a grid. \item For each model, the spectrum is computed, and its physical properties (both input and derived) and fluxes in passbands (which can be narrow as well as broad) are stored in memory. Optionally, the full spectrum along with the individual components (stellar populations, nebular emission, dust emission, etc.) and the SFH are saved to disk as FITS tables. \item The integrated fluxes and the physical properties of all the models are saved to disk both as \texttt{ASCII} and \texttt{FITS} tables. \end{enumerate} \subsection{\texttt{pdf\_analysis} module\label{ssec:analysis}} The SEDs of galaxies contain a treasure trove of information on their physical properties, which we need to access to understand how they form and evolve. To do so, the simplest and probably most common method consists of fitting the observed SED of a galaxy with a set of models and selecting the best-fitting one. Unfortunately, this method suffers from severe drawbacks. First of all, it ignores the degeneracies one can encounter. Models with almost equally good fits can have very different properties. As such, the properties corresponding to the best fit are not necessarily representative of the true properties of the object. A related issue revolves around the estimation of the uncertainties on the physical properties. The best fit in itself does not provide information on the uncertainties. Methods such as bootstrapping can be applied, at the cost of repeating the fitting procedure numerous times. A technique that has become increasingly popular over the past decade to address these issues is to rely on the goodness of fit of all the models rather than just the best--fitting model. This is generally done through the likelihood.
In this case each model in the grid of models (the priors) will have an associated likelihood taken as $\exp\left(-\chi^2/2\right)$. These likelihoods can then be used as weights to estimate both the physical parameters (the likelihood--weighted means of the physical parameters) and the related uncertainties (the likelihood--weighted standard deviations of the physical parameters). This method works well when the probability distribution function is well behaved (e.g. a single peak). For more difficult cases, the marginalised probability distribution function (PDF) provides the full information to estimate the physical properties. With either method, an important difference between various fitting codes is the algorithm to sample the priors. Two main strategies are commonly considered. Some codes use a Markov chain Monte--Carlo (MCMC) method (or some variant of it). This is especially the case when the dimension of the problem is very large, for instance when considering a non--parametric SFH (the SFH does not follow any given functional form but rather the SFR is free at every single time step) to fit spectra. While the evident upside is that it allows a large volume of priors to be explored, the sampling can be sparse, with the risk of missing some high-likelihood regions. In addition, the SEDs have to be recomputed (or at least reinterpolated on a grid of precomputed priors) at each step of the exploration of the parameter space, and this for each object, which may require particularly long computing times for large catalogues. An alternative method, which we adopt in \texttt{CIGALE}, is to rely on a fixed grid of models. The main advantage is that the models need to be computed only once for all the objects. Because of this, numerous optimisations can be applied to compute the grid of models and to fit them to the data. The main downside is that it can be somewhat memory intensive. To get good results, the grid of models needs to be reasonably well sampled.
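The likelihood weighting described above amounts to only a few lines; a sketch with made-up values for a single physical property:

```python
import numpy as np

# chi2 of each model of the grid and the corresponding value of one
# physical property (here a stellar mass in Msun; values are made up)
chi2 = np.array([1.0, 2.0, 10.0, 3.0])
mstar = np.array([1e10, 2e10, 8e10, 3e10])

likelihood = np.exp(-chi2 / 2.0)
# Likelihood-weighted mean and standard deviation as the estimate of
# the property and of its uncertainty
mstar_est = np.average(mstar, weights=likelihood)
mstar_err = np.sqrt(np.average((mstar - mstar_est)**2, weights=likelihood))
```

Note how the poorly fitting model ($\chi^2=10$) contributes almost nothing to the estimate, while the near-degenerate good fits are averaged over.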
At the same time, for the process to be computationally efficient, the grid along with the associated physical properties to estimate need to remain in memory. The amount of memory required depends primarily on 1) the number of models to compute, 2) the number of bands and physical properties to fit, and 3) the number of physical properties to estimate. In the \texttt{pdf\_analysis} module we implemented the estimation of the physical properties from likelihood--weighted parameters on a fixed grid of models. The computation of the grid of SEDs and associated physical properties follows the same steps as for the \texttt{savefluxes} module described in Sect.~\ref{ssec:savefluxes}, except for minor differences of no consequence here. Once the grid of models has been computed, the high--level algorithm to estimate the physical properties is the following. \begin{enumerate} \item From the complete set $S_0$ of models, selection of the subset $S_1$ of models closest to the rounded redshift of the analysed object. By default, the rounding is to two decimal places, but this can be user--defined. \item Computation of the multiplicative factors (Eq.~\ref{eq:scaling}) to scale the $S_1$ models to the observations. \item Computation of the $\chi^2$ (Eq.~\ref{eq:chi2}) between all the $S_1$ models and the observations. \item Computation of the likelihood $\exp\left(-\chi^2/2\right)$ for the $S_1$ models. \item Estimation of each physical property along with the associated uncertainty as the likelihood--weighted mean and standard deviation of the $S_1$ models. \item Saving of the estimates and the uncertainties on the physical properties, along with the fluxes and the physical properties of the best-fitting model. \item Optionally, saving of the spectrum of the best fit with the individual components (stellar populations, nebular emission, dust emission, etc.), the $\chi^2$ distribution, and the marginalised PDF for each analysed variable.
\end{enumerate} Several of the key steps here require further explanation. First of all, as mentioned earlier, the SFH is normalised so that its integral is 1~M$_\odot$ (when stellar populations are available, or to 1~W otherwise for the dust emission). This means that in order to obtain the extensive physical properties such as masses or luminosities, we need to rescale the models by a factor $\alpha$ before computing the $\chi^2$. This can be done analytically: \begin{equation} \alpha=\frac{\sum_{i} f_i\times m_i/\sigma_i^2 + \sum_{j} f_j\times m_j/\sigma_j^2}{\sum_{i} m_i^2/\sigma_i^2 + \sum_{j} m_j^2/\sigma_j^2},\label{eq:scaling} \end{equation} with $f_i$ and $m_i$ being the observed and model fluxes, $f_j$ and $m_j$ the observed and model extensive physical properties, and $\sigma$ being the corresponding observational uncertainties. Then the computation of the $\chi^2$ is straightforward: \begin{equation} \chi^2=\sum_i\left(\frac{f_i-\alpha\times m_i}{\sigma_i}\right)^2+\sum_j\left(\frac{f_j-\alpha\times m_j}{\sigma_j}\right)^2+\sum_k\left(\frac{f_k-m_k}{\sigma_k}\right)^2,\label{eq:chi2} \end{equation} with $f_k$ and $m_k$ being the observed and model intensive physical properties. This means that the stellar mass (or the dust luminosity when there is no stellar population included in the model) is not a free parameter, even though it is technically possible to treat it as such. Doing so would greatly expand the size of the parameter space by adding an extra dimension, slowing the computation of the grid and the analysis by a similar amount, and degrading the accuracy of the estimation of the physical properties. Optionally, \texttt{CIGALE} can also handle fluxes for which only upper limits have been determined. We adopt the method presented in Appendix A2 of \cite{sawicki2012a}. This affects the aforementioned computing steps in several ways.
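Before turning to the upper-limit case, the analytic scaling factor that minimises Eq.~(\ref{eq:chi2}) can be sketched in a few lines (in this sketch the fluxes and the extensive physical properties are concatenated into single arrays):

```python
import numpy as np

def scaling_factor(f, m, sigma):
    """Analytic alpha minimising chi2; f, m and sigma gather the
    observed fluxes and extensive properties, the model values, and
    the observational uncertainties."""
    return np.sum(f * m / sigma**2) / np.sum(m**2 / sigma**2)

def chi_squared(f, m, sigma, alpha):
    """Chi2 restricted to the scaled (extensive) quantities."""
    return np.sum(((f - alpha * m) / sigma)**2)

f = np.array([2.0, 4.0, 6.0])        # observations
m = np.array([1.0, 2.0, 3.0])        # model normalised to 1 Msun
sigma = np.array([0.1, 0.1, 0.1])
alpha = scaling_factor(f, m, sigma)  # -> 2.0: the model must be doubled
```

Here the observations are exactly twice the normalised model, so the analytic factor recovers $\alpha=2$ and the resulting $\chi^2$ vanishes.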
First, the computation of the $\chi^2$ is divided between regular quantities (first three terms corresponding to Eq.~\ref{eq:chi2}) and those with only an upper limit (last three terms): \begin{equation} \begin{aligned} \chi^2&=\sum_i\left(\frac{f_i-\alpha\times m_i}{\sigma_i}\right)^2 +\sum_j\left(\frac{f_j-\alpha\times m_j}{\sigma_j}\right)^2 +\sum_k\left(\frac{f_k- m_k}{\sigma_k}\right)^2\\ & -2\sum_i\ln\left(\frac{1}{2}\left[1+\textrm{erf}\left(\frac{f_{ul,i}-\alpha\times m_i}{\sqrt{2}\sigma_i}\right)\right]\right)\\ & -2\sum_j\ln\left(\frac{1}{2}\left[1+\textrm{erf}\left(\frac{f_{ul,j}-\alpha\times m_j}{\sqrt{2}\sigma_j}\right)\right]\right)\\ & -2\sum_k\ln\left(\frac{1}{2}\left[1+\textrm{erf}\left(\frac{f_{ul,k}- m_k}{\sqrt{2}\sigma_k}\right)\right]\right),\label{eq:chi2-ul} \end{aligned} \end{equation} with `erf' being the error function, $f_{ul}$ the flux upper limit, and the indices $i$, $j$, and $k$ indicating respectively the fluxes, extensive properties, and intensive properties. See equations A6 to A10 of \cite{sawicki2012a} for a full derivation\footnote{We should note however that equation A10 from \cite{sawicki2012a}, corresponding to Eq.~\ref{eq:chi2-ul} here, contains a mistake, which we have corrected for.}. The main difficulty is to determine $\alpha$. 
For this we have to numerically solve $\partial\chi^2/\partial\alpha=0$ for every model, which is equivalent to solving Eq.~A11 from \cite{sawicki2012a}: \begin{equation} \begin{aligned} \sum_i\left(\frac{f_i-\alpha\times m_i}{\sigma_i}\right)\times\left(\frac{m_i}{\sigma_i}\right) +\sum_j\left(\frac{f_j-\alpha\times m_j}{\sigma_j}\right)\times\left(\frac{m_j}{\sigma_j}\right)&\\ -\sqrt{\frac{2}{\pi}}\sum_i\frac{m_i\times\exp\left(-\left[\left(f_{ul,i}-\alpha\times m_i\right)/\sqrt{2}\sigma_i\right]^2\right)}{\sigma_i\left[1+\textrm{erf}\left(\left(f_{ul,i}-\alpha\times m_i\right)/\sqrt{2}\sigma_i\right)\right]}&\\ -\sqrt{\frac{2}{\pi}}\sum_j\frac{m_j\times\exp\left(-\left[\left(f_{ul,j}-\alpha\times m_j\right)/\sqrt{2}\sigma_j\right]^2\right)}{\sigma_j\left[1+\textrm{erf}\left(\left(f_{ul,j}-\alpha\times m_j\right)/\sqrt{2}\sigma_j\right)\right]}&=0. \end{aligned} \end{equation} We do so using the \texttt{scipy.optimize.root} root-finding method. Once the $\chi^2$ are computed, the subsequent steps no longer depend on the presence or absence of upper limits. Note that objects in a catalogue do not all need to be fitted with the same set of data. \texttt{CIGALE} automatically considers only the available data for a given object. The lack of certain data for some targets has a direct impact on some of the aforementioned computation steps. For steps 2 and 3, we simply do not include the data that have been marked as invalid in the input file (value lower than 0 or set to ``nan'', by convention) in the computation of $\alpha$ and $\chi^2$. For step 4, we take into account the number of bands to compute the probability that a model will reproduce the observations. Another feature of this module is the possibility to assess whether or not physical properties can actually be estimated in a reliable way through the analysis of a mock catalogue.
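The numerical solution for $\alpha$ described above can be sketched with a dependency-free bisection (\texttt{CIGALE} itself uses \texttt{scipy.optimize.root}; the data layout as lists of tuples is purely illustrative):

```python
import math

def dchi2_dalpha(alpha, det, ul):
    """Stationarity condition dchi2/dalpha = 0 (divided by -2).
    `det` and `ul` are lists of (flux, model, sigma) tuples for the
    detections and the upper limits, respectively."""
    s = sum((f - alpha * m) * m / sig**2 for f, m, sig in det)
    for f, m, sig in ul:
        u = (f - alpha * m) / (math.sqrt(2.0) * sig)
        denom = 1.0 + math.erf(u)
        if denom <= 0.0:          # upper limit violated by a huge margin
            return -math.inf
        s -= math.sqrt(2.0 / math.pi) * m * math.exp(-u * u) / (sig * denom)
    return s

def solve_alpha(det, ul, lo=1e-9, hi=1e9):
    """Bisection on dchi2/dalpha, assuming the root is bracketed."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dchi2_dalpha(lo, det, ul) * dchi2_dalpha(mid, det, ul) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

det = [(2.0, 1.0, 0.1), (4.0, 2.0, 0.1)]
alpha_det = solve_alpha(det, [])                # detections only -> 2.0
alpha_ul = solve_alpha(det, [(1.0, 1.0, 0.5)])  # a constraining upper limit
```

With detections only, the bisection recovers the analytic scaling factor, while a constraining upper limit pulls $\alpha$ down, as the erf term penalises models exceeding the limit.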
The idea is to compare the physical properties of the mock catalogue, which are known exactly, to the estimates from the analysis of the likelihood distribution. To build the mock catalogue we consider the best fit for each object. We then modify each quantity by adding a value drawn from a Gaussian distribution with the same standard deviation as that of the observation. This mock catalogue is then analysed in the exact same way as the original observations. Physical properties for which the exact and estimated values are similar can be estimated reliably. An application of the \texttt{pdf\_analysis} module to a sample of representative galaxies is presented in Sect.~\ref{ssec:application-fit}. \section{Examples of \texttt{CIGALE} use cases\label{sec:versatility}} As seen above, \texttt{CIGALE} has been designed to be flexible and versatile so that it may have various applications: estimation of the physical properties of an object from the observed SED, generation of theoretical SEDs from analytical considerations or numerical simulations, use as a library to build new tools, or even serving as a basis for simulating observations. Here we briefly present examples of the first three applications. \subsection{Example of \texttt{CIGALE} as an SED generation tool} The automated generation of SEDs for specific parameters or over different sets of models can be useful for multiple applications: quickly generating artificial observations of galaxies whose physical properties are known (e.g. from numerical simulations), comparing observations with grids of models without having to resort to full-scale SED modelling of large samples of galaxies, deriving theoretical relations depending on one or more parameters, and so on. An example of the first use case with \texttt{CIGALE} can be seen in \cite{boquien2014a}.
They used a set of SFHs from idealised galaxy simulations at $1\le z\le2$ to simulate observations in star-formation-tracing bands and examine the impact of the SFH on the measurement of the SFR. Another example can be found in \cite{alvarez2016a}, where they defined Lyman--break galaxy selection criteria in the redshift range $2.5<z<3.5$ using \texttt{CIGALE}. Here, for the purpose of illustrating the generation of theoretical models, we focus on the last use case. We examine the question of the stellar contribution in the mid-IR. Indeed, even though mid-IR emission in galaxies is often used as a tracer of star formation \citep[e.g.,][]{calzetti2007a}, the stellar contamination can be significant. To correct for it, the standard method consists of extrapolating the NIR flux to the wavelength of interest \citep[e.g.][]{helou2004a,ciesla2014a}, exploiting the Rayleigh--Jeans regime of the emission. In practice such methods are calibrated by computing the ratio between near-- and mid--IR fluxes, so that with one NIR band that is free of dust emission, one can estimate the stellar emission in the mid-IR. In Fig.~\ref{fig:comp-stellar-contrib} we show how mid--to--near-IR ratios vary depending on the age and timescale of a ``delayed'' SFH, using the \cite{bruzual2003a} models and assuming a \cite{salpeter1955a} IMF and a metallicity of $Z=0.02$.
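The ``delayed'' SFH used in this exercise is commonly parametrised as $\mathrm{SFR}(t)\propto t\,\exp(-t/\tau)$; a small sketch of the grid of models explored in the figure (normalised to form a total of 1~M$_\odot$, following the convention discussed in Sect.~\ref{ssec:analysis}; the parameter values below are a coarse subset of the 150 values per axis):

```python
import numpy as np

def delayed_sfh(t, tau):
    """Delayed SFH, SFR(t) proportional to t * exp(-t / tau),
    normalised so that 1 Msun of stars is formed over the grid."""
    sfr = t * np.exp(-t / tau)
    return sfr / np.trapz(sfr, t)

t = np.linspace(0.0, 13000.0, 13001)  # Myr, one point per Myr
# One model per (age, tau) combination, as for the full grid
models = {(age, tau): delayed_sfh(t[t <= age], tau)
          for age in (1000.0, 5000.0, 13000.0)
          for tau in (500.0, 2000.0, 8000.0)}
```

The SFR of this parametrisation peaks at $t=\tau$, so small $\tau$ values produce early--type--like histories while large $\tau$ values correspond to on--going star formation.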
\begin{figure*}[!htbp] \includegraphics[width=0.333\textwidth]{{stellar_colors_IRAC4_Ks}.pdf} \includegraphics[width=0.333\textwidth]{{stellar_colors_IRAC4_WISE1}.pdf} \includegraphics[width=0.333\textwidth]{{stellar_colors_IRAC4_IRAC1}.pdf} \includegraphics[width=0.333\textwidth]{{stellar_colors_WISE3_Ks}.pdf} \includegraphics[width=0.333\textwidth]{{stellar_colors_WISE3_WISE1}.pdf} \includegraphics[width=0.333\textwidth]{{stellar_colors_WISE3_IRAC1}.pdf} \includegraphics[width=0.333\textwidth]{{stellar_colors_WISE4_Ks}.pdf} \includegraphics[width=0.333\textwidth]{{stellar_colors_WISE4_WISE1}.pdf} \includegraphics[width=0.333\textwidth]{{stellar_colors_WISE4_IRAC1}.pdf} \includegraphics[width=0.333\textwidth]{{stellar_colors_MIPS1_Ks}.pdf} \includegraphics[width=0.333\textwidth]{{stellar_colors_MIPS1_WISE1}.pdf} \includegraphics[width=0.333\textwidth]{{stellar_colors_MIPS1_IRAC1}.pdf} \caption{Comparison of the relative fluxes between the stellar fluxes in mid-IR dust--dominated bands (from top to bottom: IRAC 8~$\mu$m, WISE 12~$\mu$m, WISE 22~$\mu$m, and MIPS 24~$\mu$m) and NIR dust--free bands (from left to right: Ks 2.2~$\mu$m, WISE 3.4~$\mu$m, and IRAC 3.6~$\mu$m), depending on the age of the galaxy ($x$--axis) and the $\tau$ constant of a ``delayed'' SFH ($y$--axis). The colour indicates the value of the ratio of the fluxes according to the colour bar to the right of each plot, with white lines indicating isocontours. A total of 22500 models were computed, representing the combination of 150 parameters on each axis. The \cite{bruzual2003a} models were adopted, assuming a \cite{salpeter1955a} IMF and a metallicity $Z=0.02$.\label{fig:comp-stellar-contrib}} \end{figure*} The simple configuration file to run this example is shown in Appendix~\ref{sec:pcigale.ini}. It allows the generation of a total of 22500 models: the combination of 150 values for the age and 150 values for the timescale $\tau$.
The plots show interesting variations depending on the SFH and the selected NIR and mid--IR wavelengths. Unsurprisingly, when the wavelength difference is small, the variation in the mid--to--near-IR flux ratio is limited to typically less than 20\% in the worst cases. In other words, as was noted by \cite{helou2004a}, there is only a weak dependence on the SFH. The variation is larger when considering longer wavelengths, such as WISE 22~$\mu$m or MIPS 24~$\mu$m. In this case there can be variations of up to a factor of two, with a strong dependence on the SFH. The most important difference depends on the age of early--type galaxies (small values for $\tau$), with $\sim1$~Gyr-old galaxies having much bluer colours. When star formation is still on--going (larger values for $\tau$), the colours show a much weaker dependence on star formation and a constant colour can easily be adopted for late--type galaxies. This simple example shows how easy it is with \texttt{CIGALE} to explore theoretical grids of models to understand the effects of specific assumptions. We showed here the influence of the age and the timescale of a ``delayed'' SFH, but similar studies can be done for different parametrisations of the SFH and also with different parameters (IMF, stellar population models, metallicity, presence of dust, presence of an active nucleus, etc.). Beyond the creation of grids of models for a broad range of purposes, the generation of models can also be used in connection with numerical simulations. For instance \cite{boquien2014a} used \texttt{CIGALE} to compute the emission of galaxies in star formation tracing bands with respect to time from 23 high--resolution numerical simulations. This allowed them to investigate the effect of the SFH on the determination of the SFR from standard methods and provide new SFR estimators on different timescales.
In a similar way, \cite{ciesla2015a} coupled \texttt{CIGALE} with semi--analytic models to investigate the ability of SED modelling to disentangle the emission of active nuclei and measure their properties. \subsection{Example of \texttt{CIGALE} as a library} The modular and flexible design of \texttt{CIGALE} enables its use not just as a stand-alone package, but also as a library to build new applications well beyond what it was initially conceived for. As an example of such a case, we have created a simple pedagogical tool to interactively explore the effect of dust attenuation on the FUV--to-far-IR spectrum of a star--forming galaxy. We show a screen capture of \texttt{cigale commander} in Fig.~\ref{fig:pcigale-commander}. \begin{figure}[!htbp] \includegraphics[width=\columnwidth]{pcigale_commander.png} \caption{Screenshot of a simple application built using \texttt{CIGALE} as a library. It is designed to interactively explore the effect of dust attenuation on the FUV--to-far-IR spectrum of a star--forming galaxy. Here the key parameters for the attenuation can easily be changed with sliders, allowing for a rapid examination of the impact of each parameter. The script is only about 150 lines long. It transparently uses \texttt{CIGALE} modules to build the SED from user--provided parameters and \texttt{matplotlib} to display the plot and handle the sliders.\label{fig:pcigale-commander}} \end{figure} While this example is limited to attenuation curves, it could be extended to all parameters of all modules. Not all applications need to be interactive though. It is also possible to use \texttt{CIGALE} as a database to easily access the base models (single stellar populations, dust emission templates, etc.) in a uniform way, without having to write additional code to read the original models that come in different specific formats.
\subsection{Example of \texttt{CIGALE} to estimate the physical properties of star--forming galaxies\label{ssec:application-fit}} Measuring the physical properties of galaxies is probably one of the most common uses of SED-modelling codes such as \texttt{CIGALE}, and the literature is rich with examples covering a broad range of questions at all redshifts. Various articles have already covered the ability of \texttt{CIGALE} to reliably retrieve a number of the intrinsic physical properties of galaxies \citep[e.g.][]{boquien2012a,boquien2016a,buat2014a,lofaro2017a}. As the outcome of such studies naturally depends on the quality and breadth of the data available along with the sampled priors, we refer to these articles for in--depth analyses. Another interesting question is that of how the estimates of the physical properties from different codes compare. To answer this question, Hunt et al. (submitted) modelled the SED of the galaxies of the KINGFISH sample \citep{kennicutt2011a}, comparing results from \texttt{CIGALE}, \texttt{MAGPHYS} \citep{dacunha2008a}, and the latest iteration of \texttt{grasil} \citep{silva1998a}. For the average SFR over 100 Myr, the stellar mass, the FUV attenuation, and the dust luminosity, \texttt{CIGALE} provides excellent consistency with the other codes. In another recent effort, this time more geared towards higher redshift samples, \texttt{CIGALE} equally shows excellent performance (Pacifici et al., in prep.). As a simple illustration of the capabilities of \texttt{CIGALE}, we present the results of the modelling of the star--forming galaxies from the UV to mid--IR SED atlas of \cite{brown2014a}. This sample is especially interesting as it provides a set of carefully vetted photometric data, which is ideal for SED modelling. We select a subsample of 78 star--forming galaxies which 1) have both FUV and NUV fluxes, and 2) are classified either as Sa or later-type or as peculiar.
We exclude galaxies that are classified as purely AGN by \cite{brown2014a}, as this may contaminate in an appreciable way the UV and/or the mid--IR ends of the spectra, significantly affecting the results\footnote{For simplicity here we do not include AGN models. We refer to \cite{ciesla2015a} for a presentation of the capabilities of \texttt{CIGALE} regarding AGNs.}. We model these galaxies with a representative set of modules as described in Table~\ref{tab:parameters}, leading to a modest grid of 8164800 models. \begin{table*} \centering \begin{tabular}{lll} \hline\hline Module &Parameter &Value\\\hline \texttt{sfhdelayed} &\texttt{tau\_main} ($10^6$ years) &1, 500, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000\\ &\texttt{age\_main} ($10^6$ years) &13000\\ &\texttt{tau\_burst} ($10^6$ years) &$10^9$\\ &\texttt{age\_burst} ($10^6$ years) &5, 10, 25, 50, 100, 200, 350, 500, 750, 1000\\ &\texttt{f\_burst} &0, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.25\\\hline \texttt{bc03} &\texttt{imf} &1 (Chabrier)\\ &\texttt{metallicity} &0.02\\\hline \texttt{nebular} &\texttt{logU} &$-3.0$\\ &\texttt{f\_esc} &0.0\\ &\texttt{f\_dust} &0.0\\ &\texttt{lines\_width} (km s$^{-1}$)&300\\\hline \texttt{dustatt\_modified\_starburst}&\texttt{E\_BV\_nebular} (mag)&0.005, 0.01, 0.025, 0.05, 0.075, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35,\\ & &0.40, 0.45, 0.50, 0.55, 0.60\\ &\texttt{E\_BV\_factor} &0.25, 0.50, 0.75\\ &\texttt{uv\_bump\_wavelength} (nm)&217.5\\ &\texttt{uv\_bump\_width} (nm)&35.0\\ &\texttt{uv\_bump\_amplitude} &0.0, 1.5, 3.0 (Milky Way)\\ &\texttt{powerlaw\_slope} &$-0.5$, $-0.4$, $-0.3$, $-0.2$, $-0.1$, 0.0\\ &\texttt{Ext\_law\_emission\_lines} &1 (Milky Way)\\ &\texttt{Rv} &3.1\\ &\texttt{filters} &FUV, V\_B90\\\hline \texttt{dale2014} &\texttt{alpha} &0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0\\\hline \texttt{restframe\_parameters} &\texttt{beta\_calz94} &True\\ &\texttt{D4000} &False\\ &\texttt{IRX} &True\\ &\texttt{EW\_lines} &500.7/1.0 \& 656.3/1.0\\
&\texttt{luminosity\_filters} &FUV \& V\_B90\\ &\texttt{colours\_filters} &FUV-NUV \& NUV-r\_prime\\ \hline \texttt{redshifting} &\texttt{redshift} &0\\\hline \end{tabular} \caption{Modules and parameter values used to model the sample of \cite{brown2014a}. The grid of models (fluxes and physical properties) is estimated over all the possible combinations of parameters, leading to a total of 8164800 models. The corresponding \texttt{pcigale.ini} file is provided in Appendix \ref{sec:pcigale.ini}.\label{tab:parameters}} \end{table*} An example of a typical best fit is shown in Fig.~\ref{fig:4321}. \begin{figure}[!htbp] \includegraphics[width=\columnwidth]{brown2014_ngc4321.pdf} \caption{Best-fitting model for the spiral galaxy NGC~4321 (grey) located in the nearby Virgo cluster, showing the total stellar (blue, the dust attenuation has already been accounted for), nebular (green, also including dust attenuation), and dust (red) emission. The model fluxes in passbands computed using Eq.~\ref{eqn:flux-bandpass} are indicated with black circles. These fluxes were then fitted to the observations (turquoise cross with the uncertainties indicated with the vertical lines), yielding a final reduced $\chi^2\simeq 0.5$.\label{fig:4321}} \end{figure} While \texttt{CIGALE} can provide measurements for numerous physical properties, we chose here to concentrate on a subsample of six quantities that are frequently used to study galaxies: the FUV attenuation ($\mathrm{A_{FUV}}$), the dust luminosity ($\mathrm{L_{dust}}$), the instantaneous SFR, the stellar mass ($\mathrm{M_\star}$), the UV slope \citep[$\beta$,][]{calzetti1994a}, and the decimal logarithm of the ratio between the IR and FUV luminosities (IRX). The distributions of these six physical properties are shown in Fig.~\ref{fig:hist}. 
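Two of these quantities, $\beta$ and IRX, are purely synthetic quantities measured on the rest--frame model spectrum. A sketch of how such quantities can be obtained (the fitting window and the single power--law fit are illustrative simplifications of the \cite{calzetti1994a} definition):

```python
import numpy as np

def uv_slope(wl, f_lambda):
    """UV slope beta from a power-law fit f_lambda ~ wl**beta."""
    beta, _ = np.polyfit(np.log10(wl), np.log10(f_lambda), 1)
    return beta

def irx(l_dust, l_fuv):
    """Decimal logarithm of the IR-to-FUV luminosity ratio."""
    return np.log10(l_dust / l_fuv)

wl = np.linspace(125.0, 260.0, 136)   # nm, rest-frame UV window
f_lambda = 3.0 * (wl / 160.0)**-2.0   # synthetic spectrum with beta = -2
beta = uv_slope(wl, f_lambda)         # -> -2.0
```

Because the synthetic spectrum is an exact power law, the fit recovers the input slope; on a real model spectrum the slope is measured over the same kind of UV window.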
\begin{figure*}[!htbp] \includegraphics[width=0.33\textwidth]{brown2014_hist_attenuation.pdf} \includegraphics[width=0.33\textwidth]{brown2014_hist_beta.pdf} \includegraphics[width=0.33\textwidth]{brown2014_hist_IRX.pdf}\\ \includegraphics[width=0.33\textwidth]{brown2014_hist_Ldust.pdf} \includegraphics[width=0.33\textwidth]{brown2014_hist_Mstar.pdf} \includegraphics[width=0.33\textwidth]{brown2014_hist_SFR.pdf} \caption{Distribution of the physical properties of the \cite{brown2014a} sample. We see that in a single run \texttt{CIGALE} can model galaxies with a wide range of properties: FUV attenuation, UV slope, IR--to--UV luminosity ratio, dust luminosity, stellar mass, and SFR. This is only a small excerpt of the numerous physical properties that can be estimated with \texttt{CIGALE} (see Appendix~\ref{sec:properties}).\label{fig:hist}} \end{figure*} Even though \texttt{CIGALE} will always provide an estimate of the physical properties, this estimate may or may not be reliable depending on the wavelength coverage, the quality of the data, and the explored parameter space, among other factors. A standard way to test whether the physical properties can at least be retrieved in a self--consistent way is through a mock catalogue, as described in Sect.~\ref{ssec:analysis}. In a nutshell, the idea is to fit the observations and build an artificial catalogue from the best fits. Considering these best fits, we know exactly what the corresponding physical parameters are, so we know the `truth'. Noise is then injected into the fluxes of this new catalogue to simulate new observations. Fitting these artificial observations, we can then compare the inferred physical properties from the likelihood distribution to their actual values. We show such an analysis in Fig.~\ref{fig:mocks} for the six aforementioned physical properties.
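The construction of the mock catalogue amounts to perturbing the best--fit fluxes with the observational noise (a sketch; the seeded generator is for reproducibility only and the values are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_mock(best_fit_fluxes, sigma):
    """Mock observations: best-fit fluxes perturbed by Gaussian noise
    with the same standard deviation as the observed uncertainties."""
    return best_fit_fluxes + rng.normal(0.0, sigma)

best = np.array([10.0, 5.0, 2.0])   # best-fit model fluxes (mJy)
sigma = np.array([0.5, 0.3, 0.1])   # observational uncertainties (mJy)
mock = make_mock(best, sigma)
# `mock` is then fitted exactly like the real catalogue and the
# estimates are compared to the known input properties
```

Since the true properties of each mock object are those of its parent best-fit model, any systematic offset or scatter in the re-estimated values directly quantifies the self-consistency of the estimation.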
\begin{figure*}[!htbp] \includegraphics[width=0.33\textwidth]{brown2014_mock_attenuation.pdf} \includegraphics[width=0.33\textwidth]{brown2014_mock_beta.pdf} \includegraphics[width=0.33\textwidth]{brown2014_mock_IRX.pdf}\\ \includegraphics[width=0.33\textwidth]{brown2014_mock_Ldust.pdf} \includegraphics[width=0.33\textwidth]{brown2014_mock_Mstar.pdf} \includegraphics[width=0.33\textwidth]{brown2014_mock_SFR.pdf} \caption{Comparison between the true (x--axis) and estimated (y--axis) values of $\mathrm{A_{FUV}}$, $\beta$, IRX, $\mathrm{L_{dust}}$, $\mathrm{M_\star}$, and the SFR from the upper-left to the lower-right panel. Each galaxy is indicated with a blue cross, with the vertical line giving the 1--$\sigma$ error bar on the estimate. From this analysis it is clear that with this dataset and parameter space \texttt{CIGALE} can measure these physical properties self--consistently. Not all physical properties are equally well measured, however. For instance, while the dust luminosity is extremely reliable ($-0.014\pm0.034$~dex), the performance for $\beta$, even though still excellent, shows somewhat more dispersion ($-0.047\pm0.217$~dex).\label{fig:mocks}} \end{figure*} Overall, all the physical quantities are reliably estimated, with the average estimated values being remarkably close to the true values. Regarding the scatter around this mean value, there is a marked difference in performance. The SFR, $\mathrm{L_{dust}}$, and IRX show very little scatter relative to the true values, lower than 0.1~dex (IRX is already a logarithmic quantity). The two quantities with the lowest performance appear to be $\beta$ (scatter of 0.217, or less than 5\% of the dynamical range) and $\mathrm{A(FUV)}$ (scatter of 0.157~dex). In both cases, this performance remains excellent.
This slight difference in performance reflects the fact that not all physical quantities can be determined with the same reliability, partly due to intrinsic degeneracies between the physical processes at play, and partly due to the breadth and quality of the photometric coverage. It is important to note that even though this exercise is useful, it does not take into account the uncertainties due to the reliability of the models themselves, and therefore yields lower limits on the actual uncertainties on the physical properties. With this inspection done, well--measured physical quantities can then be used to understand the properties of these galaxies. As a very simple example, we plot here the classical relation between IRX and the UV spectral slope, $\beta,$ in Fig.~\ref{fig:IRX-beta}. \begin{figure}[!htbp] \includegraphics[width=\columnwidth]{brown2014_IRX_beta.pdf} \caption{Relation between the observed IR-to-FUV luminosity (IRX) and the UV spectral slope ($\beta$) for the 129 galaxies of the \cite{brown2014a} sample. The colour of each symbol indicates the specific SFR, following the scale given by the colour bar to the right. The locus followed by resolved quiescent star--forming (respectively starburst) galaxies from \cite{boquien2012a} \citep[respectively][]{kong2004a} is indicated by the blue (respectively orange) line.\label{fig:IRX-beta}} \end{figure} This relation is important as $\beta$ is often used to estimate the attenuation of distant galaxies where dust observations are missing, and IRX is a simple proxy for the attenuation. Numerous works have found and tried to explain the extensive variations observed in the IRX--$\beta$ relation over the years \citep[e.g. ][and many others]{kong2004a, burgarella2005a, boquien2009a, boquien2012a, overzier2011a, grasha2013a, ye2016a, popping2017a, reddy2018a}. 
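For concreteness, the two axes of this diagram can be written directly in terms of the estimated quantities. The sketch below is an illustrative two-band shortcut only (\texttt{CIGALE} derives both quantities from the full SED fit); the flux and luminosity values are hypothetical, and the GALEX effective wavelengths are approximate.

```python
import math

# beta is the exponent of f_lambda ∝ lambda^beta through the FUV and
# NUV bands; IRX is log10(L_dust / L_FUV). Illustrative shortcut only:
# the full analysis estimates both from the complete fit.

LAM_FUV, LAM_NUV = 1516.0, 2267.0   # approximate GALEX wavelengths [Angstrom]

def uv_slope(f_fuv, f_nuv):
    """UV slope beta from two f_lambda flux densities (same units)."""
    return math.log10(f_fuv / f_nuv) / math.log10(LAM_FUV / LAM_NUV)

def irx(l_dust, l_fuv):
    """Decimal logarithm of the IR-to-FUV luminosity ratio."""
    return math.log10(l_dust / l_fuv)

beta = uv_slope(2.0e-15, 1.5e-15)   # hypothetical flux densities
print(round(beta, 2), round(irx(3.0e10, 1.0e10), 2))
```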
The example in Fig.~\ref{fig:IRX-beta} shows how the SFH helps explain why more quiescent galaxies deviate from the relation followed by starburst galaxies, with more active galaxies following the relation of \cite{kong2004a} and more quiescent galaxies progressively moving closer to the relation of \cite{boquien2012a}. It is possible to investigate such questions, and many more, thanks to the flexibility of \texttt{CIGALE} in modelling a broad range of galaxies from intense starbursts to elliptical galaxies and estimating their physical properties. \section{Summary\label{sec:summary}} In this paper, we present the new generation of the Code Investigating GALaxy Emission, \texttt{CIGALE}. Three principles have guided its development, both for the developers and for the users: modularity (it is easy to add new modules, and we encourage and support such developments, or to swap modules modelling the same component), clarity (the code is easy to use and to understand), and efficiency (it runs quickly and is parallelised to take advantage of modern processors with multiple cores). In practical terms, \texttt{CIGALE} is based on an energy balance principle: the energy absorbed by dust in the UV--to--near--IR domain is re--emitted self--consistently in the mid-- and far--IR. The models from the FUV to the radio are built in a modular way, taking into account flexible SFHs and stellar populations \citep{bruzual2003a,maraston2005a}, ionised gas \citep{inoue2011a}, attenuation by dust \citep{calzetti2000a,charlot2000a} and re--emission of the energy at longer wavelengths \citep{draine2007a,casey2012a,dale2014a}, active nuclei \citep{fritz2006a,dale2014a}, and the IGM \citep{meiksin2006a}. The computation of these models is done in parallel on grids of models that can reach several hundred million elements. These models can then simply be saved for theoretical studies, or be used to evaluate a wide range of physical properties for observed objects.
Such evaluation is based on the likelihood--weighted mean and standard deviation, taking into account the presence of upper limits. Finally, thanks to its versatility, \texttt{CIGALE} can also be used as a library to build new applications. This article has presented a snapshot of the current state of \texttt{CIGALE}. It is however a constantly evolving code, and we will present in upcoming papers new major evolutions to adapt it to the ever changing challenges of panchromatic modelling. The code along with its documentation is publicly available at \url{http://cigale.lam.fr}. \begin{acknowledgements} We would like to thank Daniel Dale, Caitlin Casey, Bruce Draine, and Jacopo Fritz for their help in implementing the latest versions of their models in \texttt{CIGALE}. We would like to thank Simone Bianchi, Wouter Dobbels, Nimisha Kumari, Samir Salim, Manal Yassin, and Vivienne Wild for their feedback on early versions, which has helped improve the ease of use of the code. We also thank the anonymous referee, who has helped clarify and improve various aspects of this article. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. MB was supported by the FONDECYT regular project 1170618. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Gallai's famous theorem states that \begin{equation*} \alpha(G)+\beta (G)=n, \end{equation*} where $G$ is a graph on $n$ vertices, and $\alpha(G)$ and $\beta(G)$ represent the vertex cover number and the independence number, respectively, of $G$. Its beauty lies not only in the identity itself, but also in the fact that the union of any minimum vertex cover and a corresponding maximum independent set, yielding $\alpha(G)$ and $\beta(G)$, respectively, is the whole vertex set. With such elegance, the hunt for analogous results is always open. One possibility arises if we look at independent sets from an equivalent, yet different, perspective, coming from the notion of $k$-packings in graphs. A set $P\subseteq V(G)$ is a $k$-\textit{packing set} (or $k$-packing for short) of $G$ if the distance between any pair of distinct vertices from $P$ is greater than $k$. The $k$-\emph{packing number} of $G$ is the maximum cardinality of a $k$-packing of $G$ and is denoted by $\rho_k(G)$. Clearly, 1-packings are precisely the independent sets of $G$, and maximum 1-packings are maximum independent sets of $G$. A $\rho_k(G)$-set is a $k$-packing of cardinality $\rho_k(G)$. Since we are interested only in 2-packings, hereinafter we simply write $\rho(G)$ and packing instead of $\rho_2(G)$ and 2-packing, respectively. A natural question concerns finding an analogue of Gallai's theorem for maximum $k$-packings with $k\geq 2$. We deal precisely with this problem for the case $k=2$. Packings have been an interesting topic for a long time as a natural lower bound for the domination number $\gamma(G)$ of graphs (for definitions, terminology and more information on domination in graphs we suggest the books \cite{hhs1,hhs2}). One of the first (and indeed very remarkable) results of this type is due to Meir and Moon \cite{MeMo}, who showed that $\rho(T)=\gamma(T)$ for every tree $T$ (in a different notation).
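Both invariants are easy to compute by exhaustive search on small graphs; the following sketch (illustrative only, exponential in the number of vertices) checks the Meir--Moon equality $\rho(T)=\gamma(T)$ on a small path.

```python
from itertools import combinations

# Brute-force check (illustrative, exponential) of the Meir-Moon
# equality rho(T) = gamma(T) on a small tree, here the path P_6.

def bfs(adj, s):
    """BFS distances from s in an adjacency-dict graph."""
    d, q = {s: 0}, [s]
    for u in q:
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def packing_number(adj, k=2):
    """Maximum size of a set whose vertices are pairwise at distance > k."""
    dist = {u: bfs(adj, u) for u in adj}
    for r in range(len(adj), 0, -1):
        for P in combinations(adj, r):
            if all(dist[u][v] > k for u, v in combinations(P, 2)):
                return r
    return 0

def domination_number(adj):
    """Minimum size of a set whose closed neighborhoods cover V."""
    for r in range(1, len(adj) + 1):
        for D in combinations(adj, r):
            if set(adj) <= set(D) | {v for u in D for v in adj[u]}:
                return r

# The path P_6 on vertices 0..5
path6 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(packing_number(path6), domination_number(path6))  # equal, as predicted
```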
Efficient closed domination graphs form a class of graphs with $\rho(G)=\gamma(G)$ in which maximum 2-packings and minimum dominating sets coincide. In such a case we call a minimum dominating set a 1-perfect code. The study of perfect codes in graphs was initiated by Biggs~\cite{biggs-1973}. Later it was intensively studied, and we recommend \cite{KPY} for further information and references. In the last decade the packing number has become interesting in its own right, and not only in connection with the domination number. For instance, the relationship between the packing number and maximal packings of minimum cardinality, also called the lower packing number, is investigated in \cite{SHSa}. A connection between the packing number and double domination, in the form of an upper bound, is presented in \cite{MSKG}. Graphs whose packing number equals the packing number of their complement are described in \cite{Dutt}. In \cite{HeLoRa}, it was shown that the domination number can also be bounded from above by the packing number multiplied by the maximum degree of the graph. A generalization of packings presented in \cite{GaGuHa} is that of $k$-limited packings, where every vertex can have at most $k$ neighbors in a $k$-limited packing set $S$. A probabilistic approach to $k$-limited packings that yields some bounds can be found in \cite{GaZv}. A further generalization of $k$-limited packings, the generalized limited packing (see \cite{DoHiLe}), brings a dynamic approach with respect to the vertices of $G$, in which different vertices can have different numbers of neighbors in a generalized limited packing. The problem of computing the packing number of graphs is NP-hard, but polynomially solvable for $P_4$-tidy graphs, as shown in \cite{DoHiLe}. Our goal here is to contribute further results on packings in connection with another topic in graphs that has attracted the attention of several researchers in recent years.
This is the case of the metric dimension related parameters. The concept of metric dimension in graphs (first introduced in \cite{Harary1976,Slater1975}) and its large number of variants are nowadays commonly studied, due to their properties of uniquely recognizing (identifying or determining) the vertices or edges of graphs. Some of the most recent variants deal precisely with uniquely identifying the edges of graphs (which is in some way one of the ideas that motivated this contribution). The first works on these recent topics are \cite{mix-dim,KeTrYe}. Another metric parameter that can be taken as a predecessor of the one we study here concerns identification of vertices through neighborhoods (see \cite{JanOmo2012}). We next describe all of these related concepts. Given a graph $G$, a set $S$ of vertices of $G$ is an \emph{adjacency generator}\footnote{In fact, these sets were called adjacency resolving sets in \cite{JanOmo2012}, where the concept was first described.} for $G$ if for any two different vertices $u,v\in V(G)-S$ there exists $x\in S$ such that $|N(x)\cap \{u,v\}| =1$. An adjacency generator of minimum cardinality is called an \emph{adjacency basis} for $G$ and its cardinality, the \emph{adjacency dimension} of $G$, is denoted by $\mathrm{dim}_A(G)$. These concepts were first introduced in \cite{JanOmo2012} as a tool for studying some metric properties of the lexicographic product of graphs. More results on the adjacency dimension of graphs can be found in \cite{adjacency1,adjacency2}. Now, given a vertex $v\in V$ and an edge $e=uw\in E$, the distance between the vertex $v$ and the edge $e$ is defined as $d_G(e,v)=\min\{d_G(u,v),d_G(w,v)\}$. A vertex $w\in V$ \emph{distinguishes} two edges $e_1,e_2\in E$ if $d_G(w,e_1)\ne d_G(w,e_2)$. A nonempty set $S\subset V$ is an \emph{edge metric generator} for $G$ if any two edges of $G$ are distinguished by some vertex of $S$.
An edge metric generator with the smallest possible cardinality is called an \emph{edge metric basis} for $G$, and its cardinality is the \emph{edge metric dimension}, which is denoted by $\mathrm{edim}(G)$. This concept was introduced in \cite{KeTrYe}. Some other studies on the edge metric dimension of graphs appeared in \cite{geneson,kratica,peterin-yero,zubri}. As a kind of mixed point of view of the two parameters above, we introduce the concept of incidence dimension in graphs, which arises from them in a natural way of research evolution. We shall formally define it in the next section, based on some properties of the complement of a packing set, which also motivate the definition of the incidence dimension. There we also show the formal connection between the incidence dimension and the packing number of graphs. A section about the complexity of computing the incidence dimension follows. We conclude this work with some additional remarks on the incidence dimension. We consider only finite undirected simple graphs. Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$. For a fixed $v\in V(G)$, the set $\{u\in V(G):uv\in E(G)\}$ represents the \emph{open neighborhood} of $v$ and is denoted by $N(v)$. The \emph{degree} of $v$ is $d(v) = |N(v)|$. The \emph{closed neighborhood} of $v\in V(G)$ is $N[v]=N(v)\cup \{v\}$. The \emph{distance} $d(u,v)$ between any two vertices $u$ and $v$ is the minimum number of edges on a path between them. Given a set of vertices $S$ of $G$, we use $G-S$ to denote the graph obtained from $G$ by removing all the vertices of $S$ and the edges incident with them. If $S=\{v\}$ for some vertex $v$, then we simply write $G-v$. Also, the subgraph of $G$ induced by $D \subset V(G)$ will be denoted by $G[D]$.
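The two generators just defined can be computed naively on small graphs. The following brute force (illustrative and exponential, for intuition only) evaluates both on the path $P_4$.

```python
from itertools import combinations

# Brute-force versions of the two definitions above (illustrative and
# exponential), evaluated on the path P_4 with vertices 0-1-2-3.

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
edges = [(0, 1), (1, 2), (2, 3)]

def bfs(s):
    d, q = {s: 0}, [s]
    for u in q:
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

D = {u: bfs(u) for u in adj}

def is_adjacency_generator(S):
    # every pair u,v outside S has some x in S with |N(x) ∩ {u,v}| = 1
    outside = [v for v in adj if v not in S]
    return all(any(len(adj[x] & {u, v}) == 1 for x in S)
               for u, v in combinations(outside, 2))

def is_edge_metric_generator(S):
    # every pair of edges lies at different distances from some x in S
    d_ve = lambda x, e: min(D[x][e[0]], D[x][e[1]])
    return all(any(d_ve(x, e) != d_ve(x, f) for x in S)
               for e, f in combinations(edges, 2))

def minimum_size(pred):
    return next(r for r in range(len(adj) + 1)
                for S in combinations(adj, r) if pred(set(S)))

# edim(P_4) = 1 (an end vertex suffices), dim_A(P_4) = 2
print(minimum_size(is_edge_metric_generator),
      minimum_size(is_adjacency_generator))
```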
\section{Defining the incidence dimension and its connection with the packing number} As mentioned in the introduction, we are interested in some properties of the complement of the packing sets of a graph $G$. The following result provides the motivation for the definition of the incidence dimension. \begin{proposition}\label{lemma-complement} If a set $X \subseteq V(G)$ is a packing set of a graph $G$, then the set $S=V(G)\setminus X$ is a vertex cover of $G$ and for any two different edges $e$ and $f$ there exists $x\in S$ such that either $x$ is incident with $e$ or $x$ is incident with $f$. \end{proposition} \begin{proof} First, let $e=uv$ be an arbitrary edge of $G$. If $\{u,v\} \cap S = \emptyset$, then $u,v \in X$. Since $d(u,v)=1$, this contradicts $X$ being a packing set of $G$. Thus, $S$ is a vertex cover. (This also follows from the fact that every packing is a 1-packing and hence an independent set; since the complements of independent sets are vertex covers, the result follows.) \noindent Take now two different arbitrary edges $e,f \in E(G)$. If $e=uv$ and $f=ab$ are not incident, then they are distinguished by an endpoint of $e$ that lies in $S$, which exists because $S$ is a vertex cover. Otherwise they are incident at a common vertex, say $u=a$. If $\{v,b\} \cap S = \emptyset$, then $v,b \in X$. This contradicts $X$ being a packing of $G$, as $d(b,v)\leq 2$. So at least one of $v$ or $b$, say $v$, is in $S$, and $v$ is the desired vertex. \end{proof} The second property of the proposition above allows us to define the incidence dimension as follows. Note that it is meaningful to demand a set of minimum cardinality with this property in order to retain the analogy with the relationship between the independence number and the vertex cover number.
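As a quick illustration, the proposition can be checked exhaustively on a small graph. The graph below is only an example chosen for this sketch; "resolves" is read in the strong sense of incidence with exactly one of the two edges, which is what the proof actually delivers.

```python
from itertools import combinations

# Exhaustive check of the proposition on a small illustrative graph
# (vertices 0..4): for EVERY packing X, the complement S covers all
# edges and is incident with exactly one edge of each pair.

adj = {0: {1}, 1: {0, 2, 4}, 2: {1, 3}, 3: {2}, 4: {1}}
edges = [(0, 1), (1, 2), (2, 3), (1, 4)]

def bfs(s):
    d, q = {s: 0}, [s]
    for u in q:
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

D = {u: bfs(u) for u in adj}

def is_packing(P):
    return all(D[u][v] > 2 for u, v in combinations(P, 2))

def complement_ok(S):
    cover = all(u in S or v in S for u, v in edges)
    resolves = all(any((x in e) != (x in f) for x in S)
                   for e, f in combinations(edges, 2))
    return cover and resolves

for r in range(len(adj) + 1):
    for P in combinations(adj, r):
        if is_packing(P):
            assert complement_ok(set(adj) - set(P))
print("proposition verified on the example")
```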
\begin{definition} Given two edges $e,f\in E(G)$ and a vertex $x\in V(G)$, we say that $x$ \emph{(incidently) resolves} or \emph{distinguishes} the pair $e,f$ if exactly one of these two edges is incident with $x$, that is, either $x\in e$ or $x\in f$. A set $S$ of vertices of $G$ is an \emph{incidence generator} for $G$ if for any two different edges $e,f\in E(G)$ there exists a vertex $x\in S$ such that $x$ incidently resolves the pair $e,f$. An incidence generator of minimum cardinality is called an \emph{incidence basis} for $G$ and its cardinality, the \emph{incidence dimension} of $G$, is denoted by $\mathrm{dim}_I(G)$. \end{definition} From this we can immediately see that we cannot expect such an elegant result as in the case of Gallai's theorem. Indeed, already for $K_2$ we see that, since there exists only one edge in $K_2$, the empty set is an incidence generator of minimum cardinality, and we have $\mathrm{dim}_I(K_2)=0$. Clearly, this extends to any graph with only one edge. However, as soon as there are two edges in $G$, we have $\mathrm{dim}_I(G)>0$. The next observation is that, if $S$ is an incidence generator for $G$, then there exists at most one edge with both endpoints outside of $S$, since two such edges would not be incidently resolved by $S$, a contradiction with $S$ being an incidence generator for $G$. Before we state a deeper connection between the incidence dimension and the packing number of a graph we need some additional terminology. \begin{definition}\label{definition-e-critical-packing} Let $e=uv$ be an edge of a graph $G$. An $e$-\emph{critical packing} of $G-e$, denoted by $P_e(G)$, is a maximum packing of the graph $G-e$ with the following property. \begin{equation}\label{condition} \text{If } \vert \{ u, v \} \cap P_e(G) \vert < 2, \text{ then } P_e(G) \text{ is a packing of } G.
\end{equation} \end{definition} \noindent Notice that both $u$ and $v$ can be in $P_e(G)$, and then (\ref{condition}) is trivially fulfilled. Clearly, $P_e(G)$ is not a packing of $G$ in such a case. However, by removing either $u$ or $v$ from $P_e(G)$ we obtain a packing of $G$. If $u,v \notin P_e(G)$, then the set $P_e(G)$ is also a packing of $G$. If exactly one endpoint of the edge $e$, say $u$, is inside the set $P_e(G)$, then $ N_G(v) \cap P_e(G) = \{u\}$, because otherwise $P_e(G)$ is not a packing of $G$, contradicting (\ref{condition}). Therefore we have \begin{equation}\label{boundA} \rho(G)\leq\vert P_e(G) \vert \leq \rho(G)+1. \end{equation} \noindent Figure \ref{figure-critical-packing} shows an example of a graph $G$ where, for the dotted edge $e$, every $e$-critical packing is smaller than $\rho(G-e)$. Black vertices represent the unique maximum packing of $G-e$, which does not fulfill (\ref{condition}) and is therefore not $e$-critical. Hence, the cardinality of every $e$-critical packing is two. \begin{figure}[!ht]\label{figure-critical-packing} \begin{center} \begin{tikzpicture}[scale=1.2] \tikzstyle{rn}=[circle,fill=white,draw, inner sep=0pt, minimum size=5pt] \tikzstyle{bn}=[circle,fill=black,draw, inner sep=0pt, minimum size=5pt] \node [style=bn] (0) at (0, 0) {}; \node [style=rn] (1) at (2, 0) {}; \node [style=rn] (2) at (4, 0) {}; \node [style=bn] (3) at (6, 0) {}; \node [style=bn] (4) at (2, 2) {}; \draw[dotted] (0)--(1) node [midway, above] {e}; \draw (1)--(2); \draw (2)--(3); \draw (1)--(4); \end{tikzpicture} \caption{A graph $G$ and an edge $e$ for which it holds that $\rho(G-e) > \vert P_e(G) \vert$.} \end{center} \end{figure} \begin{theorem} \label{theorem:n-k} If $G$ is a graph of order $n$ and $k$ is the integer defined as $\displaystyle k=\max_{e \in E(G)}{\vert P_e(G) \vert}$, then $\dim_I(G)= n-k$. An incidence basis for the graph $G$ is any set $S=V(G)\setminus P_e(G)$ for which it holds that $k=\vert P_e(G) \vert$.
\end{theorem} \begin{proof} Let $e=uv$ and $T=P_e(G)$ be such that $k=\vert P_e(G) \vert$. We want to prove that $S = V(G) \setminus T$ is an incidence generator for $G$. If $\vert \{ u, v \} \cap T \vert < 2$, then $T$ is also a packing in $G$ and, due to Proposition \ref{lemma-complement}, $S$ is an incidence generator for $G$. Otherwise $u,v \in T$ and so $T$ is a packing of the graph $G-e$. Thus, $S$ is an incidence generator for $G-e$ with the property that every edge of $G-e$ has at least one endpoint in $S$, due to Proposition \ref{lemma-complement}. Since $S$ is an incidence generator for $G-e$, we have to focus only on pairs of edges where one edge is $e$. Consider the edge $e$ and an arbitrary edge $f \neq e$ of $G$. Clearly, the edges $e$ and $f$ are distinguished by the endpoint of $f$ that is in $S$. It follows that $S$ is an incidence generator for $G$ and $\mathrm{dim}_I(G) \leq n-k$. \noindent Now suppose that there exists an incidence generator $S'$ for $G$ with cardinality $\vert S' \vert=d' < n-k$. Since $S'$ is an incidence generator for $G$, there exists at most one edge in $G$ induced by $P_e'(G)=V(G) \setminus S'$. Suppose that such an edge $e=uv$ exists. First, notice that there is no edge in $G-e$ between two vertices of $P_e'(G)$. Also, there are no two vertices $x,y \in P_e'(G)$ such that $d_{G-e}(x,y)=2$, since $S'$ is an incidence generator for $G$. Thus, it follows that $P_e'(G)$ is a packing of $G-e$ with both $u,v \in P_e'(G)$. Since the cardinality of $P_e'(G)$ is $n-d' > k$, we obtain a contradiction with the maximality of $P_e(G)$. If there is no edge in the graph induced by $P_e'(G)$, then $P_e'(G)$ is a packing of $G$, since there are also no vertices at distance 2 in $P_e'(G)$. Every packing of $G$ is also a packing of $G-e$ with the property given in Definition \ref{definition-e-critical-packing}, for an arbitrary edge $e$. Again, this is a contradiction with the maximality of $P_e(G)$.
Therefore, we deduce that there is no incidence generator with cardinality less than $n-k$. \end{proof} \noindent A direct consequence of Theorem \ref{theorem:n-k}, together with (\ref{boundA}), is as follows. \begin{corollary} For every graph $G$ of order $n$ it holds that $n-\rho(G)-1\leq \mathrm{dim}_I(G)\leq n-\rho(G)$. \end{corollary} \noindent This yields a natural partition of graphs into two classes: those whose incidence dimension equals $|V(G)|-\rho(G)-1$ and those for which $\mathrm{dim}_I(G)=|V(G)|-\rho(G)$. To show that a graph $G$ belongs to the first class, we ``only'' need to find an edge $e$ such that $|P_e(G)|=\rho(G)+1$. For the second class, on the other hand, we need to show that for each edge $e$ we have $|P_e(G)|=\rho(G)$. We next derive exact results for the incidence dimension of some graph families, and we first consider the class of edge-triangular graphs. A graph is called \textit{edge-triangular} if every edge of the graph lies in at least one 3-cycle. \begin{proposition}\label{proposition:edge-triangular} Let $S$ be any incidence generator for a graph $G$. Then the graph $G$ is edge-triangular if and only if for every $e=uv \in E(G)$ it holds that $\vert \{u,v\} \cap S \vert > 0$. \end{proposition} \begin{proof} Let $G$ be an edge-triangular graph. Suppose that there exist an edge $e=uv$ and an incidence generator $S$ for $G$ such that $\vert \{u,v\} \cap S \vert = 0$. Since $G$ is an edge-triangular graph, there exists a vertex $w$ such that $uvwu$ is a triangle. Note that the vertex $w$ has to be in $S$, because $e$ and $uw$ have to be distinguished by at least one endpoint. But then the edges $uw$ and $vw$ are not distinguished by any vertex from $S$, a contradiction with $S$ being an incidence generator. Conversely, suppose that the graph $G$ is not edge-triangular. Thus, there exists an edge $e=uv$ that is not part of a triangle.
Consequently, the set $S=V(G) \setminus \{u,v\}$ is an incidence generator for $G$, which means there exists an edge $e=uv$ such that $\vert \{u,v\} \cap S \vert = 0$. \end{proof} Proposition \ref{proposition:edge-triangular} implies the following corollary. \begin{corollary}\label{corollary:edge-triangular} Let $G$ be an edge-triangular graph. The set $S$ is an incidence generator for $G$ if and only if $V(G) \setminus S$ is a packing of $G$. \end{corollary} \begin{proof} Suppose that $S$ is an incidence generator. Due to Proposition \ref{proposition:edge-triangular} there are no two vertices at distance 1 in $V(G) \setminus S$. Since $S$ is an incidence generator, there are also no two vertices at distance 2 in $V(G) \setminus S$. Consequently, $V(G) \setminus S$ is a packing of $G$. \noindent The converse holds for any graph $G$ by Proposition \ref{lemma-complement}. \end{proof} \begin{proposition}\label{basic-values} Let $n,r$ and $t$ be integers. \begin{enumerate} \item[{\rm (i)}] If $n\ge 3$, then $\mathrm{dim}_I(K_n)=n-1$. \item[{\rm (ii)}] If $n\ge 3$, then $\mathrm{dim}_I(P_n)=\left\lfloor\frac{2(n-1)}{3}\right\rfloor$. \item[{\rm (iii)}] If $n\ge 4$, then $\mathrm{dim}_I(C_n)=\left\lfloor\frac{2n}{3}\right\rfloor$. \item[{\rm (iv)}] If $r,t\ge 1$, then $\mathrm{dim}_I(K_{r,t})= r+t-2.$ \end{enumerate} \end{proposition} \begin{proof} (i) Clearly $K_n$ is an edge-triangular graph. So, the result follows from Corollary \ref{corollary:edge-triangular} and the fact that any set containing all but one vertex of any graph $G$ is an incidence generator for $G$. (ii) Let $V(P_n)=\{v_0,v_1,\dots,v_{n-1}\}$ be such that $v_iv_{i+1}\in E(P_n)$ for every $i\in\{0,\dots,n-2\}$. Consider the set $S'=\{v_i: i\ge 2\mbox{ and } i\equiv 0\mbox{ or }i\equiv 2\; (\mathrm{mod }\; 3)\}$. Note that any two edges of $P_n$ are incidently resolved by $S'$, and so $\mathrm{dim}_I(P_n)\leq |S'|=\left\lfloor\frac{2(n-1)}{3}\right\rfloor$.
On the other hand, let $S$ be an incidence basis for $P_n$. There can be at most one edge which is not incident to any vertex of $S$. Also, if $v_i\in S$ and $i\ne 0,n-1$, then $v_{i-1}\in S$ or $v_{i+1}\in S$. Thus, for any three consecutive vertices $v_{i},v_{i+1},v_{i+2}$ at least two of them are in $S$, with only one possible exception. According to these facts, $\mathrm{dim}_I(P_n)=|S|\ge \left\lfloor\frac{2(n-1)}{3}\right\rfloor$, which completes the proof of (ii). (iii) Let $e=uv$ be any edge of $C_n$. Clearly, $C_n-e\cong P_n$. It is well known (see \cite{MeMo}) that $\rho (T)=\gamma (T)$ for every tree $T$, and we have $\rho (P_n)=\left\lceil \frac{n}{3}\right\rceil$. Moreover, there always exists a $\rho(P_n)$-set $P$ such that $u,v\in P$. Hence, $|P_e(C_n)|=\left\lceil \frac{n}{3}\right\rceil$. On the other hand, $\rho(C_n)=\left\lfloor \frac{n}{3}\right\rfloor$. If $n\neq 3k$, then $|P_e(C_n)|=\rho(C_n)+1$, and by Theorem \ref{theorem:n-k} we have $\mathrm{dim}_I(C_n)=n-\left\lceil \frac{n}{3}\right\rceil=\left\lfloor \frac{2n}{3}\right\rfloor$. For $n=3k$ we have $|P_e(C_n)|=\rho(C_n)=k$ for every edge $e$ and, again by Theorem \ref{theorem:n-k}, $\mathrm{dim}_I(C_n)=n-\rho(C_n)=3k-k=\frac{2n}{3}=\left\lfloor \frac{2n}{3}\right\rfloor$. This completes the proof of (iii). (iv) Clearly $P_e(K_{r,t})=\{u,v\}$ for any edge $e=uv$ of $K_{r,t}$, while $\rho(K_{r,t})=1$. Since all edges are equivalent by symmetry, Theorem \ref{theorem:n-k} gives $\mathrm{dim}_I(K_{r,t})=r+t-2$. \end{proof} We end this section with a short discussion on the incidence dimension of trees. We recall that $A\triangle B$ denotes the symmetric difference of the sets $A$ and $B$. \begin{theorem} \label{edge} Let $G$ be a graph. If $\mathrm{dim}_I(G)=|V(G)|-\rho(G)-1$, then $G[P_1\triangle P_2]$ is not an empty graph for some maximum packings $P_1$ and $P_2$ of $G$.
\end{theorem} \begin{proof} Let $G$ be a graph and let $S$ be an incidence basis of $G$ such that $\mathrm{dim}_I(G)=|V(G)|-\rho(G)-1$. Let $P=V(G)-S$. Since $|P|=\rho(G)+1$, $P$ is not a packing of $G$. So, there exist two different vertices $u, v \in P$ such that $1\leq \textnormal{d}(u,v) \leq 2$. If $\textnormal{d}(u,v)=2$, then the edges $uw$ and $wv$, where $w$ is a common neighbour of $u$ and $v$, are not resolved by $S$, a contradiction. Thus $\textnormal{d}(u,v)=1$. Moreover, since $S$ is an incidence generator, $P$ induces at most one edge, so $uv$ is the only edge of $G[P]$ and removing $u$ or $v$ from $P$ yields a packing of $G$. Let $P_1=V(G)-(S \cup \{u\})$ and $P_2=V(G)-(S \cup \{v\})$. Both are maximum packings, since $|P_1|=|P_2|=|V(G)|-\left( (|V(G)|-\rho(G)-1)+1 \right) = \rho(G)$. Since $u$ and $v$ are adjacent, it follows that $G[P_1\triangle P_2]$ is not an empty graph, and the proof is completed. \end{proof} The converse implication of Theorem \ref{edge} does not hold in general, as we can see from the example in Figure \ref{example-for-theorem-edge}. The left and middle pictures show two different maximum packings of $G$, denoted with black vertices. On the right there is a picture of $G[P_1\triangle P_2]$, which is clearly not an empty graph. However, the incidence dimension of $G$ is not equal to $|V(G)|-\rho(G)-1$. \begin{figure}[ht!]
\label{example-for-theorem-edge} \begin{center} \begin{tikzpicture}[xscale=0.4, yscale=0.5, style=thick,x=1cm,y=1cm] \tikzstyle{rn}=[circle,fill=white,draw, inner sep=0pt, minimum size=5pt] \tikzstyle{bn}=[circle,fill=black,draw, inner sep=0pt, minimum size=5pt] \node [style=rn] (a1) at (0, 0) {}; \node [style=rn] (a2) at (2, 0) {}; \node [style=bn] (a3) at (4, 0) {}; \node [style=rn] (a4) at (6, 0) {}; \node [style=rn] (a5) at (8, 0) {}; \node [style=rn] (a6) at (10, 0) {}; \node [style=rn] (a7) at (12, 0) {}; \node [style=rn] (b1) at (0, 2) {}; \node [style=rn] (b2) at (6, 2) {}; \node [style=rn] (b3) at (12, 2) {}; \node [style=bn] (c1) at (0, 4) {}; \node [style=bn] (c2) at (6, 4) {}; \node [style=bn] (c3) at (12, 4) {}; \node [style=bn] (d1) at (0, -2) {}; \node [style=bn] (d2) at (12, -2) {}; \draw (a1) -- (a2); \draw (a2) -- (a3); \draw (a3) -- (a4); \draw (a4) -- (a5); \draw (a5) -- (a6); \draw (a6) -- (a7); \draw (a1) -- (b1); \draw (a4) -- (b2); \draw (a7) -- (b3); \draw (b1) -- (c1); \draw (b2) -- (c2); \draw (b3) -- (c3); \draw (a1) -- (d1); \draw (a7) -- (d2); \node [style=rn] (a1) at (14, 0) {}; \node [style=bn] (a2) at (16, 0) {}; \node [style=rn] (a3) at (18, 0) {}; \node [style=rn] (a4) at (20, 0) {}; \node [style=bn] (a5) at (22, 0) {}; \node [style=rn] (a6) at (24, 0) {}; \node [style=rn] (a7) at (26, 0) {}; \node [style=rn] (b1) at (14, 2) {}; \node [style=rn] (b2) at (20, 2) {}; \node [style=rn] (b3) at (26, 2) {}; \node [style=bn] (c1) at (14, 4) {}; \node [style=bn] (c2) at (20, 4) {}; \node [style=bn] (c3) at (26, 4) {}; \node [style=rn] (d1) at (14, -2) {}; \node [style=bn] (d2) at (26, -2) {}; \draw (a1) -- (a2); \draw (a2) -- (a3); \draw (a3) -- (a4); \draw (a4) -- (a5); \draw (a5) -- (a6); \draw (a6) -- (a7); \draw (a1) -- (b1); \draw (a4) -- (b2); \draw (a7) -- (b3); \draw (b1) -- (c1); \draw (b2) -- (c2); \draw (b3) -- (c3); \draw (a1) -- (d1); \draw (a7) -- (d2); \node [style=rn] (a1) at (28, 0) {}; \node [style=bn] (a2) at (30, 0) 
{}; \node [style=bn] (a3) at (32, 0) {}; \node [style=rn] (a4) at (34, 0) {}; \node [style=bn] (a5) at (36, 0) {}; \node [style=rn] (a6) at (38, 0) {}; \node [style=rn] (a7) at (40, 0) {}; \node [style=rn] (b1) at (28, 2) {}; \node [style=rn] (b2) at (34, 2) {}; \node [style=rn] (b3) at (40, 2) {}; \node [style=rn] (c1) at (28, 4) {}; \node [style=rn] (c2) at (34, 4) {}; \node [style=rn] (c3) at (40, 4) {}; \node [style=bn] (d1) at (28, -2) {}; \node [style=rn] (d2) at (40, -2) {}; \draw (a2) -- (a3); \end{tikzpicture} \end{center} \caption{An example showing that the converse implication of Theorem \ref{edge} does not hold in general.} \end{figure} We next provide an exact result for the class of trees with a unique maximum packing. Two characterizations of such trees were presented recently in \cite{BoPe}. \begin{theorem} If $T$ is a tree with a unique maximum packing, then $\mathrm{dim}_I(T)=|V(T)|-\rho(T)$. \end{theorem} \begin{proof} Let $T$ be a tree and $P$ its unique maximum packing. We use the contrapositive of Theorem~\ref{edge}: \textit{if $G[P_1\triangle P_2]$ is an empty graph for all maximum packings $P_1$ and $P_2$ of $G$, then $\mathrm{dim}_I(G)\neq|V(G)|-\rho(G)-1$.} Since $P$ is the unique maximum packing of $T$, any maximum packings $P_1$ and $P_2$ satisfy $P_1=P_2=P$, so $T[P_1\triangle P_2]$ is an empty graph, and hence $\mathrm{dim}_I(T)\neq|V(T)|-\rho(T)-1$. By the corollary of Theorem~\ref{theorem:n-k}, we conclude that $\mathrm{dim}_I(T)=|V(T)|-\rho(T)$. \end{proof} \section{Complexity of the problem} In this section we consider the computational complexity of computing the incidence dimension of a graph. It is well known that the problem of calculating the metric dimension of a graph is NP-hard, as stated in the book \cite{garey} and formally proved in \cite{Khuller1996}. We show that the problem of finding the incidence dimension of an arbitrary graph is also NP-hard.
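In contrast, for tiny graphs the incidence dimension can still be computed by exhaustive search, with a cost that grows exponentially, which is consistent with the hardness just stated. As an illustrative sanity check (not part of the paper), the brute force below reproduces values from Proposition \ref{basic-values}.

```python
from itertools import combinations

# Exhaustive computation of dim_I (exponential, feasible only for tiny
# graphs); it reproduces values from Proposition basic-values:
# dim_I(P_5) = 2, dim_I(C_5) = 3, dim_I(K_4) = 3.

def path(n):
    return [(i, i + 1) for i in range(n - 1)]

def cycle(n):
    return [(i, (i + 1) % n) for i in range(n)]

def complete(n):
    return list(combinations(range(n), 2))

def dim_I(n, edges):
    """Minimum cardinality of an incidence generator, by brute force."""
    def resolves(S):
        # every pair of edges is incident with exactly one vertex of S
        return all(any((x in e) != (x in f) for x in S)
                   for e, f in combinations(edges, 2))
    return next(r for r in range(n + 1)
                for S in combinations(range(n), r) if resolves(set(S)))

print(dim_I(5, path(5)), dim_I(5, cycle(5)), dim_I(4, complete(4)))  # 2 3 3
```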
For this, we strongly rely on edge-triangular graphs. We first consider the following decision problem. \begin{equation*} \begin{tabular}{|l|} \hline \mbox{INCIDENCE DIMENSION PROBLEM (IDIM problem for short)} \\ \mbox{INSTANCE: A graph $G$ of order $n\ge 3$ and an integer $1\le r\le n-1$.} \\ \mbox{QUESTION: Is $\mathrm{dim}_I(G)\le r$?} \\ \hline \end{tabular}% \end{equation*}% \newline To study the complexity of the IDIM problem we make a reduction from the 3-SAT problem, which is one of the classical NP-complete problems in the literature. For more information on the 3-SAT problem and the reducibility of NP-complete problems in general, we suggest \cite{garey}. \begin{theorem}\label{theorem:NP-complete} The IDIM problem is NP-complete. \end{theorem} \begin{proof} For a set of vertices $S$ guessed for the problem by a nondeterministic algorithm, one needs to iterate through all pairs of edges and check that every pair is incidently resolved by one vertex from $S$. This can be done in polynomial time, and therefore the IDIM problem is in NP. We make a polynomial transformation of the 3-SAT problem to the IDIM problem in the following way. Consider an arbitrary instance of the 3-SAT problem, \emph{i.e.}, a finite set $U=\{u_1,\ldots,u_n\}$ of Boolean variables and a collection $C=\{c_1, \ldots, c_m\}$ of clauses over those Boolean variables. We will construct a graph $G=(V,E)$ and a positive integer $r \leq |V|-1$ such that $\mathrm{dim}_I(G) \leq r$ if and only if $C$ is satisfiable. The construction will be made up of several gadgets and edges between them. For each variable $u_i \in U$, $1\leq i\leq n$, we construct a truth-setting gadget $X_i=(V_i,E_i)$, with $V_i=\{x_i,y_i,z_i,w_i,T_i,F_i\}$ and $E_i=\{x_iy_i,x_iz_i,y_iz_i,y_iw_i,z_iw_i,w_iT_i,w_iF_i,T_iF_i\}$, see Figure \ref{figure:NP1}.
Each truth-setting gadget is connected with the rest of the graph only through the vertices $T_i$ and $F_i$, which represent the values \texttt{TRUE} and \texttt{FALSE}, respectively. \begin{claim}\label{remark:NP1} Let $u_i$ be an arbitrary variable in $U$. Any incidence generator $S$ must contain at least four vertices from its truth-setting gadget. Moreover, if there are exactly four vertices from a truth-setting gadget in $S$, then $y_i,w_i,z_i\in S$ and $x_i\notin S$. \end{claim} \proof Towards a contradiction, suppose that there exists an incidence generator $S$ with fewer than four vertices from the truth-setting gadget corresponding to $u_i$. It follows that there exists a set of three vertices $W_i=\{v_1,v_2,v_3\} \subset V_i$ that are not in $S$. Partition $V_i$ into two sets $V_i=\{x_i,y_i,z_i \} \cup \{ w_i, T_i, F_i\}$. At least two vertices from $W_i$ lie in one of the partition sets. Since each partition set induces a triangle, it follows that there is an edge with both endpoints outside $S$. This is a contradiction with Proposition \ref{proposition:edge-triangular}. Suppose now that exactly four vertices from a truth-setting gadget are in $S$. If $w_i\notin S$, then $T_i,F_i,y_i,z_i\in S$, because otherwise we have a contradiction with Proposition \ref{proposition:edge-triangular}. But then $x_i\notin S$ and the edges $w_iz_i$ and $x_iz_i$ are not distinguished by $S$, a contradiction. Hence, $w_i\in S$. If $y_i\notin S$ (resp. $z_i\notin S$), then $x_i\in S$ and $z_i\in S$ (resp. $y_i\in S$) to fulfill Proposition \ref{proposition:edge-triangular}. Clearly, exactly one of $T_i$ and $F_i$ is in $S$. If $T_i\in S$ (the case $F_i\in S$ is symmetric), then the edges $F_iw_i$ and $y_iw_i$ (resp. $z_iw_i$) are not distinguished by $S$, a contradiction. Thus, $y_i,z_i\in S$. If in addition $x_i\in S$, then for the triangle $w_iT_iF_i$ we have a contradiction with Proposition \ref{proposition:edge-triangular}.
Therefore, $x_i\notin S$.~{\tiny ($\Box$)} \medskip \begin{figure}[h] \centering \begin{tikzpicture}[scale=.9, transform shape] \node [draw, shape=circle] (a1) at (0.2,0) {}; \node [draw, shape=circle] (a2) at (1.8,0) {}; \node [draw, shape=circle] (a3) at (1,1) {}; \node [draw, shape=circle] (a4) at (0.2,2) {}; \node [draw, shape=circle] (a5) at (1.8,2) {}; \node [draw, shape=circle] (a6) at (1,3) {}; \draw (0.1,0) node[left] {$T_i$}; \draw (1.9,0) node[right] {$F_i$}; \draw (0.9,1) node[left] {$w_i$}; \draw (0.1,2) node[left] {$y_i$}; \draw (1.9,2) node[right] {$z_i$}; \draw (1,3.1) node[above] {$x_i$}; \foreach \from/\to in { a1/a2, a1/a3, a2/a3, a3/a4, a3/a5, a4/a5, a4/a6, a5/a6} \draw (\from) -- (\to); \end{tikzpicture} \caption{The truth-setting gadget for variable $u_i$.}\label{figure:NP1} \end{figure} For each clause $c_j=y_j^1 \vee y_j^2 \vee y_j^3$, $1\leq j\leq m$, where $y_j^k$ is a literal in the clause $c_j$, we construct a satisfaction testing gadget $Y_j=(V_j',E_j')$, with $V_j'=\{a_j^1,b_j^1,c_j^1,a_j^2,b_j^2,c_j^2, a_j^3,b_j^3,c_j^3\}$ and $E_j'=\{a_j^1b_j^1,a_j^1c_j^1,b_j^1c_j^1,a_j^2b_j^2,a_j^2c_j^2,b_j^2c_j^2,a_j^3b_j^3,a_j^3c_j^3, b_j^3c_j^3,a_j^1a_j^2,a_j^2a_j^3,a_j^3a_j^1,b_j^1b_j^2,b_j^2b_j^3,b_j^3b_j^1,c_j^1c_j^2, c_j^2c_j^3,c_j^3c_j^1\}$ (see Figure \ref{figure:NP2}). Notice that the satisfaction testing gadget is isomorphic to the Cartesian product $C_3 \square C_3$. \begin{claim}\label{remark:NP2} Let $c_j$ be an arbitrary clause in $C$ and $Y_j=(V_j',E_j')$ its satisfaction testing gadget. Then any incidence generator must contain at least 8 vertices from $V_j'$. \end{claim} \proof Towards a contradiction, suppose there exists an incidence generator $S$ with fewer than 8 vertices from the satisfaction testing gadget corresponding to $c_j$. It follows that there exist two vertices $x,y \in V_j'$ that are not in $S$. Since the diameter of $C_3 \square C_3$ is 2, the vertices $x$ and $y$ are either at distance one or two.
Every edge of $Y_j$ is a part of some triangle, and so $x$ and $y$ cannot be at distance one, due to Proposition \ref{proposition:edge-triangular}. Thus, there is a vertex $z$ such that the edges $xz$ and $zy$ exist. But those two edges are not resolved by any endpoint, a contradiction.~{\tiny ($\Box$)} \medskip \begin{figure}[h] \centering \begin{tikzpicture}[scale=.9, transform shape] \node [draw, shape=circle] (a1) at (-3,-2) {}; \node [draw, shape=circle] (b1) at (-3.8,0) {}; \node [draw, shape=circle] (c1) at (-2.2,0) {}; \node [draw, shape=circle] (a2) at (0,-2) {}; \node [draw, shape=circle] (b2) at (-0.8,0) {}; \node [draw, shape=circle] (c2) at (0.8,0) {}; \node [draw, shape=circle] (a3) at (3,-2) {}; \node [draw, shape=circle] (b3) at (2.2,0) {}; \node [draw, shape=circle] (c3) at (3.8,0) {}; \draw (-3.1,-2) node[left] {$a_j^1$}; \draw (-3.9,0) node[left] {$b_j^1$}; \draw (-2.1,0) node[right] {$c_j^1$}; \draw (0,-2.1) node[below] {$a_j^2$}; \draw (-0.9,0) node[left] {$b_j^2$}; \draw (0.9,0) node[right] {$c_j^2$}; \draw (3.1,-2) node[right] {$a_j^3$}; \draw (2.1,0) node[left] {$b_j^3$}; \draw (3.9,0) node[right] {$c_j^3$}; \foreach \from/\to in { a1/b1, b1/c1, c1/a1, a2/b2, b2/c2, c2/a2, a3/b3, b3/c3, c3/a3, a1/a2, a2/a3} \draw (\from) -- (\to); \foreach \from/\to in { c1/c2, c2/c3, c1/c3} \draw (\from) edge[out=45, in=135] (\to); \foreach \from/\to in { a1/a3, b1/b2, b2/b3, b1/b3} \draw (\from) edge[out=-45, in=-135] (\to); \end{tikzpicture} \caption{The satisfaction testing gadget for clause $c_j$.}\label{figure:NP2} \end{figure} We also add some edges to connect the truth-setting gadgets with corresponding satisfaction testing gadgets. If a variable $u_i$ occurs as a literal $y_j^k$ in a clause $c_j=y_j^1 \vee y_j^2 \vee y_j^3$, then we add the following edges. If $y_j^k$ is a positive literal then we add the edges $F_ib_j^k$ and $F_ic_j^k$. If a variable $y_j^k$ is a negative literal in a clause $c_j$, then we add the edges $T_ib_j^k$ and $T_ic_j^k$. 
For each clause $c_j \in C$ denote those six added edges with $E_j''$. We call them \textit{communication} edges. Figure \ref{figure:NP3} shows the edges that were added corresponding to the clause $c_j= (\overline{u_1} \vee \overline{u_2} \vee u_3)$, where $\overline{u_1}$ and $\overline{u_2}$ represent the negative literals corresponding to the variables $u_1$ and $u_2$, respectively. \begin{figure}[h] \centering \begin{tikzpicture}[scale=.9, transform shape] \node [draw, shape=circle] (a11) at (-4.8,2) {}; \node [draw, shape=circle] (a12) at (-3.2,2) {}; \node [draw, shape=circle] (a13) at (-4,3) {}; \node [draw, shape=circle] (a14) at (-4.8,4) {}; \node [draw, shape=circle] (a15) at (-3.2,4) {}; \node [draw, shape=circle] (a16) at (-4,5) {}; \draw (-4.9,2) node[left] {$T_1$}; \draw (-3.1,2) node[right] {$F_1$}; \draw (-4.1,3) node[left] {$w_1$}; \draw (-4.9,4) node[left] {$y_1$}; \draw (-3.1,4) node[right] {$z_1$}; \draw (-4,5.1) node[above] {$x_1$}; \foreach \from/\to in { a11/a12, a11/a13, a12/a13, a13/a14, a13/a15, a14/a15, a14/a16, a15/a16} \draw (\from) -- (\to); \node [draw, shape=circle] (a21) at (-0.8,2) {}; \node [draw, shape=circle] (a22) at (0.8,2) {}; \node [draw, shape=circle] (a23) at (0,3) {}; \node [draw, shape=circle] (a24) at (-0.8,4) {}; \node [draw, shape=circle] (a25) at (0.8,4) {}; \node [draw, shape=circle] (a26) at (0,5) {}; \draw (-0.9,2) node[left] {$T_2$}; \draw (0.9,2) node[right] {$F_2$}; \draw (-0.1,3) node[left] {$w_2$}; \draw (-0.9,4) node[left] {$y_2$}; \draw (0.9,4) node[right] {$z_2$}; \draw (0,5.1) node[above] {$x_2$}; \foreach \from/\to in { a21/a22, a21/a23, a22/a23, a23/a24, a23/a25, a24/a25, a24/a26, a25/a26} \draw (\from) -- (\to); \node [draw, shape=circle] (a31) at (3.2,2) {}; \node [draw, shape=circle] (a32) at (4.8,2) {}; \node [draw, shape=circle] (a33) at (4,3) {}; \node [draw, shape=circle] (a34) at (3.2,4) {}; \node [draw, shape=circle] (a35) at (4.8,4) {}; \node [draw, shape=circle] (a36) at (4,5) {}; \draw (3.1,2) node[left]
{$T_3$}; \draw (4.9,2) node[right] {$F_3$}; \draw (3.9,3) node[left] {$w_3$}; \draw (3.1,4) node[left] {$y_3$}; \draw (4.9,4) node[right] {$z_3$}; \draw (4,5.1) node[above] {$x_3$}; \foreach \from/\to in { a31/a32, a31/a33, a32/a33, a33/a34, a33/a35, a34/a35, a34/a36, a35/a36} \draw (\from) -- (\to); \node [draw, shape=circle] (a1) at (-3,-2) {}; \node [draw, shape=circle] (b1) at (-3.8,0) {}; \node [draw, shape=circle] (c1) at (-2.2,0) {}; \node [draw, shape=circle] (a2) at (0,-2) {}; \node [draw, shape=circle] (b2) at (-0.8,0) {}; \node [draw, shape=circle] (c2) at (0.8,0) {}; \node [draw, shape=circle] (a3) at (3,-2) {}; \node [draw, shape=circle] (b3) at (2.2,0) {}; \node [draw, shape=circle] (c3) at (3.8,0) {}; \draw (-3.1,-2) node[left] {$a_j^1$}; \draw (-3.9,0) node[left] {$b_j^1$}; \draw (-2.1,0) node[right] {$c_j^1$}; \draw (0,-2.1) node[below] {$a_j^2$}; \draw (-0.9,0) node[left] {$b_j^2$}; \draw (0.9,0) node[right] {$c_j^2$}; \draw (3.1,-2) node[right] {$a_j^3$}; \draw (2.1,0) node[left] {$b_j^3$}; \draw (3.9,0) node[right] {$c_j^3$}; \foreach \from/\to in { a1/b1, b1/c1, c1/a1, a2/b2, b2/c2, c2/a2, a3/b3, b3/c3, c3/a3, a1/a2, a2/a3, a11/b1, a11/c1, a21/b2, a21/c2, a32/b3, a32/c3} \draw (\from) -- (\to); \foreach \from/\to in { c1/c2, c2/c3, c1/c3} \draw (\from) edge[out=45, in=135] (\to); \foreach \from/\to in { a1/a3, b1/b2, b2/b3, b1/b3} \draw (\from) edge[out=-45, in=-135] (\to); \end{tikzpicture} \caption{The subgraph associated to the clause $c_j=(\overline{u_1} \vee \overline{u_2} \vee u_3)$.}\label{figure:NP3} \end{figure} The construction of the IDIM instance is then completed by setting $r=4n+8m$ and $G=(V,E)$, where $$ V= \left( \bigcup_{i=1}^n V_i \right) \cup \left( \bigcup_{j=1}^m V_j' \right)$$ and $$ E= \left( \bigcup_{i=1}^n E_i \right) \cup \left( \bigcup_{j=1}^m E_j' \right) \cup \left( \bigcup_{j=1}^m E_j'' \right) $$ One can do the described construction in polynomial time. Notice that the graph $G$ is edge-triangular. 
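The gadget construction above is mechanical enough to be machine-checked. The following sketch (illustrative only, not part of the proof; all function and variable names are ours) builds $G$ for a small 3-SAT instance and verifies the vertex and edge counts and the edge-triangularity used in the argument:

```python
from itertools import product

def build_idim_instance(n_vars, clauses):
    """Build the reduction graph G for a 3-SAT instance.

    clauses: 3-tuples of signed variable indices, e.g. (-1, -2, 3)
    encodes (not u1 or not u2 or u3).  Vertices are tuples named after
    the gadget labels used in the text.
    """
    E = set()
    add = lambda u, v: E.add(frozenset((u, v)))
    # Truth-setting gadget X_i on {x_i, y_i, z_i, w_i, T_i, F_i}.
    for i in range(1, n_vars + 1):
        x, y, z, w, T, F = ((s, i) for s in "xyzwTF")
        for u, v in [(x, y), (x, z), (y, z), (y, w), (z, w),
                     (w, T), (w, F), (T, F)]:
            add(u, v)
    # Satisfaction-testing gadget Y_j, isomorphic to C3 x C3.
    for j, clause in enumerate(clauses, start=1):
        cell = {(s, k): (s, k, j) for s, k in product("abc", (1, 2, 3))}
        for k in (1, 2, 3):                      # triangles a-b-c per column
            add(cell["a", k], cell["b", k])
            add(cell["a", k], cell["c", k])
            add(cell["b", k], cell["c", k])
        for s in "abc":                          # triangles within each row
            add(cell[s, 1], cell[s, 2])
            add(cell[s, 2], cell[s, 3])
            add(cell[s, 3], cell[s, 1])
        # Communication edges: positive literal -> F_i, negative -> T_i.
        for k, lit in enumerate(clause, start=1):
            end = ("F" if lit > 0 else "T", abs(lit))
            add(end, cell["b", k])
            add(end, cell["c", k])
    V = {v for e in E for v in e}
    return V, E

def is_edge_triangular(V, E):
    """Check that every edge lies on a triangle."""
    adj = {v: set() for v in V}
    for e in E:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    return all(adj[u] & adj[v] for u, v in map(tuple, E))

# The instance of Figure NP3: c_1 = (not u1 or not u2 or u3), n = 3, m = 1.
V, E = build_idim_instance(3, [(-1, -2, 3)])
```

For this instance $|V| = 6n + 9m = 27$ and $r = 4n + 8m = 20$, matching the counts used in the proof.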
If we show that $C$ is satisfiable if and only if $G$ has incidence dimension at most $r$, then the proof of NP-completeness is completed. From Claims \ref{remark:NP1} and \ref{remark:NP2} we get the following corollary. \begin{corollary}\label{corollary:np} The incidence dimension of the graph $G$ constructed above is at least $r=4n+8m$. \end{corollary} The following lemmas together with Corollary \ref{corollary:np} complete the proof for the IDIM problem being NP-complete. \begin{lemma}\label{lema:NP1} If $C$ is satisfiable, then $\mathrm{dim}_I(G)=r$. \end{lemma} \proof We construct an incidence generator $S$ of size $r$ based on a truth assignment of the elements from the set $U$ that satisfies the collection of clauses $C$. Let $t:U \rightarrow \{\texttt{TRUE,FALSE}\}$ be a truth assignment that satisfies the collection of clauses $C$. For each clause $c_j=y_j^1 \vee y_j^2 \vee y_j^3$ from $C$, put into $S$ the vertices $b_j^k,c_j^k$ for $k \in \{1,2,3\}$. Since the collection of clauses $C$ is satisfiable, there exists a literal $y_j^k$, $k \in \{1,2,3\}$, that satisfies $c_j$. Fix one such $k$ and put into $S$ the other two vertices $a_j^\ell$ for $\ell \in \{1,2,3\} \setminus \{k\}$. For each Boolean variable $u_i \in U$, put into $S$ the vertices $\{y_i,z_i,w_i\}$. Also add to the set $S$ the vertex $F_i$ if $t(u_i)=\texttt{TRUE}$, or the vertex $T_i$ if $t(u_i)=\texttt{FALSE}$. The cardinality of the constructed set $S$ is clearly $r=4n+8m$. We now take a look at the set $X=V(G) \setminus S$. For each $u_i \in U$, the set $X$ contains $x_i$ and exactly one of the vertices $T_i, F_i$. The distance between these two vertices is three. For each $c_j \in C$ exactly one of the vertices $a_j^1,a_j^2,a_j^3$ is in $X$; it corresponds to the literal that satisfies $c_j$. It follows that this vertex is at distance three or more from all the other vertices in $X$.
All other possible pairs of vertices in $X$ are also at distance at least three. It follows that $X$ is a packing of $G$. Since $G$ is edge-triangular, Corollary \ref{corollary:edge-triangular} implies that $S$ is an incidence generator for $G$.~{\tiny ($\Box$)} \medskip \begin{lemma}\label{lema:NP2} If $\mathrm{dim}_I(G)=r$, then the collection of clauses $C$ is satisfiable. \end{lemma} \proof Let $S$ be an arbitrary incidence generator for $G$ with cardinality $r$. The set $S$ must contain at least four vertices from each truth-setting gadget and at least eight vertices from each satisfaction testing gadget, due to Claims \ref{remark:NP1} and \ref{remark:NP2}. Since $|S|=r=8m+4n$, it follows that $S$ contains exactly four vertices from each truth-setting gadget and exactly eight vertices from each satisfaction testing gadget. Since the graph $G$ is edge-triangular, Corollary \ref{corollary:edge-triangular} implies that $X=V(G) \setminus S$ is a packing of $G$. Moreover, for each $i\in \{1, \ldots, n\}$ it holds that $x_i\in X$ and exactly one of the vertices $T_i, F_i$ is in $X$, by Claim \ref{remark:NP1}. For each $j \in \{1, \ldots, m\}$ exactly one of the vertices $a_j^1, a_j^2, a_j^3$ is in $X$, because $b_j^i$ and $c_j^i$, $i\in \{1, 2,3\}$, are in a common triangle with either $T_\ell$ or $F_\ell$, where $u_\ell$ belongs to the clause $c_j$. We now define a truth assignment that satisfies all clauses of $C$. For an arbitrary $i\in \{1, \ldots, n\}$, let $v_i \in \{T_i,F_i\} \cap X$. Let $t: U \rightarrow \{\texttt{TRUE,FALSE}\}$ be as follows: $$t(u_i)=\left\{\begin{array}{ll} \texttt{TRUE}, & v_i=T_i , \\ \texttt{FALSE}, & v_i=F_i . \end{array} \right.$$ We need to show that $t$ is a satisfying truth assignment for $C$. Let $c_j=y_j^1 \vee y_j^2 \vee y_j^3 \in C$ be an arbitrary clause and denote the corresponding Boolean variables by $u_{j_1},u_{j_2},u_{j_3}$, respectively.
To show that at least one of its literals has the value \texttt{TRUE}, take the vertex from $V_j'$ that belongs to $X$. Exactly one of the vertices $a_j^1, a_j^2, a_j^3$ is in $X$; let it be $a_j^k$, $k \in \{1,2,3\}$. The communication edges are added in such a way that $a_j^k$ can be in $X$ (a packing set) only if $u_{j_k}$ occurs in $c_j$ as: \begin{itemize} \item a positive literal and $v_{j_k} =T_{j_k}$; \item a negative literal and $v_{j_k} =F_{j_k}$. \end{itemize} In both cases $c_j$ is satisfied by the literal corresponding to the variable $u_{j_k}$. It finally follows that $C$ is satisfiable, which completes the proof of this lemma.~{\tiny ($\Box$)} \medskip Lemmas \ref{lema:NP1} and \ref{lema:NP2} show that the above construction is a polynomial transformation from 3-SAT to the IDIM problem. Therefore, the proof of Theorem \ref{theorem:NP-complete} is complete. \end{proof} The proof of Theorem \ref{theorem:NP-complete} yields the following result. \begin{corollary} \label{np-hard} The problem of finding the incidence dimension of a graph is NP-hard. \end{corollary} \section{Some final remarks on $\mathrm{dim}_I(G)$} Given any graph $G$, it is easy to see that the set $V(G)$ minus one arbitrary vertex is an incidence generator for $G$. On the other hand, given an incidence basis $S$ for $G$, all but possibly one edge in $E(G)$ have at least one endpoint in $S$. Moreover, for any three edges incident with the same vertex, at least two different endpoints of two different edges must be in $S$ as well. In consequence, the following bounds are easy to deduce.
\begin{remark} \label{trivial-bounds} If $G$ is a connected graph of order $n$ with at least two edges, then $$\left\lfloor\frac{n}{2}\right\rfloor\le \mathrm{dim}_I(G)\le n-1.$$ \end{remark} The lower bound of Remark~\ref{trivial-bounds} is achieved for a path $P_n$, $n\in\{3,4,5,6,8\}$, a cycle $C_4$, a star $K_{1,3}$ and some graphs obtained by attaching a pendant vertex or an edge to some vertices of the previously mentioned examples. While it is not clear if this list is complete, we can entirely describe all graphs achieving the upper bound of Remark \ref{trivial-bounds}. \begin{proposition} Let $G$ be a connected graph of order $n$ with at least two edges. Then $\mathrm{dim}_I(G)=n-1$ if and only if any two vertices of $G$ have a common neighbor. \end{proposition} \begin{proof} If there are two different vertices $x,y\in V(G)$ with no common neighbor, then it is not difficult to see that the set $V(G)-\{x,y\}$ is an incidence generator for $G$. Thus, if $\mathrm{dim}_I(G)=n-1$, then any two different vertices of $G$ must have a common neighbor, and vice versa. \end{proof} Now, concerning the bounds of Remark \ref{trivial-bounds}, we next study the existence of graphs $G$ of order $n$ and incidence dimension $r$ for any $r,n$ such that $\left\lfloor\frac{n}{2}\right\rfloor\le r\le n-1$. \begin{proposition} For any integers $r,n$ with $2\le \left\lfloor\frac{n}{2}\right\rfloor\le r\le n-1$ there exists a graph $G$ of order $n$ such that $\mathrm{dim}_I(G)=r$. \end{proposition} \begin{proof} If $r=2$, then $n\in\{4,5\}$. In these situations, the graphs $P_4$ and $P_5$, respectively, satisfy our requirements. Hence, from now on we may assume $r\ge 3$. Suppose first that $n$ is odd and $r=\left\lfloor\frac{n}{2}\right\rfloor < n-1$. Let $G_{r,n}$ be the graph obtained as follows. \begin{itemize} \item We begin with a complete graph $K_r$ with vertex set $V=\{u_1,\ldots,u_r\}$. \item Add $r+1$ vertices $w,v_1,\ldots,v_r$.
\item Add the edges $u_iv_i$ for every $i\in \{1,\dots,r\}$ and the edge $wv_1$. \end{itemize} Clearly, $G_{r,n}$ has order $2r+1=n$. It is not difficult to see that $V$ is an incidence generator for $G_{r,n}$ and so, $\mathrm{dim}_I(G_{r,n})\le r$. Now, suppose $\mathrm{dim}_I(G_{r,n}) < r$ and let $S$ be an incidence basis for $G_{r,n}$. That means that there is at least one vertex $u_j\in V$ such that $u_j\notin S$. If there exists some other vertex $u_i\in V$, $i\ne j$, such that $u_i\notin S$, then there are two edges $u_ju_k$, $u_iu_k$, with $k\ne i,j$ (since $r\ge 3$), such that they are not incidently resolved by $S$, a contradiction. Thus, $V-\{u_j\}\subseteq S$, which means $|S| = r-1$ and $S=V-\{u_j\}$. But, in such a case, the edges $wv_1$ and $u_jv_j$ are not incidently resolved by $S$, which is a contradiction again. As a consequence, $\mathrm{dim}_I(G_{r,n})=r$. We next consider ($n$ is even and $r=\left\lfloor\frac{n}{2}\right\rfloor < n-1$) or $\left\lfloor\frac{n}{2}\right\rfloor < r < n-1$. Let $G'_{r,n}$ be the graph obtained as follows. \begin{itemize} \item We begin with a complete graph $K_r$ with vertex set $V=\{u_1,\ldots,u_r\}$. \item Add $n-r$ vertices $v_1,\ldots,v_{n-r}$. \item Add the edges $u_iv_i$ for every $i\in \{1,\ldots,n-r-1\}$. \item Add the edges $v_{n-r}u_{i}$ for every $i\in \{n-r,\ldots,r\}$. \item Add the edge $v_1v_2$ (notice that such two vertices always exist because $r<n-1$). \end{itemize} Clearly, the order of $G'_{r,n}$ is $n$ and we can easily notice that $V$ is an incidence generator for $G'_{r,n}$ and so, $\mathrm{dim}_I(G'_{r,n})\le r$. Hence, suppose $\mathrm{dim}_I(G'_{r,n}) < r$ and let $S'$ be an incidence basis for $G'_{r,n}$. In consequence, there is at least one vertex $u_j\in V$ such that $u_j\notin S'$. A similar procedure as earlier leads to the fact that $|S'| = r-1$ and that $S'=V-\{u_j\}$.
However, in this case there is an edge $u_jv_l$ for some $l\in \{1,\dots,n-r\}$ such that the edges $v_1v_2$ and $u_jv_l$ are not incidently resolved by $S'$, a contradiction. Therefore, $\mathrm{dim}_I(G'_{r,n}) = r$. We finally consider the situation $r = n-1$, which is straightforward to realize by just taking the complete graph $K_n$, and that completes the proof. \end{proof} It is natural to think that the incidence dimension is related to the (edge, or adjacency) dimension of graphs. Accordingly, we conclude this work by comparing $\mathrm{dim}_I(G)$ with $\mathrm{dim_e}(G)$ and $\mathrm{dim}_A(G)$. \begin{proposition}\label{inc-adj-edge} For any graph $G$ without isolated vertices, $\mathrm{dim}_I(G)\ge \max\{\mathrm{dim}_A(G),\mathrm{dim_e}(G)\}$. \end{proposition} \begin{proof} Let $S$ be an incidence basis for $G$. Consider two different vertices $x,y\in V(G)-S$. If $N(x)\cap S=\emptyset$ and $N(y)\cap S=\emptyset$, then, since $G$ has no isolated vertices, there are at least two edges $xx'$ and $yy'$ such that $x',y'\notin S$. Thus, $xx'$ and $yy'$ are not incidently resolved by any vertex of $S$, which is a contradiction. So $N(x)\cap S\ne \emptyset$ or $N(y)\cap S\ne\emptyset$. Now, suppose $N(x)\cap S=N(y)\cap S$. Hence, there exists a vertex $w\in S$ such that the edges $xw$ and $yw$ are not incidently resolved by any vertex of $S$, a contradiction again. Thus, $N(x)\cap S\ne N(y)\cap S$ and, as a consequence, $S$ is an adjacency generator for $G$ and $\mathrm{dim}_I(G)\ge \mathrm{dim}_A(G)$. Now, since any two edges $e_1,e_2$ are incident with at least two different vertices $x,y$, and at least one of $x,y$ must be in $S$, it is clear that the edges $e_1,e_2$ are distinguished by $x$ or $y$. So, $S$ is also an edge metric generator for $G$, and $\mathrm{dim}_I(G)\ge \mathrm{dim_e}(G)$, which completes the proof. \end{proof} From \cite{JanOmo2012} we know that $\mathrm{dim}_A(K_{r,t})=r+t-2$.
Also, from \cite{KeTrYe}, we have $\mathrm{dim_e}(K_{r,t})=r+t-2$. Now, from Proposition \ref{basic-values} (iv), we observe that the bound of Proposition \ref{inc-adj-edge} is tight. In this case, we have the equality $\mathrm{dim}_I(K_{r,t})=\mathrm{dim}_A(K_{r,t})=\mathrm{dim_e}(K_{r,t})$. An interesting problem is then to characterize the families of graphs for which the bound of Proposition \ref{inc-adj-edge} is achieved, and moreover, to determine whether the situations $\mathrm{dim}_I(G)=\mathrm{dim}_A(G)\ne \mathrm{dim_e}(G)$, $\mathrm{dim}_I(G)=\mathrm{dim_e}(G)\ne\mathrm{dim}_A(G)$ or $\mathrm{dim}_I(G)=\mathrm{dim}_A(G)=\mathrm{dim_e}(G)$ can occur. \section*{Acknowledgements} Aleksander Kelenc and Iztok Peterin are partially supported by the Slovenian Research Agency under the projects No. N1-0063 and No. P1-0297, respectively, and both by the project No. J1-9109.
\section{Introduction} In the constituent quark model, mesons consist of a quark-antiquark pair ($q\bar q$)~\cite{2007-Klempt-p1-202, PhysRevD.98.030001}. They can be characterized by the isospin $I$, the total angular momentum $J$, the parity $P$, and the charge-conjugation parity $C$ (for charge-neutral states). For a fermion-antifermion system ($q\bar q$), these quantum numbers are given as \begin{eqnarray} I=0, 1, \, J=0, 1, 2\cdots, \, P=(-1)^{L+1}\, , \, C=(-1)^{L+S}\, , \end{eqnarray} where $L$ is the relative orbital angular momentum between $q$ and $\bar q$ and $S$ the total spin. For charged mesons, it is useful to define the $G$ parity instead of the $C$ parity \begin{eqnarray} G=(-1)^IC=(-1)^{L+S+I}\, . \end{eqnarray} For such meson states, the allowed $J\leq2$ quantum numbers are $J^{PC}=0^{-+}\, , 0^{++}\, , 1^{--}\, , 1^{+-}\, , 1^{++}\, , 2^{--}\, , 2^{-+}\, , 2^{++}$. The combinations $J^{PC}=0^{--}\, , 0^{+-}\, , 1^{-+}\, , 2^{+-}$ are not allowed for conventional $q\bar q$ systems. In other words, they are exotic quantum numbers in the quark model. However, these exotic quantum numbers can be reached in other configurations, such as hybrids~\cite{Ho:2018cat,Zhang:2013rya,Huang:2014hya,Huang:2016upt,Chetyrkin:2000tj}, glueballs~\cite{Qiao:2015iea,Pimikov:2017bkk} and tetraquarks, which are not forbidden by QCD itself. Tetraquarks ($qq\bar q\bar q$) are bound states of diquarks ($qq$) and antidiquarks ($\bar q\bar q$). Their existence was first suggested by R. Jaffe in 1977~\cite{1977-Jaffe-p267-267,1977-Jaffe-p281-281}. The light scalar mesons have been considered as good candidates for tetraquarks \cite{2004-Maiani-p212002-212002,2007-Chen-p94025-94025,2007-Chen-p369-372}. For hadrons with exotic quantum numbers, the $1^{-+}$ hybrid meson has been extensively studied since it was predicted to be the lightest hybrid state~\cite{2015-Meyer-p21-58}.
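The quantum-number counting above can be cross-checked by direct enumeration from $P=(-1)^{L+1}$ and $C=(-1)^{L+S}$; a small illustrative sketch (names are ours, not from the paper):

```python
def qqbar_jpc(l_max=4):
    """J^PC values of a neutral q-qbar state: S in {0, 1},
    P = (-1)^(L+1), C = (-1)^(L+S), and J runs from |L-S| to L+S."""
    allowed = set()
    for L in range(l_max + 1):
        for S in (0, 1):
            P = "+" if (L + 1) % 2 == 0 else "-"
            C = "+" if (L + S) % 2 == 0 else "-"
            for J in range(abs(L - S), L + S + 1):
                allowed.add(f"{J}{P}{C}")
    return allowed

allowed = qqbar_jpc()
# The J <= 2 combinations missing from the list are exactly the exotic ones.
exotic = [jpc for jpc in ("0--", "0+-", "1-+", "2+-") if jpc not in allowed]
```

Raising `l_max` adds higher-$J$ states but never produces the four exotic combinations, in agreement with the discussion above.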
In particular, there is now some evidence for the existence of such hybrid mesons~\cite{1997-Thompson-p1630-1633,1999-Abele-p349-355,2007-Adams-p27-31}. The $1^{-+}$ light tetraquarks have also been studied in Refs.~\cite{2008-Chen-p117502-117502,2008-Chen-p54017-54017}, where their masses were predicted in both the $I=0$ and $I=1$ channels. In Refs.~\cite{2009-Jiao-p114034-114034,2017-Huang-p76017-76017}, a light tetraquark state with $J^{PC}=0^{--}$ was predicted in connection with the possible $\rho\pi$ dominance in the $D^0$ decay. To date, the other exotic quantum numbers $J^{PC}=0^{+-}\, , 2^{+-}$ have received much less attention. In Ref.~\cite{Du:2012pn}, the tetraquark states with $J^{PC}=0^{+-}$ were studied in QCD sum rules for both the light and heavy sectors. The authors used tetraquark currents containing a covariant derivative, which increases the dimension of the interpolating operators. The light $0^{+-}$ tetraquark was finally concluded not to exist there, owing to the badly behaved OPE series. In this paper, we shall revisit the light $0^{+-}$ tetraquarks by using interpolating currents without derivatives. These currents have lower dimension than those in Ref.~\cite{Du:2012pn}, which will result in better OPE behavior and more reliable mass predictions. We shall use both the Laplace sum rules (LSR) and the finite energy sum rules (FESR) to perform the numerical analyses. The possible decay patterns of the $0^{+-}$ tetraquark states will be discussed at the end. \section{Laplace sum rules and finite energy sum rules} In this section, we introduce the formalism of QCD sum rules, which has been a very useful method to study hadronic properties in the past several decades~\cite{1979-Shifman-p385-447,1985-Reinders-p1-1,2000-Colangelo-p1495-1576}.
For a vector current of the general form $j_{\mu}(x)$, we consider the two-point correlation function \begin{eqnarray} \nonumber \Pi_{\mu\nu}(q^{2}) & = & i\int d^{4}xe^{iqx}\left\langle 0\left|T\left[j_{\mu}(x)j_{\nu}^{+}(0)\right]\right|0\right\rangle \label{eq:1}\\ & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})\Pi_{v}(q^{2})+q_{\mu}q_{\nu}\Pi_{s}(q^{2})\, , \end{eqnarray} where $\Pi_{v}(q^{2})$ and $\Pi_{s}(q^{2})$ are the invariant functions receiving the contributions from the corresponding pure vector and scalar intermediate states, respectively. The invariant functions obey the dispersion relation, which relates $\Pi(q^2)$ to the spectral function \begin{eqnarray} \Pi(q^{2})=\left(q^{2}\right)^{N}\int_{0}^{\infty}\mathrm{d}s\frac{\rho\left(s\right)}{s^{N}\left(s-q^{2}-\mathrm{i}\epsilon\right)}+\sum_{k=0}^{N-1}b_{k}\left(q^{2}\right)^{k}\, , \label{eq:20} \end{eqnarray} where $b_k$ are unknown subtraction constants, which can be removed by performing the Borel transformation of $\Pi(q^2)$. On the QCD side, we can perturbatively calculate the correlation function by using the operator product expansion (OPE) method. In such calculations, the correlation function $\Pi(q^2)$ is expressed as a power series in terms of the QCD vacuum condensates of increasing dimension. On the hadronic side, the spectral function is expressed via the quark-hadron duality \begin{eqnarray} \nonumber \rho(s)\equiv\frac{1}{\pi}\textrm{Im}\Pi_{v/s}(s)&\simeq&\sum_n\delta(s-m_n^2)\langle0|\eta|n\rangle\langle n|\eta^+|0\rangle\\ &\simeq&f_H^2\delta(s-m_H^2)+ \cdots\, , \label{eq:rho} \end{eqnarray} where we use the ``one single narrow resonance ansatz"~\cite{Narison:1980ti,Narison:2002hk}. ``$\cdots$" denotes the contributions of the higher excited states and the QCD continuum, and $m_H$ and $f_H$ are the mass and the coupling of the lowest-lying hadron state. Note that the tensor structure is omitted in the second step when $\eta$ is a vector current.
Under this duality ansatz, one can match the QCD side of the correlation function with the hadronic side, and then obtain the sum rules for the hadron parameters such as the hadron mass, the coupling constant and the magnetic moment. Technically, one usually applies the Borel transformation to $\Pi(q^2)$ on both sides in order to enhance the contribution of the lowest-lying state and improve the OPE convergence. Finally, the Laplace/Borel sum rules (LSR) moment can be derived as \begin{eqnarray}\label{usr} {M}(\tau,s_0) &=& \int_{0}^{s_0} {ds}~e^{-s\tau}\rho(s)\, . \end{eqnarray} Inserting the spectral function in Eq.~\eqref{eq:rho}, we extract the lowest-lying hadron mass from the following ratio \begin{eqnarray}\label{ratio} {R}(\tau,s_0)&=& -\frac{d}{d\tau} \ln { M}=M_H^2\, , \end{eqnarray} in which $\tau$ is the Borel parameter and $s_0$ is the continuum threshold, above which the contributions from the higher excited states and the QCD continuum can be well approximated by the spectral function. It is clear that the hadron mass $M_H$ will depend on these two parameters $(\tau, s_0)$. To establish reliable sum rule analyses, one needs to pick out a suitable $(\tau, s_0)$ working range to ensure the validity of the OPE truncation and the suppression of the continuum contribution. In this parameter working range, we expect the mass curves to be insensitive to the variation of $\tau$ and $s_0$, which will finally provide reliable predictions for the hadron masses. As is well known, the LSR for light tetraquarks usually suffers from the high dimension of the interpolating currents, which leads to poor OPE convergence or an absence of sum rule stability. The finite energy sum rule, also known as the local duality sum rule, can be a valid complementary method. FESR can be obtained either by applying the Cauchy theorem to the correlator on a contour with radius $r=s_0$~\cite{Shankar:1977ap} or by simply letting the Borel parameter $\tau$ vanish in the LSR.
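The way the LSR ratio ${R}(\tau,s_0)$ (and its $\tau\to 0$, finite-energy limit) extracts $M_H^2$ can be illustrated numerically with a toy spectral density; the sketch below is purely illustrative (the Gaussian smearing, the input mass and all function names are our assumptions, not the OPE result):

```python
import math

def lsr_ratio(rho, tau, s0, eps=1e-4, ns=20000):
    """R(tau, s0) = -d/dtau ln M(tau, s0), central difference,
    with M(tau, s0) = int_0^s0 rho(s) exp(-s tau) ds (midpoint sum)."""
    ds = s0 / ns
    grid = [(k + 0.5) * ds for k in range(ns)]
    M = lambda t: sum(rho(s) * math.exp(-s * t) for s in grid) * ds
    return (math.log(M(tau - eps)) - math.log(M(tau + eps))) / (2 * eps)

def fesr_ratio(rho, s0, n=0, ns=20000):
    """FESR ratio M_H^2(n, s0) = W(n+1, s0) / W(n, s0),
    with W(n, s0) = int_0^s0 rho(s) s^n ds."""
    ds = s0 / ns
    grid = [(k + 0.5) * ds for k in range(ns)]
    W = lambda p: sum(rho(s) * s**p for s in grid) * ds
    return W(n + 1) / W(n)

# Toy resonance at m_H^2 = 2.0 GeV^2 with a small Gaussian smearing
# (a stand-in for the narrow-resonance delta function).
m2, width = 2.0, 0.05
rho = lambda s: math.exp(-((s - m2) ** 2) / (2.0 * width**2))

lsr_val = lsr_ratio(rho, tau=1.0, s0=4.0)   # close to m_H^2 = 2.0
fesr_val = fesr_ratio(rho, s0=4.0)          # close to m_H^2 = 2.0
```

In a realistic analysis $\rho(s)$ comes from the OPE, and one looks for a region in $(\tau,s_0)$ where both ratios form a stable plateau near the physical mass squared.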
This method reduces the effects of the power corrections on the sum rule moment and has been shown to be useful in studying multi-quark states~\cite{Narison:2009vj,2017-Huang-p76017-76017,2008-Chen-p54017-54017,Matheus:2006xi}\footnote{For very recent applications of the FESR in studying the decay constants of heavy-light mesons, see~\cite{Lucha:2018ouu,Lucha:2017zng}.}. The $n$th moment and the ratio of the FESR read \begin{eqnarray} W(n\, ,s_0)&=& \int^{s_0}_0 \rho(s)s^n ds\, , \label{moment} \\ M_H^2(n\, ,s_0)&=&\frac{W(n+1\, ,s_0)}{W(n\, ,s_0)}. \end{eqnarray} In this work we use the $0$th moment ratio, which enhances the convergence of the FESR expansion series in Eq.~\eqref{moment}. \section{Tetraquark interpolating currents with $J^{PC}=0^{+-}$} In this section, we construct the tetraquark interpolating currents with the exotic quantum numbers $J^{PC}=0^{+-}$. In Ref.~\cite{Du:2012pn}, the exotic $0^{+-}$ light tetraquarks were studied by using tetraquark currents containing covariant derivatives. However, those calculations do not support the existence of such tetraquarks, since no stable mass sum rules were obtained there. Such bad sum rule behavior appears due to the special Lorentz structures of the interpolating currents adopted in Ref.~\cite{Du:2012pn}. The covariant derivative introduces a $P$-wave excitation into the current, which results in unstable mass sum rules. In this work, we shall revisit the light $0^{+-}$ tetraquarks by constructing interpolating currents without derivatives. These lower-dimension currents may lead to better OPE behavior and mass predictions. We construct the light tetraquark currents by using the six distinct diquark operators in Lorentz space: $q^T_aCq_b$, $q^T_a C\gamma_5q_b$, $q^T_aC\gamma_\mu q_b$, $q^T_aC\gamma_\mu\gamma_5q_b$, $q^T_a C\sigma_{\mu\nu}q_b$, $q^T_aC\sigma_{\mu\nu}\gamma_5q_b$.
The diquark-antidiquark tetraquark currents without derivatives are then constructed as \begin{eqnarray} \nonumber J_{1\mu}&=&u^T_aC\gamma_5d_b(\bar{u}_a\gamma_{\mu}\gamma_5C\bar{d}^T_b+\bar{u}_b\gamma_{\mu}\gamma_5C\bar{d}^T_a) - u^T_aC\gamma_{\mu}\gamma_5d_b(\bar{u}_a\gamma_5C\bar{d}^T_b+\bar{u}_b\gamma_5C\bar{d}^T_a)\, , \\ \nonumber J_{2\mu}&=&u^T_aC\gamma^{\nu}d_b(\bar{u}_a\sigma_{\mu\nu}C\bar{d}^T_b-\bar{u}_b\sigma_{\mu\nu}C\bar{d}^T_a) - u^T_aC\sigma_{\mu\nu}d_b(\bar{u}_a\gamma^{\nu}C\bar{d}^T_b-\bar{u}_b\gamma^{\nu}C\bar{d}^T_a)\, , \\ \nonumber J_{3\mu}&=&u^T_aC\gamma_5d_b(\bar{u}_a\gamma_{\mu}\gamma_5C\bar{d}^T_b-\bar{u}_b\gamma_{\mu}\gamma_5C\bar{d}^T_a) - u^T_aC\gamma_{\mu}\gamma_5d_b(\bar{u}_a\gamma_5C\bar{d}^T_b-\bar{u}_b\gamma_5C\bar{d}^T_a)\, , \\ \label{currents2} J_{4\mu}&=&u^T_aC\gamma^{\nu}d_b(\bar{u}_a\sigma_{\mu\nu}C\bar{d}^T_b+\bar{u}_b\sigma_{\mu\nu}C\bar{d}^T_a) - u^T_aC\sigma_{\mu\nu}d_b(\bar{u}_a\gamma^{\nu}C\bar{d}^T_b+\bar{u}_b\gamma^{\nu}C\bar{d}^T_a)\, , \\ \nonumber J_{5\mu}&=&u^T_aCd_b(\bar{u}_a\gamma_{\mu}C\bar{d}^T_b+\bar{u}_b\gamma_{\mu}C\bar{d}^T_a) - u^T_aC\gamma_{\mu}d_b(\bar{u}_aC\bar{d}^T_b+\bar{u}_bC\bar{d}^T_a)\, , \\ \nonumber J_{6\mu}&=&u^T_aC\gamma^{\nu}\gamma_5d_b(\bar{u}_a\sigma_{\mu\nu}\gamma_5C\bar{d}^T_b+\bar{u}_b\sigma_{\mu\nu}\gamma_5C\bar{d}^T_a) - u^T_aC\sigma_{\mu\nu}\gamma_5d_b(\bar{u}_a\gamma^{\nu}\gamma_5C\bar{d}^T_b+\bar{u}_b\gamma^{\nu}\gamma_5C\bar{d}^T_a)\, , \\ \nonumber J_{7\mu}&=&u^T_aCd_b(\bar{u}_a\gamma_{\mu}C\bar{d}^T_b-\bar{u}_b\gamma_{\mu}C\bar{d}^T_a) - u^T_aC\gamma_{\mu}d_b(\bar{u}_aC\bar{d}^T_b-\bar{u}_bC\bar{d}^T_a)\, , \\ \nonumber J_{8\mu}&=&u^T_aC\gamma^{\nu}\gamma_5d_b(\bar{u}_a\sigma_{\mu\nu}\gamma_5C\bar{d}^T_b-\bar{u}_b\sigma_{\mu\nu}\gamma_5C\bar{d}^T_a) - u^T_aC\sigma_{\mu\nu}\gamma_5d_b(\bar{u}_a\gamma^{\nu}\gamma_5C\bar{d}^T_b-\bar{u}_b\gamma^{\nu}\gamma_5C\bar{d}^T_a)\, , \end{eqnarray} where $a, b$ are color indices, $T$ is the transposition operator and $C$ the charge conjugation
operator. These interpolating currents in Eq.~\eqref{currents2} can couple to both the $J^{PC}=0^{+-}$ and $1^{--}$ channels, which induce the scalar part $\Pi_s(q^2)$ and the vector part $\Pi_v(q^2)$ in Eq.~\eqref{eq:1}, respectively. In this work, we focus on the exotic $0^{+-}$ channel. By replacing $d\to s$ in Eq.~\eqref{currents2}, we obtain the corresponding $us\bar{u}\bar{s}$ tetraquark currents with $J^{PC}=0^{+-}$. Using these interpolating currents, we calculate their two-point correlation functions and spectral functions. We shall study both the $ud\bar{u}\bar{d}$ and $us\bar{u}\bar{s}$ tetraquark systems in the following section. \section{QCD expressions for the two-point correlation functions} Using the interpolating currents listed above, we obtain the QCD expressions for the corresponding two-point correlation functions via the standard technique of the SVZ expansion \cite{Shifman:1984wx}. Up to dimension-8 condensate terms in the power expansion and at leading order in the perturbative expansion, the general expressions for the LSR moments corresponding to the $ud\bar{u}\bar{d}$-type and $us\bar{u}\bar{s}$-type currents read \begin{flalign} \label{eq:udud} M^{ud\bar{u}\bar{d}}_{0}(\tau,s_0)=&\int^{s_0}_{0}\rho^{ud\bar{u}\bar{d}}_{0}(s)e^{-\tau s}ds=a_{i}\frac{e^{-s_0 \tau } \lbrace -s_0 \tau \lbrack s_0 \tau (s_0 \tau +3)+6\rbrack -6\rbrace+6}{\tau ^4}+b_{i}\frac{1-e^{-s_0 \tau } (s_0 \tau +1)}{\tau ^2}{}\nonumber\\ &-c_{i}\frac{1-e^{-s_0 \tau }}{\tau }+ d_{i} \lbrack 2+\ln(4\pi)-\ln(\frac{1}{\tau\tilde\mu^2})+\Gamma(0,s_0 \tau)\rbrack\, , \end{flalign} and \begin{flalign} \label{eq:usus} M^{us\bar{u}\bar{s}}_{0}(\tau,s_0)=&\int^{s_0}_{0}\rho^{us\bar{u}\bar{s}}_{0}(s)e^{-\tau s}ds = -a^{\prime}_{i} \frac{e^{-s_0 \tau }\lbrace -s_0 \tau \lbrack s_0 \tau (s_0 \tau +3)+6\rbrack -6\rbrace+6}{\tau ^4}-b^{\prime}_{i} \frac{e^{-s_0 \tau } \lbrack -s_0 \tau (s_0 \tau +2)-2 \rbrack +2}{\tau ^3}\nonumber\\ &-c^{\prime}_{i}
\frac{1-e^{-s_0 \tau } (s_0 \tau +1)}{\tau ^2} - d^{\prime}_{i}\frac{1-e^{-s_0 \tau }}{\tau }+e^{\prime}_{i}+ f^{\prime}_{i}\lbrack \gamma_{E}-\ln(\frac{1}{\tau\tilde\mu^2})+\Gamma(0,s_0 \tau)\rbrack\, , \end{flalign} where $\tilde\mu=1$ GeV, $\gamma_{E}$ is the Euler constant, and $\Gamma(0,x)$ is the incomplete Gamma function. The values of the QCD condensates are listed in Table~\ref{param_tab}, and $a_i$-$d_i$, $a^{\prime}_i$-$f^{\prime}_i$ are the Wilson coefficients, whose expressions are listed in Appendix~\ref{app:A}. \begin{table} \renewcommand\arraystretch{1.5} \caption{\label{param_tab}QCD parameters used in our analysis: $\rho$ indicates the violation of the factorization hypothesis.} \begin{tabular}{cc} \hlinewd{.8pt} QCD parameters& Reference \\ \hline $\langle\alpha_s G^2\rangle \simeq (7\pm2)\times10^{-2}~\rm{GeV}^4$ & \cite{Launer:1983ib,Narison:1995jr,Eidelman:1978xy,Narison:2011xe,Narison:2011rn} \\ $g\langle\bar{\psi}G\psi\rangle\equiv g\langle\bar{\psi}\frac{\lambda_a}{2}\sigma^{\mu\nu}G^a_{\mu\nu}\psi\rangle\simeq (0.8\pm 0.1)~{\rm GeV}^2\langle\bar \psi\psi\rangle$ & \cite{Dosch:1988vv,Ioffe:1981kw} \\ $\rho\alpha_s \langle\bar \psi\psi\rangle^2\simeq (4.5\pm 0.3) \times 10^{-4}~\rm{ GeV}^6$ & \cite{Narison:1992ru,Narison:1995jr,Narison:2009vy} \\ $\Lambda_{\rm QCD}= (353\pm 15)~{\rm MeV}$ & \cite{Narison:2009vy} \\ $\langle \bar{s}s \rangle / \langle \bar{u}u \rangle=0.74 \pm 0.03$ &\cite{NARISON2015189}\\ $m_s=95^{+9}_{-3} ~\rm{MeV}$ &\cite{PhysRevD.98.030001}\\ \hlinewd{.8pt} \end{tabular} \end{table} \section{LSR and FESR numerical analyses} In this section, we perform the numerical analyses for the non-strange and hidden-strange $0^{+-}$ tetraquark states using both the LSR and the FESR methods. The methodology adopted in this work directly follows from \cite{2017-Huang-p76017-76017} and the references therein, where one can find more details about the sum rule stability criteria applied.
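Closed-form Borel-transformed expressions of this type can be cross-checked term by term against direct numerical integration. A minimal sketch for the structure multiplying $a_i$ in Eq.~\eqref{eq:udud}, which corresponds to an $s^3$ behaviour of the spectral density (the values of $\tau$ and $s_0$ are arbitrary test points):

```python
import numpy as np

def closed_form_a(tau, s0):
    # structure multiplying a_i in the LSR moment:
    # { e^{-s0 tau} [ -s0 tau ( s0 tau (s0 tau + 3) + 6 ) - 6 ] + 6 } / tau^4
    x = s0 * tau
    return (np.exp(-x) * (-x * (x * (x + 3) + 6) - 6) + 6) / tau ** 4

def numeric(tau, s0, n=200001):
    # the same object written as  int_0^{s0} s^3 e^{-s tau} ds
    s = np.linspace(0.0, s0, n)
    return np.trapz(s ** 3 * np.exp(-s * tau), s)

tau, s0 = 0.35, 4.5
assert abs(closed_form_a(tau, s0) - numeric(tau, s0)) < 1e-6
```

The agreement follows from the elementary identity $\int_0^{X}s^3 e^{-s\tau}\,ds=\big[6-e^{-X\tau}\big((X\tau)^3+3(X\tau)^2+6X\tau+6\big)\big]/\tau^4$.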
\subsection{Analysis for the $ud\bar{u}\bar{d}$ tetraquark states} We first focus on the $ud\bar{u}\bar{d}$-type tetraquark systems. From the Wilson coefficients listed in Appendix~\ref{app:A}, we find that there exist degeneracies between the OPE results corresponding to the $ud\bar{u}\bar{d}$-type currents $J_1$/$J_5$, $J_2$/$J_8$, $J_3$/$J_7$ and $J_4$/$J_6$ in the chiral limit ($m_u=m_d=0$). Therefore, we only present the analyses for $J_1$-$J_4$. In the LSR analysis, we require both $\tau$ and $s_0$ stability in order to obtain solid predictions. In practice, we read the predictions from the extremum points of the $m_H$-$\tau$ and $m_H$-$s_0$ curves, from which the continuum threshold can also be determined. When the LSR stability is reached, the OPE convergence at the stability points should be further checked before the obtained mass values are taken into account in the final estimation. For the $ud\bar{u}\bar{d}$ currents, we find that all the associated LSR moment ratios reach stability, but only those corresponding to $J_3$ ($J_7$) and $J_4$ ($J_6$) have converging OPE series. For $J_1$ ($J_5$) and $J_2$ ($J_8$), the highest-order power corrections (dimension-8 condensate terms) contribute more than $10\%$ to the OPE series, rendering the truncation problematic. Therefore we only consider results obtained from $J_3$ ($J_7$) and $J_4$ ($J_6$) for the final mass determination. The behaviours of the LSR ratios corresponding to $J_3$ ($J_7$) and $J_4$ ($J_6$) are shown in FIG.~\ref{fig:LSRududj3j4}, where the left panel shows the $\tau$-stability (if any) and the right panel shows the $s_0$-stability. One can see that the LSR ratios corresponding to $J_3$ ($J_7$) and $J_4$ ($J_6$) reach both $\tau$ and $s_0$ stability when the values of the dimension-8 condensates are estimated using the vacuum saturation approximation.
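The plateau search just described can be sketched numerically: one scans the ratio in $\tau$ at fixed $s_0$ and reads off the prediction where the curve is flattest. A toy model with a hypothetical pole at $2~{\rm GeV^2}$ plus a small continuum tail (all numbers are placeholders, not our Wilson coefficients):

```python
import numpy as np

def ratio(tau, s0):
    # R(tau, s0) = int s rho e^{-s tau} / int rho e^{-s tau},
    # which is identically -d ln M / d tau for the moment M(tau, s0)
    s = np.linspace(1e-6, s0, 4001)
    rho = np.exp(-(s - 2.0) ** 2 / 0.02) + 0.01 * s ** 2 * (s > 3.0)  # pole + tail
    w = np.exp(-s * tau) * rho
    return np.trapz(w * s, s) / np.trapz(w, s)

taus = np.linspace(0.2, 1.5, 131)
vals = np.array([ratio(t, s0=4.0) for t in taus])
# "tau-stability": pick the point where the curve is flattest
flattest = int(np.argmin(np.abs(np.gradient(vals, taus))))
print(taus[flattest], vals[flattest])   # the flattest point approaches M_H^2 = 2 GeV^2
```

At small $\tau$ the continuum contaminates the ratio from above; in the flat region the pole dominates and the reading stabilizes near the input $M_H^2$.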
However, the mass curve corresponding to $J_4$ ($J_6$) becomes monotonic when factorization is violated by a factor of 2 (see the green curve in FIG.~\ref{fig:LSRududj3j41}). In contrast, the LSR curves corresponding to $J_3$ ($J_7$) exhibit both $\tau$ and $s_0$ stability with and without the violation of factorization, which provides an estimate of the error due to the violation of factorization. From FIG.~\ref{fig:LSRududj3j42} we obtain the mass predictions at the $s_0$-stability points (extrema) as \begin{eqnarray} M_{J_{3}/J_7;LSR}&=&1.39(1.49)~{\rm GeV~at~}s_0=4.50(4.75)~{\rm GeV^2}\, ,\nonumber\\ M_{J_{4}/J_6;LSR}&=&1.35~{\rm GeV~at~}s_0=4.09~{\rm GeV^2}\, , \end{eqnarray} where the values in the parentheses are obtained by taking into account the violation of factorization of the dimension-8 condensates by a factor of 2. For $J_1$ ($J_5$) and $J_2$ ($J_8$), we obtain curves similar to those in FIG.~\ref{fig:LSRududj3j4}, but the stability points are associated with poor OPE convergence, and therefore we do not consider these values. In TABLE~\ref{tab:ududlsrconvergence}, we present the behaviour of the OPE at the stability points of $\tau$ and $s_0$. \begin{figure}[htbp] \centering \subfigure[]{\label{fig:LSRududj3j41} \includegraphics[scale=0.7]{ududlsrj3j4taustability.eps}} \subfigure[]{\label{fig:LSRududj3j42} \includegraphics[scale=0.7]{ududlsrj3j4s0stability.eps}} \caption{\label{ududlsr}(a) $ud\bar{u}\bar{d}$ four quark state mass versus $\tau$ in LSR obtained using $J_3(J_7)$ (red continuous), $J_3(J_7)$ considering violation of factorization (blue dotted-dashed), $J_4(J_6)$ (black continuous) and $J_4(J_6)$ considering violation of factorization (green dotted); (b) the same as (a) but for mass versus $s_0$.
} \label{fig:LSRududj3j4} \end{figure} \begin{table}[htbp] \renewcommand\arraystretch{1.5} \centering \caption{\label{tab:ududlsrconvergence}OPE terms at the LSR stability points using the $ud\bar{u}\bar{d}$ currents.} \begin{tabular}{cccccccc} \hlinewd{.8pt} $J_i$ &${1\over\tau}{\hat B}\Pi_{i}^{d=0}/{\rm GeV}^4$ & ${1\over\tau}{\hat B}\Pi_{i}^{d=4}/{\rm GeV}^4$ & ${1\over\tau}{\hat B}\Pi_{i}^{d=6}/{\rm GeV}^4$ & ${1\over\tau}{\hat B}\Pi_{i}^{d=8}/{\rm GeV}^4$ & ${1\over\tau}{\hat B}\Pi_{i}^{d=8}$/OPE & $\tau$/${\rm GeV}^{-2}$ & $s_0/{\rm GeV}^2$\\ \hline $J_1/J_5$ & $5.77935\times 10^{-6}$& $-7.94295\times 10^{-7}$& $-8.64314\times 10^{-6}$&$-9.16139\times 10^{-7}$ &$0.200283$ &$0.433$&5.33 \\ $J_2/J_8$ &$2.12627\times 10^{-5}$ &$1.24396\times 10^{-6}$ & $-1.62246\times10^{-5}$&$-2.38152\times10^{-6}$ &-0.610577 &0.346 &4.89 \\ $J_3/J_7$ &$2.05976\times 10^{-5}$ &$2.12063\times10^{-6}$ &$-7.06129\times10^{-6}$ &$-1.1931\times10^{-6}$ &-0.0824886 &0.265 &4.50 \\ $J_4/J_6$ &$1.2703\times10^{-3}$ &$7.47871\times10^{-5}$ &$-7.58611\times10^{-5}$ &$-1.23908\times10^{-5}$ &-0.00985874 &0.148 &4.09 \\ \hlinewd{.8pt} \end{tabular} \end{table} In FIG.~\ref{fig:ududfesr}, we show the FESR curves obtained by truncating the OPE at different orders. One can see that for all currents the corresponding FESR moment ratios increase gradually when only the perturbative terms are considered. With the inclusion of the dimension-4 or dimension-6 condensate contributions, the mass curves start to develop inflexions or stability points. One can then read the optimal mass values from these stability points. As shown in FIG.~\ref{fig:ududfesr}, the stability points of the FESR curves obtained by considering the condensate contributions up to dimension-8 (black continuous curves) are close to those of the curves obtained by considering only the $d\leq 6$ condensate terms (blue dotted-dashed curves), except for the curve corresponding to $J_4$ ($J_6$).
The difference is about 13\% for $J_4$ ($J_6$) and less than 10\% for the other currents, which indicates that the OPE truncation for $J_4$ ($J_6$) may be associated with relatively large uncertainties. Accordingly, we retain the mass predictions in the FESR as \begin{figure}[htbp] \renewcommand\arraystretch{1.5} \centering \subfigure[]{\label{fig:ududfesr1} \includegraphics[scale=0.7]{ududfesrj1convergence.eps}} \subfigure[]{\label{fig:ududfesr2} \includegraphics[scale=0.7]{ududfesrj2convergence.eps}}\\ \subfigure[]{\label{fig:ududfesr3} \includegraphics[scale=0.7]{ududfesrj3convergence.eps}} \subfigure[]{\label{fig:ududfesr4} \includegraphics[scale=0.7]{ududfesrj4convergence.eps}}\\ \caption{\label{fig:ududfesr}(a) $ud\bar{u}\bar{d}$ four quark state mass versus $s_0$ in FESR obtained using $J_1(J_5)$ with the OPE series truncated at the perturbative terms (yellow continuous), at the d=4 condensate terms (red dotted), at the d=6 condensate terms (blue dotted-dashed), and at the d=8 condensate terms (black continuous); (b), (c), (d) are the same as (a) but for $J_2(J_8)$, $J_3(J_7)$, $J_4(J_6)$ respectively. } \end{figure} \begin{eqnarray} M_{J_{1}/J_5;FESR}&=&1.53(1.57)~{\rm GeV~at~}s_0=5.32(5.43)~{\rm GeV^2},\nonumber\\ M_{J_{2}/J_8;FESR}&=&1.46(1.49)~{\rm GeV~at~}s_0=4.86(4.85)~{\rm GeV^2},\nonumber\\ M_{J_{3}/J_7;FESR}&=&1.41(1.44)~{\rm GeV~at~}s_0=4.49(4.41)~{\rm GeV^2}. \end{eqnarray} Taking the arithmetic average of the valid LSR and FESR results, we give our prediction for the $0^{+-}$ $ud\bar{u}\bar{d}$ tetraquark mass as \begin{eqnarray} M_{ud\bar{u}\bar{d}}&=&1.43\pm0.09~{\rm GeV}\, , \end{eqnarray} where the error comes from the uncertainties of the QCD parameters, the spread of the results from the different interpolating currents, and the violation of factorization. \subsection{Analysis for the $us\bar{u}\bar{s}$ tetraquark states} The situation for the $us\bar{u}\bar{s}$ tetraquark states is somewhat different from that of the $ud\bar{u}\bar{d}$ system.
The OPE degeneracy between $J_1$ and $J_5$ (and likewise for $J_2$/$J_8$, $J_3$/$J_7$ and $J_4$/$J_6$) in the $ud\bar{u}\bar{d}$ system is slightly broken in the $us\bar{u}\bar{s}$ system due to SU(3) flavor symmetry breaking. The coefficients of $\langle \alpha_{s}G^2\rangle m_s \langle \bar qq \rangle $ have opposite signs for $J_1$ and $J_5$, as shown in Appendix~\ref{app:A}. Although the contributions of these terms are small, we still perform numerical analyses for all interpolating currents in the $us\bar{u}\bar{s}$ system. In this case the LSR does not work as well as the FESR: the LSR moment ratios are dominated by the dimension-6 condensates rather than by the perturbative terms, which suggests that the OPE truncations are invalid. We take the current $J_1$ as an example. FIG.~\ref{fig:ususlsrj3} shows that the curves have both $s_0$ and $\tau$ stability. However, the OPE series do not converge but are dominated by the dimension-6 condensate terms, as shown in Table~\ref{tab:ususj3convergence}. This situation holds for all the other $us\bar{u}\bar{s}$ currents in the LSR. \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[scale=0.7]{ususlsrj1taustability.eps}} \subfigure[]{ \includegraphics[scale=0.7]{ususlsrj1s0stability.eps}}\\ \caption{(a) $us\bar{u}\bar{s}$ four quark state mass versus $\tau$ in LSR obtained using $J_1$ (black continuous) and $J_1$ considering violation of factorization (blue dotted-dashed); (b) the same as (a) but for mass versus $s_0$.
} \label{fig:ususlsrj3} \end{figure} \begin{table}[htbp] \renewcommand\arraystretch{1.5} \centering \caption{\label{tab:ususj3convergence}OPE terms at the LSR stability points using the $us\bar{u}\bar{s}$ currents.} \begin{tabular}{ccccccccc} \hlinewd{.8pt} $J_i$ &${1\over\tau}{\hat B}\Pi_{i}^{d=0}/{\rm GeV}^4$ & ${1\over\tau}{\hat B}\Pi_{i}^{d=2}/{\rm GeV}^4$ & ${1\over\tau}{\hat B}\Pi_{i}^{d=4}/{\rm GeV}^4$ & ${1\over\tau}{\hat B}\Pi_{i}^{d=6}/{\rm GeV}^4$ & ${1\over\tau}{\hat B}\Pi_{i}^{d=8}/{\rm GeV}^4$ & ${1\over\tau}{\hat B}\Pi_{i}^{d=8}$/OPE & $\tau$/${\rm GeV}^{-2}$ & $s_0/{\rm GeV}^2$ \\ \hline $J_1$ &$1.13112\times 10^{-6}$ &$-2.2152\times 10^{-7}$ &$-1.14284\times 10^{-6}$ &$-5.26536\times 10^{-6} $&$2.23076\times 10^{-7}$ &-0.0438063 &0.651 &6.17\\ $J_2$ &$2.80721\times 10^{-6}$ &$-4.84744\times 10^{-6} $&$-1.07504\times 10^{-6} $&$-8.99266\times 10^{-6} $&$-8.23584\times 10^{-8}$ &0.0105215 &0.574 &5.7\\ $J_3$ & $1.0706\times 10^{-6}$& $-1.78751\times 10^{-7}$& $-6.0989\times 10^{-8}$&$-3.10259\times 10^{-6}$ & $-6.50102\times 10^{-8}$&$0.0278209$ &$0.555$&5.28 \\ $J_4$ &$2.26533\times 10^{-5}$ &$-2.76002\times 10^{-6}$&$3.85241\times 10^{-6} $&$-2.56595\times 10^{-5} $&$-2.62288\times 10^{-6}$ &0.578144 &0.405 &4.76\\ $J_5$ &$9.84444\times 10^{-7}$ &$-1.99608\times 10^{-7}$ &$-1.06617 \times 10^{-6} $&$-5.08568\times 10^{-6} $&$2.91861\times 10^{-7}$ &-0.0575079 &0.674 &6.18\\ $J_6$ &$1.68658\times 10^{-5}$ &$-2.21217\times 10^{-6} $&$3.32406\times 10^{-6}$ &$-2.38351\times 10^{-5}$ &$-2.00034\times 10^{-6}$ &0.254568 &0.436 &4.62\\ $J_7$ &$1.54953\times 10^{-6} $&$-2.35871\times 10^{-7} $&$-7.3373\times 10^{-8}$ &$-3.40304\times 10^{-6}$ &$-1.73889\times 10^{-7} $&0.0744181 &0.506 &5.28\\ $J_8$ &$2.27458\times 10^{-6}$ &$-4.13982\times 10^{-7} $&$-9.67697\times 10^{-7} $&$-8.53188\times 10^{-6}$ &$8.61095\times 10^{-8}$ &-0.0114009 &0.605 &5.68\\ \hlinewd{.8pt} \end{tabular} \end{table} We then perform the FESR analyses for the $us\bar{u}\bar{s}$ 
systems. In our analysis, the FESR ratio shows good behavior in the sense that it reaches stability in the continuum threshold $s_0$, as shown in Fig.~\ref{fig:ususfesr}. The dimension-8 condensates only slightly affect the mass prediction, which justifies the validity of the OPE truncation. The same situation holds for all the other currents $J_2$-$J_8$. The mass predictions obtained from the stability points read \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{ususfesrj1convergence.eps} \caption{\label{ususfesr}Mass prediction for the $us\bar{u}\bar{s}$ tetraquark state versus $s_0$ in FESR using $J_1$ with the OPE series truncated at the perturbative term (yellow continuous), at the d=2 term (green dashed), at the d=4 term (red dotted), at the d=6 term (blue dotted-dashed), and at the d=8 term (black continuous). } \label{fig:ususfesr} \end{figure} \begin{eqnarray} M_{J_{1};FESR}&=&1.65(1.69)~{\rm GeV~at~}s_0=6.17(6.45)~{\rm GeV^2},\nonumber\\ M_{J_{2};FESR}&=&1.58(1.63)~{\rm GeV~at~}s_0=5.67(5.99)~{\rm GeV^2},\nonumber\\ M_{J_{3};FESR}&=&1.51(1.57)~{\rm GeV~at~}s_0=5.25(5.60)~{\rm GeV^2},\nonumber\\ M_{J_{4};FESR}&=&1.44(1.51)~{\rm GeV~at~}s_0=4.76(5.15)~{\rm GeV^2},\nonumber\\ \label{ususmass} M_{J_{5};FESR}&=&1.65(1.69)~{\rm GeV~at~}s_0=6.16(6.44)~{\rm GeV^2},\\ M_{J_{6};FESR}&=&1.43(1.51)~{\rm GeV~at~}s_0=4.75(5.14)~{\rm GeV^2},\nonumber\\ M_{J_{7};FESR}&=&1.51(1.57)~{\rm GeV~at~}s_0=5.26(5.61)~{\rm GeV^2},\nonumber\\ \nonumber M_{J_{8};FESR}&=&1.57(1.62)~{\rm GeV~at~}s_0=5.66(5.98)~{\rm GeV^2}. \end{eqnarray} We finally give the mass of the hidden-strange $0^{+-}$ tetraquark as \begin{eqnarray} M_{us\bar{u}\bar{s}}&=&1.54\pm0.12~{\rm GeV}\, , \end{eqnarray} where we have considered the spread of the mass values in Eq.~\eqref{ususmass} as one of the error sources. \section{Decay patterns of the $0^{+-}$ tetraquarks} Under SU(2) symmetry, the $ud\bar u\bar d$ interpolating currents couple to all $I=0, 1, 2$ isospin multiplets.
In our calculation, these multiplets are degenerate since we do not consider the effects of isospin symmetry breaking. In other words, our calculations give the same mass predictions for the $I^GJ^{PC}=0^-0^{+-}, 1^+0^{+-}, 2^-0^{+-}$ tetraquarks with quark content $ud\bar u\bar d$. For the $us\bar{u}\bar{s}$ tetraquarks, the quantum numbers are $I^GJ^{PC}=0^-0^{+-}, 1^+0^{+-}$ once isospin is taken into account. Tetraquark states have abundant decay modes as long as the kinematics allows. Considering the symmetry constraints from the isospin $I$, spin $J$, parity $P$, $C$-parity and $G$-parity, all $S$-wave two-body strong decays are forbidden for the charge-neutral $ud\bar u \bar d$ tetraquarks, while only $P$-wave decay modes are allowed (as shown in Table~\ref{ududdecay}). Their dominant decay modes $b_1\pi, h_1\pi, a_1\pi$ are suitable for detection and partial-wave analyses. The charged partners $ud\bar d \bar d$ and $ud\bar u \bar u$ in the isospin-1 multiplet can dominantly decay into $\pi\pi$ modes in $S$-wave, and are thus expected to be very wide. \begin{center} \renewcommand{\arraystretch}{1.3} \begin{tabular*}{5.6cm}{ccc} \hlinewd{.8pt} ~~~$I^GJ^{PC}$ ~~~ & ~~~ $S$-wave ~~~ & ~~~ $P$-wave \\ \hline $0^-0^{+-}$ & $\times$ & $b_1\pi\, , \sigma\omega$ \\ $1^+0^{+-}$ & $\times$ & $h_1\pi\, , \rho\sigma\, , a_1\pi$\\ $2^-0^{+-}$ & $\times$ & $b_1\pi$\\ \hlinewd{.8pt} \end{tabular*} \def\@captype{table}\caption{Possible two-body strong decay modes for the charge neutral $ud\bar u \bar d$ tetraquarks. \label{ududdecay}} \end{center} For the isovector $u s\bar u \bar s$ tetraquark, both two-body and three-body strong decays are completely forbidden by the kinematics and by the strong constraints from the exotic quantum numbers $I^GJ^{PC}=1^+0^{+-}$. In particular, the $S$-wave $K^+K^-$ and $K^0\bar K^0$ final states are forbidden by the negative $C$-parity of the tetraquarks.
In other words, the charge-neutral isovector $us\bar u \bar s$ tetraquark state can only decay via the weak interaction into final states such as $K\pi\pi$. It is predicted to be very narrow. For the charged partners $us\bar d \bar s$ and $ds\bar u \bar s$, one may also search for them in the $K^+\bar K^0$ and $K^-K^0$ hadronic modes, respectively. The decay behavior of the isoscalar $u s\bar u \bar s$ tetraquark is different. Besides the weak decays, its dominant decay mode will be the three-body hadronic mode $us\bar u \bar s\to\phi\pi\pi (\sigma\to\pi\pi)$. However, such a three-body decay is relatively suppressed due to the $P$-wave between $\phi$ and $\sigma (\pi\pi)$. This isoscalar $u s\bar u \bar s$ tetraquark state may thus be not very wide ($\Gamma<1$~MeV) and has convenient decay modes such as $\phi\pi\pi, K\pi\pi$. \section{Summary and conclusions} In this work, we have studied the $ud\bar{u}\bar{d}$ and $us\bar{u}\bar{s}$ tetraquark states with $J^{PC}=0^{+-}$ using Laplace sum rules and finite energy sum rules. We consider all possible diquark-antidiquark interpolating currents with these exotic quantum numbers that do not contain covariant derivative operators. The dimension of such currents is lower than that of the currents used in Ref.~\cite{Du:2012pn}, which results in better OPE behavior and more reliable mass predictions. We find that both the LSR and the FESR work well for the $ud\bar{u}\bar{d}$ system, following the standard stability criteria of QCD sum rules and the requirement of a well-justified OPE truncation. For the $us\bar{u}\bar{s}$ system, all interpolating currents suffer from poor OPE convergence in the LSR. However, the FESR ratios yield good $s_0$-stability, around which one can extract the hadron masses.
Considering both the LSR and FESR results, we predict the masses of the $ud\bar{u}\bar{d}$ and the $us\bar{u}\bar{s}$ tetraquarks to be $1.43\pm0.09~{\rm GeV}$ and $1.54\pm0.12~{\rm GeV}$, respectively. We also discuss the possible decay patterns of these exotic tetraquarks. The light $ud\bar{u}\bar{d}$ tetraquark can decay into two-body final states in $P$-wave hadronic modes, as shown in Table~\ref{ududdecay}. Our analyses show that the $0^{+-}$ charge-neutral isovector $us\bar{u}\bar{s}$ tetraquark may only decay via the weak interaction, e.g. $us\bar{u}\bar{s}\to K\pi\pi$, since its strong decay modes are all forbidden by the kinematics and by the strong constraints from the exotic quantum numbers. It is predicted to be very narrow. The $0^{+-}$ isoscalar $us\bar{u}\bar{s}$ tetraquark is also expected to be not very wide due to its dominant $P$-wave decay mode $us\bar u \bar s\to\phi\pi\pi$. These tetraquark states may be detectable in the near future at BESIII and Belle~II. \section*{ACKNOWLEDGMENTS} This project is supported by the Chinese National Youth Thousand Talents Program and the China Postdoctoral Science Foundation funded project (2018M631572).
\section{Introduction} \subsection{Problem setting and main results} The aim of the present paper is to establish a new link between a number of recent papers on Dirac operators in bounded Euclidean domains and the theory of Dirac operators on manifolds, which is a classical topic in Riemannian geometry. Namely, let $\Omega\subset\RR^n$ be a bounded domain with smooth boundary $\Sigma$. We are going to show that the intrinsic Dirac operator $\Dsl$, which acts on sections of the spinor bundle of $\Sigma$, can be interpreted as a limit of Euclidean Dirac operators, either in $\Omega$ with a suitable boundary condition, or in the whole of $\RR^n$ with a suitably chosen term containing a large mass. For $n\ge 2$ and $N:=2^{[\frac{n+1}{2}]}$ let $\alpha_1,\dots,\alpha_{n+1}$ be anticommuting Hermitian $N\times N$ matrices with $\alpha_j^2=I_N$, where $I_N$ is the $N\times N$ identity matrix. The associated Dirac operator with a mass $m\in\RR$ acts on functions $u:\RR^n\to \CC^N$ (spinors) by the differential expression \begin{equation} \label{eqedm} D_m u=-\rmi\sum_{j=1}^n \alpha_j \dfrac{\partial u}{\partial x_j} + m \alpha_{n+1} u, \end{equation} see e.g. \cite{thaller}. We remark that the expression $D_m$ does not correspond to the intrinsic Dirac operator on $\RR^n$ (see Subsection~\ref{quad2}) and can be interpreted as follows: the intrinsic operator $\widetilde D$ in $\RR^{n+1}$ is defined as \[ \widetilde D v=-\rmi\sum_{j=1}^{n+1} \alpha_j \dfrac{\partial v}{\partial x_j} \] and acts on functions $v:\RR^{n+1}\to \CC^N$; assuming that $v$ is of the form $v(x_1,\dots,x_{n+1})=e^{\rmi m x_{n+1}} u(x_1,\dots,x_n)$, one obtains $\widetilde D v=e^{\rmi m x_{n+1}} D_m u$. For $x=(x_1,\dots,x_n)\in\RR^n$ we define the associated $N\times N$ matrices $\Gamma(x)$ by \begin{equation} \label{ekg1} \Gamma(x):=\sum_{j=1}^n x_j\alpha_j.
\end{equation} Denote by $\nu$ the unit normal on $\Sigma$ pointing to the exterior of~$\Omega$ and consider the $N\times N$ matrices \begin{equation} \label{ekg3} \cB(s):=-\rmi \alpha_{n+1} \Gamma\big(\nu(s)\big), \quad s\in\Sigma. \end{equation} By the Dirac operator $A_m$ in $\Omega$ with a mass $m\in\RR$ and the infinite mass boundary condition (also called the MIT Bag boundary condition) we mean the operator in $L^2(\Omega,\CC^N)$ given by \[ A_m u=D_m u \] on the domain $\dom(A_m)=\big\{u\in H^1(\Omega,\CC^N): \, u=\cB u \text{ on } \Sigma\big\}$, which is self-adjoint with compact resolvent (see Subsection~\ref{quad1}). In addition, for $m,M\in\RR$ we consider the following operator $B_{m,M}$ in $L^2(\RR^n,\CC^N)$, which is the Dirac operator in the whole space with the mass $m$ in $\Omega$ and the mass $M$ outside~$\Omega$, i.e. \begin{gather*} B_{m,M}=D_0 + \big[m 1_{\Omega} + M(1-1_\Omega)\big]\alpha_{n+1} \equiv D_m+(M-m)(1-1_\Omega)\,\alpha_{n+1} \end{gather*} with domain $\dom(B_{m,M})=H^1(\RR^n,\CC^N)$. We are going to show that the eigenvalues of the intrinsic Dirac operator $\Dsl$ (whose construction is briefly reviewed in Subsection~\ref{quad2}) and of the Euclidean Dirac operators $A_m$ and $B_{m,M}$ are related to each other for suitable values of $m$ and $M$. For a self-adjoint lower semibounded operator $T$ and $j\in\NN$ we denote by $E_j(T)$ the $j$th eigenvalue of $T$, if it exists, when enumerated in non-decreasing order and counted with multiplicities. First we show that the eigenvalues of $\Dsl^2$ on $\Sigma$ are the limits of the eigenvalues of the square of the MIT Bag Dirac operator $A_m$ on $\Omega$ for large negative $m$: \begin{theorem}\label{thm1a} For each $j\in\NN$ there holds $E_j(\Dsl^2)=\lim_{m\to-\infty}E_j(A_m^2)$.
\end{theorem} Then we show that, in turn, for any fixed $m$, the MIT Bag Dirac operators $A_m$ on $\Omega$ can be viewed as the limits of the Dirac operators $B_{m,M}$ in the whole space with a large mass outside $\Omega$ (which justifies the use of the term ``infinite mass boundary condition''): \begin{theorem}\label{thm2} For each $j\in\NN$ and $m\in\RR$ there holds $E_j(A_m^2)=\lim_{M\to+\infty} E_j(B_{m,M}^2)$. \end{theorem} Finally, by an additional construction we find an asymptotic regime in which the eigenvalues of $\Dsl^2$ on $\Sigma$ are directly recovered as the limits of the eigenvalues of the square of the Dirac operator $B_{m,M}^2$ on the whole space: \begin{theorem}\label{thm3} For each $j\in\NN$ the eigenvalue $E_j(B_{m,M}^2)$ converges to $E_j(\Dsl^2)$ as $m\to -\infty$ and $M\to+\infty$ with $m/M\to 0$. \end{theorem} Let us comment on the three theorems. In the recent paper \cite{ALTR16} the operator $A_m$ in three dimensions was considered, and it was shown that for each $j\in\NN$ one has $\lim_{m\to-\infty}E_j(A_m^2)=E_j(L)$ for some operator $L$ on $\Sigma$ given by its sesquilinear form. Here this result is extended in two directions: first, we consider arbitrary dimensions and, second, we show that the operator $L$ in question is in fact unitarily equivalent to $\Dsl^2$, which is our main observation. Some analogs of Theorem~\ref{thm2} in two and three dimensions were obtained very recently in~\cite{ALTR18,BCLTS18,SW}, and we extend them to all dimensions. The result of Theorem~\ref{thm3} providing an interpretation of $\Dsl$ using an infinite mass jump on $\Sigma$ does not seem to have previous analogs. In a sense, it can be viewed as a potential-induced collapse by analogy with Dirac operators on manifolds converging to a lower-dimensional structure~\cite{lott,roos}. As a possible application of our results, we remark that estimating the central gap (i.e.
the first eigenvalue) of $A_m$ or $B_{m,M}$ in the respective asymptotic regime is reduced to the eigenvalue estimate for the Dirac operator $\Dsl$, for which a number of results are available: we refer to the book \cite{ginoux} for a review. The text is organized as follows. In Subsection~\ref{sec-not} we recall a link between self-adjoint operators and sesquilinear forms, choose a suitable notation, and then recall two important tools of spectral analysis: the min-max characterization of the eigenvalues and the monotone convergence. In Section~\ref{sec-quad} we construct the sesquilinear forms for the squares of all the Dirac operators in question, which will allow one to obtain eigenvalue estimates based on the min-max principle: in Subsection~\ref{quad1} we recall the definition of various curvatures of $\Sigma$ and study $A_m$ and $B_{m,M}$, and in Subsection~\ref{quad2} we introduce an operator $L$, which already appeared in \cite{ALTR16} for the three-dimensional case, and prove that it is unitarily equivalent to $\Dsl^2$. The unitary equivalence is shown using a Schr\"odinger-Lichnerowicz formula for extrinsic Dirac operators, whose elementary proof for our Euclidean setting is given in Appendix~\ref{sec-lichn} for the reader's convenience. In Section~\ref{sec-prelim} we collect some preliminary constructions: in Subsection~\ref{ssec1d} we study the eigenvalues and the eigenfunctions of one-dimensional Laplacians $S$ and $S'$ with a large parameter in the boundary conditions, and in Subsection~\ref{sec-curv} we give some computations in tubular coordinates near $\Sigma$. In Section~\ref{sec-thm1} we prove Theorem~\ref{thm1a}. We first reduce the problem to the spectral analysis in small tubular $\delta$-neighborhoods of $\Sigma$, and in order to work in $\Sigma\times(0,\delta)$ we use the computations from Subsection~\ref{sec-curv}.
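As a quick finite-dimensional illustration of the algebra behind $D_m$ in \eqref{eqedm}: for $n=2$ the formula $N=2^{[\frac{n+1}{2}]}$ gives $N=2$, and the Pauli matrices provide a realization of the anticommuting $\alpha_j$; the symbol of $D_m$ at momentum $\xi$ then has eigenvalues $\pm\sqrt{|\xi|^2+m^2}$, as one can verify numerically (the momentum and mass values below are arbitrary):

```python
import numpy as np

# Pauli matrices realize the Clifford relations for n = 2 (so N = 2)
alpha = [np.array([[0, 1], [1, 0]], dtype=complex),     # alpha_1 = sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # alpha_2 = sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]    # alpha_3 = sigma_z (mass term)

# check: alpha_j alpha_k + alpha_k alpha_j = 2 delta_{jk} I
for j in range(3):
    for k in range(3):
        anti = alpha[j] @ alpha[k] + alpha[k] @ alpha[j]
        expected = 2 * np.eye(2) if j == k else np.zeros((2, 2))
        assert np.allclose(anti, expected)

def symbol(xi, m):
    # Fourier symbol of D_m at momentum xi: xi_1 alpha_1 + xi_2 alpha_2 + m alpha_3
    return xi[0] * alpha[0] + xi[1] * alpha[1] + m * alpha[2]

xi, m = np.array([0.3, -1.2]), 2.0
lam = np.sqrt(xi @ xi + m ** 2)
assert np.allclose(np.linalg.eigvalsh(symbol(xi, m)), [-lam, lam])
```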
The upper bound is obtained by taking as test functions the tensor products of the eigenfunctions of (a small perturbation of) the effective operator $L$ on $\Sigma$ with the first eigenfunction of the model operator $S$ in the normal direction. For the lower bound we perform a unitary transform, which is just the expansion in eigenfunctions of the second model operator $S'$ in the normal variable, thus transforming the problem into the study of a monotonically increasing sequence of operators. A simple application of the respective machinery presented in Subsection~\ref{sec-not} then shows that only the projection onto the lowest eigenfunction of $S'$ contributes to the asymptotics of the individual eigenvalues, which induces an effective operator acting on $\Sigma$ only. The proof of Theorem~\ref{thm2} is presented in Section~\ref{sec-thm2}. To establish the upper bound we first construct an extension operator from $\Sigma$ to the exterior of $\Omega$ with a suitable control in terms of the mass $M$, and then use the corresponding extensions of the eigenfunctions of $A_m$ to construct test functions for $B_{m,M}$ used in the min-max principle. For the lower bound we first decouple the two sides of $\Omega$ in order to deal separately with $\Omega$ and its exterior; it is then easily seen that the exterior does not contribute to the lowest eigenvalues, while the part in $\Omega$ is monotonically increasing in $M$ and is then easily handled with the help of the monotone convergence. The overall scheme here is very close to the one used in~\cite{SW} for the two-dimensional case. In Section~\ref{sec-thm3} we prove Theorem~\ref{thm3}. The proof essentially combines in a new way various components from the preceding analysis, but we still provide a complete self-contained argument.
The upper bound is obtained by taking the eigenfunctions of the operator $L$ on $\Sigma$ and extending them to both sides of $\Sigma$ by taking tensor products with the first eigenfunctions of the model operators $S$ and $S'$ in the two normal directions, and then using them as test functions in the min-max principle for $B_{m,M}^2$. For the lower bound we again decouple the two sides of $\Sigma$ and eliminate the exterior of $\Omega$ as in Theorem~\ref{thm2}. The analysis of the part in $\Omega$ is then quite similar to the one in Theorem~\ref{thm1a}: one is first reduced to the analysis in a thin tubular neighborhood of $\Omega$, and then one applies a unitary transform in order to obtain a monotone family with an explicit limit operator. As will be seen from the proof, the domain $\Omega$ and its exterior play symmetric roles, and, as a result, the eigenvalue convergence in Theorem~\ref{thm3} also holds in the asymptotic regime $m\to+\infty$, $M\to -\infty$, $M/m\to 0$. Our approach based on the monotone convergence was chosen deliberately, in order to obtain the main terms in a transparent way and to be able to concentrate on the geometric aspects. A more precise analysis involving remainder estimates and a more detailed operator convergence should be possible in the spirit of the recent works on specific dimensions, e.g.~\cite{ALTR18,ALTR16,HOBP}, but a rigorous implementation requires a considerably higher technical effort, and we prefer to discuss the related aspects in a separate forthcoming paper. \subsection{Notation, min-max principle, monotone convergence}\label{sec-not} Most of the subsequent spectral analysis is based on the min-max principle for the eigenvalues of self-adjoint operators and uses sesquilinear forms rather than operators (in particular, most operators are introduced just through their sesquilinear forms, while the action and the domain of the operators are not specified explicitly).
In order to avoid potential confusion, and to make the presentation more accessible to non-experts, we recall here some basic facts of the theory and introduce some notation. Let $\cG$ be a Hilbert space, then by $\langle \cdot,\cdot\rangle_\cG$ we denote the scalar product in $\cG$, which is assumed antilinear with respect to the \emph{first} argument, and the associated norm is denoted $\|\cdot\|_\cG$. A sesquilinear form $t$ in $\cG$ defined on a subspace $\dom(t)$ of $\cG$ is a map \[ \dom(t)\times\dom(t)\ni(u,v)\mapsto t(u,v)\in \CC \] which is antilinear with respect to the first argument and linear with respect to the second one, and it is called Hermitian if $t(v,u)=\overline{t(u,v)}$ for all $u,v\in\dom(t)$. As a consequence of the polarization identity, a Hermitian sesquilinear form $t$ is uniquely determined by its diagonal values $t(u,u)$ with $u\in\dom(t)$. A Hermitian sesquilinear form $t$ is called lower semibounded if there is $c\in\RR$ such that $t(u,u)\ge c\|u\|^2_\cG$ for all $u\in\dom(t)$. Such a form is then called closed if $\dom(t)$ endowed with the scalar product $\langle u,v\rangle_t:=t(u,v)+(1-c)\langle u,v\rangle_\cG$ is a Hilbert space. With such a sesquilinear form $t$ one associates a self-adjoint operator $T$ in $\cG$ uniquely defined by the following two conditions: (a) the domain $\dom(T)$ of $T$ is contained in $\dom(t)$ and (b) $t(u,v)=\langle u, Tv\rangle_\cG$ for all $u,v\in\dom(T)$, and we then say that $T$ is the \emph{self-adjoint operator generated by the form~$t$}. It is worth noting that $\dom(T)\ne \dom(t)$ in general. On the other hand, let $T$ be a self-adjoint operator in $\cG$ with domain $\dom(T)$. It is called lower semibounded if for some $c\in \RR$ one has $\langle u,T u\rangle_\cG\ge c\|u\|^2_\cG$ for all $u\in\dom(T)$, or $T\ge c$ for short.
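The conventions just fixed (scalar product antilinear in the first argument, a Hermitian form recovered from its diagonal values via polarization) can be illustrated by a small finite-dimensional numerical sketch; this is our own addition, not used anywhere in the sequel, and the polarization formula below is written for the antilinear-in-the-first-argument convention.

```python
import numpy as np

rng = np.random.default_rng(0)
# A Hermitian "operator" T on C^4 and the associated sesquilinear form
# t(u, v) = <u, T v>, antilinear in the first argument (as in the text).
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = (A + A.conj().T) / 2

def t(u, v):
    return np.vdot(u, T @ v)  # np.vdot conjugates its first argument

def t_from_diagonal(u, v):
    # Polarization identity: recover t(u, v) from diagonal values t(w, w) only.
    d = lambda w: t(w, w)
    return (d(u + v) - d(u - v) - 1j * d(u + 1j * v) + 1j * d(u - 1j * v)) / 4

u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.isclose(t(u, v), t_from_diagonal(u, v))
```

Here `np.vdot` conjugates its first argument, matching the convention for $\langle\cdot,\cdot\rangle_\cG$ above; the diagonal values $t(w,w)$ are real since $T$ is Hermitian.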
In such a case, the completion of $\dom(T)$ with respect to the scalar product $\langle u,v\rangle_Q:=\langle u, Tv\rangle_\cG +(1-c)\langle u,v\rangle_\cG$ is called the \emph{form domain} of $T$ and is denoted by $\qdom(T)$. The map $\dom(T)\times\dom(T)\ni(u,v)\mapsto \langle u, Tv\rangle_\cG$ then uniquely extends to a closed lower semibounded Hermitian sesquilinear form $t$ with domain $\dom(t)=\qdom(T)$, which will be called the \emph{sesquilinear form generated by the operator $T$}. In turn, $T$ is exactly the self-adjoint operator generated by this form~$t$. To shorten the notation (and to reduce the number of symbols in use), we will write \[ T[u,v]:=t(u,v) \text{ for } u,v\in\qdom(T), \] in particular, one has the simple equality $T[u,v]=\langle u, T v\rangle_\cG$ if $v\in\dom(T)$. We further recall that due to the spectral theorem we have \begin{gather*} \qdom(T)=\dom\big({\sqrt{T-c}}\big)=\dom(\sqrt{|T|}),\\ T[u,v]\equiv t(u,v)=\langle \sqrt{T-c}\, u,\sqrt{T-c}\, v\rangle_\cG + c\langle u,v\rangle_\cG, \quad u,v\in \qdom(T), \end{gather*} and the operator $T$ has compact resolvent iff its form domain $\qdom(T)$ endowed with the above scalar product $\langle\cdot,\cdot\rangle_t\equiv \langle\cdot,\cdot\rangle_Q$ is compactly embedded into $\cG$. It follows from the preceding discussion that a lower semibounded self-adjoint operator $T$ is uniquely determined by the knowledge of its form domain $\qdom(T)$ and of the diagonal values $T[u,u]$ of its sesquilinear form for all $u\in \qdom(T)$. Many operators appearing in the subsequent discussion will be introduced in this way. Using the above convention let us recall the min-max characterization of eigenvalues. Let $T$ be a lower semibounded self-adjoint operator in an infinite-dimensional Hilbert space $\cG$. For $j\in\NN$ we denote \[ E_j(T):=\inf_{\substack{S\subset \qdom(T)\\ \dim S=j}} \sup_{\substack{u\in S\\u\ne 0}} \dfrac{T[u,u]}{\|u\|^2_\cG}.
\] It follows from the min-max principle that $E_j(T)$ is the $j$th eigenvalue of $T$, when enumerated in the non-decreasing order and counted with multiplicities, provided that it is strictly below the bottom of the essential spectrum of $T$, and $E_1(T)$ coincides with the bottom of the spectrum of $T$, see e.g. \cite[Section XIII.1]{RS}. In particular, if $T$ has compact resolvent, then $E_j(T)$ is the $j$th eigenvalue of $T$ for any $j\in\NN$. The main consequence of the min-max principle we are going to use is as follows (the proof follows directly from the definition): \begin{prop}\label{prop-incl} Let $T$ and $T'$ be lower semibounded self-adjoint operators in infinite-dimensional Hilbert spaces $\cG$ and $\cG'$ respectively. If there exists a linear map $J:\qdom(T)\to\qdom (T')$ such that $\|J u\|_{\cG'}=\|u\|_\cG$ and $T'[Ju,Ju]\le T[u,u]$ for all $u\in\qdom(T)$, then $E_j(T')\le E_j(T)$ for any $j\in\NN$. \end{prop} We will also use some classical results on the monotone convergence of operators. The following particular case will be sufficient for our purposes: \begin{prop}\label{prop-mon} Let $\cH$ be a Hilbert space and $\cH_\infty$ be a closed subspace of $\cH$ endowed with the induced scalar product. Let \begin{itemize} \item $T_n$ with $n\in\NN$ be lower semibounded self-adjoint operators with compact resolvents in $\cH$, \item $T_\infty$ be a lower semibounded self-adjoint operator with compact resolvent in~$\cH_\infty$ \end{itemize} such that the following conditions are satisfied: \begin{itemize} \item the sequence $(T_n)$ is monotonically increasing, i.e.
\[ \qdom(T_n)\supset \qdom(T_{n+1}), \quad T_n[u,u]\le T_{n+1}[u,u] \quad \text{for all $n\in\NN$ and $u\in \qdom (T_{n+1})$}, \] \item one has the equalities \begin{gather*} \qdom(T_\infty)=\big\{u\in \bigcap\limits_{n\in \NN} \qdom(T_n): \quad \sup_{n\in\NN} T_n[u,u]<\infty\big\},\\ T_\infty[u,u]=\lim_{n\to+\infty}T_n[u,u] \text{ for each } u\in \qdom(T_\infty), \end{gather*} \end{itemize} then for each $j\in\NN$ there holds $E_j(T_\infty)=\lim_{n\to+\infty} E_j(T_n)$. \end{prop} The result follows, for example, from the constructions of \cite[Abs.~3]{weid}: Satz 3.1 establishes a (generalized) strong resolvent convergence of $T_n$ to $T_\infty$ and Satz~3.2 gives the convergence of the eigenvalues. An interested reader may refer to the papers \cite{bh,simon,weid} dealing with the monotone convergence in a more general framework, i.e. beyond densely defined operators with compact resolvents. \section{Sesquilinear forms}\label{sec-quad} \subsection{Sesquilinear forms for the squares of Euclidean Dirac operators}\label{quad1} For the rest of the text we denote \[ \Omega^c:=\RR^n\setminus\overline{\Omega}. \] The shape operator $W:T\Sigma\to T\Sigma$ is given by $W X:=\nabla_X \nu$ with $\nabla$ being the gradient in $\RR^n$, and its eigenvalues $h_1,\dots,h_{n-1}$ are the principal curvatures of $\Sigma$. For $k=1,\dots,n-1$ we will denote by $H_k$ the \emph{$k$-th mean curvature of $\Sigma$ with respect to $\nu$} defined by \[ H_k=\sum_{1\le j_1<\dots<j_k\le n-1} h_{j_1}\cdot \ldots \cdot h_{j_k}, \] in particular, $H_1=h_1+\ldots +h_{n-1}=\tr W$ is the mean curvature, and $R=2H_2\equiv H_1^2-|W|^2$ with $|W|^2:=\tr (W^2)$ is the scalar curvature. We set formally $H_k=0$ for $k\ge n$. \begin{lemma}\label{qfa} The operator $A_m$ is self-adjoint with compact resolvent and its eigenfunctions belong to $C^\infty(\overline \Omega,\CC^N)$.
For all $u\in \dom(A_m)$ there holds \begin{equation} \label{qform} \langle A_m u, A_m u\rangle_{L^2(\Omega,\CC^N)} =\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x +\int_\Sigma\Big(m+ \dfrac{H_1}{2}\Big)\, |u|^2\dd s. \end{equation} \end{lemma} \begin{proof} Remark first that the map $x\mapsto \Gamma (x)$ in~\eqref{ekg1} gives a representation of the Clifford algebra~$\Cl(0,n)$. Furthermore, the self-adjointness is not influenced if one adds a bounded operator, hence, it is sufficient to consider the case $m=0$. The operator $A_0$ is covered e.g. by the analysis of \cite[Section 2]{hmz} by noting that $\cB$ is a chirality operator defining a local boundary condition. Hence, the self-adjointness, the compactness of the resolvent and the smoothness of eigenfunctions follow from \cite[Proposition~1 and Corollary~2]{hmz}. An interested reader may refer e.g. to \cite{baer} for a more detailed discussion of boundary value problems for Dirac-type operators. In order to obtain the representation \eqref{qform} we use additional constructions. The map $\Gamma$ induces the extrinsic Dirac operator $\widetilde D^\Sigma$ in $L^2(\Sigma,\CC^N)$ given by \[ \widetilde D^\Sigma \psi:=\dfrac{H_1}{2}\,\psi-\Gamma(\nu) \sum_{j=1}^{n-1} \Gamma(e_j) \nabla_{e_j}\psi \] with $(e_1,\dots,e_{n-1})$ being an orthonormal frame tangent to $\Sigma$. For $u\in H^2(\Omega,\CC^N)$ one has the integral identity, see \cite[Section 3, Eq. (13)]{hmw}, \begin{equation*} \int_\Omega |D_0 u|^2\dd x= \int_\Omega |\nabla u|^2\dd x +\int_\Sigma \Big( \dfrac{H_1}{2}\, |u|^2 -\langle \widetilde D^\Sigma u, u\rangle\Big)\dd s, \end{equation*} where $D_0$ is given by \eqref{eqedm} with $m=0$. 
Therefore, for $u\in H^2(\Omega,\CC^N)\cap\dom(A_m)$ one has \begin{multline} \label{form1} \langle A_m u,A_m u\rangle_{L^2(\Omega,\CC^N)}\equiv \Big\langle \big(D_0 +m \alpha_{n+1}\big)u,\big(D_0 +m \alpha_{n+1}\big)u\Big\rangle_{L^2(\Omega,\CC^N)}\\ = \langle D_0 u,D_0 u\rangle_{L^2(\Omega,\CC^N)} +2m\Re \Big(\big\langle D_0 u,\alpha_{n+1}u\big\rangle_{L^2(\Omega,\CC^N)}\Big)+m^2 \big\langle \alpha_{n+1}u,\alpha_{n+1}u\big\rangle_{L^2(\Omega,\CC^N)}\\ =\int_\Omega \big( |\nabla u|^2+m^2|u|^2\big)\dd x +\int_\Sigma \Big( \dfrac{H_1}{2}\, |u|^2 -\langle \widetilde D^\Sigma u, u\rangle\Big)\dd s\\ +2m\Re\Big( \langle D_0 u,\alpha_{n+1}u\rangle_{L^2(\Omega,\CC^N)}\Big). \end{multline} The operator $\widetilde D^\Sigma$ anticommutes with $\Gamma(\nu)$, see \cite[Proposition 1]{hmw}. As the matrix $\alpha_{n+1}$ anticommutes with all $\Gamma(x)$, it commutes with $\widetilde D^\Sigma$ by construction. Therefore, using the boundary condition for $u$ we have the pointwise equalities \begin{align*} \langle \widetilde D^\Sigma u, u\rangle&=\big\langle \widetilde D^\Sigma \big[-\rmi \alpha_{n+1}\Gamma(\nu) \big] u, u\big\rangle\\ &=\big\langle \rmi \alpha_{n+1}\Gamma(\nu) \widetilde D^\Sigma u, u \big\rangle =\big\langle \widetilde D^\Sigma u, -\rmi\Gamma(\nu)\alpha_{n+1} u\big\rangle\\ &=\big\langle \widetilde D^\Sigma u, \rmi\alpha_{n+1}\Gamma(\nu) u\big\rangle =-\langle \widetilde D^\Sigma u, u\rangle, \end{align*} implying $\langle \widetilde D^\Sigma u, u\rangle=0$ on $\Sigma$. It remains to transform the third summand on the right-hand side of \eqref{form1}.
Recall that due to the integration by parts for any $v,w\in H^1(\Omega,\CC^N)$ we have \[ \int_\Omega \sum_{j=1}^n \langle \alpha_j \partial_j v,w\rangle_{\CC^N}\dd x =-\int_\Omega \sum_{j=1}^n \langle v, \alpha_j \partial_j w\rangle_{\CC^N}\dd x +\int_\Sigma \sum_{j=1}^n \langle \alpha_j \nu_j v, w\rangle_{\CC^N}\dd s, \] which then gives \begin{multline} \label{dpart} \big\langle D_0 u,\alpha_{n+1}u\big\rangle_{L^2(\Omega,\CC^N)}= \int_\Omega \big\langle D_0 u,\alpha_{n+1}u\big\rangle_{\CC^N}\dd x\\ =\int_\Omega \big\langle u, D_0\alpha_{n+1}u\big\rangle_{\CC^N}\dd x +\int_\Sigma \sum_{j=1}^n \langle -\rmi\alpha_j \nu_j u, \alpha_{n+1} u\rangle_{\CC^N}\dd s\\ =-\int_\Omega \big\langle \alpha_{n+1}u, D_0 u\big\rangle_{\CC^N}\dd x +\int_\Sigma \big\langle -\rmi\Gamma(\nu)u,\alpha_{n+1}u\big\rangle_{\CC^N}\dd s. \end{multline} Therefore, \begin{align*} 2m\Re\Big(\big\langle D_0 u,\alpha_{n+1}u\big\rangle_{L^2(\Omega,\CC^N)}\Big)&= m\Big(\big\langle D_0 u,\alpha_{n+1}u\big\rangle_{L^2(\Omega,\CC^N)} + \big\langle \alpha_{n+1}u, D_0 u\big\rangle_{L^2(\Omega,\CC^N)} \Big)\\ &=m\int_\Sigma \big\langle -\rmi\Gamma(\nu)u,\alpha_{n+1}u\big\rangle_{\CC^N}\dd s\\ &=m \int_\Sigma \big\langle -\rmi\alpha_{n+1} \Gamma(\nu)u,u\big\rangle_{\CC^N}\dd s =m\int_\Sigma |u|^2_{\CC^N}\dd s. \end{align*} This shows the sought identity \eqref{qform} for the $H^2$ functions in the domain. It is then extended to the whole of $\dom(A_m)$ by a standard density argument. \end{proof} \begin{lemma} The operator $B_{m,M}$ is self-adjoint, and for all $u\in \dom(B_{m,M})$ there holds \begin{multline} \label{qform2} \langle B_{m,M} u, B_{m,M} u\rangle_{L^2(\RR^n,\CC^N)} =\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x + \int_{\Omega^c} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x\\ +(M-m)\Big(\int_\Sigma | \cP_- u|^2\dd s -\int_\Sigma | \cP_+ u|^2\dd s\Big), \end{multline} where $\cP_\pm(s):=\dfrac{I_N \pm \cB(s)}{2}$ for $s\in\Sigma$. 
\end{lemma} \begin{proof} The self-adjointness is obvious with the help of the Fourier transform, so let us concentrate on the sesquilinear form. Representing $B_{m,M}=D_M +(m-M) 1_\Omega \alpha_{n+1}$ we have \begin{multline*} \langle B_{m,M} u, B_{m,M} u\rangle_{L^2(\RR^n,\CC^N)}\\ \begin{aligned} &=\langle D_M u +(m-M) 1_\Omega \alpha_{n+1}u, D_M u +(m-M) 1_\Omega \alpha_{n+1} u\rangle_{L^2(\RR^n,\CC^N)}\\ &=\langle D_M u, D_M u\rangle_{L^2(\RR^n,\CC^N)}+(m-M)^2 \langle 1_\Omega \alpha_{n+1}u, 1_\Omega \alpha_{n+1} u\rangle_{L^2(\RR^n,\CC^N)}\\ &\quad+(m-M)\Big( \langle D_M u, 1_\Omega \alpha_{n+1} u\rangle_{L^2(\RR^n,\CC^N)}+\langle 1_\Omega \alpha_{n+1}u, D_M u\rangle_{L^2(\RR^n,\CC^N)} \Big)\\ &=\int_{\RR^n} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x + (m-M)^2\int_\Omega |u|^2\dd x\\ &\quad + (m-M)\Big( \langle D_M u, 1_\Omega \alpha_{n+1} u\rangle_{L^2(\RR^n,\CC^N)}+\langle 1_\Omega \alpha_{n+1}u, D_M u\rangle_{L^2(\RR^n,\CC^N)} \Big). \end{aligned} \end{multline*} Now using $D_M=D_0+M\alpha_{n+1}$ we transform the last summand as follows: \begin{multline*} (m-M)\Big[ \langle D_M u, 1_\Omega \alpha_{n+1} u\rangle_{L^2(\RR^n,\CC^N)}+\langle 1_\Omega \alpha_{n+1}u, D_M u\rangle_{L^2(\RR^n,\CC^N)}\Big]\\ \begin{aligned} &= (m-M)\Big[ \langle D_0 u+M\alpha_{n+1} u, 1_\Omega \alpha_{n+1} u\rangle_{L^2(\RR^n,\CC^N)}\\ &\qquad +\langle 1_\Omega \alpha_{n+1}u, D_0 u+M\alpha_{n+1} u\rangle_{L^2(\RR^n,\CC^N)}\Big]\\ &=2M(m-M)\int_\Omega |u|^2\dd x\\ &\qquad + (m-M)\Big( \langle D_0 u, 1_\Omega \alpha_{n+1} u\rangle_{L^2(\RR^n,\CC^N)}+\langle 1_\Omega \alpha_{n+1}u, D_0 u\rangle_{L^2(\RR^n,\CC^N)}\Big)\\ &=2M(m-M)\int_\Omega |u|^2\dd x+(m-M)\int_\Sigma \langle \cB u, u\rangle_{\CC^N}\dd s, \end{aligned} \end{multline*} where we used the equality \eqref{dpart} in the last step. 
This gives \begin{multline*} \langle B_{m,M} u, B_{m,M} u\rangle_{L^2(\RR^n,\CC^N)} =\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x\\ + \int_{\Omega^c} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x -(M-m)\int_\Sigma \langle \cB u, u\rangle_{\CC^N}\dd s, \end{multline*} and it remains to remark that \begin{multline*} \langle \cB u, u\rangle_{\CC^N}=\dfrac{1}{2}\,\big\langle (1+\cB) u, u\big\rangle_{\CC^N}- \dfrac{1}{2}\,\big\langle (1-\cB) u, u\big\rangle_{\CC^N}\\ =\langle \cP_+ u, u\rangle_{\CC^N}-\langle \cP_- u, u\rangle_{\CC^N}\equiv |\cP_+u|^2_{\CC^N}-|\cP_-u|^2_{\CC^N}, \end{multline*} where in the last step we used the fact that $\cP_\pm$ are orthogonal projectors. \end{proof} \subsection{Intrinsic and extrinsic Dirac operators on Euclidean hypersurfaces}\label{quad2} The definition of the intrinsic Dirac operator $\Dsl$ on $\Sigma$ with a detailed presentation of preliminary constructions can be found in the monographs \cite{moroianu, fried,ginoux}. Recall that if $\mathbb{S}\Sigma$ is the intrinsic spinor bundle over $\Sigma$ with the associated spin connection $\dnabla$ and carrying the natural Hermitian and Clifford module structures, then $\Dsl$ acts on smooth sections $\psi$ of $\mathbb{S}\Sigma$ by $\Dsl \psi=\sum_{j=1}^{n-1} e_j\cdot \dnabla_{e_j} \psi$, where $(e_1,\dots,e_{n-1})$ is an orthonormal frame tangent to $\Sigma$ and $\cdot$ is the Clifford multiplication. In our situation, the study of $\Dsl$ is more conveniently approached through the so-called extrinsic Dirac operators, which are better suited for the subsequent asymptotic analysis, and we explain this link in the present subsection. For $n\ge 2$ and $K:=2^{[\frac{n}{2}]}$ let $\beta_1,\dots,\beta_n$ be anticommuting Hermitian $K\times K$ matrices with $\beta_j^2=I_K$. The intrinsic Dirac operator $D^{\RR^n}$ in $\RR^n$ then acts by \[ D^{\RR^n}=-\rmi\sum_{j=1}^n \beta_j \dfrac{\partial}{\partial x_j}, \] and it is a self-adjoint operator in $L^2(\RR^n,\CC^K)$ with domain $H^1(\RR^n,\CC^K)$.
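For concreteness, a family $\beta_1,\dots,\beta_n$ of anticommuting Hermitian matrices with $\beta_j^2=I_K$, $K=2^{[\frac{n}{2}]}$, can be produced by Kronecker products of Pauli matrices. The following numerical sketch is our own illustration (one admissible choice among the unitarily equivalent realizations, via the standard Jordan-Wigner-type construction) and simply verifies the Clifford relations $\beta_j\beta_k+\beta_k\beta_j=2\delta_{jk}I_K$ entering the definition of $D^{\RR^n}$.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_chain(mats):
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

def dirac_matrices(n):
    """One admissible choice of n anticommuting Hermitian K x K matrices
    with square I_K, K = 2**(n//2) (Jordan-Wigner-type construction)."""
    m = n // 2
    betas = []
    for j in range(m):
        head, tail = [sz] * j, [I2] * (m - j - 1)
        betas.append(kron_chain(head + [sx] + tail))
        betas.append(kron_chain(head + [sy] + tail))
    if n % 2 == 1:
        betas.append(kron_chain([sz] * m))
    return betas[:n]

for n in (2, 3, 4, 5):
    K = 2 ** (n // 2)
    B = dirac_matrices(n)
    assert len(B) == n
    for j, bj in enumerate(B):
        assert np.allclose(bj, bj.conj().T)  # Hermitian
        for k, bk in enumerate(B):
            expected = 2 * np.eye(K) if j == k else np.zeros((K, K))
            assert np.allclose(bj @ bk + bk @ bj, expected)  # Clifford relations
```

For $n=3$ this reduces to the three Pauli matrices themselves; any other family with the same properties is unitarily equivalent to this one, in line with the uniqueness statement recalled below.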
Remark that the expression $D_0$ given in the introduction does not correspond to the intrinsic Dirac operator on $\RR^n$ as $N\ne K$ in general. The \emph{extrinsic} Dirac operator $D^\Sigma$ on $\Sigma$ is a self-adjoint operator in $L^2(\Sigma,\CC^K)$ with domain $H^1(\Sigma,\CC^K)$ and given by \[ D^\Sigma=\dfrac{H_1}{2}-\beta(\nu) \sum_{j=1}^{n-1} \beta(e_j) \nabla_{e_j}, \] where $(e_1,\dots,e_{n-1})$ is an orthonormal frame tangent to $\Sigma$, and for $x=(x_1,\dots,x_n)$ we denote $\beta(x)=\sum_{j=1}^n \beta_j x_j$. It is a fundamental result that $D^\Sigma$ is unitarily equivalent to $\Dsl$ for odd $n$ and to $\Dsl\oplus (-\Dsl)$ for even $n$; for even $n$ the operator $\Dsl$ can be identified with the restriction of $\beta(\nu) D^\Sigma$ on $\ker\big(1-\beta(\nu)\big)$, see e.g. \cite[Section 2.4]{moroianu}. In other words, the study of the eigenvalues of $(D^\Sigma)^2$ is equivalent to that of $\Dsl^2$, modulo the multiplicities for even $n$. In turn, a classical tool for the analysis of the eigenvalues of $(D^\Sigma)^2$ is provided by the Schr\"odinger-Lichnerowicz formula $(D^\Sigma)^2=(\nabla^\Sigma)^*\nabla^\Sigma+\frac{1}{2} \,H_2\, I$ (whose proof we recall in Appendix~\ref{sec-lichn}), where $\nabla^\Sigma$ is the induced spin connection \[ \nabla^\Sigma_X=\nabla_X+\dfrac{1}{2}\,\beta(\nu)\beta(WX):\, C^\infty(\Sigma,\CC^K)\to C^\infty(\Sigma,\CC^K), \quad X\in T\Sigma. 
\] In other words, for $u\in H^1(\Sigma,\CC^K)$ one has \begin{equation} \label{lichn} \langle D^\Sigma u,D^\Sigma u\rangle_{L^2(\Sigma,\CC^K)} =\int_\Sigma \Big(|\nabla^\Sigma u|^2 + \dfrac{H_2 |u|^2}{2} \Big)\dd x, \end{equation} while in the local coordinates on $\Sigma$ one has \begin{equation} \label{lichn2} |\nabla^\Sigma u|^2=\sum_{j,k=1}^{n-1} g^{jk} \Big\langle \partial_j u +\dfrac{1}{2}\,\beta(\nu)\beta(\partial_j\nu) u,\partial_k u +\dfrac{1}{2}\,\beta(\nu)\beta(\partial_k\nu) u \Big\rangle_{\CC^K}, \end{equation} where $(g^{jk}):=(g_{jk})^{-1}$ and $(g_{jk})$ is the Riemannian metric on $\Sigma$ induced by the embedding into $\RR^n$. For the subsequent analysis we introduce the Hilbert space \begin{equation} \cH:=\Big\{ f\in L^2(\Sigma,\CC^N):\, f=\cB f\Big\}, \quad \|f\|^2_\cH:=\int_\Sigma |f|^2\dd s, \end{equation} with $\cB$ given in \eqref{ekg3}, and the self-adjoint operator $L$ in $\cH$ given by its sesquilinear form as follows: \[ L[f,f]=\int_\Sigma \Big[|\nabla f|^2 +\Big(H_2-\dfrac{H_1^2}{4}\Big) |f|^2\Big]\dd s, \quad \qdom(L)=H^1(\Sigma,\CC^N)\cap \cH, \] with $\qdom(L)$ being the form domain (see Section~\ref{sec-prelim}). The operator $L$ will arise naturally in the asymptotic spectral analysis of the Dirac operators $A_m$ and $B_{m,M}$, and its importance is explained in the following assertion: \begin{lemma}\label{lemld} The operator $L$ is unitarily equivalent to $\Dsl^2$. \end{lemma} \begin{proof} The proof is by direct computation, by constructing an explicit isomorphism between $L^2(\Sigma,\CC^{N/2})$ and $\cH$ and then by establishing a link with the extrinsic Dirac operator $D^\Sigma$ using the Schr\"odinger-Lichnerowicz formula. Following the standard rules, see e.g. \cite[Chapter 15]{dg} or \cite[Appendix~E]{wit}, for $n\in\NN$ we define $2^{[\frac{n}{2}]}\times 2^{[\frac{n}{2}]}$ Dirac matrices $\gamma_j(n)$ with $j\in\{1,\dots,n\}$ using the following iterative procedure: \begin{itemize} \item For $n=1$, set $\gamma_1(1):=(1)$. 
\item For $n=2$, set $\gamma_1(2):=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ and $\gamma_2(2):=\begin{pmatrix} 0 & -\rmi \\ \rmi & 0 \end{pmatrix}$. \item For $n=2m+1$ with $m\in\NN$: \begin{align} \gamma_j(2m+1)&:=\gamma_j(2m), \quad j=1,\dots,2m, \nonumber\\ \gamma_{2m+1}(2m+1)&:=\pm\rmi^m \gamma_1(2m)\cdot \ldots \cdot \gamma_{2m}(2m)=\pm\begin{pmatrix} -I_{2^{m-1}} & 0 \\ 0 & I_{2^{m-1}} \end{pmatrix}, \label{eqpmg} \end{align} \item For $n=2m+2$ with $m\in\NN$: \begin{align*} \gamma_j(2m+2)&:=\begin{pmatrix} 0 & \gamma_j(2m+1)\\ \gamma_j(2m+1) & 0 \end{pmatrix}, \quad j=1,\dots,2m+1,\nonumber\\ \gamma_{2m+2}(2m+2)&:=\begin{pmatrix} 0 & -\rmi I_{2^{m}}\\ \rmi I_{2^{m}}& 0 \end{pmatrix} . \nonumber \end{align*} \end{itemize} One easily checks that at a fixed $n\in\NN$ the matrices $\gamma_j(n)$ are Hermitian and anticommute, and the square of each of them is the identity matrix. Furthermore, if $\big(\gamma'_j(n)\big)$ is another set of matrices with these properties and of the same size, then there exists a unitary matrix $C$ and a suitable choice of $\pm$ in \eqref{eqpmg} such that the equalities $\gamma'_j(n) C=C\gamma_j(n)$ hold for all $j$, see e.g. \cite[Prop.~15.16]{dg}. Therefore, without loss of generality one may assume that the matrices $\alpha_j$ in the expression \eqref{ekg3} of $\cB$ and the matrices $\beta_j$ used in the definition of $D^\Sigma$ are chosen in the form \begin{equation} \label{agan} \alpha_j=\gamma_j(n+1), \quad j=1,\dots,n+1, \qquad \beta_j=\gamma_j(n), \quad j=1,\dots,n. \end{equation} For $x=(x_1,\dots,x_n)\in\RR^n$ and $q\in\{n,n+1\}$ we define a matrix $\Gamma_q(x)$ by \[ \Gamma_q(x)=\sum_{j=1}^n x_j \gamma_j(q), \] then one has the relations \begin{gather} \label{eq-comm-rel} \Gamma_n(x)\Gamma_n(y)+\Gamma_n(y)\Gamma_n(x)=2\langle x,y\rangle_{\RR^n}I, \quad x,y\in\RR^n,\\ \Gamma(x)=\Gamma_{n+1}(x), \quad \beta(x)=\Gamma_n(x).\nonumber \end{gather} Consider first the case when $n$ is odd, $n=2m+1$ with $m\in\NN$.
Represent $f\in\cH$ as $f=(f_-,f_+)$ with $f_\pm\in L^2(\Sigma,\CC^{N/2})$, then, under the convention \eqref{agan}, the condition $f=\cB f$ takes the form \begin{gather*} \begin{pmatrix} f_- \\ f_+ \end{pmatrix}=-\rmi \begin{pmatrix} 0 & -\rmi I_{2^m}\\ \rmi I_{2^m}& 0\end{pmatrix} \begin{pmatrix} 0 & \Gamma_n(\nu) \\ \Gamma_n(\nu) & 0 \end{pmatrix} \begin{pmatrix} f_- \\ f_+ \end{pmatrix}, \end{gather*} which holds if and only if $f_\pm=\pm \Gamma_n(\nu)f_\pm$. Therefore, the map \[ U:L^2(\Sigma,\CC^{N/2})\to \cH, \quad (U f)(s)=\dfrac{1}{2}\begin{pmatrix} \big(1-\Gamma_n(\nu) \big) f\\ \big(1+\Gamma_n(\nu) \big) f\end{pmatrix} \] defines a unitary operator, and $Uf \in H^1(\Sigma,\CC^N)$ iff $f\in H^1(\Sigma,\CC^{N/2})$. As $H_j$ are scalar functions, one has \begin{equation} \label{h1h2} \Big(H_2-\dfrac{H_1^2}{4}\Big) |U f|^2_{\CC^N} = \Big(H_2-\dfrac{H_1^2}{4}\Big) |f|^2_{\CC^{N/2}}. \end{equation} In order to compute $\big|\nabla (Uf)\big|^2$ we use local coordinates on $\Sigma$. One has \begin{align*} \big|\nabla (Uf)\big|^2&= \dfrac{1}{4}\sum_{j,k=1}^{n-1} g^{j,k}\bigg[ \Big\langle \partial_j \Big(\big(1-\Gamma_n(\nu) \big) f\Big),\partial_k \Big(\big(1-\Gamma_n(\nu) \big) f\Big)\Big\rangle_{\CC^{N/2}} \\ &\qquad+ \Big\langle \partial_j \Big(\big(1+\Gamma_n(\nu) \big) f\Big),\partial_k \Big(\big(1+\Gamma_n(\nu) \big) f\Big)\Big\rangle_{\CC^{N/2}} \bigg]\\ &= \dfrac{1}{2}\sum_{j,k=1}^{n-1}g^{j,k}\bigg[ \langle \partial_j f,\partial_k f\rangle_{\CC^{N/2}} + \Big\langle \partial_j \big(\Gamma_n(\nu) f\big) , \partial_k \big(\Gamma_n(\nu) f\big) \Big\rangle_{\CC^{N/2}}\bigg]. 
\end{align*} We have then \begin{multline*} \Big\langle \partial_j \big(\Gamma_n(\nu) f\big), \partial_k \big(\Gamma_n(\nu) f\big) \Big\rangle_{\CC^{N/2}}\\ =\Big\langle \Gamma_n(\nu)\partial_j f + \Gamma_n(\partial_j\nu) f,\Gamma_n(\nu)\partial_k f + \Gamma_n(\partial_k\nu) f\Big\rangle_{\CC^{N/2}}\\ =\Big\langle \partial_j f + \Gamma_n(\nu)\Gamma_n(\partial_j\nu) f,\partial_k f + \Gamma_n(\nu)\Gamma_n(\partial_k\nu) f\Big\rangle_{\CC^{N/2}}, \end{multline*} and it follows that \begin{align*} \big|\nabla (Uf)\big|^2&=\sum_{j,k=1}^{n-1} g^{j,k} \Big\langle\partial_j f + \dfrac{1}{2}\,\Gamma_n(\nu) \Gamma_n(\partial_j\nu) f, \partial_k f + \dfrac{1}{2}\,\Gamma_n(\nu) \Gamma_n(\partial_k\nu) f\Big\rangle_{\CC^{N/2}}\\ &\quad +\frac14\sum_{j,k=1}^{n-1} g^{j,k} \Big\langle\Gamma_n(\partial_k \nu)\Gamma_n(\partial_j\nu)f,f\Big\rangle_{\CC^{N/2}}\\ &= |\nabla^\Sigma f|^2+\dfrac{1}{4}\langle f, V f\rangle_{\CC^{N/2}}, \qquad V:=\sum_{j,k=1}^{n-1} g^{j,k} \Gamma_n(\partial_k \nu)\Gamma_n(\partial_j\nu). \end{align*} Using the symmetry of $(g^{j,k})$ and the commutation relation \eqref{eq-comm-rel} we compute \begin{multline*} V=\dfrac{1}{2}\sum_{j,k=1}^{n-1} g^{j,k} \Big(\Gamma_n(\partial_j \nu)\Gamma_n(\partial_k\nu)+\Gamma_n(\partial_k \nu)\Gamma_n(\partial_j\nu)\Big)\\ = \sum_{j,k=1}^{n-1} g^{j,k} \langle\partial_j \nu,\partial_k\nu\rangle\, I= |\nabla\nu|^2 I=|W|^2 I=(H_1^2-2H_2) I. \end{multline*} By combining with \eqref{h1h2} we arrive at \begin{gather*} L[Uf,Uf]=\int_\Sigma \Big( |\nabla^\Sigma f|^2+\dfrac{H_2 |f|^2}{2} \Big)\dd s. \end{gather*} Due to the Schr\"odinger-Lichnerowicz formula \eqref{lichn} we conclude that $U^*LU=(D^\Sigma)^2$, while $(D^\Sigma)^2$ is unitarily equivalent to $\Dsl^2$ as $n$ is odd. This proves the claim for odd dimensions. Now consider the case when $n$ is even, $n=2m$ with $m\in\NN$.
As in the previous case, we look for a block representation of the condition $f=\cB f$, which now takes the form \begin{equation} \label{loc0} \Big(I_{2^m} +\rmi \gamma_{2m+1}(2m+1) \sum_{j=1}^{2m} \gamma_j(2m+1)\, \nu_j\Big)f=0. \end{equation} We first remark that for $x=(x_1,\dots,x_n)\in\RR^n$ we have the block representation \begin{gather} \sum_{j=1}^{2m} \gamma_j(2m+1)x_j\equiv \sum_{j=1}^{2m} \gamma_j(2m)x_j\equiv\Gamma_n (x)= \begin{pmatrix} 0 & \lambda(x) \\ \lambda(x)^* & 0 \end{pmatrix}, \label{lambdas}\\ \lambda(x):=\sum_{j=1}^{2m-1} \gamma_j(2m-1) \, x_j -\rmi x_{2m} I_{2^{m-1}}. \nonumber \end{gather} Represent $f=(\psi_-,\psi_+)$ with $\psi_\pm\in L^2(\Sigma,\CC^{N/2})$, then we rewrite the condition \eqref{loc0} in the block form \[ \left[ \begin{pmatrix} I & 0 \\ 0 & I\end{pmatrix} \pm\rmi \begin{pmatrix} -I & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} 0 & \lambda(\nu) \\ \lambda(\nu)^* & 0\end{pmatrix} \right]\begin{pmatrix} \psi_- \\ \psi_+\end{pmatrix}=\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \] where $I:=I_{2^{m-1}}$. Using $\lambda(\nu)\lambda(\nu)^*=\lambda(\nu)^*\lambda(\nu)=I$ we see that the condition $f=\cB f$ can be rewritten as $\psi_-=\pm\rmi \lambda(\nu)\,\psi_+$. Hence, the map \[ U: L^2(\Sigma,\CC^{N/2})\to \cH, \quad U \psi = \dfrac{1}{\sqrt{2}}\begin{pmatrix} \pm\rmi \lambda(\nu) \psi \\ \psi \end{pmatrix} \] defines a unitary operator, and at each point of $\Sigma$ there holds \begin{equation} \label{equu1} \begin{aligned} \big|\nabla (U\psi)\big|^2&= \sum_{j,k=1}^{n-1} g^{j,k} \bigg(\dfrac{1}{2}\, \Big\langle \rmi\lambda(\nu)\partial_j \psi + \rmi\lambda(\partial_j\nu) \psi,\rmi\lambda(\nu)\partial_k \psi + \rmi\lambda(\partial_k\nu) \psi\Big\rangle_{\CC^{N/2}}\\ &\quad+\dfrac{1}{2}\,\langle\partial_j \psi,\partial_k\psi\rangle_{\CC^{N/2}}\bigg).
\end{aligned} \end{equation} We then transform \begin{multline*} \dfrac{1}{2}\,\Big\langle \rmi\lambda(\nu)\partial_j \psi + \rmi\lambda(\partial_j\nu) \psi,\rmi\lambda(\nu)\partial_k \psi + \rmi\lambda(\partial_k\nu) \psi\Big\rangle_{\CC^{N/2}}+\dfrac{1}{2}\langle\partial_j \psi,\partial_k\psi\rangle_{\CC^{N/2}}\\ \begin{aligned} &=\dfrac{1}{2}\,\Big\langle \partial_j \psi + \lambda(\nu)^*\lambda(\partial_j\nu) \psi,\partial_k \psi + \lambda(\nu)^*\lambda(\partial_k\nu) \psi\Big\rangle_{\CC^{N/2}}+\dfrac{1}{2}\langle\partial_j \psi,\partial_k\psi\rangle_{\CC^{N/2}}\\ &=\Big\langle \partial_j \psi + \dfrac{1}{2}\lambda(\nu)^*\lambda(\partial_j\nu) \psi,\partial_k \psi + \dfrac{1}{2}\lambda(\nu)^*\lambda(\partial_k\nu) \psi\Big\rangle_{\CC^{N/2}}\\ &\quad+\dfrac{1}{4}\,\Big\langle \lambda(\nu)^*\lambda(\partial_j\nu) \psi,\lambda(\nu)^*\lambda(\partial_k\nu) \psi\Big\rangle_{\CC^{N/2}}\\ &=\Big\langle \partial_j \psi + \dfrac{1}{2}\lambda(\nu)^*\lambda(\partial_j\nu) \psi,\partial_k \psi + \dfrac{1}{2}\lambda(\nu)^*\lambda(\partial_k\nu) \psi\Big\rangle_{\CC^{N/2}}\\ &\quad+\dfrac{1}{4}\,\Big\langle \psi,\lambda(\partial_j\nu)^*\lambda(\partial_k\nu) \psi\Big\rangle_{\CC^{N/2}}. \end{aligned} \end{multline*} The substitution into \eqref{equu1} gives \begin{align*} \big|\nabla (U\psi)\big|^2&=\sum_{j,k=1}^{n-1} g^{j,k}\Big\langle \partial_j \psi + \dfrac{1}{2}\lambda(\nu)^*\lambda(\partial_j\nu) \psi,\partial_k \psi + \dfrac{1}{2}\lambda(\nu)^*\lambda(\partial_k\nu) \psi\Big\rangle_{\CC^{N/2}}\\ &\quad+\dfrac{1}{4}\langle \psi, V \psi\rangle_{\CC^{N/2}}, \qquad V:=\sum_{j,k=1}^{2m-1}g^{j,k}\lambda(\partial_j\nu)^*\lambda(\partial_k\nu). 
\end{align*} In order to compute $V$ we introduce \[ \widetilde V:=\sum_{j,k=1}^{2m-1}g^{j,k}\lambda(\partial_j\nu)\lambda(\partial_k\nu)^*, \] then \begin{align*} \begin{pmatrix} \widetilde V & 0\\ 0 & V \end{pmatrix} &=\sum_{j,k=1}^{2m-1}g^{j,k}\begin{pmatrix}\lambda(\partial_j\nu)\lambda(\partial_k\nu)^* & 0 \\ 0 &\lambda(\partial_j\nu)^*\lambda(\partial_k\nu)\end{pmatrix}\\ &=\sum_{j,k=1}^{2m-1}g^{j,k}\begin{pmatrix}0 & \lambda(\partial_j\nu) \\ \lambda(\partial_j\nu)^* &0\end{pmatrix} \begin{pmatrix}0 & \lambda(\partial_k\nu) \\ \lambda(\partial_k\nu)^* & 0\end{pmatrix}\\ &= \sum_{j,k=1}^{2m-1}g^{j,k} \Gamma_n(\partial_j \nu)\Gamma_n(\partial_k\nu)\\ &=\dfrac{1}{2} \sum_{j,k=1}^{2m-1}g^{j,k} \Big( \Gamma_n(\partial_j \nu)\Gamma_n(\partial_k\nu)+\Gamma_n(\partial_k \nu)\Gamma_n(\partial_j\nu)\Big)\\ &=\sum_{j,k=1}^{2m-1}g^{j,k} \langle \partial_j \nu,\partial_k\nu\rangle\, I= |\nabla \nu|^2 I=|W|^2I=(H_1^2-2H_2)I. \end{align*} In addition, as the functions $H_j$ are scalar, we have \[ \Big\langle U\psi, \Big(H_2-\dfrac{H_1^2}{4}\Big)U\psi\Big\rangle_{\cH} =\Big\langle \psi, \Big(H_2-\dfrac{H_1^2}{4}\Big)\psi\Big\rangle_{L^2(\Sigma,\CC^{N/2})}, \] and then \begin{multline*} L[U\psi,U\psi]\\ =\int_\Sigma \sum_{j,k=1}^{n-1} g^{j,k}\Big\langle \partial_j \psi + \dfrac{1}{2}\lambda(\nu)^*\lambda(\partial_j\nu) \psi,\partial_k \psi + \dfrac{1}{2}\lambda(\nu)^*\lambda(\partial_k\nu) \psi\Big\rangle_{\CC^{N/2}}\dd s\\ +\dfrac{1}{2}\langle \psi, H_2 \psi\rangle_{L^2(\Sigma,\CC^{N/2})}. 
\end{multline*} Now consider the unitary transform $U_0:L^2(\Sigma,\CC^{N/2})\to L^2(\Sigma,\CC^{N/2})$ given by $U_0\psi=\lambda(\nu)^* \psi$, then a simple computation shows that \begin{multline*} L[UU_0\psi,UU_0\psi]\\ =\int_\Sigma \sum_{j,k=1}^{n-1} g^{j,k}\Big\langle \partial_j \psi + \dfrac{1}{2}\lambda(\nu)\lambda(\partial_j\nu)^* \psi,\partial_k \psi + \dfrac{1}{2}\lambda(\nu)\lambda(\partial_k\nu)^* \psi\Big\rangle_{\CC^{N/2}}\dd s\\ +\dfrac{1}{2}\langle \psi, H_2 \psi\rangle_{L^2(\Sigma,\CC^{N/2})}. \end{multline*} Using \eqref{lambdas}, for $\psi_\pm\in H^1(\Sigma,\CC^{N/2})$ and $\psi:=(\psi_-,\psi_+)\in H^1(\Sigma,\CC^N)$ one has \begin{multline*} L[UU_0\psi_-,UU_0\psi_-]+L[U\psi_+,U\psi_+]\\ =\int_\Sigma \sum_{j,k=1}^{n-1} g^{j,k} \big\langle\partial_j \psi + \dfrac{1}{2} \Gamma_n(\nu)\Gamma_n(\partial_j\nu)\psi, \partial_k \psi + \dfrac{1}{2} \Gamma_n(\nu)\Gamma_n(\partial_k\nu)\psi\big\rangle_{\CC^N}\dd s\\ +\dfrac{1}{2}\, \langle \psi, H_2 \psi\rangle_{L^2(\Sigma,\CC^N)}. \end{multline*} By comparing with the Schr\"odinger-Lichnerowicz formula \eqref{lichn}--\eqref{lichn2} we see that the operator $(U_0^*U^*LUU_0)\oplus (U^*LU)$ is unitarily equivalent to $(D^\Sigma)^2$. As $(D^\Sigma)^2$ is now unitarily equivalent to $\Dsl^2\oplus\Dsl^2$ (because $n$ is even), it follows that $L$ is unitarily equivalent to $\Dsl^2$. \end{proof} \section{Preliminary constructions for the spectral analysis}\label{sec-prelim} \subsection{One-dimensional model operators}\label{ssec1d} \begin{lemma}\label{lem1dd} Let $\delta>0$ be fixed. For $\alpha>0$, let $S$ be the self-adjoint operator in $L^2(0,\delta)$ with \[ S[f,f]=\int_0^\delta |f'|^2\dd t-\alpha \big|f(0)\big|^2, \quad \qdom(S)=\big\{f\in H^1(0,\delta):\, f(\delta)=0\big\}, \] then for $\alpha\to+\infty$ one has $E_1(S)=-\alpha^2+\cO(e^{-\delta\alpha})$, and the associated eigenfunction $\psi$ with $\|\psi\|_{L^2(0,\delta)}=1$ satisfies $\big|\psi(0)\big|^2=2\alpha+\cO(1)$. 
\end{lemma} \begin{proof} One easily sees that the operator $S$ acts as $f\mapsto-f''$ on the functions $f\in H^2(0,\delta)$ with $f'(0)+\alpha f(0)=f(\delta)=0$. Let us estimate its first eigenvalue as $\alpha\to+\infty$. We look for negative eigenvalues $E=-k^2$ with $k>0$; then using the boundary condition at $\delta$ we see that the associated normalized eigenfunction $\psi$ is of the form $\psi(t)=c\sinh\big(k(\delta-t)\big)$ with $c\ne 0$ being a normalizing constant. The boundary condition at $0$ gives $0=\psi'(0)+\alpha \psi(0)=-k \cosh (k\delta)+\alpha \sinh (k\delta)$, i.e. \begin{equation} \label{fkd} F(k\delta)=\alpha\delta, \quad F(x):= x\coth x. \end{equation} One easily sees that $F:(0,+\infty)\to (1,+\infty)$ is strictly increasing and bijective, hence for $\alpha\delta>1$ the equation \eqref{fkd} admits a unique solution $k$, and then $k\delta\to +\infty$ for $\alpha\to+\infty$. Now rewrite \eqref{fkd} as $k=\alpha \tanh (k\delta)$. Due to $k\delta\to+\infty$ we have $\frac{3}{4}\le \tanh (k\delta)\le 1$, implying $3\alpha/4\le k\le \alpha$. Then using the equation again we have $\alpha \tanh\big(\frac{3}{4} \,\alpha\delta\big)\le k\le \alpha$, while $\tanh\big(\frac{3}{4} \,\alpha \delta\big)=1+\cO(e^{-3 \delta \alpha/2})$. Therefore, with some $c_1>0$ one has $E_1(S)=-k^2=-\alpha^2\big(1+\cO(e^{-3\delta\alpha/2})\big)\le -\alpha^2+c_1e^{-\delta\alpha}$ as $\alpha\to +\infty$. In order to compute $\big|\psi(0)\big|^2$ we use the normalization \[ 1=\|\psi\|^2_{L^2(0,\delta)}=|c|^2\int_0^\delta \sinh^2\big( k(\delta-t)\big)\dd t=|c|^2 \Big(\dfrac{1}{4 k} \sinh (2 k\delta) - \dfrac{\delta}{2}\Big), \] then \[ \big|\psi(0)\big|^2=\big(\sinh^2 (k\delta) \big)\Big(\dfrac{\sinh (2 k\delta)}{4k}-\dfrac{\delta}{2}\Big)^{-1}= 2k+\cO(1)=2\alpha+\cO(1). \qedhere \] \end{proof} \begin{lemma}\label{lem1dr} Let $\delta>0$ and $\beta\ge 0$ be fixed.
For $\alpha>0$, let $S'$ be the self-adjoint operator in $L^2(0,\delta)$ given by \[ S'[f,f]=\int_0^\delta |f'|^2\dd t -\alpha \big|f(0)\big|^2 - \beta \big|f(\delta)\big|^2, \quad \qdom(S')=H^1(0,\delta), \] then for $\alpha\to+\infty$ one has $E_1(S')=-\alpha^2+\cO(e^{-\delta\alpha})$. Furthermore, there exist $b^\pm>0$ and $b>0$ such that \begin{equation} \label{ejb} b^- j^2-b \le E_j(S')\le b^+ j^2 \text{ for all $j\ge 2$ and $\alpha\in\RR$}. \end{equation} \end{lemma} \begin{proof} The operator $S'$ clearly acts as $f\mapsto -f''$ on the functions $f\in H^2(0,\delta)$ with $f'(0)+\alpha f(0)=f'(\delta)-\beta f(\delta)=0$. To estimate $E_1(S')$ we remark that a value $E=-k^2$ with $k>0$ is an eigenvalue of $S'$ iff one can find $(C_1,C_2)\in\CC^2\setminus\big\{(0,0)\big\}$ such that the function $f:t\mapsto C_1e^{kt}+C_2 e^{-kt}$ belongs to its domain. The boundary conditions give \begin{align*} 0&=f'(0)+\alpha f(0)=(\alpha+k)C_1 + (\alpha-k)C_2,\\ 0&=f'(\delta)-\beta f(\delta)=(k-\beta)e^{k\delta} C_1 - (k+\beta)e^{-k\delta}C_2, \end{align*} and one has a non-zero solution iff the determinant of the system vanishes, i.e. iff $k$ solves $(k+\alpha)(k+\beta)e^{-k\delta}=(k-\alpha)(k-\beta)e^{k\delta}$, which we rewrite as \begin{equation} \label{eq-gh} g(k)=h(k), \quad g(k):=\dfrac{k+\alpha}{k-\alpha}, \quad h(k):=\dfrac{k-\beta}{k+\beta} \,e^{2k\delta}. \end{equation} Both $g$ and $h$ are continuous, and $g$ is strictly decreasing on $(\alpha,+\infty)$ with $g(\alpha^+)=+\infty$ and $g(+\infty)=1$, while $h$ is strictly increasing on $(\alpha,+\infty)$, being the product of two strictly increasing positive functions there (without loss of generality we assume $\alpha>\beta$), and $h(\alpha^+)=e^{2\alpha\delta}(\alpha-\beta)/(\alpha+\beta) <+\infty$ and $h(+\infty)=+\infty$. Therefore, there exists a unique solution $k$ of \eqref{eq-gh} with $k\in(\alpha,+\infty)$.
To obtain the required estimate we use again the monotonicity of $h$ on $(\alpha,+\infty)$: \[ \dfrac{k+\alpha}{k-\alpha}=g(k)=h(k)> h(\alpha^+)=\dfrac{\alpha-\beta}{\alpha+\beta} \,e^{2\alpha\delta}. \] We bound the last term from below very roughly by $e^{3\alpha\delta/2}$; then \[ \dfrac{k+\alpha}{k-\alpha}\ge e^{3\alpha\delta/2}, \quad k\le \alpha\,\dfrac{1+e^{-3\alpha\delta/2}}{1-e^{-3\alpha \delta/2}}= \alpha\big(1+\cO(e^{-3\alpha\delta/2})\big). \] By combining with $k>\alpha$ we arrive at the sought estimate \[ E_1(S')=-k^2=-\alpha^2\big(1+\cO(e^{-3\alpha\delta/2})\big)=-\alpha^2+\alpha^2\cO(e^{-3\alpha\delta/2}) =-\alpha^2+\cO(e^{-\alpha\delta}). \] To estimate $E_j(S')$ with $j\ge 2$ we remark that by the min-max principle for any $\alpha\in\RR$ one has $E_{j-1}(S'_N)\le E_j(S')\le E_j(S'_D)$, where the operator $S'_{D/N}$ acts in $L^2(0,\delta)$ as $f\mapsto -f''$ on the functions $f\in H^2(0,\delta)$ with the Dirichlet/Neumann boundary condition at $0$ and with $f'(\delta)-\beta f(\delta)=0$. As the eigenvalues of both $S'_{D/N}$ satisfy the Weyl asymptotics $E_j(S'_{D/N})\sim \pi^2j^2/\delta^2$ as $j\to +\infty$, one arrives at the inequalities \eqref{ejb}. \end{proof} \subsection{Tubular coordinates}\label{sec-curv} Recall that the shape operator $W$ and curvatures of $\Sigma$ were defined in Subsection~\ref{quad1}. In what follows we will actively use tubular coordinates on both sides of $\Sigma$. In this section, \[ \text{let $\Omega_*$ be either $\Omega$ or $\Omega^c$,} \] and let $\nu_*$ be the unit normal on $\Sigma$ pointing to the exterior of $\Omega_*$, i.e. \[ \nu_*:=\nu,\ W_*:=W \text{ for $\Omega_*=\Omega$}, \quad \nu_*:=-\nu,\ W_*:=-W \text{ for $\Omega_*=\Omega^c$}. \] The principal curvatures and the (higher) mean curvatures of $\Sigma$ with respect to $\nu_*$ will be denoted by $h^*_j$ and $H^*_k$ respectively, i.e.
\begin{gather*} \text{$h_j^*:=h_j$ and $H^*_k=H_k$ for $\Omega_*=\Omega$},\\ \text{$h_j^*:=-h_j$ and $H^*_k=(-1)^k H_k$ for $\Omega_*=\Omega^c$.} \end{gather*} For small $\delta>0$ denote \[ \Pi_\delta:=\Sigma\times(0,\delta), \quad \Omega_*^\delta=\big\{x\in \Omega_*: \dist(x,\Sigma)<\delta\big\}. \] It is a well-known result in differential geometry that there exists a small $\delta_0>0$ such that for all $\delta\in(0,\delta_0)$ the map \[ \Phi_*: \Pi_\delta\to \Omega_*^\delta, \quad (s,t)\mapsto s-t\nu_*(s), \] is a diffeomorphism, and $\dist\big(\Phi_*(s,t),\Sigma\big)=t$ for $(s,t)\in\Pi_\delta$. Consider the associated unitary map \[ \Theta_\delta: L^2(\Omega_*^\delta) \to L^2(\Pi_\delta), \quad u\mapsto \sqrt{\det (\Phi_*')}\, u\circ \Phi_*. \] We will use several times the following computations: \begin{lemma}\label{lem7} For $\gamma\in\RR$ denote \[ J_\gamma(u)\equiv J(u):=\int_{\Omega_*^\delta} |\nabla u|^2 \dd x +\int_{\Sigma} \Big(\gamma+\dfrac{H^*_1}{2}\Big) |u|^2\dd s, \quad u\in H^1(\Omega_*^\delta). \] There exist $\delta_0>0$ and $c>0$ such that for any $\gamma\in\RR$ and $\delta\in(0,\delta_0)$ the following assertions hold true with $v:=\Theta_\delta u$: \begin{itemize} \item[(a)] for any $u\in H^1(\Omega_*^\delta)$ with $u=0$ on $\partial \Omega_*^\delta\setminus\Sigma$ one has \[ J(u)\le \int_{\Pi_\delta} \bigg[(1+c\delta) |\nabla_s v|^2 + |\partial_t v|^2 + \Big(H^*_2 -\dfrac{(H^*_1)^2}{4}+c\delta\Big) |v|^2 \bigg]\dd s\dd t +\gamma\int_{\Sigma} \big|v(s,0)\big|^2\dd s, \] \item[(b)] for any $u\in H^1(\Omega_*^\delta)$ one has \begin{multline*} J(u)\ge \int_{\Pi_\delta} \bigg[(1-c\delta) |\nabla_s v|^2 + |\partial_t v|^2 + \Big(H^*_2 -\dfrac{(H^*_1)^2}{4}-c\delta\Big) |v|^2 \bigg]\dd s\dd t\\ +\gamma\int_{\Sigma} \big|v(s,0)\big|^2\dd s-c\int_{\Sigma} \big|v(s,\delta)\big|^2\dd s, \end{multline*} \end{itemize} where $\nabla_s$ is the gradient on $\Sigma$, i.e. with respect to the coordinates $s\in\Sigma$.
\end{lemma} \begin{proof} The metric $G$ on $\Pi_\delta$ induced by the map $\Phi_*$ is given by $G=g\circ (1- tW_*) + \dd t^2$, with $g$ being the metric on $\Sigma$ induced by the embedding in $\RR^n$, and the volume form on $\Pi_\delta$ is $\varphi \dd s \dd t$ with $\dd s$ being the volume form on $\Sigma$ and the weight \begin{equation} \label{eq-varphi} \varphi(s,t)=\prod\nolimits_{j=1}^{n-1}\big(1-t h^*_j(s)\big)=1+\sum\nolimits_{j\ge 1}(-t)^j H^*_{j}(s). \end{equation} Denote $w:=u\circ \Phi_*$, then the standard change of variables gives, for any $u\in H^1(\Omega_*^\delta)$, \[ J(u)=\int_{\Pi_\delta} |\nabla w|^2\varphi\dd s \dd t +\int_\Sigma\Big(\gamma+\dfrac{H_1^*}{2}\Big)\, \big|w(s,0)\big|^2\dd s, \] and we remark that the condition $u=0$ on $\partial \Omega_*^\delta\setminus\Sigma$ is equivalent to $w(\cdot,\delta)=0$. Due to the above representation of the metric $G$, for a suitable fixed $c_0>0$ one can estimate, uniformly in $u$, \[ (1-c_0\delta)|\nabla_s w|^2 + |\partial_t w|^2\le |\nabla w|^2 \le (1+c_0\delta)|\nabla_s w|^2 + |\partial_t w|^2, \] with $\nabla_s$ being the gradient on $\Sigma$ (i.e. with respect to the variable $s$), which gives \begin{multline} \label{eq15} \int_{\Pi_\delta} \big((1-c_0\delta)|\nabla_s w|^2+|\partial_t w|^2\big)\varphi\dd s \dd t +\int_\Sigma\Big(\gamma+\dfrac{H_1^*}{2}\Big)\, \big|w(s,0)\big|^2\dd s\\ \le J(u) \le \int_{\Pi_\delta} \big((1+c_0\delta)|\nabla_s w|^2+|\partial_t w|^2\big)\varphi\dd s \dd t +\int_\Sigma\Big(\gamma+\dfrac{H_1^*}{2}\Big)\, \big|w(s,0)\big|^2\dd s. \end{multline} Recall that $w=\varphi^{-\frac{1}{2}} v$, and that $\varphi=1$ on $\Sigma$. Hence, \[ \Big(\gamma+\dfrac{H_1^*}{2}\Big)\, \big|w(s,0)\big|^2= \Big(\gamma+\dfrac{H_1^*}{2}\Big)\, \big|v(s,0)\big|^2, \] which allows us to transform the last summand in \eqref{eq15}.
In addition, \[ |\nabla_s w|^2\varphi=\Big|\nabla_s v -\dfrac{1}{2\varphi}v \nabla_s \varphi\Big|^2= |\nabla_s v|^2 +\dfrac{|v|^2}{4\varphi^2} |\nabla_s\varphi|^2 - \dfrac{1}{\varphi} \Re \big(\langle \nabla_s v, v\nabla_s\varphi\rangle\big). \] The Cauchy-Schwarz inequality gives $\big|\Re \langle \nabla_s v, v\nabla_s\varphi\rangle\big|\le \delta |\nabla_s v|^2+ |v|^2|\nabla_s \varphi|^2/\delta$, and in view of the expression \eqref{eq-varphi} for $\varphi$ one has $|\nabla_s \varphi|^2\le c_1\delta^2$ for some $c_1>0$ and all $t\in(0,\delta)$. Therefore, for a suitable $c_2>0$ one estimates, uniformly in $u$, \[ (1-c_2\delta)|\nabla_s v|^2-c_2\delta |v|^2 \le (1\pm c_0\delta)|\nabla_s w|^2\varphi \le (1+c_2\delta)|\nabla_s v|^2+c_2 \delta |v|^2. \] We represent now \[ |\partial_t w|^2\varphi=\Big|\partial_t v -\dfrac{1}{2\varphi}v \,\partial_t \varphi\Big|^2 =|\partial_t v|^2 - \dfrac{\partial_t \varphi}{2\varphi} \partial_t \big(|v|^2\big)+\dfrac{(\partial_t \varphi)^2}{4\varphi^2} |v|^2 \] and performing an integration by parts with respect to $t$ in the middle term we have \begin{multline*} \int_{\Pi_\delta} |\partial_t w|^2\varphi\dd s\dd t =\int_{\Pi_\delta} \bigg( |\partial_t v|^2+ \Big( \partial_t \Big(\dfrac{\partial_t \varphi}{2\varphi}\Big) +\dfrac{(\partial_t \varphi)^2}{4\varphi^2}\Big) |v|^2 \bigg)\dd s \dd t\\ -\int_\Sigma \dfrac{H_1^*}{2} \big|v(s,0)\big|^2\dd s -\int_\Sigma \dfrac{(\partial_t\varphi)(s,\delta)}{2\varphi(s,\delta)} \big|v(s,\delta)\big|^2\dd s, \end{multline*} while the last summand vanishes for $v(\cdot,\delta)=0$, i.e. for $u=0$ on $\partial \Omega_*^\delta\setminus\Sigma$. 
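For later use we record the small-$t$ behavior of the effective potential appearing above (a direct consequence of \eqref{eq-varphi}): since $\varphi=1-tH_1^*+t^2H_2^*+\cO(t^3)$, one has $\partial_t\varphi=-H_1^*+2tH_2^*+\cO(t^2)$ and $\partial^2_t\varphi=2H_2^*+\cO(t)$, and therefore, uniformly in $s\in\Sigma$, \[ \dfrac{\partial^2_t \varphi}{2\varphi}-\dfrac{(\partial_t\varphi)^2}{4\varphi^2}=H^*_2-\dfrac{(H^*_1)^2}{4}+\cO(t) \quad \text{ as } t\to 0^+. \]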
Putting the above estimates together we obtain \begin{align*} J(u)&\le\int_{\Pi_\delta} \bigg((1+c_2\delta)|\nabla_s v|^2+|\partial_t v|^2 +\Big(\dfrac{\partial^2_t \varphi}{2\varphi} -\dfrac{(\partial_t\varphi)^2}{4\varphi^2}+c_2\delta\Big)|v|^2\bigg)\dd s\dd t\\ &\quad+\gamma\int_\Sigma\big|v(s,0)\big|^2\dd s, \quad u\in H^1(\Omega_*^\delta), \quad u=0 \text{ on } \partial \Omega_*^\delta\setminus\Sigma,\\ J(u)&\ge \int_{\Pi_\delta} \bigg((1-c_2\delta)|\nabla_s v|^2+|\partial_t v|^2 +\Big(\dfrac{\partial^2_t \varphi}{2\varphi} -\dfrac{(\partial_t\varphi)^2}{4\varphi^2}-c_2\delta\Big)|v|^2\bigg)\dd s\dd t\\ &\quad+\gamma\int_\Sigma\big|v(s,0)\big|^2\dd s -\int_\Sigma \dfrac{(\partial_t\varphi)(s,\delta)}{2\varphi(s,\delta)} \big|v(s,\delta)\big|^2\dd s, \quad u\in H^1(\Omega_*^\delta). \end{align*} It remains to estimate, with a suitable $c_3>0$, \[ \Big\|\dfrac{(\partial_t\varphi)(\cdot,\delta)}{2\varphi(\cdot,\delta)}\Big\|_{L^\infty(\Sigma)}\le c_3, \quad \Big\| \dfrac{\partial^2_t \varphi}{2\varphi} -\dfrac{(\partial_t\varphi)^2}{4\varphi^2} -\Big(H^*_2-\dfrac{(H^*_1)^2}{4}\Big)\Big\|_{L^\infty(\Pi_\delta)}\le c_3 \delta \] and to choose $c:=\max\{c_2,c_3\}$. \end{proof} \section{Proof of Theorem~\ref{thm1a}}\label{sec-thm1} We are going to show that $E_j(A_m^2)\to E_j(\Dsl^2)$ for each $j\in \NN$ as $m\to-\infty$. Due to Lemma~\ref{lemld} for each $j\in\NN$ there holds $E_j(\Dsl^2)=E_j(L)$, hence, it is sufficient to prove that \begin{equation} \label{conv01} E_j(L)=\lim_{m\to-\infty} E_j(A_m^2)\text{ for each $j\in\NN$}.
\end{equation} \subsection{Dirichlet-Neumann bracketing} For small $\delta>0$ denote $\Omega_\delta:=\big\{x\in\Omega:\, \dist(x,\Sigma)<\delta\big\}$ and $\Pi_\delta:=\Sigma\times(0,\delta)$ and consider the diffeomorphism $\Phi:\Pi_\delta\to \Omega_\delta$ given by $(s,t)\mapsto s-t\nu(s)$ together with the associated unitary map $\Theta_\delta: L^2(\Omega_\delta,\CC^N)\to L^2(\Pi_\delta,\CC^N)$, $\Theta_\delta u= \sqrt{\det (\Phi')}\,\, u\circ\Phi$. Consider the self-adjoint operator $Z^+_m$ in $L^2(\Omega_\delta,\CC^N)$ given by \begin{align} \label{loc-5} Z^+_m[u, u]& =\int_{\Omega_\delta} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x +\int_\Sigma\Big(m+ \dfrac{H_1}{2}\Big)\, |u|^2\dd s,\\ \qdom(Z^+_m)&=\big\{u\in H^1(\Omega_\delta,\CC^N): u=\cB u \text{ on } \Sigma, \quad u=0 \text{ on } \partial\Omega_\delta\setminus\Sigma\big\},\nonumber \end{align} the self-adjoint operator $Z^-_m$ in $L^2(\Omega_\delta,\CC^N)$ given by \begin{align} \label{loc-6} Z^-_m[u, u]& =\int_{\Omega_\delta} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x +\int_\Sigma\Big(m+ \dfrac{H_1}{2}\Big)\, |u|^2\dd s,\\ \qdom(Z^-_m)&=\big\{u\in H^1(\Omega_\delta,\CC^N): u=\cB u \text{ on } \Sigma\big\}, \nonumber \end{align} and the self-adjoint operator $Z'_m$ in $L^2(\Omega^c_\delta,\CC^N)$ given by \[ Z'_m[u, u] =\int_{\Omega^c_\delta} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x, \quad \qdom(Z'_m)=H^1(\Omega^c_\delta,\CC^N), \] with $\Omega_\delta^c := \Omega \setminus \overline{\Omega_\delta}$. Due to the min-max principle, for any $j\in\NN$ we have the eigenvalue inequality \[ E_j(Z^-_m\oplus Z'_m)\le E_j(A_m^2)\le E_j(Z^+_m).
\] (It is sufficient to apply Proposition~\ref{prop-incl}: for the left inequality one takes $T=A_m^2$, $T':=Z^-_m\oplus Z'_m$, and $J:L^2(\Omega,\CC^N)\to L^2(\Omega_\delta,\CC^N)\oplus L^2(\Omega^c_\delta,\CC^N)$ defined by $Jf=(f^-,f')$ with $f^-:=f|_{\Omega_\delta}$ and $f':=f|_{\Omega^c_\delta}$, while for the right inequality one takes $T:=Z^+_m$, $T':=A_m^2$ and $J:L^2(\Omega_\delta)\to L^2(\Omega)$ the extension by zero.) Noting that $Z'_m\ge m^2$ we deduce that \begin{equation} \label{loc-8} E_j(Z^-_m)\le E_j(A_m^2)\le E_j(Z^+_m) \text{ for any $j\in\NN$ with $E_j(Z^+_m)< m^2$}. \end{equation} Using the change of coordinates of Lemma~\ref{lem7} to bound $Z^\pm_m[\Theta_\delta^*v,\Theta_\delta^*v]$ from above and below we then obtain \[ E_j(Z^+_m)\le E_j(Y^+_m), \quad E_j(Z^-_m)\ge E_j(Y^-_m) \text{ for any $j\in\NN$} \] with $Y^\pm_m$ being the self-adjoint operators in $L^2(\Pi_\delta,\CC^N)$ given by \begin{align*} Y^+_m[v,v]&=\int_{\Pi_\delta} \bigg[(1+c\delta) |\nabla_s v|^2 + |\partial_t v|^2 + \Big(m^2+H_2 -\dfrac{H_1^2}{4}+c\delta\Big) |v|^2 \bigg]\dd s\dd t\\ &\quad +m\int_{\Sigma} \big|v(s,0)\big|^2\dd s,\\ \qdom(Y^+_m)&=\big\{v\in H^1(\Pi_\delta,\CC^N): v(\cdot ,0)=\cB v(\cdot,0) \text{ and } v(\cdot,\delta)=0\big\},\\ Y^-_m[v,v]&=\int_{\Pi_\delta} \bigg[(1-c\delta) |\nabla_s v|^2 + |\partial_t v|^2 + \Big(m^2+H_2 -\dfrac{H_1^2}{4}-c\delta\Big) |v|^2 \bigg]\dd s\dd t\\ &\quad+m\int_{\Sigma} \big|v(s,0)\big|^2\dd s-c \int_{\Sigma} \big|v(s,\delta)\big|^2\dd s,\\ \qdom(Y^-_m)&=\big\{v\in H^1(\Pi_\delta,\CC^N): v(\cdot ,0)=\cB v(\cdot,0)\big\}, \end{align*} where $c$ is independent of $\delta\in(0,\delta_0)$ and $m\in \RR$ is arbitrary. Therefore, we arrive at the two-sided estimate \begin{equation} \label{loc-9} E_j(Y^-_m)\le E_j(A_m^2)\le E_j(Y^+_m) \text{ for any $j\in\NN$ with $E_j(Y^+_m)< m^2$}.
\end{equation} \subsection{Upper bound} To obtain an upper bound for the eigenvalues of $Y^+_m$ let us consider the self-adjoint operator $S$ in $L^2(0,\delta)$ with \[ S[f,f]=\int_{0}^\delta |f'|^2\dd t + m\big|f(0)\big|^2, \quad \qdom(S)=\big\{f\in H^1(0,\delta):\, f(\delta)=0\big\} \] and let $\psi$ be an eigenfunction for the first eigenvalue normalized by $\|\psi\|^2_{L^2(0,\delta)}=1$. The analysis of Lemma~\ref{lem1dd} shows that for some $b>0$ one has $E_1(S)\le-m^2+be^{-\delta|m|}$ as $(-m)$ is large, and then $S[\psi,\psi] + m^2\le be^{-\delta|m|}$. Let $c>0$ be the same as in the above expressions for $Y^\pm_m$. For small $a\in\RR$, let $L_a$ be the self-adjoint operator in $\cH$ given by \begin{equation} \label{lavv} \begin{aligned} L_a[g,g]&=\int_\Sigma \Big[(1+ca)|\nabla g|^2 +\Big(H_2-\dfrac{H_1^2}{4}+ca\Big) |g|^2\Big]\dd s,\\ \qdom(L_a)&=H^1(\Sigma,\CC^N)\cap \cH. \end{aligned} \end{equation} Remark that for $a=0$ we recover exactly the operator $L$ and that due to the min-max principle one has \begin{equation} \label{conv03} E_j(L)=\lim_{a\to 0} E_j(L_a) \text{ for each $j\in\NN$.} \end{equation} Let $j\in\NN$ be fixed and $g_1,\dots,g_j$ be linearly independent eigenfunctions of $L_\delta$ for the first $j$ eigenvalues, then the subspace $G:=\vspan(g_1,\dots,g_j)$ is $j$-dimensional and $L_\delta[g,g]/\|g\|^2_\cH\le E_j(L_\delta)$ for any $0\ne g\in G$. Consider the subspace \[ V=\{v \in L^2(\Pi_\delta,\CC^N): \,v(s,t)=g(s)\psi(t), \ g\in G\}\subset \qdom (Y^+_m), \] then for $v\in V$ with $v(s,t)=g(s)\psi(t)$ and $g\in G$ one has $\|v\|^2_{L^2(\Pi_\delta,\CC^N)}=\|g\|^2_\cH$ and \begin{multline*} Y^+_m[v,v]=L_\delta[g,g] \|\psi\|^2_{L^2(0,\delta)} + \Big(S[\psi,\psi] + m^2\|\psi\|^2_{L^2(0,\delta)}\Big)\|g\|^2_\cH\\ \le L_\delta[g,g] + be^{-\delta|m|}\|g\|^2_\cH\le \big( E_j(L_\delta) + be^{-\delta|m|}\big)\|g\|^2_\cH\\ \equiv \big( E_j(L_\delta) + be^{-\delta|m|}\big)\|v\|^2_{L^2(\Pi_\delta,\CC^N)}. 
\end{multline*} As $\dim V=\dim G=j$, it follows by the min-max principle that \[ E_j(Y^+_m)\le \sup_{0\ne v\in V}\dfrac{Y^+_m[v,v]}{\|v\|^2_{L^2(\Pi_\delta,\CC^N)}}\le E_j(L_\delta) + be^{-\delta|m|}, \] hence, $\limsup_{m\to-\infty}E_j(Y^+_m)\le E_j(L_\delta)$. As $\delta>0$ can be chosen arbitrarily small, the convergence \eqref{conv03} implies $\limsup_{m\to-\infty}E_j(Y^+_m)\le E_j(L)$, and then due to the upper bound \eqref{loc-9} we arrive at \begin{equation} \label{conv04} \limsup_{m\to-\infty}E_j(A_m^2)\le E_j(L). \end{equation} \subsection{Lower bound} Now let us pass to a lower bound for $E_j(Y^-_m)$. In the constructions below, the constant $c>0$ is the same as in the expression for $Y^-_m$. Let $S'$ be the self-adjoint operator in $L^2(0,\delta)$ with \[ S'[f,f]=\int_0^\delta |f'|^2\dd t +m \big|f(0)\big|^2 - c \big|f(\delta)\big|^2, \quad \qdom (S')=H^1(0,\delta). \] Let $\psi_k\in L^2(0,\delta)$ with $k\in\NN$ be real-valued eigenfunctions of $S'$ for the eigenvalues $E_k(S')$ forming an orthonormal basis in $L^2(0,\delta)$, which induces the unitary transform $\Theta:L^2(0,\delta)\to \ell^2(\NN)$ given by $(\Theta f)_k=\langle \psi_k,f\rangle_{L^2(0,\delta)}$, $k\in\NN$. Recall that due to the analysis of Lemma~\ref{lem1dr} we have, with some $b^\pm>0$, $b>0$ and $b_0>0$, \begin{gather} \label{conv05} E_1(S')\ge -m^2-b e^{-\delta|m|} \text{ as } m\to-\infty, \\ \label{conv06} b^- k^2- b_0\le E_k(S')\le b^+ k^2 \text{ for all $k\ge 2$ and $m\in \RR$.} \end{gather} Let us give some more details on the subsequent constructions. Let $Y_m$ be the self-adjoint operator whose sesquilinear form is given by the same expression as the one for $Y^-_m$ but on the larger form domain $\qdom(Y_m)=H^1(\Pi_\delta,\CC^N)$. It follows easily that the new operator $Y_m$ admits a separation of variables.
Namely, for small $a\in\RR$ we consider the self-adjoint operator $\Lambda_a$ in $L^2(\Sigma,\CC^N)$ given by \[ \Lambda_a[g,g]=\int_\Sigma \Big[(1+ca)|\nabla g|^2 +\Big(H_2-\dfrac{H_1^2}{4}+ca\Big) |g|^2\Big]\dd s, \quad \qdom(\Lambda_a)=H^1(\Sigma,\CC^N), \] i.e. its sesquilinear form is given by the same expression as the one for $L_a$ in \eqref{lavv} but without the restriction $g\in\cH$. Now, if one identifies $L^2(\Pi_\delta,\CC^N)=L^2(0,\delta)\otimes L^2(\Sigma,\CC^N)$, then $Y_m=(S'+m^2)\otimes 1 + 1\otimes \Lambda_{-\delta}$. Using the unitary transform \begin{gather*} \Xi:L^2(\Pi_\delta,\CC^N)\to \ell^2(\NN)\otimes L^2(\Sigma,\CC^N),\\ \Xi v=(v_k), \quad v_k:=\int_0^\delta \psi_k(t)v(\cdot,t)\dd t\in L^2(\Sigma,\CC^N), \end{gather*} and the spectral theorem we see that the operator $\widehat Y_m:=\Xi Y_m\Xi^*$ is given by \[ \widehat Y_m\big[(v_k),(v_k)\big]=\sum_{k\in\NN} \Big( \Lambda_{-\delta}[v_k,v_k] +\big(E_k(S')+m^2\big)\|v_k\|^2_{L^2(\Sigma,\CC^N)}\Big), \] while the form domain $\qdom(\widehat Y_m)$ consists of all $(v_k)\in \ell^2(\NN)\otimes L^2(\Sigma,\CC^N)$ with $v_k\in H^1(\Sigma,\CC^N)$ such that the right-hand side of the preceding expression is finite. Using the two-sided estimate \eqref{conv06} we can rewrite \begin{multline} \qdom(\widehat Y_m)=\Big\{ (v_k)\in \ell^2(\NN)\otimes L^2(\Sigma,\CC^N):\ v_k\in H^1(\Sigma,\CC^N) \text{ for each $k\in \NN$}\\ \text{and } \sum_{k\in\NN} \Big(\|v_k\|^2_{H^1(\Sigma,\CC^N)}+k^2\|v_k\|^2_{L^2(\Sigma,\CC^N)}\Big)<\infty\Big\}. \label{qdomy0} \end{multline} As the sesquilinear form for $Y^-_m$ is simply the restriction of that for $Y_m$ on the functions $v$ with $v(\cdot,0)=\cB v(\cdot,0)$, for the operator $\widehat Y^-_m:=\Xi Y^-_m \Xi^*$ we have \begin{equation} \label{qdomy1} \qdom(\widehat Y^-_m)=\big\{ \widehat v=(v_k)\in \qdom(\widehat Y_m): (1-\cB) (\Xi^*\widehat v)(\cdot,0)=0\big\}.
\end{equation} Using the lower bounds \eqref{conv05} and \eqref{conv06} for $E_k(S')$, for all $\widehat v=(v_k)\in \qdom(\widehat Y^-_m)$ we obtain the inequality $\widehat Y^-_m[\widehat v,\widehat v]\ge w_m(\widehat v,\widehat v)$ with the sesquilinear form $w_m$ defined on $\dom (w_m):=\qdom(\widehat Y^-_m)$ by \begin{multline*} w_m(\widehat v,\widehat v):=\Lambda_{-\delta}[v_1,v_1]-b e^{-\delta|m|}\|v_1\|^2_{L^2(\Sigma,\CC^N)}\\ +\sum_{k\ge 2} \Big( \Lambda_{-\delta}[v_k,v_k] + (b^-k^2-b_0 +m^2)\|v_k\|^2_{L^2(\Sigma,\CC^N)}\Big). \end{multline*} It follows from representation \eqref{qdomy0} that the form $w_m$ is lower semibounded and from representation \eqref{qdomy1} that it is closed. Thus, it defines a self-adjoint operator $W_m$ in $\ell^2(\NN)\otimes L^2(\Sigma,\CC^N)$ with compact resolvent. For any $j\in\NN$ we have then \begin{equation} \label{conv07a} E_j(A_m^2)\ge E_j(Y^-_m)=E_j(\widehat Y^-_m)\ge E_j(W_m). \end{equation} We are now in the classical situation for the monotone convergence (Proposition~\ref{prop-mon}) to analyze the eigenvalues of $W_m$. Namely, consider the set \begin{equation} \label{conv08} \cQ_\infty:=\Big\{\widehat v=(v_k)\in \bigcap_{m<0} \qdom (W_m)\equiv \qdom(\widehat Y^-_m):\quad \sup_{m<0} W_m[\widehat v,\widehat v]<+\infty\Big\}. \end{equation} It is easily seen that a vector $\widehat v=(v_k)\in \qdom(\widehat Y^-_m)$ belongs to $\cQ_\infty$ if and only if $v_k=0$ for $k\ge 2$ and $0=(1-\cB) (\Xi^*\widehat v)(\cdot,0)\equiv\psi_1(0)(1-\cB)v_1 $, i.e. $v_1\in\cH$. This gives the equality \[ \cQ_\infty=\big\{ \widehat v=e_1\otimes v_1:\, v_1\in H^1(\Sigma,\CC^N)\cap \cH\big\}, \quad e_1=(1,0,0,\dots)\in \ell^2(\NN).
\] For each $\widehat v\in \cQ_\infty$ one has \begin{align*} \lim_{m\to-\infty} W_m[\widehat v,\widehat v]&=\lim_{m\to-\infty} \Big(\Lambda_{-\delta}[v_1,v_1]-b e^{-\delta|m|}\|v_1\|^2_{L^2(\Sigma,\CC^N)}\Big)\\ &=\Lambda_{-\delta}[v_1,v_1]\equiv L_{-\delta}[v_1,v_1]; \end{align*} we recall that $L_a$ was defined in \eqref{lavv}. Let $W_\infty$ be the self-adjoint operator in the Hilbert space $\cH_\infty:=e_1\otimes \cH$ with $\qdom(W_\infty)=\cQ_\infty$ and $W_\infty[e_1\otimes v_1,e_1\otimes v_1]=L_{-\delta}[v_1,v_1]$, then the monotone convergence principle (Proposition~\ref{prop-mon}) gives $\lim_{m\to-\infty} E_j(W_m)=E_j(W_\infty)$ for each $j\in\NN$. On the other hand, the operator $W_\infty$ is unitarily equivalent to $L_{-\delta}$, and by combining with \eqref{conv07a} we have $\liminf_{m\to-\infty}E_j(A_m^2)\ge E_j(L_{-\delta})$. As $\delta$ can be arbitrarily small, the convergence \eqref{conv03} implies $\liminf_{m\to-\infty}E_j(A_m^2)\ge E_j(L)$. In combination with the upper bound \eqref{conv04} one arrives at the sought limit \eqref{conv01}, which proves Theorem~\ref{thm1a}. \section{Proof of Theorem~\ref{thm2}}\label{sec-thm2} \subsection{Preliminary estimates} We are going to prove that for each $m\in\RR$ and $j\in\NN$ one has $\lim_{M\to+\infty}E_j(B_{m,M}^2)=E_j(A_m^2)$.
We recall that $\qdom(B_{m,M}^2)\equiv \dom(B_{m,M})=H^1(\RR^n,\CC^N)$, and \begin{equation} \label{qform3} \begin{aligned} B_{m,M}^2[u, u]&\equiv \langle B_{m,M} u, B_{m,M} u\rangle_{L^2(\RR^n,\CC^N)}\\ &=\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x + \int_{\Omega^c} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x\\ &\qquad+(M-m)\Big(\int_\Sigma | \cP_- u|^2\dd s -\int_\Sigma | \cP_+ u|^2\dd s\Big),\\ &=\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x + \int_{\Omega^c} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x\\ &\qquad+2(M-m)\int_\Sigma | \cP_- u|^2\dd s + (m-M)\int_\Sigma |u|^2\dd s \end{aligned} \end{equation} where $\cP_\pm(s):=\dfrac{1 \pm \cB(s)}{2}$ for $s\in\Sigma$, while \begin{gather*} \qdom(A_{m}^2)\equiv \dom(A_{m})=\big\{u\in H^1(\Omega,\CC^N):\ \cP_- u=0\text{ on }\Sigma \big\},\\ A_m^2[u,u]\equiv\langle A_m u, A_m u\rangle_{L^2(\Omega,\CC^N)} =\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x +\int_\Sigma\Big(m+ \dfrac{H_1}{2}\Big)\, |u|^2\dd s. \end{gather*} Taking any $\varepsilon\in\RR$ we rewrite the above expression for $B_{m,M}^2[u,u]$ as \begin{multline} B_{m,M}^2[u, u]=\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x\\ +\int_\Sigma \Big(m-\varepsilon+\dfrac{H_1}{2}\Big)|u|^2\dd s + 2(M-m)\int_\Sigma | \cP_- u|^2\dd s\\ +\int_{\Omega^c} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x -\int_\Sigma \Big(M-\varepsilon+\dfrac{H_1}{2}\Big)|u|^2\dd s. \label{eq-bmm1} \end{multline} Let us start with an additional estimate which will allow us to control the term in the last line of \eqref{eq-bmm1}. 
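In terms of the operator $R_\gamma$ introduced in Lemma~\ref{lem11} below, this term can be rewritten as \[ \int_{\Omega^c} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x -\int_\Sigma \Big(M-\varepsilon+\dfrac{H_1}{2}\Big)|u|^2\dd s =\big((R_{M-\varepsilon}\otimes 1)+M^2\big)[u,u], \] with $1$ being the identity in $\CC^N$, so its control reduces to a lower bound for the first eigenvalue of $R_\gamma$ with $\gamma=M-\varepsilon$.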
\begin{lemma}\label{lem11} For $\gamma>0$ let $R_\gamma$ be the self-adjoint operator in $L^2(\Omega^c)$ given by \begin{equation} R_\gamma[u,u]=\int_{\Omega^c} |\nabla u|^2 \dd x -\int_{\Sigma} \Big(\gamma+\dfrac{H_1}{2}\Big) |u|^2\dd s, \quad \qdom(R_\gamma)=H^1(\Omega^c), \end{equation} then: \begin{itemize} \item[(a)] For some fixed $C>0$ and all large $\gamma>0$ there exists a linear map $F_\gamma: H^1(\Sigma)\to H^1(\Omega^c)$ such that for all $f\in H^1(\Sigma)$ one has $F_\gamma f = f$ on $\Sigma$ and \[ R_\gamma[F_\gamma f,F_\gamma f]+\gamma^2\|F_\gamma f\|_{L^2(\Omega^c)}^2\le \dfrac{C}{\gamma}\|f\|^2_{H^1(\Sigma)}. \] \item[(b)] For some $C_0>0$ there holds $E_1(R_\gamma)\ge -\gamma^2-C_0$ for $\gamma\to+\infty$. \end{itemize} \end{lemma} \begin{proof} For a small $\delta>0$ consider the sets $\Omega^c_\delta:=\big\{x\in\Omega^c:\, \dist(x,\Sigma)<\delta\big\}$ and $\Pi_\delta:=\Sigma\times(0,\delta)$ together with the diffeomorphism $\Phi^c:\Pi_\delta\to \Omega^c_\delta$, $(s,t)\mapsto s+t\nu(s)$, and the associated unitary map $\Theta^c_\delta: L^2(\Omega^c_\delta)\to L^2(\Pi_\delta)$ with $\Theta^c_\delta u= \sqrt{\det \big((\Phi^c)'\big)}\,\, u\circ \Phi^c$. Let us prove (a). Consider the self-adjoint operator $S$ in $L^2(0,\delta)$ given by \[ S[f,f]=\int_{0}^\delta |f'|^2\dd t -\gamma\big|f(0)\big|^2, \quad \qdom(S)=\big\{f\in H^1(0,\delta):\, f(\delta)=0\big\} \] and let $\psi$ be an eigenfunction for the first eigenvalue normalized by $\psi(0)=1$. By Lemma~\ref{lem1dd}, with some $b>0$ one has $E_1(S)\le-\gamma^2+be^{-\delta\gamma}$ and $\|\psi\|^2_{L^2(0,\delta)}\le b/\gamma$ for large $\gamma$. For $f\in H^1(\Sigma)$ define $v\in H^1(\Pi_\delta)$ by $v=f\otimes\psi$, i.e. $v(s,t)=f(s) \psi(t)$, and then set \[ (F_\gamma f)(x):=\begin{cases} (\Theta^c_\delta)^{-1} v & \text{ in } \Omega^c_\delta,\\ 0 &\text{ in } \Omega^c\setminus \Omega^c_\delta.
\end{cases} \] Due to $f\in H^1(\Sigma)$ and $\psi(\delta)=0$ one has $F_\gamma f\in H^1(\Omega^c)$, and the equality $F_\gamma f|_\Sigma=v(\cdot,0)=f$ holds by construction. Furthermore, using the result and the notation of Lemma~\ref{lem7}(a) we obtain, with some $a>0$, \begin{multline*} R_\gamma[F_\gamma f,F_\gamma f] + \gamma^2 \|F_\gamma f\|^2=J_{-\gamma}(F_\gamma f)+ \gamma^2\|F_\gamma f\|^2\\ \begin{aligned} &\le \int_{\Pi_\delta}\Big( a|\nabla_s v|^2 + |\partial_t v|^2 + (\gamma^2+a)|v|^2\Big)\dd s\dd t -\gamma\int_\Sigma |F_\gamma f|^2\dd s\\ &=\Big(a\,\|\nabla_s f\|^2_{L^2(\Sigma)} + \big(E_1(S) +\gamma^2 +a\big) \|f\|^2_{L^2(\Sigma)}\Big)\, \|\psi\|^2_{L^2(0,\delta)}\\ &\le \Big(a\,\|\nabla_s f\|^2_{L^2(\Sigma)} + \big(be^{-\delta \gamma} +a\big) \|f\|^2_{L^2(\Sigma)}\Big) \, \dfrac{b}{\gamma}\le \dfrac{C}{\gamma} \|f\|^2_{H^1(\Sigma)} \end{aligned} \end{multline*} with $C:= b\big(b +a\big)$. Hence, the assertion (a) is proved. To prove (b) we remark first that due to the min-max principle one has the inequality $E_1(R_\gamma)\ge E_1(R^0_\gamma\oplus R'_\gamma)$ where $R^0_\gamma$ is the operator in $L^2(\Omega^c_\delta)$ given by \[ R^0_\gamma[u,u]=\int_{\Omega^c_\delta} |\nabla u|^2 \dd x -\int_{\Sigma} \Big(\gamma+\dfrac{H_1}{2}\Big) |u|^2\dd s, \quad \qdom(R^0_\gamma)=H^1(\Omega^c_\delta), \] and $R'_\gamma$ is the self-adjoint operator in $L^2(\Omega'_\delta)$, with $\Omega'_\delta:=\Omega^c\setminus \overline{\Omega^c_\delta}$, given by \[ R'_\gamma[u,u]=\int_{\Omega'_\delta} |\nabla u|^2 \dd x, \quad \qdom(R'_\gamma)=H^1(\Omega_\delta'). \] Due to $R'_\gamma\ge 0$ one has $E_1(R_\gamma)\ge \min\{E_1(R^0_\gamma),0\}$.
By Lemma~\ref{lem7}(b) one has $E_1(R^0_\gamma)\ge E_1(X_\gamma)$ with $X_\gamma$ being the self-adjoint operator in $L^2(\Pi_\delta)$ with \[ X_\gamma[v,v]= \int_{\Pi_\delta} \bigg[a' |\nabla_s v|^2 + |\partial_t v|^2 -a' |v|^2 \bigg]\dd s\dd t -\gamma\int_{\Sigma} \big|v(s,0)\big|^2\dd s-a'\int_{\Sigma} \big|v(s,\delta)\big|^2\dd s \] and $\qdom(X_\gamma)=H^1(\Pi_\delta)$, with some $a'>0$. Let $S'$ be the self-adjoint operator in $L^2(0,\delta)$ given by \[ S'[f,f]=\int_0^\delta |f'|^2\dd t -\gamma \big|f(0)\big|^2 - a' \big|f(\delta)\big|^2, \quad \qdom(S')=H^1(0,\delta). \] As $|\nabla_s v|^2\ge 0$, due to Fubini's theorem one has $E_1(X_\gamma)\ge E_1(S')-a'$, and now it is sufficient to remark that by Lemma~\ref{lem1dr} one has $E_1(S')\ge -\gamma^2-a_0$ with some $a_0>0$ as $\gamma\to+\infty$. \end{proof} \subsection{Upper bound} Pick $m\in\RR$ and $j\in\NN$, and let $u_1,\dots,u_j$ be linearly independent eigenfunctions of $A_m^2$ for the first $j$ eigenvalues, then for any function $u\in V:=\vspan(u_1,\dots,u_j)$ there holds $A_m^2[u,u]\le E_j(A_m^2)\|u\|^2_{L^2(\Omega,\CC^N)}$. Recall that due to Lemma~\ref{qfa} one has $V\subset C^\infty(\overline{\Omega},\CC^N)$, and then \[ a:=\sup\big\{ \|u\|^2_{H^1(\Sigma,\CC^N)}: u\in V \text{ with } \|u\|^2_{L^2(\Omega,\CC^N)}=1\big\}<\infty. \] Using the linear map $F_\gamma$ as in Lemma~\ref{lem11}(a), for $u\in V$ define $\widetilde u \in H^1(\RR^n,\CC^N)$ by \[ \widetilde u=\begin{cases} u & \text{ in } \Omega,\\ (F_M\otimes 1)(u|_\Sigma) & \text{ in } \Omega^c.
\end{cases} \] with $1$ understood as the identity operator in $\CC^N$, then for any $u\in V$ we have \begin{multline*} \int_{\Omega^c} \big(|\nabla \widetilde u|^2 +M^2|\widetilde u|^2\big)\dd x -\int_\Sigma \Big(M+\dfrac{H_1}{2}\Big)| \widetilde u|^2\dd s\\ \equiv \Big((R_M+M^2)\otimes 1\Big)[\widetilde u,\widetilde u]\le \dfrac{C}{M} \|u\|^2_{H^1(\Sigma,\CC^N)}\le \dfrac{C a}{M} \|u\|^2_{L^2(\Omega,\CC^N)} \end{multline*} with $C>0$ independent of $u$. Noting that for $u\in V$ we have $\cP_- u=0$ on $\Sigma$ and substituting the preceding upper bound into \eqref{eq-bmm1} with the choice $\varepsilon=0$ we arrive at \[ B_{m,M}^2[\widetilde u,\widetilde u]=A_m^2[u,u] +\big((R_M+M^2)\otimes 1\big)[\widetilde u,\widetilde u] \le \Big(E_j(A_m^2) + \dfrac{C a}{M}\Big)\|u\|^2_{L^2(\Omega,\CC^N)}. \] For $u\in V$ there holds $\|\widetilde u\|^2_{L^2(\RR^n,\CC^N)}\ge \|u\|^2_{L^2(\Omega,\CC^N)}$, and $\widetilde V:=\{\widetilde u:\, u\in V\}$ is therefore a $j$-dimensional subspace of $H^1(\RR^n,\CC^N)\equiv \qdom(B_{m,M}^2)$. The min-max principle gives \begin{multline*} E_j(B_{m,M}^2)\le \sup_{0\ne v\in \widetilde V} \dfrac{B_{m,M}^2[v,v]}{\|v\|^2_{L^2(\RR^n,\CC^N)}}=\sup_{0\ne u\in V} \dfrac{B_{m,M}^2[\widetilde u,\widetilde u]}{\|\widetilde u\|^2_{L^2(\RR^n,\CC^N)}}\\ \le \sup_{0\ne u\in V} \dfrac{\Big(E_j(A_m^2) + \dfrac{C a}{M}\Big)\|u\|^2_{L^2(\Omega,\CC^N)}}{\|\widetilde u\|^2_{L^2(\RR^n,\CC^N)}}\le E_j(A_m^2) + \dfrac{C a}{M}, \end{multline*} which implies $\limsup_{M\to+\infty}E_j(B_{m,M}^2)\le E_j(A_m^2)$. \subsection{Lower bound} Now we use the representation \eqref{eq-bmm1} with an arbitrary fixed $\varepsilon>0$.
By the min-max principle, for any $j\in\NN$ one has \begin{equation} \label{ejej0} E_j(B_{m,M}^2)\ge E_j(K_{m,M,\varepsilon} \oplus K^c_{M,\varepsilon}) \end{equation} where $K_{m,M,\varepsilon}$ is the self-adjoint operator in $L^2(\Omega,\CC^N)$ with the form domain $\qdom(K_{m,M,\varepsilon})=H^1(\Omega,\CC^N)$ and \begin{multline*} K_{m,M,\varepsilon}[u,u] =\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x\\ +\int_\Sigma \Big(m-\varepsilon+\dfrac{H_1}{2}\Big)|u|^2\dd s + 2(M-m)\int_\Sigma | \cP_- u|^2\dd s, \end{multline*} and $K^c_{M,\varepsilon}$ is the self-adjoint operator in $L^2(\Omega^c,\CC^N)$ with the form domain $\qdom(K^c_{M,\varepsilon})=H^1(\Omega^c,\CC^N)$ and \[ K^c_{M,\varepsilon}[u,u]=\int_{\Omega^c} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x -\int_\Sigma \Big(M-\varepsilon+\dfrac{H_1}{2}\Big)|u|^2\dd s. \] Using the operator $R_\gamma$ from Lemma~\ref{lem11} one easily sees that $K^c_{M,\varepsilon}=(R_{M-\varepsilon}\otimes 1)+M^2$ with $1$ being the identity in $\CC^N$, and then, using Lemma~\ref{lem11}(b), $E_1(K^c_{M,\varepsilon})=E_1(R_{M-\varepsilon})+M^2\ge \varepsilon M$ as $M$ is large. Due to the upper bound proved in the preceding subsection we know already that for each fixed $j\in\NN$ there holds $E_j(B_{m,M}^2)=\cO(1)$ for large $M$, hence, Eq.~\eqref{ejej0} implies \[ E_j(B_{m,M}^2)\ge \min\big\{E_j(K_{m,M,\varepsilon}), E_1(K^c_{M,\varepsilon})\big\}=E_j(K_{m,M,\varepsilon}) \text{ as } M\to+\infty. 
\] As the operators $K_{m,M,\varepsilon}$ are increasing with respect to $M$, with the help of the monotone convergence (Proposition~\ref{prop-mon}) for each $j\in\NN$ one obtains $\lim_{M\to+\infty} E_j(K_{m,M,\varepsilon})=E_j(C_{m,\varepsilon})$, where $C_{m,\varepsilon}$ is the self-adjoint operator in $L^2(\Omega,\CC^N)$ given by \begin{gather*} C_{m,\varepsilon}[u,u] =\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x +\int_\Sigma \Big(m-\varepsilon+\dfrac{H_1}{2}\Big)|u|^2\dd s,\\ \qdom(C_{m,\varepsilon})=\big\{ u\in H^1(\Omega,\CC^N):\, \cP_- u=0 \text{ on } \Sigma \big\}\equiv \qdom (A_m^2). \end{gather*} This shows that $\liminf_{M\to+\infty} E_j(B_{m,M}^2)\ge E_j(C_{m,\varepsilon})$. As $\varepsilon>0$ is arbitrary and we have the obvious limit $\lim_{\varepsilon\to 0} E_j(C_{m,\varepsilon})=E_j(C_{m,0})\equiv E_j(A_m^2)$, we arrive at the sought lower bound $\liminf_{M\to+\infty} E_j(B_{m,M}^2)\ge E_j(A_m^2)$, which finishes the proof. \section{Proof of Theorem~\ref{thm3}}\label{sec-thm3} We are going to show that for each $j\in\NN$ the eigenvalues $E_j(B_{m,M}^2)$ converge to $E_j(\Dsl^2)$ as $m\to -\infty$ and $M\to+\infty$ with $m/M\to 0$. Due to Lemma~\ref{lemld} for each $j\in\NN$ there holds $E_j(\Dsl^2)=E_j(L)$, hence, it is sufficient to prove that $E_j(B_{m,M}^2)$ converges to $E_j(L)$ in the same asymptotic regime. The proof essentially combines, in a new way, constructions used in the proofs of Theorems~\ref{thm1a} and~\ref{thm2}. \subsection{Upper bound} Let us recall the important technical ingredients. For small $\delta>0$ consider the sets $\Omega_\delta:=\{x\in\Omega:\, \dist(x,\Sigma)<\delta\}$ and $\Pi_\delta:=\Sigma\times(0,\delta)$ as well as the diffeomorphism $\Phi:\Pi_\delta\to \Omega_\delta$ given by $\Phi(s,t)=s-t\nu(s)$ and the associated unitary map $\Theta_\delta: L^2(\Omega_\delta,\CC^N)\to L^2(\Pi_\delta,\CC^N)$ with $\Theta_\delta u= \sqrt{\det(\Phi')}\,\, u\circ\Phi$.
Consider the self-adjoint operator $S$ in $L^2(0,\delta)$ with \[ S[f,f]=\int_{0}^\delta |f'|^2\dd t + m\big|f(0)\big|^2, \quad \qdom(S)=\big\{f\in H^1(0,\delta):\, f(\delta)=0\big\} \] and let $\psi$ be an eigenfunction for the first eigenvalue normalized by $\|\psi\|^2_{L^2(0,\delta)}=1$. By Lemma~\ref{lem1dd} with some $b>0$ one has \[ E_1(S)\le-m^2+be^{-\delta|m|}, \quad \big|\psi(0)\big|^2\le b |m|, \text{ as $(-m)$ is large}. \] Also recall that due to Lemma~\ref{lem7}(a) one can find $c>0$ such that for $\delta\in(0,\delta_0)$ and $u\in H^1(\Omega_\delta)$ with $u=0$ on $\partial \Omega_\delta\setminus\Sigma$ there holds, with $w:=\Theta_\delta u$, \begin{multline} \label{upp01} \int_{\Omega_\delta} |\nabla u|^2\dd x +\int_{\Sigma} \Big(m+\dfrac{H_1}{2}\Big) |u|^2\dd s\\ \le \int_{\Pi_\delta} \bigg[(1+c\delta) |\nabla_s w|^2 + |\partial_t w|^2 + \Big(H_2 -\dfrac{H_1^2}{4}+c\delta\Big) |w|^2 \bigg]\dd s\dd t\\ +m\int_{\Sigma} \big|w(s,0)\big|^2\dd s. \end{multline} We will use the representation \eqref{eq-bmm1} with $\varepsilon=0$, i.e. \begin{multline} B_{m,M}^2[u, u]=\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x +\int_\Sigma \Big(m+\dfrac{H_1}{2}\Big)|u|^2\dd s + 2(M-m)\int_\Sigma | \cP_- u|^2\dd s\\ + \int_{\Omega^c} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x -\int_\Sigma \Big(M+\dfrac{H_1}{2}\Big)|u|^2\dd s, \quad u\in H^1(\RR^n,\CC^N). \label{eq-bmm2} \end{multline} For small $a\in\RR$ consider the operator $L_a$ in $\cH$ given by \begin{equation} \label{lavv1} \begin{aligned} L_a[g,g]&=\int_\Sigma \Big[(1+ca)|\nabla g|^2 +\Big(H_2-\dfrac{H_1^2}{4}+ca\Big) |g|^2\Big]\dd s,\\ \qdom(L_a)&=H^1(\Sigma,\CC^N)\cap \cH.
\end{aligned} \end{equation} Finally, by Lemma~\ref{lem11} for large $M>0$ there exist $C>0$ and a linear extension map $F_M:H^1(\Sigma,\CC^N)\to H^1(\Omega^c,\CC^N)$ with $(F_M f) |_\Sigma=f$ and \[ \int_{\Omega^c} \big(|\nabla F_M f|^2 +M^2|F_M f|^2\big)\dd x -\int_\Sigma \Big(M+\dfrac{H_1}{2}\Big)|F_M f|^2\dd s\le \dfrac{C}{M}\, \|f\|^2_{H^1(\Sigma,\CC^N)} \] for all $f\in H^1(\Sigma,\CC^N)$. Let $j\in\NN$ and $v_1,\dots,v_j$ be linearly independent eigenfunctions of $L_\delta$ for the first $j$ eigenvalues, then for $v\in V:=\vspan(v_1,\dots,v_j)$ one has $L_\delta[v,v]\le E_j(L_\delta)\,\|v\|^2_\cH\equiv E_j(L_\delta)\,\|v\|^2_{L^2(\Sigma,\CC^N)}$. Denote \[ a_0:=\sup\big\{ \|v\|^2_{H^1(\Sigma,\CC^N)}:\, v\in V \text{ with } \|v\|^2_{\cH}=1\big\}<\infty. \] For $v\in V$ construct $u\in H^1(\RR^n,\CC^N)$ as follows: \[ u=\begin{cases} \Theta_\delta^{-1} (v\otimes \psi)& \text{ in } \Omega_\delta,\\ \psi(0)F_M v& \text{ in } \Omega^c,\\ 0& \text{ in } \Omega\setminus \Omega_\delta. \end{cases} \] By construction one has \[ \|u\|^2_{L^2(\RR^n,\CC^N)}\ge \|u\|^2_{L^2(\Omega_\delta,\CC^N)}=\|v\|^2_{L^2(\Sigma,\CC^N)} \|\psi\|^2_{L^2(0,\delta)}=\|v\|^2_{L^2(\Sigma,\CC^N)} \equiv \|v\|^2_\cH, \] hence, the subspace $U:=\{u:\, v\in V\}\subset H^1(\RR^n,\CC^N)$ is $j$-dimensional.
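As a side remark, the exponential closeness of $E_1(S)$ to $-m^2$ stated in Lemma~\ref{lem1dd} is easy to illustrate numerically. The sketch below (our own illustration, not used in the proof) assumes the strong realization of $S$ as $-f''$ with the boundary conditions $f'(0)=mf(0)$, $f(\delta)=0$; for $m<-1/\delta$ the negative ground state $E_1=-\kappa^2$ then solves $\kappa\coth(\kappa\delta)=|m|$, which we locate by bisection.

```python
import math

def ground_state_energy(m, delta):
    """First eigenvalue of -f'' on (0, delta) with f'(0) = m f(0), f(delta) = 0.
    For m < -1/delta the ground state is f(t) = sinh(kappa*(delta - t)) with
    E_1 = -kappa^2, where kappa solves kappa * coth(kappa * delta) = |m|."""
    g = lambda k: k / math.tanh(k * delta) - abs(m)
    lo, hi = 1e-9, abs(m)          # g(lo) < 0 for |m| > 1/delta, g(|m|) > 0
    for _ in range(200):           # plain bisection
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    kappa = 0.5 * (lo + hi)
    return -kappa**2

delta = 1.0
for m in (-5.0, -10.0, -20.0):
    E1 = ground_state_energy(m, delta)
    # E1 + m^2 is positive and exponentially small in delta*|m|
    print(m, E1, E1 + m**2)
```

The printed gap $E_1(S)+m^2$ shrinks like $e^{-\delta|m|}$ (up to a constant), in line with the lemma.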
By the above properties of $F_M$ and $\psi$ one has \begin{multline*} \int_{\Omega^c} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x -\int_\Sigma \Big(M+\dfrac{H_1}{2}\Big)|u|^2\dd s\\ =\big|\psi(0)\big|^2 \bigg(\int_{\Omega^c} \big(|\nabla F_M v|^2 +M^2|F_M v|^2\big)\dd x -\int_\Sigma \Big(M+\dfrac{H_1}{2}\Big)|F_M v|^2\dd s\bigg)\\ \le \big|\psi(0)\big|^2 \,\dfrac{C}{M}\, \|v\|^2_{H^1(\Sigma,\CC^N)}\le b|m| \, \dfrac{C}{M}\,a_0\|v\|^2_{L^2(\Sigma,\CC^N)}\equiv a_0bC\dfrac{|m|}{M}\,\|v\|^2_{\cH}, \end{multline*} and due to \eqref{upp01} there holds \begin{multline*} \int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x +\int_\Sigma \Big(m+\dfrac{H_1}{2}\Big)|u|^2\dd s + 2(M-m)\int_\Sigma | \cP_- u|^2\dd s\\ \begin{aligned} \equiv &\int_{\Omega_\delta} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x +\int_\Sigma \Big(m+\dfrac{H_1}{2}\Big)|u|^2\dd s\\ &\le \int_0^\delta\int_{\Sigma} \bigg[(1+c\delta) \big|\nabla_s (v\otimes\psi)\big|^2 + \big|\partial_t (v\otimes\psi)\big|^2\\ &\qquad + \Big(m^2+H_2 -\dfrac{H_1^2}{4}+c\delta\Big) \big|(v\otimes\psi)\big|^2 \bigg]\dd s\dd t +m\int_{\Sigma} \big|(v\otimes\psi)(s,0)\big|^2\dd s\\ &=\Big(\int_\Sigma \Big[(1+c\delta)|\nabla v|^2 +\Big(H_2-\dfrac{H_1^2}{4}+c\delta\Big) |v|^2\Big]\dd s\Big)\, \|\psi\|^2_{L^2(0,\delta)}\\ &\qquad+\Big( \int_{0}^\delta |\psi'|^2\dd t + m\big|\psi(0)\big|^2+m^2 \|\psi\|^2_{L^2(0,\delta)}\Big)\,\|v\|^2_{L^2(\Sigma,\CC^N)}\\ &=L_\delta[v,v] +\big(E_1(S) +m^2\big) \|v\|^2_\cH\le \big(E_j(L_\delta)+be^{-\delta|m|}\big)\|v\|^2_{\cH}. 
\end{aligned} \end{multline*} Inserting the preceding inequalities into the expression \eqref{eq-bmm2} for $B^2_{m,M}$ one sees that for all $u\in U$ there holds \[ B_{m,M}^2[u,u]\le \Big(E_j(L_\delta)+be^{-\delta|m|}+a_0bC\dfrac{|m|}{M}\Big)\|v\|^2_{\cH}, \quad \|u\|^2_{L^2(\RR^n,\CC^N)}\ge \|v\|^2_\cH, \] and the min-max principle gives \[ E_j(B_{m,M}^2)\le \max_{0\ne u\in U} \dfrac{B_{m,M}^2[u,u]}{\|u\|^2_{L^2(\RR^n,\CC^N)}} =\max_{0\ne v\in V} \dfrac{B_{m,M}^2[u,u]}{\|u\|^2_{L^2(\RR^n,\CC^N)}} \le E_j(L_\delta)+be^{-\delta|m|}+a_0bC\dfrac{|m|}{M}. \] Therefore, one has $\limsup_{m\to -\infty,\, m/M\to 0} E_j(B_{m,M}^2)\le E_j(L_\delta)$. As $\delta$ can be chosen arbitrarily small and $\lim_{\delta\to 0} E_j(L_\delta)=E_j(L_0)\equiv E_j(L)$ one arrives at \begin{equation} \label{eq-thm3a} \limsup_{m\to -\infty,\, m/M\to 0} E_j(B_{m,M}^2)\le E_j(L). \end{equation} \subsection{Lower bound} Now we will use the representation \eqref{eq-bmm1} with $\varepsilon=\varepsilon_0/|m|$ and an arbitrary but fixed $\varepsilon_0>0$, i.e. \begin{multline*} B_{m,M}^2[u, u]\\ =\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x +\int_\Sigma \Big(m- \dfrac{\varepsilon_0}{|m|}+\dfrac{H_1}{2}\Big)|u|^2\dd s + 2(M-m)\int_\Sigma | \cP_- u|^2\dd s\\ + \int_{\Omega^c} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x -\int_\Sigma \Big(M - \dfrac{\varepsilon_0}{|m|}+\dfrac{H_1}{2}\Big)|u|^2\dd s, \quad u\in H^1(\RR^n,\CC^N).
\end{multline*} Due to the min-max principle for any $j\in\NN$ one has \begin{equation} \label{ebm01} E_j(B_{m,M}^2)\ge E_j(K_{m,M}\oplus K^c_{m,M}), \end{equation} where $K_{m,M}$ is the self-adjoint operator in $L^2(\Omega,\CC^N)$ with the form domain given by $\qdom(K_{m,M})=H^1(\Omega,\CC^N)$ and \begin{multline*} K_{m,M}[u,u]=\int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x\\ +\int_\Sigma \Big(m- \dfrac{\varepsilon_0}{|m|}+\dfrac{H_1}{2}\Big)|u|^2\dd s + 2(M-m)\int_\Sigma | \cP_- u|^2\dd s \end{multline*} and $K^c_{m,M}$ is the self-adjoint operator in $L^2(\Omega^c,\CC^N)$ with $\qdom(K^c_{m,M})=H^1(\Omega^c,\CC^N)$ and \[ K^c_{m,M}[u,u]=\int_{\Omega^c} \big(|\nabla u|^2 +M^2|u|^2\big)\dd x -\int_\Sigma \Big(M - \dfrac{\varepsilon_0}{|m|}+\dfrac{H_1}{2}\Big)|u|^2\dd s. \] Using the operator $R_\gamma$ from Lemma~\ref{lem11} we see that in the asymptotic regime under consideration we have, with some $C_0>0$, \begin{multline*} E_1(K^c_{m,M})=E_1(R_{M-\varepsilon_0/|m|})+M^2\ge M^2-\Big( M- \dfrac{\varepsilon_0}{|m|}\Big)^2-C_0\\ =2\varepsilon_0 \dfrac{M}{|m|}-\dfrac{\varepsilon_0^2}{m^2}-C_0\to+\infty. \end{multline*} As we already have the upper bound $E_j(B_{m,M}^2)=\cO(1)$, it follows from \eqref{ebm01} that $E_j(B_{m,M}^2)\ge E_j(K_{m,M})$. One can assume in addition that $M\ge 0$ and $m\le 0$, then $2(M-m)\ge -2m= 2|m|$, which implies \begin{equation} \label{ebm02} E_j(B_{m,M}^2)\ge E_j(K_m), \end{equation} with $K_m$ being the self-adjoint operator in $L^2(\Omega,\CC^N)$ with $\qdom(K_m)=H^1(\Omega,\CC^N)$ and \[ K_m[u,u]= \int_{\Omega} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x +\int_\Sigma \Big(m- \dfrac{\varepsilon_0}{|m|}+\dfrac{H_1}{2}\Big)|u|^2\dd s + 2|m|\int_\Sigma | \cP_- u|^2\dd s.
\] In order to obtain a lower bound for the eigenvalues of $K_m$ we take a small $\delta>0$ and consider the domains $\Omega_\delta=\big\{x\in\Omega:\, \dist(x,\Sigma)<\delta\big\}$ and $\Omega^c_\delta:=\Omega\setminus\overline{\Omega_\delta}$, then due to the min-max principle one has \begin{equation} \label{ekm03} E_j(K_m)\ge E_j(K'_m\oplus K''_m), \end{equation} where $K'_m$ is the self-adjoint operator in $L^2(\Omega_\delta,\CC^N)$ with the form domain $\qdom(K_m')=H^1(\Omega_\delta,\CC^N)$ and \[ K'_m[u,u]=\int_{\Omega_\delta} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x +\int_\Sigma \Big(m- \dfrac{\varepsilon_0}{|m|}+\dfrac{H_1}{2}\Big)|u|^2\dd s + |m|\int_\Sigma | \cP_- u|^2\dd s, \] while $K''_m$ is the self-adjoint operator in $L^2(\Omega_\delta^c,\CC^N)$ with \[ \qdom(K''_m)=H^1(\Omega^c_\delta,\CC^N), \quad K''_m[u,u]=\int_{\Omega^c_\delta} \big(|\nabla u|^2 +m^2|u|^2\big)\dd x, \] and $E_1(K''_m)\ge m^2\to +\infty$. By combining \eqref{ebm02} and~\eqref{ekm03} one sees that $E_j(B_{m,M}^2)\ge E_j(K'_m\oplus K''_m)$. As we have already proved the upper bound $E_j(B^2_{m,M})=\cO(1)$, it follows that \begin{equation} \label{ebm04} E_j(B_{m,M}^2)\ge E_j(K_m').
\end{equation} Using now the diffeomorphism \[ \Phi:\Pi_\delta\to \Omega_\delta, \quad \Pi_\delta:=\Sigma\times(0,\delta), \quad \Phi(s,t)= s-t\nu(s), \] and the unitary maps $\Theta_\delta: L^2(\Omega_\delta,\CC^N)\to L^2(\Pi_\delta,\CC^N)$, $\Theta_\delta u= \sqrt{\det (\Phi')}\,\, u\circ\Phi$, with the help of Lemma~\ref{lem7}(b) one obtains $E_j(K'_m)=E_j(\Theta_\delta^* K'_m\Theta_\delta)\ge E_j(K^0_m)$ with $K^0_m$ being the self-adjoint operator in $L^2(\Pi_\delta,\CC^N)$ given by \begin{multline} \label{ekk0} K^0_m[v,v]=\int_{\Pi_\delta} \bigg[(1-c\delta) |\nabla_s v|^2 + |\partial_t v|^2 + \Big(m^2+H_2 -\dfrac{H_1^2}{4}-c\delta\Big) |v|^2 \bigg]\dd s\dd t\\ +\Big(m-\dfrac{\varepsilon_0}{|m|}\Big)\int_\Sigma \big|v(s,0)\big|^2\dd s-c\int_\Sigma \big|v(s,\delta)\big|^2\dd s +|m|\int_\Sigma \big|\cP_- v(s,0)\big|^2\dd s \end{multline} on the form domain $\qdom(K^0_m)=H^1(\Pi_\delta,\CC^N)$, where $c>0$ is chosen independent of $\delta$ and $v$. With this choice of $c$, let $S'$ be the self-adjoint operator in $L^2(0,\delta)$ with \[ S'[f,f]=\int_0^\delta |f'|^2\dd t +\Big(m-\dfrac{\varepsilon_0}{|m|}\Big) \big|f(0)\big|^2 - c \big|f(\delta)\big|^2, \quad \qdom (S')=H^1(0,\delta), \] and let $\psi_k\in L^2(0,\delta)$, $k\in\NN$, be its eigenfunctions for the eigenvalues $E_k(S')$ forming an orthonormal basis in $L^2(0,\delta)$.
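The spectral behavior of the one-dimensional operator $S'$ invoked below can be checked numerically. The following finite-difference sketch (an illustration with parameter choices of our own, $m=-10$, $\varepsilon_0=1$, $c=1$, $\delta=1$; it plays no role in the argument) discretizes the quadratic form of $S'$ with a lumped trapezoidal mass matrix and computes its lowest eigenvalues.

```python
import numpy as np

def robin_spectrum(a0, a1, delta=1.0, n=800, k=6):
    """Lowest k finite-difference eigenvalues of the quadratic form
       f -> int_0^delta |f'|^2 dt + a0 |f(0)|^2 + a1 |f(delta)|^2
    on H^1(0, delta), via a generalized eigenproblem with a lumped
    (trapezoidal) diagonal mass matrix."""
    h = delta / n
    K = np.zeros((n + 1, n + 1))
    for i in range(n):                        # assemble sum |f_{i+1}-f_i|^2 / h
        K[i, i] += 1.0 / h
        K[i + 1, i + 1] += 1.0 / h
        K[i, i + 1] -= 1.0 / h
        K[i + 1, i] -= 1.0 / h
    K[0, 0] += a0                             # Robin-type boundary terms
    K[n, n] += a1
    mass = np.full(n + 1, h)                  # trapezoidal weights
    mass[0] = mass[n] = h / 2.0
    d = 1.0 / np.sqrt(mass)                   # diagonal mass: reduce to a
    ev = np.linalg.eigvalsh(K * d[:, None] * d[None, :])  # standard problem
    return ev[:k]

# m = -10, eps0 = 1, c = 1: boundary coefficient gamma = |m| + eps0/|m| = 10.1
ev = robin_spectrum(-10.1, -1.0)
print(ev)   # ev[0] is close to -gamma^2; the higher eigenvalues grow like k^2
```

The lowest eigenvalue sits exponentially close to $-\gamma^2$ with $\gamma=|m|+\varepsilon_0/|m|$, and the remaining eigenvalues are bounded below and grow quadratically in $k$, as in the bounds used below.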
Due to Lemma~\ref{lem1dr} we have, with some $b^\pm>0$, $b>0$ and $b_0>0$, \begin{gather} \label{conv05a} E_1(S')\ge -\Big(|m|+\dfrac{\varepsilon_0}{|m|}\Big)^2-b e^{-\delta|m|}\ge -m^2-3\varepsilon_0 \text{ as } m\to-\infty, \\ \label{conv06a} b^- k^2- b_0\le E_k(S')\le b^+ k^2 \text{ for all $k\ge 2$ and $m\in \RR$.} \end{gather} For small $a\in\RR$, in addition to the operator $L_a$ in $\cH$ defined in \eqref{lavv1} we consider the self-adjoint operator $\Lambda_a$ in $L^2(\Sigma,\CC^N)$ given by \[ \Lambda_a[g,g]=\int_\Sigma \Big[(1+ca)|\nabla g|^2 +\Big(H_2-\dfrac{H_1^2}{4}+ca\Big) |g|^2\Big]\dd s, \quad \qdom(\Lambda_a)=H^1(\Sigma,\CC^N). \] Let $K^1_m$ be the self-adjoint operator in $L^2(\Pi_\delta,\CC^N)$ having the same form domain as $K^0_m$ and with the sesquilinear form obtained from the one of $K^0_m$ by omitting the last summand in \eqref{ekk0}; then $K^1_m$ admits a separation of variables: using the identification $L^2(\Pi_\delta,\CC^N)\simeq L^2(0,\delta)\otimes L^2(\Sigma,\CC^N)$ one has $K^1_m=(S'+m^2)\otimes 1+1\otimes\Lambda_{-\delta}$. Using the unitary transform \[ \Theta:L^2(0,\delta)\to \ell^2(\NN), \quad (\Theta f)_k=\langle \psi_k,f\rangle_{L^2(0,\delta)}, \quad k\in\NN, \] the identification $L^2(\Pi_\delta,\CC^N)\simeq L^2(0,\delta)\otimes L^2(\Sigma,\CC^N)$ and another unitary transform \begin{gather*} \Xi:=\Theta\otimes 1:L^2(\Pi_\delta,\CC^N)\to \ell^2(\NN)\otimes L^2(\Sigma,\CC^N),\\ \Xi v=(v_k)=:\widehat v, \quad v_k:=\int_0^\delta \psi_k(t)\,v(t,\cdot)\dd t\in L^2(\Sigma,\CC^N), \end{gather*} for the self-adjoint operator $\widehat K^1_m:=\Xi K^1_m \Xi^*$ in $\ell^2(\NN)\otimes L^2(\Sigma,\CC^N)$ one has \[ \widehat K^1_m[\widehat v,\widehat v]= \sum_{k\in\NN} \Big( \Lambda_{-\delta}[v_k,v_k] +\big(E_k(S')+m^2\big)\|v_k\|^2_{L^2(\Sigma,\CC^N)}\Big), \] while $\qdom(\widehat K^1_m)$ consists of all $\widehat v\in \ell^2(\NN)\otimes L^2(\Sigma,\CC^N)$ with $v_k\in H^1(\Sigma,\CC^N)$ such that the right-hand side of the preceding expression is finite.
Using the two-sided estimate \eqref{conv06a} one can rewrite \begin{multline} \qdom(\widehat K^1_m)=\Big\{ \widehat v=(v_k)\in \ell^2(\NN)\otimes L^2(\Sigma,\CC^N):\, v_k\in H^1(\Sigma,\CC^N) \text{ for each $k\in \NN$}\\ \text{and } \sum_{k\in\NN} \Big(\|v_k\|^2_{H^1(\Sigma,\CC^N)}+k^2\|v_k\|^2_{L^2(\Sigma,\CC^N)}\Big)<\infty\Big\}. \label{qdomy0a} \end{multline} For the operator $\widehat K^0_m:=\Xi K^0_m \Xi^*$ one has the same form domain and \[ \widehat K^0_m[\widehat v,\widehat v]= \sum_{k\in\NN} \Big( \Lambda_{-\delta}[v_k,v_k] +\big(E_k(S')+m^2\big)\|v_k\|^2_{L^2(\Sigma,\CC^N)}\Big) +|m|\int_\Sigma \big|\cP_- \Xi^* \widehat v(\cdot,0)\big|^2\dd s. \] Using the lower bounds \eqref{conv05a} and \eqref{conv06a} for $E_k(S')$, for any $\widehat v\in \qdom(\widehat K^0_m)$ we obtain the inequality $\widehat K^0_m[\widehat v,\widehat v]\ge w_m(\widehat v,\widehat v)$ with the sesquilinear form $w_m$ in $\ell^2(\NN)\otimes L^2(\Sigma,\CC^N)$ defined on $\dom (w_m):=\qdom(\widehat K^0_m)$ by \begin{multline*} w_m(\widehat v,\widehat v):=\Lambda_{-\delta}[v_1,v_1]-3\varepsilon_0\|v_1\|^2_{L^2(\Sigma,\CC^N)}\\ +\sum_{k\ge 2} \Big( \Lambda_{-\delta}[v_k,v_k] + (b^-k^2-b_0 +m^2)\|v_k\|^2_{L^2(\Sigma,\CC^N)}\Big) + |m|\int_\Sigma \big|\cP_-\Xi^* \widehat v(\cdot,0)\big|^2\dd s. \end{multline*} Using the above representation \eqref{qdomy0a} one sees that the form $w_m$ is lower semibounded and closed, hence it generates a self-adjoint operator $W_m$ in $\ell^2(\NN)\otimes L^2(\Sigma,\CC^N)$ with compact resolvent, and then $E_j(\widehat K^0_m)\ge E_j(W_m)$ for all $j\in\NN$. By summarizing all the preceding constructions, for any $j\in\NN$ in the asymptotic regime under consideration one has \begin{equation} \label{conv07b} E_j(B_{m,M}^2)\ge E_j(W_m). \end{equation} For the analysis of the eigenvalues of $W_m$ as $m\to-\infty$ we are now in the classical situation for the monotone convergence (Proposition~\ref{prop-mon}), as $W_m$ are increasing with respect to $|m|$.
Namely, consider the set \[ \cQ_\infty:=\Big\{\widehat v=(v_k)\in \bigcap_{m<0} \qdom (W_m)\equiv \qdom(\widehat K^0_m):\, \sup_{m<0} W_m[\widehat v,\widehat v]<+\infty\Big\}, \] then a vector $\widehat v=(v_k)\in \qdom(\widehat K^0_m)$ belongs to $\cQ_\infty$ iff the following two conditions are satisfied: (i) $v_k=0$ for all $k\ge 2$ and (ii) $\cP_- \Xi^* \widehat v(\cdot,0)=0$. The condition (i) gives $\widehat v=e_1\otimes v_1$ with $e_1=(1,0,0,\dots)\in\ell^2(\NN)$, and then the condition (ii) reduces to $\cP_- v_1=0$, i.e. $v_1\in\cH$. Therefore, \[ \cQ_\infty=\big\{e_1\otimes v_1: \, v_1\in H^1(\Sigma,\CC^N)\cap\cH\big\}. \] Moreover, for any $e_1\otimes v_1\in\cQ_\infty$ one has \[ \lim_{m\to-\infty} W_m[e_1\otimes v_1,e_1\otimes v_1]=L_{-\delta}[v_1,v_1] -3\varepsilon_0\|v_1\|^2_{\cH}, \] where we recall that $L_{-\delta}$ is defined as in \eqref{lavv1}. Therefore, if one denotes by $W_\infty$ the self-adjoint operator in $e_1\otimes\cH$ given by \[ W_\infty[e_1\otimes v_1,e_1\otimes v_1]=L_{-\delta}[v_1,v_1] -3\varepsilon_0\|v_1\|^2_{\cH}, \] then it follows by the monotone convergence (Proposition~\ref{prop-mon}) that for each $j\in\NN$ there holds $\lim_{m\to-\infty}E_j(W_m)=E_j(W_\infty)\equiv E_j(L_{-\delta})-3\varepsilon_0$. By \eqref{conv07b} one has $\liminf_{M\to+\infty,\,m\to-\infty,\,m/M\to 0} E_j(B_{m,M}^2)\ge E_j(L_{-\delta})-3\varepsilon_0$. As both $\delta$ and $\varepsilon_0$ can be chosen arbitrarily small and we have the convergence $\lim_{a\to0}E_j(L_a)=E_j(L)$, we arrive at the inequality $\liminf_{M\to+\infty,\,m\to-\infty,\,m/M\to 0} E_j(B_{m,M}^2)\ge E_j(L)$. By combining it with the upper bound \eqref{eq-thm3a} we arrive at the result of Theorem~\ref{thm3}.
\section{Introduction} It is generally believed that white holes, the T-symmetric analogs of black holes, become unstable when they start interacting with external matter, see the papers by Eardley~\cite{Eardley}, Ori and Poisson~\cite{OriPoisson}, Barceló et al.~\cite{151100633}. The simplest way to reach this conclusion is to consider a model of a white hole with outgoing and ingoing flows of matter, represented as null shells. Problems with stability begin at the moment when the null shell representing the flow of matter ejected by the white hole meets the null shell representing the flow of matter falling onto the white hole from outside. When these shells cross in the vicinity of the white hole's particle horizon, their energy is redistributed. As a result, the ingoing shell receives the greater part of the energy, while only a negligible part of the initial energy of the outgoing shell comes through to the outside. Finally, the ingoing shell goes under its own event horizon, thereby turning the white hole into a black hole that emits virtually nothing to outer space. In this paper we are going to reconsider some aspects of this model. First of all, modeling the external matter, which usually represents relic radiation or scattered starlight, as a collapsing shell can be questioned. The boundary condition in the form of a collapsing shell clearly violates T-symmetry. If we consider the T-conjugate boundary condition, an expanding shell, then stability problems appear not for white but for black holes. Instead, if we consider the external matter as a photon gas or null fluid, it will be T-symmetric. In a photon gas, for any group of photons from which a collapsing shell can be formed, there is a group of photons that makes up an expanding shell. A collapsing null shell also violates spatial isotropy and homogeneity.
The photons in the shell move toward the center even at great distances, where the action of the central mass does not manifest itself. That is, this motion is not associated with the gravitational attraction of the central mass, but is due to a special choice of initial conditions. Also, a collapsing null shell has no tangential pressure, only a radial one. In a photon gas, the distribution of photons at any point is isotropic and there are both radial and tangential pressure components. The collapsing shell preserves its total energy, and as its area shrinks with decreasing radius, the energy density increases inversely with the square of the radius. Further, if the photons are directed exactly at the center, the shell, as a result of its own evolution and even in the absence of a central mass, will go under the event horizon and form a black hole. On the other hand, a rarefied photon gas is approximately homogeneous, with the exception of small thermal fluctuations, and spontaneous production of black holes does not occur there. Another possible extension of the model is the introduction of a negative mass. Such a modification, of course, contradicts the energy conditions usually imposed on solutions of general relativity. On the other hand, the energy conditions have been repeatedly criticized, see the paper by Barceló and Visser~\cite{ec-crit} and references therein. Negative masses are required in many interesting astrophysical models, such as wormholes, in the various versions presented by Visser in his book~\cite{Visser-wrmh-book}, and warp drives, introduced by Alcubierre~\cite{Alcubierre-warp-drive}. The fundamental work on the interaction of null shells by Dray and 't~Hooft~\cite{Dray-tHooft} also considered cases with negative mass. In our previous work~\cite{static-rdm}, a T-symmetric stationary scenario with radial matter flows was studied. These flows can be represented as a continuous sequence of ingoing and outgoing shells.
At the center of the model there is a singularity of ultraviolet type, possessing negative mass. In our other work~\cite{wrmh-rdm}, the negative mass in the model was distributed in space, leading to the opening of a wormhole at the center. T-symmetry and stationarity, characteristic of these models, give them the properties of both white and black holes: they absorb matter from the outer universe, throw it out, and remain in dynamic equilibrium with it. In this paper, we will continue to study T-symmetric stationary scenarios in the model of white and black holes interacting with external matter flows. In Sec.\ref{sec2}, we will show that the introduction of a negative mass allows one to stabilize the model of a white hole with a collapsing null shell. As a result, the white hole can throw out an arbitrary fraction of its mass, depending on the choice of model parameters. In particular, it can throw out an amount of energy equal to the energy of the collapsing shell, thereby realizing a T-symmetric scenario. In Sec.\ref{sec3}, radial matter flows will be considered in the T-symmetric stationary scenario previously investigated in \cite{static-rdm}. We will calculate the Misner-Sharp mass in this model and show that at the center of the system there is a core of negative mass. In Sec.\ref{sec4}, a photon gas described by the TOV equation will be considered, and T-symmetric stationary solutions possessing a negative central mass will be investigated. \section{Interaction of white hole with null shell}\label{sec2} This scenario is shown in Fig.\ref{f1}. Two null shells cross each other and divide the spacetime into four regions, marked ABCD. In each of these four regions a vacuum solution with its own mass is established: \begin{equation} m_A=M-E,\ m_B=M-dm,\ m_C=m_0,\ m_D=M.
\end{equation} Here we follow the notation of Ori and Poisson \cite{OriPoisson}: $M$ -- the mass of a white hole with a collapsing shell, $dm$ -- the mass of the collapsing shell, $E$ -- the energy of the outgoing shell measured at infinity. The difference is that for region C we consider an arbitrary mass $m_0$ remaining at the center of the white hole after the radiation of the outgoing shell, while Ori and Poisson considered the case of zero mass and flat spacetime in region C, Fig.\ref{f1}a. We also allow this mass to take both positive and negative values. For negative mass, Fig.\ref{f1}b, spacetime in region C has the form of the Schwarzschild solution with a naked timelike singularity. For positive mass, Fig.\ref{f1}c, the global structure of spacetime has the form of the Kruskal-Szekeres maximal extension of the Schwarzschild solution, the eternal black hole. Note that in \cite{OriPoisson} a different global structure was used, linking the vacuum solutions with a cosmological model. As we will see, this does not affect the formulae expressing the energetic characteristics of the white hole. Next, we use the Dray-'t~Hooft-Redmount (DTR) relation \cite{Dray-tHooft} \begin{equation} f_Af_B=f_Cf_D,\ f_i=1-2m_i/R,\ i=A,B,C,D. \end{equation} Here $R$ is the radius at the collision point, and the system of units $G=c=1$ is used. Solving this equation for $E$ and introducing new variables \begin{equation} \xi=R/(2M)-1,\ \alpha=dm/M,\ \beta=m_0/M,\ \eta=E/M, \end{equation} we have the expression for the efficiency of the white hole explosion: \begin{equation} \eta=(1 - \alpha - \beta)\,\xi/(\alpha + \xi). \end{equation} At $\beta=0$ it is identical to (2.3) in \cite{OriPoisson}. Note that at $\beta=0$ and $0< \xi \ll \alpha \ll 1$ we have $\eta\sim\xi/\alpha \ll 1$, i.e., the closer the collision of the null shells is to the particle horizon, the smaller the fraction of the white hole's mass radiated to infinity.
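The algebra leading from the DTR relation to the efficiency formula is easy to verify numerically. The following sketch (a consistency check of ours, in units $M=1$, $G=c=1$; the function names are not from \cite{OriPoisson}) solves the DTR relation for $f_A$ and compares the result with the closed form.

```python
def eta_from_dtr(alpha, beta, xi):
    """Radiated fraction eta = E/M obtained by solving the DTR relation
    f_A f_B = f_C f_D for the unknown mass m_A = M - E.
    Units: M = 1, G = c = 1; collision radius R = 2(1 + xi)."""
    R = 2.0 * (1.0 + xi)
    f_B = 1.0 - 2.0 * (1.0 - alpha) / R   # m_B = M - dm
    f_C = 1.0 - 2.0 * beta / R            # m_C = m_0
    f_D = 1.0 - 2.0 / R                   # m_D = M
    f_A = f_C * f_D / f_B                 # DTR relation solved for f_A
    return 1.0 - (1.0 - f_A) * R / 2.0    # invert f_A = 1 - 2(1 - eta)/R

def eta_closed(alpha, beta, xi):
    """Closed form from the text: eta = (1 - alpha - beta) xi / (alpha + xi)."""
    return (1.0 - alpha - beta) * xi / (alpha + xi)

for alpha, beta, xi in [(0.01, 0.0, 0.01), (0.01, -5.0, 1e-3), (0.1, 0.2, 0.05)]:
    print(eta_from_dtr(alpha, beta, xi), eta_closed(alpha, beta, xi))  # agree
```

Both routes give the same value for any admissible parameters, positive or negative $\beta$ alike.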
Further, let us consider the evolution of the null shell: $\xi=\xi_0\exp(-t/(2M))$, where $t$ is the time of a distant observer, see, e.g., (25.30) in Blau, Lecture Notes on General Relativity~\cite{Blau}. Summing the time necessary for the ingoing shell to reach the collision point and the time necessary for the outgoing shell to escape from the collision point back to the starting distance, the time is doubled, $\tau=2t$, and we have \begin{equation} \xi=\xi_0\exp(-\tau/(4M)).\label{xitau} \end{equation} Here $\xi_0$ represents the starting distance relative to the Schwarzschild radius. It should be sufficiently small to keep the approximation under (25.30) \cite{Blau} valid; however, it has only a gauge meaning and little influence on the final result. The total time necessary for the null shell to come from a large distance to the collision point is the sum of (i) the time from a large distance to the $\xi_0$-vicinity of the particle horizon, which is approximately proportional to the starting distance and controlled by it, and (ii) the time of waiting at the particle horizon, which the distant observer will measure for the propagation of the null shell from $\xi_0$ to the collision point $\xi$, located even closer to the particle horizon. For practically relevant settings of the problem, this latter time prevails; moreover, at $\xi\to0$ the term $\log\xi$ dominates over $\log\xi_0$ in the expression $\tau=-4M(\log\xi-\log\xi_0)$. These formulae are sufficient to reproduce the estimates for the efficiency of the white hole explosion obtained by Eardley \cite{Eardley} and Ori and Poisson \cite{OriPoisson}. Indeed, fixing $\beta=0$ and considering the limit $0<\xi,\alpha \ll1$, we have $\eta=(1+\alpha/\xi)^{-1}$, identical to (2.4) in \cite{OriPoisson}. Further, using (\ref{xitau}), we see that $\xi=\alpha$, i.e., $\eta=0.5$, is reached at $\alpha=\xi_0\exp(-\tau_d/(4M))$, i.e., $\tau_d=-4M\log(\alpha/\xi_0)$.
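The interplay between \eqref{xitau} and the delay time $\tau_d$ can be checked directly (a sketch of ours in units $G=c=1$, $M=1$, in the limiting regime $\beta=0$, $0<\xi,\alpha\ll1$):

```python
import math

def xi_of_tau(tau, M, xi0):
    """Shell position relative to the horizon: xi = xi0 * exp(-tau/(4M))."""
    return xi0 * math.exp(-tau / (4.0 * M))

def eta_limit(xi, alpha):
    """Limiting efficiency for beta = 0 and 0 < xi, alpha << 1: (1 + alpha/xi)^(-1)."""
    return 1.0 / (1.0 + alpha / xi)

M, alpha, xi0 = 1.0, 0.01, 0.1
tau_d = -4.0 * M * math.log(alpha / xi0)            # delay time: xi(tau_d) = alpha
print(eta_limit(xi_of_tau(tau_d, M, xi0), alpha))   # efficiency 0.5 at tau = tau_d
```

At $\tau=\tau_d$ the shell position equals $\alpha$ and the efficiency is exactly one half, as stated.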
Thus, we have $\eta=(1+(\alpha/\xi_0)\exp(\tau/(4M)))^{-1}=(1+\exp((\tau-\tau_d)/(4M)))^{-1}$, identical to (3.15) in \cite{OriPoisson}. Further, considering small $\alpha$ and moderate $\xi_0$, we obtain the estimate $\tau_d\sim4M\log(M/dm)$, as in \cite{Eardley}. The main result of \cite{Eardley} and \cite{OriPoisson} is that, considering times of the order of the age of the universe and white hole masses much smaller than the mass of the observable universe, one will have large values $\tau/(4M)\gg1$. This leads to exponentially small values of $\xi$ and $\eta$. Numerically, they are extremely small indeed, of the order $\xi,\eta\sim\exp(-10^{5})$ in \cite{OriPoisson}. This estimate corresponds to distances much smaller than the Planck length or any other lower limit on distances used in physics. The reason is the exponential evolution $\xi\sim\exp(-\tau/(4M))$, producing such small numbers for large $\tau$. Evidently, this estimate is an extrapolation into the range of small distances, while the main result is qualitative: the radiated energy is proportional to $\xi$ and is extremely small. If $\xi$ is cut off at the Planck length, the efficiency will still be enormously small. If negative mass values are allowed, new opportunities open up. For $\beta<0$ and $0< \xi \ll \alpha \ll 1$ the value $\eta\sim(1-\beta)\,\xi/\alpha$ can be $\sim1$ if $\beta\sim -\alpha/\xi$. Considering the exact formulae, we see that $\eta=0.5$, i.e., a half-efficient white hole explosion with $E=M/2$, can be reached at \begin{equation} \beta=-(\alpha - \xi + 2 \alpha \xi)/(2 \xi), \end{equation} while $\eta=\alpha$, the T-symmetric case $E=dm$, can be reached at \begin{equation} \beta = -(\alpha^2 - \xi + 2 \alpha \xi)/\xi. \end{equation} The reason for the appearance of these solutions is that the white hole can emit energy greater than its original mass, leaving a negative mass behind.
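Both expressions for $\beta$ can be checked against the exact efficiency formula (a quick numerical check of ours; the variable names are not from the references):

```python
def eta(alpha, beta, xi):
    """Exact efficiency: eta = (1 - alpha - beta) xi / (alpha + xi)."""
    return (1.0 - alpha - beta) * xi / (alpha + xi)

alpha, xi = 0.01, 1e-5
beta_half = -(alpha - xi + 2.0 * alpha * xi) / (2.0 * xi)   # should give eta = 1/2
beta_tsym = -(alpha**2 - xi + 2.0 * alpha * xi) / xi        # should give eta = alpha
print(eta(alpha, beta_half, xi), eta(alpha, beta_tsym, xi))
```

Substituting either $\beta$ into the exact formula indeed returns $\eta=1/2$ and $\eta=\alpha$, respectively, for any $\alpha$ and $\xi$.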
The greater part of this energy will return and restore this mass to a positive value. At least, such behavior is encoded in the scenario of Fig.\ref{f1}b, where the timelike singularity is linked with two spacelike ones. Further, only a small part of the initial outgoing energy comes through. The point is that the initial outgoing energy can be made arbitrarily large, so that large values of the final outgoing energy in absolute units can also be obtained. Formally, let's define a new efficiency $\eta_2=E/(M-dm-m_0)$, the final energy of the outgoing shell relative to the initial energy of the outgoing shell, rather than to the total mass of the system. After simplifications we have \begin{equation} \eta_2=\xi/(\alpha+\xi). \end{equation} It is interesting that this expression coincides with (2.4) of \cite{OriPoisson}; however, the efficiency there is $E/M$ and the expression appears only in the limit $0<\xi,\alpha \ll1$, while here it is the new type of efficiency $\eta_2$ and the relation is exact. Remarkably, it does not depend on $\beta$, i.e., on $m_0$. Therefore, at $0<\xi\ll\alpha\ll1$ the value $\eta_2\sim\xi/\alpha$ can be extremely small, but to obtain $E$ in absolute units it is multiplied by $(M-dm-m_0)$, which can be made arbitrarily large by choosing negative $m_0$ values. 
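As a sanity check, the relations above are easy to verify numerically; the following sketch uses illustrative parameter values in geometric units $G=c=1$ (they are not meant as realistic astrophysical inputs):

```python
import math

def xi_of_tau(tau, M, xi0):
    """Collision-point evolution, eq. (xitau): xi = xi0 * exp(-tau/(4M))."""
    return xi0 * math.exp(-tau / (4.0 * M))

def eta_beta0(xi, alpha):
    """Efficiency for beta = 0 in the limit 0 < xi, alpha << 1:
    eta = (1 + alpha/xi)^(-1), identical with (2.4) of Ori and Poisson."""
    return 1.0 / (1.0 + alpha / xi)

def eta2(xi, alpha):
    """Alternative efficiency eta_2 = E/(M - dm - m0) = xi/(alpha + xi),
    exact and independent of beta."""
    return xi / (alpha + xi)

M, alpha, xi0 = 1.0, 0.01, 0.1      # illustrative values, G = c = 1

# half-efficiency (eta = 0.5) is reached at xi = alpha,
# i.e. at tau_d = -4 M log(alpha/xi0)
tau_d = -4.0 * M * math.log(alpha / xi0)
assert abs(eta_beta0(xi_of_tau(tau_d, M, xi0), alpha) - 0.5) < 1e-12

# for xi << alpha the efficiency eta_2 ~ xi/alpha is extremely small
print(eta2(1e-6, alpha))
```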
\begin{table} \begin{center} \caption{Various scenarios in the null shell model of the white hole}\label{tab1} ~ \begin{tabular}{|c|c|c|c|c|}\hline $M/M_\odot$& $\xi=R/(2M)-1$& $\tau=4M\log(\xi_0/\xi)$& $m_0/M_\odot$& $\eta=E/M$\\ \hline $4.06\cdot10^6$& $\sim\exp(-10^{16})$& $13.8\cdot10^9$~years& $0$& $\sim99\,\xi$\\ $4.06\cdot10^6$& $10^{-5}$& $737$~sec& $0$& $10^{-3}$\\ $4.06\cdot10^6$& $\sim\exp(-10^{16})$& $13.8\cdot10^9$~years& $-2\cdot10^4/\xi$& $0.5$\\ $4.06\cdot10^6$& $\sim\exp(-10^{16})$& $13.8\cdot10^9$~years& $-4\cdot10^2/\xi$& $0.01$\\ $4.06\cdot10^6$& $10^{-5}$& $737$~sec& $-2.03\cdot10^9$& $0.5$\\ $4.06\cdot10^6$& $10^{-5}$& $737$~sec& $-3.67\cdot10^7$& $0.01$\\ $10$& $10^{-5}$& $1.81$~msec& $-5\cdot10^3$& $0.5$\\ $10$& $10^{-5}$& $1.81$~msec& $-90.2$& $0.01$\\ \hline \end{tabular} ~ (common parameters: $\alpha=dm/M=0.01$, $\xi_0=0.1$) \end{center} \end{table} In Table~\ref{tab1}, we calculated several interesting scenarios for the null shell model of the white hole. First, we consider a white hole with a mass of the order of that of the central galactic black hole. In the first line of the table we take a time of the order of the age of the universe and set the central mass $m_0$ to zero. For definiteness, the parameters $\alpha=0.01$, $\xi_0=0.1$ are fixed, but the final result depends little on them. For $\xi$ and $\eta$ the extremely small values $\sim\exp(-10^{16})$ are obtained. In \cite{OriPoisson}, in a similar setting with a different choice of model parameters, extremely small values $\sim\exp(-10^{5})$ were also obtained. In the second line, in order to show that this result is entirely determined by the choice of the parameter $\tau$, a moderately small value $\xi=10^{-5}$ is chosen, corresponding to $\tau=737$~sec. As a result, a moderately small efficiency value $\eta=10^{-3}$ is obtained. 
In the next four lines, we show that the choice of a negative mass allows one to obtain predetermined values of efficiency, in particular, the case of half-efficiency and the T-symmetric case with $\eta=\alpha$. For a time $\tau$ of the order of the age of the universe, extremely large absolute values of the mass are obtained. These values significantly exceed the mass of the observable universe and arise from the extremely small values of the $\xi$-distance, to which they are inversely proportional. As we discussed above, the application of the model in the range of such small distances is an extrapolation, and the result should be understood qualitatively. A cutoff of $\xi$ at the Planck length also leads to a very large negative mass in the center. For moderately small values of $\xi$, moderately large values of negative mass are obtained. The last two lines represent the case of a white hole mass of the order of a stellar black hole mass, with similar results. In this paper, we consider the case when, according to the clock of an external observer, the shells collide in a finite time, which is possible only for $R>2M$. The case considered by Eardley \cite{Eardley} and Barceló et al. \cite{151100633} is also possible, corresponding to slightly different Penrose diagrams. This is the case when the outgoing shell does not come out at all, from the viewpoint of the external observer, but collides with the incoming shell under the event horizon. For the external observer, what happens under the horizon is inaccessible, and this case is indistinguishable from the one where the outgoing shell does not exist at all; that is, the solution would look like an eternal black hole onto which an additional shell was thrown. In the present work, our goal is to find the regions of model parameters in which the explosion of the white hole occurs and is, in principle, observable from the outside. 
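The $\tau$ entries of Table~\ref{tab1} follow directly from $\tau=4M\log(\xi_0/\xi)$; a minimal check in Python, where the only external input is the solar mass in time units, $GM_\odot/c^3\approx4.925\cdot10^{-6}$~s:

```python
import math

GM_SUN_OVER_C3 = 4.925e-6          # G*M_sun/c^3 in seconds

def tau_seconds(M_in_Msun, xi, xi0=0.1):
    """tau = 4 M log(xi0/xi), with the mass converted to time units."""
    return 4.0 * M_in_Msun * GM_SUN_OVER_C3 * math.log(xi0 / xi)

print(tau_seconds(4.06e6, 1e-5))   # ~737 s, as in Table 1
print(tau_seconds(10.0, 1e-5))     # ~1.81e-3 s, i.e. 1.81 msec
```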
\section{Interaction of white hole with radial flows of matter}\label{sec3} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{sch1.pdf} \caption{Static spherically symmetric problem with radially directed flows of matter, in $xyz$-coordinates on the left, in $tr$-coordinates on the right. Image from~\cite{static-rdm}.} \label{f2a} \end{figure} In this section we will consider a static model with T-symmetric ingoing and outgoing radially directed matter flows, shown in Fig.\ref{f2a}. This matter distribution was used in \cite{static-rdm} as a model for the dark matter halo in spiral galaxies. Here we will consider the special case of light-like (null-directed) matter flows. Also, it will not be so important to us whether this matter is dark or not. It interests us as a model of a compact massive object, permanently emitting and absorbing the matter flows. We will set a boundary condition in the form of converging and diverging shells at infinity and will integrate the solution inwards, to see what happens there. We use spherical coordinates $(t,r,\theta,\phi)$ and a standard metric \begin{equation} ds^2=-A(r)dt^2+B(r)dr^2+r^2(d\theta^2+\sin^2\theta\; d\phi^2)\label{stdmetr} \end{equation} and select the energy-momentum tensor \begin{equation} T^{\mu\nu}=\rho(r)(u_+^\mu(r)u_+^\nu(r)+u_-^\mu(r)u_-^\nu(r)),\label{Tmunu} \end{equation} where $\rho(r)$ is the density and $u_\pm(r)=(\pm u^t(r),u^r(r),0,0)$ are the velocity fields of the outgoing and ingoing radial matter flows. The general solution of the geodesic equations has the form: \begin{eqnarray} &&4\pi\rho=c_1/\left(r^2u^r\sqrt{AB}\right),\ u^t=c_2/A, \ u^r=\sqrt{c_2^2+c_3A}/\sqrt{AB},\label{eq_geode} \end{eqnarray} with positive constants $c_{1,2}$ and the third constant defining the norm $c_3=u_\mu u^\mu$. Three variants $c_3=0,\pm1$ for this norm have been considered in \cite{static-rdm}, while here we will consider only one case: $c_3=0$, null radial dark matter (NRDM). 
A further difference from \cite{static-rdm} is the appearance of the $4\pi$ factor in the $\rho$-equation, due to the different system of units used in the present paper, $G=c=1$. The field equations to solve are \begin{eqnarray} &&da/dx=-1+e^b+\epsilon\ e^{b-a},\label{eq_dadx}\\ &&db/dx=1-e^b+\epsilon\ e^{b-a},\label{eq_dbdx} \end{eqnarray} in logarithmic variables \begin{eqnarray} &&a=\log A,\ b=\log B,\ x=\log r,\ \epsilon=4c_1c_2,\label{eq_c45} \end{eqnarray} with the starting point \begin{eqnarray} &&x_1=\log r_1,\ a_1=0,\ b_1=\epsilon + r_s/r_1,\label{xab1} \end{eqnarray} where $r_s$ is the gravitational radius of the central massive object and $r_1$ is the starting radius of the integration. We solve this system numerically and show the result in Fig.\ref{f2}, left. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{wh-fig2a.pdf} ~~\includegraphics[width=0.45\textwidth]{wh-fig2b.pdf} \end{center} \caption{On the left: NRDM solution with $\epsilon=0.04$, the evolution of $a=\log A$, $b=\log B$ as functions of $x=\log r$. Starting from point~1, the solution reaches maximal $b$ at point~2. Then the functions $a$ and $b$ fall down rapidly until the solution reaches minimal $a$ at point~3. Then the functions $a,b$ go apart symmetrically. On the right: the Misner-Sharp mass for the solution. Starting from $M\sim\epsilon r/2$ near point~1, the solution comes close to the horizon $M\sim r/2$ at point~2 and is bounced into the negative region. There it passes through the inflection point~3 and tends to a negative constant at $r\to0$.}\label{f2} \end{figure} Setting $\epsilon=0.04$ and $r_s=1$, we start the integration from point~1, located well above $r_s$. The solution reaches a maximum of $b$ at point~2, after which the functions fall down almost in parallel. At point~3 a minimum of the $a$-function is reached and the functions go apart with opposite slopes. 
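The integration of (\ref{eq_dadx})-(\ref{eq_dbdx}) is straightforward to reproduce; a sketch with {\tt scipy} (we stop at $r=5$, above the supershift region, where the halo asymptotics $M\approx(\epsilon r+r_s)/2$ still roughly holds; continuing below $r_s$ requires much tighter tolerances):

```python
# Sketch: inward integration of the NRDM field equations (eq_dadx)-(eq_dbdx)
# for epsilon = 0.04, r_s = 1, as in Fig. 2.
import numpy as np
from scipy.integrate import solve_ivp

eps, r_s, r1 = 0.04, 1.0, 100.0

def rhs(x, y):
    a, b = y
    s = eps * np.exp(b - a)
    return [-1.0 + np.exp(b) + s,      # da/dx
             1.0 - np.exp(b) + s]      # db/dx

x1 = np.log(r1)
y1 = [0.0, eps + r_s / r1]             # starting point (xab1)

sol = solve_ivp(rhs, [x1, np.log(5.0)], y1, rtol=1e-10, atol=1e-12)
a5, b5 = sol.y[:, -1]
M5 = 5.0 / 2.0 * (1.0 - np.exp(-b5))   # Misner-Sharp mass at r = 5
print(b5, M5)                          # b grows inwards; M ~ (eps*r + r_s)/2
```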
On the right part of Fig.\ref{f2} we have presented the Misner-Sharp mass \cite{Blau} for this solution, which in our coordinates reads \begin{eqnarray} &&M=r/2\ (1-B^{-1})=e^x/2\ (1-e^{-b}). \end{eqnarray} At large $r$ the function $M(r)$ is almost linear with a small slope, corresponding to the dark matter halo in the model of spiral galaxies. The difference $M(r_1)-M(r_2)$ represents the mass of the halo. Further, at smaller $r$, the solution shows Schwarzschild behavior and goes towards the $M>r/2$ zone, where black or white holes are formed. The solution comes very close to this zone; however, the formation of a horizon does not happen. The solution is bounced into a region of negative masses, where it tends to a negative constant at $r\to0$. The mass of dark matter in this model is always positive, while the Misner-Sharp mass, the total mass enclosed by a sphere of radius $r$, becomes negative due to the contribution of the central singularity. The difference $M(r_2)-M(0)$ represents the positive mass of a coat surrounding a singularity of negative mass $M(0)$. In more detail, the radial density of the Misner-Sharp mass is given by $M'(r)=4\pi r^2\rho_{\mbox{\footnotesize eff}}$, where $\rho_{\mbox{\footnotesize eff}}$ is the effective mass density of the dark matter flows, i.e., the component of the energy-momentum tensor $T_\nu^\mu=\mbox{diag}(-\rho_{\mbox{\footnotesize eff}},p_{\mbox{\footnotesize eff}},0,0)$. In \cite{static-rdm} the general expression for this density is given, which for the NRDM case and our system of units becomes \begin{eqnarray} &&\rho_{\mbox{\footnotesize eff}}=p_{\mbox{\footnotesize eff}}=\epsilon/(8\pi r^2A).\label{rhopeff} \end{eqnarray} This density is positive, leading to a positive contribution of dark matter to the Misner-Sharp mass. The asymptotic formulae for the different zones of the solution are available in \cite{static-rdm}. 
The asymptotic solution at large $x$ is \begin{eqnarray} &&a=Const+2\epsilon x -r_s e^{-x},\ b=\epsilon+r_s e^{-x},\label{eq_z1} \end{eqnarray} leading to the Misner-Sharp mass $M=(\epsilon r +r_s)/2$. On the other hand, using $\varphi=a/2$ as the Newtonian potential, one obtains the expression for the orbital velocity $v$ and the orbital acceleration \begin{eqnarray} &&v^2/r=\varphi'_r=\epsilon/r +r_s/(2r^2).\label{eq_ar} \end{eqnarray} At large $r$ the dark matter term $\epsilon/r$ dominates over the Newtonian term $r_s/(2r^2)$, producing asymptotically flat rotation curves $v^2=\epsilon$. One can also notice that this orbital acceleration corresponds to a mass function $M_{\mbox{\footnotesize grav}}=\epsilon r +r_s/2$ with the dark matter contribution twice as large as its Misner-Sharp mass. The reason is that the pressure also contributes to gravity, and for the considered NRDM case the contributions of the mass density $\rho_{\mbox{\footnotesize eff}}$ and the pressure $p_{\mbox{\footnotesize eff}}$ are equal. Further, the solution comes close to the horizon $M=r/2$. The relative distance to the horizon line, $1-M/(r/2)=e^{-b}$, is minimal at the maximum of $b$, i.e., at point 2. After passing point 2, the functions $a,b$ start to fall rapidly, reaching large negative values. This phenomenon leads to an extremely large redshift for this region, as observed from large distances, and has been termed {\it red supershift} in \cite{static-rdm}. Through (\ref{rhopeff}), an extremely large negative $a$ provides extremely large positive values of the density and pressure, leading to the formation of a dark matter coat with extremely fast mass accumulation. A similar phenomenon, {\it mass inflation} for counterstreaming matter flows, has been observed by Hamilton and Pollack \cite{HamiltonPollack} in their study of the structure of charged black holes. 
At point 3 the $a$-function reaches a minimum, corresponding to a maximum of the radial density $M'(r)$, or, equivalently, an inflection point of $M(r)$. Due to the logarithmic transform, it is formally different from the inflection point of $M(x)$, although visually the graph of $M(x)$ is also consistent with an inflection point located in the range of point 3. After passing point 3, the radial density gradually falls. The coat extends to the origin, where the solution enters the regime typical for a naked Schwarzschild singularity: \begin{eqnarray} &&a=-x+c_{12},\ b=x+c_{13},\label{eq_z3} \end{eqnarray} with large negative constants $c_{12,13}$. It corresponds to a large negative constant mass $M=-e^{-c_{13}}/2$, located at the origin. Note that for the case of massive dark matter (MRDM) considered in \cite{static-rdm} there is {\it a vacuole} around the central singularity, due to the presence of turning points on massive geodesics, which arise from the gravitational repulsion of the central negative mass. For the NRDM case there is no vacuole, the coat closely adjoins the singularity, but its radial density begins to decrease after passing point 3, also due to the repulsion of the central singularity. This repulsive effect resembles the quantum mechanical bounce discussed in the work by Barceló et al.~\cite{151100633} and the references therein. In our model, however, the bounce is classical, arising from the gravitational repulsion of the central singularity. With $\epsilon\to0$, the solution outside the gravitational radius approaches the Schwarzschild solution arbitrarily closely; however, under the gravitational radius, in this limit, the effects of supershift and mass inflation become stronger and stronger. Thus, the solution fits a black or white hole arbitrarily closely from the outside, but is different from the inside. Such a structure of massive compact objects was also found in other models, see the review by Visser et al. 
\cite{09020346}. Now we will calculate the key points of the NRDM model for the parameters typical for the dark matter halo in our Milky Way galaxy. The calculation is similar to \cite{static-rdm}, but uses a different choice of the constants, associated with the type of matter. For compatibility, we write the complete set of constants defined in \cite{static-rdm} for our special case: \begin{eqnarray} &&c_1=\epsilon/4,\ c_2=1,\ c_3=c_5=0,\ c_4=c_6=c_7=\epsilon. \end{eqnarray} Most of the model constants are bound to $\epsilon$, related to the orbital velocity $v$ on the asymptotically flat rotation curves. The measurements for the Milky Way galaxy in the paper by Sofue and Rubin \cite{SofueRubin} give $v\sim200$~km/s, corresponding to $\epsilon=(v/c)^2\sim4\cdot10^{-7}$. Estimates of the mass of the central black hole in our galaxy have been made by Ghez et al. \cite{08082870}, corresponding to $r_s\sim1.2\cdot10^{10}$m. Now we have all the parameters necessary to start the integration. The results are presented in Table~\ref{tab2}. 
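These parameter values follow from elementary unit conversions; a quick check in Python with standard SI constants (the $200$~km/s figure rounds to the quoted $\epsilon\sim4\cdot10^{-7}$):

```python
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

v = 2.0e5                      # ~200 km/s, Sofue and Rubin
eps = (v / c)**2
print(eps)                     # ~4.4e-7, quoted as ~4e-7

M_bh = 4.06e6 * M_sun          # central black hole mass, Ghez et al.
r_s = 2 * G * M_bh / c**2
print(r_s)                     # ~1.2e10 m
```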
\begin{table} \begin{center} \caption{NRDM model parameters and ranges for the Milky Way galaxy}\label{tab2} ~ \def\arraystretch{1.1} \begin{tabular}{|c|c|} \hline model parameters&$\epsilon=4\cdot10^{-7}$, $r_s=1.2\cdot10^{10}$m\\ \hline a border of the galaxy,&$r_1=3.1\cdot10^{21}$m,\\ starting point of& $a_1=0$, $b_1=4.00004\cdot10^{-7}$,\\ the integration& $\log_{10}(M_1/M_\odot)=11.6231$\\ \hline &$r_{2}=1.07883\cdot10^{10}$m, $a_{2}=-14.7318$,\\ supershift begins&$b_{2}=13.3455$, $\log_{10}(M_2/M_\odot)=6.56265$,\\ & $r_2-2GM_2/c^2=17261.9$m\\ \hline &$r_{3}=6.79592\cdot10^6$m, \\ supershift ends&$a_{3}=b_3-14.7238$, $b_{3}=-1.24995\cdot10^6$,\\ & $\log_{10}(-M_3/M_\odot)=5.4285\cdot10^5$\\ \hline minimal radius& $r_4=1.62\cdot10^{-35}$m, \\ (Planck length),&$a_4=a_3+95.3439$, $b_4=b_3-96.3359$,\\ end of the integration& $M_4=1.64\ M_3$\\ \hline \end{tabular} \end{center} \end{table} At the outer limit of the galaxy, at $r_1=100$~kpc from the center, the integration is started, and the clock for the global time is set, $a_1=0$. The Misner-Sharp mass of the system, including the central massive object and the dark matter halo, at this point is $M_1=4.2\cdot10^{11} M_\odot$. At $r_2=1.08\cdot10^{7}$~km, approximately 16 solar radii, the $b$-function reaches a maximum, and the supershift regime typical for the RDM model begins. The value $r_2$ is located a bit below the nominal value $r_s$ of the gravitational radius. The Misner-Sharp mass at this point, including only the central object, is $M_2=3.653\cdot10^6 M_\odot$. The object is very close to the formation of a horizon: the difference between the actual radius $r_2$ and the gravitational radius for the mass $M_2$ is only $17$~km, a small value in comparison with $r_2$ itself. Further, at $r_{3}=6.8\cdot10^3$~km, approximately the Earth radius, the supershift regime ends. The values $a_{3}\sim b_{3}=-1.25\cdot10^6$ are reached there. The Misner-Sharp mass at this point is deeply negative; {\it its logarithm} is $\log_{10}(-M_3/M_\odot)\sim5\cdot10^5$. 
This value is similar to the extremely large negative masses of the central object obtained in the previous section. As the radius decreases further, $a$ increases and $b$ decreases according to the naked singularity asymptotics (\ref{eq_z3}). However, at the lower limit, the Planck length, where we stop the integration, the absolute variation of $a$ and $b$ is of the order $10^2$, much less than the values themselves, of the order $10^6$. In particular, the redshift defined by the $a$-value is still extremely large at this point. The Misner-Sharp mass is increased by $64\%$ relative to point 3, steadily rather than via the exponential inflation of the previous stage. As a variant of the calculation, we also considered the case when, at the outer radius of the galaxy, the boundary condition is set not to the typical density of dark matter, but to the density of the relic radiation. The density value at the outer radius becomes seven orders of magnitude smaller. Further, the geometry of the solution implies that the density then increases inversely to the square of the radius, by 22 orders of magnitude from $r_1$ to $r_2$, which, of course, is not correct for relic radiation. This increase occurs not because of the influence of the black hole, but only because of the choice of initial conditions with the concentration of light rays at the center of the system. More correct behavior will be obtained in the next section when considering a photon gas. However, we performed the calculation for this scenario as well. Reducing $\epsilon$ generally leads to sharper dependencies, making the calculation a challenge for the integrator. The graphs slide further into the region of negative $x$, while point 3 is shifted below the Planck length. Therefore, we concentrate here on the scenario with the typical density of dark matter, as a more representative case. There is another version of the calculation, with a modification of the model at the minimum radius. 
In \cite{wrmh-rdm} we have shown that the central singularity in the RDM solution can be relatively easily replaced with a wormhole. Both solutions contain negative mass, but the throat of the wormhole is characterized by $B\to\infty$, which for a narrow wormhole corresponds to a small positive Misner-Sharp mass $M=r_{\min}/2$. Thus, the Misner-Sharp mass first decreases to a large negative value, then returns to the positive region, due to the contribution of the exotic fluid contained in the model. In the symmetric solution considered in \cite{wrmh-rdm}, the $A$-function has a deep minimum in the throat, and in its redshift characteristics the wormhole RDM solution behaves in the same way as the RDM solution without a wormhole truncated at point~3. As an aside, the use of the RDM model for the description of dark matter halos is justified even in the case when the matter is relativistic (NRDM). In \cite{static-rdm}, all three types of matter were considered, massive, null and tachyonic, and identical asymptotically flat rotation curves were obtained for them. These curves are characterized by a single parameter $\epsilon=(v/c)^2$, which for all types of matter can be set to a small value, to reproduce the observed non-relativistic velocities of stars in the galaxy. The interior solutions for the different types of matter also appear to be similar. The reason for this independence of the type of matter has been explained in \cite{static-rdm}: in the outer region the matter terms contribute only to slowly varying common factors, while in the inner regions their contribution vanishes. Interestingly, Barranco et al. \cite{13016785} obtain rotation curves for hot dark matter different from those for cold dark matter. Indeed, considering for simplicity in (10) and (14) of \cite{13016785} the limit of flat rotation curves $a\to0$, one obtains a known result with the barotropic equation of state $p=\rho\,(v/c)^2/2$. 
The measured non-relativistic rotation curves $v\ll c$ correspond to cold dark matter, $p\ll\rho$, rather than to hot dark matter. A detailed analysis resolves this controversy: \cite{13016785} considers dark matter with three equally distributed pressure components, while in \cite{static-rdm} and in this section the matter has only a radial pressure component. It turns out that this difference significantly affects the equations and the resulting orbital velocities. In the next section, we will consider the case of equally distributed pressure and see that the solution of this model indeed differs from RDM. On the other hand, the purpose of this work is not to study dark matter models, but to find stable variants of white holes. Here we have actually built such a variant: a white-hole-like massive compact object, permanently radiating outgoing null shells, as well as absorbing ingoing null shells, and remaining stable during arbitrary periods of time. The crossing null shells give rise to the phenomenon of mass inflation, which leads to the formation of an extremely massive coat surrounding the naked singularity. As in the previous section, the solution requires a negative mass in the center. \section{Interaction of white hole with photon gas}\label{sec4} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{wh-fig3a.pdf} ~~\includegraphics[width=0.45\textwidth]{wh-fig3b.pdf} \end{center} \caption{On the left: solutions of TOV equations for $w=1/3$, $r_1=100$, $\rho(r_1)=10^{-6}$ and different values of $M(r_1)$. A regular solution, shown by the black solid line, corresponds to $M(r_1)=4.5354$ and $M(0)=0$. Solutions below this line correspond to negative $M(0)$ and a naked central singularity. Solutions above this line are would-be black holes: they initially tend to positive $M(0)$, but actually bounce off the horizon line $M=r/2$, go through mass inflation and end in the central singularity with even larger negative $M(0)$. 
On the right: the corresponding density curves. The regular solution corresponds to almost constant density; below this line the solution is rarefied by the action of the central singularity, above this line the density strongly increases when the solution approaches the horizon, and then is rarefied by the central singularity.}\label{f3} \end{figure} In this section, a photon gas in a spherical container with a mirror wall (a thermos) will be considered. In the center there will be a massive compact object interacting with this gas. Ideally, at the outer border, this system should be stitched to a dynamical cosmological model. Here, as a simplification, we consider the stationary problem, and the mirror wall on the outer boundary will support this stationarity. The energy-momentum tensor has the form $T^\mu_\nu=\mathop{\mbox{diag}}(-\rho,p,p,p)$, with isotropically distributed pressure components and an equation of state (EOS) of the form $p=w\rho$; for a photon gas $w=1/3$. The system to solve is the Tolman-Oppenheimer-Volkoff (TOV) equations, see, e.g., Blau \cite{Blau} (23.86)+(23.87)+(23.80), in the system of units $G=c=1$: \begin{eqnarray} &&w \rho'_r=-(\rho M/r^2)(1+w)(1+4\pi r^3 w\rho/M)(1-2M/r)^{-1},\label{tov1}\\ &&M'_r=4\pi r^2\rho,\ h'_r=4\pi r(1-2M/r)^{-1}\rho(1+w),\label{tov2} \end{eqnarray} where $M$ is the Misner-Sharp mass and $\rho$ is the mass density. The system defines the spherically symmetric stationary metric of the form (\ref{stdmetr}) with the coefficients \begin{eqnarray} &&A=e^{2h}f,\ B=f^{-1},\ f=1-2M/r. \end{eqnarray} We solve this system numerically for different starting values $M_1$, $\rho_1$ at the outer radius $r_1$. The solution is presented in the logarithmic coordinates $a=\log A$, $b=\log B$, $x=\log r$. To represent the mass and density behavior at different scales, $\log\rho$ is used, as well as $\mathop{\mbox{arcsinh}} M$, possessing logarithmic asymptotics at negative and positive infinities. 
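A minimal sketch of the inward integration of (\ref{tov1})-(\ref{tov2}) with {\tt scipy}, using the Fig.\ref{f3} starting values; the near-regular value $M(r_1)=4.5354$ is the one quoted above:

```python
# Sketch: inward integration of the TOV system (tov1)-(tov2) for a photon
# gas, w = 1/3, with r1 = 100 and rho(r1) = 1e-6 as in Fig. 3.
import numpy as np
from scipy.integrate import solve_ivp

w = 1.0 / 3.0

def tov(r, y):
    rho, M = y
    f = 1.0 - 2.0 * M / r
    drho = -(rho * M / r**2) * (1 + w) \
           * (1 + 4 * np.pi * r**3 * w * rho / M) / (w * f)
    dM = 4.0 * np.pi * r**2 * rho
    return [drho, dM]

r1, rho1, M1 = 100.0, 1e-6, 4.5354     # near the regular solution
sol = solve_ivp(tov, [r1, 10.0], [rho1, M1], rtol=1e-10, atol=1e-16)
rho10, M10 = sol.y[:, -1]
print(rho10, M10)   # density mildly increased; mass dominated by 4*pi*rho*r^3/3
```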
Fig.\ref{f3}, left, presents solutions of the TOV equations for $w=1/3$, $r_1=100$, $\rho(r_1)=10^{-6}$ and different values of $M(r_1)$. There is a regular solution, shown by the black solid line, starting at $M(r_1)=4.5354$ and ending at $M(0)=0$. The solutions starting at smaller $M(r_1)$ have negative $M(0)$, corresponding to a naked central singularity. The solutions starting at larger $M(r_1)$ initially behave as if they would have positive $M(0)$; however, they inevitably approach the horizon line $M=r/2$, from which they bounce off towards negative masses and end in the central singularity with even larger negative $M(0)$. Thus, the regular solution line separates solutions with a naked central singularity from those having this singularity covered by a massive coat, similar to the RDM solutions considered earlier. We deliberately avoid the region $M_1>r_1/2$, where the starting point is located inside the black or white hole. The density curves, shown in Fig.\ref{f3}, right, show that the regular solution in logarithmic scale has almost constant density and a mass function close to $M\sim r^3$. The same plot in normal scale shows a gradual 22\% increase of the density towards the center and a corresponding slight deviation of the mass function, as needed for hydrostatic equilibrium. Below the regular solution line the density is rarefied by the action of the central singularity; above this line the density strongly increases when the solution approaches the horizon, and then is rarefied by the central singularity. Fig.\ref{f4} shows the $a,b$- and $M$-functions for a selected solution with the starting value $M(r_1)=5.1888$. It can be found in the previous Fig.\ref{f3} as the first gray line above the solid black regular solution. The $a,b$-profiles are similar to the RDM solutions. The exception, shown in a closeup on the right, is a different asymptotics near point~1. 
The considered solutions possess the large distance asymptotics \begin{eqnarray} &&a=Const+4\pi(1+3w)\rho_1 r^2/3 - 2M_0/r,\label{tovasy1}\\ &&b=8\pi\rho_1 r^2/3+2M_0/r,\label{tovasy2} \end{eqnarray} different from (\ref{eq_z1}) for the RDM model. It corresponds to the asymptotically constant density $\rho=\rho_1$, the Misner-Sharp mass $M=4\pi\rho_1 r^3/3+M_0$ and the metric coefficient $b=-\log(1-2M/r)$ at $2M/r\ll1$. The coefficient $a$ in the weak field limit determines the gravitational potential $\varphi=a/2$, which corresponds to the radial gravitational acceleration of the form $v^2/r=\varphi'_r=M_{\mbox{\footnotesize grav}}/r^2$, where $v$ is the orbital velocity and $M_{\mbox{\footnotesize grav}}=4\pi(1+3w)\rho_1 r^3/3+M_0$ is the effective gravitating mass. Note the multiplier $(1+3w)$, which distinguishes this mass from the Misner-Sharp mass. It arises from the fact that not only the energy of radiation but also the three pressure components contribute to gravity. We also note that when the first term in the gravitating mass prevails over the second, the rotation curves turn out to be not flat, $v=Const$, but linearly growing, $v\sim r$. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{wh-fig4a.pdf} ~~\includegraphics[width=0.45\textwidth]{wh-fig4b.pdf} \end{center} \caption{Solutions of TOV equations for the same parameters as in the previous figure and $M(r_1)=5.1888$. On the left: $a,b$-functions, on the right: a closeup. The curves show behavior similar to the RDM solutions, except for a different asymptotics near point~1. 
}\label{f4} \end{figure} \begin{table} \begin{center} \caption{TOV model scenario with a stellar mass compact object in cosmic microwave background}\label{tab3} ~ \def\arraystretch{1.1} \begin{tabular}{|c|c|} \hline model parameters&$M_1=10 M_\odot$, $w=1/3$, $\rho_1=\rho_{cmb}=4\cdot10^{-14}$~J/m$^3$\\ \hline starting point of&$r_1=10^6$m, $a_1=0$, $b_1=0.0299773$,\\ the integration& $M_1/M_\odot=10$ \\ \hline &$r_{2}=29532.4$m, $a_{2}=-54.2719$, $b_{2}=53.7265$, \\ supershift begins&$M_2/M_1-1=-3.64729\cdot10^{-23}$,\\ & $r_2-2GM_2/c^2=1.37139\cdot10^{-19}$m\\ \hline &$r_{3}=20638.1$m, \\ supershift ends&$a_{3}=-107.522$, $b_{3}=-104.685$,\\ & $\log_{10}(-M_3/M_\odot)=46.3087$\\ \hline minimal radius& $r_4=1.62\cdot10^{-35}$m, \\ (Planck length),&$a_4=-17.6594$, $b_4=-195.278$,\\ end of the integration& $M_4=1.728\ M_3$\\ \hline \end{tabular} \end{center} \end{table} For the numerical integration of realistically dimensioned scenarios, it is more convenient to present the system in the form that was used in our previous work \cite{wrmh-rdm} for the description of an exotic fluid in the RDM model of wormholes. First of all, a consequence of the TOV system is the hydrostatic equation \begin{eqnarray} &&r (p+ \rho) A'_r + 2 A r p'_r =0 \end{eqnarray} which for our EOS possesses the general solution \begin{eqnarray} &&4\pi w\rho=k_3 A^{k_4}. \end{eqnarray} In comparison with \cite{wrmh-rdm}, the factor $4\pi$ comes from the different system of units. For compatibility, we also write here all the constants defined in \cite{wrmh-rdm}: \begin{eqnarray} &&k_1=1/w,\ k_2=1,\ k_3=4\pi\rho_1 w,\\ &&k_4=-(1 + k_1)/2,\ k_5=0,\ k_6=\log k_3. \end{eqnarray} We use a constant $k_3>0$, unlike \cite{wrmh-rdm}, where an exotic fluid with $k_3<0$ was considered. 
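That $4\pi w\rho=k_3A^{k_4}$ with $k_4=-(1+1/w)/2$ solves the hydrostatic equation can be verified symbolically; a sketch with {\tt sympy}, writing $A=e^{a(r)}$ to keep the power $A^{k_4}$ well defined:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
w, k3 = sp.symbols('w k_3', positive=True)
a = sp.Function('a')(r)

A = sp.exp(a)                       # metric coefficient A = e^a > 0
k4 = -(1 + 1 / w) / 2               # k4 = -(1 + k1)/2 with k1 = 1/w
rho = k3 * A**k4 / (4 * sp.pi * w)  # from 4 pi w rho = k3 * A^k4
p = w * rho                         # EOS p = w rho

# hydrostatic equation: r (p + rho) A'_r + 2 A r p'_r = 0
expr = r * (p + rho) * sp.diff(A, r) + 2 * A * r * sp.diff(p, r)
assert sp.simplify(expr) == 0       # the equation is satisfied identically
```

For $w=1/3$ the exponent reduces to $k_4=-2$, as used below.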
Then the TOV system becomes equivalent to \begin{eqnarray} &&a'_x=-1 + e^b + 2 e^{2 x + k_4 a + b + k_6},\label{tovax}\\ &&b'_x= 1 - e^b + (2/w)e^{2 x + k_4 a + b + k_6}\label{tovbx} \end{eqnarray} with the initial data $a_1=0$, $b_1=-\log(1-2M_1/r_1)$, where $w=1/3$, $k_4=-2$, $k_6=\log(4\pi\rho_1/3)$, and $\rho_1$ and $M_1$ at $r_1$ are given. Before the numerical integration, it is better to normalize the system \begin{eqnarray} &&a'_y=v_a/norm,\ b'_y=v_b/norm,\ x'_y=v_x/norm,\\ &&norm=\sqrt{v_a^2+v_b^2+v_x^2}, \end{eqnarray} where $v_a$, $v_b$ are the right hand sides of (\ref{tovax}), (\ref{tovbx}) and $v_x=1$. The solution is then given parametrically as $(a,b,x)(y)$. The integration is performed with the {\it Mathematica} routine {\tt NDSolve}. The critical points of the solution are presented in Table~\ref{tab3}. Here we consider a compact object of stellar mass $M_1=10M_\odot$, with a nominal gravitational radius of about $r_s\sim30$~km, placed in the cosmic microwave background of energy density $\rho_{cmb}=4\cdot10^{-14}$~J/m$^3$. The integration begins at a distance $r_1=1000$~km. At larger distances, the density of the photon gas is almost constant, and nothing interesting happens. The maximum value of the metric coefficient, $b_2\sim54$, is reached at a distance close to the gravitational radius, $r_2\sim r_s$. Note the extremely small gap $r_2-r_s\sim10^{-19}$~m separating the object from gravitational collapse. This is the distance at which the initially inactive matter terms, describing the relic radiation, wake up and begin to strongly influence the structure of the solution. They are responsible both for the finite value at the maximum of the $b$-coefficient and for the subsequent phenomenon of mass inflation, which proceeds similarly, but not as stiffly as in the RDM model. The minimum value of the coefficient, $a\sim-108$, is reached at the point $r_3\sim21$~km. 
The mass reached there, $M_3/M_\odot=-2\cdot10^{46}$, is extremely large and negative, but still significantly smaller in magnitude than the mass values reached in the RDM model. Further, down to the Planck length, the mass grows slightly more negative, $M_4/M_\odot=-3.5\cdot10^{46}$, while the $a,b$-factors behave as they should for the naked Schwarzschild singularity: $a$ grows, $b$ decreases. The peculiarity here is the presence of a strong dip (supershift) in the initial values of the coefficients, as a result of which, down to the Planck length, the coefficient $a$ grows only to the value $a_4\sim-18$, still located in the infrared region. This value corresponds to the moderate redshift factor $A_4^{-1/2}=\exp(-a_4/2)\sim6.8\cdot10^3$. By comparison, for the naked Schwarzschild singularity $A_4=1-2M_4/r_4\sim\exp(-b_4)$, so for the obtained extremely large negative mass an extremely high ultraviolet shift factor would be reached, $A_4^{1/2}=\exp(-b_4/2)\sim2.5\cdot10^{42}$. To illustrate the strength of this ultraviolet shift, consider one photon with the typical energy of the background radiation at $T=2.7$~K, i.e.\ with the Boltzmann factor $kT\sim3.7\cdot10^{-23}$~J; after the ultraviolet shift its energy becomes $\sim10^{20}$~J, which corresponds to a mass of about 1~ton. A distant observer bombarded by such photons has a good reason to fear for his safety. This is why naked singularities of negative mass are considered dangerous cosmic objects. To our common happiness, in reality ultraviolet accelerators of such power do not appear, in full accordance with the principle of cosmic censorship. On the other hand, in the scenario considered here, the super-strong ultraviolet shift is compensated by a super-strong infrared one. The remote observer receives the relic photon {\it weakened} in energy by $\sim$7~thousand times, giving the cosmic censor no reason to impose a ban.
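The shift factors quoted above follow directly from the Table~\ref{tab3} endpoint values $a_4$ and $b_4$; the following short check reproduces the quoted numbers.

```python
import numpy as np

# Check of the redshift factors quoted in the text, using the
# Table 3 endpoint values a4 = -17.6594, b4 = -195.278.
a4, b4 = -17.6594, -195.278
ir_factor = np.exp(-a4 / 2.0)          # infrared shift, ~6.8e3
uv_factor = np.exp(-b4 / 2.0)          # would-be ultraviolet shift, ~2.5e42

kB, T = 1.380649e-23, 2.7              # CMB photon, kT ~ 3.7e-23 J
E_shifted = kB * T * uv_factor         # ~1e20 J without the coat
print(ir_factor, uv_factor, E_shifted)
```

Dividing the would-be ultraviolet factor by the infrared one recovers the net weakening by roughly seven thousand times mentioned at the end of the paragraph only after accounting for the full coat structure; here only the two individual factors are checked.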
For comparison, we also performed the calculation for the TOV model with the Milky Way parameters: $r_s=1.2\cdot10^{10}$~m, $M_1/M_\odot=4.06\cdot10^6$, $w=1/3$, $\rho_1=\rho_{cmb}$, $r_1=100r_s$. The result is a solution of similar structure, with a moderate ultraviolet shift factor $\exp(a_4/2)\sim3.8\cdot10^4$ at the Planck length. Compared to the naked Schwarzschild singularity, this ultraviolet factor effectively raises the radiation temperature only to $T\sim100$~thousand~K, rather cool by cosmic standards. At large distances, the mass density in the considered scenarios is approximately constant. Integration of the stellar mass scenario outwards from $r_1=10^3$~km shows that $\rho$ quickly tends to a constant and between $r\sim5.4\cdot10^3$~km and $r\sim2.6\cdot10^7$~km changes only by $\Delta\rho/\rho\sim1$\%. \paragraph{Remark on self-similar solution and isothermal halo.} The structure of the solutions described above represents only a part of the big picture. The TOV equations (\ref{tov1}), (\ref{tov2}) are invariant with respect to the scaling transform \begin{eqnarray} &&r\to cr,\ \rho\to c^{-2}\rho,\ M\to cM.\label{scaltov} \end{eqnarray} This transform maps solutions of the system into solutions, generally different ones. At the same time, the work of Visser and Yunes \cite{0211001} showed that the TOV system possesses a fixed point of the associated autonomous equation; in fact, the TOV system possesses a solution that, considered as a curve in $(r,\rho,M)$ space, is mapped by the scaling transform (\ref{scaltov}) into itself. This scale-invariant or self-similar solution has the form \begin{eqnarray} &&\rho=(2w/(1 + 6 w + w^2))/(4\pi r^2),\\ &&M=(2w/(1 + 6 w + w^2))r,\\ &&h=Const+(2w/(1+w))\log r, \end{eqnarray} corresponding to the metric coefficients \begin{eqnarray} &&a=Const+(4 w / (1 + w))x,\\ &&b=\log((1 + 6 w + w^2)/(1 + w)^2), \end{eqnarray} so that $a$ is a linear function of $x$ and $b$ is a constant.
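One can verify directly that this self-similar profile solves the normalized system (\ref{tovax}), (\ref{tovbx}): substituting $a=C+(4w/(1+w))x$ and constant $b$ makes the $x$-dependence of the source term cancel, leaving $a'_x=4w/(1+w)$ and $b'_x=0$. A numerical sketch of this check (the density scale entering $k_6$ is an arbitrary assumption; the additive constant $C$ is then fixed by consistency):

```python
import numpy as np

# Check that a = C + 4w/(1+w) x, b = log((1+6w+w^2)/(1+w)^2) solves
# the normalized TOV system a'_x, b'_x of eqs. (tovax)-(tovbx).
w = 1.0 / 3.0
k4 = -(1.0 + 1.0 / w) / 2.0
k6 = np.log(4.0 * np.pi * 1e-3 * w)       # arbitrary density scale
b = np.log((1.0 + 6.0 * w + w**2) / (1.0 + w)**2)
# Fix the additive constant C so the source term has the right magnitude:
C = (np.log(2.0 * w**2 / (1.0 + w)**2) - b - k6) / k4

for x in (-2.0, 0.0, 3.0):                # scale invariance: any x works
    a = C + 4.0 * w / (1.0 + w) * x
    src = np.exp(2.0 * x + k4 * a + b + k6)
    da = -1.0 + np.exp(b) + 2.0 * src
    db = 1.0 - np.exp(b) + (2.0 / w) * src
    assert abs(da - 4.0 * w / (1.0 + w)) < 1e-12   # a'_x = 4w/(1+w)
    assert abs(db) < 1e-12                          # b'_x = 0
print("self-similar solution verified")
```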
Near the critical point, where the scale-invariant solution appears, complex phenomena and bifurcations occur. In particular, at low density $\rho_1$, the boundary problem $M(0)=0$ has only one solution, i.e., only one initial $M_1$ leads to the final $M(0)=0$. As our numerical experiments show, near the critical point this problem has several solutions, that is, at a fixed boundary density, there are several regular solutions separating naked singularities from solutions with the massive coat. The complex structure of these alternating singular and regular solutions deserves special consideration. On the other hand, we will see below that the scenario of interest to us is located far from the scale-invariant solution, in a subcritical regime. The scale-invariant solution of the TOV system physically corresponds to the so-called isothermal halo in the model of dark matter in spiral galaxies. Using the exact relativistic formula for the orbital velocity from \cite{static-rdm}, \begin{eqnarray} &&v^2=a'_x/2=2 w/(1 + w), \label{relorb} \end{eqnarray} we see that for any $w$ we obtain constant (flat) rotation curves. In the work of Barranco et al.\ \cite{13016785}, for flat rotation curves and small $w$, a non-relativistic formula $v^2\sim2w$, consistent with (\ref{relorb}), was obtained. For $w=1/3$, the relativistic orbital velocity $v^2=1/2$ is obtained. From this we can conclude, in agreement with \cite{13016785}, that the experimentally observed non-relativistic orbital velocities are reached on the scale-invariant TOV solution only for small $w$, i.e., cold dark matter. In this aspect the TOV solution differs from RDM, in which all types of matter produce asymptotically flat rotation curves with a freely adjustable factor $\epsilon$. In particular, the NRDM model can be configured to obtain non-relativistic flat rotation curves.
Considering this question in even more detail: in the limit of a more and more rarefied gas, $\rho_1\to0$, TOV will produce the asymptotics (\ref{tovasy1}), (\ref{tovasy2}), with constant density and linear rotation curves $v\sim r$. Note that RDM will always produce $\rho\sim r^{-2}$, due to the geometry of the system with radially convergent flows of matter. This is exactly the density profile required for flat rotation curves. On the other hand, if we fix TOV and require, for agreement with the experiment, asymptotically flat rotation curves, then their realization can be achieved on the scale-invariant solution, whose formation requires the critical density. This density is proportional to $w$; thus small densities and non-relativistic orbital velocities at the scale-invariant TOV solution are available only for small $w$. In the RDM case, the rotation curves are always asymptotically flat, and the orbital velocities can be freely adjusted from non-relativistic to relativistic by a simple scaling of the density. The reason for such different behavior is the inclusion of an EOS with tangential pressure components; this changes the type of the equations and the structure of their solutions. There is another, methodological difference between our work and \cite{13016785}. Although the same TOV system was solved, in \cite{13016785} it was not solved with respect to the metric coefficients for a given EOS. The problem was solved as if from the other end, by substituting the known rotation curves into the equations and finding the EOS from them. In this approach, the differential equations are solved by direct integration of the experimentally known profiles. At the same time, the question that interests us, what happens deep inside the system, remains unanswered, since there are no experimental profiles there.
For clarity, let us consider several scenarios in which we evaluate the critical mass and density for a given external radius: \begin{eqnarray} &&\rho_{crit}=(2w/(1 + 6 w + w^2))/(4\pi r_1^2),\\ &&M_{crit}=(2w/(1 + 6 w + w^2))r_1. \end{eqnarray} At $w=1/3$, $r_1=1$~m, one has $M_{crit}=0.21$~m; in physical units this corresponds to $3\cdot10^{26}$~kg, i.e., 15\% of the Jupiter mass converted to radiation and enclosed in a container of 1~m radius. Further, consider a non-relativistic ideal gas, $w=RT/(\mu c^2)$. For definiteness, fix the parameters of nitrogen $N_2$: $\mu=28\cdot10^{-3}$~kg/mol, $T=273.15$~K, $w=9\cdot10^{-13}$. For $r_1=1$~m we obtain $\rho_{crit}=2\cdot10^{14}$~kg/m$^3$, which is 14 orders of magnitude greater than the density of nitrogen at normal pressure and temperature. So one should not worry that the gas in a balloon will start to form black holes or scale-invariant solutions. Consider again $w=1/3$, now at $r_1=3.1\cdot10^{21}$~m, i.e., the radiation in the volume of the Milky Way. The critical case corresponds to the energy density $\rho_{crit}c^2=0.21$~J/m$^3$, which is 13 orders of magnitude greater than $\rho_{cmb}=4\cdot10^{-14}$~J/m$^3$. Thus, the considered problem is in a deeply subcritical regime. In this regime, at large distances, $\rho$ is almost constant, and the TOV equations can be used to find the next order correction \begin{eqnarray} &&\rho = \rho_1 + \rho_1^2(r_1^2-r^2)\, 2\pi (1 + w)(1 + 3 w)/(3w) +... \end{eqnarray} This function defines the density bump required in the first non-vanishing order for hydrostatic equilibrium. In this formula, the second term is much smaller than the first when the regime is subcritical. \paragraph{Remark on naked singularities and cosmic censorship.} In the studied models, the central singularities are not covered by event horizons and, formally speaking, are naked. A photon from the singularity can in principle reach the distant observer.
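The unit conversions behind these estimates (geometric mass in meters to kilograms via $c^2/G$, and likewise for the density) can be checked with a few lines; the constants below are standard SI values.

```python
import numpy as np

# Check of the critical-mass/density estimates (geometric -> SI units).
G, c = 6.674e-11, 2.998e8
geo_mass = c**2 / G                     # kg per meter of geometric mass

w, r1 = 1.0 / 3.0, 1.0                  # radiation in a 1 m container
f = 2.0 * w / (1.0 + 6.0 * w + w**2)
M_crit = f * r1                         # ~0.21 m
print(M_crit, M_crit * geo_mass)        # ~3e26 kg, ~15% of Jupiter mass

# Non-relativistic nitrogen at T = 273.15 K
R, mu = 8.314, 28e-3
w_N2 = R * 273.15 / (mu * c**2)         # ~9e-13
rho_crit = (2.0 * w_N2 / (1.0 + 6.0 * w_N2 + w_N2**2)) / (4.0 * np.pi * r1**2)
print(w_N2, rho_crit * geo_mass)        # ~2e14 kg/m^3
```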
However, instead of an event horizon there is something else -- the supermassive coat, which provides an extremely strong redshift for this photon. The coat is thicker for the RDM model and thinner for TOV. In the RDM model, if the photon escaping from the Planck vicinity of the singularity to infinity has an initial energy reasonably bounded from above, its final energy will be extremely small, making this photon practically unobservable. In the TOV model, the final shift is moderate, balanced between infrared and ultraviolet, depending on the model parameters. In any case, the final energy is strongly suppressed compared with the case when a singularity of the same mass would be truly naked. The coat weakens the extremely strong ultraviolet effect arising from gravitational repulsion at very small distances for very large negative masses, which is typical for the timelike singularity considered in this paper. There are also singularities of spacelike type. Such behavior is exhibited by a white hole in the Schwarzschild model, in which the central singularity is naked, since it is covered by a particle horizon, not by an event horizon, and the light from it reaches the external observer. Zeldovich et al.\ \cite{Zeldovich} considered a dynamic problem with a white hole having a positive central mass and a spacelike singularity, in contrast to the stationary models we considered, with a negative central mass and a timelike singularity. In the scenario of \cite{Zeldovich}, the white hole is separated from the surrounding space only for a finite, relatively short time, during which the white hole can explode and throw out matter. After this time, the white hole does not explode, and the matter remains under the horizon. The white hole actually becomes black, encapsulated from the rest of the universe.
Apparently, this model describes the processes for which the starting point on the diagram of Fig.\ref{f3} is located above the horizon line, so that the solution bounces off the horizon from the inside and subsequently remains under the horizon. This is the difference with the timelike singularity, for which the solution is located outside the horizon and extends to infinity. \begin{table} \begin{center} \caption{Considered models and their symmetries}\label{tab4} ~ {\footnotesize \begin{tabular}{|c|c|c|c|}\hline model/scenario & T-symmetry & stationarity & isotropy and homogeneity\\ & & & at large distance\\ \hline null shell, generic & $-$ & $-$ &$-$ \\ \hline null shell, T-symmetric & $+$ & $-$ &$-$ \\ \hline RDM & $+$ & $+$ &$-$ \\ \hline TOV $M_1=M_{crit}$, $\rho_1=\rho_{crit}$ & $+$ & $+$ &$-$ \\ \hline TOV $M_1\ll M_{crit}$, $\rho_1\ll\rho_{crit}$ & $+$ & $+$ &$+$ \\ \hline \end{tabular} } \end{center} \end{table} \section{Conclusion} In this paper, we studied the possibility of stabilizing white hole models by a more realistic modeling of the external matter falling onto the white holes, as well as by introducing a core of negative mass into the model. First of all, we investigated what happens if one directs a converging null shell from outside onto a white hole emitting a diverging null shell. In the standard scenario, if one throws a null shell of initially low energy into a white hole, then waits 13.8~billion years during which the shell hangs on the particle horizon and strengthens itself by the ultraviolet shift, it then collides with the outgoing shell, acting as an opaque wall and letting practically nothing out. As we have shown, the model also has other solutions. In particular, the white hole can emit an amount of energy greater than its initial mass, so large that the outgoing shell can break through the almost opaque wall created by the incoming shell.
A core of negative mass remains behind, which is then compensated by the incoming shell and exists only for a finite time during the transition process, so that at both time infinities there are only positive masses. In an alternative scenario, a white hole radiates null shells continuously and also continuously absorbs the incoming shells falling on it. This solution is T-symmetric and stationary. The intersection of the flows leads to the mass inflation phenomenon, as a result of which both the particle horizon and the event horizon are removed, and a massive compact object is formed, almost reaching its gravitational radius, but not crossing it. Inside the object there is a massive coat surrounding the singularity of negative mass. The strong ultraviolet shift in the Planck neighborhood of the singularity is compensated by the strong infrared shift from the coat; as a result, photons born near the singularity, reasonably limited in their initial energy, reach the distant observer with an extremely small final energy. We also investigated what happens if we replace the converging and diverging shells in this scenario with a real photon gas. In a gas, photons move in all possible directions: radially converging, radially diverging, as well as tangential. In this case, in the subcritical regime, at large distances, the complete symmetry of the system is restored, including isotropy, homogeneity, stationarity and T-symmetry. Table~\ref{tab4} describes the models considered in this paper, among which the subcritical TOV gas has the maximum symmetry. Inside, such a solution looks similar to a solution with radial flows of matter; it also has a massive inflation coat surrounding the central singularity of negative mass. However, the dependencies in this solution turn out to be less sharp. In particular, the shift of photons from the Planck neighborhood is moderate and balanced between infrared and ultraviolet, depending on the choice of model parameters.
Formally speaking, the stationary solutions considered here are not white or black holes in the exact sense, since they have neither event horizons nor particle horizons. They are similar to quasi-black holes, gravastars, fuzzballs, boson stars and other dark stars, reviewed by Visser et al.\ in \cite{09020346}, because outside of the gravitational radius the solution can be mathematically as close as one likes to a Schwarzschild black hole, although inside the solution is arranged quite differently. The objects considered in our work exhibit the properties of white and black holes at the same time: they eject matter and absorb matter, remaining stable for an unlimited time. Note that the matter ejected and absorbed by these objects can be dark, as in the halo model for spiral galaxies; then these objects will look like dark stars, almost indistinguishable from black holes. This matter can also be formed by ordinary photons or other relativistic particles from the normal matter sector. Since the most realistic model is the subcritical TOV solution, the deviation of the photon gas density caused by these objects becomes large only in the immediate vicinity of the object, while at long distances the photon gas is homogeneous and isotropic, not revealing the presence of compact massive objects in it. These objects can also be identified by the gravitational lensing of light rays and by the orbital velocities of the celestial bodies captured by them, but due to the similarity of the external metric to the Schwarzschild one, they will be indistinguishable from ordinary black holes. Thus, in a stationary, equilibrium state, these objects successfully mimic black holes, like the other objects described in \cite{09020346}. A feature of the objects we studied is the presence of a central timelike singularity of negative mass and the existence of a light trajectory connecting it with a remote observer.
Thus, if any dynamical processes occur in the depths of the object, near the singularity of negative mass, signals about them can reach a remote observer, shifted in frequency to the infrared or ultraviolet, in the form of radio or gamma bursts. The specific signature of these bursts depends on the exact model of the process, and additional investigation will be required to clarify it. Another characteristic feature of the studied objects is the huge, almost mutually compensating masses of the exotic core and the inflation coat, whose absolute values significantly exceed the mass of the observable universe. This may mean that the studied structure corresponds to a theoretical stationary limit, which in practice is either not reached or takes a very long time to reach. A hypothetical mechanism for the appearance of an exotic core can be the dissociation of matter into particles of positive and negative mass occurring at superhigh energies. If such a process lasts long enough, it can lead to the formation of an equilibrium configuration of the exotic core and the inflation coat. At normal energies, this mechanism can be suppressed, for example, by a mass threshold, if the process goes through the formation of intermediate supermassive particles, and is activated only at high energies. It would be interesting to perform the calculations for the corresponding dynamic scenario.
\section{Introduction}\label{SecIntro} In the young universe, massive metal-free Population III stars (Schwarzschild \& Spitzer 1953; Larson 1998) may have spawned `intermediate mass black holes' (IMBHs) with masses greater than $10^2\, M_{\odot}$ (e.g.\ Bond et al.\ 1984; Carr et al.\ 1984; Madau \& Rees 2001; Schneider et al.\ 2002), but see Umeda \& Nomoto (2003) and Fraser et al.\ (2017) who cap the `Pop III' masses at 120--130 $M_{\odot}$. Additional mechanisms have also been proposed for the creation of IMBHs (see, e.g., Miller \& Colbert 2004 and Mezcua 2017), including: the runaway merging of stellar mass black holes and stars (Zel'dovich \& Podurets 1965; Larson 1970; Shapiro \& Teukolsky 1985; Quinlan \& Shapiro 1990; Portegies Zwart \& McMillan 2002; G\"urkan et al.\ 2004); primordial black holes (e.g.\ Argyres et al.\ 1998; Bean \& Magueijo 2002; Carr et al.\ 2010; Grobov et al.\ 2011); the direct collapse of massive gas clouds, bypassing the Pop~III stage (Doroshkevich et al.\ 1967; Umemura et al.\ 1993; Bromm \& Loeb 2003; Mayer et al.\ 2010); and a stunted or inefficient growth of nuclear black holes via gas accretion at the centres of galaxies (e.g.\ Johnson \& Bromm 2007; Sijacki et al.\ 2007; Alvarez et al.\ 2009; Heckman \& Best 2014). In the last of those alternative scenarios, IMBHs are an intermediate step on the way to the maturation of supermassive black holes (SMBHs, $M_{\rm bh} > 10^5\, M_{\odot}$; Rees 1984; Shankar et al.\ 2004; Ferrarese \& Ford 2005; Kormendy \& Ho 2013; Graham 2016a, and references therein). In contrast to the plethora of theoretical formation models, direct observational detection of IMBHs remains elusive. There is a long history of disproved suggestions and claims of IMBHs in globular clusters, stretching back to at least the X-ray data from Clark et al.\ (1975).
Most recently, the presence of an IMBH with a mass of $\approx$2000\,$M_{\odot}$ in the core of the Milky Way globular cluster 47 Tuc was suggested by a kinematic modelling of its pulsars (Kiziltan et al.\ 2017), but there is no electromagnetic evidence for its existence, nor proof of any other IMBH in Galactic globular clusters (Anderson \& van der Marel 2010; Strader et al.\ 2012). In the centres of nearby galaxies, there are only a handful of candidate IMBHs with an X-ray detection, i.e.\ with a plausible signature of gas accretion onto a compact object. These include: NGC\,4178\footnote{For NGC\,4178, the prediction that $M_{\rm bh} < 10^5\, M_{\odot}$ is simply based on the assumption that the nuclear BH mass is less than 20 per cent of this galaxy's nuclear star cluster mass.} (Satyapal et al.\ 2009; Secrest et al.\ 2012); LEDA\,87300 (Baldassare et al.\ 2015; Graham et al.\ 2016); NGC\,404 (Nguyen et al.\ 2017); NGC~3319 (Jiang et al.\ 2018); and possibly NGC\,4395 (Iwasawa et al.\ 2000; Shih et al.\ 2003; Filippenko \& Ho 2003; Nucita et al.\ 2017; but see den Brok et al.\ 2015). Outside of galactic nuclei, IMBH searches initially focused on a rare class of point-like X-ray sources with X-ray luminosities $\sim$10$^{40}$--$10^{41}$ erg s$^{-1}$ (e.g.\ Colbert \& Mushotzky 1999; Swartz et al.\ 2008; Sutton et al.\ 2012; Mezcua et al.\ 2015; Zolotukhin et al.\ 2016). This was partly based on the assumption that the X-ray luminosity of an accreting compact object cannot be much in excess of its classical Eddington limit (hence, luminosities $\ga$10$^{40}$ erg s$^{-1}$ would require BH masses $\ga$100\, $M_{\odot}$), and partly on the detection of a low-temperature thermal component ($kT \sim 0.2$ keV) that was interpreted as emission from an IMBH accretion disk (Miller et al.\ 2003). However, most of the sources in this class are today interpreted as super-Eddington stellar-mass black holes or neutron stars (Feng \& Soria 2011; Kaaret et al.\ 2017).
To date, the most solid IMBH identification in this class of off-nuclear sources is HLX-1, in the galaxy cluster Abell 2877, seen in projection near the S0 galaxy ESO\,243-49 (Farrell et al.\ 2009; Soria et al.\ 2010; Yan et al.\ 2015; Webb et al.\ 2010, 2017). HLX-1 has a mass of $\sim$10$^4$\,$M_{\odot}$ (Davis et al.\ 2011; Godet et al.\ 2012; Soria et al.\ 2017) and may reside in the remnant nucleus of a gravitationally-captured and tidally-stripped satellite galaxy (Mapelli et al.\ 2013; Farrell et al.\ 2014), which leads us back to galactic nuclei as the most likely cradle of IMBHs. In this work, we focus on IMBH candidates in galactic nuclei. Due to their low mass, it is currently impossible to spatially resolve the gravitational sphere-of-influence of these black holes; therefore, astronomers need to rely on alternative means to gauge their mass. There are now numerous galaxy parameters that can be used to predict the mass of a galaxy's central black hole, and Koliopanos et al.\ (2017) report on the consistency of various black hole scaling relations. The existence, or scarcity, of central IMBHs obviously has implications for theories regarding the growth of supermassive black holes. For example, some have theorised that supermassive black holes started from seed masses $\ga$10$^5\, M_{\odot}$ --- created from the direct collapse of large gas clouds and viscous high-density accretion-discs (e.g.\ Haehnelt \& Rees 1993; Loeb \& Rasio 1994; Koushiappas, Bullock \& Dekel 2004; Regan et al.\ 2017) --- which could potentially bypass the very existence of IMBHs. Therefore, defining the demography of IMBHs has implications for the co-evolution of massive black holes and their host galaxies alike.
For two reasons, spiral galaxies may represent a more promising field to plough than early-type galaxies or dwarf galaxies\footnote{ Given the rarity of dwarf spiral galaxies (Schombert et al.\ 1995; Graham et al.\ 2003), dwarf galaxies are overwhelmingly early-type galaxies.}. This is due to their low mass bulges and disks --- and thus low mass black holes --- and the presence of gas which may result in an active galactic nucleus around the central black hole, potentially betraying the black hole's presence. Until very recently, the largest sample of spiral galaxies, with directly measured BH masses, that had been carefully decomposed into their various structural components, e.g.\ bar, bulge, rings, etc., and therefore with reliable bulge parameters, stood at 17 galaxies (Savorgnan \& Graham 2016). This has now more than doubled, with a sample of 43 such spiral galaxies\footnote{With a central rather than global spiral pattern, we exclude the ES galaxy Cygnus~A from the list of 44 galaxies in Davis et al.\ (2017), who, we note, reported that three of these remaining 43 galaxies appear to be bulgeless.} presented in Davis et al.\ (2018a), along with revised and notably more accurate $M_{\rm bh}$--$M_{\rm *,bulge}$ and $M_{\rm bh}$--$M_{\rm *,galaxy}$ relations for the spiral galaxies (Davis et al.\ 2018b). Here, we apply three independent, updated, black hole scaling relations to a sample of 74 spiral galaxies in the Virgo cluster. X-ray images already exist for 22 members of this sample, and new images will be acquired for the remaining members during the {\it Chandra X-ray Observatory's} Cycle~18 observing program (see Section~\ref{SecData}). This paper's tabulation of predicted black hole masses for these 74 galaxies will serve as a reference, enabling two key objectives to be met. 
First, in the pursuit of evidence for the (largely) missing population of IMBHs, we will eventually be able to say which of the 74 galaxies predicted to have an IMBH additionally contain electromagnetic evidence for the existence of a black hole. We are not, however, merely laying the necessary groundwork for this: we are already able to explore, and do explore here, which of the initial 22 galaxies contain both an active galactic nucleus (AGN) and a predicted IMBH. Second, by combining the existing and upcoming X-ray data with the predicted black hole masses for the full sample, we will be able to compute the black holes' Eddington ratios and investigate how the average Eddington-scaled X-ray luminosity scales with BH mass (Soria et al.\ 2018, in preparation). Gallo et al.\ (2010) have already attempted this measurement for the early-type galaxies in the Virgo cluster, and in Graham \& Soria (2018, hereafter Paper~I) we revisit this measurement using updated black hole scaling relations for early-type galaxies, such that in low-mass systems the black hole mass scales quadratically, rather than linearly, with the early-type galaxies' $B$-band luminosity (Graham \& Scott 2013). The layout of this current paper is as follows. In Section~(\ref{SecData}) we briefly introduce the galaxy set that will be analysed. A more complete description will be provided in Soria et al.\ (2018, in preparation). In Section~(\ref{sec_Param}) we explain the measurements of pitch angle, velocity dispersion, and stellar mass that we have acquired for these 74 galaxies, and we introduce the latest (spiral galaxy) black hole scaling relations involving these quantities, from which we derive the expected black hole masses, which are presented in the Appendix. In Section~(\ref{Sec_Results}) we compare the black hole mass predictions from the three independent methods.
We additionally take the opportunity to combine the black hole scaling relations by eliminating the black hole mass term and providing revised galaxy scaling relations between pitch angle, velocity dispersion, and galaxy stellar mass. In Section~(\ref{Sec_X}) we pay particular attention to galaxies predicted to have black hole masses less than $10^5\, M_{\odot}$, and we investigate the X-ray properties of those nuclei for which archival X-ray data already exist. Finally, Section~(\ref{Sec_Disc}) provides a discussion of various related issues. \section{Galaxy Sample}\label{SecData} Soria et al.\ (2018, in preparation) selected the complete sample of 74 Virgo cluster spiral galaxies with star-formation rates $>$0.3\,$M_{\odot}$ yr$^{-1}$ (see the Appendix for this galaxy list). This resulted in a mix of (early- and late-type) spiral galaxies, in the inner and outer regions of the cluster, spanning more than 5 mag in absolute $B$-band magnitude from roughly $-$18 to $-$23 mag (Vega). Of these 74 galaxies, just three have directly measured black hole masses; they are: NGC~4303, $\log(M_{\rm bh}/M_{\odot}) = 6.58^{+0.07}_{-0.26}$ (Pastorini et al.\ 2007); NGC~4388, $\log(M_{\rm bh}/M_{\odot}) = 6.90^{+0.04}_{-0.05}$ (Tadhunter et al.\ 2003); and NGC~4501, $\log(M_{\rm bh}/M_{\odot}) = 7.13^{+0.08}_{-0.08}$ (Saglia et al.\ 2016). In the X-ray bands, 22 of those galaxies already have archival {\it Chandra X-ray Observatory} data, and the rest are currently being observed with the Advanced CCD Imaging Spectrometer (ACIS-S) detector, as part of a 559-ks {\it Chandra} Large Project titled `Spiral galaxies of the Virgo cluster' (PI: R.~Soria. Proposal ID: 18620568). General results for our X-ray study (including both nuclear and non-nuclear source catalogues, luminosity functions, multiband identifications, and comparisons between the X-ray properties as a function of Hubble type) will be presented in forthcoming work, once the observations have been completed.
Here, we only use the archival {\it Chandra} data to characterise the nuclear X-ray properties of spiral galaxies that we identify as possible IMBH hosts, based on their black hole scaling relations. \section{Predicting black hole masses}\label{sec_Param} In this section, we introduce the three\footnote{There is also a scaling relation between $M_{\rm bh}$ and the bulge S\'ersic index $n$ (Graham \& Driver 2007; Savorgnan 2016; Davis et al.\ 2018a). However, we do not use that relation for this work, partly because of the steepness at low masses, and partly to avoid the need for bulge/disk decompositions of our Virgo sample.} black hole scaling relations that will be used to predict the black hole masses of our Virgo cluster spiral galaxy sample, and we describe where the three associated parameter sets came from. \subsection{Pitch Angles} For galaxies whose disks are suitably inclined, such that their spiral pattern is visible, we project images of these galaxies to a face-on orientation and measure their spiral arm `pitch angle' $\phi$, i.e.\ how tightly or loosely wound their spiral arms are. The mathematical description of the pitch angle, and the method of image analysis, is detailed in Davis et al.\ (2017), which also presents a significantly updated $M_{\rm bh}$--$|\phi|$ relation (equation~(\ref{eq1a}), below) for spiral galaxies, building on Seigar et al.\ (2008) and Berrier et al.\ (2013). As noted in Davis et al.\ (2017), a prominent difficulty in pitch angle measurement is the identification of the fundamental pitch angle, which is analogous to the fundamental frequency in the musical harmonic series of frequencies. Pitched musical instruments produce musical notes with a characteristic timbre that is defined by the summation of a fundamental frequency and naturally occurring harmonics (integer multiples of the fundamental frequency). Careful Fourier analysis of the sound will allow discovery of the fundamental frequency and any perceptible harmonics.
An analogous scenario occurs in the measurement of galactic spiral arm pitch angle via two-dimensional Fourier analysis (Kalnajs 1975; Iye et al.\ 1982; Krakow et al.\ 1982; Puerari \& Dottori 1992; Seigar et al.\ 2005; Davis et al.\ 2012; Yu et al.\ 2018). Therefore, pitch angle measurement methods, when performed in haste, can incorrectly select a `harmonic' pitch angle instead of the `fundamental' pitch angle. Similarly, the Fourier analysis of sound becomes less certain when the source tone is soft, of short duration, or blended with contaminating noise. Spiral galaxies also become more difficult to analyse when the resolution is poor, their disk orientation is close to edge-on, their spiral structure is intrinsically flocculent, or the arc length of their spiral segments is short. Whereas the former problems are stochastic and lead to increased uncertainty in pitch angle measurements (i.e., constant mean with an increased standard deviation), the latter problem of short spiral arc segments (i.e., small subtended polar angle) poses a potential systematic bias and can lead one to incorrectly identify a harmonic rather than the fundamental pitch angle. Typically, this problem manifests itself when spiral arc segments subtend polar angles $<\pi/2$ radians. One clear benefit is that the measurement of galactic spiral arm pitch angle only requires simple imaging that highlights a perceptible spiral pattern, without the need for any photometric calibration. Therefore, we accessed publicly available imaging from telescopes such as the Galaxy Evolution Explorer (GALEX), Hubble Space Telescope (HST), Spitzer Space Telescope (SST), Sloan Digital Sky Survey (SDSS), etc. This wide selection of telescopes also implies a wide range of passbands from far-ultraviolet up to mid-infrared wavelengths. Pour-Imani et al.\ (2016) concluded that pitch angle is statistically tighter in passbands that reveal young stellar populations, such as ultraviolet filters.
The difference between young stellar spiral patterns and old stellar spiral patterns is small, typically less than 4 degrees in pitch angle. Because of this, we preferentially use young stellar passbands when they are available and if the resolution is sufficient to clearly display the spiral pattern. The same preference was applied in the derivation of the $M_{\rm bh}$--$|\phi|$ relation in Davis et al.\ (2017). The bisector linear regression between black hole mass and the absolute value of the spiral arm pitch angle, for the full sample of 44 `spiral' galaxies\footnote{Davis et al.\ (2017) reported that excluding Cygnus~A from the linear regression between black hole mass and spiral arm pitch angle did not have a significant effect.} with directly measured black hole masses, is such that \begin{equation}\label{eq1a} \log (M_{\rm bh}/M_{\odot}) = (7.01\pm0.07) - (0.171\pm0.017) ( |\phi^{\circ}| - 15^{\circ} ), \end{equation} with an intrinsic and total rms scatter in the $\log M_{\rm bh}$ direction of $0.30\pm0.08$ and 0.43 dex, respectively (Davis et al.\ 2017, their equation~8). Importantly, and curiously, the rms scatter in the $\log M_{\rm bh}$ direction about this black hole scaling relation is smaller than the rms scatter observed in the other black hole scaling relations. This is in part due to the shallow slope of the relation in equation~(\ref{eq1a}), and because of the careful pitch angle measurements that were determined using three different approaches (see Davis et al.\ 2017 for details). In passing, we note that the bulgeless galaxy NGC~2748 probably had an incorrect pitch angle assigned to it.
Removing this galaxy, along with the early-type ES galaxy Cygnus~A, plus two potential outliers (NGC 5055 and NGC 4395) seen in Davis et al.\ (2017, their figure~4), gives the revised and more robust relation \begin{equation}\label{eq1b} \log (M_{\rm bh}/M_{\odot}) = (7.03\pm0.07) - (0.164\pm0.018) ( |\phi^{\circ}| -15^{\circ} ), \end{equation} with intrinsic and total rms scatter equal to $0.31\pm0.07$ and 0.41 dex, respectively. Equation~(\ref{eq1b}) has been used here to predict the black hole masses in 43 Virgo cluster spiral galaxies for which we were able to determine their pitch angle. The results are presented in the Appendix table. \subsection{Velocity Dispersions} Homogenised velocity dispersions are available in Hyperleda\footnote{http://leda.univ-lyon1.fr} (Paturel et al.\ 2003) for 39 of the 74 Virgo galaxies. We have assigned a 15\% uncertainty to each of these values. The bisector linear regression between $\log M_{\rm bh}$ and $\log \sigma$ --- taken from Table~4 in Davis et al.\ (2017) for their reduced sample of 40 spiral galaxies (see below) --- is given by \begin{eqnarray}\label{eq2} \log (M_{\rm bh}/M_{\odot}) = (8.06\pm0.13) + \nonumber \\ (5.65\pm0.79)\log \left( \sigma / 200\, {\rm km\, s}^{-1} \right). \end{eqnarray} The intrinsic scatter is $0.51\pm0.04$ dex in the $\log M_{\rm bh}$ direction, and the total rms scatter is 0.63 dex in the $\log M_{\rm bh}$ direction. The slope of this expression agrees well with the $M_{\rm bh}$--$\sigma$ relation from Savorgnan \& Graham (2015, their Table~2), who found that their bisector regression yielded slopes between 4.8 and 5.7 for both `fast rotators' and `slow rotators'. We have used equation~(\ref{eq2}) to predict the black hole masses for those 39 Virgo galaxies with available velocity dispersions, and we provide these values in the Appendix table. 
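For reference, the two calibrations just described can be expressed as simple predictor functions. The following is a minimal Python sketch (the function names are ours, not from any published code); only the best-fit slopes and intercepts of equations~(\ref{eq1b}) and (\ref{eq2}) are encoded, so the quoted intrinsic scatters ($\approx$0.3 and 0.5 dex, respectively) should still be attached to any point estimate.

```python
import math

# Predictors from the two scaling relations above (a sketch; names ours).

def logmbh_from_pitch(phi_deg):
    """log10(M_bh / M_sun) from the absolute pitch angle in degrees, eq. (1b)."""
    return 7.03 - 0.164 * (abs(phi_deg) - 15.0)

def logmbh_from_sigma(sigma_kms):
    """log10(M_bh / M_sun) from the stellar velocity dispersion in km/s, eq. (2)."""
    return 8.06 + 5.65 * math.log10(sigma_kms / 200.0)
```

At the normalisation points ($|\phi| = 15^{\circ}$, $\sigma = 200$ km s$^{-1}$) these return the intercepts 7.03 and 8.06, and looser spiral arms (larger $|\phi|$) map to smaller predicted black hole masses.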
As noted above, in deriving equation~(\ref{eq2}), four galaxies were excluded from the initial sample of 44 galaxies with directly measured black hole masses. NGC~6926 has no reported velocity dispersion, while Cygnus~A is not a (typical) spiral galaxy, but rather an ES galaxy (see the discussion in Graham et al.\ 2016) with a nuclear, rather than large-scale, bar and spiral pattern. Another such example, albeit in a dwarf ES galaxy, is LEDA~2108986 (Graham et al.\ 2017). Finally, NGC~4395 and NGC~5055 are outliers that appear to have unusually low velocity dispersions; they were also excluded by Davis et al.\ (2017) in order to obtain a more robust regression unbiased by outliers. \subsection{Galaxy Stellar Masses} As revealed by the $M_{\rm bh}$--$|\phi|$ relation in Davis et al.\ (2017, see also Seigar et al.\ 2008 and Ringermacher \& Mead 2009), the central black hole masses in spiral galaxies are not unrelated to their disks. Furthermore, disks contain the bulk of the stellar mass in spiral galaxies. We have derived total galaxy stellar masses for our sample of 74 Virgo cluster galaxies via the $K^{\prime}$-band (2.2 $\mu$m) total apparent magnitudes (Vega) available in the GOLD Mine\footnote{http://goldmine.mib.infn.it/} database (Gavazzi et al.\ 2003). We had initially explored using the {\it Two Micron All Sky Survey} (2MASS\footnote{www.ipac.caltech.edu/2mass}, Jarrett et al.\ 2000) $K_s$-band total apparent magnitudes (Vega), but it sometimes under-estimates the galaxy luminosities (e.g., Kirby et al.\ 2008; Schombert 2011), as can be seen in Figure~\ref{Fig0c}. The GOLD Mine apparent magnitudes were converted into absolute magnitudes using the mean, redshift-independent, distance moduli provided by the NASA/IPAC Extragalactic Database (NED)\footnote{http://nedwww.ipac.caltech.edu}. 
These absolute magnitudes were converted into solar units using an absolute magnitude for the Sun of $\mathfrak{M}_{\odot ,K} = 3.28$ mag (Vega), taken from Willmer (2018), and then converted into a stellar mass, or rather a scaled-luminosity, using a constant $K$-band stellar mass-to-light ratio $M/L_K=0.62$. The uncertainty that we have associated with our (GOLD Mine)-based stellar masses --- which are tabulated in the Appendix --- stems from adding in quadrature: (i) an assumed 10 per cent error on the apparent stellar luminosity; (ii) the standard deviation provided by NED for the mean redshift-independent distance modulus; and (iii) a 15 per cent error on the stellar mass-to-light ratio. \begin{figure} \includegraphics[trim=1cm 2cm 2cm 1.5cm, width=\columnwidth]{pgplot-0c} \caption{2MASS $K$-band apparent magnitudes versus the $K$-band magnitudes from GOLD Mine. Both magnitudes have been corrected for Galactic extinction. At the faint end, some of the 2MASS magnitudes under-estimate the galaxy light.} \label{Fig0c} \end{figure} We have been able to verify these stellar masses by using, when available, the published 3.6-$\mu$m {\it Spitzer} galaxy magnitudes. Using the same redshift-independent distance moduli provided by NED\footnote{We note that NGC~4276 only has one redshift-independent distance estimate; we hereafter use the (Virgo + Great-Attractor + Shapley)-infall adjusted distance from NED for this galaxy.}, Laine et al.\ (2014, their table~1) provide absolute galaxy magnitudes (AB, not Vega), at 3.6 $\mu$m, for 31 of our 74 galaxies. On average, 25\% of a spiral galaxy's flux at 3.6 $\mu$m comes from the glow of dust (Querejeta et al.\ 2015, their Figures~8 and 9). We therefore dim Laine et al.'s magnitudes by 25\% before converting them into stellar masses using $M_{\odot,3.6}=6.02$ (AB mag) and a (stellar mass)-to-(stellar light) ratio $M/L_{3.6}=0.60$ (Meidt et al.\ 2014)\footnote{Based on a Chabrier (2003) initial stellar mass function.}.
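The $K^{\prime}$-band magnitude-to-mass conversion described above amounts to a few lines of arithmetic, sketched here in Python (our illustration; the function name is hypothetical):

```python
# Sketch of the K-band stellar-mass (scaled-luminosity) estimate above:
# apparent magnitude minus distance modulus gives the absolute magnitude,
# converted to solar K-band luminosities with M_sun,K = 3.28 mag (Vega,
# Willmer 2018) and scaled by a constant M/L_K = 0.62.

MSUN_K = 3.28  # absolute K-band magnitude of the Sun (Vega)

def stellar_mass_K(m_K, dist_modulus, ml_ratio=0.62):
    abs_mag = m_K - dist_modulus
    lum_solar = 10.0 ** (-0.4 * (abs_mag - MSUN_K))  # L in solar K-band units
    return ml_ratio * lum_solar                      # in solar masses
```

For instance, an apparent magnitude $m_K = 9$ at a distance modulus of $\approx$31.1 mag (roughly the Virgo distance) yields a stellar mass close to $10^{10}\,M_{\odot}$.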
This (stellar mass)-to-(stellar light) ratio, coupled with the above-mentioned 25\% flux reduction due to glowing dust, yields a (stellar mass)-to-(total light) ratio of 0.45, or $\log(M_*/L_{\rm tot}) = -0.35$, which can be seen in Figure~10 of Querejeta et al.\ (2015) to provide a good approximation for more than 1600 large and bright nearby spiral galaxies. A comparison of these 31 Spitzer-based stellar masses with our (GOLD Mine)-based stellar masses can be seen in Figure~(\ref{Fig0a}). These masses are better thought of as scaled-luminosities, and we will return to this issue in the following subsection. \begin{figure} \includegraphics[trim=1cm 2cm 2cm 1.5cm,width=\columnwidth]{pgplot-0a} \caption{Galaxy stellar masses based on 2.2 $\mu$m magnitudes (with no dust correction), and using $M/L_K = 0.62$, versus the stellar masses based on the {\it Spitzer} 3.6 $\mu$m magnitudes (using a constant (stellar mass)-to-(total light) ratio $M_*/L_{\rm tot}=0.45$, from Querejeta et al.\ 2015). For the Virgo sample (orange stars), for which we used the GOLD Mine $K$-band data, 31 galaxies have Spitzer data. For the 43 galaxies (excluding the Milky Way) with directly measured black hole masses, we used the 2MASS $K$-band data (open black stars). The $K$-band data for NGC~2974 is likely contaminated by a foreground star. } \label{Fig0a} \end{figure} Using a symmetrical implementation\footnote{This involves taking the bisector of the `forward' and `inverse' regressions.} of the modified {\sc FITEXY} routine (Press et al.\ 1992) from Tremaine et al.\ (2002), Davis et al.\ (2018b) reported a linear regression between black hole mass and galaxy stellar mass, for their sample of 40 spiral galaxies with directly measured black hole masses (excluding the ES transition galaxy Cygnus~A, and the three bulgeless galaxies NGC~4395, NGC~2748 and NGC~6926).
The relation is \begin{eqnarray}\label{eq3} \log (M_{\rm bh}/M_{\odot}) = (7.26\pm0.14) + \nonumber \\ (2.65\pm0.65)\log \left[ M_{\rm *,galaxy} / \upsilon (6.37\times10^{10}\, M_{\odot}) \right], \end{eqnarray} where $\upsilon$ (lowercase $\Upsilon$) is a corrective stellar mass-to-light ratio term --- which depends on the initial mass function of the stars and the star formation history (see Davis et al.\ 2018a) --- that we can set equal to 1 given the agreement seen in Figure~\ref{Fig0a}. The intrinsic scatter and total rms scatter in the $\log M_{\rm bh}$ direction are equal to $0.64$ and 0.75 dex, respectively. Equation~(\ref{eq3}) was used to predict the black hole masses for our 74 spiral galaxies, and the results are tabulated in the Appendix. If the stellar mass is wrong by 50\%, then the predicted logarithm of the black hole mass will be off by 0.47 dex. Combining this offset with the 1$\sigma$ intrinsic scatter in equation~(\ref{eq3}), one could find that the predicted black hole mass is $\sim$1 dex, i.e.\ an order of magnitude, different from the actual black hole mass. We therefore place less confidence in the black hole masses predicted from only the galaxy stellar mass. However, readers should be aware that our reported intrinsic and total rms scatters are not error-weighted quantities. That is, they can be dominated by outlying data points with large error bars, and therefore they can give a misleading view of how tightly defined the scaling relations are. The slope and intercept of the scaling relations presented here, and their associated uncertainties, do however take into account the error bars on the data used to define them.
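The 0.47 dex figure quoted above is simply the slope of equation~(\ref{eq3}) multiplied by the logarithm of the mass error; a minimal Python sketch (our illustration, with $\upsilon = 1$):

```python
import math

# Equation (3) as a predictor (with upsilon = 1), plus the error-propagation
# arithmetic from the text: a 50 per cent error in the stellar mass shifts
# the predicted log M_bh by slope * log10(1.5) ~ 0.47 dex.

def logmbh_from_mstar(mstar_msun):
    """log10(M_bh / M_sun) from the galaxy stellar mass, eq. (3)."""
    return 7.26 + 2.65 * math.log10(mstar_msun / 6.37e10)

dex_offset = 2.65 * math.log10(1.5)  # ~0.47 dex
```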
Finally, Davis et al.\ (2018b) additionally reported the following steeper $M_{\rm bh}$--$M_{\rm *,galaxy}$ relation, derived using a sophisticated Bayesian analysis, \begin{eqnarray}\label{eq3B} \log (M_{\rm bh}/M_{\odot}) = (7.25^{+0.13}_{-0.14}) + \nonumber \\ (3.05^{+0.57}_{-0.49})\log \left[ M_{\rm *,galaxy} / \upsilon (6.37\times10^{10}\, M_{\odot}) \right]. \end{eqnarray} This relation predicts black hole masses which agree well with those predicted from our $M_{\rm bh}$--$\sigma$ relation (equation~\ref{eq2}), but it tends to yield lower black hole masses than those predicted from our $M_{\rm bh}$--$|\phi|$ relation (equation~\ref{eq1a}). This is also true for equation~\ref{eq3}, and will be quantified in Section~\ref{Sec_Results}. Erring on the side of caution, so as not to under-estimate the black hole masses and claim a greater population of IMBHs than actually exists, we proceed by using equation~(\ref{eq3}) as our primary black hole mass estimate based on the galaxy stellar mass. Black hole masses based on equation~\ref{eq3B} are, however, additionally included. \subsubsection{What about colour-dependent $M/L$ ratios?} We have assumed that the previous $M_{\rm bh}$--$M_{\rm *,tot}$ relations are log-linear, and we extrapolate this to masses below those which were used to define them. However, given that some of our Virgo cluster spiral galaxies are less massive and bluer than those in Davis et al.\ (2018b), it may be helpful if we provide some insight into what happens if the scaling which gives the scaled-luminosity, i.e.\ the so-called stellar mass, is not constant.
The 40 spiral galaxies used to define the above $M_{\rm bh}$--$M_{\rm *,tot}$ relations have stellar masses greater than $2\times 10^{10}\,M_{\odot}$ (and absolute $K$-band magnitudes brighter than $\approx -23$ mag), and, therefore, the assumption of a constant 3.6~$\mu$m $M_*/L_*$ ratio of 0.60 (and $M_*/L_{\rm tot}$ ratio of 0.45) --- which was used to derive the stellar masses in Davis et al.\ (2018b) --- is likely to be a good approximation. This is because these galaxies' stellar populations have roughly the same red colour. As such, the $M_{\rm bh}$--$M_{\rm *,tot}$ relations from Davis et al.\ (2018b) can be thought of as a (black hole)-(scaled luminosity) relation. Had the Davis et al.\ (2018b) sample contained some less massive {\it blue} galaxies, then, for the following reason, one may expect the $M_{\rm bh}$--luminosity relation not to be log-linear, but to steepen at the faint end. Bell \& de Jong (2001) provide the following equation for the $K$-band (stellar mass)-to-(stellar light) ratio as a function of the $B-K$ optical-(near-infrared) colour: \begin{equation}\label{Eq_Bell} \log M/L_{K} = 0.2119(B-K)-0.9586. \end{equation} We have obtained the 2MASS $K$-band data, and the RC3 $B$-band data\footnote{The (Vega) $B$-band magnitudes are the $B_T$ values from the {\it Third Reference Catalogue of Bright Galaxies} (de Vaucouleurs et al.\ 1991) as tabulated in NED, and were subsequently corrected for Galactic extinction using the values from Schlafly \& Finkbeiner (2011), also tabulated in NED.}, for our Virgo spiral galaxies. Their $B-K$ colours, and the associated $M/L_{K}$ ratios, are displayed in Figure~\ref{FigB-K}. One can see that at $\mathfrak{M}_K \gtrsim -23$ mag, the $M/L_{K}$ ratios become smaller. To maintain the log-linear $M_{\rm bh}$--$M_{\rm *,tot}$ relation (equation~\ref{eq3}), the $M_{\rm bh}$--luminosity relation obviously needs to steepen for $\mathfrak{M}_K \gtrsim -23$ mag.
If we were to employ the falling $M/L_{K}$ ratios seen in Figure~\ref{FigB-K} as one progresses to fainter galaxies, then we would also need to employ this steeper $M_{\rm bh}$--$M_{\rm *,tot}$ relation at these magnitudes. The net effect would be to cancel out and return one to the single log-linear $M_{\rm bh}$--$M_{\rm *,tot}$ relation (equation~\ref{eq3}) that we are using together with a constant $M/L_K=0.62$ for the GOLD Mine $K$-band data. \begin{figure} \includegraphics[trim=2.0cm 2cm 8.6cm 1.5cm, width=\columnwidth]{B-K} \caption{(RC3 $B$-band) - (Gold Mine $K^{\prime}$-band) colour, and the ($B-K^{\prime}$)-dependent $K^{\prime}$-band stellar mass-to-light ratio (equation~\ref{Eq_Bell}), versus the $K^{\prime}$-band absolute magnitude. The grey points in the upper panel are based on the observed magnitudes, while the black points have been corrected for dust/inclination dimming using the prescription in Driver et al.\ (2008). } \label{FigB-K} \end{figure} There is one additional element worthy of some exploration, and it pertains to the $\upsilon$ term seen in equations~\ref{eq3} and \ref{eq3B}. We have made use of the SDSS Data Release 12 (Alam et al.\ 2015) to obtain three additional stellar mass estimates. Taylor et al.\ (2011) advocated that a ($g^{\prime}-i^{\prime}$)-dependent $i^{\prime}$-band stellar mass-to-light ratio, $M_*/L_{i^{\prime}}$, yields reliable stellar masses. Their relation is such that $\log(M_*/L_{i^{\prime}})=0.70(g^{\prime}-i^{\prime})-0.68$, and applies to the observed, i.e.\ not the dust-corrected, magnitudes. We have also used the relation $\log(M_*/L_{i^{\prime}})=0.518(g^{\prime}-i^{\prime})-0.152$ from Bell et al.\ (2003). Reddening due to dust will roughly move galaxies along this relation (see Figure~6 in Bell et al.\ 2003, and Figure~13 in Driver et al.\ 2007), and thus the relation can be applied to either the dust-corrected or observed magnitudes; for consistency with Taylor et al.\ (2011), we have chosen the latter. 
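The colour-dependent mass-to-light relations quoted so far can be collected as simple helper functions (a Python sketch; the function names are ours, and the coefficients are those quoted in the text):

```python
# Colour-dependent (stellar mass)-to-light ratios quoted above (names ours).

def ml_K(B_minus_K):
    """Bell & de Jong (2001) K-band M/L, equation (Eq_Bell) in the text."""
    return 10.0 ** (0.2119 * B_minus_K - 0.9586)

def ml_i_taylor(g_minus_i):
    """Taylor et al. (2011) i'-band M/L (observed magnitudes)."""
    return 10.0 ** (0.70 * g_minus_i - 0.68)

def ml_i_bell03(g_minus_i):
    """Bell et al. (2003) i'-band M/L."""
    return 10.0 ** (0.518 * g_minus_i - 0.152)

# Under the Bell & de Jong relation, the constant M/L_K = 0.62 adopted
# earlier corresponds to B - K ~ 3.5, i.e. a red stellar population.
```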
Finally, based on the stellar population synthesis model of Conroy et al.\ (2009), Roediger \& Courteau (2015) give the relation $\log(M_*/L_{i^{\prime}})=0.979(g^{\prime}-i^{\prime})-0.831$. These three relations for the mass-to-light ratios have given us three more sets of stellar mass estimates for (most of) our 74 spiral galaxies, which are shown in Figure~\ref{Fig0b} against the (GOLD Mine $K^{\prime}$)-based mass estimates. While small random differences are apparent, due to uncertainties in the magnitudes and simplifications in the stellar population models, the main offsets that are visible can be captured by the $\upsilon$ term. \begin{figure} \includegraphics[trim=1cm 2cm 2cm 1.5cm,width=\columnwidth]{pgplot-0b} \caption{Stellar masses based on the GOLD Mine 2.2-$\mu$m $K^{\prime}$-band magnitudes (not dust corrected, and using $M/L_K=0.62$) versus the stellar masses based on the observed (not dust corrected) SDSS $i^{\prime}$-band 0.62-$\mu$m magnitudes (using a [$g^{\prime}-i^{\prime}$]-dependent $M_*/L_{i^{\prime}}$ ratio) from Bell et al.\ (2003, red crosses), Taylor et al.\ (2011, open blue hexagrams), and Roediger \& Courteau (2015, black filled stars). The data reveal the need for the $\upsilon$ term in equations~\ref{eq3} and \ref{eq3B}.} \label{Fig0b} \end{figure} \section{Results} \label{Sec_Results} \begin{figure} \includegraphics[trim=1cm 2cm 2cm 1.5cm, width=\columnwidth]{pgplot-3} \caption{Absolute value of the spiral arm pitch angle, $|\phi|$, versus the stellar velocity dispersion $\sigma$. The open black stars (and circles) represent spiral galaxies with bulges (and without bulges) that have directly measured black hole masses (see Davis et al.\ 2017). The line represents equation~(\ref{eq4}), and is the expected trend based on the $M_{\rm bh}$--$|\phi|$ relation given in equation~(\ref{eq1a}) and the $M_{\rm bh}$--$\sigma$ relation given in equation~(\ref{eq2}) for galaxies with directly measured black hole masses.
The filled orange stars are the Virgo cluster spiral galaxies studied in this work. They appear to follow the line well, as does LEDA~87300 (open square).} \label{Fig3} \end{figure} \begin{figure} \includegraphics[trim=1cm 2cm 2cm 1.5cm, width=\columnwidth]{pgplot-4} \caption{The predicted black hole masses in our Virgo galaxy sample, derived from, when available, the absolute value of the pitch angle $|\phi|$ (using equation~(\ref{eq1a})) and the velocity dispersion $\sigma$ (using equation~(\ref{eq2})).} \label{Fig4} \end{figure} \subsection{Pitch Angle vs Velocity Dispersion} Combining equations~(\ref{eq1a}) and (\ref{eq2}) to eliminate $M_{\rm bh}$, one obtains the relation \begin{equation} \log(\sigma/200\, {\rm km\, s}^{-1} ) = 0.268 - 0.030|\phi|. \label{eq4} \end{equation} This is shown by the line in Figure~(\ref{Fig3}), which plots $|\phi|$ versus $\log \sigma$ for spiral galaxies with directly measured black hole masses, plus our sample of Virgo cluster spiral galaxies, NGC~4395 and LEDA~87300. The Virgo galaxies appear consistent with the trend (equation~(\ref{eq4})) defined by the galaxy sample with directly measured black hole masses. Using equations~(\ref{eq1a}) and (\ref{eq2}) to predict the black hole masses in these Virgo galaxies, we plot the results in Figure~(\ref{Fig4}). Of particular interest are NGC~4178 (Secrest et al.\ 2012) and NGC~4713, the two galaxies in the lower left of the right hand panel, plus NGC~4294 in the lower section of the left hand panel. They are predicted here to have black hole masses of $10^3$ to $10^4 \,M_{\odot}$ (see the Appendix for every galaxy's predicted BH mass). \begin{figure} \includegraphics[trim=1cm 2cm 2cm 1.5cm, width=\columnwidth]{pgplot-2} \caption{Galaxy stellar mass versus stellar velocity dispersion.
Note: neither LEDA~87300 (open square) nor NGC~4395 (lower left open circle) was used in the linear regression between $M_{\rm bh}$ and $\sigma$, nor in that between $M_{\rm bh}$ and $M_{\rm *,galaxy}$, for the galaxy set with directly measured black hole masses (open stars and circles). Those regressions (equations~(\ref{eq2}) and (\ref{eq3}), respectively) have been combined to produce equation~(\ref{eq5}), which is shown here by the solid line with a slope of 2.13. The bulk of the Virgo cluster spiral galaxies (filled orange stars) appear to follow this line well. Equation~\ref{eq5B}, constructed from equation~\ref{eq2} and equation~\ref{eq3B}, is shown by the dashed grey line and has a slope equal to 1.85. } \label{Fig2} \end{figure} \begin{figure} \includegraphics[trim=1cm 2cm 2cm 1.5cm, width=\columnwidth]{pgplot-1} \caption{The orange stars show the predicted black hole masses in our Virgo galaxy sample, derived from, when available, the galaxy's stellar mass $M_{\rm *,galaxy}$ (using equation~(\ref{eq3})) and the stellar velocity dispersion $\sigma$ (using equation~(\ref{eq2})). The grey circles had their ($M_{\rm *,total}$)-based black hole masses derived using equation~\ref{eq3B}. } \label{Fig1} \end{figure} \subsection{Stellar Mass vs Velocity Dispersion} Combining equations~(\ref{eq2}) and (\ref{eq3}) to eliminate $M_{\rm bh}$, one obtains the relation \begin{equation} \log ( M_{\rm *,galaxy} / 6.37\times10^{10}\, M_{\odot} ) = 0.302 + 2.132\log(\sigma/200\, {\rm km\, s}^{-1} ). \label{eq5} \end{equation} This is shown by the solid line in Figure~(\ref{Fig2}), which is a plot of $M_{\rm *,galaxy}$ versus $\log \sigma$ for spiral galaxies with directly measured black hole masses, and for our sample of Virgo cluster spiral galaxies. In Figure~(\ref{Fig1}), we display the result of using equations~(\ref{eq2}) and (\ref{eq3}) to predict the black hole masses in our Virgo galaxy sample.
As before, two galaxies stand out: NGC~4178 and NGC~4713, the two galaxies in the lower left of the right hand panel of Figure~(\ref{Fig1}). In addition, we note NGC~4396 and NGC~4299 in the lower section of the left hand panel of Figure~(\ref{Fig1}). Coupling equation~\ref{eq3B}, rather than equation~\ref{eq3}, with equation~\ref{eq2} results in the relation \begin{equation} \log ( M_{\rm *,galaxy} / 6.37\times10^{10}\, M_{\odot} ) = 0.266 + 1.852\log(\sigma/200\, {\rm km\, s}^{-1} ). \label{eq5B} \end{equation} This equation is represented by the dashed grey line in Figure~\ref{Fig2}. Equations~\ref{eq5} and \ref{eq5B} give the scaling relation $M_{\rm *,galaxy} \propto \sigma^{2\pm0.15}$ for spiral galaxies, which matches well with the relation for dwarf and ordinary early-type galaxies fainter than $\mathfrak{M}_B \approx -20.5$ mag (e.g.\ Davies et al.\ 1983; Matkovi\'c \& Guzm\'an 2005). \begin{figure} \includegraphics[trim=1cm 2cm 2cm 1.5cm, width=\columnwidth]{pgplot-5} \caption{Galaxy stellar mass versus the absolute value of the spiral arm pitch angle. Symbols have the same meaning as in Figure~(\ref{Fig3}). The solid line represents equation~(\ref{eq6}), and is the expected trend based on the $M_{\rm bh}$--$|\phi|$ relation given in equation~(\ref{eq1a}) and the $M_{\rm bh}$--$M_{\rm *,galaxy}$ relation given in equation~(\ref{eq3}), defined by galaxies with directly measured black hole masses (which excludes LEDA~87300, denoted by the open square).
The dashed grey line is given by equation~\ref{eq6B} and was obtained by combining equation~\ref{eq1a} and equation~\ref{eq3B}.} \label{Fig5} \end{figure} \begin{figure} \includegraphics[trim=1cm 2cm 2cm 1.5cm, width=\columnwidth]{pgplot-6} \caption{The orange stars show the predicted black hole masses in our Virgo galaxy sample, derived from both the galaxy's stellar mass $M_{\rm *,galaxy}$ (using equation~(\ref{eq3})) and, when available, the spiral arm pitch angle $|\phi|$ (using equation~(\ref{eq1a})). The grey circles show the ($M_{\rm *,total}$)-based black hole masses derived using equation~\ref{eq3B}. } \label{Fig6} \end{figure} \subsection{Pitch Angle vs Stellar Mass} Combining equations~(\ref{eq1a}) and (\ref{eq3}) to eliminate $M_{\rm bh}$, one obtains the relation \begin{equation} \log ( M_{\rm *,galaxy} / 6.37\times10^{10}\, M_{\odot} ) = 0.874 - 0.0645 |\phi|, \label{eq6} \end{equation} which is shown by the solid line in Figure~(\ref{Fig5}). This is, once again, the expected relation for spiral galaxies with directly measured black hole masses. The trend seen here bears a resemblance to the distribution of spiral galaxies in the diagram of pitch angle versus $B$-band absolute magnitude shown by Kennicutt (1981, his figure~9). Figure~(\ref{Fig6}) displays the result of using equations~(\ref{eq1a}) and (\ref{eq3}) to predict the black hole masses in the Virgo spiral galaxies. This time there are many galaxies of interest in regard to potentially harbouring an IMBH. These findings have briefly been summarised in Table~\ref{Tab_IMBH}. Results for all galaxies are shown in the Appendix. Finally, use of equation~\ref{eq3B}, rather than equation~\ref{eq3}, results in the relation \begin{equation} \log ( M_{\rm *,galaxy} / 6.37\times10^{10}\, M_{\odot} ) = 0.762 - 0.0561 |\phi|. \label{eq6B} \end{equation} It is represented by the dashed grey line in Figure~\ref{Fig5}. 
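All of the projected relations in this section, equations~(\ref{eq4}), (\ref{eq5}), (\ref{eq5B}), (\ref{eq6}) and (\ref{eq6B}), follow from the same elimination of $\log M_{\rm bh}$ between two log-linear relations, $\log M_{\rm bh} = a_1 + b_1 x$ and $\log M_{\rm bh} = a_2 + b_2 y$, giving $y = (a_1 - a_2)/b_2 + (b_1/b_2)\,x$. A short Python sketch (ours, with illustrative names) reproduces the quoted coefficients:

```python
def eliminate_mbh(a1, b1, a2, b2):
    """Intercept and slope of y vs x after eliminating log M_bh between
    log M_bh = a1 + b1*x and log M_bh = a2 + b2*y."""
    return (a1 - a2) / b2, b1 / b2

A_PHI = 7.01 + 0.171 * 15.0  # eq. (1a) rewritten as 9.575 - 0.171|phi|

# eq. (4): x = |phi|, y = log(sigma/200)
eq4 = eliminate_mbh(A_PHI, -0.171, 8.06, 5.65)    # ~(0.268, -0.030)
# eqs. (5), (5B): x = log(sigma/200), y = log(M*/6.37e10)
eq5 = eliminate_mbh(8.06, 5.65, 7.26, 2.65)       # ~(0.302, 2.132)
eq5B = eliminate_mbh(8.06, 5.65, 7.25, 3.05)      # ~(0.266, 1.852)
# eqs. (6), (6B): x = |phi|, y = log(M*/6.37e10)
eq6 = eliminate_mbh(A_PHI, -0.171, 7.26, 2.65)    # ~(0.874, -0.0645)
eq6B = eliminate_mbh(A_PHI, -0.171, 7.25, 3.05)   # ~(0.762, -0.0561)
```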
\section{IMBH targets of interest} \label{Sec_X} \begin{table} \centering \caption{33 spiral galaxies with a potential IMBH}\label{Tab_IMBH} \begin{tabular}{lccc} \hline Galaxy & $M_{\rm bh}$ ($M_{\rm *,total}$) & $M_{\rm bh}$ ($\phi$) & $M_{\rm bh}$ ($\sigma$) \\ & $M_{\odot}$ & $M_{\odot}$ & $M_{\odot}$ \\ \hline \multicolumn{4}{c}{3 estimates $< 10^5\,M_{\odot}$} \\ N4178 & 3$\times10^4$ & 2$\times10^4$ & 1$\times10^3$ \\ N4713 & 9$\times10^3$ & 3$\times10^3$ & 6$\times10^2$ \\ \multicolumn{4}{c}{2 estimates $< 10^5\,M_{\odot}$, no estimate $> 10^5\,M_{\odot}$} \\ IC3392 & 2$\times10^4$ & 6$\times10^4$ & ... \\ N4294 & 2$\times10^4$ & 3$\times10^3$ & ... \\ N4413 & 1$\times10^4$ & 3$\times10^4$ & ... \\ \multicolumn{4}{c}{2 estimates $< 10^5\,M_{\odot}$, 1 estimate $\ge 10^6\,M_{\odot}$} \\ N4424 & 4$\times10^4$ & 5$\times10^6$ & 1$\times10^5$ \\ N4470 & 1$\times10^4$ & 4$\times10^4$ & 1$\times10^6$ \\ \multicolumn{4}{c}{1 estimate $\lesssim 10^5\,M_{\odot}$, no estimate $> 10^6\,M_{\odot}$} \\ N4197 & 7$\times10^4$ & 2$\times10^5$ & ... \\ N4237 & 5$\times10^5$ & 5$\times10^4$ & 2$\times10^5$ \\ N4298 & 5$\times10^5$ & 4$\times10^5$ & 2$\times10^4$ \\ N4299 & 7$\times10^3$ & 2$\times10^5$ & ... \\ N4312 & 1$\times10^4$ & ... & 3$\times10^5$ \\ N4313 & 2$\times10^5$ & ... & 2$\times10^5$ \\ N4390 & 8$\times10^3$ & 7$\times10^5$ & ... \\ N4411b & 3$\times10^4$ & 4$\times10^5$ & ... \\ N4416 & 9$\times10^5$ & 1$\times10^4$ & ... \\ N4498 & 2$\times10^4$ & 6$\times10^5$ & ... \\ N4519 & 6$\times10^4$ & 3$\times10^5$ & ... \\ N4647 & 4$\times10^5$ & 8$\times10^5$ & 4$\times10^4$ \\ N4689 & 3$\times10^5$ & 3$\times10^5$ & 2$\times10^4$ \\ \multicolumn{4}{c}{1 estimate $\lesssim 10^5\,M_{\odot}$} \\ IC3322 & 1$\times10^4$ & ... & ... \\ N4206 & 5$\times10^4$ & ... & ... \\ N4222 & 7$\times10^4$ & ... & ... \\ N4330 & 6$\times10^4$ & ... & ... \\ N4356 & $\times10^5$ & ... & ... \\ N4396 & 2$\times10^3$ & ... & ... \\ N4405 & 6$\times10^4$ & ... & ... 
\\ N4445 & 2$\times10^4$ & ... & ... \\ N4451 & 1$\times10^5$ & ... & ... \\ N4522 & 2$\times10^4$ & ... & ... \\ N4532 & 1$\times10^4$ & ... & ... \\ N4606 & 3$\times10^4$ & ... & ... \\ N4607 & 3$\times10^4$ & ... & ... \\ \hline \end{tabular} Uncertainties can reach an order of magnitude, as shown in the Appendix Table~\ref{Tab_App1} and Figures~\ref{Fig4}, \ref{Fig1} and \ref{Fig6}. \end{table} From the previous section, we can identify five primary targets of interest: NGC~4178 and NGC~4713 (with three black hole mass estimates less than $10^5\,M_{\odot}$), and IC~3392, NGC~4294 and NGC~4413 (with two black hole mass estimates less than $10^5\,M_{\odot}$ but no velocity dispersion to provide a third black hole mass estimate). Table~\ref{Tab_IMBH} lists these 5 galaxies along with an additional 28 galaxies which may have a central black hole mass of less than $10^5$--$10^6\,M_{\odot}$. The next step is to determine which of the candidate IMBH hosts harbours a point-like nuclear X-ray source, which is likely evidence of an accreting nuclear black hole. We use {\it Chandra} data as the primary resource for our search, because {\it Chandra} is the only X-ray telescope that can provide accurate sub-arcsecond localisations of faint point-like sources (down to $\lesssim$10 counts), thanks to the low instrumental background of its Advanced CCD Imaging Spectrometer (ACIS). In the absence of {\it Chandra} data, we inspected archival {\it XMM-Newton} European Photon Imaging Camera (EPIC) data, particularly in cases when a long EPIC exposure partly made up for the much lower spatial resolution and much higher instrumental and background noise. For the five primary targets identified above, two (NGC~4178 and NGC~4713) have archival {\it Chandra}/ACIS X-ray data already available, and one (NGC~4294) has {\it XMM-Newton}/EPIC data. 
The other two (IC~3392 and NGC~4413) have recently been observed as part of our ongoing {\it Chandra} survey of the Virgo cluster; the results of the new observations will be presented in a separate paper. We re-processed and analysed the archival {\it Chandra} X-ray data using the Chandra Interactive Analysis of Observations ({\small{CIAO}}) Version 4.9 software package (Fruscione et al.\ 2006). For sources with a sufficient number of counts, we extracted spectra and built response and auxiliary response files with the {\small {CIAO}} task {\it {specextract}}, and fitted the spectra with {\small XSPEC} version 12.9.1 (Arnaud 1996). For sources with fewer counts, we converted between count rates and fluxes using the Portable, Interactive Multi-Mission Simulator ({\small{PIMMS}}) software Version 4.8e, available online\footnote{http://cxc.harvard.edu/toolkit/pimms.jsp} within the {\it Chandra X-ray Observatory} Proposal Planning Toolkit. X-ray contour plots, aperture photometry, and other imaging analyses were done with the {\small {DS9}} visualization tool, part of NASA's High Energy Astrophysics Science Archive Research Center (HEASARC) software. For the archival {\it XMM-Newton} data, we used standard pipeline products (event files, images, and source lists), downloaded from the HEASARC archive; we also used {\sc ds9} for aperture photometry and {\sc pimms} for flux conversions. \begin{figure} \includegraphics[angle=90,trim=1cm 1.7cm 0cm 1.7cm,width=1.0\columnwidth]{n4178_sdss_2}\\ \includegraphics[angle=0,trim=0cm 7cm 0cm 5cm,width=1.0\columnwidth]{n4178_ngvs_2} \caption{Top panel: SDSS image of NGC\,4178 (red = $i$ filter; green = $g$; blue = $u$), with {\it Chandra}/ACIS-S contours (0.3--7.0 keV band) overlaid in green. North is up, east is to the left.
Bottom panel: zoomed-in view of the nuclear region, from the Next Generation Virgo-cluster Survey, with the position of the {\it Chandra} nuclear source overlaid as a red circle (radius 1$\arcsec$).} \label{Fig7} \end{figure} \begin{figure} \hspace{-0.2cm} \includegraphics[angle=270,trim=2.7cm 2cm 1cm 4cm, width=1.0\columnwidth]{n4178_nucl_diskbb} \caption{{\it Chandra}/ACIS-S spectrum of the nuclear source in NGC\,4178, fitted with a disk-blackbody model. The datapoints have been grouped to a signal-to-noise $>$1.5 for plotting purposes only. The fit was done on the individual counts, using Cash statistics. See Section 4.1 for the fit parameters. The sharp drop of detected counts above 2 keV disfavours a power-law model.} \label{Fig8} \end{figure} \subsection{NGC~4178} From the previous sections, we have three predictions for the black hole mass in NGC~4178, and they all point towards a black hole in the mass range $10^3$--$10^4\,M_{\odot}$. This galaxy was observed by {\it Chandra}/ACIS-S for 36 ks on 2011 February 19 (Cycle 12). We downloaded the data from the public {\it Chandra} archives, and reprocessed them with the {\small {CIAO}} task {\it chandra\_repro}. We confirm the detection of an X-ray source consistent with both the dynamical centre of the galaxy (Figure~\ref{Fig7}) and a nuclear star cluster (Satyapal et al.\ 2009; Secrest et al.\ 2012, 2013). We extracted the source counts from a circle of radius 2$\arcsec$, and the background counts from an annulus between radii of 3$\arcsec$ and 9$\arcsec$. As discussed by Secrest et al.\ (2012), this source is unusually soft for an AGN. We measured a net count rate of $(2.8 \pm 0.9) \times 10^{-4}$ ct s$^{-1}$ in the 0.3--1.0 keV band, $(6.2 \pm 1.3) \times 10^{-4}$ ct s$^{-1}$ in the 1.0--2.0 keV band, and $(1.0 \pm 0.6) \times 10^{-4}$ ct s$^{-1}$ in the 2.0--7.0 keV band. 
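The background-subtracted aperture photometry described above (a source circle and a background annulus, with Poisson errors combined in quadrature) can be sketched in a few lines of Python. The count values in the test below are illustrative only, not the actual NGC~4178 measurements:

```python
import math

# Background-subtracted aperture photometry (a minimal sketch, ours):
# net = source counts minus area-scaled background counts, with simple
# Poisson (sqrt-N) errors propagated in quadrature.

def net_counts(src_counts, bkg_counts, src_area, bkg_area):
    scale = src_area / bkg_area
    net = src_counts - scale * bkg_counts
    err = math.sqrt(src_counts + (scale ** 2) * bkg_counts)
    return net, err

# Apertures as in the text: a 2" source circle and a 3"--9" background
# annulus (areas in arcsec^2; the area ratio is 4/72 = 1/18).
src_area = math.pi * 2.0 ** 2
bkg_area = math.pi * (9.0 ** 2 - 3.0 ** 2)
```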
With only $\approx 36 \pm 6$ net counts, it is clearly impossible to do any proper spectral fitting, and certainly any fitting based on the $\chi^2$ statistics (which requires $\ga$15 counts per bin). Nonetheless, we can fit the data with the Cash statistics (Cash 1979), generally used for sources with a small number of counts, and constrain some simple models. Power-law fitting based on the hardness ratio was carried out and discussed in detail by Secrest et al.\ (2012). We re-fitted the spectrum in {\small {XSPEC}}, with the Cash statistics, after rebinning to 1 count per bin; we confirm that the power-law is steep, i.e.\ it is a soft spectrum, with photon index $\Gamma = 3.4^{+1.7}_{-1.2}$, with an intrinsic absorbing column density $N_{\rm H} = 5^{+5}_{-4} \times 10^{21}$ cm$^{-2}$ (C-statistics of 34.2 for 31 degrees of freedom). Such a steep slope, moderately high absorption, and large uncertainty on both parameters make it difficult to constrain the 0.3--10 keV unabsorbed luminosity: formally we obtain a 90\% confidence limit of $L_{0.3-10} = 9^{+105}_{-6} \times 10^{38}$ erg s$^{-1}$, consistent with the estimates of Secrest et al.\ (2012). However, when we look at the individual detected energy of the few counts, rather than simply considering the hardness ratio, we find that the power-law model is inadequate. The decline in the number of detected counts above 2 keV is very sharp (Figure~\ref{Fig8}), consistent with the Wien tail of an optically thick thermal spectrum. We therefore fit the same spectrum with an absorbed {\it diskbb} model: we obtain a C-statistic of $31.5/31$ (an improvement at the 90\% confidence level, with respect to the power-law fit). 
The best-fitting parameters are $N_{\rm H} = 1.5^{+3.3}_{-1.5} \times 10^{21}$ cm$^{-2}$ for the intrinsic absorption, $kT_{\rm in} = 0.56^{+0.35}_{-0.19}$ keV for the peak disk temperature, $r_{\rm in} = 94^{+212}_{-62} \, (\cos \theta)^{-1/2}$ km for the apparent inner-disk radius, where $\theta$ is the viewing angle. The unabsorbed luminosity is $L_{0.3-10} = 1.9^{+1.9}_{-0.7} \times 10^{38}$ erg s$^{-1}$. Luminosity, temperature, and inner-disk radius are self-consistent for a stellar-mass black hole in the high/soft state. The temperature is too high, and the radius too small, for a supermassive black hole or even an IMBH. Invoking Occam's razor, we argue that the most likely interpretation of the X-ray source at the nuclear location of NGC\,4178 is a stellar-mass X-ray binary. What to make, then, of the strong mid-IR emission in [Ne {\footnotesize{V}}] (Satyapal et al.\ 2009; Secrest et al.\ 2012), which is usually a signature of strong X-ray photoionisation and was the strongest argument in favour of a hidden AGN in this galaxy? It is always possible to postulate a Compton-thick AGN, powerful enough to supply the required luminosity; we simply argue that this hypothesis is untestable with the available {\it Chandra} data. Alternatively, the nuclear black hole may have been more active in the recent past (producing the highly-ionised gas around it), but is currently in a low state. The optical line ratios do not require an AGN, either: NGC\,4178 is classified as an H{\footnotesize II} nucleus (Ho et al.\ 1997; Decarli et al.\ 2007; Secrest et al.\ 2012). The uncertainty on the current luminosity, and indeed on the detection of X-ray emission from the nuclear black hole, makes it impossible to constrain its mass via fundamental-plane relations (Merloni et al.\ 2003; Plotkin et al.\ 2012; Miller-Jones et al.\ 2012). 
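The argument that the fitted disc radius points to a stellar-mass black hole can be checked with a back-of-envelope calculation. The sketch below (ours, not part of the original analysis) equates the apparent inner-disc radius to the innermost stable circular orbit of a non-spinning black hole, $R_{\rm isco}=6GM/c^2$, deliberately ignoring spin, inclination, and colour corrections:

```python
# Rough mass scale implied by equating the fitted apparent inner-disc
# radius to the Schwarzschild ISCO, R_isco = 6 G M / c^2.
# Spin, inclination, and colour corrections are deliberately ignored.
G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
C = 2.998e10      # speed of light, cm s^-1
M_SUN = 1.989e33  # solar mass, g

def isco_mass_msun(r_in_km):
    """Black hole mass (in M_sun) whose Schwarzschild ISCO equals r_in."""
    return r_in_km * 1e5 * C**2 / (6 * G * M_SUN)

m = isco_mass_msun(94.0)  # best-fitting radius from the diskbb model
print(f"implied mass ~ {m:.0f} M_sun")
```

The $\sim$94 km radius implies a mass of order 10 $M_{\odot}$, whereas a $10^4\,M_{\odot}$ IMBH would require an inner radius of order $10^5$ km, which is why the temperature and radius together rule out an IMBH disc here.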
For these relations, Secrest et al.\ (2013) assumed an intrinsic 0.5--10 keV X-ray luminosity $\approx$10$^{40}$ erg s$^{-1}$, a bolometric correction factor $\kappa \sim 10^3$, and an upper limit of 84.9 $\mu$Jy for the 5-GHz flux density; from those values, they predicted a black hole mass $<$8.4 $\times 10^4 \, M_{\odot}$. Instead, we argue that the X-ray luminosity is pure guesswork, with no empirical constraint. Moreover, if the nuclear black hole was indeed an IMBH, the bolometric correction should be much lower than $10^3$; more likely, $\kappa \la 10$, assuming a peak disk temperature $kT \ga 0.1$ keV for a black hole mass $\la$10$^5$\,$M_{\odot}$. In summary, NGC\,4178 may host an IMBH but neither the X-ray spectrum of the nuclear source, nor the fundamental plane relations can be used to support this hypothesis. Alternatively, Secrest et al.\ (2012) note that the black hole mass may be $\sim$0.1 to 1 times the mass of the nuclear star cluster ($M_{\rm nc} \sim 5\times 10^5\,M_{\odot}$: Satyapal et al.\ 2009) in this galaxy. This implies $M_{\rm bh} \sim (0.5-5)\times 10^5\,M_{\odot}$. We are able to offer an alternative prediction of the black hole mass by using the optimal $M_{\rm bh}$--$M_{\rm nc}$ relation extracted from Graham (2016b), which is such that \begin{eqnarray} &&\log(M_{\rm nc}/M_{\odot}) = \\ \nonumber &&(0.40\pm0.13)\times\log(M_{\rm bh}/[10^{7.89}\,M_{\odot}]) + (7.64\pm0.25). \end{eqnarray} From this, we derive $\log (M_{\rm bh}/M_{\odot}) = 3.04$ dex. We conclude that this relation offers the best additional support, beyond the initial set of three scaling relations used in the previous section, for an IMBH in NGC~4178. If it has a typical Eddington ratio of say $10^{-6}$, then we should not expect to detect it in X-rays (see Paper~I). \begin{figure} \includegraphics[angle=90,trim=1.4cm 1.6cm 0.0cm 1.6cm,width=1.0\columnwidth]{n4713_sdss} \caption{SDSS image of NGC\,4713, with {\it Chandra}/ACIS-S contours overlaid in green. 
North is up, east is to the left.} \label{Fig9} \end{figure} \subsection{NGC~4713} For NGC~4713 (SDSS J124957.86$+$051841.0), we again have three predictions for the black hole mass, spanning (0.6--9)$\times10^3\,M_{\odot}$. Based on a 4.9-ks {\it Chandra}/ACIS-S observation taken on 2003 January 28 (Cycle 4), Dudik et al.\ (2005) reported on the lack of a nuclear X-ray source in this dwarf galaxy, with a 0.3--10 keV upper limit of $\approx 4 \times 10^{-4}$ ct s$^{-1}$; looking in the mid-IR, Satyapal et al.\ (2009) also excluded an active nucleus. However, Nagar et al.\ (2002) had included it in their list of BPT (Baldwin et al.\ 1981) `composite galaxies', as did Reines, Greene \& Geha (2013). Furthermore, Terashima et al.\ (2015) found an X-ray source at the nuclear position in a 52.2-ks {\it XMM-Newton} European Photon Imaging Camera observation, part of the {\it XMM-Newton} Serendipitous Source Catalog Data Release 3 (Watson et al.\ 2009). The unabsorbed 0.3--10 keV flux is reported as $\approx 5\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$, with the addition of a `weak hint' of an Fe-K line at 6.4 keV. Based on the ratios between the FIR luminosities at 18 $\mu$m and 90 $\mu$m (from the AKARI survey: Kawada et al.\ 2007; Ishihara et al.\ 2010), and the X-ray luminosity, Terashima et al.\ (2015) classified the nucleus of NGC\,4713 as an unobscured transition object between LINERs and H{\footnotesize {II}} nuclei. The lower spatial resolution of {\it XMM-Newton} makes it impossible to determine whether the faint nuclear X-ray emission is point-like, from an AGN, or extended, from hot gas in a star-forming region. We reprocessed and re-examined the 4.9-ks {\it Chandra}/ACIS-S observation. In contrast to the conclusions of Dudik et al.\ (2005), we do find a point-like X-ray nucleus (see Figure~\ref{Fig9}), located within $0\arcsec.2$ of the optical nucleus as defined by SDSS-DR12 (Alam et al.\ 2015) and {\it Gaia} Data Release 2 (Gaia Collaboration et al.\ 2018).
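The nuclear-star-cluster prediction quoted for NGC~4178 in the previous subsection can be reproduced numerically by inverting the $M_{\rm bh}$--$M_{\rm nc}$ relation, $\log(M_{\rm nc}/M_{\odot}) = 0.40\,\log(M_{\rm bh}/10^{7.89}\,M_{\odot}) + 7.64$. A minimal sketch, using central values only (the published uncertainties on slope and intercept are ignored here):

```python
import math

def log_mbh_from_mnc(m_nc_msun, slope=0.40, pivot_log=7.89, intercept=7.64):
    """Invert log(M_nc) = slope * log(M_bh / 10**pivot_log) + intercept."""
    return pivot_log + (math.log10(m_nc_msun) - intercept) / slope

# Nuclear star cluster mass of NGC 4178, M_nc ~ 5e5 M_sun (Satyapal et al. 2009)
lm = log_mbh_from_mnc(5e5)
print(f"log(M_bh/M_sun) = {lm:.2f}")  # 3.04, as quoted in the text
```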
We measure a net count rate in the 0.3--7.0 keV band of $2.0^{+1.3}_{-0.9} \times 10^{-3}$ ct s$^{-1}$, i.e.\ 10 raw counts and 0.2 background counts. The errors reported here are 90\% confidence limits calculated from the Tables of Kraft et al.\ (1991), suitable for sources with a low number of counts. Source counts are detected in all three standard bands (soft, 0.3--1 keV; medium, 1--2 keV; hard, 2--7 keV), which is consistent with a power-law spectrum. Assuming a power-law spectrum with photon index $\Gamma = 1.7$, and line-of-sight column density $N_{\rm H} = 2 \times 10^{20}$ cm$^{-2}$, we find with {\sc pimms} that the net count rate corresponds to a 0.3--10 keV unabsorbed flux of $(1.4^{+0.9}_{-0.6}) \times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$ (slightly lower than the {\it XMM-Newton} flux, which may include a hot gas contribution). At the distance of 13.2 Mpc for NGC~4713, this implies a luminosity $L_{0.3-10} = 3.0^{+1.9}_{-1.4} \times 10^{38}$ erg s$^{-1}$. We also obtained an essentially identical estimate of $L_{0.5-7} \approx 3.1 \times 10^{38}$ erg s$^{-1}$ using the {\sc ciao} task {\it srcflux} within {\sc ds9}, with the same input spectral model and column density. \subsection{Galaxies with 2 BH mass estimates $< 10^5 M_{\odot}$} NGC~4294 was observed by {\it XMM-Newton}/EPIC on 2016 June 10, for 46 ks. We found no sources at the nuclear position, in the stacked EPIC pn and MOS image. The nearest point source is located $\approx$9$\arcsec$ from the optical nuclear position, way beyond the possible astrometric uncertainty of the EPIC image. We estimate a 90\% confidence limit for the 0.3--10 keV nuclear luminosity of $L_{0.3-10} < 1 \times 10^{38}$ erg s$^{-1}$, for a distance of 17.0 Mpc. 
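The flux-to-luminosity conversions used throughout this section follow the standard isotropic relation $L = 4\pi d^2 F$. A minimal sketch with the NGC~4713 numbers, where the Mpc-to-cm factor is the only assumed input:

```python
import math

MPC_IN_CM = 3.086e24  # cm per Mpc (assumed conversion factor)

def lum_from_flux(flux_cgs, d_mpc):
    """Isotropic luminosity L = 4 pi d^2 F, in erg/s."""
    d_cm = d_mpc * MPC_IN_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# NGC 4713: unabsorbed 0.3--10 keV flux ~1.4e-14 erg/cm^2/s at 13.2 Mpc
L = lum_from_flux(1.4e-14, 13.2)
print(f"L(0.3-10 keV) ~ {L:.1e} erg/s")  # ~2.9e38, i.e. the quoted 3.0e38
```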
Among the other four galaxies with two black hole mass estimates $< 10^5 M_{\odot}$ (Table~\ref{Tab_IMBH}), three (IC~3392, NGC~4413, and NGC~4424) have been observed by {\it Chandra} this year for our Virgo survey, and we will present the results in a separate paper. The fourth galaxy, NGC~4470, was observed by {\it Chandra} several times between 2010 and 2016: twice with ACIS-I, and four times with ACIS-S. However, in all cases, it was not the primary target of the observation, and was located several arcmin away from the aimpoint, with a resulting degradation of the point spread function at its location. Moreover, in three of the four ACIS-S observations, the nuclear position of NGC~4470 fell onto the less sensitive S2 chip, rather than the S3 chip. Of all the available datasets, the most useful one for our investigation is from a 20-ks observation taken on 2010 November 20 (Cycle 12): it is the only observation in which NGC~4470 is on the S3 chip, only $\approx$4$'$ from the aimpoint. From this observation, we found excess emission centred $\approx$1$\arcsec$ from the SDSS and {\it Gaia} optical nuclear positions (an offset smaller than the positional uncertainty of the X-ray source at that off-axis position and for the small observed number of counts), with a net count rate of $(5 \pm 2) \times 10^{-4}$ ct s$^{-1}$ ({\it i.e.}, $\approx$ 10 net counts) in the 0.3--7.0 keV band. For an absorbing column density $N_{\rm H} = 1.7 \times 10^{20}$ cm$^{-2}$, a power-law photon index $\Gamma = 1.7$, and a distance of 18.8 Mpc, this corresponds to an unabsorbed luminosity $L_{0.3-10} = 2.2^{+1.9}_{-1.2} \times 10^{38}$ erg s$^{-1}$. \subsection{Galaxies with 1 BH mass estimate $\lesssim 10^5 M_{\odot}$} \begin{figure} \includegraphics[angle=90,trim=1.4cm 1.6cm 0.0cm 1.6cm,width=1.0\columnwidth]{n4299_sdss} \caption{SDSS image of NGC~4299, with {\it Chandra}/ACIS-S contours overlaid in green (seen to the right of the galaxy).
North is up, east is to the left.} \label{Fig4299} \end{figure} Finally, we examined the 26 galaxies with one black hole mass estimate $\lesssim 10^5 M_{\odot}$ (Table~\ref{Tab_IMBH}). Twenty of them have been observed as part of our Virgo survey, and we will report on them elsewhere. The other three, NGC~4299, NGC~4647 and NGC~4689, already had archival {\it Chandra} data. NGC~4299 was observed by {\it Chandra}/ACIS-S on 2007 November 18, for 5.0 ks. We do not find any significant emission at the nuclear position (see Figure~\ref{Fig4299}); we place a 90\% upper limit of $8 \times 10^{-4}$ count s$^{-1}$ in the 0.3--7 keV band, using the tables of Kraft et al.\ (1991). Assuming line-of-sight Galactic absorption $N_{\rm H} = 2.5 \times 10^{20}$ cm$^{-2}$, and a power-law spectrum with photon index $\Gamma = 1.7$, this corresponds to a luminosity $L_{0.3-10} < 7 \times 10^{38}$ erg s$^{-1}$ at the distance of 21.9 Mpc. NGC~4647 was observed by {\it Chandra}/ACIS-S on six visits between 2000 and 2011 (2000 April 20: 38 ks; 2007 January 30: 52 ks; 2007 February 01: 18 ks; 2011 August 08: 85 ks; 2011 August 12: 14 ks; 2011 February 24: 101 ks), for a total of $\approx$308 ks. On all occasions, the aimpoint was located at the main target of the observation, the nearby E galaxy NGC~4649; however, NGC~4647 is only $\approx$2$'$.5 away, and falls within the S3 chip in all six datasets. We inspected each observation individually, and we then used the {\sc ciao} script {\it merge\_obs} to create a stacked, exposure-corrected image. We used point-like optical/X-ray associations to align the {\it Chandra} image onto the SDSS optical image, so that the two frames coincide within $\lesssim$0$\arcsec$.3. There is no point-like X-ray source at the nuclear location, defined by SDSS (also in agreement with {\it Gaia}'s nuclear position within the same uncertainty).
The nearest point-like X-ray source (most likely an X-ray binary) is located $\approx$3$\arcsec$ to the west of the nuclear location, well above its positional uncertainty. The 90 percent upper limit of the (undetected) nuclear source is $3.6 \times 10^{-5}$ ct s$^{-1}$ in the 0.3--7 keV band. To convert this upper limit into a luminosity limit, we took a weighted average of the contributions over the different observation cycles, to take into account the change in the response of the ACIS detector; we also assumed, as usual, a line-of-sight Galactic absorption (in this case, $N_{\rm H} = 2.1 \times 10^{20}$ cm$^{-2}$) and a power-law spectrum with photon index $\Gamma = 1.7$. We conclude that the nuclear X-ray luminosity is $L_{0.3-10} < 1 \times 10^{37}$ erg s$^{-1}$, at the distance of 17.6 Mpc. NGC~4689 was observed by {\it Chandra}/ACIS-S for 5 ks on 2007 May 7 (Cycle 8). We do not detect any net emission at the nuclear position; the Bayesian 90\% confidence limit on the net count rate is $\approx$5 $\times 10^{-4}$ ct s$^{-1}$. For a line-of-sight Galactic absorption $N_{\rm H} = 2.0 \times 10^{20}$ cm$^{-2}$ and a photon index $\Gamma = 1.7$, we obtain an upper limit to the nuclear black hole luminosity of $L_{0.3-10} < 1.2 \times 10^{38}$ erg s$^{-1}$, at the distance of 16.4 Mpc. A summary of the X-ray observation exposure times, count rates, and luminosities is provided in Table~\ref{Tab_X_Sum}. 
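As a consistency check on the upper limits quoted above, one can reverse $L = 4\pi d^2 F$ to recover the count-rate-to-flux factor implicit in each conversion. This is a rough sketch of our own (the true factor depends on the ACIS response and the assumed spectral model at each epoch), using only numbers stated in the text:

```python
import math

MPC_IN_CM = 3.086e24  # cm per Mpc (assumed conversion factor)

def implied_factor(rate_cts, lum_cgs, d_mpc):
    """Implied conversion (erg cm^-2 per count) from a rate and a luminosity."""
    flux = lum_cgs / (4.0 * math.pi * (d_mpc * MPC_IN_CM) ** 2)
    return flux / rate_cts

# (count-rate limit in ct/s, 0.3--10 keV luminosity limit in erg/s, distance in Mpc)
f4647 = implied_factor(3.6e-5, 1.0e37, 17.6)
f4689 = implied_factor(5.0e-4, 1.2e38, 16.4)
print(f"NGC 4647: {f4647:.1e}  NGC 4689: {f4689:.1e} erg/cm^2/ct")
```

Both limits imply a factor near $7.5\times10^{-12}$ erg cm$^{-2}$ ct$^{-1}$, confirming that the two conversions were made with mutually consistent assumptions.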
\begin{table*} \centering \caption{Summary of the X-ray observations used for this work and of their respective nuclear X-ray properties.} \begin{tabular}{lccccc} \hline Galaxy & Observatory & Date & Exp Time & Count Rate$^a$ & $L_{0.3-10}$ \\ & & & (ks) & (ct s$^{-1}$) & (erg s$^{-1}$) \\ \hline NGC~4178 & {\it Chandra} & 2011-02-19 & 36 & $\left(1.0^{+0.2}_{-0.2}\right) \times 10^{-3}$ & $\left(1.9^{+1.9}_{-0.7}\right) \times 10^{38}$\\[3pt] NGC~4713 & {\it Chandra} & 2003-01-28 & 4.9 & $\left(2.0^{+1.3}_{-0.9}\right) \times 10^{-3}$ & $\left(3.0^{+1.9}_{-1.4}\right) \times 10^{38}$\\[3pt] NGC~4294 & {\it XMM-Newton} & 2016-06-10 & 46 & $< 1 \times 10^{-3}$ & $< 1 \times 10^{38}$\\[3pt] NGC~4470 & {\it Chandra} & 2010-11-20 & 20 & $\left(0.5^{+0.2}_{-0.2}\right) \times 10^{-3}$ & $\left(2.2^{+1.9}_{-1.2}\right) \times 10^{38}$\\[3pt] NGC~4299 & {\it Chandra} & 2007-11-18 & 5.0 & $< 0.8 \times 10^{-3}$ & $< 7 \times 10^{38}$\\[3pt] NGC~4647 & {\it Chandra} & 2000-04-20 & 38 & & \\ & & 2007-01-30 & 52 & & \\ & & 2007-02-01 & 18 & & \\ & & 2011-02-24 & 101 & & \\ & & 2011-08-08 & 85 & & \\ & & 2011-08-12 & 14 & & \\ & & (stacked) & 308 & $< 0.036 \times 10^{-3}$ & $< 0.1 \times 10^{38}$\\[3pt] NGC~4689 & {\it Chandra} & 2007-05-07 & 5.0 & $< 0.5 \times 10^{-3}$ & $< 1.2 \times 10^{38}$\\[3pt] \hline \end{tabular} $^a$ For {\it Chandra} observations: observed ACIS-S count rate in the 0.3--7 keV band; for {\it XMM-Newton}: observed EPIC-pn count rate in the 0.3--10 keV band. \label{Tab_X_Sum} \end{table*} \section{Discussion} \label{Sec_Disc} Observational evidence for $\sim$12, 30 and 60 solar mass black holes already exists (Reid et al.\ 2014; Abbott et al.\ 2016, 2017).
It is also understood that in today's universe, the end product of a massive star will be a `stellar mass' black hole of less than 80--100 $M_{\odot}$, but near this limit if the star's metallicity was low ($10^{-2}$--$10^{-3}$ solar) and the mass loss from its stellar wind was low (Belczynski et al.\ 2010; Spera et al.\ 2015; Spera \& Mapelli 2017). While some authors have advocated that the seed masses which gave rise to the supermassive black holes at the centres of galaxies started out with masses of $10^5$--$10^6\, M_{\odot}$ (e.g.\ Turner 1991; Loeb \& Rasio 1994), and many AGN are known to have $10^5$--$10^6$ $M_{\odot}$ black holes (e.g.\ Graham \& Scott 2015, and references therein), the assumption that there are no black holes with masses of $10^2$--$10^5$ $M_{\odot}$ need not hold. The perceived need for massive seeds was originally invoked because, under the assumption of spherical accretion, there was not sufficient time to grow the massive quasars observed in the early Universe. However, it is possible to grow black holes at a much faster rate than the idealised and restrictive Eddington accretion rate (e.g.\ Alexander \& Natarajan 2014; Nayakshin et al.\ 2012). Moreover, even if the {\it massive} quasars did form from massive seeds (Pacucci et al.\ 2016), there may still be a continuum of BH masses, perhaps with today's IMBHs born from Pop III and II.5 stars, or from other processes, as noted in Section~\ref{SecIntro}. There are, therefore, reasons to expect that IMBHs with $10^2 < M_{\rm bh}/M_{\odot} < 10^5$ should exist. Just as initial searches for exoplanets found the larger ones first, and surveys of galaxies found the bright Hubble-Jeans sequence (Jeans 1919, 1928; Hubble 1926, 1936) prior to the detection of low surface brightness galaxies, sample selection effects hinder the detection of IMBHs in galactic nuclei. Their gravitational spheres-of-influence are too small to be spatially resolved with our current instrumentation.
Furthermore, ambiguity also arises because the energy levels of their current low-accretion activity overlap with those of highly-accreting stellar mass black holes in X-ray binaries. There is, however, no obvious physical reason why these IMBHs should not exist, and a small number of candidates are known, including the already-mentioned $\sim$10$^4 \, M_{\odot}$ black hole near/inside ESO~243-49 (Farrell et al.\ 2009; Yan et al.\ 2015; Webb et al.\ 2017; Soria et al.\ 2017), plus the nuclear black holes in LEDA~87300 (Baldassare et al.\ 2015; whose mass estimate was halved in Graham et al.\ 2016, see also Baldassare et al.\ 2017) and in NGC~404 (with a 3$\sigma$ upper limit on its black hole mass of $1.5\times 10^5\, M_{\odot}$: Nguyen et al.\ 2017). There is also a series of studies (Pardo et al.\ 2016; Mezcua et al.\ 2016, 2018) which have estimated the masses of distant low mass black holes using a near-linear $M_{\rm bh}$--$M_{\rm *tot}$ relation for AGN from Reines \& Volonteri (2015). However, it should be noted that Reines \& Volonteri (2015) appear unaware of, or reject, the bend in the $M_{\rm bh}$--$M_{\rm *spheroid}$ diagram (see Graham \& Scott 2015), and the associated bend in the $M_{\rm bh}$--$M_{\rm *tot}$ diagram which is evident in the data they present. When a log-linear relation is fitted to galaxies that they consider to contain a classical bulge rather than a pseudobulge, the right-hand panel of figure~10 in Reines \& Volonteri (2015) reveals that all galaxies with $M_{\rm bh} \lesssim 10^8\,M_{\odot}$ (except for the stripped compact elliptical galaxy M~32 which should be down-weighted in this diagram due to its rare nature relative to normal galaxies) reside below their $M_{\rm bh}$--$M_{\rm *tot}$ relation for classical bulges and elliptical galaxies. Many more galaxies with directly measured black hole masses also reside below their relation, but they were labelled ``pseudobulges'' and excluded by Reines \& Volonteri (2015).
Given that the bulge-to-total mass ratio tends to decrease as one progresses to lower mass spiral galaxies, the $M_{\rm bh}$--$M_{\rm *tot}$ relation for spiral galaxies (Davis et al.\ 2018b) is steeper than the near-quadratic relation for the bulges of spiral galaxies (e.g.\ Scott et al.\ 2013; Savorgnan et al.\ 2016; Davis et al.\ 2018a). Reines \& Volonteri (2015) instead suggest that their AGN sample --- used to define their near-linear $M_{\rm bh}$--$M_{\rm *tot}$ relation for AGN --- resides in pseudobulges that have an $M_{\rm bh}/M_{\rm *,tot}$ ratio of $\approx$0.03 percent at $M_{\rm *tot} = 10^{11}\,M_{\odot}$. However, their distribution of AGN data does not match the distribution of spiral galaxies (also alleged to contain pseudobulges) with directly measured black hole masses and stellar masses derived from space-based infrared images. As such, while the dwarf galaxies studied by Pardo et al.\ (2016) and Mezcua et al.\ (2016, 2018) may contain IMBHs, it may be worthwhile revisiting their black hole masses. A growing number of alleged and potential IMBHs, not located at the centre of their host galaxy, are also known (e.g.\ Colbert \& Mushotzky 1999; Farrell et al.\ 2009, 2014; Soria et al.\ 2010; Webb et al.\ 2010, 2014; Liu et al.\ 2012; Secrest et al.\ 2012; Sutton et al.\ 2012; Kaaret \& Feng 2013; Miller et al.\ 2013; Cseh et al.\ 2015; Mezcua et al.\ 2015; Oka et al.\ 2016; Pasham et al.\ 2014, 2015). It has been theorised that some of these may have previously resided at the centre of a galaxy: perhaps from a stripped satellite galaxy or minor merger (e.g.\ Drinkwater et al.\ 2003), or perhaps they were dynamically ejected from the core of their host galaxy (e.g.\ Merritt et al.\ 2009). This latter phenomenon may occur due to the gravitational recoiling of a merged black hole pair (e.g.\ Bekenstein 1973; Favata et al.\ 2004; Herrmann et al.\ 2007; Nagar 2013). Alternatively, or additionally, IMBHs may have formed in their off-centre location.
Such speculation should, however, be tempered at this point because, as noted in Section~\ref{SecIntro}, many such past IMBH candidates can be explained as super-Eddington accretion onto stellar-mass compact objects (Feng \& Soria 2011; Kaaret et al.\ 2017). There are many methods, beyond those already employed here, that can be used to identify, and probe the masses of, black holes. These include reverberation mapping of AGN (e.g.\ Bahcall, Kozlovsky \& Salpeter 1972; Blandford \& McKee 1982; Netzer \& Peterson 1997), the `fundamental plane of black hole activity' (Merloni et al.\ 2003; Falcke et al.\ 2004), spectral modelling of the high-energy X-ray photons coming from the hot accretion discs around IMBHs (Pringle \& Rees 1972; Narayan \& Yi 1995), high-ionisation optical emission lines (Baldwin et al.\ 1981; Kewley et al.\ 2001), and high spatial resolution observations of maser emission using radio and millimetre/submillimetre interferometry (e.g.\ Miyoshi et al.\ 1995; Greenhill et al.\ 2003; Humphreys et al.\ 2016; Asada et al.\ 2017). In addition, the merging of black holes is now quite famously known to produce gravitational radiation during their orbital decay (Abbott et al.\ 2016). The merging of galaxies containing their own central IMBH is similarly expected to result in the eventual merging of these black holes. The Kamioka Gravitational Wave Detector (KAGRA: Aso et al.\ 2013) will be a 3-km long underground interferometer in Japan that is capable of detecting the gravitational radiation emanating from collisions involving black holes with masses up to 200 $M_{\odot}$ (T\'apai et al.\ 2015).
The planned Deci-Hertz Interferometer Gravitational wave Observatory (DECIGO: Kawamura et al.\ 2011) and the European Laser Interferometer Space Antenna (LISA) Pathfinder mission\footnote{\url{http://sci.esa.int/lisa-pathfinder/}} (Anza et al.\ 2005; McNamara 2013), with their greater separation of mirrors, will be able to detect longer wavelength gravitational waves, and thus better reach into the domain of intermediate-mass and supermassive black hole mergers, the latter of which are currently being searched for via `pulsar timing arrays' (PTAs) (e.g.\ Hobbs et al.\ 2010; Kramer \& Champion 2013; Shannon et al.\ 2015). A key constraint to the expected detection threshold of such signals from PTAs -- in particular the background of cosmic ripples from the merger of massive black holes (themselves arising from the merger of galaxies) -- is the (black hole)-to-(host galaxy/bulge) mass ratio (see equation~(\ref{eq3}) for spiral galaxies). An additional source of long wavelength gravitational radiation will arise from the inspiral of compact stellar mass objects, such as neutron stars and black holes, around these IMBHs (Mapelli et al.\ 2012). It is reasonable to expect that the densely packed nuclear star clusters, which coexist with low-mass SMBHs (e.g.\ Gonz{\'a}lez Delgado et al.\ 2008; Seth et al.\ 2008; Graham \& Spitler 2009), will similarly surround many IMBHs. Gravitational radiation, and the gravitational tidal disruption of ill-fated stars that venture too close to these black holes (Komossa et al.\ 2009, Komossa 2013 and references therein; Zhong et al.\ 2015; Stone \& Metzger 2016; Lin et al.\ 2018), are therefore expected from these astrophysical entities. There is, therefore, an array of future observations which could yield further confidence and insight into the realm of IMBHs.
In the pursuit of galaxies that may harbour (some of) the largely missing population of IMBHs, we have predicted the black hole masses in 74 spiral galaxies in the Virgo cluster that will be imaged with the ACIS-S detector on the {\it Chandra X-ray Observatory}. Previously, Gallo et al.\ (2008) performed a complementary investigation looking at 100 early-type galaxies in the Virgo cluster. However, they only used two global properties of the galaxies ($\sigma$ and $M_{\rm *,galaxy}$) to predict the black hole masses, and their predictions differed systematically and significantly from each other (Gallo et al.\ 2008, their figure~4), revealing that either one, or both, of their black hole scaling relations was in error. That offset, which reached 3 orders of magnitude at the low mass end, is investigated and reconciled in Paper~I. Here, we have used three global properties of spiral galaxies ($\sigma$, $M_{\rm *,galaxy}$ and spiral arm pitch angle $\phi$) to predict the black hole masses in our spiral galaxy sample. Moreover, our updated scaling relations are internally consistent with each other and do not contain any dramatic systematic bias. Table~\ref{Tab_Calib} provides a sense of what galaxy parameter values are associated with a given set of black hole masses. Based on our estimates of these galaxies' stellar masses, 33 of the 74 galaxies are predicted to have a black hole mass less than $10^5$--$10^6\, M_{\odot}$ (see Table~\ref{Tab_IMBH}). 
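For a rough sense of the mapping between pitch angle and black hole mass, the calibration points tabulated below are evenly spaced at 6.1 deg per dex in mass, so they can be interpolated linearly in the log. The sketch below reproduces that tabulated column only; it is not the full published relation with its uncertainties and scatter:

```python
def log_mbh_from_pitch(phi_deg):
    """Linear interpolation of the tabulated pitch-angle calibration:
    phi = 3.0 deg at 10^9 M_sun, increasing by 6.1 deg per dex of
    decreasing black hole mass."""
    return 9.0 - (phi_deg - 3.0) / 6.1

print(round(log_mbh_from_pitch(15.2), 2))  # 7.0 (the 10^7 M_sun row)
print(round(log_mbh_from_pitch(39.6), 2))  # 3.0 (the 10^3 M_sun row)
```

Open, loosely-wound arms ($|\phi| \gtrsim 40$ deg) thus correspond to the IMBH regime, which is the basis of the search strategy described below.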
\begin{table} \centering \caption{Black hole calibration points}\label{Tab_Calib} \begin{tabular}{cccc} \hline $M_{\rm bh}$ & $M_{\rm *,total}$ & $\phi$ & $\sigma$ \\ $M_{\odot}$ & $M_{\odot}$ & [deg] & km s$^{-1}$ \\ \hline $10^9$ & 2.9$\times10^{11}$ (2.4$\times10^{11}$) & 3.0 & 293 \\ $10^8$ & 1.2$\times10^{11}$ (1.1$\times10^{11}$) & 9.1 & 195 \\ $10^7$ & 5.1$\times10^{10}$ (5.3$\times10^{10}$) & 15.2 & 130 \\ $10^6$ & 2.1$\times10^{10}$ (2.5$\times10^{10}$) & 21.3 & 86 \\ $10^5$ & 8.9$\times10^{9}$ (1.2$\times10^{10}$) & 27.4 & 57 \\ $10^4$ & 3.7$\times10^{9}$ (5.5$\times10^{9}$) & 33.5 & 38 \\ $10^3$ & 1.6$\times10^{9}$ (2.6$\times10^{9}$) & 39.6 & 25 \\ $10^2$ & 6.6$\times10^{8}$ (1.2$\times10^{8}$) & 45.7 & 17 \\ \hline \end{tabular} Reversing equations~\ref{eq3} (\ref{eq3B}), \ref{eq1b} and \ref {eq2}, we provide the total galaxy stellar mass, spiral arm pitch angle and stellar velocity dispersion that corresponds to the black hole masses listed in column~1, respectively. \end{table} The black hole mass estimates presented here shall be used in a number of forthcoming papers once imaging from the new {\it Chandra} Cycle 18 Large Project `Spiral galaxies of the Virgo cluster' (Proposal ID: 18620568) is completed. Given the low degree of scatter about the $M_{\rm bh}$--$|\phi|$ relation, it appears to be the most promising relation to use in the search for IMBHs in late-type galaxies. In future work, we intend to identify those late-type spiral galaxies with open, loosely-wound spiral arms, i.e.\ those expected to have the lowest mass black holes at their centre, and then check for the signature of a hot accretion disk heralding the presence of potentially further IMBHs. \section*{Acknowledgements} This research was supported under the Australian Research Council's funding scheme DP17012923. 
Part of this research was conducted within the Australian Research Council's Centre of Excellence for Gravitational Wave Discovery (OzGrav), through project number CE170100004. Support for this work was provided by the National Aeronautics and Space Administration through Chandra Award Number 18620568. This research has made use of the NASA/IPAC Extragalactic Database (NED). This publication makes use of data products from the Two Micron All Sky Survey. We acknowledge use of the HyperLeda database (http://leda.univ-lyon1.fr). This research has made use of the GOLDMine Database.
\section{Introduction} The abundance and spatial distribution of voids depend on the nature of the initial conditions, the expansion history of the universe, and the nature of gravity \citep{svdw04,ms09}. As is true for halos, predictive models of this dependence have three parts. The first is a description of the gravitational physics of void formation \citep{ddglp93,smt01}; the second incorporates this into a statistical treatment aimed at predicting halo or void abundances, typically from knowledge of the initial fluctuation field \citep{svdw04,pls12,psd13,scs13}; and the third extends this treatment to also describe the spatial distribution of these objects and its evolution \citep{dcss10}. The initial and evolved fields are often referred to as Lagrangian and Eulerian, respectively. The statistical model for the abundances typically identifies those protohalo or protovoid patches in the initial conditions which are destined to become halos and voids. Since the abundances are predicted from the initial Lagrangian field, whereas the spatial distribution of the fully formed halos and voids is measured in the final evolved Eulerian field, the description of the spatial distribution is done in two steps: the first describes how the spatial distribution of the initial patches is biased with respect to the initial fluctuation field, and the second describes how this bias is modified when the evolved halo distribution is compared to the evolved matter field. In this paper, we first ask if we can model the evolution of void density profiles without any prejudice about how voids form. We show that naive application of the simplest spherical evolution model does not describe the evolution of the void-matter correlation, because voids move. We then build a more elaborate model which is more accurate.
Finally, we check if the excursion set peaks approach is able to provide a useful framework for describing how matter is initially arranged around protohalo or protovoid centers, showing that it is reasonably accurate. When coupled with our evolution model, this provides a complete framework for describing the void-matter cross correlation function. Such a framework is of interest for many cosmological purposes: Void density profiles are promising for constraining neutrino masses \citep{Massara:2015}, and modeling void velocity profiles allows one to perform redshift space distortion analyses with voids \citep{Hamaus:2015,Cai:2016,Achitouv:2016,Hawken:2016,Nadathur:2018}. In addition, such a framework is important for interpreting weak-lensing measurements around voids, which are expected to play an important role in the test of theories of gravity \citep{Cai:2014,Baker:2018}. These studies are motivated in part by the fact that underdensities or voids in the galaxy distribution or lensing signal are now being measured in significant numbers \citep{Sutter:2012,Paz_2013,Krause:2013,Clampitt:2015,Nadathur:2016}. We illustrate all our arguments using measurements in simulations which are described in Section~\ref{sec:sims}. Section~\ref{sec:evolve} shows the shortcomings of the spherical evolution model, and Section~\ref{sec:move} describes our modification, and its effect on density and velocity profiles. Section~\ref{sec:ESTmodel} describes the excursion set troughs model for void profiles, and compares it with our measurements. A final section summarizes. Details associated with the excursion set troughs approach are provided in an Appendix. \section{Voids in simulations}\label{sec:sims} We use the voids identified in N-body simulations to illustrate a number of the points raised in the discussion which follows. The simulations and void catalogs are from \cite{Massara:2015}.
We use their low resolution simulations run in a $\Lambda$CDM cosmology, averaging over the $10$ available realizations. For each realization, the initial conditions (ICs) were set at redshift $z=99$ and particle positions and velocities at $z=2,1,0.5,0$ are available. Voids were identified using the void finder {\sc VIDE} \citep{VIDE} at redshift $z=0$ and were divided into three groups, depending on their size at that time: $R_{\rm eff} = 14-16, 16-18, 20-25$ $h^{-1}$Mpc. \cite[See][for more details.]{Massara:2015}. Also see \cite{Nadathur:2015} for discussion of the subtleties associated with such void finders. At $z=0$, each void has a number of member particles. We define the center of each void at redshift $z\geq 0$ as the center of mass of the $z=0$ member particles. (This is exactly analogous to how one traces halos back in time to the protohalo patches from which they formed.) We use the term `protovoid' to refer to the set of void particles at any earlier time (e.g. at $z=99$). We then compute the cross-correlation between the void or protovoid centers and the matter distribution. Of course, we are free to cross-correlate the (proto)void centers at one redshift with the matter distribution at another. If the void centers do not move, then this cross-correlation is a useful measure of how the voids expanded. However, if the voids moved, then the measurement is more complicated to interpret. \section{The spherical evolution model}\label{sec:evolve} The simplest models of halos and voids assume that they form from the spherically symmetric collapse or expansion of sufficiently over or underdense isolated spherical patches in the primordial mass density fluctuation field \citep{gg72, ddglp93}. In this model, concentric shells remain concentric as they expand (around what will become void centers) or contract (around clusters). 
The spherical model predicts a deterministic mapping between the linear overdensity $\delta_{\rm L}(<R_{\rm L})$ within the initial (often called Lagrangian) scale $R_{\rm L}$ and the nonlinearly evolved one $\delta_{\rm E}(<R_{\rm E})$ within the evolved scale $R_{\rm E}$ at redshift $z$. The key steps in the spherical model follow from the assumption that shells do not cross, so \begin{equation} 1+\delta_{\rm E}(<R_{\rm E}|a) = \left[R_{\rm L}/R_{\rm E}(a)\right]^3 = 1 + \delta_{\rm Sph}(<R_{\rm E}|a), \label{rLrE} \end{equation} where $R_{\rm L}$ is the initial comoving radius of the patch that has comoving size $R_{\rm E}$ at the epoch $a$, and \begin{equation} 1 + \delta_{\rm Sph}(<R_{\rm E}|a) \approx \left(1 - \frac{D_a\delta_{\rm L}(<R_{\rm L})}{\delta_c}\right)^{-\delta_c} \label{dEdL} \end{equation} \citep{b94,rks98}, where $D_a$ is the linear growth factor normalized to the present time, and $\delta_c=1.686$. (The spherical model mapping between $\delta_{\rm L}$ and $\delta_{\rm E}$ can be written exactly, but the solutions for over and under-densities appear rather different. Equation~\ref{dEdL} is a simple but accurate approximation which is valid in all cases.) Although recent work has shown how to incorporate effects which go beyond the spherical model \citep{scs13}, these only enter at second order in the $\delta_{\rm E}-\delta_{\rm L}$ mapping, so we will ignore them in what follows. Together, equations~(\ref{rLrE}) and~(\ref{dEdL}) allow one to transform from an initial profile shape to a final one. \begin{figure} \centering \includegraphics[width=0.9\hsize]{dEdLsc} \caption{Monotonic mapping between linear and nonlinear density in the spherical model (equation~\ref{dEdL}). The logarithmic scale on the y-axis shows that a wide range of nonlinear densities (greater than $\sim 100$) all have $\delta_{\rm L}/\delta_c\to 1$, whereas small nonlinear densities (less than $\sim 0.2$) correspond to a rather large range of $\delta_{\rm L}$. 
} \label{fig:SC} \end{figure} Before we show the results of doing so, it is worth making the following point about this mapping. The Eulerian density diverges rapidly as $D_a\delta_{\rm L}\to\delta_c$ (see Figure~\ref{fig:SC}). As a result, large changes in $\delta_{\rm E}$ correspond to small changes in $D_a\delta_{\rm L}$. Therefore, the precise value of $\delta_{\rm E}$ that is used to define a halo is not so important for modeling the protohalo patches from which the halo formed. In contrast, $D_a\delta_{\rm L}\to-\infty$ as $1+\delta_{\rm E}\to 0$. In this limit, which is the relevant limit for voids, small changes in $\delta_{\rm E}$ correspond to large changes in the associated $D_a\delta_{\rm L}$. As a result, predictions of void formation are rather sensitive to the details of the void-finding algorithm. To illustrate, models of the largest halos and voids predict comoving number densities which scale approximately $\propto \exp(-\delta_c^2/2s_0)$ and $\propto \exp(-\delta_v^2/2s_0)$. Whereas $\delta_c\approx 1.686$ so long as $\delta_{\rm E}$ is more than a few hundred times the background density, $\delta_v$ can depend strongly on the void finder. The spherical collapse of a tophat profile suggests that the value $D_a\delta_{\rm L} = -2.7$ is special; this value in equation~(\ref{dEdL}) indicates that the associated enclosed density is $\delta_{\rm E}=-0.8$. However, while the enclosed density associated with most void finders is close to this value in the central regions, the value within what is commonly quoted as the `effective radius' of the void is more like $-0.5$. This corresponds to $D_a\delta_{\rm L}\approx -0.8$. Such `voids' would be much more abundant than predicted if one assumed $\delta_v=-2.7$. Accounting for this goes a long way towards explaining many of the reported discrepancies between predicted and measured abundances of voids. E.g., Chan et al. 
(2014) report that void abundances in their simulations are reasonably well-predicted if one sets $\delta_v\approx -1$. \subsection{From Lagrangian to Eulerian} Figure~\ref{fig:dvoidTH} illustrates how equations~(\ref{rLrE}) and~(\ref{dEdL}) can be used to map an initial Lagrangian profile to an evolved profile at a later time. The top panel shows the evolution of the enclosed density profile that is motivated by the excursion set peaks approach which we describe in more detail in the Appendix. For the current discussion, the main point is simply that, given a value of $D_a$, equation~(\ref{dEdL}) transforms the dashed curve into the solid ones. Although the exact shape of the dashed curve differs in detail, the evolution as $D$ increases is similar to that shown in the right hand panel of Figure~3 of \cite{svdw04}: as time progresses, underdense regions expand and empty out. The bottom panel shows the evolution of a Lagrangian enclosed density profile that was initially slightly overdense on large scales (dashed curve is greater than zero). Within the region where the Lagrangian profile was negative, the Eulerian profile expands outwards and empties out. However, where the profile was positive, the Eulerian profile shrinks inwards and grows denser. The zero-crossing scale does not evolve, as can be seen directly by setting $\delta_{\rm L}=0$ in equation~(\ref{dEdL}). As a result, there is a range of scales within which the Eulerian profile is triple valued: these are the `shell-crossed' scales where the void-in-cloud process highlighted by \cite{svdw04} is important. In this regime, the red curves shown are incorrect, and a procedure like that outlined by \cite{pls12} to account for shell-crossing must be used. E.g., the correct curves would replace the large $R$ small $\delta_{\rm E}$ parts of all `S'-shaped profiles with vertical lines which drop down from the left-most (smallest $R$) tangent point. 
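As a concrete illustration, the mapping defined by equations~(\ref{rLrE}) and~(\ref{dEdL}) can be coded in a few lines. This is a minimal sketch: the Gaussian-shaped Lagrangian trough below is a hypothetical toy profile, not the excursion-set profile used in the figures.

```python
import numpy as np

DELTA_C = 1.686  # linear collapse threshold entering equation (dEdL)

def delta_sph(delta_L, D_a, delta_c=DELTA_C):
    """Nonlinear enclosed overdensity from the linear one, equation (dEdL)."""
    return (1.0 - D_a * delta_L / delta_c) ** (-delta_c) - 1.0

def eulerian_radius(R_L, delta_E):
    """Mass conservation, equation (rLrE): (R_L/R_E)^3 = 1 + delta_E."""
    return R_L / np.cbrt(1.0 + delta_E)

# Evolve a toy underdense Lagrangian profile (a hypothetical Gaussian trough)
# to the present time, D_a = 1:
R_L = np.linspace(1.0, 30.0, 200)       # Lagrangian radii in Mpc/h
delta_L = -np.exp(-(R_L / 10.0) ** 2)   # enclosed linear overdensity (toy)
delta_E = delta_sph(delta_L, D_a=1.0)
R_E = eulerian_radius(R_L, delta_E)     # underdense shells expand: R_E > R_L
```

Since the profile is underdense everywhere, every shell expands and empties out, as in the top panel of Figure~\ref{fig:dvoidTH}; a profile that crosses zero would instead develop the triple-valued, shell-crossed behavior described above.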
\begin{figure} \centering \includegraphics[width=0.9\hsize]{dupTHb} \includegraphics[width=0.9\hsize]{dupTHa} \caption{Evolution of the density profile around an excursion set trough of height $\delta_p = -1.686$ defined using TopHat smoothing, when the underlying Gaussian field has $P(k)\propto k^{-2}$. Dotted curve shows the Lagrangian profile (i.e. the initial one, evolved using linear theory to the present time); solid curves show the nonlinearly evolved profiles when the linear theory growth factor is $D_a=0.25$ (magenta) and $D_a=1$ (red) times that of the present time. Top and bottom panels show troughs of size $\nu\equiv \delta_c/\sqrt{s_0^{pp}} = 1$ and 0.25, and were chosen to illustrate the simple case of pure spherical expansion (top), and one where the void-in-cloud process occurs, so the evolution shown is incorrect (bottom). } \label{fig:dvoidTH} \end{figure} \subsection{From Eulerian to Lagrangian} Although the profiles in the upper panel were obtained by inserting the dashed curve for $\delta_{\rm L}$ in the right hand side of equation~(\ref{dEdL}), and evaluated for different $D_a$, we could, in fact, have done the opposite. We could have started from any one of the red curves, solved for the shape of $D_a\delta_{\rm L}$ using \begin{equation} \frac{D_a\delta_{\rm L}(<R_{\rm L})}{\delta_c} = 1 - \Bigl[1 + \delta_{\rm Sph}(<R_{\rm E})\Bigr]^{-1/\delta_c}, \end{equation} and hence, have inferred the shape of the red curve associated with any other value of $D_t/D_a$. (I.e., compute the new $1+\delta(a_t)$ associated with $(D_t/D_a)D_a\delta_{\rm L}(<R_{\rm L})$ and then the new $R_{\rm E}(a_t)$.) This is sufficiently straightforward that it is interesting to ask how well this works for voids identified in simulations. Figure~\ref{fig:onlySC} shows the results. 
The solid lines show the evolution of the void-matter cross correlation functions associated with voids that have size $R_{\rm eff}$ (as labelled) at $z=0$, but measured using protovoid centers and the mass distribution both at $z=2, 1$, 0.5 and 0 (top to bottom in each panel). The dashed curves show the result of using the spherical model to predict the other curves from the uppermost solid curve (i.e. to predict the lower redshift profiles from that at $z=2$). While the predictions are qualitatively accurate, they lie systematically below the measurements on large scales. This is true even for the largest voids, which do not have overdense walls, so the complications of the void-in-cloud process do not arise. \begin{figure} \centering \includegraphics[width=0.95\hsize]{evolution2_sim_profile_cdm_00eV_256_3part_1Gpc_rightCenter.pdf} \caption{Mean enclosed void density profiles -- i.e. the volume averaged void-matter cross-correlation function -- at $z=2, 1, 0.5$ and 0. Solid lines show the measurement and dashed lines show the result of combining the measured shape at $z=2$ with the spherical model (equation~\ref{dEdL}) to predict the shape at later times.} \label{fig:onlySC} \end{figure} The discrepancy on the largest scales is particularly worrisome, since such scales are generally thought to be `in the linear regime' and hence, ought to be the easiest to use for constraining cosmological models. What has gone wrong? \subsection{Non-commutation between averaging and evolution?} One possibility is that, because it is a cross-correlation, each solid curve represents an average over many different void profiles. It is not obvious that the average of the evolved profiles (which the solid lines represent) should equal the evolution of the average profile (which is what our procedure for computing the dashed curves assumes). Non-commutation between averaging and evolution would be particularly relevant if there were significant scatter around the mean profile. 
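The Eulerian-to-Lagrangian inversion used to produce the dashed curves (invert equation~\ref{dEdL}, rescale by the growth ratio, re-evolve) can be sketched as follows; the enclosed density and growth ratio are illustrative assumptions, not measured values.

```python
DELTA_C = 1.686  # linear collapse threshold of equation (dEdL)

def delta_sph(lin):
    """Equation (dEdL), with lin = D_a * delta_L."""
    return (1.0 - lin / DELTA_C) ** (-DELTA_C) - 1.0

def lin_from_sph(delta_E):
    """The inversion quoted in the text: D_a*delta_L from the evolved density."""
    return DELTA_C * (1.0 - (1.0 + delta_E) ** (-1.0 / DELTA_C))

# Round trip: infer the linear density from an 'observed' enclosed density,
# then re-evolve it to a later epoch with growth ratio D_t/D_a = 1.4 (assumed):
delta_E_a = -0.5                 # hypothetical enclosed density at epoch a
lin_a = lin_from_sph(delta_E_a)  # the D_a*delta_L associated with delta_E = -0.5
delta_E_t = delta_sph(1.4 * lin_a)  # emptier at the later epoch
```

The same two functions, applied shell by shell together with equation~(\ref{rLrE}), map a whole measured profile from one epoch to another.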
\begin{figure} \centering \includegraphics[width=0.95\hsize]{redo2_withSim_profile_SphericalEvolutionfromz2_00eV_z0_cdm_10realiz_256_3part_1Gpc_20-25Mpc.pdf} \caption{Comparison of profiles obtained from evolving the mean $z=2$ profile (orange solid) to $z=0$ (dashed) with those from averaging the evolved profiles (dot-dashed). The dashed curves are the same as the dashed curves shown in the bottom panel of the previous Figure: they show how the (volume averaged) void-matter cross-correlation function measured at $z=2$ is predicted to evolve if the spherical model (equation~\ref{dEdL}) were accurate. While there are differences on small scales, the two are very similar on large scales. Both lie below the black solid curve, which shows the actual mean profile at $z=0$.} \label{fig:noCommute} \end{figure} Figure~\ref{fig:noCommute} provides a direct test. The solid orange and purple curves are the same as the $z=2$ and $z=0$ profiles shown in the bottom panel of the previous figure. The dashed curve shows the result of evolving the orange curve to $z=0$ using the spherical model (equations~\ref{rLrE} and~\ref{dEdL}). I.e., it shows the predicted evolution of the average profile. The dot-dashed curve shows the average of the evolved profiles: it was obtained by taking each $z=2$ profile, evolving it to lower $z$ using the same spherical model, and then averaging the result. The dashed and dot-dashed curves differ on small scales, because evolution and averaging do not commute. However, on large scales, there is good agreement. Therefore, although one can account for the scatter in profile shapes and evolution following \cite{rks98}, this will not solve the problem that the predicted evolution does not change the large scale bias. One might worry that, because we started with $z=2$ profiles, we have already mixed-up some of the evolution and averaging. (This is true, of course, but it would not be an issue if the two commute.) 
Therefore, Figure~\ref{fig:noCommute99} shows the result of starting with the $z=99$ profiles instead, and then evolving the average profile to $z=2$ and $z=0$ (dashed curves). The dot-dashed curves evolve and then average. The upturn on small scales is due to discreteness effects in the simulations (which is why our preferred default is to work with the $z=2$ profiles). This upturn does not complicate the main point, which is that neither the dashed nor the dot-dashed curves look like the mean profile measured in the simulations (solid curves). Whereas the differences are not large at $z=2$, they are significantly larger at $z=0$. \begin{figure} \centering \includegraphics[width=0.95\hsize]{redo2_withSim_profile_SphericalEvolutionfromz99_00eV_z0_cdm_10realiz_256_3part_1Gpc_20-25Mpc.pdf} \caption{Same as previous figure, but now for evolving the $z=99$ profile to $z=2$ and to $z=0$. The jump at $R\sim 20h^{-1}$Mpc is due to discreteness effects in the $z=99$ profile.} \label{fig:noCommute99} \end{figure} Together, Figures~\ref{fig:noCommute} and~\ref{fig:noCommute99} show that the real fault lies not with the fact that evolution and averaging do not commute, but with equation~(\ref{dEdL}) for the evolution itself. We address this in the next section. \section{Spherical evolution around a moving center}\label{sec:move} \subsection{Large-scale bias} As \cite{svdw04} note, equation~(\ref{dEdL}) is supposed to describe the evolution of the profile of an {\em isolated} object. Since all objects are embedded in the cosmic web, equation~(\ref{dEdL}) cannot be the full story. To see why, first note that the procedure used for measuring a `void profile' is the same as that for measuring the cross-correlation between void centers and the dark matter distribution. Therefore, one should think of the two as equivalent: \begin{equation} 1 + \bar\xi_{\rm bm}^{\rm E}(r|a) \equiv 1 + \frac{3}{r^3}\int_0^r {\rm d}x\,x^2 \xi_{\rm bm}^{\rm E}(x|a) = 1 + \delta_{\rm E}(<r|a). 
\label{dExibar} \end{equation} This is instructive because it has been known for decades that, if the initial conditions were Gaussian, then the large scale cross-correlation function will have the same shape as the dark-matter auto-correlation function, although it may differ in amplitude: \begin{equation} \delta_{\rm E}(<r|a)\approx b_{\rm E}(a) \, \bar\xi_{\rm mm}(<r|a) \label{linearb} \end{equation} \citep{bbks86}. The constant of proportionality is known as the `linear bias factor'. We note in passing that, therefore, `universal' fitting formulae for the `void profile' shape which do not exhibit this property (sufficiently far from the void center) are not `universal'. E.g., in this sense, the formula of \cite{HSW_2014} is not universal \citep[right-most panel of Figure~6 in][]{chd14}: it is only useful on smaller scales (although the scale beyond which it is inaccurate is not well-defined), and it is not very useful if, as is often the case, one is interested in its Fourier transform. \subsection{Evolution of bias} Next, note that if the comoving number density of biased tracers is conserved, then the bias factors at two different times are related: \begin{equation} b_{\rm E}(a_1) - 1 = (D_2/D_1)\,(b_{\rm E}(a_2) - 1). \label{bconserved} \end{equation} \citep{ck89, nd94}. It is conventional to define \begin{equation} b_{\rm L} \equiv b_{\rm E}(a=1) - 1 \end{equation} \citep{mw96}, so that \begin{equation} b_{\rm E}(a) = (D_a + b_{\rm L})/D_a. \label{bEbL} \end{equation} This is a rather generic argument which implies that the amplitudes of the Lagrangian and Eulerian cross-correlation functions should be different. In contrast, simply using equation~(\ref{dEdL}) to evolve the Lagrangian profile into an Eulerian one will result in no change to the profile (shape or amplitude) on the large scales where $\delta_{\rm L}\ll 1$. 
Namely, to leading order in $\delta_{\rm L}$, the spherical evolution equation~(\ref{dEdL}) has \begin{equation} \delta_{\rm E}(<R_{\rm E}|a) \approx D_a\,\delta_{\rm L}(<R_{\rm L}) \approx D_a\,b_{\rm L}\, \bar\xi_{\rm mm}(<r|a=1). \label{bnomotion} \end{equation} Compared to equation~(\ref{linearb}) with equation~(\ref{bEbL}), this is missing a term, $D_a^2\,\bar\xi_{\rm mm}$. In suggestive notation, if we define $\delta_0\equiv \delta_m(a_i)/D_i$ and $\delta_b(a)\equiv b_{\rm E}(a)\,\delta_m(a)$ then \begin{align} \langle\delta_b(a)\delta_m(a)\rangle &= b_{\rm E}(a)\,\langle\delta_m(a)\delta_m(a)\rangle \nonumber\\ &\approx (D_a + b_{\rm L})\,\langle\delta_0\,\delta_m(a)\rangle, \end{align} where we have approximated $\delta_m(a)/D_a \approx \delta_m(a_i)/D_i$. This expresses the cross-correlation between biased tracers (e.g. halos or voids) and the matter field at the same epoch $a$, and should be reasonably accurate on large scales. Cross-correlating the positions of the biased tracers at $a=1$ with the mass at any other $a\le 1$ yields \begin{equation} \langle\delta_b(a=1)\delta_m(a)\rangle \approx (1+b_{\rm L})\,\langle\delta_0\,\delta_m(a)\rangle \end{equation} since $D_a=1$ when $a=1$. Alternatively, cross-correlating the protohalo or protovoid positions at $a_i\ll 1$ with the mass at any $a$ yields \begin{equation} \langle\delta_b(a_i)\delta_m(a)\rangle \approx b_{\rm L}\,\langle\delta_0\,\delta_m(a)\rangle, \end{equation} where we have assumed $D_i\to 0$. Thus, the Lagrangian and Eulerian density profiles (i.e., the profiles around the biased positions at $a_i$ and at $a=1$) are expected to be different, even on large scales. \begin{figure} \centering \includegraphics[width=0.9\hsize]{evolution_bias_dv0dmz_dm0dmz_00eV_256_3part_1Gpc_16-18Mpc_rebink} \caption{The void-matter cross power spectrum -- i.e. the Fourier transform of the density profile around positions identified as void centers at $z=0$. 
At small $k$ (i.e., on large scales) the profile around these positions is the same at all redshifts.} \label{b0} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\hsize]{evolution_bias_dvidmz_dmidmz_00eV_256_3part_1Gpc_16-18Mpc_rebink} \caption{Same as previous figure, but now showing the evolution of the matter distribution around the protovoid centers in the initial conditions ($z=99$). Again, there is no evolution at small $k$. However, the small $k$ limit differs by $-1$ from that in the previous figure.} \label{b99} \end{figure} Figure~\ref{b0} shows the Fourier transform of the void-matter cross-correlation function using the $z=0$ void centers and the matter distribution at other redshifts. Although there are large differences at large $k$, there is no evolution at small $k$. Figure~\ref{b99} shows a similar study of the evolution around the protovoid centers at $z=99$. Again, there is no evolution at small $k$. However, the $k\to 0$ limit, the large scale bias factor, differs from that in the previous figure: the difference between the two bias factors equals unity, in agreement with equation~(\ref{bEbL}). The large scale amplitude and detailed scale dependence are different if one considers larger or smaller voids (larger voids have more negative bias), but the fact that large scale bias does not evolve if the void center is fixed to its position at one epoch is generic. And, if one uses the $z$-dependent protovoid center when cross-correlating with the matter distribution at $z$, then equation~(\ref{bEbL}) provides an excellent description of the large scale bias. \subsection{Evolution and motion} In the context of peak theory, equation~(\ref{bEbL}) is a consequence of the displacement of proto-peaks (or troughs) from their initial positions \citep{dcss10}. 
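The consistency of equations~(\ref{bconserved}) and~(\ref{bEbL}), and the unit offset between the two small-$k$ limits above, can be checked numerically; the Lagrangian bias value is an assumption chosen for illustration.

```python
def b_E(D_a, b_L):
    """Eulerian bias of conserved tracers, equation (bEbL)."""
    return (D_a + b_L) / D_a

b_L = -3.0          # assumed Lagrangian void bias, for illustration only
D1, D2 = 0.5, 1.0   # growth factors at two epochs

# Equation (bconserved): b_E(a1) - 1 = (D2/D1) * (b_E(a2) - 1)
lhs = b_E(D1, b_L) - 1.0
rhs = (D2 / D1) * (b_E(D2, b_L) - 1.0)

# At a = 1 (D_a = 1) the Eulerian and Lagrangian biases differ by unity,
# which is the offset between the small-k limits of Figures b0 and b99:
b_today = b_E(1.0, b_L)  # equals 1 + b_L
```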
Massara \& Sheth (in prep) show that, if the biased tracers do not move, then the same reasoning as in \cite{dcss10} leads to $b_{\rm E}(a) = b_{\rm L}/D_a$ rather than equation~(\ref{bEbL}). This makes $\langle\delta_b(a)\delta_m(a)\rangle\approx b_{\rm L}\,\langle\delta_0\,\delta_m(a)\rangle$ whether one centers on positions at $a_i$ or $a=1$, since, because they do not move, $\delta_b(a) = b_{\rm L}\,\delta_m(a)/D_a = b_{\rm L}\,\delta_0$ is independent of $a$. The fact that the $k\to 0$ limit in Figure~\ref{b0} differs from that in Figure~\ref{b99} by unity indicates that voids move. Figure~\ref{fig:displacements} shows a more direct measure of this motion: the distance between the void and protovoid centers, plotted as a function of void size. Voids at $z=0$ are typically about $6h^{-1}$Mpc away from where they started, but this displacement is approximately independent of void size. Since this displacement is a smaller fraction of the size of a larger void, one might have thought that neglecting void motion would be an acceptable approximation for the larger voids. While this is true, we have shown that, in fact, this motion has a significant impact on the void-matter cross-correlation in Fourier space. The only sense in which the impact is smaller for large voids is that they tend to have $|b_{\rm L}|\gg 1$ so, for large voids, $b_{\rm L}+1\approx b_{\rm L}$. \begin{figure} \centering \includegraphics[width=0.95\hsize]{displacement_Ndensity_voids_z0_cdm_256_3part_1Gpc_rightCenter.pdf} \caption{Displacement $d$ between the void center at $z=0$ and the center of the protovoid patch from which it formed, as a function of void size $R_{\rm eff}$. 
The color coding is proportional to the fraction of voids of a given size which have displacement $d$, and shows that the distribution of $d$ is the same for all $R_{\rm eff}$.} \label{fig:displacements} \end{figure} In summary: Using equation~(\ref{dEdL}) to model nonlinear evolution leads to the prediction that the large scale bias does not evolve. Non-evolving large-scale bias is only correct if the center of collapse (or expansion) is not moving. If it does move, then the Eulerian and Lagrangian bias factors should differ by unity. \subsection{An improved model} Having motivated why equation~(\ref{dEdL}) does not model the evolution of the void-matter cross-correlation function, we now test the predictions of a simple extension which accounts crudely but effectively for void motions. Namely, we consider \begin{equation} 1 + \delta_{\rm E}(<R_{\rm E}|a) = 1 + \delta_{\rm Sph}(<R_{\rm E}|a) + D_a\,\delta_2(R_{\rm E}|a) \label{dEdLcombined} \end{equation} where $\delta_{\rm Sph}$ is given by equation~(\ref{dEdL}) and \begin{equation} \delta_2(R_{\rm E}|a) = D_a\int \frac{{\rm d}k\,k^2}{2\pi^2}\,P_{\rm L}(k)\, b_{\rm v}(kR_p)\, W(kR_p)\,W_{\rm TH}(kR_{\rm E}) \label{d2} \end{equation} with \begin{equation} W_{\rm TH}(x) = 3\, (\sin x - x\,\cos x)/x^3 , \label{Wth} \end{equation} \begin{equation} W(x) = W_{\rm TH}(x)\,{\rm e}^{-(x/5.5)^2/2} , \label{Weff} \end{equation} and \begin{equation} b_{\rm v}(kR_p) = 1 - k^2 \,s_0^{pp}/s_1^{pp} \label{bvel} \end{equation} with \begin{equation} s_j^{pq} = \int {\rm d}k\,\frac{k^2P(k)}{2\pi^2}\, W(kR_p)\,W_q(kR_q)\, k^{2j}, \label{sjpq} \end{equation} where $W_q = W(kR_q)$ when $R_q=R_p$ but $W_q = W_{\rm TH}(kR_q)$ otherwise. The addition of $\delta_2$ is a crude way of accounting for the correlated pairs which would be present even if $b_{\rm L} = 0$. Its form is motivated by the excursion set peaks approach which we describe in Appendix~A. 
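The ingredients of equation~(\ref{d2}) can be sketched with a simple quadrature; the power spectrum shape and normalization below are arbitrary assumptions, for illustration only.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def W_TH(x):
    """TopHat window in Fourier space, equation (Wth)."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x ** 3

def W_soft(x):
    """TopHat with softened edges, equation (Weff)."""
    return W_TH(x) * np.exp(-0.5 * (x / 5.5) ** 2)

k = np.logspace(-3.0, 1.0, 4000)  # wavenumbers in h/Mpc
P = 1.0e3 * k ** -2               # toy spectrum; normalization is arbitrary

def s_moment(j, Rp, Rq):
    """Spectral moments s_j^{pq} of equation (sjpq)."""
    Wq = W_soft(k * Rq) if Rq == Rp else W_TH(k * Rq)
    y = k ** 2 * P / (2 * np.pi ** 2) * W_soft(k * Rp) * Wq * k ** (2 * j)
    return trapz(y, k)

def delta_2(R_E, Rp, D_a=1.0):
    """Equation (d2), with the velocity bias b_v of equation (bvel)."""
    b_v = 1.0 - k ** 2 * s_moment(0, Rp, Rp) / s_moment(1, Rp, Rp)
    y = k ** 2 * P / (2 * np.pi ** 2) * b_v * W_soft(k * Rp) * W_TH(k * R_E)
    return D_a * trapz(y, k)

d2_out = delta_2(R_E=30.0, Rp=15.0)  # cross-correlation scale outside R_p
```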
Briefly, the smoothing windows are related to the void size and to the fact that we are measuring the cross-correlation on scale $R_{\rm E}$. Although (in the context of the spherical evolution model) a TopHat filter is the most natural choice, we have included an exponential factor in equation~(\ref{Weff}) which softens the edges of the TopHat. This is motivated by the work of \cite{css17w}. In addition to the smoothing windows, $\delta_2$ also depends on $b_v$ (equation~\ref{bvel}). First, note that if we set $b_v\to 0$, then $\delta_2\to 0$, and the model reduces to the original model of isolated spherical evolution. The $k$-dependence of $b_v$ is motivated by the excursion set peaks/troughs approach, in which peak/trough motions are expected to differ from that of the dark matter \citep{bbks86,sd01,ds10,dcss10}. Notice that $\delta_2(R_p)\approx 0$ since most of the contribution to the integral comes from where the difference between $W$ and $W_{\rm TH}$ is small. We have also explored replacing \begin{equation} D_a\,\delta_2(R_{\rm E}) \to \Bigl[1 - D_a\delta_2(R_{\rm L})/\delta_c\Bigr]^{-\delta_c} - 1, \end{equation} or setting $\delta_2=0$, and replacing \begin{equation} D_a\,\delta_{\rm L}\to D_a\,(D_ab_v + b_{\rm L})\,\delta_{\rm L}. \end{equation} The former tries to write the additional pairs as perturbations to linear theory, whereas the latter subsumes everything into a single spherical-collapse-like term. To lowest order, all three models for $1+\delta_{\rm E}$ are the same, so in what follows, we stick with the simplest case (equation~\ref{dEdLcombined}). \begin{figure} \centering \includegraphics[width=0.95\hsize]{evolution2_complete_sub2h_sim_profile_cdm_00eV_256_3part_1Gpc_rightCenter_rightW.pdf} \caption{Mean enclosed void density profiles -- i.e. the volume averaged void-matter cross-correlation function -- at $z=2, 1, 0.5$ and 0. 
Solid lines show the measurement (same as Figure~\ref{fig:onlySC}) and dashed lines show the prediction from equation~(\ref{dEdLcombined}), which combines the spherical model with an additional term which allows for the evolution of large scale bias. The dot-dashed curves show this extra term. } \label{fig:complete} \end{figure} \subsection{Tests of the modified model} Figure~\ref{fig:complete} shows that this model (dashed) describes the large scale profiles (solid) much better than the spherical model alone (compare solid and dashed curves in Figure~\ref{fig:onlySC}). Here, as before, the profile shape at $z=2$ is used to predict the shape at later times. The only difference is that now we use equation~(\ref{dEdLcombined}) to model the evolution. Dot-dashed lines show the extra term compared to equation~(\ref{dEdL}). Except for the few Mpc around the scale where the profile begins to climb upwards, this extra term leads to a more accurate prediction. Figure~\ref{fig:dLag} shows an additional test. The solid curve shows the measured Lagrangian profile $\delta_{\rm L}$, and dot-dashed and dashed curves show the prediction, based on the measured $z=2$ profile and equations~(\ref{dEdL}) and~(\ref{dEdLcombined}), respectively. The bump in the measurements on small scales is an artifact of the fact that the mean particle separation is $\sim 4h^{-1}$Mpc: the measurements are only reliable on larger scales. This bump is the reason why we chose to normalize our model at $z=2$, and then evolve it to lower and higher redshifts, rather than normalizing directly in the initial conditions. The dashed lines are clearly in better agreement with the solid ones, indicating that equation~(\ref{dEdLcombined}) is more accurate than equation~(\ref{dEdL}). We remarked earlier that, on large scales, our equation~(\ref{dEdLcombined}) has the profile scaling as $D_a (D_a + b_{\rm L})$ (we have set $b_v=1$ in equation~\ref{xi_largeScale}), with $0< D_a<1$. 
This shows that the evolution is initially dominated by the piece that is linear in $D_a$, which will have the same sign as $b_{\rm L}$. At late times, the $D_a^2 > 0$ term will also matter. Therefore, if $b_{\rm L}<0$, then the profile initially becomes more negative as $D_a$ increases. The `turnaround' of the amplitude, due to the $D_a^2$ term, will happen when $d[D_a (D_a + b_{\rm L})]/dD_a = 2D_a + b_{\rm L} = 0$. I.e., for $D_a > -b_{\rm L}/2$ the amplitude will begin to increase. However, if $b_{\rm L}$ is too negative, then the critical $D_a$ will be greater than unity; this means that the profile becomes increasingly negative all the way to $z=0$. This explains why, for the solid curves in Figure~\ref{fig:onlySC}, the late time profile for small voids lies above that at earlier times, whereas the late time profile for big voids becomes ever more negative. Our evolution model, equation~(\ref{dEdLcombined}), is phrased in a rather generic way. Except for our choice for $b_v$, it is not tied to a particular functional form for $\delta_{\rm L}$. Having shown that it is accurate, it is interesting to ask if the shape of $\delta_{\rm L}$ can be predicted from first principles. We address this in Section~\ref{sec:ESTmodel}. \begin{figure} \centering \includegraphics[width=0.95\hsize]{Lagrangian_profile_comparison_00eV_z0_cdm_10realiz_256_3part_1Gpc.pdf} \caption{Measured (solid) and predicted Lagrangian profiles. The predictions result from combining the profile at $z=2$ with equations~(\ref{dEdL}) (dot-dashed) and~(\ref{dEdLcombined}) (dashed) to predict $\delta_{\rm L}$ at $z=99$. 
Equation~\ref{dEdLcombined} produces a more accurate description of the measurements.} \label{fig:dLag} \end{figure} \subsection{Velocities}\label{sec:v12} The pair conservation (or continuity) equation states that \begin{equation} \frac{\partial\bar\xi_{\rm bm}(<r|a)}{\partial\ln a} = -\frac{v_{\rm bm}(r|a)}{Hr}\,3\,[1+\xi_{\rm bm}(r|a)], \label{continuity} \end{equation} where $v_{\rm bm}(r|a)$ is the mean relative velocity of all biased tracer-matter pairs separated by $r$ when the expansion factor is $a$ (e.g. Peebles 1980). On the large scales where linear theory applies, \begin{equation} \xi_{\rm bm}(r|a) = D_a\,(D_a b_v + b_{\rm L})\,\xi_{\rm mm}(r|a=1). \label{xi_largeScale} \end{equation} \citep{sdhs01}. (Strictly speaking, this expression ignores the $k$-dependence of $b_v$ and $b_{\rm L}$; to include it correctly, we must think of $\xi_{\rm mm}$ as an integral over $P(k)$, and include both $b_v$ and $b_{\rm L}$ in the integrand.) This, in the continuity equation, implies \begin{equation} [b_v + b_{\rm E}(a)]\,f_a\,\bar\xi_{\rm mm}(r|a) = -\frac{v_{\rm bm}(r|a)}{Hr}\,3\,[1+\xi_{\rm bm}(r|a)], \label{v12lin} \end{equation} where $f_a\equiv \partial\ln D_a/\partial\ln a$, so that \begin{equation} v_{\rm bm}(r|a) = \frac{[b_v + b_{\rm E}(a)]}{2}\,v_{\rm mm}(r|a)\,\frac{1+\xi_{\rm mm}(r|a)}{1+\xi_{\rm bm}(r|a)} \end{equation} \citep{sdhs01}. This relates the pairwise velocity between biased tracers and the mass to that of the dark matter. In the current context, it is interesting to write equation~(\ref{v12lin}) as \begin{equation} \frac{v_{\rm bm}(r|a)}{f_a\,Hr}\,[1+\xi_{\rm bm}(r|a)] = - \frac{\bar\xi_{\rm bm}(r|a)}{3} - b_v\,\frac{\bar\xi_{\rm mm}(r|a)}{3}. \label{v12shds} \end{equation} Previous work \citep{HSW_2014,dchp16} has missed the (pair-weighting) term in square brackets on the left hand side, and the second term on the right hand side (the one proportional to $b_v$). 
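Equation~(\ref{v12shds}) is straightforward to encode; the correlation function values below are illustrative assumptions rather than measurements.

```python
def v_bm_over_Hr(xibar_bm, xibar_mm, xi_bm, b_v, f_a):
    """Mean pairwise void-matter velocity in units of H*r, equation (v12shds),
    keeping both the pair-weighting factor 1+xi_bm and the b_v term."""
    return -f_a * (xibar_bm + b_v * xibar_mm) / (3.0 * (1.0 + xi_bm))

# Illustrative large-scale values around a void (assumed, not measured):
v_full = v_bm_over_Hr(xibar_bm=-0.30, xibar_mm=0.05, xi_bm=-0.20,
                      b_v=1.0, f_a=0.5)
# b_v -> 0 corresponds to assuming the void centers do not move:
v_fixed = v_bm_over_Hr(xibar_bm=-0.30, xibar_mm=0.05, xi_bm=-0.20,
                       b_v=0.0, f_a=0.5)
```

Both values are positive (an outflow, since $\bar\xi_{\rm bm}<0$ around a void), but dropping the $b_v$ term changes the prediction, which is the correction that matters when $|b_{\rm E}|$ is not large compared to unity.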
It is interesting to compare this with the statistical definition \citep[e.g.][]{kf95}: \begin{equation} v_{\rm bm}(r|a) \equiv \frac{\langle (1+\delta_m)(1+\delta_b)(v_m-v_b)\rangle}{\langle(1+\delta_m)(1+\delta_b)\rangle}. \end{equation} In linear theory, velocities and densities at the same position are uncorrelated, so \begin{align} v_{\rm bm}(r|a) &\equiv \frac{\langle \delta_m v_b - \delta_bv_m\rangle}{\langle(1+\delta_m)(1+\delta_b)\rangle} = \frac{\langle \delta_m b_v v_m - b_{\rm E}\delta_mv_m\rangle}{\langle(1+\delta_m)(1+\delta_b)\rangle}\nonumber\\ &= \frac{b_v + b_{\rm E}}{2}\, \frac{2\langle\delta_m v_m\rangle}{1+\xi_{\rm bm}} = f_aHr\,\frac{b_v + b_{\rm E}}{2}\, \frac{2\bar\xi_{\rm mm}/3}{1+\xi_{\rm bm}}\nonumber\\ &= f_aHr\,\frac{b_v\bar\xi_{\rm mm} + \bar\xi_{\rm bm}}{3\,(1+\xi_{\rm bm})}, \end{align} where $b_v$ accounts for the fact that void speeds may differ from those of the dark matter. Note that this agrees with equation~(\ref{v12shds}). However, this formulation makes it easy to see the effect of assuming voids do not move: simply set $b_v\to 0$. Doing so makes $v_{\rm bm}\to \langle\delta_b v_m\rangle = -(Hr)\,f_a\,\bar\xi_{\rm bm}/3$. However, since voids do move (Figure~\ref{fig:displacements}), there are corrections to this simple expression which should not be ignored if $|b_{\rm E}|$ is not large compared to unity. Of course, we can go beyond linear theory by inserting equation~(\ref{dEdLcombined}) for $1 + \bar\xi_{\rm bm}$ in equation~(\ref{continuity}). 
This yields \begin{align} \label{v12full} -\frac{v_{\rm bm}}{f_a\,Hr}\,3\,[1+\xi_{\rm bm}] &= D_a\delta_{\rm L} \,\frac{1+\delta_{\rm Sph}}{1 - D_a\delta_{\rm L}/\delta_c} + 2D_a \delta_2 \\ &= \bar\xi_{\rm bm} + D_a\delta_2 - \delta_{\rm Sph} \nonumber\\ & \qquad + \delta_c\,(1+\delta_{\rm Sph})\left[(1+\delta_{\rm Sph})^{1/\delta_c} - 1\right] \nonumber \end{align} where $v_{\rm bm}$ and $\xi_{\rm bm}$ are on scale $r$, $\delta_{\rm Sph}$ is within $r$, and $\delta_{\rm L}$ is within $R_{\rm L}$. It is easy to check that, on large scales, this reduces to the linear theory analysis above. The second equality writes $v_{\rm bm}$ as $\bar\xi_{\rm bm} + D_a\delta_2\ + $ additional terms which can be expressed in terms of $\delta_{\rm Sph} = \bar\xi_{\rm bm} - D_a\delta_2$ (c.f. equation~\ref{dEdLcombined}). This highlights the terms which are missing from previous analyses. \begin{figure} \centering \includegraphics[width=0.95\hsize]{velocity_profile_z0_cdm_10realiz_256_3part_1Gpc_rightCenter_overdvm.pdf} \caption{Measured (black) and predicted (magenta and red) pairwise velocity for void-dark matter pairs, as a function of separation at $z=0$, for the same void samples as shown in the bottom two panels of the previous figure. Dashed curves ignore the contribution from the motion of void centers; solid curves include it. Magenta curves show the linear theory prediction (equation~\ref{v12shds} with $b_v=0$ or with $b_v$ given by equation~\ref{bvel}), and red curve shows the full signal (equation~\ref{v12full}). 
} \label{fig:v12} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\hsize]{velocityAdim_profile_z0_cdm_10realiz_256_3part_1Gpc_rightCenter_overdvm.pdf} \caption{Same as previous figure, but now for $v_{\rm bm}/Hr$.} \label{fig:v12Hr} \end{figure} Figure~\ref{fig:v12} compares the measured $v_{\rm bm}$ with the linear and nonlinear predictions, and Figure~\ref{fig:v12Hr} shows a similar comparison of measured and predicted $v_{\rm bm}/Hr$, since this highlights the similarity to $\bar\xi_{\rm bm}$ on large scales. To illustrate the effect of void motions on the outflow velocity `profile', the solid black curves in Figure~\ref{fig:v12} show \begin{displaymath} \avg{v_{12}(r)} \equiv \frac{\sum_{i=1}^{N_v} \sum_{m=1}^{N_p}{\cal I}_{mi}\,(\mathbf{v}_m - B\mathbf{v}_i)\cdot \mathbf{r}_{mi}/|\mathbf{r}_{mi}|}{\sum_{i=1}^{N_v} \sum_{m=1}^{N_p}\,{\cal I}_{mi}} \end{displaymath} with $B=1$, where $N_v$ and $N_p$ are the number of voids and dark matter particles in the simulation box, $\mathbf{r}_{mi}\equiv \mathbf{r}_m-\mathbf{r}_i$ is the separation between void $i$ and particle $m$, and ${\cal I}_{mi}=1$ if $|\mathbf{r}_{mi}|$ is within $\Delta r$ of $r$. The dashed black curves show this same sum but with $B=0$; this clearly ignores void motions, so the dashed black curves are like the measurements shown in \cite{HSW_2014} (although they used an additional Voronoi weighting scheme). The dashed and solid curves are different on all scales, showing that void motions matter. The corresponding magenta curves show the linear theory predictions: dashed shows equation~(\ref{v12shds}) with $b_v=0$, and solid uses $b_v\ne 0$ of equation~(\ref{bvel}). The solid red curve shows the full nonlinear prediction (equation~\ref{v12full}). While it does not reproduce the measurements accurately on small scales, it does fare better than linear theory. 
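The estimator displayed above is straightforward to implement. The following sketch uses toy random data (the array names and sizes are purely illustrative, not those of our simulations) and mirrors the displayed sum, with the switch $B$ toggling whether void motions are included:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical toy catalogue: void centres/velocities and particle
# positions/velocities in a periodic-free 100 (arbitrary units) box.
Nv, Np = 5, 2000
r_void = rng.uniform(0, 100, (Nv, 3)); v_void = rng.normal(0, 2, (Nv, 3))
r_part = rng.uniform(0, 100, (Np, 3)); v_part = rng.normal(0, 5, (Np, 3))

def v12(r, dr, B):
    """Pairwise-velocity estimator of the text: B=1 includes void motion,
    B=0 ignores it (as in measurements that only sum particle velocities)."""
    num, den = 0.0, 0
    for i in range(Nv):
        sep = r_part - r_void[i]              # r_mi = r_m - r_i
        d = np.linalg.norm(sep, axis=1)
        sel = np.abs(d - r) < dr              # indicator I_mi
        if not sel.any():
            continue
        rel = v_part[sel] - B * v_void[i]     # v_m - B v_i
        num += np.sum(np.einsum('ij,ij->i', rel, sep[sel]) / d[sel])
        den += int(sel.sum())
    return num / den if den else float('nan')

with_voids    = v12(r=20.0, dr=2.0, B=1)      # solid-curve convention
without_voids = v12(r=20.0, dr=2.0, B=0)      # dashed-curve convention
```

With real data, the difference between the two calls is the signature of void motions discussed in the text.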
The agreement with the measurements improves slightly if we do not divide the predictions by $1+\xi_{\rm bm}$ (see Figures~\ref{fig:v12_wrong} and~\ref{fig:v12Hr_wrong}). This factor, which normalizes the predictions by the number of pairs, as is done for the measurement, is often ignored. Ignoring it happens to work well for velocities onto clusters as well \citep{sz09}. \subsection{Comparison with some previous work} Before we move on to modeling these results, it is interesting to contrast our treatment of void density and velocity profiles with previous work. As we noted above, setting $b_v\to 0$ in our equation~(\ref{v12shds}) yields the expression used by \cite{HSW_2014} (though they ignore the $1+\xi_{\rm bm}$ factor). They report good agreement with their analysis, so it is necessary to address why. Equation~(3) of \cite{HSW_2014} shows that they only sum over particle velocities -- they explicitly do not include the void velocity when measuring the `pairwise' velocity in their simulations. This means they have unwittingly set $b_v=0$ in their measurements, which is why they find no evidence for this term. The agreement between the dashed curves in Figure~\ref{fig:v12} shows that, at least on large scales, if the measurement ignores void motions, then one does not do too badly if one ignores them in the theory also. \begin{figure} \centering \includegraphics[width=0.95\hsize]{velocity_profile_z0_cdm_10realiz_256_3part_1Gpc_rightCenter.pdf} \caption{Same as Figure \ref{fig:v12}, but now the predictions have not been divided by $1+\xi_{\rm bm}$. } \label{fig:v12_wrong} \end{figure} \cite{dchp16} describe a study of the spherical evolution of cosmic voids; while they find encouraging agreement for voids that are larger than those we have studied, they report that their methodology fails for smaller voids. Our analysis shows why. Demchenko et al.
use the void center defined at $z=0$ and ask if the spherical model can describe how the profile around this position evolved. However, voids move (see our Figure~\ref{fig:displacements}), and there is no reason why the spherical model should apply around a position which is not comoving with the center. E.g., in our Figure~\ref{fig:onlySC}, spherical evolution alone is obviously a bad description of the measured evolution, most especially on large scales, because the measured bias evolves. Why, then, did they find good agreement with their measurements? There are two parts to the answer. First, their measurements of the void profile at all other $z$ were always centered on the $z=0$ position. Therefore, their measurements are the real-space analogue of our Figure~\ref{b0}, for which the large scale bias does not evolve! As a result, the failure of the spherical model on large scales, which is so obvious in our analysis, was hidden by the measurement they chose to model. Second, our Figure~\ref{fig:displacements} shows that void motions are approximately independent of void size, so ignoring them is most problematic for small voids. \cite{dchp16} considered voids with effective sizes that were much larger than the typical void displacements: presumably this is why they found that agreement with the model degraded as void size decreased. \section{The Excursion Set Troughs model}\label{sec:ESTmodel} Having shown that we have a reasonably accurate model for the evolution of the void-matter cross-correlation function, we now ask if its shape is understood. For this, we use the excursion set troughs model (Section~4.6 in Sheth \& van de Weygaert 2004), with the technical improvements described in Paranjape et al. (2013). Briefly, although the approach is most often used to predict abundances of objects from the statistics of the initial fluctuation field, it actually includes a model for the Lagrangian space profiles around the protohalo or protovoid patches. 
This is a direct consequence of recognizing the often overlooked fact that the cross-correlation between the patches and the mass {\em is} the mean density profile \citep{bbks86,rks98}, and, for the excursion set approach, this Lagrangian cross-correlation is known \citep{ms12,mps12}. In what follows, we first write down the Lagrangian cross-correlation function, and show that it provides a reasonable fit to the Lagrangian measurements. We then insert the Lagrangian profile in equation~(\ref{dEdLcombined}) and fit it to the evolved Eulerian profile. Agreement between the fitted coefficients in Lagrangian and Eulerian space is an indication that all is self-consistent. \begin{figure} \centering \includegraphics[width=0.95\hsize]{velocityAdim_profile_z0_cdm_10realiz_256_3part_1Gpc_rightCenter.pdf} \caption{Same as Figure~\ref{fig:v12Hr}, but now the predictions have not been divided by $1+\xi_{\rm bm}$.} \label{fig:v12Hr_wrong} \end{figure} \subsection{EST Lagrangian protovoid profiles} In Appendix~A we discuss why the Lagrangian cross-correlation function -- the density profile centered on positions which are Excursion Set Troughs on scale $R_p$ (EST$_p$) -- is well-approximated by \begin{equation} \delta_{\rm L}(<R_q|{\rm EST}_p) = b_{10}\,s_0^{pq} + b_{01}\,2\frac{\mathrm{d} s_0^{pq}}{\mathrm{d}\ln s_0^{pp}}, \label{dprofile} \end{equation} where $s_0^{pp}$ and $s_0^{pq}$ were defined earlier (equation~\ref{sjpq}), and \begin{equation} b_{10} + b_{01} = \frac{\avg{\!\delta_p\!}}{s_0^{pp}} \label{bconsistency} \end{equation} where $\avg{\!\delta_p\!}$ denotes the typical value of the Lagrangian overdensity enclosed within a protovoid of size $R_p$ (Castorina et al. 2017; Chan et al. 2017). Usually, this value is known: e.g., the simplest expectation for voids is that it equals $-2.7$. 
However, in the present case, we do not have an a priori prediction for $\avg{\!\delta_p\!}$ because, as we noted in our discussion of the spherical model, this value is rather sensitive to the details of the void finder \citep[also see][]{Nadathur:2015}. Indeed, not only is it not known, but, in contrast to the EST model in which the mean Lagrangian profile is obtained by stacking voids of fixed Lagrangian size, here, we stack voids based on their Eulerian rather than Lagrangian size. Therefore, in what follows, we will treat $b_{10}$ and $b_{01}$ as free parameters, which will be determined by fitting to the measured Lagrangian profile. We will then check if the fitted values do indeed estimate $\delta_p$ well (following equation~\ref{bconsistency}). However, predicting how these parameters depend on void size is beyond the scope of this work. In practice, we do not fit to the solid curves in Figure~\ref{fig:dLag} because of the problems on (small) scales that are comparable to the interparticle separation. Rather, we fit to the dot-dashed curves shown there. These were predicted by our evolution model from the measured profile at $z=2$; they provide an excellent description of the Lagrangian profile, but do not suffer from artifacts on small scales. In addition, when fitting, we set $R_p=R_{\rm eff}/2$, where $R_{\rm eff}$ is the scale shown in the legend in the upper left of the figure. This sets the values of $s_0^{pp}$ and $s_0^{pq}$.
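Since equation~(\ref{dprofile}) is linear in $b_{10}$ and $b_{01}$, the fit reduces to ordinary least squares once $s_0^{pq}$ and its logarithmic derivative are tabulated. A minimal sketch (the arrays below are placeholders, not values computed from the actual $P(k)$; the mock `measurement' reuses coefficients of the same order as our fitted values, with small noise added):

```python
import numpy as np

# Placeholder spectral moments at a few smoothing scales R_q (not the
# real s0^{pq} values, which require integrals over P(k)).
s0pq  = np.array([0.60, 0.45, 0.30, 0.18, 0.10])   # cross variance s0^{pq}
ds0pq = np.array([0.50, 0.40, 0.28, 0.17, 0.10])   # 2 d s0^{pq} / d ln s0^{pp}

# Mock "measured" Lagrangian profile: equation (dprofile) plus noise.
true_b10, true_b01 = -0.54, -0.72
profile = true_b10 * s0pq + true_b01 * ds0pq
profile += 1e-4 * np.array([1.0, -1.0, 1.0, -1.0, 1.0])

# Equation (dprofile) is linear in (b10, b01): ordinary least squares.
A = np.column_stack([s0pq, ds0pq])                 # design matrix
(b10, b01), *_ = np.linalg.lstsq(A, profile, rcond=None)
# b10, b01 recover the input coefficients up to the injected noise
```

The same two-parameter linear fit underlies the entries of Table~\ref{tab:lag_fit}, with the real $s_0^{pq}$ arrays in place of the placeholders.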
\begin{table} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline $R_{\rm eff}$ [$h^{-1}$Mpc] & $b_{10}$ & $b_{01}$ & $s_0^{pp}$ & $s_1^{pp}$ & $\delta_p$ \\ \hline $14-16$ & $-0.54$ & $-0.72$ & 0.71 & 0.03 & $-0.90$ \\ \hline $16-18$ & $-0.75$ & $-0.95$ & 0.60 & 0.02 & $-1.02$ \\ \hline $20-25$ & $-1.5$ & $-1.7$ & 0.39 & 0.01 & $-1.26$ \\ \hline \end{tabular} \caption{\label{tab:lag_fit}Results from fitting equation~(\ref{dprofile}) to the Lagrangian profile.} \end{center} \end{table} \begin{figure} \centering \includegraphics[width=0.95\hsize]{fit_prediction_Lprofile_cdm_10realiz_256_3part_1Gpc_rightCenter_rightW.pdf} \caption{Profiles of the form given by \eqn{dprofile} (dashed) with free parameters $b_{10}$ and $b_{01}$ determined by fitting to the Lagrangian profile (solid, same as dot-dashed in Figure~\ref{fig:dLag}).} \label{fig:L_profile_fit} \end{figure} Table~\ref{tab:lag_fit} gives the best-fitting parameters; the fits themselves are shown as dashed lines in Figure~\ref{fig:L_profile_fit}. Although the agreement with the measured solid curves is not spectacular -- the predicted shape is flatter than measured on the inner side of the void wall -- it does suggest that the excursion set troughs approach provides a reasonably good framework within which to describe voids. As a final consistency check, the final column of Table~\ref{tab:lag_fit} gives the value of $\delta_p$ predicted by equation~(\ref{bconsistency}). It is in rather good agreement with $\delta_{\rm L}(<R_p)$ on the scale $R_p=R_{\rm eff}/2$ shown in the Figure. \subsection{EST + spherical evolution + extra term} We now test if our ESP model for the Lagrangian profile shape, when combined with equation~(\ref{dEdLcombined}) for the evolution, is able to provide a good description of the Eulerian profiles in the simulation. As before, we keep the structure of the model fixed, but fit for the coefficients $b_{10}$ and $b_{01}$. 
If these turn out to be the same as those in Table~\ref{tab:lag_fit}, then this will be an indication that our approach is self-consistent. \begin{figure} \centering \includegraphics[width=0.95\hsize]{fit_prediction_Eprofile_rminFit_Reff_belowReffover55_Lterm_cdm_10realiz_256_3part_1Gpc_rightCenter_rightW_Rp22_linScale.pdf} \caption{Profiles of the form given by inserting \eqn{dprofile} in \eqn{dEdLcombined} (dashed) with free parameters $b_{10}$ and $b_{01}$ determined by fitting to the Eulerian profile (solid, same as Figure~\ref{fig:complete}).} \label{fig:E_profile_fit} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\hsize]{fit_prediction_Lprofile_fromEprofile_rminFit_Reff_belowReffover55_Lterm_cdm_10realiz_256_3part_1Gpc_rightCenter_rightW_Rp22_linScale.pdf} \caption{Profiles of the form given by \eqn{dprofile} (dashed) with free parameters $b_{10}$ and $b_{01}$ determined by fitting to the Eulerian profile (solid, same as dot-dashed in Figure~\ref{fig:dLag}).} \label{fig:Efit2L} \end{figure} Figure~\ref{fig:E_profile_fit} shows the result of fitting the Eulerian profiles (solid, same as Figure~\ref{fig:complete}) with the shape that is obtained by inserting \eqn{dprofile} in \eqn{dEdLcombined} (dashed) and fitting for $b_{10}$ and $b_{01}$. In practice, because evolution and averaging do not commute (Figure~\ref{fig:noCommute}) we restrict the fit to scales $R<R_{\rm eff}/5.5$ and $R>R_{\rm eff}$. The best-fit values are listed in Table~\ref{tab:eul_fit}. Comparison with those in Table~\ref{tab:lag_fit} shows reasonable agreement, especially for large voids. Figure~\ref{fig:Efit2L} shows another test of self-consistency: the dashed curves show the result of using the $b_{10}$ and $b_{01}$ values obtained from fitting the Eulerian profiles to predict the measured Lagrangian profiles (solid).
The agreement is again reasonable, except around the void wall where we know non-commutation matters, and where the predicted profiles tend to be slightly shallower than the measured ones. We conclude that equation~(\ref{dEdLcombined}) provides a reasonably flexible and physically motivated `universal' framework for fitting void profiles. \begin{table} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline $R_{\rm eff}$ [$h^{-1}$Mpc] & $b_{10}$ & $b_{01}$ & $s_0^{pp}$ & $s_1^{pp}$ & $\delta_p$ \\ \hline $14-16$ & $-0.29$ & $-0.98$ & 0.63 & 0.03 & $-0.80$ \\ \hline $16-18$ & $-0.50$ & $-1.31$ & 0.60 & 0.02 & $-0.96$ \\ \hline $20-25$ & $-1.3$ & $-2.4$ & 0.34 & 0.01 & $-1.27$ \\ \hline \end{tabular} \caption{\label{tab:eul_fit}Results from fitting~\eqn{dEdLcombined} with $\delta_L(<R_L)$ from ~\eqn{dprofile} to the Eulerian profile.} \end{center} \end{table} \subsection{Scale-dependence of bias}\label{sec:bk} Before we conclude this section, it is interesting to contrast the role played by the extra term, which, on large scales, makes \begin{equation} b_{10}\to b_{10} + D_a \qquad{\rm and}\quad b_{01}\overset{\!\!\sim}{\to} b_{01} - D_a, \label{b10Eulb01Eul} \end{equation} so that $b_{10} + b_{01}$ of the evolved profile is approximately independent of time, and equation~(\ref{bconsistency}) is a reasonable approximation at all times. (The Appendix discusses why the expression for $b_{01}$ is only an approximation.) For massive halos, $b_{10}$ and $b_{01}$ are both positive, and increase with halo mass. So the term from evolution increases the large scale bias and decreases the amplitude of the scale-dependence term: as a result, bias appears to be scale-independent over a wider range of scales in Eulerian than Lagrangian space. For lower mass halos, $b_{10}$ can be negative, but $b_{01}$ will still be positive (because of equation~\ref{bconsistency}). In this case, evolution will bring both closer to zero, and may even make both change sign. 
Thus, generically, evolution makes Eulerian bias less scale dependent than Lagrangian. In contrast, for large voids, $b_{10}$ and $b_{01}$ are both negative. Evolution makes the scale-independent piece closer to zero, but makes the scale-dependent piece even more negative, with the result that Eulerian void bias is more scale-dependent than Lagrangian. This is also true for small voids: although $b_{10}$ can be positive (but $b_{01}$ negative), evolution will increase the scale-independent piece, and make $b_{01}$ more negative. Again, Eulerian void bias is more scale dependent than Lagrangian -- unlike for halos. \section{Discussion}\label{sec:discuss} We studied void density and velocity profiles -- the former being just the void-matter cross correlation function -- in simulations. We showed that the cross-correlation function evolves both because of how matter flows around (away from) the voids (equation~\ref{dEdL}), but also because voids move. Voids identified today (and by extension, voids identified at any given time) are displaced from their initial protovoid positions by an amount that is approximately independent of void size (Figure~\ref{fig:displacements}). These displacements make the void-matter cross-correlation evolve even on very large scales (Figures~\ref{b0} and~\ref{b99}), in agreement with theory (equation~\ref{bEbL}). Accounting for these displacements (equation~\ref{dEdLcombined}) is necessary to explain the evolution of void density profiles (compare Figure~\ref{fig:onlySC} with~\ref{fig:complete}) as well as the profiles of velocity inflow/outflow around voids (Figures~\ref{fig:v12}--\ref{fig:v12Hr_wrong}). These displacements and their consequences are missing from all previous work on void evolution. The implications for redshift space distortions are the subject of work in progress. We then explored if the simplest Excursion Set Troughs model provides a useful `universal' framework for describing void profiles. 
This framework ignores details associated with non-spherical evolution \citep{smt01,scs13,anp15}, and the fact that evolution and averaging do not commute (Figure~\ref{fig:noCommute}), although the non-commutation matters most on scales that are within the void boundary. Although the EST framework is, in principle, fully predictive, predicted void abundances are rather sensitive to how voids are characterized in the evolved field (see discussion associated with Figure~\ref{fig:SC}). To study if the EST framework is useful even when the details of the void finder are not understood, we treated the EST expression for void profiles (equation~\ref{dprofile}) as a framework with free parameters which are to be determined by fitting to data. This worked reasonably well. If one evolves the measured void profile back to the initial Lagrangian conditions and fits there (Figure~\ref{fig:L_profile_fit}), then the fitted parameters yield reliable information about the protovoid patches from which the voids evolved (equation~\ref{bconsistency}). Moreover, these Lagrangian parameters can also be determined by inserting equation~(\ref{dprofile}) in equation~(\ref{dEdLcombined}) and fitting to the evolved Eulerian profile (Figures~\ref{fig:E_profile_fit} and~\ref{fig:Efit2L}). Fitting to either the Lagrangian or the Eulerian profiles yields similar estimates of the Lagrangian bias parameters $b_{10}$ and $b_{01}$, which describe the scale-dependence of void bias (compare Tables~\ref{tab:lag_fit} and~\ref{tab:eul_fit}). When combined with equation~(\ref{bconsistency}), these values indicate that protovoid patches which grow into voids identified by VIDE have average enclosed overdensities of order $-1$, rather than the value of $-2.7$ that is predicted by theory. Much of this difference is because the void walls are not sharp, so the thickness of the wall influences its estimated size. 
Nevertheless, it is interesting that a Lagrangian underdensity of order $-1$ is close to the value suggested by fitting void abundances using the EST approach. Exploring if the EST approach allows self-consistent determinations of void profiles and abundances is left to future work. Future work must also address the question of how to model voids identified using biased tracers, rather than dark matter particles, in the evolved field (as we have done here). If $\delta_g$ denotes the overdensity of the bias tracer, then the simplest approximation would simply set $1+\delta_{\rm E} = 1 + \delta_g/b_g$ in all the expressions of this paper. Section~\ref{sec:bk} motivates why scale-independent linear bias may be a better approximation for the halos which host galaxies in the evolved field than it is for voids, so that the simple prescription above may work reasonably well in practice. \section*{Acknowledgments} We are grateful to the participants of the meeting on voids at the Simons CCA at the end of September 2018 for discussions which encouraged us to publish our findings. Our work on this project began in 2013. RKS is grateful to the Perimeter Institute for its hospitality in September 2013, the Institut Henri Poincare for its hospitality in November 2013, and the members of the HECAP group at ICTP for their understanding during this time, as well as their hospitality since. He is also grateful to the IPMU for hospitality in March 2014, when he presented many of these results. This work has made use of the python pylians libraries, publicly available at https://github.com/franciscovillaescusa/Pylians. EM is grateful to Francisco Villaescusa-Navarro for creating these libraries. She acknowledges the WFIRST Science Investigation Team “Cosmology with the High Latitude Survey” supported by NASA grant 15-WFIRST15-0008.
\section{Introduction} Given a function $f$ on a classical phase space $X$, let us first quantize it and then dequantize. This operation on functions, $f \mapsto {\mathcal{B}} f$, is called {\it the Berezin transform}. As a result of this operation, the function $f$ blurs on the phase space. The intuition behind this is as follows \footnote{We thank S.~Nonnenmacher for this explanation.}: assume that $f$ is the Dirac delta-function at a point $x \in X$. Its quantization is a coherent state at $x$, whose dequantization is approximately a Gaussian centered at $x$. In the framework of the Berezin-Toeplitz quantization of closed K\"{a}hler manifolds, ${\mathcal{B}}$ is known to be a Markov operator with finite-dimensional image, and is closely related to the Laplace-Beltrami operator $\Delta$ of the K\"ahler manifold. In fact, the Berezin transform has the following asymptotic expansion as $\hbar\to 0$, due to Karabegov and Schlichenmaier \cite{KS01} \footnote{Note that after renormalization, there is a missing factor of $1/2$ in front of the second term of the analogous formula in \cite[(1.2)]{KS01}.}: \begin{equation}\label{eq-asymp} {\mathcal{B}}_\hbar(f)= f-\frac{\hbar}{4\pi}\Delta f + \mathcal{O}(\hbar^2)\;, \end{equation} for every smooth function $f$ on $X$, with remainder depending on $f$ and where $\hbar$ stands for the Planck constant (see \cref{subsec-BT} for notations and conventions). We focus on the spectral properties of ${\mathcal{B}}$. For fixed $\hbar$, this operator factors through a finite-dimensional space and hence its spectrum consists of a finite collection of points lying in the interval $[0,1]$. Moreover, multiplicities of positive eigenvalues are finite, and $1$ is the maximal eigenvalue corresponding to the constant function. 
Write its spectrum (with multiplicities) in the form $$1=\gamma_0 \geq \gamma_1 \geq \gamma_2\geq \dots \geq \gamma_k \geq \dots \geq 0$$ The quantity $\gamma:= 1-\gamma_1$ is called {\it the spectral gap}, a fundamental characteristic of a Markov chain responsible for the rate of convergence to the stationary distribution. Our first result, Theorem \ref{thm-quant}, implies that in the context of the Berezin-Toeplitz quantization, the spectral gap $\gamma$ of the Berezin transform equals \begin{equation}\label{eq-gap-intro} \gamma = \frac{\hbar}{4\pi}\lambda_1 + \mathcal{O}(\hbar^2)\;, \end{equation} where $\lambda_1$ stands for the first eigenvalue of $\Delta$. Note that the upper bound on the gap readily follows from \eqref{eq-asymp}. The proof follows a work of Lebeau and Michel \cite{LM} on semiclassical random walks on manifolds with extra ingredients such as an asymptotic expansion for the Bergman kernel due to Dai, Liu and Ma \cite{DLM06}, a comparison between the Berezin transform and the heat operator motivated by the work of Liu and Ma \cite{LM07}, and a refined version of the above-mentioned Karabegov-Schlichenmaier asymptotic expansion \cite{KS01}. In fact, Theorem \ref{thm-quant} shows much more than \eqref{eq-gap-intro}, namely that one can approximate the full spectrum of $\Delta$, as well as the associated eigenfunctions, with those of ${\mathcal{B}}$. Let us point out that the proof of Theorem \ref{thm-quant} can be extended to Berezin-Toeplitz quantization of closed symplectic manifolds, using the quantum spaces given by the eigenstates corresponding to the small eigenvalues of the renormalized Bochner Laplacian. This uses the associated generalized Bergman kernel of Ma and Marinescu \cite{MM08} and asymptotic estimates refining those of Ma, Marinescu and Lu \cite{LMM16} (see the discussion at the end of Section \ref{subsec-BT}). \medskip The Berezin transform is defined in the more general context of positive operator valued measures (POVMs). 
In fact, the Berezin-Toeplitz quantization is nothing else but the integration over a certain POVM on the phase space $M$ with values in the space of quantum observables, and the dequantization is the dual operation \cite{La,CP}. In addition to quantization, POVMs appear in quantum mechanics on another occasion: they model quantum measurements \cite{Busch-2}. Interestingly enough, within this model the spectral gap of the Berezin transform corresponding to a POVM admits two different interpretations: it measures the minimal magnitude of quantum noise production, and it equals the spectral gap of the Markov chain corresponding to repeated quantum measurements (see Section \ref{sec-noise} for details). \medskip Another theme of this paper is related to Donaldson's program \cite{D} of developing approximate methods for detecting canonical metrics on K\"{a}hler manifolds. Interestingly enough, our study of the Berezin transform yields the asymptotic behaviour of the spectrum and of the eigenfunctions of the $Q$-operator, a geometric operator arising in this program, for K\"ahler metrics of constant scalar curvature. This behaviour, which was predicted by Donaldson in \cite{D}, is stated in Theorem \ref{cor-quant} below. Additionally, Donaldson discovered in \cite{D} a remarkable class of dynamical systems on the space of all Hermitian products on a given complex vector space. Section \ref{sec-don} deals with the spectrum of the linearization of such a system at a fixed point. We show that it can be, roughly speaking, identified with that of the Berezin transform associated to a certain POVM. We prove that under certain natural assumptions, this linearization is contracting. By the Grobman-Hartman theorem and earlier results of Donaldson, this implies in particular that the iterations of this system converge exponentially fast to a fixed point (see Corollary \ref{expcvcor}). The use of Hartman's theorem in a related context has been suggested by Fine in \cite{Fin12}.
\medskip This naturally brings us, in Section \ref{sec-bestfit}, to a geometric viewpoint on POVMs. Following Oreshkov and Calsamiglia \cite[VII.C]{OC}, we encode them as probability measures on the space of quantum states ${\mathcal{S}}$ equipped with the Hilbert-Schmidt metric. It turns out that the spectral gap admits a transparent description in terms of the geometry of such metric measure spaces and exhibits a robust behaviour under perturbations of POVMs in the Wasserstein metric. In a similar spirit, one can consider a POVM as a data cloud in ${\mathcal{S}}$, which leads us to a link between the spectral gap and the diffusion distance, a notion coming from geometric data analysis. Section \ref{sec-groups} contains a case study of POVMs associated to irreducible unitary representations of finite groups. In this case the spectrum of the Berezin transform and the diffusion distance associated to the corresponding Markov chain can be calculated explicitly via the character table of the group, and their properties reflect algebraic features. In particular, we prove that any non-trivial irreducible representation of a simple group has a strictly positive spectral gap (see Corollary \ref{cor-simplegr}). \section{Preliminaries}\label{prel} The mathematical model of quantum mechanics starts with a complex Hilbert space $\mathcal{H}$. In what follows we consider finite-dimensional Hilbert spaces only. Observables are represented by Hermitian operators whose space is denoted by $\mathscr{L}(\mathcal{H})$. Quantum states are provided by {\it density operators}, i.e., positive trace-one operators $\rho \in \mathscr{L}(\mathcal{H})$. They form a subset ${\mathcal{S}}(\mathcal{H}) \subset \mathscr{L}(\mathcal{H})$. {\bf Notation:} We write $((A,B))$ for the scalar product ${{\rm tr}}(AB)$ on $\mathscr{L}(\mathcal{H})$. Let $\Omega$ be a set equipped with a $\sigma$-algebra $\mathscr{C}$ of its subsets.
By default, we assume that $\Omega$ is a Polish topological space (i.e., it is homeomorphic to a complete metric space possessing a countable dense subset) and $\mathscr{C}$ is the Borel $\sigma$-algebra. An $\mathscr{L}(\mathcal{H})$-valued {\it positive operator valued measure} $W$ on $(\Omega,\mathscr{C})$, which we abbreviate as POVM, is a countably additive map $W\colon \mathscr{C} \to \mathscr{L}(\mathcal{H})$ which takes each subset $X \in \mathscr{C}$ to a positive operator $W(X) \in \mathscr{L}(\mathcal{H})$ and which is normalized by $W(\Omega) = {1\hskip-2.5pt{\rm l}}$. According to \cite{CDS}, every $\mathscr{L}(\mathcal{H})$-valued POVM possesses a density with respect to some probability measure $\alpha$ on $(\Omega, \mathscr{C})$, that is, it has the form \begin{equation}\label{eq-POVM-density} dW(s) = nF(s) d\alpha(s)\;, \end{equation} where $n= \dim_{\field{C}} \mathcal{H}$ and $F: \Omega \to {\mathcal{S}}(\mathcal{H})$ is a measurable function. A POVM $W$ given by formula \eqref{eq-POVM-density} is called {\it pure} if \begin{itemize} \item[{(i)}] for every $s \in \Omega$ the state $F(s)$ is pure, i.e. a rank one projector; \item[{(ii)}] the map $F:\Omega \to {\mathcal{S}}(\mathcal{H})$ is one-to-one. \end{itemize} Pure POVMs, under various names, arise in several areas of mathematics including the Berezin-Toeplitz quantization, convex geometry (see \cite{GM} for the notion of an isotropic measure and \cite{AS} for the resolution of identity associated to John and L\"{o}wner ellipsoids), signal processing (see \cite{EF} for a link between tight frames and quantum measurements) and Hamiltonian group actions \cite{FM}. When $\Omega$ is a finite set, a pure POVM with a given measure $\alpha$ exists if and only if the measure $\alpha(\{s\})$ of each point $s \in \Omega$ is $\leq 1/n$, see \cite{FM} for a detailed account on the structure of the moduli spaces of pure POVMs on finite sets up to unitary conjugations.
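For a concrete finite example (our own toy construction, not one taken from the references): let $\mathcal{H}=\field{R}^2\subset\field{C}^2$, so $n=2$, and let $\Omega$ be a three-point set with the uniform measure $\alpha(\{s\})=1/3\leq 1/n$. Rank-one projectors onto unit vectors at angles $0^\circ$, $60^\circ$ and $120^\circ$ then assemble into a pure POVM, as the following numerical check confirms:

```python
import numpy as np

# Toy pure POVM: Omega has m = 3 points, H has dimension n = 2,
# alpha is uniform, and F(s) projects onto a unit vector at angle
# theta_s.  Since sum_s F(s) = (m/n) Id, dW(s) = n F(s) alpha({s})
# satisfies the normalization W(Omega) = Id.
n, m = 2, 3
alpha = np.full(m, 1.0 / m)                    # alpha({s}) = 1/3 <= 1/n
thetas = np.deg2rad([0.0, 60.0, 120.0])
vecs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
F = np.einsum('si,sj->sij', vecs, vecs)        # pure states F(s): projectors

W = n * F * alpha[:, None, None]               # dW(s) = n F(s) dalpha(s)
assert np.allclose(W.sum(axis=0), np.eye(n))   # W(Omega) = identity
assert all(np.linalg.matrix_rank(F[s]) == 1 for s in range(m))
```

The map $s\mapsto F(s)$ is one-to-one here, so both conditions (i) and (ii) hold.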
Let us introduce the main character of our story, the spectral gap of a POVM of the form \eqref{eq-POVM-density}. Define a map $T: L_1(\Omega, \alpha) \to \mathscr{L}(\mathcal{H})$ by $$T(\phi)= \int_\Omega \phi\ dW = n\int_\Omega \phi(s) F(s) d\alpha(s)\;.$$ (Here and below we work with spaces of real-valued functions.) The dual map $T^*: \mathscr{L}(\mathcal{H}) \to L_{\infty} (\Omega, \alpha)$ is given by $T^*(A)(s) = n((F(s),A))$. Since $L_\infty \subset L_1$, we have an operator $${\mathcal{E}}= \frac{1}{n}TT^*: \mathscr{L}(\mathcal{H}) \to \mathscr{L}(\mathcal{H})\;,$$ \begin{equation}\label{eq-e} {\mathcal{E}}(A) = n\int_{\Omega} ((F(s),A))F(s) d\alpha(s)\;.\end{equation} Observe that ${\mathcal{E}}$ is a unital trace-preserving completely positive map. In the terminology of \cite[Example 5.4]{Hayashi}, this is an example of an entanglement-breaking quantum channel. Furthermore, set $${\mathcal{B}}=\frac{1}{n}T^*T: L_1(\Omega,\alpha) \to L_\infty(\Omega,\alpha)\;,$$ \begin{equation}\label{eq-b}{\mathcal{B}}(\phi)(t)= n\int_\Omega \phi(s)((F(s),F(t))) d\alpha(s)\;.\end{equation} Observe that the image of ${\mathcal{B}}$ is finite-dimensional as ${\mathcal{B}}$ factors through $\mathscr{L}(\mathcal{H})$. Write $(\phi,\psi):= \int_\Omega \phi\psi\, d\alpha$ for the scalar product on $L_2(\Omega,\alpha)$, and $\|\cdot\|$ for the associated norm. Note that ${\mathcal{B}}$ is well defined as an operator on $L_2(\Omega,\alpha)$ and its spectrum belongs to $[0,1]$, with $1$ being the maximal eigenvalue associated with the constant function. Note now that the positive eigenvalues of ${\mathcal{E}}$ and ${\mathcal{B}}$ coincide. Indeed, $T^*$ maps isomorphically an eigenspace corresponding to a positive eigenvalue of ${\mathcal{E}}$ to the eigenspace of ${\mathcal{B}}$ corresponding to the same eigenvalue. Denote by $1=\gamma_0 \geq \gamma_1 \geq \dots$ the positive part of the spectrum of ${\mathcal{E}}$ and ${\mathcal{B}}$.
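On a finite set the operators ${\mathcal{E}}$ and ${\mathcal{B}}$ become matrices, and the coincidence of their positive spectra can be checked directly. A minimal numerical sketch (illustrative only, assuming numpy; the pure POVM is generated from the rows of a random unitary, and the names below are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 8                      # here N <= n^2, so we can pad spectra to length N

# Random pure POVM on a finite set (first n rows of a random unitary).
Q, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
V = Q[:n, :]
alpha = np.linalg.norm(V, axis=0)**2 / n
F = [np.outer(V[:, s], V[:, s].conj()) / (n * alpha[s]) for s in range(N)]

# E as an n^2 x n^2 matrix in the vec picture: tr(F_s A) = vec(F_s)^† vec(A).
vecs = np.column_stack([Fs.reshape(-1) for Fs in F])       # n^2 x N
M_E = n * (vecs * alpha) @ vecs.conj().T                   # Hermitian, PSD

# B as an N x N Markov matrix: B[t, s] = n tr(F_s F_t) alpha_s.
G = np.array([[np.trace(F[s] @ F[t]).real for s in range(N)] for t in range(N)])
B = n * G * alpha

assert np.allclose(B @ np.ones(N), np.ones(N))             # B is Markov: B 1 = 1
ev_E = np.sort(np.linalg.eigvalsh(M_E))[-N:]               # top N eigenvalues of E
ev_B = np.sort(np.linalg.eigvals(B).real)
assert np.allclose(ev_E, ev_B, atol=1e-8)                  # positive spectra coincide
assert np.isclose(ev_B[-1], 1.0)                           # gamma_0 = 1
assert ev_B[-2] < 1.0                                      # strictly positive gap
```

The spectral gap of this POVM is then `1 - ev_B[-2]`, in the notation $\gamma(W)=1-\gamma_1$ introduced just below.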
The number $\gamma(W):= 1-\gamma_1$ is called {\it the spectral gap} of the POVM $W$. In what follows, it will be instructive to use the language of Markov chains with the state space $\Omega$. Recall \cite{Bak, Ru} that a {\it Markov kernel} on $\Omega$ is a map $x \mapsto \sigma_x$ sending a point $x \in \Omega$ to a probability measure $\sigma_x$ on $(\Omega,\mathscr{C})$ such that $x \mapsto \sigma_x(A)$ is a measurable function for every $A \in \mathscr{C}$. With every Markov kernel $\sigma$ one associates a Markov chain, i.e., a sequence of $\Omega$-valued random variables $\zeta_k$, $k = 0,1,\dots$ defined on the same probability space, such that for every $n$ and every sequence $x_i \in \Omega$ the conditional distributions satisfy $$\mathbb{P}(\zeta_n \;|\; \zeta_{n-1}=x_{n-1},\dots, \zeta_0 = x_0) = \sigma_{x_{n-1}}\;.$$ If $\zeta_0$ is distributed according to a probability measure $\nu_0$ on $\Omega$, then $\zeta_1$ is distributed according to $\nu_1$ given by the formula $$\nu_1(A) = \int_\Omega \sigma_x(A) d\nu_0(x)\; \forall A \in \mathscr{C}.$$ If $\nu_0=\nu_1$, we say that $\nu_0$ is a stationary measure for the Markov chain. The Markov kernel is called {\it reversible} with respect to a measure $\nu$ on $\Omega$ if $$d\nu(x) d\sigma_x(y) = d\sigma_y(x) d\nu(y)\;,$$ as measures on $\Omega \times \Omega$. In this case $\nu$ is a stationary measure of the Markov chain. Given a $\nu$-reversible Markov kernel $\sigma$ with the state space $\Omega$, define {\it the Markov operator} ${\mathcal{A}}$ on $L_1(\Omega,\nu)$ by \begin{equation}\label{eq-cP} {\mathcal{A}}(\phi)(x) = \int_\Omega \phi(y)d\sigma_x (y)\;. \end{equation} Note that ${\mathcal{A}}$ preserves positivity: ${\mathcal{A}}(\phi)\geq 0$ for $\phi \geq 0$, ${\mathcal{A}}(1)=1$, and its operator norm is $\leq 1$. The reversibility readily yields that the Markov operator ${\mathcal{A}}$ is self-adjoint on $L_2(\Omega, \nu)$.
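On a finite state space these notions reduce to matrix identities. The following sketch (illustrative, assuming numpy) builds a $\nu$-reversible kernel by symmetrizing random "conductances" and checks that detailed balance forces both stationarity of $\nu$ and self-adjointness of the Markov operator on $L_2(\Omega,\nu)$:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 5

# A nu-reversible Markov kernel on {0,...,m-1}: symmetric conductances S,
# nu proportional to row sums, P[x, y] = sigma_x({y}).
S = rng.random((m, m)); S = S + S.T
nu = S.sum(axis=1) / S.sum()
P = S / S.sum(axis=1, keepdims=True)

# Detailed balance: nu(x) P[x, y] = nu(y) P[y, x] ...
assert np.allclose(nu[:, None] * P, (nu[:, None] * P).T)
# ... implies that nu is stationary: nu P = nu,
assert np.allclose(nu @ P, nu)
# and that the Markov operator A(phi) = P phi is self-adjoint on L_2(nu):
phi, psi = rng.random(m), rng.random(m)
assert np.isclose(np.sum(nu * (P @ phi) * psi), np.sum(nu * phi * (P @ psi)))
```

Both assertions follow from the single symmetry of the matrix $\nu(x)P[x,y]$, which is the discrete form of the displayed identity on $\Omega\times\Omega$.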
Denote by $1^\bot$ the orthogonal complement to the constant function $1$ on $\Omega$, i.e., the space of functions with zero mean. Then ${\mathcal{A}}$ preserves $1^\bot$. The spectral gap $\gamma({\mathcal{A}})$ is defined as \begin{equation}\label{gap-Markov} \gamma({\mathcal{A}}) = 1- \|{\mathcal{A}}|_{1^\bot}\| = \inf_{\phi \neq 0} \frac{(\phi -{\mathcal{A}}\phi,\phi)}{(\phi,\phi)-(\phi,1)^2}\;. \end{equation} With this language, the operator ${\mathcal{B}}$ given by \eqref{eq-b} is a Markov operator with the Markov kernel \begin{equation}\label{eq-kernel} t \mapsto n((F(s),F(t)))d\alpha(s)\;.\end{equation} It is reversible with respect to the stationary measure $\alpha$. \section{Spectral gap for quantization}\label{subsec-BT} \subsection{Berezin transform vs. Laplace-Beltrami operator} \label{subsec-BLB} Pure POVMs naturally appear in the context of the Berezin-Toeplitz quantization of closed K\"{a}hler manifolds $(X,\omega)$ which are quantizable in the following sense: the cohomology class $[\omega]$ of the K\"{a}hler symplectic form $\omega$ is integral. Recall that this is equivalent to the existence of a Hermitian holomorphic line bundle $L$ over $X$ whose Chern connection has curvature $-2\pi i\omega$. Let us briefly recall the construction of this quantization (see \cite{BMS,S,LF} for preliminaries). Let $X$ be a $d$-dimensional closed K\"{a}hler manifold with Hermitian holomorphic line bundle $L$ as above. Write $L^p$ for the $p$-th tensor power of $L$, where $p\in\field{N}^*$ is large enough\footnote{Our convention is that the set of natural numbers $\field{N}$ contains $0$. We write $\field{N}^*$ for strictly positive natural numbers.}, and consider the space $\mathcal{H}_p$ of global holomorphic sections of $L^p$. The quantity $\hbar=1/p$ plays the role of the Planck constant, so that the classical limit is given by $p \to +\infty$.
In this setting, one defines a family of pure $\mathscr{L}(\mathcal{H}_p)$-valued POVMs $dW_{p} = n_pF_pd\alpha_p$ on $X$, taking the map $F_p: X\to {\mathcal{S}}(\mathcal{H}_p)$ which sends a point $x \in X$ to the \emph{coherent state projector} at $x$, and taking $n_p =\dim_{\field{C}} \mathcal{H}_p$. From the viewpoint of algebraic geometry, the map $F_p$ comes from the Kodaira embedding theorem. The measure $\alpha_p$ is given at any $x\in X$ by \begin{equation}\label{eq-alphahbar} d\alpha_p(x) = \frac{R_p(x)}{n_p}dv_X(x)\;, \end{equation} where the density $R_p : X \to \field{R}$ is called the \emph{Rawnsley function}, and $dv_X$ is the measure associated to the canonical volume form $\omega^d/d!$. From the viewpoint of complex geometry, the Rawnsley function is given by the value of the Bergman kernel on the diagonal, and it is a classical fact that $R_p(x)\neq 0$ for all $x\in X$ and $p\in\field{N}^*$ big enough. In the context of the Berezin-Toeplitz quantization, the operator ${\mathcal{B}}_p:= \frac{1}{n_p} T^*_p T_p$ given by formula \eqref{eq-b} above is known as the {\it Berezin transform}. Note that for any $p\in\field{N}^*$, the operator ${\mathcal{B}}_p$ has a finite-dimensional image, and all its eigenvalues lie in the interval $[0,1]$. There are finitely many positive eigenvalues, counted with multiplicities, while $0$ has infinite multiplicity. Write $$1=\gamma_{0,p} \geq \gamma_{1,p} \geq \gamma_{2,p}\geq \dots \geq \gamma_{k,p} \geq \dots \geq 0$$ for the eigenvalues of ${\mathcal{B}}_p$ with multiplicities. Let $\Delta f= -\text{div} \nabla f$ be the (positive) Laplace-Beltrami operator associated with the K\"{a}hler metric, acting on functions on $X$ with eigenvalues \begin{equation}\label{evLB} 0=\lambda_0 < \lambda_1 \leq\lambda_2\leq \dots\leq\lambda_k\leq \dots\;.
\end{equation} \begin{thm}\label{thm-quant} For every integer $k\in\field{N}$, we have the following asymptotic estimate as $p\to +\infty$, \begin{equation}\label{eq-LB} 1-\gamma_{k,p} =\frac{1}{4\pi p}\lambda_k + \mathcal{O}(p^{-2})\;. \end{equation} Furthermore, every sequence, indexed by $p\in\field{N}^*$, of $L_2(X,\alpha_p)$-normalized eigenfunctions of ${\mathcal{B}}_p$ corresponding to the eigenvalue $\gamma_{k,p}$ contains a subsequence converging to an eigenfunction of the Laplace-Beltrami operator corresponding to $\lambda_k$ in the $\CC^\infty$-sense. \end{thm} Note that in the context of \cref{prel}, \cref{thm-quant} is equivalent, via $T^*_p$, to the same statement for the operator ${\mathcal{E}}_p:\mathscr{L}(\mathcal{H}_p)\to\mathscr{L}(\mathcal{H}_p)$ defined by formula \cref{eq-e}. \medskip The Berezin transform ${\mathcal{B}}_p$ and its associated operator ${\mathcal{E}}_p$ have prominent cousins, the $Q_K$-operator and the $Q$-operator, respectively introduced by Donaldson \cite[\S 4]{D} in the framework of his program of finding numerical approximations to distinguished K\"{a}hler metrics on complex projective manifolds\footnote{We keep Donaldson's notation for these operators.}. They are defined in the same way, replacing the measure $\alpha_p$ in \cref{eq-alphahbar} by the canonical probability measure $dv_X/\Vol(X)$. In particular, for any $p\in\field{N}^*$, Donaldson's $Q$-operator $Q_p:\mathscr{L}(\mathcal{H}_p)\fl\mathscr{L}(\mathcal{H}_p)$ is defined for any $A\in\mathscr{L}(\mathcal{H}_p)$ by \begin{equation} Q_p(A)=\frac{n_p}{\Vol(X)}\int_X ((F_p(x),A))F_p(x)dv_X(x)\;.
\end{equation} Write the eigenvalues of $Q_p$ as \begin{equation} \beta_{0,p} \geq\beta_{1,p}\geq\beta_{2,p}\geq\dots\geq\beta_{k,p}\geq\dots \geq 0\;, \end{equation} and set $$p' : = \left(\frac{n_p}{\Vol(X)}\right)^{1/d}\;.$$ For some K\"{a}hler metrics of constant scalar curvature, Donaldson considered the $Q$-operator as a finite-dimensional approximation of the heat operator and predicted (see p. 611 in \cite{D}) that as $p\fl+\infty$, the spectrum of $Q_p$ approximates the spectrum of $e^{-\frac{\Delta}{4\pi p'} }$, and the eigenvectors of $Q_p$ approximate the eigenfunctions of $e^{-\frac{\Delta}{4\pi p'} }$ via the dequantization map $T^*_p$ of Section \ref{prel}. The following analogue of Theorem \ref{thm-quant} confirms Donaldson's prediction for all K\"{a}hler metrics of constant scalar curvature. \begin{thm}\label{cor-quant} Assume that the K\"{a}hler metric of $X$ has constant scalar curvature. For every integer $k\in\field{N}$, we have the following asymptotic estimate as $p\to +\infty$, \begin{equation}\label{eq-LQ} 1-\beta_{k,p} =\frac{1}{4\pi p'}\lambda_k + \mathcal{O}(p^{-2})\;. \end{equation} Furthermore, for every sequence $\{A_p\}_{p\in\field{N}^*}$ of normalized eigenvectors of $Q_p$ in $\mathscr{L}(\mathcal{H}_p)$ corresponding to the eigenvalue $\beta_{k,p}$ for all $p\in\field{N}^*$, there is a subsequence of $\{T^*_pA_p\}_{p\in\field{N}^*}$ converging to an eigenfunction of the Laplace-Beltrami operator corresponding to $\lambda_k$ in the $\CC^\infty$-sense. \end{thm} \medskip We refer to \cite{Fin12,Keller16} for a related study of the asymptotic behaviour of the spectrum of certain geometric operators arising in Donaldson's program. \medskip Let us introduce the following useful notion \cite{D,Fine-lectures}. \medskip \noindent \begin{defin}\label{def-balanced} {\rm Let $(X,\omega)$ be a closed K\"{a}hler manifold.
Let $L$ be a holomorphic line bundle over $X$ equipped with a Hermitian metric $h$ such that the curvature of the corresponding Chern connection equals $-2\pi i\omega$. Fix a positive integer $p$ so that the Kodaira map $X \to \mathbb{P}(H^0(X,L^p)^*)$ is an embedding. We say that the collection $(X,\omega,L,h,p)$ is {\it balanced} if the corresponding Rawnsley function $R_p$ is constant on $X$. } \end{defin} \medskip Note that for balanced data $(X,\omega,L,h,p)$ the Berezin transform ${\mathcal{B}}_p$ and the $Q_{K,p}$-operator coincide, as well as ${\mathcal{E}}_p$ and the $Q_p$-operator. In that case, the result of Theorem \ref{thm-quant} is relevant to the discussion in \cite[\S\,4.3]{D}. We refer the reader to \cite[\S\,4.1]{D} and to \cite[\S\,1.4.1]{Fin10} for an interpretation of these operators in terms of the complex geometry of $(X,L^p)$. Let us finally mention that the approximation of the heat operator by the $Q_K$-operator has been explored by Lu and Ma in \cite{LM}, and that the analogue of the refined Karabegov-Schlichenmaier expansion of Proposition \ref{KS} for the $Q_K$-operator has been shown by Ma and Marinescu in \cite[Th.6.1]{MM12}. Some ingredients of their approach are instrumental for us. \medskip It follows from Theorem \ref{thm-quant} that the spectral gap of ${\mathcal{B}}_p$ equals \begin{equation}\label{eq-gap-repeated} \gamma({\mathcal{B}}_p)= \frac{\hbar}{4\pi}\lambda_1 + \mathcal{O}(\hbar^2)\;, \;\; \hbar = 1/p\;. \end{equation} In particular, this yields that the eigenvalue $1$ of ${\mathcal{B}}_p$ is simple (i.e., has multiplicity $1$) for all sufficiently large $p$. \medskip\noindent\begin{exam}\label{exam-CP1} {\rm Take the projective line $X= \field{C} P^1=S^2$ of area $1$. Let $L= \mathcal{O}(1)$ be the holomorphic line bundle over $X$ dual to the tautological one. The quantum Hilbert space $\mathcal{H}_p$ of global holomorphic sections of $L^p$ can be identified with the $(p+1)$-dimensional space of homogeneous polynomials of degree $p$ in $2$ variables.
A representation-theoretical argument (see \cite{gap-representations, D} and Remark \ref{rem-rps} below) shows that the eigenvalue $\gamma_1$ of the Berezin transform equals $p/(p+2)$. The K\"{a}hler metric on $X$ has constant curvature. For such metrics the first eigenvalue $\lambda_1$ of the Laplace-Beltrami operator equals $8\pi/\text{Area} = 8\pi$. We get that $$\gamma = 1-\gamma_1 = \frac{2}{p+2} = \frac{1}{4\pi p}\lambda_1 + \mathcal{O}(p^{-2})\;,$$ as predicted by \eqref{eq-gap-repeated}. } \end{exam} The upper bound in \eqref{eq-gap-repeated} immediately follows from the Karabegov-Schlichenmaier asymptotic expansion \eqref{eq-asymp} of the Berezin transform \cite{KS01} $${\mathcal{B}}_p(f)= f- \frac{1}{4\pi p}\Delta f + \mathcal{O}(p^{-2})\;,$$ for every smooth function $f$ on $X$, where the remainder $\mathcal{O}(p^{-2})$ depends on $f$. Indeed, choosing $f$ to be the $L_2(X,\alpha_p)$-normalized first eigenfunction of $\Delta$, we see that $$\gamma(W_p) \leq (({1\hskip-2.5pt{\rm l}}-{\mathcal{B}}_p)f,f)_p \leq \frac{1}{4\pi p}\lambda_1 + \mathcal{O}(p^{-2}),$$ where $(\cdot,\cdot)_p$ is the scalar product on $L_2(X,\alpha_p)$. The prototype example illustrating a link between the Berezin transform and the Laplace-Beltrami operator is the flat space $\field{R}^{2d}$, where the Berezin transform ${\mathcal{B}}_p$ simply coincides with the heat operator $e^{-\hbar \Delta/4\pi}$ (see \cite{B}). It would be interesting to explore the following problem motivated by a conversation with J.-M. Bismut. Denote by $\chi(t)$ the indicator function of the interval $[0,1]$. \medskip \noindent \begin{problem}\label{prob} Call a non-decreasing sequence $r(p)$ in $p \in \field{N}^*$ \textup{admissible} if $$\left\|({\mathcal{B}}_p-e^{-\frac{\Delta}{4\pi p}}) \chi\left(\frac{\Delta}{r(p)}\right)\right\| = \mathcal{O}(p^{-2})\;,$$ where the norm stands for the operator norm in $L_2$. According to Theorem \ref{thm-quant}, the constant sequence $r(p) = C$ is admissible for all $C$.
Is the sequence $r(p) = p^\tau$ admissible for $\tau >0$? What is the maximal possible growth rate of an admissible sequence? \end{problem} Let us finally make a couple of comments on the physical intuition behind the Berezin transform. It has been noted in the introduction that the Berezin transform can be defined as the composition of the quantization and the dequantization. It is instructive to interpret it in terms of the quantization only. Let $\sigma$ be a classical state, i.e. a Borel probability measure on $X$, and following \cite{CP1}, define its quantization as $$\Theta_p(\sigma) = \int_X F(x)d\sigma(x)\;,$$ where, as earlier, $F(x)$ stands for the coherent state projector at $x\in X$. Let further $f \in L_2(X)$ be a classical observable. It was noticed in \cite[(11)]{CP1} that the expectation $((T_p(f),\Theta_p(\sigma)))$ of the value of the quantized observable $T_p(f)$ in the quantized state $\Theta_p(\sigma)$ equals the classical expectation $\int_X {\mathcal{B}}(f)\,d\sigma$ of the Berezin transform ${\mathcal{B}}(f)$ in the classical state $\sigma$. Thus in the context of the Berezin-Toeplitz quantization, we get another interpretation of the blurring of quantization measured by ${\mathcal{B}}$. Furthermore, in view of Theorem \ref{thm-quant}, we know that ${\mathcal{B}}$ is a Markov operator with a strictly positive spectral gap. Thus it has a unique stationary measure $\alpha_p$, whose density with respect to the phase volume is given by $R_p/n_p$, as in formula \eqref{eq-alphahbar}. Interestingly enough, this provides an interpretation of the Rawnsley function without appealing to a specific choice of coherent states. \subsection{Comments on the proof} The proof of Theorem \ref{thm-quant} occupies the rest of this section, and we will deduce Theorem \ref{cor-quant} as a consequence of it in Section \ref{specsec}.
Our argument is structured similarly to the one in a paper by Lebeau and Michel \cite{LM} on the Markov operator associated to the semiclassical random walk on manifolds. The key intermediate results are as follows: \begin{itemize} \item[{(i)}] An a priori estimate stating that for any eigenfunction $f$ of ${\mathcal{B}}_p$ whose eigenvalue is sufficiently bounded away from $0$, {\bf any} Sobolev norm $\|f\|_{H^q}$ is bounded by $C_q \|f\|_{L_2}$. See Lemma \ref{hjrefprop} below, which is a counterpart of Lemma 5 in \cite{LM}. \item[{(ii)}] The operators ${\mathcal{A}}_p:= p({1\hskip-2.5pt{\rm l}}-{\mathcal{B}}_p)$ and $\frac{\Delta}{4\pi}$ turn out to be $\sim p^{-1}$-close as operators from $L_2$ to $H^q$ for the Sobolev space $H^q$ with {\bf some} sufficiently large $q$, see formula \eqref{KSSob} below, which is a counterpart of formula (3.28) in \cite{LM}, and which can be considered as a refinement of the expansion \eqref{eq-asymp} obtained in \cite{KS01}. \end{itemize} Combining (i) and (ii) we conclude that, roughly speaking, eigenfunctions of ${\mathcal{A}}_p$ as in (i) are ``approximate" eigenfunctions of the Laplacian, which eventually implies that the spectra of ${\mathcal{A}}_p$ and $\frac{\Delta}{4\pi}$ are close to one another, which yields the desired result (see the end of our proof, which is parallel to the one in \cite{LM}). Proving (i) and (ii) constitutes the bulk of the work. In contrast to \cite{LM}, our proof does not involve micro-local analysis. The main ingredients we use are the expansion of the Bergman kernel due to Dai, Liu and Ma \cite{DLM06} (see Theorem \ref{BTasy}) and a comparison between the Berezin transform and the heat operator motivated by the work of Liu and Ma on Donaldson's $Q_K$-operator \cite{LM07} (see Proposition \ref{boundexp} below). Finally, an acknowledgment is in order.
After a weaker version of Theorem \ref{thm-quant} was posted and formula \eqref{eq-gap-repeated} was stated as a question, Alix Deleporte kindly shared with us his ideas concerning the proof of \eqref{eq-gap-repeated}. He sent us notes \cite{Deleporte} containing a number of preliminary steps in the direction of (i) and (ii) above. While the original arguments of Deleporte dealt with the case of real-analytic K\"{a}hler manifolds and line bundles and were based on the asymptotic expansion from \cite{Rouby, Deleporte}, he informed us that they could also be adjusted to the $\CC^\infty$-case. \subsection{Preparations}\label{subsecprep} Recall that the measure $dv_X$ associated to the canonical volume form ${\omega}^d/d!$ is also the Riemannian volume form of $X$. Let $\<\cdot,\cdot\>_{L_2}$ be the usual $L_2$-scalar product on $\cinf(X,\field{C})$, and let $\|\cdot\|_{L_2}$ be the associated norm. For all $j\in\field{N}$, let $e_j\in\cinf(X,\field{C})$ be the normalized eigenfunction associated with the $j$-th eigenvalue of the Laplace-Beltrami operator, so that $\|e_j\|_{L_2}=1$ and $\Delta e_j=\lambda_j e_j$ as in \eqref{evLB} for all $j\in\field{N}$. Then for any $f\in\cinf(X,\field{C})$, we have the following equality in $L_2$, \begin{equation} f=\sum_{j=0}^{+\infty}\<f,e_j\>_{L_2}e_j. \end{equation} For any bounded $F:\field{R}\fl\field{R}$, we define the bounded operator $F(\Delta)$ acting on $L_2(X,\field{C})$ by the formula \begin{equation}\label{calculfct} F(\Delta)f=\sum_{j=0}^{+\infty} F(\lambda_j)\<f,e_j\>_{L_2}e_j\;. \end{equation} The bounded operator $e^{-t\Delta}$ thus defined for all $t>0$ is called the \emph{heat operator}. For any $m\in\field{N}$, let $|\cdot|_{\CC^m}$ be a $\CC^m$ norm on $\cinf(X,\field{C})$. The following result is classical and can be found for example in \cite{Kannai}, \cite[Th.2.29, (2.8)]{BGV04}.
\begin{prop}\label{heatexpprop} For any $m\in\field{N}$, there exists $C_m>0$ such that for any $f\in\cinf(X,\field{C})$ and all $t>0$, we have \begin{equation}\label{heatexpfla} |e^{-t\Delta}f-f+t\Delta f|_{\CC^m}\leq C_m t^2 |f|_{\CC^{m+4}}\;. \end{equation} \end{prop} For any $m\in\field{N}^*$, let $\|\cdot\|_{H^m}$ be a Sobolev norm of order $m$ on $\cinf(X,\field{C})$. Using the elliptic estimates for the Laplace-Beltrami operator, for $m$ even we define $\|\cdot\|_{H^m}$ by \begin{equation}\label{ellest} \|f\|_{H^m} := \|\Delta^{m/2}f\|_{L_2} + \|f\|_{L_2}\;. \end{equation} Note that the Laplacian $\Delta$ is symmetric with respect to the corresponding scalar product on $H^m$. By convention, we set $\|f\|_{H^0}:=\|f\|_{L_2}$. Next, we turn to the Berezin transform. Recall that the Hermitian product on $L$ and the Riemannian measure $dv_X$ induce an $L_2$-scalar product on sections of $L^p$ for any $p\in\field{N}^*$, and write $L_2(X,L^p)$ for the associated Hilbert space. The central tool for the study of the Berezin transform is the Schwartz kernel $\Pi_p(x,y)$ of the orthogonal projector $\Pi_p:L_2(X,L^p) \to \mathcal{H}_{p}$, called the \emph{Bergman kernel}. Recall that for fixed $x$ and $y$, this is an element of $L^p_x \otimes \bar{L}^p_y$, where $L^p_x$ denotes the fiber of $L^p$ at $x \in X$ and the bar stands for the conjugate line bundle. Since the bundle $L$ comes with a Hermitian metric, we can measure the pointwise norm $|\Pi_p(x,y)|$. By Corollary 9.1.4\,(2) in \cite{LF}, we have that $|\Pi_p(x,y)| = |\langle \xi_{x,p},\xi_{y,p} \rangle|$, where $\xi_{x,p}$ is the non-normalized \emph{coherent state} at $x\in X$ defined up to a phase factor (see e.g. \cite{CP,LF} for the definition). The Rawnsley function $R_p$ is given by $R_p(x) = |\xi_{x,p}|^2$, and thus satisfies $R_p(x)=|\Pi_p(x,x)|$.
Since $F_p(x)$ is the projector to $\xi_{x,p}$, we have that $$|\Pi_p(x,y)|^2 = ((F_p(x),F_p(y))) R_p(x)R_p(y)\;.$$ It follows from \eqref{eq-kernel} and \eqref{eq-alphahbar} that \begin{equation}\label{Bpfla} ({\mathcal{B}}_p f)(x) = n_p \int_X ((F_p(x),F_p(y)))f(y) d\alpha_p(y) = \frac{1}{R_p(x)} \int_X |\Pi_p(x,y)|^2 f(y) dv_X(y)\;, \end{equation} so that the Schwartz kernel of ${\mathcal{B}}_p$ with respect to $dv_X$ is given by \begin{equation} \label{eq-schwarzkernelberezin} {\mathcal{B}}_p(x,y)= \frac{|\Pi_p(x,y)|^2}{R_p(x)}\;. \end{equation} Let $\|\cdot\|_p$ be the norm on $L_2(X,\alpha_p)$. From the classical asymptotic expansion of $R_p$ as $p \to +\infty$, we get a constant $C>0$ such that \begin{equation}\label{diagexp} \left(\Vol(X)^{-1/2}- Cp^{-1}\right)\|\cdot\|_{L_2} \leq\|\cdot\|_p \leq\left(\Vol(X)^{-1/2}+ Cp^{-1}\right)\|\cdot\|_{L_2}\;. \end{equation} \subsection{Asymptotic expansion of the Berezin transform} For a comprehensive account on the off-diagonal expansion of the Bergman kernel as well as tools of Berezin-Toeplitz quantization in this context, we refer to \cite{MM07}. We always assume that $p\in\field{N}^*$ is as large as needed. For any $s>0$, we use the notation $O(p^{-s})$ as $p\fl+\infty$ in the usual sense, uniformly in $\CC^m$-norm for all $m\in\field{N}^*$. The notation $O(p^{-\infty})$ means $O(p^{-s})$ for any $s>0$. Let $\epsilon_0>0$ be smaller than the injectivity radius of $X$. Fix a point $x_0 \in X$, and let $Z=(Z_1,...,Z_{2d})\in\field{R}^{2d}$ with $|Z|<\epsilon_0$ be geodesic normal coordinates around $x_0$, where $|\cdot|$ is the Euclidean norm of $\field{R}^{2d}$. In these coordinates, the canonical volume form is given by \begin{equation}\label{defkappa} dv_X(Z)=\kappa_{x_0}(Z)dZ\;, \end{equation} with $\kappa_{x_0}(0)=1$.
For any kernel $K(\cdot,\cdot)\in\cinf(X\times X,\field{C})$, we write $K_{x_0}(\cdot,\cdot)$ for its image in these coordinates, and we write $|K_x|_{\CC^m(X)}$ for the $\CC^m$-norm of the family of functions $K_x$ with respect to $x \in X$. Let $d^X$ be the Riemannian distance on $X$. We will derive Theorem \ref{thm-quant} as a consequence of the following asymptotic expansion as $p \to +\infty$ of the Schwartz kernel of the Berezin transform. \begin{thm}\label{BTasy} For any $m\,,k\in\field{N}$ and $\epsilon>0$, there is $C>0$ such that for all $p\in\field{N}^*$ and $x,y\in X$ satisfying $d^X(x,y)>\epsilon$, \begin{equation}\label{thetafla} |{\mathcal{B}}_p(x,y)|_{\CC^m}\leq Cp^{-k}\;. \end{equation} For any $m, k\in\field{N}$, there is $N\in\field{N}$ and $C>0$ such that for any $x_0\in X,\,|Z|,|Z'|<\epsilon_0$ and for all $p\in\field{N}^*$, we have \begin{multline}\label{BTexp} \Big|p^{-d} B_{p,x_0}(Z,Z') -\sum_{r=0}^{k-1} p^{-r/2}J_{r,x_0}(\sqrt{p}Z,\sqrt{p}Z') \exp(-\pi p|Z-Z'|^2)\kappa_{x_0}^{-1}(Z')\Big|_{\CC^m(X)}\\ \leq Cp^{-\frac{k}{2}}(1+\sqrt{p}|Z|+\sqrt{p}|Z'|)^N \exp(-\sqrt{p}|Z-Z'|/C)+O(p^{-\infty})\;, \end{multline} where $\{J_{r,x_0}(Z,Z')\}_{r\in\field{N}}$ is a family of polynomials in $Z,Z'\in\field{R}^{2d}$ of the same parity as $r$, depending smoothly on $x_0\in X$. Furthermore, for any $Z,Z'\in\field{R}^{2d}$ we have \begin{equation}\label{|J|0} J_{0,x_0}(Z,Z')=1\quad\text{and}\quad J_{1,x_0}(Z,Z')=0\;. \end{equation} \end{thm} \medskip This readily follows from formula \eqref{eq-schwarzkernelberezin} expressing the Schwartz kernel of the Berezin transform via the Bergman kernel $\Pi_p$ and the analogous result of Dai, Liu and Ma in \cite[Th.~4.18']{DLM06} for the Bergman kernel. For any $x\in X$, let $B^X(x,\epsilon_0)$ be the geodesic ball of radius $\epsilon_0>0$ around $x$, and write $B(0,\epsilon_0) \subset\field{R}^{2d}$ for the Euclidean ball of radius $\epsilon_0$ around $0$.
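The coefficient $1/4\pi$ in front of the Laplacian in the expansion of the Berezin transform quoted earlier originates in the second moment of the Gaussian weight $\exp(-\pi|Z|^2)$ appearing in \eqref{BTexp}: on the real line, $\int x^2 e^{-\pi x^2}dx = 1/2\pi$, while the total mass is $1$. A quick numerical sanity check of these moments (illustrative only, assuming numpy is available):

```python
import numpy as np

# One-dimensional moments of the Gaussian exp(-pi x^2): total mass 1 and
# second moment 1/(2 pi); after the rescaling Z -> Z / sqrt(p) these produce
# the coefficient 1/(4 pi) in front of the Laplacian.
x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]
g = np.exp(-np.pi * x**2)

mass = g.sum() * dx                     # Riemann sum, essentially exact here
second_moment = (x**2 * g).sum() * dx

assert np.isclose(mass, 1.0, atol=1e-8)
assert np.isclose(second_moment, 1.0 / (2 * np.pi), atol=1e-8)
```

Since the other coordinates integrate to $1$, each coordinate $Z_j$ contributes $\frac{1}{2}\cdot\frac{1}{2\pi}=\frac{1}{4\pi}$ times $\partial^2_{Z_j}$, which is the source of the factor $\Delta/4\pi$.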
The following proposition is a refinement of the Karabegov-Schlichenmaier expansion \cite[(1.2)]{KS01} of the Berezin transform, where we make the remainder term explicit. \begin{prop}\label{KS} For any $m\in\field{N}$, there exists $C_m>0$ such that for any $f\in\cinf(X,\field{C})$ and all $p\in\field{N}^*$, we have \begin{equation}\label{KSexp} \left|{\mathcal{B}}_pf-f+\frac{\Delta}{4\pi p} f\right|_{\CC^m}\leq \frac{C_m}{p^2}|f|_{\CC^{m+4}}\;. \end{equation} \end{prop} \begin{proof} For any $x\in X$, write $f_x$ for the image of $f$ restricted to $B^X(x,\epsilon_0)$ in normal coordinates around $x$. From \eqref{thetafla}, we know that for any $\epsilon>0$ and $x\in X$, \begin{equation}\label{KStheta} \begin{split} ({\mathcal{B}}_pf)(x)&=\int_X {\mathcal{B}}_p(x,y)f(y)dv_X(y)\\ &=\int_{B^X(x,\epsilon_0)}{\mathcal{B}}_p(x,y)f(y)dv_X(y) +O(p^{-\infty})\,|f|_{\CC^0}\\ &=\int_{B(0,\epsilon_0)}B_{p,x}(0,Z)f_x(Z)\kappa_x(Z)dZ +O(p^{-\infty})\,|f|_{\CC^0}\;. \end{split} \end{equation} For any $k\in\field{N}^*$ and $m\in\field{N}$, we will use the following Taylor expansion of $f_x$ up to order $k-1$, for all $p\in\field{N}^*$ and $|Z|<\epsilon_0$, \begin{equation}\label{Taylorfk} \begin{split} f_{x}&(Z) =\sum_{0\leq|\alpha|\leq k-1}\frac{\partial^{|\alpha|} f_{x}}{\partial Z^\alpha}(0)\frac{Z^\alpha}{\alpha!} +O_m(|Z|^{k})|f|_{\CC^{m+k}}\\ & =\sum_{0\leq |\alpha|\leq k-1}p^{-\frac{|\alpha|}{2}}\frac{\partial^{|\alpha|} f_{x}}{\partial Z^\alpha}(0)\frac{(\sqrt{p}Z)^\alpha}{\alpha!}+p^{-\frac{k}{2}}O_m(|\sqrt{p}Z|^{k})|f|_{\CC^{m+k}}\;, \end{split} \end{equation} where $O_m$ means that the expansion is uniform in $x\in X$ as well as all its derivatives up to order $m\in\field{N}$, and does not depend on $f$. We will compute the asymptotic expansion as $p\fl+\infty$ of \eqref{KStheta} using the Taylor expansion \eqref{Taylorfk} of $f$ and the asymptotic expansion \eqref{BTexp} of the Berezin transform up to order $3$.
First, using the fact that ${\mathcal{B}}_p 1=1$ for all $p\in\field{N}^*$, we know that the polynomials $J_{r,x}(Z,Z')$ of the asymptotic expansion \eqref{BTexp} of the Berezin transform satisfy \begin{equation}\label{KSest3} \int_{\field{R}^{2d}}J_{r,x}(0,Z)\exp(-\pi|Z|^2)dZ=0\;, \end{equation} for all $x\in X$ and $r\in\field{N}^*$. On the other hand, recall from \eqref{|J|0} that $J_{0,x}\equiv 1$ and $J_{1,x}\equiv 0$ for all $x\in X$. Using the parity of Gaussian functions, a change of variable $Z\mapsto Z/\sqrt{p}$ and the Taylor expansion \eqref{Taylorfk} for $k=4$, we get that \begin{multline}\label{KSest1} p^d\int_{B(0,\epsilon_0)}\exp(-\pi p|Z|^2)f_x(Z)dZ\\ =f(x) +p^{-1}\sum_{j=1}^{2d}\frac{\partial^2 f_{x}}{\partial Z^2_j}(0) \int_{\field{R}^{2d}}\frac{Z_j^2}{2}\exp(-\pi |Z|^2)dZ+ |f|_{\CC^{m+4}}O_m(p^{-2})\\ =f(x)-p^{-1}\frac{\Delta}{4\pi}f(x)+|f|_{\CC^{m+4}}O_m(p^{-2})\;. \end{multline} Recall that $J_{r,x}(0,Z)\in\field{C}[Z]$ is a polynomial in $Z\in\field{R}^{2d}$ of the same parity as $r\in\field{N}$, so that using \eqref{Taylorfk}, \eqref{KSest3} and the parity of Gaussian functions, we get in the same way \begin{equation}\label{KSest2} \begin{split} p^d\int_{B(0,\epsilon_0)}J_{2,x}(0,\sqrt{p}Z) &\exp(-\pi p|Z|^2)f_x(Z)dZ\\ =f(x)\int_{\field{R}^{2d}}&J_{2,x}(0,Z)\exp(-\pi|Z|^2)dZ+O_m(p^{-1}) |f|_{\CC^{m+2}}\\ &=O_m(p^{-1})|f|_{\CC^{m+2}}\;,\\ p^d\int_{B(0,\epsilon_0)}J_{3,x}(0,\sqrt{p}Z) &\exp(-\pi p|Z|^2)f_x(Z)dZ=O_m(p^{-1/2})|f|_{\CC^{m+1}}\;. \end{split} \end{equation} Finally, again using a change of variable $Z\mapsto Z/\sqrt{p}$, we get for any $N\in\field{N}^*$ and $p\in\field{N}^*$, \begin{equation}\label{KSest4} p^d\int_{B(0,\epsilon_0)}(1+|\sqrt{p}Z|)^N\exp(-\sqrt{p}|Z|/C)f_x(Z)dZ =O_m(1)|f|_{\CC^m}\;. \end{equation} This completes the proof of \eqref{KSexp}. \end{proof} In view of Proposition \ref{heatexpprop} and Proposition \ref{KS}, it is natural to compare the Berezin transform with the heat operator by setting $t=(4\pi p)^{-1}$.
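At the level of leading terms this comparison is exact: the Gaussian $p^{d}\exp(-\pi p|Z-Z'|^2)$ of \eqref{BTexp} coincides with the Euclidean heat kernel on $\field{R}^{2d}$ at time $t=(4\pi p)^{-1}$. A quick numerical check of this identity (illustrative only, assuming numpy; the sample values are arbitrary):

```python
import numpy as np

# Leading term of the Berezin kernel expansion vs. the Euclidean heat kernel
# (4 pi t)^{-d} exp(-|Z - Z'|^2 / (4 t)) on R^{2d} at time t = 1/(4 pi p).
rng = np.random.default_rng(3)
d, p = 2, 50                       # complex dimension d, semiclassical parameter p
t = 1.0 / (4 * np.pi * p)

Z = rng.standard_normal(2 * d) * 0.1
Zp = rng.standard_normal(2 * d) * 0.1
r2 = np.sum((Z - Zp)**2)

berezin_leading = p**d * np.exp(-np.pi * p * r2)
heat_kernel = (4 * np.pi * t)**(-d) * np.exp(-r2 / (4 * t))
assert np.isclose(berezin_leading, heat_kernel)
```

This is just the algebraic identity $(4\pi t)^{-d}=p^{d}$ and $|Z-Z'|^2/(4t)=\pi p|Z-Z'|^2$ for $t=(4\pi p)^{-1}$, which is the flat-space shadow of the comparison carried out in Proposition \ref{boundexp}.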
This leads to the following result, which is essentially a refinement of \cite[Th.0.1]{LM07}. \begin{prop}\label{boundexp} For any $m\in\field{N}$, there exists $C_m>0$ such that for any $f\in\cinf(X,\field{C})$ and all $p\in\field{N}^*$, we have \begin{equation}\label{boundexpfla} \left\|(e^{-\frac{\Delta}{4\pi p}}-{\mathcal{B}}_p)f\right\|_{H^m}\leq \frac{C_m}{p}\|f\|_{H^m}\;. \end{equation} \end{prop} \begin{proof} Set $S_p:=e^{-\frac{\Delta}{4\pi p}}-{\mathcal{B}}_p$, which acts on $L_2(X,\field{C})$ for all $p\in\field{N}^*$ and admits a smooth Schwartz kernel $S_p(\cdot,\cdot)$ with respect to $dv_X$. Comparing the classical asymptotic expansion of the heat kernel, as given for example in \cite[Th.2.29]{BGV04}, \cite{Kannai}, with \cref{BTasy}, we see that \begin{equation}\label{thetaflaSp} S_p(x,y)=O(p^{-\infty})\;, \end{equation} for all $x,y\in X$ satisfying $d^X(x,y)>\epsilon_0$, and using the formula \eqref{|J|0} for the first two coefficients, we get for any $m\in\field{N}$ a constant $C>0$ and $N\in\field{N}$ such that \begin{multline}\label{B-eexp} \left|S_{p,x_0}(Z,Z')\right|_{\CC^m(X)}\leq Cp^{-1} (1+\sqrt{p}|Z|+\sqrt{p}|Z'|)^N \exp(-\sqrt{p}|Z-Z'|/C)\\ +O(p^{-\infty})\;. \end{multline} Let us first show \eqref{boundexpfla} for $m=0$.
For any $f\in\cinf(X,\field{C})$ and any $\epsilon>0$, we get by the Cauchy-Schwarz inequality and \eqref{thetaflaSp} for $S_p$ that for all $p\in\field{N}^*$, \begin{equation}\label{Schurtest} \begin{split} \|S_p f\|_{L_2}^2&\leq\int_X\left(\int_X |S_p(x,y)|\,dv_X(y)\right) \left(\int_X |S_p(x,y)|\,|f(y)|^2\,dv_X(y)\right)dv_X(x)\\ &\leq\sup_{x\in X}\left(\int_X|S_p(x,y)|\,dv_X(y)\right) \sup_{y\in X}\left(\int_X|S_p(x,y)|\,dv_X(x)\right)\|f\|_{L_2}^2\\ &\leq\sup_{x\in X}\left(\int_{B^X(x,\epsilon_0)}|S_p(x,y)|\,dv_X(y)\right) \sup_{y\in X}\left(\int_{B^X(y,\epsilon_0)}|S_p(x,y)|\,dv_X(x)\right)\|f\|_{L_2}^2\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad +O(p^{-\infty})\|f\|_{L_2}^2\;. \end{split} \end{equation} Then \eqref{boundexpfla} for $m=0$ follows from \eqref{B-eexp} with $Z=0$ or $Z'=0$ respectively, as in \eqref{KSest4}. To deal with the case of arbitrary $m\in\field{N}^*$, let us assume by induction that \eqref{boundexpfla} is satisfied for $m-1$. Considering the estimates \eqref{thetaflaSp} and \cref{B-eexp} with the corresponding $m\in\field{N}^*$, note that for any differential operator $D_x$ of order $m$ in $x\in X$, there exists a differential operator $D'_{x,y}$ in $x,y\in X$ of total order $m$ but of order at most $m-1$ in $x\in X$, such that the operator $S_p^{(m)}$ defined through its kernel for all $x,y\in X$ by \begin{equation} S_p^{(m)}(x,y):=D_xS_p(x,y)+D'_{x,y}S_p(x,y) \end{equation} also satisfies \eqref{thetaflaSp} and \eqref{B-eexp}.
Then for all $x\in X$ and $p\in\field{N}^*$, we get \begin{equation} \begin{split} \int_X D_xS_p(x,y)f(y)dv_X(y)&= -\int_X(D'_{x,y}S_p(x,y))f(y)dv_X(y)+(S_p^{(m)}f)(x)\\ &=\int_XD_x'S_p(x,y)(D_y''f(y))dv_X(y)+(S_p^{(m)}f)(x)\;, \end{split} \end{equation} where $D'_x$ and $D''_y$ are differential operators, respectively in $x$ and in $y$, obtained from $D'_{x,y}$ using a partition of unity and integration by parts in local charts, so that in particular $D'_x$ is of order $m-1$ in $x\in X$. Then using the induction hypothesis, the inequality \eqref{boundexpfla} for $m$ follows from the same inequality for $m-1$ applied with $f$ replaced by derivatives of $f$, and from the estimates \eqref{B-eexp} and \eqref{Schurtest} for $S_p^{(m)}$ in the same way as before. \end{proof} \subsection{Spectrum}\label{specsec} Recall that $\|\cdot\|_p$ denotes the norm on $L_2(X,\alpha_p)$. In this section, we consider a sequence $\{f_p\}_{p\in\field{N}^*}$, with $f_p\in\cinf(X,\field{C})$ such that \begin{equation}\label{efBp} \|f_p\|_p=1\;,\quad {\mathcal{B}}_pf_p=\mu_p f_p\;, \end{equation} for some $\mu_p\in\Spec({\mathcal{B}}_p)$ for all $p\in\field{N}^*$. The following estimate is crucial for the proof of Theorem \ref{thm-quant}. \begin{lemma}\label{hjrefprop} Assume that the sequence $\{p(1-\mu_p)\}_{p\in\field{N}^*}$ is bounded by some constant $L>0$. Then for all $m\in\field{N}$, there exists $C_{L,m}>0$ such that for all $p\in\field{N}^*$, we have \begin{equation}\label{hjrefined} \|f_p\|_{H^{2m}}\leq C_{L,m}\;. \end{equation} \end{lemma} \begin{proof} Note that \eqref{hjrefined} is automatically verified for $m=0$ by \eqref{diagexp} and \eqref{efBp}. By induction on $m\in\field{N}$, let us assume that \eqref{hjrefined} is satisfied for $m-1$.
Let us write \begin{equation}\label{deltFest} \begin{split} p(e^{-\frac{\Delta}{4\pi p}}-{\mathcal{B}}_p)f_p&=p(1-\mu_p)f_p- p({1\hskip-2.5pt{\rm l}}-e^{-\frac{\Delta}{4\pi p}})f_p\\ &=p(1-\mu_p)f_p-\Delta F(\Delta/p)f_p\;, \end{split} \end{equation} where the bounded operator $F(\Delta/p)$ acting on $L_2(X,\field{C})$ is defined as in \eqref{calculfct} for the continuous function $F:\field{R}\fl\field{R}$ given for any $s\in\field{R}^*$ by $F(s)=(1-e^{-s/4\pi})/s$. As $|p(1-\mu_p)|\leq L$ for all $p\in\field{N}^*$, by \cref{boundexp} and formula \eqref{ellest} for $\|\cdot\|_{H^{2m}}$, this gives a constant $C_m>0$ such that \begin{equation}\label{sobfest} \|F(\Delta/p)f_p\|_{H^{2m}}\leq C_m\|f_p\|_{H^{2m-2}}\;. \end{equation} On the other hand, note that by hypothesis, we have $\mu_p\fl 1$ as $p\fl+\infty$. Using \cref{boundexp} again, we then get $\epsilon_m>0$ and $p_m\in\field{N}^*$ such that for all $p>p_m$, \begin{equation}\label{hjdeltFest} \begin{split} \|F(\Delta/p)f_p\|_{H^{2m}}&\geq \|F(\Delta/p)f_p+({\mathcal{B}}_p-e^{-\frac{\Delta}{4\pi p}})f_p\|_{H^{2m}} -\|({\mathcal{B}}_p-e^{-\frac{\Delta}{4\pi p}})f_p\|_{H^{2m}}\\ &\geq\inf_{s>0}\,\{F(s)+\mu_p-e^{-s/4\pi}\}\,\|f_p\|_{H^{2m}} -C_m p^{-1}\|f_p\|_{H^{2m}}\\ &\geq\epsilon_m\|f_p\|_{H^{2m}}\;. \end{split} \end{equation} This together with \eqref{sobfest} gives \eqref{hjrefined}. \end{proof} \medskip\noindent{\bf Proof of Theorem \ref{thm-quant}.} For any $f\in\cinf(X,\field{C})$, by Propositions \ref{KS} and \ref{boundexp} we get that \begin{equation}\label{KSSob} \left\|p({1\hskip-2.5pt{\rm l}}-{\mathcal{B}}_p)f-\frac{\Delta}{4\pi}f\right\|_{L_2} \leq Cp^{-1}|f|_{\CC^{4}} \leq Cp^{-1}\|f\|_{H^{q}}\;, \end{equation} with $q$ even and large enough. The inequality on the right follows from the Sobolev embedding theorem, and the same is true in $L_2(X,\alpha_p)$-norm by \eqref{diagexp}.
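The spectral-calculus step in \eqref{deltFest} and the positivity of the infimum in \eqref{hjdeltFest} are elementary and can be checked numerically; a sketch, assuming the convention $F(s)=(1-e^{-s/4\pi})/s$ for the function $F$ and taking $\mu_p$ close to $1$:

```python
import numpy as np

# F(s) = (1 - exp(-s/(4*pi)))/s, extended continuously by F(0) = 1/(4*pi).
F = lambda s: (1.0 - np.exp(-s / (4 * np.pi))) / s

# Identity behind (deltFest): p*(1 - exp(-lam/(4*pi*p))) == lam * F(lam/p)
# on each eigenvalue lam of the Laplacian, for every p.
lam, p = np.meshgrid(np.linspace(0.1, 50.0, 100), np.arange(1.0, 200.0))
lhs = p * (1.0 - np.exp(-lam / (4 * np.pi * p)))
rhs = lam * F(lam / p)
assert np.allclose(lhs, rhs)

# Lower bound behind (hjdeltFest) with mu_p -> 1: the infimum over s > 0 of
# F(s) + 1 - exp(-s/(4*pi)) stays above a positive constant (it equals 1/(4*pi)).
s = np.linspace(1e-6, 1e4, 200000)
assert (F(s) + 1.0 - np.exp(-s / (4 * np.pi))).min() > 0.079
```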
Now if $e_j\in\cinf(X,\field{C})$ is such that $\Delta e_j=\lambda_j e_j$, then by \eqref{KSSob} we get $C_j>0$ not depending on $p\in\field{N}^*$ such that \begin{equation}\label{estevdelt} \left\|p({1\hskip-2.5pt{\rm l}}-{\mathcal{B}}_p)e_j-\frac{\lambda_j}{4\pi}e_j\right\| _p\leq C_j p^{-1}\;. \end{equation} Thus if $m_j\in\field{N}$ is the multiplicity of $\lambda_j$ as an eigenvalue of $\Delta$, the estimate \eqref{estevdelt} for all eigenfunctions of $\Delta$ associated with $\lambda_j$ gives a constant $C>0$ such that \begin{equation}\label{spec>} \#\left(\Spec\big(p({1\hskip-2.5pt{\rm l}}-{\mathcal{B}}_p)\big)\cap\left[\frac{\lambda_j}{4\pi}-Cp^{-1},\frac{\lambda_j}{4\pi}+Cp^{-1}\right]\right) \geq m_j\;. \end{equation} This immediately follows from the variational principle for the operator $p({1\hskip-2.5pt{\rm l}}-{\mathcal{B}}_p)-\frac{\lambda_j}{4\pi}{1\hskip-2.5pt{\rm l}}$. Consider now for every $p\in\field{N}^*$ an eigenfunction $f_p\in\cinf(X,\field{C})$ of ${\mathcal{B}}_p$ as in \eqref{efBp} such that the associated sequence $\{p(1-\mu_p)\}_{p\in\field{N}^*}$ of eigenvalues of $p({1\hskip-2.5pt{\rm l}}-{\mathcal{B}}_p)$ is bounded. Combining Lemma \ref{hjrefprop} with the right inequality in \eqref{KSSob}, we get $C>0$ such that \begin{equation}\label{specinv>} \left\|p(1-\mu_p)f_p-\frac{\Delta}{4\pi} f_p\right\|_{L_2}\leq Cp^{-1}\;. \end{equation} In particular, we get that \begin{equation}\label{eq-spec-est} \textup{dist}\left(p(1-\mu_p),\Spec\tfrac{\Delta}{4\pi}\right)\leq Cp^{-1}\;. \end{equation} Finally, let us show that there exists $p_0\in\field{N}^*$ such that \eqref{spec>} is in fact an equality for $p>p_0$.
To this end, let $l\in\field{N}^*$ with $l\geq m_j$ be such that for all $p\in\field{N}^*$, there exists an orthonormal family $f_{k,p},\,1\leq k\leq l,$ of eigenfunctions of ${\mathcal{B}}_p$ in $L_2(X,d\alpha_p)$ with associated eigenvalues $\mu_{k,p}\in\field{R},\,1\leq k\leq l,$ satisfying \begin{equation} p(1-\mu_{k,p})\in\left[\frac{\lambda_j}{4\pi}-Cp^{-1},\frac{\lambda_j}{4\pi}+Cp^{-1}\right]\;,~~ \text{for all}~~1\leq k\leq l\;. \end{equation} As the inclusion of the Sobolev space $H^q$ in $H^{q-1}$ is compact, by Lemma \ref{hjrefprop} there exists a subsequence of $\{f_{k,p}\}_{p\in\field{N}^*}$ converging to a function $f_k$ in $H^{q-1}$-norm, for all $1\leq k\leq l$. In particular, taking $q>2$, the family $f_k,\,1\leq k\leq l$, is orthonormal in $L_2(X,\field{C})$ and satisfies $\Delta f_k=\lambda_j f_k$ for all $1\leq k\leq l$ by \eqref{specinv>}. By definition of the multiplicity $m_j\in\field{N}$ of $\lambda_j$, this forces $l\leq m_j$, hence $l=m_j$. Let us sum up our findings. First, the equality $$ \#\left(\Spec\big(p({1\hskip-2.5pt{\rm l}}-{\mathcal{B}}_p)\big)\cap\left[\frac{\lambda_j}{4\pi}-Cp^{-1},\frac{\lambda_j}{4\pi}+Cp^{-1}\right]\right) = m_j\;, $$ where $m_j$ is the multiplicity of $\lambda_j$ as an eigenvalue of $\Delta$, together with \eqref{eq-spec-est} readily yields the first statement of the theorem: $$1-\gamma_{k,p} =\frac{\hbar}{4\pi}\lambda_k + \mathcal{O}(p^{-2})\;.$$ Second, observe that we got a subsequence of $f_{k,p}$, $p \in \field{N}^*$, converging to $f_k$ in the Sobolev $H^{q-1}$ sense, where $q$ even can be chosen arbitrarily large. By the Sobolev embedding theorem, this yields a subsequence which $\CC^l$-converges to $f_k$ for arbitrary $l$. Iterating this argument for this subsequence, we get that there exists a sequence $p_l \to +\infty$ such that $$|f_{k,p_l} - f_k|_{\CC^l} \leq 1/l\;,$$ which means that $f_{k,p_l}$ converges to $f_k$ in the $\CC^\infty$-sense.
This completes the proof.\qed\\ \medskip\noindent{\bf Proof of Theorem \ref{cor-quant}.} Let us consider Donaldson's $Q_K$-operator, acting on $f\in\cinf(X,\field{C})$ by \begin{equation}\label{Q_K} (Q_{K,p} f)(x)= \frac{n_p}{\Vol(X)} \int_X ((F(x),F(y)))f(y) dv_X(y)\;. \end{equation} We will show that when the scalar curvature is constant, the analogue of Theorem \ref{thm-quant} holds for this operator. As $p'/p = 1 + \mathcal{O}(p^{-1})$ by the Riemann-Roch theorem, this will imply Theorem \ref{cor-quant} via the morphism $T^*_p$ of Section \ref{prel}, which relates $Q_p$ with $Q_{K,p}$ in the same way that ${\mathcal{E}}_p$ is related to ${\mathcal{B}}_p$. Recall that $R_p:X\to\field{R}$ denotes the Rawnsley function, and that $n_p=\dim_\field{C}\HH_p$. By the classical asymptotic expansion of the Bergman kernel, which can be found for example in \cite[\S\,4.1.1]{MM07}, we know that when the scalar curvature is constant, we have \begin{equation} \frac{\Vol(X)}{n_p}R_p=1+O(p^{-2})\;. \end{equation} As this expansion holds in $\mathscr{C}^m$-norm for all $m\in\field{N}^*$ and by the definitions of ${\mathcal{B}}_p$ and $Q_{K,p}$ in formulas \eqref{Bpfla} and \eqref{Q_K} respectively, we get a constant $C_m>0$ for any $m\in\field{N}^*$ such that \begin{equation}\label{QK-Bp} \|Q_{K,p}-{\mathcal{B}}_p\|_{H^m}\leq C_m p^{-2}. \end{equation} It is then easy to see that Lemma \ref{hjrefprop} holds for any sequence $\{f_p\}_{p\in\field{N}^*}$ with $f_p\in\cinf(X,\field{C})$ such that \begin{equation}\label{efQKp} \|f_p\|_{L_2}=1\;,\quad Q_{K,p}f_p=\mu_p f_p\;, \end{equation} with $\{p(1-\mu_p)\}_{p\in\field{N}^*}$ bounded, simply using the estimate \eqref{QK-Bp} to replace ${\mathcal{B}}_p$ by $Q_{K,p}$ in \eqref{deltFest} and \eqref{hjdeltFest}.
We can then follow the proof of Theorem \ref{thm-quant} above to get the same result for $Q_{K,p}$, using the estimate \eqref{QK-Bp} to replace ${\mathcal{B}}_p$ by $Q_{K,p}$ in \eqref{KSSob} and \eqref{estevdelt}, and using \eqref{diagexp} to replace $\|\cdot\|_p$ by $\|\cdot\|_{L_2}$ in \eqref{estevdelt}. This completes the proof. \qed\\ Let us now make a final comment on the case when the complex structure is not assumed to be integrable, so that $(X,\omega)$ is a closed symplectic manifold of real dimension $2d$, and $L$ is a Hermitian line bundle with Hermitian connection of curvature $-2\pi i{\omega}$. One can then consider the following \emph{renormalized Bochner Laplacian} acting on $\cinf(X,L^p)$ for any $p\in\field{N}^*$, first introduced by Guillemin and Uribe \cite{GU88}, \begin{equation} \Delta_p:=\Delta^{L^p}-2\pi dp\;, \end{equation} where $\Delta^{L^p}$ stands for the usual Bochner Laplacian on $L^p$. By \cite[Th.2.a]{GU88}, the spectrum of $\Delta_p$ is contained in $I\,\cup\,(C_1p-C_2,+\infty)$ for all $p\in\field{N}^*$, for some $C_1,\,C_2>0$ and some interval $I\subset\field{R}$ containing $0$. We can then consider $\Pi_p$ as the associated spectral projection corresponding to $I$ and set $\mathscr{H}_p={\rm Im}(\Pi_p)$. Using the work of Ma and Marinescu on the kernel of $\Pi_p$ \cite{MM08}, all the preliminaries of \cref{subsecprep} hold in this context, and we claim that \cref{thm-quant} holds as well. In fact, the Berezin transform admits an asymptotic expansion similar to \cref{BTasy} as a consequence of the analogous result in \cite[(2.31),\,(3.2)]{LMM16}, except for the formula \eqref{|J|0}, where we only have $J_{1,x_0}(0,Z')=0$ for all $Z'\in\field{R}^{2d}$ as a consequence of \cite[Lem.6.1,\,Lem.6.2]{ILMM17} (see also \cite[(2.32)]{MM08}). Then \cref{KS} holds as stated, and \cref{boundexp} holds with $p^{-\sigma}$ instead of $p^{-1}$ on the right hand side, for some $\sigma>0$. 
It is then straightforward to adapt the rest of the proof. Note that the corresponding estimates in \cref{KS} and \cref{boundexp} can be seen as refinements of \cite{LMM16}. On the other hand, the proof extends with no modifications to the case where $L^p$ is replaced by $L^p\otimes E$ for any Hermitian vector bundle $E$ equipped with a Hermitian connection, again by the results of \cite{MM08}. \section{Berezin transform and Donaldson's iterations}\label{sec-don} In \cite{D} Donaldson, as a part of his program of developing approximate methods for detecting canonical metrics on K\"{a}hler manifolds, discovered a remarkable class of dynamical systems on the space of all Hermitian products on a given complex vector space. We shall show in this section that the linearization of such a system at a fixed point can be identified with the quantum channel ${\mathcal{E}}$ introduced in \eqref{eq-e} above and prove that under certain natural assumptions, ${\mathcal{E}}$ is injective and has a strictly positive spectral gap. Using earlier results by Donaldson, we will then deduce that the iterations of this system converge exponentially fast to the fixed point. For a complex $n$-dimensional vector space ${\mathcal{V}}$, denote by ${\mathcal Prod}({\mathcal{V}})$ the space of Hermitian products on ${\mathcal{V}}$. Given such a $q\in{\mathcal Prod}({\mathcal{V}})$, let $\mathscr{H}:=({\mathcal{V}},q)$ be the corresponding Hilbert space, and define a map \begin{equation} \Psi_q: {\mathbb{P}} ({\mathcal{V}})\longrightarrow\mathscr{L}(\mathscr{H}) \end{equation} sending a line $\Lambda\in{\mathbb{P}}({\mathcal{V}})$ to the orthogonal projector onto $\Lambda$ with respect to $q$. Every product $q \in {\mathcal Prod}({\mathcal{V}})$ gives rise to an antilinear Riesz map $I_q:{\mathcal{V}}\to{\mathcal{V}}^*$, defined by the formula $I_q(v) = q(\cdot, v)$, for all $v\in{\mathcal{V}}$.
This induces in turn a product $q^* \in {\mathcal Prod}({\mathcal{V}}^*)$ by the formula $q^*(\xi,\eta):=q(I_q^{-1}\eta, I_q^{-1}\xi)$, for all $\xi,\,\eta\in{\mathcal{V}}^*$. We write $\mathscr{H}^*:=({\mathcal{V}}^*,q^*)$ for the corresponding Hilbert space. Let $\nu$ be a finite Borel measure on ${\mathbb{P}} ({\mathcal{V}}^*)$, and set $|\nu|:= \nu({\mathbb{P}} ({\mathcal{V}}^*)) < \infty$. Following Donaldson \cite[p.581]{D}, we say that $q \in {\mathcal Prod}({\mathcal{V}})$ is $\nu$-{\it balanced} if the operator-valued measure \begin{equation} \label{eq-Donaldson-POVM} dW(z):= n\,\Psi_{q^*}(z) \frac{d\nu(z)}{|\nu|}\;, \end{equation} defines an $\mathscr{L}(\mathscr{H}^*)$-valued POVM on ${\mathbb{P}} ({\mathcal{V}}^*)$ as in \eqref{eq-POVM-density}. This translates into the condition \begin{equation} \label{eq-Donaldson-POVM-1} R_\nu \int_{{\mathbb{P}} (\mathscr{H}^*)} \Psi_{q^*}(z) d\nu(z) = {1\hskip-2.5pt{\rm l}},\quad\text{with}~~R_\nu:=\frac{n}{|\nu|}\;, \end{equation} which is precisely Donaldson's formulation. In this section, we will be mainly concerned with the problem of finding a $\nu$-balanced Hermitian product $q\in{\mathcal Prod}({\mathcal{V}})$, under some natural assumptions on the measure $\nu$. The existence of $\nu$-balanced products in this context is due to Bourguignon, Li and Yau \cite{BLY94}, where they use such products to give an upper bound on the first eigenvalue of the Laplacian of complex manifolds embedded in projective space. This generalizes the seminal work of Hersch \cite{Her70}, where he shows that the first eigenvalue of any metric on $S^2$ of fixed area is at most that of the round metric, using the notion of balanced product in its simplest form. \medskip \noindent \begin{exam}\label{exam-general}{\rm Let $\mathscr{H}=({\mathcal{V}},q)$ be an $n$-dimensional Hilbert space, and let $W$ be an $\mathscr{L}(\mathscr{H})$-valued POVM, defined as in formula \eqref{eq-POVM-density}.
Let $\sigma_W$ be the pushforward measure $F_*\alpha$ on ${\mathbb{P}}({\mathcal{V}})$ by the associated map \begin{equation} F:\Omega\longrightarrow{\mathbb{P}}({\mathcal{V}})\;, \end{equation} where ${\mathbb{P}}({\mathcal{V}})$ is identified with the set of rank one projectors in ${\mathcal{S}}(\mathscr{H})\subset\mathscr{L}(\mathscr{H})$. It is then an immediate consequence of the definitions that $q^*\in{\mathcal Prod}({\mathcal{V}}^*)$ is $\sigma_W$-balanced. } \end{exam} \medskip\noindent\begin{exam}\label{exam-cloudK} {\rm For any $p\in\field{N}^*$, consider the Berezin-Toeplitz POVM $W_p$ on a closed quantizable K\"{a}hler manifold $X$ associated to a Hermitian holomorphic line bundle $L$, as in Section \ref{subsec-BLB}. The associated Hilbert space is $\mathscr{H}_p=(H^0(X,L^p),q_{L^2})$, where $H^0(X,L^p)$ is the space of holomorphic sections of $L^p$, and $q_{L^2}\in{\mathcal Prod}(H^0(X,L^p))$ is the $L^2$-Hermitian product induced by the Kähler metric. In this case, the measure $\sigma_{W_p}$ is the push-forward of $\alpha_p$ under the Kodaira embedding \begin{equation}\label{Kodemb} F_p: X \longrightarrow {\mathbb{P}}(H^0(X,L^p)^*)\;. \end{equation} Then by Lemma 6.13 in \cite{Fine-lectures}, we get as a special case of the previous example that $q_{L^2}\in{\mathcal Prod}(H^0(X,L^p))$ is $\sigma_{W_p}$-balanced. }\end{exam} \medskip \noindent \begin{exam}\label{exam-balanced} {\rm Taking the previous example the other way around, let $X$ be a complex manifold and $L \to X$ be an ample line bundle over $X$. Then the Kodaira map \eqref{Kodemb} is an embedding for $p$ sufficiently large. Let now $q\in{\mathcal Prod}(H^0(X,L^p))$ be any Hermitian product on $H^0(X,L^p)$. This gives rise to the Fubini-Study form $\omega_{FS}^{(q)}$ on $\mathbb{P}(H^0(X,L^p)^*)$, inducing a volume form $\nu_q$ on $F_p(X)$, which we consider as a measure on $\mathbb{P}(H^0(X,L^p)^*)$. 
Considering the Berezin-Toeplitz POVM of Section \ref{subsec-BLB} associated with the Kähler form $\omega_q:= p^{-1}F_p^*\omega_{FS}^{(q)}$, we get by Proposition 8.3 in \cite{Fine-lectures} that the collection $(X,\omega_q, L,h,p)$ is balanced in the sense of \eqref{def-balanced} if and only if the product $q$ is $\nu_q$-balanced. Combining with the previous example, we see that this happens if and only if $q$ is the $L^2$-Hermitian product induced by $h$. } \end{exam} \medskip Now, starting with a measure $\nu$ on ${\mathbb{P}}({\mathcal{V}}^*)$, we wish to find a $\nu$-balanced product $q\in{\mathcal Prod}({\mathcal{V}})$. First choose any base point $q_0\in{\mathcal Prod}({\mathcal{V}})$, and identify $({\mathcal{V}},q_0)$ with $(\field{C}^n, \langle\cdot,\cdot\rangle)$, where $\langle z,w\rangle = \sum_j z_j\bar{w}_j$. This identifies ${\mathcal Prod}({\mathcal{V}})$ with the set $\mathscr{L}(\field{C}^n)_+$ of positive Hermitian $n \times n$ matrices: every $q \in {\mathcal Prod}({\mathcal{V}})$ can be uniquely written as $q(v,w) = \langle G^T v,w\rangle$, with $G \in \mathscr{L}(\field{C}^n)_+$. On the other hand, under the identification of $(\field{C}^n)^*$ with $\field{C}^n$ induced by $\<\cdot,\cdot\>$, this identifies ${\mathbb{P}}({\mathcal{V}}^*)$ with $\field{C} P^{n-1}$, and the dual product $q^*$ is given by $q^*(v,w)=\<G^{-1}v,w\>$. For a non-zero vector $z \in \field{C}^n$, denote by $\Pi_z$ the orthogonal projector with respect to $\<\cdot,\cdot\>$ to the line generated by $z$. Then one readily checks that $q\in{\mathcal Prod}({\mathcal{V}})$ is $\nu$-balanced if and only if its image $G \in \mathscr{L}(\field{C}^n)_+$ via the identification above satisfies \begin{equation}\label{nubalpovm} R_\nu\int_{\field{C} P^{n-1}} \Pi_z G^{-1}\, \frac{|z|^2} {\langle G^{-1} z,z\rangle }\,d\nu(z) = {1\hskip-2.5pt{\rm l}}\;.
\end{equation} This can be rewritten as $\TT_\nu (G) = G$, where $\TT_\nu: \mathscr{L}(\field{C}^n)_+ \to \mathscr{L}(\field{C}^n)_+ $ is defined by \begin{equation}\label{Tnudef} G \mapsto R_\nu\int_{\field{C} P^{n-1}} \Pi_z\,\frac{|z|^2} {\langle G^{-1} z,z\rangle }\,d\nu(z)\;. \end{equation} This dynamical system $\TT_\nu$ on $\mathscr{L}(\field{C}^n)_+$ was defined by Donaldson in \cite{D}. Under mild conditions on the measure $\nu$, Donaldson proved that for every initial condition $G_0 \in \mathscr{L}(\field{C}^n)_+$, the iterations $\TT_\nu^r (G_0)$ converge to a fixed point $G\in\mathscr{L}(\field{C}^n)_+$ as $r \to +\infty$, and that this fixed point is unique up to the action of $\field{R}_+$ on $\mathscr{L}(\field{C}^n)$ by scalar multiplication. Donaldson's argument, which we reproduce below, shows that the linearization of $\TT_\nu$ at $G$ coincides with the quantum channel ${\mathcal{E}}$ of $W$ defined in \eqref{eq-e}. To see this, let $q\in{\mathcal Prod}({\mathcal{V}})$ be a $\nu$-balanced Hermitian product, which we identify with a fixed point $G\in\mathscr{L}(\field{C}^n)_+$ of $\TT_\nu$ as above, and let $W$ be the associated POVM as in \eqref{eq-Donaldson-POVM}. Note that the two points of view are linked via composition by $G$, which identifies the space $\mathscr{L}(\field{C}^n)=\mathscr{L}({\mathcal{V}},q_0)$ with $\mathscr{L}(\mathscr{H})=\mathscr{L}({\mathcal{V}},q)$. Note that if we choose $q_0=q$ as a base point, then we have $G={1\hskip-2.5pt{\rm l}}$ and this identification is an equality. \medskip \noindent \begin{lemma}\cite[p.609]{D}\label{prop-D1} Let $q\in{\mathcal Prod}({\mathcal{V}})$ be a $\nu$-balanced Hermitian product, and let $G \in \mathscr{L}(\field{C}^n)_+$ be the associated fixed point of $\TT_\nu$.
Under the natural identification of the tangent space $T_G\mathscr{L}(\field{C}^n)_+\simeq\mathscr{L}(\field{C}^n)$ with $\mathscr{L}(\mathscr{H})$ via composition by $G$, the differential of $\TT_\nu$ satisfies $D_G\TT_\nu = {\mathcal{E}}$, where ${\mathcal{E}}$ is the quantum channel of the POVM $W$ associated to $q$. \end{lemma} \begin{proof} Choosing $q=q_0$ as above, we can assume without loss of generality that $\mathscr{H}=(\field{C}^n,\<\cdot,\cdot\>)$ and that $G={1\hskip-2.5pt{\rm l}}$. Let $G(t)$ be a smooth path in $\mathscr{L}(\field{C}^n)_+$ with $G(0)={1\hskip-2.5pt{\rm l}}$. Abbreviating $\dot{G}:=\dot{G}(0)$, we have $$\frac{d}{dt}\big |_{t=0} \TT_\nu(G(t)) = R_\nu\int_{\field{C} P^{n-1}} \Pi_z \frac{\langle\dot{G}z,z\rangle}{|z|^2}d\nu(z)\;.$$ Noticing that $\langle\dot{G}z,z\rangle/|z|^2= ((\dot{G},\Pi_z))$, we rewrite the right hand side as $$n \ \int_{\field{C} P^{n-1}} \Pi_z ((\dot{G},\Pi_z)) \frac {d\nu(z)}{|\nu|} = {\mathcal{E}}(\dot{G})\;,$$ as required.\end{proof} \medskip Recall that the quantum channel ${\mathcal{E}}:\mathscr{L}(\mathscr{H})\to\mathscr{L}(\mathscr{H})$ satisfies ${\mathcal{E}}({1\hskip-2.5pt{\rm l}})={1\hskip-2.5pt{\rm l}}$, and that its \emph{spectral gap} is the quantity $\gamma=1-\lambda_1$, where \begin{equation} 1=\lambda_0\geq\lambda_1\geq\lambda_2\geq\cdots\geq 0 \end{equation} is the decreasing sequence of eigenvalues of ${\mathcal{E}}$. Keeping in mind the identification of ${\mathcal{V}}$ and ${\mathcal{V}}^*$ with $\field{C}^n$ via a base point $q_0\in{\mathcal Prod}({\mathcal{V}})$, our first goal is to prove the following result. \medskip\noindent \begin{thm}\label{thm-GH} Let $\nu$ be a Borel measure on $\field{C} P^{n-1}$, and suppose that $\nu$ is supported on a complex variety $Y \subset \field{C} P^{n-1}$, with $\nu$ absolutely continuous on every irreducible component of $Y$.
$(i)$ Assume that for any proper projective subspace $\Sigma$ of $\field{C} P^{n-1}$, we have \begin{equation} \label{eq-spade} \frac{\nu(\Sigma)}{\dim \Sigma +1} < \frac{|\nu|}{n}\;. \end{equation} Then for any $G_0\in\mathscr{L}(\field{C}^n)_+$, the iterations $\TT_\nu^r (G_0)$ converge to a fixed point $G_\infty\in\mathscr{L}(\field{C}^n)_+$ as $r\to+\infty$, unique up to the action of $\field{R}_+$ by scalar multiplication. Furthermore, the quantum channel ${\mathcal{E}}$ associated to $G_\infty$ as in \cref{prop-D1} has positive spectral gap. $(ii)$ Assume in addition that at least one irreducible component of $Y$ is not contained in any proper projective subspace of $\field{C} P^{n-1}$. Then the associated quantum channel ${\mathcal{E}}$ is invertible. \end{thm} \medskip The proof of Theorem \ref{thm-GH} will be divided into Propositions \ref{Psicv}, \ref{prop-D2} and \ref{prop-D3} below. But first, some remarks are in order. Note that if $Y$ is irreducible, assumptions $(i)$ and $(ii)$ are satisfied as soon as $Y$ is not contained in a proper projective subspace of $\field{C} P^{n-1}$. Thus these assumptions are automatically satisfied in the important case when $\nu$ is induced by a smooth volume form over a complex manifold $X$ embedded in a projective space via Kodaira embedding. Conversely, observe that if there exists a $\nu$-balanced Hermitian product, the whole variety $Y$ (in contrast with its irreducible components) cannot lie in a proper projective subspace of $\field{C} P^{n-1}$. Indeed, assume without loss of generality that $\<\cdot,\cdot\>$ is $\nu$-balanced, and assume on the contrary that the lift of $Y$ to $\field{C}^n$ is orthogonal to a non-zero vector, say $u$. Then equation \eqref{eq-Donaldson-POVM-1} yields $$R_\nu \int_{Y} \Pi_y u \,d\nu(y) = u\;.$$ However, $\Pi_y u =0$ by the assumption, and we arrive at a contradiction. Note also that assumption $(i)$ coincides with Donaldson's assumption 2 in \cite[p.
581]{D} when $Y$ is a finite collection of points. Donaldson proved that if either $Y$ is a complex variety which is not contained in any proper projective subspace, or $Y$ is a finite collection of points satisfying $(i)$, every orbit of $\TT_\nu$ converges to a balanced product $G$ as time goes to infinity. The proof of Proposition \ref{Psicv} closely follows the lines of \cite[p. 581]{D}. \medskip \noindent \begin{prop}\label{Psicv} Assume that assumption $(i)$ of Theorem \ref{thm-GH} holds. Then for any $G_0\in\mathscr{L}(\field{C}^n)_+$, the iterations $\TT_\nu^r (G_0)$ converge to a fixed point $G_\infty\in\mathscr{L}(\field{C}^n)_+$ as $r\to+\infty$, unique up to the action of $\field{R}_+$ by scalar multiplication. \end{prop} \begin{proof} Recall that $\<\cdot,\cdot\>$ denotes the canonical Hermitian product of $\field{C}^n$. Following \cite[p. 582]{D}, for any $[z]\in\field{C} P^{n-1}$, let $z\in\field{C}^n$ be a lift of norm $1$, and for any $G\in\mathscr{L}(\field{C}^n)_+$, set \begin{equation}\label{psizdef} \psi_{[z]}(G):=\log\<G^{-1}z,z\>+\frac{1}{n}\log\det G. \end{equation} This quantity does not depend on the choice of a lift of $[z]\in\field{C} P^{n-1}$ of norm $1$, and the second term makes it invariant under multiplication of $G$ by a positive scalar. Given a Borel measure $\nu$ on $\field{C} P^{n-1}$, we then define a functional on $\mathscr{L}(\field{C}^n)_+$ by the formula \begin{equation}\label{Psinudef} \Psi_\nu(G)=\int_{\field{C} P^{n-1}}\psi_{[z]}(G)\,d\nu([z]), \end{equation} for any $G\in\mathscr{L}(\field{C}^n)_+$. Using \eqref{Tnudef}, we see that $G\in\mathscr{L}(\field{C}^n)_+$ is a critical point of $\Psi_\nu$ if and only if it is a fixed point of $\TT_\nu$.
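For a finite point measure, the fixed-point iteration \eqref{Tnudef} and the monotone decay of $\Psi_\nu$, proved later in this proof, can be illustrated numerically; a sketch with hypothetical random data (generic points, so that assumption $(i)$ holds almost surely):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 8                                     # dim of C^n, number of support points
Z = rng.normal(size=(N, n)) + 1j * rng.normal(size=(N, n))   # lifts of the points [z_i]
w = rng.uniform(1.0, 2.0, size=N)               # masses w_i of nu = sum_i w_i delta_{[z_i]}
R = n / w.sum()                                 # R_nu = n / |nu|

def T(G):
    """Donaldson's map (Tnudef): G -> R_nu sum_i w_i Pi_{z_i} |z_i|^2 / <G^{-1} z_i, z_i>."""
    Ginv = np.linalg.inv(G)
    return R * sum(wi * np.outer(zi, zi.conj()) / np.vdot(zi, Ginv @ zi).real
                   for zi, wi in zip(Z, w))

def Psi(G):
    """The functional Psi_nu of (psizdef)-(Psinudef), computed with unit-norm lifts."""
    Ginv = np.linalg.inv(G)
    return sum(wi * np.log(np.vdot(zi, Ginv @ zi).real / np.vdot(zi, zi).real)
               for zi, wi in zip(Z, w)) + (w.sum() / n) * np.linalg.slogdet(G)[1]

G = np.eye(n, dtype=complex)
vals = [Psi(G)]
for _ in range(2000):
    G = T(G)
    vals.append(Psi(G))

assert all(b <= a + 1e-10 for a, b in zip(vals, vals[1:]))   # Psi_nu decreases along the orbit
assert np.linalg.norm(T(G) - G) < 1e-5                       # the limit is a balanced fixed point
```

All names here (`T`, `Psi`, the random data) are illustrative, not part of the text; the simplification $\Pi_z\,|z|^2/\langle G^{-1}z,z\rangle = z z^*/\langle G^{-1}z,z\rangle$ is used inside `T`.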
Thus to show the existence and uniqueness of such a fixed point up to the action of $\field{R}_+$, we can restrict $\Psi_\nu$ to the space $\mathscr{L}(\field{C}^n)_+^1$ of positive Hermitian matrices of determinant $1$, and it suffices to show that $\Psi_\nu$ is strictly convex and proper along any geodesic of $\mathscr{L}(\field{C}^n)_+^1$ for its natural Riemannian metric as a symmetric space. In fact, any strictly convex and proper function over $\field{R}$ has a unique absolute minimum, which is also its unique critical point. Now as two points can always be joined by a geodesic, we conclude in that case that a fixed point of $\TT_\nu$ on $\mathscr{L}(\field{C}^n)_+^1$ coincides with the minimum of $\Psi_\nu$, which exists and is unique. Recall that the structure of symmetric space on $\mathscr{L}(\field{C}^n)_+^1$ is given by the map \begin{equation} \begin{split} \textup{SL}_n(\field{C})&\longrightarrow\mathscr{L}(\field{C}^n)_+^1\\ G&\longmapsto \sqrt{G^*G}\;, \end{split} \end{equation} which realizes $\mathscr{L}(\field{C}^n)_+^1$ as the quotient of the special linear group $\textup{SL}_n(\field{C})$ by the special unitary group $\textup{SU}(n)$. The usual scalar product $((\cdot,\cdot))$ on the space of $n \times n$ matrices induces a Riemannian metric on $\mathscr{L}(\field{C}^n)_+^1$ through the identification of its tangent space at any point with the space of traceless matrices. By general theory of symmetric spaces, geodesics are simply the images of $1$-parameter groups of $\textup{SL}_n(\field{C})$ through the above map, so that up to the action of $\textup{SU}(n)$ by conjugation, they are of the form $G_t\in\mathscr{L}(\field{C}^n)_+^1$, with \begin{equation} G_t=\diag(e^{\lambda_1 t},e^{\lambda_2 t},\cdots,e^{\lambda_n t})\;, \end{equation} for all $t\in\field{R}$, where $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_n$ satisfy $\sum_{j=1}^n\lambda_j=0$.
Now if $\nu$ satisfies assumption $(i)$ of Theorem \ref{thm-GH}, its pullback by the action of a unitary matrix also satisfies this assumption, and thus we are reduced to showing strict convexity and properness of \begin{equation} t\longmapsto\Psi_\nu(G_t)=\int_{\field{C} P^{n-1}}\log\left(\sum_{j=1}^ne^{\lambda_j t} |z_j|^2\right)\,d\nu([z]),\quad t\in\field{R}\;. \end{equation} Now convexity follows from a direct computation, with strict convexity as long as $\nu$ is not entirely supported in a proper projective subspace of $\field{C} P^{n-1}$, which is a straightforward consequence of assumption $(i)$. Let us now show properness, i.e. that $\Psi_\nu(G_t)\to+\infty$ when $t\to\pm\infty$. By considering the geodesic going in the opposite direction, it suffices to show it when $t\to+\infty$. Consider an irreducible component $Z\subset Y$, and let $k\leq n$ be the largest integer such that $Z$ is contained in the projective subspace \begin{equation}\label{Sigmak} \Sigma_k:=\{[0:\cdots:0:z_k:\cdots:z_n]\in\field{C} P^{n-1}\}\subset\field{C} P^{n-1}\;. \end{equation} As $\nu$ is absolutely continuous over the smooth part of $Z$, this means in particular that the function $\log|z_k|^2$ restricted to $Z$ is integrable with respect to $\nu$. We thus get a constant $C_Z>0$ such that \begin{equation} \begin{split} \int_Z\log\left(\sum_{j=1}^ne^{\lambda_j t} |z_j|^2\right)\,d\nu([z])&\geq\int_Z\log\left(e^{\lambda_k t} |z_k|^2\right)\,d\nu([z])\\ &\geq\lambda_k t\nu(Z)-C_Z\;. \end{split} \end{equation} For any $k\leq n$, write $\nu_k\geq 0$ for the total mass of the irreducible components of $Y$ for which $k$ is the largest integer such that they are contained in $\Sigma_k$ as above. We then get a constant $C_Y>0$ such that \begin{equation} \Psi_\nu(G_t)\geq t\sum_{j=1}^n\lambda_j\nu_j-C_Y\;. \end{equation} We are thus reduced to showing that $\sum_{j=1}^n\lambda_j\nu_j>0$.
Notice now that assumption $(i)$, applied to the proper subspaces $\Sigma_k$ of \eqref{Sigmak}, implies \begin{equation} \sum_{j=k}^n\nu_j<\frac{n-k+1}{n}\sum_{j=1}^n\nu_j, \quad\text{for all}~~2\leq k\leq n\;. \end{equation} Setting $\lambda_0:=\lambda_1$ and using Abel summation together with $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_n$ and $\sum_{j=1}^n\lambda_j=0$, we then get \begin{equation} \begin{split} \sum_{j=1}^n\lambda_j\nu_j&=\lambda_0\sum_{j=1}^n\nu_j+ \sum_{k=1}^n(\lambda_k-\lambda_{k-1})\sum_{j=k}^n\nu_j\\ &>\left(\lambda_0+\sum_{k=1}^n\frac{n-k+1}{n} (\lambda_k-\lambda_{k-1})\right)\sum_{j=1}^n\nu_j\\ &=\left(\frac{1}{n} \sum_{k=1}^n\lambda_k\right)\sum_{j=1}^n\nu_j=0\;, \end{split} \end{equation} where the middle inequality is strict since the $\lambda_j$ are not all equal along a non-trivial geodesic. This implies properness. Let us now show the convergence of iterations of $\TT_\nu$ to a fixed point. We will first show that $\TT_\nu$ decreases $\Psi_\nu$, so that iterations have an accumulation point by properness, and we will then show that this accumulation point is in fact a fixed point. First note that for any $G\in\mathscr{L}(\field{C}^n)_+$, using the fact that rank-one projectors have trace $1$, formula \eqref{eq-Donaldson-POVM-1}, together with \eqref{nubalpovm} and \eqref{Tnudef}, gives ${{\rm tr}}\left[\TT_\nu(G)G^{-1}\right]=n$. Using the strict concavity of the logarithm, we thus get \begin{equation}\label{logdet} \begin{split} \frac{1}{n}\log\det\left(\TT_\nu(G)\right) -\frac{1}{n}\log\det\left(G\right)&= \frac{1}{n}\log\det\left(\TT_\nu(G)G^{-1}\right)\\ &\leq\log\left(\frac{{{\rm tr}}\left[\TT_\nu(G)G^{-1}\right]}{n}\right) =0\;, \end{split} \end{equation} with equality if and only if $\TT_\nu(G)G^{-1}={1\hskip-2.5pt{\rm l}}$. Thus to show that $\Psi_\nu(\TT_\nu(G))\leq\Psi_\nu(G)$, by definition \eqref{Psinudef} of $\Psi_\nu$, we only need to show that $\TT_\nu$ decreases the integral against $\nu$ of the first term of formula \eqref{psizdef}.
Again by concavity of the logarithm, applied now via Jensen's inequality to the probability measure $\nu/|\nu|$, we get \begin{equation}\label{intlog} \begin{split} \int_{\field{C} P^{n-1}}\log\<\TT_\nu(G)^{-1}z,z\>\,d\nu([z])- \int_{\field{C} P^{n-1}}&\log\<G^{-1}z,z\>\,d\nu([z])\\ &\leq|\nu|\log\left(\frac{1}{|\nu|}\int_{\field{C} P^{n-1}}\frac{\<\TT_\nu(G)^{-1}z,z\>} {\<G^{-1}z,z\>}\,d\nu([z]) \right)\\ &=|\nu|\log\left(\frac{1}{n} {{\rm tr}}\left[\TT_\nu(G)^{-1}\TT_\nu(G)\right]\right)=0\;. \end{split} \end{equation} This, together with \eqref{logdet}, proves $\Psi_\nu(\TT_\nu(G))\leq\Psi_\nu(G)$, for all $G\in\mathscr{L}(\field{C}^n)_+$. To conclude, note first that properness over $\mathscr{L}(\field{C}^n)_+^1$ and invariance under the action of $\field{R}_+$ imply that $\Psi_\nu$ is bounded from below over the whole $\mathscr{L}(\field{C}^n)_+$. Thus for any $G_0\in\mathscr{L}(\field{C}^n)_+$, we get that the decreasing sequence $\{\Psi_\nu(\TT_\nu^r(G_0))\}_{r\in\field{N}}$ converges to its lower bound. As both terms in the definition of $\Psi_\nu$ are decreasing under iterations of $\TT_\nu$ by \eqref{logdet} and \eqref{intlog}, we then deduce that $\{\log\det(\TT_\nu^r(G_0))\}_{r\in\field{N}}$, thus also $\{\det(\TT_\nu^r(G_0))\}_{r\in\field{N}}$, are bounded in $\field{R}$, and that \begin{equation}\label{logdetto0} \frac{1}{n}\log\det\left( \TT_\nu^{r+1}(G_0)\TT_\nu^r(G_0)^{-1}\right) \longrightarrow 0,\quad\text{as}~~r\to+\infty\;. \end{equation} Now from properness of $\Psi_\nu$ over $\mathscr{L}(\field{C}^n)_+^1$ and boundedness in $\field{R}$ of the sequences $\{\Psi_\nu(\TT_\nu^r(G_0))\}_{r\in\field{N}}$ and $\{\det(\TT_\nu^r(G_0))\}_{r\in\field{N}}$, we get that the sequence $\{\TT_\nu^r(G_0)\}_{r\in\field{N}}$ admits an accumulation point $G_\infty\in\mathscr{L}(\field{C}^n)_+$. On the other hand, by strict concavity of the logarithm, formula \eqref{logdetto0} and the equality case in formula \eqref{logdet} imply \begin{equation} \TT_\nu^{r+1}(G_0)\TT_\nu^r(G_0)^{-1} \longrightarrow{1\hskip-2.5pt{\rm l}},\quad\text{as}~~r\to+\infty\;.
\end{equation} We thus get that the accumulation point is unique, and satisfies $\TT_\nu(G_\infty)=G_\infty$. This concludes the proof. \end{proof} In the following Proposition, we use the fact, established in the previous Proposition, that a fixed point of $\TT_\nu$ exists as soon as $\nu$ satisfies assumption $(i)$. \medskip \noindent \begin{prop}\label{prop-D2} Assume that assumption $(i)$ holds. Then the quantum channel ${\mathcal{E}}$ associated to a fixed point $G\in\mathscr{L}(\field{C}^n)_+$ of $\TT_\nu$ as in \cref{prop-D1} has positive spectral gap. \end{prop} \begin{proof} Consider a Borel measure $\nu$ over $\field{C} P^{n-1}$ satisfying assumption $(i)$ of Theorem \ref{thm-GH}, and normalize it by setting $\alpha:= \nu/|\nu|$. Through the identification of Lemma \ref{prop-D1}, we assume without loss of generality that $G={1\hskip-2.5pt{\rm l}}$ and that $D_G\TT_\nu={\mathcal{E}}$. Until the end of the proof we write $z$ for a non-vanishing vector in $\field{C}^n$ and $[z]$ for its class in $\field{C} P^{n-1}$. For any $z,\,w\in\field{C}^n$, write \begin{equation} {\mathcal{B}}([z],[w])= n\,\frac{|\langle z, w \rangle |^2} {|z|^2|w|^2} \end{equation} for the Schwartz kernel of the Berezin transform ${\mathcal{B}}$ on $L_2(\field{C} P^{n-1},\nu)$ with respect to $d\alpha$. Recall that $\<\cdot,\cdot\>$ stands for the canonical Hermitian product of $\field{C}^n$, and let $Y_1,\dots, Y_q$ be the irreducible components of $Y$. Since $(z,w) \mapsto \langle z, w\rangle$ is holomorphic in $z$ and anti-holomorphic in $w$, for every $i,\,j\leq q$, we get that \begin{itemize} \item[{(a)}] either ${\mathcal{B}}([z],[w]) = 0$ for all $([z],[w])\in Y_i \times Y_j\;,$ \item[{(b)}] or ${\mathcal{B}}([z],[w])\neq 0$ for almost all $([z],[w])\in Y_i \times Y_j\;.$ \end{itemize} Consider a graph $\Gamma$ with vertices $1,\dots, q$, where $i,j$ are connected by an edge whenever $(b)$ occurs on $Y_i\times Y_j$. 
In particular, each $i$ is connected by an edge to itself. Recall that $$\int {\mathcal{B}}(x,y) d\alpha(y) = \int {\mathcal{B}}(x,y) d\alpha(x)=1\;.$$ Using the same trick as in formula \eqref{Schurtest} above, we apply the Cauchy-Schwarz inequality to the formula \begin{equation}\label{CS1/2} \int{\mathcal{B}}(x,y)\phi(y)d\alpha(y) = \int {\mathcal{B}}(x,y)^{1/2}\, {\mathcal{B}}(x,y)^{1/2}\phi(y) d\alpha(y)\;, \end{equation} to get for any $\phi\in L^2(\field{C} P^{n-1},\nu)$, \begin{equation} \begin{split} \|{\mathcal{B}}\phi\|^2_{L_2} &= \int \left (\int {\mathcal{B}}(x,y)\phi(y) d\alpha(y)\right) ^2 d\alpha(x)\\ &\leq \int \left ( \int {\mathcal{B}}(x,y) d\alpha(y) \cdot \int {\mathcal{B}}(x,y) \phi^2(y)d\alpha(y) \right) d\alpha(x) =\|\phi\|^2_{L_2}\;. \end{split} \end{equation} In particular, the equality ${\mathcal{B}}\phi=\phi$ can hold only if the inequality above is an equality, and by the equality case of the Cauchy-Schwarz inequality, this implies that for $\alpha$-almost all $x$, there exists $c \neq 0$ such that $c{\mathcal{B}}(x,y)^{1/2}= {\mathcal{B}}(x,y)^{1/2}\phi(y)$ for $\alpha$-almost all $y$. In terms of the graph defined in the previous step, this yields that $\phi$ is constant on every subset of the form $\bigcup_{j \in \text{star}(i)} Y_j,$ where $i=1,\dots ,q$. Thus if $\phi$ is a non-constant function satisfying ${\mathcal{B}}\phi=\phi$, it follows that $\Gamma$ is disconnected. Denote by $\Gamma_i$, $i=1,\dots,k$, the connected components, and put $Z_i= \bigcup_{j \in \Gamma_i} Y_j$. Assuming that there exists a non-constant $\phi$ satisfying ${\mathcal{B}}\phi=\phi$ as above, we will show that assumption $(i)$ cannot hold. Recall that we work with the POVM $dW(x) = n \Pi_x d\alpha(x)$, where $\Pi_x$ is the orthogonal projector to the line $x \in \field{C} P^{n-1}$ with respect to $\<\cdot,\cdot\>$. Write $P = W(Z_1)$ and $P' = W(Z_2 \cup \dots \cup Z_k)$. 
It follows that $P+P'={1\hskip-2.5pt{\rm l}}$ and $PP' = 0$. Thus $P$ is an orthogonal projector whose projectivized image is a proper projective subspace $\Sigma$ of $\field{C} P^{n-1}$ of dimension $m-1$, with $$m = {{\rm tr}} P = {{\rm tr}} W(Z_1) = n\alpha(Z_1) = n \frac{\nu(Z_1)}{|\nu|}\;.$$ Observe also that if $Pz=0$, we get $$\int_{Z_1} \langle \Pi_x z,z \rangle d\nu(x) =0\;,$$ and hence $\langle \Pi_x z,z\rangle = 0$ for $\nu$-almost all $x$. Since $\nu$ is absolutely continuous on each irreducible component of $Y$, it follows that $x$ is orthogonal to $z$ for all $x \in Z_1$, and hence $Z_1 \subset \Sigma$. We conclude that $$\frac{\nu(\Sigma)}{m} \geq \frac{\nu(Z_1)}{m}= \frac{|\nu|}{n}\;,$$ so that assumption $(i)$ does not hold. \end{proof} The proof of this last Proposition is a variation on the theme of \cite[Proposition 4.1]{BMS}. \medskip \noindent \begin{prop}\label{prop-D3} Assume that assumption $(ii)$ holds. Then the quantum channel ${\mathcal{E}}$ associated to a fixed point $G\in\mathscr{L}(\field{C}^n)_+$ of $\TT_\nu$ as in \cref{prop-D1} is invertible. \end{prop} \medskip\noindent \begin{proof} Once again, we assume without loss of generality that $G={1\hskip-2.5pt{\rm l}}$ and $D_G\TT_\nu={\mathcal{E}}$. Until the end of the proof we write $z$ for a non-vanishing vector in $\field{C}^n$ and $[z]$ for its class in $\field{C} P^{n-1}$. Denote by $\tilde{Y}$ the cone of $Y$ in $\field{C}^n$. Assume on the contrary that a Hermitian matrix $A \neq 0$ lies in the kernel of ${\mathcal{E}}$, and set $$F_A([z],[w]):= \frac{\langle Az, w \rangle}{|z|\cdot |w|}\;.$$ Since ${\mathcal{E}}= n^{-1} TT^*$ and $T^* (A) ([z]) = F_A([z],[z])$ by the results of Section \ref{prel}, we have $F_A([z],[z]) =0$ for all $[z] \in Y$. 
Noticing that the function $(z,w) \mapsto \langle Az, w \rangle$ is holomorphic in $z$ and anti-holomorphic in $w$ and that it vanishes on the diagonal of $\tilde{Y} \times \tilde{Y}$, we conclude that $F_A$ vanishes on $Z \times Z$ for every irreducible component $Z$ of $Y$. Pick any irreducible component $Z$. If it fully lies in $\text{Ker}\, A$, then $Z$ is contained in a proper projective subspace. Otherwise, pick $[u] \in Z$ so that $Au \neq 0$. Then any other $[z] \in Z$ satisfies the linear equation $\langle z, Au \rangle = 0$, meaning that $Z$ lies in a proper projective subspace. This is in contradiction with assumption $(ii)$. \end{proof} The main consequence of \cref{thm-GH} and the main result of this section is the {\it exponential convergence} of Donaldson's iteration process to the $\nu$-balanced product. \begin{cor}\label{expcvcor} Suppose that the measure $\nu$ on $\field{C} P^{n-1}$ satisfies the assumptions $(i)$ and $(ii)$ of \cref{thm-GH}. Then for any $G_0\in\mathscr{L}(\field{C}^n)_+$, there exists a fixed point $G_\infty\in\mathscr{L}(\field{C}^n)_+$ of $\TT_\nu$ and constants $C>0$ and $\beta\in(0,1)$ such that for all $r\in\field{N}$, we have \begin{equation}\label{expcvest} \textup{dist}(\TT_\nu^r(G_0),G_\infty)\leq C\beta^r\;. \end{equation} \end{cor} \begin{proof} Suppose that $\nu$ satisfies assumptions $(i)$ and $(ii)$. Let us simplify the notation by setting $\mathscr{L}:= \mathscr{L}(\field{C}^n)_+$ and $\TT= \TT_\nu$. Take any $G_0\in\mathscr{L}$. By \cref{thm-GH}, its orbit $\TT^r(G_0)$ has a limit $G_\infty\in\mathscr{L}$ as $r \to +\infty$, which without loss of generality equals ${1\hskip-2.5pt{\rm l}}$. Write $\mathscr{L}^1$ for the space of positive Hermitian matrices of determinant $1$. 
Identify diffeomorphically $\mathscr{L}$ with $\mathscr{L}^1 \times \field{R}_+$ via the map $$\Theta: G \longmapsto\left(\mathscr{D}(G),\det(G)\right)\;\;\text{where}\;\; \mathscr{D}(G):=\frac{G}{\det(G)^{1/n}}\;.$$ Then for every $r \in \field{N}$, \begin{equation} \label{eq-newcoo} \Theta \TT^r \Theta^{-1} (G,g) =\left(\mathscr{D}(\TT^r(G)), g \cdot \det\TT^r(G)\right)\;. \end{equation} Recall that by Lemma \ref{prop-D1}, $D_{{1\hskip-2.5pt{\rm l}}}\TT$ coincides with the quantum channel ${\mathcal{E}}$. Since $\mathscr{L}^1$ is a slice of the $\field{R}_+$-action and $\TT$ is $\field{R}_+$-equivariant, the differential of $\mathscr{D} \circ \TT$ at ${1\hskip-2.5pt{\rm l}}$ equals the restriction of ${\mathcal{E}}$ to the tangent space $T_{1\hskip-2.5pt{\rm l}}\mathscr{L}^1$. The latter subspace consists of all trace $0$ Hermitian matrices. It follows from Theorem \ref{thm-GH} that the spectrum of this differential is contained in $(0,1)$, i.e., $\mathscr{D} \circ \TT$ is a local diffeomorphism of $\mathscr{L}^1$ near its hyperbolic fixed point ${1\hskip-2.5pt{\rm l}}$. By the classical Hartman-Grobman theorem, in a neighbourhood of ${1\hskip-2.5pt{\rm l}}$ the map $\mathscr{D} \circ \TT$ is conjugate by a local homeomorphism to its linearization at ${1\hskip-2.5pt{\rm l}}$. In particular, taking $\beta\in(0,1)$ as the largest eigenvalue of ${\mathcal{E}}$ in $(0,1)$, we get a constant $C>0$ such that \begin{equation}\label{Tnudetexpcv} \textup{dist} \left(\mathscr{D} (\TT^r(G_0)),{1\hskip-2.5pt{\rm l}} \right)\leq C\beta^r\;,\quad\text{for all}\; r\in\field{N}\;. \end{equation} By \eqref{eq-newcoo}, in order to complete the proof of the exponential convergence of the orbit of $G_0$ to ${1\hskip-2.5pt{\rm l}}$, we need to show that for $r$ large enough \begin{equation}\label{detcv} \left|\det \TT^r(G_0)-1\right| \leq C\beta^r\;. 
\end{equation} To this end, recall that the functional $\Psi_\nu$ of the proof of \cref{Psicv} is decreasing under iterations of $\TT$ and invariant with respect to the action of $\field{R}_+$ by multiplication. By \eqref{Tnudetexpcv} and the differentiability of $\Psi_\nu$ at ${1\hskip-2.5pt{\rm l}}$, there exists a constant $C>0$ such that \begin{equation} 0\leq \Psi_\nu(\TT^r(G_0))-\Psi_\nu({1\hskip-2.5pt{\rm l}})\leq C\beta^r \;. \end{equation} Now as both \eqref{logdet} and \eqref{intlog} are non-positive and as $\TT^r(G_0)\to{1\hskip-2.5pt{\rm l}}$ as $r\to +\infty$, recalling the definition \eqref{psizdef}-\eqref{Psinudef} of $\Psi_\nu$ we deduce that \begin{equation} 0 \leq \log\det(\TT^r(G_0))\leq C\beta^r\;. \end{equation} Since for $x$ close to $1$, we have $2|\log x | > |1-x|$, this yields \eqref{detcv}. The proof is complete. \end{proof} \medskip\noindent \begin{exam}\label{exam-don-kod} {\rm Consider the setting of Example \ref{exam-cloudK} above. The spectral gap of ${\mathcal{E}}$ coincides with the one of the Berezin transform which, by \eqref{eq-gap-repeated} above, is asymptotic to $\lambda_1(X)/(4\pi p)$. Thus by Corollary \ref{expcvcor}, the first eigenvalue of the Laplace-Beltrami operator controls the convergence rate of Donaldson's iterations in this case. } \end{exam} \section{POVMs and geometry of measures} \label{sec-bestfit} Assume that we are given an $\mathscr{L}(\mathcal{H})$-valued POVM on $\Omega$ satisfying equation \eqref{eq-POVM-density}, i.e., of the form $dW = n\,F\,d\alpha$ for some $F:\Omega \to {\mathcal{S}}(\mathcal{H})$. In this section we discuss spectral properties of the Berezin transform associated to $W$ in terms of the geometry of the measure \begin{equation}\label{eq-mmm} \sigma_W:= F_*\alpha \end{equation} on ${\mathcal{S}}(\mathcal{H})$, focusing on its multi-scale features, and on stability of the spectral gap under perturbations of the measure. 
Recall that for pure POVMs we have encountered the measure \eqref{eq-mmm} in Example \ref{exam-general}. \medskip Write ${\mathcal{V}} \subset \mathscr{L}(\mathcal{H})$ for the affine subspace consisting of all trace $1$ operators, and $\text{dist}$ for the distance on ${\mathcal{V}}$ associated to the scalar product $((A,B))={{\rm tr}}(AB)$ on $\mathscr{L}(\mathcal{H})$. Given a compactly supported probability measure $\sigma$ on ${\mathcal{V}}$, introduce the following objects: \begin{itemize} \item the center of mass $C(\sigma)= \int_{\mathcal{V}} v d\sigma(v)$; \item the mean squared distance from the center of mass, $$I(\sigma)= \int_{\mathcal{V}} \text{dist}(C,v)^2 d\sigma(v)\;;$$ \item the mean squared distance to the best fitting line $$J(\sigma) = \inf_\ell \int_{\mathcal{V}} \text{dist}(v,\ell)^2 d\sigma(v)\;,$$ where the infimum is taken over all affine lines $\ell \subset {\mathcal{V}}$. \end{itemize} The infimum in the definition of $J$ is attained at the (not necessarily unique) {\it best fitting line} which is known to pass through the center of mass $C$ (Pearson, 1901; see \cite[p.188]{Fare} for a historical account). \footnote{The problem of finding $J$ and the corresponding minimizer $\ell$ appears in the literature under several different names including ``total least squares" and ``orthogonal regression".} Observe that the center of mass $C(\sigma_W)$ for the measure $\sigma_W$ given by \eqref{eq-mmm} coincides with the maximally mixed state $\frac{1}{n}{1\hskip-2.5pt{\rm l}}$. \medskip \noindent \begin{thm}\label{prop-best-fit} The spectral gap $\gamma(W)$ depends only on the push-forward measure $\sigma_W$ on ${\mathcal{S}}(\mathcal{H})$: $$\gamma(W) = 1 - n(I(\sigma_W)-J(\sigma_W))\;.$$ \end{thm} \begin{proof} Let $\ell \subset {\mathcal{V}}$ be any line passing through the center of mass $\frac{1}{n}{1\hskip-2.5pt{\rm l}}$ and generated by a trace zero unit vector $A \in \mathscr{L}(\mathcal{H})$. 
For a point $B \in {\mathcal{V}}$ we have $$\text{dist}(B,\ell)^2= ((B-\frac{1}{n}{1\hskip-2.5pt{\rm l}},B-\frac{1}{n}{1\hskip-2.5pt{\rm l}}))- ((B-\frac{1}{n}{1\hskip-2.5pt{\rm l}},A))^2\;.$$ Integrating over $\sigma_W$ and taking the infimum over $\ell$, we get that \begin{equation} \label{eq-IJK-vsp} J(\sigma_W) = I(\sigma_W) - K\;, \end{equation} with \begin{equation} \label{eq-Ksup} K= \sup_{\substack{{{\rm tr}}(A)=0 \\ {{\rm tr}}(A^2)=1}} \int_{\mathcal{V}} ((B,A))^2 dF_*\alpha(B)\;. \end{equation} The latter integral can be rewritten as \begin{equation} \label{eq-vsp-e} \int_\Omega ((F(s),A))^2 d\alpha(s) = n^{-1}(({\mathcal{E}}(A), A))\;, \end{equation} so by definition $K = n^{-1}\gamma_1= n^{-1}(1-\gamma(W))$. Substituting this into \eqref{eq-IJK-vsp}, we deduce the theorem. \end{proof} \medskip \begin{rem}\label{rem-eigenv}{\rm Observe that the supremum in \eqref{eq-Ksup} is attained at a unit vector $A$ generating the best fitting line. By \eqref{eq-vsp-e}, $A$ is an eigenvector of ${\mathcal{E}}$ with the eigenvalue $\gamma_1$. } \end{rem} \medskip \begin{exam}{\rm For a pure POVM $W$, i.e., when $F$ is a one-to-one map from $\Omega$ to the set of rank-one projectors, $$\text{dist}(C,F(s))^2 = {{\rm tr}}\left[\left(\frac{1}{n}{1\hskip-2.5pt{\rm l}} -F(s)\right)^2\,\right] = 1-1/n$$ for all $s \in \Omega$, and hence $I(\sigma_W)= 1-1/n$. Thus, by Theorem \ref{prop-best-fit}, writing $\gamma=\gamma(W)$, \begin{equation}\label{eq-J} J(\sigma_W)= \frac{n-2+\gamma}{n}\;. \end{equation} For instance, consider the (pure!) Berezin-Toeplitz POVM $W_p$ from Example \ref{exam-cloudK}. Let us use formula \eqref{eq-J} in order to calculate $J$. 
Recall that by the Riemann-Roch theorem (see \cite{Fine-lectures}, Propositions 2.25 and 4.21) $$n_p = Vp^d + Up^{d-1}+ \mathcal{O}(p^{d-2})\;,$$ where $$V= \Vol(X) = [\omega]^d/d!,\;\;\;U = c_1(X) \cup [\omega]^{d-1}/(d-1)!\;.$$ It follows from formula \eqref{eq-gap-repeated} for $\gamma_p$ that $$J(\sigma_{W_p})= 1 -\frac{2}{V}p^{-d} + \frac{8\pi U+ V\lambda_1}{4\pi V^2} p^{-d-1} +\mathcal{O}(p^{-d-2})\;.$$ For instance, for the dual to the tautological bundle over $\field{C} P^1$ in Example \ref{exam-CP1}, $n=p+1$ and $\gamma= 2/(p+2)$, so by \eqref{eq-J}, $J = 1- \frac{2}{p+2}$. } \end{exam} \medskip Furthermore, we explore the robustness of the gap $\gamma(W)$, as a function of the measure $\sigma_W$, with respect to perturbations in the Wasserstein distances on the space of Borel probability measures on ${\mathcal{S}}(\mathcal{H})$. They are defined as follows. For compactly supported Borel probability measures $\sigma_1,\sigma_2 $ on a metric space $(X,\text{dist})$ the $L_2$-Wasserstein distance is given by $$\delta_2 \left( \sigma_1, \sigma_2 \right) := \inf \limits_{\nu} \left(\,\int \limits_{X\times X} \text{dist}\left(x_1, x_2\right)^2 \, d\nu(x_1,x_2)\right)^{1/2}\;,$$ and the $L_\infty$-Wasserstein distance by $$\delta_\infty \left( \sigma_1, \sigma_2 \right) := \inf \limits_{\nu} \sup_{(x_1,x_2)\in {\it supp\,} (\nu)} \text{dist}(x_1,x_2)\;,$$ where in both cases the infimum is taken over all Borel probability measures $\nu$ on $X \times X $ with marginals $\sigma_1$ and $\sigma_2$. \begin{thm} \label{thm-robust} Let $\sigma_V$ and $\sigma_W$ be measures on ${\mathcal{S}}(\mathcal{H})$ associated to POVMs $V$ and $W$, respectively. 
\begin{itemize} \item[{(i)}] $| \gamma(V) - \gamma(W)| \leq c(n) \delta_2(\sigma_V,\sigma_W)$, where $c(n)$ depends on the dimension $n=\dim \mathcal{H}$; \item[{(ii)}] If in addition $V$ and $W$ are pure POVMs, there exists a universal constant $c$ such that \begin{equation}\label{eq-rob-pure} |\gamma(V) - \gamma(W)| \leq c\delta_\infty(\sigma_V,\sigma_W)\;. \end{equation} \end{itemize} \end{thm} \medskip \noindent Note that this result enables us to compare spectral gaps of POVMs defined on different sets (but having values in the same Hilbert space). This idea goes back to \cite{OC}\footnote{In \cite{OC} the authors consider the $L_1$-version of this distance, and call it the Kantorovich distance.}. Let us emphasize that the estimate in (ii) is {\it dimension-free}. This is important, for instance, for comparison of spectral gaps corresponding to different Berezin-Toeplitz quantization schemes. Theorem \ref{thm-robust}(i) immediately follows from the fact that $C(\sigma)$, $I(\sigma)$ and $J(\sigma)$ are Lipschitz in $\sigma$ with respect to the $L_2$-Wasserstein distance. The details will appear in the MSc thesis of V.~Kaminker. For the proof of part (ii), we need the following auxiliary statement. In what follows we write $\|A\|_2$ for the Hilbert-Schmidt norm $({{\rm tr}}(AA^*))^{1/2}$. \begin{lemma}\label{lemma-tr} Let $P,Q$ be rank $1$ orthogonal projectors. Then for every Hermitian $A \in \mathscr{L}(\mathcal{H})$, $$|{{\rm tr}} (A(P-Q))| \leq \sqrt{2}\|P-Q\|_2 \ ({{\rm tr}} (A^2(P+Q)))^{1/2}\;.$$ \end{lemma} \begin{proof} Suppose that $P$ and $Q$ are orthogonal projectors to unit vectors $\xi$ and $\eta$, respectively. By tuning the phase of $\xi$, we can assume that $\langle \xi, \eta \rangle \geq 0$. 
We have $$ |{{\rm tr}} (A(P-Q))| = |\langle A\xi,\xi\rangle - \langle A\eta,\eta\rangle| $$ $$= |\langle \xi-\eta, A\xi \rangle + \langle A\eta, \xi-\eta \rangle |\leq |\xi-\eta| (|A\xi|+|A\eta|)$$ $$ = |\xi-\eta|( \langle A^2\xi,\xi\rangle^{1/2} + \langle A^2\eta,\eta\rangle^{1/2}) \leq \sqrt{2}|\xi-\eta|( \langle A^2\xi,\xi\rangle + \langle A^2\eta,\eta\rangle)^{1/2}$$ $$ =\sqrt{2}|\xi-\eta|\left({{\rm tr}}(A^2P)+{{\rm tr}}(A^2Q)\right)^{1/2} \;.$$ But since $0 \leq \langle \xi, \eta \rangle \leq 1$, $$|\xi-\eta| = (2 -2\langle \xi, \eta \rangle)^{1/2} \leq (2 -2\langle \xi, \eta \rangle^2)^{1/2}$$ $$= ({{\rm tr}}(P-Q)^2)^{1/2} = \|P-Q\|_2\;.$$ This completes the proof. \end{proof} \medskip \noindent {\bf Proof of Theorem \ref{thm-robust} (ii):} Denote by $\mathscr{P}$ the space of all rank 1 orthogonal projectors on $\mathcal{H}$. We can assume without loss of generality that the pure POVMs $V$ and $W$ are defined on subsets $\Omega_1$ and $\Omega_2$ of $\mathscr{P}$, respectively, and that the maps $F_i: \Omega_i \to \mathscr{P}$ are the inclusions. Thus the representation \eqref{eq-POVM-density} in this case simplifies to $$dV(s) = n\,s\,d\alpha_1(s)\;, \; dW(t) = n\,t\,d\alpha_2(t)\;,$$ where $\sigma_V= \alpha_1$ and $\sigma_W=\alpha_2$ are Borel probability measures supported in $\Omega_1$ and $\Omega_2$, respectively. Let us emphasize that here and below $s,t$ stand for rank 1 orthogonal projectors. Pick any measure $\nu$ on $\mathscr{P} \times \mathscr{P}$ with marginals $\alpha_1$ and $\alpha_2$ and write $$\Delta:= \max_{(s,t) \in {\it supp\,}(\nu) } \|s-t\|_2\;.$$ We use the fact that the operators ${\mathcal{E}}_1,{\mathcal{E}}_2: \mathscr{L}(\mathcal{H}) \to \mathscr{L}(\mathcal{H})$ given by formula \eqref{eq-e} have the same spectrum as the corresponding Berezin transforms. 
For $A \in \mathscr{L}(\mathcal{H})$ with ${{\rm tr}} (A^2) =1$ put $$D:= |(({\mathcal{E}}_1 A,A)) - (({\mathcal{E}}_2 A,A))|\;.$$ One readily rewrites $$D = n \ \left | \int_{\Omega_1} ((F_1(s),A))^2d\alpha_1(s) - \int_{\Omega_2} ((F_2(t),A))^2d\alpha_2(t)\right | $$ $$\leq n \ \int_{\Omega_1 \times \Omega_2} |{{\rm tr}}((s-t)A)|\ |{{\rm tr}}((s+t)A)| d\nu\;.$$ By Lemma \ref{lemma-tr}, $$|{{\rm tr}}((s-t)A)| \leq \sqrt{2}\|s-t\|_2\ ({{\rm tr}}(A^2(s+t)))^{1/2}\;.$$ By Cauchy-Schwarz, writing $$(s+t)A = (s+t)^{1/2}((s+t)^{1/2}A)\;,$$ we get $$|{{\rm tr}}((s+t)A)| \leq ({{\rm tr}}(s+t))^{1/2}({{\rm tr}}(A^2(s+t)))^{1/2} = \sqrt{2}({{\rm tr}}(A^2(s+t)))^{1/2} \;.$$ It follows that $$D \leq 2n\max_{(s,t) \in {\it supp\,}(\nu)} \|s-t\|_2 \ \int {{\rm tr}}(A^2(s+t)) d\nu\;.$$ The integral on the right can be rewritten as $${{\rm tr}} \left( A^2 \int_{\Omega_1} s d\alpha_1(s)\right ) + {{\rm tr}}\left ( A^2 \int_{\Omega_2} t d\alpha_2(t)\right )= 2/n\;,$$ since $$\int_{\Omega_1} s\,d\alpha_1(s) = \int_{\Omega_2} t\,d\alpha_2(t) = \frac{1}{n}{1\hskip-2.5pt{\rm l}}$$ and ${{\rm tr}}(A^2)=1$. It follows that $D \leq 4\Delta$. Choosing $\nu$ so that $\Delta$ becomes arbitrarily close to $\delta:= \delta_\infty(\alpha_1,\alpha_2)$, and taking $A$ with \begin{equation} \label{eq-A-constraints} {{\rm tr}} (A) =0\;,\ {{\rm tr}}(A^2)=1 \end{equation} to be an eigenvector of ${\mathcal{E}}_1$ with the first eigenvalue $\gamma_1({\mathcal{E}}_1)$, we get that $$|\gamma_1({\mathcal{E}}_1)- (({\mathcal{E}}_2 A,A))| \leq 4\delta\;.$$ But due to the variational characterization of the first eigenvalue, $\gamma_1({\mathcal{E}}_2) = \max (({\mathcal{E}}_2 A,A))$, where the maximum is taken over all $A$ satisfying \eqref{eq-A-constraints}. It follows that $\gamma_1({\mathcal{E}}_1)- \gamma_1({\mathcal{E}}_2) \leq 4\delta$. By symmetry, $\gamma_1({\mathcal{E}}_2)- \gamma_1({\mathcal{E}}_1) \leq 4\delta$, which yields the theorem with $c= 4$. 
\qed \medskip Our next result provides a geometric characterization of the eigenfunction of the operator ${\mathcal{B}}$ with the eigenvalue $\gamma_1$. Let $A \in \mathscr{L}(\mathcal{H})$ be the trace zero unit vector generating the best fitting line corresponding to $W$. In view of Theorem \ref{prop-best-fit}, $$\gamma_1 = 1-\gamma(W)= n(I-J)\;,$$ with $I= I(\sigma_W)$ and $J=J(\sigma_W)$. \medskip \noindent \begin{thm}\label{thm-eigenf} The function \begin{equation}\label{eq-psi1} \psi_1: \Omega \to \field{R},\; s \mapsto \frac{((F(s),A))}{\sqrt{I-J}}\end{equation} is an eigenfunction of the operator ${\mathcal{B}}$ with the eigenvalue $\gamma_1$. Furthermore, $\|\psi_1\|=1$. \end{thm} \medskip \noindent In other words, up to a multiplicative constant, the first eigenfunction is the projection to the best fitting line. \begin{proof} By Remark \ref{rem-eigenv} above, the operator $A$ generating the best fitting line is an eigenvector of the quantum channel ${\mathcal{E}}$: ${\mathcal{E}} A= \gamma_1 A$. Since ${\mathcal{E}}=n^{-1}TT^*$ and ${\mathcal{B}}= n^{-1}T^*T$, we have ${\mathcal{B}}(T^*A)= \gamma_1 T^*A$ and $(T^*A,T^*A) = n\gamma_1$. Furthermore, $T^*A(s)= n((F(s),A))$ and $n\gamma_1 = n^2(I-J)$. Choosing $\psi_1= T^*A/\|T^*A\|$, we get \eqref{eq-psi1}. \end{proof} \medskip Next, we discuss the {\it diffusion distance} on $\Omega$ associated to the Markov operator ${\mathcal{B}}$ (see \cite{CoL}). This distance, which originated in geometric analysis of data sets, depends on a positive parameter $\tau$ playing the role of time in the corresponding random process. Take any orthonormal eigenbasis $\{\psi_k\}$ corresponding to eigenvalues $1=\gamma_0 \geq \gamma_1 \geq \gamma_2 \geq \dots$ of ${\mathcal{B}}$ such that $\psi_0$ is constant. The diffusion distance $D_\tau$ is defined by \begin{equation}\label{eq-diffusion} D_\tau(s,t) = \big( \sum_{k \geq 1} \gamma_k ^{2\tau}(\psi_k(s) - \psi_k(t))^2 \big)^{1/2} \;\;\forall s,t \in \Omega\;. 
\end{equation} If $\gamma_1 < 1$, i.e., the spectral gap is positive, this expression decays exponentially in $\tau$. Suppose now that $\gamma_2 < \gamma_1$. In this case the asymptotic behavior of $D_\tau(s,t)$ as $\tau \to \infty$ is given by \begin{equation} D_\tau(s,t)= \gamma_1^\tau\frac{|((F(s)-F(t),A))|}{(I-J)^{1/2}}\ (1+o(1))\;,\;\text{if}\;\; ((F(s),A))\neq ((F(t),A))\;, \end{equation} and $D_\tau(s,t) = \mathcal{O}(\gamma_2^\tau)$ otherwise. The difference in these asymptotic formulas highlights the multi-scale behaviour of the metric space $(\Omega, D_\tau)$. To a first approximation, this space consists of the level sets of the function $s \mapsto ((F(s),A))$ situated at distance $\sim \gamma_1^\tau$ from one another, while each level set has diameter $\lesssim \gamma_2^\tau$. Viewing POVMs as data clouds in ${\mathcal{S}}$ opens up a prospect of using various tools of geometric data analysis for studying POVMs. The above result on the diffusion distance associated to a POVM can be considered as a step in this direction. \section{Case study: representations of finite groups} \label{sec-groups} In this section we will be interested in finite POVMs associated to irreducible representations of finite groups. We start with some preliminaries from Waldron's book \cite{W}. Let $G$ be a finite set. \begin{defin}\label{def - tight frame} {\rm A finite collection $\{f_s\}_{s\in G}$ of non-zero vectors in a finite-dimensional Hilbert space $\mathcal{H}$ is said to be a {\it tight frame} if there exists a number $A > 0$, called the {\it frame bound}, such that \begin{equation}\label{Bessel} A\| f \|^2 =\sum_{s\in G}|\langle f , f_s \rangle|^2\;,\quad \forall f\in \mathcal{H}\;. \end{equation} } \end{defin} Denote by $P_s$ the orthogonal projector to $f_s$. One readily checks that for such a frame, the operators \begin{equation}\label{eq-W-fr} W_s:= \frac{\|f_s\|^2}{A} P_s, \; s \in G\;, \end{equation} form an $\mathscr{L}(\mathcal{H})$-valued POVM on $G$. 
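As a quick numerical sanity check of \eqref{eq-W-fr} (an illustrative sketch, not part of the argument): for the classical three-vector ``Mercedes-Benz'' tight frame in $\field{R}^2\subset\field{C}^2$, one can verify both the frame identity and that the operators $W_s$ sum to the identity.

```python
import numpy as np

# The "Mercedes-Benz" tight frame: three unit vectors at 120 degrees in R^2 (viewed in C^2).
angles = 2 * np.pi * np.arange(3) / 3
F = np.stack([np.cos(angles), np.sin(angles)]).T.astype(complex)   # rows f_s

# Frame operator S = sum_s f_s f_s^*; tightness means S = A * Id with frame bound A = 3/2.
S = sum(np.outer(f, f.conj()) for f in F)
A = S[0, 0].real
print(np.allclose(S, A * np.eye(2)))               # True: the frame is tight

# The POVM of eq. (eq-W-fr): W_s = (|f_s|^2 / A) P_s, with P_s the projector onto f_s.
W = [(np.linalg.norm(f)**2 / A) * np.outer(f, f.conj()) / np.linalg.norm(f)**2
     for f in F]
print(np.allclose(sum(W), np.eye(2)))              # True: sum_s W_s = identity
```

Each $W_s$ is positive semi-definite and the total is the identity, which is exactly the POVM property claimed after \eqref{eq-W-fr}.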
Suppose from now on that $G$ is a finite {\it group}, and we are given a non-trivial irreducible unitary representation $\rho$ of $G$ on a $d_\rho$-dimensional Hilbert space $V$. \footnote{All the representations considered below are assumed to be unitary.} One can show \cite{W} that the vectors $\{f_s:=\frac{1}{\sqrt{d_\rho}}\rho(s)\}_{s\in G}$ form a tight frame in the operator space $\mathcal{H}:=\text{End}(V)$ equipped with the Hermitian product $((C,D)) = {{\rm tr}} (CD^*)$ with the frame bound $A={| G |}/{d_\rho ^2}$. Write $n=d_\rho^2 =\dim \mathcal{H}$. By \eqref{eq-W-fr}, the corresponding POVM $W= \{W_s\}_{s \in G}$ is given by $W_s = n P_s \alpha_s$ with $\alpha_s = \frac{1}{| G |}$. Interestingly enough, the spectrum of the corresponding Berezin transform can be calculated via the characters of irreducible unitary representations of $G$. Denote by $\chi_\rho:G \to \field{C}$, $\chi_\rho(s) := {{\rm tr}} (\rho(s))$ the character of the representation $\rho$. Consider a basis in $L_2(G)$ consisting of the indicator functions of the elements of $G$. It readily follows from the definition that the Berezin transform ${\mathcal{B}}$ corresponding to the POVM $W$ is given by a matrix $${\mathcal{B}}_{ts} = n \,{{\rm tr}}(P_tP_s) \alpha_s = \frac{1}{| G |}u(st^{-1})\;,$$ where $u(s) := | \chi_\rho(s) | ^2$. The eigenvalues of this matrix and their multiplicities are given by the following proposition, see Chapter 3E of \cite{Diaconis}. \begin{prop} \label{lambda varphi} The eigenvalues of ${\mathcal{B}}$ are given by $$\lambda_\varphi := \frac{1}{d_\varphi| G|}\sum_{s\in G} u(s)\chi_\varphi(s)\;,$$ where $\varphi$ runs over irreducible representations of $G$, and the contribution of each $\varphi$ to the multiplicity of $\lambda_\varphi$ is $d^2_\varphi$. \end{prop} \medskip \noindent Let us emphasize that it could happen that $\lambda_\varphi=\lambda_\psi$ for different representations $\varphi$ and $\psi$. 
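Proposition \ref{lambda varphi} is easy to test numerically. The following illustrative sketch (with assumed conventions, not part of the text) takes the $2$-dimensional irreducible representation of $S_3$, whose character is $\chi_\rho(s)=\#\{\text{fixed points of }s\}-1$, builds the matrix ${\mathcal{B}}_{ts}=u(st^{-1})/|G|$, and recovers the spectrum: the eigenvalue $1$ appears twice (once for the trivial and once for the sign representation, an instance of the coincidence $\lambda_\varphi=\lambda_\psi$ just mentioned), and the eigenvalue $1/2$ appears with multiplicity $d_\rho^2=4$.

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))                    # the symmetric group S_3
def mul(s, t):
    return tuple(s[t[i]] for i in range(3))         # composition s o t
def inv(s):
    return tuple(sorted(range(3), key=lambda i: s[i]))  # inverse permutation

# Character of the 2-dim irrep of S_3: chi(s) = (#fixed points) - 1, and u = |chi|^2.
chi = {s: sum(s[i] == i for i in range(3)) - 1 for s in G}
u = {s: abs(chi[s])**2 for s in G}

# Berezin transform matrix B_{ts} = u(s t^{-1}) / |G|.
B = np.array([[u[mul(s, inv(t))] for s in G] for t in G]) / len(G)
ev = np.sort(np.linalg.eigvalsh(B))[::-1]
print(ev)   # eigenvalue 1 twice, eigenvalue 1/2 with multiplicity 4
```

The double eigenvalue $1$ means that the gap vanishes here, consistent with Theorem \ref{thmgap0} below: $\chi_\rho$ vanishes on transpositions, so ${\mathcal{V}}(\rho)=A_3\neq S_3$.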
Note also that by Lemma \ref{Useful properties}(i) below, $\lambda_\varphi = 1$ when $\varphi$ is the trivial one-dimensional representation. \begin{rem}{\rm We claim that the gap $\gamma(W)$ is rational. Indeed, for a unitary representation $\psi$ by complex unitary matrices, denote $\psi' (s) = \overline{\psi(s)}$, where the bar stands for complex conjugation. Note that $u$ is the character of the (in general, reducible) representation $\theta:= \rho \otimes \rho'$. By the Schur orthogonality relations, $$\frac{1}{| G|}\sum_{s\in G} u(s)\chi_\varphi(s)$$ equals the multiplicity of $\varphi$ in the decomposition of $\theta$ into irreducible representations, and hence is an integer. The claim follows from Proposition \ref{lambda varphi}. } \end{rem} The main result of this section is the following algebraic criterion for the positivity of the spectral gap of $W$. Following Chapter $12$ in \cite{Isaacs}, we define the {\it vanishing-off subgroup} ${\mathcal{V}}(\rho)$ to be the smallest subgroup of $G$ such that $\chi_\rho$ vanishes on $G \setminus {\mathcal{V}}(\rho)$: $${\mathcal{V}}(\rho) = \langle s\in G \mid \chi_\rho(s)\not=0\rangle\;.$$ Since the character $\chi_\rho$ is conjugation invariant, ${\mathcal{V}}(\rho)$ is normal.\\ \begin{thm}\label{thmgap0} The following are equivalent: \begin{itemize} \item[{(i)}] ${\mathcal{V}}(\rho)\not=G$; \item[{(ii)}] $\gamma(W) = 0$, i.e., there exists a non-trivial irreducible unitary representation $\varphi$ of $G$ with $\lambda_\varphi =1$. \end{itemize} \end{thm} \medskip In the next lemma, we collect some standard facts from representation theory (see, e.g., Chapter 2 of \cite{Diaconis}) which will be used in the proof of Theorem \ref{thmgap0}. We write $\text{Irrep}$ for the set of all unitary irreducible representations of $G$ up to an isomorphism. 
\begin{lemma}\label{Useful properties} \begin{itemize} \item[{(i)}] $\frac{1}{| G |}\sum_{s\in G} | \chi_\rho(s) |^2 = 1$ for all $\rho \in \textup{Irrep}$; \item[{(ii)}] $\sum_{\varphi\in \textup{Irrep}} d_\varphi^2 = | G |$. \end{itemize} \end{lemma} \medskip\noindent{\bf Proof of Theorem \ref{thmgap0}:} We begin by proving (ii) $\Rightarrow$ (i): Assume that there exists a non-trivial irreducible representation $\varphi$ with $\lambda_\varphi = 1$. Using Lemma \ref{Useful properties} and the explicit formula for the eigenvalues from Proposition \ref{lambda varphi}, we see that \begin{equation*} \begin{split} 1 & \underset{\ref{lambda varphi}}{=} \frac{1}{d_\varphi| G|} \sum_{s\in G} | \chi_\rho(s)|^2\chi_\varphi(s) \\ & = \frac{1}{| G|} \sum_{s\in G} | \chi_\rho(s)|^2 - \frac{1}{d_\varphi| G|} \sum_{s\in G} | \chi_\rho(s)|^2 (d_\varphi - \chi_\varphi(s)) \\ & \underset{\ref{Useful properties}}{=} 1 - \frac{1}{d_\varphi| G|} \sum_{s\in G} | \chi_\rho(s)|^2 (d_\varphi - \chi_\varphi(s))\;. \end{split} \end{equation*} By taking the real part of both sides we get $$\sum_{s\in G} | \chi_\rho(s) |^2\,\text{Re}(d_\varphi - \chi_\varphi(s)) = 0\;.$$ Note that since $\varphi(s)$ is unitary, all its eigenvalues are of the form $e^{i\theta}$, so $| \chi_\varphi (s) | = | {{\rm tr}}(\varphi(s)) |\leq d_\varphi$ and $\chi_\varphi(s)=d_\varphi$ if and only if $\varphi(s)$ is the identity. Hence $\text{Re}(d_\varphi - \chi_\varphi(s))$ is non-negative. Since $| \chi_\rho(s) |^2$ is also non-negative, $\chi_\varphi(s)=d_\varphi$ for every $s \in G$ with $\chi_\rho(s) \neq 0$. As we have seen above, $\chi_\varphi(s)=d_\varphi$ if and only if $\varphi(s)={1\hskip-2.5pt{\rm l}}$. It follows that the vanishing-off subgroup ${\mathcal{V}}(\rho)$ is contained in the normal subgroup $$\text{Ker}(\varphi) := \{s\mid\varphi(s)={1\hskip-2.5pt{\rm l}}\}\;.$$ Since $\varphi$ is irreducible and non-trivial, the latter subgroup is different from $G$, and hence ${\mathcal{V}}(\rho) \neq G$, as required. 
\medskip Next, we prove (i) $\Rightarrow$ (ii): Assume ${\mathcal{V}}(\rho)\not=G$. Consider the quotient $H:=G/{\mathcal{V}}(\rho)$, which is a non-trivial group, and let $\pi: G \to H$ be the natural projection. Take any non-trivial irreducible representation $\psi$ of $H$. Then $\varphi:= \psi \circ \pi$ is an irreducible representation of $G$. We claim that $\lambda_\varphi=1$. Indeed, for $s \in {\mathcal{V}}(\rho)$ we have $\varphi(s)={1\hskip-2.5pt{\rm l}}$ and hence $\chi_\varphi(s) = d_\varphi$, while $\chi_\rho(s) = 0$ for $s \notin {\mathcal{V}}(\rho)$. It follows that \begin{equation*} \begin{split} \lambda_{\varphi} = \sum_{s\in {\mathcal{V}}(\rho)} \frac{1}{d_{\varphi} | G |} | \chi_\rho(s) |^2 d_{\varphi} = \frac{1}{| G |} \sum_{s\in G} | \chi_\rho(s) |^2 \underset{\ref{Useful properties}}{=} 1\;. \end{split} \end{equation*} This proves the claim and hence completes the proof of the theorem. \qed \medskip \begin{cor}\label{cor-simplegr} If $G$ is a simple group, then the gap of $W$ is positive. \end{cor} \begin{proof} Indeed, otherwise by Theorem \ref{thmgap0} and the simplicity of $G$, ${\mathcal{V}}(\rho) = \{{1\hskip-2.5pt{\rm l}}\}$, which means that $\chi_\rho(s) =0$ for every $s \neq {1\hskip-2.5pt{\rm l}}$. Then the first statement of Lemma \ref{Useful properties} yields $|G| = d_\rho^2$, while the second statement guarantees that $|G| \geq 1+ d_\rho^2$, since $\rho$ is a non-trivial representation. We get a contradiction. \end{proof} \medskip Let us point out that there exist non-simple groups $G$ admitting an irreducible representation $\rho$ with ${\mathcal{V}}(\rho)=G$. Indeed, consider the irreducible representation $\rho:\field{Z}_m\to U(1),\ \rho(s)=e^{2\pi i s/m}$ of the abelian cyclic group $\field{Z}_m$. Observe that ${\mathcal{V}}(\rho) = \field{Z}_m$, while $\field{Z}_m$ is simple if and only if $m$ is prime. 
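The $\field{Z}_m$ example can also be checked directly (again an illustrative sketch with assumed conventions): since $\chi_\rho$ never vanishes, $u\equiv 1$ and ${\mathcal{B}}_{ts}=u(st^{-1})/m$ is the constant matrix with all entries $1/m$, so the eigenvalue $1$ is simple and the gap is positive, in accordance with Theorem \ref{thmgap0} and ${\mathcal{V}}(\rho)=\field{Z}_m=G$.

```python
import numpy as np

# Z_m with rho(s) = exp(2*pi*i*s/m): the character never vanishes, so u(s) = |chi(s)|^2 = 1.
m = 6
u = np.ones(m)
B = np.array([[u[(s - t) % m] for s in range(m)] for t in range(m)]) / m
ev = np.sort(np.linalg.eigvalsh(B))[::-1]
print(ev)   # the eigenvalue 1 is simple; all the remaining eigenvalues vanish
```

Here the gap equals $1$, its maximal possible value, reflecting the fact that ${\mathcal{B}}$ is the averaging operator over the whole group.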
\medskip Let us describe the diffusion distance $D_\tau$ (see \eqref{eq-diffusion}) corresponding to the POVM $W$ associated to a finite group $G$ and a non-trivial irreducible representation $\rho$. Recall \cite{Diaconis} that for an irreducible representation $\varphi: G \to \mathbb{U}(n)$, the orthonormal basis of eigenfunctions corresponding to the eigenvalue $\lambda_\varphi$ presented in Proposition \ref{lambda varphi} is given by the matrix coefficients of $\varphi$ multiplied by $\sqrt{d_\varphi}$. Assume that the gap of $W$ is strictly positive, and denote by $\beta_1 > \dots > \beta_k$ all {\it pair-wise distinct} eigenvalues of ${\mathcal{B}}$ lying in the open interval $(0,1)$. Denote $$R_j := \{\varphi \in \textup{Irrep}\mid \lambda_{\varphi} = \beta_j\}\;.$$ Then \eqref{eq-diffusion} yields the following expression for the diffusion distance: \begin{equation}\label{eq-diffusion-1} D_\tau(s,t) = \left ( \sum_{j=1}^{k} \beta_j^{2\tau} \sum_{\varphi \in R_j} d_\varphi \|\varphi(s)-\varphi(t)\|_2^2 \right )^{1/2} \;, \end{equation} where $\|\;\|_2$ stands for the Hilbert-Schmidt norm $\|C\|_2= ({{\rm tr}} (CC^*) )^{1/2}$. Note that this expression can be rewritten in terms of the character $\chi_\varphi$ since $$\|\varphi(s)-\varphi(t)\|_2^2 = 2(d_\varphi- \text{Re}\;\chi_\varphi(st^{-1}))\;.$$ Define normal subgroups $\Gamma_j := \bigcap_{\varphi \in R_j} \text{Ker} (\varphi)$, $j= 1,\dots, k$, and a normal series $K_0 \supset K_1 \supset \dots \supset K_{k+1}$ with $K_0 = G$, $K_{k+1} =\{1\}$ and $$K_m : = \bigcap_{j=1}^m \Gamma_j\;, \quad m=1,\dots, k\;.$$ It follows from \eqref{eq-diffusion-1} that for $\tau \to +\infty$ \begin{equation}\label{eq-diffusion-2} D_\tau(s,t) \sim \beta_{p+1}^\tau \;\;\text{for}\;\; st^{-1} \in K_p \setminus K_{p+1}\;. \end{equation} In fact we have a sequence of nested partitions $\Delta_p$ of $G$ formed by the cosets of $K_p$.
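The spectral data entering \eqref{eq-diffusion-1} can be computed directly from a character table. As an illustration (our own numerical check; the character table of $S_4$ is assumed as input), the following sketch finds all eigenvalues $\lambda_\varphi$ for $G = S_4$ and $\rho$ a $3$-dimensional irreducible representation.

```python
from fractions import Fraction as F

# Character table of S_4 (assumed input): classes
# [e, (12), (12)(34), (123), (1234)] with sizes 1, 6, 3, 8, 6.
sizes = [1, 6, 3, 8, 6]
chars = {'trivial':  [1,  1,  1,  1,  1],
         'sign':     [1, -1,  1,  1, -1],
         'dim2':     [2,  0,  2, -1,  0],
         'std':      [3,  1, -1,  0, -1],
         'std*sign': [3, -1, -1,  0,  1]}
order = sum(sizes)                  # |S_4| = 24
rho = chars['std']                  # a 3-dimensional irreducible representation

def eigenvalue(phi):
    # lambda_phi = (1/(d_phi |G|)) * sum_s |chi_rho(s)|^2 chi_phi(s)
    d = phi[0]
    return F(sum(c * r * r * x for c, r, x in zip(sizes, rho, phi)), d * order)

lams = {name: eigenvalue(phi) for name, phi in chars.items()}
print([str(v) for v in sorted(set(lams.values()), reverse=True)])
# ['1', '1/2', '1/3', '0']

# kernel of the 2-dim rep: the classes where chi_phi(s) = d_phi,
# here e and the class of (12)(34), of total size 1 + 3 = 4
ker_size = sum(c for c, x in zip(sizes, chars['dim2']) if x == chars['dim2'][0])
print(ker_size)   # 4
```

The distinct non-trivial eigenvalues are $\beta_1 = 1/2$, realized by the $2$-dimensional representation, whose kernel is a normal subgroup of order $4$, and $\beta_2 = 1/3$, realized by both $3$-dimensional representations; this is exactly the spectral data appearing in the example below.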
For every pair of distinct points $s,t \in G$ choose maximal $p$ so that $s$ and $t$ lie in the same element of $\Delta_p$. Then the asymptotic formula \eqref{eq-diffusion-2} holds, which manifests the multi-scale nature of the diffusion distance. Let us illustrate this in the case when $G=S_4$ is the symmetric group, and $\rho$ is a $3$-dimensional irreducible representation. The direct calculation with the character table of $S_4$ shows that the first non-trivial eigenvalue $1/2$ corresponds to the unique $2$-dimensional irreducible representation whose kernel coincides with the normal subgroup $K$ of order $4$ of $S_4$ called the Klein four-group. Thus $D_\tau(s,t) \sim (1/2)^\tau$ if $s,t$ belong to different cosets of $K$ in $S_4$, and one can calculate that $D_\tau(s,t) \sim (1/3)^\tau$ if $s,t$ are distinct and belong to the same coset. \begin{rem}\label{rem-rps}{\rm A modification of the construction presented in this section is related to Berezin-Toeplitz quantization. The modification goes in two directions. First, we deal with unitary representations $\rho$ of compact Lie groups $G$ instead of finite groups, and second, our POVMs are related to the $G$--orbits in a representation space $\mathcal{H}$ as opposed to the image of $\rho$ in the endomorphisms of $\mathcal{H}$. Let us very briefly illustrate this in the following simplest case. Consider the irreducible unitary representation $\rho_j$ of the group $G=SU(2)$ in an $n= 2j+1$-dimensional Hilbert space $\mathcal{H}$, $j \in \frac{1}{2}\field{N}$. Fix a maximal torus $K=S^1 \subset G$, and let $w \in \mathcal{H}$ be the maximal weight vector of $K$, that is $\rho_j(t)w= e^{4\pi i j t}w$ for all $t \in K$. Consider an $\mathscr{L}(\mathcal{H})$-valued POVM $W$ on $\Omega=G/K=\field{C} P^1$ of the form $dW ([g]) = nP_{[g]}d\alpha([g])$, where $[g]$ stands for the class of $g \in G$ in $\Omega$, $\alpha$ is the $G$-invariant measure on $\Omega$ and $P_{[g]}$ is the rank one projector onto $gw$.
Note that $W$ is nothing else but the Berezin-Toeplitz POVM $W_{p}$ from Example \ref{exam-CP1} with $p=2j$. We refer to \cite[Chapter 7]{CR} for the representation theoretic approach to coherent states and quantization. By using the theory of Gelfand pairs (cf. \cite[Chapter 3.F]{Diaconis}) one can check that the eigenvalues of the Berezin transform are of the form $\lambda_\varphi = (u,\chi_\varphi)_{L_2}$, where $\varphi$ runs over all irreducible unitary representations of $G$, $\chi_\varphi$ stands for the character of $\varphi$ and $u(g)=n|\langle \rho(g) w, w\rangle|^2$. The multiplicity of $\lambda_\varphi$ equals $d_\varphi^2$, where $d_\varphi$ is the dimension of $\varphi$. In order to calculate $\lambda_\varphi$, recall that \begin{equation}\label{eq-square} \rho_j \otimes \rho_j = \bigoplus_{k=0}^{2j} \rho_k\;. \end{equation} Writing $v$ for the vector of weight $-j$ of $\rho_j$, we have $$u(g)=n \langle (\rho_j \otimes \rho_j) (g)\xi,\xi\rangle,\;\,\text{where}\;\; \xi = w\otimes v\;.$$ In order to complete this calculation, one has to decompose $\xi$ with respect to the decomposition \eqref{eq-square}. This can be done with the help of explicit expressions for the Clebsch-Gordan coefficients, and it eventually yields the eigenvalues of the Berezin transform, including $\gamma_1 = j/(j+1)$ (cf. Example \ref{exam-CP1}), in agreement with calculations by Zhang \cite{gap-representations} and Donaldson \cite[p.613]{D}. The details will appear in the MSc thesis of D. Shmoish.} \end{rem} \section{Two concepts of quantum noise}\label{sec-noise} In the present section we provide two different (and essentially tautological) interpretations of the spectral gap in the context of quantum noise. In quantum measurement theory, there are two concepts of quantum noise: the increment of variance for unbiased approximate measurements as formalized by the noise operator, see below, and a non-unitary evolution of a quantum system described by a quantum channel (a.k.a. a quantum operation, see, e.g.
\cite[Chapter 8]{NC}). Such a non-unitary evolution can be caused, for instance, by the quantum state reduction in the process of repeated quantum measurements. Interestingly enough, for pure POVMs, the spectral gap $\gamma(W)$ brings together these two seemingly remote concepts: it measures the minimal magnitude of noise production in the context of the noise operator, and it equals the spectral gap of the Markov chain modeling repeated quantum measurements. Given an observable $A \in \mathscr{L}(\mathcal{H})$, write $A= \sum \lambda_iP_i$ for its spectral decomposition, where $P_i$'s are pair-wise distinct orthogonal projectors. According to the statistical postulate of quantum mechanics, in a state $\rho$ the observable $A$ attains value $\lambda_i$ with probability $((P_i,\rho))$. It follows that the expectation of $A$ in $\rho$ equals $\mathbb{E}(A,\rho)= ((A,\rho))$ and the variance is given by $\mathbb{V}ar(A,\rho) = ((A^2,\rho)) - \mathbb{E}(A,\rho)^2$. In quantum measurement theory \cite{Busch-2}, a POVM $W$ represents a measuring device coupled with the system, while $\Omega$ is interpreted as the space of device readings. When the system is in a state $\rho \in {\mathcal{S}}(\mathcal{H})$, the probability of finding the device in a subset $X \in \mathscr{C}$ equals $\mu_{\rho}(X):= ((W(X),\rho))$. An experimentalist performs a measurement whose outcome, at every state $\rho$, is distributed in $\Omega$ according to the measure $\mu_{\rho}$. Given a function $\phi \in L_2(\Omega,\alpha)$ (experimentalist's choice), this procedure yields {\it an unbiased approximate measurement} of the quantum observable $A:=T(\phi)$. 
The expectation of $A$ in every state $\rho$ equals $((A,\rho))$ and thus coincides with the expectation $\int_\Omega \phi\, d\mu_\rho$ of the measurement procedure (hence {\it unbiased}), in spite of the fact that the actual probability distributions determined by the observable $A$ (see above) and by the random variable $(\phi,\mu_{\rho})$ could be quite different (hence {\it approximate}). In particular, in general, the variance increases under an unbiased approximate measurement: \begin{equation}\label{eq-noise} \mathbb{V}ar(\phi,\mu_{\rho}) = \mathbb{V}ar(A,\rho) + ((\Delta_W(\phi),\rho))\;, \end{equation} where $ \Delta_W(\phi):= T(\phi^2)-T(\phi)^2 $ is {\it the noise operator}. This operator, which is known to be positive, measures the increment of the variance. We wish to explore the relative magnitude of this increment for the ``maximally mixed'' state $\theta_0= \frac{1}{n}{1\hskip-2.5pt{\rm l}}$. To this end introduce {\it the minimal noise} of the POVM $W$ as $${\mathcal{N}}_{min}(W):= \inf_{\phi} \frac{((\Delta_W(\phi),\theta_0))}{\mathbb{V}ar(\phi,\mu_{\theta_0}) }\;,$$ where the infimum is taken over all non-constant functions $\phi \in L_2(\Omega,\alpha)$. It turns out that the minimal noise coincides with the spectral gap: \begin{equation}\label{eq-Nmingamma} {\mathcal{N}}_{min}(W) = \gamma(W)\;. \end{equation} Indeed, since ${{\rm tr}}(T(\phi^2))= n(\phi,\phi)$, we readily get that $$((\Delta_W(\phi),\theta_0))= (({1\hskip-2.5pt{\rm l}}- {\mathcal{B}})\phi,\phi)\;,$$ where ${\mathcal{B}}= n^{-1}T^*T$ is the Markov operator given by \eqref{eq-b}, while $$\mathbb{V}ar(\phi,\mu_{\theta_0})= (\phi,\phi) - (\phi,1)^2\;.$$ Formula \eqref{eq-Nmingamma} follows from the variational principle. \medskip Suppose now that $\Omega \subset {\mathcal{S}}(\mathcal{H})$ is a finite set consisting of rank one projectors $\{P_1,\dots, P_N\}$ and that $W$ is a pure POVM of the form $W(P_i) := n\alpha_i P_i$, where $\alpha$ is a probability measure on $\Omega$.
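For a concrete pure POVM of this kind, the identity \eqref{eq-Nmingamma} can be verified numerically. The sketch below is our own illustration: the qubit ($n=2$) ``tetrahedron'' POVM built from four Bloch-vector projectors with uniform $\alpha_i = 1/4$ is an assumed example, not taken from the text.

```python
import numpy as np

# Assumed example: the qubit (n = 2) "tetrahedron" POVM, W(P_i) = n*alpha_i*P_i
# with four Bloch-vector rank-one projectors P_i and uniform alpha_i = 1/4.
I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
us = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
P = [(I2 + sum(u[k] * sig[k] for k in range(3))) / 2 for u in us]
n, N, alpha = 2, 4, 0.25

def T(phi):                       # T(phi) = sum_i phi_i W(P_i)
    return sum(phi[i] * n * alpha * P[i] for i in range(N))

theta0 = I2 / n                   # maximally mixed state

def noise_ratio(phi):             # ((Delta_W(phi), theta0)) / Var(phi, mu_theta0)
    Delta = T(phi ** 2) - T(phi) @ T(phi)
    num = np.trace(Delta @ theta0).real
    mu = np.array([np.trace(n * alpha * P[i] @ theta0).real for i in range(N)])
    var = (mu * phi ** 2).sum() - (mu * phi).sum() ** 2
    return num / var

# Markov operator B_ij = n * alpha_j * tr(P_i P_j); its spectral gap
B = np.array([[n * alpha * np.trace(P[i] @ P[j]).real for j in range(N)]
              for i in range(N)])
gap = 1 - np.sort(np.linalg.eigvalsh(B))[-2]

rng = np.random.default_rng(0)
ratios = [noise_ratio(rng.standard_normal(N)) for _ in range(100)]
print(round(gap, 9), round(min(ratios), 9))   # both equal 2/3 here
```

For this highly symmetric POVM the Markov operator acts as $\frac{1}{3}$ times the identity on mean-zero functions, so every non-constant $\phi$ attains the minimal ratio and ${\mathcal{N}}_{min}(W) = \gamma(W) = 2/3$; this is consistent with the eigenvalue $\gamma_1 = j/(j+1) = 1/3$ at $j = 1/2$ mentioned in Remark \ref{rem-rps}.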
Given a system in the original state $\rho$, the result of the measurement equals $P_j$ with probability $p= n\alpha_j((P_j,\rho))$. Recall the quantum state reduction (a.k.a. the wave function collapse) axiom for so called {\it L\"{u}ders} repeated quantum measurements: if the result of the measurement equals $P_j$, the system moves from the original state $\rho$ to the new (reduced) state $$\rho' = \frac{1}{p} W(P_j)^{1/2}\rho W(P_j)^{1/2}= P_j\;.$$ It follows that if the original state $\rho$ is chosen from $\Omega$, the repeated quantum measurements are described by the Markov chain with transition probabilities $n\alpha_j((P_i,P_j))$. The corresponding Markov operator equals ${\mathcal{B}}$, and {\it the spectral gap of the Markov chain coincides with the spectral gap $\gamma(W)$} of the POVM $W$. Furthermore, given an original state $\rho \in \Omega$, the expected value of the reduced state equals ${\mathcal{E}}(\rho)$. It follows that if $\gamma(W)>0$, then ${\mathcal{E}}^k(\rho)$ converges, as $k \to \infty$, to the maximally mixed quantum state $\frac{1}{n}{1\hskip-2.5pt{\rm l}}$ at the exponential rate $\sim (1-\gamma(W))^k$. In other words, {\it for pure POVMs the spectral gap controls the convergence rate to the maximally mixed state under repeated quantum measurements}. \medskip \noindent {\bf Acknowledgment.} We are grateful to D.~Aharonov, S.~Artstein-Avidan, J.-M. Bismut, L.~Charles, G.~Kalai, G.~Kindler, M.~Krivelevich, S.~Nonnenmacher, Y.~Ostrover, A.~Reznikov, A.~Veselov and S.~Weinberger for useful discussions on various aspects of this work. We thank O.~Oreshkov for drawing our attention to the paper \cite{OC} and for illuminating comments, as well as I.~Polterovich for very helpful remarks. We thank A.~Deleporte for sharing with us his ideas on the proof of Theorem \ref{thm-quant}, and J.~Fine for providing us with highly useful insights and references on Donaldson's program.
\section{Introduction} \subsection{Euler's Equation for Rigid Body Dynamics} The configuration space for the rotational dynamics of a rigid body in $\mathbb{R}^{3}$ about its center of mass is the set of all three-dimensional rotations \begin{equation*} \mathsf{SO}(3) \mathrel{\mathop:}= \setdef{ Q \in \mathsf{M}_{3}(\mathbb{R}) }{ Q^{T}Q = I,\, \det Q = 1}, \end{equation*} where $\mathsf{M}_{n}(\mathbb{R})$ is the vector space of real $n \times n$ matrices. Hence one can describe the dynamics as a Hamiltonian system on the cotangent bundle (or the phase space) $T^{*}\mathsf{SO}(3)$. However, the presence of the $\mathsf{SO}(3)$-symmetry helps us reduce the system to the dual $\mathfrak{so}(3)^{*} \cong (\mathbb{R}^{3})^{*}$ of the Lie algebra $\mathfrak{so}(3) \cong \mathbb{R}^{3}$ of $\mathsf{SO}(3)$; one may think of $\mathfrak{so}(3)$ and $\mathfrak{so}(3)^{*}$ as the space of all possible values of the body angular velocity and momentum, respectively, seen in the frame attached to the body. This yields Euler's equation for the body angular momentum $\Pi$ in $\mathfrak{so}(3)^{*}$ (identified with $\mathbb{R}^{3}$): \begin{equation*} \dot{\Pi} = \Pi \times \mathbb{I}^{-1}\Pi, \end{equation*} where $\mathbb{I} \mathrel{\mathop:}= \operatorname{diag}(I_{1}, I_{2}, I_{3})$ is the inertia tensor with respect to the principal axes of the body, and $\Omega \mathrel{\mathop:}= \mathbb{I}^{-1}\Pi$ is the body angular velocity in $\mathfrak{so}(3)$ (again identified with $\mathbb{R}^{3}$). 
\subsection{Generalized Rigid Body Equations} \label{ssec:Generalized_Rigid_Body} \citet{Ma1976} and \citet{Ra1980} generalized the three-dimensional rigid body equations to $n$ dimensions as follows: Let \begin{equation*} \mathsf{SO}(n) \mathrel{\mathop:}= \setdef{ Q \in \mathsf{M}_{n}(\mathbb{R}) }{ Q^{T}Q = I,\, \det Q = 1} \end{equation*} and $\mathfrak{so}(n)$ be its Lie algebra, and equip $\mathfrak{so}(n)$ with the inner product $\ip{\,\cdot\,}{\,\cdot\,}\colon \mathfrak{so}(n) \times \mathfrak{so}(n) \to \mathbb{R}$ defined as \begin{equation} \label{eq:ip-so(n)} \ip{A}{B} \mathrel{\mathop:}= \frac{1}{2}\mathop{\mathrm{tr}}\nolimits(A^{T}B). \end{equation} So we may identify the dual $\mathfrak{so}(n)^{*}$ with $\mathfrak{so}(n)$. Under this identification, we define the inertia operator $\mathcal{I}\colon \mathfrak{so}(n) \to \mathfrak{so}(n)^{*} \cong \mathfrak{so}(n)$ of the generalized $n$-dimensional rigid body as \begin{equation} \label{eq:mathcalI} \mathcal{I}(\Omega) \mathrel{\mathop:}= \Lambda\Omega + \Omega\Lambda, \end{equation} where $\Lambda \mathrel{\mathop:}= \operatorname{diag}(\lambda_{1}, \dots, \lambda_{n})$ with $\lambda_{i} + \lambda_{j} > 0$ for all $i \neq j$. This is a generalization of the inertia tensor $\mathbb{I}$ defined above. In fact, setting $I_{1} = \lambda_{2} + \lambda_{3}$, $I_{2} = \lambda_{3} + \lambda_{1}$, and $I_{3} = \lambda_{1} + \lambda_{2}$ for $n = 3$ yields \begin{equation*} \Pi = \mathcal{I}(\Omega) = \begin{bmatrix} 0 & -I_{3}\Omega_{3} & I_{2}\Omega_{2} \\ I_{3}\Omega_{3} & 0 & -I_{1}\Omega_{1} \\ -I_{2}\Omega_{2} & I_{1}\Omega_{1} & 0 \end{bmatrix} \leftrightarrow \begin{bmatrix} I_{1}\Omega_{1} \\ I_{2}\Omega_{2} \\ I_{3}\Omega_{3} \end{bmatrix}, \end{equation*} which is the body angular momentum in $\mathfrak{so}(3)^{*} \cong \mathbb{R}^{3}$. 
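The identification for $n = 3$ is easy to verify in coordinates. In the sketch below (our own check; the numerical values of $\Lambda$ and $\Omega$ are arbitrary), the hat map sends $(\Omega_1,\Omega_2,\Omega_3)$ to the skew-symmetric matrix above, and $\mathcal{I}(\Omega) = \Lambda\Omega + \Omega\Lambda$ is compared with the hat of $(I_1\Omega_1, I_2\Omega_2, I_3\Omega_3)$.

```python
import numpy as np

def hat(v):  # the usual identification R^3 -> so(3)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

lam = np.array([0.7, 1.2, 2.5])          # arbitrary sample Lambda = diag(lam)
Lam = np.diag(lam)
omega = np.array([0.3, -1.1, 0.9])       # arbitrary body angular velocity
Om = hat(omega)

Pi = Lam @ Om + Om @ Lam                 # the inertia operator mathcal{I}(Omega)

# I_1 = lam_2 + lam_3, I_2 = lam_3 + lam_1, I_3 = lam_1 + lam_2
I_principal = np.array([lam[1] + lam[2], lam[2] + lam[0], lam[0] + lam[1]])
print(np.allclose(Pi, hat(I_principal * omega)))   # True
```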
Let $Q \in \mathsf{SO}(n)$ be the rotational configuration of the generalized rigid body, $\Omega \mathrel{\mathop:}= Q^{-1}\dot{Q} \in \mathfrak{so}(n)$ be the body angular velocity, and $\Pi \mathrel{\mathop:}= \mathcal{I}(\Omega) \in \mathfrak{so}(n)^{*} \cong \mathfrak{so}(n)$ be the body angular momentum. Then the generalized rigid body or Euler--Poisson equations on $\mathsf{SO}(n) \times \mathfrak{so}(n)^{*}$ are given by \begin{subequations} \label{eq:RB} \begin{align} \dot{Q} &= Q \Omega, \\ \dot{\Pi} &= [\Pi, \Omega], \label{eq:Euler} \end{align} \end{subequations} where $[\,\cdot\,,\,\cdot\,]\colon \mathfrak{so}(n) \times \mathfrak{so}(n) \to \mathfrak{so}(n)$ is the standard commutator. The $n$-dimensional Euler equation~\eqref{eq:Euler} is the Hamiltonian system on a coadjoint orbit in $\mathfrak{so}(n)^{*}$ with the Hamiltonian (see \citet[Theorem~3.1]{Ra1980}): \begin{equation} \label{eq:h} h(\Pi) \mathrel{\mathop:}= \frac{1}{2}\ip{ \Pi }{ \mathcal{I}^{-1}(\Pi) }. \end{equation} \subsection{Symmetric Representation} The so-called \textit{symmetric representation} of the generalized rigid body equations was originally discovered by \citet{BlCr1996} as a necessary condition of optimality for the following optimal control problem: Let $T > 0$ be fixed, and consider \begin{equation*} \min_{U} \int_{0}^{T} \frac{1}{2}\ip{\mathcal{I}(U)}{U}\,dt \quad \text{subject to}\quad \dot{Q} = Q U \text{ and } Q(0) \in \mathsf{SO}(n), \end{equation*} where $\mathcal{I}$ is the inertia operator~\eqref{eq:mathcalI} and $U \colon [0,T] \to \mathfrak{so}(n)$. Formally, the configuration space is $\mathsf{M}_{n}(\mathbb{R})$, and thus $Q \colon [0,T] \to \mathsf{M}_{n}(\mathbb{R})$. However, we impose the initial condition $Q(0) \in \mathsf{SO}(n)$ so that $Q(t) \in \mathsf{SO}(n)$ for any $t \in [0,T]$ because of the constraint $Q(t)^{-1}\dot{Q}(t) = U(t) \in \mathfrak{so}(n)$. 
This is a special case of the so-called \textit{embedded optimal control problem} considered in \citet{BlCrNoSa2011}. The control Hamiltonian $H_{\rm c}\colon T^{*}\mathsf{M}_{n}(\mathbb{R}) \times \mathfrak{so}(n) \to \mathbb{R}$ is then defined as \begin{equation*} \begin{split} H_{\rm c}(Q,P,U) &\mathrel{\mathop:}= P \cdot (Q U) - \frac{1}{2}\ip{\mathcal{I}(U)}{U} \\ &= \mathop{\mathrm{tr}}\nolimits\!\parentheses{P^{T} Q U} - \frac{1}{2}\ip{\mathcal{I}(U)}{U}, \end{split} \end{equation*} where the ``$\,\cdot\,$'' above stands for the natural dual pairing \begin{equation*} T^{*}_{Q}\mathsf{M}_{n}(\mathbb{R}) \times T_{Q}\mathsf{M}_{n}(\mathbb{R}) \to \mathbb{R}; \qquad (P, \dot{Q}) \mapsto \mathop{\mathrm{tr}}\nolimits\parentheses{ P^{T}\dot{Q} } =\mathrel{\mathop:} P \cdot \dot{Q}. \end{equation*} By the Pontryagin Maximum Principle~\cite{PoBoGaMi1962}, the optimal control $U^{\star}$ necessarily maximizes the Hamiltonian, i.e., for any $\delta U \in \mathfrak{so}(n)$, \begin{equation*} \left.\od{}{s} H_{\rm c}(Q,P,U^{\star} + s\,\delta U) \right|_{s=0} = 0. \end{equation*} However, \begin{align*} \left.\od{}{s} H_{\rm c}(Q,P,U^{\star} + s\,\delta U) \right|_{s=0} &= \mathop{\mathrm{tr}}\nolimits\!\parentheses{P^{T} Q \delta U} - \ip{\mathcal{I}(U^{\star})}{\delta U} \\ &= \frac{1}{2}\mathop{\mathrm{tr}}\nolimits\!\parentheses{ \parentheses{ Q^{T}P - P^{T}Q }^{T} \delta U } - \ip{\mathcal{I}(U^{\star})}{\delta U} \\ &= \ip{ Q^{T}P - P^{T}Q - \mathcal{I}(U^{\star}) }{ \delta U }. \end{align*} Note that $Q^{T}P - P^{T}Q - \mathcal{I}(U^{\star}) \in \mathfrak{so}(n)$. Since $\delta U \in \mathfrak{so}(n)$ is arbitrary, we have $Q^{T}P - P^{T}Q = \mathcal{I}(U^{\star})$, and thus obtain the optimal control $U^{\star}$ as follows: \begin{equation*} U^{\star}(Q,P) = \mathcal{I}^{-1}\parentheses{ Q^{T}P - P^{T}Q }.
\end{equation*} In what follows, we write $\Omega \mathrel{\mathop:}= U^{\star}(Q,P)$ for short; this quantity in fact coincides with the angular velocity $\Omega$ defined in Section~\ref{ssec:Generalized_Rigid_Body} as we shall see below. As a result, the (optimal) Hamiltonian $H\colon T^{*}\mathsf{M}_{n}(\mathbb{R}) \to \mathbb{R}$ is given by \begin{equation} \label{eq:H-QP} H(Q,P) \mathrel{\mathop:}= H_{\rm c}(Q,P,U^{\star}(Q,P)) = \frac{1}{2}\ip{ Q^{T}P - P^{T}Q }{ \mathcal{I}^{-1}\parentheses{ Q^{T}P - P^{T}Q } }, \end{equation} whereas the standard symplectic form $\omega$ on $T^{*}\mathsf{M}_{n}(\mathbb{R})$ is written as \begin{equation} \label{eq:Omega-QP} \omega\parentheses{ (\dot{Q}_{1},\dot{P}_{1}), (\dot{Q}_{2},\dot{P}_{2}) } = \mathop{\mathrm{tr}}\nolimits\!\parentheses{ \dot{Q}_{1}^{T} \dot{P}_{2} - \dot{P}_{1}^{T} \dot{Q}_{2} }. \end{equation} The optimal solution is necessarily an integral curve of the Hamiltonian vector field $X_{H}$ on $T^{*}\mathsf{M}_{n}(\mathbb{R})$ defined by Hamilton's equation $\ins{X_{H}}\omega = {\bf d}{H}$, or in coordinates, \begin{equation} \label{eq:SymRB} \dot{Q} = Q \Omega, \qquad \dot{P} = P \Omega. \end{equation} These equations are called the \textit{symmetric representation} of the generalized rigid body equations~\eqref{eq:RB}. In fact, if we set \begin{equation*} \Pi(t) \mathrel{\mathop:}= \mathcal{I}(\Omega(t)) = Q(t)^{T} P(t) - P(t)^{T} Q(t) \end{equation*} for $t \in [0,T]$, then $\Pi\colon [0,T] \to \mathfrak{so}(n)^{*} \cong \mathfrak{so}(n)$ satisfies the $n$-dimensional Euler equation~\eqref{eq:Euler}; see also \citet{BlCrMaSa2008,BlCrNoSa2011} for its generalization to other matrix Lie groups. Although it is a straightforward calculation to show that the generalized rigid body equations~\eqref{eq:RB} follow from the symmetric representation~\eqref{eq:SymRB}, the question remains as to how these two Hamiltonian systems are related to each other from the symplectic-geometric point of view. 
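The claim that the Euler equation~\eqref{eq:Euler} follows from the symmetric representation~\eqref{eq:SymRB} can also be confirmed numerically. The sketch below is our own check for $n = 3$ (the values of $\Lambda$, the initial data, and the classical Runge--Kutta integrator are arbitrary choices); it uses the fact that for $\mathcal{I}(\Omega) = \Lambda\Omega + \Omega\Lambda$ the inverse acts entrywise, $\mathcal{I}^{-1}(\Pi)_{ij} = \Pi_{ij}/(\lambda_i + \lambda_j)$.

```python
import numpy as np

lam = np.array([1.0, 2.0, 3.0])          # arbitrary Lambda; I(Om) = Lam Om + Om Lam
D = lam[:, None] + lam[None, :]          # entrywise: I(Om)_ij = (lam_i+lam_j) Om_ij

def rhs_sym(Z):                          # symmetric representation, Z = [Q; P]
    Q, P = Z[:3], Z[3:]
    Om = (Q.T @ P - P.T @ Q) / D         # Om = I^{-1}(Q^T P - P^T Q)
    return np.vstack([Q @ Om, P @ Om])

def rhs_euler(Pi):                       # Euler equation: dPi/dt = [Pi, Om]
    Om = Pi / D
    return Pi @ Om - Om @ Pi

def rk4(f, x, h):                        # one classical Runge--Kutta step
    k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

rng = np.random.default_rng(1)
Z = np.vstack([np.eye(3), rng.standard_normal((3, 3))])   # Q(0) = I, P(0) random
Pi = Z[:3].T @ Z[3:] - Z[3:].T @ Z[:3]                    # matching Pi(0)

h = 1e-3
for _ in range(2000):                    # integrate both flows in parallel
    Z = rk4(rhs_sym, Z, h)
    Pi = rk4(rhs_euler, Pi, h)

Pi_from_Z = Z[:3].T @ Z[3:] - Z[3:].T @ Z[:3]
print(np.max(np.abs(Pi_from_Z - Pi)) < 1e-8)   # True: the two flows agree
```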
\citet{BlCrMaRa2002} gave one such relationship: Specifically, they constructed a symplectic submanifold $S \subset \mathsf{SO}(n) \times \mathsf{SO}(n)$ of $T^{*}\mathsf{M}_{n}(\mathbb{R}) \cong \mathsf{M}_{n}(\mathbb{R}) \times \mathsf{M}_{n}(\mathbb{R})$ and a symplectic submanifold $S_{M}$ of $T^{*}\mathsf{SO}(n) \cong \mathsf{SO}(n) \times \mathfrak{so}(n)^{*}$, as well as a diffeomorphism between $S$ and $S_{M}$, in a rather elaborate manner, to establish an equivalence between the symmetric representation~\eqref{eq:SymRB} and the Euler--Poisson equations~\eqref{eq:RB}. \subsection{Main Result} We present an alternative connection between the symmetric representation~\eqref{eq:SymRB} and the Euler equation~\eqref{eq:Euler} by showing that the two are related via symplectic reduction. Although this is not an equivalence, this connection exploits an inherent symmetry of the symmetric representation that seems to have been overlooked, and provides a geometrically natural and appealing alternative to the result of \citet{BlCrMaRa2002}. More specifically, we show the following: \begin{theorem*} The symmetric representation~\eqref{eq:SymRB} of the $n$-dimensional generalized rigid body equation possesses an $\mathsf{Sp}(2n,\mathbb{R})$-symmetry, and the Marsden--Weinstein reduction~\cite{MaWe1974} of \eqref{eq:SymRB} restricted to an open subset of $T^{*}\mathsf{M}_{n}(\mathbb{R})$ by the symmetry at a certain level set of the associated momentum map yields the $n$-dimensional Euler equation~\eqref{eq:Euler} in a coadjoint orbit in $\mathfrak{so}(n)^{*}$.
\end{theorem*} \section{Symmetry and Conservation Law in the Symmetric Representation} \subsection{$\mathsf{Sp}(2n,\mathbb{R})$-Symmetry of the Symmetric Representation} Let us first identify the cotangent bundle $T^{*}\mathsf{M}_{n}(\mathbb{R})$ with the vector space of real $2n \times n$ matrices: \begin{equation*} T^{*}\mathsf{M}_{n}(\mathbb{R}) \cong \mathsf{M}_{2n\times n}(\mathbb{R}) = \setdef{ Z = \begin{bmatrix} Q \\ P \end{bmatrix} }{ Q, P \in \mathsf{M}_{n}(\mathbb{R}) }. \end{equation*} Then we may write the symplectic form~\eqref{eq:Omega-QP} in a more succinct form: For any $Z \in \mathsf{M}_{2n\times n}(\mathbb{R})$ and any $X, Y \in T_{Z}\mathsf{M}_{2n\times n}(\mathbb{R})$, \begin{equation} \label{eq:Omega} \omega(X, Y) = \mathop{\mathrm{tr}}\nolimits\!\parentheses{ X^{T} \mathbb{J} Y } \quad\text{with}\quad \mathbb{J} \mathrel{\mathop:}= \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix} \end{equation} where $I$ is the $n \times n$ identity matrix. Notice also that we may rewrite the Hamiltonian $H\colon \mathsf{M}_{2n\times n}(\mathbb{R}) \to \mathbb{R}$ defined in \eqref{eq:H-QP} in the following more concise form: \begin{equation} \label{eq:H} H(Z) = \frac{1}{2}\ip{ Z^{T}\mathbb{J}Z }{ \mathcal{I}^{-1}(Z^{T}\mathbb{J}Z) }. \end{equation} Let $\mathsf{Sp}(2n,\mathbb{R})$ be the symplectic group \begin{equation*} \mathsf{Sp}(2n,\mathbb{R}) \mathrel{\mathop:}= \setdef{ S \in \mathsf{M}_{2n}(\mathbb{R}) }{ S^{T} \mathbb{J} S = \mathbb{J}}, \end{equation*} and consider the $\mathsf{Sp}(2n,\mathbb{R})$-action on $\mathsf{M}_{2n\times n}(\mathbb{R})$ by left multiplication, i.e., \begin{equation} \label{eq:Phi} \Phi\colon \mathsf{Sp}(2n,\mathbb{R}) \times \mathsf{M}_{2n\times n}(\mathbb{R}) \to \mathsf{M}_{2n\times n}(\mathbb{R}); \qquad (S,Z) \mapsto S Z =\mathrel{\mathop:} \Phi_{S}(Z). 
\end{equation} Then one easily sees that this action is symplectic as well as that the Hamiltonian is invariant under the action, i.e., $\Phi_{S}^{*}\omega = \omega$ and $H \circ \Phi_{S} = H$ for any $S \in \mathsf{Sp}(2n,\mathbb{R})$. \subsection{$\mathsf{Sp}(2n,\mathbb{R})$-Momentum Map} Let us find the momentum map associated with the above action $\Phi$. First notice that the symplectic form~\eqref{eq:Omega} is written as $\omega = -{\bf d}\theta$ with the one-form $\theta$ on $\mathsf{M}_{2n\times n}(\mathbb{R})$ defined as follows: For any $Z = \begin{tbmatrix} Q \smallskip\\ P \end{tbmatrix} \in \mathsf{M}_{2n\times n}(\mathbb{R})$ and any $\dot{Z} = \begin{tbmatrix} \dot{Q} \smallskip\\ \dot{P} \end{tbmatrix}\in T_{Z}\mathsf{M}_{2n\times n}(\mathbb{R})$, \begin{equation*} \theta(Z)\cdot\dot{Z} \mathrel{\mathop:}= -\frac{1}{2}\mathop{\mathrm{tr}}\nolimits\!\parentheses{ Z^{T}\mathbb{J}\dot{Z} } = \frac{1}{2}\parentheses{ P^{T}\dot{Q} - Q^{T}\dot{P} }. \end{equation*} It is clear that $\theta$ is invariant under the $\mathsf{Sp}(2n,\mathbb{R})$-action, i.e., $\Phi_{S}^{*}\theta = \theta$ for any $S \in \mathsf{Sp}(2n,\mathbb{R})$. Let $\mathfrak{sp}(2n,\mathbb{R})$ be the Lie algebra of $\mathsf{Sp}(2n,\mathbb{R})$, i.e., \begin{equation*} \mathfrak{sp}(2n,\mathbb{R}) = \setdef{ \xi \in \mathsf{M}_{2n}(\mathbb{R}) }{ \xi^{T}\mathbb{J} + \mathbb{J}\xi = 0 }. \end{equation*} We equip $\mathfrak{sp}(2n,\mathbb{R})$ with the inner product $\ip{\,\cdot\,}{\,\cdot\,}\colon \mathfrak{sp}(2n,\mathbb{R}) \times \mathfrak{sp}(2n,\mathbb{R}) \to \mathbb{R}$ defined as \begin{equation*} \ip{\xi}{\eta} \mathrel{\mathop:}= \frac{1}{2}\mathop{\mathrm{tr}}\nolimits(\xi^{T} \eta) \end{equation*} just as in \eqref{eq:ip-so(n)}, and thus we may identify the dual $\mathfrak{sp}(2n,\mathbb{R})^{*}$ with $\mathfrak{sp}(2n,\mathbb{R})$. 
The infinitesimal generator of the above action $\Phi$ is, for any $\xi \in \mathfrak{sp}(2n,\mathbb{R})$, \begin{equation*} \xi_{\mathsf{M}_{2n\times n}(\mathbb{R})}(Z) \mathrel{\mathop:}= \left.\od{}{s} \Phi_{\exp(s\xi)}(Z) \right|_{s=0} = \xi Z. \end{equation*} Since $T^{*}\mathsf{M}_{n}(\mathbb{R}) \cong \mathsf{M}_{2n\times n}(\mathbb{R})$ is an exact symplectic manifold with $\omega = -{\bf d}\theta$ and $\Phi$ leaves $\theta$ invariant, the associated momentum map $\mathbf{J}\colon \mathsf{M}_{2n\times n}(\mathbb{R}) \to \mathfrak{sp}(2n,\mathbb{R})^{*} \cong \mathfrak{sp}(2n,\mathbb{R})$ satisfies the following (see, e.g., \citet[Theorem~4.2.10 on p.~282]{AbMa1978}): For any $\xi \in \mathfrak{sp}(2n,\mathbb{R})$, \begin{align*} \ip{ \mathbf{J}(Z) }{ \xi } &= \theta(Z) \cdot \xi_{\mathsf{M}_{2n\times n}(\mathbb{R})}(Z) \\ &= -\frac{1}{2}\mathop{\mathrm{tr}}\nolimits\!\parentheses{ Z^{T}\mathbb{J} \xi Z } \\ &= \frac{1}{2}\mathop{\mathrm{tr}}\nolimits\!\parentheses{ (\mathbb{J} Z Z^{T})^{T} \xi } \\ &= \ip{ \mathbb{J} Z Z^{T} }{ \xi }, \end{align*} and so we obtain \begin{equation} \label{eq:J} \mathbf{J}(Z) = \mathbb{J} Z Z^{T} = \begin{bmatrix} P Q^{T} & P P^{T} \\ -Q Q^{T} & -Q P^{T} \end{bmatrix}. \end{equation} This is the special case with $m = n$ of \citet[Proposition~4.1]{SkVi2019}. It is also easy to see that $\mathbf{J}$ is equivariant: For any $S \in \mathsf{Sp}(2n,\mathbb{R})$, \begin{equation*} \mathbf{J} \circ \Phi_{S} = \Ad_{S^{-1}}^{*} \circ \mathbf{J}. \end{equation*} By Noether's Theorem (see, e.g., \citet[Theorem~11.4.1 on p.~372]{MaRa1999}), $\mathbf{J}$ is a conserved quantity of the symmetric representation~\eqref{eq:SymRB} due to the $\mathsf{Sp}(2n,\mathbb{R})$-symmetry. That each block of this matrix is a conserved quantity is also pointed out by \citet{BlCrMaRa2002} via direct computations. 
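Conservation of $\mathbf{J}$ along \eqref{eq:SymRB} can also be seen directly: writing $\dot{Z} = Z\Omega$ with $\Omega \in \mathfrak{so}(n)$, one gets $\frac{d}{dt}(ZZ^{T}) = Z(\Omega + \Omega^{T})Z^{T} = 0$, so $\mathbf{J}(Z) = \mathbb{J}ZZ^{T}$ is constant. A sketch of the corresponding check (our own; the sample data are arbitrary):

```python
import numpy as np

n = 3
Jm = np.block([[np.zeros((n, n)), np.eye(n)],
               [-np.eye(n), np.zeros((n, n))]])
lam = np.array([1.0, 2.0, 3.0])
D = lam[:, None] + lam[None, :]

rng = np.random.default_rng(3)
Z = rng.standard_normal((2 * n, n))      # arbitrary point Z = [Q; P]
Q, P = Z[:n], Z[n:]
Om = (Q.T @ P - P.T @ Q) / D             # I^{-1}(Q^T P - P^T Q), skew-symmetric
Zdot = Z @ Om                            # the flow: (dot Q, dot P) = (Q Om, P Om)

Jdot = Jm @ (Zdot @ Z.T + Z @ Zdot.T)    # derivative of J(Z) = J Z Z^T along the flow
print(np.allclose(Jdot, 0))              # True: J is conserved
```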
\section{Symplectic Reduction of the Symmetric Representation} \subsection{Symplectic Reduction} Let $P$ be a symplectic manifold with symplectic form $\omega$, suppose in addition that there is a symplectic action of a Lie group $\mathsf{G}$ on $P$, let $\mathfrak{g}^{*}$ be the dual of the Lie algebra $\mathfrak{g}$ of $\mathsf{G}$, and let $\mathbf{J}\colon P \to \mathfrak{g}^{*}$ be the momentum map associated with the action. The Marsden--Weinstein reduction~\cite{MaWe1974} (see also \cite[Sections~1.1 \& 1.2]{MaMiOrPeRa2007}) states that, if either (i)~the $\mathsf{G}$-action on $P$ is free and proper, or (ii)~$\mu \in \mathfrak{g}^{*}$ is a regular value of $\mathbf{J}$ and the action of the isotropy group \begin{equation*} \mathsf{G}_{\mu} \mathrel{\mathop:}= \setdef{ g \in \mathsf{G} }{ \Ad_{g^{-1}}^{*} \mu = \mu } \end{equation*} on the level set $\mathbf{J}^{-1}(\mu)$ is free and proper, then the quotient space $\mathbf{J}^{-1}(\mu)/\mathsf{G}_{\mu}$ is also a symplectic manifold with symplectic structure $\overline{\omega}_{\mu}$ that is naturally induced by $\omega$ and the geometric setting; see below for more details. Now, given a Hamiltonian $H\colon P \to \mathbb{R}$, one may define the Hamiltonian vector field $X_{H}$ on $P$ by setting $\ins{X_{H}}\omega = {\bf d}{H}$. If $H$ is invariant under the $\mathsf{G}$-action, it gives rise to the reduced Hamiltonian $\overline{H}_{\mu}$ on $\mathbf{J}^{-1}(\mu)/\mathsf{G}_{\mu}$, and then the Hamiltonian dynamics in $P$ is reduced to the Hamiltonian dynamics in $\mathbf{J}^{-1}(\mu)/\mathsf{G}_{\mu}$ defined in terms of $\overline{H}_{\mu}$ and $\overline{\omega}_{\mu}$. In other words, one can reduce a Hamiltonian system with a Lie-group symmetry to a lower-dimensional Hamiltonian system.
\subsection{Some Technical Issues of Symplectic Reduction} \label{ssec:technical_issues} We would like to perform the Marsden--Weinstein reduction of the symmetric representation~\eqref{eq:SymRB}; here we have $P = T^{*}\mathsf{M}_{n}(\mathbb{R}) \cong \mathsf{M}_{2n\times n}(\mathbb{R})$, $\mathsf{G} = \mathsf{Sp}(2n,\mathbb{R})$ with the action $\Phi$ defined in \eqref{eq:Phi}, and the momentum map $\mathbf{J}$ from \eqref{eq:J}. However, condition~(i) clearly does not hold because $\Phi$ is not a free action on $\mathsf{M}_{2n\times n}(\mathbb{R})$, and so one either needs to remedy this or check (ii). Otherwise, the quotient $\mathbf{J}^{-1}(\mu)/\mathsf{Sp}(2n,\mathbb{R})_{\mu}$ may not be a manifold. The other issue is how to characterize the quotient space $\mathbf{J}^{-1}(\mu)/\mathsf{Sp}(2n,\mathbb{R})_{\mu}$ explicitly in order to give a concrete description of the symplectic structure and the reduced dynamics on it. Fortunately, the recent work by \citet{SkVi2019} provides a geometric setting that is tailor-made for circumventing these issues. More specifically, we consider the following pair of momentum maps defined on $\mathsf{M}_{2n\times n}(\mathbb{R})$: \begin{equation*} \begin{tikzcd} \mathfrak{sp}(2n,\mathbb{R})^{*} & \mathsf{M}_{2n\times n}(\mathbb{R}) \arrow[swap]{l}{\mathbf{J}} \arrow{r}{\mathbf{M}} & \mathfrak{o}(n)^{*}, \end{tikzcd} \end{equation*} where $\mathbf{J}$ is defined above in \eqref{eq:J} and $\mathbf{M}$ is the momentum map associated with the action of the orthogonal group $\mathsf{O}(n)$ on $\mathsf{M}_{2n\times n}(\mathbb{R})$ to be described below; $\mathfrak{o}(n)$ is the Lie algebra of $\mathsf{O}(n)$. What they show is that, by considering an open subset $\mathcal{Z}$ of $\mathsf{M}_{2n\times n}(\mathbb{R})$ and restricting the actions and the momentum maps there, one may identify the Marsden--Weinstein quotient $\mathbf{J}^{-1}(\mu)/\mathsf{Sp}(2n,\mathbb{R})_{\mu}$ with a coadjoint orbit in $\mathfrak{o}(n)^{*}$.
We note that their result is slightly more general: they work with $\mathsf{M}_{2n\times m}(\mathbb{R})$ and $\mathsf{O}(m)$ for arbitrary $n, m \in \mathbb{N}$, and so our setting is the special case of theirs with $m = n$. \subsection{$\mathsf{O}(n)$-action and Momentum Map} Consider the action of the orthogonal group $\mathsf{O}(n)$ on $\mathsf{M}_{2n\times n}(\mathbb{R})$ defined by right multiplication, i.e., \begin{equation*} \Psi\colon \mathsf{O}(n) \times \mathsf{M}_{2n\times n}(\mathbb{R}) \to \mathsf{M}_{2n\times n}(\mathbb{R}); \qquad (R, Z) \mapsto Z R = \Psi_{R}(Z). \end{equation*} It is a straightforward calculation to see that $\Psi$ leaves the one-form $\theta$ invariant and hence is a symplectic action with respect to the symplectic form $\omega$ defined in \eqref{eq:Omega}, i.e., $\Psi_{R}^{*}\theta = \theta$ and hence $\Psi_{R}^{*}\omega = \omega$ for any $R \in \mathsf{O}(n)$. Since $\mathfrak{o}(n) = \mathfrak{so}(n)$, we identify the dual $\mathfrak{o}(n)^{*}$ with $\mathfrak{o}(n)$ via the inner product~\eqref{eq:ip-so(n)}. Then, by a calculation similar to the one for $\mathbf{J}$ above (see also \citet[Proposition~4.1]{SkVi2019}), we obtain the associated momentum map $\mathbf{M} \colon \mathsf{M}_{2n\times n}(\mathbb{R}) \to \mathfrak{o}(n)^{*} \cong \mathfrak{o}(n)$ as follows: \begin{equation} \label{eq:M} \mathbf{M}(Z) = Z^{T} \mathbb{J} Z = Q^{T}P - P^{T}Q. \end{equation} Again, it is a straightforward calculation to see that $\mathbf{M}$ is equivariant, i.e., for any $R \in \mathsf{O}(n)$, \begin{equation*} \mathbf{M} \circ \Psi_{R} = \Ad_{R}^{*} \circ \mathbf{M}.
\end{equation*} \subsection{Symplectic Reduction and Dual Pair} Following \citet{SkVi2019}, let us consider the subset of $\mathsf{M}_{2n\times n}(\mathbb{R})$ consisting of the full-rank elements, i.e., \begin{equation} \label{eq:mathcalZ} \mathcal{Z} \mathrel{\mathop:}= \setdef{ Z \in \mathsf{M}_{2n\times n}(\mathbb{R}) }{ \mathop{\mathrm{rank}}\nolimits Z = n }. \end{equation} As shown in \cite{SkVi2019}, $\mathcal{Z}$ is an open subset of $\mathsf{M}_{2n\times n}(\mathbb{R})$, and the actions $\Phi$ and $\Psi$ preserve $\mathcal{Z}$. Hence we may restrict the symplectic form $\omega$ and the momentum maps $\mathbf{J}$ and $\mathbf{M}$ to $\mathcal{Z}$; we denote these restrictions by the same symbols for simplicity of notation: \begin{equation*} \begin{tikzcd} \mathfrak{sp}(2n,\mathbb{R})^{*} & \mathcal{Z} \arrow[swap]{l}{\mathbf{J}} \arrow{r}{\mathbf{M}} & \mathfrak{o}(n)^{*}. \end{tikzcd} \end{equation*} \citet[Proposition~4.2]{SkVi2019} proved that $\Phi$ and $\Psi$ define \textit{mutually transitive actions} on $\mathcal{Z}$ in the following sense: (i)~The $\mathsf{Sp}(2n,\mathbb{R})$-action~$\Phi$ and the $\mathsf{O}(n)$-action $\Psi$ commute; (ii)~$\Phi$ and $\Psi$ are symplectic actions; (iii)~the momentum maps $\mathbf{J}$ and $\mathbf{M}$ are equivariant; (iv)~each level set of $\mathbf{J}$ is an $\mathsf{O}(n)$-orbit, and each level set of $\mathbf{M}$ is an $\mathsf{Sp}(2n,\mathbb{R})$-orbit. The mutual transitivity has the following important consequence~(see also \citet[Theorem 2.9~(iii)]{BaWu2012} and \citet[Proposition~3.5]{Sk2018}): Let $Z_{0} \in \mathcal{Z}$ and $\mu_{0} \mathrel{\mathop:}= \mathbf{J}(Z_{0})$ and $\Pi_{0} \mathrel{\mathop:}= \mathbf{M}(Z_{0})$.
Then one can identify the Marsden--Weinstein quotient \begin{equation*} \overline{\mathcal{Z}}_{\mu_{0}} \mathrel{\mathop:}= \mathbf{J}^{-1}(\mu_{0})/\mathsf{Sp}(2n,\mathbb{R})_{\mu_{0}} \end{equation*} with the coadjoint orbit $\mathcal{O}_{\Pi_{0}}$ through $\Pi_{0}$ in $\mathfrak{o}(n)^{*}$. \begin{remark} \label{rem:reduced_space} We do not have to check the condition (mentioned in Section~\ref{ssec:technical_issues}) that the $\mathsf{Sp}(2n,\mathbb{R})$-action $\Phi$ or the $\mathsf{Sp}(2n,\mathbb{R})_{\mu_{0}}$-action on $\mathbf{J}^{-1}(\mu_{0})$ is free and proper. In fact, the smooth structure on the reduced space $\overline{\mathcal{Z}}_{\mu_{0}}$ is induced by that of the coadjoint orbit $\mathcal{O}_{\Pi_{0}}$. See the proof of Proposition~2.8 in \cite{SkVi2019} for details. \end{remark} More specifically, let $i_{\mu_{0}}\colon \mathbf{J}^{-1}(\mu_{0}) \hookrightarrow \mathcal{Z}$ be the inclusion and $\pi_{\mu_{0}}\colon \mathbf{J}^{-1}(\mu_{0}) \to \overline{\mathcal{Z}}_{\mu_{0}}$ be the quotient map. Then the reduced symplectic form $\overline{\omega}_{\mu_{0}}$ on $\overline{\mathcal{Z}}_{\mu_{0}}$ is uniquely determined by \begin{equation*} i_{\mu_{0}}^{*} \omega = \pi_{\mu_{0}}^{*} \overline{\omega}_{\mu_{0}}; \end{equation*} see \cite{MaWe1974} and \citet[Sections~1.1 \& 1.2]{MaMiOrPeRa2007}. Also, let $\omega_{\mathcal{O}_{\Pi_{0}}}$ be the $(-)$-Kirillov--Kostant--Souriau symplectic structure, i.e., for any $\Pi \in \mathcal{O}_{\Pi_{0}}$ and $A, B \in \mathfrak{o}(n)$, \begin{equation*} \omega_{\mathcal{O}_{\Pi_{0}}}( -\ad_{A}^{*}\Pi, -\ad_{B}^{*}\Pi ) \mathrel{\mathop:}= -\ip{\Pi}{[A,B]}, \end{equation*} where $[\,\cdot\,,\,\cdot\,]$ is the commutator on $\mathfrak{o}(n)$; see, e.g., \citet[Chapter~1]{Ki2004} and \citet[Chapter~14]{MaRa1999}.
Then the momentum map $\mathbf{M}$ restricted to the level set $\mathbf{J}^{-1}(\mu_{0})$ gives rise to a diffeomorphism $\overline{\mathbf{M}}\colon \overline{\mathcal{Z}}_{\mu_{0}} \to \mathcal{O}_{\Pi_{0}}$; moreover this map is symplectic with respect to the above symplectic forms, i.e., \begin{equation*} \overline{\mathbf{M}}^{*} \omega_{\mathcal{O}_{\Pi_{0}}} = \overline{\omega}_{\mu_{0}}. \end{equation*} The diagram below gives an overview of this result. \begin{equation*} \begin{tikzcd}[column sep=7ex, row sep=7ex] \mathcal{Z} & \\ \mathbf{J}^{-1}(\mu_{0}) \arrow{u}{i_{\mu_{0}}} \arrow[swap]{d}{\pi_{\mu_{0}}} \arrow{dr}{\mathbf{M}|_{\mathbf{J}^{-1}(\mu_{0})}} & \\ \overline{\mathcal{Z}}_{\mu_{0}} \arrow[swap]{r}{\overline{\mathbf{M}}} & \mathcal{O}_{\Pi_{0}} \end{tikzcd} \end{equation*} \subsection{Reduction of Symmetric Representation} Let $Q(0) = Q_{0} \in \mathsf{SO}(n)$ and $\Pi_{0} \in \mathfrak{o}(n)^{*}$ be the initial rotational configuration and the initial body angular momentum of the rigid body, and fix $P_{0} \in \mathsf{SO}(n)$ so that \begin{equation*} Q_{0}^{T} P_{0} - P_{0}^{T} Q_{0} = \Pi_{0}. \end{equation*} See \citet{BlCr1996} and \citet{BlCrMaRa2002} for the condition under which this is possible. Then clearly $Z_{0} \mathrel{\mathop:}= (Q_{0}, P_{0})$ is in the open subset $\mathcal{Z} \subset \mathsf{M}_{2n\times n}(\mathbb{R})$ defined in \eqref{eq:mathcalZ} and $\Pi_{0} = \mathbf{M}(Z_{0})$. Now, setting \begin{equation*} \mu_{0} \mathrel{\mathop:}= \mathbf{J}(Z_{0}) = \begin{bmatrix} P_{0} Q_{0}^{T} & I \smallskip\\ -I & -Q_{0} P_{0}^{T} \end{bmatrix} \in \mathfrak{sp}(2n,\mathbb{R})^{*}, \end{equation*} the level set \begin{equation*} \mathbf{J}^{-1}(\mu_{0}) = \setdef{ \begin{bmatrix} Q \\ P \end{bmatrix} \in \mathcal{Z} }{ Q Q^{T} = I,\, P P^{T} = I,\, P Q^{T} = P_{0} Q_{0}^{T} } \end{equation*} is an invariant submanifold of the symmetric representation~\eqref{eq:SymRB}. 
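The structure of this level set can be checked in a small numerical sketch (with hypothetical initial data and $n = 3$): elements of the form $Z = (Q_{0}R, P_{0}R)$ with $R \in \mathsf{SO}(3)$ satisfy the three defining conditions of $\mathbf{J}^{-1}(\mu_{0})$ displayed above, and $\mathbf{M}$ sends them to the point $R^{T}\Pi_{0}R$ of the coadjoint orbit through $\Pi_{0}$.

```python
# Sketch for n = 3 with hypothetical initial data Q0, P0 in SO(3).
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def close(X, Y):
    return all(abs(x - y) < 1e-12 for rx, ry in zip(X, Y) for x, y in zip(rx, ry))

Q0, P0 = rot_z(0.4), rot_x(1.1)          # hypothetical initial configuration
Pi0 = [[q - p for q, p in zip(rq, rp)]   # Pi_0 = Q0^T P0 - P0^T Q0
       for rq, rp in zip(matmul(transpose(Q0), P0), matmul(transpose(P0), Q0))]

R = rot_z(0.9)
Q, P = matmul(Q0, R), matmul(P0, R)      # a point Z = (Q, P) on the level set

I3 = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
assert close(matmul(Q, transpose(Q)), I3)                         # Q Q^T = I
assert close(matmul(P, transpose(P)), I3)                         # P P^T = I
assert close(matmul(P, transpose(Q)), matmul(P0, transpose(Q0)))  # P Q^T = P0 Q0^T

MZ = [[q - p for q, p in zip(rq, rp)]    # M(Z) = Q^T P - P^T Q
      for rq, rp in zip(matmul(transpose(Q), P), matmul(transpose(P), Q))]
assert close(MZ, matmul(transpose(R), matmul(Pi0, R)))            # M(Z) = R^T Pi0 R
```

This is exactly the statement that the $\mathsf{O}(n)$-orbit through $Z_{0}$ fills out the level set and is mapped by $\mathbf{M}$ onto the coadjoint orbit.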
Let $h\colon \mathfrak{o}(n)^{*} \to \mathbb{R}$ be a collective Hamiltonian, i.e., $h \circ \mathbf{M} = H$. From the expressions \eqref{eq:H} and \eqref{eq:M} of $H$ and $\mathbf{M}$, we find \begin{equation*} h(\Pi) = \frac{1}{2}\ip{ \Pi }{ \mathcal{I}^{-1}(\Pi) }, \end{equation*} which is the Hamiltonian~\eqref{eq:h} of the generalized rigid body in the body representation. Then the result from the previous subsection implies that the $\mathsf{Sp}(2n,\mathbb{R})$-reduced dynamics in $\overline{\mathcal{Z}}_{\mu_{0}}$ is equivalent to the Lie--Poisson equation \begin{equation*} \dot{\Pi} = \ad_{Dh(\Pi)}^{*} \Pi \end{equation*} in the coadjoint orbit $\mathcal{O}_{\Pi_{0}} \subset \mathfrak{o}(n)^{*}$, where $Dh(\Pi) \in \mathfrak{o}(n)$ is defined so that, for any $\delta\Pi \in \mathfrak{o}(n)^{*}$, \begin{equation*} \ip{ \delta\Pi }{ Dh(\Pi) } = \left.\od{}{s} h(\Pi + s\,\delta\Pi) \right|_{s=0} = \ip{ \delta\Pi }{ \mathcal{I}^{-1}(\Pi) }, \end{equation*} that is, $Dh(\Pi) = \mathcal{I}^{-1}(\Pi)$. However, under the identification $\mathfrak{o}(n)^{*} \cong \mathfrak{o}(n)$, $\ad_{A}^{*}\Pi = [\Pi, A]$ for any $A \in \mathfrak{o}(n)$ and $\Pi \in \mathfrak{o}(n)^{*}$, and thus we obtain \begin{equation*} \dot{\Pi} = \left[ \Pi, \mathcal{I}^{-1}(\Pi) \right], \end{equation*} which is the $n$-dimensional Euler equation~\eqref{eq:Euler}. \section*{Acknowledgments} I would like to thank Paul Skerritt for helpful discussions on dual pairs. This work was partially supported by NSF grant CMMI-1824798.
\section{Wick rotation in gravity} In Quantum Field Theory (QFT) it is customary to define functional integrals in imaginary (Euclidean) time, in order to replace oscillatory integrals by exponentially damped ones, thereby improving their convergence. The rotation of the integration contour over energy or time is called Wick rotation. In this paper we discuss various issues that arise when one considers the Wick rotation for a QFT on a curved spacetime. In general there will be more than one stationary point for the Euclidean action, in which case the functional integral should be defined as a sum of Gaussian integrals around all the regular, finite action solutions of the Euclidean field equations, generally known as instantons. (Note that the instantons will not in general correspond to solutions of the Lorentzian field equations.) In the case of quantum gravity, this procedure has been developed mainly by the Cambridge school in the late '70s and early '80s and is called Euclidean Quantum Gravity \cite{eqg}. In this approach the Euclidean and Lorentzian spacetimes are seen as different real sections of a complex manifold. In practice, one rotates a suitable time coordinate, as in standard QFT. This definition of Wick rotation has some well-known shortcomings that have been recently summarized in \cite{Visser:2017atf}. An alternative definition, where the coordinates are kept fixed and it is the metric that is analytically continued, avoids some of these issues. The main virtue of this alternative definition is that it keeps the spacetime manifold fixed. Thus, in the path integral, one would only consider manifolds that admit a physical, Lorentzian, metric. Unfortunately, the analytic continuation of the metric is far from unique. 
One may try to fix the ambiguities, or at least to restrict them, by imposing some additional desirable properties, such as mapping local solutions of the Lorentzian field equations to local solutions of the Euclidean equations, and preserving the number of Killing vectors. Our main result is that in some cases these properties seem to be in conflict with the requirement that the Wick rotation preserve the manifold. In the rest of section 1, we will review various definitions of Wick rotation. In section 2 we discuss the analytic continuations of Minkowski and Anti-de Sitter space. Sections 3 and 4 deal with the de Sitter and Schwarzschild metrics, respectively. Section 5 contains a short discussion, where we compare again the continuation of the metric with the continuation of the coordinates, in view of the preceding results. \subsection{Continuing time} Assume that spacetime has topology $R\times\Sigma$, with coordinates $t$ in $R$ and $x^i$ in $\Sigma$. We further assume that spacetime is static, namely that there exists a Killing vector that is everywhere orthogonal to $\Sigma$. There exist coordinates where the metric has the form \begin{equation} g_{\mu\nu}(t,x)=\left( \begin{array}{cc} g_{00}(x) & 0 \\ 0 & g_{ij}(x) \end{array} \right) \end{equation} where $g_{00}<0$. In this case there is a natural time coordinate and the Wick rotation can be defined in the usual way: \begin{equation} iS_L\Big|_{t\to-it_E}=-S_E\ . \label{wick} \end{equation} For example, for a real scalar field this gives \begin{equation} S_E(\phi)=\frac{1}{2}\int d^dx_E\sqrt{g_E}\,g_E^{\mu\nu}\partial_\mu\phi\partial_\nu\phi \end{equation} where \begin{equation} (g_E)_{\mu\nu}=\left( \begin{array}{cc} -g_{00} & 0 \\ 0 & g_{ij} \end{array} \right) \end{equation} is a positive definite metric on an analytically continued manifold. \noindent If this definition is extended to more general spacetimes, several issues arise \cite{Visser:2017atf}. 
$\bullet$ {\sl First} we note that time has no physical meaning in GR. If the Wick rotation is performed on time, one immediately finds that the result depends very strongly on the coordinate system. Thus for example beginning from the de Sitter metric, written in three different forms: the form with flat spatial sections \begin{equation} \label{deSflat} ds^2=-dt^2+H^{-2}e^{2Ht}\left(dr^2+r^2d\Omega_2^2\right) \end{equation} or the form with positively curved spatial sections \begin{equation} \label{deSpos} ds^2=-d\tau^2+H^{-2}\cosh^2(H\tau)\left(\frac{dr^2}{1-r^2}+r^2d\Omega_2^2\right) \end{equation} or the form with negatively curved spatial sections \begin{equation} \label{deSneg} ds^2=-d\bar\tau^2+H^{-2}\sinh^2(H\bar\tau)\left(\frac{dr^2}{1+r^2}+r^2d\Omega_2^2\right) \end{equation} (where $d\Omega_2^2=d\theta^2+\sin^2\theta d\varphi^2$ is the metric of the 2-sphere) the prescription $t\to -it$ leads to a metric that is either complex, or positive definite, or again Lorentzian but with opposite signature. $\bullet$ {\sl Second,} in flat spacetime, the sense of the Wick rotation is fixed by the requirement that the analytic continuation of the Feynman propagator of a free particle should not cross the poles in the complex energy plane. This is related to Feynman's ``$i\epsilon$'' prescription, which is a way to incorporate the notion of causality in the two-point function. Furthermore, the Euclidean continuation of any correlation functions must satisfy Osterwalder-Schrader positivity, which is again a consequence of causality. No such restrictions from causality seem to limit the analytic continuation of a time coordinate in a generic Lorentzian manifold. On the other hand it has been argued that a notion of causality should be encoded in the functional integral \cite{Teitelboim:1983fh,causet,Ambjorn:1998xu}. 
$\bullet$ {\sl Third,} given that any manifold admits a Euclidean metric, a definition of the functional integral that started from Euclidean signature, as advocated in Euclidean Quantum Gravity \cite{Gibbons:1976ue}, would include a sum over all topologies. This is a source of various difficulties. At a fundamental mathematical level, we are confronted with the fact that even the classification of all four-dimensional topologies is impossible \cite{vanmeter}. This is a major challenge to the definition of a functional integral, over and above the usual functional analytic issues. More concretely, numerical simulations of sums over four-dimensional Euclidean triangulations (called ``Euclidean Dynamical Triangulations'' - DT) have largely failed to produce viable phases looking like an extended four-dimensional manifold \cite{Bialas:1996wu,deBakker:1996zx,Coumbe:2014nea,Rindlisbacher:2015ewa}. On the other hand, the existence of a Lorentzian structure on a given manifold restricts the possible topologies \cite{hawkingellis}. Numerical simulations within Causal Dynamical Triangulations (CDT) have indeed shown that this requirement has a very beneficial effect on the functional integral for gravity \cite{Ambjorn:2004qm}. $\bullet$ {\sl Fourth,} when gravity is viewed as a gauge theory for the Lorentz group, as in the tetrad formalism, not only is the signature of the metric changed, but also the gauge group itself. This is in sharp contrast to other gauge theories. This problem is particularly urgent when one couples gravity to fermions, because the spinor representations are generally different for different signatures. We will not deal with this issue in detail here, except for some comments in section 1.6. 
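The coordinate dependence raised in the first point can be made concrete with a few lines of code. The following numerical sketch applies $t\to -it$ to the scale factors of the three slicings (\ref{deSflat}), (\ref{deSpos}), (\ref{deSneg}) and recovers the three outcomes quoted above: a complex metric, a positive definite one, and one with flipped spatial signature.

```python
# Apply t -> -i t to the scale factors of the three de Sitter slicings.
import cmath

H, t = 1.0, 0.8
tau = -1j * t  # the Wick-rotated time

flat   = cmath.exp(2 * H * tau)    # e^{2Ht}      -> complex
closed = cmath.cosh(H * tau) ** 2  # cosh^2(Htau) -> cos^2(Ht) > 0
open_  = cmath.sinh(H * tau) ** 2  # sinh^2(Htau) -> -sin^2(Ht) < 0

assert abs(flat.imag) > 1e-6                           # complex metric
assert abs(closed.imag) < 1e-12 and closed.real > 0    # positive definite
assert abs(open_.imag) < 1e-12 and open_.real < 0      # spatial signature flips
```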
\smallskip For all these reasons we are led to seek a definition of Wick rotation that satisfies the following three conditions: \begin{enumerate} \item[{\bf(1)}] it does not depend on the coordinates\ ; \item[{\bf(2)}] causality is taken into account\ ; \item[{\bf(3)}] the Wick-rotated metric is defined on the same manifold as the original Lorentzian metric\ . \end{enumerate} In the rest of this section we shall discuss two such definitions. In the rest of the paper we show that even these definitions are not completely satisfactory. \subsection{Continuing the lapse} A better procedure is to analytically continue the metric instead of time. One way to do this is to start from an ADM foliation. Let us assume topology $R\times\Sigma$. A generic metric and its inverse can be written locally in the form \begin{equation} g_{(\sigma)\mu\nu}=\left( \begin{array}{cc} \sigma N^2+q_{ij}N^i N^j & N_i \\ N_j & q_{ij} \end{array} \right) \ ,\qquad g_{(\sigma)}^{\mu\nu}=\left( \begin{array}{cc} \frac{1}{\sigma N^2} & -\frac{N^i}{\sigma N^2} \\ -\frac{N^j}{\sigma N^2} & q^{ij}+\frac{N^i N^j}{\sigma N^2} \end{array} \right) \end{equation} where $q_{ij}$ is a positive definite metric in $\Sigma$. Note that $\sqrt{g_{(\sigma)}}=\sqrt{\sigma}N\sqrt{q}$. The metric is Euclidean for $\sigma=1$ and Lorentzian for $\sigma=-1$. (One could equivalently take $\sigma=1$ and assume that the sign of $N^2$ could be negative.) Now we do not take the absolute value of the determinant when we construct the action. For example, in the scalar case we define \begin{equation} S_{(\sigma)}(\phi)=\frac{1}{2}\int d^dx\sqrt{g_{(\sigma)}} g_{(\sigma)}^{\mu\nu}\partial_\mu\phi\partial_\nu\phi\ . \label{scalar} \end{equation} Since for Lorentzian metric ($\sigma=-1$) we have $\sqrt{g_{(-1)}}=iN\sqrt{q}$ whereas for Euclidean metric $\sqrt{g_{(1)}}=N\sqrt{q}$, this action cannot remain real when $\sigma$ changes sign. 
For $\sigma=-1$ we define the real Lorentzian action \begin{equation} S_L(\phi)=iS_{(-1)}(\phi) \label{scal} \end{equation} whereas for $\sigma=1$ we define the real Euclidean action \begin{equation} S_E(\phi)=S_{(1)}(\phi)\ . \label{scae} \end{equation} This analytic continuation in $\sigma$ is such that if we start from $iS_L=-S_{(-1)}$ we end at $-S_{(1)}=-S_E$. One can similarly check that \begin{equation} S_{(\sigma)}=\frac{1}{4}\int d^dx\sqrt{g_{(\sigma)}} g_{(\sigma)}^{\mu\nu}g_{(\sigma)}^{\rho\sigma}F_{\mu\rho}F_{\nu\sigma} \label{maxwell} \end{equation} and \begin{equation} S_{(\sigma)}=\frac{1}{2\kappa^2}\int d^dx\sqrt{g_{(\sigma)}} (2\Lambda-R(g_{(\sigma)})) \label{hilbert} \end{equation} interpolate between Lorentzian and Euclidean path integrals for electromagnetism and gravity, always with the identifications (\ref{scal}) and (\ref{scae}). In this way we reproduce the result obtained by continuing the time in the static case, but this procedure is not restricted to the static case. \subsection{General procedure} Every manifold admits a Riemannian (Euclidean) metric, but there are topological restrictions for the existence of Lorentzian metrics, namely there must exist a nowhere zero vectorfield \cite{hawkingellis}. Without loss of generality, such a vectorfield can be unit-normalized. Then, a Lorentzian metric $g_{(L)\mu\nu}$ can be constructed starting from a Euclidean metric $g_{(E)\mu\nu}$ and a unit vector field $X^\mu$ by the formula: \begin{equation} \label{matthew} g_{(L)\mu\nu}=g_{(E)\mu\nu}-2X_\mu X_\nu\ , \end{equation} where $X_\mu=g_{(E)\mu\nu}X^\nu$. In the Lorentzian metric, $g_{(L)\mu\nu}X^\mu X^\nu=-1$, so $X$ is a unit timelike vectorfield. This formula can be inverted to construct a Euclidean metric out of a given Lorentzian metric and a unit timelike vectorfield.
This can be seen as the result of a continuous deformation \cite{Candelas:1977tt}: \begin{equation} g_{(\sigma)\mu\nu}=g_{(L)\mu\nu}+(1+\sigma) X_\mu X_\nu\ , \label{ancont} \end{equation} where $\sigma$ varies between $-1$ and $1$. Clearly $g_{(-1)}=g_{(L)}$ and $g_{(1)}=g_{(E)}$. For $\sigma=0$ the metric is degenerate: for any vectorfield $Y^\mu$, $g_{(0)\mu\nu}X^\mu Y^\nu =0$. We note that continuing the lapse is a special case of this more general procedure, where $X^\mu$ is the unit normal to the hypersurfaces of constant time: $$ X_\mu=(-N,0)\ ;\qquad X^\mu=\left(\frac{1}{N},-\frac{N^a}{N}\right)\ , $$ in the ADM coordinates. The procedure discussed in this section is more general in that the vector $X$ is not assumed to be hypersurface-orthogonal. Let us see how this procedure reproduces the Wick rotation in flat spacetime. We have $g_{(-1)\mu\nu}=g_{(L)\mu\nu}=\eta_{\mu\nu}$, so the interpolating metric is $g_{(\sigma)\mu\nu}=\mathrm{diag}(\sigma,1,1,1)$, and $g_{(1)\mu\nu}\equiv g_{(E)\mu\nu}=\delta_{\mu\nu}$. The volume element $\sqrt{{\rm det} g^{(\sigma)}}$ is $\sqrt{{\rm det}(\eta_{\mu\nu})}=i$ for $\sigma=-1$ and $\sqrt{{\rm det}(\delta_{\mu\nu})}=1$ for $\sigma=1$. With this definition of the Wick rotation, the ``$i$'' in the exponent in the functional integral comes from taking the square root of the determinant of a metric with Lorentzian signature. The interpolating actions for scalar, Maxwell and gravitational field are given again by (\ref{scalar},\ref{maxwell},\ref{hilbert}), with the identifications (\ref{scal}) for the Lorentzian action and (\ref{scae}) for the Euclidean action. \subsection{Complexification} The two definitions of Wick rotation given in the preceding sections give rise to a problem: if we interpret $\sigma$ as a continuous real parameter running between $-1$ and $1$, then for $\sigma=0$ the metric would become degenerate. To avoid this, one has to allow $\sigma$ to describe a path in the complex plane. 
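As a concrete illustration of the deformation (\ref{ancont}) in flat spacetime (a numerical sketch): with $g_{(L)}=\eta$ and $X_\mu=(-1,0,0,0)$ one recovers $g_{(\sigma)}=\mathrm{diag}(\sigma,1,1,1)$, and the volume element produces the factor $i$ at $\sigma=-1$.

```python
# g(sigma) = g_L + (1+sigma) X_mu X_nu in flat spacetime, and its volume element.
import cmath

def g_sigma(sigma):
    eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    X = [-1, 0, 0, 0]  # X_mu, the lowered unit timelike vector
    return [[eta[m][n] + (1 + sigma) * X[m] * X[n] for n in range(4)]
            for m in range(4)]

def sqrt_det(sigma):  # the metric is diagonal here, so det is the product
    g = g_sigma(sigma)
    d = 1
    for m in range(4):
        d *= g[m][m]
    return cmath.sqrt(d)

assert [g_sigma(-1.0)[m][m] for m in range(4)] == [-1, 1, 1, 1]  # Lorentzian end
assert [g_sigma(1.0)[m][m] for m in range(4)] == [1, 1, 1, 1]    # Euclidean end
assert abs(sqrt_det(-1.0) - 1j) < 1e-12  # the "i" in the functional integral
assert abs(sqrt_det(1.0) - 1.0) < 1e-12
```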
The question of the contour then arises: does the path pass above or below the point $\sigma=0$? To make a choice, we note that the propagator constructed with the interpolating metric \begin{equation} \frac{i}{-g_{(\sigma)\mu\nu}p^\mu p^\nu-m^2} =\frac{i}{-\sigma E^2-\vec p^2-m^2} \end{equation} coincides for $\sigma=-1$ with the causal (Feynman) propagator, \begin{equation} \Delta^F=\frac{i}{E^2-\vec p^2-m^2+i\epsilon} \end{equation} if $\sigma$ is given a small negative imaginary part \begin{equation} \sigma=-1-\frac{i\epsilon}{E^2}\ . \end{equation} We see that the usual prescription for the choice of integration contour in the definition of the propagator can be interpreted naturally as an incipient complexification of the metric. After allowing $\mathrm{Re}(\sigma)$ to grow from $-1$ to $1$ and letting $\mathrm{Im}(\sigma)$ go back to zero, and taking into account the factor $i$ from the volume element, the correlator takes the Euclidean form \begin{equation} \frac{-1}{-g_{(1)\mu\nu}p^\mu p^\nu-m^2}= \frac{1}{E^2+\vec p^2+m^2}\ . \end{equation} The path that we have just described can be deformed into a path running along the real axis, except for an infinitesimal semicircle passing above $\sigma=0$. In the following this path will always be understood. There is in general no notion of reflection positivity in curved spacetime because generically there is no isometry that can serve the function of reflection. At least on static spacetimes, where such a reflection exists, a suitable generalization of reflection positivity holds \cite{jaffe}. \subsection{Properties} The procedures outlined in sects. 1.2-1.3 clearly satisfy the conditions (1)-(2)-(3) spelled out in section 1.1. In spite of this, important issues remain, the most important one being the lack of uniqueness. 
The procedure of section 1.2 depends on the choice of a foliation and the procedure of section 1.3 depends on the choice of a one-form $X_\mu$: in both cases there is an infinite dimensional arbitrariness. One may try to restrict this choice by making additional demands. For instance, it would be clearly desirable that a definition of Wick rotation had the following properties: \begin{enumerate} \item[{\bf(4)}] a local solution of the Lorentzian field equations should map to a local solution of the Euclidean field equations. For Einstein's equations with a cosmological constant, this would mean that locally Einstein metrics are mapped to locally Einstein metrics. (The sign of the cosmological constant should be allowed to change.) \item[{\bf(5)}] if the Lorentzian metric has a Killing vector, the Euclidean metric should also have a Killing vector. (In general one would have to allow the algebra of the Killing vectors to be deformed in the Euclidean continuation, as is the case already for flat space.) \item[{\bf(6)}] a maximally symmetric spacetime should be mapped to a maximally symmetric spacetime. (Again, one cannot demand the sign of the curvature to remain the same.) \end{enumerate} In point (4) it would be too much to demand that global solutions are mapped to global solutions. This is because already for a simple scalar field, such a property does not hold. For example on a torus a local solution of the D'Alembert equation of the form $e^{ik(x\pm t)}$ maps to a local solution of the Laplace equation of the form $e^{ikx\mp kt}$, which however does not satisfy the periodicity conditions. Thus only the constants are global solutions of the Laplace equation. All the oscillating solutions do not have a Euclidean analogue. Let us observe that requirements (4-6) are automatically satisfied if one interprets the Wick rotation as a complex change of coordinates. 
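The torus counterexample above can be verified with finite differences (a minimal numerical sketch): $e^{ik(x+t)}$ solves the wave equation, its continuation $e^{ikx+kt_E}$ solves the Laplace equation, and the latter grows exponentially in $t_E$, so it cannot be periodic.

```python
# Check the wave -> Laplace continuation of e^{ik(x+t)} on the torus example.
import cmath

k, h = 2.0, 1e-4

def d2(f, u):  # central second derivative of a one-variable function
    return (f(u + h) - 2 * f(u) + f(u - h)) / h**2

def lorentzian(t, x):
    return cmath.exp(1j * k * (x + t))

def euclidean(tE, x):
    return cmath.exp(1j * k * x + k * tE)

t0, x0 = 0.3, 0.7
# wave equation: f_tt - f_xx = 0
wave = d2(lambda t: lorentzian(t, x0), t0) - d2(lambda x: lorentzian(t0, x), x0)
# Laplace equation: g_tt + g_xx = 0
lap = d2(lambda tE: euclidean(tE, x0), t0) + d2(lambda x: euclidean(t0, x), x0)

assert abs(wave) < 1e-5
assert abs(lap) < 1e-5
# the continued solution grows in Euclidean time, hence is not periodic:
assert abs(euclidean(t0 + 2 * cmath.pi / k, x0)) > abs(euclidean(t0, x0))
```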
The main point of this paper will be to show that it is not always possible to satisfy all these conditions simultaneously. \subsection{Other approaches} We mention here some related ideas that have appeared in the literature. Whereas here we view the Wick rotation as a mathematical trick, one could think of the signature of the metric as being dynamically determined. One can view the metric as an order parameter whose expectation value breaks the linear group $GL(d)$ to $O(d)$ or $O(d-1,1)$ \cite{Percacci:1990wy,Percacci:2009ij}. Dynamical mechanisms that determine the signature have been discussed in \cite{Carlini:1993up,Dereli:1993pj,Wetterich:2004za,Mukohyama:2013ew,Kehayias:2014uta}. A definition of Euclidean continuation of spinors that avoids the doubling issue has been discussed in \cite{Mehta:1986mi,Wetterich:2010ni}. Mathematical results concerning the analytic continuation of Riemannian manifolds have been discussed in \cite{Helleland:2015wva,Helleland:2017zks}. \section{Regular Examples} \subsection{Minkowski spacetime} We begin from this rather trivial case, just to show that it is possible to satisfy all the requirements listed above, provided we allow the Killing vectors to be deformed and their algebra to change (discontinuously) when one crosses the point $\sigma=0$. Choosing the one-form $X=dt$ in Minkowski coordinates, the interpolating metric is $$ ds^2(\sigma) = \sigma dt^2 + \sum_{i=1}^{3}(dx^i)^2\ , $$ where $-1\leq\sigma\leq1$.
The Killing vectors are: \begin{equation} \footnotesize \begin{split} & P_0 \equiv \frac{1}{\sqrt{|\sigma|}}\partial_0 \hspace{1cm}, \hspace{1 cm} P_i \equiv \partial_i \hspace{3 cm} \mbox{with} \;i=1,2,3\\ & M_{a b} \equiv x^a \partial_{b} - x^b \partial_{a} \hspace{5 cm} \mbox{with} \;a,b=1,2,3\; \mbox{and}\;a\ne b\\ & K_a \equiv \frac{1}{\sqrt{|\sigma|}} \left[ x^a \partial_{0} - \sigma x^0 \partial_{a} \right] \hspace{3 cm} \mbox{with} \;a=1,2,3 \end{split} \label{genminkill} \end{equation} The commutators are: \begin{equation} \begin{split} & \left[ P_i , P_j \right] = 0 \hspace{3 cm} \left[ M_{a b} , P_i \right] = \delta_{b i} P_a - \delta_{a i} P_b \\ & \left[ M_{a b} , M_{c d} \right] = -\delta_{a c}M_{b d}+\delta_{a d}M_{b c}+\delta_{c b}M_{a d}-\delta_{b d}M_{a c} \\ & \left[ M_{a b} , K_c \right] = \delta_{b c} K_a - \delta_{a c} K_b \hspace{0.5 cm} \left[ P_i , K_a \right] = \delta_{i a} P_0 \\ & \left[ K_a , K_b \right] = \mbox{sign}(-\sigma) M_{a b} \hspace{1 cm} \left[ P_0 , K_a \right] = \mbox{sign}(-\sigma) P_a \end{split} \label{comgenmin} \end{equation} The main point to observe here is that, as long as $\sigma$ is real, the algebra remains the same under infinitesimal changes of $\sigma$ but changes discontinuously when $\sigma$ changes sign. This is because all the metrics with $\sigma<0$ are isometric to the metric with $\sigma=-1$. The pullbacks of the original Killing vectors by this isometry are Killing vectors for the deformed metric, all satisfying the same algebra. Metrics with $\sigma>0$, however, are not isometric to the original Minkowski metric (they are all isometric to the Euclidean metric with $\sigma=1$) and the boost generators become additional rotation generators. \subsection{Anti de Sitter space} Anti-de Sitter space can be embedded in a flat 5-dimensional space with metric $ds^2=-dz_0^2+dz_1^2+dz_2^2+dz_3^2-dz_4^2$. The embedding equation is $$ -z_0^2+z_1^2+z_2^2+z_3^2-z_4^2=-r^2\ .
$$ One can choose coordinates $\tau,\chi,\theta,\varphi$, defined by \begin{eqnarray} z_i&=& \,r\,\sinh\chi\,\omega_i \ \ \ \ \ \mbox{with}\; i=1,2,3\ \ \mbox{and}\ \ \sum_i\omega_i^2=1 \nonumber\\ z_4&=& \,r\,\cosh\chi\sin\tau,\nonumber\\ z_0&=& \,r\,\cosh\chi\cos\tau\ . \label{adscoord} \end{eqnarray} Aside from the issue of periodicity in the $\tau$ direction, these coordinates cover the whole manifold. This embedding gives rise to the metric \begin{equation} \label{AdeSstat} ds^2=r^2\left(-\cosh^2\chi d\tau^2+d\chi^2+\sinh^2\chi(d\theta^2+\sin^2\theta d\varphi^2)\right)\ . \end{equation} The one-form $X=r\cosh\chi d\tau$ has norm $-1$, and can be used in (\ref{matthew}) to generate the Euclidean metric \begin{equation} \label{hypstat} ds^2=r^2\left(\cosh^2\chi d\tau^2+d\chi^2+\sinh^2\chi(d\theta^2+\sin^2\theta d\varphi^2)\right)\ . \end{equation} This is the standard metric on the four-dimensional one-sheeted hyperboloid, which is embedded in a five-dimensional Minkowski space with metric $ds^2=dz_1^2+dz_2^2+dz_3^2+dz_4^2-dz_5^2$ by the condition $$ z_1^2+z_2^2+z_3^2+z_4^2-z_5^2=-r^2\ . $$ The coordinates are defined as in (\ref{adscoord}), except that in the last two lines the trigonometric functions of $\tau$ are replaced by hyperbolic functions. The curvature scalar of this space is $R=-12/r^2$, and therefore it is a solution of Einstein's equations with cosmological constant $\Lambda=-3/r^2$. It is maximally symmetric, so the number of Killing vectors is preserved, but the isometry group changes from $SO(2,3)$ to $SO(1,4)$. From this point of view AdS behaves exactly like Minkowski space. \section{De Sitter space} De Sitter space in $4$ dimensions has the topology of a cylinder $\mathbb{R}\times S^3$. It can be embedded in a $5$-dimensional Minkowski space with metric $ds^2=-dz_0^2+dz_1^2+dz_2^2+dz_3^2+dz_4^2$ by the equation $$ -z_0^2+z_1^2+z_2^2+z_3^2+z_4^2=H^{-2}\ .
$$ The hyperspherical coordinates $\tau$, $\chi$, $\theta$, $\varphi$ are related to the coordinates of (\ref{deSpos}) by $r=\sin\chi$. They are related to the embedding coordinates by \begin{eqnarray} z_0&=& H^{-1}\sinh(H\tau)\nonumber\\ z_i&=&H^{-1} \cosh(H\tau)\,\sin\chi\,\omega_i \ \ \ \ \ \mbox{with}\; i=1,2,3\ \ \mbox{and}\ \ \sum_i\omega_i^2=1 \nonumber\\ z_4&=& H^{-1}\cosh(H\tau)\,\cos\chi\ . \label{hyperspherical} \end{eqnarray} These coordinates cover the whole manifold, aside from a set of measure zero. The metric has the form (\ref{deSpos}), with $r=\sin\chi$. If we now define $\cosh(H\tau)=1/\cos\rho$, with $-\frac{\pi}{2}\leq\rho\leq\frac{\pi}{2}$, so that \begin{equation} z_0 = H^{-1}\tan\rho\ ;\qquad z_4 = H^{-1} \cos \chi/\cos\rho\ , \end{equation} the metric takes the form \begin{equation} ds^2=(H\cos\rho)^{-2}\left[-d\rho^2+d\chi^2+\sin^2\chi d\Omega_2^2 \right]\ . \end{equation} Fixing the spherical coordinates, the $(\rho,\chi)$ plane corresponds to a finite square of side $\pi$, which is the Penrose diagram for this space; see Fig.~\ref{fig:1}. We will next consider four different choices for the one-form $X$, which are naturally associated to four different coordinate systems: the three FRW forms (\ref{deSflat},\ref{deSpos},\ref{deSneg}) and static coordinates. Throughout this discussion it is important to keep in mind that the analytic continuation of the metric only depends on $X$ and not on the coordinate system: It is just easier to describe if we choose a suitable coordinate system. We will make this point clear by also giving the form of $X$ in the global coordinates $\rho,\chi$ of the Penrose diagram. \begin{SCfigure} \includegraphics[scale=1]{desitter.pdf} \caption{\small Penrose diagram of de Sitter space. The coordinates are $\rho$ (timelike, vertical) and $\chi$ (spacelike, horizontal).
Every point in the interior of the square corresponds to a 2-sphere, while the left and right edges correspond to the poles $\chi=0,\pi$.} \label{fig:1} \end{SCfigure} \subsection{First choice of X} We start from the hyperspherical coordinates (\ref{hyperspherical}), which cover the whole de Sitter space. They are related to the FLRW coordinates of (\ref{deSpos}) by $r=\sin\chi$. The surfaces of constant $\tau$ define an ADM foliation with $\Sigma=S^3$. Let us choose the one-form $X=d\tau$. The corresponding vectorfield $X^\mu$ has components $(1,0,\ldots,0)$ in this coordinate system. The analytically continued metrics are \begin{equation} ds^2(\sigma)= \sigma d\tau^2 +\frac{1}{H^2}\cosh^2(H\tau) (d\chi^2+\sin^2\chi d\Omega^2_2)\ . \label{desv1} \end{equation} This choice has the virtue that $X$, and therefore also $g(\sigma)$, are defined globally. In particular, for $\sigma=1$ we obtain a global Euclidean metric. However, the Ricci tensor has the form \begin{equation} R_{(\sigma)\mu\nu} = -\frac{3H^2}{\sigma} g_{(\sigma)\mu\nu} +2H^2\frac{(1+\sigma)}{\sigma}P_{\mu\nu} \label{riccinot} \end{equation} where $P_{\mu\nu}$ is the projector on the spacelike hypersurfaces. This means that for $\sigma>-1$ these metrics are not Einstein. A fortiori they cannot be maximally symmetric. An examination of the Killing equation shows that only the generators of the group $SO(d)$ of isometries of the constant time surfaces are Killing vectors for all $\sigma$. All the other vectorfields that are Killing for $\sigma=-1$ are not Killing for $\sigma>-1$. We can understand this by observing that, unlike the Minkowski case, a change of $\sigma$ cannot be absorbed in a rescaling of $\tau$.\\ \subsection{Second choice of $X$} Next consider the FRW coordinates where $\Sigma=\mathbb{H}^d$ is a space of constant negative curvature. The metric has the form (\ref{deSneg}), but we replace the coordinate $r$ by $\chi$, defined by $r=\sinh\chi$. We choose $X=d\bar\tau$ in these coordinates. 
Then $$ ds^2(\sigma)= \sigma d\bar{\tau}^2+\frac{1}{H^2}\sinh^2 H \bar{\tau}\; (d\chi^2+\sinh^2\chi d\Omega_2^2)\ . $$ As in the positively curved case, the Ricci tensor is given by (\ref{riccinot}), so these metrics are not Einstein for $\sigma>-1$. Since $\bar{\tau} = H^{-1} \mbox{arcosh}\,\left(H z_4 \right) = H^{-1}\mbox{arcosh}\,\left(\cos \chi \cosh H\tau \right)$, the vectorfield $X^\#$ (with components $X^\mu$) reads in global coordinates: \begin{equation} X^\#=\partial_{\bar{\tau}} = \frac{H \cos^2 \rho }{ \sqrt{ \cos^2 \chi - \cos^2 \rho } } \left( \tan \rho \cos \chi \, \partial_{\rho} + \sin \chi \; \partial_{\chi} \right) \label{second} \end{equation} This vector is defined only in a region of the de Sitter space which satisfies: \begin{equation} \cos^2 \chi > \cos^2 \rho \iff z_4 ^2 > H^{-2} \end{equation} Wherever it is well-defined, its norm is equal to $-1$. This vectorfield becomes singular on the hypersurface $z_4^2=H^{-2}$, which is equivalent to $\bar{\tau}=0$. The singularity corresponds to the diagonals in the Penrose diagram. The vectorfield is imaginary in the quadrants III and IV. \begin{SCfigure} \includegraphics[scale=0.3]{X2.pdf} \caption{\small The second choice for the vectorfield $X^\mu$. It is imaginary in the central diamond.} \label{fig:2} \end{SCfigure} \subsection{Third choice of $X$} Now we come to the FRW coordinates with flat spatial sections, where the metric has the form (\ref{deSflat}). Once again we choose $X_\mu=(1,0,\ldots,0)$. The analytically continued metric is \begin{equation} ds^2(\sigma)= \sigma dt^2 + \frac{1}{H^2}e^{2Ht}\sum_{i=1}^{3}dx_i^2 \end{equation} and the corresponding Riemann tensor is (in any dimension): \begin{equation} R_{(\sigma)\mu \nu \rho \sigma} = -\frac{H^2}{\sigma} \left[g_{(\sigma)\mu\rho}\,g_{(\sigma)\nu\sigma} - g_{(\sigma)\mu\sigma}\,g_{(\sigma)\nu\rho}\right]\ . \end{equation} Thus the metric is maximally symmetric for all $\sigma$.
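The maximal-symmetry claim can be checked mechanically. The following sketch (an illustration assuming sympy is available, with $H$ set to $1$ and the continuation parameter kept symbolic as \texttt{s}) computes the fully covariant Riemann tensor of the continued flat-slicing metric and compares it, component by component, with the constant-curvature form above:

```python
import sympy as sp

# Continued flat-slicing de Sitter metric (third choice of X), with H = 1
# and the continuation parameter sigma kept symbolic as s.
t, x, y, z, s = sp.symbols('t x y z s')
coords = [t, x, y, z]
g = sp.diag(s, sp.exp(2*t), sp.exp(2*t), sp.exp(2*t))
ginv = g.inv()
n = 4

# Christoffel symbols Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                           + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d]))/2
               for d in range(n))
           for c in range(n)]
          for b in range(n)]
         for a in range(n)]

def riemann_down(a, b, c, d):
    """Fully covariant Riemann tensor R_{abcd}."""
    def up(u):
        return (sp.diff(Gamma[u][b][d], coords[c])
                - sp.diff(Gamma[u][b][c], coords[d])
                + sum(Gamma[u][c][v]*Gamma[v][b][d]
                      - Gamma[u][d][v]*Gamma[v][b][c] for v in range(n)))
    return sp.simplify(sum(g[a, u]*up(u) for u in range(n)))

# Check R_{abcd} = -(1/s)(g_ac g_bd - g_ad g_bc) on the independent
# components, i.e. maximal symmetry for every value of sigma.
ok = all(sp.simplify(riemann_down(a, b, c, d)
                     + (g[a, c]*g[b, d] - g[a, d]*g[b, c])/s) == 0
         for a in range(n) for b in range(a + 1, n)
         for c in range(n) for d in range(c + 1, n))
print(ok)
```

The check is insensitive to the value of $\sigma$, in agreement with the statement in the text.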
Indeed, the following vectors are Killing: \begin{equation} \begin{split} & P_i=\partial_i\ , \hspace{1 cm} M_{ij}=x_i\partial_j-x_j \partial_i \ , \hspace{1 cm} B=-\frac{1}{H}\partial_t + \sum_{k=1}^{3}x_k\partial_k\\ & K_i=-\frac{1}{H}x_i\partial_t +\frac{1}{2} \left[-\sigma\, e^{-2Ht} -\sum_{k=1}^{3}x_k^2\right] \partial_i + x_i \sum_{k = 1}^{3} x_k \partial_k \end{split} \label{killplalam} \end{equation} and satisfy {\it the same algebra} for all $\sigma$: \begin{equation*} \begin{split} & \left[ P_{i} , B \right] = P_i \hspace{5 cm} \left[ P_{i} , K_j \right] = \delta_{i j} B - M_{i j} \\ & \left[ M_{i j} , B \right] = 0 \hspace{5 cm} \left[ M_{i j} , K_p \right] = \delta_{j p} K_i - \delta_{i p} K_j\\ & \left[ B , K_i \right] = K_i \hspace{5 cm} \left[ K_i , K_j \right] = 0 \end{split} \end{equation*} \begin{figure} \begin{center} \includegraphics[scale=0.3]{X3.pdf}\qquad \includegraphics[scale=0.3]{X4.pdf} \caption{\small Left: The third choice for the vectorfield $X^\mu$. It is singular on the diagonal (the horizon of the observer at the north pole). Right: The fourth choice for the vectorfield $X^\mu$. It is imaginary in regions I and III.} \label{fig:3} \end{center} \end{figure} Since $t = (1/H)\ln\left[ H(z_0 - z_4) \right] = (1/H) \ln\left[ \sinh H\tau - \cos \chi \cosh H\tau \right] $, the vectorfield $X^\#$ can be expressed as follows in global coordinates: \begin{equation} X^\# = \partial_t = H \cos \rho \; \frac{1 -\cos \chi \sin \rho }{ \sin \rho - \cos \chi } \left[ \partial_\rho + \frac{\sin \chi \cos \rho }{ \cos \chi \sin \rho - 1 } \; \partial_{\chi} \right] \label{third} \end{equation} It becomes singular for $\sin \rho = \cos \chi $, which is equivalent to $z_0=z_4$: this is true for $t\to-\infty$ and it means that the vectorfield $X$ is not well-defined on the boundary of the region I$\cup$IV, in the Penrose diagram in Fig.\ref{fig:1}. Thus the domain of definition of the analytically continued metric is one half of de Sitter space.
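The statement that the vectorfields (\ref{killplalam}) are Killing for every $\sigma$ can also be verified by direct computation of the Lie derivative of the metric. A minimal sketch (assuming sympy is available; $H$ is set to $1$, and only the dilatation $B$ and one representative $K_1$ are checked):

```python
import sympy as sp

# Continued flat-slicing metric (H = 1), with sigma symbolic as s.
t, x1, x2, x3, s = sp.symbols('t x1 x2 x3 s')
coords = [t, x1, x2, x3]
g = sp.diag(s, sp.exp(2*t), sp.exp(2*t), sp.exp(2*t))
n = 4

def lie_g(V):
    """Lie derivative (L_V g)_{mn} for a contravariant vectorfield V."""
    L = sp.zeros(n, n)
    for m in range(n):
        for k in range(n):
            L[m, k] = sp.simplify(
                sum(V[a]*sp.diff(g[m, k], coords[a]) for a in range(n))
                + sum(g[a, k]*sp.diff(V[a], coords[m]) for a in range(n))
                + sum(g[m, a]*sp.diff(V[a], coords[k]) for a in range(n)))
    return L

r2 = x1**2 + x2**2 + x3**2
# The dilatation B and the vector K_1 of (killplalam), written with H = 1.
B = [-1, x1, x2, x3]
K1 = [-x1,
      sp.Rational(1, 2)*(-s*sp.exp(-2*t) - r2) + x1**2,
      x1*x2,
      x1*x3]

# Both satisfy the Killing equation for arbitrary sigma.
print(lie_g(B) == sp.zeros(n, n), lie_g(K1) == sp.zeros(n, n))
```

Note that the $\sigma$-dependent term in $K_1$ is exactly what is needed to cancel the mixed $t$-$x_1$ component of the Lie derivative, which is why the algebra is preserved along the whole family of metrics.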
\subsection{Fourth choice of $X$} The static coordinates on de Sitter space are defined by: \begin{eqnarray} z_0&=&H^{-1} \cos\zeta\sinh t\nonumber\\ z_i&=&H^{-1} \sin\zeta\;\omega_i \ \ \ \ \ \mbox{with}\; i=1,2,3\ \ \mbox{and}\ \ \sum_i\omega_i^2=1 \nonumber\\ z_4&=&H^{-1} \cos\zeta\cosh t \end{eqnarray} where $t \in (-\infty,+\infty)$ and $\zeta\in(-\pi/2,\pi/2)$. Choosing $X=H^{-1}\cos\zeta\, dt$ we get the following family of metrics: \begin{equation} ds^2(\sigma)=H^{-2} \left[\sigma \cos^2 \zeta \;dt^2+ d\zeta^2+\sin^2\zeta\;d\Omega^2_{2} \right]\ . \end{equation} The coordinate $\zeta$ is related to the coordinates of the Penrose diagram by: $\sin\zeta = \sin \chi/\cos\rho$. Then, the vector $X^\#$ can be expressed in global coordinates: \begin{equation} X^\# = H\partial_t =\frac{H \cos^2 \rho}{ \sqrt{\cos^2 \chi -\sin^2 \rho} } \left[ \cos \chi \; \partial_{\rho} - \sin \chi \tan \rho \; \partial_{\chi} \right] \label{fourth} \end{equation} Its norm is equal to $-1$, but this vector is defined only in a region of the de Sitter space which satisfies: \begin{equation} \cos^2 \chi > \sin^2 \rho \iff z_0 ^2 < z_4 ^2 \end{equation} This is in contrast to Anti-de Sitter space, where the static coordinates cover the whole manifold. The Riemann tensor is: \begin{equation} R_{(\sigma)\mu \nu \rho \sigma} = H^2 \left[g_{(\sigma)\mu\rho}\,g_{(\sigma)\nu\sigma} - g_{(\sigma)\mu\sigma}\,g_{(\sigma)\nu\rho}\right] \end{equation} so, in this case too, the metric is maximally symmetric for all $\sigma$. One can deform the Killing vectors of the de Sitter group with the parameter $\sigma$ in such a way that their algebra remains unchanged for $-1\leq\sigma<0$. However, for $\sigma>0$ they satisfy the algebra of $SO(5)$. \subsection{General result} From the preceding examples one may suspect that there exists no globally defined normalized timelike one-form $X_\mu$ such that the analytically continued metrics are maximally symmetric. Let us formulate the problem precisely.
Suppose that the Lorentzian metric $g_{\mu\nu}$ is maximally symmetric. For convenience, let $\lambda=\sigma+1$ be infinitesimal. The original Lorentzian metric corresponds to $\lambda=0$. For an infinitesimal $\lambda$, $\delta g_{\mu\nu}=g_{(\lambda)\mu\nu}-g_{\mu\nu} =\lambda X_\mu X_\nu$. If $g_{(\lambda)}$ and $g$ are both maximally symmetric, then there exists an infinitesimal conformal isometry, \begin{equation} \delta g_{\mu\nu}=\lambda\left(\nabla_\mu W_\nu+\nabla_\nu W_\mu - c g_{\mu\nu}\right)\ , \label{agata} \end{equation} for some vectorfield $W$ and constant $c$. Conversely, it is shown in Appendix A that if $g$ is maximally symmetric and (\ref{agata}) holds, then $g_{(\lambda)}$ is also maximally symmetric. The constant $c$ is related to the constant $f(\lambda)$ of (\ref{alessio}) by $f(\lambda)=1-c\lambda+O(\lambda^2)$. Therefore, a local necessary and sufficient condition for the metric $g_{(\lambda)}$ to be maximally symmetric is that there exist a vectorfield $W$ and a constant $c$ such that \begin{equation} \nabla_\mu W_\nu+\nabla_\nu W_\mu -c g_{\mu\nu}=X_\mu X_\nu\ . \label{ennio} \end{equation} Let us write (\ref{ennio}) explicitly in the coordinate system of the Penrose diagram.
\begin{eqnarray} 2\partial_\rho W_\rho-2\tan\rho W_\rho+c\sec^2\rho&=&X_\rho^2 \label{con00}\\ \partial_\rho W_\chi+\partial_\chi W_\rho -2\tan\rho W_\chi&=&X_\rho X_\chi \label{con01}\\ \partial_\rho W_\theta+\partial_\theta W_\rho -2\tan\rho W_\theta&=&X_\rho X_\theta \label{con02}\\ \partial_\rho W_\phi+\partial_\phi W_\rho -2\tan\rho W_\phi&=&X_\rho X_\phi \label{con03}\\ 2\partial_\chi W_\chi-2\tan\rho W_\rho -c\sec^2\rho&=&X_\chi^2 \label{con11}\\ \partial_\chi W_\theta+\partial_\theta W_\chi -2\cot\chi W_\theta&=&X_\chi X_\theta \label{con12}\\ \partial_\chi W_\phi+\partial_\phi W_\chi -2\cot\chi W_\phi&=&X_\chi X_\phi \label{con13}\\ 2\partial_\theta W_\theta-(2\tan\rho W_\rho -2\cot\chi W_\chi +c\sec^2\rho)\sin^2\chi &=&X_\theta^2 \label{con22}\\ \partial_\theta W_\phi+\partial_\phi W_\theta -2\cot\theta W_\phi&=&X_\theta X_\phi \label{con23}\\ 2\partial_\phi W_\phi +2\sin\theta\cos\theta W_\theta -(2\tan\rho W_\rho -2\cot\chi W_\chi +c\sec^2\rho)\sin^2\chi\sin^2\theta &=&X_\phi^2\qquad\qquad \label{con33} \end{eqnarray} We already have two solutions of these equations: they are given by the vectorfields (\ref{third},\ref{fourth}), together with the corresponding infinitesimal isometries and rescalings: \begin{eqnarray} W_{flat} &=& -\frac{1}{2} H \cos \rho \; \frac{1 -\cos \chi \sin \rho }{ \sin \rho - \cos \chi } \left[ \partial_\rho + \frac{\sin \chi \cos \rho }{ \cos \chi \sin \rho - 1 } \; \partial_{\chi} \right] \nonumber\\ X_{flat} &=& H \cos \rho \; \frac{1 -\cos \chi \sin \rho }{ \sin \rho - \cos \chi } \left[ \partial_\rho + \frac{\sin \chi \cos \rho }{ \cos \chi \sin \rho - 1 } \; \partial_{\chi} \right] \label{xflat} \\ c&=&-1 \nonumber \end{eqnarray} and \begin{eqnarray} W_{sta} &=& -\frac{1}{2} \arctanh\left(\frac{\sin\rho}{\cos\chi}\right) \frac{H \cos^2 \rho}{ \sqrt{\cos^2 \chi -\sin^2 \rho} } \left[ \cos \chi \; \partial_{\rho} - \sin \chi \tan \rho \; \partial_{\chi} \right] \nonumber\\ X_{sta} &=& \frac{H \cos^2 \rho}{ \sqrt{\cos^2 \chi 
-\sin^2 \rho} } \left[ \cos \chi \; \partial_{\rho} - \sin \chi \tan \rho \; \partial_{\chi} \right] \label{xstat} \\ c&=&0 \nonumber \end{eqnarray} As we have discussed earlier, these solutions are singular and we would like to prove in general that the equations cannot have regular solutions. We have not been able to do so in full generality. However, we can make definite statements when we linearize the equations around one of the two solutions given above. Denote by $\overline W_\mu$, $\overline X_\mu$ a solution of the full equations and write $$ W_\mu=\overline W_\mu+\delta W_\mu\ ;\qquad X_\mu=\overline X_\mu+\delta X_\mu\ . $$ We show in Appendix B that when $\overline X_\mu$ is the vectorfield (\ref{xflat}) (the one related to the flat FLRW slicing), the linearized equations have no real solutions. Thus the third vectorfield is an isolated solution of the system (\ref{con00}-\ref{con33}). On the other hand, when $\overline X_\mu$ is the vectorfield (\ref{xstat}) (the one related to static coordinates), there is a family of solutions of the linearized equations, but the perturbed solutions are all singular. Thus, at the linearized level, we could indeed prove that there are no globally regular solutions of (\ref{con00}-\ref{con33}). \section{Schwarzschild spacetime} As an example of a non-maximally symmetric spacetime we consider here Schwarzschild spacetime. It has four Killing vectors generating the isometry group $SO(3)\times T$, where $T$ denotes time translations. \subsection{First choice of $X$} We use Schwarzschild coordinates. Choosing $X=\sqrt{1-\frac{2M}{r}}dt$, the analytically continued metric is \begin{equation} ds^2 = \sigma \left(1 - \frac{2M}{r} \right)dt^2 + \left(1 - \frac{2M}{r} \right)^{-1}dr^2 + r^2 d\Omega_{2}^2\ . \label{eucs} \end{equation} This metric is Ricci-flat for all $\sigma$ and all the Killing vectors of the Schwarzschild metric are Killing vectors for all $\sigma$. However, this analytic continuation is not globally defined.
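The Ricci-flatness of (\ref{eucs}) for symbolic $\sigma$ is easy to check with a computer algebra system. A sketch assuming sympy is available:

```python
import sympy as sp

# Continued Schwarzschild metric (eucs), with sigma (s) and M symbolic.
t, r, th, ph, s, M = sp.symbols('t r theta phi s M')
coords = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(s*f, 1/f, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                           + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d]))/2
               for d in range(n))
           for c in range(n)]
          for b in range(n)]
         for a in range(n)]

def ricci(b, c):
    """Ricci tensor R_{bc} from the Christoffel symbols."""
    return sp.simplify(sum(sp.diff(Gamma[a][b][c], coords[a])
                           - sp.diff(Gamma[a][b][a], coords[c])
                           + sum(Gamma[a][a][d]*Gamma[d][b][c]
                                 - Gamma[a][c][d]*Gamma[d][b][a]
                                 for d in range(n))
                           for a in range(n)))

Ric = sp.Matrix(n, n, lambda b, c: ricci(b, c))
print(Ric == sp.zeros(n, n))  # vacuum Einstein equations hold for every sigma
```

The parameter $\sigma$ drops out of the vacuum equations because, away from the horizon, it can be absorbed in a constant rescaling of $t$; the obstruction discussed in the text is global, not local.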
\subsection{Second choice of $X$} Alternatively, let us try to perform the analytic continuation at the level of Kruskal coordinates: \begin{equation} ds^2(\sigma) = \frac{16M^2}{X^2 -T^2 } \frac{W(z)}{W(z)+1} \left[ \sigma dT^2 + dX^2 \right] + 4M^2 \left(W(z)+1 \right)^2 d\Omega_{2}^2 \label{genekru} \end{equation} where $z \equiv\frac{X^2-T^2}{e}$ and $W$ is the Lambert function. The Ricci tensor has a complicated, non-vanishing expression, with a prefactor $1+\sigma$. Thus, the analytically continued metric does not satisfy Einstein's equations in vacuum, even for an infinitesimal deformation. Concerning the symmetries, we find that the generators of $SO(3)$ are preserved, but the timelike vector $$ K_t=4 M \partial_t=X \partial_T+T\partial_X $$ is a Killing vector only for $\sigma=-1$. This should be compared to the standard analytic continuation of the Schwarzschild metric, based on the replacement $T\to -iT$ in the Lorentzian Kruskal metric \cite{Hawking:1976jb}. The difference is that whereas with the present definition one only changes the sign of the $dT^2$ term, keeping all the rest unchanged, in the Cambridge definition one also replaces $X^2-T^2$ by $X^2+T^2$ in $z$. The resulting Euclidean metric is still a solution of the vacuum Einstein equations and still has all the Killing vectors. However, it is only defined for $r>2M$. In fact, one can transform it back to Schwarzschild coordinates and then it coincides with (\ref{eucs}) for $\sigma=1$. Thus, the Cambridge definition of the Euclidean Schwarzschild metric is equivalent to the continuation based on our first choice of $X$. \section{Discussion} The Wick rotation is a problematic notion when gravity is involved, or more generally when spacetime is curved. When interpreted as a continuation of some time coordinate, and for a fixed background metric, it has ambiguities that are hard to settle. Things are worse when gravity is dynamical.
The Euclidean quantum gravity programme simply assumed that the functional integral should be performed on all Euclidean metrics. In practice, this has led to many useful and deep insights, but it faces the issue of the classification of all topologies, which is unsolvable in four dimensions. The alternative notion of continuing the metric seems to be better. In particular, it has the attractive feature that the Wick-rotated metrics are defined on the same manifold. If a sum over topologies is needed, it is restricted to manifolds admitting a nowhere vanishing vectorfield, which is a much tamer set. It has been seen from numerical simulations with CDTs that the restriction to triangulations admitting a Lorentzian metric has a very beneficial effect on the path integral. We have seen here that in certain important cases, the requirement of keeping the spacetime manifold fixed during the Wick rotation clashes with other desirable properties, such as sending local solutions of Einstein's equations to other local solutions of the same equations (possibly up to a change of sign of the cosmological constant) and/or preserving the number of Killing vectors. With the definition of Wick rotation given in Section 1.3, and for de Sitter and Schwarzschild spacetimes, we have seen that when the vectorfield $X^\mu$ is such that the Euclidean metric solves the field equations locally, the solution does not extend to the whole manifold. This is somewhat analogous to the behavior of other fields under Wick rotation, as we have already observed in section 1.5. At least for the cases that we have discussed, this behavior is clearly related to the presence of horizons. It is a familiar fact in Euclidean quantum gravity, when the Wick rotation is interpreted as a complexification of the coordinates, that the region beyond the horizon disappears in the Euclidean section \cite{Gibbons:1976ue}. We see that the same is true also with the alternative definitions of Wick rotation discussed here.
\bigskip {\bf Acknowledgments.} We would like to thank M. Visser and C. Wetterich for useful discussions. \goodbreak \begin{appendix} \section{Wick rotation, Einstein equations and Killing vectors} One can ask under what conditions an analytically continued metric of the form (\ref{ancont}) maintains a constant number of Killing vectors as the parameter $\sigma$ varies continuously. We address this question for infinitesimal deformations of the metric. It is then more convenient to use the parameter $\lambda=\sigma+1$, so that the initial Lorentzian metric corresponds to $\lambda=0$. Obviously, a sufficient condition is that the deformed metric is related to the original metric by an isometry. We prove here a slightly more general result, which covers the examples of section 3. \medskip \noindent {\bf Proposition 1.} Suppose that there exists a vectorfield $W$ and a constant $f(\lambda)$, depending on $\lambda$, such that \begin{equation} g_\lambda=f(\lambda)g+\lambda{\cal L}_W g\ . \label{alessio} \end{equation} If the metric $g$ is Einstein, with $$ Ric(g)=\Lambda g $$ ($Ric(g)$ denoting the Ricci tensor of $g$) then $g_\lambda$ is Einstein with $$ Ric(g_\lambda)=\frac{\Lambda}{f(\lambda)} g_\lambda\ . $$ \smallskip \noindent {\it Proof.} This follows immediately from the fact that $g$ and $g_\lambda$ are isometric up to a constant rescaling. \medskip \noindent {\bf Proposition 2.} Suppose that there exists a vectorfield $W$ and a constant $f(\lambda)=1+O(\lambda)$ such that (\ref{alessio}) holds. Then if $K_i$ are Killing vectors for $g$ satisfying the algebra \begin{equation} [K_i,K_j]=f_{ij}{}^k K_k\ , \label{algebra} \end{equation} to first order in $\lambda$ the vectorfields \begin{equation} K_i(\lambda) = K_i+\lambda[W,K_i] \label{killamb} \end{equation} are Killing vectors for $g_\lambda$ and obey the same algebra (\ref{algebra}).
\smallskip \noindent {\it Proof.} For each of the vectorfields $K_i$, suppressing the index $i$, $$ {\cal L}_K g_\lambda=(\lambda/f){\cal L}_K{\cal L}_W g =(\lambda/f)[{\cal L}_K,{\cal L}_W]g =\lambda[{\cal L}_K,{\cal L}_W]g_\lambda+O(\lambda^2)\ . $$ To first order in $\lambda$ we therefore have $$ {\cal L}_{K+\lambda[W,K]}g_\lambda=0\ , $$ showing that $K(\lambda)$ is a Killing vector of $g_\lambda$. To first order in $\lambda$, using the Jacobi identity one gets $$ [K_i(\lambda),K_j(\lambda)]=f_{ij}{}^k K_k +\lambda\left([K_i,[W,K_j]]+[K_j,[K_i,W]]\right) =f_{ij}{}^k K_k(\lambda)\ , $$ so the algebra is unchanged, QED. \section{Solutions of linearized equations} Here we prove the statements made at the end of section 3.5 on the solutions of the linearized equations. Exploiting the fact that $\overline X_\theta=\overline X_\phi=0$, the linearized equations read \begin{eqnarray} 2\partial_\rho \delta W_\rho-2\tan\rho \delta W_\rho+\delta c\sec^2\rho&=&2 \overline X_\rho \delta X_\rho \label{clin00}\\ \partial_\rho \delta W_\chi+\partial_\chi \delta W_\rho -2\tan\rho \delta W_\chi&=&\overline X_\rho \delta X_\chi + \overline X_\chi \delta X_\rho \label{clin01}\\ \partial_\rho \delta W_\theta+\partial_\theta \delta W_\rho -2\tan\rho \delta W_\theta&=&\overline X_\rho \delta X_\theta \label{clin02}\\ \partial_\rho \delta W_\phi+\partial_\phi \delta W_\rho -2\tan\rho \delta W_\phi&=&\overline X_\rho \delta X_\phi \label{clin03}\\ 2\partial_\chi \delta W_\chi-2\tan\rho \delta W_\rho -\delta c\sec^2\rho&=&2 \overline X_\chi \delta X_\chi \label{clin11}\\ \partial_\chi \delta W_\theta+\partial_\theta \delta W_\chi -2\cot\chi \delta W_\theta&=&\overline X_\chi \delta X_\theta \label{clin12}\\ \partial_\chi \delta W_\phi+\partial_\phi \delta W_\chi -2\cot\chi \delta W_\phi&=&\overline X_\chi \delta X_\phi \label{clin13}\\ 2\partial_\theta \delta W_\theta-(2\tan\rho \delta W_\rho -2\cot\chi \delta W_\chi +\delta c\sec^2\rho)\sin^2\chi &=&0 \label{clin22}\\ \partial_\theta
\delta W_\phi+\partial_\phi \delta W_\theta -2\cot\theta \delta W_\phi&=&0 \label{clin23}\\ 2\partial_\phi \delta W_\phi +2\sin\theta\cos\theta \delta W_\theta \qquad\qquad\qquad\qquad \qquad\qquad && \nonumber\\ -(2\tan\rho \delta W_\rho -2\cot\chi \delta W_\chi +\delta c\sec^2\rho)\sin^2\chi\sin^2\theta &=&0\qquad\qquad \label{clin33} \end{eqnarray} This is a linear system of equations for $\delta W_\mu$ and $\delta X_\mu$. It is to be supplemented by the condition, \begin{equation} \bar g^{\mu\nu}\overline X_\mu\delta X_\nu=0\ , \end{equation} which follows from the normalization of $X_\mu$. One can solve algebraically the four equations (\ref{clin00},\ref{clin11},\ref{clin02},\ref{clin03}) to express all the $\delta X_\mu$ as linear functions of $\overline X_\mu$, the coordinates, $\delta W_\mu$ and their derivatives. These solutions can be substituted in the remaining six equations obtaining a linear system for the $\delta W_\mu$ alone. Using the normalization $\overline X_\chi^2-\overline X_\rho^2=1$, the resulting equations read: \begin{eqnarray} \left[ \partial_\chi + \frac{\overline X_\rho}{ \overline X_\chi} \tan \rho - \frac{\overline X_\chi}{\overline X_\rho} \left( \partial_\rho- \tan \rho \right) \right] \delta W_\rho +\left[ \partial_\rho -2\tan \rho - \frac{\overline X_\rho}{ \overline X_\chi} \partial_\chi\right]\delta W_\chi &=& -\frac{\delta c }{2 \overline X_\rho \overline X_\chi} \label{lin1}\\ \partial_\theta \delta W_\rho-\frac{\overline X_\rho}{\overline X_\chi} \partial_\theta \delta W_\chi+\left[ \partial_\rho -2\tan\rho -\frac{\overline X_\rho}{\overline X_\chi} \left( \partial_\chi -2\cot\chi \right) \right] \delta W_\theta &=&0 \label{lin2}\\ \partial_\phi \delta W_\rho-\frac{\overline X_\rho}{\overline X_\chi} \partial_\phi \delta W_\chi+\left[ \partial_\rho -2\tan\rho -\frac{\overline X_\rho}{\overline X_\chi} \left( \partial_\chi -2\cot\chi \right) \right] \delta W_\phi &=&0 \label{lin3}\\ 2\tan\rho \delta W_\rho -2\cot\chi \delta 
W_\chi -2\sin^{-2}\chi\partial_\theta \delta W_\theta &=& -\delta c \sec^2\rho \label{lin4}\\ \partial_\phi \delta W_\theta + \left(\partial_\theta -2\cot\theta \right) \delta W_\phi&=&0 \label{lin5}\\ \sin\theta\left( \cos\theta -\sin \theta \partial_\theta \right) \delta W_\theta+ \partial_\phi \delta W_\phi &=&0\qquad\qquad \label{lin6} \end{eqnarray} The natural way of solving these equations is by separation of variables: $$ \delta W_\mu= A_\mu(\rho,\chi)G_\mu (\theta,\phi)\ . $$ We proceed under this assumption, considering first the linearization around (\ref{xstat}) and then the linearization around (\ref{xflat}). \subsection{Static case} \textit{First possibility: $\delta c \ne 0 $}\\ \smallskip From equation (\ref{lin1}), since the r.h.s. is a function of $\rho$ and $\chi$ only, $G_\rho$ and $G_\chi$ must be constants which, up to rescalings of $A_\rho$ and $A_\chi$ we can assume equal to one. From (\ref{lin4}), $G_\theta$ must be a linear function of $\theta$, which would be strange, considering that everything is expressed as trigonometric functions. Indeed, using this property in equations (\ref{lin5},\ref{lin6}) we conclude that $G_\phi$ should be a linear function of $\phi$. This is inconsistent with periodicity, leading to $G_\phi=0$. Then, the same equations imply that also $G_\theta=0$. So equations (\ref{lin1},\ref{lin4}) simplify: \begin{eqnarray} \left[ \partial_\chi + \frac{\overline X_\rho}{ \overline X_\chi} \tan \rho - \frac{\overline X_\chi}{\overline X_\rho} \left( \partial_\rho- \tan \rho \right) \right] A_\rho +\left[ \partial_\rho -2\tan \rho - \frac{\overline X_\rho}{ \overline X_\chi} \partial_\chi\right] A_\chi &=& -\frac{\delta c }{2 \overline X_\rho \overline X_\chi} \label{deltac1}\\ \tan\rho A_\rho-\cot\chi A_\chi&=& -\frac{\delta c}{2} \sec^2\rho \label{deltac2} \end{eqnarray} Inserting the algebraic constraint (\ref{deltac2}) into equation (\ref{deltac1}) we obtain $\delta c =0$ and so we get no solution. 
\\ \bigskip \textit{Second possibility: $\delta c = 0 $ and $\partial_\theta \delta W_\theta = 0$ } \smallskip Equations (\ref{lin5},\ref{lin6}) can be solved in one of three ways: either $A_\phi=0$ and $A_\theta=$const, or $A_\phi=$const and $A_\theta=0$, or $A_\phi=A_\theta$. In the first case, from (\ref{lin6}) we have $G_\theta=0$ and therefore $\delta W_\theta=\delta W_\phi=0$. In the second case equations (\ref{lin5},\ref{lin6}) have the solution $\delta W_\theta=0$ and $G_\phi=a \sin^2\theta$, where $a$ is a constant. In the third case (\ref{lin5},\ref{lin6}) become \begin{eqnarray} \partial_\phi G_\theta + \left(\partial_\theta -2\cot\theta \right) G_\phi&=&0\\ \sin\theta \cos\theta G_\theta+ \partial_\phi G_\phi &=&0 \end{eqnarray} Since we are looking for $G$'s periodic in $\phi$ we can write: $$ G_\theta = \sum_m c^m_\theta e^{im\phi} \;\qquad G_\phi = \sum_m c^m_\phi (\theta) e^{im\phi} $$ And so: \begin{eqnarray} \partial_\theta c^m_\phi -2\cot\theta c^m_\phi&=&-im c^m_\theta \\ -\sin \theta \cos\theta c^m_\theta &=&im c^m_\phi \end{eqnarray} If $m\ne 0$ and $m\ne 1$, $c^m_\theta = c^m_\phi =0$.\\ If $m=1$, $c^1_\theta= a_1$ and $c^1_\phi= i a_1 \sin \theta \cos \theta$, and the reality condition forces $a_1 =0$.\\ If $m=0$, $c^0_\theta=0$ and $c^0_\phi= a_2 \sin^2 \theta$.\\ So in all three cases equations (\ref{lin5},\ref{lin6}) imply that $\delta W_\theta =0$ and $\delta W_\phi=A_\phi \sin^2 \theta$. Now let us come to equation (\ref{lin1}). It can be solved in one of three ways: either $G_\rho=0$ and $G_\chi=$const, or $G_\rho=$const and $G_\chi=0$, or $G_\rho=G_\chi$. In the first two cases equation (\ref{lin4}) gives $\delta W_\rho=\delta W_\chi=0$. Therefore in the following we consider only the third case. Using $G_\rho=G_\chi$ and $\delta W_\theta=0$, as derived above, equation (\ref{lin2}) is seen to be the derivative with respect to $\theta$ of (\ref{lin4}).
On the other hand, doing the same in equation (\ref{lin3}), the first two terms are seen to be the derivative with respect to $\phi$ of (\ref{lin4}) and therefore can be dropped. The last term gives $$ \left[ \partial_\rho -2\tan\rho -\frac{\overline X_\rho}{ \overline X_\chi} \left( \partial_\chi -2\cot\chi \right) \right]A_\phi=0 $$ where the prefactor of the second term in the square bracket is $\overline X_\rho/\overline X_\chi=\cot\rho\cot\chi$. The general solution of this equation is $$ A_\phi=\sin^2\chi\sec^2\rho\, H\left(\frac{\cos\chi}{\sin\rho}\right)\ , $$ for some function $H$ (not to be confused with the Hubble constant). Putting $G_\rho=G_\chi$ in equations (\ref{lin1},\ref{lin4}), we obtain the following equations: \begin{eqnarray} \left[ \partial_\chi + \frac{\overline X_\rho}{ \overline X_\chi} \tan \rho - \frac{\overline X_\chi}{\overline X_\rho} \left( \partial_\rho- \tan \rho \right) \right] A_\rho +\left[ \partial_\rho -2\tan \rho - \frac{\overline X_\rho}{ \overline X_\chi} \partial_\chi\right] A_\chi &=& 0 \label{secpos1}\\ \tan\rho A_\rho - \cot\chi A_\chi &=& 0 \label{secpos2} \end{eqnarray} Solving the second equation for $A_\chi$ and substituting in the first, one finds that it is identically satisfied. Thus, the solution to the whole system is: \begin{eqnarray} \delta W_\rho &=& A(\rho,\chi)G(\theta,\phi)\\ \delta W_\chi &=& \tan \rho \tan \chi A(\rho,\chi)G(\theta,\phi)\\ \delta W_\theta &=& 0 \\ \delta W_\phi &=& \sin^2\theta \sin^2\chi\sec^2\rho\, H\left(\frac{\cos\chi}{\sin\rho}\right) \end{eqnarray} where $A(\rho,\chi)$ and $G(\theta,\phi)$ are arbitrary functions.
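The quoted general solution for $A_\phi$ can be verified symbolically. A sketch assuming sympy is available; the arbitrary function (called $H$ in the text) is written \texttt{F} in the code to avoid a clash with the Hubble constant:

```python
import sympy as sp

rho, chi = sp.symbols('rho chi')
F = sp.Function('F')  # the arbitrary function of the general solution

# Transport equation for A_phi on the static background (xstat), where
# Xbar_rho/Xbar_chi = cot(rho)cot(chi):
#   [d_rho - 2 tan(rho) - cot(rho)cot(chi)(d_chi - 2 cot(chi))] A_phi = 0
A = sp.sin(chi)**2 * sp.sec(rho)**2 * F(sp.cos(chi)/sp.sin(rho))
residual = (sp.diff(A, rho) - 2*sp.tan(rho)*A
            - sp.cot(rho)*sp.cot(chi)*(sp.diff(A, chi) - 2*sp.cot(chi)*A))
print(sp.simplify(residual))
```

Both the terms proportional to $F$ and those proportional to $F'$ cancel separately, so the residual vanishes identically for any choice of the arbitrary function.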
Now from equations (\ref{clin00},\ref{clin11}) we can determine: \begin{eqnarray} \delta X_\rho &=& G(\theta,\phi)\frac{ \sqrt{\cos^2 \chi -\sin^2 \rho} }{H \cos \chi}\left[\partial_\rho A(\rho,\chi)-\tan\rho A(\rho,\chi) \right]\\ \delta X_\chi &=& G(\theta,\phi)\frac{ \sqrt{\cos^2 \chi -\sin^2 \rho} }{H \sin \chi} \left[ \partial_\chi \left( \tan \chi A(\rho,\chi)\right)- A(\rho,\chi) \right] \end{eqnarray} Imposing the normalization condition \begin{equation} \overline X_\rho \delta X_\rho-\overline X_\chi \delta X_\chi=0 \label{norcon} \end{equation} we obtain $$ A(\rho,\chi) = \sec \rho \cos \chi\, F[\sec \rho \sin \chi] $$ where $F(x)$ is a generic function, and therefore \begin{eqnarray} \delta X_\rho &=& G(\theta,\phi)\frac{ \sqrt{\cos^2 \chi -\sin^2 \rho} }{H} \sec^2\rho \tan \rho \sin \chi \: F'[\sec \rho \sin \chi]\\ \delta X_\chi &=& G(\theta,\phi)\frac{ \sqrt{\cos^2 \chi -\sin^2 \rho} }{H } \sec^2\rho \cos \chi \:F'[\sec \rho \sin \chi]\\ \delta X_\theta &=& \frac{A}{\overline X_\rho}\partial_\theta G(\theta,\phi) \\ \delta X_\phi &=& \frac{A}{\overline X_\rho}\partial_\phi G(\theta,\phi) \ . \end{eqnarray} So we have an infinite family of solutions. We observe that these solutions are in general regular on the horizon, so they cannot remove the singularity of the solutions $\overline X_\mu$. \bigskip \textit{Third possibility: $\delta c = 0 $ and $\partial_\theta \delta W_\theta \ne 0$ } \smallskip From equation (\ref{lin1}), by the same argument used for the second possibility, $G_\rho=G_\chi$. Then (\ref{lin4}) implies that $$ \sin^2\chi\tan\rho \frac{A_\rho}{ A_\theta} -\sin\chi\cos\chi \frac{A_\chi}{ A_\theta} = \frac{\partial_\theta G_\theta}{G_\rho} $$ where we have separated the functional dependences on $\rho$ and $\chi$ on the left and $\theta$ and $\phi$ on the right. 
Therefore $G_\rho =\partial_\theta G_\theta (\theta,\phi)$ and we are left with the following equation: $$ \tan\rho A_\rho-\cot\chi A_\chi -\sin^{-2}\chi A_\theta = 0 $$ Then, using equations (\ref{lin2},\ref{lin3}) we get: \begin{eqnarray} \left[ A_\rho -\frac{\overline X_\rho}{\overline X_\chi} A_\chi\right] \partial^2_\theta G_\theta +G_\theta\left[ \partial_\rho -2\tan\rho -\frac{\overline X_\rho}{\overline X_\chi} \left( \partial_\chi -2\cot\chi \right) \right] A_\theta &=&0\\ \left[ A_\rho-\frac{\overline X_\rho}{\overline X_\chi} A_\chi \right] \partial_\phi \partial_\theta G_\theta+G_\phi\left[ \partial_\rho -2\tan\rho -\frac{\overline X_\rho}{\overline X_\chi} \left( \partial_\chi -2\cot\chi \right) \right] A_\phi &=&0 \end{eqnarray} Separating variables as before, this leads to \begin{eqnarray} \partial^2_\theta G_\theta &=& a G_\theta \\ \partial_\phi \partial_\theta G_\theta &=& b G_\phi \end{eqnarray} Equations (\ref{lin5},\ref{lin6}) can be solved in one of three ways: either $A_\phi=0$ and $A_\theta=$const, or $A_\phi=$const and $A_\theta=0$, or $A_\phi=A_\theta$. In the first case from (\ref{lin6}) we have $G_\theta=0$ and therefore $\delta W_\theta=\delta W_\phi=0$. In the second case equations (\ref{lin5},\ref{lin6}) have the solution $\delta W_\theta=0$ and $G_\phi=\sin^2\theta$.
This further implies $b=0$. In the third case (\ref{lin5},\ref{lin6}) become: \begin{eqnarray} \partial_\phi G_\theta + \left(\partial_\theta -2\cot\theta \right) G_\phi&=&0\\ \sin\theta\left( \cos\theta -\sin \theta \partial_\theta \right) G_\theta+ \partial_\phi G_\phi &=&0 \end{eqnarray} From the last two equations we get: \begin{equation} \sin^2 \theta \partial^2_\theta G_\theta + \partial^2_\phi G_\theta +G_\theta - \sin\theta \cos \theta \partial_\theta G_\theta = 0 \end{equation} Proceeding as in the previous case: \begin{eqnarray} \partial_\theta c^m_\phi -2\cot\theta c^m_\phi&=&-im c^m_\theta \label{thetaphi1}\\ \sin^2 \theta \partial_\theta c^m_\theta-\sin \theta \cos\theta c^m_\theta &=&im c^m_\phi \label{thetaphi2} \\ \sin^2 \theta \partial^2_\theta c^m_\theta -m^2c^m_\theta +c^m_\theta- \sin\theta \cos \theta \partial_\theta c^m_\theta &=& 0 \label{thetaphi12}\\ \partial^2_\theta c^m_\theta &=& a c^m_\theta \label{thetaphi3}\\ im \partial_\theta c^m_\theta &=& b c^m_\phi \label{thetaphi4} \end{eqnarray} If $m\ne 0$, from the previous equations: \begin{eqnarray} \left(a \sin^2 \theta-m^2 +1\right)c^m_\theta+i \frac{b}{m}\sin\theta \cos \theta c^m_\phi&=& 0\\ \left( \frac{b}{i m}\sin^2 \theta -\sin \theta \cos\theta \right) c^m_\theta -im c^m_\phi&=&0 \end{eqnarray} The result is $c^m_\theta = c^m_\phi =0$ for all $m \ne 0$.\\ If $m=0$ and $a=-1$, we get $c^0_\theta= \sin \theta$ and $c^0_\phi=0$ and so $G_\theta= \sin \theta$ and $G_\phi=0$.\\ So we are left with the following equations: \begin{eqnarray} \left[ \partial_\chi + \frac{\overline X_\rho}{ \overline X_\chi} \tan \rho - \frac{\overline X_\chi}{\overline X_\rho} \left( \partial_\rho- \tan \rho \right) \right] A_\rho +\left[ \partial_\rho -2\tan \rho - \frac{\overline X_\rho}{ \overline X_\chi} \partial_\chi\right] A_\chi &=& 0 \label{third1}\\ A_\rho-\frac{\overline X_\rho}{\overline X_\chi} A_\chi - \left[ \partial_\rho -2\tan\rho -\frac{\overline X_\rho}{\overline X_\chi} \left( \partial_\chi
-2\cot\chi \right) \right] A_\theta &=&0 \label{third2}\\ \sin^{2}\chi \tan\rho A_\rho-\sin \chi \cos\chi A_\chi - A_\theta &=& 0 \label{third3} \end{eqnarray} Taking a linear combination of the second and the third equation we get: $$ \left[ \partial_\rho -2\tan\rho -\cot \rho \cot \chi \partial_\chi +2\cot^2\chi \cot \rho -\cot\rho \sin^{-2} \chi \right] A_\theta =0 $$ The solution of the previous equation gives: $$ \delta W_\theta = \sin \theta \sec \rho \tan \rho \sin \chi \;F\left[\frac{\cos \chi}{\sin\rho}\right] $$ where $F$ is a generic function.\\ Using the other two equations we get $F=0$: so in this case we get no solution, since we have assumed that $\delta W_\theta \ne 0$.\\ To summarize the results for the static case, the first and third possibilities do not yield any solution, while the second gives an infinite family of solutions. These solutions are generically regular at the horizon, and therefore they cannot remove the singularity of the background solution $\overline X_\mu$ there. We conclude that all the solutions of the system (\ref{con00}-\ref{con33}) in the neighborhood of (\ref{xstat}) are singular at the horizon. \subsection{Flat slicing case} \textit{First possibility: $\delta c \ne 0 $} \smallskip The same reasoning that was used for the static case leads to equations (\ref{deltac1},\ref{deltac2}). Inserting (\ref{deltac2}) in (\ref{deltac1}) we obtain a complex solution, which is not acceptable. \medskip \textit{Second possibility: $\delta c = 0 $ and $\partial_\theta \delta W_\theta = 0$ } \smallskip The same reasoning that was used for the static case leads to $\delta W_\theta=0$ and $\delta W_\phi=A_\phi\sin^2\theta$. Also the analysis of equation (\ref{lin1}) proceeds in the same way, leading to $G_\rho=G_\chi$. Equation (\ref{lin2}) implies that $\partial_\theta G_\rho=0$. Equation (\ref{lin3}) implies that $\partial_\phi G_\rho$ is a function of $\theta$, $\rho$ and $\chi$.
Thus $G_\rho$ must be linear in $\phi$; the only such function compatible with periodicity is independent of $\phi$, so $G_\rho$ is a constant that we can set to one without loss of generality. Proceeding in the same way as in the static case, from equations (\ref{lin1},\ref{lin4}) we obtain again equations (\ref{secpos1}-\ref{secpos2}). Substituting the explicit form of $\overline X_\mu$, the solution to these equations is: \begin{eqnarray} \delta W_\rho &=& \sec \rho \cos \chi \:F[\tan \rho -\sec \rho \cos \chi]\\ \delta W_\chi &=& \sec \rho \tan \rho \sin\chi \;F[\tan \rho -\sec \rho \cos \chi] \end{eqnarray} So the $\delta X$'s are: \begin{eqnarray} \delta X_\rho &=& \frac{ \sin \rho - \cos \chi } {H}\cos \chi \sec^4 \rho\;F'[\tan \rho -\sec \rho \cos \chi] \\ \delta X_\chi &=& \frac{ \sin \rho - \cos \chi }{H } \tan \rho \sec^2 \rho \sin \chi \;F'[\tan \rho -\sec \rho \cos \chi] \end{eqnarray} Imposing the normalization condition (\ref{norcon}) we get that $F$ is a constant. So the $\delta X$'s are zero: this is a consequence of the fact that $\delta W =(\sec \rho \cos \chi,\sec^2 \rho \sin \rho \sin\chi,0,0)$ is a Killing vector. \bigskip \textit{Third possibility: $\delta c = 0 $ and $\partial_\theta \delta W_\theta \ne 0$ } \smallskip The analysis of this case proceeds as in the static case down to equations (\ref{third1}-\ref{third2}-\ref{third3}).
Solving (\ref{third3}) for $A_\theta$ and inserting it into the other two, and using the explicit form of $\overline X_\mu$, we get the following equations: \begin{equation*} \begin{split} \left[ \partial_\chi - \frac{ \cos \rho \sin \chi}{1-\cos \chi \sin \rho} \partial_\rho + \left( \frac{1-\cos \chi \sin \rho}{ \cos \rho \sin \chi} + \frac{ \cos \rho \sin \chi}{1-\cos \chi \sin \rho} \right) \tan \rho \right] A_\rho + \qquad\qquad\qquad\\ +\left[ \partial_\rho - \frac{1-\cos \chi \sin \rho}{ \cos \rho \sin \chi} \partial_\chi -2\tan \rho \right] A_\chi = 0 \qquad\qquad& \\ \left[ \sin^2\chi \tan \rho\, \partial_\rho - \frac{1-\cos \chi \sin \rho}{ \cos \rho} \sin \chi \tan \rho \, \partial_\chi - \left( \cos ^2 \chi +2 \sin^2 \chi \tan^2 \rho + \tan \rho \sec \rho \cos \chi \right)\right] A_\rho+\\ \left[ -\sin \chi \cos \chi \,\partial_\rho + \frac{1-\cos \chi \sin \rho}{ \cos \rho} \cos \chi\, \partial_\chi + 2 \tan \rho \cos \chi \sin \chi \right] A_\chi=0 \qquad\qquad& \end{split} \end{equation*} If we multiply the first equation by $\sin \chi \cos \chi$ and sum the two equations, we get an equation where only $A_\rho$ appears, and that equation has the following solution: $$ A_\rho = \sec \rho \sin \chi \;F[\sin \rho \cos \chi-\tan \rho] $$ Then multiplying the first equation by $\sin \chi \tan \rho \sec \rho (1-\cos \chi \sin \rho)$ and summing the two equations, we get an equation where $A_\rho$ appears only algebraically. Substituting $A_\rho$ with the solution found above, we get the following solutions: $$ A_\chi = \frac{2\cos \chi \sin \rho -1\pm i}{2\sqrt{2}\cos^2 \rho} \;F[\sin \rho \cos \chi-\tan \rho] $$ Note that it is not necessary to impose the normalization condition in order to conclude that no solution exists: the solution obtained is complex, and therefore not acceptable. So the overall conclusion is that the solution (\ref{xflat}) is isolated. \end{appendix}
\section{Introduction} \label{sec:intro} The advent of new instrumentation for the investigation of solar phenomena---in particular, with regard to the short- and long-term variability of the Sun's magnetic activity, and its impact on the near-Earth environment---has put unprecedented demands on the conception of data management plans and infrastructures, which aim to deliver reliable, science-ready data products to the community in a timely fashion. The U.S.\ community is getting ready for the completion and first light of the 4-m Daniel K.\ Inouye Solar Telescope \cite[DKIST;][]{Tr16} in early 2020, with its suite of complex post-focus instruments, most of which will have polarimetric capabilities, in order to detect the subtle signatures of the Sun's magnetic field in its light spectrum. The 1.6-m Goode Solar Telescope \cite[GST;][]{Ca10} has already started to reveal the complexity of the solar spectrum when unprecedentedly high spatial and temporal resolutions are attained in solar observations. Other countries are pursuing similar endeavors, such as India's 2-m National Large Solar Telescope \cite[NLST;][]{Ha10}, the 4-m European Solar Telescope \cite[EST;][]{Co13}, and the 8-m annular-mirror Chinese Giant Solar Telescope \cite[CGST;][]{Li14}. Beyond the intrinsic complexity of the theoretical problem of how polarized radiation is formed and transported through the solar atmosphere \cite[e.g.,][for a description of the various mechanisms at play]{St94,LL04,CL08,TB10}, the need to detect and confidently measure very small levels of polarization (down to 0.01\% of the intensity, for the most demanding scientific applications) places very stringent requirements on the identification and removal of instrumental artifacts from the detected signals. One recurrent issue in spectro-polarimetric instruments is the appearance of polarization ``fringes'' that overlap with the actual spectral signal from the observed target.
These fringes are interference patterns produced by optical elements of the telescope and instrument that have varying phase-retardance properties (e.g., around their optical axis). Such components include polarization modulators, polarizing beam-splitters, and any optical element where parallel optical interfaces may occur (e.g., interference filters, detector windows). These fringes have the appearance of more or less regular 2-dimensional (2D) patterns, preferentially arranged along the spectral dimension of the data (see Figure~\ref{fig:example}). We refer to review studies of polarized fringes \citep{Li91,Se03,Cl04} for a thorough description of this phenomenon, and to recent work by \cite{Ha17} on the use of Berreman calculus for the modeling of fringes in polarimetric instrumentation. The most common fringe correction methods used in spectro-polarimetric data reduction are Fourier filtering and methods derived from it, such as wavelet analysis \citep{Ro06}. However, Fourier filtering only works satisfactorily when the frequency domains of the fringes and of the target spectral signal (in the spectral and/or spatial dimensions) are clearly separated, and it is severely limited when the fringe pattern's period or amplitude vary over the image. Wavelet analysis is a powerful method for removing smoothly varying fringes in flat-field images. However, it becomes difficult to control in the presence of strong spectral signals from the observed target. \begin{figure}[t!] \centering \includegraphics[width=5in]{Corr_U_Sc7_Stk2.jpg} \caption{\label{fig:example} Example of a polarized spectrum (in this case, the fractional linear polarization, Stokes $U/I$) of the solar radiation around 1085\,nm.
The three dominant features from the left are: the photospheric line of \ion{Si}{1} at 1082.7\,nm ($X\simeq 350$); the chromospheric triplet of \ion{He}{1} around 1083\,nm ($X\simeq 430$); and the photospheric line of \ion{Si}{1} at 1084.4\,nm ($X\simeq 780$). We note the presence of strong polarization fringes across the entire spectrum.} \end{figure} In this paper, we extend previous work on the identification and isolation of polarization fringes by Principal Component Analysis \cite[PCA;][]{Pe01,Jo02}. \cite{Ca12} considered the implementation and performance of a 2D PCA algorithm based on the method described by \citeauthor{Ya04} (\citeyear{Ya04}; hereafter, Y-PCA). In that approach, the spectral data gets ``contracted'' over the spatial dimension before performing PCA decomposition. As a result, the PCA basis of \emph{eigenfeatures} consists of spectro-polarimetric profiles (rather than 2D images similar to Figure~\ref{fig:example}), which are akin to spatial averages of the principal components (PCs) of the data. One advantage of the Y-PCA approach is the fast convergence of the singular-value series, which in theory allows one to limit the number of PCs used for the data reconstruction to a small set of the lowest-order basis vectors. A beneficial side effect is that unwanted, high-order spatial modulations of the signals that tend to cancel out when spatially averaged, such as detector noise, get confined to the lower end of the ordered set of PCs, so a low-pass cut of this set is a very efficient way to remove these unwanted features from the data \cite[see][]{Ca12}. Not surprisingly, these benefits come at a cost.
It may often be hard to tell fringes and spectral signals apart (with the exception of the most dominant fringes), and the compression of spatial information in the Y-PCA algorithm causes high-order spatial variations of the spectral signals to be distributed over many PCs, which therefore must be included for the reconstruction of the signal, if such spatial variations are to be preserved, e.g., in high-resolution observations. In this work, we consider the application to the polarization fringe problem of the original 2D PCA algorithm of \citeauthor{TP91} (\citeyear{TP91}; hereafter, TP-PCA), which has traditionally been applied to the problem of face recognition. The possibility of using this approach for fringe removal was briefly touched upon in the conclusions of \cite{Ca12}. One downside of the algorithm pointed out in that paper is that the singular-value series shows a much slower convergence than in the Y-PCA method, and the dimensionality of the basis of eigenfeatures corresponds to the number of images in the data set to be analyzed, which can in some cases be a small number (even just one). In practice, this implies that often one must keep the entire set of PCs in order to reconstruct the data without significant loss of information, especially for small sets of images. On the other hand, the advantage of the TP-PCA method is that the contribution of the fringes to the data is immediately recognizable in the set of eigenfeatures, which in principle should make the removal of such contribution an easier task than in the Y-PCA method. However, since most or all of the PCs must be included in the reconstruction in order to preserve the original spatial and spectral information of the data, no practical benefit can come from this, unless the fringes are somehow confined to just a few eigenfeatures essentially devoid of spectral signal, which then can be dropped from the data reconstruction without any risk of signal loss.
The purpose of this paper is to apply a simple manipulation technique of the PCA basis, in order to optimally attain the isolation of the fringes in a few basis vectors. This technique relies on the intrinsic orthogonality of the PCA basis, and uses rotations on the 2D subspaces generated by all possible pairs of eigenfeatures in the basis, so that the corresponding transformed 2D subspace has one of the two basis vectors clean of fringes (as far as possible). By iteratively processing the entire basis, it is possible to \emph{automatically} confine the fringes to a minimal set of transformed basis vectors. Because this transformation of the PCA basis is constructed as a sequence of isometries, the process naturally preserves the orthonormality of the original PCA basis. The idea of rotating the PCA basis in order to bring out particular physical properties of the original data set is not new, and it has also been applied to the problem of spectro-polarimetric inversion of photospheric lines \citep{SL02}. The technique presented here can be applied equally to the TP-PCA and Y-PCA approaches. In the case of the Y-PCA method, this manipulation technique can significantly improve the separation of fringes and spectral signals in the set of basis vectors, compared to the direct application of the method as presented in \cite{Ca12}. In Section~\ref{sec:theory}, we present the mathematical concepts and assumptions for the separation of spectral signals and unwanted instrument artifacts in a spectro-polarimetric data set. In Section~\ref{sec:examples}, we show some applications of the basis manipulation technique to the PCA of Stokes data for both the TP-PCA and Y-PCA approaches. The rotation algorithm is summarized in App.~\ref{app:B}.
\section{Formulation of the problem} \label{sec:theory} We consider a 2D spectro-polarimetric data set, consisting of an ensemble of $N$ ``Stokes images'', having spectral wavelength (or frequency) as the $x$ coordinate, and the spatial position along the slit of a grating-based spectrograph as the $y$ coordinate. Alternatively, the data may come from an imaging spectro-polarimeter, where the spectral signal is acquired through wavelength scanning of a tunable filter instrument (e.g., using a Fabry-Perot interferometer; \citealt{Ca06}). The data can therefore be represented by the set $\mathscr{D}\equiv\{\mathscr{D}_i(x,y)\}_{i=1,\ldots,N}$. The implementation of 2D PCA according to the TP-PCA method allows one to represent each image in the data set as a linear expansion over a basis (i.e., an orthonormal set) of eigenfeatures $\bm{e}_j(x,y)$, where $j=1,\ldots,N$, \begin{equation} \label{eq:expans.d} \mathscr{D}_i(x,y)=\sum_{j=1}^N c_{ij}\,\bm{e}_j(x,y)+\bar{\mathscr{D}}(x,y)\;,\qquad i=1,\ldots,N\;, \end{equation} where $\bar{\mathscr{D}}(x,y)$ is the average image of the data set. The orthonormality condition corresponds to the following definition of inner product on the space $\mathfrak{E}$ generated by the basis $\{\bm{e}_j(x,y)\}_{j=1,\ldots,N}$, \begin{equation} \label{eq:inner} \langle \bm{e}_j,\bm{e}_k \rangle \equiv \int\limits_{D_{XY}}\mathrm{d}x\,\mathrm{d}y\; \bm{e}_j(x,y)\,\bm{e}_k(x,y)=\delta(j,k)\;, \end{equation} where $D_{XY}$ is the area of the image, and $\delta(j,k)$ is Kronecker's $\delta$. In the usual implementation of the TP-PCA algorithm \cite[see][]{TP91}, the basis $\{\bm{e}_j(x,y)\}_{j=1,\ldots,N}$ is determined via diagonalization of the $N\times N$ covariance matrix $\textrm{Cov}(\mathscr{D},\mathscr{D})$.
This can be calculated through a simple matrix multiplication, \begin{equation} \label{eq:cov} \textrm{Cov}(\mathscr{D},\mathscr{D})=(\bm{\mathsf{D}}-\bm{\bar{\mathsf{D}}})^T (\bm{\mathsf{D}}-\bm{\bar{\mathsf{D}}})\;, \end{equation} where $\bm{\mathsf{D}}$ is an $(N_X N_Y)\times N$ matrix, whose $i$-th column corresponds to the $\mathscr{D}_i(x,y)$ array of size $N_X\times N_Y$ reformed into a column vector of length $N_X N_Y$ (see App.~\ref{app:A}), whereas $\bm{\bar{\mathsf{D}}}$ is a similarly constructed matrix, where each column is the average of the set of reformed $\mathscr{D}_i(x,y)$ arrays across the map.\footnote{Equation~(\ref{eq:cov}) corrects eq.~(6) of \cite{Ca12} in their description of the \cite{TP91} method.} If we indicate with $\bm{\mathsf{U}}$ the (orthogonal) matrix of the column eigenvectors of (\ref{eq:cov}), then the normalized basis of eigenfeatures is given by the set (cf.\ App.~\ref{app:A}, eq.~(\ref{eq:2D_basis.app})) \begin{equation} \label{eq:2D_basis} \bm{e}_j(x,y)=\lambda_j^{-1/2} \sum_i \left[\mathscr{D}_i(x,y)-\bar{\mathscr{D}}(x,y)\right] U_{ij}\;,\qquad j=1,\ldots,N\;, \end{equation} where the $\lambda_j$'s are the corresponding eigenvalues. It is straightforward to verify that this is indeed an orthonormal set (see App.~\ref{app:A}). It is important to remark that, in general, the average image $\bar{\mathscr{D}}(x,y)\notin\mathfrak{E}$, when the basis is determined via diagonalization of the covariance matrix (\ref{eq:cov}). This can also be gleaned directly from eqs.~(\ref{eq:expans.d}) and (\ref{eq:2D_basis}). In practice, the data will include the desired spectro-polarimetric signal ($\mathscr{S}$) of the target science diagnostics of the observations along with unwanted instrumental signatures, such as detector artifacts and polarization fringes ($\mathscr{F}$), so that $\mathscr{D}_i(x,y)=\mathscr{S}_i(x,y)+\mathscr{F}_i(x,y)$, for $i=1,\ldots,N$.
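In code, the construction of eqs.~(\ref{eq:cov}) and (\ref{eq:2D_basis}) amounts to the eigen-decomposition of a small $N\times N$ matrix. The following minimal Python sketch (synthetic data; all variable names are ours, not the paper's) illustrates the procedure and verifies the orthonormality of the resulting eigenfeatures:

```python
import numpy as np

# Minimal sketch of the TP-PCA eigenfeature construction, on synthetic
# data (sizes and names are illustrative assumptions).
N, NX, NY = 6, 40, 30                   # number of frames, image size
rng = np.random.default_rng(0)
D = rng.normal(size=(NX * NY, N))       # column i: flattened D_i(x,y)
Dbar = D.mean(axis=1, keepdims=True)    # average image, one column

cov = (D - Dbar).T @ (D - Dbar)         # N x N covariance matrix
lam, U = np.linalg.eigh(cov)            # eigenvalues in ascending order
lam, U = lam[::-1], U[:, ::-1]          # reorder: decreasing importance

# Mean subtraction leaves the data matrix with rank N-1, so the last
# eigenvalue vanishes and only N-1 eigenfeatures survive:
# e_j = lam_j^{-1/2} * sum_i (D_i - Dbar) U_ij
E = (D - Dbar) @ U[:, :N - 1] / np.sqrt(lam[:N - 1])

# The eigenfeatures are orthonormal under the discrete analogue of the
# inner product defined above
assert np.allclose(E.T @ E, np.eye(N - 1), atol=1e-8)
```

The rank deficiency visible in the sketch is the discrete counterpart of the remark above that the average image does not, in general, belong to the space spanned by the eigenfeatures.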
The problem of de-fringing the data obviously corresponds to the removal of the contribution $\mathscr{F}_i(x,y)$ for each image in the data set. Because the average image $\bar{\mathscr{D}}(x,y)$ will in general contain unwanted instrumental signal---unlike in the problem of face recognition to which the TP-PCA algorithm is usually applied---the inability of the basis $\{\bm{e}_j(x,y)\}_{j=1,\ldots,N}$ to generate $\bar{\mathscr{D}}(x,y)$ is highly inconvenient. In fact, the need to add back the average image to the reconstructed data, according to the standard algorithm, implies that some of the unwanted instrumental signal is uncontrollably put back into the reconstructed data. For this reason, we adopt a variant of the algorithm, where the basis is determined instead via diagonalization of the correlation matrix (see App.~\ref{app:A}, eq.~(\ref{eq:corr.app})) \begin{equation} \label{eq:corr} \textrm{Corr}(\mathscr{D},\mathscr{D})=\bm{\mathsf{D}}^T \bm{\mathsf{D}}\;, \end{equation} so that eqs.~(\ref{eq:expans.d}) and (\ref{eq:2D_basis}) become, respectively, \begin{eqnarray} \label{eq:expans.corr} \mathscr{D}_i(x,y)&=&\sum_{j=1}^N c_{ij}\,\bm{e}_j(x,y)\;,\qquad i=1,\ldots,N\;, \\ \label{eq:2D_basis.corr} \bm{e}_j(x,y)&=&\lambda_j^{-1/2} \sum_i \mathscr{D}_i(x,y)\,U_{ij}\;,\qquad j=1,\ldots,N\;. \end{eqnarray} \begin{figure}[p!] \centering \includegraphics[width=7in]{Corr_V.jpg} \vbox{\medskip Original PCA basis\bigskip} \includegraphics[width=7in]{Rot_V.jpg} \vbox{\medskip Transformed basis\medskip} \caption{\label{fig:bases} \emph{Top:} the TP-PCA basis $\{\bm{e}_j(x,y)\}_{j=1,\ldots,12}$ of a Stokes $V$ map with $N=12$ frames. Note the presence of strong fringes in the dominant basis vectors (B.V.; esp.\ \#1 and \#3). The same fringes affect to a minor degree multiple other elements of the basis. \emph{Bottom:} the transformed basis $\{\bm{\epsilon}_k(x,y)\}_{k=1,\ldots,12}$.
Note that the fringes are very well confined to the subspace spanned by the last transformed basis vector. We also note that the residual target signal $\mathscr{S}$ in $\bm{\epsilon}_{12}(x,y)$ is visibly smaller than the one in both $\bm{e}_1(x,y)$ and $\bm{e}_3(x,y)$ of the original basis. Hence, reconstruction of the data set by dropping $\bm{\epsilon}_{12}(x,y)$ in the PCA reconstruction suppresses the fringes with less science data loss than by using the original basis without $\bm{e}_1(x,y)$ and $\bm{e}_3(x,y)$ (see Figure~\ref{fig:reconstr}).} \end{figure} \begin{figure}[t!] \centering \includegraphics[]{Corr_V_Sc7_Stk3_EV12.jpg}\kern 12pt \includegraphics[]{Rot_V_Sc7_Stk3_EV12.jpg} \caption{\label{fig:reconstr} Original (top), PCA-reconstructed (center), and residual of the reconstruction (bottom) for one frame of the same data set used for the creation of the bases of Figure~\ref{fig:bases}. \emph{Left:} using the original TP-PCA basis without $\bm{e}_1(x,y)$ and $\bm{e}_3(x,y)$. \emph{Right:} using the transformed basis without $\bm{\epsilon}_{12}(x,y)$.} \end{figure} Because of the different physical origin of the spectral and instrumental signals (assuming that the instrument behaves as a linear system), the signals are in principle statistically independent, i.e., \emph{uncorrelated}. However, the type and conditions of the observations represented by the (limited) data set $\mathscr{D}$ might be such that this condition of uncorrelation is not fully realized in the data. For the development below, we assume that the set $\mathscr{D}$ indeed satisfies such condition, which is expressed algebraically as \begin{equation} \label{eq:uncorr} \textrm{Corr}(\mathscr{S},\mathscr{F})=0\;. 
\end{equation} The $(i,j)$ element of this correlation matrix is the inner product $\langle \mathscr{S}_i,\mathscr{F}_j \rangle$ (see App.~\ref{app:A}), and so the condition (\ref{eq:uncorr}) of uncorrelation between $\mathscr{S}$ and $\mathscr{F}$ is equivalent to the orthogonality condition between the corresponding spaces, i.e., $\mathscr{F}=\mathscr{S}_\perp$ in $\mathscr{D}$. This implies that $\mathscr{D}$ is the direct sum of $\mathscr{S}$ and $\mathscr{F}$, i.e., $\mathscr{D}=\mathscr{S}\oplus\mathscr{F}=\mathscr{S}\oplus\mathscr{S}_\perp$, and so the union of any two bases of $\mathscr{S}$ and $\mathscr{F}$ is also a basis of $\mathscr{D}$ \cite[see, e.g.,][Ch.~VII]{BM53}. Therefore, there must exist an orthogonal transformation (isometry) $\bm{\mathsf{R}}$ of the PCA basis $\{\bm{e}_j(x,y)\}_{j=1,\ldots,N}$ into the basis $\{\bm{\epsilon}_k(x,y)\}_{k=1,\ldots,N}$, \begin{equation} \label{eq:newbasis} \bm{\epsilon}_k(x,y)=\sum_{j=1}^N \mathsf{R}_{kj}\,\bm{e}_j(x,y)\;, \end{equation} such that $\mathscr{S}$ is generated (after a possible re-ordering of the basis) by the subset $\{\bm{\epsilon}_k(x,y)\}_{k=1,\ldots,n}$, for some $n<N$, whereas $\mathscr{F}=\mathscr{S}_\perp$ is generated by the complementary subset $\{\bm{\epsilon}_k(x,y)\}_{k=n+1,\ldots,N}$. This allows us to rewrite eq.~(\ref{eq:expans.corr}) as \begin{equation} \label{eq:expans.d1} \mathscr{D}_i(x,y)=\sum_{k=1}^N \gamma_{ik}\,\bm{\epsilon}_k(x,y)\;, \end{equation} where \begin{equation} \gamma_{ik}=\sum_{j=1}^N c_{ij}\,\mathsf{R}_{kj}=\sum_{j=1}^N c_{ij}\,\mathsf{R}^T_{jk}\;. \end{equation} Similarly, for the de-fringed data, we have \begin{equation} \label{eq:expans.s} \mathscr{S}_i(x,y)=\sum_{k=1}^n \gamma_{ik}\,\bm{\epsilon}_k(x,y)\;. \end{equation} The objective of the de-fringing process by 2D PCA is therefore to determine the isometry $\bm{\mathsf{R}}$ that produces a basis set $\{\bm{\epsilon}_k(x,y)\}_{k=1,\ldots,N}$ with the properties given above.
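A toy numerical illustration of one elementary step of such an isometry may be useful here. The Python sketch below rotates a single pair of orthonormal basis vectors so that one of them minimizes a fringe metric; this is \emph{not} the algorithm of App.~\ref{app:B}, and both the metric (power in a fixed Fourier band) and the brute-force angle scan are our own illustrative assumptions:

```python
import numpy as np

def fringe_power(v, band=(8, 12)):
    # illustrative fringe metric: power in a fixed Fourier band
    return (np.abs(np.fft.rfft(v))**2)[band[0]:band[1]].sum()

def rotate_pair(e1, e2, metric=fringe_power, ntheta=721):
    # scan rotation angles; return the rotated orthonormal pair whose
    # first vector carries the least fringe power
    thetas = np.linspace(0.0, np.pi, ntheta)
    scores = [metric(np.cos(t)*e1 + np.sin(t)*e2) for t in thetas]
    t = thetas[int(np.argmin(scores))]
    return np.cos(t)*e1 + np.sin(t)*e2, -np.sin(t)*e1 + np.cos(t)*e2

n = 256
x = np.arange(n)
signal = np.exp(-0.5*((x - 128)/10.0)**2)   # smooth "spectral" feature
fringe = np.sin(2*np.pi*10*x/n)             # fringe inside the band
a, b = signal + 0.7*fringe, signal - 0.7*fringe
e1 = a/np.linalg.norm(a)                    # Gram-Schmidt on the pair
e2 = b - (b @ e1)*e1
e2 /= np.linalg.norm(e2)

eps1, eps2 = rotate_pair(e1, e2)
assert abs(eps1 @ eps2) < 1e-10             # the rotation is an isometry
assert fringe_power(eps1) < fringe_power(e1)  # fringes pushed into eps2
```

Because each step is a plane rotation, orthonormality of the pair is preserved exactly, mirroring the property of $\bm{\mathsf{R}}$ above.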
The realizability of this objective may in practice be hindered by the observing conditions, the nature of the target, and the limited size of the data set, as these can prevent the full realization of the uncorrelation condition (\ref{eq:uncorr}) within a specific data set $\mathscr{D}$. Despite this practical limitation, it is always possible to devise an algorithm that determines an optimal set of rotations between any two eigenfeatures with the purpose of eliminating the unwanted signal $\mathscr{F}$ from one of them. In App.~\ref{app:B}, we list the steps of a simple procedure that can easily be automated. By recursively applying that algorithm through the basis $\{\bm{e}_j(x,y)\}_{j=1,\ldots,N}$ of $\mathscr{D}$, the unwanted signal can thus be confined to a \emph{minimal} (i.e., no longer reducible) set of transformed basis vectors $\{\bm{\epsilon}_k(x,y)\}_{k=n+1,\ldots,N}$. Any residual spectral signal in such a set practically quantifies the departure of the uncorrelation condition (\ref{eq:uncorr}) from being fully realized within the data set $\mathscr{D}$. It is important to remark that, in general, the transformed basis $\{\bm{\epsilon}_k(x,y)\}_{k=1,\ldots,N}$ no longer represents a basis of PCs of the data set, as it no longer corresponds to the original matrix $\bm{\mathsf{U}}$ of eigenvectors of the correlation matrix $\mathrm{Corr}(\mathscr{D},\mathscr{D})$ via eq.~(\ref{eq:2D_basis.corr}). In particular, this implies that the original set of eigenvalues cannot be used to quantify the relative contributions of the transformed basis vectors to the data (see end of Sect.~\ref{sec:YPCA}). \section{Examples} \label{sec:examples} The concepts presented in the previous section are well demonstrated by Figures~\ref{fig:bases} and \ref{fig:reconstr}.
For these, we considered a subset of 12 frames taken from a spectro-polarimetric map of a solar active region in the \ion{He}{1} 1083\,nm multiplet, observed in 2011 with the Facility Infra-Red Spectro-polarimeter \cite[FIRS;][]{Ja10} at the Dunn Solar Telescope (DST) of the National Solar Observatory on Sacramento Peak (NSO/SP; Sunspot, NM). The figures show results of the application of the TP-PCA approach to the Stokes $V$ (circular polarization) signal. \begin{figure}[t!] \centering \includegraphics[]{Rot_V_Sc7_Stk3_EV32.jpg}\kern 12pt \includegraphics[]{Rot_V+Flt_Sc7_Stk3_EV32.jpg} \caption{\label{fig:reconstr.32-FF} Reconstruction of the same map step as in Figure~\ref{fig:reconstr}, but using a basis with 32 vectors. \emph{Left:} dropping $\bm{\epsilon}_{32}(x,y)$ from the reconstruction (similarly to the right side of Figure~\ref{fig:reconstr}). \emph{Right:} including $\bm{\epsilon}_{32}(x,y)$ in the reconstruction, but after running a Fourier filtering of that basis vector, in order to remove the confined fringes after the basis transformation.} \end{figure} Figure~\ref{fig:bases} shows the original TP-PCA basis $\{\bm{e}_j(x,y)\}_{j=1,\ldots,N}$ (top) and the transformed basis $\{\bm{\epsilon}_k(x,y)\}_{k=1,\ldots,N}$ (bottom), the latter being derived by the recursive application of the algorithm described in App.~\ref{app:B}. The original basis shows the presence of fringes in several basis vectors, most notably in the eigenfeatures $\bm{e}_1(x,y)$ and $\bm{e}_3(x,y)$. The determination of the PCA basis relies on the Singular Value Decomposition (SVD) of the correlation matrix, which produces an ordered set of eigenfeatures according to the decreasing importance of their contribution to the data set. 
Therefore, the presence of strong spectral signals in low-order eigenfeatures is of great concern, since a simple elimination of those eigenfeatures in order to suppress the fringe signal would cause a significant loss of the spectral signal targeted by the science. By applying the optimal rotation algorithm described in App.~\ref{app:B}, the fringe signal in each eigenfeature is moved down the transformed basis $\{\bm{\epsilon}_k(x,y)\}_{k=1,\ldots,N}$, until it is completely constrained within the last few basis vectors. Figure~\ref{fig:bases} demonstrates that, for this specific data set, the rotation algorithm is particularly effective, practically limiting the presence of the unwanted signal $\mathscr{F}$ to just the last basis vector $\bm{\epsilon}_{12}(x,y)$. While the basis transformation generally modifies the ``importance'' ordering of the original PCA basis, the fundamental result of the transformation is to accomplish the reduction---and ideally, the full suppression---of the targeted spectral signal in the fringe eigenfeatures, essentially realizing the process of orthogonalization of $\mathscr{S}$ and $\mathscr{F}$ within the basis of the data set $\mathscr{D}$. Of course, perfect orthogonality (and hence, uncorrelation) in the data set is implied only when the transformed basis vectors generating $\mathscr{F}$ are completely devoid of the spectral signal $\mathscr{S}$. This is certainly not the case for the transformed basis of Figure~\ref{fig:bases}. However, the residual spectral signal in the last basis vector $\bm{\epsilon}_{12}(x,y)$ has been significantly reduced. \begin{figure}[t!] \centering \includegraphics[width=7in]{Corr_1D.eps} \vbox{\medskip Original PCA basis\bigskip} \includegraphics[width=7in]{Rot_1D.eps} \vbox{\medskip Transformed basis\bigskip} \caption{\label{fig:bases.1D} \emph{Top:} the Y-PCA basis $\{\bm{e}_j(x)\}_{j=1,\ldots,12}$ for the same Stokes $V$ map with $N=32$ frames used for Figure~\ref{fig:reconstr.32-FF}. 
Note the evident presence of fringes in the eigenprofiles 2 to 4. The same fringes affect to a minor degree multiple other elements of the basis, including $\bm{e}_1(x)$. \emph{Bottom:} the transformed basis $\{\bm{\epsilon}_k(x)\}_{k=1,\ldots,12}$. Note that the fringes have been very well confined to the subspace spanned by the last two basis vectors, and significantly suppressed elsewhere.} \end{figure} The above examples were restricted to a small set of Stokes map frames for practical reasons, and it can be expected that the reduced statistical significance of such a set significantly affects the realization of the uncorrelation condition (\ref{eq:uncorr}). In typical science applications, instead, the number of frames is much larger (of order $10^2$), and this should favor the realizability of the condition (\ref{eq:uncorr}), \emph{if the Stokes signals of the observed target are sufficiently varied across the map.} This is well demonstrated by Figure~\ref{fig:reconstr.32-FF} (left), which shows how the quality of fringe removal in the previous example improves when the PCA basis is determined using 32 frames in the map, instead of the 12 used for Figure~\ref{fig:reconstr}. \begin{figure}[t!] \centering \includegraphics[]{Corr_1D_Sc7_Stk3_EV12.jpg}\kern 12pt \includegraphics[]{Rot_1D_Sc7_Stk3_EV12.jpg} \caption{\label{fig:reconstr.1D} Original (top), Y-PCA reconstructed (center), and residual of the reconstruction (bottom) for a specific frame of the data set used for the creation of the bases of Figure~\ref{fig:bases.1D}. \emph{Left:} using the original Y-PCA basis without $\bm{e}_2(x)$, $\bm{e}_3(x)$, and $\bm{e}_4(x)$. \emph{Right:} using the rotated basis without $\bm{\epsilon}_{11}(x)$ and $\bm{\epsilon}_{12}(x)$.
We note the significantly reduced loss of spectral signal with the rotated basis (especially for the two \ion{Si}{1} lines), despite a comparable suppression of the dominant fringes.} \end{figure} When the residual spectral signal in the set of transformed basis vectors containing the fringes is important, one can attempt to recover such signal rather than drop it from the reconstruction together with the unwanted fringes. A direct way of doing this is by Fourier filtering the fringe component(s) out of the relevant basis vectors. Once the fringes are removed from a basis vector, the vector can be added back to the basis set used for the data reconstruction, rather than being eliminated as done in the previous examples. Figure~\ref{fig:reconstr.32-FF} (right) shows an example of such manipulation of basis elements by Fourier filtering. In the rotated basis of 32 elements, the last two showed visible fringes, particularly $\bm{\epsilon}_{32}(x,y)$, with a non-negligible contribution from the targeted spectral signal. Applying Fourier filtering to the last two basis vectors, that signal can be added back to the reconstruction, producing a residual with practically no science data loss (see right panels of Figure~\ref{fig:reconstr.32-FF}). It is important to observe that the suppression of any signal from a basis vector, e.g., by Fourier filtering, breaks the orthonormality of the basis, which therefore can no longer be used as a projection vector set for the data. However, it is possible to reconstruct the data by using the same coefficient matrix $\bm{\mathsf{C}}$ for the transformed basis $\{\bm{\epsilon}_i\}_{i=1,\ldots,N}$ prior to filtering. This is mathematically equivalent to performing the data reconstruction by excluding the basis vectors affected by the fringes, and then applying the Fourier filtering to the difference image, in order to retrieve the residual target signal from it.
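This equivalence rests on the linearity of the Fourier filter: filtering a basis vector while keeping its expansion coefficient gives the same result as excluding the vector and filtering the difference image. A toy numerical check (synthetic data; all names are our own):

```python
import numpy as np

def notch_filter(v, kill=(8, 12)):
    """Linear Fourier filter: zero out a band of frequencies."""
    f = np.fft.rfft(v)
    f[kill[0]:kill[1]] = 0.0
    return np.fft.irfft(f, n=v.size)

n = 256
rng = np.random.default_rng(1)
basis = np.linalg.qr(rng.normal(size=(n, 4)))[0]  # orthonormal columns
coeff = rng.normal(size=4)
data = basis @ coeff
m = 3                                  # index of the "fringe" vector

# route 1: filter the basis vector, keep the same coefficient
route1 = basis[:, :m] @ coeff[:m] + coeff[m]*notch_filter(basis[:, m])

# route 2: reconstruct without the fringe vector, filter the difference
partial = basis[:, :m] @ coeff[:m]
route2 = partial + notch_filter(data - partial)

# by linearity of the filter, the two routes coincide
assert np.allclose(route1, route2)
```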
The downside of the second approach is that the residual must in principle be Fourier filtered independently for each map step (although it is easy to devise an automated way to do so for an entire map), whereas by directly Fourier filtering the basis set, the residual correction is automatically applied to the entire Stokes map. In Sect.~\ref{sec:FF}, we compare this \emph{selective} Fourier filtering of fringe PCs or difference images with the more traditional application of Fourier filtering directly to a frame of the Stokes map. \subsection{Application to the Y-PCA method} \label{sec:YPCA} The technique of orthogonal transformations of the PCA basis, in order to confine the polarization fringe components to an orthogonal subspace of the spectral signal, can also be employed to improve the performance of the Y-PCA method considered by \cite{Ca12}. In order to test its performance, we employed the same Stokes $V$ map with 32 scan steps used for the results of Figure~\ref{fig:reconstr.32-FF}. The Y-PCA algorithm implies the creation of the data correlation matrix \cite[cf.\ eq.~(1) of][]{Ca12} \begin{equation} \label{eq:corr.1D} \textrm{Corr}(\mathscr{D},\mathscr{D})=\frac{1}{N}\sum_{i=1}^N \mathscr{D}_i(x,y)^T \mathscr{D}_i(x,y)\;. \end{equation} The above definition implies contraction over the spatial dimension $Y$, and leads to an $N_X\times N_X$ correlation matrix, with $N_X$ eigenprofiles in the resulting PCA basis. \begin{figure}[t!] \centering \includegraphics[width=7in]{Rot_1D_rord.eps} \caption{\label{fig:reord} The transformed basis of Figure~\ref{fig:bases.1D} after reordering of the basis vectors according to the weight of their relative contributions to the map data.} \end{figure} In the case of the Stokes maps considered in the previous examples, $N_X=995$, but in Figure~\ref{fig:bases.1D} we only consider the first 12 basis vectors.
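The contraction over the spatial dimension in eq.~(\ref{eq:corr.1D}) can be made explicit with a short sketch (synthetic frames; the sizes are illustrative, not those of the FIRS map):

```python
import numpy as np

# Sketch of the Y-PCA correlation matrix: each N_Y x N_X frame D_i(x,y)
# is contracted over the spatial dimension Y, yielding an N_X x N_X
# matrix whose eigenvectors are 1D eigenprofiles.
N, NX, NY = 32, 100, 60                  # toy sizes (illustrative only)
rng = np.random.default_rng(2)
frames = rng.normal(size=(N, NY, NX))    # D_i stored as N_Y x N_X arrays

corr = sum(Di.T @ Di for Di in frames) / N   # contraction over Y
lam, U = np.linalg.eigh(corr)
profiles = U[:, ::-1]                    # eigenprofiles, decreasing weight

assert corr.shape == (NX, NX)
# the eigenprofiles form an orthonormal 1D basis along the spectral axis
assert np.allclose(profiles.T @ profiles, np.eye(NX), atol=1e-8)
```

The dimensionality of the resulting basis is set by $N_X$ rather than by the number of frames, which is the origin of the slower singular-value convergence discussed earlier for this approach.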
The fringes are easily recognizable, and in order to suppress them by using the decomposition of the Stokes map on the original PCA basis (top half of Figure~\ref{fig:bases.1D}), one must drop at least $\bm{e}_2(x)$, $\bm{e}_3(x)$, and $\bm{e}_4(x)$ from the data reconstruction. The result of this operation is shown on the left of Figure~\ref{fig:reconstr.1D}. Qualitatively, it fares comparably to the TP-PCA reconstruction using the full basis of 32 vectors after rotation (left side of Figure~\ref{fig:reconstr.32-FF})---slightly better for \ion{He}{1} 1083\,nm, but visibly worse for the two \ion{Si}{1} lines. However, using the transformed basis (bottom half of Figure~\ref{fig:bases.1D}), where the fringes are very well confined to the last two basis vectors $\bm{\epsilon}_{11}(x)$ and $\bm{\epsilon}_{12}(x)$, the suppression of the fringes in the reconstructed data by dropping those two basis vectors improves significantly, becoming qualitatively comparable to the TP-PCA reconstruction after Fourier filtering of the fringe-carrying basis vectors (cf.\ the right sides of Figs.~\ref{fig:reconstr.32-FF} and \ref{fig:reconstr.1D}). Incidentally, we note the complete absence of random ``noise'' in the reconstructed images of Figure~\ref{fig:reconstr.1D}, because of the low-pass cut implied by considering only the lowest-order 12 basis vectors out of the full set of $N_X=995$. \begin{figure} \centering \includegraphics[width=.49\hsize]{ACfV_frame.jpg}\kern 6pt \includegraphics[width=.49\hsize]{ACfV_res.jpg} \caption{\label{fig:AC} \emph{Left:} A Stokes map frame (top), and its corresponding spectral autocorrelation image (bottom). The frequency range of the dominant fringes is clearly visible as two vertical strips in Fourier space, at around $X=10$ and $X=400$, stretching along the entire spatial dimension. Note the significant amplitude overlap of these fringes with the spectral signals in several spatial regions below $Y=100$. 
\emph{Right:} The residual of the same frame after PCA reconstruction by excluding the fringe-affected basis vectors (top), and its corresponding spectral autocorrelation image (bottom). We note the much improved isolation of the fringe contribution in Fourier space, due to the overall suppression of the spectral signal, which allows for an efficient filtering of the fringes in the residual image.} \end{figure} \begin{figure}[t!] \centering \includegraphics[]{ex_1083_FF_Sc47_Stk3_EV100.jpg}\kern 12pt \includegraphics[]{ex_1083_PCA_ROT_FF_Sc47_Stk3_EV100.jpg}\vskip 6pt \includegraphics[width=.495\hsize]{profile_ex_1083_FF_Sc47_Stk3_EV100_Y19.eps}\kern 6pt \includegraphics[width=.495\hsize]{profile_ex_1083_PCA_ROT_FF_Sc47_Stk3_EV100_Y19.eps} \caption{\label{fig:reconstr.1083} Original (1st row), reconstructed (2nd row), and residual of the reconstruction (3rd row) for one frame of a Stokes $V$ map around the \ion{He}{1} 1083\,nm line. The map (frequency ordered) was taken with the DST/SPINOR instrument, and contains a total of 100 frames. \emph{Left:} Stokes signal reconstructed after a direct Fourier filtering of the frame. \emph{Right:} Stokes signal reconstructed after a selective Fourier filtering of the contribution from the fringe basis vectors, after a TP-PCA of the full map, and successive transformation of the PCA basis. The bottom two rows show the profiles at $Y=19$ in the corresponding reconstructed frames. \emph{Top row:} original data (diamonds) and the reconstructed profile (continuous line). \emph{Bottom row:} filtered fringes. We note the significant spurious signal introduced by the direct Fourier filtering of the frame shown on the left.} \end{figure} \begin{figure}[t!]
\centering \includegraphics[]{ex_656_FF_Sc4_Stk1_EV60.jpg}\kern 12pt \includegraphics[]{ex_656_PCA_FF_Sc4_Stk1_EV60.jpg}\vskip 12pt \includegraphics[width=.495\hsize]{profile_ex_656_FF_Sc4_Stk1_EV60_Y168.eps}\kern 6pt \includegraphics[width=.495\hsize]{profile_ex_656_PCA_FF_Sc4_Stk1_EV60_Y168.eps} \caption{\label{fig:reconstr.Ha} Same as Figure~\ref{fig:reconstr.1083} for one frame of a Stokes $Q$ map of the \ion{H}{1} H$\alpha$ line at 656\,nm. The map (frequency ordered) was taken with the DST/SPINOR instrument, and contains a total of 60 frames. For this data set, direct Fourier filtering (left) produces a very good result, with only minor differences with respect to selective Fourier filtering of the contribution from the dominant fringe basis vectors (in this case $\bm{e}_1$, $\bm{e}_2$, and $\bm{e}_6$) shown at the right.} \end{figure} This test confirms the results of \cite{Ca12} about the applicability of the Y-PCA method to the Stokes de-fringing problem. However, depending on the particular data set, the state of polarization, and the instrument properties, having access to different methods for analyzing the data can help optimize the de-fringing process. Typically, compared to Y-PCA, the TP-PCA method manages to confine the fringe subspace into a smaller number of basis vectors (i.e., $\mathscr{F}$ has a lower dimensionality), which are therefore easier to identify and manage for the data reconstruction, whether it is a matter of excluding them from the transformed basis, or of applying Fourier filtering to them in order to retrieve some residual spectral signal. One downside of both methods, when a basis rotation is applied, is the fact that the basis transformation scrambles the \emph{importance ordering} of the original PCA basis. 
For the sake of reducing data loss, one always tries to minimize the number of basis vectors of $\mathscr{F}$ that must be further handled, whether by simply dropping them from the data reconstruction or by Fourier filtering them. Therefore, the lack of information on the relative importance of the contributions of the transformed basis vectors to the data can be a disadvantage, especially for the Y-PCA method, where the dimensionality of $\mathscr{F}$ is typically larger. A workaround that allows one to restore the proper importance ordering of the transformed basis is to use the expression for the eigenvalues $\lambda_i$ of the correlation matrix of the map data (cf.\ eq.~(\ref{eq:eigen.app})), \begin{equation} \lambda_i\equiv\lambda(\bm{\mathsf{U}}_i)=\bm{\mathsf{U}}_i^T\,\mathrm{Corr}(\mathscr{D},\mathscr{D})\,\bm{\mathsf{U}}_i \end{equation} where $\bm{\mathsf{U}}_i$ is the corresponding (column) eigenvector. Then, if $\bm{\tilde{\mathsf{U}}}_i=\bm{\mathsf{R}} \bm{\mathsf{U}}_i$ is the eigenvector transformed through the same basis transformation of eq.~(\ref{eq:newbasis}), one can use $\lambda(\bm{\tilde{\mathsf{U}}}_i)$ as an estimate of the weight of the contribution of the transformed basis vector $\bm{\epsilon}_i$ to the map data. Figure~\ref{fig:reord} shows the transformed basis of Figure~\ref{fig:bases.1D} after such reordering. \subsection{Comparison with direct Fourier filtering} \label{sec:FF} Direct Fourier filtering of Stokes map frames is a popular method for tackling the problem of polarization fringe removal, because of its straightforward implementation. One must simply identify the frequency range of the fringe pattern to be removed in the spectral Fourier transform of the map frame, and suppress those frequencies before restoring the map frame by inverse Fourier transform. Fringes that are well separated in frequency space from the targeted spectral signal can easily and effectively be removed by this method.
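The reordering of the transformed basis by the weights $\lambda(\bm{\tilde{\mathsf{U}}}_i)$ can be sketched as follows (illustrative NumPy; \texttt{corr} is the data correlation matrix, \texttt{U} holds its eigenvectors as columns, and \texttt{R} is the orthogonal transformation of eq.~(\ref{eq:newbasis})):

```python
import numpy as np

def reorder_transformed_basis(corr, U, R):
    """Restore the importance ordering of a rotated PCA basis.

    corr : correlation matrix of the map data
    U    : matrix whose columns are its eigenvectors
    R    : orthogonal basis transformation

    The weight of the i-th transformed basis vector is estimated as
    lambda(U~_i) = U~_i^T Corr U~_i, with U~_i = R U_i.
    Returns the ordering (most important first) and the weights.
    """
    Ut = R @ U                                        # transformed eigenvectors
    weights = np.einsum('ij,ik,kj->j', Ut, corr, Ut)  # diag(U~^T Corr U~)
    order = np.argsort(weights)[::-1]
    return order, weights
```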
However, in general, some frequency overlap between fringes and signals is to be expected, which inevitably leads to some science data loss. Figure~\ref{fig:AC} illustrates a typical example. The top panel on the left shows a frame from a Stokes $V$ map of the \ion{He}{1} 1083\,nm line region, with a total of 100 steps, whereas the panel right below it shows the corresponding spectral autocorrelation image. The instrument in this case was the Spectro-Polarimeter for INfrared and Optical Regions \cite[SPINOR;][]{SN06}, also deployed at the NSO/SP DST (note the spectrum is ordered by frequency rather than wavelength). The frequency interval of the fringes is well captured in Fourier space, and it is identified by the two narrow vertical strips located around $X=10$ and $X=400$, spanning the full spatial dimension. However, the map frame also contains spectral signal with a frequency range that significantly overlaps with the amplitude of the fringes in Fourier space, predominantly for positions along the spectrograph slit below $Y=100$. At those slit positions, we can expect that the reconstruction of the signal will be impacted by Fourier filtering. The panels on the right show instead the residual image of a TP-PCA reconstruction, after transformation of the PCA basis, and the exclusion of the last two fringe-carrying basis vectors. We note that the amplitude of the residual spectral signal in Fourier space has been significantly reduced, and that the separation between fringes and spectral signal is much improved, allowing for an efficient filtering of the fringes in the residual image. Figure~\ref{fig:reconstr.1083} demonstrates in practice the effects of direct Fourier filtering in such a case. The left side of the figure shows the reconstruction of the same map step in the example of Figure~\ref{fig:AC} after direct Fourier filtering of the frame.
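A minimal sketch of the direct Fourier notch filtering used here for comparison (illustrative NumPy; in practice the band indices would be read off an autocorrelation image such as that of Figure~\ref{fig:AC}):

```python
import numpy as np

def notch_filter_frame(frame, bands, axis=1):
    """Direct Fourier filtering of a single Stokes map frame: zero out the
    fringe frequency bands along the spectral axis and invert the FFT.

    bands : list of (lo, hi) index ranges in FFT ordering; the mirrored
    negative-frequency band is suppressed as well.
    """
    F = np.fft.fft(frame, axis=axis)
    n = frame.shape[axis]
    for lo, hi in bands:
        sl = [slice(None)] * frame.ndim
        sl[axis] = slice(lo, hi)
        F[tuple(sl)] = 0.0
        sl[axis] = slice(n - hi + 1, n - lo + 1)  # mirror band
        F[tuple(sl)] = 0.0
    return np.fft.ifft(F, axis=axis).real
```

When the fringes overlap the signal in frequency space, this kind of notch inevitably removes part of the science data as well, which is the effect demonstrated in the text.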
As anticipated, the spatial regions where the spectral signal has significant overlap with the fringes in frequency space suffer the most in the direct application of Fourier filtering. The right side of the figure shows instead how the prior PCA decomposition of the Stokes map (in this case, TP-PCA) makes it possible to attain much better results with Fourier filtering, either by selectively applying the filtering to just the (few) basis vectors visibly affected by fringes, or by applying it directly to the difference image after excluding those basis vectors from the data reconstruction (see discussion right before Sect.~\ref{sec:YPCA}). In this example, the full PCA basis of the map with 100 eigenfeatures was first transformed in order to confine the fringe contribution to just two basis vectors. The data reconstruction was performed using the complementary set of basis vectors, and then the difference image (corresponding to the top-right panel of Figure~\ref{fig:AC}) was Fourier filtered in order to add back the residual signal. Figure~\ref{fig:reconstr.Ha} provides an illuminating counterexample. In this case, we reconstructed a Stokes $Q$ frame from a map of the \ion{H}{1} H$\alpha$ line region around 656\,nm, with a total of 60 steps, also observed with the SPINOR instrument. Despite the complex fringe pattern in this map, direct Fourier filtering of the full frame (left) already gives very good results, comparable to what can be obtained with the selective Fourier filtering of the difference image from the TP-PCA reconstruction (right). In fact, in this case the spectral autocorrelation image of the map frame shows a clear separation between the fringe and signal contributions in frequency space, allowing a clean suppression of the fringes with minimal loss of science data.
In this case, evidently, a prior transformation of the PCA basis does not necessarily improve the fringe suppression, and in fact the right side of Figure~\ref{fig:reconstr.Ha} was obtained by Fourier filtering the difference image after reconstructing the data without the eigenfeatures $\bm{e}_1$, $\bm{e}_2$, and $\bm{e}_6$ of the original PCA basis. Despite the effectiveness of direct Fourier filtering of the map frame in this particular example, a prior PCA of the full map, which allows one to target a minimal set of PCs for the selective filtering of the fringe component, still leads to a visibly cleaner removal of the fringe signal from the map frame. This clearly demonstrates the benefit of running a PCA of a Stokes map, even in cases where the transformation of the PCA basis in order to maximally confine the fringe signal may not be necessary. \section{Conclusions} We studied and compared the performance of various methods for identifying and removing polarization fringes from spectro-polarimetric data based on 2-dimensional Principal Component Analysis (2D PCA) of Stokes maps. All examples were taken from data sets acquired with slit-based spectro-polarimeters, but the same methods can equally well be applied to observations taken with wavelength-tunable imaging spectro-polarimeters. We analyzed in depth the performance of the \cite{TP91} algorithm of 2D PCA, which was only briefly touched upon in a previous work by \cite{Ca12}. We compared its performance with that of the \cite{Ya04} algorithm, which was instead the focus of that previous work. In this study, we highlighted the improvements in spectro-polarimetric data de-fringing by PCA that can be expected by subjecting the PCA basis of a Stokes map to a series of 2D rotations among its various eigenfeatures.
This transformation can be set up in a quantitative and repeatable way to specifically target the fringe signal in all eigenfeatures, in order to ``sweep'' it across the basis and ``dump'' it into a minimal set of transformed basis vectors, which determine the dimensionality of the fringe subspace $\mathscr{F}$ in the data set. When perfect orthogonality between this subspace and the subspace $\mathscr{S}$ of the targeted spectral signal is realized in a particular data set, the basis vectors spanning $\mathscr{F}$ are perfectly orthogonal (hence, uncorrelated) to $\mathscr{S}$. In practice, however, this ideal condition is never fully achieved, and the problem of handling, and possibly ``rescuing'', any residual spectral signal of interest in the fringe basis vectors was also considered here. It is found that \emph{selective} Fourier filtering of the fringe basis vectors can greatly improve the performance of 2D PCA de-fringing, as well as provide significantly better results than the more traditional \emph{direct} Fourier filtering of each individual frame of the Stokes map. We describe one possible algorithm for the implementation of an optimal transformation of the PCA basis for the purpose of isolating the fringes into a minimal set of basis vectors (see App.~\ref{app:B}). Such an algorithmic description lends itself to the development of \emph{automated} procedures for the de-fringing of spectro-polarimetric data sets. Throughout this study, we elected to work with the \emph{correlation} matrix of the Stokes data set rather than its \emph{covariance} matrix, as typically done in most PCA methods. The reason for this choice is that the creation of the covariance matrix of the data implies the subtraction of the average image from the data.
In the traditional application of 2D PCA to face recognition, this average must then be added back for data reconstruction, but in the problem of Stokes data de-fringing this would inevitably imply adding back fringes to the reconstructed data. On the other hand, if the average image is left out and it contains some significant residual spectral signal of interest, that signal is inevitably lost in the reconstructed data. The use of the covariance matrix (which was considered by \citealt{Ca12}) is thus only justified when the average image is devoid of any significant residual of the targeted spectral signal. In those cases, good data de-fringing can sometimes already be attained simply by subtracting the average image from the full data. In the case of fringe patterns evolving during the Stokes map scan, the (covariance based) PCA method for de-fringing can complete the process, delivering spectro-polarimetric data sets fully usable for science.
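The distinction between the two matrices discussed above can be made explicit with a short sketch (illustrative NumPy; for a stack of frames, the correlation and covariance matrices differ exactly by the outer product of the flattened average image with itself):

```python
import numpy as np

def tp_matrices(frames):
    """Correlation vs. covariance matrix for a 2D-PCA decomposition of a
    stack of frames with shape (N, ny, nx). The covariance version implies
    subtracting the average image, which must then be handled separately at
    reconstruction; the correlation version (used in this work) does not.
    """
    flat = frames.reshape(frames.shape[0], -1)
    corr = flat.T @ flat / flat.shape[0]
    centered = flat - flat.mean(axis=0)
    cov = centered.T @ centered / flat.shape[0]
    return corr, cov
```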
\section{Introduction} State-of-the-art photorealistic image synthesis is based on (quasi-) Monte Carlo simulation of light propagation: Rays are traced to connect the camera with the light sources. Then, the contribution of all these light paths is summed up. Often, fibers for hair and fur are part of the scenery. These fibers are usually modeled as sweep surfaces along B\'ezier curves with a circular cross section and a parametric radius, which may vary along the curve. While triangles are a very common representation for most other parts of the scene geometry, they are often a very unsuitable approximation for fibers. Reasons for this exception include memory restrictions, numerical issues, and efficiency considerations. Therefore, especially in high quality rendering, custom primitives are used for fibers, and intersections of rays with these primitives must be found. \section{Previous Work} The most popular approaches are based on recursive subdivision of the curve \cite{Catmull:1974}, after which either the segment of the curve can be approximated by a simple primitive or an iterative solver refines the solution. The number of subdivisions required for a certain reduction of curvature can be reduced by a more thorough analysis, albeit at the price of significantly increased effort \cite{Hain:2005}. Generalized cylinders \cite{Ballard:1982} are a common representation for hair fibers. They are defined by sweeping an arbitrary two-dimensional contour along a three-dimensional curve. Intersections of rays with these objects can be found without tessellation \cite{Bronsvoort:1985}: Each ray is projected into a parametric frame aligned to the trajectory of the curve, i.e., the contour is fixed. At the same time the trajectory of the ray becomes a two-dimensional curve. Then, the ray and the contour are subdivided simultaneously until the sizes of their bounding boxes fall below a threshold.
During the process, combinations of the two intervals for which the bounding boxes do not overlap can be pruned. In a final step, the exact intersection points are calculated, which requires solving equations of higher polynomial degree. The method can be simplified by either restricting the shape of the sweep curve \cite{vanWijk:1984} or the shape of the contour without subdividing the curve first \cite{vanWijk:1985}. However, finding roots of polynomials of high degree is still required and remains numerically challenging. Intersections of rays and sweep surfaces with a circular cross section can also be found by combining the equations of the trajectory of a ray and the parametric distance of a point to a parametric position on the curve \cite{Leipelt:1995}. Again, roots of a polynomial of high degree must be found. Approximating the intersection on the surface of the fiber with the closest point of approach of the ray and the curve lowers the polynomial degree. The closest point of approach of two lines can be determined very efficiently in ray-centric coordinate systems using an adaptive linearization method based on recursive subdivision \cite{Nakamaru:2002}. If only primary visibility from a pinhole camera is of concern, it can be beneficial to compute line samples instead of point samples \cite{Barringer:2012}. In the same spirit, cone tracing can decrease the number of samples significantly \cite{Qin:2014}. The obtained coverage information, however, may not fit the architecture of a fully path traced simulation. Improvements of a ``top level'' hierarchy referencing fibers and unrolling curve subdivision such that the number of segments matches the SIMD width may improve performance on certain architectures \cite{Woop:2014}. More recent iterative root finding methods can replace recursive subdivision to improve convergence speed and precision at the same time \cite{Reshetov:2017}.
While these methods computing the closest point of approach deliver state-of-the-art performance for a certain level of detail, they all suffer from the underlying approximation, which prohibits the determination of the correct intersection on the surface of the fiber and the normal in the intersection. An example of this issue is shown in \Cref{fig:ribbons-vs-ours}. Furthermore, the inefficiency of the pruning tests of subdivision-based methods becomes prohibitive for a high number of subdivisions. Finally, recursive methods using a stack suffer from memory bandwidth limitations, especially on current GPUs. Fast ray tracing is possible due to efficient data structures that identify all potential parts of the scene that may be intersected by a ray. The state-of-the-art for these acceleration data structures performs hierarchical partitioning of either space or the set of objects. Furthermore, there exist hybrid schemes that partition both the set of objects and space in order to improve performance \cite{Stich:2009}. As the construction and traversal of such acceleration data structures is almost orthogonal to the actual ray/fiber intersection, we focus on improving the latter in this article. \begin{figure} \centering \begin{tabular}{@{}ccc@{}} \includegraphics[width=.47\textwidth]{wrong-shape.png}&% \hspace*{.02\textwidth} \includegraphics[width=.47\textwidth]{correct-shape.png}% % % \end{tabular} \vspace*{-2ex} \caption{Left: Methods based on the closest point of approach cannot determine the correct parametric position and normal. Right: Our method computes both with high precision.} \label{fig:ribbons-vs-ours} \end{figure} \section{Algorithm} Our algorithm is a member of the family of subdivision-based methods computing intersections of rays with fibers by recursively bisecting the curve and pruning regions that cannot be intersected by the ray \cite{Catmull:1974}.
The first contribution is a stackless iterative variant that only keeps track of subdivision levels that require backtracking and re-computes all necessary data instead of employing a stack. Our second contribution is a fast pruning test with oriented cylinders that significantly improves the accuracy, especially for a high number of subdivisions. After a termination criterion is met, e.g., a fixed number of subdivisions, the final intersection with the linearized segment, represented by a cylinder with oriented end caps, is computed. As bounding cylinders of neighboring curve segments are now disjoint by construction and closer segments are always intersected first, our algorithm can immediately terminate after an intersection has been found (third contribution). Instead of approximating the actual intersection on the surface of the fiber with the closest point of approach, we reuse the intersection already determined for pruning (fourth contribution) and are able to compute an accurate normal (fifth contribution). \subsection{Numerically Robust Curve Representation} \label{sec:representation} A na\"ive implementation for cubic B\'ezier curves using four control points $(p_0, p_1, p_2, p_3)$ suffers from severe floating point precision issues in our algorithm due to cancellation in differences required to determine the cylinder axis ($p_3 - p_0$) and the tangent in the split point. Therefore, we use a representation tailored to our pruning test as illustrated in \Cref{fig:representation}: We maintain the first control point $p := p_0$, the tangents in the start and end points $t_0 := p_1 - p_0, t_1 := p_3 - p_2$, and the direction $d := p_3 - p_0$.
This representation requires different rules for the subdivision of $(p, d, t_0, t_1)$ into $\left(p^L, d^L, t_0^L, t_1^L\right)$ and $\left(p^R, d^R, t_0^R, t_1^R\right)$, where % \begin{align*} \Delta p &= \tfrac{3}{8} t_0 + \tfrac{1}{2} d - \tfrac{3}{8} t_1,\\ t_c &= -\tfrac{1}{8} t_0 + \tfrac{1}{4} d - \tfrac{1}{8} t_1, \end{align*} and \begin{align*} p^L &= p, & p^R &= p + \Delta p,\\ d^L &= \Delta p, & d^R &= d - \Delta p,\\ t_0^L &= \frac{1}{2}t_0, & t_0^R &= t_c,\\ t_1^L &= t_c, & t_1^R &= \frac{1}{2}t_1. \end{align*} As we will use disjoint bounding volumes, only one of the two sets needs to be calculated as determined by the pruning test. In fact this subdivision can be computed even slightly more efficiently than the subdivision of $(p_0, p_1, p_2, p_3)$ and only exposes a minimal amount of instruction divergence due to branching. \begin{figure} \centering \begin{minipage}{.45\textwidth} \centering \begin{tikzpicture}[scale=0.7] \coordinate (A) at (0, 0); \coordinate (B) at (1, 1.5); \coordinate (C) at (2, 1); \coordinate (D) at (4, 0); \node at (A) {\small \textbullet}; \node at (B) {\small \textbullet}; \node at (C) {\small \textbullet}; \node at (D) {\small \textbullet}; \draw[thick, -latex] (A) -- (B) node[midway, above, font=\small, xshift=-2] {$t_0$}; \draw[thick, -latex] (A) -- (D) node[midway, above, font=\small] {$d$}; \draw[thick, -latex] (C) -- (D) node[midway, above, font=\small] {$t_1$}; \node[anchor=north, font=\small] at (A) {$p := p_0$}; \node[anchor=south, font=\small] at (B) {$p_1$}; \node[anchor=south, font=\small] at (C) {$p_2$}; \node[anchor=north, font=\small] at (D) {$p_3$}; \draw[thick, densely dotted] (A) .. controls (B) and (C) .. 
(D); \end{tikzpicture} \captionof{figure}{Representing the curve with the tuple $\mathbf{\{p, d, t_0, t_1\}}$ improves the numerical robustness of the method significantly.} \label{fig:representation} \end{minipage}% \hspace*{.09\textwidth} \begin{minipage}{.45\textwidth} \centering \begin{tikzpicture} \coordinate (A) at (0, 0); \coordinate (B) at (5, 1); \coordinate (C) at (-1, 1); \coordinate (D) at (4, 0); \coordinate (T) at ($-1.0*(A)-(B)+(C)+(D)$); \newdimen\xta \newdimen\yta \pgfextractx{\xta}{\pgfpointanchor{T}{center}} \pgfextracty{\yta}{\pgfpointanchor{T}{center}} \coordinate (N) at (-\yta, \xta); \coordinate (S) at ($1.0/8.0*(A)+3.0/8.0*(B)+3.0/8.0*(C)+1.0/8.0*(D)$); \draw[thick] (A) .. controls (B) and (C) .. (D); \draw[thick, densely dotted, -latex] (0,0) -- (B); \draw[thick, densely dotted, latex-] (C) -- (D); \node at (0, 0) {\small \textbullet}; \node[below] at (0, 0) {\small $p_0$}; \node[below] at (B) {\small $p_1$}; \node[below] at (C) {\small $p_2$}; \node at (D) {\small \textbullet}; \node[below] at (D) {\small $p_3$}; \draw[thick, densely dashed] (S) -- ($(S)+0.35*(N)$); \draw[thick, densely dashed] (S) -- ($(S)-0.35*(N)$); % \end{tikzpicture} \captionof{figure}{A loop violating the constraints required to construct disjoint bounding volumes must be split beforehand.} \label{fig:loop} \end{minipage} \end{figure} \subsection{Efficient Hierarchical Pruning} Each region of the subdivision is conservatively bounded by an oriented cylinder, which is partitioned by a plane located at the split point and perpendicular to the curve's tangent at that point. The limit surface of these cylinders guarantees that an intersection is always mapped to the closest point on the curve. Only if the intersection of the ray with the plane lies inside the cylinder must both sub-regions be considered. Then, subdivision starts with the region whose bounding volume is intersected first along the ray.
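The subdivision rules of Sect.~\ref{sec:representation} for the tuple $(p, d, t_0, t_1)$ can be sketched as follows (an illustrative Python version; the actual implementation operates on four-dimensional control points that include the radius):

```python
import numpy as np

def subdivide(p, d, t0, t1):
    """One bisection step in the (p, d, t0, t1) curve representation:
    returns the tuples for the left and right half-curves. Thanks to the
    disjoint bounding volumes, only the half selected by the pruning test
    actually needs to be evaluated during traversal.
    """
    dp = 0.375 * t0 + 0.5 * d - 0.375 * t1    # Delta p
    tc = -0.125 * t0 + 0.25 * d - 0.125 * t1  # tangent at the split point
    left = (p, dp, 0.5 * t0, tc)
    right = (p + dp, d - dp, tc, 0.5 * t1)
    return left, right
```

A quick check against a de Casteljau split at $u=1/2$ confirms that $p^R = p + \Delta p$ equals the split point $(p_0+3p_1+3p_2+p_3)/8$.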
\Cref{fig:bounding_cylinder} shows an example with four possible cases. Note that the test for inclusion only requires comparing the distances of the two ray/cylinder intersections with the distance of the intersection with the plane. As these pruning tests are performed with disjoint bounding volumes and refinement always continues with the closest sub-region, subdivision can immediately terminate after an intersection has been found. Instant termination is essential for a high number of subdivisions because otherwise the number of unpruned regions may grow exponentially with subdivision depth. While bounding both subcurves in individual cylinders instead of using one partitioned cylinder improves the culling accuracy, the overhead of computing and intersecting two bounding volumes outweighs the theoretical benefit in practice, especially since the benefit quickly decreases with subdivision. \subsection{Implementation} Pruning is performed in a ray centric coordinate system, in which the ray starts in the origin and goes along the positive $z$ axis (``unit ray''). A reliable orthonormal basis can be efficiently constructed using Duff et al.'s recent improvement of Frisvad's method \cite{frisvad:onb:2012, duff:onb:2017}. The transformation into the local frame only needs to be performed once at the beginning by calculating a local set of control points. In this coordinate system, we can simplify ray/plane intersection and the infinite cylinder intersection described by Cychosz et al.~\cite{Cychosz:1994} significantly since $\forall v \in \mathbb{R}^3$: $\langle {v, d} \rangle = v_z$ and $v - o = v$ for a unit ray defined by its origin and direction $(o, d)$. \Cref{lst:ray-plane,lst:ray-cylinder} present the resulting optimized intersection functions; the simplified ray/cylinder intersection is derived in \Cref{sec:appendix_cylinder}. We use four-dimensional control points, where the first three components are the position, and the last one defines the radius. 
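The ray-centric frame can be set up as follows, a Python transcription of the branchless orthonormal-basis construction of Duff et al.\ \cite{duff:onb:2017} (the actual implementation would use single-precision floats):

```python
import math

def build_onb(nx, ny, nz):
    """Branchless construction of two vectors b1, b2 that complete the unit
    vector n = (nx, ny, nz) to an orthonormal basis, following Duff et
    al.'s improvement of Frisvad's method; here n would be the normalized
    ray direction of the "unit ray"."""
    s = math.copysign(1.0, nz)
    a = -1.0 / (s + nz)
    b = nx * ny * a
    b1 = (1.0 + s * nx * nx * a, s * b, -s * nx)
    b2 = (b, s + ny * ny * a, -ny)
    return b1, b2
```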
This consistent representation allows for cubic interpolation of the radius. Bounding cylinders are oriented along the vector connecting the first and last control point and have a conservative radius defined by the sum of the maximum radius in the region and the maximum distance of the inner control points to the cylinder axis. We also use B\'ezier curves for radius interpolation, and bound the parametric radius using the convex hull property. An example implementation for the computation of a conservative radius for the bounding cylinders and cubic B\'ezier curves is given in \Cref{lst:calc_radius}, using the distance of the two inner control points to the axis determined by the method shown in \Cref{lst:distance-point-line}. The infinite cylinders are cropped by restricting the $t$-parameter interval of the ray. After determining initial bounds of the $t$ parameter interval, in each subdivision step one of the interval bounds is updated; both are recalculated after backtracking. A simple implementation for cubic B\'ezier curves is shown in \Cref{lst:recalculation}. Recursive subdivision is performed as an iterative process by maintaining a bit string, in which each subdivision level is represented by one bit, together with the current size of the parametric domain. After the pruning test, the corresponding bit in the bit string is set to one if and only if both subregions must be considered, and only in this case backtracking is required. Then, the control points and the $t$ interval are recalculated to avoid maintaining a stack of control points. The current parametric interval of the curve can directly be derived from the bit string. It is always maintained in two integer variables for the start and size of the interval, which must be converted to floating point values in the unit interval before calculating new control points. This conversion is shown in \Cref{lst:calculate_interval}.
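The integer bookkeeping of the parametric interval and its conversion to the unit interval can be sketched as follows (illustrative Python; the bit-string backtracking and the listings referenced above are not reproduced here):

```python
def descend(start, size, go_right):
    """Bisect the current integer parametric interval, keeping either the
    left or the right half, exactly as one subdivision step does."""
    size >>= 1
    if go_right:
        start += size
    return start, size

def to_unit(start, size, scale):
    """Convert the integer (start, size) pair into floating-point bounds in
    the unit interval; scale is the integer size of the full domain, i.e.
    1 << max_depth. These bounds feed the control-point recalculation."""
    return start / scale, (start + size) / scale
```

For example, descending right, left, left from the full domain at maximum depth 3 yields the integer interval $(4, 1)$, i.e. the parametric range $[0.5, 0.625]$.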
Upon termination, the intersection of the ray with the bounding cylinder used for pruning already determines the intersection with the linearized segment. Only very little effort is required to compute the normal and the parametric value in the intersection, and this calculation is performed by all threads of a warp simultaneously in the very end. \Cref{lst:calc_intersection} shows a possible implementation. \begin{figure} \centering \begin{tikzpicture}[scale=0.35] \begin{scope}[rotate=-90] \draw[white] (-5,-2) rectangle (4,8); % \fill[NVGreen!70!black!70] (-2, 3) arc (180:0:2cm and 0.5cm) -- (2, 4) -- (-2, 4) -- cycle; \fill[NVGreen!70!black!70] (0, 3) circle (2cm and 0.5cm); % \fill[left color=gray!50!black,right color=gray!50!black,middle color=gray!50,shading=axis,opacity=0.25] (2,-2.95) -- (2,7.3) -- (-2,6) -- (-2,-0.175) -- cycle; \begin{scope} \clip[shift={(2.05, 1)}, rotate=20] (0,6) circle (2.12cm and 0.5cm); \fill[white] (2,0) -- (2,7.3) -- (-2,6) -- (-2,0) arc (180:360:2cm and 0.5cm); \end{scope} \begin{scope} \clip[shift={(-0.00, -1.55)}, rotate=-35] (0,0) circle(2.42cm and 0.5cm); \fill[white] (2,0) -- (2,-3.2) -- (-2,-3.2) -- (-2,0) -- cycle; \end{scope} \draw[white] (-2, 6) -- (2,7.3); % % \fill[shift={(0.0, -1.55)}, rotate=-35, top color=black!60!,bottom color=black!40,middle color=black!50,shading=axis,opacity=0.25] (0,0) circle(2.417cm and 0.5cm); \begin{scope} \clip (-2.05, -0.2) -- (-2.05, -3.2) -- (2.05, -3.2) -- (2.05, -2.9) -- cycle; \draw[shift={(0.00, -1.55)}, rotate=-35] (0,0) circle(2.417cm and 0.5cm); \end{scope} \begin{scope} \clip (-2.05, -0.2) -- (-2.05, 1) -- (2.05, 1) -- (2.05, -2.9) -- cycle; \draw[densely dashed, shift={(0.00, -1.55)}, rotate=-35] (0,0) circle(2.417cm and 0.5cm); \end{scope} % \begin{scope} \clip[shift={(0, -1.55)}, rotate=-35] (0,0) circle(2.42cm and 0.5cm); \fill[draw=gray!50,left color=gray!50!black,right color=gray!50!black,middle color=gray!50,shading=axis,opacity=0.15] (2,-4.9) -- (2,11.3) -- (-2,6) -- 
(-2,-4.2) -- cycle; \end{scope} % \fill[shift={(2.05, 1)}, rotate=20,top color=black!60!,bottom color=black!40,middle color=black!50,shading=axis,opacity=0.5] (0,6) circle (2.12cm and 0.5cm); \draw[shift={(2.05, 1)}, rotate=20] (0,6) circle (2.12cm and 0.5cm); \draw (-2,6) -- (-2,-0.25); \draw (2,7.3) -- (2,-2.9); % \fill[NVGreen!70] (-4, 2) -- (-3.5, 3) -- (-2, 3) arc (180:360:2cm and 0.5cm) -- (3.5, 3) -- (4, 2) -- cycle; \fill[NVGreen!70] (-4, 2) -- (-3, 4) -- (-2, 4) -- (-2, 2) -- cycle; \fill[NVGreen!70] (4, 2) -- (3, 4) -- (2, 4) -- (2, 2) -- cycle; % \draw[densely dashed] (-2,3) arc (180:0:2cm and 0.5cm); \draw (-2,3) arc (180:360:2cm and 0.5cm); \draw (-4, 2) -- (4, 2) -- (3, 4) -- (2, 4); \draw[densely dashed] (2, 4) -- (-2, 4); \draw (-2, 4) -- (-3, 4) -- (-4, 2); % \fill[NVGreen!70!black!90] (-2, 2) -- (-2, 3) arc (180:360:2cm and 0.5cm) -- (2, 2) -- cycle; \draw[densely dashed] (2, 2) -- (2, 3); \draw[densely dashed] (-2, 2) -- (-2, 3); % \draw (2, 2) -- (-2, 2); \draw (-2, 3) -- (-2, 5); \draw (2, 3) -- (2, 5); % \draw[thick](-5, -2) -- (-2.5, 3); \fill[NVGreen!70!black!90] (-2.5,3) circle(0.2cm and 0.05cm); \draw[thick, -latex, shorten >= -20] (-2.5, 3) -- (0, 8) node[pos=1.25, below, font=\small] {\soutthick{left}, right}; % \draw[thick](-5, -2) -- (-1.5, 3); \fill[NVGreen!50!black!90] (-1.5,3) circle(0.2cm and 0.05cm); \draw[thick, -latex, shorten >= -20] (-1.5, 3) -- (2, 8) node[pos=1.25, below, font=\small] {left, right}; % \draw[thick, -latex](-5, -2) -- (3, -4.5) node[pos=1.0, below, font=\small] {\soutthick{left}, \soutthick{right}}; % \draw[thick, -latex](0, 3) -- (3, 0) node[pos=1.0, below, font=\small] {right, left}; \fill[NVGreen!50!black!90] (0,3) circle(0.2cm and 0.05cm); \draw[thick](-5, 8) -- (0, 3); \end{scope} \end{tikzpicture} \vspace*{-2ex} \caption{Bounding cylinders with partitioning planes improve the efficiency of pruning tests and allow for instant termination after an intersection has been found.} \label{fig:bounding_cylinder} 
\vspace*{-0.5ex} \end{figure} \subsection{Constraints and Limitations} A bisection into subregions that can be bounded by disjoint bounding volumes poses well-defined restrictions on the allowed sets of control points. At the same time, the constraints also ensure that curves with valid configurations cannot be split into invalid ones. For cubic B\'ezier curves the constraints \begin{align*} \langle {p_2 - p_0, p_1 - p_0} \rangle &\geq 0,\\ \langle {p_3 - p_1, p_1 - p_0} \rangle &\geq 0,\\ \langle {p_3 - p_1, p_3 - p_2} \rangle &\geq 0,\\ \langle {p_2 - p_0, p_3 - p_2} \rangle &\geq 0,\\ \langle {p_2 - p_0, p_3 - p_1} \rangle &\geq 0 \end{align*} guarantee that the curve can be recursively split into subcurves with disjoint bounding volumes. \Cref{sec:appendix_constraints} provides the constraints required for quadratic B\'ezier curves and all necessary proofs for both quadratic and cubic B\'ezier curves. Curves that do not fulfill these constraints must be subdivided beforehand. An example of such a configuration is shown in \Cref{fig:loop}. While these constraints are necessary, they are not sufficient to guarantee disjoint bounding volumes of fibers: If a point on the surface (perpendicular to the tangent $t_u$ at the curve point $p_u$, at a distance defined by the radius $r_u$ at that point) intersects the partitioning plane at the split point, a valid part of the surface of the fiber will be cropped. % \begin{figure} \centering \includegraphics[width=.5\textwidth]{radius-disjoint.pdf} \caption{While curves may be split disjointly in cusps, thicker fibers can still overlap splitting planes, causing visible defects. While the left fiber does not suffer from this issue, the fiber in the middle, which goes along the same curve but has a larger radius, does. 
Of course, fibers with parametric radius (right) can also overlap partitioning planes.} \label{fig:issue-cusps} \end{figure} \Cref{fig:issue-cusps} shows three examples: The leftmost fiber with a small, constant radius does not suffer from this issue, while the surfaces of the other two intersect the split plane as their radii are too large, and thus errors may be introduced. Besides thick fibers, high curvatures (e.g. in cusps) can cause similar issues. Regions with such behavior must be isolated and require subdivision beforehand, too. Note that a similar issue affects methods based on the closest point of approach. \begin{figure}% \centering \begin{tikzpicture} \coordinate (A) at (0, 0); \coordinate (B) at (2, 1); \coordinate (C) at (4, 1); \coordinate (D) at (6, 0); \draw[line width=25, opacity=0.1] (A) .. controls (B) and (C) .. (D); \draw[densely dashed] (A) .. controls (B) and (C) .. (D); \draw (5.5, -1) -- (6.5, 1); \draw[-latex, shorten >= 7] (D) -- (7, -0.5) node[font=\small] {$\hat t_1$}; \draw[-latex, shorten >= 7] (3, 0.75) -- (5, 0.75) node[font=\small] {$\hat t_u$}; \draw[-latex, shorten >= 7] (3, 0.75) -- (3, -1.25) node[font=\small] {$\hat n$}; \node[anchor = south, font=\small] at (3, 0.75) {$p_u$}; \fill (3, 0.75) circle (0.05); \fill (A) circle (0.05); \fill (D) circle (0.05); \node[anchor=north west, font=\small, xshift=-4] at (D) {$p_3$}; \end{tikzpicture} \caption{Illustration of finding the point on the normal plane at a curve point that is closest to a partitioning plane. \label{Fig:EvilConfiguration}} % \end{figure} Such configurations can be identified as shown in \Cref{Fig:EvilConfiguration}: The surface point closest to the plane through the split point perpendicular to the split tangent $t_1$ lies in the plane perpendicular to the tangent $t_u$ at the parametric position. 
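Numerically, the construction shown in \Cref{Fig:EvilConfiguration} amounts to a single Gram-Schmidt step. The following Python sketch (our own helper, not one of the paper's listings) computes the direction in the normal plane of a curve point that points towards the split plane:

```python
# Project the split tangent t1 onto the plane perpendicular to the local
# tangent tu (one Gram-Schmidt step): n_u = t1 - (<t1, tu>/<tu, tu>) tu.
# Vectors are plain 3-tuples; this is an illustrative sketch only.

def normal_plane_direction(t1, tu):
    dot = sum(a * b for a, b in zip(t1, tu))
    norm2 = sum(a * a for a in tu)
    return tuple(a - (dot / norm2) * b for a, b in zip(t1, tu))
```

By construction the result is orthogonal to the local tangent, which is exactly the property exploited in the derivation that follows.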
Gram-Schmidt orthogonalization yields the displacement from the curve closest to the split plane \begin{align*} n_u &= t_1 - \langle {t_1, \hat t_u} \rangle \cdot \hat t_u\\ &= t_1 - \frac{\langle {t_1, t_u} \rangle t_u}{\langle {t_u, t_u} \rangle}\\ &\propto \langle {t_u, t_u} \rangle t_1 - \langle {t_1, t_u} \rangle t_u{}, \end{align*} where $t_u = (1-u)^2(p_1 - p_0) + 2(1-u)u(p_2 - p_1) + u^2(p_3 - p_2)$ and $t_1 = p_3 - p_2$ for a cubic B\'ezier curve. As $n_u$ has a maximum degree of 4, checking all solutions $u_i \in [0, 1]$ of \begin{align*} \langle {p_3 - p_u + r_u \cdot \hat n_u, \hat t_1} \rangle &= 0&\textrm{or just}\\ \langle {p_3 - p_u + \bar r \cdot \hat n_u, \hat t_1} \rangle &= 0,&r_u \leq \bar r \end{align*} requires solving a quartic equation, e.g. using Ferrari's method \cite{Smith:1929}, eliminating the cubic term using the Tschirnhaus transformation \cite{Boyer:1968}. As the curvature decreases with subdivision (% see subdivision rules in \Cref{sec:representation}% ) and the maximum radius $\bar r$ is a constant, it is sufficient to check each fiber only initially at its two end points with corresponding tangents. % While it would be trivially possible to support all possible curve configurations and overcome the issues in cusps and with very thick fibers by optionally allowing overlapping bounding volumes in certain subdivisions, the overhead of checking the criteria in every subdivision step and recording the ones that overlap will most likely not pay off in practice since the number of such configurations is usually small. As furthermore the cost of splitting beforehand is rather moderate, the overall penalty tends to be negligible. \section{Results and Discussion} We evaluated the performance and precision of the presented algorithm using single fibers along quadratic and cubic B\'ezier curves. 
\begin{figure} \centering \begin{tabular}{@{}ccc@{}} \includegraphics[width=.21\textwidth]{Nakamaru-12-subdivs.png}&% \includegraphics[width=.21\textwidth]{Ours-12-subdivs.png}&% \begin{tikzpicture}[scale=0.233] \clip (-0.03, -0.03) rectangle (3, 12.03); \pgfdeclareverticalshading{flame} {3cm}{rgb(0cm)=(0,0,0); rgb(0.5cm)=(0,0,0); rgb(1cm)=(1,0,0); rgb(2cm)=(1,1,0); rgb(2.5cm)=(1,1,1); rgb(3cm)=(1,1,1)} \draw[shading=flame](0,0) rectangle (1,12); \node[anchor=south, font=\tiny, yshift=-3] at (1.6,0){0}; \node[font=\tiny] at (1.6,4){15}; \node[font=\tiny] at (1.6,8){30}; \node[anchor=north, font=\tiny, yshift=3] at (1.6,12){75}; \end{tikzpicture}% % % \end{tabular} \vspace*{-2ex} \caption{Pruning with oriented cylinders (right) dramatically lowers the total number of regions of interest by reducing the number of false positives compared to pruning with axis aligned bounding boxes (left), especially for a high number of subdivisions.} \label{fig:heatmap-pruning} \end{figure} \Cref{fig:heatmap-pruning} compares the number of pruning tests required for our method to pruning with axis aligned bounding boxes for a high number of subdivisions. As expected, the overlapping axis aligned bounding boxes result in an exponential growth of regions that cannot be pruned. Note that the bounding boxes are aligned to the main axes of the coordinate system of each ray, hence they appear to be warped. \Cref{fig:performance} shows the performance for intersecting $\sim$1 million rays with a single fiber on an NVIDIA Titan V. While the relative performance does depend on the orientation and curvature of the fiber, as pruning with axis aligned bounding boxes benefits from straight, aligned fibers, the main issue for the dramatic slowdown of the state of the art remains: After a certain number of subdivisions, the bounding boxes of two curves resulting from the subdivision of a region overlap almost entirely. 
Then, none of them can be pruned and a significant amount of additional backtracking is required. Nakamaru's method \cite{Nakamaru:2002} does not take the ray direction into account when deciding which subcurve is checked first, thereby causing exponential growth of valid regions in the worst case. Hybrids between the two methods, i.e. pruning with axis aligned bounding boxes, taking ray direction into account, and calculating the final intersection with cylinders, not only suffer from divergence caused by the additional test, but are mostly limited by the pruning inefficiency, and therefore may only be valuable in rare cases. Combining the fiber intersection with a top-level hierarchy referencing fibers, or regions on fibers if the fiber must be split beforehand, is straightforward. The top-level hierarchy is primarily orthogonal to fiber intersection unless fibers are partitioned into very small regions. Then, the increased efficiency of the pruning tests of the presented method becomes even more important. Note that in practice, intersection cost is almost always dominated by the performance of traversing the hierarchy referencing the fibers and other geometry in the scene. Nevertheless, we calculate accurate intersections and normals on the surface in a reliable way and do so either with a small overhead or -- if high precision is of concern -- dramatically faster. \section{Conclusion} We have presented an algorithm that outperforms the state-of-the-art subdivision-based ray/fiber intersection method significantly for a high number of subdivisions, and computes accurate intersections on the surface of the fiber with an accurate normal. In addition, we also determine a precise parametric position. 
Even for small numbers of subdivisions, which quite often lead to visible artifacts in Nakamaru's method \cite{Nakamaru:2002}, the approximation error of our algorithm is well understood and the overhead of pruning with oriented cylinders instead of axis aligned bounding boxes remains reasonable. While the algorithm cannot handle arbitrary fibers, configurations that cannot be supported can be identified in advance. Subdividing such fibers resolves the issues. Future opportunities include displacements and arbitrary contours, both only requiring an additional intersection test after pruning with conservative bounding cylinders.
\section{Introduction} We denote by $L^2(\mu_m)$ the space of square integrable functions in ${\mathbb C}$ with respect to the measure $d\mu_m(z):= \frac{1}{\pi}e^{-m|z|^2}\, dA(z)$, where $m>0$ and $A$ denotes the Lebesgue area measure in ${\mathbb C}$. The classical Fock space $\mathcal{F}_{m}$ is given by $$ \mathcal{F}_{m}=\Bigl\{f \hbox{ entire} :\ \int_{\mathbb C} |f(z)|^2\, d\mu_m (z)<\infty\Bigr\}. $$ For an entire function $g$ such that $z^n g \in L^2(\mu_m)$ ($n \in {\mathbb N}$), the big and small Hankel operators with symbol $\bar{g}$ are densely defined on $\mathcal{F}_{m}$ by $$H_{\bar{g}}f=(I-P)(\bar{g}f)\,,\quad h_{\bar{g}}f=Q(\bar{g}f)\,,$$ where $f$ is an analytic polynomial, and $P$ and $Q$ are the orthogonal projections from $L^2(\mu_m)$ onto $\mathcal{F}_{m}$, respectively onto $\overline{\mathcal{F}_{m}^0}=\{\bar f :\ f \in \mathcal{F}_{m},\ f(0)=0\}$. Given a closed subspace $Y$ of $L^2(\mu_m)$ such that $\overline{\mathcal{F}_{m}^0} \subset Y \subset (\mathcal{F}_{m})^\perp$, we define the {\it middle}, or {\it intermediate}, Hankel operator with symbol $\bar g$ by $$H_{\bar{g}}^Yf=P_Y(\bar{g}f)\,,$$ where $f$ is an analytic polynomial, and $P_Y$ is the orthogonal projection from $L^2(\mu_m)$ onto $Y$. Notice that, for polynomials $f$, the following holds: $$ \|h_{\bar{g}}f\|_{L^2(\mu_m)}\le \|H_{\bar{g}}^Yf\|_{L^2(\mu_m)}\le \|H_{\bar{g}}f\|_{L^2(\mu_m)}. $$ Middle Hankel operators on Bergman spaces were considered by Jansson and Rochberg \cite{rochberg1}, as well as by Peng, Rochberg and Wu \cite{rochberg}, where interesting properties regarding Schatten classes were studied. There is one notable difference between the behaviour of such operators on Bergman and Fock spaces. Namely, in the Bergman setting the big Hankel operator is bounded if and only if the little Hankel operator is bounded, if and only if the complex conjugate of the symbol belongs to the Bloch space. 
This makes the characterisation of the boundedness of middle Hankel operators on Bergman spaces trivial. On the Fock space, however, there is a big gap between the class of symbols generating bounded big Hankel operators (which is the space of polynomials of degree one) and that generating bounded little Hankel operators (which is the space of the entire functions $f$ such that $|f(z)| \lesssim e^{m|z|^2/4}$). This makes the study of middle Hankel operators on Fock spaces of interest. In this paper we construct in a natural way a sequence of middle Hankel operators on Fock spaces, $( H_{\bar{g}}^{Y_N})_{N \in {\mathbb N}}$, ``increasing'' in the sense that $\overline{\mathcal{F}_{m}^0}\subset \cdots \subset Y_{N+1}\subset Y_{N}\subset \cdots \subset Y_1=(\mathcal{F}_{m})^\perp$, and such that $ H_{\bar{g}}^{Y_N}$ is bounded if and only if $g$ is a polynomial of degree less than or equal to $N$. The operators we consider are very much related to the minimal $L^2(\mu_m)$-norm solution operator to the equation \begin{equation}\label{dbar} (\bar\partial)^N u=f \end{equation} where the above holds in the sense of distributions and $f \in \mathcal{F}_{m}$. It is easy to see that, for any polynomial $f$, the function $\frac{{\bar z}^N}{N!}\,f$ solves \eqref{dbar} and hence the solution of minimal norm is given by \begin{equation}\label{sdbar} u=(I-P_{F^{N,m}})(\tfrac{{\bar z}^N}{N!}\,f)\,, \end{equation} where $F^{N,m}$ is the kernel of $(\bar\partial)^N$ in $L^2(\mu_m)$, that is, the Fock space of polyanalytic functions of order $N$, equivalently \begin{equation}\label{pf} F^{N,m}=\overline{\text{span}\{\bar z^j z^k:\ 0 \le j \le N-1\,,\ k \in {\mathbb N}\}}^{L^2(\mu_m)}\,. \end{equation} Obviously $F^{1,m}=\mathcal{F}_{m}$. Polyanalytic Fock spaces and kernels appear naturally in time-frequency analysis as well as in the mathematical analysis of Landau levels \cite{groechenig,feichtinger,hedenmalm,vasilevski}. 
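The fact that $\frac{\bar z^N}{N!}\,f$ solves \eqref{dbar} can be checked in one line: since $f$ is entire, $\bar\partial f=0$, so each application of $\bar\partial$ lowers the antiholomorphic power by one,

```latex
\bar\partial\Bigl(\frac{\bar z^k}{k!}\,f\Bigr)
  =\frac{\bar z^{k-1}}{(k-1)!}\,f\,,\qquad k\ge 1,
```

and iterating $N$ times gives $(\bar\partial)^N\bigl(\frac{\bar z^N}{N!}\,f\bigr)=f$.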
The connections between big Hankel operators and the minimal norm solution operator to $\bar\partial$ (i.e. the case $N=1$) were previously exploited in \cite{rochberg2,haslinger,ortega}. Inspired by \eqref{sdbar}, we introduce the following operator $$\tilde H_{\bar{g}}^Nf=(I-P_{F^{N,m}})({\bar g} f)\,,$$ where $f$ is an analytic polynomial. For $N=1$ we recover the big Hankel operator with symbol $\bar g$, which was studied in \cite{bommier} in several complex variables. The case $N=2$ was treated in \cite{schneider} for monomial symbols. Regarding the boundedness and compactness of $\tilde H^N_{\bar g}$ we prove the following \begin{Theorem}\label{boundedness} Let $g$ be an entire function such that $z^n g \in L^2(\mu_m)$ for all $n \in {\mathbb N}$. Then $\tilde H^N_{\bar g}$ extends to a bounded operator on $\mathcal{F}_{m}$ if and only if $g$ is a polynomial of degree at most $N$. Moreover, $\tilde H^N_{\bar g}$ is compact if and only if $g$ is a polynomial of degree strictly smaller than $N$, that is, if and only if $\tilde H^N_{\bar g}$ is the zero operator. \end{Theorem} In particular, the above theorem shows that the minimal norm solution operator to $(\bar\partial)^N$ is (bounded but) not compact. Since $\overline{\mathcal{F}_{m}^0}\not \subset (F^{N,m})^\perp$, the operator $\tilde H_{\bar{g}}^N$ is not a middle Hankel operator; however, it differs by a finite rank operator from the middle Hankel operator $$H_{\bar{g}}^{Y_N}(f)=(I-P_{S^{N,m}})({\bar g} f)\,,$$ where $$ S^{N,m}:= \overline{\text{span}\{\bar z^j z^k:\ 0 \le j \le N-1\,,\ k\ge j,\ k \in {\mathbb N}\}}^{L^2(\mu_m)}, $$ and $Y_N=(S^{N,m})^\perp$. It is easy to see that $$\overline{\mathcal{F}_{m}^0} \subset \cdots \subset Y_{N+1}\subset Y_N\subset \cdots \subset Y_1= (\mathcal{F}_{m})^\perp$$ and hence the operators $ H_{\bar{g}}^{Y_N}$ $(N\ge 1)$ are middle Hankel operators. 
Also, since $z^ng\in L^2(\mu_m)$ ($n\in\mathbb{N}$), the difference $(H_{\bar{g}}^{Y_N} - \tilde H_{\bar{g}}^N)(f)=P_{\mathrm{span}\{\bar z^j z^k:\ 0\le k< j,\ 0 \le j \le N-1\}} (\bar g f) $ has finite rank. In particular, it follows that $ H_{\bar{g}}^{Y_N}$ is bounded (compact) if and only if $\tilde H_{\bar{g}}^N$ is bounded (compact), and hence we get \begin{Corollary} Let $g$ be an entire function such that $z^n g \in L^2(\mu_m)$ for all $n \in {\mathbb N}$. Then $H^{Y_N}_{\bar g}$ extends to a bounded operator on $\mathcal{F}_{m}$ if and only if $g$ is a polynomial of degree smaller than or equal to $N$. Moreover, $H^{Y_N}_{\bar g}$ is compact if and only if $g$ is a polynomial of degree smaller than $N$. \end{Corollary} \noindent Notice that, unlike $\tilde H_{\bar{g}}^N$, the operator $H^{Y_N}_{\bar g}$ is not identically zero if $g$ is a non-constant polynomial of degree smaller than $N$. \section{Proof of the main result} Theorem \ref{boundedness} will follow from the characterization of the boundedness and compactness of $\tilde H^N_{\bar z^s}$ for $s\in\mathbb{N}$. To this end, we begin by explicitly computing the projection on $F^{N,m}$ of the monomials $\bar z^s z^n$, for $s,n\in\mathbb{N}$. An orthonormal basis of $F^{N,m}$ (see e.g. \cite{hedenmalm}) is given by \begin{eqnarray*} e_{i,r}^1(z)&:=& \sqrt{\tfrac{r!}{(r+i)!}}\,m^{(i+1)/2} z^i L_r^i(m|z|^2)\,,\qquad i \ge 0\,,\quad 0 \le r \le N-1\,,\\ e_{j,k}^2(z)&:=& \sqrt{\tfrac{j!}{(j+k)!}}\,m^{(k+1)/2} \bar{z}^k L_j^k(m|z|^2)\,,\qquad 0\le j \le N-k-1\,,\quad 1 \le k \le N-1\,, \end{eqnarray*} where $L_k^\alpha(x):= \sum_{i=0}^k (-1)^i { k + \alpha \choose k-i} \,\frac{x^i}{i!}$ are the generalized Laguerre polynomials. For $N=1$ we obtain the standard orthonormal basis of $\mathcal{F}_{m}$: $ e_n(z):=e_{n,0}^1(z)=\frac{m^{(n+1)/2}}{\sqrt{n!}} z^n,\quad n\ge 0. 
$ \begin{Proposition} For $s,\,n \in {\mathbb N}$ we have $$P_{F^{N,m}}(\bar z^s z^n)= \left\{\begin{array}{l} \displaystyle\sum_{r=0}^{N-1} \frac{r!}{(r+n-s)!}\,m^{-s} I_{n,r,(n-s)} L_r^{n-s}(m|z|^2) z^{n-s}\quad\text{if}\quad n \ge s\,,\\[0.67cm] \displaystyle\sum_{j=0}^{N+n-s-1} \frac{j!}{(j+s-n)!}\,m^{-n} I_{s,j,(s-n)} L_j^{s-n}(m|z|^2) \bar{z}^{s-n}\quad\text{if}\quad s>n \ge s-N+1\,,\\[0.25cm] 0 \quad\text{otherwise}\,, \end{array}\right.$$ where $I_{a,b,c}=\int_0^\infty y^a L_b^c(y) e^{-y}\,dy\quad\text{for}\quad a,\,b,\, c \in {\mathbb N}.$ \end{Proposition} {\it Proof.} The result follows by using polar coordinates in \begin{equation*}\label{1} P_{F^{N,m}}(\bar{z}^s z^n) = \sum_{r=0}^{N-1}\sum_{i=0}^\infty \langle \bar{w}^s w^n, e_{i,r}^1\rangle e_{i,r}^1(z) + \displaystyle\sum_{k=1}^{N-1}\sum_{j=0}^{N-k-1} \langle \bar{w}^s w^n, e_{j,k}^2\rangle e_{j,k}^2(z)\,. \qquad\qquad\qquad\qquad\Box \end{equation*} \begin{Theorem}\label{boundedness1} Let $s \in {\mathbb N}$. Then $\tilde{H}_{\bar{z}^s}^N$ extends to a bounded operator from ${\mathcal F}_{m}$ to $L^2(\mu_m)$ if and only if $s \le N$. Moreover, $\tilde{H}_{\bar{z}^s}^N$ is compact if and only if it is the zero operator. \end{Theorem} \begin{proof} We first notice that $\tilde{H}_{\bar{z}^s}^N=0$ for $s<N$ since in this case $\bar{z}^sf \in F^{N,m}$ for any analytic polynomial $f$. We therefore assume that $s \ge N$. 
Using Proposition 1, a simple calculation involving polar coordinates shows that for any $p,\,n \in {\mathbb N}$ with $p \neq n$ we have $ \langle \tilde{H}_{\bar{z}^s}^N e_p,\,\tilde{H}_{\bar{z}^s}^N e_n \rangle =0\,.$ Let us now evaluate \begin{eqnarray} \langle \tilde{H}_{\bar{z}^s}^N e_n,\,\tilde{H}_{\bar{z}^s}^N e_n \rangle &=& \frac{m^{n+1}}{n!}\, \Big\{\langle \bar{z}^s z^n,\bar{z}^s z^n \rangle - \langle \bar{z}^s z^n,\,P_{F^{N,m}}(\bar{z}^s z^n) \rangle\Big\} \nonumber \\ &=& \frac{(n+s)!}{n!m^s} - \frac{m^{n+1}}{n!}\,\langle \bar{z}^s z^n,\,P_{F^{N,m}}(\bar{z}^s z^n) \rangle \,.\label{2} \end{eqnarray} In view of Proposition 1 we have \begin{eqnarray} \frac{m^{n+1}}{n!}\,\langle \bar{z}^s z^n,\,P_{F^{N,m}}(\bar{z}^s z^n) \rangle &=& \frac{m^{n-s+1}}{n!}\,\sum_{r=0}^{N-1} \frac{r!}{(r+n-s)!}\,I_{n,r,n-s}\, \langle \bar{z}^s z^n,\,L_r^{n-s}(m|z|^2) z^{n-s} \rangle \nonumber\\ &=& \frac{m^{-s}}{n!} \,\sum_{r=0}^{N-1} \frac{r!}{(r+n-s)!}\,(I_{n,r,n-s})^2 \,.\label{3} \end{eqnarray} Now \begin{eqnarray*} I_{n,r,n-s} &=& \int_0^\infty y^n L_r^{n-s}(y) \,e^{-y} dy = \sum_{i=0}^r \frac{(-1)^i}{i!}\,\begin{pmatrix} r+n-s \\ r-i \end{pmatrix} \int_0^\infty y^{n+i}e^{-y}\,dy \\ &=& \sum_{i=0}^r (-1)^i\,\begin{pmatrix} r+n-s \\ r-i \end{pmatrix}\frac{(n+i)!}{i!} = \frac{s! (r+n-s)!}{r!} \, \sum_{i=0}^r (-1)^i \begin{pmatrix} r \\ i \end{pmatrix} \begin{pmatrix} n+i \\ s \end{pmatrix} \\ &=& \frac{s! (r+n-s)!}{r!} \, (-1)^r \begin{pmatrix} n \\ s-r \end{pmatrix} \end{eqnarray*} by the combinatorial identity (see relation $(3.47)$ on page 27 in \cite{gould}) $$ \sum_{i=0}^r (-1)^i \begin{pmatrix} r \\ i \end{pmatrix} \begin{pmatrix} n+i \\ s \end{pmatrix}=(-1)^r \begin{pmatrix} n \\ s-r \end{pmatrix}. 
$$ Replacing $I_{n,r,n-s}$ by the above expression in relation \eqref{3} yields \begin{eqnarray*} \frac{m^{n+1}}{n!}\,\langle \bar{z}^s z^n,\,P_{F^{N,m}}(\bar{z}^s z^n) \rangle &=& \frac{1}{n!m^s} \, \sum_{r=0}^{N-1} \frac{r!}{(r+n-s)!}\,\Big[\frac{(r+n-s)!}{r!} \Big]^2 (s!)^2 \,\Big[\begin{pmatrix} n \\ s-r \end{pmatrix}\Big]^2 \\ &=& \frac{s!}{m^s}\,\sum_{r=0}^{N-1} \begin{pmatrix} n \\ s-r \end{pmatrix} \begin{pmatrix} s \\ r \end{pmatrix}\,. \end{eqnarray*} Returning to \eqref{2} we now deduce that $$ \Vert \tilde{H}_{\bar{z}^s}^N e_n \Vert^2 = \frac{s!}{m^s}\,\Big\{ \frac{(n+s)!}{n!s!} - \sum_{r=0}^{N-1} \begin{pmatrix} n \\ s-r \end{pmatrix} \begin{pmatrix} s \\ r \end{pmatrix}\Big\}\,.$$ In view of the combinatorial identity $\sum\limits_{r=0}^{s} {n\choose s-r} {s \choose r} ={n+s\choose s} $, for $s=N$ we get $$ \Vert \tilde{H}_{\bar{z}^s}^N e_n \Vert^2 =\frac{s!}{m^s} \quad\text{for any}\quad n \ge s\,,$$ so that $\tilde{H}_{\bar{z}^s}^N$ is bounded. If $s>N$ we obtain \begin{eqnarray*} \Vert \tilde{H}_{\bar{z}^s}^N e_n \Vert^2 &=& \frac{s!}{m^s} \,\Big\{ \begin{pmatrix} n+s \\ s \end{pmatrix} - \begin{pmatrix} n+s \\ s \end{pmatrix} + \sum_{r=N}^{s} \begin{pmatrix} n \\ s-r \end{pmatrix} \begin{pmatrix} s \\ r \end{pmatrix} \Big\} = \frac{s!}{m^s} \, \sum_{r=N}^{s} {n \choose s-r} {s \choose r } \end{eqnarray*} which is a polynomial of degree $s-N$ in $n$. Thus $\tilde{H}_{\bar{z}^s}^N$ is unbounded for $s>N$. \end{proof} {\it Proof of Theorem \ref{boundedness}.} The result for general symbols now follows from Theorem \ref{boundedness1} by an adaptation of the approach in Lemmas 5.3-5 from \cite{bommier} to our context. \hfill $\Box$ \medskip
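As a quick numerical sanity check of the two combinatorial identities invoked above (the Gould identity and the Vandermonde-type sum), one can run the following Python fragment (function names are ours):

```python
# Check sum_{i=0}^r (-1)^i C(r,i) C(n+i,s) = (-1)^r C(n,s-r)   (Gould (3.47))
# and   sum_{r=0}^s C(n,s-r) C(s,r) = C(n+s,s)                 (Vandermonde).
from math import comb

def gould_lhs(r, n, s):
    return sum((-1) ** i * comb(r, i) * comb(n + i, s) for i in range(r + 1))

def gould_rhs(r, n, s):
    # math.comb(n, k) is 0 for k > n; guard against negative k ourselves
    return (-1) ** r * (comb(n, s - r) if s - r >= 0 else 0)

def vandermonde(n, s):
    return sum(comb(n, s - r) * comb(s, r) for r in range(s + 1))
```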
\section{Introduction}\label{section1} The $q$-Painlev\'e equations \cite{KNY, RGH} are $q$-difference analogs of the Painlev\'e equations, which were introduced as new special functions beyond elliptic functions and the hypergeometric functions more than one hundred years ago \cite{Gambier, P,P+}, and are now considered as important special functions with many applications both in mathematics and physics. As for other integrable systems, tau functions play a crucial role in the study of the Painlev\'e equations. The recent discovery of \cite{GIL} states that the tau function of the sixth Painlev\'e equation is a Fourier transform of Virasoro conformal blocks with $c=1$, which admit explicit combinatorial formulas by AGT correspondence~\cite{AGT}. Series representations of the tau functions of other types are also studied in \cite{BLMST,GIL1,N,N1} for differential cases, and in \cite{BS,BS1,JNS} for $q$-difference cases. In \cite{JNS}, a general solution ($y,z$) to the $q$-Painlev\'e VI equation \cite{JS} was expressed by the tau functions having $q$-Nekrasov type expressions, and it was conjectured that the tau functions satisfy the bilinear equations for the $q$-Painlev\'e VI equation. In this paper, we give explicit expressions for general solutions to the $q$-Painlev\'e V, $\mathrm{III_1}$, $\mathrm{III_2}$, and $\mathrm{III_3}$ equations using degenerations of the tau functions of the $q$-Painlev\'e VI equation. We also give conjectures on the bilinear equations satisfied by the tau functions of the $q$-Painlev\'e V equation and prove that the tau functions of the $q$-Painlev\'e $\mathrm{III_1}$, $\mathrm{III_2}$, and $\mathrm{III_3}$ equations satisfy the bilinear equations. Our $q$-difference equations are as follows. 
\noindent (i) the $q$-Painlev\'e VI equation: \begin{gather*} \frac{y\overline{y}}{a_3a_4}= \frac{(\overline{z}-b_1t)(\overline{z}-b_2t)}{(\overline{z}-b_3)(\overline{z}-b_4)} , \qquad \frac{z\overline{z}}{b_3b_4}= \frac{(y-a_1t)(y-a_2t)}{(y-a_3)(y-a_4)} \end{gather*} (ii) the $q$-Painlev\'e V equation: \begin{gather*} \frac{y \overline y}{a_3 a_4}=-\frac{(\overline z-b_1 t)(\overline z -b_2 t)}{\overline z-b_3},\qquad \frac{z \overline z}{b_3}=-\frac{(y-a_1t)(y-a_2 t)}{a_4(y-a_3)} \end{gather*} (iii) the $q$-Painlev\'e $\mathrm{III}_1$ equation: \begin{gather*} \frac{y \overline y}{a_3 a_4}=-\frac{\overline z(\overline z -b_2 t)}{\overline z-b_3},\qquad \frac{z \overline z}{b_3}=-\frac{y(y-a_1 t)}{a_4(y-a_3)} \end{gather*} (iv) the $q$-Painlev\'e $\mathrm{III}_2$ equation: \begin{gather*} \frac{y \overline y}{a_3 a_4}=-\frac{\overline z^2}{\overline z-b_3} ,\qquad \frac{z \overline z}{b_3}=-\frac{y(y-a_2 t)}{a_4(y-a_3)} \end{gather*} (v-1) the $q$-Painlev\'e $\mathrm{III}_3$ equation of surface type $A_7^{(1)\prime}$: \begin{gather*} \frac{y \overline y}{a_3 }=\overline z^2 ,\qquad z \overline z=-\frac{y(y-a_2 t)}{y-a_3} \end{gather*} (v-2) the $q$-Painlev\'e $\mathrm{III}_3$ equation of surface type $A_7^{(1)}$: \begin{gather*} \frac{y \overline y}{a_3}=-\frac{\overline z^2}{\overline z-b_3} ,\qquad z \overline z=\frac{y(y-a_2 t)}{a_2}. \end{gather*} Here, $y$, $z$ are functions of $t$, $\overline y=y(qt)$, $\overline z=z(qt)$, and $a_i$, $b_i$ ($i=1,2,3,4$) are parameters. From the point of view of Sakai's classification for the discrete Painlev\'e equations~\cite{S}, the $q$-Painlev\'e VI, V, $\mathrm{III}_1$, $\mathrm{III}_2$ and $\mathrm{III}_3$ equations are derived from the symmetries/surfaces of type $D_5^{(1)}/A_3^{(1)}$, $A_4^{(1)}/A_4^{(1)}$, $E_2^{(1)}/A_5^{(1)}$, $E_2^{(1)}/A_6^{(1)}$ and $A_1^{(1)}/A_7^{ (1)}$, respectively. 
The degeneration scheme of Painlev\'e equations is as follows: \begin{gather*} \begin{diagram} \node{\mathrm{P_{VI}}}\arrow{e} \node{\mathrm{P_{V}}}\arrow{e}\arrow{se} \node{\mathrm{P_{III_1}}}\arrow{e}\arrow{se} \node{\mathrm{P_{III_2}}}\arrow{e}\arrow{se} \node{\mathrm{P_{III_3}}} \\ \node[3]{\mathrm{P_{IV}}}\arrow{e} \node{\mathrm{P_{II}}}\arrow{e} \node{\mathrm{P_I}.} \end{diagram} \end{gather*} The degeneration pattern of the $q$-Painlev\'e equations we use is similar to the one in~\cite{Mu} but not exactly the same. Rather, our limiting procedure is a $q$-version for the one used in~\cite{GIL1} in order to derive combinatorial expressions of tau functions of~$\mathrm{P_V}$, $\mathrm{P_{III_1}}$, $\mathrm{P_{III_2}}$, and $\mathrm{P_{III_3}}$ from the Nekrasov type expression of the tau function of~$\mathrm{P_{VI}}$~\cite{GIL}. For the case of the $q$-Painlev\'e $\mathrm{III}_3$ equation of surface type $A_7^{(1)\prime}$, a series representation for the tau function was proposed in~\cite{BS}, which is expressed by $q$-Virasoro Whittaker conformal blocks which equal Nekrasov partition functions for pure ${\rm SU}(2)$ 5d theory \cite{AY,Y}. A~Fredholm determinant representation of the tau function by topological strings/spectral theory duality is proposed in~\cite{BGT}. For the $q$-Painlev\'e $\mathrm{III}_3$ equation of surface type~$A_7^{(1)}$, a series representation for the tau function was proposed in~\cite{BGM}. Our tau functions for the $q$-Painlev\'e $\mathrm{III}_3$ equations obtained by the degeneration are equivalent to them. Our plan is as follows. In Section~\ref{section2}, we recall the result on $q$-Painlev\'e VI equation in \cite{JNS}. In Sections~\ref{section3}--\ref{section6}, we compute limits of tau functions and derive combinatorial expressions of general solutions and bilinear equations for $q$-Painlev\'e~V, $\mathrm{III}_1$, $\mathrm{III}_2$ and $\mathrm{III}_3$ equations. 
{\it Notations.} Throughout the paper we fix $q\in\mathbb{C}^\times$ such that $|q|<1$. We set \begin{gather*} [u]=\big(1-q^u\big)/(1-q),\qquad (a;q)_N=\prod_{j=0}^{N-1}\big(1-aq^j\big),\\ (a_1,\dots,a_k;q)_{\infty}=\prod_{j=1}^k(a_j;q)_\infty,\qquad (a;q,q)_\infty=\prod_{j,k=0}^\infty\big(1-aq^{j+k}\big). \end{gather*} We use the $q$-Gamma function and $q$-Barnes function defined by \begin{gather*} \Gamma_q(u)=\frac{(q;q)_\infty}{(q^u;q)_\infty}(1-q)^{1-u}, \qquad G_q(u)=\frac{(q^u;q,q)_\infty}{(q;q,q)_\infty}(q;q)_\infty^{u-1}(1-q)^{-(u-1)(u-2)/2}, \end{gather*} which satisfy $\Gamma_q(1)=G_q(1)=1$ and \begin{gather} \Gamma_q(u+1)=[u]\Gamma_q(u), \qquad G_q(u+1)=\Gamma_q(u)G_q(u). \label{eq_Gamma_Barnes} \end{gather} A partition is a finite sequence of positive integers $\lambda=(\lambda_1,\dots,\lambda_l)$ such that $\lambda_1\ge\dots\ge\lambda_{l}>0$. Denote the length of the partition by $\ell(\lambda)=l$. The conjugate partition $\lambda'=(\lambda'_1,\dots,\lambda'_{l'})$ is defined by $\lambda'_j=\sharp\{i\,|\, \lambda_i\ge j\}$, $l'=\lambda_1$. We regard a partition as a Young diagram. Namely, we regard a partition $\lambda$ also as the subset $\{(i,j)\in\mathbb{Z}^2\,|\, 1\le j\le \lambda_i,\, i\ge 1\}$ of $\mathbb{Z}^2$, and denote its cardinality by $|\lambda|$. We denote the set of all partitions by~$\mathbb{Y}$. For $\square=(i,j)\in\mathbb{Z}_{>0}^2$ we set $a_\lambda(\square)=\lambda_i-j$ (the arm length of $\square$) and $\ell_\lambda(\square)=\lambda'_j-i$ (the leg length of $\square$). In the last formulas we set $\lambda_i=0$ if $i>\ell(\lambda)$ (resp.~$\lambda'_j=0$ if $j>\ell(\lambda')$). For a pair of partitions ($\lambda, \mu$) and $u\in\mathbb{C}$ we set \begin{gather*} N_{\lambda,\mu}(u)=\prod_{\square\in \lambda}\big( 1-q^{-\ell_{\lambda}(\square)-a_\mu(\square)-1}u\big) \prod_{\square\in \mu}\big(1-q^{\ell_{\mu}(\square)+a_\lambda(\square)+1}u\big), \end{gather*} which we call a Nekrasov factor. 
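For concreteness, the Nekrasov factor can be evaluated directly from a pair of Young diagrams. The following Python sketch (our own notation, purely illustrative) implements the definition above, with arm length $a_\mu(i,j)=\mu_i-j$ and leg length $\ell_\lambda(i,j)=\lambda'_j-i$:

```python
# Evaluate the Nekrasov factor N_{lambda,mu}(u) for partitions given as
# weakly decreasing lists of positive integers. Illustrative sketch only.

def conjugate(lam):
    """Conjugate partition lambda'."""
    if not lam:
        return []
    return [sum(1 for li in lam if li >= j) for j in range(1, lam[0] + 1)]

def nekrasov_factor(lam, mu, u, q):
    lam_c, mu_c = conjugate(lam), conjugate(mu)
    def part(p, i):  # p_i, with p_i = 0 beyond the length of p
        return p[i - 1] if i - 1 < len(p) else 0
    val = 1.0
    # product over boxes of lambda: 1 - q^{-l_lambda - a_mu - 1} u
    for i, li in enumerate(lam, start=1):
        for j in range(1, li + 1):
            leg = part(lam_c, j) - i
            arm = part(mu, i) - j
            val *= 1 - q ** (-leg - arm - 1) * u
    # product over boxes of mu: 1 - q^{l_mu + a_lambda + 1} u
    for i, mi in enumerate(mu, start=1):
        for j in range(1, mi + 1):
            leg = part(mu_c, j) - i
            arm = part(lam, i) - j
            val *= 1 - q ** (leg + arm + 1) * u
    return val
```

For instance, for $\lambda=(1)$, $\mu=\varnothing$ the single box contributes $1-u$, independently of $q$.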
\section[Results on $q$-$\mathrm{P_{VI}}$ from \cite{JNS}]{Results on $\boldsymbol{q}$-$\boldsymbol{\mathrm{P_{VI}}}$ from \cite{JNS}}\label{section2} In this section, we recall the results of \cite{JNS} on the $q$-Painlev\'e VI equation. Define the tau function by \begin{gather*} \tau^{\mathrm{VI}}\left[ \begin{matrix} \theta_1 & \theta_t \\ \theta_\infty & \theta_0 \end{matrix}\Bigl|s,\sigma,t\right] =\sum_{n\in\mathbb{Z}} s^n t^{(\sigma+n)^2-\theta_t^2-\theta_0^2} C\left[\begin{matrix} \theta_1 & \theta_t \\ \theta_\infty & \theta_0\\ \end{matrix}\Bigl|\sigma+n\right] Z\left[\begin{matrix} \theta_1 & \theta_t\\ \theta_\infty &\theta_0 \\ \end{matrix}\Bigl|\sigma+n,t\right] , \end{gather*} with the definition \begin{gather*} C\left[\begin{matrix} \theta_1 & \theta_t \\ \theta_\infty & \theta_0 \end{matrix}\Bigl| \sigma\right] =\frac{\prod\limits_{\varepsilon,\varepsilon'=\pm}G_q(1+\varepsilon\theta_\infty-\theta_1+\varepsilon'\sigma) G_q(1+\varepsilon\sigma-\theta_t+\varepsilon'\theta_0)}{G_q(1+2\sigma)G_q(1-2\sigma)} , \\ Z\left[\begin{matrix} \theta_1 & \theta_t\\ \theta_\infty & \theta_0 \end{matrix}\Bigl| \sigma,t\right] =\sum_{\boldsymbol{\lambda}=(\lambda_+,\lambda_-)\in\mathbb{Y}^2} t^{|\boldsymbol{\lambda}|}\cdot \frac{\prod\limits_{\varepsilon,\varepsilon'=\pm} N_{\varnothing,\lambda_{\varepsilon'}}\big(q^{\varepsilon\theta_\infty-\theta_1-\varepsilon'\sigma}\big) N_{\lambda_{\varepsilon},\varnothing}\big(q^{\varepsilon\sigma-\theta_t-\varepsilon'\theta_0}\big)} {\prod\limits_{\varepsilon,\varepsilon'=\pm}N_{\lambda_\varepsilon,\lambda_{\varepsilon'}}\big(q^{(\varepsilon-\varepsilon')\sigma}\big)} . 
\end{gather*} Put \begin{alignat*}{3} & \tau_1^{\mathrm{VI}} =\tau^{\mathrm{VI}}\left[ \begin{matrix} \theta_1 & \theta_t \\ \theta_\infty+\tfrac{1}{2} & \theta_0\\ \end{matrix}\Bigl|s,\sigma,t\right] , \qquad && \tau_2^{\mathrm{VI}} =\tau^{\mathrm{VI}}\left[ \begin{matrix} \theta_1 & \theta_t \\ \theta_\infty-\tfrac{1}{2} & \theta_0\\ \end{matrix}\Bigl|s,\sigma,t\right] ,&\\ & \tau_3^{\mathrm{VI}} =\tau^{\mathrm{VI}}\left[ \begin{matrix} \theta_1 & \theta_t \\ \theta_\infty & \theta_0+\tfrac{1}{2}\\ \end{matrix}\Bigl|s,\sigma+\tfrac{1}{2} ,t\right] , \qquad && \tau_4^{\mathrm{VI}} =\tau^{\mathrm{VI}}\left[ \begin{matrix} \theta_1 & \theta_t \\ \theta_\infty & \theta_0-\tfrac{1}{2}\\ \end{matrix}\Bigl|s,\sigma-\tfrac{1}{2} ,t\right] ,& \\ & \tau_5^{\mathrm{VI}}=\tau^{\mathrm{VI}}\left[ \begin{matrix} \theta_1-\tfrac{1}{2} & \theta_t \\ \theta_\infty & \theta_0\\ \end{matrix}\Bigl|s,\sigma,t\right] , \qquad & & \tau_6^{\mathrm{VI}} =\tau^{\mathrm{VI}}\left[ \begin{matrix} \theta_1+\tfrac{1}{2} & \theta_t \\ \theta_\infty & \theta_0\\ \end{matrix}\Bigl|s,\sigma,t\right] ,& \\ & \tau_7^{\mathrm{VI}} =\tau^{\mathrm{VI}}\left[ \begin{matrix} \theta_1 & \theta_t-\tfrac{1}{2} \\ \theta_\infty & \theta_0\\ \end{matrix}\Bigl|s,\sigma+\tfrac{1}{2} ,t\right] , \qquad && \tau_8^{\mathrm{VI}} =\tau^{\mathrm{VI}}\left[ \begin{matrix} \theta_1 & \theta_t+\tfrac{1}{2} \\ \theta_\infty & \theta_0\\ \end{matrix}\Bigl|s,\sigma-\tfrac{1}{2} ,t\right] .& \end{alignat*} Here and in what follows we write $\overline{f}(t)=f(qt)$, $\underline{f}(t)=f(t/q)$.
\begin{Theorem}[\cite{JNS}]\label{thm_qPVI} The functions $y$ and $z$ defined by \begin{gather}\label{eq_yz_thm} y= q^{-2\theta_1-1}t\cdot \frac{\tau_3^{\mathrm{VI}}\tau_4^{\mathrm{VI}}}{\tau_1^{\mathrm{VI}}\tau_2^{\mathrm{VI}}} , \qquad z=\frac{\underline{\tau_1^{\mathrm{VI}}}\tau_2^{\mathrm{VI}}-\tau_1^{\mathrm{VI}}\underline{\tau_2^{\mathrm{VI}}}} {q^{1/2+\theta_\infty} \underline{\tau_1^{\mathrm{VI}}}\tau_2^{\mathrm{VI}}- q^{1/2-\theta_\infty }\tau_1^{\mathrm{VI}}\underline{\tau_2^{\mathrm{VI}}}} \end{gather} are solutions to the $q$-Painlev\'e VI equation \begin{gather} \frac{y\overline{y}}{a_3a_4}= \frac{(\overline{z}-b_1t)(\overline{z}-b_2t)}{(\overline{z}-b_3)(\overline{z}-b_4)} , \qquad \frac{z\overline{z}}{b_3b_4}= \frac{(y-a_1t)(y-a_2t)}{(y-a_3)(y-a_4)} ,\label{eq_qPVI} \end{gather} with the parameters \begin{gather*} a_1=q^{-2\theta_1-1},\qquad a_2=q^{-2\theta_t-2\theta_1-1} , \qquad a_3=q^{-1},\qquad a_4=q^{-2\theta_1-1},\\ b_1=q^{-\theta_0-\theta_t-\theta_1},\qquad b_2=q^{\theta_0-\theta_t-\theta_1},\qquad b_3=q^{\theta_\infty-1/2},\qquad b_4= q^{-\theta_\infty-1/2} . \end{gather*} \end{Theorem} The formula for $y$ above can be regarded as an extension, to all orders, of Mano's asymptotic expansion for the solution of $q$-$\mathrm{P_{VI}}$~\cite{Ma}. Theorem \ref{thm_qPVI} was obtained by constructing the fundamental solution of the Lax pair for $q$-$\mathrm{P_{VI}}$ given in~\cite{JS} in terms of the $q$-conformal blocks of~\cite{AFS}. The method of construction of the fundamental solution is a $q$-analogue of the CFT approach used in~\cite{ILT}. In the derivation of Theorem \ref{thm_qPVI} convergence of the fundamental solution was assumed; it has not been proved. Recently, analyticity of K-theoretic Nekrasov functions in cases different from ours was discussed in~\cite{FM}. We remark that the convergence of the pure $q$-Nekrasov partition function with $q_1q_2=1$ on $\mathbb{C}$ is proved in~\cite{BS}.
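As a quick consistency check of the parameter list (our own editorial remark, not taken from the text), the parameters above satisfy $b_3b_4=q^{-1}$ and $a_3a_4\,b_1b_2=a_1a_2$ identically in $\theta_0$, $\theta_1$, $\theta_t$, $\theta_\infty$; this can be confirmed numerically for sample values:

```python
q = 0.6
# arbitrary sample values for theta_0, theta_1, theta_t, theta_infty
th0, th1, tht, thi = 0.45, 0.3, -0.2, 0.15

a1 = q ** (-2 * th1 - 1)
a2 = q ** (-2 * tht - 2 * th1 - 1)
a3 = q ** (-1)
a4 = q ** (-2 * th1 - 1)
b1 = q ** (-th0 - tht - th1)
b2 = q ** (th0 - tht - th1)
b3 = q ** (thi - 0.5)
b4 = q ** (-thi - 0.5)

# b1*b2 = a1*a2/(a3*a4) and b3*b4 = 1/q, combining into one relation:
assert abs(a3 * a4 * b1 * b2 - q * a1 * a2 * b3 * b4) < 1e-12
```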
\begin{Conjecture}[\cite{JNS}]\label{conj_qPVI} The tau functions $\tau_i^{\mathrm{VI}}$ $(i=1,\dots, 8)$ satisfy the following bilinear equations \begin{gather} \tau_1^{\mathrm{VI}}\tau_2^{\mathrm{VI}}-q^{-2\theta_1}t \tau_3^{\mathrm{VI}}\tau_4^{\mathrm{VI}} -\big(1-q^{-2\theta_1}t\big)\tau_5^{\mathrm{VI}} \tau_6^{\mathrm{VI}}=0, \label{bilin-1}\\ \tau_1^{\mathrm{VI}}\tau_2^{\mathrm{VI}}-t \tau_3^{\mathrm{VI}}\tau_4^{\mathrm{VI}} -\big(1-q^{-2\theta_t}t\big) \underline{\tau_5^{\mathrm{VI}}}\overline{\tau_6^{\mathrm{VI}}}=0, \label{bilin-2}\\ \tau_1^{\mathrm{VI}}\tau_2^{\mathrm{VI}}-\tau_3^{\mathrm{VI}}\tau_4^{\mathrm{VI}}+\big(1-q^{-2\theta_1}t\big)q^{2\theta_t}\underline{\tau_7^{\mathrm{VI}}}\overline{\tau_8^{\mathrm{VI}}}=0, \label{bilin-3}\\ \tau_1^{\mathrm{VI}}\tau_2^{\mathrm{VI}}-q^{2\theta_t}\tau_3^{\mathrm{VI}}\tau_4^{\mathrm{VI}}+ \big(1-q^{-2\theta_t}t\big)q^{2\theta_t}\tau_7^{\mathrm{VI}} \tau_8^{\mathrm{VI}}=0, \label{bilin-4}\\ \underline{\tau_5^{\mathrm{VI}}}\tau_6^{\mathrm{VI}}+q^{-\theta_1-\theta_\infty+\theta_t-1/2}t \underline{\tau_7^{\mathrm{VI}}}\tau_8^{\mathrm{VI}} -\underline{\tau_1^{\mathrm{VI}}}\tau_2^{\mathrm{VI}}=0, \label{bilin-5}\\ \underline{\tau_5^{\mathrm{VI}}}\tau_6^{\mathrm{VI}}+q^{-\theta_1+\theta_\infty+\theta_t-1/2}t \underline{\tau_7^{\mathrm{VI}}}\tau_8^{\mathrm{VI}} -\tau_1^{\mathrm{VI}}\underline{\tau_2^{\mathrm{VI}}}=0, \label{bilin-6}\\ \underline{\tau_5^{\mathrm{VI}}}\tau_6^{\mathrm{VI}}+q^{\theta_0+2\theta_t}\underline{\tau_7^{\mathrm{VI}}}\tau_8^{\mathrm{VI}} -q^{\theta_t} \underline{\tau_3^{\mathrm{VI}}}\tau_4^{\mathrm{VI}}=0, \label{bilin-7}\\ \underline{\tau_5^{\mathrm{VI}}}\tau_6^{\mathrm{VI}}+q^{-\theta_0+2\theta_t}\underline{\tau_7^{\mathrm{VI}}}\tau_8^{\mathrm{VI}} -q^{\theta_t} \tau_3^{\mathrm{VI}}\underline{\tau_4^{\mathrm{VI}}}=0 . 
\label{bilin-8} \end{gather} Then the functions $y$, $z$ given by \begin{gather}\label{yz-final} y=q^{-2\theta_1-1}t \frac{\tau_3^{\mathrm{VI}}\tau_4^{\mathrm{VI}}}{\tau_1^{\mathrm{VI}}\tau_2^{\mathrm{VI}}}, \qquad z=-q^{\theta_t-\theta_1-1} t \frac{\underline{\tau_7^{\mathrm{VI}}}\tau_8^{\mathrm{VI}}}{\underline{\tau_5^{\mathrm{VI}}}\tau_6^{\mathrm{VI}}} \end{gather} solve $q$-$\mathrm{P_{VI}}$ \eqref{eq_qPVI}. \end{Conjecture} The function $y$ in Conjecture \ref{conj_qPVI} has the same form as in Theorem~\ref{thm_qPVI}, while the function $z$ does not. By the bilinear equations~\eqref{bilin-5} and~\eqref{bilin-6}, we obtain the expression of $z$ in~\eqref{yz-final} from the expression of $z$ in~\eqref{eq_yz_thm}. We note that in \cite{JNS} a Lax pair with respect to the shift $t\to qt$ was also obtained; namely, a fundamental solution of the linear $q$-difference equations \begin{gather}\label{eq_qPVI_Lax} Y(qx,t)=A(x,t)Y(x,t),\qquad Y(x,qt)=B(x,t)Y(x,t) \end{gather} for certain $2\times 2$ matrices $A(x,t)$ and $B(x,t)$ was constructed there in terms of $q$-Nekrasov functions. From \eqref{eq_qPVI_Lax} we obtain the four-term bilinear equation in \cite[Remark~3.5]{JNS}: \begin{gather}\label{eq_qPVI_4termbilinear} \underline{\tau_1^{\mathrm{VI}}}\tau_2^{\mathrm{VI}}-\tau_1^{\mathrm{VI}}\underline{\tau_2^{\mathrm{VI}}} =\frac{q^{1/2+\theta_\infty}-q^{1/2-\theta_\infty}} {q^{-\theta_0}-q^{\theta_0}}q^{-\theta_1-1}t \big(\underline{\tau_3^{\mathrm{VI}}}\tau_4^{\mathrm{VI}}-\tau_3^{\mathrm{VI}}\underline{\tau_4^{\mathrm{VI}}}\big) . \end{gather} \section[From $q$-$\mathrm{P_{VI}}$ to $q$-$\mathrm{P_V}$]{From $\boldsymbol{q}$-$\boldsymbol{\mathrm{P_{VI}}}$ to $\boldsymbol{q}$-$\boldsymbol{\mathrm{P_V}}$}\label{section3} In this section, we take a limit of the tau functions of $q$-$\mathrm{P_{VI}}$ to $q$-$\mathrm{P_V}$.
Define the tau function by \begin{gather*} \tau^{\mathrm{V}}(\theta_*,\theta_t,\theta_0\,|\, s,\sigma,t)=\sum_{n\in\mathbb{Z}}s^n t^{(\sigma+n)^2-\theta_t^2-\theta_0^2} C_{\mathrm{V}} [\theta_*,\theta_t,\theta_0\,|\,\sigma+n ] Z_{\mathrm{V}} [\theta_*,\theta_t,\theta_0\,|\,\sigma+n,t ] , \end{gather*} with \begin{gather*} C_{\mathrm{V}}\left[\theta_*,\theta_t,\theta_0\,|\,\sigma\right] = (q-1)^{-\sigma^2} \prod_{\varepsilon=\pm} \frac{G_q(1-\theta_*+\varepsilon\sigma)}{G_q(1+2\varepsilon\sigma)} \prod_{\varepsilon,\varepsilon'=\pm}G_q(1+\varepsilon\sigma-\theta_t+\varepsilon'\theta_0), \\ Z_{\mathrm{V}}\left[\theta_*,\theta_t,\theta_0\,|\,\sigma,t\right]= \!\sum_{(\lambda_+,\lambda_-)\in\mathbb{Y}^2}\!\! t^{|\lambda_+|+|\lambda_-|} \frac{\prod\limits_{\varepsilon=\pm}\! N_{\varnothing,\lambda_{\varepsilon}} (q^{-\theta_*-\varepsilon \sigma}) f_{\lambda_{\varepsilon}}(q^{\varepsilon\sigma}) \!\prod\limits_{\varepsilon,\varepsilon'=\pm}\! N_{\lambda_{\varepsilon},\varnothing}(q^{\varepsilon\sigma-\theta_t-\varepsilon'\theta_0})} {\prod\limits_{\varepsilon,\varepsilon'=\pm}N_{\lambda_\varepsilon,\lambda_{\varepsilon'}}(q^{(\varepsilon-\varepsilon')\sigma})} , \end{gather*} where \begin{gather*} f_{\lambda}(u)=\prod_{\square\in\lambda}\big(-q^{\ell_\lambda(\square)+a_\varnothing(\square)+1}u^{-1}\big). \end{gather*} We remark that the factor $f_{\lambda}(u)$ corresponds to the five-dimensional Chern--Simons term. The Chern--Simons term in~\cite{T} reads as \begin{gather*} \exp\bigg({-}\beta \sum_k \sum_{(i,j)\in Y_k} (a_k+\epsilon (i-j))\bigg), \end{gather*} where $\beta$, $a_k$ are parameters and $Y_1,\dots, Y_N$ are Young tableaux labelling the fixed points. See~\cite{T} for the details. Since \begin{gather*} \sum_{\square\in\lambda}\big(\ell_\lambda(\square)+a_\varnothing(\square)+1\big)= \sum_{(i,j)\in\lambda}\big(\lambda_j'-i-j+1\big) =\sum_{(i,j)\in\lambda}(i-j), \end{gather*} they coincide when $N=2$.
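The combinatorial identity above is elementary; it can be confirmed over sample partitions (an editorial sanity check, not part of the text):

```python
def conj(lam):
    # conjugate partition: lam'_j = #{i : lam_i >= j}
    return [sum(1 for li in lam if li >= j)
            for j in range(1, (lam[0] if lam else 0) + 1)]

def boxes(lam):
    # the cells (i, j) of the Young diagram of lam, 1-indexed
    return [(i, j) for i, li in enumerate(lam, 1) for j in range(1, li + 1)]

for lam in ([4, 2, 2, 1], [5, 5, 3], [1], [2, 1]):
    lc = conj(lam)
    # sum of leg_lam(box) + arm_empty(box) + 1 = lam'_j - i - j + 1
    lhs = sum(lc[j - 1] - i - j + 1 for (i, j) in boxes(lam))
    rhs = sum(i - j for (i, j) in boxes(lam))
    assert lhs == rhs
```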
It is possible to remove $f_{\lambda_\varepsilon}(q^{\varepsilon\sigma})$ from $Z_\mathrm{V} [\theta_*,\theta_t,\theta_0\,|\,\sigma,t ]$ by a change of variables. Indeed, if we set \begin{gather*} Z_\mathrm{V}^{CS=0} [\theta_*,\theta_t,\theta_0\,|\,\sigma,t ] =\sum_{(\lambda_+,\lambda_-)\in\mathbb{Y}^2} t^{|\lambda_+|+|\lambda_-|} \frac{\prod\limits_{\varepsilon=\pm}N_{\varnothing,\lambda_{\varepsilon}} \big(q^{-\theta_*-\varepsilon \sigma}\big) \prod\limits_{\varepsilon,\varepsilon'=\pm} N_{\lambda_{\varepsilon},\varnothing}\big(q^{\varepsilon\sigma-\theta_t-\varepsilon'\theta_0}\big)} {\prod\limits_{\varepsilon,\varepsilon'=\pm}N_{\lambda_\varepsilon,\lambda_{\varepsilon'}}(q^{(\varepsilon-\varepsilon')\sigma})}, \end{gather*} then we have \begin{gather*} Z_\mathrm{V} [\theta_*,\theta_t,\theta_0\,|\,\sigma,t ] =Z_\mathrm{V}^{CS=0}\big[{-}\theta_*,-\theta_t,\theta_0\,|\,\sigma,q^{-\theta_*-2\theta_t}t\big] \end{gather*} from the relations $N_{\varnothing,\lambda}(u)=f_\lambda\big(u^{-1}\big)N_{\lambda,\varnothing}\big(u^{-1}\big)$, $N_{\lambda,\varnothing}(u)=f_\lambda(u)^{-1}N_{\varnothing,\lambda}\big(u^{-1}\big)$, and $N_{\lambda,\mu}(u)=N_{\mu',\lambda'}(u)$ \cite[Lemma~A.2]{JNS}.
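The quoted relations from \cite[Lemma~A.2]{JNS} can be verified numerically. The sketch below (our own check, assuming $q$ real with $0<q<1$) implements the Nekrasov factor and $f_\lambda$ directly from the definitions in the Notations:

```python
q = 0.5

def part(lam, i):
    # i-th part of lam (1-indexed), zero beyond its length
    return lam[i - 1] if 1 <= i <= len(lam) else 0

def conj(lam):
    return [sum(1 for li in lam if li >= j)
            for j in range(1, (lam[0] if lam else 0) + 1)]

def boxes(lam):
    return [(i, j) for i, li in enumerate(lam, 1) for j in range(1, li + 1)]

def nek(lam, mu, u):
    # Nekrasov factor N_{lam,mu}(u)
    lc, mc = conj(lam), conj(mu)
    p = 1.0
    for (i, j) in boxes(lam):
        leg, arm = part(lc, j) - i, part(mu, i) - j
        p *= 1 - q ** (-leg - arm - 1) * u
    for (i, j) in boxes(mu):
        leg, arm = part(mc, j) - i, part(lam, i) - j
        p *= 1 - q ** (leg + arm + 1) * u
    return p

def f(lam, u):
    # Chern-Simons factor f_lam(u); note a_empty((i,j)) = -j
    lc = conj(lam)
    p = 1.0
    for (i, j) in boxes(lam):
        p *= -q ** (part(lc, j) - i - j + 1) / u
    return p

lam, mu, u = [3, 1], [2, 2, 1], 0.37
assert abs(nek([], lam, u) - f(lam, 1 / u) * nek(lam, [], 1 / u)) < 1e-9
assert abs(nek(lam, [], u) - nek([], lam, 1 / u) / f(lam, u)) < 1e-9
assert abs(nek(lam, mu, u) - nek(conj(mu), conj(lam), u)) < 1e-9
```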
We define tau functions for $q$-$\mathrm{P_V}$ by \begin{alignat*}{3} &\tau_1^{\mathrm{V}}=\tau^{\mathrm{V}}\big(\theta_*-\tfrac{1}{2}, \theta_t,\theta_0\,|\, s,\sigma,t/\sqrt{q}\big),\qquad&& \tau_2^{\mathrm{V}}=\tau^{\mathrm{V}}\big(\theta_*+\tfrac{1}{2}, \theta_t,\theta_0\,|\, s,\sigma,\sqrt{q}t\big),&\\ &\tau_3^{\mathrm{V}}=\tau^{\mathrm{V}}\big(\theta_*, \theta_t,\theta_0+\tfrac{1}{2}\,|\, s,\sigma+\tfrac{1}{2},t\big),\qquad&& \tau_4^{\mathrm{V}}=\tau^{\mathrm{V}}\big(\theta_*, \theta_t,\theta_0-\tfrac{1}{2}\,|\, s,\sigma-\tfrac{1}{2},t\big),&\\ &\tau_5^{\mathrm{V}}=\tau^{\mathrm{V}}\big(\theta_*, \theta_t-\tfrac{1}{2},\theta_0\,|\, s,\sigma+\tfrac{1}{2},t\big),\qquad&& \tau_6^{\mathrm{V}}=\tau^{\mathrm{V}}\big(\theta_*, \theta_t+\tfrac{1}{2},\theta_0\,|\, s,\sigma-\tfrac{1}{2},t\big).& \end{alignat*} Let \begin{gather*} C_1= C_6=(q-1)^{-\sigma^2} q^{-(\Lambda+1/2)(\sigma^2-\theta_t^2-\theta_0^2)} \prod_{\varepsilon=\pm}G_q\big(\tfrac{1}{2}-\Lambda+\varepsilon \sigma\big)^{-1},\\ C_2= C_5=(q-1)^{-\sigma^2} q^{-(\Lambda-1/2)(\sigma^2-\theta_t^2-\theta_0^2)} \prod_{\varepsilon=\pm}G_q\big(\tfrac{3}{2}-\Lambda+\varepsilon \sigma\big)^{-1},\\ C_3= (q-1)^{-(\sigma+1/2)^2} q^{-\Lambda((\sigma+1/2)^2-\theta_t^2-(\theta_0+1/2)^2)} \prod_{\varepsilon=\pm}G_q\big(1-\Lambda+\varepsilon \big(\sigma+\tfrac{1}{2}\big)\big)^{-1},\\ C_4= (q-1)^{-(\sigma-1/2)^2} q^{-\Lambda((\sigma-1/2)^2-\theta_t^2-(\theta_0-1/2)^2)} \prod_{\varepsilon=\pm}G_q\big(1-\Lambda+\varepsilon\big( \sigma-\tfrac{1}{2}\big)\big)^{-1},\\ C_7= (q-1)^{-(\sigma+1/2)^2} q^{-\Lambda((\sigma+1/2)^2-(\theta_t-1/2)^2-\theta_0^2)} \prod_{\varepsilon=\pm}G_q\big(1-\Lambda+\varepsilon \big(\sigma+\tfrac{1}{2}\big)\big)^{-1},\\ C_8= (q-1)^{-(\sigma-1/2)^2} q^{-\Lambda((\sigma-1/2)^2-(\theta_t+1/2)^2-\theta_0^2)} \prod_{\varepsilon=\pm}G_q\big(1-\Lambda+\varepsilon \big(\sigma-\tfrac{1}{2}\big)\big)^{-1}. 
\end{gather*} \begin{Proposition}\label{prop_qPV_tau} Set \begin{gather} \theta_1+\theta_\infty=\Lambda,\qquad \theta_1-\theta_\infty=\theta_*,\qquad t = q^{\Lambda}t_1,\nonumber\\ s = \tilde s (q-1)^{-2\sigma}q^{-2\sigma\Lambda} \prod_{\varepsilon=\pm}\Gamma_q\big(\tfrac{1}{2}-\Lambda+\varepsilon\sigma\big)^{-\varepsilon}.\label{eq_limit-V1} \end{gather} Then we have \begin{alignat*}{3} & C_i\tau_i^{\mathrm{VI}}(\theta_\infty,\theta_1,\theta_t,\theta_0\,|\, s,\sigma,t) \to\tau_i^{\mathrm{V}}(\theta_*,\theta_t,\theta_0\,|\, \tilde s, \sigma,t_1),\qquad && i=1,2,3,4,&\\ & C_5\tau_5^{\mathrm{VI}}(\theta_\infty,\theta_1,\theta_t,\theta_0\,|\, s,\sigma,t)\to \tau_1^{\mathrm{V}}(\theta_*,\theta_t,\theta_0\,|\, \tilde s, \sigma,qt_1),&&&\\ & C_6\tau_6^{\mathrm{VI}}(\theta_\infty,\theta_1,\theta_t,\theta_0\,|\, s,\sigma,t)\to \tau_2^{\mathrm{V}}(\theta_*,\theta_t,\theta_0\,|\, \tilde s, \sigma,t_1/q),&&&\\ & C_i\tau_i^{\mathrm{VI}}(\theta_\infty,\theta_1,\theta_t,\theta_0\,|\, s,\sigma,t)\to \tau_{i-2}^{\mathrm{V}}(\theta_*,\theta_t,\theta_0\,|\, \tilde s, \sigma,t_1),\qquad && i=7,8,& \end{alignat*} as $\Lambda\to\infty$. Here, we denote by $\tau_i^{\mathrm{VI}}(\theta_\infty,\theta_1,\theta_t,\theta_0\,|\, s,\sigma,t)$ the tau functions of $q$-$\mathrm{P_{VI}}$ presented in the previous section. \end{Proposition} \begin{proof} First, we verify the limit of the series part. For any partition $\lambda$ we have \begin{gather*} N_{\varnothing,\lambda}\big(q^{-\Lambda}u\big)q^{\Lambda|\lambda|} =\prod_{\square\in\lambda} \big( q^{\Lambda}-q^{\ell_\lambda(\square)+a_\varnothing(\square)+1}u\big) \to f_{\lambda}\big(u^{-1}\big) ,\qquad \Lambda\to \infty. \end{gather*} Hence, the series $Z\left[\begin{matrix} \theta_1 & \theta_t\\ \theta_\infty & \theta_0\\ \end{matrix}\Bigl| \sigma,t\right]$ goes to $Z_{\mathrm{V}}[\theta_*,\theta_t,\theta_0\,|\,\sigma,t]$ as $\Lambda\to \infty$. Second, we examine the limits of the coefficients of $Z$. 
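The limit just used can be illustrated numerically (a sanity check outside the proof, with $q$ real and $\Lambda$ large but finite):

```python
q = 0.5
Lam = 60   # large but finite stand-in for Lambda -> infinity
u = 0.8

def conj(lam):
    return [sum(1 for li in lam if li >= j)
            for j in range(1, (lam[0] if lam else 0) + 1)]

def hooks(lam):
    # exponents leg_lam(box) + arm_empty(box) + 1 = lam'_j - i - j + 1
    lc = conj(lam)
    return [lc[j - 1] - i - j + 1
            for i, li in enumerate(lam, 1) for j in range(1, li + 1)]

lam = [3, 1]
# N_{empty,lam}(q^{-Lam} u) * q^{Lam |lam|} = prod_h (q^Lam - q^h u)
lhs = 1.0
for h in hooks(lam):
    lhs *= q**Lam - q**h * u
# f_lam(1/u) = prod_h (-q^h u)
rhs = 1.0
for h in hooks(lam):
    rhs *= -(q**h) * u
assert abs(lhs - rhs) < 1e-9
```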
By the identities \eqref{eq_Gamma_Barnes} on the $q$-Gamma and $q$-Barnes functions, for $n\in\mathbb{Z}$ we have \begin{gather} \prod_{\varepsilon=\pm}G_q(1-x+\varepsilon(\sigma+n)) = \prod_{\varepsilon=\pm}G_q(1-x+\varepsilon\sigma) \Gamma_q(-x+\varepsilon\sigma)^{\varepsilon n}\prod_{i=0}^{|n|-1} \left[-x+\frac{|n|}{n}\sigma\right] \nonumber\\ \hphantom{\prod_{\varepsilon=\pm}G_q(1-x+\varepsilon(\sigma+n))=}{} \times \prod_{i=0}^{|n|-1}\prod_{j=1}^{i}[-x+\sigma +j]\prod_{i=0}^{|n|-1}\prod_{j=1}^{i} [-x-\sigma -j].\label{eq_Gqn} \end{gather} Using the identity above, we compute the coefficient of $Z$ in $\tau_1^{\mathrm{VI}}$ multiplied by $C_1$ as follows \begin{gather*} C_1s^n C\left[\begin{matrix} \theta_1 & \theta_t \\ \theta_\infty + \tfrac{1}{2} & \theta_0 \\ \end{matrix}\Bigl| \sigma+n\right] t^{(\sigma+n)^2-\theta_t^2-\theta_0^2} \\ \qquad{} =\tilde{s}^n (q-1)^{\sigma^2-2\sigma n}q^{-(\sigma^2-\theta_t^2-\theta_0^2)/2} t_1^{(\sigma+n)^2-\theta_t^2-\theta_0^2} q^{\Lambda n^2} \prod_{\varepsilon=\pm}\left(\frac{\Gamma_q\big({-}\Lambda-\tfrac{1}{2}+\varepsilon \sigma\big)} {\Gamma_q\big({-}\Lambda+\tfrac{1}{2}+\varepsilon \sigma\big)} \right)^{\varepsilon n} \\ \qquad\quad{} \times \prod_{i=0}^{|n|-1}\left[-\Lambda-\tfrac{1}{2}+\frac{|n|}{n}\sigma\right] \prod_{i=0}^{|n|-1}\prod_{j=1}^{i} [-\Lambda-{\tfrac{1}{2}}+\sigma +j] \prod_{i=0}^{|n|-1}\prod_{j=1}^{i} \big[{-}\Lambda-\tfrac{1}{2}-\sigma -j\big] \\ \qquad\quad{} \times\frac{\prod\limits_{\varepsilon=\pm} G_q(1-\theta_*-{\tfrac{1}{2}}+\varepsilon(\sigma+n))\prod\limits_{\varepsilon,\varepsilon'=\pm}G_q(1+\varepsilon(\sigma+n)-\theta_t+\varepsilon'\theta_0)} {G_q(1+2(\sigma+n))G_q(1-2(\sigma+n))}.
\end{gather*} By the definition of the $q$-number, as $\Lambda\to \infty$ we have \begin{gather*} q^{\Lambda n^2}\prod_{i=0}^{|n|-1}\big[{-}\Lambda-\tfrac{1}{2}+\frac{|n|}{n}\sigma\big] \prod_{i=0}^{|n|-1}\prod_{j=1}^{i}\big[{-}\Lambda-{\tfrac{1}{2}}+\sigma +j\big]\prod_{i=0}^{|n|-1}\prod_{j=1}^{i}\big[{-}\Lambda-\tfrac{1}{2}-\sigma -j\big] \\ \qquad{} \to (q-1)^{-n^2}\prod_{i=0}^{|n|-1}q^{-1/2+|n|\sigma/n}\prod_{i=0}^{|n|-1}\prod_{j=1}^{i}q^{-1/2+\sigma +j} \prod_{i=0}^{|n|-1}\prod_{j=1}^{i}q^{-1/2-\sigma -j} \\ \qquad{} =(q-1)^{-n^2}q^{-n^2/2+\sigma n}, \end{gather*} and by the identity \eqref{eq_Gamma_Barnes} of the $q$-Gamma function \begin{gather*} \prod_{\varepsilon=\pm}\left(\frac{\Gamma_q(-\Lambda-\tfrac{1}{2}+\varepsilon \sigma)} {\Gamma_q(-\Lambda+\tfrac{1}{2}+\varepsilon \sigma)} \right)^{\varepsilon n} = \left(\frac{[-\Lambda-\tfrac{1}{2}-\sigma]}{[-\Lambda-\tfrac{1}{2}+\sigma]}\right)^n \to q^{-2\sigma n}. \end{gather*} Therefore we obtain \begin{gather*} C_1s^n C\left[\begin{matrix} \theta_1 & \theta_t \\ \theta_\infty + \tfrac{1}{2} & \theta_0 \\ \end{matrix}\Bigl| \sigma+n\right] t^{(\sigma+n)^2-\theta_t^2-\theta_0^2}\\ \qquad{} \to \tilde{s}^n \big(t_1/\sqrt{q}\big)^{(\sigma+n)^2-\theta_t^2-\theta_0^2} C_{\mathrm{V}}\big[\theta_*-\tfrac{1}{2},\theta_t,\theta_0\,|\,\sigma+n\big] \end{gather*} as $\Lambda \to \infty$. Similarly, we can compute the coefficients of $Z$ in the other tau functions and obtain the desired results. \end{proof} In what follows, we abbreviate $\tau_i^{\mathrm{V}}(\theta_*,\theta_t,\theta_0\,|\, s,\sigma,t)$ to $\tau_i$.
\begin{Theorem}\label{thm_qPV} The functions \begin{gather*} y=q^{-\theta_*-1}(q-1)^{1/2}t\frac{\tau_3\tau_4}{\tau_1\tau_2},\qquad z=-\frac{\underline{\tau_1}\tau_2-\tau_1\underline{\tau_2}}{q^{\theta_*/2+1/2}\tau_1\underline{\tau_2}} \end{gather*} solve the $q$-Painlev\'e V equation \begin{gather} \frac{y \overline y}{a_3 a_4}=-\frac{(\overline z-b_1 t)(\overline z -b_2 t)}{\overline z-b_3},\qquad \frac{z \overline z}{b_3}=-\frac{(y-a_1t)(y-a_2 t)}{a_4(y-a_3)}\label{eq_qPV} \end{gather} with the parameters \begin{gather*} a_1=q^{-\theta_*-1},\qquad a_2=q^{-2\theta_t-\theta_*-1},\qquad a_3=q^{-1}, \qquad a_4=q^{-3\theta_*/2-1/2},\\ b_1=q^{-\theta_0-\theta_t-\theta_*/2},\qquad b_2=q^{\theta_0-\theta_t-\theta_*/2},\qquad b_3=q^{-\theta_*/2-1/2}. \end{gather*} \end{Theorem} \begin{proof}By definition we have \begin{gather*} C_1C_2=(q-1)^{1/2}C_3C_4. \end{gather*} Hence, by \eqref{eq_limit-V1} the solution $(y,z)$ of the $q$-Painlev\'e VI equation has the following limit \begin{gather*} y \to y_1=q^{-\theta_*-1}(q-1)^{1/2}t_1\frac{\tau_3\tau_4}{\tau_1\tau_2}, \qquad q^{-\Lambda/2}z\to z_1=-\frac{\underline{\tau_1}\tau_2-\tau_1\underline{\tau_2}}{q^{\theta_*/2+1/2}\tau_1\underline{\tau_2}},\qquad \Lambda\to \infty .
\end{gather*} Substituting \eqref{eq_limit-V1} into the $q$-Painlev\'e VI equation \eqref{eq_qPVI}, we get \begin{gather} \frac{y \overline y}{q^{-\Lambda-\theta_*-2}} =\frac{\big(\overline z- q^{-\theta_0-\theta_t +(\Lambda-\theta_*)/2}t_1\big) \big(\overline z -q^{\theta_0-\theta_t +(\Lambda-\theta_*)/2}t_1\big)} {\big(\overline z-q^{(\Lambda-\theta_*-1)/2}\big)\big(\overline z-q^{-(\Lambda+\theta_*+1)/2}\big)},\label{eq_qPVI_limit1}\\ \frac{z \overline z}{q^{-1}}=-\frac{\big(y-q^{-\theta_*-1}t_1\big) \big(y-q^{-2\theta_t-\theta_*-1}t_1\big)}{\big(y-q^{-1}\big)\big(y-q^{-\Lambda-\theta_*-1}\big)}.\label{eq_qPVI_limit2} \end{gather} Hence, since $y\to y_1$, $q^{-\Lambda/2}z\to z_1$ as $\Lambda\to \infty$, the system \eqref{eq_qPVI_limit1}, \eqref{eq_qPVI_limit2} degenerates to the $q$-Painlev\'e~V equation~\eqref{eq_qPV} for $y=y_1$ and $z=z_1$ as $\Lambda\to \infty$. \end{proof} Since we also have \begin{gather*} C_5C_6=(q-1)^{1/2}C_7C_8,\qquad C_1C_2=C_5C_6, \end{gather*} we obtain the following conjecture.
\begin{Conjecture} The tau functions $\tau_i$ $(i=1,\dots, 6)$ satisfy the following bilinear equations \begin{gather} \tau_1\tau_2-q^{-\theta_*}(q-1)^{1/2}t \tau_3\tau_4-\big(1-q^{-\theta_*}t\big)\overline{\tau_1}\underline{\tau_2}=0,\label{bilin-1V}\\ (q-1)^{-1/2}\tau_1\tau_2-\tau_3\tau_4+\big(1-q^{-\theta_*}t\big)q^{2\theta_t}\underline{\tau_5}\overline{\tau_6}=0,\label{bilin-3V}\\ (q-1)^{-1/2}\tau_1\tau_2-q^{2\theta_t}\tau_3\tau_4+q^{2\theta_t}\tau_5 \tau_6=0,\label{bilin-4V}\\ \tau_1\underline{\tau_2}+q^{\theta_t-1/2}(q-1)^{1/2}t\underline{\tau_5}\tau_6-\underline{\tau_1}\tau_2=0,\label{bilin-5V}\\ (q-1)^{-1/2}\tau_1\underline{\tau_2}+q^{\theta_0+2\theta_t}\underline{\tau_5}\tau_6-q^{\theta_t} \underline{\tau_3}\tau_4=0,\label{bilin-7V}\\ (q-1)^{-1/2}\tau_1\underline{\tau_2}+q^{-\theta_0+2\theta_t}\underline{\tau_5}\tau_6-q^{\theta_t} \tau_3\underline{\tau_4}=0 .\label{bilin-8V} \end{gather} Then the functions \begin{gather*} y=q^{-\theta_*-1}(q-1)^{1/2}t\frac{\tau_3\tau_4}{\tau_1\tau_2},\qquad z=-q^{\theta_t-\theta_*/2-1}(q-1)^{1/2}t\frac{\underline{\tau_5}\tau_6}{\tau_1\underline{\tau_2}} \end{gather*} solve $q$-$\mathrm{P_V}$~\eqref{eq_qPV}. \end{Conjecture} The four-term bilinear equation \eqref{eq_qPVI_4termbilinear} admits the following limit. \begin{Proposition}\label{prop_qPV_twotermbilinear}We have \begin{gather} \underline{\tau_1}\tau_2-\tau_1\underline{\tau_2} =\frac{q^{-1/2}(q-1)^{1/2}}{q^{\theta_0}-q^{-\theta_0}}t (\underline{\tau_3}\tau_4-\tau_3\underline{\tau_4} ).\label{bilin-9V} \end{gather} \end{Proposition} \begin{proof} The identity \eqref{bilin-9V} is a direct consequence of \eqref{eq_qPVI_4termbilinear} by the limit \eqref{eq_limit-V1} as $\Lambda\to \infty$.
\end{proof} We remark that tau functions without the Chern--Simons term are also obtained by the limit \begin{gather*} \theta_1+\theta_\infty=-\Lambda,\qquad \theta_1-\theta_\infty=\theta_*, \qquad s = \tilde s (q-1)^{-2\sigma} \prod_{\varepsilon=\pm}\Gamma_q\big(\tfrac{1}{2}+\Lambda+\varepsilon\sigma\big)^{-\varepsilon},\qquad \Lambda\to\infty \end{gather*} from the tau functions of $q$-$\mathrm{P_{VI}}$. \section[From $q$-$\mathrm{P_V}$ to $q$-$\mathrm{P_{{III}_1}}$]{From $\boldsymbol{q}$-$\boldsymbol{\mathrm{P_V}}$ to $\boldsymbol{q}$-$\boldsymbol{\mathrm{P_{{III}_1}}}$}\label{section4} In this section, we take a limit of the tau functions of $q$-$\mathrm{P_V}$ to $q$-$\mathrm{P_{{III}_1}}$. Define the tau function by \begin{gather*} \tau^{\mathrm{{III}_1}}(\theta_*,\theta_\star\,|\, s,\sigma,t)= \sum_{n\in\mathbb{Z}} s^n t^{(\sigma+n)^2}C_{\mathrm{{III}_1}} [\theta_*,\theta_\star\,|\,\sigma+n ] Z_{\mathrm{{III}_1}} [\theta_*,\theta_\star\,|\,\sigma+n,t ] , \end{gather*} with \begin{gather*} C_{\mathrm{{III}_1}} [\theta_*,\theta_\star\,|\,\sigma ]= (q-1)^{-2\sigma^2} \prod_{\varepsilon=\pm} \frac{G_q(1-\theta_*+\varepsilon\sigma) G_q(1+\varepsilon\sigma-\theta_\star)}{G_q(1+2\varepsilon\sigma)},\\ Z_{\mathrm{{III}_1}} [\theta_*,\theta_\star\,|\,\sigma,t ]= \sum_{(\lambda_+,\lambda_-)\in\mathbb{Y}^2} t^{|\lambda_+|+|\lambda_-|} \frac{\prod\limits_{\varepsilon=\pm}N_{\varnothing,\lambda_{\varepsilon}} \big(q^{-\theta_*-\varepsilon \sigma}\big) N_{\lambda_{\varepsilon},\varnothing}\big(q^{\varepsilon\sigma-\theta_\star}\big)} {\prod\limits_{\varepsilon,\varepsilon'=\pm}N_{\lambda_\varepsilon,\lambda_{\varepsilon'}}\big(q^{(\varepsilon-\varepsilon')\sigma}\big)} .
\end{gather*} Let us define the tau functions for $q$-$\mathrm{P_{III_1}}$ by \begin{alignat*}{3} & \tau^{\mathrm{{III}_1}}_1= \tau^{\mathrm{{III}_1}}\big(\theta_*-\tfrac{1}{2},\theta_\star\,|\, s,\sigma,t/\sqrt{q}\big),\qquad&& \tau^{\mathrm{{III}_1}}_2=\tau^{\mathrm{{III}_1}}\big(\theta_*+\tfrac{1}{2},\theta_\star\,|\, s,\sigma,\sqrt{q}t\big),&\\ & \tau^{\mathrm{{III}_1}}_3= \tau^{\mathrm{{III}_1}}\big(\theta_*,\theta_\star-\tfrac{1}{2}\,|\, s,\sigma+\tfrac{1}{2},t/\sqrt{q}\big),\qquad&& \tau^{\mathrm{{III}_1}}_4=\tau^{\mathrm{{III}_1}}\big(\theta_*,\theta_\star+\tfrac{1}{2}\,|\, s,\sigma-\tfrac{1}{2},\sqrt{q}t\big).& \end{alignat*} Put \begin{gather*} C_1=(q-1)^{-\sigma^2}q^{-\Lambda\sigma^2-(\theta_t^2+\theta_0^2)/2}t^{\theta_t^2+\theta_0^2}\prod_{\varepsilon=\pm}G_q (1-\Lambda+\varepsilon \sigma )^{-1},\\ C_2= (q-1)^{-\sigma^2}q^{-\Lambda\sigma^2+(\theta_t^2+\theta_0^2)/2}t^{\theta_t^2+\theta_0^2}\prod_{\varepsilon=\pm}G_q(1-\Lambda+\varepsilon \sigma)^{-1},\\ C_3=(q-1)^{-(\sigma+1/2)^2}q^{-(\Lambda+1/2)(\sigma+1/2)^2}t^{\theta_t^2+(\theta_0+1/2)^2}\prod_{\varepsilon=\pm}G_q\big(\tfrac{1}{2}-\Lambda+\varepsilon\big( \sigma+\tfrac{1}{2}\big)\big)^{-1},\\ C_4=(q-1)^{-(\sigma-1/2)^2}q^{-(\Lambda-1/2)(\sigma-1/2)^2}t^{\theta_t^2+(\theta_0-1/2)^2} \prod_{\varepsilon=\pm}G_q\big( \tfrac{3}{2} -\Lambda+\varepsilon\big( \sigma-\tfrac{1}{2}\big)\big)^{-1},\\ C_5=(q-1)^{-(\sigma+1/2)^2}q^{-(\Lambda-1/2)(\sigma+1/2)^2}t^{(\theta_t-1/2)^2+\theta_0^2} \prod_{\varepsilon=\pm}G_q\big( \tfrac{3}{2} -\Lambda+\varepsilon\big( \sigma+\tfrac{1}{2}\big)\big)^{-1},\\ C_6=(q-1)^{-(\sigma-1/2)^2}q^{-(\Lambda+1/2)(\sigma-1/2)^2} t^{(\theta_t+1/2)^2+\theta_0^2} \prod_{\varepsilon=\pm}G_q\big(\tfrac{1}{2}-\Lambda+\varepsilon\big( \sigma-\tfrac{1}{2}\big)\big)^{-1}. 
\end{gather*} \begin{Proposition}Set \begin{gather} \theta_t+\theta_0=\Lambda,\qquad \theta_t-\theta_0=\theta_\star,\qquad t = q^{\Lambda}t_1,\nonumber\\ s = \tilde s (q-1)^{-2\sigma}q^{-\sigma(2\Lambda+1)} \prod_{\varepsilon=\pm}\Gamma_q(-\Lambda+\varepsilon\sigma)^{-\varepsilon} .\label{eq_limit-III1} \end{gather} Then we have \begin{alignat*}{3} & C_i\tau_i^{\mathrm{V}}(\theta_*,\theta_t,\theta_0\,|\, s,\sigma,t)\to \tau_i^{\mathrm{{III}_1}}(\theta_*,\theta_\star \,|\, \tilde s, \sigma,t_1), \qquad && i=1,2,3,4,& \\ & C_i\tau_{i}^{\mathrm{V}}(\theta_*,\theta_t,\theta_0\,|\, s,\sigma,t)\to\tau_{i-2}^{\mathrm{{III}_1}}(\theta_*,\theta_\star \,|\, \tilde s, \sigma,qt_1), \qquad && i=5,6,& \end{alignat*} as $\Lambda\to \infty$. \end{Proposition} \begin{proof}For any partition $\lambda$ we have \begin{gather*} N_{\lambda,\varnothing}\big(q^{-\Lambda}u\big)q^{\Lambda|\lambda|} =\prod_{\square\in\lambda} \big( q^{\Lambda}-q^{-\ell_\lambda(\square)-a_\varnothing(\square)-1}u\big) \to f_{\lambda}(u)^{-1}, \qquad \Lambda\to \infty. \end{gather*} Hence, the series $ Z_{\mathrm{V}} [ \theta_*,\theta_t,\theta_0\,|\, \sigma, t]$ goes to $Z_{\mathrm{III_1}} [\theta_*,\theta_\star\,|\, \sigma,t_1 ]$ as $\Lambda\to \infty$. The coefficients of~$Z_{\mathrm{V}}$ are computed in the same way as in the proof of Proposition~\ref{prop_qPV_tau} using~\eqref{eq_Gqn} and we obtain the desired results. \end{proof} In what follows, we abbreviate $\tau_i^{\mathrm{III_1}}(\theta_*,\theta_\star\,|\, s,\sigma,t)$ to $\tau_i$. Fortunately, the four-term bilinear equation \eqref{bilin-9V} degenerates to a three-term bilinear equation. 
\begin{Proposition}We have \begin{gather} \underline{\tau_1}\tau_2-\tau_1\underline{\tau_2}=q^{-1/4}t^{1/2} \tau_3\underline{\tau_4}.\label{bilin-9III_1} \end{gather} \end{Proposition} \begin{proof}By definition and \eqref{eq_limit-III1} we have \begin{gather*} \underline{C_1}C_2= C_1\underline{C_2} =\big(q^{-\Lambda}-q^{\sigma}\big)(q-1)^{-1/2}t_1^{-1/2}q^{\theta_0+1/4}\underline{C_3}C_4\\ \hphantom{\underline{C_1}C_2= C_1\underline{C_2}}{} =\big(q^{-\Lambda}-q^{\sigma}\big)(q-1)^{-1/2}t_1^{-1/2}q^{-\theta_0+1/4}C_3\underline{C_4}. \end{gather*} Hence the four-term bilinear equation \eqref{bilin-9V} degenerates to the three-term bilinear equation~\eqref{bilin-9III_1} by \eqref{eq_limit-III1} as $\Lambda\to \infty$. \end{proof} \begin{Theorem}The functions \begin{gather}\label{eq_qPIII_1yz} y=q^{-\theta_*-1} t^{1/2}\frac{\tau_3\tau_4}{\tau_1\tau_2},\qquad z=q^{-\theta_*/2-3/4}t^{1/2} \frac{\tau_3\underline{\tau_4}} {\underline{\tau_1}\tau_2} \end{gather} solve the $q$-Painlev\'e $\mathrm{III}_1$ equation \begin{gather} \frac{y \overline y}{a_3 a_4}=-\frac{\overline z (\overline z -b_2 t)}{\overline z-b_3},\qquad \frac{z \overline z}{b_3}=-\frac{y(y-a_2 t)}{a_4(y-a_3)}\label{eq_qPIII_1} \end{gather} with the parameters \begin{gather*} a_2=q^{-\theta_\star-\theta_*-1},\qquad\! a_3=q^{-1}, \qquad\! a_4=q^{-3\theta_*/2-1/2}, \qquad \! b_2=q^{-\theta_*/2},\qquad\! b_3=q^{-\theta_*/2-1/2}. \end{gather*} Furthermore, the tau functions $\tau_i$ $(i=1,\dots, 4)$ satisfy the following bilinear equations.
\begin{gather} \tau_1\tau_2- q^{-\theta_*} t^{1/2} \tau_3\tau_4-\overline{\tau_1} \underline{\tau_2}=0,\label{bilin-1III_1}\\ \tau_1\tau_2-q^{\theta_\star}t^{-1/2} \tau_3\tau_4+q^{\theta_\star}t^{-1/2} \overline{\tau_3}\underline{ \tau_4}=0,\label{bilin-4III_1}\\ \tau_1\underline{\tau_2}+q^{-1/4} t^{1/2}\tau_3\underline{\tau_4}-\underline{\tau_1}\tau_2=0,\label{bilin-5III_1}\\ \tau_1\underline{\tau_2}+q^{1/4}t^{-1/2}\tau_3\underline{\tau_4}-q^{1/4}t^{-1/2}\underline{\tau_3}\tau_4=0.\label{bilin-7III_1} \end{gather} \end{Theorem} \begin{proof}By definition and \eqref{eq_limit-III1} we have \begin{gather*} C_1C_2=\big(q^{-\Lambda}-q^{\sigma}\big)(q-1)^{-1/2}t_1^{-1/2}C_3C_4. \end{gather*} Hence, by \eqref{eq_limit-III1} and \eqref{bilin-9III_1} the solution $(y,z)$ of the $q$-Painlev\'e V equation degenerates to \begin{gather*} y \to y_1=q^{-\theta_*-1} t_1^{1/2}\frac{\tau_3\tau_4}{\tau_1\tau_2},\qquad z \to z_1=q^{-\theta_*/2-3/4}t_1^{1/2} \frac{\tau_3\underline{\tau_4}} {\underline{\tau_1}\tau_2},\qquad \Lambda\to \infty . \end{gather*} Also, the $q$-Painlev\'e V equation \eqref{eq_qPV} degenerates to the $q$-Painlev\'e $\mathrm{III}_1$ equation~\eqref{eq_qPIII_1} for $y=y_1 $ and $z=z_1$ as $\Lambda\to \infty$. Next we prove the bilinear equations \eqref{bilin-1III_1}--\eqref{bilin-7III_1}. The bilinear equation \eqref{bilin-5III_1} is \eqref{bilin-9III_1}.
The identity~\eqref{bilin-7III_1} is obtained by substituting the expression~\eqref{eq_qPIII_1yz} of $(y,z)$ into the $q$-Painlev\'e $\mathrm{III_1}$ equation{\samepage \begin{gather*} \frac{y \overline y}{a_3 a_4}=-\frac{\overline z (\overline z -b_2 t)}{\overline z-b_3}, \end{gather*} and using the bilinear equation \eqref{bilin-5III_1}.} In order to prove \eqref{bilin-1III_1} and \eqref{bilin-4III_1}, we use the following transformation \begin{gather} \big( \tilde{\theta}_*, \tilde{\theta}_\star, \tilde{\sigma}, \tilde{s}, \tilde{t}\big)= \big( -\theta_\star, -\theta_*, \sigma-\tfrac{1}{2}, Cs, q^{-\theta_*-\theta_\star+1/2}t\big),\label{eq_qPIII_1_transformation} \end{gather} where \begin{gather*} C= q^{(\sigma-1)(2\theta_*+2\theta_\star+1)} \prod_{\varepsilon,\varepsilon'=\pm}\Gamma_q\big( \tfrac{1}{2}+\varepsilon\theta_*+\varepsilon'(\sigma-1)\big)^{-\varepsilon\varepsilon'} \Gamma_q\big( \tfrac{1}{2}+\varepsilon\big(\theta_\star+\tfrac{1}{2}\big)+\varepsilon'(\sigma-1)\big)^{-\varepsilon\varepsilon'}. \end{gather*} From the definition of the Nekrasov factor, for a partition $\lambda$ we have \begin{gather*} N_{\varnothing,\lambda}(u)N_{\lambda,\varnothing}(w)= (uw)^{|\lambda|}N_{\varnothing,\lambda}\big(w^{-1}\big)N_{\lambda,\varnothing}\big(u^{-1}\big). 
\end{gather*} By the identity above, the series part $Z$ of the tau functions $\tau_1,\dots,\tau_4$ transform to \begin{gather*} Z_{\mathrm{{III}_1}}\big[\tilde\theta_*-\tfrac{1}{2},\tilde\theta_\star\,|\,\tilde\sigma,\tilde t/\sqrt{q}\big] = Z_{\mathrm{{III}_1}}\big[\theta_*,\theta_\star+\tfrac{1}{2}\,|\,\sigma-\tfrac{1}{2},\sqrt{q} t\big],\\ Z_{\mathrm{{III}_1}}\big[\tilde\theta_*+\tfrac{1}{2},\tilde\theta_\star\,|\,\tilde\sigma,\sqrt{q}\tilde t\big] = Z_{\mathrm{{III}_1}}\big[\theta_*,\theta_\star-\tfrac{1}{2}\,|\,\sigma-\tfrac{1}{2},\sqrt{q} t\big],\\ Z_{\mathrm{{III}_1}}\big[\tilde\theta_*,\tilde\theta_\star-\tfrac{1}{2}\,|\,\tilde\sigma+\tfrac{1}{2},\tilde t/\sqrt{q}\big] = Z_{\mathrm{{III}_1}}\big[\theta_*+\tfrac{1}{2},\theta_\star\,|\,\sigma,\sqrt{q} t\big],\\ Z_{\mathrm{{III}_1}}\big[\tilde\theta_*,\tilde\theta_\star+\tfrac{1}{2}\,|\,\tilde\sigma-\tfrac{1}{2},\tilde t\big] = Z_{\mathrm{{III}_1}}\big[\theta_*-\tfrac{1}{2},\theta_\star\,|\,\sigma-1,\sqrt{q} t\big], \end{gather*} respectively. 
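The Nekrasov-factor identity used above can likewise be checked numerically (our own sketch, not part of the proof, with $q$ real; `nek_0lam` and `nek_lam0` are the two one-sided specializations $N_{\varnothing,\lambda}$ and $N_{\lambda,\varnothing}$):

```python
q = 0.4

def conj(lam):
    return [sum(1 for li in lam if li >= j)
            for j in range(1, (lam[0] if lam else 0) + 1)]

def hooks(lam):
    # leg_lam(box) + arm_empty(box) + 1 = lam'_j - i - j + 1, over boxes (i,j)
    lc = conj(lam)
    return [lc[j - 1] - i - j + 1
            for i, li in enumerate(lam, 1) for j in range(1, li + 1)]

def nek_0lam(lam, u):
    # N_{empty,lam}(u) = prod_h (1 - q^h u)
    p = 1.0
    for h in hooks(lam):
        p *= 1 - q**h * u
    return p

def nek_lam0(lam, u):
    # N_{lam,empty}(u) = prod_h (1 - q^{-h} u)
    p = 1.0
    for h in hooks(lam):
        p *= 1 - q ** (-h) * u
    return p

lam, u, w = [3, 2], 0.7, 1.9
lhs = nek_0lam(lam, u) * nek_lam0(lam, w)
rhs = (u * w) ** sum(lam) * nek_0lam(lam, 1 / w) * nek_lam0(lam, 1 / u)
assert abs(lhs - rhs) < 1e-9
```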
Using the identity \begin{gather*} \frac{G_q(1+x+n)G_q(1-x)}{G_q(1-x-n)G_q(1+x)} =(-1)^{n(n+1)/2}q^{n(n+1)x/2+(n-1)n(n+1)/6}\Gamma_q(x)^n\Gamma_q(1-x)^n \end{gather*} for $n\in\mathbb{Z}$, we can compute the coefficients $C_{\mathrm{III_1}}$ and obtain \begin{alignat*}{3} &\tilde \tau_1=K\big[\theta_*,\theta_\star+\tfrac{1}{2},\sigma-\tfrac{1}{2}\big]\tau_4,\qquad&& \tilde \tau_2=s K\big[\theta_*,\theta_\star-\tfrac{1}{2},\sigma-\tfrac{1}{2}\big] \overline{\tau_3},&\\ &\tilde \tau_3=K\big[\theta_*+\tfrac{1}{2},\theta_\star,\sigma\big]\tau_2,\qquad && \tilde \tau_4=s K\big[\theta_*-\tfrac{1}{2},\theta_\star,\sigma-1\big] \overline{\tau_1},& \end{alignat*} where we denote by $\tilde{\tau}_i$ the tau functions with parameters $ ( \tilde{\theta}_*, \tilde{\theta}_\star, \tilde{\sigma}, \tilde{s}, \tilde{t} )$ and by $\tau_i$ the tau functions with parameters $(\theta_*,\theta_\star,\sigma,s,t)$, and \begin{gather*} K [\theta_*,\theta_\star,\sigma ]= q^{-(\theta_*+\theta_\star)\sigma^2} \prod_{\varepsilon,\varepsilon'=\pm}G_q(1+\varepsilon \theta_*+\varepsilon'\sigma)^\varepsilon G_q(1+\varepsilon\theta_\star+\varepsilon'\sigma)^\varepsilon . \end{gather*} By definition we have \begin{gather} \frac{K\big[\theta_*,\theta_\star+\tfrac{1}{2},\sigma-\tfrac{1}{2}\big] K\big[\theta_*,\theta_\star-\tfrac{1}{2},\sigma-\tfrac{1}{2}\big]}{K\big[\theta_*+\tfrac{1}{2},\theta_\star,\sigma\big]K\big[\theta_*-\tfrac{1}{2},\theta_\star,\sigma-1\big]}= -q^{(\theta_\star-\theta_*)/2} . \label{eq_K1} \end{gather} Applying the transformation \eqref{eq_qPIII_1_transformation} to the bilinear equations \eqref{bilin-5III_1} and \eqref{bilin-7III_1} and using the rela\-tion~\eqref{eq_K1}, we obtain the identities \eqref{bilin-1III_1} and \eqref{bilin-4III_1}. 
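The $G_q$ identity invoked at the start of this computation can be tested numerically for small $n$ (a truncated-product sanity check outside the proof, assuming $q$ real with $0<q<1$; `poch1`, `poch2` are our names for the truncated $(a;q)_\infty$, $(a;q,q)_\infty$):

```python
q = 0.3

def poch1(a, N=200):
    p = 1.0
    for j in range(N):
        p *= 1 - a * q**j
    return p

def poch2(a, N=200):
    # (a;q,q)_infty = prod_{m>=0} (1 - a q^m)^(m+1)
    p = 1.0
    for m in range(N):
        p *= (1 - a * q**m) ** (m + 1)
    return p

def Gamma_q(u):
    return poch1(q) / poch1(q**u) * (1 - q) ** (1 - u)

def G_q(u):
    return (poch2(q**u) / poch2(q)
            * poch1(q) ** (u - 1)
            * (1 - q) ** (-(u - 1) * (u - 2) / 2))

x, n = 0.23, 2
lhs = G_q(1 + x + n) * G_q(1 - x) / (G_q(1 - x - n) * G_q(1 + x))
rhs = ((-1) ** (n * (n + 1) // 2)
       * q ** (n * (n + 1) * x / 2 + (n - 1) * n * (n + 1) / 6)
       * Gamma_q(x) ** n * Gamma_q(1 - x) ** n)
assert abs(lhs - rhs) < 1e-8
```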
\end{proof} We note that the bilinear equations \eqref{bilin-1V}, \eqref{bilin-4V}, \eqref{bilin-5V}, and \eqref{bilin-7V} for the tau functions of~$q$-$\mathrm{P_V}$ degenerate to \eqref{bilin-1III_1}, \eqref{bilin-4III_1}, \eqref{bilin-5III_1}, and \eqref{bilin-7III_1}, respectively. \section[From $q$-$\mathrm{P_{{III}_1}}$ to $q$-$\mathrm{P_{{III}_2}}$]{From $\boldsymbol{q}$-$\boldsymbol{\mathrm{P_{{III}_1}}}$ to $\boldsymbol{q}$-$\boldsymbol{\mathrm{P_{{III}_2}}}$}\label{section5} In this section, we take a limit of the tau functions of $q$-$\mathrm{P_{III_1}}$ to $q$-$\mathrm{P_{III_2}}$. Define the tau function by \begin{gather*} \tau^{\mathrm{{III}_2}}(\theta_*\,|\, s,\sigma,t)= \sum_{n\in\mathbb{Z}}s^n t^{(\sigma+n)^2} C_{\mathrm{{III}_2}} [\theta_*\,|\,\sigma+n ] Z_{\mathrm{{III}_2}} [\theta_*\,|\,\sigma+n,t ] , \end{gather*} with \begin{gather*} C_{\mathrm{{III}_2}} [\theta_*\,|\,\sigma ] = (q-1)^{-3\sigma^2} \prod_{\varepsilon=\pm} \frac{G_q(1-\theta_*+\varepsilon\sigma)}{G_q(1+2\varepsilon\sigma)},\\ Z_{\mathrm{{III_2}}} [\theta_*\,|\,\sigma,t ] = \sum_{(\lambda_+,\lambda_-)\in\mathbb{Y}^2} t^{|\lambda_+|+|\lambda_-|} \frac{\prod\limits_{\varepsilon=\pm}N_{\varnothing,\lambda_{\varepsilon}} \big(q^{-\theta_*-\varepsilon \sigma}\big) f_{\lambda_{\varepsilon}}(q^{\varepsilon\sigma})^{-1}} {\prod\limits_{\varepsilon,\varepsilon'=\pm}N_{\lambda_\varepsilon,\lambda_{\varepsilon'}}\big(q^{(\varepsilon-\varepsilon')\sigma}\big)} . \end{gather*} In the same way as in Section~\ref{section3}, it is possible to remove $f_{\lambda_\varepsilon}(q^{\varepsilon\sigma})^{-1}$ from $Z_\mathrm{{III}_2} [\theta_*\,|\,\sigma,t ]$ by change of variables. 
Indeed, if we set \begin{gather*} Z_\mathrm{{III_2}}^{CS=0} [\theta_*\,|\,\sigma,t ] =\sum_{(\lambda_+,\lambda_-)\in\mathbb{Y}^2} t^{|\lambda_+|+|\lambda_-|} \frac{\prod\limits_{\varepsilon=\pm}N_{\varnothing,\lambda_{\varepsilon}}\big(q^{-\theta_*-\varepsilon \sigma}\big)} {\prod\limits_{\varepsilon,\varepsilon'=\pm}N_{\lambda_\varepsilon,\lambda_{\varepsilon'}}\big(q^{(\varepsilon-\varepsilon')\sigma}\big)}, \end{gather*} then we have \begin{gather*} Z_\mathrm{III_2} [\theta_*\,|\,\sigma,t ] =Z_\mathrm{III_2}^{CS=0} \big[{-}\theta_*\,|\,\sigma,q^{-\theta_*}t \big]. \end{gather*} Let us define the tau functions for $q$-$\mathrm{P_{III_2}}$ by \begin{gather*} \tau^{\mathrm{{III}_2}}_1= \tau^{\mathrm{{III}_2}}\big(\theta_*-\tfrac{1}{2}\,|\, s,\sigma,t/\sqrt{q}\big),\qquad \tau^{\mathrm{{III}_2}}_2= \tau^{\mathrm{{III}_2}}\big(\theta_*+\tfrac{1}{2}\,|\, s,\sigma+1,\sqrt{q}t\big),\\ \tau^{\mathrm{{III}_2}}_3=\tau^{\mathrm{{III}_2}}\big(\theta_*\,|\, s,\sigma+\tfrac{1}{2},t\big). \end{gather*} Put \begin{gather*} C_1=(q-1)^{-\sigma^2}q^{-\Lambda\sigma^2}\prod_{\varepsilon=\pm}G_q (1-\Lambda+\varepsilon\sigma )^{-1},\\ C_2=C_1,\\ C_3=(q-1)^{-(\sigma+1/2)^2}q^{-(\Lambda-1/2)(\sigma+1/2)^2}\prod_{\varepsilon=\pm}G_q\big( \tfrac{3}{2} -\Lambda+\varepsilon\big(\sigma+\tfrac{1}{2}\big)\big)^{-1},\\ C_4=(q-1)^{-(\sigma-1/2)^2}q^{-(\Lambda+1/2)(\sigma-1/2)^2}\prod_{\varepsilon=\pm}G_q\big(\tfrac{1}{2}-\Lambda+\varepsilon\big(\sigma-\tfrac{1}{2}\big)\big)^{-1}.
\end{gather*} \begin{Proposition}Set \begin{gather*} \theta_\star=\Lambda,\qquad t = q^{\Lambda}t_1,\qquad s = \tilde s (q-1)^{-2\sigma}q^{-\sigma(2\Lambda+1)} \prod_{\varepsilon=\pm}\Gamma_q (-\Lambda+\varepsilon\sigma )^{-\varepsilon}. \end{gather*} Then we have \begin{gather*} C_i\tau^{\mathrm{III_1}}_i(\theta_*,\theta_\star\,|\, s,\sigma,t)\to \tau^{\mathrm{III_2}}_i(\theta_*\,|\, \tilde s,\sigma,t_1),\qquad i=1,3,\\ C_2\tau^{\mathrm{III_1}}_2(\theta_*,\theta_\star\,|\, s,\sigma,t)\to \tilde s\tau^{\mathrm{III_2}}_2(\theta_*\,|\, \tilde s,\sigma,t_1),\\ C_4\tau^{\mathrm{III_1}}_4(\theta_*,\theta_\star\,|\, s,\sigma,t)\to \tilde s\tau^{\mathrm{III_2}}_3(\theta_*\,|\, \tilde s,\sigma,t_1) \end{gather*} as $\Lambda \to \infty$. \end{Proposition} In what follows, we abbreviate $\tau_i^{\mathrm{III_2}}(\theta_*\,|\, s,\sigma,t)$ to $\tau_i$. Since we have the relation \begin{gather*} C_1C_2=(q-1)^{-1/2}\big(q^{-\Lambda/2}-q^{\Lambda/2-\sigma}\big)C_3C_4, \end{gather*} we obtain the following theorem by this degeneration. \begin{Theorem} The functions \begin{gather*} y=q^{-\theta_*-1}(q-1)^{-1/2} t^{1/2}\frac{\tau_3^2}{\tau_1\tau_2},\qquad z=q^{-\theta_*/2-3/4}(q-1)^{-1/2}t^{1/2}\frac{\tau_3\underline{\tau_3}}{\underline{\tau_1}\tau_2} \end{gather*} solve the $q$-Painlev\'e $\mathrm{III_2}$ equation \begin{gather*} \frac{y \overline y}{a_3 a_4}=-\frac{\overline z^2}{\overline z-b_3} ,\qquad \frac{z \overline z}{b_3}=-\frac{y(y-a_2 t)}{a_4(y-a_3)} \end{gather*} with the parameters \begin{gather*} a_2=q^{-\theta_*-1},\qquad a_3=q^{-1}, \qquad a_4=q^{-3\theta_*/2-1/2}, \qquad b_2=q^{-\theta_*/2},\qquad b_3=q^{-\theta_*/2-1/2}. \end{gather*} Furthermore, the tau functions $\tau_i$ ($i=1,2,3$) satisfy the following bilinear equations.
\begin{gather} \tau_1\tau_2- q^{-\theta_*}(q-1)^{-1/2} t^{1/2} \tau_3^2-\overline{\tau_1} \underline{\tau_2}=0,\label{bilin-1III_2}\\ \tau_1\tau_2-(q-1)^{-1/2} t^{-1/2} \tau_3^2+(q-1)^{-1/2} t^{-1/2} \overline{\tau_3}\underline{ \tau_3}=0,\label{bilin-4III_2}\\ \tau_1\underline{\tau_2}+q^{-1/4}(q-1)^{-1/2}t^{1/2} \tau_3\underline{\tau_3}-\underline{\tau_1}\tau_2=0.\label{bilin-5III_2} \end{gather} \end{Theorem} We note that the bilinear equations \eqref{bilin-1III_1}, \eqref{bilin-4III_1}, and \eqref{bilin-5III_1} for the tau functions of $q$-$\mathrm{P_{III_1}}$ degenerate to \eqref{bilin-1III_2}, \eqref{bilin-4III_2}, and \eqref{bilin-5III_2}, respectively. \section[From $q$-$\mathrm{P_{{III}_2}}$ to $q$-$\mathrm{P_{{III}_3}}$]{From $\boldsymbol{q}$-$\boldsymbol{\mathrm{P_{{III}_2}}}$ to $\boldsymbol{q}$-$\boldsymbol{\mathrm{P_{{III}_3}}}$}\label{section6} In this section, we take a limit of the tau functions of $q$-$\mathrm{P_{III_2}}$ to $q$-$\mathrm{P_{{III}_3}}$. Define the tau function by \begin{gather*} \tau^{\mathrm{III_3}}(s,\sigma,t)= \sum_{n\in\mathbb{Z}}s^n t^{(\sigma+n)^2}C_{\mathrm{{III}_3}}[\sigma+n] Z_{\mathrm{{III}_3}}[\sigma+n,t], \end{gather*} with \begin{gather*} C_{\mathrm{{III}_3}} [\sigma ]=(q-1)^{-4\sigma^2}\prod_{\varepsilon=\pm} \frac{1}{G_q(1+2\varepsilon \sigma)},\\ Z_{\mathrm{{III_3}}}[\sigma,t]=\sum_{(\lambda_+,\lambda_-)\in\mathbb{Y}^2}t^{|\lambda_+|+|\lambda_-|} \frac{1}{\prod\limits_{\varepsilon,\varepsilon'=\pm}N_{\lambda_\varepsilon,\lambda_{\varepsilon'}}\big(q^{(\varepsilon-\varepsilon')\sigma}\big)}. \end{gather*} Let us define the tau functions for $q$-$\mathrm{P_{III_3}}$ by \begin{gather*} \tau^{\mathrm{III_3}}_1=\tau^{\mathrm{III_3}}(s,\sigma,t),\qquad \tau^{\mathrm{III_3}}_2=\tau^{\mathrm{III_3}}\big(s,\sigma+\tfrac{1}{2},t\big).
\end{gather*} Put \begin{gather*} C_1=(q-1)^{-\sigma^2}q^{-(\Lambda-1/2)\sigma^2} \prod_{\varepsilon=\pm}G_q\big( \tfrac{3}{2}-\Lambda+\varepsilon\sigma\big)^{-1},\\ C_2=(q-1)^{-(\sigma+1)^2}q^{-(\Lambda+1/2)(\sigma+1)^2}\prod_{\varepsilon=\pm}G_q\big(\tfrac{1}{2}-\Lambda+\varepsilon(\sigma+1)\big)^{-1},\\ C_3=(q-1)^{-(\sigma+1/2)^2}q^{-\Lambda(\sigma+1/2)^2}\prod_{\varepsilon=\pm}G_q\big(1-\Lambda+\varepsilon\big(\sigma+\tfrac{1}{2}\big)\big)^{-1}. \end{gather*} \begin{Proposition}Set \begin{gather*} \theta_*=\Lambda,\qquad t = q^{\Lambda}t_1, \qquad s = \tilde s (q-1)^{-2\sigma}q^{-2\sigma\Lambda} \prod_{\varepsilon=\pm}\Gamma_q\big(\tfrac{1}{2}-\Lambda+\varepsilon\sigma\big)^{-\varepsilon}. \end{gather*} Then we have \begin{gather*} C_1\tau_1^{\mathrm{III_2}}(\theta_*\,|\, s,\sigma,t) \to \tau_1^{\mathrm{III_3}}(\tilde s, \sigma, t_1),\\ C_2\tau_2^{\mathrm{III_2}}(\theta_*\,|\, s,\sigma,t)\to \tau_1^{\mathrm{III_3}}(\tilde s, \sigma, t_1)/\tilde s,\\ C_3\tau_3^{\mathrm{III_2}}(\theta_*\,|\, s,\sigma,t)\to \tau_2^{\mathrm{III_3}}(\tilde s, \sigma, t_1), \end{gather*} as $\Lambda\to\infty$. \end{Proposition} In what follows, we abbreviate $\tau_i^{\mathrm{III_3}}( s,\sigma,t)$ to $\tau_i$. Since we have the relation \begin{gather*} C_1C_2=(q-1)^{1/2}\frac{q^{-\sigma-1/2+\Lambda/2}}{q^{-\sigma-1/2}-q^\Lambda}C_3^2, \end{gather*} we obtain the following theorem by this degeneration. \begin{Theorem} The functions \begin{gather*} y=t^{1/2}\frac{s\tau_2^2}{\tau_1^2},\qquad z=q^{-3/4}t^{1/2}\frac{s\tau_2\underline{\tau_2}}{\tau_1\underline{\tau_1}} \end{gather*} solve the $q$-Painlev\'e $\mathrm{III}_3$ equation \begin{gather} \frac{y \overline y}{a_3 }=\overline z^2 ,\qquad z \overline z=-\frac{y(y-a_2 t)}{y-a_3}\label{eq_qPIII_3} \end{gather} with the parameters \begin{gather*} a_2=q^{-1},\qquad a_3=q^{-1}. \end{gather*} Furthermore, the tau functions $\tau_1$, $\tau_2$ satisfy the following bilinear equations.
\begin{gather} st^{1/2} \tau_2^2-\tau_1^2 +\overline{\tau_1}\underline{\tau_1}=0,\label{bilin-1III_3}\\ s^{-1}t^{1/2}\tau_1^2-\tau_2^2+\overline{\tau_2}\underline{\tau_2}=0.\label{bilin-4III_3} \end{gather} \end{Theorem} We note that the bilinear equations \eqref{bilin-1III_2}, \eqref{bilin-4III_2} for the tau functions of~$q$-$\mathrm{P_{III_2}}$ degenerate to~\eqref{bilin-1III_3}, \eqref{bilin-4III_3}, respectively. As suggested in \cite[equations~(2.9)--(2.11)]{BS}, the bilinear equa\-tion~\eqref{bilin-4III_3} is derived from~\eqref{bilin-1III_3} by the transformation $\sigma\to \sigma+1/2$. \begin{Remark}The tau function $\mathcal{T}_c\big(q^{2\sigma},s;q\,|\,t\big)$ proposed in~\cite{BS} for the $q$-Painlev\'e $\mathrm{III}_3$ equation is related to our tau functions by \begin{gather*} \mathcal{T}_c\big(q^{2\sigma},s;q\,|\,t\big)= (-1)^{-2\sigma^2} \tau^{\mathrm{III_3}}\big( (-1)^{-4\sigma}s, \sigma, t\big). \end{gather*} \end{Remark} \begin{Remark}$q$-$P(A_7')$ in \cite{Mu} (or $q$-$P\big(A_1^{(1)}/A_7^{(1)}\big)$ in \cite[equation~(8.14)]{KNY}) is \begin{gather*} \frac{y\overline y}{a_4}=-\frac{\overline z(\overline z-b_2 t)}{\overline z-b_3},\qquad \frac{z\overline z}{b_3}=\frac{y^2}{a_4}, \end{gather*} where $y=y(t)$, $z=z(t)$, and $a_4$, $b_1$, $b_2$, $b_3$ are complex parameters. Replacing $y$, $z$ in \eqref{eq_qPIII_3} by~$z$,~$\underline{y}$, we obtain $q$-$P(A_7')$ with $a_4=1$, $b_2=1$, and $b_3=q^{-1}$. \end{Remark} The bilinear equations \eqref{bilin-1III_3}, \eqref{bilin-4III_3} can also be proved using the Nakajima--Yoshioka blow-up equations~\cite{BS1}. There exists another $q$-difference equation admitting $\mathrm{P_{III_3}}$ and $\mathrm{P_I}$ as limits~\cite{GR}, which corresponds to the $q$-difference Painlev\'e equation of the surface type $A_7^{(1)}$ \cite{S}. Its standard form (see equation~(2.44) in \cite{S1}) is \begin{gather}\label{eq_qPA7} \overline{g}g^2\underline{g}=t^{2}(1-g), \end{gather} where $g=g(t)$.
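As a quick numerical illustration, the scalar $q$-difference equation~\eqref{eq_qPA7} can be iterated forward in $t$ once two consecutive values of $g$ are known. The sketch below uses hypothetical values of $q$ and of the seed data; these are illustrative choices, not quantities from the paper.

```python
# Iterate the q-difference equation  g(qt) g(t)^2 g(t/q) = t^2 (1 - g(t))
# forward in t.  The values of q, t, and the seed data g(t/q), g(t) are
# hypothetical -- any generic choice works for this illustration.
q = 1.21
t = 0.5
g_prev, g_cur = 0.4, 0.45        # g(t/q), g(t)

history = [(t, g_cur)]
for _ in range(10):
    g_next = t**2 * (1.0 - g_cur) / (g_cur**2 * g_prev)  # solve for g(qt)
    g_prev, g_cur = g_cur, g_next
    t *= q
    history.append((t, g_cur))

# Sanity check: the defining relation holds at every interior grid point.
for k in range(1, len(history) - 1):
    t_k, g_k = history[k]
    lhs = history[k + 1][1] * g_k**2 * history[k - 1][1]
    rhs = t_k**2 * (1.0 - g_k)
    assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(rhs))
```

Since the equation is first-order in the shift $t\to qt$ after solving for $\overline g$, two consecutive values determine the whole forward orbit on the grid $\{q^k t\}$.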
A series expansion of the tau function for $q$-$P\big(A_7^{(1)}\big)$ \eqref{eq_qPA7} was proposed and conjectured to satisfy its bilinear form in \cite{BGM}. Later, it was proved in~\cite{BS1}. Below, we show that their tau function for $q$-$P\big(A_7^{(1)}\big)$~\eqref{eq_qPA7} is also obtained as another limit of the tau function for $q$-$\mathrm{P_{III_2}}$. Redefine the tau function by \begin{gather*} \tau^{\mathrm{III_3}}(s,\sigma,t)= \sum_{n\in\mathbb{Z}} s^n t^{(\sigma+n)^2} C_{\mathrm{{III}_3}} [\sigma+n ] Z_{\mathrm{{III}_3}}[\sigma+n,t], \end{gather*} with \begin{gather*} C_{\mathrm{{III}_3}} [\sigma ] = (-1)^{n^2}(q-1)^{-4\sigma^2}\prod_{\varepsilon=\pm} \frac{1}{G_q(1+2\varepsilon(\sigma+n))},\\ Z_{\mathrm{{III_3}}}[\sigma,t]=\sum_{(\lambda_+,\lambda_-)\in\mathbb{Y}^2}t^{|\lambda_+|+|\lambda_-|} \frac{\prod\limits_{\varepsilon=\pm}f_{\lambda_\varepsilon}(q^{\varepsilon\sigma})^{-1}} {\prod\limits_{\varepsilon,\varepsilon'=\pm}N_{\lambda_\varepsilon,\lambda_{\varepsilon'}}\big(q^{(\varepsilon-\varepsilon')\sigma}\big)}. \end{gather*} Let us define the tau functions for $q$-$P\big(A_7^{(1)}\big)$ by \begin{gather*} \tau^{\mathrm{III_3}}_1=\tau^{\mathrm{III_3}}\big(s,\sigma,t/\sqrt{q}\big),\qquad \tau^{\mathrm{III_3}}_2=\tau^{\mathrm{III_3}}\big(s,\sigma+\tfrac{1}{2},t\big). \end{gather*} Put \begin{gather*} C_1= (q-1)^{-\sigma^2}\prod_{\varepsilon=\pm}G_q\big( \tfrac{3}{2}+\Lambda+\varepsilon\sigma\big)^{-1}, \\ C_2=(q-1)^{-(\sigma+1)^2}\prod_{\varepsilon=\pm}G_q\big(\tfrac{1}{2}+\Lambda+\varepsilon(\sigma+1)\big)^{-1},\\ C_3=(q-1)^{-(\sigma+1/2)^2}\prod_{\varepsilon=\pm}G_q\big(1+\Lambda+\varepsilon\big(\sigma+\tfrac{1}{2}\big)\big)^{-1}. \end{gather*} \begin{Proposition}Set \begin{gather*} \theta_*=-\Lambda, \qquad s = \tilde s (q-1)^{-2\sigma}\prod_{\varepsilon=\pm}\Gamma_q\big(\tfrac{1}{2}+\Lambda+\varepsilon\sigma\big)^{-\varepsilon}. 
\end{gather*} Then we have \begin{gather*} C_1\tau_1^{\mathrm{III_2}}(\theta_*\,|\, s,\sigma,t) \to \tau_1^{\mathrm{III_3}}(\tilde s, \sigma, t),\\ C_2\tau_2^{\mathrm{III_2}}(\theta_*\,|\, s,\sigma,t)\to \tau_1^{\mathrm{III_3}}(\tilde s, \sigma, q t)/\tilde s,\\ C_3\tau_3^{\mathrm{III_2}}(\theta_*\,|\, s,\sigma,t)\to \tau_2^{\mathrm{III_3}}(\tilde s, \sigma, t), \end{gather*} as $\Lambda\to\infty$. \end{Proposition} In what follows, we abbreviate $\tau_i^{\mathrm{III_3}}( s,\sigma,t)$ to $\tau_i$. Since we have the relation \begin{gather*} C_1C_2=\frac{(q-1)^{1/2}}{1-q^{\Lambda-\sigma+1/2}}C_3^2, \end{gather*} we obtain the following theorem by this degeneration. \begin{Theorem} The functions \begin{gather*} y=-q^{-1}t^{1/2}\frac{s\tau_2^2}{\tau_1\overline{\tau_1}},\qquad z=-q^{-3/4}t^{1/2}\frac{s\tau_2\underline{\tau_2}}{\underline{\tau_1}\overline{\tau_1}} \end{gather*} solve \begin{gather} y \overline y=-q^{-3/2}\frac{\overline z^2}{\overline z-q^{-1/2}} ,\qquad z \overline z=y(qy- t).\label{eq_qPIII_3_2} \end{gather} Furthermore, the tau functions $\tau_1$, $\tau_2$ satisfy the following bilinear equations. \begin{gather} s^{-1}t^{1/2} \tau_1\overline{\tau_1}-\tau_2^2 +\overline{\tau_2}\underline{\tau_2}=0,\label{bilin-4III_3_2}\\ \tau_1^2-sq^{-1/4}t^{1/2}\tau_2\underline{\tau_2}-\underline{\tau_1}\overline{\tau_1}=0.\label{bilin-5III_3_2} \end{gather} \end{Theorem} We note that the bilinear equations \eqref{bilin-4III_2}, \eqref{bilin-5III_2} for the tau functions of $q$-$\mathrm{P_{III_2}}$ degenerate to \eqref{bilin-4III_3_2}, \eqref{bilin-5III_3_2}, respectively. By the change of variables $t\to \sqrt{q}t$, $\sigma\to\sigma+1/2$, the bilinear equation~\eqref{bilin-5III_3_2} transforms into~\eqref{bilin-4III_3_2}. The bilinear equation~\eqref{bilin-4III_3_2} is equivalent to the bilinear equation~(4.20) for $N=2$, $m=1$ in~\cite{BS1}, which is for $q$-$P\big(A_7^{(1)}\big)$.
Following \cite[Example~3.5]{BGM}, we take a~time evolution $T$ as $T(f(\sigma,t))=f(\sigma+1/2,\sqrt{q} t)$. Then the bilinear equation \eqref{bilin-5III_3_2} is equivalent to \begin{gather*} \tau^2-t^{1/2}\overline{\tau}\underline{\tau}-\overline{\overline{\tau}} \underline{\underline{\tau}}=0, \end{gather*} where $\tau=\tau^{\mathrm{III_3}}(s,\sigma,t)$, $\overline{\tau}=T(\tau)$, $\underline{\tau}=T^{-1}(\tau)$. Let $g=t^{1/2}\overline{\tau}\underline{\tau}\tau^{-2}$, then $g$ satisfies $q$-$P\big(A_7^{(1)}\big)$~\eqref{eq_qPA7}. \subsection*{Acknowledgements} This work is partially supported by JSPS KAKENHI Grant Number JP15K17560. The authors thank the referees for valuable suggestions and comments. \pdfbookmark[1]{References}{ref}
\section{Introduction} A promising line of recent literature has examined the nonconvex objective functions that arise when certain matrix optimization problems are solved in factored form, that is, when a low-rank optimization variable $\mtx{X}$ is replaced by a product of two thin matrices $\mtx{U} \mtx{V}^\mathrm{T}$ and the optimization proceeds jointly over $\mtx{U}$ and $\mtx{V}$~\cite{zhu2018global,zhu2017global,Chi2018,li2018non,ChenChi2018Harnessing,GeEtAl2017No,SunLuo2016Guaranteed}. In many cases, a study of the geometric landscape of these objective functions reveals that---despite their nonconvexity---they possess a certain favorable geometry. In particular, many of the resulting objective functions ($i$) satisfy the {\em strict saddle property}~\cite{GeEtAl2015Escaping,SunEtAl2015When}, where every critical point is either a local minimum or is a strict saddle point, at which the Hessian matrix has at least one negative eigenvalue, and ($ii$) have no spurious local minima (every local minimum corresponds to a global minimum). One such problem---which is both of fundamental importance and representative of structures that arise in many other machine learning problems~\cite{koren2009matrix}---is the low-rank matrix approximation problem, where given a data matrix $\mtx{Y}$ the objective is to minimize $\| \mtx{U} \mtx{V}^\mathrm{T} - \mtx{Y} \|_F^2$. As we explain in Theorem~\ref{thm:centerss}, building on recent analysis in~\cite{nouiehed2018learning} and~\cite{zhu2017global}, this problem satisfies the strict saddle property and has no spurious local minima. In parallel with the recent focus on the favorable geometry of certain nonconvex landscapes, it has been shown that a number of local search algorithms have the capability to avoid strict saddle points and converge to a local minimizer for problems that satisfy the strict saddle property~\cite{lee2016gradient,lee2017first,jin2017escape,royer2018newton}. 
As stated in~\cite{lee2017first} and as we summarize in Theorems~\ref{thm:avoid saddle gd} and~\ref{thm:boundedgd}, gradient descent when started from a random initialization is one such algorithm. For problems such as low-rank matrix approximation that have no spurious local minima, converging to a local minimizer means converging to a global minimizer. To date, the geometric and algorithmic research described above has largely focused on {\em centralized optimization}, where all computations happen at one ``central'' node that has full access, for example, to the data matrix $\mtx{Y}$. In this work, we study the impact of {\em distributing} the factored optimization problem, such as would be necessary if the data matrix $\mtx{Y}$ in low-rank matrix approximation were partitioned into submatrices $\mtx{Y} = \begin{bmatrix} \mtx{Y}_1 & \mtx{Y}_2 & \cdots & \mtx{Y}_J \end{bmatrix}$, each of which was available at only one node in a network. By similarly partitioning the matrix $\mtx{V}$, one can partition the objective function \begin{equation} \| \mtx{U} \mtx{V}^\mathrm{T} - \mtx{Y} \|_F^2 = \sum_{j=1}^J \| \mtx{U}\mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2. \label{eq:pca1} \end{equation} As we discuss, one can attempt to minimize the resulting objective, in which the matrix $\mtx{U}$ appears in every term of the summation, using techniques similar to classical distributed algorithms such as distributed gradient descent (DGD)~\cite{nedic2009distributed}. These algorithms, however, involve creating local copies $\mtx{U}^1, \mtx{U}^2, \dots, \mtx{U}^J$ of the optimization variable $\mtx{U}$ and iteratively sharing updates of these variables with the aim of converging to a consensus where (exactly or approximately) $\mtx{U}^1 = \mtx{U}^2 = \cdots = \mtx{U}^J$. In this paper we study a straightforward extension of DGD for solving such problems. 
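The column-block decomposition in~\eqref{eq:pca1} is elementary to verify numerically. The following sketch (with arbitrary random dimensions, an illustrative choice rather than a setting from the paper) checks that the Frobenius objective splits across the blocks $(\mtx{Y}_j, \mtx{V}_j)$.

```python
import numpy as np

# Check that || U V^T - Y ||_F^2 = sum_j || U V_j^T - Y_j ||_F^2 when Y is
# split into column blocks Y_j and V into the conformable row blocks V_j.
# The dimensions and random data below are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n, m, r, J = 6, 9, 2, 3
U = rng.standard_normal((n, r))
V = rng.standard_normal((m, r))
Y = rng.standard_normal((n, m))

full = np.linalg.norm(U @ V.T - Y, 'fro')**2

col_blocks = np.array_split(np.arange(m), J)   # column indices of Y_j / rows of V_j
blockwise = sum(np.linalg.norm(U @ V[c].T - Y[:, c], 'fro')**2 for c in col_blocks)

assert np.isclose(full, blockwise)
```

The identity holds because the squared Frobenius norm is simply a sum of squared entries, so any column partition of the residual splits the objective exactly.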
This extension, which we term DGD+LOCAL, resembles classical DGD in that each node $j$ has a local {\em copy} $\mtx{U}^j$ of the optimization variable $\mtx{U}$ as described above. Additionally, however, each node has a local {\em block} $\mtx{V}_j$ of the partitioned optimization variable $\mtx{V}$, and this block exists only locally at node $j$ without any consensus or sharing among other nodes. We present a geometric framework for analyzing the convergence of DGD+LOCAL in such problems. Our framework relies on a straightforward conversion which reveals (for example in the low-rank matrix approximation problem) that DGD+LOCAL as described above is equivalent to running conventional gradient descent on the objective function \begin{equation} \sum_{j=1}^J \left(\| \mtx{U}^j \mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2 + \sum_{i=1}^J w_{ji} \|\mtx{U}^j - \mtx{U}^i\|_F^2\right), \label{eq:pca2} \end{equation} where $w_{ji}$ are weights inherited from the DGD+LOCAL iterations. This objective function~\eqref{eq:pca2} differs from the original objective function~\eqref{eq:pca1} in two respects: it contains more optimization variables, and it includes a quadratic regularizer to encourage consensus. Although the geometry of~\eqref{eq:pca1} is understood to be well-behaved, new questions arise about the geometry of~\eqref{eq:pca2}: Does it contain new critical points (local minima that are not global, saddle points that are not strict)? And on the consensus subspace, where $\mtx{U}^1 = \mtx{U}^2 = \cdots = \mtx{U}^J$, how do the critical points of~\eqref{eq:pca2} relate to the critical points of~\eqref{eq:pca1}? We answer these questions and build on the algorithmic results for gradient descent to identify in Theorem~\ref{thm:mainRevisedfj} sufficient conditions where DGD+LOCAL is guaranteed to converge to a point that ($i$) is exactly on the consensus subspace, and ($ii$) coincides with a global minimizer of problem~\eqref{eq:pca1}. 
Under these conditions, the distributed low-rank matrix approximation problem is shown to enjoy the same geometric and algorithmic guarantees as its well-behaved centralized counterpart. For the distributed low-rank matrix approximation problem, these guarantees are stronger than those that appear in the literature for classical DGD and more general problems. In particular, we show exact convergence to the consensus subspace with a fixed DGD+LOCAL stepsize, which in more general works is accomplished only with diminishing DGD stepsizes for convex~\cite{chen2012fast,jakovetic2014fast} and nonconvex~\cite{zeng2018nonconvex} problems or by otherwise modifying DGD as in the EXTRA algorithm~\cite{shi2015extra}. Moreover, we show convergence to a global minimizer of the original centralized nonconvex problem. Until recently, existing DGD results either considered convex problems~\cite{chen2012fast,jakovetic2014fast} or showed convergence to stationary points of nonconvex problems~\cite{zeng2018nonconvex}. Very recently, it was also shown~\cite{daneshmand2018second} that with an appropriately small stepsize, DGD can converge to an arbitrarily small neighborhood of a second-order critical point for general nonconvex problems with additional technical assumptions. Our work differs from~\cite{daneshmand2018second} in our use of DGD+LOCAL (rather than DGD) and our focus on one specific problem where we can establish stronger guarantees of exact global optimality and exact consensus without requiring an arbitrarily small (or diminishing) stepsize. Our main results on distributed low-rank matrix factorization are presented in Section~\ref{sec:theorypca}. These results build on several more general algorithmic and geometric results that we first establish in Section~\ref{sec:theoryunc}. The results from Section~\ref{sec:theoryunc} may have broader applicability, and the geometric and algorithmic discussions there may be of interest independently of one another.
\section{General Analysis of DGD+LOCAL} \label{sec:theoryunc} Consider a centralized minimization problem that can be written in the form \begin{align} \minimize_{\vct{x}, \vct{y}} f(\vct{x},\vct{y}) = \sum_{j=1}^J f_j(\vct{x},\vct{y}_j), \label{eq:centeralized problem}\end{align} where $\vct{y} = \begin{bmatrix}\vct{y}_1^\mathrm{T} & \cdots & \vct{y}_J^\mathrm{T}\end{bmatrix}^\mathrm{T}$. Here $\vct{x}$ is the common variable in all of the objective functions $\{f_j\}_{j\in[J]}$ and $\vct{y}_j$ is the variable only corresponding to $f_j$. The standard DGD algorithm~\cite{nedic2009distributed} is stated for problems of the form \begin{align*} \minimize_{\vct{x}} f(\vct{x}) = \sum_{j=1}^J f_j(\vct{x}), \end{align*} and for such problems it involves updates of the form \begin{align*} \vct{x}^j(k+1) &= \sum_{i=1}^J \left(\widetilde w_{ji}\vct{x}^i(k)\right) - \mu \nabla_{\vct{x}} f_j(\vct{x}^j (k)), \end{align*} where $\{\widetilde w_{ji}\}$ are a set of symmetric nonnegative weights, and $\widetilde w_{ji}$ is positive if and only if nodes $i$ and $j$ are neighbors in the network or $i=j$. Throughout this paper, we will make the common assumption~\cite{mokhtari2017network} that \begin{align} \sum_{i =1}^J \widetilde w_{ji} = 1 ~\text{for all}~ j\in[J]. \label{eq:sumto1} \end{align} A very natural extension of DGD to problems of the form~\eqref{eq:centeralized problem}---which involve local {\em copies} of the shared variable $\vct{x}$ and local {\em partitions} of the variable $\vct{y}$---is to perform the updates \begin{align} \vct{x}^j(k+1) &= \sum_{i=1}^J \left(\widetilde w_{ji}\vct{x}^i(k)\right) - \mu \nabla_{\vct{x}} f_j(\vct{x}^j (k),\vct{y}_j(k)), \nonumber \\ \vct{y}_j(k+1) & = \vct{y}_j(k) - \mu \nabla_{\vct{y}}f_j(\vct{x}^j(k),\vct{y}_j(k)). \label{eq:DGDtemplate} \end{align} Because we are interested in solving problems of the form~\eqref{eq:centeralized problem}, we refer to~\eqref{eq:DGDtemplate} as DGD+LOCAL throughout this paper. 
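For concreteness, the DGD+LOCAL iteration~\eqref{eq:DGDtemplate} can be sketched in a few lines. This is a schematic implementation, not code from the paper; the gradient oracles `grad_x_fj` and `grad_y_fj` are assumed to be supplied by the user.

```python
import numpy as np

# Schematic implementation of the DGD+LOCAL iteration: each node j keeps a
# copy x^j of the shared variable and a private block y_j.  The gradient
# oracles grad_x_fj(j, x, y) and grad_y_fj(j, x, y) are assumptions of this
# sketch, supplied by the user.
def dgd_local(x0, y0, grad_x_fj, grad_y_fj, W_tilde, mu, num_iters):
    """x0, y0: lists of the J initial copies x^j and local blocks y_j;
    W_tilde: symmetric J x J nonnegative weight matrix with rows summing to 1."""
    J = len(x0)
    x = [xj.copy() for xj in x0]
    y = [yj.copy() for yj in y0]
    for _ in range(num_iters):
        # x-update: consensus mixing plus a local gradient step (uses old y_j)
        x_new = [sum(W_tilde[j, i] * x[i] for i in range(J))
                 - mu * grad_x_fj(j, x[j], y[j]) for j in range(J)]
        # y-update: purely local gradient step (uses the old x^j)
        y = [y[j] - mu * grad_y_fj(j, x[j], y[j]) for j in range(J)]
        x = x_new
    return x, y
```

For instance, with separable quadratics $f_j(\vct{x},\vct{y}_j)=\|\vct{x}-\vct{a}_j\|_2^2+\|\vct{y}_j-\vct{b}_j\|_2^2$ the local blocks $\vct{y}_j$ converge to $\vct{b}_j$, while the copies $\vct{x}^j$ reach approximate consensus near the average of the $\vct{a}_j$; whether consensus can be exact with a fixed stepsize is precisely what the later analysis addresses.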
We note that DGD+LOCAL is not equivalent to the algorithm one would obtain by applying classical DGD to reach consensus over the concatenated variables $\vct{x}$ and $\vct{y}$, as this would require each node to maintain a local copy of the entire vector $\vct{y}$. For the same reason, DGD+LOCAL is not equivalent to the blocked variable problem described in~\cite{notarnicola2017distributed}. \subsection{Relation to Gradient Descent} \label{sec:dgdgd} Note that we can rewrite the first equation in~\eqref{eq:DGDtemplate} as \begin{align*} \vct{x}^j(k+1) &= \bigg(\sum_{i =1}^J \widetilde w_{ji}\bigg)\vct{x}^j(k) - \mu \left( \nabla_{\vct{x}} f_j(\vct{x}^j (k),\vct{y}_j(k)) + \sum_{i\neq j} \frac{\widetilde w_{ji}}{\mu}(\vct{x}^j(k) - \vct{x}^i(k)) \right) \\ &= \vct{x}^j(k) - \mu \left( \nabla_{\vct{x}} f_j(\vct{x}^j (k),\vct{y}_j(k)) + \sum_{i\neq j} \frac{\widetilde w_{ji}}{\mu}(\vct{x}^j(k) - \vct{x}^i(k)) \right). \end{align*} In the second line, we have used the assumption~\eqref{eq:sumto1}. Thus, by defining $\{w_{ji}\}$ such that \begin{equation} w_{ji}=w_{ij} = \begin{cases} \frac{\widetilde w_{ji}}{4\mu}, & i \neq j, \\ 0, & i=j, \end{cases} \label{eq:wtildetow} \end{equation} we see that DGD+LOCAL~\eqref{eq:DGDtemplate} is equivalent to applying standard gradient descent (with stepsize $\mu$) to the problem \begin{equation}\begin{split} &\minimize_{\vct{z}} g(\vct{z}) = \sum_{j=1}^J \left(f_j(\vct{x}^j,\vct{y}_j) + \sum_{i=1}^J w_{ji} \|\vct{x}^j - \vct{x}^i\|_2^2\right), \end{split}\label{eq:DGD problem}\end{equation} where $\vct{z} = (\vct{x}^1,\ldots,\vct{x}^J,\vct{y}_1,\ldots,\vct{y}_J)$ and $\mtx{W} = \{w_{ji}\}$ is a $J \times J$ connectivity matrix with nonnegative entries defined in~\eqref{eq:wtildetow} and zeros on the diagonal.
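The algebra in this subsection can be sanity-checked numerically. The snippet below (a sketch with illustrative quadratic $f_j$ and random data, not objects from the paper) verifies that one DGD+LOCAL $\vct{x}$-update coincides with one gradient step on $g$ when the weights are rescaled as in~\eqref{eq:wtildetow}.

```python
import numpy as np

# Numerical check that one DGD+LOCAL x-update equals one gradient step on the
# penalized objective g in (eq:DGD problem) with w_ji = w_tilde_ji / (4 mu).
# The quadratic f_j(x) = ||x - a_j||^2 and the data are illustrative choices.
rng = np.random.default_rng(2)
J, d, mu = 3, 4, 0.05
a = [rng.standard_normal(d) for _ in range(J)]
x = [rng.standard_normal(d) for _ in range(J)]

W_tilde = np.full((J, J), 0.25)
np.fill_diagonal(W_tilde, 0.5)          # rows sum to 1
W = W_tilde / (4 * mu)
np.fill_diagonal(W, 0.0)                # w_jj = 0

# DGD+LOCAL step: x^j <- sum_i w_tilde_ji x^i - mu * grad f_j(x^j)
dgd_step = [sum(W_tilde[j, i] * x[i] for i in range(J)) - mu * 2 * (x[j] - a[j])
            for j in range(J)]

# Gradient step on g(z) = sum_j ( f_j(x^j) + sum_i w_ji ||x^j - x^i||^2 ):
# since w is symmetric, grad_{x^j} g = grad f_j(x^j) + 4 sum_i w_ji (x^j - x^i)
gd_step = [x[j] - mu * (2 * (x[j] - a[j])
                        + 4 * sum(W[j, i] * (x[j] - x[i]) for i in range(J)))
           for j in range(J)]

for j in range(J):
    assert np.allclose(dgd_step[j], gd_step[j])
```

The factor of $4$ in the gradient of $g$ arises because each pair $(i,j)$ contributes the penalty twice (once in row $j$, once in row $i$), and differentiating each quadratic contributes another factor of $2$.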
\subsection{Algorithmic Analysis} \label{sec:dgdalg} We are interested in understanding the convergence of the gradient descent algorithm when it is applied to minimizing $g(\vct{z})$ in~\eqref{eq:DGD problem}; as we have argued in Section~\ref{sec:dgdgd}, this is equivalent to running the DGD+LOCAL algorithm~\eqref{eq:DGDtemplate} to minimize the objective function $f(\vct{x},\vct{y})$ in~\eqref{eq:centeralized problem}. Under certain conditions, we can guarantee that gradient descent will converge to a second-order critical point of the objective function $g(\vct{z})$ in~\eqref{eq:DGD problem}. The proof relies on certain properties of the functions $f_j$ comprising~\eqref{eq:centeralized problem}. We first describe these properties before providing the convergence result. \subsubsection{Objective Function Properties and Convergence of Gradient Descent} The first property concerns the assumption that each $f_j$ comprising~\eqref{eq:centeralized problem} has Lipschitz gradient. In this case we can also argue that $g$ in~\eqref{eq:DGD problem} has Lipschitz gradient. \begin{prop} Let $f(\vct{x},\vct{y}) = \sum_{j=1}^J f_j(\vct{x},\vct{y}_j)$ be an objective function as in~\eqref{eq:centeralized problem} and let $g(\vct{z})$ be as in \eqref{eq:DGD problem} with $\vct{z} = (\vct{x}^1,\ldots,\vct{x}^J,\vct{y}_1,\ldots,\vct{y}_J)$. Suppose that each $f_j$ has Lipschitz gradient, i.e., $\nabla f_j$ is Lipschitz continuous with constant $L_j>0$. Then $\nabla g$ is Lipschitz continuous with constant \[ L_{g} = L + \frac{2\omega}{\mu}, \] where $L := \max_j L_j$, $ \omega :=\sum_{i\neq j }^J \widetilde w_{ji}$, and $\widetilde w_{ji}$ and $\mu$ are the DGD+LOCAL weights and stepsize as in~\eqref{eq:DGDtemplate}. \label{prop:lip} \end{prop} Proposition~\ref{prop:lip} is proved in Appendix~\ref{sec:prooflip}. The second property concerns the following {\L}ojasiewicz inequality, which arises in the convergence analysis of gradient descent. 
\begin{defi}\cite{attouch2009convergence}\label{def:KL} Assume that $h:\mathbb{R}^n \rightarrow \mathbb{R}$ is continuously differentiable. Then $h$ is said to satisfy the {\L}ojasiewicz inequality if for any critical point $\overline{\vct{x}}$ of $h(\vct{x})$, there exist $\delta>0,~\theta\in[0,1),~C_1>0$ such that \[ \left|h(\vct{x}) - h(\overline{\vct{x}})\right|^{\theta} \leq C_1 \|\nabla h(\vct{x})\|,~~\forall~\vct{x}\in B(\overline{\vct{x}}, \delta). \] Here $\theta$ is often referred to as the KL exponent. \end{defi} This {\L}ojasiewicz inequality (or the more general Kurdyka--{\L}ojasiewicz (KL) inequality for nonsmooth problems) characterizes the local geometric properties of the objective function around its critical points and has proved useful for convergence analysis \cite{attouch2009convergence,bolte2014proximal}. The {\L}ojasiewicz inequality (or KL inequality) is very general and holds for most problems encountered in engineering. For example, every analytic function satisfies the {\L}ojasiewicz inequality, though each function may have a different {\L}ojasiewicz exponent $\theta$, which determines the convergence rate; see \cite{attouch2009convergence,bolte2014proximal} for details. A general result on convergence of gradient descent to a first-order critical point for a function satisfying the {\L}ojasiewicz inequality is as follows.\footnote{The result in~\cite{attouch2009convergence} is stated for the proximal method, but the result can be extended to gradient descent as long as $\mu<\frac{1}{L}$.} \begin{thm}\cite{attouch2009convergence} Suppose $\inf_{\mathbb{R}^n} h>-\infty$ and $h$ satisfies the {\L}ojasiewicz inequality. Also assume $\nabla h$ is Lipschitz continuous with constant $L>0$. Let $\{\vct{x}(k)\}$ be the sequence generated by gradient descent $\vct{x}({k+1}) = \vct{x}(k) - \mu \nabla h(\vct{x}(k))$ with $\mu<\frac{1}{L}$. Then if the sequence $\{\vct{x}(k)\}$ is bounded, it converges to a critical point of $h$.
\label{thm:convergence gd with KL} \end{thm} The following result further characterizes the convergence behavior of gradient descent to a second-order critical point. \begin{thm}\cite{lee2016gradient} Suppose $h$ is a twice-continuously differentiable function and $\nabla h$ is Lipschitz continuous with constant $L>0$. Let $\{\vct{x}(k)\}$ be the sequence generated by gradient descent $\vct{x}({k+1}) = \vct{x}(k) - \mu \nabla h(\vct{x}(k))$ with $\mu<\frac{1}{L}$. Suppose $\vct{x}(0)$ is chosen randomly from a probability distribution supported on a set $S$ having positive measure. Then the sequence $\{\vct{x}(k)\}$ almost surely avoids strict saddles, where the Hessian has at least one negative eigenvalue. \label{thm:avoid saddle gd} \end{thm} Theorems~\ref{thm:convergence gd with KL} and~\ref{thm:avoid saddle gd} apply for functions $h$ that globally satisfy the {\L}ojasiewicz and Lipschitz gradient conditions. In some problems, however, one or both of these properties may be satisfied only locally. Nevertheless, under an assumption of bounded iterations---as is already made in Theorem~\ref{thm:convergence gd with KL}---it is possible to extend the first- and second-order convergence results to such functions. For example, one can extend Theorem~\ref{thm:convergence gd with KL} as follows by noting that the original derivation in~\cite{attouch2009convergence} used the {\L}ojasiewicz property only locally around limit points of the sequence $\{\vct{x}(k)\}$. \begin{thm}\cite{attouch2009convergence} Suppose $\inf_{\mathbb{R}^n} h>-\infty$. For $\rho > 0$, let $B_\rho$ denote the open ball of radius $\rho$: \[ B_\rho := \{ \vct{x}: ~ \|\vct{x}\|_2 < \rho \}, \] and suppose $h$ satisfies the {\L}ojasiewicz inequality at all points $\vct{x} \in B_\rho$. Also assume $\nabla h$ is Lipschitz continuous with constant $L>0$. Let $\{\vct{x}(k)\}$ be the sequence generated by gradient descent $\vct{x}({k+1}) = \vct{x}(k) - \mu \nabla h(\vct{x}(k))$ with $\mu<\frac{1}{L}$. 
Suppose $\{\vct{x}(k)\} \subseteq B_\rho$ and all limit points of $\{\vct{x}(k)\}$ are in $B_\rho$. Then the sequence $\{\vct{x}(k)\}$ converges to a critical point of $h$. \label{thm:convergence gd with KL in ball} \end{thm} The following result establishes second-order convergence for a function with a locally Lipschitz gradient. \begin{thm} Let $\rho > 0$, and consider an objective function $h$ where: \begin{enumerate} \item $\inf_{\mathbb{R}^n} h>-\infty$, \item $h$ satisfies the {\L}ojasiewicz inequality within $B_\rho$, \item $h$ is twice-continuously differentiable, and \item $\left| h\left(\vct{x}\right)\right| \leq L_0$, $\left\Vert \nabla h\left(\vct{x}\right)\right\Vert \leq L_1$, and $\left\Vert \nabla^{2}h(\vct{x}) \right\Vert_{2}\leq L_2$ for all $\vct{x} \in B_{2\rho}$. \end{enumerate} Suppose the gradient descent stepsize \begin{equation} \mu < \frac{1}{L_{2}+\frac{4L_{1}}{\rho}+\frac{\left(2+2\pi\right)L_{0}}{\rho^{2}}}. \label{eq:steptilde} \end{equation} Suppose $\vct{x}(0)$ is chosen randomly from a probability distribution supported on a set $S \subseteq B_\rho$ with $S$ having positive measure, and suppose that under such random initialization, there is a positive probability that the sequence $\{\vct{x}(k)\}$ remains bounded in $B_\rho$ and all limit points of $\{\vct{x}(k)\}$ are in $B_\rho$. Then conditioned on observing that $\{\vct{x}(k)\} \subseteq B_\rho$ and all limit points of $\{\vct{x}(k)\}$ are in $B_\rho$, gradient descent converges to a critical point of $h$, and the probability that this critical point is a strict saddle point is zero. \label{thm:boundedgd} \end{thm} Theorem~\ref{thm:boundedgd} is proved in Appendix~\ref{sec:proofboundedgd}. 
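To make the preceding guarantees concrete, the following minimal sketch (not taken from the analysis above; the function, stepsize, and initialization are illustrative assumptions) runs gradient descent on $h(x,y) = (x^2-1)^2 + y^2$, which is analytic (hence satisfies the {\L}ojasiewicz inequality), has a strict saddle at the origin, and has second-order critical points at $(\pm 1, 0)$:

```python
import numpy as np

# Illustrative example (not from the paper): h(x, y) = (x^2 - 1)^2 + y^2
# has a strict saddle at (0, 0) and second-order critical points (minima)
# at (+1, 0) and (-1, 0).
def grad_h(z):
    x, y = z
    return np.array([4.0 * x * (x ** 2 - 1.0), 2.0 * y])

rng = np.random.default_rng(0)
z = rng.uniform(-0.5, 0.5, size=2)   # random init from a positive-measure set
mu = 0.05                            # stepsize below 1/L on the region visited
for _ in range(2000):
    z = z - mu * grad_h(z)

# With random initialization the iterates almost surely avoid the saddle
# and settle at one of the two minima (+1, 0) or (-1, 0).
print(np.round(z, 4))
```

Which of the two minima is reached depends on the sign of the initial $x$-coordinate, consistent with the almost-sure avoidance of the strict saddle at the origin.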
\subsubsection{Convergence Analysis of DGD+LOCAL} As described in the following theorem, under certain conditions, we can guarantee that the DGD+LOCAL algorithm~\eqref{eq:DGDtemplate} (which is equivalent to gradient descent applied to minimizing $g(\vct{z})$ in~\eqref{eq:DGD problem}) will converge to a second-order critical point of the objective function $g(\vct{z})$. \begin{thm} Let $f(\vct{x},\vct{y}) = \sum_{j=1}^J f_j(\vct{x},\vct{y}_j)$ be an objective function as in~\eqref{eq:centeralized problem} and let $g(\vct{z})$ be as in \eqref{eq:DGD problem} with $\vct{z} = (\vct{x}^1,\ldots,\vct{x}^J,\vct{y}_1,\ldots,\vct{y}_J)$. Suppose each $f_j$ satisfies $\inf_{\mathbb{R}^n} f_j>-\infty$, is twice-continuously differentiable, and has a Lipschitz gradient, i.e., $\nabla f_j$ is Lipschitz continuous with constant $L_j>0$. Suppose $g$ satisfies the {\L}ojasiewicz inequality. Let $L := \max_j L_j$, and let $\widetilde w_{ji}$ and $\mu$ be the DGD+LOCAL weights and stepsize as in~\eqref{eq:DGDtemplate}. Assume $\omega := \max_{j}\sum_{i\neq j} \widetilde w_{ji}<\frac{1}{2}$. Let $\{ \vct{z}(k) \}$ be the sequence generated by the DGD+LOCAL algorithm in \eqref{eq:DGDtemplate} with \begin{align} \mu < \frac{1 - 2\omega}{L} \label{eq:stepsize requirement} \end{align} and with random initialization from a probability distribution supported on a set $S$ having positive measure. Then if the sequence $\{\vct{z}(k)\}$ is bounded, it almost surely converges to a second-order critical point of the objective function in \eqref{eq:DGD problem}. \label{thm:dgdconvergegeneric} \end{thm} \begin{proof} Recall that running the DGD+LOCAL algorithm~\eqref{eq:DGDtemplate} to minimize the objective function $f(\vct{x},\vct{y})$ in~\eqref{eq:centeralized problem} is equivalent to running gradient descent on $g(\vct{z})$ in~\eqref{eq:DGD problem}. The proof is completed by invoking \Cref{thm:convergence gd with KL} and \Cref{thm:avoid saddle gd} with $h$ replaced by $g$.
From Proposition~\ref{prop:lip}, we have that $\nabla g$ is Lipschitz continuous with constant $L_g = L + \frac{2\omega}{\mu}$, and so choosing $\mu$ to satisfy~\eqref{eq:stepsize requirement} ensures that $\mu < \frac{1}{L_g}$ as required in \Cref{thm:convergence gd with KL} and \Cref{thm:avoid saddle gd}. \end{proof} \begin{remark} The requirement that the DGD+LOCAL stepsize satisfy $\mu = O(\frac{1}{L})$ also appears in the convergence analysis of DGD in \cite{yuan2016convergence,zeng2018nonconvex}. \end{remark} \begin{remark} The function $g$ is guaranteed to satisfy the {\L}ojasiewicz inequality, for example, if every $f_j$ is semi-algebraic, because this implies that $g$ is semi-algebraic, and every semi-algebraic function satisfies the {\L}ojasiewicz inequality. \end{remark} \begin{remark} In order to satisfy~\eqref{eq:stepsize requirement}, it must hold that $\omega < \frac{1}{2}$. In the case where the DGD+LOCAL weight matrix $\widetilde\mtx{W}$ is symmetric and doubly stochastic (i.e., $\widetilde\mtx{W}$ has nonnegative entries and each of its rows and columns sums to $1$), this condition is equivalent to requiring that each diagonal element of $\widetilde\mtx{W}$ be larger than $\frac{1}{2}$. Given any symmetric and doubly stochastic matrix $\widetilde\mtx{W}$, one can design a new weight matrix $(\widetilde\mtx{W} + {\bf I})/2$ that satisfies this requirement. This strategy is also mentioned at the end of \cite[Section 2.1]{yuan2016convergence}. \end{remark} We also have the following DGD+LOCAL convergence result when the functions $f_j$ have only a locally Lipschitz gradient. \begin{thm} Let $f(\vct{x},\vct{y}) = \sum_{j=1}^J f_j(\vct{x},\vct{y}_j)$ be an objective function as in~\eqref{eq:centeralized problem} and let $g(\vct{z})$ be as in \eqref{eq:DGD problem} with $\vct{z} = (\vct{x}^1,\ldots,\vct{x}^J,\vct{y}_1,\ldots,\vct{y}_J)$.
Let $\rho > 0$ and suppose each $f_j$ satisfies \begin{enumerate} \item $\inf_{\mathbb{R}^n} f_j>-\infty$, \item $f_j$ is twice-continuously differentiable, and \item $\left| f_j\left(\vct{x},\vct{y}_j\right)\right| \leq L_{0,j}$, $\left\Vert \nabla f_j\left(\vct{x},\vct{y}_j\right)\right\Vert \leq L_{1,j}$, and $\left\Vert \nabla^{2}f_j(\vct{x},\vct{y}) \right\Vert_{2}\leq L_{2,j}$ for all $(\vct{x},\vct{y}_j) \in B_{2\rho}$. \end{enumerate} Suppose also that $g$ satisfies the {\L}ojasiewicz inequality within $B_\rho$. Let $\widetilde w_{ji}$ and $\mu$ be the DGD+LOCAL weights and stepsize as in~\eqref{eq:DGDtemplate}. Assume $\omega := \max_{j}\sum_{i\neq j} \widetilde w_{ji}< \frac{1}{2}$. Let $\{ \vct{z}(k) \}$ be the sequence generated by the DGD+LOCAL algorithm in \eqref{eq:DGDtemplate} with \begin{equation} \mu < \frac{1-2\omega}{\max_j L_{2,j}+\frac{4L_{1,j}}{\rho}+\frac{\left(2+2\pi\right)L_{0,j}}{\rho^{2}}}. \label{eq:stepsize requirementballfj} \end{equation} Suppose $\vct{z}(0)$ is chosen randomly from a probability distribution supported on a set $S \subseteq B_\rho$ with $S$ having positive measure, and suppose that under such random initialization, there is a positive probability that the sequence $\{\vct{z}(k)\}$ remains bounded in $B_\rho$ and all limit points of $\{\vct{z}(k)\}$ are in $B_\rho$. Then conditioned on observing that $\{\vct{z}(k)\} \subseteq B_\rho$ and all limit points of $\{\vct{z}(k)\}$ are in $B_\rho$, DGD+LOCAL converges to a critical point of the objective function in \eqref{eq:DGD problem}, and the probability that this critical point is a strict saddle point is zero. \label{thm:dgdconvergegenericballfj} \end{thm} \begin{proof} Recall that running the DGD+LOCAL algorithm~\eqref{eq:DGDtemplate} to minimize the objective function $f(\vct{x},\vct{y})$ in~\eqref{eq:centeralized problem} is equivalent to running gradient descent on $g(\vct{z})$ in~\eqref{eq:DGD problem}. 
Similar to the approach taken in proving Theorem~\ref{thm:boundedgd}, to deal with the local Lipschitz condition, the proof involves constructing a function $\widetilde{g}$ such that $\widetilde{g}(\vct{z}) = g(\vct{z})$ for all $\vct{z} \in B_{\rho}$ but where $\widetilde{g}$ has a globally Lipschitz gradient. To do this, recall the window function $w$ defined in Appendix~\ref{sec:proofboundedgd}. Now, recall that \[ g(\vct{z}) = \sum_{j=1}^J \left(f_j(\vct{x}^j,\vct{y}_j) + \sum_{i=1}^J w_{ji} \|\vct{x}^j - \vct{x}^i\|_2^2\right) \] and define \begin{equation} \widetilde{g}\left(\vct{z}\right) = \sum_{j=1}^J \left(\widetilde{f}_j(\vct{x}^j,\vct{y}_j) + \sum_{i=1}^J w_{ji} \|\vct{x}^j - \vct{x}^i\|_2^2\right), \label{eq:gtilde} \end{equation} where \[ \widetilde{f}_j(\vct{x}^j,\vct{y}_j) = f_j(\vct{x}^j,\vct{y}_j) w(\begin{bmatrix} (\vct{x}^j)^\mathrm{T} & \vct{y}_j^\mathrm{T} \end{bmatrix}^\mathrm{T}). \] Since $\widetilde{f}_j(\vct{x}^j,\vct{y}_j) = f_j(\vct{x}^j,\vct{y}_j)$ for $(\vct{x}^j,\vct{y}_j) \in B_\rho$, we have that $\widetilde{g}\left(\vct{z}\right) = g(\vct{z})$ for all $\vct{z} \in B_\rho$. We have the following properties for $\widetilde{g}$: \begin{itemize} \item Since $g = \widetilde{g}$ in $B_\rho$, $\widetilde{g}$ satisfies the {\L}ojasiewicz inequality in $B_\rho$. \item Since $f_j \in C^2$ for all $j$ and $w \in C^2$, $\widetilde{g} \in C^2$. \item Since $\inf_{\mathbb{R}^n} f_j>-\infty$ for all $j$ and $\inf_{\mathbb{R}^n} w>-\infty$, $\inf_{\mathbb{R}^n} \widetilde{g}>-\infty$. 
\item To globally bound the Lipschitz constant of the gradient of $\widetilde{g}$, note that \begin{eqnarray*} \left\Vert \nabla^{2}\widetilde{f}_j\right\Vert & = & \left\Vert w\cdot\nabla^{2}f_j+\nabla f_j \cdot \left(\nabla w\right)^{\mathrm{T}}+\nabla w \cdot \left(\nabla f_j\right)^{\mathrm{T}}+f_j\cdot\nabla^{2}w\right\Vert \\ & \leq & \left| w\right| \left\Vert \nabla^{2}f_j\right\Vert +2\left\Vert \nabla w\right\Vert \left\Vert \nabla f_j\right\Vert +\left| f_j\right| \left\Vert \nabla^{2}w\right\Vert \\ & \leq & L_{2,j}+\frac{4L_{1,j}}{\rho}+\frac{\left(2+2\pi\right)L_{0,j}}{\rho^{2}} \quad \text{for all}~(\vct{x}^j,\vct{y}_j). \end{eqnarray*} Therefore, given the form of $\widetilde{g}$ in~\eqref{eq:gtilde}, we can conclude from Proposition~\ref{prop:lip} that globally, $\nabla \widetilde{g}$ is Lipschitz continuous with constant \[ L_{\widetilde{g}} = \left( \max_j L_{2,j}+\frac{4L_{1,j}}{\rho}+\frac{\left(2+2\pi\right)L_{0,j}}{\rho^{2}} \right) + \frac{2\omega}{\mu}. \] \end{itemize} Now consider the gradient descent algorithm with stepsize $\mu$ satisfying~\eqref{eq:stepsize requirementballfj}. Define \begin{align*} T_g = \{\vct{z}(0) \in B_\rho:&~\text{all}~\{\vct{z}(k)\} \subseteq B_\rho~\text{and all limit points of}~\{\vct{z}(k)\}~\text{are in}~B_\rho\\ &~\text{when gradient descent is run on}~g~\text{starting at}~\vct{z}(0)\} \end{align*} and \begin{align*} T_{\widetilde{g}} = \{\vct{z}(0) \in B_\rho:&~\text{all}~\{\vct{z}(k)\} \subseteq B_\rho~\text{and all limit points of}~\{\vct{z}(k)\}~\text{are in}~B_\rho\\ &~\text{when gradient descent is run on}~\widetilde{g}~\text{starting at}~\vct{z}(0)\}. 
\end{align*} Similarly, define \[ \Sigma_g = \{\vct{z}(0) \in B_\rho:~\{\vct{z}(k)\}~\text{converges to a strict saddle when gradient descent is run on}~g~\text{starting at}~\vct{z}(0)\} \] and \[ \Sigma_{\widetilde{g}} = \{\vct{z}(0) \in B_\rho:~\{\vct{z}(k)\}~\text{converges to a strict saddle when gradient descent is run on}~\widetilde{g}~\text{starting at}~\vct{z}(0)\}. \] Using the above properties, we see that Theorem~\ref{thm:avoid saddle gd} can be applied to $\widetilde{g}$, and so we conclude that $\Sigma_{\widetilde{g}}$ has measure zero. Now, after running gradient descent on $g$ from a random initialization as in the theorem statement, condition on observing that $\{\vct{z}(k)\} \subseteq B_\rho$ and all limit points of $\{\vct{z}(k)\}$ are in $B_\rho$, i.e., that $\vct{z}(0) \in T_g$. Because $\{\vct{z}(k)\} \subseteq B_\rho$ and all limit points of $\{\vct{z}(k)\}$ are in $B_\rho$, and because $\{\vct{z}(k)\}$ matches the sequence that would be obtained by running gradient descent on $\widetilde{g}$, we can apply Theorem~\ref{thm:convergence gd with KL in ball} to conclude that $\{\vct{z}(k)\}$ converges to a critical point of $\widetilde{g}$, and since this critical point belongs to $B_\rho$ and $\widetilde{g} = g$ inside $B_\rho$, we conclude that this is also a critical point of $g$. Finally, using the definition of conditional probability, we have \begin{align*} P(\vct{z}(0) \in \Sigma_g | \vct{z}(0) \in T_g) &= \frac{P( \vct{z}(0) \in \Sigma_g \cap T_g )}{P(\vct{z}(0) \in T_g)} \\ &= \frac{P( \vct{z}(0) \in \Sigma_{\widetilde{g}} \cap T_{\widetilde{g}} )}{P(\vct{z}(0) \in T_g)}, \end{align*} where the second equality follows from the fact that $\widetilde{g} = g$ inside $B_\rho$: if a sequence of iterations stays bounded inside $B_\rho$ and converges to a strict saddle when gradient descent is run on $g$, the same will hold when gradient descent is run on $\widetilde{g}$, and vice versa. 
Since $\Sigma_{\widetilde{g}}$ has zero measure and because $\vct{z}(0)$ is chosen randomly from a probability distribution supported on a set $S \subseteq B_\rho$ with $S$ having positive measure, $P( \vct{z}(0) \in \Sigma_{\widetilde{g}} \cap T_{\widetilde{g}} ) = 0$. Also, by assumption, $P(\vct{z}(0) \in T_g) > 0$. Therefore, $P(\vct{z}(0) \in \Sigma_g | \vct{z}(0) \in T_g) = \frac{0}{\text{nonzero}} = 0$. \end{proof} \subsection{Geometric Analysis} Section~\ref{sec:dgdalg} establishes that, under certain conditions, DGD+LOCAL will converge to a second-order critical point of the objective function $g(\vct{z})$ in~\eqref{eq:DGD problem}. In this section, we are interested in studying the geometric landscape of the distributed objective function in~\eqref{eq:DGD problem} and comparing it to the geometric landscape of the original centralized objective function in~\eqref{eq:centeralized problem}. In particular, we would like to understand how the critical points of $g(\vct{z})$ in~\eqref{eq:DGD problem} are related to the critical points of $f(\vct{x},\vct{y})$ in~\eqref{eq:centeralized problem}. These problems differ in two important respects: \begin{itemize} \item The objective function in~\eqref{eq:DGD problem} involves more optimization variables than that in~\eqref{eq:centeralized problem}. Thus, the optimization takes place in a higher-dimensional space and there is the potential for new features to be introduced into the geometric landscape. \item The objective function in~\eqref{eq:DGD problem} involves a quadratic regularization term that will promote consensus among the variables $\vct{x}^1,\ldots,\vct{x}^J$. This term is absent from~\eqref{eq:centeralized problem}. However, along the {\em consensus subspace} where $\vct{x}^1 = \cdots = \vct{x}^J$, this regularizer will be zero and the objective functions will coincide.
\end{itemize} Despite these differences, we characterize below some ways in which the geometric landscapes of the two problems may be viewed as equivalent. These results may have independent interest from the specific DGD+LOCAL convergence analysis in Section~\ref{sec:dgdalg}. Our first result establishes that if the sub-objective functions $f_j$ satisfy certain properties, the formulation \eqref{eq:DGD problem} does not introduce any new global minima outside of the consensus subspace. \begin{prop}\label{prop:min f = min g for DGD} Let $f(\vct{x},\vct{y}) = \sum_{j=1}^J f_j(\vct{x},\vct{y}_j)$ be as in~\eqref{eq:centeralized problem}. Suppose the topology defined by $\mtx{W}$ is connected. Also suppose there exist $\vct{x}^\star$ (which is independent of $j$) and $\vct{y}_j^\star, j\in[J]$ such that \begin{align} (\vct{x}^\star,\vct{y}_j^\star) \in\argmin_{\vct{x},\vct{y}_j}f_j(\vct{x},\vct{y}_j), \ \forall \ j\in [J]. \label{eq:condition on f = g}\end{align} Then $g(\vct{z})$ defined in \eqref{eq:DGD problem} satisfies \[ \min_{\vct{z}} g(\vct{z}) = \min_{\vct{x},\vct{y}} f(\vct{x},\vct{y}), \] and $g(\vct{z})$ achieves its global minimum only for $\vct{z}$ with $\vct{x}^1 = \cdots = \vct{x}^J$. \end{prop} Proposition~\ref{prop:min f = min g for DGD} is proved in Appendix~\ref{sec:prooffgdgd}. We note that the assumption in Proposition~\ref{prop:min f = min g for DGD} is fairly strong, and while there are problems where it can hold, there are also many problems where it will not hold. Proposition~\ref{prop:min f = min g for DGD} establishes that, in certain cases, there will exist no global minimizers of the distributed objective function $g(\vct{z})$ that fall outside of the consensus subspace. (Moreover, and also importantly, there will {\em exist} a global minimizer {\em on} the consensus subspace.) 
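As a sanity check on Proposition~\ref{prop:min f = min g for DGD}, the following sketch (the quadratic sub-objectives and the single consensus weight are illustrative assumptions, not taken from the paper) builds a two-node instance whose $f_j$ share a minimizer $\vct{x}^\star$ and confirms that $g$ attains the same minimum value as $f$, and that leaving the consensus subspace incurs a strictly positive penalty:

```python
# Toy check of the shared-minimizer condition, with J = 2 nodes,
# f_j(x, y_j) = (x - 1)^2 + (y_j - j)^2 (so x* = 1 works for every j),
# and a single symmetric consensus weight w (all values assumed).
w = 0.3

def f(x, y):  # centralized objective: sum_j f_j(x, y_j)
    return sum((x - 1.0) ** 2 + (y[j] - (j + 1)) ** 2 for j in range(2))

def g(x_loc, y):  # distributed objective with consensus penalty
    penalty = 2 * w * (x_loc[0] - x_loc[1]) ** 2  # w_12 = w_21 = w
    return sum((x_loc[j] - 1.0) ** 2 + (y[j] - (j + 1)) ** 2
               for j in range(2)) + penalty

y_star = [1.0, 2.0]
print(g([1.0, 1.0], y_star), f(1.0, y_star))   # prints 0.0 0.0
print(g([1.0, 1.2], y_star) > f(1.0, y_star))  # prints True
```

Both objectives reach their common minimum value of zero only at the consensus point $x^1 = x^2 = x^\star$, matching the proposition's conclusion in this toy instance.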
Also relevant is the question of whether there may exist any {\em other} types of critical points (such as local minima or saddle points) outside of the consensus subspace. Under certain conditions, the following proposition ensures that the answer is no. \begin{prop} Let $f(\vct{x},\vct{y})$ be as in~\eqref{eq:centeralized problem} and $g(\vct{z})$ be as in \eqref{eq:DGD problem} with $\vct{z} = (\vct{x}^1,\ldots,\vct{x}^J,\vct{y}_1,\ldots,\vct{y}_J)$. Suppose the matrix $\mtx{W}$ is connected and symmetric. Also suppose the gradient of $f_j$ satisfies the following symmetric property: \begin{align} \langle \nabla_{\vct{x}} f_j (\vct{x},\vct{y}_j), \vct{x}\rangle = \langle \nabla_{\vct{y}_j} f_j (\vct{x},\vct{y}_j), \vct{y}_j\rangle \label{eq:symmetric property for gradientdgd}\end{align} for all $j\in[J]$. Then, any critical point of $g$ must satisfy $\vct{x}^1 = \cdots = \vct{x}^J $. \label{prop:nospuriouscritical for DGD} \end{prop} Proposition~\ref{prop:nospuriouscritical for DGD} is proved in Appendix~\ref{sec:proofnscdgd}. Finally, we can also make a statement about the behavior of critical points that do fall on the consensus subspace. \begin{thm} Let $\set{C}_f$ denote the set of critical points of \eqref{eq:centeralized problem}: \begin{align*} \set{C}_f: = \left\{\vct{x},\vct{y}: \nabla f(\vct{x},\vct{y}) = \vct{0} \right\}, \end{align*} and let $\set{C}_g$ denote the set of critical points of \eqref{eq:DGD problem}: \begin{align*} \set{C}_g: = \bigg\{\vct{z}: \nabla g(\vct{z}) = \vct{0}\bigg\}. \end{align*} Then, for any $\vct{z} = (\vct{x}^1,\ldots,\vct{x}^J,\vct{y})\in\set{C}_g$ with $\vct{x}^1 = \cdots = \vct{x}^J = \vct{x}$, we have $(\vct{x},\vct{y})\in \set{C}_f$. Furthermore, if $(\vct{x},\vct{y})$ is a strict saddle of $f$, then $\vct{z} = (\vct{x},\dots,\vct{x},\vct{y})$ is also a strict saddle of $g$. 
\label{thm:strict saddle preserved for DGD}\end{thm} The proof of Theorem~\ref{thm:strict saddle preserved for DGD} is in Appendix~\ref{sec:proofsspdgd}. \section{Analysis of Distributed Matrix Factorization} \label{sec:theorypca} We now consider the prototypical low-rank matrix approximation problem in factored form, where given a data matrix $\mtx{Y} \in\mathbb{R}^{n\times m}$, we seek to solve \begin{align} \minimize_{\mtx{U} \in \mathbb{R}^{n\times r},\mtx{V} \in \mathbb{R}^{m\times r}} \| \mtx{U}\mtx{V}^\mathrm{T} - \mtx{Y} \|_F^2. \label{eq:rankrBM} \end{align} Here $\mtx{U} \in \mathbb{R}^{n\times r}$ and $\mtx{V} \in \mathbb{R}^{m\times r}$ are tall matrices, and $r$ is chosen in advance to allow for a suitable approximation of $\mtx{Y}$. In some of our results below, we will assume that the data matrix $\mtx{Y}$ has rank at most~$r$. One can solve problem~\eqref{eq:rankrBM} using local search algorithms such as gradient descent. Such algorithms do not require expensive SVDs, and the storage complexity for $\mtx{U}$ and $\mtx{V}$ scales with $(n+m)r$, which is smaller than the $nm$ required to store $\mtx{Y}$. Unfortunately, problem~\eqref{eq:rankrBM} is nonconvex in the optimization variables $(\mtx{U},\mtx{V})$. Thus, the question arises of whether local search algorithms such as gradient descent actually converge to a global minimizer of~\eqref{eq:rankrBM}. Using geometric analysis of the critical points of problem~\eqref{eq:rankrBM}, however, it is possible to prove convergence to a global minimizer. In Appendix~\ref{sec:geometry for pca}, building on analysis in~\cite{nouiehed2018learning}, we prove the following result about the favorable geometry of the nonconvex problem~\eqref{eq:rankrBM}. \begin{thm} For any data matrix $\mtx{Y}$, every critical point (i.e., every point where the gradient is zero) of problem \eqref{eq:rankrBM} is either a global minimum or a strict saddle point, where the Hessian has at least one negative eigenvalue.
\label{thm:centerss} \end{thm} Such favorable geometry has been used in the literature to show that local search algorithms (particularly gradient descent with random initialization~\cite{lee2016gradient}) will converge to a global minimum of the objective function. \subsection{Distributed Problem Formulation} We are interested in generalizing the matrix approximation problem from centralized to distributed scenarios. To be specific, suppose the columns of the data matrix $\mtx{Y}$ are distributed among $J$ nodes/sensors. Without loss of generality, partition the columns of $\mtx{Y}$ as \begin{align*} \mtx{Y} = \begin{bmatrix} \mtx{Y}_1 & \mtx{Y}_2 & \cdots & \mtx{Y}_J \end{bmatrix}, \end{align*} where for $j \in \{1,2,\dots,J\}$, matrix $\mtx{Y}_j$ (which is stored at node $j$) has size $n \times m_j$, and where $m = \sum_{j=1}^J m_j$. Partitioning $\mtx{V}$ similarly as \begin{align} \mtx{V} = \begin{bmatrix}\mtx{V}_1^\mathrm{T} & \cdots & \mtx{V}_J^\mathrm{T}\end{bmatrix}^\mathrm{T}, \label{eq:vpart} \end{align} where $\mtx{V}_j$ has size $m_j \times r$, we obtain the following optimization problem \begin{align} \minimize_{\mtx{U},\mtx{V}_1,\dots,\mtx{V}_J} \sum_{j=1}^J \| \mtx{U}\mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2, \label{eq:rankrBMconsensus} \end{align} which is exactly equivalent to~\eqref{eq:rankrBM}. Problem~\eqref{eq:rankrBMconsensus}, in turn, can be written in the form of problem~\eqref{eq:centeralized problem} by taking \begin{align} \vct{x} = \text{vec}(\mtx{U}), ~~ \vct{y}_j = \text{vec}(\mtx{V}_j), ~~ \text{and}~ f_j(\vct{x},\vct{y}_j) = \| \mtx{U}\mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2. \label{eq:xyfj} \end{align} Consequently, we can use the analysis from Section~\ref{sec:theoryunc} to study the performance of DGD+LOCAL~\eqref{eq:DGDtemplate} when applied to problem~\eqref{eq:rankrBMconsensus}. 
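The equivalence between~\eqref{eq:rankrBM} and~\eqref{eq:rankrBMconsensus} rests on the squared Frobenius norm decomposing over column blocks. The following small numerical check (the matrix sizes and the random data are illustrative assumptions) verifies this decomposition:

```python
import numpy as np

# Numerical check (sizes illustrative): partitioning the columns of Y and
# the rows of V as in the text leaves the objective value unchanged,
# because the squared Frobenius norm splits over column blocks.
rng = np.random.default_rng(1)
n, m, r, J = 6, 9, 2, 3
Y = rng.standard_normal((n, m))
U = rng.standard_normal((n, r))
V = rng.standard_normal((m, r))

cols = np.array_split(np.arange(m), J)        # columns held by each node j
Y_parts = [Y[:, c] for c in cols]
V_parts = [V[c, :] for c in cols]             # matching row blocks of V

central = np.linalg.norm(U @ V.T - Y, 'fro') ** 2
distributed = sum(np.linalg.norm(U @ Vj.T - Yj, 'fro') ** 2
                  for Yj, Vj in zip(Y_parts, V_parts))
print(np.isclose(central, distributed))       # prints True
```

The check succeeds for any partition of the columns, since $(\mtx{V}^\mathrm{T})$ restricted to the columns held by node $j$ is exactly $\mtx{V}_j^\mathrm{T}$.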
For convenience, we note that in this context the DGD+LOCAL iterations~\eqref{eq:DGDtemplate} take the form \begin{align} \mtx{U}^j(k+1) &= \sum_{i=1}^J \left(\widetilde w_{ji}\mtx{U}^i(k)\right) - 2 \mu (\mtx{U}^j(k) \mtx{V}_j^\mathrm{T}(k) - \mtx{Y}_j) \mtx{V}_j(k), \nonumber \\ \mtx{V}_j(k+1) & = \mtx{V}_j(k) - 2\mu (\mtx{U}^j(k) \mtx{V}_j^\mathrm{T}(k) - \mtx{Y}_j)^\mathrm{T} \mtx{U}^j(k), \label{eq:DGDtemplatePCA} \end{align} and the corresponding gradient descent objective function~\eqref{eq:DGD problem} takes the form \begin{equation} \begin{split} &\minimize_{\vct{z}} g(\vct{z}) = \sum_{j=1}^J \left(\| \mtx{U}^j \mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2 + \sum_{i=1}^J w_{ji} \|\mtx{U}^j - \mtx{U}^i\|_F^2\right), \end{split} \label{eq:DGD problem PCA} \end{equation} where $\mtx{U}^1, \dots, \mtx{U}^J \in \mathbb{R}^{n\times r}$ are local copies of the optimization variable $\mtx{U}$; $\mtx{V}_1,\dots,\mtx{V}_J$ are a partition of $\mtx{V}$ as in~\eqref{eq:vpart}; and the weights $\{w_{ji}\}$ are determined by $\{\widetilde{w}_{ji}\}$ and $\mu$ as in~\eqref{eq:wtildetow}. Problems~\eqref{eq:rankrBMconsensus} and~\eqref{eq:DGD problem PCA} (as special cases of problems~\eqref{eq:centeralized problem} and~\eqref{eq:DGD problem}, respectively) satisfy many of the assumptions required for the geometric and algorithmic analysis in Section~\ref{sec:theoryunc}. We use these facts in proving our main result for the convergence of DGD+LOCAL on the matrix factorization problem. \begin{thm} Suppose $\text{rank}(\mtx{Y}) \le r$. Suppose DGD+LOCAL~\eqref{eq:DGDtemplatePCA} is used to solve problem~\eqref{eq:rankrBMconsensus}, with weights $\{\widetilde w_{ji}\}$ and stepsize \begin{equation} \mu < \frac{1 - 2\omega}{\max_j \; (212 + 64\pi)\rho^2+34\|\mtx{Y}_j\|_F + \frac{(4+4\pi)}{\rho^2} \| \mtx{Y}_j \|_F^2} \label{eq:stepsize requirementballMFfj} \end{equation} for some $\rho > 0$ and where $\omega := \max_{j}\sum_{i\neq j} \widetilde w_{ji}< \frac{1}{2}$. 
Suppose the $J \times J$ connectivity matrix $\mtx{W} = \{w_{ji}\}$ (with $w_{ji}$ defined in~\eqref{eq:wtildetow}) is connected and symmetric. Let $\{ \vct{z}(k) \}$ be the sequence generated by the DGD+LOCAL algorithm. Suppose $\vct{z}(0)$ is chosen randomly from a probability distribution supported on a set $S \subseteq B_\rho$ with $S$ having positive measure, and suppose that under such random initialization, there is a positive probability that the sequence $\{\vct{z}(k)\}$ remains bounded in $B_\rho$ and all limit points of $\{\vct{z}(k)\}$ are in $B_\rho$. Then conditioned on observing that $\{\vct{z}(k)\} \subseteq B_\rho$ and all limit points of $\{\vct{z}(k)\}$ are in $B_\rho$, DGD+LOCAL almost surely converges to a solution $\vct{z}^\star = (\mtx{U}^{1\star},\ldots,\mtx{U}^{J\star},\mtx{V}_1^\star,\ldots,\mtx{V}_J^\star)$ with the following properties: \begin{itemize} \item Consensus: $\mtx{U}^{1\star} = \cdots = \mtx{U}^{J\star} = \mtx{U}^\star$. \item Global optimality: $(\mtx{U}^\star,\mtx{V}^\star)$ is a global minimizer of~\eqref{eq:rankrBM}, where $\mtx{V}^\star$ denotes the concatenation of $\mtx{V}_1^\star,\ldots,\mtx{V}_J^\star$ as in~\eqref{eq:vpart}. \end{itemize} \label{thm:mainRevisedfj} \end{thm} \begin{proof} We begin by arguing that DGD+LOCAL converges almost surely (when $\vct{z}(0)$ is chosen randomly inside $B_\rho$) to a second-order critical point of~\eqref{eq:DGD problem PCA}. To do this, our goal is to invoke Theorem~\ref{thm:dgdconvergegenericballfj}. We note that each $f_j$ defined in~\eqref{eq:xyfj} satisfies $\inf_{\mtx{U},\mtx{V}_j} f_j>-\infty$ and is twice-continuously differentiable. Also, since the functions $f_j$ are semi-algebraic, $g$ satisfies the {\L}ojasiewicz inequality globally. The functions $f_j$ do not have globally Lipschitz gradient. 
However, we can find quantities $L_{0,j}$, $L_{1,j}$, $L_{2,j}$ such that $\left| f_j\left(\vct{x},\vct{y}_j\right)\right| \leq L_{0,j}$, $\left\Vert \nabla f_j\left(\vct{x},\vct{y}_j\right)\right\Vert \leq L_{1,j}$, and $\left\Vert \nabla^{2}f_j(\vct{x},\vct{y}) \right\Vert _{2}\leq L_{2,j}$ for all $(\vct{x},\vct{y}_j) \in B_{2\rho}$. For $L_{0,j}$: \begin{align*} \left| f_j\left(\vct{x},\vct{y}_j\right)\right| &= \| \mtx{U}\mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2 \\ &\le ( \| \mtx{U}\mtx{V}_j^\mathrm{T}\|_F + \| \mtx{Y}_j \|_F)^2 \\ &\le ( \| \mtx{U} \|_F \| \mtx{V}_j \|_F + \| \mtx{Y}_j \|_F)^2 \\ &\le ( 4\rho^2 + \| \mtx{Y}_j \|_F)^2 \\ &\le 32 \rho^4 + 2\| \mtx{Y}_j \|_F^2. \end{align*} For $L_{1,j}$: \begin{align*} \left\Vert \nabla f_j \left(\vct{x},\vct{y}_j\right)\right\Vert &= \left\| \begin{bmatrix} \nabla_{\mtx{U}} \| \mtx{U}\mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2 \\ \nabla_{\mtx{V}_j} \| \mtx{U}\mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2 \end{bmatrix} \right\|_F \\ &= \left\| \begin{bmatrix} 2 (\mtx{U} \mtx{V}_j^\mathrm{T} - \mtx{Y}_j) \mtx{V}_j \\ 2 (\mtx{U} \mtx{V}_j^\mathrm{T} - \mtx{Y}_j)^\mathrm{T} \mtx{U} \end{bmatrix} \right\|_F \\ &\le 2 \left( \|\mtx{U} \mtx{V}_j^\mathrm{T} \mtx{V}_j \|_F + \| \mtx{Y}_j \mtx{V}_j \|_F + \| \mtx{V}_j \mtx{U}^\mathrm{T} \mtx{U} \|_F + \| \mtx{Y}_j^\mathrm{T} \mtx{U} \|_F \right) \\ &\le 2 \left( 8 \rho^3 + 2 \rho \| \mtx{Y}_j \|_F + 8 \rho^3 + 2 \rho \| \mtx{Y}_j \|_F \right) \\ &= 32 \rho^3 + 8 \rho \| \mtx{Y}_j \|_F. \end{align*} For $L_{2,j}$, we can bound the Lipschitz constant of $\nabla f_j$ in $B_{2\rho}$ as follows. Denote $\mtx{D}=\begin{bmatrix}\mtx{D}_{\mtx{U}}\\ \mtx{D}_{\mtx{V}_j}\end{bmatrix}$.
Then \begin{align*} &\frac{1}{2}\|\nabla^2 f_j(\mtx{U},\mtx{V}_j)\|=\frac{1}{2}\max_{\|\mtx{D}\|_F=1}[\nabla^2 f_j(\mtx{U},\mtx{V}_j)](\mtx{D},\mtx{D})\\ &=\max_{\|\mtx{D}\|_F=1} \|\mtx{D}_{\mtx{U}}\mtx{V}_j^\mathrm{T} + \mtx{U}\mtx{D}_{\mtx{V}_j}^\mathrm{T}\|_F^2+2\langle \mtx{U}\mtx{V}_j^\mathrm{T},\mtx{D}_{\mtx{U}}\mtx{D}_{\mtx{V}_j}^\mathrm{T}\rangle-2\langle \mtx{Y}_j,\mtx{D}_{\mtx{U}}\mtx{D}_{\mtx{V}_j}^\mathrm{T} \rangle\\ &\le \max_{\|\mtx{D}\|_F=1} \frac{5}{2}(\|\mtx{V}_j\|_F^2+\|\mtx{U}\|_F^2)(\|\mtx{D}_{\mtx{U}}\|_F^2 +\|\mtx{D}_{\mtx{V}_j}\|_F^2)+\|\mtx{Y}_j\|_F(\|\mtx{D}_{\mtx{U}}\|_F^2+\|\mtx{D}_{\mtx{V}_j}\|_F^2)\\ &\le \max_{\|\mtx{D}\|_F=1} (10\rho^2+\|\mtx{Y}_j\|_F)(\|\mtx{D}_{\mtx{U}}\|_F^2+\|\mtx{D}_{\mtx{V}_j}\|_F^2)=10\rho^2+\|\mtx{Y}_j\|_F, \end{align*} where the last inequality holds because $\| \mtx{U} \|_F^2 + \|\mtx{V}_j\|_F^2 \le 4\rho^2$. Therefore we can bound the Lipschitz constant of $\nabla f_j$ as $L_{2,j}\le 20\rho^2+2\|\mtx{Y}_j\|_F$ for all $(\mtx{U},\mtx{V}_j)$ such that $\| \mtx{U} \|_F^2 + \|\mtx{V}_j\|_F^2 \le 4\rho^2$. Now, \begin{align*} L_{2,j}+\frac{4L_{1,j}}{\rho}+\frac{\left(2+2\pi\right)L_{0,j}}{\rho^{2}} &= 20\rho^2+2\|\mtx{Y}_j\|_F + \frac{4}{\rho} (32 \rho^3 + 8 \rho \| \mtx{Y}_j \|_F) + \frac{\left(2+2\pi\right)}{\rho^{2}} (32 \rho^4 + 2\| \mtx{Y}_j \|_F^2) \\ &= 20\rho^2+2\|\mtx{Y}_j\|_F + 128 \rho^2 + 32 \| \mtx{Y}_j \|_F + (64+64\pi) \rho^2 + \frac{(4+4\pi)}{\rho^2} \| \mtx{Y}_j \|_F^2 \\ &= (212 + 64\pi)\rho^2+34\|\mtx{Y}_j\|_F + \frac{(4+4\pi)}{\rho^2} \| \mtx{Y}_j \|_F^2. \end{align*} Thus, choosing $\mu$ to satisfy~\eqref{eq:stepsize requirementballMFfj} ensures that~\eqref{eq:stepsize requirementballfj} is met.
From Theorem~\ref{thm:dgdconvergegenericballfj}, we then conclude that conditioned on observing that $\{\vct{z}(k)\} \subseteq B_\rho$ and all limit points of $\{\vct{z}(k)\}$ are in $B_\rho$, DGD+LOCAL converges to a critical point of the objective function in~\eqref{eq:DGD problem PCA}, and the probability that this critical point is a strict saddle point is zero. We refer to this point as $\vct{z}^\star$. Next, note that the assumption of Proposition~\ref{prop:min f = min g for DGD} is satisfied if $\mtx{Y}$ has rank at most $r$. In particular, there exist $\widetilde{\mtx{U}}, \widetilde{\mtx{V}}$ such that $\widetilde{\mtx{U}} \widetilde{\mtx{V}}^\mathrm{T} = \mtx{Y}$ and so we may take $\vct{x}^\star = \operatorname*{vec}(\widetilde{\mtx{U}})$ and $\vct{y}^\star_j = \operatorname*{vec}(\widetilde{\mtx{V}}_j)$ to achieve $f_j(\vct{x}^\star,\vct{y}^\star_j) = 0$, which is the smallest possible value for each $f_j$. Proposition~\ref{prop:min f = min g for DGD} thus guarantees that~\eqref{eq:DGD problem PCA} has at least one critical point that is not a strict saddle (in fact, a global minimizer that falls on the consensus subspace). Next, note that the symmetric property required for Proposition~\ref{prop:nospuriouscritical for DGD} is satisfied. To see this, observe that \[ \nabla_{\mtx{U}} \| \mtx{U} \mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2 = 2 (\mtx{U} \mtx{V}_j^\mathrm{T} - \mtx{Y}_j) \mtx{V}_j \] and \[ \nabla_{\mtx{V}_j} \| \mtx{U} \mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2 = 2 (\mtx{U} \mtx{V}_j^\mathrm{T} - \mtx{Y}_j)^\mathrm{T} \mtx{U}. \] Thus, \[ \langle \nabla_{\mtx{U}} \| \mtx{U} \mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2, \mtx{U} \rangle = 2\cdot\text{tr}(\mtx{U}^\mathrm{T} (\mtx{U} \mtx{V}_j^\mathrm{T} - \mtx{Y}_j) \mtx{V}_j) = 2\cdot\text{tr}(\mtx{V}_j^\mathrm{T} (\mtx{U} \mtx{V}_j^\mathrm{T} - \mtx{Y}_j)^\mathrm{T} \mtx{U}) = \langle \nabla_{\mtx{V}_j} \| \mtx{U} \mtx{V}_j^\mathrm{T} - \mtx{Y}_j \|_F^2 , \mtx{V}_j \rangle.
\] Proposition~\ref{prop:nospuriouscritical for DGD} thus guarantees that~\eqref{eq:DGD problem PCA} has no critical points outside of the consensus subspace. Since we have argued that DGD+LOCAL converges to a second-order critical point $\vct{z}^\star$ of~\eqref{eq:DGD problem PCA}, it follows that $\vct{z}^\star$ must be on the consensus subspace; that is, $\vct{z}^\star = (\mtx{U}^{1\star},\ldots,\mtx{U}^{J\star},\mtx{V}_1^\star,\ldots,\mtx{V}_J^\star)$ with $\mtx{U}^{1\star} = \cdots = \mtx{U}^{J\star} = \mtx{U}^\star$. Next, Theorem~\ref{thm:strict saddle preserved for DGD} guarantees that $\vct{z}^\star$ (in which $\mtx{U}^{1\star} = \cdots = \mtx{U}^{J\star} = \mtx{U}^\star$) corresponds to a critical point $(\mtx{U}^\star, \mtx{V}^\star)$ of the centralized problem~\eqref{eq:rankrBMconsensus}, which is exactly equivalent to problem~\eqref{eq:rankrBM}. Here, $\mtx{V}^\star$ is the concatenation of $\mtx{V}_1^\star,\ldots,\mtx{V}_J^\star$ as in~\eqref{eq:vpart}. Theorem~\ref{thm:centerss} tells us that problem~\eqref{eq:rankrBM} has two types of critical points: global minimizers and strict saddles. If $(\mtx{U}^\star,\mtx{V}^\star)$ were a strict saddle point of~\eqref{eq:rankrBM}, Theorem~\ref{thm:strict saddle preserved for DGD} tells us that $\vct{z}^\star$ must then be a strict saddle of~\eqref{eq:DGD problem PCA}. However, $\vct{z}^\star$ is almost surely a second-order critical point of~\eqref{eq:DGD problem PCA}, where the Hessian has no negative eigenvalues. It follows that $(\mtx{U}^\star,\mtx{V}^\star)$ must almost surely be a global minimizer of problem~\eqref{eq:rankrBM}. \end{proof} \section*{Acknowledgements} This work was supported by the DARPA Lagrange Program under ONR/SPAWAR contract N660011824020. The views, opinions and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. 
The authors gratefully acknowledge Waheed Bajwa, Haroon Raja, Clement Royer, and Stephen Wright for many informative discussions on nonconvex and distributed optimization.
\section{Introduction} \label{s-intro} This is the second of two papers using the beta function methodology to produce accurate analytic solutions from model dark energy potentials in a quintessence cosmology. The first paper \citep{thm18}, hereinafter paper I, examined solutions for power and inverse power law potentials. This work extends the analysis to logarithmic and exponential potentials. The analytic nature of the solutions provides the means to calculate solutions for other values of the input parameters, such as $H_0$ and $\Omega_{m_0}$ in a flat universe, for comparison with observations. Exact analytic solutions for specific dark energy potentials are often mathematically intractable \citep{nar17}, but the beta function formalism \citep{bin15, cic17} provides a method for achieving accurate analytic solutions using beta potentials $V_b(\phi)$ that are accurate, but not exact, representations of model potentials $V_m(\phi)$. Numerical calculations can provide solutions in many specific cases. Such solutions, however, often do not readily reveal the basic physics in play, nor do they provide easily calculable solutions for alternative input parameters. The particular potentials examined here are the logarithmic \begin{equation} \label{eq-vl} V_m(\phi) \propto (\frac{\ln(\phi)}{\ln(\phi_0)})^{\beta_l} \end{equation} and exponential \begin{equation} \label{eq-ve} V_m(\phi) \propto \exp{[-\beta_e(\phi-\phi_0)]} \end{equation} potentials, where $\beta_l$ and $\beta_e$ are real, positive constants. The methodology follows the descriptions in \citet{bin15,cic17}, particularly \citet{cic17}, who explicitly include matter as well as dark energy. The details of the analysis are given in paper I and are not repeated here except for clarity. This work concentrates on the ``late time'' evolution of the universe, which is taken to be the time between a scale factor of 0.1 and 1.0, corresponding to redshifts between zero and nine. 
A flat universe is assumed with $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$. The current ratio of the dark energy density to the critical density, $\Omega_{\phi_0}$, is set to 0.7, where $\phi_0$ is the current value of the scalar $\phi$. The current values of the dark energy equation of state are set to $w_0=(-0.98, -0.96, -0.94, -0.92,-0.90)$ as was done in paper I. The last two values of $w_0$ are unlikely but are included to determine the limits on the validity of the solutions. In the exponential model potential the value of $w_0$ determines the value of $\beta_e$, removing one degree of freedom. In paper I $\kappa=\frac{\sqrt{8\pi}}{m_{pl}}$ was set to one; in this paper, however, natural units are used with $m_{pl}$, the Planck mass, set to one. This makes the units of the scalar $\phi$ the Planck mass rather than $1/\kappa$. A section on where the quintessence cases considered here and in paper I lie relative to the swampland conjectures has also been added. \section{Quintessence} \label{s-q} Quintessence is characterized by an action of the form \begin{equation} \label{eq-act} S=\int d^4x \sqrt{-g}[\frac{R}{2}-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\,\partial_{\nu}\phi -V(\phi)] +S_m \end{equation} where $R$ is the Ricci scalar, $g$ is the determinant of the metric $g^{\mu\nu}$, $V(\phi)$ is the dark energy potential, and $S_m$ is the action of the matter fluid. Different types of quintessence are defined by different forms of the dark energy potential. 
The quintessence dark energy density, $\rho_{\phi}$, and pressure, $p_{\phi}$, are given by \begin{equation} \label{eq-rhop} \rho_{\phi} \equiv \frac{\dot{\phi}^2}{2}+V(\phi), \hspace{1cm} p_{\phi} \equiv \frac{\dot{\phi}^2}{2}-V(\phi) \end{equation} \section{The Beta Function} \label{s-beta} The beta function is defined as the derivative of the scalar $\phi$ with respect to the natural log of the scale factor $a$ \citep{bin15} \begin{equation} \label{eq-beta} \beta(\phi) \equiv \frac{\kappa d \phi}{d \ln(a)} =\kappa \phi' \end{equation} where $\kappa=\frac{\sqrt{8\pi}}{m_{pl}}$ and the prime on the right hand term denotes the derivative with respect to the natural log of the scale factor, except when it denotes the integration variable inside an integral, as in eqn.~\ref{eq-sfi}. As noted in the introduction, paper I set $\kappa$ to one, as is often done in the cosmological literature. Here instead the Planck mass is set to one, leading to the scalar $\phi$ being expressed in units of the Planck mass, a difference of $\sqrt{8\pi} \approx 5$ from paper I. In the following, $k$ is used to denote $\sqrt{8\pi}$ in equations. Note that $\phi$ now has the dimensions of $m_{pl}$ and that $\kappa \phi$ is dimensionless. The dark energy equation of state $w=\frac{p_{\phi}}{\rho_{\phi}}$ for quintessence is given by \citet{nun04} \begin{equation} \label{eq-nun4} w+1 =\frac{k^2\phi'^2}{3 \Omega_{\phi}} =\frac{k^2 \beta^2(\phi)}{3 \Omega_{\phi}} \end{equation} For the logarithmic potentials this equation provides the boundary condition that determines the current value of the scalar, $\phi_0$. For the exponential potential, eqn.~\ref{eq-nun4} determines $\beta_e$, as discussed in section~\ref{s-es}. 
The beta function is not an arbitrary function of $\phi$ and $a$ but is determined by the model dark energy potential $V_m(\phi)$ such that \citep{cic17} \begin{equation} \label{eq-bv} V_m(\phi) \propto \exp\{-\int \beta(\phi)\, k\, d\phi\} \end{equation} \subsection{Beta Functions from the Potentials} \label{ss-bpot} From eqn.~\ref{eq-bv} the appropriate beta function is the negative logarithmic derivative of the potential with respect to $k\phi$. Using the potentials listed in the introduction, the logarithmic beta function is \begin{equation} \label{eq-lb} \beta(\phi) = (\frac{-\beta_l}{k \phi \ln(k\phi)}) \end{equation} The exponential beta function is simply \begin{equation} \label{eq-ilb} \beta(\phi) =\frac{ \beta_e}{k} \end{equation} Five $\beta_l$ values are considered, the integers one through five. The $\beta_e$ values are set by the five values of $w_0$. \section{Evolution of the Scalar} \label{s-es} An important feature of the beta function formalism is that the specification of the beta function, along with a boundary condition, determines the evolution of the scalar with respect to the scale factor, $\phi(a)$. \subsection{The Scalar as a Function of the Scale Factor (Logarithmic)} \label{ss-sfl} The beta function, eqn.~\ref{eq-beta}, provides the differential equation for $\phi$ as a function of the scale factor $a$. For the logarithmic potential \begin{equation} \label{eq-ld} k^2 \phi \ln(k\phi) d \phi = -\beta_l d\ln(a) \end{equation} Integrating both sides \begin{equation} \label{eq-sfi} \int_{\phi_0}^{\phi}k^2 \phi'\ln(k\phi') d\phi' =-\beta_l\int_1^a d\ln(a') \end{equation} gives \begin{equation} \label{eq-il} \frac{k^2 \phi^2}{2}(\ln(k\phi)-\frac{1}{2})=-\beta_l\ln(a)+\frac{k^2\phi_0^2}{2}(\ln(k\phi_0)-\frac{1}{2}) \end{equation} where $\phi_0$ is the current value of the scalar. 
Denoting the right hand side of eqn.~\ref{eq-il} by $Q$, the scalar is given by \begin{equation} \label{eq-pl} k\phi=\pm\sqrt{\frac{4Q}{PL(\frac{4Q}{e})}} \end{equation} The term $PL$ in eqn.~\ref{eq-pl} stands for the ProductLog, more commonly known as the Lambert $W(x)$ function, the solution to $We^W=x$. Here the ProductLog notation, used by Mathematica, is retained to avoid confusion with the superpotential $W(\phi)$ introduced later. The value of $\phi_0$ is determined by the current value of the dark energy equation of state $w_0$ using eqn.~\ref{eq-nun4} \begin{equation} \label{eq-lpo} k\phi_0\ln(k\phi_0)=\frac{\pm\beta_l}{\sqrt{3\Omega_{\phi_0}(w_0+1)}} \end{equation} where $\Omega_{\phi_0}$ is the current ratio of the dark energy density to the critical density. The solution to eqn.~\ref{eq-lpo} again uses the $PL$ function \begin{equation} \label{eq-plpo} k\phi_0=\frac{\frac{\pm\beta_l}{\sqrt{3\Omega_{\phi_0}(w_0+1)}}}{PL(\frac{\pm \beta_l}{\sqrt{3\Omega_{\phi_0}(w_0+1)}})} \end{equation} The ProductLog does not have positive real solutions for negative arguments. The definition of the logarithmic beta function assumes that $\beta_l$ is a positive real number; therefore, the positive square root is chosen in eqns.~\ref{eq-pl},~\ref{eq-lpo} and~\ref{eq-plpo}. None of the three equations accommodates phantom solutions where $(w+1)<0$. Figures~\ref{fig-lib} and~\ref{fig-liw} show the evolution of the scalar $\phi$ for the logarithmic beta function: $\beta_l$ is held constant at 3 in fig.~\ref{fig-lib} for the five values of $w_0$, and $w_0$ is held constant at $-0.94$ in fig.~\ref{fig-liw} for the five values of $\beta_l$. 
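The ProductLog solution of eqn.~\ref{eq-plpo} is straightforward to check numerically. The sketch below is illustrative only: a simple Newton iteration stands in for Mathematica's ProductLog, and the fiducial $\Omega_{\phi_0}=0.7$ is assumed; it verifies that the resulting $k\phi_0$ satisfies the boundary condition of eqn.~\ref{eq-lpo}.

```python
import math

def lambert_w(z, tol=1e-12):
    """Principal branch of the Lambert W function (w * exp(w) = z, z > 0),
    computed by Newton iteration; stands in for Mathematica's ProductLog."""
    w = math.log(1.0 + z)                    # decent starting guess for z > 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - z) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def kphi0(beta_l, w0, omega_phi0=0.7):
    """eq-plpo: solves x ln x = A via x = A / W(A), taking the positive root."""
    A = beta_l / math.sqrt(3.0 * omega_phi0 * (w0 + 1.0))
    return A / lambert_w(A)

x0 = kphi0(beta_l=3.0, w0=-0.94)
A = 3.0 / math.sqrt(3.0 * 0.7 * (-0.94 + 1.0))
residual = x0 * math.log(x0) - A             # should vanish by eq-lpo
```

The same routine applies for any of the five $\beta_l$, $w_0$ combinations considered in the text.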
\begin{figure} \scalebox{.6}{\includegraphics{figure1.eps}} \caption{The evolution of the scalar field $\phi$ as a function of the scale factor $a$ for the logarithmic beta function with $\beta_l = 3.0$ for the five values of $w_0$ listed in the introduction.} \label{fig-lib} \end{figure} Even though $\phi_0$ changes significantly with the value of $\beta_l$, the scalar $\phi$ evolves relatively little over $a$ between 0.1 and 1. \begin{figure} \scalebox{.6}{\includegraphics{figure2.eps}} \caption{The evolution of the scalar field $\phi$ as a function of the scale factor $a$ for the logarithmic beta function with the five values of $\beta_l$ and $w_0=-0.94$.} \label{fig-liw} \end{figure} \subsection{The Scalar as a Function of the Scale Factor (Exponential)} \label{ss-sfe} The exponential potential, $V(\phi) \propto \exp[-\beta_e(\phi-\phi_0)]$, is the dark energy potential for slow roll quintessence when the first slow roll parameter, $\frac{1}{V}\frac{dV}{d \phi}$, is held constant, e.g. \citet{sch08}. The beta function for the exponential potential, $\beta_e$, is unique in that it is a constant and not a function of $\phi$. Unlike all of the previous cases, $\beta_e$ cannot be set arbitrarily. The value of $\beta_e$ is set by eqn.~\ref{eq-nun4} \begin{equation} \label{eq-bew} \beta_e =\sqrt{3 \Omega_{\phi_0} (w_0+1)} \end{equation} independent of $\phi$ or $\phi_0$; therefore there is no boundary condition to set $\phi_0$. The solutions for the relevant cosmological parameters and fundamental constants are all functions of $(\phi - \phi_0)$, which is therefore the appropriate variable rather than the absolute values of $\phi$ and $\phi_0$. From the exponential potential beta function \begin{equation} \label{eq-exppo} k(\phi - \phi_0) = \beta_e \ln(a) \end{equation} The evolution of $(\phi - \phi_0)$ is shown in fig.~\ref{fig-exphi}. 
\begin{figure} \scalebox{.6}{\includegraphics{figure3.eps}} \caption{The evolution of the scalar field $(\phi - \phi_0)$ as a function of the scale factor $a$ for the exponential beta function for the five values of $w_0$ listed in the introduction.} \label{fig-exphi} \end{figure} The values of $\beta_e$ for the appropriate values of $w_0$ are listed in fig.~\ref{fig-exphi} and are all less than one. An anonymous referee has pointed out that a constant beta function never reaches a fixed de Sitter point, which requires a beta function value of zero. The referee also noted that for a small value of the beta function, as is found here, spacetime evolves toward a power law geometry that might have interesting consequences in holography, as discussed in \citet{cic18}. \section{The Evolution of the Beta Function} \label{s-ebeta} In the beta function formalism many of the cosmological parameters depend on the form of the beta function. Figures~\ref{fig-lgbetab3} and~\ref{fig-lgbetab} \begin{figure} \scalebox{.6}{\includegraphics{figure4.eps}} \caption{The evolution of $\beta(a)$ as a function of the scale factor $a$ for the logarithmic potential with $\beta_l=3$ for the five values of $w_0$ listed in the introduction.} \label{fig-lgbetab3} \end{figure} \begin{figure} \scalebox{.6}{\includegraphics{figure5.eps}} \caption{The evolution of $\beta(a)$ as a function of the scale factor $a$ for the logarithmic potential with $w_0=-0.94$ for the five values of $\beta_l$ listed in the introduction.} \label{fig-lgbetab} \end{figure} display the evolution of the logarithmic potential beta functions for the five values of $w_0$ with $\beta_l=3$ (fig.~\ref{fig-lgbetab3}) and for the five values of $\beta_l$ with $w_0=-0.94$ (fig.~\ref{fig-lgbetab}). The logarithmic beta functions are negative and between $-0.1$ and $-0.5$ for scale factors between 0.1 and 1. 
Figure~\ref{fig-exbeta} shows the evolution of the exponential potential beta function for \begin{figure} \scalebox{.6}{\includegraphics{figure6.eps}} \caption{The evolution of $\beta(a)$ as a function of the scale factor $a$ for the exponential potential.} \label{fig-exbeta} \end{figure} the five $\beta_e$, $w_0$ pairs. The values are positive and constant, which simplifies several of the subsequent calculations. \section{The Potentials} \label{s-pot} In the beta function formalism two different types of potentials play a prominent role. The first is the dark energy potential in the action, $V(\phi)$, which does not depend on matter. The second, in analogy with particle physics, is termed the superpotential $W$, given by \begin{equation} \label{eq-W} W(\phi) = -2H(\phi) = -2\frac{\dot{a}}{a} \end{equation} Even though the Hubble parameter $H$ is the parameter of interest, $W$ is utilized here to be consistent with the literature on beta functions. Both the dark energy potential $V(\phi)$ and the superpotential $W(\phi)$ can be expressed in terms of $\beta(\phi)$ \citep{cic17} by \begin{equation} \label{eq-wphi} W(\phi) = W_0 \exp\{-\frac{1}{2}\int_{\phi_0}^{\phi}\beta(k\phi')kd\phi'\} \end{equation} and \begin{equation} \label{eq-v} V(\phi) = \frac{3}{4 k^2} W_0^2 \exp\{-\int_{\phi_0}^{\phi}\beta(k\phi')kd\phi'\}(1-\frac{\beta^2(k\phi)}{6}) \end{equation} where $W_0$ is the current value of $W$, equal to $-2H_0$. Note that the superpotential is always denoted by a capital $W$ and the dark energy equation of state by a lower case $w$. The potential in eqn.~\ref{eq-v} is referred to as the beta potential of the model potential. It differs from the model potential by the factor $(1-\frac{\beta^2(\phi)}{6})$. As long as this factor is close to one the beta potential is an accurate, but not exact, representation of the model potential. 
\subsection{The Logarithmic Potential} \label{ss-lp} The model logarithmic potential is given by \begin{equation} \label{eq-mlp} V_m(\phi)=\frac{3}{4 k^2}W_0^2(\frac{\ln(k\phi)}{\ln(k\phi_0)})^{\beta_l} \end{equation} with the beta function shown in eqn.~\ref{eq-lb}. The logarithmic beta potential is given by \begin{equation} \label{eq-blp} V_b(\phi)=\frac{3}{4 k^2}W_0^2(\frac{\ln(k\phi)}{\ln(k\phi_0)})^{\beta_l}(1- \frac{\beta_l^2} {6(k\phi \ln(k\phi))^2}) \end{equation} The logarithmic potential is decreasing as the scale factor increases. Figure~\ref{fig-vl} shows the potential with $\beta_l$ fixed at 3 for the five different values of $w_0$. The solid lines in fig.~\ref{fig-vl} show the beta potential which follows the model potential (dashed) quite well, particularly for values of $w_0$ close to minus one. The accuracy of the fit is quantified in section~\ref{ss-fit} for all of the potentials. \begin{figure} \scalebox{.6}{\includegraphics{figure7.eps}} \caption{The evolution of the model logarithmic potential with $\beta_l =3$ is shown by the dashed lines and the solid lines indicate the evolution of the beta logarithmic potential.} \label{fig-vl} \end{figure} \subsection{The Exponential Potential} \label{ss-exp} The model potential is of the form \begin{equation} \label{eq-exppot} V_m(\phi)=\frac{3}{4 k^2}W_0^2 \exp(-\beta_ek(\phi-\phi_0)) \end{equation} with a beta potential of \begin{equation} \label{eq-expbetapot} V_b(\phi)=\frac{3}{4 k^2}W_0^2 \exp(-\beta_ek(\phi-\phi_0))(1-\frac{\beta_e^2}{6}) \end{equation} \begin{figure} \scalebox{.6}{\includegraphics{figure8.eps}} \caption{The evolution of the model exponential potential is shown by the dashed lines and the solid lines indicate the evolution of the beta exponential potential.} \label{fig-vexp} \end{figure} Figure~\ref{fig-vexp} shows the evolution of exponential model and beta potentials. 
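That the exponential factor $\exp\{-\int_{\phi_0}^{\phi}\beta(k\phi')k\,d\phi'\}$ in eqn.~\ref{eq-v} reproduces the shape of the model potential can be verified numerically for the logarithmic beta function of eqn.~\ref{eq-lb}. The sketch below uses illustrative sample values of $k\phi$ (any values greater than one serve) and a simple trapezoid quadrature; it is a consistency check, not part of the formalism.

```python
import math

def beta_log(x, beta_l):
    """Logarithmic beta function of eq-lb, with x = k*phi."""
    return -beta_l / (x * math.log(x))

def shape_numeric(x, x0, beta_l, n=20000):
    """exp(-int_{x0}^{x} beta dx), the model-potential factor of eq-v,
    evaluated by the trapezoid rule."""
    h = (x - x0) / n
    total = 0.5 * (beta_log(x0, beta_l) + beta_log(x, beta_l))
    for i in range(1, n):
        total += beta_log(x0 + i * h, beta_l)
    return math.exp(-total * h)

beta_l, x0, x = 3.0, 5.0, 4.0       # illustrative sample values with k*phi > 1
numeric = shape_numeric(x, x0, beta_l)
closed = (math.log(x) / math.log(x0)) ** beta_l   # shape of eq-mlp
```

The numerical integral and the closed-form shape of eqn.~\ref{eq-mlp} agree to the quadrature accuracy, confirming that eqn.~\ref{eq-lb} is indeed the beta function of the logarithmic model potential.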
\subsection{Normalization} \label{ss-norm} The beta dark energy potentials are the desired model potentials multiplied by $(1-\frac{\beta(\phi)^2}{6})$, which produces both an offset and a deviation from the model potentials. The deviation is expected to be small since $\frac{\beta(\phi)^2}{6}$ is much less than one in most cases. In paper I the potential was normalized to be $\frac{3}{4}W_0^2$ at a scale factor of one, producing a potential slightly different from the true beta potential. In this work that practice has been abandoned and no normalization has been applied. As a result the beta potentials shown in figs.~\ref{fig-vl} and~\ref{fig-vexp} cross over each other at $a \approx 0.8$ due to the $\frac{\beta(\phi)^2}{6}$ term. \subsection{Accuracy of Fit} \label{ss-fit} The cosmological parameters derived by the beta function formalism are only useful if the beta potentials accurately represent the model potentials. Figures~\ref{fig-lgferr} and~\ref{fig-expferr} show the fractional deviation of the beta potentials from the model potentials to quantify those deviations. For the logarithmic potential the minimum, median and maximum $\beta_l$ values are shown with $w_0$ values equal to -0.98, -0.94 and -0.9 to show the extremes without excessive overlap of tracks in the figures. For the exponential potential all of the cases are shown since they do not overlap. In paper I a conservative limit of only accepting solutions with fractional deviations of $1\%$ or less was adopted. In this paper that limit is expanded to $4\%$, which is still a higher accuracy than that of most of the available observational data. 
\subsubsection{The Logarithmic Beta Potential Fractional Error} \label{sss-lfe} \begin{figure} \scalebox{.6}{\includegraphics{figure9.eps}} \caption{The fractional deviation of the beta logarithmic potentials from the model potentials with $\beta_l=1$, dashed lines, $\beta_l =3.0$, solid lines, and $\beta_l =5.0$, dot dashed lines. For each $\beta_l$ the tracks are marked with the value of $w_0$ at the end.} \label{fig-lgferr} \end{figure} The primary feature of the logarithmic potential fractional deviation in fig.~\ref{fig-lgferr} is that all of the cases are within the acceptable error of 0.04. Unlike the normalized cases of paper I, the highest fractional deviation for the logarithmic beta potential is at a scale factor of one, increasing for values of $w_0$ further from minus one but independent of the value of $\beta_l$. The evolution away from $a=1$ depends on $\beta_l$, but the deviation decreases for lower values of $a$. All of the logarithmic cases are therefore retained in the subsequent analysis. \subsubsection{The Exponential Beta Potential Fractional Errors} \label{sss-efe} The exponential beta potential fractional errors shown in fig.~\ref{fig-expferr} are set by the values of $w_0$, which also set the value of $\beta_e$. \begin{figure} \scalebox{.6}{\includegraphics{figure10.eps}} \caption{The same as for fig.~\ref{fig-lgferr} except for the exponential potentials. All five values of $\beta_e$ are shown with the values of $w_0$ marked on the figure.} \label{fig-expferr} \end{figure} As expected the fractional deviations of the exponential beta potential are independent of the scale factor, since $\beta(\phi)$ is constant for a given $w_0$, and all fall in the acceptable range. As with the logarithmic beta potential all of the exponential cases are retained in the subsequent analysis. 
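For the exponential case the fractional deviation has a closed form, since $|V_b/V_m - 1| = \beta_e^2/6$ with $\beta_e$ fixed by eqn.~\ref{eq-bew}. A minimal check, assuming the fiducial $\Omega_{\phi_0}=0.7$:

```python
import math

OMEGA_PHI0 = 0.7

def beta_e(w0):
    """eq-bew: beta_e is fixed by the current equation of state."""
    return math.sqrt(3.0 * OMEGA_PHI0 * (w0 + 1.0))

# |V_b/V_m - 1| = beta_e^2 / 6, independent of the scale factor
deviations = {w0: beta_e(w0) ** 2 / 6.0 for w0 in (-0.98, -0.96, -0.94, -0.92, -0.90)}
```

The largest deviation, for $w_0=-0.90$, is $0.21/6 = 0.035$, so all five exponential cases sit below the adopted $4\%$ limit, consistent with fig.~\ref{fig-expferr}.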
\section{The Matter Density} \label{s-rhom} The dark energy potentials are independent of matter but both baryonic and dark matter must be taken into account to calculate accurate analytic solutions for fundamental constants and cosmological parameters. Matter is represented by the $S_m$ term in the action, eqn.~\ref{eq-act}. From \citet{cic17} and paper I, the matter density as a function of the scalar is given by \begin{equation} \label{eq-rhomphi} \rho_m(\phi)=\rho_{m_0}\exp(-3\int_{\phi_0}^{\phi} \frac{d \phi'}{\beta(\phi')}) \end{equation} where $\rho_{m_0}$ is the present day mass density. Different beta functions produce different functions for $\rho_m$ as a function of $\phi$, hiding the universality of the matter density when expressed as a function of the scale factor $a$ \begin{equation} \label{eq-rhoma} \rho_m(a)=\rho_{m_0}\exp(-3\int_1^a d\ln(a') )= \rho_{m_0}a^{-3} \end{equation} as expected, independent of $\beta(\phi)$. \section{The Superpotential $W$ and the Hubble Parameter $H$} \label{s-wh} From eqn.~\ref{eq-W} it is obvious that calculating the superpotential $W$ is equivalent to calculating the Hubble parameter $H$. As shown in \citet{cic17} and paper I, the differential equation for $W$ with matter is \begin{equation} \label{eq-difw} WW_{,\phi} + \frac{1}{2} \beta W^2 = -2\frac{\rho_m}{\beta} \end{equation} where the notation $_{,\phi}$ indicates the derivative with respect to the scalar $\phi$. Paper I includes two specific examples, the power and inverse power law potentials and their related beta functions. Here a more general solution is presented that gives a better insight into the process. The solutions to eqn.~\ref{eq-difw} utilize integrating factors $f(x)$, where $x=k\phi$ for ease of notation. The integrating factor multiplies both sides of eqn.~\ref{eq-difw} to create an exact equation that can be integrated. The left side then takes the exact form shown on the left of eqn.~\ref{eq-lif}. 
The right side is then integrated to provide the solution for $W$. \begin{equation} \label{eq-lif} \frac{d}{dx}(\frac{1}{2}W^2(x) f(x)) = -2f(x)\frac{\rho_m(x)}{\beta(x)} \end{equation} Comparison with eqn.~\ref{eq-difw} shows that the integrating factor must satisfy \begin{equation} \label{eq-ff} \frac{d f(x)}{dx} = \beta(x) f(x) \end{equation} which determines $f(x)$. Writing the equation out as the equality of two differentials gives \begin{equation} \label{eq-fdx} d(W^2(x)f(x)) = -4f(x)\frac{\rho_m(x)}{\beta(x)}dx \end{equation} Integrating both sides of eqn.~\ref{eq-fdx} gives \begin{equation} \label{eq-ww} W^2(x) f(x) - W_0^2 f(x_0) = -4\int_{x_0}^xf(x)\frac{\rho_m(x)}{\beta(x)}dx \end{equation} Equation~\ref{eq-ww} can be solved as a function of $x$ or, much more usefully, as a function of $a$ using $x(a)$ from eqns.~\ref{eq-pl} and~\ref{eq-exppo} and the much simpler $\rho_m(a)$ from eqn.~\ref{eq-rhoma}. The beta function provides the conversion of $dx$ on the right hand side of eqn.~\ref{eq-fdx} to $da$ \begin{equation} \label{eq-dxa} dx = \beta(x(a))d\ln(a) = \beta(x(a))\frac{da}{a} \end{equation} The beta function in eqn.~\ref{eq-dxa} cancels the beta function on the right hand side of eqn.~\ref{eq-fdx}. 
The right hand side of eqn.~\ref{eq-fdx} is now a function of $a$ rather than $x$ and the integral of the right hand side is \begin{equation} \label{eq-irh} -4\rho_{m_0}\int_1^a f(x(a'))a'^{-4}da' \end{equation} After integrating the left side of eqn.~\ref{eq-fdx} and rearranging, the final answer is \begin{equation} \label{eq-waf} W(a)=-\left\{-4\frac{\rho_{m_0}}{f(x(a))}\int_1^a f(x(a'))a'^{-4}da' +W^2_0\frac{f(x(a=1))}{f(x(a))}\right\}^{\frac{1}{2}} \end{equation} The integrating factors for the logarithmic and exponential potentials are \begin{align} \label{eq-if} (\ln(k \phi))^{-\beta_l} \hspace{1cm} \text{logarithmic} \nonumber \\ \exp[\beta_e k (\phi - \phi_0)] \hspace{1cm} \text{exponential} \end{align} The integral in eqn.~\ref{eq-irh} for the exponential integrating factor is quite simple and analytic. The integral for the logarithmic integrating factor is not analytic and must be done numerically since it contains the $PL$ function for $x(a)$ given in eqn.~\ref{eq-pl}. \begin{align} \label{eq-wa} W(a)=-[-4\rho_{m_0}(\ln(k\phi(a)))^{\beta_l}\int^a_1(\ln(k\phi(a')))^{-\beta_l}a'^{-4}da' \nonumber \\ +W_0^2(\frac{\ln(k\phi(a))}{\ln(k\phi_0)})^{\beta_l}]^{\frac{1}{2}} \hspace{0.2cm} \text{log} \nonumber \\ W(a)=-[\frac{-4 \rho_{m_0}}{\beta_e^2 -3}(a^{-3}-a^{-\beta_e^2}) +W_0^2a^{-\beta_e^2}]^{\frac{1}{2}} \hspace{0.2cm} \text{exp} \end{align} where $k\phi(a)$ is given by eqn.~\ref{eq-pl} for the logarithmic potentials. The superpotential is a negative quantity; therefore the negative square root in eqn.~\ref{eq-wa} is used. \subsection{The Hubble Parameter as a Function of the Scale Factor} \label{ss-eh} The Hubble parameter is simply $-\frac{W(a)}{2}$. As was found in paper I for the power and inverse power law potentials, the evolution of the Hubble parameter for the logarithmic and exponential potentials is indistinguishable from the $\Lambda$CDM evolution at the scale of the plots. 
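The exponential line of eqn.~\ref{eq-wa} can be checked against a direct quadrature of eqn.~\ref{eq-ww}. The sketch below uses illustrative units with $H_0=1$, the fiducial $\Omega_{m_0}=0.3$, and a hand-rolled trapezoid rule; these choices are assumptions for the check, not part of the formalism.

```python
import math

H0, OMEGA_M0, OMEGA_PHI0 = 1.0, 0.3, 0.7    # illustrative units with H0 = 1
RHO_M0 = 3.0 * OMEGA_M0 * H0 ** 2           # matter density today, eq-e1 at a = 1
W0 = -2.0 * H0

def W_closed(a, be):
    """Exponential line of eq-wa."""
    be2 = be ** 2
    W2 = (-4.0 * RHO_M0 / (be2 - 3.0)) * (a ** -3 - a ** -be2) + W0 ** 2 * a ** -be2
    return -math.sqrt(W2)

def W_quadrature(a, be, n=20000):
    """eq-ww with the integrating factor f = a**(be**2), by the trapezoid rule."""
    be2 = be ** 2
    def integrand(ap):
        return ap ** be2 * ap ** -4         # f(x(a')) a'^{-4}
    h = (a - 1.0) / n
    total = 0.5 * (integrand(1.0) + integrand(a))
    for i in range(1, n):
        total += integrand(1.0 + i * h)
    integral = total * h
    W2 = (-4.0 * RHO_M0 * integral + W0 ** 2 * 1.0) / a ** be2   # f(x(a=1)) = 1
    return -math.sqrt(W2)

be = math.sqrt(3.0 * OMEGA_PHI0 * 0.06)     # beta_e for w0 = -0.94, eq-bew
mismatch = abs(W_closed(0.5, be) - W_quadrature(0.5, be))
```

As a further sanity check, in the limit $\beta_e \to 0$ the closed form reduces to the $\Lambda$CDM expression $W^2 = 4H_0^2(\Omega_{m_0}a^{-3}+\Omega_{\phi_0})$.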
To highlight the true differences, figure~\ref{fig-lh} shows the ratio of the Hubble parameter for the logarithmic potential to the $\Lambda$CDM Hubble parameter, minus one, as a function of the scale factor. In fig.~\ref{fig-lh} $\beta_l$ is held constant at three and each of the five values of $w_0$ is plotted. \begin{figure} \scalebox{.6}{\includegraphics{figure11.eps}} \caption{The ratio of the logarithmic potential Hubble parameter $H_l(a)$ to the $\Lambda$CDM Hubble parameter, minus one. $\beta_l$ is held constant at 3 and all five of the $w_0$ values are plotted.} \label{fig-lh} \end{figure} The same ratio is plotted for the five exponential potential cases in fig.~\ref{fig-le}. \begin{figure} \scalebox{.6}{\includegraphics{figure12.eps}} \caption{The same as in fig.~\ref{fig-lh} except for the five exponential potential cases.} \label{fig-le} \end{figure} In both the logarithmic and exponential cases the deviation from the $\Lambda$CDM case is small and peaks at $a \approx 0.5$ as expected. The similarity of the Hubble parameter evolution for a dynamic quintessence cosmology to that of the static $\Lambda$CDM cosmology makes it a poor discriminator between the two cases. One percent accuracy observations of $H(a)$ at redshifts near one are required to distinguish between the two. The reason for the similarity of the evolutions is given in the next section. \subsection{The Evolution of the Dark Energy Density} \label{ss-ded} From the Einstein equation with mass \begin{equation} \label{eq-e1} 3H^2=\rho_m + \rho_{\phi} \end{equation} it is clear that \begin{equation} \label{eq-rphi} \rho_{\phi} = 3 H^2 - \rho_m = 3H^2(a) -\frac{\rho_{m_0}}{a^3} \end{equation} for a flat universe. Figures~\ref{fig-lgrde} and~\ref{fig-exrde} show the evolution of the dark energy density for the logarithmic and exponential potentials respectively. \begin{figure} \scalebox{.6}{\includegraphics{figure13.eps}} \caption{The $\log_{10}$ of the dark energy density values as a function of the scale factor for the logarithmic potential. 
The dashed line is the matter density, which decreases below the dark energy density near a scale factor of 0.75.} \label{fig-lgrde} \end{figure} \begin{figure} \scalebox{.6}{\includegraphics{figure14.eps}} \caption{The dark energy density values as a function of the scale factor for the exponential potential. As in fig.~\ref{fig-lgrde} the dashed line shows the matter density.} \label{fig-exrde} \end{figure} The dashed line in the figures shows the evolution of the matter density. The reason for the similarity of the quintessence evolution of $H(a)$ to the $\Lambda$CDM evolution is shown in the figures. The quintessence dark energy density evolves very slowly in the current dark energy dominated epoch, mimicking the static cosmological constant dark energy density. The quintessence dark energy density only evolves significantly at high redshift in the matter dominated era. This is why the $H(a)$ evolution is essentially the same for the two cosmologies, and the same may be true for most freezing cosmologies. There are thawing quintessence cosmologies \citep{sch08}; however, their potentials are extremely flat and must match the same value of $H_0$ as the freezing models. \section{The Dark Energy Equation of State} \label{s-deos} A primary observational indicator of a dynamical cosmology is a dark energy equation of state different from the cosmological constant value of minus one. From paper I \begin{equation} \label{eq-wden} 1+w(\phi) = \frac{k^2\beta^2}{3}(1-\frac{4\rho_{m_0}a^{-3}}{3W^2})^{-1} =\frac{k^2\beta^2}{3}(1-\Omega_m)^{-1}=\frac{k^2\beta^2(\phi)}{3\Omega_{\phi}} \end{equation} for a flat universe. Figure~\ref{fig-lgwpo} shows the evolution of $(w(a)+1)$ for the logarithmic dark energy potential with $\beta_l=3$ and all five values of $w_0$. \begin{figure} \scalebox{.6}{\includegraphics{figure15.eps}} \caption{The evolution of $(w(a)+1)$ as a function of $a$ for the logarithmic dark energy potential with $\beta_l=3$ and all five values of $w_0$. 
} \label{fig-lgwpo} \end{figure} Figure~\ref{fig-exwpo} shows the evolution of $(w(a)+1)$ for the exponential dark energy potential with the $\beta_e$ values set by $(w_0+1) = 0.02, 0.04, 0.06, 0.08$ and 0.1. \begin{figure} \scalebox{.6}{\includegraphics{figure16.eps}} \caption{The evolution of $(w(a)+1)$ as a function of $a$ for the exponential dark energy potential for the five $\beta_e$ values set by $(w_0+1) = 0.02, 0.04, 0.06, 0.08$ and 0.1. } \label{fig-exwpo} \end{figure} A common feature of all of the potentials in this paper and paper I is a very slow late time ($a>0.5$) evolution of $w(a)$, with significant evolution for scale factors between 0.1 and 0.5. This indicates that, at least for the quintessence cosmology, high redshift observations have the best chance of detecting the presence of dynamical dark energy. The shapes of the logarithmic and exponential potential $w(a)$ tracks are quite similar, particularly for the lower values of $w_0$, while they are more divergent for the higher values. Any determination of the dark energy potential from the $w(a)$ tracks would require a secure knowledge of $w_0$ and very accurate measurements of $w(a)$ at higher redshifts. The required level of accuracy is beyond current observational capabilities. Detection of the predicted value of $w(a)\approx -0.5$ at $a=0.2$, $z=4$, however, might be possible with present techniques. Further discussion of $w(a)$ observations occurs in sec.~\ref{ss-ds}. \subsection{The Fundamental Constants} \label{ss-fc} Paper I gives an extensive discussion of the evolution of the fundamental constants, for both the proton to electron mass ratio $\mu$ and the fine structure constant $\alpha$, in terms of a change of $\phi$ and a coupling constant $\zeta_c$, where $c$ is $\mu$ or $\alpha$. \begin{equation} \label{eq-dx} \frac{\Delta c}{c} = \zeta_c k(\phi - \phi_0) = \zeta_c \int^a_1\beta(a')d \ln a', \hspace{0.5cm} c=\alpha,\mu \end{equation} 
\citep{nun04}. The first equality is usually interpreted as the first term of a Taylor expansion of a possibly more complicated coupling. The observational constraints on $\Delta \alpha /\alpha$ and $\Delta \mu / \mu$ are of the order $10^{-6}$ or less, justifying the assumption. The last equality, not shown in paper I, explicitly shows the connection between the beta function and the evolution of the fundamental constants. Sections~\ref{ss-sfl} and~\ref{ss-sfe} show the transformation of $\beta(\phi)$ to $\beta(a)$ via the formulae for $\phi(a)$. Figure~\ref{fig-lgdmu} shows the evolution of $\Delta \mu / \mu$ versus the scale factor for the logarithmic potential with $\beta_l=3$ and the five values of $w_0$. The positive and negative evolutions simply indicate that the coupling could have either a positive or negative sign. The coupling is arbitrarily set to $\pm 10^{-6}$ for the figure. The evolution of the fine structure constant is identical for the same coupling constant. The evolution reflects the evolution of $\phi(a)$ since the coupling is assumed to be a constant. As expected, the further $w_0$ deviates from minus one, the larger the evolution of $\mu$. Similar to the power law potentials in paper I, the evolution of $\Delta \mu / \mu$ is relatively insensitive to changes in $\beta_l$. \begin{figure} \scalebox{.6}{\includegraphics{figure17.eps}} \caption{The evolution of $\frac{\Delta \mu}{\mu}$ for the logarithmic dark energy potential with $\beta_l = 3$ and $w_0 = -0.98, -0.96, -0.94, -0.92$ and $-0.90$. The coupling constant $\zeta_{\mu}$ is set to $\pm 10^{-6}$ as an example.} \label{fig-lgdmu} \end{figure} Figure~\ref{fig-expdmu} shows the evolution of $\mu$ for the exponential potential. As described in section~\ref{ss-sfe}, $w_0$ and $\beta_e$ are not independent variables in the exponential case. In fig.~\ref{fig-expdmu} the five values of $w_0$ are retained as in fig.~\ref{fig-exphi} with the appropriate values of $\beta_e$ for each case. 
The values of $\beta_e$ are shown in fig.~\ref{fig-expdmu}. \begin{figure} \scalebox{.6}{\includegraphics{figure18.eps}} \caption{The evolution of $\frac{\Delta \mu}{\mu}$ for the exponential dark energy potential for the $\beta_e$ values set by $(w_0+1) = 0.02, 0.04, 0.06, 0.08$ and 0.1. The coupling constant $\zeta_{\mu}$ is set to $\pm 10^{-6}$.} \label{fig-expdmu} \end{figure} \subsubsection{Observational Constraints on $\frac{\Delta \mu}{\mu}$} \label{sss-muob} As discussed in paper I, the primary constraint on a variation of $\mu$ is $|\Delta \mu / \mu| \le 10^{-7}$ from \citet{bag13} and \citet{kan15} at a redshift of 0.88582. This measurement defines an allowed and a forbidden parameter space in the $\zeta_{\mu}$ - $w_0$ plane. The first parameter, $\zeta_{\mu}$, defines the limits on the allowed deviation from the standard model, $\zeta_{\mu} = 0$, and the second, $w_0$, the allowed deviation from the cosmological constant, $(w_0+1)=0$. The upper limit on $\zeta_{\mu}$ is given by \begin{equation} \label{eq-uz} \zeta_{\mu} = \frac{\Delta \mu / \mu}{\int_1^{a_{ob}}\beta(a')d\ln(a')}=\frac{\Delta \mu / \mu} {\sqrt{3 \Omega_0(w_0+1)}\ln(a_{ob})} \end{equation} where $a_{ob}$ is the scale factor at the epoch of the observation. The second equality shows explicitly the dependence on $w_0$. Figure~\ref{fig-lfc} shows the allowed and forbidden parameter space for the logarithmic dark energy potential \begin{figure} \scalebox{.6}{\includegraphics{figure19.eps}} \caption{The allowed and forbidden parameter spaces in the $\zeta_{\mu}$ - $w_0$ plane for the logarithmic dark energy potential.} \label{fig-lfc} \end{figure} and fig.~\ref{fig-efc} the parameter spaces for the exponential potential.
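Equation~(\ref{eq-uz}) can be turned into a quick numerical sketch of how the measurement carves up the $\zeta_{\mu}$ - $w_0$ plane. The snippet below assumes $\Omega_0 = 0.7$ and uses the scale factor of the $z=0.88582$ measurement; the limit loosens as $w_0 \to -1$ (diverging at exactly $w_0 = -1$) and tightens as $(w_0+1)$ grows.

```python
import math

def zeta_mu_limit(dmu_over_mu, w0, a_obs, Omega0=0.7):
    """Upper limit on |zeta_mu| from eq. (eq-uz); Omega0 = 0.7 is an assumed value."""
    return abs(dmu_over_mu / (math.sqrt(3 * Omega0 * (w0 + 1)) * math.log(a_obs)))

a_obs = 1.0 / (1.0 + 0.88582)   # scale factor of the mu measurement epoch
bound = 1e-7                    # |Delta mu / mu| observational bound
limits = [zeta_mu_limit(bound, -1 + x, a_obs) for x in (0.001, 0.02, 0.1)]
```

The three values trace the boundary curve in figs.~\ref{fig-lfc} and \ref{fig-efc}: the allowed $|\zeta_{\mu}|$ shrinks monotonically as $(w_0+1)$ moves away from zero.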
\begin{figure} \scalebox{.6}{\includegraphics{figure20.eps}} \caption{The allowed and forbidden parameter spaces in the $\zeta_{\mu}$ - $w_0$ plane for the exponential dark energy potential.} \label{fig-efc} \end{figure} The plots start at $(w_0+1)=0.001$ to avoid the plus and minus infinite values of $\zeta_{\mu}$ at $(w_0+1)=0$. The allowed parameter space contains the $\Lambda$CDM cosmology, which is the $(0,0)$ point in the plots. Although constrained to either small values of $\zeta_{\mu}$ or $(w_0+1)$, there is still room in the allowed parameter space to accommodate quintessence. \section{Relevant but not Directly Observable Parameters} \label{s-rp} There are several cosmological parameters that are relevant but not directly observable. Here two parameters, the time derivative of the scalar field and the dark energy pressure, are calculated as functions of the scale factor $a$. \subsection{The Evolution of the Time Derivative of the Scalar} \label{ss-phidot} As shown in paper I, the time derivative of the scalar, $\dot{\phi}$, is simply the Hubble parameter times the beta function: \begin{equation} \label{eq-pdot} k\dot{\phi}= a\frac{k d\phi}{da}\frac{\dot{a}}{a}=\beta H \end{equation} Figure~\ref{fig-lpdot} shows the evolution of $\dot{\phi}$ with respect to the scale factor $a$ for the logarithmic dark energy potential for the five values of $w_0$ with $\beta_l$ held constant at three. The differences in the tracks are entirely due to the differences in the beta function, since the values of $H(a)$ are essentially invariant with respect to the input parameters, as shown in section~\ref{ss-eh} and paper I. The values of $\dot{\phi}$ are negative because the logarithmic beta function is negative.
\begin{figure} \scalebox{.6}{\includegraphics{figure21.eps}} \caption{The time derivative of the scalar for the logarithmic potential for the five values of $w_0$ with $\beta_l$ held constant at three.} \label{fig-lpdot} \end{figure} Figure~\ref{fig-epdot} shows the tracks of $\dot{\phi}$ as a function of $a$ for the exponential dark energy potential for the five values of $w_0$ and the $\beta_e$ values associated with them. \begin{figure} \scalebox{.6}{\includegraphics{figure22.eps}} \caption{The time derivative of the scalar for the exponential potential for the five values of $w_0$ and their associated values of $\beta_e$.} \label{fig-epdot} \end{figure} Both the logarithmic and the exponential potentials have $\dot{\phi}$ values approaching zero at the present time. \subsection{The Evolution of the Dark Energy Pressure} \label{ss-dep} The dark energy pressure comes from the second of the Einstein equations, $-2\dot{H}= \rho_m +\rho_{\phi}+p_{\phi}$, where $\dot{H} =-\frac{1}{2} \dot{\phi}W_{,\phi}$. From eq.~\ref{eq-difw}, \begin{equation} \label{eq-dwp} W_{,\phi}=-\frac{2 \rho_m}{\beta W}-\frac{1}{2}\beta W \end{equation} which, using $k\dot{\phi}=-\frac{\beta W}{2}$, yields \begin{equation} \label{eq-pp} p_{\phi}=-2\rho_m-\frac{k^2}{2}\dot{\phi}^2+3H^2 \end{equation} Figure~\ref{fig-lgdep} shows the evolution of the dark energy pressure for the logarithmic dark energy potential for the five values of $w_0$ with $\beta_l=3$. Since the pressure is negative, its negative rather than its logarithm is plotted. \begin{figure} \scalebox{.6}{\includegraphics{figure23.eps}} \caption{The dark energy pressure for the five values of $w_0$ with a logarithmic potential with $\beta_l=3$.} \label{fig-lgdep} \end{figure} As expected from the dark energy density plots, the $p_{\phi}$ tracks cross over each other. Figure~\ref{fig-expdep} shows $p_{\phi}$ for the exponential potential for the five $\beta_e$, $w_0$ pairs.
\begin{figure} \scalebox{.6}{\includegraphics{figure24.eps}} \caption{The dark energy pressure for the five $\beta_e$, $w_0$ pairs with an exponential potential.} \label{fig-expdep} \end{figure} \section{Relevance of the Analysis} \label{s-sum} This paper completes the investigation started in paper I of four common dark energy potentials in a quintessence cosmology. Here the relevance of the findings to important cosmological and new physics questions is examined. The literature on determining cosmological parameters based on observations is vast, and it is not the purpose of this section to determine the veracity of the various studies. Instead, the following points out which parameters calculated in this study and paper I are relevant to the important questions and how they may differ from the current body of work. \subsection{Dynamical versus Static Dark Energy} \label{ss-ds} What observations can discriminate between a dynamic dark energy quintessence cosmology and a static dark energy $\Lambda$CDM universe? An important finding is that, due to the flatness of the quintessence potentials in the dark energy dominated eras, both cosmologies predict essentially identical evolution of the Hubble parameter $H(a)$. $H(a)$ measurements, therefore, cannot effectively discriminate between the two cases. Measurements that differed from the predicted evolution would, however, rule out both cosmologies. Measurements of the dark energy equation of state $w(a)$ and the values of the fundamental constants $\mu$ and $\alpha$ can discriminate between dynamical and static dark energy. A confirmed observation of $w(a) \neq -1$ or a change in the value of a fundamental constant would rule out $\Lambda$CDM but would be consistent with quintessence or other dynamical dark energy cosmologies. Tests for a value of $w(a) \neq -1$, e.g.\ \citet{avs17}, are often conducted using the Chevallier-Polarski-Linder (CPL) linear model \citep{che01, lin03}.
\begin{equation} \label{eq-cpl} w(a) = w_0+w_a(1-a) \end{equation} Examination of figs.~\ref{fig-lgwpo} and~\ref{fig-exwpo} indicates that the model is a reasonable fit at low redshifts but a poor fit at high redshifts where $w(a)$ is evolving rapidly in the quintessence cosmology. The tracks in these figures provide more realistic templates to compare with observations than the CPL linear model. The shapes of the $w(a)$ tracks suggest a possible reason why many observational studies seem to favor phantom, $w<-1$, values of $w$, e.g. \citep{chen17}. Figure~\ref{fig-cpl} shows CPL fits to the $w_0=-0.94$ $w(a)$ evolution for an exponential potential over three scale factor ranges: the full range between 0.1 and 1.0 ($z=9-0$), the range between 0.2 and 1.0 ($z=4-0$) and the range between 0.5 and 1.0 ($z=1-0$). \FloatBarrier \begin{figure} \scalebox{.6}{\includegraphics{figure25.eps}} \caption{Three CPL fits, dashed lines, to the evolution of $w(a)$ for an exponential potential with $w_0= -0.94$. The fits to the full range and the range between $a=0.2$ and 1.0 produce false phantom crossings.} \label{fig-cpl} \end{figure} As expected, the fit between $z=1$ and 0 is a good match, but the two fits that include the higher redshift evolution produce phantom values for $w_0$ in eqn.~\ref{eq-cpl} even though the true evolution has no phantom values. It is also evident that observations at redshifts greater than one provide more leverage on constraining deviations of $w(a)$ from minus one than observations between redshifts one and zero. Measurements of the values of $\mu$ and $\alpha$ provide more precise constraints on dynamical dark energy. Figures~\ref{fig-lgdmu} and~\ref{fig-expdmu} show the expected evolutionary tracks for $\mu$ with a coupling constant $\zeta_{\mu} = \pm 10^{-6}$ and the five different values of $w_0$. A single measurement, under the quintessence assumption of homogeneous dark energy, determines the allowed parameter space for dynamical dark energy.
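The false phantom crossing is easy to reproduce with a toy freezing track (an assumed illustrative form, not the paper's numerical solution): $w(a) = -1 + (w_0+1)\,a^{-3/2}$ with $w_0 = -0.94$ never crosses $-1$, yet an ordinary least squares fit of the CPL form of eqn.~\ref{eq-cpl} over $0.2 \le a \le 1$ returns a phantom present-day value.

```python
def cpl_fit(a_vals, w_vals):
    """Least squares fit of w(a) = w0 + wa*(1 - a); returns (w0_fit, wa_fit)."""
    x = [1.0 - a for a in a_vals]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(w_vals) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, w_vals))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    wa = sxy / sxx
    return ybar - wa * xbar, wa     # w0_fit is the CPL value at a = 1

# Toy freezing track (assumption for illustration): stays above -1 everywhere.
w0_true = -0.94
a_vals = [0.2 + 0.1 * i for i in range(9)]              # a = 0.2, 0.3, ..., 1.0
w_vals = [-1 + (w0_true + 1) * a ** -1.5 for a in a_vals]
w0_fit, wa_fit = cpl_fit(a_vals, w_vals)                # comes out below -1
```

The fitted $w_0$ lands near $-1.04$ even though the underlying track never reaches $-1$: a straight line in $(1-a)$ undershoots a convex track at $a=1$, which is the same false phantom crossing seen in fig.~\ref{fig-cpl}.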
Figures~\ref{fig-lfc} and~\ref{fig-efc} show the allowed parameter space in the $(w_0+1)$, $\zeta_{\mu}$ plane based on the observational constraint discussed in sec.~\ref{sss-muob}. Any point other than $(0,0)$ in the plane requires dynamical dark energy, new physics or both. \subsection{The Dark Energy Potential} \label{ss-dpot} As with the question of dynamical versus static dark energy, the Hubble parameter yields essentially no information on the functional form of the dark energy potential. Although not explicitly depicted here, the tracks of the cosmological parameters, such as $w(a)$, for the logarithmic potential have the same insensitivity to the value of $\beta_l$ as shown for the power and inverse power law potentials in paper I. The $w(a)$ tracks for the exponential potential, however, are sensitive to $\beta_e$ since the values of $\beta_e$ and $w_0$ are coupled by eqn.~\ref{eq-bew}. An accurate observational measurement of $w(a)$ at a particular scale factor or for a range of scale factors does not uniquely determine the dark energy potential. Examination of figs.~\ref{fig-lgwpo} and~\ref{fig-exwpo} shows that for a given coordinate in the $w(a)$, $a$ plane either a logarithmic or exponential potential can match the coordinate by altering the value of $w_0$. The tracks in the two figures are for specific values of $w_0$, but all of the area between the minimal and maximal tracks is covered by the range of $w_0$ between -0.9 and -0.98. All of the area below the minimal -0.98 track can be covered by making $w_0$ arbitrarily close to -1.0, and the area above the maximal -0.9 track can be covered by making $w_0$ even further from -1. The tracks in both figures have very similar shapes, making it difficult to discriminate between the potentials even with good knowledge of $w(a)$ over a large range of scale factors. However, if there is an accurate measurement of $w_0$ along with $w(a)$ at other scale factors there is some leverage in determining the potential.
Of course any determination of $w$ other than minus one at any epoch would be a significant finding. \subsection{The Rate of Change of Fundamental Constants} \label{ss-crcfc} Laboratory constraints on the rate of change of fundamental constants are another check on the possibility of dynamical dark energy. Figures~\ref{fig-lgdmu} and~\ref{fig-expdmu} indicate that the rate of change of $\mu$ and $\alpha$ in a quintessence freezing cosmology is slowing down in the current epoch. Figures~\ref{fig-lgdmr} and~\ref{fig-exdmr} show the rate of change, $\dot{\mu}/\mu$ per year, for the logarithmic and exponential dark energy potentials with a coupling constant of $\zeta_{\mu}= 10^{-6}$. \begin{figure} \scalebox{.6}{\includegraphics{figure26.eps}} \caption{The rate of change, $\dot{\mu}/\mu$ per year for the logarithmic potential with a coupling constant of $\zeta_{\mu} = 10^{-6}$.} \label{fig-lgdmr} \end{figure} \begin{figure} \scalebox{.6}{\includegraphics{figure27.eps}} \caption{The rate of change, $\dot{\mu}/\mu$ per year for the exponential potential with a coupling constant of $\zeta_{\mu} = 10^{-6}$.} \label{fig-exdmr} \end{figure} The proton to electron mass ratio is used in the example, but the fine structure constant $\alpha$ has exactly the same track if its coupling constant is also $\zeta_{\alpha} = 10^{-6}$.
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{4}{c}{$\dot{\mu}/\mu$ in $10^{-17} m_{pl}$ per year} \\ \hline & \multicolumn{2}{|c|}{Logarithmic} & \multicolumn{2}{|c|}{Exponential}\\ \hline $w_0$&a=0.1&a=1.0&a=0.1&a=1.0\\ \hline -0.98 & -23.4 & -1.47 & 25.6 & 1.47 \\ -0.96 & -31.1 & -2.07 & 36.5 & 2.07 \\ -0.94 & -36.2 & -2.54 & 45.0 & 2.54 \\ -0.92 & -40.0 & -2.93 & 52.4 & 2.93 \\ -0.90 & -42.9 & -3.28 & 59.0 & 3.28 \\ \hline \end{tabular} \end{center} \caption{$\dot{\mu}/\mu$ per year for the logarithmic and exponential dark energy potentials at scale factors of 0.1 and 1.0 for the five values of the current dark energy equation of state $w_0$ and a coupling constant of $\pm 10^{-6}$.} \label{tab-rfc} \end{table} It is clear from figs.~\ref{fig-lgdmr} and~\ref{fig-exdmr} that in a quintessence freezing cosmology the current rate of change of the fundamental constants is significantly less than the rate at high redshift. Table~\ref{tab-rfc} shows the rate of change in units of $10^{-17} m_{pl}$ per year for $\mu$ at scale factors of 0.1 and 1.0 for the logarithmic and exponential potentials for the five values of $w_0$ and a coupling constant of $+10^{-6}$. The signs between the two potentials are opposite and would be reversed for a negative coupling constant. The current rates of change are essentially the same between the two potentials but diverge at a scale factor of 0.1. The average current rate of change is roughly 18 times less than the rate of change at a scale factor of 0.1. Current laboratory bounds~\citep{god14} are $\dot{\mu}/\mu = (0.2 \pm 1.1) \times 10^{-16}$ year$^{-1}$ and $\dot{\alpha}/\alpha =(-0.7 \pm 2.1) \times 10^{-17}$ year$^{-1}$. Matching the cosmological observational bounds on $\Delta \mu / \mu$ discussed in sec.~\ref{sss-muob} with a coupling constant of $\pm 10^{-6}$ requires $(w_0+1) \leq 0.02$, which is the first row in table~\ref{tab-rfc}. This sets a limit a factor of ten below the laboratory limit.
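The entries of Table~\ref{tab-rfc} can be checked against a minimal model (a sketch under assumptions: $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_0 = 0.7$, and $\beta$ held at the constant $\sqrt{3\Omega_0(w_0+1)}$ of eq.~\ref{eq-uz}, neither value quoted in this section). Differentiating eq.~\ref{eq-dx} gives $\dot{\mu}/\mu = \zeta_{\mu}\,\beta(a)\,H(a)$, so the factor of roughly 18 between $a=0.1$ and $a=1.0$ is essentially the ratio $H(0.1)/H(1.0) \approx 17$.

```python
import math

# Assumed background values (not quoted in the paper): H0 = 70 km/s/Mpc, Omega0 = 0.7.
KM_PER_MPC = 3.0857e19
SEC_PER_YR = 3.1557e7
H0_PER_YR = 70.0 * SEC_PER_YR / KM_PER_MPC           # H0 in yr^-1, ~7.2e-11

Omega0, zeta = 0.7, 1e-6

def mu_rate(w0, a):
    """mu_dot/mu = zeta * beta(a) * H(a), with beta treated as the constant
    sqrt(3*Omega0*(w0+1)) of eq. (eq-uz) and H(a) for matter + frozen dark energy."""
    beta0 = math.sqrt(3 * Omega0 * (w0 + 1))
    H = H0_PER_YR * math.sqrt((1 - Omega0) * a ** -3 + Omega0)
    return zeta * beta0 * H

today = mu_rate(-0.98, 1.0)      # compare with the 1.47e-17 / yr entry in the table
early = mu_rate(-0.98, 0.1)     # compare with the 25.6e-17 / yr entry
```

With these assumptions the $a=1.0$ entries and the exponential-potential $a=0.1$ entry are reproduced to a few percent; the logarithmic entries carry the opposite sign because, as noted above, the logarithmic beta function is negative.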
Unlike the laboratory limits, the cosmological limit on $\Delta \mu / \mu$ is more stringent than the limit on $\Delta \alpha / \alpha$. \subsection{Checking on the Swampland} \label{ss-cs} String theory postulates a vast landscape of vacua that is surrounded by an even more vast landscape, termed the swampland, of consistent looking scalar field theories that are inconsistent with a quantum field theory of gravity \citep{vaf05,agr18}. Put another way, the swampland is the landscape of valid scalar field theories that are incompatible with quantum gravity \citep{hei18}. Given the current interest in the swampland it is worthwhile to determine whether quintessence with the potentials considered here and in paper I dwells in the swampland. The boundaries of the swampland are usually defined by two conjectures. The first conjecture is that the change in the scalar should be $\Delta \phi \lesssim O(1)$ and the second is that $\Delta V/V \gtrsim O(1)$. If either of these conjectures is violated then the cosmology is in the swampland. It is not entirely clear how restrictive ``of order 1'' is or exactly what range of scale factors $\Delta \phi$ and $\Delta V$ encompass. It is obvious that $\Lambda$CDM is in the swampland since $\Delta V = 0$. The quintessence models considered here and in paper I certainly live near the swampland, with perhaps one foot in the swamp and one foot dry depending on how ``of order 1'' is interpreted. The swampland parameters for the potentials in this paper and paper I are shown in Table~\ref{tab-swamp}. Both $\Delta$ values are for the scale factor range between 0.1 and 1.0. The potential $V$ in $\Delta V/V$ is the current day potential. The X in the inverse power law parameters for $w_0=-0.90$ indicates that this is not a valid solution, as shown in paper I.
\begin{table} \begin{center} \begin{tabular}{ccccccccc} \hline & \multicolumn{8}{c}{Swampland Parameters} \\ \hline & \multicolumn{2}{|c|}{Log} & \multicolumn{2}{|c|}{Exp}& \multicolumn{2}{|c|}{Pow}& \multicolumn{2}{|c|}{Inv Pow}\\ \hline $w_0$&$\Delta \phi$&$\frac{\Delta V}{V}$&$\Delta \phi$&$\frac{\Delta V}{V}$&$\Delta \phi$&$\frac{\Delta V}{V}$&$\Delta \phi$&$\frac{\Delta V}{V}$\\ \hline -0.98 \vline & 0.45 & 0.09 \vline & -0.47 & 0.10 \vline &0.55 &0.10 \vline &-0.58 &0.11 \\ -0.96 \vline & 0.62 & 0.18 \vline & -0.67 & 0.21 \vline &0.76 &0.20 \vline &-0.84 &0.23 \\ -0.94 \vline & 0.73 & 0.26 \vline & -0.82 & 0.34 \vline &0.92 &0.30 \vline &-1.06 &0.38 \\ -0.92 \vline & 0.82 & 0.34 \vline & -0.94 & 0.47 \vline &1.04 &0.41 \vline &-1.26 &0.56 \\ -0.90 \vline & 0.90 & 0.42 \vline & -1.06 & 0.62 \vline &1.14 &0.52 \vline &X &X \\ \hline \end{tabular} \end{center} \caption{The two swampland parameters for the potentials in this paper and paper I. The units of $\Delta \phi$ are Planck masses. The $\beta_{l,p,i}$ values are 3 for all potentials except for the exponential potential which uses the $\beta_e$ value appropriate to the $w_0$ value.} \label{tab-swamp} \end{table} All of the exponential and logarithmic potential cases considered here satisfy the condition on $\Delta \phi$ under the assumption that -1.06 is of order 1. The power and inverse power law $\Delta \phi$ entries for the three values of $w_0$ closest to minus one, the most likely values, also satisfy the $\Delta \phi$ conjecture, the dry foot. None of the $\Delta V/V$ entries strictly satisfy the associated conjecture, the wet foot. Very recent work by \citet{kin18} suggests that this is a feature common to most single scalar field cosmologies. 
Since the potentials $V(\phi)$ are functions of the scalar $\phi$, larger values of $\Delta V$ require larger changes in $\phi$, which, as table~\ref{tab-swamp} shows, require larger deviations of $w$ from minus one and drive the $\Delta \phi$ values higher, possibly violating the $\Delta \phi$ conjecture. \citet{obi18} have also suggested a criterion that $|\phi|<1$ in Planck units, which is not satisfied by the scalars in this work. It is not the purpose of this discussion to determine whether having one foot in the swamp is a good or bad thing but rather to simply show where quintessence with the potentials examined here lies with respect to the swampland boundaries. \section{Conclusions} \label{s-con} This and paper I show that the beta function formalism provides an effective way to calculate accurate solutions for cosmological parameters as a function of the scale factor $a$. For the most part the solutions are analytic, utilizing known mathematical functions. The superpotential for the logarithmic dark energy potential, however, required an easily calculated numerical integral. The two papers also demonstrate the application of the beta function formalism and can act as a guide to the extension of the formalism to other potentials and cosmologies.
\section{Our Adaptive Algorithm} In this section we give an adaptive algorithm for both weighted and unweighted graphs that in $1/\epsilon$ rounds finds a $1-\epsilon$ approximation. The algorithm --- formalized as Algorithm~\ref{alg:adaptive} --- in each round of adaptivity takes $R^\star$ independent random realizations of the graph and queries the edges in the maximum weighted matching of each of these realizations simultaneously. Each round of the algorithm is similar to our non-adaptive algorithm (Algorithm~\ref{alg:nonadaptive}). The only difference is that after each round, we remove all of the unrealized edges from $E$, and we enforce the edges that we already know are realized to appear in all random realizations that we take. \begin{algorithm} \caption{An adaptive algorithm for the weighted stochastic matching problem.} \label{alg:adaptive} \begin{algorithmic}[1] \Statex \textbf{Input:} Input graph $G=(V, E)$, edge weights $w: E \to \mathbb{R}_+$ and realization probability $p \in [0, 1]$. \Statex \textbf{Parameters:} $R=1/\epsilon$ and $R^\star = O(\frac{\log(1/\epsilon p)}{\epsilon^3 p})$. \State $P_1 \gets \emptyset$ \Comment{$P_i$ denotes the set of edges that we know are realized at the start of round $i$.} \State $E^\star \gets E$ \For{$r= 1, \ldots, R$}\Comment{Rounds of adaptivity.} \State $P_{r+1} \gets P_r$ \State $S_r \gets \emptyset$ \For{$r^\star = 1, \ldots, R^\star$}\Comment{Process of constructing our sample for this round.} \State Construct a realization $\mathcal{G}_{r^\star}=(V, \mathcal{E}_{r^\star})$ of $G$, where any edge $e \in (E^\star \setminus P_r)$ appears in $\mathcal{E}_{r^\star}$ independently with probability $p$, and each edge in $P_r$ appears with probability $1$. \State Add the edges in the maximum weighted matching $\matching{\mathcal{E}_{r^\star}}$ of $\mathcal{G}_{r^\star}$ to $S_r$. \EndFor \State Query the edges in $S_r$. \State Remove unrealized edges from $E^\star$, and add realized edges to $P_{r+1}$.
\EndFor \State Return the maximum weighted matching of $P_{R+1}$. \end{algorithmic} \end{algorithm} We need a few definitions before the analysis. At the beginning of round $r$, for each edge $e \in E$, we define $q^r_e$ to be the probability that edge $e$ appears in the omniscient optimum matching given the realizations by round $r-1$. Let $C_r := \{e \, | \, q^r_e \geq \tau \}$ and $N_r := \{ e \, | \, q^r_e < \tau \}$ respectively denote the crucial and non-crucial edges at the beginning of round $r$, where $\tau$ is asymptotically the same threshold used in the previous section (we soon mention a slight change in the constant factor of this threshold). Moreover, for any $r$ denote $\qw^r_e := q^r_e \cdot w_e$. We need one more definition to continue. Define $$Z_i = \expm \Big( E \, |\, \text{given the outcome of our queries by round $i-1$} \Big).$$ Note that with this definition we have $Z_1 = \opt$ and $\qw^r(E) = Z_r$ for any $r$. In the following claim, we show that if $\qw^r(C_r \setminus S_r)$ is at most $\sfrac{\epsilon}{2} \cdot \opt$, then the expected weight of the maximum realized matching found by the end of round $r$ is at least $(1-\epsilon) Z_r$. \begin{lemma} $\E_{r}[\qw^{r+1}(P_{r+1})] \geq (1-\epsilon)\qw^{r}(P_{r}) + \epsilon \cdot \qw^r(E)$. \end{lemma} \begin{proof} We assume that we are at the beginning of round $r$ and argue that $\E_{r}[\qw^{r+1}(P_{r+1})]$ is large. Let us consider the following inequality, which is equivalent to our desired bound: $$\E_r[\qw^{r+1}(P_{r+1})] - \qw^{r}(P_{r}) \geq \epsilon\big(\qw^r(E) - \qw^{r}(P_{r})\big).$$ Note that $P_{r+1}$, the set of edges realized by the end of round $r$, is a random variable whose value is determined at the end of our current round $r$. We consider two scenarios. First, assume that the expected matching of un-queried crucial edges is {\em large}. More precisely, suppose $\qw^r(C_r \setminus P_r) \geq \sfrac{\epsilon}{2} \cdot \qw^r(E)$.
In this case, as shown by Lemma~\ref{lem:sampleallcrucial}, we expect our algorithm to query a subset $A_r \subseteq C_r \setminus P_r$ of these crucial edges with $\E_r[\qw^r(A_r)] \geq (1-\epsilon)\qw^r(C_r \setminus P_r)$. This means that we have $$\E_r[\qw^r(P_r \cup A_r)] = \qw^r(P_r) + \E_r[\qw^r(A_r)] \geq \qw^r(P_r) + \frac{(1-\epsilon)\epsilon}{2} \cdot \qw^r(E).$$ Furthermore, one can easily verify that $\E_r[\qw^{r+1}(P_{r+1})] \geq \E_r[\qw^r(P_r \cup A_r)]$. The reason is that every realized edge in $P_r \cup A_r$ also appears in $P_{r+1}$. Combining these bounds gives $$\E_r[\qw^{r+1}(P_{r+1})] - \qw^{r}(P_r) \geq \E_r[\qw^r(A_r)] \geq (1-\epsilon)\qw^r(C_r \setminus P_r) \geq \frac{(1-\epsilon)\epsilon}{2} \cdot \qw^r(E)$$ as desired. \end{proof} \begin{claim} Suppose that in a round $r$ of the algorithm, the expected matching of unrealized crucial edges is at most $\sfrac{\epsilon}{2} \cdot \opt$, i.e., $\qw^r(C_r \setminus S_r) \le \sfrac{\epsilon}{2} \cdot \opt$. Then the expected weight of the matching returned by the algorithm during this round is at least $(1-10\epsilon) \opt$. \end{claim} \begin{proof} Let $N'$ (resp.\ $C'$) be the set of non-crucial (resp.\ crucial) edges which are not known to be realized yet, i.e., which are not in $S_r$. Then, we have $\opt \le \qw(N')+ \qw(S_r) + \qw(C')$. We run Procedure \ref{proc:non-crucial} on the realized edges of $N'$ in $Q_r$. Let $x^{N'}$ be the fractional matching obtained by the procedure. By Lemma~\ref{lem:non-cruciallemma}, we have $$ \E\bigg[\sum_{e \in S_{r+1} \cap N'} x^{N'}_e \cdot w_e \bigg]\geq (1-9\epsilon) \qw(N') \,. $$ However, we can assume at the cost of a constant factor more per-vertex queries for each round (more precisely, by changing the threshold for crucial/non-crucial edges) that: $$ \E\bigg[\sum_{e \in S_{r+1} \cap N'} x^{N'}_e \cdot w_e \bigg]\geq (1-\epsilon/2) \qw(N') \,. $$ For each edge $e$ in $S_r$, we add $e$ to our fractional matching and set $x_e = (1-\epsilon/2) q(e)$. For every edge $e$ in $N'$, we set $x_e = x^{N'}_e$.
Then we have $$ \E\bigg[\sum_{e \in S_r \cup ( S_{r+1} \cap N')} x_e \cdot w_e \bigg]\geq (1-\epsilon/2) \qw(S_r)+ (1-\epsilon/2) \qw(N') = (1-\epsilon/2)\big( \qw(S_r) + \qw(N') \big)\,. $$ Since $\qw(C') \le \sfrac{\epsilon}{2} \cdot \opt$, we have $$ \E\bigg[\sum_{e \in S_r \cup ( S_{r+1} \cap N')} x_e \cdot w_e \bigg]\geq (1-\epsilon/2)\big( \qw(S_r) + \qw(N') \big) \geq (1-\epsilon/2)\big( \opt - \qw(C') \big) \geq (1-\epsilon/2)^2 \opt \ge (1-\epsilon) \opt \,. $$ \end{proof} \begin{theorem}\label{thm:adaptive} For any graph $G$, the expected weight of the matching returned by Algorithm \ref{alg:adaptive} is at least $(1-\epsilon) \opt$. \end{theorem} \section{Appendix: Omitted Proofs}\label{sec:missingproofs} \subsection{Proof of the Non-crucial Edges Lemma} In this section, we provide the complete proof for Lemma~\ref{lem:non-cruciallemma}. \begin{proof}[Proof of Lemma~\ref{lem:non-cruciallemma}] Proofs of the first and the second properties were already given in Section~\ref{sec:nonadaptive}. Here we prove the third property. We first start with the following claim. \begin{claim}\label{claim:flargewhp} By the end of Algorithm~\ref{alg:nonadaptive}, we have $\E \Big[\sum_{e \in S \cap N} \min\{f_e, 2 \tau \} \cdot w_e \Big] \geq (1-\epsilon)\qw(N)$. \end{claim} \begin{proof} We can think of the values of $f_e$ in the following way: For any edge $e$, $f_e$ is initially 0; then after each round $r$ of Algorithm~\ref{alg:nonadaptive}, we pick a matching $M(\mathcal{E}_r)$ and for any edge in this matching we update $f_e$ to be $f_e + 1/R$. Clearly, by the end of the algorithm, the value of $f_e$ will be equal to the fraction of the matchings picked by the algorithm that contain $e$, which is precisely the definition of $f_e$. To argue that $\sum_{e\in S \cap N} f_e \cdot w_e$ is large, it suffices to show that the average weight of the matchings that are picked by Algorithm \ref{alg:nonadaptive} is close to $\expmatching{E}$.
Let $M_1, M_2, \cdots, M_R$ be the random variables denoting the weights of the non-crucial edges in the matchings picked in each round of Algorithm~\ref{alg:nonadaptive}. For each $M_i$, we have $$\E[M_i] = \sum_{e \in S \cap N} q_e \cdot w_e = \qw(N)\,.$$ Further, let $\bar M := (M_1 + M_2 + \cdots + M_R)/R$. One can easily confirm via linearity of expectation that $\E[\bar M] = \qw(N)$. \begin{comment} By Hoeffding's inequality we have, $$P \Big[ \E[\bar M] - \bar M \ge \epsilon \Big ] = P \Big[ \qw(N) - \bar M \ge \epsilon \Big ] \le \exp \big ( -2 R \epsilon^2) \,. $$ Since $R \ge 1/\epsilon^3$, we have $$P \Big[ \E[\bar M] - \bar M \ge \epsilon \Big ] \le \exp \big ( -2 R \epsilon^2) \le \exp(-2/\epsilon) \le \epsilon/2 \,. $$ Note that $\bar M = \sum_{e\in S \cap N} f_e \cdot w_e$. Hence, $$ P \Big[ \qw(N) - \sum_{e\in S \cap N} f_e \cdot w_e \ge \epsilon \Big ] \le \epsilon/2 \,.$$ \end{comment} Note also that by definition of $f_e$, we have $\bar M = \sum_{e\in S \cap N} f_e \cdot w_e$. Hence, $$ \E \Big[\sum_{e \in S \cap N} f_e \cdot w_e \Big ] = \sum_{e \in S \cap N} \E [ f_e ] \cdot w_e = \qw(N) \,.$$ Next, we show that for every non-crucial edge $e$, the probability of $f_e$ exceeding $2 \tau$ is very small. The proof relies on the independence of the realizations taken by Algorithm~\ref{alg:nonadaptive}, the definition of $f_e$, and the fact that for all non-crucial edges $q_e < \tau$. We also assume that $\epsilon$ is small; in particular, $\epsilon \le e^{-1}$. \begin{claim}\label{cl:probltepsq} For any non-crucial edge $e$, $f_e$ exceeds $2 \tau$ with probability at most $\epsilon \cdot q_e$. \end{claim} \begin{proof} For an edge $e$, let $X_1, X_2, \cdots, X_R$ be random variables such that $X_i$ is $1$ if $e$ is picked in the maximum matching of round $i$ of Algorithm~\ref{alg:nonadaptive}, and is $0$ otherwise. Then we have $\E[X_i] = q_e$ for each $X_i$.
Recall that $f_e$ is the fraction of the matchings picked by the algorithm that contains $e$. Therefore, $f_e$ is the average of $X_1, X_2, \cdots, X_R$, i.e., $f_e = \frac{1}{R}(X_1+X_2+\cdots+X_R)$. Also, we have $\E[f_e] = q_e$. Let $X= X_1+X_2+\cdots+X_R$. It follows that $X= f_e \cdot R$, and we have \begin{align*} P [ f_e \ge 2 \tau ] &= P [ f_e - \tau \ge \tau ] \\ &\le P[ f_e - q_e \ge \tau] & \text{Since } e \text{ is non-crucial and } q_e < \tau . \\& = P\Big[ f_e - \E[f_e] \ge \tau \Big] \\& = P\Big[ R \cdot f_e - \E[R \cdot f_e] \ge R \cdot \tau \Big] \\& = P\Big[ X - \E[X] \ge R \cdot \tau \Big] & \text{Since $X= f_e \cdot R$.} \\ & \le \exp\Big(-\frac{R \cdot \tau \cdot \log\big(1+(R \cdot \tau)/\E[X]\big)}{2}\Big) & \text{By Chernoff bound\footnotemark.} \\ & \le \exp\Big(-\frac{R \cdot \tau \cdot \log(1+\frac{\tau}{q_e})}{2}\Big) & \text{$\E[X]= q_e \cdot R$.} \\ & \le \exp\Big(-50 \log(1/\epsilon p) \log(1+\frac{\tau}{q_e})\Big) &\text{Since } R \cdot \tau > 100 \log(1/\epsilon p). 
\\ & = \dfrac{1}{\exp\Big(50 \log(1/\epsilon p) \log(1+\frac{\tau}{q_e})\Big)} \\ & = \dfrac{1}{(1+\frac{\tau}{q_e})^{50 \log(1/\epsilon p)}} \\ & \le \dfrac{1}{(1+\frac{\tau}{q_e})(1+\frac{\tau}{q_e})^{49 \log(1/\epsilon p)}} & \text{Since $\epsilon \le e^{-1}$ and $\log(1/\epsilon) \ge 1$.} \\ & \le \dfrac{1}{(1+\frac{\tau}{q_e})\cdot 2^{49 \log(1/\epsilon p)}} & \text{Since $\tau > q_e$ and $1+\frac{\tau}{q_e}>2$.} \\ & = \dfrac{1}{(1+\frac{\tau}{q_e})\cdot \exp\big({49/\log(2) \cdot \log(1/\epsilon p)}\big)} \\ & \le \dfrac{1}{(1+\frac{\tau}{q_e})\cdot \exp\big(30\log(1/\epsilon p)\big)} \\ & \le \dfrac{1}{(1+\frac{\tau}{q_e})\cdot e^{10} \cdot \exp\big(20\log(1/\epsilon p)\big)} & \text{Since $\epsilon \le e^{-1}$ and $\log(1/\epsilon) \ge 1$.} \\ & \le \dfrac{1}{20 (1+\frac{\tau}{q_e}) \cdot \exp\big(5 \log(1/\epsilon p)\big)} \\ & \le \dfrac{1}{20 (1+\frac{\tau}{q_e}) \cdot \log(1/\epsilon p) \cdot \exp\big(4 \log(1/\epsilon p)\big)} & \text{ Since $e^{x} \ge x$ for all real numbers $x$.} \\ & = \dfrac{\epsilon^4 p^4}{20 (1+\frac{\tau}{q_e}) \cdot \log(1/\epsilon p)} \\ & \le \dfrac{\epsilon \tau} {(1+\frac{\tau}{q_e})} & \text{Since } \tau= \frac{\epsilon^3 p}{20 \log(1/\epsilon)}. \\ & = \dfrac{\epsilon \cdot \tau \cdot q_e} {\tau+q_e} \\ & \le \dfrac{\epsilon \cdot \tau \cdot q_e} {\tau} \\ & = \epsilon \cdot q_e \end{align*} \footnotetext{By Chernoff bound we have $P \Big[ X \ge (1+\delta) E[X]\Big] \le \exp(-\frac{\delta \log(1+\delta) E[X]}{2})$.} which proves the claim. \end{proof} By the claim above, we know that with probability at least $1-\epsilon q_e$, we have $f_e \le 2 \tau$. It follows that \begin{equation}\label{ieq:fealgo} \E \Big[ \min \{ f_e, 2 \tau \} \Big] \ge \Pr \Big[f_e \le 2 \tau \Big] \cdot \E \Big [f_e \, |\, f_e \le 2 \tau \Big ] \,. 
\end{equation} On the other hand, we have \begin{equation} \label{ieq:fesum} q_e = \E[f_e] = \Pr \Big[f_e \le 2 \tau \Big] \E \Big [f_e \, | \, f_e \le 2 \tau \Big ] + \Pr \Big[f_e > 2 \tau \Big] \E \Big [f_e \, | \, f_e > 2 \tau \Big ]\,. \end{equation} Combining (\ref{ieq:fealgo}) and (\ref{ieq:fesum}) gives \begin{align*} \E[f_e] - \E \Big[ \min \{ f_e, 2 \tau \} \Big] &\le \Pr \Big[f_e > 2 \tau \Big] \E \Big [f_e \, | \, f_e > 2 \tau \Big ] \\ &\le (\epsilon \cdot q_e) \E \Big [f_e \, | \, f_e > 2 \tau \Big ]\\ &\le (\epsilon \cdot q_e) & \text{Since } f_e \le 1.\\ &= \epsilon \E[f_e] \,. \end{align*} Therefore $\E \Big[ \min \{ f_e, 2 \tau \} \Big] \ge (1-\epsilon) \E[f_e]$, and we have \begin{align*} \E \Big[\sum_{e \in S \cap N} \min\{f_e, 2 \tau \} \cdot w_e \Big] & = \sum_{e \in S \cap N} \E \Big[\min\{f_e, 2 \tau \}\Big] \cdot w_e\\ & \ge \sum_{e \in S \cap N} (1-\epsilon)\E[f_e] \cdot w_e\\ & = (1-\epsilon) \sum_{e \in S \cap N} \E[f_e] \cdot w_e \\ &= (1-\epsilon)\qw(N), \end{align*} which is our desired bound. \end{proof} \begin{claim} \label{clm:non-crucial} By the end of step 1 of Procedure~\ref{proc:non-crucial}, we have $\E \big[ \sum_{e \in S_p \cap N} \tilde x^N_e \cdot w_e \big] \geq (1-\epsilon)\qw(N)$. \end{claim} \begin{proof} Note that for each edge $e \in S_p \cap N$, we assign $\min\{f_e/p, 2\tau /p \}$ to $\tilde x^N_e$ by the end of step 1. 
Thus, \begin{align*} \E \bigg[\sum_{e \in S_p \cap N} &w_e \cdot \min\{f_e/p, 2\tau /p\} \bigg]\\ &= \frac{1}{p} \cdot \E\bigg[\sum_{e \in S_p \cap N} w_e \cdot \min\{f_e, 2\tau\}\bigg]\\ &= \frac{1}{p} \cdot \E\bigg[\sum_{e \in S \cap N} w_e \cdot \min\{f_e, 2\tau\} \cdot \mathbbm{1}_{E_p}(e)\bigg] & \text{($\mathbbm{1}_{E_p}(e) = 1$ if $e \in E_p$ and 0 otherwise.)} \\ &= \frac{1}{p} \cdot \sum_{e \in S \cap N} \E\big[ w_e \cdot \min\{f_e, 2\tau\} \cdot \mathbbm{1}_{E_p}(e)\big] & \text{By linearity of expectation.}\\ &= \frac{1}{p} \cdot \sum_{e \in S \cap N} w_e \cdot \E\big[\min\{f_e, 2\tau\}\big] \cdot \E\big[\mathbbm{1}_{E_p}(e)\big] & \text{\parbox{7cm}{\vspace{0.3cm}Since the value of $f_e$ is independent of the realization of $e$.\vspace{0.3cm}}}\\ &= \frac{1}{p} \cdot \sum_{e \in S \cap N} w_e \cdot \E\big[\min\{f_e, 2\tau\}\big] \cdot p\\ &= \sum_{e \in S \cap N} w_e \cdot \E\big[\min\{f_e, 2\tau\}\big]. \end{align*} Recall by Claim~\ref{claim:flargewhp} that we have $\sum_{e \in S \cap N} w_e \cdot \E\big[\min\{f_e, 2\tau\}\big] \geq (1-\epsilon)\qw(N)$. Combining it with the equality above, we get $$\E \big[ \sum_{e \in S_p \cap N} \tilde x^N_e \cdot w_e \big] \geq (1-\epsilon)\qw(N),$$ which is the desired bound. \end{proof} Recall that each of the matchings picked by Algorithm \ref{alg:nonadaptive} has expected weight $\opt$. As we showed in the claim above, by the end of step $1$ of Procedure \ref{proc:non-crucial}, we have $$ \E \big[ \sum_{e \in S_p \cap N} \tilde x^N_e \cdot w_e \big] \geq (1-\epsilon)\qw(N) \,. $$ We claim that for every realized edge $e \in (S_p \cap N)$, the scaling factor $s_e$ of this edge is at least $(1-5\epsilon)$ with probability at least $(1-4\epsilon)$. Formally, our claim is as follows. \begin{comment} We first show how this completes the proof. Suppose that our claim holds for every edge in $S_p \cap N$. 
Let $x^N$ be the fractional matching at the end of Procedure \ref{proc:non-crucial}; then for every edge $e \in S_p$, we have $x^N_e =\tilde x^N_e \cdot s_e$. Suppose that $s_e$ is at least $(1-5\epsilon)$ with probability at least $(1-4\epsilon)$; then we have $$ \E \big[ \sum_{e \in S_p \cap N} x^N_e \cdot w_e \big] = \sum_{e \in S_p \cap N} \E[x^N_e] w_e \ge (1-9\epsilon) \sum_{e \in N} \E[f_e] w_e =(1-9\epsilon) \qw(N) \,. $$ Each edge with $f_e >0$ is realized with probability $p$. Therefore, with probability at least $1-p$, we have $x_e=0$. If $e$ is realized, $x_e$ is initially $f_e/p$, but at step $3$ of Procedure \ref{proc:non-crucial}, $x_e$ is multiplied by $s_e$. The value of $s_e$ depends on the realization of other edges. In the following observation we show that the probability that, in a realization, $s_e$ is less than $(1-\epsilon)$ is at most $\epsilon$. \end{comment} \begin{claim} \label{clm:non-crucialedgeschernoff} Let $v$ be one of the endpoints of a realized edge $e \in (S_p \cap N)$. Then with probability at least $1-2\epsilon$, we have $$\max\{q^N_v, \epsilon\}/\tilde x^N_v \ge 1-5\epsilon \,.$$ \end{claim} \begin{proof} Since edge $e$ is realized, $\tilde x^N_e$ is $\min\{f_e/p, 2 \tau/p \}$ at step $1$ of the procedure. Let $\tilde x^N_v =\sum_{e: e \in (S_p \cap N), v \in e} \tilde x^N_e$. \begin{comment} In the following claim we show that $\tilde x^N_v$ is very close to $\max\{q^N_v, \epsilon\}$. \begin{claim} \label{clm:fcloseq} With probability at least $1-\epsilon$, $$ \tilde x^N_v \le (1+\epsilon) \max\{q^N_v, \epsilon\} \,. $$ \end{claim} \end{comment} Without looking at the realization of other edges, let $e_1, e_2, \cdots, e_k$ be the non-crucial edges in $S \cap N$ incident to $v$ other than the edge $e$. For each edge $e_i$, let $X_i$ be a random variable which is $0$ if $e_i$ is not realized and otherwise is $\min\{f_{e_i}/p, 2\tau/p\}$. 
Then, for each edge $e_i$, we have $$ \E[X_i] = p \cdot \min\{f_{e_i}/p, 2\tau /p\} = \min\{f_{e_i}, 2 \tau \} \,. $$ Let $f^N_v = \sum_{e_i} f_{e_i}$. In the following claim we show that $f^N_v$ is a good approximation of $q^N_v$. Specifically, the claim is as follows. \begin{claim} \label{clm:fcloseq} With probability at least $1-\epsilon$, $$ \max\{f^N_v, \epsilon\} \le (1+\epsilon) \max\{q^N_v, \epsilon\} \,. $$ \end{claim} \begin{proof} At each round of the algorithm, each edge $e_i$ is sampled with probability $q_{e_i}$. Therefore, the probability that vertex $v$ is matched using one of the edges $e_1, e_2, \cdots, e_k$ is at most $\sum_{e_i} q_{e_i} \le q^N_v$. Recall that $f_{e_i}$ is the fraction of the matchings picked by Algorithm \ref{alg:nonadaptive} that contain $e_i$. Therefore, $f^N_v$ is the fraction of the matchings in which vertex $v$ is matched using one of the edges $e_1, e_2, \cdots, e_k$, and hence $\E[f^N_v] \le q^N_v$. By Hoeffding's inequality we have $$ \Pr \Big [ f^N_v - q^N_v \ge \epsilon^2 \Big ] \le \exp \big (-2 (R-1) \epsilon^4 \big ) \,. $$ The reason that we have $R-1$ instead of $R$ in the inequality above is that we already know that edge $e$ is realized in one round of the algorithm, and we are arguing about the other rounds. Therefore, $$ \Pr \Big [ f^N_v - q^N_v \ge \epsilon^2 \Big ] \le \exp \big (-2 (R-1) \epsilon^4 \big ) \le \epsilon \,. $$ Therefore, with probability at least $1-\epsilon$, we have $f^N_v - q^N_v \le \epsilon^2$. This implies that with probability at least $1-\epsilon$, we have $$ \max\{f^N_v, \epsilon\} - \max\{q^N_v, \epsilon\}\le \epsilon^2 \,. $$ Thus, with probability at least $1-\epsilon$, $$ \max\{f^N_v, \epsilon\} \le \max\{q^N_v, \epsilon\} + \epsilon^2 \le (1+\epsilon) \max\{q^N_v, \epsilon\}. $$ \end{proof} For each edge $e_i$, we have $\E[X_i] = \min\{f_{e_i}, 2 \tau \} \le f_{e_i}$. Therefore, $$ \sum_{e_i} \E[X_i] \le f^N_v \,. 
$$ At the end of step $1$ of Procedure \ref{proc:non-crucial}, $\tilde x^N_v$ is the sum of $\tilde x^N_e$ over the non-crucial edges in $S$ which are incident to $v$. It follows that $$ \E[\tilde x^N_v] = \sum_{e_i} \E[X_i] + \tilde x^N_e \le \sum_{e_i} \E[X_i]+ 2\tau/p \,. $$ If $\tilde x^N_v$ is more than the non-crucial budget of vertex $v$, which is $\max\{q^N_v, \epsilon\}$, then in steps $2$ and $3$ of Procedure \ref{proc:non-crucial}, we scale down the fractional matching such that no vertex violates its non-crucial budget. In the rest of the proof we show that the probability that vertex $v$ violates its budget by a large margin is very small. By Claim \ref{clm:fcloseq}, we know that $f^N_v$ is very close to the non-crucial budget of vertex $v$, and we use $f^N_v$ as an approximation of the budget of vertex $v$. More precisely, we show that with probability at least $1-\epsilon$, $\tilde x^N_v - \tilde x^N_e \le ( 1+ \epsilon) \max\{f^N_v, \epsilon\}$. We use $X$ to denote $\tilde x^N_v - \tilde x^N_e$. Let $\mu = \E[X] = \sum_{i=1}^{k} \E[X_i]$. We use the variant of the Chernoff bound that is given in Lemma~\ref{lem:chernoff}. Note that for each random variable $X_i$, we have $X_i \le \min\{f_{e_i}/p,2 \tau/p \}$ and $\E[X_i] = \min\{f_{e_i},2 \tau\}$. Therefore, $X_i \le \E[X_i] / p$. \begin{comment} Since $X$ is the summation of $X_i$s, we have $X \le \mu/p$. Hence, in the case that $\mu \le 2 \tau$, we have $$X \le \mu/p \le 2 \tau /p \le \epsilon \le \max\{f^N_v, \epsilon\} \,,$$ which is our desired bound. \end{comment} We consider two different cases on $\mu$. 
The first one is when $\mu \le \epsilon/2$; then $2 \mu \le \max\{f^N_v, \epsilon\}$, and we have \begin{align*} \Pr \Big[ X > \max\{f^N_v, \epsilon\} \Big] &= \Pr\Big[X - \max\{f^N_v, \epsilon\}/2 \ge \max\{f^N_v, \epsilon\}/2\Big] \\ &\le \Pr \Big[ X - \mu \ge \max\{f^N_v, \epsilon\}/2 \Big] \\ &\le \exp \Big(-\dfrac{ \max\{f^N_v, \epsilon\}/2 }{6 \tau/p} \Big) & \text{By Chernoff bound.} \\ &\le \exp \Big(-\dfrac{\epsilon}{12 \tau/p}\Big) \\ &\le \exp(-1/\epsilon^2) \\ &\le \epsilon \,. \end{align*} The remaining case is when $\mu > \epsilon/2$. In this case we have \begin{align*} \Pr [ X > (1+\epsilon) \max\{f^N_v, \epsilon\} ] & \le \Pr [ X > (1+\epsilon) \mu ] \\ &\le \exp \Big(-\dfrac{\epsilon^2 \mu}{ 2 \tau /p }\Big) & \text{By Chernoff bound.} \\ &\le \exp \Big(-\dfrac{\epsilon^3}{6 \tau / p }\Big) & \text{Since } \mu > \epsilon/2. \\ &\le \exp \big( -\log (1/\epsilon) \big) & \text{Since } \tau = \frac{\epsilon^3 p}{20 \log(1/\epsilon)}. \\ &= \epsilon \,, \end{align*} which proves the last case. Therefore, with probability at least $1-\epsilon$, we have $X \le (1+\epsilon) \max\{f^N_v, \epsilon\}$, and hence with probability at least $1-\epsilon$ we have $$ \tilde x^N_v \le (1+\epsilon) \max\{f^N_v, \epsilon\} + \tilde x^N_e \le (1+\epsilon) \max\{f^N_v, \epsilon\} + \epsilon^2 \le (1+2\epsilon) \max\{f^N_v, \epsilon\} \,, $$ where we used $\tilde x^N_e \le 2\tau/p \le \epsilon^2$ and $\max\{f^N_v, \epsilon\} \ge \epsilon$. Combining with Claim \ref{clm:fcloseq}, with probability at least $1-2\epsilon$ we have $$ \tilde x^N_v \le (1+2\epsilon) \max\{f^N_v, \epsilon\} \le (1+2\epsilon) (1+\epsilon) \max\{q^N_v, \epsilon\} \le (1+5\epsilon) \max\{q^N_v, \epsilon\} \,. $$ Therefore, with probability at least $1-2\epsilon$ we have $$\max\{q^N_v, \epsilon\}/\tilde x^N_v \ge \dfrac{1}{1+5\epsilon} \ge 1-5\epsilon \,,$$ which proves the claim. \end{proof} By Claim \ref{clm:non-crucialedgeschernoff}, the probability that $\tilde x^N_e$ is scaled by a factor less than $1-5\epsilon$ due to one of the endpoints of $e$ is at most $2\epsilon$. 
Therefore, by a union bound, the probability that neither of its endpoints scales $\tilde x^N_e$ by a factor less than $1-5\epsilon$ is at least $1-4\epsilon$; i.e., with probability at least $1-4\epsilon$, $s_e \ge 1-5\epsilon$. Therefore, \begin{align*} \E \big[ \sum_{e \in S_p \cap N} x^N_e \cdot w_e \big] &= \sum_{e \in S_p \cap N} \E[x^N_e] \cdot w_e \\ &= \sum_{e \in S_p \cap N} \E[\tilde x^N_e \cdot s_e] \cdot w_e \\ &\ge \sum_{e \in S_p \cap N} (1-4\epsilon)(1-5\epsilon) \E[\tilde x^N_e] \cdot w_e &\text{By Claim \ref{clm:non-crucialedgeschernoff}.} \\ &\ge (1-9\epsilon) \sum_{e \in S_p \cap N} \E[\tilde x^N_e] \cdot w_e \\ &\ge (1-9 \epsilon) (1-\epsilon) \qw(N) & \text{By Claim \ref{clm:non-crucial}.} \\ &\ge (1-10 \epsilon) \qw(N) \,. \end{align*} \end{proof} \subsection{Other Omitted Proofs}\label{sec:otheromitted} \begin{proof}[Proof of Lemma~\ref{lem:sampleallcrucial}] Let $e \in C$ be a crucial edge. We show that Algorithm~\ref{alg:nonadaptive} samples $e$ with probability at least $1-\epsilon$. Let $p^\star_e$ be the probability that Algorithm~\ref{alg:nonadaptive} samples $e$. By Observation \ref{obs:samplingprob}, we have $ 1-p^\star_e = (1-q_e)^R $. Since $e$ is crucial, we have $q_e \ge \tau$. Thus, $ 1-p^\star_e \le (1-\tau)^R $. Note that $R > \frac{\log(1/\epsilon)}{\tau}$. Therefore, $1-p^\star_e$ is at most \begin{align} \label{ieq:ps} 1-p^\star_e \le (1-\tau)^\frac{\log(1/\epsilon)}{\tau} = ((1-\tau)^{(1/\tau)})^{\log(1/\epsilon)} \,. \end{align} We can use the fact that $(1-x)^{1/x} \leq 1/e$ for $x \in (0, 1]$ (see Lemma~\ref{lem:oneovere}) to simplify this bound. Combined with inequality (\ref{ieq:ps}), we have $$ 1-p^\star_e \le \Big((1-\tau)^{(1/\tau)}\Big)^{\log(1/\epsilon)} \le \Big(\frac{1}{e}\Big)^{\log(1/\epsilon)} = \dfrac{1}{e^{\log(1/\epsilon)}} = \epsilon \,. $$ Therefore, we have $p^\star_e \ge 1-\epsilon$. 
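As a quick numeric sanity check of this bound (illustrative only, outside the formal argument; the values of $\epsilon$ and $\tau$ below are arbitrary assumptions chosen for the illustration), one can verify that $(1-\tau)^{\log(1/\epsilon)/\tau} \le \epsilon$:

```python
import math

# Illustrative check: with R = log(1/eps) / tau rounds, the probability
# (1 - tau)^R that a crucial edge is never sampled is at most eps.
# The grids of eps and tau values are arbitrary illustrative choices.
for eps in (0.1, 0.01, 0.001):
    for tau in (0.05, 0.01, 0.001):
        R = math.log(1 / eps) / tau
        assert (1 - tau) ** R <= eps
```

The check succeeds because $(1-\tau)^{1/\tau} \le 1/e$ for every $\tau \in (0, 1]$, exactly the fact invoked above.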
Now that we know that each crucial edge is in $S$ with probability at least $1-\epsilon$, we can prove the lemma as follows: \begin{align*} \E[\qw(S \cap C)] &= \E\Big[ \sum_{e \in C} \qw_e \cdot \mathbbm{1}_{S}(e)\Big] & \text{($\mathbbm{1}_{S}(e) = 1$ if $e \in S$ and 0 otherwise.)} \\ & = \sum_{e \in C} \E[\qw_e \cdot \mathbbm{1}_{S}(e)] & \text{By linearity of expectation.} \\ & = \sum_{e \in C} \qw_e \cdot \E[\mathbbm{1}_{S}(e)] \\ & = \sum_{e \in C} \qw_e \cdot p^\star_e \\ & \ge \sum_{e \in C} \qw_e \cdot (1-\epsilon)\\ & = (1-\epsilon) \sum_{e \in C} \qw_e \\ & = (1-\epsilon) \qw(C) \,, \end{align*} as desired. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:mathratio}] We use induction on the value of $n$. \paragraph{Base case.} Suppose for the base case that $n=1$.\footnote{To help sanity check the rather technical proof of the base case, we also refer the reader to \href{http://www.wolframalpha.com/input/?i=Maximize\%5B\%7B(h*l)\%2F(h+\%2B+0.5+l),+0+\%3C\%3D+h+\%3C\%3D+1,+0+\%3C\%3D+l+\%3C\%3D+1,++++h+\%2B+l+\%3C\%3D+1\%7D,+\%7Bh,+l\%7D\%5D}{this link on wolframalpha.com.}} We need to prove that for any $a, b \geq 0$ with $0 < a + b \leq 1$ we have $f(a, b) = \frac{ab}{a+b/2} \leq 6-4\sqrt{2}$. For this, we first argue that $f(a, b)$ is maximized when $a+b = 1$. To do this, we show that $f(1-b, b) - f(a, b) \geq 0$ for any $a$ and $b$ that satisfy the conditions above. If $b=0$ or $a=0$, then $f(a, b) = 0$ and the inequality is trivially true; thus assume $a \not= 0$ and $b\not=0$. 
We have \begin{align} \nonumber f(1-b, b) - f(a, b) &= \frac{(1-b)b}{1-b+b/2} - \frac{ab}{a+b/2} \\ \nonumber &= \frac{(1-b)b}{1-b/2} - \frac{ab}{a+b/2}\\ \nonumber &= \frac{(1-b)b (a+b/2) - ab(1-b/2)}{(1-b/2)(a+b/2)}\\ \nonumber &= \frac{(ab-ab^2+b^2/2-b^3/2) - (ab - ab^2/2)}{a + b/2 - ab/2 - b^2/4}\\ \nonumber &= \frac{ab-ab^2+b^2/2-b^3/2 -ab + ab^2/2}{a + b/2 - ab/2 - b^2/4}\\ \nonumber &= \frac{-ab^2/2+b^2/2-b^3/2}{a + b/2 - ab/2 - b^2/4}\\ &= \frac{-ab^2+b^2-b^3}{2a + b - ab - b^2/2}.\label{eq:BXKJ} \end{align} It suffices to show that both the numerator and the denominator of the fraction above are non-negative to conclude that $f(1-b, b) - f(a, b) \geq 0$. For the numerator, we should show that $-ab^2 + b^2 - b^3 \ge 0$ or equivalently $b^2 \ge ab^2 + b^3$. Due to our assumption of $0 < b$, we can divide both sides by $b^2$ to get $1 \ge a + b$, which is always true as it is part of our initial assumptions on the values of $a$ and $b$. For the denominator, we have to show $2a+b-ab-b^2/2 \geq 0$ or equivalently $2a+b \geq ab + b^2/2$. We have $2a > ab$ since $a, b \in (0, 1)$ and we have $b \geq b^2/2$ since $b \in (0, 1)$. Summing up the two inequalities we get our desired bound that $2a + b \geq ab + b^2/2$; concluding the claim that the fraction in (\ref{eq:BXKJ}) is non-negative and that $f(1-b, b) - f(a, b) \geq 0$. By the discussion above, to prove the base case, it suffices to find the maximum value of $g(b) := f(1-b, b)$ for $b \in (0, 1)$. Taking the derivative of $g$, we have $g'(b) = \frac{2b^2-8b+4}{(2-b)^2}.$ Setting this equal to zero to find the critical points, we get the two solutions $2-\sqrt{2}$ and $2+\sqrt{2}$. The latter is out of the $(0, 1)$ range and thus the only relevant critical point is $b = 2-\sqrt{2}$. 
Therefore we have $$\max_{0\leq b \leq 1} g(b) = g(2-\sqrt{2}) = \frac{\big(1-(2-\sqrt{2})\big)(2-\sqrt{2})}{\big(1-(2-\sqrt{2})\big)+ (2-\sqrt{2})/2} = \frac{3 \sqrt{2}-4}{1/\sqrt{2}} = 6-4 \sqrt{2},$$ implying that for any $a, b \geq 0$ with $0 \leq a+b \leq 1$, we have $f(a, b) \leq g(b) \leq 6-4\sqrt{2}$ as desired for the base case. \paragraph{Induction step.} Fix numbers $a_1, \ldots, a_{n+1}$ and $b_1, \ldots, b_{n+1}$ that satisfy the conditions of the lemma. Suppose, as induction hypothesis, that we have \begin{equation}\label{eq:GPMU} \frac{\sum_{i=1}^{n}a_ib_i}{\sum_{i=1}^{n}a_i+\frac{b_i}{2}} \leq 6 - 4\sqrt{2} \end{equation} or equivalently, \begin{equation}\label{eq:MEXC} \sum_{i=1}^{n}a_ib_i \leq (6 - 4\sqrt{2})\Big(\sum_{i=1}^{n}a_i+\frac{b_i}{2}\Big). \end{equation} Our goal is to show that \begin{equation}\label{eq:YCXJ} \frac{\sum_{i=1}^{n+1}a_ib_i}{\sum_{i=1}^{n+1}a_i+\frac{b_i}{2}} = \frac{\Big(\sum_{i=1}^{n}a_ib_i\Big) + a_{n+1}b_{n+1}}{\Big( \sum_{i=1}^{n}a_i+\frac{b_i}{2}\Big) + a_{n+1}+\frac{b_{n+1}}{2}} \overset{?}{\leq} 6 - 4\sqrt{2}. \end{equation} Note that if either of $a_{n+1}$ or $b_{n+1}$ equals 0, then the inequality above is trivially true since the numerator would be equal to that of (\ref{eq:GPMU}) while the denominator is no less than that of (\ref{eq:GPMU}). Thus assume that both $a_{n+1}$ and $b_{n+1}$ are positive. As shown for the base case, we have \begin{equation*} \frac{a_{n+1}b_{n+1}}{a_{n+1}+\frac{b_{n+1}}{2}} \leq 6-4\sqrt{2}, \end{equation*} which means, \begin{equation}\label{eq:ABXJ} a_{n+1}b_{n+1} \leq (6-4\sqrt{2})\big( a_{n+1}+\frac{b_{n+1}}{2}\big). 
\end{equation} Substituting (\ref{eq:ABXJ}) and (\ref{eq:MEXC}) into the left-hand side of the inequality in (\ref{eq:YCXJ}), we get \begin{align*} \frac{\Big(\sum_{i=1}^{n}a_ib_i\Big) + a_{n+1}b_{n+1}}{\Big( \sum_{i=1}^{n}a_i+\frac{b_i}{2}\Big) + a_{n+1}+\frac{b_{n+1}}{2}} &\leq \frac{\bigg( (6 - 4\sqrt{2})\Big(\sum_{i=1}^{n}a_i+\frac{b_i}{2}\Big) \bigg) + (6-4\sqrt{2})\big( a_{n+1}+\frac{b_{n+1}}{2}\big)}{\Big( \sum_{i=1}^{n}a_i+\frac{b_i}{2}\Big) + a_{n+1}+\frac{b_{n+1}}{2}} \\ & = \frac{(6-4\sqrt{2})\bigg( \Big( \sum_{i=1}^{n}a_i+\frac{b_i}{2}\Big) + a_{n+1}+\frac{b_{n+1}}{2}\bigg)}{\Big( \sum_{i=1}^{n}a_i+\frac{b_i}{2}\Big) + a_{n+1}+\frac{b_{n+1}}{2}}\\ & = 6-4\sqrt{2}, \end{align*} which is the desired bound of inequality (\ref{eq:YCXJ}). \end{proof} \subsection{Applications}\label{sec:applications} The stochastic matching problem has a wide range of applications, from {\em kidney exchange} to {\em labor markets} and {\em online dating}. In all these applications, the goal is to find a large (or heavy) matching, and the main bottleneck is determining which edges exist in the graph. We overview some of these applications below. \smparagraph{Kidney exchange.} A kidney transplant from a living {\em donor} is possible only if the recipient (the {\em patient}) happens to be medically compatible with the donor. This is not always the case; kidney exchange, however, provides a way to overcome this. In its simplest form with {\em pairwise exchanges}, two incompatible donor/patient pairs can exchange kidneys. That is, the donor of the first pair donates a kidney to the patient of the second pair and vice versa. This gives rise to the notion of a {\em compatibility graph} where we have one vertex for each incompatible donor/patient pair and each edge determines the possibility of an exchange. Therefore, the pairwise exchanges that take place can be expressed as a matching of this graph. There is, however, one crucial problem. 
The medical records of the patients, such as their blood or tissue types, only rule out a subset of incompatibilities. For the rest, we need more accurate medical tests that are both costly and time-consuming. The stochastic matching setting helps in finding a large matching among the pairs who also pass the extra tests while conducting very few medical tests per pair. There is a rich literature on such algorithmic approaches for kidney exchange, particularly in stochastic settings \cite{DBLP:conf/sigecom/AkbarpourLG14, DBLP:conf/soda/AndersonAGK15, anderson2015finding, DBLP:conf/ijcai/AwasthiS09, DBLP:conf/aaai/DickersonPS12, DBLP:conf/sigecom/DickersonPS13, DBLP:conf/aaai/DickersonS15, DBLP:journals/jea/ManloveO14, unver2010dynamic}. We refer interested readers to the paper of \cite{DBLP:conf/sigecom/BlumDHPSS15} for a more detailed discussion of the application of stochastic matching in kidney exchange. \smparagraph{Online labor markets.} Online labor markets facilitate working relationships between freelancers and employers. In such platforms, it quite often happens that the users (from either party) have more options than they can consider. We can represent this with a bipartite graph with freelancers on one side and employers on the other. The edges of the {\em compatibility graph}, again, determine possible matches. While the initial job descriptions rule out some of the edges, it is after an interview between an employer and a freelancer that the two decide whether to work with each other. Stochastic matching, for such platforms, can be used to recommend interviews. This way, we ensure that with very few interviews, most of the users will find a desired match. \section{Beyond Half Approximation -- Unweighted Graphs} In this section, we devise a process that constructs a large fractional matching on the realized graph by assigning values to both crucial and non-crucial edges. 
For non-crucial edges, we follow Procedure~\ref{proc:non-crucial} in obtaining the fractional matching. For crucial edges, however, we take a different approach in constructing the fractional matching. Before describing the actual procedure, we emphasize the following property of Procedure~\ref{proc:non-crucial}, which is necessary for augmenting it with crucial edges. \begin{observation} Procedure~\ref{proc:non-crucial} does not look at how the crucial edges are realized. \end{observation} Intuitively, the observation above tells us that the large fractional matching that we obtain on realized non-crucial edges does not adversarially affect the realization of crucial edges, since Procedure~\ref{proc:non-crucial} is essentially unaware of the realization of crucial edges. As such, if we are able to construct a large realized fractional matching on the crucial edges that also (1) does not violate the crucial budget of the vertices or the blossom inequalities, and that (2) does not ``look" at the realization of the non-crucial edges, we can plug the two fractional matchings together to obtain a valid fractional matching that combines both non-crucial and crucial edges. This is, unfortunately, not possible for the crucial edges, and the main obstacle is preserving the per-vertex budgets. To illustrate the above-mentioned problem, consider a graph with $2n$ vertices and $n$ edges where each vertex is connected to exactly one edge, i.e., the graph is a matching of size $n$. Any of these edges that is realized will be part of the realized matching; thus, for any edge $e$ in this graph we have $q_e = p$, which means they are all crucial edges and we have $\qw(C) = pn$. Note that the crucial budget $q^C_v$ of each of the vertices is $p$. 
Therefore, if we want to preserve these crucial budgets on the realized crucial edges, the fractional value that we assign to each realized edge would be at most $p$ (instead of 1); implying that the fractional matching that we get would have a total weight of $p^2n$ in expectation, which is only a $p$ fraction of $\qw(C)$. Recall that preserving the crucial/non-crucial per-vertex budgets was to ensure that once we combine the crucial and non-crucial fractional matchings, the total fractional matching connected to each vertex does not exceed 1. To achieve this, a slightly weaker constraint is also sufficient. Consider a vertex $v$ with non-crucial budget $q^N_v$ and crucial budget $q^C_v$. If $q^N_v + q^C_v$ (i.e., $q_v$) is much smaller than 1, we can allow the crucial fractional matching to assign a value of (roughly) up to $1-q^N_v$ to the edges connected to $v$. This, for instance, resolves the issue of the example in the previous paragraph. Thus, it only remains to argue that one can find a large such fractional matching on realized crucial edges. We formalize the procedure for doing this as Procedure~\ref{proc:crucial}. \begin{procedure}{Constructing a fractional matching $x^C$ for unweighted graphs on the realized crucial edges of $S$.}\label{proc:crucial} \begin{procinput}The realized portion $R^C := S_p \cap C$ of the sampled crucial edges. \end{procinput} For any matching $\mu \subseteq S_p \cap C$ define the {\em appearance-probability} $q(\mu | R^C)$ of $\mu$ to be the probability with which $\mu$ is the portion of $S_p \cap C$ that appears in the omniscient optimum, given the realization $R^C$ of the crucial edges. Formally, \begin{equation*} q(\mu|R^C) = \Pr\Big[\mu = \big(\matching{E_p} \cap S_p \cap C\big) \Big| E_p \cap C = R^C\Big]. \end{equation*} Among all matchings in $S_p \cap C$, we draw one according to the appearance-probabilities. Let us denote this matching by $\mu^C$. 
For any edge $e=(u, v) \in \mu^C$, set $$x^C_e \gets (1-\epsilon) \min \big\{ 1 - q^N_v, 1-q^N_u \big\},$$ and for any other edge $e \in S_p \cap C$ we set $x^C_e \gets 0$. \end{procedure} We first show that by combining Procedures~\ref{proc:non-crucial} and \ref{proc:crucial} we can obtain a $\approx 0.6568$ approximation for unweighted graphs. Define fractional matching $x$ as follows \begin{equation} x_e := x^N_e \qquad \forall e \in N, \qquad\qquad x_e := x^C_e \qquad \forall e \in C. \end{equation} \begin{claim} $x$ is a valid fractional matching that satisfies blossom inequalities of size up to $1/\epsilon$. \end{claim} \begin{proof} Fix any arbitrary subset $U \subseteq V$ of size at most $1/\epsilon$. Lemma~\ref{lem:non-cruciallemma} guarantees that the fractional matching on non-crucial edges of $U$ has size at most $\epsilon \lfloor\frac{|U|-1}{2} \rfloor$. On the other hand, since $\mu^C$ is an integral matching, it has at most $\lfloor \frac{|U|-1}{2} \rfloor$ edges in $U$. Since the fractional matching that we assign each edge of $\mu^C$ is at most $1-\epsilon$, overall the total size of the fractional matching assigned to the edges in $U$ cannot be more than $\epsilon \lfloor\frac{|U|-1}{2} \rfloor + (1-\epsilon) \lfloor \frac{|U|-1}{2} \rfloor = \lfloor\frac{|U|-1}{2} \rfloor$. \end{proof} \begin{theorem}\label{thm:nonadaptiveunweighted} If $G$ is unweighted, the constructed fractional matching $x$ of Procedure~\ref{proc:crucial} has size $\E\big[\sum_e x_e\big] \geq (1-2\epsilon)(4\sqrt{2}-5)\opt$. Therefore, Algorithm~\ref{alg:nonadaptive}, in expectation, achieves an approximation factor of at least $(1-2\epsilon)(4\sqrt{2}-5)$. \end{theorem} \begin{proof} Let us denote by $\alg := \sum_e x_e$ the size of our fractional matching $x$. We know by definition that $\alg = \sum_{e \in N} x^N_e + \sum_{e \in C} x^C_e$. 
It can be deduced from property 3 of Lemma~\ref{lem:non-cruciallemma} that \begin{equation}\label{eq:p2non-crucial} \E\Big[\sum_{e \in N}x^N_e\Big] \geq (1-\epsilon) \qw(N) = (1-\epsilon) q(N), \end{equation} where the latter equality is due to the assumption that the graph is unweighted. Our goal, now, is to show that $\E[\sum_{e \in C}x^C_e]$ is also large. Take a crucial edge $e=(u, v)$. We know that Algorithm~\ref{alg:nonadaptive} picks $e$ with probability at least $1-\epsilon$ since $e$ is a crucial edge. Assuming that $e$ is picked by Algorithm~\ref{alg:nonadaptive}, $e$ is part of the matching $\mu^C$ picked by Procedure~\ref{proc:crucial} with probability at least $q_e$. And if $e$ is part of $\mu^C$, the fractional matching that will be assigned to it is $(1-\epsilon)\min\{1-q^N_v, 1-q^N_u\}$. Thus, for any crucial edge $e=(u, v)$, we have \begin{equation*} \E\big[x^C_e\big] \geq (1-\epsilon)\cdot q_e \cdot (1-\epsilon)\min\{1-q^N_v, 1-q^N_u\} \geq (1-2\epsilon)q_e \cdot \min\{1-q^N_v, 1-q^N_u\}. \end{equation*} To get rid of the minimization above, we direct each crucial edge towards its endpoint with the higher non-crucial budget. Formally, a crucial edge $e = (u, v)$ is directed towards its endpoint $u$ if $q^N_u > q^N_v$ and in case of a tie (i.e., if $q^N_u = q^N_v$), we break it arbitrarily. For any vertex $v$ we denote its incoming crucial edges by $N^{C-}(v)$ and use $q^{C-}_v := \sum_{u \in N^{C-}(v)} q_{(u, v)}$ to denote the total matching probabilities of the edges that are directed towards $v$. With these definitions, we have \begin{align}\label{eq:p2h} \nonumber\E\Big[\sum_{e \in C} x^C_e \Big] &\ge \sum_v (1-2\epsilon) (1-q^N_v)q^{C-}_v\\ \nonumber &= (1-2\epsilon)\sum_v\big(q^{C-}_v - q^N_v q^{C-}_v\big) \\ \nonumber &= (1-2\epsilon)\sum_v q^{C-}_v - (1-2\epsilon) \sum_v q^N_v q^{C-}_v \\ &= (1-2\epsilon)q(C) - (1-2\epsilon)\sum_v q^N_v q^{C-}_v. 
\end{align} Combining (\ref{eq:p2non-crucial}) and (\ref{eq:p2h}) we get \begin{align*} \E[\alg] = \E\big[\sum_{e \in N} x^N_e \big] + \E\big[\sum_{e \in C} x^C_e \big] &\geq (1-\epsilon)q(N) + (1-2\epsilon)q(C) - (1-2\epsilon)\sum_v q^N_v q^{C-}_v\\ &\geq (1-2\epsilon) \bigg( q(N) + q(C) - \sum_v q^N_v q^{C-}_v\bigg). \end{align*} On the other hand, recall that $\opt = q(N) + q(C)$, thus we have \begin{equation}\label{eq:XBST} \frac{\E[\alg]}{\opt} \geq \frac{(1-2\epsilon)\Big( q(N) + q(C) - \sum_v q^N_v q^{C-}_v\Big)}{q(N)+q(C)} \geq (1-2\epsilon)\bigg( 1 - \frac{\sum_v q^N_v q^{C-}_v}{q(N)+q(C)}\bigg). \end{equation} Note that since each crucial edge is directed towards exactly one of its endpoints, we have $q(C) = \sum_v q^{C-}_v$. On the other hand, we have $\sum_v q^N_v = 2q(N)$ since the matching probability of each non-crucial edge $(u, v)$ contributes to both $q^N_v$ and $q^N_u$. Combining these two observations, we have \begin{equation}\label{eq:CTAU} q(N) + q(C) = \sum_v \Big(q^{C-}_v + \frac{q^N_v}{2}\Big). \end{equation} Combining (\ref{eq:XBST}) and (\ref{eq:CTAU}) we get \begin{equation}\label{eq:XBGC} \frac{\E[\alg]}{\opt} \geq (1-2\epsilon)\bigg( 1 - \frac{\sum_v q^N_v q^{C-}_v}{\sum_v \big(q^{C-}_v + \frac{q^N_v}{2}\big)}\bigg). \end{equation} We use the following mathematical lemma to show the desired bound on this ratio. 
\begin{lemma}\label{lem:mathratio} Given numbers $a_1, \ldots, a_n$ and $b_1, \ldots, b_n$ such that \begin{enumerate}[label=(\roman*)] \item $a_i \geq 0$, $b_i \geq 0$, and $a_i + b_i \leq 1$ for any $i\in[n]$, and \item $\sum_{i=1}^{n} (a_i + b_i) > 0$, \end{enumerate} we have $\frac{\sum_{i=1}^{n}a_ib_i}{\sum_{i=1}^{n} \big(a_i + \frac{b_i}{2}\big)} \leq 6-4\sqrt{2}.$ \end{lemma} For any vertex $v$ we have $q^N_v \in [0, 1]$ and $q^{C-}_v \in [0, 1]$, and clearly $q^N_v + q^{C-}_v \leq 1$ since $q$ is a valid fractional matching and the amount of matching incident to each vertex is at most 1; therefore, condition (i) of Lemma~\ref{lem:mathratio} is satisfied. Furthermore, condition (ii) of Lemma~\ref{lem:mathratio} also holds so long as $\opt > 0$, which is always the case unless the graph is empty. Thus we have \begin{equation*} \frac{\sum_v q^N_v q^{C-}_v}{\sum_v \big(q^{C-}_v + \frac{q^N_v}{2}\big)} \leq 6-4\sqrt{2}, \qquad \text{therefore,} \qquad 1- \frac{\sum_v q^N_v q^{C-}_v}{\sum_v \big(q^{C-}_v + \frac{q^N_v}{2}\big)} \geq 1-(6-4\sqrt{2}) = 4\sqrt{2}-5. \end{equation*} Replacing this in Inequality (\ref{eq:XBGC}) we get $\frac{\E[\alg]}{\opt} \geq (1-2\epsilon)(4\sqrt{2}-5)$ or equivalently the desired bound in Theorem~\ref{thm:nonadaptiveunweighted} that $\E[\alg] \geq (1-2\epsilon)(4\sqrt{2}-5)\opt$. \end{proof} We next show that our analysis in Theorem~\ref{thm:nonadaptiveunweighted} for the fractional matching $x$ constructed via the above-mentioned procedures is tight. \begin{lemma}\label{lem:unweightedtight} There exists a bipartite unweighted graph $G$, for which the fractional matching $x$ constructed via Procedures~\ref{proc:non-crucial} and \ref{proc:crucial} has an approximation factor of at most $4\sqrt{2} - 5 + o(1)$. 
\end{lemma} \begin{proof} \begin{figure} \centering \includegraphics[scale=0.90]{figs/unweightedhardness} \caption{An unweighted bipartite graph for which the fractional matching composed of Procedures~\ref{proc:non-crucial} and \ref{proc:crucial} does not provide a better than $4\sqrt{2}-5$ approximation.} \label{fig:unweightedhardness} \end{figure} For a sufficiently large $L$, construct a graph $G'$ (refer to Figure~\ref{fig:unweightedhardness} for an illustration) with four sets $A, B, A', B'$ of $L$ vertices each, i.e., the graph has $4L$ vertices in total. There is a complete bipartite graph between the vertices in $B$ and $B'$. There is also a perfect matching between $A$ and $B$ and a perfect matching between $B'$ and $A'$. Moreover, we set the realization probability $p$ of the graph to be $p=\sqrt{2}-1$. The optimal way of constructing a matching in a realization $G'_p$ of $G'$ is to first add all the realized edges between $A$ and $B$ or $A'$ and $B'$ to the matching, and then complement it via the realized edges between the unmatched vertices in $B$ and $B'$. Since there is a complete bipartite graph between the unmatched vertices in $B$ and $B'$, one can find a realized matching that is almost perfect. That is, this realized matching matches a $1-o(1)$ fraction of the unmatched vertices in $B$ and $B'$. Thus, overall, we have \begin{align*} \E[\opt] &= \underbrace{p \times 2L}_{\text{matching between $A$ and $B$ or between $A'$ and $B'$}} + \underbrace{(1-o(1))(1-p)L}_{\text{matching between $B$ and $B'$}}\\ &\geq (1+p-o(1))L\\ &\geq (\sqrt{2}-o(1))L. \end{align*} The crucial edges of $G'$ are those between $A$ and $B$ and those between $A'$ and $B'$. The rest of the edges are non-crucial. We have $\qw(C) = 2pL = (2\sqrt{2}-2)L$ and we have $\qw(N) = (2-\sqrt{2}-o(1))L$. Thus, the non-crucial budget of each vertex in $B$ or $B'$, which is $\qw(N)/L$, is equal to $(2-\sqrt{2}-o(1))$. 
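As an illustrative numeric check of these quantities (normalizing $L = 1$ and dropping the $o(1)$ terms; not part of the formal proof):

```python
import math

# Sanity check of the example's quantities with p = sqrt(2) - 1 and L = 1.
p = math.sqrt(2) - 1
opt = 2 * p + (1 - p)      # E[opt] ~ p*2L + (1-p)L = (1+p)L = sqrt(2)*L
qw_C = 2 * p               # crucial mass: the two perfect matchings
qw_N = 1 - p               # non-crucial mass: the matching inside B x B'
assert abs(opt - math.sqrt(2)) < 1e-12
assert abs(qw_C - (2 * math.sqrt(2) - 2)) < 1e-12
assert abs(qw_N - (2 - math.sqrt(2))) < 1e-12
```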
The fractional matching that we construct by combining Procedures~\ref{proc:non-crucial} and \ref{proc:crucial} first obtains a fractional matching of size $(1-\epsilon)\qw(N)$ on the non-crucial edges. Then, on each crucial edge $e=(u, v)$ that is realized, it puts a fractional matching of size $$(1-\epsilon)\min\{1-q^N_u, 1-q^N_v\} = (1-\epsilon)\Big(1-\big(2-\sqrt{2}-o(1)\big)\Big) = (1-\epsilon)(\sqrt{2}-1+o(1)).$$ This means that, overall, we construct a fractional matching of size only $(1-\epsilon)\Big(\sqrt{2}-1+o(1)\Big)\cdot p \cdot 2L = (1-\epsilon)(6-4\sqrt{2}+o(1))L$ on the crucial edges. Overall, the approximation factor is \begin{align*} \frac{\E[\alg]}{\E[\opt]} &= \frac{(1-\epsilon)\Big(\overbrace{(2-\sqrt{2}-o(1))L}^{\text{Procedure~\ref{proc:non-crucial}}} + \overbrace{(6-4\sqrt{2}+o(1))L}^{\text{Procedure~\ref{proc:crucial}}} \Big)}{(\sqrt{2}-o(1))L}\\ &\leq \frac{(1-\epsilon)\big(8-5\sqrt{2}+o(1)\big)}{\sqrt{2}-o(1)}\\ &\leq (1-\epsilon)\big(4\sqrt{2}-5+o(1)\big). \end{align*} This completes the proof and almost matches the guarantee provided by Theorem~\ref{thm:nonadaptiveunweighted}. \end{proof} \section{Beyond Half Approximation -- Weighted Graphs} We showed in the previous section that Procedure~\ref{proc:crucial} guarantees a $\approx 0.6568$ approximation for unweighted graphs. Unfortunately, however, it does not provide anything better than a half approximation for weighted graphs. Recall by Corollary~\ref{cor:halfapprox} that we already achieve an almost half approximation by combining Lemmas~\ref{lem:non-cruciallemma} and \ref{lem:sampleallcrucial}. Thus, Procedure~\ref{proc:crucial} provides no benefit in the case of weighted graphs. In this section, we modify this procedure to bypass the half approximation barrier for weighted graphs.
\begin{figure} \centering \includegraphics[scale=0.95]{figs/weightedapproxhard} \caption{Example showing that Procedure~\ref{proc:crucial} does not provide a better than 0.5005 approximation on weighted graphs.} \label{fig:weightedapproxhard} \end{figure} We start the discussion of this section with an example that illustrates the main difficulty in the analysis of weighted graphs and shows why Procedure~\ref{proc:crucial} does not provide a better than half approximation. Consider a star graph (Figure~\ref{fig:weightedapproxhard}) with one crucial edge $e$ of weight $w_e = 999$ and matching probability $q_e = 0.001$. The rest of the edges are non-crucial, each with a weight of 1, and the sum of their matching probabilities is $0.999$. These weights and probabilities are set in a way that makes the expected matching weight of the crucial and non-crucial edges equal (i.e., $\qw(C) = \qw(N) = 0.999$) while, at the same time, assigning significantly different matching probabilities to them (observe that $q(C)=0.001$ while $q(N)=0.999$). The total expected matching weight of the graph is $\qw(C)+\qw(N) = 1.998$; however, the weight of the fractional matching obtained by Procedure~\ref{proc:crucial} is only\footnote{For clarity of exposition we hide the $1-\epsilon$ factors here.} \begin{equation*} \underbrace{0.999 \times 1}_{\text{from non-crucial edges}} + \underbrace{0.001}_{\text{\parbox{2.3cm}{probability that $e$ appears in $\mu^C$}}} \times \underbrace{(1-0.999)}_{\text{\parbox{2.9cm}{budget remaining for $e$}}} \times 999 = 0.999999, \end{equation*} which provides only a $0.5005$-approximation. Using the same approach, one can construct examples showing that the approximation factor of Procedure~\ref{proc:crucial} is at most $0.5 + o(1)$. \begin{remark} We remark that for weighted graphs, no procedure that allocates budgets to crucial and non-crucial edges prior to looking at the actual realizations can achieve an approximation factor better than $0.5 + o(1)$.
\end{remark} To overcome the above-mentioned challenge, we devise a procedure with dynamic budgets. That is, the procedure first looks at the realization of the crucial edges, and then adjusts the budgets of the non-crucial edges. Before delving into the details of the procedure, we describe how it is possible to obtain a near optimal approximation for the example of Figure~\ref{fig:weightedapproxhard}. Similar to the case of unweighted graphs, we can first use Procedure~\ref{proc:non-crucial} to construct a fractional matching on the non-crucial edges that does not violate the non-crucial budgets of the vertices. This provides a fractional matching of weight $\qw(N)$ and a half approximation. Next, we look at the realization of the crucial edges. If our crucial edge $e$ is not realized, then we report the fractional matching that we already have. However, if $e$ happens to be realized, we remove the fractional matching on the non-crucial edges and assign a fractional matching of 1 to edge $e$, which has a significantly higher weight. The expected weight of the fractional matching provided by this procedure is \begin{equation*} \underbrace{0.999 \times (0.999 \cdot 1)}_{\text{if $e$ is not realized}} + \underbrace{0.001 \times 999}_{\text{if $e$ is realized}} \simeq 1.997, \end{equation*} which is very close to the expected matching weight of the original graph, which is 1.998. The main intuition here is to {\em allow a crucial edge \underline{that is realized} to decrease the fractional matching on its incident non-crucial edges if that increases the total weight}. We formalize this approach in the following procedure and show that it indeed provides a better than 0.5 approximation for weighted graphs. \begin{procedure}{Constructing a fractional matching $x$ for weighted graphs on the realized edges of $S$.}\label{proc:crucialweighted} Consider the realization of the sampled crucial edges and their realized portion $R^C := S_p \cap C$.
Among all matchings in $S_p \cap C$, we draw one according to their appearance-probabilities based on $R^C$ (refer to Procedure~\ref{proc:crucial} for the definition of appearance-probabilities). Let us denote this matching by $\mu^C$. For any edge $e=(u, v) \in \mu^C$, set $$x^C_e \gets (1-\epsilon) \argmax_{0 \le \alpha \le 1} \bigg( \frac{\min\{q^N_v, 1-\alpha \}}{q^N_v} \cdot \qw^N_v + \frac{\min\{q^N_u, 1-\alpha \}}{q^N_u} \cdot \qw^N_u + \alpha \cdot w_e \bigg),$$ and for any other edge $e \in S_p \cap C$ set $x^C_e \gets 0$. \\\\ Let $x^N$ be the fractional matching on the non-crucial edges constructed by Procedure~\ref{proc:non-crucial}. We define the fractional matching $x$ as follows. \begin{align*} x_e := x^N_e \qquad \forall e \in N, \qquad\qquad x_e := x^C_e \qquad \forall e \in C. \end{align*} For any vertex $v$ with $x(v) > 1$, scale down the fractional matching on its non-crucial edges by an appropriate factor. \end{procedure} \begin{theorem}\label{thm:nonadaptiveweighted} Algorithm~\ref{alg:nonadaptive}, in expectation, provides a $0.501$ approximation for weighted graphs. \end{theorem} \begin{proof} Recall that by Lemma~\ref{lem:non-cruciallemma}, Algorithm~\ref{alg:nonadaptive} provides a fractional matching with an expected weight of at least $(1-10\epsilon) \qw(N)$ which, moreover, satisfies the blossom inequalities of size up to $1/\epsilon$. Therefore, by Lemma~\ref{lem:folklore}, the expected weight of the matching of this algorithm is at least $(1-11\epsilon) \qw(N)$. Also, by Lemma~\ref{lem:sampleallcrucial}, the expected weight of the matching on only the crucial edges is at least $(1-\epsilon) \qw(C)$. Since $\opt = \qw(C) + \qw(N)$, if at least one of $\qw(C)$ or $\qw(N)$ is at least $0.5011 \cdot \opt$, we obtain an approximation factor of at least $(1-11\epsilon) \cdot 0.5011$, which is at least $0.501$ for $\epsilon$ small enough. Otherwise, we have $$ 0.4989 \cdot \opt \le \qw(N), \qw(C) \le 0.5011 \cdot \opt \,.
$$ In this case, we show that the expected weight of the matching constructed by Algorithm~\ref{alg:nonadaptive} is at least $0.501 \cdot \opt$. We first define two types of crucial edges and show that if the total weight of these edges is greater than a specific threshold, Procedure~\ref{proc:crucialweighted} produces a matching with expected weight at least $0.501 \cdot \opt$. Let $\delta=0.09$; we define these edges as follows. \begin{description} \item[Heavy edges.] We say that a crucial edge $e=(v,u) \in C$ is \textit{heavy} if $w_e \ge (1+\delta)(\qw^N_v + \qw^N_u)$. We use $H$ to denote the set of heavy edges. The weight of any heavy edge is larger than the total expected weight of the non-crucial fractional matching at both of its endpoints. Therefore, in Procedure~\ref{proc:crucialweighted}, a realized heavy edge reduces the fractional matching of the non-crucial edges at both of its endpoints to $0$, and we have $x_e^C = (1-\epsilon)$. \item[Semi-heavy edges.] We say that a non-heavy crucial edge $e=(v,u) \in (C \setminus H)$ is \textit{semi-heavy} if for at least one of its endpoints, say w.l.o.g. vertex $v$, we have $w_e \ge 2(1+\delta) \qw^N_v$, and for its other endpoint we have $q^N_u \le (1-\delta)$ and $q^N_v \ge q^N_u$. We use $H^\star$ to denote the set of semi-heavy edges. The weight of any semi-heavy edge is larger than the expected weight of the non-crucial fractional matching at one of its endpoints. Therefore, it reduces the fractional matching of the non-crucial edges at this endpoint. Formally, for any semi-heavy edge $e$ we have $x_e^C \ge (1-\epsilon) (1-q^N_u) \ge (1-\epsilon) \delta$. \end{description} In the following claim, we show that if a large ``portion'' of the crucial edges are heavy or semi-heavy, we can construct a fractional matching with an expected weight of $0.501 \cdot \opt$. \begin{claim} If $\qw(H)+\qw(H^\star) \ge 0.09 \cdot \qw(C)$, then the expected weight of the matching produced by Algorithm \ref{alg:nonadaptive} is at least $ 0.501 \cdot \opt$.
\end{claim} \begin{proof} Consider a heavy edge $e=(v,u) \in H$. If this edge is realized, Procedure~\ref{proc:crucialweighted} sets $x^C_e = (1-\epsilon)$ and removes the fractional matching of the non-crucial edges at both of its endpoints. Therefore, it adds a weight of $(1-\epsilon) w_e - (\qw^N_v+\qw^N_u)$ to our fractional matching, which is at least \begin{align} (1-\epsilon) w_e - (\qw^N_v+\qw^N_u) &\ge (1-\epsilon) w_e - \frac{1}{1+\delta} w_e & \text{Since } e \text{ is heavy and } w_e \ge (1+\delta)(\qw^N_v+\qw^N_u). \nonumber \\ & = (\frac{\delta}{1+\delta}-\epsilon) w_e \label{ineq:heavy} \,. \end{align} Moreover, suppose that $e'=(v',u') \in H^\star$ is a semi-heavy edge. By the definition of semi-heavy edges, we know that for one of the endpoints of $e'$, say $v'$, we have $w_{e'} \ge 2(1+\delta) \qw^N_{v'}$, and for the other endpoint we have $q^N_{u'}\le (1-\delta)$ and $q^N_{v'} \ge q^N_{u'}$. If edge $e'$ is realized, it reduces the fractional matching of the non-crucial edges of $v'$ to at most $q^N_{u'}$, and the procedure uses at least a $(1-\epsilon) (1-q^N_{u'})$ fraction of the edge $e'$. Therefore, the weight that it adds to the fractional matching produced by Procedure~\ref{proc:crucialweighted} is at least \begin{align} &(1-\epsilon) (1- q^N_{u'}) w_{e'} - (q^N_{v'}-q^N_{u'})\qw^N_{v'} \nonumber \\ &\ge(1-\epsilon) (1-q^N_{u'}) w_{e'} - (1-q^N_{u'})\qw^N_{v'} \nonumber \\ &=(1-q^N_{u'}) ((1-\epsilon)w_{e'} -\qw^N_{v'}) \nonumber \\ &\ge (1-q^N_{u'}) ((1-\epsilon) w_{e'} - \frac{1}{2(1+\delta)} w_{e'}) & \text{Since } e' \text{ is semi-heavy and } w_{e'} \ge 2(1+\delta)\qw^N_{v'} \nonumber \\ & = (1-q^N_{u'}) \Big (\frac{1+2\delta}{2(1+\delta)}-\epsilon \Big ) w_{e'} \nonumber \\ & \ge \delta \Big (\frac{1+2\delta}{2(1+\delta)}-\epsilon \Big ) w_{e'} & q^N_{u'} \le (1-\delta) \nonumber\\ & \ge \Big (\frac{\delta+2\delta^2}{2(1+\delta)}-\epsilon \Big ) w_{e'} & \delta \le 1 \label{ineq:semi} \,.
\end{align} It follows from inequalities (\ref{ineq:heavy}) and (\ref{ineq:semi}) that the weight of the expected matching is at least \begin{align*} &(1-10\epsilon) \qw(N) + (\frac{\delta}{1+\delta}-\epsilon) \qw(H) + (\frac{\delta+2\delta^2}{2(1+\delta)}-\epsilon) \qw(H^\star) \,. \end{align*} Since $\delta=0.09$, we have $\frac{\delta}{1+\delta} \ge 0.048$ and $\frac{\delta+2\delta^2}{2(1+\delta)} \ge 0.048$. Therefore, the expected weight of the fractional matching is at least \begin{align*} &(1-10\epsilon) \qw(N) + (0.048-\epsilon) (\qw(H)+\qw(H^\star)) \\& \ge (1-10\epsilon) \qw(N) + (0.048-\epsilon) (0.09 \qw(C)) \\& = (1-10\epsilon) (\opt - \qw(C)) + (0.00432 - \epsilon) \qw(C) & \qw(N)+\qw(C) = \opt. \\& \ge (1-10\epsilon) \opt - \qw(C) (1-0.00432) \\& \ge (1-10\epsilon) \opt - 0.5011 \cdot \opt (1-0.00432) & \qw(C) \le 0.5011 \cdot \opt \\&\ge (0.50106-10\epsilon) \cdot \opt \,. \end{align*} By Lemma~\ref{lem:folklore}, we also lose a factor of $(1-\epsilon)$ to satisfy the blossom inequalities. Therefore, by choosing $\epsilon$ small enough, we get a $0.501$ approximation, which proves the claim. \end{proof} \begin{comment} We say that a crucial edge $e=(v,u) \in C$, is \textit{heavy} if $w_e \ge 1.5(\qw^N_v + \qw^N_u)$. Let $H$ be the set of all heavy edges. For a heavy edge $e=(v,u) \in H$, it appears in the matching $\mu^C$ constructed by the Procedure \ref{proc:crucialweighted}, with the probability $q_e$, and the procedure will set $x^C_e = (1-\epsilon)$ for this edge. By setting $x^C_e = (1-\epsilon)$, the amount that we add to fractional matching of light edges is $w_e - (\qw^N_v + \qw^N_u) \ge w_e/3$. Therefore, in the case that $\qw(H) \ge \qw(C)/10$, we have $$ \E \Big [ \sum_{e} x_e \cdot w_e \Big ] \ge \qw(L)+ \qw(H)/3 \ge \qw(L)+ \qw(C)/30 \ge 0.51 \opt \,, $$ and in this case we can prove the theorem. Otherwise, we have $\qw(H) < \qw(C)/10$.
Similarly, we say that a non-heavy edge $e=(v,u) \in (C \setminus H)$ is semi-heavy if at least one of the end points of this edge, i.e., vertex $u$ we have $w_e \ge 1.5 \qw^N_u$ and for its other end point we have $ q^N_v \le 0.9$. Let $H^\star$ be the set of semi-heavy edges. Similarly, we can show that if $\qw(H^\star) \ge \qw(C)/10$, the weight of the fractional matching is at least $0.51 \cdot \opt$. \end{comment} By the previous claim, we know that if $\qw(H)+\qw(H^\star) \ge 0.09 \qw(C)$, we already get our desired $0.501$ approximation. Therefore, from now on, we assume that $\qw(H)+\qw(H^\star) < 0.09 \qw(C)$. However, for ease of exposition, we do not explicitly mention this condition in the forthcoming statements. Define $C^\star := C \setminus (H \cup H^\star)$ to be the set of crucial edges that are neither heavy nor semi-heavy. We have $\qw(C^\star) \ge (1-0.09) \qw(C) = 0.91 \qw(C)$. \begin{claim} The expected weight of the matching returned by Algorithm \ref{alg:nonadaptive} is at least $ (1-2 \epsilon)0.551(\qw(N)+\qw(C^\star))-8 \epsilon\opt$. \end{claim} \begin{proof} We partition the edges in $C^\star$ into three types and, according to these types, direct each edge towards one of its endpoints. Let $e=(v,u) \in C^\star$. W.l.o.g., assume that $q^N_v \ge q^N_u$. We define the following three types: \begin{description} \item[Type 1.] If $\qw^N_v \ge \qw^N_u$, this edge is type~$1$. In this case we direct $e$ towards $v$. \item[Type 2.] If $\qw^N_v < \qw^N_u$ and $w_e \le 2(1+\delta) \qw^N_v$, this edge is type~$2$. In this case we direct $e$ towards $v$. \item[Type 3.] For any edge that is not of type 1 or 2, we have $\qw^N_v < \qw^N_u$ and $w_e > 2(1+\delta) \qw^N_v$. These edges are type~$3$, and we direct them towards $u$. \end{description} The following observation demonstrates a critical property of the edge directions defined above.
\begin{observation} \label{obs:heavybound} Let $e=(v,u) \in C^\star$ be a crucial edge which is directed towards $v$. Then we have $w_e \le 2(1+\delta)\qw^N_v$. \end{observation} \begin{proof} It suffices to show that this property holds for all three types of edges. For any type~1 edge $e=(u, v)$, we have $\qw^N_v \ge \qw^N_u$. Since $e$ is not heavy, we have $$ w_e \le (1+\delta)(\qw^N_v + \qw^N_u) \le 2(1+\delta)\qw^N_v \,. $$ For any type~2 edge $e=(u, v)$, we have our desired inequality $w_e \le 2(1+\delta) \qw^N_v$ automatically by definition. Any type~3 edge is directed towards the endpoint with the larger value of $\qw^N$; calling this endpoint $v$, we have $\qw^N_v > \qw^N_u$. Also, $e$ is not heavy, therefore $$ w_e \le (1+\delta)(\qw^N_v + \qw^N_u) < 2(1+\delta)\qw^N_v, $$ which completes the proof. \end{proof} The following observation captures another important property of the edge directions. \begin{observation} \label{obs:qlimit} Let $e=(v,u) \in C^\star$ be an edge such that $q^N_v \ge q^N_u$. If $e$ is directed towards $u$, we have $$ q^N_v \le q^N_u + \delta. $$ \end{observation} \begin{proof} Since $q^N_v \ge q^N_u$, the only case in which we direct $e$ towards $u$ is when $e$ is a type~$3$ edge. In this case $w_e > 2(1+\delta) \qw^N_v$. Since $e$ is not semi-heavy, we must have $q^N_u > (1-\delta)$. Therefore we get our desired bound that $q^N_v \le 1 < q^N_u+ \delta.$ \end{proof} Let $\bar x$ be the fractional matching obtained by combining Procedures~\ref{proc:non-crucial} and \ref{proc:crucial}. More specifically, let $\bar x^N$ be the fractional matching of Procedure~\ref{proc:non-crucial} on the non-crucial edges and $\bar x^C$ be the fractional matching of Procedure~\ref{proc:crucial} on the crucial edges. That is, \begin{align*} \bar x_e := \bar x^N_e \qquad \forall e \in N, \qquad\qquad \bar x_e := \bar x^C_e \qquad \forall e \in C. \end{align*} We show that the expected weight of the fractional matching $\bar x$ is at least $(1-2\epsilon)\,0.551\,(\qw(N)+\qw(C^\star)) - 8\epsilon\opt$.
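The case analysis behind Observations~\ref{obs:heavybound} and \ref{obs:qlimit} can be spot-checked numerically. The sketch below is our own illustration (the helper \texttt{direct\_edge} and the random sampling are assumptions, not part of the paper): it samples random non-heavy, non-semi-heavy edges with $q^N_v \ge q^N_u$, directs them according to the type~1--3 rules above, and asserts both properties.

```python
import random

DELTA = 0.09  # same delta as in the analysis above

def direct_edge(w, qw_v, qw_u, q_v, q_u):
    """Direct a non-heavy, non-semi-heavy crucial edge e=(v,u) with q_v >= q_u,
    following the type 1-3 rules. Returns the head, its qw value, and its q value."""
    if qw_v >= qw_u:                        # type 1: towards v
        return 'v', qw_v, q_v
    if w <= 2 * (1 + DELTA) * qw_v:         # type 2: towards v
        return 'v', qw_v, q_v
    return 'u', qw_u, q_u                   # type 3: towards u

rng = random.Random(0)
for _ in range(100_000):
    q_v, q_u = sorted((rng.random(), rng.random()), reverse=True)
    qw_v, qw_u = rng.random(), rng.random()
    w = rng.uniform(0, (1 + DELTA) * (qw_v + qw_u))       # e is not heavy
    if w >= 2 * (1 + DELTA) * qw_v and q_u <= 1 - DELTA:  # e is semi-heavy: skip
        continue
    head, qw_head, q_head = direct_edge(w, qw_v, qw_u, q_v, q_u)
    assert w <= 2 * (1 + DELTA) * qw_head        # Observation (heavybound)
    if head == 'u':                              # type 3 edge
        assert q_v <= q_u + DELTA                # Observation (qlimit)
print("both observations hold on all sampled edges")
```

No sampled edge violates either bound, matching the two proofs above.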
For any vertex $v$, we denote its incoming crucial edges in $C^\star$ by $N^{C-}(v)$ and use $\qw^{C-}_v := \sum_{u \in N^{C-}(v)} \qw_{(u, v)}$ to denote the expected matching weight of the edges that are directed towards $v$. For a crucial edge $e=(v,u)$, the budget that this edge can have in Procedure~\ref{proc:crucial} is $(1-\epsilon) (1- \max\{ q^N_v,q^N_u\})$. If $e$ is directed towards $v$, then by Observation~\ref{obs:qlimit} this value is at least $(1-\epsilon) (1-\delta-q^N_v)$. Our algorithm picks each crucial edge with probability at least $1-\epsilon$. Therefore, for the crucial edges in $C^\star$ we have \begin{align*} \E \Big [ \sum_{e \in C^\star} \bar x^C_e \cdot w_e \Big] &\ge \sum_v (1-\epsilon) (1-\epsilon) (1-\delta- q^N_v) \qw^{C-}_v \\&\ge (1-2\epsilon) \sum_v (1-\delta- q^N_v) \qw^{C-}_v \\&= (1-2\epsilon) (1-\delta) \qw(C^\star) - (1-2\epsilon) \sum_v q^N_v \qw^{C-}_v \,. \end{align*} Therefore, the weight of the matching returned by our algorithm is at least \begin{align*} \E [ \alg] &\ge \E [ \sum_{e \in N} \bar x^N_e \cdot w_e] + \E [ \sum_{e \in C^\star} \bar x^C_e \cdot w_e] \\ &\ge (1-10\epsilon) \qw(N)+ (1-2\epsilon) \Big((1-\delta)\qw(C^\star) - \sum_v q^N_v \qw^{C-}_v\Big) \,. \end{align*} Since $\qw(N) \le \opt$, we have \begin{align*} \E [ \alg] -8 \epsilon \opt \ge (1-2\epsilon) \Big(\qw(N)+ (1-\delta)\qw(C^\star) - \sum_v q^N_v \qw^{C-}_v\Big) \,. \end{align*} Therefore, \begin{align*} \frac{\E[\alg]-8 \epsilon \opt}{\qw(N)+ \qw(C^\star)} &\geq \frac{(1-2\epsilon)\Big( \qw(N) + (1-\delta) \qw(C^\star) - \sum_v q^N_v \qw^{C-}_v\Big)}{\qw(N)+\qw(C^\star)}\\ &\geq (1-2\epsilon)\bigg( 1 - \frac{\sum_v \delta \cdot \qw^{C-}_v + q^N_v \qw^{C-}_v}{\qw(N)+\qw(C^\star)}\bigg) \\ &=(1-2\epsilon)\bigg( 1 - \frac{\sum_v \delta \cdot \qw^{C-}_v + q^N_v \qw^{C-}_v}{\sum_v \qw^{C-}_v + \frac{\qw^{N}_v}{2}}\bigg).
\end{align*} \begin{claim} \label{clm:singlev} For each vertex $v$, we have \begin{align} \label{eq:SV} \dfrac{\delta \cdot \qw^{C-}_v + q^N_v \qw^{C-}_v}{\qw^{C-}_v + \frac{\qw^{N}_v}{2}} \le 0.449 \,. \end{align} \end{claim} \begin{proof} By Observation \ref{obs:heavybound}, we know that for each edge $e$ directed towards $v$, we have $w_e \le 2(1+\delta) \qw^N_v$. Therefore, $$ \qw^{C-}_v \le 2(1+\delta) q^C_v \qw^N_v \le 2(1+\delta) (1-q^N_v) \qw^N_v \,, $$ where the last inequality uses $q^C_v + q^N_v \le 1$. Since the left side of (\ref{eq:SV}) is increasing in $\qw^{C-}_v$, and we have $\qw^{C-}_v \le 2(1+\delta)(1-q^N_v) \qw^N_v$, it takes its maximum value when $\qw^{C-}_v = 2(1+\delta) (1-q^N_v) \qw^N_v$. Thus, $$ \dfrac{\delta \cdot \qw^{C-}_v + q^N_v \qw^{C-}_v}{\qw^{C-}_v + \frac{\qw^{N}_v}{2}} \le \dfrac{2(1+\delta) (1-q^N_v) \qw^N_v (\delta + q^N_v)}{\qw^N_v(\frac{1}{2}+ 2(1+\delta) (1-q^N_v))} = \dfrac{2(1+\delta) (1-q^N_v) (\delta + q^N_v)}{\frac{1}{2}+ 2(1+\delta) (1-q^N_v)}. $$ We hide the tedious calculations here; however, with $\delta=0.09$, one can verify that the value above is at most $0.449$ for $0\le q^N_v \le 1$ (the maximum, attained around $q^N_v \approx 0.68$, is roughly $0.4485$). \end{proof} We use the following simple observation to complete the proof of the claim. \begin{observation} For positive real values $a,b,c,d, \alpha$, suppose that $\frac{a}{b} \le \alpha$ and $\frac{c}{d} \le \alpha$. Then, $\frac{a+c}{b+d} \le \alpha$. \end{observation} Using the observation above and Claim \ref{clm:singlev}, we have $$ \frac{\sum_v \delta \cdot \qw^{C-}_v + q^N_v \qw^{C-}_v}{\sum_v \qw^{C-}_v + \frac{\qw^{N}_v}{2}} \le 0.449 \,. $$ Therefore, we have \begin{align*} \frac{\E[\alg]-8 \epsilon \opt}{\qw(N)+ \qw(C^\star)} & \ge (1-2\epsilon)\bigg( 1 - \frac{\sum_v \delta \cdot \qw^{C-}_v + q^N_v \qw^{C-}_v}{\sum_v \qw^{C-}_v + \frac{\qw^{N}_v}{2}}\bigg) \\ &\ge (1-2 \epsilon) (1-0.449) = (1-2 \epsilon)\, 0.551 \,.
\end{align*} This implies that $$ \E[\alg] \ge (1-2 \epsilon)\,0.551\,(\qw(N)+\qw(C^\star)) -8 \epsilon\opt\,, $$ which is the desired bound. \end{proof} By the claim above, we have \begin{align*} \E[\alg] &\ge (1-2 \epsilon)0.551(\qw(N)+\qw(C^\star))-8 \epsilon\opt \\ & \ge (1-2 \epsilon)(0.551 \cdot 0.91)(\qw(N)+\qw(C))-8 \epsilon\opt & \text{Since $\qw(C^\star) \ge 0.91 \qw(C)$.} \\ &\ge (1-2 \epsilon)(0.5014)(\qw(N)+\qw(C))-8 \epsilon\opt\\ &\ge(1-2 \epsilon)(0.5014)\opt-8 \epsilon\opt & \qw(N)+\qw(C)=\opt.\\ & \ge (0.5014-10\epsilon) \cdot \opt \,. \end{align*} Also, this fractional matching satisfies the blossom inequalities of size up to $1/\epsilon$. Therefore, by Lemma~\ref{lem:folklore}, the expected weight of the matching of this algorithm is at least $(1-\epsilon)(0.5014 -10 \epsilon) \opt$, and by setting $\epsilon$ small enough, it becomes at least $0.501 \cdot \opt$. \end{proof} \section{Introduction}\label{sec:intro} We consider the following {\em stochastic matching} problem on both weighted and unweighted graphs. In its most general form, an edge-weighted graph $G=(V, E, w)$ along with a parameter $p \in (0, 1)$ is given as input, and each edge of $G$ is {\em realized} independently with probability $p$. We are unaware of the edge realizations, yet our goal is to find a heavy realized matching. To do this, we can select a degree-bounded (i.e., with degree dependent only on $p$) subgraph $Q$ of $G$, {\em query} all of its edges simultaneously, and report its maximum realized matching. Denoting the expected weight of the maximum realized matching of any subgraph $H$ of $G$ by $\expmatching{H}$, the goal is to choose $Q$ so as to maximize $\expmatching{Q}/\expmatching{G}$ --- which is also known as the approximation factor. The restriction on the number of queries per vertex comes from the fact that the querying process is often {\em time consuming} and/or {\em expensive} in the applications of stochastic matching.
Without this restriction, the solution is trivial, as one can simply query all the edges of $G$ and report the maximum matching among those that are realized. The algorithms in this setting are categorized as {\em non-adaptive} since they query all the edges simultaneously, without any prior knowledge about the realizations. In contrast, {\em adaptive} algorithms have multiple rounds of adaptivity, and the queries conducted at each round can depend on the outcome of the prior queries. Non-adaptive algorithms are considered practically more desirable since the queries are not stalled behind each other. In fact, one can see a non-adaptive algorithm as an adaptive algorithm that is restricted to have only one {\em round} of adaptivity; it is thus not surprising that non-adaptive algorithms are generally much more complicated to design and analyze. While $(1-\epsilon)$-approximate adaptive algorithms are known, even for weighted graphs, the literature has identified the half approximation as a barrier for non-adaptive algorithms \cite{DBLP:conf/sigecom/BlumDHPSS15, DBLP:conf/sigecom/AssadiKL16, DBLP:conf/sigecom/AssadiKL17, DBLP:conf/soda/YamaguchiM18, DBLP:conf/sigecom/BehnezhadR18}. Prior to our work, no non-adaptive algorithm beating half approximation was known for weighted graphs, and even for unweighted graphs, the state-of-the-art non-adaptive algorithm of Assadi et al.~\cite{DBLP:conf/sigecom/AssadiKL17} achieves only a slightly better approximation factor of $0.5001$. We introduce new algorithms and techniques to bypass these bounds. For unweighted graphs, we achieve a $0.6568$ approximation, and we show that the same algorithm bypasses the $0.5$ approximation barrier for weighted graphs. In both algorithms, we query only $O(\log(1/p)/p)$ edges per vertex. These results answer several open questions of the literature, on which we elaborate in the forthcoming paragraphs.
Apart from the approximation factor, it is not hard to see that any algorithm achieving a constant approximation has to query $\Omega(1/p)$ edges per vertex (see e.g., \cite{DBLP:conf/sigecom/AssadiKL16}). As such, the number of per-vertex queries conducted by our algorithms is optimal up to a factor of $O(\log(1/p))$. \input{relatedwork} \input{applications} \input{furtherrelatedwork} \section{Preliminaries}\label{sec:preliminaries} \paragraph{Notation.} For any edge set $E$, we denote by $\matching{E}$ the weight of the maximum weighted matching in $E$. We may also abuse notation throughout the paper and use $\matching{E}$ to refer to the set of edges in the maximum weighted matching of $E$. When it is clear from the context, we may use maximum matching instead of maximum {\em weighted} matching. For any $U \subseteq V$, we use $G[U]$ to denote the induced subgraph of $G$ over $U$. \subsection{The Model of Stochastic Matching} We are given a graph $G=(V, E)$ with edge weights $w: E \to \mathbb{R}_+$ along with a fixed parameter $p\in [0, 1]$. Each of the edges in $E$ is {\em realized} independently of the other edges with probability $p$. The {\em realized graph} $G_p=(V, E_p)$ includes an edge $e \in E$ if and only if it is realized. We are not initially aware of the realized graph $G_p$. Our goal, however, is to compute a heavy matching of $G_p$. To do so, we can {\em query} each edge in $E$, and the outcome is whether the edge is realized. For any $E' \subseteq E$, we denote by $\expmatching{E'} := \E[\matching{E' \cap E_p}]$ the expected weight of the realized matching in $E'$. The benchmark in the stochastic matching problem is the omniscient optimum matching $\expmatching{E}$, which we also denote by $\opt$. A {\em non-adaptive} algorithm in this setting has to pick a degree-bounded (with degree dependent only on $1/p$) subgraph $Q$ of $G$ such that $\expmatching{Q}/\opt$, which determines the approximation factor, is maximized.
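To make the model concrete, the following self-contained sketch (entirely our own illustration; it uses brute-force matching and is therefore only suitable for toy instances) estimates $\expmatching{E'}$ by Monte-Carlo sampling of realizations:

```python
import random
from itertools import combinations

def max_matching_weight(edges):
    """Brute-force maximum weighted matching; edges are (u, v, w) triples."""
    best = 0.0
    for k in range(1, len(edges) + 1):
        for sub in combinations(edges, k):
            used = set()
            valid = True
            for u, v, _ in sub:
                if u in used or v in used:  # the subset is not vertex-disjoint
                    valid = False
                    break
                used.add(u)
                used.add(v)
            if valid:
                best = max(best, sum(w for _, _, w in sub))
    return best

def expected_matching_weight(edges, p, trials=20_000, seed=0):
    """Monte-Carlo estimate of E[M(E' intersected with E_p)]:
    realize each edge independently with probability p."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        realized = [e for e in edges if rng.random() < p]
        total += max_matching_weight(realized)
    return total / trials

# A single edge of weight 1 has expected matching weight p.
assert abs(expected_matching_weight([(0, 1, 1.0)], 0.5) - 0.5) < 0.02
# A two-edge path: the matching has weight 1 unless both edges are missing.
assert abs(expected_matching_weight([(0, 1, 1.0), (1, 2, 1.0)], 0.5) - 0.75) < 0.02
```

Under this sketch, the approximation factor of a queried subgraph $Q$ is `expected_matching_weight(Q, p) / expected_matching_weight(E, p)`.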
If the algorithm is randomized, which is the case in our paper, it should succeed with high probability.\footnote{We note that throughout the paper, for simplicity, we analyze the approximation factor of our algorithms in expectation. However, it is easy to boost the success probability to $1-o(1)$ by running several instances of the algorithm to obtain candidate solutions $Q_1, \ldots, Q_k$, and then reporting $Q := \argmax_{Q_i} \expmatching{Q_i}$ as the solution.} \subsection{Background on the Matching Polytope} Fix a graph $G=(V, E)$. A vector $x \in \mathbb{R}^{E}$ is a {\em fractional matching} of $G$ if for any $e \in E$, we have $x_e \geq 0$ and for any $v \in V$ we have $x_v := \sum_{e \ni v} x_e \leq 1$. An integral matching can be seen as a fractional matching where for any $e \in E$ we have $x_e \in \{0, 1\}$. The {\em matching polytope} $\mathcal{P}(G)$ of $G$ is the convex hull of all integral matchings of $G$ represented as above. Edmonds~\cite{edmonds1965maximum} showed in 1965 that $\mathcal{P}(G)$ is the solution set of the following linear program: \begin{flalign*} && & x_e \geq 0 \qquad &\forall e \in E && \\ && & x_v \leq 1 \qquad &\forall v \in V && \\ && & x(U) \leq \lfloor |U|/2 \rfloor &\forall U \subseteq V \text{ with odd } |U| && \end{flalign*} where $x(U)$ denotes $\sum_{e \in G[U]} x_e$. Note that the first two constraints only ensure that $x$ is a valid fractional matching. Constraints of the third type are known as {\em blossom inequalities}. A corollary of Edmonds' theorem is the following: \begin{corollary}\label{cor:edmonds} Let $x$ be a fractional matching of an edge-weighted graph $G$ that satisfies the blossom inequalities, i.e., $x \in \mathcal{P}(G)$. Then $G$ has an integral matching $y$ where $\sum_e y_e w_e \geq \sum_e x_e w_e$.
\end{corollary} We can even relax the blossom inequalities and consider only subsets of size at most $1/\epsilon$, and still ensure that the weight of no fractional matching exceeds the maximum weight of an integral matching by a factor larger than $1/(1-\epsilon)$. This is captured by the following folklore lemma. \begin{lemma}[folklore]\label{lem:folklore} Let $x$ be a fractional matching of an edge-weighted graph $G$ where for any $U \subseteq V$ with $|U| \leq 1/\epsilon$, it satisfies $x(U) \leq \lfloor |U|/2 \rfloor$. Then $G$ has an integral matching $y$ where $\sum_e y_e w_e \geq (1-\epsilon) \sum_e x_e w_e$. \end{lemma} \begin{proof}[Proof sketch] Define $z = x/(1+\epsilon)$. Since $x_v \leq 1$ for any $v$, we have $x(U) \le |U|/2$ for every odd set $U$, and one can easily verify that $z$ satisfies all blossom inequalities. Therefore, by Corollary~\ref{cor:edmonds}, there must exist an integral matching of weight at least that of $z$, which by definition is $\sum_e z_e w_e = (\sum_e x_e w_e)/(1+\epsilon) \geq (1-\epsilon)\sum_e x_e w_e$. \end{proof} We refer interested readers to Section~25.2 of \cite{schrijver2003combinatorial} for a comprehensive overview of the matching polytope. \section{The Algorithm}\label{sec:nonadaptive} In this section, we introduce a non-adaptive algorithm, formalized as Algorithm~\ref{alg:nonadaptive}, as well as a number of analytical tools that we use in analyzing it for weighted and unweighted graphs. We note that, for the sake of brevity, we did not attempt to optimize the constant factors in the description of Algorithm~\ref{alg:nonadaptive}. \begin{algorithm} \caption{A non-adaptive algorithm for the weighted stochastic matching problem.} \label{alg:nonadaptive} \begin{algorithmic}[1] \Statex \textbf{Input:} Input graph $G=(V, E)$, edge weights $w: E \to \mathbb{R}_+$ and realization probability $p \in [0, 1]$. \Statex \textbf{Parameter:} $R = \frac{2000 \log(1/\epsilon)\log(1/\epsilon p)}{\epsilon^4 p}$.
\State $S \gets \emptyset$ \For{$r = 1, \ldots, R$} \State Construct a realization $\mathcal{G}_r=(V, \mathcal{E}_r)$ of $G$, where any edge $e \in E$ appears in $\mathcal{E}_r$ independently with probability $p$. \State Add the edges in the maximum weighted matching $\matching{\mathcal{E}_r}$ of $\mathcal{G}_r$ to $S$. \EndFor \State Query the edges in $S$ and report the maximum weighted matching among its realized edges. \end{algorithmic} \end{algorithm} The main challenge in analyzing Algorithm~\ref{alg:nonadaptive} comes from the fact that the realizations $\mathcal{G}_1, \ldots, \mathcal{G}_R$ that are picked may be very different from the actual realization $G_p$ of $G$ on which the algorithm has to perform well. Take, for instance, the maximum matching $M_1$ of $\mathcal{G}_1$ that we add to $S$ during the first iteration of Algorithm~\ref{alg:nonadaptive}. Since the realization $\mathcal{G}_1$ is drawn from the same distribution as the actual realization $G_p$, one can argue that $M_1$ is as heavy as $M(E_p)$ in expectation. However, the problem is that only a $p$ fraction of the edges in $M_1$ are expected to appear in $E_p$. This means that the realized matching $M(M_1 \cap E_p)$ found by round 1 guarantees only an approximation factor of $p$, which can be arbitrarily small. To achieve our desired approximation factor, we need to argue that the realized edges of $M_1, \ldots, M_R$ can be combined with each other to construct a heavy matching. To show this, we introduce a procedure that constructs a large fractional matching over the realized edges of $S$ and use it to argue that there must exist a heavy realized matching among the edges in $S$. For simplicity of the analysis, we assume that for any realization $\mathcal{G}=(V, \mathcal{E})$ of $G$, the maximum weighted matching denoted by $\matching{\mathcal{E}}$ is unique.
This can be guaranteed by either using a deterministic algorithm for finding the matching $\matching{\mathcal{E}}$ or initially perturbing the edge weights by sufficiently small factors so that the maximum weighted matching becomes unique. With this in hand, we start with the following definition. \begin{definition} For any edge $e$, we denote by $q_e := \Pr_{E_p}[e \in \matching{E_p}]$ the probability with which $e$ appears in the (unique) maximum weighted matching of realization $E_p$. We refer to $q_e$ as the {\em matching probability} of edge $e$. Moreover, for any edge subset $F \subseteq E$, we denote by $q(F) := \sum_{e \in F} q_e$ the sum of matching probabilities of the edges in $F$. We further use $\qw_e$ to denote $q_e \cdot w_e$ and use $\qw(F)$ to denote $\sum_{e \in F}\qw_e$. We call $\qw_e$ (resp. $\qw(F)$) the {\em expected matching weight} of $e$ (resp. $F$). \end{definition} Now, based on their matching probabilities, we partition the edges into two sets of {\em crucial} and {\em non-crucial} edges. \begin{definition}[Crucial and non-crucial edges] For threshold $\tau = \frac{\epsilon^3 p}{20 \log(1/\epsilon)}$, we call any edge with $q_e < \tau$ a {\em non-crucial} edge and any edge with $q_e \geq \tau$ a {\em crucial} edge. We denote by $N$ the set of all non-crucial edges in $E$ and denote by $C$ the set of all crucial edges in $E$. \end{definition} We start with a couple of simple observations that will both help in gaining more insight on the definitions above and be useful in our proofs later. \begin{observation}\label{obs:optisqwhqwl} $\opt = \qw(N) + \qw(C).$ \end{observation} \begin{proof} By definition, we know $\opt = \sum_{e \in E} q_e \cdot w_e = \sum_{e \in E} \qw_e$. Since $E = C \cup N$ and $C \cap N = \emptyset$, we have $\opt =\sum_{e \in N} \qw_e + \sum_{e \in C} \qw_e = \qw(N) + \qw(C)$.
\end{proof} \begin{observation}\label{obs:samplingprob} An edge $e \in E$ is chosen to be in set $S$ by Algorithm~\ref{alg:nonadaptive} with probability exactly $1-(1-q_e)^R$. \end{observation} \begin{proof} In each iteration $r$ of Algorithm~\ref{alg:nonadaptive}, edge $e$ appears in the maximum weighted matching $\matching{\mathcal{E}_r}$ with probability exactly $q_e$. Since Algorithm~\ref{alg:nonadaptive} is composed of $R$ independent iterations (i.e., the realizations $\mathcal{G}_r$ picked at different rounds are independent of each other), the probability that edge $e$ is not picked in any of these rounds is $(1-q_e)^R$ and therefore it appears in $S$ with probability $1-(1-q_e)^R$. \end{proof} As demonstrated by Observation~\ref{obs:samplingprob}, the crucial edges have a higher chance of appearing in the sample $S$. In fact, each crucial edge is sampled in each iteration of Algorithm~\ref{alg:nonadaptive} with probability at least $\tau$ and the number of iterations $R$ of Algorithm~\ref{alg:nonadaptive} is much larger than $1/\tau$; thus we expect almost every crucial edge to be sampled in $S$. We formalize this intuition in the following lemma whose proof we defer to Appendix~\ref{sec:otheromitted}. \begin{lemma}[crucial edges lemma]\label{lem:sampleallcrucial} Let $S$ be the sample obtained by Algorithm~\ref{alg:nonadaptive}. Then, we have $\E[\qw(S \cap C)] \geq (1-\epsilon) \qw(C).$ \end{lemma} \begin{observation}\label{obs:expmatchinggtq} $\expmatching{S} \geq \qw(S)$. \end{observation} \begin{proof} Consider $\mu = \matching{E_p} \cap S$, which is clearly a valid realized matching of $S$. It suffices to show that $\E[\text{weight of }\mu] \geq \qw(S)$. Note that any edge $e \in S$ that appears in $\matching{E_p}$ will appear in $\mu$; therefore, each edge $e \in S$ appears in $\mu$ with probability $q_e$. This means that $\E[\text{weight of }\mu] = \sum_{e\in S} q_e \cdot w_e = \qw(S)$ as desired.
\end{proof} The combination of Lemma~\ref{lem:sampleallcrucial} and Observation~\ref{obs:expmatchinggtq} implies that Algorithm~\ref{alg:nonadaptive} achieves an expected matching of weight at least $(1-\epsilon)\qw(C)$. This implies that if $\qw(C)$ is sufficiently close to $\opt$ (which is equivalent to $\qw(C) + \qw(N)$ by Observation~\ref{obs:optisqwhqwl}), Algorithm~\ref{alg:nonadaptive} obtains a good approximation. However, it might be the case that the expected weight $\qw(C)$ of the crucial edges is very small or even $0$ with $\qw(N)$ being close to $\opt$. To handle this, we need a different argument for non-crucial edges. The challenge is that the matching probability of a non-crucial edge can be arbitrarily small, and may even depend on $n$. Consider for example the complete bipartite graph $G_{n, n}$ with all edge weights equal to 1 (i.e., the graph is unweighted). One can show that the expected matching of $G_{n, n}$ is as large as $n - o(1)$ with high probability (see e.g., \cite{DBLP:conf/sigecom/BlumDHPSS15}) while the matching probability of every edge\footnote{Here for the sake of this example, we assume that the algorithm used to obtain the maximum matching of a realization of $G_{n, n}$ is not biased towards including any specific edge.} in $G_{n, n}$ is roughly $\sfrac{1}{n}$. Therefore, since $S$ is of constant degree, $q(S)$ will not even be a constant fraction of $n - o(1)$ and we cannot use Observation~\ref{obs:expmatchinggtq} to argue that the \expmatching{S} is large. To alleviate the above-mentioned problem, we need to be able to get a large matching among the non-crucial edges too. This is the issue that we address next. \paragraph{A lemma for non-crucial edges.} We describe a procedure -- formalized as Procedure~\ref{proc:non-crucial} -- to construct a heavy fractional matching on the realized portion of the non-crucial edges $S \cap E_p$ of $S$ which also enjoys some other properties of interest.
For simplicity of notation, we use $S_p$ to denote $S \cap E_p$. \begin{procedure}{Constructs a fractional matching $x^N$ on non-crucial realized edges of $S$. }\label{proc:non-crucial} For any edge $e \in S_p$, initially set $\tilde x^N_e \gets 0$. Then update $\tilde x^N$ as follows: \begin{enumerate}[label={(\arabic*)}] \item For any realized sampled non-crucial edge $e$ (i.e., $e \in S_p \cap N$), set $\tilde x^N_e \gets \min\{f_e/p, 2\tau/p \}$ where $f_e$ denotes the fraction of iterations of Algorithm~\ref{alg:nonadaptive} in which edge $e$ is part of the picked matching $M(\mathcal{E}_r)$. \item Initially set the {\em scaling-factor} $s_e$ of each edge $e$ to be $s_e = 1$. Then loop over the vertices $v \in V$ in an arbitrary order and for any $e$ incident to $v$, update $$s_e \gets \min\Big\{s_e, \max\{q^N_v, \epsilon\}/\tilde x^N_v\Big\},$$ where $q^N_v := \sum_{e: e\in N, v \in e}q_e$ denotes the {\em non-crucial-weight} of vertex $v$. \item Scale down the fractional matching in the following way: for any edge $e$, set $x^N_e \gets \tilde x^N_e \cdot s_e$. \end{enumerate} \end{procedure} The following lemma highlights the properties of the procedure above. \begin{lemma}[non-crucial edges lemma]\label{lem:non-cruciallemma} The fractional matching $x^N$ obtained by Procedure~\ref{proc:non-crucial} has the following properties: { \setlength{\abovedisplayskip}{5pt} \setlength{\belowdisplayskip}{2pt} \begin{enumerate} \item For any $U \subseteq V$ with $|U| \le 1/\epsilon$, $x^N$ fills at most an $\epsilon$ fraction of its blossom inequality. That is, \begin{equation*} x^N(U) \leq \epsilon \lfloor |U|/2 \rfloor \qquad \forall U \subseteq V: |U| \leq 1/\epsilon. \end{equation*} \item The {\em non-crucial budgets} of the vertices are (almost) preserved.
More precisely, $$x^N_v \leq \max \{q^N_v, \epsilon\} \qquad \forall v \in V.$$ \item The expected weight of the fractional matching is sufficiently close to that of non-crucial edges, i.e., $$\E\bigg[\sum_{e \in S_p \cap N} x^N_e \cdot w_e \bigg]\geq (1-10\epsilon) \qw(N).$$ \end{enumerate} } \end{lemma} \paragraph{The intuition behind Procedure~\ref{proc:non-crucial}.} Observe that the fractional matching constructed by Procedure~\ref{proc:non-crucial} relies critically on $f_e$, the fraction of iterations in which edge $e$ is sampled by the algorithm. Recall that the probability with which Algorithm~\ref{alg:nonadaptive} samples an edge $e$ is precisely equal to $q_e$. Therefore it is not hard to see that $\E[f_e] = q_e$. Similar to $q$, we can see the collection of $f_e$'s on all edges as a fractional matching. In this regard, since $\E[f_e] = q_e$, we have $\E[\sum_{e \in N} f_e \cdot w_e] = \sum_{e \in N} q_e \cdot w_e = \qw(N)$. Despite these similarities, note that by definition, $f$ is non-zero only on the edges sampled by Algorithm~\ref{alg:nonadaptive}. This is desirable since we want to construct a large fractional matching only on the sampled edges. However, we further want our fractional matching to be non-zero only on the {\em realized} sampled edges. To do this, the final fractional matching $x$ that we construct is roughly as follows: $x_e$ is $f_e / p$ if $e$ is realized and it is 0 otherwise. Since each edge is realized with probability $p$, we have $\E[x_e] = p \cdot (f_e/p) + (1-p) \cdot 0 = f_e$. Note, however, that we have to make sure that $x$ is a valid fractional matching. That is, $x$ should not assign a fractional value larger than 1 to any vertex. (Properties~1 and~2 impose even stricter restrictions.) To do this, we may have to {\em manually} scale down the value of $x$ after observing the realization. However, we need to argue that this does not hurt its total size by a significant factor.
For this, we use the fact that $f_e$ for most non-crucial edges is very small due to its value being close to $q_e$ which is at most $\tau$ for all non-crucial edges. This, combined with the independence of edge realizations, indicates e.g., that it is very unlikely that $x_v$ exceeds its expected value by a larger than $1+\epsilon$ factor. Note that unfortunately the same procedure does not provide a good approximation on the crucial edges. The reason is that for crucial edges, $f_e$ can be as large as $p$ and the probability that $x_v$ exceeds 1 will not be negligible. As for the proof of Lemma~\ref{lem:non-cruciallemma}, note that the first and the second properties are directly satisfied by Procedure~\ref{proc:non-crucial}. To see this, observe that for any edge $e$, we have $x^N_e \leq 2 \tau/p \le \epsilon^3$. This means that for any subset $U$ of the vertices, we have \begin{align*} x^N(U) &\leq \epsilon^3 \cdot \binom{|U|}{2} = \epsilon^3 \cdot \dfrac{|U| \cdot \big(|U|-1\big)}{2}, \end{align*} which implies for any $U$ with $|U| \leq 1/\epsilon$, that \begin{align*} x^N(U) &\leq \epsilon^3 \cdot \dfrac{|U|\cdot\big(|U|-1\big)}{2} \leq \epsilon^2 \cdot \dfrac{|U|-1}{2} \leq \epsilon^2 \big\lfloor |U|/2 \big\rfloor \leq \epsilon \big\lfloor |U|/2 \big\rfloor, \end{align*} completing the proof of property 1. Property 2 is also simple to prove. In fact, steps 2 and 3 of Procedure~\ref{proc:non-crucial} are solely written to satisfy this property. To see this, take a vertex $v$: if $\tilde x^N_v \leq \max\{q^N_v, \epsilon\}$, the non-crucial budget of $v$ is preserved since the scaling-factors are no more than 1. Otherwise, by the end of step 2 we ensure that for any edge incident to $v$ we have $s_e \leq \max\{q^N_v, \epsilon\}/\tilde x^N_v$. Thus, once completing step 3, we have \begin{equation*} x^N_v \leq \tilde x^N_v \cdot \max\{q^N_v, \epsilon\}/\tilde x^N_v = \max\{q^N_v, \epsilon\}, \end{equation*} which is the desired bound for property 2.
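The three steps of Procedure~\ref{proc:non-crucial} are mechanical enough to transcribe directly. Below is a minimal Python sketch on hypothetical toy values of $f_e$ and $q_e$ (the graph, the numbers, and the helper names are ours, chosen only to exercise the steps), together with checks of the per-edge cap used for property 1 and of property 2 itself:

```python
import math

eps, p = 0.25, 0.5
tau = eps ** 3 * p / (20 * math.log(1 / eps))  # the crucial/non-crucial threshold

# hypothetical realized sampled non-crucial edges with observed frequencies f_e
# and matching probabilities q_e (all below tau, hence non-crucial)
f = {(0, 1): tau / 2, (1, 2): tau / 3, (2, 3): tau / 2, (0, 3): tau / 4}
q = dict(f)  # in expectation f_e = q_e; the toy takes them equal

def vertex_sum(x, v):
    # total fractional value that x assigns to vertex v
    return sum(xe for (a, b), xe in x.items() if v in (a, b))

# step 1: tentative values, capped at 2*tau/p
x_tilde = {e: min(fe / p, 2 * tau / p) for e, fe in f.items()}

# step 2: scaling factors, looping over vertices in an arbitrary order
vertices = {v for e in f for v in e}
s = {e: 1.0 for e in f}
for v in vertices:
    qN_v = sum(qe for (a, b), qe in q.items() if v in (a, b))
    xv = vertex_sum(x_tilde, v)
    for e in f:
        if v in e and xv > 0:
            s[e] = min(s[e], max(qN_v, eps) / xv)

# step 3: scale down
x = {e: x_tilde[e] * s[e] for e in f}

# the per-edge cap behind property 1, and property 2 itself
assert all(xe <= 2 * tau / p + 1e-12 for xe in x.values())
for v in vertices:
    qN_v = sum(qe for (a, b), qe in q.items() if v in (a, b))
    assert vertex_sum(x, v) <= max(qN_v, eps) + 1e-12
```

On these tiny values the caps never bind and all scaling factors stay at $1$; the assertions merely confirm that the construction respects the bounds used in the proofs above.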
It only remains to prove that the fractional matching assigned to the realized sampled non-crucial edges is large, as required by property 3. The proof of this part is rather technical and to prevent interruptions to the flow of the paper, we defer it to Appendix~\ref{sec:missingproofs}. \paragraph{Implications.} By coupling Lemma~\ref{lem:sampleallcrucial} and Lemma~\ref{lem:non-cruciallemma} we immediately get an analysis that ensures Algorithm~\ref{alg:nonadaptive} obtains an (almost) $1/2$ approximation. To see this, recall by Observation~\ref{obs:optisqwhqwl} that $\opt = \qw(C) + \qw(N)$, thus, either $\qw(C) \geq \opt/2$ or $\qw(N) \geq \opt/2$. If $\qw(C) \geq \opt/2$, then Lemma~\ref{lem:sampleallcrucial} implies that the expected matching weight of our sample is at least $(1-\epsilon)\opt/2$. On the other hand, if $\qw(N) \geq \opt/2$, the fractional matching obtained by Lemma~\ref{lem:non-cruciallemma}, which also satisfies the blossom inequalities, implies that an integral matching of weight at least $(1-10\epsilon)\opt/2$ must exist in the realization. \begin{corollary}\label{cor:halfapprox} For any desirably small $\epsilon$, Algorithm~\ref{alg:nonadaptive} provides a $(\sfrac{1}{2}-\epsilon)$ approximation for weighted graphs by querying $\Ot{1/\epsilon^4 p}$ edges per vertex. \end{corollary} Note that Corollary~\ref{cor:halfapprox} already improves the number of per-vertex queries of known results for weighted graphs due to \cite{DBLP:conf/sigecom/BehnezhadR18, DBLP:conf/soda/YamaguchiM18}. Our goal, however, is to provide a much better guarantee on the approximation factor. Suppose, for example, that $\qw(N) = \qw(C) = \opt/2$. In this case, to achieve any approximation factor better than $1/2$, we need to argue that the crucial edges and the non-crucial edges can augment each other to obtain a matching that is much heavier than what they achieve individually. This is the issue that we address in the next two sections.
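For concreteness, the sampling loop of Algorithm~\ref{alg:nonadaptive} can be transcribed in a few lines of Python. This is only a toy sketch (the brute-force matching routine and the tiny random instance are ours, and $R$ is set arbitrarily rather than by the formula of the algorithm):

```python
import itertools
import random

def max_weight_matching(edges):
    """Brute force over edge subsets; only usable for tiny graphs.
    `edges` maps (u, v) -> weight; returns (matching, weight)."""
    best, best_w = set(), 0.0
    items = list(edges.items())
    for k in range(1, len(items) + 1):
        for sub in itertools.combinations(items, k):
            used, w, ok = set(), 0.0, True
            for (u, v), wt in sub:
                if u in used or v in used:
                    ok = False
                    break
                used.update((u, v))
                w += wt
            if ok and w > best_w:
                best, best_w = {e for e, _ in sub}, w
    return best, best_w

rng = random.Random(0)
edges = {(0, 1): 3.0, (1, 2): 2.0, (2, 3): 3.0, (0, 3): 1.0, (1, 3): 1.5}
p, R = 0.5, 20

# lines 1-4: union of the max-weight matchings of R sampled realizations
S = set()
for _ in range(R):
    realized = {e: w for e, w in edges.items() if rng.random() < p}
    M_r, _ = max_weight_matching(realized)
    S |= M_r

# line 5: query S against the "true" realization and report the best matching
true_realization = {e for e in edges if rng.random() < p}
M, w = max_weight_matching({e: edges[e] for e in S if e in true_realization})
```

The point of the analysis in this section is exactly that the matching `M` computed on the last line is, in expectation, heavy relative to $\opt$ even though $S$ was built from independent realizations.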
\input{augmentation} \input{augmentation-weighted} \subsection{Prior Work \& Our Contribution}\label{sec:priorworkandresults} \paragraph{Prior work.} The stochastic matching problem has been intensively studied during the past decade due to its diverse applications from {\em kidney exchange} to {\em labor markets} and {\em online dating} (we overview these applications in Section~\ref{sec:applications}). Directly related to the setting that we consider are the papers by Blum et al.~\cite{DBLP:conf/sigecom/BlumDHPSS15} (which introduced this variant of stochastic matching), Assadi et al.~\cite{DBLP:conf/sigecom/AssadiKL16,DBLP:conf/sigecom/AssadiKL17}, Yamaguchi and Maehara \cite{DBLP:conf/soda/YamaguchiM18}, and Behnezhad and Reyhani \cite{DBLP:conf/sigecom/BehnezhadR18}. Table~\ref{table:relatedwork} gives a brief survey of known results due to these papers as well as a comparison to our results. We give a more detailed description of the main differences below. \begin{table} \centering \resizebox{9.5cm}{!}{% \begin{tabular}{?l|l|c|c?} \thickhline & Reference & \multicolumn{1}{l|}{Apx factor} & \multicolumn{1}{l?}{Per-vertex queries} \\ \hline \multirow{4}{*}{Unweighted} & \cite{DBLP:conf/sigecom/BlumDHPSS15} & $0.5-\epsilon$ & $\widetilde{O}(1/p^{2/\epsilon})$ \\ \cline{2-4} & \cite{DBLP:conf/sigecom/AssadiKL16} & $0.5-\epsilon$ & $\widetilde{O}(1/\epsilon p)$ \\ \cline{2-4} & \cite{DBLP:conf/sigecom/AssadiKL17} & $0.5001$ & $\widetilde{O}(1/p)$ \\ \hhline{|~|-|-|-|} & \textbf{This paper} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{1,1,1}}c@{}}$0.6568$\\ $\big(\approx 4\sqrt{2}-5\big)$ \end{tabular} & $\widetilde{O}(1/p)$ \\ \thickhline \multirow{4}{*}{Weighted} & \cite{DBLP:conf/soda/YamaguchiM18} & \multirow{3}{*}{$0.5-\epsilon$ } & $\widetilde{O}(W\log(n)/\epsilon p)$ \\ \cline{2-2}\cline{4-4} & \cite{DBLP:conf/soda/YamaguchiM18} (B) & & $\widetilde{O}(W/\epsilon p)$ \\ \cline{2-2}\cline{4-4} & \cite{DBLP:conf/sigecom/BehnezhadR18} & & $\widetilde{O}(1/\epsilon 
p^{4/\epsilon})$ \\ \hhline{|~|-|-|-|} & \textbf{This paper} & $0.501$ & $\widetilde{O}(1/p)$ \\ \thickhline \end{tabular} }% \caption{Bounds known for non-adaptive algorithms. We have hidden $\log(1/\epsilon p)$ factors to simplify comparison. The result indicated with (B) in the reference assumes that the input graph is bipartite.} \label{table:relatedwork} \vspace{-0.1cm} \end{table} Blum et al. introduced the following algorithm: \begin{tbox} \textbf{\hypertarget{alga}{Algorithm}} $\mathcal{A}$ (\cite{DBLP:conf/sigecom/BlumDHPSS15}): {\em Pick a maximum matching $M_i$ from $G$ and remove all of its edges. Repeat this for $R$ iterations, then query the edges in $M_1 \cup \ldots \cup M_R$ simultaneously and report the maximum realized matching among them.} \end{tbox} \newcommand{\alga}[0]{Algorithm~\hyperlink{alga}{$\mathcal{A}$}} \newcommand{\algant}[0]{Algorithm~$\mathcal{A}$} It is easy to see that $R$, in \alga{}, determines the per-vertex queries. This means that it suffices to argue that a small value for $R$ is sufficient to get our desired approximation factors. Blum et al.~\cite{DBLP:conf/sigecom/BlumDHPSS15} showed that for unweighted graphs, setting $R = 1/p^{O(1/\epsilon)}$ is sufficient to get a $0.5-\epsilon$ approximation. Interestingly, the follow-up results were achieved by the same algorithms (with minor changes) and differed mainly in the analysis. 
Assadi et al.~\cite{DBLP:conf/sigecom/AssadiKL16} showed that setting $R= \widetilde{O}(1/\epsilon p)$ suffices to achieve a $0.5-\epsilon$ approximation, exponentially improving the dependence on $1/\epsilon$.\footnote{The algorithm of Assadi et al.~\cite{DBLP:conf/sigecom/AssadiKL16} also incorporates a {\em sparsification step} to ensure $\opt = \Omega(n)$.} Yamaguchi and Maehara~\cite{DBLP:conf/soda/YamaguchiM18} generalized these results to weighted graphs.\footnote{The generalization of Blum et al.'s algorithm to weighted graphs is simply to pick maximum weighted matchings in each round/iteration.} They showed that it suffices to set $R=O(W \log n /\epsilon p)$ to achieve the same approximation factor of $0.5-\epsilon$ where $W$ denotes the maximum integer edge weight. Behnezhad and Reyhani \cite{DBLP:conf/sigecom/BehnezhadR18} further showed that the same approximation factor of $0.5-\epsilon$ can be achieved for weighted graphs by setting $R = O(1/\epsilon p^{4/\epsilon})$. While this removes the dependence on $W$ and $n$, making the bound a constant, it has a worse dependence on $1/\epsilon$ than that of \cite{DBLP:conf/soda/YamaguchiM18}. Observe that the approximation factor of all the algorithms mentioned above is the same. The only exception in the literature is the algorithm of Assadi et al.~\cite{DBLP:conf/sigecom/AssadiKL17} which achieves a $0.5001$ approximation for unweighted graphs. Their algorithm first extracts a {\em large} $b$-matching (which depends on the expected size of the realized matching) from the graph and then applies \alga{} on the remaining graph. Interestingly, they show that the edges chosen by \alga{} can be used to augment the realized matching among the edges of the $b$-matching, which leads to bypassing the half approximation barrier for unweighted graphs. \smparagraph{Our contribution.} Despite the theoretical guarantees of the literature for \alga{}, it has its drawbacks.
Blum et al.~\cite[Theorem 5.2]{DBLP:journals/corr/BlumHPS14} give examples on which it does not achieve better than a $\sfrac{5}{6}$ approximation. It also seems notoriously difficult (if not impossible) to analyze anything better than a $0.5$ approximation for \alga{} alone. We consider another algorithm which is also very simple and natural: \begin{tbox} \textbf{Algorithm} \hypertarget{algb}{$\mathcal{B}$} (Formally as Algorithm~\ref{alg:nonadaptive}): First draw $R$ realizations $\mathcal{G}_1, \ldots, \mathcal{G}_R$ of $G$ independently. Then from each of these realizations $\mathcal{G}_i$, pick a maximum (weighted) matching $M_i$. Finally, query the edges that appear in $M_1 \cup \ldots \cup M_R$ simultaneously and report the maximum realized matching among them. \end{tbox} \newcommand{\algb}[0]{Algorithm~\hyperlink{algb}{$\mathcal{B}$}} \newcommand{\algbnt}[0]{Algorithm~$\mathcal{B}$} Similar to \alga{}, here $R$ determines the number of per-vertex queries. We analyze \algb{} for both weighted and unweighted graphs. \begin{graytbox} \begin{result}[formally as Theorem~\ref{thm:nonadaptiveweighted}]\label{res:weightednonadaptive} For $R=O(\frac{\log (1/p)}{p})$, \algb{} achieves a $0.501$ approximation on weighted graphs. \end{result} \end{graytbox} Result~\ref{res:weightednonadaptive} implies the first non-adaptive algorithm that breaks the $0.5$ approximation barrier for weighted graphs. The number of per-vertex queries of this result also improves that of $0.5-\epsilon$ approximations of \cite{DBLP:conf/soda/YamaguchiM18} and \cite{DBLP:conf/sigecom/BehnezhadR18}. \begin{graytbox} \begin{result}[formally as Theorem~\ref{thm:nonadaptiveunweighted}]\label{res:unweighted} For $R=O(\frac{\log (1/p)}{p})$, \algb{} achieves a \mbox{$0.6568$} approximation on unweighted graphs. 
\end{result} \end{graytbox} Result~\ref{res:unweighted} improves over the state-of-the-art $0.5001$ approximate algorithm of Assadi et al.~\cite{DBLP:conf/sigecom/AssadiKL17}.\footnote{For the case of unweighted graphs, in an independent work, Assadi and Bernstein~\cite{assadisosa} give an (almost) $2/3$ approximation which is slightly better than our factor. Their algorithm, however, is highly tailored for unweighted graphs and gives no guarantee for the weighted case.} In our analysis, we devise different {\em procedures} that, given the query outcomes, construct large {\em fractional matchings} over the realized edges. Then, based on the size of this fractional matching, we infer that there must also be a large integral realized matching. We give more high-level ideas and intuitions about these procedures in Section~\ref{sec:techniques}. \section{Technical Overview}\label{sec:techniques} To give an intuition about the true differences between our algorithm (\algb{}) and the standard non-adaptive algorithm of the literature (\alga{}), we start by restating the bad example of Blum et al.~\cite[Theorem 5.2]{DBLP:journals/corr/BlumHPS14} for \alga{} and describing how \algb{} overcomes it. We then proceed to give intuitions on how we analyze the performance of \algb{}. \smparagraph{A comparison of \algant{} and \algbnt{}.} Consider the graph $G=(V, E)$ of Figure~\ref{fig:badexample}-(a) whose vertex set is partitioned into six subsets $A$, $B_1$, $B_2$, $C_1$, $C_2$, and $D$, each of size $N$. The edge set of the graph contains complete bipartite graphs between pairs $(A, B_1)$, $(A, B_2)$, $(D, C_1)$, and $(D, C_2)$ and perfect matchings between pairs $(B_1, C_1)$ and $(B_2, C_2)$. Assume also that the realization probability $p$ is $0.5$. \begin{figure}[h] \centering \includegraphics[scale=0.85]{figs/badexample-nohighlight} \vspace{-0.2cm} \caption{Figure (a) illustrates the input graph. Figure (b) illustrates a potential subset of queried edges by \algant{}.
Figure (c) illustrates the expected structure of queried edges of \algbnt{}.} \label{fig:badexample} \end{figure} \vspace{-0.1cm} It is not hard to confirm that the expected omniscient optimum matching of $G_p$ is an almost perfect matching of size $3N - o(N)$. It suffices to add the realized edges between $(B_1, C_1)$ and $(B_2, C_2)$ to $\opt$, which roughly matches half of the vertices of each of these sets in expectation, and then find large realized matchings between the remaining vertices and those in $A$ and $D$. Recall that \alga{} picks an arbitrary maximum matching $M_i$ in each iteration and removes it from the graph. Suppose that these matchings are as follows: The first matching $M_1$ contains the edges in $(B_1, C_1)$, a perfect matching in $(A, B_2)$, and a perfect matching in $(D, C_2)$. Matching $M_2$ contains the edges in $(B_2, C_2)$, a perfect matching in $(A, B_1)$, and a perfect matching in $(D, C_1)$. Each of the remaining matchings $M_3, \ldots, M_R$ is the union of a perfect matching in $(A, B_2)$ and a perfect matching in $(D, C_2)$. The queried edges by \alga{} are illustrated in Figure~\ref{fig:badexample}-(b). Since for every vertex in $B_1$ or $C_1$, only two edges are queried and $p=0.5$, we expect a $1/4$ fraction of these vertices to have no realized queried edges. This means that \alga{} cannot construct a near perfect matching. Since \algb{} incorporates a randomization throughout the process, particularly in choosing realizations $\mathcal{G}_1, \ldots, \mathcal{G}_R$ from which it picks matchings $M_1, \ldots, M_R$, bad cases such as the one described above cannot happen.
In particular, for the graph of Figure~\ref{fig:badexample}, every vertex in $B_1 \cup B_2$ or $C_1 \cup C_2$ is, in roughly half of the realizations, matched to a vertex in $A$ or $D$; thus we query $\widetilde{\Omega}(R)$ edges for each of these vertices and it is not hard to show that for a constant $R$ depending only on $\epsilon$ and $p$, \algb{} achieves a $1-\epsilon$ approximation for this example (see Figure~\ref{fig:badexample}-(c)). \paragraph{Roadmap for analyzing \algbnt{}.} To convey the main intuitions behind the analysis, we make a few simplifying assumptions. First, assume that the input graph is unweighted. Denote the set of queried edges of \algb{} by $S$ and further denote by $S_p$ those edges in $S$ that are realized. Our goal is to show that in expectation, there exists a matching of size $0.65\opt$ in $S_p$, or in other words, $\expmatching{S} \geq 0.65\opt$. To do this, by Lemma~\ref{lem:folklore}, it suffices to show that there exists a fractional matching of size $0.65\opt$ in $S_p$ that also satisfies blossom inequalities. Let us further assume that $G$ is bipartite so that any fractional matching satisfies blossom inequalities automatically. Denote by $q_e$ the probability that edge $e$ appears in the omniscient optimum matching.\footnote{We assume that given a realization, the maximum matching is unique. This can be achieved by using a deterministic matching algorithm.} Recall that in each iteration of \algb{}, we draw a realization and add its maximum matching to $S$. Therefore, $q_e$ also denotes the probability that we sample edge $e$ in each iteration of \algb{}. One can easily confirm that for any vertex $v$, we have $\sum_{e \ni v}q_e \leq 1$. Therefore, one can think of $q_e$'s as a fractional matching with some other nice properties. Denote this fractional matching by $q$.
The reader soon notices the following useful properties of $q$: \begin{enumerate}[label={(P\arabic*)}] \item For any edge $e$, we have $q_e \leq p$.\\ {\em Proof sketch. Each edge is realized w.p.\footnote{Throughout, we use w.p. to abbreviate ``with probability".} $p$ and thus appears in \opt{} w.p. at most $p$.} \item For any set $F \subseteq E$, the expected matching \expmatching{F} of $F$ has size at least $q(F) := \sum_{e\in F} q_e$.\\ {\em Proof sketch. It suffices to consider, for each realization $E_p$ of $E$, the matching $F \cap \matching{E_p}$.} \end{enumerate} \begin{figure}[hbt] \centering \vspace{-0.4cm} \includegraphics[scale=0.9]{figs/crucialnoncrucialexample} \vspace{-0.2cm} \caption{} \vspace{-0.3cm} \label{fig:crucialnoncrucial} \end{figure} We set a threshold $\tau \approx \delta p$ for a sufficiently small constant $\delta < 1$ and partition $E$ into two subsets of {\em crucial} edges $C := \{ e \, | \, q_e \geq \tau\}$ and {\em non-crucial} edges $N := \{e \, | \, q_e < \tau\}$. Figure~\ref{fig:crucialnoncrucial} illustrates the values of $q_e$ over a simple example for which $p=0.5$. In this example, each wavy edge on the sides, whenever realized, appears in \opt{}, thus they all have $q_e = p = 0.5$ and are crucial. The edges in between are significantly less likely to be in \opt{} and for all of them $q_e < 0.006$, thus they are all considered non-crucial. Note that $q$ is merely a function of the graph's structure and is independent of our algorithms. Our goal is to show that within only $R = \Ot{1/\tau} = \Ot{1/p}$ iterations, \algb{} achieves our desired guarantee. To do this, we prove two canonical lemmas. \argparagraph{Crucial edges lemma}{Formally as Lemma~\ref{lem:sampleallcrucial}} \algb{} samples almost all crucial edges. Therefore, by (P2), the expected matching $\expmatching{S \cap C}$ has size at least $(1-\epsilon) q(C)$ where $\epsilon$ is any desirably small constant ($\epsilon$ and $\delta$ are interdependent).
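The quantity $q_e$ and properties (P1) and (P2) are easy to probe empirically. The sketch below (a toy Monte Carlo of ours, with a deterministic tie-breaking rule so that the maximum matching of each realization is unique) estimates $q_e$ on a small unweighted graph and checks (P1) together with the fractional-matching property $\sum_{e \ni v} q_e \le 1$:

```python
import itertools
import random

def is_matching(sub):
    used = set()
    for (u, v) in sub:
        if u in used or v in used:
            return False
        used.update((u, v))
    return True

def max_matching(realized):
    # first maximum matching in a fixed enumeration order => unique choice
    edges = sorted(realized)
    for k in range(len(edges), 0, -1):
        for sub in itertools.combinations(edges, k):
            if is_matching(sub):
                return sub
    return ()

rng = random.Random(1)
E = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
p, T = 0.5, 4000

count = {e: 0 for e in E}
for _ in range(T):
    realized = [e for e in E if rng.random() < p]
    for e in max_matching(realized):
        count[e] += 1
q = {e: c / T for e, c in count.items()}

# (P1): q_e <= p, up to sampling noise
assert all(qe <= p + 0.05 for qe in q.values())
# q is a fractional matching: each vertex is matched at most once per trial
for v in range(4):
    assert sum(qe for (a, b), qe in q.items() if v in (a, b)) <= 1.0
```

Note that the vertex constraint holds exactly (not just approximately), since in every trial each vertex contributes to at most one matched edge.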
\vspace{0.2cm} For non-crucial edges, the argument above does not work. The reason is that, as illustrated in Figure~\ref{fig:crucialnoncrucial}, the number of non-crucial edges connected to each vertex can be much more than the maximum degree of $S$ (which determines the number of per-vertex queries), thus, we can only sample a small portion of non-crucial edges which means $q(S \cap N)$ can be arbitrarily smaller than $q(N)$. Instead, we take a different approach for non-crucial edges. \argparagraph{Non-crucial edges lemma}{Formally as Lemma~\ref{lem:non-cruciallemma}} One can construct a fractional matching $x$ over the realized non-crucial edges of $S$ (i.e., over the edges in $E_p \cap S \cap N$) whose size is at least $(1-\epsilon)q(N)$. Moreover, for any vertex $v$, $x_v$ is no more than $\max\{q^N_v, \epsilon\}$ where we call $q^N_v := \sum_{e \ni v : e\in N} q_e$ the {\em non-crucial budget} of each vertex. \vspace{0.2cm} The precise proof of the non-crucial edges lemma is out of the scope of this section. However, it relies critically on the fact that $q_e$ of non-crucial edges is small. For example, if we use the same technique to construct a fractional matching for the crucial edges, we only end up with a fractional matching of size $\approx 0.4 q(C)$. The combination of the two lemmas above immediately implies a $0.5 - \epsilon$ approximation. For this, one can easily show that $q(C) + q(N) = \opt$, and thus, either $q(C) \geq 0.5\opt$ or $q(N) \geq 0.5\opt$. For the former case, we can use the crucial edges lemma to argue that we get an almost $0.5$ approximation and for the latter we can use the non-crucial edges lemma. However, as mentioned before, our goal is to provide a much better approximation guarantee than $0.5-\epsilon$. Therefore, we have to show that the realized portions of the crucial and non-crucial edges can be augmented to construct a much larger matching. 
To do this, we have to devise more involved procedures that construct large fractional matchings over the realized edges of $S$ by combining both crucial and non-crucial edges. Note that these procedures are merely analytical tools and our algorithm is still \algb{}. For unweighted graphs, the procedure that we use --- formalized as Procedure~\ref{proc:crucial} --- is roughly as follows: We first use the non-crucial edges lemma to construct a fractional matching $x$ of size $(1-\epsilon)q(N)$ on the non-crucial edges without ``looking" at the realization of crucial edges. Independently, we reveal realized crucial edges, and pick a large realized matching $\mu^C$ among them.\footnote{Due to technical reasons, matching $\mu^C$ is not simply the largest realized matching of crucial edges and has to be drawn according to a specific distribution. See Procedure~\ref{proc:crucial} for more details.} Then in our fractional matching $x$, we allocate the maximum possible fractional matching value to the edges in $\mu^C$ while ensuring that $x$ remains a valid fractional matching. In Theorem~\ref{thm:nonadaptiveunweighted}, we give an analysis that shows Procedure~\ref{proc:crucial} in expectation constructs a fractional matching of size $(1-\epsilon)(4\sqrt{2}-5)\opt$. This implies that \algb{} achieves an (almost) $(4\sqrt{2}-5)\approx 0.6568$ approximation. We note that the second property of the non-crucial edges lemma, which guarantees that the constructed fractional matching does not violate the non-crucial budget of any vertex, plays an important role in the analysis. While we have no upper bound on the best provable approximation factor for \algb{}, we show that at least for Procedure~\ref{proc:crucial}, our analysis is tight. That is, we give an example in Lemma~\ref{lem:unweightedtight} for which the fractional matching constructed by Procedure~\ref{proc:crucial} has size no more than $(4\sqrt{2}-5+o(1))\opt$.
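As a trivial arithmetic check (ours) that the quoted decimal matches the closed-form constant:

```python
import math

approx = 4 * math.sqrt(2) - 5
# 4*sqrt(2) - 5 = 0.65685..., matching the 0.6568 factor quoted above
assert abs(approx - 0.6568) < 1e-4
```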
\smparagraph{Generalization to weighted graphs.} In generalizing our results to weighted graphs, we follow the same approach in partitioning the edges into crucial and non-crucial subsets. In fact, both the crucial and non-crucial edges lemmas can be adapted seamlessly to the weighted graphs leading to a simple (almost) half approximation as described above. However, we show that a large class of procedures (including Procedure~\ref{proc:crucial}) achieve no more than a $0.5$ approximation for weighted graphs. The authors find this strikingly surprising, which further highlights the true challenge in beating half approximation for weighted graphs. As a result, the procedure that we use to bypass half approximation for weighted graphs (formalized as Procedure~\ref{proc:crucialweighted}) is much more intricate and achieves an approximation factor of only $0.501$ (see Theorem~\ref{thm:nonadaptiveweighted}). \section{Appendix: Used Inequalities} \begin{lemma}[Chernoff bound]\label{lem:chernoff} Given a real number $b>0$, let $X_1, X_2, \ldots, X_n$ be $n$ independent random variables such that $0 \le X_i \le b$ for every $X_i$. Let $X= \sum_{i=1}^n X_i$ and $\mu= \E[X]$. Then for $0 \le \delta \le 1$, $$ P [ X \ge (1+ \delta)\mu ] \le \exp\Big(-\dfrac{\delta^2 \mu}{3 b}\Big) \,. $$ Also, for $\delta \ge 1$, $$ P [ X \ge (1+ \delta)\mu ] \le \exp\Big(-\dfrac{\delta \mu}{3 b}\Big) \,. $$ \end{lemma} \begin{lemma} \label{lem:oneovere} Let $f(x) = (1-x)^{1/x}$. Then, for any $0<x \le 1$, $f(x) \le \frac{1}{e}$. \end{lemma} \begin{proof} [Proof of Lemma~\ref{lem:oneovere}] For any $0 < x < 1$, taking logarithms gives $\ln f(x) = \frac{\ln(1-x)}{x}$. Since $\ln(1-x) \le -x$ for all $0 \le x < 1$, it follows that $\ln f(x) \le -1$ and hence $$f(x) \le \frac{1}{e}.$$ The case $x=1$ is immediate since $f(1) = 0$. \end{proof}
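A quick numeric sanity check (ours) of Lemma~\ref{lem:oneovere}:

```python
import math

def f(x):
    return (1 - x) ** (1 / x)

# the bound holds across (0, 1] ...
for i in range(1, 1001):
    assert f(i / 1000) <= 1 / math.e + 1e-12
# ... and is tight as x -> 0
assert abs(f(1e-6) - 1 / math.e) < 1e-5
```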